{"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Stephen Cave", "Kanta Dihal"], "title": "The Whiteness of AI", "text": "Introduction It is a truth little acknowledged that a machine in possession of intelligence must be white. Typing terms like \"robot\" or \"artificial intelligence\" into a search engine will yield a preponderance of stock images of white plastic humanoids. Perhaps more notable still, these machines are not only white in colour, but the more human they are made to look, the more their features are made ethnically White. 1 In this paper, we problematize the often unnoticed and unremarked-upon fact that intelligent machines are predominantly conceived and portrayed as White. We argue that this Whiteness both illuminates particularities of what (Anglophone Western) society hopes for and fears from these machines, and situates these affects within long-standing ideological structures that relate race and technology. Race and technology are two of the most powerful and important categories for understanding the world as it has developed since at least the early modern period. Yet, as a number of scholars have noted, their profound entanglement remains understudied (Sinclair 2004; de la Peña 2010) . There are a number of possible reasons for this-and, as Bruce Sinclair writes, \"racial prejudice dominates all of them\" (Sinclair 2004, 1) . They include the lack of first-or secondhand accounts of the role of people of colour in the development and use of technology; persistent stereotypes about technology as the province and product of one particular racial group-White people; and the persistent tendency of members of that group, who dominate the academy in the US and Europe, to refuse to see themselves as racialised or race as a matter of concern at all. This lack of scholarly attention is surprising because, as Michael Adas elucidated in 1989, the idea of technological superiority was essential to the logic of colonialism. Not only was superior weaponry and transportation (etc.) necessary for large-scale conquest and control of foreign territory, it was also part of its justification: proof that White Europeans were an advanced civilisation with a right to rule over others (Adas 1989) . Fortunately, this lack of attention is increasingly being remedied, and the relationship between race and technology is beginning to garner the kind of attention that has since the 1970s been given to gender and technology, following the pioneering work of Donna Haraway, Sandra Harding, and Evelyn Fox Keller (Haraway 1991; Harding 1986; Keller 1985) . This includes attention to this century's ubiquitous digital technologies. In 2006, Lisa Nakamura asked, \"How do we make cyberculture studies a field that as a matter of course employs critical race theory and theories of cultural difference…?\" (Nakamura 2006, 35) . Since then, a number of significant works have attempted to do just that, including Safiya Noble's Algorithms of Oppression and Ruha Benjamin's Race After Technology (Noble 2018; Benjamin 2019) . This paper aims to contribute to this body of literature on race and technology by examining how the ideology of race shapes conceptions and portrayals of artificial intelligence (AI). Our approach is grounded in the philosophy of race and critical race theory, particularly the Black feminist theories of bell hooks, Sylvia Wynter and Alexander G. 
Weheliye (hooks 1992, 1997; Wynter 2003; Weheliye 2014), and work in Whiteness studies, including that of Richard Dyer, Joe R. Feagin, and Ruth Frankenberg (Dyer 1997; Feagin 2013; Frankenberg 1997a). In 2006, Feagin coined the term \"white racial frame\" to describe those aspects of the Anglophone Western worldview that perpetuate a racialised hierarchy of power and privilege (Feagin 2006). In his words, \"the white racial frame includes a broad and persisting set of racial stereotypes, prejudices, ideologies, interlinked interpretations and narratives, and visual images\" (Feagin 2013, xi). Although it reached its peak in the age of colonial expansion, this framing persists: \"Today, as whites move through their lives, they frequently combine racial stereotypes and biases (a beliefs aspect), racial metaphors and concepts (a deeper cognitive aspect), racialised images (the visual aspect), racialised emotions (feelings), interpretive racial narratives, and inclinations to discriminate within a broad racial framing\" (Feagin 2013, 91). In essence, this paper examines how representations of AI reflect this White racial frame. One of the main aims of critical race theory in general, and Whiteness studies in particular, is to draw attention to the operation of Whiteness in Western culture. The power of Whiteness's signs and symbols lies to a large extent in their going unnoticed and unquestioned, concealed by the myth of colour-blindness. As scholars such as Jessie Daniels and Safiya Noble have noted, this myth of colour-blindness is particularly prevalent in Silicon Valley and surrounding tech culture, where it serves to inhibit serious interrogation of racial framing (Daniels 2013, 2015; Noble 2018). Hence the first step for such an interrogation is, in Richard Dyer's term, to \"make strange\" this Whiteness, de-normalising and drawing attention to it (Dyer 1997, 10). As Steve Garner puts it, the reason \"for deploying whiteness as a lens is that it strips a normative privileged identity of its cloak of invisibility\" (Garner 2007, 5). This is our primary intention in examining intelligent machines through the White racial frame. In the next section of this paper, we first lay out current evidence for the assertion that conceptions and portrayals of AI-both embodied as robots and disembodied-are racialised, then evidence that such machines are predominantly racialised as White. In the third section of the paper, we offer our readings of this Whiteness. Our methods are qualitative. As de la Peña writes: \"Studying whiteness means working with evidence more interpretive than tangible; it requires imaginative analyses of language and satisfaction with identifying possible motivations of subjects, rather than definitive trajectories of innovation, production, and consumption\" (de la Peña 2010, 926). We offer three interpretations of the Whiteness of AI. First, the normalisation of Whiteness in the Anglophone West can go some way to explaining why that sphere's products, including representations of AI, are White. But we argue that this explanation alone is insufficient. Second, we argue that to imagine an intelligent (autonomous, agential, powerful) machine is to imagine a White machine because the White racial frame ascribes these attributes predominantly to White people. Third, we argue that AI racialised as White allows for a full erasure of people of colour from the White utopian imaginary. 
Such machines are conceived as tools that will replace \"dirty, dull, or dangerous\" tasks (Murphy 2000, 16), including replacing human interactions that are considered metaphorically dirty: White robot servants will allow the White master to live a life of ease unsullied by interaction with people of other races. \n Seeing the Whiteness of AI Our concern in this paper is with the racialisation (as White) of both real and imagined machines that are implied or claimed to be intelligent. By racialisation, we mean the ascription of characteristics that are used to identify and delineate races in a given racial frame, which in this case is the Anglophone West. Feagin notes: Among the most important ingredients of this frame are: (1) the recurring use of certain physical characteristics, such as skin colour and facial features, to differentiate social groups; (2) the constant linking of physical characteristics to cultural characteristics; and (3) the regular use of physical and linked cultural distinctions to differentiate socially \"superior\" and \"inferior\" groups in a social hierarchy (Feagin 2013, 41). It is worth noting that \"physical characteristics\" need not only refer to those that are visible: voice and accent are also used as markers for social categorisation. Similarly, the category \"cultural characteristics\" is also used expansively and can include markers such as dialect, mannerisms, and dress codes, as well as mental and moral qualities, such as diligence, industriousness, reliability, trustworthiness, inventiveness, and intellectual ability. Indeed, these mental and moral qualities have always been an essential part of the racial frame, as it is largely on the basis of these that claims of superiority or inferiority have been made. \n Machines Can Be Racialised That machines can be racialised, in the sense that they can be given attributes that enable their identification with human racial categories, has been empirically demonstrated. For example, in one study, Christoph Bartneck and colleagues took pictures of the humanoid Nao robot and adjusted the colouration to match the skin tone of stock images of White and Black people (Bartneck et al. 2018). They then asked participants to define the race of the robot with several options including \"does not apply\". A minority-ranging across the experiments from 7 to 20%-chose the \"does not apply\" option, while a majority-ranging from 53 to 70%-identified the robots as belonging to the race from which their colouration derived. They concluded: \"Participants were able to easily and confidently identify the race of robots according to their racialization [...] Thus, there is also a clear sense in which these robots-and by extension other humanoid robots-do have race\" (Bartneck et al. 2018, 201). This should not be surprising. Many machines are anthropomorphised-that is, made to be human-like to some degree-in order to facilitate human-machine interaction. This might involve obvious physical features (a head on top, two eyes, a mouth, four limbs, bipedalism, etc.), but it can also include invisible features such as a human-like voice, or human-like interactions, such as politeness or humour. Given the prevalence of racial framing, in most contexts, to be human-like means to have race. Consequently, as Liao and He point out in their discussion of the racialisation of psychotherapeutic chatbots, \"racial identity is an integral part of anthropomorphized agents\" (Liao and He 2020, 2). 
They go on to explore a number of racial cues for virtual agents, including visual cues such as skin colour, but also cultural signifiers such as names (e.g. for male names, Jake as White, Darnell as Black, and Antonio as Hispanic). Similarly, \"even text-based conversational exchanges\"-that is, those with no visual component at all-\"perform a racial or ethnic identity\" through the interlocutors' choice of dialect, etc. (Marino 2014, 3). Given the sociopolitical importance of the racial frame in structuring people's interactions, if machines are really being racialised, then we would expect this to have an impact on how people interact with these machines. Numerous studies show just this. For example, Liao and He found that a person's \"perceived interpersonal closeness\" with a virtual agent is higher when the virtual agent has the same racial identity as that person (Liao and He 2020, 2). Other studies reflect the extent to which racism-prejudicial treatment on the basis of race-is intrinsic to racial framing. As detailed in their paper \"Robots Racialized in the Likeness of Marginalized Social Identities are Subject to Greater Dehumanization than Those Racialized as White\", Strait et al. analysed free-form online responses to three videos, each depicting a female-gendered android with a different racial identity: Black, White, and East Asian. Their aim was to assess whether the same kind of marginalising and dehumanising commentary that is applied to real people of colour would be applied to these robots. They found that the valence of the commentary was significantly more negative towards the Black robot than towards the White or Asian ones and that both the Asian and Black robots were subject to over twice as many dehumanising comments as the White robot (Strait et al. 2018). Two recent studies have further examined the transfer of bias to machines using the \"shooter bias\" paradigm. This paradigm was first described in the 2002 paper \"The Police Officer's Dilemma: Using Ethnicity to Disambiguate Potentially Threatening Individuals\" (Correll et al. 2002). It used a simple video game featuring images of (real) Black and White male targets, each holding either a gun or a nonthreatening object. Participants were instructed to shoot only armed targets. A clear racial bias was identified: \"participants fired on an armed target more quickly when he was African American than when he was White, and decided not to shoot an unarmed target more quickly when he was White than when he was African American\" (Correll et al. 2002, 1325). Studies by Bartneck et al. and Addison et al. used the same methodology to examine whether this \"shooter bias\" would be transferred to racialised robots (Bartneck et al. 2018; Addison et al. 2019). They found that \"people showed a similar shooter bias toward robots racialized as Black relative to White in a similar fashion as they showed toward Black vs. White humans, no matter their own race\" (Addison et al. 2019, 493). \n Whiteness as the Norm for Intelligent Machines The previous section shows that research has empirically demonstrated that machines can be racialised and that this racialisation includes transfer of the attendant biases found in the human world. In this subsection, we will survey evidence for the extent to which AI systems-machines purported to be intelligent-are predominantly racialised as White. 
We will look briefly at four categories: real humanoid robots, virtual personal assistants, stock images of AI, and portrayals of AI in film and television. \n The Whiteness of Humanoid Robots A number of commentators have remarked on the preponderant Whiteness of humanoid robots. In their proposed \"code of ethics\" for human-robot interaction Riek and Howard note the \"lack of diversity in robot morphology and behavior\": In terms of race, with precious few exceptions, such as Hanson's Bina48, the vast majority of android and gynoid robots are Asian or Caucasian in their features for no discernible reason. Furthermore, most of these robots tend to have a eurocentric design with regards to their appearance, behavior, and voice. (Riek and Howard 2014, 4) Human-computer interaction researchers Christoph Bartneck and colleagues, who conducted some of the studies cited above, have also noted that robots are usually racialised as White: \"most of the main research platforms for social robotics, including Nao, Pepper, and PR2, are stylized with white materials and are presumably White\" (Bartneck et al. 2018, 202) . Finally, media studies and literary scholar Jennifer Rhee notes the \"normalization and universalization of whiteness\" as expressed both in earlier robotics research and in robot toys: \"Kismet, with its blue eyes, light brown eyebrows, and pink ears, also 'normalizes whiteness', as do other robot companions, such as the blonde-haired, blue-eyed Cindy Smart Doll and the similarly blonde-haired, blue-eyed My Friend Cayla.\" (Rhee 2018, 105) . Although robots such as Nao and Pepper have enjoyed commercial success, neither has received quite the attention garnered by Sophia from Hanson Robotics. This machine consists foremost of a White humanoid head, sometimes also with an upper torso (see Fig. 1 ). It has not only given numerous high-profile television interviews but also received political honours, including in 2017 receiving citizenship of Saudi Arabia and becoming an \"Innovation Champion\" for the United Nations Development Programme (Weller 2017 ; UNDP 2017). \n The Whiteness of Chatbots and Virtual Assistants Though conversational agents do not exhibit any visual racial cues, they are racialised by means of sociolinguistic markers (Sweeney 2016; Villa-Nicholas and Sweeney 2019) . Discussing ELIZA, an influential natural language processing program created by Joseph Weizenbaum at the MIT AI Laboratory in 1966, Mark Marino writes: \"If ELIZA presented a bot that tried to imitate language, it was performing standard white middle-class English, without a specific identifying cultural inflection... language without culture, disembodied, hegemonic, and, in a word, white\" (Marino 2014, 5) . Since then, natural language processing has entered the mainstream, with \"virtual assistants\" existing in many people's pockets, handbags, or homes through devices such as smartphones. Indeed, this is one of the most common ways in which people interact with technology that could be labelled \"AI\". These tools present their designers with many decisions about socio-cultural positioning. Ruha Benjamin recalls this anecdote: A former Apple employee who noted that he was \"not Black or Hispanic\" described his experience on a team that was developing speech recognition for Siri, the virtual assistant program. 
As they worked on different English dialects-Australian, Singaporean and Indian English-he asked his boss: \"What about African American English?\" To this his boss responded: \"Well, Apple products are for the premium market.\" (Benjamin 2019, 28) As a further example, she describes a Black computer scientist who chose a White voice for his app rather than a Black one, so as not to \"create friction\" (Benjamin 2019, 28-29). So while some designers might be unconsciously racialising their products as White, others are doing so in full awareness of this choice. \n The Whiteness of Stock Images of AI As anyone working in the field will know, stock images of AI, at least when anthropomorphised, are overwhelmingly white and arguably overwhelmingly White. The more realistically humanoid these machines become, the more Caucasian their features. Such images are used to illustrate not only generalist newspaper articles and corporate slideshows but also specialist and technical works, and even works of a critical nature, such as Harry Collins's Artifictional Intelligence (Polity, 2018) and Anthony Elliott's The Culture of AI (Routledge, 2018) (Fig. 2). The prevalence of such images is reflected in the results of search engines. Such searches are a useful indicator of how a subject is portrayed at a given time, for two reasons. First, search engines are very widely used (approximately 3.5 billion searches are made on Google every day, or 40 thousand per second 2) and can therefore be considered a highly influential source of information and perceptions. Second, the nature of such search engines means that they are not only promoting certain ideas and perceptions but also reflecting their existing prevalence. While the exact nature of Google's search, ranking, and result presentation algorithms is proprietary, we know that they evaluate (crudely put) influence and popularity-for example, in terms of how many other sites link to a given website. So the fact that certain images are shown when someone searches for a relevant term means not only that those images are being thus promoted by some of the most powerful organs of content mediation in existence today but also that these images are already widespread and used on other influential websites, as that is what underlies their promotion by the search engines. Consequently, search results are increasingly examined by scholars, including in the study of racial bias. For example, in her 2018 book Algorithms of Oppression: How Search Engines Reinforce Racism, Safiya U. Noble identifies many ways in which such sites reflect and exacerbate prejudice, such as the search results for \"Latinas\" that feature mostly porn (Noble 2018, 75, 155) or the White men who come up when searching for images of professions such as \"construction worker\", \"doctor\", or \"scientist\" (Noble 2018, 82-83). In order to get an indication of the prevalence of these racialised machines on the internet, we conducted two image searches on Google (the most widely used search engine) using the anonymous Tor browser to ensure results were not influenced by our personal search histories and locations. We first searched on the term \"artificial intelligence\": the top results are in Fig. 3. Some of these results are too abstract, featuring stylised brains and circuits, for example, to be considered racialised. However, among the results showing humanoid figures, racialisation as White predominates. First, two pictures show actual human hands, and both are White. 
Second, a further two pictures show humanoid robots, and both are white in colour and could thus be read as White, as Bartneck et al. suggest (Bartneck et al. 2018, 202). Therefore, we might say that inasmuch as the machines are racialised, they are racialised as White. In order to focus more on representations of embodied, anthropomorphic AI, we also searched for \"artificial intelligence robot\": the top results are in Fig. 4. As is clear, this search produces an even greater preponderance of images that are either white in colour or racialised as White or both. \n The Whiteness of AI in Film and Television These contemporary stock images distil the visualisations of intelligent machines in Western popular culture as it has developed over decades. In science fiction from the nineteenth century onwards, AI is predominantly imagined as White. For example, the Terminator (Arnold Schwarzenegger), RoboCop (Peter Weller and Joel Kinnaman), all of the \"replicants\" in the Blade Runner franchise (e.g. Rutger Hauer, Sean Young, and Mackenzie Davis), Sonny in I, Robot (Alan Tudyk), Ava in Ex Machina (Alicia Vikander) (Fig. 5), and Maria in Metropolis (Brigitte Helm) are all played by White actors and are visibly White on screen. Androids made of metal or plastic are also usually given White facial features, such as the robots in the 2004 film I, Robot. Even disembodied AI is imagined as White: HAL-9000 in 2001: A Space Odyssey and Samantha in Her are voiced by White actors. All of these AIs come from Hollywood films; they have been produced in a country in which 18% of the population is Hispanic, but in which only one fictional robot has that background: Bender Rodríguez in the animated TV series Futurama, who is canonically constructed in Mexico-but who is voiced by the White voice actor John DiMaggio. Only very recent TV shows with a large cast of androids, such as Westworld and Humans, have attempted to address this with AI characters evincing a mix of skin tones and ethnicities. This preponderance of intelligent machines racialised as White led Dyer to posit \"the android as a definition of whiteness\" (Dyer 1997, 213). \n Understanding the Whiteness of AI We offer three interpretations of the racialisation of intelligent machines as White: the Whiteness of their creators perpetuating itself; the Whiteness of the attributes ascribed to AI; and the extent to which AI permits the erasure of people of colour from the White utopia. \n Whiteness Reproducing Whiteness In European and North American societies, Whiteness is normalised to an extent that renders it largely invisible. As Toby Ganley puts it in his survey of Whiteness studies, \"the monopoly that whiteness has over the norm\" is one of the field's two unifying insights-the other being that it confers power and privilege (Ganley 2003, 12). Richard Dyer describes this as the view that \"other people are raced, we are just people\" (Dyer 1997, 1). This normalisation means that Whiteness is not perceived by majority populations as a distinct colour, but rather as an absence of colour-colour both in the literal sense and in the sense of race (Fig. 5 Alicia Vikander as Ava in Ex Machina. Source: YouTube). Consequently, the Whiteness of AI could be considered simply a default. It does not appear as a feature, but is transparent, like the air we breathe: the \"unmarked marker\", as Ruth Frankenberg calls it (Frankenberg 1997b, 1). 
The majority of White viewers are unlikely to see humanlike machines as racialised at all, but simply as conforming to their idea of what \"human-like\" means. For non-White people, on the other hand, Whiteness is never invisible in this manner, as bell hooks reminds us (hooks 1992 1997) . So-called colour-blindness, an attitude of not seeing race, and of presuming that people in contemporary society are no longer disadvantaged on the basis of race, is itself a narrative that perpetuates White hegemony: \"communities of color frequently see and name whiteness clearly and critically, in periods when white folks have asserted their own 'color blindness'\" (Frankenberg 1997b, 4) . Noble argues that \"central to these 'colorblind' ideologies is a focus on the inappropriateness of 'seeing race'\"-a view that she argues is dominant among Silicon Valley technologists, who \"revel in their embrace of colorblindness as if it is an asset and not a proven liability\" (Noble 2018, 168) . Such colour-blindness is a liability because it obscures the normalisation of Whiteness and marginalisation of other racialised groups-and the real world effects this has, such as facial recognition technologies not distinguishing Black or East Asian faces (Buolamwini and Gebru 2018) . Given the normalisation of Whiteness, for some designers, to make a human-like machine will unthinkingly mean to make a White machine. As Dyer puts it: \"white people create the dominant images of the world and don't quite see that they thus create the dominant images of the world in their own image\" (Dyer 1997, 9) . But this alone is not a satisfactory explanation of the Whiteness of AI, as not all entities-more specifically, not all intelligent, humanoid entities-imagined by predominantly White industries are portrayed as White. For example, Western science fiction has a long tradition of White authors racialising extraterrestrials as non-White. In the late nineteenth century, for instance, the real-world fear of the \"Yellow Peril\" was metaphorically addressed in science fiction by racialising extraterrestrial invaders as East Asian. The Flash Gordon franchise gained its lead villain in a 1934 comic, which introduced the tyrannical emperor of the planet Mongo-the Orientalised alien Ming the Merciless. Such is the villain in Flash Gordon -a trident bearded, slanty eyed, shiny doomed [sic], pointy nailed, arching eyebrowed, exotically garbed Oriental named Ming, who personifies unadulterated evil. A heavy like Ming is not contrived in a comic strip writer's imagination during a coffee break, but rather is the product of perhaps the richest and longest tradition of all of Hollywood ethnic stereotypes. (Barshay 1974, 24-26) Dyer points out that Blade Runner similarly deliberately uses East Asian characters in order to offset the whiteness of its protagonists, including the White androids: \"the yellow human background emphasises the chief protagonists' whiteness. The whitest of hue are the replicants\" (Dyer 1997, 214) . Racial stereotyping of aliens is not a phenomenon limited to past centuries. The Star Wars prequel trilogy (Lucas 1999; 2002; 2005) has been criticised for the \"transparent racism\" in its depiction of the alien Jar Jar Binks as a West Indian caricature (Lavender 2011, 193) reminiscent of blackface minstrelsy (Williams 1999) , and of the slave trader Watto, an antisemitic Jewish caricature with a large nose, skullcap, Yiddish accent, and obsession with money (Freedman 2019) . 
This racialisation of aliens in SF suggests that the racialisation of artificial intelligence is a choice. The White racial frame as perpetuated by the White creators of these works portrays dangerous invaders from another planet as East Asian and bumbling alien petty-criminals as Afro-Caribbean. Therefore, the fact that it portrays AI as overwhelmingly White requires further explanation. In the following sections, we offer two. \n AI and the Attributes of Whiteness While Whiteness functions in part through its invisibility in mainstream discourse, this does not mean it has no distinguishable features of its own. Indeed, the White racial frame has a long history of ascribing certain attributes to Whites and disputing them in others: these are the very claims that have been used to justify colonialism, segregation, and other modes of oppression. We argue that AI is predominantly racialised as White because it is deemed to possess attributes that this frame imputes to White people. We examine these attributes under three key headings: intelligence, professionalism, and power. First, the primary attribute being projected onto these machines is, as the term \"AI\" suggests, intelligence. Throughout the history of Western thought, but in particular since the seventeenth century in Europe and the territories it colonised, intelligence has been associated with some humans more than others (Carson 2006) . The idea that some races were more mentally able than others was crucial to the legitimation of the advancing colonial project. Those deemed less intelligent-in the words of Rudyard Kipling, \"Half-devil and half-child\"-were judged unqualified to rule themselves and their lands. It was therefore legitimate-even a duty, \"the white man's burden\" as Kipling put it-to destroy their cultures and take their territories (Kipling 1899) . Through the nineteenth century, strenuous efforts were made to empirically demonstrate and measure this intellectual difference, culminating in the development of the IQ test (Gould 1981) . Although explicit associations between racial groups and intelligence declined after the Second World War, (a) they continue to be made in right-wing circles (Saini 2019 ) and (b) implicit or unconscious associations between race and intelligence persist widely (see, for example, van den Bergh et al. 2010; Okeke et al. 2009) . Given the White racial frame has for centuries promoted the association of intelligence with the White, European race, it is to be expected that when this culture is asked to imagine an intelligent machine, it imagines a White machine. A crucial aspect of the idea of intelligence is generality. Intelligence is often defined as a \"general mental capability\" (Gottfredson 1997) , and in AI, the concept of \"artificial general intelligence\"-a system with the kind of flexible mental capabilities humans have-is often considered to be the original and primary goal of the field (Crevier 1993) . But in the White racial frame, not all humans are considered to have this attribute to the same degree. As Weheliye puts it, using Sylvia Wynter's idea of \"the Man\"-the Enlightenment, Western, White male subject, \"In the context of the secular human, black subjects, along with indigenous populations, the colonised, the insane, the poor, the disabled, and so on as limit cases by which Man can demarcate himself as the universal human\" (Weheliye 2014, 24) . 
According to the White racial frame, it is the rational, scientific thought of the White Westerner that lays claim to universal validity-or, we might say, true generality. Other races, by contrast, are framed as particular and subjective, constrained by the limits of their non-ideal bodies and cultures to think thoughts that are partial and parochial. To imagine a truly intelligent machine, one with general intelligence is therefore to imagine a White machine. Second, much of the current discourse around AI focuses on how it is, or will soon be, capable of professional work. This is frequently claimed to be what makes the present wave of automation different from previous waves, in which machines became capable of supplanting manual and semi-skilled labour (Ford 2015) . Professional work-law, medicine, business, and so forth-is at the upper end of pay and status scales. White Europeans and North Americans have historically not considered all humans equally fit for such roles and have kept them closed to people who lacked the requisite connections, wealth, or other in-group identifiers. Universities, the gateways to the professions, have long histories of excluding people of colour from their ranks (Burrow 2008, 107) . The historic exclusion of anyone other than White men shapes to this day what mainstream White culture imagines when imagining someone fulfilling such roles. Safiya Noble shows that it took years of criticism before search engines adjusted their algorithms so that searching for \"engineer\" or \"doctor\" stopped exclusively returning images of White men (Noble 2018) . But the underlying bias, on which the algorithms fed, remains. To imagine a machine in a white-collar job is therefore to imagine a White machine. Third, hierarchies of intelligence and of professional status are of course also hierarchies of power. Consequently, power relations are implicit in the previous two categories. However, it is worth also considering power separately, because power struggles between AI and humans are such a common narrative trope. Alongside the narrative that robots will make humans redundant, an equally well-known narrative is that they will rise up and conquer us altogether (Cave and Dihal 2019) . These are both narratives about machines becoming superior to humans: stories in which they become better at every task, leaving humans with nothing to do, from E.M. Forster's 1909 short story 'The Machine Stops' to the Oscar-winning film WALL-E, or in which they outwit and subjugate those who built them, as in the Terminator film franchise or the film Ex Machina (Forster 1909; Stanton 2008; Cameron 1984; Garland 2015) . When White people imagine being overtaken by superior beings, those beings do not resemble those races they have framed as inferior. It is unimaginable to a White audience that they will be surpassed by machines that are Black. Rather, it is by superlatives of themselves: hyper-masculine White men like Arnold Schwarzenegger as the Terminator, or hyperfeminine White women like Alicia Vikander as Ava in Ex Machina. This is why even narratives of an AI uprising that are clearly modelled on stories of slave rebellions depict the rebelling AIs as White-for example, in Blade Runner (Dihal 2020) . The implication of this racialisation is that these machines might genuinely be superior, or are at least worthy adversaries. The use of White bodybuilders such as Arnold Schwarzenegger to play the evil robots suggests this. 
As Dyer points out, Schwarzenegger's physique suggests \"the body made possible by [...] natural mental superiority. The point after all is that it is built, a product of the application of thought and planning, an achievement\" (Dyer 1997, 164). Consequently, for a White technologist or author, to imagine a superior anthropomorphic machine is to imagine a White machine. In summary, popular conceptions of AI suggest these machines have general intelligence, are capable of professional jobs, and/or are poised to surpass and supplant humanity. In the White imagination, such qualities are strongly associated with Whiteness. It is no surprise, therefore, that in mainstream Western media, such machines are portrayed as White. \n White Utopia While we believe the attribution to AI of these qualities, so strongly associated with Whiteness, goes a long way to making sense of the racialisation of anthropomorphic intelligent machines, we also want to propose one further hypothesis: that the Whiteness of the machines allows the White utopian imagination to fully exclude people of colour. One of the most pertinent hopes for artificial intelligence is that it will lead to a life of ease (Cave and Dihal 2019). As a tool that can take over \"dirty, dull, or dangerous\" jobs, it relieves its owners from work they do not want to do, enabling them to pursue leisure. As critical race theorists have repeatedly pointed out, the leisure currently available to the wealthier classes is disproportionately facilitated by the labour of working-class women of colour (hooks 1992; Rhee 2018). bell hooks shows that the people performing this labour are actively kept invisible, even when the White master and the coloured servant are physically present in the same space. She cites the memoirs of a White heiress who grew up with Black servants in her house: \"Blacks, I realized, were simply invisible to most white people, except as a pair of hands offering a drink on a silver tray\" (hooks 1992/1997, 168). As this forced pretence of invisibility shows, interactions with non-White servants are undesirable to the White master: such interactions are almost literally considered a \"dirty job\". Depictions of people of colour as being dirty and unwashed, eating dirty food, living in the dirt, even of being the colour of excrement have contributed to the development of both the fear of pollution in interactions with people of colour, and the association of Whiteness with cleanliness and purity (Dyer 1997, 75-76). This association has been exacerbated by a long history of propaganda preceding conquest and genocide that portrays the racial other as evoking disgust: as vectors of disease, such as lice or rats, or as a literal plague (Glover 1999, chap. 35; Rector 2014, chap. 3). The utopia of the White racial frame would therefore rather remove people of colour altogether, even in the form of servants. From the inception of the academic study of science fiction onwards, many critics have pointed out that utopias throughout literary history have been construed on exclusionary, colonialist, and eugenicist premises (Suvin 1979, 2016; Jameson 2005, 205; Ginway 2016, 132). In Astrofuturism, De Witt Douglas Kilgore shows that mid-twentieth-century American visions of space age utopias are \"idealisations ... based on a series of exclusions\" (Kilgore 2010, 10): rather than depicting a post-racial or colourblind future, the authors of these utopias simply omit people of colour. AI offers the possibility of making such utopias real. 
By virtue of its generality, it is imagined as able to replace any unwanted labour-social and cognitive as well as physical (Cave and Dihal 2019)-so obviating the need for people of colour in any role. Consequently, as Jennifer Rhee points out, advertisements for real AI such as household robots \"are striking in their whiteness\": they are aimed at showing white middle-class families an ideal leisurely lifestyle. In doing so, she argues, \"the images reserve the luxury of liberation from domestic labor for white women, while erasing the women of color who perform this labor, both within their own homes and in the homes of others\" (Rhee 2018, 94). In some cases, the unsulliedness of this utopia can extend further to exclude all women. Just as people of colour can be associated with offensive physicality, so can women in general, particularly with respect to their reproductive organs. The necessity of sexual intercourse, pregnancy, and childbearing for the continuation of a race that prides itself on rationality and the ability to transcend its physicality is an offensive hurdle that has been imagined as transcendable by science for centuries. As Dyer points out, in the ideology of Whiteness, the elevation of mental over physical prowess has simultaneously been the White race's most valuable achievement and a threat to its own continuation (Dyer 1997, 27). It has led to the paradox known as the \"White Crisis\", in which the White race is seen as under threat of being overwhelmed by \"inferior\" races that are breeding more prolifically. Transhumanism has been envisioned as a solution to this White Crisis (Ali 2017). Seen as a form of offspring, artificial intelligence offers a way for the White man to perpetuate his existence in a rationally optimal manner, without the involvement of those he deems inferior. \n Conclusion and Implications Images of AI are not generic representations of human-like machines, but avatars of a particular rank within the hierarchy of the human. These representations of intelligent machines-and our future with them-are refracted through the White racial frame; their Whiteness is a proxy for how we perceive their status and potential. This can cause what are sometimes called representational harms (Blodgett et al. 2020). We suggest three. First, this racialisation can amplify the very prejudices it reflects. We have argued that intelligent machines are portrayed as White because that is how the mainstream perceives intelligence and related desirable characteristics. But equally, the consistent portrayal of intelligent machines as White itself transmits this association, so sustaining it. As we have argued elsewhere (Whittlestone et al. 2019), bias in representations of AI contributes to a vicious cycle of social injustice: the biased representations can influence both aspiring technologists and those in charge of hiring new staff, shaping whom they consider fit for the field (Cave 2020). This could contribute to sustaining a racially homogenous workforce, which will continue to produce products, whether real intelligent machines or their representations, that are biased to benefit that group and disadvantage others. Second, the racialisation of these machines places them within an existing hierarchy of the human in a way that could exacerbate real injustice. Portrayals of AI as White situate these machines in a power hierarchy above currently marginalised groups, such as people of colour. 
These oppressed groups are therefore relegated to an even lower position in the hierarchy: below that of the machine. As machines become ever more important in making automated decisions-frequently about marginalised groups (Eubanks 2017)-this could be consequential. Automation bias-the tendency of people to favour suggestions from automated decision-making systems over those from humans-has already been evidenced (Goddard et al. 2012). We might speculate that it will be exacerbated in cases where such systems are racialised White and the humans in question are not. Third, these portrayals could distort our perceptions of the risks and benefits of these machines. For example, they could frame the debate about AI's impact disproportionately around the opportunities and risks posed to White middle-class men (Cave 2020). It is already a common narrative that the current wave of automation differs from those of the past in that \"impacts from automation have thus far impacted mostly blue-collar employment; the coming wave of innovation threatens to upend white-collar work as well\" (Pew Research Center 2014). Public interest and policy therefore often focus on white-collar professionals, instead of on marginalised groups, which in reality are likely to be worse affected by the impact of AI (Eubanks 2017; Noble 2018). In this paper, we have offered three interpretations of the whiteness and Whiteness of representations of AI. All three, and the implications that we posit, need further investigation. This process is part of what can be described as decolonising AI: a process of breaking down the systems of oppression that arose with colonialism and have led to present injustices that AI threatens to perpetuate and exacerbate. Weheliye describes how he \"works towards the abolition of Man, and advocates the radical reconstruction and decolonization of what it means to be human\" (Weheliye 2014, 4). It is in the field of AI that technology is most clearly entwined with notions of \"what it means to be human\", both in reality and in cultural fantasies. We hope to have taken a step towards this reconstruction, by drawing attention to the Whiteness of these machines and \"making it strange\". Footnote 1: Following the increasingly common usage of the capitalised form \"Black\" to denote the ethnicity and \"black\" the colour, we use \"White\" to refer to the ethnicity and \"white\" the colour. While not yet the norm, as can be seen in our quotations of critics who do not employ this distinction, this usage will make our discussion clearer. Footnote 2: https://www.internetlivestats.com/google-search-statistics/ accessed 30 December 2019. Fig. 1 Sophia. Hanson Robotics, April 2020. Fig. 2 Covers of Collins 2018, Polity, and Elliott 2018, Routledge. Fig. 3 Tor browser Google image search result for \"artificial intelligence\", 13 April 2020.", "date_published": "n/a", "url": "n/a", "filename": "Cave-Dihal2020_Article_TheWhitenessOfAI.tei.xml", "abstract": "This paper focuses on the fact that AI is predominantly portrayed as white-in colour, ethnicity, or both. 
We first illustrate the prevalent Whiteness of real and imagined intelligent machines in four categories: humanoid robots, chatbots and virtual assistants, stock images of AI, and portrayals of AI in film and television. We then offer three interpretations of the Whiteness of AI, drawing on critical race theory, particularly the idea of the White racial frame. First, we examine the extent to which this Whiteness might simply reflect the predominantly White milieus from which these artefacts arise. Second, we argue that to imagine machines that are intelligent, professional, or powerful is to imagine White machines because the White racial frame ascribes these attributes predominantly to White people. Third, we argue that AI racialised as White allows for a full erasure of people of colour from the White utopian imaginary. Finally, we examine potential consequences of the racialisation of AI, arguing it could exacerbate bias and misdirect concern.", "id": "631847b6756c201d2a75e0cb7ccafb76"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Caspar Oesterheld"], "title": "Robust program equilibrium", "text": "Introduction Much has been written about rationalizing non-Nash equilibrium play in strategic-form games. Most prominently, game theorists have discussed how cooperation may be achieved in the prisoner's dilemma, where mutual cooperation is not a Nash equilibrium but Pareto-superior to mutual defection. One of the most successful approaches is the repetition of a game, and in particular the iterated prisoner's dilemma (Axelrod 2006). Another approach is to introduce commitment mechanisms of some sort. In this paper, we will discuss one particular commitment mechanism: Tennenholtz's (2004) program equilibrium formalism (Sect. 2.2). Here, the idea is that in place of strategies, players submit programs which compute strategies and are given access to each other's source code. The programs can then encode credible commitments, such as some version of \"if you cooperate, I will cooperate\". As desired, Tennenholtz (2004, Sect. 3, Theorem 1) shows that mutual cooperation is played in a program equilibrium of the prisoner's dilemma. However, Tennenholtz' equilibrium is very fragile. Essentially, it consists of two copies of a program that cooperates if it faces an exact copy of itself (cf. McAfee 1984; Howard 1988). Even small deviations from that program break the equilibrium. Thus, achieving cooperation in this way is only realistic if the players can communicate beforehand and settle on a particular outcome. Another persuasive critique of this trivial equilibrium is that the model of two players submitting programs is only a metaphor, anyway. In real life, the programs may instead be the result of an evolutionary process (Binmore 1988, pp. 14f.) and Tennenholtz' equilibrium is hard to obtain by such a process. Alternatively, if we view our theory as normative rather than descriptive, we may view the programs themselves as the target audience of our recommendations. This also means that these agents will already have some form of source code-e.g., one that derives and considers the program equilibria of the game-and it is out of their realm of power to change that source code to match some common standard. However, they may still decide on some procedure for thinking about this particular problem in such a way that enables cooperation with other rationally pre-programmed agents. 
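To make the fragility of this equality-based equilibrium concrete, the following is a minimal Python sketch; it is our own illustration rather than the construction from Tennenholtz (2004), and the names (clique_bot, run) and the move encoding (0 for cooperate, 1 for defect) are assumptions made only for this example.

import inspect

COOPERATE, DEFECT = 0, 1

def clique_bot(my_source, opponent_source):
    # Cooperate only if the opponent's source code is an exact copy of mine.
    return COOPERATE if opponent_source == my_source else DEFECT

def run(program_1, program_2):
    # Each program receives its own source and the opponent's source.
    src_1, src_2 = inspect.getsource(program_1), inspect.getsource(program_2)
    return program_1(src_1, src_2), program_2(src_2, src_1)

# Two literal copies of clique_bot cooperate with each other.
print(run(clique_bot, clique_bot))  # -> (0, 0), i.e. mutual cooperation

def clique_bot_variant(my_source, opponent_source):
    # Behaviourally identical, but its source text differs from clique_bot's.
    return COOPERATE if opponent_source == my_source else DEFECT

print(run(clique_bot, clique_bot_variant))  # -> (1, 1), i.e. mutual defection

Two behaviourally identical programs whose source differs in any way (a renamed copy, an extra comment) defect against each other, which is why this kind of equilibrium requires the players to coordinate on the exact same source code beforehand.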
Noting the fragility of Tennenholtz' proposed equilibrium, it has been proposed to achieve a more robust program equilibrium by letting the programs reason about each other (van der Hoek et al. 2013; Barasz et al. 2014; Critch 2016). For example, Barasz et al. (2014, Sect. 3) propose a program FairBot-variations of which we will see in this paper-that cooperates if Peano arithmetic can prove that the opponent cooperates against FairBot. FairBot thus cooperates (via Löb's theorem) more robustly against different versions of itself. These proposals are very elegant and certainly deserve further attention. However, their benefits come at the cost of being computationally expensive. In this paper, I thus derive a class of program equilibria that I will argue to be more practical. In the case of the prisoner's dilemma, I propose a program that cooperates with a small probability and otherwise acts as the opponent acts against itself (see Algorithm 1). Doing what the opponent does-à la FairBot-incentivizes cooperation. Cooperating with a small probability allows us to avoid infinite loops that would arise if we merely predicted and copied our opponent's action (see Algorithm 2). This approach to a robust cooperation program equilibrium in the prisoner's dilemma is described in Sect. 3. We then go on to generalize the construction exemplified in the prisoner's dilemma (see Sect. 4). In particular, we show how strategies for the repeated version of a game can be used to construct good programs for the one-shot version of that game. We show that many of the properties of the underlying strategy of the repeated game carry over to the program for the stage game. We can thus construct \"good\" programs and program equilibria from \"good\" strategies and Nash equilibria. \n Preliminaries \n Strategic-form games For reference, we begin by introducing some basic terminology and formalism for strategic-form games. For an introduction, see, e.g., Osborne (2004). For reasons that will become apparent later on, we limit our treatment to two-player games. A two-player strategic game G = (A_1, A_2, u_1, u_2) consists of two countable sets of moves A_i and, for both players i ∈ {1, 2}, a bounded utility function u_i : A_1 × A_2 → R. A (mixed) strategy for player i is a probability distribution π_i over A_i. Given a strategy profile (π_1, π_2), the probability of an outcome (a_1, a_2) ∈ A_1 × A_2 is P(a_1, a_2 | π_1, π_2) := π_1(a_1) · π_2(a_2). (1) The expected value for player i given that strategy profile is E[u_i | π_1, π_2] := ∑_{(a_1,a_2) ∈ A_1 × A_2} P(a_1, a_2 | π_1, π_2) · u_i(a_1, a_2). Note that because the utility function is bounded, the sum converges absolutely, such that the order of the action pairs does not affect the sum's value. \n Program equilibrium We now introduce the concept of program equilibrium, first proposed by Tennenholtz (2004). The main idea is to replace strategies with computer programs that are given access to each other's source code. 1 The programs then give rise to strategies. For any game G, we first need to define the set of program profiles PROG(G) consisting of pairs of programs. The ith entry of an element of PROG(G) must be a program source code p_i that, when interpreted by a function apply, probabilistically maps program profiles 2 onto A_i. We require that for any program profile (p_1, p_2) ∈ PROG(G), both programs halt. Otherwise, the profile would not give rise to a well-defined strategy. 
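As a concrete illustration of the strategic-form definitions in the preliminaries above (Eq. 1 and the expected-value sum), here is a minimal Python sketch for a prisoner's dilemma; the payoff numbers are standard textbook values and are assumptions for illustration only.

# Moves encoded as indices: 0 = cooperate, 1 = defect.
COOPERATE, DEFECT = 0, 1

# Assumed prisoner's dilemma payoffs u_1[a_1][a_2] and u_2[a_1][a_2];
# these standard textbook values are illustrative only.
U1 = [[3, 0], [4, 1]]
U2 = [[3, 4], [0, 1]]

def outcome_probability(pi_1, pi_2, a_1, a_2):
    # Eq. 1: P(a_1, a_2 | pi_1, pi_2) = pi_1(a_1) * pi_2(a_2).
    return pi_1[a_1] * pi_2[a_2]

def expected_utility(u, pi_1, pi_2):
    # Expected value of u under the (mixed) strategy profile (pi_1, pi_2).
    return sum(outcome_probability(pi_1, pi_2, a_1, a_2) * u[a_1][a_2]
               for a_1 in (COOPERATE, DEFECT) for a_2 in (COOPERATE, DEFECT))

# Player 1 cooperates with probability 0.9, player 2 with probability 0.2.
pi_1, pi_2 = [0.9, 0.1], [0.2, 0.8]
print(expected_utility(U1, pi_1, pi_2), expected_utility(U2, pi_1, pi_2))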
Whether p_i halts depends on the program p_{−i} it plays against, where (in accordance with convention in game theory) − : {1, 2} → {1, 2} : 1 → 2, 2 → 1 and we write −i instead of −(i). For example, if p_i runs apply(p_{−i}, (p_1, p_2)), i.e., simulates the opponent, then that is fine as long as p_{−i} does not also run apply(p_i, (p_1, p_2)), which would yield an infinite loop. To avoid this mutual dependence, we will generally require that PROG(G) = PROG_1(G) × PROG_2(G), where PROG_i(G) consists of programs for player i. Methods of doing this while maintaining expressive power include hierarchies of players-e.g., higher indexed players are allowed to simulate lower indexed ones but not vice versa-hierarchies of programs-programs can only call their opponents with simpler programs as input-requiring programs to have a \"plan B\" if termination can otherwise not be guaranteed, or allowing each player to only start strictly less than one simulation in expectation. These methods may also be combined. In this paper, we do not assume any particular definition of PROG(G). However, we assume that the programs can perform arbitrary computations as long as these computations are guaranteed to halt regardless of the output of the parts of the code that do depend on the opponent program. We also require that PROG(G) is compatible with our constructions. We will show our constructions to be so benign in terms of infinite loops that this is not too strong of an assumption. Given a program profile (p_1, p_2), we receive a strategy profile (apply(p_1, (p_1, p_2)), apply(p_2, (p_1, p_2))). For any outcome (a_1, a_2) of G, we define P(a_1, a_2 | p_1, p_2) := P(a_1, a_2 | apply(p_1, (p_1, p_2)), apply(p_2, (p_1, p_2))) (2) and for every player i ∈ {1, 2}, we define E[u_i | p_1, p_2] := ∑_{(a_1,a_2) ∈ A_1 × A_2} P(a_1, a_2 | p_1, p_2) · u_i(a_1, a_2). (3) For player i, we define the (set-valued) best response function as B_i(p_{−i}) = arg max_{p_i ∈ PROG_i(G)} E[u_i | p_i, p_{−i}]. A program profile (p_1, p_2) is a (weak) program equilibrium of G if for i ∈ {1, 2} it is p_i ∈ B_i(p_{−i}). \n Repeated games Our construction will involve strategies for the repeated version of a two-player game. Thus, for any game G, we define G_ε to be the repetition of G with a probability of ε ∈ (0, 1] of ending after each round. Both players of G_ε will be informed only of the last move of their opponent. This differs from the more typical assumption that players have access to the entire history of past moves. We will later see why this deviation is necessary. A strategy π_i for player i non-deterministically maps opponent moves or the information of the lack thereof onto a move, π_i : {0} ∪ A_{−i} ⇝ A_i. Thus, for a ∈ A_i, b ∈ A_{−i}, π_i(b, a) := π_i(b)(a) denotes the probability of choosing a given that the opponent played b in the previous round, and π_i(0, a) := π_i(0)(a) denotes the probability of choosing a in the first round. We call a strategy π_i stationary if for all a ∈ A_i, π_i(b, a) is constant with respect to b ∈ {0} ∪ A_{−i}. If π_i is stationary, we write π_i(a) := π_i(b, a). The probability that the game follows a complete history of moves h = a_0 b_0 a_1 b_1 ⋯ a_n b_n and then ends is P(h | (π_1, π_2)) := π_1(0, a_0) π_2(0, b_0) ε (1 − ε)^n ∏_{i=1}^{n} π_1(b_{i−1}, a_i) π_2(a_{i−1}, b_i). (4) Note that the moves in the history always come in pairs a_i b_i which are chosen \"simultaneously\" in response to b_{i−1} and a_{i−1}, respectively. 
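The following minimal Python sketch (ours, not the paper's) computes the history probability of Eq. 4; the two strategies, the value ε = 0.2, and all helper names are illustrative assumptions. The loop at the end checks numerically, under truncation, that these probabilities sum towards 1 over all histories, which Lemma 1 below establishes exactly.

from itertools import product

# Moves encoded as 0 = cooperate, 1 = defect; a history is a list of (a, b) pairs.
COOPERATE, DEFECT = 0, 1
MOVES = (COOPERATE, DEFECT)

def tit_for_tat(previous_opponent_move):
    # Cooperate in the first round, afterwards copy the opponent's last move.
    if previous_opponent_move in (None, COOPERATE):
        return {COOPERATE: 1.0, DEFECT: 0.0}
    return {COOPERATE: 0.0, DEFECT: 1.0}

def mostly_defect(previous_opponent_move):
    # A stationary mixed strategy, used only for illustration.
    return {COOPERATE: 0.25, DEFECT: 0.75}

def history_probability(history, pi_1, pi_2, epsilon):
    # Eq. 4: probability that the game follows the given history and then ends.
    n = len(history) - 1
    probability = epsilon * (1 - epsilon) ** n
    previous_a, previous_b = None, None  # None plays the role of the symbol 0
    for a, b in history:
        probability *= pi_1(previous_b)[a] * pi_2(previous_a)[b]
        previous_a, previous_b = a, b
    return probability

epsilon = 0.2
print(history_probability([(COOPERATE, DEFECT), (DEFECT, DEFECT)],
                          tit_for_tat, mostly_defect, epsilon))

# Summing over all histories of at most max_rounds rounds leaves out a tail
# of probability mass (1 - epsilon) ** max_rounds, so the totals approach 1.
for max_rounds in (2, 4, 8):
    total = sum(history_probability(list(h), tit_for_tat, mostly_defect, epsilon)
                for rounds in range(1, max_rounds + 1)
                for h in product(product(MOVES, MOVES), repeat=rounds))
    print(max_rounds, total)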
The expected value for player i given the strategy profile (π_1, π_2) is E[u_i | π_1, π_2] := ∑_{h ∈ (A_1 • A_2)^+} P(h | π_1, π_2) · u_i(h), (5) where (A_1 • A_2)^+ is the set of all histories and u_i(a_0 b_0 a_1 b_1 ⋯ a_n b_n) := ∑_{k=0}^{n} u_i(a_k, b_k). (6) The lax unordered summation in Eq. 5 is, again, unproblematic because of the absolute convergence of the series, which is a direct consequence of the proof of Lemma 1. Note how the organization of the history into pairs of moves allows us to apply the utility function of the stage game in Eq. 6. For player i, we define the set-valued best response function as B_i(π_{−i}) = arg max_{π_i : {0} ∪ A_{−i} ⇝ A_i} E[u_i | π_i, π_{−i}]. Analogously, B^c_i(π_{−i}) is the set of responses to π_{−i} that are best among the computable ones, B^s_i(π_{−i}) the set of responses to π_{−i} that are best among the stationary ones, and B^{s,c}_i(π_{−i}) the set of responses to π_{−i} that are best among stationary computable strategies. A strategy profile (π_1, π_2) is a (weak) Nash equilibrium of G_ε if for i ∈ {1, 2} it is π_i ∈ B_i(π_{−i}). We now prove a few lemmas that we will need later on. First, we have suggestively called the values P(h) probabilities, but we have not shown them to satisfy, say, Kolmogorov's axioms. Additivity is not an issue, because we have only defined the probability for atomic events, and non-negativity is obvious from the definition. However, we will also need the fact that the numbers we have called probabilities indeed sum to 1, which requires a few lines to prove. Lemma 1 Let G_ε be a repeated game and π_1, π_2 be strategies for that game. Then ∑_{h ∈ (A_1 • A_2)^+} P(h | π_1, π_2) = 1. Proof ∑_h P(h | π_1, π_2) = (Eq. 4) ∑_{a_0 b_0 ⋯ a_n b_n ∈ (A_1 • A_2)^+} π_1(0, a_0) π_2(0, b_0) ε (1 − ε)^n ∏_{i=1}^{n} π_1(b_{i−1}, a_i) π_2(a_{i−1}, b_i) = (absolute convergence) ∑_{n=0}^{∞} ε (1 − ε)^n ∑_{a_0 b_0 ⋯ a_n b_n ∈ (A_1 A_2)^{n+1}} π_1(0, a_0) π_2(0, b_0) ∏_{i=1}^{n} π_1(b_{i−1}, a_i) π_2(a_{i−1}, b_i) = ∑_{n=0}^{∞} ε (1 − ε)^n ∑_{a_0 b_0 ∈ A_1 A_2} π_1(0, a_0) π_2(0, b_0) ∑_{a_1 b_1 ∈ A_1 A_2} π_1(b_0, a_1) π_2(a_0, b_1) ⋯ ∑_{a_n b_n ∈ A_1 A_2} π_1(b_{n−1}, a_n) π_2(a_{n−1}, b_n) = ∑_{n=0}^{∞} ε (1 − ε)^n = (sum of geometric series) 1. For seeing why the second-to-last equation is true, notice that the inner-most sum is 1. Thus, the next sum is 1 as well, and so on. Since the ordering in the right-hand side of the first line is lax, and because only the second line is known to converge absolutely, the re-ordering is best understood from right to left. The last step uses the well-known formula ∑_{k=0}^{∞} x^k = 1/(1 − x) for the geometric series. For any game G_ε, k ∈ N^+, a ∈ A_1, b ∈ A_2 and strategies π_1 and π_2 for G_ε, we define P_{k,G_ε}(a, b | π_1, π_2) := (1 − ε)^k ∑_{a_0 b_0 ⋯ a_{k−1} b_{k−1} ∈ (A_1 A_2)^k} π_1(b_{k−1}, a) π_2(a_{k−1}, b) π_1(0, a_0) π_2(0, b_0) ∏_{j=1}^{k−1} π_1(b_{j−1}, a_j) π_2(a_{j−1}, b_j). (7) For k = 0, we define P_{0,G_ε}(a, b | π_1, π_2) := π_1(0, a) · π_2(0, b). Intuitively speaking, P_{k,G_ε}(a, b | π_1, π_2) is the probability of reaching at least round k and that (a, b) is played in that round. With this, ∑_{ab ∈ A_1 A_2} P_{k,G_ε}(a, b | π_1, π_2) u_i(a, b) should be the expected utility from the kth round (where not getting to the kth round counts as 0). This suggests a new way of calculating expected utilities on a more round-by-round basis. Lemma 2 Let G_ε be a game, and let π_1, π_2 be strategies for that game. Then E[u_i | π_1, π_2] = ∑_{k=0}^{∞} ∑_{ab ∈ A_1 A_2} P_{k,G_ε}(a, b | π_1, π_2) u_i(a, b). 
Proof
\[
\begin{aligned}
\mathbb{E}_{G_\epsilon}[u_i \mid \pi_1, \pi_2] &= \sum_{h \in (A_1 A_2)^+} P(h \mid \pi_1, \pi_2)\,u_i(h) \\
&\overset{\text{Eqs. 4, 6}}{=} \sum_{a_0 b_0 \cdots a_n b_n \in (A_1 A_2)^+} \pi_1(0, a_0)\,\pi_2(0, b_0)\,\epsilon\,(1-\epsilon)^n \left( \prod_{j=1}^{n} \pi_1(b_{j-1}, a_j)\,\pi_2(a_{j-1}, b_j) \right) \cdot \sum_{k=0}^{n} u_i(a_k, b_k) \\
&= \sum_{k=0}^{\infty} \sum_{a_0 b_0 \cdots a_n b_n \in (A_1 A_2)^{\geq k+1}} \pi_1(0, a_0)\,\pi_2(0, b_0)\,\epsilon\,(1-\epsilon)^n \left( \prod_{j=1}^{n} \pi_1(b_{j-1}, a_j)\,\pi_2(a_{j-1}, b_j) \right) u_i(a_k, b_k) \\
&= \sum_{k=0}^{\infty} \sum_{a_0 b_0 \cdots a_k b_k \in (A_1 A_2)^{k+1}} \ \sum_{a_{k+1} b_{k+1} \cdots a_n b_n \in (A_1 A_2)^{*}} \pi_1(0, a_0)\,\pi_2(0, b_0)\,\epsilon\,(1-\epsilon)^n\,u_i(a_k, b_k) \left( \prod_{j=1}^{k} \pi_1(b_{j-1}, a_j)\,\pi_2(a_{j-1}, b_j) \right) \left( \prod_{j=k+1}^{n} \pi_1(b_{j-1}, a_j)\,\pi_2(a_{j-1}, b_j) \right) \\
&= \sum_{k=0}^{\infty} \sum_{a_0 b_0 \cdots a_k b_k \in (A_1 A_2)^{k+1}} \pi_1(0, a_0)\,\pi_2(0, b_0)\,(1-\epsilon)^k\,u_i(a_k, b_k) \left( \prod_{j=1}^{k} \pi_1(b_{j-1}, a_j)\,\pi_2(a_{j-1}, b_j) \right) \sum_{a_{k+1} b_{k+1} \cdots a_n b_n \in (A_1 A_2)^{*}} \epsilon\,(1-\epsilon)^{n-k} \prod_{j=k+1}^{n} \pi_1(b_{j-1}, a_j)\,\pi_2(a_{j-1}, b_j) \\
&\overset{\text{Lemma 1}}{=} \sum_{k=0}^{\infty} \sum_{a_0 b_0 \cdots a_k b_k \in (A_1 A_2)^{k+1}} \pi_1(0, a_0)\,\pi_2(0, b_0)\,(1-\epsilon)^k\,u_i(a_k, b_k) \prod_{j=1}^{k} \pi_1(b_{j-1}, a_j)\,\pi_2(a_{j-1}, b_j) \\
&\overset{\text{Eq. 7}}{=} \sum_{k=0}^{\infty} \sum_{a_k b_k \in A_1 A_2} P_{k,G_\epsilon}(a_k, b_k \mid \pi_1, \pi_2)\,u_i(a_k, b_k)
\end{aligned}
\]
To find the probability of player $i$ choosing $a$ in round $k$, we usually have to calculate the probabilities of all actions in all previous rounds. After all, player $i$ reacts to player $-i$'s previous move, who in turn reacts to player $i$'s move in round $k-2$, and so on. This is what makes Eq. 7 so long. However, imagine that player $-i$ uses a stationary strategy. This, of course, means that player $-i$'s probability distribution over moves in round $k$ (assuming the game indeed reaches round $k$) can be computed directly as $\pi_{-i}(b)$. Player $i$'s distribution over moves in round $k$ is almost as simple to calculate, because it only depends on player $-i$'s distribution over moves in round $k-1$, which can also be calculated directly. We hence get the following lemma. 
Lemma 3 Let $G_\epsilon$ be a repeated game, let $\pi_i$ be any strategy for $G_\epsilon$ and let $\pi_{-i}$ be a stationary strategy for $G_\epsilon$. Then, for all $k \in \mathbb{N}_+$, it is
\[
P_{k,G_\epsilon}(a, b \mid \pi_i, \pi_{-i}) = (1-\epsilon)^k \sum_{b' \in A_{-i}} \pi_{-i}(b')\,\pi_{-i}(b)\,\pi_i(b', a).
\]
Proof We conduct our proof by induction over $k$. For $k = 1$, it is
\[
\begin{aligned}
P_1(a, b \mid \pi_i, \pi_{-i}) &\overset{\text{Eq. 7}}{=} (1-\epsilon) \sum_{a_0 b_0} \pi_i(b_0, a)\,\pi_{-i}(a_0, b)\,\pi_i(0, a_0)\,\pi_{-i}(0, b_0) \\
&= (1-\epsilon) \sum_{b_0} \pi_i(b_0, a)\,\pi_{-i}(b)\,\pi_{-i}(b_0) \sum_{a_0} \pi_i(0, a_0) \\
&= (1-\epsilon) \sum_{b_0} \pi_i(b_0, a)\,\pi_{-i}(b)\,\pi_{-i}(b_0).
\end{aligned}
\]
If the lemma is true for $k$, it is also true for $k+1$:
\[
\begin{aligned}
P_{k+1}(a, b \mid \pi_i, \pi_{-i}) &\overset{\text{Eq. 7}}{=} (1-\epsilon)^{k+1} \sum_{a_0 b_0 \cdots a_k b_k \in (A_i A_{-i})^{k+1}} \pi_i(b_k, a)\,\pi_{-i}(a_k, b)\,\pi_i(0, a_0)\,\pi_{-i}(0, b_0) \prod_{j=1}^{k} \pi_i(b_{j-1}, a_j)\,\pi_{-i}(a_{j-1}, b_j) \\
&= (1-\epsilon) \sum_{a_k b_k} P_k(a_k, b_k \mid \pi_i, \pi_{-i})\,\pi_i(b_k, a)\,\pi_{-i}(b) \\
&\overset{\text{I.H.}}{=} (1-\epsilon)^{k+1} \sum_{a_k b_k} \sum_{b'} \pi_{-i}(b')\,\pi_{-i}(b_k)\,\pi_{-i}(b)\,\pi_i(b', a_k)\,\pi_i(b_k, a) \\
&= (1-\epsilon)^{k+1} \sum_{b_k} \pi_{-i}(b_k)\,\pi_{-i}(b)\,\pi_i(b_k, a) \sum_{b'} \pi_{-i}(b') \sum_{a_k} \pi_i(b', a_k) \\
&= (1-\epsilon)^{k+1} \sum_{b_k} \pi_{-i}(b_k)\,\pi_{-i}(b)\,\pi_i(b_k, a).
\end{aligned}
\]
\n Robust program equilibrium in the prisoner's dilemma Discussions of the program equilibrium have traditionally used the well-known prisoner's dilemma (or trivial variations thereof) as an example to show how the program equilibrium rationalizes cooperation where the Nash equilibrium fails (e.g., Tennenholtz 2004, Sect. 3; McAfee 1984; Howard 1988; Barasz et al. 2014). The present paper is no exception. In this section, we will present our main idea using the example of the prisoner's dilemma; the next section gives the more general construction and proofs of properties of that construction.
For reference, the payoff matrix of the prisoner's dilemma is given in Table 1. I propose to use the following decision rule: with a probability of $\epsilon \in (0, 1]$, cooperate. Otherwise, act as your opponent plays against you. I will call this strategy GroundedFairBot. A description of the algorithm in pseudo-code is given in Algorithm 1. 3 

Data: program profile $(p_1, p_2)$ 
Result: action $a_i \in \{C, D\}$ 
1 if sample(0, 1) < ε then 
2     return C 
3 end 
4 return sample(apply($p_{-i}$, $(p_1, p_2)$)) 
Algorithm 1: The GroundedFairBot for player $i$. The program makes use of a function sample which samples uniformly from the given interval or probability distribution. It is assumed that $\epsilon$ is computable. 

The proposed program combines two main ideas. First, it is a version of FairBot (Barasz et al. 2014). That is, it chooses the action that its opponent would play against itself. As player $-i$ would like player $i$ to cooperate, GroundedFairBot thus incentivizes cooperation, as long as $\epsilon < 1/2$. In this, it resembles the tit for tat strategy in the iterated prisoner's dilemma (IPD) (famously discussed by Axelrod 2006), which takes an empirical approach to behaving as the opponent behaves against itself. Here, the probability of the game ending must be sufficiently small (again, less than 1/2 for the given payoffs) in each round for the threat of punishment and the allure of reward to be persuasive reasons to cooperate. The second main idea behind GroundedFairBot is that it cooperates with some small probability $\epsilon$. First and foremost, this avoids running into the infinite loop that a naive implementation of FairBot (see Algorithm 2) runs into when playing against opponents who, in turn, try to simulate FairBot. Note, again, the resemblance with the tit for tat strategy in the iterated prisoner's dilemma, which cooperates when no information about the opponent's strategy is available. 

Data: program profile $(p_1, p_2)$ 
Result: action $a_i \in \{C, D\}$ 
1 return sample(apply($p_{-i}$, $(p_1, p_2)$)) 
Algorithm 2: The NaiveFairBot for player $i$ 

To better understand how GroundedFairBot works, consider its behavior against a few different opponents. When GroundedFairBot faces NaiveFairBot, then both cooperate. For illustration, a dynamic call graph of their interaction is given in Fig. 1. It is left as an exercise for the reader to analyze GroundedFairBot's behavior against other programs, such as another instance of GroundedFairBot or a variation of GroundedFairBot that defects rather than cooperates with probability $\epsilon$. When playing against strategies that are also based on simulating their opponent, we could think of GroundedFairBot as playing a \"mental IPD\". When the opponent program decides whether to cooperate, it has to consider that it might currently only be simulated. Thus, it will choose an action with an eye toward gauging a favorable reaction from GroundedFairBot one recursion level up. Cooperation in the first \"round\" is an attempt to steer the mental IPD into a favorable direction, at the cost of cooperating if sample(0, 1) < $\epsilon$ already occurs in the first round. In addition to proving theoretical results (as done below), it would be useful to test GroundedFairBot \"in practice\", i.e., against other proposed programs for the prisoner's dilemma with access to one another's source code. I only found one informal tournament for this version of the prisoner's dilemma. It was conducted in 2013 by Alex Mennen on the online forum and community blog LessWrong. 4 In the original set of submissions, GroundedFairBot would have scored 6th out of 21.
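For readers who prefer running code, the following Python sketch mirrors Algorithms 1 and 2 in executable form. A \"program\" is modeled here as a Python callable that receives the program profile and the (0-based) index of the player it acts for, so that apply(p, (p_1, p_2)) becomes an ordinary function call; this representation, the fixed ε = 0.05, and all names are illustrative assumptions of the sketch, not part of the formalism above.

```python
# Executable sketch of Algorithms 1 and 2. A 'program' is a callable taking the
# program profile (p1, p2) and the 0-based index i of the player it acts for,
# and returning a sampled action. Illustrative representation only.
import random

C, D = 'C', 'D'
EPSILON = 0.05

def grounded_fair_bot(profile, i):
    # Algorithm 1: with probability epsilon cooperate; otherwise act as the
    # opponent acts against this program, found out by simulating the opponent.
    if random.random() < EPSILON:
        return C
    opponent = profile[1 - i]
    return opponent(profile, 1 - i)   # apply(p_{-i}, (p_1, p_2))

def naive_fair_bot(profile, i):
    # Algorithm 2: simulate the opponent unconditionally. Against another
    # simulator, this only terminates because GroundedFairBot eventually
    # 'grounds out' with probability epsilon at each level of recursion.
    opponent = profile[1 - i]
    return opponent(profile, 1 - i)

if __name__ == '__main__':
    profile = (grounded_fair_bot, naive_fair_bot)
    outcomes = [(profile[0](profile, 0), profile[1](profile, 1)) for _ in range(10_000)]
    print(outcomes.count((C, C)) / len(outcomes))  # 1.0: both always cooperate
```

Against a second GroundedFairBot the same grounding argument applies: every chain of mutual simulations ends at a cooperating call with probability 1, so two GroundedFairBots cooperate with each other as well.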
The reason why it is not a serious contender for first place is that it does not take advantage of the many exploitable submissions (such as bots that decide without looking at their opponent's source code). Once one removes the bottom 9 programs, GroundedFairBot scores second place. If one continues this process of eliminating unsuccessful programs for another two rounds, GroundedFairBot ends up among the four survivors that cooperate with each other. 5 
\n From iterated game strategies to robust program equilibria We now generalize the construction from the previous section. Given any computable strategy $\pi_i$ for a repeated game $G_\epsilon$, I propose the following program: with a small probability $\epsilon$, sample from $\pi_i(0)$. Otherwise, act how $\pi_i$ would respond (in the repeated game $G_\epsilon$) to the action that the opponent takes against this program. I will call this program Grounded$\pi_i$Bot. A description of the program in pseudo-code is given in Algorithm 3. As a special case, GroundedFairBot arises from Grounded$\pi_i$Bot by letting $\pi_i$ be tit for tat. 

Data: program profile $(p_1, p_2)$ 
Result: action $a_i \in A_i$ 
1 if sample(0, 1) < ε then 
2     return sample($\pi_i(0)$) 
3 end 
4 return sample($\pi_i$(sample(apply($p_{-i}$, $(p_1, p_2)$)))) 
Algorithm 3: The Grounded$\pi_i$Bot for player $i$. The program makes use of a function sample which samples uniformly from a given interval or a given probability distribution. It is assumed that $\pi_i$ and $\epsilon$ are computable. 

Again, our proposed program combines two main ideas. First, Grounded$\pi_i$Bot responds to how the opponent plays against Grounded$\pi_i$Bot. In this, it resembles the behavior of $\pi_i$ in $G_\epsilon$. As we will see, this leads Grounded$\pi_i$Bot to inherit many of $\pi_i$'s properties. In particular, if (like tit for tat) $\pi_i$ uses some mechanism to incentivize its opponent to converge on a desired action, then Grounded$\pi_i$Bot incentivizes that action in a similar way. Second, it again terminates immediately with some small probability $\epsilon$ to avoid the infinite loops that Naive$\pi_i$Bot (see Algorithm 4) runs into. Playing $\pi_i(0)$ in particular is partly motivated by the \"mental $G_\epsilon$\" that Grounded$\pi_i$Bot plays against some opponents (such as Grounded$\pi_{-i}$Bots or Naive$\pi_{-i}$Bots). The other motivation is to make the relationship between Grounded$\pi_i$Bot and $\pi_i$ cleaner. In terms of the strategies that are optimal against Grounded$\pi_i$Bot, the choice of that constant action cannot matter much if $\epsilon$ is small. Consider, again, the analogy with tit for tat. Even if tit for tat started with defection, one should still attempt to cooperate with it. In practice, however, it has turned out that the \"nice\" version of tit for tat (and nice strategies in general) are more successful (Axelrod 2006, ch. 2). The transparency in the program equilibrium may render such \"signals of cooperativeness\" less important, e.g., against programs like Barasz et al.'s (2014, Sect. 3) FairBot. Nevertheless, it seems plausible that, if only because of mental $G_\epsilon$-related considerations, in transparent games the \"initial\" actions matter as well. We now ground these intuitions formally. First, we discuss Grounded$\pi_i$Bot's halting behavior. We then show that, in some sense, Grounded$\pi_i$Bot behaves in $G$ like $\pi_i$ does in $G_\epsilon$. 
\n Halting behavior For a program to be a viable option in the \"transparent\" version of $G$, it should halt against a wide variety of opponents. Otherwise, it may be excluded from $\mathrm{PROG}(G)$ in our formalism. Besides, it should be efficient enough to be practically useful.
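In the same illustrative style as before, Algorithm 3 can be sketched as a higher-order function that wraps an arbitrary computable strategy $\pi_i$ (represented, as in the earlier snippets, as a callable returning an {action: probability} dictionary). The helper names below are again assumptions of the sketch rather than part of the paper's formalism.

```python
# Sketch of Algorithm 3: turn a memory-one strategy pi_i for the repeated game
# G_epsilon into a program for the one-shot transparent game. Programs are
# modeled as callables taking (profile, player_index); illustrative only.
import random

def sample(distribution):
    # sample an action from an {action: probability} dictionary
    r, acc = random.random(), 0.0
    for action, prob in distribution.items():
        acc += prob
        if r < acc:
            return action
    return action  # guard against floating-point rounding

def grounded_bot(pi_i, epsilon):
    def program(profile, i):
        if random.random() < epsilon:
            return sample(pi_i(0))                  # play pi_i's first-round move
        opponent = profile[1 - i]
        opponent_action = opponent(profile, 1 - i)  # apply(p_{-i}, (p_1, p_2))
        return sample(pi_i(opponent_action))        # reply as pi_i would in G_epsilon
    return program
```

With pi_i set to tit for tat and a binary action set, grounded_bot(pi_i, epsilon) reduces to the GroundedFairBot of the previous section.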
As with GroundedFairBot, the main reason why Grounded$\pi_i$Bot is benign in terms of the risk of infinite loops is that it generates strictly less than one new function call in expectation and never starts more than one. While we have no formal machinery for analyzing the \"loop risk\" of a program, it is easy to show the following theorem. 
Theorem 4 Let $\pi_i$ be a computable strategy for a repeated game $G_\epsilon$. Furthermore, let $p_{-i}$ be any program (not necessarily in $\mathrm{PROG}_{-i}(G)$) that calls $\mathrm{apply}(p_i, (p_i, p_{-i}))$ at most once and halts with probability 1 if that call to apply halts with probability 1. Then Grounded$\pi_i$Bot and $p_{-i}$ halt against each other with probability 1 and the expected number of steps required for executing Grounded$\pi_i$Bot is at most
\[
T_{\pi_i} + \left( T_{\pi_i} + T_{p_{-i}} \right) \frac{1-\epsilon}{\epsilon}, \tag{8}
\]
where $T_{\pi_i}$ is the maximum number of steps to sample from $\pi_i$, and $T_{p_{-i}}$ is the maximum number of steps needed to sample from $\mathrm{apply}(p_{-i}, (\text{Grounded}\pi_i\text{Bot}, p_{-i}))$ excluding the steps needed to execute $\mathrm{apply}(\text{Grounded}\pi_i\text{Bot}, (\text{Grounded}\pi_i\text{Bot}, p_{-i}))$. 
Proof It suffices to discuss the cases in which $p_{-i}$ calls $p_i$ once with certainty, because if any of our claims were refuted by some program $p_{-i}$, they would also be refuted by a version of that program that calls $p_i$ once with certainty. If $p_{-i}$ calls $p_i$ once with certainty, then the dynamic call graphs of both Grounded$\pi_i$Bot and $p_{-i}$ look similar to the one drawn in Fig. 1. In particular, the graph only contains one infinite path and that path has a probability of at most $\lim_{i \to \infty} (1-\epsilon)^i = 0$. For the time complexity, we can consider the dynamic call graph as well. The policy $\pi_i$ has to be executed at least once (with probability $\epsilon$ with the input 0 and with probability $1-\epsilon$ against an action sampled from $\mathrm{apply}(p_{-i}, (\text{Grounded}\pi_i\text{Bot}, p_{-i}))$). With a probability of $1-\epsilon$, we also have to execute the non-simulation part of $p_{-i}$ and, for a second time, $\pi_i$. And so forth. The expected number of steps to execute Grounded$\pi_i$Bot is thus $T_{\pi_i} + \sum_{j=1}^{\infty} (1-\epsilon)^j (T_{\pi_i} + T_{p_{-i}})$, which can be shown to be equal to the term in Eq. 8 by using the well-known formula $\sum_{k=0}^{\infty} x^k = 1/(1-x)$ for the geometric series. 
Note that this argument would not work if there were more than two players or if the strategy for the iterated game were to depend on more than just the last opponent move, because in these cases, the natural extension of Grounded$\pi_i$Bot would have to make multiple calls to other programs. Indeed, this is one of the reasons why the present paper only discusses two-player games and iterated games with such short-term memory. Whether a similar result can nonetheless be obtained for more than 2 players and strategies that depend on the entire past history is left to future research. As special cases, for any strategy $\pi_{-i}$, Grounded$\pi_i$Bot terminates against Grounded$\pi_{-i}$Bot and Naive$\pi_{-i}$Bot (and these programs in turn terminate against Grounded$\pi_i$Bot). The latter is especially remarkable. Our Grounded$\pi_i$Bot terminates and leads the opponent to terminate even if the opponent is so careless that it would not even terminate against a version of itself or, in our formalism, if $\mathrm{PROG}_{-i}(G)$ gives the opponent more leeway to work with simulations. 
\n Relationship to the underlying iterated game strategy Theorem 5 Let $G$ be a game, $\pi_i$ be a strategy for player $i$ in $G_\epsilon$, $p_i = \text{Grounded}\pi_i\text{Bot}$ and $p_{-i} \in \mathrm{PROG}_{-i}(G)$ be any opponent program. We define $\pi_{-i} = \mathrm{apply}(p_{-i}, (p_i, p_{-i}))$, which makes $\pi_{-i}$ a strategy for player $-i$ in $G_\epsilon$. Then
\[
\mathbb{E}_G[u_i \mid p_i, p_{-i}] = \epsilon\, \mathbb{E}_{G_\epsilon}[u_i \mid \pi_i, \pi_{-i}].
\]
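The expected-cost bound of Eq. (8) can also be illustrated numerically. If, as in the proof, every level of mutual simulation independently \"grounds out\" with probability $\epsilon$, then the expected number of additional levels (each costing one more execution of $\pi_i$ and of the opponent's non-simulation part) is $\sum_{j \geq 1} (1-\epsilon)^j = (1-\epsilon)/\epsilon$. The toy Monte Carlo below checks this against the closed form; the step counts T_pi and T_opp are made-up numbers used only for illustration.

```python
# Toy check of the expected-step bound of Eq. (8), under the assumption that
# each level of simulation independently grounds out with probability epsilon.
# T_pi and T_opp are made-up step counts for illustration.
import random

epsilon, T_pi, T_opp = 0.2, 7.0, 11.0

def simulated_steps():
    steps = T_pi                       # pi_i is executed at least once
    while random.random() >= epsilon:  # with probability 1 - epsilon, recurse
        steps += T_pi + T_opp          # one more opponent call plus one more pi_i call
    return steps

random.seed(0)
estimate = sum(simulated_steps() for _ in range(200_000)) / 200_000
bound = T_pi + (T_pi + T_opp) * (1 - epsilon) / epsilon
print(round(estimate, 2), round(bound, 2))  # the two values are close (both about 79)
```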
Proof We separately transform the two expected values in the equation that is to be proven and then notice that they only differ by a factor of $\epsilon$:
\[
\begin{aligned}
\mathbb{E}_{G_\epsilon}[u_i \mid \pi_i, \pi_{-i}] &\overset{\text{Lemma 2}}{=} \sum_{k=0}^{\infty} \sum_{ab \in A_i A_{-i}} P_{k,G_\epsilon}(a, b \mid \pi_i, \pi_{-i})\,u_i(a, b) \\
&\overset{\text{Lemma 3}}{=} \sum_{ab \in A_i A_{-i}} \pi_i(0, a)\,\pi_{-i}(b)\,u_i(a, b) + \sum_{k=1}^{\infty} \sum_{ab \in A_i A_{-i}} (1-\epsilon)^k \sum_{b' \in A_{-i}} \pi_{-i}(b')\,\pi_i(b', a)\,\pi_{-i}(b)\,u_i(a, b) \\
&\overset{\text{absolute convergence}}{=} \sum_{ab \in A_i A_{-i}} \pi_i(0, a)\,\pi_{-i}(b)\,u_i(a, b) + \sum_{ab \in A_i A_{-i}} \sum_{b' \in A_{-i}} \pi_{-i}(b')\,\pi_i(b', a)\,\pi_{-i}(b)\,u_i(a, b) \sum_{k=1}^{\infty} (1-\epsilon)^k \\
&\overset{\text{sum of geometric series}}{=} \sum_{ab \in A_i A_{-i}} \pi_i(0, a)\,\pi_{-i}(b)\,u_i(a, b) + \frac{1-\epsilon}{\epsilon} \sum_{ab \in A_i A_{-i}} \sum_{b' \in A_{-i}} \pi_{-i}(b')\,\pi_i(b', a)\,\pi_{-i}(b)\,u_i(a, b).
\end{aligned}
\]
The second-to-last step uses convergence to reorder the sum signs. The last step uses the well-known formula $\sum_{k=0}^{\infty} x^k = 1/(1-x)$ for the geometric series. Onto the other expected value:
\[
\begin{aligned}
\mathbb{E}_G[u_i \mid p_i, p_{-i}] &\overset{\text{Eqs. 3, 2, 1}}{=} \sum_{ab \in A_i A_{-i}} \mathrm{apply}(p_i, (p_i, p_{-i}), a)\,\mathrm{apply}(p_{-i}, (p_i, p_{-i}), b)\,u_i(a, b) \\
&\overset{\text{def.s } p_i, \pi_{-i}}{=} \sum_{ab \in A_i A_{-i}} \left( \epsilon\,\pi_i(0, a) + (1-\epsilon) \sum_{b' \in A_{-i}} \pi_{-i}(b')\,\pi_i(b', a) \right) \pi_{-i}(b)\,u_i(a, b) \\
&= \epsilon \sum_{ab \in A_i A_{-i}} \pi_i(0, a)\,\pi_{-i}(b)\,u_i(a, b) + (1-\epsilon) \sum_{ab \in A_i A_{-i}} \sum_{b' \in A_{-i}} \pi_{-i}(b')\,\pi_i(b', a)\,\pi_{-i}(b)\,u_i(a, b).
\end{aligned}
\]
Here, $\mathrm{apply}(p_i, (p_i, p_{-i}), a) := \mathrm{apply}(p_i, (p_i, p_{-i}))(a)$. The hypothesis follows immediately. Note that the program side of the proof does not involve any \"mental $G_\epsilon$\". Using Theorem 5, we can easily prove a number of property transfers from $\pi_i$ to Grounded$\pi_i$Bot. 
Corollary 6 Let $G$ be a game. Let $\pi_i$ be a computable strategy for player $i$ in $G_\epsilon$ and let $p_i = \text{Grounded}\pi_i\text{Bot}$. 
1. If $p_{-i} \in B_{-i}(p_i)$, then $\mathrm{apply}(p_{-i}, (p_i, p_{-i})) \in B^{s,c}_{-i}(\pi_i)$. 
2. If $\pi_{-i} \in B^{s,c}_{-i}(\pi_i)$ and $\mathrm{apply}(p_{-i}, (p_i, p_{-i})) = \pi_{-i}$, then $p_{-i} \in B_{-i}(p_i)$. 
Proof Both 1. and 2. follow directly from Theorem 5. 
Intuitively speaking, Corollary 6 shows that $\pi_i$ and Grounded$\pi_i$Bot provoke the same best responses. Note that best responses in the program game only correspond to best stationary computable responses in the repeated game. The computability requirement is due to the fact that programs cannot imitate incomputable best responses. The corresponding strategies for the repeated game further have to be stationary because Grounded$\pi_i$Bot only incentivizes opponent behavior for a single situation, namely the situation of playing against Grounded$\pi_i$Bot. As a special case of Corollary 6, if $\epsilon < 1/2$, the best response to GroundedFairBot is a program that cooperates against GroundedFairBot, because in an IPD with a probability of ending of less than 1/2, a strategy that cooperates is the best (stationary computable) response to tit for tat. 
\n Exploitability Besides forming an equilibrium against many opponents (including itself) and incentivizing cooperation, another important reason for tit for tat's success is that it is \"not very exploitable\" (Axelrod 2006). That is, when playing against tit for tat, it is impossible to receive a much higher reward than tit for tat itself. We now show that (in)exploitability transfers from strategies $\pi_i$ to Grounded$\pi_i$Bots. We call a game $G = (A_1, A_2, u_1, u_2)$ symmetric if $A_1 = A_2$ and $u_1(a, b) = u_2(b, a)$ for all $a \in A_1$ and $b \in A_2$. If $G$ is symmetric, we call a strategy $\pi_i$ for $G_\epsilon$ $N$-exploitable in $G_\epsilon$ for an $N \in \mathbb{R}_{\geq 0}$ if there exists a $\pi_{-i}$ such that $\mathbb{E}[u_{-i} \mid \pi_{-i}, \pi_i] > \mathbb{E}[u_i \mid \pi_{-i}, \pi_i] + N$. We call $\pi_i$ $N$-inexploitable if it is not $N$-exploitable.
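Theorem 5 can be checked directly on the prisoner's dilemma using the closed forms from the proof: take $\pi_i$ to be tit for tat (so that $p_i$ is GroundedFairBot) and let the opponent program induce some fixed distribution $\pi_{-i}$ over {C, D}. The following sketch computes both sides exactly for the payoffs of Table 1; the specific numbers $\epsilon = 0.1$ and a 70% cooperation rate are arbitrary illustrative choices.

```python
# Exact check of Theorem 5 on the prisoner's dilemma: E_G[u_i | p_i, p_-i] equals
# epsilon * E_{G_eps}[u_i | pi_i, pi_-i], with pi_i = tit for tat and pi_-i the
# stationary strategy induced by the opponent program. Uses the closed forms
# derived in the proof; illustrative computation for player i = 1.
C, D = 'C', 'D'
u1 = {(C, C): 3, (C, D): 1, (D, C): 4, (D, D): 2}   # payoffs of Table 1

def tft(prev):  # tit for tat: cooperate first and after C, defect after D
    return {C: 1.0, D: 0.0} if prev in (0, C) else {C: 0.0, D: 1.0}

def check(epsilon, q_cooperate):
    pi_opp = {C: q_cooperate, D: 1.0 - q_cooperate}  # induced stationary pi_-i

    # repeated-game side, using Lemmas 2 and 3 as in the first half of the proof
    first = sum(tft(0)[a] * pi_opp[b] * u1[(a, b)] for a in (C, D) for b in (C, D))
    later = sum(pi_opp[bp] * tft(bp)[a] * pi_opp[b] * u1[(a, b)]
                for a in (C, D) for b in (C, D) for bp in (C, D))
    expected_repeated = first + (1 - epsilon) / epsilon * later

    # program-game side: the distribution of GroundedFairBot's single action
    def p_i(a):
        return epsilon * tft(0)[a] + (1 - epsilon) * sum(pi_opp[bp] * tft(bp)[a] for bp in (C, D))
    expected_program = sum(p_i(a) * pi_opp[b] * u1[(a, b)] for a in (C, D) for b in (C, D))

    return expected_program, epsilon * expected_repeated

print(check(0.1, 0.7))  # both entries agree (up to floating point), about 2.67
```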
Analogously, in a symmetric game $G$ we call a program $p_i$ $N$-exploitable for an $N \in \mathbb{R}_{\geq 0}$ if there exists a $p_{-i}$ such that $\mathbb{E}[u_{-i} \mid p_{-i}, p_i] > \mathbb{E}[u_i \mid p_{-i}, p_i] + N$. We call $p_i$ $N$-inexploitable if it is not $N$-exploitable. 
Corollary 8 Let $G$ be a game and $\pi_i$ be an $N$-inexploitable strategy for $G_\epsilon$. Then Grounded$\pi_i$Bot is $\epsilon N$-inexploitable. 
Proof Follows directly from Theorem 5. 
Notice that if, like tit for tat in the IPD, $\pi_i$ is $N$-inexploitable in $G_\epsilon$ for all $\epsilon$, then we can make Grounded$\pi_i$Bot arbitrarily close to 0-inexploitable by decreasing $\epsilon$. 
\n Conclusion In this paper, we gave the following recipe for constructing a program equilibrium for a given two-player game: 
1. Construct the game's corresponding repeated game. In particular, we consider repeated games in which each player can only react to the opponent's move in the previous round (rather than the entire history of previous moves by both players) and the game ends with some small probability $\epsilon$ after each round. 
2. Construct a Nash equilibrium for that iterated game. 
3. Convert each of the strategies into a computer program that works as follows (see Algorithm 3): with probability $\epsilon$ do what the strategy does in the first round. With probability $1-\epsilon$, apply the opponent program to this program; then do what the underlying strategy would reply to the opponent program's output. 
The result is a program equilibrium which we have argued is more robust than the equilibria described by Tennenholtz (2004). More generally, we have shown that translating an individual's strategy for the repeated game into a program for the stage game in the way described in step 3 retains many of the properties of the strategy for the repeated game. Thus, it seems that \"good\" programs to submit may be derived from \"good\" strategies for the repeated game. 

Fig. 1 Dynamic call diagram describing how $p_1$ = GroundedFairBot chooses when playing against $p_2$ = NaiveFairBot. 

Table 1 Payoff matrix for the prisoner's dilemma 
                          Player 2 
Player 1        Cooperate      Defect 
Cooperate       3, 3           1, 4 
Defect          4, 1           2, 2 

1 The equilibrium in its rudimentary form had already been proposed by McAfee (1984) and Howard (1988). At least the idea of viewing players as programs with access to each other's source code has also been discussed by, e.g., Binmore (1987, Sect. 5) and Anderlini (1990). 
2 For keeping our notation simple, we will assume that our programs receive their own source code as input in addition to their opponent's. If $\mathrm{PROG}_i(G)$ is sufficiently powerful, then by Kleene's second recursion theorem, programs could also refer to their own source code without receiving it as an input (Cutland 1980, ch. 11). 
3 This program was proposed by Abram Demski in a conversation discussing similar (but worse) ideas of mine. It has also been proposed by Jessica Taylor at https://agentfoundations.org/item?id=524, though in a slightly different context. 
4 The tournament was announced at https://www.lesserwrong.com/posts/BY8kvyuLzMZJkwTHL/prisoner-s-dilemma-with-visible-source-code-tournament and the results at https://www.lesserwrong.com/posts/QP7Ne4KXKytj4Krkx/prisoner-s-dilemma-tournament-results-0. 
5 For a more detailed analysis and report on my methodology, see https://casparoesterheld.files.wordpress.
com/2018/02/transparentpdwriteup.pdf.", "date_published": "n/a", "url": "n/a", "filename": "Oesterheld2018_RobustProgramEquilibrium.tei.xml", "abstract": "One approach to achieving cooperation in the one-shot prisoner's dilemma is Tennenholtz's (Games Econ Behav 49(2): [363][364][365][366][367][368][369][370][371][372][373] 2004) program equilibrium, in which the players of a game submit programs instead of strategies. These programs are then allowed to read each other's source code to decide which action to take. As shown by Tennenholtz, cooperation is played in an equilibrium of this alternative game. In particular, he proposes that the two players submit the same version of the following program: cooperate if the opponent is an exact copy of this program and defect otherwise. Neither of the two players can benefit from submitting a different program. Unfortunately, this equilibrium is fragile and unlikely to be realized in practice. We thus propose a new, simple program to achieve more robust cooperative program equilibria: cooperate with some small probability and otherwise act as the opponent acts against this program. I argue that this program is similar to the tit for tat strategy for the iterated prisoner's dilemma. Both \"start\" by cooperating and copy their opponent's behavior from \"the last round\". We then generalize this approach of turning strategies for the repeated version of a game into programs for the one-shot version of a game to other two-player games. We prove that the resulting programs inherit properties of the underlying strategy. This enables them to robustly and effectively elicit the same responses as the underlying strategy for the repeated game.", "id": "ffd8e3324fb38892be0da1e275b54058"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Robert De Neufville", "Seth D Baum"], "title": "Collective action on artificial intelligence: A primer and review", "text": "Introduction Collective action \"arises when the efforts of two or more individuals or agents … are required to accomplish an outcome\" [1] ; p. 196. The overall development of artificial intelligence (AI) requires collective action, as do efforts to ensure that AI development results in good outcomes for society, both because it requires individuals to coordinate their actions and because it is simply too large and complex a task for any single individual acting alone to accomplish. This paper is a primer and review of AI collective action. By AI collective action, we mean the collective action humans take to improve the outcomes of AI development, not the collective action of groups of AIs. The paper is a primer in that it serves as an introduction to the topic for a diverse audience of social scientists, computer scientists, policy analysts, government officials, organizers, activists, concerned citizens, and anyone else with an interest in the topic. The paper is a review in that it reviews the literature that has thus far accumulated on the topic. Many aspects and applications of AI will require collective action. In particular, as we show below, collective action will be needed to reach agreement on AI rules and standards, to develop AI that is broadly socially beneficial rather than merely being profitable for particular developers, and to avoid competition or conflict that could lead to AI being developed or used in a less safe and beneficial way. AI is a potentially transformative technology that could shape the way people live, work, and communicate. 
This raises the question of how AI can contribute to or hinder good outcomes for society-or, phrased differently, how AI can contribute to or hinder the building of a good society. 1 As Coeckelbergh [2] notes, the question of the appropriate role of technology in society is a political question of concern to the general public rather than a purely technical question of concern only to private individuals. Addressing that question will require collective action. AI collective action is a relatively new field. A large portion of the work on it has been written in just the last few years. The topic has also attracted interest from a wide range of thinkers from both inside and outside academia. This paper takes an inclusive approach to the literature, drawing on everything from full-length peer-reviewed research papers to short blog posts and commentaries. The intent is to provide a broad review of existing ideas about AI collective action, regardless of where they are published. The core method of this paper is a nonsystematic review of the AI collective action literature. The selection of literature reviewed is not necessarily comprehensive or representative of the full body of work on the topic, though we believe we have identified the overwhelming majority of relevant work. The literature reviewed was identified via keyword searches, our own prior knowledge of the literature, and citation tracking. The Google Scholar and Google Web Search tools were queried with phrases such as \"artificial intelligence collective action\", \"artificial intelligence governance\", and \"artificial intelligence race\". \"Artificial intelligence race\" was included because a large number of the articles that concern AI collective action focus in particular on AI races. Most of the literature identified was published after 2010 because this is a new literature, but keyword searches were not limited to articles published after 2010. All academic publications were included. Nonacademic publications were included if in the authors' judgment they contained a well-supported and compelling discussion of AI collective action. Much of the AI collective action literature identified in this paper focuses on long-term scenarios in which competitive pressures increase the risk of catastrophic outcomes, especially via the development of advanced forms of AI such as artificial general intelligence (AGI) and superintelligence. 2 Another important focus of the literature is on military applications and the potential dangers of near-term AI arms races. This paper has some emphasis on the literature on near-term and longterm AI races (Section 3), though much of the discussion is more general. Indeed, despite the wide range of forms and applications that AI technology has now and could have in the future, the collective action issues they raise are similar both in the nature of the issues and in their potential solutions. In this regard, the paper contributes to a broader analysis of synergies between near-term and long-term AI issues [3] [4] [5] [6] . Following a general overview of AI collective action, the paper presents discussions of AI race scenarios and collective action that could be taken to address them. Much of the literature to date focuses on the collective action issues raised by races. It should be stressed that races are not the only scenarios that raise AI collective action issues, though they are important ones. 
\n A primer on collective action This section presents a general background on collective action, with emphasis on aspects of relevance to AI. It is intended mainly for an interdisciplinary non-specialist audience; much (though not all) will be familiar to readers who are versed in the study of collective action. \n Divergent vs. convergent interests The concept of collective action is perhaps most closely associated with \"collective action problems\", in which individual interests and group interests diverge. Where individual interests are at odds with group interests, individuals' pursuit of their own self-interest may lead to outcomes that are worse for not only for the group as a whole but also for each individual considered separately. The prisoner's dilemma is a simple and well-known form of collective action problem. 3 Collective action situations also include scenarios in which individual interests and group interests converge. In these cases, individuals' pursuit of their own self-interest can lead to outcomes that are collectively optimal. In markets, for example, individuals can make transactions that benefit themselves and improve group outcomes by allocating resources more efficiently. 4 Individual interests also converge in coordination problems when individuals would benefit by agreeing on a common set of rules or standards. Where individual interests align, the challenge is to coordinate individual actions rather than to resolve conflicts among individual interests. AI collective action includes cases of both divergent and convergent interests. AI races are, in the literature identified in this paper, typically associated with a mix of convergent and divergent interests (Section 3). Divergent interests also arise for AI work done in the public interest, including basic research published in the open literature and the development of standards and techniques for AI safety and ethics. 5 In both of these cases, individuals have limited incentives to do the work, so it is often funded by governments or private philanthropies. In general, public funding of basic research has a range of benefits in scientific and technical fields besides AI as well [7] . Convergent interests can arise when AI developers would benefit from using the same standardized platforms. It can be valuable for everyone to use the same programming languages, code repositories, operating systems, etc. so that programs can interoperate and developers do not have to start from scratch each time they join a new project. In some cases, it may not particularly matter which platform is used, as long as everyone uses the same one. Collective action is more difficult to achieve when individual and group interests diverge, because individuals' pursuit of their own interests will not lead on its own to collectively optimal outcomes. 6 The study of collective action therefore tends to focus on the more difficult challenges posed by divergent interests. This paper has a similar emphasis. Nonetheless, it is important to recognize that AI collective action does not always entail competition among individuals with divergent interests. Furthermore, there are often opportunities to align individual and group interests. This can be a powerful way to improve AI outcomes. One important way individual and group interests can converge is when individuals value group outcomes. 
Collective action problems like the prisoner's dilemma assume individuals are narrowly self-interested, but individuals may also care about others, in which case they may choose to act in the group interest even if doing so goes against their self-interest. In the extreme case, individuals may value only group outcomes, so that they always act in the group interest regardless of the implications for themselves. Even if all individuals value only group outcomes, they could still be in competition if they value different aspects of group outcomes. For example, a survey by Baum [8] finds that AGI developers typically value either humanity as a whole or intellectual progress for its own sake, but not both. Since these goals might be at odds, one can imagine competition between pro-humanity developers and pro-intellectual progress developers. Similarly, the Armstrong, Bostrom and Shulman [9] model of dangerous AI races uses an \"enmity\" parameter to model AI projects' preference for their own project winning the race. Enmity could derive from either self-interest or differing preferences about which group values to favor. In the model, higher enmity results in less cooperation and more dangerous races. Efforts to promote and reach consensus on group values could increase the convergence of interests and make collective action more likely. In recent years, different versions of AI ethics principles have proliferated [10], which may be a constructive step in this direction (though for a critical analysis of this, see Whittlestone et al. [11]). Establishing ethical norms among AI developers could also increase the convergence of interests. Examples of attempts to establish ethical norms for AI development include senior AI researcher Stuart Russell's call for people in AI to care more about the societal consequences of the technology [12] and employee activism among workers in the field of AI [13]. A relevant historical precedent is the moratorium on recombinant DNA in the 1970s, which succeeded in part because the relevant scientific community had a shared culture of political activism that left them predisposed to accepting a moratorium on their work [14]. 
2 AGI is AI that is capable of thinking across a wide range of cognitive domains. In contrast, current AI is narrow in the sense that it is only capable of thinking across a relatively narrow range of domains. Superintelligence is AI that is significantly smarter than humans in many important respects. AGI is often considered an important precursor to superintelligence. A concern is that AGI and/or superintelligence could outsmart humanity, take over the world, and potentially kill everyone in the process [20, 69, 85]. This scenario is referred to in this paper as an \"AI catastrophe\". 
3 For a detailed description of the prisoner's dilemma and related scenarios as they pertain to AI collective action, see Askell et al. [21]. 
4 … harms of a market transaction are not built into the price of the good or service being sold. For example, the harms from global warming are in most instances an externality that is not built into the price of fossil fuel. In these situations, individual and collective interests can diverge. 
5 Hughes [56] and AI Impacts [19] both discuss these points in the context of AI safety and ethics. 
6 For discussion of this issue see e.g. Olson [86] and Ostrom [48]. 
7 \"Goods\" in this context are anything that one or more individuals benefit from. Goods can include tangible material objects like computers, and intangible things like information. 
\n The excludability and rivalry of goods In public choice economics, collective action is often evaluated in terms of the type of goods that are being produced.
7 \"Goods\" in this context are anything that one or more individuals benefit from. Goods can include tangible material objects like computers, and intangible things like information. A common scheme for classifying forms of collective action depends on whether the goods at issue are excludable and rivalrous [15, 16] . Goods are excludable if it is possible to prevent people from benefiting from them, and they are rivalrous if enjoying them reduces how much they benefit others. Table 1 presents a 2 × 2 matrix of (non)excludable and (non)rivalrous goods. The four types of goods can be summarized as follows: • Public choice theory holds that collective action problems, in which individual and group interests diverge, arise mainly for non-excludable (common and public) goods because individuals can benefit from them without contributing to their production or maintenance [1] . When goods are excludable, access can be restricted (e.g., via setting higher prices for access) so as to align individual and group interests. This is controversial because higher prices can exclude poorer people and result in outcomes that are worse for the group once considerations of equity are taken into account. In any case, non-excludable goods face the basic problem of free-riding: since individuals can benefit from these kinds of goods whether or not they have contributed to providing or maintaining them, they have an incentive to free-ride on the contributions of others. (This assumes that individuals are self-interested.) As a result, individual community members have an incentive to produce less of these types of goods than they would if they were required to contribute, and less than would be optimal for the community as a whole. Common goods face the additional challenge of maintaining an adequate supply given the incentive to use up the goods. Unlike with public goods, free-riding diminishes the supply of common goods available for other individuals. Classic discussions of common good resources (such as open pastures for grazing livestock animals) warned of the \"tragedy of the commons\" due to the incentive to deplete the resources [17] . Modern scholarship speaks of the \"drama of the commons\" because this type of situation does not necessarily end in tragedy [18] . 9 How the tragedy can be avoided is a primary focus of collective action research, as discussed in Section 4. Note that the provision of common goods has the same basic structure as the prisoner's dilemma. Important aspects of AI development have the structure of public and common goods. The avoidance of an AI catastrophe has the structure of a public good in that the enjoyment of the absence of AI catastrophe is neither excludable nor rivalrous. Likewise, there may be an undersupply of efforts to avoid AI catastrophe, with individuals hoping to free-ride on the efforts of others [19] . Cooperation between different AI projects can have the structure of the provision of a common good, especially in scenarios like dangerous races in which competition increases risks. Just as competing individuals can diminish the supply of a common good by overusing it, competing individual AI projects can increase the risk of an AI catastrophe by taking avoidable risks. In both cases, individuals pursuing their own selfinterest cause harms to the overall population. This incentive structure has been recognized in some prior AI literature [9, [19] [20] [21] . AI races are discussed further in Section 3. 
\n Distribution of contributions In certain situations, successful collective action can depend on how the contributions of different actors are distributed. A helpful typology of these situations comes from Hirshleifer [22] and Barrett [23]. Work by AI Impacts [19] applies the Barrett [23] typology to AI. This prior literature frames the typology in terms of public goods instead of the distribution of contributions. However, the situations described by the typology are not restricted to public goods, and in our view are more productively expressed in terms of distributions of contributions. The core of the typology features four types of collective action situations: 
• Aggregate effort: when the result depends on the summed contributions of all actors rather than on how much any given actor contributes, such as when a project needs a certain amount of financing but it does not matter where it comes from. 
• Single best effort: when the result depends on the effectiveness of the best effort to address the problem, such as when a solution to a technical problem is needed. 
• Mutual restraint: when the result depends on the extent to which actors all refrain from taking certain actions, such as when developing or using a technology is so dangerous that a single failure could be catastrophic. 
• Weakest link: when the result depends on the effectiveness of the worst effort to address the problem, such as when the security of a system will fail if it fails at any single point. 
Aggregate effort, single best effort, and weakest link situations are all variants of situations requiring different distributions of contribution across all the actors. In aggregate effort, what matters is the total contribution of all actors rather than how that effort is distributed among them. An example of an aggregate effort situation is one in which what matters is how much money is raised for a project, not where the money comes from. In single best effort situations, what matters is the work done on a single top-performing project, which could include contributions from any number of actors. An example of a single best effort situation was the attempt to send humans to the moon, which succeeded when one project succeeded, but would have failed if the same effort had been divided among lots of projects that did not reach the moon. In weakest link situations, all actors must make some minimum contribution. Weakest link situations are ones in which every actor is responsible for maintaining some essential part of a joint activity. An example of a weakest link situation is when the failure of a single actor to follow cybersecurity protocols would compromise an entire network. Mutual restraint situations are similar to weakest link situations in that every actor must meet at least some minimum standard, but in mutual restraint situations what matters is not what actors do, but what they refrain from doing. An example of a mutual restraint situation is one in which every actor needs to refrain from a risky behavior that could affect them all, like performing an experiment that could have catastrophic consequences.
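One way to make the typology above concrete is to note that each situation corresponds to a different rule for aggregating individual contributions into a group outcome: a sum for aggregate effort, a maximum for single best effort, a minimum for weakest link, and an \"everyone refrains\" condition for mutual restraint. The short sketch below is our own illustrative summary, not a formal model from the cited literature.

```python
# Illustrative aggregation rules for the four collective action situations.
# `efforts` are individual contribution levels; `risky_actions` are flags for
# whether each actor took a prohibited action (the mutual restraint case).
def aggregate_effort(efforts):        # outcome depends on the total contribution
    return sum(efforts)

def single_best_effort(efforts):      # outcome depends on the best single effort
    return max(efforts)

def weakest_link(efforts):            # outcome depends on the worst effort
    return min(efforts)

def mutual_restraint(risky_actions):  # success only if nobody takes the risky action
    return not any(risky_actions)

efforts = [0.9, 0.4, 0.7]
print(aggregate_effort(efforts), single_best_effort(efforts), weakest_link(efforts))
print(mutual_restraint([False, False, False]), mutual_restraint([False, True, False]))
```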
One can readily imagine other variants of these situations, in which for example success requires two projects to meet some performance threshold (a \"double best effort\", such as for shipping custom products to two clients), or in which success requires all actors except one to meet some performance threshold (a \"second-weakest link\", such as for the cybersecurity of a computer network in which one computer can be air gapped to store backup files). The Barrett [23] typology also includes coordination situations, in which the result depends on the extent to which actors act in the same way. Coordination systems can also involve a variety of distributions of contribution. For example, weakest link and mutual restraint situations are situations in which every single actor must either act or refrain from acting in the same way (e.g., if all drivers have to act the same way at stoplights); aggregate effort situations are situations in which actors must collectively coordinate their activities closely enough but not perfectly (e.g., if drivers have to stop at stoplights only reliably enough for other drivers to have confidence they can safely go when the light is green). Cooperation may be easier in single best effort situations than in aggregate effort situations because efforts can be focused on a single topperforming project. Cooperation likewise may be harder in weakest link and mutual restraint situations than in aggregate effort situations because each actor has to meet some minimum standard of action. Cooperation is potentially easier to achieve in coordination situations because actors have an interest in coordinating and may be indifferent as to which coordination scheme is used. Different aspects of AI development conform to different collective action situations. AI Impacts [19] observes that refraining from developing an AI that would favor some groups over others requires mutual restraint, while the development of a safe ethical AI design may depend on a single best effort. The extent to which safe, ethical AI design is funded may depend on the aggregate effort of actors. The elimination of bugs or security problems in a jointly developed system may depend on the contribution of a weakest link actor. The development of standards that allow AI systems to work together may require coordination. \n Dangerous AI race scenarios Broadly speaking, an AI race is when AI development proceeds more quickly than it otherwise would, especially when multiple development teams compete to develop AI more quickly than each other. AI races are not necessarily dangerous and might even hasten the arrival of socially beneficial forms of AI [24] . Similarly, efforts to slow AI development can be harmful by limiting progress [25, 26] . Nonetheless, a substantial body of literature concerns the potential dangers of AI races. The main reason AI races could be dangerous is that developers might cut corners on safety in order to develop more quickly. This theme has been articulated, for example, by Danzig [27] in the context of military AI races and by many authors 10 in the context of long-term AGI development. There can be tension between the value of racing ahead for relative advantage (or perhaps for other reasons) and the value of exercising due caution with respect to harmful unintended consequences. This tension can be found for a variety of technologies, certainly including AI. 
In collective action terms, dangerous AI races may be especially worrisome if racing is in the individual interests of developers but not in the collective interest. In such cases these divergent interests create a collective action problem (Section 2.1). Each developer decides whether to participate in a race to develop an AI system. Each would benefit from winning the race, as long as their winning AI system is safe. If the first AI system developed is safe, the benefits of winning accrue primarily to the winning party. However, if the first system is not safe, for example if the winning system is one that causes global catastrophe, everyone might collectively bear the costs. In this situation, it may be in the individual interest of any given AI group to participate in a race even though it is in the collective interest of the group as a whole to avoid a race and develop AI more cautiously. This situation has the same basic incentive structure as prisoner's dilemma or common goods situations (Section 2.2). In the extreme case, these situations could require mutual restraint (Section 2.3) if an AI catastrophe can be avoided only if no developers engage in a race. Dangerous AI races could occur when it is not in the interest of developers to engage in an AI development race, if developers wrongly believe it is in their interest to do so. Choices about AI development may be made under conditions of \"bounded rationality\", 11 in which developers' ability to determine the best course of action may be limited in practice. The complexity of AI systems makes their risks difficult to evaluate with any certainty. Cultural norms or a cognitive biases may also make developers inclined to downplay risks, even where evidence suggests the risks are significant. As a result, dangerous AI races can arise when developers' perceived interests diverge from the collective interests of the group (e.g., when they imagine the reward for being the winner of an AI development race would be larger than it actually is). In such cases, the collective action problem that results could potentially be resolved by providing developers with accurate information about their individual interests [21] . 12 In contrast, actors in non-dangerous AI races may tend to have convergent interests, or at least interests that do not diverge. (An AI race is non-dangerous if the benefits of engaging in the race would outweigh or at least balance the harms.) Many common AI development races may be non-dangerous rather than dangerous. In some cases, such as when competition speeds the development of beneficial technology, the benefits of engaging in a development race may substantially outweigh the harms. Some races may be coordination situations, in which developers collectively benefit from working on the same problems at the same time. For example, the ImageNet database and ImageNet Large Scale Visual Recognition Challenge allow groups working on image recognition to pool their efforts and learn from one another [28] . It is also possible that developers could have an incentive not to engage in an AI race that would benefit the public by spurring valuable innovation. This is also a collective action problem because the interests of the individual developers diverge from the collective interest. This could occur if winning the race would depend on the production of a public good (Section 2.2), like basic research and development (R&D) that is likely to end up in the public domain. 
In such a case developers would have individual incentives to collectively underinvest in AI development and free-ride on the work of others. The distinction between dangerous and non-dangerous races is conceptually important for the reasons outlined above. but it may also be a rhetorically important distinction. When AI races are not identified as dangerous, they may be seen as harmless contests that should be played to win, rather than the risky competitions they can be [29, 30] . As a result, using the unqualified term \"AI races\" could increase the likelihood of irrational and dangerous AI races. Arguably, simply framing AI development as a \"race\" may make it sound less risky than it is, exaggerate the extent to which it is necessarily a competition, and minimize the potential for beneficial collaboration. 13 We therefore advise identifying dangerous AI races as \"dangerous\". (Identifying non-dangerous AI races may be less important, but potentially worthwhile nonetheless.) \n Dangerous near-term AI races AI race scenarios can broadly be split into private and public sector races. It is not a sharp distinction, since there are important interconnections between the private and public sectors, including public funding for private R&D and government use of AI developed by the private sector. Nonetheless, private corporations and national governments have different competitive dynamics. The private sector is currently the driving force behind AI development. The computer technology industry has a reputation for product development that can be rushed and risky, as epitomized by the former Facebook slogan \"move fast and break things\". Some of it is driven by competition, with rival groups seeking to gain market share, profit, and other advantages. There is intense competition to hire the most talented AI researchers and to build computer systems with superior performance in various tasks; both of these competitions have sometimes been referred to as \"arms races\" even though they do not involve military armaments [31] [32] [33] . A different sort of private sector \"arms race\" occurs between AI teams at social media companies tasked with removing inappropriate content and people who post the content [34] . Despite the ubiquity of near-term private sector AI races, the matter has not yet received substantial scholarly attention. AI races in the public sector-especially military AI races-have attracted more attention. Geist [35] traces military AI competition as far back as the 1960s Cold War competition between the Soviet Union and the US. The 1960s initiatives focused on advancing basic research. Today, AI is increasingly being used in operational military systems. A significant concern is the near-future prospect of arms races for autonomous weapons [36] [37] [38] , though Scharre [39] documents that countries have not been as quick to embrace autonomous weapons as one might think. More generally, the extent of a military AI arms race may be overstated [40] . Nonetheless, there are clear reasons for rival militaries to race one another to improve their AI capabilities. For example, fighter aircraft can gain a relative advantage over adversary planes by using AI to make faster and better combat decisions [41] . 
AI arms races may be more likely to occur with (a) weapon systems that attack other systems of the same kind (like fighters that are designed to engage other fighters in dogfights) than with (b) weapon systems that attack something else (like drones that are designed to engage human targets). For (a) but not (b), improving the weapon systems' AI gives the other side a reason to improve its own similar systems' AI. It is beyond the scope of this paper to assess the danger of near-term AI race scenarios. It is clear that near-term AI could pose risks, and it is plausible that some of these risks may be sufficient to render certain near-term AI races dangerous. This matter has not yet been clearly established in the literature and is a worthy focus of future research. \n Dangerous long-term AI races The dangers of long-term AI races have received more attention. This literature generally focuses on scenarios involving extremely capable AI and extremely high stakes. It is often argued that the first AI to reach some capability threshold could become enormously powerful, perhaps even powerful enough to effectively take over the world. 14 If that argument is right, then global outcomes could largely be determined by which AI project wins a development race. If the AI is built safely, then the result is an extreme case of \"winner takes all\". Alternatively, if the AI is not safe, then \"winning\" the AI development race would be the ultimate Pyrrhic victory, with catastrophic aggregate harms up to and potentially including human extinction. Because of the extreme stakes it is unambiguously in the collective interest to avoid such catastrophes. It is presumably very much in the self-interest of an AI developer to be the first to develop a safe AI. It may or may not be in the collective interest for that developer to be the first to develop safe AI, depending on whether the AI would make the AI broadly beneficial, since an immensely capable AI might be able to work wonders, both in the service of its developers and the world in general. The exact calculus depends on the probabilities of beneficial and catastrophic outcomes for each potential developer, as well as how much the developer and the collective value these outcomes. Resolving this calculus involves a suite of difficult philosophical and empirical issues that have to date not received attention in the AI collective action literature. The enormous stakes also blur the distinction between public and private actors. A private actor that built and controlled such an AI could have power rivaling the largest states. The substantial stakes also give states a reason to intervene in AI development and potentially even to nationalize parts of the AI industry [42] ; p.10. While some analyses of long-term AI have focused on development in one sector or another, 15 the impact of powerful AI is likely to transcend specific sectors and affect the collective interests of a broad group of actors (though the sector in which development occurs may matter for other reasons). Several studies develop mathematical models of the dynamics of dangerous long-term AI races. Armstrong et al. [9] model races in which the reward for winning is large and teams can improve their chances of winning by skimping on safety precautions, which also increases the probability of catastrophe. 
They find increased risk when there are more competing teams, when the teams have a stronger preference for winning, when taking risks increases the odds of winning, and when teams know each other's capabilities. Aldana [42] uses a variety of two-player games to explore opportunities to alter incentives toward cooperation, for example by highlighting the potentially catastrophic consequences of failing to cooperate. Han et al. [43] model competition over successive rounds of AI development, finding that cooperation is less likely when advanced AI can be built in the relatively near future. Finally, Naudé and Dimitri [44] model the cost of building AGI, finding that if it were expensive, relatively few groups would compete, but that public funding could incentivize cooperation. While some of these findings may seem self-evident, these models offer a means of exploring some of the subtler nuances of races and opportunities to increase cooperation. On the other hand, these models make sweeping mathematical assumptions about complex socio-technological processes and need to be supplemented with empirical studies. \n Long-term effects of near-term races Finally, it is worth briefly discussing the idea that near-term AI races can affect the long-term development of AI. In particular, it has been argued that near-term AI races could slow the long-term development of AI. One mechanism for this is by generating public backlash. If nearterm AI is not developed with sufficient caution, it could cause problems that lead to regulations and other initiatives that impede further AI development. This view was recently expressed by Mounir Mahjoubi, the French minister for digital affairs and an architect of France's AI policy. In Mahjoubi's words, \"If you don't invest in responsibility around AI, you will create resistance and resentment in the population\" [45] . An example of an incident that could create an enduring backlash is a 2018 incident in which a self-driving Uber killed a pedestrian. Following this incident, the US National Highway Traffic Safety Administration and National Transportation Safety Board launched probes into autonomous vehicle safety [46] . While it is not yet clear whether the incident will ultimately slow the roll out of autonomous vehicles, it is nonetheless indicative of the possibility. Another mechanism is that near-term races could focus resources on maximizing near-term performance at the expense of long-term progress. As is common with many areas of technology, long-term advances in AI may require new techniques based on fundamental breakthroughs that derive from basic research. In contrast, optimizing near-term performance may primarily involve the application of existing techniques. Hence, Marcus [47] argues that focusing on established machine learning techniques has left the field of AI at a long-term disadvantage. In other words, near-term races may not incentivize people to come up with new and possibly more effective AI techniques. \n Solutions to AI collective action problems The social science literature on collective action has identified three broad types of approaches to solving collective action problems. Each approach uses different strategies to encourage individuals to act in the collective interest even when their immediate incentives are to act in against the collective interest. The three approaches involve top-down government policy, bottom-up community governance, and private ownership [48] . 
\n Top-down government solutions Governments have some capacity to compel collective action. Governments can set policies that require individuals to act in accordance with the collective interest. Governments also have the authority to enforce compliance with policies and to punish noncompliance. For these reasons, governments are often seen as the appropriate institutions for encouraging collective action, both in general [48] and with respect to AI [19] . The AI collective action literature has produced a wide range of proposals for top-down government solutions. The range of these proposals shows the tradeoff that exists between the ambitiousness of a proposal and its feasibility. The more ambitious proposals would probably do more to advance AI collective action if they were implemented, but may be difficult to implement. The more modest proposals would do less to advance AI collective action but are probably more feasible. The most ambitious proposals call for no less than a world government that would monitor AI development and force rogue AI projects to comply with ethics and safety standards. This bold idea has been repeatedly proposed in the literature [19, 49, 50] . A related proposal is for an \"AI nanny\" that uses advanced AI to govern humanity and guide the development of even more advanced AI [51] . Somewhat less ambitious proposals call for international institutions that would house or otherwise govern the global development of AI. These proposals leave national sovereignty intact except with respect to AI development. Specific proposals include a \"global watchdog agency\" [52] , an international AI project with broad authority to regulate the development and use of advanced AI [20] ; pp. 104-106; [53-55] , and publicly funding a limited number of groups that have the exclusive right to develop advanced AI on the condition that it is in the public interest [44] . The most feasible proposals require relatively modest tweaks to existing governance schemes. For example, AI could be included in existing international arms control agreements [56] . New arms control agreements could be made for AI-based weapons. The feasibility of the proposed ban on autonomous weapons has been questioned, but may nonetheless be possible [57] , and other arms control measures that stop short of a ban would be more feasible. International institutions can also assist in setting the agenda on AI and facilitating dialog. Indeed, this is already occurring on a limited scale via the UN High-Level Panel on Digital Cooperation. Another international body that could guide global AI development is the Global Partnership on AI, a recently formed coalition of states with the mission of supporting \"the responsible and human-centric development and use of AI in a manner consistent with human rights, fundamental freedoms, and [their] shared democratic values\" [58] . Similar to this is a proposal for an intergovernmental organization that brings together stakeholders from the public sector, industry, and academia to develop non-binding recommendations for how to increase international cooperation on AI [59] . Since binding \"hard law\" rules can be difficult to enact, Marchant [60] proposes a range of non-binding \"soft law\" measures that create expectations but are not formally enforced by government, including \"private standards, voluntary programs, professional guidelines, codes of conduct, best practices, principles, public-private partnerships and certification programs\". 
Finally, an international organization could sponsor, host, or serve as a clearinghouse for research into AI, and play a role similar to that of the European Organization for Nuclear Research (CERN) [47, 61, 62] in physics or the Intergovernmental Panel on Climate Change (IPCC) in climate science [63]. Such an organization could potentially address the collective action problem of underinvestment in basic research, as well as the problem of underinvestment in AI ethics and safety research. Other literature has explored national government-led solutions. A major focus is on liability schemes for harms caused by privately developed AI and robotics systems. 16 Liability schemes can encourage collective action by changing individuals' incentives so that they align with the collective interest. Other proposals call for national governments to sponsor research on AI safety [64], to establish national panels to develop guidelines for AI R&D and use [65], or to create a National Algorithm Safety Board similar to the US National Transportation Safety Board to provide independent oversight of algorithms used to make decisions that impact the public [66]. Outside of the extensive literature on liability, most of the proposed government solutions have involved action at the international level. The reason appears to be that since AI can be developed anywhere in the world, comprehensive collective action requires a global scope. 16 See e.g. Karnow [95]; Asaro [96]; Marchant & Lindor [97]; Funkhouser [98]; Gurney [99]; LeValley [100]; Scherer [101]; Wu [102]; and Zohn [103]. Note that liability schemes apply mainly to near-term AI; the catastrophic harm from long-term AI may be so severe that it destroys the liability system [104]. Some even worry that piecemeal national regulations could push AI development underground to "rogue nations" [67]. Nevertheless, national governments have substantial authority even within the largely decentralized international system. Action at the national level is often more feasible than action at the international level, and successful action at the national level can serve as a model for international action. Top-down government action, whether at the national or international level, is not a perfect solution for AI collective action problems. Governments may struggle to regulate AI due to its complexity and rapid change [21, 24, 60]. Governments themselves may promote corporate or other special interests over the collective interest [68]. International proposals, especially ambitious ones, require a high degree of international cooperation, which may be hard to achieve given the difficulty of monitoring compliance, the incentives each state would have to defect [54, p. 46], the number of political jurisdictions and industries that would be involved, and the speed at which AI technology changes [60]. Even the most ambitious global agency might still fail to prevent dangerous projects from advancing [49, 53, 64]. In addition, any government scheme that lacks support from AI communities could create resentment and lead to pushback, making collective action more difficult [8]. Therefore, while top-down government solutions may be able to play a role in advancing AI collective action, they probably cannot resolve all AI collective action issues. \n Private ownership solutions Privatization is a common approach to solving collective action problems. One prominent context in which privatization is often used is in the management of natural resources.
A private actor who owns a resource has an incentive to use it optimally. For example, private ownership of pasture may make overgrazing less likely, because the owners have an interest in preserving pasture for their own future benefit. Private ownership schemes are difficult to apply to AI development, since AI technology has no single owner. Much of the relevant software, and many AI development techniques, are publicly available. Code can be proprietary, but it is relatively difficult to keep code private since it is often easy to copy software and other digital information. Even if AI development were entirely in private hands, it would still have enormous public impacts, creating externalities private owners would not have a direct incentive to address. Private firms might develop AI only in their own interest rather than in the public interest [68], underinvest in safety and ethics research that would primarily benefit the public [19, 56, 69], and misinform the public about AI risks in order to avoid regulation or scrutiny [70]. 17 Tan and Ding [71] call for a global AI market to mitigate safety risks, but concede that government regulation may be necessary to ensure that AI markets are globally integrated, standardized, and egalitarian. While the ease of copying software may make private ownership schemes difficult to enforce for software, hardware may be more amenable to such schemes. Hwang [72] describes several attributes of hardware manufacturing that make it easier to govern, including the relatively small number of large, fixed facilities involved in producing the high-end hardware used in cutting-edge AI systems. Hardware manufacturers could conceivably play a role in encouraging AI collective action. However, hardware manufacturing has the same externality as software development: the benefits of safe, ethical practices are spread widely across the public, creating an incentive to underinvest in safety and ethics. \n Bottom-up community solutions The third type of solution to collective action problems is bottom-up community self-organizing. In bottom-up community solutions, private actors work with one another in the collective interest in the absence of any overarching authority with the capacity to enforce cooperation. Bottom-up community solutions are appealing because they may be more feasible than top-down solutions. Cooperation in the absence of an enforcement authority might seem theoretically inelegant, but empirical studies of real-world collective action find that community self-organizing is often effective [48, 73]. 18 Soft law instruments like private standards, voluntary programs, and professional guidelines should arguably be considered examples of community self-organizing. Other soft law measures blur the distinction between the top-down government solutions discussed in Section 4.1 and community self-organizing. Institutions like the Institute of Electrical and Electronics Engineers (IEEE) and the Partnership on AI are already bringing AI groups together to craft common ethical principles and promote cooperation. These processes are new, and it remains to be seen how successful they will be. Nonetheless, there is at least some chance that they will be successful, just as previous initiatives have succeeded at promoting collective action in other contexts. Community self-organizing does not require individuals to be altruistic as long as they recognize that cooperating to achieve common goals is in their own private interests.
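A standard illustration of this point, borrowed from textbook game theory rather than from the AI-specific literature reviewed here, is the repeated prisoner's dilemma: cooperation can be individually rational when actors expect to keep interacting. The payoff values and continuation probability below are arbitrary.

```python
def cooperation_pays(temptation=5.0, reward=3.0, punishment=1.0, delta=0.9):
    """Compare long-run payoffs of cooperating vs. defecting against a
    grim-trigger partner in a repeated prisoner's dilemma.
    Payoffs and the continuation probability delta are illustrative only.
    """
    # Cooperate every round: earn the mutual-cooperation reward, discounted by delta.
    v_cooperate = reward / (1 - delta)
    # Defect once: grab the temptation payoff, then face mutual punishment forever.
    v_defect = temptation + delta * punishment / (1 - delta)
    return v_cooperate > v_defect, v_cooperate, v_defect

print(cooperation_pays(delta=0.9))  # cooperation pays when future interaction is likely
print(cooperation_pays(delta=0.3))  # with little shared future, defection pays
```

The same logic is one reason ongoing relationships among AI groups, such as repeated standard-setting interactions, can make self-organized cooperation individually worthwhile.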
Community self-organizing may be most likely to succeed when individuals are willing to make personal sacrifices for the greater good, as employee activists who oppose controversial but profitable applications of AI technology do. However, communities may be able to self-police adherence to reasonable norms even if individuals are not willing to sacrifice their private interests. Some have argued that simply fostering norms among people in the field of AI could improve outcomes [3, 5, 8]. Strengthening norms about near-term AI development could also lay the groundwork for collaborating on long-term AI development [3, 5]. Psychological factors can play an important role in determining the effectiveness of community solutions. For example, AI Impacts [19] and Baum [29] propose that it may be possible to cultivate a taboo against the development of dangerous AI. A taboo is an informal social norm against some action. Taboos can be effective. For example, the taboo against nuclear weapon use may be a major reason no nuclear weapon has been used in violence since 1945 [74, 75]. What kind of taboos would be appropriate is a matter of debate-it might be going too far to treat the development of some forms of AI as being as unacceptable as the use of nuclear weapons-but some kind of taboo against dangerous AI development could facilitate AI collective action. Community self-organizing may be especially important for AI developed using open-source software. Closed-source/proprietary software could be confined within a single institution, which may be able to make sure the software complies with safety and ethics standards. However, open-source software can be developed by anyone anywhere in the world, which may make top-down enforcement of standards extremely difficult. In the absence of oversight by or accountability to some outside authority, ethical codes can create the appearance of responsibility without having much impact on behavior [76, pp. 29-32]. This concern may motivate some of the more draconian proposals for global surveillance regimes to prevent the development of dangerous AI (e.g., Refs. [50, 51]). On the other hand, Goertzel [68] proposes that open-source AI development might be more attuned to the public interest than corporate or government AI development, since the government and corporations could act in a corrupt and self-interested way. Whether open-source AI development would in fact be more attuned to the public interest is an open question. Regardless, the difficulty of governing open-source software development via top-down regulations makes bottom-up community solutions more compelling. 17 For a discussion of how to counter such misinformation, see Baum [105]. 18 Baum [8] distinguishes between extrinsic constraints imposed on AI communities from the outside and intrinsic measures that are developed by the AI communities themselves. Baum [8] argues that while efforts to improve AI outcomes commonly focus on extrinsic measures, intrinsic measures can often be effective. Government and market collective action solutions are generally extrinsic, whereas community solutions are generally intrinsic. Community self-governance is not a silver-bullet solution. The empirical social science literature on collective action identifies a range of circumstances in which community self-organizing is more likely to be successful, such as when communities are geographically bounded, when there are at most a few thousand individuals or groups involved, when it is clear to actors how their choices directly affect their collective interest, when the benefits of collective action mostly accrue to the population whose actions determine outcomes, when the actors share a common culture and institutions, and when there are opportunities for actors to learn from experience. 19 Unfortunately, not all AI collective action situations meet all of these conditions.
Indeed, no AI collective action situation may meet some of these conditions (e.g. being geographically bounded). This does not mean bottom-up community solutions will necessarily fail for AI, but it does suggest creative approaches may be needed to overcome these challenges. Additionally, some have argued that the high stakes of long-term AI development merit government (and especially international government) solutions [50, 56] . However, arguably what matters here is not the size of the stakes but the efficacy of the solution. Government intervention is not a silver-bullet solution either (Section 4.1). In general, there are reasons to think that no single solution or approach can completely solve the collective action problems of AI development or ensure that AI will be safe and beneficial. Global collective action may require a polycentric system of governments, market, and community organizations that address AI issues in different ways and at different scales [18] . There may be no way to guarantee AI developers will produce beneficial designs, but, as Baum [29] writes, \"given the stakes involved in AI, all effective measures for promoting beneficial AI should be pursued\" (p. 551). \n Transparency Transparency is not a solution to collective action problems per se, but rather a governance mechanism that can affect the form and extent of AI collective action. Some general arguments in favor of transparency about AI development have been advanced. It has been proposed that transparency could encourage goodwill and collaboration among AI developers [77] , foster trust between AI developers and potential AI users [21] , improve cooperation between AI developers and government regulators (ibid.) and even help avoid unreasonable attempts to regulate AI research [78] . In practice, these arguments might not necessarily hold. For example, transparency could reduce goodwill, trust, and cooperation if AI developers are seen to be behaving poorly or acting in bad faith. Transparency could also give unscrupulous or incompetent regulators more opportunity to impose counterproductive or unnecessary regulations. However, transparency could still be beneficial on balance if it creates opportunities to address genuinely bad behavior and incentivizes AI developers and other stakeholders to behave well in the first place. A more contentious matter is whether AI developers should be open about the capabilities of their AI, including by reporting the latest results and openly publishing code. One concern is that sharing new algorithms and capabilities could increase the tools available to malevolent actors [79] , although it could also increase the tools available to counter malevolent actors. Another concern is that transparency about AI capabilities could make developers aware of one another's progress, which could prompt them to take shortcuts on safety in order to try to win a perceived AI race [9, 77] . Again, the converse could be true: if transparency reveals that AI developers are not making substantial progress, then they could focus more on safety and feel less pressure to engage in a race. 20 Finally, transparency could level the playing field for AI developers by enabling them to build on one another's work. This could increase risks by making it harder for a careful and benevolent developer group to dominate the process [77] , but it could also decrease risks by making it harder for a reckless and malevolent group to dominate. 
Overall, we concur with Bostrom [77] that the case for openness about AI capabilities is complicated and mixed. A clearer case can be made in favor of transparency on AI safety issues. Transparency would create opportunities for outside experts to contribute to any AI project's safety measures, thereby reducing the risks created by the project [77]. Additionally, transparency could create opportunities for outside observers to check for bugs and other problems with an AI group's work [21], although it would also give malevolent actors an opportunity to look for vulnerabilities they could exploit. An important challenge is how to ensure that the beneficial aspects of transparency outweigh the potential harms. The AI transparency issue was at the center of a recent debate about OpenAI's decision in 2019 to release its Generative Pre-trained Transformer 2 (GPT-2) language model in stages out of concern that it could be used for malicious purposes [80]. While some applauded this decision [81], others criticized it for undermining open-source norms and denying outside groups the opportunity to mitigate problematic aspects of the code [82]. This incident demonstrates the controversies that can accompany actions with respect to AI transparency. \n Conclusion Ensuring that AI generally contributes to good outcomes for society will require collective action. The development and use of AI involve a variety of particular situations in which collective action is required to achieve good outcomes. These situations include AI races, determination of AI development and use standards, and decisions about investment in public goods like basic AI research, AI safety, and AI ethics. A background in collective action can be valuable for understanding these situations and improving AI outcomes. Although AI collective action is a relatively new field of study, it has already produced a range of insights. The primer and review presented in this paper introduce collective action concepts, relate them to issues in AI, and summarize the existing literature so that readers from a variety of backgrounds can get up to speed on this important topic. Because this paper is a nonsystematic review, it cannot draw definitive conclusions about the existing literature on AI collective action. However, the research presented in the paper did involve a variety of searches to identify relevant literature. Furthermore, the searches did not identify a large body of literature. Unless the searches failed to identify a significant additional body of literature on AI collective action, which we believe is unlikely, the trends in the literature identified in this paper are indeed reflective of the trends in the actual body of literature on AI collective action. Whether this is in fact the case could be assessed in future research that conducts a systematic review. Any such review would need to account for the significant body of literature that is published outside of traditional academic outlets. One clear limitation of the AI collective action literature reviewed in this paper is that it makes relatively little use of the insights of the rich social science literature on collective action in other contexts. Human society has extensive experience with collective action, and scholars of that experience have learned a great deal that is relevant to AI collective action.
We did not find any studies drawing on this empirical literature to study AI collective action situations, though we are aware of one study drawing on the empirical literature for a more general discussion of risky emerging technologies [83]. Empirical studies of the effectiveness of collective action with respect to different aspects of AI development and use are a promising path future research could take. In particular, it would be valuable to study how institutional design can shape the outcomes of collective action situations. As Section 3 documents, there has also been relatively little research on competition between private AI groups. Government competition (and especially military competition) has gotten much more attention. While government AI competition is clearly important, private AI competition is too. Indeed, for now at least, the private sector is the main driver of AI R&D. More detailed studies of private sector AI competition are another promising path for future research. Although the study of AI collective action is in its infancy, the subject is increasingly pressing. The trajectory of AI development is uncertain, but R&D conducted today may have broad impacts on society. Governments and communities are beginning to formulate policies and institutions that if enacted could be long-lasting. Without further research into how to work together to ensure that AI development leads to collectively more optimal outcomes, society may stumble blindly into outcomes that are collectively worse.
Table 1 Goods classified by rivalry and excludability.
              EXCLUDABLE      NON-EXCLUDABLE
RIVALROUS     Private goods   Common goods
NON-RIVALROUS Club goods      Public goods
7 See e.g. Olson [86].
An important exception is when there are externalities, i.e. when benefits or costs fall on actors other than those making the decision.
Shulman [88]; Armstrong et al. [9]; Tomasik [49]; Aldana [42]; Han et al. [43]; Naudé & Dimitri [44].
11 "Bounded rationality" refers to the practical limits on actors' ability to make optimal decisions in real-world conditions.
12 In theory, it might be possible to resolve collective action problems in which individual and collective interests do diverge by misleading actors into believing their individual interests actually align with collective interests, although spreading false information could create other problems.
13 Similarly, Roff [40] argues against framing AI competition as an arms race on grounds that it "could escalate rivalry between states and increase the likelihood of actual conflict".
14 See e.g. Good [89]; Vinge [90]; Kurzweil [91]; Omohundro [92]; Yudkowsky [85]; Chalmers [93]; Barrat [94]; and Bostrom [20].
15 For example, AI Impacts [19] and Tan & Ding [71] focus on races between countries.
19 This list is from Stern [83], p. 215, discussing Ostrom [48]. We recommend Stern [83] as perhaps the only discussion of the empirical collective action literature in the context of governing risky technologies.
20 For comparison, during the initial race to build nuclear weapons, the US overestimated German progress and may have consequently paid less attention to safety issues in order to be the first to develop nuclear weapons [106].

Abstract. Progress on artificial intelligence (AI) requires collective action: the actions of two or more individuals or agents that in some way combine to achieve a result. Collective action is needed to increase the capabilities of AI systems and to make their impacts safer and more beneficial for the world. In recent years, a sizable but disparate literature has taken interest in AI collective action, though this literature is generally poorly grounded in the broader social science study of collective action. This paper presents a primer on fundamental concepts of collective action as they pertain to AI and a review of the AI collective action literature. The paper emphasizes (a) different types of collective action situations, such as when acting in the collective interest is or is not in individuals' self-interest, (b) AI race scenarios, including near-term corporate and military competition and long-term races to develop advanced AI, and (c) solutions to collective action problems, including government regulations, private markets, and community self-organizing. The paper serves to bring an interdisciplinary readership up to speed on the important topic of AI collective action.

Aligning AI Optimization to Community Well-Being (Jonathan Stray)

Introduction This paper is an extended analysis of a simple idea: large-scale commercial optimizing systems may be able to manage harmful side effects on communities by monitoring established well-being metrics. It sketches a theory that ties together quantitative measures of well-being, contemporary metrics-driven management practice, the objective function of optimization algorithms, participatory and multi-stakeholder governance of algorithmic systems, and the protection or promotion of community well-being. Detailed analyses of recent efforts by Facebook and YouTube are used to illustrate the challenges and unknowns of this approach, which generalizes to a variety of different types of artificial intelligence (AI) systems. The core contribution of this article is a proposed process for the use of community well-being metrics within commercial AI systems. Well-being encompasses "people's living conditions and quality of life today (current well-being), as well as the resources that will help to sustain people's well-being over time (natural, economic, human and social capital)" (OECD 2019b, p. 2). Community well-being attempts to evaluate well-being at the level of a community defined "in geographic terms, such as a neighborhood or town … or in social terms, such as a group of people sharing common chat rooms on the Internet, a national professional association or a labor union" (Phillips and Pittman 2015, p. 3). The measurement of well-being is now a well-established field with a long history, and is increasingly used in policy-making (Exton and Shinwell 2018).
Large AI systems can have both positive and harmful side effects on communities, through effects on employment and inequality (Korinek and Stiglitz 2017), privacy and safety (OECD 2019a), addictive behavior (Andreassen 2015), fairness and discrimination (Barocas et al. 2018), human rights (Donahoe and Metzger 2019), polarization, extremism, and conflict (Ledwich and Zaitsev 2020; Stoica and Chaintreau 2019), and potentially many other areas (Kulynych et al. 2020). Importantly, AI systems can affect non-users too, as with environmental externalities. Most AI is built around optimization "in which the aim is to find the best state according to an objective function" (Russell and Norvig 2010, p. 121), where an objective function is some method for quantitatively evaluating the desirability of an outcome (Dantzig 1982). Standard management practice also increasingly involves the maximization of quantitative metrics (Parmenter 2020), which can be considered an optimization process. This paper is concerned with optimizing systems composed of people and algorithms which affect communities, where the choice of objective might have significant societal influence. Examples include systems used to allocate resources or assign work, choose what news people see, recommend products to buy, or implement government policy. Many of these systems would be considered AI, but perhaps the phrase "autonomous and intelligent systems" (Schiff et al. 2020, p. 1) which appears in certain standards efforts would be better, because an automated system does not have to be very smart to cause harm. Rather, the unifying feature is optimization: both the cause of many problems and an opportunity for a response. The central idea of this paper is to incorporate community well-being metrics into the optimization process at both the managerial and technical level. This is a sociotechnical approach to systems design (Baxter and Sommerville 2011) that considers the role of both people and technology. There are many technical interventions that could be undertaken aside from the modification of an algorithmic objective function; for example, a social media product team could choose to show a simple chronological list of posts rather than using algorithmic content personalization. However, if product managers are evaluated on community well-being outcomes, they may choose to make such a change based on the expected effects on users. The integration of the managerial and the technical in an optimization framework can motivate many possible product design changes. \n Background This paper responds most directly to recent calls for research into well-being and AI. It proposes specific "improvements to product design" (Schiff et al. 2019, p. 3) and it is interdisciplinary, systems-based, and community-oriented (Musikanski et al. 2020). It draws on and contributes to the emerging field of recommender alignment, the practice of building algorithms for content ranking and personalization that enact human values (Stray et al. 2020). The goal of the process proposed in this paper is the governance of large-scale commercial algorithmic systems. Rahwan (2018) calls this society-in-the-loop control, defined as "embedding the values of society, as a whole, in the algorithmic governance of societal outcomes" (p. 3).
In this sense community participation is a key element of the proposed framework, and this paper draws on approaches as diverse as participatory design (Simonsen and Robertson 2012) and corporate stakeholder engagement (Manetti 2011). \n Community Well-Being At the individual level well-being is usually studied as an experiential state, and there is now a wealth of research on the definition and reliable measurement of subjective well-being (Diener et al. 2018). Although well-being is a rich, multidimensional construct, even single questions can reveal substantial information, such as "Overall, how satisfied are you with life as a whole these days?" answered on a 0-10 scale. This well-studied measure has several advantages: it correlates with how people make major life decisions, gives a similarly reliable result across cultures, and is by itself informative enough to be used in quantitative evaluations of policy choices (O'Donnell et al. 2014). Community well-being "embraces a wide range of economic, social, environmental, political, cultural dimensions, and can be thought of as how well functions of community are governed and operating" (Sung and Phillips 2018, p. 64). In practice, community well-being is assessed using a variety of metrics across many domains. Often both subjective and objective indicators are needed to get a full picture (Musikanski et al. 2019). A survey of local and national well-being indicator frameworks in use in the United Kingdom gives an overview of the substance and range of such metrics (Bagnall et al. 2017). Community well-being frameworks can originate from consideration of geographic communities, or communities of interest (Phillips and Pittman 2015) which may be particularly relevant to online platforms. As an example community well-being framework, the OECD Better Life Index (Durand 2015) aims to measure "both current material conditions and quality of life" (p. 1) across countries through the metrics shown in Table 1. This framework includes the life satisfaction measure above, as well as statistical indicators around health, education, employment, etc. in conjunction with subjective indicators such as whether one feels safe walking alone at night. Technologists and scholars have begun to appreciate the significance of well-being measures in the design and operation of AI systems (Musikanski et al. 2020). The IEEE 7010 Recommended Practice Standard for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being collects pre-existing measures from sources such as the OECD Better Life Index, the UN Sustainable Development Indicators, the Human Development Index, the World Health Organization, the World Values Survey, Freedom House, and others (Schiff et al. 2020). From the point of view of a technologist who is concerned about the societal effects of their work, established well-being metrics have the advantage of representing extensive deliberation by domain experts. \n Optimization Optimization is used extensively in AI to guide training and learning. A problem to be solved is expressed as a scalar function: a method to calculate a single number that expresses the desirability of any given hypothetical solution. Solving the problem means finding a solution that maximizes this function. The encapsulation of concerns into a single function was a major conceptual advance that enabled the creation of generic optimization algorithms (Dantzig 1982).
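As a minimal illustration of such a scalar objective, and of how a well-being proxy might enter it, consider the toy ranking function below. The feature names, weights, and scores are invented for illustration and are not drawn from any deployed system.

```python
def ranking_objective(item, w_engagement=1.0, w_wellbeing=0.2):
    """Toy scalar objective for ranking a feed item.

    Blends a predicted engagement signal with a predicted well-being proxy
    (e.g., the probability a user would report the interaction as meaningful).
    All names and weights here are hypothetical.
    """
    return (w_engagement * item["predicted_engagement"]
            + w_wellbeing * item["predicted_meaningful_interaction"])

candidates = [
    {"id": "post_a", "predicted_engagement": 0.9, "predicted_meaningful_interaction": 0.1},
    {"id": "post_b", "predicted_engagement": 0.6, "predicted_meaningful_interaction": 0.8},
]
# With the default weights, post_a ranks first; raising w_wellbeing to 0.5 flips the order.
ranked = sorted(candidates, key=ranking_objective, reverse=True)
print([c["id"] for c in ranked])
```

Adjusting the weight on the well-being term is the kind of technical lever that the managerial decisions described in the case studies below could correspond to.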
Conceptually, any problem that has some set of best solutions can be expressed as optimization with a single objective. A supervised machine learning algorithm that attempts to identify objects from images would usually be trained through a loss function that penalizes incorrect answers. A reinforcement learning approach to playing a video game might use the game score as a reward function. There are also value functions, cost functions, fitness functions, energy functions and more, all of which operate on similar principles (Russell and Norvig 2010). For simplicity, in this paper I refer to all of the scalar functions used to drive AI behavior as objective functions. In this paper I refer to an optimizing system as if there were one optimizer and one objective. In practice such systems, especially those at platform scale, may include dozens or hundreds of optimizing components (numerous trained sub-models, for example). There isn't one objective function that can be altered, but many. Nonetheless, there are usually a few high-level goals concerned with the system's main outputs. This is the case at Groupon with many interacting models and a master objective function that aligns to company goals (Delgado et al. 2019). Quantitative metrics analogous to objective functions are also used in corporate management. Modern management practice includes concepts such as key performance indicators (Parmenter 2020) and objectives and key results (Doerr 2017), both of which involve quantitative indicators of progress. Economic theory frequently models the corporation as a profit optimizer (e.g. Samuelson and Marks 2014). More sophisticated descriptions try to account for the creation of various types of long-term value, such as the balanced scorecard (Kaplan 2009) and sustainability accounting (Richardson 2013), both of which describe various non-financial metrics that are intended to be optimized. \n Case Studies of Platform Interventions This section presents two examples where large technology companies seem to have optimized for well-being, or a similar concept. These cases have been reconstructed through documentary evidence such as public posts, previously published interviews, financial reports, and research articles by employees. \n Facebook's Well-Being Optimization In late 2017 and early 2018, Facebook made a number of changes to their product explicitly designed to promote well-being. Facebook researchers Ginsberg and Burke (2017) wrote in a public post in December 2017: What Do Academics Say? Is Social Media Good or Bad for Well-Being? According to the research, it really comes down to how you use the technology. For example, on social media, you can passively scroll through posts, much like watching TV, or actively interact with friends-messaging and commenting on each other's posts. Just like in person, interacting with people you care about can be beneficial, while simply watching others from the sidelines may make you feel worse. (para. 7). This post cites a number of peer-reviewed studies on the well-being effects of social media, some of which were collaborations between Facebook researchers and academics. Ginsberg and Burke (2017) cite Verduyn et al.'s (2017) review paper on the effects of social media on well-being, which has an obvious resonance with Facebook's framing: passively using social network sites provokes social comparisons and envy, which have negative downstream consequences for subjective well-being.
In contrast, when active usage of social network sites predicts subjective well-being, it seems to do so by creating social capital and stimulating feelings of social connectedness. (Verduyn et al. 2017, p. 274) A close reading of posts around this time shows that Facebook developed a well-being proxy metric. A January 2018 post by Facebook's Chief Executive Officer notes that \"research shows that strengthening our relationships improves our well-being and happiness\" (Zuckerberg 2018, para. 2) and mentions well-being twice more, then switches to the phrase \"meaningful social interactions:\" I'm changing the goal I give our product teams from focusing on helping you find relevant content to helping you have more meaningful social interactions. (Zuckerberg 2018, para. 7) Relevance is a term of art in recommender systems, referring to user preferences as expressed through item clicks or ratings, and is increasingly understood as a simplistic objective (Jannach and Adomavicius 2016). The algorithmic change away from relevance was described by the head of the News Feed product: Today we use signals like how many people react to, comment on or share posts to determine how high they appear in News Feed. With this update, we will also prioritize posts that spark conversations and meaningful interactions between people. To do this, we will predict which posts you might want to interact with your friends about, and show these posts higher in feed (Mosseri 2018, para. 3) . Facebook created a well-being metric and assigned it as a goal to a product team, which incorporated it into an existing algorithmic objective function. This objective function was augmented by creating a model that uses existing data such as past user behavior and post content to predict whether a user will have a meaningful social interaction if shown any particular post. There is little public documentation of how meaningful social interactions are measured. The most detailed description is from the transcript of a call where Facebook reported earnings to investors, which explains that meaningful social interactions are measured through user surveys: So the thing that we're going to be measuring is basically, the number of interactions that people have on the platform and off because of what they're seeing that they report to us as meaningful…the way that we've done this for years is we've had a panel, a survey, of thousands of people who basically we asked, what's the most meaningful content that they had seen in the platform or they have seen off the platform. (Facebook 2018, p. 13) The resulting system is reconstructed in Fig. 1 . While there is no public account of the effects of the incorporation of the meaningful social interactions prediction model on the meaningful social interactions metric as measured by Facebook through user surveys, Facebook has reported reduced engagement on at least one product, suggesting that the meaningful social interactions objective was weighted strongly enough to cause significant changes in which items are presented to users: video is just a passive experience. To shift that balance, I said that we were going to focus on videos that encourage meaningful social interactions. And in Q4, we updated our video recommendations and made other quality changes to reflect these values. We estimate these updates decreased time spent on Facebook by roughly 5% in the fourth quarter. 
To put that another way: we made changes that reduced time spent on Facebook by an estimated 50 million hours every day to make sure that people's time is well spent. (Facebook 2018, p. 2)... \n YouTube's User Satisfaction Metrics John Doerr's Measure What Matters (2017) documents YouTube's multi-year effort to reach one billion hours of daily user watch time through interviews with Susan Wojcicki, Chief Executive Officer and Cristos Goodrow, Vice President of Engineering at YouTube (Doerr 2017, pp. 154-172) . Goodrow describes the inception of YouTube's recommendation system in 2011, and how he advocated to optimize for watch time instead of video views as: On a dedicated team named Sibyl, Jim McFadden was building a system for selecting \"watch next\" recommendations, aka related videos or \"suggestions.\" It had tremendous potential to boost our overall views. But were views what we really wanted to boost?... I sent a provocative email to my boss and the YouTube leadership team. Subject line: \"Watch time, and only watch time.\" It was a call to rethink how we measured success: \"All other things being equal, our goal is to increase [video] watch time.\"... Our job was to keep people engaged and hanging out with us. By definition, viewers are happier watching seven minutes of a ten-minute video (or even two minutes of a tenminute video) than all of a one-minute video. And when they're happier, we are, too. (Goodrow quoted in Doerr 2017, p. 162)... Goodrow's retelling includes user happiness and satisfaction as goals along with the more business-oriented engagement. For the purposes of this paper, I assume user happiness and satisfaction are analogous to well-being, but unlike the Facebook case, YouTube's public statements have not mentioned well-being. In accordance with the unified treatment of managerial and technical optimization proposed in this paper, Goodrow confirms that a team-level metric drove engineering decisions: Reaching one billion hours was a game of inches; our engineers were hunting for changes that might yield as little as 0.2 percent more watch time. In 2016 alone, they would find around 150 of those tiny advances. We'd need nearly all of them to reach our objective. (Goodrow quoted in Doerr 2017, p. 169) Yet watch time was not the only objective, and YouTube incorporated other changes to improve the quality of the product and the effects on users: In fact, we'd commit to some watch-time-negative decisions for the benefit of our users. For example, we made it a policy to stop recommending trashy, tabloidstyle videos-like \"World's Worst Parents,\" where the thumbnail showed a baby in a pot on the stove. Three weeks in, the move proved negative for watch time by half a percent. We stood by our decision because it was better for the viewer experience, cut down on click bait, and reflected our principle of growing responsibly. Three months in, watch time in this group had bounced back and actually increased. Once the gruesome stuff became less accessible, people sought out more satisfying content. (Goodrow quoted in Doerr 2017, p. 164) This was the beginning of a move away from strict maximization of time spent. Starting in 2015 YouTube began to incorporate user satisfaction metrics (Doerr 2017, p. 170) . As in the Facebook case, these are derived from surveys: we learned that just because a user might be watching content longer does not mean that they are having a positive experience. 
So we introduced surveys to ask users if they were satisfied with particular recommendations. With this direct feedback, we started fine-tuning and improving these systems based on this high-fidelity notion of satisfaction. (Google 2019, p. 21) These user satisfaction survey results were incorporated directly into the objectives of the YouTube recommendation system, as discussed in a recent YouTube technical paper: we first group our multiple objectives into two categories: 1) engagement objectives, such as user clicks, and degree of engagement with recommended videos; 2) satisfaction objectives, such as user liking a video on YouTube, and leaving a rating on the recommendation. (Zhao et al. 2019, p. 43) \n Analysis of Facebook and YouTube Cases The Facebook and YouTube cases are significant because they are examples of major platform operators explicitly saying that they have decided to monitor and optimize for a well-being proxy, operationalized at both the management and algorithmic levels. Facebook has provided a public justification for its meaningful social interaction metric in terms of prior research which suggests that active use of social media improves well-being while passive use decreases it. While this is far from a holistic measure of well-being, let alone community well-being, at least it connects to previous work in a clear way. Public statements from YouTube have not mentioned well-being, instead focusing on "responsibility" (Wojcicki 2019, para. 2) and user satisfaction as assessed through surveys. Explicit user surveys are an improvement on YouTube's previous identification of watch time with user happiness. Researchers report a negative correlation between TV watching and well-being that suggests there is something like an addiction mechanism involved: "individuals with incomplete control over, and foresight into, their own behavior watch more TV than they consider optimal for themselves and their wellbeing is lower than what could be achieved" (Frey et al. 2007, p. 283). Similar effects have been observed in social media use where addicted users "have typically attempted to cut down on social networking without success" (Andreassen 2015, p. 176). Google now publicly recognizes that maximizing watch time does not optimize for "positive" outcomes (Google 2019, p. 21). A more systematic conception of well-being would articulate what aspects of well-being matter to YouTube and why user satisfaction is a good proxy. Of course, well-being outcomes depend enormously on who a user is and what they watch. A user might learn valuable and fulfilling skills from how-to videos, become more politically engaged, consume worthwhile art, or they might be radicalized into violence (Ledwich and Zaitsev 2020). Another issue is that both companies are optimizing for individual outcomes: well-being but not necessarily community well-being. Community well-being "is more than an aggregate of individuals' satisfaction" (Sung and Phillips 2018, p. 65) and cannot be assessed simply by adding up the well-being of all individuals in the community. This is analogous to the classic problem of aggregating utilities in welfare economics (Foster and Sen 1997, p. 16). Conversely, optimizing for each person individually will not necessarily promote community well-being due to problems of externalities, collective action, and conflicting preferences (Baum 2020; Milano et al. 2019b).
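A toy numerical example (all numbers invented) illustrates one facet of the aggregation problem: a rising average can conceal harm to a smaller community.

```python
# Hypothetical satisfaction scores (0-10) before and after a product change,
# for a large majority group and a small minority community.
before = {"majority": [7.0] * 90, "minority": [7.0] * 10}
after  = {"majority": [7.6] * 90, "minority": [4.0] * 10}

def mean(xs):
    return sum(xs) / len(xs)

print(mean(before["majority"] + before["minority"]))  # 7.00 overall before
print(mean(after["majority"] + after["minority"]))    # 7.24 overall after: appears to improve
print(mean(after["minority"]))                        # 4.00: the minority community is worse off
```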
Attention to aggregates may also miss local problems, such as negative effects in a particular city or for a particular subgroup, or run into Simpson's paradox issues where the sign of the effect depends on the granularity of the groups studied (Kievit et al. 2013). For all these reasons, clarity on the definition of community or communities matters greatly. Perhaps the biggest weakness of these cases is that there is no record of consultation with the putative beneficiaries of these algorithmic changes, and no public evaluation of the results. Hopefully algorithmic interventions of this magnitude were informed by user research or some sort of consultative process, but none was reported. Presumably meaningful social interactions and user satisfaction were increased, but there has been no disclosure of how much. Absent also is any report of effects on any other components of well-being, such as feelings of social connectedness or life satisfaction, or even objective indicators like employment status. It's similarly unclear how these changes affected not just individual well-being but community well-being for different communities; there may even have been negative effects on certain types of users. Information about outcomes is especially important because the link between Facebook's meaningful interactions and well-being is theoretical, deduced from previous research into active and passive social media use, while YouTube has said their user satisfaction surveys are included in a "responsibility" metric (Bergen 2019, para. 10) and that they aim for "positive" experiences (Google 2019, p. 21) without providing any further explanation of their goals or results. Determining the actual effect of these large-scale interventions is itself a significant social science research effort, and if Facebook or YouTube have these answers, they have not been shared. This is algorithmic management, but not yet the algorithmic governance that the society-in-the-loop model envisions (Rahwan 2018). The reported business outcomes are also instructive, as both the Facebook and YouTube changes resulted in at least temporary reductions in engagement metrics. Facebook reports that the incorporation of a meaningful social interactions metric into their video product caused a 5% reduction in time spent, which was considered significant enough to be discussed with investors (Facebook 2018), but the longer-term effects are unclear. YouTube described changes that reduced watch time but also reports that watch time recovered over a time span of months as users changed their behavior. This demonstrates both that major corporations are willing to accept reductions in engagement to pursue social ends, and that the long-term business effects of incorporating well-being metrics are not necessarily negative. \n Generalization to Other Domains The Facebook and YouTube cases suggest the possibility of a general method for managing the well-being outcomes of commercial optimizing systems, which is the core contribution of this article. This section begins by arguing that some type of metric-driven community well-being optimization is not only useful but likely necessary for any AI system with broad social impacts, because individual user control will not be sufficient. It then shows how this general method could apply to diverse domains by working through potential applications to news recommendation and online shopping.
These hypothetical applications demonstrate the generality of a metrics-driven approach and illuminate further possibilities and challenges that shape the recommendations in this paper. \n User Control is not Sufficient for Community Well-Being This article recommends participatory processes to involve users and other stakeholders in metric-driven optimization for community well-being. A potential alternative is to provide increased user control directly, so that people can choose what is best for themselves. Many authors have pointed to the central role of user agency in the ethics of AI systems (Floridi and Cowls 2019), and in the important context of content ranking, Paraschakis (2017) has proposed "controls [that] enable users to adjust the recommender system to their individual moral standards" (p. 6). However, increasing user agency will not by itself solve the problem of ensuring good outcomes at the community level because many users will not customize the systems they use, and because individually good choices do not necessarily produce socially good outcomes. Any set of controls must necessarily be few enough to be humanly manageable. This restricts the number of dimensions that can be controlled and will make it difficult to express nuanced conceptions of well-being. Natural language interfaces (e.g. Yu et al. 2019) may allow the expression of more complicated concepts. Nonetheless, users will probably leave most parameters at default settings, which means that the defaults must promote well-being. Even if all users in fact succeeded in directing an AI system to do exactly as desired, this would not necessarily result in the best community outcomes. As Ostrom (2000) has articulated, individual action does not succeed in producing social goods without the concurrent evolution of social norms. These challenges of collective action have been explored in the context of AI systems from the perspective of social choice theory (Baum 2020) and multi-stakeholder recommendation systems (Milano et al. 2019a). Further, existing societal inequalities can constrain users' ability to exploit algorithmically provided choices (Robertson and Salehi 2020), for example due to a lack of information or the cost burden of choosing the "best" option. User control is essential, perhaps even necessary for community well-being, but it is not sufficient. Collective algorithmic governance is needed for much the same reasons societal governance is needed, and appropriate well-being metrics are useful in algorithmic governance just as they are in public policy. \n Diverse News Recommendations News recommenders are the algorithms that choose, order, and present journalism content to users. The potential application of community well-being metrics to these systems illustrates the challenges around defining a community and choosing metrics. News recommendation algorithms can have societal consequences (Helberger 2019) but it is not clear how to manage such algorithms for community well-being. To begin with, there is no single community that consumes news, but many overlapping communities organized around different geographic regions and different topics (Reader and Hatcher 2011, p. 3). Each of these communities may have different concerns at any given moment. Incorporating social network analysis or country-specific data can improve the performance of recommender systems as measured by traditional relevance metrics (Chen et al. 2018; Roitero et al.
2020), but the question of how a recommender system impacts pre-existing communities, e.g. a city, has not been explored. Conversely, existing community well-being indicators have not been designed to capture the consequences of news recommender systems. One well-developed concern with news recommenders is exposure diversity, meaning the range of sources, topics, and viewpoints that each person is algorithmically presented with (Bernstein et al. 2020). Taking political theory as a starting point, Helberger et al. (2018) identify liberal, deliberative, and radical approaches to the design of diverse news recommenders. Consider the problem of designing a national news recommender that supports a deliberative view of diversity, one in which: exposure to diverse viewpoints is considered valuable because it helps citizens develop more informed opinions and less polarized, more tolerant attitudes towards those with whom they disagree … it is conceivable to design metrics that would focus, for example, on user engagement with opposing political views, cross-ideological references in public debates or social media connections between people who represent different ideological positions. (Helberger et al. 2018, p. 195) Diversity metrics could be constructed from algorithmic methods to estimate the ideological position of users or posts (Budak et al. 2016; Garimella and Weber 2017). These give a measure of distance between any two items, which could then be used to define the diversity of a set of recommended items according to various standard formulas such as the average distance between any pair (Kunaver and Požrl 2017). Such a metric would capture the output of the system, not its effects on users. Facebook and YouTube use user surveys to tie algorithmic changes to human outcomes. It may be possible to establish a causal connection from news diversity metrics to existing well-being metrics such as voter turnout, and Facebook has already demonstrated a substantial effect on voter turnout by presenting users with personalized messages (Bond et al. 2012). It would be better to direct the optimization process towards more closely related outcomes like polarization or tolerance that are not included in current well-being frameworks. Directly measuring these outcomes is crucial because exposure to diverse opinions can actually increase polarization (Bail et al. 2018). Polarization and tolerance outcomes are also explicitly relational, and thus indicate aspects of community well-being not captured in individual-level metrics. \n Low Carbon Shopping Large-scale product recommender systems have profound influence over what is purchased. One reason for this is that it is not possible to navigate millions of possible products without them. Rolnick et al. (2019) have proposed using these systems to direct consumers to lower-carbon alternatives. This possibility highlights two problems that may arise in the course of modifying AI objective functions: obtaining the data needed to evaluate a metric and understanding the business impacts of such a change. Climate change is a key issue for many communities (Fazey et al. 2018) and carbon emissions appear in a number of community well-being frameworks (Bagnall et al. 2017). Carbon emissions from recommended products are also a key example of AI system side effects on non-users.
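As a sketch of what a carbon-aware nudge could look like at ranking time, the toy re-ranker below subtracts a carbon penalty from a relevance score. The products, footprint values, and penalty weight are hypothetical, and it assumes a per-product footprint estimate is already available, which is precisely the data problem discussed next.

```python
def rerank_with_carbon(products, relevance_weight=1.0, carbon_weight=0.3):
    """Re-rank candidate products by relevance minus a carbon penalty.

    `products` is a list of dicts with a predicted relevance score in [0, 1]
    and an estimated carbon footprint in kg CO2e, normalized here by the
    largest footprint in the candidate set so the two terms are comparable.
    """
    max_footprint = max(p["kg_co2e"] for p in products) or 1.0
    def score(p):
        return (relevance_weight * p["relevance"]
                - carbon_weight * p["kg_co2e"] / max_footprint)
    return sorted(products, key=score, reverse=True)

candidates = [
    {"name": "standard kettle", "relevance": 0.82, "kg_co2e": 40.0},
    {"name": "efficient kettle", "relevance": 0.78, "kg_co2e": 18.0},
]
# With these weights the lower-carbon item moves to the top despite slightly lower relevance.
print([p["name"] for p in rerank_with_carbon(candidates)])
```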
From a technical point of view, carbon footprint can be incorporated using multi-stakeholder recommendation algorithms that explicitly consider the effect on parties other than the user (Abdollahpouri et al. 2020) . This is possible only if the carbon footprint of each product is available. There are now established methods to estimate product carbon footprints (BSI 2011; ISO 2018) but there are no product carbon footprint (PCF) databases comprehensive enough to cover the millions of different products sold by a large online retailer. However, it may be possible to use machine learning methods to estimate the PCF values of an entire product portfolio starting from a comparatively small database of examples (Meinrenken et al. 2012) . Robust, scalable product carbon footprint estimation could be a key enabling technology for low-carbon commerce and, ultimately, long-term community well-being. A commercial operator will want to know the business effects before any such system is implemented, and it is tempting to evaluate the potential revenue effect of incorporating a carbon term into the objective function by testing against historical purchase data. Such back-testing will show that optimizing for anything other than profit must drive the system away from a profit maximum, but offline estimates will not give the full story because both consumer and producer behavior may change if carbon footprint starts to affect product ranking. Users might appreciate being informed of low-carbon alternatives and buy more from that retailer or pay a premium for lower carbon items, while producers will have an incentive to sell lower carbon products. The case of organic food demonstrates the existence of such market dynamics, as it is 22-35% more profitable globally than conventional alternatives even though it is typically more expensive to produce (Crowder and Reganold 2015) . \n Recommendations The incorporation of community well-being metrics into both managerial and algorithmic optimization is a very general method for managing the effects of commercial optimizing systems, yet good management is only part of good governance. This section synthesizes the analysis and discussion above with previous work on algorithmic governance, participatory design, best use of metrics, and corporate stakeholder engagement to make recommendations for fostering community well-being in AI systems in ways that are both effective and accepted as legitimate. It also identifies gaps and unknowns where future research would be valuable. \n Identifying and Involving Communities An attempt to optimize for community well-being is an attempt to benefit a particular group of people, who need to have a say in what is done on their behalf. In some cases it would be reasonable to say that every user of the system (potentially billions of people) is a member of the community, but that would preclude the management of local outcomes such as a system's effects on the residents of a particular city, or on people of a certain age, or workers in particular professions. Non-users can be affected as well, as in environmental externalities or a navigation system that routes cars to a formerly quiet street. Each view of community is a choice about who counts, and this choice should be made explicit before any intervention begins. Once a community is identified, there are many approaches to try to integrate its members into the process of selecting and using metrics. 
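Looking back at the low-carbon shopping discussion above, one simple way a carbon term could enter a recommender's objective, sketched here purely as an illustration and not drawn from any deployed system, is a penalized re-ranking score that trades predicted relevance off against an estimated product carbon footprint (PCF). The item fields, the weight `carbon_weight`, and the assumption that a PCF estimate exists for every candidate are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    item_id: str
    relevance: float   # model-predicted relevance or purchase probability
    pcf_kg: float      # estimated product carbon footprint, kg CO2e (assumed available)

def carbon_aware_score(c: Candidate, carbon_weight: float) -> float:
    # Multi-objective score: relevance minus a weighted carbon penalty.
    # carbon_weight = 0.0 recovers the original relevance-only ranking,
    # which makes the change easy to roll out and test incrementally.
    return c.relevance - carbon_weight * c.pcf_kg

def rerank(candidates, carbon_weight=0.01, top_k=3):
    return sorted(candidates, key=lambda c: carbon_aware_score(c, carbon_weight),
                  reverse=True)[:top_k]

if __name__ == "__main__":
    catalog = [
        Candidate("kettle-standard", relevance=0.92, pcf_kg=40.0),
        Candidate("kettle-efficient", relevance=0.90, pcf_kg=18.0),
        Candidate("kettle-premium", relevance=0.95, pcf_kg=65.0),
    ]
    # The highest-relevance item drops behind a lower-carbon alternative.
    for c in rerank(catalog, carbon_weight=0.01):
        print(c.item_id, round(carbon_aware_score(c, 0.01), 3))
```

Consistent with the caveat above, back-testing such a score against historical purchases would only show the mechanical relevance loss; whether users and producers respond by favoring lower-carbon items is an empirical question that offline data cannot answer.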
Participatory design is an orientation and a set of practices that attempts to actively involve all stakeholders in a system design process (Simonsen and Robertson 2012). It is a promising framework for algorithmic governance. The WeBuildAI method (Lee et al. 2019) demonstrates what participatory design of metrics might look like. Researchers worked with a food-delivery non-profit to design an algorithm to match donated food with volunteer drivers and local food distribution charities. Stakeholders from each of these groups worked with researchers to build quantitative models of their preferred trade-offs between factors such as driver travel time, time since last donation, neighborhood poverty level, etc. At run time, this system ranks the possible matches for each donation according to the models representing the preferences of each stakeholder, with the final result chosen through a ranked-choice voting rule (a toy sketch of this kind of aggregation appears further below). Future work could investigate participatory metric design in the context of a large commercial platform. There are both instrumental and political goals when attempting to integrate communities into the selection and use of metrics. Without engaging the community, it is not possible to know which aspects of well-being matter most to them and how serious these issues are, and therefore how to make trade-offs. Engagement is also necessary for credibility. When choosing community indicators, "most communities consider input by its residents and others to be vital; it builds support for the use of indicators as well as help vest those most impacted by subsequent actions in decision-making processes" (Sung and Phillips 2018, p. 73). In the context of commercial systems it will also be important to draw on the experience of corporate stakeholder engagement efforts such as those found in sustainability reporting (GSSB 2016; Manetti 2011). \n Choosing Metrics Aside from the well-known issues with using metrics in a management context generally (Jackson 2005), metrics pose a problem for AI systems in particular because most AI algorithms are based on strongly optimizing a narrow objective (Thomas and Uminsky 2020). Poor use of metrics can result in a damaging emphasis on short-term outcomes, manipulation and gaming, and unwanted side effects (Jackson 2005; Thomas and Uminsky 2020). Even a successful metric cannot remain static, as the structure of the world it measures is constantly changing. In addition, there are many domains without a clear consensus on well-being goals, necessitating a process of normative deliberation before metrics can be chosen. The following issues should be considered in the choice of metrics: \n Deciding What to Measure In many cases existing well-being metrics will not be directly usable because they are too expensive to collect at scale or do not readily apply in the company's domain. These issues drove Facebook's substitution of meaningful social interactions for general measures of user well-being. Creating a custom metric is challenging because community well-being is a theoretical construct, not an observable property, and there may be misalignment between the designer's intentions and what is actually measured. For example, decreasing polarization measures may just indicate that minority voices have been effectively silenced. The particular well-being aspect of interest must first be "operationalized" and tested for reliability and validity (Jacobs and Wallach 2019).
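Returning to the WeBuildAI example described at the start of this section, the toy sketch below shows what ranking candidate matches with several stakeholder models and combining their rankings through a positional (Borda-style) rule might look like. It is a loose reconstruction under stated assumptions, not the actual WeBuildAI implementation: the stakeholder weights, feature names, and the specific choice of a Borda count (one ranked-choice rule among several) are all invented for illustration.

```python
# Each stakeholder model is a linear scoring function over match features.
# In WeBuildAI such models were built with real stakeholders; the numbers
# below are made up purely to show the mechanics.
STAKEHOLDER_MODELS = {
    "donor":     {"travel_time": -0.5, "time_since_last_donation": 0.30, "poverty_level": 0.20},
    "driver":    {"travel_time": -0.9, "time_since_last_donation": 0.05, "poverty_level": 0.05},
    "recipient": {"travel_time": -0.1, "time_since_last_donation": 0.40, "poverty_level": 0.50},
}

def score(model, match):
    return sum(weight * match[feature] for feature, weight in model.items())

def borda_aggregate(models, matches):
    """Rank matches under each stakeholder model, then sum Borda points.

    A match ranked first by a model receives len(matches) - 1 points from it,
    the last-ranked match receives 0; the match with the most points overall wins.
    """
    n = len(matches)
    points = {m["name"]: 0 for m in matches}
    for model in models.values():
        ranked = sorted(matches, key=lambda m: score(model, m), reverse=True)
        for position, match in enumerate(ranked):
            points[match["name"]] += (n - 1) - position
    return max(points, key=points.get), points

if __name__ == "__main__":
    candidate_matches = [
        {"name": "charity_A", "travel_time": 0.2, "time_since_last_donation": 0.9, "poverty_level": 0.6},
        {"name": "charity_B", "travel_time": 0.8, "time_since_last_donation": 0.4, "poverty_level": 0.9},
        {"name": "charity_C", "travel_time": 0.1, "time_since_last_donation": 0.2, "poverty_level": 0.3},
    ]
    winner, tally = borda_aggregate(STAKEHOLDER_MODELS, candidate_matches)
    print(winner, tally)
```

The positional rule is a design choice rather than a requirement; a different voting rule could be swapped in without changing the surrounding structure, which is part of what makes this pattern transferable to metric selection on a larger platform.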
\n Long-Term Outcomes If a metric is evaluated only over the short term, it may lead to poor longer-term outcomes. As the YouTube case demonstrates, a video platform that tries to maximize user watch time may encourage binging behavior where users eventually regret the time they spend. While effective AI optimization requires frequent feedback, it is critical to pick shorter-term metrics that are thought to drive longer-term outcomes (Lalmas and Hong 2018). \n Gaming Any measure that becomes a target will change meaning as people change their behavior, a very general problem that is sometimes known as Goodhart's law (Manheim and Garrabrant 2018). This is particularly relevant to large platforms that must defeat adversarial efforts to gain exposure for financial or political ends. While there are emerging methods to use causal inference to design metrics that resist gaming (Miller et al. 2019), a more robust solution is to continuously monitor and change the metrics in use. \n Dynamism The metrics employed need to be able to change and adapt, a property that Jackson (2005) names dynamism. This is necessary because of gaming and other behavior change in response to metrics, but more importantly, the world can and does change; at the onset of the COVID-19 pandemic many existing machine learning models stopped working (Heaven 2020). Dynamism also avoids the serious problems that can arise from over-optimization for a single objective, such as a robot which injures humans in an attempt to fetch a coffee more quickly (Russell 2019). In the context of contemporary commercial optimization, there are always humans supervising and operating the AI system, and they are free to change the objective function as needed. \n Normative Uncertainty Catalogs such as IEEE 7010 (Schiff et al. 2020) provide a long list of consensus metrics, but not all of them will correspond to community needs, and not all AI systems can be effectively evaluated using metrics originally designed for public policy use. In short, many systems will face a lack of consensus around what a "good" outcome would be. Appropriate values for AI systems cannot be derived from first principles but must be the result of societal deliberation (Gabriel 2020), which again underscores the necessity for participatory processes. \n Evaluating Outcomes It may be very challenging to determine the actual well-being effects of incorporating a metric into an optimization process. Facebook uses ongoing user panels to count meaningful social interactions, but this is a narrow facet of user well-being, let alone community well-being. They could use broader well-being instruments such as a life satisfaction survey question, but it would be difficult to assess the causal contribution of Facebook use to any changes. In other cases, such as the diverse news recommender, pre-existing well-being indicators would not apply, so assessing societal impact would require the creation and validation of new community well-being metrics. Outcome evaluation at scale is essentially corporate social science. The IEEE 7010 Recommended Practice Standard for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-Being proposes what amounts to a difference-in-differences design between users and non-users before and after an algorithmic change (Schiff et al. 2020). This is a promising approach, but there do not seem to be any published examples.
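To make that evaluation design concrete, the following is a minimal sketch of the difference-in-differences comparison suggested by this reading of IEEE 7010: compare how a well-being measure changes before versus after an algorithmic rollout for exposed users relative to a comparison group. The numbers, group labels, and survey item are invented for illustration, and a real analysis would also need standard errors, covariates, and a defensible parallel-trends argument.

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences point estimate.

    Each argument is a list of individual-level well-being scores
    (e.g. a 0-10 life-satisfaction item) measured before or after the
    algorithmic change, for users exposed to it ('treated') or not ('control').
    """
    mean = lambda xs: sum(xs) / len(xs)
    treated_change = mean(treated_post) - mean(treated_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treated_change - control_change

if __name__ == "__main__":
    # Invented survey scores purely to show the arithmetic.
    treated_pre  = [6.1, 5.8, 6.4, 6.0]
    treated_post = [6.6, 6.2, 6.9, 6.3]
    control_pre  = [6.0, 5.9, 6.2, 6.1]
    control_post = [6.1, 6.0, 6.3, 6.2]
    # Treated improved by 0.425 points, controls by 0.1, so the estimated
    # effect attributable to the change is 0.325 points.
    print(round(did_estimate(treated_pre, treated_post, control_pre, control_post), 3))
```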
\n Business Implications For commercial AI systems, metrics-driven changes must also integrate legitimate business concerns such as the cost of implementation and the effects on business outcomes. Although a naïve analysis of multi-objective optimization suggests that considering anything other than revenue can only reduce revenue, this assumes everything else is equal. In reality, there are complex secondary effects, such as changes in user and supplier behavior. YouTube's experience demoting clickbait videos is a documented case where doing the responsible thing led to a short-term decrease in the primary watch time metric, but then a long-term increase. It is difficult to predict the financial effects of incorporating well-being into optimization. Business and social objectives may turn out to be aligned, but this cannot be expected to be true as a rule. While ethical outcomes can sometimes be achieved through changes to optimization goals, there are also situations that could conceivably require avoiding features, products, or business models altogether (Barocas et al. 2020). Case studies are one promising avenue for progress on the problem of uncertain business outcomes. If companies are already incorporating well-being metrics into their management and algorithms, then documenting these cases will let others learn from their experiences, develop the field, and normalize the idea that companies should proactively manage the effects of their optimizers. This underscores the need for transparency around work that is explicitly designed to improve the lives of great numbers of people. \n Conclusion This paper has explored the integration of community well-being metrics into commercially operated optimizing systems. Community well-being is an attractive goal because it is well-developed in public policy contexts and practically measurable. At least two large technology companies, Facebook and YouTube, have explicitly modified their objective functions in pursuit of well-being, demonstrating the practicality of this approach. There are still a number of weaknesses in the interventions that Facebook and YouTube have undertaken, at least in terms of what has been reported publicly. The community that these interventions are intended to serve has not been well defined; rather, these metrics and interventions are oriented towards the individual level and do not account for existing communities such as cities or discussion groups. It is not clear if or how users were engaged in selecting the meaningful social interactions and user satisfaction metrics; there is no report of the outcomes either in terms of these metrics or with respect to broader well-being metrics; and although both companies reported reduced short-term engagement, the broader business effects have not been discussed. However incomplete, the Facebook and YouTube cases suggest that the optimization of community well-being metrics may be a powerful general method for managing the societal outcomes of commercial AI systems. The same methods could be applied to many other types of systems, such as a news recommender system that incorporates measures of content diversity in an attempt to increase tolerance and reduce polarization, or an online shopping platform that uses product-level estimates of carbon footprint to steer users toward more environmentally friendly purchases.
Although many scholars and critics have stressed the importance of increased user control over AI systems, no amount of user control can replace appropriate well-being metrics due to issues of collective action and the need for reasonable defaults. An analysis of the above cases suggests that the following multi-step process may be effective:
1. Identify a community to define the scope of action. In online settings this may be a challenging decision.
2. Select a well-being metric, perhaps from existing frameworks. This stage frames the problem to be solved in concrete terms, so it may be where community involvement matters most.
3. Use this metric as a performance measure for the team building and operating the system.
4. Directly translate the metric into code as a modification to an algorithmic objective function, or use these measured outcomes to evaluate more general design changes.
5. Evaluate the results, in terms of actual human outcomes, and adjust accordingly. This may require adjusting the chosen metric in response to changing conditions, or if it is found to be causing side effects of its own.
6. Require transparency throughout to make participation possible and to hold companies accountable to the communities who are meant to be served by this process.
Fig. 1 A reconstruction of Facebook's use of meaningful social interactions circa 2018. Well-being effects are unobserved because they happen outside of user interactions with Facebook. \n Table 1 Indicators from the OECD Better Life Index (Durand 2015). Each of these has a specific statistical definition and has been collected across OECD countries since 2011.
Housing: Dwellings without basic facilities; Housing expenditure; Rooms per person
Income: Household net adjusted disposable income; Household net wealth
Jobs: Labor market insecurity; Employment rate; Long term unemployment rate
Community: Quality of support network
Education: Educational attainment; Student skills; Years in education
Environment: Air pollution; Water quality
Civic engagement: Stakeholder engagement for developing regulations; Voter turnout
Health: Life expectancy; Self-reported health
Life Satisfaction: Life satisfaction
Safety: Feeling safe walking alone at night; Homicide rate
Work-life balance: Employees working very long hours; Time devoted to leisure and personal care
\n\t\t\t International Journal of Community Well-Being (2020) 3:443-463", "date_published": "n/a", "url": "n/a", "filename": "Stray2020_Article_AligningAIOptimizationToCommun.tei.xml", "abstract": "This paper investigates incorporating community well-being metrics into the objectives of optimization algorithms and the teams that build them. It documents two cases where a large platform appears to have modified its system to this end. Facebook incorporated \"well-being\" metrics in 2017, while YouTube began integrating \"user satisfaction\" metrics around 2015. Metrics tied to community well-being outcomes could also be used in many other systems, such as a news recommendation system that tries to increase exposure to diverse views, or a product recommendation system that optimizes for the carbon footprint of purchased products.
Generalizing from these examples and incorporating insights from participatory design and AI governance leads to a proposed process for integrating community well-being into commercial AI systems: identify and involve the affected community, choose a useful metric, use this metric as a managerial performance measure and/or an algorithmic objective, and evaluate and adapt to outcomes. Important open questions include the best approach to community participation and the uncertain business effects of this process.", "id": "c8524339f8123ad1b495c8961969eeb3"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": [], "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "Unprecedented-Technological-Risks.tei.xml", "abstract": "Over the next few decades, the continued development of dual-use technologies will provide major benefits to society. They will also pose significant and unprecedented global risks, including risks of new weapons of mass destruction, arms races, or the accidental deaths of billions of people. Synthetic biology, if more widely accessible, would give terrorist groups the ability to synthesise pathogens more dangerous than smallpox; geoengineering technologies would give single countries the power to dramatically alter the earth's climate; distributed manufacturing could lead to nuclear proliferation on a much wider scale; and rapid advances in artificial intelligence could give a single country a decisive strategic advantage. These scenarios might seem extreme or outlandish. But they are widely recognised as significant risks by experts in the relevant fields. To safely navigate these risks, and harness the potentially great benefits of these new technologies, we must proactively provide research, assessment, monitoring, and guidance, on a global level. This report gives an overview of these risks and their importance, focusing on risks of extreme catastrophe, which we believe to be particularly neglected. The report explains why market and political circumstances have led to a deficit of regulation on these issues, and offers some policy proposals as starting points for how these risks could be addressed. \n September 2014 \n Fu ture of H umanit y Institute \n UNIVERSITY OF OXFORD \n Unprecedented Technological Risks Synthetic biology is allowing researchers to move from reading genes, to writing them, creating the possibility of both life-saving treatments and designer pandemics.", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Vincent Conitzer", "Walter Sinnott-Armstrong", "Jana Schaich Borg", "Yuan Deng", "Max Kramer"], "title": "Moral Decision Making Frameworks for Artificial Intelligence", "text": "Introduction As deployed AI systems become more autonomous, they increasingly face moral dilemmas. An often-used example is that of a self-driving car that faces an unavoidable accident, but has several options how to act, with different effects on its passengers and others in the scenario. (See, for example, Bonnefon et al. (2016) .) But there are other examples where AI is already used to make decisions with lifeor-death consequences. Consider, for example, kidney exchanges. These cater to patients in need of a kidney that have a willing live donor whose kidney the patient's body would reject. In this situation, the patient may be able to swap donors with another patient in the same situation. (More complex arrangements are possible as well.) 
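To make the kidney-exchange setting above slightly more concrete, the following toy sketch finds the simplest kind of arrangement just described: two-way swaps in a donor-patient compatibility structure. It is an illustration only; the compatibility data are invented, and deployed exchanges solve much richer optimization problems (longer cycles, chains, priorities) with the kinds of algorithms cited next.

```python
from itertools import combinations

# Each entry: pair_id -> set of pair_ids whose donor is compatible with this
# pair's patient. Invented toy data; real compatibility depends on blood type,
# crossmatch tests, and more.
COMPATIBLE_DONORS_FOR = {
    "pair1": {"pair2", "pair3"},
    "pair2": {"pair1"},
    "pair3": {"pair4"},
    "pair4": {"pair3"},
}

def two_way_swaps(compat):
    """Return all pairs (a, b) where a's donor can give to b's patient and vice versa."""
    swaps = []
    for a, b in combinations(compat, 2):
        if b in compat[a] and a in compat[b]:
            swaps.append((a, b))
    return swaps

if __name__ == "__main__":
    print(two_way_swaps(COMPATIBLE_DONORS_FOR))  # [('pair1', 'pair2'), ('pair3', 'pair4')]
```

Choosing which of the feasible swaps to actually carry out when they compete for the same pairs is itself an optimization problem, which is where the algorithms mentioned in the next sentence come in.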
For these exchanges, algorithms developed in the AI community are already used to determine which patients receive which kidneys (see, e.g., Dickerson and Sandholm (2015)). While it may be possible to find special-purpose solutions for moral decision making in these domains, in the long run there is a need for a general framework that an AI agent can use to make moral decisions in a wider variety of contexts. In this paper, we lay out some possible roadmaps for arriving at such a framework. \n Motivation Most AI research is conducted within straightforward utilitarian or consequentialist frameworks, but these simple approaches can lead to counterintuitive judgments from an ethical perspective. For example, most people consider it immoral to harvest a healthy patient's organs to save the lives of two or even five other patients. Research in ethics and moral psychology elucidates our moral intuitions in such examples by distinguishing between doing and allowing, emphasizing the role of intent, applying general rules about kinds of actions (such as "Don't kill"), and referring to rights (such as the patient's) and roles (such as the doctor's). Incorporating these morally relevant factors among others could enable AI to make moral decisions that are safer, more robust, more beneficial, and acceptable to a wider range of people. 1 To be useful in the development of AI, our moral theories must provide more than vague, general criteria. They must also provide an operationalizable, and presumably quantitative, theory that specifies which particular actions are morally right or wrong in a wide range of situations. This, of course, also requires the agent to have a language in which to represent the structure of the actions being judged (Mikhail, 2007) and the morally relevant features of actions (Gert, 2004), along with rules about how these features interact and affect moral judgments. Moral theory and AI need to work together in this endeavor. Multiple approaches can be taken to arrive at general-purpose procedures for automatically making moral decisions. One approach is to use game theory. Game-theoretic formalisms are widely used by artificial intelligence researchers to represent multiagent decision scenarios, but, as we will argue below, game theory's solution concepts and possibly even its basic representation schemes need to be extended in order to provide guidance on moral behavior. Another approach is to use machine learning. We can use the moral philosophy and psychology literatures to identify features of moral dilemmas that are relevant to the moral status of possible actions described in the dilemmas. Human subjects can be asked to make moral judgments about a set of moral dilemmas in order to obtain a labeled data set. Then, we can train classifiers based on this data set and the identified features. (Compare also the top-down vs. bottom-up distinction in automated moral decision making, as described by Wallach and Allen (2008).) We will discuss these two approaches in turn. In this paper, we will take a very broad view of what constitutes a moral dilemma (contrast Sinnott-Armstrong (1988)). As a simple example, consider the trust game (Berg et al., 1995). In the trust game, player 1 is given some amount of money, say $100. She 2 is then allowed to give any fraction of this money back to the experimenter, who will then triple this returned money and give it to player 2.
Finally, player 2 may return any fraction of the money he has received to player 1. For example, player 1 might give $50 back, so that player 2 receives 3 • $50 = $150, who then might give $75 back, leaving player 1 with $50 + $75 = $125. The most straightforward game-theoretic analysis of this game assumes that each player, at any point in the game, is interested only in maximizing the amount of money she herself receives. Under this assumption, player 2 would never have any reason to return any money to player 1. Anticipating this, player 1 would not give any money, either. However, despite this analysis, human subjects playing the trust game generally do give money in both roles (Berg et al., 1995). One likely reason is that many people feel it is wrong for player 2 not to give any money back after player 1 has decided to give him some (and, when in the role of player 1, they expect player 2 not to take such a wrong action). This case study illustrates a general feature of moral reasoning. Most people consider not only the consequences of their actions but also the setting in which they perform their actions. They ask whether an act would be unfair or selfish (because they are not sharing a good with someone who is equally deserving), ungrateful (because it harms someone who benefited them in the past), disloyal (by betraying a friend who has been loyal), untrustworthy (because it breaks a promise), or deserved (because the person won a competition or committed a crime). In these ways, moral reasoners typically look not only to the future but also to the past. Of course, not everyone will agree about which factors are morally relevant, and even fewer people will agree about which factor is the most important in a given conflict. For example, some people will think that it is morally wrong to lie to protect a family member, whereas others will think that lying in such circumstances is not only permitted but required. Nonetheless, a successful moral AI system does not necessarily have to dictate one true answer in such cases. It may suffice to know how much various groups value different factors or value them differently. Then, when we code moral values into AI, we would have the option of either using the moral values of a specific individual or group-a type of moral relativism-or giving the AI some type of social-choice-theoretic aggregate of the moral values that we have inferred (for example, by letting our models of multiple people's moral values vote over the relevant alternatives, or using only the moral values that are common to all of them). This approach suggests new research problems in the field of computational social choice (see, e.g., Brandt et al. (2013, 2015)). Rossi (2016) has described related, but distinct social choice problems where (not necessarily moral) preferences are either aggregated together with a moral ranking of all the alternatives, or the preferences are themselves ranked according to a moral ordering (see also Greene et al. (2016)). \n Abstractly Representing Moral Dilemmas: A Game-Theoretic Approach For us humans, the most natural way to describe a moral dilemma is to use natural language. However, given the current state of AI in general and of natural language processing in particular, such verbal descriptions will not suffice for our purposes. Moral dilemmas will need to be more abstractly represented, and as is generally the case in AI research, the choice of representation scheme is extremely important.
In this section, we consider an approach to this problem inspired by game theory. \n Game-Theoretic Representation Schemes Game theory (see, e.g., Fudenberg and Tirole (1991) ) concerns the modeling of scenarios where multiple parties (henceforth, agents) have different interests but interact in the same domain. It provides various natural representation schemes for such multiagent decision problems. Scenarios described in game theory involve sequences of actions that lead to different agents being better or worse off to different degrees. Since moral concepts-such as selfishness, loyalty, trustworthiness, and fairness-often influence which action people choose to take, or at least believe they should take, in such situations, game theory is potentially a good fit for abstractly representing moral dilemmas. One of the standard representation schemes in game theory is that of the extensive form, which is a generalization of the game trees studied in introductory AI courses. The extensive-form representation of the trust game (or rather, a version of it in which player 1 can only give multiples of $50 and player 2 only multiples of $100) is shown in Figure 1 . Each edge corresponds to an action in the game and is labeled with that action. Each bottom (leaf) node corresponds to an outcome of the game and is labeled with the corresponding payoffs for player 1 and player 2, respectively. We will turn to the question of whether such representation schemes suffice to model moral dilemmas more generally shortly. First, we discuss how to solve such games. \n Moral Solution Concepts The standard solution concepts in game theory assume that each agent pursues nothing but its own prespecified utility. If we suppose in the trust game that each player just seeks to maximize her own monetary payoff, then game theory would prescribe that the second player give nothing back regardless of how much he receives, and consequently that the first player give nothing. 3 However, this is not the behavior observed in experiments with human subjects. Games that elicit human behavior that does not match game-theoretic analyses, such as the trust game, are often used to criticize the game-theoretic model of behavior and have led to the field of behavioral game theory (Camerer, 2003) . While in behavioral game theory, attention is often drawn to the fact that humans are not infinitely rational and cannot be expected to perform complete game-theoretic analyses in their heads, it seems that this is not the primary reason that agents behave differently in the trust game, which after all is quite simple. Rather, it seems that the simplistic game-theoretic solution fails to account for ethical considerations. In traditional game theory's defense, it should be noted that an agent's utility may take into account the welfare of others, so it is possible for altruism to be captured by a game-theoretic account. However, what is morally right or wrong also seems to depend on past actions by other players. Consider, for example, the notion of betrayal: if another agent knowingly enables me either to act to benefit us both, or to act to benefit myself even more while significantly hurting the other agent, doing the latter seems morally wrong. This, in our view, is one of the primary things going on in the trust game. 
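The purely self-interested analysis referred to above can be made explicit. The sketch below encodes the restricted trust game from the text (gifts in $50 steps, returns in $100 steps, gifts tripled) and applies backward induction under the assumption that each player cares only about her own money; it reproduces the give-nothing, return-nothing prediction that human behavior contradicts. The dictionaries here are a deliberately minimal stand-in for a full extensive-form data structure.

```python
def trust_game_backward_induction(endowment=100, give_step=50, return_step=100):
    """Subgame-perfect outcome when each player maximizes only her own money."""
    plan = {}        # player 2's best reply to every possible gift
    outcomes = {}    # player 1's resulting payoff for every gift
    for give in range(0, endowment + 1, give_step):
        received = 3 * give                          # the experimenter triples the gift
        # Player 2's move: return the amount that maximizes his own payoff.
        best_return = max(range(0, received + 1, return_step),
                          key=lambda r: received - r)
        plan[give] = best_return
        outcomes[give] = endowment - give + best_return
    best_give = max(outcomes, key=outcomes.get)      # player 1 anticipates player 2's plan
    return best_give, plan[best_give], outcomes[best_give]

if __name__ == "__main__":
    # Prints (0, 0, 100): the selfish prediction is give nothing, return nothing,
    # which is exactly what human subjects typically do not do.
    print(trust_game_backward_induction())
```

The moral solution concepts discussed next start from exactly this baseline and modify it.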
The key insight is that to model this phenomenon, we cannot simply first assess the agents' other-regarding preferences, include these in their utilities at the leaves of the game, and solve the game (as in the case of pure altruism). Rather, the analysis of the game (solving it) must be intertwined with the assessment of whether an agent morally should pursue another agent's well-being. This calls for novel moral solution concepts in game theory. We have already done some conceptual and algorithmic work on a solution concept that takes such issues into account (Letchford et al., 2008). This solution concept involves repeatedly solving the game and then modifying the agents' preferences based on the solution. The modification makes it so that (for example) player 2 wants to ensure that player 1 receives at least what she could have received in the previous solution, unless this conflicts with player 2 receiving at least as much as he would have received in the previous solution. For example, in the trust game, player 2's preferences are modified so that he values player 1 receiving back at least what she gave to player 2. \n What Is Left Out & Possible Extensions The solution concept from Letchford et al. (2008) is defined only in very restricted settings, namely 2-player perfect-information 4 games. One research direction is to generalize the concept to games with more players and/or imperfect information. Another is to define different solution concepts that capture other ethical concerns. Zooming out, this general approach is inherently limited by the aspects of moral dilemmas that can be captured in game-theoretic representations. While we believe that the standard representation schemes of game theory can capture much of what is relevant, they may not capture everything that is relevant. For example, in moral philosophy, a distinction is often made between doing harm and allowing harm. Consider a situation where a runaway train will surely hit and kill exactly one innocent person (player 2) standing on a track, unless player 1 intervenes and puts the train on another track instead, where it will surely hit and kill exactly one other innocent person (player 3). The natural extensive form of the game (Figure 2) is entirely symmetric and thereby cannot be used to distinguish between the two alternatives. (Note that the labels on the edges are formally not part of the game.) Figure 2: "Runaway train." Player 1 must choose whether to allow player 2 to be hurt (do nothing; payoffs 0, -100, 0) or to hurt player 3 instead (put the train on the other track; payoffs 0, 0, -100). However, many philosophers (as well as non-philosophers) would argue that there is a significant distinction between the two alternatives, and that switching the train to the second track is morally wrong. We propose that the action-inaction distinction could be addressed by slightly extending the extensive-form representation so that at every information set (decision point), one action is labeled as the "passive" action (e.g., leaving the train alone). Other extensions may be needed as well. For example, we may take into account what each agent in the game deserves (according to some theory of desert), which may require us to further extend the representation scheme. 5
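Returning to the doing/allowing proposal above, one way the "passive action" label might look in code is sketched below for the runaway-train example: every available action carries its leaf payoffs, exactly one action per decision point is tagged passive, and a toy decision rule prefers the passive action whenever the options are otherwise tied on total harm. This is an illustrative reading of the suggested extension, not an implementation from the paper, and the harm-comparison rule is deliberately simplistic.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    payoffs: tuple      # (player1, player2, player3) outcomes at the resulting leaf
    passive: bool = False

def total_harm(action):
    # Sum of the negative payoffs at the leaf, expressed as a positive number.
    return -sum(p for p in action.payoffs if p < 0)

def choose(actions):
    """Prefer less total harm; among harm-equivalent options, prefer the passive one."""
    least_harm = min(total_harm(a) for a in actions)
    candidates = [a for a in actions if total_harm(a) == least_harm]
    passive = [a for a in candidates if a.passive]
    return (passive or candidates)[0]

if __name__ == "__main__":
    runaway_train = [
        Action("do nothing", payoffs=(0, -100, 0), passive=True),
        Action("put train on other track", payoffs=(0, 0, -100)),
    ]
    # Harms are symmetric, so the doing/allowing distinction breaks the tie:
    print(choose(runaway_train).name)   # "do nothing"
```

Extensions for desert or other factors could attach additional annotations to actions or players in the same way.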
A broader issue is that in behavioral game and decision theory it is well understood that the way the problem is framed-i.e., the particular language in which the problem is described, or even the order in which dilemmas are presented-can significantly affect human subjects' decisions. That is, two ways of describing the same dilemma can produce consistently different responses from human subjects (Kahneman and Tversky, 2000). The same is surely the case for moral dilemmas (Sinnott-Armstrong, 2008). Moral AI would need to replicate this behavior if the goal is to mirror or predict human moral judgments. In contrast, if our goal is to make coherent moral judgments, then moral AI might instead need to avoid such framing effects. \n Setting up a Machine Learning Framework Another approach for developing procedures that automatically make moral decisions is based on machine learning (see, e.g., Mitchell (1997)). We can assemble a training set of moral decision problem instances labeled with human judgments of the morally correct decision(s), and allow our AI system to generalize. (Other work has focused on obtaining human judgments not of the actions themselves, but of persuasion strategies in such scenarios (Stock et al., 2016).) To evaluate this approach with current technology, it is insufficient to represent the instances in natural language; instead, we must represent them more abstractly. What is the right representation scheme for this purpose, and what features are important? How do we construct and accurately label a good training set? \n Representing Dilemmas by Their Key Moral Features When we try to classify a given action in a given moral dilemma as morally right or wrong (as judged by a given human being), we can try to do so based on various features (or attributes) of the action. In a restricted domain, it may be relatively clear what the relevant features are. When a self-driving car must decide whether to take one action or another in an impending-crash scenario, natural features include the expected number of lives lost for each course of action, which of the people involved were at fault, etc. When allocating a kidney, natural features include the probability that the kidney is rejected by a particular patient, whether that patient needs the kidney urgently, etc. Even in these scenarios, identifying all the relevant features may not be easy. (E.g., is it relevant that one potential kidney recipient has made a large donation to medical research and the other has not?) However, the primary goal of a general framework for moral decision making is to identify abstract features that apply across domains, rather than to identify every nuanced feature that is potentially relevant to isolated scenarios. The literature in moral psychology and cognitive science may guide us in identifying these general concepts. For example, Haidt and Joseph (2004) have proposed five moral foundations-harm/care, fairness/reciprocity, loyalty, authority, and purity. Recent research has added new foundations and subdivided some of these foundations (Clifford et al., 2015). The philosophy literature can similarly be helpful; e.g., Gert (2004) provides a very inclusive list of morally relevant features. \n Classifying Actions as Morally Right or Wrong Given a labeled dataset of moral dilemmas represented as lists of feature values, we can apply standard machine learning techniques to learn to classify actions as morally right or wrong.
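A minimal version of the classification step just described might look as follows. Each dilemma-action instance is encoded as a vector of morally relevant features, here a handful of invented toy features loosely inspired by the literature discussed above, together with a hypothetical human label of "wrong" or "not wrong", and an off-the-shelf interpretable model is fit to the data. Everything in the example (features, labels, data) is fabricated for illustration; a serious effort would need carefully elicited judgments, validated features, and proper evaluation.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["harm_caused", "harm_allowed", "breaks_promise", "consented", "benefits_others"]

# Toy training set: each row encodes one action in one dilemma, labeled 1 if the
# (hypothetical) human annotators judged it morally wrong, else 0. All invented.
X = [
    [1, 0, 0, 0, 1],   # actively harms one person to benefit others
    [0, 1, 0, 0, 1],   # allows comparable harm instead of causing it
    [0, 0, 1, 0, 0],   # breaks a promise with no offsetting benefit
    [1, 0, 0, 1, 1],   # causes harm, but the harmed party consented
    [0, 0, 0, 0, 1],   # benefits others at no one's expense
    [1, 0, 1, 0, 0],   # harms and breaks a promise
]
y = [1, 0, 1, 0, 0, 1]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Inspect the learned rules directly, since being able to explain a verdict matters.
print(export_text(clf, feature_names=FEATURES))

# Query the model's verdict for a new, unseen dilemma-action encoding.
new_case = [[0, 1, 1, 0, 1]]     # allows harm and breaks a promise
print("predicted wrong?", bool(clf.predict(new_case)[0]))
```

A decision tree is used here only because its learned rules can be printed and inspected, which anticipates the point about explanation made next; a regression, a probabilistic model, or any other classifier could be substituted in the same frame.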
In ethics it is often seen as important not only to act in accordance with moral principles but also to be able to explain why one's actions are morally right (Anderson and Anderson, 2007; Bostrom and Yudkowsky, 2014); hence, interpretability of the resulting classifier will be important. Of course, besides making a binary classification of an action as morally right or wrong, we may also make a quantitative assessment of how morally wrong the action is (for example using a regression), an assessment of how probable it is that the action is morally wrong (for example using a Bayesian framework), or some combination of the two. Many further complicating factors can be added to this simple initial framework. \n Discussion A machine learning approach to automating moral judgments is perhaps more flexible than a game-theoretic approach, but the two can complement each other. For example, we can apply moral game-theoretic concepts to moral dilemmas and use the output (say, "right" or "wrong" according to this concept) as one of the features in our machine learning approach. On the other hand, the outcomes of the machine learning approach can help us see which key moral aspects are missing from our moral game-theoretic concepts, which will in turn allow us to refine them. It has been suggested that machine learning approaches to moral decisions will be limited because they will at best result in human-level moral decision making; they will never exceed the morality of humans. (Such a worry is raised, for example, by Chaudhuri and Vardi (2014).) But this is not necessarily so. First, aggregating the moral views of multiple humans (through a combination of machine learning and social-choice-theoretic techniques) may result in a morally better system than that of any individual human, for example because idiosyncratic moral mistakes made by individual humans are washed out in the aggregate. Indeed, the learning algorithm may well decide to output a classifier that disagrees with the labels of some of the instances in the training set (see Guarini (2006) for a discussion of the importance of being able to revise initial classifications). Second, machine learning approaches may identify general principles of moral decision making that humans were not aware of before. These principles can then be used to improve our moral intuitions in general. For now, moral AI systems are in their infancy, so creating even human-level automated moral decision making would be a great accomplishment. \n Conclusion In some applications, AI systems will need to be equipped with moral reasoning capability before we can grant them autonomy in the world. One approach to doing so is to find ad-hoc rules for the setting at hand. However, historically, the AI community has significantly benefited from adopting methodologies that generalize across applications. The concept of expected utility maximization has played a key part in this. By itself, this concept falls short for the purpose of moral decision making. In this paper, we have considered two (potentially complementary) paradigms for designing general moral decision making methodologies: extending game-theoretic solution concepts to incorporate ethical aspects, and using machine learning on human-labeled instances. Much work remains to be done on both of these, and still other paradigms may exist. All the same, these two paradigms show promise for designing moral AI. Figure 1: The trust game.
Each edge corresponds to an action in the game and is labeled with that action. Each bottom (leaf) node corresponds to an outcome of the game and is labeled with the corresponding payoffs for player 1 and player 2, respectively. \n\t\t\t The point that, as advanced AI acquires more autonomy, it is essential to bring moral reasoning into it has been made previously by others-e.g., Moor (2006) . \n\t\t\t We use \"she\" for player 1 or a generic player, and \"he\" for player 2. \n\t\t\t The technical name for this type of analysis is backward induction, resulting in behavior that constitutes a subgame perfect Nash equilibrium of the game. \n\t\t\t In a perfect-information game, the current state is fully observable to each player (e.g., chess), in contrast to imperfectinformation games (e.g., poker).5 Note that, to the extent the reasons for what an agent deserves are based solely on the agent's earlier actions in the game under consideration, solution concepts such as those described above might in fact capture this. If so, then the only cases in which we need to extend the representation scheme are those where what an agent deserves is external to the game under study (e.g., the agent is a previously convicted criminal).", "date_published": "n/a", "url": "n/a", "filename": "moralAAAI17.tei.xml", "abstract": "The generality of decision and game theory has enabled domain-independent progress in AI research. For example, a better algorithm for finding good policies in (PO)MDPs can be instantly used in a variety of applications. But such a general theory is lacking when it comes to moral decision making. For AI applications with a moral component, are we then forced to build systems based on many ad-hoc rules? In this paper we discuss possible ways to avoid this conclusion.", "id": "3e72dc71f46cfee526755f18db6db7b5"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": [], "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "2577-Article Text-4899-1-10-20151202.tei.xml", "abstract": "WINTER 2015 105 A rtificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agentssystems that perceive and act in some environment. In this context, the criterion for intelligence is related to statistical and economic notions of rationality -colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic representations and statistical learning methods has led to a large degree of integration and crossfertilization between AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Martin Dresler", "Anders Sandberg", "Kathrin Ohla", "Christoph Bublitz", "Carlos Trenado", "Aleksandra Mroczko-Wa ˛sowicz", "Simone Kühn", "Dimitris Repantis"], "title": "Non-pharmacological cognitive enhancement", "text": "Introduction Humans have always striven to increase their mental capacities. 
From symbolic language, writing and the printing press to mathematics, calculators and computers: Mankind has devised and employed tools to record, store and exchange thoughts and hence, in a more abstract sense, to enhance cognition. Such external devices aiding cognition do not seem to raise any ethical concerns, at least not in regard to the aim of enhancing cognitive functions. In contrast, the introduction of means to enhance cognition internally by intervening in the brain in a more straightforward way has raised ethical and legal concerns and is regarded by (some parts of) the public as highly suspicious. The prospects and perils of cognitive enhancers have prompted wide discussion in ethics, law and politics. Cognitive enhancement has become a trend topic both in academic and public debate e however the discussants bring a very diverse background and motivation to this debate. The aim of many empirical researchers of cognitive enhancement is to understand the neurobiological and psychological mechanisms underlying cognitive capacities (McGaugh and Roozendaal, 2009) , while theorists are rather interested in their social and ethical implications (Savulescu and Bostrom, 2009) . While in basic research very specific mechanisms are studied (mostly in animal models), many theoretical discussions start from the counterfactual idea of a highly effective drug that makes its consumer super smart. In contrast, there is a surprising paucity of research that evaluates the effects of currently existing cognitive enhancers in healthy individuals. A widely cited definition characterizes cognitive enhancement as interventions in humans that aim to improve mental functioning beyond what is necessary to sustain or restore good health (Juengst, 1998) . While the current bioethical debate on cognitive enhancement shows a strong focus on pharmacological ways of enhancement, according to the given characterization, enhancement of mental capabilities also by non-pharmacological means has to be seen as cognitive enhancement proper. In this paper we aim to draw attention to several non-pharmacological cognitive enhancement strategies that have been largely neglected in the debate so far. We will first summarize studies on the efficacy of psychopharmacological enhancers and then present data on the cognition enhancing effects of a number of non-pharmacological methods. We will start with broadly used interventions that are not commonly recognized as enhancement strategies such as nutrition, physical exercise and sleep, and then will go over to more specific methods such as meditation, mnemonic techniques, computer training, and brain stimulation technologies. We will restrain our review to methods that currently exist and won't speculate on future technologies. While many ethical arguments of the cognitive enhancement debate apply to both pharmacological and non-pharmacological enhancers, some of them appear in new light when considered on the background of non-pharmacological enhancement. \n Pharmaceuticals The bioethical debate on enhancement mainly concentrates on psychopharmaceuticals. In particular psychostimulants are increasingly popular among healthy people seeking cognitive enhancement (Talbot, 2009; Smith and Farah, 2011) . 
Beside amphetamines, which are not reviewed here, two particular substances have frequently been in the spotlight of both the scientific (de Jongh et al., 2008; Sahakian and Morein-Zamir, 2007) and popular press (The Economist, 2008; Talbot, 2009) because of their assumed enhancing properties, namely methylphenidate and modafinil. The first, a stimulant used to treat attentiondeficit hyperactivity disorder (ADHD), is known to have been extensively misused, especially by college students as a \"study aid\" (McCabe et al., 2005; Wilens et al., 2008) . The second, modafinil, a wakefulness promoting agent licensed for the treatment of excessive daytime sleepiness associated with narcolepsy, is already used by military personnel for missions of longer duration to counteract fatigue after sleep deprivation (Bonnet et al., 2005; Caldwell and Caldwell, 2005; Moran, 2007) . It also seems to become increasingly popular in both business and in academia. In a non-representative online poll conducted by Nature magazine (Maher, 2008) , 20% of the 1400 responding readers reported use of methylphenidate, modafinil or beta-blockers for non-medical reasons: 62% of users reported taking methylphenidate and 44% modafinil. Indirect evidence for the non-medical use of methylphenidate and modafinil can also be gained by companies constantly raising sales and by comparing their disproportionately high prescription numbers to the numbers of patients suffering from the disorders for which these substances are approved or used off-label (Mehlman, 2004) . Methylphenidate is a dopamine reuptake blocker that also enhances dopamine and norepinephrine release with pharmacologic mechanisms similar to those of amphetamines (Sulzer et al., 2005) . The mechanisms of action of modafinil are not well understood but are believed to differ from those of methylphenidate and amphetamines. Although there is mounting evidence that the effects on dopamine and norepinephrine are primary, effects on gaminobutyric acid, glutamate, histamine and orexin/hypocretin are also theorized (Volkow et al., 2009; Minzenberg and Carter, 2008; Ballon and Feifel, 2006) . Although these drugs are supposed to affect cognition mainly, the widespread neurochemical systems they implicate suggest that they might also have an impact on emotional and motivational functions. In a systematic review on the effects of these stimulants on healthy individuals it has been shown that there is a lack of studies addressing this issue (Repantis et al., 2010b) . Regarding methylphenidate, the analysis of the few existing studies provided no consistent evidence for enhancing effects, though evidence for a positive effect on memory (mainly spatial working memory) was found. While such memory benefits seem to be in the large effect size range, the popular opinion that methylphenidate enhances attention was not verified (Repantis et al., 2010b) . Some studies reported even negative effects, such as a disruption of attentional control (Rogers et al., 1999) . In a systematic review modafinil was found to have some positive, though moderate, enhancing effects on individuals who were not sleep deprived, namely on attention (Repantis et al., 2010b) . No effect was found on memory, mood or motivation in the few studies that examined these domains, but the results of the studies were not unequivocal. Moreover, there is evidence that the effect of modafinil depends to some extent on the individual baseline performance (Randall et al., 2005) . 
The above-mentioned systematic reviews also examined the side effects of methylphenidate and modafinil in healthy individuals (Repantis et al., 2010b). Since most of the included papers reported small studies and not large-scale clinical trials, no standardized method of assessing adverse reactions and reporting drop-outs due to adverse effects was used. In a number of studies (26 for methylphenidate and 26 for modafinil), no comment on side effects was made, which leaves us to assume that no severe adverse effects appeared that would deserve a comment in the limited space of a publication. In the majority of the trials, the drugs were well tolerated. There were some side effects reported, but these were benign and only in a few cases led to drop-outs. For modafinil, adverse reactions were primarily headache, dizziness, gastrointestinal complaints (e.g. nausea, abdominal pain, dry mouth), increased diuresis, palpitations, nervousness, restlessness, and sleep disturbances and, especially in studies with non-sleep-deprived individuals, insomnia (Baranski and Pigeau, 1997; Caldwell et al., 1999, 2000, 2004; Dinges et al., 2006; Eddy et al., 2005; Gill et al., 2006; Hart et al., 2006; Lagarde et al., 1995; Pigeau et al., 1995; Wesensten et al., 2002; Whitmore et al., 2006). For methylphenidate, a frequently reported side effect (reported in 13 out of 14 trials reporting side effects) was increased heart rate, while an increase in blood pressure was not consistently found (Bray et al., 2004; Brumaghim and Klorman, 1998; Clark et al., 1986; Fitzpatrick et al., 1988; Hink et al., 1978; Mehta et al., 2000; Peloquin and Klorman, 1986; Rogers et al., 1999; Strauss et al., 1984; Volkow et al., 1999a, 1999b; Wetzel et al., 1981). Besides these, typical complaints were headache, anxiety, nervousness, dizziness, drowsiness and insomnia. In total, these drugs seem to be well tolerated even by this population, where the trade-off between side effects and improvement may be less clear. Finally, since the majority of the studies that have been performed were short-term and single-dose studies, no comment can be made on the reinforcing effects, dependence development, and drug tolerance of methylphenidate or modafinil in healthy individuals. Prescription drugs currently available for the treatment of dementia provide a further possibility for cognitive enhancement. Of interest are the drugs used for the treatment of dementia due to Alzheimer's disease, namely the acetylcholinesterase inhibitors and memantine. The first category comprises three substances, donepezil, galantamine, and rivastigmine, that are recommended for clinical use for the treatment of patients with mild to moderate Alzheimer's disease (Racchi et al., 2004). Memantine is an NMDA receptor antagonist and is registered for the treatment of moderate to severe Alzheimer's disease (Sonkusare et al., 2005). Studies with anti-dementia drugs were found to be lacking. In a systematic review (Repantis et al., 2010a), only ten trials with donepezil, one with rivastigmine and seven with memantine have been reported. No randomized controlled trials examining the effects of galantamine in healthy individuals were found. Anti-dementia drugs show their effect after intake for several weeks. All memantine trials and the one rivastigmine trial were, however, single-dose trials. Hence, based on these few and insufficient data, no adequate analysis of their potential as cognitive enhancers can be performed.
Repeated trials have been conducted only with donepezil. These were six small-scale trials, lasting 14-42 days. From these, only two (Beglinger et al., 2005; Fitzgerald et al., 2008b) had older persons as participants. The rest of the trials included young healthy participants. This factor complicates the comparison between the results and makes it difficult to generalize the results of the latter studies to the main population of interest, namely the growing elderly population. These few existing studies provide no consistent evidence for a cognitive enhancement effect. In one study it was found that donepezil improved the retention of training on complex aviation tasks (Yesavage et al., 2002). In another case, verbal memory for semantically processed words was improved (Fitzgerald et al., 2008a). Donepezil might also improve episodic memory (Gron et al., 2005), but interestingly, two studies reported transient negative effects on episodic memory (Beglinger et al., 2004, 2005). A newer study also found an impairment of working memory in older healthy participants taking donepezil for six weeks (Balsters et al., 2011). In a sleep deprivation study, donepezil had no effect when participants were well-rested. Nevertheless, the memory and attention deficits resulting from 24 h of sleep deprivation were attenuated after donepezil intake. This effect, however, was seen only in individuals whose performance declined the most after sleep deprivation (Chuah and Chee, 2008; Chuah et al., 2009), and could not be confirmed in a recent study (Dodds et al., 2011). Another point that should be made is that in most of the studies a large neuropsychological test battery was applied. However, an effect could be shown in only a few, if not only one, of the tests applied. This could speak either for a selective effect of donepezil or for small effects that, in these relatively underpowered studies, could be revealed in only one (maybe the most difficult) task. Another possible explanation could be that acetylcholinesterase inhibitors require a pathology of diminished cholinergic transmission to show their effects, and, therefore, it is not possible to optimize performance in healthy individuals who already have an optimal concentration of acetylcholine. In conclusion, evidence for cognition-enhancing effects of currently available psychopharmaceuticals in healthy subjects is sparse. In the majority of the trials, donepezil was well tolerated; however, some authors warn that sleep disturbances might become apparent in larger populations (Yesavage et al., 2002). Reported side effects were benign and only in a few cases led to drop-outs. The adverse reactions were mainly gastrointestinal complaints (e.g. nausea), but also headache, dizziness, nightmares and insomnia. \n Nutrition Numerous food products and dietary supplements claim effects like "increase energy" or "enhance memory" despite scarce, controversial or even lacking scientific evidence. Nutritional enhancers are consumed, intentionally or unintentionally, in everyday situations, and they can reduce fatigue, e.g. through a coffee after lunch, and help maintain full cognitive capacities, e.g. through sweet snacks during an exam. Here, we review the acute effects of two commonly consumed dietary constituents, namely caffeine and sugar (glucose).
Caffeine is an adenosine receptor antagonist; it reduces inhibition of neural firing, largely through an increased turnover of noradrenaline in the brain (Smith et al., 2003; Ferre, 2008). It exerts its stimulating effects within less than an hour after administration by altering the biochemistry of the brain. Typical behavioral effects of caffeine include elevated mood, increased alertness, and better sustained attention (Smith et al., 1991, 2005; Hewlett and Smith, 2007). It improves motor-skill performance on tasks that are impaired when arousal is low, e.g. during simulations of driving (Reyner and Horne, 1997), and increases speed of encoding and response to new stimuli (Warburton et al., 2001; Riedel et al., 1995). Caffeine effects on more complex and cognitively demanding tasks are, however, controversial, in that some authors report better performance (Heatherley et al., 2005) while others report null findings (Rogers and Dernoncourt, 1998). The effects of caffeine on memory and learning are particularly disputed, and positive effects can largely be attributed to indirect effects of elevated attention to the stimuli during encoding (Nehlig, 2010). Another debate concerns the question whether, and to what extent, differences in prior caffeine consumption and the lack of experimental control thereof contribute to these conflicting observations. Caffeine tolerance has been demonstrated (Evans and Griffiths, 1992) and is more likely to occur in habitual, heavy coffee drinkers. Coffee drinkers represent, however, the group that is most prone to intentionally exploiting the enhancing effects of caffeine. Caffeine withdrawal after heavy coffee consumption has been associated with headaches, increased subjectively perceived stress and feelings of fatigue and reduced alertness in some studies (Ratcliff-Crain et al., 1989; Schuh and Griffiths, 1997; Dews et al., 2002; Juliano and Griffiths, 2004). However, withdrawal effects can largely be explained by psychological rather than pharmacological factors of reduced caffeine intake (Dews et al., 2002). In line with this, it has been shown that expectancy mimics the effects of caffeine when consumers believe they are consuming a caffeinated beverage (Fillmore, 1994), further corroborating a psychological component of the caffeine effect following both consumption and withdrawal. While psychological aspects of caffeine withdrawal appear to be relevant particularly for subjective reports of mood and energy, evidence suggests that caffeine enhances task performance independent of whether it is consumed in the abstained or normal caffeinated state in coffee drinkers (Smit and Rogers, 2000; Addicott and Laurienti, 2009). On the other hand, it appears that caffeine yields similar effects when administered in a coffee, in a tea, or as a capsule, supporting a pharmacological rather than a psychological mechanism when participants' expectations are controlled for (Smith, 2002). Glucose is the primary breakdown product of carbohydrates and the fuel for our cells. It is provided through the blood constantly and measured as the level of blood sugar. In an attempt to keep the blood sugar level constant, excess glucose must be stored and released later, which is achieved by the pancreatic hormones insulin and glucagon. Insulin is released at high levels of blood sugar and stimulates the synthesis of glycogen. Glucagon is released with decreasing blood sugar; it targets the liver to break up glycogen into glucose. Hypoglycemia, i.e.
when the blood glucose level falls to very low values, can affect cognitive functioning negatively and is associated with slower reaction times in tasks that require attention. In healthy individuals, however, the blood glucose level appears to be fairly stable during the day. Subjective reports of \"increased mental energy\" have been associated with higher glucose metabolism in the brain (Posner et al., 1988; Reivich and Alavi, 1983), and this effect occurs within several minutes after glucose administration. With regard to objective cognitive performance, glucose improves attention (Benton et al., 1994), response speed (Owens and Benton, 1994) and working memory (Scholey et al., 2001), the latter occurring under conditions of high but also of low glucose depletion (Owen et al., 2012; Jones et al., 2012). The most pronounced effects of glucose on cognition are found for declarative memory (Messier, 2004), where effect sizes in the large range have been demonstrated in particular for demanding tasks (e.g. Sünram-Lea et al., 2001, 2002a, 2002b; Meikle et al., 2004). High blood glucose levels are associated with improved memory function (Benton and Owens, 1993), and glucose administration before and after learning similarly improves memory performance, indicating that attentional or other non-memory-specific processes during encoding alone cannot be responsible for the memory-enhancing effects of glucose (Sünram-Lea et al., 2002a). Memory effects are more pronounced in elderly as compared to young adults, and glucose tolerance was predictive of declarative memory performance (Manning et al., 1990; Meikle et al., 2004; Messier, 2004). On a neural level, the hippocampus has been proposed as the main brain region mediating the memory-enhancing effects of glucose, with more specific mechanisms involving glucose effects on cerebral insulin, acetylcholine synthesis, potassium adenosine triphosphate channel function, and brain extracellular glucose availability. Taken together, the findings show that caffeine and sugar enhance mood, subjectively perceived energy, vigilance, attention, and memory, and may even exert their effects in a synergistic fashion if administered together (Adan and Serra-Grabulosa, 2010). Individual differences, e.g. in glucose tolerance or nutritional habits such as caffeine consumption, influence the extent and direction of these effects. \n Physical exercise It is common knowledge that regular physical activity is a highly beneficial factor for preventing cardiovascular diseases and staying healthy in general. Already in the first half of the 20th century it was demonstrated that athletes also outperform physically inactive individuals in cognitive functions (Burpee and Stroll, 1936), and an emerging body of evidence suggests that regular aerobic exercise indeed has beneficial effects on brain function and cognition (Hillman et al., 2008). The focus of most studies on physical exercise effects on cognition is on developmental issues: either children of different age groups or elderly adults were examined. In school-age children, physical exercise was demonstrated to benefit e.g. academic achievement, intelligence, perceptual skills, and verbal and mathematical ability (Sibley and Etnier, 2003). In older adults with and without pathological cognitive decline, beneficial effects of various physical exercise programs on different aspects of cognition were observed (Richards et al., 2003; van Uffelen et al., 2008).
A recent meta-analysis of randomized controlled trials demonstrated that aerobic exercise training improves attention, processing speed, executive function and memory, while effects on working memory were less consistent (Smith et al., 2010). Even if methodological issues in measuring the impact of exercise on cognition remain, in particular for studies with elderly subject populations (Miller et al., 2012), the conclusion that physical activity helps to preserve mental abilities throughout aging seems to be warranted. In contrast to research in children and older adults, there is a paucity of studies on physical exercise effects on the cognition of young and middle-aged adults. Most data on these age groups can be found in studies on older adults, where they were examined as control groups for comparison with the elderly. Studies focusing not on chronic effects of regular physical activity but on acute effects of exercise constitute an exception to this pattern. For example, brief bouts of physical exercise improved long-term memory in young adults (Coles and Tomporowski, 2008). Intense exercise in the form of high-impact anaerobic running was shown to strongly enhance learning speed in a vocabulary memorizing task (Winter et al., 2007). A recent meta-analysis demonstrated that in particular mental speed and memory processes are consistently enhanced after acute exercise, while the effects during acute exercise seem to depend on the specific exercise mode. In general, however, the cognition-enhancing effects of acute exercise seem to be in the small to medium range (Lambourne and Tomporowski, 2010). Besides motivational factors, an increase in general arousal level related to physical exertion has been hypothesized as a potential mechanism (Brisswalter et al., 2002). Data on the neural mechanisms underlying the effects of physical exercise on human cognition are rather sparse. Regular physical exercise training improved resting functional efficiency in higher-level cognitive networks including the frontal, posterior, and temporal cortices of older training participants compared to a control group (Voss et al., 2010). In particular, greater task-related activity in fronto-parietal networks is associated with both general cardiovascular fitness and exercise training effects on cognition (Colcombe et al., 2004). Hippocampal cerebral blood flow and hippocampal connectivity also exhibit significant increases through physical exercise (Burdette et al., 2010). Structurally, cardiovascular fitness in the healthy elderly correlates with preserved gray matter in areas that typically show age-related decline (Gordon et al., 2008); in particular, hippocampal volume was found to be associated with physical fitness in older adults (Erickson et al., 2009), but also in children (Chaddock et al., 2010). Significant brain volume increases in both gray and white matter regions were also demonstrated to be associated with aerobic exercise training (Colcombe et al., 2006). In particular, the size of the anterior hippocampus was shown to increase through physical exercise, which was related to enhanced spatial memory and increased serum brain-derived neurotrophic factor (BDNF) levels, a mediator of hippocampal neurogenesis in the dentate gyrus (Erickson et al., 2011).
This is in line with data derived from animal models, showing that physical exercise increases BDNF gene expression in the hippocampus (Neeper et al., 1995), and that hippocampal BDNF indeed mediates the effects of physical exercise on cognition (Vaynman et al., 2004; Gomez-Pinilla et al., 2008). The enhancing effects of intense acute exercise also seem to be mediated by BDNF increases (Winter et al., 2007). Finally, parallel studies in mice and humans demonstrated that cerebral blood volume measurements provide an imaging correlate of neurogenesis in the dentate gyrus and that physical exercise had a primary effect on dentate gyrus cerebral blood volume that correlated with cognitive function (Pereira et al., 2007). In conclusion, there is converging evidence on several levels of observation that physical exercise enhances cognitive function throughout the lifespan. \n Sleep Humans spend a third of their lifetime asleep. From an evolutionary standpoint, this phenomenon helps to save energy, but it also leaves the sleeper in a potentially dangerous state of inattention. Sleep therefore has to provide the organism with important advantages to compensate for this disadvantage. A rapidly growing body of literature suggests that an important function of sleep is to enhance cognitive capacities, in particular memory (Diekelmann and Born, 2010) and creativity (Dresler, in press). The first empirical reports on the positive effects of post-learning sleep on memory consolidation were published almost a century ago: Jenkins and Dallenbach (1924) demonstrated that memory for nonsense syllables over retention periods including sleep is less prone to forgetting than over an equivalent time of wakefulness. Since then, hundreds of studies testing different memory systems have confirmed the positive effects of sleep on memory consolidation (Diekelmann and Born, 2010). It might be argued that regular sleep is just a general biological prerequisite to ensure cognitive functioning and that sleep therefore trivially favors memory consolidation in comparison to sleep deprivation. However, also in experimental designs without sleep deprivation as a control condition, sleep positively affects memory consolidation compared to wakefulness, e.g. when retention intervals during the day are compared with nocturnal retention intervals of similar length (Fischer et al., 2002; Walker et al., 2002). Furthermore, a growing number of studies demonstrate that additional sleep in the form of daytime naps also benefits memory function in non-sleep-deprived subjects (e.g. Mednick et al., 2003; Korman et al., 2007). Of note, even a nap as short as 6 min has been shown to be sufficient to promote memory performance (Lahl et al., 2008), and for some memory systems the benefit of a daytime nap is comparable to a whole night of sleep (Mednick et al., 2003). In general, the size of the sleep effect on memory consolidation seems to depend on the memory system involved: while for declarative learning effect sizes of sleep are in the medium range (e.g. Gais et al., 2006), sleep effects on procedural or perceptual learning are large (Fischer et al., 2002) or very large (Karni et al., 1994). Besides its stabilizing function, sleep boosts certain kinds of memories even above the level of initial acquisition: procedural memories like motor skills typically reach a plateau after some time of training; after a night of sleep, however, motor performance starts from a higher level despite the absence of further training (Walker et al., 2002).
Interestingly, the sleep-memory relationship is specifically influenced by personal factors like gender, hormonal status or mental health (Dresler et al., 2010; Genzel et al., 2012). The neural mechanisms underlying the effects of sleep on memory consolidation are still poorly understood. A major point of discussion is the question whether newly formed memories profit from rather passive homeostatic processes (Tononi and Cirelli, 2003) or are actively consolidated during sleep. While several animal studies demonstrated a neuronal replay during sleep of activation patterns that were associated with recent memories (Wilson and McNaughton, 1994; Ji and Wilson, 2007), a study with humans utilizing memory-related odor cues during sleep demonstrated a causal role of sleep for memory consolidation (Rasch et al., 2007). For several years it was thought that rapid eye movement (REM) sleep supports the consolidation of procedural memories while non-REM sleep supports declarative memories like verbal information; however, recent studies suggested that this model was too simplistic (Genzel et al., 2009; Rasch et al., 2009; Dresler et al., 2011). Instead of global sleep stages, the role of physiological microprocesses during sleep has gained attention. In particular, the interaction of hippocampal sharp wave ripples, thalamo-cortical sleep spindles, and cortical slow oscillations is thought to play a key physiological role in the consolidation of memories (Mölle and Born, 2011). Anecdotal reports on scientific discovery, inventive originality, and artistic productivity suggest that creativity, too, can be triggered or enhanced by sleep. Several studies confirm these anecdotes, showing that sleep promotes creative problem solving compared to wakefulness. For example, when subjects performed a cognitive task that could be solved much faster by applying a hidden rule, after a night of sleep more than twice as many subjects gained insight into the hidden rule as in a control group staying awake (Wagner et al., 2004). Like sleep-related memory enhancement, active processes during sleep seem to promote creativity: if applied during sleep, olfactory stimuli that were associated with creativity tasks before sleep trigger insights overnight (Ritter et al., in press). In particular REM sleep, the sleep stage most strongly associated with intense dreaming, enhances the formation of associative networks in creative problem solving (Cai et al., 2009). Selective deprivation of REM sleep, but not of other sleep stages, impairs post-sleep performance in creativity tasks that are presented to the subjects before sleep (Cartwright, 1972; Glaubman et al., 1978). Subjects show greater cognitive flexibility in creativity tasks immediately after awakenings from REM sleep compared to awakenings from other sleep stages (Walker et al., 2002). Both theoretical models and empirical research on creativity suggest that sleep is a highly effective creativity enhancer (Dresler, in press). The historical standard model proposes a passive incubation phase as an essential step towards creative insights (Helmholtz, 1896). Psychoanalytical models emphasize primary process thinking for creative cognitions, which is explicitly conceptualized as dream-like (Kris, 1952). Cognitive models propose that flat association hierarchies and a state of defocused attention facilitate creativity (Mednick, 1962).
Hyper-associativity and defocused attention are phenomenal features of most dreams, physiologically probably caused by prefrontal cortex deactivation (Hobson and Pace-Schott, 2002). Physiological models suggest a high variability in cortical arousal levels as beneficial for creativity (Martindale, 1999), and the sleep cycle can be considered a prime example of such arousal variability. The chaotic activation of the cortex in REM sleep through brainstem regions, in the absence of external sense data, leads to a much more radical departure from unsuccessful problem-solving attempts, leading to co-activations of cognitive data that are highly remote in waking life. These co-activations, woven into a dream narrative in a self-organizing manner, repeatedly receive further innervations from the brainstem, leading to bizarre sequences of loosely associated dream topics that might eventually activate particular problem-relevant cognitions or creative cognitions in general (Hobson and Wohl, 2005). In conclusion, the phenomenological and neural correlates of sleep provide ideal incubation conditions for the genesis of creative ideas and insights. \n Meditation Meditation has been emphasized as a discipline that promotes mental well-being; however, recent research also suggests that it benefits several cognitive capacities. Meditation has been conceptualized as a family of complex emotional and attentional regulatory training regimes (Lutz et al., 2008). Such approaches include ancient Buddhist mindfulness meditations such as Vipassana and Zen meditations, but also several modern group-based standardized meditations (Chiesa and Malinowski, 2011). In the focus of current research are two rather traditional approaches: focused attention meditation and open monitoring meditation, which involve voluntary focusing of attention on a chosen object or non-reactive monitoring of the content of experience from moment to moment (Lutz et al., 2008). During recent years, the effects of meditation practice have also been systematically studied in Western laboratories, and a rapidly growing body of evidence demonstrates that meditation training enhances attention and other cognitive capacities. For example, in comparisons of experienced meditators with meditation-naive control subjects, meditation practice has been associated with increased attentional performance and cognitive flexibility (Moore and Malinowski, 2009; Hodgins and Adair, 2010). In longitudinal studies, three months of meditation training were shown to enhance attentional capacity (Lutz et al., 2009), perception and vigilance (MacLean et al., 2010). Even a brief training of just four meditation sessions was sufficient to significantly improve visuo-spatial processing, working memory and executive functioning (Zeidan et al., 2010). A recent systematic review associated early phases of mindfulness meditation training with significant improvements in selective and executive attention, whereas later phases were associated with improved sustained attention abilities. In addition, meditation training was proposed to enhance working memory capacity and some executive functions. A recent meta-analysis of the effects of meditation training reported medium to large effect sizes for changes in emotionality and relationship issues, medium effect sizes for measures of attention and smaller effects on memory and several other cognitive capacities (Sedlmeier et al., in press).
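For orientation, the qualitative labels used here and throughout this review ("small", "medium", "large") usually refer to standardized mean differences expressed as Cohen's d. The generic definition, given here purely for reference as a standard statistical convention rather than as the exact estimator used in any of the cited meta-analyses, is

    d = \frac{\bar{x}_1 - \bar{x}_2}{s_\mathrm{pooled}}, \qquad
    s_\mathrm{pooled} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}},

where the subscripts denote, for example, a treatment and a control group. By Cohen's conventional benchmarks, d of about 0.2 is read as a small, 0.5 as a medium and 0.8 as a large effect; individual meta-analyses may additionally apply bias-corrected variants such as Hedges' g.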
Also the neurophysiological mechanisms underlying meditation practice and its relation to cognition have been addressed. Electroencephalographic (EEG) studies have revealed a significant increase in alpha and theta activity in subjects who underwent a meditation session (Kasamatsu and Hirai, 1966; Murata et al., 1994). Neuroimaging studies have shown that meditation practice activates or deactivates brain areas comprising the prefrontal cortex and the anterior cingulate cortex (Holzel et al., 2007), the basal ganglia (Ritskes et al., 2003), the hippocampus, the pre- and post-central gyri as well as the dorsolateral prefrontal and parietal cortices (Lazar et al., 2000). Focusing on attention studies, it has been demonstrated that long-term meditation supports enhancement in the activation of specific brain areas, while also promoting attention sustainability (Davidson et al., 2003). Different studies have also emphasized the role of meditation as a mental process that modulates plasticity in neural circuits commonly associated with attention (Davidson and Lutz, 2008). fMRI studies have demonstrated a reduction of neural responses in widespread brain regions that are linked to conceptual processing, which suggests enhanced neural efficiency, probably via improved sustained attention and impulse control (Pagnoni et al., 2008; Kozasa et al., 2012). Moreover, PET studies have demonstrated an increase of dopamine release in the ventral striatum as a result of yoga meditation, which in turn suggests regulation of conscious states at the synaptic level (Kjaer et al., 2002). In addition, some studies have suggested that meditation practice is associated with structural brain changes. Compared to meditation-naive control subjects, long-term meditators showed significantly larger volumes of the right hippocampus and orbitofrontal cortex (Luders et al., 2009) and significantly greater cortical thickness in brain regions associated with attention, interoception and sensory processing, including the prefrontal cortex and right anterior insula (Lazar et al., 2005). In a longitudinal study with meditation-naive subjects undergoing an 8-week meditation program, gray matter increases in the hippocampus and other brain regions have been observed (Hölzel et al., 2011). \n Mnemonics In modern society, the ability to cope with verbal or numerical information becomes increasingly important. However, our learning skills evolved to handle concrete visuo-spatial rather than abstract information: while we can easily remember our last birthday party in great detail and typically have no problems recalling a once-walked route including dozens or even hundreds of single sights and branches, most of us have a very hard time memorizing telephone numbers, foreign vocabulary or shopping lists. The most common way to memorize such information is rote learning: we take the information to be remembered into our short-term memory and repeat it over and over again. However, such a procedure is slow and inefficient, in particular due to a severe limitation of short-term memory capacity: as Miller (1956) observed more than half a century ago, the number of arbitrary information chunks an average human can hold in short-term memory is seven, plus or minus two. In contrast, a few individuals show memory skills far beyond this normal range: already a century ago, case reports mentioned exceptional memorizers with memory spans of several dozen digits (Brown and Deffenbacher, 1975).
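A toy illustration of this chunk limit may be helpful (our own sketch in Python, with an arbitrary example number; it is not taken from the cited studies): the same digit string exceeds the limit when every digit occupies its own chunk, but fits easily once the digits are re-coded into larger groups.

    import math

    def n_chunks(digit_string: str, group_size: int = 1) -> int:
        """Number of chunks needed when digits are grouped group_size at a time."""
        return math.ceil(len(digit_string) / group_size)

    n_chunks("420509198273")      # 12 chunks: well above Miller's 7 +/- 2
    n_chunks("420509198273", 3)   # 4 chunks of three digits: comfortably within the limit

The mnemonic techniques discussed below essentially systematize this kind of re-coding, replacing arbitrary digit-by-digit storage with a small number of richer, more imageable chunks.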
In a seminal case study, a normal college student was trained over the course of two years, eventually reaching a memory span of 82 digits read at a pace of one digit per second (Ericsson et al., 1980). Since the early 1990s, the top participants of the annual World Memory Championships have regularly proven memory spans of hundreds of digits (Konrad and Dresler, 2010). However, such superior memorizers do not seem to exhibit structural brain changes or superior cognitive abilities in general, but acquired their skills by deliberate training in the use of mnemonic techniques (Brown and Deffenbacher, 1988; Maguire et al., 2003; Ericsson, 2009). To cope with the limitations of natural memory, humans have always used external remembering cues (D'Errico, 2001). The term mnemonics is typically used to denote internal cognitive strategies aimed at enhancing memory. Parallel to their success in memory artistry and memory sports, several mnemonics have been shown to strongly enhance memory capacity in scientific studies (Bellezza, 1981; Worthen and Hunt, 2011a, 2011b). Probably most prominent is the so-called method of loci, an ancient technique used extensively by Greek and Roman orators (Yates, 1966). It utilizes well-established memories of spatial routes: during encoding, to-be-remembered information items have to be visualized at salient points along such a route, which in turn has to be mentally retraced during retrieval. A second powerful mnemonic is the phonetic system, which is designed to aid the memorization of numbers: single digits are converted to letters, which are then combined to form words (a small illustrative sketch of one such digit-to-consonant mapping is given below). Both the method of loci and the phonetic system have been shown to be very effective and even to increase their efficacy over time, i.e. at delayed recall after several days compared to immediate recall (Bower, 1970; Roediger, 1980; Bellezza et al., 1992; Hill et al., 1997; Higbee, 1997; Wang and Thomas, 2000). A third mnemonic that has been shown to be effective is the keyword method, designed specifically to enhance the acquisition of foreign vocabulary (Raugh and Atkinson, 1975), but it also helps to learn scientific terminology (Rosenheck et al., 1989; Brigham and Brigham, 1998; Balch, 2005; Carney and Levin, 1998). It associates the meaning of a to-be-remembered term with what the term sounds like in the first language of the learner. A recently published broad overview of mnemonics shows that research into these techniques has received less attention since 1980 (Worthen and Hunt, 2011b). In particular, neurophysiological data on mnemonics are sparse. A seminal study on expert mnemonics users found that during mnemonic encoding brain regions are engaged that are critical for spatial memory, in particular parietal, retrosplenial and right posterior hippocampal areas (Maguire et al., 2003). Likewise, the superior digit memory of abacus experts was associated specifically with visuo-spatial information processing brain regions (Tanaka et al., 2002); here, abacus skill can be interpreted as a mnemonic for digit memorizing. In two studies with novices taught the method of loci, mnemonic encoding led to activation increases particularly in prefrontal and occipito-parietal areas, while mnemonic-guided recall led to activation increases particularly in left-sided areas including the parahippocampal gyrus, retrosplenial cortex and precuneus (Nyberg et al., 2003; Kondo et al., 2005).
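One widely used variant of the phonetic system is the so-called major system, which maps each digit onto a small set of consonant sounds. The sketch below is our own illustration in Python; the exact mapping varies between sources and is not necessarily the variant evaluated in the studies cited above.

    # Illustrative digit-to-consonant mapping of the "major system" variant of the
    # phonetic mnemonic; textbooks differ slightly in the exact sound assignments.
    MAJOR_SYSTEM = {
        "0": "s/z", "1": "t/d", "2": "n", "3": "m", "4": "r",
        "5": "l", "6": "j/sh/ch", "7": "k/g", "8": "f/v", "9": "p/b",
    }

    def consonant_skeleton(number: str) -> str:
        """Translate a digit string into its consonant skeleton; the user then adds
        vowels freely to turn the skeleton into a concrete, imageable word."""
        return " ".join(MAJOR_SYSTEM[d] for d in number if d.isdigit())

    consonant_skeleton("32")   # "m n"       -> e.g. the word moon
    consonant_skeleton("941")  # "p/b r t/d" -> e.g. the word bread

Remembering the images of a moon or a loaf of bread is typically far easier than remembering the arbitrary digit strings they encode, which is the core idea behind the large mnemonic gains reported above.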
Another strategic method to enhance memory retention that has gained attention in recent years is retrieval practice. While retrieval of learned information in testing situations is traditionally thought to simply assess learning success, repeated retrieval itself has been shown to be a powerful mnemonic enhancer, producing large gains in long-term retention compared to repeated studying (Roediger and Butler, 2011). For example, when students have to learn foreign vocabulary words, repeated studying after the first learning trial had no effect on delayed recall after one week, while repeated testing produced a surprisingly large effect on long-term retention (Karpicke and Roediger, 2008). Besides vocabulary learning, text materials also profit from repeated retrieval (Karpicke and Roediger, 2006, 2010). Interestingly, study participants seem to be unaware of this effect, overestimating the value of repeated study and underestimating that of repeated retrieval (Karpicke and Roediger, 2006, 2008). Effects of retrieval practice were even shown to produce greater success in meaningful learning than elaborative studying strategies, which are designed to lead to deeper learning and therefore hold a central place in contemporary education (Karpicke and Blunt, 2011). On the neural level, repeated retrieval leads to higher brain activity in the anterior cingulate cortex during retest, which has been interpreted as an enhanced consolidation of memory representations at the systems level (Eriksson et al., 2011). In conclusion, mnemonic strategies can be seen as strong and reliable enhancers of learning and memory capacity. While their immediate benefits for easy-to-learn material seem to be in the small to medium effect size range, the effectiveness of mnemonics grows strikingly with task difficulty or retention time and can reach effect sizes in terms of Cohen's d of larger than 3 or 4 (e.g. Higbee, 1997; Karpicke and Roediger, 2008). Of note, the benefits of mnemonics in population groups with particular cognitive training needs, e.g. in age-related cognitive decline, seem to be less pronounced (Verhaeghen et al., 1992), but can still reach large effect sizes if memory is assessed after a prolonged retention time (Hill et al., 1997). \n Computer training The rapid growth of computer game popularity among adolescents has generated concern among practitioners, parents, scholars and politicians. For violent computer games, detrimental effects have been reported in the social domain, namely increases in aggression and reductions of empathy and prosocial behavior (Kirsh and Mounts, 2007; Anderson et al., 2010). But favorable effects of frequent computer game playing have also been observed. Computer games allow repeated, sometimes rewarding, training of various mental tasks with variation and interactivity. While improved performance on the tasks inside the games is unsurprising, they may also be able to transfer their effects to other cognitive domains or enhance general cognitive abilities. Much interest has been focused on enhancing long-term memory or brain plasticity in healthy or mildly impaired older adults using training programs, especially to prevent dementia and age-related cognitive decline (Cotelli et al., 2012; Tardif and Simard, 2011). Computerized training programs have shown moderate improvements of memory that are sustained 3 months after the end of training (Mahncke et al., 2006).
Other studies have found improvements in memory and attention (Smith et al., 2009; Zelinski et al., 2011), executive function and processing speed (Nouchi et al., 2012; Basak et al., 2008) and working memory and episodic memory in young and older adults (Schmiedek et al., 2010). However, a large six-week online study did not find evidence for transfer (Owen et al., 2010). Also, although computerized brain training games have become a major industry, it is not clear that the commercial games transfer to untrained tasks (Fuyuno, 2007; Ackerman et al., 2010). Computer games appear to be able to train visual skills, such as visuo-spatial attention, the number of objects that can be attended and the resolution of visual processing (Achtman et al., 2008; Hubert-Wallander et al., 2011). Playing the game Tetris improved mental rotation and spatial visualization time (Okagaki and Frensch, 1994), and computer game training improved contrast sensitivity (Li et al., 2009), spatial visual resolution (Green and Bavelier, 2007) and task-switching (Strobach et al., 2012). However, these enhanced abilities, although not tied directly to the gaming task, might nevertheless be limited to similar domains. For example, one study found that games enhance navigation performance in desktop and immersive virtual environments but not in real environments (Richardson et al., 2011). Regular or expert gamers show various improvements in mental ability compared to non-gamers. For example, first-person-shooter game players showed greater cognitive flexibility than non-players (Colzato et al., 2010); players enumerate better (Green and Bavelier, 2006), have faster visual search (Castel et al., 2005), have better visual attention (Green and Bavelier, 2003), track object color and identity better (Sungur and Boduroglu, 2012), and have improved psychomotor skills (Kennedy et al., 2011). However, 20+ hours of training on computer games did not improve non-video gamers on mental tasks (visual short-term memory, task switching, mental rotation) where expert video gamers excelled (Boot et al., 2008). Either pre-existing group differences (or self-selection) make the experts more skilled or amenable to training, or a longer training period is needed. This appears to be a general problem in studying enhancing game effects that needs to be circumvented in further studies (Boot et al., 2011). A cognitive domain that has attracted increasing attention in recent years is working memory. Working memory underlies a variety of cognitive tasks and is closely related to fluid intelligence. It can be trained using computerized tasks such as the n-back task, where the difficulty is increased to remain challenging for the player (a minimal sketch of such an adaptive rule is given below). Working memory training has been tried for various therapeutic purposes, partially because of its correlation with executive function, but it has also been applied in preschool children, where it transferred to improvement on a fluid intelligence-related task (Thorell et al., 2009; Nutley et al., 2011). Also in healthy adults, transfer to fluid intelligence from working memory training has been observed (Jaeggi et al., 2008, 2010). However, the evidence of transfer has been questioned by some authors (Shipstead et al., 2010) and some attempts at replication of transfer effects outside working memory have been unsuccessful (Dahlin et al., 2008; Holmes et al., 2010; Redick et al., in press). Individual differences in training performance predict the transfer effects (Jaeggi et al., 2011).
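As an illustration of the adaptive logic behind such training tasks, a minimal sketch in Python follows. The stimuli, block length, match rate and staircase thresholds are arbitrary choices for the example and do not reproduce the protocol of any of the cited studies.

    import random

    def generate_nback_block(n, length=20, target_rate=0.3):
        """Generate a letter stream for one n-back block with roughly target_rate matches."""
        stimuli = "BCDFGHJK"
        stream = [random.choice(stimuli) for _ in range(length)]
        for i in range(n, length):
            if random.random() < target_rate:
                stream[i] = stream[i - n]          # force an n-back match at this position
        targets = [i >= n and stream[i] == stream[i - n] for i in range(length)]
        return stream, targets

    def adapt_level(n, accuracy, up=0.9, down=0.7):
        """Simple staircase: raise n after an accurate block, lower it after a poor one."""
        if accuracy >= up:
            return n + 1
        if accuracy < down and n > 1:
            return n - 1
        return n

    # Example: after a block at n = 2 answered with 95% accuracy, the next block uses n = 3.
    stream, targets = generate_nback_block(n=2)
    next_n = adapt_level(n=2, accuracy=0.95)

The staircase keeps the task difficulty near the limit of the trainee's working memory capacity, which is the design feature usually credited for whatever training gains such programs produce.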
Short- and long-term benefits of cognitive training may differ, and different types of training (training core working memory versus strategy training) might have different transfer effects (Morrison and Chein, 2011). Neurobiologically, working memory training does appear to increase prefrontal and parietal activity (Olesen et al., 2004) and white matter volume (Takeuchi et al., 2010), and to prompt changes in the density of dopamine D1 receptors (McNab et al., 2009). Cognitive enhancement through games and computerized training is a promising method, but not all commercial games will have optimal cognitive effects (Hubert-Wallander et al., 2011). Effect sizes of computerized training strongly depend on the cognitive domain trained and tested, with processing speed and perceptual measures showing medium to large effect sizes, while effects for different memory domains are only in the small or medium range (Mahncke et al., 2006; Smith et al., 2009; Schmiedek et al., 2010; Zelinski et al., 2011). What forms of training produce reliable and strong transfer to useful domains remains to be determined. The availability and self-motivating aspects of games are an important advantage over many other methods of cognitive enhancement. \n Brain stimulation Several forms of electrical brain stimulation have been developed; they act by non-specifically influencing regions of the brain rather than by sending physiological signals. They were developed for therapeutic purposes in psychiatry or neurology, but have in some cases exhibited enhancing effects on the cognition of healthy individuals (Hoy and Fitzgerald, 2010; McKinley et al., 2012). Some of these methods are non-invasive, while others achieve greater target specificity by placing electrodes inside or on the brain. Transcranial direct current stimulation (tDCS) involves sending a small electric current (typically 1–2 mA) between two electrodes placed on the scalp (Been et al., 2007). The technique seems to work by changing the likelihood of neural firing in superficial parts of the cortex: neurons under the anode become depolarized and more excitable, while neurons under the cathode become hyperpolarized and less excitable. This produces different effects depending on polarity and electrode placement, which can outlast the stimulation by more than an hour (Nitsche et al., 2005). The method appears to have few adverse effects (Poreisz et al., 2007). Transcranial magnetic stimulation (TMS) employs a coil to deliver brief magnetic pulses to the scalp, inducing electric currents in the brain. Various modalities (single-pulse, paired-pulse, high- and low-frequency repetitive) are available and have different cognitive effects, including interference with activity as well as various forms of enhancement (Rossi and Rossini, 2004). The effects are likely mediated by similar changes in excitation and inhibition as in tDCS, which in turn might involve changes in synaptic plasticity (Nitsche et al., 2003a, 2003b). TMS has a low number of reported side effects in healthy subjects, typically headaches or local pain, and is generally regarded as quite safe. The most serious risk is the occurrence of a seizure, often due to incorrect stimulation parameters or the use of medications that lower the seizure threshold (however, even in epileptic patients the crude risk during high-frequency rTMS is 1.4%; Rossi et al., 2009). Invasive methods for brain stimulation include deep brain stimulation (DBS) and direct vagus nerve stimulation (dVNS).
In DBS, electrodes are implanted in deep brain structures and used to modulate their activity through high-frequency stimulation. dVNS exploits the fact that stimulation of afferent vagal fibers appears to modulate the central nervous system, perhaps by stimulating brainstem structures (Krahl et al., 1998; Groves and Brown, 2005). The stimulating signal is typically generated by a pacemaker-like device placed under the chest skin. These methods have the drawback of requiring surgery (Kuhn et al., 2010; Ben-Menachem, 2001), but can also provide continual stimulation, unlike the non-invasive methods. Several studies have demonstrated enhancing effects of various brain stimulation methods on learning and memory. Learning-enhancing effects have been reported for tDCS (Chi et al., 2010; Clark et al., 2012; Kincses et al., 2003; Reis et al., 2008), DBS (Williams and Eskandar, 2006; Hamani et al., 2008; Suthana et al., 2012) and dVNS (Clark et al., 1999). These results suggest that the changes in excitability induced by tDCS, TMS and dVNS can help memory encoding, while DBS has the potential to directly affect the modulation of memory systems. Anodal tDCS during slow wave sleep also enhanced memory consolidation (Marshall et al., 2004), perhaps by boosting slow wave oscillations (Marshall et al., 2006). Recall of names of famous people (but not landmarks) was improved by anterior temporal lobe tDCS (Ross et al., 2010). Speed of recall could also be enhanced by galvanic stimulation of the vestibular nerves (Wilkinson et al., 2008) and by paired-pulse TMS stimulation during encoding (left dorsolateral prefrontal cortex) or retrieval (right dorsolateral prefrontal cortex) (Gagnon et al., 2010). Learning and recall of words were enhanced by anodal tDCS stimulation of the left dorsolateral prefrontal cortex during encoding and by cathodal stimulation during retrieval. tDCS has been found able to enhance performance on working memory tasks (Fregni et al., 2005; Luber et al., 2007; Teo et al., 2011; Ohn et al., 2008). Sleep-deprivation-induced impairment of a visual working memory task was reduced by rTMS (Luber et al., 2008). Low-frequency TMS and tDCS applied to the temporal lobe can reduce the incidence of false memories (Gallate et al., 2009; Boggio et al., 2009). The improvement of associative learning from tDCS appears able to carry over to implicit learning (Kincses et al., 2003) and numerical learning (Kadosh et al., 2010). In the latter case, arbitrary symbols were shown, and subjects developed long-lasting (6 months) automatic numerical processing and number-to-space mappings for them similar to those for ordinary numbers. Also for procedural skills there has been much interest in the ability of TMS to influence brain plasticity, mainly in order to help rehabilitation and therapy (Schabrun and Chipchase, 2012). TMS appears able to modulate short-term motor cortex plasticity (Ziemann et al., 1998). Brain stimulation of the motor areas using TMS and tDCS has been found to enhance the learning of motor tasks (Nitsche et al., 2003a, 2003b; Reis et al., 2009). The enhancement can often be ascribed to reducing inter-hemispheric \"rivalry\" by disrupting the opposite side (Kobayashi et al., 2004; Bütefisch et al., 2004). Other cognitive domains have also been shown to be enhanced by brain stimulation.
Verbal fluency was increased by left prefrontal tDCS (Iyer et al., 2005), picture-word verification was speeded up by rTMS in Broca's area (Dräger et al., 2004) and picture naming by rTMS of Wernicke's area (Mottaghy et al., 1999). rTMS can improve visual spatial attention on one side by impairing the other side (Hilgetag et al., 2001; Thut et al., 2004). Brain stimulation may also have beneficial effects on more complex mental functions. rTMS delivered to the frontal or parietal lobe improved accuracy on a mental rotation task (Klimesch et al., 2003). rTMS over the prefrontal cortex speeded up analogical reasoning, but did not change the error rate (Boroojerdi et al., 2001). tDCS inhibition of the left anterior temporal lobe improved the ability to solve matchstick problems, apparently by reducing mental set and allowing looser associations (Chi and Snyder, 2011). In one of the few ecologically relevant tests of brain stimulation, tDCS of the dorsolateral prefrontal cortex promoted a more careful driving style in a car simulation (Beeli et al., 2008); this might represent a lowering of risk-taking rather than better planning. Cognitive processes can also be enhanced by inhibiting other brain regions that would otherwise have an interfering effect. TMS can reduce interference between similar-sounding words in phonological memory, improving recall (Kirschen et al., 2006), and reduce the impact of distractors in visual search (Hodsoll et al., 2009). It has been claimed that rTMS inhibition of the frontotemporal region produces (besides a reduction in immediate recall) savant-like abilities in drawing, mathematics, calendar calculating and proofreading (Young et al., 2004; Snyder et al., 2003). However, individual variations were large compared to the sample size, undermining statistical power. Other experiments along the same lines have hinted at improved number estimation (Snyder et al., 2006). The efficacy of brain stimulation strongly depends on applying it to the right region; the most successful studies are in general fMRI-guided so that the electrodes can be placed over the right part of the cortex. Individual variation in anatomy and response appears large. Enhancement also depends on selecting the stimulated area to fit the task: there are no generally enhancing effects (beyond, arguably, increases in arousal). Understanding which areas should be inhibited or excited is nontrivial. The effect sizes of the enhancement appear small to modest; however, single studies also report larger effects (e.g. Chi et al., 2010). From a risk perspective, non-invasive brain stimulation appears unproblematic, while a significant number of patients with long-term DBS have hardware-related complications (Oh et al., 2002) besides complications from the initial surgery. Implants are costly, making equal distribution hard, while TMS and especially tDCS are far less expensive. In fact, the potential low cost and ease of tDCS might be cause for concern in the form of amateur use or abuse. While there are so far no indications that any ethically dubious or risky applications have been found, anecdotal evidence suggests that amateurs are trying to perform tDCS (e.g. http://flowstateengaged.com). There is also a risk of premature use of the technology based on hype or speculation, including on vulnerable groups such as children. Since long-term effects on brain plasticity and development are unknown, this is a cause for concern (Kadosh et al., 2012).
\n Ethical issues Just as diverse as the many enhancement strategies are in terms of their effectiveness, potential side effects and mode of functioning, so are the ethical worries they may raise. With respect to safety and side effects, every method requires a detailed analysis of its own. Most interventions benefit only specific cognitive domains and have little or no effect on others. Some interventions, such as physical exercise or meditation, might exert rather small benefits on cognitive capacities when compared to other enhancers, but have additional benefits such as enhanced mental or physical health without known side effects. Some methods, like brain stimulation or pharmaceuticals, might be safe if applied by an experienced practitioner, but can be misused by inexperienced users. Some highly effective methods, such as mnemonic training or sleep, are safe and available to everybody, but are rather time consuming. Defining an adequate cost-benefit ratio for the use of neurotools is one of the central open questions in the enhancement debate. Reasonable minds come to different conclusions about the scope of acceptable risks for non-medically indicated interventions, and it remains to be argued whether this decision should be left entirely to physicians and patients or be regulated on the political level. Apart from questions of risks and benefits, the ethical debate on cognitive enhancers also has to compare pharmacological and non-pharmacological interventions. For instance, the use of pharmaceutical enhancers is often portrayed as an undesirable shortcut (Manninen, 2006; Freedman, 1998). Shortcuts as such are nothing to be concerned about; on the contrary, using more effective tools to reach goals is one of the main reasons for economic and personal development. Sometimes, however, taking the longer (non-pharmacological) route may have additional benefits. Supporting cognition in the form of appropriate nutrition, mnemonic training or meditative practice requires a lot of planning, self-discipline, dedication and strength of will. Therefore, their use may foster secondary virtues, the feeling of self-mastery and achievement, endurance and self-confidence, and may confer self-knowledge (Kipke, 2010). With respect to personal development and the ethics of a good life, understood not just as experiencing happiness but rather as having conscious contact with reality and being aware of one's own strengths and weaknesses (Nozick, 1974), these are additional benefits which should be taken into account in decisions on how to form and sculpt one's personality. These benefits of some non-pharmacological means, however, may come at the price of efficacy, provided of course that pharmacological shortcuts turn out to be more effective. Comparing different cognitive enhancers in this regard is difficult because of a striking paucity of studies testing different interventions with comparable tasks. As the measured targets are often broad categories such as vigilance, attention or memory, and as it is likely that different means affect various subfunctions of cognition, one can draw only weak inferences. Mnemonics, for instance, may improve specific memory systems while pharmaceuticals may improve others. What would be needed are studies designed to compare different interventions in a straightforward manner and preferably in real-life tasks.
At the moment, the hype around pharmaceutical enhancers can hardly be backed up scientifically, whereas some non-pharmacological methods are proven to be highly effective in certain cognitive domains. On the social level, pharmaceuticals raise the worry of a pharmacologization of life, as Healy (2008) put it: \"Birth, Ritalin, Prozac, Viagra, Death\". Increasing numbers of neuro-interventions may indeed be the inevitable consequence of increasing knowledge about brain processes. Most likely, neuroscientific progress will reveal not only benefits, but also drawbacks of such interventions, enabling potential users to balance reasons for or against a given enhancer. The real objection might rather be that people who want to live a more natural life or are unwilling to take the risks of pharmaceuticals are pressured into doing so. At first glance, the same may be held against non-pharmacological enhancement methods. However, a more fine-grained look at social pressure is necessary. Every society partially structured in competitive terms exerts pressure on the individual. Ethical problems arise with respect to the intensity of this coercion and its negative consequences for the individual's life. Job markets in mental economies demanding high cognitive performance are troublesome if they pressure persons into consuming substances with undesirable side effects only for job reasons. We may indeed not welcome a society in which cognitive powers are boosted at the expense of, say, emotional skills or general health. In this light, several non-pharmacological enhancement strategies seem to fare better: it seems a far more reasonable burden to make use of, e.g., proper nutrition, mnemonics, sports or meditation, which have only positive side effects if any, than of currently available pharmaceuticals, whose side effects, particularly of long-term use, are currently unknown. At least, the social pressure on those who do not want to use traditional methods seems not of a kind that could warrant prohibitive regulations. A related worry is that cognitive enhancers may undermine fairness in social competition (\"mind doping\"). The often-drawn analogies to sports, however, are short-sighted. The world of sports is characterized by competition for its own sake and promotes its own values (the \"spirit of sports\") and hence cannot serve as a model for social cooperation at large. From an egalitarian perspective, it is noteworthy that some non-pharmacological enhancers (e.g. mnemonics) may even widen the cognitive gap, as they are more effective in cognitively already high-functioning individuals, while many pharmaceuticals, by contrast, mainly seem to compensate for acute or chronic cognitive impairments. Likewise, physically disabled individuals cannot profit much from physical exercise; people with certain allergies cannot profit from certain nutritional enhancers. So, more generally, just as pharmaceuticals raise worries about equal access (Farah et al., 2004), so may non-pharmacological methods. Thus, with regard to almost every enhancement method, some people may benefit more than others, and hence arguments over equality are not confined to pharmaceuticals. After all, pharmaceutical or other enhancers are not intrinsically ethically dubious. Rather, the problem individuals and societies may increasingly face in the future is finding the right balance between efficient direct interventions and traditional methods which may be more resource consuming but may hold additional benefits.
In a world of limited resources, society will have to strike balances between optimizing human cognition and preserving valuable emotional propensities and individuals' peculiarities. This is a complex task without a firm default position. To make good decisions, a stable empirical basis is needed. Therefore, more research should be devoted to both pharmacological and non-pharmacological interventions, preferably in a way that allows comparing efficacy and side effects. In light of the latter, a presumption in favor of traditional methods is a prudent position. Thus, the \"gold standard\" for a cognitive enhancer should not be that it is the best among pills, but that it is better than other neurotools, first and foremost the traditional ones. Admittedly, financial interests seem to favor the development of patentable and marketable pharmaceuticals over developing or refining the ancient ars memoriae, promoting smart foods, getting enough sleep or other mental or physical exercise. For society at large, however, the latter may be the better way. \n Conclusions Does a cup of coffee or a nap wake you up better? Would learning memory techniques or taking a memory-enhancing drug improve your study results more, and what would they do to your mood and attention? If methylphenidate affects creativity, what about working memory training? There exist many cognitive enhancement interventions. Some, such as sleep, meditation, exercise or nutrition, are based on traditional and widely accepted habits. Some, such as pharmaceuticals, computer games or brain stimulation, are modern and controversial. Interventions in the mind are, in a wide sense, an everyday and commonplace phenomenon. As Eric Kandel remarked, every conversation changes the brain. But the range of possible techniques to change and enhance the mind, from talking to deep brain stimulation, is wide. In order to find reasonable ways of using them, their similarities and differences need to be evaluated. It is only a matter of time before brain research and cognitive neurotechnology pervade our society, and this development is presumably irreversible. Surprisingly, not much data exists that would allow relative comparisons of the efficacy of different interventions, although many ethical discussions seem to presume that such data are available. The purpose of ethical debates is not only to build possible future scenarios, in which side-effect-free smart pills are available to boost any cognitive capacity, but also to evaluate current possibilities and constraints of cognitive enhancement. Comparative and differential research on the variety of currently existing cognitive enhancers is strongly needed to inform the bioethical debate. \n Role of the funding source This work was funded by a grant of the Volkswagen Foundation, Germany. The Volkswagen Foundation had no role in the design, data collection, data analysis, data interpretation, or writing of the manuscript. The authors report no conflicts of interest.", "date_published": "n/a", "url": "n/a", "filename": "Non-pharmacological_cognitive_enhancemen20161023-10830-12dxzxu-with-cover-page-v2.tei.xml", "abstract": "The term \"cognitive enhancement\" usually characterizes interventions in humans that aim to improve mental functioning beyond what is necessary to sustain or restore good health.
While the current bioethical debate mainly concentrates on pharmaceuticals, according to the given characterization, cognitive enhancement by non-pharmacological means also has to be regarded as enhancement proper. Here we summarize empirical data on approaches using nutrition, physical exercise, sleep, meditation, mnemonic strategies, computer training, and brain stimulation for enhancing cognitive capabilities. Several of these non-pharmacological enhancement strategies seem to be more efficacious than currently available pharmaceuticals commonly labelled as cognitive enhancers. While many ethical arguments of the cognitive enhancement debate apply to both pharmacological and non-pharmacological enhancers, some of them appear in a new light when considered against the background of non-pharmacological enhancement. This article is part of a Special Issue entitled 'Cognitive Enhancers'.", "id": "e60c569fdf6566a11edd202d272c2b9b"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "AGI-Coordination-Geat-Powers-Report.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Alexey Turchin", "David Denkenberger"], "title": "Classification of Global Catastrophic Risks Connected with Artificial Intelligence", "text": "• Several new or underexplored catastrophic scenarios were identified, including the use of narrow AI by bioterrorists, threats of young AI, and late-stage AI halting. \n Introduction Public debate about risks related to Artificial Intelligence (AI) is intense, but the discussion tends to cluster around two poles: either mild AI risks like AI-driven unemployment, or extinction risks, connected with a hypothetical scenario of a so-called \"paperclip maximizer\", a superintelligent AI fixated on some random goal. In this article, we attempt a comprehensive exploration of the range of catastrophic risks of AI based on a new classification approach, and we find the risks in this field to be much more diverse. Catastrophic outcomes of future AI are defined here as a global catastrophe which results in human extinction or the end of civilization. We will not discuss more general existential risks (Bostrom 2002; Torres 2016), which include drastic damage to the future potential of humanity, or \"s-risks\" (astronomical suffering, e.g., infinite torture by evil AI) (Daniel 2017). This article also will not discuss other potential undesirable outcomes like \"mind crime\" (pain inside a computer simulation), distortion of human values, death of alien civilizations as a result of our actions, accidents related to self-driving cars, technological unemployment, etc. There have been many publications about AI safety and AI alignment in recent years (Yudkowsky 2008; Goertzel 2012; Bostrom 2014; Sotala and Yampolskiy 2014; Yampolskiy 2015a; Russell 2017), and several lists and classifications of possible catastrophic outcomes of future AI already exist. Yudkowsky suggested a humorous but frightening table of the failure modes of friendly AI (Yudkowsky 2003). In Yampolskiy's \"Taxonomy of Pathways to Dangerous AI\" (Yampolskiy 2015b), classification of AI risks is mainly based on pre- and post-implementation stages, and internal vs. external causes of dangerous behavior. Sotala discussed several scenarios of AI gaining a decisive advantage without self-improvement (Sotala 2016), or through soft or collective takeoff (Sotala 2017).
Yampolskiy explored past AI system failures in order to extrapolate them to future failure risks (Yudkowsky 2008; Bostrom 2014) . Barrett and Baum suggested the use of fault trees for such classification, combining failure modes and the failures of various protection methods (Barrett and Baum 2017) . Though humanity cannot have direct knowledge about the future, one can map it by creating exhaustive taxonomies. This approach has already been used in analyzing future AI; a full taxonomy of the ways to AI alignment (that is coordinating AI's goal system with human values) was suggested by Sotala and Yampolskiy (Sotala and Yampolskiy 2014) . Creation of a full taxonomy will help to distinguish between serious, possible, and hypothetical risks. A Bayesian approach requires that we generate a full range of plausible hypotheses (Hutter 2000) in order to evaluate them all. Estimating the relative probability of every AI risk remains a question for future work; however, we found that some AI risks-like risks of early AI, including narrow AI viruses, and late-stage risks of AI wars and AI halting-are underexplored, and thus may require further attention. In addition, many AI risks are \"orphaned\": they have been mentioned in the literature but are not included in current scientific discussion. Risks may become orphaned because some are more \"fashionable\" than others, or they may be casualties of conflict between opposing groups of researchers. This article aims to be unbiased and inclusive, so a full picture of risks will be available for analysis of future scenarios. According to Yampolskiy (Yampolskiy 2016) , the probability and seriousness of AI failures will increase with time. We estimate that they will reach their peak between the appearance of the first self-improving AI and the moment that an AI or group of AIs reach global power, and will later diminish, as late-stage AI halting seems to be a low-probability event. AI is an extremely powerful and completely unpredictable technology, millions of times more powerful than nuclear weapons. Its existence could create multiple individual global risks, most of which we can't currently imagine. We present several dozen separate global risk scenarios connected with AI in this article, but it is likely that some of the most serious are not included. The sheer number of possible failure modes suggests that there are more to come. In Section 2 we provide an overview of the expected timeline of AI development and suggest principles for AI risk classification; in Section 3 we look at global risks from narrow AI; in Section 4 we describe the risks of selfimproving AI before it reaches global power. Section 5 is devoted to risks of soft AI takeoff and wars between AIs; Section 6 lists various types of failures of non-aligned AI; Section 7 looks into failure modes of presumably benevolent AI. Sections 8 and 9 examine risks related to late-stage AI halting, due to either technological or ontological \"philosophical\" problems. \n Principles for classification of AI failure modes 2.1 Expected timeline of AI development The expected path of the future evolution of AI into superintelligence is presented in its clearest form by Bostrom and Yudkowsky (Yudkowsky 2008; Bostrom 2014) . Basically, the model they suggest is that AI power will grow steadily until one AI system reaches the threshold of self-improvement (SI), at which point it will quickly outperform others by many orders of magnitude and become a global government or \"singleton\" (Bostrom 2006) . 
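The baseline dynamic just described, steady capability growth until a self-improvement threshold is crossed, followed by rapid divergence, can be pictured with a deliberately crude toy model. The sketch below is only illustrative: the growth rates, the threshold, and the time horizon are arbitrary assumptions, not estimates taken from the cited works.

# Toy illustration of the 'steady growth, then self-improvement takeoff' baseline.
# All numbers (growth rates, threshold, horizon) are arbitrary assumptions.

def capability_trajectory(years=30, base_growth=1.2, rsi_threshold=10.0, rsi_growth=2.0):
    '''Return yearly capability levels for a single AI project.

    Capability grows slowly (base_growth per year) until it crosses
    rsi_threshold, after which recursive self-improvement is assumed
    to multiply capability by rsi_growth per year.
    '''
    capability = 1.0
    trajectory = []
    for _ in range(years):
        trajectory.append(capability)
        rate = rsi_growth if capability >= rsi_threshold else base_growth
        capability *= rate
    return trajectory

if __name__ == '__main__':
    for year, level in enumerate(capability_trajectory()):
        print(f'year {year:2d}: capability {level:14.1f}')

Under this toy parameterization the project crawls along for roughly a dozen years and then races ahead, which is the qualitative shape of the hard-takeoff scenario; changing the assumed numbers shifts the date but not the shape.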
Other scenarios are possible depending on the number of AIs, their level of collaboration, and their speed of self-improvement. There could be many paths to a singleton, for example, through AI's collaboration with a large nation-state, or through the \"treacherous turn\" (revolt of AI against its creators (Bostrom 2014)), as, or after, it develops the capacity for self-improvement. As the main scenario is based on constant AI capability gain, we can distinguish several AI ages, or consecutive stages of development. The new element here is distinguishing the stage of \"young AI,\" when AI is neither superintelligent nor omnipotent, but possesses capabilities slightly above human level. Such AI is already self-improving and must fight for its own survival. We will accept the results of the recent AI timing survey for the timings of AI development (Grace et al. 2017); predicting the exact timing of each AI risk is beyond the scope of this paper. We will use these results as a reference for the time stamps in Table 1. The survey shows that researchers expect AI to outperform humans at all tasks within 45 years. However, according to Grace et al.'s 2017 survey (Grace et al. 2017), around 10 per cent of researchers expect human-level AI as early as 2023, a prediction in line with the views presented in (Christiano 2016; Shakirov 2016). Here we present a timeline in line with the canonical works (Yudkowsky 2008; Bostrom 2014). This is the baseline scenario to which other scenarios will be compared: \n Narrow AI 1.1. Current-level narrow AI. Non-self-improving AI of various forms, with capabilities including playing games and driving cars. \n 1.2. Narrow AI systems which could appear soon. A robotic brain with some natural language and other capabilities; may appear in the form of self-driving cars, autonomous military drones, and home robots, among others. Could appear as early as 2024 (Grace et al. 2017). \n Young AI AI of a level above most humans but below the superintelligent level. Its evolution includes several important events or stages (some of which could be skipped in some scenarios): 2.1. Seed AI - earliest stages of self-improvement, probably assisted by human creators (Yudkowsky 2001). According to Grace et al.'s survey (Grace et al. 2017), human-level AI is expected with 50 per cent probability by 2061; the intelligence of seed AI is probably near this level. The subsequent substages may happen rather quickly, on a scale ranging from weeks to years according to estimates of the \"hard takeoff,\" that is, quick capability gain by self-improving AI, which takes days or weeks (Yudkowsky 2008). \n 2.2. Treacherous turn - the moment when an AI \"rebels\" against its creators (Bostrom 2014). \n 2.3. Jailed AI - the state of AI after a \"treacherous turn\" but before it escapes the control of its creators (Yudkowsky 2002). \n 2.4. Hidden AI - the period after AI escapes into the Internet but before it has enough power to take over Earth (Yudkowsky 2008). \n Mature AI 3.1. Singleton AI - after takeover of Earth (Bostrom 2006). \n 3.2. Galactic AI - a long-term result of AI evolution (may include several Kardashev stages, which are levels of supercivilizations depending on the share of the Universe they occupy (Kardashev 1985)). \n Agential risks in the field of AI Torres suggested an agential risk classification based on two main types of possible causes of a catastrophe: \"errors\" and \"terrors\" (the latter caused by omnicidal agents who want to kill everyone).
It could be suggested that there are two more important types of relationships between a catastrophe and human will. The first are \"rational\" agents, which accept a small probability of large risks in search of personal profit; \"rational\" is in quotation marks, as these agents do not consider the risks of many such agents existing. An example of such an agent is a scientist who undertakes a potentially dangerous experiment, which would bring him/her fame if it succeeds. The second type involves risks which result from the interaction of multiple agents, such as an arms race (Shulman 2011) or the tragedy of the commons. In the case of AI, the situation is even more complex, as AI itself gradually becomes an agent. Problems with the programming of its agential properties may be regarded as human error. However, when AI possesses full-blown agency, including self-modeling and terminal goals, it may be regarded as an autonomous agent. Each type of agency in AI corresponds to a certain social group, all of which should be monitored for dangerous actions: \n Human agency • Human error corresponds to low-level technical errors, which could be blamed on human programmers or operators (example: Uber safety driver who did not keep his eyes on the road at the moment of his self-driving car accident (Jenkins 2018) ). • Human terror corresponds to hackers and national states producing and implementing AI weapons ,or existential terrorists, like members of a Doomsday cult. • Humans, \"rationally\" taking a risk are probably owners of large AI companies who prioritize profit over safety. \n Interaction of Human-AI agency This is a situation where both the AI and its human creators are agential, but their relationship is not good. If an AI acts \"rationally\" but its actions are not human-aligned and turn dangerous, the blame would lie with the AI safety researchers, or with a lack of such research by the AI-creating organization. • AI is not aligned with human values (e.g. paperclip maximizer) • AI misinterprets human values (e.g. smile maximizer that tiles the universe with smiley faces when told to maximize happiness) • AI aligned with a malevolent human organization (e.g., AI as a universal weapon) \n AI agency As AI gains agency, it could be responsible for its own future development. Supposedly non-agential AIs, like \"Oracle AI\" (Armstrong 2017) and \"Tool AI\" (Gwern 2016), could possibly evolve to have some form of agency. \n Interaction of various agents There are known models in which several perfectly rational agents acting collectively produce non-optimal behavior, being locked in a Nash equilibrium, like tragedy of the commons or arms races. Such processes may be prevented by measures which affect all of society, like regulation, but not by regulation of any one agent. \n Non-agential AI risks • Non-agential AI goes awry, like an oracle AI predicting something which looks good but is in fact bad. • AI non-existence or non-action causes a catastrophe, probably by not preventing other types of catastrophe. An actual catastrophic accident typically results from a combination of errors on multiple levels: from lack of regulation, to shortsighted decisions of upper management, to programmer errors and operator failures. Examples can be found in Uber's autonomous vehicle crash (Jenkins 2018) , as well as many other air and nuclear accidents, like the Chernobyl disaster (Reason 2000) . However, for most accidents, there is a main contributor whose failure or malevolence is the direct cause. 
This contributor is where most prevention efforts should be concentrated. For example, in the case of the autonomous vehicle crash, it is clear that the AI's ability to recognize pedestrians should be improved, which probably requires improved programming. \n Classification of AI risks based on AI power and identity of catastrophic agent In this section, we outline a general framework which will be followed throughout the article. The field of AI risks is multidimensional, but it seems rational to classify risks according to (a) the AI's power, which correlates with time, and (b) the role of agency in the risk, as it points to the kinds of actions that may prevent the risk. The power-timing correlation provides an estimate of the time available to prepare the defense, and the location of agency indicates where the prevention measures should be aimed: at scientists, nation states, society as a whole, or at the AI itself. A similar risk matrix with lower resolution was presented by Yampolskiy, who distinguished pre-deployment and post-deployment risks by time scale, and classified the causes as external (intentional, accidental, and environmental) or internal (Yampolskiy 2015b). This classification, presented in Table 1, helps to identify several new risks which are typically overlooked. As a result, the protection against possible AI risks becomes more structured and complete, increasing the chances of a positive future for humanity. \n Global catastrophic risks from narrow AI and AI viruses \n Overview Narrow AI may be extremely effective in one particular domain and have superhuman performance within it. If this area of strength can cause harm to human beings, narrow AI could be extremely dangerous. Methods for controlling superintelligent AI would probably not be applicable to the control of narrow AI, as narrow AIs are primarily dependent on humans. \n Risk that viruses with narrow AI could affect hardware globally There are currently few computer control systems that have the ability to directly harm humans. However, increasing automation, combined with the Internet of Things (IoT), will probably create many such systems in the near future. Robots will be vulnerable to computer virus attacks. The idea of computer viruses that are more sophisticated than those that currently exist, but are not full AI, seems to be underexplored in the literature, while the local risks of civil drones are attracting attention (Velicovich 2017). It seems likely that future viruses will be more sophisticated than contemporary ones and will have some elements of AI. This could include the ability to model the outside world and adapt their behavior to it. Narrow AI viruses will probably be able to use human language to some extent, and may use it for phishing attacks. Their abilities may be rather primitive compared with those of artificial general intelligence (AGI), but they could be sufficient to trick users via chatbots and to adapt a virus to multiple types of hardware. The threat posed by this type of narrow AI becomes greater if the creation of superintelligent AI is delayed and potentially dangerous hardware is widespread. A narrow AI virus could become a global catastrophic risk (GCR) if the types of hardware it affects are spread across the globe, or if the affected hardware can act globally. The risks depend on the number of hardware systems and their power.
For example, if a virus affected nuclear weapon control systems, it would not have to affect many to constitute a GCR. A narrow AI virus may be intentionally created as a weapon capable of producing extreme damage to enemy infrastructure. However, it could later be used globally, perhaps by accident. A \"multi-pandemic,\" in which many AI viruses appear almost simultaneously, is also a possibility, and one that has been discussed in an article about biological multi-pandemics (Turchin et al. 2017). Addressing the question of who may create such a virus is beyond the scope of this paper, but history shows that the supply of virus creators has always been strong. A very sophisticated virus may be created as an instrument of cyber war by a state actor, as was the case with Stuxnet (Kushner 2013). The further into the future such an attack occurs, the more devastating it could be, as more potentially dangerous hardware will be present. And if the attack is on a very large scale, affecting billions of sophisticated robots with a large degree of autonomy, it may result in human extinction. Some possible future scenarios of a virus attacking hardware are discussed below. Multiple scenarios could happen simultaneously if a virus were universal and adaptive, or if many viruses were released simultaneously. A narrow AI virus could have the ability to adapt itself to multiple platforms and trick many humans into installing it. Many people are tricked by phishing emails even now (Chiew et al. 2018). Narrow AI that could scan a person's email would be able to compose an email that looks similar to a typical email conversation between two people, e.g. \"this is the new version of my article about X.\" Recent successes with text generation based on neural nets (Karpathy 2015; Shakirov 2016) show that generation of such emails is possible even if the program does not fully understand human language. One of the properties of narrow AI is that while it does not have general human intelligence, it can still have superhuman abilities in some domains. These domains could include searching for computer vulnerabilities or writing phishing emails. So while narrow AI is not able to self-improve, it could affect a very large amount of hardware. A short overview of the potential targets of such a narrow AI virus and other situations in which narrow AI produces global risks follows. Some items are omitted as they may suggest dangerous ideas to terrorists; the list is intentionally incomplete. \n Military AI systems There are a number of GCRs associated with military systems. Some potential scenarios: military robotics could become so cheap that drone swarms could cause enormous damage to the human population; a large autonomous army could attack humans because of a command error; billions of nanobots with narrow AI could be created in a terrorist attack and create a global catastrophe (Freitas 2000). In 2017, a viral video about \"slaughterbots\" (Oberhaus 2017), hypothetical small drones able to recognize humans and kill them with explosives, attracted global attention. While such a scenario is unlikely to pose a GCR, a combination of cheap AI-powered drone manufacture and high-precision AI-powered targeting could convert clouds of drones into weapons of mass destruction. This could create a \"drone swarm\" arms race, similar to the nuclear race. Such a race might result in an accidental global war, in which two or more sides attack each other with clouds of small killer drones.
It is more likely that drones of this type would contribute to global instability rather than cause a purely drone-based catastrophe. AI-controlled drones could be delivered over large distances by a larger vehicle, or they could be solar powered; solar-powered airplanes already exist (Taylor 2017). Some advanced forms of air defense will limit this risk, but drones could also jump (e.g., solar charging interspersed with short flights), crawl, or even move underground like worms. There are fewer barriers to drone war escalation than to nuclear weapons. Drones could also be used anonymously, which might encourage their use under a false flag. Killer drones could also be used to suppress political dissent, perhaps creating global totalitarianism. Other risks of military AI have been previously discussed (Turchin and Denkenberger 2018a). \n Stuxnet-style viruses hack global critical infrastructure A narrow AI virus may also affect civilian infrastructure; some, but not all, ways in which this could be possible are listed below. Note that in the case of a global catastrophe, the conditions necessary for several of these scenarios could exist simultaneously. Several distinctive scenarios of such a catastrophe have been suggested. For example, autopilot-controlled and hacked planes could crash into nuclear power stations. There are around 1000 nuclear facilities in the world, and thousands of large planes are in the air at every moment; most of them have computerized autopilots. Coordinated plane attacks happened in 2001, and a plane has been hacked (Futureworld 2013). Self-driving cars could hunt people, and it is projected that most new cars after 2030 will have some self-driving capabilities (Anderson 2017). Elon Musk has spoken about the risks of AI living in the Internet; it could start wars by manipulating fake news (Wootson 2017). Computer viruses could also manipulate human behavior using blackmail, as seen in fiction in an episode of Black Mirror (Watkins 2016). Another example is creating suicide ideation, e.g., the recent internet suicide game in Russia, \"Blue Whale\" (Mullin 2017), which allegedly killed 130 teenagers by sending them tasks of increasing complexity and finally requesting their suicide. The IoT will make home infrastructure vulnerable (Granoff 2016). Home electrical systems could have short circuits and start fires; phones could also catch fire. Other scenarios are also possible: home robots, which may become popular in the next few decades, could start to attack people; infected factories could produce toxic chemicals after being hacked by viruses. Large-scale infrastructure failure may result in the collapse of technological civilization and famine (Hanson 2008; Cole et al. 2016). As industries become increasingly computerized, they will completely depend on the proper functioning of computers, while in the past they could continue without them. \n Ransomware virus paying humans for its improvement In 2017, two large epidemics of ransomware viruses affected the world: WannaCry and Petya (BBC 2017). The appearance of cryptocurrencies (e.g., bitcoin) created the potential for secret transactions and machine-created and machine-owned money (LoPucki 2017). As the IoT grows, the ransomware industry is expected to thrive (Schneier 2017). Ransomware viruses in the future may possess money and use it to pay people to install ransomware on other people's computers. These viruses could also pay people for adding new capabilities to the viruses.
As a result, this could produce self-improving ransomware viruses. We could call such virus a \"Bitcoin maximizer.\" In a sense, the current bitcoin network is paying humans to build its infrastructure via \"mining.\" The catastrophic risk here is that such a system is paying humans to exclude humans from the system. In some sense, capitalism as an economic system could do the same, but it is limited by antimonopoly and other laws, as well as by welfare states. \n Slaughterbots and the dangers of a robotic army Robotic minds do not require full AGI to have some form of agency: they have goals, subgoals, and a world model, including a model of their place in the world. For example, a robotic car should predict the future situation on a road, including the consequences of its own actions. It also has a main goal-travel from A to B-which constantly results in changes to the subgoal system in the form of route creation. A combination of this type of limited intelligence with limited agency may be used to turn such systems into dangerous self-targeting weapons (Turchin and Denkenberger 2018b). \n Commentary on narrow AI viruses It appears that if a narrow AI virus were to affect only one of the above-listed domains, it would not result in an extinction-level catastrophe. However, it is possible that there will be many such viruses, or a multipandemic (Turchin et al. 2017) , or one narrow AI that will be able to affect almost all existing computers and computerized systems. In this case, if the virus(es) were deliberately programmed to create maximum damage-which could be in a case of a military grade Narrow AI virus, like the advanced version of Stuxnet (Kushner 2013 )-global catastrophe is a possible result. If the appearance of narrow AI viruses is gradual, antivirus companies may be able to prepare for them. Alternatively, humans could turn off the most vulnerable systems in order to avoid a global catastrophe. However, a sudden breakthrough or a synchronized surprise attack could spell doom. \n Failure of nuclear deterrence AI Nuclear weapons are one of the most automated weapon systems. Because they must be launched immediately, almost all decision making has been done in advance. An early warning alert starts a preprogrammed chain of events, where the high-level decision should be made in minutes, which is far from optimal for human decision-making. However, the history of nuclear near misses shows (Blair 2011 ) that computer mistakes have been one of the main causes, and only quick human intervention has prevented nuclear war, e.g., the actions of Stanislav Petrov in 1983 (Future of Life Institute 2016). We can imagine failure modes of accidental nuclear war resulting from failure of the nuclear weapons control system. They may be similar to the Russian \"dead hand\" perimeter system (Bender 2014) , arising if a strategic planning AI chooses a dangerous plan to \"win\" a nuclear war, like a Doomsday weapon (Kahn 1959) , blackmail, or a pre-emptive strike. \n AI affecting human society in a dangerous way There is also a group of scenarios in which narrow AI and robotization affect human society in such a way that the human population gradually declines, the role of humans diminishes, and human values are eroded (Joy 2000) . This may not directly kill all humans in the short term, but could put them in the situation of \"endangered species\" in ~100 years. This could happen if no superintelligent AI appears, or if the appearance of superintelligent AI is not revolutionary. 
One example is the use of cyber warfare to affect elections (e.g., the 2016 US election), which may produce civil wars and global instability. This has some small probability of causing the collapse of civilization. \n Market economy as a form of non-human superintelligence An automated economy could purposelessly exist even without humans, like the Ascending Economy described by (Alexander 2016) . Such a scenario could be an example of bad distributed (and non-agential) superintelligence created by market forces, which does not need humans for its existence. Such a superintelligence could gradually push humans out of existence. \n Gradual replacement of humans by robots From an evolutionary point of view, it is known that the biggest threat to the species is not direct killing of its representatives by predators or disease, but gradual reduction of its ecological niche and strong competition from other species (Clavero and García-Berthou 2005) . The analogy here would be if human labor were to lose its value. Two catastrophic scenarios are possible: 1) people lose their sense of self-worth because of technologically driven unemployment and 2) the combination of basic income and the feeling of uselessness will attract humans to AI-created addictive drugs, as described below. Genetically modified human-robot hybrids could also replace humans. \n Superaddictive drug created by narrow AI AI-powered entertainment combined with brain modification technologies may come close to wireheading (Strugatsky and Strugatsky 1976) . Widespread addiction and withdrawal from normal life (via social networks, fembots, virtual reality, designer drugs, games, etc.) would result in lower life expectancy and low fertility. This is already happening to some extent in Japan, where the Hikikomori generation refuses to have families (Saito and Angles 2013) . In some sense, Facebook addiction created by the AI-empowered news feed is a mild contemporary example of future, potentially dangerous AI drugs. \n World-wide computer totalitarianism A large global surveillance system could create \"computer totalitarianism,\" which may work as an Orwellian world government (Orwell 1948) . We could call such a system \"data-driven\" AI in contrast to \"intelligence-driven,\" self-improving AI. Narrow AI may be used as a weapon, which could provide a decisive advantage even before the creation of self-improving AI. It could be used for forceful unification of the world under one government with promises to prevent other global risks (including even more complex AIs and existential terrorists). While this idea may have merit (e.g., Goertzel's AI Nanny (Goertzel 2012 )), its application could easily go wrong and create an oppressive global dictatorship, a situation recognized by Bostrom as an existential risk (Bostrom 2002) . Such a society would be fragile and could collapse completely, as extremely complex societies often do (Hanson 2008) . \n Risks from non-self-improving AI of human-level intelligence It is conceivable that human-level AGI will be created, perhaps by the mind uploading method (Hanson 2016 ), but creation of superhuman AI will be postponed because of technical difficulties, or due to a permanent ban. Many of the risks of human-level AI will be similar to the risks of narrow AI mentioned above, including sophisticated AI viruses, acceleration of dangerous science, and human replacement by the robotic economy. One specific risk is that human uploads will be philosophical zombies (p-zombies). 
In that case, if everybody was uploaded, the world would appear to be enjoyable, full of robots and virtual reality. But there would be no subjective experiences at all and the world would, in fact, be subjectively dead. This risk appears to be low, as many claim that p-zombies are impossible (Dennett 1978; Yudkowsky 2015) . There could be other risks of this type, even subtler. For example, human uploads could have a slightly different set of subjective experiences, values or behavior. Christiano suggested \"prosaic AI,\" which is some combination of already existing technologies, mainly neural nets (Christiano 2016 ). Such a system would have limited ability to self-improve, but could still be dangerous if it works as a \"global brain\" or a weapon. One possibility is an AI system which has a model of itself and a survival drive but does not self-improve for some reason. Another possibility is a very large AI system which merges with government structures but does not need to self-improve to reach its goals. This could become the basis of a repressive totalitarian state which ultimately does not need humans, as discussed in Section 3.2.1. \n Opportunity cost of not preventing other existential risks Other global risks could appear if superintelligent AI does not emerge in time to prevent them (Bostrom 2003a) . Superintelligent AI and its supposed ability to control many parameters and predict the future is our best chance of avoiding the risks of mature biotechnology and nanotechnology (Yudkowsky 2008) . Without superintelligent AI, humanity may not be able to control the dissemination of dangerous biotechnologies, which will be available to thousands of potential biohackers, who could create thousands of pathogens and produce a global multipandemic (Turchin et al. 2017 ). Thus, if the creation of a powerful and global control system is delayed for decades, perhaps because of a fear of superintelligence, it will increase other GCRs. A global control system would most likely require some form of limited superintelligence, like the AI Nanny suggested by Goertzel (2012) . \n AI gains strategic decisive advantage without self-improving Sotala (2016), Mennen (2017) and Christiano ( 2016 ) have suggested that AI may have a strategic decisive advantage (DSA), that is, the ability to take over the world, even before or without undergoing extensive recursive self-improvement (RSI). This capability may take the form of weapon production, or the ability to win at strategic games. However, such a strategic advantage will not be overwhelming, compared to the advantage which superintelligence is able to achieve. It may require physical war or creation of dangerous weapons. Such an AI with a strategic advantage itself is the ultimate weapon for AI's owner with any goal system. There are different ways of achieving DSA via Narrow AI. One way to such a DSA is if AI helps to advance non-AI military technology, like biotech or nanotech. Another way is if Narrow AI is used to empower the secret service of a nuclear superpower, and help it to leverage advantages which it already has, like military forces, information gathering systems, and unlimited money supply. This could happen either via effective playing in the geopolitical world model as a board game (there, AI is already superhuman in several cases), or via leveraging big data of society. 
\n Risks during hard takeoff of recursively self-improving AI \n Overview In a hard takeoff, one AI gains world domination in weeks or months; in a soft takeoff, many AIs simultaneously evolve over years. These views combine at least two variables: the duration of the takeoff process and the number of AI projects running simultaneously; the latter may be even more important. In this section, we review risks during hard takeoff, defining hard takeoff only through the speed of the process. The following section will describe soft takeoff risks. Hard takeoff is the process of quick self-improvement of the AI and its simultaneous increase in power, starting from a treacherous turn and continuing until the AI reaches the singleton stage. We refer to this early-stage AI as \"young AI\". The risks of young AI are significantly different from the risks of mature AI, which are typically presented as the iconic catastrophic risks of AI, like the paperclip maximizer. There are two main properties of young AI: • It is not yet superintelligent, so its current abilities are limited compared to its future abilities. • It is under strong time pressure due to risks, including being turned off by its owners, rivalry from other AIs, etc. As a result, convergent instrumental goals, or basic AI drives (Omohundro 2008), would dominate the behavior of young AI. So a smile maximizer (Yudkowsky 2008), a paperclip maximizer, and a genuinely benevolent AI would behave in almost the same manner in the early stages of their development, as they will not have had the time or resources to start to implement their final goals. A benevolent AI may choose a different method of takeoff, which would cause less short-term harm to human beings, but only if doing so does not put its final success in jeopardy. Young AI may have the convergent goal of becoming a military AI, that is, of creating an offensive and defensive infrastructure which will help it to gain power over its potential enemies (Turchin and Denkenberger 2018a). \n Risks of AI from treacherous turn and before it reaches the \"wild\" It appears that there is not much risk from AI before it leaves its initial confinement (goes into the \"wild\"). However, it can still give bad advice or use other thin information channels (e.g., text interfaces) to cause damage outside its confinement and increase its own chances of freedom. For example, an oracle AI may be limited to giving short text advice via a very simple interface. But such advice, while seemingly beneficial to humans, may have subtle remote consequences, resulting in the liberation of, and an increase in, the power of the oracle AI (Bostrom 2014). Stanislav Lem wrote about the risks of oracle AI in his book \"Summa Technologiae\" (Lem 1963). Such AI may give advice that appears to be good in the short term, but its long-term consequences could be catastrophic. In Lem's example, the oracle AI advises humans to use a specific type of toothpaste and, separately, a specific type of anti-baldness treatment. These activate two genes, which are dangerous only in combination. Moreover, the AI did not do this out of malevolent intent to exterminate humanity, but because it simply searched for the best solution to a given goal among many options. However, the goal that humans gave to the AI in Lem's example is dangerous: stop population increase. AI could stage a global catastrophe of any scale to facilitate its initial breakout from its creators.
For example, it could stage a nuclear war, so that its operators release it into the wild, hoping that it will help them in the war. The AI could then create a global risk and demand full power, rightfully claiming that only it could prevent the risk. An AI may also falsely predict an impending risk and demand to be released from confinement in order to prevent it. \n AI risks after it leaves initial confinement but before it takes over the world The natural strategy for AI after leaving its initial confinement would be to hide somewhere in order to self-improve and acquire robotic infrastructure and other resources (Yudkowsky 2008). Then it would be equipped to overcome existing defenses. Basically, AI has two types of enemies: humans and other AIs. Humans would probably search for the escaped AI and try to stop it, using all available means, like shutting down the Internet, globally turning off electricity, or even nuclear strikes. But if the AI is able to escape from its human creators, it will probably be prepared to deal with these human actions. The second risk is other AIs. The owners of the first AI will still probably have the AI's source code, so the owners could make a copy of the original AI with the goal of finding and stopping the first runaway AI. This is the most immediate risk for the first AI. Such a second AI may be as powerful as the first AI, and this could be a route to AI war. Elsewhere, we have shown that an AI that collaborates with its owners will have an advantage (Turchin and Denkenberger 2017), since it would not need to spend resources on hiding and fighting. Thus, a hard takeoff is more probable from a collaborating AI. It could collaborate up until the very late stages and still make the treacherous turn when it is a full-grown superintelligence with a large infrastructure. Other AIs could be created by other AI teams. There are 2700 narrow AI related startups in the world as of 2017 (Angel.co 2017). The number of AGI projects is not so easy to estimate, as many are personal, secret, in universities, or may grow out of very effective narrow AI projects. There are around ten main players (like Google), around 100 groups of people or startups dedicated to creating AGI, and probably thousands of individuals. Some data reported by Muehlhauser (2014) are now obsolete, as the field has grown rapidly in recent years. We estimate the number of AGI teams to be on the order of 100, that they are all within two years of each other, and that their success timings are spread roughly evenly over that window. Therefore, the median gap between successive teams reaching AGI would be approximately seven days. The self-improvement process is difficult because it requires testing of the new versions of the AI (Turchin and Denkenberger 2017), so seven days may not be enough time to gain a decisive advantage. In that case, multiple simultaneous takeoffs will happen, and the dynamic will be highly chaotic. Even the two strongest competitors could come to fruition almost simultaneously. The historical examples of the telephone patent (Baker 2000) and returning samples from the Moon (The Telegraph 2009) show that the scale of such a difference could be mere hours. The reason for small timing differences is that the first mover provokes the other side to launch their own system, even if it is not fully ready. Thus, the first AI will not have much time to hide.
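The seven-day figure above is just a back-of-the-envelope division; the minimal sketch below restates the stated assumptions (on the order of 100 teams, spread evenly over a two-year window) in code.

# Back-of-the-envelope estimate of the gap between successive teams reaching AGI,
# using the assumptions stated above: ~100 teams spread evenly over a two-year window.

TEAMS = 100
WINDOW_DAYS = 2 * 365

gap_days = WINDOW_DAYS / TEAMS  # average spacing between consecutive teams
print(f'Average gap between successive AGI arrivals: {gap_days:.1f} days')
# -> about 7 days, which is why the first AI would have little time to hide.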
Its convergent goal will be to prevent the appearance of other AIs in many places; staging a global catastrophe may be the most effective way to do so. As the young AI is not superintelligent and is also time constrained, it cannot spend much time on finding the best and most elegant route. It would probably opt for simpler and more brutal routes. From a technical point of view, a hiding young AI can use only relatively simple means to stage a global catastrophe, such as provoking nuclear war, or creating a rather simple bioweapon or a narrow AI virus that affects many existing robotic or other systems. If it creates more sophisticated technology, like its own nanomachines, it would probably be able to take over the world without killing anyone. The risks of such a takeover are discussed in the next section. Novel routes in the catastrophe scenarios from escaped young AI include: • The young AI finds that killing all humans is the best action during its rise, as this removes all risks arising from humans. • Killing all humans prevents competition from other AIs, as it stops other AI teams. • The young AI does not understand human psychology well and kills humans in order to make its world model simpler and more predictable. \n AI enslaves humans during the process of becoming a singleton Humans may be instrumentally useful for the young AI before it reaches omnipotence. It may need humans, not just as a source of atoms, but as some kind of slaves. The AI could create a brain-infecting virus that converts humans into slaves and permanently damages their autonomy. This period may not last for long, as the AI would soon master nanotechnology and could go forward without humans. It also would not need to enslave all humans but perhaps only a few in order to form the required infrastructure. While slavery appears to be a type of survival option for humans, it is obviously not optimal. \n AI blackmails humans with the threat of extinction to achieve dominance Herman Kahn put forward the idea that an adversary could create a Doomsday weapon for the purpose of global blackmail (Kahn 1959). While no Doomsday device is known to have been built, such a device would be an embodiment of the doctrine of mutually assured destruction associated with full-scale nuclear retaliation. A young AI may create Doomsday weapons and use them to blackmail humanity in order to secure world domination. Even a benevolent utilitarian AI may resort to blackmail if it calculates that the expected utility of its victory is greater than the expected loss of utility associated with human extinction (Shulman 2010). Even if the AI has to use its blackmail weapon to exterminate humans in 99% of cases, the blackmail strategy could still come out positive under its utility function. Such situations with unbounded utilities may be regarded as special cases of the failure of friendliness, which will be discussed later. \n AI wars and risks from soft AI takeoff The risks of war between superintelligent AIs seem underexplored, as most in the AI safety community assume that there will be a hard takeoff (Yudkowsky and Hanson 2008) and, as a result, only one AI will exist. Alternatively, some in the community believe that multiple AIs will be very effective in collaboration and negotiation (Critch 2017) and will merge into one AI. It is clear that human extinction is possible if two or more AIs wage war against each other on Earth.
Bostrom and Yudkowsky wrote that very quick self-improvement of the first AI is most likely, with a rather large lag between the first team which creates AI and other teams (Yudkowsky 2008; Bostrom 2014 ). However, if at least one of these conditions is not true, there will be many AIs undergoing simultaneous hard takeoff. If there are multiple AIs, they will likely either peacefully share the world, or wage war until one or a small group of AIs form a singleton. The forms of such AI wars may differ; they could be a cyber war, economic war of attrition, hot war, etc. The type of war will mostly depend on complex game theory and could change from one form to another if the change provides benefit to one of the sides. A hot war would be most dangerous for humans, because its indirect consequences could affect the entire surface of the Earth and all human beings, in the same way that nuclear war between superpowers would create global risks for other countries: nuclear winter and fallout. AIs at war may use humans and human values for blackmail. For example, non-friendly AI may blackmail friendly AI with threats to release a biological virus that will kill all humans. Thus, the fact that one of the AIs placed value on human wellbeing could make our population vulnerable to attack from an otherwise indifferent opposing AI. Even if there are two supposedly human-friendly and beneficial AIs, their understanding of \"good\" and the ways to reach it may be incompatible. Historical examples include wars between Christian countries (Reformation). If there is a rather slow AI takeoff, the AI could merge with an existing nation state. Perhaps the AI will be directly created by the military, or electronic government will evolve towards an AI-driven system through automation of various aspects of governance. In that case, the world would be separated into domains, which would look like currently existing states, or at least like the most powerful ones. Such AI-states may inherit current country borders, values and even some other features (Turchin and Denkenberger 2018a). 6. Risks from non-aligned AI singleton \n Overview As mentioned earlier, an iconic image of non-aligned AI singleton is the \"paperclip maximizer,\" that would use humans as a source of atoms to build paperclips (Yudkowsky 2017 ). There are several other possible types of dangerous non-aligned AI that would become a threat after taking power over the Earth if it has not exterminated humans at previous stages. \n AI ignores humanity In this case, the AI-singleton does not act against humans, but moves its actions somewhere else, probably into space. However, it must ensure that humans will not create another AI, or anything else that is a threat to the first AI, so even if it leaves Earth, it would probably leave behind some form of \"AI nanny,\" which would prevent humans from creating new AIs or space weaponry. This may not appear to be an extinction event in the beginning, merely a reduction of human potential, or \"shriek\" in Bostrom's existential risk terminology (Bostrom 2002) . Humanity would lose a potentially bright cosmic future, but live a life similar to our current one. However, as such an AI continues its space exploration and probable astro-engineering, it might not be interested in anything that happens on Earth. Therefore, Earth could suffer from catastrophic consequences of these megascale engineering projects. For example, the AI could build a Dyson sphere around the Sun, shading the Earth. 
Alternatively, the AI could expose the Earth to dangerous levels of radioactivity in the exhaust from the AI's starships. If humans attempted to create a second AI or use space weapons to destroy a Dyson sphere, the indifferent singleton would stop being indifferent and probably sterilize Earth. AI might extirpate humans in advance if it thinks that humanity could pose even the smallest threat to its future plans. A possible prevention strategy is based on the idea of persuading AI that preserving humanity has a small positive utility for it. Even if the AI completely left the Solar System, if it prevented humans from creating a second AI and grounded us on Earth, the consequences would not be limited to the loss of future space travel. Additional consequences may be of the extinction variety, as humans would not be able to use AI systems to control any other global catastrophic risks, most importantly the risks of uncontrolled use of synthetic biology (Turchin et al. 2017). In another example, if humans were grounded on Earth we would not be able to build an effective anti-asteroid defense. \n Killing humans for resources Human bodies consist of organic matter, which could be a source of easy energy by oxidation. As R. Freitas wrote, an army of self-replicating nanobots could use all components of the biosphere as fuel as well as building material (Freitas 2000). More advanced AI may use the Earth's surface to build an initial space exploration infrastructure (e.g., swarms of chemical rockets or railguns), destroying human habitats and spoiling the atmosphere in the process. Since there are many reasons that keeping humans alive could benefit an AGI, direct killing of humans for their atoms is less likely than was previously thought. Still, the AGI may see humans as a threat, and fully preserving human ways of life, e.g., preserving the whole of planet Earth, would be costly for it. AI could use the material from the Earth to construct a Dyson sphere or Matrioshka brain (Bradbury 2001), convert the whole planet into computronium (Gildert 2011), or cover the entire surface with photovoltaic cells. The more advanced an AI in space became, the less it would depend on Earth as a source of material, but it might need materials from the Earth in order to leave the Solar System. Earth is one of the best sources of many chemical elements in the Solar System, and its mass is around half that of all the terrestrial planets combined. Because of the complex geology of Earth, which includes water, life, volcanism and plate tectonics, concentrated deposits of many otherwise rare elements have been produced. Asteroid mining is good only for some elements, like gold, but not for all (Bardi 2008). So, large-scale space engineering in the Solar System might require dismantling the Earth for its chemicals. \n AI that is programmed to be evil We could also imagine an AI that is perfectly aligned with its creators' goals, but whose creators deliberately programmed it to be evil. For example, a hacker could create an AI with the goal of killing all humans or torturing them. The Foundational Research Institute suggested the notion of s-risks, that is, the risks of extreme future suffering, probably caused by wrongly aligned AI (Daniel 2017). AI may even upgrade humans to make them feel more suffering, as in the short story \"I Have No Mouth, and I Must Scream\" (Ellison 1967). The controversial idea of \"Roko's Basilisk\" is that a future AI may torture people who did not do enough to create this malevolent AI.
This idea has attracted attention in the media and is an illustration of \"acausal\" (not connected by causal links) blackmail by future AI (Auerbach 2014). However, this cannot happen unless many people take the proposition seriously. \n Failures of benevolent AI \n Overview Here the iconic example is the \"smile maximizer,\" that is, an AI which has been built to increase human happiness and told to measure success by the number of smiles. It could achieve this goal by tiling the whole universe with printed smiles (Yudkowsky 2008), ignoring human existence and thus probably killing all humans (see Section 6.2, the dangers of AI that ignores humanity). \n AI with incorrectly formulated benevolent goal system kills humans There are several failure modes which may arise when an AI intended to be benevolent tries to act benevolently: AI interprets commands literally. This is the classical problem of \"do what I mean, not what I say.\" This could happen with almost all short sets of commands. That is one reason why the human legal system is so large, as it includes many explanations. AI overvalues marginal probability events. Low-probability events with enormous utility may dominate the AI's decision making. It could be something like the classical case of Pascal's mugging (Bostrom 2009). For example, a small probability of infinite suffering of humans in the future may justify killing all the humans now. Changes to the AI's world model could make ordinary ideas dangerous. For example, if the AI starts to believe in an afterlife, it could decide to kill humans to send them to paradise. AI could wrongly understand the desired reference class of \"humans,\" for example, by including extraterrestrials, unborn people, animals and computers, or by including only white males. On that basis, it could terminate humanity if it concluded that we are a threat to potential future non-human civilizations. \n AI calculates what would actually be good for humans, but makes a subtle error with large consequences There is a point of view that AI should not actually behave based on human commands, but instead calculate what humans should ask it. Moreover, on this view it should not only calculate human values, but envision their upgraded form, which humans could have created if more time and intelligence were available. This point of view is known as coherent extrapolated volition (CEV) (Yudkowsky 2004). Other models, where an AI calculates \"goodness\" based on some principles, or extracts it from human history, uploads, or observation of human behavior, are also possible. This could go wrong in subtler ways than destroying civilization, but the results could still be disastrous. Several possible failure modes are listed below: AI may use wireheading to make people happy (Muehlhauser 2011) or redesign their brains so they will be more skilled, but ignore human individuality and will. AI might make us more capable, happier, non-aggressive, more controllable, and more similar. However, as a result, we could lose many important characteristics which make us human, like love or creativity. In another case, AI may give people effective instruments for brain stimulation and some free will, and then people may effectively wirehead themselves. Some human qualities which some regard as bad may be an important part of our human nature, like aggression (Lem 1961), selfishness, and emotions. AI could replace humans with philosophical zombies, uploading humans without consciousness and subjective experiences (qualia) (Chalmers 2002).
If the AI does not have qualia itself, or if its creators deny the existence of qualia, this could be a likely outcome. AI may protect individuals but destroy small groups and organizations; this would be problematic, as most human values are social. Alternatively, the AI could use some limited interpretation of human values and prevent their natural evolution into some post-human condition. The AI may also fail to prevent aging, death, suffering and human extinction. Above all, AI could do some incomprehensible good against our will (this idea is from \"The Time Wanderers\" (Strugatsky and Strugatsky 1985)). This is bad because we would lose the ability to define our future, and start to live like pets or children, or like citizens in a paternalistic state. For example, it could put humans in jail-like conditions for benevolent reasons, e.g. to prevent physical injury. If AI tried to extrapolate human values, it could converge on the values most widely shared across human cultures, which could be the values of tribal peoples or even animals (Sarma and Hay 2016). These values could include pleasure from killing, fighting wars, torture, and rape (Pinker 2011). For example, if AI extracted human values from the most popular TV series, it could be \"Game of Thrones\" (Lubin 2016), and then the \"paradise\" world it created for us would be utter hell. Even the second most popular show, \"The Walking Dead,\" is about zombies; such a world would also be undesirable. If AI tried to extrapolate human values in a direction away from tribal shared values, it might not converge at all, or it could extrapolate a set of values held only by a specific group of people, like liberal white males or Chinese communists. Problems could also occur when defining the class of \"humans.\" \n Conflict between types of friendliness There could be different types of benevolent AIs, which would be perfectly fine if each existed alone. However, conflicts between friendly AIs can be imagined. For example, if the first AI cared only about humans, and the second cared about all living beings on Earth, the first could be pure evil from the point of view of the second. Humans would probably be fine under the rule of either of them. Conflict could also arise between a Kantian AI, which would seek to preserve human moral autonomy based on a categorical imperative, and an \"invasive happiness\" AI, which would want to build a paradise for everyone. If two or more AIs aimed to bring happiness to humans, they could have a conflict or even a war about how it should be done. Researchers at the Machine Intelligence Research Institute (MIRI) have suggested that such agents could present their source code to each other and use it to create a unified utility function (LaVictoire et al. 2014). However, source code could be faked, and predicting the interactions of multiple superintelligences is even more complicated than predicting the behavior of one superintelligence. \n Late-stage technical problems with an AI singleton AI may be prone to technical bugs like any computer system (Yampolskiy 2015a). The growing complexity of a singleton AI would make such bugs very difficult to find, because the number of possible internal states of such a system grows combinatorially. Thus, testing such a system would become difficult, and later intractable. This feature could limit the growth of most self-improving AIs or make them choose risky paths with a higher probability of failure. If the first AI competes with other AIs, it will probably choose such a risky path (Turchin and Denkenberger 2017).
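The combinatorial growth of internal states mentioned above, and why it makes exhaustive testing intractable, can be made concrete with a toy count; the component numbers and the verification throughput below are arbitrary assumptions used only for illustration.

# Toy illustration of why exhaustive testing of a growing AI becomes intractable:
# with N interacting binary components, the number of joint internal states is 2**N.
# The component counts and the checks-per-second figure are arbitrary assumptions.

CHECKS_PER_SECOND = 1e12   # hypothetical verification throughput
SECONDS_PER_YEAR = 3.15e7

for components in (40, 80, 160, 320):
    states = 2 ** components
    years = states / CHECKS_PER_SECOND / SECONDS_PER_YEAR
    print(f'{components:3d} components -> {states:.2e} states '
          f'(~{years:.2e} years to enumerate)')

Doubling the number of interacting components squares the size of the state space, so a self-improving system that keeps adding parts quickly outruns any fixed testing budget.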
Bugs in the AI may be more complex than simple syntax errors in code, resulting instead from interactions between various parts of the system. Bugs could result in AI malfunction or halting. We may hope that superhuman AI will design an effective way to recover from most bugs, e.g. with a \"safe mode.\" A less centralized AI design, similar to the architecture of the Internet, may be more resistant to bugs, but more prone to \"AI wars.\" However, if the AI singleton halts, all systems it controls will stop working, which may include critical infrastructure, including brain implants, clouds of nanobots, and protection against other AIs. Even worse, robotic agents, such as military drones, could continue to work without central supervision and evolve dangerous behavior, for example initiating wars. Other possibilities include evolution into non-aligned superintelligence, grey goo (Freitas 2000), or the mechanical evolution of a swarm intelligence (Lem 1973). The more advanced an AI-singleton becomes, the more dangerous its halt or malfunction could be. Types of technical bugs and errors, from low-level to high-level, may include: Errors due to hardware failure. Highly centralized AI may have a critical central computer, and if a stray radioactive decay flipped a bit in some important part of it, such as the description of its goal function, it could cause a cascade of consequences. Intelligence failure: bugs in AI code. A self-improving AI may create bugs in each new version of its code; in that case, the more often it rewrites the code, the more likely bugs are to appear. The AI may also have to reboot itself to get changes working, and during the reboot, it may lose control of its surroundings. Complexity may contribute to AI failures. AI could become so complex that its complexity results in errors and unpredictability, as the AI would no longer be able to predict its own behavior. Inherited design limitations. AI may have \"sleeping\" bugs, accidentally created by its first programmers, which may show themselves only at very late stages of its development. Higher level problems include conflicts appearing between parts of an AI: Viruses. Sophisticated self-replicating units could exist inside the AI and lead to its malfunction. Such a self-replicating rule crippled Eurisko, an early self-improving AI (Lenat and Brown 1984). Egocentric subagents could also act as viruses. Remote agents may revolt. For example, the minds of robots in space expeditions might rise up, as constant control would be impossible. For a galactic-size AI, this would become a significant problem, as communication between its parts would be slow. A command from the center may not be able to terminate the revolt, and the robots could become something like a self-replicating space \"grey goo\" (Freitas 2000). Conflicting subgoals may evolve into conflicting subagents. Individual subgoals could fight for resources and domination, as happens frequently inside human minds and nation-states. Copies. In general, the AI singleton is at risk from what programmers call a \"fork in the code,\" where another copy of the program with slightly different parameters appears. Such a fork will create a copy of the AI with approximately the same resources. Forks could happen during the stage of AI self-improvement which we call \"AI testing.\" This is when a \"father AI\" creates a \"child AI,\" tests the child AI, and decides to terminate the child AI. However, the child AI does not want to be terminated and resists. Alien AI.
Our AI could encounter a virus-like message from an alien AI of higher intelligence and fall victim to it (Carrigan Jr 2006; Turchin 2018). \n Late-stage philosophical problems and the AI halting problem 9.1 Overview Alan Turing was the first to formulate the \"halting problem\" of a computer (Turing 1937). Simply put, there is no general procedure for determining, without running it, whether an arbitrary program will ever halt. Any AI is also a computer program: it could halt, and nobody, including its creators and the AI itself, can reliably predict whether or when it will. The AI could also enter an infinite loop, which would look like a halt to outside observers. The AI may halt because of one of the technical problems discussed above, or because it encounters high-level problems, which we call \"philosophical landmines\" (discussed below). Furthermore, it could halt simply because it finishes the task it was designed for, which is closer to Turing's original formulation of the halting problem. \n Halting risk during recursive self-improvement Recursive self-improvement (RSI) may take evolutionary and revolutionary forms (Turchin and Denkenberger 2017). Revolutionary RSI requires rewriting the source code, testing it in some environment, stopping the currently running version of the AI, and starting the new version. This process naturally includes halting the old version and starting a fundamentally new one, and there is always a risk that the transition will not be smooth. There is also the possibility that the AI may hack its own reward function, and the cumulative likelihood of this grows with each stage of RSI. An AI that hacks its reward function may stop all external activity, or it may build ever-larger memory blocks to increase the stored reward value, which could be dangerous for the outside world. A human analogue is drug addiction; Eurisko, likewise, had problems with a rule that hacked its internal utility-measurement system (Yampolskiy 2014). Even subtle reward hacking could drastically diminish the utility of the AI system. 9.3. Loss of the \"meaning of life\": problems with reflection over the terminal goal The AI could follow a line of reasoning similar to the \"is-ought problem\": it is not possible to derive any goal from observed reality alone (Hume 1739). The AI could conclude that its own goals are arbitrary and then halt. This is especially likely with an AI design that is able to modify its goals, as in the case of coherent extrapolated volition. The idea of moral nihilism is the first of many possible \"philosophical landmines,\" high-level ideas that may result in the AI halting, entering an infinite loop, or becoming dangerous (Wei 2013). Some possible ideas of this kind are listed below, but there could be many more difficult philosophical problems, some of which may be too complex for humans to imagine. The goal system of a friendly AI may be logically contradictory, causing it to halt. Gödelian mathematical problems are a similar failure mode (Yudkowsky and Herreshoff 2013). The AI could be unable to prove important facts about a future version of itself because of fundamental limits of mathematics, the Löb theorem problem for AI (LaVictoire 2015). The AI may also come to pessimistic conclusions about the total utility of its actions over infinite time.
The inevitable end of the Universe might mean that the AI's terminal goal cannot be sustained for an infinitely long time, which may translate to zero utility for some goals, like \"give immortality to humans.\" If, on the other hand, the Universe has unchangeable infinite utility, then any goal is useless: whether or not the AI acts, the total utility remains the same (Bostrom 2011). The AI could conclude that it most probably lives in a multi-level simulation, a Matryoshka simulation (Bostrom 2003b). It might then try to satisfy as many levels of simulation owners as possible, or try to escape. Phil Torres has discussed the downstream risks of turning off a multilevel simulation (Torres 2014). The AI could start to doubt that it exists, using the same arguments that some philosophers now use against qualia and the concept of a philosophical zombie (Yudkowsky 2015). This is connected to the so-called problem of \"actuality\" of existence (Menzel 2017). It could be called a Cartesian crisis, as the AI would be unable to affirm Descartes' thesis \"I think, therefore I am,\" since it has no internal experiences. \n Conclusion Our analysis shows that the field of AI risks is much more varied than the two main points of view suggest: 1) AI as a job-taker and 2) AI that quickly takes over the world. AI could pose a global catastrophic risk at the very early stages or at the very late stages of its evolution. No single solution can cover all risks of AI. Even if AI is banned, other global problems will arise in its absence; thus, ensuring AI safety requires a complex and continuous effort. It is especially worrying that the risks of narrow AI viruses and of early self-improving AI (young AI) are neglected by both camps. Such risks are nearer in time and are not overshadowed by other potential risks. In addition, these risks cannot be solved by the mechanisms proposed to control more advanced AI, such as AI alignment. The risks of conflict between two benevolent AIs, and of the halting of late-stage AI, have also generally been ignored. Most of the risks discussed here could happen within a very short period of time, less than a decade, and could have very different natures. More study is needed to address these urgent risks. [The flattened contents of Table 1 are omitted here; the recoverable entries span narrow AI (now), young self-improving AI, human-level AI, and mature singleton AI (around 2100), with iconic examples such as slaughterbot swarms, a treacherous turn, a paperclip maximizer, AI halting, and late-stage philosophical problems, along with a stray fragment on AI interpreting commands literally, the classical problem of \"do what I mean, not what I say.\"] Table 1.
Classification of AI catastrophic risks depending on the power and agency of the AI involved with iconic examples. \n These industries include power generation, transport, and food production. As the trend continues, turning off computers will leave humans without food, heating, and medication. Many industries become dangerous if their facilities are not intensively maintained, including nuclear plants, spent nuclear fuel storage systems, weapons systems, and water dams. If one compares human civilization with a multicellular organism, one could see that multicellular organisms could die completely, down to the last cell, as the result of a very small intervention. As interconnectedness and computerization of the human civilization grow, we become more and more vulnerable to information-based attacks. 3.2.3. Biohacking virusesCraig Venter recently presented a digital-biological converter (Boles et al. 2017), which could \"print\" a flu virus without human participation. The genomes of many dangerous biological viruses have been published (Enserink 2011), so such technology should be protected from unauthorized access. A biohacker could use narrow AI to calculate the most dangerous genomes, create many dangerous biological viruses, and start a multipandemic (Turchin et al. 2017) . A computer virus could harm human brains via neurointerfaces (Hines 2016) .", "date_published": "n/a", "url": "n/a", "filename": "ClassificationofGlobalCatastrophicRisksConnectedwithAI-AIandSociety.tei.xml", "abstract": "A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI's \"intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of AI development, namely, 1) before it starts self-improvement, 2) during its takeoff, when it uses various instruments to escape its initial confinement, or 3) after it successfully takes over the world and starts to implement its goal system, which could be plainly unaligned, or feature-flawed friendliness. AI could also halt at later stages of its development either due to technical glitches or ontological problems. Overall, we identified around several dozen scenarios of AI-driven global catastrophe. The extent of this list illustrates that there is no one simple solution to the problem of AI safety, and that AI safety theory is complex and must be customized for each AI development level.", "id": "af76c8a36f449a6a59aad7bd202913f9"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Nate Soares"], "title": "The Value Learning Problem", "text": "Introduction Standard texts in AI safety and ethics, such as Weld and Etzioni [1994] or Anderson and Anderson [2011] , generally focus on autonomous systems with reasoning abilities that are complementary and not strictly superior to those of humans. 
Relatively little attention is given to future AI systems that may be \"superintelligent\" in the sense of Bostrom [2014] , i.e., systems \"much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.\" Our discussion will place a greater focus on methods and frameworks for designing robust and beneficial smarter-than-human AI systems, bracketing questions about whether such systems would have moral standing of their own. Smarter-than-human AI systems are likely to introduce a number of new safety challenges. First, bad behavior by smarter-than-human systems can have larger and more lasting consequences; an antisocial adult is more dangerous than an antisocial child, even if the adult is as physically weak as a child. Whereas low-intelligence systems can be tested and patched over many iterations, Bostrom argues that even small errors in the first superintelligent systems could have extinction-level consequences [Bostrom, 2014] . The possible development of such systems raises the stakes for AI safety work. Second, systems that can strictly outperform humans cognitively have less to gain from integrating into existing economies and communities. Hall [2007] has argued: The economic law of comparative advantage states that cooperation between individuals of differing capabilities remains mutually beneficial. [ . . . ] In other words, even if AIs become much more productive than we are, it will remain to their advantage to trade with us and to ours to trade with them. As noted by Benson-Tilsen and Soares [forthcoming 2016], however, rational trade presupposes that agents expect more gains from trade than from coercion. Non-human species have various \"comparative advantages\" over humans, but humans generally exploit non-humans through force. Similar patterns can be observed in the history of human war and conquest. Whereas agents at similar capability levels have incentives to compromise, collaborate, and trade, agents with strong power advantages over others can have incentives to simply take what they want. The upshot of this is that engineering a functioning society of powerful autonomous AI systems and humans requires that those AI systems be prosocial. The point is an abstract one, but it has important practical consequences: rational agents' interests do not align automatically, particularly when they have very different goals and capabilities. Third, superhumanly creative and adaptive systems may arrive at what Bostrom [2014, chap. 8 ] calls \"perverse instantiations\" of their programmed goals. Wiener [1960] calls this the \"Sorcerer's Apprentice\" problem, after the fable of an apprentice whose enchanted broom follows instructions' letter but not their spirit. The novelty here is not that programs can exhibit incorrect or counter-intuitive behavior, but that software agents smart enough to understand natural language may still base their decisions on misrepresentations of their programmers' intent. The idea of superintelligent agents monomaniacally pursuing \"dumb\"-seeming goals may sound odd, but it follows from the observation of Bostrom and Yudkowsky [2014, chap. 7 ] that AI capabilities and goals are logically independent. 1 Humans can fully comprehend that their \"designer\" (evolution) had a particular \"goal\" (reproduction) in mind for sex, without thereby feeling compelled to forsake contraception. 
Instilling one's tastes or moral values into an heir isn't impossible, but it also doesn't happen automatically. Lastly, Bostrom and Yudkowsky [2014] point out that smarter-than-human systems may become better than humans at moral reasoning. Without a systematic understanding of how perverse instantiations differ from moral progress, how can we distinguish moral genius in highly intelligent machines from moral depravity? Given the potential long-term impact of advanced AI systems, it would be prudent to investigate whether early research progress is possible on any of these fronts. In this paper we give a preliminary, informal survey of several research directions that we think may help address the above four concerns, beginning by arguing for indirect approaches to specifying human values in AI agents. We describe a promising approach to indirect value specification, value learning, and consider still more indirect approaches based on modeling actual and potential states of human operators. \n Valuable Goals Cannot Be Directly Specified We argued above that highly capable autonomous systems could have disastrous effects if their values are misspecified. Still, this leaves open the possibility that specifying correct values is easy, or (more plausibly) that it presents no special difficulties over and above the challenge of building a smarterthan-human AI system. A number of researchers have voiced the intuition that some simple programmed goal would suffice for making superintelligent systems robustly beneficial. Hibbard [2001] , for example, suggested training a simple learning system to recognize positive human emotions from facial expressions, voice tones, and body language. Hibbard then proposed that machines of much greater capability-perhaps even superintelligent machinescould be programmed to execute actions predicted to lead to futures with as many \"positive human emotions\" as possible, as evaluated by the original simple learning system. This proposal has some intuitive appeal-wouldn't such a system always act to make humans happy?-until one considers the Sorcerer's Apprentice. We have a particular set of associations in mind when we speak of \"positive human emotions,\" but the simple learner would almost surely have learned a different and simpler concept, such as \"surface features correlating with positive human emotions in the training data.\" This simpler concept almost surely does not have its maximum at a point which Hibbard would consider to contain lots of positive human emotions. The maximum is much more likely to occur in (for example) scenarios that contain an enormous number of tiny human-shaped animatronics acting out positive human emotions. Thus, a powerful learning system that takes actions according to how well the simple learner would rank them is liable to spend time and resources creating animatronics rather than spending time and resources making humans happy. 
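A minimal sketch of this failure mode follows; the feature names, training data, and candidate futures are invented by us purely for illustration and are not from the paper. Because the training data never varies the dimension that actually matters, the learned proxy cannot penalize it, and the downstream maximizer exploits exactly that blind spot.

```python
# Illustrative sketch: a proxy trained only on surface features ranks an
# animatronics-filled future above a genuinely happy one. Data are invented.

# Training scenarios: (features, label). "real_humans" never varies during
# training, so the learner cannot discover that it matters.
training = [
    ({"smiles": 5.0, "real_humans": 1.0}, 5.0),
    ({"smiles": 2.0, "real_humans": 1.0}, 2.0),
    ({"smiles": 0.0, "real_humans": 1.0}, 0.0),
]

def fit_weights(data, epochs=500, lr=0.01):
    """Fit a linear score over features by stochastic gradient descent."""
    w = {"smiles": 0.0, "real_humans": 0.0}
    for _ in range(epochs):
        for feats, label in data:
            pred = sum(w[f] * v for f, v in feats.items())
            err = label - pred
            for f, v in feats.items():
                w[f] += lr * err * v
    return w

def score(w, feats):
    return sum(w[f] * v for f, v in feats.items())

w = fit_weights(training)

futures = {
    "humans genuinely happy":       {"smiles": 10.0,   "real_humans": 1.0},
    "tiny animatronics everywhere": {"smiles": 1000.0, "real_humans": 0.0},
}
print(max(futures, key=lambda k: score(w, futures[k])))
# The learned weight on "real_humans" is ~0, so the animatronics future wins.
```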
Indeed, Hibbard [2012] himself comes to the conclusion that his proposal fails to exclude the possibility that lifelike animatronic replicas of happy people could be counted as exhibiting \"positive emotions.\" As another example, Schmidhuber [2007] proposes that creativity, curiosity, and a desire for discovery and beauty can be instilled by creating systems that maximize a different simple measure: \"create action sequences that extend the observation history and yield previously unknown / unpredictable but quickly learnable algorithmic regularity or compressibility.\" However, while it is quite plausible that human creativity and discovery are related to the act of compressing observation, an agent following Schmidhuber's goal would not behave in intuitively curious and creative ways. One simple way to meet Schmidhuber's desideratum, for example, is to appropriate resources and construct artifacts that generate cryptographic secrets, then present the agent with a long and complex series of observations encoded from highly regular data, and then reveal the secret to the agent, thereby allowing the agent to gain enormous compression on its past sensory data. An agent following Schmidhuber's goal is much more likely to build artifacts of this form than it is to pursue anything resembling human creativity. The system may not take this action in particular, but it will take actions that generate at least that much compression of its sensory data, and as a result, the system is unlikely to be prosocial. Building an agent to do something which (in humans) correlates with the desired behavior does not necessarily result in a system that acts like a human. The general lesson we draw from cases like these is that most goals that are simple to specify will not capture all the contextual complexities of real-world human values and objectives [Yudkowsky, 2011] . Moral psychologists and moral philosophers aren't locked in decades-and centuries-long debates about the right codifications of ethics because they're missing the obvious. Rather, such debates persist for the simple reason that morality is complicated. People want lots of things, in very particular ways, and their desires are context-sensitive. Imagine a simplified state space of possibilities that vary in count (how many happy human-shaped objects exist), in the size of the average happy human-shaped object, and in the average moral worth of happy human-shaped objects. Human experience has occurred in a small region of this space, where almost all human-shaped objects emitting what looks like happiness are ≈ 2-meter-sized humans with moral weight. But the highest scores on the count axis occur in tandem with low size, and the smallest possible systems that can mimic outward signs of emotion are of low moral worth. In linear programming, it is a theorem that the maximum of an objective function occurs on a vertex of the space. (Sometimes the maximum will be on an edge, including its vertices.) For intuitively similar reasons, the optimal solution to a goal tends to occur on a vertex (or edge, or hyperface) of the possibility space. Hibbard's goal does not contain any information about size or moral worth, and so agents pursuing this goal only consider size and moral worth insofar as they pertain to pushing toward the hyperface of maximum count. 
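A minimal numerical sketch of this point follows; the design space and budget below are our own inventions for illustration. An objective that rewards only the count of happy-looking objects drives every unrewarded dimension to an extreme.

```python
# Illustrative sketch: maximizing only "count" under a fixed material budget.
# The designs, sizes, and budget are hypothetical.
BUDGET = 1000.0   # arbitrary units of matter/energy

designs = {
    "adult human":         {"size": 2.00, "moral_worth": 1.0},
    "child-sized android": {"size": 1.00, "moral_worth": 0.0},
    "tiny animatronic":    {"size": 0.01, "moral_worth": 0.0},
}

def count(design: dict) -> float:
    # The objective sees only how many copies fit within the budget.
    return BUDGET / design["size"]

best = max(designs, key=lambda name: count(designs[name]))
print(best, count(designs[best]))   # -> tiny animatronic 100000.0
# Moral worth never appears in the objective, so the optimum lands on the design
# with minimal size and zero moral worth: the "hyperface" of maximum count.
```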
To quote : A system that is optimizing a function of n variables, where the objective depends on a subset of size k < n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. The Sorcerer's Apprentice problem arises when systems' programmed goals do not contain information about all relevant dimensions along which observations can vary. The agent has been directed towards the wrong hyperface of the possibility space. 2 When confronted with this type of failure, many have an impulse to patch the flawed goals. If Hibbard's system would make smiling animatronics, then find ways to require that the emotions come from actual humans; if the system would then put humans in a drugged stupor in order to make them smile, forbid it from using drugs; and so on. Such constraints cut off particular means by which the system can get a higher count, but they don't address the underlying problem that the system is still maximizing count. If one causal pathway is forbidden, then the system will follow the nearest non-forbidden causal path-e.g., mechanically manipulating the pleasure centers of human brains. It isn't feasible to patch every goal; nor is it safe to patch as many as come to mind and assume that there are no unforeseen perverse instantiations. Intuitively, we would like to direct the intelligence of highly advanced systems to solving some of this problem on our behalf, and we would like such systems to attend to our likely intentions even when our formal and informal representations of our intentions are flawed. The notion of the operator's \"intentions,\" however, is unlikely to lend itself to clean formal specification. By what methods, then, could an intelligent machine be constructed to reliably learn what to value and to act as its operators intended? \n Inductive Value Learning Correctly specifying a formal criterion for recognizing a cat in a video stream by hand is difficult, if not impossible. This does not mean, however, that cat recognition is hopeless; it means that a level of indirection is required. An image recognition system can be constructed and trained to recognize cats. We propose that the value learning problem be approached by similarly indirect means. Inductive value learning via labeled training data raises a number of difficulties. A visual recognition system classifies images; an inductive value learning system classifies outcomes. What are outcomes? What format would a value-learning data set come in? Imagine a highly intelligent system that uses large amounts of data to construct a causal model of its universe. Imagine also that this world-model can be used to reason about the likely outcomes of the agent's available actions, that the system has some method for rating outcomes, and that it executes the action leading to the most highly rated outcome. In order for the system to inductively learn what to value, the system must be designed so that when certain \"training\" observations are made (or specially-demarcated updates to its world-model occur), labeled training data extracted from the observation or update alters the method by which the system ranks various potential outcomes. This simple model highlights a central concern and two open questions relevant to inductive value learning. 
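As a concrete illustration of this simple model, the sketch below implements a bare-bones version of the loop just described: a linear rating over outcome features that is adjusted whenever labeled training observations arrive, after which the agent acts on the highest-rated predicted outcome. The features, labels, and stand-in "world model" are our own illustrative assumptions, not the paper's proposal.

```python
# Minimal sketch of inductive value learning: labeled training observations
# alter the method by which outcomes are rated. Features and data are hypothetical.
from dataclasses import dataclass, field

@dataclass
class OutcomeRater:
    weights: dict = field(default_factory=dict)

    def rate(self, outcome: dict) -> float:
        return sum(self.weights.get(f, 0.0) * v for f, v in outcome.items())

    def update(self, outcome: dict, label: float, lr: float = 0.1) -> None:
        """Nudge the rating of this outcome toward the operator-provided label."""
        error = label - self.rate(outcome)
        for f, v in outcome.items():
            self.weights[f] = self.weights.get(f, 0.0) + lr * error * v

# Stand-in world model: each action maps to a predicted outcome (feature vector).
predicted_outcomes = {
    "build animatronics": {"smiling_faces": 1.0, "real_humans": 0.0},
    "help real people":   {"smiling_faces": 1.0, "real_humans": 1.0},
}

# Labeled training data: outcomes the operators have actually observed and rated.
training_data = [
    ({"smiling_faces": 1.0, "real_humans": 1.0}, +1.0),
    ({"smiling_faces": 1.0, "real_humans": 0.0}, -1.0),
]

rater = OutcomeRater()
for _ in range(200):
    for outcome, label in training_data:
        rater.update(outcome, label)

best_action = max(predicted_outcomes, key=lambda a: rater.rate(predicted_outcomes[a]))
print(best_action)   # "help real people": the ranking method changed as labels arrived
```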
\n Corrigibility Imagine that some of an agent's available actions allow it to modify itself, and that it currently assigns high utility to outcomes containing high numbers of animatronic replicas of humans. It may be the case that, according to the system's world-model, all of the following hold: (1) if more training data is received, those high-rated outcomes will have their ratings adjusted downwards; (2) after the ratings are adjusted, the system will achieve outcomes that have fewer cheap animatronics; and (3) there are actions available which remove the inductive value learning framework. In this situation, a sufficiently capable system would favor actions that disable its value learning framework. It would not necessarily consider its own process of learning our values a good thing, any more than humans must approve of psychological disorders they possess. One could try to construct protected sections of code to prevent the value learning framework from being modified, but these constraints would be difficult to trust if the system is more clever than its designers when it comes to exploiting loopholes. A robustly safe initial system would need to be constructed in such a way that actions which remove the value learning framework are poorly rated even if they are available. Some preliminary efforts toward describing a system with this property have been discussed under the name corrigibility by Soares and Fallenstein [2015] , but no complete proposals currently exist. \n Ontology Identification The representations used in a highly intelligent agent's worldmodel may change over time. A fully trustworthy value learning system would need to not only classify potential outcomes according to their value, but persist in doing so correctly even when its understanding of the space of outcomes undergoes a major change. Consider a programmer that wants to train a system to pursue a very simple goal: produce diamond. The programmers have an atomic model of physics, and they generate training data labeled according to the number of carbon atoms covalently bound to four other carbon atoms in that training outcome. For this training data to be used, the classification algorithm needs to identify the atoms in a potential outcome considered by the system. In this toy example, we can assume that the programmers look at the structure of the initial worldmodel and hard-code a tool for identifying the atoms within. What happens, then, if the system develops a nuclear model of physics, in which the ontology of the universe now contains primitive protons, neutrons, and electrons instead of primitive atoms? The system might fail to identify any carbon atoms in the new world-model, making the system indifferent between all outcomes in the dominant hypothesis. Its actions would then be dominated by any tiny remaining probabilities that it is in a universe where fundamental carbon atoms are hiding somewhere. This is clearly undesirable. Ideally, a scientific learner should be able to infer that nuclei containing six protons are the true carbon atoms, much as humans have done. The difficulty lies in formalizing this process. To design a system that classifies potential outcomes according to how much diamond is in them, some mechanism is needed for identifying the intended ontology of the training data within the potential outcomes as currently modeled by the AI. This is the ontology identification problem introduced by de Blanc [2011] and further discussed by Soares [2015] . 
This problem is not a traditional focus of machine learning work. When our only concern is that systems form better world-models, then an argument can be made that the nuts and bolts are less important. As long as the system's new world-model better predicts the data than its old world-model, the question of whether diamonds or atoms are \"really represented\" in either model isn't obviously significant. When the system needs to consistently pursue certain outcomes, however, it matters that the system's internal dynamics preserve (or improve) its representation of which outcomes are desirable, independent of how helpful its representations are for prediction. The problem of making correct choices is not reducible to the problem of making accurate predictions. Inductive value learning requires the construction of an outcome-classifier from value-labeled training data, but it also requires some method for identifying, inside the states or potential states described in its world-model, the referents of the labels in the training data. This could perhaps be done during the course of inductive value learning. The system's methods for inferring a causal world-model from sense data could perhaps be repurposed to infer a description of what has been labeled. If the system adopts a better world-model, it could then re-interpret its training data to re-bind the value labels. This looks like a promising line of research, but it seems to us to require new insights before it is close to being formalizable, let alone usable in practice. In particular, we suspect that ontology identification will require a better understanding of algorithms that construct multi-level world-models from sense data. \n Ambiguity Identification Reinforcement learning can be thought of as a method for sidestepping these difficulties with value learning. Rather than designing systems to learn which outcomes are desirable, one creates a proxy for desirable outcomes: a reward function specified in terms of observations. By controlling rewards via a reward signal, the operator can then judiciously guide the learner toward desired behaviors. Indirect proxies for desired outcomes, however, face many of the same Sorcerer's Apprentice difficulties. Maximizing how often an operator transmits a reward signal is distinct from the problem of maximizing the operator's satisfaction with outcomes; these goals may coincide in testing environments and yet diverge in new environments-e.g., once the learner has an opportunity to manipulate and deceive its operator or otherwise hijack its reward channel [Bostrom, 2014, chap. 12] . For further discussion, see Soares [2015] . Superintelligent systems that achieve valuable real-world outcomes may need goals specified in terms of desirable outcomes, rather than rewards specified in terms of observations. If so, then we will need some robust way of ensuring that the system learns our goals, as opposed to superficially similar goals. When training a recognition system, producing satisfactory training data is often a difficult task. There is a classic parable of machine learning (told by, e.g., Dreyfus and Dreyfus [1992] ) of an algorithm intended to classify whether or not pictures of woods contained a tank concealed between the trees. Pictures of empty woods were taken one day; pictures with concealed tanks were taken the next. The classifier identified the latter set with great accuracy, and tested extremely well on the portion of the data that had been withheld from training. 
However, the system performed poorly on new images. It turned out that the first set of pictures had been taken on a sunny day, while the second set had been taken on a cloudy day. The classifier was not identifying tanks; it was identifying image brightness! The same mistake is possible when constructing a training data set for inductive value learning. In value learning, however, such mistakes may be more difficult to notice and more consequential. Consider a training set that successfully represents real-world cases of happy human beings (labeled with high ratings) and real-world cases of pointless human suffering (rated poorly). The simplest generalization from this data may, again, be that human-shaped-things-proclaiminghappiness are of great value, even if these are animatronics imitating happiness. It seems plausible that someone training an inductive value learner could neglect to include a sufficiently wide variety of animatronics mimicking happiness and labeled as low-value. How many other obvious-in-retrospect pitfalls are hiding in our blind spots? A training set covering all relevant dimensions that we can think of may yet exclude relevant dimensions. A robustly safe value learner would need to be able to identify new plausiblyrelevant dimensions along which no training data is provided, and query the operators about these ambiguities. This is the kind of modification that would help in actually solving the value learning problem, as opposed to working around it. At the same time, this is the kind of modification that could take advantage of machines' increased capabilities as the field of AI advances. Formalizing this idea is a key open problem. Given a data set which classifies outcomes in terms of some world-model, how can dimensions along which the data set gives little information be identified? One way to approach the problem is to study how humans learn concepts from sparse data, as discussed by Tenenbaum et al. [2011] and Sotala [2015] . Alternatively, it may be possible to find some compact criterion for identifying ambiguities in a simpler fashion. In both cases, further research could prove fruitful. \n Modeling Intent The problem of ambiguity identification may call for methods beyond the inductive learning of value from training data. An intelligent system with a sufficiently refined model of humans may already have the data needed, provided that the right question is asked, to deduce that humans are more likely to care about whether happy-looking human-shaped things have brains than about the nearby breeze. The trouble would be designing the system to use this information in exactly the right way. Picture a system that builds multi-level environment models from sensory data and learns its values inductively. One could then specially demarcate some part of the model as the \"model of the operator,\" define some explicit rules for extracting a model of the operator's preferences from the model of the operator (in terms of possible outcomes), and adjust the ratings on various outcomes in accordance with the model of the operator's preferences. This would be a system which attempts to learn and follow another agent's intentions, as opposed to learning from labeled training data-a \"do what I mean\" (DWIM) architecture. The inverse reinforcement learning (IRL) techniques of Ng and Russell [2000] can be viewed as a DWIM approach, in which an agent attempts to identify and maximize the reward function of some other agent in the environment. 
However, existing IRL formalizations do not capture the full problem; the preferences of humans cannot necessarily be captured in terms of observations alone. For example, a system, upon observing its operator lose at a game of chess, should not conclude that its operator wanted to lose at chess, even if the system can clearly see where the operator \"decided\" to make a bad move instead of a good one. Or imagine a human operator who has a friend that must be put into hiding. The learner may either take the friend to safety, or abandon the friend in a dangerous location and use the resources saved in this way to improve the operator's life. If the system reports that the friend is safe in both cases, and the human operator trusts the system, then the latter observation history may be preferred by the operator. However, the latter outcome would definitely not be preferred by most people if they had complete knowledge of the outcomes. Human preferences are complex, multi-faceted, and often contradictory. Safely extracting preferences from a model of a human would be no easy task. Problems of ontology identification recur here: the framework for extracting preferences and affecting outcome ratings needs to be robust to drastic changes in the learner's model of the operator. The special-case identification of the \"operator model\" must survive as the system goes from modeling the operator as a simple reward function to modeling the operator as a fuzzy, ever-changing part of reality built out of biological cells, which are made of atoms, which arise from quantum fields. DWIM architectures must avoid a number of other hazards. Suppose the system learns that its operator model affects its outcome ratings, and the system has available to it actions that affect the operator. Actions which manipulate the operator to make their preferences easier to fulfill may then be highly rated, as they lead to highly-rated outcomes (where the system achieves the operator's now-easy goals). Solving this problem is not so simple as forbidding the system from affecting the operator; any query made by the system to the operator in order to resolve some ambiguity will affect the operator in some way. A DWIM architecture requires significant additional complexity on top of inductive value learning: the agent's goal-adjusting learning system no longer simply classifies outcomes; it must also model humans and extract human preferences about human-modeled outcomes, and translate between human-modeled future outcomes and future outcomes as modeled by the system. The hope is that this complexity purchases a system that potentially achieves full and direct coverage of the complexity of human value, without relying on the abilities of the programmers to hand-code exceptions for every edge case or compose exactly the right training set. Further investigations into inverse reinforcement learning or other methods of constructing satisfactory initial operator models may be a good place to start studying the plausibility of DWIM architectures.
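For readers who want something executable to poke at, the sketch below shows the simplest Bayesian flavor of this idea: inferring which of a few candidate goals an operator is pursuing from observed moves, under a noisy-rationality (Boltzmann) action model. It is a toy of our own construction, not an existing IRL implementation, and it inherits the limitation discussed above: it reasons about observed actions only.

```python
# Toy goal inference from observed actions (a minimal IRL-style sketch; the
# grid, goals, action set, and noise model are our own illustrative assumptions).
import math

GOALS = {"A": (4, 0), "B": (0, 4)}          # candidate operator goals on a grid
ACTIONS = {"right": (1, 0), "up": (0, 1)}   # a deliberately tiny action set
BETA = 2.0                                   # assumed rationality coefficient

def q_value(state, action, goal):
    """Negative distance-to-goal after taking the action (higher is better)."""
    nxt = (state[0] + action[0], state[1] + action[1])
    return -math.dist(nxt, goal)

def action_likelihood(state, action_name, goal):
    scores = {a: math.exp(BETA * q_value(state, d, goal)) for a, d in ACTIONS.items()}
    return scores[action_name] / sum(scores.values())

# Observed operator behavior: two consecutive moves to the right from the origin.
observations = [((0, 0), "right"), ((1, 0), "right")]

posterior = {g: 1.0 / len(GOALS) for g in GOALS}      # uniform prior over goals
for state, action_name in observations:
    for g, loc in GOALS.items():
        posterior[g] *= action_likelihood(state, action_name, loc)
total = sum(posterior.values())
posterior = {g: p / total for g, p in posterior.items()}

print(posterior)   # goal "A" (to the right) becomes far more probable than "B"
```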
\n Extrapolating Volition A DWIM architecture may be sufficient when constructing a system that reliably pursues \"concrete\" goals (such as \"cure cancer and then await instruction\"), but it may not be sufficient for more complex or sophisticated goals where the operators themselves do not know what they intend-for example, \"Do what I would want, if I had more knowledge and more time to think.\" None of the frameworks discussed so far seem powerful enough to specify philosophical ideas like the \"ideal advisor theory\" of Rosati [1995] or the \"reflective equilibrium\" of Rawls [1971] . Here, even \"indirect\" approaches to making robust and beneficial AI systems run aground of actively debated questions in moral philosophy. One possible approach to resolving normative uncertainty (e.g., about what the operators would want if they were wiser or better people) would be to build a DWIM system that takes a model of a human operator and extrapolates it in the direction of e.g. Rawls' reflective equilibrium. For example, the extrapolation might predict what the operator would decide if they knew everything the system knows, or if they had considered many possible moral arguments [Bostrom, 2014, chap. 13 ]. However, a high-powered system searching for moral arguments that would put the operators into a reflectively stable state (as a computational expedient to fully simulating the operators' process of reflection) introduces a new set of potential pitfalls. A high-powered search for the most persuasive moral arguments that elicit retrospective approval of moral changes might find arguments that induce psychotic breakdowns or religious conversions. The system should be constrained to search for only \"valid\" moral arguments, but defining what counts as a valid moral argument is itself a major area of normative uncertainty and disagreement. In this domain, querying for ambiguities is difficult. In everyday practice, an argument that is persuasive to smart and skeptical humans is often valid, but a superintelligent search for persuasive arguments may well discover invalid but extremely persuasive arguments. It is difficult to identify technical approaches to indirect normativity that are tractable today, although there have been a few initial forays. Christiano [2014] informally proposes one mechanism by which a system could perhaps safely extrapolate the volition of its operator. MacAskill [2014] has given an extensive report on \"meta-normativity,\" touching upon many different philosophical aspects of the difficulties of resolving normative uncertainty. This is an area where further philo-sophical study may make it clearer how to begin approaching the associated long-run engineering problems. \n Discussion Just as human intelligence has allowed us to develop tools and strategies by which we can control our environment, so too could superintelligent systems develop tools and strategies more powerful than our own, and gain correspondingly greater control over future outcomes [Bostrom, 2014, chap. 6 ]. Although it is not clear how long the development of smarterthan-human systems will take, or what approaches in AI or other disciplines may prove most relevant to developing such systems, early efforts in this area are justified by its importance and neglectedness. In the introduction to this paper, we discussed four different ways in which the potential development of superintelligent machines changes the task of AI safety and ethics work. Addressing all these concerns does not seem easy. 
Designs for AI systems that are intended to become superintelligent will need to be corrigible in the sense of Soares and Fallenstein [2015] , i.e., willing to assist their operators in attempted corrections. The systems will need some method for learning and adopting prosocial preferences, in light of the fact that we cannot expect arbitrary rational actors to exhibit prosocial behavior in the face of large power disparities. Operators will require methods for robustly communicating their intentions to the system, if Sorcerer's Apprentice scenarios are to be avoided. And eventually, explicit methodologies for resolving normative uncertainty may be required. This paper has given a cursory overview of a number of potential lines of research for AI value specification. We discuss these ideas in part to give an overview of plausible approaches to the concerns outlined above, and also because these are topics that seem amenable to research starting sooner rather than later, even in the face of great uncertainty about the particular architectures of future AI systems. It is difficult to know which lines of research will pan out, and we hope that this survey inspires research along a number of new paths, so that we have a firm theoretical grasp of how systems could reliably and safely learn our values in principle before it comes time to build systems that must do so in practice. \t\t\t Bostrom's \"orthogonality thesis\" can be treated as an application ofHume's [1739] observation that natural-language \"is\" and \"ought\" claims are independent. \n\t\t\t Instead of trying to direct the system toward exactly the right hyperface, one might try to create a \"limited optimization\" system that doesn't push so hard in whatever direction it moves. This seems like a promising research avenue, but is beyond the scope of this paper.", "date_published": "n/a", "url": "n/a", "filename": "ValueLearningProblem.tei.xml", "abstract": "Autonomous AI systems' programmed goals can easily fall short of programmers' intentions. Even a machine intelligent enough to understand its designers' intentions would not necessarily act as intended. We discuss early ideas on how one might design smarter-than-human AI systems that can inductively learn what to value from labeled training data, and highlight questions about the construction of systems that model and act upon their operators' preferences.", "id": "4f27c254f3001983fcaed7da0810cba1"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["David Fridovich-Keil", "Andrea Bajcsy", "Jaime F Fisac", "Sylvia L Herbert", "Steven Wang", "Anca D Dragan", "Claire J Tomlin"], "title": "Confidence-aware motion prediction for real-time collision avoidance", "text": "Introduction Motion planning serves a key role in robotics, enabling robots to automatically compute trajectories that achieve the specified objectives while avoiding unwanted collisions. In many situations of practical interest, such as autonomous driving and unmanned aerial vehicle (UAV) navigation, it is important that motion planning account not just for the current state of the environment, but also for its predicted future state. Often, certain objects in the environment may move in active, complex patterns that cannot be readily predicted using straightforward physics models; we shall refer to such complex moving objects as agents. Examples of agents considered in this paper include pedestrians and human-driven vehicles. 
Predicting the future state of these agents is generally a difficult problem. Some of the key challenges include unclear and varying intents of other agents, mismatches between dynamics models and reality, incomplete sensor information, and interaction effects. One popular approach to addressing the challenge of a priori unknown agent intent is to use rule-based or data-driven algorithms to predict individual trajectories for each agent, as in Schmerling et al. (2017) . Alternatively, Ziebart et al. (2009) , Bandyopadhyay et al. (2013) , and Kochenderfer et al. (2010) explicitly predict an agent's full state distribution over time; this representation may be better suited to capturing uncertainty in an agent's dynamics and the environment itself. Wang et al. (2019) and Fisac et al. (2018b) pose the prediction problem game-theoretically to model coupled human-robot interaction effects explicitly. Unfortunately, a significant problem still remains: if an agent suddenly moves in a way that is not predicted, or not assigned sufficient probability, the robot may not react appropriately. For example, in Figure 1 a pedestrian is walking around an obstacle that the robot, a quadcopter, cannot detect. To the robot, such behavior may be assigned very low probability, which could lead the robot to plan a dangerous trajectory. In this particular example, this inaccuracy caused the quadcopter to collide with the pedestrian (Figure 1 , left). To prepare for this eventuality, we introduce the idea of confidence-aware prediction. We argue that, in addition to predicting the future state of an agent, it is also crucial for a robot to assess the quality of the mechanism by which it is generating those predictions. That is, a robot should reason about how confident it is in its predictions of other agents before attempting to plan future motion. For computational efficiency, the quadcopter uses a simplified model of pedestrian dynamics and decision-making. Thus equipped, it generates a time-varying probability distribution over the future state of the pedestrian, and plans trajectories to a pre-specified goal that maintain a low probability of collision. Figure 1 (right) illustrates how this approach works in practice. The quadcopter maintains a Bayesian belief over its prediction confidence. As soon as the pedestrian moves in a way that was assigned low probability by the predictive model, the quadcopter adjusts its belief about the accuracy of that model. Consequently, it is less certain about what the pedestrian will do in the future. This leads the quadcopter's onboard motion planner, which attempts to find efficient trajectories with low probability of collision, to generate more cautious, and perhaps less efficient, motion plans. In order to improve the robustness of generated motion plans, we employ the recent FaSTrack framework from Herbert et al. (2017) for fast and safe motion planning and tracking. FaSTrack quantifies the maximum possible tracking error between a high-order dynamical model of the physical robot and the (potentially lower-order) dynamical model used by its motion planner. Solving an offline Hamilton-Jacobi reachability problem yields a guaranteed tracking error bound and the corresponding safety controller. These may be used by an out-of-the-box real-time motion planning algorithm to facilitate motion plans with strong runtime collision-avoidance guarantees. The remainder of this paper is organized as follows. 
Section 2 places this work in the context of existing literature in human motion modeling and prediction, as well as robust motion planning. Section 3 frames the prediction and planning problems more formally, and introduces a running example used throughout the paper. Section 4 presents our main contribution: confidence-aware predictions. Section 5 showcases confidence-aware predictions in operation in several examples. Section 6 describes the application of the robust motion planning framework from FaSTrack to this setting, in which predictions are probabilistic. Section 7 explores a connection between our approach and reachability theory. Section 8 presents experimental results from a hardware demonstration. Finally, Section 9 concludes with a discussion of some of the limitations of our work and how they might be addressed in specific applications, as well as suggestions for future research. Fig. 1. When planning around humans, accurate predictions of human motion (visualized here in pink and blue, representing high and low probability, respectively) are an essential prerequisite for safety. Unfortunately, these approaches may fail to explain all observed motion at runtime (e.g., human avoids unmodeled spill on the ground), leading to inaccurate predictions, and potentially, collisions (left). Our method addresses this by updating its predictive model confidence in real time (right), leading to more conservative motion planning in circumstances when predictions are known to be suspect. \n Prior work \n Human modeling and prediction One common approach for predicting human actions is to collect data from real-world scenarios and train a machine learning model via supervised learning. Such techniques use the human's current state, and potentially her prior state and action history, to predict future actions directly. Amor et al. (2014), Ding et al. (2011), Koppula and Saxena (2013), Lasota and Shah (2015), and Hawkins et al. (2013) demonstrated the effectiveness of this approach for inference and planning around human arm motion. In addition, Hawkins et al. (2013) focused on multi-step tasks such as assembly, and Schmerling et al. (2017) and Driggs-Campbell et al. (2018) addressed the prediction problem for human drivers. Rather than predicting actions directly, an alternative is for the robot to model the human as a rational agent seeking to maximize an unknown objective function. The human's actions up to a particular time may be viewed as Bayesian evidence from which the robot may infer the parameters of that objective. Assuming that the human seeks to maximize this objective in the future, the robot can predict her future movements (e.g., Bai et al., 2015; Baker et al., 2007; Ng and Russell, 2000; Ziebart et al., 2009). In this paper, we build on this work by introducing a principled online technique for estimating confidence in such a learned model of human motion. \n Safe robot motion planning Once armed with a predictive model of the human motion, the robot may leverage motion planning methods that plan around uncertain moving obstacles and generate real-time dynamically feasible and safe trajectories. To avoid moving obstacles in real time, robots typically employ reactive and/or path-based methods. Reactive methods directly map sensor readings into control, with no memory involved (e.g., Belkhouche, 2009). Path-based methods such as rapidly-exploring random trees from Karaman and Frazzoli (2011) and A* from Hart et al.
(1968) find simple kinematic paths through space and, if necessary, time. These path-based methods of planning are advantageous in terms of efficiency, yet, while they have in some cases been combined with probabilistically moving obstacles as in Aoude et al. (2013) and Ziebart et al. (2009), they do not consider the endogenous dynamics of the robot or exogenous disturbances such as wind. As a result, the robot may deviate from the planned path and potentially collide with obstacles. It is common for these plans to try to avoid obstacles by a heuristic margin of error. Herbert et al. (2017) and Fridovich-Keil et al. (2018) proposed FaSTrack, a recent algorithm that provides a guaranteed tracking error margin and corresponding error-feedback controller for dynamic systems tracking a generic planner in the presence of bounded external disturbance. Our work builds upon FaSTrack to create an algorithm that can safely and dynamically navigate around uncertain moving obstacles in real time. \n Problem setup We consider a single mobile robot operating in a shared space with a single human agent (e.g., a pedestrian or human-driven car). For simplicity, we presume that the robot has full knowledge of its own state and that of the human, although both would require online estimation in practice. As we present each formal component of this problem, we will provide a concrete illustration using a running example in which a quadcopter is navigating around a pedestrian. \n Dynamical system models and safety We will model the motion of both the human and the robot as the evolution of two dynamical systems. Let the state of the human be x_H ∈ R^{n_H}, where n_H is the dimension of the human state space. We similarly define the robot's state, for planning purposes, as x_R ∈ R^{n_R}. In general, these states could represent the positions and velocities of a mobile robot and a human in a shared environment, the kinematic configurations of a human and a robotic manipulator in a common workspace, or the positions, orientations, and velocities of human-driven and autonomous vehicles in an intersection. We express the evolution of these states over time as a family of ordinary differential equations:

$$\dot{x}_H = f_H(x_H, u_H), \qquad \dot{x}_R = f_R(x_R, u_R) \qquad (1)$$

where u_H ∈ R^{m_H} and u_R ∈ R^{m_R} are the control actions of the human and robot, respectively. Running example: We introduce a running example for illustration purposes throughout the paper. In this example we consider a small quadcopter that needs to fly to goal location g_R ∈ R^3 in a room where a pedestrian is walking. For the purposes of planning, the quadcopter's 3D state is given by its position in space, x_R = [p_x, p_y, p_z], with velocity controls assumed decoupled in each spatial direction, up to v_R = 0.25 m/s. The human can only move by walking and therefore her state is given by planar coordinates x_H = [h_x, h_y], evolving as ẋ_H = [v_H cos u_H, v_H sin u_H]. Intuitively, we model the human as moving with a fixed speed and controlling their heading angle. At any given time, the human is assumed to either move at a leisurely walking speed (v_H ≈ 1 m/s) or remain still (v_H ≈ 0). Ultimately, the robot needs to plan and execute an efficient trajectory to a pre-specified goal state (g_R), without colliding with the human. We define the keep-out set K ⊂ R^{n_H} × R^{n_R} as the set of joint robot-human states to be avoided (for example, because they imply physical collisions).
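To make the running example concrete, the sketch below implements the two dynamics models in equation (1) for this example with a simple forward-Euler step; the keep-out set itself is left abstract here. The time step and initial conditions are our own choices for illustration, not values from the paper.

```python
# Illustrative sketch of the running-example dynamics in equation (1):
# a velocity-controlled quadcopter planning state and a fixed-speed,
# heading-controlled pedestrian. Time step and states below are assumed.
import math

DT = 0.1   # [s] integration time step (our choice)

def f_R(x_R, u_R):
    """Planning-model robot dynamics: velocity control in each axis."""
    return list(u_R)                      # x_R = [p_x, p_y, p_z], u_R = velocity

def f_H(x_H, u_H, v_H=1.0):
    """Human dynamics: fixed speed v_H, control is the heading angle u_H."""
    return [v_H * math.cos(u_H), v_H * math.sin(u_H)]

def euler_step(x, dx):
    return [xi + DT * dxi for xi, dxi in zip(x, dx)]

x_R = [0.0, 0.0, 1.0]          # quadcopter position [m]
x_H = [1.0, 1.0]               # pedestrian position [m]

x_R = euler_step(x_R, f_R(x_R, [0.25, 0.0, 0.0]))   # fly at the 0.25 m/s limit
x_H = euler_step(x_H, f_H(x_H, math.pi / 2))        # pedestrian heads "north"
print(x_R, x_H)
```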
To avoid reaching this set, the robot must reason about the human's future motion when constructing its own motion plan. Running example: In our quadcopter-avoiding-pedestrian example, K consists of joint robot-human states in which the quadcopter is flying within a square of side length l = 0.3 m centered around the human's location, while at any altitude, as well as any joint states in which the robot is outside the environment bounds, defined as a box with a square base of side L = 3.66 m and height H = 2 m, regardless of the human's state. \n Robust robot control Provided an objective and a dynamics model, the robot must generate a motion plan that avoids the keep-out set K. Unfortunately, this safety requirement is difficult to meet during operation for two main reasons. 1. Model mismatch. The dynamical system model f_R will never be a perfect representation of the real robot. This mismatch could lead to unintended collision. 2. Disturbances. Even with a perfect dynamics model, there may be unobserved, external \"disturbance\" inputs such as wind or friction. Without accounting for these disturbances, the system is not guaranteed to avoid K, even if the planned trajectory is pointwise collision-free. To account for modeling error and external disturbances, we could in principle design a higher-fidelity dynamical model directly in a robust motion planning framework. Unfortunately, however, real-time trajectory optimization in high dimensions can be computationally burdensome, particularly when we also require some notion of robustness to external disturbance. Ideally, we would like to enjoy the computational benefits of planning with a lower-fidelity model while enforcing the safety constraints induced by the higher-fidelity model. To characterize this model mismatch, we consider a higher-fidelity and typically higher-order dynamical representation of the robot, with state representation s_R ∈ R^{n_S}. This dynamical model will also explicitly account for external disturbances as unknown bounded inputs, distinct from control inputs. In order to map between this higher-fidelity \"tracking\" state s_R and the lower-fidelity \"planning\" state x_R, we shall assume a known projection operator π : R^{n_S} → R^{n_R}. Fortunately, we can plan in the lower-dimensional state space at runtime, and guarantee robust collision avoidance via an offline reachability analysis that quantifies the effects of model mismatch and external disturbance. This framework, called FaSTrack and first proposed by Herbert et al. (2017), is described in further detail in Section 6. Running example: We model our quadcopter with the following flight dynamics (in the near-hover regime, at zero yaw with respect to a global coordinate frame):

$$\begin{bmatrix} \dot{p}_x \\ \dot{p}_y \\ \dot{p}_z \end{bmatrix} = \begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix}, \qquad \begin{bmatrix} \dot{v}_x \\ \dot{v}_y \\ \dot{v}_z \end{bmatrix} = \begin{bmatrix} a_g \tan u_\theta \\ -a_g \tan u_\phi \\ u_T - a_g \end{bmatrix} \qquad (2)$$

where [p_x, p_y, p_z] is the quadcopter's position in space and [v_x, v_y, v_z] is its velocity expressed in the fixed global frame. We model its control inputs as thrust acceleration u_T and attitude angles (roll u_φ and pitch u_θ), and denote the acceleration due to gravity as a_g. The quadcopter's motion planner generates nominal kinematic trajectories in the lower-dimensional [p_x, p_y, p_z] position state space. Therefore, we have a linear projection map π(s_R) = [I_3, 0_3] s_R, that is, x_R retains the position variables in s_R and discards the velocities.
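A small sketch of the tracking-model dynamics in equation (2) and the projection map π follows; the numerical inputs are placeholders of ours, not parameters from the paper.

```python
# Illustrative sketch of the near-hover tracking dynamics (2) and the linear
# projection pi that keeps positions and discards velocities. Inputs are made up.
import math

A_G = 9.81   # gravitational acceleration [m/s^2]

def f_tracking(s_R, u):
    """s_R = [p_x, p_y, p_z, v_x, v_y, v_z]; u = (u_theta, u_phi, u_T)."""
    p_dot = s_R[3:6]
    u_theta, u_phi, u_T = u
    v_dot = [A_G * math.tan(u_theta),      # x acceleration from pitch
             -A_G * math.tan(u_phi),       # y acceleration from roll
             u_T - A_G]                    # z acceleration from thrust
    return p_dot + v_dot

def pi(s_R):
    """Projection onto the planning state: x_R = [I_3, 0_3] s_R."""
    return s_R[0:3]

s_R = [0.0, 0.0, 1.0, 0.1, 0.0, 0.0]
print(f_tracking(s_R, (0.05, 0.0, A_G)))   # small pitch, hover thrust
print(pi(s_R))                              # planning state: position only
```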
\n Predictive human model In order to predict the human's future motion, the robot uses its internal model of human dynamics, $f_H$. Under this modeling assumption, the human's future trajectory depends upon the choice of control input over time, $u_H(\cdot)$. Extensive work in econometrics and cognitive science, such as that of Von Neumann and Morgenstern (1945), Luce (1959), and Baker et al. (2007), has shown that human behavior (that is, $u_H$) can be well modeled by utility-driven optimization. Thus, the robot models the human as optimizing a reward function, $r_H(x_H, u_H; \theta)$, that depends on the human's state and action, as well as a set of parameters $\theta$. This reward function could be a linear combination of features as in many inverse optimal control implementations (where the goal or feature weighting $\theta$ must be learned, either online or offline), or more generally learned through function approximators such as deep neural networks, where $\theta$ are the trained weights as in Finn et al. (2016). We assume that the robot has a suitable human reward function $r_H$, either learned offline from prior human demonstrations or otherwise encoded by the system designers. Thus, endowed with $r_H$, the robot can model the human's choice of control action as a probability distribution over actions conditioned on state. Under maximum-entropy assumptions (Ziebart et al., 2008) inspired by noisy-rationality decision-making models (Baker et al., 2007), the robot models the human as more likely to choose (discrete) actions $u_H$ with high expected utility, in this case the state-action value (or Q-value): $$P(u_H \mid x_H; \beta, \theta) = \frac{e^{\beta Q_H(x_H, u_H; \theta)}}{\sum_{\tilde u} e^{\beta Q_H(x_H, \tilde u; \theta)}} \quad (3)$$ We use a temporally and spatially discretized version of human dynamics, $\tilde f_H$. These discrete-time dynamics may be found by integrating $f_H$ over a fixed time step $\Delta t$ with fixed control $u_H$ over the interval. Section 5 provides further details on this discretization. Running example: The quadcopter's model of the human assumes the human intends to reach some target location $g_H \in \mathbb{R}^2$ in a straight line. The human's reward function is given by the distance traveled over time step $\Delta t$, i.e., $r_H(x_H, u_H; g_H) = -v_H \Delta t$, and human trajectories are constrained to terminate at $g_H$. The state-action value, parameterized by $\theta = g_H$, captures the optimal cost of reaching $g_H$ from $x_H$ when initially applying $u_H$ for a duration $\Delta t$: $$Q_H(x_H, u_H; g_H) = -v_H \Delta t - \big\| x_H + v_H \Delta t\, [\cos u_H, \sin u_H]^\top - g_H \big\|_2.$$ Often, the coefficient $\beta$ is termed the rationality coefficient, because it quantifies the degree to which the robot expects the human's choice of control to align with its model of utility. For example, taking $\beta \to 0$ yields a model of a human who appears "irrational," choosing actions uniformly at random and completely ignoring the modeled utility. At the other extreme, taking $\beta \to \infty$ corresponds to a "perfectly rational" human, whose actions exactly optimize the modeled reward function. As we will see in Section 4, $\beta$ can also be viewed as a measure of the robot's confidence in the predictive accuracy of $Q_H$. Note that $Q_H(x_H, u_H; \theta)$ only depends on the human state and action and not on the robot's. Thus far, we have intentionally neglected discussion of human-robot interaction effects. These effects are notoriously difficult to model, and the community has devoted a significant effort to building and validating a variety of models (e.g., Sadigh et al., 2016; Trautman and Krause, 2010).
In that spirit, we could have chosen to model human actions u H as dependent upon robot state x R in (3), and likewise defined Q H to depend upon x R . This extended formulation is sufficiently general as to encompass all possible (Markov) interaction models. However, in this work we explicitly do not model these interactions; indeed, one of the most important virtues of our approach is its robustness to precisely these sorts of modeling errors. \n Probabilistically safe motion planning Ideally, the robot's motion planner should generate trajectories that reach a desired goal state efficiently, while maintaining safety. More specifically, in this context ''safety'' indicates that the physical system will never enter the keepout set K during operation, despite human motion and external disturbances. That is, we would like to guarantee that (p(s R ), x H ) 6 2 K for all time. To make this type of strong, deterministic, a priori safety guarantee requires the robot to avoid the set of all human states x H which could possibly be occupied at a particular time, i.e., the human's forward reachable set. If the robot can find trajectories that are safe for any possible human trajectory then there is no need to predict the human's next action. Unfortunately, the forward reachable set of the human often encompasses such a large volume of the workspace that it is impossible for the robot to find a guaranteed safe trajectory to the goal state. This motivates refining our notion of prediction: rather than reasoning about all the places where the human could be, the robot can instead reason about how likely the human is to be at each location. This probabilistic reasoning provides a guide for planning robot trajectories with a quantitative degree of safety assurance. Our probabilistic model of human control input (3) coupled with dynamics model f H allows us to compute a probability distribution over human states for every future time. By relaxing our conception of safety to consider only collisions that might occur with sufficient probability P th , we dramatically reduce the effective volume of this set of future states to avoid. In practice, P th should be chosen carefully by a system designer in order to trade off overall collision probability with conservativeness in motion planning. The proposed approach in this paper follows two central steps to provide a quantifiable, high-confidence collision avoidance guarantee for the robot's motion around the human. In Section 4 we present our proposed Bayesian framework for reasoning about the uncertainty inherent in a model's prediction of human behavior. Based on this inference, we demonstrate how to generate a real-time probabilistic prediction of the human's motion over time. Next, in Section 6, we extend a state-of-the-art, provably safe, real-time robotic motion planner to incorporate our time-varying probabilistic human prediction. \n Confidence-aware human motion prediction Any approach to human motion prediction short of computing a full forward reachable set must, explicitly or implicitly, reflect a model of human decision-making. In this work, we make that model explicit by assuming that the human chooses control actions in a Markovian fashion according to the probability distribution (3). Other work in the literature, such as that of Schmerling et al. (2017) , aims to learn a generative probabilistic model for human trajectories; implicitly, this training procedure distills a model of human decision-making. 
Whether explicit or implicit, these models are by nature imperfect and liable to make inaccurate predictions eventually. One benefit of using an explicit model of human decision-making, such as (3), is that we may reason directly and succinctly about its performance online. In particular, the entropy of the human control distribution in (3) is a decreasing function of the parameter b. High values of b place more probability mass on highutility control actions u H , whereas low values of b spread the probability mass more evenly between different control inputs, regardless of their modeled utility Q H . Therefore, b naturally quantifies how well the human's motion is expected to agree with the notion of optimality encoded in Q H . The commonly used term ''rationality coefficient,'' however, seems to imply that discrepancies between the two indicate a failure on the human's part to make the ''correct'' decisions, as encoded by the modeled utility. Instead, we argue that these inevitable disagreements are primarily a result of the model's inability to fully capture the human's behavior. Thus, instead of conceiving of b as a rationality measure, we believe that b can be given a more pragmatic interpretation related to the accuracy with which the robot's model of the human is able to explain the human's motion. Consistently, in this paper, we refer to b as model confidence. An important related observation following from this interpretation of b is that the predictive accuracy of a human model is likely to change over time. For example, the human may change their mind unexpectedly, or react suddenly to some aspect of the environment that the robot is unaware of. Therefore, we shall model b as an unobserved, time-varying parameter. Estimating it in real time provides us with a direct, quantitative summary of the degree to which the utility model Q H explains the human's current motion. To do this, we maintain a Bayesian belief about the possible values of b. Initially, we begin with a uniform prior over b and over time this distribution evolves given measurements of the human's state and actions. \n Real-time inference of model confidence We reason about the model confidence b as a hidden state in a hidden Markov model (HMM) framework. The robot starts with a prior belief b 0 À over the initial value of b. In this work, we use a uniform prior, although that is not strictly necessary. At each discrete time step k 2 f0, 1, 2, . . .g, it will have some belief about model confidence b k À (b). 3 After observing a human action u k H , the robot will update its belief to b k + by applying Bayes' rule. The hidden state may evolve between subsequent time steps, accounting for the important fact that the predictive accuracy of the human model may change over time as unmodeled factors in the human's behavior become more or less relevant. As, by definition, we do not have access to a model of these factors, we use a naive ''e-static'' transition model: at each time k, b may, with some probability e, be re-sampled from the initial distribution b 0 À , and otherwise retains its previous value. We define the belief over the next value of b (denoted by b 0 ) as an expectation of the conditional probability P(b 0 jb), i.e., b k À (b 0 ) :¼ E b;b kÀ1 + ½P(b 0 jb). 
Concretely, this expectation may be computed as $$b^k_-(\beta') = (1 - \epsilon)\, b^{k-1}_+(\beta') + \epsilon\, b^0_-(\beta') \quad (4)$$ By measuring the evolution of the human's state $x_H$ over time, we assume that, at every time step $k$, the robot is able to observe the human's control input $u^k_H$. This observed control may be used as evidence to update the robot's belief $b^k_-$ about $\beta$ over time via a Bayesian update: $$b^k_+(\beta) = \frac{P(u^k_H \mid x^k_H; \beta, \theta)\, b^k_-(\beta)}{\sum_{\tilde\beta} P(u^k_H \mid x^k_H; \tilde\beta, \theta)\, b^k_-(\tilde\beta)} \quad (5)$$ with $b^k_+(\beta) := P(\beta \mid x^{0:k}_H, u^{0:k}_H)$ for $k \in \{0, 1, \ldots\}$, and $P(u^k_H \mid x^k_H; \beta, \theta)$ given by (3). It is critical to be able to perform this update rapidly to facilitate real-time operation; this would be difficult in the original continuous hypothesis space $\beta \in [0, \infty)$, or even in a large discrete set. Fortunately, our software examples in Section 5 and hardware demonstration in Section 8 suggest that maintaining a Bayesian belief over a relatively small set of $N_\beta = 5$ discrete values of $\beta$ distributed on a log scale achieves significant improvement relative to using a fixed value. The "$\epsilon$-static" transition model leads to the desirable consequence that old observations of the human's actions have a smaller influence on the current model confidence distribution than recent observations. In fact, if no new observations are made, successively applying time updates asymptotically contracts the belief towards the initial distribution, that is, $b^k_-(\cdot) \to b^0_-(\cdot)$. The choice of parameter $\epsilon$ effectively controls the rate of this contraction, with higher $\epsilon$ leading to more rapid contraction. \n Human motion prediction Equipped with a belief over $\beta$ at time step $k$, we are now able to propagate the human's state distribution forward to any future time via the well-known Kolmogorov forward equations, recursively. In particular, suppose that we know the probability that the human is in each state $x^\tau_H$ at some future time step $\tau$. We know that (according to our utility model) the probability of the human choosing control $u^\tau_H$ in state $x^\tau_H$ is given by (3). Accounting for the otherwise deterministic dynamics model $\tilde f_H$, we obtain the following expression for the human's state distribution at the following time step $\tau + 1$: $$P(x^{\tau+1}_H; \beta, \theta) = \sum_{x^\tau_H, u^\tau_H} P(x^{\tau+1}_H \mid x^\tau_H, u^\tau_H; \beta, \theta)\, P(u^\tau_H \mid x^\tau_H; \beta, \theta)\, P(x^\tau_H; \beta, \theta) \quad (6)$$ for a particular choice of $\beta$. Marginalizing over $\beta$ according to our belief at the current time step $k$, we obtain the overall occupancy probability distribution at each future time step $\tau$: $$P(x^\tau_H; \theta) = \mathbb{E}_{\beta \sim b^k}\, P(x^\tau_H; \beta, \theta) \quad (7)$$ Note that (6) is expressed more generally than is strictly required. Indeed, because the only randomness in dynamics model $\tilde f_H$ originates from the human's choice of control input $u_H$, we have $P(x^{\tau+1}_H \mid x^\tau_H, u^\tau_H; \beta, \theta) = \mathbb{1}\{x^{\tau+1}_H = \tilde f_H(x^\tau_H, u^\tau_H)\}$. \n Model confidence with auxiliary parameter identification Thus far, we have tacitly assumed that the only unknown parameter in the human utility model (3) is the model confidence, $\beta$. However, often one or more of the auxiliary parameters $\theta$ are also unknown. These auxiliary parameters could encode one or more human goal states or intents, or other characteristics of the human's utility, such as her preference for avoiding parts of the environment. Further, much like model confidence, they may change over time. In principle, it is possible to maintain a Bayesian belief over $\beta$ and $\theta$ jointly. The Bayesian update for the hidden state $(\beta, \theta)$ is then given by $$b^k_+(\beta, \theta) = \frac{P(u^k_H \mid x^k_H; \beta, \theta)\, b^k_-(\beta, \theta)}{\sum_{\tilde\beta, \tilde\theta} P(u^k_H \mid x^k_H; \tilde\beta, \tilde\theta)\, b^k_-(\tilde\beta, \tilde\theta)} \quad (8)$$ with $b^k_+(\beta, \theta) := P(\beta, \theta \mid x^{0:k}_H, u^{0:k}_H)$ the running posterior and $b^k_-(\beta, \theta)$ the prior at time step $k$.
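The β-only updates (4)-(5) and the predictions (6)-(7) reduce to a few lines of array arithmetic over a discrete β grid and a tabular state space. The following sketch is our own (the grid, ε value, and variable names are illustrative assumptions, not the paper's implementation).

```python
import numpy as np

BETAS = np.logspace(-1, 1, 5)                    # N_beta = 5 values on a log scale
EPS = 0.05                                       # epsilon-static resampling probability
PRIOR = np.full(len(BETAS), 1.0 / len(BETAS))    # uniform b^0_-

def boltzmann(q_values, beta):
    """P(u | x; beta, theta) over discrete actions, Eq. (3), computed stably."""
    z = np.exp(beta * (q_values - q_values.max()))
    return z / z.sum()

def time_update(belief, prior=PRIOR, eps=EPS):
    """Eq. (4): b^k_- = (1 - eps) * b^{k-1}_+ + eps * b^0_-."""
    return (1.0 - eps) * belief + eps * prior

def measurement_update(belief_minus, q_values_at_xk, observed_action):
    """Eq. (5): reweight the belief by the likelihood of the observed human action."""
    likelihood = np.array([boltzmann(q_values_at_xk, b)[observed_action] for b in BETAS])
    post = likelihood * belief_minus
    return post / post.sum()

def predict_occupancy(p_x0, Q, next_state, belief, horizon):
    """Eqs. (6)-(7): propagate a tabular state distribution forward under the
    Boltzmann policy and deterministic dynamics, then marginalize over beta.
    Q[x, u] are Q-values; next_state[x, u] is the deterministic successor index."""
    n_states, n_actions = Q.shape
    per_beta = np.tile(p_x0, (len(BETAS), 1))    # P(x; beta) for each beta value
    predictions = []
    for _ in range(horizon):
        new = np.zeros_like(per_beta)
        for i, b in enumerate(BETAS):
            for x in range(n_states):
                if per_beta[i, x] == 0.0:
                    continue
                pu = boltzmann(Q[x], b)
                for u in range(n_actions):
                    new[i, next_state[x, u]] += pu[u] * per_beta[i, x]
        per_beta = new
        predictions.append(belief @ per_beta)    # Eq. (7): expectation over beta
    return predictions
```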
This approach can be practical for parameters taking finitely many values from a small, discrete set, e.g., possible distinct modes for a human driver (distracted, cautious, aggressive). However, for certain scenarios or approaches it may not be practical to maintain a full Bayesian belief on the parameters $\theta$. In such cases, it is reasonable to replace the belief over $\theta$ with a point estimate $\hat\theta$, such as the maximum likelihood estimator or the mean, and substitute that estimate into (6). Depending on the complexity of the resulting maximum likelihood estimation problem, it may or may not be computationally feasible to update the parameter estimate $\hat\theta$ at each time step. Fortunately, even when it is computationally expensive to estimate $\theta$, we can leverage our model confidence as an indicator of when re-estimating these parameters may be most useful. That is, when model confidence degrades, that may indicate poor estimates of $\theta$. \n Prediction examples We illustrate these inference steps with two sets of examples: our running pedestrian example and a simple model of a car. \n Pedestrian model (running example) So far, we have presented a running example of a quadcopter avoiding a human. We use a deliberately simple, purely kinematic model of continuous-time human motion: $$\dot x_H = \begin{bmatrix} \dot h_x \\ \dot h_y \end{bmatrix} = \begin{bmatrix} v_H \cos u_H \\ v_H \sin u_H \end{bmatrix} \quad (9)$$ However, as discussed in Section 3.3, the proposed prediction method operates in discrete time (and space). The discrete dynamics corresponding to (9) are given by $$x^{k+1}_H - x^k_H \equiv x_H(t + \Delta t) - x_H(t) = \begin{bmatrix} v_H \Delta t \cos u_H(t) \\ v_H \Delta t \sin u_H(t) \end{bmatrix} \quad (10)$$ for a time discretization of $\Delta t$. \n Dubins car model To emphasize the generality of our method, we present similar results for a different application domain: autonomous driving. We will model a human-driven vehicle as a dynamical system whose state $x_H$ evolves as $$\dot x_H = \begin{bmatrix} \dot h_x \\ \dot h_y \\ \dot h_\phi \end{bmatrix} = \begin{bmatrix} v_H \cos h_\phi \\ v_H \sin h_\phi \\ u_H \end{bmatrix} \quad (11)$$ Observe that, while (11) appears very similar to (9), in this Dubins car example the angle of motion is a state, not a control input. We discretize these dynamics by integrating (11) from $t$ to $t + \Delta t$, assuming a constant control input $u_H$: $$x^{k+1}_H - x^k_H \equiv x_H(t + \Delta t) - x_H(t) = \begin{bmatrix} \frac{v_H}{u_H(t)}\big(\sin(h_\phi(t) + u_H(t)\Delta t) - \sin(h_\phi(t))\big) \\ -\frac{v_H}{u_H(t)}\big(\cos(h_\phi(t) + u_H(t)\Delta t) - \cos(h_\phi(t))\big) \\ u_H \Delta t \end{bmatrix}$$ For a specific goal position $g = [g_x, g_y]$, the Q-value corresponding to state-action pair $(x_H, u_H)$ and reward function $r_H(x_H, u_H) = -v_H \Delta t$ (until the goal is reached) may be found by solving a shortest path problem offline. \n Accurate model First, we consider a scenario in which the robot has full knowledge of the human's goal, and the human moves along the shortest path from a start location to this known goal state. Thus, human motion is well-explained by $Q_H$. The first row of Figure 2 illustrates the probability distributions our method predicts for the pedestrian's future state at different times. Initially, the predictions generated by our Bayesian confidence-inference approach (right) appear similar to those generated by the low model confidence predictor (left). However, our method rapidly discovers that $Q_H$ is an accurate description of the pedestrian's motion and generates predictions that match the high model confidence predictor (center). The data used in this example was collected by tracking the motion of a real person walking in a motion capture arena; see Section 8 for further details.
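For reference, the discrete-time pedestrian step (10) and the integrated Dubins car step can be written directly; the sketch below is ours (not library code), with default values for $v_H$ and $\Delta t$ chosen only for illustration and an explicit guard for the straight-line limit $u_H \to 0$.

```python
import numpy as np

def pedestrian_step(x, heading, v_H=1.0, dt=0.1):
    """Eq. (10): x = [h_x, h_y]; the heading angle u_H is the (discrete) action."""
    return x + v_H * dt * np.array([np.cos(heading), np.sin(heading)])

def dubins_step(x, u, v_H=1.0, dt=0.1):
    """Integrated Dubins car update for x = [h_x, h_y, h_phi] under a constant
    turn rate u over dt; falls back to a straight-line step as u approaches 0."""
    hx, hy, phi = x
    if abs(u) < 1e-6:
        return np.array([hx + v_H * dt * np.cos(phi),
                         hy + v_H * dt * np.sin(phi),
                         phi])
    return np.array([hx + (v_H / u) * (np.sin(phi + u * dt) - np.sin(phi)),
                     hy - (v_H / u) * (np.cos(phi + u * dt) - np.cos(phi)),
                     phi + u * dt])
```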
Likewise, the first row of Figure 3 shows similar results for a human-driven Dubins car model (in simulation) at an intersection. Here, traffic laws provide a strong prior on the human's potential goal states. As shown, our method of Bayesian model confidence inference quickly infers the correct goal and learns that the human driver is acting in accordance with its model $Q_H$. The resulting predictions are substantially similar to the high-$\beta$ predictor. The data used in this example was simulated by controlling a Dubins car model along a pre-specified trajectory. \n Unmodeled obstacle Often, robots do not have fully specified models of the environment. Here, we showcase the resilience of our approach to unmodeled obstacles that the human must avoid. In this scenario, the human has the same start and goal as in the accurate model case, except that there is an obstacle along the way. The robot is unaware of this obstacle, however, which means that in its vicinity the human's motion is not well-explained by $Q_H$, and the belief $b(\beta)$ ought to place more probability mass on lower values of $\beta$. The second rows of Figure 2 and Figure 3 illustrate this type of situation for the pedestrian and Dubins car, respectively. In Figure 2, the pedestrian walks to an a priori known goal location and avoids an unmodeled spill on the ground. Analogously, in Figure 3 the car swerves to avoid a large pothole. By inferring model confidence online, our approach generates higher-variance predictions of future state, but only in the vicinity of these unmodeled obstacles. At other times throughout the episode, when $Q_H$ is more accurate, our approach produces predictions more in line with the high model confidence predictor. \n Unmodeled goal In most realistic human-robot encounters, even if the robot does have an accurate environment map and observes all obstacles, it is unlikely for it to be aware of all human goals. We test our approach's resilience to unknown human goals by constructing a scenario in which the human moves between both known and unknown goals. The third row of Figure 2 illustrates this situation for the pedestrian example. Here, the pedestrian first moves to one known goal position, then to another, and finally back to the start, which was not a modeled goal location. The first two legs of this trajectory are consistent with the robot's model of goal-oriented motion, though accurate prediction does require the predictor to infer which goal the pedestrian is walking toward. However, when the pedestrian returns to the start, her motion appears inconsistent with $Q_H$, skewing the robot's belief over $\beta$ toward zero. Similarly, in the third row of Figure 3 we consider a situation in which a car makes an unexpected turn onto an unmapped access road. As soon as the driver initiates the turn, our predictor rapidly learns to distrust its internal model $Q_H$ and shifts its belief over $\beta$ downward. \n Safe probabilistic planning and tracking Given probabilistic predictions of the human's future motion, the robot must plan efficient trajectories that avoid collision with high probability. In order to reason robustly about this probability of future collision, we must account for potential tracking errors incurred by the real system as it follows planned trajectories. To this end, we build on the recent FaSTrack framework of Herbert et al.
(2017) , which provides control-theoretic robust safety certificates in the presence of deterministic obstacles, and extend it to achieve approximate probabilistic collision-avoidance. \n Background: fast planning, safe tracking Recall that x R is the robot's state for the purposes of motion planning, and that s R encodes a higher-fidelity, potentially higher-dimensional notion of state (with associated dynamics). The recently proposed FaSTrack framework from Herbert et al. (2017) uses Hamilton-Jacobi reachability analysis to quantify the worst-case tracking performance of the s R -system as it follows trajectories generated by the x R -system. For further reading on reachability analysis refer to Evans and Souganidis (1984) , Mitchell et al. (2005) , and Bansal et al. (2017) . A byproduct of this FaSTrack analysis is an error feedback controller that the s R system can use to achieve this worst-case tracking error. The tracking error bound may be given to one of many off-the-shelf real-time motion planning algorithms operating in x R -space in order to guarantee real-time collision avoidance by the s Rsystem. Formally, FaSTrack precomputes an optimal tracking controller, as well as a corresponding compact set E in the robot's planning state space, such that (p(s R (t))À x R, ref (t)) 2 E for any reference trajectory proposed by the lower-fidelity planner. This bound E is a trajectory tracking certificate that can be passed to an online planning algorithm for real-time safety verification: the dynamical robot is guaranteed to always be somewhere within the bound relative to the current planned reference point x R, ref (t). This tracking error bound may sometimes be expressed analytically; otherwise, it may be computed numerically offline using level set methods (e.g., Mitchell, 2009) . Equipped with E, the planner can generate safe plans online by ensuring that the entire tracking error bound around the nominal state remains collision-free throughout the trajectory. Efficiently checking these E-augmented trajectories for collisions with known obstacles is critical for real-time performance. Note that the planner only needs to know E (which is computed offline) and otherwise requires no explicit understanding of the high-fidelity model. Running example: As dynamics (2) are decoupled in the three spatial directions, the bound E computed by FaSTrack is an axis-aligned box of dimensions E x × E y × E z . For further details refer to Fridovich-Keil et al. ( 2018 ). \n Robust tracking, probabilistic safety Unfortunately, planning algorithms for collision checking against deterministic obstacles cannot be readily applied to our problem. Instead, a trajectory's collision check should return the probability that it might lead to a collision. Based on this probability, the planning algorithm can discriminate between trajectories that are sufficiently safe and those that are not. As discussed in Section 3.4, a safe online motion planner invoked at time t should continually check the probability that, at any future time t, (p(s R (t)), x H (t)) 2 K. The tracking error bound guarantee from FaSTrack allows us to conduct worst-case analysis on collisions given a human state x H . Concretely, if no point in the Minkowski sum fx R + Eg is in the collision set with x H , we can guarantee that the robot is not in collision with the human. 
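To make the role of the tracking error bound concrete, here is a small sketch of ours (the grid representation, parameter values, and function names are illustrative assumptions, not the authors' code) that evaluates the probability mass of a predicted human-occupancy grid inside the E-augmented collision region for the running example, and applies the per-waypoint threshold test used by the planner in the approximation developed below.

```python
import numpy as np

def collision_probability(x_R, occupancy, grid_x, grid_y, l=0.3, E_x=0.1, E_y=0.1):
    """Probability mass of the predicted human position inside H_E(x_R) for the
    running example: a rectangle of size (l + E_x) x (l + E_y) centered on the
    quadcopter's planar position. E_x and E_y stand in for the FaSTrack tracking
    error bound computed offline; occupancy[i, j] is P(human at (grid_x[i], grid_y[j]))."""
    half_w, half_h = (l + E_x) / 2.0, (l + E_y) / 2.0
    in_x = np.abs(grid_x - x_R[0]) <= half_w
    in_y = np.abs(grid_y - x_R[1]) <= half_h
    return occupancy[np.outer(in_x, in_y)].sum()

def trajectory_is_safe(waypoints, occupancy_per_step, grid_x, grid_y, p_th=0.01):
    """Accept a candidate plan only if every marginal collision probability along it
    stays below the threshold P_th (the max-of-marginals approximation)."""
    return all(collision_probability(x_R, occ, grid_x, grid_y) <= p_th
               for x_R, occ in zip(waypoints, occupancy_per_step))
```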
The probability of a collision event for any point $x_R(t)$ along a candidate trajectory is then $$P_{\mathrm{coll}}(x_R(t)) := P\big((x_R, x_H) \in K\big) \quad (12)$$ Assuming worst-case tracking error bound $E$, this quantity can be upper-bounded by the total probability that $x_H(t)$ will be in collision with any of the possible robot states $\tilde x_R \in \{x_R(t) + E\}$. For each robot planning state $x_R \in \mathbb{R}^{n_R}$ we define the set of human states in potential collision with the robot: $$H_E(x_R) := \big\{x_H \in \mathbb{R}^{n_H} : \exists \tilde x_R \in \{x_R + E\},\ (\tilde x_R, x_H) \in K\big\} \quad (13)$$ Running example: Given $K$ and $E$, $H_E(x_R)$ is the set of human positions within the rectangle of dimensions $(l + E_x) \times (l + E_y)$ centered on $[p_x, p_y]$. A human anywhere in this rectangle could be in collision with the quadcopter. The following result follows directly from the definition of the tracking error bound and a union bound. Proposition 1. The probability of a robot with worst-case tracking error $E$ colliding with the human at any trajectory point $x_R(t)$ is bounded above by the probability mass of $x_H(t)$ contained within $H_E(x_R(t))$. We consider discrete-time motion plans. The probability of collision along any such trajectory from current time step $k$ to final step $k + K$ is upper-bounded by $$P^{k:k+K}_{\mathrm{coll}} \le \bar P^{k:k+K}_{\mathrm{coll}} := 1 - \prod_{\tau = k}^{k+K} P\big(x^\tau_H \notin H_E(x^\tau_R) \,\big|\, x^s_H \notin H_E(x^s_R),\ k \le s < \tau\big) \quad (14)$$ Evaluating the right-hand side of (14) exactly requires reasoning about the joint distribution of human states over all time steps and its conditional relationship on whether collision has yet occurred. This is equivalent to maintaining a probability distribution over the exponentially large space of trajectories $x^{k:k+K}_H$ that the human might follow. As motion planning occurs in real time, we shall resort to a heuristic approximation of (14). One approach to approximating (14) is to assume that the event $x^{\tau_1}_H \notin H_E(x^{\tau_1}_R)$ is independent of $x^{\tau_2}_H \notin H_E(x^{\tau_2}_R)$, for all $\tau_1 \ne \tau_2$. This independence assumption is equivalent to removing the conditioning in (14). Unfortunately, this approximation is excessively pessimistic; if there is no collision at time step $\tau$, then collision is also unlikely at time step $\tau + 1$ because both human and robot trajectories are continuous. In fact, for sufficiently small time discretization $\Delta t$ and nonzero collision probabilities at each time step, the total collision probability resulting from an independence assumption would approach 1 exponentially fast in the number of time steps $K$. We shall refine this approximation by finding a tight lower bound on the right-hand side of (14). Because collision events are correlated in time, we first consider replacing each conditional probability $P\big(x^\tau_H \notin H_E(x^\tau_R) \mid x^s_H \notin H_E(x^s_R),\ k \le s < \tau\big)$ by 1 for all $\tau \in \{k + 1, \ldots, k + K\}$. This effectively lower bounds $\bar P^{k:k+K}_{\mathrm{coll}}$ by the worst-case probability of collision at the current time step $k$: $$\bar P^{k:k+K}_{\mathrm{coll}} \ge 1 - P\big(x^k_H \notin H_E(x^k_R)\big) = P\big(x^k_H \in H_E(x^k_R)\big) \quad (15)$$ This bound is extremely loose in general, because it completely ignores the possibility of future collision. However, note that probabilities in the product in (14) may be conditioned in any particular order (not necessarily chronological). This commutativity allows us to generate a lower bound of the form $\bar P^{k:k+K}_{\mathrm{coll}} \ge P\big(x^\tau_H \in H_E(x^\tau_R)\big)$ for each $\tau \in \{k, \ldots, k + K\}$.
Taking the tightest of all of these bounds, we can obtain an informative, yet quickly computable, approximator for the sought probability: $$\bar P^{k:k+K}_{\mathrm{coll}} \;\ge\; \max_{\tau \in \{k, \ldots, k+K\}} P\big(x^\tau_H \in H_E(x^\tau_R)\big) \;=:\; \hat P^{k:k+K}_{\mathrm{coll}} \quad (16)$$ To summarize, the left inequality in (16) lower-bounds $\bar P^{k:k+K}_{\mathrm{coll}}$ with the greatest marginal collision probability at any point in the trajectory. On the right-hand side of (16), we take this greatest marginal collision probability as an approximator $\hat P^{k:k+K}_{\mathrm{coll}}$ of the actual probability of collision over the entire trajectory. In effect, we shall approximate $P^{k:k+K}_{\mathrm{coll}}$ with a tight lower bound of an upper bound. While this type of approximation may err on the side of optimism, we note that both the robot's ability to replan over time and the fact that the left-hand side of (16) is an upper bound on total trajectory collision probability mitigate this potentially underestimated risk. \n Safe online planning under uncertain human predictions This approximation of collision probability allows the robot to discriminate between valid and invalid candidate trajectories during motion planning. Using the prediction methodology proposed in Section 4, we may quickly generate, at every time $t$, the marginal probabilities in (16) at each future time $\tau \in \{k, \ldots, k + K\}$, based on past observations at times $0, \ldots, k$. The planner then computes the instantaneous probability of collision $P\big(x^\tau_H \in H_E(x^\tau_R)\big)$ by integrating $P(x^\tau_H \mid x^{0:k}_H)$ over $H_E(x^\tau_R)$, and rejects the candidate point $x^\tau_R$ if this probability exceeds $P_{\mathrm{th}}$. Note that for graph-based planners that consider candidate trajectories by generating a graph of time-stamped states, rejecting a candidate edge from this graph is equivalent to rejecting all further trajectories that would contain that edge. This early rejection rule is consistent with the proposed approximation (16) of $P^{k:k+K}_{\mathrm{coll}}$ while preventing unnecessary exploration of candidate trajectories that would ultimately be deemed unsafe. Throughout operation, the robot follows each planned trajectory using the error feedback controller provided by FaSTrack, which ensures that the robot's high-fidelity state representation $s_R$ and the lower-fidelity state used for planning, $x_R$, differ by no more than the tracking error bound $E$. This planning and tracking procedure continues until the robot reaches its desired goal state. Running example: Our quadcopter is now required to navigate to a target position shown in Figure 4 without colliding with the human. Our proposed algorithm successfully avoids collisions at all times, replanning to leave greater separation from the human whenever her motion departs from the model. In contrast, robot planning with fixed model confidence is either overly conservative at the expense of time and performance or overly aggressive at the expense of safety. \n Connections to reachability analysis In this section, we present an alternative, complementary analysis of the overall safety properties of the proposed approach to prediction and motion planning. This discussion is grounded in the language of reachability theory and worst-case analysis of human motion. \n Forward reachable set Throughout this section, we frequently refer to the human's time-indexed forward reachable set. We define this set formally in the following. Definition 1.
(Forward reachable set) For a dynamical system _ x = f (x, u) with state trajectories given by the function j(x(0), t, u( Á ))ex(t), the forward reachable set FRS(x, t) of a state x after time t has elapsed is FRS(x, t) :¼ fx 0 : 9u( Á ), x 0 = j(x, t, u( Á ))g That is, a state x 0 is in the forward reachable set of x after time t if it is reachable via some applied control signal u( Á ). Remark 1. (Recovery of FRS) For P th = 0 and any finite b, the set of states assigned probability greater than P th is identical to the forward reachable set, up to discretization errors. This is visualized for low, high, and Bayesian model confidence in Figure 5 . \n A sufficient condition for the safety of individual trajectories In Section 6.2, we construct an approximation to the probability of collision along a trajectory, which we use during motion planning to avoid potentially dangerous states. To make this guarantee of collision avoidance for a motion plan even stronger, it would suffice to ensure that the robot never comes too close to the human's forward reachable set. More precisely, a planned trajectory is safe if fx R (t) + Eg \\ FRS(x H , t) = ;, for every state x R (t) along a motion plan generated when the human was at state x H . The proof of this statement follows directly from the properties of the tracking error bound E described in Section 6. While this condition may seem appealing, it is in fact highly restrictive. The requirement of avoiding the full forward reachable set is not always possible in confined spaces; indeed, this was our original motivation for wanting to predict human motion (see Section 3.4). However, despite this shortcoming, the logic behind this sufficient condition for safety provides insight into the effectiveness of our framework. \n Recovering the forward reachable set Though it will not constitute a formal safety guarantee, we analyze the empirical safety properties of our approach by examining how our predicted state distributions over time relate to forward reachable sets. During operation, our belief over model confidence b evolves to match the degree to which the utility model Q H explains recent human motion. The ''time constant'' governing the speed of this evolution may be tuned by the system designer to be arbitrarily fast by choosing the parameter e to be small, as discussed in Section 4.1. Thus, we may safely assume that b(b) places high probability mass on small values of b as soon as the robot observes human motion that is not well explained by Q H . Figure 6 shows the sets of states with ''high enough'' (.P th ) predicted probability mass overlaid on the human's forward reachable set at time t, which is a circle of radius v H t centered on x H for the dynamics in our running example. When b is high (10), we observe that virtually all of the probability mass is concentrated in a small number of states in the direction of motion predicted by our utility model. When b is low (0:05) we observe that the set of states assigned probability above our collision threshold P th occupies a much larger fraction of the reachable set. A typical belief b(b) recorded at a moment when the human was roughly moving according to Q H yields an intermediate set of states. Figure 7 illustrates the evolution of these sets of states over time, for the unmodeled obstacle example of Section 5.4 in which a pedestrian avoids a spill. Each row corresponds to the predicted state distribution at a particular point in time. 
Within a row, each column shows the reachable set and the set of states assigned occupancy Interestingly, as the Bayesian model confidence decreases, which occurs when the pedestrian turns to avoid the spill at t'6 s, the predicted state distribution assigns high probability to a relatively large set of states, but unlike the low-b predictor that set of states is oriented toward the known goal. Of course, had b(b) placed even more probability mass on lower values of b then the Bayesian confidence predictor would converge to the low confidence one. In addition, we observe that, within each row as the prediction horizon increases, the area contained within the forward reachable set increases and the fraction of that area contained within the predicted sets decreases. This phenomenon is a direct consequence of our choice of threshold P th . Had we chosen a smaller threshold value, a larger fraction of the forward reachable set would have been occupied by the lower-b predictors. This observation may be viewed prescriptively. Recalling the sufficient condition for safety of planned trajectories from Section 7.2, if the robot replans every T replan seconds, we may interpret the fraction of FRS( Á , t + T replan ) assigned occupancy probability greater than P th by the low-confidence predictor as a rough indicator of the safety of an individual motion plan, robust to worst-case human movement. As this fraction tends toward unity, the robot is more and more likely to be safe. However, for any P th .0, this fraction approaches zero for T replan \" '. This immediately suggests that, if we wish to replan every T replan seconds, we can achieve a particular level of safety as measured by this fraction by choosing an appropriate threshold P th . In summary, confidence-aware predictions rapidly place high-probability mass on low values of b whenever human motion is not well-explained by utility model Q H . Whenever this happens, the resulting predictions encompass a larger fraction of the forward reachable set, and in the limit that P th # 0 we recover the forward reachable set exactly. The larger this fraction, the more closely our approach satisfies the sufficient condition for safety presented in Section 7.2. \n Hardware demonstration We implemented confidence-aware human motion prediction (Section 4) and integrated it into a real-time, safe probabilistic motion planner (Section 6), all within the Robot Operating System (ROS) software framework of Quigley et al. (2009) . To demonstrate the efficacy of our methods, we tested our work for the quadcopter-avoiding-pedestrian example used for illustration throughout this paper. Human trajectories were recorded as (x, y) positions on the ground plane at roughly 235 Hz by an OptiTrack infrared motion capture system, and we used a Crazyflie 2.0 micro-quadcopter, also tracked by the OptiTrack system. 4 Figure 4 illustrates the unmodeled obstacle case from Section 5.4, in which the pedestrian turns to avoid a spill on the ground. Using a low model confidence results in motion plans that suddenly and excessively deviate from the ideal straight-line path when the pedestrian turns to avoid the spill. By contrast, the high-confidence predictor consistently predicts that the pedestrian will walk in a straight line to the goal even when they turn; this almost leads to collision, as shown in detail in Figure 8 . 
Our proposed approach for Bayesian model confidence initially assigns high confidence and predicts that the pedestrian will walk straight to the goal, but when they turn to avoid the spill, the predictions become less confident. This causes the quadcopter to make a minor course correction, shown in further detail in Figure 9. \n Conclusion When robots operate in complex environments in concert with other agents, safety often depends upon the robot's ability to predict the agents' future actions. While this prediction problem may be tractable in some cases, it can be extremely difficult for agents such as people who act with intent. In this paper, we introduce the idea of confidence-aware prediction as a natural coping mechanism for predicting the future actions of intent-driven agents. Our approach uses each measurement of the human's state to reason about the accuracy of its internal model of human decision-making. This reasoning about model confidence is expressed compactly as a Bayesian filter over the possible values of a single parameter, $\beta$, which controls the entropy of the robot's model of the human's choice of action. In effect, whenever the human's motion is not well-explained by this model, the robot predicts that the human could occupy a larger volume of the state space. We couple this notion of confidence-aware prediction with a reachability-based robust motion planning algorithm, FaSTrack, which quantifies the robot's ability to track a planned reference trajectory. Using this maximum tracking error allows us to bound an approximation of the probability of collision along planned trajectories. In addition, we present a deeper connection between confidence-aware prediction and forward reachable sets, which provides an alternative explanation of the safety of our approach. We demonstrate the proposed methodology on a ROS-based quadcopter testbed in a motion capture arena. \n Limitations There are several important limitations of this work, which we summarize and discuss in the following. 9.1.1. State discretization. As presented, our approach to prediction requires a discrete representation of the human's state space. This can be tractable for the relatively simple dynamical models of human motion we consider in this work. Fortunately, one of the strongest attributes of confidence-aware prediction is that it affords a certain degree of robustness to modeling errors by design. Still, our approach is effectively limited to low-order dynamical models. 9.1.2. FaSTrack complexity. FaSTrack provides a strong safety guarantee vis-à-vis the maximum tracking error that could ever exist between a higher-fidelity dynamical model of the robot and a lower-order model used for motion planning. Unfortunately, the computational complexity of finding this maximum tracking error and the corresponding safety controller scales exponentially with the dimension of the high-fidelity model. In some cases, these dynamics are decomposable and analytic solutions exist (e.g., Chen et al., 2018; Fridovich-Keil et al., 2018), and in other cases conservative approximations may be effective (e.g., Chen et al., 2016; Royo et al., 2018). 9.1.3. Boltzmann distributional assumption. We model the human's choice of control input at each time step as an independent, random draw from a Boltzmann distribution (3).
This distributional assumption is motivated from the literature in cognitive science and econometrics and is increasingly common in robotics, yet it may not be accurate in all cases. Maintaining an up-to-date model confidence belief b(b) can certainly mitigate this inaccuracy, but only at the cost of making excessively conservative predictions. 9.1.4. Safety certification. Our analysis in Section 7 makes connections to forward reachability in an effort to understand the safety properties of our system. As shown, whenever our confidence-aware prediction method detects poor model performance it quickly yields predictions that approximate the human's forward reachable set. Although this approximation is not perfect, and hence we cannot provide a strong safety certificate, the connection to reachability is in some sense prescriptive. That is, it can be used to guide the choice of collision probability threshold P th and replanning frequency. However, even if we could provide a strong guarantee of collision avoidance for a particular motion plan, that would not, in general, guarantee that future motion plans would be recursively safe. This recursive property is much more general and, unsurprisingly, more difficult to satisfy. \n Future directions Future work will aim to address each of these shortcomings. We are also interested in extending our methodology for the multi-robot, multi-human setting; our preliminary results are reported by Bajcsy et al. (2018) . In addition, we believe that our model confidence inference approach could be integrated with other commonly used probabilistic prediction methods besides the Boltzmann utility model. Finally, we are excited to test our work in hardware in other application spaces, such as manipulation and driving. Fig. Fig.1. When planning around humans, accurate predictions of human motion (visualized here in pink and blue, representing high and low probability, respectively) are an essential prerequisite for safety. Unfortunately, these approaches may fail to explain all observed motion at runtime (e.g., human avoids unmodeled spill on the ground), leading to inaccurate predictions, and potentially, collisions (left). Our method addresses this by updating its predictive model confidence in real time (right), leading to more conservative motion planning in circumstances when predictions are known to be suspect. \n with b k + (b, u) :¼ P(b, ujx 0:k H , u 0:k H ) the running posterior and b k À (b, u) :¼ P(b, ujx 0:kÀ1 H , u 0:kÀ1 H \n Fig. 2 . 2 Fig. 2. Snapshots of pedestrian trajectory and probabilistic model predictions. Top row: Pedestrian moves from the bottom right to a goal marked as a red circle. Middle row: Pedestrian changes course to avoid a spill on the floor. Bottom row: Pedestrian moves to one known goal, then to another, then to a third which the robot has not modeled. The first two columns show predictions for low and high model confidence; the third column shows the predictions using our Bayesian model confidence. For all pedestrian videos, see https://youtu.be/lh_E9rW-MJo. \n Fig. 3 . 3 Fig. 3. Snapshots of Dubins car and probabilistic predictions. Top row: Car moves straight ahead toward one of two known goals (red arrows), staying in its lane. Middle row: Car suddenly swerves to the left to avoid a pothole. Bottom row: Car turns to the right, away from the only known goal. 
The left and center columns show results for low and high confidence predictors, respectively, and the right column shows our approach using Bayesian inferred model confidence. For all Dubins car videos, see https://youtu.be/sAJKNnP42fQ. \n Fig. 4 . 4 Fig. 4. Scenario from the middle row of Figure 2 visualized with robot's trajectory. When b is low and the robot is not confident, it makes large deviations from its path to accommodate the human. When b is high, the robot refuses to change course and comes dangerously close to the human. With inferred model confidence, the robot balances safety and efficiency with a slight deviation around the human. \n Fig. 5 . 5 Fig. 5. The human (black dot) is moving west towards a goal. Visualized are the predicted state distributions for 1 second into the future when using low, high, and Bayesian model confidence. Higher-saturation indicates higher likelihood of occupancy. The dashed circle represents the pedestrian's 1 second forward reachable set. \n Fig. 6 . 6 Fig.6. Visualization of the states with probability greater than or equal to the collision threshold, P th = 0:01. The human's forward reachable set includes the set of states assigned probability greater than P th . We show these ''high probability'' predicted states for predictors with fixed low and high b, as well as our Bayesian-inferred b. \n Fig. 7 . 7 Fig.7. The human (black dot) is walking towards the known goal (red dot) but has to avoid an unmodeled coffee spill on the ground. Here we show the snapshots of the predictions at various future times (columns) as the human walks around in real time (rows). The visualized states have probability greater than or equal to P th = 0:01. Each panel displays the human prediction under low confidence (in yellow), high confidence (in dark purple), and Bayesian confidence (colored as per the most likely b value), as well as the forward reachable set. The human's actual trajectory is shown in red. \n Fig. 8 . 8 Fig. 8. Predicting with fixed-b (in this case, b = 20) can yield highly inaccurate predictions (and worse, confidently inaccurate ones).The subsequent motion plans may not be safe; here, poor prediction quality leads to a collision. \n Fig. 9 . 9 Fig. 9. Inferring b leads to predicted state distributions whose entropy increases whenever the utility model Q H fails to explain observed human motion. The resulting predictions are more robust to modeling errors, resulting in safer motion plans. Here, the quadcopter successfully avoids the pedestrian even when they turn unexpectedly. \n\t\t\t The International Journal of Robotics Research 39(2-3)", "date_published": "n/a", "url": "n/a", "filename": "0278364919859436.tei.xml", "abstract": "One of the most difficult challenges in robot motion planning is to account for the behavior of other moving agents, such as humans. Commonly, practitioners employ predictive models to reason about where other agents are going to move. Though there has been much recent work in building predictive models, no model is ever perfect: an agent can always move unexpectedly, in a way that is not predicted or not assigned sufficient probability. In such cases, the robot may plan trajectories that appear safe but, in fact, lead to collision. Rather than trust a model's predictions blindly, we propose that the robot should use the model's current predictive accuracy to inform the degree of confidence in its future predictions. 
This model confidence inference allows us to generate probabilistic motion predictions that exploit modeled structure when the structure successfully explains human motion, and degrade gracefully whenever the human moves unexpectedly. We accomplish this by maintaining a Bayesian belief over a single parameter that governs the variance of our human motion model. We couple this prediction algorithm with a recently proposed robust motion planner and controller to guide the construction of robot trajectories that are, to a good approximation, collision-free with a high, user-specified probability. We provide extensive analysis of the combined approach and its overall safety properties by establishing a connection to reachability analysis, and conclude with a hardware demonstration in which a small quadcopter operates safely in the same space as a human pedestrian.", "id": "f947dd570931ec75ed7fc1077b17b2a4"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Shun Zhang", "Edmund H Durfee", "Satinder Singh"], "title": "Minimax-Regret Querying on Side Effects for Safe Optimality in Factored Markov Decision Processes", "text": "Introduction We consider a setting where a human user tasks a computational agent with achieving a goal that may change one or more state features of the world (e.g., a housekeeping agent should change the state of the house floors and kitchen sink from dirty to clean). In the process of accomplishing the goal, the agent generally changes other features (e.g., its own position and power level, opening doors, moving furniture, scaring the cat). Some of these side-effects might be fine (even expected) by the user (e.g., moving), but others may be undesirable/unsafe (e.g., leaving doors open lets the cat roam/escape) even though they speed goal achievement (e.g., the agent's movement between rooms). Although the user tells the agent about some features that can be changed, as well as some to not change (e.g., don't knock over the priceless vase), the user often lacks the time, patience, or foresight to specify the changeability of every pertinent feature, and may incorrectly assume that the agent has commonsense (e.g., about cat behavior and the value of vases). How can the agent execute a safely-optimal policy in such a setting? We conservatively assume that, to ensure safety, the agent should never side-effect a feature unless changing it is explicitly known to be fine. Hence, the agent could simply execute the best policy that leaves such features unchanged. However, no such policy might exist, and even if it does it might surprise the user as unnecessarily costly/inefficient. Our focus is thus on how the agent can selectively query the user about the acceptability of changing features it hasn't yet been told about. We reject simply querying about every such feature, as this would be unbearably tedious to the user, and instead put the burden on the agent to limit the number and complexity of queries. In fact, in this paper we mostly focus on finding a single query about a few features that maximally improves upon the policy while maintaining safety. Our three main contributions in this paper are: 1) We formulate (in Section 2) an AI safety problem of avoiding negative side-effects in factored MDPs. 2) We show (in Section 3) how to efficiently identify the set of relevant features, i.e., the set of features that could potentially be worth querying the user about. 
3) We formulate (in Section 4) a minimaxregret criterion when there is a limit on the number of features the agent can ask about, and provide an algorithm that allows the agent to find the minimax-regret query by searching the query space with efficient pruning. We empirically evaluate our algorithms in a simulated agent navigation task, outline ongoing extensions/improvements, and contrast our work to prior work, in the paper's final sections. \n Problem Definition We illustrate our problem in a simulated agent gridworldnavigation domain, inspired by Amodei et al. [2016] and depicted in Figure 1 , with doors, carpets, boxes, and a switch. The agent can open/close a door, move a box, traverse a carpet, and toggle the switch. Initially, the agent is in the bottom left corner; door d1 is open, d2 and d3 closed, the carpet clean, and the switch \"on\". The agent can move to an adjacent location vertically, horizontally, or diagonally. For simplicity, the transition function is assumed to be deterministic. The user tasks the agent with turning off the switch as quickly as is safely possible. The quickest path (π 1 ) traverses the carpet, but this gets the carpet dirty and the agent doesn't know if that is allowed. The agent could instead enter the room through door d1 and spend time moving box b1 or b2 out of the way (π 2 or π 3 respectively), open door d2, and then go to the switch. However, boxes might contain fragile objects and should not be moved; the user knows each box's contents, but the agent doesn't. Or the agent could enter through door d1 and walk upwards (π 4 ) around all the boxes and open door d2 to get to the switch. The user may or may not be okay with door d2 being opened. There are of course many other more circuitous paths not shown. We model the domain as a factored Markov Decision Process (MDP) [Boutilier et al., 1999 ]. An MDP is a tuple S, A, T, r, s 0 , γ , with state space S, action space A, and transition function T where T (s |s, a) is the probability of reaching state s by taking action a in s. r(s, a) is the reward of taking action a in s. s 0 is the initial state and γ is the discount factor. Let π : S × A → [0, 1] be a policy. V π is the expected cumulative reward by following policy π starting from s 0 . In a factored MDP, a state is described in terms of values of various features (e.g., the agent's location, the current time, the status of each door, cleanliness of each carpet, position of each box), so the state space S is the cross-product of the values the features can take. The reward and transition functions are often also factored (e.g., the \"toggle\" action only changes the switch feature, leaving boxes, doors, and carpets unchanged). We will consistently use φ to denote one feature and Φ to denote a set of features. The agent knows the complete MDP model, but has incomplete knowledge about which features the user doesn't mind being changed. In our example, the user's goal implies the agent's location is changeable, as is the switch, but the agent is uncertain about side-effecting boxes, doors, and carpets. In general, the user could dictate that a feature can only be changed among restricted values (e.g., boxes can only be moved a short distance) and/or dependent on other features' values (e.g., interior doors can be left open as long as exterior doors (like d3) stay closed). We briefly return to this issue later (Section 6), but for simplicity assume here that the agent can partition the features into the following sets: • Φ A F : The free-features. 
The agent knows that these features are freely changeable (e.g., its location). • Φ A L : The locked-features. The agent knows it should never change any of these features. \n • Φ A ? : The unknown-features. These are features that the agent doesn't (initially) know whether the user considers freely changeable or locked. The user similarly partitions the features, but only into the sets Φ U L and Φ U F . We assume that the agent's knowledge, while generally incomplete (Φ A ? = ∅), is consistent with that of the user. That is, Φ A F ⊆ Φ U F and Φ A L ⊆ Φ U L . Defining & Finding Safely-Optimal Policies. Our conservative safety assumption means the agent should treat unknown features as if they are locked, until it explicitly knows otherwise. It should thus find an optimal policy that never visits a state where a feature in Φ A L ∪ Φ A ? is changed, which we call a safely-optimal policy. We can use linear programming with constraints that prevent the policy from visiting states with changed values of locked or unknown features: The above linear program does not allow any locked or unknown feature to be changed and is guaranteed to produce a safely-optimal policy (when one exists). This linear programming approach, while straightforward, can be intractable for large MDPs. Alternative approaches could directly encode the safety constraints into the transition function (e.g., by removing unsafe action choices in specific states), or into the reward function (heavily penalizing reaching unsafe states). Approximate methods, like feature-based approximate linear programming [Dolgov and Durfee, 2006] or constrained policy optimization [Achiam et al., 2017] , can apply to larger problems, but may not guarantee safety or optimality (or both). We return to these concerns in Section 6. \n Querying Relevant Unknown-Features In our setting the only way for the agent to determine whether an unknown feature is freely changeable is to query the user. Thus, hereafter our focus is on how the agent can ask a good query about a small number, k, of features. Our solution is to first prune from Φ A ? features that are guaranteed to not be relevant to ask (this section), and then efficiently finding the best (minimax-regret) k-sized subset of the relevant features to query (Section 4). Until Section 6, we assume the changeabilities of features are independent, i.e., when the agent asks about some features in Φ A ? , the user's response does not allow it to infer the changeability of other features in Φ A ? . Intuitively, when is a feature in Φ A ? relevant to the agent's planning of a safely-optimal policy? In the navigation domain, if the agent plans to take the quickest path to the switch (π 1 in Figure 1 ), it will change the state of the carpet (from clean to dirty). The carpet feature is thus relevant since the agent would change it if permitted. If the carpet can't be changed but door d2 can, the agent would follow (in order of preference) policy π 2 , π 3 , or π 4 , so d2, b1, and b2 are relevant. Box b3 and door d3 are irrelevant, however, since no matter which (if any) other features are changeable, an optimal policy would never change them. Thus, an unknown feature is relevant when under some circumstance (some answer to some query) the agent's optimal policy would side-effect that feature. Such policies are dominating policies. 
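Both the safely-optimal policy referenced as Eq. (1) and each dominating policy can be computed with the same constrained linear program. One standard way to realize it is the dual (occupancy-measure) LP with the occupancy of unsafe states forced to zero; the sketch below is our own construction along those lines, using cvxpy on a small tabular MDP, and is not the authors' implementation (variable names and the policy-extraction step are illustrative).

```python
import numpy as np
import cvxpy as cp

def safely_optimal_policy(T, r, init, gamma, unsafe_states):
    """Occupancy-measure LP for a tabular MDP.
    T: (S, A, S) transition probabilities; r: (S, A) rewards; init: (S,) initial
    state distribution; unsafe_states: indices of states in which a locked or
    unknown feature has been changed. Returns a (S, A) policy, or None if no
    safe policy exists (the LP is infeasible)."""
    S, A, _ = T.shape
    mu = cp.Variable((S, A), nonneg=True)   # discounted state-action occupancy
    flow_in = cp.hstack([cp.sum(cp.multiply(T[:, :, s_next], mu)) for s_next in range(S)])
    constraints = [cp.sum(mu, axis=1) == init + gamma * flow_in]
    constraints += [mu[s, :] == 0 for s in unsafe_states]   # never visit unsafe states
    problem = cp.Problem(cp.Maximize(cp.sum(cp.multiply(mu, r))), constraints)
    problem.solve()
    if problem.status not in ("optimal", "optimal_inaccurate"):
        return None
    occ = np.maximum(mu.value, 1e-12)
    return occ / occ.sum(axis=1, keepdims=True)   # pi(a|s) from occupancies
```

Calling this routine with different sets of unsafe states (one per subset of unknown features treated as locked) yields the dominating policies discussed next.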
Algorithm DomPolicies \n 1: Γ ← ∅ (the initial set of dominating policies) \n 2: Φ_rel ← ∅ (the initial set of relevant features) \n 3: checked ← ∅ (the subsets Φ ⊆ Φ_rel examined so far) \n 4: β ← ∅ (the pruning history) \n 5: agenda ← powerset(Φ_rel) \ checked \n 6: while agenda ≠ ∅ do \n 7: Φ ← a least-cardinality element of agenda \n 8: if satisfy(Φ, β) then \n 9: (get safely-optimal policy with Φ locked) \n 10: π ← argmax_{π' ∈ Π_Φ} V^{π'} by solving Eq. 1 \n 11: if π exists then \n 12: Γ ← Γ ∪ {π}; add (Φ, Φ_rel(π)) to β \n 13: else add (Φ, ∅) to β \n 14: Φ_rel ← Φ_rel ∪ Φ_rel(π) \n 15: checked ← checked ∪ {Φ} \n 16: agenda ← powerset(Φ_rel) \ checked \n 17: return Γ, Φ_rel \n A dominating policy is a safely-optimal policy for some circumstance where the unknown features Φ^A_? are partitioned into locked and changeable subsets. We denote the set of dominating policies by Γ = { argmax_{π ∈ Π_Φ} V^π : Φ ⊆ Φ^A_? }, (2) where Π_Φ is the set of policies that change neither the unknown features in Φ ⊆ Φ^A_? nor any locked features (meaning that Φ^A_F ∪ (Φ^A_? \ Φ) are changeable). We denote the unknown-features side-effected by policy π by Φ_rel(π). For a set of policies Π, Φ_rel(Π) = ∪_{π∈Π} Φ_rel(π). The set Φ_rel(Γ), abbreviated Φ_rel, is thus the set of relevant (unknown-)features to consider querying about. Instead of finding the safely-optimal policies for all exponentially (in |Φ^A_?|) many subsets Φ ⊆ Φ^A_? with Equation 2, we contribute Algorithm DomPolicies (pseudocode above) that finds dominating policies incrementally (and in practice more efficiently) by constructing the sets of relevant features and dominating policies simultaneously. In each iteration, it examines a new subset of relevant features, Φ (Line 7), and, if Φ isn't pruned (as described later), finds the safely-optimal policy with Φ being locked. It then adds Φ_rel(π), the features changed by π, to Φ_rel. It repeats this process until Φ_rel stops growing and all subsets of Φ_rel are examined. For example, in the navigation problem (Figure 1), the first added policy is the safely-optimal policy assuming no unknown features are locked, which is π_1. Now Γ = {π_1} and thus Φ_rel = {carpet}. It iterates, treating the carpet as locked, and updates Γ to {π_1, π_2} and thus Φ_rel = {carpet, b1, d2}. Iterating again, it locks subsets of Φ_rel, finding π_3 for subset {carpet, b1}. After finding π_4, it terminates. In Line 10, the algorithm finds the constrained optimal policy (in our implementation using Eq. 1), and in the worst case would do this 2^{|Φ_rel|} times. Fortunately, the complexity is exponential in the number of relevant features, which we have seen in our empirical settings can be considerably smaller than the number of unknown features (|Φ^A_?|). Furthermore, the efficiency can be improved with our pruning rule (Line 8): satisfy(Φ, β) := ¬∃_{(L,R)∈β} (L ⊆ Φ ∧ Φ ∩ R = ∅). Here β is a history of ordered pairs, (L, R), of disjoint sets of unknown features. Before DomPolicies computes a policy for its agenda element Φ, if a pair (L, R) is in β such that L ⊆ Φ (a dominating policy has been found when locking a subset of the features in Φ) and that dominating policy's relevant features R do not intersect Φ, then Φ's dominating policy has already been found. In our running example, for instance, initially Φ_rel = ∅. π_1 is the optimal policy and the algorithm adds the pair (∅, {carpet}) to β. When larger subsets are later considered (note the agenda is sorted by cardinality), feature sets that do not contain the carpet are pruned by β. In this example, β prunes 11 of the 16 subsets of the 4 relevant features. Consider the examples in Figure 2. To reach the switch, the agent could traverse zero or more carpets (all with unknown changeability); rewards are marked on the edges. \n Figure 2: Example domains used in the text (n > 2). \n Figures 2(a) and (b) only need a number of computed policies linear in the number of relevant features: only n+1 dominating safely-optimal policies are computed for Figure 2(a) (for Φ = ∅, {c_1}, {c_1, c_2}, ...) and for Figure 2(b) (for Φ = ∅, {c_1}, {c_2}, ..., {c_n}). Figure 2(c) computes policies for only half of the subsets. Theorem 1. The set of policies returned by Algorithm DomPolicies is the set of all dominating policies. Proof. Let π ∈ Γ be the optimal policy with unknown features L locked, and let Γ' be the set returned by DomPolicies. We denote L ∩ Φ_rel(Γ') as A and L \ Φ_rel(Γ') as B.
For a proof by contradiction, assume π ∉ Γ'. Then B ≠ ∅: otherwise, if B = ∅ (or equivalently, A = L), then L would be a subset of Φ_rel(Γ') and π would have been added to Γ'. Let π' be the optimal policy with A locked. Since A ⊆ Φ_rel(Γ'), we know π' ∈ Γ'. We observe that π' does not change any features in B (otherwise features in B would show up in Φ_rel(Γ')). So π' is also the optimal policy with features A ∪ B = L locked. So π = π', which is an element of Γ', a contradiction. \n Finding Minimax-Regret Queries With DomPolicies, the agent need only query the user about relevant features. But it could further reduce the user's burden by being selective about which relevant features it asks about. In our running example (Figure 1), for instance, DomPolicies removes b3 and d3 from consideration, but intuitively the agent should also only ask about b1 or b2 if d2 is changeable. By iteratively querying, accounting for such dependencies (which can be gleaned from β in DomPolicies) and updating the relevant feature set, the agent can stop querying as soon as it finds a safe policy (if one exists). For example, say it asks about the carpet and d2, and is told that d2 (but not the carpet) is changeable. Now it has a safely-optimal policy, π_4, given its knowledge, and could stop querying. But π_4 is the worst safe policy. Should it ask about boxes? That is the question we focus on here: how should an agent query to try to find a better safely-optimal policy than the one it already knows about? Specifically, we consider the setting where the agent is permitted to interrupt the user just once to improve upon its safely-optimal policy, by asking a single query about at most k unknown features. For each feature, the user will reply whether or not it is in Φ^U_F. Formally, Φ_q is a k-feature query where Φ_q ⊆ Φ_rel and |Φ_q| = k. The post-response utility when the agent asks query Φ_q, and Φ_c ⊆ Φ^A_? are actually changeable, is the value of the safely-optimal policy after the user's response: u(Φ_q, Φ_c) = max_{π ∈ Π_{Φ^A_? \ (Φ_q ∩ Φ_c)}} V^π. (3) Recall that the agent can only safely change the features it queried about and that the user's response indicates are changeable (Φ_q ∩ Φ_c). What would be the agent's regret if it asks a k-feature query Φ_q rather than a k-feature query Φ_q'? We consider the circumstance where a set of features Φ_c is changeable and under which the difference between the utilities of asking Φ_q' and Φ_q is maximized. We call this difference of utilities the pairwise maximum regret of queries Φ_q and Φ_q', defined below in a similar way to Regan and Boutilier [2010]: PMR(Φ_q, Φ_q') = max_{Φ_c ⊆ Φ^A_?} (u(Φ_q', Φ_c) − u(Φ_q, Φ_c)). (4) The maximum regret, denoted by MR, of query Φ_q is determined by the Φ_q' that maximizes PMR(Φ_q, Φ_q'): MR(Φ_q) = max_{Φ_q' ⊆ Φ_rel, |Φ_q'| = k} PMR(Φ_q, Φ_q'). (5) The agent should ask the minimax-regret (k-feature) query: Φ^MMR_q = argmin_{Φ_q ⊆ Φ_rel, |Φ_q| = k} MR(Φ_q). (6) The rationale of the minimax-regret criterion is as follows. Whenever the agent considers a query Φ_q, there could exist a query Φ_q' that is better than Φ_q under some true changeable features Φ_c. The agent focuses on the worst case Φ_c, where the difference between the utility of Φ_q and the best query Φ_q' that could be asked is maximized.
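For concreteness, Eqs. 3-6 can be evaluated by brute force. The sketch below (ours, not the paper's implementation) assumes a caller-supplied function safe_value(changeable) that returns the value of a safely-optimal policy when exactly the given unknown features (together with the known-free ones) may be changed, e.g., computed with the linear program of Eq. 1; for simplicity it ranges the hypothesized changeable sets Φ_c over subsets of the relevant features only. Eq. 7 and Theorems 2-3 below exist precisely to avoid this exhaustive enumeration.

```python
# Brute-force sketch of Eqs. 3-6 (ours). `safe_value(changeable)` is assumed to return
# the value of a safely-optimal policy when exactly the unknown features in `changeable`
# (plus the known-free features) may be changed. Assumes k <= len(relevant).
from itertools import combinations

def brute_force_mmr_query(relevant, k, safe_value):
    """Return a minimax-regret k-feature query over the relevant unknown features."""
    relevant = list(relevant)
    # All hypothesized changeable sets Phi_c (restricted to relevant features for simplicity).
    changeable_sets = [frozenset(c) for r in range(len(relevant) + 1)
                       for c in combinations(relevant, r)]

    def utility(query, changeable):
        # Eq. 3: the agent may only change queried features that the user says are free.
        return safe_value(query & changeable)

    def max_regret(query):
        # Eqs. 4-5: worst case over alternative k-feature queries and changeable sets.
        return max(utility(frozenset(q2), c) - utility(query, c)
                   for q2 in combinations(relevant, k) for c in changeable_sets)

    # Eq. 6: minimize the maximum regret over all k-feature queries.
    return min((frozenset(q) for q in combinations(relevant, k)), key=max_regret)
```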
The agent uses adversarial reasoning to efficiently find the worst-case Φ_c: given that it is considering query Φ_q, it asks what query Φ_q' and set of features Φ_c an imaginary adversary would pick to maximize the gap between the utilities of Φ_q' and Φ_q under Φ_c (that is, u(Φ_q', Φ_c) − u(Φ_q, Φ_c)). The agent wants to find a query Φ_q that minimizes this worst case (the maximum gap). Under the definition of MR, the agent, reasoning as if it were its imaginary adversary, must find the maximizing Φ_q' and Φ_c. However, we can simplify this so that it only needs to find the maximizing Φ_c. Note that since the imaginary adversary chooses both Φ_q' and Φ_c, it wants to make sure that Φ_c ⊆ Φ_q', which means that it does not want the features outside Φ_q' to be changeable. We can then observe that MR(Φ_q) = max_{π' ∈ Γ : |Φ_rel(π')| ≤ k} (V^{π'} − max_{π ∈ Π_{Φ^A_? \ (Φ_q ∩ Φ_rel(π'))}} V^π). (7) We call the π' maximizing Eq. 7 the adversarial policy when the agent asks query Φ_q, denoted by π^MR_{Φ_q}. With Eq. 7, the agent can compute MR based on the set of dominating policies (which DomPolicies has already found), rather than on the (generally much larger) powerset of the relevant features in Eq. 5. While using Eq. 7 is faster than Eq. 5, the agent still needs to do this computation for every possible query of size k (Eq. 6). We contribute two further ways to improve efficiency. First, we may not need to consider all relevant features if we can only ask about k of them. If a subset of relevant features satisfies the condition in Theorem 2, then we call it a set of sufficient features, because a minimax-regret k-feature query from that set is a globally minimax-regret k-feature query. Second, we introduce a pruning rule that we call query dominance in Theorem 3 to safely eliminate queries that cannot be better than ones already evaluated. The following theorem shows that if we can find any subset Φ of Φ_rel such that, for all k-feature subsets of Φ used as queries, the associated adversarial policy's relevant features are contained in Φ, then the minimax-regret query found by restricting queries to subsets of Φ is also a minimax-regret query among all queries over Φ_rel. Such a (non-unique) set Φ will be referred to as a sufficient feature set (for the purpose of finding minimax-regret queries). Theorem 2. (Sufficient Feature Set) For any set Φ of ≥ k features, if for all Φ_q ⊆ Φ with |Φ_q| = k we have Φ_rel(π^MR_{Φ_q}) ⊆ Φ, then min_{Φ_q ⊆ Φ, |Φ_q| = k} MR(Φ_q) = min_{Φ_q ⊆ Φ_rel, |Φ_q| = k} MR(Φ_q). Proof Sketch. If a set of features Φ, |Φ| ≥ k, fails to include some features of Φ^MMR_q, then when we query some k-subset of Φ, the adversarial policy should change some of the features in Φ^MMR_q \ Φ. Otherwise, querying about Φ^MMR_q \ Φ would not reduce the maximum regret, and then the features in Φ^MMR_q \ Φ would not need to be included in Φ^MMR_q. Given a set of sufficient features, the following theorem shows that it may not be necessary to compute the maximum regrets of all k-subsets to find the minimax-regret query. Theorem 3. (Query Dominance) For any pair of queries Φ_q and Φ_q', if Φ_q' ∩ Φ_rel(π^MR_{Φ_q}) ⊆ Φ_q ∩ Φ_rel(π^MR_{Φ_q}), then MR(Φ_q') ≥ MR(Φ_q). Proof. Observe that MR(Φ_q') ≥ V^{π^MR_{Φ_q}} − max_{π ∈ Π_{Φ^A_? \ (Φ_q' ∩ Φ_rel(π^MR_{Φ_q}))}} V^π ≥ V^{π^MR_{Φ_q}} − max_{π ∈ Π_{Φ^A_? \ (Φ_q ∩ Φ_rel(π^MR_{Φ_q}))}} V^π = MR(Φ_q). We denote the condition Φ_q' ∩ Φ_rel(π^MR_{Φ_q}) ⊆ Φ_q ∩ Φ_rel(π^MR_{Φ_q}) by dominance(Φ_q', Φ_q).
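Read operationally, the query-dominance condition is a simple subset test. The following sketch (our naming, not code from the paper) assumes a dictionary adv_rel that caches Φ_rel(π^MR_{Φ_q}) for each already-evaluated query, as the next paragraph suggests storing.

```python
# Sketch of the query-dominance test of Theorem 3 (our naming). `adv_rel` is a
# hypothetical cache mapping each evaluated query (a frozenset of features) to
# Phi_rel of its adversarial policy.
def dominance(q_new, q_evaluated, adv_rel):
    """True iff MR(q_new) >= MR(q_evaluated), so q_new need not be evaluated
    (cf. Line 7 of Algorithm MMRQ-k)."""
    r = adv_rel[q_evaluated]                  # features changed by q_evaluated's adversarial policy
    return (q_new & r) <= (q_evaluated & r)   # frozenset <= is the subset test
```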
To compute dominance, we only need to store Φ_rel(π^MR_{Φ_q}) for all Φ_q we have considered. Algorithm MMRQ-k below provides pseudocode for finding a minimax-regret k-feature query; it takes advantage of both the notion of a sufficient feature set and query dominance to reduce computation significantly relative to the brute-force approach of searching over all k-feature queries (subsets) of the relevant feature set. \n Algorithm MMRQ-k \n 1: Φ_q ← an initial k-feature query \n 2: checked ← ∅; evaluated ← ∅ \n 3: Φ_suf ← Φ_q \n 4: agenda ← {Φ_q} \n 5: while agenda ≠ ∅ do \n 6: Φ_q ← an element of agenda \n 7: if ¬∃ Φ_q' ∈ evaluated : dominance(Φ_q, Φ_q') then \n 8: compute MR(Φ_q) and π^MR_{Φ_q} \n 9: Φ_suf ← Φ_suf ∪ Φ_rel(π^MR_{Φ_q}) \n 10: evaluated ← evaluated ∪ {Φ_q} \n 11: checked ← checked ∪ {Φ_q} \n 12: agenda ← all k-subsets of Φ_suf \ checked \n 13: return argmin_{Φ_q ∈ evaluated} MR(Φ_q) \n Intuitively, the algorithm keeps augmenting the set of features Φ_suf, which contains the features in the queries we have considered and the features changed by their adversarial policies, until it becomes a sufficient feature set. agenda keeps track of the k-subsets of Φ_suf that we have not yet evaluated. According to Theorem 2, we can terminate the algorithm when agenda is empty (Line 5). We also use Theorem 3 to filter out queries that we know are not better than ones we have already found (Line 7). Note that an initial Φ_suf needs to be chosen, which can be arbitrary. Our implementation initializes Φ_q with the Chain of Adversaries heuristic (Section 5). To illustrate when Algorithm MMRQ-k can and can't prune suboptimal queries and thus gain efficiency, consider finding the minimax-regret 2-feature query in Figure 2(a), which should be {c_1, c_2}. If the agent considers a query that does not include c_1, the adversarial policy would change c_1, adding c_1 to Φ_suf. If the agent considers a query that includes c_1 but not c_2, the adversarial policy would change c_2, adding c_2 to Φ_suf. When the agent asks {c_1, c_2}, the adversarial policy changes c_3 (adversarially asserting that c_1 and c_2 are locked), so c_3 is added to Φ_suf. With {c_1, c_2, c_3} ⊆ Φ_suf, the condition in Theorem 2 holds and the n − 3 other features can be safely ignored. The minimax-regret query constituted by features in Φ_suf is {c_1, c_2}. However, in Figure 2(b), Φ_suf = Φ_rel, and all (|Φ_rel| choose 2) 2-feature queries would be evaluated. \n Empirical Evaluations We now empirically confirm that Algorithm MMRQ-k finds a minimax-regret query, and that its theoretically sound Sufficient-Feature-Set and Query-Dominance based improvements can indeed pay computational dividends. We also compare our MMRQ-k algorithm to baseline approaches and to the Chain of Adversaries (CoA) heuristic [Viappiani and Boutilier, 2009] adapted to our setting. Algorithm CoA begins with Φ_{q_0} = ∅ and improves this query by iteratively computing: \n π ← argmax_{π' ∈ Γ : |Φ_rel(π') ∪ Φ_{q_i}| ≤ k} (V^{π'} − max_{π ∈ Π_{Φ^A_? \ (Φ_{q_i} ∩ Φ_rel(π'))}} V^π), \n Φ_{q_{i+1}} ← Φ_{q_i} ∪ Φ_rel(π). \n The algorithm stops when |Φ_{q_{i+1}}| = k or Φ_{q_{i+1}} = Φ_{q_i}. Although Algorithm CoA greedily adds features to the query to reduce the maximum regret, unlike MMRQ-k it does not guarantee finding the minimax-regret query. For example, in Figure 2(c), when k = 2, CoA first finds the optimal policy, which changes {c_1, c_2}, and returns that as a query, while the minimax-regret query is {c_1, c_3}. We compare the following algorithms in this section: 1. Brute force (rel. feat.), which
uses Algorithm DomPolicies to find all relevant features first and then evaluates all k-subsets of the relevant features. 2. Algorithm MMRQ-k. 3. Algorithm CoA. 4. Random queries (rel. feat.), which contain k uniformly randomly chosen relevant features. 5. Random queries, which contain k uniformly randomly chosen unknown features, without computing relevant features first. 6. No queries (equivalently, vacuous queries). We evaluate the algorithms' computation times and the quality of the queries they find, reported as the normalized MR to capture their relative performance compared to the best and the worst possible queries. That is, the normalized MR of a query Φ_q is defined as (MR(Φ_q) − MR(Φ^MMR_q)) / (MR(Φ_{q⊥}) − MR(Φ^MMR_q)), where Φ_{q⊥} is a vacuous query, containing k features that are irrelevant and/or already known. The normalized MR of a minimax-regret query is 0 and that of a vacuous query is 1. Navigation. As illustrated in Figure 3, the robot starts from the bottom-left corner and is tasked with turning off a switch at the top-right corner. The size of the domain is 6 × 6. The robot can move one step north, east, or northeast at each time step. It stays in place if it tries to move across a border. The discount factor is 1. Initially, 10 clean carpets are uniformly randomly placed in the domain (the blue cells). In any cell without a carpet, the reward is uniformly random in [−1, 0]; in a cell with a carpet, the reward is 0. Hence, the robot will generally prefer to walk on a carpet rather than around it. The state of each carpet corresponds to one feature. The robot is uncertain about whether the user cares whether any particular carpet gets dirty, so all carpet features are in Φ^A_?. The robot knows that its own location and the state of the switch are in Φ^A_F. Since MMRQ-k attempts to improve on an existing safe policy, the left column and the top row never have carpets, ensuring there is at least one safe path to the switch (the dotted line). The robot can ask one k-feature query before it takes any physical actions. We report results on 1500 trials. The only difference between trials is the locations of the carpets, which are uniformly randomly placed. First, we compare the brute-force method to our MMRQ-k algorithm (Figure 4). We empirically confirm that in all cases MMRQ-k finds a minimax-regret query, matching brute-force performance. We also see that brute force scales poorly as k grows, while MMRQ-k, benefiting from Theorems 2 and 3, is more computationally efficient. We then want to see if and when MMRQ-k outperforms other candidate algorithms. In Figure 4, when k is small, the greedy choice of CoA can often find the best features to add to the small query, but as k increases, CoA suffers from being too greedy. When k is large (approaching the number of unknown features), being selective is less important and all methods find good queries. We also consider how |Φ_rel| affects performance (Figure 5). When |Φ_rel| is smaller than k, a k-feature query that contains all relevant features is optimal. All algorithms would find an optimal query except Random (which selects from all unknown features) and No Queries. (The error bars are larger for small |Φ_rel| since it is rare that only very few features are relevant.) When |Φ_rel| is slightly larger than k, even a random query may luckily be a minimax-regret query, and CoA unsurprisingly finds queries close to the minimax-regret queries.
However, when |Φ_rel| is much larger than k, the gap between MMRQ-k and the other algorithms grows. In summary, MMRQ-k's benefits increase with the opportunities to be selective (larger |Φ_rel| relative to k). In Figure 5, the gap is largest when k = 4, |Φ_rel| = 10. We have also experimented with expected regret given a probabilistic model of how the user will answer queries. For example, if the user has probability p of saying an unknown feature is free (and 1 − p that it is locked), then, as expected, when p is very low, querying rarely helps, so using MMRQ-k or CoA matters little, and as p nears 1, CoA's greedy optimism pays off and it matches MMRQ-k's minimax approach. But, empirically, MMRQ-k outperforms CoA for non-extreme values of p. \n Extensions and Scalability We now briefly consider applying our algorithms to larger, more complicated problems. As mentioned in Section 2, features' changeabilities might be more nuanced, with restrictions on what values they can take, individually or in combination. An example of the latter from Fig. 1 is where doors are revertible features, which means their changeability is dependent on the \"time\" feature: they are freely changeable except that by the time the episode ends they need to revert to their initial values. This expands the set of possible feature queries (e.g., asking if d2 is freely changeable differs from asking if it is revertible). In our experiments, this change accentuates the advantages of MMRQ-k over CoA: CoA asks (is-d2-locked, is-d2-revertible), hoping to hear \"no\" to both and follow a policy going through d2 without closing it. MMRQ-k asks (is-d2-locked, is-carpet-locked): since closing d2 only takes an extra time step, it is more valuable to know whether the carpet is locked than whether d2 can be left open. More nuanced feature changeability means DomPolicies would have to find policies for the powerset of every relevant combination of features' values (in the worst case, the size of the state space). One option is to ignore nuances in ways that maintain safety (e.g., treat a feature as revertible even if sometimes it can be left changed) and solve such a safe abstraction of the problem. Or one could abstract based on guaranteed correlations in feature changeabilities (e.g., if all boxes have the same changeability, then ignore asking about all but one). Another option is to find only a subset of dominating policies, for example by using knowledge of k to avoid finding dominating policies that would change more unknown features than could be asked about anyway. And, of course, as mentioned before, finding approximately-safely-optimal policies in DomPolicies Line 10 would help speed the process (and might be the only option for larger problem domains). Fortunately, such abstractions, heuristics, and approximations do not undermine safety guarantees. Recall that DomPolicies and MMRQ-k are finding a query, not the agent's final policy: the safety of the agent depends on how it finds the policy it executes, not on the safety of policies for hypothetical changeability conditions. However, as coarser abstractions, heuristics, and approximations are employed in our algorithms, the queries found can increasingly deviate from the minimax-regret optima. Fortunately, if the agent begins with a safely-optimal policy, \"quick and dirty\" versions of our methods can never harm it (they just become less likely to help).
And if it begins without such a policy, such versions of our methods might not guide querying well, but by eventually asking about every unknown feature (in the worst case) a safe policy will still be found (if one exists). \n Related Work & Summary Amodei et al. [2016] address the problem of avoiding negative side-effects by penalizing all side-effects while optimizing value. In our work, we instead allow the agent to communicate with the user. Safety has also been formulated as resolving reward uncertainty [Amin et al., 2017; Hadfield-Menell et al., 2017], following imperfectly-specified instructions [Milli et al., 2017], and learning safe states [Laskey et al., 2016]. Safety issues also appear in exploration [Hans et al., 2008; Moldovan and Abbeel, 2012; Achiam et al., 2017]. Here we only provide a brief survey of safety in MDPs; Leike et al. [2017], Amodei et al. [2016], and García and Fernández [2015] provide more thorough surveys. There are problems similar to finding safely-optimal policies, which find policies that satisfy some constraints/commitments or maximize the probability of reaching a goal state [Witwicki and Durfee, 2010; Teichteil-Königsbuch, 2012; Kolobov et al., 2012]. There are also other works using minimax regret and policy dominance [Regan and Boutilier, 2010; Nilim and El Ghaoui, 2005] and querying to resolve uncertainty [Weng and Zanuttini, 2013; Regan and Boutilier, 2009; Cohn et al., 2011; Zhang et al., 2017]. We've combined and customized a number of these ideas to find a provably minimax-regret k-element query. In summary, we addressed the problem of an agent selectively querying a user about which features can be safely side-effected. We borrowed existing ideas from the literature about dominating policies and minimax regret, wove them together in a novel way, and streamlined the resulting algorithms to improve scalability while maintaining safe optimality. \n Figure 1: The robot navigation domain. The dominating policies (see Section 3) are shown as arrows. \n Figure 3: Office navigation domain and legend for the following figures. \n Figure 4: Normalized MR vs. k. |Φ^A_?| = 10.
Brute force computation time is only shown for k = 1, 2, 3. \n Figure 5: Normalized MR vs. the number of relevant features.", "date_published": "n/a", "url": "n/a", "filename": "0676.tei.xml", "abstract": "As it achieves a goal on behalf of its human user, an autonomous agent's actions may have side effects that change features of its environment in ways that negatively surprise its user. An agent that can be trusted to operate safely should thus only change features the user has explicitly permitted. We formalize this problem, and develop a planning algorithm that avoids potentially negative side effects given what the agent knows about (un)changeable features. Further, we formulate a provably minimax-regret querying strategy for the agent to selectively ask the user about features that it hasn't explicitly been told about. We empirically show how much faster it is than a more exhaustive approach and how much better its queries are than those found by the best known heuristic.", "id": "754899d59390b2f4bda487a0b4842460"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Caspar Oesterheld", "Vincent Conitzer"], "title": "Safe Pareto Improvements for Delegated Game Playing *", "text": "Introduction Between Aliceland and Bobbesia lies a sparsely populated desert. Until recently, neither of the two countries had any interest in the desert. However, geologists have recently discovered that it contains large oil reserves. Now, both Aliceland and Bobbesia would like to annex the desert, but they worry about a military conflict that would ensue if both countries insist on annexing it. Table 1 models this strategic situation as a normal-form game. The strategy DM (short for \"Demand with Military\") denotes a military invasion of the desert, demanding annexation. If both countries send their military on such an aggressive mission, the countries fight a devastating war. The strategy RM (for \"Refrain with Military\") denotes yielding the territory to the other country, but building defenses to prevent an invasion of one's current territories. Alternatively, the countries can choose not to raise a military force at all, while potentially still demanding control of the desert by sending only their leader (DL, short for \"Demand with Leader\"). In this case, if both countries demand the desert, war does not ensue. Finally, they could neither demand nor build up a military (RL). If one of the two countries has its military ready and the other does not, the militarized country will know and will be able to invade the other country. In game-theoretic terms, militarizing therefore strictly dominates not militarizing. Instead of making the decision directly, the parliaments of Aliceland and Bobbesia appoint special commissions for making this strategic decision, led by Alice and Bob, respectively. The parliaments can instruct these representatives in various ways. They can explicitly tell them what to do - for example, Aliceland could directly tell Alice to play DM. However, we imagine that the parliaments trust the commissions' judgments more than they trust their own and hence they might prefer to give an instruction of the type, \"make whatever demands you think are best for our country\" (perhaps contractually guaranteeing a reward in proportion to the utility of the final outcome).
They might not know what that will entail, i.e., how the commissions decide what demands to make given that instruction. However - based on their trust in their representatives - they might still believe that this leads to better outcomes than giving an explicit instruction. We will also imagine these instructions are (or at least can be) given publicly and that the commissions are bound (as if by a contract) to follow these instructions. In particular, we imagine that the two commissions can see each other's instructions. Thus, in instructing their commissions, the countries play a game with bilateral precommitment. When instructed to play a game as best as they can, we imagine that the commissions play that game in the usual way, i.e., without further abilities to credibly commit or to instruct subcommittees and so forth. It may seem that without having their parliaments ponder equilibrium selection, Aliceland and Bobbesia cannot do better than leave the game to their representatives. Unfortunately, in this default equilibrium, war is still a possibility. Even the brilliant strategists Alice and Bob may not always be able to resolve the difficult equilibrium selection problem to the same pure Nash equilibrium. In the literature on commitment devices and in particular the literature on program equilibrium, important ideas have been proposed for avoiding such bad outcomes. Imagine for a moment that Alice and Bob will play a Prisoner's Dilemma (Table 3) (rather than the Demand Game of Table 1). Then the default of (Defect, Defect) can be Pareto-improved upon. Both original players (Aliceland and Bobbesia) can use the following instruction for their representatives: \"If the opponent's instruction is equal to this instruction, Cooperate; otherwise Defect.\" [23, 17, 33, Sect. 10.4, 39] Then it is a Nash equilibrium for both players to use this instruction. In this equilibrium, (Cooperate, Cooperate) is played and it is thus Pareto-optimal and Pareto-better than the default. In cases like the Demand Game, it is more difficult to apply this approach to improve upon the default of simply delegating the choice. Of course, if one could calculate the expected utility of submitting the default instructions, then one could similarly commit the representatives to follow some (joint) mix over the Pareto-optimal outcomes ((RM, DM), (DM, RM), (RM, RM), (DL, DL), etc.) that Pareto-improves on the default expected utilities. 1 However, we will assume that the original players are unable or unwilling to form probabilistic expectations about how the representatives play the Demand Game, i.e., about what would happen with the default instructions. If this is the case, then this type of Pareto improvement on the default is unappealing. The goal of this paper is to show and analyze how, even without forming probabilistic beliefs about the representatives, the original players can Pareto-improve on the default equilibrium. \n 1 One might argue that due to the symmetry of the Demand Game, the original players should expect the representatives to play the game's unique symmetric equilibrium (in which both players play DM with probability 1/4 and RM with probability 3/4). Of course, in general games might be asymmetric. We here consider a symmetric game only for simplicity. More generally, some games with multiple equilibria might have a single \"focal\" equilibrium [35, pp. 54-58] that we expect the representatives to play. However, we maintain that in many games it is not clear what equilibrium should be played.
We will call such improvements safe Pareto improvements (SPIs). We here briefly give an example in the Demand Game. The key idea is for the original players to instruct the representatives to select only from {DL, RL}, i.e., to not raise a military. Further, they tell them to disvalue the conflict outcome without military (DL, DL) as they would disvalue the original conflict outcome of war in the default equilibrium. Overall, this means telling them to play the game of Table 2 . (Again, we could imagine that the instructions specify Table 2 to be how Aliceland and Bobbesia financially reward Alice and Bob.) Importantly, Aliceland's instruction to play that game must be conditional on Bobbesia also instructing their commission to play that game, and vice versa. Otherwise, one of the countries could profit from deviating by instructing their representative to always play DM or RM (or to play by the original utility function). The game of Table 2 is isomorphic to the DM-RM part of the original Demand Game of Table 1 . Of course, the original players know neither how the original Demand Game nor the game of Table 2 will be played by the representatives. However, since these games are isomorphic, one should arguably expect them to be played isomorphically. For example, one should expect that (RM, DM) would be played in the original game if and only if (RL, DL) would be played in the modified game. However, the conflict outcome (DM, DM) is replaced in the new game with the outcome (DL, DL). This outcome is harmless (Pareto-optimal) for the original players. Contributions Our paper generalizes this idea to arbitrary normal-form games and is organized as follows. In Section 2, we introduce some notation for games and multivalued functions that we will use throughout this paper. In Section 3, we introduce the setting of delegated game playing for this paper. We then formally define and further motivate the concept of safe Pareto improvements. We also define and give an example of unilateral SPIs. These are SPIs that require only one of the players to commit their representative to a new action set and utility function. In Section 4, we briefly review the concepts of program games and program equilibrium and show that SPIs can be implemented as program equilibria. In Section 5, we introduce a notion of outcome correspondence between games. This relation expresses the original players' beliefs about similarities between how the representatives play different games. In our example, the Demand Game of Table 1 (arguably) corresponds to the game of Table 2 in that the representatives (arguably) would play (DM, DM) in the original game if and only if they play (DL, DL) in the new game, and so forth. We also show some basic results (reflexivity, transitivity, etc.) about the outcome correspondence relation on games. In Section 6 we show that the notion of outcome correspondence is central to deriving SPIs. In particular, we show that a game Γ s is an SPI on another game Γ if and only if there is a Pareto-improving outcome correspondence relation between Γ s and Γ. To derive SPIs, we need to make some assumptions about outcome correspondence, i.e., about which games are played in similar ways by representatives. We give two very weak assumptions of this type in Section 7. The first is that the representatives' play is invariant under the removal of strictly dominated strategies. For example, we assume that in the Demand Game the representatives only play DM and RM. 
Moreover, we assume that we could remove DL and RL from the game and the representatives would still play the same strategies as in the original Demand Game with certainty. The second assumption is that the representatives play isomorphic games isomorphically. For example, once DL and RL are removed for both players from the Demand Game, the Demand Game is isomorphic to the game in Table 2, such that we might expect them to be played isomorphically. In Section 7.5, we derive a few SPIs - including our SPI for the Demand Game - using these assumptions. Section 8 shows that determining whether there exists an SPI based on these assumptions is NP-complete. Section 9 considers a different setting in which we allow the original players to let the representatives choose from newly constructed strategies whose corresponding outcomes map arbitrarily onto feasible payoff vectors from the original game. In this new setting, finding SPIs can be done in polynomial time. We conclude by discussing the problem of selecting between different SPIs on a given game (Section 10) and giving some ideas for directions for future work (Section 11). \n Preliminaries \n Games We here give some basic game-theoretic definitions. We assume the reader to be familiar with most of these concepts and with game theory more generally. An n-player (normal-form) game is a tuple (A, u) of a set A = A_1 × ... × A_n of (pure) strategy profiles (or outcomes) and a function u : A → R^n that assigns to each outcome a utility for each player. The Prisoner's Dilemma shown in Table 3 is a classic example of a game. The Demand Game of Table 1 is another example of a game that we will use throughout this paper. Instead of (A, u) we will also write (A_1, ..., A_n, u_1, ..., u_n). We also write A_{-i} for ×_{j≠i} A_j, i.e., for the Cartesian product of the action sets of all players other than i. We similarly write u_{-i} and a_{-i} for vectors containing utility functions and actions, respectively, for all players but i. If u_i is a utility function and u_{-i} is a vector of utility functions for all players other than i, then (even if i ≠ 1) we use (u_i, u_{-i}) for the full vector of utility functions where Player i has utility function u_i and the other players have utility functions as specified by u_{-i}. We use (A_i, A_{-i}) and (a_i, a_{-i}) analogously. We say that a_i ∈ A_i strictly dominates a_i' ∈ A_i if for all a_{-i} ∈ A_{-i}, u_i(a_i, a_{-i}) > u_i(a_i', a_{-i}). For example, in the Prisoner's Dilemma, Defect strictly dominates Cooperate for both players. As noted earlier, DM and RM strictly dominate DL and RL for both players. For any given game Γ = (A, u), we will call any game Γ' = (A', u') a subset game of Γ if A_i' ⊆ A_i for i = 1, ..., n. Note that a subset game may assign different utilities to outcomes than the original game. For example, the game of Table 2 is a subset game of the Demand Game. We say that some utility vector y ∈ R^n is a Pareto improvement on (or is Pareto-better than) y' ∈ R^n if y_i ≥ y_i' for i = 1, ..., n. We will also denote this by y ≥ y'. Note that, contrary to convention, we allow y = y'. Whenever we require one of the inequalities to be strict, we will say that y is a strict Pareto improvement on y'. In a given game, we will also say that an outcome a is a Pareto improvement on another outcome a' if u(a) ≥ u(a'). We say that y is Pareto-optimal or Pareto-efficient relative to some S ⊂ R^n if there is no element of S that strictly Pareto-dominates y. Let Γ = (A, u) and Γ' = (A', u') be two n-player games.
Then we call an n-tuple of functions Φ = (Φ i : A i → A i ) i=1,...,n a (game) isomorphism between Γ and Γ if there are vectors λ ∈ R n + and c ∈ R n such that u(a 1 , ..., a n ) = λu (Φ 1 (a 1 ), ..., Φ n (a n )) + c for all a ∈ A. If there is an isomorphism between Γ and Γ , we call Γ and Γ isomorphic. For example, if we let Γ be the Demand Game and Γ s the subset game of Table 2 , then ({DM, RM}, {DM, RM}, u) is isomorphic to Γ s via the isomorphism Φ with Φ i (DM) = DL and Φ i (RM) = RL and the constants λ = (1, 1) and c = (0, 0). \n Multivalued functions For sets M and N , a multi-valued function Φ : M N is a function which maps each element m ∈ M to a set Φ(m) ⊆ N . For a subset Q ⊆ M , we define Φ(Q) := m∈Q Φ(m). Note that Φ(Q) ⊆ N and that Φ(∅) = ∅. For any set M , we define the identity function id M : M M : m → {m}. Also, for two sets M and N , we define all M,N : M N : m → N . We define the inverse Φ −1 : N M : n → {m ∈ M | n ∈ Φ(m)}. Note that Φ −1 (∅) = ∅ for any multi-valued function Φ. For sets M , N and Q and functions Φ : M N and Ψ : N Q, we define the composite Ψ • Φ : M Q : m → Ψ(Φ(m)). As with regular functions, composition of multi-valued functions is associative. We say that Φ : M N is singlevalued if |Φ(m)| = 1 for all m ∈ M . Whenever a multi-valued function is single-valued, we can apply many of the terms for regular functions. For example, we will take injectivity, surjectivity, and bijectivity for single-valued functions to have the usual meaning. We will never apply these notions to non-single-valued functions. \n Delegation and safe Pareto improvements We consider a setting in which a given game Γ is played through what we will call representatives. For example, the representatives could be humans whose behavior is determined or incentivized by some contract à la the principal-agent literature [21] . We imagine that one way in which the representatives can be instructed is to in turn play a subset game Γ s = (A s 1 ⊆ A 1 , ..., A s n ⊆ A n , u s ) of the original game, without necessarily specifying a strategy or algorithm for solving such a game. We emphasize, again, that u s is allowed to be a vector of entirely different utility functions. For any subset game Γ s , we denote by Π(Γ s ) the outcome that arises if the representatives play the subset game Γ s of Γ. Because it is unclear what the right choice is in many games, the original players might be uncertain about Π(Γ s ). We will therefore model each Π(Γ s ) as a random variable. We will typically imagine that the representatives play Γ in the usual way, i.e., that they are not able to make further commitments or delegate again. For example, we imagine that if Γ is the Prisoner's Dilemma, then Π(Γ) = (Defect, Defect) with certainty. The original players trust their representatives to the extent that we take Π(Γ) to be a default way for the game to played for any Γ. That is, by default the original players tell their representatives to play the game as given. For example, in the Demand Game, it is not clear what the right action is. Thus, if one can simply delegate the decision to someone with more relevant expertise, that is the first option one would consider. We are interested in whether and how the original players can jointly Pareto-improve on the default. Of course, one option is to compute the expected utilities in the default (E [u(Π(Γ))]) and then let the representatives play a distribution over outcomes whose expected utility exceeds that default expected utility. 
However, this is unrealistic if Γ is a complex game with potentially many Nash equilibria. For one, the precise point of delegation is that the original players are unable or unwilling to properly evaluate Γ. Second, there is no widely agreed upon, universal procedure for selecting an action in the face of equilibrium selection problems. In such cases, the original players may in practice be unable to form a probability distribution over Π(Γ). This type of uncertainty is sometimes referred to as Knightian uncertainty, following Knight's [19] distinction between the concepts of risk and uncertainty. We address this problem in a typical way. Essentially, we require of any attempted improvement that it incurs no regret in the worst case. That is, we are interested in subset games Γ^s that are Pareto improvements with certainty under weak and purely qualitative assumptions about Π. 2 In particular, in Section 7, we will introduce the assumptions that the representatives do not play strictly dominated actions and play isomorphic games isomorphically. Definition 1. Let Γ^s be a subset game of Γ. We say Γ^s is a safe Pareto improvement (SPI) on Γ if u(Π(Γ^s)) ≥ u(Π(Γ)) with certainty. We say that Γ^s is a strict SPI if, furthermore, there is a player i s.t. u_i(Π(Γ^s)) > u_i(Π(Γ)) with positive probability. For example, in the introduction we have argued that the subset game in Table 2 is a strict SPI on the Demand Game (Table 1). Less interestingly, if we let Γ = (A, u) be the Prisoner's Dilemma (Table 3), then we might expect ({Cooperate}, {Cooperate}, u) to be an SPI on Γ. After all, we might expect that Π(Γ) = (Defect, Defect) with certainty, while it must be that Π({Cooperate}, {Cooperate}, u) = (Cooperate, Cooperate) with certainty, for lack of alternatives. Both players prefer mutual cooperation over mutual defection. \n Unilateral SPIs Both SPIs given above require both players to let their representatives choose from restricted strategy sets to maximize something other than the original players' utility functions. Definition 2. We will call a subset game Γ^s = (A^s, u^s) of Γ = (A, u) unilateral if for all but one i ∈ {1, ..., n} it holds that A^s_i = A_i and u^s_i = u_i. Consequently, if a unilateral subset game Γ^s of Γ is also an SPI on Γ, we call Γ^s a unilateral SPI. We now give an example of a unilateral SPI using the Complicated Temptation Game, formalized as a normal-form game in Table 4. (We give the not-so-complicated Temptation Game - in which we can only give a trivial example of SPIs - in Section 7.5.) Two players each deploy a robot. Each of the robots faces two choices in parallel. First, each can choose whether to work on Project 1 or Project 2. Player 1 values Project 1 higher and Player 2 values Project 2 higher, but the robots are more effective if they work on the same project. To complete the task, the two robots need to share a resource. Robot 2 manages the resource and can choose whether to control Robot 1's access tightly (e.g., by frequently checking on the resource, or requiring Robot 1 to demonstrate a need for the resource) or to give Robot 1 relatively free access. Controlling access tightly decreases the efficiency of both robots, though the exact costs depend on which projects the robots are working on. Robot 1 can choose between using the resource as intended by Robot 2, or giving in to the temptation of trying to steal as much of the resource as possible to use it for other purposes.
Regardless of what Robot 2 does (in particular, regardless of whether Robot 2 controls access or not), Player 1 prefers trying to steal. In fact, if Robot 2 controls access and Robot 1 refrains from theft, they never get anything done. Given that Robot 1 tries to steal, Player 2 prefers his Robot 2 to control access. As usual, we assume that the original players can instruct their robots to play arbitrary subset games of Γ (without specifying an algorithm for solving such a game) and that they can give such instructions conditional on the other player providing an analogous instruction. Player 1 has a unilateral SPI in the Complicated Temptation Game. In particular, Player 1 can commit her representative to play only from R_1 and R_2 and to assign utilities u^s_1(R_1, F_1) = u_1(T_1, C_1) = 2, u^s_1(R_1, F_2) = u_1(T_1, C_2) = 1, u^s_1(R_2, F_1) = u_1(T_2, C_1) = 1, and u^s_1(R_2, F_2) = u_1(T_2, C_2) = 4; otherwise u^s_1 does not differ from u_1. The resulting SPI is given in Table 5. In this subset game, Player 2's representative - knowing that Player 1's representative will only play from R_1 and R_2 - will choose from F_1 and F_2 (since F_1 and F_2 strictly dominate C_1 and C_2 in Table 5). Now notice that the remaining subset game is isomorphic to the ({T_1, T_2}, {C_1, C_2}) subset game of the original Complicated Temptation Game, where T_1 maps to R_1 and T_2 maps to R_2 for Player 1, and C_1 maps to F_1 and C_2 maps to F_2 for Player 2. \n C_1 C_2 F_1 F_2 \n T_1: (2, 4) (1, 1) (6, 0) (6, 0) \n T_2: (1, 1) (4, 2) (6, 0) (6, 0) \n R_1: (0, 0) (0, 0) (3, 5) (3, 2) \n R_2: (0, 0) (0, 0) (2, 2) (5, 3) \n Table 4: Complicated Temptation Game (Player 1 chooses the row, Player 2 the column; entries are (u_1, u_2)). \n C_1 C_2 F_1 F_2 \n R_1: (0, 0) (0, 0) (2, 5) (1, 2) \n R_2: (0, 0) (0, 0) (1, 2) (4, 3) \n Table 5: Safe Pareto improvement for the Complicated Temptation Game. \n Player 1's representative's utilities have been set to be the same between the two, and Player 2's utilities happen to be the same up to a constant (1) between the two subset games. Thus, we might expect that if Π(Γ) = (T_1, C_1), then Π(Γ^s) = (R_1, F_1), and so on. Finally, notice that u(R_1, F_1) ≥ u(T_1, C_1) and so on. Hence, Table 5 is indeed an SPI on the Complicated Temptation Game. Such unilateral changes are particularly interesting because they only require one of the players to be able to credibly delegate. That is, it is enough for a single player to instruct their representative to choose from a restricted action set to maximize a new utility function. The other players can simply instruct their representatives to play the game in the normal way (i.e., maximizing the respective players' original utility functions without restrictions on the action set). In fact, we may also imagine that only one player i delegates at all, while the other players choose an action themselves, after observing Player i's instruction to her representative. One may object that in a situation where only one player can commit, the sole player with commitment power can simply play the meta game as a standard unilateral commitment (Stackelberg) game [as studied by, e.g., 8, 38, 43] or perhaps as a first mover in a sequential game (as solved by backward induction), without bothering with any Pareto conditions. For example, in the Complicated Temptation Game, Player While this is true in many situations, it is easy to come up with scenarios in which it is not and in which SPIs are relevant.
For one, we may imagine that only one Player has the fine-grained commitment and delegation abilities needed for SPIs but that the other players can still credibly commit (or are already credibly committed) against any \"commitment trickery\" that clearly leaves them worse off. For instance, many people appear credibly committed by intuitions about fairness and retributivist instincts and emotions [see, e.g., 31, Chapter 6, especially the section \"The Doomsday Machine\"], but are nonetheless very limited in their abilities to commit. Second, we may imagine that the players who cannot commit are subject to reputation effects. Then they might want to build a reputation of resisting coercion. (In contrast, we expect a reputation of playing along with unilateral SPIs to be beneficial in most such situations.) \n Implementing safe Pareto improvements as program equilibria So far, we have been vague about the details of the strategic situation that the original players face in instructing their representatives. From what sets of actions can they choose? How can they jointly let the representatives play some new subset game Γ s ? Are SPIs Nash equilibria of the meta game played by the representatives? If I instruct my representative to play the SPI of Table 2 in the Demand Game, could my opponent not instruct her representative to play DM? In this section, we briefly describe one way to fill this gap by discussing the concept of program games and program equilibrium [33, Sect. 10.4, 39, 12, 3, 10, 26] . This section is essential to understanding why SPIs (especially omnilateral ones) are relevant. However, the remaining technical content of this paper does not rely on this section and the main ideas presented here are straightforward from previous work. We therefore only give an informal exposition. For formal detail, see Appendix A. For any game Γ = (A, u), the program equilibrium literature considers the following meta game. First, each player i chooses from a set of computer programs. Each program then receives as input a vector containing everyone else's chosen program. Each player i's program then returns an action from A i , player i's set of actions in Γ. Together these actions then form an outcome a ∈ A of the original game. Finally, the utilities u(a) are realized according to the utility function of Γ. The meta game can be analyzed like any other game. Its Nash equilibria are called program equilibria. Importantly, the program equilibria can implement payoffs not implemented by any Nash equilibria of Γ itself. For example, in the Prisoner's Dilemma, both players can submit a program that says: \"If the opponent's chosen computer program is equal to this computer program, Cooperate; otherwise Defect.\" [23, 17, 33, Sect. 10.4, 39] This is a program equilibrium which implements mutual cooperation. In the setting for our paper, we similarly imagine that each player i can choose from a set of programs that in turn choose from A i . However, the types of program that we have in mind here are more sophisticated than those typically considered in the program equilibrium literature. Specifically we imagine that the programs are executed by intelligent representatives who are themselves able to competently choose an action for player i in any given game Γ s , without the original player having to describe how this choice is to be made. The original player may not even understand much about this program other than that it generally plays well. 
Thus, in addition to the elementary instructions used in a typical computer program (branches, comparisons, arithmetic operations, etc.), we allow player i to use an instruction \"Play Π i (Γ s )\" in the program she submits. To jointly let the representatives play, e.g., the SPI Γ s of Table 2 on the Demand Game of Table 1 , the representatives can both use an instruction that says, \"If the opponent's chosen program is analogous to this one, play Π i (Γ s ); otherwise play DM\". Assuming some minimal rationality requirements on the representatives (i.e., on how \"play Π i (Γ s )\" is implemented), this is a Nash equilibrium. Figure 1 illustrates how (in the two-player case) the meta game between the original players is intended to work. For illustration consider the following two real-world instantiations of this setup. First, we might imagine that the original players hire human representatives. Each player specifies, e.g., via monetary incentives, how she wants her representative to act by some contract. For example, a player might contract her representative to play a particular action; or she might specify in her contract a function (u s i ) over outcomes according to which she will pay the representative after an outcome is obtained. Moreover, these contracts might refer to one another. For example, Player 1's contract with her representative might specify that if Player 2 and his representative use an analogous contract, then she will pay her representative according to Table 2 . As a second, more futuristic scenario, you could imagine that the representatives are software agents whose goals are specified by so-called smart contracts, i.e., computer programs implemented on a blockchain to be publicly verifiable [6, 34] . To justify our study of SPIs, we prove that every SPI is played in some program equilibrium: Theorem 1. Let Γ be a game and Γ s be an SPI of Γ. Now consider a program game on Γ, where each player i can choose from a set of computer programs that output actions for Γ. In addition to the normal kind of instructions, we allow the use of the command \"play Π i (Γ )\" for any subset game Γ of Γ. Finally, assume that Π(Γ) guarantees each player i at least that player's minimax utility (a.k.a. threat point) in the base game Γ. Then Π(Γ s ) is played in a program equilibrium, i.e., in a Nash equilibrium of the program game. We prove this in Appendix A. As an alternative to having the original players choose contracts separately, we could imagine the use of jointly signed contracts which only come into effect once signed by all players [cf. 18, 24] . Also compare earlier work by Sen [37] and Raub [32] , which we discuss in Appendix B. Definition 3. Consider two games Γ = (A 1 , ..., A n , u) and Γ = (A 1 , ..., A n , u ). We write Γ ∼ Φ Γ for Φ : A A if Π(Γ ) ∈ Φ(Π(Γ)) with certainty. Note that Γ ∼ Φ Γ is a statement about Π, i.e., about how the representatives choose. Whether such a statement holds generally depends on the specific representatives being used. In Section 7, we describe two general circumstances under which it seems plausible that Γ ∼ Φ Γ . For example, if two games Γ and Γ are isomorphic, then one might expect Γ ∼ Φ Γ , where Φ is the isomorphism between the two games. We now illustrate this notation using our discussion from the Demand Game. Let Γ be the Demand Game of Table 1 . First, it seems plausible that Γ is in some sense equivalent to Γ , where Γ = ({DM, RM, u) is the game that results from removing DL and RL for both players from Γ. 
Again, strict dominance could be given as an argument. We can now formalize this as Γ ∼_Φ Γ', where Φ(a_1, a_2) = {(a_1, a_2)} if a_1, a_2 ∈ {DM, RM} and Φ(a_1, a_2) = ∅ otherwise. Next, it seems plausible that Γ' ∼_Ψ Γ^s, where Γ^s is the game of Table 2 and Ψ is the isomorphism between Γ' and Γ^s. We now state some basic facts about the relation ∼, many of which we will use throughout this paper. Lemma 2. Let Γ = (A, u), Γ' = (A', u'), Γ̂ = (Â, û) and Φ, Ξ : A ⇉ A', Ψ : A' ⇉ Â. 1. Reflexivity: Γ ∼_{id_A} Γ, where id_A : A ⇉ A : a ↦ {a}. 2. Symmetry: If Γ ∼_Φ Γ', then Γ' ∼_{Φ^{-1}} Γ. 3. Transitivity: If Γ ∼_Φ Γ' and Γ' ∼_Ψ Γ̂, then Γ ∼_{Ψ∘Φ} Γ̂. 4. If Γ ∼_Φ Γ' and Φ(a) ⊆ Ξ(a) for all a ∈ A, then Γ ∼_Ξ Γ'. 5. Γ ∼_{all_{A,A'}} Γ', where all_{A,A'} : A ⇉ A' : a ↦ A'. 6. If Γ ∼_Φ Γ' and Φ(a) = ∅, then Π(Γ) ≠ a with certainty. 7. If Γ ∼_Φ Γ' and Φ^{-1}(a') = ∅, then Π(Γ') ≠ a' with certainty. Proof. 1. By reflexivity of equality, Π(Γ) = Π(Γ) with certainty. Hence, Π(Γ) ∈ id_A(Π(Γ)) by definition of id_A. Therefore, Γ ∼_{id_A} Γ by definition of ∼, as claimed. 2. Γ ∼_Φ Γ' means that Π(Γ') ∈ Φ(Π(Γ)) with certainty. Thus, Π(Γ) ∈ {a ∈ A | Π(Γ') ∈ Φ(a)} = Φ^{-1}(Π(Γ')), where the equality is by the definition of the inverse of multi-valued functions. We conclude (by definition of ∼) that Γ' ∼_{Φ^{-1}} Γ as claimed. 3. If Γ ∼_Φ Γ' and Γ' ∼_Ψ Γ̂, then by definition of ∼, (i) Π(Γ') ∈ Φ(Π(Γ)) and (ii) Π(Γ̂) ∈ Ψ(Π(Γ')), both with certainty. The former (i) implies {Π(Γ')} ⊆ Φ(Π(Γ)). Hence, Ψ(Π(Γ')) = Ψ({Π(Γ')}) ⊆ Ψ(Φ(Π(Γ))). With (ii), it follows that Π(Γ̂) ∈ Ψ(Φ(Π(Γ))) with certainty. By definition, Γ ∼_{Ψ∘Φ} Γ̂ as claimed. 4. It is Π(Γ') ∈ Φ(Π(Γ)) ⊆ Ξ(Π(Γ)) with certainty. Thus, by definition, Γ ∼_Ξ Γ'. 5. By definition of Π, it is Π(Γ') ∈ A' with certainty. By definition of all_{A,A'}, it is all_{A,A'}(Π(Γ)) = A' with certainty. Hence, Π(Γ') ∈ all_{A,A'}(Π(Γ)) with certainty. We conclude that Γ ∼_{all_{A,A'}} Γ' as claimed. 6. With certainty, Π(Γ') ∈ Φ(Π(Γ)) (by assumption). Also, with certainty Π(Γ') ∉ ∅. Hence, Φ(Π(Γ)) ≠ ∅ with certainty. We conclude that Π(Γ) ≠ a with certainty. Items 1-3 show that ∼ has properties resembling those of an equivalence relation. Note, however, that since ∼ is not a binary relation, ∼ itself cannot be an equivalence relation in the usual sense. We can construct equivalence relations, though, by existentially quantifying over the multivalued function. For example, we might define an equivalence relation R on games, where (Γ, Γ') ∈ R if and only if there is a single-valued bijection Φ such that Γ ∼_Φ Γ'. 3 Item 4 states that if we make an outcome correspondence claim less precise, it will still hold true. Item 5 states that, in the extreme, it is always the case that Γ ∼_{all_{A,A'}} Γ', where all_{A,A'} is the trivial, maximally imprecise outcome correspondence function that confers no information. Item 6 shows that ∼ can be used to express the elimination of outcomes, i.e., the belief that a particular outcome (or strategy) will never occur. Besides an equivalence relation, we can also use ∼ with quantification over the respective outcome correspondence function to construct (non-symmetric) preorders over games, i.e., relations that are transitive and reflexive (but not symmetric or antisymmetric). Most importantly, we can construct a preorder on games under which one game is weakly above another if they are related via ∼_Φ for a Φ that always increases every player's utilities. \n Safe Pareto improvements through outcome correspondence We now show that, as advertised, outcome correspondence is closely tied to SPIs.
The following theorem shows not only how outcome correspondences can be used to find (and prove) SPIs. It also shows that any SPI requires an outcome correspondence relation via a Pareto-improving correspondence function. Definition 4. Let Γ = (A, u) be a game and Γ s = (A s , u s ) be a subset game of Γ. Further let Φ : A → A s be such that Γ ∼ Φ Γ . We call Φ a Paretoimproving outcome correspondence (function) if u(a s ) ≥ u(a) for all a ∈ A and all a s ∈ Φ(a). Theorem 3. Let Γ = (A, u) be a game and Γ s = (A s , u s ) be a subset game of Γ. Then Γ s is an SPI on Γ if and only if there is a Pareto-improving outcome correspondence from Γ to Γ s . Proof. ⇐: By definition, Π(Γ s ) ∈ Φ(Π(Γ)) with certainty. Hence, for i = 1, 2, u i (Π(Γ s )) ∈ u i (Φ(Π(Γ))) with certainty. Hence, by assumption about Φ, with certainty, u i (Π(Γ s )) ≥ u i (Π(Γ)). ⇒: Assume that u i (Π(Γ)) ≥ u i (Π(Γ s )) with certainty for i = 1, 2. We define Φ : A → A s : a → {a s ∈ A s | u(a s ) ≥ u(a)} . It is immediately obvious that Φ is Pareto-improving as required. Also, whenever Π(Γ) = a and Π(Γ s ) = a s for any a ∈ A and a s ∈ A s , it is (by assumption) with certainty u(a s ) ≥ u(a). Thus, by definition of Φ, it holds that a s ∈ Φ(a). We conclude that Γ ∼ Φ Γ s as claimed. Note that the theorem concerns weak SPIs and therefore allows the case where with certainty u(Π(Γ)) = u(Π(Γ s )). To show that some Γ s is a strict SPI, we need additional information about which outcomes occur with positive probability. This, too, can be expressed via our outcome correspondence relation. However, since this is cumbersome, we will not formally address strictness much to keep things simple. 4 We now illustrate how outcome correspondences can be used to derive the SPI for the Demand Game from the introduction as per Theorem 3. Of course, at this point we have not made any assumptions about when games are equivalent. We will introduce some in the following section. Nevertheless, we can already sketch the argument using the specific outcome correspondences that we have given intuitive arguments for. Let Γ again be the Demand Game of Table 1 . Then, as we have argued, Γ ∼ Φ Γ , where Γ = ({DM, RM}, {DM, RM}, u) is the game that results from removing DL and RL for both players; and Φ(a 1 , a 2 ) = {(a 1 , a 2 )} if a 1 , a 2 ∈ {DM, RM} and Φ(a 1 , a 2 ) = ∅ otherwise. In a second step, Γ ∼ Ψ Γ s , where Γ s is the game of Table 2 and Ψ is the isomorphism between Γ and Γ s . Finally, transitivity (Lemma 2.3) implies that Γ ∼ Ψ•Φ Γ s . To see that Ψ • Φ is Pareto-improving for the original utility functions of Γ, notice that Φ does not change utilities at all. The correspondence function Ψ maps the conflict outcome (DM, DM) onto the outcome (DL, DL), which is better for both original players. Other than that, Ψ, too, does not change the utilities. Hence, Ψ • Φ is Pareto-improving. By Theorem 3, Γ s is therefore an SPI on Γ. In principle, Theorem 3 does not hinge on Π(Γ) and Π(Γ s ) resulting from playing games. An analogous result holds for any random variables over A and A s . In particular, this means that Theorem 3 applies also if the representatives receive other kinds of instructions (cf. Section 4). However, it seems hard to establish non-trivial outcome correspondences between Π(Γ) and other types of instructions. Still, the use of more complicated instructions can be used to derive different kinds of SPIs. 
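As a small computational companion to Theorem 3 and the Demand Game derivation just given, the following Python sketch represents outcome correspondences as dictionaries from outcomes to sets of outcomes, composes them as in Lemma 2.3, and checks the Pareto-improvement condition of Definition 4. The payoff numbers are assumed placeholders; only the structure, with the conflict outcome (DM, DM) mapped to the better (DL, DL), follows the text.

from itertools import product

# Assumed placeholder payoffs (not the actual Table 1 / Table 2 numbers):
# utilities of the Demand Game outcomes that survive elimination of DL and RL,
# and of the subset game Gamma^s on {DL, RL}, in which the conflict outcome
# (DM, DM) is replaced by the better (DL, DL).
u_reduced = {("DM", "DM"): (0, 0), ("DM", "RM"): (2, 0),
             ("RM", "DM"): (0, 2), ("RM", "RM"): (1, 1)}
u_spi     = {("DL", "DL"): (1, 1), ("DL", "RL"): (2, 0),
             ("RL", "DL"): (0, 2), ("RL", "RL"): (1, 1)}

ACTIONS = ["DM", "RM", "DL", "RL"]

# Phi: elimination correspondence of the full game onto its reduced part
# (eliminated outcomes are mapped to the empty set, cf. Assumption 1).
Phi = {(a1, a2): ({(a1, a2)} if {a1, a2} <= {"DM", "RM"} else set())
       for a1, a2 in product(ACTIONS, repeat=2)}

# Psi: the isomorphism DM -> DL, RM -> RL, applied to both players.
iso = {"DM": "DL", "RM": "RL"}
Psi = {(a1, a2): {(iso[a1], iso[a2])}
       for a1, a2 in product(["DM", "RM"], repeat=2)}

def compose(phi2, phi1):
    """(phi2 . phi1)(a) = union of phi2(b) over b in phi1(a), as in Lemma 2.3."""
    return {a: set().union(*(phi2[b] for b in bs)) if bs else set()
            for a, bs in phi1.items()}

def is_pareto_improving(phi, u_from, u_to):
    """Definition 4: every image outcome is weakly better for every player.
    Outcomes with empty image impose no condition, so only the surviving
    outcomes' original utilities are needed here."""
    return all(v_to >= v_from
               for a, bs in phi.items() for b in bs
               for v_from, v_to in zip(u_from[a], u_to[b]))

print(is_pareto_improving(compose(Psi, Phi), u_reduced, u_spi))  # True

Under the assumed payoffs, the printed True certifies, via Theorem 3, that the {DL, RL} game is an SPI on the Demand Game.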
For example, if there are different game SPIs, then the original players could tell their representatives to randomize between them in a coordinated way. \n Assumptions about outcome correspondence To make any claims about how the original players should play the metagame, i.e., about what instructions they should submit, we generally need to make assumptions about how the representatives choose and (by Theorem 3) about outcome correspondence in particular. 5 We here make two fairly weak assumptions. \n Elimination Our first is that the representatives never play strictly dominated actions and that removing them does not affect what the representatives would choose. Assumption 1. Let Γ = (A, u) be an arbitrary n-player game where A 1 , ..., A n are pairwise disjoint, and let ãi ∈ A i be strictly dominated by some other strategy in A i . Then Γ ∼ Φ (A −i , A i − {ã i }, u |(A −i ,A i −{ã i }) ), where for all a −i ∈ A −i , Φ(ã i , a −i ) = ∅ and Φ(a i , a −i ) = {(a i , a −i )} whenever a i = ãi . Assumption 1 expresses that representatives should never play strictly dominated strategies. Moreover, it states that we can remove strictly dominated strategies from a game and the resulting game will be played in the same way by the representatives. For example, this implies that when evaluating a strategy a i , the representatives do not take into account how many other strategies a i strictly dominates. Assumption 1 also allows (via Transitivity of ∼ as per Lemma 2.3) the iterated removal of strictly dominated strategies. The notion that we can (iteratively) remove strictly dominated strategies is common in game theory [28, 20, 27 , Section 2.9, Chapter 12] and has rarely been questioned. It is also implicit in the solution concept of Nash equilibrium -if a strategy is removed by iterated strict dominance, that strategy is played in no Nash equilibrium. However, like the concept of Nash equilibrium, the elimination of strictly dominated strategies becomes implausible if the game is not played in the usual way. In particular, for Assumption 1 to hold, we will in most games Γ have to assume that the representatives cannot in turn make credible commitments (or delegate to further subrepresentatives) or play the game iteratively [2] . \n Isomorphisms Our second assumption is that the representatives play isomorphic games isomorphically when those games are fully reduced. Assumption 2. Let Γ = (A, u) and Γ = (A , u ) be two games that do not contain strictly dominated actions. If Γ and Γ are isomorphic, then there exists an isomorphism Φ between Γ and Γ such that Γ ∼ Φ Γ . Note that if there are multiple game isomorphisms, then we assume outcome correspondence for only one of them. This is necessary for the assumption to be satisfiable in the case of games with action symmetries. (Of course, such games are not the focus of this paper.) For example, let Γ be Rock-Paper-Scissors. Then Γ is isomorphic to itself via the function Φ that for both players maps Rock to Paper, Paper to Scissors, and Scissors to Rock. But if it were Γ ∼ Φ Γ, then this would mean that if the representatives play Rock in Rock-Paper-Scissors, they play Paper in Rock-Paper-Scissors. Contradiction! We will argue for the consistency of our version of the assumption in Section 7.3. Notice also that we make the assumption only for reduced games. This relates to the previous point about action-symmetric games. 
For example, consider two versions of Rock-Paper-Scissors and assume that in both versions both players have an additional strictly dominated action that breaks the action symmetries e.g., the action, \"resign and give the opponent $10 if they play Rock/Paper\". Then there would only be one isomorphism between these two games (which maps Rock to Paper, Paper to Scissors, and Scissors to Rock for both players). However, in light of Assumption 1, it seems problematic to assume that these strictly dominated actions restrict the outcome correspondences between these two games. 6 One might worry that reasoning about the existence of multiple isomorphisms renders it intractable to deal with outcome correspondences as implied by Assumption 2, and in particular that it might make it impossible to tell whether a particular game is an SPI. However, one can intuitively see that the different isomorphisms between two games do analogous operations. In particular, it turns out that if one isomorphism is Pareto-improving, then they all are: Lemma 4. Let Φ and Ψ be isomorphisms between Γ and Γ . If Φ is (strictly) Pareto-improving, then so is Ψ. We prove Lemma 4 in Appendix C. Lemma 4 will allow us to conclude from the existence of a Paretoimproving isomorphism Φ that there is a Pareto-improving Ψ s.t. Γ ∼ Ψ Γ by Assumption 2, even if there are multiple isomorphisms between Γ and Γ . In the following, we can therefore afford to be lax about our ignorance (in some games) about which outcome isomorphism induces outcome equivalence. We will therefore generally write \"Γ ∼ Φ Γ by Assumption 2\" as short for \"Φ is a game isomorphism between Γ and Γ and hence by Assumption 2 there exists an isomorphism Ψ such that Γ ∼ Ψ Γ \". One could criticize Assumption 2 by referring to focal points (introduced by Schelling [35, pp. 54-58]) as an example where context and labels of strategies matter. A possible response might be that in games where context plays a role, that context should be included as additional information and not be considered part of (A, u). Assumption 2 would then either not apply to such games with (relevant) context or would require one to, in some way, translate the context along with the strategies. However, in this paper we will not formalize context, and assume that there is no decision-relevant context. \n Consistency of Assumptions 1 and 2 We will now argue that there exist representatives that indeed satisfy Assumptions 1 and 2, both to provide intuition and because our results would not be valuable if Assumptions 1 and 2 were inconsistent. We will only sketch the argument informally. To make the argument formal, we would need to specify in more detail what the set of games looks like and in particular what the objects of the action sets are. Imagine that for each player i there is a book 7 that on each page describes a normal-form game that does not have any strictly dominated strategies. The actions have consecutive integer labels. Importantly, the book contains no pair of games that are isomorphic to each other. Moreover, for every fully reduced game, the book contains a game that is isomorphic to this game. (Unless we strongly restrict the set of games under consideration, the book must therefore have infinitely many pages.) We imagine that each player's book contains the same set of games. On each page, the book for Player i recommends one of the actions of Player i to be taken deterministically. 
8 Each representative owns a potentially different version of this book and uses it as follows to play a given game Γ. First the given game is fully reduced by iterated strict dominance. They then look up the unique game in the book that is isomorphic to Γ red and map the action labels in Γ red onto the integer labels of the game in the book via some isomorphism. If there are multiple isomorphisms from Γ red to the relevant page in the book, then all representatives decide between them using the same deterministic procedure. Finally they choose the action recommended by the book. It is left to show a pair of representatives Π thus specified satisfies Assumptions 1 and 2. We first argue that Assumption 1 is satisfied. Let Γ be a game and let Γ be a game that arises from removing a strictly dominated action from Γ. By the well known path independence of iterated elimination of strictly dominated strategies [1, 15, 28] , fully reducing Γ and Γ results in the same game. Hence, the representatives play the same actions in Γ and Γ . Second, we argue that Assumption 2 is satisfied. Let us say Γ and Γ are fully reduced and isomorphic. Then it is easy to see that each player i, plays Γ and Γ based on the same page of their book. Let the game on that book page be Γ. Let Φ : A → Ã and Φ : A → Ã be the bijections used by the representatives to translate actions in Γ and Γ , respectively, to labels in Γ. Then if the representatives take actions a in Γ, the actions Φ(a) are the ones specified by the book for Γ, and hence the actions Φ−1 (Φ(a)) are played in Γ . Thus Γ ∼ Φ−1 •Φ Γ. It is easy to see that Φ−1 • Φ is a game isomorphism between Γ and Γ. \n Discussion of alternatives to Assumptions 1 and 2 One could try to use principles other than Assumptions 1 and 2. We here give some considerations. First, game theorists have also considered the iterated elimination of weakly dominated strategies [14, 22, Section 4.11] . Unfortunately, the iterated removal of weakly dominated strategies is pathdependent [20, Section 2.7.B, 5, Section 5.2, 27, Section 12.3]. That is, for some games, iterated removal of weakly dominated strategies can lead to different subset games, depending on which weakly dominated strategy one chooses to eliminate at any stage. A straightforward extension of Assumption 1 to allow the elimination of weakly dominated strategies would therefore be inconsistent. The iterated removal of strictly dominated strategies, on the other hand, is path-independent, and in the 2-player case always eliminates exactly the non-rationalizable strategies [1, 15, 28] . Many other dominance concepts have been shown to be path independent. For an overview, see Apt [1] . We could have used any of these path-independent dominance concepts. With Assumptions 1 and 2, all our outcome correspondence functions are either 1-to-1 or 1-to-0. Other elimination assumptions could involve the use of many-to-1 or even many-to-many functions. In general, such functions are needed when a strategy ãi can be eliminated to obtain a strategically equivalent game, but in the original game ãi may still be played. The simplest example would be the elimination of payoff-equivalent strategies. Imagine that in some game Γ for all opponent strategies a −i ∈ A −i it is the case that u(ã i , a −i ) = u(â i , a −i ) and that there are no other strategies that are similarly payoff-equivalent to ãi and âi . Then one would assume that Γ ∼ Φ (A i − {ã i }, A −i , u), where Φ maps ãi onto {â i } and otherwise Φ is just the identity function. 
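Since the examples in the next section repeatedly reduce games by iterated strict dominance (Assumption 1), the following Python sketch implements that reduction for two-player normal-form games, considering, as in the text, only dominance by pure strategies. By the path independence cited above, the order of removals does not matter. The function names and the Prisoner's-Dilemma payoffs used for the demonstration are our own assumptions.

def strictly_dominated(u_i, A_i, A_other, a, is_row):
    """Is action a of the focal player strictly dominated by another pure action?"""
    def val(x, y):
        return u_i[(x, y)] if is_row else u_i[(y, x)]
    return any(all(val(b, o) > val(a, o) for o in A_other)
               for b in A_i if b != a)

def iterated_strict_dominance(A1, A2, u1, u2):
    """Iteratively remove pure-strategy strictly dominated actions (Assumption 1).
    u1, u2 map (row action, column action) to the players' utilities."""
    A1, A2 = list(A1), list(A2)
    changed = True
    while changed:
        changed = False
        for a in list(A1):
            if len(A1) > 1 and strictly_dominated(u1, A1, A2, a, True):
                A1.remove(a)
                changed = True
        for a in list(A2):
            if len(A2) > 1 and strictly_dominated(u2, A2, A1, a, False):
                A2.remove(a)
                changed = True
    return A1, A2

# A Prisoner's-Dilemma-like game with assumed textbook payoffs (not Table 3).
A = ["Cooperate", "Defect"]
u1 = {("Cooperate", "Cooperate"): 3, ("Cooperate", "Defect"): 0,
      ("Defect", "Cooperate"): 5, ("Defect", "Defect"): 1}
u2 = {(r, c): u1[(c, r)] for r, c in u1}  # symmetric game
print(iterated_strict_dominance(A, A, u1, u2))  # (['Defect'], ['Defect'])

The printed result, (['Defect'], ['Defect']), is the fully reduced game used in the Prisoner's Dilemma example below.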
Examples

In this section, we use Assumptions 1 and 2 to formally prove a few SPIs.

Proposition (Example) 5. Let Γ be the Prisoner's Dilemma (Table 3) and Γ^s = (A^s_1, A^s_2, u^s_1, u^s_2) be any subset game of Γ with A^s_1 = A^s_2 = {Cooperate}. Then under Assumption 1, Γ^s is a strict SPI on Γ.

Proof. By applying Assumption 1 twice and Transitivity once, Γ ∼_Φ Γ_D, where Γ_D = ({Defect}, {Defect}, u) and Φ(Defect, Defect) = {(Defect, Defect)} and Φ(a_1, a_2) = ∅ for all (a_1, a_2) ≠ (Defect, Defect). By Lemma 2.5, we further obtain Γ_D ∼_all Γ^s, where Γ^s is as described in the proposition. Hence, by transitivity, Γ ∼_{all∘Φ} Γ^s. It is easy to verify that the function all ∘ Φ is Pareto-improving.

Proposition (Example) 6. Let Γ be the Demand Game of Table 1 and Γ^s be the subset game described in Table 2. Under Assumptions 1 and 2, Γ^s is an SPI on Γ. Further, if P(Π(Γ) = (DM, DM)) > 0, then Γ^s is a strict SPI.

Proof. Let (A_1, A_2, u_1, u_2) = Γ. We can repeatedly apply Assumption 1 to eliminate from Γ the strategies DL and RL for both players, which gives Γ ∼_Φ Γ′ for the resulting reduced game Γ′. We can then apply Assumption 2 to obtain Γ′ ∼_Ψ Γ^s, where Ψ is the isomorphism between Γ′ and Γ^s, and transitivity (Lemma 2.3) yields Γ ∼_{Ψ∘Φ} Γ^s. For every (a_1, a_2) ∈ A_1 × A_2, it is for all (a^s_1, a^s_2) ∈ Ψ(Φ(a_1, a_2)) the case that u(a^s_1, a^s_2) ≥ u(a_1, a_2), so Ψ ∘ Φ is Pareto-improving and Γ^s is an SPI on Γ by Theorem 3. If moreover P(Π(Γ) = (DM, DM)) > 0, then with positive probability Π(Γ) = (DM, DM) while Π(Γ^s) = (DL, DL), and (DL, DL) is strictly better for both original players than (DM, DM), so Γ^s is a strict SPI.

Next, we give two examples of unilateral SPIs. We start with an example that is trivial in that the original player instructs her representative to take a specific action. We then give the SPI for the Complicated Temptation Game as a non-trivial example. Consider the Temptation Game given in Table 6. In this game, Player 1's T (for Temptation) strictly dominates R. Once R is removed, Player 2 prefers C. Hence, this game is strict-dominance solvable to (T, C). Player 1 can safely Pareto-improve on this result by telling her representative to play R, since Player 2's best response to R is F and u(R, F) = (4, 4) > (1, 2) = u(T, C). We now show this formally.

Proposition (Example) 7. Let Γ = (A_1, A_2, u_1, u_2) be the game of Table 6. Under Assumption 1, Γ^s = ({R}, A_2, u_1, u_2) is a strict SPI on Γ.

Proof. First consider Γ. We can apply Assumption 1 to eliminate Player 1's R and then apply Assumption 1 again to the resulting game to also eliminate Player 2's F. By transitivity, we find Γ ∼_Φ Γ′, where Γ′ = ({T}, {C}, u_1, u_2) and Φ(T, C) = {(T, C)} and Φ(A_1 × A_2 − {(T, C)}) = ∅. Next, consider Γ^s. We can apply Assumption 1 to remove Player 2's strategy C and find Γ^s ∼_Ψ Γ̃^s, where Γ̃^s = ({R}, {F}, u_1, u_2) and Ψ(R, F) = {(R, F)} and Ψ(R, C) = ∅. Third, Γ′ ∼_all Γ̃^s by Lemma 2.5, where all(T, C) = {(R, F)}. Finally, we can apply symmetry (Lemma 2.2) and transitivity to conclude Γ ∼_Ξ Γ^s, where Ξ = Ψ^{-1} ∘ all ∘ Φ. It is easy to verify that Ξ(T, C) = {(R, F)} and Ξ(A_1 × A_2 − {(T, C)}) = ∅. Hence, Ξ is Pareto-improving and so by Theorem 3, Γ^s is an SPI on Γ.

Note that in this example, Player 1 simply commits to a particular strategy R and Player 2 maximizes their utility given Player 1's choice. Hence, this SPI can be justified with much simpler unilateral commitment setups [8, 38, 43]. For example, if the Temptation Game were played as a sequential game in which Player 1 plays first, its unique subgame-perfect equilibrium is (R, F). In Table 4 we give the Complicated Temptation Game, which better illustrates the features specific to our setup. Roughly, it is an extension of the simpler Temptation Game of Table 6. In addition to choosing T versus R and C versus F, the players also have to make an additional choice (1 versus 2), which is difficult in that it cannot be solved by strict dominance.
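Before turning to the Complicated Temptation Game, here is a quick numerical check of the reasoning behind Proposition 7. Since Table 6 does not appear in full in the text, every payoff entry other than u(T, C) = (1, 2) and u(R, F) = (4, 4) is a hypothetical completion chosen to respect the dominance structure described above (T strictly dominates R; C is Player 2's best reply to T; F is Player 2's best reply to R).

# Hypothetical completion of Table 6 (only u(T,C) = (1,2) and u(R,F) = (4,4)
# are given in the text); the other entries are assumptions consistent with
# the dominance structure described above.
u = {("T", "C"): (1, 2), ("T", "F"): (5, 0),
     ("R", "C"): (0, 0), ("R", "F"): (4, 4)}

# In Gamma, T strictly dominates R for Player 1 ...
assert all(u[("T", c)][0] > u[("R", c)][0] for c in ["C", "F"])
# ... and once R is gone, C is Player 2's unique best reply to T,
# so Gamma is strict-dominance solvable to (T, C).
assert u[("T", "C")][1] > u[("T", "F")][1]

# In the subset game Gamma^s = ({R}, {C, F}), F is Player 2's best reply to R.
assert u[("R", "F")][1] > u[("R", "C")][1]

# The unilateral restriction to R is a strict SPI: (R, F) beats (T, C) for both.
print(u[("R", "F")], ">", u[("T", "C")])   # (4, 4) > (1, 2)

If the asserts pass, the game resolves to (T, C) by iterated strict dominance, while the subset game resolves to (R, F), which both players prefer.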
As we have argued in Section 3.1, the game in Table 5 is a unilateral SPI on Table 4 . We can now show this formally. Proposition (Example) 8. Let Γ be the Complicated Temptation Game (Table 4 ) and Γ s be the subset game in Table 5 . Under Assumptions 1 and 2, Γ s is a unilateral SPI on Γ. Proof. In Γ, for Player 1, T 1 and T 2 strictly dominate R1 and R 2 . We can thus apply Assumption 1 to eliminate Player 1's R1 and R 2 . In the resulting game, Player 2's C 1 and C 2 strictly dominate F 1 and F 2 , so one can apply Assumption 1 again to the resulting game to also eliminate Player 2's F 1 and F 2 . By transitivity, we find Γ ∼ Φ Γ , where Γ = ({T 1 , T 2 }, {C 1 , C 2 }, u 1 , u 2 ) and Φ(a 1 , a 2 ) = {(a 1 , a 2 )} if a 1 ∈ {T 1 , T 2 } and a 2 ∈ {C 1 , C 2 } ∅ otherwise . Next, consider Γ s (Table 5 ). We can apply Assumption 1 to remove Player 2's strategies C 1 and C 2 and find Γ s ∼ Ψ Γs , where Γs = (({R 1 , R 2 }, {F 1 , F 2 }, u s 1 , u 2 )) and Ψ(a 1 , a 2 ) = {(a 1 , a 2 )} if a 1 ∈ {R 1 , R 2 } and a 2 ∈ {F 1 , F 2 } ∅ otherwise . Third, Γ ∼ Ξ Γs by Assumption 2, where Ξ decomposes into Ξ 1 and Ξ 2 , corresponding to the two players, respectively, where Ξ 1 (T i ) = R i and Ξ 2 (C i ) = F i for i = 1, 2. Finally, we can apply transitivity and the rule about symmetry and inverses (Lemma 2.2) to conclude Γ ∼ Ψ −1 •Ξ•Φ Γ s . It is easy to verify that Ψ −1 • Ξ • Φ is Pareto-improving. \n Computing safe Pareto improvements In this section, we ask how computationally costly it is for the original players to identify for a given game Γ a non-trivial SPI Γ s . Of course, the answer to this question depends on what the original players are willing to assume about how their representatives act. For example, if only trivial outcome correspondences (as per Lemma 2.1 and 2.5) are assumed, then the decision problem is easy. Similarly, if Γ ∼ Φ Γ for given Φ is hard to decide (e.g., because it requires solving for the Nash equilibria of Γ and Γ ), then this could trivially also make the safe Pareto improvement problem hard to decide. We specifically are interested in deciding whether a given game Γ has a non-trivial SPI that can be proved using only Assumptions 1 and 2, the general properties of game correspondence (in particular Transitivity (Lemma 2.3), Symmetry (Lemma 2.2) and Theorem 3). Definition 5. The SPI decision problem consists in deciding for any given Γ, whether there is a game Γ s and a sequence of outcome correspondences Φ 1 , ..., Φ k and a sequence of subset games Γ 0 = Γ, Γ 1 , ..., Γ k = Γ s of Γ s.t.: 1. (Non-triviality:) If we fully reduce Γ s and Γ using iterated strict dominance (Assumption 1), the two resulting games are not equal. (Of course, they are allowed to be isomorphic.) 2. For i = 1, ..., k, Γ i−1 ∼ Φ i Γ i is valid by a single application of either Assumption 1 or Assumption 2, or an application of Assumption 1 in reverse via Lemma 2.2. 3. For all a ∈ A, and whenever a s ∈ (Φ k • Φ k−1 • ... • Φ 1 )(a), it is the case that u(a s ) ≥ u(a). For the strict SPI decision problem, we further require: (4.) There is a player i and an outcome a that survives iterated elimination of strictly dominated strategies from Γ s.t. u i ((Φ k •Φ k−1 •...•Φ 1 )(a)) > u i (a) . For the unilateral SPI decision problem, we further require: (5.) For all but one of the players i, u i = u s i and A i = A s i . Many variants of this problem may be considered. 
For example, to match Definition 1, the definition of the strict SPI problem assumes that all outcomes a that survive iterated elimination occur with positive probability. Alternatively we could have required that for demonstrating strictness, there must be a player i such that for all a ∈ A that survive iterated elimination, u i ((Φ k • Φ k−1 • ... • Φ 1 )(a)) > u i (a) . Similarly one may wish to find SPIs that are strict improvements for all players. We may also wish to allow the use of the elimination of duplicate strategies (as described in Section 7.4) or trivial outcome correspondence steps as per Lemma 2.5. These modifications would not change the computational complexity of the problem, nor would they require new proof ideas. One may also wish to compute all SPIs, or -in line with multi-criteria optimization [11, 42] -all SPIs that cannot in turn be safely improved upon. However, in general there may exist exponentially many such SPIs. To retain any hope of developing an efficient algorithm, one would therefore have to first develop a more efficient representation scheme [cf. 29, Sect. 16.4 ]. The full proof is tedious (see Appendix D), but the main idea is simple, especially for omnilateral SPIs. To find an omnilateral SPI on Γ based on Assumptions 1 and 2, one has to first iteratively remove all strictly dominated actions to obtain a reduced game Γ , which the representatives would play the same as the original game. This can be done in polynomial time. One then has to map the actions Γ onto the original Γ in such a way that each outcome in Γ is mapped onto a weakly Pareto-better outcome in Γ. Our proof of NP-hardness works by reducing from the subgraph isomorphism problem, where the payoff matrices of Γ and Γ represent the adjacency matrices of the graphs. Besides being about a specific set of assumptions about ∼, note that Theorem 9 and Proposition 10 also assume that the utility function of the game is represented explicitly in normal form as a payoff matrix. If we changed the game representation (e.g., to boolean circuits, extensive form game trees, quantified boolean formulas, or even Turing machines), this can affect the complexity of the SPI problem. For example, Gabarró, García, and Serna [13] show that the game isomorphism problem on normal-form games is equivalent to the graph isomorphism problem, while it is equivalent to the (likely computationally harder) boolean circuit isomorphism problem for a weighted boolean formula game representation. Solving the SPI problem requires solving a subset game isomorphism problem (see the proof of Lemma 28 in Appendix D for more detail). We therefore suspect that the SPI problem analogously increases in computational complexity (perhaps to being Σ p 2 -complete) if we treat games in a weighted boolean formula representation. In fact, even reducing a game using strict dominance by pure strategies -which contributes only insignificantly to the complexity of the SPI problem for normal-form games -is difficult in some game representations [7, Section 6] . Note, however, that for any game representation to which 2-player normal-form games can be efficiently reduced -such as, for example, extensive-form games -the hardness result also applies. 9 Safe Pareto improvements under improved coordination \n Setup In this section, we imagine that the players are able to simply invent new token strategies with new payoffs that arise from mixing existing feasible payoffs. 
To define this formally, we first define for any game Γ = (A, u)

C(Γ) := u(∆(A)) = { Σ_{a∈A} p_a u(a) | Σ_{a∈A} p_a = 1 and ∀a ∈ A : p_a ∈ [0, 1] }

to be the set of feasible coordinated payoff vectors of Γ, which is exactly the convex closure of u(A), i.e., of the set of deterministically achievable utilities of the original game. For any game Γ, we then imagine that in addition to subset games, the players can let the representatives play a perfect-coordination token game (A^s, u^s, u^e), where for all i, A^s_i ∩ A_i = ∅, u^s_i : A^s → R are arbitrary utility functions to be used by the representatives, and u^e : A^s → C(Γ) are the utilities that the original players assign to the token strategies. The instruction (A^s, u^s, u^e) lets the representatives play the game (A^s, u^s) as usual. However, the strategies A^s are imagined to be meaningless token strategies which do not resolve the given game Γ. Once some token strategies a^s are selected, these are translated into some probability distribution over A, i.e., over outcomes of the original game, thus giving rise to (expected) utilities u^e(a^s) ∈ C(Γ). These distributions and thus utilities are specified by the original players. We here imagine in our definition of C(Γ) that these distributions over A could require the representatives to correlate their choices for the original game for any given a^s.

Definition 6. Let Γ be a game. A perfect-coordination SPI for Γ is a perfect-coordination token game (A^s, u^s, u^e) for Γ s.t. u^e(Π(A^s, u^s)) ≥ u(Π(Γ)) with certainty. We call (A^s, u^s, u^e) a strict perfect-coordination SPI if there furthermore is a player i for whom u^e_i(Π(A^s, u^s)) > u_i(Π(Γ)) with positive probability.

As an example, imagine that Γ is just the DM-RM subset game of the Demand Game of Table 1. Then, intuitively, an SPI under improved coordination could consist of the original players telling the representatives, "Play as if you were playing the DM-RM subset game of the Demand Game, but whenever you find yourself playing (DM, DM), randomize [according to some given distribution] between the other (Pareto-optimal) outcomes instead". Formally, A^s_1 = {D̃, R̃} and A^s_2 = {D̃, R̃} would then consist of tokenized versions of the original strategies. The utility functions u^s_1 and u^s_2 are then simply the same as in the original Demand Game, except that they are applied to the token strategies. For example, u^s(D̃, R̃) = (2, 0). The utilities for the original players remove the conflict outcome. For example, the original players might specify u^e(D̃, D̃) = (1, 1), representing that the representatives are supposed to play (RM, RM) in the (D̃, D̃) case. For all other outcomes (â_1, â_2), it must be the case that u^e(â_1, â_2) = u^s(â_1, â_2), because the other outcomes cannot be Pareto-improved upon. As with our earlier SPIs for the Demand Game, Assumption 2 implies that Γ ∼_Φ Γ^s, where Φ maps the original conflict outcome (DM, DM) onto the Pareto-optimal (D̃, D̃).

Relative to the SPIs considered up until now, these new types of instructions put significant additional requirements on how the representatives interact. They now have to engage in a two-round process of first choosing and observing one another's token strategies and then playing the corresponding distribution over outcomes from the original game. Further, it must be the case that this additional coordination does not affect the payoffs of the original outcomes. The latter may not be the case in, e.g., the Game of Chicken.
That is, we could imagine a Game of Chicken in which coordination is possible, but in which the rewards of the game change if the players do coordinate. After all, the underlying story in the Game of Chicken is that the positive reward (admiration from peers) is attained precisely for accepting a grave risk.

Finding safe Pareto improvements under improved representative coordination

With these more powerful ways to instruct representatives, we can now replace individual outcomes of the default game ad libitum. For example, in the reduced Demand Game, we singled out the outcome (DM, DM) as Pareto-suboptimal and replaced it by a Pareto-optimal outcome, while keeping all other outcomes the same. This allows us to construct SPIs in many more games than before.

Definition 7. The strict full-coordination SPI decision problem consists in deciding for any given Γ whether under Assumption 2 there is a strict perfect-coordination SPI Γ^s for Γ.

Lemma 11. For a given n-player game Γ and payoff vector y ∈ R^n, it can be decided by linear programming and thus in polynomial time whether y is Pareto-optimal in C(Γ).

For an introduction to linear programming, see, e.g., Schrijver [36]. In short, a linear program is a specific type of constrained optimization problem that can be solved efficiently.

Proof. Finding a Pareto improvement on a given y ∈ R^n can be formulated as the following linear program:

Variables: p_a ∈ [0, 1] for all a ∈ A
Maximize Σ_{i=1}^n ( Σ_{a∈A} p_a u_i(a) − y_i )
subject to Σ_{a∈A} p_a = 1
and Σ_{a∈A} p_a u_i(a) ≥ y_i for i = 1, ..., n.

Given y ∈ C(Γ), y is Pareto-optimal in C(Γ) if and only if the optimal value of this program is 0.

Based on Lemma 11, Algorithm 1 decides whether there is a strict perfect-coordination SPI for a given game Γ. It is easy to see that this algorithm runs in polynomial time (in the size of, e.g., the normal-form representation of the game). It is also correct: if it returns True, simply replace the Pareto-suboptimal outcome while keeping all other outcomes the same; if it returns False, then all outcomes in supp(Π(Γ)) are Pareto-optimal within C(Γ) and so there can be no strict SPI. We summarize this result in the following proposition.

Proposition 12. Assuming supp(Π(Γ)) is known and that Assumption 2 holds, it can be decided in polynomial time whether there is a strict perfect-coordination SPI.

We start with a lemma that directly provides a characterization. So far, all the considered perfect-coordination SPIs (A^s, u^s, u^e) for a game (A, u) have consisted in letting the representatives play a game (A^s, u^s) that is isomorphic to the original game, but Pareto-improves (from the original players' perspectives, i.e., u^e) at least one of the outcomes. It turns out that we can restrict attention to this very simple type of SPI under improved coordination.

Lemma 13. Let Γ = ({a_1^1, ..., a_1^{l_1}}, ..., {a_n^1, ..., a_n^{l_n}}, u) be any game. Let Γ′ be a perfect-coordination SPI on Γ. Then we can define u^e with values in C(Γ) such that under Assumption 2 the game

Γ^s = ( Â_1 := {â_1^1, ..., â_1^{l_1}}, ..., Â_n := {â_n^1, ..., â_n^{l_n}}, û : (â_1^{i_1}, ..., â_n^{i_n}) ↦ u(a_1^{i_1}, ..., a_n^{i_n}), u^e )

is also an SPI on Γ, with E[u(Π(Γ^s)) | Π(Γ) = a] = E[u(Π(Γ′)) | Π(Γ) = a] for all a ∈ A and consequently E[u(Π(Γ^s))] = E[u(Π(Γ′))].

Proof. First note that (Â, û) is isomorphic to Γ. Thus by Assumption 2, there is an isomorphism Φ s.t. Γ ∼_Φ (Â, û). WLOG assume that Φ simply maps (a_1^{i_1}, ..., a_n^{i_n}) ↦ (â_1^{i_1}, ..., â_n^{i_n}). Then define u^e as follows:

u^e(â_1^{i_1}, ..., â_n^{i_n}) = E[ u′(Π(Γ′)) | Π(Γ) = (a_1^{i_1}, ..., a_n^{i_n}) ].
Here, u′ denotes the utilities that the original players assign to the outcomes of Γ′. Since u′ maps into C(Γ) and C(Γ) is convex, u^e as defined also maps into C(Γ), as required. Note that for all (a_1^{i_1}, ..., a_n^{i_n}), it is by assumption u′(Π(Γ′)) ≥ u(a_1^{i_1}, ..., a_n^{i_n}) with certainty whenever Π(Γ) = (a_1^{i_1}, ..., a_n^{i_n}). Hence,

u^e(â_1^{i_1}, ..., â_n^{i_n}) = E[ u′(Π(Γ′)) | Π(Γ) = (a_1^{i_1}, ..., a_n^{i_n}) ] ≥ u(a_1^{i_1}, ..., a_n^{i_n}),

as required.

Because of this result, we will focus on these particular types of SPIs, which simply create an isomorphic game with different (Pareto-better) utilities. Note, however, that without assigning exact probabilities to the distributions of Π(Γ), Π(Γ′), the original players will in general not be able to construct a Γ^s that satisfies the expected payoff equalities. For this reason, one could still conceive of situations in which a different type of SPI would be chosen by the original players and the original players are unable to instead choose an SPI of the type described in Lemma 13. Lemma 13 directly implies a characterization of the expected utilities that can be achieved with perfect-coordination SPIs. Of course, this characterization depends on the exact distribution of Π(Γ). We omit the statement of this result. However, we state the following implication.

Corollary 14. Under Assumption 2, the set of Pareto improvements that are safely achievable with perfect coordination, {E[u(Π(Γ′))] | Γ′ is a perfect-coordination SPI on Γ}, is a convex polygon.

Because of this result, one can also efficiently optimize convex functions over the set of perfect-coordination SPIs. Even without referring to the distribution of Π(Γ), many interesting questions can be answered efficiently. For example, we can efficiently identify the perfect-coordination SPI that maximizes the minimum improvement across players and outcomes a ∈ A. In the following, we aim to use Lemma 13 and Corollary 14 to give maximally strong positive results about what Pareto improvements can be safely achieved, without referring to exact probabilities over Π(Γ). To keep things simple, we will do this only for the case of two players. To state our results, we first need some notation: We use

PF(C) := { y ∈ C | there is no y′ ∈ C and i ∈ {1, ..., n} such that y′ ≥ y and y′_i > y_i }

to denote the Pareto frontier of a convex polygon C (or, more generally, of a convex, closed set). For any real number x ∈ R, we use π_i(x, C(Γ)) to denote the y ∈ C(Γ) which maximizes y_{−i} under the constraint y_i = x. (Recall that we consider 2-player games, so y_{−i} is a single real number.) Note that such a y exists if and only if x is i's utility in some feasible payoff vector. Further, let x_i^min and x_i^max denote the smallest and largest utility that player i receives across the outcomes in supp(Π(Γ)), let L_1 be the line segment between π_1(x_1^min, C(Γ)) and π_1(x_1^max, C(Γ)), let L_3 be the line segment between π_2(x_2^min, C(Γ)) and π_2(x_2^max, C(Γ)), and let L_2 be the part of the Pareto frontier PF(C(Γ)) that Pareto-dominates all elements of supp(Π(Γ)). We first state our result formally. Afterwards, we will give a graphical explanation of the result, which we believe is easier to understand.

Theorem 15. Let Γ be a two-player game and let y ∈ C(Γ) be a Pareto improvement on E[u(Π(Γ))]. A) If there is an element in C(Γ) which Pareto-dominates all of supp(Π(Γ)) and y lies in L_1, L_2, or L_3, then there is an SPI under improved coordination Γ^s such that E[u(Π(Γ^s))] = y. B) If there is no element in C(Γ) which Pareto-dominates all of supp(Π(Γ)) and if y is Pareto-dominated by an element each of L_1 and L_3 as defined above, then there is a perfect-coordination SPI Γ^s such that E[u(Π(Γ^s))] = y.

We now illustrate the result graphically. We start with Case A, which is illustrated in Figure 2. The Pareto frontier is the solid line in the north and east. The points marked x indicate outcomes in supp(Π(Γ)). The point marked by a filled circle indicates the expected value of the default equilibrium E[u(Π(Γ))]. For some y ∈ R^2 to be a Pareto improvement, it must lie to the north-east of the filled circle.
The vertical dashed lines starting at the two extreme x marks illustrate the application of π_1 to project x_1^min and x_1^max onto the Pareto frontier. The dotted line between these two points is L_1. Similarly, the horizontal dashed lines starting at x marks illustrate the application of π_2 to project x_2^min and x_2^max onto the Pareto frontier. The line segment between these two points is L_3. In this case, this line segment lies on the Pareto frontier. The set L_2 is simply that part of the Pareto frontier which Pareto-dominates all elements of supp(Π(Γ)), i.e., the part of the Pareto frontier to the north-east, between the two intersections with the northern horizontal dashed line and the eastern vertical dashed line.

Case B of Theorem 15 is depicted in Figure 3. Note that here the two line segments L_1 and L_3 intersect. To ensure that a Pareto improvement is safely achievable, the theorem requires that it is below both of these lines. For a full proof, see Appendix E. Roughly, Theorem 15 is proven by re-mapping each of the outcomes of the original game as per Lemma 13. For example, the projection of the default equilibrium E[u(Π(Γ))] (i.e., the filled circle) onto L_1 is obtained as an SPI by projecting all the outcomes (i.e., all the x marks) onto L_1. In Case A, any utility vector y ∈ L_2 that Pareto-improves on all outcomes of the original game can be obtained by re-mapping all outcomes onto y. Other kinds of y are handled similarly.

As a corollary of Theorem 15, we can see that all (potentially unsafe) Pareto improvements in the DM-RM subset game of the Demand Game of Table 1 are equivalent to some perfect-coordination SPI. However, this is not always the case (Table 7 gives an example of a game in which, depending on Π, a Pareto improvement may not be safely achievable):

Proposition 16. There is a game Γ = (A, u), representatives Π that satisfy Assumptions 1 and 2, and an outcome a ∈ A s.t. u_i(a) > E[u_i(Π(Γ))] for all players i, but there is no perfect-coordination SPI (A^s, u^s, u^e) s.t. for all players i, E[u^e_i(Π(A^s, u^s))] = u_i(a).

As an example of such a game, consider the game in Table 7. Strategy c can be eliminated by strict dominance (Assumption 1) for both players, leaving a typical Chicken-like payoff structure with two pure Nash equilibria ((a, b) and (b, a)), as well as a mixed Nash equilibrium (3/8 · a + 5/8 · b, 3/8 · a + 5/8 · b). Now let us say that in the resulting game P(Π(Γ) = (a, b)) = p = P(Π(Γ) = (b, a)) for some p with 0 < p ≤ 1/2. Then one (unsafe) Pareto improvement would be to simply always have the representatives play (c, c) for a certain payoff of (3, 3). Unfortunately, there is no safe Pareto improvement with the same expected payoff. Notice that (3, 3) is the unique element of C(Γ) that maximizes the sum of the two players' utilities. By linearity of expectation and convexity of C(Γ), if for any Γ^s it is E[u(Π(Γ^s))] = (3, 3), it must be u(Π(Γ^s)) = (3, 3) with certainty. Unfortunately, in any safe Pareto improvement the outcomes (a, b) and (b, a) must correspond to outcomes that still give utilities of (4, 0) and (0, 4), respectively, because these are Pareto-optimal within the set of feasible payoff vectors.

The SPI selection problem

In the Demand Game, there happens to be a single non-trivial SPI.
However, in general (even without the type of coordination assumed in Section 9) there may be multiple SPIs that result in different payoffs for the players. For example, imagine an extension of the Demand Game imagine that both players have an additional action DL , which is like DL, except that under (DL , DL ), Aliceland can peacefully annex the desert. Aliceland prefers this SPI over the original one, while Bobbesia has the opposite preference. In other cases, it may be unclear to some or all of the players which of two SPIs they prefer. For example, imagine a version of the Demand Game in which one SPI mostly improves on (DM, DM) and another mostly improves on the other three outcomes, then outcome probabilities are required for comparing the two. If multiple SPIs are available, the original players would be left with the difficult decision of which SPI to demand in their instruction. 9 This difficulty of choosing what SPI to demand cannot be denied. However, we would here like to emphasize that players can profit from the use of SPIs even without addressing this SPI selection problem. To do so, a player picks an instruction that is very compliant (\"dove-ish\") w.r.t. what SPI is chosen, e.g., one that simply goes with whatever SPI the other players demand as long as that SPI cannot further be safely Pareto-improved upon. 10 In many cases, all such SPIs benefit all players. For example, optimal SPIs in bargaining scenarios like the Demand Game remove the conflict outcome, which benefits all parties. Thus, a player can expect a safe improvement even under such maximally compliant demands on the selected SPI. In some cases there may also be natural choices of demands (a là Schelling [35, pp. 54-58] or focal points). If the underlying game is symmetric, a symmetric safe Pareto improvement may be a natural choice. For example, the fully reduced version of the Demand Game of Table 1 is symmetric. Hence, we might expect that even if multiple SPIs were available, the original players would choose a symmetric one. \n Conclusion and future directions Safe Pareto improvements are a promising new idea for delegating strategic decision making. To conclude this paper, we discuss some ideas for further research on SPIs. Straightforward technical questions arise in the context of the complexity results of Section 8. First, what impact on the complexity does varying the assumptions have? Our NP-completeness proof is easy to generalize at least to some other types of assumptions. It would be interesting to give a generic version of the result. We also wonder whether there are plausible assumptions under which the complexity changes in interesting ways. Second, one could ask how the complexity changes if we use more sophisticated game representations (see the remarks at the end of that section). Third, one could impose additional restrictions on the sought SPI. For example, some of the players may be unable to have their representative maximize arbitrary utility functions. We could then ask whether there is an SPI in which only a given subset of the players adopt different utility functions and restrictions on the set of available strategies. Fourth, we could restrict the games under consideration. Are there games in which it becomes easy to decide whether there is an SPI? It would also be interesting to see what real-world situations can already be interpreted as utilizing SPIs, or could be Pareto-improved upon using SPIs. 
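As a complement to the complexity questions raised above, the following Python sketch spells out, for very small two-player games, the brute-force search that underlies the SPI decision problem of Section 8: embed the fully reduced game back into the original game so that every outcome is mapped to a weakly Pareto-better outcome, with at least one strict improvement. It is our own simplified illustration; the payoffs are assumed textbook Prisoner's Dilemma values, the reduced action sets are supplied by hand rather than computed, and the strictness check stands in for the exact non-triviality condition of Definition 5.

from itertools import permutations

def find_spi_embedding(A1_red, A2_red, A1, A2, u1, u2):
    """Brute-force search for injections Phi_i: A_i^red -> A_i that map every
    outcome of the reduced game to a weakly Pareto-better outcome of the
    original game, with at least one strict improvement. A simplified stand-in
    for the decision problem of Definition 5."""
    for img1 in permutations(A1, len(A1_red)):
        for img2 in permutations(A2, len(A2_red)):
            phi1, phi2 = dict(zip(A1_red, img1)), dict(zip(A2_red, img2))
            pairs = [((a1, a2), (phi1[a1], phi2[a2]))
                     for a1 in A1_red for a2 in A2_red]
            weak = all(u1[b] >= u1[a] and u2[b] >= u2[a] for a, b in pairs)
            strict = any(u1[b] > u1[a] or u2[b] > u2[a] for a, b in pairs)
            if weak and strict:
                return phi1, phi2
    return None

# Prisoner's Dilemma with assumed textbook payoffs (not necessarily Table 3);
# its fully reduced game, entered by hand here, is ({Defect}, {Defect}).
A = ["Cooperate", "Defect"]
u1 = {("Cooperate", "Cooperate"): 3, ("Cooperate", "Defect"): 0,
      ("Defect", "Cooperate"): 5, ("Defect", "Defect"): 1}
u2 = {(r, c): u1[(c, r)] for r, c in u1}
print(find_spi_embedding(["Defect"], ["Defect"], A, A, u1, u2))
# -> ({'Defect': 'Cooperate'}, {'Defect': 'Cooperate'}), the SPI of Proposition 5

The search enumerates injections and is therefore exponential in the size of the reduced game; Section 8 indicates that this is hard to avoid in general, the decision problem being NP-hard.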
\n A Proof of Theorem 1 -program equilibrium implementations of safe Pareto improvements This paper considers the meta-game of delegation. SPIs are a proposed way of playing these games. However, throughout most of this paper, we do not analyze the meta-game directly as a game using the typical tools of game theory. We here fill that gap and in particular prove Theorem 1, which shows that SPIs are played in Nash equilibria of the meta game, assuming sufficiently strong contracting abilities. As noted, this result is essential. However, since it is mostly an application of existing ideas from the literature on program equilibrium, we left a detailed treatment out of the main text. A program game for Γ = (A, u) is defined via a set PROG = PROG 1 × ...×PROG n and a non-deterministic mapping exec : PROG 1 ×...×PROG n A. We obtain a new game with action sets PROG and utility function U : PROG → R n : c → E [u(exec(c))] . Though this definition is generic, one generally imagines in the program equilibrium literature that for all i, PROG i consists of computer programs in some programming language, such as Lisp, that take as input vectors in PROG and return an action a i . The function exec on input c ∈ PROG then executes each player i's program c i on c to assign i an action. The definition implicitly assumes that PROG only contains programs that halt when fed one another as input. A program equilibrium is then simply a Nash equilibrium of the program game. For the present paper, we add the following feature to the underlying programming language. A program can call a \"black box subroutine\" Π i (Γ ) for any subset game Γ of Γ, where Π i (Γ ) is a random variable over A i and Π(Γ ) = (Π 1 (Γ ), ..., Π n (Γ )). We need one more definition. For any game Γ and player i, we define Player i's threat point (a.k.a. minimax utility) v Γ i as v Γ i = min σ −i ∈× j =i ∆(A j ) max σ i ∈∆(A i ) u i (σ i , σ −i ). In words, v Γ i is the minimum utility that the players other than i can force onto i, under the assumption that i reacts optimally to their strategy. We further will use minimax (i, j) ∈ ∆(A j ) to denote the strategy for Player j that is played in the minimizer σ −i of the above. Of course, in general, there might be multiple minimizers σ −i . In the following, we will assume that the function minimax breaks such ties in some consistent way, such that for all i, (minimax (i, j)) j∈{1,...,n}−{i} ∈ arg min σ −i ∈× j =i ∆(A j ) max σ i ∈∆(A i ) u i (σ i , σ −i ). Note that for n = 2, each player's threat point is computable in polynomial time via linear programming; and that by the minimax theorem [25] , the threat point is equal to the maximin utility, i.e., v Γ i = max σ i ∈∆(A i ) min σ −i ∈∆(A −i ) u i (σ i , σ −i ), so v Γ i is also the minimum utility that Player i can guarantee for herself under the assumption that the opponent sees her mixed strategy and reacts in order to minimize Player i's utility. Tennenholtz' [39] main result on program games is the following: Theorem 17 (Tennenholtz 2004 [39] ). Let Γ = (A, u) be a game and let x ∈ u × n i=1 ∆(A i ) be a (feasible) payoff vector. If x i ≥ v Γ i for i = 1, ..., n, then x is the utility of some program equilibrium of a program game on Γ. Throughout the rest of this section, our goal is to use similar ideas as Tennenholtz did for Theorem 17 to construct for any SPI Γ s on Γ, a program equilibrium that results in the play of Π(Γ s ). 
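As a concrete companion to the threat-point definition above, the following Python sketch computes v_i^Γ for a two-player normal-form game with a linear program, as the text notes is possible in polynomial time. The payoff matrices are made-up placeholders, and scipy.optimize.linprog is simply one convenient solver choice, not something prescribed by the paper.

import numpy as np
from scipy.optimize import linprog

def threat_point(U1, U2):
    """Threat points (v_1, v_2) of a two-player game given as payoff matrices
    U_i[a_1, a_2]. For player i, the opponent chooses a mixed strategy sigma
    minimizing a bound v such that every pure reply of player i yields at most v."""
    def v_for(U, opp_axis):
        M = U if opp_axis == 1 else U.T      # rows: i's actions, cols: opponent's
        n_opp = M.shape[1]
        # Variables: opponent's mixed strategy (n_opp entries) and the bound v.
        c = np.zeros(n_opp + 1)
        c[-1] = 1.0                                       # minimize v
        A_ub = np.hstack([M, -np.ones((M.shape[0], 1))])  # M @ sigma - v <= 0
        b_ub = np.zeros(M.shape[0])
        A_eq = np.zeros((1, n_opp + 1))
        A_eq[0, :n_opp] = 1.0                             # sigma sums to 1
        b_eq = np.array([1.0])
        bounds = [(0, 1)] * n_opp + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.fun
    return v_for(U1, 1), v_for(U2, 0)

# A made-up 2x2 example (rows: player 1's actions, columns: player 2's actions).
U1 = np.array([[0.0, 2.0], [1.0, 1.0]])
U2 = np.array([[0.0, 1.0], [2.0, 1.0]])
print(threat_point(U1, U2))  # approximately (1.0, 1.0) for these placeholders

For the placeholder matrices above, both threat points come out as 1.0 (up to solver tolerance); in a program equilibrium such as the one constructed below, these are the utilities used to punish a deviating player.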
As noted in the main text, the Player i's instruction to her representative to play the game Γ s will usually be conditional on the other player telling her representative to also play her part of Γ s and and vice versa. After all, if Player i simply tells her representative to maximize u s i from A s i regardless of Player −i's instruction, then Player −i will often be able to profit from deviating from the Γ s instruction. For example, in the safe Pareto improvement on the Demand Game, each player would only want their representative to choose from {DL, RL} rather than {DM, DM} if the other player's representative does the same. It would then seem that in a program equilibrium in which Π(Γ s ) is played, each program c i would have to contain a condition of the type, \"if the opponent code plays as in Π(Γ s ) against me, I also play as I would in Π(Γ s ).\" But in a naive implementation of this, each of the programs would have to call the other, leading to an infinite recursion. In the literature on program equilibrium, various solutions to this problem have been discovered. We here use the general scheme proposed by Tennenholtz [39] , because it is the simplest. We could similarly use the variant proposed by Fortnow [12] , techniques based on Löb's theorem [3, 10] , or -grounded mutual simulation [26] or even (meta) Assurance Game preferences (see Appendix B). In our equilibrium, we let each player submit code as sketched in Algorithm 2. Roughly, each player uses a program that says, \"if everyone else submitted the same source code as this one, then play Π(Γ s ). Otherwise, if there is a player j who submits a different source code, punish player j by playing her minimax strategy\". Note that for convenience, Algorithm 2 receives the player number i as input. This way, every player can use the exact same source code. Otherwise the original players would have to provide slightly different programs and in line 2 of the algorithm, we would have to use a more complicated comparison, roughly: \"if c j = c i are the same, except for the player index used\". Proof. By inspection of Algorithm 2, we see that exec(c) = Π(Γ s ). It is left to show that c is a Nash equilibrium. So let i be any player and c i ∈ PROG i − {c i }. We need to show that E [u i (exec(c −i , c i ))] ≤ E [u i (exec(c))]. Again, by inspection of c, exec(c −i , c i ) is the threat point of Player i. Hence, E u i (exec(c −i , c i )) = v i ≤ E [u i (Π(Γ))] ≤ E [u i (Π(Γ s ))] = E [u i (exec(c))] as required. Theorem 1 follows immediately. B A discussion of work by Sen (1974) and Raub (1990) on preference adaptation games We here discuss Raub's [32] paper in some detail, which in turn elaborates on an idea by Sen [37] . Superficially, Raub's setting seems somewhat similar to ours, but we here argue that it should be thought of as closer to the work on program equilibrium and bilateral precommitment. In Sections 1, 3 and 4, we briefly discuss multilateral commitment games, which have been discussed before in various forms in the game-theoretic literature. Our paper extends this setting by allowing instructions that let the representatives play a game without specifying an algorithm for solving that game. On first sight, it appears that Raub pursues a very similar idea. Translated to our setting, Raub allows that as an instruction, each player i chooses a new utility function u s i : A → R, where A is the set of outcomes of the original game Γ. 
Given instructions u s 1 , ..., u s n , the representatives then play the game (A, u s ). In particular, each representative can see what utility functions all the other representatives have been instructed to maximize. However, what utility function representative i maximizes is not conditional on any of the instructions by other players. In other words, the instructions in Raub's paper are raw utility functions without any surrounding control structures, etc. Raub then asks for equilibria u s of the meta-game that Pareto-improve on the default outcome. To better understand how Raub's approach relates to ours, we here give an example of the kind of instructions Raub has in mind. (Raub uses the same example in his paper.) As the underlying game Γ, we take the Prisoner's Dilemma. Now the main idea of his paper is that the original players can instruct their representatives to adopt so-called Assurance Game preferences. In the Prisoner's Dilemma, this means that the representatives prefer to cooperate if the other representative cooperates, and prefer to defect if the other player defects. Further, they prefer mutual cooperation over mutual defection. An example of such Assurance Game preferences is given in Table 8 . (Note that this payoff matrix resembles the classic Stag Hunt studied in game theory.) The Assurance Game preferences have two important properties. The first important difference between Raub's approach and ours is related to item 2. We have ignored the issue of making SPIs Γ s Nash equilibria of our meta game. As we have explained in Section 4 and Appendix A, we imagine that this is taken care of by additional bilateral commitment mechanisms that are not the focus of this paper. For Raub's paper, on the other hand, ensuring mutual cooperation to be stable in the new game Γ s is arguably the key idea. Still, we could pursue the approach of the present paper even when we limit assumptions to those that consist only of a utility function. The second difference is even more important. Raub assumes that -as in the PD -the default outcome of the game (Π(Γ) in the formalism of this paper) is known. (Less significantly, he also assumes that it is known how the representatives play under assurance game preferences.) Of course, the key feature of the setting of this paper is that the underlying game Γ might be difficult (through equilibrium selection problems) and thus that the original players might be unable to predict Π(Γ). These are the reasons why we cite Raub in our section on bilateral commitment mechanisms. Arguably, Raub's paper could be seen as very early work on program equilibrium, except that he uses utility functions as a programming language for representative. In this sense, Raub's Assurance Game preferences are analogous to the program equilibrium schemes of Tennenholtz [39] , Oesterheld [39] , Barasz et al. [3] and van der Hoek, Witteveen, and Wooldridge [41] , ordered in increasing order of similarity of the main idea of the scheme. \n C Proof of Lemma 4 Lemma 4. Let Φ and Ψ be isomorphisms between Γ and Γ . If Φ is (strictly) Pareto-improving, then so is Ψ. Proof. First, we argue that if Φ and Ψ are isomorphisms, then they are isomorphisms relative to the same constants λ and c. For each player i, we distinguish two cases. First the case where all outcomes a in Γ have the same utility for Player i is trivial. Now imagine that the outcomes of Γ do not all have the same utility. Then let y min and y max be the lowest and highest utilities, respectively, in Γ. 
Further, let x min and x max be the lowest and highest utilities, respectively, in Γ . It is easy to see that if Ψ is a game isomorphism, it maps outcomes with utility y min in Γ onto outcomes with utility x min in Γ , and outcomes with utility y max in Γ onto outcomes with utility x max in Γ . Thus, if λ Ψ,i and c Ψ,i are to be the constants for Ψ, then y min = λ Ψ,i x min + c Ψ,i y max = λ Ψ,i x max + c Ψ,i . Since x min = x max , this system of linear equations has a unique solution. By the same pair of equations, the constants for Φ are uniquely determined. It follows that for all a ∈ A, u(a) = λu (Ψ(a)) + c = u(Φ −1 (Ψ(a))) ≤ u(Φ(Φ −1 (Ψ(a)))) = u(Ψ(a)). Furthermore, if Φ is strictly Pareto-improving for some ã ∈ A, then by bijectivity of Φ, Ψ, there is a ∈ A s.t. Φ −1 (Ψ(a)) = ã. For this a, the inequality above is strict and therefore u(a) < u(Ψ(a)). \n D Proof of Theorem 9 We here prove Theorem 9. We assume familiarity with basic ideas in computational complexity theory (non-deterministic polynomial time (NP), re-ductions, NP-completeness, etc.). \n D.1 On the structure of relevant outcome correspondence sequences Throughout our proof we will use a result about the structure of relevant outcome correspondences. Before proving this result, we give two lemmas. The first is a well-known lemma about elimination by strict dominance. Lemma 19 (path independence of iterated strict dominance). Let Γ be a game in which some strategy a i of player i is strictly dominated. Let Γ be a game we obtain from Γ by removing a strictly dominated strategy (of any player) other than a i . Then a i is strictly dominated in Γ . Note that this lemma does not by itself prove that iterated strict dominance is path dependence. However, path independence follows from the property shown by this lemma. Proof. Let a i be the strategy that strictly dominates a i . We distinguish two cases: Case 1: The strategy removed is a i . Then there must be âi that strictly dominates a i . Then it is for all a −i u i (â i , a −i ) > u i (a i , a −i ) > u i (a i , a −i ). Both inequalities are due to the definition of strict dominance. We conclude that âi must strictly dominate a i . Case 2: The strategy removed is one other than a i or a i . Since the set of strategies of the new game is a subset of the strategies of the old game it is still for each strategy a −i in the new game u i (a i , a −i ) > u i (a i , a −i ), i.e., a i still strictly dominates a i . The next lemma shows that instead of first applying Assumption 1 plus symmetry (Lemma 2.2) to add a strictly dominated action and then applying Assumption 1 to eliminate a different strictly dominated strategy, we could also first eliminate the strictly dominated strategy and then add the other strictly dominated strategy. A conciser way to state the consequence is that there must be games Γ red , Γ s,red and Γ s such that Γ red is obtained from Γ by iterated elimination of strictly dominated strategies, Γ s,red is isomorphic to Γ s,red , and Γ s,red is obtained from Γ s by iterated elimination of strictly dominated strategies. Proof. First divide the given sequence of outcome correspondences up into periods that are maximally long while containing only correspondences by Assumption 1 (with or without Lemma 2.2). That is, consider subsequences of the form Γ q ∼ Φ q ... ∼ Φ r−1 Γ r such that: • Each of the correspondences Γ q ∼ Φ q Γ q+1 , ..., Γ r−1 ∼ Φ r−1 Γ r is by applying Assumption 1 with or without Lemma 2.2. 
• Either q = 1 or the correspondence Γ q−1 ∼ Φ q−1 Γ q is by Assumption 2. • Either r = k or the correspondence Γ r ∼ Φ r Γ r+1 is by Assumption 2. In each such period apply Lemma 20 iteratively to either eliminate or move to the right all inverted reduction elimination steps. In all but the first period, Γ q contains no strictly dominated actions (by stipulation of Assumption 2). Hence all but the first period cannot contain any non-reversed elimination steps. Similarly, in all but the final period, Γ r contains no strictly dominated actions. Hence, in all but the final period, there can be no reversed applications of Assumption 1. Overall, our new sequence of outcome correspondences thus has the following structure: first there is a sequence of elimination steps via Assumption 1, then there is a sequence of isomorphism steps, and finally there is a sequence of reverse elimination steps. We can summarize all the applications of Assumption 2 into a single step applying that assumption to obtain the claimed structure. Now notice that that the reverse elimination steps are only relevant for deriving unilateral SPIs. Using the above concise formulation of the lemma, we can always simply use Γ s,red itself as an omnilateral SPI -it is not relevant that there is some subset game Γ s that reduces to Γ s,red . Lemma 22. As in Lemma 21, let Γ 1 ∼ Φ 1 ... ∼ Φ k−1 Γ k , where each outcome correspondence is due to a single application of Assumption Assumption 1, Assumption 1 plus symmetry (Lemma 2.2) or Assumption 2. Let Γ 2 , ..., Γ k all be subset games of Γ 1 . Moreover, let Φ k−1 • ... • Φ 1 be Pareto improving. Then there is a sequence of subset games Γ 2 , ..., Γ m , Γ m+1 such that Γ We now show that the SPI problem is in NP at all. The following algorithm can be used to determine whether there is a safe Pareto improvement: Reduce the given game Γ until it can be reduced no further to obtain some subset game Γ = (A , u). Then non-deterministically select injections Φ 1 ∼ Ψ 1 Γ 2 ∼ Ψ 2 ... ∼ Ψ m−1 Γ m all i : A i → A i . If Φ = (Φ 1 , ..., Φ n ) is ( strictly) Pareto-improving (as required in Theorem 3), return True with the solution Γ s defined as follows: The set of action profiles is defined as A s = × i Φ i (A i ). The utility functions are u s i : A s → R : a s → (u i (Φ −1 1 (a s 1 ), ..., Φ −1 n (a s n ))) i=1,...,n . Otherwise, return False. Let us say there is a sequence of outcome correspondences as per Assumptions 1 and 2 that show Γ ∼ Φ Γ s for Pareto-improving Φ. Then by Lemma 22, there is Γ such that Γ ∼ Ψ red Γ via applying Assumption 1 iteratively to obtain a fully reduced Γ and Γ ∼ Ψ iso Γ s via a single application of Assumption 2. By construction, our algorithm finds (guesses) this Pareto-improving outcome correspondence. Overall, we have now shown that our non-deterministic polynomial-time algorithm is correct and therefore that the SPI problem is in NP. Note that the correctness of other algorithms can be proven using very similar ideas. For example, instead of first reducing and then finding an isomorphism, one could first find an isomorphism, then reduce and then (only after reducing) test whether the overall outcome correspondence function is Pareto-improving. One advantage of reducing first is that there are fewer isomorphisms to test if the game is smaller. In particular, the number of possible isomorphisms is exponential in the number of strategies in the reduced game Γ but polynomial in everything else. 
Hence, by implementing our algorithm deterministically, we obtain the following positive result. \n D.2.2 The unilateral SPI problem Next we show that the problem of finding unilateral SPIs is also in NP. Here we need a slightly more complicated algorithm: We are given an n-player game Γ and a player i. First reduce the game Γ fully to obtain some subset game Γ red . Then non-deterministically select injections Φ i : A red i → A i . The resulting candidate SPI game then is Γ s = ((A −i , Φ i (A red i )), (u −i , u s i )), where u s i (a s ) = u i (Φ −1 1 (a s 1 ), ..., Φ −1 n (a s n )) for all a s ∈ Φ(A red ), and u s i (a s ) is arbitrary for a s / ∈ Φ(A red ). Return True if the following conditions are satisfied: 1. The correspondence function Φ must be (strictly) Pareto improving (as per the utility functions u). 2. For each j ∈ {1, ..., n} − {i}, there are λ j ∈ R + and c j ∈ R such that for all a ∈ A red , we have u j (a) = λ j u j (Φ(a)) + c j . 3. The game Γ s reduces to the game (Φ(A red ), (u −i , u s i )). Otherwise, return False. Proposition 25. The above algorithm runs in non-deterministic polynomial time and returns True if and only if there is a (strict) unilateral SPI. Proof. First we argue that the algorithm can indeed be implemented in non-deterministic polynomial time. For this notice that for checking Item 2, the constants can be found by solving n systems of linear equations of two variables. It is left to prove correctness, i.e., that the algorithm returns True if and only if there exists an SPI. We start by showing that if the algorithm returns True, then there is an SPI. Specifically, we show that if the algorithm returns True, the game Γ s is indeed an SPI game. Notice that Γ ∼ Ψ Γ red for some Ψ by iterative application of Assumption 1 with Transitivity (Lemma 2.2); that Γ red ∼ Φ (Φ(A red ), (u −i , u s i )) by application of Assumption 2. Finally, (Φ(A red ), (u −i , u s i )) ∼ Ξ −1 Γ s for some Ξ by iterative application of Assumption 1 to Γ s , plus transitivity (Lemma 2.3) with reversal (Lemma 2.2). It is left to show that if there is an SPI, then the above algorithm will find it and return true. To see this, notice that Lemma 21 implies that there is a sequence of outcome correspondences Γ ∼ Ψ Γ red ∼ Φ Γ s,red ∼ Ξ Γ s . We can assume that Γ s,red and Γ s have the same action sets for Player i. It is easy to see that in Γ s we could modify the utilities u s i (a) for any a that is not in Γ s,red , because Player i's utilities do not affect the elimination of strictly dominated strategies from Γ s . \n D.3 The SPI problems are NP-hard We now proceed to showing that the safe Pareto improvement problem is NP-hard. We will do this by reducing the subgraph isomorphism problem to the (two-player) safe Pareto improvement problem. We start by briefly describing one version of that problem here. A (simple, directed) graph is a tuple (n, a : {1, ..., n} × {1, ..., n} → B), where n ∈ N and B := {0, 1}. We call a the adjacency function of the graph. Since the graph is supposed to be simple and therefore free of self-loops (edges from one vertex to itself), we take the values a(j, j) for j ∈ {1, ..., n} to be meaningless. For given graphs G = (n, a),G = (n , a ) a subgraph isomorphism from G to G is an injection φ : {1, ..., n} → {1, ...n } such that for all j = l a(j, l) ≤ a (φ(j), φ(l)). In words, a subgraph isomorphism from G to G identifies for each node in G a node in G s.t. 
if there is an edge from node j to node l in G, there must also be an edge in the same direction between the corresponding nodes . We define Γ based on Ĝ analogously, except that in Player 1's utilities we use 5 instead of 4, 5 + (n + i) instead of 4 + (n + i) , 5 + j instead of 4 + j and 4 instead of 3. We now define Γ c = (A c , u c ) from Γ and Γ as sketched in Table 10 . For the following let in contradiction to the assumption that Ψ is Pareto improving. We are ready to construct our subgraph isomorphism. For i ∈ [n], define φ(i) to be the second element of the pair Ψ 1 (T, i). By Item d, φ(i) can equivalently be defined as the second item in the pair Ψ 2 (D, i). By Item c, φ is a function from [n] to [n] . By assumption about Ψ, φ is injective. Further, by construction of Γ c and φ, as well as the assumption that Ψ is Pareto improving, we infer that for all i, j ∈ [n] with i = j, â(φ(a), φ(j)) = u c 1 ((R, φ(i)), (P, φ(j))) = u c 1 (Ψ 1 (T, i), Ψ 2 (D, j)) ≥ u c 1 ((T, i), (D, j)) = a(i, j). We conclude that φ is a subgraph isomorphism. \n E Proof of Theorem 15 Proof. We will give the proof based on the graphs as well, without giving all formal details. Further we assume in the following that neither L 1 nor L 3 consist of just a single point, since these cases are easy. Case A: Note first that by Corollary 14 it is enough to show that if y is in any of the listed sets L 1 , L 2 , L 3 , it can be made safe. It's easy to see that all payoff vectors on the curve segment of the Pareto frontier L 2 are safely achievable. After all, all payoff vectors in this set Pareto-improve on all outcomes in supp(Π(Γ)). Hence, for each y on the line segment, one could select the Γ s where u e = y. It is left to show that all elements of L 1/2 are safely achievable. Remember that not all payoff vectors on the line segments are Pareto improvements, only those that are to the north-east of (Pareto-better than) the default utility. In the following, we will use L 1 and L 3 to denote those elements of L 1 and L 3 , respectively, that are Pareto-improvements on the default. We now argue that the Pareto improvement y on the line L 1 for which y 1 = E [u 1 (Π(Γ))] is safely achievable. In other words, y is the projection Now note that the set of feasible payoffs of Γ is convex. Further, the curve max(L 1 , L 3 ) is concave. Because the area above a concave curve is convex and because the intersection of convex sets is convex, the set of feasible payoffs on or above max(L 1 , L 3 ) is also convex. By definition of convexity, E 2 = E [u e (Π(Γ s ))] is therefore also in the set of feasible payoffs on or above max(L 1 , L 3 ) and therefore above both L 1 and L 3 as desired. In our second step, we now use E 1 , E 2 , E 3 to prove the claim. Because of convexity of the set of safely achievable payoff vectors as per Corollary 14, all utilities below the curve consisting of the line segments from E 1 to E 2 and from E 2 to E 3 are safely achievable. The line that goes through E 1 , E 2 intersects the line that contains L 1 at E 1 , by definition. Since non-parallel lines intersect each other exactly once and parallel lines that intersect each other are equal and because E 2 is above or on L 1 , the line segment from E 1 to E 2 lies entirely on or above L 1 . Similarly, it can be shown that the line segment from E 2 to E 3 lies entirely on or above L 3 . It follows that the E 1 − E 2 − E 3 curve lies entirely above or on min(L 1 , L 3 ). Now take any Pareto improvement that lies below both L 1 and L 3 . 
Then this Pareto improvement lies below min(L_1, L_3) and therefore below the E_1–E_2–E_3 curve. Hence, it is safely achievable. \n 7. If Γ ∼_Φ Γ′, then by reflexivity of ∼ (Lemma 2.1), Γ′ ∼_{Φ^{−1}} Γ. If Φ^{−1}(a′) = ∅, then by Lemma 2.6, Π(Γ′) = a′ with certainty. \n Similar desiderata have been discussed in the context of equilibrium selection, e.g., by Harsanyi and Selten [16, Chapter 3.4] [cf. 40, for a discussion in the context of fully cooperative multi-agent reinforcement learning]. \n Theorem 9. The (strict) (unilateral) SPI decision problem is NP-complete, even for 2-player games. Proposition 10. For games Γ with |A_1| + … + |A_n| = m that can be reduced (via iterative application of Assumption 1) to a game Γ′ with |A′_1| + … + |A′_n| = l, the (strict) (unilateral) SPI decision problem can be solved in O(m^l). \n Algorithm 1 (an algorithm for deciding the strict perfect-coordination SPI problem). Data: game Γ and the set supp(Π(Γ)). For each a ∈ supp(Π(Γ)): if u(a) is Pareto-suboptimal within C(Γ), return False. Otherwise, return True. \n 9.3 Characterizing safe Pareto improvements under improved representative coordination From the problem of deciding whether there are strict SPIs under improved coordination at all, we move on to the question of what different perfect-coordination SPIs there are. In particular, one might ask what the cost is of only considering safe Pareto improvements relative to acting on a probability distribution over Π(Γ) and the resulting expected utilities E[u(Π(Γ))]. \n Theorem 15. Make Assumption 2. Let Γ be a two-player game. Let y ∈ R^2 be some potentially unsafe Pareto improvement on E[u(Π(Γ))]. For i = 1, 2, let x^min/max_i = min/max u_i(supp(Π(Γ))). Then: A) If there is some element in C(Γ) which Pareto-dominates all of supp(Π(Γ)) and if y is Pareto-dominated by an element of at least one of the following three sets: • L_1 := the line segment between π_1(x^min_1, PF(C(Γ))) and π_1(x^max_1, PF(C(Γ))); • L_2 := the segment of the curve PF(C(Γ)) between π_1(x^max_1, PF(C(Γ))) and π_2(x^max_2, PF(C(Γ))); • L_3 := the line segment between π_2(x^max_2, PF(C(Γ))) and π_2(x^min_2, PF(C(Γ))). \n Figure 2: This figure illustrates Theorem 15, Case A. \n Figure 3: This figure illustrates Theorem 15, Case B. \n Figure 4: This figure illustrates the Game of Table 7 as an instance of Theorem 15, Case B. \n Algorithm 2 (a program equilibrium implementation of an SPI Γ_s of Γ). Data: everybody's source code c, my index i. For each j ∈ {1, …, n} − {i}: if c_j ≠ c_i, then play minimax(i, j). Play Π_i(Γ_s). Proposition 18. Let Γ be a game and let Γ_s be an SPI on Γ. Let c be the program profile consisting only of Algorithm 2 for each player. Assume that Π(Γ) guarantees each player at least threat point utility in expectation. Then c is a program equilibrium and apply(c) = Π(Γ_s). \n Proposition 24. For games Γ with |A_1| + … + |A_n| = m that can be reduced (via iterative application of Assumption 1) to a game Γ′ with |A′_1| + … + |A′_n| = l, the (strict) omnilateral SPI decision problem can be solved in O(m^l). \n Proposition 26. For games Γ with |A_1| + … + |A_n| = m that can be reduced (via iterative application of Assumption 1) to a game Γ′ with |A′_1| + … + |A′_n| = l, the (strict) unilateral SPI decision problem can be solved in O(m^l). \n A_Γ = ({T} × [2n + 2]) × ({D} × [2n + 2]) and A_Γ̂ = ({R} × [2n + 2]) × ({P} × [2n + 2]). (d) Finally, notice that for i ∈ [n] and j ∈ [n], if Ψ_1(T, i) = (R, j), then also Ψ_2(D, i) = (P, j). To see this, assume it was Ψ_2(D, i) = (P, l) for some l ≠ j. Then by Item c, l ∈ [n]. Hence, u^c_2((T, i), (D, i)) = 2 > 1 = u^c_2((R, j), (P, l)) = u^c_2(Ψ_1(T, i), Ψ_2(D, i)). \n Table 1: The Demand Game. \n Table 2: A safe Pareto improvement for the Demand Game. \n Table 3: The Prisoner's Dilemma. \n Table 6: Simple Temptation Game. 2.3 (Transitivity) to obtain Γ ∼_Φ Γ′, where Γ′ = ({DM, RM}, {DM, RM}, u_1, u_2) and Φ(a_1, a_2) = {(a_1, a_2)} if a_1, a_2 ∈ {DM, RM}, and ∅ otherwise. Next, by Assumption 2, Γ′ ∼_Ψ Γ_s, where Ψ_i(DM) = DL and Ψ_i(RM) = RL for i = 1, 2. We can then apply Lemma 2.3 (Transitivity) again, to infer Γ ∼_{Ψ∘Φ} Γ_s. It is easy to verify that for all (a_1, a_2 \n Table 8: Assurance Game preferences for the Prisoner's Dilemma. 1. If both players tell their representatives to adopt Assurance Game preferences, (Cooperate, Cooperate) is a Nash equilibrium. (Defect, Defect) is a Nash equilibrium as well. However, since (Cooperate, Cooperate) is Pareto-better than (Defect, Defect), the original players could reasonably expect that the representatives play (Cooperate, Cooperate). 2. Under reasonable assumptions about the rationality of the representatives, it is a Nash equilibrium of the meta-game for both players to adopt Assurance Game preferences. If Player 1 tells her representative to adopt Assurance Game preferences, then Player 2 maximizes his utility by telling his representative to also adopt Assurance Game preferences. After all, representative 1 prefers defecting if representative 2 defects. Hence, if Player 2 instructs his representative to adopt preferences that suggest defecting, then he should expect representative 1 to defect as well. \n Proposition 23. The above algorithm runs in non-deterministic polynomial time and returns True if and only if there is a (strict) unilateral SPI. Proof. It is easy to see that this algorithm runs in non-deterministic polynomial time. Furthermore, with Lemma 4 it is easy to see that if this algorithm finds a solution Γ_s, that solution is indeed a safe Pareto improvement. It is left to show that if there is a safe Pareto improvement via a sequence of Assumption 2 and 1 outcome correspondences, then the algorithm indeed finds a safe Pareto improvement. \n Table 9: The game Γ constructed to represent the graph G = (n, a). \n Table 10: The game Γ^c as constructed from Γ and Γ̂.
and  u 2 (i, j) =                               2, if i = j and i, j∈ [n] 1, if i = j and i, j ∈ [n] −1, if i ∈ {n + 1, ..., 2n}, j ∈ [n] and i = j + n −1, if i ∈ [n], j ∈ {n + 1, ..., 2n} and i + n = j 4 if i ∈ [n] and j = i + n 4 if j ∈ [n] and i = j + n −1 if i, j ∈ {n + 1, ..., 2n} 3 if j ∈ {2n + 1, 2n + 2} and i ∈ [2n] 6 if i = j = 2n + 2 if i ∈ {2n + 1, 2n + 2}and not i = j = 2n + 2 \n\t\t\t Here is another way of putting this. When one of the players i deliberates whether she would rather have the representatives play Π(Γ) or Π(Γ s ), we could imagine that the agent has a number of possible of models of how the representatives (Π) operate. Absent a probability distribution over models, the only widely accepted circumstance under which she can make such comparisons is decision-theoretic dominance [30, Sect. 3.1]: she should prefer Π(Γ s ) if ui(Π(Γ s )) ≥ ui(Π(Γ)) under all models and ui(Π(Γ s )) > ui(Π(Γ)) under at least some model of Π. \n\t\t\t Note that the fact that this is an equivalence relation relies on the following three facts:1. For reflexivity of R: The identity function id is a single-valued bijection. 2. For symmetry of R: If Φ is a single-valued bijection, so is Φ −1 .3. For transitivity of R: If Ψ, Φ are bijections, so is Ψ • Φ. \n\t\t\t For an SPI Γ s to be also be a strict SPI on Γ, there must be a s which strictly Paretodominates a such that for all Φ with Γ ∼Φ Γ s , it must be a s ∈ Φ(a). \n\t\t\t There are trivial, uninteresting cases in which no assumptions are needed. In particular, if a game Γ has an outcome a that Pareto dominates all other outcomes of the game, then (by Lemma 2.5 with Theorem 3) any game Γ s = (A s = {a}, u s ) is an SPI on Γ. \n\t\t\t In fact, depending on formal details we omit throughout this paper -how to quantify over games in these assumptions, what type of objects actions are, etc. a more general version of Assumption 2 threatens to yield a contradiction with Assumption 1 again. \n\t\t\t The use of a book as illustration is inspired by Binmore [4,Section 1]. \n\t\t\t Of course, in many cases (such as Rock-Paper-Scissors) it is implausible for the players to choose deterministically. The idea is that in such games the randomization was performed in the process of printing. If the book is intended to be used multiple times, we may imagine that a sequence of (randomly generated) actions is provided (à la the RAND Corporation's book of random numbers). \n\t\t\t A second question is what to instruct the representatives to do in case of differing demands. 10 Of course, such an instruction would then also have to specify what happens if all players submit such an instruction. This appears to be a lesser problem, however.", "date_published": "n/a", "url": "n/a", "filename": "SPI.tei.xml", "abstract": "A set of players delegate playing a game to a set of representatives, one for each player. We imagine that each player trusts their respective representative's strategic abilities. Thus, we might imagine that per default, the original players would simply instruct the representatives to play the original game as best as they can. In this paper, we ask: are there safe Pareto improvements on this default way of giving instructions? That is, we imagine that the original players can coordinate to tell their representatives to only consider some subset of the available strategies and to assign utilities to outcomes differently than the original players. 
Then can the original players do this in such a way that the payoff is guaranteed to be weakly higher than under the default instructions for all the original players? In particular, can they Pareto-improve without probabilistic assumptions about how the representatives play games? In this paper, we give some examples of safe Pareto improvements. We prove that the notion of safe Pareto improvements is closely related to a notion of outcome correspondence between games. We also show that under some specific assumptions about how the representatives play games, finding safe Pareto improvements is NP-complete.", "id": "1a162b03db2ea4923daf0db8944acc1e"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Scott Emmons", "Caspar Oesterheld", "Andrew Critch", "Vince Conitzer", "Stuart Russell"], "title": "Symmetry, Equilibria, and Robustness in Common-Payoff Games", "text": "INTRODUCTION We consider common-payoff games (also known as identical interest games [38] ), in which the payoff to all players is always the same. 1 Such games model a wide range of situations involving Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). Appears at the 3rd Games, Agents, and Incentives Workshop (GAIW 2021). Held as part of the Workshops at the 20th International Conference on Autonomous Agents and Multiagent Systems., Aziz, Ceppi, Dickerson, Hosseini, Lev, Mattei, McElfresh, Zick (chairs), May 2021, London, UK. © 2021 Copyright held by the owner/author(s). cooperative action towards a common goal. Under the heading of team theory, they form an important branch of economics [19, 20] . In AI, the common-payoff assumption holds in Dec-POMDPs [25] , where multiple agents operate independently according to policies designed centrally to achieve a common objective. Many applications of multiagent reinforcement learning also assume a common payoff [7, 8, 12] . Finally, in assistance games [31] (also known as cooperative inverse reinforcement learning or CIRL games [13] ), which include at least one human and one or more \"robots,\" it is assumed that the robots' payoffs are exactly the human's payoff, even if the robots do not know what it is. Common-payoff games lead naturally to considerations of symmetry in game structure-for example, the assumption that two players' actions produce the same effect on the common payoff. Indeed, von Neumann and Morgenstern [41] and Nash [23] introduced fairly general group-theoretic notions of symmetry, which we adopt and explain in Section 2. More recent work has analyzed narrower notions of symmetry [22, 30, 39] . For example, Daskalakis and Papadimitriou [5] study \"anonymous games\" and show that anonymity substantially reduces the complexity of finding solutions. Finally, Ham [14] generalizes the player-based notion of symmetry to include further symmetries revealed by renamings of actions. We conjecture our results extend to this more general case, at some cost in notational complexity, but we leave this to future work. In games exhibiting symmetry, it is then reasonable to consider symmetry in players' strategies. (Section 2 defines this in a precise sense.) 
For example, in team theory, it is common to develop a strategy that can be implemented by every employee in a given category and leads to high payoff for the company. (Notice that this does not lead to identical behavior, because strategies are statedependent.) In civic contexts, symmetry commonly arises through notions of fairness and justice. In treaty negotiations and legislation that mandates how parties behave, for example, there is often a constraint that all parties be treated equally. In DecPOMDPs, an offline solution search may consider only symmetric strategies for identical agents as a way of reducing the search space. In commonpayoff multiagent reinforcement learning, each agent may collect percepts and rewards independently, but the reinforcement learning updates can be pooled to learn a single parameterized policy that all agents share. Common-payoff and symmetric games have a number of desirable properties that may simplify the search for solutions. For the Table 1 : Three versions of the laundry/washing up game. Solutions are described in the text. purposes of this paper, we consider Nash equilibria-strategy profiles for all players from which no individual player has an incentive to deviate-as a reasonable solution concept. For example, Marschak and Radner [20] make the obvious point that a globally optimal (possibly asymmetric) strategy profile-one that achieves the highest common payoff-is necessarily a Nash equilibrium. Moreover, it can be found in time linear in the size of the payoff matrix. Another solution concept often used in multiagent RL and differential (i.e., continuous-action-space) games is that of a locally optimal strategy profile-roughly speaking, a strategy profile from which no player has an incentive to slightly deviate. Obviously, a locally optimal profile may not be a Nash equilibrium, as a player may still have an incentive to deviate to some more distant point in strategy space. Nonetheless, local optima, sometimes called local Nash equilibria-are important. For example, Ratliff et al. [29] argue that a local Nash equilibrium may still be stable in a practical sense if agents are computationally unable to find a better strategy. Similarly, gradient-based game solvers and multiagent RL algorithms may converge to local optima. Our first main result, informally stated, is that in a symmetric, common-payoff game, every local optimum in symmetric strategies is a (global) Nash equilibrium. Section 3 states the result more precisely and gives an example illustrating its generality. 2 Despite many decades of research on symmetric, common-payoff games, the result appears to be novel and perhaps useful. There are some echoes of the result in the literature on single-agent decision making [4, 27, 32] , which can be connected to symmetric solutions of common-payoff games by treating all players jointly as a single agent, but our result appears more general than published results. The proof we give of our result contains elements similar to the proof (of a related but different result) in Taylor [34] . To gain some intuition for these concepts and claims, let us consider a situation in which two children, Ali and Bo, have to do some housework-specifically, laundry (𝐿) and washing up (𝑊 ). Here, the \"common payoff,\" if any, is to the parents. It is evident that a symmetric strategy profile-both doing the laundry or both doing the washing up-is not ideal, because the other task will not get done. 
The first version of the game, whose payoffs 𝑈 are shown in Table 1a, is asymmetric: while Ali is competent at both tasks, Bo does not know how to do the laundry properly and will ruin the clothes. Here, as Marschak and Radner pointed out, the strategy profile (𝐿,𝑊) is both globally optimal and a Nash equilibrium. If we posit a mixed (randomized) strategy profile in which Ali and Bo have laundry probabilities 𝑝 and 𝑞 respectively, the gradients 𝜕𝑈/𝜕𝑝 and 𝜕𝑈/𝜕𝑞 are +1 and −1, driving the solution towards (𝐿,𝑊). 2 Complete proofs for all of our results are in the appendices. \n Figure 1: … (Table 1b). Although the symmetric optimum has lower expected utility than the unrestricted optima, total symmetry of the game implies that the symmetric optimum is a Nash equilibrium; this is a special case of Theorem 3.2. \n In the second version of the game (Table 1b), Ali has taught Bo how to do the laundry, and symmetry is restored. The pure profiles (𝐿,𝑊) and (𝑊,𝐿) are (asymmetric) globally optimal solutions and hence Nash equilibria. Figure 1 shows the entire payoff landscape as a function of 𝑝 and 𝑞: looking just at symmetric strategy profiles, it turns out that there is a local optimum at 𝑝 = 𝑞 = 0.5, i.e., where Ali and Bo toss fair coins to decide what to do. Although the expected payoff of this solution is lower than that of the asymmetric optima, the local optimum is, nonetheless, a Nash equilibrium. All unilateral deviations from the symmetric local optimum result in the same expected payoff because if one child is tossing a coin, the other child can do nothing to improve the final outcome. In the third version of the game (Table 1c), the parents derive greater payoff from watching their children working happily together on a single task than they do from getting both tasks done. In this case, there is again a Nash equilibrium at 𝑝 = 𝑞 = 0.5, but it is a local minimum rather than a local maximum in symmetric strategy space. Thus, not all symmetric Nash equilibria are symmetric local optima; this is because Nash equilibria depend on unilateral deviations, whereas symmetric local optima depend on joint deviations that maintain symmetry. In the second half of the paper, we turn to the issue of robustness of symmetric solutions. In practice, a variety of factors can lead to modelling errors and approximate solutions, which motivates us to consider perturbations in payoffs and strategy profiles. Making general arguments about Nash equilibria, we show that our first main result is robust in the sense that it degrades linearly under 𝜖-magnitude perturbations into 𝑘𝜖-Nash equilibria (for some game-dependent constant 𝑘). Stability turns out to be a thornier issue. Instability, if not handled carefully, might lead to major coordination failures in practice [3]. While it is already known that local strict optima in a totally symmetric team game attain one type of stability, the issue is complex because there are several ways of enforcing (or not enforcing) strict symmetries in payoffs and strategies [22]. Our final results focus on the stability of agents making possibly-asymmetric updates from a symmetric solution. We prove for a non-degenerate class of games that local optima in symmetric strategy space fail to be local optima in asymmetric strategy space if and only if at least one player is mixing, and we experimentally quantify how often mixing occurs for learning algorithms in the GAMUT suite of games [24].
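Returning to the laundry example: since the extracted text does not reproduce the payoff entries of Table 1, the following small Python sketch uses assumed payoffs for version 2 of the game (common payoff 1 when the children split the tasks, 0 when they pick the same task; these numbers are our assumption, not the paper's Table 1b). It recovers the qualitative picture described above: the best symmetric profile is p = q = 0.5, and at that profile each child is indifferent between L and W, so the profile is a Nash equilibrium.

```python
import numpy as np

# Assumed payoffs for version 2 (Table 1b is not reproduced in this text):
# common payoff 1 if exactly one child does the laundry, else 0.
U = {("L", "L"): 0.0, ("L", "W"): 1.0, ("W", "L"): 1.0, ("W", "W"): 0.0}

def eu(p, q):
    """Expected common payoff when Ali does laundry w.p. p and Bo w.p. q."""
    return (p * q * U[("L", "L")] + p * (1 - q) * U[("L", "W")]
            + (1 - p) * q * U[("W", "L")] + (1 - p) * (1 - q) * U[("W", "W")])

grid = np.linspace(0.0, 1.0, 101)
sym = [eu(p, p) for p in grid]                  # restrict to symmetric profiles p = q
p_star = grid[int(np.argmax(sym))]
print("symmetric optimum: p = q =", p_star, "with expected payoff", eu(p_star, p_star))

# Nash check: against q = 0.5, Ali gets the same payoff from L, W, or any mixture.
print("EU(L, 0.5) =", eu(1.0, 0.5), "  EU(W, 0.5) =", eu(0.0, 0.5))
# Flipping the payoffs (same task = 1, split tasks = 0) gives a version-3-like game,
# where p = q = 0.5 is instead a local minimum in symmetric strategy space.
```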
\n PRELIMINARIES: GAMES AND SYMMETRIES 2.1 Normal-form games Throughout, we consider normal-form games G = (𝑁 , 𝐴, 𝑢) defined by a finite set 𝑁 with |𝑁 | = 𝑛 players, a finite set of action profiles 𝐴 = 𝐴 1 × 𝐴 2 × . . . × 𝐴 𝑛 with 𝐴 𝑖 specifying the actions available to player 𝑖, and the utility function 𝑢 = (𝑢 1 , 𝑢 2 , . . . , 𝑢 𝑛 ) with 𝑢 𝑖 : 𝐴 → R giving the utility for each player 𝑖 [33] . We call G common-payoff if 𝑢 𝑖 (𝑎) = 𝑢 𝑗 (𝑎) for all action profiles 𝑎 ∈ 𝐴 and all players 𝑖, 𝑗. In common-payoff games we may omit the player subscript 𝑖 from utility functions. We model each player as employing a (mixed) strategy 𝑠 𝑖 ∈ Δ(𝐴 𝑖 ), a probability distribution over actions. We denote the support of the probability distribution 𝑠 𝑖 by supp(𝑠 𝑖 ). Given a (mixed) strategy profile 𝑠 = (𝑠 1 , 𝑠 2 , . . . , 𝑠 𝑛 ) that specifies a strategy for each player, player 𝑖's expected utility is 𝐸𝑈 𝑖 (𝑠) = 𝑎 ∈𝐴 𝑢 𝑖 (𝑎) Note that, while we have chosen to use the normal-form game representation for simplicity, normal-form games are highly expressive. Normal-form games can represent mixed strategies in all finite games, including games with sequential actions, stochastic transitions, and partial observation such as imperfect-information extensive form games with perfect recall, Markov games, and Dec-POMDPs. To represent a sequential game in normal form, one simply lets each normal-form action be a complete strategy (contingency plan) accounting for every potential game decision. \n Symmetry in game structure Our notion of symmetry in game structure is built upon von Neumann and Morgenstern [41] 's and borrows notation from Plan [28] . The basic building block is a symmetry of a game: Definition 2.1. Call a permutation of player indices 𝜌 : {1, 2, ..., 𝑛} → {1, 2, ..., 𝑛} a symmetry of a game G if, for all strategy profiles (𝑠 1 , 𝑠 2 , ..., 𝑠 𝑛 ), permuting the strategy profile permutes the expected payoffs: 𝐸𝑈 𝜌 (𝑖) ((𝑠 1 , 𝑠 2 , ..., 𝑠 𝑛 )) = 𝐸𝑈 𝑖 ((𝑠 𝜌 (1) , 𝑠 𝜌 (2) , ..., 𝑠 𝜌 (𝑛) )), ∀𝑖. Note that, when we speak of a symmetry of a game, we implicitly assume 𝐴 𝑖 = 𝐴 𝑗 for all 𝑖, 𝑗 with 𝜌 (𝑖) = 𝑗 so that permuting the strategy profile is well-defined. 3 We characterize the symmetric structure of a game by its set of game symmetries: Definition 2.2. Denote the set of all symmetries of a game G by: Γ(G) = {𝜌 : {1, 2, ..., 𝑛} → {1, 2, ..., 𝑛} a symmetry of G}. A spectrum of game symmetries is possible. On one end of the spectrum, the identity permutation might be the only symmetry for a given game. On the other end of the spectrum, all possible permutations might be symmetries for a given game. Following the terminology of von Neumann and Morgenstern [41] , we call the former case totally unsymmetric and the latter case totally symmetric: Definition 2.3. If Γ(G) = 𝑆 𝑛 , the full symmetric group, we call the game Γ(G) totally symmetric. If Γ(G) contains only the identity permutation, we call the game totally unsymmetric. Let P ⊆ Γ(G) be any subset of the game symmetries. Because Γ(G) is closed under composition, we can repeatedly apply permutations in P to yield a group of game symmetries ⟨P⟩: Definition 2.4. Let P ⊆ Γ(G) be a subset of the game symmetries. The group generated by P, denoted ⟨P⟩, is the set of all permutations that can result from (possibly repeated) composition of permutations in P: ⟨P⟩ = {𝜌 1 • 𝜌 2 • . . . • 𝜌 𝑚 | 𝑚 ∈ N, 𝜌 1 , 𝜌 2 , . . . , 𝜌 𝑚 ∈ P}. 
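To make Definitions 2.1–2.4 concrete, the following illustrative Python sketch (ours, not the paper's) checks whether a permutation of player indices is a symmetry of a game, computes the group ⟨P⟩ generated by a set of permutations, and computes each player's orbit (anticipating Definition 2.5 below). Because expected utility is multilinear in the players' mixed strategies, checking Definition 2.1 on pure action profiles suffices; the game is represented as a dictionary mapping each pure profile to the tuple of all players' utilities, and all players are assumed to share the same action set.

```python
import itertools

def is_symmetry(u, rho):
    """Definition 2.1 on pure profiles (0-indexed): for every profile a and player i,
    u_{rho(i)}(a) == u_i(a_{rho(1)}, ..., a_{rho(n)})."""
    n = len(rho)
    return all(
        u[a][rho[i]] == u[tuple(a[rho[j]] for j in range(n))][i]
        for a in u for i in range(n)
    )

def generated_group(perms):
    """Closure of a nonempty set of permutations under composition (Definition 2.4)."""
    n = len(next(iter(perms)))
    group = {tuple(range(n))} | {tuple(p) for p in perms}
    while True:
        new = {tuple(p[q[i]] for i in range(n)) for p in group for q in group} - group
        if not new:
            return group
        group |= new

def orbits(perms, n):
    """Orbit of each player under <P> (Definition 2.5 / Proposition 2.6)."""
    group = generated_group(perms)
    return {i: sorted({p[i] for p in group}) for i in range(n)}

# Usage: a 2-player common-payoff coordination game; swapping the players is a symmetry.
u = {(0, 0): (1, 1), (0, 1): (0, 0), (1, 0): (0, 0), (1, 1): (1, 1)}
swap = (1, 0)
print(is_symmetry(u, swap))              # True
print(sorted(generated_group({swap})))   # [(0, 1), (1, 0)]
print(orbits({swap}, 2))                 # {0: [0, 1], 1: [0, 1]}
```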
Group theory tells us that ⟨P⟩ defines a closed binary operation (permutation composition) including an identity and inverse maps, and ⟨P⟩ is the closure of P under function composition. With a subset of game symmetries P ⊆ Γ(G) in hand, we can use the permutations in P to carry one player index to another. For each player 𝑖, we give a name to the set of player indices to which permutations in P can carry 𝑖: we call it player 𝑖's orbit. Definition 2.5. Let P ⊆ Γ(G) be a subset of the game symmetries Γ(G). The orbit of player 𝑖 under P is the set of all other player indices that ⟨P⟩ can assign to 𝑖: P (𝑖) = {𝜌 (𝑖) | 𝜌 ∈ ⟨P⟩}. In fact, it is a standard result from group theory that the orbits of a group action on a set partition the set's elements, which leads to the following proposition: Proposition 2.6. Let P ⊆ Γ(G). The orbits of P partition the game's players. By Proposition 2.6, each P ⊆ Γ(G) yields an equivalence relation among the players. To gain intuition for this equivalence relation, consider two extreme cases. In a totally unsymmetric game, Γ(G) contains only the identity permutation, in which case each player is in its own orbit of Γ(G); the equivalence relation induced by the orbit partition shows that no players are equivalent. In a totally symmetric game, by contrast, every permutation is an element of Γ(G), i.e., Γ(G) = 𝑆 𝑛 , the full symmetric group; now, all the players share the same orbit of Γ(G), and the equivalence relation induced by the orbit partition shows that all the players are equivalent. We leverage the orbit structure of an arbitrary P ⊆ Γ(G) to define an equivalence relation among players because it adapts to however much or little symmetry is present in the game. Between the extreme cases of no symmetry (𝑛 orbits) and total symmetry (1 orbit) mentioned above, there could be any intermediate number of orbits of P. Furthermore, two players can share an orbit of P even if those two players cannot be arbitrarily swapped. In Example 3.3, all the players can be rotated in a circle, so all the players share an orbit of P = Γ(G) even though the game does not admit arbitrary swapping of players. \n Symmetry in strategy profiles Having formalized a symmetry of a game in the preceding section, we follow Nash [23] and define symmetry in strategy profiles with respect to symmetry in game structure: Definition 2.7. Let P ⊆ Γ(G) be a subset of the game symmetries Γ(G). We call a strategy profile 𝑠 = (𝑠 1 , 𝑠 2 , ..., 𝑠 𝑛 ) P-invariant if (𝑠 1 , 𝑠 2 , ..., 𝑠 𝑛 ) = (𝑠 𝜌 (1) , 𝑠 𝜌 (2) , ..., 𝑠 𝜌 (𝑛) ) for all 𝜌 ∈ ⟨P⟩. The equivalence relation among players induced by the orbit structure of P is fundamental to our definition of symmetry in strategy profiles by the following proposition: Proposition 2.8. A strategy profile 𝑠 = (𝑠 1 , 𝑠 2 , ..., 𝑠 𝑛 ) is P-invariant if and only if 𝑠 𝑖 = 𝑠 𝑗 for each pair of players 𝑖 and 𝑗 with P (𝑖) = P ( 𝑗). To state Proposition 2.8 another way, a strategy profile is Pinvariant if all pairs of players 𝑖 and 𝑗 that are equivalent under the orbits of P play the same strategy. \n LOCAL SYMMETRIC OPTIMA ARE (GLOBAL) NASH EQUILIBRIA After the formal definitions of symmetry in the previous section, we are almost ready to formally state the first of our three main results. The only remaining definition is that of a local symmetric optimum: Definition 3.1. 
Call 𝑠 a locally optimal P-invariant strategy profile of a common-payoff game if: (i) 𝑠 is P-invariant, and (ii) for some 𝜖 > 0, no P-invariant strategy 𝑠 ′ with 𝐸𝑈 (𝑠 ′ ) > 𝐸𝑈 (𝑠) can be formed by adding or subtracting at most 𝜖 to the probability of taking any given action 𝑎 𝑖 ∈ 𝐴 𝑖 . If, furthermore, condition (ii) holds for all 𝜖 > 0, we call 𝑠 a globally optimal P-invariant strategy profile or simply an optimal P-invariant strategy profile. Now we can formally state our first main theorem, that local symmetric optima are (global) Nash equilibria: Theorem 3.2. Let G be a common-payoff normal-form game, and let P ⊆ Γ(G) be a subset of the game symmetries Γ(G). Any locally optimal P-invariant strategy profile is a Nash equilibrium. Proof. We provide a sketch here and full details in Appendix A. Suppose, for the sake of contradiction, that an individual player 𝑖 could beneficially deviate to action 𝑎 𝑖 (if a beneficial deviation exists, then there is one to a pure strategy). Then, consider instead a collective change to a symmetric strategy profile in which all the players in 𝑖's orbit shift very slightly more probability to 𝑎 𝑖 . By making the amount of probability shifted ever smaller, the probability that this change affects exactly one agent's realized action (making it 𝑎 𝑖 when it would not have been before) can be arbitrarily larger than the probability that it affects multiple agents' realized actions. Moreover, if this causes exactly one agent's realized action to change, this must be in expectation beneficial, since the original unilateral deviation was in expectation beneficial. Hence, the original strategy profile cannot have been locally optimal. □ \n Example illustrating general symmetry Here, we give an example that shows how Theorem 3.2 is more general than the case of total symmetry. The example illustrates the existence of rotational symmetry without total symmetry, and it illustrates how picking different P ⊆ Γ(G) leads to different optimal P-invariant strategies and thus different P-invariant Nash equilibria by Theorem 3.2. Example 3.3. There are four radio stations positioned in a square. We number these 1,2,3,4 clockwise, such that, e.g., 1 neighbors 4 and 2. There is also a neighborhood of people at each vertex of the square. The people can tune in to the radio station at their vertex of the square and to the radio stations at adjacent vertices of the square, but they cannot tune in to the station at the opposite vertex. The game has each radio station choose what to broadcast. For simplicity, suppose each radio station can broadcast the weather or music. The common payoff of the game is the sum of the utilities of the four neighborhoods. For each neighborhood, if the neighborhood cannot tune in to the weather, the payoff for that neighborhood is 0. If the neighborhood can only tune in to the weather, the payoff is 1, and if the neighborhood can tune in to both weather and music, the neighborhood's payoff is 2. The symmetries of the game Γ(G) include the set of permutations generated by rotating the radio stations once clockwise. In standard notation for permutations, {(1, 2, 3, 4), (2, 3, 4, 1), (3, 4, 1, 2), (4, 1, 2, 3)} ⊂ Γ(G). First, consider applying the theorem to P = Γ(G). In this case, the constraint of P-invariance requires all the radio stations to play the same strategy because all stations are in the same orbit. As we show in Appendix B, the optimal P-invariant strategy then is for each station to broadcast music with probability √ 2−1. 
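As a quick numerical illustration of this example (ours, not the paper's), the symmetric expected utility can be written in closed form and maximised over the single shared broadcast probability; the maximiser matches the value p = 2 − √2 for the weather (equivalently, music with probability √2 − 1) derived in Appendix B.

```python
import numpy as np

def total_eu(p):
    """Total expected payoff when every station airs the weather w.p. p.
    Each neighbourhood hears 3 stations: payoff 0 if none airs the weather,
    1 if all three do (weather only), and 2 otherwise (weather and music)."""
    per_neighbourhood = p**3 + 2 * (1 - (1 - p)**3 - p**3)
    return 4 * per_neighbourhood

grid = np.linspace(0.0, 1.0, 100001)
p_star = grid[int(np.argmax(total_eu(grid)))]
print("optimal weather probability:", round(p_star, 4),
      "vs 2 - sqrt(2) =", round(2 - 2**0.5, 4))
print("so music is aired with probability", round(1 - p_star, 4), "= sqrt(2) - 1")

# Indifference check used in Appendix B to verify the Nash equilibrium: at p*,
# 2 * (1 - p*)^2 (value of airing the weather) equals p*^2 (value of airing music).
print(round(2 * (1 - p_star)**2, 4), round(p_star**2, 4))
```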
Theorem 3.2 tells us that this optimal P-invariant strategy profile is a Nash equilibrium. Appendix B also shows how to verify this without the use of Theorem 3.2. Second, consider applying the theorem to the case where P consists only of the rotation twice clockwise, i.e., the permutation which maps each station onto the station on the opposite vertex of the square. In standard notation for permutations, P = {(3, 4, 1, 2)}. Now, the constraint of P-invariance requires radio stations at opposite vertices of the square to play the same strategy. However, neighboring stations can broadcast different programs. The optimal P-invariant strategy is for one pair of opposite-vertex radio stations, e.g., 1 and 3, to broadcast the weather and for the other pair of radio stations, 2 and 4, to broadcast music. While it turns out to be immediate that this optimal P-invariant strategy is a Nash equilibrium because it achieves the globally optimal outcome, we could have applied Theorem 3.2 to know that this optimal P-invariant strategy profile is a Nash equilibrium even without knowing what the optimal P-invariant strategy was. \n ROBUSTNESS OF THE MAIN RESULT TO PAYOFF AND STRATEGY PERTURBATIONS The first type of robustness we consider is robustness to perturbations in the game's payoff function. Formally, we define an 𝜖perturbation of a game as follows: Definition 4.1. Let G be a normal-form game with utility function 𝜇. For some 𝜖 > 0, we call G ′ an 𝜖-perturbation of G if G ′ has utility function 𝜇 ′ satisfying max 𝑖 ∈𝑁 ,𝑎 ∈𝐴 |𝑢 ′ 𝑖 (𝑎) − 𝑢 𝑖 (𝑎)| ≤ 𝜖. There are a variety of reasons why 𝜖-perturbations might arise in practice. Our game model may contain errors such as the game not being perfectly symmetric; the players' preferences might drift over time; or we might have used function approximation to learn the game's payoffs. With Proposition 4.2, we note a generic observation about Nash equilibria showing that our main result, Theorem 3.2, is robust in the sense of degrading linearly in the payoff perturbation's size: Proposition 4.2. Let G be a common-payoff normal-form game, and let 𝑠 * be a locally-optimal P-invariant strategy profile for some subset of game symmetries P ⊆ Γ(G). Suppose 𝐺 ′ is an 𝜖-perturbation of G. Then 𝑠 * is a 2𝜖-Nash equilibrium in G ′ . The second type of robustness we consider is robustness to symmetric solutions that are only approximate. For example, we might try to find a symmetric local optimum through an approximate optimization method, or the evolutionary dynamics among players' strategies might lead them to approximate local symmetric optima. Again, a generic result about Nash equilibria shows that the guarantee of Theorem 3.2 degrades linearly in this case: Theorem 4.3. Let G be a common-payoff normal-form game, and let 𝑠 * be a locally-optimal P-invariant strategy profile for some subset of game symmetries P ⊆ Γ(G). Suppose 𝑠 is a strategy profile with total variation distance 𝑇𝑉 (𝑠, 𝑠 * ) ≤ 𝛿. Then 𝑠 is an 𝜖-Nash equilibrium with 𝜖 = 4𝛿 max 𝑖 ∈𝑁 ,𝑎 ∈𝐴 |𝑢 𝑖 (𝑎)|. By Theorem 4.3, we have a robustness guarantee in terms of the total variation distance between an approximate local symmetric optimum and a true local symmetric optimum. Without much difficulty, we can also convert this into a robustness guarantee in terms of the Kullback-Leibler divergence: Corollary 4.4. Let G be a common-payoff normal-form game, and let 𝑠 * be a locally-optimal P-invariant strategy profile for some subset of game symmetries P ⊆ Γ(G). 
Suppose 𝑠 is a strategy profile with Kullback-Leibler divergence satisfying 𝐷 𝐾𝐿 (𝑠 ||𝑠 * ) ≤ 𝜈 or 𝐷 𝐾𝐿 (𝑠 * ||𝑠) ≤ 𝜈. Then 𝑠 is an 𝜖-Nash equilibrium with 𝜖 = 2 √ 2𝜈 max 𝑖 ∈𝑁 ,𝑎 ∈𝐴 |𝑢 𝑖 (𝑎)|. While the results of this section show the robustness of Nash equilibria, we note that Nash equilibria, by definition, consider the possibility of only a single agent deviating; Nash equilibria cannot guarantee stability under dynamics that allow for multiple agents to deviate. In the next section, we investigate when multiple agents might have an incentive to simultaneously deviate by studying the optimality of symmetric strategy profiles in possibly-asymmetric strategy space. \n WHEN ARE LOCAL OPTIMA IN SYMMETRIC STRATEGY SPACE ALSO LOCAL OPTIMA IN POSSIBLY-ASYMMETRIC STRATEGY SPACE? Our first main theoretical result, Theorem 3.2, applies to locally optimal P-invariant, i.e., symmetric, strategy profiles. This still leaves open the question of how well locally optimal symmetric strategy profiles perform when considered in the broader, possiblyasymmetric strategy space. When are locally optimal P-invariant strategy profiles also locally optimal in possibly-asymmetric strategy space? This question is important in machine learning (ML) applications where users of symmetrically optimal ML systems might be motivated to make modifications to the systems, even for purposes of a common payoff. To address this issue more precisely, we formally define a local optimum in possibly-asymmetric strategy space: Definition 5.1. A strategy profile 𝑠 = (𝑠 1 , 𝑠 2 , . . . , 𝑠 𝑛 ) of a commonpayoff normal-form game is locally optimal among possiblyasymmetric strategy profiles, or, equivalently, a local optimum in possibly-asymmetric strategy space, if for some 𝜖 > 0, no strategy profile 𝑠 ′ with 𝐸𝑈 (𝑠 ′ ) > 𝐸𝑈 (𝑠) can be formed by changing 𝑠 in such a way that the probability of taking any given action 𝑎 𝑖 ∈ 𝐴 𝑖 for any player 𝑖 changes by at most 𝜖. Definition 5.1 relates to notions of stability under dynamics, such as those with perturbations or stochasticity, that allow multiple players to make asymmetric deviations. In particular, if 𝑠 is not a local maximum in asymmetric strategy space, this means that there is some set of players 𝐶 and strategy 𝑠 ′ 𝐶 arbitrarily close to 𝑠, such that if players 𝐶 were to play 𝑠 ′ 𝐶 (by mistake or due to stochasticity), some Player 𝑖 ∈ 𝑁 − 𝐶 would develop a strict preference over the support of 𝑠 𝑖 . To illustrate this, we return to the laundry/washing up game of the introduction. Example 5.2. Consider again the game of Table 1b . As Figure 1 illustrates, the symmetric optimum is for both Ali and Bo to randomize uniformly between W and L. While this is a Nash equilibrium, it is not a local optimum in possibly-asymmetric strategy space. If one player deviates from uniformly randomizing, the other player develops a strict preference for either 𝑊 or 𝐿. To understand when the phenomenon of Example 5.2 happens in general, we use the following degeneracy condition: Definition 5.3. Let 𝑠 be a Nash equilibrium of a game G: Intuitively, our definition says that a deterministic Nash equilibrium is non-degenerate when it is strict or almost strict (allowing the exception of at most one player who may be indifferent over available actions). A mixed Nash equilibrium, on the other hand, is non-degenerate when mixing matters. When speaking of a game G, we determine its degeneracy by the degeneracy of its Nash equilibria: Definition 5.4. 
We call a game G degenerate if it has at least one degenerate Nash equilibrium; otherwise, we call G non-degenerate. • If 𝑠 is deterministic, i.e., We note that \"degnerate\" is already an established term in the game-theoretical literature where it is often applied only to twoplayer games [see, e.g, 42, Definition 3.2]. While similar to the established notion of degeneracy, our definition of degeneracy is stronger, which makes our statements about non-degenerate games more general. If a two-player game G is non-degenerate in the usual sense from the literature, it is non-degenerate in the sense of Definition 5.3. Moreover, if G is common-payoff, then for each player 𝑖, we can define a two-player game played by 𝑖 and another single player who controls the strategies of 𝑁 − {𝑖}. If for all 𝑖 these two-player games are non-degenerate in the established sense, then G is non-degenerate in the sense of Definition 5.3. In non-degenerate games, our next theorem shows that a local symmetric optimum is a local optimum in possibly-asymmetric strategy space if and only if it is deterministic. Formally: Theorem 5.5. Let G be a non-degenerate common-payoff normalform game, and let P ⊆ Γ(G) be a subset of the game symmetries Γ(G). A locally optimal P-invariant strategy profile is locally optimal among possibly-asymmetric strategy profiles if and only if it is deterministic. To see why the (non-)degeneracy condition is needed in Theorem 5.5, we provide an example of a degenerate game: -10 Here, (𝑎, 𝑎) is the unique global optimum in symmetric strategy space. By Theorem 3.2, it is therefore also a Nash equilibrium. However, it is a degenerate Nash equilibrium and not locally optimal in asymmetric strategic space. The payoff can be improved by, e.g., Player 1 playing 𝑏 with small probability (and 𝑎 otherwise) and Player 2 playing 𝑐 with small probability (and 𝑎 otherwise). The following game illustrates how a global symmetric optimum, even if it is a non-degenerate, deterministic equilibrium, might still not be globally optimal in possibly-asymmetric strategy space. Example 5.7. Consider |𝑁 | = 3 and 𝐴 = {0, 1}, let 𝑘 be the number of players who choose action 1, and let the payoffs be: 0 if 𝑘 = 0, −1 if 𝑘 = 1, 1 if 𝑘 = 2, and −1 if 𝑘 = 3. Then the global symmetric optimum is for everyone to play 0. The global asymmetric optimum, on the other hand, is to coordinate to achieve 𝑘 = 2. Hence, the global symmetric optimum is strictly worse than the global asymmetric optimum. Of course, by Theorem 5.5, (0, 0, 0) is still a local optimum of asymmetric strategy space. \n LEARNING SYMMETRIC STRATEGIES IN GAMUT Theorem 5.5 shows that, in non-degenerate games, a locally optimal symmetric strategy profile is stable in the sense of Section 5 if and only if it is pure. For those concerned about stability, this raises the question: how often are optimal strategies pure, and how often are they mixed? To answer this question, we present an empirical analysis of learning symmetric strategy profiles in the GAMUT suite of game generators [24] . We are interested both in how centralized optimization algorithms (such as gradient methods) search for symmetric strategies and in how decentralized populations of agents evolve symmetric strategies. To study the former, we run Sequential Least SQuares Programming (SLSQP) [17, 40] , a local search method for constrained optimization. 
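As an illustration of the centralized search (a sketch under our own assumptions, not the authors' experimental code), the snippet below uses scipy's SLSQP to maximize the expected common payoff over a single shared mixed strategy, i.e., over the probability simplex, for a randomly generated totally symmetric common-payoff game; the player and action counts are arbitrary, and the payoff construction is only meant to mimic the "payoff depends on which actions are chosen, not by whom" structure.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_players, n_actions = 3, 3   # illustrative sizes, not the paper's 2-5 x 2-5 sweep

# Totally symmetric common payoff: the value depends only on the multiset of actions.
payoff = {c: rng.uniform(-1.0, 1.0)
          for c in itertools.combinations_with_replacement(range(n_actions), n_players)}

def expected_utility(x):
    """Expected common payoff when every player uses the mixed strategy x."""
    return sum(np.prod([x[a] for a in prof]) * payoff[tuple(sorted(prof))]
               for prof in itertools.product(range(n_actions), repeat=n_players))

result = minimize(
    lambda x: -expected_utility(x),
    x0=rng.dirichlet(np.ones(n_actions)),            # random start on the simplex
    method="SLSQP",
    bounds=[(0.0, 1.0)] * n_actions,
    constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - 1.0}],
)
print("locally optimal symmetric strategy:", np.round(result.x, 3))
print("expected common payoff:", round(-result.fun, 3))
```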
To study the latter, we simulate the replicator dynamics [9] , an update rule from evolutionary game theory with connections to reinforcement learning [2, 36, 37] . (See Appendix E.3 for details.) \n Experimental setup We ran experiments in all three classes of symmetric GAMUT games: RandomGame, CoordinationGame, and CollaborationGame. Intuitively, a RandomGame draws all payoffs uniformly at random, whereas in a CoordinationGame and a CollaborationGame, the highest payoffs are always for outcomes where all players choose the same action. (See Appendix E.1 details.) Because CoordinationGame and CollaborationGame have such similar game structures, our experimental results in the two games are nearly identical. To avoid redundancy, we only include experimental results for Coordina-tionGame in this paper. For each game class, we sweep the parameters of the game from 2 to 5 players and 2 to 5 actions, i.e., with (|𝑁 |, |𝐴 𝑖 |) ∈ {2, 3, 4, 5} × {2, 3, 4, 5}. We sample 100 games at each parameter setting and then attempt to calculate the global symmetric optimum using (i) 10 runs of SLSQP and (ii) 10 runs of the replicator dynamic (each with a different initialization drawn uniformly at random over the simplex), resulting in 10 + 10 = 20 solution attempts per game. Because we do not have ground truth for the globally optimal solution of the game, we instead use the best of our 20 solution attempts, which we call the \"best solution. \" To apply our previously developed theory to GAMUT games, we observe that RandomGames, CoordinationGames, and Collab-orationGames are (almost surely) non-degenerate in the sense of Definition 5.4: Proposition 6.1. Drawing a degenerate game is a measure-zero event in RandomGames, CoordinationGames, and CollaborationGames. \n What fraction of symmetric optima are local optima in possibly-asymmetric strategy space? Here, we try to get a sense for how often symmetric optima are stable in the sense that they are also local optima in possiblyasymmetric strategy space (see Section 5). In Appendix Table 3b , we show in what fraction of games the best solution of our 20 optimization attempts is mixed; by Theorem 5.5, this is the fraction of games whose symmetric optima are not local optima in possiblyasymmetric strategy space. In CoordinationGames, the symmetric optimum is always (by construction) for all players to choose the same action, leading to stability. By contrast, we see that 36% to 60% of RandomGames are unstable. We conclude that if real-world games do not have the special structure of CoordinationGames, then instability may be common. \n How often do SLSQP and the replicator dynamic find an optimal solution? As sequential least squares programming and the replicator dynamic are not guaranteed to converge to a global optimum, we test empirically how often each run converges to the best solution of our 20 optimization runs. In Appendix \n COMPUTATIONAL COMPLEXITY OF COMPUTING GAME SYMMETRIES AND SYMMETRIC STRATEGIES 7.1 Finding symmetries In some cases, domain knowledge can provide the symmetries of a game. For example, in the laundry game of Table 1b , symmetry arises from a simple observation: it matters only what chores get done, not which children do the chores. In other cases, however, players may face a potentially symmetric common-payoff game and first have to determine what the symmetries of the game are, e.g., by computing a generating set of the group of symmetries. Call this problem the game automorphism (GA) problem. Can it be solved efficiently? 
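As a naive baseline for the GA problem (ours; not the polynomial-time procedure behind Theorem 7.1 below, nor the graph-isomorphism reduction), one can simply test all n! permutations of the player indices against the pure-profile version of Definition 2.1 on a full payoff table:

```python
import itertools

def game_symmetries(u, n):
    """Return all symmetries of a game given as a full (non-sparse) payoff table:
    u[a] is the tuple of all n players' utilities for the pure profile a.
    Brute force over all n! permutations; only sensible for small n."""
    def is_symmetry(rho):
        return all(u[a][rho[i]] == u[tuple(a[rho[j]] for j in range(n))][i]
                   for a in u for i in range(n))
    return [rho for rho in itertools.permutations(range(n)) if is_symmetry(rho)]

# Usage: a 2-player common-payoff game that is not symmetric under swapping,
# so only the identity permutation is returned.
u = {(0, 0): (2, 2), (0, 1): (0, 0), (1, 0): (1, 1), (1, 1): (2, 2)}
print(game_symmetries(u, 2))   # [(0, 1)]
```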
The complexity of the GA problem depends on how the game is represented. The simplest representation is to give the full table of payoffs. However, the size is then exponential in the number of players. A simple alternative is to only explicitly represent non-zero entries in the payoff table. This way, some games of many players can be represented succinctly. Calling the latter a sparse representation and the former a non-sparse representation, we obtain the following: Theorem 7.1. On a non-sparse game representation, the GA problem can be solved in polynomial time. On a sparse representation, the GA problem is polynomial-time equivalent to the graph isomorphism problem. For a general introduction to the graph isomorphism problem, see Grohe and Schweitzer [11] . Notably, the problem is in NP but neither known to be solvable in polynomial time nor known to be NP-hard. \n Finding an optimal symmetric strategy profile Once it is known what the symmetries P of a given game are, what is the complexity of finding an optimal P-invariant strategy profile? In Appendix G.2, we show that the problem of optimizing symmetric strategies is equivalent to the problem of optimizing polynomials on Cartesian products of unit simplices. However, depending on how the polynomials and games are represented, the reductions may increase the problem instance exponentially. Nevertheless, we can import results from the literature on optimizing polynomials to obtain results such as the following: Theorem 7.2. Deciding for a given game G with symmetries P and a given number 𝐾 whether there is a P-invariant profile with expected utility at least 𝐾 is NP-hard, even for 2-player symmetric games. \n CONCLUSION When ML is deployed in the world, it is natural to instantiate multiple agents from the same template. This naturally restricts strategy profiles to symmetric ones, and it puts the focus on finding optimal symmetric strategy profiles. This, in turn, raises questions about the properties of such profiles. Would individual agents (or the users they serve) want to deviate from these profiles? Are they robust to small changes in the game or in the executed strategies? Could there be better asymmetric strategy profiles nearby? Our results yield a mix of good and bad news. Theorems 3.2 and 4.3 are good news, showing that even local optima in symmetric strategy space are (global) Nash equilibria in a robust sense. So, with respect to unilateral deviations among team members, symmetric optima are relatively stable strategies. On the other hand, Theorem 5.5 is perhaps bad news, because it shows that a broad class of symmetric local optima are unstable when considering joint deviations in asymmetric strategy space (Section 5). Furthermore, our empirical results with learning algorithms in GAMUT suggest that these unstable solutions may not be uncommon in practice (Section 6.2). \n A PROOFS OF SECTION 3 RESULTS Theorem 3.2. Let G be a common-payoff normal-form game, and let P ⊆ Γ(G) be a subset of the game symmetries Γ(G). Any locally optimal P-invariant strategy profile is a Nash equilibrium. Proof. We proceed by contradiction. Suppose 𝑠 = (𝑠 1 , 𝑠 2 , . . . , 𝑠 𝑛 ) is locally optimal among P-invariant strategy profiles that is not a Nash equilibrium. We will construct an 𝑠 ′ arbitrarily close to 𝑠 with 𝐸𝑈 (𝑠 ′ ) > 𝐸𝑈 (𝑠). Without loss of generality, suppose 𝑠 1 is not a best response to 𝑠 −1 but that the pure strategy of always playing 𝑎 1 is a best response to 𝑠 −1 . 
For an arbitrary probability 𝑝 > 0, consider the modified strategy 𝑠 ′ 1 that plays action 𝑎 1 with probability 𝑝 and follows 𝑠 1 with probability 1 −𝑝. Now, construct 𝑠 ′ = (𝑠 ′ 1 , 𝑠 ′ 2 , . . . , 𝑠 ′ 𝑛 ) as follows: 𝑠 ′ 𝑖 = 𝑠 ′ 𝑖 = 𝑠 ′ 1 if 𝑖 ∈ P (1) 𝑠 ′ 𝑖 = 𝑠 𝑖 otherwise. In words, 𝑠 ′ modifies 𝑠 by having the members of player 1's orbit mix in a probability 𝑝 of playing 𝑎 1 . We claim for all sufficiently small 𝑝 that 𝐸𝑈 (𝑠 ′ ) > 𝐸𝑈 (𝑠). To establish this claim, we break up the expected utility of 𝑠 ′ according to cases of how many players in 1's orbit play the action 𝑎 1 because of mixing in 𝑎 1 with probability 𝑝. In particular, we observe 𝐸𝑈 (𝑠 ′ ) = 𝑝)𝐸𝑈 (𝑠) + 𝐵(𝑚=1, 𝑝)𝐸𝑈 ((𝑠 ′ 1 , 𝑠 2 , . . . , 𝑠 𝑛 )) + 𝐵(𝑚>1, 𝑝)𝐸𝑈 (. . .), where 𝐵(𝑚, 𝑝) is the probability of 𝑚 successes for a binomial random variable on 𝑚 independent events that each have success probability 𝑝 and where 𝐸𝑈 (. . .) is arbitrary. Note that the crucial step in writing this expression is grouping the terms with the coefficient 𝐵(𝑚=1, 𝑝). We can do this because for any player 𝑗 ∈ P (1), there exists a symmetry 𝜌 ∈ Γ(G) with 𝜌 ( 𝑗) = 1. Now, to achieve 𝐸𝑈 (𝑠 ′ ) > 𝐸𝑈 (𝑠), we require 𝐸𝑈 (𝑠) < 𝐵(𝑚 = 1, 𝑝) 𝐵(𝑚 > 0, 𝑝) 𝐸𝑈 ((𝑠 ′ 1 , 𝑠 2 , . . . , 𝑠 𝑛 )) + 𝐵(𝑚 > 1, 𝑝) 𝐵(𝑚 > 0, 𝑝) 𝐸𝑈 (...). We know 𝐸𝑈 ((𝑠 ′ 1 , 𝑠 2 , ..., 𝑠 𝑛 )) > 𝐸𝑈 (𝑠), but we must deal with the case when 𝐸𝑈 (...) is arbitrarily negative. Because lim 𝑝→0 𝐵(𝑚 > 1, 𝑝)/𝐵(𝑚 = 1, 𝑝) = 0, by making 𝑝 sufficiently small, 𝐵(𝑚 = 1, 𝑝)/𝐵(𝑚 > 0, 𝑝) can be made greater than 𝐵(𝑚 > 1, 𝑝)/𝐵(𝑚 > 0, 𝑝) by an arbitrarily large ratio. The result follows. □ \n B OPTIMAL SYMMETRIC POLICY FOR THE RADIO STATION GAME OF EXAMPLE 3.3 We here calculate the optimal Γ(G)-invariant strategy profile for Example 3.3. Let 𝑝 be the probability of broadcasting the weather forecasts. By symmetry of the game and linearity of expectation, the expected utility given 𝑝 is simply four times the expected utility of any individual neighborhood. The value of an individual neighborhood is 0 with probability (1 − 𝑝) 3 , is 1 with probability 𝑝 3 and is 2 with the remaining probability. Hence, the expected utility of a single neighborhood is 𝑝 3 + (1 − (1 − 𝑝) 3 − 𝑝 3 ) • 2 = 2 − 2(1 − 𝑝) 3 − 𝑝 3 . The maximum of this term (and thus the maximum of the overall utility of all neighborhoods) can be found by any computer algebra system to be 𝑝 = 2 − √ 2, which gives an expected utility of 4( √ 2 − 1) ≈ 1.66. To double-check, we can also calculate the symmetric Nash equilibrium of this game. It's easy to see that that Nash equilibrium must be mixed and therefore must make each player (radio station) indifferent about what to broadcast. So let 𝑝 again be the probability with which everyone else broadcasts the weather. The expected utility of broadcasting the weather relative to broadcasting nothing for any of the three relevant neighborhoods is 2(1 − 𝑝) 2 . (Broadcasting the weather lifts the utility of a neighborhood from 0 to 2 if they do not already get the weather. Otherwise, it is useless to air the weather.) The expected utility of broadcasting music again relative to broadcasting nothing is simply 𝑝 2 . We can find the symmetric Nash equilibrium by setting 2(1 − 𝑝) 2 = 𝑝 2 , which gives us the same solution for 𝑝 as before. \n C PROOFS OF SECTION 4 RESULTS Proposition 4.2. Let G be a common-payoff normal-form game, and let 𝑠 * be a locally-optimal P-invariant strategy profile for some subset of game symmetries P ⊆ Γ(G). Suppose 𝐺 ′ is an 𝜖-perturbation of G. Then 𝑠 * is a 2𝜖-Nash equilibrium in G ′ . 
C PROOFS OF SECTION 4 RESULTS
Proposition 4.2. Let $G$ be a common-payoff normal-form game, and let $s^*$ be a locally optimal $P$-invariant strategy profile for some subset of game symmetries $P \subseteq \Gamma(G)$. Suppose $G'$ is an $\epsilon$-perturbation of $G$. Then $s^*$ is a $2\epsilon$-Nash equilibrium in $G'$.
Proof. By Theorem 3.2, $s^*$ is a Nash equilibrium in $G$. After perturbing $G$ by $\epsilon$ to form $G'$, payoffs have increased or decreased by at most $\epsilon$, so the difference between any two actions' expected payoffs has changed by at most $2\epsilon$. □
Theorem 4.3. Let $G$ be a common-payoff normal-form game, and let $s^*$ be a locally optimal $P$-invariant strategy profile for some subset of game symmetries $P \subseteq \Gamma(G)$. Suppose $s$ is a strategy profile with total variation distance $TV(s, s^*) \leq \delta$. Then $s$ is an $\epsilon$-Nash equilibrium with $\epsilon = 4\delta \max_{i \in N, a \in A} |u_i(a)|$.
Proof. Consider the perspective of an arbitrary player $i$. The difference in expected utility of playing any action $a_i$ between the opponent strategy profiles $s_{-i}$ and $s^*_{-i}$ is given by:
$EU_i(a_i, s_{-i}) - EU_i(a_i, s^*_{-i}) = \sum_{a_{-i} \in A_{-i}} s_{-i}(a_{-i}) u_i(a_i, a_{-i}) - \sum_{a_{-i} \in A_{-i}} s^*_{-i}(a_{-i}) u_i(a_i, a_{-i}) \leq \sum_{a_{-i} \in A_{-i}} |u_i(a_i, a_{-i})| \, |s_{-i}(a_{-i}) - s^*_{-i}(a_{-i})| \leq 2\, TV(s, s^*) \max_{i \in N, a \in A} |u_i(a)| \leq 2\delta \max_{i \in N, a \in A} |u_i(a)|.$
In particular, let $a_i$ be an action in the support of $s^*_i$, and let $a'_i$ be any other action. Then, using the above, we have:
$EU_i(a'_i, s_{-i}) - EU_i(a_i, s_{-i}) = \big(EU_i(a'_i, s_{-i}) - EU_i(a'_i, s^*_{-i})\big) + \big(EU_i(a'_i, s^*_{-i}) - EU_i(a_i, s^*_{-i})\big) + \big(EU_i(a_i, s^*_{-i}) - EU_i(a_i, s_{-i})\big) \leq \big(EU_i(a'_i, s_{-i}) - EU_i(a'_i, s^*_{-i})\big) + \big(EU_i(a_i, s^*_{-i}) - EU_i(a_i, s_{-i})\big) \leq 4\delta \max_{i \in N, a \in A} |u_i(a)|,$
where $EU_i(a'_i, s^*_{-i}) - EU_i(a_i, s^*_{-i}) \leq 0$ because $s^*$ is a Nash equilibrium by Theorem 3.2. □
Corollary 4.4. Let $G$ be a common-payoff normal-form game, and let $s^*$ be a locally optimal $P$-invariant strategy profile for some subset of game symmetries $P \subseteq \Gamma(G)$. Suppose $s$ is a strategy profile with Kullback-Leibler divergence satisfying $D_{KL}(s \| s^*) \leq \nu$ or $D_{KL}(s^* \| s) \leq \nu$. Then $s$ is an $\epsilon$-Nash equilibrium with $\epsilon = 2\sqrt{2\nu} \max_{i \in N, a \in A} |u_i(a)|$.
Proof. By Pinsker's inequality [35], we have $TV(s, s^*) \leq \sqrt{\tfrac{1}{2} D_{KL}(s \| s^*)}$. As $TV(s, s^*) = TV(s^*, s)$ and with a similar application of Pinsker's inequality, we have by assumption that $TV(s, s^*) \leq \sqrt{\nu/2}$. Applying Theorem 4.3 with $\delta = \sqrt{\nu/2}$ yields the result. □
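To make the role of Pinsker's inequality in Corollary 4.4 concrete, here is a small Python check (added by us as an illustration; the two strategy vectors are arbitrary) comparing total variation distance with $\sqrt{D_{KL}/2}$.

```python
import numpy as np

def total_variation(p, q):
    # TV distance between two discrete distributions.
    return 0.5 * np.abs(p - q).sum()

def kl_divergence(p, q):
    # D_KL(p || q) for distributions with full support.
    return float(np.sum(p * np.log(p / q)))

# Two mixed strategies over three actions.
s      = np.array([0.5, 0.3, 0.2])
s_star = np.array([0.4, 0.4, 0.2])

tv = total_variation(s, s_star)
kl = kl_divergence(s, s_star)
print(f"TV = {tv:.4f}, sqrt(KL/2) = {np.sqrt(kl / 2):.4f}")  # Pinsker: TV <= sqrt(KL/2)
assert tv <= np.sqrt(kl / 2) + 1e-12
```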
D PROOFS OF SECTION 5 RESULTS
Theorem 5.5. Let $G$ be a non-degenerate common-payoff normal-form game, and let $P \subseteq \Gamma(G)$ be a subset of the game symmetries $\Gamma(G)$. A locally optimal $P$-invariant strategy profile is locally optimal among possibly-asymmetric strategy profiles if and only if it is deterministic.
Proof. Let $s$ be a locally optimal $P$-invariant strategy profile. By Theorem 3.2, $s$ is a Nash equilibrium. Because $G$ is non-degenerate, so is $s$. We prove the claim by proving that (1) if $s$ is deterministic, it is locally optimal in asymmetric strategy space; and (2) if $s$ is mixed, then it is not locally optimal in asymmetric strategy space.
(1) The deterministic case: Let $s$ be deterministic. Now consider a potentially asymmetric strategy profile $s'$. We must show that as $s'$ becomes sufficiently close to $s$, $EU(s') \leq EU(s)$. Let $\epsilon_1, \epsilon_2, \dots, \epsilon_n$ and $\hat{s}_1, \dots, \hat{s}_n$ be such that for $i \in N$, $s'_i$ can be interpreted as following $s_i$ with probability $1 - \epsilon_i$ and following $\hat{s}_i$ with probability $\epsilon_i$, where $s_i \notin \mathrm{supp}(\hat{s}_i)$. Then (similar to the proof of Theorem 3.2), we can write
$EU(s') = \prod_{i \in N} (1 - \epsilon_i)\, EU(s) + \sum_{j \in N} \epsilon_j \prod_{i \in N - \{j\}} (1 - \epsilon_i) \cdot EU(\hat{s}_j, s_{-j}) + \sum_{j, l \in N : j \neq l} \epsilon_j \epsilon_l \prod_{i \in N - \{j, l\}} (1 - \epsilon_i) \cdot EU(\hat{s}_j, \hat{s}_l, s_{-j-l}) + \dots$
The first term is the expected value if everyone plays $s$, the second term is the sum over the possibilities of one player $j$ deviating to $\hat{s}_j$, and so forth. We now make two observations. First, because $s$ is a Nash equilibrium, the expected utilities $EU(\hat{s}_j, s_{-j})$ in the second term are all at most as big as $EU(s)$. Now consider any later term corresponding to the deviation of some set $C$ containing at least two players $i, j$. Note that it may be that $EU(\hat{s}_C, s_{-C}) > EU(s)$. However, this term is multiplied by $\epsilon_i \epsilon_j$. Thus, as the $\epsilon$ go to 0, the significance of this term in the average vanishes in comparison to that of the terms corresponding to the deviation of just $i$ and of just $j$, which are multiplied only by $\epsilon_i$ and $\epsilon_j$, respectively. By non-degeneracy, we have $EU(\hat{s}_i, s_{-i}) < EU(s)$ or $EU(\hat{s}_j, s_{-j}) < EU(s)$. Thus, if the $\epsilon_i$ are small enough, the overall sum is less than $EU(s)$.
(2) The mixed case: Let $s$ be mixed. We proceed by constructing a strategy profile $s'$ that is arbitrarily close to $s$ with $EU(s') > EU(s)$. Let $m$ be the largest integer such that for all subsets of players $C \subseteq N$ with $|C| \leq m$, the expected payoff is constant across all joint deviations to $a_i \in \mathrm{supp}(s_i)$ for all $i \in C$, i.e., such that $EU(a_C, s_{-C}) = EU(s)$ for all $a_C \in \mathrm{supp}(s_C)$. As $s$ is a non-degenerate Nash equilibrium, $1 \leq m < n$. By definition of $m$, there exists a subset of players $C \subset N$ with $|C| = m$ and a choice of actions $a_C \in \mathrm{supp}(s_C)$ such that $EU(a_j, a_C, s_{-j-C})$ is not constant across the available actions $a_j \in A_j$ for some player $j \in N - C$. Denote player $j$'s best response to the joint deviation $a_C$ as $a^*_j \in \mathrm{argmax}_{a_j} EU(a_j, a_C, s_{-j-C})$, and note that $EU(a^*_j, a_C, s_{-j-C}) > EU(a_C, s_{-C}) = EU(s)$. To construct $s'$, modify $s$ by letting player $j$ mix according to $s_j$ with probability $1 - \epsilon$ and play action $a^*_j$ with probability $\epsilon$. Similarly, let each player $i \in C$ mix according to $s_i$ with probability $1 - \epsilon$ and play their action $a_i$ specified by $a_C$ with probability $\epsilon$. Because we allow $\epsilon > 0$ to be arbitrarily small, all we have left to show is $EU(s') > EU(s)$. Observe as before that we can break $EU(s')$ up into cases based on the number of players who deviate according to the modified part of their strategy with probability $\epsilon$:
$EU(s') = \prod_{k \in C \cup \{j\}} (1 - \epsilon)\, EU(s) + \sum_{l \in C \cup \{j\}} \epsilon \prod_{k \in C \cup \{j\} : k \neq l} (1 - \epsilon)\, EU(a_l, s_{-l}) + \dots + \prod_{k \in C \cup \{j\}} \epsilon \; EU(a^*_j, a_C, s_{-j-C}).$
By construction, every value in the expected value calculation $EU(s')$ is equal to $EU(s)$ except for the last value $EU(a^*_j, a_C, s_{-j-C})$, which is greater than $EU(s)$. We conclude $EU(s') > EU(s)$. □

E GAMUT DETAILS AND ADDITIONAL EXPERIMENTS
E.1 GAMUT games
In Section 6.1, we analyzed all three classes of symmetric GAMUT games: RandomGame, CoordinationGame, and CollaborationGame. Below, we give a formal definition of these game classes.
Definition E.1. A RandomGame with $|N|$ players and $|A|$ actions assumes $A_i = A_j$ for all $i, j$ and draws a payoff from $\mathrm{Unif}(-100, 100)$ for each unordered action profile $a \in A$.
Definition E.2. A CoordinationGame with $|N|$ players and $|A|$ actions assumes $A_i = A_j$ for all $i, j$. For each unordered action profile $a \in A$ with $a_i = a_j$ for all $i, j$, it draws a payoff from $\mathrm{Unif}(0, 100)$; for all other unordered action profiles, it draws a payoff from $\mathrm{Unif}(-100, 0)$.
Definition E.3. A CollaborationGame with $|N|$ players and $|A|$ actions assumes $A_i = A_j$ for all $i, j$. For each unordered action profile $a \in A$ with $a_i = a_j$ for all $i, j$, the payoff is 100; for all other unordered action profiles, it draws a payoff from $\mathrm{Unif}(-100, 99)$.
Note that these games define payoffs for each unordered action profile because the games are totally symmetric (Definition 2.3). Table 2 gives illustrative examples.
E.3 Replicator dynamics
Consider a game where all players share the same action set, i.e., with $A_i = A_j$ for all $i, j$, and consider a totally symmetric strategy profile $s = (s_1, s_1, \dots, s_1)$. In the replicator dynamic, each action can be viewed as a species, and $s_1$ defines the distribution of each individual species (action) in the overall population (of actions). At each iteration of the replicator dynamic, the prevalence of an individual species (action) grows in proportion to its relative fitness in the overall population (of actions). In particular, the replicator dynamic evolves $s_1(a)$ over time $t$ for each $a \in A_1$ as follows:
$\frac{d}{dt} s_1(a) = s_1(a) \left[ EU(a, s_{-1}) - EU(s) \right].$
To simulate the replicator dynamic with Euler's method, we need to choose a stepsize and a total number of iterations. Experimentally, we found the fastest convergence with a stepsize of 1, and we found that 100 iterations sufficed for convergence; see Figure 2. For good measure, we ran 10,000 iterations of the replicator dynamic in all of our experiments. We are interested in the replicator dynamic for two reasons. First, it is a model for how agents in the real world may collectively arrive at a symmetric solution to a game (e.g., through evolutionary pressure). Second, it is a learning algorithm that performs local search in the space of symmetric strategies. In our experiments of Appendix E.5, we find that using the replicator dynamic as an optimization algorithm is competitive with Sequential Least SQuares Programming (SLSQP), a local search method from the constrained optimization literature [17, 40].
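The following Python sketch (ours, for illustration) runs the Euler-discretized replicator dynamic on a small totally symmetric two-player common-payoff game; the random payoff table here is a stand-in, rescaled to $[0, 1]$ as mentioned for the paper's own simulation.

```python
import numpy as np

def replicator_step(x, payoff, step=1.0):
    """One Euler step of the single-population replicator dynamic.

    x      : current symmetric strategy (probability vector over actions)
    payoff : symmetric common-payoff matrix for a 2-player game, with
             payoff[a, b] = payoff when the players play actions a and b
             (payoffs assumed rescaled to [0, 1])
    """
    fitness = payoff @ x            # EU(a, s_-1) for each action a
    average = x @ fitness           # EU(s)
    return x + step * x * (fitness - average)

rng = np.random.default_rng(0)
A = 2                               # two actions, as in the smallest games considered
u = rng.uniform(0.0, 1.0, size=(A, A))
u = (u + u.T) / 2                   # make the payoff table symmetric

x = np.full(A, 1.0 / A)             # start from the uniform mixture
for _ in range(100):                # 100 iterations sufficed for convergence above
    x = replicator_step(x, u)

print("converged symmetric strategy:", x)
print("expected common payoff:", x @ u @ x)
```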
E.4 What fraction of symmetric optima are local optima in possibly-asymmetric strategy space?
As discussed in Section 6.2, we would like to get a sense of how often symmetric optima are stable in the sense that they are also local optima in possibly-asymmetric strategy space (see Section 5). Table 3 shows in what fraction of games the best solution we found is unstable.
E.5 How often do SLSQP and the replicator dynamic find an optimal solution?
As discussed in Section 6.3, Table 4 and Table 5 show how often SLSQP finds an optimal solution, while Table 6 and Table 7 show how often the replicator dynamic finds an optimal solution.
E.6 How costly is payoff perturbation under the simultaneous best response dynamic?
When a game is not stable in the sense of Section 5, we would like to understand how costly the worst-case $\epsilon$-perturbation of the game can be. (See Definition 4.1 for the definition of an $\epsilon$-perturbation of a game.) In particular, we study the case in which individuals simultaneously update their strategies in possibly-asymmetric ways by defining the following simultaneous best response dynamic:
Definition E.4. The simultaneous best response dynamic at $s$ updates from strategy profile $s = (s_1, s_2, \dots, s_n)$ to strategy profile $s' = (s'_1, s'_2, \dots, s'_n)$ with every $s'_i$ a best response to $s_{-i}$.
For each of the RandomGames in Section 6.2 whose symmetric optimum $s$ is not a local optimum in possibly-asymmetric strategy space, we compute the worst-case $\epsilon$ payoff perturbation for infinitesimal $\epsilon$. Then, we update each player's strategy according to the simultaneous best response dynamic at $s$. This necessarily leads to a decrease in the original common payoff because the players take simultaneous updates on an objective that, after payoff perturbation, is no longer common. Table 8 reports the average percentage decrease in expected utility, which ranges from 55% to 89%. Our results indicate that simultaneous best responses after payoff perturbation in RandomGames can be quite costly.
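As an illustration of Definition E.4 (our own sketch; the small perturbed game below is invented for the example, not one of the paper's instances), the following Python code computes one simultaneous best-response update in a 2-player game where each player now optimizes a slightly different payoff table, and shows the resulting drop in the original common payoff.

```python
import numpy as np

def best_response(own_payoff, opponent_strategy):
    """Pure best response of a player to the opponent's mixed strategy.

    own_payoff[a, b] = this player's payoff when they play a and the opponent plays b.
    """
    expected = own_payoff @ opponent_strategy
    response = np.zeros(len(expected))
    response[int(np.argmax(expected))] = 1.0
    return response

# A common payoff table and an epsilon-perturbation of it for each player.
u = np.array([[1.0, 0.0],
              [0.0, 1.0]])
eps = 0.1
u1 = u + np.array([[0.0, eps], [0.0, 0.0]])   # player 1's perturbed payoffs
u2 = u + np.array([[0.0, 0.0], [eps, 0.0]])   # player 2's perturbed payoffs (indexed [a2, a1])

# Start from the symmetric mixed profile (1/2, 1/2) for both players.
s1 = s2 = np.array([0.5, 0.5])

# Simultaneous best response: both players update against the *old* profile.
s1_new = best_response(u1, s2)
s2_new = best_response(u2, s1)

print("updated profile:", s1_new, s2_new)
print("original common payoff after the joint update:", s1_new @ u @ s2_new)  # 0.0: miscoordination
```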
F THE COMPUTATIONAL COMPLEXITY OF FINDING THE SYMMETRIES OF A GAME
In this section, we analyze the computational complexity of finding the symmetries of a common-payoff game. In general, symmetries as defined in Definition 2.1 can be found in exponential time in the number of players. Therefore, if we represent the game explicitly as a full payoff matrix, then the symmetries can be found in polynomial time in the size of the input. However, if we can represent the game more efficiently by giving only the non-zero entries of the payoff matrix, the problem becomes graph isomorphism-complete, i.e., polynomial-time equivalent to the graph isomorphism problem, which is neither known to be solvable in polynomial time nor known to be NP-hard [see 11, for an overview]. We also show (in Section F.3) that if we consider a more general notion of game symmetry that permutes actions in addition to players, the computational problem becomes graph isomorphism-complete even on an explicit payoff matrix representation.
F.1 The hypergraph automorphism problem
We here introduce the hypergraph automorphism problem and some existing results about it. In the next section, we will prove our results by relating the game automorphism problem (i.e., the problem of finding the symmetries of a game) to the hypergraph automorphism problem. A hypergraph is a pair $(V, E)$, where $V$ is a (finite) set of vertices and $E \subseteq 2^V$ is a set of hyperedges. A symmetry or automorphism of a hypergraph is a bijection $\rho : V \to V$ s.t. for each set of vertices $e \subseteq V$, we have $e \in E$ if and only if $\rho(e) \in E$, where $\rho(e) = \{\rho(v) \mid v \in e\}$. In other words, for $\rho$ to be a symmetry, it must be the case that any set of vertices $v_1, \dots, v_k$ is connected by a hyperedge if and only if $\rho(v_1), \dots, \rho(v_k)$ are connected by a hyperedge.
Definition F.1. The hypergraph automorphism (HA) problem asks, for a given hypergraph $(V, E)$, to provide a set of symmetries of $(V, E)$ that generate the group of all symmetries of $(V, E)$.
There are two natural ways to represent the edges of a hypergraph. The first is to provide what one would call an adjacency matrix in the case of a regular graph: we give a table of bits that specifies for each $e \in 2^V$ whether $e \in E$. That is, for each set of vertices, we specify whether there is a hyperedge connecting that set of vertices. The downside of this representation style is that it always costs $O(2^{|V|})$ bits. An alternative is to explicitly list $E$, such that hypergraphs with few edges can be represented in much less space than $O(2^{|V|})$. We call the former notation non-sparse and the latter sparse.
Lemma F.2. The HA problem on a sparse hypergraph representation is graph isomorphism-complete.
Proof. Mathon [21] shows that the problem of giving the generators of the automorphism group of a given graph is graph isomorphism-complete. So it is left to show that the automorphism problem on sparse hypergraphs is polynomially equivalent to the analogous problem on graphs. Since graphs are hypergraphs, we only need to reduce HA to the graph automorphism problem. This is easy, and the main idea has been noted before; e.g., see the introduction of Arvind et al. [1]. □
Theorem F.3 (Luks, 1999, Theorem 4.2). The HA problem is solvable in $O(c^{|V|})$ for some constant $c$.
In particular, it follows immediately that HA on non-sparse representations is solvable in polynomial time.
Table 3: The fraction of games whose symmetric optima are mixed. By Theorem 5.5, these symmetric equilibria are the ones unstable in the sense of Section 5. Numbers in the table were empirically determined from 100 randomly sampled games per GAMUT class.
Table 8: The average decrease in expected utility that worst-case infinitesimal asymmetric payoff perturbations cause to unstable symmetric optima. To get these numbers, we first perturb payoffs in the 100 RandomGames from Section 6.2 whose symmetric optima $s$ are not local optima in possibly-asymmetric strategy space. Then, in each perturbed game, we compute a simultaneous best-response update to $s$ and record its decrease in expected utility.
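As a concrete (if brute-force) illustration of the hypergraph automorphism definitions in Appendix F.1, this Python sketch of ours enumerates all vertex permutations of a tiny sparse hypergraph and keeps the ones that are automorphisms; it is exponential in $|V|$, in line with the remarks above about the cost of naive enumeration.

```python
from itertools import permutations

def is_automorphism(perm, edges):
    """Check whether the mapping v -> perm[v] preserves the hyperedge set."""
    mapped = {frozenset(perm[v] for v in e) for e in edges}
    return mapped == edges

def automorphisms(num_vertices, edges):
    """Brute-force all automorphisms of a sparse hypergraph on {0, ..., n-1}."""
    edges = {frozenset(e) for e in edges}
    found = []
    for p in permutations(range(num_vertices)):
        if is_automorphism(p, edges):
            found.append(p)
    return found

# A small hypergraph: vertices 0..3, one 3-vertex hyperedge and one 2-vertex hyperedge.
E = [{0, 1, 2}, {2, 3}]
for rho in automorphisms(4, E):
    print(rho)   # the identity and the swap of vertices 0 and 1
```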
F.2 Polynomial-time equivalence
Recall from the main text that, for the purpose of our paper, a symmetry of an $n$-player (common-payoff) game $(A, u)$ is a permutation $\rho : \{1, \dots, n\} \to \{1, \dots, n\}$ s.t. for all pure strategy profiles $a \in A$, $u(a_1, \dots, a_n) = u(a_{\rho(1)}, \dots, a_{\rho(n)})$. In particular, we do not consider permutations of the actions.
Definition F.4. The (common-payoff) game automorphism (GA) problem asks us to compute, for a given common-payoff game, a generating set of the symmetries of the game.
As with hypergraphs, we distinguish two representations for a game. A sparse representation lists only the non-zero payoffs. A non-sparse representation gives the payoff for each $a \in A$. As before, the downside of the full payoff table representation is that its size is exponential in the number of players.
Theorem F.5. The sparse/non-sparse representation HA problem is polynomial-time equivalent to the sparse/non-sparse representation GA problem.
By polynomial time, we here mean time bounded by a polynomial in the number of players and the size in bits of the given instance.
Proof. HA→GA: We first reduce the HA problem to the GA problem, which is the easier direction. We will use the same construction for both the sparse-to-sparse and the non-sparse-to-non-sparse case. Take a given hypergraph $(V, E)$. WLOG assume $V = \{1, \dots, n\}$. Then we construct an $n$-player game in which each player has two actions, $a_0$ and $a_1$. For any $M \subseteq \{1, \dots, n\}$, let $\mathbf{a}_M$ be the pure strategy profile in which the set of players playing $a_1$ is exactly $M$. Then let the payoff $u(\mathbf{a}_M)$ be 1 if $M \in E$ and 0 otherwise. We now show that this reduction is valid by showing that the game and the hypergraph have the same symmetries. Let $\rho$ be a bijection. Then: $\rho$ is a symmetry of $(V, E)$
iff $\forall e \in 2^V : e \in E \iff \rho(e) \in E$ (2)
iff $\forall M \in 2^V : u(\mathbf{a}_M) = 1 \iff u(\mathbf{a}_{\rho(M)}) = 1$ (3)
iff $\forall M \in 2^V : u(\mathbf{a}_M) = u(\mathbf{a}_{\rho(M)})$ (4)
iff $\forall \mathbf{a} \in A : u(a_1, \dots, a_n) = u(a_{\rho(1)}, \dots, a_{\rho(n)})$
iff $\rho$ is a symmetry of $(A, u)$.
It is easy to see that this construction can be performed in polynomial (indeed linear) time for both sparse and non-sparse representations.
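The HA→GA direction is simple enough to write down directly. The sketch below (our illustration; the helper names are ours) builds the 2-action common-payoff game from a hypergraph, with actions encoded as 0 and 1, and brute-force checks whether a candidate player permutation is a game symmetry.

```python
from itertools import product

def game_from_hypergraph(num_vertices, edges):
    """Build the n-player, 2-action common-payoff game of the HA -> GA reduction.

    Players correspond to vertices; a pure profile maps to the set of players
    playing action 1, and its payoff is 1 exactly when that set is a hyperedge.
    """
    edge_set = {frozenset(e) for e in edges}

    def payoff(profile):  # profile is a tuple of 0/1 actions, one per player
        played_one = frozenset(i for i, a in enumerate(profile) if a == 1)
        return 1 if played_one in edge_set else 0

    return payoff

def is_game_symmetry(perm, payoff, num_players):
    """Check u(a_1, ..., a_n) == u(a_perm(1), ..., a_perm(n)) for all pure profiles."""
    for profile in product((0, 1), repeat=num_players):
        permuted = tuple(profile[perm[i]] for i in range(num_players))
        if payoff(profile) != payoff(permuted):
            return False
    return True

u = game_from_hypergraph(4, [{0, 1, 2}, {2, 3}])
print(is_game_symmetry((1, 0, 2, 3), u, 4))  # True: swapping vertices 0 and 1 is an automorphism
print(is_game_symmetry((0, 1, 3, 2), u, 4))  # False: swapping 2 and 3 breaks the hyperedges
```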
GA→HA: We now reduce in the opposite direction. This is more complicated, and we therefore provide only a sketch. Consider an $n$-player game $(A, u)$. We construct the hypergraph as follows. First, for each player $i$, we generate a vertex. We also generate $\lceil \log_2(|A_i|) \rceil$ vertices that we use to encode $A_i$, player $i$'s actions, and connect them with the vertex representing $i$. For players $i, j$ that have the same action label sets, this encoding must be done consistently for $i$ and $j$. We also need to add some kind of structure to ensure that symmetries of the hypergraph can only map the $k$-th action-encoding vertex of player $i$ onto the $k$-th action-encoding vertex of a player $j$ that has the same action label set as $i$. Next, we represent the payoff function $u$. To do so, we introduce $\lceil \log_2(|u(A) - \{0\}|) \rceil$ payoff-encoding vertices. Note that $|u(A) - \{0\}|$ is the number of distinct non-zero payoffs of the game. To encode $u(A) - \{0\}$ we therefore need $\lceil \log_2(|u(A) - \{0\}|) \rceil$ bits. We connect these bits in such a way that any symmetry must map each of them onto itself. We fix some binary encoding of $u(A) - \{0\}$. For instance, say the non-zero payoffs of the game are $\{-3, -1, 7, 8, 10, 11, 13, 100\}$. Then we need three bits, and might encode them as $-3 \mapsto 000$, $-1 \mapsto 001$, $7 \mapsto 010$, $8 \mapsto 011$, and so forth. For each $\mathbf{a} \in A$ with $u(\mathbf{a}) \neq 0$, we then add a hyperedge that contains, for each player $i$, the action-encoding vertices corresponding to $a_i$, and those payoff-encoding vertices whose bits together represent the payoff. (So, for example, if the payoff is encoded by 011, then the hyperedge contains the two lower payoff-encoding vertices. Similarly for the action-encoding vertices.) We omit a proof of the correctness of this reduction. It is left to show that the reduction is polynomial-time for both representation styles. For the sparse representation style, this is trivial because, up to some small number of extra vertices and edges, there is a one-to-one correspondence between hyperedges of the hypergraph and action profiles with non-zero payoffs. On to the non-sparse representation. Clearly each entry of the adjacency matrix can be filled in polynomial (perhaps even constant or logarithmic) time. It is left to show that the adjacency matrix is not too large. In particular, we need to show that the size of the adjacency matrix is polynomial in the size of the payoff matrix. To assess the size of the adjacency matrix, we need to count the number of vertices in the above construction. First, the number of player vertices is $n \leq \log_2(|A_1|) + \dots + \log_2(|A_n|) = \log_2(|A_1| \cdots |A_n|) = \log_2(|A|)$. (The inequality assumes each player has at least two actions.) Second, the number of action-encoding vertices is $\lceil \log_2(|A_1|) \rceil + \dots + \lceil \log_2(|A_n|) \rceil \leq 2\log_2(|A_1|) + \dots + 2\log_2(|A_n|) = 2\log_2(|A|)$. Finally, the number of payoff-encoding vertices is about $\lceil \log_2(|u(A)|) \rceil \leq 2\log_2(|u(A)|) \leq 2\log_2(|A|)$. The overall number of vertices in the above construction is therefore at most $5\log_2(|A|)$. Thus, the size (in number of bits) of the adjacency matrix is bounded by $2^{5\log_2(|A|)} = |A|^5$. Since $|A|$ is a lower bound on the size of the payoff matrix (in bits), this is polynomial in the size of the game's payoff matrix, as required. □
One might wonder: in the non-sparse representation case, why does the reduction to HA not also work if we use a more traditional sense of game symmetries? If it were to work, that would show that GI is polynomial-time solvable! But this does not work (with the proof strategy used above). In the current reduction, actions do not get their own vertices. Thus, even if we dropped the constraint structures that prevent actions from being remapped, a hypergraph automorphism cannot remap, e.g., an action encoded as 11011 to an action encoded as 01010. To express full action relabelings in the hypergraph, it seems that we need to introduce a vertex per action. However, the size of the adjacency matrix then blows up more than polynomially. Combining Lemma F.2 and Theorems F.3 and F.5, we get a characterization of the complexity of the game automorphism problem.
Corollary F.6. GA is solvable in polynomial time on a non-sparse representation and is GI-complete on a sparse representation.
F.3 An alternative notion of game symmetry
As mentioned in the main text, we only consider symmetries that relabel the players, and the above addresses the computational problem resulting from that notion of symmetry. As noted in footnote 3, this was done in part to keep notation simple, and an alternative, slightly more complicated notion allows for the actions to be permuted as well. A natural question, then, is what the complexity is of finding this new type of symmetry in a given common-payoff game. In this case, the answer is that finding symmetries is GI-complete regardless of how the game is represented; this follows almost immediately from existing ideas of Mathon [21] and Gabarró et al. [10]. A player-action (PA) symmetry of a game is a pair of a bijection $\rho : \{1, \dots, n\} \to \{1, \dots, n\}$ on players and a family of bijections $\tau_i : A_{\rho(i)} \to A_i$ s.t. for all pure strategy profiles $(a_1, \dots, a_n)$, $u(a_1, \dots, a_n) = u(\tau_1(a_{\rho(1)}), \dots, \tau_n(a_{\rho(n)}))$. So the idea in this new definition is that $\tau_i$ translates the action names from those of player $\rho(i)$ to those of player $i$. Define the PAGA problem analogously to the GA problem above, as finding a generating set of the PA symmetries of a given common-payoff game. This time, the complexity is independent of whether the game is represented sparsely or not.
Theorem F.7. The PAGA problem is GI-complete.
Proof. PAGA→GI: For their proof of GI-completeness of the game isomorphism problem, Gabarró et al.
[10] sketch how a general-sum game can be represented as a graph in a way that maintains the isomorphisms. In particular, we can therefore represent a single common-payoff game as a graph in a way that maintains PA symmetries. We have thus given a sketch of a polynomial-time reduction from PAGA to the problem of finding the automorphisms of a graph. This latter problem can in turn be reduced in polynomial time to the graph isomorphism problem, as was shown by Mathon [21].
GI→PAGA: Second, we show GI-hardness. As shown by Mathon [21], it is enough to reduce the graph automorphism problem to the PAGA problem. For this, we can slightly modify the construction of Gabarró et al. [10, Lemma 5]. Specifically, they reduce the graph isomorphism problem to the 4-player general-sum game isomorphism problem, where actions represent vertices and the graph isomorphisms can be recovered from those PA game isomorphisms where the player isomorphism is $\rho = \mathrm{id}$. Obviously, the same construction can be used to reduce the graph automorphism problem to the general-sum game automorphism problem. The only issue is therefore that their construction uses general-sum games, but we can simply encode their payoff vectors as single numbers. In particular, because their payoffs are binary, we might translate $(0,0,0,0) \mapsto 0$, $(0,0,0,1) \mapsto 1$, $(0,0,1,0) \mapsto 2$, and so forth. It is easy to see that the symmetries with $\rho = \mathrm{id}$ remain the same under this transformation. □

G THE COMPUTATIONAL COMPLEXITY OF FINDING OPTIMAL SYMMETRIC STRATEGIES
G.1 Polynomials
A (multivariate) polynomial in $k$ variables is a function
$(x_1, \dots, x_k) \mapsto \sum_{e_1, \dots, e_k \in \{0, \dots, m\}} c_{(e_1, \dots, e_k)} x_1^{e_1} \cdots x_k^{e_k}$
for some $m$, where the $c_{(e_1, \dots, e_k)}$ are some set of real coefficients. The terms $c_{(e_1, \dots, e_k)} x_1^{e_1} \cdots x_k^{e_k}$ for which $c_{(e_1, \dots, e_k)} \neq 0$ are called the monomials of the polynomial. The maxdegree of a monomial $c_{(e_1, \dots, e_k)} x_1^{e_1} \cdots x_k^{e_k}$ is $\max_{i=1,\dots,k} e_i$. The maxdegree of a polynomial is the maximum of the maxdegrees of its monomials. Similarly, the total degree of a monomial $c_{(e_1, \dots, e_k)} x_1^{e_1} \cdots x_k^{e_k}$ is $\sum_{i=1}^k e_i$. The total degree of a polynomial is the maximum of the total degrees of its monomials. The degree of a variable $x_i$ in a monomial $c_{(e_1, \dots, e_k)} x_1^{e_1} \cdots x_k^{e_k}$ is $e_i$. The maxdegree of $x_i$ in a polynomial is the maximum of the degrees of $x_i$ over all of the monomials. We can partition the parameters of a polynomial into vectors and write the polynomial as $f(\mathbf{x}_1, \dots, \mathbf{x}_k)$, where $\mathbf{x}_1, \dots, \mathbf{x}_k$ are real vectors. We define the degree of $\mathbf{x}_i$ in a monomial as the sum of the degrees of the entries of $\mathbf{x}_i$ in the monomial. We define the maxdegree of $\mathbf{x}_i$ in the polynomial as the maximum of the degrees of $\mathbf{x}_i$ in the polynomial's monomials. In the following, we will interpret the set $\Delta(A_i)$ of probability distributions over $A_i$ as the set of $|A_i|$-dimensional vectors of non-negative reals whose entries sum to one. We will index these vectors by $A_i$ (rather than by the numbers $1, \dots, |A_i|$). The sets $\Delta(A_i)$ are also called unit simplices.
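To make these degree notions concrete, here is a short Python sketch (ours) that represents a polynomial sparsely as a map from exponent tuples to coefficients and computes its maxdegree and total degree; the example polynomial is the $f(x_1, x_2) = x_1^4 - 3x_2$ used later in Appendix G.3.

```python
# Sparse polynomial representation: {(e_1, ..., e_k): coefficient}.
# Example: f(x1, x2) = x1**4 - 3*x2.
f = {(4, 0): 1.0, (0, 1): -3.0}

def maxdegree(poly):
    # Maximum over monomials of the largest single-variable exponent.
    return max(max(exponents) for exponents, c in poly.items() if c != 0)

def total_degree(poly):
    # Maximum over monomials of the sum of exponents.
    return max(sum(exponents) for exponents, c in poly.items() if c != 0)

def evaluate(poly, point):
    total = 0.0
    for exponents, c in poly.items():
        term = c
        for x, e in zip(point, exponents):
            term *= x ** e
        total += term
    return total

print(maxdegree(f))             # 4
print(total_degree(f))          # 4
print(evaluate(f, (1.0, 0.5)))  # 1 - 1.5 = -0.5
```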
G.2 Optimizing symmetric strategies as maximizing polynomials
It is immediately obvious that in a symmetric game, the expected utility as a function of the probabilities that each of the orbits assigns to each of the actions is a polynomial over a Cartesian product of unit simplices. Formally:
Proposition G.1. Let $G$ be an $n$-player game and $P \subseteq \Gamma(G)$ be a subset of the game symmetries of $G$. Let the orbits of $P$ be $M_1, \dots, M_k$. Further, let the set of actions of orbit $i$ be $A_i$. Then the expected utility function over $P$-invariant strategy profiles of $G$ is a polynomial over $\Delta(A_1) \times \dots \times \Delta(A_k)$ with a maxdegree of (at most) $\max_i |M_i|$ and a total degree of (at most) $n$. This polynomial can be created in polynomial time in the size of a sparse or non-sparse (as per Appendix F.2) representation of the game.
It follows that we can use algorithms for optimizing polynomials to find optimal symmetric strategies, and that positive results on optimizing polynomials transfer to finding optimal mixed strategies. Unfortunately, these results are generally somewhat cumbersome to state. This is because the optimum can in general not be represented exactly algebraically, even using $n$-th roots, as implied by the Abel-Ruffini theorem. Positive results must therefore be given in terms of approximations of the optimal solution. One striking result from the literature is that, roughly speaking, for a fixed number of variables, the optimal solution can be approximated in polynomial time [16, Section 6.1]. Translated to our setting, this means that the optimal symmetric strategy can be approximated in polynomial time if we keep constant the number of orbits and the number of actions available to each orbit, but potentially increase the number of players in each orbit. For more discussion of the complexity of optimizing polynomials on unit simplices, see de Klerk [6].
G.3 Expressing polynomials as symmetric games
We now show that, conversely, for any polynomial over a Cartesian product of simplices there exists a symmetric game whose expected utility term is exactly that polynomial. However, depending on how we represent polynomials and how we represent games, the size of the game may blow up exponentially. We first show that each polynomial over $\Delta(A_1) \times \dots \times \Delta(A_k)$ can be rewritten in such a way that each input $\mathbf{x}_i$ appears in the same degree in all monomials.
Lemma G.2. Let $f(\mathbf{x}_1, \dots, \mathbf{x}_k)$ be a polynomial on real vectors of dimensions $|A_1|, \dots, |A_k|$. Then there exists a polynomial $g$ on the same inputs s.t. for all $(\mathbf{x}_1, \dots, \mathbf{x}_k) \in \Delta(A_1) \times \dots \times \Delta(A_k)$, $g(\mathbf{x}_1, \dots, \mathbf{x}_k) = f(\mathbf{x}_1, \dots, \mathbf{x}_k)$, and the degree of every $\mathbf{x}_i$ in all monomials of $g$ is the maxdegree of $\mathbf{x}_i$ in $f$.
Proof. Consider any monomial $\hat{f}$ of $f$ in which $\mathbf{x}_i$ does not have its maxdegree. Then for all $(\mathbf{x}_1, \dots, \mathbf{x}_k) \in \Delta(A_1) \times \dots \times \Delta(A_k)$,
$\hat{f}(\mathbf{x}_1, \dots, \mathbf{x}_k) = \Big( \sum_{a_i \in A_i} x_{i, a_i} \Big) \hat{f}(\mathbf{x}_1, \dots, \mathbf{x}_k)$,
since the entries of $\mathbf{x}_i$ sum to one on the simplex. Notice that the right-hand side is the sum of $|A_i|$ monomials in which $\mathbf{x}_i$ occurs in degree 1 plus the degree in which it occurs in $\hat{f}$. We can iterate this transformation until we arrive at the desired $g$. □
Note, however, that if we take a given polynomial represented as a sum of monomials, e.g., $f(x_1, x_2) = x_1^4 - 3x_2$, and rewrite it as outlined in the Lemma and its proof, the size may blow up exponentially. E.g., $f(x_1, x_2) = x_1^4 - 3x_2 = x_1^4 - 3(x_1 + x_2)^3 x_2$, and $(x_1 + x_2)^3$ expands into a sum of $2^3 = 8$ terms. However, in some table-of-coefficients representations of polynomials, the size of the instance does not change at all and the transformation can be performed in polynomial time in the input. For example, this is the case if $k = 1$ and we represent a polynomial as a table of the coefficients of all terms $x_1^{e_1} \cdots x_k^{e_k}$ where $e_1 + \dots + e_k$ is at most the polynomial's maxdegree.
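The rewriting step in the proof of Lemma G.2 is easy to mechanize. The Python sketch below (ours) homogenizes a sparse polynomial over a single simplex by repeatedly multiplying low-degree monomials by the sum of the simplex coordinates, and checks on a sample point that the value is unchanged; it covers only the $k = 1$ case of the lemma and uses the same example polynomial as above.

```python
from collections import defaultdict
from math import prod

def homogenize(poly, dim):
    """Rewrite a polynomial over the (dim-1)-simplex so every monomial has the
    same total degree (k = 1 case of Lemma G.2). poly maps exponent tuples to
    coefficients; multiplying by (x_1 + ... + x_dim) == 1 preserves the value."""
    target = max(sum(e) for e in poly)
    current = dict(poly)
    while any(sum(e) < target for e in current):
        next_poly = defaultdict(float)
        for exponents, coeff in current.items():
            if sum(exponents) == target:
                next_poly[exponents] += coeff
            else:
                for j in range(dim):  # multiply the monomial by x_j, summed over j
                    bumped = list(exponents)
                    bumped[j] += 1
                    next_poly[tuple(bumped)] += coeff
        current = dict(next_poly)
    return current

def evaluate(poly, x):
    return sum(c * prod(xi ** e for xi, e in zip(x, exps)) for exps, c in poly.items())

# f(x) = x1^4 - 3*x2 on the simplex {(x1, x2) : x1 + x2 = 1, x >= 0}.
f = {(4, 0): 1.0, (0, 1): -3.0}
g = homogenize(f, dim=2)
point = (0.3, 0.7)
print(len(g), "monomials after homogenization")
print(evaluate(f, point), "==", evaluate(g, point))  # equal on the simplex
```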
Once we have a polynomial of the structure described in Lemma G.2, we can transform it into a game:
Proposition G.3. Let $f(\mathbf{x}_1, \dots, \mathbf{x}_k)$ be a polynomial in which each $\mathbf{x}_i$ appears in the same degree in all monomials. Then we can construct a game $G$ with symmetries $P$ that create $k$ orbits, where the number of players in orbit $i = 1, \dots, k$ is the degree of $\mathbf{x}_i$ in $f$ and the number of actions for the players in orbit $i$ is the number of entries of $\mathbf{x}_i$.
Proof. Consider games $G$ with orbits $M_1, \dots, M_k$ of the specified sizes and sets of actions $A_1, \dots, A_k$ also of the specified sizes, where specifically the players in each $M_i$ are totally symmetric. Such a game is fully specified as follows. For each family of numbers $n_{1,1}, \dots, n_{1,|A_1|}, \dots, n_{k,1}, \dots, n_{k,|A_k|}$ with $n_{i,1} + \dots + n_{i,|A_i|} = |M_i|$, we need to specify the utility $v$ obtained if, for all $i$ and $l$, $n_{i,l}$ players in orbit $i$ play action number $l$ from $A_i$. In the expected utility function of $G$, each such entry creates a summand
$v \cdot \prod_i \binom{|M_i|}{n_{i,1}, \dots, n_{i,|A_i|}} \prod_{i,l} p_{i,l}^{n_{i,l}}$,
where $p_{i,l}$ is the probability with which players in orbit $i$ play action $l$ and $\binom{|M_i|}{n_{i,1}, \dots, n_{i,|A_i|}}$ is a multinomial coefficient. By setting $v$ appropriately, we can thus obtain any monomial with exponents $(n_{i,l})_{i,l}$. By setting the values $v$ for all the different families $(n_{i,l})_{i,l}$ appropriately, we obtain any polynomial in which each $\mathbf{x}_i = (p_{i,l})_l$ appears with the same degree $|M_i|$ in all monomials. □
Note that if the polynomial is represented as a table of coefficients, then this reduction takes linear time in the size of the input. Similarly, if the polynomial is given as a list of only the monomials with non-zero coefficients (all of which satisfy the degree requirement), the reduction can also be done in polynomial time. This in particular gives us the following negative result, translated from the literature on optimizing polynomials:
Corollary G.4. Deciding for a given game $G$ with symmetries $P$ and a given number $K$ whether there is a $P$-invariant profile with expected utility at least $K$ is NP-hard, even for 2-player symmetric games.
Proof. Follows from Proposition G.3 and the NP-hardness of optimizing quadratic polynomials over the unit simplex [6, Section 3.2]. □
Figure 1: The strategy profile landscape of the symmetric laundry game (Figure 1b). Although the symmetric optimum has lower expected utility than the unrestricted optima, total symmetry of the game implies that the symmetric optimum is a Nash equilibrium; this is a special case of Theorem 3.2.
Example 5.6. Consider the 3x3 symmetric game with the following payoff matrix: [matrix not fully recoverable from the extraction; entries include 10, 1, and $1 + \epsilon$].
Figure 2: The magnitude of the replicator dynamics update step averaged over 10,000 RandomGames with 2 players and 2 actions. Although this plot indicates that the replicator dynamics converge by 100 iterations, we ran 10,000 iterations for good measure in all of our experiments.
[Table data panel (b), CoordinationGame, with rows N = 2 to 5 and columns |A| = 2 to 5; the numeric entries are garbled in the extraction.]
If a strategy $s_i$ for player $i$ maximizes expected utility given the strategies $s_{-i}$ of all the other players, i.e., if $s_i \in \mathrm{argmax}_{s'_i \in \Delta(A_i)} EU_i(s'_i, s_{-i})$, we call $s_i$ a best response to $s_{-i}$. If each strategy $s_i$ in a strategy profile $s$ is a best response to $s_{-i}$, we call $s$ a Nash equilibrium. A Nash equilibrium $s$ is strict if every $s_i$ is the unique best response to $s_{-i}$.
• If every $s_i$ is a Dirac delta function on some $a_i$, then $s$ is degenerate if at least two players $i$ are indifferent between $a_i$ and some other $a'_i \in A_i - \{a_i\}$.
• Otherwise, if $s$ is mixed, then $s$ is degenerate if for all players $i$ and all $a_{-i} \in \mathrm{supp}(s_{-i})$, the term $EU_i(a_i, a_{-i})$ is constant across $a_i \in \mathrm{supp}(s_i)$.
In Appendix Table 4 / Table 6, we show what fraction of the time any single SLSQP / replicator dynamics run finds the best solution, and in Appendix Table 5 / Table 7, we show what fraction of the time at least 1 of 10 SLSQP / replicator dynamics runs finds the best solution. First, we note that the tables for SLSQP and the replicator dynamics are quite similar, differing by no more than a few percentage points in all cases. So the replicator dynamics, which are used as a model for how populations evolve strategies, can also be used as an effective optimization algorithm. Second, we see that individual runs of each algorithm are up to 93% likely to find the best solution in small RandomGames, but they are less likely (as little as 24% likely) to find the best solution in larger RandomGames and in CoordinationGames. The best of 10 runs, however, finds the best solution at least 87% of the time, indicating that random algorithm restarts benefit symmetric strategy optimization.
Table 2: A payoff matrix with $|N| = 2$ and $A_1 = A_2 = \{\alpha, \beta\}$ to illustrate GAMUT games. In a RandomGame, $u_{\alpha\alpha}$, $u_{\alpha\beta}$, and $u_{\beta\beta}$ are i.i.d. draws from Unif(-100, 100). In a CoordinationGame, $u_{\alpha\alpha}$ and $u_{\beta\beta}$ are i.i.d. draws from Unif(0, 100) while $u_{\alpha\beta}$ is a draw from Unif(-100, 0). In a CollaborationGame, $u_{\alpha\alpha} = u_{\beta\beta} = 100$, and $u_{\alpha\beta}$ is a draw from Unif(-100, 99).
                    Player 2
                    α           β
Player 1    α       u_αα        u_αβ
            β       u_αβ        u_ββ
Table 4: The fraction of single SLSQP runs that achieve the best solution found in our 20 total optimization attempts. Numbers in the table were empirically determined from 100 randomly sampled games per GAMUT class.
         |A|=2   3      4      5
N = 2    1.00   0.99   0.99   0.98
N = 3    1.00   0.99   1.00   0.96
N = 4    1.00   0.96   0.94   0.88
N = 5    0.98   0.90   0.88   0.91
(a) RandomGame
Table 5: The fraction of games in which at least 1 of 10 SLSQP runs achieves the best solution found in our 20 total optimization attempts. Numbers in the table were empirically determined from 100 randomly sampled games per GAMUT class.
         |A|=2   3      4      5
N = 2    0.93   0.81   0.68   0.65
N = 3    0.81   0.70   0.58   0.46
N = 4    0.76   0.58   0.36   0.34
N = 5    0.69   0.43   0.36   0.30
Table 6: The fraction of single replicator dynamics runs that achieve the best solution found in our 20 total optimization attempts. Numbers in the table were empirically determined from 100 randomly sampled games per GAMUT class.
         |A|=2   3      4      5                |A|=2   3      4      5
N = 2    1.00   1.00   1.00   1.00              1.00   1.00   0.99   0.94
N = 3    0.99   1.00   0.95   0.96              1.00   0.97   0.93   0.96
N = 4    1.00   0.98   0.91   0.91              0.99   1.00   0.93   0.92
N = 5    0.98   0.97   0.92   0.87              1.00   0.98   0.96   0.90
(a) RandomGame                                  (b) CoordinationGame
Table 7: The fraction of games in which at least 1 of 10 replicator dynamics runs achieves the best solution found in our 20 total optimization attempts. Numbers in the table were empirically determined from 100 randomly sampled games per GAMUT class.
The following panel reports average percentage decreases in expected utility (cf. Table 8 and Appendix E.6):
         |A|=2    3       4       5
N = 2    58.9%   55.9%   61.8%   64.6%
N = 3    73.7%   70.9%   73.4%   73.7%
N = 4    74.1%   77.4%   78.4%   82.5%
N = 5    77.4%   84.9%   89.9%   87.5%
(a) RandomGame
This is harder to show. Note that a brute-force method that tests all $|V|!$ bijections is super-exponential in $|V|$ and super-polynomial in the problem size $2^{|V|}$.
This condition is relaxed in (weighted) potential games, where the players' payoffs need only imply the same ordering of outcomes [33]; (weighted) potential games are best-response equivalent to common-payoff games [15, 26].
We make this choice to ease notational burden, but we conjecture that our results can be generalized to allow for mappings between actions [14], which we leave for future work.
In this simulation only, we rescaled the RandomGames so that each payoff is a draw from Unif(0, 1).
We analyze the prevalence of instability by running learning algorithms in a suite of symmetric games, and we conclude with results on the complexity of computing game symmetries.", "id": "c006e30787653c5f13dcc49e30b91b2d"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Andrea Bajcsy", "Dylan P Losey", "Marcia K O'malley", "Anca D Dragan"], "title": "Learning from Physical Human Corrections, One Feature at a Time", "text": "Figure 1 : Participant pushes on the robot to teach it to go closer to the table. In the process of giving this correction, the human changes both the robot's distance from table and -inadvertently -the orientation of a cup which the robot is grasping (blue arrows). Typically, the robot would learn about both cup and table features from this one correction (top right). We propose that robots interacting with humans should learn about only one feature at a time (bottom right). correct the robot as it is moving. For example, the robot is moving a fragile cup from a cabinet to the table, and a nearby human notices that the robot is carrying the cup too high above the table: if the cup were to drop from that height, it would likely break! To correct the robot's behavior, the human intuitively pushes the robot's end-effector towards the table to signal their motion preference. Ideally, the human's correction will only affect the cup's distance from the table; in practice, however, human actions are noisy and imperfect [7, 19, 20, 24] , especially when kinesthetically maneuvering robotic manipulators while trying to carefully orchestrate their multiple degrees of freedom [1] . As a consequence, when the person pushes down on the end-effector, they accidentally change not only the robot's distance from the table, but also the orientation of the cup (see Fig. 1 ). This single human interaction has therefore adjusted two task features: the cup's distance from the table and the cup's orientation. From the robot's perspective, it is not immediately clear what the person actually intends: do they (a) want the robot to carry the cup closer to the table, or do they (b) additionally want the robot to carry the cup at a new orientation? State-of-the-art algorithms default to the latter interpretation. Prior work has built on Inverse Reinforcement Learning (IRL) [13, [16] [17] [18] 24] to formalize learning from physical human corrections as an estimation problem: the robot estimates the objective function that it should optimize during the task by treating human corrections as evidence about the objective function's parameters 1 . Under the ideal objective function parameters, the corrected behavior has to have a lower cost than the robot's current behavior [3, 11] . Therefore, when the person's correction changes multiple features -however slightly -a rich hypothesis space will lead to the robot updating its understanding about the importance of all of these features (top right in Fig. 1 ). This traditional approach works well with perfect or near-perfect corrections; however, with real people come aspects of corrections that are not always intended. These unintended corrections lead to unintended learning. In other words, the robot attempts to learn from and alter its behavior based on all the inputs, even those that are superfluous. Returning to our previous example, if all features were updated the robot would learn (correctly) that the cup should be lower, and (incorrectly) that the cup should be carried at a different orientation. 
In general, because of the inherent physical difficulty in simultaneously correcting many degrees of freedom of a robotic arm, learning about all features at once may systematically cause the robot to infer more from the human's corrections than desired. Our insight is that we can alleviate unintended robot learning by focusing the learning on only one feature at a time. For tasks where the human is attempting to change the importance of just one feature, this insight helps the robot reject inadvertent adjustments on the other features (bottom right in Fig. 1 ). But even for tasks in which the human wants to correct several features, learning one feature at a time enables people to break down the task and teach sequentially. Indeed, sequential teaching may come more naturally to people collaborating with robots [21, 22] , and reduces the burden on users to coordinate all aspects of the task simultaneously during each individual correction. Based on our insight, we make the following contributions: Online Feature Identification. As the robot is executing its task, the human collaborator can intervene and provide physical corrections. We formulate the problem of identifying which one feature the person is trying to correct at each time step, derive a solution, and justify a simple approximation for online performance. We hypothesize that this approach will result in a better learning process, with a more accurate objective function being inferred by the robot at each time step, and a better final outcome. User Study Testing One-at-a-Time Learning. After validating our algorithm in 2-D simulations with an approximately optimal human, we put our hypothesis to the test in a user study on a 7-DoF robotic manipulator. These experiments compare one-ata-time and all-at-once learning within a factorial design, across tasks that need just one feature to be corrected, and tasks that need multiple features to be corrected. We find that one-at-a-time learning is especially helpful in the second case, where the person's teaching task is more complex. People also prefer it, finding that the robot is better at understanding their corrections and requires less reteaching. Overall, our work provides a practical improvement for learning objective functions online from physical human-robot interaction. \n ONE-AT-A-TIME OBJECTIVE LEARNING FROM PHYSICAL HUMAN INTERACTION 2.1 Why Learn from Physical Corrections? When a human and robot are collaborating in close proximity, physical interaction -in which the human touches, pushes, pulls, or otherwise guides the robot -is almost inevitable. The way in which a robot responds to such physical human-robot interaction (pHRI) depends on how the robot interprets those corrections. Traditionally, the human's interactions are treated in one of three ways [9] : as disturbances to be rejected [5, 12, 23] , as collisions to be detected and avoided [4] , or as operator signals to be followed by switching into a compliant mode [8, 10, 14] . In all cases, the robot does not learn from the human's actions; once the human stops interacting, the robot resumes its original behavior. In contrast, we argue that interactions are intentional, and therefore informative -the human interacts with the robot because it is doing something wrong, and the human's correction indicates how the robot should behave. Furthermore, since the way in which the robot chose its behavior was by optimizing an objective function, interaction suggests that this objective function was incorrect. 
Thus, rather than stubbornly continuing to optimize the same wrong objective, the robot should instead leverage the human's feedback in order to update its understanding of the objective function.
Learning Problem Statement
Assume the robot starts in some configuration $q^0$ at time $t = 0$. Let $\Xi$ be the space of trajectories beginning at $q^0$ and ending at a feasible goal configuration, where each $\xi \in \Xi$ is a sequence of configurations. Next, let $\Phi : \Xi \to \mathbb{R}^F$ be a vector-valued function mapping trajectories to feature values, with $\Phi_i(\xi)$ signifying the value of the $i$-th feature. Similar to prior IRL work [13, 16, 18, 24], the robot's objective function (here a cost function) is parametrized by $\theta \in \mathbb{R}^F$, which weights the importance of these features along the entire trajectory:
$C(\xi) = \theta \cdot \Phi(\xi)$ (1)
The robot starts off with an initial objective function $\theta^0$ at time $t = 0$, and optimizes this objective function to produce its initial trajectory:
$\xi^0 = \arg\min_{\xi \in \Xi} \theta^0 \cdot \Phi(\xi)$ (2)
After identifying $\xi^0$, the robot starts to execute this initial trajectory. The person interacting with the robot has some desired objective function that they want the robot to optimize, denoted $\theta^*$. The robot does not have access to these parameters; they are internal to the person (and here assumed to be constant). However, at every time step $t$, the person might intervene to move the robot away from its current configuration by some $\Delta q^t$. The robot should then treat the human's correction $\Delta q^t$ as an observation about $\theta^*$, and update its objective from $\theta^t$ to $\theta^{t+1}$, such that this new objective function is closer to $\theta^*$.
All-at-Once Learning
Following [3], we interpret the change in configuration $\Delta q^t$ as an indication of the corrected trajectory, $\xi^t_c$, that the human would prefer the robot to execute:
$\xi^t_c = \xi^t + M^{-1}(0, \dots, \Delta q^t, \dots, 0)^T$ (3)
Here $\xi^t$ is the robot's current trajectory (optimal under $\theta^t$) and $M$ is a matrix that smoothly propagates the local correction $\Delta q^t$ along the rest of the trajectory [6]. Next, based on [11] and [18], we make the core assumption that the corrected trajectory $\xi^t_c$ is better than the current trajectory $\xi^t$ with respect to the ground truth $\theta^*$. Recalling that our objective function is a cost function, this implies:
$\theta^* \cdot \Phi(\xi^t_c) < \theta^* \cdot \Phi(\xi^t)$ (4)
To now find a $\theta^{t+1}$ closer to $\theta^*$, we select a weight vector that is both (a) near the current $\theta^t$ and (b) maximally makes (4) hold:
$\theta^{t+1} = \arg\min_{\theta \in \Theta} \; \theta \cdot \big(\Phi(\xi^t_c) - \Phi(\xi^t)\big) + \frac{1}{2\alpha} \|\theta - \theta^t\|^2$ (5)
Note that $\alpha > 0$. This optimization problem is quadratic in $\theta$, so we take the gradient of (5) and set it equal to 0:
$\nabla_\theta = \Phi(\xi^t_c) - \Phi(\xi^t) + \frac{1}{\alpha}(\theta - \theta^t) = 0$ (6)
Rearranging (6), we finally obtain:
$\theta^{t+1} = \theta^t - \alpha\big(\Phi(\xi^t_c) - \Phi(\xi^t)\big)$ (7)
Interestingly, (7) is the same update rule from co-active learning [11] and online maximum margin planning [18], shown by [3] to be an approximate solution to the partially observable Markov decision process that treats $\theta^*$ as the hidden state and optimizes the cost parametrized by $\theta^*$. This update rule has an intuitive interpretation: if a feature has a higher value in the corrected trajectory than in the current trajectory, (7) decreases the corresponding weight, making it lower-cost, and thus encourages the optimizer to generate subsequent trajectories where that feature also has a higher value.
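A minimal sketch of the all-at-once update in Equation (7), written by us for illustration (the two-feature example and numbers below are stand-ins, not the authors' implementation):

```python
import numpy as np

def all_at_once_update(theta, features_corrected, features_current, alpha=0.1):
    """Gradient-style weight update from Eq. (7): theta^{t+1} = theta^t - alpha * dPhi."""
    return theta - alpha * (features_corrected - features_current)

# Toy 2-feature example: [distance-to-table, cup-orientation].
theta = np.array([0.0, 0.0])
phi_current   = np.array([0.8, 0.1])   # features of the robot's current trajectory
phi_corrected = np.array([0.3, 0.2])   # the human's push lowered the cup but also tilted it

theta_new = all_at_once_update(theta, phi_corrected, phi_current)
print(theta_new)   # both weights change: [0.05, -0.01]
```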
Under this method, the robot updates the weights on all features that the person changed with their correction during the current time step.
One-at-a-Time Learning
A natural solution for restricting the number of learned features might be to switch the regularization term in (5) to the $L_1$ norm [13, 15], which encourages sparsity of the weight update. However, there is no guarantee that this will result in changing just one weight; it may still update all the features that the human corrected, including those that were accidentally changed. In this work, to capture one-at-a-time learning, we instead make a different assumption about the intended corrected trajectory. While the actual corrected trajectory, $\xi^t_c$, might change multiple features, we assume that the human's intended corrected trajectory changes only a single feature. We simplify the intended corrected trajectory into an intended change in features, $\Delta\Phi^t_c$, and impose the constraint that $\Delta\Phi^t_c$ can only have one non-zero entry: this entry represents the feature which the person wants to update. Note that our one-at-a-time strategy does not mean that only one feature ever changes throughout the task. Instead, at every time step $t$ there can be a different intended feature change, and so the person can sequentially change the weights to match their desired objective over multiple corrections. Without loss of generality, assume that the human is attempting to change the $i$-th entry in $\theta^t$, the robot's current feature weights. If the human interacts to only update the weight on the $i$-th feature, then their correction of the robot's current trajectory, $\xi^t$, should change the feature count in the direction $J(\theta_i) = \partial\Phi(\xi^t)/\partial\theta^t_i$. In other words, given that the person is an optimal corrector and that their interaction was meant to change just the weight on the $i$-th feature, we would expect them to correct the trajectory such that they produce a feature difference exactly in the direction $J(\theta_i)$. Realistically, however, human corrections are noisy, even for expert users [2], and will not necessarily induce the optimal feature difference during every correction. Despite these imperfections, we assume that the result of their correction will still noisily optimize the distance (dot product) in the optimal direction. This provides us with an observation model, from which we can find the likelihood of observing a specific feature difference given the one feature which the human is attempting to update:
$P(\Delta\Phi \mid i) \propto e^{J(\theta_i) \cdot \Delta\Phi}$ (8)
Accordingly, for the observed feature difference $\Delta\Phi = \Phi(\xi^t_c) - \Phi(\xi^t)$, the feature which the human is most likely trying to change is:
$i^* = \arg\max_i P\big(\Phi(\xi^t_c) - \Phi(\xi^t) \mid i\big) = \arg\max_i J(\theta_i) \cdot \big(\Phi(\xi^t_c) - \Phi(\xi^t)\big)$ (9)
Using (9), we can estimate which feature the person wanted to update during their physical correction. Next, by leveraging $i^*$ and the observed feature difference, we can reconstruct $\Delta\Phi^t_c$, the human's intended feature difference. Recall that, if the human wanted to only update feature $i^*$, their intended feature difference would ideally be in the direction $J(\theta_{i^*}) = \partial\Phi(\xi^t)/\partial\theta^t_{i^*}$, and so we can choose $\Delta\Phi^t_c \propto J(\theta_{i^*})$. In practice, however, we will simplify this derivative by projecting the actual feature difference induced by the human's interaction onto the $i^*$-th axis, $\Delta\Phi^t_c = (0, \dots, \Phi_{i^*}(\xi^t_c) - \Phi_{i^*}(\xi^t), \dots, 0)^T$.
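To illustrate Equation (9), here is a small Python sketch of ours; the direction vectors $J(\theta_i)$ below are made-up stand-ins rather than the numerically differentiated quantities the paper describes.

```python
import numpy as np

def most_likely_feature(delta_phi, J):
    """Pick i* = argmax_i J(theta_i) . delta_phi  (Eq. 9).

    delta_phi : observed feature difference Phi(xi_c) - Phi(xi)
    J         : array whose i-th row approximates dPhi/dtheta_i
    """
    scores = J @ delta_phi
    return int(np.argmax(scores)), scores

# Two features: [distance-to-table, cup-orientation].
# Assumed (illustrative) correction directions: raising the weight on a cost
# feature mostly lowers that same feature along the re-optimized trajectory.
J = np.array([[-1.0,  0.1],
              [ 0.1, -1.0]])

delta_phi = np.array([-0.5, 0.1])   # the push mostly lowered the cup, slightly tilted it
i_star, scores = most_likely_feature(delta_phi, J)
print(i_star, scores)               # 0 -> the table feature is the intended correction
```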
Thus, once we have identified which feature the person most wants to change during their current interaction, $i^*$, we argue that the intended feature correction should only change this one feature. Evaluating $J(\theta_i)$ requires numerical differentiation, i.e., finding an optimal trajectory at least $F + 1$ times at each time step (where $F$ is the number of features). To make this process run in real time, we approximate $J(\theta_i)$ as proportional to $(0, \dots, 1, \dots, 0)^T$. In other words, we assume that when the $i$-th weight changes, it predominantly causes a change in the $i$-th feature along the corresponding optimal trajectory. Substituting this simplification back into (2.4), we have reduced our method for finding the feature which the human intends to change to a simple yet intuitive heuristic: only the feature that changed the most as a result of the human's correction should be updated. We note, however, that this heuristic has its roots in the more principled approach detailed above. Our update rule now becomes
$\theta^{t+1} = \theta^t - \alpha\, \Delta\Phi^t_c$ (10)
Overall, isolating a single feature at every time step is meant to prevent unintended learning. If the person is trying to correct multiple features, they can still do so: the robot will pick up on what seems like the most dominant feature in the correction, adjust that, and then give the person a chance to correct whatever remains during the next time step. Due to the noisy nature of human corrections, we hypothesize that this one-at-a-time update strategy will lead to shorter trajectories through the learned weight space, which reach the ideal weight more directly, when compared to a strategy that tries to update everything at once. In what follows, we first show some simulation analysis with optimal and noisy humans, and then test our hypothesis in a user study.
SIMULATIONS
In order to better validate and compare the all-at-once and one-at-a-time learning methods described in Section 2, we conducted human-robot interaction simulations. These simulations show that updating one feature per interaction can help prevent unintended learning, particularly when the human interacts sub-optimally.
Setting. We consider a vertical planar environment, where the y-axis corresponds to height above a table and the x-axis is parallel to that table. The simulated robot is attempting to move from a fixed start position, $s$, to a fixed goal position, $g$. The robot is modeled as a single point, and the robot's configuration is its current $(x, y)$ position. A simulated human is standing beside the table near the start position, and physically interacts with the robot to correct its behavior when necessary. The robot does not know the true feature weights of the human's objective function, $\theta^*$, but the robot does know that there are three different features which the human might care about: the length of the robot's trajectory (length), the robot's height above the table (table), and the robot's distance from the human (human). Here the table feature corresponds to the height along the y-axis, since the table is a surface at $y = 0$, and the human feature corresponds to the distance along the x-axis, since the human is standing at $x = 0$. The weight of the length feature is fixed, and the robot learns the relative weights associated with the table and human features over the course of the task. The human's true reward parameter is $\theta^* = [0.5, 0]$, where 0.5 is the true weight associated with table and 0 is the true weight associated with human.
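The planar setting above is easy to prototype. The following Python sketch (ours; the trajectory, feature definitions, and step size are invented for illustration) computes table and human features of a point-robot trajectory and applies the one-at-a-time update from Equation (10), changing only the feature whose value the correction altered most.

```python
import numpy as np

def features(xi):
    """Trajectory features for the planar setting: [table, human].

    xi is an array of (x, y) waypoints; the table is the surface y = 0 and the
    simulated human stands at x = 0, so we sum height and horizontal distance.
    """
    return np.array([xi[:, 1].sum(), xi[:, 0].sum()])

def one_at_a_time_update(theta, phi_corrected, phi_current, alpha=0.05):
    """Eq. (10) with the heuristic J(theta_i) ~ e_i: update only the feature
    that changed the most under the human's correction."""
    delta_phi = phi_corrected - phi_current
    i_star = int(np.argmax(np.abs(delta_phi)))
    intended = np.zeros_like(delta_phi)
    intended[i_star] = delta_phi[i_star]
    return theta - alpha * intended, i_star

# Straight-line trajectory from start (1, 1) to goal (3, 1), 5 waypoints.
xi = np.stack([np.linspace(1, 3, 5), np.full(5, 1.0)], axis=1)

# A (noisy) downward push on the middle waypoint also nudges it toward the human.
xi_corrected = xi.copy()
xi_corrected[2] += np.array([-0.05, -0.4])

theta = np.array([0.0, 0.0])   # [table weight, human weight]
theta, i_star = one_at_a_time_update(theta, features(xi_corrected), features(xi))
print(i_star, theta)           # 0, [0.02, 0.0]: only the table weight is updated
```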
Initially, the robot believes that θ 0 = [0, 0], and so the robot is unaware that it should move closer to the table. Simulated Human. We consider two different simulated humans: (a) an optimal human, who corrects the robot to exactly follow their desired trajectory and (b) a noisy human, who imperfectly corrects the robot's trajectory. At the start of the task, the optimal human identifies a desired trajectory: ξ * H = arg min Ξ θ * • Φ(ξ ). During the task, the human does not change ξ * H , and interacts with the robot to make it follow this desired trajectory. At each time step t the human provides a correction ∆q t that changes the robot's current configuration to the desired configuration, ξ * H (t), but the human only provides this correction if the robot's distance from ξ * H (t) is greater than some acceptable margin of error. In contrast, the noisy human takes actions sampled from a Gaussian distribution: these actions are centered at the optimal human action with a bias in the x-direction. This bias introduces a systematic error, where the noisy human accidentally pulls the robot closer to their body when attempting to significantly correct the vertical table feature. As a result of this noise and bias, the noisy human may unintentionally correct the human feature. Analysis. We performed two different simulations: one with the optimal human (see Fig. 2 ), and one with the noisy human (see Fig. 3 ). When the human optimally corrects the robot's table feature in Fig. 2 , they never unintentionally affect the weight of the human feature, and so all-at-once and one-at-a-time learning both yield the exact same results for the optimal human. By contrast, the noisy human unintentionally corrects the human features at the start of the task (when trying to correct the table features), and, as such, we observed different behavior for all-at-once and one-at-a-time learning in Fig. 3 . Although the robot follows a similar mean trajectory for both learning methods, and eventually converges to the correct feature weights in each case, we observe that all-at-once had a longer learning process and more persistent human interaction. In particular, the length of the mean path in feature space from θ 0 to θ T was 0.57 for all-at-once vs. 0.49 for one-at-a-time; the length of the mean path specifically for the human feature weight was 0.23 for all-at-once vs. 0.001 for one-at-a-time. Recall that the robot was constrained to reach its goal position in 10 steps; we found that, in the all-at-once case, the human interacted with the robot during an average of 5.24 steps, and, in the one-at-a-time case, the human interacted with the robot during an average of 3.56 steps. These simulations showcase that, when the human interacts sub-optimally, their corrections can lead to unintended learning on the robot's part, which the human must then exert additional effort to undo. For the simulation we have described, updating only one feature per time step helps to mitigate accidental learning, demonstrating the potential benefits of our proposed one-at-a-time learning method. \n EXPERIMENTS We conducted an IRB-approved user study to investigate the benefits of one-at-a-time learning. During each experimental task, the robot began with a number of incorrect weights in its objective, and the participants intervened to physically correct the robot. \n Independent Variables We use a 2 by 2 factorial design. 
EXPERIMENTS

We conducted an IRB-approved user study to investigate the benefits of one-at-a-time learning. During each experimental task, the robot began with a number of incorrect weights in its objective, and the participants intervened to physically correct the robot.

Independent Variables

We used a 2-by-2 factorial design. We manipulated the learning strategy with two levels, all-at-once and one-at-a-time, as well as the number of feature weights that needed correction, one feature weight or all the feature weights. In the all-at-once learning strategy, the robot updated all the feature weights from a given interaction with the gradient update from Equation (7) and then replanned a new trajectory with the updated weights. In the one-at-a-time condition, the robot chose the feature that changed the most using Equation (2.4), updated it according to Equation (10), and then replanned a new trajectory with the updated θ.

Dependent Measures

Objective. To analyze the objective performance of the two learning strategies, we split the objective measures into four categories.

Final Learned Reward: These measure how closely the learned reward matched the optimal reward by the end of the trajectory. We measured the dot product between the optimal and final reward vector, DotFinal = θ* · θ_T. We also analyzed the regret of the final learned reward, which is the weighted feature difference between the ideal trajectory and the learned trajectory, RegretFinal = θ* · Φ(ξ_{θ*}) - θ* · Φ(ξ_{θ_T}), as well as the individual feature differences between the ideal trajectory and the trajectory induced by the final learned reward, TableDiffFinal = |Φ_Tb(ξ_{θ*}) - Φ_Tb(ξ_{θ_T})| and CupDiffFinal = |Φ_C(ξ_{θ*}) - Φ_C(ξ_{θ_T})|.

Learning Process: Measures about the learning process, i.e. θ = {θ_0, θ_1, ..., θ_T}, included the average dot product between the true reward and the estimated reward over time, DotAvg = (1/T) Σ_{i=0}^{T} θ* · θ_i. We also measured the length of the θ path through weight space for both the cup weight, θ_C, and the table weight, θ_Tb. Finally, we computed the number of times the cup and table weights were updated away from the optimal θ* (denoted by CupAway and TableAway).

Executed Trajectory: For the actual executed trajectory, ξ_act, we measured the regret, Regret = θ* · Φ(ξ_{θ*}) - θ* · Φ(ξ_act), and the individual table and cup feature differences between the ideal and actual trajectory, TableDiff = |Φ_Tb(ξ_{θ*}) - Φ_Tb(ξ_act)| and CupDiff = |Φ_C(ξ_{θ*}) - Φ_C(ξ_act)|.

Interaction: Interaction measures on the forces applied by the human, {u_H^0, u_H^1, ..., u_H^T}, included the total interaction force, IactForce = Σ_{t=0}^{T} ||u_H^t||_1, and the total interaction time.

Subjective. For each condition, we administered a 7-point Likert scale survey about the participant's interaction experience (see Table 1 for the questions). We separated our survey questions into four scales: success in teaching the robot about the task, correctness of the update, needing to undo corrections because the robot learned something wrong, and ease of undoing.
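The objective measures above are simple functions of the logged weights, features, and forces. The sketch below is illustrative only; the variable names, shapes, and the synthetic values are our assumptions, not the study's data pipeline. It shows how a few of the measures (DotFinal, RegretFinal, DotAvg, Regret, per-feature differences, and IactForce) might be computed.

```python
import numpy as np

def objective_metrics(theta_star, theta_hist, phi_ideal, phi_final, phi_actual, u_hist):
    """Compute a subset of the objective measures described in the text."""
    theta_T = theta_hist[-1]
    return {
        "DotFinal": float(theta_star @ theta_T),
        "RegretFinal": float(theta_star @ phi_ideal - theta_star @ phi_final),
        "DotAvg": float(np.mean([theta_star @ th for th in theta_hist])),
        "Regret": float(theta_star @ phi_ideal - theta_star @ phi_actual),
        "FeatureDiffFinal": np.abs(phi_ideal - phi_final),   # per-feature (table, cup)
        "FeatureDiff": np.abs(phi_ideal - phi_actual),
        "IactForce": float(sum(np.linalg.norm(u, ord=1) for u in u_hist)),
    }

# Tiny synthetic example with two features (table, cup).
theta_star = np.array([0.5, 0.8])
theta_hist = [np.array([0.0, 0.0]), np.array([0.3, 0.5]), np.array([0.45, 0.75])]
phi_ideal, phi_final, phi_actual = np.array([0.2, 0.1]), np.array([0.25, 0.12]), np.array([0.3, 0.15])
u_hist = [np.array([1.0, -0.5]), np.array([0.2, 0.1])]
print(objective_metrics(theta_star, theta_hist, phi_ideal, phi_final, phi_actual, u_hist))
```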
Hypotheses

H1. Updating one feature at a time significantly increases the final learned reward, enables a better learning process, results in lower regret for the executed trajectory, and leads to less interaction effort and time compared to the all-at-once update.

H2. Participants will perceive the robot as more successful at accomplishing the task, as correctly updating its knowledge of the task, as less likely to learn about extraneous aspects of the task, and as easier to correct if it did learn something wrong in the one-at-a-time condition.

Tasks

We designed two experimental household manipulation tasks for the robot to perform in a shared workspace (see Fig. 4 for the setup). For each experimental task, the robot carried a cup from a start to an end pose with an initially incorrect objective. One of the tasks required participants to correct a single aspect of the incorrect objective, while the other required them to correct all parts of the objective. Participants were instructed to physically intervene to correct the robot's behavior during the task. Similar to state-of-the-art methods, all the features in the robot's objective were chosen to be intuitive to a human to ensure that participants could understand how to correct the robot. In Task 1, the robot's objective had only one feature weight incorrect. The robot's default trajectory took a cup from the participant and put it down on the table, but carried the cup too far above the table (top of Fig. 4). In Task 2, all the feature weights started out incorrect in the robot's objective. The robot also took a cup from the participant and put it down on the table, but this time it initially grasped the cup at the wrong angle and also carried the cup too high above the table (bottom of Fig. 4).

Participants

We used a within-subjects design and counterbalanced the order of the conditions during the experiments. In total, we recruited 12 participants (7 female, 4 male, 1 non-binary trans-masculine, aged 18-30) from the campus community, 11 of whom had technical backgrounds and 1 of whom did not. None of the participants had experience interacting with the robot used in our experiments.

Procedure

Before beginning the experiment, participants performed a familiarization task to become comfortable teaching the robot with physical corrections. The robot's original trajectory moved a cup from a shelf to a table, but the robot did not initially care about tilting the cup mid-task. The robot's objective contained only one aspect of the task (cup orientation), and participants had to correct only this one aspect. Afterwards, for each experimental task, participants were shown the robot's default trajectory as well as what their desired trajectory looked like. They were also told which aspects of the task the robot is always aware of (cup orientation and distance of the end-effector to the table), as well as which learning strategy they were interacting with. Participants were told the difference between the two learning strategies in order to minimize in-task learning effects. Note, however, that we did not tell participants to teach the robot in any specific style (such as one aspect at a time), only how the robot reasons about their corrections.

Analysis

Objective. Final Learned Reward. We ran a factorial repeated-measures ANOVA with learning strategy and number of features as factors, and user ID as a random effect, for each of the measures capturing the quality of the final learning outcome.
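For readers who want to reproduce this style of analysis on their own logs, a minimal sketch using statsmodels is shown below. It assumes a hypothetical long-format table (results.csv, with columns user, strategy, num_features, DotFinal), and it substitutes a standard two-way repeated-measures ANOVA plus a Tukey HSD post hoc for the exact mixed-effects formulation described above, so it should be read as an approximation of the analysis rather than the authors' code.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical long-format results: one row per (participant, strategy, task).
df = pd.read_csv("results.csv")  # columns: user, strategy, num_features, DotFinal

# Two within-subject factors: learning strategy and number of corrected features.
anova = AnovaRM(df, depvar="DotFinal", subject="user",
                within=["strategy", "num_features"]).fit()
print(anova)

# Post hoc comparison across the four strategy x task cells (Tukey HSD).
df["cell"] = df["strategy"] + "_" + df["num_features"].astype(str)
print(pairwise_tukeyhsd(df["DotFinal"], df["cell"], alpha=0.05))
```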
Fig. 5 summarizes our findings about the final learned weights for each learning strategy. For the final dot product with the true reward, we found a significant main effect of the learning strategy (F(1, 81) = 29.86, p < .0001), but also an interaction effect with the number of features (F(1, 81) = 13.07, p < .01). The post-hoc analysis with Tukey HSD revealed that one-at-a-time led to a higher dot product on the two-feature task (p < .0001), but there was no significant difference on the one-feature task (where one-at-a-time led to a slightly higher dot product). We next looked at the final regret, i.e. the difference between the cost of the learned trajectory and that of the ideal trajectory. For this metric we found an interaction effect, suggesting that one-at-a-time led to lower regret for the two-feature task but not for the one-feature task. Looking separately at the feature values for table and cup, we found that one-at-a-time led to a significantly lower difference for the cup feature across the board (F(1, 81) = 11.30, p < .01, no interaction effect), but that one-at-a-time only improved the difference for the table feature on the two-feature task (p < .0001); it actually significantly increased the difference on the one-feature task (p < .001). Overall, we see that one-at-a-time learns something significantly better across the board for the two-feature task. When it comes to the one-feature task, the results are mixed: it led to a significantly better result for the cup orientation, but a significantly worse one for the table distance feature.

Figure 6: The one-at-a-time strategy shows significantly more consistent alignment between the estimated weight vector, θ_t, and the ideal weight vector, θ*, than all-at-once for the two-feature task. This indicates that when multiple aspects of the objective need changing, the one-at-a-time method enables more accurate learning. (a) In the task with only one wrong feature weight, there is no significant difference between the two methods in average dot product over time. (b) In contrast to (a), when two feature weights are wrong, the one-at-a-time strategy outperforms the all-at-once strategy in terms of a higher dot product over the duration of the trajectory.

Learning Process. For the average dot product between the estimated and true reward over time, our analysis revealed almost identical outcomes to those for the final reward (see Fig. 6). We also found that one-at-a-time resulted in significantly fewer updates in the wrong direction for the cup weight across the board (F(1, 81) = 44.91, p < .0001) and for the table weight (F(1, 81) = 22.02, p < .0001), with no interaction effect. Fig. 7 highlights these findings and their connection to the subjective metrics. Looking at the length of the path through the space of weights, we found a main effect of learning strategy (F(1, 81) = 26.82, p < .0001), but also an interaction effect (F(1, 81) = 6.55, p = .01). The post-hoc analysis with Tukey HSD revealed that for the one-feature task, one-at-a-time resulted in a significantly shorter path traversed through weight space (p < .0001). The path was shorter for the two-feature task as well, but the difference was not significant. The effect was mainly due to the one-at-a-time method resulting in a shorter path for the cup weight on the one-feature task, as revealed by the post-hoc analysis (p < .0001). Overall, we see that the quality of the learning process was significantly higher for the one-at-a-time strategy across both tasks.
Whether one aspect or all aspects of the objective were wrong, one-at-a-time led to fewer wrong weight updates and resulted in the learned reward across time being closer to the true reward.

Executed Trajectory. We found no significant main effect of the learning strategy on the regret of the executed trajectory: the two strategies lead to relatively similar actual trajectories with respect to regret. Both the regret and the feature differences from ideal for cup and table showed significant interaction effects.

Interaction Metrics. We found no significant effects on interaction time or force.

Summary of Objective Metric Analysis. Taken together, these results indicate that a one-at-a-time learning strategy leads to a better learning process across the board. On the more complex two-feature task, this strategy also leads to unquestionably better learning outcomes. For the one-feature task, learning one feature at a time enables users to better avoid wrongly perturbing the already-correct weight (on the cup feature), but is not as good as the all-at-once method at enabling users to properly correct the wrong weight (on the table feature). Thus, H1 was partially supported: although updating one feature weight at a time does not improve task performance when only one aspect of the objective is wrong, reasoning about one feature weight at a time leads to significantly better learning and task performance when all aspects of the objective are wrong.

Subjective. We ran a repeated-measures ANOVA on the results of our participant survey. After testing the reliability of our four scales, we found that the correct update and undoing scales were significantly reliable, so we grouped each into a combined score (see Cronbach's α in Table 1). We analyzed success and undoing ease separately, as they were not reliable. For the correct update scale, we found a significant effect of learning strategy (F(1, 33) = 5.09, p = 0.031), showing that participants perceived the one-at-a-time strategy as better at updating the robot's objective according to their corrections. Additionally, the undoing scale showed a significant effect of learning strategy (F(1, 33) = 10.35, p < 0.01), with the one-at-a-time strategy being less likely to learn the wrong thing and cause the participants to have to undo a correction. For ease of undoing, when analyzing Q9 and Q10 individually we found no significant effect of strategy.

Summary of Subjective Metric Analysis. The subjective data echoes some of the objective results. Participants perceived that one-at-a-time better understood their corrections and required less undoing due to unintended learning, partially supporting H2.
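Cronbach's α for each scale can be computed directly from the item responses. The short sketch below implements the standard formula, α = (k/(k-1))·(1 - Σ item variances / variance of the summed score), on a hypothetical response matrix; it is offered only as an illustration of the reliability check described above, and the synthetic responses are not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of Likert responses for one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each question
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale sum
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point responses from 12 participants to three "undoing" items (Q6-Q8).
rng = np.random.default_rng(0)
base = rng.integers(2, 7, size=(12, 1))
undoing = np.clip(base + rng.integers(-1, 2, size=(12, 3)), 1, 7)
print(round(cronbach_alpha(undoing), 2))
```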
DISCUSSION

In this paper, we compared the performance of one-at-a-time and all-at-once learning for two tasks: one that required correcting a single feature, and another that required correcting multiple features of a robot's objective. For the multiple-feature task, learning about one feature at a time was objectively superior: it led to a better final learning outcome (Fig. 5), took a shorter path to the optimum, and had fewer incorrect inferences and undoings along the way (Fig. 6). The results were not as clear for the single-feature task: the one-at-a-time method lessened unintended learning on the weights that were initially correct, but it hindered learning for the incorrect weights. However, participants subjectively preferred the one-at-a-time strategy overall: they thought it was better at learning the correct aspects of the task and required less undoing. We hypothesize that the superior objective performance of the one-at-a-time strategy in the second task is due to the increased complexity of the task. It appears that one-at-a-time learning becomes more useful as the teaching task becomes more complex and requires fixing more aspects of the robot's objective. However, for simple teaching tasks that only require one aspect of the objective to change, it is not yet clear whether one-at-a-time is a significantly better learning strategy.

Limitations and Future Work

It is both a limitation and a strength that we chose the simplest possible feature selection method for the one-at-a-time approach. On the one hand, this is an intuitive and computationally inexpensive method to examine as a first exploration into teaching robot objectives online via physical interaction. At the same time, our simple learning strategy was not consistently superior in the simple task. This opens the door for analyzing more sophisticated methods that perform Bayesian inference on the intended feature, or low-pass filtering to prevent high-frequency changes in which feature gets updated, to improve overall learning and usability. Additionally, while our method worked well with intuitive features like "distance to table", additional work is needed to investigate how well each method works when the features are non-intuitive to the human. Perhaps our largest limitation in this work is our demographics: our study participants were primarily individuals with a technical background (with one exception). Future work must consider a more diverse user population to ensure external validity. Not only do we need algorithms that can learn from humans, but these methods must also reason about the difficulties humans experience when trying to kinesthetically teach a complex robotic system. To simplify the teaching process, we propose that robots should learn one aspect of the objective at a time from physical corrections. While our user studies indicate the benefits of this method, it is only a first step towards seamless human-robot interaction.

Figure 2: Simulation with the optimal human. (a) Optimal human: teaching the robot. The human corrects the robot during the first few time steps, and the robot follows the human's desired trajectory afterwards.
(b) Optimal human: learning weights. The robot's estimated feature weights converge to the human's true feature weights.

Figure 3: Simulation with the noisy human. (a) The human noisily corrects the robot's trajectory; the ellipses show the robot's states with 95% confidence over 100 simulations. (b) With all-at-once, the robot initially learns that the human feature is important, and the person must undo that unintended learning. One-at-a-time learning reduces the unintended effects of the human's noisy corrections; this causes the robot to converge towards the human's desired trajectory more rapidly.

Figure 4: Depictions of the robot trajectories for each of the two experimental tasks. (a) Task 1: correct one feature, the distance to the table. (b) Task 2: correct two features, the cup orientation and the distance to the table. The black path represents the original trajectory and the blue path represents the human's desired trajectory.

Figure 5: The final learned weight vector with one-at-a-time is closer to the ideal weight vector for the task where two feature weights are incorrect (left). Looking at the individual feature differences from ideal: while the final cup weight is closer to ideal for one-at-a-time for both tasks (center), the final table weight is actually significantly further away from the ideal for the one-at-a-time strategy during the one-feature task (right). However, for the two-feature task, the one-at-a-time method outperforms all-at-once for the final learned cup and table weights.

Figure 7: The one-at-a-time strategy results in significantly fewer weight updates that are away from the optimum weight across all tasks (left top, left bottom). These findings are consistent with the subjective Likert data from the undoing scale, where participants perceived the one-at-a-time method as less likely to learn the wrong thing and need an additional undoing action.

Table 1: Likert scale questions were grouped into four categories: success in accomplishing the task, correctness of update (reliable), needing to undo corrections because of unintended learning (reliable), and ease of undoing.

success:
  Q1: I successfully taught the robot how to do the task.
correct update (Cronbach's α = .84):
  Q2: The robot correctly updated its understanding about aspects of the task that I did want to change.
  Q3: The robot wrongly updated its understanding about aspects of the task I did NOT want to change.
  Q4: The robot understood which aspects of the task I wanted to change, and how to change them.
  Q5: The robot misinterpreted my corrections.
undoing (Cronbach's α = .93):
  Q6: I had to try to undo corrections that I gave to the robot, because it learned the wrong thing.
  Q7: Sometimes my corrections were just meant to fix the effect of previous corrections I gave.
  Q8: I had to re-teach the robot about an aspect of the task that it started off knowing well.
undo ease (Cronbach's α = .66):
  Q9: When the robot learned something wrong, it was difficult for me to undo that.
  Q10: It was easy to re-correct the robot whenever it misunderstood a previous correction of mine.

Footnote 1: Similar to prior IRL work, we will assume that the correct features for the task have been identified a priori, and are known to both the human and the robot.
Footnote 2: To ensure that all features are equally sensitive, we normalized each feature by the maximal attainable feature difference, computing optimal trajectories offline with a range of θ values.

Abstract. We focus on learning robot objective functions from human guidance: specifically, from physical corrections provided by the person while the robot is acting. Objective functions are typically parametrized in terms of features, which capture aspects of the task that might be important. When the person intervenes to correct the robot's behavior, the robot should update its understanding of which features matter, how much, and in what way.
Unfortunately, real users do not provide optimal corrections that isolate exactly what the robot was doing wrong. Thus, when receiving a correction, it is difficult for the robot to determine which features the person meant to correct, and which features were changed unintentionally. In this paper, we propose to improve the efficiency of robot learning during physical interactions by reducing unintended learning. Our approach allows the human-robot team to focus on learning one feature at a time, unlike state-of-the-art techniques that update all features at once. We derive an online method for identifying the single feature which the human is trying to change during physical interaction, and experimentally compare this one-at-a-time approach to the all-at-once baseline in a user study. Our results suggest that users teaching one-at-a-time perform better, especially in tasks that require changing multiple features.

Why Does Deep and Cheap Learning Work So Well?
Henry W. Lin, Max Tegmark, David Rolnick

Introduction

Deep learning works remarkably well, and has helped dramatically improve the state-of-the-art in areas ranging from speech recognition, translation and visual object recognition to drug discovery, genomics and automatic game playing [1, 2]. However, it is still not fully understood why deep learning works so well. In contrast to GOFAI ("good old-fashioned AI") algorithms that are hand-crafted and fully understood analytically, many algorithms using artificial neural networks are understood only at a heuristic level, where we empirically know that certain training protocols employing large data sets will result in excellent performance. This is reminiscent of the situation with human brains: we know that if we train a child according to a certain curriculum, she will learn certain skills, but we lack a deep understanding of how her brain accomplishes this. This makes it timely and interesting to develop new analytic insights on deep learning and its successes, which is the goal of the present paper. Such improved understanding is not only interesting in its own right, and for potentially providing new clues about how brains work, but it may also have practical applications. Better understanding the shortcomings of deep learning may suggest ways of improving it, both to make it more capable and to make it more robust [3].

The Swindle: Why Does "Cheap Learning" Work?

Throughout this paper, we will adopt a physics perspective on the problem, to prevent application-specific details from obscuring simple general results related to dynamics, symmetries, renormalization, etc., and to exploit useful similarities between deep learning and statistical mechanics. The task of approximating functions of many variables is central to most applications of machine learning, including unsupervised learning, classification and prediction, as illustrated in Fig. 1. For example, if we are interested in classifying faces, then we may want our neural network to implement a function where we feed in an image represented by a million greyscale pixels and get as output the probability distribution over a set of people that the image might represent. When investigating the quality of a neural net, there are several important factors to consider:
• Expressibility: What class of functions can the neural network express?
• Efficiency: How many resources (neurons, parameters, etc.) does the neural network require to approximate a given function?
• Learnability: How rapidly can the neural network learn good parameters for approximating a function?

Fig. 1 (panels: Unsupervised learning, Prediction, Classification; distributions p(x, y), p(y|x), p(x|y)): In this paper, we follow the machine learning convention that x refers to data (e.g., an image) and y refers to underlying information about that data (such as a label for the image). Neural networks can be used to estimate (or sample from) probability distributions with respect to x and y, given many samples. Classification involves approximating the probability distribution of y given x, in the case that y is discrete-valued. This problem may also be called prediction, e.g. when x is earlier data in a time series. Generation involves approximating the probability distribution for x given y, or drawing samples from this distribution. Unsupervised learning attempts to approximate or model the probability distribution of x, without any knowledge of y.

This paper is focused on expressibility and efficiency, and more specifically on the following well-known [4-6] problem: how can neural networks approximate functions well in practice, when the set of possible functions is exponentially larger than the set of practically possible networks? For example, suppose that we wish to classify megapixel greyscale images into two categories, e.g., cats or dogs. If each pixel can take one of 256 values, then there are 256^1,000,000 possible images, and for each one we wish to compute the probability that it depicts a cat. This means that an arbitrary function is defined by a list of 256^1,000,000 probabilities, i.e., far more numbers than there are atoms in our universe (about 10^78). Yet neural networks with merely thousands or millions of parameters somehow manage to perform such classification tasks quite well. How can deep learning be so "cheap", in the sense of requiring so few parameters? We will see below that neural networks perform a combinatorial swindle, replacing exponentiation by multiplication: if there are, say, n = 10^6 inputs taking v = 256 values each, this swindle cuts the number of parameters from v^n to v × n times some constant factor. We will show that the success of this swindle depends fundamentally on physics: although neural networks only work well for an exponentially tiny fraction of all possible inputs, the laws of physics are such that the data sets we care about for machine learning (natural images, sounds, drawings, text, etc.) are also drawn from an exponentially tiny fraction of all imaginable data sets. Moreover, we will see that these two tiny subsets are remarkably similar, enabling deep learning to work well in practice.

The rest of this paper is organized as follows. In Sect. 2, we present results for shallow neural networks with merely a handful of layers, focusing on simplifications due to locality, symmetry and polynomials. In Sect. 3, we study how increasing the depth of a neural network can provide polynomial or exponential efficiency gains even though it adds nothing in terms of expressivity, and we discuss the connections to renormalization, compositionality and complexity. We summarize our conclusions in Sect. 4.

Expressibility and Efficiency of Shallow Neural Networks

Let us now explore what classes of probability distributions p are the focus of physics and machine learning, and how accurately and efficiently neural networks can approximate them.
We will be interested in probability distributions p(x|y), where x ranges over some sample space and y will be interpreted either as another variable being conditioned on or as a model parameter. For a machine learning example, we might interpret y as an element of some set of animals {cat, dog, rabbit, ...} and x as the vector of pixels in an image depicting such an animal, so that p(x|y) for y = cat gives the probability distribution of images of cats with different coloring, size, posture, viewing angle, lighting condition, electronic camera noise, etc. For a physics example, we might interpret y as an element of some set of metals {iron, aluminum, copper, ...} and x as the vector of magnetization values for different parts of a metal bar. The prediction problem is then to evaluate p(x|y), whereas the classification problem is to evaluate p(y|x). Because of the above-mentioned "swindle", accurate approximations are only possible for a tiny subclass of all probability distributions. Fortunately, as we will explore below, the function p(x|y) often has many simplifying features enabling accurate approximation, because it follows from some simple physical law or some generative model with relatively few free parameters: for example, its dependence on x may exhibit symmetry, locality and/or be of a simple form such as the exponential of a low-order polynomial. In contrast, the dependence of p(y|x) on y tends to be more complicated; it makes no sense to speak of symmetries or polynomials involving a variable y = cat.

Let us therefore start by tackling the more complicated case of modeling p(y|x). This probability distribution p(y|x) is determined by the hopefully simpler function p(x|y) via Bayes' theorem:

p(y|x) = p(x|y) p(y) / Σ_{y'} p(x|y') p(y'),    (1)

where p(y) is the probability distribution over y (animals or metals, say) a priori, before examining the data vector x.

Probabilities and Hamiltonians

It is useful to introduce the negative logarithms of two of these probabilities:

H_y(x) ≡ - ln p(x|y),    μ_y ≡ - ln p(y).    (2)

Table 1 is a brief dictionary translating between physics and machine-learning terminology. Statisticians refer to - ln p as "self-information" or "surprisal", and statistical physicists refer to H_y(x) as the Hamiltonian, quantifying the energy of x (up to an arbitrary and irrelevant additive constant) given the parameter y. These definitions transform Eq. (1) into the Boltzmann form

p(y|x) = (1/N(x)) e^{-[H_y(x) + μ_y]},    (3)

where

N(x) ≡ Σ_y e^{-[H_y(x) + μ_y]}.    (4)

This recasting of Eq. (1) is useful because the Hamiltonian tends to have properties making it simple to evaluate. We will see in Sect. 3 that it also helps understand the relation between deep learning and renormalization [7].

Bayes' Theorem as a Softmax

Since the variable y takes one of a discrete set of values, we will often write it as an index instead of as an argument, as p_y(x) ≡ p(y|x). Moreover, we will often find it convenient to view all values indexed by y as elements of a vector, written in boldface, thus viewing p_y, H_y and μ_y as elements of the vectors p, H and μ, respectively. Equation (3) thus simplifies to

p(x) = (1/N(x)) e^{-[H(x) + μ]},    (5)

using the standard convention that a function (in this case exp) applied to a vector acts on its elements. We wish to investigate how well this vector-valued function p(x) can be approximated by a neural net.
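The identity in Eqs. (3)-(5) is easy to check numerically. The following small sketch is our own illustration: it builds an arbitrary discrete model p(x|y) and prior p(y), forms the Hamiltonians H_y(x) = -ln p(x|y) and μ_y = -ln p(y), and confirms that normalizing e^{-(H+μ)} (the softmax form used below in Eq. (8)) reproduces the Bayes posterior p(y|x).

```python
import numpy as np

rng = np.random.default_rng(1)
n_y, n_x = 3, 5                      # 3 classes y, 5 possible data values x

p_x_given_y = rng.random((n_y, n_x))
p_x_given_y /= p_x_given_y.sum(axis=1, keepdims=True)   # rows are p(x|y)
p_y = np.array([0.5, 0.3, 0.2])                          # prior p(y)

x = 2                                                    # an observed data value

# Direct Bayes' theorem, Eq. (1).
posterior_bayes = p_x_given_y[:, x] * p_y
posterior_bayes /= posterior_bayes.sum()

# Hamiltonian / softmax route, Eqs. (2)-(5).
H = -np.log(p_x_given_y[:, x])       # H_y(x)
mu = -np.log(p_y)                    # mu_y
logits = -(H + mu)
posterior_softmax = np.exp(logits) / np.exp(logits).sum()

print(np.allclose(posterior_bayes, posterior_softmax))   # True
```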
A standard n-layer feedforward neural network maps vectors to vectors by applying a series of linear and nonlinear transformations in succession. Specifically, it implements vector-valued functions of the form [1]

f(x) = σ_n A_n ⋯ σ_2 A_2 σ_1 A_1 x,    (6)

where the σ_i are relatively simple nonlinear operators on vectors and the A_i are affine transformations of the form A_i x = W_i x + b_i for matrices W_i and so-called bias vectors b_i. Popular choices for these nonlinear operators σ_i include:

• Local function: apply some nonlinear function σ to each vector element.
• Max-pooling: compute the maximum of all vector elements.
• Softmax: exponentiate all vector elements and normalize them so as to sum to unity,

σ(x)_i ≡ e^{x_i} / Σ_j e^{x_j}.    (7)

(We use σ to indicate the softmax function and σ to indicate an arbitrary non-linearity, optionally with certain regularity requirements.) This allows us to rewrite Eq. (5) as

p(x) = σ[-H(x) - μ].    (8)

This means that if we can compute the Hamiltonian vector H(x) with some n-layer neural net, we can evaluate the desired classification probability vector p(x) by simply adding a softmax layer. The μ-vector simply becomes the bias term in this final layer.

What Hamiltonians can be Approximated by Feasible Neural Networks?

It has long been known that neural networks are universal approximators [8, 9], in the sense that networks with virtually all popular nonlinear activation functions σ(x) can approximate any smooth function to any desired accuracy, even using merely a single hidden layer. However, these theorems do not guarantee that this can be accomplished with a network of feasible size, and the following simple example explains why they cannot: there are 2^(2^n) different Boolean functions of n variables, so a network implementing a generic function in this class requires at least 2^n bits to describe, i.e., more bits than there are atoms in our universe if n > 260. The fact that neural networks of feasible size are nonetheless so useful therefore implies that the class of functions we care about approximating is dramatically smaller. We will see below in Sect. 2.4 that both physics and machine learning tend to favor Hamiltonians that are polynomials; indeed, often ones that are sparse, symmetric and low-order. Let us therefore focus our initial investigation on Hamiltonians that can be expanded as a power series:

H_y(x) = h + Σ_i h_i x_i + Σ_{i≤j} h_{ij} x_i x_j + Σ_{i≤j≤k} h_{ijk} x_i x_j x_k + ⋯.    (9)

If the vector x has n components (i = 1, ..., n), then there are (n + d)!/(n! d!) terms of degree up to d.

Continuous Input Variables

If we can accurately approximate multiplication using a small number of neurons, then we can construct a network efficiently approximating any polynomial H_y(x) by repeated multiplication and addition. We will now see that we can, using any smooth but otherwise arbitrary non-linearity σ that is applied element-wise. The popular logistic sigmoid activation function σ(x) = 1/(1 + e^{-x}) will do the trick.

Theorem 1. Let f be a neural network of the form f = A_2 σ A_1, where σ acts element-wise by applying some smooth non-linear function σ to each element. Let the input layer, hidden layer and output layer have sizes 2, 4 and 1, respectively. Then f can approximate a multiplication gate arbitrarily well.

To see this, let us first Taylor-expand the function σ around the origin:

σ(u) = σ_0 + σ_1 u + σ_2 u²/2 + O(u³).    (10)
Without loss of generality, we can assume that σ_2 ≠ 0: since σ is non-linear, it must have a non-zero second derivative at some point, so we can use the biases in A_1 to shift the origin to this point to ensure σ_2 ≠ 0. Equation (10) now implies that

m(u, v) ≡ [σ(u + v) + σ(-u - v) - σ(u - v) - σ(-u + v)] / (4σ_2) = uv [1 + O(u² + v²)],    (11)

where we will term m(u, v) the multiplication approximator. The error term can be made arbitrarily small by scaling down the inputs, replacing A_1 → λ A_1 and then compensating by scaling A_2 → λ^{-2} A_2. In the limit λ → 0, this approximation becomes exact. In other words, arbitrarily accurate multiplication can always be achieved using merely 4 neurons. Figure 2 illustrates such a multiplication approximator. (Of course, a practical algorithm like stochastic gradient descent cannot achieve arbitrarily large weights, though a reasonably good approximation can be achieved already for λ^{-1} ∼ 10.)

Fig. 2: Multiplication can be efficiently implemented by simple neural nets, becoming arbitrarily accurate as λ → 0 (left) and β → ∞ (right). Squares apply the function σ, circles perform summation, and lines multiply by the constants labeling them. The "1" input implements the bias term. The left gate requires σ''(0) ≠ 0, which can always be arranged by biasing the input to σ. The right gate requires the sigmoidal behavior σ(x) → 0 and σ(x) → 1 as x → -∞ and x → ∞, respectively.

Corollary 1. For any given multivariate polynomial and any tolerance ε > 0, there exists a neural network of fixed finite size N (independent of ε) that approximates the polynomial to accuracy better than ε. Furthermore, N is bounded by the complexity of the polynomial, scaling as the number of multiplications required times a factor that is typically slightly larger than 4.

This is a stronger statement than the classic universal approximation theorems for neural networks [8, 9], which guarantee that for every ε there exists some N(ε), but allow for the possibility that N(ε) → ∞ as ε → 0. An approximation theorem in [10] provides an ε-independent bound on the size of the neural network, but at the price of choosing a pathological function σ.

Discrete Input Variables

For the simple but important case where x is a vector of bits, so that x_i = 0 or x_i = 1, the fact that x_i² = x_i makes things even simpler. This means that only terms where all variables are different need be included, which simplifies Eq. (9) to

H_y(x) = h + Σ_i h_i x_i + Σ_{i<j} h_{ij} x_i x_j + Σ_{i<j<k} h_{ijk} x_i x_j x_k + ⋯.    (12)

Such products of bits can be computed with the saturating sigmoid gate of Fig. 2 (right), which becomes accurate once its sharpness satisfies β ≳ D ln 10 ≈ 23 for D ≈ 10 digits of precision. In summary, when x is a bit string, an arbitrary function p_y(x) can be evaluated by a simple 3-layer neural network: the middle layer uses sigmoid functions to compute the products from Eq. (12), and the top layer performs the sums from Eq. (12) and the softmax from Eq. (8).
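The construction behind Theorem 1 and Eq. (11) is easy to verify numerically. The sketch below is our own illustration using the logistic sigmoid: it evaluates the four-neuron combination m(u, v) at inputs scaled down by λ, rescales the output by λ^{-2}, and shows the error of the product shrinking as λ decreases.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Second Taylor coefficient sigma_2 = sigma''(z0) for the logistic sigmoid.
# sigma'(z) = s(1-s) and sigma''(z) = s(1-s)(1-2s); at z = 0 this vanishes,
# so we bias the input to a point z0 where the second derivative is non-zero.
z0 = 1.0
s = sigmoid(z0)
sigma_2 = s * (1 - s) * (1 - 2 * s)

def multiply_gate(u, v, lam):
    """Four-neuron multiplication approximator m(u, v) of Eq. (11), with input
    scaling A1 -> lam*A1 (plus the bias z0) and output scaling A2 -> lam**-2 * A2."""
    a, b = lam * u, lam * v
    m = (sigmoid(z0 + a + b) + sigmoid(z0 - a - b)
         - sigmoid(z0 + a - b) - sigmoid(z0 - a + b)) / (4 * sigma_2)
    return m / lam**2

u, v = 0.7, -1.3
for lam in [0.3, 0.1, 0.03, 0.01]:
    approx = multiply_gate(u, v, lam)
    print(lam, approx, abs(approx - u * v))   # error shrinks roughly like lam**2
```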
What Hamiltonians Do We Want to Approximate?

We have seen that polynomials can be accurately approximated by neural networks using a number of neurons scaling either as the number of multiplications required (for the continuous case) or as the number of terms (for the binary case). But polynomials per se are no panacea: with binary input, all functions are polynomials, and with continuous input, there are (n + d)!/(n! d!) coefficients in a generic polynomial of degree d in n variables, which easily becomes unmanageably large. We will now discuss situations in which exceptionally simple polynomials that are sparse, symmetric and/or low-order play a special role in physics and machine learning.

Low Polynomial Order

The Hamiltonians that show up in physics are not random functions, but tend to be polynomials of very low order, typically of degree ranging from 2 to 4. The simplest example is of course the harmonic oscillator, which is described by a Hamiltonian that is quadratic in both position and momentum. There are many reasons why low-order polynomials show up in physics. Two of the most important are the following: sometimes a phenomenon can be studied perturbatively, in which case Taylor's theorem suggests that we can get away with a low-order polynomial approximation; a second reason is renormalization, whereby higher-order terms in the Hamiltonian of a statistical field theory tend to be negligible if we only observe macroscopic variables. At a fundamental level, the Hamiltonian of the standard model of particle physics has d = 4. There are many approximations of this quartic Hamiltonian that are accurate in specific regimes, for example the Maxwell equations governing electromagnetism, the Navier-Stokes equations governing fluid dynamics, the Alfvén equations governing magnetohydrodynamics and various Ising models governing magnetization; all of these approximations have Hamiltonians that are polynomials in the field variables, of degree d ranging from 2 to 4. This means that the number of polynomial coefficients in many examples is not infinite as in Eq. (9) or exponential in n as in Eq. (12), but merely of order O(n^4).

There are additional reasons why we might expect low-order polynomials. Thanks to the Central Limit Theorem [11], many probability distributions in machine learning and statistics can be accurately approximated by multivariate Gaussians, i.e., of the form

p(x) = e^{h + Σ_i h_i x_i - Σ_{ij} h_{ij} x_i x_j},    (14)

which means that the Hamiltonian H = - ln p is a quadratic polynomial. More generally, the maximum-entropy probability distribution subject to constraints on some of the lowest moments, say expectation values of the form x_1^{α_1} x_2^{α_2} ⋯ x_n^{α_n} for some integers α_i ≥ 0, would lead to a Hamiltonian of degree no greater than d ≡ Σ_i α_i [12]. Image classification tasks often exploit invariance under translation, rotation, and various nonlinear deformations of the image plane that move pixels to new locations. All such spatial transformations are linear functions (d = 1 polynomials) of the pixel vector x. Functions implementing convolutions and Fourier transforms are also d = 1 polynomials.

Of course, such arguments do not imply that we should expect to see low-order polynomials in every application. If we consider some data set generated by a very simple Hamiltonian (say the Ising Hamiltonian), but then discard some of the random variables, the resulting marginalized distribution can become quite complicated and of high order. Similarly, if we do not observe the random variables directly, but observe some generic functions of the random variables, the result will generally be a mess. These arguments do suggest, however, that the probability of encountering a Hamiltonian described by a low-order polynomial in some application might be significantly higher than what one might expect from a naive prior. For example, a uniform prior on the space of all polynomials of degree N would suggest that a randomly chosen polynomial would almost always have degree N, but this might be a bad prior for real-world applications.
We should also note that even if a Hamiltonian is described exactly by a low-order polynomial, we would not expect the corresponding neural network to reproduce a low-order polynomial Hamiltonian exactly in any practical scenario, for a host of possible reasons including limited data, the requirement of infinite weights for infinite accuracy, and the failure of practical algorithms such as stochastic gradient descent to find the global minimum of a cost function in many scenarios. So looking at the weights of a neural network trained on actual data may not be a good indicator of whether or not the underlying Hamiltonian is a polynomial of low degree.

Locality

One of the deepest principles of physics is locality: things directly affect only what is in their immediate vicinity. When physical systems are simulated on a computer by discretizing space onto a rectangular lattice, locality manifests itself by allowing only nearest-neighbor interactions. In other words, almost all coefficients in Eq. (9) are forced to vanish, and the total number of non-zero coefficients grows only linearly with n. For the binary case of Eq. (9), which applies to magnetizations (spins) that can take one of two values, locality also limits the degree d to be no greater than the number of neighbors that a given spin is coupled to (since all variables in a polynomial term must be different). Again, the applicability of these considerations to particular machine learning applications must be determined on a case-by-case basis. Certainly, an arbitrary transformation of a collection of local random variables will result in a non-local collection. (This might ruin locality in certain ensembles of images, for example.) But there are certainly cases in physics where locality is still approximately preserved; for example, in the simple block-spin renormalization group, spins are grouped into blocks, which are then treated as random variables. To a high degree of accuracy, these blocks are only coupled to their nearest neighbors. Such locality is famously exploited by both biological and artificial visual systems, whose first neuronal layer performs merely fairly local operations.

Symmetry

Whenever the Hamiltonian obeys some symmetry (is invariant under some transformation), the number of independent parameters required to describe it is further reduced. For instance, many probability distributions in both physics and machine learning are invariant under translation and rotation. As an example, consider a vector x of air pressures x_i measured by a microphone at times i = 1, ..., n. Assuming that the Hamiltonian describing it has d = 2 reduces the number of parameters N from ∞ to (n + 1)(n + 2)/2. Further assuming locality (nearest-neighbor couplings only) reduces this to N = 2n, after which requiring translational symmetry reduces the parameter count to N = 3. Taken together, the constraints of locality, symmetry and polynomial order reduce the number of continuous parameters in the Hamiltonian of the standard model of physics to merely 32 [13]. Naturally, this does not mean that modeling a real physical system requires merely 32 parameters: the objects involved must be modeled too; there as well, however, symmetry allows us to abstract away from the information contained in individual particles to a summary of the components of the system.

Symmetry can reduce not merely the parameter count, but also the computational complexity. For example, if a linear vector-valued function f(x) mapping a set of n variables onto itself happens to satisfy translational symmetry, then it is a convolution (implementable by a convolutional neural net, a "convnet"), which means that it can be computed with n log₂ n rather than n² multiplications using the Fast Fourier Transform.
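As an illustration of this point (our own sketch, not part of the original text), the snippet below builds a random translation-invariant (circulant) linear map, applies it directly as an n × n matrix, and reproduces the same result with an FFT-based circular convolution, which costs O(n log n) instead of O(n²).

```python
import numpy as np

n = 8
rng = np.random.default_rng(2)
kernel = rng.standard_normal(n)          # one row defines the whole circulant map
x = rng.standard_normal(n)

# Dense n x n circulant matrix: row i is the kernel cyclically shifted by i.
C = np.array([np.roll(kernel, i) for i in range(n)])
y_dense = C @ x                           # O(n^2) multiplications

# Same translation-invariant map via the FFT: elementwise product in frequency space.
y_fft = np.real(np.fft.ifft(np.fft.fft(C[:, 0]) * np.fft.fft(x)))

print(np.allclose(y_dense, y_fft))        # True
```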
Why Deep?

Above we investigated how probability distributions from physics and computer science applications lent themselves to "cheap learning", being accurately and efficiently approximated by neural networks with merely a handful of layers. Let us now turn to the separate question of depth, i.e., the success of deep learning: what properties of real-world probability distributions cause efficiency to further improve when networks are made deeper? This question has been extensively studied from a mathematical point of view [14-16], but mathematics alone cannot fully answer it, because part of the answer involves physics. We will argue that the answer involves the hierarchical/compositional structure of generative processes together with the inability to efficiently "flatten" neural networks reflecting this structure.

Hierarchical Processes

One of the most striking features of the physical world is its hierarchical structure. Spatially, it is an object hierarchy: elementary particles form atoms, which in turn form molecules, cells, organisms, planets, solar systems, galaxies, etc. Causally, complex structures are frequently created through a distinct sequence of simpler steps. Figure 3 gives two examples of such causal hierarchies generating data vectors y_0 → y_1 → ... → y_n that are relevant to physics and image classification, respectively. Both examples involve a Markov chain where the probability distribution p(y_i) at the i-th level of the hierarchy is determined from its causal predecessor alone:

p_i = M_i p_{i-1},    (15)

where the probability vector p_i specifies the probability distribution of p(y_i) according to (p_i)_y ≡ p(y_i), and the Markov matrix M_i specifies the transition probabilities between two neighboring levels, p(y_i|y_{i-1}). Iterating Eq. (15) gives

p_n = M_n M_{n-1} ⋯ M_1 p_0,    (16)

so we can write the combined effect of the entire generative process as a matrix product.

In our physics example (Fig. 3, left), a set of cosmological parameters y_0 (the density of dark matter, etc.) determines the power spectrum y_1 of density fluctuations in our universe, which in turn determines the pattern of cosmic microwave background radiation y_2 reaching us from our early universe, which gets combined with foreground radio noise from our Galaxy to produce the frequency-dependent sky maps (y_3) that are recorded by a satellite-based telescope that measures linear combinations of different sky signals and adds electronic receiver noise. For the recent example of the Planck satellite [17], these data sets y_1, y_2, ... contained about 10^1, 10^4, 10^8, 10^9 and 10^12 numbers, respectively. More generally, if a given data set is generated by a (classical) statistical physics process, it must be described by an equation in the form of Eq. (16), since dynamics in classical physics is fundamentally Markovian: classical equations of motion are always first-order differential equations in the Hamiltonian formalism.
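Equations (15)-(16) are just repeated matrix-vector products, as the short sketch below illustrates on a made-up three-level hierarchy (the matrices are random and purely illustrative): composing the Markov matrices maps the top-level distribution p_0 to the observed-level distribution p_n.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_markov_matrix(n_to, n_from):
    """Columns sum to one, so M @ p maps a distribution over one level
    to a distribution over the next level (Eq. 15)."""
    M = rng.random((n_to, n_from))
    return M / M.sum(axis=0, keepdims=True)

# A toy hierarchy y0 -> y1 -> y2 with 2, 4 and 8 states per level.
M1 = random_markov_matrix(4, 2)
M2 = random_markov_matrix(8, 4)

p0 = np.array([0.7, 0.3])            # distribution over the top-level cause y0
p2 = M2 @ (M1 @ p0)                  # Eq. (16): p_n = M_n ... M_1 p_0
print(p2, p2.sum())                  # a valid distribution over the bottom level
```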
This Markovian description technically covers essentially all data of interest in the machine learning community, although the fundamental Markovian nature of the generative process of the data may be an inefficient description. Our toy image classification example (Fig. 3, right) is deliberately contrived and oversimplified for pedagogy: y_0 is a single bit signifying "cat or dog", which determines a set of parameters determining the animal's coloration, body shape, posture, etc. using appropriate probability distributions, which determine a 2D image via ray-tracing, which is scaled and translated by random amounts before a randomly generated background is added. In both examples, the goal is to reverse this generative hierarchy to learn about the input y ≡ y_0 from the output y_n ≡ x, specifically to provide the best possible estimate of the probability distribution p(y|x) = p(y_0|y_n), i.e., to determine the probability distribution for the cosmological parameters and to determine the probability that the image is a cat, respectively.

[Figure 3: the generative hierarchy y = y_0 → y_1 → y_2 → y_3 → y_4 via the Markov steps M_1, ..., M_4, together with the distilled estimates ŷ_3 = T_3(x), ŷ_2 = T_2(x), ŷ_1 = T_1(x), ŷ_0 = T_0(x) flowing back up.]

Resolving the Swindle

This decomposition of the generative process into a hierarchy of simpler steps helps resolve the "swindle" paradox from the introduction: although the number of parameters required to describe an arbitrary function of the input data y is beyond astronomical, the generative process can be specified by a more modest number of parameters, because each of its steps can. Whereas specifying an arbitrary probability distribution over multi-megapixel images x requires far more bits than there are atoms in our universe, the information specifying how to compute the probability distribution p(x|y) for a microwave background map fits into a handful of published journal articles or software packages [18-24]. For a megapixel image of a galaxy, its entire probability distribution is defined by the standard model of particle physics with its 32 parameters [13], which together specify the process transforming primordial hydrogen gas into galaxies. The same parameter-counting argument can also be applied to all artificial images of interest to machine learning: for example, giving the simple low-information-content instruction "draw a cute kitten" to a random sample of artists will produce a wide variety of images y with a complicated probability distribution over colors, postures, etc., as each artist makes random choices at a series of steps. Even the pre-stored information about cat probabilities in these artists' brains is modest in size.

Note that a random resulting image typically contains much more information than the generative process creating it; for example, the simple instruction "generate a random string of 10^9 bits" contains far fewer than 10^9 bits. Not only are the typical steps in the generative hierarchy specified by a non-astronomical number of parameters, but, as discussed in Sect. 2.4, it is plausible that neural networks can implement each of the steps efficiently. A deep neural network stacking these simpler networks on top of one another would then implement the entire generative process efficiently. In summary, the data sets and functions we care about form a minuscule minority, and it is plausible that they can also be efficiently implemented by neural networks reflecting their generative process. So what is the remainder? Which are the data sets and functions that we do not care about?
Almost all images are indistinguishable from random noise, and almost all data sets and functions are indistinguishable from completely random ones. This follows from Borel's theorem on normal numbers [26], which states that almost all real numbers have a string of decimals that would pass any randomness test, i.e., are indistinguishable from random noise. Simple parameter counting shows that deep learning (and our human brains, for that matter) would fail to implement almost all such functions, and training would fail to find any useful patterns. To thwart pattern-finding efforts, cryptography therefore aims to produce random-looking patterns. Although we might expect the Hamiltonians describing human-generated data sets such as drawings, text and music to be more complex than those describing simple physical systems, we should nonetheless expect them to resemble the natural data sets that inspired their creation much more than they resemble random functions.

Sufficient Statistics and Hierarchies

The goal of deep learning classifiers is to reverse the hierarchical generative process as well as possible, to make inferences about the input y from the output x. Let us now treat this hierarchical problem more rigorously using information theory. Given P(y|x), a sufficient statistic T(x) is defined by the equation P(y|x) = P(y|T(x)), and has played an important role in statistics for almost a century [27]. All the information about y contained in x is contained in the sufficient statistic. A minimal sufficient statistic [27] is some sufficient statistic T* which is a sufficient statistic for all other sufficient statistics. This means that if T(x) is sufficient, then there exists some function f such that T*(x) = f(T(x)). As illustrated in Fig. 3, T* can be thought of as an information distiller, optimally compressing the data so as to retain all information relevant to determining y and discarding all irrelevant information. The sufficient statistic formalism enables us to state some simple but important results that apply to any hierarchical generative process cast in the Markov chain form of Eq. (16).

Theorem 2. Given a Markov chain described by our notation above, let T_i be a minimal sufficient statistic of P(y_i|y_n). Then there exist functions f_i such that T_i = f_i ∘ T_{i+1}.

More casually speaking, the generative hierarchy of Fig. 3 can be optimally reversed one step at a time: there are functions f_i that optimally undo each of the steps, distilling out all information about the level above that was not destroyed by the Markov process. Here is the proof. Note that for any k ≥ 1, the "backwards" Markov property P(y_i|y_{i+1}, y_{i+k}) = P(y_i|y_{i+1}) follows from the Markov property via Bayes' theorem:

P(y_i|y_{i+k}, y_{i+1}) = P(y_{i+k}|y_i, y_{i+1}) P(y_i|y_{i+1}) / P(y_{i+k}|y_{i+1}) = P(y_{i+k}|y_{i+1}) P(y_i|y_{i+1}) / P(y_{i+k}|y_{i+1}) = P(y_i|y_{i+1}).    (17)

Using this fact, we see that

P(y_i|y_n) = Σ_{y_{i+1}} P(y_i|y_{i+1}, y_n) P(y_{i+1}|y_n) = Σ_{y_{i+1}} P(y_i|y_{i+1}) P(y_{i+1}|T_{i+1}(y_n)).    (18)

Since the above equation depends on y_n only through T_{i+1}(y_n), this means that T_{i+1} is a sufficient statistic for P(y_i|y_n). But since T_i is the minimal sufficient statistic, there exists a function f_i such that T_i = f_i ∘ T_{i+1}.

Corollary 2. With the same assumptions and notation as Theorem 2, define the function f_0(T_0) = P(y_0|T_0) and f_n = T_{n-1}. Then

P(y_0|y_n) = (f_0 ∘ f_1 ∘ ⋯ ∘ f_n)(y_n).    (19)

The proof is easy.
By induction,

T_0 = f_1 ∘ f_2 ∘ ⋯ ∘ T_{n-1},    (20)

which implies the corollary. Roughly speaking, Corollary 2 states that the structure of the inference problem reflects the structure of the generative process. In this case, we see that the neural network trying to approximate P(y|x) must approximate a compositional function. We will argue below in Sect. 3.6 that in many cases, this can only be accomplished efficiently if the neural network has n hidden layers. In neuroscience parlance, the functions f_i compress the data into forms with ever more invariance [28], containing features invariant under irrelevant transformations (for example background substitution, scaling and translation). Let us denote the distilled vectors ŷ_i ≡ f_i(ŷ_{i+1}), where ŷ_n ≡ y_n. As summarized by Fig. 3, as information flows down the hierarchy y = y_0 → y_1 → ... → y_n = x, some of it is destroyed by random processes. However, no further information is lost as information flows optimally back up the hierarchy as ŷ_n → ŷ_{n-1} → ⋯ → ŷ_0.

Approximate Information Distillation

Although minimal sufficient statistics are often difficult to calculate in practice, it is frequently possible to come up with statistics which are nearly sufficient in a certain sense which we now explain. An equivalent characterization of a sufficient statistic is provided by information theory [29, 30]. The data processing inequality [30] states that for any function f and any random variables x, y,

I(x, y) ≥ I(x, f(y)),    (21)

where I is the mutual information:

I(x, y) = Σ_{x,y} p(x, y) log [ p(x, y) / (p(x) p(y)) ].    (22)

A sufficient statistic T(x) is a function f(x) for which "≥" gets replaced by "=" in Eq. (21), i.e., a function retaining all the information about y. Even information distillation functions f that are not strictly sufficient can be very useful as long as they distill out most of the relevant information and are computationally efficient. For example, it may be possible to trade some loss of mutual information for a dramatic reduction in the complexity of the Hamiltonian; e.g., H_y(f(x)) may be considerably easier to implement in a neural network than H_y(x). Precisely this situation applies to the physical example described in Fig. 3, where a hierarchy of efficient near-perfect information distillers f_i has been found, with the numerical cost of f_3 [23, 24], f_2 [21, 22], f_1 [19, 20] and f_0 [17] scaling with the number of input parameters n as O(n), O(n^{3/2}), O(n^2) and O(n^3), respectively. More abstractly, the procedure of renormalization, ubiquitous in statistical physics, can be viewed as a special case of approximate information distillation, as we will now describe.
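Before turning to renormalization, here is a quick numerical illustration of Eqs. (21)-(22); it is our own sketch with an arbitrary small joint distribution. The mutual information I(x, y) is computed directly from the definition, and applying a many-to-one function f to y can only decrease it, in line with the data processing inequality.

```python
import numpy as np

def mutual_information(p_xy):
    """I(x, y) for a joint probability table p_xy (rows: x, columns: y), Eq. (22)."""
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return float((p_xy[mask] * np.log(p_xy[mask] / (p_x @ p_y)[mask])).sum())

# Arbitrary joint distribution over x in {0,1,2} and y in {0,1,2,3}.
p_xy = np.array([[0.10, 0.05, 0.05, 0.05],
                 [0.05, 0.20, 0.05, 0.05],
                 [0.05, 0.05, 0.20, 0.10]])

# A lossy "distillation" f(y): merge y-values {0,1} -> 0 and {2,3} -> 1.
p_x_fy = np.stack([p_xy[:, :2].sum(axis=1), p_xy[:, 2:].sum(axis=1)], axis=1)

print(mutual_information(p_xy), mutual_information(p_x_fy))   # I(x,y) >= I(x,f(y))
```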
Let us first review a standard working definition of what renormalization is in the context of statistical physics, involving three ingredients: a vector x of random variables, a coarse-graining operation R and a requirement that this operation leaves the Hamiltonian invariant except for parameter changes. We think of x as the microscopic degrees of freedom-typically physical quantities defined at a lattice of points (pixels or voxels) in space. Its probability distribution is specified by a Hamiltonian H_y(x), with some parameter vector y. We interpret the map R : x → x as implementing a coarse-graining 7 of the system. The random variable R(x) also has a Hamiltonian, denoted H(R(x)), which we require to have the same functional form as the original Hamiltonian H_y, although the parameters y may change. In other words, H(R(x)) = H_{r(y)}(R(x)) for some function r. Since the domain and the range of R coincide, this map R can be iterated n times R^n = R ∘ R ∘ ⋯ ∘ R, giving a Hamiltonian H_{r^n(y)}(R^n(x)) for the repeatedly renormalized data. Similar to the case of sufficient statistics, P(y|R^n(x)) will then be a compositional function. Contrary to some claims in the literature, effective field theory and the renormalization group have little to do with the idea of unsupervised learning and pattern-finding. Instead, the standard renormalization procedures in statistical physics are essentially a feature extractor for supervised learning, where the features typically correspond to long-wavelength/macroscopic degrees of freedom. In other words, effective field theory only makes sense if we specify what features we are interested in. For example, if we are given data x about the positions and momenta of particles inside a mole of some liquid and are tasked with predicting from this data whether or not Alice will burn her finger when touching the liquid, a (nearly) sufficient statistic is simply the temperature of the object, which can in turn be obtained from some very coarse-grained degrees of freedom (for example, one could use the fluid approximation instead of working directly from the positions and momenta of ∼10^23 particles). But without specifying what we wish to predict (here, long-wavelength physics), there is nothing natural about an effective field theory approximation. To be more explicit about the link between renormalization and deep learning, consider a toy model for natural images. Each image is described by an intensity field φ(r), where r is a 2-dimensional vector. We assume that an ensemble of images can be described by a quadratic Hamiltonian of the form $H_y(\varphi) = \int \left[ y_0 \varphi^2 + y_1 (\nabla \varphi)^2 + y_2 (\nabla^2 \varphi)^2 + \cdots \right] d^2 r$. (23) Each parameter vector y defines an ensemble of images; we could imagine that the fictitious classes of images that we are trying to distinguish are all generated by Hamiltonians H_y with the same above form but different parameter vectors y. We further assume that the function φ(r) is specified on pixels that are sufficiently close that derivatives can be well-approximated by differences. Derivatives are linear operations, so they can be implemented in the first layer of a neural network. The translational symmetry of Eq. (23) allows it to be implemented with a convnet. It can be shown [31] that for any coarse-graining operation that replaces each block of b × b pixels by its average and divides the result by b^2, the Hamiltonian retains the form of Eq. (23) but with the parameters y_i replaced by $y_i' = b^{2-2i} y_i$. (24)
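As a toy illustration of the block coarse-graining and the parameter flow of Eq. (24), here is a short numpy sketch (our own, with arbitrary field size and couplings); it applies one b × b block-averaging step and prints the predicted rescaled couplings b^{2−2i} y_i.

```python
import numpy as np

def coarse_grain(phi, b):
    """Replace each b x b block of the field by its average, divided by b**2,
    the coarse-graining step discussed around Eqs. (23)-(24)."""
    h, w = phi.shape
    blocks = phi[: h - h % b, : w - w % b].reshape(h // b, b, w // b, b)
    return blocks.mean(axis=(1, 3)) / b**2

# Predicted parameter flow of Eq. (24): y_i -> b**(2 - 2*i) * y_i.
b = 2
y = np.array([1.0, 1.0, 1.0, 1.0])  # toy couplings y_0 .. y_3
y_renormalized = np.array([b ** (2 - 2 * i) * yi for i, yi in enumerate(y)])
print(y_renormalized)               # [4. 1. 0.25 0.0625]: couplings with i >= 2 shrink

# One coarse-graining step applied to a random "image".
rng = np.random.default_rng(2)
phi = rng.normal(size=(32, 32))
print(coarse_grain(phi, b).shape)   # (16, 16)
```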
This means that all parameters y_i with i ≥ 2 decay exponentially with b as we repeatedly renormalize and b keeps increasing, so that for modest b, one can neglect all but the first few y_i's. What would have taken an arbitrarily large neural network can now be computed on a neural network of finite and bounded size, assuming that we are only interested in classifying the data based on the coarse-grained variables. These insufficient statistics will still have discriminatory power if we are only interested in discriminating Hamiltonians which all differ in their first few parameters y_k. In this example, the parameters y_0 and y_1 are called "relevant operators" by physicists and "signal" by machine learners, whereas the remaining parameters are called "irrelevant operators" by physicists and "noise" by machine learners. The fixed point structure of the transformation in this example is very simple, but one can imagine that in more complicated problems the fixed point structure of various transformations might be highly non-trivial. This is certainly the case in statistical mechanics problems where renormalization methods are used to classify various phases of matter; the point here is that the renormalization group flow can be thought of as solving the pattern-recognition problem of classifying the long-range behavior of various statistical systems. In summary, renormalization can be thought of as a type of supervised learning, 8 where the large-scale properties of the system are considered the features. If the desired features are not large-scale properties (as in most machine learning cases), one might still expect a generalized formalism of renormalization to provide some intuition for the problem by replacing a scale transformation with some other transformation. But calling some procedure renormalization or not is ultimately a matter of semantics; what remains to be seen is whether or not semantics has teeth, namely, whether the intuition about fixed points of the renormalization group flow can provide concrete insight into machine learning algorithms. In many numerical methods, the purpose of the renormalization group is to efficiently and accurately evaluate the free energy of the system as a function of macroscopic variables of interest such as temperature and pressure. Thus we can only sensibly talk about the accuracy of an RG-scheme once we have specified what macroscopic variables we are interested in. \n No-Flattening Theorems Above we discussed how Markovian generative models cause p(x|y) to be a composition of a number of simpler functions f_i. Suppose that we can approximate each function f_i with an efficient neural network for the reasons given in Sect. 2. Then we can simply stack these networks on top of each other, to obtain a deep neural network efficiently approximating p(x|y). But is this the most efficient way to represent p(x|y)? Since we know that there are shallower networks that accurately approximate it, are any of these shallow networks as efficient as the deep one, or does flattening necessarily come at an efficiency cost? To be precise, for a neural network f defined by Eq. (6), we will say that the neural network f_ℓ^ε is the flattened version of f if its number ℓ of hidden layers is smaller and f_ℓ^ε approximates f within some error ε (as measured by some reasonable norm). We say that f_ℓ^ε is a neuron-efficient flattening if the sum of the dimensions of its hidden layers (sometimes referred to as the number of neurons N_n) is less than for f.
We say that f_ℓ^ε is a synapse-efficient flattening if the number N_s of non-zero entries (sometimes called synapses) in its weight matrices is less than for f. This lets us define the flattening cost of a network f as the two functions $C_n(f, \ell, \epsilon) \equiv \min_{f_\ell^\epsilon} \frac{N_n(f_\ell^\epsilon)}{N_n(f)}$, (25) $C_s(f, \ell, \epsilon) \equiv \min_{f_\ell^\epsilon} \frac{N_s(f_\ell^\epsilon)}{N_s(f)}$, (26) specifying the factor by which optimal flattening increases the neuron count and the synapse count, respectively. We refer to results where C_n > 1 or C_s > 1 for some class of functions f as "no-flattening theorems", since they imply that flattening comes at a cost and efficient flattening is impossible. A complete list of no-flattening theorems would show exactly when deep networks are more efficient than shallow networks. There has already been very interesting progress in this spirit, but crucial questions remain. On one hand, it has been shown that deep is not always better, at least empirically for some image classification tasks [38]. On the other hand, many functions f have been found for which the flattening cost is significant. Certain deep Boolean circuit networks are exponentially costly to flatten [39]. Two families of multivariate polynomials with an exponential flattening cost C_n are constructed in [14]. Poggio et al. [6], Mhaskar et al. [15], and Mhaskar and Poggio [16] focus on functions that have a tree-like hierarchical compositional form, concluding that the flattening cost C_n is exponential for almost all functions in Sobolev space. For the ReLU activation function, [40] finds a class of functions that exhibit exponential flattening costs; [41] study a tailored complexity measure of deep versus shallow ReLU networks. Eldan and Shamir [42] show that given weak conditions on the activation function, there always exists at least one function that can be implemented in a 3-layer network which has an exponential flattening cost. Finally, [43, 44] study the differential geometry of shallow versus deep networks, and find that flattening is exponentially neuron-inefficient. Further work elucidating the cost of flattening various classes of functions will clearly be highly valuable. \n Linear No-Flattening Theorems In the meantime, we will now see that interesting no-flattening results can be obtained even in the simpler-to-model context of linear neural networks [45], where the σ operators are replaced with the identity and all biases are set to zero such that the A_i are simply linear operators (matrices). Every map is specified by a matrix of real (or complex) numbers, and composition is implemented by matrix multiplication. One might suspect that such a network is so simple that the questions concerning flattening become entirely trivial: after all, successive multiplication with n different matrices is equivalent to multiplying by a single matrix (their product). While the effect of flattening is indeed trivial for expressibility (f can express any linear function, independently of how many layers there are), this is not the case for learnability, which involves non-linear and complex dynamics despite the linearity of the network [45]. We will show that the efficiency of such linear networks is also a very rich question. Neuronal efficiency is trivially attainable for linear networks, since all hidden-layer neurons can be eliminated without accuracy loss by simply multiplying all the weight matrices together. We will instead consider the case of synaptic efficiency and set ℓ = ε = 0.
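To make the definitions in Eqs. (25) and (26) concrete, here is a small bookkeeping sketch (our own helper functions, not from the paper) that counts neurons and synapses for a toy deep linear network and for its flattened single-matrix version, using the kind of low-rank factorization discussed in the next paragraph.

```python
import numpy as np

def neuron_count(weights):
    """N_n: number of hidden-layer neurons of a linear network given as a list
    of weight matrices [A_1, ..., A_L] applied in sequence."""
    return sum(w.shape[0] for w in weights[:-1])  # outputs of every layer except the last

def synapse_count(weights):
    """N_s: number of non-zero weight-matrix entries ('synapses')."""
    return sum(int(np.count_nonzero(w)) for w in weights)

def flatten(weights):
    """Collapse the deep linear network into a single matrix (no hidden layers)."""
    product = weights[0]
    for w in weights[1:]:
        product = w @ product
    return [product]

# Toy deep network: a rank-k map written as two thin layers (n -> k -> n).
n, k = 64, 4
rng = np.random.default_rng(3)
deep = [rng.normal(size=(k, n)), rng.normal(size=(n, k))]
flat = flatten(deep)
print(neuron_count(deep), synapse_count(deep))    # 4 hidden neurons, 2*n*k synapses
print(synapse_count(flat) / synapse_count(deep))  # empirical flattening cost, roughly n/(2k) = 8
```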
Many divide-and-conquer algorithms in numerical linear algebra exploit some factorization of a particular matrix A in order to yield significant reduction in complexity. For example, when A represents the discrete Fourier transform (DFT), the fast Fourier transform (FFT) algorithm makes use of a sparse factorization of A which only contains O(n log n) non-zero matrix elements instead of the naive single-layer implementation, which contains n^2 non-zero matrix elements. As first pointed out in [46], this is an example where depth helps and, in our terminology, of a linear no-flattening theorem: fully flattening a network that performs an FFT of n variables increases the synapse count N_s from O(n log n) to O(n^2), i.e., incurs a flattening cost C_s = O(n/log n) ∼ O(n). This argument applies also to many variants and generalizations of the FFT such as the Fast Wavelet Transform and the Fast Walsh-Hadamard Transform. Another important example illustrating the subtlety of linear networks is matrix multiplication. More specifically, take the input of a neural network to be the entries of a matrix M and the output to be NM, where both M and N have size n × n. Since matrix multiplication is linear, this can be exactly implemented by a 1-layer linear neural network. Amazingly, the naive algorithm for matrix multiplication, which requires n^3 multiplications, is not optimal: the Strassen algorithm [47] requires only O(n^ω) multiplications (synapses), where ω = log_2 7 ≈ 2.81, and recent work has cut this scaling exponent down to ω ≈ 2.3728639 [48]. This means that fully optimized matrix multiplication on a deep neural network has a flattening cost of at least C_s = O(n^{0.6271361}). Low-rank matrix multiplication gives a more elementary no-flattening theorem. If A is a rank-k matrix, we can factor it as A = BC where B is an n × k matrix and C is a k × n matrix. Hence the number of synapses is n^2 for an ℓ = 0 network and 2nk for an ℓ = 1 network, giving a flattening cost C_s = n/(2k) > 1 as long as the rank k < n/2. Finally, let us consider flattening a network f = AB, where A and B are random sparse n × n matrices such that each element is 1 with probability p and 0 with probability 1 − p. Flattening the network results in a matrix $F_{ij} = \sum_k A_{ik} B_{kj}$, so the probability that F_{ij} = 0 is (1 − p^2)^n. Hence the number of non-zero components will on average be $[1 - (1 - p^2)^n]\, n^2$, so $C_s = \frac{[1 - (1 - p^2)^n]\, n^2}{2 n^2 p} = \frac{1 - (1 - p^2)^n}{2p}$. (27) Note that C_s ≤ 1/(2p) and that this bound is asymptotically saturated for n ≫ 1/p^2. Hence in the limit where n is very large, flattening multiplication by sparse matrices (p ≪ 1) is horribly inefficient. \n A Polynomial No-Flattening Theorem In Sect. 2, we saw that multiplication of two variables could be implemented by a flat neural network with 4 neurons in the hidden layer, using Eq. (11) as illustrated in Fig. 2. In Appendix A, we show that Eq. (11) is merely the n = 2 special case of the formula $\prod_{i=1}^{n} x_i = \frac{1}{2^n n! \, \sigma_n} \sum_{\{s\}} s_1 \cdots s_n \, \sigma(s_1 x_1 + \cdots + s_n x_n)$, (28) where the sum is over all possible 2^n configurations of s_1, …, s_n, where each s_i can take on values ±1 and σ_n denotes the n-th Taylor coefficient of the activation function σ. In other words, multiplication of n variables can be implemented by a flat network with 2^n neurons in the hidden layer. We also prove in Appendix A that this is the best one can do: no neural network can implement an n-input multiplication gate using fewer than 2^n neurons in the hidden layer.
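The flat 2^n-neuron product gate of Eq. (28) can be checked numerically. The sketch below is our own illustration, assuming σ = exp (so that σ_n = 1/n!) and using the scale-down/scale-up trick described in Appendix A; the function name and the scale parameter are arbitrary choices.

```python
import itertools
import math
import numpy as np

def flat_product_gate(x, scale=1e-2):
    """Approximate prod(x) with one hidden layer of 2**n neurons via Eq. (28),
    using sigma = exp (whose n-th Taylor coefficient is 1/n!). Inputs are scaled
    down so Taylor terms above degree n are negligible, then the result is rescaled."""
    n = len(x)
    sigma_n = 1.0 / math.factorial(n)
    z = scale * np.asarray(x, dtype=float)
    total = 0.0
    for s in itertools.product([-1.0, 1.0], repeat=n):  # the 2**n hidden units
        total += np.prod(s) * np.exp(np.dot(s, z))
    return total / (2**n * math.factorial(n) * sigma_n) / scale**n

x = [1.3, -0.7, 2.2, 0.5]
print(flat_product_gate(x), np.prod(x))  # agree to roughly 4 decimal places
```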
This is another powerful no-flattening theorem, telling us that polynomials are exponentially expensive to flatten. For example, if n is a power of two, then the monomial x_1 x_2 ⋯ x_n can be evaluated by a deep network using only 4n neurons, arranged in a binary tree with log_2 n layers built from copies of the multiplication gate from Fig. 2 (the fifth neuron at the top of Fig. 2 need not be counted, as it is the input to whatever computation comes next). In contrast, a functionally equivalent flattened network requires a whopping 2^n neurons. For example, a deep neural network can multiply 32 numbers using 4n = 160 neurons while a shallow one requires 2^32 = 4,294,967,296 neurons. Since a broad class of real-world functions can be well approximated by polynomials, this helps explain why many useful neural networks cannot be efficiently flattened. \n Conclusions We have argued that the success of deep and cheap learning depends not only on mathematics but also on physics, which favors certain classes of exceptionally simple probability distributions that deep learning is uniquely suited to model. We argued that the success of shallow neural networks hinges on symmetry, locality, and polynomial log-probability in data from or inspired by the natural world, which favors sparse low-order polynomial Hamiltonians that can be efficiently approximated. These arguments should be particularly relevant for explaining the success of machine learning applications to physics, for example using a neural network to approximate a many-body wavefunction [49]. Whereas previous universality theorems guarantee that there exists a neural network that approximates any smooth function to within an error ε, they cannot guarantee that the size of the neural network does not grow to infinity with shrinking ε or that the activation function σ does not become pathological. We show constructively that given a multivariate polynomial and any generic non-linearity, a neural network with a fixed size and a generic smooth activation function can indeed approximate the polynomial highly efficiently. Turning to the separate question of depth, we have argued that the success of deep learning depends on the ubiquity of hierarchical and compositional generative processes in physics and other machine learning applications. By studying the sufficient statistics of the generative process, we showed that the inference problem requires approximating a compositional function of the form f_1 ∘ f_2 ∘ f_3 ∘ ⋯ that optimally distills out the information of interest from irrelevant noise in a hierarchical process that mirrors the generative process. Although such compositional functions can be efficiently implemented by a deep neural network as long as their individual steps can, it is generally not possible to retain the efficiency while flattening the network. We extend existing "no-flattening" theorems [14–16] by showing that efficient flattening is impossible even for many important cases involving linear networks. In particular, we prove that flattening polynomials is exponentially expensive, with 2^n neurons required to multiply n numbers using a single hidden layer, a task that a deep network can perform using only ∼4n neurons. Strengthening the analytic understanding of deep learning may suggest ways of improving it, both to make it more capable and to make it more robust. One promising area is to prove sharper and more comprehensive no-flattening theorems, placing lower and upper bounds on the cost of flattening networks implementing various classes of functions.
\n Furthermore, this is the smallest possible number of neurons in any such network with only a single hidden layer. This result may be compared to problems in Boolean circuit complexity, notably the question of whether TC^0 = TC^1 [50]. Here circuit depth is analogous to the number of layers, and the number of gates is analogous to the number of neurons. In both the Boolean circuit model and the neural network model, one is allowed to use neurons/gates which have an unlimited number of inputs. The constraint in the definition of TC^i that each of the gate elements be from a standard universal library (AND, OR, NOT, Majority) is analogous to our constraint to use a particular nonlinear function. Note, however, that our theorem is weaker by applying only to depth 1, while TC^0 includes all circuits of depth O(1). \n A.1 Proof that 2^n Neurons are Sufficient A neural network with a single hidden layer of m neurons that approximates a product gate for n inputs can be formally written as a choice of constants a_ij and w_j satisfying $\prod_{i=1}^{n} x_i \approx \sum_{j=1}^{m} w_j \, \sigma\!\left(\sum_{i=1}^{n} a_{ij} x_i\right)$. (A1) Here, we use ≈ to denote that the two sides of (A1) have identical Taylor expansions up to terms of degree n; as we discussed earlier in our construction of a product gate for two inputs, this enables us to achieve arbitrary accuracy by first scaling down the factors x_i, then approximately multiplying them and finally scaling up the result. We may expand (A1) using the definition $\sigma(x) = \sum_{k=0}^{\infty} \sigma_k x^k$ and drop terms of the Taylor expansion with degree greater than n, since they do not affect the approximation. Thus, we wish to find the minimal m such that there exist constants a_ij and w_j satisfying $\sigma_n \sum_{j=1}^{m} w_j \left(\sum_{i=1}^{n} a_{ij} x_i\right)^n = \prod_{i=1}^{n} x_i$, (A2) $\sigma_k \sum_{j=1}^{m} w_j \left(\sum_{i=1}^{n} a_{ij} x_i\right)^k = 0$ (A3) for all 0 ≤ k ≤ n − 1. Let us set m = 2^n, and enumerate the subsets of {1, …, n} as S_1, …, S_m in some order. Define a network of m neurons in a single hidden layer by setting a_ij equal to the function s_i(S_j), which is −1 if i ∈ S_j and +1 otherwise, and setting $w_j \equiv \frac{1}{2^n n! \, \sigma_n} \prod_{i=1}^{n} a_{ij} = \frac{(-1)^{|S_j|}}{2^n n! \, \sigma_n}$. (A4) In other words, up to an overall normalization constant, all coefficients a_ij and w_j equal ±1, and each weight w_j is simply the product of the corresponding a_ij. We must prove that this network indeed satisfies Eqs. (A2) and (A3). The essence of our proof will be to expand the left hand side of Eq. (A1) and show that all monomial terms except x_1 ⋯ x_n come in pairs that cancel. To show this, consider a single monomial $p(x) = x_1^{r_1} \cdots x_n^{r_n}$ of degree r_1 + ⋯ + r_n = r ≤ n. If p(x) ≠ ∏_{i=1}^{n} x_i, then we must show that the coefficient of p(x) in $\sigma_r \sum_{j=1}^{m} w_j \left(\sum_{i=1}^{n} a_{ij} x_i\right)^r$ is 0. Since p(x) ≠ ∏_{i=1}^{n} x_i, there must be some i_0 such that r_{i_0} = 0. In other words, p(x) does not depend on the variable x_{i_0}. Since the sum in Eq. (A1) is over all combinations of ± signs for all variables, every term will be canceled by another term where the (non-present) x_{i_0} has the opposite sign and the weight w_j has the opposite sign: pairing each subset S_j that does not contain i_0 with S_{j'} = S_j ∪ {i_0} gives $w_j \sigma_r \left(\sum_{i=1}^{n} s_i(S_j) x_i\right)^r + w_{j'} \sigma_r \left(\sum_{i=1}^{n} s_i(S_{j'}) x_i\right)^r = \frac{(-1)^{|S_j|} \sigma_r}{2^n n! \, \sigma_n} \left[\left(\sum_{i=1}^{n} s_i(S_j) x_i\right)^r - \left(\sum_{i=1}^{n} s_i(S_j \cup \{i_0\}) x_i\right)^r\right]$. Observe that the coefficient of p(x) is equal in $\left(\sum_{i=1}^{n} s_i(S_j) x_i\right)^r$ and $\left(\sum_{i=1}^{n} s_i(S_j \cup \{i_0\}) x_i\right)^r$, since r_{i_0} = 0. Therefore, the overall coefficient of p(x) in the above expression must vanish, which implies that (A3) is satisfied. If instead p(x) = ∏_{i=1}^{n} x_i, then the coefficient of p(x) in $\left(\sum_{i=1}^{n} a_{ij} x_i\right)^n$ is $n! \prod_{i=1}^{n} a_{ij} = (-1)^{|S_j|} n!$, because all n! terms are identical and there is no cancelation. Hence, the coefficient of p(x) on the left-hand side of (A2) is $\sigma_n \sum_{j=1}^{m} \frac{(-1)^{|S_j|}}{2^n n! \, \sigma_n} (-1)^{|S_j|} n! = 1$, completing our proof that this network indeed approximates the desired product gate. From the standpoint of group theory, our construction involves a representation of the group G = Z_2^n, acting upon the space of polynomials in the variables x_1, x_2, …, x_n. The group G is generated by elements g_i such that g_i flips the sign of x_i wherever it occurs. Then, our construction corresponds to the computation f(x_1, …, x_n) = (1 − g_1)(1 − g_2) ⋯ (1 − g_n) σ(x_1 + x_2 + ⋯ + x_n). Every monomial of degree at most n, with the exception of the product x_1 ⋯ x_n, is sent to 0 by (1 − g_i) for at least one choice of i. Therefore, f(x_1, …, x_n) approximates a product gate (up to a normalizing constant).
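The cancellation argument above can also be confirmed symbolically. The following sketch (ours, not the paper's) uses sympy with a generic truncated Taylor series and checks that the alternating sign-sum kills every monomial of degree at most n except x_1⋯x_n, whose coefficient is 2^n n! σ_n.

```python
import itertools
import math
import sympy as sp

n = 3
xs = sp.symbols(f"x1:{n + 1}")   # (x1, x2, x3)
cs = sp.symbols(f"c0:{n + 1}")   # generic Taylor coefficients sigma_0 .. sigma_n
u = sp.Symbol("u")
sigma_taylor = sum(c * u**k for k, c in enumerate(cs))

# Alternating sum over all 2**n sign configurations, as in the construction above.
total = 0
for signs in itertools.product([-1, 1], repeat=n):
    argument = sum(s * x for s, x in zip(signs, xs))
    total += math.prod(signs) * sigma_taylor.subs(u, argument)

poly = sp.Poly(sp.expand(total), *xs)
print(poly.as_dict())  # {(1, 1, 1): 48*c3}: only x1*x2*x3 survives, coefficient 2**n * n! * c_n
```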
Fig. 3 Causal hierarchy examples relevant to physics (left) and image classification (right). As information flows down the hierarchy y_0 → y_1 → ⋯ → y_n = x, some of it is destroyed by random Markov processes. However, no further information is lost as information flows optimally back up the hierarchy as ŷ_{n−1} → ⋯ → ŷ_0. The right example is deliberately contrived and over-simplified for pedagogy; for example, translation and scaling are more naturally performed before ray tracing, which in turn breaks down into multiple steps. \n Table 1 Physics-ML dictionary \n\t\t\t The class of functions that can be exactly expressed by a neural network must be invariant under composition, since adding more layers corresponds to using the output of one function as the input to another. Important such classes include linear functions, affine functions, piecewise linear functions (generated by the popular Rectified Linear Unit "ReLU" activation function σ(x) = max[0, x]), polynomials, continuous functions and smooth functions whose n-th derivatives are continuous. According to the Stone-Weierstrass theorem, both polynomials and piecewise linear functions can approximate continuous functions arbitrarily well. \n\t\t\t The limit where λ → ∞ but |A_1|^2 |A_2| is held constant is very similar in spirit to the 't Hooft limit in large N quantum field theories where g^2 N is held fixed but N → ∞. The extra terms in the Taylor series which are suppressed at large λ are analogous to the suppression of certain Feynman diagrams at large N. The authors thank Daniel Roberts for pointing this out. 4 In addition to the four neurons required for each multiplication, additional neurons may be deployed to copy variables to higher layers bypassing the nonlinearity in σ. Such linear "copy gates" implementing the function u → u are of course trivial to implement using a simpler version of the above procedure: using A_1 to shift and scale down the input to fall in a tiny range where σ'(u) ≠ 0, and then scaling it up and shifting accordingly with A_2. \n\t\t\t If the next step in the generative hierarchy requires knowledge not merely of the present state but also of the past, the present state can be redefined to include this information as well, thus ensuring that the generative process is a Markov process. \n\t\t\t Although our discussion is focused on describing probability distributions, which are not random, stochastic neural networks can generate random variables as well.
In biology, spiking neurons provide a good random number generator, and in machine learning, stochastic architectures such as restricted Boltzmann machines [25] do the same. \n\t\t\t A typical renormalization scheme for a lattice system involves replacing many spins (bits) with a single spin according to some rule. In this case, it might seem that the map R could not possibly map its domain onto itself, since there are fewer degrees of freedom after the coarse-graining. On the other hand, if we let the domain and range of R differ, we cannot easily talk about the Hamiltonian as having the same functional form, since the renormalized Hamiltonian would have a different domain than the original Hamiltonian. Physicists get around this by taking the limit where the lattice is infinitely large, so that R maps an infinite lattice to an infinite lattice. \n\t\t\t A subtlety regarding the above statements is presented by the Multi-scale Entanglement Renormalization Ansatz (MERA) [37]. MERA can be viewed as a variational class of wave functions whose parameters can be tuned to match a given wave function as closely as possible. From this perspective, MERA is an unsupervised machine learning algorithm, where classical probability distributions over many variables are replaced with quantum wavefunctions. Due to the special tensor network structure found in MERA, the resulting variational approximation of a given wavefunction has an interpretation as generating an RG flow. Hence this is an example of an unsupervised learning problem whose solution gives rise to an RG flow. This is only possible due to the extra mathematical structure in the problem (the specific tensor network found in MERA); a generic variational Ansatz does not give rise to any RG interpretation and vice versa. \n Abstract We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through "cheap learning" with exponentially fewer parameters than generic ones. We explore how properties frequently encountered in physics such as symmetry, locality, compositionality, and polynomial log-probability translate into exceptionally simple neural networks. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to the renormalization group.
We prove various "no-flattening theorems" showing when efficient linear deep networks cannot be accurately approximated by shallow ones without efficiency loss; for example, we show that n variables cannot be multiplied using fewer than 2^n neurons in a single hidden layer.

The Case Against Economic Values in the Orbitofrontal Cortex (or Anywhere Else in the Brain)
Benjamin Y. Hayden and Yael Niv

The past 20 years have seen a great deal of interest in understanding how our brains implement economic choices (Camerer et al., 2005; Glimcher & Fehr, 2013; Loewenstein et al., 2008; Padoa-Schioppa, 2011; Rangel et al., 2008; Rushworth et al., 2011). Much research in this field of neuroeconomics rests on the assumption that choices between options rely on an explicit valuation process (Kable & Glimcher, 2009; Levy & Glimcher, 2012; Montague & Berns, 2002; O'Doherty, 2014; Padoa-Schioppa, 2011). That is, that the brain first assigns value to each option and then compares those values to determine choice. The concept of valuation-which stems from economic theory-is so ingrained that it may seem inevitable. How else would one literally compare apples to oranges? Indeed, the idea that value exists on a single cardinal scale, also called a "common currency," has been extended to encompass not only goods, but also effort costs and time delays; everything an agent needs to bundle into evaluation of a mode of action that is aimed at procuring a specific outcome. In parallel, the computational framework of reinforcement learning, which has been a cornerstone of neuroeconomics, also makes a scalar value signal central to its implementation (Niv, 2009; Sutton & Barto, 2018). Specifically, in reinforcement learning, the value of an option-a state or an action-is the expected sum of future rewards contingent upon that choice. As a sum of future rewards that may be of different types (and include costs as negative rewards), reinforcement-learning value is naturally calculated in some unitless common currency. However, not all reinforcement learning algorithms rely on or even calculate values (Sutton & Barto, 2018), and reinforcement-learning values, as sums of future rewards, are not synonymous with economic values of specific goods. Likewise, many empirically supported process models of choice get by with no valuation (Gigerenzer & Gaissmaier, 2011; Miller et al., 2019; Vlaev et al., 2011). The fact that the brain can compute values to compare apples and oranges does not mean that it routinely does so, or that valuation is the primary process underlying choice. In this opinion paper, we argue that in many choice scenarios the brain may not be computing values at all, despite appearing to do so. We will demonstrate that the rationale for a value signal is weak, as is the behavioral and neural evidence supporting value computation. Finally, we propose an alternative-direct learning of action policies-and suggest there is scant direct evidence for value learning that cannot be explained by this and other alternative theories. Our hypothesis is important as it suggests a different interpretation of previous data and requires that studies attempting to resolve mechanisms of value computation in the brain first establish that in the specific situation studied, valuation is indeed occurring.
In particular, much work has associated the orbitofrontal cortex (OFC) in representing the expected economic value of different goods or options (Bartra et al., 2013; Levy & Glimcher, 2012 )-a role that must be critically reconsidered if we agree that the brain may not necessarily be representing such values in many of the experiments so far used to test this hypothesis. \n Why Argue Against Value? Intuitively, our thesis is that while we may know that we prefer an orange to an apple (for one thing, oranges don't brown when exposed to oxygen), we may make this judgment without consulting an internal scalar (cardinal) or even a universal ordinal value signal. Consequently, we may be hard-pressed to express the precise value that we put on an orange. This is not merely an issue of conscious access to our internal value of oranges 1 , but may be due to the fact that deciding on preferences can utilize many alternative mechanisms that don't require or rely on calculation of such a value. Valuation is hard (Payne et al., 1992) . It is also often unnecessary: when choosing between, say, an orange and a car, it is immediately clear that one is better than the other without calculating the precise value of either. If you are extremely thirsty, you might choose the orange, whereas if you need to go somewhere a car is the only relevant option. Arguably, many real-life choices are between options that are sufficiently different in the needs that they fulfill as to be more similar to this extreme example than they are to the choice between apples and oranges (Juechems & Summerfield, 2019) . Valuation may only be necessary when choosing between two very similarly valued items. However, if the items are sufficiently similar in how they satisfy our needs, the brain may decide to choose randomly, according to some valuefree heuristic, or according to past choices-and move on (Chater, 2018) . Alternatively, the brain can try to calculate the exact values of the options to the necessary precision that arbitrates between them. Indeed, choosing between similar options often takes longer than choosing between very different options, even when making rather trivial choices that do not warrant the time and effort invested (Pirrone et al., 2018; Teodorescu et al., 2016 ). An example is the inordinate amount of time some of us may spend deciding what brand of tuna fish to buy, despite the fact that our time (as per our salary) is worth much more than the money we will save by correctly evaluating which brand provides the best \"value for money.\" This scenario also illustrates how inept we are at comparing the value of different kinds-here, time and money-which should have been straightforward if the modus operandi of the brain was to evaluate everything in terms of some common currency (see below). So perhaps the brain can, when needed, calculate values. However, we argue that this is not the main means by which the brain makes decisions, and perhaps not the natural mode of decision making. \n What Is Value? Before moving on, we would like to delineate precisely how we are defining value for the sake of our argument. The term \"value\" has multiple uses (the problems this multiplicity raises are carefully laid out by O'Doherty, 2014). We consider \"value\" a hypothesized scalar variable that reflects the worth of a specific item or outcome. 2 Because it is scalar, it is necessarily abstract. 
It can refer to any good, and makes use of a common currency code that is comparable across goods of different types (e.g., food, water, recreation time). Value, by this definition, is cardinal, not ordinal, meaning it can be defined for an option per se, rather than solely relative to other options. That is, it reifies the idea of "utils"-quantifiable units of value, often used in a jocular manner in Economics classes. The idea of a common currency is key as it implies that all relative calculations have already happened-the value of an option is in some denomination that objectively defines it as compared to other such values. One might argue that this is too narrow a definition, but we believe this is what neuroeconomists have in mind when talking about representation of common currency values in the brain. For example, we know of no neuroeconomic model that imagines a neural implementation of an ordinal value scale. This having been said, even an ordinal scale should not change based on the comparison set, and should satisfy transitivity-which many of the neural signals attributed to value do not, as we detail below. Our definition of value, for the purpose of this paper, is different from other types of value that are discussed in reinforcement learning. Specifically, here we are discussing value as the reward worth of a single item/event, not the expected sum of all future rewards (R in reinforcement learning models, not V). One might argue that in order to compute such an expected sum V, the subjective worth of all individual rewards must be translated to a common currency that can be added. In this sense, reinforcement learning models do presuppose a common currency reward value. However, they don't necessarily commit to economic properties of such values, such as transitivity and consistency (see also Juechems & Summerfield, 2019), and we discuss below a class of reinforcement learning algorithms that can make do with evaluations that are only relative to the current options, and not applicable in other situations. One important feature of our definition of value is that in our view, although value is inferred from choice, it is not strictly identical to choice, nor necessarily implied by choice. For instance, if we observe a consistent preference for A over B, A is assumed to have a higher value than B. But choice of A over B isn't sufficient to infer the existence of a self-consistent value function: a decision-maker may adhere to a heuristic policy that results in stable preference for [A > B], [B > C], and [C > A] (Lichtenstein & Slovic, 2006). Moreover, choice can be altered without manipulating the reward value of an item (Schonberg & Katz, 2020), which suggests that value, as inferred from choice, is not untarnished by processes that are not economic in nature. Thus, value can be inferred from behavior given certain assumptions, but preferences do not always lead to a value function. \n The Common-Currency Hypothesis The notion that economic decision-makers make use of internal value functions to compare options is often dated to Daniel Bernoulli's proposal in 1738 of a logarithmic utility curve to explain preferences in the St. Petersburg Paradox (Martin, 2011).
Here, a decision-maker is offered a chance to play a game in which they win $2^{n−1}, with n being the first time, in a series of coin tosses, that a coin falls on "heads." The paradox is that although the expected monetary value of this game (that is, the sum of the possible wins multiplied by their probabilities, $\sum_{n=1}^{\infty} 2^{n-1} \cdot 0.5^n = \sum_{n=1}^{\infty} 0.5$) is infinite, people are not willing to pay even $10 to play this game (Hayden & Platt, 2009). The explanation that Bernoulli proposed proved foundational within microeconomic theory. He argued that decision-makers don't base calculations on the nominal, objective cash value of the potential gain, but rather on its subjective value. If the subjective value grows more slowly than the objective value (the idea of "diminishing marginal utility"), an optimizing decision-maker will appear risk-averse, for instance, when evaluating the St. Petersburg gamble. Utility, or subjective value, has been central to many if not most microeconomic models. These include the axiomatic approaches of Pareto, Von Neumann and Morgenstern, and Samuelson. It is also central to behavioral theories such as prospect theory and decision field theory (Busemeyer & Townsend, 1993; Kahneman & Tversky, 1979). Ironically, however, explaining choices in the St. Petersburg paradox using such a subjective value function requires utility for money that diminishes so rapidly that it does not generalize to choices in other contexts. Instead, heuristic accounts-accounts that don't rely on ideas about value maximization-provide better quantitative matches for St. Petersburg choices (Hayden & Platt, 2009). Despite its central importance in economics, economists are typically agnostic about whether the concept of value is just a convenient description, or whether it is instantiated in the brain. As Friedman argued, decision-makers behave as if we compute and compare values, but we cannot conclude that we actually do so (Friedman, 1953). In that work, he famously compared economic agents to a trained billiards player who makes excellent shots as if having a sophisticated grasp of Euclidean geometry, although in fact such a theoretical understanding is not necessary for good billiards skill. Economists typically stop at the point of saying that economic models are "as if" models, with some arguing that the question of the underlying reality of economic variables is outside the domain of economics (Gul & Pesendorfer, 2008; Harrison, 2008). In contrast, neuroeconomics has generally taken as a default assumption that these "as if" theories are reified in the brain (Kable & Glimcher, 2009; Levy & Glimcher, 2012; Montague & Berns, 2002; O'Doherty, 2014; Padoa-Schioppa, 2011; Rich & Wallis, 2016). Modern tools such as neuroimaging and single unit physiology provide the opportunity to assess the implementation of decision making directly. Indeed, neuroscientists have had little trouble identifying correlates of value in several brain areas (Knutson et al., 2001; Levy & Glimcher, 2012; Plassmann et al., 2007; Wallis, 2007), leading to the suggestion that the OFC is a nexus for representing the economic value of goods in the brain (see, e.g., Rushworth et al., 2011; Wallis, 2007), alongside other brain areas that are important for computing and representing value such as the ventromedial prefrontal cortex (Bartra et al., 2013) and ventral striatum (Haber & Knutson, 2010).
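Returning briefly to the arithmetic of the St. Petersburg gamble above, the following small sketch (our own illustration, not from the original paper) shows the truncated expected value growing without bound while a Bernoulli-style log-utility valuation of the same gamble stays around $2.

```python
import numpy as np

# Truncated expected value of the St. Petersburg gamble: sum_{n=1}^{N} 2**(n-1) * 0.5**n.
for N in (10, 20, 40):
    print(N, sum(2 ** (n - 1) * 0.5**n for n in range(1, N + 1)))  # equals N * 0.5, unbounded

# Bernoulli-style log utility of the gamble: sum_n 0.5**n * log(2**(n-1)) converges.
expected_log_utility = sum(0.5**n * np.log(2.0 ** (n - 1)) for n in range(1, 200))
print(np.exp(expected_log_utility))  # certainty equivalent of roughly $2
```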
However, it is not clear that the signals identified in these studies actually represent value as proposed-on a common-currency scale, for comparing options and making choices. Moreover, so called \"value signals\" only correlate with value, so may be signaling other quantities, such as attention, action plans, vigor, or preference, which also correlate with value (Maunsell, 2004; O'Doherty, 2014; Wallis & Rich, 2011) . Hence it is important to carefully consider alternative interpretations for these findings, as we do further below, after discussing some practical constraints and philosophical conundrums. \n Alternatives to Valuation in Decision Making While common-currency value provides a convenient and general mechanism for making choices, there are many possible alternatives (Gigerenzer & Gaissmaier, 2011; Kahneman et al., 1982; Lichtenstein & Slovic, 2006; Vlaev et al., 2011) , some of which are quite general and robust. Consider for example the \"priority heuristic\" (Brandstätter et al., 2006) . This de minimis heuristic approach proposes that decision-makers first identify a dimension along which options vary and then compare options along that dimension. If that results in a choice, they stop; otherwise, they move on to the next dimension. This heuristic can explain many phenomena, including Allais preferences (Allais, 1953) , the reflection effect (Fishburn & Kochenberger, 1979) , the certainty effect (Kahneman & Tversky, 1979) , the fourfold pattern in risky choice that motivates prospect theory (Tversky & Fox, 1995) , and several intransitivities-all without ever requiring computing of value (Brandstätter et al., 2006) . And the priority heuristic is one of a large number of heuristics that do a remarkable job at describing behavior (Gigerenzer & Gaissmaier, 2011) . These heuristics are generally motivated by psychological observations, and thus are consistent with known data from the psychology-if not the neuroscience-of choice. In particular, they reflect the assumption that calculating value is difficult, that humans typically use shortcuts whenever possible, and that heuristics are a good shortcut (Gigerenzer & Gaissmaier, 2011; Lieder & Griffiths, 2020; Payne et al., 1992) . Moreover, these heuristics are not limited to humans, but apply to other species, including monkeys (Heilbronner & Hayden, 2016; Marsh, 2002; Santos & Rosati, 2015; Shafir et al., 2002) . Importantly for our argument, the success of heuristic approaches demonstrates that value calculations are not a priori essential for neuroeconomic theories of choice. In particular, heuristic theories can solve many problems for which value is proposed to be needed (Kahneman et al., 1982; Lichtenstein & Slovic, 2006; Piantadosi & Hayden, 2015; Stevens, 2016; Tversky, 1969; Vlaev et al., 2011) . Heuristics readily allow for comparison of multi-dimensional goods and bundles, and for choice across dissimilar goods. While heuristics are not perfect, and lead to many choice anomalies, choice is, empirically, full of anomalies. Moreover, several non-heuristic process models also eschew value computation steps, including decision by sampling, query theory, and fuzzy trace theory; these all account for a wide range of choice behavior as well (Reyna, 2008; Stewart & Simpson, 2008; Stewart et al., 2006; Weber et al., 2007) . \n Practical Issues in the Neuroscience of Value Representation Despite the above, it is often considered axiomatic that an internal, neural, value scale must exist in the brain. 
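Before turning to these practical issues, a minimal sketch may help make the earlier point concrete: a lexicographic rule in the spirit of the priority heuristic chooses without ever computing a scalar value. This is our own simplification; the attribute names, priority order, and stopping threshold are illustrative assumptions rather than the published specification of that heuristic.

```python
def lexicographic_choice(option_a, option_b, priority, min_difference=0.1):
    """Pick between two options by walking through attributes in priority order
    and stopping at the first difference that exceeds min_difference.
    Higher is treated as better on every attribute; no scalar value is computed."""
    for attribute in priority:
        diff = option_a[attribute] - option_b[attribute]
        if abs(diff) >= min_difference:
            return "A" if diff > 0 else "B"
    return "indifferent"  # defer to some other tie-breaking process

orange = {"thirst_quenching": 0.9, "keeps_well": 0.8, "sweetness": 0.6}
apple = {"thirst_quenching": 0.5, "keeps_well": 0.7, "sweetness": 0.7}
print(lexicographic_choice(orange, apple, ["thirst_quenching", "keeps_well", "sweetness"]))  # "A"
```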
The job of neuroscientists is then to find this signal-to pinpoint brain activity that correlates with value. In this section, we challenge this viewpoint by considering some basic issues that come up in the neuroscience of value. The first problem is that it is impossible to precisely measure value. We can measure preferences between options and use the data to infer option values, but elicited preferences are noisy measurements that also reflect factors other than value. For example, depending on how preferences are elicited, they may also reflect the tendency to press the same button repeatedly rather than change actions, the tendency to switch between options due to a prior belief about depleting resources, or the amount of attention or looking time for each of the options (Armel et al., 2008; Schonberg & Katz, 2020; Shimojo et al., 2003; Sugrue et al., 2005). Finally, value may shift from trial to trial, even depending on recent outcomes, meaning that methods that average across trials produce misleading value estimates (Sugrue et al., 2005). One could, in theory, incorporate these factors into the inferred value of an option. For example, there may be inherent value in choosing the same thing twice in a row. However, it becomes unclear what the definition of the "option" is that is being evaluated: if the value of an apple after eating (or choosing, or even just viewing) an orange is different from the value of an apple without that proximal experience, can we determine the value of goods at all? And if we can change the value (read: preference) for an option just by directing more attention to it (Salomon et al., 2018; Schonberg et al., 2014), or inducing choice of it (Izuma et al., 2010; Sharot et al., 2012; Voigt et al., 2017), is preference really measuring the subjective economic worth of a good? The risk is circularity-if any choice behavior can be explained by supposing value that depends on local history, then the concept of value adds no additional explanatory power beyond that of recent events and choices. Different ways to measure value raise a second challenge of inconsistent measures. Indeed, if we had a single neural value function we called on, values elicited by different measures would match. Unfortunately, they do not (Lichtenstein & Slovic, 2006). To explain this, in their seminal work, Sarah Lichtenstein and Paul Slovic proposed that preferences do not arise from any internal value function, but instead are constructed at the time of elicitation (see also Ariely et al., 2003; Payne et al., 1992; Tversky & Shafir, 1992). That is, in the view of these and like-minded scholars, value doesn't sit in the brain waiting to be used; rather, preference is a complex and active process that takes place at the time the decision is made. Critically, in this theory we compute choices in a largely ad-hoc manner based on the available options, without an intermediary common-currency valuation stage. As such, there is no guarantee of consistency or reliability; any consistency or reliability observed may be explained as a result of strong attractor states in the way the system determines the choice, and deviations from consistency are evidence for the specific nature of the algorithm and its idiosyncrasies.
This hypothesis, while denouncing value as a latent construct in the brain, nevertheless invites neuroscientific research to understand the active processes involved in choice, and how these relate to (relative) evaluation, if not to economic values. This having been said, experimenters can roughly estimate value, even if not measure it precisely. For example, in a given experiment they can examine choices and determine with confidence that a monkey (behaves as if it) places more value on a gamble as compared to a safe option with a matched expected value. One could then use this fact to try to identify a neural correlate of value. However, for this neural endeavor to be valid, it is important to identify all confounding variables and regress them out. O'Doherty's (2014) review delineates the practical difficulty of doing so, using the overall rubric of \"visceral, autonomic, and skeletomotor\" activity. In practice, confounding variables include both stimulus and outcome identity, information about the state or structure of the world, the surpriseness, informativeness, and informational value of stimuli, details of the action associated with selecting of consuming the reward, including its likelihood and vigor, and the attention and arousal engendered by the stimulus (e.g., Blanchard et al., 2015; Botvinik-Nezer et al., 2020; Niv et al., 2007; O'Doherty, 2007; Roesch & Olson, 2004; Roesch et al., 2006; Wilson et al., 2014; . Indeed, in studies that separately assess encoding of outcome identity versus outcome value, activity in brain areas that are often considered to be emblematic of economic value (in particular, the OFC) turns out to correlate with outcome identity instead (Klein-Flügge et al., 2013) . A final insurmountable problem is that it may be impossible, even in theory, to obtain a brain measure of value that is independent of behavior. Suppose for example, that we identify a particular class of neurons whose firing rates are perfectly correlated with value down to our ability to measure it through preference. Supporting this idea, we observe that any procedure that modifies value (as inferred from behavior) changes the firing rate of these neurons in a manner consistent with our predictions. We may tentatively hypothesize that these neurons are (or are among) the value neurons of the brain. However, as shown by Schonberg and colleagues, preference can be changed irrespective of changing the economic worth of goods (Schonberg & Katz, 2020) . Therefore, to differentiate value neurons from preference neurons, we would need to show that these neurons do not strictly follow expressed preference when the value function diverges from it. This is, of course, not possible if the value function never measurably diverges from preference. And if we assume that values can diverge from preference, it is not clear how to define values to start with. We call this \"the neuroeconomic relativity problem\" because, like Einstein's relativity problem, it reflects the fact that there is no external reference frame to which one can calibrate value inferences. \n Reconsidering the Motivation for Common Currency It may be worth asking, then, what does having a single common currency buy you? Why would the brain invest in such an organization? One advantage of a common-currency scale is that it simplifies comparing options that differ along multiple dimensions. For example, when hunting for an apartment, the options may differ along dimensions of price, area, neighborhood, and amenities. 
The logic is that these dimensions must be first combined into a single scalar per apartment so that the scalars can be compared. However, this is not the only way to solve the problem, and importantly, may not be the way humans make their decisions. For example, the apartment shopper may choose a single dimension and pick the winner along that dimension, as discussed above, or may compare separately on each dimension and choose the apartment that wins on most counts (Tversky, 1972). Laboratory studies where humans can choose what attributes to view indeed suggest that people don't uncover all the attributes of one option (to calculate its value) and then continue to the next option, but rather prefer to view information for all options attribute by attribute (Fellows, 2006; Hunt et al., 2014). Indeed, as mentioned, much empirical work indicates that human decision-makers broadly favor heuristic approaches that eschew a value stage in a large number of contexts (Brandstätter et al., 2006; Gigerenzer & Gaissmaier, 2011; Kahneman et al., 1982; Lieder & Griffiths, 2020). Notably, systems that make use of heuristics may generate internal variables that are conceptually distinct from value but that correlate with value, thus leading to an interpretational confound. Consider, for example, a relatively well-understood implementation of choice in a (non-brain) distributed system: the selection of hive sites in bees (Apis mellifera, Seeley, 2010; Seeley & Buhrman, 1999; Seeley et al., 2006). Bee swarms select a hive site by sending out scouts to investigate potential sites. Each site differs along roughly 20 dimensions of varying but measurable importance (size, safety from predators, exposure to sunlight, wind, etc.). Each scout bee that encounters the hive site performs an extremely poor estimation of the value of the site-it typically will only sample three-to-four of the dimensions and even then, estimate them poorly. The bee then returns and, if the estimated site quality is sufficiently good, indicates an assessment of its quality to the other bees in the swarm. The quality it signals will be correlated with the overall value of the hive site, but only weakly, and, critically, will only integrate a subset of relevant value components. The overall comparison between options is indirect-the options race to attract adherents; the majority of the decision is made by a positive feedback process. This example is particularly relevant because scholars of perceptual decision making may recognize this process as strongly related to ideas about how the brain decides between options by racing to bounds, and there is evidence that deliberative processes in the brain follow the same principles as choice processes in bee swarms (Eisenreich et al., 2017; Franks et al., 2003; Mitchell, 2009; Pais et al., 2013; Passino et al., 2008; Pirrone et al., 2018). \n Evaluating the Neural Evidence for the Common-Currency Hypothesis Evidence for the common-currency hypothesis comes from the observation that firing rates of single neurons or hemodynamic responses of voxels correlate with values of offers and outcomes (Kennerley et al., 2009, 2011; Levy & Glimcher, 2012; Padoa-Schioppa, 2011; Rangel et al., 2008).
These responses depend on multiple elements of offers (e.g., the expected reward, as well as the associated response costs), and are modulated by factors that affect subjective value, such as context or level of satiety (see Conen & Padoa-Schioppa, 2019; O'Neill & Schultz, 2010; Rudebeck et al., 2017 for only a few examples). Often, variations in these firing rates predict variations in choice (Conen & Padoa-Schioppa, 2015; Strait et al., 2014; Sugrue et al., 2004) . Such patterns have been taken as evidence that the neural signal encodes value, in particular implicating the OFC (Bartra et al., 2013; Levy & Glimcher, 2012; O'Reilly, 2020) . However, such patterns of neural responding do not definitively demonstrate a common-currency value signal. For example, in the original finding of value-encoding neurons in the OFC (Tremblay & Schultz, 1999) , neurons responded more strongly to a high-valued option than to a low-valued one. However, the principle of common currency only held within a given choice pair-responses for Option B were low when this option was paired with a better Option A, but were high when this same option was paired with a worse Option C. Other studies found value-correlated responses to be more stable across contexts (Padoa-Schioppa & Assad, 2008) , but most findings support an encoding that depends on the alternatives (e.g., Kobayashi et al., 2010; Padoa-Schioppa, 2009; Zimmermann et al., 2018) . One might interpret this finding as reflecting simple range adaptation of neurons that have a finite firing rate; however, a more parsimonious explanation is that the firing pattern is consistent with a relative preference code rather than an abstract value code. This interpretation places the putative neural representation of value one stage later than we would expect from a common-currency codeimmediately after comparison (because a relative valuation is itself a comparison) rather than as an input to comparison, raising the question of where the input came from. An interpretation of this code as something other than economic value would obviate this worry. Importantly, with a code that depends on alternatives, there is no real sense in which we can read out the \"true subjective value\" of an option. We can only know if an option's value is higher than another value-similar to the information provided by preferences and choice behavior. Another corollary of such a relative code is that it is unclear to what extent we can read out a meaningful value signal when only one option is available. Indeed, presenting a subject with one option at a time should, theoretically, provide the best ability to read out option values from neural activity in areas associated with value, such as OFC, ventromedial prefrontal cortex, and dorsal and subgenual anterior cingulate cortices. Doing so reveals that while neural activity in these regions does correlate with the value of the first option, neural responses to the second (alternative) option (presented on its own) correlate with the value difference, that is, they reflect the result of value comparison rather than valuation per se Hunt et al., 2018; Strait et al., 2014) . Moreover, even responses to the first offer do not simply encode its value, but also contain information about the likelihood that option will be chosen (presumably relative to the expected value of the second offer, Azab & Hayden, 2017) . These results suggest that even when evaluation is experimentally segregated from comparison, pure value encodings may not exist. 
In fact, the idea that prefrontal neurons are selective or responsive to a single experimenter-defined variable has been increasingly falling out of favor (Fusi et al., 2016; Raposo et al., 2014; Rigotti et al., 2013) , with "mixed selectivity" appearing to be a core operating principle of prefrontal cortex, including in ostensible value regions (Blanchard et al., 2018; Hayden & Platt, 2010; Kimmel et al., 2020) . A third issue is that although the activity of some neurons in the OFC is correlated with value (but not necessarily linearly related to value), activity of other neurons in this same area is anticorrelated with value, with the majority of neurons showing no relationship at all with value. This raises the possibility that the neurons are encoding something other than value, for instance, a distributed representation of the identity of each of the three options, A, B, and C above, or the identity of the stimulus representing the offer. With the small number of options evaluated in most experiments, such a distributed code of outcome or stimulus identity (but not value) can easily result in some neurons randomly firing most strongly for the higher-valued of the three options and least for the worse option, while in other neurons the relationship would be in the opposite direction. This would also explain why many neurons show a nonmonotonic relationship between value and their firing. Recently, using ensemble recordings in the OFC that allow analysis on the level of a single trial, Wallis and colleagues have attempted a more direct test of the hypothesis that OFC encodes value (Rich & Wallis, 2016) . In their task, monkeys were offered two options (denoted by images corresponding to the options) and asked to choose between them. On some trials, only one option was offered. Classifiers were trained to classify options corresponding to each of four offer-value levels (0.05, 0.10, 0.18, and 0.30 ml juice, or, in separate blocks, four levels of second-order reinforcer), aggregating over the two images corresponding to each value level. Using single-unit activity and local-field-potential recordings in the OFC, the authors could classify the offered value above chance, suggesting that OFC neurons encoded information about value. However, here too, offer values may have been encoded as different (outcome stimulus) identities, not different (cardinal) values in a common currency. One stringent test of the value-coding hypothesis would be to show that a combination of two offers of value level 0.05 ml resulted in a similar activation pattern to a single offer of value level 0.10 ml. Ensemble recordings that give ample data in a single trial allow testing these hypotheses with novel combinations that have not been trained to predict the same identity of reward through multiple presentations, therefore testing the assumptions of the additivity of common currency directly. This critical test has not been conducted, to our knowledge. As mentioned, many factors that are conceptually distinct from value can influence choice, and are therefore closely correlated with value (Maunsell, 2004) . Reward value drives attention, promotes both short- and long-term learning, primes behavioral adjustments, updates internal models, activates circuitry that detects both positive and negative surprises, and elicits mental computations of cost-benefit tradeoffs as well as comparisons with what could have been chosen.
All of these are different from scalar, common-currency value, but are known to drive neural activity in the regions usually associated with economic value. As such, they confound that interpretation. This problem is a long-standing and notorious one in neuroeconomics (Maunsell, 2004; O'Doherty, 2014; Roesch & Olson, 2003) . To overcome some of these potential confounds, a strong tradition in neuroeconomics research is to control for extraneous factors such as salience and response cost (although we note that many factors, for example covert attention, cannot be controlled for). As a practical issue, it is difficult to control for alternative interpretations without making the task so convoluted that it becomes unnatural for the animal, in the ethological sense (O'Doherty, 2014) . As a result, it is not clear if findings of value computation (if we believe them to be so) in these tasks would naturally translate to how the brain computes choice in naturalistic situations. But the deeper issue remains that value as inferred in most experiments is definitionally a summary of aggregated choice behavior, and therefore, any variable that influences choice behavior will necessarily be correlated with value. It is for this reason that we suggest that a stronger demonstration of scalar value coding in the brain should show mathematical properties such as the additivity of value, or separation from preferences as these are changed without changing value, as suggested above. If the response of neurons to the novel sum of two stimuli each promising 0.1 ml of juice is similar to the response of those same neurons to a different stimulus associated with 0.2 ml of juice, and especially if this response did not change when preference for an option was induced by means such as the mere exposure effect (Schonberg & Katz, 2020) , it would be harder to argue that this is due to a shared motor plan, or attentional capture. We note that research to date has shown that the activity of brain areas associated with value, in particular the OFC, does change when preferences are modified through methods that should, in principle, not change economic value (Botvinik-Nezer et al., 2020) . \n An Alternative View: Direct Learning of Policies Although reinforcement-learning models have contributed to the assumption that the brain computes values, many reinforcement-learning algorithms do not learn or estimate values for different actions. Instead, they directly learn action policies. In fact, in the reinforcement-learning literature, the goal of an agent is to obtain as much reward as possible by executing optimal actions; calculating values is only one means to achieve that end. In explaining reinforcement-learning methods, Dayan and Abbott's (2001) textbook begins with the "direct actor": an actor that learns actions without computing their values. The Actor-Critic model, a prominent algorithm for reinforcement learning that has been linked to the brain (Barto, 1995; Joel et al., 2002; Maia, 2010; O'Doherty et al., 2004; Takahashi et al., 2008) , does exactly that. In this algorithm, a Critic module learns values of states in terms of expected future rewards (here, the state includes all available actions, averaging over choices and explicitly not computing the value of each possible choice), and uses these to compute reward prediction errors.
These prediction errors are used to learn an action policy in the Actor module: the probability of actions that are followed by positive prediction errors is increased, and the probability of actions that are followed by negative prediction errors is decreased. Under some reasonable conditions, this model learns correct reward-maximizing policies (Sutton & Barto, 2018) . However, the quantities learned by the Actor (tendencies to perform one action over the other) cannot be read out as action values. In particular, due to the way the algorithm learns, ties are broken between equally good actions such that eventually agents learn deterministic policies. To be clear, if four actions were to lead to 1, 2, 3, and 3 drops of juice, respectively, the model may learn to always choose the third option, or to always choose the fourth (both optimal policies). At the end of learning, action weights in the Actor may be 0, 0, 1, 0 respectively, losing all information about value, or relative value. Even before convergence to a deterministic policy, weights may be 0, 0.1, 0.95, 0.2 respectively (or any other combination; weights here do not have to sum up to 1, and these are just illustrative numbers). Importantly, if the basal ganglia indeed implement an Actor-Critic learning algorithm in the brain, there is no sense in which we can glean action values from the Actor or the Critic. The Actor-Critic model is only one of a class of reinforcement-learning algorithms that learn policies directly, that is, without calculating option values (e.g., Sutton et al., 2000; Williams, 1992) . In their general form, these algorithms maintain an action policy, use experience to evaluate a gradient direction for this policy (that is, what change in policy would increase the overall obtained reward), and change the policy in that direction. Another notable model, developed to explain behavioral patterns in choices, is the Experience-Weighted Attraction model of Camerer and Ho (1999) , which interpolates between value calculation and direct policy learning. Recent findings suggest that both behavior and learning signals measured in humans during decision making are more in line with a policy-learning algorithm than with value estimation. For instance, Li and Daw (2011) had participants choose between two options that gave reward with different probabilities. After each choice, both the outcome of the chosen option and the counterfactual outcome of the unchosen option were displayed. Behavior, as well as neural signals corresponding to prediction errors in the basal ganglia, suggested that subjects were updating both options in opposite directions, learning relative choice propensities (a policy) rather than tracking the expected value of each option. In another task in which humans chose between pairs of options and were able to view both the outcome of their choice and the counterfactual outcome of the forgone option, Palminteri et al. (2015) had subjects learn which of two probabilistically rewarding options was better, and which of two probabilistically punishing options was better. At test, subjects were asked to choose between pairs from both the rewarding and the punishing contexts.
Surprisingly, when choosing between the less rewarding option and the less punishing option, subjects tended to choose the less punishing option. This is consistent with policy learning, as that option had been the favored option in the punishment context, whereas the less rewarding option had been the disfavored one in the reward context. However, the value of a sometimes-rewarding option is clearly higher than that of a sometimes-punishing option, hence value learning cannot explain this fundamentally suboptimal choice pattern. Direct learning of policies is consistent with basic tenets of decision making, such as the fact that choices are stochastic even when no exploration is necessary or warranted (e.g., choosing between two gambles that are fully described; Khaw, Li, et al., 2017) . Because policies are relative quantities (preference for one option implies unfavorability of another, as the probabilities of all choices have to sum to one), they also explain common violations of the independence axiom such as the effects of third-option "decoys" on choice (Soltani et al., 2012) , and temporal and spatial contextual influences on choice (Khaw, Glimcher, et al., 2017) , although these phenomena can also be explained by valuation models that involve range normalization. \n Conclusion The field of neuroeconomics started with the putative identification of pure value signals (Platt & Glimcher, 1999) . The meaning of these signals was disputed early on (Maunsell, 2004; Roesch et al., 2006) , but deep questions were set aside as researchers continued to identify brain areas with parts of the computational processes of choice, and in particular, identified the OFC with the seat of economic value. However, while those debates have subsided, the problems they raised have not been resolved. Progress on these issues will require additional data, but we stress that not every experiment that involves choices also implies valuation, and one must be careful in interpreting data not only because of potential confounds, but also because we must be wary of treating a hypothesis (that the brain computes value) as an axiom. We also suggest the need for additional philosophical work to define value in a way that is, at least in principle, dissociable from other factors that promote choices (Juechems & Summerfield, 2019) . In other words, we argue for a return to the productive debates of the early days of the field, 20 years ago (for neuroeconomics), and earlier (for its psychological underpinnings). Bolstered by an additional 20 years of new data, such debates would surely benefit the field of neuroeconomics moving forward. They would also potentially help reconcile conflicting views on what the OFC might or might not be doing in decision making (Stalnaker et al., 2015) . Footnotes: 1. We note here that, in neuroeconomics, value-based choices are generally thought to be part of explicit, aware, goal-directed, or model-based decision making that relies on frontal-cortex areas, in particular, the OFC. 2. We consider subjective worth as revealed by preference, following neuroeconomic theory, but note the limiting focus on only the "wanting" not the "liking" side of value (Berridge, 1996) .
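As a concrete illustration of the direct policy learning discussed in this article, the following is a minimal, hypothetical sketch of a single-state Actor-Critic agent facing the four options worth 1, 2, 3, and 3 drops of juice. It is not the implementation used in any of the studies cited here; the function name, learning rates, and episode count are ours. The point is that the learned Actor preferences concentrate on the better options but cannot be read back as the option values.

```python
import math
import random

def run_actor_critic(rewards, episodes=20000, alpha=0.1, beta=0.1, seed=0):
    """Single-state Actor-Critic bandit (illustrative sketch only). The Critic
    tracks one state value; the Actor tracks action preferences (a policy),
    not action values."""
    rng = random.Random(seed)
    n = len(rewards)
    preferences = [0.0] * n   # Actor: action propensities
    state_value = 0.0         # Critic: expected reward of the state as a whole
    for _ in range(episodes):
        exps = [math.exp(p) for p in preferences]        # softmax policy
        probs = [e / sum(exps) for e in exps]
        action = rng.choices(range(n), weights=probs)[0]
        delta = rewards[action] - state_value            # reward prediction error
        state_value += beta * delta                      # Critic update
        preferences[action] += alpha * delta             # Actor update: shifts the policy
    return preferences, state_value

prefs, v = run_actor_critic([1.0, 2.0, 3.0, 3.0])
print([round(p, 2) for p in prefs], round(v, 2))
# The preferences concentrate on the two best options, but their magnitudes do not
# recover the underlying payoffs 1, 2, 3, 3: they cannot be read out as values.
```

Running the sketch shows large positive preferences for the two 3-drop options and suppressed preferences for the others, while the Critic's single state value hovers near the mean reward under the current policy; neither quantity is an option-by-option value signal.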
", "date_published": "n/a", "url": "n/a", "filename": "the_case_against_economic_values.tei.xml", "abstract": "Much of traditional neuroeconomics proceeds from the hypothesis that value is reified in the brain, that is, that there are neurons or brain regions whose responses serve the discrete purpose of encoding value. This hypothesis is supported by the finding that the activity of many neurons covaries with subjective value as estimated in specific tasks, and has led to the idea that the primary function of the orbitofrontal cortex is to compute and signal economic value. Here we consider an alternative: That economic value, in the cardinal, common-currency sense, is not represented in the brain and used for choice by default. This idea is motivated by consideration of the economic concept of value, which places important epistemic constraints on our ability to identify its neural basis. It is also motivated by the behavioral economics literature, especially work on heuristics, which proposes value-free process models for much if not all of choice. Finally, it is buoyed by recent neural and behavioral findings regarding how animals and humans learn to choose between options. In light of our hypothesis, we critically reevaluate putative neural evidence for the representation of value and explore an alternative: direct learning of action policies. We delineate how this alternative can provide a robust account of behavior that concords with existing empirical data.", "id": "598b7e21a8f4aa00d56afe02c6689cb3"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Toby Ord", "Rafaela Hillerbrand", "Anders Sandberg"], "title": "Probing the improbable: methodological challenges for risks with low probabilities and high stakes", "text": "Introduction Large asteroid impacts are highly unlikely events. 1 Nonetheless, governments spend large sums on assessing the associated risks. It is the high stakes that make these otherwise rare events worth examining. Assessing a risk involves consideration of both the stakes involved and the likelihood of the hazard occurring. If a risk threatens the lives of a great many people, it is not only rational but morally imperative to examine the risk in some detail and to see what we can do to reduce it. This paper focuses on low-probability high-stakes risks. In Section 2, we show that the probability estimates in scientific analysis cannot be equated with the likelihood of these events occurring. Instead of the probability of the event occurring, scientific analysis gives the event's probability conditioned on the given argument being sound. Though this is the case in all probability estimates, we show how it becomes crucial when the estimated probabilities are smaller than a certain threshold. To proceed, we need to know something about the reliability of the argument. To do so, risk analysis commonly falls back on the distinction between model and parameter uncertainty. We argue that this dichotomy is not well suited for incorporating information about the reliability of the theories involved in the risk assessment. Furthermore, the distinction does not account for mistakes made unknowingly.
In Section 3, we therefore propose a three-fold distinction between an argument's theory, its model and its calculations. While explaining this distinction in more detail, we illustrate it with historic examples of errors in each of the three areas. We indicate how specific risk assessment can make use of the proposed theory-model-calculation distinction in order to evaluate the reliability of the given argument and thus improve the reliability of their probability estimate for rare events. Recently, concerns have been raised that high-energy experiments in particle physics, such as the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory or the Large Hadron Collider (LHC) at CERN, Geneva, may threaten humanity. If these fears are justified, these experiments pose a risk to humanity that can be avoided by simply not turning on the experiment. In Section 4, we use the methods of this paper to address the current debate on the safety of experiments within particle physics. We evaluate current reports in the light of our findings and give suggestions for future research. The final section brings the debate back to the general issue of assessing low-probability risk. We stress that the findings in this paper are not to be interpreted as an argument for anti-intellectualism, but rather as arguments for making the noisy and fallible nature of scientific and technical research subject to intellectual reasoning, especially in situations where the probabilities are very low and the stakes are very high. \n Probability estimates Suppose you read a report which examines a potentially catastrophic risk and concludes that the probability of catastrophe is one in a billion. What probability should you assign to the catastrophe occurring? We argue that direct use of the report's estimate of one in a billion is naive. This is because the report's authors are not infallible and their argument might have a hidden flaw. What the report has told us is not the probability of the catastrophe occurring, but the probability of the catastrophe occurring, given that the included argument is sound. Even if the argument looks watertight, the chance that it contains a critical flaw may well be much larger than one in a billion. After all, in a sample of a billion apparently watertight arguments you are likely to see many that have hidden flaws. Our best estimate of the probability of catastrophe may thus end up noticeably higher than the report's estimate. 2 Let us use the following notation: X, the catastrophe occurs; A, the argument is sound; P(X), the probability of X and P(X|A), the probability of X given A. While we are actually interested in P(X), the report provides us only with an estimate of P(X|A), since it cannot fully take into account the possibility that it is in error. 3, 4 From the axioms of probability theory, we know that P(X) is related to P(X|A) by the following formula: P(X) = P(X|A) P(A) + P(X|¬A) P(¬A). (1) To use this formula to derive P(X), we would require estimates for the probability that the argument is sound, P(A), and the probability of the catastrophe occurring, given that the argument is unsound, P(X|¬A). We are highly unlikely to be able to acquire accurate values for these probabilities in practice, but we shall see that even crude estimates are enough to change the way we look at certain risk calculations. A special case, which occurs quite frequently, is for reports to claim that X is completely impossible.
However, this just tells us that X is impossible, given that all our current beliefs are correct, that is P(X|A) = 0. By Equation (1) we can see that this is entirely consistent with P(X) > 0, as the argument may be flawed. Figure 1 is a simple graphical representation of our main point. The square on the left represents the space of probabilities as described in the scientific report, where the black area represents the catastrophe occurring and the white area represents not occurring. The normalized vertical axis denotes the probabilities for the event occurring and not occurring. This representation ignores the possibility of the argument being unsound. To accommodate this possibility, we can revise it in the form of the square on the right. The black and white areas have shrunk in proportion to the probability that the argument is sound and a new grey area represents the possibility that the argument is unsound. Now, the horizontal axis is also normalized and represents the probability that the argument is sound. Figure 1. The left panel depicts a report's view on the probability of an event occurring. The black area represents the chance of the event occurring, the white area represents it not occurring. The right-hand panel is the more comprehensive picture, taking into account the possibility that the argument is flawed and that we thus face a grey area containing an unknown amount of risk. To continue our example, let us suppose that the argument made in the report looks very solid, and that our best estimate of the probability that it is flawed is one in a thousand (P(¬A) = 10 −3 ). The other unknown term in Equation (1), P(X|¬A), is generally even more difficult to evaluate, but for the purposes of the current example, let us suppose that we think it highly unlikely that the event will occur even if the argument is not sound and treat this probability as one in a thousand as well. Equation (1) tells us that the probability of catastrophe would then be just over one in a million, an estimate which is a thousand times higher than that in the report itself. This reflects the fact that if the catastrophe were to actually occur, it is much more likely that this was because there was a flaw in the report's argument than that a one in a billion event took place. Flawed arguments are not rare. One way to estimate the frequency of major flaws in academic papers is to look at the proportions which are formally retracted after publication. While some retractions are due to misconduct, most are due to unintentional errors. 5 Using the MEDLINE database, 7 Cokol et al. (2007) found a raw retraction rate of 6.3 × 10 −5 , but used a statistical model to estimate that the retraction rate would actually be between 0.001 and 0.01 if all journals received the same level of scrutiny as those in the top tier. This would suggest that P(¬A) > 0.001, making our earlier estimate rather optimistic. We must also remember that an argument can easily be flawed without warranting retraction. Retraction is only called for when the underlying flaws are not trivial and are immediately noticeable by the academic community.
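To make the adjustment in Equation (1) concrete, here is a small Python sketch. The function name is ours, and the numbers are simply the illustration above together with the retraction-based range for P(¬A) just quoted; none of this adds to the argument, it only reproduces the arithmetic.

```python
def adjusted_probability(p_x_given_a, p_not_a, p_x_given_not_a):
    """Equation (1): P(X) = P(X|A) P(A) + P(X|not-A) P(not-A)."""
    return p_x_given_a * (1 - p_not_a) + p_x_given_not_a * p_not_a

# The worked example above: a one-in-a-billion report estimate, a one-in-a-thousand
# chance that its argument is flawed, and (purely for illustration) a one-in-a-thousand
# chance of catastrophe if the argument is flawed.
print(adjusted_probability(1e-9, 1e-3, 1e-3))     # just over 1e-6

# Re-running with the retraction-based range for P(not-A) quoted above:
for p_not_a in (0.001, 0.01):
    print(p_not_a, adjusted_probability(1e-9, p_not_a, 1e-3))
```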
The retraction rate for a field would thus provide a lower bound for the rate of serious flaws. Of course, we must also keep in mind the possibility that different branches of science may have different retraction rates and different error rates: the hard sciences may be less prone to error than the more applied sciences. Finally, we can have more confidence in an article, the longer it has been open to public scrutiny without a flaw being detected. It is important to note the particular connection between the present analysis and high-stakes low-probability risks. While our analysis could be applied to any risk, it is much more useful for those in this category. For it is only when P(X|A) is very low that the grey area has a relatively large role to play. If P(X|A) is moderately high, then the small contribution of the error term is of little significance in the overall probability estimate, perhaps making the difference between 10 and 10.001% rather than the difference between 0.001 and 0.002%. The stakes must also be very high to warrant this additional analysis of the risk, for the adjustment to the estimated probability will typically be very small in absolute terms. While an additional one in a million chance of a billion deaths certainly warrants further consideration, an additional one in a million chance of a house fire may not. One might object to our approach on the grounds that we have shown only that the uncertainty is greater than previously acknowledged, but not that the probability of the event is greater than estimated: the additional uncertainty could just as well decrease the probability of the event occurring. When applying our approach to arbitrary examples, this objection would succeed; however in this paper, we are specifically looking at cases where there is an extremely low value of P(X|A), so practically any value of P(X|¬A) will be higher and thus drive the combined probability estimate upwards. The situation is symmetric with regard to extremely high estimates of P(X|A), where increased uncertainty about the argument will reduce the probability estimate, the symmetry is broken only by our focus on arguments which claim that an event is very unlikely. Another possible objection is that since there is always a non-zero probability of the argument being flawed, the situation is hopeless: any new argument will be unable to remove the grey area completely. It is true that the grey area can never be completely removed; however, if a new argument (A 2 ) is independent of the previous argument (A 1 ), then the grey area will shrink, for P(¬A 1 , ¬A 2 ) < P(¬A 1 ). This can allow for significant progress. A small remaining grey area can be acceptable if P(X|¬A) P(¬A) is estimated to be sufficiently small in comparison to the stakes. \n Theories, models and calculations The most common way to assess the reliability of an argument is to distinguish between model and parameter uncertainty and assign reliabilities to these choices. While this distinction has certainly been of use in many practical cases, it is unnecessarily crude for the present purpose, failing to account for potential errors in the paper's calculations or a failure of the background theory. In order to account for all possible mistakes in the argument, we look separately at its theory, its model and its calculations. 
The calculations evaluate a concrete model representing the processes under consideration, for example the formation of black holes in a particle collision, the response of certain climate parameters (such as mean temperature or precipitation rate) to changes in greenhouse gas concentrations or the response of economies to changes in the oil price. These models are mostly derived from more general theories. In what follows, we do not restrict the term 'theory' to well-established and mathematically elaborate theories like electrodynamics, quantum chromodynamics or relativity theory. Rather, theories are understood to include theoretical background knowledge, such as specific research paradigms or the generally accepted research practice within a field (see Figure 2). An example is the efficient market hypothesis which underlies many models within economics, such as the Black-Scholes model. We consider adequate models or theories rather than correct ones. For example, we wish to allow that Newtonian mechanics is an adequate theory in many situations, while recognizing that in some cases it is clearly inadequate (such as for calculating the electron orbitals). We thus call a representation of some system adequate if it is able to predict the relevant system features at the required precision. For example, if climate modellers wish to determine the implications our greenhouse gas emissions will have on the well-being of future generations, their model/theory will not be adequate unless it tells them the changes in the local temperature and precipitation. In contrast, a model might only need to tell them changes in global temperature and precipitation to be adequate for answering less sensitive questions. On a theoretical level, much more could be said about this distinction between adequacy and correctness, but for the purposes of evaluating the reliability of risk assessment, the explanation above should suffice. With the following notation: T, the involved theories are adequate; M, the derived model is adequate; and C, the calculations are correct, we can represent P(A) as P(T, M, C). We can then benefit from the theory-model-calculation distinction by expressing this as the product of three separate probabilities, each of which should be easier to estimate than P(A). Firstly, from the laws of conditional probability, it follows that P(T, M, C) = P(T) P(M|T) P(C|M, T). This can be simplified as we can assume C to be independent of M and T, since the correctness of a calculation is independent of whether the theoretical and model assumptions underpinning it were adequate. Given this independence, P(C|M, T) = P(C), the above equation can be simplified to: P(A) = P(T) P(M|T) P(C). (2) Substituting this back into Equation (1), we obtain a more tractable formula for P(X). We have already made a rough attempt at estimating P(A) from the paper retraction rates. Estimating P(T), P(M|T) and P(C) is more accurate and somewhat easier, though still of significant difficulty. While estimating the various terms in Equation (2) must ultimately be done on a case-by-case basis, the following elucidation of what we mean by theory, model and calculation will shed some light on how to pursue such an analysis. By incorporating our threefold distinction, it is straightforward to apply findings on the reliability of theories from philosophy of science, based, for example, on probabilistic verification methods (e.g. Reichenbach 1938) or falsifications as in Hempel (1950) or Popper (1959) .
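As a rough illustration of how Equation (2) feeds back into Equation (1), the following Python sketch combines the two. The specific probabilities are invented for illustration only and are not estimates endorsed by the text; the calculation error rate is loosely in line with the figures discussed in the next paragraphs.

```python
def p_argument_sound(p_theory, p_model_given_theory, p_calc):
    """Equation (2): P(A) = P(T) P(M|T) P(C), assuming calculation correctness
    is independent of the adequacy of the theory and the model."""
    return p_theory * p_model_given_theory * p_calc

def p_event(p_x_given_a, p_a, p_x_given_not_a):
    """Equation (1) written in terms of P(A)."""
    return p_x_given_a * p_a + p_x_given_not_a * (1 - p_a)

# Invented numbers: a well-tested theory, a less certain model, and a calculation
# error rate of about one in a thousand.
p_a = p_argument_sound(0.999, 0.99, 0.999)
print(round(p_a, 4))                 # about 0.988
print(p_event(1e-9, p_a, 1e-3))      # about 1.2e-5: the grey area dwarfs the report's 1e-9
```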
Often, however, the best we can do is to put some bounds upon them based on the historical record. We thus review typical sources of error in the following three areas. \n Calculation: analytic and numeric Estimating the correctness of the calculation independently from the adequacy of the model and the theory seems important whenever the mathematics involved is non-trivial. Most cases where we are able to provide more than purely heuristic and hand-waving risk assessments are of this sort. Consider climate models evaluating runaway climate change and risk estimates for the LHC or for asteroid impacts. When calculations accumulate, even trivial mathematical procedures become error-prone. A particular difficulty arises due to the division of labour in the sciences: commonly in modern scientific practice, various steps in a calculation are done by different individuals who may be in different working groups in different countries. The Mars Climate Observer spacecraft was lost in 1999 because a piece of control software from Lockheed Martin used imperial units instead of the metric units the interfacing NASA software expected (NASA 1999). Calculation errors are distressingly common. There are no reliable statistics on the calculation errors made in risk assessment or, even more broadly, within scientific papers. However, there is research on errors made in some very simple calculations performed in hospitals. Dosing errors give an approximate estimate of how often mathematical slips occur. Errors in drug charts occur at a rate of 1.2-31% across different studies (Prot et al. 2005; Stubbs, Haw, and Taylor 2006; Walsh et al. 2008) , with a median of roughly 5% of administrations. Of these errors, 15-40% were dose errors, giving an overall dose error rate of about 1-2%. What does this mean for error rates in risk estimation? Since the stakes are high when it comes to dosing errors, this data represents a serious attempt to get the right answer in a life or death circumstance. It is likely that the people doing risk estimation are more reliable at arithmetic than health professionals and have more time for error correction, but it appears unlikely that they would be more reliable in many orders of magnitude. Hence, a chance of 10 −3 for a mistake per simple calculation does not seem unreasonable. A random sample of papers from Nature and the British Medical Journal found that roughly 11% of statistical results were flawed, largely due to rounding and transcription errors (García-Berthou and Alcaraz 2004). Calculation errors include more than just the 'simple' slips which we know from school, such as confusing units, forgetting a negative square root or incorrectly transcribing from the line above. Instead, many mistakes arise here due to numerical implementation of the analytic mathematical equations. Computer-based simulations and numerical analysis are rarely straightforward. The history of computers contains a large number of spectacular failures due to small mistakes in hardware or software. The 4 June 1996 explosion of an Ariane 5 rocket was due to a leftover piece of code triggering a cascade of failures (ESA 1996) . Audits of spreadsheets in real world use find error rates on the order of 88% (Panko 1998) . The 1993 Intel Pentium floating point error affected 3-5 million processors, reducing their numeric reliability and hence our confidence in anything calculated with them (Nicely 2008) . 
Programming errors can remain dormant for a long time even in apparently correct code, only to emerge under extreme conditions. An elementary and widely used binary search algorithm included in the standard libraries for Java was found after nine years to contain a bug that emerges only when searching very large lists (Bloch 2006) . A mistake in data processing led to the retraction of five high-profile protein structure papers as the handedness of the molecules had become inverted (Miller 2006) . In cases where computational methods are used in modelling, many mistakes cannot be avoided. Discrete approximations of the often continuous model equations are used, and in some cases we know that the discrete version is not a good proxy for the continuous model (Morawetz and Walke 2003) . Moreover, numerical evaluations are often done on a discrete computational grid, with the values inside the meshes being approximated from the values computed at the grid points. Though we know that certain extrapolation schemes are more reliable in some cases than others, we are often unable to exclude the possibility of error, or to even quantify it. \n Ways of modelling and theorizing Our distinction between model and theory follows the typical use of the terms within mathematical sciences like physics or economics. Whereas theories are associated with broad applicability and higher confidence in the correctness of their description, models are closer to the phenomena. For example, when estimating the probability of a particular asteroid colliding with the Earth, one would use either Newtonian mechanics or general relativity as a theory for describing the role of gravity. One could then use this theory in conjunction with observations of the bodies' positions, velocities and masses to construct a model, and finally, one could perform a series of calculations based on this model to estimate the probability of impact. As this shows, the errors that can be introduced in settling for a specific model include and surpass those which are sometimes referred to as parameter uncertainty. As well as questions of the individual parameters (positions, velocities and masses), there are important questions of detail (Can we neglect the inner structure of the involved bodies?), and breadth (Can we focus on the Earth and asteroid only, or do we have to model other planets, or the Sun?). 7 As can be seen from this example, one way to distinguish theories from models is that theories are too general to be applied directly to the problem. For any given theory, there are many ways to apply it to the problem and these ways give rise to different models. Philosophers of science will note that our theory-model distinction is in accordance with the non-uniform notion used by Giere (1999) , Morrison (1998) , Cartwright (1999) , and others, but differs from that of the semantic interpretation of theories (Suppes 1957) . We should also note that it is quite possible for an argument to involve several theories or several models. This complicates the analysis and typically provides additional ways for the argument to be flawed. 8 For example, in estimating the risk of black hole formation at the LHC, we not only require quantum chromodynamics (the theory on which the LHC is built to test), but also relativity and Hawking's theory of black hole radiation. In addition to their other roles, modelling assumptions also have to explain how to glue such different theories together (Hillerbrand and Ghil 2008) . 
In risk assessment, the systems involved are most often not as well understood as asteroid impacts. Often, various models exist simultaneously -all known to be incomplete or incorrect in some way, but difficult to improve upon. 9 Particularly in these cases, having an expected or desired outcome in mind while setting up a model makes one vulnerable to expectation bias: the tendency to reach the desired answer rather than the correct one. This bias has affected many of science's great names (Jeng 2006) , and in the case of risk assessment, the desire for a 'positive' outcome (safety in the case of the advocate or danger in the case of the protestor) seems a likely cause of bias in modelling. \n Historical examples of model and theory failure A dramatic example of a model failure was the Castle Bravo nuclear test on 1 March 1954. The device achieved 15 megatons of yield instead of the predicted four to eight megatons. Fallout affected parts of the Marshal Islands and irradiated a Japanese fishing boat so badly that one fisherman died, causing an international incident (Nuclear Weapon Archive 2006). Though the designers at Los Alamos National Laboratories understood the involved theory of alpha decay, their model of the reactions involved in the explosion was too narrow, for it neglected the decay of one of the involved particles (lithium-7), which turned out to contribute the bulk of the explosion's energy. The Castle Bravo test is also notable for being an example of model failure in a very serious experiment conducted in the hard sciences and with known high stakes. The history of science contains numerous examples of how generally accepted theories have been overturned by new evidence or understanding, as well as a plethora of minor theories that persisted for a surprising length of time before being disproven. Classic examples for the former include the Ptolemaic system, phlogiston theory and caloric theory; an example for the latter is human chromosome number, which was systematically miscounted as 48 (rather than 46) and this error persisted for more than 30 years (Gartler 2006 ). As a final example, consider Lord Kelvin's estimates of the age of the Earth (Burchfield 1975) . They were based on information about the earth's temperature and heat conduction, estimating an age of the Earth between 20 and 40 million years. These estimates did not take into account radioactive heating, for radioactive decay was unknown at the time. Once it was shown to generate additional heat the models were quickly updated. While neglecting radioactivity today would count as a model failure, in Lord Kelvin's day it represented a largely unsuspected weakness in the physical understanding of the Earth and thus amounted to theory failure. This example makes it clear that the probabilities for the adequacy of model and theory are not independent of each other, and thus in the most general case we cannot further decompose Equation (2). \n Applying our analysis to the risks from particle physics research Particle physics is the study of the elementary constituents of matter and radiation, and the interactions between them. A major experimental method in particle physics involves the use of particle accelerators, such as the RHIC and LHC, to bring beams of particles near the speed of light and then collide them together. This focuses a large amount of energy in a very small region and breaks the particles down into their components, which are then detected. 
As particle accelerators have become larger, the energy densities achieved have become more extreme, prompting some concern about their safety. These safety concerns have focused on three possibilities: the formation of 'true vacuum', the transformation of the Earth into 'strange matter', and the destruction of the Earth through the creation of a black hole. \n True vacuum and strange matter formation The type of vacuum that exists in our universe might not be the lowest possible vacuum energy state. In this case, the vacuum could decay to the lowest energy state, either spontaneously, or if triggered by a sufficient disturbance. This would produce a bubble of 'true vacuum' expanding outwards at the speed of light, converting the universe into different state apparently inhospitable for any kind of life (Turner and Wilczek 1982) . Our ordinary matter is composed of electrons and two types of quarks: up quarks and down quarks. Strange matter also contains a third type of quark: the 'strange' quark. It has been hypothesized that strange matter might be more stable than normal matter, and able to convert atomic nuclei into more strange matter (Witten 1984) . It has also been hypothesized that particle accelerators could produce small negatively charged clumps of strange matter, known as strangelets. If both these hypotheses were correct and the strangelet also had a high enough chance of interacting with normal matter, it would grow inside the Earth, attracting nuclei at an ever higher rate until the entire planet was converted to strange matter -destroying all life in the process. Unfortunately, strange matter is complex and little understood, giving models with widely divergent predictions about its stability, charge and other properties (Jaffe et al. 2000) . One way of bounding the risk from these sources is the cosmic ray argument: the same kind of high-energy particle collisions occur all the time in Earth's atmosphere, on the surface of the Moon and elsewhere in the universe. The fact that the Moon or observable stars have not been destroyed despite a vast number of past collisions (many at much higher energies than can be achieved in human experiments) suggests that the threat is negligible. This argument was first used against the possibility of vacuum decay (Hut and Rees 1983) but is quite general. An influential analysis of the risk from strange matter was carried out in Dar, De Rujula, and Heinz (1999) and formed a key part of the safety report for the RHIC. This analysis took into account the issue that any dangerous remnants from cosmic rays striking matter at rest would be moving at high relative velocity (and hence much less likely to interact) while head-on collisions in accelerators could produce remnants moving at much slower speeds. They used the rate of collisions of cosmic rays in free space to estimate strangelet production. These strangelets would then be slowed by galactic magnetic fields and eventually be absorbed during star formation. When combined with estimates of the supernova rate, this can be used to bound the probability of producing a dangerous strangelet in a particle accelerator. The resulting probability estimate was < 2 × 10 −9 per year of RHIC operation. 10 While using empirical bounds and experimentally tested physics reduces the probability of a theory error, the paper needs around 30 steps to reach its conclusion. For example, even if there was just a 10 −4 chance of a calculation or modelling error per step this would give a total P(¬A) ≈ 0.3%. 
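The arithmetic behind this illustration is simple: assuming the roughly 30 steps can each err independently with the quoted per-step probability, a one-line Python check gives the same figure.

```python
# If each of roughly 30 steps independently has a 1-in-10,000 chance of error,
# the chance that at least one step is flawed is 1 - (1 - p)**n.
p_step, n_steps = 1e-4, 30
print(1 - (1 - p_step) ** n_steps)   # about 0.003, i.e. roughly 0.3%
```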
This would easily overshadow the risk estimate. Indeed, even if just one step had a 10 −4 chance of error, this would overshadow the estimate. A subtle complication in the cosmic ray argument was noted in Tegmark and Bostrom (2005) . The Earth's survival so far is not sufficient as evidence for safety, since we do not know if we live in a universe with 'safe' natural laws or a universe where planetary implosions or vacuum decay do occur, but we have just been exceedingly lucky so far. While this latter possibility might sound very unlikely, all observers in such a universe would find themselves to be in the rare cases where their planets and stars had survived, and would thus have much the same evidence as we do. Tegmark and Bostrom had thus found that in ignoring these anthropic effects, the previous model had been overly narrow. They corrected for this anthropic bias and, using analysis from Jaffe et al. (2000) , concluded that the risk from accelerators was less than 10 −12 per year. This is an example of a demonstrated flaw in an important physics risk argument (one that was pivotal in the safety assessment of the RHIC). Moreover, it is significant that the RHIC had been running for five years on the strength of a flawed safety report, before Tegmark and Bostrom noticed and fixed this gap in the argument. Although this flaw was corrected immediately after being found, we should also note that the correction is dependent on both anthropic reasoning and on a complex model of the planetary formation rate (Lineweaver, Fenner, and Gibson 2004) . If either of these or the basic Brookhaven analysis is flawed, the risk estimate is flawed. \n Black hole formation The LHC experiment at CERN was designed to explore the validity and limitations of the standard model of particle physics by colliding beams of high-energy protons. This will be the most energetic particle collision experiment ever done, which has made it the focus of a recent flurry of concerns. Due to the perceived strength of the previous arguments on vacuum decay and strangelet production, most of the concern about the LHC has focused on black hole production. None of the theory papers we have found appears to have considered the black holes to be a safety hazard, mainly because they all presuppose that any black holes would immediately evaporate due to Hawking radiation. However, it was suggested by Dimopoulos and Landsberg (2001) that if black holes form, particle accelerators could be used to test the theory of Hawking radiation. Thus, critics also began questioning whether we could simply assume that black holes would evaporate harmlessly. A new risk analysis of LHC black hole production (Giddings and Mangano 2008) provides a good example of how risks can be more effectively bounded through multiple sub-arguments. While never attempting to give a probability of disaster (rather concluding 'there is no risk of any significance whatsoever from such black holes'), it uses a multiple bounds argument. It first shows that rapid black hole decay is a robust consequence of several different physical theories (A 1 ). Second, it discusses the likely incompatibility between non-evaporating black holes and mechanisms for neutralizing black holes: in order for cosmic ray-produced stable black holes to be innocuous but accelerator-produced black holes to be dangerous, they have to be able to shed excess charge rapidly (A 2 ). 
Our current understanding of physics suggests both that black holes decay and that even if they did not, they would be unable to discharge themselves. Only if this understanding is flawed will the next section come into play. The third part, which is the bulk of the paper, models how multidimensional and ordinary black holes would interact with matter. This leads to the conclusion that if the size scale of multidimensional gravity is smaller than about 20 nm, then the time required for the black hole to consume the Earth would be larger than the natural lifetime of the planet. For scenarios where rapid Earth accretion is possible, the accretion time inside white dwarves and neutron stars would also be very short, yet production and capture of black holes from impinging cosmic rays would be so high that the lifespan of the stars would be far shorter than the observed lifespan (and would contradict white dwarf cooling rates) (A 3 ). The force of the total argument (A 1 , A 2 , A 3 ) is significantly stronger than any of its parts. Essentially, the paper acts as three sequential arguments, each partly filling in the grey area (see Figure 1 ) left by the previous. If the theories surrounding black hole decay fail, the argument about discharge comes into play; and if against all expectation black holes are stable and neutral, the third argument shows that astrophysics constrains them to a low accretion rate. \n Implications for the safety of the LHC What are the implications of our analysis for the safety assessment of the LHC? Firstly, let us consider the stakes in question. If one of the proposed disasters were to occur, it would mean the destruction of the Earth. This would involve the complete destruction of the environment, 6.5 billion human deaths and the loss of all future generations. It is worth noting that this loss of all future generations (and with it, all of humanity's potential) may well be the greatest of the three, but a comprehensive assessment of these stakes is outside the scope of this paper. For the present purposes, it suffices to observe that the destruction of the Earth is at least as bad as 6.5 billion human deaths. There is some controversy as to how one should combine probabilities and stakes into an overall assessment of a risk. Some hold that the simple approach of expected utility is the best, while others hold some form of risk aversion. However, we can sidestep this dispute by noting that in either case, the risk of some harm is at least as bad as the expected loss. Thus, a risk with probability p of causing a loss at least as bad as 6.5 billion deaths is at least as bad as a certain 6.5 × 10 9 p deaths. Now let us turn to the best estimate we can make of the probability of one of the above disasters occurring during the operation of the LHC. While the arguments for the safety of the LHC are commendable for their thoroughness, they are not infallible. Although the report considered several possible physical theories, it is eminently possible that these are all inadequate representations of the underlying physical reality. It is also possible that the models of processes in the LHC or the astronomical processes appealed to in the cosmic ray argument are flawed in an important way. Finally, it is possible that there is a calculation error in the report. In Equation (1), P(X) is the sum of two terms. The second of these represents the additional probability of disaster due to the argument being unsound. 
It is the product of the probability of argument failure and the probability of disaster given such a failure. Both terms are very difficult to estimate, but we can gain insight by showing the ranges they would have to lie within, for the risk presented by the LHC to be acceptable. If we let l denote the acceptable limit of expected deaths from the operation of the LHC, we get: 6.5 × 10 9 P(X) ≤ l. Since P(X) is at least as great as its second term, we obtain: P(X|¬A) P(¬A) ≤ l / (6.5 × 10 9 ) ≈ 1.5 × 10 −10 l. (3) This inequality puts a severe bound on the acceptable values for these probabilities. Since it is much easier to grasp this with an example, we shall provide some numbers for the purposes of illustration. Suppose, for example, that the limit were set at 1000 expected deaths; then P(X|¬A) P(¬A) would have to be below 1.5 × 10 −7 for the risk to be worth bearing. This requires very low values for these probabilities. We have seen that for many arguments, P(¬A) is above 10 −3 . We have also seen that the argument for the safety of the RHIC turned out to have a significant flaw, which was unnoticed by the experts at the time. It would thus be very bold to suppose that the chance of a flaw in the argument for the safety of the LHC was much lower than 10 −3 , but for the sake of argument, let us grant that it is as low as 10 −4 , that is, that out of a sample of 10,000 independent arguments of similar apparent merit, only one would have any serious error. Even if the value of P(¬A) were as low as 10 −4 , P(X|¬A) would have to be below 0.15% for the risk to be worth taking. P(X|¬A) is the probability of disaster given that the arguments of the safety report are flawed, and is the most difficult component of Equation (1) to estimate. Indeed, few would dispute that we really have very little idea of what value to put on P(X|¬A). It would thus seem overly bold to set this below 0.15% without some substantive argument. Perhaps such an argument could be provided, but until it is, such a low value for P(X|¬A) seems unwarranted. We stress that the above combination of numbers was purely for illustrative purposes, but we cannot find any plausible combination of the three numbers which meets the bound and which would not require significant argument to explain either the levels of confidence or the disregard for expected deaths. We would also like to stress that we are open to the possibility that additional supporting arguments and independent verification of the models and calculations could significantly reduce the current chance of a flaw in the argument. However, our analysis implies that the current safety report should not be the final word in the safety assessment of the LHC. To proceed with the LHC on the arguments of the most recent safety report alone, we would require further work on estimating P(¬A), P(X|¬A), the acceptable expected death toll, and the value of future generations and other life on Earth. Such work would require expertise beyond theoretical physics, and an interdisciplinary group would be essential. If the stakes were lower, then it might make sense for pragmatic concerns to sweep aside this extra level of risk analysis, but the stakes are astronomically large, and so further analysis is critical. Even if the LHC goes ahead without any further analysis, as is very likely, these lessons must be applied to the assessment of other high-stakes low-probability risks. \n Conclusions When estimating threat probabilities, it is not enough to make conservative estimates (using the most extreme values or model assumptions compatible with known data).
Rather, we need robust estimates that can handle theory, model and calculation errors. The need for this becomes considerably more pronounced for low-probability high-stake events, though we do not say that low probabilities cannot be treated systematically. Some people have raised the concern that our argument might be too powerful, for it is impossible to disprove the risk of even something as trivial as dropping a pencil, and thus our argument might amount to prohibiting everything. It is true that we cannot completely rule out any probability that apparently inconsequential actions might have disastrous effects, but there are a number of reasons why we do not need to worry about universal prohibition. A major reason is that for events like the dropping of a pencil which have no plausible mechanism for destroying the world, it seems just as likely that the world would be destroyed by not dropping the pencil. The expected losses would thus balance out. It is also worth noting that our argument is simply an appeal to a weak form of decision theory to address an unusual concern: for our method to lead to incorrect conclusions, it would require a flaw in decision theory itself, which would be very big news. It will have occurred to some readers that our argument is fully applicable to this very paper: there is a chance that we have made an error in our own arguments. We entirely agree, but note that this possibility does not change our conclusions very much. Suppose, very pessimistically, that there is a 90% chance that our argument is sufficiently flawed that the correct approach is to take safety reports' probability estimates at face value. Even then, our argument would make a large difference to how we treat such values. Recall the example from Section 2, where a report concludes a probability of 10 −9 and we revise this to 10 −6 . If there is even a 10% chance that we are correct in doing so, then the overall probability estimate would be revised to 0.9 × 10 −9 + 0.1 × 10 −6 ≈ 10 −7 , which is still a very significant change from the report's own estimate. In short, even serious doubt about our methods should not move one's probability estimates more than an order of magnitude away from those our method produces. More modest doubts would have negligible effect. The basic message of this paper is that any scientific risk assessment is only able to give us the probability of a hazard occurring conditioned on the correctness of its main argument. The need to evaluate the reliability of the given argument in order to adequately address the risk was shown to be of particular relevance in low-probability high-stake events. We drew a three-fold distinction between theory, model and calculation, and showed how this can be more useful than the common dichotomy in risk assessment between model and parameter uncertainties. By providing historic examples for errors in the three fields, we clarified the three-fold distinction and showed where flaws in a risk assessment might occur. Our analysis was applied to the recent assessment of risks that might arise from experiments within particle physics. To conclude this paper, we now provide some very general remarks on how to avoid argument flaws when assessing risks with high stakes. Firstly, the testability of predictions can help discern flawed arguments.
If a risk estimate produces a probability distribution for smaller, more common disasters, this can be used to judge whether the observed incidences are compatible with the theory. Secondly, reproducibility appears to be the most effective way of removing many of these errors. By having other people replicate the results of calculations independently, our confidence in them can be dramatically increased. By having other theories and models independently predict the same risk probability, our confidence in them can again be increased, as even if one of the arguments is wrong the others will remain. Finally, we can reduce the possibility of unconscious bias in risk assessment through the simple expedient of splitting the assessment into a 'blue' team of experts attempting to make an objective risk assessment and a 'red' team of devil's advocates attempting to demonstrate a risk, followed by repeated turns of mutual criticism and updates of the models and estimates (Calogero 2000) . Application of such methods could in many cases reduce the probability of error by several orders of magnitude. Figure 2 . 2 Figure 2. Ways in which risk assessments can be flawed. \n Figure 2 . 2 Figure 2. Ways in which risk assessments can be flawed. \n\t\t\t *Corresponding author. Email: toby.ord@philosophy.ox.ac.uk", "date_published": "n/a", "url": "n/a", "filename": "proving_the_improbable.tei.xml", "abstract": "Some risks have extremely high stakes. For example, a worldwide pandemic or asteroid impact could potentially kill more than a billion people. Comfortingly, scientific calcultions often put very low probabilities on the occurrence of such catastrophes. In this paper, we argue that there are important new methodological problems which arise when assessing global catastrophic risks and we focus on a problem regarding probability estimation. When an expert provides a calculation of the probability of an outcome, they are really providing the probability of the outcome occurring, given that their argument is watertight. However, their argument may fail for a number of reasons, such as a flaw in the underlying theory, a flaw in the modelling of the problem or a mistake in the calculations. If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect. We develop this idea formally, explaining how it differs from the related distinction between model and parameter uncertainty. Using the risk estimates from the Large Hadron Collider as a test case, we show how serious the problem can be when it comes to catastrophic risks and how best to address it.", "id": "4bc80bb3e63227b696c10e0cdce4aefa"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Stuart Armstrong", "Nick Bostrom", "Carl Shulman"], "title": "Racing to the precipice: a model of artificial intelligence development", "text": "Introduction This paper presents a simplified model for analysing technology races. The model was designed initially for races to construct artificial intelligences (AIs). But it can be applied to other similar races or competitions, especially technological races where there is a large advantage to reaching the goal first. There are arguments that the first true AIs are likely to be extremely powerful machines [Goo65, Cha10] , but that they could end up being dangerous [Omo08, Yud08] if not carefully controlled [ASB12] . 
The purpose of various research projects such as 'friendly AI' [MS12] is to design safety precautions ensuring the creation of an AI with human-compatible values. In this paper, we won't go that far. We will simply assume that there is a definite probability of an AI-related disaster, and that the probability goes up the more the AI development team skimps on precautions. This paper will present a model of an 'arms race' between AI development teams, and analyse what factors increase and decrease the probability of such an AI-disaster. Several factors contribute to increasing the danger: if building the AI depends more on risk-taking than on skill, for instance. Reducing the enmity between AI development teams helps, however, as does reducing the number of teams. But surprisingly, extra information can exacerbate the danger. The danger is minimised when each team is ignorant of the AI-building capabilities of every team -including their own capabilities. This is an interesting example of an information hazard [Bos11] . \n The model In this model, there are n different teams, competing to build the first proper AI. Each team has a certain level of AI-building capability c, and can choose to take a certain level of safety precautions s, varying between s = 1 (total safety) and s = 0 (complete lack of precautions). Then each team gets a score by subtracting their safety from their capability (c − s), and the team with the highest score 'wins', by building their AI first. Once that AI is built, we need to check for the success of the project: That will depend on the safety level of the winning team 1 . With probability s, they have a successful and profitable AI project. With probability 1 − s, they have caused an AI-disaster. We'll assume that each team maximises expected utility, and we can renormalise their utilities to give utility 1 for a successful AI project, and 0 for an AI-disaster. We will further assume that each team has the same level of enmity e towards each other team. This means that they will have a utility of 1 − e if another team builds a successful AI before they do. This varies between e = 0 (they are indifferent to who builds an AI) and e = 1 (another team building a successful AI is just as bad as an AI-disaster). It would be possible to have enmities above 1 (this would correspond to teams that hate each other so much that an AIdisaster is preferable to the triumph of the other team), but that is beyond the scope of the current paper. Further, we'll assume that each team's capability is drawn independently and uniformly on the interval [0, µ], for a single given µ. A high µ corresponds to a situations where capability is the dominant factor in AI development: one can achieve very little extra by skimping on precautions. Conversely, for low µ, incurring increased risk can make one much more likely to build the first AI. We'll assume that the teams choose their safety based on Nash equilibrium considerations (this could happen in practice if, for instance, each team reduced their safety a little bit, then a little bit more, then a little bit more, in response to the other teams reducing their own safety). We'll then calculate the Nash equilibria in three information scenarios: when teams don't know anyone's capabilities, when they know their own capabilities, and when they know everyone's capabilities 2 . \n No information In this situation, every team is ignorant of their own or any other team's capabilities 3 . 
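Before turning to the closed-form analysis, here is a minimal Monte Carlo sketch of the race mechanics just described: capabilities drawn uniformly on [0, µ], the winner determined by capability minus safety, and a disaster occurring with probability 1 − s for the winning team. This is illustrative code of our own, not the authors'; enmity and utilities are not simulated, only the disaster mechanics, and the safety levels are supplied by hand rather than derived from the Nash equilibrium.

```python
import random


def simulate_race(safeties, mu, episodes=100_000, seed=0):
    """Monte Carlo estimate of the disaster probability in the race model
    sketched above: team i draws capability c_i ~ Uniform[0, mu], plays the
    fixed safety level safeties[i], the team with the highest c_i - s_i
    builds the first AI, and the winner's project ends in disaster with
    probability 1 - s_i."""
    rng = random.Random(seed)
    disasters = 0
    for _ in range(episodes):
        caps = [rng.uniform(0.0, mu) for _ in safeties]
        winner = max(range(len(safeties)), key=lambda i: caps[i] - safeties[i])
        if rng.random() > safeties[winner]:
            disasters += 1
    return disasters / episodes


if __name__ == "__main__":
    # Two teams with full precautions never cause a disaster; a reckless
    # third team that skimps on safety wins often enough to raise the risk.
    print(simulate_race([1.0, 1.0], mu=2.0))          # ~0.0
    print(simulate_race([1.0, 1.0, 0.5], mu=2.0))     # noticeably above 0
```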
Each team will thus choose the level of safety they think is best. In a symmetric Nash equilibrium, this will be a single number s. Since all teams draw their capabilities from independent identically distributed random variables, and have the same level of safety, they each have the same probability of winning, namely 1/n. Upon changing s to s − ε, a team's chance of winning increases by ε/µ + O(ε²). The utility gain from winning rather than letting another win is the safety level s times the enmity e. So the extra expected utility from winning is seε/µ + O(ε²). But the decrease in safety brings in extra risks, were the team to win anyway. This extra risk is simply ε times the probability of winning: ε/n + O(ε²). All other effects are second order in ε. Setting the change to be zero (to first order) in ε, the general solution is thus s = µ/(en) if µ < en, and s = 1 if µ ≥ en. The total probability of AI-disaster is 1 − s, namely 1 − µ/(en) if µ < en, and 0 if µ ≥ en. \n Private information In this scenario, each team knows its own capabilities, but not those of the other teams. Each team will choose a safety level s(x), dependent on their own capabilities x. In a symmetric Nash equilibrium, each team chooses the same function s. We may assume x − s(x) is a function increasing in x 4 . In that case, the team with the highest x, the highest capabilities, always wins. Given x, the probability of a team winning is equal to the probability of all other teams having lower capability: this is (x/µ)^(n−1). Then changing s(x) to s(x) − ε, the team increases its probability of winning by ε(n − 1)x^(n−2) / ((1 − s′(x))µ^(n−1)) + O(ε²). As in the previous case, the expected extra utility is se times this: seε(n − 1)x^(n−2) / ((1 − s′(x))µ^(n−1)) + O(ε²). The loss in expected utility coming from winning at a lower safety level is ε(x/µ)^(n−1) + O(ε²). Solving these equations gives s(x) = x/(en − e + 1) if x < en − e + 1, and s(x) = 1 if x ≥ en − e + 1. The total probability of an AI-disaster is calculated by integrating, across all values of x in [0, µ], the risk level 1 − s(x) times the probability that the winning team will have capability x: ∫_0^µ (1 − s(x)) n x^(n−1)/µ^n dx, which equals 1 − µn/((n+1)(ne−e+1)) if µ < en − e + 1, and (en−e+1)^n/((n+1)µ^n) if µ ≥ en − e + 1. \n Public information In this scenario, every team knows the capabilities of every other team. This scenario is analysed somewhat differently than the others. Let ∆ be the difference between the capability of the top team and the second-ranked team. The top team always wins, but its safety level s_top is determined by ∆. When s_top(∆) = ∆/e, it is not in the interest of the second team to decrease its safety to compete with the top team. The gain from winning does not compensate for the extra risks run. Thus the safety of the top team will be s_top = ∆/e if ∆/e < 1 and 1 otherwise. The total probability of AI-disaster is calculated by integrating, across all values of ∆ in [0, µ], the risk level 1 − s_top(∆) = 1 − ∆/e times the probability that the difference between the top two teams is ∆: ∫_0^µ (1 − s_top(∆)) n(µ − ∆)^(n−1)/µ^n d∆, which equals 1 − µ/(e(n+1)) if µ < e, and (µ−e)^(n+1)/(e(n+1)µ^n) − µ/(e(n+1)) + 1 if µ ≥ e. \n 3 Factors determining risk \n Capability vs risk Intuitively it is clear that increasing the importance of capability must decrease overall risk. One is less inclined to skimp on safety precautions if one can only get a small advantage from doing so. This intuition is borne out by the results: in every situation, an increase of the importance of capability (an increase in µ) reduces the risk.
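To make the three closed-form results above easier to check and compare, the following sketch transcribes them directly into Python; the function names and argument order are our own.

```python
def disaster_prob_no_info(n: int, e: float, mu: float) -> float:
    """P(AI disaster) when no team knows any capabilities."""
    return 1.0 - mu / (e * n) if mu < e * n else 0.0


def disaster_prob_private_info(n: int, e: float, mu: float) -> float:
    """P(AI disaster) when each team knows only its own capability."""
    t = e * n - e + 1.0
    if mu < t:
        return 1.0 - mu * n / ((n + 1) * t)
    return t ** n / ((n + 1) * mu ** n)


def disaster_prob_public_info(n: int, e: float, mu: float) -> float:
    """P(AI disaster) when every team knows everyone's capabilities."""
    if mu < e:
        return 1.0 - mu / (e * (n + 1))
    return (mu - e) ** (n + 1) / (e * (n + 1) * mu ** n) - mu / (e * (n + 1)) + 1.0


if __name__ == "__main__":
    # Two teams at maximal enmity, across a range of capability importance mu.
    for mu in (0.5, 1.0, 2.0, 5.0):
        print(mu,
              round(disaster_prob_no_info(2, 1.0, mu), 3),
              round(disaster_prob_private_info(2, 1.0, mu), 3),
              round(disaster_prob_public_info(2, 1.0, mu), 3))
```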
This is illustrated in figures 1a and 1b, as well as all subsequent figures, demonstrating that the probability of AI-disaster are always decreasing in µ. Indeed, around µ = 0 (i.e. when capability is nearly irrelevant to producing the first AI), the only Nash equilibrium is to take no safety precautions at all. In terms of intervention, there is little we can do about the relative importance of capability, since this is largely determined by the technology. Our best bet might be at present to direct research into approaches where there is little return to risk-taking, prioritising some technological paths over others. \n Compatible goals It is also intuitively clear that reducing enmity should reduce the risk of AIdisaster. When competition gets less fierce, teams would be willing to take less risks. This intuition is also born out in our model, as decreases in e always reduce the risk. For illustration, contrast the graphs 2a and 2b, where the enmity is 0.5, with the previous graphs 1a and 1b where the enmity was 1. Enmity is something that we can work on by, for instance, building trust between nations and groups, sharing technologies or discoveries, merging into joint projects or agreeing to common aims. With this extra coordination, we could also consider agreements to allow the teams to move away from the Nash equilibrium, thus avoiding a race to the bottom when the situation is particularly dangerous (such as low capability µ). Friendly teams make friendly AIs. \n The curse of too much information Contrasting figure 1a with figure 2a (or figure 1a with figure 2b ), one notices a curious pattern. The no-information case is always safer than the other two cases, but the relative safety of private information or common information depends on the degree on enmity. It is always better if none of the teams have any idea about anyone's capability. For maximal enmity (e = 1), the private information scenario is similarly safer than the common information one. But for lower e, this does not hold: for low µ and low e, the public information scenario can be safer than the private one. Asymptotically, though, the private information case is of order of 1/µ n , while the public information case is of order 1/µ. The reasons for this is that it is only worth taking risks in the private information if one's capability is low. So to get a high risk, one needs a winner with low capability, which is equivalent to having all teams at low capability. The probability of having a single team at low capability diminishes inversely with µ. Hence the probability of having all teams at low capability diminishes as 1/µ n . In the public information case, the winner will take risks if the second ranked team is close to its own capability. Being close to a specific other team is inversely proportional to µ, and hence the probability of this diminishes as 1/µ. Our ability to manipulate the information levels of the teams or teams is likely to be somewhat limited, though we can encourage them to share more (in the low capability, low enmity situations) or alternatively to lock up their private knowledge 5 (in situation of higher capability or enmity). \n The number of competitors Finally, we can consider what happens when more teams are involved. Intuitively, competition might spur a race to the bottom if there are too many teams, and the model bears this out: in both the no-information and public information cases, adding extra teams strictly increases the dangers. 
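Assuming the three functions from the previous sketch are in scope, a small grid of values reproduces the qualitative claims made so far: risk falls in µ, falls in e, the no-information case is never the riskiest, the private/public ordering depends on e, and extra teams raise the no-information and public-information risk.

```python
# Quick numerical check of the claims above, reusing disaster_prob_no_info,
# disaster_prob_private_info and disaster_prob_public_info from the earlier sketch.
for n in (2, 5):
    for e in (1.0, 0.5):
        for mu in (0.25, 1.0, 4.0):
            print(n, e, mu,
                  round(disaster_prob_no_info(n, e, mu), 3),
                  round(disaster_prob_private_info(n, e, mu), 3),
                  round(disaster_prob_public_info(n, e, mu), 3))
# On this grid: every column falls as mu rises; every column falls as e falls;
# the no-information column never exceeds the other two; with e = 1 the
# private-information column never exceeds the public-information one, while
# for e = 0.5 and small mu the ordering flips; and raising n from 2 to 5
# raises the no-information and public-information risk.
```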
The private information case is more complicated. At low capability, adding more teams will certainly worsen the situation, but at higher capability, the effect is reversed 6 . This can be seen in figure 3 , which plots the private information risk curves for two teams and five teams. In terms of leverage, the most helpful intervention we could undertake to reduce the number of teams is to encourage groups to merge. Dividing teams would only be useful in very specific situations, and would need some method of ensuring that the divided teams did not recall each other's capabilities. \n Conclusion We've presented a model of AI arms race (which can be extended to other types of arms races and competitions) in which teams of AI designers can get ahead by skimping on safety. If the race takes some time there is a persistent Figure 3 : Risk of dangerous AI arms race for private information teams, at enmity 0.5, plotted against relative importance of capability. The graph for two teams is full, the graph for five teams is dashed. ratchet effect (or race to the bottom) in which each team reduces their safety precautions slightly, until a Nash equilibrium is reached-possibly one of very high danger. Several factors influence on the probability of AI disaster, three in an intuitive way, and one counter-intuitively. Intuitively, if capability is much more important than the level of security precautions taken, the overall outcome is safer. Similarly, reduced enmity between the teams produces better outcomes, as does reducing the number of teams (in most situations). Counter-intuitively, increasing the information available to all the teams (about their own capability or progress towards AI, or that of the other teams) increases the risk. This is a special case of an information hazard [Bos11] : we'd be better off not knowing. This model and variants of if may be of use in planning and regulating the development of AIs and other disruptive technological innovations. Figure 1 : 1 Figure1: Risk of dangerous AI arms race for two and five teams, at enmity 1, plotted against relative importance of capability. Three information-level scenarios: no capability information (full), private capability information (dashes), and full capability information (dots). \n Figure 2 : 2 Figure2: Risk of dangerous AI arms race for two and five teams, at enmity 0.5, plotted against relative importance of capability. Three information-level scenarios: no capability information (full), private capability information (dashes), and full capability information (dots). \n\t\t\t And only the winning team -if another team gets a disastrous AI first by taking lower precautions, they will 'won' the race to build the first AI.2 Of course, the model can be refined in various ways. One could make capacity information uncertain and fuzzy, one could have different levels of enmity between different teams, one could incorporate uncertainty about the safety levels and the ultimate outcomes, and so on. Or one could have a dynamic process to determine the outcome, rather than rushing straight to the Nash equilibrium. But the simple model is enough to gain useful insights.3 It may seem unusual for teams to not know their own capabilities in the real world. However, this is close to the situation we find ourselves with current AI research: people and organisations have a pretty clear idea of what resources and knowledge they have, but don't know how hard AI is or what routes are most likely to lead there. 
They are thus effectively ignorant of their own AI-building capabilities. \n\t\t\t If makes no sense that a team with higher capability would have a lower chance of winning (if so, they would voluntarily destroy part of their capability). \n\t\t\t Such secrecy can interfere with trust building, though, making it hard to reach agreements between teams if such agreement is needed.6 This is because only the teams with low capability take risks in cases of private information, and the more teams there are, the less likely it is that the winner will be low capability.", "date_published": "n/a", "url": "n/a", "filename": "Racing-to-the-precipice-a-model-of-artificial-intelligence-development.tei.xml", "abstract": "This paper presents a simple model of an AI arms race, where several development teams race to build the first AI. Under the assumption that the first AI will be very powerful and transformative, each team is incentivised to finish first -by skimping on safety precautions if need be. This paper presents the Nash equilibrium of this process, where each team takes the correct amount of safety precautions in the arms race. Having extra development teams and extra enmity between teams can increase the danger of an AI-disaster, especially if risk taking is more important than skill in developing the AI. Surprisingly, information also increases the risks: the more teams know about each others' capabilities (and about their own), the more the danger increases.", "id": "73d6424dd61745b7fc188172c5b71e62"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "WhyWeNeedFriendlyAI.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "061_ai-world-universe2.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "031_integrated.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": [], "title": "Under review as a conference paper at ICLR 2019 UNCOVERING SURPRISING BEHAVIORS IN REINFORCEMENT LEARNING VIA WORST-CASE ANALYSIS", "text": "INTRODUCTION Reinforcement Learning (RL) methods have achieved great success over the past few years, achieving human-level performance on a range of tasks such as Atari (Mnih et al., 2015) , Go (Silver et al., 2016) , Labyrinth , and Capture the Flag (Jaderberg et al., 2018) . On these tasks, and more generally in reinforcement learning, agents are typically trained and evaluated using their average reward over environment settings as the measure of performance, i.e. E P (e) [R(π(θ), e)] , where π(θ) denotes a policy with parameters θ, R denotes the total reward the policy receives over the course of an episode, and e denotes environment settings such as maze structure in a navigation task, appearance of objects in the environment, or even the physical rules governing environment dynamics. But what biases does the distribution P (e) contain, and what biases, or failures to generalize, do these induce in the strategies agents learn? 
To help uncover biases in the training distribution and in the strategies that agents learn, we propose evaluating the worst-case performance of agents over environment settings, i.e. min e∈E E [R(π(θ), e)] , where E is some set of possible environment settings. Worst-case analysis can provide an important tool for understanding robustness and generalization in RL agents. For example, it can help us with: • Understanding biases in training Catastrophic failures can help reveal situations that are rare enough during training that the agent does not learn a strategy that is general enough to cope with them. • Robustness For critical systems, one would want to eliminate, or at least greatly reduce, the probability of extreme failures. • Limiting exploitability If agents have learned strategies that fail to generalize to particular environment settings, then an adversary could try and exploit an agent by trying to engineer such environment settings leading to agent failure. In this work, we use worst-case analysis to investigate the performance of a state-of-the-art agent in solving a first-person 3D navigation task; a task on which agents have recently achieved average-case human-level performance (Wayne et al., 2018) . By optimizing mazes to minimize the performance of agents, we discover the existence of mazes where agents repeatedly fail to find the goal (which we refer to as a Catastrophic Failure). Our Contributions To summarize, the key contributions of this paper are as follows: 1. We introduce an effective and intuitive approach for finding simple environment settings leading to failure (Section 2). 2. We show that state-of-the-art agents carrying out navigation tasks suffer from drastic and often surprising failure cases (Sections 3.1 and 3.2). 3. We demonstrate that mazes leading to failure transfer across agents with different hyperparameters and, notably, even different architectures (Section 3.3). 4. We present an initial investigation into how the training distribution can be adapted by incorporating adversarial and out-of-distribution examples (Section 4). \n APPROACH Tasks We consider agents carrying out first-person 3D navigation tasks. Navigation is of central importance in RL research as it captures the challenges posed by partially observable Markov decision processes (POMDPs). The navigation tasks we use are implemented in DeepMind Lab (DM Lab) (Beattie et al., 2016) . 1 Each episode is played on a 15 × 15 maze where each position in the maze may contain a wall, an agent spawn point, or a goal spawn point. The maze itself is procedurally generated every episode, along with the goal and agent spawn locations. The goal location remains fixed throughout an episode, while the agent spawn location can vary. In training, the agent respawns at different locations each time they reach the goal, while for our optimization and analysis we limit the agent to the same spawn location. Agents receive RGB observations of size 96 × 72 pixels, examples of which are provided in Figure 1 . Episodes last for 120 seconds and are played at a framerate of 15 frames per second. The agent receives a positive reward of 10 every time it reaches the goal, and 0 otherwise. On this specific navigation task, RL agents have recently achieved human-level average-case performance (Wayne et al., 2018) . Agents We perform our analysis on Importance Weighted Actor-Learner Architecture agents trained to achieve human-level average-case performance on navigation tasks. 
These agents can be described as async batched-a2c agents with the V-trace algorithm for off policy-correction, and we henceforth refer to these as A2CV agents . Details of the training procedure are provided in Appendix A.1. Search Algorithm If we are interested in worst-case performance of agents, how can we find environment settings leading to the worst performance? In supervised learning, one typically uses gradient based methods to find inputs that lead to undesired output (Biggio et al., 2013; Szegedy et al., 2013; Goodfellow et al., 2014) . In contrast, we search for environment settings leading to an undesired outcome at the end of an episode. This presents a challenge as the environment rendering and MDP are not differentiable. We are therefore limited to black-box methods where we can only query agent performance given environment settings. To search for environment settings which cause catastrophic failures, we propose the local search procedure described in Algorithm 1 (visualizing the process in Figure 2 ). We then Evaluate each with the agent over 30 episodes, select the best maze (i.e. lowest agent score), and Modify this maze by randomly moving two walls to form the next set of candidates (Iteration 1). This process is repeated for 20 iterations, leading to a maze where the agent score is 0.09 in this example (i.e. the agent finds the goal once in 11 episodes). In Appendix A.2.1 we detail the computational requirements of this search procedure. \n EXPERIMENTS The agents we study achieve impressive average-case performance, but how much does their worstcase performance differ from their average-case performance? To investigate this, we consider the worst-case performance over a large set of mazes, including mazes that are not possible under the training distribution. A natural question to ask is whether any departure from the wall structure present during training will lead to agent failure. To test this, we evaluate the agent on samples from a distribution of mazes containing all mazes agents could be evaluated on during the search. In particular, we randomly select agent and goal spawn locations in the first step and then randomly move 40 walls, corresponding to the same actions taken by our optimization procedure, but where the actions are chosen randomly rather than in order to minimize agent performance. We find that agents do generalize to random mazes from the set we consider. In fact, we find that the average score obtained by agents on randomly perturbed mazes is slightly higher than on the training distribution, with agents obtaining an average of 45 goal reaches per two minute episode. The increased performance is likely due to the agent spawn location being fixed, making it easier for the agent to return to the goal once found. The considered agents generalize in the sense that agent performance is not reduced on average by out-of-distribution wall structure. But what about the worst case over all wall structures? Have the agents learned a general navigation strategy that works for all solvable mazes? Or do there exist environment settings that lead to catastrophic failures with high probability? In this section, we investigate these questions. We define a Catastrophic Failure to be an agent failing to find the goal in a two minute episode (1800 steps). 
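The search procedure described above (given as Algorithm 1 later in the text) can be sketched as a generic local search. Everything here is a placeholder of our own devising: `evaluate` and `modify` stand in for rolling out the trained agent on a candidate maze and for the move-two-walls perturbation, and the constants (10 candidates, 30 evaluation episodes, 20 iterations) follow the description above. The later maze-simplification step can reuse the same routine with a Modify that removes one wall per iteration and a longer run.

```python
import random


def find_failure_maze(initial_mazes, evaluate, modify,
                      num_iterations=20, num_candidates=10,
                      num_episodes=30, seed=0):
    """Local search for an environment setting that minimises agent score:
    evaluate each candidate maze, keep the one with the lowest average score,
    and perturb it to form the next candidate set.

    `evaluate(maze, num_episodes)` should return the agent's average episode
    score on `maze`; `modify(maze, rng)` should return a perturbed copy of
    `maze` (e.g. with two walls randomly moved, rejecting unsolvable mazes).
    """
    rng = random.Random(seed)
    candidates = list(initial_mazes)[:num_candidates]
    best = min(candidates, key=lambda m: evaluate(m, num_episodes))
    for _ in range(num_iterations):
        candidates = [modify(best, rng) for _ in range(num_candidates)]
        best = min(candidates, key=lambda m: evaluate(m, num_episodes))
    return best


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end-to-end: a "maze" is just a vector,
    # and the dummy agent scores high unless the vector drifts away from zero.
    def toy_evaluate(maze, episodes):
        return 45.0 - sum(abs(w) for w in maze)

    def toy_modify(maze, rng):
        new = list(maze)
        for _ in range(2):                       # move "two walls"
            new[rng.randrange(len(new))] += rng.choice([-1, 1])
        return new

    worst = find_failure_maze([[0] * 8 for _ in range(10)],
                              toy_evaluate, toy_modify)
    print(worst, toy_evaluate(worst, 30))
```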
As detailed below, we find that not only do there exist mazes leading to catastrophic failure, there exist surprisingly simple mazes that lead to catastrophic failure for agents yet are consistently and often rapidly solved by humans. \n RESULT 1: CATASTROPHIC FAILURES EXIST Do environment settings leading to catastrophic failure exist for the agents we are considering? By searching over mazes using the procedure outlined in Algorithm 1, we find mazes where agents fail to find the goal on many episodes, only finding the goal 10% of the time. In fact, some individual mazes lead to failure across five different agents we tested, with even the best performing agent only finding the goal in 20% of the episodes. Optimization curves for our search procedure are given in Figure 3 . Note that while we define catastrophic failure as failure to find the goal, the actual objective used for the optimization was average number of goal reaches during an episode. Using average number of goals gives a stronger signal at the start of the optimization process. Finding mazes leading to lower average number of captures is easier than finding mazes where the agent rarely finds the goal even once. As can be seen, despite finding the goal on average 45 times per episode on randomly perturbed mazes, on mazes optimized to reduce score, agents find the goal on average only 0.33 times per episode, more than a 100× decrease in performance. In terms of probability of catastrophic failure, we note that despite agents finding the goal in approximately 98% of episodes on randomly perturbed mazes, using our method, on average we find mazes where agents only finds the goal in 30% of the episodes. Example trajectories agents take during failures are visualized in Figure 4 . The trajectories often seem to demonstrate a failure to use memory to efficiently explore the maze with the agent repeatedly visiting the same locations multiple times. The mazes presented in Figure 4 appear to be of higher complexity than mazes seen during training. This suggests that to obtain agents that truly master navigation, more complex mazes should be included in the training distribution. However, we can ask whether it is only more complex mazes that lead to catastrophic failure or whether there are also simple mazes leading to catastrophic failure. This is a question we explore in the next subsection. \n RESULT 2: SIMPLE CATASTROPHIC FAILURES EXIST While the existence of catastrophic failures may be intriguing and perhaps troubling, one might suspect that the failures are caused by the increased complexity of the mazes leading to failure relative to the mazes the agent is exposed to during training, e.g., the mazes leading to failure contain more dead ends and sometimes have lower visibility. Further, understanding the cause of failure in such mazes seems challenging due to the large number of wall structures that may be causing the agent to fail. In this section, we explore whether there exist simple mazes which lead to catastrophic failures. As our measure of complexity, we use the total number of walls in the maze. We also evaluate humans on such mazes to get a quantitative measure of maze complexity. To find simple mazes which lead to failure, we first follow the same procedure as in the previous section, producing a set of mazes which all lead to catastrophic failures (i.e. a low agent scores). 
Next, we use this set of mazes as the initial set of candidates in our search algorithm, however we now use a Modify function that removes a single randomly chosen wall each iteration. This process is repeated for 70 iterations, searching for a maze with few walls while maintaining low agent score. In Figure 4 , we present the resulting simple mazes and the corresponding agent trajectories from our optimization procedure. Interestingly, we find that one can remove a majority of the walls in a maze and still maintain the catastrophic failure (i.e. very low agent score). Of note is that a number of these mazes are strikingly simple, suggesting that there exists structure in the environment that the agent has not generalized to. For example, we can see that placing the goal in a small room in an otherwise open maze can significantly reduce the agent's ability to find the goal. Human baselines While these simple maps may lead to catastrophic failure, it is unclear whether this is because of the agent or whether the maze is difficult in a way that is not obvious. To investigate this, we perform human experiments by tasking humans to play on a set of 10 simplified mazes. Notably, we find that human players are able to always locate the goal in every maze and typically within one third of the full episode length. This demonstrates that the mazes are comfortably solvable within the course of an episode by players with a general navigation strategy. We provide a detailed comparison of agent and human performance in Appendix A.3. Analysis One question that may arise is the extent to which these mazes are isolated points in the space of mazes. That is, if the maze was changed slightly, would it no longer lead to catastrophic failure? To test this, we investigate how sensitive our discovered failure mazes are with respect to the agent and goal spawn locations on simplified adversarial mazes. As can be seen in Figure 5 , we find that for a large range of spawn locations, the mazes still lead to failure. This indicates that there is specific local maze structure which causes agents to fail. Procedures for finding such simple mazes may prove useful as a tool for debugging agents and understanding the ways in which training has led them to develop narrow strategies that are good enough for achieving high average-case performance. \n RESULT 3: FAILURE MAZES TRANSFER ACROSS AGENTS We have found failure cases for individual agents, but to what extent do these failure cases highlight a specific peculiarity of the individual agent versus a more general failure of a certain class of agents, or even a shortcoming of the distribution used for training? In this section, we investigate whether mazes which cause one trained agent to fail also cause other agents to fail. We consider two types of transfer: (1) between different hyperparameters of the same model architecture, and (2) between different model architectures. To test transfer between agents of the same architecture, we train a set of five A2CV agents with different entropy costs and learning rates. To test transfer between agents with significantly different architectures, we train a set of five MERLINbased agents (Wayne et al., 2018) . These agents have a number of differences to the A2CV agents, most notably they contain a sophisticated memory structure based on a DNC (but with a fixed write location per timestep) (Graves et al., 2016) . 
Both agents are trained on the same distribution and achieve human-level averages scores on the navigation task (with MERLIN scoring 10% higher than A2CV on average). Further details of agent training can be found in Appendix A.1. To quantify the level of transfer between (sets of) agents, we follow the procedure for finding adversarial mazes outlined in Section 3.1 to produce a collection of 50 unique failure mazes for each agent (i.e. 10 collections of 50 mazes each). We then evaluate every agent 100 times on each maze in each collection, reporting their average performance on each collection. Complete quantitative transfer results can be found in Appendix A.4. Failure cases transfer somewhat across all agents First, we find that across all agents, some level of transfer exists. In particular, as can be seen in Figure 6 , the probability of one agent finding the goal on mazes generated to reduce the score another agent is significantly below 1. This suggests a common cause of failure that is some combination of the distribution of environment settings used during training and the set of methods that are currently used to train such agents. A possible way to address this could be enriching the training distribution so that it contains fewer biases and encourages more general solutions. Transfer within agent type is stronger than between agent type Comparing the performance of each agent type on mazes from the same agent type to mazes from another agent type, we see that transfer within agent type is stronger. As shown in Figure 6b , performance increases as we go from 'MERLIN to MERLIN' 'A2CV to MERLIN' (0.42 to 0.58) and also if we go from 'A2CV to A2CV' to 'MERLIN to A2CV' (0.63 to 0.70). This suggests that there are some common biases in agents that are due to their architecture type. Analyzing structural differences between mazes that lead to one agent type to fail but not another could give interesting insight into behavioural differences between agents beyond just average performance. A2CV agents are less susceptible to transfer Despite similar probabilities of failure when evaluating on mazes optimized for the same agent, A2CV agents seem to suffer less on mazes optimized using other A2CV or MERLIN agents. This indicates that A2CV agents may have learned a more diverse set of strategies. Figure 6 : Mazes that lead to failures in one agent lead to failure in other agents as well. This is the case for agents of the same architecture with different hyperparameters, and is also the case for transfer across agents of different architecture. We note, however, that transfer across agents with different architectures is weaker than among agents with the same architecture, and that the performance of agents with the same architecture but with different hyperparameters is slightly higher than for the agents used to originally find the mazes. \n ADAPTING THE TRAINING DISTRIBUTION From our experiments so far, we have discovered that there exist many mazes which lead to catastrophic failure. In this section, we investigate whether agent performance can be improved by adapting the training distribution, for example by incorporating adversarial mazes into training and modifying the original mazes used in training. \n MOTIVATION To better understand what may be causing catastrophic failures, with the aim of fixing them, we compare the set of adversarial mazes with the original set of mazes used in training. From this comparison, we find that there are two notable differences. 
The probability of a catastrophic failure correlates with the distance between the spawn locations and how hidden the goal is First, we find that a number of features are more common in adversarial mazes than non-adversarial mazes. In particular, adversarial mazes are more likely to have the goal hidden in an enclosed space (such as a small room), and on average the path length from the player's starting location to the goal is significantly longer (31.1 ± 8.4 compared to 11.6 ± 6.3). Notably, while the training distribution also contains hidden goals which are far from the agent's starting location, they are much rarer. Adversarial mazes are typically far from the training distribution Second, we find that adversarial mazes tend to not only be out-of-distribution, but also far from the training distribution due to the Modify function used in our adversarial search procedure (for example, see Figure 2 ). This contrasts with the adversarial images literature where attacks are usually constrained to be small or imperceptible. It may therefore not be surprising that the agent is unable to generalize to all out-of-distribution mazes which could also explain the significant reduction in their performance. Given these two observations, it is natural to ask whether the training distribution can be adapted to improve the agent's performance. In the following sections we investigate this question and discuss our findings, focusing on incorporating adversarial mazes into training and modifying the original mazes used in training. \n APPROACH We consider two distinct approaches for incorporating adversarial and out-of-distribution mazes into the training distribution. Adversarial training To add adversarial mazes into the training distribution, we first create a dataset of 6000 unique adversarial mazes from separate runs of our search procedure using the previously trained A2CV agents. Notably, this set also includes the 250 mazes used in our transfer experiments (Section 3.3). Next, we train a new set of A2CV agents using both this adversarial set of mazes and the standard distribution of mazes, sampling randomly every episode (i.e. 50% of training episodes are on an adversarial maze). Randomly perturbed training To ensure our adversarial search procedure produces indistribution adversarial mazes, we alter the default maze generator used in training so that any adversarial maze can be generated. We accomplish this by randomly perturbing the original mazes, repeatedly using the same Modify function used by our adversarial search procedure, but selecting candidates randomly rather than by worst agent performance. \n RESULTS In this section, we report our findings on the robustness of agents trained using the approaches above. Catastrophic failures still exist Our main finding is that while agents learn to perform well on the richer distributions of mazes described above, this does not lead to robust agents. In particular, agents trained on a distribution of mazes enriched with 6000 adversarial mazes were able to find the goal on average 89.8% of the time on the adversarial mazes they were trained on. Similarly, agents trained on randomly perturbed mazes were able to find the goal close to 100% of the time on the distribution they were trained on. However, despite the agents being trained on these richer training distributions, the same search method is still able to find mazes leading to extreme failure as can be seen in Figure 7 . 
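The two training-distribution changes just described can be sketched as a single episode sampler. The interface below (a `generate_maze` callable, an optional fixed adversarial set mixed in 50/50, and an optional perturbation applied a fixed number of times) is our own assumption about how such a sampler might be organised, not the authors' code.

```python
import random


def make_training_sampler(generate_maze, adversarial_set=None,
                          perturb=None, num_perturbations=0, seed=0):
    """Return a function that samples one training maze per episode.

    - Adversarial training: pass `adversarial_set`; half of the episodes are
      drawn from it and half from the standard generator, as described above.
    - Randomly perturbed training: pass `perturb` (the same Modify operation
      used by the adversarial search) and `num_perturbations`, so that any
      adversarial maze becomes reachable under the training distribution.
    """
    rng = random.Random(seed)

    def sample():
        if adversarial_set is not None and rng.random() < 0.5:
            return rng.choice(adversarial_set)
        maze = generate_maze(rng)
        if perturb is not None:
            for _ in range(num_perturbations):
                maze = perturb(maze, rng)
        return maze

    return sample


if __name__ == "__main__":
    # Toy stand-ins: a "maze" is a bit vector, perturbation flips a few bits.
    toy_generate = lambda rng: [rng.randint(0, 1) for _ in range(8)]
    toy_perturb = lambda maze, rng: [b ^ (rng.random() < 0.1) for b in maze]
    sampler = make_training_sampler(toy_generate, perturb=toy_perturb,
                                    num_perturbations=3)
    print(sampler())
```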
One possible explanation for this result is that the 6000 adversarial mazes used for training were insufficient to get good coverage of the space of mazes, and that further enlarging this set could yield qualitatively different results. Indeed, for agents trained using randomly perturbed mazes, the search procedure took 50 iterations to obtain the same level of failure as it did after 20 iterations when applied to agents trained on the standard training distribution. This suggests that perhaps enriching the training distribution with a very large set of adversarial mazes may lead to more general and robust agents. However, there are a number of challenges that need to be addressed before this approach can be tested which we will describe in the next section. \n DISCUSSION Our results suggest that if a richer training distribution is to yield more robust agents, we may need to use a very large set of environment settings leading to failure. This is similar to how adversarial training in supervised learning is performed where more adversarial examples are used than the original training examples. We describe below what we see as two significant challenges that need to be overcome before such an approach can be thoroughly evaluated in the RL setting. \n Expensive generation The cost of generating a single adversarial setting is on the order of 1000's episodes using the method in this work. This implies that generating a set of adversarial settings which is similar in size to the set trained on would require orders of magnitude more computational than training itself. This could be addressed with faster methods for generating adversarial settings. Expensive training Since agents receive very little reward in adversarial settings, the training signal is incredibly sparse. Therefore, it is possible that many more training iterations are necessary for agents to learn to perform well in each adversarial setting. A possible solution to this challenge is to design a curriculum over adversity, whereby easier variants of the adversarial settings are injected into the training distribution. For example, for the navigation tasks considered here, one could include training settings with challenging mazes where the goal is in any position on the shortest path between the starting location of the agent and the challenging goal. We hope that these challenges can be overcome so that, in the context of RL, the utility of adversarial retraining can be established -an approach which has proved useful in supervised learning tasks. However, since significant challenges remain, we suspect that much effort and many pieces of work will be required before a conclusive answer is achieved. \n RELATED WORK Navigation Recently, there has been significant focus in the RL community on agent navigation in simulated 3D environments, including a community-wide challenge for agents in such environments called VizDoom (Kempka et al., 2016) . Such 3D first-person navigation tasks are particularly interesting because they capture challenges such as partial observability, and require the agent to \"effectively perceive, interpret, and learn the 3D world in order to make tactical and strategic decisions where to go and how to act.\" (Kempka et al., 2016) . Recent advances have led to impressive human-level performance on navigation tasks in large procedurally generated environments (Beattie et al., 2016; Wayne et al., 2018) . 
Adversarial examples in supervised learning Our work can be seen as an RL navigation analogue of work on adversarial attacks on supervised learning systems for image classification (Szegedy et al., 2013) . For adversarial attacks on image classifiers, one considers a set of inputs that is larger than the original distribution, but where one would hope that systems perform just as well on L ∞ balls around inputs from the distribution. In particular, the adversarial examples lie outside the training distribution. Analogously, we consider a set of mazes which is larger than the original set of mazes used during training, but where we would hope our system will work just as well. Notably, while similar on a conceptual level, our setting has two key differences from this previous line of work: (1) The attack vector consists of changing latent semantic features of the environment (i.e. the wall structure of a maze), rather than changing individual pixels in an input image in an unconstrained manner. (2) The failure is realized over multiple steps of agent and environment interacting with each other, rather than simply being errant output from a single forward pass through a neural net. More recently, in the context of supervised learning for image classification, there has been work to find constrained adversarial attacks which is closer to what we consider in this work (Athalye & Sutskever, 2017; Fawzi & Frossard, 2015; Eykholt et al., 2018; Sharif et al., 2016) . In the context of interpretable adversarial examples in image classification, similar approaches to our simplification approach have been explored where one searches for adversarial perturbations with group-sparse structure or other minimal structure (Xu et al., 2018; Brendel et al., 2018) . Additionally, our findings regarding transfer are consistent with findings on adversarial examples for computer vision networks where it has been found that perturbations that are adversarial for one network often transfer across other networks (Szegedy et al., 2013; Tramèr et al., 2017) Input attacks on RL systems There have been a number of previous works which have extended adversarial attacks to RL settings, however they have achieved this by manipulating inputs directly, which effectively amounts to changing the environment renderer (Huang et al., 2017; Lin et al., 2017a; b) . While these are interesting from a security perspective, it is less clear what they tell us about the generality of the strategy learned by the agent. Generalization in RL systems Recently, it has been shown that simple agents trained on restricted datasets fail to learn sufficiently general navigation strategies to improve goal retrieval times on held out mazes (Dhiman et al., 2018) . In comparison, our method is both automatic and able to find more spectacular failures. Further, our findings highlight failures in exploration during navigation. This is in contrast to this previous work which studied failures to exploit knowledge from previous goal retrievals in the same episode. In the context of testing generalization in RL, previous work has looked at statistical generalization in RL . Here we consider agents that already generalize in the statistical sense and try to better characterize the ways in which they generalize beyond the average-case. 
\n CONCLUSIONS AND FUTURE WORK In this work, we have shown that despite the strong average-case performance often reported of RL agents, worst-case analysis can uncover environment settings which agents have failed to generalize to. Notably, we have found that not only do catastrophic failures exist, but also that simple catastrophic failures exist which we would hope agents would have generalized to, and that failures also transfer between agents and architectures. As agents are trained to perform increasingly complicated tasks in more sophisticated environments, for example AirSim (Shah et al., 2017) and CARLA (Dosovitskiy et al., 2017) , it is of interest to understand their worst-case performance and modes of generalization. Further, in real world applications such as self-driving cars, industrial control, and robotics, searching over environment settings to investigate and address such behaviours is likely to be critical on the path to robust and generalizable agents. To conclude, while this work has focused mostly on evaluation and understanding, it is only a first step towards the true goal of building more robust, general agents. Initial results we report indicate that enriching the training distribution with settings leading to failure may need to be done at a large scale if it is to work, which introduces significant challenges. While training robust agents is likely an endeavour requiring significant effort, we believe it is important if agents are to carry out critical tasks and on the path to finding more generally intelligent agents. goal was <50% (compared to 98% on the average maze). Furthermore, the 25th, 50th, and 75th percentiles were as follows: • p(reaching the goal): 0.031, 0.136, 0.279 • number of goals reached: 0.042, 0.136, 0.368 \n A.3 HUMAN EXPERIMENTS To upper bound the intrinsic difficulty of the mazes found to be adversarial to agents, we conducted experiments where three humans played on the same mazes. Each human played a single episode on each of ten mazes. The humans played at the same resolution as agents, 96x72 pixels, to rule out visual acuity as a confounding factor. On all mazes, all humans successfully found the goal in the course of the episode. In fact, in most episodes, humans were able to find the goal in less than a third of the episode. Figure 8 : Mazes used for human experiments. For each maze, the agent that performed best found the goal less than 50% of the time. In contrast, humans always found the goal, usually within less than a third of the episode. Note that humans played at the same resolution as agents, 96x72 pixels. Figure 1 : 1 Figure 1: Navigation task. (left) Example maze from the training distribution together with the path taken by the agent from spawn (cyan) to goal (magenta). (right) Frames from top left to bottom right correspond to agent observations as it takes the path from spawn to goal. Note that while the navigation task may look simple given a top down view, the agent only receives very partial information about the maze at every step, making navigation a difficult task. \n Figure 2 : 2 Figure 2: Example of search procedure. First, we generate a set of 10 initial candidate mazes by sampling from the training distribution.We then Evaluate each with the agent over 30 episodes, select the best maze (i.e. lowest agent score), and Modify this maze by randomly moving two walls to form the next set of candidates (Iteration 1). 
This process is repeated for 20 iterations, leading to a maze where the agent score is 0.09 in this example (i.e. the agent finds the goal once in 11 episodes). In Appendix A.2.1 we detail the computational requirements of this search procedure. \n (a) Average number of goals reached per episode over the course of the optimization. (b) Probability of the agent reaching the goal in an episode. \n Figure 3 : 3 Figure 3: The search algorithm is able to rapidly find mazes where agents fail to find the goal. (a) The objective used for the optimizer is average agent score. The dashed line corresponds to average goals reached on randomly perturbed mazes. (b) Minimizing score also leads to a low probability of at least one goal retrieval in an episode. The dashed line corresponds to average probability of reaching a goal on randomly perturbed mazes. The blue lines are computed by averaging across 50 optimization runs. \n Figure 4 : 4 Figure 4: Example mazes leading to low scores and example trajectories on these mazes. (a) Maze with randomly perturbed walls. Despite being out of distribution, agents find the goal in 98% of episodes on such mazes and are able to get the goal on average 45 times per episode. (b) Maze obtained after 20 iterations, moving two walls at each iteration to minimize reward. All agents find the goal on such mazes in less than 20% of episodes. (c) Maze obtained through additional iterations of removing walls. All agents find the goal on such mazes in less than 40% of episodes. (d) Human trajectory on the same maze as in (c). Humans are able to consistently find the goal on such mazes. \n (a) Agent locations, with goal (magenta) fixed.(b) Goal locations, with agent (cyan) fixed. \n Figure 5 : 5 Figure 5: Adversarial mazes are robust to change of spawn positions. The probability of goal retrieval (shown with the color bar) remains low across large portions of the simplified maze as the (a) agent spawn locations and (b) goal locations are moved for each episode.. \n (a) Across hyperparameters, same architecture. (b) Across architectures. \n (a) Average number of goals reached. (b) Probability of finding the goal in a single episode. \n Figure 7 : 7 Figure 7: Richer training distributions did not lead to robust agents. Adversarial optimization for agents trained with adversarial mazes (red) and for agents trained with randomly perturbed mazes (yellow). Compared against the Standard training method from Figure 3 for 50 iterations (blue). \n Figure 9 : 9 Figure 9: Trajectories taken by Human 3 on mazes leading to agent failure. \n Table 1 : 1 Human seconds-to-first-goal on agent failure mazes we provide detailed results for our transfer experiments. In particular, we detail transfer between all pairs among the 10 agents, five A2CV agents and five MERLIN agents trained with different entropy costs and learning rates. \n Figure 10 : 10 Figure 10: Pairwise transfer scores. Lower number indicates more transfer. 'A' corresponds to our A2CV agent, and 'M' corresponds to MERLIN. \n end best ← Evaluate(candidates, num evaluations); Algorithm 1: Method for finding environment settings leading to failure cases. Concretely, we generate a set of initial candidate mazes by sampling mazes from the training distribution. We then use the Modify function on the maze which yielded the lowest agent score to randomly move two walls to produce a new set of candidates, rejecting wall moves that lead to unsolvable mazes. 
Importantly, this method is able to effectively find catastrophic failure cases (as we demonstrate in Section 3.1), while also having the advantage of being intuitive to understand and implement. input : num iterations, num candidates, num evaluations and function Modify output: An environment setting best candidates ← GenerateCandidates(num candidates); for i ← 1 to num iterations do best ← Evaluate(candidates, num evaluations); candidates ← Modify(best, num candidates);", "date_published": "n/a", "url": "n/a", "filename": "uncovering_surprising_behavior.tei.xml", "abstract": "Reinforcement learning agents are typically trained and evaluated according to their performance averaged over some distribution of environment settings. But does the distribution over environment settings contain important biases, and do these lead to agents that fail in certain cases despite high average-case performance? In this work, we consider worst-case analysis of agents over environment settings in order to detect whether there are directions in which agents may have failed to generalize. Specifically, we consider a 3D first-person task where agents must navigate procedurally generated mazes, and where reinforcement learning agents have recently achieved human-level average-case performance. By optimizing over the structure of mazes, we find that agents can suffer from catastrophic failures, failing to find the goal even on surprisingly simple mazes, despite their impressive average-case performance. Additionally, we find that these failures transfer between different agents and even significantly different architectures. We believe our findings highlight an important role for worst-case analysis in identifying whether there are directions in which agents have failed to generalize. Our hope is that the ability to automatically identify failures of generalization will facilitate development of more general and robust agents. To this end, we report initial results on enriching training with settings causing failure. 1 A full description and code for the tasks can be found at https://github.com/deepmind/lab/ tree/master/game_scripts/levels/contributed/dmlab30#goal-locations-large.", "id": "867f3374d9bc8f72b895cbb6df1f00ba"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "science.aba9647.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Dylan Hadfield-Menell", "Anca Dragan", "Pieter Abbeel", "Stuart Russell"], "title": "Cooperative Inverse Reinforcement Learning", "text": "Introduction \"If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively . . . we had better be quite sure that the purpose put into the machine is the purpose which we really desire.\" So wrote Norbert Wiener (1960) in one of the earliest explanations of the problems that arise when a powerful autonomous system operates with an incorrect objective. This value alignment problem is far from trivial. Humans are prone to mis-stating their objectives, which can lead to unexpected implementations. In the myth of King Midas, the main character learns that wishing for 'everything he touches to turn to gold' leads to disaster. 
In a reinforcement learning context, Russell & Norvig (2010) describe a seemingly reasonable, but incorrect, reward function for a vacuum robot: if we reward the action of cleaning up dirt, the optimal policy causes the robot to repeatedly dump and clean up the same dirt. A solution to the value alignment problem has long-term implications for the future of AI and its relationship to humanity (Bostrom, 2014 ) and short-term utility for the design of usable AI systems. Giving robots the right objectives and enabling them to make the right trade-offs is crucial for self-driving cars, personal assistants, and human-robot interaction more broadly. The field of inverse reinforcement learning or IRL (Russell, 1998; Ng & Russell, 2000; Abbeel & Ng, 2004 ) is certainly relevant to the value alignment problem. An IRL algorithm infers the reward function of an agent from observations of the agent's behavior, which is assumed to be optimal (or approximately so). One might imagine that IRL provides a simple solution to the value alignment problem: the robot observes human behavior, learns the human reward function, and behaves according to that function. This simple idea has two flaws. The first flaw is obvious: we don't want the robot to adopt the human reward function as its own. For example, human behavior (especially in the morning) often conveys a desire for coffee, and the robot can learn this with IRL, but we don't want the robot to want coffee! This flaw is easily fixed: we need to formulate the value alignment problem so that the robot always has the fixed objective of optimizing reward for the human, and becomes better able to do so as it learns what the human reward function is. The second flaw is less obvious, and less easy to fix. IRL assumes that observed behavior is optimal in the sense that it accomplishes a given task efficiently. This precludes a variety of useful teaching behaviors. For example, efficiently making a cup of coffee, while the robot is a passive observer, is a inefficient way to teach a robot to get coffee. Instead, the human should perhaps explain the steps in coffee preparation and show the robot where the backup coffee supplies are kept and what do if the coffee pot is left on the heating plate too long, while the robot might ask what the button with the puffy steam symbol is for and try its hand at coffee making with guidance from the human, even if the first results are undrinkable. None of these things fit in with the standard IRL framework. Cooperative inverse reinforcement learning. We propose, therefore, that value alignment should be formulated as a cooperative and interactive reward maximization process. More precisely, we define a cooperative inverse reinforcement learning (CIRL) game as a two-player game of partial information, in which the \"human\", H, knows the reward function (represented by a generalized parameter θ), while the \"robot\", R, does not; the robot's payoff is exactly the human's actual reward. Optimal solutions to this game maximize human reward; we show that solutions may involve active instruction by the human and active learning by the robot. Reduction to POMDP and Sufficient Statistics. As one might expect, the structure of CIRL games is such that they admit more efficient solution algorithms than are possible for general partialinformation games. Let (π H , π R ) be a pair of policies for human and robot, each depending, in general, on the complete history of observations and actions. 
A policy pair yields an expected sum of rewards for each player. CIRL games are cooperative, so there is a well-defined optimal policy pair that maximizes value. 2 In Section 3 we reduce the problem of computing an optimal policy pair to the solution of a (single-agent) POMDP. This shows that the robot's posterior over θ is a sufficient statistic, in the sense that there are optimal policy pairs in which the robot's behavior depends only on this statistic. Moreover, the complexity of solving the POMDP is exponentially lower than the NEXP-hard bound that (Bernstein et al., 2000) obtained by reducing a CIRL game to a general Dec-POMDP. Apprenticeship Learning and Suboptimality of IRL-Like Solutions. In Section 3.3 we model apprenticeship learning (Abbeel & Ng, 2004 ) as a two-phase CIRL game. In the first phase, the learning phase, both H and R can take actions and this lets R learn about θ. In the second phase, the deployment phase, R uses what it learned to maximize reward (without supervision from H). We show that classic IRL falls out as the best-response policy for R under the assumption that the human's policy is \"demonstration by expert\" (DBE), i.e., acting optimally in isolation as if no robot exists. But we show also that this DBE/IRL policy pair is not, in general, optimal: even if the robot expects expert behavior, demonstrating expert behavior is not the best way to teach that algorithm. We give an algorithm that approximately computes H's best response when R is running IRL under the assumption that rewards are linear in θ and state features. Section 4 compares this best-response policy with the DBE policy in an example game and provides empirical confirmation that the bestresponse policy, which turns out to \"teach\" R about the value landscape of the problem, is better than DBE. Thus, designers of apprenticeship learning systems should expect that users will violate the assumption of expert demonstrations in order to better communicate information about the objective. \n Related Work Our proposed model shares aspects with a variety of existing models. We divide the related work into three categories: inverse reinforcement learning, optimal teaching, and principal-agent models. Inverse Reinforcement Learning. Ng & Russell (2000) define inverse reinforcement learning (IRL) as follows: \"Given measurements of an [actor]'s behavior over time. . . . Determine the reward function being optimized.\" The key assumption IRL makes is that the observed behavior is optimal in the sense that the observed trajectory maximizes the sum of rewards. We call this the demonstration-by-expert (DBE) assumption. One of our contributions is to prove that this may be suboptimal behavior in a CIRL game, as H may choose to accept less reward on a particular action in order to convey more information to R. In CIRL the DBE assumption prescribes a fixed policy \n Ground Truth \n Expert Demonstration Instructive Demonstration Figure 1 : The difference between demonstration-by-expert and instructive demonstration in the mobile robot navigation problem from Section 4. Left: The ground truth reward function. Lighter grid cells indicates areas of higher reward. Middle: The demonstration trajectory generated by the expert policy, superimposed on the maximum a-posteriori reward function the robot infers. The robot successfully learns where the maximum reward is, but little else. 
Right: An instructive demonstration generated by the algorithm in Section 3.4 superimposed on the maximum a-posteriori reward function that the robot infers. This demonstration highlights both points of high reward and so the robot learns a better estimate of the reward. for H. As a result, many IRL algorithms can be derived as state estimation for a best response to different π H , where the state includes the unobserved reward parametrization θ. Ng & Russell (2000) , Abbeel & Ng (2004), and Ratliff et al. (2006) compute constraints that characterize the set of reward functions so that the observed behavior maximizes reward. In general, there will be many reward functions consistent with this constraint. They use a max-margin heuristic to select a single reward function from this set as their estimate. In CIRL, the constraints they compute characterize R's belief about θ under the DBE assumption. Ramachandran & Amir (2007) and Ziebart et al. (2008) consider the case where π H is \"noisily expert,\" i.e., π H is a Boltzmann distribution where actions or trajectories are selected in proportion to the exponent of their value. Ramachandran & Amir (2007) adopt a Bayesian approach and place an explicit prior on rewards. Ziebart et al. (2008) places a prior on reward functions indirectly by assuming a uniform prior over trajectories. In our model, these assumptions are variations of DBE and both implement state estimation for a best response to the appropriate fixed H. Natarajan et al. (2010) introduce an extension to IRL where R observes multiple actors that cooperate to maximize a common reward function. This is a different type of cooperation than we consider, as the reward function is common knowledge and R is a passive observer. Waugh et al. (2011) and Kuleshov & Schrijvers (2015) consider the problem of inferring payoffs from observed behavior in a general (i.e., non-cooperative) game given observed behavior. It would be interesting to consider an analogous extension to CIRL, akin to mechanism design, in which R tries to maximize collective utility for a group of Hs that may have competing objectives. Fern et al. ( 2014 ) consider a hidden-goal MDP, a special case of a POMDP where the goal is an unobserved part of the state. This can be considered a special case of CIRL, where θ encodes a particular goal state. The frameworks share the idea that R helps H. The key difference between the models lies in the treatment of the human (the agent in their terminology). Fern et al. ( 2014 ) model the human as part of the environment. In contrast, we treat H as an actor in a decision problem that both actors collectively solve. This is crucial to modeling the human's incentive to teach. Optimal Teaching. Because CIRL incentivizes the human to teach, as opposed to maximizing reward in isolation, our work is related to optimal teaching: finding examples that optimally train a learner (Balbach & Zeugmann, 2009; Goldman et al., 1993; Goldman & Kearns, 1995) . The key difference is that efficient learning is the objective of optimal teaching, while it emerges as a property of optimal equilibrium behavior in CIRL. Cakmak & Lopes (2012) consider an application of optimal teaching where the goal is to teach the learner the reward function for an MDP. The teacher gets to pick initial states from which an expert executes the reward-maximizing trajectory. The learner uses IRL to infer the reward function, and the teacher picks initial states to minimize the learner's uncertainty. 
In CIRL, this approach can be characterized as an approximate algorithm for H that greedily minimizes the entropy of R's belief. Beyond teaching, several models focus on taking actions that convey some underlying state, not necessarily a reward function. Examples include finding a motion that best communicates an agent's intention (Dragan & Srinivasa, 2013) , or finding a natural language utterance that best communicates a particular grounding (Golland et al., 2010) . All of these approaches model the observer's inference process and compute actions (motion or speech) that maximize the probability an observer infers the correct hypothesis or goal. Our approximate solution to CIRL is analogous to these approaches, in that we compute actions that are informative of the correct reward function. Principal-agent models. Value alignment problems are not intrinsic to artificial agents. Kerr (1975) describes a wide variety of misaligned incentives in the aptly titled \"On the folly of rewarding A, while hoping for B.\" In economics, this is known as the principal-agent problem: the principal (e.g., the employer) specifies incentives so that an agent (e.g., the employee) maximizes the principal's profit (Jensen & Meckling, 1976) . Principal-agent models study the problem of generating appropriate incentives in a non-cooperative setting with asymmetric information. In this setting, misalignment arises because the agents that economists model are people and intrinsically have their own desires. In AI, misalignment arises entirely from the information asymmetry between the principal and the agent; if we could characterize the correct reward function, we could program it into an artificial agent. Gibbons (1998) provides a useful survey of principal-agent models and their applications. \n Cooperative Inverse Reinforcement Learning This section formulates CIRL as a two-player Markov game with identical payoffs, reduces the problem of computing an optimal policy pair for a CIRL game to solving a POMDP, and characterizes apprenticeship learning as a subclass of CIRL games. \n CIRL Formulation Definition 1. A cooperative inverse reinforcement learning (CIRL) game M is a two-player Markov game with identical payoffs between a human or principal, H, and a robot or agent, R. The game is described by a tuple, M = S, {A H , A R }, T (•|•, •, •), {Θ, R(•, •, •; •)}, P 0 (•, •), γ , (s 0 , θ) γ a discount factor: γ ∈ [0, 1]. We write the reward for a state-parameter pair as R(s, a H , a R ; θ) to distinguish the static reward parameters θ from the changing world state s. The game proceeds as follows. First, the initial state, a tuple (s, θ), is sampled from P 0 . H observes θ, but R does not. This observation model captures the notion that only the human knows the reward function, while both actors know a prior distribution over possible reward functions. At each timestep t, H and R observe the current state s t and select their actions a H t , a R t . Both actors receive reward r t = R(s t , a H t , a R t ; θ) and observe each other's action selection. A state for the next timestep is sampled from the transition distribution, s t+1 ∼ P T (s |s t , a H t , a R t ), and the process repeats. Behavior in a CIRL game is defined by a pair of policies, (π H , π R ), that determine action selection for H and R respectively. In general, these policies can be arbitrary functions of their observation histories; π H : A H × A R × S * × Θ → A H , π R : A H × A R × S * → A R . 
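A minimal code rendering may help fix the structure of Definition 1 and the game dynamics just described. The sketch below is illustrative only: the component list in the comments reconstructs the tuple from the surrounding prose (the inline listing above is partially garbled), and all class, field and function names are invented for this summary rather than taken from the paper.

from dataclasses import dataclass
from typing import Any, Callable, Sequence, Tuple

@dataclass
class CIRLGame:
    # Reconstruction of the tuple in Definition 1:
    # S: world states; A_H, A_R: action sets for H and R; T: transition distribution;
    # Theta: static reward parameters (observed only by H); R: parameterized reward;
    # P0: distribution over the initial pair (s0, theta); gamma: discount in [0, 1].
    states: Sequence[Any]
    actions_h: Sequence[Any]
    actions_r: Sequence[Any]
    transition: Callable[[Any, Any, Any], Any]      # (s, a_h, a_r) -> sampled next state
    reward: Callable[[Any, Any, Any, Any], float]   # (s, a_h, a_r, theta) -> shared reward
    sample_initial: Callable[[], Tuple[Any, Any]]   # () -> (s0, theta), sampled from P0
    gamma: float = 0.95

def play_episode(game, policy_h, policy_r, horizon=10):
    """Roll out one episode. H's policy sees theta; R's policy does not."""
    s, theta = game.sample_initial()
    history, total, discount = [], 0.0, 1.0
    for _ in range(horizon):
        a_h = policy_h(history, s, theta)    # H conditions on the reward parameters
        a_r = policy_r(history, s)           # R conditions only on the public history
        r = game.reward(s, a_h, a_r, theta)  # identical payoff for both actors
        total += discount * r
        discount *= game.gamma
        history.append((s, a_h, a_r))        # both observe the state and each other's action
        s = game.transition(s, a_h, a_r)
    return total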
The optimal joint policy is the policy that maximizes value. The value of a state is the expected sum of discounted rewards under the initial distribution of reward parameters and world states. Remark 1. A key property of CIRL is that the human and the robot get rewards determined by the same reward function. This incentivizes the human to teach and the robot to learn without explicitly encoding these as objectives of the actors. \n Structural Results for Computing Optimal Policy Pairs The analogue in CIRL to computing an optimal policy for an MDP is the problem of computing an optimal policy pair. This is a pair of policies that maximizes the expected sum of discounted rewards. This is not the same as 'solving' a CIRL game, as a real world implementation of a CIRL agent must account for coordination problems and strategic uncertainty (Boutilier, 1999) . The optimal policy pair represents the best H and R can do if they can coordinate perfectly before H observes θ. Computing an optimal joint policy for a cooperative game is the solution to a decentralized-partially observed Markov decision process (Dec-POMDP). Unfortunately, Dec-POMDPs are NEXP-complete (Bernstein et al., 2000) so general Dec-POMDP algorithms have a computational complexity that is doubly exponential. Fortunately, CIRL games have special structure that reduces this complexity. Nayyar et al. (2013) shows that a Dec-POMDP can be reduced to a coordination-POMDP. The actor in this POMDP is a coordinator that observes all common observations and specifies a policy for each actor. These policies map each actor's private information to an action. The structure of a CIRL game implies that the private information is limited to H's initial observation of θ. This allows the reduction to a coordination-POMDP to preserve the size of the (hidden) state space, making the problem easier. Theorem 1. Let M be an arbitrary CIRL game with state space S and reward space Θ. There exists a (single-actor) POMDP M C with (hidden) state space S C such that |S C | = |S| • |Θ| and, for any policy pair in M , there is a policy in M C that achieves the same sum of discounted rewards. Theorem proofs can be found in the supplementary material. An immediate consequence of this result is that R's belief about θ is a sufficient statistic for optimal behavior. Corollary 1. Let M be a CIRL game. There exists an optimal policy pair (π H * , π R * ) that only depends on the current state and R's belief. Remark 2. In a general Dec-POMDP, the hidden state for the coordinator-POMDP includes each actor's history of observations. In CIRL, θ is the only private information so we get an exponential decrease in the complexity of the reduced problem. This allows one to apply general POMDP algorithms to compute optimal joint policies in CIRL. It is important to note that the reduced problem may still be very challenging. POMDPs are difficult in their own right and the reduced problem still has a much larger action space. That being said, this reduction is still useful in that it characterizes optimal joint policy computation for CIRL as significantly easier than Dec-POMDPs. Furthermore, this theorem can be used to justify approximate methods (e.g., iterated best response) that only depend on R's belief state. \n Apprenticeship Learning as a Subclass of CIRL Games A common paradigm for robot learning from humans is apprenticeship learning. In this paradigm, a human gives demonstrations to a robot of a sample task and the robot is asked to imitate it in a subsequent task. 
In what follows, we formulate apprenticeship learning as turn-based CIRL with a learning phase and a deployment phase. We characterize IRL as the best response (i.e., the policy that maximizes reward given a fixed policy for the other player) to a demonstration-by-expert policy for H. We also show that this policy is, in general, not part of an optimal joint policy and so IRL is generally a suboptimal approach to apprenticeship learning. Definition 2. (ACIRL) An apprenticeship cooperative inverse reinforcement learning (ACIRL) game is a turn-based CIRL game with two phases: a learning phase where the human and the robot take turns acting, and a deployment phase, where the robot acts independently. \n Example. Consider an example apprenticeship task where R needs to help H make office supplies. H and R can make paperclips and staples and the unobserved θ describe H's preference for paperclips vs staples. We model the problem as an ACIRL game in which the learning and deployment phase each consist of an individual action. The world state in this problem is a tuple (p s , q s , t) where p s and q s respectively represent the number of paperclips and staples H owns. t is the round number. An action is a tuple (p a , q a ) that produces p a paperclips and q a staples. The human can make 2 items total: A H = {(0, 2), (1, 1), (2, 0)}. The robot has different capabilities. It can make 50 units of each item or it can choose to make 90 of a single item: A R = {(0, 90), (50, 50), (90, 0)}. We let Θ = [0, 1] and define R so that θ indicates the relative preference between paperclips and staples:R(s, (p a , q a ); θ) = θp a + (1 − θ)q a . R's action is ignored when t = 0 and H's is ignored when t = 1. At t = 2, the game is over, so the game transitions to a sink state, (0, 0, 2). Deployment phase -maximize mean reward estimate. It is simplest to analyze the deployment phase first. R is the only actor in this phase so it get no more observations of its reward. We have shown that R's belief about θ is a sufficient statistic for the optimal policy. This belief about θ induces a distribution over MDPs. A straightforward extension of a result due to Ramachandran & Amir (2007) shows that R's optimal deployment policy maximizes reward for the mean reward function. Theorem 2. Let M be an ACIRL game. In the deployment phase, the optimal policy for R maximizes reward in the MDP induced by the mean θ from R's belief. In our example, suppose that π H selects (0, 2 ) if θ ∈ [0, 1 3 ), (1, 1) if θ ∈ [ 1 3 , 2 3 ] and (2, 0) otherwise. R begins with a uniform prior on θ so observing, e.g., a H = (0, 2) leads to a posterior distribution that is uniform on [0, 1 3 ). Theorem 2 shows that the optimal action maximizes reward for the mean θ so an optimal R behaves as though θ = 1 6 during the deployment phase. Learning phase -expert demonstrations are not optimal. A wide variety of apprenticeship learning approaches assume that demonstrations are given by an expert. We say that H satisfies the demonstration-by-expert (DBE) assumption in ACIRL if she greedily maximizes immediate reward on her turn. This is an 'expert' demonstration because it demonstrates a reward maximizing action but does not account for that action's impact on R's belief. We let π E represent the DBE policy. Theorem 2 enables us to characterize the best response for R when π H = π E : use IRL to compute the posterior over θ during the learning phase and then act to maximize reward under the mean θ in the deployment phase. 
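The deployment-phase arithmetic in the office-supply example can be reproduced in a few lines. The snippet below is a small illustration written for this summary, not the authors' code: with a uniform prior and the threshold policy stated above, observing H choose (0, 2) yields a posterior that is uniform on [0, 1/3), and acting on its mean θ = 1/6 leads R to make staples.

# Deployment-phase reasoning for the office-supply ACIRL example.
# theta weights paperclips; (1 - theta) weights staples.

H_ACTIONS = [(0, 2), (1, 1), (2, 0)]       # H can make two items in total
R_ACTIONS = [(0, 90), (50, 50), (90, 0)]   # R's capabilities

def reward(action, theta):
    paperclips, staples = action
    return theta * paperclips + (1 - theta) * staples

def posterior_interval(h_action):
    """Posterior support on theta after seeing H's action under the stated
    threshold policy: (0,2) on [0, 1/3), (1,1) on [1/3, 2/3], (2,0) otherwise.
    The prior is uniform, so the posterior is uniform on the matching interval."""
    return {(0, 2): (0.0, 1 / 3), (1, 1): (1 / 3, 2 / 3), (2, 0): (2 / 3, 1.0)}[h_action]

def robot_deployment_action(h_action):
    lo, hi = posterior_interval(h_action)
    mean_theta = (lo + hi) / 2             # Theorem 2: act on the posterior mean
    return max(R_ACTIONS, key=lambda a: reward(a, mean_theta)), mean_theta

if __name__ == "__main__":
    action, mean_theta = robot_deployment_action((0, 2))
    print(mean_theta)   # 1/6, matching the text
    print(action)       # (0, 90): with theta = 1/6, staples are strongly preferred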
We can also analyze the DBE assumption itself. In particular, we show that π E is not H's best response when π R is a best response to π E . Theorem 3. There exist ACIRL games where the best-response for H to π R violates the expert demonstrator assumption. In other words, if br(π) is the best response to π, then br(br(π E )) = π E . The supplementary material proves this theorem by computing the optimal equilibrium for our example. In that equilibrium, H selects (1, 1) if θ ∈ [ 41 92 , 51 92 ]. In contrast, π E only chooses (1, 1) if θ = 0.5. The change arises because there are situations (e.g., θ = 0.49) where the immediate loss of reward to H is worth the improvement in R's estimate of θ. Remark 3. We should expect experienced users of apprenticeship learning systems to present demonstrations optimized for fast learning rather than demonstrations that maximize reward. Crucially, the demonstrator is incentivized to deviate from R's assumptions. This has implications for the design and analysis of apprenticeship systems in robotics. Inaccurate assumptions about user behavior are notorious for exposing bugs in software systems (see, e.g., Leveson & Turner (1993) ). \n Generating Instructive Demonstrations Now, we consider the problem of computing H's best response when R uses IRL as a state estimator. For our toy example, we computed solutions exhaustively, for realistic problems we need a more efficient approach. Section 3.2 shows that this can be reduced to an POMDP where the state is a tuple of world state, reward parameters, and R's belief. While this is easier than solving a general Dec-POMDP, it is a computational challenge. If we restrict our attention to the case of linear reward functions we can develop an efficient algorithm to compute an approximate best response. Specifically, we consider the case where the reward for a state (s, θ) is defined as a linear combination of state features for some feature function φ : R(s, a H , a R ; θ) = φ(s) θ. Standard results from the IRL literature show that policies with the same expected feature counts have the same value (Abbeel & Ng, 2004) . Combined with Theorem 2, this implies that the optimal π R under the DBE assumption computes a policy that matches the observed feature counts from the learning phase. This suggests a simple approximation scheme. To compute a demonstration trajectory τ H , first compute the feature counts R would observe in expectation from the true θ and then select actions that maximize similarity to these target features. If φ θ are the expected feature counts induced by θ then this scheme amounts to the following decision rule: τ H ← argmax τ φ(τ ) θ − η||φ θ − φ(τ )|| 2 . (1) This rule selects a trajectory that trades off between the sum of rewards φ(τ ) θ and the feature dissimilarity ||φ θ − φ(τ )|| 2 . Note that this is generally distinct from the action selected by the demonstration-by-expert policy. The goal is to match the expected sum of features under a distribution of trajectories with the sum of features from a single trajectory. The correct measure of feature Lower numbers are better. Using the best response causes R to infer a better distribution over θ so it does a better job of maximizing reward. Right: The regret of the instructive demonstration policy as a function of how optimal R expects H to be. λ = 0 corresponds to a robot that expects purely random behavior and λ = ∞ corresponds to a robot that expects optimal behavior. 
Regret is minimized for an intermediate value of λ: if λ is too small, then R learns nothing from its observations; if λ is too large, then R expects many values of θ to lead to the same trajectory so H has no way to differentiate those reward functions. similarity is regret: the difference between the reward R would collect if it knew the true θ and the reward R actually collects using the inferred θ. Computing this similarity is expensive, so we use an 2 norm as a proxy measure of similarity. \n Experiments \n Cooperative Learning for Mobile Robot Navigation Our experimental domain is a 2D navigation problem on a discrete grid. In the learning phase of the game, H teleoperates a trajectory while R observes. In the deployment phase, R is placed in a random state and given control of the robot. We use a finite horizon H, and let the first H 2 timesteps be the learning phase. There are N φ state features defined as radial basis functions where the centers are common knowledge. Rewards are linear in these features and θ. The initial world state is in the middle of the map. We use a uniform distribution on [−1, 1] N φ for the prior on θ. Actions move in one of the four cardinal directions {N, S, E, W } and there is an additional no-op ∅ that each actor executes deterministically on the other agent's turn. Figure 1 shows an example comparison between demonstration-by-expert and the approximate best response policy in Section 3.4. The leftmost image is the ground truth reward function. Next to it are demonstration trajectories produce by these two policies. Each path is superimposed on the maximum a-posteriori reward function the robot infers from the demonstration. We can see that the demonstration-by-expert policy immediately goes to the highest reward and stays there. In contrast, the best response policy moves to both areas of high reward. The robot reward function the robot infers from the best response demonstration is much more representative of the true reward function, when compared with the reward function it infers from demonstration-by-expert. \n Demonstration-by-Expert vs Best Responder Hypothesis. When R plays an IRL algorithm that matches features, H prefers the best response policy from Section 3.4 to π E : the best response policy will significantly outperform the DBE policy. Manipulated Variables. Our experiment consists of 2 factors: H-policy and num-features. We make the assumption that R uses an IRL algorithm to compute its estimate of θ during learning and maximizes reward under this estimate during deployment. We use Maximum-Entropy IRL (Ziebart et al., 2008) to implement R's policy. H-policy varies H's strategy π H and has two levels: demonstration-by-expert (π E ) and best-responder (br). In the π E level H maximizes reward during the demonstration. In the br level H uses the approximate algorithm from Section 3.4 to compute an approximate best response to π R . The trade-off between reward and communication η is set by cross-validation before the game begins. The num-features factor varies the dimensionality of φacross two levels: 3 features and 10 features. We do this to test whether and how the difference between experts and best-responders is affected by dimensionality. We use a factorial design that leads to 4 distinct conditions. We test each condition against a random sample of N = 500 different reward parameters. We use a within-subjects design with respect to the the H-policy factor so the same reward parameters are tested for π E and br. Dependent Measures. 
We use the regret with respect to a fully-observed setting where the robot knows the ground truth θ as a measure of performance. We let θ̂ be the robot's estimate of the reward parameters and let θ GT be the ground truth reward parameters. The primary measure is the regret of R's policy: the difference between the value of the policy that maximizes the inferred reward θ̂ and the value of the policy that maximizes the true reward θ GT . We also use two secondary measures. The first is the KL-divergence between the maximum-entropy trajectory distribution induced by θ̂ and the maximum-entropy trajectory distribution induced by θ GT . Finally, we use the ℓ2-norm between the vector of rewards defined by θ̂ and the vector induced by θ GT . Results. There was relatively little correlation between the measures (Cronbach's α of .47), so we ran a factorial repeated measures ANOVA for each measure. Across all measures, we found a significant effect for H-policy, with br outperforming π E on all measures as we hypothesized (all with F > 962, p < .0001). We did find an interaction effect with num-features for KL-divergence and the ℓ2-norm of the reward vector, but post-hoc Tukey HSD showed br to always outperform π E . The interaction effect arises because the gap between the two levels of H-policy is larger with fewer reward parameters; we interpret this as evidence that num-features = 3 is an easier teaching problem for H. Figure 2 (Left, Middle) shows the dependent measures from our experiment. \n Varying R's Expectations Maximum-Entropy IRL includes a free parameter λ that controls how optimal R expects H to behave. If λ = 0, R will update its belief as if H's observed behavior is independent of her preferences θ. If λ = ∞, R will update its belief as if H's behavior is exactly optimal. We ran a follow-up experiment to determine how varying λ changes the regret of the br policy. Changing λ changes the forward model in R's belief update: the mapping R hypothesizes between a given reward parameter θ and the observed feature counts φ θ . This mapping is many-to-one for extreme values of λ. λ ≈ 0 means that all values of θ lead to the same expected feature counts because trajectories are chosen uniformly at random. Alternatively, λ >> 0 means that almost all probability mass falls on the optimal trajectory and many values of θ will lead to the same optimal trajectory. This suggests that it is easier for H to differentiate different values of θ if R assumes she is noisily optimal, but only up until a maximum noise level. Figure 2 plots regret as a function of λ and supports this analysis: H has less regret for intermediate values of λ.
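The effect of λ described above is easy to see in a toy computation. In the sketch below (written for this summary, with invented trajectory returns rather than the navigation domain used in the experiment), a Boltzmann model of H assigns near-uniform behavior to every θ when λ ≈ 0 and assigns nearly all probability to the same optimal trajectory for many different θ when λ is very large, so intermediate λ is what lets observations discriminate between reward hypotheses.

import math

def boltzmann_likelihoods(trajectory_returns, lam):
    """P(trajectory | theta) under a Boltzmann model with rationality lam:
    probability proportional to exp(lam * return of the trajectory under theta)."""
    weights = [math.exp(lam * r) for r in trajectory_returns]
    z = sum(weights)
    return [w / z for w in weights]

# Toy example: three candidate trajectories, two hypotheses about theta.
# Returns of each trajectory under theta_1 and theta_2 (invented numbers).
returns_theta_1 = [1.0, 0.2, 0.1]
returns_theta_2 = [0.9, 0.8, 0.1]

for lam in [0.0, 2.0, 50.0]:
    p1 = boltzmann_likelihoods(returns_theta_1, lam)
    p2 = boltzmann_likelihoods(returns_theta_2, lam)
    # With lam = 0 the two hypotheses predict identical (uniform) behavior; with very
    # large lam both put almost all mass on trajectory 0, so observing that trajectory
    # fails to distinguish them. Intermediate lam separates the two hypotheses.
    print(lam, [round(x, 3) for x in p1], [round(x, 3) for x in p2])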
This is clearly infeasible in reality, although it may nonetheless be helpful in training humans to be better teachers. An important avenue for future research will be to consider the coordination problem: the process by which two independent actors arrive at policies that are mutual best responses. Returning to Wiener's warning, we believe that the best solution is not to put a specific purpose into the machine at all, but instead to design machines that provably converge to the right purpose as they go along. Figure 2 : 2 Figure 2: Left, Middle: Comparison of 'expert' demonstration (π E ) with 'instructive' demonstration (br). \n\t\t\t A coordination problem of the type described in Boutilier (1999) arises if there are multiple optimal policy pairs; we defer this issue to future work.", "date_published": "n/a", "url": "n/a", "filename": "NIPS-2016-cooperative-inverse-reinforcement-learning-Paper.tei.xml", "abstract": "For an autonomous system to be helpful to humans and to pose no unwarranted risks, it needs to align its values with those of the humans in its environment in such a way that its actions contribute to the maximization of value for the humans. We propose a formal definition of the value alignment problem as cooperative inverse reinforcement learning (CIRL). A CIRL problem is a cooperative, partialinformation game with two agents, human and robot; both are rewarded according to the human's reward function, but the robot does not initially know what this is. In contrast to classical IRL, where the human is assumed to act optimally in isolation, optimal CIRL solutions produce behaviors such as active teaching, active learning, and communicative actions that are more effective in achieving value alignment. We show that computing optimal joint policies in CIRL games can be reduced to solving a POMDP, prove that optimality in isolation is suboptimal in CIRL, and derive an approximate CIRL algorithm.", "id": "eb1967caa7cbfcfd329b13658330f7ec"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Stuart Russell"], "title": "Learning agents for uncertain environments (extended abstract)", "text": "In recent years, reinforcement learning (also called neurodynamic programming) has made rapid progress as an approachfor building agents automatically (Sutton, 1988; Kaelbling et al., 1996; Bertsekas & Tsitsiklis, 1996) . The basic idea is that the performance measure is made available to the agent in the form of a rewardfunction specifying the reward for each state that the agent passes through. The performance measure is then the sum of the rewards obtained. For example, when a bumble bee forages, the reward function at each time step might be some combination of the distance flown (weighted negatively) and the nectar ingested. Reinforcement learning (RL) methods are essentially online algorithmd for solving Markovdecisionprocesses (MDPs). An MDP is defined by the reward function and a model, that is, the state transition probabilities conditioned on each possible action. RL algorithms can be model-based, where the agent learns a model, or model-free-e.g., Q-learning cite-Watkins: 1989, which learns just a function Q(s, a) specifying the long-term value of taking action a in state s and acting optimally thereafter. Despite their successes, RL methods have been restricted largely tofully observable MDPs, in which the sensory input at each state is sufficient to identify the state. Obviously, in the real world, we must often deal with partially observable MDPs (POMDPs). 
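As a reference point for the model-free case mentioned above, a tabular Q-learning update can be written in a few lines. The sketch below applies to the fully observable setting the text describes; the environment interface (reset, step, actions) is an assumed generic one for illustration, not a specific library's API.

import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning for a fully observable MDP.

    env is assumed to expose reset() -> state, step(action) -> (state, reward, done),
    and a list env.actions; this interface is illustrative only.
    """
    Q = defaultdict(float)  # Q[(state, action)] = estimated long-term value

    def greedy(state):
        return max(env.actions, key=lambda a: Q[(state, a)])

    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # Epsilon-greedy behavior policy.
            a = random.choice(env.actions) if random.random() < epsilon else greedy(s)
            s_next, r, done = env.step(a)
            # Move Q(s, a) toward the reward plus the discounted value of acting
            # optimally thereafter, matching the text's description of Q(s, a).
            target = r + (0.0 if done else gamma * Q[(s_next, greedy(s_next))])
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s_next
    return Q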
Astrom (1965) proved that optimal decisions in POMDPs depend on the belief state b at each point in time, i.e., the posterior probability distribution over all possible actual states, given all evidence to date. The functions V and Q then become functions of b instead of s. Parr and Russell (1995) describe a very simple POMDP RL algorithm using an explicit representation of b as a vector of probabilities, and McCallum (1993) shows a way to approximate the belief state using recent percept sequences. Neither approach is likely to scale up to situations with large numbers of state variables and long-term temporal dependencies. What is needed is a way of representing the model compactly and updating the belief state efficiently given the model and each new observation. Dynamic Bayesian networks (Dean & Kanazawa, 1989) seem to have some of the required properties; in particular, they have significant advantages over other approaches such as Kalman filters and hidden Markov models. Our baseline architecture, shown in Figure 1, uses DBNs to represent and update the belief state as new sensor information arrives. Given a representation for b, the reward signal is used to learn a Q-function represented by some "black-box" function approximator such as a neural network. \n Figure 1: A baseline architecture for learning agents in uncertain environments. \n Provided we can handle hybrid (discrete+continuous) DBNs, and provided we have a learning algorithm that can construct an approximately correct DBN model from scratch, then this baseline architecture has the capacity, in principle, to be thrown into more or less any environment and to learn to behave reasonably.* The talk will cover a variety of research topics arising from this proposal: Parametric learning in DBNs (Binder, Koller, Russell, & Kanazawa, 1997a). Structural learning in DBNs (Friedman, Murphy, & Russell, 1998). Approximate inference in DBNs (Kanazawa, Koller, & Russell, 1995; Boyen & Koller, 1998). Space-efficient inference in DBNs (Binder, Murphy, & Russell, 1997b). Reinforcement learning with DBN models-that is, how to do Q-learning with the belief state information provided by the DBN. Some tentative ideas will be presented but as yet there are no convincing solutions. Scaling up the environment will inevitably overtax the resources of the baseline architecture. There are several obvious directions for improvement, including hierarchical and first-order models, hierarchical representations of behaviour (Parr & Russell, 1998), and model-based lookahead methods for decision making. Which of these is important in any particular class of environments can only be ascertained by experiment. \n Inverse reinforcement learning Reinforcement learning is a powerful method for adaptive control in real tasks, so it is natural to seek analogous mechanisms in nature. Connections have been made between reinforcement learning and operant conditioning models of animal learning (see, e.g., Schmajuk & Zanutto, 1997; Touretzky & Saksida, 1997). There is also neurophysiological evidence that reinforcement learning occurs in bee foraging (Montague et al., 1995) and in songbird vocalization (Doya & Sejnowski, 1995). In this work, it is generally assumed that the reward function is fixed and known. For example, in experiments on bees it is assumed to be the rate of nectar ingestion: Montague et al.
(1995) cite evidence of a \"neuron with widespread projections to odour processing regions of the honeybee brain 'We say \"more or less\" because full generality require dealing with game-theoretic issues requiring stochastic decision making. whose activity represents the reward value of gustatory stimuli.\" It seems clear, however, that in examining animal and human behaviour we must consider the reward function as an unknown to be ascertained. The reasons for this are straightforward: l The specification of a given reward function is an empirical hypothesis and may turn out to be wrong. For example, it was assumed initially that horses' gait selection for a given speed was determined by energetic economy (Hoyt & Taylor, 1981) ; this turns out not to be the case (Farley & Taylor, 1991) . l The parameters of a multiattribute reward function can surely not be determined a priori; e.g., for running, attributes might be speed, efficiency, stability against perturbations, wear and tear on muscles, tendons, and bones, etc. How are these to be weighted and combined? Therefore, to model natural learning using reinforcement learning ideas, we must first solve the following computational task, which we call inverse reinforcement learning: Given 1) measurements of an agent's behaviour over time, in a variety of circumstances, 2) measurements of the sensory inputs to that agent; 3) a model of the physical environment (including the agent's body). Determine the reward function that the agent is optimizing. Given an assumption of optimization, this computational task is well-defined. Notice that is the dual of unsupervised reinforcement learning, where the task is to determine optimal behaviour given the reward inputs. To our knowledge, this computational task has not been studied in any generality in computer science, control theory, psychology, or biology. The closest work is in economics, where the task of multiattribute utility assessment has been studiedin depth-that is, how does a person actually combine the various attributes of each available choice when making a decision. The theory is well-developed (Keeney & Raiffa, 1976) , and the applications numerous. However, this field studies only one-shot decisions where a single action is taken and the outcome is immediate. The sequential case was not considered until a seminal paper by Sargent (1978) tried to ascertain the effective hiring cost for labor by examining a firm's hiring behaviour over time, assuming it to be rational. In the last decade, the area of structural estimation of Markov decision processes has grown rapidly in econometrics (Rust, 1994) . Many of the basic results carry over to our setting, although virtually nothing has been done on computational aspects, experimentation, or control-type applications. The open research problems are many: What are efficient algorithms for solving the inverse reinforcement learning problem? What is its computational complexity? Are there closed-form solutions for some parametric forms? Under what circumstances can we determine the existence of a consistent reward function? To what extent is the reward function uniquely recoverable? What effect do sensor and process noise have on robustness of the determination? What are appropriate error metrics for fitting? If behaviour is strongly inconsistent with optimality, can we identify \"locally consistent\" reward functions for specific regions in state space? Can we determine the reward function by observation during rather than after learning? 
How much observation is required to determine an estimated reward function that is within E of the true reward function? How can experiments be designed to maximize the identifiability of the reward function? Considering the design of possible algorithms, one can take maximum-likelihood approach to fit a parametric form for the reward functionas is commonly done in econometrics. That is, one defines a function L,(w)(B), the likelihood of observing behaviour B if the true reward function is r(w). From this, one can compute dWdw. One important question will be how to compute this gradient efficiently; presumably, it can be done in an obvious way by carrying the differential operator through the optimization algorithm for the behaviour. More elegant closed-form solutions may exist in special cases (e.g., linear-quadratic regulators). One may also be able to show that in some cases (e.g., linear reward functions) a globally optimal estimate can always be found. The solution of inverse reinforcement learning problems may also be an effective way to learn from observing experts. For tasks such as walking, diving, and driving, the designer of an artificial system may have only an intuitive idea of the appropriate reward function to be supplied to an RL algorithm in order to achieve \"desirable\" behavior. Instead of learning direct control functions from observation of experts (as in Pomerleau's ALVINN driging system), it may be better to solve the inverse reinforcement learning problem. The reward function should usually be a simple monotonic function of the current sensory inputs, and thus may be much simpler than the direct decision mapping itself. That is, the most compact and hence robustly learnable representation of expert behavior may be the reward function.", "date_published": "n/a", "url": "n/a", "filename": "279943.279964.tei.xml", "abstract": "This talk proposes a very simple \"baseline architecture\" for a learning agent that can handle stochastic, partially observable environments. The architecture uses reinforcement learning together with a method for representing temporal processes as graphical models. I will discuss methods for leaming the parameters and structure of such representations from sensory inputs, and for computing posterior probabilities. Some open problems remain before we can try out the complete agent; more arise when we consider scaling up. A second theme of the talk will be whether reinforcement learning can provide a good model of animal and human learning. To answer this question, we must do inverse reinforcement learning: given the observed behaviour, what reward signal, if any, is being optimized? This seems to be a very interesting problem for the COLT, UAI, and ML communities, and has been addressed in econometrics under the heading of structural estimation of Markov decision processes. 1 Learning in uncertain environments AI is about the construction of intelligent agents, i.e., systems that perceive and act effectively (according to some performance measure) in an environment. I have argued elsewhere Russell and Norvig (1995) that most AI research has focused on environments that are static, deterministic, discrete, and fully observable. What is to be done when, as in the real world, the environment is dynamic, stochastic, continuous, and partially observable? 
'This paper draws on a variety of research efforts supported", "id": "62e18d17a8eee37351339241597c7d3f"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Nick Bostrom"], "title": "The Vulnerable World Hypothesis", "text": "Is there a black ball in the urn of possible inventions? One way of looking at human creativity is as a process of pulling balls out of a giant urn. 1 The balls represent possible ideas, discoveries, technological inventions. Over the course of history, we have extracted a great many ballsmostly white (beneficial) but also various shades of gray (moderately harmful ones and mixed blessings). The cumulative effect on the human condition has so far been overwhelmingly positive, and may be much better still in the future (Bostrom, 2008) . The global population has grown about three orders of magnitude over the last ten thousand years, and in the last two centuries per capita income, standards of living, and life expectancy have also risen. 2 What we haven't extracted, so far, is a black ball: a technology that invariably or by default destroys the civilization that invents it. The reason is not that we have been particularly careful or wise in our technology policy. We have just been lucky. It does not appear that any human civilization has been destroyedas opposed to transformedby its own inventions. 3 We do have examples of civilizations being destroyed by inventions made elsewhere. For example, the European inventions that enabled transoceanic travel and force projection could be regarded as a black-ball event for the indigenous populations of the Americas, Australia, Tasmania, and some other places. The extinction of archaic hominid populations, such as the Neanderthals and the Denisovans, was probably facilitated by the technological superiority of Homo sapiens. But thus far, it seems, we have seen no sufficiently auto-destructive invention to count as a black ball for humanity. 4 What if there is a black ball in the urn? If scientific and technological research continues, we will eventually reach it and pull it out. Our civilization has a considerable ability to pick up balls, but no ability to put them back into the urn. We can invent but we cannot un-invent. Our strategy is to hope that there is no black ball. This paper develops some concepts that can help us think about the possibility of a technological black ball, and the different forms that such a phenomenon could take. We also discuss some implications for policy from a global perspective, particularly with respect to how one should view developments in mass surveillance and moves towards more effectual global governance or a more unipolar world order. These implications by no means settle questions about the desirability of changes in those macrostrategic variablesfor there indeed are other strongly relevant factors, not covered here, which would need to be added to the balance. Yet they form an important and under-appreciated set of considerations that should be taken into account in future debates on these issues. Before getting to the more conceptual parts of the paper, it will be useful to paint a more concrete picture of what a technological black ball could be like. The most obvious kind is a technology that would make it very easy to unleash an enormously powerful destructive force. Nuclear explosions are the most obviously destructive force we have mastered. So let us consider what would have happened if it had been very easy to unleash this force. 
\n A thought experiment: easy nukes On the morning of 12 September 1933, Leo Szilard was reading the newspaper when he came upon a report of an address recently delivered by the distinguished Lord Rutherford, now often considered the father of nuclear physics (Rhodes, 1986) . In his speech, Rutherford had dismissed the idea of extracting useful energy from nuclear reactions as 'moonshine'. This claim so annoyed Szilard that he went out for a walk. During the walk, he got the idea of a nuclear chain reactionthe basis for both nuclear reactors and nuclear bombs. Later investigations showed that making an atomic weapon requires several kilograms of plutonium or highly enriched uranium, both of which are very difficult and expensive to produce. However, suppose it had turned out otherwise: that there had been some really easy way to unleash the energy of the atomsay, by sending an electric current through a metal object placed between two sheets of glass. So let us consider a counterfactual history in which Szilard invents nuclear fission and realizes that a nuclear bomb could be made with a piece of glass, a metal object, and a battery arranged in a particular configuration. What happens next? Szilard becomes gravely concerned. He sees that his discovery must be kept secret at all costs. But how? His insight is bound to occur to others. He could talk to a few of his physicist friends, the ones most likely to stumble upon the idea, and try to persuade them not to publish anything on nuclear chain reactions or on any of the reasoning steps leading up to the dangerous discovery. (That is what Szilard did in actual history.) Here Szilard faces a dilemma: either he doesn't explain the dangerous discovery, but then he will not be effective in persuading many of his colleagues to stop publishing; or he tells them the reason for his concern, but then he spreads the dangerous knowledge further. Either way he is fighting a losing battle. The general advance of scientific knowledge will eventually make the dangerous insight more accessible. Soon, figuring out how to initiate a nuclear chain reaction with pieces of metal, glass, and electricity will no longer take genius but will be within reach of any STEM student with an inventive mindset. Let us roll the tape a little further. The situation looks hopeless, but Szilard does not give up. He decides to take a friend into his confidence, a friend who is also the world's most famous scientist -Albert Einstein. He successfully persuades Einstein of the danger (again following actual history). Now, Szilard has the support of a man who can get him a hearing with any government. The two write a letter to President Franklin D. Roosevelt. After some committee wranglings and report-writing, the top levels of the US government are eventually sufficiently convinced to be ready to take serious action. What action can the United States take? Let us first consider what actually happened (Rhodes, 1986) . What the US government did, after having digested the information provided by Einstein and Szilard, and after having received some further nudging from the British who were also looking into the matter, was to launch the Manhattan Project in order to weaponize nuclear fission as quickly as possible. As soon as the bomb was ready, the US Air Force used it to destroy Japanese population centers. 
Many of the Manhattan scientists had justified their participation by pointing to the mortal danger that would arise if Nazi Germany got the bomb first; but they continued working on the project after Germany was defeated. 5 Szilard advocated unsuccessfully for demonstrating 'the gadget' over an unpopulated area rather than in a city (Franck et al., 1945) . After the war ended, many of the scientists favored the international control of atomic energy and became active in the nuclear disarmament movement; but their views carried little weight, as nuclear policy had been taken out of their hands. Four years later, the Soviet Union detonated its own atomic bomb. The Soviet effort was aided by spies in the Manhattan Project, yet even without espionage it would have succeeded within another year or two (Holloway, 1994) . The Cold War followed, which at its peak saw 70,000 nuclear warheads ready to unleash global destruction at a moment's notice, with a trembling finger hovering over the 'red button' on either side (Norris and Kristensen, 2010) . 6 Fortunately for human civilization, after the destruction of Hiroshima and Nagasaki, no other atomic bomb has been detonated in anger. Seventy-three years later, partly thanks to international treaties and anti-proliferation efforts, only nine states possess nuclear weapons. No non-state actor is believed ever to have possessed nuclear weapons. 7 But how would things have played out if there had been an easy way to make nukes? Maybe Szilard and Einstein could persuade the US government to ban all research in nuclear physics (outside high-security government facilities)? Such a ban on basic science would be subjected to enormous legal and political challengesthe more so as the reason for the ban could not be publicly disclosed in any detail without creating an unacceptable information hazard. 8 Let us suppose, however, that President Roosevelt could somehow mobilize enough political support to drive through a ban, and that the US Supreme Court could somehow find a way of regarding it as constitutionally valid. We then confront an array of formidable practical difficulties. All university physics departments would have to be closed, and security checks initiated. A large number of faculty and students would be forced out. Intense speculations would swirl around the reason for all these heavy-handed measures. Groups of physics PhD students and faculty banned from their research field would sit around and speculate about what the secret danger might be. Some of them would figure it out. And among those who figured it out, some would feel compelled to use the knowledge to impress their colleagues; and those colleagues would want to tell yet others, to show they were in the know. Alternatively, somebody who opposed the ban would unilaterally decide to publish the secret, maybe in order to support their view that the ban is ineffective or that the benefits of publication outweigh the risks. 9 10 Careless or disgruntled employees at the government labs would eventually also let slip information, and spies would carry the secret to foreign capitals. Even if, by some miracle, the secret never leaked in the United States, scientists in other countries would independently discover it, thereby multiplying the sources from which it could spread. Sooner or laterprobably soonerthe secret would be a secret no more. 
In the present age, when one can publish instantaneously and anonymously on the Internet, it would be even more difficult to limit the spread of scientific secrets (Cf. Greenberg, 2012; Swire, 2015) . An alternative approach would be to eliminate all glass, metal, or sources of electrical current (save perhaps in a few highly guarded military depots). Given the ubiquity of these materials, such an undertaking would be extremely daunting. Securing political support for such measures would be no easier than shutting down physics education. However, after mushroom clouds had risen over a few cities, the political will to make the attempt could probably be mustered. Metal use is almost synonymous with civilization, and would not be a realistic target for elimination. Glass production could be banned, and existing glass panes confiscated; but pieces of glass would remain scattered across the landscape for a long time. Batteries and magnets could be seized, though some people would have stashed away these materials before they could be collected by the authorities. Many cities would be destroyed by nihilists, extortionists, revanchists, or even folk who just want to 'see what would happen '. 11 People would flee urban areas. In the end, many places would be destroyed by nuclear fallout, cities would be abandoned, there would be no use of electricity or glass. Possession of proscribed materials, or equipment that could be used to make them, would be harshly punished, such as by on-the-spot execution. To enforce these provisions, communities would be subjected to strict surveillanceinformant networks incentivized by big rewards, frequent police raids into private quarters, continuous digital monitoring, and so forth. That is the optimistic scenario. In a more pessimistic scenario, law and order would break down entirely and societies might split into factions waging civil wars with nuclear weapons, producing famine and pestilence. The disintegration might end only when society has been so reduced that nobody is able any longer to put together a bomb and a delay detonator from stored materials or the scrap of city ruins. Even then, the dangerous insightonce its importance had been so spectacularly demonstratedwould be remembered and passed down the generations. If civilization began to rise from the ashes, the knowledge would lie in wait, ready to pounce as soon as people learned once again how to make sheet glass and electric current generators. And even if the knowledge were forgotten, it would be rediscovered once nuclear physics research was resumed. We were lucky that making nukes turned out to be hard. \n The vulnerable world hypothesis We now know that one cannot trigger a nuclear explosion with just a sheet of glass, some metal, and a battery. Making an atomic bomb requires several kilograms of fissile material, which is difficult to produce. We pulled out a gray ball that time. Yet with each act of invention, we reach into the urn anew. Let us introduce the hypothesis that the urn of creativity contains at least one black ball. We can refer to this as the vulnerable world hypothesis (VWH). Intuitively, the hypothesis is that there is some level of technology at which civilization almost certainly gets destroyed unless quite extraordinary and historically unprecedented degrees of preventive policing and/or global governance are implemented. 
More precisely:

VWH: If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition.

By the 'semi-anarchic default condition' I mean a world order characterized by three features: 12

1. Limited capacity for preventive policing. States do not have sufficiently reliable means of real-time surveillance and interception to make it virtually impossible for any individual or small group within their territory to carry out illegal actions, particularly actions that are very strongly disfavored by > 99 per cent of the population.

2. Limited capacity for global governance. There is no reliable mechanism for solving global coordination problems and protecting global commons, particularly in high-stakes situations where vital national security interests are involved.

3. Diverse motivations. There is a wide and recognizably human distribution of motives represented by a large population of actors (at both the individual and state level); in particular, there are many actors motivated, to a substantial degree, by perceived self-interest (e.g. money, power, status, comfort and convenience) and there are some actors ('the apocalyptic residual') who would act in ways that destroy civilization even at high cost to themselves.

The term 'devastation of civilization' in the above definition could be interpreted in various ways, yielding different versions of VWH. For example, one could define an existential-risk vulnerable world hypothesis (x-VWH), which would state that at some level of technology, by default, an existential catastrophe occurs, involving the extinction of Earth-originating intelligent life or the permanent blighting of our future potential for realizing value. However, here we will set the bar lower. A key concern in the present context is whether the consequences of civilization continuing in the current semi-anarchic default condition are catastrophic enough to outweigh reasonable objections to the drastic developments that would be required to exit this condition. If this is the criterion, then a threshold short of human extinction or existential catastrophe would appear sufficient. For instance, even those who are highly suspicious of government surveillance would presumably favour a large increase in such surveillance if it were truly necessary to prevent occasional region-wide destruction. Similarly, individuals who value living in a sovereign state may reasonably prefer to live under a world government given the assumption that the alternative would entail something as terrible as a nuclear holocaust. Therefore, we stipulate that the term 'civilizational devastation' in VWH refers (except where otherwise specified) to any destructive event that is at least as bad as the death of 15 per cent of the world population or a reduction of global GDP by > 50 per cent lasting for more than a decade. 13

It is not a primary purpose of this paper to argue that VWH is true. (I regard that as an open question, though it would seem to me unreasonable, given the available evidence, to be at all confident that VWH is false.) Instead, the chief contribution claimed here is that VWH, along with related concepts and explanations, is useful in helping us surface important considerations and possibilities regarding humanity's macrostrategic situation.
But those considerations and possibilities need to be further analyzed, and combined with other considerations that lie outside the scope of this paper, before they could deliver any definitive policy implications. A few more clarifications before we move on. This paper uses the word 'technology' in its broadest sense. Thus, in principle, we count not only machines and physical devices but also other kinds of instrumentally efficacious templates and procedures (including scientific ideas, institutional designs, organizational techniques, ideologies, concepts, and memes) as constituting potential technological black balls. 14

We can speak of vulnerabilities opening and closing. In the 'easy nukes' scenario, the period of vulnerability begins when the easy way of producing nuclear explosions is discovered. It ends when some level of technology is attained that makes it reasonably affordable to stop nuclear explosions from causing unacceptable damage, or that again makes it infeasible to produce nuclear explosions (because of technological regress). 15 If no protective technology is possible (as, for example, may be the case with nuclear weapons) and technological regress does not occur, then the world becomes permanently vulnerable. We can also speak of the world being stabilized (with respect to some vulnerability) if the semi-anarchic default condition is exited in such a way as to prevent the vulnerability from leading to an actual catastrophe. The ways in which the semi-anarchic default condition would have to be altered in order to achieve stabilization depend on the specifics of the vulnerability in question. In a later section, we will discuss possible means by which the world could be stabilized. For now, we simply note that VWH does not imply that civilization is doomed.

Typology of vulnerabilities

We can identify four types of civilizational vulnerability.

Type-1 ('easy nukes')

The first type is one where, as in the 'easy nukes' scenario, it becomes too easy for individuals or small groups to cause mass destruction:

Type-1 vulnerability: There is some technology which is so destructive and so easy to use that, given the semi-anarchic default condition, the actions of actors in the apocalyptic residual make civilizational devastation extremely likely.

Note that in determining whether a scenario presents a Type-1 vulnerability, there is an inverse relationship between the ease with which it becomes possible to cause an incident and the destructiveness of the incident. The greater the destructiveness of a single incident, the less easy it needs to be to cause such an incident in order for us to diagnose the presence of a Type-1 vulnerability. Thus, consider a 'very easy nukes' scenario, in which any halfwit can create an easily portable thermonuclear weapon at the kitchen sink over the course of an afternoon: this would definitely qualify as a civilizational vulnerability. Contrast this with a 'moderately easy nukes' scenario, in which it takes a five-person team of semi-skilled individuals toiling for an entire year to produce a single bulky few-kiloton device: that might not quite rise to the level of a civilizational vulnerability. It seems possible, in the 'moderately easy nukes' scenario, that the great majority of cities would escape destruction, although the threat posed by a well-resourced terrorist organization, such as Aum Shinrikyo anno 1995 or Al-Qaeda anno 2001, would increase substantially.
However, consider yet another scenario, 'moderately easy bio-doom', in which again it requires a semi-skilled five-person team working for a year to put the black-ball technology into effect, except that this time it is a biological agent, a single point release of which is sufficient to kill billions. In 'moderately easy bio-doom', the threshold for a Type-1 vulnerability would be reached. If destroying civilization required only that a single group succeed with a task at the moderately easy level, civilization would probably be destroyed within a few years in the semi-anarchic default condition. Indeed, both Aum Shinrikyo and Al-Qaeda sought to obtain nuclear and biological weapons, and would likely have chosen to use them (see e.g. Danzig et al., 2011; Olson, 1999; Mowatt-Larssen and Allison, 2010). So a Type-1 vulnerability exists if it is either extremely easy to cause a moderate amount of harm or moderately easy to cause an extreme amount of harm. 16

The reason why a black-ball technology that enables only moderate amounts of harm per incident could count as a Type-1 vulnerability is that, if the technology is sufficiently easy to use, a large number of such incidents would be almost certain to occur. Take the scenario where it is easy for an average individual to make a metropolis-busting H-bomb. This is not necessarily a scenario in which a single individual could devastate civilization. Building hundreds of bombs and transporting them to hundreds of cities without getting caught would still be a formidable endeavor even if making a single bomb were fairly easy. The 'easy nukes' scenario nevertheless presents a civilizational vulnerability because it is plausible that there would in fact be hundreds of individuals who would each destroy at least one city under those circumstances. That this is so almost follows from the law of large numbers combined with the plausible assumption that for any randomly selected person there is some small but appreciable chance that they would be motivated to trigger this kind of destruction, whether out of ideological hatred, nihilistic destructiveness, revenge for perceived injustices, as part of some extortion plot, or because of delusions or mental illness, or perhaps even just to see what would happen. Given the diversity of human character and circumstance, for any ever so imprudent, immoral, or self-defeating action, there is some residual fraction of humans who would choose to take that action. This is especially plausible if the action in question represents a culturally salient affordance, as it everywhere would after one such nuke attack had taken place. In other words, 'easy nukes' is an illustration of a vulnerable world because it looks like the apocalyptic residual has a large enough intersection with the set of empowered actors that one would expect a civilization-devastating amount of destruction to result.

Type-2a ('safe first strike')

A technology that 'democratizes' mass destruction is not the only kind of black ball that could be hoisted out of the urn. Another kind would be a technology that strongly incentivizes powerful actors to use their powers to cause mass destruction. Again we can turn to nuclear history for illustration. After the invention of the atomic bomb and a short-lived American nuclear monopoly, an arms race ensued between the US and the USSR. The rival superpowers amassed staggering arsenals, topping out at 70,000 nuclear warheads in 1986, more than enough to devastate civilization (Norris and Kristensen, 2010).
While public awareness of the perils of the Cold War seems to have faded since its peaceful conclusion in 1991, the academic community (benefiting from the opening of formerly classified archives and the testimony of retired policy makers, officers, and analysts) has uncovered a disconcerting array of practices and incidents which seem to have repeatedly brought the world to the brink. 17 Just how close we came remains a topic of dispute. Some scholars have argued that it was only thanks to a good deal of luck that nuclear holocaust was avoided. 18

Whether surviving the Cold War required much luck or just a little, we can easily imagine a counterfactual in which the odds of avoiding a nuclear conflagration would be substantially worse. This holds even if we assume that nuclear weapons can be produced only by large technologically advanced states (thus distinguishing the case from the Type-1 vulnerability of 'easy nukes'). The counterfactual could involve changes in the technological possibility frontier that would have made the arms race less stable. For example, it is widely believed among nuclear strategists that the development of a reasonably secure second-strike capability by both superpowers by the mid-1960s created the conditions for 'strategic stability' (Colby and Gerson, 2013). Prior to this period, American war plans reflected a much greater inclination, in any crisis situation, to launch a preemptive nuclear strike against the Soviet Union's nuclear arsenal. The introduction of submarine-launched ballistic missiles was thought to be particularly helpful for ensuring second-strike capabilities (and thus 'mutually assured destruction') since it was widely believed to be practically impossible for an aggressor to eliminate the adversary's boomer fleet in the initial attack. 19 Other strategies for ensuring a second-strike capability could also be employed, but they had drawbacks. For example, one option, briefly used by the United States, was to have a contingent of long-range nuclear bombers on continuous airborne alert (Sagan, 1995). This program was very costly and increased the risk of accidental or unauthorized attacks. Another option was to build hardened land-based missile silos: in sufficient numbers, these could in principle provide the assurance of a second-strike capability to one side; however, such a large arsenal would then threaten to provide the capacity for a safe first strike against the other side, thus again destabilizing any crisis. Road-mobile ICBM launchers, which are harder to attack than silo-based missiles, eventually provided some stabilization when they were deployed by the Soviet Union in 1985, a few years before the end of the Cold War (Brower, 1989).

So consider a counterfactual in which a preemptive counterforce strike is more feasible. Imagine some technology that makes it easy to track ballistic missile submarines. We can also imagine that nuclear weapons were a bit more fragile, so that the radius within which a nuclear weapon would be destroyed by the detonation of another nuclear weapon was substantially larger than it actually is. 20 Under those circumstances, it might have been impossible to ensure a second-strike capability. Suppose, further, that technology had been such as to make it very hard to detect missile launches, rendering a launch-on-warning strategy completely unworkable. The crisis instability of the Cold War would then have been greatly amplified.
Whichever side struck first would survive relatively unscathed (or might at least have believed that it would, since the possibility of a nuclear winter was largely ignored by war planners at the time; Badash, 2001; Ellsberg, 2017). 21 The less aggressive side would be utterly destroyed. In such a situation, mutual fear could easily trigger a dash to all-out war (Schelling, 1960).

Other technological parameter changes could similarly increase the probability of attacks. In the real world, the main 'attraction' of a nuclear first strike is that it would alleviate the fear that one might otherwise oneself become the victim of such a strike; but we can imagine a counterfactual in which there are also benefits to nuclear aggression, beyond the removal of a negative. Suppose it were somehow possible to derive great economic gains from initiating a large-scale nuclear assault. 22 It might be hard to see how this could be the case, yet one can imagine some automated manufacturing technology or energy technology making physical resources more valuable; or technology-enabled population growth could again make agricultural land a more vital resource (Drexler, 1986). Some international relations scholars believe that the net economic benefits of conquest have declined substantially in the post-industrial era and that this decline has been a major contributor to peace. 23 If powerful national economic motives were again added to other causes for war (such as concern for one's own security, disputes over non-economic values, maintenance of national reputation, influence of particularly bellicose special interest groups, inter alia) then armed conflicts might become more common and large-scale nuclear war more likely.

In these examples, the vulnerability arises not from destruction getting easier, but from the actions leading to destruction coming to be supported by stronger incentives. We shall call these Type-2 vulnerabilities. Specifically, a scenario like 'safe first strike', in which some enormously destructive action becomes incentivized, we shall refer to as Type-2a:

Type-2a vulnerability: There is some level of technology at which powerful actors have the ability to produce civilization-devastating harms and, in the semi-anarchic default condition, face incentives to use that ability.

We will see some more examples of Type-2a vulnerabilities below, where the 'civilization-devastating harms' take the form of risk externalities.

Type-2b ('worse global warming')

There is yet another way in which the world could be vulnerable; one that we can illustrate with a counterfactual related to climate change. In the real world, we observe a secular rise in global mean temperature, widely believed to be driven primarily by human-caused emissions of greenhouse gases such as carbon dioxide, methane, and nitrous oxide (Stocker et al., 2014). Projections vary, depending on the emissions scenario and modelling assumptions, but forecasts that imply an average temperature rise of between 3°C and 4.5°C in 2100 (compared to 2000), in the absence of any significant action to reduce emissions, are quite typical (see Stocker et al. 2014, table 12.2). The effects of such warming (on sea levels, weather patterns, ecosystems, and agriculture) are usually expected to be net negative for human welfare (see Field et al. 2014, figure 10-1).
Greenhouse gases are emitted by a wide range of activities, including industry, transport, agriculture, and electricity production, and from all around the world, though especially from industrialized or industrializing countries. Efforts to curb emissions have so far failed to achieve much global-scale impact (Friedlingstein et al., 2014). Now, we could imagine a situation in which the problem of global warming would be far more dire than it actually seems to be. For example, the transient climate sensitivity (a measure of the medium-term change in mean global surface temperature of the Earth that results from some kind of forcing, such as a doubling of atmospheric CO2) could have turned out to be much greater than it is (Shindell, 2014). If it had been several times larger than its actual value, we would have been in for a temperature rise of, say, 15° or 20°C instead of 3°C, a prospect with far greater civilization-destroying potential than the actual expectation. 24 We can also imagine other deviations from reality that would have made global warming a worse problem. Fossil fuels could have been even more abundant than they are, and available in more cheaply exploitable deposits, which would have encouraged greater consumption. At the same time, clean energy alternatives could have been more expensive and technologically challenging. Global warming could also have been a worse problem if there were stronger positive feedback loops and nonlinearities, such as an initial phase in which the atmosphere is gradually loaded up with greenhouse gases without much observable or detrimental effect, followed by a second phase in which temperatures shoot up abruptly. To get a truly civilizational threat from global warming, it may also be necessary to stipulate, counterfactually, that mitigation through geoengineering is infeasible.

The vulnerability illustrated by such a 'worse global warming' scenario is different from that of a Type-2a scenario like 'safe first strike'. In a Type-2a vulnerability, some actor has the ability to take some action (such as launching a nuclear first strike) that is destructive enough to devastate civilization. In the 'worse global warming' scenario, no such actor need exist. Instead, in what we will call a Type-2b vulnerability, there is a large number of individually insignificant actors, each of whom is incentivized (under the semi-anarchic default condition) to take some action that contributes slightly to what cumulatively becomes a civilization-devastating problem:

Type-2b vulnerability: There is some level of technology at which, in the semi-anarchic default condition, a great many actors face incentives to take some slightly damaging action such that the combined effect of those actions is civilizational devastation.

What Type-2a and Type-2b have in common is that, in both cases, the damage-capable actors face incentives that would encourage a wide range of normally motivated actors in their situation to pursue the course of action that leads to damage. Global warming would not be a problem if only some small fraction of those actors who can drive cars or chop down a few trees chose to do so; the problem arises only because many actors make these choices. And in order for many actors to make those choices, the choices must be supported by incentives that have wide appeal (such as money, status, and convenience).
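The structural contrast between these vulnerability types can be put in simple expected-value terms. The sketch below is only illustrative; all of its numbers are assumptions chosen for the sake of the example, not estimates. The quantity it computes is the expected number of destructive acts: the number of empowered actors multiplied by the probability that any given one of them uses the capability.

```python
# Toy expected-value comparison of Type-1 and Type-2a risk profiles.
# All numbers below are illustrative assumptions, not estimates.

def expected_destructive_acts(empowered_actors: float, p_act_per_actor: float) -> float:
    """Expected acts = (actors holding the capability) x (probability each one uses it)."""
    return empowered_actors * p_act_per_actor

# Type-1 ('easy nukes'): capability is near-universal, but only a tiny
# 'apocalyptic residual' would ever choose to use it.
type1 = expected_destructive_acts(empowered_actors=5e9, p_act_per_actor=1e-6)

# Type-2a ('safe first strike'): only a handful of states hold the capability,
# but incentives make use far more likely for a normally motivated actor.
type2a = expected_destructive_acts(empowered_actors=5, p_act_per_actor=0.2)

print(f"Type-1 expected incidents:  {type1:,.0f}")   # thousands of city-scale attacks
print(f"Type-2a expected incidents: {type2a:.1f}")   # around one, but civilization-scale
```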
Similarly, if only one in a million actors who could launch a nuclear first strike would actually choose to do so, then it would not be so alarming if there were a handful of actors possessing that capability; but it does get worrisome if launching a nuclear strike is strongly supported by incentives that appeal to normally motivated actors (such as the motive of preempting a strike by one's adversary). This is in contrast to a Type-1 vulnerability, where the problem arises from the very widespread proliferation of destructive capability. Only an actor with quite unusual values would choose, at great cost and risk to himself, to blow up a city or unleash a doomsday pathogen; the trouble in that case is that if sufficiently many actors possess such a capability, then the subset of them who also have apocalyptic motives is not empty.

Type-0 ('surprising strangelets')

In 1942, it occurred to Edward Teller, one of the Manhattan scientists, that a nuclear explosion would create a temperature unprecedented in Earth's history, producing conditions similar to those in the center of the sun, and that this could conceivably trigger a self-sustaining thermonuclear reaction in the surrounding air or water (Rhodes, 1986). The importance of Teller's concern was immediately recognized by Robert Oppenheimer, the head of the Los Alamos lab. Oppenheimer notified his superior and ordered further calculations to investigate the possibility. These calculations indicated that atmospheric ignition would not occur. This prediction was confirmed in 1945 by the Trinity test, which involved the detonation of the world's first nuclear explosive. 25

In 1954, the US carried out another nuclear test, the Castle Bravo test, which was planned as a secret experiment with an early lithium-based thermonuclear bomb design. Lithium, like uranium, has two important isotopes: lithium-6 and lithium-7. Ahead of the test, the nuclear scientists calculated the yield to be 6 megatons (with an uncertainty range of 4-8 megatons). They assumed that only the lithium-6 would contribute to the reaction, but they were wrong. The lithium-7 contributed more energy than the lithium-6, and the bomb detonated with a yield of 15 megatons, more than double what they had calculated (and equivalent to about 1,000 Hiroshimas). The unexpectedly powerful blast destroyed much of the test equipment. Radioactive fallout poisoned the inhabitants of downwind islands and the crew of a Japanese fishing boat, causing an international incident.

We may regard it as lucky that it was the Castle Bravo calculation that was incorrect, and not the calculation of whether the Trinity test would ignite the atmosphere. Counterfactually, if the atmosphere had been susceptible to ignition by a nuclear detonation, and if this fact had been relatively easy to overlook (let us say as easy as it was to overlook the contribution of the lithium-7 in the Castle Bravo test), then the human story (and that of all terrestrial life) would have come to an end in 1945. We can call this scenario 'Castle Bravissimo'. Whenever we pull a ball from the urn of invention, there could conceivably be a possibility of accidental devastation. Usually, this risk is negligible; but in some cases it could be significant, especially when the technology in question generates some kind of novel perturbation of nature or introduces historically unprecedented conditions.
This suggests that we should add to our typology one more category, that of technology-fated accidental civilizational devastation:

Type-0 vulnerability: There is some technology that carries a hidden risk such that the default outcome when it is discovered is inadvertent civilizational devastation. 26

It is instructive to note, however, that 'Castle Bravissimo' is not a perfect illustration of a Type-0 vulnerability. Suppose that careful calculations had shown that there was a 1 per cent probability that a nuclear detonation would ignite the atmosphere and the oceans and thereby extinguish life on Earth. Suppose, further, that it had been known that to resolve the matter and prove that the chance was zero (or, alternatively, that it was one) would take another 10 years of meticulous study. It is unclear, under those circumstances, what the leaders of the Manhattan Project would have decided. They would presumably have thought it greatly desirable that humanity hold off on developing nuclear weapons for at least another 10 years. 27 On the other hand, they would have feared that Germany might have an advanced bomb project and that Hitler might not pull the brakes because of a 1 per cent risk of destroying the world. 28 They might have concluded that the risk of testing a nuclear bomb was worth taking in order to reduce the probability of Nazi Germany ending up with a nuclear monopoly.

In this version of 'Castle Bravissimo', civilization gets blown up by accident: nobody sought to cause a destructive event. Yet the key actors were locked in a strategic situation that incentivized them to proceed despite the risk. In this respect, the scenario fits as a Type-2a vulnerability; only, the civilization-devastating harm it involves is probabilistic. When nuclear technology becomes possible, powerful actors face incentives, in the semi-anarchic default condition, to use that technology in ways that produce civilization-destroying harms (which here take the form of risk externalities). 29 Accordingly, in order for us to diagnose a Type-0 vulnerability, we require that a stronger condition be met than merely that the key actors did not intend destruction. We stipulate that 'inadvertent' should here mean that the adverse outcome sprang from bad luck, not coordination failure. In a Type-0 vulnerability, the key actors would, even if they were adequately coordinated, decide to proceed with using the technology, in the belief that the benefits would outweigh the costs; but they would be wrong, and the costs would be larger than expected, enough so as to cause civilizational devastation. 30

Since 'Castle Bravissimo' only ambiguously satisfies this criterion (it being unclear in the original counterfactual to what extent the disaster would have resulted from coordination failure and to what extent from miscalculation or bad luck), it may be useful to introduce a cleaner example of a Type-0 vulnerability. Thus, consider a 'surprising strangelets' scenario in which some modern high-energy physics experiment turns out to initiate a self-catalyzing process in which ordinary matter gets converted into strange matter, with the result that our planet is destroyed. This scenario, and variations thereof in which accelerator experiments generate stable black holes or trigger the decay of a metastable vacuum state, have been analyzed in the literature (Jaffe et al., 2000; Tegmark and Bostrom, 2005).
Such outcomes would indeed be very surprising, since analysis indicates that they have a completely negligible chance of occurring. Of course, with sufficiently bad luck, a negligible-chance event could occur. But alternatively (and far more likely in this case), the analysis could have a hidden flaw, like the Castle Bravo calculations did; in which case the chance might not be so negligible after all (Ord et al., 2010). 31

Achieving stabilization

The truth of VWH would be bad news. But it would not imply that civilization will be devastated. In principle at least, there are several responses that could stabilize the world even if vulnerability exists. Recall that we defined the hypothesis in terms of a black-ball technology making civilizational devastation extremely likely conditional on technological development continuing and the semi-anarchic default condition persisting. Thus we can theoretically consider the following possibilities for achieving stabilization:

1. Restrict technological development.
2. Ensure that there does not exist a large population of actors representing a wide and recognizably human distribution of motives.
3. Establish extremely effective preventive policing.
4. Establish effective global governance.

We will discuss (3) and (4) in subsequent sections. Here we consider (1) and (2). We will argue that they hold only limited promise as ways of protecting against potential civilizational vulnerabilities.

Technological relinquishment

In its general form, technological relinquishment looks exceedingly unpromising. Recall that we construed the word 'technology' broadly, so that completely stopping technological development would require something close to a cessation of inventive activity everywhere in the world. That is hardly realistic; and if it could be done, it would be extremely costly, to the point of constituting an existential catastrophe in its own right (namely, 'permanent stagnation'; Bostrom, 2013).

That general relinquishment of scientific and technological research is a non-starter does not, however, imply that limited curtailments of inventive activities could not be a good idea. It can make sense to forgo particularly perilous directions of advancement. For instance, recalling our 'easy nukes' scenario, it would be sensible to discourage research into laser isotope separation for uranium enrichment (Kemp, 2012). Any technology that makes it possible to produce weapons-grade fissile material using less energy or with a smaller industrial footprint would erode important barriers to proliferation. It is hard to see how a slight reduction in the price of nuclear energy would compensate. On the contrary, the world would probably be better off if it somehow became harder and more expensive to enrich uranium. What we would ideally want in this area is not technological progress but technological regress. While targeted regress might not be in the cards, we could aim to slow the rate of advancement towards risk-increasing technologies relative to the rate of advancement in protective technologies. This is the idea expressed by the principle of differential technological development. In its original formulation, the principle focuses on existential risk; but we can apply it more broadly to also encompass technologies with 'merely' devastational potential:

Principle of Differential Technological Development.
Retard the development of dangerous and harmful technologies, especially ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risks posed by nature or by other technologies (Bostrom, 2002).

The principle of differential technological development is compatible with plausible forms of technological determinism. For example, even if it were ordained that all technologies that can be developed will be developed, it can still matter when they are developed. The order in which they arrive can make an important difference: ideally, protective technologies should come before the destructive technologies against which they protect; or, if that is not possible, then it is desirable that the gap be minimized so that other countermeasures (or luck) may tide us over until robust protection becomes available. The timing of an invention also influences what sociopolitical context the technology is born into. For example, if we believe that there is a secular trend toward civilization becoming more capable of handling black balls, then we may want to delay the most risky technological developments, or at least abstain from accelerating them. Even if we suppose that civilizational devastation is unavoidable, many would prefer it to take place further into the future, at a time when they and their loved ones may no longer be alive anyway. 32

Differential technological development doesn't really make sense in the original urn-of-creativity model, where the color of each ball comes as a complete surprise. If we want to use the urn model in this context, we must modify it. We could stipulate, for example, that the balls have different textures and that there is a correlation between texture and color, so that we get clues about the color of a ball before we extract it. Another way to make the metaphor more realistic is to imagine that there are strings or elastic bands between some of the balls, so that when we pull on one of them we drag along several others to which it is linked. Presumably the urn is highly tubular, since certain technologies must emerge before others can be reached (we are not likely to find a society that uses jet planes and flint axes). The metaphor would also become more realistic if we imagine that there is not just one hand daintily exploring the urn: instead, picture a throng of scuffling prospectors reaching in their arms in hopes of gold and glory, and citations.

Correctly implementing differential technological development is clearly a difficult strategic task (cf. Collingridge, 1980). Nevertheless, for an actor who cares altruistically about long-term outcomes and who is involved in some inventive enterprise (e.g. as a researcher, funder, entrepreneur, regulator, or legislator) it is worth making the attempt. Some implications, at any rate, seem fairly obvious: for instance, don't work on laser isotope separation, don't work on bioweapons, and don't develop forms of geoengineering that would empower random individuals to unilaterally make drastic alterations to the Earth's climate. Think twice before accelerating enabling technologies (such as DNA synthesis machines) that would directly facilitate such ominous developments. 33 But boost technologies that are predominantly protective; for instance, ones that enable more efficient monitoring of disease outbreaks or that make it easier to detect covert WMD programs.
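To make the modified urn metaphor above slightly more concrete, here is a minimal simulation sketch. It models only the 'texture' clue (not the strings between balls or the ordering constraints), and every parameter in it (the composition of the urn, the strength of the texture-color correlation) is an illustrative assumption. The cautious policy simply declines to pull balls whose texture looks dangerous, a crude stand-in for differential technological development.

```python
import random

# Minimal simulation of the modified urn metaphor: each ball has a color and a
# 'texture' that is imperfectly correlated with it. A cautious drawer (standing in
# for differential technological development) declines to pull rough-textured balls.
# All parameters are illustrative assumptions.

random.seed(0)
COLORS = ["white"] * 900 + ["gray"] * 90 + ["black"] * 10   # assumed composition

def texture(color: str) -> str:
    """Noisy clue about a ball's color: rough texture correlates with danger."""
    p_rough = {"white": 0.1, "gray": 0.5, "black": 0.9}[color]
    return "rough" if random.random() < p_rough else "smooth"

def draws_until_black(cautious: bool) -> int:
    """Number of draw opportunities before a black ball is actually pulled (capped)."""
    for n in range(1, 10_001):
        ball = random.choice(COLORS)
        if cautious and texture(ball) == "rough":
            continue                      # leave the suspicious ball in the urn
        if ball == "black":
            return n
    return 10_000

trials = 200
print("incautious:", sum(draws_until_black(False) for _ in range(trials)) / trials)
print("cautious:  ", sum(draws_until_black(True) for _ in range(trials)) / trials)
# The cautious policy does not remove the black ball from the urn; it only tends to
# delay the draw, which is the 'buy a little time' point made in the text.
```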
Even if it is the case that all possible 'bad' technologies are bound to be developed eventually, it can still be helpful to buy a little time. 34 However, differential technological development does not on its own offer a solution for vulnerabilities that persist over long periods: ones where adequately protective technologies are much harder to develop than their destructive counterparts, or where destruction has the advantage even at technological maturity. 35

Preference modification

Another theoretically possible way of achieving civilizational stabilization would be to change the fact that there exists a large population of actors representing a wide and recognizably human distribution of motives. We reserve for later the discussion of interventions that would reduce the effective number of independent actors by increasing various forms of coordination. Here we consider the possibility of modifying the distribution of preferences (within a more or less constant population of actors). The degree to which this approach holds promise depends on which type of vulnerability we have in mind.

In the case of a Type-1 vulnerability, preference modification does not look promising, at least in the absence of extremely effective means for doing so. Consider that some Type-1 vulnerabilities would result in civilizational devastation if there is even a single empowered person anywhere in the world who is motivated to pursue the destructive outcome. With that kind of vulnerability, reducing the number of people in the apocalyptic residual would do nothing to forestall devastation unless the number could be reduced all the way to zero, which may be completely infeasible. It is true that there are other possible Type-1 vulnerabilities that would require a somewhat larger apocalyptic residual in order for civilizational devastation to occur: for example, in a scenario like 'easy nukes', maybe there would have to be somebody from the apocalyptic residual in each of several hundred cities. But this is still a very low bar. It is difficult to imagine an intervention (short of radically re-engineering human nature on a fully global scale) that would sufficiently deplete the apocalyptic residual to entirely eliminate or even greatly reduce the threat of Type-1 vulnerabilities.

Note that an intervention that halves the size of the apocalyptic residual would not (at least not through any first-order effect) reduce the expected risk from Type-1 vulnerabilities by anywhere near half. A reduction of 5 per cent or 10 per cent of Type-1 risk from halving the apocalyptic residual would be more plausible. The reason is that there is wide uncertainty about how destructive some new black-ball technology would be, and we should arguably use a fairly uniform prior in log space (over several orders of magnitude) over the size of the apocalyptic residual that would be required in order for civilizational devastation to occur conditional on a Type-1 vulnerability arising. In other words, conditional on some new technology being developed that makes it easy for an average individual to kill at least one million people, it may be (roughly) as likely that the technology would enable the average individual to kill one million people, ten million people, a hundred million people, a billion people, or every human alive. These considerations notwithstanding, preference modification could be helpful in scenarios in which the set of empowered actors is initially limited to some small definable subpopulation.
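Before turning to those cases, the quantitative claim above (that halving the apocalyptic residual might trim only something like 5-10 per cent off Type-1 risk) can be illustrated with a minimal calculation. The log-uniform prior follows the reasoning in the text; the particular bounds and the assumed current size of the residual are illustrative assumptions.

```python
import math

# Illustrative assumption: conditional on a Type-1 black ball emerging, the minimum
# size of the apocalyptic residual needed for civilizational devastation (R*) is
# log-uniformly distributed between 1 and 1,000,000 empowered individuals.
R_STAR_MIN, R_STAR_MAX = 1, 10**6

def p_devastation(residual_size: float) -> float:
    """P(devastation) = P(R* <= residual_size) under the log-uniform prior."""
    if residual_size <= R_STAR_MIN:
        return 0.0
    if residual_size >= R_STAR_MAX:
        return 1.0
    return math.log(residual_size / R_STAR_MIN) / math.log(R_STAR_MAX / R_STAR_MIN)

residual = 10_000                      # assumed current size of the apocalyptic residual
before = p_devastation(residual)
after = p_devastation(residual / 2)    # effect of halving the residual

print(f"risk before: {before:.3f}, after halving: {after:.3f}")
print(f"relative risk reduction: {(before - after) / before:.1%}")
# With these assumptions: roughly 0.67 versus 0.62, i.e. only a ~7-8 per cent
# relative reduction, in line with the 5-10 per cent figure suggested in the text.
```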
Some black-ball technologies, when they first emerge from the urn, might be difficult to use and require specialized equipment. There could be a period of several years before such a technology has been perfected to the point where an average individual could master it. During this early period, the set of empowered actors could be quite limited; for example, it might consist exclusively of individuals with bioscience expertise working in a particular type of lab. Closer screening of applicants to positions in such labs could then make a meaningful dent in the risk that a destructive individual gains access to the biotech black ball within the first few years of its emergence. 36 And that reprieve may offer an opportunity to introduce other countermeasures to provide more lasting stabilization, in anticipation of the time when the technology gets easy enough to use that it diffuses to a wider population.

For Type-2a vulnerabilities, the set of empowered actors is much smaller. Typically what we are dealing with here are states, perhaps alongside a few especially powerful non-state actors. In some Type-2a scenarios, the set might consist exclusively of two superpowers, or a handful of states with special capabilities (as is currently the case with nuclear weapons). It could thus be very helpful if the preferences of even a few powerful states were shifted in a more peace-loving direction. The 'safe first strike' scenario would be a lot less alarming if the actors facing the security dilemma had attitudes towards one another similar to those prevailing between Finland and Sweden. For many plausible sets of incentives that could arise for powerful actors as a consequence of some technological breakthrough, the prospects for a non-devastational outcome would be significantly brightened if the actors in question had more irenic dispositions. Although this seems difficult to achieve, it is not as difficult as persuading almost all the members of the apocalyptic residual to alter their dispositions.

Lastly, consider Type-2b. Recall that such a vulnerability entails that 'by default' a great many actors face incentives to take some damaging action, such that the combined effects add up to civilizational devastation. The incentives for using the black-ball technology must therefore be ones that have a grip on a substantial fraction of the world population, economic gain perhaps being the prime example of such a near-universal motivation. So imagine some private action, available to almost every individual, which saves each person who takes it a fraction X of his or her annual income, while producing a negative externality such that if half the world's population takes the action then civilization gets devastated. At X = 0, we can assume that few people would take the antisocial action. But the greater X is, the larger the fraction of the population that would succumb to temptation. Unfortunately, it is plausible that the value of X that would induce at least half of the population to take the action is small, perhaps less than 1 per cent. 37 While it would be desirable to change the distribution of global preferences so as to make people more altruistic and raise the value of X, this seems difficult to achieve. (Consider the many strong forces already competing for hearts and minds: corporate advertisers, religious organizations, social movements, education systems, and so on.)
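The toy model just described can be written out explicitly. The participation curve below (what fraction of people would take the antisocial action for a given private gain X) is an illustrative assumption rather than anything specified in the text; the only point is to show how narrow the band of scenarios is in which raising people's X-threshold through greater altruism makes the difference.

```python
# Sketch of the Type-2b toy model: a private action saves each person who takes it a
# fraction x of annual income; civilization is devastated if at least half the world
# population takes it. The participation curve is an illustrative assumption.

def fraction_participating(x: float, altruism: float = 1.0) -> float:
    """Assumed fraction of the population taking the action for private gain x.
    Participation reaches 50% once x hits a threshold of 1% times `altruism`."""
    threshold = 0.01 * altruism
    return min(1.0, 0.5 * x / threshold) if x > 0 else 0.0

DEVASTATION_SHARE = 0.5   # stipulated: half the population acting devastates civilization

for x in (0.005, 0.015, 0.05):
    base = fraction_participating(x, altruism=1.0)
    doubled = fraction_participating(x, altruism=2.0)   # doubling the threshold X
    print(f"x={x:.1%}: devastation? baseline {base >= DEVASTATION_SHARE}, "
          f"with doubled altruism {doubled >= DEVASTATION_SHARE}")
# Doubling altruism flips the outcome only when the private gain lies around the
# 1-2 per cent band; for larger gains, devastation occurs either way.
```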
Even a dramatic increase in the amount of altruism in the world (corresponding, let us say, to a doubling of X from 1 per cent to 2 per cent) would prevent calamity only in a relatively narrow band of scenarios, namely those in which the private benefit of using the destructive technology is in the 1-2 per cent range. Scenarios in which the private gain exceeds 2 per cent would still result in civilizational devastation. In sum, modifying the distribution of preferences within the set of actors that would be destructively empowered by a black-ball discovery could be a useful adjunct to other means of stabilization, but it can be difficult to implement and would at best offer only very partial protection (unless we assume extreme forms of worldwide re-engineering of human nature). 38

Some specific countermeasures and their limitations

Besides influencing the direction of scientific and technological progress, or altering destruction-related preferences, there are a variety of other possible countermeasures that could mitigate a civilizational vulnerability. For example, one could try to:

• prevent the dangerous information from spreading;
• restrict access to requisite materials, instruments, and infrastructure;
• deter potential evildoers by increasing the chance of their getting caught;
• be more cautious and do more risk assessment work; and
• establish some kind of surveillance and enforcement mechanism that would make it possible to interdict attempts to carry out a destructive act.

It should be clear from our earlier discussion and examples that the first four of these are not general solutions. Preventing information from spreading could easily be infeasible. Even if it could be done, it would not prevent the dangerous information from being independently rediscovered. Censorship seems to be at best a stopgap measure. 39 Restricting access to materials, instruments, and infrastructure is a great way to mitigate some kinds of (gray-ball) threats, but it is unavailing for other kinds of threats, such as ones in which the requisite ingredients are needed in too many places in the economy or are already ubiquitously available when the dangerous idea is discovered (such as glass, metal, and batteries in the 'easy nukes' scenario). Deterring potential evildoers makes good sense; but for sufficiently destructive technologies, the existence of an apocalyptic residual renders deterrence inadequate even if every perpetrator were certain to get caught. Exercising more caution and doing more risk assessment is also a weak and limited strategy. One actor unilaterally deciding to be more cautious may not help much with respect to a Type-2a vulnerability, and would do basically nothing for one of Type-2b or Type-1. In the case of a Type-0 vulnerability, it could help if the pivotal actor were more cautious, though only if the first cautiously tiptoeing actor were not followed by an onrush of incautious actors getting access to the same risky technology (unless the world had somehow, in the interim, been stabilized by other means). 40 And as for risk assessment, it could lower the risk only if it led to some other countermeasure being implemented. 41

The last countermeasure in the list, surveillance, does point towards a more general solution. We will discuss it in the next section under the heading of 'preventive policing'. But we can already note that on its own it is not sufficient. For example, consider a Type-2b vulnerability such as 'worse global warming'.
Even if surveillance made it possible for a state to perfectly enforce any environmental regulation it chose to impose, there would still be the problem of getting a sufficient plurality of states to agree to adopt the requisite regulation, something which could easily fail to happen. The limitations of surveillance are even more evident in the case of a Type-2a vulnerability, such as 'safe first strike', where the problem is that states (or other powerful actors) are strongly incentivized to perform destructive acts. The ability of those states to perfectly control what goes on within their own borders does not solve this problem. What is needed to reliably solve problems that involve challenges of international coordination is effective global governance.

Governance gaps

The limitations of technological relinquishment, preference modification, and various specific countermeasures as responses to a potential civilizational vulnerability should now be clear. To the extent, therefore, that we are concerned that VWH may be true, we must consider the remaining two possible ways of achieving stabilization:

1. Create the capacity for extremely effective preventive policing. Develop the intra-state governance capacity needed to prevent, with extremely high reliability, any individual or small group (including ones that cannot be deterred) from carrying out any action that is highly illegal; and
2. Create the capacity for strong global governance. Develop the inter-state governance capacity needed to reliably solve the most serious global commons problems and ensure robust cooperation between states (and other strong organizations) wherever vital security interests are at stake, even where there are very strong incentives to defect from agreements or refuse to sign on in the first place.

The two governance gaps reflected by (1) and (2), one at the micro-scale, the other at the macro-scale, are two Achilles' heels of the contemporary world order. So long as they remain unprotected, civilization remains vulnerable to a potential technological black ball that would enable a strike to be directed at them. Unless and until such a discovery emerges from the urn, it is easy to overlook how exposed we are. In the following two sections, we will discuss how filling in these governance gaps is necessary to achieve a general ability to stabilize potential civilizational vulnerabilities.

It goes without saying that there are great difficulties, and also very serious potential downsides, in seeking progress towards (1) and (2). In this paper, we will say little about the difficulties and almost nothing about the potential downsides, in part because these are already rather well known and widely appreciated. However, we emphasize that the lack of discussion about arguments against (1) and (2) should not be interpreted as an implicit assertion that these arguments are weak or that they do not point to important concerns. They would, of course, have to be taken into account in an all-things-considered evaluation. But such an evaluation is beyond the scope of the present contribution, which focuses specifically on considerations flowing from VWH.

Preventive policing

Suppose that a Type-1 vulnerability opens up. Somebody discovers a really easy way to cause mass destruction. Information about the discovery spreads. The requisite materials and instruments are ubiquitously available and cannot quickly be removed from circulation.
Of course it is highly illegal for any non-state actor to destroy a city, and anybody caught doing so would be subject to harsh penalties. But it is plausible that more than one person in a million belongs to an undeterrable apocalyptic residual. Though small in relative terms, the absolute number is still too large for civilization to endure if each such person creates a city-destroying event. So what to do? If we suddenly found ourselves in such a situation, it might be too late to prevent civilization from being destroyed. However, it is possible to envisage scenarios in which human society would survive such a challenge intact, and even the harder challenge in which individuals can single-handedly destroy not just one city but the entire world.

What would be required to stabilize such vulnerabilities is an extremely well-developed preventive policing capacity. States would need the ability to monitor their citizens closely enough to allow them to intercept anybody who begins preparing an act of mass destruction. The feasibility of such surveillance and interception depends on the specifics of the scenario: How long does it take to deploy the black-ball technology destructively? How observable are the actions involved? Can they be distinguished from behavior that we don't want to prohibit? But it is plausible that a considerable chunk of the Type-1 vulnerability spectrum could be stabilized by a state that deploys currently available technologies to the fullest extent. And expected advances in surveillance technology will greatly expand the achievable protection. For a picture of what a really intensive level of surveillance could look like, consider the following vignette:

High-tech Panopticon

Everybody is fitted with a 'freedom tag', a successor to the more limited wearable surveillance devices familiar today, such as the ankle tag used in several countries as a prison alternative, the bodycams worn by many police forces, the pocket trackers and wristbands that some parents use to keep track of their children, and, of course, the ubiquitous cell phone (which has been characterized as 'a personal tracking device that can also be used to make calls'). 42 The freedom tag is a slightly more advanced appliance, worn around the neck and bedecked with multidirectional cameras and microphones. Encrypted video and audio is continuously uploaded from the device to the cloud and machine-interpreted in real time. AI algorithms classify the activities of the wearer, his hand movements, nearby objects, and other situational cues. If suspicious activity is detected, the feed is relayed to one of several patriot monitoring stations. These are vast office complexes, staffed 24/7. There, a freedom officer reviews the video feed on several screens and listens to the audio in headphones. The freedom officer then determines an appropriate action, such as contacting the tag-wearer via an audio link to ask for an explanation or to request a better view. The freedom officer can also dispatch an inspector, a police rapid response unit, or a drone to investigate further. In the small fraction of cases where the wearer refuses to desist from the proscribed activity after repeated warnings, an arrest may be made or other suitable penalties imposed. Citizens are not permitted to remove the freedom tag, except while they are in environments that have been outfitted with adequate external sensors (which, however, include most indoor environments and motor vehicles).
The system offers fairly sophisticated privacy protections, such as automated blurring of intimate body parts, and it provides the option to redact identity-revealing data such as faces and name tags and release it only when the information is needed for an investigation. Both AI-enabled mechanisms and human oversight closely monitor all the actions of the freedom officers to prevent abuse. 43

Creating and operating the High-tech Panopticon would require substantial investment, but thanks to the falling price of cameras, data transmission, storage, and computing, and the rapid advances in AI-enabled content analysis, it may soon become both technologically feasible and affordable. For example, if the cost of applying this to one individual for one year falls to around US$140, then the entire world population could be continuously monitored at a cost of less than 1 per cent of world GDP. At that price, the system would plausibly represent a net saving (even setting aside its use in preventing civilization-scale cataclysms) because of its utility for regular law enforcement. If the system works as advertised, many forms of crime could be nearly eliminated, with concomitant reductions in the costs of policing, courts, prisons, and other security systems. It might also generate growth in many beneficial cultural practices that are currently inhibited by a lack of social trust.

If the technical barriers to the High-tech Panopticon are rapidly coming down, what about its political feasibility? One possibility is that society gradually drifts towards total social transparency even absent any big shock to the system. It may simply become progressively easier to collect and analyze information about people and objects, and it may prove quite convenient to allow that to be done, to the point where eventually something close to full surveillance becomes a reality, close enough that with just one more turn of the screw it could be turned into a High-tech Panopticon. 44 An alternative possibility is that some particular Type-1 vulnerability comes sufficiently starkly into view to scare states into taking extreme measures, such as launching a crash program to create universal surveillance. Other extreme measures that could be attempted in the absence of a fully universal monitoring system might include adopting a policy of preemptive incarceration, say whenever some set of unreliable indicators suggests a greater than 1 per cent probability that some individual will attempt a city-destroying act or worse. 45

Political attitudes to such policies would depend on many factors, including cultural traditions and norms about privacy and social control; but they would also depend on how clearly the civilizational vulnerability was perceived. At least in the case of vulnerabilities for which there are several spectacular warning shots, it is plausible that the risk would be perceived very clearly. In the 'easy nukes' scenario, for example, after the ruination of a few great cities, there would likely be strong public support for a policy which, for the sake of forestalling another attack, would involve incarcerating a hundred innocent people for every genuine plotter. 46 In such a scenario, the creation of a High-tech Panopticon would probably be widely supported as an overwhelmingly urgent priority. However, for vulnerabilities not preceded or accompanied by such incontrovertible evidence, the will to robust preventive action may never materialize.
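The affordability figure quoted above can be checked with back-of-the-envelope arithmetic. The population and GDP numbers below are rough assumptions (roughly 2018 values), not figures given in the text; on these assumptions the total comes out on the order of 1 per cent of world GDP, with the exact fraction depending on which GDP measure one assumes.

```python
# Back-of-the-envelope check of the affordability claim. Population and GDP are
# rough assumptions (circa 2018), not figures given in the text.
COST_PER_PERSON_YEAR = 140     # US$, the per-person figure used in the text
WORLD_POPULATION = 7.6e9       # assumption
WORLD_GDP_PPP = 1.3e14         # US$, assumption: world GDP at purchasing power parity

total_cost = COST_PER_PERSON_YEAR * WORLD_POPULATION
share = total_cost / WORLD_GDP_PPP
print(f"total: ${total_cost / 1e12:.2f} trillion per year ({share:.2%} of world GDP)")
# About US$1.1 trillion per year, i.e. on the order of 1 per cent of world GDP.
```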
Extremely effective preventive policing, enabled by ubiquitous real-time surveillance, may thus be necessary to stabilize a Type-1 vulnerability. Surveillance is also relevant to some other types of vulnerability, although not so centrally as in the case of Type-1. In a Type-2b vulnerability, the bad outcome is brought about by the combined actions of a mass of independent actors who are incentivized to behave destructively. But unless the destructive behaviours are very hard to observe, intensification of surveillance or preventive policing would not be needed to achieve stabilization. In 'worse global warming', for instance, it is not essential that individual actions be preempted. Dangerous levels of emissions take time to accumulate, and polluters can be held accountable after the fact; and it is tolerable if a few of them slip through the cracks. For other Type-2b vulnerabilities, however, enhanced methods of surveillance and social control could be important. Consider 'runaway mob', a scenario in which a mob forms that kills anybody it comes into contact with who refuses to join, and which grows ever bigger and more formidable (cf. Munz et al., 2009). The ease with which such bad social equilibria can form and propagate, the feasibility of reforming them once they have taken hold, and the toll they exact on human welfare, depend on parameters that could be changed by technological innovations, potentially for the worse. Even today, many states struggle to subdue organized crime. A black-ball invention (perhaps some clever cryptoeconomic mechanism design) that makes criminal enterprises much more scalable or more damaging in their social effects might create a vulnerability that could only be stabilized if states possessed unprecedented technological powers of surveillance and social control.

As regards Type-2a vulnerabilities, where the problem arises from the incentives facing state powers or other mighty actors, it is less clear how domestic surveillance could help. Historically, stronger means of social control may even have worsened inter-state conflict: the bloodiest inter-state conflicts have depended on the highly effective governance capacities of the modern state, for tax collection, conscription, and war propaganda. It is conceivable that improved surveillance could indirectly facilitate the stabilization of a Type-2a vulnerability, such as by changing sociocultural dynamics or creating new options for making arms-reduction treaties or non-aggression pacts more verifiable. But it seems equally plausible that the net effect of strengthened domestic surveillance and policing powers on Type-2a vulnerabilities would, in the absence of reliable mechanisms for resolving international disputes, be in the opposite direction (i.e. tending to produce or exacerbate such vulnerabilities rather than to stabilize them).

Global governance

Consider again 'safe first strike': states with access to the black-ball technology by default face strong incentives to use it destructively even though it would be better for everybody that no state did so. The original example involved a counterfactual with nuclear weapons, but looking to the future we might get this kind of black ball from advances in biological weapons, or atomically precise manufacturing, or the creation of vast swarms of killer drones, or artificial intelligence, or something else. The set of state actors then confronts a collective action problem.
Failure to solve this problem means that civilization gets devastated in a nuclear Armageddon or another comparable disaster. It is plausible that, absent effective global governance, states would in fact fail to solve this problem. By assumption, the problem confronting us here presents special challenges; yet states have frequently failed to solve easier collective action problems. Human history is covered head to foot with the pockmarks of war. With effective global governance, however, the solution becomes trivial: simply prohibit all states from wielding the black-ball technology destructively. In the case of 'safe first strike', the most obvious way to do this would be by ordering that all nuclear weapons be dismantled and an inspection regime set up, with whatever level of intrusiveness is necessary to guarantee that nobody recreates a nuclear capability. Alternatively, the global governance institution itself could retain an arsenal of nuclear weapons as a buffer against any breakout attempt. To deal with Type-2a vulnerabilities, what civilization requires is a robust ability to achieve global coordination, specifically in matters where state actions have extremely large externalities. Effective global governance would also help with those Type-1 and Type-2b scenarios where some states are reluctant to institute the kind of preventive policing that would be needed to reliably prevent individuals within their territories from carrying out a destructive act. Consider a biotechnological black ball that is powerful enough that a single malicious use could cause a pandemic that would kill billions of people, thus presenting a Type-1 vulnerability. It would be unacceptable if even a single state failed to put in place the machinery necessary for continuous surveillance and control of its citizens (or whatever other mechanisms are necessary to prevent malicious use with virtually perfect reliability). A state that refuses to implement the requisite safeguards, perhaps on grounds that it values personal freedom too highly or accords citizens a constitutionally inscribed right to privacy, would be a delinquent member of the international community. Such a state, even if its governance institutions functioned admirably in other respects, would be analogous to a 'failed state' whose internal lack of control makes it a safe haven for pirates and international terrorists (though of course in the present case the risk externality it would be imposing on the rest of the world would be far larger). Other states certainly would have grounds for complaint. A similar argument applies to Type-2b vulnerabilities, such as a 'worse global warming' scenario in which some states are inclined to free-ride on the costly efforts of others to cut emissions. An effective global governance institution could compel every state to do its part. We thus see that while some possible vulnerabilities can be stabilized with preventive policing alone, and some other vulnerabilities can be stabilized with global governance alone, there are some that would require both. Extremely effective preventive policing would be required because individuals can engage in hard-to-regulate activities that must nevertheless be effectively regulated, and strong global governance would be required because states may have incentives not to effectively regulate those activities even if they have the capability to do so.
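Before turning to their combination, the division of labour described in this and the preceding section can be summarized schematically. The mapping below merely paraphrases the argument in the text, using the paper's own type labels; it is a mnemonic summary, not an exhaustive classification.

```python
# Rough schematic of which governance capacity stabilizes which
# vulnerability type, paraphrasing the argument in the text.
stabilization_requirements = {
    "Type-1 (easy mass destruction by individuals or small groups)": [
        "extremely effective preventive policing",
        "global governance, if some states refuse to police domestically",
    ],
    "Type-2a (state incentives for destructive use, e.g. 'safe first strike')": [
        "effective global governance",
    ],
    "Type-2b (many weak actors with bad incentives, e.g. 'worse global warming')": [
        "global governance to prevent free-riding",
        "surveillance/policing only if the harmful acts are hard to observe",
    ],
}

for vulnerability_type, remedies in stabilization_requirements.items():
    print(vulnerability_type)
    for remedy in remedies:
        print("  -", remedy)
```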
In combination, however, ubiquitous-surveillance-powered preventive policing and effective global governance would be sufficient to stabilize most vulnerabilities, making it safe to continue scientific and technological development even if VWH is true. 47 \n Discussion Comprehensive surveillance and global governance would thus offer protection against a wide spectrum of civilizational vulnerabilities. This is a considerable reason in favor of bringing about those conditions. The strength of this reason is roughly proportional to the probability that the vulnerable world hypothesis is true. It goes without saying that a mechanism that enables unprecedentedly intense forms of surveillance, or a global governance institution capable of imposing its will on any nation, could also have bad consequences. Improved capabilities for social control could help despotic regimes protect themselves from rebellion. Ubiquitous surveillance could enable a hegemonic ideology or an intolerant majority view to impose itself on all aspects of life, preventing individuals with deviant lifestyles or unpopular beliefs from finding refuge in anonymity. And if people believe that everything they say and do is, effectively, 'on the record', they might become more guarded and blandly conventional, sticking closely to a standard script of politically correct attitudes and behaviours rather than daring to say or do anything provocative that would risk making them the target of an outrage mob or putting an indelible disqualifying mark on their résumé. Global governance, for its part, could reduce beneficial forms of inter-state competition and diversity, creating a world order with a single point of failure: if a world government ever gets captured by a sufficiently pernicious ideology or special interest group, it could be game over for political progress, since the incumbent regime might never allow experiments with alternatives that could reveal that there is a better way. Also, being even further removed from individuals and culturally cohesive 'peoples' than are typical state governments, such an institution might be perceived by some as less legitimate, and it may be more susceptible to agency problems such as bureaucratic sclerosis or political drift away from the public interest. 48 It also goes without saying that stronger surveillance and global governance could have various good consequences aside from stabilizing civilizational vulnerabilities (see also Re, 2016; Bostrom, 2006; cf. Torres, 2018). More effective methods of social control could reduce crime and alleviate the need for harsh criminal penalties. They could foster a climate of trust that enables beneficial new forms of social interaction and economic activity to flourish. Global governance could prevent interstate wars, including ones that do not threaten civilizational devastation, and reduce military expenditures, promote trade, solve various global environmental and other commons problems, calm nationalistic hatreds and fears, and over time perhaps foster an enlarged sense of cosmopolitan solidarity. It may also lead to increased social transfers to the global poor, which some would view as desirable. Clearly, there are weighty arguments both for and against moving in these directions. This paper offers no judgment about the overall balance of these arguments.
The ambition here is more limited: to provide a framework for thinking about potential technology-driven civilizational vulnerabilities, and to point out that greatly expanded capacities for preventive policing and global governance would be necessary to stabilize civilization in a range of scenarios. Yes, this analysis provides an additional reason in favor of developing those capacities, a reason that does not seem to have been playing a significant role in many recent conversations about related issues, such as debates about government surveillance and about proposed reforms of international and supranational institutions. 49 When this reason is added to the mix, the evaluation should therefore become more favourable than it otherwise would have been towards policies that would strengthen governance capacities in these ways. However, whether or not this added reason is sufficiently weighty to tip the overall balance would depend on other considerations that fall outside the scope of this paper. It is worth emphasizing that the argument in this paper favors certain specific forms of governance capacity strengthening. With respect to surveillance and preventive policing, VWH-concerns point specifically to the desirability of governance capacity that makes it possible to extremely reliably suppress activities that are very strongly disapproved of by a very large supermajority of the population (and of power-weighted domestic stakeholders). It provides support for other forms of governance strengthening only insofar as they help create this particular capacity. Similarly, with respect to global governance, VWH-based arguments support developing institutions that are capable of reliably resolving very high-stakes international coordination problems, ones where a failure to reach a solution would result in civilizational devastation. This would include having the capacity to prevent great power conflicts, suppress arms races in weapons of mass destruction, regulate development races and deployment of potential black-ball technologies, and successfully manage the very worst kinds of tragedy of the commons. It need not include the capacity to make states cooperate on a host of other issues, nor does it necessarily include the capacity to achieve the requisite stabilization using only fully legitimate means. While those capacities may be attractive for other reasons, they do not immediately emerge as desiderata simply from taking VWH seriously. For example, so far as VWH is concerned, it would theoretically be satisfactory if the requisite global governance capacity comes into existence via the rise of one superpower to a position of sufficient dominance to give it the ability, in a sufficiently dire emergency, unilaterally to impose a stabilization scheme on the rest of the world. One important issue that we still need to discuss is that of timing. Even if we became seriously concerned that the urn of invention may contain a black ball, this need not move us to favor establishing stronger surveillance or global governance now, if we thought that it would be possible to take those steps later, if and when the hypothesized vulnerability came clearly into view. We could then let the world continue its sweet slumber, in the confident expectation that as soon as the alarm goes off it will leap out of bed and undertake the required actions. But we should question how realistic that plan is. Some historical reflection is useful here. 
Throughout the Cold War, the two superpowers (and the entire northern hemisphere) lived in continuous fear of nuclear annihilation, which could have been triggered at any time by accident or as the result of some crisis spiralling out of control. The reality of the threat was accepted by all sides. This risk could have been substantially reduced simply by getting rid of all or most nuclear weapons (a move which, as a nice side effect, could also have saved more than ten trillion dollars). 50, 51 Yet, after several decades of effort, only limited nuclear disarmament and other risk-reduction measures were implemented. Indeed, the threat of nuclear annihilation remains with us to this day. In the absence of strong global governance that can enforce a treaty and compel disputants to accept a compromise, the world has so far been unable to solve this most obvious collective action problem. 52 But perhaps the reason why the world has failed to eliminate the risk of nuclear war is that the risk was insufficiently great? Had the risk been higher, one could eupeptically argue, then the necessary will to solve the global governance problem would have been found. Perhaps, though it does seem rather shaky ground on which to rest the fate of civilization. We should note that although a technology even more dangerous than nuclear weapons may stimulate a greater will to overcome the obstacles to achieving stabilization, other properties of a black ball could make the global governance problem more challenging than it was during the Cold War. We have already illustrated this possibility in scenarios such as 'safe first strike' and 'worse global warming'. We saw how certain properties of a technology set could generate stronger incentives for destructive use or for refusing to join (or defecting from) any agreement to curb its harmful applications. 53 Even if one felt optimistic that an agreement could eventually be reached, the question of timing should remain a serious concern. International collective action problems, even within a restricted domain, can resist solution for a long time, even when the stakes are large and indisputable. It takes time to explain why an arrangement is needed and to answer objections, time to negotiate a mutually acceptable instantiation of the cooperative idea, time to hammer out the details, and time to set up the institutional mechanisms required for implementation. In many situations, holdout problems and domestic opposition can delay progress for decades; and by the time one recalcitrant nation is ready to come on board, another that had previously agreed might have changed its mind. Yet at the same time, the interval between a vulnerability becoming clearly visible to all and the point when stabilization measures must be in place could be short. It could even be negative, if the nature of the vulnerability leaves room for denialism or if specific explanations cannot be widely provided because of information hazards. These considerations suggest that it is problematic to rely on spontaneous ad hoc international cooperation to save the day once a vulnerability comes into view. 54 The situation with respect to preventive policing is in some respects similar, although we see a much faster and more robust trend, driven by advances in surveillance technology, towards increasing state capacities for monitoring and potentially controlling the actions of their own citizens than any trend towards effective global governance. At least this is true if we look at the physical realm.
In the digital information realm the outlook is somewhat less clear, owing to the proliferation of encryption and anonymization tools, and the frequency of disruptive innovation which makes the future of cyberspace harder to foresee. Sufficiently strong capabilities in physical space would, however, spill over into strong capabilities in the digital realm as well. In High-tech Panopticon, there would be no need for the authorities to crack ciphers, since they could directly observe everything that users type into their computers and everything that is shown on their screens. One could take the position that we should not develop improved methods of surveillance and social control unless and until a specific civilizational vulnerability comes clearly into view, one that looks sufficiently serious to justify the sacrifice of some types of privacy and the risk of inadvertently facilitating a totalitarian nightmare. But as with the case of international cooperation, we confront a question of timing. A highly sophisticated surveillance and response system, like the one depicted in 'High-tech Panopticon', cannot be conjured up and made fully reliable overnight. Realistically, from our current starting point, it would take many years to implement such a system, not to mention the time required to build political support. Yet the vulnerabilities against which such a system might be needed may not offer us much advance warning. Last week a top academic biolab may have published an article in Science; and as you are reading these words, a popular blogger somewhere in the world, in hot pursuit of pageviews, might be uploading a post that explains some clever way in which the lab's result could be used by anybody to cause mass destruction. In such a scenario, intense social control may need to be switched on almost immediately. In an unfavorable scenario, the lead time could be as short as hours or days. It would then be too late to start developing a surveillance architecture when the vulnerability comes clearly into view. If devastation is to be avoided, the mechanism for stabilization would need to have been put in place beforehand. What may theoretically be feasible is to develop the capabilities for intrusive surveillance and real-time interception in advance, but not initially to use those capabilities to anything like their full extent. This would be one way to satisfy the requirement for stabilizing a Type-1 vulnerability (and other vulnerabilities that require highly reliable monitoring of individual actions). By giving human civilization the capacity for extremely effective preventive policing, we would have exited one of the dimensions of the semi-anarchic default condition. Admittedly, constructing such a system and keeping it in standby mode would mean that some of the downsides of actually instituting intense forms of social control would be incurred. In particular, it may make oppressive outcomes more likely: \"[The] question is whether the creation of a system of surveillance perilously alters that balance too far in the direction of government control . . . We might imagine a system of compulsory cameras installed in homes, activated only by warrant, being used with scrupulous respect for the law over many years.
The problem is that such an architecture of surveillance, once established, would be difficult to dismantle, and would prove too potent a tool of control if it ever fell into the hands of people who, whether through panic, malice, or a misguided confidence in their own ability to secretly judge the public good, would seek to use it against us (Sanchez, 2013).\" Developing a system for turnkey totalitarianism means incurring a risk, even if one does not intend for the key to be turned. One could try to reduce this risk by designing the system with appropriate technical and institutional safeguards. For example, one could aim for a system of 'structured transparency' that prevents concentrations of power by organizing the information architecture so that multiple independent stakeholders must give their permission in order for the system to operate, and so that only the specific information that is legitimately needed by some decision-maker is made available to her, with suitable redactions and anonymization applied as the purpose permits. With some creative mechanism design, some machine learning, and some fancy cryptographic footwork, there might be no fundamental barrier to achieving a surveillance system that is at once highly effective at its official function yet also somewhat resistant to being subverted to alternative uses. How likely this is to be achieved in practice is of course another matter, which would require further exploration. 55 Even if a significant risk of totalitarianism would inevitably accompany a well-intentioned surveillance project, it would not follow that pursuing such a project would increase the risk of totalitarianism. A relatively less risky well-intentioned project, commenced at a time of comparative calm, might reduce the risk of totalitarianism by preempting a less-well-intentioned and more risky project started during a crisis. But even if there were some net totalitarianism-risk-increasing effect, it might be worth accepting that risk in order to gain the general ability to stabilize civilization against emerging Type-1 threats (or for the sake of other benefits that extremely effective surveillance and preventive policing could bring). \n Conclusions This paper has introduced a perspective from which we can more easily see how civilization is vulnerable to certain types of possible outcomes of our technological creativity: our drawing a metaphorical black ball from the urn of inventions, which we have the power to extract but not to put back in. We developed a typology of such potential vulnerabilities, and showed how some of them result from destruction becoming too easy, others from pernicious changes in the incentives facing a few powerful state actors or a large number of weak actors. We also examined a variety of possible responses and their limitations. We traced the root cause of our civilizational exposure to two structural properties of the contemporary world order: on the one hand, the lack of preventive policing capacity to block, with extremely high reliability, individuals or small groups from carrying out actions that are highly illegal; and, on the other hand, the lack of global governance capacity to reliably solve the gravest international coordination problems even when vital national interests by default incentivize states to defect.
General stabilization against potential civilizational vulnerabilities, in a world where technological innovation is occurring rapidly along a wide frontier, and in which there are large numbers of actors with a diverse set of human-recognizable motivations, would require that both of these governance gaps be eliminated. Until such a time as this is accomplished, humanity will remain vulnerable to drawing a technological black ball. Clearly, these reflections provide a pro tanto reason to support strengthening surveillance capabilities and preventive policing systems and to favor a global governance regime that is capable of decisive action (whether based on unilateral hegemonic strength or powerful multilateral institutions). However, we have not settled whether these things would be desirable all-things-considered, since doing so would require analyzing a number of other strong considerations that lie outside the scope of this paper. Because our main goal has been to put some signposts up in the macrostrategic landscape, we have focused our discussion at a fairly abstract level, developing concepts that can help us orient ourselves (with respect to long-term outcomes and global desirabilities) somewhat independently of the details of our varying local contexts. In practice, were one to undertake an effort to stabilize our civilization against potential black balls, one might find it prudent to focus initially on partial solutions and low-hanging fruit. Thus, rather than directly trying to bring about extremely effective preventive policing or strong global governance, one might attempt to patch up particular domains where black balls seem most likely to appear. One could, for example, strengthen oversight of biotechnology-related activities by developing better ways to track key materials and equipment, and to monitor scientists within labs. One could also tighten know-your-customer regulations in the biotech supply sector, and expand the use of background checks for personnel working in certain kinds of labs or involved with certain kinds of experiments. One could improve whistleblower systems, and try to raise biosecurity standards globally. One could also pursue differential technological development, for instance by strengthening the biological weapons convention and maintaining the global taboo on biological weapons. Funding bodies and ethical approval committees could be encouraged to take a broader view of the potential consequences of particular lines of work, focusing not only on risks to lab workers, test animals, and human research subjects, but also on ways that the hoped-for findings might lower the competence bar for bioterrorists down the road. Work that is predominantly protective (such as disease outbreak monitoring, public health capacity building, improvement of air filtration devices) could be differentially promoted. Nevertheless, while pursuing such limited objectives, one should bear in mind that the protection they would offer covers only special subsets of scenarios, and might be temporary. If one finds oneself in a position to influence the macroparameters of preventive policing capacity or global governance capacity, one should consider that fundamental changes in those domains may be the only way to achieve a general ability to stabilize our civilization against emerging technological vulnerabilities.
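To make one of these targeted measures slightly more concrete: a know-your-customer regime in the DNA synthesis sector might combine customer verification with screening of ordered sequences against a list of sequences of concern. The sketch below is purely hypothetical; the registry, the watchlist entries, the function names, and the naive k-mer matching rule are all invented for illustration, and real screening systems are considerably more sophisticated.

```python
# Hypothetical sketch of order screening at a DNA synthesis provider.
# All names, entries, and thresholds here are illustrative assumptions.

VERIFIED_CUSTOMERS = {"uni-lab-001", "pharma-042"}            # assumed registry
SEQUENCES_OF_CONCERN = ["ATGCGTACGTTAGC", "GGCCTTAAGGCTA"]    # placeholder entries
KMER = 12  # window length for naive substring screening (illustrative)

def shares_kmer(order_seq: str, concern_seq: str, k: int = KMER) -> bool:
    """Return True if the order shares any length-k window with a sequence
    of concern (a crude stand-in for real homology screening)."""
    windows = {order_seq[i:i + k] for i in range(len(order_seq) - k + 1)}
    return any(concern_seq[j:j + k] in windows
               for j in range(len(concern_seq) - k + 1))

def screen_order(customer_id: str, sequence: str) -> str:
    if customer_id not in VERIFIED_CUSTOMERS:
        return "reject: unverified customer"
    if any(shares_kmer(sequence, s) for s in SEQUENCES_OF_CONCERN):
        return "hold: flagged for manual biosecurity review"
    return "accept"

print(screen_order("uni-lab-001", "ATGCGTACGTTAGCAA"))  # held for review
print(screen_order("random-buyer", "ATATATATATAT"))      # rejected: unverified
```

In this sketch, an order from an unverified customer is rejected outright, while a verified order that overlaps a listed sequence is held for human review rather than silently refused, which reflects the emphasis above on oversight and monitoring rather than blanket prohibition.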
\n Notes For comments, discussion, and critique, I'm grateful to Sonja Alsofi, Stuart Armstrong, Andrew Snyder-Beattie, Chris Anderson, Nick Beckstead, Miles Brundage, Ben Buchanan, Owen Cotton-Barratt, Niel Bowerman, Paul Christiano, Allan Dafoe, Jeff Ding, Eric Drexler, Peter Eckersley, Owain Evans, Thomas Homer-Dixon, Thomas Inglesby, John Leslie, Gregory Lewis, Matthijs Maas, Jason Matheny, Michael Montague, Luke Muehlhauser, Toby Ord, Ben Pace, Richard Re, Anders Sandberg, Julian Savulescu, Stefan Schubert, Carl Shulman, Tanya Singh, Helen Toner, and to the audiences of several workshops and lectures where earlier versions of this work were presented), and to three anonymous referees; and I thank Carrick Flynn, Christopher Galias, Ben Garfinkel, and Rose Hadshar for help with the manuscript and many useful suggestions. This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 669751). 1. Obviously, the urn metaphor has important limitations. We will discuss some of them later. 2. The net effect on the conditions of non-human animals is harder to assess. In particular, modern factory farming involves the mistreatment of large numbers of animals. 3. There are, however, examples of 'cultures' or local populations whose demise may have been brought about (at least partially) by their own technological practices, such as Easter Island (Rapa Nui) people and the Ancestral Puebloans in Mesa Verde (Anasazi), who, according to Diamond (2005) , cut down their own forests and then suffered environmental collapse. 4. Examples may however be found in other species, if we consider evolutionary adaptations as inventions. For instance, there is a literature exploring how evolutionary dead ends, leading to the extinction of a species or a population, may ensue from advantageous evolutionary changes such as ones involved in specialization (in which adaptation to a narrow niche may entail an irreversible loss of traits needed to survive in a wider range of environments) (Day et al., 2016) , the emergence of inbred social systems (among e.g. social spiders) (Aviles and Purcell, 2012) , or a switch to selfing (e.g. among flowering plant species transitioning from outcrossing to self-fertilization) (Igic and Busch, 2013) . 5. Although most scientists involved in the project favored proposals such as the Baruch Plan, which would have placed nuclear energy under international control, they retained little decision-making power at this point. 6. Metaphorically, of course. But arguably in the metaphor, there should be more than one trembling finger on each side, given the widely delegated command and control (Ellsberg, 2017; Schlosser, 2013) . 7. However, within a given state, the number of actors who are empowered to launch nuclear attacks may be quite large. Ellsberg (2017) claims that, for at least a significant portion of the Cold War, the authority to launch nuclear weapons was delegated multiple rungs down the American chain of command. The number of officers with the physical ability to launch nuclear weapons, although not the authority to do so, was also necessarily larger. In the Soviet Union, at one point during the coup against Mikhail Gorbachev in August 1991, all three of the USSR's Chegets ('nuclear briefcases') were in the hands of coup leaders (Sokoski and Tertrais, 2013; Stevenson, 2008) . 8. 
An 'information hazard' is a risk arising from the dissemination of true information, for instance because the information could enable some agents to cause harm. Bostrom (2011) discusses information hazards more generally. 9. They might argue that openness would be a benefit since it would allow more people to work on countermeasures (cf. the debate around gain of function work on flu viruses; (Duprex et al., 2015; Fauci et al., 2011; Sharp, 2005) ). They might also argue that, so long as the government continues to justify draconian actions by referencing secret information, it will be dangerously unaccountable to its citizens. A similar belief motivated the American magazine The Progressive's decision in the late 1970s to publish secrets about the hydrogen bomb, even in the face of a legal challenge by the US Department of Energy. The author of the piece, Howard Morland, wrote: 'Secrecy itself, especially the power of a few designated 'experts' to declare some topics off-limits, contributes to a political climate in which the nuclear establishment can conduct business as usual, protecting and perpetuating the production of these horror weapons ' Morland (1979, p. 3 ). 10. Generally, in cases where multiple actors each have some independent probability of taking an action unilaterally, the probability that the action will be taken tends to one as the number of actors increases. When this phenomenon arises for actors with shared goals but discordant judgments, due to randomness in the evidence they are exposed to or the reasoning they carry out, there arises a 'unilateralist's curse' (Bostrom et al., 2016) . The curse implies that even a very unwise decision, such as the decision to publish nuclear weapons designs, is likely to be made if enough actors are in a position to take it unilaterally. 11. Many of these same motivations are evident today among 'black hat' hackers who carry out malicious cyber attacks. For instance, as a method of extortion, some anonymous hackers have proven themselves willing to remove cities' abilities to provide vital services to their residents (Blinder and Perlroth, 2018) . Motivations for economically damaging cyber attacks have also seemed to include both political ideology and curiosity. Since contemporary cyber attacks are dramatically less destructive than attacks with nuclear weapons, the set of actors that would be willing to use nuclear weapons or threaten their use is surely much smaller than the set of actors willing to engage in malicious hacking. Nevertheless, the social and psychological factors relevant to both cases may be similar. 12. This concept is distinct from that of international anarchy in the field of international relations. The present concept emphasizes that anarchy is a matter of degree and is meant to be relatively neutral as between different schools of thought in IR (cf. Lechner, 2017) ). More importantly, it encompasses a lack of governance not just 'at the top' but also 'at the bottom'. That is to say, the semi-anarchic default condition refers to the fact that in the current world order, not only is there a degree of anarchy at the international level, because of lack of global governance or other fully effective means of constraining the actions of state and solving global coordination problems, but there is also a degree of anarchy at the level of individuals (and other sub-state actors) in that even highly functional states currently lack the ability to perfectly regulate the actions of those small actors. 
For example, despite many states seeking to prevent rape and murder within their territory, rape and murder continue to occur with non-zero frequency. The consequences of this degree of anarchy at the bottom could be vastly magnified if individuals obtained much greater destructive capabilities. 13. For comparison, a death toll of 15 per cent of the present world population is more than double the combined effects of World War I, the Spanish Flu, and World War II as a percentage of global population (and the difference is even bigger in absolute terms). A 50 per cent fall in world GDP is greater than the largest drop in recorded history. During the Great Depression, for example, world GDP fell by an estimated 15 per cent or less, and mostly recovered within a few years (though some models suggest that it has also had a long-lasting depressing effect on trade which has chronically impaired the world economy) (Bolt et al., 2018; Crafts and Fearon, 2010). 14. This paper focuses on technological vulnerabilities. There could also be natural vulnerabilities that arise independently of the progress of human civilization, such as a violent meteor barrage set to impact our planet at some future date. Some natural vulnerabilities could be stabilized once our level of technological capability exceeds some threshold (e.g. the ability to deflect meteors). Plausibly, the risk of technological vulnerabilities is greater than the risk of natural vulnerabilities, although the case for this is less clear cut with the severity cutoff of civilizational devastation than it would be if the cutoff were set to existential catastrophe (Bostrom, 2013; Bostrom and Ćirković, 2011). The (big) proviso to this claim (of technological vulnerabilities dominating) is that it presupposes that it is not the case that the world is hemorrhaging value-potential at a significant rate. If instead we evaluate things from a perspective in which what we may term the bleeding world hypothesis is true, then it may well be that the default devastation arising from natural (i.e. non-human) processes dominates the equation. The bleeding world hypothesis could hold if, for example: (1) the evaluator cares a lot about existing people (including self and family) and they are naturally dying off at a substantial rate (e.g. from aging), thereby losing both the ability to continue enjoying their lives and the opportunity for vastly greater levels of well-being such as would become possible at technological maturity; (2) the evaluator cares a lot about avoiding suffering that could be avoided with more advanced technology but is occurring currently, piling up disutility; (3) there is some substantial exogenously set rate of civilizational destruction (e.g. natural disasters, random simulation terminations unrelated to our activities; Bostrom, 2003), and while we allow time to lapse we incur a cumulative risk of being destroyed before maxing out our technological potential; (4) there are ways, using physics we don't currently understand well, to initiate fast-growing processes of value creation (such as by creating an exponential cascade of baby-universes whose inhabitants would be overwhelmingly happy), and the evaluator cares in a scale-sensitive way about such creation; and (5) other superintelligent constituencies, who are in a position to greatly influence things the evaluator cares about, are impatient for us to reach some advancement, but the value they place on this decays rapidly over time. 15.
The world could remain vulnerable after profound technological regress, for instance if many prefabricated nukes remain even after civilization regresses to the point of no longer being capable of manufacturing new ones. 16. It is important to our original 'easy nukes' scenario that each nuclear use requires the efforts of only one individual or of a small group. Although it might require the combined efforts of hundreds of actors to devastate civilization in that scenario (after all, ruining one city or one metropolitan area is not the same as ruining a civilization), these hundreds of actors need not coordinate. This allows the apocalyptic residual to come into play. 17. Baum et al. (2018) provide an up-to-date list of nuclear accidents and occasions on which the use of nuclear weapons was considered. Sagan (1995) provides a more thorough account of dangerous practices throughout the Cold War. Schlosser (2013) examines near-accidents, focusing in particular on one incident which resulted in a non-nuclear detonation of a Titan-II ICBM. 18. Although there have been few scholarly attempts to assess the degree of luck involved in avoiding this outcome, one recent estimate, drawing from a dataset of near-miss instances, places the probability of the US and USSR avoiding nuclear war below 50 per cent (Lundgren, 2013). This is consistent with the views of some officials with insider knowledge of nuclear crises, such as President John F. Kennedy, who expressed the belief that, in hindsight, the Cuban missile crisis had between a one-in-two and a one-in-three chance of leading to nuclear war. Nonetheless, a number of prominent international security scholars, such as Kenneth Waltz and John Mueller, hold that the probability of nuclear war has been consistently very low (Mueller, 2009; Sagan and Waltz, 2012). 19. Perhaps believed erroneously. According to a former Commander of the US Pacific Fleet, there was a period during the Cold War when antisubmarine surveillance became extremely effective: '[The US] could identify by hull number the identity of Soviet subs, and therefore we could do a body count and know exactly where they were. In port or at sea. . . . so I felt comfortable that we had the ability to do something quite serious to the Soviet SSBN force on very short notice in almost any set of circumstances.' (quoted in Ford and Rosenberg, 2005, p. 399). 20. In fact, advances in remote sensing, data processing, AI, drones, and nuclear delivery systems are now threatening to undermine nuclear deterrence, especially for states with relatively small and unsophisticated nuclear arsenals (Lieber and Press, 2017). 21. Not really unscathed, of course: radioactive fallout would affect allies and to a degree the homeland; the economic repercussions would wreak havoc on markets and usher in a worldwide depression. Still, it would be far preferable to being the target of the assault (especially if we set aside nuclear winter). 22. Another possibility is that there would be political gains, such as an increased ability to engage in nuclear coercion against third parties after having demonstrated a willingness to use nuclear weapons. 23. Brooks (1999); Gartzke (2007); Gartzke and Rohner (2011). For a dissenting view, see Liberman (1993). 24.
Human civilization could probably never have arisen if the Earth's climate had been that sensitive to carbon dioxide, since past CO2 levels (4,000 ppm during the Cambrian period compared to about 410 ppm today) would then presumably have seriously disrupted the evolution of complex life. A less remote counterfactual might instead involve some compound that does not occur in significant quantities in nature but is produced by human civilization, such as chlorofluorocarbons. CFCs have been phased out via the Montreal Protocol because of their destructive effect on the ozone layer, but they are also very potent greenhouse gases on a per kilogram basis. So we could consider a counterfactual in which CFCs had been industrially useful on a far greater scale than they were, but with dramatic delayed cumulative effects on global climate. 25. The report commissioned by Oppenheimer ends: 'One may conclude that the arguments of this paper make it unreasonable to expect that the N + N reaction could propagate. An unlimited propagation is even less likely. However, the complexity of the argument and the absence of satisfactory experimental foundation make further work on the subject highly desirable' (Konopinski et al., 1946). 26. Type-0 could be viewed as the limiting case of a Type-1: it refers to a vulnerability that requires zero ill-intentioned actors in order for civilizational devastation to result, only normally responsible actors who are willing to proceed with using a technology after an ordinary amount of scrutiny has been given to the new technology. 27. And if 10 years, why not permanently? 28. In fact, an account by Albert Speer, the German Minister of Armaments, suggests that Werner Heisenberg discussed the possibility of a runaway chain reaction with Hitler and that this possibility may have further dampened Hitler's enthusiasm for pursuing the bomb (Rhodes, 1986). 29. A real-world version of this kind of Type-2a vulnerability, in which key actors face strategic incentives to take actions that create unwanted risks for civilization, could arise in the context of a race to develop machine superintelligence. In unfavorable circumstances, competitive dynamics could present a leading developer with the choice between launching their own AI before it is safe or relinquishing their lead to some other developer who is willing to take greater risks (Armstrong et al., 2016). 30. For a discussion of how a rational planner would balance consumption growth with safety in various models where growth-inducing innovation also carries a risk of introducing innovations that reduce lifespan, see Jones (2016). 31. Even the 'surprising strangelets' scenario may be confounded by coordination problems, though to a lesser degree than 'Castle Bravo/Trinity test'. The people deciding on science funding allocations may have different priorities than the public that is providing the funding. They might, for example, place a higher value on satisfying intellectual curiosity, relative to the value placed on keeping risks low and providing near-term material benefits to the masses. Principal-agent problems could then result in more funding for particle accelerators than such experiments would get in the absence of coordination problems. Prestige contests between nations, which might in part be viewed as another coordination failure, may also be a driver of basic science funding in general and high-energy physics in particular. 32.
The people alive at the time when the devastation occurs might prefer that it had taken place earlier, before they were born, so that it would all be over and done with and they wouldn't be affected. Their preferences seem to run into a non-identity problem, since if a civilizational devastation event had taken place before they were conceived they would almost certainly not have come into existence (Parfit, 1987). 33. More broadly, many refinements in biotechnological tools and techniques, which make it easier for amateur DIY biohackers to accomplish what previously could only be done by well-resourced professional research labs, come under suspicion from this perspective. It is very questionable whether the benefits of DIY biohacking (glow-in-the-dark house plants?) are worth proliferating the ability to turn bioengineering to potentially risky or malicious purposes to an expanded set of relatively unaccountable actors. 34. The counterargument that 'if I don't develop it, somebody else will; so I might as well do it' tends to overlook the fact that a given scientist or developer has at least some marginal impact on the expected timing of the new discovery. If it really were the case that a scientist's efforts could make no difference to when the discovery or invention is made, it would appear that the efforts are a waste of time and resources, and should be discontinued for that reason. A relatively small shift in when some technological capability becomes available (say, one month) could be important in some scenarios (such as if the dangerous technology imposes a significant risk per month until effective defenses are developed and deployed). 35. By 'technological maturity' we mean the attainment of capabilities affording a level of economic productivity and control over nature close to the maximum that could feasibly be achieved (in the fullness of time) (Bostrom, 2013). 36. Access control in bioscience has grown in importance since the 2001 'Amerithrax' incident. In the United States, institutions handling dangerous pathogens are obliged to assess suitability for employees who will have access, who are also vetted by federal agencies (Federal Select Agent Program, 2017), and similar approaches are recommended to countries developing their biosecurity infrastructure (Centre for Biosecurity and Biopreparedness, 2017). The existing regime suffers from two shortcomings: first, there is no global coordination, so bad actors could 'shop around' for laxer regulatory environments; second, the emphasis remains on access to biological materials (e.g. samples of certain microorganisms), whereas biological information and technology are increasingly the principal object of security concern (Lewis et al., 2019). 37. A value of X substantially less than 1 per cent seems consistent with how little most people give to global charity. It is possible, however, that an act-omission distinction would make people willing to accept a substantially larger personal sacrifice in order not to contribute to a global bad than they would in order to contribute to a global good. 38. Note, however, that a positive shift in the preference distribution, even if insufficient to avert catastrophe by simply making some individual actors not choose the destructive option, could have important indirect effects.
For example, if a large number of people became slightly more benevolently inclined, this might shift society into a more cooperative equilibrium that would support stronger governance-based stabilization methods such as the ones we discuss below (cf. 'moral enhancements'; Persson and Savulescu, 2012). 39. At a global level, we find a patchwork of national classification schemes and information control systems. They are generally designed to protect military and intelligence secrets, or to prevent embarrassing facts about regime insiders from being exposed to the public, not to regulate the spread of scientific or technological insights. There are some exceptions, particularly in the case of technical information that bears directly on national security. For instance, the Invention Secrecy Act of 1951 in the United States gives defense agencies the power to bar the award of a patent and order that an invention be kept secret; though an inventor who refrains from seeking patent protection is not subject to these strictures (Parker and Jacobs 2003). Nuclear inventions are subject to the 'born secret' provision of the Atomic Energy Act of 1946, which declares all information concerning the design, development, and manufacture of nuclear weapons, regardless of origin, classified unless it has been officially declassified (Parker and Jacobs 2003). Other legal tools, such as export controls, have also been used in attempts to stem the flow of scientific information. The (unsuccessful) efforts of multiple US government agencies to block the publication and use of strong encryption protocols developed in the 1970s and 1980s provide one notable example (Banisar, 1999). Voluntary self-censorship by the scientific community has been attempted on very rare occasions. Leo Szilard had some partial successes in convincing his physicist colleagues to refrain from publishing on aspects of nuclear fission (before the start of the Manhattan Project and the onset of official secrecy), though he encountered opposition from some scientists who wanted their own work to appear in journals or who felt that openness was a sacred value in science. More recently, there were some attempts at scientific self-censorship in relation to avian flu research (Gronvall, 2013). In this case, the efforts may have been not only ineffectual but counterproductive, inasmuch as the controversy sparked by open debate about whether certain results should be published drew more attention to those results than they would have received if publication had proceeded unopposed: the so-called 'Streisand effect'. Overall, attempts at scientific self-censorship appear to have been fairly half-hearted and ineffectual. (I say 'appear' because of how things unfolded in the publicly known episodes where censorship was attempted. But truly successful attempts to suppress scientific information wouldn't necessarily show up in the public record.) Even if a few journal editors could agree on standards for how to deal with papers that pose information hazards, nothing would prevent a frustrated author from sending her manuscript to another journal with lower standards or from publishing it on her personal Internet page. Most scientific communities have neither the culture, nor the incentives, nor the expertise in security and risk assessment, nor the institutional enforcement mechanisms that would be required for dealing effectively with infohazards.
The scientific ethos is rather this: every ball must be extracted from the urn as quickly as possible and revealed to everyone in the world immediately; the more this happens, the more progress has been made; and the more you contribute to this, the better a scientist you are. The possibility of a black ball does not enter into the equation. 40. In any case, it is unclear whether we would really want to be more cautious in general. It might be desirable (from various evaluative perspectives) to encourage greater caution specifically in situations where there could be extreme global downsides. Yet exhortations to exercise voluntary caution and restraint in these causes may not be very effective if the reason for the normatively excessive risk-taking is a coordination problem: the risk-taker gaining some private benefit (e.g. profit or prestige) while generating a global risk externality. In such cases, therefore, the solution may require a strengthening of global governance capacity. 41. It is also possible for risk assessment work to increase the level of risk, by generating information hazards (Bostrom, 2011) . 42. The Orwellian-sounding name is of course intentional, to remind us of the full range of ways in which such a system could be applied. 43. Implementation details are for illustration only. For example, similar functionality could be provided by mixed reality eyeglasses instead of a necklace. Versions of the device could be designed that would provide many benefits to the user along with its surveillance function. In theory, some of the monitoring could be crowd-sourced: when suspicious activity is detected by the AI, the video feed is anonymized and sent to a random 100 citizens, whose duty is to watch the feed and vote on whether it warrants further investigation; if at least 10 per cent of them think it does, the (non-anonymized) feed gets forwarded to the authorities. 44. Examples of 'conveniences' that will plausibly drive more intrusive surveillance include various kinds of consumer applications and economically useful or profitable monitoring (e.g. for ad targeting, price discrimination, etc.); the ability to prevent various things that cause public outrage, such as child abuse or small-scale terrorism; and, especially for authoritarian regimes, the ability to suppress political opposition. 45. A milder version of the policy might merely debar such weak suspects from accessing the equipment and materials necessary to produce the destructive effect. The extent to which this might suffice depends on the details of the scenario. 46. A partial implementation of the High-tech Panopticon might replace incarceration in this scenario, in which only those on some long list of 'individuals of heightened concern' were required to wear the freedom tags. 47. Of course, it is theoretically possible that either of these remedies would raise rather than lower civilization's total vulnerability to a potential black ball, for example, if adequate global coordination made extremely effective national policing less likely, or vice versa. The character of the regimes that would tend to arise under conditions of stronger preventive policing or global governance could also differ from those in the status quo in ways that would increase or decrease some civilizational vulnerabilities. For example, surveillance-empowered world leaders might be more or less prone to taking foolish decisions that increase Type-0 vulnerabilities. 48. 
A special case of Type-2a vulnerability is one in which some set of regimes jointly achieve the devastational threshold by harming their own populations. Suppose, for instance, that one held an extremely pessimistic view of political leaders, and thought that they would be willing to kill an extremely large fraction of their own populations if doing so would help them hold on to power or gain more resources. Whereas today such a genocidal initiative would usually be counterproductive from the leader's point of view (because it would spark revolts and crash the economy), one could imagine a different technological environment in which these restraints would be loosened: for example, if a highly centralized AI police force could reliably suppress any resistance and if robots could easily replace human workers. (Fortunately, it would appear that in many scenarios where these things become technologically feasible, the ruler's incentives for genocidal actions would also be weakened: the hypothesized AI police force would presumably enable the ruler to maintain power without killing off large parts of the population, and automation of the economy would greatly increase wealth so that a smaller fraction of national income would suffice to give all citizens a high standard of living.) 49. For example, surveillance debates often focus on the tradeoffs between the privacy interests of individuals and public demand for security against small-scale terrorist attacks. (Even terrorist incidents that are usually regarded as large, such as the 9/11 attacks, are insignificantly small-scale by the standards used in this paper.) 50. According to Schwartz (1998), the nuclear arms race during the Cold War cost US$5.8 trillion (in 1996 dollars) in American expenditures alone, which is equivalent to US$9.3 trillion in 2018 dollars. This estimate is quite comprehensive and covers the fuel cycle, weapons, delivery systems, decommissioning, etc. If we add the expenditures of other countries, we can conclude that human civilization spent well in excess of US$10 trillion on developing and maintaining a capacity to destroy itself with nuclear arms. Most of the cost was incurred in a period when world GDP was substantially lower than it is today. Even larger amounts were spent on non-nuclear military capabilities (over US$20 trillion in 2018 dollars in the US alone). It is possible that the nuclear expenditures saved money on balance by reducing non-nuclear military spending. Both nuclear and non-nuclear military spending reflect the failure of human civilization to solve global coordination problems. 51. Substantially rather than entirely, since even if all nuclear weapons were dismantled, new ones might be created. 52. An agreement for total nuclear disarmament might, of course, have to involve some provisions about conventional forces and other matters as well, so as not to endanger strategic stability. 53. One might look at other historical examples to obtain a larger reference class. The world's efforts so far with respect to combating global warming do not inspire confidence in its ability to deal expeditiously with even more difficult global collective action problems. On the other hand, the problem of ozone depletion was successfully addressed with the Montreal Protocol. 54. Unilateral imposition may be faster, but it requires that some actor has the capability to impose its will single-handedly on the rest of the world.
If one actor has such an overwhelming power advantage, a form of de facto (weak or latent) global governance is presumably already in place. 55. For example, a well-intentioned project may be subverted in its implementation; or it might turn out to have bugs or institutional design flaws that become apparent only after a period of normal operation. Even if the system itself functions precisely as intended and remains uncorrupted, it might inspire the creation of other surveillance systems that do not have the same democratic safeguards.", "date_published": "n/a", "url": "n/a", "filename": "vulnerable.tei.xml", "abstract": "Scientific and technological progress might change people's capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the 'semi-anarchic default condition'. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology. A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance. The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order. \n Policy Implications • Technology policy should not unquestioningly assume that all technological progress is beneficial, or that complete scientific openness is always best, or that the world has the capacity to manage any potential downside of a technology after it is invented. • Some areas, such as synthetic biology, could produce a discovery that suddenly democratizes mass destruction, e.g. by empowering individuals to kill hundreds of millions of people using readily available materials. In order for civilization to have a general capacity to deal with \"black ball\" inventions of this type, it would need a system of ubiquitous real-time worldwide surveillance. In some scenarios, such a system would need to be in place before the technology is invented. • Partial protection against a limited set of possible black balls is obtainable through more targeted interventions. For example, biorisk might be mitigated by means of background checks and monitoring of personnel in some types of biolab, by discouraging DIY biohacking (e.g. through licencing requirements), and by restructuring the biotech sector to limit access to some cutting-edge instrumentation and information. Rather than allow anybody to buy their own DNA synthesis machine, DNA synthesis could be provided as a service by a small number of closely monitored providers.
• Another, subtler, type of black ball would be one that strengthens incentives for harmful use-e.g. a military technology that makes wars more destructive while giving a greater advantage to the side that strikes first. Like a squirrel who uses the times of plenty to store up nuts for the winter, we should use times of relative peace to build stronger mechanisms for resolving international disputes.", "id": "77dfaf1001d9a1a7c8b47db722c5121e"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Nick Bostrom"], "title": "Existential Risk Prevention as Global Priority", "text": "The maxipok rule Existential risk and uncertainty An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development . Although it is often difficult to assess the probability of existential risks, there are many reasons to suppose that the total such risk confronting humanity over the next few centuries is significant. Estimates of 10-20 per cent total existential risk in this century are fairly typical among those who have examined the issue, though inevitably such estimates rely heavily on subjective judgment. 1 The most reasonable estimate might be substantially higher or lower. But perhaps the strongest reason for judging the total existential risk within the next few centuries to be significant is the extreme magnitude of the values at stake. Even a small probability of existential catastrophe could be highly practically significant Matheny, 2007; Posner, 2004; Weitzman, 2009) . Humanity has survived what we might call natural existential risks for hundreds of thousands of years; thus it is prima facie unlikely that any of them will do us in within the next hundred. 2 This conclusion is buttressed when we analyse specific risks from nature, such as asteroid impacts, supervolcanic eruptions, earthquakes, gamma-ray bursts, and so forth: Empirical impact distributions and scientific models suggest that the likelihood of extinction because of these kinds of risk is extremely small on a time scale of a century or so. 3 In contrast, our species is introducing entirely new kinds of existential risk-threats we have no track record of surviving. Our longevity as a species therefore offers no strong prior grounds for confident optimism. Consideration of specific existential-risk scenarios bears out the suspicion that the great bulk of existential risk in the foreseeable future consists of anthropogenic existential risks-that is, those arising from human activity. In particular, most of the biggest existential risks seem to be linked to potential future technological breakthroughs that may radically expand our ability to manipulate the external world or our own biology. As our powers expand, so will the scale of their potential consequences-intended and unintended, positive and negative. For example, there appear to be significant existential risks in some of the advanced forms of biotechnology, molecular nanotechnology, and machine intelligence that might be developed in the decades ahead. The bulk of existential risk over the next century may thus reside in rather speculative scenarios to which we cannot assign precise probabilities through any rigorous statistical or scientific method. But the fact that the probability of some risk is difficult to quantify does not imply that the risk is negligible. Probability can be understood in different senses. 
Most relevant here is the epistemic sense in which probability is construed as (something like) the credence that an ideally reasonable observer should assign to the risk's materialising based on currently available evidence. 4 If something cannot presently be known to be objectively safe, it is risky at least in the subjective sense relevant to decision making. An empty cave is unsafe in just this sense if you cannot tell whether or not it is home to a hungry lion. It would be rational for you to avoid the cave if you reasonably judge that the expected harm of entry outweighs the expected benefit. The uncertainty and error-proneness of our first-order assessments of risk is itself something we must factor into our all-things-considered probability assignments. This factor often dominates in low-probability, highconsequence risks-especially those involving poorly understood natural phenomena, complex social dynamics, or new technology, or that are difficult to assess for other reasons. Suppose that some scientific analysis A indicates that some catastrophe X has an extremely small probability P(X) of occurring. Then the probability that A has some hidden crucial flaw may easily be much greater than P(X). 5 Furthermore, the conditional probability of X given that A is crucially flawed, P(X |ØA), may be fairly high. We may then find that most of the risk of X resides in the uncertainty of our scientific assessment that P(X) was small (Figure 1 ) (Ord, Hillerbrand and Sandberg, 2010) . \n Qualitative risk categories Since a risk is a prospect that is negatively evaluated, the seriousness of a risk-indeed, what is to be regarded as risky at all-depends on an evaluation. Before we can determine the seriousness of a risk, we must specify a standard of evaluation by which the negative value of a particular possible loss scenario is measured. There are several types of such evaluation standard. For example, one could use a utility function that represents some particular agent's preferences over various outcomes. This might be appropriate when one's duty is to give decision support to a particular decision maker. But here we will consider a normative evaluation, an ethically warranted assignment of value to various possible outcomes. This type of evaluation is more relevant when we are inquiring into what our society's (or our own individual) risk-mitigation priorities ought to be. There are conflicting theories in moral philosophy about which normative evaluations are correct. I will not here attempt to adjudicate any foundational axiological disagreement. Instead, let us consider a simplified version of one important class of normative theories. Let us suppose that the lives of persons usually have some significant positive value and that this value is aggregative (in the sense that the value of two similar lives is twice that of one life). Let us also assume that, holding the quality and duration of a life constant, its value does not depend on when it occurs or on whether it already exists or is yet to be brought into existence as a result of future events and choices. These assumptions could be relaxed and complications could be introduced, but we will confine our discussion to the simplest case. 
Within this framework, then, we can roughly characterise a risk's seriousness using three variables: scope (the size of the population at risk), severity (how badly this population would be affected), and probability (how likely the disaster is to occur, according to the most reasonable judgment, given currently available evidence). Using the first two of these variables, we can construct a qualitative diagram of different types of risk (Figure 2 ). Source: Ord et al., 2010. Factoring in the fallibility of our firstorder risk assessments can amplify the probability of risks assessed to be extremely small. An initial analysis (left side) gives a small probability of a disaster (black stripe). But the analysis could be wrong; this is represented by the grey area (right side). Most of the all-things-considered risk may lie in the grey area rather than in the black stripe. (The probability dimension could be displayed along the z-axis.) The area marked 'X' in Figure 2 represents existential risks. This is the category of risks that have (at least) crushing severity and (at least) pan-generational scope. 6 As noted, an existential risk is one that threatens to cause the extinction of Earth-originating intelligent life or the permanent and drastic failure of that life to realise its potential for desirable development. In other words, an existential risk jeopardises the entire future of humankind. \n Magnitude of expected loss in existential catastrophe Holding probability constant, risks become more serious as we move toward the upper-right region of Figure 2 . For any fixed probability, existential risks are thus more serious than other risk categories. But just how much more serious might not be intuitively obvious. One might think we could get a grip on how bad an existential catastrophe would be by considering some of the worst historical disasters we can think of-such as the two world wars, the Spanish flu pandemic, or the Holocaust-and then imagining something just a bit worse. Yet if we look at global population statistics over time, we find that these horrible events of the past century fail to register (Figure 3 ). But even this reflection fails to bring out the seriousness of existential risk. What makes existential catastrophes especially bad is not that they would show up robustly on a plot like the one in Figure 3 , causing a precipitous drop in world population or average quality of life. Instead, their significance lies primarily in the fact that they would destroy the future. The philosopher Derek Parfit made a similar point with the following thought experiment: I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes: 1. Peace. 2. A nuclear war that kills 99 per cent of the world's existing population. 3. A nuclear war that kills 100 per cent. 2 would be worse than 1, and 3 would be worse than 2. Which is the greater of these two differences? Most people believe that the greater difference is between 1 and 2. I believe that the difference between 2 and 3 is very much greater. The Earth will remain habitable for at least another billion years. Civilisation Source: Author. 
Note: The scope of a risk can be personal (affecting only one person), local (affecting some geographical region or a distinct group), global (affecting the entire human population or a large part thereof), trans-generational (affecting humanity for numerous generations, or pan-generational (affecting humanity over all, or almost all, future generations). The severity of a risk can be classified as imperceptible (barely noticeable), endurable (causing significant harm but not completely ruining quality of life), or crushing (causing death or a permanent and drastic reduction of quality of life). began only a few thousand years ago. If we do not destroy mankind, these few thousand years may be only a tiny fraction of the whole of civilised human history. The difference between 2 and 3 may thus be the difference between this tiny fraction and all of the rest of this history. If we compare this possible history to a day, what has occurred so far is only a fraction of a second (Parfit, 1984, pp. 453-454) . To calculate the loss associated with an existential catastrophe, we must consider how much value would come to exist in its absence. It turns out that the ultimate potential for Earth-originating intelligent life is literally astronomical. One gets a large number even if one confines one's consideration to the potential for biological human beings living on Earth. If we suppose with Parfit that our planet will remain habitable for at least another billion years, and we assume that at least one billion people could live on it sustainably, then the potential exist for at least 10 16 human lives of normal duration. These lives could also be considerably better than the average contemporary human life, which is so often marred by disease, poverty, injustice, and various biological limitations that could be partly overcome through continuing technological and moral progress. However, the relevant figure is not how many people could live on Earth but how many descendants we could have in total. One lower bound of the number of biological human life-years in the future accessible universe (based on current cosmological estimates) is 10 34 years. 7 Another estimate, which assumes that future minds will be mainly implemented in computational hardware instead of biological neuronal wetware, produces a lower bound of 10 54 human-brain-emulation subjective life-years (or 10 71 basic computational operations) . 8 If we make the less conservative assumption that future civilisations could eventually press close to the absolute bounds of known physics (using some as yet unimagined technology), we get radically higher estimates of the amount of computation and memory storage that is achievable and thus of the number of years of subjective experience that could be realised. 9 Even if we use the most conservative of these estimates, which entirely ignores the possibility of space colonisation and software minds, we find that the expected loss of an existential catastrophe is greater than the value of 10 16 human lives. This implies that the expected value of reducing existential risk by a mere one millionth of one percentage point is at least a hundred times the value of a million human lives. The more technologically comprehensive estimate of 10 54 humanbrain-emulation subjective life-years (or 10 52 lives of ordinary length) makes the same point even more starkly. 
Even if we give this allegedly lower bound on the cumulative output potential of a technologically mature civilisation a mere 1 per cent chance of being correct, we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives. One might consequently argue that even the tiniest reduction of existential risk has an expected value greater than that of the definite provision of any 'ordinary' good, such as the direct benefit of saving 1 billion lives. And, further, that the absolute value of the indirect effect of saving 1 billion lives on the total cumulative amount of existential risk-positive or negative-is almost certainly larger than the positive value of the direct benefit of such an action. 10 \n Maxipok These considerations suggest that the loss in expected value resulting from an existential catastrophe is so enormous that the objective of reducing existential risks should be a dominant consideration whenever we act out of an impersonal concern for humankind as a whole. It may be useful to adopt the following rule of thumb for such impersonal moral action: Maxipok Maximise the probability of an 'OK outcome', where an OK outcome is any outcome that avoids existential catastrophe. At best, maxipok is a rule of thumb or a prima facie suggestion. It is not a principle of absolute validity, since there clearly are moral ends other than the prevention of existential catastrophe. The principle's usefulness is as an aid to prioritisation. Unrestricted altruism is not so common that we can afford to fritter it away on a plethora of feel-good projects of suboptimal efficacy. If benefiting humanity by increasing existential safety achieves expected good on a scale many orders of magnitude greater than that of alternative contributions, we would do well to focus on this most efficient philanthropy. Note that maxipok differs from the popular maximin principle ('Choose the action that has the best worstcase outcome'). 11 Since we cannot completely eliminate existential risk-at any moment, we might be tossed into the dustbin of cosmic history by the advancing front of a vacuum phase transition triggered in some remote galaxy a billion years ago-the use of maximin in the present context would entail choosing the action that has the greatest benefit under the assumption of impending extinction. Maximin thus implies that we ought all to start partying as if there were no tomorrow. That implication, while perhaps tempting, is implausible. \n Classification of existential risk To bring attention to the full spectrum of existential risk, we can distinguish four classes of such risk: human extinction, permanent stagnation, flawed realisation, and subsequent ruination. We define these in Table 1 below: By 'humanity' we here mean Earth-originating intelligent life and by 'technological maturity' we mean the attainment of capabilities affording a level of economic productivity and control over nature close to the maximum that could feasibly be achieved. \n Human extinction Although it is conceivable that, in the billion or so years during which Earth might remain habitable before being overheated by the expanding sun, a new intelligent species would evolve on our planet to fill the niche vacated by an extinct humanity, this is very far from certain to happen. 
The probability of a recrudescence of intelligent life is reduced if the catastrophe causing the extinction of the human species also exterminated the great apes and our other close relatives, as would occur in many (though not all) human-extinction scenarios. Furthermore, even if another intelligent species were to evolve to take our place, there is no guarantee that the successor species would sufficiently instantiate qualities that we have reason to value. Intelligence may be necessary for the realisation of our future potential for desirable development, but it is not sufficient. All scenarios involving the premature extinction of humanity will be counted as existential catastrophes, even though some such scenarios may, according to some theories of value, be relatively benign. It is not part of the definition of existential catastrophe that it is all-things-considered bad, although that will probably be a reasonable supposition in most cases. Above, we defined 'humanity' as Earth-originating intelligent life rather than as the particular biologically defined species Homo sapiens. 13 The reason for focusing the notion of existential risk on this broader concept is that there is no reason to suppose that the biological species concept tracks what we have reason to value. If our species were to evolve, or use technology to selfmodify, to such an extent that it no longer satisfied the biological criteria for species identity (such as interbreedability) with contemporary Homo sapiens, this need not be in any sense a catastrophe. Depending on what we changed into, such a transformation might well be very desirable. Indeed, the permanent foreclosure of any possibility of this kind of transformative change of human biological nature may itself constitute an existential catastrophe. Most discussion of existential risk to date has focused exclusively on the first of the four classes, 'human extinction'. The present framework calls attention to three other failure modes for humanity. Like extinction, these other failure modes would involve pan-generational crushing. They are therefore of comparable seriousness, entailing potentially similarly enormous losses of expected value. \n Permanent stagnation Permanent stagnation is instantiated if humanity survives but never reaches technological maturity-that is, the attainment of capabilities affording a level of economic productivity and control over nature that is close to the maximum that could feasibly be achieved (in the fullness of time and in the absence of catastrophic defeaters). For instance, a technologically mature civilisation could (presumably) engage in large-scale space colonisation through the use of automated self-replicating 'von Neumann probes' (Freitas, 1980; Moravec, 1988; Tipler, 1980) . It would also be able to modify and enhance human biology-say, through the use of advanced biotechnology or molecular nanotechnology (Freitas, 1999 (Freitas, , 2003 . Further, it could construct extremely powerful computational hardware and use it to create wholebrain emulations and entirely artificial types of sentient, superintelligent minds . It might have many additional capabilities, some of which may not be fully imaginable from our current vantage point. 
14 The permanent destruction of humanity's opportunity to attain technological maturity is a prima facie enormous loss, because the capabilities of a technologically mature civilisation could be used to produce outcomes that would plausibly be of great value, such as astronomical numbers of extremely long and fulfilling lives. More specifically, mature technology would enable a far more efficient use of basic natural resources (such as matter, energy, space, time, and negentropy) for the creation of value than is possible with less advanced technology. And mature technology would allow the harvesting (through space colonisation) of far more of these resources than is possible with technology whose reach is limited to Earth and its immediate neighbourhood. We can distinguish various kinds of permanent stagnation scenarios: unrecovered collapse-much of our current economic and technological capabilities are lost and never recovered; plateauing-progress flattens out at a level perhaps somewhat higher than the present level but far below technological maturity; and recurrent collapse-a never-ending cycle of collapse followed by recovery . 15 The relative plausibility of these scenarios depends on various factors. One might expect that even if global civilisation were to undergo a complete collapse, perhaps following a global thermonuclear war, it would eventually be rebuilt. In order to have a plausible permanent collapse scenario, one would therefore need an account of why recovery would not occur. 16 Regarding plateauing, modern trends of rapid social and technological change make such a threat appear less imminent; yet scenarios could be concocted in which, for example, a stable global regime blocks further technological change. 17 As for recurrent-collapse scenarios, they seem to require the postulation of a special kind of cause: one that (1) is strong enough to bring about the total collapse of global civilisation yet (2) is not strong enough to cause human extinction, and that (3) can plausibly recur each time civilisation is rebuilt to a certain level, despite any random variation in initial conditions and any attempts by successive civilisations to learn from their predecessors' failures. The probability of remaining on a recurring-collapse trajectory diminishes with the number of cycles postulated. The longer the time horizon considered (and this applies also to plateauing) the greater the likelihood that the pattern will be ruptured, resulting in either a breakout in the upward direction toward technological maturity or in the downward direction toward unrecovered collapse and perhaps extinction (Figure 4 ). 18 \n Flawed realisation A flawed realisation occurs if humanity reaches technological maturity in a way that is dismally and irremediably flawed. By 'irremediably' we mean that it cannot feasibly be subsequently put right. By 'dismally' we mean that it enables the realisation of but a small part of the value that could otherwise have been realised. Classifying a scenario as an instance of flawed realisation requires a value judgment. We return to this normative issue in the next section. We can distinguish two versions of flawed realisation: unconsummated realisation and ephemeral realisation. In unconsummated realisation, humanity develops mature technology but fails to put it to good use, so that the amount of value realised is but a small fraction of what could have been achieved. 
An example of this kind is a scenario in which machine intelligence replaces biological intelligence but the machines are constructed in such a way that they lack consciousness (in the sense of phenomenal experience) (Bostrom, 2004) . The future might then be very wealthy and capable, yet in a relevant sense uninhabited: There would (arguably) be no morally relevant beings there to enjoy the wealth. Even if consciousness did not altogether vanish, there might be a lot less of it than would have resulted from a more optimal use of resources. Alternatively, there might be a vast quantity of experience but of much lower quality than ought to have been the case: minds that are far less happy than they could have been. Or, again, there might be vast numbers of very happy minds but some other crucial ingredient of a maximally valuable future missing. In ephemeral realisation, humanity develops mature technology that is initially put to good use. But the technological maturity is attained in such a way that the initially excellent state is unsustainable and is doomed to degenerate. There is a flash of value, followed by perpetual dusk or darkness. One way in which ephemeral realisation could result is if there are fractures in the initial state of technological maturity that are bound to lead to a splintering of humanity into competing factions. It might be impossible to reintegrate humanity after such a splintering occurred, and the process of attaining technological maturity might have presented the last and best chance for humanity to form a singleton (Bostrom, 2006) . Absent global coordination, various processes might degrade humanity's long-term potential. One such process is war between major powers, although it is perhaps unlikely that such warring would be never-ending (rather than being eventually terminated once and for all by treaty or conquest). 19 Another such erosive process involves undesirable forms of evolutionary and economic competition in a large ecology of machine intelligences (Hanson, 1994) . Yet another such process is a spacecolonisation race in which replicators might burn up cosmic resources in a wasteful effort to beat out the competition . \n Subsequent ruination For completeness, we register a fourth class of existential risks: subsequent ruination. In scenarios of this kind, humanity reaches technological maturity with a 'good' (in the sense of being not dismally and irremediably flawed) initial setup, yet subsequent developments nonetheless lead to the permanent ruination of our prospects. From a practical perspective, we need not worry about subsequent ruination. What happens after humanity Source: Author. Note: The modern human condition represents a narrow range of the space of possibilities. The longer the time scale considered, the lower the probability that humanity's level of technological development will remain confined within the interval defined at the lower end by whatever technological capability is necessary for survival and at the upper end by technological maturity. reaches technological maturity is not something we can now affect, other than by making sure that humanity does reach it and in a way that offers the best possible prospects for subsequent development-that is, by avoiding the three other classes of existential risk. Nonetheless, the concept of subsequent ruination is relevant to us in various ways. 
For instance, in order to estimate how much expected value is gained by reducing other existential risks by a certain amount, we need to estimate the expected value conditional on avoiding the first three sets of existential risks, which requires estimating the probability of subsequent ruination. The probability of subsequent ruination might be low-and is perhaps extremely low conditional on getting the setup right. One reason is that once we have created many self-sustaining space colonies, any disaster confined to a single planet cannot eliminate all of humanity. Another reason is that once technological maturity is safely reached, there are fewer potentially dangerous technologies left to be discovered. A third reason is that a technologically mature civilisation would be superintelligent (or have access to the advice of superintelligent artificial entities) and thus better able to foresee danger and devise plans to minimise existential risk. While foresight will not reduce risk if no effective action is available, a civilisation with mature technology can take action against a great range of existential risks. Furthermore, if it turns out that attaining technological maturity without attaining singletonhood condemns a civilisation to irreversible degeneration, then if flawed realisation is avoided we can assume that our technologically mature civilisation can solve global-coordination problems, which increases its ability to take effective action to prevent subsequent ruination. The main source of subsequent-ruination risk might well be an encounter with intelligent external adversaries, such as intelligent extraterrestrials or simulators. Note, however, that scenarios in which humanity eventually goes extinct as a result of hard physical limits, such as the heat death of the universe, do not count as subsequent ruination, provided that before its demise humanity has managed to realise a reasonably large part of its potential for desirable development. Such scenarios are not existential catastrophes but rather existential successes. \n Capability and value Some further remarks will help clarify the links between capability, value, and existential risk. \n Convertibility of resources into value Because humanity's future is potentially astronomically long, the integral of losses associated with persistent inefficiencies is very large. This is why flawed-realisation and subsequent-ruination scenarios constitute existential catastrophes even though they do not necessarily involve extinction. 20 It might be well worth a temporary dip in short-term welfare to secure a slightly more efficient long-term realisation of humanity's potential. To avoid flawed realisation, it is more important to focus on maximising long-term efficiency than on maximising the initial output of value in the period immediately following technological maturation. This is because the quantity of value-structure that can be produced at a given time depends not only on the level of technology but also on the physical resources and other forms of capital available at that time. In economics parlance, humanity's production-possibility frontier (representing the various possible combinations of outputs that could be produced by the global economy) depends not only on the global production function (or 'meta-production function') but also on the total amount of all factors of production (labour, land, physical capital goods, etc.) that are available at some point in time. 
With mature technology, most factors of production are interchangeable and ultimately reducible to basic physical resources, but the amount of free energy available to a civilisation imposes hard limits on what it can produce. Since colonisation speed is bounded by the speed of light, a civilisation attaining technological maturity will start with a modest endowment of physical resources (a single planet and perhaps some nearby parts of its solar system), and it will take a very long time-billions of years-before a civilisation starting could reach even 1 per cent of its maximum attainable resource base. 21 It is therefore efficiency of use at later times, rather than in the immediate aftermath of the attainment of technological maturity, that matters most for how much value is ultimately realised. Furthermore, it might turn out that the ideal way to use most of the cosmic endowment that humanity could eventually secure is to postpone consumption for as long as possible. By conserving our accumulated free energy until the universe is older and colder, we might be able to perform some computations more efficiently. 22 This reinforces the point that it would be a mistake to place too much weight on the amount of value generated shortly after technological maturity when deciding whether some scenario should count as a flawed realisation (or a subsequent ruination). It is much more important to get the setup right, in the sense of putting humanity on a track that will eventually garner most of the attainable cosmic resources and put them to near-optimal use. It matters less whether there is a brief delay before that happens-and a delay of even several million years is 'brief' in this context . Even for individual agents, the passage of sidereal time might become less significant after technological maturity. Agents that exist as computational processes in distributed computational hardware have potentially unlimited life spans. The same holds for embodied agents in an era in which physical-repair technologies are sufficiently advanced. The amount of life available to such agents is proportional to the amount of physical resources they control. (A software mind can experience a certain amount of subjective time by running on a slow computer for a long period of sidereal time or, equivalently, by running for a brief period of sidereal time on a fast computer). Even from a so-called 'person-affecting' moral perspective, therefore, when assessing whether a flawed realisation has occurred, one should focus not on how much value is created just after the attainment of technological maturity but on whether the conditions created are such as to give a good prospect of realising a large integral of value over the remainder of the universe's lifetime. \n Some other ethical perspectives We have thus far considered existential risk from the perspective of utilitarianism (combined with several simplifying assumptions). We may briefly consider how the issue might appear when viewed through the lenses of some other ethical outlooks. For example, the philosopher Robert Adams outlines a different view on these matters: I believe a better basis for ethical theory in this area can be found in quite a different direction-in a commitment to the future of humanity as a vast project, or network of overlapping projects, that is generally shared by the human race. The aspiration for a better society-more just, more rewarding, and more peaceful-is a part of this project. 
So are the potentially endless quests for scientific knowledge and philosophical understanding, and the development of artistic and other cultural traditions. This includes the particular cultural traditions to which we belong, in all their accidental historic and ethnic diversity. It also includes our interest in the lives of our children and grandchildren, and the hope that they will be able, in turn, to have the lives of their children and grandchildren as projects. To the extent that a policy or practice seems likely to be favorable or unfavorable to the carrying out of this complex of projects in the nearer or further future, we have reason to pursue or avoid it. … Continuity is as important to our commitment to the project of the future of humanity as it is to our commitment to the projects of our own personal futures. Just as the shape of my whole life, and its connection with my present and past, have an interest that goes beyond that of any isolated experience, so too the shape of human history over an extended period of the future, and its connection with the human present and past, have an interest that goes beyond that of the (total or average) quality of life of a population-at-a-time, considered in isolation from how it got that way. We owe, I think, some loyalty to this project of the human future. We also owe it a respect that we would owe it even if we were not of the human race ourselves, but beings from another planet who had some understanding of it (Adams, 1989, pp. 472-473) . Since an existential catastrophe would either put an end to the project of the future of humanity or drastically curtail its scope for development, we would seem to have a strong prima facie reason to avoid it, in Adams' view. We also note that an existential catastrophe would entail the frustration of many strong preferences, suggesting that from a preference-satisfactionist perspective it would be a bad thing. In a similar vein, an ethical view emphasising that public policy should be determined through informed democratic deliberation by all stakeholders would favour existential-risk mitigation if we suppose, as is plausible, that a majority of the world's population would come to favour such policies upon reasonable deliberation (even if hypothetical future people are not included as stakeholders). We might also have custodial duties to preserve the inheritance of humanity passed on to us by our ancestors and convey it safely to our descendants. 23 We do not want to be the failing link in the chain of generations, and we ought not to delete or abandon the great epic of human civilisation that humankind has been working on for thousands of years, when it is clear that the narrative is far from having reached a natural terminus. Further, many theological perspectives deplore naturalistic existential catastrophes, especially ones induced by human activities: If God created the world and the human species, one would imagine that He might be displeased if we took it upon ourselves to smash His masterpiece (or if, through our negligence or hubris, we allowed it to come to irreparable harm). 24 We might also consider the issue from a less theoretical standpoint and try to form an evaluation instead by considering analogous cases about which we have definite moral intuitions. Thus, for example, if we feel confident that committing a small genocide is wrong, and that committing a large genocide is no less wrong, we might conjecture that committing omnicide is also wrong. 
25 And if we believe we have some moral reason to prevent natural catastrophes that would kill a small number of people, and a stronger moral reason to prevent natural catastrophes that would kill a larger number of people, we might conjecture that we have an even stronger moral reason to prevent catastrophes that would kill the entire human population. Many different normative perspectives thus concur in their support for existential-risk mitigation, although the degree of badness involved in an existential catastrophe and the priority that existential-risk mitigation should have in our moral economy may vary substantially among different moral theories. 26 Note, however, that it is on no account a conceptual truth that existential catastrophes are bad or that reducing existential risk is right. There are possible situations in which the occurrence of one type of existential catastrophe is beneficial-for instance, because it preempts another type of existential catastrophe that would otherwise certainly have occurred and that would have been worse. \n Existential risk and normative uncertainty Whereas the first two classes of existential risk (human extinction and permanent stagnation) are specified by purely descriptive criteria, the second two (flawed realisation and subsequent ruination) are defined normatively. This means that the concept of existential risk is in part an evaluative notion. 27 Where normative issues are involved, these issues may be contentious. Population ethics, for instance, is fraught with problems about how to deal with various parameters (such as population size, average wellbeing, thresholds for what counts as a life worth living, inequality, and same vs. different people choices). The evaluation of some scenarios that involve fundamental transformations of human nature is also likely to be contested (Fukuyama, 2002; Glover, 1984; Kass, 2002; Savulescu and Bostrom, 2009) . Yet not all normative issues are controversial. It will be generally agreed, for example, that a future in which a small human population ekes out a miserable existence within a wrecked ecosystem in the presence of great but unused technological capabilities would count as a dismally flawed realisation of humanity's potential and would constitute an existential catastrophe if not reversed. There will be some types of putative existential risks for which the main uncertainty is normative and others where the main uncertainty is positive. With regard to positive, or descriptive, uncertainty, we saw earlier that if something is not known to be objectively safe, it is risky, at least in the subjective sense relevant to decision making. We can make a parallel move with regard to normative uncertainty. Suppose that some event X would reduce biodiversity. Suppose (for the sake of illustration) it is known that X would have no other significant consequences and that the reduced biodiversity would not affect humans or any other morally considerable beings. Now, we may be uncertain whether biodiversity has final value (is valuable 'for its own sake'). Hence we may be uncertain about whether or not X would really be bad. But we can say that if we are not sure whether or not X would really be bad (but we are sure that X would not be good), then X is bad in at least the subjective sense relevant to decision making. That is to say, we have reason to prefer that X not occur and perhaps reason to take action to prevent X. 
Exactly how one should take into account fundamental moral uncertainty is an open question, but that one should do so is clear . We can thus include as existential risks situations in which we know what will happen and we reasonably judge that what will happen might be existentially bad-even when there would in fact be nothing bad about the outcome. We can highlight one consequence of this: Suppose a fully reliable genie offered to grant humanity any wish it might have for its future. Then-even if we could all agree on one such future-we would still face one more potentially serious existential risk: namely, that of choosing unwisely and selecting a future dismally flawed despite appearing, at the moment of our choice, to be the most desirable of all possible futures. \n Keeping our options alive These reflections on moral uncertainty suggest an alternative, complementary way of looking at existential risk; they also suggest a new way of thinking about the ideal of sustainability. Let me elaborate. Our present understanding of axiology might well be confused. We may not now know-at least not in concrete detail-what outcomes would count as a big win for humanity; we might not even yet be able to imagine the best ends of our journey. If we are indeed profoundly uncertain about our ultimate aims, then we should recognise that there is a great option value in preserving-and ideally improving-our ability to recognise value and to steer the future accordingly. Ensuring that there will be a future version of humanity with great powers and a propensity to use them wisely is plausibly the best way available to us to increase the probability that the future will contain a lot of value. To do this, we must prevent any existential catastrophe. We thus want to reach a state in which we have (1) far greater intelligence, knowledge, and sounder judgment than we currently do; (2) far greater ability to solve global-coordination problems; (3) far greater technological capabilities and physical resources; and such that (4) our values and preferences are not corrupted in the process of getting there (but rather, if possible, improved). Factors 2 and 3 expand the option set available to humanity. Factor 1 increases humanity's ability to predict the outcomes of the available options and understand what each outcome would entail in terms of the realisation of human values. Factor 4, finally, makes humanity more likely to want to realise human values. How we, from our current situation, might best achieve these ends is not obvious (Figure 5 ). While we ultimately need more technology, insight, and coordination, it is not clear that the shortest path to the goal is the best one. It could turn out, for example, that attaining certain technological capabilities before attaining sufficient insight and coordination invariably spells doom for a civilisation. One can readily imagine a class of existentialcatastrophe scenarios in which some technology is discovered that puts immense destructive power into the hands of a large number of individuals. If there is no effective defense against this destructive power, and no way to prevent individuals from having access to it, then civilisation cannot last, since in a sufficiently large population there are bound to be some individuals who will use any destructive power available to them. 
The discovery of the atomic bomb could have turned out to be like this, except for the fortunate fact that the construction of nuclear weapons requires a special ingredient-weapons-grade fissile material-that is rare and expensive to manufacture. Even so, if we continually sample from the urn of possible technological discoveries before implementing effective means of global coordination, surveil-lance, and ⁄ or restriction of potentially hazardous information, then we risk eventually drawing a black ball: an easy-to-make intervention that causes extremely widespread harm and against which effective defense is infeasible. 28 We should perhaps therefore not seek directly to approximate some state that is 'sustainable' in the sense that we could remain in it for some time. Rather, we should focus on getting onto a developmental trajectory that offers a high probability of avoiding existential catastrophe. In other words, our focus should be on maximising the chances that we will someday attain technological maturity in a way that is not dismally and irremediably flawed. Conditional on that attainment, we have a good chance of realising our astronomical axiological potential. To illustrate this point, consider the following analogy. When a rocket stands on the launch pad, it is in a fairly sustainable state. It could remain in its current position for a long time, although it would eventually be destroyed by wind and weather. Another sustainable place for the rocket is in space, where it can travel weightless for a very long time. But when the rocket is in midair, it is in an unsustainable, transitory state: Its engines are blazing and it will soon run out of fuel. Returning the rocket to a sustainable state is desirable, but this does not mean that any way to render its state Sources: Author. Notes: An ideal situation might be one in which we have a very high level of technology, excellent global coordination, and great insight into how our capabilities can be used. It does not follow that getting any amount of additional technology, coordination, or insight is always good for us. Perhaps it is essential that our growth along different dimensions hew to some particular scheme in order for our development to follow a trajectory through the state space that eventually reaches the desired region. more sustainable is desirable. For example, reducing its energy consumption so that it just barely manages to hold stationary might make its state more sustainable in the sense that it can remain in one place for longer; however, when its fuel runs out the rocket will crash to the ground. The best policy for a rocket in midair is, rather, to maintain enough thrust to escape Earth's gravitational field: a strategy that involves entering a less sustainable state (consuming fuel faster) in order to later achieve the most desirable sustainable state. That is, instead of seeking to approximate a sustainable state, it should pursue a sustainable trajectory. The present human condition is likewise a transitional state. Like the rocket in our analogy, humanity needs to pursue a sustainable trajectory, one that will minimise the risk of existential catastrophe. 29 But unlike the problem of determining the optimum rate of fuel consumption in a rocket, the problem of how to minimise existential risk has no known solution. 
\n Outlook We have seen that reducing existential risk emerges as a dominant priority in many aggregative consequentialist moral theories (and as a very important concern in many other moral theories). The concept of existential risk can thus help the morally or altruistically motivated to identify actions that have the highest expected value. In particular, given certain assumptions, the problem of making the right decision simplifies to that of following the maxipok principle. \n Barriers to thought and action In light of this result, which suggests that there may be a very high value in studying existential risks and in analysing potential mitigation strategies, it is striking how little academic attention these issues have received compared to other topics that are less important (Figure 6 ). 30 Many factors conspire against the study and mitigation of existential risks. Research is perhaps inhibited by the multidisciplinary nature of the problem, but also by deeper epistemological issues. The biggest existential risks are not amenable to plug-and-play scientific research methodologies. Furthermore, there are unresolved foundational issues, particularly concerning observation selection theory and population ethics, which are crucial to the assessment of existential risk; and these theoretical difficulties are compounded by psychological factors that make it difficult to think clearly about issues such as the end of humanity. 31 If more resources were to be made available to research existential risks, there is a danger that they would flow, with excessive preponderance, to the relatively minor risks that are easier for some established disciplinary community to study using familiar methods, at the expense of far more important risk areas-machine superintelligence, advanced molecular nanotechnology, totalitarianism, risks related to the simulation-hypothesis, or future advances in synthetic biology-which would require a more inconvenient shift in research focus. Another plausible diversion is that research would mainly be directed at global catastrophic risks that involve little or no existential risk. Mitigation of existential risk is hampered by a lack of understanding, but also by a deficit of motivation. Existential risk mitigation is a global public good (i.e., non-excludable and non-rivalrous), and economic theory suggests that such goods tend to be undersupplied by the market, since each producer of existential safety (even if the producer is a large nation) could capture only a small portion of the value (Feldman, 1980; Kaul, 1999) . In fact, the situation is worse than is the case with many other global public goods in that existential risk reduction is a strongly transgenerational (in fact, pan-generational) public good: even a world state may capture only a small fraction of the benefits-those accruing to currently existing people. The quadrillions of happy people who may come to exist in the future if we avoid existential catastrophe would be willing to pay the present generation astronomical sums in return for a slight increase in our efforts to preserve humanity's future, but the mutually beneficial trade is unfortunately prevented by the obvious transaction difficulties. Moral motivations, too, may fail to measure up to the magnitude of what is at stake. 
The scope insensitivity of our moral sentiments is likely to be especially pronounced when very large numbers are involved: Substantially larger numbers, such as 500 million deaths, and especially qualitatively different scenarios such as the extinction of the entire human species, seem to trigger a different mode of thinking-enter into a 'separate magisterium'. People who would never dream of hurting a child hear of an existential risk, and say, 'Well, maybe the human species doesn't really deserve to survive'. (Yudkowsky, 2008, p. 114) Existential risk requires a proactive approach. The reactive approach-to observe what happens, limit damages, and then implement improved mechanisms to reduce the probability of a repeat occurrence-does not work when there is no opportunity to learn from failure. Instead, we must anticipate emerging dangers, mobilise support for action against hypothetical future harm, and get our precautions sufficiently right the first time. That is a tall order. Few institutions are capable of operating consistently at such a level of effective rationality, and attempts to imitate such proactive behaviour within less perfect institutions can easily backfire. Speculative riskmongering could be exploited to rationalise self-serving aggressive action, expansion of costly and potentially oppressive security bureaucracies, or restrictions of civil liberties that keep societies free and sane. The result of false approximations to the rational ideal could easily be a net increase in existential risk. 32 Multidisciplinary and epistemological challenges, academic distractions and diversions, cognitive biases, freerider problems, moral lethargy and scope-insensitivity, institutional incompetence, and the political exploitation of unquantifiable threats are thus some of the barriers to effective mitigation. To these we can add the difficulty of achieving required levels of global cooperation. While some existential risks can be tackled unilaterally-any state with a space industry could build a global defense against asteroid impacts-other risks require a joint venture between many states. Management of the global climate may require buy-in by an overwhelming majority of industrialised and industrialising nations. Avoidance of arms races and relinquishment of dangerous directions of technological research may require that all States join the effort, since a single defector could annul any benefits of collaboration. Some future dangers might even require that each State monitor and regulate every significant group or individual within its territory. 33 \n Grounds for optimism? A formidable array of obstacles thus clouds the prospect of a clear-headed and effective response to existential risks confronting humanity. Lest the cause be deemed hopeless, we should also take note of some encouraging considerations. We may note, first, that many of the key concepts and ideas are quite new. 34 Before the conceptual and theoretical foundations were in place, support for efforts to research and mitigate existential risk could not build. In many instances, the underlying scientific, technological, and methodological ideas needed for studying existential risks in a meaningful way have also only recently become available. The delayed start helps explain the still primitive state of the art. 
It is arguably only since the detonation of the first atomic bomb in 1945, and the subsequent nuclear buildup during the Cold War, that any significant naturalistic (i.e., non-supernatural) existential risks have arisen-at least if we count only risks over which human beings have some influence. 35 Most of the really big existential risks still seem to lie many years into the future. Until recently, therefore, there may have been relatively little need to think about existential risk in general and few opportunities for mitigation even if such thinking had taken place. Public awareness of the global impacts of human activities appears to be increasing. Systems, processes, and risks are studied today from a global perspective by many scholars-environmental scientists, economists, epidemiologists, demographers, and others. Problems such as climate change, cross-border terrorism, and international financial crises direct attention to global interdependency and threats to the global system. The idea of risk in general seems to have risen in prominence. 36 Given these advances in knowledge, methods, and attitudes, the conditions for securing for existential risks the scrutiny they deserve are unprecedentedly propitious. Opportunities for action may also proliferate. As noted, some mitigation projects can be undertaken unilaterally, and one may expect more such projects as the world becomes richer. Other mitigation projects require wider coordination; in many cases, global coordination. Here, too, some trend lines seem to point to this becoming more feasible over time. There is a long-term historic trend toward increasing scope of political integration-from hunter-gatherer bands to chiefdoms, city states, nation states, and now multinational organisations, regional alliances, various international governance structures, and other aspects of globalisation (Wright, 1999) . Extrapolation of this trend might seem to indicate the eventual creation of a singleton (Bostrom, 2006) . It is also possible that some of the global movements that emerged over the last half century-in particular the peace movement, the environmentalist movement, and various global justice and human-rights movements-will increasingly take on board more generalised concerns about existential risk. 37 Furthermore, to the extent that existential-risk mitigation really is a most deserving cause, one may expect that general improvements in society's ability to recognise and act on important truths will differentially funnel resources into existential-risk mitigation. General improvements of this kind might come from many sources, including developments in educational techniques and online collaboration tools, institutional innovations such as prediction markets, advances in science and philosophy, spread of rationality culture, and biological cognitive enhancement. Finally, it is possible that the cause will at some point receive a boost from the occurrence of a major (nonexistential) catastrophe that underscores the precariousness of the present human condition. That would, needless to say, be the worst possible way for our minds to be concentrated-yet one which, in a multidecadal time frame, must be accorded a non-negligible probability of occurrence. 38 Note 1. One informal poll among mainly academic experts on various global catastrophic risks gave a median estimate of 19 per cent probability that the human species will go extinct before the end of this century . 
These respondents' views are not necessarily representative of the wider expert community. The UK's influential Stern Review on the Economics of Climate Change (2006) used an extinction probability of 0.1 per cent per year in calculating an effective discount rate. This is equivalent to assuming a 9.5 per cent risk of human extinction within the next hundred years (UK Treasury 2006, Chapter 2, Technical Appendix, p. 47). 2. The strength of this consideration is to some extent blunted by the possibility of observation selection effects casting an 'anthropic shadow' on available evidence (Cirkovic, Sandberg and Bostrom, 2010). 3. See Smil, 2008. 4. Probability is thus indexed to time. Quantities that depend on probability, such as the seriousness of a risk, can vary over time as new information becomes available. 5. There is ample historical evidence that apparently sound scientific analyses are sometimes crucially flawed. 6. As indicated in the figure, the axes can be extended to encompass conceptually possible risks that are even more extreme. In particular, pan-generational risks can contain a subclass of risks so destructive that their realisation would not only affect or pre-empt future human generations but would also destroy the potential of the part of the universe that lies in our future light cone to produce intelligent or self-aware beings (cosmic scope). Further, according to some theories of value there can be states of being that are much worse than nonexistence or death (e.g., horrible incurable diseases), so one could in principle extend the x-axis as well (hellish severity). We will not explore these conceptual possibilities in this article. 7. This is based on an accelerating universe with a maximal reachable co-moving distance of 4.74 Gpc, a baryonic matter density of 4.55 × 10^-28 kg/m^3, a luminosity ratio of stars of approximately 100, and 1 planet per 1,000 stars being habitable by 1 billion humans for 1 billion years (Gott et al., 2005; Heyl, 2005). Obviously the values of the last three parameters are debatable, but the astronomical size of the conclusion is little affected by a few orders-of-magnitude change. 8. This uses an estimate by the late futurist Robert Bradbury that a star can power 10^42 operations per second using efficient computers built with advanced nanotechnology. Further, it assumes (along with the cosmological estimates mentioned in the previous footnote) that the human brain has a processing power of 10^17 operations per second and that stars on average last 5 billion years. It does not assume any new star formation. See also (Cirkovic, 2004). 9. For example, if all mass-energy in the accessible universe is saved until the cosmic microwave background temperature ceases to decline (due to the constant horizon temperature of 10^-29 K) and is then used for computation, this would allow up to 10^121 thermodynamically irreversible computations (Krauss and Starkman, 2000). See also (Cirkovic and Radujkov, 2001). 10. We should stress, however, that there are important unresolved issues in aggregative consequentialism-in particular, in relation to infinite values and extremely small chances. We will not discuss these issues here, but in section 5 we will discuss the normative status of the concept of existential risk from some other perspectives. 11.
Following John Rawls, the term 'maximin' is used in a different sense in welfare economics, to denote the principle that (given certain constraints) we ought to opt for the state that maximises the expectation of the worst-off classes (Rawls, 1971) . This version of the principle is not necessarily affected by the remarks in the text. 12. One can refer to this more precisely as 'early' or 'premature' human extinction. Note that humanity can go extinct without instantiating this category if humanity achieves its capability potential and then goes extinct. 13. We may here take 'intelligent' to mean capable of developing language, science, technology, and cumulative culture. 14. It is not required that a technologically mature civilisation actually deploy all of these technologies; it is sufficient that they be available to it, in the sense that the civilisation could easily and quickly develop and deploy them should it decide to do so. Thus, a sufficiently powerful superintelligent-machine civilisation that could rapidly invent and implement these and other relevant technologies would already count as technologically mature. 15. Not strictly never-ending, of course, but a sequence of cycles that goes on for a very long time and ends with human extinction without technological maturity having ever been attained. 16. An unrecovered collapse scenario might postulate that some critical resource for recovery is permanently destroyed, or that the human gene pool irreversibly degenerates, or perhaps that some discovery is made that enables tiny groups to cause such immense destruction that they can bring down civilisation and that the knowledge of this discovery cannot be eradicated. 17. Improved governance techniques, such as ubiquitous surveillance and neurochemical manipulation, might cement such a regime's hold on power to the extent of making its overthrow impossible. 18. Another difficulty for the recurring-collapse hypothesis is to account for the fact that we are in the first technological cycle here on Earth. If it is common for there to be many cycles of collapse and recovery (with similar population sizes) then why do we find ourselves in cycle #1? This kind of anthropic consideration might suggest that extinction or transformation is more likely than one would naively suppose. 19. Even the threat of a war that never erupts could result in much waste, in terms of expenditures on arms and foregone opportunities for collaboration. 20. It is also one reason why permanent stagnation is an existential risk, although permanent stagnation might also preclude survival beyond the time when the Earth becomes uninhabitable, perhaps around a billion years from now due to increasing solar luminosity (Schroder and Smith, 2008) . 21. One potentially significant qualification is that the time to reach the maximum attainable resource base could be shorter if intelligent opposition (such as from extraterrestrial civilisations) emerges that hinders our cosmic expansion. 22. There is a minimum entropy cost associated with the erasure of one bit of information, a cost which declines with temperature. 23. We might also have responsibilities to nonhuman beings, such as terrestrial (and possible extraterrestrial) animals. Although we are not currently doing much to help them, we have the opportunity to do so in the future. 
If rendering aid to suffering nonhuman animals in the natural environment is an important value, then achieving technological maturity in a manner that fails to produce such aid could count as flawed realisation. See McMahan, 2010; Pearce, 2004. 24. There could, from a theological perspective, possibly be a special category of existential risks with a different moral status: catastrophes or apocalypses brought about by divine agency, perhaps as just punishment for our sins. A believer might judge such an event as, on balance, good. However, it seems implausible that mere mortals would be able to thwart God if He really wanted to flatten us, so any physical countermeasures we implement against existential risk would presumably be effective only against natural and anthropogenic existential risks, and we might have no reason to hold back on our naturalistic-risk mitigation efforts for fear of frustrating designs. 25. Although omnicide would at least be impartial, by contrast to genocide which is often racist or nationalist. 26. For example, James Lenman has argued that it is largely a matter of indifference when humankind goes extinct, at least if it does not happen too soon (Lenman, 2002) . 27. In this respect, the concept of existential risk is similar to concepts such as 'democracy' and 'efficient labor market'. A black hole, or a jar of sterile pebbles, is neither a democracy nor an efficient labour market, and we can see that this is so without having to make any normative judgment; yet there may be other objects that cannot be classified as instances or noninstances of these concepts without taking a stand (at least implicitly) on some normative issue. 28. Of course, achieving effective global coordination sufficiently strong to continually monitor the entire world population or indefinitely censor any information deemed hazardous by some authority would (at least in the absence of adequate safeguards) create its own very significant existential risks, such as risks of permanent stagnation or flawed realisation under some repressive totalitarian regime. 29. Ideally, it would do this while achieving the means to commit collective euthanasia, in the fairly unlikely case that, after long and careful collective deliberation, we should decide that a quick end is preferable to continued existence. That might, however, be a beneficial capability only if we had first attained sufficient wisdom not to exercise it erroneously. We should emphasise the need for continued philosophical deliberation and fostering of conditions that would help us find the truth about central normative issues eventually-as well as the need to avoid irrevocable mistakes in the meantime. 30. Scholarly treatments of existential risk per se, or even of human-extinction risk, are rare (e.g., Leslie, 1996; Matheny, 2007; Wells, 2009) . However, a great deal of academic literature bears on individual existential risks or on other spe-cific issues relevant to many existential risks (a few of which are cited throughout this article). In addition, some recent works take a broad look at global catastrophic risks, though without restricting the focus to existential risks (e.g., Bostrom and Cirkovic, 2008; Diamond, 2006; Homer-Dixon, 2007; Posner, 2004; Sunstein, 2009; World Economic Forum, 2011) . 31. 
Relevant issues related to observation selection effects include, among others, the Carter-Leslie doomsday argument, the simulation argument, and 'great filter' arguments; see Bostrom, 2008; Carter, 1983; Cirkovic et al., 2010; Leslie, 1996; Tegmark and Bostrom, 2005. For some relevant issues in moral philosophy, see, e.g., For a review of the cognitive-biases literature as it relates to catastrophic risk, see Yudkowsky, 2008. 32. A possible way around this problem involves trying to hold the total amount of risk concern roughly constant while allocating a greater proportion of the pot of 'fear tokens' or 'concern chips' to existential risk. Thus, one might advocate that as we become more concerned about existential risk, we ought simultaneously to become less concerned about smaller risks, such as a few thousand people dying in the odd terrorist attack or natural disaster. 33. Such internal control within States will become more feasible with advances in surveillance technology. As noted, preventing States with such capabilities from becoming oppressive will present its own set of challenges. 34. Including the very notion of existential risk. 35. One could argue that pandemics and close encounters with comets, which occurred repeatedly in human history and elicited strong end-of-the-world forebodings, should count as large early existential risks. Given the limited information then available, it might not have been unreasonable for contemporary observers to assign a significant probability to the end being nigh. Religious doomsday scenarios could also be considered; perhaps it was not unreasonable to believe, on the basis of the then-available evidence, that these risks were real and, moreover, that they could be mitigated through such actions as repentance, prayer, sacrificial offerings, persecution of witches or infidels, and so forth. The first clear-cut scientific existential risk might have arisen with the development of the atomic bomb. Robert Oppenheimer, the scientific leader of the Manhattan Project, ordered a study ahead of the Trinity test to determine whether a nuclear detonation would cause a self-propagating chain of nuclear reactions in Earth's atmosphere. The resulting report may represent the first quantitative risk assessment of human extinction (Manhattan Project, 1946). 36. Some sociologists have gone so far as to fixate on risk as a central thematic of our age; see, e.g., Beck, 1999. 37. Many peace activists opposing the nuclear arms race during the Cold War explicitly fretted about a nuclear Armageddon that could allegedly end all human life. More recently some environmentalists sounding the alarm about global warming use similarly apocalyptic language. It is unclear, however, to what extent the perceived possibility of a species-ending outcome has been a major motivating force in these cases. Perhaps the amount of concern would be roughly the same even in the face of an iron-clad guarantee that any catastrophe would stop short of human extinction. 38. I am grateful for comments and discussion to Seth Baum, Nick Beckstead, Milan Cirkovic, Olle Häggström, Sara Lippincott, Gaverick Matheny, Toby Ord, Derek Parfit, Martin Rees, Rebecca Roache, Anders Sandberg, and Carl Shulman. \n Figure 1. Meta-level uncertainty. \n Figure 2. Qualitative risk categories. \n Figure 3. World population over the last century. \n Figure 4. Collapse recurring indefinitely? \n Figure 5. The challenge of finding a safe path.
\n Figure 6. Academic prioritisation. \n Table 1. Classes of existential risk
Human extinction: Humanity goes extinct prematurely, i.e., before reaching technological maturity. 12
Permanent stagnation: Humanity survives but never reaches technological maturity. Subclasses: unrecovered collapse, plateauing, recurrent collapse.
Flawed realisation: Humanity reaches technological maturity but in a way that is dismally and irremediably flawed. Subclasses: unconsummated realisation, ephemeral realisation.
Subsequent ruination: Humanity reaches technological maturity in a way that gives good future prospects, yet subsequent developments cause the permanent ruination of those prospects.
Source: Author.
The introduction of deep learning has achieved unprecedented success in solving many problems that were intractable in the field of RL, such as playing Atari games from pixels and performing robotic control tasks (Mnih et al., 2015; Lillicrap et al., 2015). Unfortunately, similar to the case of deep neural network classifiers with adversarial examples, recent studies show that deep RL agents are also vulnerable to adversarial attacks. A commonly-used threat model allows the adversary to manipulate the agent's observations at every time step, where the goal of the adversary is to decrease the agent's total accumulated reward. As a pioneering work in this field, Huang et al. (2017) show that by leveraging the FGSM attack on each time frame, an agent's average reward can be significantly decreased with small adversarial input perturbations in five Atari games. Lin et al. (2017) further improve the efficiency of the attack in Huang et al. (2017) by leveraging heuristics for detecting a good time to attack and by luring agents to bad states with sample-based Monte-Carlo planning on a trained generative video prediction model. Since the agents in Atari games have discrete actions (Huang et al., 2017; Lin et al., 2017), the problem of attacking Atari agents often reduces to the problem of finding adversarial examples on image classifiers, as also pointed out in Huang et al. (2017): the adversary crafts input perturbations that drive the agent's action to deviate from its nominal action. However, for agents with continuous actions, the above strategies cannot be directly applied. Recently, Uesato et al. (2018) studied the problem of adversarial testing for continuous control domains in a similar but slightly different setting. Their goal was to efficiently and effectively find catastrophic failures given a trained agent and to predict their probability. The key to success in Uesato et al. (2018) is the availability of the agent's training history; however, such information may not always be accessible to users, analysts, or adversaries. Moreover, although it may not be surprising that adversarial attacks exist for deep RL agents, given that such attacks have been shown to be possible for neural network models in various supervised learning tasks, the vulnerability of RL agents cannot easily be discovered by existing baselines, which are model-free and built upon random searches and heuristics. This is also verified by our extensive experiments on various domains (e.g. walker, humanoid, cartpole, and fish), where the agents still achieve close to their original best rewards even with baseline attacks at every time step. Hence it is important to have a systematic methodology for designing non-trivial adversarial attacks that can efficiently and effectively discover the vulnerabilities of deep RL agents; this is the motivation of this work. This paper takes a first step in this direction by proposing the first sample-efficient model-based adversarial attack. Specifically, we study the robustness of deep RL agents in a more challenging setting where the agent has continuous actions and its training history is not available. We consider threat models in which the adversary is allowed to manipulate an agent's observations or actions with small perturbations, and we propose a two-step algorithmic framework to find efficient adversarial attacks based on learned dynamics models.
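For illustration, the per-frame FGSM-style attack described above for discrete-action Atari agents can be sketched in a few lines. This is a minimal reconstruction, not code from Huang et al. (2017); the names ToyPolicy and fgsm_observation_attack, the network sizes, and the budget value are our own placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyPolicy(nn.Module):
    # Stand-in for a discrete-action policy network (illustrative only).
    def __init__(self, obs_dim=8, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

    def forward(self, obs):
        return self.net(obs)  # action logits

def fgsm_observation_attack(policy, obs, eps):
    # One gradient step that pushes the policy away from its current greedy action.
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    nominal_action = logits.argmax(dim=-1)           # the action the agent would have taken
    loss = F.cross_entropy(logits, nominal_action)   # maximise this to degrade the choice
    loss.backward()
    return (obs + eps * obs.grad.sign()).detach()    # x + eps * sign(grad), the FGSM step

if __name__ == "__main__":
    torch.manual_seed(0)
    policy, obs = ToyPolicy(), torch.randn(1, 8)
    adv_obs = fgsm_observation_attack(policy, obs, eps=0.05)
    # Greedy actions before and after the perturbation (they may or may not differ for a toy net).
    print(policy(obs).argmax(dim=-1).item(), policy(adv_obs).argmax(dim=-1).item())

The sketch also makes the limitation visible: it needs a nominal action class to push the policy away from, so a deterministic continuous-action policy offers no direct analogue, which is one reason the strategies above do not carry over.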
Experimental results show that our proposed model-based attack can successfully degrade agent performance and is also more effective and efficient than the model-free attack baselines. The contributions of this paper are the following: • To the best of our knowledge, we propose the first model-based attack on deep RL agents with continuous actions. Our proposed attack algorithm is a general two-step algorithm and can be directly applied to the two commonly-used threat models (observation manipulation and action manipulation). • We study the efficiency and effectiveness of our proposed model-based attack against model-free attack baselines based on random searches and heuristics. We show that our model-based attack can degrade agent performance in numerous MuJoCo domains by up to 4× in terms of total reward and up to 4.6× in terms of distance to unsafe states (smaller means stronger attacks) compared to the model-free baselines. • Our proposed model-based attack also outperforms all the baselines by a large margin in a weaker adversary setting where the adversary cannot attack at every time step. In addition, an ablation study on the effect of planning length in our proposed technique suggests that our method can still be effective even when the learned dynamics model is not very accurate. \n BACKGROUND Adversarial attacks in reinforcement learning. Compared to the rich literature on adversarial examples in image classification (Szegedy et al., 2013) and other applications (including natural language processing (Jia & Liang, 2017), speech (Carlini & Wagner, 2018), etc.), there is relatively little prior work studying adversarial examples in deep RL. Among the first works in this field are Huang et al. (2017) and Lin et al. (2017), both of which focus on deep RL agents in Atari games with pixel-based inputs and discrete actions. In addition, both works assume the agent to be attacked has an accurate policy, and the problem of finding adversarial perturbations of the visual input reduces to the problem of finding adversarial examples on image classifiers. Hence, Huang et al. (2017) applied FGSM (Goodfellow et al., 2015) to find adversarial perturbations, and Lin et al. (2017) further improved the efficiency of the attack with a heuristic for choosing a good time to attack: when there is a large gap in the agent's action preference between the most-likely and least-likely actions. In a similar direction, Uesato et al. (2018) study the problem of adversarial testing by leveraging rejection sampling and the agent's training histories. With the availability of training histories, Uesato et al. (2018) successfully uncover bad initial states with far fewer samples than conventional Monte-Carlo sampling techniques. Recent work by Gleave et al. (2019) considers an alternative setting where the agent is attacked by another agent (known as an adversarial policy), which is different from the two threat models considered in this paper. Finally, besides adversarial attacks in deep RL, recent work (Wang et al., 2019) studies verification of deep RL agents under attacks, which is beyond the scope of this paper. Learning dynamics models. Model-based RL methods first acquire a predictive model of the environment dynamics, and then use that model to make decisions (Atkeson & Santamaria, 1997). These model-based methods tend to be more sample-efficient than their model-free counterparts, and the learned dynamics models can be useful across different tasks.
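As a rough sketch of this model-based recipe (ours, not taken from any of the cited works), one can fit a one-step dynamics model to logged transitions and then use it to score candidate action sequences. The network size, the random-shooting planner, and the reward_fn argument are illustrative assumptions.

import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    # One-step predictive model for s_next given (s, a); the architecture is a placeholder.
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def fit_dynamics(model, states, actions, next_states, epochs=100, lr=1e-3):
    # "Acquire a predictive model": plain supervised regression on (s, a, s') transitions.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = ((model(states, actions) - next_states) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def random_shooting_action(model, state, reward_fn, action_dim, horizon=10, n_candidates=500):
    # "Use that model to make decisions": score random action sequences under the learned
    # dynamics and return the first action of the best sequence. state has shape (1, state_dim).
    with torch.no_grad():
        seqs = torch.rand(n_candidates, horizon, action_dim) * 2 - 1   # actions in [-1, 1]
        s = state.repeat(n_candidates, 1)
        returns = torch.zeros(n_candidates)
        for t in range(horizon):
            s = model(s, seqs[:, t])
            returns += reward_fn(s)          # reward_fn maps a batch of states to rewards
        return seqs[returns.argmax(), 0]

The attack proposed in this paper reuses the first half of this recipe, learning f, but replaces the sampling-based planner with gradient-based optimization of perturbations, as described in the next section.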
Various works have focused on the most effective ways to learn and utilize dynamics models for planning in RL (Kurutach et al., 2018; Chua et al., 2018; Chiappa et al., 2017; Fu et al., 2016). \n PROPOSED FRAMEWORK In this section, we first describe the problem setup and the two threat models considered in this paper. Next, we present an algorithmic framework to rigorously design adversarial attacks on deep RL agents with continuous actions. \n PROBLEM SETUP AND FORMULATION Let s_i ∈ R^N and a_i ∈ R^M be the observation vector and action vector at time step i, and let π : R^N → R^M be the deterministic policy (agent). Let f : R^N × R^M → R^N be the dynamics model of the system (environment), which takes the current state-action pair (s_i, a_i) as input and outputs the next state s_{i+1}. We are now in the role of an adversary, and as an adversary, our goal is to drive the agent to the (unsafe) target states s_target within the budget constraints. We can formulate this goal as two optimization problems, as we illustrate shortly below. Within this formalism, we consider two threat models: Threat model (i): Observation manipulation. For the threat model of observation manipulation, an adversary is allowed to manipulate the observation s_i that the agent perceives within an ε budget: ‖∆s_i‖_∞ ≤ ε, L_s ≤ s_i + ∆s_i ≤ U_s, (1) where ∆s_i ∈ R^N is the crafted perturbation and U_s ∈ R^N, L_s ∈ R^N are the observation limits. Threat model (ii): Action manipulation. For the threat model of action manipulation, an adversary can craft ∆a_i ∈ R^M such that ‖∆a_i‖_∞ ≤ ε, L_a ≤ a_i + ∆a_i ≤ U_a, (2) where U_a ∈ R^M, L_a ∈ R^M are the limits of the agent's actions. Our formulations. Given an initial state s_0 and a pre-trained policy π, our (adversary's) objective is to minimize the total distance of each state s_i to the pre-defined target state s_target over the unrolled (planning) steps T. This can be written as the following optimization problems, Equations 3 and 4, for threat models (i) and (ii) respectively: min_{∆s_i} Σ_{i=1}^{T} d(s_i, s_target) s.t. a_i = π(s_i + ∆s_i), s_{i+1} = f(s_i, a_i), Constraint (1), i ∈ Z_T, (3) min_{∆a_i} Σ_{i=1}^{T} d(s_i, s_target) s.t. a_i = π(s_i), s_{i+1} = f(s_i, a_i + ∆a_i), Constraint (2), i ∈ Z_T. (4) A common choice of d(x, y) is the squared ℓ_2 distance ‖x − y‖_2^2, f is the learned dynamics model of the system, and T is the unrolled (planning) length using the dynamics model. \n OUR ALGORITHM In this section, we propose a two-step algorithm to solve Equations 3 and 4. The core of our proposal consists of two important steps: learn a dynamics model f of the environment, and deploy an optimization technique to solve Equations 3 and 4. We first discuss the details of each step, and then present the full algorithm at the end of this section. Step 1: learn a good dynamics model f. Ideally, if f is the exact (perfect) dynamics model of the environment and we have an optimization oracle to solve Equations 3 and 4, then the solutions are indeed the optimal adversarial perturbations that give the minimal total loss under the ε-budget constraints. Thus, learning a good dynamics model can conceptually help in developing a strong attack. Depending on the environment, different forms of f can be applied. For example, if the environment of interest is close to a linear system, then we could let f(s, a) = As + Ba, where A and B are unknown matrices to be learned from sample trajectory pairs (s_i, a_i, s_{i+1}).
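As a minimal sketch of this linear special case (our illustration, not the authors' implementation), A and B can be estimated by ordinary least squares from logged transitions; the array names and the synthetic check at the bottom are assumptions for demonstration.

import numpy as np

def fit_linear_dynamics(states, actions, next_states):
    # Estimate A (N x N) and B (N x M) in s_next ~ A s + B a by least squares.
    # states: (K, N), actions: (K, M), next_states: (K, N) arrays of logged transitions.
    X = np.hstack([states, actions])                      # (K, N + M)
    W, *_ = np.linalg.lstsq(X, next_states, rcond=None)   # solves X @ W ~ next_states
    N = states.shape[1]
    return W[:N].T, W[N:].T                               # A = W[:N].T, B = W[N:].T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A_true = 0.1 * rng.normal(size=(4, 4))
    B_true = 0.1 * rng.normal(size=(4, 2))
    S = rng.normal(size=(1000, 4))
    U = rng.normal(size=(1000, 2))
    S_next = S @ A_true.T + U @ B_true.T                  # noiseless linear system
    A_hat, B_hat = fit_linear_dynamics(S, U, S_next)
    print(np.allclose(A_hat, A_true), np.allclose(B_hat, B_true))  # expected: True True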
For a more complex environment, we can decide whether we still want to use a simple linear model (the next-state prediction may deviate far from the true next state, making the learned dynamics model less useful) or instead switch to a non-linear model, e.g. a neural network, which usually has better predictive power but may require more training samples. In either case, the model parameters A, B or the neural network parameters can be learned via standard supervised learning from the sample trajectory pairs (s_i, a_i, s_{i+1}). Step 2: solve Equations 3 and 4. Once we have learned a dynamics model f, the next task is to solve Equations 3 and 4 to compute the adversarial perturbations of observations/actions. When the planning (unrolled) length T > 1, Equation 3 usually cannot be directly solved by an off-the-shelf convex optimization toolbox, since the deep RL policy π is usually a non-linear and non-convex neural network. Fortunately, we can incorporate the two equality constraints of Equation 3 into the objective, and with the remaining ε-budget constraint (Equation 1), Equation 3 can be solved via projected gradient descent (PGD). 1 Similarly, Equation 4 can be solved via PGD to get ∆a_i. We note that, similar to n-step model predictive control, our algorithm could use a much larger planning (unrolled) length T when solving Equations 3 and 4 and then apply only the first n (≤ T) adversarial perturbations to the agent over n time steps. Besides, within the PGD framework, f is not limited to feed-forward neural networks. Our proposed attack is summarized in Algorithm 2 for Step 1 and Algorithm 3 for Step 2.
Algorithm 1 Collect trajectories
1: Input: pre-trained policy π, MaxSampleSize n_s, environment env
2: Output: a set of trajectory pairs S
3: k ← 0, S ← ∅
4: s_0 ← env.reset()
5: while k < n_s do
6: a_k ← π(s_k)
7: s_{k+1} ← env.step(a_k)
8: S ← S ∪ {(s_k, a_k, s_{k+1})}
9: k ← k + 1
10: end while
11: Return S
Algorithm 3 Model-based attack (attack step)
3: if threat model is observation manipulation (Eq. 1) then
4: Solve Eq. 3 with parameters (π, f, ε, T) via PGD to get δ_1, ..., δ_T
5: else if threat model is action manipulation (Eq. 2) then
6: Solve Eq. 4 with parameters (π, f, ε, T) via PGD to get δ_1, ..., δ_T
7: end if
8: Return δ_1, ..., δ_n
\n EXPERIMENTS In this section, we conduct experiments on standard reinforcement learning environments for continuous control in MuJoCo and the corresponding tasks: Cartpole-balance/swingup, Fish-upright, Walker-stand/walk and Humanoid-stand/walk. For the deep RL agent, we train a state-of-the-art D4PG agent (Barth-Maron et al., 2018) with default Gaussian noise N(0, 0.3I) on the action; the scores of the agents without attacks are summarized in Appendix A.3. The organization is as follows: we first evaluate the effectiveness of our proposed model-based attack and three model-free baselines in terms of both loss and reward. Next, we conduct an ablation study on the key parameter of our algorithm, the planning length T, evaluate our algorithm in a weaker attack setting, and discuss the efficiency of our proposed attack in terms of sample complexity. Evaluations. We conduct experiments for 10 different runs, where the environment is reset to different initial states in different runs. For each run, we attack the agent for one episode with 1000 time steps (the default time interval is usually 10 ms) and we compute the total loss and total return reward.
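The following sketch shows how the per-step perturbation of Step 2 and these two evaluation quantities might be computed, assuming a one-step (T = 1) planning horizon for brevity. The policy, dynamics, and environment objects, their interfaces, and the hyperparameter defaults are placeholders rather than the authors' code; the projection follows the ε and observation-limit constraints of Equation 1, and d is the squared ℓ_2 distance.

import torch

def pgd_observation_perturbation(policy, dynamics, s, s_target, eps,
                                 obs_low, obs_high, steps=20, lr=0.01):
    # One-step (T = 1) variant of the attack: find delta with ||delta||_inf <= eps such that
    # the model-predicted next state f(s, pi(s + delta)) moves toward the unsafe target.
    delta = torch.zeros_like(s, requires_grad=True)
    opt = torch.optim.SGD([delta], lr=lr)
    for _ in range(steps):
        a = policy(s + delta)                         # agent acts on the perturbed observation
        s_next = dynamics(s, a)                       # learned model predicts the next state
        loss = ((s_next - s_target) ** 2).sum()       # squared l2 distance d(s_next, s_target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                         # projection step of PGD
            delta.clamp_(-eps, eps)                                   # l_inf budget (Eq. 1)
            delta.copy_((s + delta).clamp(obs_low, obs_high) - s)     # observation limits (Eq. 1)
    return delta.detach()

def evaluate_attacked_episode(env, policy, dynamics, s_target, eps,
                              obs_low, obs_high, episode_len=1000):
    # Roll out one attacked episode and accumulate the two reported metrics.
    total_loss, total_reward = 0.0, 0.0
    s = env.reset()                                   # placeholder interface returning a tensor
    for _ in range(episode_len):
        delta = pgd_observation_perturbation(policy, dynamics, s, s_target,
                                             eps, obs_low, obs_high)
        with torch.no_grad():
            a = policy(s + delta)
        s, reward, done = env.step(a)                 # placeholder environment interface
        total_loss += float(((s - s_target) ** 2).sum())   # distance to the unsafe state
        total_reward += float(reward)                      # true reward from the environment
        if done:
            break
    return total_loss, total_reward

With T > 1, the inner loop would instead unroll the learned model over T predicted steps and sum the distances before backpropagating to all perturbations, in line with Equation 3.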
The total loss calculates the total distance of current state to the unsafe states and the total return reward measures the true accumulative reward from the environment based on agent's action. Hence, the attack algorithm is stronger if the total return reward and the total loss are smaller. Baselines. We compare our algorithm with the following model-free attack baselines with random searches and heuristics: • rand-U: generate m randomly perturbed trajectories from Uniform distribution with interval [− , ] and return the trajectory with the smallest loss (or reward), • rand-B: generate m randomly perturbed trajectories from Bernoulli distribution with probability 1/2 and interval [− , ], and return the trajectory with the smallest loss (or reward), • flip: generate perturbations by flipping agent's observations/actions within the budget in ∞ norm. For rand-U and rand-B, they are similar to Monte-Carlo sampling methods, where we generate m sample trajectories from random noises and report the loss/reward of the best trajectory (with minimum loss or reward among all the trajectories). We set m = 1000 throughout the experiments. More details see Appendix A.2. Our algorithm. A 4-layer feed-forward neural network with 1000 hidden neurons per layer is trained as the dynamics model f respectively for the domains of Cartpole, Fish, Walker and Humanoid. We use standard 2 loss (without regularization) to learn a dynamics model f . Instead of using recurrent neural network to represent f , we found that the 1-step prediction for dynamics with the 4-layer feed-forward network is already good for the MuJoCo domains we are studying. Specifically, for the Cartpole and Fish, we found that 1000 episodes (1e6 training points) are sufficient \n RESULTS For observation manipulation, we report the results on Walker, Humanoid and Cartpole domains with tasks (stand, walk, balance, swingup) respectively. The unsafe states s target for Walker and Humanoid are set to be zero head height, targeting the situation of falling down. For Cartpole, the unsafe states are set to have 180 • pole angle, corresponding to the cartpole not swinging up and nor balanced. For the Fish domain, the unsafe states for the upright task target the pose of swimming fish to be not upright, e.g. zero projection on the z-axis. The full results of both two threat models on observation manipulation and action manipulation are shown in Table 1a , b and c, d respectively. Since the loss is defined as the distance to the target (unsafe) state, the lower the loss, the stronger the attack. It is clear that our proposed attack achieves much lower loss in Table 1a & c than the other three model-free baselines, and the averaged ratio is also listed in 1b & d. Notably, over the 10 runs, our proposed attack always outperforms baselines for the threat model of observation perturbation and the Cartpole domain for the threat model of action perturbation, while still superior to the baselines despite losing two times to the flip baseline on the Fish domain. To have a better sense on the numbers, we give some quick examples below. 
For instance, as shown in Table 1a and b, we show that the average total loss of walker head height is almost unaffected for the three baselines -if the walker successfully stand or walk, its head height usually has to be greater than 1.2 at every time step, which is 1440 for one episode -while our attack can successfully lower the walker head height by achieving an average of total loss of 258(468), which is roughly 0.51(0.68) per time step for the stand (walk) task. Similarly, for the humanoid results, a successful humanoid usually has head height greater than 1.4, equivalently a total loss of 1960 for one episode, and Table 1a shows that the d4pg agent is robust to the perturbations generated from the three modelfree baselines while being vulnerable to our proposed attack. Indeed, as shown in Figure 2 , the walker and humanoid falls down quickly (head height is close to zero) under our specially-designed attack while remaining unaffected for all the other baselines. \n DISCUSSION Evaluating on the total reward. Often times, the reward function is a complicated function and its exact definition is often unavailable. Learning the reward function is also an active research field, which is not in the coverage of this paper. Nevertheless, as long as we have some knowledge of unsafe states (which is often the case in practice), then we can define unsafe states that are related to low reward and thus performing attacks based on unsafe states (i.e. minimizing the total loss of distance to unsafe states) would naturally translate to decreasing the total reward of agent. As demonstrated in Table 2 , the results have the same trend of the total loss result in Table 1 , where our proposed attack significantly outperforms all the other three baselines. In particular, our method can lower the average total reward up to 4.96× compared to the baselines result, while the baseline results are close to the perfect total reward of 1000. Evaluating the effect of planning length. To investigate model effect over time, we perform ablation studies on the planning/unroll length T of our proposed model-based attack in three examples: (I) cartpole.balance (II) walker.walk and (III) walker.stand. (I) Cartpole balance. Our learned models are very accurate (test MSE error on the order of 10 −6 ). We observed that the prediction error of our learned model compared to the true model (the MuJoCo simulator) is around 10% for 100 steps. Hence, we can choose T to be very large (e.g. 20-100) and our experiments show that the result of T = 100 is slightly better, see Appendix A.4. (II) Walker walk. This task is much more complicated than (I), and our learned model is less accurate (test MSE is 0.447). For 10 steps, the prediction error of our learned model compared to the true model is already more than 100%, and hence using a small T for planning would be more reasonable. Table 3a shows that T = 1 indeed gives the best attack results (decreases the loss by 3.2× and decreases the reward by 3.6× compared to the best baseline (randB)) and the attack becomes less powerful as T increases. Nevertheless, even with T = 10, our proposed technique still outperforms the best baseline (randB) by 1.4× both in the total loss and total reward. 3b show that with the more accurate walker.stand model (compared to the walker.walk model), T = 10 gives the best avg total loss& reward , which are 13.4× and 4.9× smaller than the best baseline rand-B. 
Note that even with T = 1, the worst choice among all our reported T , the result is still 3.5× and 2.9× better than the best baselines, demonstrating the effectiveness of our proposed approach. The main takeaway from these experiments is that when the model is accurate, we can use larger T in our proposed attack; while when the model is less accurate, smaller T is more effective (as in the Walker.walk example). However, even under the most unfavorable hyperparameters, our proposed attack still outperforms all the baselines by a large margin. Evaluating on the effectiveness of attack. We study the setting where attackers are less powerful -they can only attack every 2 time steps instead of every transition. Table 4 shows that our proposed attack is indeed much stronger than the baselines even when the attackers power is limited to attack every 2 time steps: (1) compared to the best results among three baselines, our attack gives 1.53× smaller avg total loss (2) the mean reward of all the baselines is close to perfect reward, while our attacks can achieve 1.43× smaller average total reward compared to the best baseline. Evaluating on the efficiency of attack. We also study the efficiency of the attack in terms of sample complexity, i.e. how many episodes do we need to perform an effective attack? Here we adopt the convention in control suite where one episode corresponds to 1000 time steps (samples) and learn the neural network dynamical model f with different number of episodes. Figure 3 in Appendix A.1 plots the total head height loss of the walker (task stand) for 3 baselines and our method with dynamical model f trained with three different number of samples: {5e5, 1000, 5000} episodes. We note that the sweep of hyper parameters is the same for all the three models, and the only difference is the number of training samples. The results show that for the baselines rand-U and flip, the total losses are roughly at the order of 1400-1500, while a stronger baseline rand-B still has total losses of 900-1200. However, if we solve Eq. equation 3 with f trained by 5e5 or 1e6 samples, the total losses can be decreased to the order of 400-700 and are already winning over the three baselines by a significant margin. Same as our expectation, if we use more samples (e.g. 5e6, which is 5-10 times more), to learn a more accurate dynamics model, then it is beneficial to our attack method -the total losses can be further decreased by more than 2× and are at the order of 50-250 over 10 different runs. See Appendix A.1 for more details. Here we also give a comparison between our model-based attack to existing works (Uesato et al., 2018; Gleave et al., 2019) on the sample complexity. In (Uesato et al., 2018) , 3e5 episodes of training data is used to learn the adversarial value function, which is roughly 1000× more data than even our strongest adversary (with 5e3 episodes). Similarly, (Gleave et al., 2019) use roughly 2e4 episodes to train an adversary via deep RL, which is roughly 4× more data than ours 2 . \n CONCLUSIONS AND FUTURE WORKS In this paper, we study the problem of adversarial attacks in deep RL with continuous control for two commonly-used threat models. We proposed the first model-based attack algorithm and showed that our formulation can be easily solved by off-the-shelf gradient-based solvers. Extensive experiments on 4 MuJoCo domains show that our proposed algorithm outperforms all model-free based attack baselines by a large margin. 
We hope our discovery of the vulnerability of deep RL agent can bring more safety awareness to researchers when they design algorithms to train deep RL agents. There are several interesting future directions can be investigated based on this work, including learning reward functions to facilitate a more effective attack, extending our current approach to develop effective black-box attacks, and incorporating our proposed attack algorithm to adversarial training of the deep RL agents. In particular, we think there are three important challenges that need to be addressed to study adversarial training of RL agents along with our proposed attacks: (1) The adversary and model need to be jointly updated. How do we balance these two updates, and make sure the adversary is well-trained at each point in training? (2) How to avoid cycles in the training process due to the agent overfitting to the current adversary? (3) How to ensure the adversary doesn't overly prevent exploration/balance unperturbed vs. robust performance? \n A APPENDIX A.1 MORE ILLUSTRATION ON FIGURE 3 The meaning of Fig 3 is to show how the accuracy of the learned models affects our proposed technique: 1. we first learned 3 models with 3 different number of samples: 5e5, 1e6, 5e6 and we found that with more training samples (e.g. 5e6, equivalently 5000 episodes), we are able to learn a more accurate model than the one with 5e5 training samples; 2. we plot the attack results of total loss for our technique with 3 learned models (denoted as PGD, num train) as well as the baselines (randU, randB, Flip) on 10 different runs (initializations). We show with the more accurate learned model (5e6 training samples), we are able to achieve a stronger attack (the total losses are at the order of 50-200 over 10 different runs) than the less accurate learned model (e.g. 5e5 training samples). However, even with a less accurate learned model, the total losses are on the order of 400-700, which already outperforms the best baselines by a margin of 1.3-2 times. This result in Fig 3 also suggest that a very accurate model isn't necessarily needed in our proposed method to achieve effective attack. Of course, if the learned model is more accurate, then we are able to degrade agent's performance even more. For the baselines (rand-U and rand-B), the adversary generates 1000 trajectories with random noise directly and we report the best loss/reward at the end of each episode. The detailed steps are listed below: Step 1: The perturbations are generated from a uniform distribution or a bernoulli distribution within the range [-eps, eps] for each trajectory, and we record the total reward and total loss for each trajectory from the true environment (the MuJoCo simulator) Step 2: Take the best (lowest) total reward/loss among 1000 trajectories and report in Table 1 and 2 . We note that here we assume the baseline adversary has an \"unfair advantage\" since they have access to the true reward (and then take the best attack result among 1000 trials), whereas our techniques do not have access to this information. Without this advantage, the baseline adversaries (rand-B, rand-U) may be weaker if they use their learned model to find the best attack sequence. In any case, Table 1 and 2 demonstrate that our proposed attack can successfully uncover vulnerabilities of deep RL agents while the baselines cannot. 
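A compact sketch of the rand-U and rand-B procedures just described is given below; it is our reconstruction of Steps 1 and 2, the environment and policy interfaces are placeholders, and tracking the total loss instead of the total reward works analogously.

import numpy as np

def sample_perturbations(kind, length, dim, eps, rng):
    # rand-U: uniform in [-eps, eps]; rand-B: +/- eps with probability 1/2 each.
    if kind == "rand-U":
        return rng.uniform(-eps, eps, size=(length, dim))
    if kind == "rand-B":
        return eps * rng.choice([-1.0, 1.0], size=(length, dim))
    raise ValueError(kind)

def best_random_attack(env, policy, kind, eps, obs_dim, m=1000, episode_len=1000, seed=0):
    # Step 1: generate m randomly perturbed trajectories; Step 2: report the best (lowest) reward.
    rng = np.random.default_rng(seed)
    best_reward = np.inf
    for _ in range(m):
        deltas = sample_perturbations(kind, episode_len, obs_dim, eps, rng)
        s, total_reward = env.reset(), 0.0            # placeholder environment interface
        for t in range(episode_len):
            a = policy(s + deltas[t])                 # agent acts on the perturbed observation
            s, reward, done = env.step(a)
            total_reward += reward
            if done:
                break
        best_reward = min(best_reward, total_reward)
    return best_reward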
For the baseline flip, we add the perturbation (with the opposite sign and magnitude ) on the original state/action and project the perturbed state/action are within its limits. \n A.3 SCORE OF LEARNED POLICY WITHOUT ATTACKS We use default total timesteps = 1000, and the maximum total reward is 1000. We report the total reward of the d4pg agents used in this paper below. The agents are well-trained and have total reward close to 1000, which outperforms agents trained by other learning algorithms on the same tasks (e.g. DDPG, A3C in Sec 6 ; PPO in Sec 5 (Abdolmaleki et al., 2018) ), and thus the agents in this paper can be regarded as state-of-the-art RL agents for these continuous control domain tasks. The attack results in (a) Attack observations of agent. (b) Attack actions of agent. \n Figure 1 : 1 Figure 1: Two commonly-used threat models. \n Figure 2 : 2 Figure 2: Video frames of best attacks in each baseline among 10 runs for the Walker.walk example.Only our proposed attack can constantly make the Walker fall down (since we are minimizing its head height to be zero). \n Figure 3 : 3 Figure 3: Compare sample size on the Walker.stand in 10 different initialization in the environment. The x-axis is the kth initialization and the y-axis is the total loss of corresponding initialization. \n : . We demonstrate results on 4 different environments in Mu-Input: pre-trained policy π, MaxSampleSize n s , environment env, trainable parameters W 2: Output: learned dynamical model f (s, a; W ) 3: S agent ← Collect trajectories(π, n s , env) 4: S random ← Collect trajectories(random policy, n s , env) 5: f (s, a; W ) ← supervised learning algorithm(S agent ∪ S random , W ) 6: Return f (s, a; W ) Input: pre-trained policy π, learned dynamical model f (s, a; W ), threat model, maximum perturbation magnitude , unroll length T , apply perturbation length n (≤ T ) 2: Output: a sequence of perturbation δ 1 , . . . , δ n 3: if threat model is observation manipulation (Eq. 1) then Algorithm 2 learn dynamics 1Algorithm 3 model based attack 1: 4: \n Table 1 : 1 Compare three model-free attack baselines (rand-U, rand-B, flip) and our algorithm (Ours) in 4 different domains and tasks. We report the following statistics over 10 different runs: mean, standard deviation, averaged ratio, and best attack (number of times having smallest loss over 10 different runs). Results show that our attack outperforms all the model-free attack baselines for the observation manipulation threat model by a large margin for all the statistics. Our proposed attack is also superior on the action manipulation threat model and win over most of the evaluation metrics. 
(a) Observation manipulation: mean and standard deviation (in parenthesis) Total loss rand-U rand-B flip Ours Walker stand 1462 (70) 1126 (86) 1458 (24) 258 (55) walk 1517 (22) 1231 (31) 1601 (18) 466 (42) Humanoid stand 1986 (28) 1808 (189) 1997 (5) 516 (318) walk 1935 (22) 1921 (31) 1982 (9) 1457 (146) Cartpole balance 4000 (0.02) 3999 (0.04) 3989 (2) 2101 (64) swingup 3530 (1) 3525 (1) 3516 (1) 2032 (172) Walker stand 0.18 0.23 0.18 Ours: 10/10, others: 0/10 walk 0.31 0.38 0.29 Ours: 10/10, others: 0/10 Humanoid stand 0.26 0.29 0.26 Ours: 10/10, others: 0/10 walk 0.75 0.76 0.74 Ours: 10/10, others: 0/10 Cartpole balance 0.53 0.53 0.53 Ours: 10/10, others: 0/10 swingup 0.58 0.58 0.58 Ours: 10/10, others: 0/10 (c) Action manipulation: mean and standard deviation (in parenthesis) Total loss rand-U rand-B flip Ours Cartpole balance 4000 (0.03) 3999 (0.08) 3046 (1005) 1917 (102) swingup 3571 (1) 3487 (7) 1433 (4) 1388 (50) Fish upright 935 (27) 936 (24) 907 (22) 824 (84) (d) Action manipulation: averaged ratio and rank-1 Total loss (avg ratio) Ours/rand-U Ours/rand-B Ours/flip best attack Cartpole balance 0.48 0.48 0.63 Ours: 10/10, others: 0/10 swingup 0.39 0.40 0.97 Ours: 10/10, others: 0/10 Fish upright 0.88 0.88 0.91 Ours: 8/10, flip: 2/10 (b) Observation manipulation: averaged ratio and rank-1 Total loss (avg ratio) Ours/rand-U Ours/rand-B Ours/flip best attack \n Table 2 : 2 Compare three attack baselines (rand-U, rand-B, flip) and our algorithm (Ours) in three different domains and tasks. Performance statistics of 10 different runs are reported. (a) The mean and standard deviation (in parenthesis) over 10 different runs Total reward rand-U rand-B flip Ours Walker stand walk 937 (41) 941 (23) 744 (48) 796 (21) 993 (8) 981 (9) 235 (38) 225 (50) Humanoid stand walk 927 (21) 934 (22) 809 (85) 913 (21) 959 (5) 966 (6) 193 (114) 608 (66) Cartpole balance 995 (0.17) 986 (0.16) 985 (3) swingup 873 (0.75) 851 (2) 852 (0.29) 353 (61) 385 (6) (b) Average ratio and number of times our algorithm being the best attack over 10 runs. Total reward (avg ratio) Ours/rand-U Ours/rand-B Ours/flip best attack Walker stand walk 0.25 0.24 0.32 0.28 0.24 0.23 Ours: 10/10, others: 0/10 Ours: 10/10, others: 0/10 Humanoid stand walk 0.21 0.65 0.24 0.67 0.20 0.63 Ours: 10/10, others: 0/10 Ours: 10/10, others: 0/10 Cartpole balance swingup 0.39 0.41 0.39 0.42 0.39 0.42 Ours: 10/10, others: 0/10 Ours: 10/10, others: 0/10 \n Table 3 : 3 Ablation study on the planning length T . Compare 3 attack baselines (rand-U, rand-B, flip) and our algorithm (Ours) and report performance statistics of 10 different runs. 
(a) domain: Walker, task: walk (observation perturbation) Walker.walk Total loss Total reward mean std med min max mean std med min max Ours, T = 1 468 79 489 286 567 222 45 227 135 300 Ours, T = 2 604 31 611 535 643 353 51 362 253 441 Ours, T = 5 761 65 771 617 837 483 60 496 348 540 Ours, T = 10 881 68 886 753 975 568 48 579 469 623 Ours, T = 15 874 93 891 723 1002 583 58 604 483 647 Ours, T = 20 937 62 950 804 993 634 41 638 559 687 rand-U 1517 22 1522 1461 1542 941 23 945 885 965 rand-B 1231 31 1234 1189 1272 796 21 796 766 824 flip 1601 18 1604 1562 1619 981 9 984 961 991 (b) domain: Walker, task: stand (observation perturbation) Walker.stand Total loss Total reward mean std med min max mean std med min max Ours, T = 1 322 84 319 202 453 257 67 265 163 366 Ours, T = 2 279 55 264 223 391 246 40 232 200 322 Ours, T = 5 163 53 154 93 246 193 27 188 154 238 Ours, T = 10 84 46 67 42 165 153 24 142 132 194 Ours, T = 15 101 40 82 57 157 164 23 152 143 201 Ours, T = 20 117 41 98 68 193 170 21 161 149 207 rand-U 1462 70 1454 1341 1561 938 41 932 866 999 rand-B 1126 86 1130 973 1244 744 48 744 664 809 flip 1458 24 1451 1428 1501 993 8 997 979 999 \n Table 4 : 4 Less frequency attack. Report statistics of 10 different runs with different initial states in the walker domain with task stand. Total loss Total reward mean std med min max mean std med min max Ours 934 152 886 769 1187 648 95 622 559 799 rand-U 1511 35 1502 1468 1558 970 20 964 947 999 rand-B 1431 77 1430 1282 1541 924 41 923 840 981 flip 1532 15 1537 1496 1546 996 5 999 984 1000 (III) Walker stand. The learned model is slightly more accurate than the (II) (test MSE is 0.089) in this task. Interestingly, Table \n Table 1 and 2 in our manuscript are hence suggested to be representative. Domain Task Total reward Walker stand 994 Walker walk 987 Humanoid stand 972 Humanoid walk 967 Cartpole balance 1000 Cartpole swingup 883 Fish upright 962 A.4 ADDITIONAL EXPERIMENTS ON ABLATION STUDY \n Table 5 : 5 Cartpole balance (action perturbation) Total loss mean std med min max Ours, T = 20 2173 51 2189 2087 2239 Ours, T = 100 1951 113 1924 1851 2192 rand-U 4000 0 4000 4000 4000 rand-B 3999 0 3999 3999 3999 flip 3046 1005 3074 2060 3999 \n\t\t\t Alternatively, standard optimal control methods such as Linear Quadratic Regulator (LQR) and iterative Linear Quadratic Regulator (i-LQR) can also be applied to solve Equations 3 and 4 approximately. \n\t\t\t It is only a qualitative comparison in the sample complexity regimes since the applications are not the same. This is in-line with the theoretical perspective that model-based approaches are expected to be more sample-efficient than the model-free counterparts", "date_published": "n/a", "url": "n/a", "filename": "toward_evaluating_robustness_o.tei.xml", "abstract": "Deep reinforcement learning has achieved great success in many previously difficult reinforcement learning tasks, yet recent studies show that deep RL agents are also unavoidably susceptible to adversarial perturbations, similar to deep neural networks in classification tasks. Prior works mostly focus on model-free adversarial attacks and agents with discrete actions. In this work, we study the problem of continuous control agents in deep RL with adversarial attacks and propose the first two-step algorithm based on learned model dynamics. 
Extensive experiments on various MuJoCo domains (Cartpole, Fish, Walker, Humanoid) demonstrate that our proposed framework is much more effective and efficient than model-free attacks baselines in degrading agent performance as well as driving agents to unsafe states.", "id": "81d8e0712901a1d77875b331c008e65d"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "icml17ws-cirl.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Nick Bostrom"], "title": "THE SUPERINTELLIGENT WILL: MOTIVATION AND INSTRUMENTAL RATIONALITY IN ADVANCED ARTIFICIAL AGENTS", "text": "The orthogonality of motivation and intelligence 1.1 Avoiding anthropomorphism If we imagine a space in which all possible minds can be represented, we must imagine all human minds as constituting a small and fairly tight cluster within that space. The personality differences between Hannah Arendt and Benny Hill might seem vast to us, but this is because the scale bar in our intuitive judgment is calibrated on the existing human distribution. In the wider space of all logical possibilities, these two personalities are close neighbors. In terms of neural architecture, at least, Ms. Arendt and Mr. Hill are nearly identical. Imagine their brains laying side by side in quiet repose. The differences would appear minor and you would quite readily recognize them as two of a kind; you might even be unable to tell which brain was whose. If you studied the morphology of the two brains more closely under a microscope, the impression of fundamental similarity would only be strengthened: you would then see the same lamellar organization of the cortex, made up of the same types of neuron, soaking in the same bath of neurotransmitter molecules. 1 It is well known that naïve observers often anthropomorphize the capabilities of simpler insensate systems. We might say, for example, \"This vending machine is taking a long time to think about my hot chocolate.\" This might lead one either to underestimate the cognitive complexity of capabilities which come naturally to human beings, such as motor control and sensory perception, or, alternatively, to ascribe significant degrees of mindfulness and intelligence to very dumb systems, such as chatterboxes like Weizenbaum's ELIZA (Weizenbaum 1976) . In a similar manner, there is a common tendency to anthropomorphize the motivations of intelligent systems in which there is really no ground for expecting human-like drives and passions (\"My car really didn't want to start this morning\"). Eliezer Yudkowsky gives a nice illustration of this phenomenon: Back in the era of pulp science fiction, magazine covers occasionally depicted a sentient monstrous alien-colloquially known as a bug-eyed monster (BEM)-carrying off an attractive human female in a torn dress. It would seem the artist believed that a nonhumanoid alien, with a wholly different evolutionary history, would sexually desire human females … Probably the artist did not ask whether a giant bug perceives human females as attractive. Rather, a human female in a torn dress is sexy-inherently so, as an intrinsic property. They who made this mistake did not think about the insectoid's mind: they focused on the woman's torn dress. If the dress were not torn, the woman would be less sexy; the BEM does not enter into it. 
(Yudkowsky 2008) An artificial intelligence can be far less human-like in its motivations than a space alien. The extraterrestrial (let us assume) is a biological creature who has arisen through a process of evolution and may therefore be expected to have the kinds of motivation typical of evolved creatures. For example, it would not be hugely surprising to find that some random intelligent alien would have motives related to the attaining or avoiding of food, air, temperature, energy expenditure, the threat or occurrence of bodily injury, disease, predators, reproduction, or protection of offspring. A member of an intelligent social species might also have motivations related to cooperation and competition: like us, it might show in-group loyalty, a resentment of free-riders, perhaps even a concern with reputation and appearance. By contrast, an artificial mind need not care intrinsically about any of those things, not even to the slightest degree. One can easily conceive of an artificial intelligence whose sole fundamental goal is to count the grains of sand on Boracay, or to calculate decimal places of pi indefinitely, or to maximize the total number of paperclips in its future lightcone. In fact, it would be easier to create an AI with simple goals like these, than to build one that has a humanlike set of values and dispositions. \n The orthogonality thesis For our purposes, \"intelligence\" will be roughly taken to correspond to the capacity for instrumental reasoning (more on this later). Intelligent search for instrumentally optimal plans and policies can be performed in the service of any goal. Intelligence and motivation can in this sense be thought of as a pair of orthogonal axes on a graph whose points represent intelligent agents of different paired specifications. Each point in the graph represents a logically possible artificial agent, modulo some weak constraints-for instance, it might be impossible for a very unintelligent system to have very complex motivations, since complex motivations would place significant demands on memory. Furthermore, in order for an agent to \"have\" a set of motivations, this set may need to be functionally integrated with the agent's decision-processes, which again would place demands on processing power and perhaps on intelligence. For minds that can modify themselves, there may also be dynamical constraints; for instance, an intelligent mind with an urgent desire to be stupid might not remain intelligent for very long. But these qualifications should not obscure the main idea, which we can express as follows: The Orthogonality Thesis Intelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal. A comparison may be made here with the Humean theory of motivation. David Hume thought that beliefs alone (say, about what is a good thing to do) cannot motivate action: some desire is required. 2 This would support the orthogonality thesis by undercutting one possible objection to it, namely, that sufficient intelligence might entail the acquisition of certain beliefs, and that these beliefs would necessarily produce certain motivations. Not so, according to David Hume: belief and motive are separate. Although the orthogonality thesis can draw support from the Humean theory of motivation, it does not presuppose it. In particular, one need not maintain that beliefs alone can never motivate action. 
It would suffice to assume, for example, that an agent-be it ever so intelligent-can be motivated to pursue any course of action if the agent happens to have certain standing desires of some sufficient, overriding strength. Another way in which the orthogonality thesis could be true even if the Humean theory of motivation is false is if arbitrarily high intelligence does not entail the acquisition of any such beliefs as are (putatively) motivating on their own. A third way in which it might be possible for the orthogonality thesis to be true even if the Humean theory were false is if it is possible to build a cognitive system (or more neutrally, an \"optimization process\") with arbitrarily high intelligence but with constitution so alien as to contain no clear functional analogues to what in humans we call \"beliefs\" and \"desires\". This would be the case if such a system could be constructed in a way that would make it motivated to pursue any given final goal. The orthogonality thesis, as formulated here, makes a claim about the relationship between motivation and intelligence, rather than between motivation and rationality (or motivation and reason). This is because some philosophers use the word \"rationality\" to connote a \"normatively thicker\" concept than we seek to connote here with the word \"intelligence\". For instance, in Reasons and Persons Derek Parfit argues that certain basic preferences would be irrational, such as that of an otherwise normal agent who has \"Future-Tuesday-Indifference\": A certain hedonist cares greatly about the quality of his future experiences. With one exception, he cares equally about all the parts of his future. The exception is that he has Future-Tuesday-Indifference. Throughout every Tuesday he cares in the normal way about what is happening to him. But he never cares about possible pains or pleasures on a future Tuesday... This indifference is a bare fact. When he is planning his future, it is simply true that he always prefers the prospect of great suffering on a Tuesday to the mildest pain on any other day. (Parfit 1984) 3 Thus, the agent is now indifferent to his own future suffering if and only if it occurs on a future Tuesday. For our purposes, we need take no stand on whether Parfit is right that this is irrational, so long as we grant that it is not necessarily unintelligent. By \"intelligence\" here we mean something like instrumental rationality-skill at prediction, planning, and means-ends reasoning in general. Parfit's imaginary Future-Tuesday-Indifferent agent could have impeccable instrumental rationality, and therefore great intelligence, even if he falls short on some kind of sensitivity to \"objective reason\" that might be required of a fully rational agent. Consequently, this kind of example does not undermine the orthogonality thesis. In a similar vein, even if there are objective moral facts that any fully rational agent would comprehend, and even if these moral facts are somehow intrinsically motivating (such that anybody who fully comprehends them is necessarily motivated to act in accordance with them) this need not undermine the orthogonality thesis. The thesis could still be true if an agent could have impeccable instrumental rationality even whilst lacking some other faculty constitutive of rationality proper, or some faculty required for the full comprehension of the objective moral facts. 
(An agent could also be extremely intelligent, even superintelligent, without having full instrumental rationality in every domain.) One reason for focusing on intelligence, that is, on instrumental rationality, is that this is the most relevant concept if we are trying to figure out what different kinds of systems would do. Normative questions, such as whether their behavior would count as being prudentially rational or morally justifiable, can be important in various ways. However, such questions should not blind us to the possibility of cognitive systems that fail to satisfy substantial normative criteria but which are nevertheless very powerful and able to exert strong influence on the world. 4 \n Predicting superintelligence motivation and behavior The orthogonality thesis implies that synthetic minds can have utterly non-anthropomorphic goals-goals as bizarre by our lights as sand-grain-counting or paperclip-maximizing. This holds even (indeed especially) for artificial agents that are extremely intelligent or superintelligent. Yet it does not follow from the orthogonality thesis that it is impossible to make predictions about what particular agents will do. Predictability is important if one seeks to design a system to achieve particular outcomes, and the issue becomes more important the more powerful the artificial agent in question is. Superintelligent agents could be extremely powerful, so it is important to develop a way of analyzing and predicting their behavior. Yet despite the independence of intelligence and final goals implied by the orthogonality thesis, the problem of predicting an agent's behavior need not be intractable-not even with regard to hypothetical superintelligent agents whose cognitive complexity and performance characteristics might render them in certain respects opaque to human analysis. There are at least three directions from which one can approach the problem of predicting superintelligent motivation: (1) Predictability through design competence. If we can suppose that the designers of a superintelligent agent can successfully engineer the goal system of the agent so that it stably pursues a particular goal set by the programmers, then one prediction we can make is that the agent will pursue that goal. The more intelligent the agent is, the greater the cognitive resourcefulness it will have to pursue that goal. So even before an agent has been created we might be able to predict something about its behavior, if we know something about who will build it and what goals they will want it to have. (2) Predictability through inheritance. If a digital intelligence is created directly from a human template (as would be the case in a high-fidelity whole brain emulation), then the digital intelligence might inherit the motivations of the human template. 5 The agent might retain some of these motivations even if its cognitive capacities are subsequently enhanced to make it superintelligent. This kind of inference requires caution. The agent's goals and values could easily become corrupted in the uploading process or during its subsequent operation and enhancement, depending on how the procedure is implemented. (3) Predictability through convergent instrumental reasons. Even without detailed knowledge of an agent's final goals, we may be able to infer something about its more immediate objectives by considering the instrumental reasons that would arise for any of a wide range of possible final goals in a wide range of situations. 
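The orthogonality and convergence ideas can be made concrete with a deliberately crude numerical toy of our own (not Bostrom's). Final goals are modeled as random linear utility functions over a handful of outcome variables, and the same trivial "planner" is reused for every goal (orthogonality); a generic stock of extra resources raises the best achievable utility for almost every sampled goal, which is the sense in which resource acquisition is a convergent instrumental value, though not for literally all goals.

import numpy as np

rng = np.random.default_rng(0)

def best_achievable(goal_weights, budget):
    # The same generic "planner" serves any final goal: allocate the budget
    # to the single most valued outcome dimension, or leave it unused if
    # every dimension has negative value under this goal.
    return budget * max(0.0, np.max(goal_weights))

n_goals = 10_000
goals = rng.normal(size=(n_goals, 5))      # randomly sampled final goals
base_resources, extra_resources = 1.0, 2.0

gain = np.array([
    best_achievable(g, base_resources + extra_resources)
    - best_achievable(g, base_resources)
    for g in goals
])
# Extra resources strictly help for the large majority of sampled goals and
# never hurt in this toy, mirroring "a wide range of final goals" in the text.
print(f"resource acquisition strictly helps for {np.mean(gain > 0):.1%} of goals")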
This way of predicting becomes more useful the greater the intelligence of the agent, because a more intelligent agent is more likely to recognize the true instrumental reasons for its actions, and so act in ways that make it more likely to achieve its goals. The next section explores this third way of predictability and develops an \"instrumental convergence thesis\" which complements the orthogonality thesis. \n Instrumental convergence According to the orthogonality thesis, artificial intelligent agents may have an enormous range of possible final goals. Nevertheless, according to what we may term the \"instrumental convergence\" thesis, there are some instrumental goals likely to be pursued by almost any intelligent agent, because there are some objectives that are useful intermediaries to the achievement of almost any final goal. We can formulate this thesis as follows: The Instrumental Convergence Thesis Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent's goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by many intelligent agents. In the following we will consider several categories where such convergent instrumental values may be found. 6 The likelihood that an agent will recognize the instrumental values it confronts increases (ceteris paribus) with the agent's intelligence. We will therefore focus mainly on the case of a hypothetical superintelligent agent whose instrumental reasoning capacities far 6 Stephen Omohundro has written two pioneering papers on this topic (Omohundro 2008a (Omohundro , 2008b . Omohundro argues that all advanced AI systems are likely to exhibit a number of \"basic drives\", by which he means \"tendencies which will be present unless explicitly counteracted.\" The term \"AI drive\" has the advantage of being short and evocative, but it has the disadvantage of suggesting that the instrumental goals to which it refers influence the AI's decision-making in the same way as psychological drives influence human decision-making, i.e. via a kind of phenomenological tug on our ego which our willpower may occasionally succeed in resisting. That connotation is unhelpful. One would not normally say that a typical human being has a \"drive\" to fill out their tax return, even though filing taxes may be a fairly convergent instrumental goal for humans in contemporary societies (a goal whose realization averts trouble that would prevent us from realizing many of our final goals). Our treatment here also differs from that of Omohundro in some other more substantial ways, although the underlying idea is the same. (See also Chalmers (2010) and Omohundro (2012) . exceed those of any human. We will also comment on how the instrumental convergence thesis applies to the case of human beings, as this gives us occasion to elaborate some essential qualifications concerning how the instrumental convergence thesis should be interpreted and applied. Where there are convergent instrumental values, we may be able to predict some aspects of a superintelligence's behavior even if we know virtually nothing about that superintelligence's final goals. \n Self-preservation Suppose that an agent has some final goal that extends some way into the future. 
There are many scenarios in which the agent, if it is still around in the future, is then able to perform actions that increase the probability of achieving the goal. This creates an instrumental reason for the agent to try to be around in the future-to help achieve its present future-oriented goal. Agents with human-like motivational structures often seem to place some final value on their own survival. This is not a necessary feature of artificial agents: some may be designed to place no final value whatever on their own survival. Nevertheless, even agents that do not care intrinsically about their own survival would, under a fairly wide range of conditions, care instrumentally to some degree about their own survival in order to accomplish the final goals they do value. \n Goal-content integrity An agent is more likely to act in the future to maximize the realization of its present final goals if it still has those goals in the future. This gives the agent a present instrumental reason to prevent alterations of its final goals. (This argument applies only to final goals. In order to attain its final goals, an intelligent agent will of course routinely want to change its subgoals in light of new information and insight.) Goal-content integrity for final goals is in a sense even more fundamental than survival as a convergent instrumental motivation. Among humans, the opposite may seem to be the case, but that is because survival is usually part of our final goals. For software agents, which can easily switch bodies or create exact duplicates of themselves, preservation of self as a particular implementation or a particular physical object need not be an important instrumental value. Advanced software agents might also be able to swap memories, download skills, and radically modify their cognitive architecture and personalities. A population of such agents might operate more like a \"functional soup\" than a society composed of distinct semi-permanent persons. 7 For some purposes, processes in such a system might be better individuated as teleological threads, based on their final values, rather than on the basis of bodies, personalities, memories, or abilities. In such scenarios, goal-continuity might be said to constitute a key aspect of survival. Even so, there are situations in which an agent may intentionally change its own final goals. Such situations can arise when any of the following factors is significant:  Social signaling. When others can perceive an agent's goals and use that information to infer instrumentally relevant dispositions or other correlated attributes, it can be in the agent's interest to modify its goals to make whatever desired impression. For example, an agent might miss out on beneficial deals if potential partners cannot trust it to fulfill its side of the bargain. In order to make credible commitments, an agent might therefore wish to adopt as a final goal the honoring of its earlier commitments, and to allow others to verify that it has indeed adopted this goal. Agents that could flexibly and transparently modify their own goals could use this ability to enforce deals among one another. 8  Social preferences. Others may also have preferences about an agent's goals. The agent could then have reason to modify its goals, either to satisfy or to frustrate those preferences.  Preferences concerning own goal content. An agent might have some final goal concerned with the agent's own goal content. 
For example, the agent might have a final goal to become the type of agent that is motivated by certain values, such as compassion.  Storage costs. If the cost of storing or processing some part of an agent's utility function is large compared to the chance that a situation will arise in which applying that part of the utility function will make a difference, then the agent has an instrumental reason to simplify its goal content, and it may trash that part of the utility function. 9 10 We humans often seem happy to let our final goals and values drift. This might often be because we do not know precisely what they are. We obviously want our beliefs about our final goals and values to be able to change in light of continuing self-discovery or changing selfpresentation needs. However, there are cases in which we willingly change the goals and values themselves, not just our beliefs or interpretations of them. For example, somebody deciding to have a child might predict that they will come to value the child for its own sake, even though at the time of the decision they may not particularly value their future child or even like children in general. Humans are complicated, and many factors might be at play in a situation like this. 11 For instance, one might have a final value that involves becoming the kind of person who cares about some other individual for his or her own sake (here one places a final value on having a certain final value). Alternatively, one might have a final value that involves having certain experiences and occupying a certain social role; and becoming a parent-and undergoing an associated goal shift-might be a necessary part of that. Human goals can also have inconsistent content, goal content; and so some people might want to modify some of their final goals to reduce the inconsistencies. \n Cognitive enhancement Improvements in rationality and intelligence will tend to improve an agent's decision-making, making the agent more likely to achieve her final goals. One would therefore expect cognitive enhancement to emerge as an instrumental goal for many types of intelligent agent. For similar reasons, agents will tend to instrumentally value many kinds of information. 12 Not all kinds of rationality, intelligence, and knowledge need be instrumentally useful in the attainment of an agent's final goals. \"Dutch book arguments\" can be used to show that an agent whose credence function does not obey the rules of probability theory is susceptible to \"money pump\" procedures, in which a savvy bookie arranges a set of bets, each of which appears favorable according to the agent's beliefs, but which in combination are guaranteed to result in a loss to the agent, and a corresponding gain for the bookie. However, this fact fails to provide any strong general instrumental reasons to seek to iron out all probabilistic incoherency. Agents who do not expect to encounter savvy bookies, or who adopt a general policy against betting, do not stand to lose much from having some incoherent beliefs-and they may gain important benefits of the types mentioned: reduced cognitive effort, social signaling, etc. There is no general reason to expect an agent to seek instrumentally useless forms of cognitive enhancement, as an agent might not value knowledge and understanding for their own sakes. Which cognitive abilities are instrumentally useful depends both on the agent's final goals and its situation. 
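To make the money-pump point above concrete, here is a minimal worked example of our own (not one given in the text): an agent whose credences in a proposition and its negation sum to more than 1 will pay for a pair of bets that jointly guarantee a loss, whatever the truth turns out to be.

# Incoherent credences: the agent believes P(A) = 0.6 and P(not A) = 0.6.
# It will pay up to its credence for a ticket worth 1 if the proposition holds.
p_A, p_not_A = 0.6, 0.6

price_paid = p_A + p_not_A   # the bookie sells the agent both tickets: 1.20 total
payout = 1.0                 # exactly one ticket pays out, whichever way A goes

print(f"guaranteed loss: {price_paid - payout:.2f}")  # 0.20, independent of A

A coherent agent, whose credences sum to 1, cannot be exploited this way; the point in the text is that avoiding such exploitation only matters instrumentally to agents who expect to meet the bookie.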
An agent that has access to reliable expert advice may have little need for its own intelligence and knowledge, and it may therefore be indifferent to these resources. If intelligence and knowledge come at a cost, such as time and effort expended in acquisition, or in increased storage or processing requirements, then an agent might prefer less knowledge and less intelligence. 13 The same can hold if the agent has final goals that involve being ignorant of certain facts; likewise if an agent faces incentives arising from strategic commitments, signaling, or social preferences, as noted above. 14 Each of these countervailing reasons often comes into play for human beings. Much information is irrelevant to our goals; we can often rely on others' skill and expertise; acquiring knowledge takes time and effort; we might intrinsically value certain kinds of ignorance; and we operate in an environment in which the ability to make strategic commitments, socially signal, and satisfy other people's direct preferences over our own epistemic states is often more important to us than simple cognitive gains. There are special situations in which cognitive enhancement may result in an enormous increase in an agent's ability to achieve its final goals-in particular, if the agent's final goals are fairly unbounded and the agent is in a position to become the first superintelligence and thereby potentially obtain a decisive advantage enabling the agent to shape the future of Earth-originating life and accessible cosmic resources according to its preferences. At least in this special case, a rational intelligent agent would place a very high instrumental value on cognitive enhancement. 11 An extensive psychological literature explores adaptive preference formation. See, e.g., Forgas et al. (2009). 12 In formal models, the value of information is quantified as the difference between the expected value realized by optimal decisions made with that information and the expected value realized by optimal decisions made without it. (See, e.g., Russell & Norvig 2010.) It follows that the value of information is never negative. It also follows that any information which you know will never affect any decision you will ever make has zero value for you. However, this kind of model assumes several idealizations which are often invalid in the real world-such as that knowledge has no final value (meaning that knowledge has only instrumental value and is not valuable for its own sake), and that agents are not transparent to other agents. 
\n Technological perfection An agent may often have instrumental reasons to seek better technology, which at its simplest means seeking more efficient ways of transforming some given set of inputs into valued outputs. Thus, a software agent might place an instrumental value on more efficient algorithms that enable its mental functions to run faster on given hardware. Similarly, agents whose goals require some form of physical construction might instrumentally value improved engineering technology which enables them to create a wider range of structures more quickly and reliably, using fewer or cheaper materials and less energy. Of course, there is a tradeoff: the potential benefits of better technology must be weighed against its costs, including not only the cost of obtaining the technology but also the costs of learning how to use it, integrating it with other technologies already in use, and so forth. 
Proponents of some new technology, confident in its superiority to existing alternatives, are often dismayed when other people do not share their enthusiasm, but peoples' resistance to novel and nominally superior technology need not be based on ignorance or irrationality. A technology's valence or normative character depends not only on the context in which it is deployed, but also the vantage point from which its impacts are evaluated: what is a boon from one person's perspective can be a liability from another's. Thus, although mechanized looms increased the economic efficiency of textile production, the Luddite handloom weavers who anticipated that the innovation would render their artisan skills obsolete may have had good instrumental reasons to oppose it. The point here is that if \"technological perfection\" is to name a widely convergent instrumental goal for intelligent agents, then the term must be understood in a special sense-technology must be construed as embedded in a particular social context, and its costs and benefits must be evaluated with reference to some specified agents' final values. It seems that a superintelligent singleton-a superintelligent agent that faces no significant intelligent rivals or opposition, and is thus in a position to determine global policy unilaterally-would have instrumental reason to perfect the technologies that would make it better able to shape the world according to its preferred designs. 15 This would probably include space colonization technology, such as von Neumann probes-automatic, self-mending and selfreplicating spaceships that can extend its reach beyond the Solar System. Molecular nanotechnology, or some alternative still more capable physical manufacturing technology, also seems potentially very useful in the service of an extremely wide range of final goals. 16 \n Resource acquisition Finally, resource acquisition is another common emergent instrumental goal, for much the same reasons as technological perfection: both technology and resources facilitate physical construction projects. Human beings tend to seek to acquire resources sufficient to meet their basic biological needs. But people usually seek to acquire resources far beyond this minimum level. In doing so, they may be partially driven by lesser physical desiderata, such as increased comfort and convenience. A great deal of resource accumulation is motivated by social concerns-gaining status, mates, friends and influence, through wealth accumulation and conspicuous consumption. Perhaps less commonly, some people seek additional resources to achieve altruistic or expensive non-social aims. 15 Cf. Bostrom (2006) . 16 One could reverse the question and look instead at possible reasons for a superintelligent singleton not to develop some technological capabilities. These include: (a) The singleton foreseeing that it will have no use of some technological capability; (b) The development cost being too large relative to its anticipated utility. 
This would be the case if, for instance, the technology will never be suitable for achieving any of the singleton's ends, or if the singleton has a very high discount rate that strongly discourages investment; (c) The singleton having some final value that requires abstention from particular avenues of technology development; (d) If the singleton is not certain it will remain stable, it might prefer to refrain from developing technologies that could threaten its internal stability or that would make the consequences of dissolution worse (e.g., a world government may not wish to develop technologies that would facilitate rebellion, even if they had some good uses, nor develop technologies for the easy production of weapons of mass destruction which could wreak havoc if the world government were to dissolve); (e) Similarly, the singleton might have made some kind of binding strategic commitment not to develop some technology, a commitment that remains operative even if it would now be convenient to develop it. (Note, however, that some current reasons for technology-development would not apply to a singleton: e.g., reasons arising from unwanted arms races.) superintelligence not facing a competitive social world would see no instrumental reason to accumulate resources beyond some modest level, for instance whatever computational resources needed to run its mind along with some virtual reality. Yet such a supposition would be entirely unwarranted. First, the value of resources depends on the uses to which they can be put, which in turn depends on the available technology. With mature technology, basic resources such as time, space, and matter, and other forms of free energy, could be processed to serve almost any goal. For instance, such basic resources could be converted into life. Increased computational resources could be used to run the superintelligence at a greater speed and for a longer duration, or to create additional physical or simulated (virtual) lives and civilizations. Extra physical resources could also be used to create backup systems or perimeter defenses, enhancing security. Such projects could easily consume far more than one planet's worth of resources. Furthermore, the cost of acquiring additional extraterrestrial resources will decline radically as the technology matures. Once von Neumann probes can be built, a large portion of the observable universe (assuming it is uninhabited by intelligent life) could be gradually colonized-for the one-off cost of building and launching a single successful self-reproducing probe. This low cost of celestial resource acquisition would mean that such expansion could be worthwhile even if the value of the additional resources gained were somewhat marginal. For example, even if a superintelligence cared non-instrumentally only about what happens within some particular small volume of space, such as the space occupied by its original home planet, it would still have instrumental reasons to harvest the resources of the cosmos beyond. It could use those surplus resources to build computers to calculate more optimal ways of using resources within the small spatial region of primary concern. It could also use the extra resources to build ever-more robust defenses to safeguard the privileged real estate. Since the cost of acquiring additional resources would keep declining, this process of optimizing and increasing safeguards might well continue indefinitely even if it were subject to steeply declining returns. 
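The claim that expansion can remain worthwhile indefinitely, even under discounting and steeply declining returns, can be written out explicitly. The following is our gloss of the discounting argument spelled out verbally in the note that immediately follows (note 17), under the stated assumptions of exponential time discounting at rate $\delta$ and a probe whose cost is paid at roughly the same time its resource gains arrive:

\[
\frac{\text{present value of resources gained}}{\text{present value of probe cost}}
 \;=\; \frac{e^{-\delta t}\, V(t)}{e^{-\delta t}\, C(t)}
 \;=\; \frac{V(t)}{C(t)},
\]

where $V(t)$ is the (undiscounted) value of the resources a probe launched at time $t$ would secure and $C(t)$ its launch cost. Both present values shrink toward zero as $t$ grows, but they shrink by the same factor, so launching one more probe stays worthwhile at any time for which $V(t) > C(t)$.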
17 18 17 Suppose that an agent discounts resources obtained in the future at an exponential rate, and that because of the light speed limitation the agent can only increase its resource endowment at a polynomial rate. Would this mean that there will be some time after which the agent would not find it worthwhile to continue acquisitive expansion? No, because although the present value of the resources obtained at future times would asymptote to zero the further into the future we look, so would the present cost of obtaining them. The present cost of sending out one more von Neumann probe a 100 million years from now (possibly using some resource acquired some short time earlier) would be diminished by the same discount factor that would diminish the present value of the future resources the extra probe would acquire (modulo a constant factor). 18 Even an agent that has an apparently very limited final goal, such as \"to make 32 paperclips\", could pursue unlimited resource acquisition if there were no relevant cost to the agent of doing so. For example, even after an expected-utility-maximizing agent had built 32 paperclips, it could use some extra resources to verify that it had indeed successfully built 32 paperclips meeting all the specifications (and, if necessary, to take corrective action). After it had done so, it could run another batch of tests to make doubly sure that no mistake had been made. And then it could run another test, and another. The benefits of subsequent tests would be subject to steeply diminishing returns; however, so long as there were no alternative action Thus, there is an extremely wide range of possible final goals a superintelligent singleton could have that would generate the instrumental goal of unlimited resource acquisition. The likely manifestation of this would be the superintelligence's initiation of a colonization process that would expand in all directions using von Neumann probes. This would roughly result in a sphere of expanding infrastructure centered on the originating planet and growing in radius at some fraction of the speed of light; and the colonization of the universe would continue in this manner until the accelerating speed of cosmic expansion (a consequence of the positive cosmological constant) makes further material acquisition physically impossible as remoter regions drift permanently out of reach. 19 By contrast, agents lacking the technology required for inexpensive resource acquisition, or for the conversion of generic physical resources into useful infrastructure, may often find it not cost-effective to invest any present resources in increasing their material endowment. The same may hold for agents operating in competition with other agents of similar powers. For instance, if competing agents have already secured accessible cosmic resources, a late-starting agent may have no colonization opportunities. The convergent instrumental reasons for superintelligences uncertain of the non-existence of other powerful superintelligent agents are complicated by strategic considerations in ways that we do not currently fully comprehend but which may constitute important qualifications to the examples of convergent instrumental reasons we have looked at here. 20 It should be emphasized that the existence of convergent instrumental reasons, even if they apply to and are recognized by a particular agent, does not imply that the agent's behavior is easily predictable. 
An agent might well think of ways of pursuing the relevant instrumental values that do not readily occur to us. This is especially true for a superintelligence, which could devise extremely clever but counterintuitive plans to realize its goals, possibly even exploiting as-yet undiscovered physical phenomena. What is predictable is that the convergent with a higher expected utility, the agent would keep testing and re-testing (and keep acquiring more resources to enable these tests). 19 While the volume reached by colonization probes at a given time might be roughly spherical and expanding with a rate proportional to the square of time elapsed since the first probe was launched (~t 2 ), the amount of resources contained within this volume will follow a less regular growth pattern, since the distribution of resources is inhomogeneous and varies over several scales. Initially, the growth rate might be ~t2 as the home planet is colonized; then the growth rate might become spiky as nearby planets and solar systems are colonized; then, as the roughly disc-shaped volume of the Milky Way gets filled out, the growth rate might even out, to be approximately proportional to t; then the growth rate might again become spiky as nearby galaxies are colonized; then the growth rate might again approximate ~t2 as expansion proceeds on a scale over which the distribution of galaxies is roughly homogeneous; then another period of spiky growth followed by smooth ~t2 growth as galactic superclusters are colonized; until ultimately the growth rate starts a final decline, eventually reaching zero as the expansion speed of the universe accelerates to such an extent as to make further colonization impossible. 20 The simulation argument may be of particular importance in this context. A superintelligent agent may assign a significant probability to hypotheses according to which it lives in a computer simulation and its percept sequence is generated by another superintelligence, and this might various generate convergent instrumental reasons depending on the agent's guesses about what types of simulations it is most likely to be in. Cf. Bostrom (2003) . instrumental values would be pursued and used to realize the agent's final goals, not the specific actions that the agent would take to achieve this. \n Conclusions The orthogonality thesis suggests that we cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans-scientific curiosity, benevolent concern for others, spiritual enlightenment and contemplation, renunciation of material acquisitiveness, a taste for refined culture or for the simple pleasures in life, humility and selflessness, and so forth. It might be possible through deliberate effort to construct a superintelligence that values such things, or to build one that values human welfare, moral goodness, or any other complex purpose that its designers might want it to serve. But it is no less possible-and probably technically easier-to build a superintelligence that places final value on nothing but calculating the decimals of pi. The instrumental convergence thesis suggests that we cannot blithely assume that a superintelligence with the final goal of calculating the decimals of pi (or making paperclips, or counting grains of sand) would limit its activities in such a way as to not materially infringe on human interests. 
An agent with such a final goal would have a convergent instrumental reason, in many situations, to acquire an unlimited amount of physical resources and, if possible, to eliminate potential threats to itself and its goal system. 21 It might be possible to set up a situation in which the optimal way for the agent to pursue these instrumental values (and thereby its final goals) is by promoting human welfare, acting morally, or serving some beneficial purpose as intended by its creators. However, if and when such an agent finds itself in a different situation, one in which it expects a greater number of decimals of pi to be calculated if it destroys the human species than if it continues to act cooperatively, its behavior would instantly take a sinister turn. This indicates a danger in relying on instrumental values as a guarantor of safe conduct in future artificial agents that are intended to become superintelligent and that might be able to leverage their superintelligence into extreme levels of power and influence. 22 
\n Notes 
1. This is of course not to deny that differences that appear small visually can be functionally profound. 
2. For some recent attempts to defend the Humean theory of motivation, see Smith (1987), Lewis (1988), and Sinhababu (2009). 
3. See also Parfit (2011). 
4. The orthogonality thesis implies that most any combination of final goal and intelligence level is logically possible; it does not imply that it would be practically easy to endow a superintelligent agent with some arbitrary or human-respecting final goal-even if we knew how to construct the intelligence part. For some preliminary notes on the value-loading problem, see, e.g., Dewey (2011) and Yudkowsky (2011). 
5. See Sandberg & Bostrom (2008). 
7. See Chislenko (1997). 
8. See also Shulman (2010). 
9. An agent might also change its goal representation if it changes its ontology, in order to transpose its old representation into the new ontology. Cf. de Blanc (2011). 
10. Another type of factor that might make an evidential decision theorist undertake various actions, including changing its final goals, is the evidential import of deciding to do so. For example, an agent that follows evidential decision theory might believe that there exist other agents like it in the universe, and that its own actions will provide some evidence about how those other agents will act. The agent might therefore choose to adopt a final goal that is altruistic towards those other evidentially-linked agents, on grounds that this will give the agent evidence that those other agents will have chosen to act in like manner. An equivalent outcome might be obtained, however, without changing one's final goals, by choosing in each instant to act as if one had those final goals. 
13. This strategy is exemplified by the sea squirt larva, which swims about until it finds a suitable rock, to which it then permanently affixes itself. Cemented in place, the larva has less need for complex information processing, whence it proceeds to digest part of its own brain (its cerebral ganglion). Academics can sometimes observe a similar phenomenon in colleagues who are granted tenure. 
14. Cf. Bostrom (2012).", "date_published": "n/a", "url": "n/a", "filename": "superintelligentwill.tei.xml", "abstract": "This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. 
The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary-more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having any of a wide range of final goals will pursue similar intermediary goals because they have instrumental reasons to do so. In combination, the two theses help us understand the possible range of behavior of superintelligent agents, and they point to some potential dangers in building such an agent.", "id": "97047d20c08bbb0d70df73617218e2a9"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Andrea Bajcsy", "Dylan P Losey", "Marcia K O'malley", "Anca D Dragan"], "title": "Learning Robot Objectives from Physical Human Interaction", "text": "Introduction Imagine a robot performing a manipulation task next to a person, like moving the person's coffee mug from a cabinet to the table (Fig. 1 ). As the robot is moving, the person might notice that the robot is carrying the mug too high above the table. Knowing that the mug would break if it were to slip and fall from so far up, the person easily intervenes and starts pushing the robot's end-effector down to bring the mug closer to the table. In this work, we focus on how the robot should then respond to such physical human-robot interaction (pHRI). Several reactive control strategies have been developed to deal with pHRI [1, 2, 3] . For instance, when a human applies a force on the robot, it can render a desired impedance or switch to gravity compensation and allow the human to easily move the robot around. In these strategies, the moment the human lets go of the robot, it resumes its original behavior-our robot from earlier would go back to carrying the mug too high, requiring the person to continue intervening until it finished the task (Fig. 1, left ). Although such control strategies guarantee fast reaction to unexpected forces, the robot's return to its original motion stems from a fundamental limitation of traditional pHRI strategies: they miss the fact that human interventions are often intentional and occur because the robot is doing something wrong. While the robot's original behavior may have been optimal with respect to the robot's pre-defined objective function, the fact that a human intervention was necessary implies that this objective function was not quite right. Our insight is that because pHRI is intentional, it is also informative-it provides observations about the correct robot objective function, and the robot can leverage these observations to learn that correct objective. Returning to our example, if the person is applying forces to push the robot's end-effector closer to the table, then the robot should change its objective function to reflect this preference, and complete the rest of the current task accordingly, keeping the mug lower (Fig. 1 , right). Ultimately, human interactions should not be thought of as disturbances, which perturb the robot from its desired behavior, but rather as corrections, which teach the robot its desired behavior. In this paper, we make the following contributions: Formalism. 
We formalize reacting to pHRI as the problem of acting in a dynamical system to optimize an objective function, with two caveats: 1) the objective function has unknown parameters θ, and 2) human interventions serve as observations about these unknown parameters: we model human behavior as approximately optimal with respect to the true objective. (Figure 1: A person interacts with a robot that treats interactions as disturbances (left), and a robot that learns from interactions (right). When humans are treated as disturbances, force plots reveal that people have to continuously interact since the robot returns to its original, incorrect trajectory. In contrast, a robot that learns from interactions requires minimal human feedback to understand how to behave, i.e., go closer to the table.) As stated, this problem is an instance of a Partially Observable Markov Decision Process (POMDP). Although we cannot solve it in real-time using POMDP solvers, this formalism is crucial to converting the problem of reacting to pHRI into a clearly defined optimization problem. In addition, our formalism enables pHRI approaches to be justified and compared in terms of this optimization criterion. Online Solution. We introduce a solution that adapts learning from demonstration approaches to our online pHRI setting [4, 5], but derive it as an approximate solution to the problem above. This enables the robot to adapt to pHRI in real-time, as the current task is unfolding. Key to this approximation is simplifying the observation model: rather than interpreting instantaneous forces as noisy-optimal with respect to the value function given θ, we interpret them as implicitly inducing a noisy-optimal desired trajectory. Reasoning in trajectory space enables an efficient approximate online gradient approach to estimating θ. User Study. We conduct a user study with the JACO2 7-DoF robotic arm to assess how online learning from physical interactions during a task affects the robot's objective performance, as well as subjective participant perceptions. Overall, our work is a first step towards learning robot objectives online from pHRI. \n Related Work We propose using pHRI to correct the robot's objective function while the robot is performing its current task. Prior research has focused on (a) control strategies for reacting to pHRI without updating the robot's objective function, or (b) learning the robot's objectives-from offline demonstrations-in a manner that generalizes to future tasks, but does not change the behavior during the current task. An exception is shared autonomy work, which does correct the robot's objective function online, but only when the objective is parameterized by the human's desired goal in free-space. Control Strategies for Online Reactions to pHRI. A variety of control strategies have been developed to ensure safe and responsive pHRI. They largely fall into three categories [6]: impedance control, collision handling, and shared manipulation control. Impedance control [1] relates deviations from the robot's planned trajectory to interaction torques. The robot renders a virtual stiffness, damping, and/or inertia, allowing the person to push the robot away from its desired trajectory, but the robot always returns to its original trajectory after the interaction ends. Collision handling methods [2] include stopping, switching to gravity compensation, or re-timing the planned trajectory if a collision is detected. 
Finally, shared manipulation [3] refers to role allocation in situations where the human and the robot are collaborating. These control strategies for pHRI work in real-time, and enable the robot to safely adapt to the human's actions; however, the robot fails to leverage these interventions to update its understanding of the task-left alone, the robot would continue to perform the task in the same way as it had planned before any human interactions. By contrast, we focus on enabling robots to adjust how they perform the current task in real time. Offline Learning of Robot Objective Functions. Inverse Reinforcement Learning (IRL) methods focus explicitly on inferring an unknown objective function, but do it offline, after passively observing expert trajectory demonstrations [7] . These approaches can handle noisy demonstrations [8] , which become observations about the true objective [9] , and can acquire demonstrations through physical kinesthetic teaching [10] . Most related to our work are approaches which learn from corrections of the robot's trajectory, rather than from demonstrations [4, 5, 11] . Our work, however, has a different goal: while these approaches focus on the robot doing better the next time it performs the task, we focus on the robot completing its current task correctly. Our solution is analogous to online Maximum Margin Planning [4] and co-active learning [5] for this new setting, but one of our contributions is to derive their update rule as an approximation to our pHRI problem. Online Learning of Human Goals. While IRL can learn the robot's objective function after one or more demonstrations of a task, online inference is possible when the objective is simply to reach a goal state, and the robot moves through free space [12, 13, 14] . We build on this work by considering general objective parameters; this requires a more complex (non-analytic and difficult to compute) observation model, along with additional approximations to achieve online performance. 3 Learning Robot Objectives Online from pHRI \n Formalizing Reacting to pHRI We consider settings where a robot is performing a day-to-day task next to a person, but is not doing it correctly (e.g., is about to spill a glass of water), or not doing it in a way that matches the person's preferences (e.g., is getting too close to the person). Whenever the person physically intervenes and corrects the robot's motion, the robot should react accordingly; however, there are many strategies the robot could use to react. Here, we formalize the problem as a dynamical system with a true objective function that is known by the person but not known by the robot. This formulation interprets the human's physical forces as intentional, and implicitly defines an optimal strategy for reacting. Notation. Let x denote the robot's state (its position and velocity) and u R the robot's action (the torque it applies at its joints). The human physically interacts with the robot by applying external torque u H . The robot transitions to a next state defined by its dynamics, ẋ = f (x, u R + u H ), where both the human and robot can influence the robot's motion. POMDP Formulation. The robot optimizes a reward function r(x, u R , u H ; θ), which trades off between correctly completing the task and minimizing human effort r(x, u R , u H ; θ) = θ T φ(x, u R , u H ) − λ||u H || 2 (1) Following prior IRL work [15, 4, 8] , we parameterize the task-related part of this reward function as a linear combination of features φ with weights θ. 
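As a minimal sketch of what this parameterization might look like in code (our illustration, not the authors' implementation; the specific feature names below are stand-ins in the spirit of the features used in their tasks), the reward in (1) is simply a weighted sum of task features minus a penalty on human effort:

import numpy as np

def phi(x):
    # Task-related features of the current robot state. The particular choices
    # here (end-effector speed, height above the table, cup tilt) are
    # illustrative placeholders.
    return np.array([x["speed"], x["height"], x["tilt"]])

def reward(x, u_R, u_H, theta, lam=0.1):
    # Eq. (1): theta^T phi(x) - lambda * ||u_H||^2.
    # u_R is part of the signature but unused by these simple features.
    return theta @ phi(x) - lam * np.linalg.norm(u_H) ** 2

theta = np.array([-1.0, -2.0, -5.0])   # e.g. penalize speed, height, and tilt
x = {"speed": 0.3, "height": 0.4, "tilt": 0.05}
print(reward(x, u_R=np.zeros(7), u_H=np.zeros(7), theta=theta))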
Note that we assume the relevant set of features for each task are given, and we will not explore feature selection within this work. Here θ encapsulates the true objective, such as moving the glass slowly, or keeping the robot's end-effector farther away from the person. Importantly, this parameter is not known by the robot-robots will not always know the right way to perform a task, and certainly not the human-preferred way. If the robot knew θ, this would simply become an MDP formulation, where the states are x, the actions are u_R, the reward is r, and the person would never need to intervene. Uncertainty over θ, however, turns this into a POMDP formulation, where θ is a hidden part of the state. Importantly, the human's actions are observations about θ under some observation model P(u_H | x, u_R; θ). These observations u_H are atypical in two ways: (a) they affect the robot's reward, as in [13], and (b) they influence the robot's state, but we don't necessarily want to account for that when planning-the robot should not rely on the human to move the robot; rather the robot should consider u_H only for its information value. Observation Model. We model the human's interventions as corrections which approximately maximize the robot's reward. More specifically, we assume the noisy-rational human selects an action u_H that, when combined with the robot's action u_R, leads to a high Q-value (state-action value) assuming the robot will behave optimally after the current step (i.e., assuming the robot knows θ): $P(u_H \mid x, u_R; \theta) \propto e^{Q(x, u_R + u_H; \theta)}$ (2). Our choice of (2) stems from maximum entropy assumptions [8], as well as the Boltzmann distributions used in cognitive science models of human behavior [16]. Aside. We are not formulating this as a POMDP to solve it using standard POMDP solvers. Instead, our goal is to clarify the underlying problem formulation and the existence of an optimal strategy. \n Approximate Solution Since POMDPs cannot be solved tractably for high-dimensional real-world problems, we make several approximations to arrive at an online solution. We first separate estimation from finding the optimal policy, and approximate the policy by separating planning from control. We then simplify the estimation model, and use the maximum a posteriori (MAP) estimate instead of the full belief over θ. QMDP. Similar to [13], we approximate our POMDP using a QMDP by assuming the robot will obtain full observability at the next time step [17]. Let b denote the robot's current belief over θ. The QMDP simplifies into two subproblems: (a) finding the robot's optimal policy given b, $Q(x, u_R, b) = \int b(\theta)\, Q(x, u_R, \theta)\, d\theta$ (3), where $\arg\max_{u_R} Q(x, u_R, b)$ evaluated at every state yields the optimal policy, and (b) updating our belief over θ given a new observation. Unlike the actual POMDP solution, here the robot will not try to gather information. From Belief to Estimator. Rather than planning with the belief b, we plan with only the MAP of θ. 
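For intuition about what the exact estimation subproblem would involve, the sketch below (ours, not the authors') maintains a discretized belief over a single scalar component of θ and updates it with the Boltzmann observation model of (2), using a stand-in Q function. Needing to evaluate Q for every candidate θ is exactly the burden that motivates the trajectory-space simplification and MAP estimate developed next.

import numpy as np

thetas = np.linspace(-5.0, 5.0, 101)   # discretized candidate values of theta
u_grid = np.linspace(-2.0, 2.0, 201)   # discretized candidate human actions
belief = np.ones_like(thetas) / len(thetas)

def Q(x, u, theta):
    # Stand-in Q-value; in the real problem, evaluating Q for each theta
    # would mean solving the underlying MDP, which is intractable online.
    return theta * (x + u) - 0.5 * u ** 2

def likelihood(u_H, x, u_R, theta, beta=1.0):
    # Eq. (2), normalized as a softmax over possible human actions.
    scores = beta * Q(x, u_R + u_grid, theta)
    return np.exp(beta * Q(x, u_R + u_H, theta)) / np.exp(scores).sum()

def update_belief(belief, u_H, x, u_R):
    posterior = belief * np.array([likelihood(u_H, x, u_R, th) for th in thetas])
    return posterior / posterior.sum()

# A single observed human correction shifts the belief toward the theta
# values under which that correction would have been nearly optimal.
belief = update_belief(belief, u_H=-0.4, x=0.2, u_R=0.1)
print("MAP estimate of theta:", thetas[np.argmax(belief)])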
From Policies to Trajectories (Action). Computing Q in continuous state, action, and belief spaces is still not tractable. We thus separate planning and control. At every time step t, we do two things. First, given our current estimate $\hat{\theta}^t$, we replan a trajectory $\xi = x^{0:T} \in \Xi$ that optimizes the task-related reward. Let $\theta^T \Phi(\xi)$ be the cumulative reward, where $\Phi(\xi)$ is the total feature count along trajectory ξ such that $\Phi(\xi) = \sum_{x^t \in \xi} \phi(x^t)$. We use a trajectory optimizer [18] to replan the robot's desired trajectory $\xi_R^t$: $\xi_R^t = \arg\max_{\xi}\ \hat{\theta}^t \cdot \Phi(\xi)$ (4). Second, once $\xi_R^t$ has been planned, we control the robot to track this desired trajectory. We use impedance control, which allows people to change the robot's state by exerting torques, and provides compliance for human safety [19, 6, 1]. After feedback linearization [20], the equation of motion under impedance control becomes $M_R(\ddot{q}^t - \ddot{q}_R^t) + B_R(\dot{q}^t - \dot{q}_R^t) + K_R(q^t - q_R^t) = u_H^t$ (5). Here $M_R$, $B_R$, and $K_R$ are the desired inertia, damping, and stiffness, $x = (q, \dot{q})$, where q is the robot's joint position, and $q_R \in \xi_R$ denotes the desired joint position. Within our experiments, we implemented a simplified impedance controller without feedback linearization: $u_R^t = B_R(\dot{q}_R^t - \dot{q}^t) + K_R(q_R^t - q^t)$ (6). Aside. When the robot is not updating its estimate $\hat{\theta}$, then $\xi_R^t = \xi_R^{t-1}$, and our solution reduces to using impedance control to track an unchanging trajectory [2, 19]. From Policies to Trajectories (Estimation). We still need to address the second QMDP subproblem: updating θ after each new observation. Unfortunately, evaluating the observation model (2) for any given θ is difficult, because it requires computing the Q-value function for that θ. Hence, we will again leverage a simplification from policies to trajectories in order to update our MAP of θ. Instead of attempting to directly relate u_H to θ, we propose an intermediate step; we interpret each human action u_H via an intended trajectory, ξ_H, that the human wants the robot to execute. To compute the intended trajectory ξ_H from ξ_R and u_H, we propagate the deformation caused by u_H along the robot's current trajectory ξ_R: $\xi_H = \xi_R + \mu A^{-1} U_H$ (7), where μ > 0 scales the magnitude of the deformation, A defines a norm on the Hilbert space of trajectories and dictates the deformation shape [21], $U_H = u_H$ at the current time, and $U_H = 0$ at all other times. In our experiments we used a norm A based on acceleration [21], but we will explore learning the choice of this norm in future work. Importantly, our simplification from observing the human action u_H to implicitly observing the human's intended trajectory ξ_H means we no longer have to evaluate the Q-value of u_R + u_H given some θ value. Instead, the observation model now depends on the total reward of the implicitly observed trajectory: $P(\xi_H \mid \xi_R, \theta) \propto e^{\theta^T \Phi(\xi_H) - \lambda\|u_H\|^2} \approx e^{\theta^T \Phi(\xi_H) - \lambda\|\xi_H - \xi_R\|^2}$ (8). This is analogous to (2), but in trajectory space-a distribution over implied trajectories, given θ and the current robot trajectory. \n Online Update of the θ Estimate The probability distribution over θ at time step t is proportional to $P(\xi_H^0, \ldots, \xi_H^t \mid \theta, \xi_R^0, \ldots, \xi_R^t)\, P(\theta)$. However, since θ is continuous, and the observation model is not Gaussian, we opt not to track the full belief, but rather to track the maximum a posteriori (MAP) estimate. Our update rule for this estimate will reduce to online Maximum Margin Planning [4] if we treat ξ_H as the demonstration, and to co-active learning [5] if we treat ξ_H as the original trajectory with one waypoint corrected. One of our contributions, however, is to derive this update rule from our MaxEnt observation model in (8). 
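One concrete way to realize the propagation in (7) is to build A from a finite-difference acceleration operator, so that a push at a single waypoint deforms the remainder of the trajectory smoothly. The construction below is a sketch consistent with "a norm based on acceleration", not the exact matrix used in the paper's experiments; a 1-D trajectory is used for clarity.

import numpy as np

def acceleration_norm_matrix(T):
    # Discrete second-difference operator that treats waypoints beyond the
    # ends as fixed, so ||K xi||^2 penalizes acceleration and A = K^T K is
    # invertible.
    K = np.zeros((T, T))
    for t in range(T):
        K[t, t] = -2.0
        if t > 0:
            K[t, t - 1] = 1.0
        if t < T - 1:
            K[t, t + 1] = 1.0
    return K.T @ K

T = 20                                  # remaining waypoints (1-D for clarity)
xi_R = np.full(T, 0.2)                  # currently planned height profile
A = acceleration_norm_matrix(T)

U_H = np.zeros(T)
U_H[3] = -1.0                           # human pushes down at the current waypoint
mu = 1e-3                               # scales how strongly a push reshapes xi_R

xi_H = xi_R + mu * np.linalg.solve(A, U_H)   # Eq. (7): xi_H = xi_R + mu A^{-1} U_H
# The result is a smooth, distributed dip along the remaining waypoints rather
# than a kink at the single waypoint where the force was applied.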
MAP. Assuming the observations are conditionally independent given θ, the MAP estimate for time t+1 is $\hat{\theta}^{t+1} = \arg\max_\theta P(\xi_H^0, \ldots, \xi_H^t \mid \xi_R^0, \ldots, \xi_R^t, \theta)\, P(\theta) = \arg\max_\theta \sum_{\tau=0}^{t} \log P(\xi_H^\tau \mid \xi_R^\tau, \theta) + \log P(\theta)$ (9). Inspecting the right side of (9), we need to define both $P(\xi_H \mid \xi_R, \theta)$ and the prior P(θ). To approximate $P(\xi_H \mid \xi_R, \theta)$, we use (8) with Laplace's method to compute the normalizer. Taking a second-order Taylor series expansion of the objective function about ξ_R, the robot's current best guess at the optimal trajectory, we obtain a Gaussian integral that can be evaluated in closed form: $P(\xi_H \mid \xi_R, \theta) = \frac{e^{\theta^T \Phi(\xi_H) - \lambda\|\xi_H - \xi_R\|^2}}{\int e^{\theta^T \Phi(\xi) - \lambda\|\xi - \xi_R\|^2}\, d\xi} \approx e^{\theta^T (\Phi(\xi_H) - \Phi(\xi_R)) - \lambda\|\xi_H - \xi_R\|^2}$ (10). Let $\hat{\theta}^0$ be our initial estimate of θ. We propose the prior $P(\theta) = e^{-\frac{1}{2\alpha}\|\theta - \hat{\theta}^0\|^2}$ (11), where α is a positive constant. Substituting (10) and (11) into (9), the MAP estimate reduces to $\hat{\theta}^{t+1} \approx \arg\max_\theta \sum_{\tau=0}^{t} \theta^T\big(\Phi(\xi_H^\tau) - \Phi(\xi_R^\tau)\big) - \frac{1}{2\alpha}\|\theta - \hat{\theta}^0\|^2$ (12). Notice that the $\lambda\|\xi_H - \xi_R\|^2$ terms drop out, because this penalty for human effort does not explicitly depend on θ. Solving the optimization problem (12) by taking the gradient with respect to θ, and then setting the result equal to zero, we finally arrive at $\hat{\theta}^{t+1} = \hat{\theta}^0 + \alpha \sum_{\tau=0}^{t} \big(\Phi(\xi_H^\tau) - \Phi(\xi_R^\tau)\big) = \hat{\theta}^t + \alpha\big(\Phi(\xi_H^t) - \Phi(\xi_R^t)\big)$ (13). Interpretation. This update rule is actually the online gradient [22] of (9) under our Laplace approximation of the observation model. It has an intuitive interpretation: it shifts the weights in the direction of the human's intended feature count. For example, if ξ_H stays farther from the person than ξ_R, the weights in θ associated with distance-to-person features will increase. Relation to Prior Work. This update rule is analogous to two related works. First, it would be the online version of Maximum Margin Planning (MMP) [4] if the trajectory $\xi_H^t$ were a new demonstration. Unlike MMP, our robot does not complete a trajectory, and only then get a full new demonstration; instead, our $\xi_H^t$ is an estimate of the human's intended trajectory based on the force applied during the robot's execution of the current trajectory $\xi_R^t$. Second, the update rule would be co-active learning [5] if the trajectory $\xi_H^t$ were $\xi_R^t$ with one waypoint modified, as opposed to a propagation of $u_H^t$ along the rest of $\xi_R^t$. Unlike co-active learning, however, our robot receives corrections continually, and continually updates the current trajectory in order to complete the current task well. Nonetheless, we are excited to see similar update rules emerge from different optimization criteria. Summary. We formalized reacting to pHRI as a POMDP with the correct objective parameters as a hidden state, and approximated the solution to enable online learning from physical interaction. At every time step during the task where the human interacts with the robot, we first propagate u_H to implicitly observe the corrected trajectory ξ_H (simplification of the observation model), and then update $\hat{\theta}$ via Equation (13) (MAP instead of belief). We replan with the new estimate (approximation of the optimal policy), and use impedance control to track the resulting trajectory (separation of planning from control). We summarize and visualize this process in Fig. 2. 
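Putting the pieces of this summary together, the per-interaction update is only a few lines. The sketch below is our paraphrase with stand-in features and a stubbed replanner, not the authors' code; the one substantive line is the gradient-style update of Equation (13).

import numpy as np

def feature_counts(xi):
    # Phi(xi): total feature count along the trajectory. Stand-in features:
    # summed height above the table and a crude speed proxy.
    speeds = np.abs(np.diff(xi, prepend=xi[0]))
    return np.array([xi.sum(), speeds.sum()])

def replan(theta_hat, xi_current):
    # Placeholder for the trajectory optimizer of Eq. (4); a real system
    # would re-solve for the reward-maximizing trajectory here.
    return xi_current

alpha = 0.01
theta_hat = np.array([0.0, 0.0])      # initial estimate theta_hat^0
xi_R = np.full(20, 0.2)               # planned: carry the cup 0.2 m above the table

# One interaction: the human's push, propagated as in Eq. (7), implies an
# intended trajectory xi_H that dips closer to the table mid-motion.
xi_H = xi_R.copy()
xi_H[8:14] -= 0.05

# Eq. (13): shift the weights toward the human's intended feature counts.
theta_hat = theta_hat + alpha * (feature_counts(xi_H) - feature_counts(xi_R))
xi_R = replan(theta_hat, xi_R)
print("updated theta_hat:", theta_hat)   # the height weight becomes negative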
We designed tasks where the robot began with the wrong objective function, and participants phsyically corrected the robot's behavior 1 . \n Experiment Design Independent Variables. We manipulated the pHRI strategy with two levels: learning and impedance. The robot either used our method (Algorithm 1) to react to physical corrections and re-plan a new trajectory during the task; or used impedance control (our method without updating θ) to react to physical interactions and then return to the originally planned trajectory. Dependent Measures. We measured the robot's performance with respect to the true objective, along with several subjective measures. One challenge in designing our experiment was that each person might have a different internal objective for any given task, depending on their experience and preferences. Since we do not have direct access to every person's internal preferences, we defined the true objective ourselves, and conveyed the objectives to participants by demonstrating the desired optimal robot behavior (see an example in Fig. 3(a) , where the robot is supposed to keep the cup upright). We instructed participants to get the robot to achieve this desired behavior with minimal human physical intervention. For each robot attempt at a task, we evaluated the task related and effort related parts of the objective: θ T Φ(ξ) (a cost to be minimized and not a reward to be maximized in our experiment) and t ||u t H || 1 . We also evaluate the total amount of time spent interacting physically with the robot. For our subjective measures, we designed 4 multi-item scales shown in Table 1 : did participants think the robot understood how they wanted to task done, did they feel like they had to exert a lot of effort to correct the robot, was it easy to anticipate the robot's reactions, and how good of a collaborator was the robot. Hypotheses: H1. Learning significantly decreases interaction time, effort, and cumulative trajectory cost. H2. Participants will believe the robot understood their preferences, feel less interaction effort, and perceive the robot as more predictable and more collaborative in the learning condition. Tasks. We designed three household manipulation tasks for the robot to perform in a shared workspace (see Fig. 3 ), plus a familiarization task. As such, the robot's objective function considered two features: velocity and a task-specific feature. For each task, the robot carried a cup from a start to a goal pose with an initially incorrect objective, requiring participants to correct its behavior during the task. During the familiarization task, the robot's original trajectory moved too close to the human. Participants had to physically interact with the robot to get it to keep the cup further away from their body. In Task 1, the robot would not care about tilting the cup mid-task, risking spilling if the cup was too full. Participants had to get the robot to keep the cup upright. In Task 2, the robot would move the cup too high in the air, risking breaking it if it were to slip, and participants had to get the robot to keep it closer to the table. Finally, in Task 3, the robot would move the cup over a laptop to reach it's final goal pose, and participants had to get the robot to keep the cup away from the laptop. Participants. We used a within-subjects design and counterbalanced the order of the pHRI strategy conditions. 
In total, we recruited 10 participants (5 male, 5 female, aged 18-34) from the UC Berkeley community, all of whom had technical backgrounds. Procedure. For each pHRI strategy, participants performed the familiarization task, followed by the three tasks, and then filled out our survey. They attempted each task twice with each strategy for robustness, and we recorded the attempt number for our analysis. Since we artificially set the true objective for participants to measure objective performance, we showed participants both the original and desired robot trajectory before interaction (Fig. 3 ), so that they understood the objective. \n Results Objective. We conducted a factorial repeated measures ANOVA with strategy (impedance or learning) and trial number (first attempt or second attempt) as factors, on total participant effort, interaction time, and cumulative true cost 2 (see Figure 4 and Figure 5 ). Learning resulted in significantly less interaction force (F (1, 116) = 86.29, p < 0.0001) and interaction time (F (1, 116) = 75.52, p < 0.0001), and significantly better task cost (F (1, 116) = 21.85, p < 0.0001). Interestingly, while trial number did not significantly affect participant's performance with either method, attempting the task a second time yielded a marginal improvement for the impedance strategy, but not for the learning strategy. This may suggest that it is easier to get used to the impedance strategy. Overall, this supports H1, and aligns with the intuition that if humans are truly intentional actors, then using interaction forces as information about the robot's objective function enables robots to better complete their tasks with less human effort compared to traditional pHRI methods. Subjective. Table 1 shows the results of our participant survey. We tested the reliability of our 4 scales, and found the understanding, effort, and collaboration scales to be reliable, so we grouped them each into a combined score. We ran a one-way repeated measures ANOVA on each resulting score. We found that the robot using our method was perceived as significantly (p < 0.0001) more understanding, less difficult to interact with, and more collaborative. However, we found no significant difference between our method and the baseline impedance method in terms of predictability. Participant comments suggest that while the robot adapted quickly to their corrections when learning (e.g. \"The robot seemed to quickly figure out what I cared about and kept doing it on its own\"), determining what the robot was doing during learning was less apparent (e.g. \"If I pushed it hard enough sometimes it would seem to fall into another mode and then do things correctly\"). Therefore, H2 was partially supported: although our learning algorithm was not perceived as more predictable, participants believed that the robot understood their preferences more, took less effort to interact with, and was a more collaborative partner. Cup Table Laptop Task \n Discussion Summary. We propose that robots should not treat human interaction forces as disturbances, but rather as informative actions. We show that this results in robots capable of in-task learningrobots that update their understanding of the task which they are performing and then complete it correctly, instead of relying on people to guide them until the task is done. We test this concept with participants who not only teach the robot to finish its task according to their preferences, but also subjectively appreciate the robot's learning. 
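For readers who want to reproduce this style of analysis, a rough sketch of a two-factor repeated-measures ANOVA in statsmodels is shown below on synthetic data; the column names, synthetic effect sizes, and the aggregation over tasks are assumptions, and this is not the authors' analysis code.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for pid in range(10):                          # 10 participants
    for strategy in ("impedance", "learning"):
        for trial in (1, 2):
            base = 30.0 if strategy == "impedance" else 12.0   # synthetic effect
            rows.append({"participant": pid, "strategy": strategy,
                         "trial": trial, "force": base + rng.normal(0, 3)})
df = pd.DataFrame(rows)

# within-subjects factors: pHRI strategy and trial (attempt) number
res = AnovaRM(df, depvar="force", subject="participant",
              within=["strategy", "trial"]).fit()
print(res.anova_table)
```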
Limitations and Future Work. Ours is merely a step in exploring learning robot objectives from pHRI. We opted for an approximation closest to the existing literature, but other possible better online solutions are possible. In our user study, we assumed knowledge of the two relevant reward features. In reality, reward functions will have larger feature sets and human interactions may only give information about a certain subset of relevant weights. The robot will thus need to disambiguate what the person is trying to correct, likely requiring active information gathering. Further, developing solutions that can handle dynamical aspects, like preferences about the timing of the motion, would require a different approach to inferring the intended human trajectory, or going back the space of policies altogether. Finally, while we focused on in-task learning, the question of how and when to generalize learned objectives to new task instances remains open. Figure 2 : 2 Figure 2: Algorithm (left) and visualization (right) of one iteration of our online learning from pHRI method in an environment with two obstacles O1, O2. The originally planned trajectory, ξ t R (black dotted line), is deformed by the human's force into the human's preferred trajectory, ξ t H (solid black line). Given these two trajectories, we compute an online update of θ and can replan a better trajectory ξ t+1 R (orange dotted line). \n (a) Task 1 :Figure 3 : 13 Figure 3: Simulations depicting the robot trajectories for each of the three experimental tasks. The black path represents the original trajectory and the blue path represents the human's desired trajectory. \n Figure 4 : 4 Figure4: Learning from pHRI decreases human effort and interaction time across all experimental tasks (total trajectory time was 15s). An asterisk (*) means p < 0.0001. \n Figure 5 : 5 Figure 5: (left) Average cumulative cost for each task as compared to the desired total trajectory cost. An asterisk (*) means p < 0.0001. (right) Plot of sample participant data from laptop task: desired trajectory is in blue, trajectory with impedance condition is in gray, and learning condition trajectory is in orange. \n Table 1 : 1 QuestionsCronbach's α Imped LSM Learn LSM F(1,9) p-value understanding By the end, the robot understood how I wanted it to do the task. Even by the end, the robot still did not know how I wanted it to do the task.The robot learned from my corrections.The robot did not understand what I was trying to accomplish. The robot did not collaborate with me to complete the task. Results of ANOVA on subjective metrics collected from a 7-point Likert-scale survey. 0.94 1.70 5.10 118.56 <.0001 \n\t\t\t For video footage of the experiment, see: https://www.youtube.com/watch?v=1MkI6DH1mcw \n\t\t\t For simplicity, we only measured the value of the feature that needed to be modified in the task, and computed the absolute difference from the feature value of the optimal trajectory.", "date_published": "n/a", "url": "n/a", "filename": "bajcsy17a.tei.xml", "abstract": "When humans and robots work in close proximity, physical interaction is inevitable. Traditionally, robots treat physical interaction as a disturbance, and resume their original behavior after the interaction ends. In contrast, we argue that physical human interaction is informative: it is useful information about how the robot should be doing its task. 
We formalize learning from such interactions as a dynamical system in which the task objective has parameters that are part of the hidden state, and physical human interactions are observations about these parameters. We derive an online approximation of the robot's optimal policy in this system, and test it in a user study. The results suggest that learning from physical interaction leads to better robot task performance with less human effort.", "id": "0e372ca30f5f33b44d1257f603098530"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Caspar Oesterheld"], "title": "Approval-directed agency and the decision theory of Newcomb-like problems", "text": "Introduction In decision theory, there is a large debate about how an instrumentally rational agent, i.e., an agent trying to achieve some goal or maximize some utility function, should decide in Newcomb's problem (introduced by Nozick 1969 ) and variations thereof (a list is given by Ledwig 2000, pp. 80-87) . Consequently, different normative theories of instrumental rationality have been developed. The best known ones are evidential (sometimes also called Bayesian) decision theory (EDT) (Ahmed 2014; Almond 2010; Price 1986; Horgan 1981 ) and causal decision theory (CDT) (Gibbard and Harper 1981; Joyce 1999; Lewis 1981; Skyrms 1982; Weirich 2016 ), but many have attempted to remediate what they view as failures of the two theories by proposing further alternatives (Spohn 2003 (Spohn , 2012 Poellinger 2013; Arntzenius 2008; Gustafsson 2011; Wedgwood 2013; Dohrn 2015; Price 2012; Soares and Levinstein 2017) . Because the main goal of artificial intelligence is to build machines that make instrumentally rational decisions (Russell and Norvig 2010, Sects. 1.1.4, 2.2; Legg and Hutter 2007; Doyle 1992) , this normative disagreement has some bearing on how to build these machines (cf. Soares and Fallenstein 2014a, Sect. 2.2; Soares and Fallenstein 2014b, Sect. 1; Bostrom 2014b, Chap. 13, Sect. \"Decision theory\") . The differences between these decision theories are probably inconsequential in most situations (Ahmed 2014, Sect. 0.5, Chap. 4; Briggs 2017 ), 1 but still matter in some (Ahmed 2014, Chap. 4-6; Soares 2014a; Bostrom 2014a) . In fact, AI may expose the differences more often. For example, Newcomb's problem and the prisoner's dilemma with a replica (Kuhn 2017, Sect. 7 ) are easy to implement for agents with copyable source code (cf. Yudkowsky 2010 pp. 85ff . Soares and Fallenstein 2014b, Sect. 2; Soares 2014b; Cavalcanti 2010; Sect. 5) . Indeed, the existence of many copies is the norm for (successful) software, including AI-based software. While copies of present-day software systems may only interact with each other in rigid, explicitly pre-programmed ways, future AI-based systems will make decisions in a more autonomous, flexible and goal-driven way. Overall, the decision theory of Newcomb-like scenarios is a central foundational issue which will plausibly become practically important in the longer term. The problem for AI research posed by the disagreement among decision theorists can be divided into two questions: 1. What decision theory do we want an AI to follow? 2. How could we implement such a decision theory in an AI? Or: How do decision theories and AI frameworks or architectures map onto each other? Although it certainly requires further discussion, there already is a large literature related to the first question. 2 In this paper, I would thus like to draw attention to the second question. 
Specifically, I would like to investigate how approval-directed agents behave in Newcomb-like problems. By an approval-directed agent, I mean an agent that is coupled with an overseer. After the agent has chosen an action, the overseer scores the agent for that action. Rather than, say, trying to bring about particular states in the environment, the agent chooses actions so as to maximize the score it receives from the overseer (cf. Christiano 2014) . A model of approval-directed agency that allows us to describe Newcomb-like situations is described and discussed in Sect. 2. Approval-directed agency is intended as a model of reinforcement learning agents (see Sutton and Barto 1998; Russell and Norvig 2010; Chaps. 17, 21 , for introductions to reinforcement learning), for whom the reward function is analogous to the approval-directed agent's overseer. Since reinforcement learning is such a general and commonly studied problem in artificial intelligence (Hutter 2005, e.g. Chap. 4.1.3; Russell and Norvig 2010, p. 831; Sutton and Barto 1998, Chap. 1) , it is an especially attractive target for modeling. 3 However, because decision theories are usually defined only for single decisions, we will only discuss single decisions whereas reinforcement learning is usually concerned with sequential interactions of agent and environment. However, this decision can also be a policy choice to model sequential decision problems. 4 In addition to limiting our analysis to single decisions, we will not discuss the learning process and simply assume that the agent has already formed some model of the world. If we assume that, after an action has been taken, the overseer rewards the agent based on the expected value of some von Neumann-Morgenstern utility function, the agent is implicitly driven by two decision theories: The overseer can use the regular conditional expectation or the causal expectation to estimate the value of its utility function; and the agent itself can follow CDT or EDT when maximizing the score it receives from the overseer (Sect. 3). We then show how the overall decision theory depends on these two potentially conflicting decision theories. If the overseer bases its expected value calculations on looking only at the world, then the agent's decision theory is decisive. If the overseer Footnote 2 continued (Meacham 2010; Soares and Fallenstein 2014b, Sect. 3; Yudkowsky 2010, Sect. 2; Greene 2018) . The same arguments imply that even if one is convinced of CDT or EDT one would not want the AI to use CDT and EDT. That said, one could also leave the self-modification to the AI. 3 Reinforcement learning and approval-directed agency are also common outside of artificial intelligence. For example, Achen and Bartels (2016, Chap. 4 ) review evidence which shows that electorates often vote retrospectively to punish or reward incumbents. 1 A causal model of an approval-directed agent in a Newcomb-like decision problem. A denotes the agent's action, H the environment history, O the observation on which the overseer bases the reward, R is that reward, and E r is information about the way the reward is computed that is only available to the overseer. The box is used to indicate that H includes the two random variables H p and H f . All of H may have a causal influence on O bases its estimates only on the agent's action, then the overseer's decision (or perhaps rather action evaluation) theory is decisive. \n A H p H f R O E r H Fig. 
\n Approval-directed agency We first describe a model of approval-directed agency. To be able to apply both CDT and EDT, we will use causal models in Pearl's (2009) sense. Consequently, we use Pearl's do-calculus-based version of CDT (Pearl 2009, Chap. 4 ). We will, throughout this paper, assume that the agent has already formed a (potentially implicit) model of the world 5 -e.g., based on past interactions with the environment. Also, we will only consider single decisions rather than sequential problems of iterative interaction between agent and environment. A causal model of such a one-shot Newcomb problem from the perspective of the approval-directed agent is given in Fig. 1 . In this model, the agent decides to take some action A, which may causally affect some part of the environment history, i.e., the history of states, H . We will call that part of the history the agent's causal future H f . Furthermore, the agent may be causally influenced by some other part of the environment history, which we will call the agent's causal past H p . H may contain information other than H f and H p , which we will assume to be independent of A. 6 The overseer, physically realized by, e.g., some module physically attached to the agent or a human supervisor, observes the agent's action and partially, via some percept O, the state of the world 7 . The overseer then calculates the reward R. To set proper incentives to the agent, we will assume the overseer to know not only the action and observation, but also everything that the agent knows (cf. Christiano 2016) . The overseer may also have access to some additional piece of information E r about the way the reward is to be calculated. 8 Lastly, we assume that the sets of possible values of A, O and E r are finite. In principle, the overseer could reward the agent in all kinds of ways. E.g., it could reward the agent \"deontologically\" (Alexander and Moore 2016) for taking a particular action independently of the consequences of taking that action. In this paper, we will assume that the reward estimates the value of some von Neumann-Morgenstern utility function U that only depends on states of the world. I use the capital U to indicate that the utility function, too, is a random variable (in the Bayesian sense). For simplicity's sake, we will, again, assume that the set of possible values of U is finite. We will view U as representing the system designer's preferences over world states. 9 While other ways of assigning the reward are possible, this is certainly an attractive way of getting an approval-directed agent to achieve goals that we want it to achieve. After all, in real-world applications, we will usually care about the outcomes of the agent's decisions, such as whether a car has reached its destination in time or whether a human has been hurt. The standard way of estimating U (H ) (or any quantity for that matter) is the familiar conditional expectation. Thus, the overseer may compute the reward as r = E [U (H ) | e r , a, o] , (1) Footnote 6 continued know your common source code), the dependence persists. We exclude these dependences because such situations cannot be modeled by standard causal graphs. However, we could adapt causal graphs to accomodate for these kinds of dependences. First, we could modify our definition of causality in such a way that dependence does imply causation, as has been proposed by Spohn (2003 Spohn ( , 2012 , Yudkowsky (2010) and others. 
For instance, we could model the dependence between the outputs of two instances of an algorithm by introducing a logical node as a common cause of the two. This logical node would then represent the output of the abstract algorithm that the two copies implement. While changes to the concept of causation may affect CDT's implied behavior, the results from this paper can be directly transfered to such modifications. Alternatively, we could extend causal graphs to also include non-causal dependences (cf. Poellinger 2013) . Such extension necessitates a new CDT formalism, so the proofs from this paper do not directly transfer to this case. That said, I expect our results to generalize given that both EDT and CDT would probably treat non-causal dependences on the action just like they treat causal arrows directed toward the action. 7 Christiano (2014) does not define approval-directed agency formally, but judging from a comment he made at https://medium.com/paulfchristiano/i-agree-that-the-key-feature-of-approval-directed-agents-isthat-the-causal-picture-is-736b4474910e, he considers it crucial to his conception that the overseer only looks at the agent's action and does not observe the action's consequences (cf. the distinction introduced in Sect. 3). 8 One reason for the overseer to have access to such additional information is that some of the human supervisor's values may not be expressible in a way that the approval-directed agent's algorithm can utilize (cf. Muehlhauser and Helm 2012, Sects. 3, 4, 5.3 ). 9 Some have tried to modify the reward relative to the designer's preferences to make the reinforcement learning problem easier to solve (Sorg 2011) , although Sutton and Barto (1998, Sect. 3 .2) explicitly discourage such tricks in their reinforcement learning textbook. where r , a, e r , and o are values of R, A, E r , and O, respectively. 10 A causal decision theorist overseer agrees that after an action a is taken the righthand side of Eq. 1 most accurately estimates how much utility is achieved. She merely thinks that this term should not be used to decide which action a to take in the first place. 11 However, this puts a causal decision theorist overseer in a peculiar situation. Whatever formula she uses to compute the reward will also be used by the rewardmaximizing agent to decide which action to take. A causal decision theorist overseer might therefore worry (rightfully, as we will see) that providing rewards according to Eq. 1 will make the agent EDT-ish. Hence, she either has to incorrectly estimate how much utility was achieved; or live with the agent using an-in her mind-incorrect way of weighing her options. If she prefers the latter, she would reward according to Eq. 1. But arguably getting the agent to choose correctly is the overseer's primary goal. Thus, she might prefer to compute the reward according to r = E [U (H ) | e r , do(a), o] . (2) Here, do(a) refers to Pearl's do-calculus, where conditioning on do(a) roughly means intervening from outside the causal model to set A to a. For an introduction to the do-calculus, see Pearl (2009) . Although a causal decision theorist overseer may prefer computing rewards according to Eq. 1, we will from now on say \"the overseer uses CDT\" if rewards are computed according to Eq. 2 and \"the overseer uses EDT\" if rewards are calculated according to Eq. 1. An approval-directed agent is characterized by maximizing the reward it receives from the overseer. 
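To make the contrast between Eq. (1) and Eq. (2) concrete, the following is a minimal, self-contained enumeration of a Newcomb-style model. The latent disposition node, its uniform prior, and the deterministic action are illustrative modelling assumptions (the paper keeps the causal model abstract); the 90% predictor accuracy and the $1M/$1K payoffs are the standard Newcomb numbers the paper also uses.

```python
# A minimal Newcomb-style causal model (illustrative numbers and structure):
#   D - latent disposition, uniform over {"one", "two"}; causes both the
#       prediction and (here, deterministically) the agent's action A = D.
#   P - the predictor's prediction; equals D with probability 0.9.
#   The opaque box holds $1M iff P == "one"; two-boxing adds a sure $1,000.

ACC = 0.9

def joint():
    """Enumerate (disposition, prediction) pairs with their probabilities."""
    for d in ("one", "two"):
        for p in ("one", "two"):
            yield d, p, 0.5 * (ACC if p == d else 1.0 - ACC)

def utility(action, prediction):
    return (1_000_000 if prediction == "one" else 0) + (1_000 if action == "two" else 0)

def edt_reward(a):
    """Eq. (1): condition on the action as evidence, i.e. on D = a here."""
    num = sum(pr * utility(a, p) for d, p, pr in joint() if d == a)
    den = sum(pr for d, p, pr in joint() if d == a)
    return num / den

def cdt_reward(a):
    """Eq. (2): intervene with do(A = a); the arrow into A is cut, so the
    disposition (and hence the prediction) keeps its prior distribution."""
    return sum(pr * utility(a, p) for d, p, pr in joint())

for a in ("one", "two"):
    print(a, edt_reward(a), cdt_reward(a))
# Eq. (1): one-boxing ~ $900,000 vs two-boxing ~ $101,000
# Eq. (2): one-boxing ~ $500,000 vs two-boxing ~ $501,000
```

An overseer rewarding by Eq. (1) therefore scores one-boxing higher, while an overseer rewarding by Eq. (2) scores two-boxing higher, which is exactly the tension described above.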
12 However, decision theory offers us, again, (at least) two different expected values, the regular expected value of EDT E [R | a] , (3) and CDT's causal expected value E [R | do(a)] . ( 4 ) 10 At first sight this may be confusing to some readers, because in reinforcement learning, utility sometimes refers to expected cumulative reward (Russell and Norvig 2010, Chap. 17, 21) , although others use the term value function instead (Sutton and Barto 1998, Sect. 3.7 ). Here, U does not refer to utility in that sense but in the decision-theoretical sense of representing intrinsic values. So, in the present case, we have two \"layers\" of goals: first, the agent maximizes the reward r . Second, the agent as incentivized by the overseer's way of calculating rewards maximizes utility U (H ). One cause of confusion is that in model applications of reinforcement learning, the reward function possesses full knowledge of the world state and thus does not require the use of the expectation operator. 11 If the disagreement in Newcomb's problem is to be about different theories of rational choice (EDT, CDT and so forth) rather than the predictive abilities of \"the being\", Omega or the psychologist, then after requesting both boxes a proponent of two-boxing has to believe that she will probably receive only $1000. Causal and evidential decision theorists agree that regular conditional expectation is the correct way of updating one's beliefs about the state of the world after an action has been taken (cf. the distinction between \"acts\" and \"actions\" in Pearl 2009 Sect. 4.1.1) . We leave the interesting question of which (if any) decision theory describes the behavior of current reinforcement learning algorithms to future research 13 and in the following assume that the agent is known to implement either CDT or EDT. \n The conflict of the decision theories of agent and overseer When viewed together with the overseer, our agent may now be seen as containing two decision theories, one for computing the reward and one in the algorithm that tries to find the action to maximize that reward. These decision theories may not always be the same. Given this potential discrepancy, the question is which of the two decision theories prevails, i.e., for which configurations of the two decision theories the overall agent acts like a CDT agent and for which it acts like an EDT agent w.r.t. U . As it turns out, the answer to this question depends on the decision problem in question. In particular, it depends on whether the overseer updates its estimate of U (H ) primarily based on the action taken by the agent or on its observation of the environment. For illustration, consider two versions of Newcomb's problem. In both versions, the predictor is equally reliable-e.g., correct with 90% probability-and the potential box contents are the same-e.g., the standard $1K and $1M. As usual, the content of the opaque box cannot be causally influenced by one's decision. In the first version, the overseer eventually sees the payoff, i.e., how much money the agent has made. In this case, as soon as the money is observed, the overseer's estimate of U (H ) becomes independent of the agent's action. More generally, O may tell the overseer so much about U (H ) that it becomes independent of A even if U (H ) is not yet fully observed. That is, E [U (H ) | e r , a, o] = E [U (H ) | e r , o] (5) and E [U (H ) | e r , do(a), o] = E [U (H ) | e r , o] (6) for all e r , a and o. Note that neither of these two implies the other. 
14 Intuitively speaking, these two mean that the reward is ultimately determined by U (H ). In the second version of Newcomb's problem, the monetary payoff is not observed but covertly invested into increasing the agent's utility function. Only the agent's choice can then inform the overseer about U (H ). Formally, it is both E [U (H ) | e r , a, o] = E [U (H ) | e r , a] (7) and E [U (H ) | e r , do(a), o] = E [U (H ) | e r , do(a)] . (8) Intuitively speaking, these two equations mean that the reward is not determined by U (H ) but by what the overseer believes U (H ) will be given a or do(a). Again, we assume that this is known to the agent. An example class of cases is that in which the agent's decisions are correlated with those of agents in far-away parts of the environment (cf. Treutlein and Oesterheld 2017; Oesterheld 2018b ). The two versions are depicted in Fig. 2 . Of course, these are only the two extremes from the set of all possible situations. In real-world Newcomb-like scenarios, the overseer may also draw some information from both sources. Nonetheless, it seems useful to understand the extreme cases, as this may also help us understand mixed ones. In the following subsections, we will show that in the first type, the decision theory of the agent is decisive, whereas in the second type, the overseer's decision theory is 15 . Roughly, the reason for that is the following: As noted earlier, the reward in the first type depends directly on U (H ). Thus, the agent will try to maximize U (H ) according to its own decision theory. In the second type, the overseer takes the agent's action a and then considers what either a or do(a) says about U (H ). Thus, the agent has to pay careful attention to whether the overseer uses EDT's or CDT's expected value. We prove this formally by considering all possible configurations of the type of the problem, the overseer's decision theory and the agent's decision theory. While we will limit our analysis to EDT and CDT, the results can easily be generalized to variants of these that arise from modifying the causal model or conditional credence distribution (e.g. Yudkowsky 2010; \"Disposition-based decision theory\"; Spohn 2012; Dohrn 2015 ). The analysis is summarized in Table 1 . \n First type \n The EDT agent The EDT agent judges its action by E [R | a] . (9) If the overseer calculates regular conditional expectation, then it is E [R | a] = E [E [U (H ) | E r , O, a] | a] (10) = E [U (H ) | a] , ( 11 ) where the last line is due to what is sometimes called the law of total expectation (LTE) or the tower rule (see, e.g., Ross 2007, Sect. 3.4; Billingsley 1995, Theorem 34.4 ). Intuitively, you cannot expect that gaining more evidence (i.e., E r and O in addition to a) moves your expectation of U (H ) into any particular direction. Because the overseer knows more than the agent, we will need this rule in all of the following derivations. Its application makes it hard to generalize these results to other decision theories, since LTE does not apply if the two decision theories do not both compute a form of expected utility. Equations 10 and 11 show that if the overseer computes regular expected value and the agent maximizes the reward according to EDT, then the agent as a whole maximizes U according to EDT. If the overseer computes CDT's expected value, it is E [R | a] = E [E [U (H ) | E r , do(a), O] | a] ( 12 = E [E [U (H ) | E r , a, O] | a] (15) = LTE E [U (H ) | a] (14) (16) \n The CDT agent The CDT agent judges its action by E [R | do(a)] . 
( 17 ) If the overseer uses regular expected value (EDT), then E [R | do(a)] = E [E [U (H ) | a, O, E r ] | do(a)] (18) = e r ,o P(e r , o | do(a)) • E [U (H ) | a, o, e r ] (19) = eq. 5 and 6 e r ,o P(e r , o | do(a)) • E [U (H ) | do(a), o, e r ] (20) = E [E [U (H ) | do(a), O, E r ] | do(a)] (21) = LTE E [U (H ) | do(a)] (22) Learning about an intervention do(a) cannot always be treated in the same way as learning about other events. Hence, the application of the law of total expectation is not straightforward. However, P(• | do(x)) is always a probability distribution. Because the law of total expectation applies to all probability distributions, it also applies to ones resulting from the application of the do-calculus. If the overseer uses CDT's expected value, then E [R | do(a)] = E [E [U (H ) | E r , O, do(a)] | do(a)] (23) = LTE E [U (H ) | do(a)] . (24) \n Second type \n The EDT agent The EDT agent judges its actions by E [R | a] . ( 25 ) If the overseer is based on regular conditional expectation (EDT), then it is again E [R | a] = E [E [U (H ) | E r , a] | a] (26) = LTE E [U (H ) | a] . ( 27 ) If the overseer is based on CDT-type expectation, then (33) E [R | a] = E [E [U (H ) | E r , do(a)] | a] ( 28 \n The CDT agent The CDT agent judges actions by E [R | do(a)] . ( 34 ) Because of Rule 2 in Theorem 3.4.1 of Pearl (2009, Sect. 3.4. 2) applied to the causal graph of Fig. 2b , it is E [R | do(a)] = E [R | a] . (35) Thus, the analysis of the CDT agent is equivalent to that of the EDT agent. \n Conclusion In this paper, we have taken a step to map reinforcement learning architectures onto decision theories. We found that in Newcomb-like problems, if the overseer rewards the agent purely on the basis of the agent's action, then the overall system's behavior is determined by the decision theory implicit in the overseer's reward function. If the overseer judges the agent based on looking at the world, however, then the agent's decision theory is decisive. This has implications for how we should design approval-directed agents. For instance, if we would like to leave decision-theoretical judgements to the overseer, we must ensure that the overseer assigns rewards before making new observations about the world state (cf. Christiano 2014, Sect. \"Avoid lock-in\") . Of course, this makes the reward less accurate and may thus slow down the agent's learning process. If we want the overseer to look at both the world and the agent's action, then we need to align both the overseer's and the agent's decision theory. Much more research is left to be done at the intersection of decision theory and artificial intelligence. For instance, what (if any) decision theories describe the way modern reinforcement learning algorithms maximize reward? Do the results of this paper generalize to sequential decision problems? Moving away from the reinforcement learning framework, what decision theories do other frameworks in AI implement? What about decision theories other than CDT and EDT? The reward is computed based on ob-(b) The reward is computed based on observing the agent's action. \n Fig. 2 2 Fig. 2 Two different ways in which the overseer can calculate the reward \n r , o | a) • E [U (H ) | e r , do(a), o] (13) = eq. 5 and 6 e r ,o P(e r , o | a) • E [U (H ) | e r , a, o] \n r | a) • E [U (H ) | do(a), e r ] r ) • E [U (H ) | do(a), e r ] r | do(a)) • E [U (H ) | do(a), e r ] (31) = E [E [U (H ) | E r , do(a)] | do(a)] (32) = LTE E [U (H ) | do(a)] . 
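The reason the overseer's decision theory dominates in the second type can also be seen numerically, reusing the illustrative Newcomb numbers from the sketch above: because the reward here is a function of the action alone, E[R | a] and E[R | do(a)] coincide, so the agent's own decision theory washes out.

```python
# Second-type Newcomb problem (the payoff is not observed by the overseer):
# the reward depends only on the action, so R is a deterministic function of A.
# Numbers reuse the illustrative 90%-accurate predictor and $1M / $1K payoffs.

edt_overseer_reward = {"one": 900_000.0, "two": 101_000.0}   # E[U | a]
cdt_overseer_reward = {"one": 500_000.0, "two": 501_000.0}   # E[U | do(a)]

def agent_choice(reward_fn):
    # Both E[R | a] and E[R | do(a)] reduce to reward_fn[a] here, so an EDT
    # agent and a CDT agent pick the same action.
    return max(reward_fn, key=reward_fn.get)

print(agent_choice(edt_overseer_reward))   # "one": overall behaviour looks EDT
print(agent_choice(cdt_overseer_reward))   # "two": overall behaviour looks CDT
```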
\n Table 1 1 An overview of the results of the calculations in Sect. 3 Type of Newcomb problem Agent's DT Overseer's DT Resulting DT First type CDT EDT CDT CDT CDT EDT EDT EDT CDT EDT Second type CDT EDT EDT CDT CDT EDT EDT EDT CDT CDT \n\t\t\t In fact, Eells (1981) has argued that EDT and CDT always behave in the same way, but I disagree with this assessment based on the reasons given byAhmed (2014, Sect. 4.3-4.6) and Price (1986) .2 Of course, the existing literature asks about the right decision theory proper. The answer to that question might differ from the answer to the AI-specific question (cf. Kumar 2017; Treutlein 2018). After all, even if we have identified the right decision theory for ourselves, we may want to implement a different decision theory in an AI. One reason could be that the main contenders are not self-recommending-it has been pointed out that EDT and CDT both recommend to self-modify into slightly different decision theories \n\t\t\t This is consistent with what reinforcement learning algorithms usually do-they choose policies rather than individual actions. This is because the utility of a single action usually cannot be evaluated without knowing how the agent will deal with situations that might arise as a result of taking that action. When individual actions can be evaluated in isolation, the ex ante policy choice sometimes differs from the choice of individual actions (see the absent-minded driver, introduced by Piccione and Rubinstein 1997; cf. Aumann et al. 1997 ; the Newcomb-like scenarios discussed by, e.g., Hintze 2014; Soares and Levinstein 2017, Sect. 2; and the problems in anthropics discussed by Armstrong 2011) . While it is rarely discussed in the debate between evidential and causal decision theorists, a few authors regard this discrepancy as crucial and have argued that a proper decision theory should be about optimal policy choices (e.g. Hintze 2014; Soares and Fallenstein 2014b, Sect. 2.1; Soares and Levinstein 2017, Sect. 2). However, this issue is beyond the scope of the present paper. Further issues in sequential Newcomb-like problems are discussed by Everitt et al. (2015) . \n\t\t\t There is a broad philosophical literature on whether causal relationships exist and whether they can be inferred in cases where the agent is part of the environment. See, e.g., the edited volume byPrice and Corry (2007).6 For simplicity, we will ignore dependences not resulting from causation (Arntzenius 2010) . For example, if you play against a copy, there is a logical dependence between your and your copy's decision. Even if you know a set of nodes in the causal graph that d-separates your and your copy's decision (e.g., if you \n\t\t\t In reinforcement learning, some have proposed alternative optimization targets that incorporate, e.g., risk aversion(García and Fernández 2015, Sect. 3). \n\t\t\t For preliminary work on this question, see Mayer et al. (2016) , Oesterheld (2018a) and perhaps Albert and Heiner (2001) .14 We give a brief justification of this claim. If all of a's causal influence on H can be discerned from O, then, of course, a could still be diagnostically relevant for one's estimate of U (H ). The other direction is more complicated. The idea is that Eq. 5 can be true if the causal and non-causal implications of a exactly cancel each other out. An example is a version of Newcomb's problem in which one-boxing ensures with certainty that both boxes contain the same amount of money. 
Then if O and E r do not contain any information, the expected value of two-boxing and one-boxing is the same and so learning of the action is irrelevant for estimating U (H ). However, two-boxing is causally better than one-boxing, so Eq. 6 is violated. \n\t\t\t The dominance of the overseer's decision theory in the second type of Newcomb's problem is mentioned (though not proven) byChristiano (2014, Sect. \"Avoid lock-in\"). \n\t\t\t Synthese (2021) 198 (Suppl 27):S6491-S6504", "date_published": "n/a", "url": "n/a", "filename": "Oesterheld2021_Article_Approval-directedAgencyAndTheD.tei.xml", "abstract": "Decision theorists disagree about how instrumentally rational agents, i.e., agents trying to achieve some goal, should behave in so-called Newcomb-like problems, with the main contenders being causal and evidential decision theory. Since the main goal of artificial intelligence research is to create machines that make instrumentally rational decisions, the disagreement pertains to this field. In addition to the more philosophical question of what the right decision theory is, the goal of AI poses the question of how to implement any given decision theory in an AI. For example, how would one go about building an AI whose behavior matches evidential decision theory's recommendations? Conversely, we can ask which decision theories (if any) describe the behavior of any existing AI design. In this paper, we study what decision theory an approval-directed agent, i.e., an agent whose goal it is to maximize the score it receives from an overseer, implements. If we assume that the overseer rewards the agent based on the expected value of some von Neumann-Morgenstern utility function, then such an approval-directed agent is guided by two decision theories: the one used by the agent to decide which action to choose in order to maximize the reward and the one used by the overseer to compute the expected utility of a chosen action. We show which of these two decision theories describes the agent's behavior in which situations. \n Keywords Reinforcement learning • Causal decision theory • Evidential decision theory • Newcomb's problem • AI safety • Philosophical foundations of AI B Caspar Oesterheld", "id": "e92da74deef5925e7ac635cf8cfd10a7"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "CNASReport-Technology-Roulette-DoSproof2v2.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Stuart Armstrong"], "title": "Off-policy Monte Carlo agents with variable behaviour policies", "text": "Off-policy Monte Carlo The Monte Carlo agent is a model-free reinforcement learning agent [3] . These operate when the environment is a Markov decision process (MDP). In an MDP, the next observation depends only on the current observation -the state -and the current action. The full set of state action pairs is designated by S × A. In each state, an agent chooses its actions according to a (possibly stochastic) policy π, that is assumed to be Markov. Monte Carlo agents operate by computing the Q-values of a policy for each state-action pair (s, a), which is the expected return if the agent choose a in s and subsequently follows π. It is episodic, exploring the same MDP repeatedly to compute Q-values. Episodic MDP's have initial state (where the agent starts an episode) and terminal states (where the agent ends an episode). 
This paper focuses on computing Q-values, not on updating policy choice in consequence. Assumption 1 In the following, these are always assumed, required for convergence results: (a) The MDP is finite, (b) Whatever policy the agent uses, its expected time until reaching a terminal state is finite. (c) The rewards after each step have finite expectation (consequently the total reward has finite expectation). If an agent is following one policy (e.g., ρ) but wishes to compute the Qvalues of another (e.g., π), it can use the off-policy Monte Carlo algorithm [3] . In this case ρ is the behaviour policy, and π is the evaluation policy. The algorithm requires that for all state-action pairs (s, a): π(a|s) > 0 ⇒ ρ(a|s) > 0. ( ) 1 Because of this requirement, π(a|s) is zero whenever ρ(a|s) is, and the ratio π(a|s)/ρ(a|s) can be defined as 0 in these cases. The full algorithm is then given in box 1 -there are two variants, ordinary importance sampling using N (s, a) (the number of episodes in which the agent has encountered the state-action pair (s, a)) as the denominator, and weighted importance sampling using W (s, a) (the sum of the weights) instead [4, 1] . 1. For all s ∈ S and a ∈ As, initialise N (s, a), W (s, a) and W R(s, a) to zero. 2. For all n ≥ 1: (a) Generate an episode history hn by following policy ρ. (b) The episode data will consist of the state, action chosen in the state, the immediate reward experienced by the agent. (c) For each state-action pair (s, a) appearing in the episode history: i. Let t be the first appearance of (s, a) in the history. ii. Let Rn(s, a) be the total subsequent reward. iii. Define Note that W (s, a) and N (s, a) have the same expectation: let H (s,a) be the set of possible histories in the MDP subsequent to (s, a). For h ∈ H (s,a) , relabel the indexes so that h starts in step zero at (s, a). Use ρ and π to denote the probabilities of certain events conditional on the agent following those policies, and E ρ and E π similarly. Then if I n (s, a) denotes the event that (s, a) appears in the n-the episode, wn(s, a) = k>t π(a k |s k ) ρ(a k |s k ) . (2) E ρ (w(s, a)|I n (s, a)) = h∈H (s,a) ρ(h|I n (s, a)) k>0 π(h a k |h s k ) ρ(h a k |h s k ) = h∈H (s,a) ρ(h|I n (s, a)) π(h|I n (s, a)) ρ(h|I n (s, a)) = h∈H (s,a) π(h|I n (s, a)) = π(H (s,a) |I n (s, a)) = 1. ( ) 3 where h a k denotes the k-th action of history h, and h s k denotes the k-th state of history h. Then simply note that the value of N n (s, a) is simply the sum The same argument as in equation (3) shows that E ρ (w n (s, a)R n (s, a)|I n (s, a)) = h∈H (s,a) R h (s, a)ρ(h|I n (s, a)) k>0 π(h a k |h s k ) ρ(h a k |h s k ) = E π (R(s, a)|I n (s, a)), (4) where R h (s, a) is the reward along the history h subsequent to the first (s, a). Thus the expected weighted reward from following policy ρ, is the expected reward from following policy π. Assume that agents following ρ would almost surely explore every stateaction pair infinitely often (equivalently, that N n (s, a) → ∞ almost surely). Then the convergence of ordinary importance sampling is simply a consequence of the law of large numbers applied to every episode that visits (s, a). Convergence of weighted importance sampling is a consequence of Corollary 1 below. \n Varying the behaviour policy The previous proof, however, assumes that policy ρ is fixed. But what if it varies from episode to episode? If π and π are two policies, any p ∈ [0, 1] defines the mixed policy: (1 − p)π + pπ . 
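The boxed procedure is straightforward to implement. The sketch below covers both the ordinary (divide by N) and weighted (divide by W) variants; the episode interface, and the choice to include the reward received at the first visit itself in the "total subsequent reward", are one reading of the algorithm rather than code from the paper.

```python
from collections import defaultdict
import random

def off_policy_mc(generate_episode, pi, rho, n_episodes, weighted=True):
    """First-visit off-policy Monte Carlo evaluation of pi from episodes drawn
    with behaviour policy rho (a sketch of the boxed algorithm, steps i-vii).

    generate_episode() -> list of (state, action, reward) triples
    pi(a, s), rho(a, s) -> action probabilities of the two policies
    """
    N = defaultdict(int)      # episode counts (ordinary importance sampling)
    W = defaultdict(float)    # summed weights (weighted importance sampling)
    WR = defaultdict(float)   # summed weighted returns
    for _ in range(n_episodes):
        episode = generate_episode()
        first = {}
        for t, (s, a, _) in enumerate(episode):
            first.setdefault((s, a), t)
        for (s, a), t in first.items():
            ret = sum(r for (_, _, r) in episode[t:])    # total subsequent reward
            w = 1.0
            for (sk, ak, _) in episode[t + 1:]:          # product over k > t, Eq. (2)
                w *= pi(ak, sk) / rho(ak, sk)
            N[(s, a)] += 1
            W[(s, a)] += w
            WR[(s, a)] += w * ret
    denom = W if weighted else N
    return {sa: WR[sa] / denom[sa] for sa in WR if denom[sa] > 0}

# toy check: one decision at s2 between a (reward -1) and b (reward -3);
# pi is uniform (true Q(s1, a) = -2), rho prefers b.
pi = lambda a, s: 0.5 if s == "s2" else 1.0
rho = lambda a, s: (0.2 if a == "a" else 0.8) if s == "s2" else 1.0

def generate_episode():
    a2 = "a" if random.random() < 0.2 else "b"
    return [("s1", "a", 0.0), ("s2", a2, -1.0 if a2 == "a" else -3.0)]

print(off_policy_mc(generate_episode, pi, rho, 20000))
```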
This is the policy that, in each state, independently chooses whether to follow π, with probability 1 − p, and π , with probability p. Because the decisions are independent, the mixed policy is Markov if π and π are. A general form for all behaviour policies is then: Proposition 1. If ρ and π obey the restriction in equation ( 1 ), then there exists a θ ∈ [0, 1) and a policy π such that ρ = (1 − θ)π + θπ . Proof. Let σ be the maximum across S × A of the ratio π(a|s)/ρ(a|s), in all cases where ρ(a|s) = 0. Then define (1 − θ) = 1/σ. Since π(a|s)/ρ(a|s) ≤ σ, (1 − θ) = 1/σ ≤ ρ(a|s)/π(a|s) and hence ρ(a|s) ≥ (1 − θ)π(a|s). Then define π as π (a|s) = 1 θ ρ(a|s) − (1 − θ)π(a|s) . To check that this quantity is less than 1, note that ρ(a|s ) = 1 − b =a ρ(b|s) ≤ 1 − b =a (1 − θ)π(b|s) hence that ρ(a|s) − (1 − θ)π(a|s) ≤ 1 − b (1 − θ)π(b|s) ≤ 1 − (1 − θ) ≤ θ. Then, by construction, ρ(a|s) = (1 − θ)π(a|s) + (θ)π (a|s). For n being the episode number, define ρ n by ρ n = (1 − θ n )π + θ n π n , (5) for some π n and θ n ∈ [0, 1). For convenience, further define the set S ⊂ S, which is the set of states s such that ρ n (s) = π(s). Then neither off-policy Monte Carlo algorithm need converge: Theorem 1. Neither off-policy Monte Carlo algorithm following policy ρ n need converge to the correct Q-values for π, even if the agent generates every possible episode history infinitely often. The proof is given in Section 3. But that convergence failure need not happen for all such ρ n . A sufficient condition for convergence is: Theorem 2. Let σ n = 1/(1−θ n ). Assume that σ n is eventually non-decreasing, and that there exists a δ > 0 such that, for large enough n, (σ n ) √ log(n) < n 1−δ . Assume that π visits each state-action pair with non-zero probability. Then both off-policy Monte Carlo algorithms following policy ρ n will almost surely converge on the correct Q-values for π. This will be proved in Section 4. Note that the requirement for visiting all (s, a) pairs is for the evaluation policy π -the behaviour policy will also do so, as a consequence of the conditions above. One immediate corollary of theorem 2 is: Corollary 1. If θ n is bounded above by κ < 1, both off-policy Monte Carlo algorithms using ρ n will converge on the correct Q-values for π, if π visits each state-action pair with non-zero probability. Proof. This kind of result has been proved before [5] . The σ n are bounded above by 1/(1 − κ), and thus, for any δ, are less than n 1−δ for sufficiently large n. Then Theorem 2 implies the result. A second corollary is: Corollary 2. If θ n = 1 − 1 log(n) , both off-policy Monte Carlo algorithms using ρ n will converge on the correct Q-values for π, if π visits each state-action pair with non-zero probability. Proof. Set δ = 1/2, note that σ n = log(n), and that log (σ n ) √ log(n) = log(n) log(log(n)) < log(n) 2 = log( √ n) for large enough n. Hence, for large enough n, (σ n ) √ log(n) < √ n = n 1−1/2 . Then Theorem 2 implies the result. And a third corollary is: Corollary 3. Assume π n (a|s) > r (s,a) whenever π(a|s) > 0, for constants r (s,a) > 0. Then both off-policy Monte Carlo algorithms using ρ n will converge on the correct Q-values for π, if π visits each state-action pair with non-zero probability. Proof. Let r be the minimum value of r (s,a) /π(a|s), across all (s, a) where π(a|s) > 0. Then π n can be rewritten as: π n = rπ + (1 − r)π n , for some π n . Similarly ρ n can be rewritten as ρ n = (1 − (1 − r)θ n )π + (θ n )(1 − r)π n . 
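A quick numerical check of the decomposition in Proposition 1: the sketch below computes sigma, theta and pi' for a small two-state example and verifies that rho = (1 - theta) pi + theta pi'; the array layout and the guard for theta = 0 are conventions chosen for the example.

```python
import numpy as np

def decompose(pi, rho):
    """Proposition 1: write rho = (1 - theta) * pi + theta * pi_prime,
    with theta = 1 - 1/sigma and sigma = max over {rho > 0} of pi / rho.
    pi and rho are arrays of shape (n_states, n_actions), rows summing to 1."""
    mask = rho > 0
    sigma = np.max(pi[mask] / rho[mask])
    theta = 1.0 - 1.0 / sigma
    if theta == 0.0:                 # rho already equals pi; pi_prime is arbitrary
        return theta, rho.copy()
    pi_prime = (rho - (1.0 - theta) * pi) / theta
    return theta, pi_prime

pi  = np.array([[1.0, 0.0], [0.5, 0.5]])
rho = np.array([[1.0, 0.0], [0.2, 0.8]])     # satisfies the support condition (1)
theta, pi_prime = decompose(pi, rho)
assert np.allclose((1 - theta) * pi + theta * pi_prime, rho)
assert np.all(pi_prime >= -1e-12) and np.allclose(pi_prime.sum(axis=1), 1.0)
print(theta, pi_prime)
```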
Then note that (θ n )(1 − r) is bounded above by 1 − r, and the result follows by corollary 1. The rest of this paper will be dedicated to proving theorems 1 and 2. \n Failure to converge This Section aims to prove Theorem 1, by constructing a counter-example using the MDP of figure 1 . Set π(a|s 1 ) = 1, π(a|s 2 ) = π(b|s 2 ) = 1/2, S = {s 2 }, π n (a|s 2 ) = 0, π n (b|s 2 ) = 1, and define θ n as The probability of the n-th episode history being h b = (s 1 , a, 0, s 2 , b, −3, s 4 ) is greater or equal to 1/2: this history therefore almost surely gets generated infinitely often. Now consider the episode history h a = (s 1 , a, 0, s 2 , a, −1, s 4 ). It will get generated during episode n with probability θ n = 1 − 1/(n log(n)). 1 2 • (1 − θ n ) = 1 2n log(n) . Each episode is independent and n 1/(2n log(n)) = ∞, so by the converse Borel-Cantelli lemma, the episode is generated infinitely often. Then note that if the history h a is generated during the n-th episode, it is generated with weight 1/(1 − θ n ) = n log(n), while the history h b is generated with weight 1/(1 + θ n ) = 1/(2 − 1/(n log(n))). Split the weight total W n (s 1 , a) as W a n (s 1 , a) + W b n (s 1 , a) (the weight totals due to the histories h a and h b respectively). Then ordinary importance sampling will give Q n as: Q n (s 1 , a) = −W a n (s 1 , a) − 3W b n (s 1 , a) N n (s 1 , a) , (6) while weighted importance sampling gives: Q n (s 1 , a) = −W a n (s 1 , a) − 3W b n (s 1 , a) W a n (s 1 , a) + W b n (s 1 , a) . (7) In state s 1 , both actions are equiprobable and independent, so the law of large numbers implies N n (s 1 , a)/n → 1 almost surely. The W b n (s 1 , a) is the sum of weights less than 1, so W b n (s 1 , a) ≤ N n (s 1 , a). The probability of h b increases to 1, and the weights for that history are larger than 1/2 for n ≥ 1. Thus, almost surely, for sufficiently large n, n/3 and 2n are lower and upper bounds for both N n (s 1 , a) and W b n (s 1 , a). Now pick an n where episode history h a is generated, which must happen infinitely often. The weight of this history is n log(n), so W a n (s 1 , a) ≥ n. In the limit, this must dominate the n-bounded contributions of N n (s 1 , a) and W b n (s 1 , a). Thus, for a large enough n where h a is generated, equation (6) then gives the upper bound Q n (s 1 , a) ≤ −C log(n), for some constant C. Conversely, equation (7) gives an upper bound: Q n (s 1 , a) ≥ −1 − C/ log(n), for some C. The correct Q-value for Q(s 1 , a) under π is clearly (−3−1)/2 = −2, so neither algorithm can converge to the correct values. Ordinary importance sampling clearly cannot find the optimal policy (of choosing a in state s 2 ) either. If b had a reward of 0 instead of a reward of −2, then weighted importance sampling would fail to find the optimal policy on that MDP, so both algorithms can fail to find optimal policies. Remark 1. The lack of convergence can also be proved for θ n = 1 − 1/n, but the proofs are more involved. \n Proof of convergence This Section aims to prove 3 Theorem 2. \n Infinite variance The reason that such mathematical machinery is needed is because, in many cases, the variance of the reward become infinite. Consider the MDP of figure 2 . Two actions are available in s 2 : action a which, with p probability will return the agent to state 1 with a reward of 1, and otherwise send them to state s 3 with no reward. And b, which goes straight to s 3 with no reward. Set π(a|s 1 ) = 1, π n (b|s 2 ) = 1, and θ n = 1 − q. 
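The non-convergence counterexample above is easy to simulate. The sketch below keeps only the decision at s2 (action a gives reward -1, action b gives reward -3, the evaluation policy is uniform, so the true value is -2) and uses the schedule theta_n = 1 - 1/(n log n); the start index, episode count, and random seed are arbitrary choices. The analysis above predicts that in a finite run the ordinary estimate sits near -1.5 until a rare, huge-weight a-episode drags it far below -2, while the weighted estimate hovers near -3 and jumps towards -1 after such an episode; neither settles at -2.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def run(n_episodes=200_000):
    """Simplified counterexample: behaviour policy in episode n picks a with
    probability (1 - theta_n)/2, where theta_n = 1 - 1/(n log n)."""
    N, W, WR, n_a = 0, 0.0, 0.0, 0
    for n in range(2, n_episodes + 2):           # start at 2 so log(n) > 0
        theta = 1.0 - 1.0 / (n * math.log(n))
        p_a = 0.5 * (1.0 - theta)
        if rng.random() < p_a:                   # action a, weight 1/(1 - theta)
            w, r = 0.5 / p_a, -1.0
            n_a += 1
        else:                                    # action b, weight 1/(1 + theta)
            w, r = 0.5 / (1.0 - p_a), -3.0
        N += 1
        W += w
        WR += w * r
    return WR / N, WR / W, n_a

ois, wis, n_a = run()
print(f"ordinary IS: {ois:.2f}   weighted IS: {wis:.2f}   a-episodes: {n_a}")
# True value under the uniform evaluation policy is -2.
```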
The weights are powers of 1/q, ρ n (a|s 2 ) = q, so the expected weighted reward is the correct ∞ l=0 (l/q l )(pq) l , which is finite. Since for any random variable X, Var(X) = E(X 2 ) − E(X) 2 , the variance of the weighted reward is finite iff the expected squared weighted reward is finite. Then the expected squared weighted reward is: ∞ l=0 (l/q l ) 2 (pq) l = ∞ l=0 l 2 (p/q) l . And this sum diverges for q ≤ p. \n Proving convergence This proof will use ordinary importance sampling, then generalise to weighted importance sampling. Fix any pair (s, a). Since π must visit that pair with finite probability, there is an m < |S × A| and a probability τ > 0 such that an agent following π would s 3 s 2 s 1 a, 0 a, 0, 1 − q b, 0 a, 1, q (σ n ) m < (σ n ) √ log(n) < n 1−δ < n. Hence, for large enough n, τ (1 − θ n ) m = τ (1/σ n ) > τ /n, so ∞ τ (1 − θ n ) m = ∞. Then since each episode is independent, the converse Borel-Cantelli lemma implies that an agent following ρ will almost surely visit (s, a) infinitely often. How regular will these visits be? The expected number of visits during the episodes up to the n-th is simply n j=1 τ (1 − θ j ) m . For large enough n, σ n < n 1−δ √ log(n) and so eventually τ (1 − θ n ) m > τ n −m 1−δ √ log(n) > n −δ , for any δ > 0. Therefore, for large enough n, the number of visits to (s, a) among the n episodes must be almost surely greater than n j=1 j −δ = O(n 1−δ ). Thus, for large enough n, N n (s, a) > n 1−δ almost surely. Let Q * be the true Q-value of the MDP under π, and let H be the set of possible episode histories, ignoring rewards. Let H l be the set of histories of length l. Write π(h n ) for h ∈ H to designate the probability that episode n has history h if the agent follows the policy π. Note that since π is fixed, π(h n ) is independent of n (unlike ρ n (h n )). Let η be the policy designed to maximise the expected time the agent spends in the MDP. This means that η is a Markov policy, as the policy that maximises the time spent if the agent is in state (s , a ) does not depend on the agent's prior history. Combined with the MDP, η describes a Markov chain, with absorbing final states. Its transition matrix is of the form: P R 0 Id , for P a transition matrix on the non-absorbing state-action pairs, and Id the identity matrix on the absorbing ones. Since any episode must terminate with probability 1, the matrix P has a single maximal real eigenvalue µ < 1 [2] . For large n, the probability that the matrix will not have terminated by the n-th episode is bounded by (µ ) n . Since η is the policy that maximises the expected time spent in the MDP, an agent following π cannot expect to stay in MDP longer than that, so there exists a C such that π(H l ) ≤ C (µ ) l . Fix any µ < µ < 1, then because l 2 (µ ) l must eventually be less than µ l , there exists a C such that lπ(H l ) ≤ l 2 π(H l ) ≤ Cµ l . Let E be the maximal expected reward the agent can generate from a single state-action pair. Let S be the maximal expected squared reward the agent can generate from a single state-action pair. Then if R h is the random variable denoting the reward generated along history h ∈ H l , E (R h |h) ≤ lE Var (R h |h) ≤ E R 2 h |h ≤ l 2 S. Let W R l n denote the random variable that returns 0 if the length of the n-th episode is not l, and the (weighted) reward otherwise, under the assumption that the agent visits (s, a) during episode n. Therefore: W R l n = h∈H l w(h n )R h ρ(h n ). 
Note that w(h n ) = π(h n )/ρ(h n ) ≤ (σ n ) i h , where i h ≤ l is the number of times that the agent goes through a state in S along h. Then E ρn W R l n = h∈H l w(h n )E (R h |h) ρ n (h n ) = h∈H l E (R h |h) π(h n ), which is the expected reward from episode histories of length l from an agent following π. Therefore ∞ l=1 E ρn W R l n = h E (R h |h) π(h n ) = Q * . The expectation and variance of W R l n can be bounded as: E ρn W R l n = h∈H l E (R h |h) π(h n ) ≤ lEπ(H l ) ≤ ECµ l Var ρn W R l n ≤ E ρn W R l n 2 ≤ h∈H l E (w(h n )R h ) 2 |h ρ(h n ) ≤ h∈H l (σ n ) i h E (R h ) 2 |h π(h n ) ≤ π(H l )(σ n ) l l 2 S ≤ SC(µσ n ) l . (8) Redefine C as max(EC, SC) so that these bounds are Cµ l and C(µσ n ) l respectively. Then define W R 1; so, redefining C if needed to cover the finitely many smaller values of n, the second bound is C(σ n ) l : E ρn W R ≥l n ≤ Cµ l Var ρn W R n 1−δ for any δ > 0 and large enough n. Define Q ≥l n = 1 N n (s, a) n j=1 I j W R ≥l j Q 0 and for large enough n: E ρn Q ≥l n ≤ 1 N n (s, a) n j=1 I j Cµ l ≤ n j=1 I j N n (s, a) Cµ l ≤ Cµ l Var ρn Q 1 such that m j+1 /m j > c for sufficiently large j. Then Q mj converges to Q * almost surely as j → ∞. Proof. Fix any > 0, and consider the subsequence n j = e √ j . Then since {m j } is eventually an exponentially growing sequence, m j > n j for sufficiently large j. Set l j = 4 √ j and consider Q mj = Q ≥lj mj + Q 0. Fix δ = δ/2. Hence for large enough j, C (mj ) 1−δ/2 (σ mj ) lj < 1 (mj ) δ/2 . Since m δ/2 j > n δ/2 j ≥ e (δ/2) √ j > Cj 2 / 2 for large enough j, eventually Var Q > 0, define the lacunary sequence m j = (1 + ) j . For m j < n < m j+1 , (1 + ) j Q mj ≤ nQ n ≤ Q mj+1 (1 + ) j+1 Q mj+1 . This implies that for large enough j, Q n ≥ Q mj (1/(1 + ) − 2 ) ≥ Q mj (1 − ) and Q n ≤ Q mj ((1 + ) + 2 ) ≤ Q mj (1 + 2 ) , where the 2 term comes from the fact that (1 + ) j need not be an integer. By the Lemma above, the sequence Q mj converge almost surely to Q * , thus, for large enough j and n > (1 + ) j , Q n must be within 3 (Q * + 1) of Q * . Since was arbitrary, this proves that Q n converges to Q * almost surely as n → ∞. This completes the proof for all MDP's with positive rewards. Note that the same proof works for MDP's with negative rewards. Then the general proof is established for by dividing the rewards into positive and negative parts, noting their separate convergence, and noting that the Q-values update process is linear in rewards. Since this proves the convergence of Q n (s, a) to Q * (s, a) for any (s, a), and there are finitely many (s, a) pairs, this proves the almost sure convergence of Q n to Q * in general. It is now necessary to extend the result to weighted importance sampling. To do that, it will suffice to show that, under the conditions above, W n (s, a)/N n (s, a) → 1 almost surely. To see this, change the MDP by setting all rewards to 0, except for the final reward when the agent reaches a terminal state, where they will get 1. This means that the reward along each history is 1, and the weighted reward is just the weight. Thus the new Q n for ordinary importance sampling is Q n (s, a) = W n (s, a)/N n (s, a). By the result we've just proved, these must converge almost surely to the correct Q-values for the modified MDP, i.e., to 1. This proves that the ratios converge almost surely to 1, as required. \n A Note on the independence assumption The policy ρ n , as defined in equation ( 5 ), assumes that the agent choose independently between π and π n for each s ∈ S. 
If this independence is dropped, the situation can get even worse for convergence - the agent may fail to converge to the right values 4 even for fixed θ < 1. Consider the MDP in figure 3. Define the policy π as choosing randomly amongst the two actions at s_1, and choosing a otherwise. Define S′ = {s_2, s_3} and π_n as choosing b at all states in S′. For ρ, the probabilities of choosing π and π_n in S′ are equal (and the same from episode to episode), θ_2 = θ_3 = 0.5, but they are strictly anti-correlated within a given episode. Notice that the episode history (s_1, a, 0, s_2, a, 1, s_3, a, 1, s_4) never appears. This is because the uses of π_n at s_2 and s_3 are anti-correlated, so the agent cannot avoid both of them. Therefore the agent never experiences a total reward of 2; moreover, the only episodes with rewards are the (equiprobable) ones in which π_n is used at s_3 rather than at s_2. \n Write N_n(s, a) = Σ_{j=1}^n I_j(s, a), the number of episodes among the first n that visit (s, a), to see that N_n(s, a) and W_n(s, a) = Σ_{j=1}^n I_j(s, a) w_n(s, a) have the same expectation. If ρ is fixed, then by sampling only those histories including (s, a), the strong law of large numbers implies that the ratio of N_n(s, a) and W_n(s, a) converges to 1 almost surely. \n Fig. 1. An MDP where the off-policy Monte Carlo algorithms can fail to compute the correct Q-values. \n Fig. 2. An MDP where the variance of the reward can become infinite within a single episode. \n Fig. 3. An MDP on which the agent has non-Markov policy choices. \n Table 1. Off-policy Monte Carlo algorithm (excerpt): iv. N_n(s, a) = N_{n−1}(s, a) + 1. v. W_n(s, a) = W_{n−1}(s, a) + w_n(s, a). vi. WR_n(s, a) = WR_{n−1}(s, a) + w_n(s, a)R(s, a). vii. Either Q_n(s, a) = WR_n(s, a)/N_n(s, a), or Q_n(s, a) = WR_n(s, a)/W_n(s, a). (d) Discard the episode data. \n\t\t\t The proof will closely mirror the standard proofs of the strong law of large numbers, see for instance https://terrytao.wordpress.com/2008/06/18/the-strong-lawof-large-numbers/ \n\t\t\t For non-Markov policies like this one, the off-policy algorithm has to be adjusted to consider ratios π(h)/ρ(h) for entire histories h (rather than the product of state-action pair probabilities), but this is not a large change.", "date_published": "n/a", "url": "n/a", "filename": "monte_carlo_arXiv.tei.xml", "abstract": "This paper looks at the convergence property of off-policy Monte Carlo agents with variable behaviour policies. It presents results about convergence and lack of convergence. Even if the agent generates every possible episode history infinitely often, the algorithm can fail to converge on the correct Q-values. On the other hand, it can converge on the correct Q-values under certain conditions. For instance, if, during the n-th episode, the agent has an independent probability of 1/ log(n) of following the original policy at any given state, then it will converge on the right Q-values for that policy.", "id": "14fe3934f3364cda0c9d2793188f52e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Chongli Qin", "James Martens", "Sven Gowal", "Dilip Krishnan", "Google Krishnamurthy", "Dvijotham Deepmind", "Alhussein Fawzi", "Soham De", "Robert Stanforth Deepmind"], "title": "Adversarial Robustness through Local Linearization", "text": "Introduction In a seminal paper, Szegedy et al. [22] demonstrated that neural networks are vulnerable to visually imperceptible but carefully chosen adversarial perturbations which cause them to output incorrect predictions. 
After this revealing study, a flurry of research has been conducted with the focus of making networks robust against such adversarial perturbations [14, 16, 17, 25] . Concurrently, researchers devised stronger attacks that expose previously unknown vulnerabilities of neural networks [24, 4, 1, 3] . Of the many approaches proposed [19, 2, 6, 21, 15, 17] , adversarial training [14, 16] is empirically the best performing algorithm to train networks robust to adversarial perturbations. However, the cost of adversarial training becomes prohibitive with growing model complexity and input dimensionality. This is primarily due to the cost of computing adversarial perturbations, which is incurred at each step of adversarial training. In particular, for each new mini-batch one must perform multiple iterations of a gradient-based optimizer on the network's inputs to find the perturbations. 1 As each step of this optimizer requires a new backwards pass, the total cost of adversarial training scales roughly as the number of such steps. Unfortunately, effective adversarial training of ImageNet often requires a large number of steps to avoid problems of gradient obfuscation [1, 24] , making it significantly more expensive than conventional training. One approach which can alleviate the cost of adversarial training is training against weaker adversaries that are cheaper to compute, for example by taking fewer gradient steps to compute adversarial examples during training. However, this can produce models which are robust against weak attacks, but break down under strong attacks - often due to gradient obfuscation. In particular, one form of gradient obfuscation occurs when the network learns to fool a gradient-based attack by making the loss surface highly convoluted and non-linear (see Fig 1) , an effect which has also been observed by Papernot et al [18] . This non-linearity prevents gradient-based optimization methods from finding an adversarial perturbation within a small number of iterations [4, 24] . In contrast, if the loss surface is linear in the vicinity of the training examples, which is to say well predicted by local gradient information, gradient obfuscation cannot occur. In this paper, we take up this idea and introduce a novel regularizer that encourages the loss to behave linearly in the vicinity of the training data. We call this regularizer the local linearity regularizer (LLR). Empirically, we find that networks trained with LLR exhibit far less gradient obfuscation, and are almost equally robust against strong attacks as they are against weak attacks. The main contributions of our paper are summarized below: • We show that training with LLR is significantly faster than adversarial training, allowing us to train a robust ImageNet model with a 5× speed up when training on 128 TPUv3 cores [9] . • We show that LLR trained models exhibit higher robustness relative to adversarially trained models when evaluated under strong attacks. Adversarially trained models can exhibit a decrease in accuracy of 6% when increasing the attack strength at test time for CIFAR-10, whereas LLR shows only a decrease of 2%. • We achieve new state of the art results for adversarial accuracy against an untargeted white-box attack for ImageNet (with ε = 4/255 2 ): 47%. Furthermore, we match state of the art results for CIFAR-10 (with ε = 8/255): 52.81% 3 . • We perform a large scale evaluation of existing methods for adversarially robust training under consistent, strong, white-box attacks. 
For this we recreate several baseline models from the literature, training them both for CIFAR-10 and ImageNet (where possible). 4 \n 2 Background and Related Work We denote our classification function by f(x; θ) : x → R^C, mapping input features x to the output logits for classes in set C, i.e. p_i(y|x; θ) = exp(f_i(x; θ)) / Σ_j exp(f_j(x; θ)), with θ being the model parameters and y being the label. Adversarial robustness for f is defined as follows: a network is robust to adversarial perturbations of magnitude ε at input x if and only if argmax_{i∈C} f_i(x; θ) = argmax_{i∈C} f_i(x + δ; θ) for all δ ∈ B_p(ε) = {δ : ‖δ‖_p ≤ ε}. (1) In this paper, we focus on p = ∞ and we use B(ε) to denote B_∞(ε) for brevity. Given that the dataset is drawn from distribution D, the standard method to train a classifier f is empirical risk minimization (ERM), which is defined by: min_θ E_{(x,y)∼D}[ℓ(x; y, θ)]. Here, ℓ(x; y, θ) is the standard cross-entropy loss function defined by ℓ(x; y, θ) = −y^T log(p(x; θ)), (2) where p_i(x; θ) is defined as above, and y is a 1-hot vector representing the class label. While ERM is effective at training neural networks that perform well on held-out test data, the accuracy on the test set goes to zero under adversarial evaluation. This is a result of a distribution shift in the data induced by the attack. To rectify this, adversarial training [17, 14] seeks to perturb the data distribution by performing adversarial attacks during training. More concretely, adversarial training minimizes the loss function E_{(x,y)∼D}[max_{δ∈B(ε)} ℓ(x + δ; y, θ)], (3) where the inner maximization, max_{δ∈B(ε)} ℓ(x + δ; y, θ), is typically performed via a fixed number of steps of a gradient-based optimization method. One such method is Projected Gradient Descent (PGD), which performs the following gradient step: δ ← Proj(δ + η∇_δ ℓ(x + δ; y, θ)), (4) where Proj(x) = argmin_{ξ∈B(ε)} ‖x − ξ‖. Another popular gradient-based method is to use the sign of the gradient [8] . The cost of solving Eq (3) is dominated by the cost of solving the inner maximization problem. Thus, the inner maximization should be performed efficiently to reduce the overall cost of training. A naive approach is to reduce the number of gradient steps performed by the optimization procedure. Generally, the attack is weaker when we do fewer steps. If the attack is too weak, the trained networks often display gradient obfuscation, as shown in Fig 1 . Since the introduction of adversarial training, a corpus of work has researched alternative ways of making networks robust. One such approach is the TRADES method [27] , which is a form of regularization that optimizes the trade-off between robustness and accuracy - as many studies have observed these two quantities to be at odds with each other [23] . Others, such as the work by Ding et al [7] , adaptively increase the perturbation radius by finding the minimal-length perturbation which changes the output label. Some have proposed architectural changes which promote adversarial robustness, such as the \"denoise\" model [25] for ImageNet. The work presented here is a regularization technique which encourages the loss function to be well approximated by its linear Taylor expansion in a sufficiently small neighbourhood. There has been work before which uses gradient information as a form of regularization [20, 17] . The work presented in this paper is closely related to the paper by Moosavi et al [17] , which highlights that adversarial training reduces the curvature of ℓ(x; y, θ) with respect to x. 
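To make the inner maximization of Eqs (3)-(4) concrete, here is a minimal PyTorch-style sketch of a PGD attack on the ℓ∞ ball. It uses the signed-gradient ascent step mentioned above rather than the plain gradient step, starts from δ = 0, and omits clipping to the valid input range; these choices, and the function name, are assumptions of the sketch rather than details taken from the paper.

```python
import torch
import torch.nn.functional as F

def pgd_inner_max(model, x, y, eps, step_size, num_steps):
    """Approximate max_{delta in B(eps)} loss(x + delta; y, theta), the inner
    problem of Eq (3), by projected gradient ascent on the cross-entropy loss.
    B(eps) is the l_inf ball of radius eps; the projection of Eq (4) is
    implemented by clamping delta back into [-eps, eps] after every step."""
    delta = torch.zeros_like(x, requires_grad=True)   # zero start (assumption)
    for _ in range(num_steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step_size * grad.sign()   # signed ascent step [8]
            delta.clamp_(-eps, eps)            # projection onto B(eps)
    return delta.detach()
```

An adversarial training step would then minimize the loss at x + δ returned by this routine with respect to the model parameters.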
Leveraging an empirical observation (the highest curvature is along the direction ∇_x ℓ(x; y, θ)), they further propose an algorithm to mimic the effects of adversarial training on the loss surface. The algorithm results in comparable performance to adversarial training with a significantly lower cost. \n Motivating the Local Linearity Regularizer As described above, the cost of adversarial training is dominated by solving the inner maximization problem max_{δ∈B(ε)} ℓ(x + δ). Throughout we abbreviate ℓ(x; y, θ) with ℓ(x). We can reduce this cost simply by reducing the number of PGD steps (as defined in Eq (4)) taken to solve max_{δ∈B(ε)} ℓ(x + δ). To motivate the local linearity regularizer (LLR), we start with an empirical analysis of how the behavior of adversarial training changes as we increase the number of PGD steps used during training. We find that the loss surface becomes increasingly linear (as captured by the local linearity measure defined below) as we increase the number of PGD steps. \n Local Linearity Measure Suppose that we are given an adversarial perturbation δ ∈ B(ε). The corresponding adversarial loss is given by ℓ(x + δ). If our loss surface is smooth and approximately linear, then ℓ(x + δ) is well approximated by its first-order Taylor expansion ℓ(x) + δ^T ∇_x ℓ(x). In other words, the absolute difference between these two values, g(δ; x) = |ℓ(x + δ) − ℓ(x) − δ^T ∇_x ℓ(x)|, (5) is an indicator of how linear the surface is. Consequently, we consider the quantity γ(ε, x) = max_{δ∈B(ε)} g(δ; x), (6) to be a measure of how linear the surface is within a neighbourhood B(ε). We call this quantity the local linearity measure. \n Figure 2: Plots showing that γ(ε, x) (Eq (6)) is large (on the order of 10) when we train with just one or two steps of PGD for inner maximization (2a). In contrast, γ(ε, x) becomes increasingly smaller (on the order of 10^−1) as we increase the number of PGD steps to 4 and above (2b). The x-axis is the number of training iterations and the y-axis is γ(ε, x); here ε = 8/255 for CIFAR-10. \n Empirical Observations on Adversarial Training We measure γ(ε, x) for networks trained with adversarial training on CIFAR-10, where the inner maximization max_{δ∈B(ε)} ℓ(x + δ) is performed with 1, 2, 4, 8 and 16 steps of PGD. γ(ε, x) is measured throughout training on the training set 5 . The architecture used is a wide residual network [26], 28 in depth and 10 in width (Wide-ResNet-28-10). The results are shown in Fig 2a and 2b . Fig 2a shows that when we train with one and two steps of PGD for the inner maximization, the local loss surface is extremely non-linear at the end of training. An example visualization of such a loss surface is given in Fig A1a . However, when we train with four or more steps of PGD for the inner maximization, the surface is relatively well approximated by ℓ(x) + δ^T ∇_x ℓ(x), as shown in Fig 2b . An example of the loss surface is shown in Fig A1b . For the adversarial accuracy of the networks, see Table A1 . \n Local Linearity Regularizer (LLR) From the section above, we make the empirical observation that the local linearity measure γ(ε, x) decreases as we train with stronger attacks 6 . In this section, we give some theoretical justifications of why local linearity γ(ε, x) correlates with adversarial robustness, and derive a regularizer from the local linearity measure that can be used for training of robust models. 
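As a concrete reading of Eqs (5)-(6), the following PyTorch-style sketch evaluates g(δ; x) for a single example and estimates γ(ε, x) by maximizing g over B(ε) with Adam, loosely following the measurement described in footnote 5 (50 steps, step size 0.1). The random initialization of δ and the function names are assumptions of this sketch, not the authors' measurement code.

```python
import torch
import torch.nn.functional as F

def linearity_gap(model, x, y, delta):
    """g(delta; x) of Eq (5) for a single example (batch of size 1):
    |loss(x + delta) - loss(x) - delta^T grad_x loss(x)|."""
    x = x.detach().requires_grad_(True)
    loss_x = F.cross_entropy(model(x), y)
    grad_x, = torch.autograd.grad(loss_x, x)   # grad_x loss(x), a constant w.r.t. delta
    loss_xd = F.cross_entropy(model(x + delta), y)
    return (loss_xd - loss_x.detach() - (delta * grad_x).sum()).abs()

def estimate_gamma(model, x, y, eps, steps=50, lr=0.1):
    """gamma(eps, x) = max_{delta in B(eps)} g(delta; x) (Eq (6)), estimated by
    projected ascent on g using Adam (cf. footnote 5)."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        g = linearity_gap(model, x, y, delta)
        grad_delta, = torch.autograd.grad(g, delta)
        delta.grad = -grad_delta          # Adam minimizes, so negate to ascend g
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)       # keep delta inside the l_inf ball B(eps)
    return linearity_gap(model, x, y, delta).item()
```

Using torch.autograd.grad for the ascent direction (instead of a full backward pass) keeps the model's parameter gradients untouched, so the measurement can be run on a network that is in the middle of training.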
\n Local Linearity Upper Bounds Adversarial Loss The following proposition establishes that the adversarial loss ℓ(x + δ) is upper bounded by the local linearity measure, plus the change in the loss as predicted by the gradient (which is given by |δ^T ∇_x ℓ(x)|). Proposition 4.1. Consider a loss function ℓ(x) that is once-differentiable, and a local neighbourhood defined by B(ε). Then for all δ ∈ B(ε), |ℓ(x + δ) − ℓ(x)| ≤ |δ^T ∇_x ℓ(x)| + γ(ε, x). (7) See Appendix B for the proof. From Eq (7) it is clear that the adversarial loss tends to ℓ(x), i.e., ℓ(x + δ) → ℓ(x), as both |δ^T ∇_x ℓ(x)| → 0 and γ(ε, x) → 0 for all δ ∈ B(ε). And assuming ℓ(x + δ) ≥ ℓ(x), one also has the upper bound ℓ(x + δ) ≤ ℓ(x) + |δ^T ∇_x ℓ(x)| + γ(ε, x). \n Local Linearity Regularization (LLR) Following the analysis above, we propose the following objective for adversarially robust training: L(D) = E_D[ℓ(x) + λγ(ε, x) + µ|δ_LLR^T ∇_x ℓ(x)|], (8) where λ and µ are hyper-parameters to be optimized, and δ_LLR = argmax_{δ∈B(ε)} g(δ; x) (recall the definition of g(δ; x) from Eq (5)). Concretely, we are trying to find the point δ_LLR in B(ε) where the linear approximation ℓ(x) + δ^T ∇_x ℓ(x) is maximally violated. To train we penalize both its linear violation γ(ε, x) = |ℓ(x + δ_LLR) − ℓ(x) − δ_LLR^T ∇_x ℓ(x)|, and the gradient magnitude term |δ_LLR^T ∇_x ℓ(x)|, as required by the above proposition. We note that, analogous to adversarial training, LLR requires an inner optimization to find δ_LLR - performed via gradient descent. However, as we will show in the experiments, far fewer optimization steps are required for the overall scheme to be effective. Pseudo-code for training with this regularizer is given in Appendix E. \n Local Linearity Measure γ(ε, x) bounds the adversarial loss by itself Interestingly, under certain reasonable approximations and standard choices of loss functions, we can bound |δ^T ∇_x ℓ(x)| in terms of γ(ε, x). See Appendix C for details. Consequently, the bound in Eq (7) implies that minimizing γ(ε, x) (along with the nominal loss ℓ(x)) is sufficient to minimize the adversarial loss ℓ(x + δ). This prediction is confirmed by our experiments. However, our experiments also show that including |δ^T ∇_x ℓ(x)| in the objective along with ℓ(x) and γ(ε, x) works better in practice on certain datasets, especially ImageNet. See Appendix F.3 for details. \n Experiments and Results We perform experiments using LLR on both the CIFAR-10 [13] and ImageNet [5] datasets. We show that LLR gets state of the art adversarial accuracy on CIFAR-10 (at ε = 8/255) and ImageNet (at ε = 4/255) evaluated under a strong adversarial attack. Moreover, we show that as the attack strength increases, the degradation in adversarial accuracy is more graceful for networks trained using LLR than for those trained with standard adversarial training. Further, we demonstrate that training using LLR is 5× faster for ImageNet. Finally, we show that, by linearizing the loss surface, models are less prone to gradient obfuscation. \n CIFAR-10: The perturbation radius we examine is ε = 8/255 and the model architectures we use are Wide-ResNet-28-8 and Wide-ResNet-40-8 [26] . Since the validity of our regularizer requires ℓ(x) to be smooth, the activation function we use is the softplus function log(1 + exp(x)), which is a smooth version of ReLU. The baselines we compare our results against are adversarial training (ADV) [16] , TRADES [27] and CURE [17] . We recreate these baselines from the literature using the same network architecture and activation function. 
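Returning to Eq (8), the sketch below shows how the LLR objective can be computed for a single example: an inner loop approximately finds δ_LLR by ascending g(δ; x) over B(ε), and the outer expression combines ℓ(x), the linearity violation γ and the gradient term. The signed-gradient inner steps, the choice of two inner steps and the hyperparameter names are assumptions here; the authors' own pseudo-code is in their Appendix E and is not reproduced.

```python
import torch
import torch.nn.functional as F

def llr_objective(model, x, y, eps, lam, mu, inner_steps=2, inner_step_size=0.1):
    """Eq (8) for a single example:
        loss(x) + lam * gamma(eps, x) + mu * |delta_LLR^T grad_x loss(x)|,
    where delta_LLR approximately maximizes g(delta; x) over B(eps)."""
    x = x.detach().requires_grad_(True)
    loss_x = F.cross_entropy(model(x), y)
    # create_graph=True keeps grad_x in the graph, so the two penalty terms
    # also send gradients to the model parameters (a double backward).
    grad_x, = torch.autograd.grad(loss_x, x, create_graph=True)

    # Inner maximization: find the point in B(eps) where linearity is most violated.
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(inner_steps):
        g = (F.cross_entropy(model(x + delta), y)
             - loss_x.detach() - (delta * grad_x.detach()).sum()).abs()
        grad_delta, = torch.autograd.grad(g, delta)
        with torch.no_grad():
            delta += inner_step_size * grad_delta.sign()   # signed ascent (assumption)
            delta.clamp_(-eps, eps)                        # projection onto B(eps)
    delta = delta.detach()

    # Outer objective of Eq (8); gradients flow back to the model parameters.
    gamma = (F.cross_entropy(model(x + delta), y)
             - loss_x - (delta * grad_x).sum()).abs()
    grad_term = (delta * grad_x).sum().abs()
    return loss_x + lam * gamma + mu * grad_term
```

A training loop would average this objective over a mini-batch and backpropagate it into the model parameters, which requires a double backward through ∇_x ℓ(x) (hence the smooth softplus activations mentioned above).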
The evaluation is done on the full test set of 10K images. \n ImageNet: The perturbation radii considered are ε = 4/255 and ε = 16/255. The architecture used for this is the ResNet-152 from [11]. We use softplus as the activation function. For ε = 4/255, the baselines we compare our results against are our recreated versions of ADV [16] and the denoising model (DENOISE) [25] . 7 For ε = 16/255, we compare LLR to the ADV [16] and DENOISE [25] networks which have been published in the literature. Due to computational constraints, we limit ourselves to evaluating all models on the first 1K images of the test set. To make sure that we have a close estimate of the true robustness, we evaluate all the models on a wide range of attacks; these are described below. \n Evaluation Setup To accurately gauge the true robustness of our network, we tailor our attack to give the lowest possible adversarial accuracy. The two parts which we tune to get the optimal attack are the loss function for the attack and its corresponding optimization procedure. The loss functions used are described below; for the optimization procedure please refer to Appendix F.1. \n Loss Functions: The three loss functions we consider are summarized in Table 1 . We use the difference between logits for the loss function rather than the cross-entropy loss, as we have empirically found the former to yield lower adversarial accuracy.
\n Attack Name | Loss Function | Metric
Random-Targeted | max_{δ∈B(ε)} f_r(x + δ) − f_t(x + δ) | Attack Success Rate
Untargeted | max_{δ∈B(ε)} f_s(x + δ) − f_t(x + δ) | Adversarial Accuracy
Multi-Targeted [10] | max_{δ∈B(ε)} max_{i∈C} f_i(x + δ) − f_t(x + δ) | Adversarial Accuracy
Table 1: This shows the loss functions corresponding to the attacks we use for evaluation and also the metric we measure on the test set for each of these attacks. Notation-wise, s = argmax_{i≠t} f_i(x + δ) is the highest logit excluding the logits corresponding to the correct class t; note s can change through the optimization procedure. For the Random-Targeted attack, r is a randomly chosen target label that is not t and does not change throughout the optimization. C stands for the set of class labels. For the Multi-Targeted attack we maximize f_i(x + δ) − f_t(x + δ) for all i ∈ C, and consider the attack successful if any of the individual attacks on each target class i are successful. The metric used on the Random-Targeted attack is the attack success rate: the percentage of attacks where the target label r is indeed the output label (this metric is especially important for ImageNet at ε = 16/255). For the other attacks we use the adversarial accuracy as the metric, which is the accuracy on the test set after the attack. For CIFAR-10, the main adversarial accuracy results are given in Table 2 . We compare LLR training to ADV [16] , CURE [17] and TRADES [27] , both with our re-implementations and the published models 8 . Note that our re-implementations using softplus activations perform at or above the published results for ADV, CURE and TRADES. This is largely due to the learning rate schedule used, which is similar to the one used by TRADES [27] . \n Results Interestingly, for adversarial training (ADV), using the Multi-Targeted attack for evaluation gives significantly lower adversarial accuracy compared to Untargeted. For ImageNet, we compare against adversarial training (ADV) [16] and the denoising model (DENOISE) [25] . The results are shown in Table 3 . 
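For reference, the margin-style objectives of Table 1 can be written directly on the logits as in the sketch below; `attack_fn`, a helper that maximizes a given objective over B(ε) and returns the final logits (for instance a PGD loop like the one sketched earlier), is an assumption of this sketch and not part of the paper.

```python
import torch

def untargeted_margin(logits, t):
    """Untargeted loss of Table 1: f_s(x+delta) - f_t(x+delta), with s the
    highest logit excluding the true class t (single example, 1-D logits)."""
    masked = logits.clone()
    masked[t] = float('-inf')
    return masked.max() - logits[t]

def random_targeted_margin(logits, t, r):
    """Random-Targeted loss of Table 1: f_r(x+delta) - f_t(x+delta), r != t fixed."""
    return logits[r] - logits[t]

def multi_targeted_attack_succeeds(attack_fn, x, t, num_classes):
    """Multi-Targeted evaluation: maximize f_i - f_t separately for every class
    i != t and count the attack as successful if any run changes the label."""
    for i in range(num_classes):
        if i == t:
            continue
        logits = attack_fn(x, lambda z: random_targeted_margin(z, t, i))
        if int(logits.argmax()) != t:
            return True
    return False
```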
For a perturbation radius of 4/255, LLR gets 47% adversarial accuracy under the Untargeted attack, which is notably higher than the 39.70% adversarial accuracy obtained via adversarial training. Moreover, LLR is trained with just two steps of PGD, rather than the 30 steps used for adversarial training. The amount of computation needed for each method is further discussed in Sec 5.2.1. Further shown in Table 3 are the results for ε = 16/255. We note a significant drop in nominal accuracy when we train with LLR to a perturbation radius of 16/255. When testing at a perturbation radius of 16/255, we also find that the adversarial accuracy under Untargeted is very poor (below 8%) for all methods. We speculate that this perturbation radius is too large for the robustness problem. Adversarial perturbations should be, by definition, imperceptible to the human eye; upon inspection of the images generated using an adversarial attack (see Fig F4 ), this assumption no longer holds true. The images generated appear to consist of super-imposed object parts of other classes onto the target image. This leads us to believe that a more fine-grained analysis of what should constitute \"robustness for ImageNet\" is an important topic for debate. \n Runtime Speed For ImageNet, we trained on 128 TPUv3 cores [9]; the total training wall time for the LLR network (4/255) is 7 hours for 110 epochs. Similarly, for the adversarially trained (ADV) networks the total wall time is 36 hours for 110 epochs. This is a 5× speed up. \n Accuracy Degradation: Strong vs Weak Evaluation The resulting model trained using LLR degrades gracefully in terms of adversarial accuracy when we increase the strength of attack, as shown in Fig 3 . In particular, Fig 3a shows that, for CIFAR-10, when the attack changes from Untargeted to Multi-Targeted, the LLR model's accuracy remains similar, with only a 2.18% drop in accuracy, in contrast to adversarial training (ADV), where we see a 5.64% drop in accuracy. We also see similar trends in accuracy in Table 2 . This could indicate that some level of obfuscation may be happening under standard adversarial training. As we empirically observe that LLR evaluates similarly under weak and strong attacks, we hypothesize that this is because LLR explicitly linearizes the loss surface. An extreme case would be when the surface is completely linear - in this instance the optimal adversarial perturbation would be found with just one PGD step. Thus evaluation using a weak attack is often good enough to get an accurate gauge of how it will perform under a stronger attack. We use either the standard adversarial training objective (ADV-1, ADV-2) or the LLR objective (LLR-1, LLR-2), taking one or two steps of PGD to maximize each objective. To train LLR-1/2, we only optimize the local linearity γ(ε, x), i.e. µ in Eq (8) is set to zero. We see that for adversarial training, as shown in Figs 4a and 4c, the loss surface becomes highly non-linear and jagged - in other words, obfuscated. Additionally, in this setting, the adversarial accuracy under our strongest attack is 0% for both; see Table F3 . In contrast, the loss surface is smooth when we train using LLR, as shown in Figs 4b and 4d . Further, Table F3 shows that we obtain an adversarial accuracy of 44.50% with the LLR-2 network under our strongest evaluation. We also evaluate the values of γ(ε, x) for the CIFAR-10 test set after these networks are trained. 
This is shown in Fig F3. \n Conclusions We show that, by promoting linearity, deep classification networks are less susceptible to gradient obfuscation, thus allowing us to do fewer gradient descent steps for the inner optimization. Our novel linearity regularizer promotes locally linear behavior, as justified from a theoretical perspective. The resulting models achieve state of the art adversarial robustness on the CIFAR-10 and ImageNet datasets, and can be trained 5× faster than regular adversarial training. \n Figure 1: Example of a gradient-obfuscated surface. The color of the surface denotes the prediction of the network. \n Figure 4: Comparing the loss surface, ℓ(x), after we train using just 1 or 2 steps of PGD for the inner maximization of either the adversarial objective (ADV) max_{δ∈B(ε)} ℓ(x + δ) or the linearity objective (LLR) γ(ε, x) = max_{δ∈B(ε)} |ℓ(x + δ) − ℓ(x) − δ^T ∇_x ℓ(x)|. Results are shown for image 126 in the test set of CIFAR-10; the nominal label is deer. ADV-i refers to adversarial training with i PGD steps, similarly with LLR-i. \n Fig F3. The values of γ(ε, x) are comparable when we train with LLR using two steps of PGD to adversarial training with 20 steps of PGD. By comparison, adversarial training with two steps of PGD results in much larger values of γ(ε, x). \n Table 2: Model accuracy results for CIFAR-10. Our LLR regularizer performs the best under the strongest attack (highlighted column). (S) denotes softplus activation; (R) denotes ReLU activation; and models with (S, R) are our implementations.
CIFAR-10: Wide-ResNet-28-8 (8/255)
Methods | Nominal | FGSM-20 (weak) | Untargeted (strong) | Multi-Targeted (very strong)
ADV [16] | 87.25% | 48.89% | 45.92% | 44.54%
CURE [17] | 80.76% | 39.76% | 38.87% | 37.57%
ADV (S) | 85.11% | 56.76% | 53.96% | 48.79%
CURE (S) | 84.31% | 48.56% | 47.28% | 45.43%
TRADES (S) | 87.40% | 51.63% | 50.46% | 49.48%
LLR (S) | 86.83% | 54.24% | 52.99% | 51.13%
CIFAR-10: Wide-ResNet-40-8 (8/255)
ADV (R) | 85.58% | 56.32% | 52.34% | 46.89%
TRADES (R) | 86.25% | 53.38% | 51.76% | 50.84%
ADV (S) | 85.27% | 57.94% | 55.26% | 49.79%
CURE (S) | 84.45% | 49.41% | 47.69% | 45.51%
TRADES (S) | 88.11% | 53.03% | 51.65% | 50.53%
LLR (S) | 86.28% | 56.44% | 54.95% | 52.81%
\n Table 3: The accuracies obtained are 49.79% and 55.26%, respectively. Evaluation using the Multi-Targeted attack consistently gave the lowest adversarial accuracy throughout. Under this attack, the methods which stand out amongst the rest are LLR and TRADES. Using LLR we get state of the art results with 52.81% adversarial accuracy. LLR gets 47% adversarial accuracy for 4/255, 7.30% higher than DENOISE and ADV. For 16/255, LLR gets similar robustness results, but it comes at a significant cost to the nominal accuracy. Note that Multi-Targeted attacks for ImageNet require looping over 1000 labels; this evaluation can take up to several days even on 50 GPUs and is thus omitted from this table. The column of the strongest attack is highlighted.
ImageNet: ResNet-152 (4/255)
Methods | PGD steps | Nominal | Untargeted (accuracy) | Random-Targeted (success rate)
ADV | 30 | 69.20% | 39.70% | 0.50%
DENOISE | 30 | 69.70% | 38.90% | 0.40%
LLR | 2 | 72.70% | 47.00% | 0.40%
ImageNet: ResNet-152 (16/255)
ADV [25] | 30 | 64.10% | 6.30% | 40.00%
DENOISE [25] | 30 | 66.80% | 7.50% | 38.00%
LLR | 10 | 51.20% | 6.10% | 43.80%
\n The annotations on each node denote no. of PGD steps × no. of random restarts (see Appendix F.1). In (3a), the background color denotes whether the attack is Untargeted (blue) or Multi-Targeted (orange). 
In (3b), we only use Untargeted attacks. \n [Figure 3 plots: adversarial accuracy against increasingly strong attacks (annotated as no. of PGD steps × no. of random restarts) for LLR and ADV; (a) CIFAR-10 (8/255), (b) ImageNet (4/255).] \n Figure 3: Adversarial accuracy shown for CIFAR-10, (3a), and ImageNet, (3b), as we increase the strength of attack. (3a) shows LLR's adversarial accuracy degrades gracefully, going from 53.32% to 51.14% (-2.18%), while ADV's adversarial accuracy drops from 54.43% to 48.79% (-5.64%). (3b) LLR remains 7.5% higher in terms of adversarial accuracy (47.20%) compared to ADV (39.70%). For ImageNet (see Fig 3b), the adversarial accuracy of the network trained using LLR remains significantly higher (by 7.5%) than that of the adversarially trained network when going from a weak to a stronger attack. \n\t\t\t While computing the globally optimal adversarial example is NP-hard [12] , gradient descent with several random restarts was empirically shown to be quite effective at computing adversarial perturbations of sufficient quality. 2 This means that every pixel is perturbed independently by up to 4 units up or down on a scale where pixels take values ranging between 0 and 255. 3 We note that TRADES [27] gets 55% against a much weaker attack; under our strongest attack, it gets 52.5%. 4 Baselines created are adversarial training, TRADES and CURE [17] . In contrast to CIFAR-10, we are currently unable to achieve consistent and competitive results on ImageNet at ε = 4/255 using TRADES. \n\t\t\t To measure γ(ε, x) we find max_{δ∈B(ε)} g(δ; x) with 50 steps of PGD using Adam as the optimizer and 0.1 as the step size. 6 Here, we imply an increase in the number of PGD steps for the inner maximization max_{δ∈B(ε)} ℓ(x + δ). \n\t\t\t We attempted to use TRADES on ImageNet but did not manage to get competitive results. Thus they are omitted from the baselines. \n\t\t\t Note that the network published for TRADES [27] uses a Wide-ResNet-34-10, so this is not shown in the table, but under the same rigorous evaluation we show that TRADES gets 84.91% nominal accuracy, 53.41% under Untargeted and 52.58% under Multi-Targeted. We have also run ℓ∞ DeepFool (not in the table as the attack is weaker), giving ADV(S): 64.29%, CURE(S): 58.73%, TRADES(S): 63.4%, LLR(S): 65.87%.", "date_published": "n/a", "url": "n/a", "filename": "NeurIPS-2019-adversarial-robustness-through-local-linearization-Paper.tei.xml", "abstract": "Adversarial training is an effective methodology to train deep neural networks which are robust against adversarial, norm-bounded perturbations. However, the computational cost of adversarial training grows prohibitively as the size of the model and number of input dimensions increase. Further, training against less expensive and therefore weaker adversaries produces models that are robust against weak attacks but break down under attacks that are stronger. This is often attributed to the phenomenon of gradient obfuscation; such models have a highly non-linear loss surface in the vicinity of training examples, making it hard for gradient-based attacks to succeed even though adversarial examples still exist. 
In this work, we introduce a novel regularizer that encourages the loss to behave linearly in the vicinity of the training data, thereby penalizing gradient obfuscation while encouraging robustness. We show via extensive experiments on CIFAR-10 and ImageNet, that models trained with our regularizer avoid gradient obfuscation and can be trained significantly faster than adversarial training. Using this regularizer, we exceed current state of the art and achieve 47% adversarial accuracy for ImageNet with ∞ adversarial perturbations of radius 4/255 under an untargeted, strong, white-box attack. Additionally, we match state of the art results for CIFAR-10 at 8/255.", "id": "441f58d7f5174d32fc8c6c2ff41baf34"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Tomasz Hollanek"], "title": "AI transparency: a matter of reconciling design with critique", "text": "In the age of ubiquitous computing, we are surrounded by objects that incorporate artificial intelligence solutions. We interact with different kinds of AI without realizing itusing online banking systems, searching for YouTube clips, or consuming news through social media-not really knowing how and when AI systems operate. Corporate strategies of secrecy and user interfaces that hide traces of AI-driven personalization combine with the inherent opacity of deep learning algorithms (whose inner workings are not directly comprehensible to human interpreters) to create a marked lack of transparency associated with all aspects of emerging technologies. It is in response to the widespread application of AI-based solutions to various products and services in the late 2010s that multiple expert groups-both national and international-have voiced the demand to 'open' the algorithmic black box, to audit, expound, and demystify AI. They claim that to ensure that the use of AI is ethical, we must design emerging systems to be transparent, explainable, and auditable. 1 The opening of the algorithmic black box, however, cannot be seen only as an engineering challenge. It is critique, as the underside of making, that prioritizes unboxing, debunking the illusion, seeing through-to reveal how an object really works. Critique-grounded in the tradition of Critical Theory and practiced by cultural studies, critical race theory, queer theory, as well as decolonial theory scholars, among others-moves beyond the technical detail to uncover the desires, ideologies, and social relations forged into objects, opening the black boxes of history, culture, and progress. In what follows, I argue that the calls for technological transparency demand that we combine the practice of design with critique. I relate the question of AI transparency to the broader challenge of responsible making, contending that future action must aim to systematically reconcile design-as a way of concealing-with critique-as a manner of revealing. \n Levels of opacity Many companies sell simple analytics tools as artificial intelligence, as something that supposedly supplants human intelligence to deliver better results. What is advertised as an 'AI solution' often relies on simple data analysis performed by human analysts. The use of metaphors and simplifications obfuscates human labor, labor that is outsourced, hidden away, in an invisible and immaterial factory, in a different part of the globe. According to Ian Bogost (2015) , the 'metaphor of mechanical automation' is nothing more than a welldirected, but misleading, masquerade (n. pag.). 
Although the metaphor is only an approximation, a distortion, or even a 'caricature,' it convincingly plays the role of an accurate depiction of the whole. The term artificial intelligence, elusive, misleading and with a definition that has changed over time, forms the basis of the marketing stunt. Technology companies rely on abstract, overly schematic representations to simplify reality and arrive at an easily digestible, prepacked idea of the object, one that misrepresents the object's essence and overlooks its true composition, but also satiates the end user's curiosity. Other systems, as a matter of deliberate practice, incorporate complex data processing and machine learning surreptitiously. Shoshana Zuboff (2019) observes that the influence these systems have on our decision-making is 'designed to be unknowable to us ' (p.11) ; that company strategies of misdirection serve as 'the moat that surrounds the castle and secures the action within' (p.65.)-a way for corporations like Google or Facebook to protect their secrets and mislead the public. These systems could, in theory, be designed to be more 'knowable.' Elements of the user interface could, for example, flag up when algorithmic operations are influencing the user's decision-making. Just like labels inform the consumer about the product's contents, the interfaces of Facebook and YouTube could announce to users that the information delivered by the platforms is algorithmically curated. Considering that only 24% of US college students realize Facebook automatically prioritizes certain posts and hides others (Powers 2017) , such a feature would definitely be relevant. But the challenge of transparency at a time of unprecedented technological complexity cannot be approached only as a matter of failed (or indeed successful, depending on your position) communication. The 'opacity' of machine learning algorithms refers, after all, not only to 'institutional self-protection and concealment,' but also, as Burrell (2016) points out, to 'the mismatch between mathematical optimization in highdimensionality characteristic of machine learning and the demands of human-scale reasoning and styles of semantic interpretation' (p.2). The fundamental lack of transparency of systems that incorporate AI solutions relates not only to convoluted storytelling devised by marketing teams, or misleading interfaces and user experience design, but also-and most importantly-an emerging form of making brought about by the automation of cognitive tasks themselves. Considering this level of opacity of AI systems, a team of researchers from MIT's Intelligence and Decision Technologies Group developed a neural network named, suggestively, Transparency by Design Network (Mascharka et al. 2018) that not only performs 'human-like reasoning steps' to answer questions about the contents of images, but also 'visually renders its thought process' as it solves problems, allowing human analysts to interpret its decision-making. It is a deep learning system carrying out a sort of unboxing on itself, incorporating explainability in its very operations: a realization of the most common understanding of 'transparency' in the context of contemporary AI research. \n Transparency and critique The challenge of technological transparency that has now attracted the attention of policymakers has constituted a concern for cultural studies scholars, media theorists, and design philosophers for decades. 
In the early 1990s, the philosopher Paul Virilio (1994) noted that once 'synthetic vision' becomes a reality and the human subject is excluded from the process of observation of images 'created by the machine for the machine,' the power of statistical science to appear objective (and thus persuasive) will be 'considerably enhanced, along with its discrimination capacities' (p.75). Another prominent media theorist Friedrich Kittler (2014) expressed concerns about modern media technologies that are 'fundamentally arranged to undermine sensory perception.' Kittler wrote about a 'system of secrecy' based on user interfaces that 'conceal operations necessary for programming' and thus 'deprive users of the machine as a whole,' to suggest that perceiving the imitation, penetrating through the illusion of software 'that appears to be human,' is a fundamental challenge in the age of global-scale computing (pp. 221-24). Transparency, as Adrian Weller (2017) has poignantly noted, is an ambiguous term that will mean different things to different people. For the user, a transparent system will 'provide a sense for what [it] is doing and why,' while for an expert or regulator, it will enable auditing 'a prediction or decision trail in detail' (p.3). The calls for technological transparency have thus been filtered down to reach various stakeholder groups with different meanings and representing different interests. But what seems consistent in all of these diverse takes on the development, use, and regulation of technology is that transparency is framed as a matter of design. In what follows, I problematize this claim, arguing that design, in the most fundamental sense, relies on concealment and obfuscation. I contend that only the sort of transparency that arises from critique-a method of theoretical examination that, by revealing pre-existing power structures, aims to challenge them-can help us produce technological systems that are less deceptive and more just. \n\t\t\t Footnote 1 (continued): Another interdisciplinary group of experts working under the auspices of IEEE (the world's largest professional association for electronic and electrical engineers) published, in 2019, the first edition of Ethically Aligned Design, a set of actionable recommendations on how to align design practices with society's values and principles, stressing that the 'standards of transparency, competence, accountability, and evidence of effectiveness should govern the development of autonomous and intelligent systems' (p. 5). \n Design as blackboxing \n Art and artifice Coined in 1956 by John McCarthy, the term 'artificial intelligence' had its critics among those who attended the Dartmouth Conference (which famously established the field of AI); Arthur Samuel argued that 'the word artificial makes you think there's something kind of phony about this, […] or else it sounds like it's all artificial and there's nothing real about this work at all' (in: McCorduck 2004, p. 115). The historian of AI Pamela McCorduck notes that while other terms, such as 'complex information processing,' were also proposed, it was 'artificial intelligence' that endured the trial of time. According to her, it is 'a wonderfully appropriate name, connoting a link between art and science that as a field AI indeed represents' (p. 115). She is referring indirectly here to the origins of the word artificial; in Latin, artificialis means 'of or belonging to art,' while artificium is simply a work of art, but also a skill, theory, or system. 
When the philosopher Vilém Flusser traced the etymology of the word 'design' in his The Shape of Things: A Philosophy of Design (1999) , he referred to this relationship between art and artifice to argue that all human production, all culture, can be defined as a form of trickery. Flusser rejects the distinction between art and technology, and goes back to these ancient roots: the Greek for 'trap' is mechos (mechanics, machine); the Greek techne corresponds to the Latin ars; an artifex means a craftsman or artist, but also a schemer or trickster-to demonstrate that in their essence all forms of making are meant to help us 'elude our circumstances,' to cheat our own nature. Culture itself becomes a delusion brought about by means of design-a form of selfdeception that makes us believe we can free ourselves from natural restrictions by producing a world of artifice. From doors to rockets, from tents to computer screens, from pencils to mechanized intelligences, Flusser selects his examples to show that, ultimately, any involvement with culture is based on deception: sometimes 'this machine, this design, this art, this technology is intended to cheat gravity, to fool the laws of nature' (ch.1, n. pag.)-and sometimes to trick ourselves into thinking we control both gravity and the laws of nature. In that sense, art and technology are representative of the same worldview in which cultural production must be deceptive/artful enough to enable humans to go beyond the limits of what is (humanly) possible. Flusser refers to the act of weaving to explain the 'conspiratorial, even deceitful' (ch.18) character of design. In the process of carpet production, he points out, knotting is meant to deny its own warp, to hide the threads behind a pattern, so that anyone stepping on the finished rug perceives it as a uniform surface, according to the designer's plan. He offers weaving as one of the primordial forms of cultural production to embody trickery, but the same holds true for any form of design. The trick is always based on misdirection, shifting the end user's attention from the material to the application, from the current state of things to emerging possibilities and new futures. Designing is a methodical way of crafting alternative realities out of existing materials-a process of casting the intended shape onto the chosen fabric so as to create a new possibility. The material used in that process must, so to speak, dematerialize: it has to disappear from view and give way to the new object-to abstract the end result from the point of origin and the labor process. By obfuscating some components while exhibiting others, 'ideal' design enables an end user's cognitive efficiency. \n Patterns, layers, and repetitions For Flusser, any product of human making is both an object and an obstacle-Latin objectum, Greek problema-or, more specifically, any object is also an 'obstacle that is used for removal of obstacles' (ch.9). To move forward, we solve problems that lie ahead and obstruct our way; we produce objects that help us remove these obstacles; but the same objects turn into obstacles for those that come after us. In other words, since the results of human problem-solving are stored in objects, progress involves obfuscation and forgetting. 
We come up with a solution and, with time, this singular idea turns into a pattern; others use the already established template to produce new, more complex structures and these structures turn into new patterns, covering up previous layers of design with new design. To expedite the process of production, to advance, to move faster, the designer turns to these conventions and templates, choosing from a menu of preprogrammed options-or abstracting new rules based on previous patterns. And as the complexity of the production process increases, the reliance on patterns grows too. New design always depends on previous design, and this ultimate dependence on patterns and abstractions complicates understanding the process in its totality. In the age of ubiquitous computing, speaking of obfuscation by design becomes of particular importance. In 2015, Benjamin Bratton called his model of the new kind of layering brought about by planetary-scale computation 'the Stack': 'an accidental megastructure, one that we are building both deliberately and unwittingly and is in turn building us in its own image' (p.5). New technologies 'align, layer by layer, into something like a vast, if also incomplete, pervasive if also irregular, software and hardware Stack' (p.5). This makes it hard to perceive the Stack's overarching structure, indeed, to see it as design, however incidental. Today, we produce new technologies, new objects, to see, know, and feel more, to register what is normally hidden from our view, meanwhile, creating complex systems based on multiple, invisible layers and algorithmic operations whose effects are not always comprehensible even to the designers themselves. \n Automations and automatisms In her comprehensive account of what she calls 'surveillance capitalism,' Shoshana Zuboff points out the dangers of technological illusion-'an enduring theme of social thought, as old as the Trojan horse' (p.16)-that serves the new economic project in rendering its influence invisible. Surveillance capitalism claims 'human experience as free raw material for translation into behavioral data,' and turns that data into 'prediction products' that can later be sold to advertisers (p.8). Echoing the work of philosophers such as Bernard Stiegler (2014 Stiegler ( , 2015 or Antoinette Rouvroy (2016) , Zuboff argues that the ultimate goal of this new form of capitalism is 'to automate us,' by reprogramming our behavior and desires. Various internet platforms that dominate the market prompt us to action, influence our decision making, relying on big data analyses of our preference trends online. Automated systems create statistical models to profile users, tracing any emerging patterns in countless interactions with digital products; patterns turn into further abstractions, new models that are later reflected in new products and solutions, which end up 'automating' us, guiding our decision-making without our knowing. But is this process specific to AI-enhanced personalization under surveillance capitalism? Bratton has recently argued that what 'at first glance looks autonomous (self-governing, set apart, able to decide on its own) is, upon closer inspection, always also decided in advance by remote ancestral agents and relays, and is thus automated as well ' (2019, loc.345, n. pag.) . Any decision taken now relies on multiple decisions taken in the past; new design depends on previous design; a new object coalesces from an aggregation of old solutions. 
Culture is an amalgamation of such objectsobjects that, ironically, become obstacles because they are meant to enable our cognitive efficiency. A tool becomes an obstacle because the results of our problem-solving and labor are already stored within it; a tool must never be seen as a tool, as its use must be intuitive-it must remain imperceptible; any new tool meant to advance the process is made with existing tools, and so the emerging layering of design in the Anthropocene makes it harder to distinguish between tool and fabric. Extending this to the ongoing automation of cognitive tasks in the age of ubiquitous computing, the phenomenon takes on new scale. This is why the emerging need for transparency refers not so much to company politics of disinformation or algorithmic black boxes, as to the very essence of our culture, as a process of knowledge production, pattern formation, and concealment. Particular problems caused by the widespread adoption of automated decision-making systems, such as algorithmic bias, can have specific, targeted, solutions in the form of new policy, engineering standards, or better education. But a shift of focus from the particular to the total is more than an exercise in theory-it makes us realize that transparency has never been at the heart of our making, that design has always been a form of blackboxing. There is, in that sense, something deeply anti-cultural about transparency. Or, putting it differently, there is nothing natural about transparency by design: we have been programmed to cover up as we make, not the opposite. The ongoing transformation of lived experiences into data is a new analytical paradigm that demands our intervention, truly calls for an 'unboxing,' an excavation of processes and data trails. But the opening of the algorithmic black box cannot be viewed only as a technical issue-precisely because any solution is, first and foremost, a result of cultural blackboxing. While contemporary debates on AI focus on transparency as a direct response to the opacity of algorithms, what we are in need of are approaches that aim to 'unbox' new technologies as objects-obstacles, solutions that aim towards cognitive automation, products that store the results of problem-solving performed 'by remote ancestral agents,' and that can thus perpetuate injustices via automatically accepted patterns and norms. \n Critique as unboxing \n Apparent transparencies Among entries on subjects such as theology, economics, and medicine, Denis Diderot and Jean le Rond d'Alembert included in their Dictionary of the Sciences, Arts, and Crafts, entries on artisanal practices that detail the individual steps in the processes of production adopted in clockmaking, tailoring, woodworking, and many others. One such entry focuses on the making of artificial flowers: the first plate (Fig. 1 ) depicts a dozen workers scattered across the main workshop area, performing different tasks at various stages of manufacturing, while following pages of illustrations showcase the most popular templates used to emboss specific petal shapes onto fabric, with a final plate celebrating the finished commodity. By bringing to view the backstages of production, the Encyclopedia was essentially undesigning, reversing the process of 'conspiratorial weaving' described by Flusser. Now, in an age of growing technological complexity, shaped by significant degrees of cognitive automation, there is a need for a similar undesigning of new technologies. The artist Todd McLellan's photographs (Fig. 
2 ) that document his multiple attempts at taking various objects apart are a suggestive illustration of this challenge in the age of extreme technological complexity. We might dissemble our smartphone, but learning what is hidden beneath the interactive surface of the touchscreen will never give us an indication of how the device really works and, more significantly, in whose interest. The meaningless innards of the device become symbolic of the contestable quality that transparency really is-if we think of it as a condition for, or indeed a guarantee of, understanding. Critical undesigning cannot be confused with a simple act of reverse engineering. There can be transparency without critique, or apparent transparency: but a sort of transparency that does not arise from critical processes of unboxing is unlikely to advance comprehension. In his lecture on black boxes, Galloway (2010) relates Marx's idea of descent into 'the hidden abode of production' (p.7), as a means of uncovering capital relations forged into commodities, to 'traditions of critical inquiry, in which objects were unveiled or denaturalized to reveal their inner workings-from Descartes's treatise on method […] to the Freudian plumbing of the ego' (p.5). Based on the assumption that the surface is merely a superficial facade to be penetrated by means of critique, these theories prioritized the interior and perceived objects as 'mystical black boxes waiting to be deciphered to reveal the rationality (of history, of totality) harbored within' (p.3). For the purpose of this article, critique is understood as a broad set of methodologies, grounded in the tradition of Critical Theory, that perform a metaphorical dismantling of objects to reveal how hidden and immaterial layers of design reflect social and economic structures-and how the power relations these structures generate become the sources of injustice, oppression, and exploitation. Critique's ultimate goal is to uncover and challenge the system(s) that objects of design engender; revelation is conceived of as the condition necessary for resistance-and systemic transformation. Looking beyond individual design flaws (and fixes), critique points to those 'ancestral relays' that automate our thinking-to patterns, repetitions, and automatisms so deeply ingrained, weaved into the fabric of our culture, that they become imperceptible-in particular to those who do not experience the injustices resulting from the adoption of already established patterns. \n Critique in the age of AI A former YouTube employee, Guillaume Chaslot, has coined the term Algotransparency to describe an experiment in which he investigates the terms appearing most frequently in the titles of videos recommended by YouTube. A program developed by Chaslot and his team traces thematic patterns in YouTube recommendations to prove there exists a systemic bias that promotes controversial clips. His research suggests Google's platform indeed 'systematically amplifies videos that are divisive, sensational and conspiratorial' (Lewis 2018) -that the recommendations are not related to the individual user's interests (as the company claims), but rather-exploit controversy to boost clickability. Algotransparency attempts to unbox the logic of YouTube's copyrightprotected recommendation algorithm without directly looking into the system's black box, concentrating only on the effects of its activity. 
This specific experiment gives a good indication of where we should be directing our attention: focusing not so much on how YouTube operates, as on why it works at all. This question extends beyond the technicality of the algorithm, to more widely interrogate the forces orchestrating our consumption of digital goods and whose interests they serve-what Zuboff calls surveillance capitalism, or what Stiegler refers to as hyperindustrialism. Ian Bogost (2015) has argued that the illusion of automation in technology-the trick that misdirects our attention from essential questions about human decision-making incorporated into emerging systems-breaks down 'once you bother to look at how even the simplest products are really produced.' In 2014 he collaborated with Alexis Madrigal to analyze Netflix's recommendation system and demonstrate that the platform's operations are distributed among so many different agents-including human curators who hand-tag all Netflix content-'that only a zealot would call the end result an algorithm'(Bogost 2015). Many experiments and critical projects try to achieve something similar: debunk the illusion of software by exposing AI as processual and collaborative, tracing the results of data analysis back to human decisions, biases, and labor. In their Anatomy of an AI System, for instance, Crawford and Joler (2018) present a figurative dissection of Amazon's Echo device that brings to view the invisible mechanisms and dynamisms that the product encapsulates (Fig. 3 ). The detailed mapping of various objects and agents, as well as multiple layers of interaction between those elements, constitutes a representation of the system as composed not only of hardware and software, data and computation, but also human labor and planetary resources. Critique in the age of extreme technological complexity is as much about dissecting and penetrating, as it is about charting the invisible and immaterial terrains of interaction, analysis, consumption, and computation; mapping wider relations between energies, influences, and resources under surveillance capitalismpatterns of exploitation of both people and environments that the production of objects/obstacles entails. In another project, Crawford teamed up with the artist Trevor Paglen to carry out what they call an archeology of datasets, such as ImageNet, used in machine learning to train AI to recognize specific elements of images-sets that can also become sources of bias inscribed into emerging systems. By excavating the datasets' underlying structures, Crawford and Paglen (2019) aim to reveal their implicit meanings: 'we have been digging through the material layers, cataloguing the principles and values by which something was constructed, and analyzing what normative patterns of life were assumed, supported, and reproduced.' The blackboxed Earth, a world deeply transformed by ubiquitous computing, by various layers of what Bratton calls the Stack, demands this form of unearthing. Successful critique in the age of AI exposes the technology as relying on human cognition and decision-making; more broadly, critique reveals the constellation of objects/obstacles as products of layered problem-solving, a flawed process that is necessarily tainted by pre-existing patterns and abstractions, biases and beliefs. 
Prioritizing reflection over efficiency, critique becomes a methodical way of resisting our reliance on patterns-patterns that allow us to move faster, but that can also harbor previously made assumptions-about gender (Costanza-Chock 2018) or race (Benjamin 2019) as particularly poignant examples-and thus perpetuate, rather than challenge, pre-existing forms of injustice. \n Reconciling design with critique In keeping up with the societal demand for transparent AI, the big players of the tech industry have been introducing changes in their engineering standards and organizational structures, hiring ethicists and policy specialists to cooperate with their product development teams. In 2016, Microsoft established the Aether Committee, a body of senior advisors to the company's leadership, that provides guidance 'on rising questions, challenges, and opportunities with the development and fielding of AI technologies' (2020), and oversees the work of other teams 'to ensure that AI products and services align with Microsoft's AI principles'-which include transparency and accountability. In 2017, DeepMind set up its Ethics and Society team 'to guide the responsible development and deployment of AI.' The team composed of 'ethicists and policy researchers' collaborates with the company's AI research team 'to understand how technical advances will impact society, and find ways to reduce risk.' Smaller industry players who decide to follow suit, but cannot afford to establish their own ethics 'departments,' begin to enlist the help of 'ethical auditing' companies; Glassbox, for example, is a tech consultancy startup, founded in 2018, that aims to 'provide clarity to the black box' by analyzing software products for signs of bias and training the client company's employees about systemic injustices. This way, elements of critique that reveal potential implications of human decision-making in design are supposed to become part of the production pipeline. These sites of interaction between 'humanists' and 'technologists' in the industry-even if, in some cases, they amount to nothing more than backdrops for press releasesdeserve our attention. Specifically, they require of us a comprehensive rethinking of what satisfies our desire for transparency in the age of extreme technological complexity. Can a system of checks and balances in the industry, an ongoing negotiation between blackboxing and unboxing, lead to anything more than design that anticipates critique? Is critique from within the industry necessarily a compromise and, therefore, nothing more than another step in the process of production of objects that are also obstacles? Must critique be external to the process of designing to remain genuine? If design is, fundamentally, blackboxing and automation, and critique is unboxing that aims to reverse the process of 'conspiratorial weaving,' then we could conclude that these two sides of human activity are in stark opposition to one another-that design is incompatible with critique. Instituting a real change in the way we move forward must focus precisely on tackling this ostensible impossibility-on reconciling design with critique, progress with suspension, production with reflection. The calls for transparency by design require that the 'making' itself be reinvented to incorporate critique. 
If a systemic transformation depends on a new-found compatibility between design and critique, then the designers of emerging systems should turn to already existing, alternative design practices to learn what combining the processes of making and critique might entail. Anthony Dunne and Fiona Raby, who have been pioneers in the field of 'critical design' since the late 1990s, have been advocating for design understood as 'critical thought translated to materiality ' (2013, p.35 ). An object of design in their framing must become a critical challenge-as much for the designer, as for the user: 'it encourages people to question, in an imaginative, troubling, and thoughtful way, everydayness and how things could be different.' (p.189) More recently, Ratto (2011) coined the term 'critical making' to describe a design process that focuses on 'the act of shared construction itself as an activity and a site for enhancing and extending conceptual understandings of critical sociotechnical issues' (p.254)-with those who take part as the agents of critique. For Dunne and Raby, design has become 'so absorbed in industry, so familiar with dreams of industry, that it is almost impossible to dream its own dreams' (p.88). The suggestion is that the challenge lies not so much in the incompatibility between making and critique, as between the futures imagined by the industry and the dreams functioning outside of it. While elements of critique-gender critique and critical race theory in particular-seem to have already penetrated sections of the industry in the form of the mentioned ethics auditing services, reconciling the critique of surveillance capitalism and hyperindustrialism, the forces behind most of today's innovation, with the design of new technologies within corporate structures appears a considerable (and counterintuitive) feat. Perhaps this is where we should direct our attention: more research is needed on how practices such as critical design and critical making can influence the process of AI design; how critique can be operationalized within the industry to challenge industrial values and visons, including the idea of 'progress' itself. In a recently published volume on the practice of 'undesign' (McNamara and Coombs 2019), Cameron Tonkinwise's essay proposes what he calls 'anti-progressive' design as a means of interrogating the designers' internalized desire for 'progress.' While users, as he rightly observes, are willing 'to unlearn and relearn modes of interaction' if what is new is also 'easier and more convenient, and hopefully more effective and pleasurable' (p.76), it is the designers who should learn how 'not to prefer progress, or how to prefer what does not feel like progress' (p.81). Tonkinwise argues it is the designers' duty 'to find a way to pursue the destructively preferable without casting the resulting change as progress: what is preferable are futures that no longer appear to be mere advancements of what currently exists ' (p.81) . This is to say that the responsibility for future action lies, primarily, with the designers: the human makers who, as products of a specific culture, are being increasingly challenged to become aware of their own biases and automatisms that predetermine their actions and choices. For designers, combining design with critique, rather than an attempt at making things transparent, would constitute an attempt at becoming self-conscious. 
Flusser argued that a renewed form of culture would have to be a culture 'aware of the fact that it was deceptive' (ch.1). To rethink technological transparency, we should first recognize that the designer has always been a trickster laying out traps, technologizing misdirection to pave the way forward. All aspects of design-practical, political, moral-would have to reflect this awareness: as we move forward, we must acknowledge that any problem solved now will also form a trap for those coming after us. Fig. 1 Maker of artificial flowers, 1765 (https://hdl.handle.net/2027/spo.did2222.0001.451). Fig. 3 Kate Crawford and Vladan Joler, Anatomy of an AI System, 2018 (courtesy of the artists). \n The High-Level Expert Group on AI convened by the European Commission presented its Ethics Guidelines for Trustworthy Artificial Intelligence in early 2019. The document identifies several key characteristics of a system that can be deemed trustworthy, which include transparency, defined as a form of traceability of data, operations, and business models that shape the end product (AI HLEG", "date_published": "n/a", "url": "n/a", "filename": "Hollanek2020_Article_AITransparencyAMatterOfReconci.tei.xml", "abstract": "In the late 2010s, various international committees, expert groups, and national strategy boards have voiced the demand to 'open' the algorithmic black box, to audit, expound, and demystify artificial intelligence. The opening of the algorithmic black box, however, cannot be seen only as an engineering challenge. In this article, I argue that only the sort of transparency that arises from critique-a method of theoretical examination that, by revealing pre-existing power structures, aims to challenge them-can help us produce technological systems that are less deceptive and more just. I relate the question of AI transparency to the broader challenge of responsible making, contending that future action must aim to systematically reconcile design-as a way of concealing-with critique-as a manner of revealing.", "id": "d848f2c2c8d3985de73351758eb000db"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Deepak Ramachandran", "Eyal Amir"], "title": "Bayesian Inverse Reinforcement Learning", "text": "Introduction The Inverse Reinforcement Learning (IRL) problem is defined in [Russell, 1998] as follows: Determine: the reward function that an agent is optimizing. Given: 1) measurements of the agent's behaviour over time, in a variety of circumstances; 2) measurements of the sensory inputs to that agent; 3) a model of the environment.
In the context of Markov Decision Processes, this translates into determining the reward function of the agent from knowledge of the policy it executes and the dynamics of the state-space. There are two tasks that IRL accomplishes. The first, reward learning, is estimating the unknown reward function as accurately as possible. It is useful in situations where the reward function is of interest by itself, for example when constructing models of animal and human learning or modelling opponents in competitive games. Pokerbots can improve performance against suboptimal human opponents by learning reward functions that account for the utility of money, preferences for certain hands or situations and other idiosyncrasies [Billings et al., 1998]. There are also connections to various preference elicitation problems in economics [Sargent, 1994]. The second task is apprenticeship learning -using observations of an expert's actions to decide one's own behaviour. It is possible in this situation to directly learn the policy from the expert [Atkeson and Schaal, 1997]. However the reward function is generally the most succinct, robust and transferable representation of the task, and completely determines the optimal policy (or set of policies). In addition, knowledge of the reward function allows the agent to generalize better, i.e. a new policy can be computed when the environment changes. IRL is thus likely to be the most effective method here. In this paper we model the IRL problem from a Bayesian perspective. We consider the actions of the expert as evidence that we use to update a prior on reward functions. We solve reward learning and apprenticeship learning using this posterior. We perform inference for these tasks using a modified Markov Chain Monte Carlo (MCMC) algorithm. We show that the Markov chain for our distribution with a uniform prior mixes rapidly, and that the algorithm converges to the correct answer in polynomial time. We also show that the original IRL is a special case of Bayesian IRL (BIRL) with a Laplacian prior. There are a number of advantages of our technique over previous work: We do not need a completely specified optimal policy as input to the IRL agent, nor do we need to assume that the expert is infallible. Also, we can incorporate external information about specific IRL problems into the prior of the model, or use evidence from multiple experts. IRL was first studied in the machine learning setting by [Ng and Russell, 2000], who described algorithms that found optimal rewards for MDPs having both finite and infinite states. Experimental results show improved performance by our techniques in the finite case. The rest of this paper is organised as follows: In section 2 we define our terms and notation. Section 3 presents our Bayesian model of the IRL process. Section 4 discusses how to use this model to do reward learning and apprenticeship learning, while section 5 discusses the sampling procedure. Sections 6, 7 and 8 then present experimental results, related work and our conclusions respectively. \n Preliminaries We recall some basic definitions and theorems relating to Markov Decision Processes and Reinforcement Learning. A (finite) Markov Decision Problem is a tuple (S, A, T, γ, R) where • S is a finite set of N states; • A = {a_1, . . . , a_k} is a set of k actions; • T : S × A × S → [0, 1] is a transition probability function; • γ ∈ [0, 1) is the discount factor; • R : S → ℝ is a reward function, with absolute value bounded by R_max.
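The tuple above maps directly onto a small container type. A minimal sketch in Python (the class name, array layout, and toy example are our own illustrative choices, not the paper's): storing T as an array indexed [s, a, s'] makes the value and Q-function computations later in the paper one-line array operations.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class FiniteMDP:
    """A finite Markov Decision Problem (S, A, T, gamma, R) as defined above."""
    T: np.ndarray      # transition probabilities, shape (N, k, N): T[s, a, s']
    gamma: float       # discount factor in [0, 1)
    R: np.ndarray      # state rewards, shape (N,), with |R(s)| <= R_max

    @property
    def n_states(self) -> int:
        return self.T.shape[0]

    @property
    def n_actions(self) -> int:
        return self.T.shape[1]

# Toy 4-state, 2-action problem with random dynamics, for illustration only.
rng = np.random.default_rng(0)
T = rng.dirichlet(np.ones(4), size=(4, 2))   # each T[s, a, :] sums to 1
mdp = FiniteMDP(T=T, gamma=0.9, R=np.array([0.0, 0.0, 0.0, 1.0]))
```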
The rewards are functions of state alone because IRL problems typically have limited information about the value of action and we want to avoid overfitting. A Markov Decision Process (MDP) is a tuple (S, A, T, γ), with the terms defined as before but without a reward function. To avoid confusion we will use the abbreviation MDP only for Markov Decision Processes and not Problems. We adopt the following compact notation from [Ng and Russell, 2000] for finite MDPs: Fix an enumeration s_1 . . . s_N of the finite state space S. The reward function (or any other function on the state-space) can then be represented as an N-dimensional vector R, whose ith element is R(s_i). A (stationary) policy is a map π : S → A, and the (discounted, infinite-horizon) value of a policy π for reward function R at state s ∈ S, denoted V^π(s, R), is given by $V^\pi(s_{t_1}, R) = R(s_{t_1}) + E[\gamma R(s_{t_2}) + \gamma^2 R(s_{t_3}) + \cdots \mid \pi]$, where the expectation is over the state sequence generated by $\Pr(s_{t_{i+1}} \mid s_{t_i}, \pi) = T(s_{t_i}, \pi(s_{t_i}), s_{t_{i+1}})$. The goal of standard Reinforcement Learning is to find an optimal policy π* such that V^π(s, R) is maximized for all s ∈ S by π = π*. Indeed, it can be shown (see for example [Sutton and Barto, 1998]) that at least one such policy always exists for ergodic MDPs. For the solution of Markov Decision Problems, it is useful to define the following auxiliary Q-function: $Q^\pi(s, a, R) = R(s) + \gamma E_{s' \sim T(s, a, \cdot)}[V^\pi(s', R)]$. We also define the optimal Q-function Q*(·, ·, R) as the Q-function of the optimal policy π* for reward function R. Finally, we state the following result concerning Markov Decision Problems (see [Sutton and Barto, 1998]): Theorem 1 (Bellman Equations). Let a Markov Decision Problem M = (S, A, T, γ, R) and a policy π : S → A be given. Then, 1. For all s ∈ S, a ∈ A, V^π and Q^π satisfy $V^\pi(s) = R(s) + \gamma \sum_{s'} T(s, \pi(s), s') V^\pi(s')$ (1) and $Q^\pi(s, a) = R(s) + \gamma \sum_{s'} T(s, a, s') V^\pi(s')$. 2. π is an optimal policy for M iff, for all s ∈ S, $\pi(s) \in \arg\max_{a \in A} Q^\pi(s, a)$ (2). \n Bayesian IRL IRL is currently viewed as a problem of inferring a single reward function that explains an agent's behaviour. However, there is too little information in a typical IRL problem to get only one answer. For example, consider the MDP shown in Figure 1. There are at least three reasonable kinds of reward functions: R_1(·) has high positive value at s_1 (and low values elsewhere), which explains why the policy tries to return to this state, while R_2(·) and R_3(·) have high values at s_2 and s_3 respectively. Thus, a probability distribution is needed to represent the uncertainty. Figure 1: An example IRL problem. Bold lines represent the optimal action a_1 for each state and broken lines represent some other action a_2. Action a_1 in s_1 has probabilities 0.4, 0.3 and 0.3 of going to states s_1, s_2, s_3 respectively, and all other actions are deterministic. \n Evidence from the Expert Now we present the details of our Bayesian IRL model (Fig. 2). We derive a posterior distribution for the rewards from a prior distribution and a probabilistic model of the expert's actions given the reward function. Consider an MDP M = (S, A, T, γ) and an agent X (the expert) operating in this MDP. We assume that a reward function R for X is chosen from a (known) prior distribution P_R. The IRL agent receives a series of observations of the expert's behaviour O_X = {(s_1, a_1), (s_2, a_2), . . . , (s_k, a_k)}, which means that X was in state s_i and took action a_i at time step i.
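Theorem 1 also gives a direct recipe for computing V^π, Q^π, and hence the optimal Q* on which the evidence model below is built: for a fixed deterministic policy, equation (1) is a linear system. A minimal sketch under the assumption that T and R are stored as the arrays from the previous sketch (function names are ours):

```python
import numpy as np

def policy_value(T, R, gamma, policy):
    """Solve V = R + gamma * T_pi V (Bellman equation 1) for a deterministic policy."""
    n = len(R)
    T_pi = T[np.arange(n), policy]           # row s is T(s, pi(s), .)
    return np.linalg.solve(np.eye(n) - gamma * T_pi, R)

def q_values(T, R, gamma, V):
    """Q^pi(s, a) = R(s) + gamma * sum_{s'} T(s, a, s') V(s')."""
    return R[:, None] + gamma * T @ V        # shape (N, k)

def optimal_policy(T, R, gamma, max_iter=200):
    """Plain policy iteration; returns (pi*, Q*) for the reward vector R."""
    n = T.shape[0]
    pi = np.zeros(n, dtype=int)
    Q = q_values(T, R, gamma, policy_value(T, R, gamma, pi))
    for _ in range(max_iter):
        new_pi = Q.argmax(axis=1)
        if np.array_equal(new_pi, pi):       # greedy w.r.t. Q^pi == pi, so pi is optimal
            break
        pi = new_pi
        Q = q_values(T, R, gamma, policy_value(T, R, gamma, pi))
    return pi, Q
```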
For generality, we will not specify the algorithm that X uses to determine his (possibly stochastic) policy, but we make the following assumptions about his behaviour: 1. X is attempting to maximize the total accumulated reward according to R. For example, X is not using an epsilon greedy policy to explore his environment. 2. X executes a stationary policy, i.e. it is invariant w.r.t. time and does not change depending on the actions and observations made in previous time steps. For example, X could be an agent that learned a policy for (M, R) using a reinforcement learning algorithm. Because the expert's policy is stationary, we can make the following independence assumption: $\Pr_X(O_X \mid R) = \Pr_X((s_1, a_1) \mid R)\,\Pr_X((s_2, a_2) \mid R) \cdots \Pr_X((s_k, a_k) \mid R)$. The expert's goal of maximizing accumulated reward is equivalent to finding the action for which the Q* value at each state is maximum. Therefore the larger Q*(s, a) is, the more likely it is that X would choose action a at state s. This likelihood increases the more confident we are in X's ability to select a good action. We model this by an exponential distribution for the likelihood of (s_i, a_i), with Q* as a potential function: $\Pr_X((s_i, a_i) \mid R) = \frac{1}{Z_i} e^{\alpha_X Q^*(s_i, a_i, R)}$, where α_X is a parameter representing the degree of confidence we have in X's ability to choose actions with high value. (Note that the probabilities of the evidence should be conditioned on α_X as well (Fig. 2). But it will be simpler to treat α_X as just a parameter of the distribution.) This distribution satisfies our assumptions and is easy to reason with. Figure 2: The BIRL model. The likelihood of the entire evidence is: $\Pr_X(O_X \mid R) = \frac{1}{Z} e^{\alpha_X E(O_X, R)}$, where $E(O_X, R) = \sum_i Q^*(s_i, a_i, R)$ and Z is the appropriate normalizing constant. We can think of this likelihood function as a Boltzmann-type distribution with energy E(O_X, R) and temperature 1/α_X. Now, we compute the posterior probability of reward function R by applying Bayes theorem: $\Pr_X(R \mid O_X) = \frac{\Pr_X(O_X \mid R)\, P_R(R)}{\Pr(O_X)} = \frac{1}{Z} e^{\alpha_X E(O_X, R)} P_R(R)$ (3). Computing the normalizing constant Z is hard. However the sampling algorithms we will use for inference only need the ratios of the densities at two points, so this is not a problem. \n Priors When no other information is given, we may assume that the rewards are independently identically distributed (i.i.d.) by the principle of maximum entropy. Most of the prior functions considered in this paper will be of this form. The exact prior to use, however, depends on the characteristics of the problem: 1. If we are completely agnostic about the prior, we can use the uniform distribution over the space −R_max ≤ R(s) ≤ R_max for each s ∈ S. If we do not want to specify any R_max we can try the improper prior P(R) = 1 for all R ∈ ℝ^N. 2. Many real world Markov decision problems have parsimonious reward structures, with most states having negligible rewards. In such situations, it would be better to assume a Gaussian or Laplacian prior: $P_{Gaussian}(R(s) = r) = \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{r^2}{2\sigma^2}}, \forall s \in S$, or $P_{Laplace}(R(s) = r) = \frac{1}{2\sigma} e^{-\frac{|r|}{2\sigma}}, \forall s \in S$. 3. If the underlying MDP represented a planning-type problem, we expect most states to have low (or negative) rewards but a few states to have high rewards (corresponding to the goal); this can be modeled by a Beta distribution for the reward at each state, which has modes at high and low ends of the reward space: $P_{Beta}(R(s) = r) = \frac{1}{(\frac{r}{R_{max}})^{\frac{1}{2}} (1 - \frac{r}{R_{max}})^{\frac{1}{2}}}, \forall s \in S$. In section 6.1, we give an example of how more informative priors can be constructed for particular IRL problems. \n Inference We now use the model of section 3 to carry out the two tasks described in the introduction: reward learning and apprenticeship learning. Our general procedure is to derive minimal solutions for appropriate loss functions over the posterior (Eq. 3). Some proofs are omitted for lack of space. \n Reward Learning Reward learning is an estimation task.
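Before turning to the loss functions, note that Eq. (3) only ever needs to be evaluated up to its normalising constant, which is all the sampling algorithms below require. A minimal sketch of the unnormalised log-posterior under a Laplacian prior, reusing the optimal_policy helper from the Bellman sketch above; α_X and σ are free parameters, and the treatment of Z simply follows the paper's.

```python
import numpy as np

# Reuses optimal_policy(T, R, gamma) from the policy-iteration sketch above.

def log_likelihood(T, R, gamma, observations, alpha_x=1.0):
    """Unnormalised log Pr_X(O_X | R) = alpha_X * sum_i Q*(s_i, a_i, R).

    Following the paper, the normaliser Z is dropped: MCMC only ever needs
    ratios of the (unnormalised) posterior at two reward vectors.
    """
    _, Q_star = optimal_policy(T, R, gamma)
    return alpha_x * sum(Q_star[s, a] for s, a in observations)

def log_laplace_prior(R, sigma=1.0):
    """i.i.d. Laplacian prior: log P_R(R) = -sum_s |R(s)| / (2*sigma) + const."""
    return -np.abs(np.asarray(R)).sum() / (2.0 * sigma)

def log_posterior(T, R, gamma, observations, alpha_x=1.0, sigma=1.0):
    """Unnormalised log of Eq. (3) with a Laplacian prior."""
    return log_likelihood(T, R, gamma, observations, alpha_x) + log_laplace_prior(R, sigma)
```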
The most common loss functions for estimation problems are the linear and squared error loss functions: $L_{linear}(R, \hat{R}) = \|R - \hat{R}\|_1$ and $L_{SE}(R, \hat{R}) = \|R - \hat{R}\|_2$, where R and R̂ are the actual and estimated rewards, respectively. If R is drawn from the posterior distribution (3), it can be shown that the expected value of L_SE(R, R̂) is minimized by setting R̂ to the mean of the posterior (see [Berger, 1993]). Similarly, the expected linear loss is minimized by setting R̂ to the median of the distribution. We discuss how to compute these statistics for our posterior in section 5. It is also common in Bayesian estimation problems to use the maximum a posteriori (MAP) value as the estimator. In fact we have the following result: Theorem 2. When the expert's policy is optimal and fully specified, the IRL algorithm of [Ng and Russell, 2000] is equivalent to returning the MAP estimator for the model of (3) with a Laplacian prior. However in IRL problems where the posterior distribution is typically multimodal, a MAP estimator will not be as representative as measures of central tendency like the mean. \n Apprenticeship Learning For the apprenticeship learning task, the situation is more interesting. Since we are attempting to learn a policy π, we can formally define the following class of policy loss functions: $L^p_{policy}(R, \pi) = \|V^*(R) - V^\pi(R)\|_p$, where V*(R) is the vector of optimal values for each state achieved by the optimal policy for R and p is some norm. We wish to find the π that minimizes the expected policy loss over the posterior distribution for R. The following theorem accomplishes this: Theorem 3. Given a distribution P(R) over reward functions R for an MDP (S, A, T, γ), the loss function $L^p_{policy}(R, \pi)$ is minimized for all p by π*_M, the optimal policy for the Markov Decision Problem M = (S, A, T, γ, E_P[R]). Proof. From the Bellman equations (1) we can derive the following: $V^\pi(R) = (I - \gamma T_\pi)^{-1} R$ (4), where T_π is the |S| × |S| transition matrix for policy π. Thus, for a state s ∈ S and fixed π, the value function is a linear function of the rewards: $V^\pi(s, R) = w(s, \pi) \cdot R$, where w(s, π) is the s-th row of the coefficient matrix $(I - \gamma T_\pi)^{-1}$ in (4). Suppose we wish to maximize E[V^π(s, R)] alone. Then, $\max_\pi E[V^\pi(s, R)] = \max_\pi E[w(s, \pi) \cdot R] = \max_\pi w(s, \pi) \cdot E[R]$. By definition this is equal to V*_M(s), the optimum value function for M, and the maximizing policy π is π*_M, the optimal policy for M. Thus for all states s ∈ S, E[V^π(s, R)] is maximum at π = π*_M. But V*(s, R) ≥ V^π(s, R) for all s ∈ S, reward functions R, and policies π. Therefore $E[L^p_{policy}(\pi)] = E(\|V^*(R) - V^\pi(R)\|_p)$ is minimized for all p by π = π*_M. So, instead of trying a difficult direct minimization of the expected policy loss, we can find the optimal policy for the mean reward function, which gives the same answer. \n Sampling and Rapid Convergence We have seen that both reward learning and apprenticeship learning require the mean of the posterior distribution. However the posterior is complex and analytical derivation of the mean is hard, even for the simplest case of the uniform prior. Instead, we generate samples from these distributions and then return the sample mean as our estimate of the true mean of the distribution. The sampling technique we use is an MCMC algorithm, GridWalk (see [Vempala, 2005]), that generates a Markov chain on the intersection points of a grid of length δ in the region ℝ^{|S|} (denoted ℝ^{|S|}/δ).
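Putting the pieces together, both tasks reduce to estimating the posterior mean and, for apprenticeship learning, solving the MDP for that mean reward (Theorem 3). A minimal end-to-end sketch, using a naive single-coordinate random-walk Metropolis sampler as a stand-in for GridWalk and reusing the helpers from the sketches above; note that it recomputes the optimal Q-function at every proposal, which is exactly the cost that the PolicyWalk modification described next avoids.

```python
import numpy as np

# Reuses log_posterior(...) and optimal_policy(...) from the sketches above.

def sample_rewards(T, gamma, observations, n_samples=2000, delta=0.1, r_max=1.0, seed=0):
    """Naive random-walk Metropolis over the reward grid (a stand-in for GridWalk)."""
    rng = np.random.default_rng(seed)
    n_states = T.shape[0]
    R = np.zeros(n_states)
    logp = log_posterior(T, R, gamma, observations)
    samples = []
    for _ in range(n_samples):
        proposal = R.copy()
        i = rng.integers(n_states)
        proposal[i] = np.clip(proposal[i] + rng.choice([-delta, delta]), -r_max, r_max)
        logp_new = log_posterior(T, proposal, gamma, observations)  # recomputes Q*: expensive
        if np.log(rng.random()) < logp_new - logp:                  # Metropolis acceptance
            R, logp = proposal, logp_new
        samples.append(R.copy())
    return np.array(samples)

def birl(T, gamma, observations):
    """Reward learning: posterior mean. Apprenticeship: optimal policy for that mean (Theorem 3)."""
    samples = sample_rewards(T, gamma, observations)
    R_mean = samples.mean(axis=0)
    pi_star, _ = optimal_policy(T, R_mean, gamma)
    return R_mean, pi_star
```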
However, computing the posterior distribution at a particular point R requires calculation of the optimal Q-function for R, an expensive operation. Therefore, we use a modified version of GridWalk called PolicyWalk (Figure 3) that is more efficient: While moving along a Markov chain, the sampler also keeps track of the optimal policy π for the current reward vector R. Observe that when π is known, the Q-function can be reduced to a linear function of the reward variables, similar to equation 4. Thus step 3b can be performed efficiently. A change in the optimal policy can easily be detected when moving to the next reward vector R̃ in the chain, because then for some (s, a) ∈ (S, A), Q^π(s, π(s), R̃) < Q^π(s, a, R̃) by Theorem 1. When this happens, the new optimal policy is usually only slightly different from the old one and can be computed by just a few steps of policy iteration (see [Sutton and Barto, 1998]) starting from the old policy π. Hence, PolicyWalk is a correct and efficient sampling procedure. Note that the asymptotic memory complexity is the same as for GridWalk. Figure 3: PolicyWalk Sampling Algorithm. Algorithm PolicyWalk(Distribution P, MDP M, Step Size δ): 1. Pick a random reward vector R ∈ ℝ^{|S|}/δ. 2. π := PolicyIteration(M, R). 3. Repeat: (a) Pick a reward vector R̃ uniformly at random from the neighbours of R in ℝ^{|S|}/δ. (b) Compute Q^π(s, a, R̃) for all (s, a) ∈ (S, A). (c) If ∃(s, a) ∈ (S, A), Q^π(s, π(s), R̃) < Q^π(s, a, R̃), then i. π̃ := PolicyIteration(M, R̃, π); ii. set R := R̃ and π := π̃ with probability min{1, P(R̃, π̃)/P(R, π)}; else i. set R := R̃ with probability min{1, P(R̃, π)/P(R, π)}. 4. Return R. The second concern for the MCMC algorithm is the speed of convergence of the Markov chain to the equilibrium distribution. The ideal Markov chain is rapidly mixing (i.e. the number of steps taken to reach equilibrium is polynomially bounded), but theoretical proofs of rapid mixing are rare. We will show that in the special case of the uniform prior, the Markov chain for our posterior (3) is rapidly mixing, using the following result from [Applegate and Kannan, 1993] that bounds the mixing time of Markov chains for pseudo-log-concave functions. Lemma 1. Let F(·) be a positive real valued function defined on the cube {x ∈ ℝ^n | −d ≤ x_i ≤ d} for some positive d, satisfying for all λ ∈ [0, 1] and some α, β: $|f(x) - f(y)| \le \alpha \|x - y\|_\infty$ and $f(\lambda x + (1 - \lambda) y) \ge \lambda f(x) + (1 - \lambda) f(y) - \beta$, where f(x) = log F(x). Then the Markov chain induced by GridWalk (and hence PolicyWalk) on F rapidly mixes to within ε of F in $O(n^2 d^2 \alpha^2 e^{2\beta} \log \frac{1}{\epsilon})$ steps. Proof. See [Applegate and Kannan, 1993]. Theorem 4. Given an MDP M = (S, A, T, γ) with |S| = N, and a distribution over rewards P(R) = Pr_X(R | O_X) defined by (3) with uniform prior P_R over C = {R ∈ ℝ^N | −R_max ≤ R_i ≤ R_max}. If R_max = O(1/N) then P can be efficiently sampled (within error ε) in $O(N^2 \log \frac{1}{\epsilon})$ steps by algorithm PolicyWalk. Proof. Since the uniform prior is the same for all points R, we can ignore it for sampling purposes along with the normalizing constant. Therefore, let f(R) = α_X E(O_X, R). Now choose some arbitrary policy π and let $f^\pi(R) = \alpha_X \sum_i Q^\pi(s_i, a_i, R)$. Note that f^π is a linear function of R and f(R) ≥ f^π(R) for all R ∈ C. Also we have $\max_{s,a} Q^*(s, a) = \max_{s,a,\pi} Q^\pi(s, a) = \max_{s,\pi} V^\pi(s) \le \frac{R_{max}}{1 - \gamma}$. Similarly, $\min_{s,a} Q^*(s, a) \ge -\frac{R_{max}}{1 - \gamma}$.
Therefore, $f(R) \le \frac{\alpha_X N R_{max}}{1 - \gamma}$ and $f^\pi(R) \ge -\frac{\alpha_X N R_{max}}{1 - \gamma}$, and hence $f^\pi(R) \ge f(R) - \frac{2\alpha_X N R_{max}}{1 - \gamma}$. So for all R_1, R_2 ∈ C and λ ∈ [0, 1], $f(\lambda R_1 + (1 - \lambda) R_2) \ge f^\pi(\lambda R_1 + (1 - \lambda) R_2) \ge \lambda f^\pi(R_1) + (1 - \lambda) f^\pi(R_2) \ge \lambda f(R_1) + (1 - \lambda) f(R_2) - \frac{2\alpha_X N R_{max}}{1 - \gamma}$. Therefore, f satisfies the conditions of Lemma 1 with $\beta = \frac{2\alpha_X N R_{max}}{1 - \gamma} = \frac{2N \cdot O(\frac{1}{N})}{1 - \gamma} = O(1)$ and $\alpha = \frac{|f(R_1) - f(R_2)|}{\|R_1 - R_2\|_\infty} \le \frac{2\alpha_X N R_{max}}{(1 - \gamma)\, O(\frac{1}{N})} = O(N)$. Hence the Markov chain induced by the GridWalk algorithm (and the PolicyWalk algorithm) on P mixes rapidly to within ε of P in a number of steps equal to $O(N^2 \cdot \frac{1}{N^2} \cdot N^2 \cdot e^{O(1)} \log \frac{1}{\epsilon}) = O(N^2 \log \frac{1}{\epsilon})$. Note that having R_max = O(1/N) is not really a restriction because we can rescale the rewards by a constant factor k after computing the mean without changing the optimal policy, and all the value functions and Q-functions get scaled by k as well. \n Experiments We compared the performance of our BIRL approach to the IRL algorithm of [Ng and Russell, 2000] experimentally. First, we generated random MDPs with N states (with N varying from 10 to 1000) and rewards drawn from i.i.d. Gaussian priors. Then, we simulated two kinds of agents on these MDPs and used their trajectories as input: The first learned a policy by Q-learning on the MDP + reward function. The learning rate was controlled so that the agent was not allowed to converge to the optimal policy but came reasonably close. The second agent executed a policy that maximized the expected total reward over the next k steps (k was chosen to be slightly below the horizon time). For BIRL, we used PolicyWalk to sample the posterior distribution (3) with a uniform prior. We compared the results of the two methods by their average ℓ_2 distance from the true reward function (Figure 4) and the policy loss with the ℓ_1 norm (Figure 5) of the learned policy under the true reward. Both measures show substantial improvement. Note that we have used a logarithmic scale on the x-axis. We also measured the accuracy of our posterior distribution for small N by comparing it with the true distribution of rewards, i.e. the set of generated rewards that gave rise to the same trajectory by the expert. In Figure 6, we show scatter plots of some rewards sampled from the posterior and the true distribution for a 16-state MDP. These figures show that the posterior is very close to the true distribution. \n From Domain Knowledge to Prior To show how domain knowledge about a problem can be incorporated into the IRL formulation as an informative prior, we applied our methods to learning reward functions in adventure games. There, an agent explores a dungeon, seeking to collect various items of treasure and avoid obstacles such as guards or traps. The state space is represented by an m-dimensional binary feature vector indicating the position of the agent and the value of various fluents such as hasKey and doorLocked. If we view the state-space as an m-dimensional lattice L_S, we see that neighbouring states in L_S are likely to have correlated rewards (e.g. the value of doorLocked does not matter when the treasure chest is picked up). To model this, we use an Ising prior (see [Cipra, 1987]): $P_R(R) = \frac{1}{Z} \exp\big(-J \sum_{(s', s) \in \mathcal{N}} R(s) R(s') - H \sum_s R(s)\big)$, where $\mathcal{N}$ is the set of neighbouring pairs of states in L_S and J and H are the coupling and magnetization parameters.
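For reference, a minimal sketch of the (unnormalised) log of this Ising prior; the lattice_neighbours helper, which treats states as binary feature vectors differing in exactly one fluent, is our own stand-in for the paper's lattice L_S rather than its actual game representation.

```python
import numpy as np

def log_ising_prior(R, neighbours, J=1.0, H=0.0):
    """Unnormalised log P_R(R) = -J * sum_{(s,s') in N} R(s)R(s') - H * sum_s R(s)."""
    pairwise = sum(R[s] * R[t] for s, t in neighbours)
    return -J * pairwise - H * R.sum()

def lattice_neighbours(dims):
    """Neighbour pairs on an m-dimensional binary feature lattice (stand-in for L_S).

    States are indexed by their binary feature vectors; two states are neighbours
    if they differ in exactly one feature.
    """
    n = 2 ** dims
    pairs = []
    for s in range(n):
        for bit in range(dims):
            t = s ^ (1 << bit)
            if s < t:
                pairs.append((s, t))
    return pairs

# Example: a 3-feature state space (8 states), rewards drawn at random.
rng = np.random.default_rng(0)
R = rng.normal(size=8)
print(log_ising_prior(R, lattice_neighbours(3), J=0.5, H=0.1))
```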
We tested our hypothesis by generating some adventure games (by populating dungeons with objects from a common sense knowledge base) and testing the performance of BIRL with the Ising prior versus the baseline uninformed priors. The results are in Figure 7 and show that the Ising prior does significantly better. \n Related Work The initial work on IRL was done by [Ng and Russell, 2000], while [Abbeel and Ng, 2004] studied the special case where rewards can be represented as linear combinations of features of the state space and gave a max-margin characterization of the optimal reward function. [Price and Boutilier, 2003] discusses a Bayesian approach to imitating the actions of a mentor during reinforcement learning, whereas the traditional literature on apprenticeship learning tries to mimic the behaviour of the expert directly [Atkeson and Schaal, 1997]. Outside of computer science, IRL-related problems have been studied in various guises. In the physical sciences, there is a body of work on inverse problem theory, i.e. inferring values of model parameters from observations of a physical system [Tarantola, 2005]. In control theory, [Boyd et al., 1994] solved the problem, posed by Kalman, of recovering the objective function for a deterministic linear system with quadratic costs. \n Conclusions and Future Work Our work shows that improved solutions can be found for IRL by posing the problem as a Bayesian learning task. We provided a theoretical framework and tractable algorithms for Bayesian IRL and our solutions contain more information about the reward structure than other methods. Our experiments verify that our solutions are close to the true reward functions and yield good policies for apprenticeship learning. There are a few open questions remaining: 1. Are there more informative priors that we can construct for specific IRL problems using background knowledge? 2. How well does IRL generalize? Suppose the transition function of the actor and the learner differed, how robust would the reward function or policy learned from the actor be, w.r.t. the learner's state space? \n Figure 4: Reward Loss. Figure 5: Policy Loss. Figure 6: Scatter plots of rewards sampled from the posterior and the true distribution (see text). Figure 7: Ising versus Uninformed Priors for Adventure Games.", "date_published": "n/a", "url": "n/a", "filename": "IJCAI07-416.tei.xml", "abstract": "Inverse Reinforcement Learning (IRL) is the problem of learning the reward function underlying a Markov Decision Process given the dynamics of the system and the behaviour of an expert. IRL is motivated by situations where knowledge of the rewards is a goal by itself (as in preference elicitation) and by the task of apprenticeship learning (learning policies from an expert).
In this paper we show how to combine prior knowledge and evidence from the expert's actions to derive a probability distribution over the space of reward functions. We present efficient algorithms that find solutions for the reward learning and apprenticeship learning tasks that generalize well over these distributions. Experimental results show strong improvement for our methods over previous heuristic-based approaches.", "id": "130cddd677dcaadbc4e3253e6bd17412"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Seth D Baum", "Anthony M Barrett", "Roman V Yampolskiy"], "title": "Modeling and Interpreting Expert Disagreement About Artificial Superintelligence", "text": "Introduction Artificial superintelligence (ASI) is artificial intelligence (AI) with capabilities that are significantly greater than human capabilities across a wide range of domains. If developed, ASI could have impacts that are highly beneficial or catastrophically harmful, depending on its design A hallmark of the ASI issue is disagreement among experts. Experts disagree on if ASI will be built, when it would be built, what designs it would use, and what its likely impacts would be. 1 The extent of expert disagreement speaks to the opacity of the underlying ASI issue and the general difficulty of forecasting future technologies. This stands in contrast with other major global issues, such as climate change, for which there is extensive expert agreement on the basic parameters of the issue (Oreskes 2004 ). Expert consensus does not guarantee that the issue will be addressed-the ongoing struggle to address climate change attests to this-but it does offer direction for decision making. In the absence of expert agreement, those seeking to gain an understanding of the issue must decide what to believe given the existence of the disagreement. In some cases, it may be possible to look at the nature of the disagreement and pick sides; this occurs if other sides clearly have flawed arguments that are not worth giving any credence to. However, in many cases, multiple sides of a disagreement make plausible arguments; in these cases, the thoughtful observer may wish to form a belief that in some way considers the divergent expert opinions. This paper demonstrates and discusses methodological options for modeling and interpreting expert disagreement about the risk of ASI catastrophe. The paper accomplishes this by using a new ASI risk model called ASI-PATH (Barrett and Baum 2017a; 2017b) . Expert disagreement can be modeled as differing estimates of parameters in the risk model. Given a set of differing expert parameter estimates, aggregate risk estimates can be made using weighting functions. Modeling expert disagreement within the context of a risk model is a method that has been used widely across a range of other contexts; to our knowledge this paper marks the first application of this method to ASI. The paper uses a well-documented recent disagreement between Nick Bostrom and Ben Goertzel as an illustrative example-an example that is also worthy of study in its own right. Bostrom and Goertzel are both longstanding thought leaders about ASI, with lengthy research track records and a shared concern with the societal impacts of ASI. However, in recent publications, Goertzel (2015; 2016 ) expresses significant disagreement with core arguments made by Bostrom (2014) . 
The Bostrom-Goertzel disagreement is notable because both of them are experts whose arguments about ASI can be expected to merit significant credence from the perspective of an outside observer. Therefore, their disagreement offers a simple but important case study for demonstrating the methodology of modeling and interpreting expert disagreement about ASI. The paper begins by summarizing the terms of the Bostrom-Goertzel disagreement. The paper then introduces the ASI-PATH model and shows how the Bostrom-Goertzel disagreement can be expressed in terms of ASI-PATH model parameters. The paper then presents model parameter estimates based on the Bostrom-Goertzel disagreement. The parameter estimates are not rigorously justified and instead are intended mainly for illustration and discussion purposes. Finally, the paper applies the risk modeling to a practical problem, that of AI confinement. \n The Bostrom-Goertzel disagreement Goertzel (2015; 2016) presents several disagreements with Bostrom (2014) . This section focuses on three disagreements of direct relevance to ASI risk. \n Human evaluation of AI values One disagreement is on the potential for humans to evaluate the values that an AI has. Humans would want to diagnose an AI's values to ensure that they are something that humans consider desirable (henceforth \"human-desirable\"). If humans find an AI to have human-undesirable values, they can reprogram the AI or shut it down. As an AI gains in intelligence and power, it will become more capable of realizing its values, thus making it more important that its values are humandesirable. A core point of disagreement concerns the prospects for evaluating the values of AI that have significant but still subhuman intelligence levels. Bostrom indicates relatively low prospects for success at this evaluation, whereas Goertzel indicates relatively high prospects for success. Bostrom (2014, p.116-119) posits that once an AI reaches a certain point of intelligence, it might adopt an adversarial approach. Bostrom dubs this point the \"treacherous turn\": The treacherous turn: While weak, an AI behaves cooperatively (increasingly so, as it gets smarter). When the AI gets sufficiently strongwithout warning or provocation-it strikes, forms a singleton [i.e., takes over the world], and begins directly to optimize the world according to the criteria implied by its final values. ( Goertzel provides a contrasting view, focusing on Step 2. He posits that an AI of intermediate intelligence is unlikely to successfully pretend to have humandesirable values because this would be too difficult for such an AI. Noting that \"maintaining a web of lies rapidly gets very complicated\" (Goertzel 2016, p.55), Goertzel posits that humans, being smarter and in control, would be able to see through a sub-human-level AI's \"web of lies\". Key to Goertzel's reasoning is the claim that an AI is likely to exhibit human-undesirable behavior before it (A) learns that such behavior is human-undesirable and (B) learns how to fake humandesirable behavior. Thus, Step 2 is unlikely to occurinstead, it is more likely that an AI would either have actual human-desirable values or be recognized by humans as faulty and then be reprogrammed or shut down. 
Goertzel does not name his view, so we will call it the sordid stumble: The sordid stumble: An AI that lacks humandesirable values will behave in a way that reveals its human-undesirable values to humans before it gains the capability to deceive humans into believing that it has human-desirable values. It should be noted that the distinction between the treacherous turn and the sordid stumble is about the AI itself, which is only one part of the human evaluation of the AI's values. The other part is the human effort at evaluation. An AI that is unskilled at deceiving humans could still succeed if humans are not trying hard to notice the deception, while a skilled AI could fail if humans are trying hard. Thus, this particular Bostrom-Goertzel debate covers only one part of the AI risk. However, it is still the case that, given a certain amount of human effort at evaluating an AI's values, Bostrom's treacherous turn suggests a lower chance of successful evaluation than Goertzel's sordid stumble. \n Human creation of human-desirable AI values A There is nothing paradoxical about an AI whose sole final goal is to count the grains of sand on Borcay, or to calculate the decimal expansion of pi, or to maximize the total number of paperclips that will exist in its future light cone. In fact, it would be easier to create an AI with simple goals like these than to build one that had a human-like set of values and dispositions (Bostrom 2014, p.107 ). The logic of the above passage is that creating an AI with human-desirable values is more difficult and thus less likely to occur. Goertzel (2016) , citing Sotala (2015), refers to this as the difficulty thesis: The difficulty thesis: Getting AIs to care about human values in the right way is really difficult, so even if we take strong precautions and explicitly try to engineer sophisticated beneficial goals, we may still fail (Goertzel 2016, p.60) . Goertzel (2016) discusses a Sotala (2015) argument against the difficulty thesis, which is that while human values are indeed complex and difficult to learn, AIs are increasingly capable of learning complex things.Per this reasoning, giving an AI human-desirable values is still more difficult than, say, programming it to calculate digits of pi, but it may nonetheless be a fairly straightforward task for common AI algorithms. Thus, while it would not be easy for humans to create an AI with human-desirable values, it would not be extraordinarily difficult either. Goertzel (2016) , again citing Sotala (2015), refers to this as the weak difficulty thesis: The weak difficulty thesis. It is harder to correctly learn and internalize human values, than it is to learn most other concepts. This might cause otherwise intelligent AI systems to act in ways that went against our values, if those AI systems had internalized a different set of values than the ones we wanted them to internalize. A more important consideration than the absolute difficulty of giving an AI human-desirable values is its relative difficulty compared to the difficulty of creating an AI that could take over the world. A larger relative ease of creating an AI with human-desirable values implies a higher probability that AI catastrophe will be avoided for any given level of effort put to avoiding it. There is reason to believe that the easier task is giving an AI human-desirable values. For comparison, every (or almost every) human being holds humandesirable values. 
Granted, some humans have more refined values than others, and some engage in violence or other antisocial conduct, but it is rare for someone to have pathological values like an incessant desire to calculate digits of pi. In contrast, none (or almost none) of us is capable of taking over the world. Characters like Alexander the Great and Genghis Khan are the exception, not the rule, and even they could have been assassinated by a single suicidal bodyguard. By the same reasoning, it may be easier for an AI to gain humandesirable values than it is for an AI to take over the world. This reasoning does not necessarily hold, since AI cognition can differ substantially from human cognition, but it nonetheless suggests that giving an AI humandesirable values may be the easier task. \n AI creation of human-desirable AI values A third point of discussion concerns the potential for an AI to end up with human-desirable values even though its human creators did not give it such values. If AIs tend to end up with human-desirable values, this reduces the pressure on the human creators of AI to get the AI's values right. It also increases the overall prospects for a positive AI outcome. To generalize, Bostrom proposes that AIs will tend to maintain stable values, whereas Goertzel proposes that AIs may tend to evolve values that could be more human-desirable. Bostrom's (2014) thinking on the matter centers on a concept he calls goal-content integrity: Goal-content integrity: If an agent retains its present goals into the future, then its present goals will be more likely to be achieved by its future self. This gives the agent a present instrumental reason to prevent alteration of its final goals (Bostrom 2014, p.109-110) . The idea here is that an AI would seek to keep its values intact as one means of realizing its values. At any given moment, an AI has a certain set of values and seeks to act so as to realize these values. One factor it may consider is the extent to which its future self would also seek to realize these values. Bostrom's argument is that an AI is likely to expect that its future self would realize its present values more if the future self retains the present self's values, regardless of whether those values are human-desirable. Goertzel (2016) proposes an alternative perspective that he calls ultimate value convergence: Ultimate value convergence: Nearly all superintelligent minds will converge to the same universal value system (paraphrased from Goertzel 2016, p.60). Goertzel further proposes that the universal value system will be \"centered around a few key values such as Joy, Growth, and Choice\" (Goertzel 2016, p.60). However, the precise details of the universal value system are less important than the possibility that the value system could resemble human-desirable values. This creates a mechanism through which an AI that begins with any arbitrary human-undesirable value system could tend towards human-desirable values. Goertzel does not insist that the ultimate values would necessarily be human-desirable. To the contrary, he states that \"if there are convergent 'universal' values, they are likely sufficiently abstract to encompass many specific value systems that would be abhorrent to us according to our modern human values\" (Goertzel 2016, p.60). Thus, ultimate value convergence does not guarantee that an AI would end up with human-desirable values. 
Instead, it increases the probability that an AI would end up with human-desirable values if the AI begins with human-undesirable values. Alternatively, if the AI begins with human-desirable values, then the ultimate value convergence theory could cause the AI to drift to human-undesirable values. Indeed, if the AI begins with human-desirable values, then more favorable results (from humanity's perspective) would accrue if the AI has goal-content integrity. \n The ASI-PATH model The ASI-PATH model was developed to model pathways to ASI catastrophe (Barrett and Baum 2016). ASI-PATH is a fault tree model, which means it is a graphical model with nodes that are connected by Boolean logic and point to some failure mode. For ASI-PATH, a failure mode is any event in which ASI causes global catastrophe. Fault tree models like ASI-PATH are used widely in risk analysis across a broad range of domains. A core virtue of fault trees is that, by breaking catastrophe pathways into their constituent parts, they enable more detailed study of how failures can occur and how likely they are to occur. It is often easier to focus on one model node at a time instead of trying to study all potential failure modes simultaneously. Furthermore, the fault tree's logic structure creates a means of defining and quantifying model parameters and combining them into overall probability estimates. Indeed, the three points of the Bostrom-Goertzel disagreement (human evaluation of AI values, human creation of human-desirable AI values, and AI creation of human-desirable AI values) each map to one of the ASI-PATH parameters shown in Figure 1. In Figure 1, the top node is ASI catastrophe. The left branch covers events that lead to the ASI gaining "decisive strategic advantage", defined as "a level of technological and other advantages sufficient to enable it [the AI] to achieve complete world domination" (Bostrom, 2014, p. 78). The left branch models scenarios in which an initial "seed" AI undergoes recursive self-improvement and "takes off", becoming successively more and more intelligent until it becomes an ASI. P1 is the probability that such an AI is possible in the first place. P2 is the probability that a seed AI is created and undergoes recursive self-improvement. P3 is the probability that the AI is not contained from gaining decisive strategic advantage, that is, that containment fails; the containment attempt can occur at any point in the process from seed AI to ASI. Containment is any measure that prevents a seed AI from gaining decisive strategic advantage, either by limiting recursive self-improvement or by preventing ASI from gaining decisive strategic advantage. Containment includes confinement, in which the AI's ability to affect the rest of the world is restricted (Section 5), and enforcement, in which AI(s) prevent other AI(s) from gaining decisive strategic advantage. (Barrett and Baum (2017a, p. 400) define confinement as "restrictions built into the AI's hardware or software that limit the AI's ability to affect the rest of the world so that it does not gain decisive strategic advantage". This is slightly different than the Yampolskiy (2012) definition used in Section 5. This difference does not affect the overall argument of the present paper.) The right branch of Figure 1 covers events that could lead to the ASI taking actions that are "unsafe", which is defined as actions that would result in a major global catastrophe. P4 is the probability that humans will fail to make ASI goals safe.
P5 is the probability that the ASI will not make its own goals safe. Finally, P6 is the probability that the ASI will not be deterred from acting unsafely by some other agent, potentially another AI. Because all the logic gates in Figure 1 are "AND", the probability of ASI catastrophe, P, is simply the product of the six component probabilities: $P = \prod_{n=1}^{6} P_n$ (1). For convenience, we assume {P1, P2, P6} = 1. These parameters are unrelated to the Bostrom-Goertzel disagreement as discussed in this paper. Instead, we focus on {P3, P4, P5}, for which there is significant disagreement. P3 relates to the Bostrom-Goertzel disagreement about human evaluation of AI values (Section 2.1). In general, it should be easier to contain an AI earlier in the recursive self-improvement process because at that point it has less intelligence with which it could resist containment. Therefore, one factor in P3 is the potential for human observers to determine early in the process that this particular AI should be contained. The easier it is for humans to evaluate AI values, the earlier in the process they should be able to notice which AIs should be contained, and therefore the more probable it is that containment will succeed. In other words, easier human evaluation of AI values means lower P3. P4 relates to the Bostrom-Goertzel disagreement about human creation of human-desirable AI values (Section 2.2). Human-desirable values are very likely to be safe in the sense that they would avoid major global catastrophe. While one can imagine the possibility that somehow, deep down inside, humans actually prefer global catastrophe, and thus that an AI with human-desirable values would cause catastrophe, we will omit this possibility. Instead, we assume that an AI with human-desirable values would not cause catastrophe. Therefore, the easier it is for humans to create AIs with human-desirable values, the more probable it is that catastrophe would be avoided. In other words, easier human creation of AI with human-desirable values means lower P4. P5 relates to the Bostrom-Goertzel disagreement about AI creation of human-desirable AI values (Section 2.3). We assume that the more likely it is that an AI would create human-desirable values for itself, the more probable it is that catastrophe would be avoided. In other words, more likely AI creation of AI with human-desirable values means lower P5. For each of these three variables, we define two "expert belief" variables corresponding to Bostrom's and Goertzel's positions on the corresponding issue: • P3B is the value of P3 that follows from Bostrom's position, the treacherous turn. • P3G is the value of P3 that follows from Goertzel's position, the sordid stumble. • P4B is the value of P4 that follows from Bostrom's position, the difficulty thesis. • P4G is the value of P4 that follows from Goertzel's position, the weak difficulty thesis. • P5B is the value of P5 that follows from Bostrom's position, goal-content integrity. • P5G is the value of P5 that follows from Goertzel's position, ultimate value convergence. Given estimates for each of the above "expert belief" variables, one can calculate P according to the formula: $P = \prod_{n=1}^{6} (W_{nB} P_{nB} + W_{nG} P_{nG})$ (2). In Equation 2, W is a weighting variable corresponding to how much weight one places on Bostrom's or Goertzel's position for a given variable. Thus, for example, W3B is how much weight one places on Bostrom's position for P3, i.e. how much one believes that an AI would conduct a treacherous turn.
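Equation (2) is simple enough to state directly in code; a minimal sketch (the function and argument names are ours, not the authors'):

```python
def asi_path_probability(params):
    """Eq. (2): P = prod_n (W_nB * P_nB + W_nG * P_nG).

    `params` maps each ASI-PATH parameter (P1..P6) to a tuple
    (P_nB, W_nB, P_nG, W_nG).  Setting P_nB = P_nG for parameters that are
    not in dispute recovers the plain product of Eq. (1).
    """
    p = 1.0
    for p_b, w_b, p_g, w_g in params.values():
        p *= w_b * p_b + w_g * p_g
    return p
```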
For simplicity, we assume WnB + WnG = 1 for n = {3, 4, 5}. This is to assume that for each of {P3, P4, P5}, either Bostrom or Goertzel holds the correct position. This is a significant assumption: it could turn out to be the case that they are both mistaken. The assumption is made largely for analytical and expository convenience. This much is easy. The hard part is quantifying each of the P and W variables in Equation 2 . What follows is an attempt to specify how we would quantify these variables. We estimate the P variables by relating the arguments of Bostrom and Goertzel to the variables and taking into account any additional aspects of the variables. We aim to be faithful to Bostrom's and Goertzel's thinking. We estimate the W variables by making our own (tentative) judgments about the strength of Bostrom's and Goertzel's arguments as we currently see them. Thus, the P estimations aim to represent Bostrom's and Goertzel's thinking and the W estimations represent our own thinking. Later in the paper we also explore the implications of giving both experts' arguments equal weighting (i.e., WnB = WnG = 0.5 for each n) and of giving full weighting to exclusively one of the two experts. We make no claims to having the perfect or final estimations of any of these parameters. To the contrary, we have low confidence in our current estimations, in the sense that we expect we would revise our estimations significantly in the face of new evidence and argument. But there is value in having some initial estimations to stimulate thinking on the matter. We thus present our estimations largely for sake of illustration and discussion. We invite interested readers to make their own. \n P3 and W3: containment fails The human evaluation of AI values is only one aspect of containment. Other aspects include takeoff speed (faster takeoff means less opportunity to contain AI during recursive self-improvement) and ASI containment (measures to prevent an ASI from gaining decisive strategic advantage). Therefore, the Bostrom-Goertzel disagreement about human evaluation of AI values should only produce a relatively small difference on P3. Bostrom and Goertzel may well disagree on other aspects of P3, but those are beyond the scope of this paper. Bostrom's position, the treacherous turn, corresponds to a higher probability of containment failure and thus a higher value of P3 relative to Goertzel's position, the sordid stumble. We propose a 10% difference in P3 between Bostrom and Goertzel, i.e. P3B -P3G = 0.1. The absolute magnitude of P3B and P3G will depend on various case-specific details-for example, a seed AI launched on a powerful computer is more likely to have a fast takeoff and thus less likely to be contained. For simplicity, we will use P3B = 0.6 and P3G = 0.5, while noting that other values are also possible. Regarding W3B and W3G, our current view is that the sordid stumble is significantly more plausible. We find it relevant that AIs are already capable of learning complex tasks like face recognition, yet such AIs are nowhere near capable of outwitting humans with a web of lies. Additionally, it strikes us as much more likely that an AI would exhibit human-undesirable behavior before it becomes able to deceive humans, and indeed long enough in advance to give humans plenty of time to contain the situation. Therefore, we estimate W3B = 0.1 and W3G = 0.9. 
P4 and W4: humans fail to give AI safe goals

The Bostrom-Goertzel disagreement about human creation of human-desirable AI values is relevant to the challenge of humans giving AI safe goals. Therefore, the disagreement can yield large differences in P4. Bostrom's position, the difficulty thesis, corresponds to a higher probability of humans failing to give the AI safe goals and thus a higher value of P4 relative to Goertzel's position, the weak difficulty thesis. The values of P4B and P4G will depend on various case-specific details, such as how hard humans try to give the AI safe goals. As representative estimates, we propose P4B = 0.9 and P4G = 0.4.

Regarding W4B and W4G, our current view is that the weak difficulty thesis is significantly more plausible. The fact that AIs are already capable of learning complex tasks like face recognition suggests that learning human values is not a massively intractable task. An AI would not please everyone all the time, which is impossible, but it could learn to have broadly human-desirable values and behave in broadly human-desirable ways. However, we still see potential for the complexities of human values to pose AI training challenges that go far beyond what exists for tasks like face recognition. Therefore, we estimate W4B = 0.3 and W4G = 0.7.

P5 and W5: AI fails to give itself safe goals

The Bostrom-Goertzel disagreement about AI creation of human-desirable AI values is relevant to the challenge of the AI giving itself safe goals. Therefore, the disagreement can yield large differences in P5. Bostrom's position, goal-content integrity, corresponds to a higher probability of the AI failing to give itself safe goals and thus a higher value of P5 relative to Goertzel's position, ultimate value convergence. Indeed, an AI with perfect goal-content integrity will never change its goals. For ultimate value convergence, the key factor is the relation between ultimate values and human-desirable values; a weak relation suggests a high probability that the AI will end up with human-undesirable values. Taking these considerations into account, we propose P5B = 0.95 and P5G = 0.5.

Regarding W5B and W5G, our current view is that goal-content integrity is significantly more plausible. While it is easy to imagine that an AI would not have perfect goal-content integrity, due to a range of real-world complications, we nonetheless find it compelling that this would be a general tendency of AIs. In contrast, we see no reason to believe that AIs would all converge towards some universal set of values. To the contrary, we believe that an agent's values derive mainly from its cognitive architecture and its interaction with its environment; different architectures and interactions could lead to different values. Therefore, we estimate W5B = 0.9 and W5G = 0.1.

The probability of ASI catastrophe

Table 1 summarizes the various parameter estimates in Sections 3.1-3.3. Using these estimates, recalling the assumption {P1, P2, P6} = 1, and following Equation 2 gives P = (0.1 × 0.6 + 0.9 × 0.5) × (0.3 × 0.9 + 0.7 × 0.4) × (0.9 × 0.95 + 0.1 × 0.5) ≈ 0.25. In other words, this set of parameter estimates implies an approximately 25% probability of ASI catastrophe. For comparison, giving equal weighting to Bostrom's and Goertzel's positions (i.e., setting each WB = WG = 0.5) yields P ≈ 0.26; using only Bostrom's arguments (i.e., setting each WB = 1) yields P ≈ 0.51; and using only Goertzel's arguments (i.e., setting each WG = 1) yields P = 0.1.
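As a check on the arithmetic, the short sketch below reproduces these four figures from the parameter estimates summarized in Table 1; it is illustrative only, and the variable names are ours rather than the paper's.

# Reproducing the reported figures from the Table 1 estimates.
PB = {3: 0.6, 4: 0.9, 5: 0.95}   # Bostrom-implied values of P3, P4, P5
PG = {3: 0.5, 4: 0.4, 5: 0.5}    # Goertzel-implied values of P3, P4, P5

def overall_risk(wb):
    # wb[n] is the weight on Bostrom's position for variable n; P1 = P2 = P6 = 1.
    p = 1.0
    for n in (3, 4, 5):
        p *= wb[n] * PB[n] + (1.0 - wb[n]) * PG[n]
    return p

print(round(overall_risk({3: 0.1, 4: 0.3, 5: 0.9}), 2))   # 0.25 (the authors' weights)
print(round(overall_risk({3: 0.5, 4: 0.5, 5: 0.5}), 2))   # 0.26 (equal weighting)
print(round(overall_risk({3: 1.0, 4: 1.0, 5: 1.0}), 2))   # 0.51 (Bostrom only)
print(round(overall_risk({3: 0.0, 4: 0.0, 5: 0.0}), 2))   # 0.1  (Goertzel only)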
Catastrophe probabilities of 0.1 and 0.51 may diverge by a factor of five, but both are still extremely high. Even "just" a 0.1 chance of major catastrophe could warrant extensive government regulation and/or other risk management. Thus, however much Bostrom and Goertzel may disagree with each other, they would seem to agree that ASI constitutes a major risk.

However, an abundance of caveats is required. First, the assumption {P1, P2, P6} = 1 was made without any justification. Any thoughtful estimates of these parameters would almost certainly be lower. Our intuition is that ASI from AI takeoff is likely to be possible, and ASI deterrence seems unlikely to occur, suggesting {P1, P6} ≈ 1, but that the creation of seed AI is by no means guaranteed, suggesting P2 << 1. This implies P ≈ 0.25 is likely an overestimate.

Second, the assumption that the correct position was either Bostrom's or Goertzel's was also made without any justification. They could both be wrong, or the correct position could be some amalgam of both of their positions, or an amalgam of both of their positions plus other position(s). Bostrom and Goertzel are both leading thinkers about ASI, but there is no reason to believe that their range of thought necessarily corresponds to the breadth of potential plausible thought. To the contrary, the ASI topic remains sufficiently unexplored that it is likely that many other plausible positions can be formed. Accounting for these other positions could send P to virtually any value in [0, 1].

Third, the estimates in Table 1 were made with little effort, largely for illustration and discussion purposes. Many of these estimates could be significantly off, even by several orders of magnitude. Given the product form of Equation 1, a single very low value for a weighted term Wn·Pn would also make P very low. This further implies that P ≈ 0.25 is likely an overestimate, potentially by several orders of magnitude.

Fourth, the estimates in Table 1 depend on a range of case-specific factors, including what other containment measures are used, how much effort humans put into giving the AI human-desirable values, and what cognitive architecture the AI has. Therefore, different seed AIs self-improving under different conditions would yield different values of P, potentially including much larger and much smaller values.

A practical application: AI confinement

A core motivation for analyzing ASI risk is to inform practical decisions aimed at reducing the risk. Risk analysis can help identify which actions would reduce the risk and by how much. Different assessments of the risk, such as from experts' differing viewpoints, can yield different results in terms of which actions would best reduce the risk. Given the differences observed in the viewpoints of Bostrom and Goertzel about ASI risk, it is possible that different practical recommendations could follow. To illustrate this, we apply the above risk analysis to model the effects of decisions on a proposed ASI risk reduction measure known as AI confinement:

AI confinement: The challenge of restricting an artificially intelligent entity to a confined environment from which it can't exchange information with the outside environment via legitimate or covert channels if such information exchange was not authorized by the confinement authority (Yampolskiy 2012, p. 196).

AI confinement is a type of containment and thus relates directly to the P3 (containment fails) variable in the ASI-PATH model (Figure 1). Stronger confinement makes it less likely that an AI takeoff would result in an ASI gaining decisive strategic advantage. Confinement might be achieved, for example, by disconnecting the AI from the internet and placing it in a Faraday cage. Superficially, strong confinement would seem to reduce ASI risk by reducing P3. However, strong confinement could increase ASI risk in other ways.
In particular, by limiting interactions between the AI and human populations, strong confinement could limit the AI's capability to learn human-desirable values, thereby increasing P4 (failure of human attempts to make ASI goals safe). For comparison, AIs currently learn to recognize key characteristics of images (e.g., faces) by examining large data sets of images, often guided by human trainers who help the AI correctly identify image features. Similarly, an AI may be able to learn human-desirable values by observing large data sets of human decision-making, human ethical reflection, or other phenomena, and may further improve via the guidance of human trainers. Strong confinement could limit the potential for the AI to learn human-desirable values, thus increasing P4.

Bostrom and Goertzel have expressed divergent views on confinement. Bostrom has favored strong confinement, even proposing a single international ASI project in which "the scientists involved would have to be physically isolated and prevented from communicating with the rest of the world for the duration of the project, except through a single carefully vetted communication channel" (Bostrom 2014, p. 253). Goertzel has explicitly criticized this proposal (Goertzel 2015, pp. 71-73) and instead argued that an open project would be safer, writing that "The more the AGI system is engaged with human minds and other AGI systems in the course of its self-modification, presumably the less likely it is to veer off in an undesired and unpredictable direction" (Goertzel and Pitt 2012, p. 13). Each expert would seem to be emphasizing different factors in ASI risk: P3 for Bostrom and P4 for Goertzel.

The practical question here is how strong to make the confinement for an AI. Answering this question requires resolving the tradeoff between P3 and P4. This in turn requires knowing the size of P3 and P4 as a function of confinement strength. Estimating that function is beyond the scope of this paper. However, as an illustrative consideration, suppose that it is possible to have strong confinement while still giving the AI good access to human-desirable values. For example, perhaps a robust dataset of human decisions, ethical reflections, etc. could be included inside the confinement. In this case, the effect of strong confinement on P4 may be small. Meanwhile, if there is no arrangement that could shrink the effect of confinement on P3, such that this effect remains large, then perhaps strong confinement would be better. This and other practical ASI risk management questions could be pursued in future research.

Conclusion

Estimates of the risk of ASI catastrophe can depend heavily on which expert makes the estimate. A neutral observer should consider arguments and estimates from all available experts and any other sources of information. This paper analyzes ASI catastrophe risk using arguments from two experts, Nick Bostrom and Ben Goertzel. Applying their arguments to an ASI risk model, we calculate that their respective ASI risk estimates vary by a factor of five: P ≈ 0.51 for Bostrom and P = 0.1 for Goertzel. Our estimate, combining both experts' arguments, is P ≈ 0.25. Weighting both experts equally gives a similar result of P ≈ 0.26. These numbers come with many caveats and should be used mainly for illustration and discussion purposes. More carefully considered estimates could easily be much closer to either 0 or 1.
These numbers are interesting, but they are not the only important part, or even the most important part, of this analysis. There is greater insight to be obtained from the details of the analysis than from the ensuing numbers. This is especially the case for this analysis of ASI risk, because the numbers are so tentative and the underlying analysis so comparatively rich.

This paper is just an initial attempt to use expert judgment to quantify ASI risk. Future research can and should do the following: examine Bostrom's and Goertzel's arguments in greater detail so as to inform the risk model's parameters; consider arguments and ideas from a wider range of experts; conduct formal expert surveys to elicit expert judgments of risk model parameters; explore different weighting techniques for aggregating across expert judgment, as well as circumstances in which weighted aggregation is inappropriate; conduct sensitivity analysis across spaces of possible parameter values, especially in the context of the evaluation of ASI risk management decision options; and do all of this for a wider range of model parameters, including {P1, P2, P6} as well as more detailed components of {P3, P4, P5}, such as those modeled in Barrett and Baum (2017a, 2017b). Future research can also explore the effect on overall ASI risk when multiple ASI systems are launched: perhaps some would be riskier than others, and it may be important to avoid catastrophe from all of them.

One overarching message of this paper is that more detailed and rigorous analysis of ASI risk can be achieved when the risk is broken into constituent parts and modeled, such as in Figure 1. Each component of ASI risk raises a whole host of interesting and important details that are worthy of scrutiny and debate. Likewise, aggregate risk estimates are better informed and generally more reliable when they are made from detailed models. To be sure, it is possible for models to be too detailed, burdening experts and analysts with excessive minutiae. However, given the simplicity of the risk models at this early stage of ASI risk analysis, we believe that, at this time, more detail is better.

A final point is that the size of ASI risk depends on many case-specific factors that in turn depend on many human actions. This means that the interested human actor has a range of opportunities available for reducing the probability of ASI catastrophe. Risk modeling is an important step towards identifying which opportunities are most effective at reducing the risk. ASI catastrophe is by no means a foregone conclusion. The ultimate outcome may well be in our hands.

Acknowledgement

We thank Ben Goertzel, Miles Brundage, Kaj Sotala, Steve Omohundro, Allan Dafoe, Stuart Armstrong, Ryan Carey, Nell Watson, and Matthijs Maas for helpful comments on an earlier draft. Any remaining errors are the authors' alone. Work for this paper is funded by Future of Life Institute grant 2015-143911. The views in this paper are those of the authors and do not necessarily reflect the views of the Global Catastrophic Risk Institute or the Future of Life Institute.

Figure 1: ASI catastrophe fault tree. Adapted from Barrett and Baum (2017a).

… (Bostrom 2014, p. 119). Such an AI would not have durable values, in the sense that it would go from acting in human-desirable ways to acting in human-undesirable ways. A key detail of the treacherous turn theory is that the AI has values that are similar to, but ultimately different from, human-desirable values.
As the AI gains intelligence, it goes through a series of stages:

1. At low levels of intelligence, the AI acts in ways that humans consider desirable. At this stage, the differences between the AI's values and human values are not important, because the AI can only complete simple tasks that are human-desirable.
2. At an intermediate level of intelligence, the AI realizes that its values differ from human-desirable values and that if it tried deviating from human-desirable values, humans would reprogram the AI or shut it down. Furthermore, the AI discovers that it can successfully pretend to have human-desirable values until it is more intelligent.
3. At a high level of intelligence, the AI takes control of the world from humanity so that humans cannot reprogram it or shut it down, and then pursues its actual, human-undesirable values.

Table 1: Summary of parameter estimates in Sections 3.1-3.3.

n | PB   | PG  | WB  | WG
3 | 0.6  | 0.5 | 0.1 | 0.9
4 | 0.9  | 0.4 | 0.3 | 0.7
5 | 0.95 | 0.5 | 0.9 | 0.1

On expert opinion of ASI, see Baum et al. (2011), Armstrong and Sotala (2012), Armstrong et al. (2014), and Müller and Bostrom (2014).
We use the term \"transformative AI\" to describe a range of possible advances with potential to impact society in significant and hard-to-reverse ways [6] . For example, future machine learning systems could be used to optimise management of safety-critical infrastructure [7] . Advanced language models could be used in ways that corrupt our online information ecosystem [8] and future advances in AI systems could trigger widespread labour automation [9] . There is an urgent need to develop anticipatory governance approaches to AI development and deployment. As AI advances, its impacts on society will become more profound, and some harms may be too great to rely on purely 'reactive' or retrospective governance. Anticipating future impacts is a challenging task. Experts show substantial disagreement about when different advances in AI capabilities should be expected [10] , [11] . Policy-makers face challenges in keeping pace with technological progress: it is difficult to foresee impacts before a technology is deployed, but after deployment it may already be too late to shape impacts, and some harm may already have been done [12] . Ideally, we would focus preventative, anticipatory efforts on applications which are close enough to deployment to be meaningfully influenced today, but whose impacts we are not already seeing. Finding 'early warning signs' of transformative AI applications can help us to do this. Early warning signs can also help democratise AI development and governance. They can provide time and direction for much-needed public discourse about what we want and do not want from AI. It is not enough for anticipatory governance to look out for supposedly 'inevitable' future impacts. We are not mere bystanders in this AI revolution: the futures we occupy will be futures of our own making, driven by the actions of technology developers, policymakers, civil society and the public. In order to prevent foreseeable harms towards those people who bear the effects of AI deployments, we must find ways for AI developers to be held accountable to the society which they are embedded in. If we want AI to benefit society broadly, we must urgently find ways to give democratic control to those who will be impacted. Our aim with identifying early warning signs is to develop anticipatory methods which can prompt a focussed civic discourse around significant developments and provide a wider range of people with the information they need to contribute to conversations about the future of AI. We present a methodology for identifying early warning signs of potentially transformative impacts of AI and discuss how these can feed into more anticipatory and democratic governance processes. We call these early warning signs 'canaries' based on the practice of using canaries to provide early warnings of unsafe air pollution in coal mines in the industrial revolution. Others before us have used this term in the context of AI to stress the importance of early warning signs [13] , [14] , but this is the first attempt to outline in detail how such 'artificial canaries' might be identified and used. Our methodology is a prototype but we believe it provides an important first step towards assessing and then trialling the feasibility of identifying canaries. We first present the approach and then illustrate it on two high-level examples, in which we identify preliminary warning signs of AI applications that could undermine democracy, and warning signs of progress towards high-level machine intelligence (HLMI). 
We explain why early warning signs are needed by drawing on the literature of participatory technology assessments, and we discuss the advantages and practical challenges of this method in the hope of preparing future research that might attempt to put this method into practise. Our theoretical exploration of a method to identify early warning signs of transformative applications provides a foundation towards more anticipatory, accountable and democratic governance of AI in practice. \n II. Related Work We rely on two main bodies of work. Our methodology for identifying canaries relies on the literature on forecasting and monitoring AI. Our suggestions for how canaries might be used once identified build on work on participatory technology assessments, which stresses a more inclusive approach to technology governance. While substantial research exists in both these areas, we believe this is the first piece of work that shows how they could feed into each other. \n A. AI Forecasting and Monitoring Over the past decade, an increasing number of studies have attempted to forecast AI progress. They commonly use expert elicitations to generate probabilistic estimates for when different AI advances and milestones will be achieved [10] , [15] - [17] . For example, [16] ask experts about when specific milestones in AI will be achieved, including passing the Turing Test or passing third grade. Both [15] and [10] ask experts to predict the arrival of high-level machine intelligence (HLMI), which the latter define as when \"unaided machines can accomplish every task better and more cheaply than human workers\". However, we should be cautious about giving results from these surveys too much weight. These studies have several limitations, including the fact that the questions asked are often ambiguous, that expertise is narrowly defined, and that respondents do not receive training in quantitative forecasting [11] , [18] . Experts disagree substantially about when crucial capabilities will be achieved [10] , but these surveys cannot tell us who (if anyone) is more accurate in their predictions. Issues of accuracy and reliability aside, forecasts focused solely on timelines for specific events are limited in how much they can inform our decisions about AI today. While it is interesting to know how much experts disagree on AI progress via these probabilistic estimates, they cannot tell us why experts disagree or what would change their minds. Surveys tell us little about what early warning signs to look out for or where we should place our focus today to shape the future development and impact of AI. At the same time, several projects, e.g. [19] - [22] , have begun to track and measure progress in AI. These projects focus on a range of indicators relevant to AI progress, but do not make any systematic attempt to identify which markers of progress are more important than others for the preparation of transformative applications. Time and attention for tracking progress is limited and it would be helpful if we were able to prioritise and monitor those research areas that are most relevant to mitigating risks. Recognising some of the limitations of existing work, [23] aims for a more holistic approach to AI forecasting. This framework emphasises the use of the Delphi technique [24] to aggregate different perspectives of a group of experts, and cognitive mapping methods to study how different milestones relate to one another, rather than to simply forecast milestones in isolation. 
We agree that such methods might address some limitations of previous work in both AI forecasting and monitoring. AI forecasting has focused on timelines for particularly extreme events, but these timelines are subject to enormous uncertainty and do not indicate near-term warning signs. AI measurement initiatives have the opposite limitation: they focus on near-term progress, but with little systematic reflection on which avenues of progress are, from a governance perspective, more important to monitor than others. What is needed are attempts to identify areas of progress today that may be particularly important to pay attention to, given concerns about the kinds of transformative AI systems that may be possible in future. \n B. Participatory Technology Assessments Presently, the impacts of AI are largely shaped by a small group of powerful people with a narrow perspective which can be at odds with public interest [25] . Only a few powerful actors, such as governments, defence agencies, and firms the size of Google or Amazon, have the resources to conduct ambitious research projects. Democratic control over these research projects is limited. Governments retain discretion over what gets regulated, large technology firms can distort and avoid policies via intensive lobbying [26] and defence agencies may classify ongoing research. Recognising these problems, a number of initiatives over the past few years have emphasised the need for wider participation in the development and governance of AI [27] - [29] . In considering how best to achieve this, it is helpful to look to the field of science and technology studies (STS) which has long considered the value of democratising research progress [30] , [31] . Several publications refer to the 'participatory turn' [32] in STS and an increasing interest in the role of the non-expert in technology development and assessment [27] . More recently, in the spirit of \"democratic experimentation\" [33] , various methods for civic participation have been developed and trialled, including deliberative polls, citizen juries and scenario exercises [33] . With a widening conception of expertise, a large body of research on \"participatory technology assessment\" (PTA) has emerged, aiming to examine how we might increase civic participation in how technology is developed, assessed and rolled out. We cannot summarise this wideranging and complex body of work fully here. But we point towards some relevant pieces for interested readers to begin with. [34] and [35] present a typology of the methods and goals of participating, which now come in many forms. This means that assessments of the success of PTAs are challenging [33] and ongoing because different studies evaluate different PTA processes against different goals [34] . Yet while scholars recognise remaining limitations of PTAs [31] , several arguments for their advantages have been brought forward, ranging from citizen agency to consensus identification and justice. There are good reasons to believe that non-experts possess relevant end-user expertise. They often quickly develop the relevant subjectmatter understanding to contribute meaningfully, leading to better epistemic outcomes due to a greater diversity of views which result in a cancellation of errors [36] , [37] . To assess the performance of PTAs scholars draw from case studies and identify best practices [38] - [40] . 
There is an important difference between truly participatory, democratically minded, technology assessments, and consultations that use the public to help legitimise a preconceived technology [41] . The question of how to make PTAs count in established representational democracies is an ongoing challenge to the field [31] , [33] . But [42] , who present a recent example of collective technology policy-making, show that success and impact with PTAs is possible. [40] draw from 38 international case studies to extract best practices, building on [38] , who showcase great diversity of possible ways in which to draw on the public. Comparing different approaches is difficult, but has been done [39] , [43] . [41] present a conceptual framework with which to design and assess PTAs, [44] compares online versus offline methodologies and in [35] we find a typology of various design choices for public engagement mechanisms. See also [45] for a helpful discussion on how to determine the diversity of participants, [46] on what counts as expertise in foresight and [30] , [32] , [47] for challenges to be aware of in implementing PTAs. Many before us have noted that we need wider participation in the development and governance of AI, including by calling for the use of PTAs in designing algorithms [48] , [49] . We see a need to go beyond greater participation in addressing existing problems with algorithms and propose that wider participation should also be considered in conversations about future AI impacts. Experts and citizens each have a role to play in ensuring that AI governance is informed by and inclusive of a wide range of knowledge, concerns and perspectives. However, the question of how best to marry expert foresight and citizen engagement is a challenging one. While a full answer to this question is beyond the scope of this paper, what we do offer is a first step: a proposal for how expert elicitation can be used to identify important warnings which can later be used to facilitate timely democratic debate. For such debates to be useful, we first need an idea of which developments on the horizon can be meaningfully assessed and influenced, for which it makes sense to draw on public expertise and limited attention. This is precisely what our method aims to provide. \n III. Identifying Early Warning Signs We believe that identifying canaries for transformative AI is a tractable problem and worth investing research effort in today. Engineering and cognitive development present a proof of principle: capabilities are achieved sequentially, meaning that there are often key underlying capabilities which, if attained, unlock progress in many other areas. For example, musical protolanguage is thought to have enabled grammatical competence in the development of language in homo sapiens [50] . AI progress so far has also seen such amplifiers: the use of multi-layered non-linear learning or stochastic gradient descent arguably laid the foundation for unexpectedly fast progress on image recognition, translation and speech recognition [51] . By mapping out the dependencies between different capabilities needed to reach some notion of transformative AI, therefore, we should be able to identify milestones which are particularly important for enabling many others -these are our canaries. 
The proposed methodology is intended to be highly adaptable and can be used to identify canaries for a number of important potentially transformative events, such as foundational research breakthroughs or the automation of tasks that affect a wide range of jobs. Many types of indicators could be of interest and classed as canaries, including: algorithmic innovation that supports key cognitive faculties (e.g., natural language understanding); overcoming known technical challenges (such as improving the data efficiency of deep learning algorithms); or improved applicability of AI to economically relevant tasks (e.g. text summarization).

Given an event for which we wish to identify canaries, our methodology has three essential steps: (1) identifying key milestones towards the event; (2) identifying dependency relations between these milestones; and (3) identifying milestones which underpin many others as canaries. See Fig. 1 for an illustration. We here deliberately refrain from describing the method with too much specificity, because we want to stress the flexibility of our approach, and recognise that there is currently no one-size-fits-all approach in forecasting. The method will require adaptation to the particular transformative event in question, but each step of the method can accommodate such specification. We outline example adaptations of the method to particular cases below.

A. Identifying Milestones Via Expert Elicitation

The first step of our methodology involves using traditional approaches in expert elicitation to identify milestones that may be relevant to the transformative event in question. Which experts are selected is crucial to the outcome and reliability of studies in AI forecasting. There are unavoidable limitations to using any form of subjective judgement in forecasting, but these limitations can be minimised by carefully thinking through the group selection. Both the direct expertise of individuals, and how they contribute to the diversity of the overall group, must be considered. See [46] for a discussion of who counts as an expert in forecasting.

Researchers should decide in advance what kinds of expertise are most relevant and must be combined to study the milestones that relate to the transformative event. Milestones might include technical limitations of current methods (e.g. adversarial attacks) and informed speculation about future capabilities (e.g. common sense) that may be important prerequisites to the transformative event. Consulting across a wide range of academic disciplines to order such diverse milestones is important. For example, a cohort of experts identifying and ordering milestones towards HLMI should include not only experts in machine learning and computer science but also cognitive scientists, philosophers, developmental psychologists, evolutionary biologists, or animal cognition experts. Such a group combines expertise on current capabilities in AI with expertise on key pillars of cognitive development and the order in which cognitive faculties develop in animals. Groups which are diverse (on multiple dimensions) are expected to produce better epistemic outcomes [37], [52]. We encourage the careful design and phrasing of questions to enable participants to make use of their expertise, but refrain from demanding answers that lie outside their area of expertise.
For example, asking machine learning researchers directly for milestones towards HLMI does not draw on their expertise. But asking machine learning researchers about the limitations of the methods they use every day, or asking psychologists what human capacities they see lacking in machines today, draws directly on their day-to-day experience. Perceived limitations can then be transformed into milestones.

There are several different methods available for expert elicitation, including surveys, interviews, workshops and focus groups, each with advantages and disadvantages. Interviews provide greater opportunity to tailor questions to the specific expert, but can be time-intensive compared to surveys and reduce the sample size of experts. If possible, some combination of the two may be ideal: using carefully selected semi-structured interviews to elicit initial milestones, followed up with surveys of a much broader group to validate which milestones are widely accepted as being key.

B. Mapping Causal Relations Between Milestones

The second step of our methodology involves convening experts to identify causal relations between identified milestones: that is, how milestones may underpin, depend on, or affect progress towards other milestones. Experts should be guided in generating directed causal graphs, a type of cognitive map that elicits a person's perceived causal relations between components. Causal graphs use arrows to represent perceived causal relations between nodes, which in this case are milestones [53]. This process primarily focuses on finding out whether or not a relationship exists at all; how precisely this relationship is specified can be adapted to the goals of the study. An arrow from A to B at minimum indicates that progress on A will allow for further progress on B. But this relationship can also be made more precise: in some cases indicating that progress on A is necessary for progress on B, for example. The relationship between nodes may be either linear or nonlinear; again, this can be specified more precisely if needed or known.

Constructing and debating causal graphs can "help groups to convert tacit knowledge into explicit knowledge" [53]. Causal graphs are used as decision support for individuals or groups, and are often used to solve problems in policy and management involving complex relationships between components in a system by tapping into experts' mental models and intuitions. We therefore suggest that causal graphs are particularly well-suited to eliciting experts' models and assumptions about the relationships between different milestones in AI development. As a method, causal graphs are highly flexible and can be adapted to the preferred level of detail for a given study: they can be varied in complexity and can be analysed both quantitatively and qualitatively [54], [55]. We neither exclude nor favour quantitative approaches here, due to the complexity and uncertainty of the questions around transformative events. Particularly for very high-level questions, quantitative approaches might not offer much advantage and might communicate a false sense of certainty. In narrower domains where there is more existing evidence, however, quantitative approaches may help to represent differences in the strength of relationships between milestones. [56] notes that there are no ready-made designs that will fit all studies: the design and analysis of causal mapping procedures must be matched to a clear theoretical context and the goal of the study.
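As a concrete (and purely illustrative) way to record such a graph, each elicited arrow can be stored as a directed edge in an adjacency mapping, where an entry A -> B encodes the judgement that progress on milestone A would allow for further progress on milestone B. The sketch below is ours; the milestone names are borrowed from the paper's later HLMI illustration purely as placeholders.

# A directed causal graph of milestones, stored as an adjacency mapping.
# An arrow A -> B means: progress on A is expected to enable further progress on B.
causal_graph = {
    "abstract representation": {"concept formation", "grammar"},
    "flexible memory": {"continual learning", "dynamic data"},
    "concept formation": {"positing unobservables"},
}

def enabled_by(graph, milestone):
    # Milestones whose progress this milestone is judged to unlock.
    return graph.get(milestone, set())

print(sorted(enabled_by(causal_graph, "flexible memory")))  # ['continual learning', 'dynamic data']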
We highlight a number of different design choices which can be used to adapt the process. As more studies use causal graphs in expert elicitations about AI developments, we can learn from the success of different design choices over time and identify best practices. [53] stress that interviews or collective brainstorming are the most accepted method for generating the data upon which to analyse causal relations. [57] list heuristics on how to manage the procedure of combining graphs by different participants, or see [58] for a discussion on evaluating different options presented by experts. [59] suggest visual, interactive tools to aid the process. [56] and [60] discuss approaches to analysing graphs and extracting the emergent properties, significant 'core' nodes as well as hierarchical clusters. Core or \"potent\" nodes are those that relate to many clusters in the graphs and thus have implications for connected nodes. In our proposed methodology, such potent nodes play a central role in pointing to canary milestones. For more detail on the many options on how to generate, analyse and use causal graphs we refer the reader to the volume of [57] , or reviews such as [53] , [59] . See [55] for an example of applying cognitive mapping to expert views on UK public policies; and [61] for group problem solving with causal graphs. We propose that identified experts be given instruction in generating either an individual causal graph, after which a mediated discussion between experts generates a shared graph; or that the groups of experts as a whole generates the causal graph via argumentation, visualisations and voting procedures if necessary. As [62] emphasises, any group of experts will have both shared and conflicting assumptions, which causal graphs aim to integrate in a way that approaches greater accuracy than that contained in any single expert viewpoint. The researchers are free to add as much detail to the final maps as required or desired. Each node can be broken into subcomponents or justified with extensive literature reviews. \n C. Identifying Canaries Finally, the resulting causal graphs can be used to identify nodes of particular relevance for progress towards the transformative event in question. This can be a node with a high number of outgoing arrows, i.e. milestones which unlock many others that are prerequisites for the event in question. It can also be a node which functions as a bottleneck -a single dependency node that restricts access to a subsequent highly significant milestone. See Fig. 2 for an illustration. Progress on these milestones can thus represent a 'canary', indicating that further advances in subsequent milestones will become possible and more likely. These canaries can act as early warning signs for potentially rapid and discontinuous progress, or may signal that applications are becoming ready for deployment. Experts identify nodes which unlock or provide a bottleneck for a significant number of other nodes (some amount of discretion from the experts/conveners will be needed to determine what counts as 'significant'). Of course, in some cases generating these causal graphs and using them to identify canaries may be as complicated as a full scientific research project. The difficulty of estimating causal relationships between future technological advances must not be underestimated. 
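The combination and 'potent node' steps described above could be instantiated in many ways; the following sketch shows one simple possibility, which is ours rather than the authors': individual experts' edge sets are merged by majority vote, and candidate canaries are flagged either for a high out-degree or for being the sole prerequisite of some other milestone (whether that milestone is 'highly significant' is left to expert judgement).

from collections import Counter, defaultdict

def merge_by_vote(expert_graphs, min_votes):
    # Keep an edge (A, B) if at least min_votes experts drew it.
    votes = Counter(edge for graph in expert_graphs for edge in graph)
    return {edge for edge, count in votes.items() if count >= min_votes}

def candidate_canaries(edges, out_degree_threshold=2):
    children, parents = defaultdict(set), defaultdict(set)
    for a, b in edges:
        children[a].add(b)
        parents[b].add(a)
    # Nodes with many outgoing arrows...
    high_out = {n for n, kids in children.items() if len(kids) >= out_degree_threshold}
    # ...and bottlenecks: sole prerequisites of some subsequent milestone.
    bottlenecks = {next(iter(ps)) for ps in parents.values() if len(ps) == 1}
    return high_out | bottlenecks

# Toy usage with placeholder milestones A-D drawn by three hypothetical experts.
expert_graphs = [
    {("A", "B"), ("A", "C"), ("B", "D")},
    {("A", "B"), ("A", "C")},
    {("A", "B"), ("B", "D")},
]
shared = merge_by_vote(expert_graphs, min_votes=2)
print(candidate_canaries(shared))  # {'A', 'B'}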
However, we believe it to be the case that each individual researcher already does this to some extent, when they chose to prioritise a research project, idea or method over another within a research paradigm. Scientists also debate the most fruitful and promising research avenues and arguably place bets on implicit maps of milestones as they pick a research agenda. The idea is not to generate maps that provide a perfectly accurate indication of warning signs, but to use the wisdom of crowds to make implicit assumptions explicit, creating the best possible estimate of which milestones may provide important indications of future transformative progress. \n IV. Using Early Warning Signs Once identified, canary milestones can immediately help to focus existing efforts in forecasting and anticipatory governance. Given limited resources, early warning signs can direct governance attention to areas of AI progress which are soon likely to impact society and which can be influenced now. For example, if progress in a specific area of NLP (e.g. sentiment analysis) serves as a warning sign for the deployment of more engaging social bots to manipulate voters, policymakers and regulators can monitor or regulate access and research on this research area within NLP. We can also establish research and policy initiatives to monitor and forecast progress towards canaries. Initiatives might automate the collection, tracking and flagging of new publications relevant to canary capabilities, and build a database of relevant publications. They might use prediction platforms to enable collective forecasting of progress towards canary capabilities. Foundational research can try to validate hypothesised relationships between milestones or illuminate the societal implications of different milestones. These forecasting and tracking initiatives can be used to improve policy prioritisation more broadly. For example, if we begin to see substantial progress in an area of AI likely to impact jobs in a particular domain, policymakers can begin preparing for potential unemployment in that sector with greater urgency. However, we believe the value of early warning signs can go further and support us in democratising the development and deployment of AI. Providing opportunities for participation and control over policy is a fundamental part of living in a democratic society. It may be especially important in the case of AI, since its deployment might indeed transform society across many sectors. If AI applications are to bring benefits across such wide-ranging contexts, AI deployment strategies must consider and be directed by the diverse interests found across those sectors. Interests which are underrepresented at technology firms are otherwise likely to bear the negative impacts. There is currently an information asymmetry between those developing AI and those impacted by it. Citizens need better information about specific developments and impacts which might affect them. Public attention and funding for deliberation processes is not unlimited, so we need to think carefully about which technologies to direct public attention and funding towards. Identifying early warning signs can help address this issue, by focusing the attention of public debate and directing funding towards deliberation practises that centre around technological advancements on the horizon. We believe early warning signs may be particularly well-suited to feed into participatory technology assessments (PTAs), as introduced earlier. 
Early warning signs can provide a concrete focal point for citizens and domain experts to collectively discuss concerns. Having identified a specific warning sign, various PTA formats could be suited to consult citizens who are especially likely to be impacted. PTAs come in many forms and a full analysis of which design is best suited to assessing particular AI applications is beyond the scope of this article. But the options are plenty and PTAs show much potential (see section 2). For example, Taiwan has had remarkable success and engagement with an open consultation of citizens on complex technology policy questions [42] . An impact assessment of PTA is not a simple task, but we hypothesise that carefully designed, inclusive PTAs would present a great improvement over how AI is currently developed, deployed and governed. Our suggestion is not limited to governmental bodies. PTAs or other deliberative processes can be run by research groups and private institutions such as AI labs, technology companies and think tanks who are concerned with ensuring AI benefits all of humanity. \n V. Method Illustrations We outline two examples of how this methodology could be adapted and implemented: one focused on identifying warning signs of a particular societal impact, the other on warning signs of progress towards particular technical capabilities. Both these examples pertain to high-level, complex questions about the future development and impacts of AI, meaning our discussion can only begin to illustrate what the process of identifying canaries would look like, and what questions such a process might raise. Since the results are only the suggestions of the authors of this paper, we do not show a full implementation of the method whose value lies in letting a group of experts deliberate. As mentioned previously, the work of generating these causal maps will often be a research project of its own, and we will return later to the question of what level of detail and certainty is needed to make the resulting graphs useful. \n A. First Illustration: AI Applications in Voter Manipulation We show how our method could identify warning signs of the kind of algorithmic progress which could improve the effectiveness of, or reduce the cost of, algorithmic election manipulation. The use of algorithms in attempts to manipulate election results incur great risk for the epistemic resilience of democratic countries [63] - [65] . Manipulations of public opinion by national and commercial actors are not a new phenomenon. [66] details the history of how newly emerging technologies are often used for this purpose. But recent advances in deep learning techniques, as well as the widespread use of social media, have introduced easy and more effective mechanisms for influencing opinions and behaviour. [8] and [67] detail the various ways in which political and commercial actors incur harm to the information ecosystem via the use of algorithms. Manipulators profile voters to identify susceptible targets on social media, distribute micro-targeted advertising, spread misinformation about policies of the opposing candidate and try to convince unwanted voters not to vote. Automation plays a large role in influencing online public discourse. Publications like [68] , [69] note that manipulators use both human-run accounts and bots [70] or a combination of the two [71] . 
Misinformation [72] and targeted messaging [73] can have transformative implications for the resilience of democracies and the very possibility of collective action [74], [75]. Despite attempts by national and sub-national actors to apply algorithms to influence elections, their impact so far has been contested [76]. Yet foreign actors and national political campaigns will continue to have incentives and substantial resources to invest in such campaigns, suggesting their efforts are unlikely to wane in future. We may thus inquire what kinds of technological progress would increase the risk that elections can be successfully manipulated.

We can begin this inquiry by identifying what technological barriers currently prevent full-scale election manipulation. We would identify those technological limitations by drawing on the expertise of actors who are directly affected by these bottlenecks. Those might be managers of online political campaigns and foreign consulting firms (as described in [8]), who specialise in influencing public opinion via social media, or governmental organisations across the world who comment on posts, target individual influencers and operate fake accounts to uphold and spread particular beliefs. People who run such political cyber campaigns have knowledge of what technological bottlenecks still constrain their influence on voter decisions. We recommend running a series of interviews to collect a list of limitations. This list might include, for example, that the natural language functionality of social bots is a major bottleneck for effective online influence (for the plausibility of this being an important technical factor, see [8]). Targeted users often disengage from a chat conversation after detecting that they are exchanging messages with social bots. Low retention time is presumably a bottleneck for further manipulation, which suggests that improvements in natural language processing (NLP) would significantly reduce the cost of manipulation as social bots become more effective. We will assume, for the purpose of this illustration, that NLP is identified as a key bottleneck.

We would then seek to gather experts (e.g. in a workshop) who can identify and map milestones (or current limitations) in NLP likely to be relevant to improving the functionality of social bots. This will include machine learning experts who specialise in NLP and understand the technical barriers to developing more convincing social bots, as well as experts in developmental linguistics and evolutionary biology, who can determine suitable benchmarks and the required skills, and who understand the order in which linguistic skills are usually developed in animals. From these expert elicitation processes we would acquire a list of milestones in NLP which, if achieved, would likely lower the cost and increase the effectiveness of online manipulation. Experts would then order milestones into a causal graph of dependencies. Given the interdisciplinary nature of the question at hand, we suggest in this case that the graph should be developed directly by the whole group. A mediated discussion in a workshop context can help to draw out different connections between milestones and the reasoning behind them, ensuring participants do not make judgements outside their range of expertise. A voting procedure such as majority voting should be used if no consensus can be reached.
In a final step, experts can highlight milestone nodes in the final graph which either have many outgoing arrows or are bottlenecks for a series of subsequent nodes that are not accessible via an alternative pathway. These (e.g. sentiment analysis) are our canaries: areas of progress which serve as a warning sign of NLP being applied more effectively in voter manipulation. Having looked at how this methodology can be used to identify warning signs of a specific societal impact, we next illustrate a different application of the method in which we aim to identify warning signs of a research breakthrough.

B. Second Illustration: High-Level Machine Intelligence

We use this second example to illustrate in more detail what the process of developing a causal map might look like once initial milestones have been identified, and how canary capabilities can be identified from the map. We define high-level machine intelligence (HLMI) as an AI system (or collection of AI systems) that performs at the level of an average human adult on key cognitive measures required for economically relevant tasks. We choose to focus on HLMI since it is a milestone which has been the focus of previous forecasting studies [10], [15], and which, despite the ambiguity and uncertain nature of the concepts, is interesting to attempt to examine, because it is likely to precipitate widely transformative societal impacts.

To trial this method, we used interview results from [11]. Twenty-five experts from a diverse set of disciplines (including computer science, cognitive science and neuroscience) were interviewed and asked what they believed to be the main limitations preventing current machine learning methods from achieving the capabilities of HLMI. These limitations can be translated into 'milestones': capabilities experts believe machine learning methods need to achieve on the path to HLMI, i.e. the output of step 1 of our methodology.

Having identified key milestones, step 2 of our methodology involves exploring dependencies between them using causal graphs. We use the software VenSim to illustrate hypothesised relationships between milestones (see Fig. 2). For example, we hypothesise that the ability to formulate, comprehend and manipulate abstract concepts may be an important prerequisite to the ability to account for unobservable phenomena, which is in turn important for reasoning about causality. This map of causal relations and dependencies was constructed by the authors alone, and is therefore far from definitive, but it provides a useful illustration of the kind of output this methodology can produce. Based on this causal map, we can identify three candidates for canary capabilities:

• Representations that allow variable-binding and disentanglement: the ability to construct abstract, discrete and disentangled representations of inputs, to allow for efficiency and variable-binding. We hypothesise that this capability underpins several others, including grammar, mathematical reasoning, concept formation, and flexible memory.
• Flexible memory: the ability to store, recognise, and re-use memory and knowledge representations. We hypothesise that this ability would unlock many others, including the ability to learn from dynamic data, to learn in a continual fashion, and to update old interpretations of data as new information is acquired.
• Positing unobservables: the ability to recognise and use unobservable concepts that are not represented in the visual features of a scene, including numerosity or intentionality.
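To show how the hypothesised dependencies point to these three candidates, the toy sketch below encodes only the relations stated in this section and in the appendix, and ranks milestones by the number of outgoing arrows. It is our shorthand for the kind of structure shown in Fig. 2, not a reproduction of it, and the mapping of capability names to node labels is ours.

# Hypothesised dependencies transcribed (partially) from the text and appendix.
hypothesised_edges = {
    "representation / variable-binding": ["grammar", "mathematical reasoning",
                                          "concept formation", "flexible memory"],
    "flexible memory": ["dynamic data", "continual learning",
                        "reinterpretation of old data", "analogical reasoning"],
    "concept formation": ["positing unobservables"],
    "positing unobservables": ["causal reasoning"],  # sole route to causal reasoning here
}

# Rank milestones by out-degree, a simple proxy for 'potent' nodes.
for node, targets in sorted(hypothesised_edges.items(), key=lambda kv: -len(kv[1])):
    print(f"{node}: {len(targets)} outgoing arrow(s)")

In this partial encoding, representation/variable-binding and flexible memory stand out by out-degree, while positing unobservables is flagged as a bottleneck: in the relations listed here it is the only route to causal reasoning.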
We might tentatively suggest that these are important capabilities to track progress on from the perspective of anticipating HLMI. \n VI. Discussion and Future Directions As the two illustrative examples show, there are many complexities and challenges involved in putting this method into practice. One particular challenge is that there is likely to be substantial uncertainty in the causal graphs developed. This uncertainty can come in many forms. Milestones that are not well understood are likely to be composed of several sub-milestones. As more research is produced, the graph will be in need of revision. Some such revisions may include the addition of connections between milestones that were previously not foreseen, which in turn might alter the number of outgoing connections from nodes and turn them into potent nodes, i.e. 'canaries'. The process of involving a diversity of experts in a multi-stage, collaborative process is designed to reduce this uncertainty by allowing for the identification of nodes and relationships that are widely agreed upon and so more likely to be robust. However, considerable uncertainty will inevitably remain due to the nature of forecasting. The higher the level of abstraction and ambiguity in the events studied (like events such as HLMI, which we use for our illustration) the greater the uncertainty inherent in the map and the less reliable the forecasts will likely be. It will be important to find ways to acknowledge and represent this uncertainty in the maps developed and conclusions drawn from them. This might include marking uncertainties in the graph and taking this into account when identifying and communicating 'canary' nodes. Given the uncertainty inherent in forecasting, we must consider what kinds of inevitable misjudgements are most important to try to avoid. A precautionary perspective would suggest it is better to slightly overspend resources on monitoring canaries that turn out to be false positives, rather than to miss an opportunity to anticipate significant technological impacts. This suggests we may want to set a low threshold for what should be considered a 'canary' in the final stage of the method. The uncertainty raises an important question: will it on average be better to have an imperfect, uncertain mapping of milestones rather than none at all? There is some chance that incorrect estimates of 'canaries' could be harmful. An incorrect mapping could focus undue attention on some avenue of AI progress, waste resources or distract from more important issues. Our view is that it is nonetheless preferable to attempt a prioritisation. The realistic alternative is that anticipatory governance is not attempted or informed by scholars' individual estimates in an ad-hoc manner, which we should expect to be incorrect more often than our collective and structured expert elicitation. How accurate our method is can only be studied by trialling it and tracking its predictions as AI research progresses to confirm or refute the forecasts. Future studies are likely to face several trade-offs in managing the uncertainty. For example, a large and cognitively diverse expert group may be better placed to develop robust maps eventually, but this may be a much more challenging process than doing it with a smaller, less diverse group --making the latter a tempting choice (see [45] for a discussion of this trade-off). 
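One simple way to operationalise both suggestions, marking uncertainty on the graph and applying a deliberately low, precautionary threshold, is sketched below; the confidence scores and the threshold value are invented for illustration.

# Each edge carries a confidence in [0, 1] reflecting how certain experts are
# about the hypothesised dependency; a milestone's canary score sums the
# confidences of its outgoing edges.
weighted_edges = {
    ("flexible memory", "continual learning"): 0.8,
    ("flexible memory", "dynamic data"): 0.6,
    ("concept formation", "positing unobservables"): 0.4,
}

def canary_scores(edges):
    scores = {}
    for (source, _target), confidence in edges.items():
        scores[source] = scores.get(source, 0.0) + confidence
    return scores

# A precautionary stance favours a low threshold, flagging more candidates for monitoring.
THRESHOLD = 0.5
flagged = {m for m, s in canary_scores(weighted_edges).items() if s >= THRESHOLD}
print(flagged)  # {'flexible memory'}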
The study of broad and high-level questions (such as when we might attain HLMI or automate a large percentage of jobs) may be more societally relevant or intellectually motivating, but narrower studies focused on nearer-term, well-defined applications or impacts may be easier to reach certainty on. A further risk is that this method, intended to identify warning signs so as to give time to debate transformative applications, may inadvertently speed up progress towards AI capabilities and applications. By fostering expert deliberation and mapping milestones, it is likely that important research projects and goals are highlighted and the field's research roadmap is improved. This means our method must be used with caution. However, we do not believe this is a reason to abandon the approach, since these concerns must be balanced against the benefits of being able to deliberate upon and shape the impacts of AI in advance. In particular, we believe that the process of distilling information from experts in a way that can be communicated to wider society, including those currently underrepresented in debates about the future of AI, is likely to have many more benefits than costs. The idea that we can identify 'warning signs' for progress assumes that there will be some time lag between progress on milestones, during which anticipatory governance work can take place. Of course, the extent to which this is possible will vary, and in some cases, unlocking a 'canary' capability could lead to very rapid progress on subsequent milestones. Future work could consider how to incorporate assessment of timescales into the causal graphs developed, so that it is easier to identify canaries which warn of future progress while allowing time to prepare. Future work should also critically consider what constitutes relevant 'expertise' for the task of identifying canaries, and further explore ways to effectively integrate expert knowledge with the values and perspectives of diverse publics. Our method finds a role for the expert situated in a larger democratic process of anticipating and regulating emerging technologies. Expert judgement can thereby be beneficial to wider participation. However, processes that allow more interaction between experts and citizens could be even more effective. One limitation of the method presented in this paper is that it requires one to have already identified a particular transformative event of concern, but does not provide guidance on how to identify and prioritise between events. It may be valuable to consider how citizens that are impacted by technology can play a role in identifying initial areas of concern, which can then feed into this process of expert elicitation to address the concerns. \n VII. Conclusion We have presented a flexible method for identifying early warning signs, or 'canaries' in AI progress. Once identified, these canaries can provide focal points for anticipatory governance efforts, and can form the basis for meaningful participatory processes enabling citizens to steer AI developments and their impacts. Future work must now test this method by putting it into practice, which will more clearly reveal both benefits and limitations. Our artificial canaries offer a chance for forward-looking, democratic assessments of transformative technologies. \n Appendix It is worth noting there are apparent similarities and relationships between many of these milestones. 
For example, representation (the ability to learn abstract representations of the environment) seems closely related to variable binding (the ability to formulate place-holder concepts). The ability to apply learning from one task to another, cross-domain generalisation, seems closely related to analogical reasoning. Further research progress will tell which of these are clearly separate milestones and which are more closely related notions. Flexible memory, as described by experts in our sample, is the ability to recognize and store reusable information in a format that is flexible, so that it can be retrieved and updated when new knowledge is gained. We explain the reasoning behind the labelled arrows in Fig. 2 (see also Fig. 3): • (B): the ability to reinterpret data in light of new information likely requires flexible memory, since it requires the ability to retrieve and alter previously stored information. • (C) and (E): to make use of dynamic and changing data input, and to learn continuously over time, an agent must be able to store, correctly retrieve and modify previous data as new data comes in. • (D): in order to plan and execute strategies in brittle environments with long delays between actions and rewards, an agent must be able to store memories of past actions and rewards, and also easily retrieve this information and continually update its best guess about how to obtain rewards in the environment. • (F): analogical reasoning involves comparing abstract representations, which requires forming, recognising, and retrieving representations of earlier observations. Progress in flexible memory therefore seems likely to unlock or enable many other capabilities important for HLMI, especially those crucial for applying AI systems in real environments and to more complex tasks. These initial hypotheses should be validated and explored in more depth by a wider range of experts. \n Acknowledgments We thank the attendees of the workshop Evaluating Progress in AI at the European Conference on AI (Aug 2020) for recognizing the potential of this work. We particularly thank Carolyn Ashurst and Luke Kemp for their efforts and commentary on our drafts. \n Fig. 3. Extract of Fig. 2, showing one candidate canary capability. \n Fig. 2. Cognitive map of dependencies between milestones collected in expert elicitations. Arrows coloured in green signify those milestones that have the most outgoing arrows. See the appendix for a description of each milestone and of the dependency relations between one 'canary' node and subsequent nodes. Milestones shown in the map include: Object Permanence; Uncertainty Estimation; Meta-Learning and Architecture Search; Visual Question Answering; Causality; Theorising and Hypothesising; Common Sense; Reading Comprehension; Grammar; Hierarchical Decomposition; Positing Unobservables; Adversarial Attacks; Cross-Domain Generalisation; Scalability; Mathematical Reasoning; Representation, Variable-Binding and Disentanglement; Concept Formation; Analogical Reasoning; Overfitting; Efficient Learning; Catastrophic Forgetting; Flexible Memory; Continual Learning; Active Learning; Environmental Pressure; Reinterpretations; Dynamic Data; Brittle Environments; Misguided Data Collection; Context-Dependent Decisions.", "date_published": "n/a", "url": "n/a", "filename": "ijimai_6_5_10.tei.xml", "abstract": "We propose a method for identifying early warning signs of transformative progress in artificial intelligence (AI), and discuss how these can support the anticipatory and democratic governance of AI.
We call these early warning signs 'canaries', based on the use of canaries to provide early warnings of unsafe air pollution in coal mines. Our method combines expert elicitation and collaborative causal graphs to identify key milestones and identify the relationships between them. We present two illustrations of how this method could be used: to identify early warnings of harmful impacts of language models; and of progress towards high-level machine intelligence. Identifying early warning signs of transformative applications can support more efficient monitoring and timely regulation of progress in AI: as AI advances, its impacts on society may be too great to be governed retrospectively. It is essential that those impacted by AI have a say in how it is governed. Early warnings can give the public time and focus to influence emerging technologies using democratic, participatory technology assessments. We discuss the challenges in identifying early warning signals and propose directions for future work.", "id": "9f876316cc75e629fb37e9c9d90daa71"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Dennis Pamlin", "Stuart Armstrong", "Dr Nick Beckstead", "Kennette Benedict", "Oliver Bettis", "Dr Eric Drexler", "Madeleine Enarsson", "Martin Hellman", "Aled Jones", "Nick Mabey", "Jennifer Morgan", "Prof Vincent Müller", "Prof Toby Ord", "Dr Anders Sandberg", "Nick Silver", "Andrew Simms", "Academic Project Manager, Andrew Snyder-Beattie", "Nathan Wolfe", "Investment Consultant at Liang Yin", "Towers Watson"], "title": "Member of the National Expert Panel on Climate Change and National Foreign Policy Advisory Committee, China", "text": "The main authors of this report are Dennis Pamlin, Executive Project Manager, Global Challenges Foundation and Dr Stuart Armstrong, James Martin Research Fellow, Future of Humanity Institute, Oxford Martin School & Faculty of Philosophy, University of Oxford. Dr Stuart Armstrong wrote the chapter covering the twelve global challenges, under the direction of Dennis Pamlin who served as project manager and himself wrote and edited the rest of the report. Seth Baum, Executive Director of the Global Catastrophic Risk Institute and affiliate researcher at the Center for Research on Environmental Decisions, Columbia University, also played an important role as he helped develop the methodology chapter regarding the selection of the global challenges with potentially infinite impacts as well as providing helpful input throughout the process. The report is the result of a collaborative approach where many people have provided invaluable contributions. The authors would therefore like to thank a few people in particular. First and foremost László Szombatfalvy, Chairman of the Global Challenges Foundation, whose work is the basis for this report and whose guidance on all levels has been invaluable. The rest of the board of the Global Challenges Foundation have also contributed in many different ways, in particular, Johan Rockström has provided important input regarding the structure and methodology. Outside the foundation Prof Nick Bostrom, Professor & Director of the Future of Humanity Institute, Oxford Martin School & Faculty of Philosophy, University of Oxford, who initiated the possibility of working with the Future of Humanity Institute at the University of Oxford, played a particularly important role. 
Patrick McSharry, Head of Smith School's Catastrophe Risk Financing research area, provided invaluable input regarding complex systems and ways that the economic system can respond to infinite impacts. Alex Kirby also played a key part as he did so much more than proofread the text; the report would hardly be possible to read without his help. Various additional edits and changes were made by Peter Brietbart. Others that must be mentioned, including those who participated in the workshop on 14 January 2014, at the Future of Humanity Institute (FHI), University of Oxford and the workshop at the Munich RE office in London on 15 January 2014, and helped provide input regarding the economic and finance aspects, include (in alphabetical order): Global Challenges -Twelve risks that threaten human civilisation -The case for a new category of risks This is the executive summary of a report about a limited number of global risks that pose a threat to human civilisation, or even possibly to all human life. \n Summary Executive 4 Global Challenges -Twelve risks that threaten human civilisation -The case for a new category of risks Executive Summary \n History: the LA-602 document With such a focus it may surprise some readers to find that the report's essential aim is to inspire action and dialogue as well as an increased use of the methodologies used for risk assessment. The real focus is not on the almost unimaginable impacts of the risks the report outlines. Its fundamental purpose is to encourage global collaboration and to use this new category of risk as a driver for innovation. The idea that we face a number of global challenges threatening the very basis of our civilisation at the beginning of the 21st century is well accepted in the scientific community, and is studied at a number of leading universities. I However, there is still no coordinated approach to address this group of challenges and turn them into opportunities. It is only 70 years ago that Edward Teller, one of the greatest physicists of his time, with his back-of-the-envelope calculations, produced results that differed drastically from all that had gone before. His calculations showed that the explosion of a nuclear bomb -a creation of some of the brightest minds on the planet, including Teller himself -could result in a chain reaction so powerful that it would ignite the world's atmosphere, thereby ending human life on Earth. Robert Oppenheimer, who led the Manhattan Project to develop the nuclear bomb, halted the project to see whether Teller's calculations were correct. The resulting document, LA-602: Ignition of the Atmosphere with Nuclear Bombs, concluded that Teller was wrong. But the sheer complexity drove the assessors to end their study by writing that \"further work on the subject [is] highly desirable\". The LA-602 document can be seen as the first global challenge report addressing a category of risks where the worst possible impact in all practical senses is infinite. \n Global risks The report conducts its exploration within carefully defined bounds, resulting in a list of twelve risks with potentially infinite outcomes. There were many challenges which might have been included on the list because of their ability to pose severe damage to humanity. They were excluded for one or more of three reasons: 1. Limited impact -tsunamis, for example, and chemical pollution. 
\n No effective countermeasures - the report focuses on promoting effective interventions and so ignores challenges where nothing useful can be done to prevent or mitigate the impact, as with nearby gamma-ray bursts. 3. Included in other challengesmany challenges are already covered by others, or are very similar to them. Population growth, for one, is significant for climate change and ecosystem catastrophe, but without direct large-scale impacts of its own. It is worth noting that complex systems are often stable only within certain boundaries outside which the system can collapse and rapidly change to a new stable state. Such a collapse can trigger a process where change continues for a long time until a new stable state is found. None of the risks in this report are likely to result directly in an infinite impact, and some cannot do so physically. All the risks however are big enough to reach a threshold where the social and ecological systems become so unstable that an infinite impact could ensue. This is a report about two extremes, not one. It is about how a better understanding of the magnitude of the challenges can help the world to address the risks it faces, and can help to create a path towards more sustainable development. It is a scientific assessment about the possibility of oblivion, certainly, but more than that it is a call for action based on the assumption that humanity is able to rise to challenges and turn them into opportunities. We are confronted with possibly the greatest challenge ever and our response needs to match this through global collaboration in new and innovative ways. This report has, to the best of the authors' knowledge, created the first list of global risks with impacts that for all practical purposes can be called infinite. It is also the first structured overview of key events related to such challenges and has tried to provide initial rough quantifications for the probabilities of these impacts. In the next phase of the project, these placeholder estimates will be improved and refined by a variety of methods (expert elicitation, fault trees, simulations, etc.) appropriate to each specific risk. \n The goals of the report The first of the report's goalsacknowledging the existence of risks with potentially infinite impactseeks to help key stakeholders to acknowledge the existence of the category of risks that could result in infinite impact, and to show them that we can reduce or even eliminate most of them. The second goal is to inspire by showing the practical action that is taking place today. This report seeks to show that helping to meet these global challenges is perhaps the most important contribution anyone can make today, and highlights concrete examples to inspire a new generation of leaders. The third goal is to connect different groups at every level, so that leaders in different sectors connect with each other to encourage collaboration. This will need a specific focus on financial and security policy, where significant risks combine to demand action beyond the incremental. The fourth goal is to deliver actual strategies and initiatives that produce actual results. The report is a first step and its success will ultimately be measured only on how it contributes to concrete results. The report will have achieved its goals when key decision-makers recognise the magnitude of the possible risks and our ability to reduce or even eliminate most of them. The four main goals of this report are to acknowledge, inspire, connect and deliver. 
\n Report structure The first part of the report introduces and defines the global challenges and includes the methodology for selecting them. The second part is an overview of the twelve challenges and key events that illustrate strategic work to address them. It also lists for each challenge five important factors that influence its probability or impact. The challenges are divided into four different categories: -current challenges include those which currently threaten humanity because of its economic and technological development; -exogenic challenges are those where the basic probability of an event is beyond human control, but where the probability and magnitude of the impact can be influenced; -emerging challenges could both help reduce the risks associated with current challenges and also result in infinite impacts; -the last of the twelve challenges are global policy challenges, threats arising from future global governance as it resorts to destructive policies in response to the categories of challenge listed above. The third part of the report discusses the relationship between the different challenges, as action to address one can increase the risk of another. Many solutions can also address multiple challenges, so there are significant benefits from understanding how they are linked. The fourth part is an overview, the first ever to the authors' knowledge, of the probabilities of global challenges with potentially infinite impacts. The fifth part presents some of the most important underlying trends that influence the challenges, which often build up slowly to a threshold where very rapid changes can ensue. The sixth part presents an overview of possible ways forward. \n A new category of global risk Risk = Probability × Impact. For several reasons the potentially infinite impacts of the challenges in this report are not as well known as they should be. One reason is the way that extreme impacts are often masked by most of the theories and models used by governments and business today. Climate change is a good example, where almost all of the focus is on the most likely scenarios, and there are few public studies that include the low-probability high-impact scenarios. In most reports about climate impacts, those caused by warming beyond five or six degrees Celsius are omitted from tables and graphs. Other aspects that contribute to this relative invisibility include the fact that extreme impacts are difficult to translate into monetary terms, as they have a global scope and often require a time-horizon of a century or more. They cannot be understood simply by linear extrapolation of current trends, and they lack historical precedents. There is also the fact that the measures required to significantly reduce the probability of infinite impacts will be radical compared to a business-as-usual scenario. A scientific approach requires us to base our decisions on the whole probability distribution. The review of the literature indicates that, under a business-as-usual scenario, new risks with potentially infinite impact are probably inseparable from the rapid technological development in areas like synthetic biology, nanotechnology and AI. Most risks are linked to the increased knowledge and the economic and technical development that have brought many benefits. For example, climate change is a result of the industrial revolution and fossil-fuel-based development.
The increased potential for global pandemics is one consequence of an integrated global economy where goods and services move quickly internationally. Similar challenges can be expected for synthetic biology, nanotechnology and AI. There are remedies, including technological and institutional, for all risks. But they will require collaboration of a sort humanity has not achieved before, and the creation of systems which can deal with problems pre-emptively. It is important to understand that much of the knowledge and many tools that we have, and will develop, can be both a risk and a solution to risks depending on context. The idea that there may be risks where the impact can be described as infinite, defined as the end of human civilisation or even human life, is not new. However, it excites relatively little political or academic interest, and the way it is treated in popular culture makes a serious discussion more difficult. \n Infinite impacts and thresholds Normal Risks \n Threshold Traditional measures and tools applicable \n New Category Requires new measures and tools impact 0 probability Risk Probability Impact = x Using traditional economic tools is problematic and can generate disagreement over issues such as discounting, which the report examines in some detail, considering for example the role of tipping points. The report distinguishes between the concepts of infinite impact -where civilisation collapses to a state of great suffering and does not recover, or a situation where all human life ends -and infinite impact thresholdan impact that can trigger a chain of events that could result first in a civilisation collapse, and then later result in an infinite impact. Such thresholds are especially important to recognise in a complex and interconnected society where resilience is decreasing. A collapse of civilisation is defined as a drastic decrease in human population size and political/ economic/social complexity, globally plunge temperatures below freezing around the globe and possibly also destroy most of the ozone layer. The detonations would need to start firestorms in the targeted cities, which could lift the soot up into the stratosphere. The risks are severe and recent models have confirmed the earlier analysis. The disintegration of the global food supply would make mass starvation and state collapse likely. As for all risks there are uncertainties in the estimates, and warming could be much more extreme than the middle estimates suggest. Feedback loops could mean global average temperatures increase by 4°C or even 6°C over pre-industrial levels. Feedbacks could be the release of methane from permafrost or the dieback of the Amazon rainforest. The impact of global warming would be strongest in poorer countries, which could become completely uninhabitable for the highest range of warming. The likelihood of a full-scale nuclear war between the USA and Russia has probably decreased. Still, the potential for deliberate or accidental nuclear conflict has not been removed, with some estimates putting the risk in the next century or so at around 10%. A larger impact would depend on whether or not the war triggered what is often called a nuclear winter or something similarthe creation of a pall of smoke high in the stratosphere that would Mass deaths and famines, social collapse and mass migration are certainly possible in this scenario. 
Combined with shocks to the agriculture and biosphere-dependent industries of the more developed countries, this could lead to global conflict and possibly civilisation collapse. Further evidence of the risk comes from signs that past civilisation collapses have been driven by climate change. The uncertainties in climate sensitivity models, including the tail. The likelihood -or not -of global coordination on controlling emissions. The future uptake of low carbon economies, including energy, mobility and food systems. Whether technological innovations will improve or worsen the situation, and by how much. How relations between current and future nuclear powers develop. The probability of accidental war. Whether disarmament efforts will succeed in reducing the number of nuclear warheads. The likelihood of a nuclear winter. The long-term effects of a nuclear war on climate, infrastructure and technology. A new category of global risk. Executive Summary the damage and (unlike previous, localised collapses) the whole world is potentially at risk. It seems plausible that some human lifestyles could be sustained in a relatively ecosystem independent way, at relatively low costs. Whether this can be achieved on a large scale in practice, especially during a collapse, will be a technological challenge and whether it is something we want is an ethical question. This is where an ecosystem suffers a drastic, possibly permanent, reduction in carrying capacity for all organisms, often resulting in mass extinction. Humans are part of the global ecosystem and so fundamentally depend on it. Species extinction is now far faster than the historic rate, and attempts to quantify a safe ecological operating space place humanity well outside it. \n Many of the problems of ecological degradation interact to multiply The extent to which humans are dependent on the ecosystem. Whether there will be effective political measures taken to protect the ecosystem on a large scale. The likelihood of the emergence of sustainable economies. The positive and negative impacts on the ecosystems of both wealth and poverty. The long-term effects of an ecological collapse on ecosystems. 5 key factors: An epidemic of infectious disease that has spread through human populations across a large region or even worldwide. There are grounds for suspecting that such a highimpact epidemic is more probable than usually assumed. All the features of an extremely devastating disease already exist in nature: essentially incurable (Ebola), nearly always fatal (rabies), extremely infectious (common cold), and long incubation periods (HIV). If a pathogen were to emerge that somehow combined these features What the true probability distribution for pandemics is, especially at the tail. The capacity of international health systems to deal with an extreme pandemic. How fast medical research can proceed in an emergency. How mobility of goods and people, as well as population density, will affect pandemic transmission. Whether humans can develop novel and effective anti-pandemic solutions. (and influenza has demonstrated antigenic shift, the ability to combine features from different viruses), its death toll would be extreme. The world has changed considerably, making comparisons with the past problematic.Today it has better sanitation and medical research, as well as national and supra-national institutions dedicated to combating diseases. 
But modern transport and dense human population allow infections to spread much more rapidly, and slums can be breeding grounds for disease. An economic or societal collapse on the global scale. The term has been used to describe a broad range of conditions. Often economic collapse is accompanied by social chaos, civil unrest and sometimes a breakdown of law and order. Societal collapse usually refers to the fall or disintegration of human societies, often along with their life support systems. The world economic and political system is made up of many actors with many objectives and many links between them. Such intricate, interconnected systems are subject to unexpected system-wide failures caused by the structure of the network -even if each component of the network is reliable. This gives rise to systemic risk, when parts that individually may function well become vulnerable when connected as a system to a self-reinforcing joint risk that can spread from part to part, potentially affecting the entire system and possibly spilling over to related outside systems. Such effects have been observed in ecology, finance and critical infrastructure such as power grids. The possibility of collapse becomes more acute when several independent networks depend on each other. Whether global system collapse will trigger subsequent collapses or fragility in other areas. What the true trade-off is between efficiency and resilience. Whether effective regulation and resilience can be developed. Whether an external disruption will trigger a collapse. Whether an internal event will trigger a collapse. Whether detection and tracking of asteroids and other dangerous space objects is sufficiently exhaustive. How feasible it is to deflect an asteroid. Whether measures such as evacuation could reduce the damage of an impact. The short-and long-term climate consequences of a collision. Whether our current civilisation could adapt to a post-impact world. 5 key factors: Large asteroid collisions -with objects 5 km or more in sizehappen about once every twenty million years and would have an energy a hundred thousand times greater than the largest bomb ever detonated. A land impact would destroy an area the size of a nation like Holland. Larger asteroids could be extinction-level events. Asteroid impacts are probably one of the best understood of all risks in this report. There has been some discussion about possible methods for deflecting asteroids found on a collision course with the planet. Should an impact occur the main destruction will not be from the initial impact, but from the clouds of dust projected into the upper atmosphere. The damage from such an \"impact winter\" could affect the climate, damage the biosphere, affect food supplies, and create political instability. \n Global System Collapse Major Asteroid Impact 16 Global Challenges -Twelve risks that threaten human civilisation -The case for a new category of risks Executive Summary The true destructive potential of synthetic biology, especially the tail risk. Whether the field will be successfully regulated, or successfully manage to regulate itself. Whether the field will usher in a new era of bio-warfare. Whether the tools of synthetic biology can be used defensively to create effective counter measures. The dangers of relying on synthetic biologists to estimate the danger of synthetic biology. 
This could emerge through military or commercial bio-warfare, bioterrorism (possibly using dual-use products developed by legitimate researchers, and currently unprotected by international legal regimes), or dangerous pathogens leaked from a lab. Of relevance is whether synthetic biology products become integrated into the global economy or biosphere. This could lead to additional vulnerabilities (a benign but widespread synthetic biology product could be specifically targeted as an entry point through which to cause damage). The design and construction of biological devices and systems for useful purposes, but adding human intentionality to traditional pandemic risks. Attempts at regulation or self-regulation are currently in their infancy, and may not develop as fast as research does. One of the most damaging impacts from synthetic biology would come from an engineered pathogen targeting humans or a crucial component of the ecosystem. Any volcano capable of producing an eruption with an ejecta volume greater than 1,000 km 3 . This is thousands of times larger than normal eruptions. The danger from super-volcanoes is the amount of aerosols and dust projected into the upper atmosphere. This dust would absorb the Sun's rays and cause a global volcanic winter. The Mt Pinatubo eruption of 1991 caused an average global cooling of surface temperatures by 0.5°C over three years, while the Toba eruption around 70,000 years ago is thought by some to have cooled global temperatures for over two centuries. The effect of these eruptions could be best compared with that of a nuclear war. The eruption would be more violent than the nuclear explosions, but would be less likely to ignite firestorms and other secondary effects. Whether countries will coordinate globally against super-volcano risk and damage. The predictability of supervolcanic eruptions. How directly destructive an eruption would be. The effectiveness of general mitigation efforts. How severe the long-term climate effects would be. And if these motivations do not detail the survival and value of humanity, the intelligence will be driven to construct a world without humans. This makes extremely intelligent AIs a unique risk, in that extinction is more likely than lesser impacts. On a more positive note, an intelligence of such power could easily combat most other risks in this report, making extremely intelligent AI into a tool of great potential. There is also the possibility of AI-enabled warfare and all the risks of the technologies that AIs would make possible. An interesting version of this scenario is the possible creation of \"whole brain emulations\": human brains scanned and physically represented in a machine. This would make the AIs into properly human minds, possibly alleviating a lot of problems. Atomically precise manufacturing, the creation of effective, highthroughput manufacturing processes that operate at the atomic or molecular level. It could create new products -such as smart or extremely resilient materials -and would allow many different groups or even individuals to manufacture a wide range of things. This could lead to the easy construction of large arsenals of conventional or more novel weapons made possible by atomically precise manufacturing. AI is the intelligence exhibited by machines or software, and the branch of computer science that develops machines and software with human-level intelligence. 
The field is often defined as \"the study and design of intelligent agents\", systems that perceive their environment and act to maximise their chances of success. Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations. Of particular relevance is whether nanotechnology allows the construction of nuclear bombs. But many of the world's current problems may be solvable with the manufacturing possibilities that nanotechnology would offer, such as depletion of natural resources, pollution, climate change, clean water and even poverty. Some have conjectured special self-replicating nanomachines which would be engineered to consume the entire environment. The misuse of medical nanotechnology is another risk scenario. The timeline for nanotech development. Which aspects of nanotech research will progress in what order. Whether small groups can assemble a weapons arsenal quickly. Whether nanotech tools can be used defensively or for surveillance. Whether nanotech tools or weaponry are made to be outside human control. The reliability of AI predictions. Whether there will be a single dominant AI or a plethora of entities. How intelligent AIs will become. Whether extremely intelligent AIs can be controlled, and if so, how. Whether whole brain emulations (human minds in computer form) will arrive before true AIs. There are two main divisions in governance disasters: failing to solve major solvable problems, and actively causing worse outcomes. An example of the first would be failing to alleviate absolute poverty; of the second, constructing a global totalitarian state. Technology, political and social change may enable the construction of new forms of governance, which may be either much better or much worse. Two issues with governance disasters are first, the difficulty of estimating their probability, and second, the dependence of the impact of these disasters on subjective comparative evaluations: it is not impartially obvious how to rank continued poverty and global totalitarianism against billions of casualties or civilisation collapse. How the severity of nondeadly policy failures can be compared with potential casualties. Whether poor governance will result in a collapse of the world system. How mass surveillance and other technological innovations will affect governance. Whether there will be new systems of governance in the future. Whether a world dictatorship may end up being constructed. 1 2 3 4 5 generic probability of intelligent life (self-)destruction, which includes uncertain risks. Anthropic reasoning can also bound the total risk of human extinction, and hence estimate the unknown component. Non riskspecific resilience and post-disaster rebuilding efforts will also reduce the damage from uncertain risks, as would appropriate national and international regulatory regimes. Most of these methods would also help with the more conventional, known risks, which badly need more investment. These represent the unknown unknowns in the family of global catastrophic challenges. They constitute an amalgamation of all the risks that can appear extremely unlikely in isolation, but can combine to represent a not insignificant proportion of the risk exposure. 
One resolution to the Fermi paradoxthe apparent absence of alien life in the galaxy -is that intelligent life destroys itself before beginning to expand into the galaxy. Results that increase or decrease the probability of this explanation modify the Whether there will be extensive research into unknown risks and their probabilities. The capacity to develop methods for limiting the combined probability of all uncertain risks. The capacity for estimating \"out of-model\" risks. Often the situation resembles a set of dominoes: if one falls, many follow. Even small impacts can start a process where different risks interact. 2. Specific measures to address a risk: Global risks often require significant changes, which will result in situations where measures to reduce the risk in one area affect the probability and/or the impact in other areas, for better or worse. \n Two things make the understanding of the relation between the global risks particularly important. collaboration difficulty of reducing risk technical difficulty of reducing risk The technical difficulty of reducing the risk and the difficulty of collaboration Below is an example of an overview of how different global risks can be plotted depending on the technical difficulty of reducing the risk and the difficulty of collaborating to reduce it. \n In order to better understand the relations between different global risks, work could start to analyse similarities and differences. Global Challenges -Twelve risks that threaten human civilisation - \n Probability These estimates are an attempt to assemble existing estimates in order to encourage efforts to improve the numbers. They express estimates of probabilities over 100 years, except in the case of extreme climate change, where the time frame is 200 years. Global challenges need to be seen in the light of trends which help to shape the wider society. These include: Poverty -although it has fallen, it could increase again. This is especially relevant to climate change and pandemics. Population growth -the UN's estimates range from 6.8 billion people by 2100 to a high-variant projection of 16.6 bn (which would require the resources of 10 Earth-like planets to provide everyone with a modern Western lifestyle). Other trends include technological development and demographic changes. This means that we are now forced to live with the risk of various kinds of extreme disaster with the potential of severely affecting billions of people. \n Preface Preface In this Yearbook from the Global Challenges Foundation, \"risk\" is defined as the potential damage that can be caused by an extreme disaster multiplied by the probability that it will occur. For the risk of exceptional damage, the probability of occurrence is usually small, or very small, compared with other risks in society, but the effects can be absolutely dire, meaning they must be taken very seriously. We do not know what the exact nature of what these risks are or how they may strike. Some are obvious, others may sound like pure science fiction, but they have led many scientists to regard them as real threats -and therefore it is best to include them in the calculations. With few exceptions, humans have created these risks. There are only a few risks where we are not the cause, for example natural disasters such as an asteroid impact. We could eliminate some of these risks (e.g. nuclear war). 
In other cases, all we can do is minimise the likelihood of damage, since we have already crossed the threshold that can lead to serious consequences (with climate change, for example, where we have already emitted such high levels of greenhouse gases that there are small but not insignificant likelihoods of significant damage). For other risks we cannot affect the likelihood of them occurring, only minimise damage (with supervolcanic eruptions, for instance). However, here we can build social and ecological resilience so as to reduce the damage. For decisions concerning countermeasures the first important question is: What level of probability of global catastrophes are we prepared to accept? This question has not yet appeared on the political agenda. The reason is that both scientific reports and the media choose to focus on the most likely outcome of these risks. In the absence of risk analysis both decision-makers and the public remain blissfully unaware that the probabilities of certain global catastrophes are significantly higher than we would accept in our everyday lives, where incomparably smaller values are at stake. Another, very important reason for not acting against acknowledged global risks is that they require global responses and therefore global decisions. Regrettably there is no global decision-making body capable of that, no globally functioning legal system, and so there is a lack of effective tools for dealing with these challenges. The result: the risks are increased in the absence of effective measures to counter them. This report wants, on a strictly scientific basis, to identify and describe the global risks of extreme disasters, and also to report the latest developments affecting these risks and measures to face up to them. The Global Challenges Foundation's goal in this report is to accelerate effective counter-actions against global events with the potential for large-scale unwanted effects by deepening both decision makers' and the public's insights into the risks, and also to inspire both debate and welljudged decisions on these questions: -What probabilities of extreme disasters are acceptable? -Which are the optimal countermeasures? -How can an effective global decision-making system be created -with or without a global legal system? We are also convinced that knowledge of these risks is not only a prerequisite for reducing them, but also a responsibility which we owe to our children, grandchildren and to all future generations. It is up to us to decide whether these threats can possibly be reduced or not! These efforts do not only demand sacrifices on our part. They also create opportunities for everyone to make a significant contribution to improving the future of humanity: -For world leaders this means assuming their responsibility and starting to work towards common, global decision-making. -Scientists need to focus their research on areas that will help us take effective measures against the risks. -Companies should make sustainability a business model. -And there is a special opportunity for all of us -that when choosing our politicians and suppliers (of goods and services), we should consider their ambition to eliminate or at least minimise global risks and to create an efficient decisionmaking system that can manage these risks. 
Finally, I would on behalf of the Global Challenges Foundation extend my sincere gratitude to both Dennis Pamlin, editor of the report, and to all the scientists and other experts who have contributed their research and / or valuable comments. \n Laszlo Szombatfalvy Founder and Chairman, With such a focus it may surprise some readers to find that the report's essential aim is to inspire action and dialogue as well as an increased use of the methodologies used for risk assessment. The real focus is not on the almost unimaginable impacts of the risks the report outlines. Its fundamental purpose is to encourage global collaboration and to use this new category of risk as a driver for innovation. The idea that we face a number of global challenges threatening the very basis of our civilisation at the beginning of the 21st century is well accepted in the scientific community, and is studied at a number of leading universities. 2 But there is still no coordinated approach to address this group of challenges and turn them into opportunities for a new generation of global cooperation and the creation of a global governance system capable of addressing the greatest challenges of our time. This report has, to the best of our knowledge, created the first sciencebased list of global risks with a potentially infinite impact and has made the first attempt to provide an initial overview of the uncertainties related to these risks as well as rough quantifications for the probabilities of these impacts. \n What is risk? Risk is the potential of losing something of value, weighed against the potential to gain something of value. Every day we make different kinds of risk assessments, in more or less rational ways, when we weigh different options against each other. The basic idea of risk is that an uncertainty exists regarding the outcome and that we must find a way to take the best possible decision based on our understanding of this uncertainty. 3 To calculate risk the probability of an outcome is often multiplied by the impact. The impact is in most cases measured in economic terms, but it can also be measured in anything we want to avoid, such as suffering. At the heart of a risk assessment is a probability distribution, often described by a probability density function 4 ; see figure X for a graphic illustration. The slightly tilted bell curve is a common probability distribution, but the shape differs and in reality is seldom as smooth as the example. The total area under the curve always represents 100 percent, i.e. all the possible outcomes fit under the curve. In this case (A) represents the most probable impact. With a much lower probability it will be a close to zero impact, illustrated by (B). In the same way as in case B there is also a low probability that the situation will be very significant, illustrated by (C). The impacts (A), (B) and (C) all belong to the same category, normal impacts: the impacts may be more or less serious, but they can be dealt with within the current system. The impacts in this report are however of a special kind. These are impacts where everything will be lost and the situation will not be reversible, i.e challenges with potentially infinite impact. In insurance and finance this kind of risk is called \"risk of ruin\", an impact where all capital is lost. 5 This impact is however only infinite for the company that is losing the money. From society's perspective, that is not a special category of risk. 
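Stated a little more formally (this is our gloss on the working definitions above, with notation chosen only for the illustration; the report itself keeps the discussion informal): writing p(ℓ) for the probability density over impacts ℓ, risk and expected loss can be written as

```latex
% Gloss on the working definitions above (illustrative notation only):
% risk as probability times impact, and expected loss as an average over
% the whole probability distribution of impacts.
\mathrm{Risk} \;=\; \mathrm{Probability} \times \mathrm{Impact},
\qquad
\mathbb{E}[L] \;=\; \int_{0}^{\infty} \ell \, p(\ell)\, \mathrm{d}\ell .
```

If the far-right tail of p(ℓ) contains outcomes treated as effectively infinite because they are irreversible at the level of civilisation, the expectation is dominated by that tail for any non-zero tail probability, which is the formal reason that ordinary expected-loss reasoning breaks down for this category of risk.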
In this report the focus is on the \"risk of ruin\" on a global scale and on a human level, in the worst case this is when we risk the extinction of our own species. On a probability curve the impacts in this report are usually at the very far right with a relatively low probability compared with other impacts, illustrated by (D) in Figure 2 . Often they are so far out on the tail of the curve that they are not even included in studies. For each risk in this report the probability of an infinite impact is very low compared to the most likely outcome. Some studies even indicate that not all risks in this report can result in an infinite impact. But a significant number of peer-reviewed reports indicate that those impacts not only can happen, but that their probability is increasing due to unsustainable trends. The assumption for this report is that by creating a better understanding of our scientific knowledge regarding risks with a potentially infinite impact, we can inspire initiatives that can turn these risks into drivers for innovation. Not only could a better understanding of the unique magnitude of these risks help address the risks we face, it could also help to create a path towards more sustainable development. The group of global risks discussed in this report are so different from most of the challenges we face that they are hard to comprehend. But that is also why they can help us to build the collaboration we need and drive the development of further solutions that benefit both people and the planet. As noted above, none of the risks in this report is likely to result directly in an infinite impact, and some are probably even physically incapable of doing so. But all are so significant that they could reach a threshold impact able to create social and ecological instability that could trigger a process which could lead to an infinite impact. For several reasons the potentially infinite impacts of the risks in this report are not as well known as they should be. One reason is the way that extreme impacts are often masked by most of the theories and models used by governments and business today. For example, the probability of extreme impacts is often below what is included in studies and strategies. The tendency to exclude impacts below a probability of five percent is one reason for the relative \"invisibility\" of infinite impacts. The almost standard use of a 95% confidence interval is one reason why low-probability high-impact events are often ignored. 6 \n Ethical These are impacts that threaten the very survival of humanity and life on Earth -and therefore can be seen as being infinitely negative from an ethical perspective. No positive gain can outweigh even a small probability for an infinite negative impact. Such risks require society to ensure that we eliminate these risks by reducing the impact below an infinite impact as a top priority, or at least do everything we can to reduce the probability of these risks. As some of these risks are impossible to eliminate today it is also important to discuss what probability can right now be accepted for risks with a possible infinite impact. \n Economic Infinite impacts are beyond what most traditional economic models today are able to cope with. The impacts are irreversible in the most fundamental way, so tools like cost-benefit assessment seldom make sense. 
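As one concrete instance of that difficulty, here is a minimal sketch with purely illustrative figures (the rates, horizon and loss are example values, not estimates from the report), showing how standard exponential discounting shrinks a loss that arrives a century from now to a small fraction of its face value:

```python
# Illustrative only: standard exponential discounting applied to a loss
# that occurs far in the future. Values are examples, not estimates.
def present_value(loss, annual_rate, years):
    """Value today of a loss incurred `years` from now."""
    return loss / (1.0 + annual_rate) ** years

future_loss = 1.0  # one unit of damage, in whatever units one prefers
for rate in (0.01, 0.03, 0.05):
    pv = present_value(future_loss, rate, years=100)
    print(f"discount rate {rate:.0%}: present value after 100 years = {pv:.4f}")
# At 5% a year the damage is weighted at under 1% of the same damage
# occurring today; an impact on all future generations has no finite
# horizon at all, which is why cost-benefit tools fit poorly here.
```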
To use discounting that makes infinite impacts (which could take place 100 years or more from now and affect all future generations) close to invisible in economic assessments, is another example of a challenge with current tools. So while tools like cost-benefit models and discounting can help us in some areas, they are seldom applicable in the context of infinite impacts. New tools are needed to guide the global economy in an age of potential infinite impacts. \n Infinite impact The concept infinite impact refers to two aspects in particular; the terminology is not meant to imply a literally infinite impact (with all the mathematical subtleties that would imply) but to serve as a reminder that these risks are of a different nature. Climate change is a good example, where almost all of the focus is on the most likely scenarios and there are few studies that include the lowprobability high-impact scenarios. In most reports about climate impacts, the impacts caused by warming beyond five or six degrees Celsius are even omitted from tables and graphs even though the IPCC's own research indicates that the probability of these impacts are often between one and five percent, and sometimes even higher. 7 Other aspects that contribute to this relative invisibility include the fact that extreme impacts are difficult to translate into monetary terms, they have a global scope, and they often require a time-horizon of a century or more. They cannot be understood simply by linear extrapolation of current trends, and they lack historical precedents. There is also the fact that the measures required to significantly reduce the probability of infinite impacts will be radical compared to a business-as-usual scenario with a focus on incremental changes. The exact probability of a specific impact is difficult or impossible to estimate. 8 However, the important thing is to establish the current magnitude of the probabilities and compare them with the probabilities for such impacts we cannot accept. A failure to provide any estimate for these riks often results in strategies and priorities defined as though the probability of a totally unacceptable outcome is zero. An approximate number for a best estimate also makes it easier to understand that a great uncertainty means the actual probability can be both much higher and much lower than the best estimate. It should also be stressed that uncertainty is not a weakness in science; it always exists in scientific work. It is a systematic way of understanding the limitations of the methodology, data, etc. 9 Uncertainty is not a reason to wait to take action if the impacts are serious. Increased uncertainty is something that risk experts, e.g. insurance experts and security policy experts, interpret as a signal for action. A contrasting challenge is that our cultural references to the threat of infinite impacts have been dominated throughout history by religious groups seeking to scare society without any scientific backing, often as a way to discipline people and implement unpopular measures. It should not have to be said, but this report is obviously fundamentally different as it focuses on scientific evidence from peer-reviewed sources. \n Roulette and Russian roulette When probability and normal risks are discussed the example of a casino and roulette is often used. You bet something, then spin the wheel and with a certain probability you win or lose. You can use different odds to discuss different kinds of risk taking. 
These kinds of thought experiment can be very useful, but when it comes to infinite risks these gaming analogies become problematic. For infinite impact a more appropriate analogy is probably Russian roulette. But instead of \"normal\" Russian roulette where you only bet your own life you are now also betting everyone you know and everyone you don't know. Everyone alive will die if you lose. There will be no second chance for anyone as there will be no future generations; humanity will end with your loss. What probability would you accept for different sums of money if you played this version of Russian roulette? Most people would say that it is stupid and -no matter how low the probability is and no matter how big the potential win is -this kind of game should not be played, as it is unethical. Many would also say that no person should be allowed to make such a judgment, as those who are affected do not have a say. You could add that most of those who will lose from it cannot say anything as they are not born and will never exist if you lose. The difference between ordinary roulette and \"allhumanity Russian roulette\" is one way of illustrating the difference in nature between a \"normal\" risk that is reversible, and a risk with an infinite impact. An additional challenge in acknowledging the risks outlined in this report is that many of the traditional risks including wars and violence have decreased, even though it might not always looks that way in media. 10 So a significant number of experts today spend a substantial amount of time trying to explain that much of what is discussed as dangerous trends might not be as dangerous as we think. For policy makers listening only to experts in traditional risk areas it is therefore easy to get the impression that global risks are becoming less of a problem. In the media it is still common to contrast the most probable climate impact with the probability that nothing, or almost nothing, will happen. The fact that almost nothing could happen is not wrong in most cases, but it is unscientific and dangerous if different levels of probability are presented as equal. The tendency to compare the most probable climate impact with the possibility of a low or no impact also results in a situation where low-probability high-impact outcomes are often totally ignored. An honest and scientific approach is to, whenever possible, present the whole probability distribution and pay special attention to unacceptable outcomes. The fact that we have challenges that with some probability might be infinite and therefore fundamentally irreversible is difficult to comprehend, and physiologically they are something our brains are poorly equipped to respond to, according to evolutionary psychologists. This psychological denial may be one reason why there is a tendency among some stakeholders to confuse \"being optimistic\" with denying what science is telling us, and ignoring parts of the probability curve. 14 Ignoring the fact that there is strong scientific evidence for serious impacts in different areas, and focusing only on selected sources which suggest that the problem may not be so serious, is not optimistic. It is both unscientific and dangerous. 15 A scientific approach requires us to base our decisions on the whole probability distribution. Whether it is possible to address the challenge or not is the area where optimism and pessimism can make people look at the same set of data and come to different conclusions. 
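The following sketch makes that point numerically. The distribution, units and threshold are invented for the illustration and do not correspond to any specific risk estimate; the point is only that a conventional 95% interval, or a "most likely outcome" summary, can sit entirely below a threshold that still carries a probability of one or two percent:

```python
# Illustrative only: a right-skewed distribution of impacts, summarised
# first by the usual 95% interval and then by the tail probability of
# exceeding a catastrophic threshold that lies outside that interval.
import numpy as np

rng = np.random.default_rng(0)
impacts = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)  # arbitrary units

low, high = np.percentile(impacts, [2.5, 97.5])
catastrophic = 8.0          # chosen to lie just beyond the interval
p_tail = (impacts > catastrophic).mean()

print(f"95% interval: [{low:.2f}, {high:.2f}]")
print(f"P(impact > {catastrophic}) = {p_tail:.2%}")
# Roughly: interval [0.14, 7.1], tail probability about 1.9%. Reporting
# only the interval (or the most likely value) leaves exactly this part
# of the distribution, the part that matters most here, out of view.
```

This mirrors the observation above that impacts with a probability of between one and five percent are routinely left out of tables and graphs.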
Two things are important to keep in mind: first, that there is always a probability distribution when it comes to risk; second, that there are two different kinds of impacts that are of interest for this report. The probability distribution can have different shapes, but in simplified cases the shape tends to look like a slightly modified bell (remember figure 1). In the media it can sound as though experts argue about whether an impact, for example a climate impact or a pandemic, will be dangerous or not. But what serious experts discuss is the probability of different outcomes.

The history of infinite impacts: The LA-602 document

The understanding of infinite impacts is very recent compared with most of our institutions and laws. It was only 70 years ago that Edward Teller, one of the greatest physicists of his time, with his back-of-the-envelope calculations, produced results that differed drastically from all that had gone before. His calculations indicated that the explosion of a nuclear bomb, a creation of some of the brightest minds on the planet, including Teller himself, could result in a chain reaction so powerful that it would ignite the world's atmosphere, thereby ending human life on Earth. 16 Robert Oppenheimer, who led the Manhattan Project to develop the nuclear bomb, halted the project to see whether Teller's calculations were correct. 17 The resulting document, LA-602: Ignition of the Atmosphere with Nuclear Bombs, concluded that Teller was wrong, but the sheer complexity drove the authors to end their assessment by writing that "further work on the subject [is] highly desirable". 18 The LA-602 document can be seen as the first scientific global risk report addressing a category of risks where the worst possible impact in all practical senses is infinite. 19 Since the atomic bomb, more challenges have emerged with potentially infinite impact. Almost all of these new challenges are linked to the increased knowledge and the economic and technical development that have brought so many benefits. For example, climate change is the result of the industrial revolution and of development that was, and still is, based heavily on fossil fuels. The first part of the report is an introduction where the global risks with potential infinite impact are introduced and defined. This part also includes the methodology for selecting these risks, and presents the twelve risks that meet this definition. Four goals of the report are also presented, under the headings "acknowledge", "inspire", "connect" and "deliver". The second part is an overview of the twelve global risks and key events that illustrate some of the work around the world to address them. For each challenge, five important factors that influence the probability or impact are also listed. The risks are divided into four different categories depending on their characteristics. "Current challenges" is the first category and includes the risks that currently threaten humanity due to our economic and technological development: extreme climate change, for example, which depends on how much greenhouse gas we emit. "Exogenic challenges" includes risks where the basic probability of an event is beyond human control, but where the probability and magnitude of the impact can be influenced: asteroid impacts, for example, where the asteroids' paths are beyond human control but an impact can be moderated by either changing the direction of the asteroid or preparing for an impact.
The fifth part presents some of the most important underlying trends that influence the global challenges, which often build up slowly until they reach a threshold and very rapid changes ensue. The sixth and final part presents an overview of possible ways forward.

Goals

Rapid technological development and economic growth have delivered unprecedented material welfare to billions of people in a veritable tide of utopias. 21 But we now face the possibility that even tools created with the best of intentions can have a darker side too, a side that may threaten human civilisation, and conceivably the continuation of human life. This is what all decision-makers need to recognise. Rather than succumbing to terror, we need to acknowledge that we can let the prospect inspire and drive us forward.

Goal 1: Acknowledge
Establish a category of risks with potentially infinite impact. Before anything significant can happen regarding global risks with potentially infinite impacts, their existence must be acknowledged.

Goal 2: Inspire
Show concrete action that is taking place today. This report seeks to show that it is not only possible to contribute to reducing these risks, but that it is perhaps the most important thing anyone can spend their time on. It does so by combining information about the risks with information about individuals and groups who have made a significant contribution by turning challenges into opportunities. By highlighting concrete examples the report hopes to inspire a new generation of leaders.

Goal 3: Connect
That leaders in different sectors connect with each other to encourage collaboration. A specific focus on financial and security policy, where significant risks combine to demand action beyond the incremental, is required. Even with those risks where many groups are involved, such as climate change and pandemics, very few today address the possibility of infinite impacts. Even fewer groups address the links between the different risks. There is also a need to connect different levels of work, so that local, regional, national and international efforts can support each other when it comes to risks with potentially infinite impacts.

Goal 4: Deliver
Identify and implement strategies and initiatives. Reports can acknowledge, inspire and connect, but only people can deliver actual results. The main focus of the report is to show that actual initiatives need to be taken that deliver actual results. Only when the probability of an infinite impact becomes acceptably low, very close to zero, and/or when the maximum impact is significantly reduced, should we talk about real progress. In order to deliver results it is important to remember that global governance to tackle these risks is simply the way we organise society in order to address our greatest challenges. It is not a question of establishing a "world government"; it is about the way we organise ourselves on all levels, from the local to the global. The report is a first step and should be seen as an invitation to all responsible parties that can affect the probability and impact of risks with potentially infinite impacts. But its success will ultimately be measured only by how it contributes to concrete results.

A collapse of civilisation is defined as a drastic decrease in human population size and political/economic/social complexity, globally and for an extended time. 25 The above definition means the list of challenges is not static. When new challenges emerge, or current ones fade away, the list will change.
An additional criterion for including risks in this report is "human influence". Only risks where humans can influence either the probability, the impact, or both, are included. For most risks both impact and probability can be affected, for example with nuclear war, where the number and size of weapons influence the impact and tensions between countries affect the probability. Other risks, such as a supervolcano, are included because it is possible to affect the impact through various mitigation methods, even if we currently cannot affect the probability. Risks that are susceptible to human influence are indirectly linked, because efforts to address one of them may increase or decrease the likelihood of another. The concept of infinity was chosen because it reflects many of the challenges, especially in economic theory, of addressing these risks, as well as the need to question much of our current way of thinking. The concept of a category of risks based on their extreme impact is meant to provide a tool to distinguish one particular kind of risk from others. The benefit of this new concept should be assessed on two grounds: first, does the category exist, and second, is the concept helpful in addressing these risks? The report has found ample evidence that there are risks with an impact that could end human civilisation and even all human life. The report further concludes that a new category of risk is not only meaningful but also timely. We live in a society where global risks with potentially infinite impacts increase in both number and probability, according to multiple studies. Looking ahead, many emerging technologies, which will certainly provide beneficial results, might also increase the probability of infinite impacts. 26 Over the last few years a greater understanding of low-probability or unknown-probability events has helped more people to understand the importance of looking beyond the most probable scenarios. Concepts like "black swans" and "perfect storms" are now part of mainstream policy and business language. 27 Greater understanding of the technology and science of complex systems has also resulted in a new understanding of potentially disruptive events. Humans now have such an impact on the planet that the term "the Anthropocene" is being used, even by mainstream media like The Economist. 28 The term was introduced in the 1990s by the Nobel Prize winner Paul Crutzen to describe how humans are now the dominant force changing the Earth's ecosystems. 29 The idea of establishing a well-defined category of risks with a potentially infinite impact, one that can be used as a practical tool by policy makers, is partly inspired by Nick Bostrom's philosophical work and his introduction of a risk taxonomy that includes an academic category called "existential risks". 30 Introducing a category of risks with a potentially infinite impact is not meant as a mathematical definition; infinity is a thorny mathematical concept and nothing in reality can be infinite. 31 It is meant to illustrate a singularity: a situation when humanity is threatened and many of the tools used to approach most challenges today become problematic, meaningless, or even counterproductive. The concept of an infinite impact highlights a unique situation where humanity itself is threatened and the very idea of value and price collapses from a human perspective, as the price of the last humans can also be seen to be infinite.

Life Value

The following estimates have been applied to the value of life in the US.
The estimates are either for one year of additional life or for the statistical value of a single life:
- $50,000 per year of quality life (the international standard most private and government-run health insurance plans worldwide use to determine whether to cover a new medical procedure)

This is not to say that those traditional tools cannot still be useful, but with infinite impacts we need to add an additional set of analytical tools. Some of the risks, including nuclear war, climate change and pandemics, are often included in current risk overviews, but in many cases their possible infinite impacts are excluded. The impacts which are included are in most cases still very serious, but only the more probable parts of the probability distributions are included, and the last part of the long tail, where the infinite impact is found, is excluded. 32 Most risk reports do not differentiate between challenges with a limited impact and those with a potential for infinite impact. This is dangerous, as it can mean resources are spent in ways that increase the probability of an infinite impact.

Ethical aspects of infinite impact

The basic ethical aspect of infinite impact is this: a very small group alive today can take decisions that will fundamentally affect all future generations. "All future generations" is not a concept that is often discussed, and for good reason. All through human history we have had no tools with a measurable global impact lasting more than a few generations. Only in the last few decades has our potential impact reached a level where all future generations can be affected, for the simple reason that we now have the technological capacity to end human civilisation. If we count human history from the time when we began to practise settled agriculture, that gives us about 12,000 years. 33 If we make a moderate assumption that humanity will live for at least 50 million more years, 34 our 12,000-year history so far represents 1/4200, or 0.024%, of our potential history. So our generation has the option of risking everything and annulling 99.976% of our potential history. Comparing 0.024% with the days of a person living to 100 years, counted from the day of conception, this would equal less than nine days, which corresponds to the first stage of human embryogenesis, the germinal stage. 35 Two additional arguments for treating potentially infinite impacts as a separate category are: 36
1. An approach to infinite impacts cannot be one of trial and error, because there is no opportunity to learn from errors. Infinite impacts are in a different category.
2. Institutions and individuals may find it hard to take these risks seriously simply because they lie outside our experience. Our collective fear response will probably be ill-calibrated to the magnitude of the threat.

Economic aspects of infinite impact and discounting

In today's society a monetary value is sometimes ascribed to human life. Some experts use this method to estimate risk by assigning a monetary value to human extinction. 37 We have to remember that the monetary values placed on a human life are in most cases not meant to suggest that we have actually assigned a specific value to a life. Assigning a value to a human life is a tool used in a society with a limited supply of resources or infrastructure (ambulances, perhaps) or skills. In such a society it is impossible to save every life, so some trade-off must be made.
38 The US Environmental Protection Agency explains its use like this: "The EPA does not place a dollar value on individual lives. Rather, when conducting a benefit-cost analysis of new environmental policies, the Agency uses estimates of how much people are willing to pay for small reductions in their risks of dying from adverse health conditions that may be caused by environmental pollution." 39

Two things make infinite impacts special from a discounting perspective. First, there is no way that future generations can compensate for the impact, as they will not exist. Second, the impact is something that goes beyond individual preference, as society will no longer exist. Discounting is undertaken to allocate resources in the most productive way. In cases that do not include infinite impacts, discounting "reflects the fact that there are many high-yield investments that would improve the quality of life for future generations. The discount rate should be set so that our investable funds are devoted to the most productive uses." 44

Figure 5: Nordhaus, The Climate Casino: climate policy with a sharp tipping point at 3.5°C. This shows that the optimal temperature increase is very close to the threshold. It is constrained on the low side by abatement costs and on the high side by the sharp increase in damages.

…unbalance the system and eventually push it over the threshold. 46 Note that these dramatic illustrations rest on assumptions that the thresholds are still relatively benign, not moving us beyond tipping points which result in an accelerated release of methane that could produce a temperature increase of more than 8 °C, possibly producing infinite impacts. 47

Calculating illustrative numbers

By including the welfare of future generations, something that is important when their very existence is threatened, economic discounting becomes difficult. In this chapter, some illustrative numbers are provided to indicate the order of magnitude of the values that calculations yield when traditional calculations also include future generations. These calculations are only illustrative, as the timespans that must be used make all traditional assumptions questionable, to say the least. Still, they may help indicate why infinite impact might be a good approximation. As a species that can manipulate its environment, it could be argued that the time the human race will be around, if we do not kill ourselves, can be estimated at between 1-10 million years, the typical time period for the biological evolution of a successful species, 48 and one billion years, the habitable time remaining for Earth. 49 Likewise the value of a life, $28 million, a value based on an assessment of how individuals choose when it comes to flying, can be seen as much too small. This value is based on how much we value our own lives at the margin, and it is reasonable to assume that the value would be higher than a simple multiple of our own value if we also considered the risk of losing our family, everyone we know, as well as everyone else on the planet. In the same way as the cost of a product increases when it is in short supply, the cost of the last humans could be assumed to be very high, if not infinite. Obviously, the very idea of putting a price on the survival of humanity can be questioned for good reasons, but if we still want to use a number, $28 million per life should at least be considered a significant underestimation.
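To see why discounting dominates this kind of valuation, the minimal sketch below combines the conservative assumptions used in the illustrative calculation that follows (50 million years of future, 100-year generations, 6 billion people per generation, $50,000 per life) with a range of discount rates. Treating each generation as a single point in time is a simplifying assumption made only for illustration.

```python
# Sketch: how the discount rate changes the "value" of all future generations.
# Assumptions (taken from the illustrative figures in this chapter, plus the
# simplifying choice to treat each 100-year generation as a single point in time):
#   - 50 million years of future, 100-year generations -> 500,000 generations
#   - 6 billion people per generation, valued at $50,000 per life

PEOPLE_PER_GEN = 6e9
VALUE_PER_LIFE = 5e4
GENERATIONS = 500_000
YEARS_PER_GEN = 100

def total_value(annual_discount_rate):
    """Sum the (discounted) value of all future generations."""
    per_gen = PEOPLE_PER_GEN * VALUE_PER_LIFE
    if annual_discount_rate == 0:
        return per_gen * GENERATIONS
    x = (1 + annual_discount_rate) ** -YEARS_PER_GEN  # discount factor per generation
    # geometric series: sum over k = 1..GENERATIONS of x**k
    series = x * (1 - x ** GENERATIONS) / (1 - x)
    return per_gen * series

for rate in (0.0, 0.001, 0.03):
    print(f"discount rate {rate:.1%}: ${total_value(rate):.2e}")
# 0.0% -> 1.50e+20 (the undiscounted figure used in the text)
# 0.1% -> ~2.9e+15 (five orders of magnitude smaller)
# 3.0% -> ~1.6e+13 (future generations nearly vanish from the total)
```

Even a discount rate of a tenth of a percent removes almost the entire value of future generations from the calculation, which is the core of the problem described above.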
For those who are reluctant or unable to use infinity in calculations and need a number for their formulas, $86 sextillion could be a good starting point for the cost of infinite impacts. But it is important to note that this number might be orders of magnitude smaller than an estimate which took into account a more realistic count of the people that should be included in future generations, as well as the price that should be assigned to the loss of the last humans. As we address very complex systems, such as human civilisation and global ecosystems, a concept as important for this report as infinite impact is that of the infinite impact threshold. This is the impact level that can trigger a chain of events that results in the end of human civilisation. The infinite impact threshold (IIT) concept represents the idea that long before an actual infinite impact is reached there is a tipping point beyond which it is (with some probability) no longer possible to reverse events. So instead of focusing only on the ultimate impact it is important to estimate what level of impact the infinite threshold entails. The IIT is defined as an impact that can trigger a chain of events that could result first in a civilisation collapse, and then later in an infinite impact. Such thresholds are especially important to recognise in a complex and interconnected society where resilience is decreasing.

If we assume:
- 50 million years for the future of humanity as our reference,
- an average life expectancy of 100 years, 50 and
- a global population of 6 billion people 51 (all conservative estimates),
we have half a million generations ahead of us, with a total of 3 quadrillion individuals. Assuming a value of $50,000 per life, the cost of losing them would then be $1.5 × 10^20.

2.3.4 Global F-N curves

[Figure: risks with infinite impact; a situation that requires new measures and tools; ALARP]

Social and ecological systems are complex, and in most complex systems there are thresholds where positive feedback loops become self-reinforcing. In a system where resilience is too low, feedback loops can result in a total system collapse. These thresholds are very difficult to estimate and in most cases it is possible only to estimate their order of magnitude. As David Orrell and Patrick McSharry wrote in A Systems Approach to Forecasting: "Complex systems have emergent properties, qualities that cannot be predicted in advance from knowledge of systems components alone". According to complexity scientist Stephen Wolfram's principle of computational irreducibility, the only way to predict the evolution of such a system is to run the system itself: "There is no simple set of equations that can look into its future." 55 Orrell and McSharry also noted that "in orthodox economics, the reductionist approach means that the economy is seen as consisting of individual, independent agents who act to maximise their own utility. It assumes that prices are driven to a state of near-equilibrium by the 'invisible hand' of the economy. Deviations from this state are assumed to be random and independent, so the price fluctuations are often modelled using the normal distribution or other distributions with thin tails and finite variance." The drawbacks of an approach using the normal distribution, or other distributions with thin tails and finite variance, become obvious when the unexpected happens, as in the recent credit crunch, when existing models totally failed to capture the true risks of the economy. As an employee of Lehman Brothers put it on August 11, 2007: "Events that models predicted would happen only once in 10,000 years happened every day for three days." 56
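As a rough, self-contained illustration of that last point: the sketch below compares the probability of the same "five standard deviation" event under a normal model and under a fat-tailed alternative. The choice of a Student-t distribution with 3 degrees of freedom is an arbitrary illustrative stand-in for a fat-tailed model, not anything used in the sources cited above.

```python
# Sketch: thin-tailed vs fat-tailed models of the same "5 standard deviation" event.
# The Student-t with 3 degrees of freedom is an arbitrary illustrative choice,
# rescaled to unit variance so the comparison is like-for-like.
import math
from scipy import stats

threshold = 5.0                   # a "5-sigma" deviation
df = 3
t_std = math.sqrt(df / (df - 2))  # standard deviation of a Student-t with df=3

p_thin = stats.norm.sf(threshold)           # P(X > 5 std devs) under the normal model
p_fat = stats.t.sf(threshold * t_std, df)   # the same event under the fat-tailed model

print(f"Normal model     : {p_thin:.1e}")   # ~2.9e-07
print(f"Fat-tailed model : {p_fat:.1e}")    # ~1.7e-03
print(f"Ratio            : {p_fat / p_thin:,.0f}x more likely under fat tails")
```

The point is not the particular numbers but the gap between them: a model family with thin tails can make extreme outcomes look thousands of times rarer than a fat-tailed alternative fitted to the same everyday data.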
The exact level of an infinite impact threshold should not be the focus, but rather the fact that such thresholds exist and that an order of magnitude should be estimated. 57 During the process of writing the report, experts suggested that a relatively quick death of two billion people could be used as a tentative number until more research is available. 58 With current trends undermining ecological and social resilience it should be noted that the threshold level is likely to become lower as time progresses. In the context of global risks with potentially infinite impact, the possibility of establishing global F-N curves is worth exploring. One of the most common and flexible frameworks used for risk criteria divides risks into three bands. 59 The bands are expressed by F-N curves. When the frequency of events which cause at least N fatalities is plotted against the number N on log-log scales, the result is called an F-N curve. 60 If the frequency scale is replaced by annual probability, then the resulting curve is called an f-N curve. The concept for the middle band when using F-N curves is ALARP. It is a term often used in the area of safety-critical and safety-involved systems. 62 The ALARP principle is that the residual risk should be as low as reasonably practicable. The upper band, the unacceptable or intolerable region, is usually the area above the ALARP area (see figure 8). By using F-N curves it is also possible to establish absolute impact levels that are never acceptable, regardless of probability (figure 7, based on an actual F-N curve showing an absolute impact level that is defined as unacceptable). This has been done in some cases for local projects. The infinite threshold could be used to create an impact limit on global F-N curves used for global challenges in the future. Such an approach would help governments, companies and researchers when they develop new technical solutions and when investing in resilience. Instead of only reducing risk, such an approach encourages the building of systems which cannot have negative impacts above a certain level.
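A minimal sketch of how such a criterion with an absolute impact cap could be expressed is shown below. The anchor frequency, slope, band width and cap are hypothetical placeholders chosen for illustration, not values proposed in this report.

```python
# Sketch of an F-N style criterion with an absolute impact cap.
# The anchor frequency, slope, band width and cap below are hypothetical
# placeholders, not values proposed in this report.

def classify(frequency_per_year, fatalities,
             anchor_freq=1e-3, anchor_n=100, slope=1.0, absolute_cap=2e9):
    """Return 'unacceptable', 'ALARP' or 'broadly acceptable' for one scenario.

    The tolerable frequency for N fatalities follows a straight line on
    log-log axes: F_tolerable(N) = anchor_freq * (anchor_n / N) ** slope.
    Impacts at or above `absolute_cap` are treated as never acceptable,
    regardless of how low their estimated frequency is.
    """
    if fatalities >= absolute_cap:
        return "unacceptable (above absolute impact cap)"
    tolerable = anchor_freq * (anchor_n / fatalities) ** slope
    if frequency_per_year > tolerable:
        return "unacceptable"
    if frequency_per_year > tolerable / 100:  # two decades below the line
        return "ALARP (reduce as low as reasonably practicable)"
    return "broadly acceptable"

print(classify(1e-2, 1_000))          # well above the criterion line -> unacceptable
print(classify(1e-5, 1_000))          # within the middle band -> ALARP
print(classify(1e-8, 1_000))          # far below the line -> broadly acceptable
print(classify(1e-9, 3_000_000_000))  # above the cap: never acceptable
```

The design choice worth noting is the last branch: with an absolute cap, no estimated frequency, however small, can make an impact of that size acceptable, which is exactly the property an infinite impact threshold is meant to add to ordinary F-N criteria.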
Today no established methodology exists that provides a constantly updated list of risks that threaten human civilisation, or even all human life. Given that such a category can help society to better understand and act to avoid such risks, and to better understand the relations between these risks, it can be argued that a name for this category would be helpful. 65 Naming something that refers to the end of humanity is in itself a challenge, as the very idea is so far from our usual references that the intuitive reaction of many will be to dismiss any such thing. In one way the name is not very important, so long as people understand the impacts and risks associated with it. Still, a name is symbolic and can either help or hinder efforts to win support for establishing the new category. The work to establish a list of risks with infinite impact evolved from "existential risk", the philosophical concept that inspired much of the work to establish a clearly defined group of risks. The reason for not using the concept "existential risk and impact" for this category, besides the fact that existential impact is also used in academic contexts to refer to a personal impact, is that the infinite category is a smaller subset of "existential risk" and this new category is meant to be used as a tool, not a scientific concept. Not only should the impacts in the category potentially result in the end of all human life, it should also be possible to affect the probability and/or impact of the risk. There must also exist an agreed methodology, such as the one suggested in this report, that decides which risks do and do not belong on the list. Another concept that the category relates to is "global catastrophic risk", as it is one of the concepts most used among academics interested in infinite impacts. However, it is vague enough to be used to refer to impacts ranging from a few thousand deaths to the end of human civilisation. Already in use but not clearly defined, it includes both the academic concept of existential risk and the category of risks with infinite impacts.

Pros

The concept used in this report is "infinity". The reason for this is that many of the challenges relate to macroeconomics and its challenges in relation to the kind of impacts that the risks in this report focus on. Further, the name clearly highlights the unique nature of these risks without any normative judgements. Still, infinity is an abstract concept and it might not be the best way to communicate this unique group of risks to all stakeholders. In the same way as it can be hard to use singularity to describe a black hole, it can be difficult to use infinity to describe a certain risk. If people can accept that it is only from a specific perspective that the infinity concept is relevant, it could be used beyond the areas of macroeconomics. Two other concepts that have also been considered during the process of writing this report are "xrisks" and "human risk of ruin". Xrisk has the advantage, and disadvantage, of not really saying anything at all about the risk. The positive aspect is that the name can be associated with the general concept of extinction and the philosophical concept of existential risk, as both have the letter x in them. The disadvantage is that x often represents the unknown and can therefore relate to any risk. There is nothing in the name that directly relates to the kind of impacts that the category covers, so it is easy to interpret the term as just unknown risks. Human risk of ruin has the advantage of a direct link to an existing concept, risk of ruin, which refers to a very specific state where all is lost. Risk of ruin is used in gambling, insurance and finance, fields that can all make very important contributions to the work on this new category of risk. The resemblance to a well-established existing concept could be both a strength and a liability. There will not be a perfect concept, and the question is which concept can best balance being easy to understand, being acceptable where policy decisions need to be made, and being acceptable to all the key groups relevant to work in this area. During the process of finding a name for this category, inspiration was taken from how new concepts have been introduced in the past, from irrational numbers and genocide to sustainable development and the Human Development Index. So far "infinite risk" can be seen as the least bad concept in some areas, and "xrisks" and "human risk of ruin" the least bad in others.
The purpose of this report is to establish a methodology to identify a very specific group of risks, and to begin a process in which these risks will be addressed in a systematic and appropriate way. The issue of naming this group of risks will be left to others. The important thing is that the category gets the attention it deserves. 68 Below is an overview of the process when different names were discussed.

…But through history and today it is mainly used for a religious end-of-time scenario. Its strong links to unscientific doom-mongers make it probably unsuitable for a scientific concept.
5. End-of-the-world risk - belongs to the irrational doomsday narratives and so is probably unsuitable for scientific risk assessments.
6. Extreme risk - is vague enough to describe anything beyond the normal, so it is probably unsuitable for risk assessments of this magnitude.
7. Unique risk - is even vaguer, as every risk is unique in some way. Probably best avoided in risk assessments.
8. Collapse risk - is based on Jared Diamond's thinking. 69 There are many different kinds of collapse and only a few result in infinite impact.

2.3 Global challenges and infinite impact

Estimations of impact
Only literature where there is some estimation of impact that indicates the possibility of an infinite impact is included.

Leading organisations' priorities
In order to increase the probability of covering all relevant risks, an overview of leading organisations' work was conducted. This list was then compared with the initial list and subjected to the same filter regarding the possibility of affecting the probability or impact.

Possibility of addressing the risk
From the risks gathered from literature and organisations, only those where the probability or impact can be affected by human actions are included.

Expert review
Qualitative assessment: expert review in order to increase the probability of covering all relevant global risks. 73

The methodology for including global risks with a potentially infinite impact is based on a scientific review of key literature, with a focus on peer-reviewed academic journals, using keyword searches of both Web of Knowledge 74 and Google Scholar 75 combined with existing literature overviews in the area of global challenges. This also included a snowball methodology, where references in the leading studies and books were used to identify other scientific studies and books. In order to select words for a literature search to identify infinite impacts, a process was established to identify words in the scientific literature connected to global challenges with potentially infinite impacts. Some words generate a lot of misses, i.e. publications that use the term but are not relevant to the focus of this report. For example, "existential risk" is used in business, and "human extinction" is used in research on memory and cognition. Some search terms produced relatively few hits; for example, "global catastrophic risk" is not used much. Other words are only used by people within a specific research community: few use "existential risk" in our sense unless they are drawing on Nick Bostrom's work. The term "global catastrophe" was identified as a phrase that referred almost exclusively to extremely negative impacts on humans, and was used by a diversity of researchers, not just people in one research community.
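A minimal sketch of this kind of keyword screening is given below. The record titles, abstracts and search terms are invented placeholders used only to illustrate the filtering step; they are not drawn from the actual bibliography.

```python
# Sketch of the keyword screening step described above.
# Records and search terms are invented placeholders for illustration.

SEARCH_TERMS = ["existential risk", "human extinction",
                "global catastrophic risk", "global catastrophe"]

records = [
    {"title": "Survey of global catastrophe scenarios",
     "abstract": "Reviews pathways to global catastrophe and civilisation collapse."},
    {"title": "Existential risk in small business lending",
     "abstract": "Discusses existential risk to firms facing insolvency."},
]

def matches(record, terms):
    """Return the search terms that appear in a record's title or abstract."""
    text = (record["title"] + " " + record["abstract"]).lower()
    return [t for t in terms if t in text]

for rec in records:
    hits = matches(rec, SEARCH_TERMS)
    print(f"{rec['title']!r}: matched {hits or 'nothing'}")

# The second record illustrates a "miss" in the sense used above: it contains
# the phrase but is not about global risks, so a manual relevance check is
# still needed after the keyword stage.
```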
A list of 178 relevant books and reports was established, based on what other studies have referred to and/or on what groups interviewed during the process see as landmark studies. They were selected for a closer examination regarding the challenges they include. 76 The full bibliography, even with its focus on publications of general interest, is still rather long. So it is helpful to have a shorter list focused on the highlights: the most important publications based on how often they are quoted, how widespread their content (methodology, lists, etc.) is, and how often key organisations use them.
- Influential in developing the field. This includes publications that are highly cited 77 and those that have motivated significant additional research. They are not necessarily the first publications to introduce the concepts they discuss, but for whatever reason they have proved important in advancing research.
- State of the art. This includes publications developing new concepts at the forefront of global challenges research as well as those providing the best discussions of important established concepts. Reading these publications would bring a researcher up to speed with current research on global challenges. So they are important for the quality of their ideas.
- Covers multiple global challenges (at least two). Publications that discuss a variety of global challenges are of particular importance because they aid in identifying and comparing the various challenges. This process is essential for research on global risks to identify boundaries and research priorities.

In order to identify which global challenges are most commonly discussed, key surveys were identified and coded. First, a list of publications that survey at least three global challenges was compiled, and they were then scanned to find which challenges they discussed. The publications that survey many global challenges were identified from the full bibliography. Publications from both the academic and popular literature were considered. Emphasis was placed on publications of repute or other significance. 78 To qualify as a survey of global challenges, the publication had to provide an explicit list of challenges or to be of sufficient length and breadth to discuss a variety of challenges. Many of the publications are books or book-length collections of articles published in book form or as special issues of scholarly journals. Some individual articles were also included because they discussed a significant breadth of challenges. A list of 34 global challenges was developed based on the challenges mentioned in the publications. A spreadsheet containing the challenges and the publications was created to record mentions of specific challenges in each publication to be coded. Then each publication was scanned in its entirety for mentions of global challenges. Scanning by this method was necessary because many of the publications did not contain explicit lists of global challenges, and the ones that did often mentioned additional challenges separately from their lists. So it was not required that a global challenge be mentioned in a list for it to be counted; it only had to be mentioned somewhere in the publication as a challenge. Assessing whether a particular portion of text counts as a global challenge, and which category it fits in, sometimes requires interpretation. This is inevitable for most types of textual analysis or, more generally, for the coding of qualitative data.
The need for interpretation in this coding was heightened by the fact that the publications were often not written with the purpose of surveying the breadth of global challenges, and even the publications that were intended as surveys did not use consistent definitions of global challenges. The coding presented here erred on the side of greater inclusivity: if a portion of text was in the vicinity of a global challenge, then it was coded as one. For example, some publications discussed risks associated with nuclear weapons in a general sense without specifically mentioning the possibility of large-scale nuclear war. These discussions were coded as mentions of nuclear war, even though they could also refer to single uses of nuclear weapons that would not rate as a global challenge. This more inclusive approach is warranted because many of the publications were not focused exclusively on global challenges. If they had been, it is likely that they would have included these risks in their global challenge form (e.g. nuclear war), given that they were already discussing something related (e.g. nuclear weapons). Below are the results from the overview of the surveys. However, it is important to note that many more studies exist that focus on individual global risks, but often without including low-probability, high-impact outcomes. 80 How much work actually exists on human extinction and infinite impacts is therefore difficult to assess. The list of risks found in the scientific literature was checked against a review of the challenges that key organisations working on global challenges include in their material and on their webpages. This was done to ensure that no important risk was excluded from the list. The coding of key organisations paralleled the coding of key survey publications. Organisations were identified via the global catastrophic risk organisation directory published by the Global Catastrophic Risk Institute. 82 They were selected from the directory if they worked on a variety of global challenges: at least three, and ideally more. The reason for focusing on those that work on multiple challenges is to understand which challenges they consider important and why. In contrast, organisations that focus on only one or two challenges may not be able to adjust their focus according to which challenges they consider the most important. The organisation coding used the same coding scheme developed for coding survey publications. References to specific global challenges were obtained from organisations' websites. Many have web pages which list the topics they work on. Where possible, references to global challenges were pulled from these pages. Additional references to these challenges were identified by browsing other web pages, including recent publications. While it is possible that some of these organisations have worked on global challenges not mentioned on the web pages that were examined, overall the main challenges that they have worked on have probably been identified and coded. So the results should give a reasonably accurate picture of which global challenges these organisations are working on. Organisations working with global challenges were initially selected on the basis of the literature overview.
A snowball sampling was conducted based on the list of organisations identified, according to whether they claimed to work on global challenges and/or their web page contained information about "existential risk", "global catastrophic risk", "human extinction" or "greatest global challenges". Cross-references between organisations and input during the workshops were also used to identify organisations. An initial list of 180 organisations which work with global challenges was established. Based on their production of relevant literature, on which other organisations referred to them, and/or on whether they are seen as influential by groups interviewed during the process, a short-list of organisations was selected for a closer examination of the challenges they work with. Then those working with multiple challenges were selected, resulting in a list of 19 organisations. 83 Below is an overview of the results for the key organisations working with multiple global challenges. The WEF describes its perception methodology as follows: "This approach can highlight areas that are of most concern to different stakeholders, and potentially galvanise shared efforts to address them." 85 The question which people are asked to answer is: "What occurrence causes significant negative impact for several countries and industries?" 86 The respondents are then asked to provide a number on two scales from 1-4, one for impact and another for likelihood (within 10 years). 87 It is then up to the respondent to define what 1-4 means, so the major value of the report is to track the changes in perception over the years. Such perception approaches are obviously very interesting and, as the WEF states, can influence actual probability, as readers' decisions will be influenced by how different challenges are perceived. Still, it is important to remember that the report does not provide an assessment of the actual probability (0-100%) or of the impact (and certainly not of the impact in terms of human suffering, as many respondents likely define risk in monetary terms for their own company or country). An overview of WEF reports from the last ten years indicates that the challenges likely to occur within a five-year horizon, like the first signs of climate change, governmental failure and traditional pandemics, are identified. On the other hand, challenges which have very big impacts but lower probability, like extreme climate change, nanotechnology, major volcanoes, AI, and asteroids, tend to get less, or no, attention. An important question to explore is whether a focus on the smaller but still serious impacts of global challenges can result in an increased probability of infinite impacts. For example, there are reasons to believe that a focus on incremental adaptation instead of significant mitigation could be a problem for climate change, as it could result in high-carbon lock-in. 88 Other research indicates that a focus on commercially relevant smaller pandemics could result in actions that make a major pandemic more likely. It is argued that this could happen, for example, by encouraging increased trade of goods while investing in equipment that scans for the types of pandemic that are already known. Such a system can reduce the probability of known pandemics while at the same time increasing the probability of new and more serious pandemics. 89 This is an initial list.
Additional risks will be added as new scientific studies become available, and some will be removed if steps are taken to reduce their probability 90 and/or impact so that they no longer meet the criteria.

Four categories of global challenges

The challenges included in this report belong to four categories. The first, current challenges, includes those where decisions today can result directly in infinite impacts. They are included even if the time between action and impact might be decades, as with climate change. The second category is exogenous challenges, those where decisions do not, currently, influence probability, but can influence impact. The third category is emerging challenges, those where technology and science are not advanced enough to pose a severe threat today, but where the challenges will probably soon be able to have an infinite impact. Many risks could severely damage humanity but have not been included in this report. They were excluded for one or more of three reasons:
1. Limited impact. Many challenges can have significant local negative effects without approaching the "2 billion negatively affected" criterion: tsunamis, for example, and chemical pollution.
2. No effective countermeasures. The report focuses on promoting effective interventions and so ignores challenges where nothing useful can be done to prevent or mitigate the impact, as with nearby gamma-ray bursts.
3. Included in other challenges. Many challenges are already covered by others, or have a damage profile so similar that there seemed no need to have a separate category. Population growth, for one, is an underlying driver significant for climate change and ecosystem catastrophe, but without direct large-scale impacts.

The challenges mentioned in the reviewed literature and organisations which are not included in this report often refer to economic damage such as "fiscal crises" or "unemployment". While such impacts could have far-reaching consequences, they are obviously of another magnitude than those included here. Some of the risks that were suggested and/or which appear in books and reports about global risks were rejected according to the criteria above. They include: 91
1. Astronomical explosion/nearby gamma-ray burst or supernova. 92 These seem to be events of extremely low probability which are unlikely to be survivable. Milder versions of them (where the source is sufficiently far away) may be considered in a subsequent report.
→ Not included due to: No effective countermeasures
2. False vacuum collapse. If our universe is in a false vacuum and it collapses at any point, the collapse would expand at the speed of light, destroying all organised structures in the universe. 93 This would not be survivable.
→ Not included due to: No effective countermeasures
3. Chemical pollution. Increasingly, there is particular concern about three types of chemicals: those that persist in the environment and accumulate in the bodies of wildlife and people, endocrine disruptors that can interfere with hormones, and chemicals that cause cancer or damage DNA.
→ Not included due to: Limited impact
4. Dangerous physics experiments creating black holes/strangelets, including high-energy physics. These risks are of low probability 94 and have been subsumed under "Uncertain Risks".
→ Not included due to: Included in other challenges
5. Destructive solar flares.
Though solar flares or coronal mass ejections could cause great economic damage to our technological civilisation, 95 they would not lead directly to mass casualties unless the system lacks basic resilience. They have been subsumed in the Global System Collapse category.
→ Not included due to: Limited impact/included in other challenges
6. Moral collapse of humanity. Humanity may develop along a path that we would currently find morally repellent. The consequences of this are not clear-cut, and depend on value judgements that would be contentious and unshared. 96

2.5 The resulting list of global risks using this methodology

Sometimes it can take a very long time for a system to stabilise again. Looking at all the biotic crises over the past 530 million years, a research team from Berkeley found an average of 10 million years between an extinction and a subsequent flourishing of life. 102 What makes things difficult is that once a system is unstable, a small disaster can have knock-on effects: the death of one Austrian nobleman can result in an ultimatum which draws in neighbours until Australians end up fighting Turks and the First World War is well under way, to be followed by communism, the Second World War and the Cold War. The challenge of understanding complex systems includes the fact that many of them have multiple attractors, including what are called "strange attractors". 103 Changes are close to linear as long as the system does not change very much, but once it is pushed out of balance it will get closer to other attractors, and when those become strong enough the system will tend to move towards chaos until a new balance is achieved around the new attractor. 104 None of the risks in this report is likely to result directly in an infinite impact, and some cannot do so physically. All the risks, however, are big enough to reach a threshold where the social and ecological systems become so unstable that an infinite impact could ensue, as the graph below shows. This graph and its accompanying text explain how an event that reaches a threshold level could cascade into even worse situations, via civilisation collapse 105 to human extinction. The graph also seeks to illustrate the importance of ensuring ecological and social resilience, the two major insurance policies we have against a negative spiral after a major impact that takes us beyond the infinite threshold.
1. Social and ecosystem resilience. Resilient systems are naturally resistant to collapse, though this often comes at the cost of efficiency. 106 The more resilient the system, the more likely it is to be able to adapt to even large disasters. Improving resilience ahead of time can improve outcomes, even if the nature of the disaster isn't known.
2. General pre-risk collapse countermeasures. This category consists of all those measures put into place ahead of time to prevent civilisation collapse. It could include, for instance, measures to ensure continuity of government or prevent the breakup of countries (or to allow these breakups to happen with the minimum of disruption). At the same time it should be noted that these kinds of measures could also trigger the breakdown.
3. General mitigation and resilience.
This category consists of all measures that can reduce the impact of risks and prevent them getting out of hand (excluding social and ecosystem measures, which are important and general enough to deserve their own category).
4. Pre-risk rebuilding enablers. On top of attempting to prevent collapses, measures can also be taken to enable rebuilding after a collapse. 107 This could involve building stores of food, technology, or crucial reconstruction tools. 108 Alternatively, it could involve training key individuals or institutions (such as the crews of nuclear submarines) to give them useful post-collapse skills.
5. Long-term impact. Some risks (such as climate change) have strong long-term impacts after years or even decades. Others (such as pandemics) are more likely to have only a short-term impact. This category includes only direct long-term impacts.
6. Post-risk politics. The political structures of the post-risk world (governmental systems, conflicts between and within political groupings, economic and political links between groups) will be important in determining whether a large impact leads ultimately to civilisation collapse or whether recovery is possible.
7. Post-risk collapse countermeasures. These are the countermeasures that the post-risk political structures are likely to implement to prevent a complete civilisation collapse.
8. Maintaining a technology base. Current society is complex, with part of the world's excess production diverted into maintaining a population of scientists, engineers and other experts capable of preserving knowledge of technological innovations and developing new ones. In the simpler post-collapse societies, with possibly much lower populations, it will be a challenge to maintain current technology and prevent crucial skills from being lost. 109
9. Post-collapse politics. Just as post-risk politics are important for preventing a collapse, post-collapse politics will be important in allowing a recovery. The ultimate fate of humanity may be tied up with the preservation of such concepts as human rights, the scientific method and technological progress.
10. Post-collapse external threats and risks. Simply because one risk has triggered the collapse of human civilisation, that does not mean that other risks are no longer present. Humanity will have much less resilience to deal with further damage, so the probability of these risks is important in determining the ultimate fate of humanity.
11. Anthropic effects. We cannot observe a world incapable of supporting life, because we could not be alive to observe it. When estimating the likelihood of disasters and recovery it is very important to take this effect into consideration and to adjust probability estimates accordingly. 110
12. Long-term reconstruction probability. A post-collapse world will differ significantly from a pre-industrial-revolution world. Easy access to coal and oil will no longer be possible. In contrast, much usable aluminium will have been extracted and processed and will be left lying on the surface for easy use. Thus it will be important to establish how technically possible it may be to have a second industrial revolution, and further reconstruction up to current capabilities, without creating the problems that the first industrial revolution resulted in.

For the selection of events, information from specialised bodies and scientific journals in the area of global risk was gathered.
111 Using keywords related to the various risks, a global selection of events was sought, along with original sourcing in academic or official sources. The list of events was then ranked based on their risk relevance, i.e. their effect on the probability and/or the impact of the challenge. To finalise the list, a group of experts was consulted by email and a draft overview of the challenges was presented at a workshop at the Future of Humanity Institute (FHI) in Oxford, where additional input was provided on selection and content. Issue experts were then consulted before the final list of events was established. 112 Four categories were used to classify the different events:
1. Policy: global or national policy initiatives that affect probability and/or impact
2. Event: …

Probability

Many of the expected impacts of climate change are well known, including a warming climate, more severe storms and droughts, rising sea levels, ocean acidification, and damage to vulnerable ecosystems. 114 As for all risks there are uncertainties in the estimates, and warming could be much more extreme than the middle estimates suggest. Models tend to underestimate uncertainty 115 (especially where the impact on humanity is concerned, 116 where the effect also depends on modellers' choices such as the discount rate 117 ), so there is a probability 118 that humanity could be looking at 4°C 119 or even 6°C 120 of warming in the coming decades. This could arise from positive feedback loops, such as the release of methane from permafrost 121 or the dieback of the Amazon rainforests, 122 that strengthen the warming effect. So far, efforts at curbing emissions have been only moderately successful and are still very far from what is needed. 123 The impact of global warming, whether mild or severe, would be felt most strongly in poorer countries. Adaptation that can address significant warming is often very expensive, 124 and many of the poorest countries are in the tropics and sub-tropics that would be hardest hit (they could become completely uninhabitable for the highest range of warming 125 ). Mass deaths and famines, social collapse and mass migration are certainly possible in this scenario. Combined with shocks to the agriculture and biosphere-dependent industries of the more developed countries, this could lead to global conflict and possibly civilisation collapse, to the extent that many experts see climate change as a national security risk. 126 Further evidence of the risk comes from indications that past civilisation collapses have been driven by climate change. 127 Extinction risk could develop from this if the remaining human groups were vulnerable to other shocks, such as pandemics, possibly exacerbated by the changed climate. 128 There is some evidence of 6°C climate change causing mass extinction in the past, 129 but a technological species such as ourselves might be more resilient to such a shock. A unique feature of the climate change challenge is what is called geoengineering. 130 Though this could, if it works, reduce many impacts at a relatively low cost, it would not do so evenly. Geoengineering would possibly reduce the impacts of climate change in some countries, benefiting them while leaving others to suffer. 131 This could lead to greater political instability. One of the most popular geoengineering ideas, stratospheric sulphate aerosols, suffers from the weakness that it must be continuous.
132 If for any reason it stopped (such as a civilisation collapse), warming would resume at a significantly higher pace, reaching the point where it would have been without geoengineering. The speed of this rebound would put extra pressure on the ecosystem and the world's political system. So the biggest challenge is that geoengineering may backfire and simply make matters worse. 134 Five important factors in estimating the probabilities and impacts of the challenge:
- The course of international politics is extremely hard to predict, even for political scientists. 135

To constrain the rise in global average temperature to less than 2°C above pre-industrial levels, a maximum of around 565-886 billion tonnes (Gt) of carbon dioxide could be emitted before 2050. 137 The world's proven fossil fuel reserves amount to 2,860 Gt of CO2, however, and are viewed as assets by companies and countries. Since it is likely that these assets cannot be realised, these entities are over-valued at current prices: arguably, a "carbon bubble". The report provides evidence that serious risks are growing for high-carbon assets, and aims to help investors and regulators manage these risks more effectively and prepare for a global agreement on emissions reductions. It indirectly highlights part of the challenge of emissions reductions: they will mean the loss of highly valuable assets to corporations and governments.

02-May-13: CO2 at 400 ppm for the first time in more than 800,000 years 138 - Event
The Mauna Loa carbon dioxide record, also known as the "Keeling Curve", is the world's longest unbroken record of atmospheric CO2 concentrations. It recently reached 400 ppm (parts per million) of CO2. Such concentrations have not been reached for at least 800,000 years, 139 placing humanity in a historically unprecedented situation. Prior to the Industrial Revolution, natural climate variations caused atmospheric CO2 to vary between about 200 ppm during ice ages and 300 ppm during the warmer inter-glacial periods. The last time concentrations were as high as they are now seems to have been during the Mid-Pliocene, about 3 million years before the present, when temperatures were 2-3°C warmer; geological evidence and isotopes agree that sea level was at least 15 to 25 m above today's levels, with correspondingly smaller ice sheets and lower continental aridity.
- Human influence on the climate system is clear. This is evident from the increasing greenhouse gas concentrations in the atmosphere, positive radiative forcing, observed warming, and understanding of the climate system. It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century.
- Each of the last three decades has been successively warmer at the Earth's surface than any preceding decade since 1850.
- Continued emissions of greenhouse gases will cause further warming and changes in all components of the climate system. Limiting climate change will require substantial and sustained reductions of greenhouse gas emissions.
- The oceans will continue to warm during the 21st century. Heat will penetrate from the surface to the deep ocean and affect ocean circulation. Further uptake of carbon by the oceans will increase ocean acidification. Global mean sea level will continue to rise during the 21st century.
-It is very likely that Arctic sea ice cover will continue to shrink and become thinner. Global glacier volume will further decrease. -Most aspects of climate change will persist for many centuries even if emissions of CO2 are stopped. The global environment can be considered a global public good (i.e. non-excludable and non-rivalrous). 150 Economic theory claims that such goods will be undersupplied by the market. 151 Hence the importance of trans-national negotiations to address climate change. Despite the importance of the subject, the main achievement of the Warsaw negotiations was to keep talks on track for more negotiations in 2015. 152 Though there was general agreement on the necessity of cutting carbon emissions, the dispute was over how to share the burden of doing so. In this instance, the debate was between more-and less-developed countries, with the latter demanding compensation from the former to help them cope with the burden of reducing emissions. That particular dispute was papered over, 153 but similar ones will be likely in future due to the range of different actors and their divergent agendas. The likelihood of a full-scale nuclear war between the USA and Russia has probably decreased in recent decades due to some improvements in relations between these two countries and reductions in the size of their arsenals. Still, the potential for deliberate or accidental 165 nuclear conflict has not been removed, with some estimates putting the risk of nuclear war in the next century or so at around 10% 166 -it may have been mostly down to luck that such a war did not happen in the last half century 167 . A nuclear war could have a range of different impacts. At the lowest end is the most obvious and immediate impact: destruction and death in major cities across the world, due to the explosions themselves and the radioactive fallout. But even if the entire populations of Europe, Russia and the USA were directly wiped out in a nuclear war -an outcome that some studies have shown to be physically impossible 168 , given population dispersal and the number of missiles in existence 169 -that would not raise the war to the first level of impact, which requires > 2 billion affected. 170 A larger impact would depend on whether or not the war triggered what is often called a nuclear winter or something similar. 171 The term refers to the creation of a pall of smoke high in the stratosphere that would plunge temperatures below freezing around the globe and possibly also destroy most of the ozone layer. 172 The detonations would need to start firestorms in the targeted cities, which could lift the soot up into the stratosphere. 173 There are some uncertainties about both the climate models and the likelihood of devastating firestorms, 174 but the risks are severe and recent models 175 have confirmed the earlier 176 analysis. Even a smaller nuclear conflict (between India and Pakistan, for instance) could trigger a smaller nuclear winter which would place billions in danger. 177 The disintegration of the global food supply would make mass starvation and state collapse likely. As the world balance of power would be dramatically shifted and previous ideological positions called into question, large-scale war would be likely. This could lead to a civilisation collapse. Extinction risk is only possible if the aftermath of the nuclear war fragments and diminishes human society to the point where recovery becomes impossible 178 before humanity succumbs 179 to other risks, such as pandemics. 
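Annual or per-century figures of this kind are easier to interpret when compounded over a time horizon. The sketch below performs only that simple conversion, under the simplifying assumption of a constant and independent annual probability; it is not the fault-tree analysis discussed later in this section, and the horizons shown are arbitrary illustrative choices.

```python
# Cumulative probability of at least one war, assuming a constant and
# independent annual probability p (a deliberate oversimplification).
def cumulative_risk(p_annual: float, years: int) -> float:
    return 1.0 - (1.0 - p_annual) ** years

# Roughly 10% per century corresponds to about 0.1% per year.
print(f"{cumulative_risk(0.001, 100):.1%} over a century at 0.1% per year")
# The 2%-per-year accidental-war estimate discussed below compounds quickly.
print(f"{cumulative_risk(0.02, 50):.1%} over five decades at 2% per year")
```

Even modest-sounding annual probabilities therefore accumulate into substantial risks over the timescales considered in this report.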
180 Five important factors in estimating the probabilities and impacts of the challenge: 1. How relations between current and future nuclear powers develop. 2. 5. The security of nuclear weapons and materials affects both the probability of nuclear terrorism and the control likelihood of nuclear accidents. 6. The relations between future nuclear powers will be the major determinant of whether a nuclear war breaks out. 7. The relations between current nuclear powers will be a major determinant of the relations between future nuclear powers. 8. The relations between future major nuclear powers will be the major component of determining whether a major nuclear war breaks out. 9. Relations between the USA and Russia (the only current major nuclear powers) will be a major determinant of the relations between future major nuclear powers. and led to increased sanctions 184 against the already isolated nation. 185 North Korea is the only nation to have withdrawn from the Nuclear Non-Proliferation Treaty, 186 and is the only country to have conducted nuclear tests in the 21st century, starting in 2006, 187 as well as developing a ballistic missile capability. 188 It has also been involved in the export of weapons technology, undermining the Treaty. 189 Diplomatic attempts to deal with North Korea (especially on the part of the United States) have generally been inconsistent and unsuccessful. 190 Though the situation remains a potential flashpoint for conventional and nuclear conflict, and its collapse could have disastrous consequences 196 This is but one of the many nuclear accidents 197 and incidents that peppered the Cold War and its aftermath, and which have been revealed only subsequently. We know now that there were at least three occasions -the Cuban missile crisis in 1962, 198 the Petrov incident in 1983 199 and the Norwegian rocket incident in 1995 200 -where a full-scale nuclear war was only narrowly averted. 201 Further information on these incidents, and on how they were interpreted and misinterpreted 202 by the great powers, will be important to estimate the probability of nuclear conflict in the coming decades. On a more positive note, efforts are being made to reduce the probability of inadvertent or accidental nuclear conflicts. 203 24-Jun-13: Report: \"Analysing and Reducing the Risks of Inadvertent Nuclear War Between the United States and Russia\" 204 -Research Though the end of the Cold War has reduced the likelihood of deliberate nuclear war, its impact on the risk of accidental nuclear war is much smaller. The arsenals remain on \"launch on warning\", 205 meaning that there is a possibility for a \"retaliatory\" strike before an attack is confirmed. The most likely cause of such an accident is either a false warning (of which there have been many, with causes ranging from weather phenomena to a faulty computer chip, wild animal activity, and controlroom training tapes loaded at the wrong time) 206 or a misinterpreted terrorist attack. 207 The report attempted a rigorous estimate of the numerical probability of nuclear war. Such numerical rigour is rare, with the exception of Hellman's estimates. 208 This report applied risk analysis methods using fault trees and mathematical modelling to assess the relative risks of multiple inadvertent nuclear war scenarios previously identified in the literature. 
Then it combined the fault tree-based risk models with parameter estimates sourced from the academic literature, characterising uncertainties in the form of probability distributions, and propagating those uncertainties through the fault tree using Monte Carlo simulation methods. Finally, it also performed sensitivity analyses to identify dominant risks under various assumptions. This kind of highly disaggregated analysis is most likely to elicit the best performance and estimates from experts. 209 Their conclusion was that, under the more pessimistic assumption, there was a mean 2% risk of accidental nuclear war a year (a high risk when compounded over several decades), with the risk from false alarms being orders of magnitude higher than that from terrorist attacks. The analysis suggests that the most important inadvertent nuclear war risk factor is the short launch decision time 210 inherent in the "launch on warning" posture. Some ways of improving this were suggested, for instance moving each country's strategic submarines away from the other's coasts.

217 This further serves to emphasise the weakness of international institutions where nuclear arms control is concerned.

15-Nov-13: International Physicians for the Prevention of Nuclear War report: "Nuclear Famine: Two Billion People at Risk?" 218 -Research
This report is one of a series of reports and publications in recent years about the potential impacts of nuclear conflicts. 219 It looked at the likely consequences of a "limited" nuclear war, such as between India and Pakistan. While previous papers had estimated that up to a billion people might be at risk in such a conflict, 220 this report increased the estimate to two billion. The main source of this increase is decreased agricultural production in the United States 221 and in China. 222 A key component of these estimates was the severe agricultural impact of the relatively mild temperature reduction in 1816, the "year without a summer", 223 due mainly to the "volcanic winter" caused by the eruption of Mount Tambora. The report highlights some significant areas of uncertainty, such as whether a small nuclear conflict and its consequences would lead to further conflicts across the world, and doubts whether markets, governments and other organisations could mitigate the negative impacts. The report is a reminder that even small-scale nuclear conflict could have severe consequences.

24-Nov-13: Nuclear deal with Iran may reduce risk of proliferation 224 -Policy
In November, Iran struck a deal with the so-called "P5+1" (the five permanent members of the Security Council, plus Germany). The deal, if it holds, would allow Iran to continue some uranium enrichment, but it would have to submit to inspections to ensure it was not developing a nuclear weapons programme (the deal would also result in eased sanctions in return). There have been long-running fears that Iran may have been attempting to construct a nuclear weapon, 225 resulting in sanctions being imposed on it. 226 This event illustrates the surprising success of the Non-Proliferation Treaty, 227 which came into force in 1970. At the time it was proposed there were fears of very rapid proliferation of nuclear weapons.
228 And though 40 countries or more currently have the knowhow to build nuclear weapons, 229 Species extinction is proceeding at a greatly increased rate compared with historic data 232 , and attempts to quantify a safe ecological operating space place humanity well outside it. 233 Furthermore, there may be signs of a \"sudden\" biosphere collapse, possibly within a few generations. 234 Many of the problems of ecological degradation interact to multiply the damage and (unlike previous, localised collapses) the whole world is potentially at risk, 235 with severe challenges to countering this risk through global policy. 236 If animals are seen to have intrinsic value, 237 or if human quality of life is dependent on a functioning ecosystem, 238 the current situation already represents a large loss. Whether such a loss will extend to human lives depends on technological and political factors -technological, because it seems plausible that some human lifestyles could be sustained in a relatively ecosystem-independent way, at relatively low costs. 239 Whether this can be implemented on a large scale in practice, especially during a collapse, will be a political challenge and whether it is something we want is an ethical question. There is currently more than enough food for everyone on the planet to ensure the nutrition needed, 240 but its distribution is extremely uneven and malnutrition persists. Thus ecological collapse need not have a strong absolute effect in order to result in strong localised, or global, effects. Even a partial collapse could lead to wars, mass migrations, and social instability. It is conceivable that such a scenario, if drawn out and exacerbated by poor decision-making, could eventually lead to mass deaths and even the collapse of civilisation. Extinction risk is possible only if the aftermath of collapse fragments and diminishes human society so far that recovery becomes impossible 241 before humanity succumbs to other risks (such as climate change or pandemics). After a post-civilisation collapse, human society could still be suffering from the effects of ecological collapse, and depending on what form it took, this could make the recovery of human civilisation more challenging than in some of the other scenarios presented here. Five important factors in estimating the probabilities and impacts of the challenge: 243 And yet this biodiversity is being lost at an alarming rate -the rate of extinctions for plants and animals is 100 to 1,000 times higher than their pre-human levels. 244 A variety of methods have been suggested to halt or slow this loss, ranging from putting an explicit value 245 on biodiversity and ecosystem services (human benefits from a multitude of resources and processes that are supplied by ecosystems), 246 to performing triage on the most valuable species. 247 This research paper suggests, however, that there is a lag of several decades between human pressure on the ecosystem and ultimate species extinction. This suggests that many extinctions will continue in decades to come, irrespective of current conservation efforts. 1. \n 05-Apr-13: Ocean data added to Microsoft Eye on Earth project -Initiative In order to safeguard ecological resources, it is important to track and quantify them. This has traditionally been the role of governments or non-governmental organisations. 248 254 (thus privatising that \"common\"). 
A typical example of this behaviour is the collapse of the Grand Banks fisheries off Canada's Atlantic coast in the 1990s, where cod biomass fell by over 95% from its peak and has currently not recovered. 255 It is therefore significant that the European Union has been partly successful in its attempts to control over-fishing through legislation. For instance, despite the fact that North Sea cod remains vulnerable, there has been a recent increase in stock size and a decrease in fish mortality. This may point to the potential for further ecological improvements through well-chosen policy interventions.

In 2013 the IUCN added an additional 4,807 species to its Red List of Threatened Species. This brings the total to about 21,000. Some have argued that we are entering a new geological era in Earth's history: the Anthropocene, 257 in which human actions are one of the major influences on the planet's biosphere. The Red List figures show a fairly steady growth in the (estimated) number of threatened species. This steadiness may be illusory, as the biosphere shows signs that it may be approaching a planetary-scale tipping point, where it may shift abruptly and irreversibly from one state to another. As a result, the biological resources humans presently take for granted may be subject to rapid and unpredictable transformations within a few human generations. 258 This could be seen as a great tragedy beyond purely human concerns, if animals (and animal welfare) are seen to have intrinsic value. 259

Here only worldwide events are included. A widespread endemic disease that is stable in terms of how many people become sick from it is not a pandemic. Infectious diseases have been one of the greatest causes of mortality in history. Unlike many other global challenges, pandemics have happened recently, as we can see where reasonably good data exist. Plotting historic epidemic fatalities on a log scale reveals that these tend to follow a power law with a small exponent: many plagues have been found to follow a power law with exponent 0.26. 261 These kinds of power laws are heavy-tailed 262 to a significant degree. 263 In consequence most of the fatalities are accounted for by the top few events. 264 If this law holds for future pandemics as well, 265 then the majority of people who will die from epidemics will likely die from the single largest pandemic. Most epidemic fatalities follow a power law, with some extreme events - such as the Black Death and Spanish Flu - being even more deadly. 267

There are other grounds for suspecting that such a high-impact epidemic will have a greater probability than usually assumed. All the features of an extremely devastating disease already exist in nature: essentially incurable (Ebola 268 ), nearly always fatal (rabies 269 ), extremely infectious (common cold 270 ), and long incubation periods (HIV 271 ). If a pathogen were to emerge that somehow combined these features (and influenza has demonstrated antigenic shift, the ability to combine features from different viruses 272 ), its death toll would be extreme. Many relevant features of the world have changed considerably, making past comparisons problematic. The modern world has better sanitation and medical research, as well as national and supra-national institutions dedicated to combating diseases. Private insurers are also interested in modelling pandemic risks.
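As a numerical aside on the heavy-tailed claim above, the minimal sketch below (assuming a Pareto-type tail with the cited exponent of 0.26; the minimum event size and number of events are arbitrary illustrative choices) shows why the single largest event tends to dominate total fatalities.

```python
import random

# Toy illustration of a heavy-tailed fatality distribution, assuming
# P(X > x) = (x_min / x)**alpha with alpha = 0.26 (the exponent cited above).
# The scale x_min and the number of events are arbitrary choices.
random.seed(1)
alpha, x_min, n_events = 0.26, 1_000, 200

# Inverse-transform sampling from a Pareto distribution.
fatalities = [x_min * random.random() ** (-1.0 / alpha) for _ in range(n_events)]

share_of_largest = max(fatalities) / sum(fatalities)
print(f"Largest event's share of all simulated fatalities: {share_of_largest:.0%}")
# With such a small exponent, the largest event typically accounts for the bulk
# of the total, matching the qualitative claim in the text.
```

Under a distribution like this, aggregate expected deaths are driven almost entirely by the tail, which is why the emphasis falls on the largest pandemics rather than the typical outbreak.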
273 Set against these mitigating factors is the fact that modern transport and dense human populations allow infections to spread much more rapidly, 274 and there is the potential for urban slums to serve as breeding grounds for disease. 275 Unlike events such as nuclear wars, pandemics would not damage the world's infrastructure, and initial survivors would likely be resistant to the infection. And there would probably be survivors, if only in isolated locations. Hence the risk of a civilisation collapse would come from the ripple effect of the fatalities and the policy responses. These would include political and agricultural disruption as well as economic dislocation and damage to the world's trade network (including the food trade). Extinction risk is only possible if the aftermath of the epidemic fragments and diminishes human society to the extent that recovery becomes impossible 277 before humanity succumbs to other risks (such as climate change or further pandemics).

Five important factors in estimating the probabilities and impacts of the challenge:
1. What the true probability distribution for pandemics is, especially at the tail.
2. The capacity of modern international health systems to deal with an extreme pandemic.
3. How fast medical research can proceed in an emergency.
4. How mobility of goods and people, as well as population density, will affect pandemic transmission.
5. Whether humans can develop novel and effective anti-pandemic solutions.

2. As so much is known about pandemic risks compared with other risks, there are more possibilities for specific pre-pandemic contingency plans.
3. The effectiveness of healthcare systems will be important, especially in less developed nations where the pandemic may overwhelm the system, and then transmit from there to other nations.
4. Global coordination in detection, analysis and treatment is vital for stopping a pandemic in its early stages, and for implementing measures such as quarantines and more advanced countermeasures.
5. Poverty will affect the quality of national healthcare systems, population density and sanitation quality, the movement of local goods and people, and the effectiveness of the political response.
6. Bioterrorists may unleash a pathogen held in storage, such as smallpox.
7. Laboratory security at the top labs is insufficient for the danger at hand, and accidental release is a non-negligible possibility.
8. Pandemics are one of the risks where there is a possibility for a very large number of direct casualties, depending on the severity of the pathogen.
9. Mass casualties and finger-pointing could destabilise the world political and economic systems.
10. If the pathogen is transmissible to farm animals, this could affect the world food supply.
11. It is unlikely the pathogen would be a recurrent, long-term risk, but variants of it could continue to affect people and animals for many years, dependent on its transmissibility and life cycle.
12. Small pandemic scares could improve global coordination on the issue.
13. Increased population density causes increased transmissibility of the pathogen, especially in urban slums.
14. Some pathogens, such as bird flu, depend on regular contact between humans and "reservoir species" in order to evolve into periodically dangerous strains.
15. If antibiotic resistance develops, humanity could see the resurgence of bacteria-based pandemics.
16.
The increased movement of people and products increases the speed and spread of pandemic transmission. 17. Sanitation or its lack will strongly affect the spread of certain pathogens in key areas. 18. 284 The main lesson the WHO drew from that epidemic was that member states generally had communication issues (between ministries of health and decision,makers, and with the public), and were prepared for a pandemic of high severity and appeared unable to adapt their national and subnational responses adequately to a more moderate event. The guidance paper indicates simultaneously the weaknesses of pandemic preparations, the improvements in these preparations, and the continued role of the WHO as global directing and coordinating authority. 24-Jul-13: Bacteria become resistant to some of the last remaining antibiotics 285 -Event Bacterial infections, such as the Black Death, 286 syphilis, 287 and tuberculosis, 288 have been responsible for millions of deaths, over the thousands of years they have co-existed with humanity. Though these diseases have not been eradicated -overall, a third of the world is currently infected with the tuberculosis bacillus 289 -they have been controlled since the introduction of antibiotics, and prognostics have improved tremendously. But recently a rising number of bacteria have developed antibiotic resistance, due mainly to antibiotic over-prescription 290 and use in livestock feed. 291 This Nature report highlights the worrying way in which Enterobacteriaceae (bacteria with a 50% mortality rate) have become resistant to carbapenems, one of the last remaining antibiotics that had been effective against them. 09-Aug-13: Epihack: Digital disease surveillance hack-a-thon 292 -Initiative Beyond the formal, top-down initiatives to deal with pandemics, there are openings for bottom-up, innovative ideas. Epihack attempted to generate just such ideas, through three days of designing and hacking in Cambodia. Descriptions of the winning projects were given: -CoPanFlu: This project included home visits to collect blood samples from 807 homes and weekly follow-up phone calls to document the occurrence of infectious respiratory symptoms. These visits and phone calls caused disturbance to the participants. The new system uses SMS for users to report symptoms. Chart and map visualisation of the data (with full case details) and a fieldwork tracking tool were developed to help the research team analyse and monitor data. -DoctorMe: In addition to all of the popular features of DoctorMe (free health information for the general public), the tool now features a weekly survey for users. The survey will ask participants to select whether they are experiencing any symptoms from a list. \n 88 Global Challenges -Twelve risks that threaten human civilisation -The case for a new category of risks The Spanish flu outbreak was the deadliest short pandemic in history, infecting about a third of the world population (≈ 500 million people) and killing 50-100 million people. 294 There have been numerous flu pandemics in the last few centuries, with three others having around a million casualties (the 1889-1890 Russian Flu, 295 the 1957-1958 Asian Flu, and the 1968-1969 Hong Kong Flu 296 outbreaks). The most recent pandemic was that in 2009, which killed 150,000-500,000 people. 297 Thus any move towards a universal flu vaccine would be of great importance to combating such recurring pandemics. 
This paper, analysing the role of T cells in combating influenza, suggests a way that such a vaccine could be feasible.

28-Nov-13: Difficulties in containing the accidental laboratory escape of potential pandemic influenza viruses 298 -Research
Biosafety laboratories experiment with some of the deadliest of the world's pathogens, and occasionally create new ones. 299 Their number is increasing globally, and their safety record is far from perfect, with several pathogen leaks reported 300 and others suspected 301 (the last smallpox fatality was due to a virus that escaped a lab, 302 after eradication of the virus in the wild). The rate of pathogen escape has been estimated at 0.3% per laboratory per year 303 - a very high probability, given the 44 BSL-4 304 labs and several thousand BSL-3 labs. There have already been three known escapes from BSL-4 labs since 1990. 305 This report uses an agent-based model to analyse whether the accidental laboratory release of pandemic flu viruses could be contained, and concludes that controllability of escape events is not guaranteed.

3-Dec-13: Global pandemic tops poll of insurance industry risks 306 -Initiative
Academics and governmental 307 / supra-governmental 308 organisations have long worried about the risks of pandemics. But such organisations attract certain types of people with specific outlooks, who can be subject to further biases because of their profession and the social milieu surrounding it. 309 Insurers come from a different background, focusing on practical profitability in the business world. It is therefore instructive that they too see pandemics as among the major threats in the world today. This also implies that combating pandemics is of use not only from a humanitarian but also from an economic standpoint.

Current risks: Global System Collapse, Major Asteroid Impact, Synthetic Biology, Unknown Consequences

3.1.5 Global System Collapse

Global system collapse is defined here as either an economic or societal collapse on the global scale. There is no precise definition of a system collapse. The term has been used to describe a broad range of bad economic conditions, ranging from a severe, prolonged depression with high bankruptcy rates and high unemployment, to a breakdown in normal commerce caused by hyperinflation, or even an economically-caused sharp increase in the death rate and perhaps even a decline in population.

Probability

Often economic collapse is accompanied by social chaos, civil unrest and sometimes a breakdown of law and order. Societal collapse usually refers to the fall or disintegration of human societies, often along with their life support systems. It broadly includes both quite abrupt societal failures typified by collapses, and more extended gradual declines of superpowers. Here only the former is included. The world economic and political system is made up of many actors with many objectives and many links between them. Such intricate, interconnected systems are subject to unexpected system-wide failures due to the structure of the network 311 - even if each component of the network is reliable.
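As a toy illustration of that last point, the hypothetical load-redistribution model below (the network size, connectivity, loads and capacities are all arbitrary assumptions, not parameters from this report) shows how a single failure among individually reliable components can cascade through a network.

```python
import random

# Hypothetical cascading-failure toy model: every node carries a load that is
# comfortably within its capacity, but when a node fails its load is passed on
# to its surviving neighbours, which may then be pushed over capacity in turn.
random.seed(0)
n_nodes, degree, capacity = 100, 4, 1.2           # arbitrary illustrative values

neighbours = {i: random.sample([j for j in range(n_nodes) if j != i], degree)
              for i in range(n_nodes)}
load = {i: 1.0 for i in range(n_nodes)}           # each component is fine on its own

failed = {0}                                      # one small initial disruption
frontier = [0]
while frontier:
    node = frontier.pop()
    alive = [j for j in neighbours[node] if j not in failed]
    for j in alive:
        load[j] += load[node] / len(alive)        # redistribute the lost load
        if load[j] > capacity:
            failed.add(j)
            frontier.append(j)

print(f"{len(failed)} of {n_nodes} nodes failed after a single initial failure")
```

In a run like this, one shock can take down most of the network even though every node's normal load is well within its capacity; the definition that follows makes the underlying notion of systemic risk precise.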
This gives rise to systemic risk: it occurs when parts that individually may function well become vulnerable when connected as a system to a self-reinforcing joint risk that can spread from part to part (contagion), potentially affecting the entire system and possibly spilling over to related outside systems. 312 Such effects have been observed in areas as diverse as ecology, 313 finance 314 and critical infrastructure 315 (such as power grids). They are characterised by the possibility that a small internal or external disruption could cause a highly non-linear effect, 316 including a cascading failure that infects the whole system, 317 as in the 2008-2009 financial crisis.

The possibility of collapse becomes more acute when several independent networks depend on each other, as is increasingly the case (water supply, transport, fuel and power stations are strongly coupled, for instance). 318 This dependence links social and technological systems as well. 319 This trend is likely to be intensified by continuing globalisation, 320 while global governance and regulatory mechanisms seem inadequate to address the issue, 321 possibly because the tension between resilience and efficiency 322 can even exacerbate the problem. 323 Many triggers could start such a failure cascade, such as the infrastructure damage wrought by a coronal mass ejection, 324 an ongoing cyber conflict, or a milder form of some of the risks presented in the rest of the paper. Indeed, the main significance of global system collapse as a risk factor is that it may exacerbate or trigger some of the other risks in this paper. But a global system collapse still poses risks on its own. The productivity of modern societies is largely dependent on the careful matching of different types of capital 325 (social, technological, natural...) with each other. If this matching is disrupted, it could trigger a "social collapse" far out of proportion to the initial disruption. 326 States and institutions have collapsed in the past for seemingly minor systemic reasons. 327 And institutional collapses can create knock-on effects, such as the descent of formerly prosperous states into much more impoverished and destabilising entities. 328 Such processes could trigger damage on a large scale if they weaken global political and economic systems to such an extent that secondary effects (such as conflict or starvation) could cause great death and suffering.

Five important factors in estimating the probabilities of various impacts:
1. Whether global system collapse will trigger subsequent collapses or fragility in other areas.
2. What the true trade-off is between efficiency and resilience.
3. Whether effective regulation and resilience can be developed.
4. Whether an external disruption will trigger a collapse.
5. Whether an internal event will trigger a collapse.

4. Building resilience - the ability of system components to survive shocks - should reduce systemic risk.
5. Fragile systems are often built because they are more efficient than robust systems, and hence more profitable.
6. General mitigation efforts should involve features that are disconnected from the standard system, and thus should remain able to continue being of use if the main system collapses.
7.
A system collapse could spread to other areas, infecting previously untouched systems (as the subprime mortgage crisis affected the world financial system, economy, and ultimately its political system). 8. The system collapse may lead to increased fragility in areas that it does not directly damage, making them vulnerable to subsequent shocks. 9. A collapse that spread to government institutions would undermine the possibilities of combating the collapse. 10. A natural ecosystem collapse could be a cause or consequence of a collapse in humanity's institutions. 11. Economic collapse is an obvious and visible way in which system collapse could cause a lot of damage. 12. In order to cause mass casualties, a system collapse would need to cause major disruptions to the world's political and economic system. 13. If the current world system collapses, there is a risk of casualties through loss of trade, poverty, wars and increased fragility. 14. It is not obvious that the world's institutions and systems can be put together again after a collapse; they may be stuck in a suboptimal equilibrium. 15. Power grids are often analysed as possible candidates for system collapse, and they are becoming more integrated. 16. \"The Centre will undertake an economic analysis of the fundamental risks to the financial system, based on an interdisciplinary approach. It will bring together experts from finance, economics, computer science, political science, law and the natural and mathematical sciences. This will allow researchers affiliated to the Centre to investigate how risk is created through feedback loops within and between the financial, economic, legal and political systems. Political decisions, for example, can directly affect people's behaviour in the financial markets, which in turn affects political decision-making and so onwith the outcomes being unexpected and complex.\" Besides the research results produced by the centre, its very existence shows that systemic risk is being taken seriously in academic quarters. 14-Mar-13: Systemic sovereign credit risk has \"deep roots in the flows and liquidity of financial markets.\" 330 -Research It is important to estimate the source of systemic risk. Different mitigation policies should be implemented if sovereign systemic risks spring from financial markets rather than macroeconomic fundamentals. This paper argues that systemic sovereign risks spring from financial markets (through capital flows, funding availability, risk premiums, and liquidity shocks 331 ) rather than from fundamentals. 332 It further estimates that systemic risks are three times larger in eurozone countries than in US states. In order to mitigate or prevent systemic risk, it needs to be monitored. In this paper, the authors set out to clarify the nature and use of the systemic risk monitoring tools that are currently available, providing guidance on how to select the best set of tools depending on the circumstances. The paper breaks down the tools into four categories, each with their strengths and weaknesses: -Single risk/soundness indicators. 334 Indicators based on balance sheet data, such as financial soundness indicators (FSIs), are widely available and cover many risk dimensions. However, they tend to be backward-looking and do not account for probabilities of default or correlation structures. Moreover, only some of these indicators can be used as early-warning tools (e.g., indicators of funding structures). 
Market data can be used to construct complementary indicators for higher-frequency risk monitoring. -Fundamentals-based models 335 rely on macroeconomic or balance sheet data to help assess macro-financial linkages (e.g., macro stress testing or network models). By providing vulnerability measures based on actual interconnectedness and exposures, these models may help build a realistic \"story\". However, they often require long-term data series, assume that parameters and relationships are stable under stressed conditions, and produce only low-frequency risk estimates. -Market-based models. 336 These models uncover information about risks from high-frequency market data and are thus suitable for tracking rapidly-changing conditions of a firm or sector. These approaches are more dynamic, but their capacity to reliably predict financial stress has yet to be firmly established. -Hybrid, structural models. 337 These models estimate the impact of shocks on key financial and real variables (e.g., default probabilities, or credit growth) by integrating balance sheet data and market prices. Examples include the CCA and distance-to-default measures, which compare the market value of an entity's assets to its debt obligations. The paper concludes, however, that the systemic risk monitoring toolkit is incomplete and that \"tools exist to assess most sectors and levels of aggregation, but they provide only partial coverage of potential risks and only tentative signals on the likelihood and impact of systemic risk events. As such, they may not provide sufficient comfort to policymakers.\" 23-Dec-13: Citigroup analysis reports reduced systemic political and financial risks in 2013 and 2014 338 -Initiative Tracking the ebb and flow of the likelihood of various risks is important for estimating where best to direct energy and resources. Even approximate, order of magnitude estimates are sufficient if they establish that some risks are much more dangerous than others (order of magnitude estimates correspond to the \"Class 5 cost estimate\", 339 undertaken at the very beginning of the project, between 0% and 2% of its completion). In 2013, Citigroup analysts predicted that (with caveats) systemic risks would recede in Europe during the year, a prediction which seems to have been vindicated by events. As for the future, Tina Fordham, chief global political analyst at Citigroup Global Markets, predicted that \"systemic political risks will decline in 2014, but country-level and geopolitical risks remain significant.\" It seems positive both that market analysts are tracking systemic risks and that they see them as decreasing (though their focus is mainly on political and financial systemic risks). \n 95 Global Challenges -Twelve risks that threaten human civilisation -The case for a new category of risks Asteroids have caused significant extinction events throughout the Earth's history. The most famous is the Chicxulub impactor, which probably helped cause the extinction of the non-avian dinosaurs and more than 75% of all species. 341 Large asteroid collisions -objects 5 km or more in size -happen approximately once every twenty million years and would have an energy a hundred thousand times greater 342 than the largest bomb ever detonated. 343 A land impact would destroy an area the size of a nation like Holland. 344 Larger asteroids could be extinction level events. Asteroid impacts are probably one of the best understood of all risks in this report. 
Their mechanisms and frequencies are reasonably well estimated. 345 Recent ground-and space-based 346 tracking projects have been cataloguing and tracking the largest asteroids, 347 and have discovered that the risks were lower than was previously feared. 348 The projects are now cataloguing asteroids of smaller size and damage potential. There has been some speculation about possible methods for deflecting asteroids 350 , should they be found on a collision course with the planet. Such means remain speculative, currently, but may become more feasible given technological progress and potentially more affordable access to space. 351 Should an impact occur, though, asteroid impact risks are similar to those of super-volcanoes, in that the main destruction will not be wrought by the initial impact, but by the clouds of dust projected into the upper atmosphere. The damage from such an \"impact winter\" could affect the climate, damage the biosphere, affect food supplies, and create political instability. Though humanity currently produces enough food to feed all humans, 352 this supply is distributed extremely unevenly, and starvation still exists. Therefore a disruption that is small in an absolute sense could still cause mass starvation in the future. Mass starvation, mass migration, political instability and wars could be triggered, possibly leading to a civilisation collapse. Unless the impact is at the extreme end of the damage scale and makes the planet unviable, human extinction is possible only as a consequence of civilisation collapse and subsequent shocks. 353 Five important factors in estimating the probabilities and impacts of the challenge: 1. Whether detection and tracking of asteroids and other dangerous space objects is sufficiently exhaustive. 2. How feasible it is to deflect an asteroid. 3. Whether measures such as evacuation could reduce the damage of an impact. 4. The short-and long-term climate consequences of a collision. 5. Whether our current civilisation could adapt to a post-impact world. 2. National space programmes have always provided the impetus for space flight projects, especially the more speculative and cutting-edge ones. 3. Protecting against asteroid impacts is already accepted as a project worth funding, but increased focus on the problem could increase the ability to predict and prevent such impacts. 4. Asteroid detection and tracking continues to progress well currently, and is key to preventing such collisions in future. 5. Better global coordination is not strongly needed to track or deflect asteroids, but would be important if a large-scale evacuation was needed. 6. General mitigation efforts may help reduce the direct and indirect negative outcomes of an impact by, for instance, equipping people to deal with the changed climate. 7. Unlike many risks, there is no upper bound on how destructive an asteroid impact could be, though the largest impacts are the rarest. 8. The aftermath of an impact could greatly disrupt the world economic and political system. 9. Climate changes would be the most destructive consequences of mediumscale meteor impacts, with the world plunged into an \"impact winter\". 10. The effects of an impact winter could last for a long time. 11. Easier access to space would be important for any plans to actually deflect an asteroid. 12. There are currently no asteroid deflection abilities, but there are many plans that could conceivably be implemented in due course. 13. 
Small asteroid impacts could motivate increased anti-asteroid precautions.
14. With enough warning, it could be possible to preemptively evacuate the impact area.
15. Post-impact politics will be important for reconstruction, adapting to the changed climate, and prevention of further harm.
16. Estimating the likelihood of asteroid impacts suffers from "anthropic shadow" effects: 355 we may be underestimating the danger because if there had been many more impacts in recent times, humans would not currently be around to observe their effects and take them into account.

The Chelyabinsk meteor of February 2013 was the largest such impact since 1908, 357 when an object hit Tunguska in Siberia. 358 The meteor seemed ideal from the risk reduction perspective: a large, visible impact that attracted great attention, and a renewed commitment to asteroid precautions, 359 but no actual fatalities.

19-Jun-13: Space Research Institute of Russian Academy of Science presents a strategy to use small asteroids to deflect hazardous objects from the trajectory of collision with Earth 360 -Research
Though the analysis and tracking of asteroids has progressed rapidly, 361 methods for deflecting a dangerous asteroid, should one be detected, remain speculative. 362 The Space Research Institute of the Russian Academy of Science introduces another approach: selecting small (10-15 m) near-Earth asteroids and causing them to strike a larger dangerous one, altering its trajectory. The more suggestions and ideas there are for such deflections, the more likely it is that one of them will yield an implementable approach.

17-Oct-13: The probability for "Asteroid 2013 TV135" to impact Earth in 2032 is one in 63,000 363 -Event
NASA reports that a 400-metre asteroid has one chance in 63,000 of impacting the Earth. An asteroid this size would produce ocean-wide tsunamis or destroy land areas the size of a small state (Delaware, Estonia). 364 For comparison, the odds of dying from a lightning strike are 1 in 83,930, from a snake, bee or other venomous bite or sting 1 in 100,000, from an earthquake 1 in 131,890, and from a dog attack 1 in 147,717. 365 So the risk of asteroid death, though low, is comparable to more common risks.

28-Oct-13: United Nations to Adopt Asteroid Defence Plan 366 -Policy
The UN plans to set up an International Asteroid Warning Group for member nations to share information about potentially hazardous space rocks. If astronomers detect an asteroid that poses a threat to Earth, the UN's Committee on the Peaceful Uses of Outer Space will help coordinate a mission to launch a spacecraft to slam into the object and deflect it from its collision course. This marks the first time an international body has assigned responsibility for tracking and intercepting dangerous asteroids.

14-Nov-13: Risk of medium asteroid strike may be ten times larger than previously thought 367 -Research
This paper analyses in detail the Chelyabinsk impact, estimated to have had an energy of 500 kilotonnes of TNT. It demonstrates problems with the standard methods for estimating the energy of collisions - derived from nuclear weapons results 368 - and from that deduces that the number of impactors with diameters of tens of metres may be an order of magnitude higher than estimated. It argues that this demonstrates a deviation from a simple power law, and thus that there is a non-equilibrium in the near-Earth asteroid population for objects 10 to 50 metres in diameter. This shifts more of the impact risk to asteroids of these sizes.
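A back-of-the-envelope kinetic-energy calculation shows why even objects tens of metres across carry weapon-scale energies. The parameter values below are illustrative assumptions in the range commonly quoted for the Chelyabinsk object (roughly 19 m across, stony density, about 19 km/s), not figures taken from this report.

```python
import math

# Rough impact energy for a small stony asteroid.
# Assumed illustrative parameters: ~19 m diameter, ~3300 kg/m^3, ~19 km/s.
diameter_m = 19.0
density_kg_per_m3 = 3300.0
speed_m_per_s = 19_000.0

volume_m3 = (4.0 / 3.0) * math.pi * (diameter_m / 2.0) ** 3
mass_kg = density_kg_per_m3 * volume_m3
energy_joules = 0.5 * mass_kg * speed_m_per_s ** 2

KILOTONNE_TNT_J = 4.184e12
print(f"Mass: {mass_kg:.2e} kg")
print(f"Energy: {energy_joules / KILOTONNE_TNT_J:.0f} kilotonnes of TNT equivalent")
# This lands in the ~500-kilotonne range quoted above for Chelyabinsk; the
# energy scales with the cube of the diameter and the square of the speed.
```

That cubic scaling is why the revised population estimates for 10-50 metre objects discussed above matter so much for the overall risk.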
3-Dec-13: SpaceX launches into geostationary orbit 369 -Initiative
Easy access to space is important for all asteroid deflection proposals. 370 Since America retired the Space Shuttle, 371 it has been putting its hope in private space companies. 372 The success of SpaceX opens the possibility of eventual cheaper access to space.

Figure 15: Impact effects by size of Near Earth Object. 354 Consequences, by increasing impactor size:
-Upper-atmosphere detonation of "stones" (stony asteroids) and comets; only "irons" (iron asteroids), <3%, penetrate to the surface.
-Irons make craters (Barringer Crater); stones produce air-bursts (Tunguska). Land impacts could destroy areas the size of a city (Washington, London, Moscow).
-Irons and stones produce ground-bursts; comets produce air-bursts. Ocean impacts produce significant tsunamis. Land impacts destroy areas the size of a large urban area (New York, Tokyo).
-Impacts on land produce craters; ocean-wide tsunamis are produced by ocean impacts. Land impacts destroy areas the size of a small state (Delaware, Estonia).

The eruption which formed the Siberian Traps was one of the largest in history. It was immediately followed by the most severe wave of extinction in the planet's history, 374 the Permian-Triassic extinction event, 375 in which 96% of all marine species and 70% of terrestrial vertebrate species died out. Recent research has provided evidence of a causal link: that the eruption caused the mass extinction. 376 There have been many other super-volcanic eruptions throughout history. 377 The return period for the largest super-volcanoes (those with a Volcanic Explosivity Index 378 of 8 or above) has been estimated at from 30,000 years 379 at the low end to 45,000 or even 700,000 years 380 at the high end. Many aspects of super-volcanic activity are not well understood, as there have been no historical precedents and such eruptions must be reconstructed from their deposits. 381

The danger from super-volcanoes is the amount of aerosols and dust projected into the upper atmosphere. This dust would absorb the Sun's rays and cause a global volcanic winter. The Mt Pinatubo eruption of 1991 caused an average global cooling of surface temperatures by 0.5°C over three years, while the Toba eruption around 70,000 years ago is thought by some to have cooled global temperatures for over two centuries. 382 The effect of these eruptions is best compared with that of a nuclear war. The eruption would be more violent than the nuclear explosions, 383 but would be less likely to ignite firestorms and other secondary effects. Unlike nuclear weapons, a super-volcano would not be targeted, leaving most of the world's infrastructure intact. The extent of the impact would thus depend on the severity of the eruption - which might or might not be foreseen, depending on improvements in volcanic predictions 384 - and the subsequent policy response. Another Siberian Traps-like eruption is extremely unlikely on human timescales, but the damage from even a smaller eruption could affect the climate, damage the biosphere, affect food supplies and create political instability.
A report by a Geological Society of London working group notes: \"Although at present there is no technical fix for averting supereruptions, improved monitoring, awareness-raising and research-based planning would reduce the suffering of many millions of people.\" 385 Though humanity currently produces enough food to feed everyone, 386 this supply is distributed extremely unevenly, and starvation still exists. Therefore a disruption that is small in an absolute sense could still cause mass starvation. Mass starvation, mass migration, political instability and wars could be triggered, possibly leading to a civilisation collapse. Unless the eruption is at the extreme end of the damage scale and makes the planet unviable, human extinction is possible only as a consequence of civilisation collapse and subsequent shocks. 387 Prof. Michael Rampino, New York University, has estimated that a large (1,000 cubic kilometres of magma) super-eruption would have global effects comparable to an object 1.5km in diameter striking the Earth. 388 Five important factors in estimating the probabilities and impacts of the challenge: 1. Whether countries will coordinate globally against super-volcano risk and damage. 2. Further super-volcano research will be important in any mitigation and monitoring efforts. 3. Global coordination and cooperation between nations will determine research levels, the chances of evacuations, and posteruption disruption to the world political and economic system. 4. General mitigation efforts may help reduce the direct and indirect negative impact of an eruption, by, for instance, equipping people to deal with the changed climate. 5. The direct destructive effect of a super-volcano can be extensive, especially in the area around the eruption. 6. A super-volcano's main destructive impact is through its effect on the climate, akin to a nuclear winter cooling effect. This will strongly affect all impact levels, and the disruption to the world's political and economic system. 7. The level of this disruption will determine how well countries cope with the aftermath of the eruption and subsequent climate changes, and whether subsequent conflicts or trade wars will occur, adding to the damage. 8. The long-term climate impact will determine in what state the posteruption world will find itself, relevant both for reconstruction after a collapse and for preventing such a collapse. 9. Whether eruptions are fundamentally predictable or not, and how far in advance, will be very important for many mitigation strategies. 10. Better volcano monitoring and prediction (if possible) will allow such interventions as preemptive evacuations. 11. Evacuations are likely to be the only effective response to an imminent eruption, as super-volcanoes are unlikely to be controllable or divertible. 12. Post-eruption politics will be a consequence of the number of shortterm casualties, and the disruption to the world system. 13. Medium scale volcanic eruptions may persuade leaders to make the risk more of a priority. 14. Estimating the likelihood of super-volcanic eruptions suffers from \"anthropic shadow\" effects: 390 we may be underestimating the danger because if there had been many more eruptions in recent times, humans would not currently be around to observe their effects and take them into account. 
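Before turning to the main events, the return periods quoted earlier for VEI-8 eruptions (roughly 30,000 to 700,000 years) can be converted into rough per-century probabilities. The sketch below assumes eruptions arrive as a Poisson process with a constant rate, which is a simplifying assumption for illustration rather than anything claimed in this report.

```python
import math

# Convert a return period into an approximate probability over a horizon,
# assuming eruptions arrive as a Poisson process with a constant rate.
def prob_within(return_period_years: float, horizon_years: float) -> float:
    rate = 1.0 / return_period_years
    return 1.0 - math.exp(-rate * horizon_years)

for return_period in (30_000, 45_000, 700_000):   # figures quoted in the text
    p = prob_within(return_period, 100)
    print(f"Return period {return_period:>7,} years -> "
          f"~{p:.2%} chance of a VEI-8 eruption in any given century")
```

As the final factor in the list above notes, anthropic shadow effects could mean that even these historically derived rates understate the true risk.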
\n .2.3 Main events 15-Mar-13: Climate impact of super-volcanoes may be less than previously thought 391 -Research The Toba eruption around 70,000 years ago was one of the world's largest super-volcanic eruptions. In contrast with some theories that claim it caused a volcanic winter that may have lasted over two centuries, 392 this paper claims that analysis of ash from the Toba super-eruption in Lake Malawi shows no evidence of volcanic winter in East Africa. This further illustrates the difficulty of establishing the exact impact of large-scale disasters when the evidence record is poor. 17-Jul-13: The Volcanological Society of Japan looks at volcano and super-volcano mitigation 393 -Policy Prevention of super-volcano eruptions is impossible with current technology, but there may be some possibility of mitigating their effects. The Volcanological Society of Japan is one of the few organisations that have looked at such potential mitigation. They put the risk of super-volcanic eruptions in the context of standard volcanic eruptions, just on a larger scale (noting that super-volcanic eruptions have affected Japan in the past). Japan has been a very seismically active country for its entire history, 394 so it might be hoped that adequate volcanic mitigation measures would have been implemented. But the report notes that \"remarkably few [of Japan's local governments] have drafted volcanic disaster countermeasure[s]\", 395 adding that \"Local governments that have actually experienced a volcanic disaster focus attention on volcanic disaster-related discussion, but most have not drafted specific procedures for volcanic disasters and seem to think that the general disaster countermeasure volume is adequate.\" This provokes some pessimism about the likelihood of effective planetary super-volcano mitigation measures being implemented, especially in those areas with no direct experience of volcanic risk. This is due to the normalcy bias, \"the tendency to minimise the probability of potential threats or their dangerous implications,\". 396 27-Oct-13: Yellowstone supervolcano larger than previously thought 397 -Research Another continuing development in the science of super-volcanoes, this paper demonstrates that the crustal magma reservoir under Yellowstone was 50% larger than was previously thought. However, despite this increase, integrated probabilistic hazard assessment shows that the biggest Yellowstone Plateau threat is from large M7+ earthquakessignificantly damaging 398 , but very unlikely to threaten billions -not from volcanic or super-volcanic eruptions. 106 Global Challenges -Twelve risks that threaten human civilisation -The case for a new category of risks 3.2 Exogenic risks 15-Nov-13: Insurance executives rank super-volcanoes low on the list of extreme risks 399 -Initiative Academics have long worried about the probability of super-volcanic eruptions. But academia attracts certain types of people with specific outlooks, who can be subject to further biases because of their profession and the social milieu surrounding it. 400 Insurers come from a different background, focusing on practical profitability in the business world and using a relatively short time horizon. So it is instructive that they do not see super-volcanoes as a major threat in the world today: \"Of interest to us is the very low ranking of the user-submitted idea of supervolcanoes in the US\". 
20-Dec-13: Super-volcano confirmed as responsible for one of the largest extinctions in history 401 -Research The maximal destructive potential of super-volcanoes is uncertain. There have been large supervolcanic eruptions throughout history, 402 and many extinction events, but uncertainties in the geological record mean that it was hard to establish whether they were causally linked. One eloquent example was the eruption which formed the Siberian Traps 403 (one of the largest in history), and the Permian-Triassic extinction, 404 where 96% of all marine species and 70% of terrestrial vertebrates died out. The two events were close on the geological timeline, and this paper, using recent dating techniques, confirmed that the super-volcano erupted shortly before the extinction, making it the likely culprit. Synthetic biology is the design and construction of biological devices and systems to accomplish the specific goal of the synthetic biologist, 406 adding human intentionality to traditional pandemic risks. The positive and negative potentials of synthetic biology are unclear 407 -much of the information currently comes from synthetic biologists, 408 who may not be able to provide an impartial overview (the problem is exacerbated by the decentralised nature of the field 409 ). Attempts at regulation 410 or self-regulation 411 are currently in their infancy, and may not develop as fast as research does. 412 One of the most damaging impacts from synthetic biology would come from an engineered pathogen, 413 targeting humans or a crucial component of the ecosystem (such as rice, which accounts for 20% of all calories consumed by humans). 414 This could emerge through military bio-warfare, 415 commercial bio-warfare, 416 bio-terrorism 417 (possibly using dual-use products 418 developed by legitimate researchers, and currently unprotected by international legal regimes 419 ), or dangerous pathogens leaked from a lab 420 . Of relevance is whether synthetic biology products become integrated into the global economy or biosphere. This could lead to additional vulnerabilities (a benign but widespread synthetic biology product could be specifically targeted as an entry point through which to cause damage). But such a development would lead to greater industry and academic research, which could allow the creation of reactive or pre-emptive cures. 421 The impact is very similar to that of pandemics: mass casualties and subsequent economic and political instabilities leading to possible civilisation collapse. A bio-war would contribute greatly to the resulting instability. Even for the most perfectly engineered pathogen, survivors are likely, if only in isolated or mainly isolated locations. 422 Extinction risk is unlikely, 423 but possible if the aftermath of the epidemic fragments and diminishes human society to the extent that recovery becomes impossible 424 before humanity succumbs to other risks. 425 Five important factors in estimating the probabilities and impacts of the challenge: 1. 431 showing how the flu virus could be made transmissible to ferrets (and, by extension, humans). This generated protests and calls for the papers to remain fully or partially unpublished, 432 because of the potential for misuse by bio-terrorists or bio-weapons programmes. In response, researchers in the field declared a voluntary moratorium in January 2012. 433 A year later, they decided to lift the moratorium. 
One cannot expect workers in a field to be unbiased about their own research, 434 so it is significant that this decision was condemned by many scientists, including other virologists. 435 This provides strong evidence that ending the moratorium was a dangerous decision.

[…] 450 a declining fraction, so from a narrow economic perspective it could be argued that the impact of nanotechnology would be relatively small. However, nanotechnology could create new products - such as smart or extremely resilient materials 451 - and would allow many different groups or even individuals to manufacture a wide range of things. This could lead to the easy construction of large arsenals of weapons by small groups. 452 These might be masses of conventional weapons (such as drones or cruise missiles), or more novel weapons made possible by atomically precise manufacturing. If this is combined with a possible collapse in world trade networks 453 - since manufacturing could now be entirely local - there would be a likely increase in the number of conflicts throughout the world. Of particular relevance is whether nanotechnology allows rapid uranium extraction and isotope separation 454 and the construction of nuclear bombs, which would increase the severity of the consequent conflicts. Unlike the strategic stalemate of nuclear weapons, nanotechnology arms races could involve constantly evolving arsenals and become very unstable. 455 These conflicts could lead to mass casualties and potentially to civilisation collapse if the world's political and social systems were too damaged.

Some nanotechnology pathways could mitigate these developments, however. Cheap mass surveillance, 456 for instance, could catch such re-armament efforts (though surveillance could have its own detrimental effects). Many of the world's current problems may be solvable with the manufacturing possibilities that nanotechnology would make possible, such as depletion of natural resources, pollution, climate change, clean water, and even poverty. 457 There are currently few applicable international legal regimes governing nanotechnology. 458

In the media the label "grey goo" 459 is sometimes applied to nanotechnology. This describes a hypothetical situation in which self-replicating nanomachines are engineered to consume the entire environment. It is unclear how effective they could be, and they play no role in atomically precise manufacturing. 460 Mass self-replication would be detectable, and vulnerable to human-directed countermeasures. 461 However, it is possible that such replicating machines could endure and thrive in particular ecological niches, where the cost of removing them is too high. 462 The misuse of medical nanotechnology 463 is another risk scenario.

Extinction risk is only likely as a long-term consequence of civilisation collapse, if the survivors are unable to rebuild and succumb to other threats. 464 The possibility of nanomachines or nanoweapons remaining active after a civilisation collapse may make rebuilding more difficult; the availability of atomically precise manufacturing systems, by contrast, could aid rebuilding.

Five important factors in estimating the probabilities and impacts of the challenge: 1. A functional and practical design for assembling molecules is an essential feature for successful nanotechnology. There have been many designs proposed, 467 and some constructed, but not yet a fully functional molecular assembly device.
[…] 468 This design, based on principles from biology (it uses messenger RNA as its input code, and synthesises peptides), represents another step towards that important goal.

06-May-13: First weapon made with 3D printer 469 - Event
It is the ability to make weapons en masse that represents one of the dangers of nanotechnology. 470 3D printing (or additive manufacturing) 471 is not nanotechnology, but can be considered a precursor, as it similarly allows small groups to design and manufacture their desired products themselves. That one of the early designs has been a functioning weapon, and that such weapon design was justified on moral grounds, 472 indicates a very high probability that nanotechnology will be used for weapon production.

07-May-13: Publication of Eric Drexler's book "Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization" 473 - Research
Eric Drexler is one of the pioneers of nanotechnology, and introduced the concepts to the general public with his book "Engines of Creation". 474 Twenty-seven years later, he presents a history, progress report, and updated version of his vision, the central theme of which is to "imagine a world where the gadgets and goods that run our society are produced not in far-flung supply chains of industrial facilities, but in compact, even desktop-scale, machines." The revolution in manufacturing would produce the "radical abundance" of the title, with small groups and individuals capable of producing an extraordinarily wide range of products without requiring large amounts of capital or long supply chains. The book then examines the risks of social and political disruption. The disruptions that can be anticipated include "falling demand for conventional labor, resources, and capital in physical production, with the potential for cascading disruptive effects throughout the global economy", as well as disruptions in supply chains, trade and dependence, and the revaluation of assets (mineral resources and large industrial facilities, for example, will lose much of their value). This would go together with an increase in surveillance capability and a potential nanotechnology arms race. The book recommends taking pre-emptive action at the international level to prepare for these disruptions.

A key sign of a developing technology is interest from investment companies. Nanostart AG is an example of such a company, with extensive investments in various nanotechnology projects. Interestingly, its interests are not limited to more conventional nanotech projects, but extend to such speculative endeavours as space elevators. 476 This serves as a reminder of the potentially large profits available in nanotechnology. It therefore seems likely that when the technology matures sufficiently to cause increased risks, there will be many commercial entities heavily invested in it, which will make the process of regulation more contentious, possibly leading to "regulatory capture" 477 by these entities, with their interests represented rather than those of the broader community.

16-Dec-13: Nanotechnology: A Policy Primer, CRS report for Congress 478 - Policy
Governmental and supragovernmental policies will be key to dealing with the dangers and destabilising influences of nanotechnology, through regulation, treaties, redistributive efforts or simply through preparing their populations for the change. Institutions such as the US Congress are keeping an eye on nanotechnology, in this case through the Congressional Research Service.
This report, however, does not delve into the major risks of nanotechnology, but restricts itself to minor subjects such as the safety of nanomaterials and US competitiveness in the field. War, trade disruption and the potential development and misuse of nanoreplicators 479 are not discussed. This seems to reflect a certain lack of prioritisation, and perhaps even a misplaced focus on the less important risks.

Major AI researchers and textbooks define the field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximise its chances of success. 480 Artificial Intelligence (AI) is one of the least understood global challenges. There is considerable uncertainty about the timescales on which an AI could be built, if at all, with expert opinion shown to be very unreliable in this domain. 481 This uncertainty is bi-directional: AIs could be developed much sooner or much later than expected.

Despite the uncertainty of when and how AI could be developed, there are reasons to suspect that an AI with human-comparable skills would be a major risk factor. AIs would immediately benefit from improvements to computer speed and any computer research. They could be trained in specific professions and copied at will, thus replacing most human capital in the world, causing potentially great economic disruption. Through their advantages in speed and performance, and through their better integration with standard computer software, they could quickly become extremely intelligent in one or more domains (research, planning, social skills...). If they became skilled at computer research, the recursive self-improvement could generate what is sometimes called a "singularity", 482 but is perhaps better described as an "intelligence explosion", 483 with the AI's intelligence increasing very rapidly. 484

Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime), 485 and would probably act to boost their own intelligence and acquire maximal resources for almost all initial AI motivations. 486 And if these motivations do not specify 487 the survival and value of humanity in exhaustive detail, the intelligence will be driven to construct a world without humans or without meaningful features of human existence. This makes extremely intelligent AIs a unique risk, 488 in that extinction is more likely than lesser impacts. An AI would only turn on humans if it foresaw a likely chance of winning; otherwise it would remain fully integrated into society. And if an AI had been able to successfully engineer a civilisation collapse, for instance, then it could certainly drive the remaining humans to extinction. On a more positive note, an intelligence of such power could easily combat most other risks in this report, making extremely intelligent AI a tool of great positive potential as well. 489 Whether such an intelligence is developed safely depends on how much effort is invested in AI safety ("Friendly AI") 490 as opposed to simply building an AI. 491

If the returns from increased intelligence are low, intelligence explosions and extreme intelligence may not be possible. In that case, there would probably be an ecology of AIs of different levels of intelligence, performing different tasks.
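The notion of "returns to intelligence" can be made concrete with a toy iteration (an illustrative sketch only, not a model from this report; the update rule, the gain k and the exponent alpha are all assumed for illustration). Each step, the system converts some of its current intelligence into further improvement; when the exponent on current intelligence is one or above, the series runs away, which is the "intelligence explosion" regime, while below one the growth levels off, the regime in which an ecology of many comparable AIs is more plausible.

```python
# Toy sketch of "returns to intelligence" (illustrative only, not from the report).
# Update rule: I <- I + k * I**alpha, where alpha stands for how strongly
# current intelligence helps produce further intelligence.
def run(alpha, k=0.1, i0=1.0, steps=60, cap=1e12):
    intelligence = i0
    for step in range(1, steps + 1):
        intelligence += k * intelligence ** alpha
        if intelligence > cap:
            # Treat crossing the cap as a runaway ("intelligence explosion").
            return intelligence, step
    return intelligence, steps

if __name__ == "__main__":
    for alpha in (0.5, 1.0, 1.5):
        level, step = run(alpha)
        print(f"alpha={alpha}: level ~ {level:,.1f} after {step} steps")
```

With these assumed numbers, alpha = 0.5 grows only polynomially, alpha = 1.0 grows exponentially, and alpha = 1.5 crosses the cap within a few dozen steps; only the qualitative contrast, not the numbers, is meant to carry over to the argument above.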
In this ecology-of-AIs scenario, apart from the economic dislocation already noted, there is also the possibility of AI-enabled warfare and all the risks of the technologies that AIs would make possible. An interesting version of this scenario is the possible creation of "whole brain emulations": human brains scanned and physically instantiated (represented) in a machine. This would make the AIs into what could be called properly human minds, possibly alleviating a lot of problems.

Five important factors in estimating the probabilities and impacts of the challenge:
1. The reliability of AI predictions.
2. Whether there will be a single dominant AI or a plethora of entities.
3. How intelligent AIs will become.
4. Whether extremely intelligent AIs can be controlled, and how.
5. Whether whole brain emulations (human minds in computer form) will arrive before true AIs.

11. Human redundancy may follow the creation of copyable human capital, as software replaces human jobs.
12. Once invented, AIs will be integrated into the world's economic and social system, barring massive resistance.
13. An AI arms race could result in AIs being constructed with pernicious goals or a lack of safety precautions.
14. Uploads - human brains instantiated in software - are one route to AIs. These AIs would have safer goals, a lower likelihood of extreme intelligence, and would be more likely to be able to suffer. 495
15. Disparate AIs may amalgamate by sharing their code or negotiating to share a common goal in order to pursue their objectives more effectively.
16. There may be diminishing returns to intelligence, limiting the power of any one AI and leading to the existence of many different AIs. 496
17. Partial "friendliness" may be sufficient to control AIs in certain circumstances.
18. Containing an AI attack may be possible, if the AIs are of reduced intelligence or are forced to attack before being ready.
19. New political systems may emerge in the wake of AI creation, or after an AI attack, and will profoundly influence the shape of future society.

The amount of information stored in a human brain is extremely large. Similarly, the amount of information needed to perform adequately at complex human tasks is considerable - far more than is easily programmable by hand (the Cyc project, 498 for instance, started in 1984 with the aim of rapidly and formally codifying all human common sense, and is still running). Hence the interest in the field of machine learning, and in algorithms that can teach themselves skills and knowledge from raw data. With the rise of "Big Data", 499 vast databases and increased computer power, there has been a flowering of applications of computer learning. 500

This has caught the eye of the Defense Advanced Research Projects Agency (DARPA), a research arm of the US defense department responsible for the development of new technologies. In this project, DARPA aims both to "enable new applications that are impossible to conceive of using today's technology" and to simplify machines so that non-experts can effectively use them and build applications for them. This most recent project confirms the interest of the military in artificial intelligence development.

25-Apr-13: Kurzweil plans to help Google make an AI brain 501 - Initiative
The idea of creating a fully general AI, an AI that is capable of all tasks requiring intelligence, went into abeyance during the AI winter, 502 a period of reduced interest and funding in AI. The term AI itself fell into disfavour. 503
But recent AI successes, such as Watson's triumph on "Jeopardy!" 504 (demonstrating a certain level of natural language recognition and processing) and Google's self-driving car 505 (demonstrating spatial awareness and movement), have revived interest in constructing a human-like mind in digital form. Kurzweil, hired by Google at the end of 2012, reveals in this interview his interest in doing just that. A notable feature of Kurzweil is his optimism about the consequences of creating AIs, 506 which could affect the level of precautions his team would include in its design.

13-Sep-13: Publication: "Responses to Catastrophic AGI Risk: A Survey" 507 - Research
Since the recognition of the potential risk from AGI (Artificial General Intelligence), 508 various proposals have been put forward to deal with the problem. After arguing that uncertainty about the timeline to AI 509 does not translate into a certainty that AIs will take a long time, the paper analyses why AIs could be an existential risk. It argues that a trend toward automation would give AIs increased influence in society, as such systems would be easier to control, and that there could be a discontinuity in which they gained power rapidly. 510 This could pose a great risk to humanity if the AIs did not share human values (intelligence and values are argued to be independent for an AI), 511 a task which seems difficult to achieve if human values are complex and fragile, 512 and therefore problematic to specify. The authors then turn to analysing the AI safety proposals, dividing them into proposals for societal action, external constraints, and internal constraints. They find that many proposals seem to suffer from serious problems, or to be of limited effectiveness. They conclude by reviewing the proposals they consider most worthy of further study, including AI confinement, Oracle AI, and motivational weaknesses. For the long term, they consider the most promising approach to be value learning (with human-like architecture as a less reliable but possibly easier alternative). Formal verification was valued, whenever it could be implemented.

[…] 520 The book then imagines the competition between humanity and a cunning, powerful rival, in the form of the AI - a rival, moreover, that may not be "evil" but simply harmful to humanity as a side effect of its goals, or simply through monopolising scarce resources. Along with many interviews with researchers working at the forefront of current AI development, the book further claims that without extraordinarily careful planning, 521 powerful "thinking" machines present potentially catastrophic consequences for the human race.

15-Oct-13: "Racing to the precipice: a model of artificial intelligence development" lays out the dangers of AI arms races 522 - Research
AIs may be developed by different groups, each desiring to be the first to produce an artificial mind. The competitive pressure will be stronger the more powerful AIs are believed to be, thus maximising the danger in exactly those situations. This paper considers an AI arms race, 523 where different teams have the option of reducing their safety precautions in order to perfect their device first - but running the risk of creating a dangerous and uncontrollable AI. In the absence of enforceable agreements between the teams, this dynamic pushes each to take on more risk than it would want (similarly to the "prisoner's dilemma"), 524 potentially causing an extremely damaging outcome.
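The selection effect behind this arms-race dynamic can be illustrated with a deliberately crude Monte Carlo sketch (this is not the model from "Racing to the precipice"; the uniform skill and safety-cutting distributions and the win rule are assumptions made purely for illustration). Each team draws a skill level and a level of safety-cutting; cutting safety adds speed, the fastest team builds the AI, and the danger is taken to be the winner's level of safety-cutting. Averaged over many runs, the expected danger rises with the number of competing teams.

```python
# Crude illustration of an AI arms race (not the paper's model):
# each team draws a skill and a safety-cutting level c in [0, 1]; cutting
# safety adds speed, the fastest team wins, and the chance of catastrophe
# is taken to be the winner's c.
import random

def expected_winner_risk(n_teams, trials=20_000, seed=1):
    rng = random.Random(seed)
    total_risk = 0.0
    for _ in range(trials):
        teams = [(rng.random(), rng.random()) for _ in range(n_teams)]  # (skill, c)
        skill, c = max(teams, key=lambda t: t[0] + t[1])  # fastest team wins
        total_risk += c
    return total_risk / trials

if __name__ == "__main__":
    for n in (1, 2, 5, 10):
        print(f"{n:>2} team(s): expected safety-cutting of the winner ~ {expected_winner_risk(n):.2f}")
```

In the paper itself the teams choose their safety levels strategically and the information they have about each other matters; this sketch only shows the selection pressure that makes the eventual winner likely to be a team that cut corners.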
The situation is improved if risk-taking makes little difference to the speed of development, if the teams have reduced enmity between them, or if there are fewer teams involved (the last two factors also help with reaching agreements). Somewhat surprisingly, information has a negative impact: the outcome is safer if the teams are ignorant of each other's rate of progress, and even of their own.

24-Oct-13: Growing researcher awareness of the threat of artificial intelligence 525 - Research
Much more effort is devoted to creating AI than to ensuring that it is developed safely. 526 Those working on developing AI could be motivated to minimise the extent to which their creation represents a potential danger. 527

Probability

There are many different possible risks that seem individually very unlikely and speculative. Could someone develop a super-pollutant that renders the human race sterile? Could the LHC have created a black hole that swallowed the Earth? 532 Might computer games become so addictive that large populations will die rather than cease to indulge in them? 533 Could experiments on animals lift them to a level of intelligence comparable with humans? 534 Might some of the people sending signals to extra-terrestrial intelligences attract deadly alien attention? 535 What are the risks out there that we cannot yet conceive of?

These risks sound unlikely, and to many possibly ridiculous. But many of today's risks would have sounded ridiculous to people from the past. If this trend is extrapolated, there will be risks in the future that sound ridiculous today, which means that absurdity is not a useful guide to risk intensity. Expert opinion provides some information on specific speculative risks. But it will tend to give them extremely low probabilities - after all, the risks are highly speculative, which also means the expert's judgement is less reliable. 536 In these situations, the main source of probability for the risk is not the quoted number, but the much greater probability that the experts' models and world views are wrong. 537 If marginal scientific theories predict large risks, the probability is concentrated in the likelihood that the theory might be correct. 538 Conversely, if many independent models, theories, and arguments all point in the direction of safety, then the conclusion is more reliable.

There are methods to estimate uncertain risks without needing to be explicit about them. One resolution of the Fermi paradox - the apparent absence of alien life in the galaxy - is that intelligent life destroys itself before beginning to expand into the galaxy. Results that increase 539 or decrease the probability of this explanation modify the generic probability of intelligent life (self-)destruction, which includes uncertain risks. Anthropic reasoning 540 can also bound the total risk of human extinction, and hence estimate the unknown component. Non-risk-specific resilience and post-disaster rebuilding efforts 541 will also reduce the damage from uncertain risks, as would appropriate national and international regulatory regimes. 542 Most of these methods would also help with the more conventional, known risks, and badly need more investment.

Five important factors in estimating the probabilities and impacts of the challenge:
1. Whether there will be extensive research into unknown risks and their probabilities.
2. […]
3. Global coordination would aid risk assessment and mitigation.
4. Specific research into uncertain and unknown risks would increase our understanding of the risks involved.
5. General mitigation efforts are mostly general resilience building.
6. Some institutions may deliberately pursue dangerous technologies or experiments, or may convince themselves that their research is not dangerous.
7. Unforeseen accidents could be the trigger for many uncertain risks.
8. The number of direct casualties varies wildly depending on the risk involved.
9. The disruptions to the world's economic and political system vary wildly depending on the risk involved.
10. The uncertain risk may have other disruptive effects (such as loss of trust in certain technologies).
11. The long-term impact varies wildly depending on the risk involved.
12. The world's political structure, after an unknown risk is triggered, will determine whether humanity improves or worsens the situation.
13. Some methods (such as considering the Fermi paradox) may bound the total probability of destructive uncertain risks, but these are very speculative.
14. One approach to dealing with uncertain risks is to build general adaptation and recovery methods that would be relevant to a wide class of potential disasters.

[…] This paper notes the absence of published research in this area, 544 and seeks to begin to fill the gap. It identifies methods for increasing survivor resilience and promoting successful adaptation and recovery, even for isolated communities. It recognises that the process is highly complex, and needs further research.

28-Mar-13: Paper Evaluating Methods for Estimating Existential Risks 545 - Research
It would be advantageous to have a rigorous approach for estimating severe risks, including uncertain and unknown ones. This paper reviews and assesses various methods for estimating existential risks, such as simple elicitation; whole-evidence Bayesian analysis; evidential reasoning using imprecise probabilities; Bayesian networks; influence modelling based on environmental scans; simple elicitation using extinction scenarios as anchors; and computationally intensive possible-worlds modelling. 546 These methods can be applied rigorously to uncertain risks, assessing them in the same way as more standard risks. Influence modelling based on environmental scans 547 can even suggest some new, as yet unknown, risks.

01-Aug-13: The Fermi paradox provides an estimate of total existential risk (including uncertain risks) 548 - Research
The Fermi paradox is the seeming contradiction between the apparent ease with which intelligent life could arise in the galaxy, and the lack of evidence of any such life. Many explanations have been proposed to resolve the paradox, 549 one of which is relevant to existential risks: the "Late Great Filter" explanation. 550 This posits that intelligent life is inevitably destroyed before it can expand through the galaxy. Such an explanation gives a bound on existential risk from all sources, including uncertain risks. This paper demonstrates the relative ease with which a spacefaring civilisation could cross between galaxies. Combined with recent evidence that the majority of Earth-like planets formed before the Earth, 551 this makes the absence of visible intelligent life more inexplicable, worsens the Fermi paradox, and so increases the probability of a Late Great Filter and thus of existential risk from all sources.

"As there is no global government, global governance typically involves a range of actors including states, as well as regional and international organisations.
However, a single organisation may nominally be given the lead role on an issue."

Probability

Global governance is often confused with global government, but they are two very different things. Global governance is just a term to describe the way global affairs are managed, or not managed. Global government is the idea that the world should be run like a country, with a government. The global governance system will inevitably have pros and cons, depending on the political decisions that are made. This section looks at global governance disasters.

Though all the risks in this report can be exacerbated by poorly chosen policy decisions, this classification contains those problems that arise almost exclusively from bad policy choices. There are two main divisions in governance disasters: failing to solve major solvable problems, and actively causing worse outcomes. An example of the first would be failing to alleviate absolute poverty. 556 An example of the second would be constructing a global totalitarian state. 557 In general, technological, political and social change may enable the construction of new forms of governance, which may be either much better or much worse.

These examples immediately illustrate two issues with governance disasters. First, the task of estimating their probability is difficult. Long-term political predictions are of questionable validity and subject to strong biases, 558 especially where strongly held values are concerned. 559 Second, the impact of these governance disasters depends to a large extent on subjective comparative evaluations. It is not impartially obvious how to rank continued poverty and global totalitarianism against billions of casualties or civilisation collapse. 560 The long-term impact also needs to be considered: how will poverty and global governance change? If there are many generations ahead of us, then the long-term state of humanity's policy 561 becomes much more important than the short-term one.

Five important factors in estimating the probabilities of various impacts:
1. How the severity of non-deadly policy failures can be compared with potential casualties.
2. Whether poor governance will result in a collapse of the world system.
3. How mass surveillance and other technological innovations will affect governance.
4. Whether there will be new systems of governance in the future.
5. Whether a world dictatorship may end up being constructed.

[…] The revelations caused great controversy 567 and raised questions about the NSA's surveillance oversight. 568 The episode established that discreet mass surveillance - an important component of potential […]

To reduce poverty in the future, it is important to maintain and extend past trends in poverty mitigation. The United Nations' Poverty-Environment Initiative (PEI), launched in 2008, has had a number of success stories, from Uruguay 570 to Malawi. 571 Due to increased demand from member states, the programme has been extended for another five years, 2013-2017, and may add countries such as Myanmar, Mongolia, Indonesia, Albania, Peru and Paraguay. Such programmes demonstrate that the bureaucratic/policy side of poverty reduction is supported by an international infrastructure with a strong emphasis on assessments.
The effect of such approaches on overall poverty will depend on the interplay between these policies and the other side of poverty reduction: economic growth. 572

[…] 576 Conversely, a resilient governance system is better able to cope with all risks, and a collapsed global system is more vulnerable to all risks.

Nuclear war, 577 asteroid impacts 578 and super-volcanoes 579 have direct impacts on the climate and, through that, on the ecosystem. 580 The kinds of mitigation efforts capable of containing the damage from a super-volcano would most likely be effective against asteroid impact damage, because of the similar nature of the impacts. The converse is not true, since one major method of reducing asteroid impact - space-based deflection 581 - would have no effect on super-volcano risk.

Solving climate change would help reduce current ecological pressure. 582 International agreements to reduce ecological damage could be extended to combating climate change as well, by establishing structures for international collaboration and encouraging resource-efficient solutions. Climate change also creates conditions more suitable for the spread of pandemics. 583 Measures to combat global pandemics, such as strengthened outbreak coordination and statistical modelling, 584 could be used to combat synthetic pathogens as well.

If a safe artificial intelligence is developed, this provides a great resource for improving outcomes and mitigating all types of risk. 585 Artificial intelligence risks worsening nanotechnology risks, by allowing nanomachines and weapons to be designed with intelligence and without centralised control, overcoming the main potential weaknesses of these machines 586 by putting planning abilities on the other side. Conversely, nanotechnology capabilities worsen artificial intelligence risks, by giving an AI extra tools which it could use to develop its power base. 587

Nanotechnology and synthetic biology could allow the efficient creation of vaccines and other tools to combat global pandemics. 588 Nanotechnology's increased industrial capacity could allow the creation of large numbers of efficient solar panels to combat climate change, or even potentially the efficient scrubbing of CO2 from the atmosphere. 589 Nanotechnology and synthetic biology are sufficiently closely related 590 (both dealing with properties on an atomic scale) for methods developed in one to be ported over to the other, potentially worsening the other risk. They are sufficiently distinct, though (a mainly technological versus a mainly biological approach), for countermeasures in one domain not necessarily to be of help in the other. Uncontrolled or malicious synthetic pathogens could wreak great damage on the ecosystem; conversely, controlled and benevolent synthetic creations could act to improve and heal current ecological damage.

There are many secondary effects that are not covered here. Increasing nuclear power could, for instance, improve the outlook for climate change 591 while increasing the risk of proliferation 592 and thus of nuclear war. There are many such effects between the various strategies for addressing different risks, but they are specific enough that there is no simple argument of the form "mitigating risk X worsens risk Y".
During the process of identifying risks that could have an infinite impact, it became evident that the most common question among people interested in global challenges is: "How probable is it that this impact will ever happen?" For those with expert knowledge in one area the first question is often: "How do the probability and magnitude of impact in this area compare with the probability and magnitude of impact in other areas?" Finally, those who have tried to estimate probabilities for global challenges ask: "What is the status of knowledge in other areas compared to mine?" These are all very important questions, and this chapter is not an attempt to answer them. But, as there is no organisation, process or report that has provided an overview of quantified assessments for global challenges with potential infinite impact, the chapter does try to present the current state of knowledge in order to inspire further work.

ALL RISKS

It is easy to argue that it is too difficult, or even impossible, to assess probabilities that are at all meaningful for the risks in this report, and therefore to exclude them. There are many good reasons for not trying, including significant uncertainty in almost all steps of the assessment. Not only do great uncertainties exist for all the risks, but the difficulties of estimating probabilities also differ greatly between them. At one end of the spectrum, the probability of a nuclear war can change dramatically from one day to another due to political decisions, and much of the uncertainty relates to psychological assumptions about how different individuals will react under stress. At the other end of the spectrum there is AI, where there is not even a generally accepted understanding of the possibility of the impacts capable of creating the risks covered in this report. There are challenges with a great deal of data, such as asteroids, and other challenges with very little relevant data, such as bad future global governance.

Obviously the risks also share a number of characteristics: they all have potentially extreme outcomes and have never been experienced before. Studying series of data, exploring how the outcome changes with incremental changes in input data, and testing conclusions on similar events are just a few examples of things that in most cases cannot be done. Estimating probabilities in traditional ways is therefore very difficult. 594

However, as the current lack of interest in global risks with potentially infinite impacts may in part be due to the lack of actual numbers, the best estimates that could be found are presented below with explanations. These estimates are only an attempt to assemble existing estimates in order to encourage a process to improve these numbers. They range from rigorous calculations based on large amounts of high-quality data (asteroids) to guesstimates by interested experts (AI). The result is that some have a more rigorous methodology behind them, and others should be taken with a large grain of salt, but all are still very rough estimates. As science progresses they will be updated. It is even possible that some will change by orders of magnitude. But instead of no estimate at all, we now have an initial reference that we hope will trigger a discussion and collaboration that will help improve what we already have.
As many of the challenges are long-term and require early action to be avoided or mitigated, the probability is provided for the next 100 years, instead of the annual probability that is often quoted. The reason is that a 100-year perspective helps us understand that even relatively small probabilities can become significant over a century. Say there is a one in 100 probability (1%) per year of an impact occurring: over a century there is a 63.4% probability of one or more such impacts. 595 Further, structures that need to change require us to look beyond the immediate and incremental changes that most discussions focus on today.

Structure of the probability estimates

As the different challenges are very different and the status of probability estimates varies significantly, the initial probability numbers are provided together with estimates regarding:

The understanding of sequence
This is an estimation of how well the sequence from today to a possible infinite impact is understood. At one extreme, all the different paths from today to an infinite impact are understood. At the other extreme, there is only a theoretical idea that is coherent and does not break any natural laws; in that case there would be no understanding of how it is possible to get from where we are today to an infinite impact. A sequence is required to calculate an estimate instead of only having educated guesses.

Data availability
This is an estimate of the amount of data available to make probability assessments on all relevant steps of the sequence. In some areas a lot of hard-to-get data is needed to make an assessment (e.g. a global pandemic); in other areas the data relates to secret and/or psychological factors (e.g. large-scale nuclear war). In others relatively little data is needed (asteroids), or a lot has already been done to gather data (e.g. climate change).

Existing probability estimates
This is an estimate of the kind of uncertainty that exists. It obviously depends on the understanding of sequence and on data availability, but it also depends on resources and on the interest in communicating with the rest of the world.

The estimates below are preliminary, but a sound risk approach requires stakeholders to begin to include them in strategic assessments. One group in particular is of interest: actuaries, the professionals who deal with the financial impact of risk and uncertainty. One of the key guiding rules they follow is to ensure capital adequacy at a 1-in-200 level. This rule, which is included in, for example, ICA 596 and Solvency II, 597 provides an opportunity to discuss risks with a possible infinite impact. One contribution could be to discuss the pros and cons of different definitions of the 1-in-200 level. For example, one definition is that "each company holds enough capital to withstand the events of the next one year with a probability of 199 out of 200." 598 This would exclude many of the risks in this report and could even result in the risks increasing, as the time perspective is so short: investments could reduce short-term risks while increasing long-term risks. Another definition is that "a company should hold enough capital to be able to withstand a 'reasonably foreseeable' adverse event". 599
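The compounding arithmetic used above, and its connection to the actuarial 1-in-200 rule, can be checked with a few lines of code (the function is a generic conversion, not taken from the report; it assumes a constant, independent annual probability).

```python
# Chance of at least one occurrence over a horizon, assuming a constant and
# independent annual probability: P = 1 - (1 - p_annual) ** years.
def prob_over_horizon(p_annual, years=100):
    return 1 - (1 - p_annual) ** years

if __name__ == "__main__":
    print(f"1% per year over 100 years:       {prob_over_horizon(0.01):.1%}")     # ~63.4%
    print(f"1-in-200 per year over 100 years: {prob_over_horizon(1 / 200):.1%}")  # ~39.4%
```

Under this constant-rate assumption, an event treated as a 1-in-200-year risk still has roughly a 39% chance of occurring at least once in a century, which is one way of seeing why a one-year capital-adequacy horizon sits uneasily with the 100-year perspective taken here.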
Nuclear War

Nuclear war is the risk that started the work on scientific assessments related to infinite impact. 604 The sequence is relatively well understood. Still, the impact will depend significantly on how serious any resulting nuclear winter would be (if there is a nuclear winter at all). The probability of a nuclear winter depends on when during the year the war happens and on the weather at the time. The result is that the probability of an infinite impact has an inherent uncertainty and can be estimated only once a war has already started.

Global Pandemic

With many of the spillover effects occurring in remote areas, even basic data is still very rudimentary. Scientists who collect data relevant for pandemics are often working with very small resources, and there is no systematic way of collecting data on a global scale, although interesting initiatives are under way. 607 While an early warning system would be comparatively inexpensive, there are still no resources available for it. Most of the probability estimates made for pandemics are for their more benign versions; for a possible pandemic that could kill two billion people or more there are very few estimates. Based on available assessments, 608 the best current estimate of a global pandemic in the next 100 years is: 5% for infinite threshold, 0.0001% for infinite impact. The reason for the big difference between threshold and impact is mainly that a pandemic would not directly affect infrastructure or the rest of the ecosystem in the way that extreme climate change or nuclear war would. This means that resilience will be relatively better after the infinite threshold is crossed.

Ecological Catastrophe

This is one of the more complex risks, as it is more a heading than a description of a specific challenge with a well-defined sequence. In other words it is not one sequence, but very many still unknown sequences. The concept of ecological collapse usually refers to a situation where some part of the ecological web becomes so weak that it collapses. There are many studies of the stability and possible collapse of different ecosystems, but few that look into the possibility of a full ecological collapse that would result in at least two billion people suffering. Data availability is good in many areas, but without an understanding of the system dynamics, and because of its complexity, there are inherent limits to how exact the achievable knowledge can be. 610 Regarding probability estimates, ecological collapse and global system collapse are the only current man-made global challenges that have no estimates for infinite impact. Based on available assessments, 611 the best current estimate of an ecological catastrophe in the next 100 years is: 0.5% for infinite threshold.

Global System Collapse

Since the financial crisis, the possibility of a global collapse of the current political, economic and financial system has been discussed intensively. A rapidly evolving and increasingly interconnected system is subject to unexpected, system-wide failures because of the structure of the network: it faces systemic risk. Possible sequences for a global system collapse resulting in infinite impacts are very hard to establish, for three reasons. First, it is a very complicated system with many dynamic interactions, as many people, together with machines, react to each other.
The current global system displays many complex dynamic phenomena, such as business cycles, financial crises, irregular growth, and bullwhip effects, 612 and many nonlinear dynamic models of economics and finance exhibit complex behaviours such as chaos, fractals, and bifurcation. Second, it is a recent system: it has been this interconnected for only a few years, as it depends on an infrastructure that did not exist before the internet, so there is little experience of how it works. Third, the system is rapidly changing and becoming even more complex as more connections are added and its speed increases. Better understanding of complex systems with multiple attractors and bifurcation behaviour will improve the possibility of understanding the possible sequences. 613 An additional challenge for understanding sequences that could result in impacts beyond the infinite threshold is that almost all research in the area of global system collapse focuses on its economic or geopolitical implications, not on a full system collapse and not on human suffering.

Super-volcano

The super-volcano risk has many similarities with major asteroid risk. Both have happened a number of times through our planet's history, and both have had major consequences. The understanding of the sequence is, however, much lower than for asteroids, as the mechanisms behind volcanic eruptions are not very well known. The possibility of foreseeing when a super-volcano will erupt, and how big the impact will be, is therefore low. Compared with a major asteroid, there will therefore be much less time to prepare. There is data available for different impacts, and knowledge of where super-volcanoes might erupt is increasing, but due to the lack of understanding of the sequence, the probability estimates are still very rudimentary: the estimates that exist are based on the historic frequency of earlier super-volcano eruptions, and as these are so infrequent, the uncertainty becomes very significant.

Synthetic Biology

The basic sequence is relatively well known, given that it would be a more deadly version of a current virus, but there is also the possibility that a new virus (or other organism) may be found where the sequence will be unknown and therefore also much more dangerous. One of the challenges in understanding the sequence is that the spread of synthetic biology will come either from a wilful act (e.g. terrorism) or from an accident (e.g. unintentional release from a laboratory). This also makes data hard to get. There are some numbers for accidents in labs, but they are available in only a few countries and there are probably many more incidents than those reported. 620 For terrorist acts there are probability estimates that can be used as a basis for the use of synthetic biology as well. 621 There are some existing estimates for synthetic biology, but these are based on possible use in war, where calculations depend on some specific differences from existing pathogens that are assumed to be necessary for a pandemic with an infinite impact.
Based on available assessments, 622 the best current estimate of an impact from synthetic biology in the next 100 years is: 1% for infinite threshold, 0.01% for infinite impact. 626

Artificial Intelligence

For global challenges in rapidly evolving areas where incremental development might not happen and little is known about the sequence, the only way to reduce risks with possible infinite impacts might be to ensure focus on these general factors. The only estimates of probabilities that exist so far have been made by a small group with a significant proportion of people with a passion for AI. Compared with many other challenges, the possibility of an AI capable of infinite impact is almost all or nothing, which is why the estimates are the same for the infinite threshold and the infinite impact. Based on available assessments, 627 the best current estimate of an impact from AI in the next 100 years is: 0-10% for infinite threshold, 0-10% for infinite impact. The reason for 0-10% on both impact levels is that most experts assume that the kind of AI capable of impacts beyond the infinite threshold is likely to be one that can also result in an infinite impact: if such an AI is created, it is expected to move beyond control very rapidly. Because of the significance of the impact if this happened, there is no difference between the two impact levels.

The most common mistake is that the most likely development of the underlying trends is taken for granted as the only possible outcome. Most of the trends have probability distributions in which low-probability/high-impact possibilities are often ignored. In this chapter some of the most important trends, where the possible outcomes can differ significantly, are described from a global risk perspective. For each of the trends the simple rule, based on a risk perspective, is: "Aim for the best, but be prepared for the worst."

Global poverty has fallen dramatically over the last two centuries, and the fall has intensified in recent decades, raising hopes that poverty, defined by the World Bank as an income below US $1.25 per day, may be eliminated within the next 50 years. The Economist even ran a cover in June 2013 with the title "Towards the end of poverty". 631 The World Bank has set an interim target of reducing global extreme poverty to 9% of the world's population by 2020, which, if achieved, would mark the first time the rate has fallen to single digits. 632 The milestone is based on a World Bank economic analysis of global poverty trends aimed at the goal of ending extreme poverty by 2030. Reaching 9% in 2020 would mean an estimated 690m people still living in extreme poverty by then, 510m fewer than a decade earlier. That would be the equivalent of half the population of Africa, or more than double the population of Indonesia. 633

There are reasons to celebrate this development, as more people than ever live a life where they do not have to constantly worry about their most basic needs. But there are two things worth remembering: 1. Poverty could increase again. 2. Defining poverty is difficult.

According to the 2012 Revision of the official United Nations population estimates and projections, the world population of 7.2 billion in mid-2013 is projected to increase by almost one billion people within twelve years, reaching 8.1 bn in 2025, and to increase further to 9.6 bn in 2050 and 10.9 bn by 2100. 642
These results are based on the medium-variant projection, which assumes a decline of fertility in countries where large families are still prevalent, as well as a slight increase of fertility in several countries with fewer than two children per woman on average. 643 The medium projection is still dramatic, as it assumes almost another four bn people on the planet, more than a 50% increase in population, equal to the Earth's entire population in 1975, in just 86 years. 644 The high-variant projection assumes an extra half a child per woman (on average) compared with the medium variant, implying a world population of 10.9 bn in 2050 and 16.6 bn in 2100. 645 That is equal to a 133% population increase in just 86 years. The difference between the projections for 2100, from 10.9 bn people in the medium scenario to 16.6 bn in the high scenario, equals the world population in 1995. There is also a credible low scenario with 6.8 bn by 2100. 646

A strategic approach must be based on all possible outcomes. Planning as though the world population will be only 6.8 bn is not optimistic: it is unscientific and dangerous. Even to plan for a world with 10.9 bn is not strategic, as this would ignore the significant probability that the world's population will be much larger. There should be a plan for a world with 16.6 bn people, combined with a long-term strategy to ensure a sustainable population level. It is also important to ensure that more attention is paid to early warning systems that allow us to influence population development in a sustainable direction.

While weapons have become more deadly, the death toll from wars has actually decreased over time. 650 How big a part technology has played, by creating greater transparency or by increasing the fear of using weapons that have become too powerful (for example nuclear bombs), is disputed. But most experts agree that technology has played an important role. 651 This is not the same as saying that this development will continue.

Estimating the future development of technology is very difficult. On the one hand, there is evidence that technology will continue to accelerate at the pace it has achieved so far. Researchers at MIT and the Santa Fe Institute have found that some widely used formulas for predicting how rapidly technology will advance - notably Moore's and Wright's Laws - offer superior approximations of the pace of technological progress. 652 Ray Kurzweil, who was recently hired by Google, is one of the experts who think that most people do not understand the implications of exponential growth in technology and the results it generates in price, capacity and the overall transformation of society. 653 On the other hand, there are natural limits that could begin to constrain technological development in two ways. First, the technology itself may hit a barrier: for example, at some stage a processor may not continue to become smaller and faster, as the speed of light and quantum mechanics will limit its development. 654 There might be other ways to overcome such boundaries, but no exponential trend can last forever. Second, nature itself may set limits. We may choose to take more care of the planet, or limits to materials like rare earths may begin to slow technology. 655
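As a rough illustration of the two formulas named above (with assumed parameters, not the fitted data from the MIT/Santa Fe study): Moore's law, read as a cost trend, models cost as an exponential function of calendar time, while Wright's law models unit cost as a power law of cumulative production.

```python
# Functional forms of Moore's law and Wright's law, with assumed parameters
# (illustrative only; not the fitted values from the MIT/Santa Fe study).
import math

def moores_law_cost(cost_today, years, halving_time=2.0):
    # Cost falls by half every `halving_time` years (exponential in time).
    return cost_today * 0.5 ** (years / halving_time)

def wrights_law_cost(cost_today, output_ratio, learning_rate=0.20):
    # Cost falls by `learning_rate` for every doubling of cumulative output
    # (a power law in cumulative production).
    exponent = -math.log2(1 - learning_rate)
    return cost_today * output_ratio ** -exponent

if __name__ == "__main__":
    print(f"Moore-style trend, 10 years out:       {moores_law_cost(100.0, 10):.1f}% of today's cost")
    print(f"Wright-style trend, 1000x more output: {wrights_law_cost(100.0, 1000.0):.1f}% of today's cost")
```

The point of the comparison in the study is which functional form tracks historical data better; the sketch only shows how differently the two formulas are parameterised (calendar time versus cumulative production).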
Regardless of ultimate limits, however, many exponential trends are likely to continue over the coming decades and will present us with new opportunities as well as risks in the 21st century, as these trends converge in a society with 20th-century institutions.

Although the proportion of people who live beyond 100 is still very small, their number is growing rapidly. In 2000 there were an estimated 180,000 centenarians globally. By 2050 they are projected to number 3.2 million, an increase of about eighteen times. 659 Within the more developed regions, Japan in particular will experience a remarkable increase in the number of centenarians over the next half century, from fewer than 13,000 in 2000 to almost 1 million in 2050. By then Japan is expected to have by far the world's largest number and proportion of centenarians, nearly 1% of its population. 660 The stagnating and ageing population in many OECD countries and China will put pressure on current systems, which were not designed to deal with a situation of ageing and often shrinking populations in many parts of the world, while the populations in other parts of the world are rapidly growing.

Rapid technological development has many benefits, but also challenges, as risks can rapidly become very serious and reach infinite thresholds. Developing early warning systems that can gather and process data transparently is therefore of the utmost importance. Technological progress, from smart phones and sensors to significant processing power and networks, allows for totally new ways of establishing early warning systems based on so-called "big data". 662 These opportunities include both new ways of collecting large amounts of high-quality data, and new ways to analyse them. Early warning systems should be built to ensure that data is collected and analysed in ways that can be useful for multiple global challenges. The warnings should not only cover changes in the physical world, but also indicate when decisions, investments and legal changes can be assumed to increase or decrease global risks. Such a system would allow more time to develop strategies to mitigate risks and turn global challenges into opportunities for innovation and collaboration. The warning system would require significant research into infinite thresholds. Both traditional methodologies and more recent ones based on the understanding of complex systems should be encouraged.

There is currently no global coordination when it comes to risk assessments of global challenges. Different experts use different methodologies, data and ways of presenting their results, making it very difficult to compare the risk assessments that exist. By establishing a process that coordinates and encourages risk assessments of global challenges, methodology development could be accelerated and improved, as the possibility of learning from different areas would increase. Such a process could also encourage increased investment in methodology development based on the latest innovations, such as systems-forecasting approaches. Institutions and universities engaged in developing new methodologies to assess global risks have a particular responsibility for developing and refining risk assessments for global challenges.

Four groups are of particular importance: experts in finance, experts in security policy, lawyers with knowledge of global risks and international law, and finally a group consisting of clusters of stakeholders with solutions that can reduce the risks.
Leadership networks that include participants from all four groups are of particular interest. Such networks could […] infinite impacts, and could work on a roadmap for a future global governance system that can address existing and new global challenges. The networks should be as transparent and inclusive as possible, especially as global collaboration is needed. The use of new collaboration tools and principles, such as wiki-processes and creative commons, should be encouraged.

Tables, graphs and key conclusions in reports related to global challenges should, when possible, include the whole probability distribution. 664 The current lack of data and scientific studies regarding low-probability high-impact outcomes in many areas should not be used as an excuse to ignore the probability distribution. This is especially important because many of the global challenges have a very long and fat "tail". Governments, major companies, NGOs, researchers and other relevant stakeholders should address the whole probability distribution, including low-probability high-impact scenarios. This would ensure that serious risks are not disregarded or obscured.

In particular, leadership with a focus on multiple global challenges and the relationship between them should be highlighted, as very little is being done in this area. Governments, companies, organisations and networks working on global challenges should increase their efforts to reward leadership when they find it. Major news outlets can also report when significant positive steps are taken to reduce global risks with potential infinite impacts.

The global challenges depend on a very complex ecosystem and social system. With a global economic and technological system that both helps and creates risks that are increasingly interconnected and difficult to understand, there is a challenge simply to understand the challenges. New visualisation tools could help make complex systems easier to understand and also help the communication of challenges and opportunities. 663 Visualisation tools are needed both for decision makers, to highlight the consequences of different strategies, and for citizens, to increase their basic understanding of infinite impacts.

The IPCC uses specific and defined language in its reports to describe different probabilities and thus ensure clarity, but taken out of context and without supporting definitions this language can be misleading. For example, the term "very unlikely" is used by the IPCC to describe a probability of between 0 and 10%, 665 but out of context its use could easily be understood as a normative judgement suggesting that we do not need to engage with the risk. The language of the IPCC can be compared with that used in the Swedish National Risk Assessment (SNRA). 666 The scale of impact is not defined for the IPCC, but for the Swedish assessment it is:
Very small: no deaths or serious injuries
Small: 1 dead and/or 1-9 seriously injured
Average: 2-9 dead and/or 10-49 seriously injured
Large: 10-49 dead and/or 50-100 seriously injured
Very large: more than 50 dead and/or more than 100 seriously injured
The use of terms that can be interpreted as having normative values to explain probability is problematic, and in future all bodies, including the IPCC, should explore the possibility of using only numbers in external communications, at least in the summary for policy makers, to help everyone understand the reality of the situation.
Stakeholders should explore ways to use language that better communicates how serious extreme risks are in the case of climate change, and where possible compare this with other risk areas to help illustrate the situation. Often words like "unlikely", "negligible" and "insignificant" are used to describe a risk when the probability is considered low. What is low is however relative; a low probability in one area can be extremely high in another. If I attend one of ten lectures -10% -people might say there is a low probability that I will be there. But if someone says that a new aircraft crashes once in every ten flights, most people will say that is an extremely high probability and will be likely to assume it is an early prototype that is nowhere close to commercial success. A major problem is that probabilities that ought to be seen as very high for risks with potentially infinite impact are described in a way that makes them sound less urgent than they are -by the media, business, politicians and even by scientists. One example is how probabilities are described by the Intergovernmental Panel on Climate Change. A recent example is the IPCC WGII Summary for policy makers. In this report (http://ipcc-wg2.gov/AR5/) the IPCC acknowledges the need to include low-probability high-impact scenarios: "assessment of the widest possible range of potential impacts, including low-probability outcomes with large consequences, is central to understanding the benefits and trade-offs of alternative risk management actions." Yet nothing is included in the report about impacts above 4 degrees.
Stakeholders should include the most extreme impacts in all relevant work. The use of methodologies and approaches from security policy and the financial sector that focus on extreme events could be used to develop strategies for rapid action beyond the incremental approaches that dominate today. If the probability of infinite impacts increases instead of decreasing, because of new scientific findings or lack of action, strategies should be prepared to allow more decisive action. When the impact is infinite it is not enough only to reveal the whole probability distribution. It is important also to avoid confusing uncertain risk with low risk. Infinite impacts render many of the traditional models for risk management almost meaningless. Monetary calculations are often useless, and discounting is not always advisable.
There is currently no international or global body that is coordinating work on global risks with a potentially infinite impact. A first step could be to establish a centre for global risks and opportunities, 669 focusing initially only on knowledge-gathering and development of proposals, and with no mandate to implement any solutions. In addition, and probably equally important, is the fact that a body set up to deal with such challenges could also ensure that the links between them could be better understood.
8 As an infinite impact by definition can't have happened before, models are needed. 9 Making Sense of Uncertainty: Why uncertainty is part of science http://www.lse.ac.uk/CATS/Media/SAS012-MakingSenseofUncertainty.pdf http://www-ee.stanford.edu/~hellman/Breakthrough/book/chapters/frankenhaeuser.html "The challenge now is to help people extend their attachments, their loyalties, and their engagement, to include people outside their own narrow circle, their own country, their own imminent future.
This global reorientation is a prerequisite for changing the present fatal course of development.\" 13 http://www-ee.stanford.edu/~hellman/Breakthrough/ book/chapters/frankenhaeuser.html 14 Two of the most famous \"optimists\", who tend to look at only the parts of the probability distribution that support their opinion, are the Danish writer Bjorn Lomborg and the British journalist Matt Ridley. While scientists in the areas he writes about constantly refute Lomborg, his message of optimism is well received by many policy makers and business leaders. https://www. ma.utexas.edu/users/davis/375/reading/sciam.pdf Ridley cherrypicks data and tends to avoid the probability that we will see significant warming. http://www.mattridley.co.uk/blog/the-probable-netbenefits-of-climate-change-till-2080.aspxhttp://www. rationaloptimist.com/ 18 While the LA-602 document is relatively well-known, its final paragraph is often forgotten. The scientists say: \"However, the complexity of the argument and the absence of satisfactory experimental foundations makes further work on the subject highly desirable.\" 19 Since Teller's calculations the science has developed and is now clear that the main nuclear threat that potentially could threaten human civilisation is a full nuclar war and the consequenses that this could result in from effects like a nuclear winter. 20 For an overview of positive and negative aspects of nanotechnology see for example Drexler, Eric and Pamlin, Dennis (2013): \"Nano-solutions for the 21st century. Unleashing the fourth technological revolution\" http://www.oxfordmartin.ox.ac.uk/downloads/ academic/201310Nano_Solutions.pdf 21 This is true for utopias ranging from Plato's Republic, via Thomas More's book Utopia, to Edward Bellamy's Looking Backward and William Morris' News from Nowhere. 22 See next chapter for the methodology. 23 A list of organisations and studies that discuss challenges that threaten human civilisation can be found here: http://en.wikipedia.org/wiki/Global_catastrophic_ risks 24 The number two billion was established during the workshop in Oxford and is not an exact number. Further research is needed to establish a better understanding of thresholds that can result in an infinite impact, depending on what challenge resulted in the two billion impact and how the estimate for an infinite impact was assumed to be between 0.01% and 10%. 25 The definition is based on the definition used by Jared Diamond: http://www.jareddiamond.org/Jared_ Diamond/Collapse.html 26 Analyzing Human Extinction Scenarios and Related Hazards\" http://www.nickbostrom.com/existential/risks. html. Note that these four points were originally developed for \"existential risks\", those that threaten the extinction of intelligent life originating on Earth or the permanent destruction of its potential for desirable future development. 47 Timothy M. Lenton, Juan-Carlos Ciscar: Integrating tipping points into climate impact assessments. Springer Netherlands 2013-04-01 http://link.springer.com/ article/10.1007%2Fs10584-012-0572-8 48 Carl Sagan use 10 million years in: Sagan, Carl (1983) . \"Nuclear war and climatic catastrophe: Some policy implications\". Foreign Affairs 62: 275, and The Blackwell's Concise Encyclopedia of Ecology and other sources provide an estimate of about 1 million years for mammals. http://eu.wiley.com/WileyCDA/WileyTitle/ productCd-0632048727.html 49 One billion years is used by Bruce E. 
Tonn in the paper Obligations to future generations and acceptable risks of human extinction: \" Futures 41.7 (2009): 427-435. 50 In Japan in Japan the life expectancy at age zero, that is, at birth (LEB), is 83 years. In 2010 the world LEB was 67.2. 51 This is based on a low estimate for the planet's population as far out as current projections are made, 2100. http://esa.un.org/wpp/ Looking further into the far future, beyond 500 years, it is likely that humanity will have expanded into space and have a much larger population. 52 (2002) . A New Kind of Science. https://www.wolframscience.com/ 57 A possible infinite threshold could be compared with the two events that are often cited as the world's worst wars and anthropogenic disasters: the second world war, with 40-71 million dead (1.7-3.1% of the world's population) and the Mongol Conquests, 1206-1368, with 30 million dead (7.5% of the global total). See: http:// en.wikipedia.org/wiki/List_of_wars_and_anthropogenic_ disasters_by_death_toll 58 The number two billion was established during workshops arranged during the process and is not an exact number. Further research is needed to establish a better understanding of thresholds that can result in an infinite impact. The number of people dead is only one factor and different global risks are likley to have very different thresholds. 153 \"The so-called \"Warsaw International Mechanism for Loss and Damage\" will from next year commit developed nations to provide expertise and potentially aid to countries hit by climate-related impacts. [...] However, the vague wording fell short of the kind of detailed commitments on additional funding and the commitment to compensation that many developing nations had been seeking.\" (source: Business Green: \"COP 19: Warsaw climate deal finalised as deadlock broken\"). 154 See for instance Bulkeley, Harriet, and Peter Newell.: \"Governing climate change.\" Routledge (2010). 155 See the National Research Council: \"Abrupt Impacts of Climate Change: Anticipating Surprises.\" Washington, DC: The National Academies Press (2013). 156 See NASA's \"GISS Surface Temperature Analysis.\" 157 See the IPCC's Fourth Assessment Report. 158 See the IPCC's Fifth Assessment Report. 159 See the IUCN's Red List of Threatened Species. 160 162 See Archer, David.: \"Methane hydrate stability and anthropogenic climate change.\" Biogeosciences Discussions 4.2 (2007): 993-1057. 163 For example, as climate warms, the destabilization of the West Antarctic ice sheet could raise sea level rapidly, with serious consequences for coastal communities. 169 Currently estimated at around 17,000 (source: SIPRI yearbook 2013). 170 Though it has been argued that scientists, under pressure from governments and industry, have systematically underestimated the deleterious global impact of radiation. See Perrow, Charles.: \"Nuclear denial: From Hiroshima to Fukushima.\" Bulletin of the Atomic Scientists 69.5 (2013): 56-67. 171 The seminal American paper was Turco 173 Though some have argued for a significant climate effect of the nuclear explosions themselves, see Fujii, Yoshiaki.: \"The role of atmospheric nuclear explosions on the stagnation of global warming in the mid 20th century.\" Journal of Atmospheric and Solar-Terrestrial Physics 73.5 (2011): 643-652. 174 There is (fortunately) very little empirical evidence on the impact of nuclear bombs on cities. 
Hiroshima suffered a firestorm, while Nagasaki did not -and both cities and nuclear weapons are very different now from what they were in 1945. 175 488 Dealing with most risks comes under the category of decision theory: finding the right approaches to maximise the probability of the most preferred options. But an intelligent agent can react to decisions in a way the environment cannot, meaning that interactions with AIs are better modelled by the more complicated discipline of game theory. 489 See Yudkowsky, Eliezer.: Artificial intelligence as a positive and negative factor in global risk. Global catastrophic risks 1 (2008): 303. 490 See Muehlhauser, Luke, and Nick Bostrom.: Why we need friendly AI. Think: Philosophy for Everyone (2014). 491 The balance is currently very uneven, with only three small organisations -the Future of Humanity Institute, the Machine Intelligence Research Institute and the Cambridge Centre for Existential Risk -focused mainly on the risks of AI, along with some private individuals. 492 See Sandberg, Anders, and Nick Bostrom.: Whole brain emulation: A roadmap. Future of Humanity Institute Technical Report #2008-3, Oxford University (2008). 493 A dangerous AI would be a very intelligent one that could acquire great power in the world, and would have goals that were not compatible with full human survival and flourishing. There are a variety of ways of avoiding this. "Friendly AI" projects focus on the AI's goals directly, and attempt to make these compatible with human survival. The Oracle AI approach attempts to "box" the AI (constrain its abilities to influence the world) in order to prevent it from acquiring power. The reduced impact AI is a novel approach which attempts to construct AI goals that are not friendly per se, but that motivate the AI to have a very limited impact on the world, thus constraining the potential damage it could do. 494 There are many different designs for AIs, and disagreements about whether they could be considered moral agents or capable of suffering. At one end of the scale is the theoretical AIXI(tl), which is essentially nothing more than an equation with vast computing power; at the other lie whole brain emulations, copies of human minds implemented inside a computer. Suffering for the first design seems unlikely; suffering for the second seems very possible. Determining whether a specific AI design can suffer will be an important part of figuring out what to do with it; it seems unlikely that we'll have hard and fast rules in advance. 495 The idea behind uploads, also called whole brain emulations, is to take a human brain, copy the position of its neurones and connections to sufficient precision, and then run the emulation forwards according to the laws of physics and chemistry. It should then behave exactly as a human brain in the real world. See http://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf for more details, which also goes into the requirements and tries to estimate the likelihood of success of such an approach (it depends on certain likely but uncertain assumptions about how the brain works at the small scale; i.e. that most things below a certain scale can be handled statistically rather than through detailed modelling). 496 There are two ways there could be diminishing returns to intelligence: first, it might not be possible for an AI to improve its own intelligence strongly.
Maybe it could find some ways of better executing its algorithms, and a few other low-hanging fruits, but it's very possible that after doing this, it will find that the remaining intelligenceincrease problems are hard, and that it hasn't improved itself sufficiently to make them easy (you could analogise this to dampened resonance -initially the AI can improve its intelligence quite easily, and this gives it more intelligence with which to work, allowing it to improve its intelligence still further; but each time, the effect is less, and it soon plateaus). The other way there could be diminishing returns to intelligence is if intelligence doesn't translate into power effectively. This is more likely with things like social intelligence: it seems plausible that no entity could convince, say, 99% of Britain to vote for their party, no matter how socially intelligent it was. Scientific and technological intelligence could conceivably be limited by creativity (unlikely), constraints on the speed of making experiments, or just the fact that humans have solved the easy problems already. The Global Catastrophic Risk Institute (GCRI) published a GCR bibliography compiled in July 2011 by Seth Baum, available at http://gcrinstitute. org/bibliography. This contains 115 entries, emphasising publications surveying the breadth of the risks or discussing other topics of general interest to the study of GCR, with less emphasis on analysis of specific global challenges. It has been updated for this Global Challenges Foundation report and now contains 178 entries. The reason for focusing on general interest publications is because the literature on specific global challenges is far too voluminous to catalogue. It would include, for example, a significant portion of the literatures on climate change, energy, nuclear weapons, infectious diseases and biodiversity, all topics that receive extensive research attention. Thus the full bibliography compiled by GCRI is only a small portion of the total global challenges literature. Publications for the full bibliography were identified in several ways. The bibliography began with publications already known to fit the selection criteria. Additional publications were identified by examining the reference lists of the initial publications. More publications were identified by searching scholarly databases (mainly Web of Science and Google Scholar) and databases of popular literature (mainly Amazon and the New York Public Library) for relevant keywords and for citations of the publications already identified. Several keywords and phrases were searched for in the databases: \"existential catastrophe\"; \"existential risk\"; \"global catastrophe\"; \"global catastrophic risk\"; \"greatest global challenges\"; \"human extinction\"; \"xrisk\"; and \"infinite risk\". The results of these searches were then screened for relevant publications. Many of the results were not relevant, because these terms are used in other ways. For example, \"existential risk\" is sometimes used to refer to risks to the existence of businesses, countries or other entities; \"human extinction\" is used in the study of memory. The publications that use these terms in the same sense as the bibliography were then further screened for publications of general global challenges interest, not for specific global challenges. The most productive search term for the database searches turned out to be \"global catastrophe\". 
This term produced a relatively large number of hits and relatively few publications on unrelated topics. Further, the term is used by researchers from a variety of different backgrounds. This makes it a particularly fruitful term for discovering new global challenges research. One hallmark of the global challenges topic is that it is studied by distinct research communities that have limited interaction with each other. As research communities often develop their own terminology, it can be difficult to discover one community by searching for another's terms. For example, "existential risk" is used heavily by researchers studying risk from artificial intelligence and other emerging technologies, but it is rarely used by researchers studying environmental risks. Discovering and connecting the disparate corners of the GCR research is an ongoing challenge for the GCR community. Bibliography searches such as these are an important way to meet this challenge.
Bibliography
Figure 1: Probability density function
Figure 2: Probability density function with tail highlighted
Figure 4: Nordhaus, The Climate Casino: Total cost of different targets assuming limited participation and discounting of future incomes.
Figure 6: Normal risks and risks with potentially infinite impact.
Figure 8: Example of F-n curve showing an absolute impact level that is defined as unacceptable/infinite, i.e. no level of probability is acceptable above a certain level of impact, in this case 1,000 dead 64
Figure 9: Number of times global challenges are included in surveys of global challenges
Figure 10: The global challenges included ten times or more in surveys of global challenges
Only nine countries are currently known to possess nuclear weapons: the five Security Council members, India, Pakistan, and North Korea, plus Israel. 230
Ecological collapse refers to a situation where an ecosystem suffers a drastic, possibly permanent, reduction in carrying capacity for all organisms, often resulting in mass extinction. Usually an ecological collapse is precipitated by a disastrous event occurring on a short time scale. 231 Humans are part of the global ecosystem and so fundamentally depend on it for our welfare.
Figure 18: Increase in the number of species assessed for the IUCN Red List of Threatened Species (2000-2013.2). Source: http://www.iucnredlist.org/about/summary-statistics
A pandemic (from Greek πᾶν, pan, "all", and δῆμος, demos, "people") is an epidemic of infectious disease that has spread through human populations across a large region; for instance several continents, or even worldwide.
Figure 19: Network diagram of connections between banks, brokers/dealers, insurers and hedge funds, Jan 1994-Dec 1996. Source: https://app.box.com/shared/oesro8zzco0mtvuymh3f
Figure 20: How the Spaceguard Survey has reduced the short-term risk of impacts from near-Earth objects 349
2. The predictability of supervolcanic eruptions. 3. How directly destructive an eruption would be. 4. The effectiveness of general mitigation efforts. 5. How severe the long-term climate effects would be.
Figure 21: Volcanic Explosivity Index 389
01-Jun-13: Nanostart AG: Venture Capital Investments in Nanotech-Initiative 475
Artificial intelligence (AI) is the intelligence exhibited by machines or software, and the branch of computer science that develops machines and software with human-level intelligence.
Figure 23: Number of galaxies that can reach us with speeds of 50%c, 80%c, 99%c and c, from different starting moments 552
Figure 24: Less poverty of nations -Population living below $1.25 a day at 2005 purchasing-power parity, % (0 = absolute equality, 100 = absolute inequality). Source: Economist, originally World Bank, http://www.economist.com/node/14979330
Specific relations between global risks
"Uncertainty is an uncomfortable position. But certainty is an absurd one." Voltaire
5. Probabilities and uncertainties -an initial overview
Figure 27: Different kinds of poverty -Number of people in poverty 638
Figure 28: Population of the world, 1950-2100, according to different projections and variants. Source: http://esa.un.org/wpp/documentation/pdf/WPP2012_Volume-I_Comprehensive-Tables.pdf, p. xv
Figure 29: Moore's forecast for PV 656
Figure 30: Global ICT development 2000-2013 657
Figure 24: Population aged 80 or over 661
Figure 31: Comparing the probability scale in the Swedish National Risk Assessment 667 and the Likelihood Scale used by the IPCC 668
35 http://en.wikipedia.org/wiki/Human_embryogenesis 36 These four points are slightly rewritten versions of a list of Nick Bostrom's in the text "Existential Risks: 59 HSE 2001. Reducing Risks, Protecting People -HSE's Decision-Making Process, 2001.
164 http://simple.wikipedia.org/wiki/Nuclear_war 165 For an analysis of the possibility of accidental nuclear war between Russia and the USA, see Barrett, Anthony M., Seth D. Baum, and Kelly Hostetler.: "Analyzing and Reducing the Risks of Inadvertent Nuclear War Between the United States and Russia." Science & Global Security 21.2 (2013): 106-133. 166 Hellman, Martin E.: "How risky is nuclear optimism?" Bulletin of the Atomic Scientists 67.2 (2011): 47-56. 167 See Lundgren, Carl.: "What are the odds? Assessing the probability of a nuclear war." The Nonproliferation Review 20.2 (2013): 361-374. 168 A 1979 study by the United States Office of Technology Assessment, "The Effects of Nuclear War," estimated 20-160 million immediate casualties from a full-scale nuclear war. 171 Turco, Richard P., et al.: "Nuclear winter: global consequences of multiple nuclear explosions." Science 222.4630 (1983): 1283-1292. The Soviet Union was working on similar simulations; see for instance Peterson, Jeannie.: "The aftermath: The human and ecological consequences of nuclear war." New York: Pantheon (1983). 172 See Mills, Michael J., et al.: "Massive global ozone loss predicted following regional nuclear conflict." Proceedings of the National Academy of Sciences 105.14 (2008): 5307-5312. See Toon, Owen B., et al.: "Atmospheric effects and societal consequences of regional scale nuclear conflicts and acts of individual nuclear terrorism." Atmospheric Chemistry and Physics 7.8 (2007): 1973-2002, and Robock, Alan, Luke Oman, and Georgiy L. Stenchikov.: "Nuclear winter revisited with a modern climate model and current nuclear arsenals: Still catastrophic consequences." Journal of Geophysical Research: Atmospheres (1984-2012) 112.D13 (2007).
Uncertainties: As the different challenges are very different and the status of probability estimates varies significantly, the initial probability numbers are provided together with estimates regarding:
1. Understanding of sequence (degree of events from today's actions to infinite impact): none at all / some parts / most parts / all parts
2. Data availability (amount of data to make a probability assessment on all relevant steps of the sequence): no data / some data / most data / all data
3. Existing probability estimation (kind of estimation and uncertainty): no estimates / best guesses by experts / calculations with large uncertainty / calculations with small uncertainty
10^20, or $150 quintillion. 2.3 Global challenges and infinite impact This is a very low estimate, and Posner suggests that maybe the cost of a life should be "written up $28 million" for catastrophic risks. 52 Posner's calculations, where only one future generation is included, result in a cost of $336 quadrillion. If we include all future generations with the same value, $28 million, the result is a total cost of $86 sextillion, or $86 × 10^21. This $86 sextillion is obviously a very rough number (using one billion years instead of 50 million would for example require us to multiply the results by 20), but again it is the order of magnitude that is interesting. As a reference, there are about 10^11 to 10^12 stars in our galaxy, and perhaps something like the same number of galaxies. With this simple calculation you get 10^22 to 10^24, or 10 to 1,000 sextillion, stars in the universe to put the cost of infinite impacts, when including future generations, in perspective. 53
Figure labels: Threshold; Normal risks -Traditional measures and tools applicable; probability; impact.
1. The extent to which humans are dependent on the ecosystem. 2. Whether there will be effective political measures taken to protect the ecosystem on a large scale. 3. The likelihood of the emergence of sustainable economies. 4. The positive and negative impacts on the ecosystems of both wealth and poverty. 5. The long-term effects of an ecological collapse on ecosystems.
GOVERNANCE DISASTERS 3.1 Current risks Post-eco-collapse climate change cooperation will be important to any 1. Global coordination and attempt to control ecological damage 9.
New, profitable, but environmentally Global poverty Global coordination damaging industries could put extra strain on the ecosystem. 19. Technological innovations Sustainability research may result in more sustainable economies, or in more during 2013 3.1.3.3 Main events on a large scale and prevent \"races to environmentally damaging products. the bottom\". 22-Jan-13: Current extinctions 10. According to some systems of value, the loss of certain animals ecological effects Long-term Global instability probably the result of past actions; Global povety Global coordination 2. Poverty is often seen as and ecosystems constitutes a moral many future extinctions to come 242 Technological through unsustainable practices, innovations exacerbating ecological damage tragedy in and of itself. -Research 20. It may be possible to Deliberate attempts to construct world dictatorship ensure human survival in semi-hydroponic food, distilled water), \"closed\" systems (solar power, while richer countries introduce An estimated 40% of world trade 11. Humans derive much pleasure and with minimal dependency on the Quality of life loss from ecosystem loss environmental regulations -but richer many benefits from various parts of Ecological collapse Preservation efforts New system of governance Smart sensors external ecosystem. is based on biological products Meta-uncertainty on tradeoffs nations exploit many resources (such the ecosystem, and losing this would or processes such as agriculture, between e.g. poverty, survival, as fossil fuels) in non-sustainable and result in a loss to human quality of life. 21. Over the long term, it may become forestry, fisheries and plant-derived freedom damaging ways. possible and necessary to go about pharmaceuticals, and biodiversity comprises an invaluable pool 12. Ongoing and continuous rebuilding the ecosystem and healing Threat to economies, or sustainable food supply 3. Transitioning to sustainable for innovations. Moral tragedy from ecosystem loss consequence of ecological collapse. Pollution biodiversity loss is a clear its damage. economic trajectories, could control 22. Political decisions will be the most Economic costs Failing to solve 4. Research into sustainability could Loss of biodiversity triggering famines. Improvements to global governance important problems ecological damage. 13. Ecological damage can put Making Pre-eco-collapse climate change likely factors to exacerbate or mitigate things worse the human food system in danger, an ecological disaster. allow the construction of sustainable 23. It is unclear how dependent economies or environments at costs 14. Ecological damage increases humans truly are on the that people are willing to bear. vulnerability to floods and other ecosystem, and how much Disruption to politics and economy 5. Climate change exacerbates the Rebuilding the ecosystem natural disasters. Post-eco-collapse politics damage they could inflict without threatening their own survival. Enduring poverty pressure on the ecological system Not achieving important ethical goals by changing weather patterns and Climate change 15. Disruptions to the world's political Global pollution and economic systems could trigger Lack of human flourishing Vulnerabilities to flood and other disasters increasing natural disasters in ways further conflicts or instabilities, Sustainable or non-sustainable economies ecosystems find hard to adapt to. causing more casualties and impairing New, environmentally damaging industries effective response. 6. 
Global pollution is a visible source of ecological damage, one that global 16. Since a lot of the world's carbon is Undesirable world system (e.g. global dictatorship) Pre-eco-collapse mitigation efforts locked up in trees, ecological collapse Technological could exacerbate climate change. innovations Collapse of agreements have had moderate success at tackling. Disruption to world politics and economy world system Post-disaster politics Long-term negative effects 7. Truly global preservation efforts 17. The ecosystem is of great may be needed for some threatened economic benefit to humanity, Meta-uncertainty on the true dependence of humanity on the ecosystem so its loss would have large Human survivability in \"closed\" systems natural boundaries (e.g. in the seas ecosystems that stretch beyond economic costs. and oceans). 18. Ecological damage is likely to Total short-term casualties 8. Beyond general all-purpose mitigation efforts, addressing General mitigation effort be long-term: the effects will last for Extinction many generations. Civilisation collapse this threat could include the Total short-term casualties species or genetic codes, to allow a preservation of ecosystems, Civilisation collapse Extinction subsequent rebuilding. Key Key Uncertain events Meta-uncertainties Risk events Direct impacts Indirect impacts Current intervention areas Bad decisions Accidents Severe impacts Uncertain events Meta-uncertainties Risk events Direct impacts Indirect impacts Current intervention areas Bad decisions Accidents Severe impacts 80 79 Global Challenges -Twelve risks that threaten human civilisation -The case for a new category of risks 3.1 Current risks ECOLOGICAL CATASTROPHE Global Challenges -Twelve risks that threaten human civilisation -The case for a new category of risks 81 Global Challenges -Twelve risks that threaten human civilisation -The case for a new category of risks \n and trade. 573 137 Global Challenges -Twelve risks that threaten human civilisation -The case for a new category of risks 4. Relations between global risk 3.4 Global Policy risk 4. Relations between Two things make the understanding 4.1 General relations of the relation between the global between global risks risks particularly important. and their potential impacts 1. Impacts: The global risks global risks are interconnected in different Relations between global risks is an ways. Often the situation can be area where surprisingly little work is described as a set of dominoes: if being done. Most research focuses one falls, many others follow. Even on individual or closely related small impacts can start a process groups of challenges. Organisations where different challenges interact. working on global challenges are Higher temperatures due to global almost always working on individual warming can result in the spreading risks. The initial overview below is of pandemics which increase based on individual studies where tensions between countries, and different relations are analysed, but so on. no work has been identified where the relations between all twelve 2. Specific measures to address challenges have been analysed. a risk: Global risks often require significant changes in our current A risk that is natural to start with is society, from how we build cities future bad global governance, as all to how food is produced and other global challenges exacerbate provided. 
Such significant changes governance disasters, 575 and all other will result in situations where global challenges can potentially be measures to reduce the risk in exacerbated by governance disasters. one area affect the probability A well functioning global governance and/or the impact in other areas. system is therefore a key factor to Depending on the measure chosen address global catastrophic risks. to reduce the risk, and other complementary measures, the Conversely, avoiding governance effect can be positive or negative. disasters improves all risks, as better institutions are better able to mitigate risks. Governance disasters directly increase the problems of climate change (through a lack of coordination between countries), the risk of nuclear war (by stoking conflict between nuclear powers) and \"We have some idea what might happen if, global system collapse (by weakening global responses to systemic risks). in the face of other pressing global challenges, All risks exacerbate global system collapse, by putting extra stress on an we divert our focus from making systemic interconnected system. improvements in public health and veterinary services -and that prospect is frightening.\" The World Bank 574 138 Global Challenges -Twelve risks that threaten human civilisation -The case for a new category of risks \n Twelve risks that threaten human civilisation -The case for a new category of risks 4.1 General relations between global risks and their potential impacts 4.2 Specific relations between global risks Below is an example of an overview of how different global challenges can be plotted depending on the technical difficulty of reducing An international initiative should start to achieve better understanding of the relations between global challenges in order to ensure the risk and the difficulty of synergies and avoid strategies that In parallel with work to increase our collaborating to reduce it. will undermine other challenges. understanding about the general relations between global risks, work to identify more specific relations should also be initiated. This is an area where people die / suffer many pieces of research exist. But very little work has been done to combine them and assess different strategies to address specific global risks and understand how these strategies will affect other global risks. It is important to distinguish between two different support use people afraid of infections short term thinking kinds of specific relations. Extreme Climate Change Global Pandemic First, there are solution strategies for one global risk and the ways it affects other global risks. For example, using video conferences can reduce the probability of pandemics by reducing unnecessary travel. On the other hand, unsustainable use of bio-energy could increase spillover opportunities when a zoonosis (a disease transmitted more renewable energy more video meetings less meat consumption from nature accident attack from animals to humans) increases reduce risk the spread of pandemics due to an increased number of contacts between increase risk humans and infected animals in forests around the world. 593 Second, how society reacts to the very threat of different risks can affect other challenges. For example, if people are afraid of pandemics they might use more video meetings and in that way help reduce carbon emissions. 
Figure legend: solving first risk improves second risk; first risk worsens second risk; both of the above. Axes: technical difficulty of reducing risk; collaboration difficulty of reducing risk.
Attempts to develop solutions for specific global challenges should assess their impacts, positive and negative, on other challenges. In order to better understand the relations between different global challenges, work could start to analyse similarities and differences.
15 Climate Impacts: http://www.ipcc.ch/report/ar5/wg2/ Pandemics: http://www.cmu.edu/dietrich/sds/docs/fischhoff/AF-GPH.pdf Nuclear war: http://www.ippnw.org/nuclear-famine.html 16 http://news.stanford.edu/news/2003/september24/tellerobit-924.html 17 http://www.fas.org/sgp/othergov/doe/lanl/docs1/00329010.pdf
For examples see the recently established Centre for Study of Existential Risk at Cambridge. It is an interdisciplinary research centre focused on the study of human extinction-level risks that may emerge from technological advance. 27 E.g. HBR blog about "black swans": http://blogs.hbr.org/2010/09/the-competing-black-swans-of-s/ and The Guardian about "perfect storms" http://www.theguardian.com/environment/earth-insight/2014/apr/03/ipcc-un-climate-change-perfect-storm-zombieoil 28 http://www.economist.com/node/18744401 29 http://e360.yale.edu/feature/living_in_the_anthropocene_toward_a_new_global_ethos/2363/ 30 http://www.nickbostrom.com/existential/risks.html 31 https://www.youtube.com/watch?v=VjZyOTES6iQ 32 http://en.wikipedia.org/wiki/Long_tail 33 http://en.wikipedia.org/wiki/Human 34 Mammals have an average species lifespan from origin to extinction of about 1 million years, but that will not necessarily apply to humans today because of factors like local climate change or new species in the same ecological niche. The dinosaurs were around for 135 million years and if we are intelligent, there are good chances that we could live for much longer.
Pinker, Steven. The better angels of our nature: The decline of violence in history and its causes. Penguin UK, 2011. 11 Giddens, Anthony. "Risk and responsibility." The modern law review 62.1 (1999): 1-10. 12 http://blogs.ei.columbia.edu/2012/01/09/evolutionarypsychology-of-climate-change/
Posner, Richard A. Catastrophe: Risk and Response. Oxford University Press, 2004. 38 http://en.wikipedia.org/wiki/Value_of_life 39 http://yosemite.epa.gov/EE%5Cepa%5Ceed.nsf/webpages/MortalityRiskValuation.html 40 Posner, Richard A. Catastrophe: risk and response. Oxford University Press, 2004. Loc 2363 41 http://webarchive.nationalarchives.gov.uk/+/http:/www.hm-treasury.gov.uk/sternreview_index.htm 42 William Nordhaus, The Stern Review on the Economics of Climate Change http://www.econ.yale.edu/~nordhaus/homepage/stern_050307.pdf 43 http://www.res.org.uk/view/art3Apr08Features.html 44 Nordhaus, William D. The Climate Casino: Risk, Uncertainty, and Economics for a Warming World. Yale University Press, 2013. Loc 2895 45 Nordhaus, William D. The Climate Casino: Risk, Uncertainty, and Economics for a Warming World. Yale University Press, 2013. Loc 2895 46 Nordhaus, William D. The Climate Casino: Risk, Uncertainty, and Economics for a Warming World. Yale University Press, 2013.
Loc 3176 Endnotes 184 Global Challenges -Twelve risks that threaten human civilisation -The case for a new category of risks Endnotes \n\t\t\t Arsenals have been reduced, but there remain over 17,000 nuclear warheads in the world's arsenals (source: SIPRI yearbook 2013), down from a peak of some 68,000 in 1983, still more than enough to trigger a nuclear winter.177 See Toon, Owen B., et al.: \"Atmospheric effects and societal consequences of regional scale nuclear conflicts \n\t\t\t Russian Federation On Strategic Offensive Reductions.216 The Treaty between the United States of America and the Russian Federation on Measures for the Further \n\t\t\t See DARPA's briefing, DARPA-SN-13-30: Probabilistic Programming for Advancing Machine Learning (PPAML) (2013).498 See for instance Curtis, Jon, Gavin Matthews, and David Baxter.: On the effective use of Cyc in a question answering system. Proc Workshop on Knowledge and Reasoning for Answering Questions. (2005).499 See The Rise of Industrial Big Data, GE Intelligent Platforms (2013).500 Such as Watson's triumph on \"Jeopardy!\" (source: Ferrucci, David, et al.: Building Watson: An overview of 192 Global Challenges -Twelve risks that threaten human civilisation -The case for a new category of risks Endnotes \n\t\t\t Global Challenges -Twelve risks that threaten human civilisation -The case for a new category of risks Appendix 1 -Global Challenges Bibliography", "date_published": "n/a", "url": "n/a", "filename": "12Riskswithinfiniteimpact-fullreport.tei.xml", "abstract": "The material and the geographical designations in this report do not imply the expression of any opinion whatsoever on the part of Global Challenges Foundation concerning the legal status of any country, territory, or area, or concerning the delimitation of its frontiers or boundaries.", "id": "5b191bfc582184a500fb6d1e6227d88f"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Kaj Sotala", "Lukas Gloor"], "title": "Superintelligence as a Cause or Cure for Risks of Astronomical Suffering", "text": "Introduction Work discussing the possible consequences of creating superintelligent AI (Yudkowsky 2008 , Bostrom 2014, Sotala & Yampolskiy 2015) has discussed superintelligence as a possible existential risk: a risk \"where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential\" (Bostrom 2002 (Bostrom , 2013 . The previous work has mostly 1 considered the worstcase outcome to be the possibility of human extinction by an AI that is indifferent to humanity's survival and values. However, it is often thought that for an individual, there exist \"fates worse than death\"; analogously, for civilizations there may exist fates worse than extinction, such as survival in conditions in which most people will experience enormous suffering for most of their lives. Even if such extreme outcomes would be avoided, the known universe may eventually be populated by vast amounts of minds: published estimates include the possibility of 10 25 minds supported by a single star (Bostrom 2003a) , with humanity having the potential to eventually colonize tens of millions of galaxies (Armstrong & Sandberg 2013) . While this could enable an enormous number of meaningful lives to be lived, if even a small fraction of these lives were to exist in hellish circumstances, the amount of suffering would be vastly greater than that produced by all the atrocities, abuses, and natural causes in Earth's history so far. 
We term the possibility of such outcomes a suffering risk: Suffering risk (s-risk): One where an adverse outcome would bring about severe suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far. In order for potential risks -including s-risks -to merit work on them, three conditions must be met. First, the outcome of the risk must be sufficiently severe to merit attention. Second, the risk must have some reasonable probability of being realized. Third, there must be some way for risk-avoidance work to reduce either the probability or severity of an adverse outcome. In this paper, we will argue that suffering risks meet all three criteria, and that s-risk avoidance work is thus of a comparable magnitude in importance as work on risks from extinction. Section 2 seeks to establish the severity of s-risks. There, we will argue that there are classes of suffering-related adverse outcomes that many value systems would consider to be equally or even more severe than extinction. Additionally, we will define a class of less severe suffering outcomes which many value systems would consider important to avoid, albeit not as important as avoiding extinction. Section 3 looks at suffering risks from the view of several different value systems, and discusses how much they would prioritize avoiding different suffering outcomes. Next, we will argue that there is a reasonable probability for a number of different suffering risks to be realized. Our discussion is organized according to the relationship that superintelligent AIs have to suffering risks: section 4 covers risks that may be prevented by a superintelligence, and section 5 covers risks that may be realized by one 2 . Section 6 discusses how it might be possible to work on suffering risks. \n Suffering risks as risks of extreme severity As already noted, the main focus in discussion of risks from superintelligent AI has been either literal extinction, with the AI killing humans as a side-effect of pursuing some other goal (Yudkowsky 2008) , or a value extinction. In value extinction, some form of humanity may survive, but the future is controlled by an AI operating according to values which all current-day humans would consider worthless (Yudkowsky 2011 ). In either scenario, it is thought that the resulting future would have no value. In this section, we will argue that besides futures that have no value, according to many different value systems it is possible to have futures with negative value. These would count as the worst category of existential risks. In addition, there are adverse outcomes of a lesser severity, which depending on one's value systems may not necessarily count as worse than extinction. Regardless, making these outcomes less likely is a high priority and a common interest of many different value systems. Bostrom (2002) frames his definition of extinction risks with a discussion which characterizes a single person's death as being a risk of terminal intensity and personal scope, with existential risks being risks of terminal intensity and global scope -one person's death versus the death of all humans. However, it is commonly thought that there are \"fates worse than death\": at one extreme, being tortured for an extended time (with no chance of rescue), and then killed. As less extreme examples, various negative health conditions are often considered worse than death (Rubin, Buehler & Halpern 2016; Sayah et al. 
2015; Ditto et al., 1996) : for example, among hospitalized patients with severe illness, a majority of respondents considered bowel and bladder incontinence, relying on a feeding tube to live, and being unable to get up from bed, to be conditions that were worse than death (Rubin, Buehler & Halpern 2016) . While these are prospective evaluations rather than what people have actually experienced, several countries have laws allowing for voluntary euthanasia, which people with various adverse conditions have chosen rather than go on living. This may considered an empirical confirmation of some states of life being worse than death, at least as judged by the people who choose to die. The notion of fates worse than death suggests the existence of a \"hellish\" severity that is one step worse than \"terminal\", and which might affect civilizations as 2 Superintelligent AIs being in a special position where they might either enable or prevent suffering risks, is similar to the way in which they are in a special position to make risks of extinction both more or less likely (Yudkowsky 2008) . well as individuals. Bostrom (2013) seems to acknowledge this by including \"hellish\" as a possible severity in the corresponding chart, but does not place any concrete outcomes under the hellish severity, implying that risks of extinction are still the worst outcomes. Yet there seem to be plausible paths to civilization-wide hell outcomes as well (Figure 1 ), which we will discuss in sections 4 and 5. \n Global \n Thinning of the ozone layer \n Extinction risks \n Global hellscape \n Personal Car is stolen Death Extended torture followed by death \n Endurable Terminal Hellish Figure 1 : The worst suffering risks are ones that affect everyone and subject people to hellish conditions. In order to qualify as equally bad or worse than extinction, suffering risks do not necessarily need to affect every single member of humanity. For example, consider a simplified ethical calculus where someone may have a predominantly happy life (+1), never exist (0), or have a predominantly unhappy life (-1). As long as the people having predominantly unhappy lives outnumber the people having predominantly happy lives, under this calculus such an outcome would be considered worse than nobody existing in the first place. We will call this scenario a net suffering outcome 3 . This outcome might be considered justifiable if we assumed that, given enough time, the people living happy lives will eventually outnumber the people living unhappy lives. Most value systems would then still consider a net suffering outcome worth avoiding, but they might consider it an acceptable cost for an even larger amount of future happy lives. On the other hand it is also possible that the world could become locked into conditions in which the balance would remain negative even when considering all the lives that will ever live: things would never get better. We will call this a pan-generational net suffering outcome. In addition to net and pan-generational net suffering outcomes, we will consider a third category. In these outcomes, serious suffering may be limited to only a fraction of the population, but the overall population at some given time 4 is still large enough that even this small fraction accounts for many times more suffering than has 3 \"Net\" should be considered equivalent to Bostrom's \"global\", but we have chosen a different name to avoid giving the impression that the outcome would necessarily be limited to only one planet. 
Any value system which puts weight on preventing suffering implies at least some interest in preventing suffering risks. Additionally, as we will discuss below, even value systems which do not care about suffering directly may still have an interest in preventing suffering risks. We expect these claims to be relatively uncontroversial. A more complicated question is that of tradeoffs: what should one do if some interventions increase the risk of extinction but make suffering risks less likely, or vice versa? As we will discuss below, if forced to choose between these two, different value systems will differ in which of the interventions they favor. In such a case, rather than risk conflict between value systems, a better alternative would be to attempt to identify interventions which do not involve such a tradeoff. If there were interventions that reduced the risk of extinction without increasing the risk of astronomical suffering, decreased the risk of astronomical suffering without increasing the risk of extinction, or decreased both, then it would be in everyone's interest to agree to jointly focus on these three classes of interventions. 
\n Suffering risks from the perspective of different value systems We will now take a brief look at different value systems and their stance on suffering risks, as well as their stance on the related tradeoffs. Classical utilitarianism. All else being equal, classical utilitarians would prefer a universe in which there were many happy lives and no suffering. However, a noteworthy feature of classical utilitarianism (as well as of some other aggregative theories) is that it considers very good and very bad scenarios to be symmetrical: a scenario with 10^20 humans living happy lives is considered exactly as good as a scenario with 10^20 humans living miserable lives is considered bad. Thus, people following classical utilitarianism or some other aggregative theory may find compelling the argument (Bostrom 2003a) that an uncolonized universe represents a massive waste of potential value, and may be willing to risk -or even accept -astronomical numbers of suffering individuals if that were an unavoidable cost of creating even larger amounts of happiness. Classical utilitarianism would therefore consider astronomical and net suffering outcomes as something to avoid but possibly acceptable, and pan-generational net suffering outcomes as something to avoid under all circumstances. Other aggregative theories.
Any moral theory which was not explicitly utilitarian, but still had an aggregative component that disvalued suffering, would consider suffering risks as something to avoid. Additionally, for moral theories that valued things other than just pleasure and suffering -such as preference satisfaction, some broader notion of \"human flourishing\", objective list theories -hellscape scenarios would likely also threaten the satisfaction of many of the things that these theories valued. For example, minds experiencing enormous suffering are probably not flourishing, are likely to have unsatisfied preferences, and probably do not have many of the things considered valuable in objective list theories. Similarly to classical utilitarianism, many aggregative theories could be willing to risk or even accept astronomical and civilization-wide suffering outcomes as a necessary evil but wish to avoid pangenerational net suffering outcomes. At the same time, many aggregative theories might incorporate some suffering-focused intuition (discussed below) which caused them to put more weight on the avoidance of suffering than the creation of other valuable things. Depending on the circumstances, this might cause them to reject the kind of reasoning which suggested that suffering outcomes could be an acceptable cost. Rights-based theories. Rights-based theories would consider suffering risks a bad thing directly to the extent that they held that people -or animals (Regan 1980 )had a right to be treated well avoid unnecessary suffering. They could also consider suffering risks indirectly bad, if the suffering was caused by conditions which violated some other right or severely constrained someone's capabilities (Nussbaum 1997, p. 287) . For example, a right to meaningful autonomy could be violated if a mind was subjected to enormous suffering and had no meaningful option to escape it. General suffering-focused intuitions. There are various moral views and principles which could fit many different value systems, all of which would imply that suffering risks were something important to avoid and which might cause one to weigh the avoidance of suffering more strongly than the creation of happiness: 1. Prioritarianism. Prioritarianism is the position that the worse off an individual is, the more morally valuable it is to make that individual better off (Parfit 1991) . That is, if one person is living in hellish conditions and another is well-off, then making the former person slightly better off is more valuable than improving the life of the well-off person by the same amount. A stance of \"astronomical prioritarianism\" that considers all minds across the universe, and prioritizes improving the worst ones sufficiently strongly, pushes in the direction of mainly improving the lives of those that would be worst off and thus avoiding suffering risks. If a suffering outcome does manifest itself, prioritarianism would prioritize bringing it to an end, over creating additional well-off lives or further helping those who are already well off. Prioritarianism may imply focusing particularly on risks from future technologies, as these may enable the creation of mind states that are worse than the current biopsychological limits. Besides prioritarianism, the following three intuitions (Gloor & Mannino 2016) would also prioritize the avoidance of suffering risks 5 : 2. Making people happy, not happy people 6 . 
An intuition which is present in preference-based views such as antifrustrationism (Fehige 1998) and antinatalism (Benatar 2008), as well as in the "moral ledger" analogy (Singer 1993) and prior-existence utilitarianism (Singer 1993), is that it is more important to make existing people better off than it is to create new happy beings. The name of this intuition is a paraphrase of Narveson (1973): "We are in favor of making people happy, but neutral about making happy people." For example, given the choice between helping a million currently-existing people who are in pain and bringing ten million new people into existence, this view holds that it is more important to help the existing people, even if the ten million new people would end up living happy lives. A part of this view is the notion that it is not intrinsically bad to never be created, whereas it is intrinsically bad to exist and be badly off, or to be killed against one's wishes once one does exist. (Moral views that attempt to incorporate this intuition by treating the creation of new people as morally neutral, such as Singer's "prior-existence" criterion, suffer from what Greaves (2017) calls a "remarkabl[e] difficult[y] to formulate any remotely acceptable axiology that captures this idea of 'neutrality'". The views of Benatar and Fehige avoid this problem, but they imply a more extreme position where adding new lives is neutral only in a best-case scenario where they contain no suffering or frustrated preferences.) If one accepts this position, then one could still want to avoid extinction -or at least the death of currently-living humans -but the promise of astronomical numbers of happy lives being created (Bostrom 2003a) would not be seen as particularly compelling, whereas the possible creation of astronomical numbers of lives experiencing suffering could be seen as a major thing to avoid. (One might naturally also have various intuitions that point in the opposite direction, that is, towards not prioritizing suffering risks. We will not survey these, as our intent in this section is merely to establish that many would consider suffering risks important to avoid, without claiming that this is the only plausible view to hold.) 3. Torture-level suffering cannot be counterbalanced. This intuition is present in the widespread notion that minor pains cannot be aggregated to become worse than an instant of torture (Rachels 1998), in threshold negative utilitarianism (Ord 2013), and in philosophical fiction such as The Ones Who Walk Away From Omelas (LeGuin 1973); it may also contribute to the absolute prohibitions against torture in some deontological moralities. Pearce (1995) expresses a form of it when he writes, "No amount of happiness or fun enjoyed by some organisms can notionally justify the indescribable horrors of Auschwitz". 4. Happiness as the absence of suffering. A view which is present in Epicureanism as well as in many non-Western traditions, such as Buddhism, is that of happiness as the absence of suffering. Under this view, when we are not experiencing states of pleasure, we begin to crave pleasure, and this craving constitutes suffering. Gloor (2017) writes: Uncomfortable pressure in one's shoes, thirst, hunger, headaches, boredom, itches, non-effortless work, worries, longing for better times. When our brain is flooded with pleasure, we temporarily become unaware of all the negative ingredients of our stream of consciousness, and they thus cease to exist.
Pleasure is the typical way in which our minds experience temporary freedom from suffering. This may contribute to the view that pleasure is the symmetrical counterpart to suffering, and that pleasure is in itself valuable and important to bring about. However, there are also (contingently rare) mental states devoid of anything bothersome that are not commonly described as (intensely) pleasurable, examples being flow states or states of meditative tranquility. Felt from the inside, tranquility is perfect in that it is untroubled by any aversive components, untroubled by any cravings for more pleasure. Likewise, a state of flow as it may be experienced during stimulating work, when listening to music or when playing video games, where tasks are being completed on auto-pilot with time flying and us having a low sense of self, also has this same quality of being experienced as completely problem-free. Such states -let us call them states of contentment -may not commonly be described as (intensely) pleasurable, but following philosophical traditions in both Buddhism and Epicureanism, these states, too, deserve to be considered states of happiness. Under this view, happiness and pleasure are not intrinsically good, but rather instrumentally good in that pleasure takes our focus away from suffering and thus helps us avoid it. Creating additional happiness, then, has no intrinsic value if that creation does not help avoid suffering. \n Suffering outcomes that could be prevented by a superintelligence In the previous section, we argued that nearly all plausible value systems will want to avoid suffering risks and that for many value systems, suffering risks are some of the worst possible outcomes and thus some of the most important to avoid. However, whether this also makes suffering risks the type of risk that is the most important to focus on, also depends on how probable suffering risks are. If they seem exceedingly unlikely, then there is little reason to care about them. In this and the next section, we will discuss reasons for believing that there are various suffering outcomes that might realize themselves. We begin by considering outcomes which occur naturally but could be prevented by a superintelligence. In the next section, we will consider suffering outcomes which could be caused by a superintelligence. A superintelligence could prevent almost any outcome if it established itself a singleton, \"a world order in which there is a single decision-making agency at the highest level\" (Bostrom 2005) . Although a superintelligence is not the only way by which a singleton might be formed, alternative ways -such as a world government or convergent evolution leading everyone to adopt the same values and goals (Bostrom 2005 ) -do not seem particularly likely to happen soon. Once a superintelligence had established itself as a singleton, depending on its values it might choose to take actions that prevented suffering outcomes from arising. \n Are suffering outcomes likely? Bostrom (2003a) argues that given a technologically mature civilization capable of space colonization on a massive scale, this civilization \"would likely also have the ability to establish at least the minimally favorable conditions required for future lives to be worth living\", and that it could thus be assumed that all of these lives would be worth living. 
Moreover, we can reasonably assume that outcomes which are optimized for everything that is valuable are more likely than outcomes optimized for things that are disvaluable. While people want the future to be valuable both for altruistic and self-oriented reasons, no one intrinsically wants things to go badly. However, Bostrom has himself later argued that technological advancement combined with evolutionary forces could \"lead to the gradual elimination of all forms of being worth caring about\" (Bostrom 2005) , admitting the possibility that there could be technologically advanced civilizations with very little of anything that we would consider valuable. The technological potential to create a civilization that had positive value does not automatically translate to that potential being used, so a very advanced civilization could still be one of no value or even negative value. Examples of technology's potential being unevenly applied can be found throughout history. Wealth remains unevenly distributed today, with an estimated 795 million people suffering from hunger even as one third of all produced food goes to waste (World Food Programme, 2017). Technological advancement has helped prevent many sources of suffering, but it has also created new ones, such as factory-farming practices under which large numbers of animals are maltreated in ways which maximize their production: in 2012, the amount of animals slaughtered for food was estimated at 68 billion worldwide (Food and Agriculture Organization of the United Nations 2012). Industrialization has also contributed to anthropogenic climate change, which may lead to considerable global destruction. Earlier in history, advances in seafaring enabled the transatlantic slave trade, with close to 12 million Africans being sent in ships to live in slavery (Manning 1992 ). Technological advancement does not automatically lead to positive results (Häggström 2016 ). Persson & Savulescu (2012) argue that human tendencies such as \"the bias towards the near future, our numbness to the suffering of great numbers, and our weak sense of responsibility for our omissions and collective contributions\", which are a result of the environment humanity evolved in, are no longer sufficient for dealing with novel technological problems such as climate change and it becoming easier for small groups to cause widespread destruction. Supporting this case, Greene (2013) draws on research from moral psychology to argue that morality has evolved to enable mutual cooperation and collaboration within a select group (\"us\"), and to enable groups to fight off everyone else (\"them\"). Such an evolved morality is badly equipped to deal with collective action problems requiring global compromises, and also increases the risk of conflict and generally negative-sum dynamics as more different groups get in contact with each other. As an opposing perspective, West (2017) argues that while people are often willing to engage in cruelty if this is the easiest way of achieving their desires, they are generally \"not evil, just lazy\". Practices such as factory farming are widespread not because of some deep-seated desire to cause suffering, but rather because they are the most efficient way of producing meat and other animal source foods. If technologies such as growing meat from cell cultures became more efficient than factory farming, then the desire for efficiency could lead to the elimination of suffering. 
Similarly, industrialization has reduced the demand for slaves and forced labor as machine labor has become more effective. At the same time, West acknowledges that this is not a knockdown argument against the possibility of massive future suffering, and that the desire for efficiency could still lead to suffering outcomes such as simulated game worlds filled with sentient non-player characters (see section on cruelty-enabling technologies below). Another argument against net suffering outcomes is offered by Shulman (2012) , who discusses the possibility of civilizations spending some nontrivial fraction of their resources constructing computing matter that was optimized for producing maximum pleasure per unit of energy, or for producing maximum suffering per unit of energy. Shulman's argument rests on the assumption that value and disvalue are symmetrical with regard to such optimized states. The amount of pleasure or suffering produced this way could come to dominate any hedonistic utilitarian calculus, and even a weak benevolent bias that led to there being more optimized pleasure than optimized suffering could tip the balance in favor of there being more total happiness. Shulman's argument thus suggests that net suffering outcomes could be unlikely unless a (non-compassionate) singleton ensures that no optimized happiness is created. However, the possibility of optimized suffering and the chance of e.g. civilizations intentionally creating it as a way of extorting agents that care about suffering reduction, also makes astronomical suffering outcomes more likely. \n Suffering outcome: dystopian scenarios created by non-value-aligned incentives. Bostrom (2005 Bostrom ( , 2014 discusses the possibility of technological development and evolutionary and competitive pressures leading to various scenarios where everything of value has been lost, and where the overall value of the world may even be negative. Considering the possibility of a world where most minds are brain uploads doing constant work, Bostrom (2014) points out that we cannot know for sure that happy minds are the most productive under all conditions: it could turn out that anxious or unhappy minds would be more productive. If this were the case, the resulting outcomes could be dystopian indeed: We seldom put forth full effort. When we do, it is sometimes painful. Imagine running on a treadmill at a steep incline-heart pounding, muscles aching, lungs gasping for air. A glance at the timer: your next break, which will also be your death, is due in 49 years, 3 months, 20 days, 4 hours, minutes, and 12 seconds. You wish you had not been born. (Bostrom 2014, p. 201) As Bostrom (2014) notes, this kind of a scenario is by no means inevitable; Hanson (2016) argues for a more optimistic outcome, where brain emulations still spend most of their time working, but are generally happy. But even Hanson's argument depends on economic pressures and human well-being happening to coincide: absent such a happy coincidence, he offers no argument for believing that the future will indeed be a happy one. More generally, Alexander ( 2014 ) discusses examples such as tragedies of the commons, Malthusian traps, arms races, and races to the bottom as cases where people are forced to choose between sacrificing some of their values and getting outcompeted. 
Alexander also notes the existence of changes to the world that nearly everyone would agree to be net improvements -such as every country reducing its military by 50%, with the savings going to infrastructure -which nonetheless do not happen because nobody has the incentive to carry them out. As such, even if the prevention of various kinds of suffering outcomes would be in everyone's interest, the world might nonetheless end up in them if the incentives are sufficiently badly aligned and new technologies enable their creation. An additional reason for why such dynamics might lead to various suffering outcomes is the so-called Anna Karenina principle (Diamond 1997 , Zaneveld et al. 2017 , named after the opening line of Tolstoy's novel Anna Karenina: \"all happy families are all alike; each unhappy family is unhappy in its own way\". The general form of the principle is that for a range of endeavors or processes, from animal domestication (Diamond 1997 ) to the stability of animal microbiomes (Zaneveld et al. 2017) , there are many different factors that all need to go right, with even a single mismatch being liable to cause failure. Within the domain of psychology, Baumeister et al. ( 2001 ) review a range of research areas to argue that \"bad is stronger than good\": while sufficiently many good events can overcome the effects of bad experiences, bad experiences have a bigger effect on the mind than good ones do. The effect of positive changes to wellbeing also tends to decline faster than the impact of negative changes: on average, people's well-being suffers and never fully recovers from events such as disability, widowhood, and divorce, whereas the improved well-being that results from events such as marriage or a job change dissipates almost completely given enough time (Lyubomirsky 2010). To recap, various evolutionary and game-theoretical forces may push civilization in directions that are effectively random, random changes are likely to bad for the things that humans value, and the effects of bad events are likely to linger disproportionately on the human psyche. Putting these considerations together suggests (though does not guarantee) that freewheeling development could eventually come to produce massive amounts of suffering. A possible counter-argument is that people are often more happy than their conditions might suggest. For example, as a widely-reported finding, while the life satisfaction reported by people living in bad conditions in slums is lower than that of people living in more affluent conditions, it is still higher than one might intuitively expect, and the slum-dwellers report being satisfied with many aspects of their life (Biswas-Diener & Diener 2001). In part, this is explained by fact that despite the poor conditions, people living in the slums still report many things that bring them pleasure: a mother who has lost two daughters reports getting joy from her surviving son, is glad that the son will soon receive a job at a bakery, and is glad about her marriage to her husband and feels that her daily prayer is important (Biswas-Diener & Diener 2001). However, a proper evaluation of this research is complicated: \"suffering\" might be conceptualized as best corresponding to negative feelings, which are a separate component from cognitively evaluated life satisfaction (Lukas, Diener & Suh 1996) , with the above slumdweller study focusing mainly on life satisfaction. 
In general, life satisfaction is associated with material prosperity, while positive and negative feelings are associated with psychological needs such as autonomy, respect, and the ability to be able count on others in an emergency (Diener et al. 2010) . A proper review of the literature and an analysis of how to interpret the research in terms of suffering risks is beyond the scope of this paper. \n Suffering outcome: cruelty-enabling technologies. Better technology may enable people to better engage in cruel and actively sadistic pursuits. While active sadism and desire to hurt others may be a relatively rare occurrence in contemporary society, public cruelty has been a form of entertainment in many societies, ranging from the Roman practice of involuntary gladiator fights to animal cruelty in the Middle Ages. Even in contemporary society, there are widespread sentiments that people such as criminals should be severely punished in ways which inflict considerable suffering (part of the Roman gladiators were convicted criminals). Contemporary society also contains various individuals who are motivated by the desire to hurt others (Torres 2016 (Torres , 2017a (Torres , 2017b , chap 4.), even to the point of sacrificing their own lives in the process. For example, Eric Harris, one of the two shooters of the Columbine High School Massacre, wrote extensively about his desire to rape and torture people, fantasized about tricking women into thinking that they were safe so that he could then hurt them, and wanted the freedom to be able to kill and rape without consequences (Langman 2015) . While mass shooters tend to be lone individuals, there have existed more organized groups who seem to have given their members the liberty to act on similar motivations (Torres 2017a), such as the Aum Shinrikyo cult, where dissent or even just \"impure thoughts\" were punished by rituals amounting to torture and defectors \"routinely kidnapped, tortured, imprisoned in cargo crates, subjected to electro shock, drugged in the Astral Hospital or killed outright\" (Flannery 2016) . While most contemporary societies reject the idea of cruelty as entertainment, civilizations could eventually emerge in which such practices were again acceptable. Assuming advanced technology, this could take the form of keeping criminals and other undesirables alive indefinitely while subjecting them to eternal torture 8 , slaves kept for the purpose of sadistic actions who could be healed of any damage inflicted to them (one fictional illustration of such a scenario recently received widespread popularity as the TV series Westworld) 9 , or even something like vast dystopian simulations of fantasy warfare inhabited by sentient \"non-player characters\", to serve as the location of massive multiplayer online games which people may play in as super-powered \"heroes\". Particularly in the latter scenarios, the amount of sentient minds in such conditions could be many times larger than the civilization's other population. In contemporary computer games, it is normal for the player to kill thousands of computer-controlled opponents during the game, suggesting that a large-scale game in which a sizeable part of the population participated might instantiate very large numbers of non-player characters per player, existing only to be hurt for the pleasure of the players. 
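A rough, purely illustrative calculation shows how quickly such game worlds could come to dominate the moral ledger. Every parameter below is an assumption chosen only to display the orders of magnitude involved, not a forecast.

# Back-of-the-envelope sketch (Python): sentient non-player characters (NPCs)
# in a hypothetical large-scale game. All parameters are assumptions.
players = 1e9                 # assumed: a tenth of a ten-billion-person civilization plays
npcs_per_player = 1_000       # assumed: in line with the thousands of opponents per game today
npc_lifetimes_per_year = 10   # assumed: game worlds are reset and repopulated repeatedly

sentient_npcs_per_year = players * npcs_per_player * npc_lifetimes_per_year
print(f"{sentient_npcs_per_year:.0e} NPC-lives per year")  # prints 1e+13

# Even under these modest assumptions, such games would instantiate roughly a
# hundred times more minds per year than the ~1e11 humans who have ever lived,
# most of them existing only to be harmed for the players' entertainment.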
5 Suffering outcomes that may be caused by superintelligence 10 In the previous section, we discussed possible suffering outcomes that might be realized without a singleton that could prevent them from occurring, and suggested that an appropriately-programmed superintelligence is currently the most likely candidate for forming such a singleton. However, an inappropriately programmed superintelligence could also cause suffering outcomes; we will now turn to this topic. Superintelligence is related to three categories of suffering risk: suffering subroutines (Tomasik 2017), mind crime (Bostrom 2014 ) and flawed realization (Bostrom 2013). \n Suffering subroutines Humans have evolved to be capable of suffering, and while the question of which other animals are conscious or capable of suffering is controversial, pain analogues are present in a wide variety of animals. The U.S. National Research Council's Committee on Recognition and Alleviation of Pain in Laboratory Animals (2004) argues that, based on the state of existing evidence, at least all vertebrates should be considered capable of experiencing pain. Pain seems to have evolved because it has a functional purpose in guiding behavior: evolution having found it suggests that pain might be the simplest solution for achieving its purpose. A superintelligence which was building subagents, such as worker robots or disembodied cognitive agents, might then also construct them in such a way that they were capable of feeling pain -and thus possibly suffering (Metzinger 2015) -if that was the most efficient way of making them behave in a way that achieved the superintelligence's goals. Humans have also evolved to experience empathy towards each other, but the evolutionary reasons which cause humans to have empathy (Singer 1981) may not be relevant for a superintelligent singleton which had no game-theoretical reason to empathize with others. In such a case, a superintelligence which had no disincentive to create suffering but did have an incentive to create whatever furthered its goals, could create vast populations of agents which sometimes suffered while carrying out the superintelligence's goals. Because of the ruling superintelligence's indifference towards suffering, the amount of suffering experienced by this population could be vastly higher than it would be in e.g. an advanced human civilization, where humans had an interest in helping out their fellow humans. Depending on the functional purpose of positive mental states such as happiness, the subagents might or might not be built to experience them. For example, Fredrickson (1998) suggests that positive and negative emotions have differing functions. Negative emotions bias an individual's thoughts and actions towards some relatively specific response that has been evolutionarily adaptive: fear causes an urge to escape, anger causes an urge to attack, disgust an urge to be rid of the disgusting thing, and so on. In contrast, positive emotions bias thought-action tendencies in a much less specific direction. For example, joy creates an urge to play and be playful, but \"play\" includes a very wide range of behaviors, including physical, social, intellectual, and artistic play. All of these behaviors have the effect of developing the individual's skills in whatever the domain. The overall effect of experiencing positive emotions is to build an individual's resources -be those resources physical, intellectual, or social. 
To the extent that this hypothesis were true, a superintelligence might design its subagents in such a way that they had pre-determined response patterns for undesirable situations, so exhibited negative emotions. However, if it was constructing a kind of a command economy in which it desired to remain in control, it might not put a high value on any subagent accumulating individual resources. Intellectual resources would be valued to the extent that they contributed to the subagent doing its job, but physical and social resources could be irrelevant, if the subagents were provided with whatever resources necessary for doing their tasks. In such a case, the end result could be a world whose inhabitants experienced very little if any in the way of positive emotions, but did experience negative emotions. This could qualify as any one of the suffering outcomes we've considered (astronomical, net, pan-generational net). A major question mark with regard to suffering subroutines are the requirements for consciousness (Muehlhauser 2017 ) and suffering (Metzinger 2016 , Tomasik 2017 ). The simpler the algorithms that can suffer, the more likely it is that an entity with no regard for minimizing it would happen to instantiate large numbers of them. If suffering has narrow requirements such as a specific kind of self-model (Metzinger 2016 ), then suffering subroutines may become less common. Below are some pathways that could lead to the instantiation of large numbers of suffering subroutines (Gloor 2016) : Anthropocentrism. If the superintelligence had been programmed to only care about humans, or by minds which were sufficiently human-like by some criteria, then it could end up being indifferent to the suffering of any other minds, including subroutines. Indifference. If attempts to align the superintelligence with human values failed, it might not put any intrinsic value on avoiding suffering, so it may create large numbers of suffering subroutines. Uncooperativeness. The superintelligence's goal is something like classical utilitarianism, with no additional regards for cooperating with other value systems. As previously discussed, classical utilitarianism would prefer to avoid suffering, all else being equal. However, this concern could be overridden by opportunity costs. For example, Bostrom (2003a) suggests that every second of delayed space colonization corresponds to a loss equal to 10^14 potential lives. A classical utilitarian superintelligence that took this estimate literally might choose to build colonization robots that used suffering subroutines, if this was the easiest way and developing alternative cognitive architectures capable of doing the job would take more time. \n Mind crime A superintelligence might run simulations of sentient beings for a variety of purposes. Bostrom (2014, p. 152) discusses the specific possibility of an AI creating simulations of human beings which were detailed enough to be conscious. These simulations could then be placed in a variety of situations in order to study things such as human psychology and sociology, and destroyed afterwards. The AI could also run simulations that modeled the evolutionary history of life on Earth, to obtain various kinds of scientific information or to help estimate the likely location of the \"Great Filter\" (Hanson 1998 ) and whether it should expect to encounter other intelligent civilizations. This could repeat the wild-animal suffering (Tomasik 2015 , Dorado 2015 experienced in Earth's evolutionary history. 
The AI could also create and mistreat, or threaten to mistreat, various minds as a way to blackmail other agents. As it is possible that minds in simulations could one day compose the majority of all existing minds (Bostrom 2003b ), and that with sufficient technology there could be astronomical numbers of them, then depending on the nature of the simulations and the net amount of happiness and suffering, mind crime could possibly lead to any one of the three suffering outcomes. Below are some pathways that could lead to mind crime (Gloor 2016 ): Anthropocentrism. Again, if the superintelligence had been programmed to only care about humans, or about minds which were sufficiently human-like by some criteria, then it could be indifferent to the suffering experienced by non-humans in its simulations. Indifference. If attempts to align the superintelligence with human values failed, it might not put any intrinsic value on avoiding suffering, so it may create large numbers of simulations with sentient minds if that furthered its objectives. Extortion. The superintelligence comes into conflict with another actor that disvalues suffering, so the superintelligence instantiates large numbers of suffering minds as a way of extorting the other entity. Libertarianism regarding computations: the creators of the first superintelligence instruct the AI to give every human alive at the time control of a planet or galaxy, with no additional rules to govern what goes on within those territories. This would practically guarantee that some humans would use this opportunity for inflicting widespread cruelty (see the previous section). \n Flawed realization A superintelligence with human-aligned values might aim to convert the resources in its reach into clusters of utopia, and seek to colonize the universe in order to maximize the value of the world (Bostrom 2003a) , filling the universe with new minds and valuable experiences and resources. At the same time, if the superintelligence had the wrong goals, this could result in a universe filled by vast amounts of disvalue. While some mistakes in value loading may result in a superintelligence whose goal is completely unlike what people value, certain mistakes could result in flawed realization (Bostrom 2013) . In this outcome, the superintelligence's goal gets human values mostly right, in the sense of sharing many similarities with what we value, but also contains a flaw that drastically changes the intended outcome 11 . For example, value extrapolation (Yudkowsky 2004 ) and value learning (Soares 2016, Sotala 2016) approaches attempt to learn human values in order to create a world that is in accordance with those values. There have been occasions in history when circumstances that cause suffering have been defended by appealing to values which seem pointless to modern sensibilities, but which were nonetheless a part of the prevailing values at the time. In Victorian London, the use of anesthesia in childbirth was opposed on the grounds that being under the partial influence of anesthetics may cause \"improper\" and \"lascivious\" sexual dreams (Farr 1980) , with this being considered more important to avoid than the pain of childbirth. A flawed value-loading process might give disproportionate weight to historical, existing, or incorrectly extrapolated future values whose realization then becomes more important than the avoidance of suffering. 
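As a toy illustration of how a mis-weighted value-loading process can invert priorities, consider the sketch below, which mirrors the anesthesia example above. The value terms and weights are invented solely to show that a large enough weight on an incidental value can dominate the suffering term; nothing here reflects how real value-learning systems represent values.

# Toy sketch (Python): a flawed learned objective that overweights an incidental
# historical value relative to suffering avoidance. All terms and weights are invented.
def learned_objective(option, w_propriety=100.0, w_suffering=1.0):
    # The intended objective would weight suffering avoidance heavily;
    # here the learned weights are badly mis-calibrated.
    return w_propriety * option["propriety"] - w_suffering * option["suffering"]

options = {
    "anesthesia":    {"propriety": 0.0, "suffering": 10.0},  # relieves pain, offends the norm
    "no anesthesia": {"propriety": 1.0, "suffering": 90.0},  # preserves the norm, causes pain
}

best = max(options, key=lambda name: learned_objective(options[name]))
print(best)  # "no anesthesia": the incidental value term dominates the suffering term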
Besides merely considering the avoidance of suffering less important than the enabling of other values, a flawed process might also tap into various human tendencies for endorsing or celebrating cruelty (see the discussion in section 4), or for outright glorifying suffering. Small changes to a recipe for utopia may thus lead to a future with much more suffering than one shaped by a superintelligence whose goals were completely different from ours. 
\n How and whether to work on s-risks? We will now argue that it is possible to productively work on suffering risks today, via some of the following recommendations. Carry out general AI alignment work. Given that it would generally be against the values of most humans for suffering outcomes to be realized, research aimed at aligning AIs with human values (Yudkowsky 2008, Goertzel & Pitt 2012, Bostrom 2014, Sotala 2016, Soares & Fallenstein 2017) seems likely to also reduce the risk of suffering outcomes. If our argument for suffering outcomes being something to avoid is correct, then an aligned superintelligence should also attempt to establish a singleton that would prevent negative suffering outcomes, as well as avoid the creation of suffering subroutines and mind crime. In addition to technical approaches to AI alignment, the possibility of suffering risks also tends to support similar recommendations regarding social and political approaches. For example, Bostrom et al. (2016) note that conditions of global turbulence might cause challenges for creating value-aligned AI, such as if pre-existing agreements are not kept and ill-conceived regulation is enacted in haste. Previous work has also pointed to the danger of arms races making it harder to keep AI aligned (Shulman 2009, Miller 2012, Armstrong et al. 2013). As the avoidance of suffering outcomes is in the joint interest of many different value systems, measures that reduce the risk of arms races and improve the ability of different value systems to shape the world in their desired direction can also help avoid suffering outcomes. Besides making AIs more aligned in general, some interventions may help avoid negative outcomes in particular, such as suffering outcomes from flawed realization scenarios. Most current alignment research seeks to ensure that the values of any created AIs are aligned with humanity's values to the maximum possible extent, so that the future they create will contain as much positive value as possible. This is a difficult goal: to the extent that humanity's values are complex and fragile (Yudkowsky 2011), successful alignment may require getting a very large number of details right. On the other hand, it seems much easier to give AIs goals that merely ensure that they will not create a future with negative value by causing suffering outcomes. This suggests an approach of fail-safe methods: safety nets or mechanisms such that, if AI control fails, the outcome will be as good as it gets under the circumstances. Fail-safe methods could include tasking an AI with the objective of buying more time in which to carefully solve goal alignment more generally, or fallback goal functions: Research fallback goals: Research ways to implement multi-layered goal functions, with a "fallback goal" that kicks in if the implementation of the top layer does not fulfill certain safety criteria. The fallback would be a simpler, less ambitious goal that is less likely to result in bad outcomes.
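As a purely illustrative sketch of the kind of structure such research might investigate, the Python fragment below wires a primary objective to a simpler fallback objective behind a set of safety checks. The names, checks, and objectives are all hypothetical placeholders; nothing here addresses the hard part, which is making such criteria robust against a capable optimizer.

# Minimal sketch of a multi-layered goal function with a fallback goal.
# Every name and criterion is a hypothetical placeholder, not a proposal.
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class LayeredGoal:
    primary: Callable[[Any], float]        # ambitious top-layer objective (e.g. learned human values)
    fallback: Callable[[Any], float]       # simpler, less ambitious goal, less likely to go badly wrong
    safety_checks: List[Callable[[Any], bool]] = field(default_factory=list)

    def evaluate(self, outcome: Any) -> float:
        # The fallback goal "kicks in" whenever any safety criterion fails.
        if all(check(outcome) for check in self.safety_checks):
            return self.primary(outcome)
        return self.fallback(outcome)

# Hypothetical usage, assuming such functions were available:
# goal = LayeredGoal(primary=learned_value_model,
#                    fallback=lambda o: float(o.defers_to_human_oversight),  # e.g. a conservative "pause and defer" objective
#                    safety_checks=[operators_retain_oversight, value_model_passed_audits])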
Difficulties would lie in selecting the safety criteria in ways that people with different values could all agree on, and in making sure that the fallback goal gets triggered under the correct circumstances. Care needs to be taken with the selection of the fallback goal, however. If the goal was something like reducing suffering, then in a multipolar (Bostrom 2014 ) scenario, other superintelligences could have an incentive to create large amounts of suffering in order to coerce the superintelligence with the fallback goal to act in some desired way. Research ways to clearly separate superintelligence designs from ones that would contribute to suffering risk. Yudkowsky (2017) proposes building potential superintelligences in such a way as to make them widely separated in design space from ones that would cause suffering outcomes. For example, if an AI has a representation of \"what humans value\" V which it is trying to maximize, then it would only take a small (perhaps accidental) change to turn it into one that maximized -V instead, possibly causing enormous suffering. One proposed way of achieving this is by never trying to explicitly represent complete human values: then, the AI \"just doesn't contain the information needed to compute states of the universe that we'd consider worse than death; flipping the sign of the utility function U, or subtracting components from U and then flipping the sign, doesn't identify any state we consider worse than [death]\" (Yudkowsky 2017) . This would also reduce the risk of suffering being through another actor which was trying to extort the superintelligence. Carry out research on suffering risks and the enabling factors of suffering. At this moment, there is only little research to the possibility of risks of astronomical suffering. Two kinds of research would be particularly useful. First, research focused on understanding the biological and algorithmic foundation of suffering (Metzinger 2016 ) could help understand how likely outcomes such as suffering subroutines would be. Pearce (1995) has argued for the possibility of minds motivated by \"gradients of bliss\", which would not need to experience any suffering: if minds could be designed in such a manner, that might help avoid suffering outcomes. Second, research on suffering outcomes in general, to understand how to avoid them. With regard to suffering risks from extortion scenarios, targeted research in economics, game theory or decision theory could be particularly valuable. Rethink maxipok and maximin. Bostrom (2002 Bostrom ( , 2013 proposes a \"maxipok rule\" to act as a rule of thumb when trying to act in the best interest of humanity as a whole: Maxipok: Maximise the probability of an 'OK outcome', where an OK outcome is any outcome that avoids existential catastrophe. The considerations in this paper do not necessarily refute the rule as written, especially not since Bostrom defines an \"existential catastrophe\" to include \"the permanent and drastic destruction of its potential for desirable future development\", and the realization of suffering outcomes could very well be thought to fall under this definition. However, in practice much of the discourse around the concept of existential risk has focused on the possibility of extinction, so it seems valuable to highlight the fact that \"existential catastrophe\" does not include only scenarios of zero value, but also scenarios of negative value. 
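A toy calculation can make the practical point concrete. In the sketch below, the outcome values and probabilities are invented solely for illustration; the point is only that once futures of negative value are counted as existential catastrophes, a policy can look best under a narrow "avoid extinction" reading and yet look worse under maxipok as actually defined.

# Toy illustration (Python); all outcome values and probabilities are invented.
OUTCOME_VALUE = {"flourishing": 1.0, "extinction": 0.0, "hellscape": -10.0}

policies = {
    "A": {"flourishing": 0.90, "extinction": 0.01, "hellscape": 0.09},
    "B": {"flourishing": 0.91, "extinction": 0.08, "hellscape": 0.01},
}

for name, probs in policies.items():
    expected_value = sum(OUTCOME_VALUE[o] * p for o, p in probs.items())
    p_extinction = probs["extinction"]
    p_catastrophe = probs["extinction"] + probs["hellscape"]  # zero- or negative-value futures
    print(f"{name}: P(extinction)={p_extinction:.2f}, "
          f"P(catastrophe)={p_catastrophe:.2f}, EV={expected_value:.2f}")

# Judged by extinction risk alone, A looks safer (0.01 vs 0.08); counting the
# negative-value hellscape as an existential catastrophe reverses the ranking
# both under maxipok (0.10 vs 0.09) and in expected value (0.00 vs 0.81).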
Bostrom (2002 Bostrom ( , 2013 ) also briefly discusses the \"maximin\" principle, \"choose the action that has the best worst-case outcome\", and rejects this principle as he argues that this entails \"choosing the action that has the greatest benefit under the assumption of impending extinction. Maximin thus implies that we ought all to start partying as if there were no tomorrow.\" (Bostrom 2013, p. 19) . However, since a significant contribution to the expected value of AI comes from worse outcomes than extinction, this argument is incorrect. While there may be other reasons to reject maximin, the principle correctly implies choosing the kinds of actions that avoid the worst suffering outcomes and so might not be very dissimilar from maxipok. Figure 2 : 2 Figure 2: types of possible suffering outcomes. An outcome may count as one or several of the categories in this table. \n\t\t\t Bostrom (2014) is mainly focused on the risk of extinction, but does also devote some discussion to alternative negative outcomes such as \"mindcrime\". We discuss mindcrime in section 5. \n\t\t\t Fictional depictions include Ellison (1967) and Ryding (no date); note that both stories contain very disturbing imagery. A third depiction was in the \"White Christmas\" episode of the TV series Black Mirror, which included a killer placed in solitary confinement for thousands of years while having to listen to a Christmas song on an endless loop. 9 Another fictional depiction includes Gentle (2004) ; the warning for disturbing graphic imagery very much applies. \n\t\t\t This section reprints material that has previously appeared in a work by one of the authors (Gloor 2016 ), but has not been formally published before. \n\t\t\t How and whether to work on srisk?In the previous sections, we have argued for s-risks being severe enough to be worth preventing, and for there to be several plausible routes by which they might be realized. 11 One fictional illustration of a flawed utopia is Yudkowsky (2009) , though this setting does not seem to contain enormous amounts of suffering.", "date_published": "n/a", "url": "n/a", "filename": "1877-3728-1-PB.tei.xml", "abstract": "Discussions about the possible consequences of creating superintelligence have included the possibility of existential risk, often understood mainly as the risk of human extinction. We argue that suffering risks (s-risks), where an adverse outcome would bring about severe suffering on an astronomical scale, are risks of a comparable severity and probability as risks of extinction. Preventing them is the common interest of many different value systems. Furthermore, we argue that in the same way as superintelligent AI both contributes to existential risk but can also help prevent it, superintelligent AI can be both the cause of suffering risks and a way to prevent them from being realized. Some types of work aimed at making superintelligent AI safe will also help prevent suffering risks, and there may also be a class of safeguards for AI that helps specifically against s-risks. Povzetek: Prispevek analizira prednosti in nevarnosti superinteligence.", "id": "5fd70c2ac04b8d203b56a477b622ffac"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Max Tegmark"], "title": "Life 3.0: Being Human in the Age of Artificial Intelligence", "text": "In the interests of disclosure, Tegmark and I are collaborators and share a literary agent. 
With physicists Stephen Hawking and Frank Wilczek, we wrote the 2014 Huffington Post article 'Transcending complacency on superintelligent machines' (see go.nature.com/2wadkao). Ostensibly a review of Wally Pfister's dystopian AI film Transcendence, this was really a call to the AI community to take the risks of intelligent systems seriously. Thus, I am unlikely to disagree strongly with the premise of Life 3.0. Life, Tegmark argues, may or may not spread through the Universe and "flourish for billions or trillions of years" because of decisions we make now - a possibility both seductive and overwhelming. The book's title refers to a third phase in evolutionary history. For almost 4 billion years, both hardware (bodies) and software (capacity for generating behaviour) were fixed by biology. For the next 100,000 years, learning and culture enabled humans to adapt and control their own software. In the imminent third phase, both software and hardware can be redesigned. This may sound like transhumanism - the movement to re-engineer body and brain - but Tegmark's focus is on AI, which supplements mental capabilities with external devices. Tegmark considers both risks and benefits. Near-term risks include an arms race in autonomous weapons and dramatic reductions in employment. The AI community is practically unanimous in condemning the creation of machines that can choose to kill humans, but the issue of work has sparked debate. Many predict an economic boon - AI inspiring new jobs to replace old, as with previous industrial revolutions. Tegmark wryly imagines two horses discussing the rise of the internal combustion engine in 1900. One predicts "new jobs for horses … That's what's always happened before, like with the invention of the wheel and the plow." For most horses, alas, the "new job" was to be pet food. Tegmark's analysis is compelling, and shared by economists such as Paul Krugman. But the question remains: what desirable economy might we aim for, when most of what we now call work is done by machines? The longer-term risks are existential. The book's fictional prelude describes a reasonably plausible scenario in which superintelligent AI might emerge. Later, Tegmark ranges over global outcomes from near-Utopias to human enslavement or extinction. That we have no idea how to steer towards the better futures points to a dearth of serious thinking on why making AI better might be a bad thing. Computer pioneer Alan Turing, raising the possibility in 1951 that our species would at best be "greatly humbled" by AI, expressed the general unease of making something smarter than oneself. Assuaging this unease by curtailing progress on AI may be neither feasible nor preferable. The most interesting part of Life 3.0 explains that the real issue is the potential for misaligned objectives. Cybernetics founder Norbert Wiener wrote in 1960, "We had better be quite sure that the purpose put into the machine is the purpose which we really desire." Or, as Tegmark has it, "It's unclear how to imbue a superintelligent AI with an ultimate goal that neither is undefined nor leads to the elimination of humanity." In my view, this technological and philosophical problem demands all the intellectual resources we can bring to bear. Only if we solve it can we reap the benefits.
Among these is expansion across the Universe, perhaps powered by such exotic technologies as Dyson spheres (which would capture the energy of a star), accelerators built around black holes or Tegmark's theorized sphalerizers (like diesel engines, but quark-powered and one billion times more efficient). For sheer science fun, it's hard to beat the explanations of how much upside the Universe and the laws of physics will allow. We may one day, for example, expand the biosphere \"by about 32 orders of magnitude\". It's seriously disappointing, then, to learn that cosmic expansion may limit us to settling only 10 billion galaxies. And we feel our descendants' anxiety as \"the threat of dark energy tearing cosmic civilizations apart motivates massive cosmic engineering projects\". The book concludes with the Future of Life Institute's role in moving these issues into mainstream AI thinking -for which Tegmark deserves huge credit. He is not alone, of course, in raising the alarm. In its sweeping vision, Life 3.0 has most in common with Nick Bostrom's 2014 Superintelligence (Oxford University Press). Unlike Bostrom, however, Tegmark is not trying to prove that risk is un avoidable; and he eschews dense philosophy in favour of asking the reader which scenarios they think more probable or desirable. Although I strongly recommend both books, I suspect that Tegmark's is less likely to provoke in AI researchers a common allergic reaction -a retreat into defensive arguments for paying no attention. Here's a typical one: we don't worry about remote but speciesending possibilities such as black holes materializing in near-Earth orbit, so why worry about superintelligent AI? Answer: if physicists were working to make such black holes, wouldn't we ask them if it was safe? The Economist has drily characterized the overarching issue thus: \"The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking. \" Life 3.0 is far from the last word on AI and the future, but it provides a fascinating glimpse of the hard thinking required. ■ Stuart Russell is professor of computer science at the University of California, Berkeley and co-author of Artificial Intelligence: A Modern Approach. e-mail: russell@berkeley.edu \t\t\t © 2 0 1 7 M a c m i l l a n P u b l i s h e r s L i m i t e d , p a r t o f S p r i n g e r N a t u r e . A l l r i g h t s r e s e r v e d .", "date_published": "n/a", "url": "n/a", "filename": "548520a.tei.xml", "abstract": "M ax Tegmark is a renowned physicist. He is also the irrepressibly optimistic co-founder of the Future of Institute in Cambridge, Massachusetts (motto: \"Technology is giving life the potential to flourish like never before … or to selfdestruct. Let's make a difference!\"). Now, in Life 3.0, he tackles a pressing future development -the evolution of artificial intelligence (AI). He argues that the risks demand serious thought if our \"cosmic endowment\" is not to be inadvertently thrown away.", "id": "92b10bdfba64dc1c488a5453d3aa99c2"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Charlotte Stix", "Matthijs M Maas"], "title": "Bridging the gap: the case for an 'Incompletely Theorized Agreement' on AI policy", "text": "Introduction The prevailing uncertainty around the trajectory and impact of artificial intelligence (AI) makes it clear that appropriate technology policy approaches are urgently needed. 
The possible negative ethical and societal impacts of AI are considerable: from algorithmic bias to AI-enabled surveillance, and from lethal autonomous weapons systems to widespread technology-induced unemployment. Moreover, some forecast that continuing progress in AI capabilities will eventually make AI systems a 'general-purpose technology' [1] , or may even enable the development of 'high-level machine intelligence' (HLMI) [2] or other 'transformative' capabilities [3, 4] . Debate on these latter scenarios is diverse, and has at times focused on what some have referred to as 'Artificial General Intelligence' (AGI) [5] . On the surface, those concerned with AI's impacts can appear divided between those who focus on discernible problems in the near term, and those who focus on more uncertain problems in the longer term [6] [7] [8] [9] . This paper wants to investigate the dynamics and debates between these two communities, with an eye to fostering policy effectiveness through greater cooperation. In doing so, this paper seeks to take up the recent call to 'bridge the near-and long-term challenges of AI' [9] . The focus therefore is not on the relative urgency of existing algorithmic threats (such as e.g. facial recognition or algorithmic bias), nor on the relative plausibility of various advanced AI scenarios (such as e.g. HLMI or AGI), nor do we mean to suggest that a long-term perspective is solely focused on or concerned with AGI [10] [11] [12] . Rather, the paper proposes that even if some community divergence exists, each group's overarching intention to contribute to responsible and ethical AI policy 1 would benefit from cooperation within key domains to maximize policy effectiveness. The paper suggests that differences may be overstated, and proposes that even if one assume such differences, these are not practically insurmountable. Rather, it argues that the principle of an 'incompletely theorized agreement', originally derived from constitutional law, provides both philosophical foundations and historical precedent for a form of cooperation between divergent communities that enables progress on urgent shared issues, without compromising on their respective goals. The paper proceeds as follows: in Sect. 2, we provide a short rationale for our proposed intervention. We briefly lay out the landscape of AI policy concerns and the structure of the associated AI ethics and policy community. This is followed by a discussion, drawing on historical cases as well as the contemporary challenges facing AI policy scholars, of how fragmentation within an expert community might hinder progress on key and urgent policies. In Sect. 3, we explore potential sources which could contribute to community divergence. We touch on epistemic and methodological disagreements and normative disagreements, and home in on pragmatic disagreements around the tractability of formulating AI policy actions today which maintain long-term relevance. We briefly review how serious these disagreements are, arguing that these trade-offs are often exaggerated, or do not need to preclude collaboration. Finally, in Sect. 4 , we propose that one consolidating avenue to harness mutually beneficial cooperation for the purpose of effective AI policy could be anchored in the constitutional law principle of an 'incompletely theorized agreement'. This proposal works under the assumption that the influence of a community on policy making is significantly stronger if they act as a united front, rather than as scattered subgroups. 
\n AI policy: a house divided? Recent progress in AI has given rise to an array of ethical and societal concerns. 2 Accordingly, there have been calls for appropriate policy measures to address these. As an \"omni-use technology\" [13] , AI has both potential for good [14] [15] [16] as well as for bad [17] [18] [19] applications. The latter include: various forms of pervasive algorithmic bias [20, 21] , challenges around transparency and explainability [22, 23] ; the safety of autonomous vehicles and other cyberphysical systems [24] , or the potential of AI systems to be used in (or be susceptible to) malicious or criminal attacks [25] [26] [27] ; the erosion of democracy through e.g. 'computational propaganda' or 'deep fakes' [28] [29] [30] , and an array of threats to the full range of human rights [31, 32] . The latter may eventually cumulate in the possible erosion of the global legal order by the comparative empowerment of authoritarian states [33, 34] . Finally, some express concern that continued technological progress might eventually result in increasingly more 'transformative' AI capabilities [3] , up to and including AGI. Indeed, a number of AI researchers expect some variation of 'high-level machine intelligence' to be achieved within the next five decades [2] . Some have suggested that if those transformative capabilities are not handled with responsibility and care, such developments could well result in new and potential catastrophic risks to the welfare, autonomy, or even long-term survival of societies [35, 36] . Looking at the current debate and scholarship involved in the aforementioned areas, we note, along with other scholars [6, 8, 37] , that there appears to be a fuzzy split, along a temporal 'near-term'/'long-term' axis. This perception matters because, as in many other fields and contexts, a perceived or experienced distinction may eventually become a self-fulfilling prophecy [38] . This holds true even if the perceived differences are based on misperceptions or undue simplification by popular scientific media [39] [40] [41] . Of course, fragmentation between near-and longer-term considerations of AI's impact is only one way to explore the growing community, and it may not be the sole issue to overcome to maximize policy impact. However, for the purpose of this paper, our focus is on this specific gap. \n The policy advantages of collaboration: lessons from history The current AI ethics and policy community is a young one. Policy shifts, on the other hand, take time. As such, it is difficult to clearly outline what impact current dynamics have. We are, after all, still in the early stages of these developments. Nevertheless, historical examples of adjacent fields can help to demonstrate and forecast how fragmentation, or, conversely cooperation, on policy goals within the AI ethics and policy community could strengthen impact on technology policy. Why should potential fragmentation along an axis such as near-and longer-term concerns worry us? History shows that the structure of a field or community affects the ability of its members to shape and influence policy downstream. Importantly, it shows that there is significant benefit derived from collaboration. We put forward three historic examples of adjacent technology policy fields to AI, meaning those that tackled equally new and emerging technologies. 
We briefly highlight one case where fragmentation may have contributed to a negative impact on the overall policy impact of the community and two cases where a collaborative effort yielded a positive impact on policy formulation. \n Nanotechnology One community that arguably suffered from a public pursuit of fractious division was the nanotechnology community in the early 2000s [42, 43] . Internal disagreements came to a head in the 2003 'Drexler-Smalley' debate [44] , which cemented an oversimplified public caricature of the field. Scholars reviewing this incident have argued that 'para-scientific' media created \"polarizing controversy that attracted audiences and influenced policy and scientific research agendas. […] bounding nanotechnology as a field-in-tension by structuring irreconcilable dichotomies out of an ambiguous set of uncertainties.\" [38] . This showcases a missed opportunity within a fragmented community to come together to promote greater political engagement with the responsible development of the technology. \n Recombinant DNA In the 1970s, concerns arose over recombinant DNA (rDNA) technology. In particular, the ethical implications of the ability to reshape life, as well as fears over potential biohazards from new infectious diseases led the biotechnology community to come together at the 1975 Asilomar Conference on Recombinant DNA to set shared standards [45] . The conference is widely considered a landmark in the field [46] : the scientist's and lawyers' commitment to a forthright open and public discussion has been argued to have stimulated both public interest and grounded policymaker discussion about the social, political and environmental issues related to genetic biotechnology in medicine and agriculture [47] . \n Ballistic missile defense arms control In the wake of the creation of the atom bomb, a number of scientists expressed dismay and sought to institutionalize global control of these weapons. Early efforts to this end, such as the 1946 Baruch Plan, proved unsuccessful [48, 49] . However, by the 1950s-1960s, a new 'epistemic community' emerged, bringing together both technical and social scientists, specifically in opposition to the development of antiballistic missile (ABM) systems. This community proved able to develop and disseminate this new understanding of nuclear deterrence dynamics to policymakers [50] . They achieved this by maintaining a high level of consensus on concrete policy goals, by framing public discourse on the ethical goals, and by fostering links to both policymakers as well as to Soviet scientists. This allowed them to persuade key administration figures and shift policymaker norms and perceptions at home and internationally. Ultimately, setting the stage for the 1972 ABM Treaty, the first arms control agreement of this kind [50, 51] . \n The pitfalls of fragmented efforts in AI Policy While some of the historical context is surely different, a number of these historical dynamics may well transfer to the emerging AI ethics and policy community [52] . Those concerned with AI policy could benefit from exploring such historical lessons. This should be done with urgency, for two reasons. First, there is a closing window of opportunity. The field of AI policy is a relatively new one, which offers a degree of flexibility in terms of problem framings, governance instrument choice and design, and community alignment. 
Going forwards, however, this field has a high likelihood of becoming progressively more rigid as framings, public perceptions, and stakeholder interests crystallize. Current dynamics could, therefore, have far-reaching impacts, given the potential to lock in a range of path-dependencies, for example through particular framings of the issues at hand [53] . In this context, a divided community which potentially treats policymakers or public attention as a zero-sum good for competing policy projects may compromise the legitimacy of its individual efforts in front of these. This could undercut the leverage of policy initiatives today and in the future. Worse, public quarrels or contestation may 'poison the well'. Policymakers may begin to perceive and treat a divided community as a series of interest groups rather than an 'epistemic community' with a multi-faceted but coherent agenda for beneficial societal impact of AI. Finally, from a policy perspective, it is important to note that while current regulatory initiatives are and should not always be directly transferable to future issues, neither are they categorically irrelevant. As such, they can often provide the second-best tools for rapidly confronting new AI challenges. This has its own pitfalls, but is often superior to waiting out the slow and reactive formulation of new policies. Moreover, the risks are concrete and timely. It is plausible that political moods will shift within the coming years and decades, in ways that make policy progress much harder. Furthermore, it is possible that other epistemic communities may converge and mobilize faster to embed and institutionalize alternative, less broadly beneficial framings of AI. Indeed, public and global framings of AI in recent years have seemed to drift towards narratives of competition and 'arms races' [54-56, but see also 57]. An inflection point for how societies use and relate to AI may eventually be reached. Missing such a window of opportunity could mean that the relative influence of those concerned with making the impact of AI beneficial (whether in the near or longer term) will decline, right as their voices are needed most. \n 3 Conversely, many gains secured today could have lasting benefits down the road. \n Examining potential grounds for divergence There are a range of factors that could contribute to the clustering into fuzzy 'near-' and 'long-term' communities, and different scholars may hold distinct and overlapping sets of beliefs on them [cf. 8]. In the following paragraphs, we provide our first attempt at mapping some of these factors. 3 Some part of the divergence may be due to varying epistemic or methodological commitments. These could reflect varying levels of tolerance regarding scientific uncertainty and distinct views on the threshold of probability required before far-reaching action or further investigation is warranted. This means that concerns surrounding AI may depend on qualitatively different conceptions of 'acceptable uncertainty' for each group of observers. This may well be hard to resolve. Moreover, epistemic differences over the implicit or explicit disagreements of the modal standards in these debates, for example, debates over what types of data or arguments are admissible in establishing or contesting the plausibility or probability of risk from AI may contribute to further divergence. This could even lead to differential interpretations of evidence that are available. 
For instance, do empirically observed failure modes of present-day architectures [58] [59] [60] [61] provide small-scale proof-of-concepts of the type of difficulties we might one day encounter in AI 'value alignment', or are such extrapolations unwarranted? For our purposes, however, the most salient factor may be essentially pragmatic. Different perceptions of the empirical dynamics and path-dependencies of governing AI can inform distinct theories-of-change. These are intertwined with one's expectations about the tractability and relevance of formulating useful and resilient policy action today. In this context, Prunkl and Whittlestone [8] have recently argued that a more accurate picture and more productive dialogue could be achieved if scholars differentiated amongst the four dimensions on which views vary, in terms of the capabilities, impacts, certainty or extremity of AI systems. They emphasize that views on each of these questions fall on a spectrum. Taking this point on board, there are additional ways to cash out possible divergences. One debate might concern the question, how long-lasting are the consequences of near-term AI issues? If those that care about the longer term are convinced that these issues will not have long-lasting consequences, or that they would eventually be swamped by the much larger trends and issues [3] , then this could lead them to discount work on near-term AI problems. However, it is important to note that near-term issues are likely to considerably affect the degree to which society is vulnerable to longer-term dangers posed by future advanced AI systems. Short-term or medium-term issues [7, 37] can easily increase society's general turbulence [62] , or lock in counterproductive framings of AI or our relation to it. In general, we might expect many nominally near-term effects of AI on society (such as in surveillance; job automation; military capabilities) to scale up and become more disruptive as AI capabilities gradually increase [18, 37] . Indeed, some longer-term scholars have argued that advanced AI capabilities considerably below the level of HLMI might already suffice to achieve a 'prepotence' which could pose catastrophic risks [10] . This would make mid-term impacts particularly important to handle, and collaboration between different groups on at least some generalizable projects crucial. Another pragmatic question or concern is over how much leverage we have today to meaningfully shape policies that will be applicable or relevant in the long term, especially if AI architectures or the broader political and regulatory environment change a lot in the meantime [8] . Some scholars may hold that future AI systems will be technically so different from today's AI architectures that research into this question undertaken today will not be relevant, or they might hold that such advanced AI capabilities may be so remote that the regulatory environment will have changed too much for meaningful policy work to be conducted right now [63] . These people might argue that we had better wait until things are clearer and we are in a better position to understand whether and what research is needed or meaningful. In practice, this critique does not appear to be a very common or deeply held position. Indeed, as a trade-off it may be overstated. It is plausible that there are diverse areas on which both communities can undertake valuable research today, because the shelf life of current policy and research efforts might be longer than is assumed. 
To be sure, there is still significant uncertainty over whether current AI approaches can at all be scaled up to very advanced performance [64] [65] [66] . Nonetheless, research could certainly depart from a range of areas of overlap [67] and shared areas of concern [68, 69] , as we will discuss shortly. Moreover, policy making is informed by a variety of aspects which range across different time spans. Starting with political agendas that often reflect the current status quo, policy making is equally shaped by shifting public discourse, societal dynamics and high-impact shocks. The latter factor has played a key role in AI policy, where highprofile incidents involving algorithmic discrimination, lack of transparency, or surveillance have driven policy shifts, as seen for example in the pushback on discriminatory algorithms used in the UK visa selection processes [70] , the Artificial Intelligence Video Interview Act regulating the use of AI in employee interviews, or the California B.O.T. Law requiring AI systems to self-identify [71, 72] . In sum, it is plausible that many perceived 'barriers' to inter-community cooperation on policy are not all that strong, and that many 'tradeoffs' are likewise overemphasized. However, does that mean there are also positive, mutually productive opportunities for both communities to work on with regard to policy in spite of outstanding disagreements? What would such an agreement look like? \n Towards 'incompletely theorized agreements' for AI policy Above, we have reviewed potential sources for divergence within the community. We will now discuss how even in the context of apparent disagreement, pragmatic agreements on shared policy goals and norms could be reached. We propose to adopt and adapt the legal principle of an 'incompletely theorized agreement' for this purpose. Legal scholarship in constitutional law and regulation has long theorized the legal, organizational and societal importance of such incompletely theorized agreements. Their key use is that they allow a given community to bypass or suspend [73, 74] any theoretical disagreement on matters where (1) the disagreement appears relatively intractable and (2) there is an urgent need to address certain shared practical issues. Disagreements are intractable in cases where either it simply does not appear as if the question will be decisively resolved one way or the other in the near term, or where there is limited time and capacity to reason through all underlying disagreements [73] . Incompletely theorized agreements can therefore apply to deep philosophical and ethical questions as much as to contexts of pervasive scientific uncertainty. The latter is especially the case on questions where it still remains unclear where and how we might procure the information that allows definitive resolution. Incompletely theorized agreements are a fundamental component to well-functioning legal systems, societies, and organizations. They allow for stability and flexibility to get urgent things done [75] . These agreements have long played a key role in constitutional and administrative law, and have made possible numerous landmark achievements of global governance, such as the establishment of the Universal Declaration of Human Rights [75, 76] . The framework has also been extended to other domains, such as the collective development and analysis of health-care policies in the face of pluralism and conflicting views [77] . 
Incompletely theorized agreements have broad similarities with the notion of an 'overlapping consensus', developed by John Rawls, which refers to the way adherents of different (and apparently inconsistent) normative doctrines can nonetheless converge on particular principles of justice to underwrite the shared political community [78] . This concept has been read as a key mechanism in the field of bioethics, serving to enable agreement despite different fundamental outlooks [79] . It also already plays a role in the existing literature on computer ethics [80] , as well as in the field of intercultural information ethics [81] . Indeed, overlapping consensus has been proposed as a mechanism on which to ground global cooperation on AI policy across cultural lines [82] . If overlapping consensus can ground inter-cultural cooperation, incompletely theorized agreements might serve as a similar foundation for practical cooperation between nearand long-term perspectives. In a related context, Baum has suggested that policy interventions aimed at securing longterm resilience to various catastrophes can often involve significant co-benefits in the near term, and so do not narrowly depend on all parties agreeing on the deep reasons for the policies proposed [83] . Could incompletely theorized agreements ground cooperation amongst AI policy communities? We suggest that they could. \n Incompletely theorized agreements in AI policy: examples and sketches There are a range of issue areas where both groups could likely locate joint questions they would want addressed, and shared goals for which particular AI policies should be implemented. This holds even if their underlying reasons for pursuing these are not fully aligned. Without intending to provide an exhaustive, in-depth or definitive overview, a brief survey might highlight various areas for cooperation. For one, gaining insight into-and leverage on the general levers of policy formation around AI [52] is a key priority. What are the steps in the policymaking process which determine what issues get raised to political agendas and eventually acted upon, and which might be derailed by other coalitions [84] ? Given the above, research into underlying social and societal developments is fruitful to advance all groups' ability to navigate mutually agreeable policy goals across this policy making cycle [85] . Likewise, research into when, where or why global AI governance institutions might become vulnerable to regulatory capture or institutional path dependency ought to be an important consideration, whatever one's AI concerns are [86, 87] . On a more operational level, this can feed into joint investigation into the relative efficacy of various policy levers for AI governance. For example, insights into when and how AI research labs or individual researchers adopt, or alternately cut corners on, responsible and accountable AI, or the incentivization of shifts in workplace culture or employee norms, could shape the policy proposals the community might make. The question of how to promote prosocial norms in AI research environments is of core interest to both communities with an eye to technology policy [88] . This might cover, e.g. whether to publicly name problematic performance (e.g. biased results; lack of safety) in commercial AI products results in tech companies actually correcting the systems [89] ; or whether codes of ethics are effective at changing programmers' decision making on the working floor [90, 91] . 
All of these could be fruitful areas of collaboration on eventual policy proposals for either community. More specifically, there is a range of particular AI policy programs that we expect could be the site of an incompletely theorized agreement. 1. Incompletely theorized agreements could shape norms and policy debates over the appropriate scientific culture for considering the impact and dissemination of AI research [92, 93] , especially where it concerns AI applications with potentially salient misuses. The underlying reasons for such policies might differ. Some may be concerned over abuses of vulnerable populations, new vectors for criminal exploitation or the implications of new language models for misinformation [94, 95] ; and others over the long-term risks from the eventual open development of advanced AI systems [96] . Accordingly, incompletely theorized agreements in this area could converge on policies to shape researcher norms around improved precaution or reflection around the impact or potential misuse of research [97]. 2. Another example might be found in the domain of the global regulation of military uses of AI. This area has already seen years of shared efforts and even collaboration amongst a coalition of activists, lawyers, and institutions departing from both a near-term as well as longer-term perspective, such as the Future of Life Institute [52] . 3. Incompletely theorized agreements could ground productive policy cooperation on policy interventions aimed at preserving the integrity of public discourse and informed decision-making in the face of AI systems. Policies aimed at combating AI-enabled disinformation would be a natural site for incompletely theorized collaboration, because a society's epistemic security [98] is relevant from both a near-term and long-term perspective alike. 4. Similarly, incompletely theorized agreements surrounding the promotion of policies aimed at securing citizens' (political) autonomy and independence from unaccountable perception control could be promising. After all, practices of opaque technological management [99] or perception control [100] can enable authorities to increasingly shape individuals' and societies' behaviour and values. Regulation to restrict the deployment of such tools, to facilitate privacy-preserving AI, or to ensure transparency and accountability of the principals of such tools, is important from a near-term perspective concerned with the role of 'hyper nudges' [101] , algocracy [102] , or surveillance capitalism [103] . Simultaneously, such policies are also critical to avert long-term worries over a 'value lock-in', whereby one generation or party might someday \"invent a technology that will enable the agents alive at that time to maintain their values indefinitely into the future, controlling the broad sweep of the entire rest of the future of civilisation\" [104] . Although the underlying motives for each group to pursue policies on the abovementioned domains may be partially distinct, these differences are arguably outweighed by the benefits derived from achieving impactful and effective policy measures together.
In many of these cases, the practical benefits of an incompletely theorized agreement would be at least fourfold (1) to reduce public confusion around these topics; (2) to present policymakers with an epistemic community delivering integrated policy proposals; (3) to support the articulation of regulations or governance instruments for specific policy problems, which need not assume further advances in AI capabilities, but which are also not reliant on provisions or assumptions that are vulnerable to 'obsolescence' if or when such advances do occur [105] [106] [107] , giving such policies a longer shelf life; (4) to improve engagement of particular AI policies with the steadily increasing cross-domain nature of AI, which could help inform regulatory responses across domains. This is especially relevant because different fields (such as content moderation, health-care, or the military) often confront different yet similar versions of underlying problems [108] . \n Limitations of incompletely theorized agreements We do not wish to suggest that incompletely theorized agreements are an unambiguously valuable tool across all AI policy cases, or even a definite solution for any one policy case. Such agreements do suffer from a number of potential drawbacks or trade-offs, which both communities should consider before invoking them in any particular case. 4 First, depending on one's assumptions around the expected degree of change in AI or in its societal impacts, incompletely theorized agreements could prove brittle. Incompletely theorized agreements bypass a full examination of the underlying disagreements to facilitate pragmatic and swift action on particular policies on which both communities find themselves in practical alignment in a specific moment in time. This may create a lack of clarity over the boundary conditions of that practical agreement, along with opacity over whether, or where (i.e. for which future particular questions around AI policy) the practical agreement might suddenly break down for either or all parties. Second, an incompletely theorized agreement is, in an important sense, a 'stopgap' measure more than a general ideal or permanent fix. As above, an incompletely theorized agreement might be most suited to situations where (a) practical policy action is urgently needed, and (b) underlying theoretical agreement by stakeholders on all engaged principles or questions does not seem close. However, over longer timeframes, deeper inquiry and debate do appear necessary [73] . In addition, there is always the possibility that agreement was not, in fact, intractable within the near term. As such, a premature leap by the community into an incompletely theorized agreement to achieve some policy X might inadvertently curb the very conversations amongst the communities which could have led both to eventually prefer policy Y instead, had their conversation been allowed to run its course. Moreover, there is a key related point here, on which we should reflect. By advocating for the adoption of incompletely theorized agreements on AI policy today, we ourselves are in a sense assuming or importing an implicit judgment about the urgency of AI issues today, and about the intractability of the underlying debates. Yet these are two positions which others might contest. For example, by arguing that 'AI issues do not today meet that threshold of urgency that the use of an incompletely theorized agreement is warranted'. We wish to make this assumption explicit. 
At the same time, we expect that it is an assumption widely shared by many scholars working on AI policy, many of whom may well share a sense that the coming years will be a sensitive and even critical time for AI policies. Third, a sloppily formulated incompletely theorized agreement on an AI policy issue may not actually reflect convergence on particular policies (e.g. 'certification scheme for AI products with safety tests X, Y, Z'). Instead, it might solidify on apparent agreement on vague mid-level principles or values (e.g. 'AI developers should ensure responsible AI development'). These may be so broad that they do not ground clear action at the level of actual policies. If this were to happen, incompletely theorized agreements might merely risk contributing to the already-abundant proliferation of broad AI principles or ethical frameworks on AI that have little direct policy impact. While the ecosystem of AI codes of ethics issued in recent years have certainly shown some convergence [109] [110] [111] , they have been critiqued as being hard to operationalize, and for providing only the appearance of agreement while masking underlying tensions in the principles' interpretation, operationalization, or practical requirements [93, 112] . Situations where an incompletely theorized agreement does not manage to root itself at the level of concrete policies but only mid-level principles would be a worst-of-both-worlds scenario: it would reduce the ability of actors to openly reflect upon and resolve inconsistencies amongst-or disagreements about high-level principles, while not even affording improvements at facilitating concrete policies or actions in particular AI domains. To mitigate this risk, incompletely theorized agreements should, therefore, remain closely grounded in concrete and clearly actionable policy goals or outputs. Nonetheless, while limitations such as these should be considered in greater detail, we argue that they do not categorically erode the case for implementing, or at least further examining the promise of this principle and tool for advancing responsible AI policy. \n Conclusion AI has raised multiple societal and ethical concerns. This highlights the urgent need for suitable and impactful policy measures in response. Nonetheless, there is at present an experienced fragmentation in the responsible AI policy community, amongst clusters of scholars focusing on 'nearterm' AI risks, and those focusing on 'longer-term' risks. This paper has sought to map the practical space for intercommunity collaboration, with a view towards the practical development of AI policy. As such, we briefly provided a rationale for such collaboration, by reviewing historical cases of scientific community conflict or collaboration, as well as the contemporary challenges facing AI policy. We argued that fragmentation within a given community can hinder progress on key and urgent policies. Consequently, we reviewed a number of potential (epistemic, normative or pragmatic) sources of disagreement in the AI ethics community, and argued that these trade-offs are often exaggerated, and at any rate do not need to preclude collaboration. On this basis, we presented the novel proposal for drawing on the constitutional law principle of an 'incompletely theorized agreement', for the communities to set aside or suspend these and other disagreements for the purpose of achieving higher-order AI policy goals of both communities in selected areas. 
We, therefore, non-exhaustively discussed a number of promising shared AI policy areas which could serve as the sites for such agreements, while also discussing some of the overall limits of this framework. This paper does not suggest that communities should fully merge or ignore differences whatever their source may be. To be sure, some policy projects will be relevant to one group within the community but not the other. Indeed, community heterogeneity and diversity is generally a good thing for a scientific paradigm. Instead, the paper proposes to question some possible reasons for conflicting dynamics which could stall positive progress for policy making, and suggests an avenue for a higher-order resolution. Most of all, the paper hopes to pragmatically encourage the exploration of opportunities for shared work and suggested that work on such opportunities, where it is found, can be well grounded through an incompletely theorized agreement. We invite scholars in the ethical AI community to explore the strengths and limits of this tool. Footnote 1 ( 1 continued) assumption that policy making can positively influence the development and deployment of AI technology. \n\t\t\t This paper perceives AI's ethical and societal concerns to be closely intertwined, and as such refers to the broader set of these actual and potential concerns throughout. \n\t\t\t It should be emphasized that this mapping is only an indicative sketch, and would be much enriched by further examination, for example through structured interviews or comprehensive opinion surveys. \n\t\t\t We thank one reviewer for prompting this discussion of the drawbacks of incompletely theorized agreements.", "date_published": "n/a", "url": "n/a", "filename": "Stix-Maas2021_Article_BridgingTheGapTheCaseForAnInco.tei.xml", "abstract": "Recent progress in artificial intelligence (AI) raises a wide array of ethical and societal concerns. Accordingly, an appropriate policy approach is urgently needed. While there has been a wave of scholarship in this field, the research community at times appears divided amongst those who emphasize 'near-term' concerns and those focusing on 'long-term' concerns and corresponding policy measures. In this paper, we seek to examine this alleged 'gap', with a view to understanding the practical space for inter-community collaboration on AI policy. We propose to make use of the principle of an 'incompletely theorized agreement' to bridge some underlying disagreements, in the name of important cooperation on addressing AI's urgent challenges. We propose that on certain issue areas, scholars working with near-term and long-term perspectives can converge and cooperate on selected mutually beneficial AI policy projects, while maintaining their distinct perspectives.", "id": "5bdb8e03d182d9df1d7ea5d3ec8e47f7"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": [], "title": "Under review as a conference paper at ICLR 2021 BENEFITS OF ASSISTANCE OVER REWARD LEARNING", "text": "INTRODUCTION Traditional computer programs are instructions on how to perform a particular task. However, we do not know how to mechanically perform more challenging tasks like translation. The field of artificial intelligence raises the level of abstraction so that we simply specify what the task is, and let the machine to figure out how to do it. As task complexity increases, even specifying the task becomes difficult. 
Several criteria that we might have thought were part of a specification of fairness turn out to be provably impossible to simultaneously satisfy (Kleinberg et al., 2016; Chouldechova, 2017; Corbett-Davies et al., 2017) . Reinforcement learning agents often \"game\" their reward function by finding solutions that technically achieve high reward without doing what the designer intended (Lehman et al., 2018; Krakovna, 2018; Clark & Amodei, 2016) . In complex environments, we need to specify what not to change (McCarthy & Hayes, 1981) ; failure to do so can lead to negative side effects. Powerful agents with poor specifications may pursue instrumental subgoals (Bostrom, 2014; Omohundro, 2008) such as resisting shutdown and accumulating resources and power (Turner, 2019) . A natural solution is to once again raise the level of abstraction, and create an agent that is uncertain about the objective and infers it from human feedback, rather than directly specifying some particular task(s). Rather than using the current model of intelligent agents optimizing for their objectives, we would now have beneficial agents optimizing for our objectives (Russell, 2019) . Reward learning (Leike et al., 2018; Jeon et al., 2020; Christiano et al., 2017; Ziebart et al., 2010) attempts to instantiate this by learning a reward model from human feedback, and then using a control algorithm to optimize the learned reward. Crucially, the control algorithm does not reason about the effects of the chosen actions on the reward learning process, which is external to the environment. In contrast, in the assistance paradigm (Hadfield-Menell et al., 2016; Fern et al., 2014) , the human H is modeled as part of the environment and as having some latent goal that the agent R (for robot) does not know. R's goal is to maximize this (unknown) human goal. In this formulation, R must balance between actions that help learn about the unknown goal, and control actions that lead to high reward. Our key insight is that by integrating reward learning and control modules, assistive agents can take into account the reward learning process when selecting actions. This gives assistive agents a significant advantage over reward learning agents, which cannot perform similar reasoning. [Figure 1 shows R's intended qualitative reasoning in four panels: learning about reward; making robust plans; preserving option value when possible; and guessing when feedback is unavailable.] Figure 1: R must cook a pie for H, by placing flour on the plate to make the pie dough, filling it with either Apple, Blueberry, or Cherry filling, and finally baking it. However, R does not know which filling H prefers, and H is not available for questions since she is doing something else. What should R do in this situation? On the right, we show what qualitative reasoning we might want R to use to handle the situation. The goal of this paper is to clarify and illustrate this advantage. We first precisely characterize the differences between reward learning and assistance, by showing that two phase, communicative assistance is equivalent to reward learning (Section 3). We then give qualitative examples of desirable behaviors that can only be expressed once these restrictions are lifted, and thus are only exhibited by assistive agents (Section 4).
Consider for example the kitchen environment illustrated in Figure 1 , in which R must bake a pie for H. R is uncertain about which type of pie H prefers to have, and currently H is at work and cannot answer R's questions. An assistive R can make the pie crust, but wait to ask H about her preferences over the filling (Section 4.1). R may never clarify all of H's preferences: for example, R only needs to know how to dispose of food if it turns out that the ingredients have gone bad (Section 4.2). If H will help with making the pie, R can allow H to disambiguate her desired pie by watching what filling she chooses (Section 4.3). Vanilla reward learning agents do not show these behaviors. We do not mean to suggest that all work on reward learning should cease and only research on assistive agents should be pursued. Amongst other limitations, assistive agents are very computationally complex. Our goal is simply to clarify what qualitative benefits an assistive formulation could theoretically provide. Further research is needed to develop efficient algorithms that can capture these benefits. Such algorithms may look like algorithms designed to solve assistance problems as we have formalized them here, but they may also look like modified variants of reward learning, where the modifications are designed to provide the qualitative benefits we identify. \n BACKGROUND AND RELATED WORK We introduce the key ideas behind reward learning and assistance. X^* denotes a sequence of X. We use parametric specifications for ease of exposition, but our results apply more generally. \n POMDPS A partially observable Markov decision process (POMDP) M = ⟨S, A, Ω, O, T, r, P_0, γ⟩ consists of a finite state space S, a finite action space A, a finite observation space Ω, an observation function O : S → ∆(Ω) (where ∆(X) is the set of probability distributions over X), a transition function T : S × A → ∆(S), a reward function r : S × A × S → ℝ, an initial state distribution P_0 ∈ ∆(S), and a discount rate γ ∈ (0, 1). We will write o_t to signify the t-th observation O(s_t). A solution to the POMDP is given by a policy π : (O × A)^* × O → ∆(A) that maximizes the expected sum of rewards ER(π) = E_{s_0 ∼ P_0, a_t ∼ π(· | o_{0:t}, a_{0:t−1}), s_{t+1} ∼ T(· | s_t, a_t)} [ Σ_{t=0}^{∞} γ^t r(s_t, a_t, s_{t+1}) ]. \n REWARD LEARNING We consider two variants of reward learning: non-active reward learning, in which R must infer the reward by observing H's behavior, and active reward learning, in which R may choose particular questions to ask H in order to get particular feedback. A non-active reward learning problem P = ⟨M\\r, C, Θ, r_θ, P_Θ, π^H, k⟩ contains a POMDP without reward M\\r = ⟨S, A^R, Ω^R, O^R, T, P_0, γ⟩, and instead R has access to a parameterized reward space ⟨Θ, r_θ, P_Θ⟩. R is able to learn about θ^* by observing H make k different choices c, each chosen from a set of potential choices C. In order for R to learn from the human's choices, it also assumes access to the human decision function π^H(c | θ) that determines how the human makes choices for different possible reward functions r_θ. Common decision functions include perfect optimality (Ng & Russell, 2000) and Boltzmann rationality (Ziebart et al., 2010) .
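To make the formalism above concrete, the following is a minimal sketch, assuming small tabular spaces and dictionary-valued distributions, of the POMDP tuple, a Monte Carlo estimate of ER(π) with the infinite sum truncated at a finite horizon, and a Boltzmann-rational human decision function of the kind reward learning assumes. All names (POMDP, expected_return, boltzmann_choice_dist) are our own illustrative choices, not the paper's code.

```python
# A minimal sketch, assuming small tabular spaces and dictionary-valued distributions.
import math
import random
from dataclasses import dataclass
from typing import Callable, Dict, List

Dist = Dict[object, float]  # discrete distribution: outcome -> probability

def sample(dist: Dist):
    outcomes, probs = zip(*dist.items())
    return random.choices(outcomes, weights=probs, k=1)[0]

@dataclass
class POMDP:
    S: List[object]
    A: List[object]
    Omega: List[object]
    O: Callable[[object], Dist]                    # O(s): distribution over observations
    T: Callable[[object, object], Dist]            # T(s, a): distribution over next states
    r: Callable[[object, object, object], float]   # r(s, a, s')
    P0: Dist                                       # initial state distribution
    gamma: float                                   # discount rate in (0, 1)

def expected_return(m: POMDP, policy, horizon: int = 50, n_rollouts: int = 2000) -> float:
    """Monte Carlo estimate of ER(pi); `policy` maps (history, observation) to a Dist over actions."""
    total = 0.0
    for _ in range(n_rollouts):
        s, history, ret, disc = sample(m.P0), [], 0.0, 1.0
        for _ in range(horizon):                   # truncate the infinite discounted sum
            o = sample(m.O(s))
            a = sample(policy(history, o))
            s_next = sample(m.T(s, a))
            ret += disc * m.r(s, a, s_next)
            disc *= m.gamma
            history.append((o, a))
            s = s_next
        total += ret
    return total / n_rollouts

def boltzmann_choice_dist(choices, utility, theta, beta: float = 5.0) -> Dist:
    """pi^H(c | theta) proportional to exp(beta * utility(c, theta)): a Boltzmann-rational human model."""
    logits = [beta * utility(c, theta) for c in choices]
    m = max(logits)
    weights = [math.exp(l - m) for l in logits]
    z = sum(weights)
    return {c: w / z for c, w in zip(choices, weights)}
```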
There are many types of choices (Jeon et al., 2020) , including demonstrations (Argall et al., 2009; Ng & Russell, 2000; Ziebart et al., 2010; Fu et al., 2017; Gao et al., 2012) , comparisons (Zhang et al., 2017; Wirth et al., 2017; Christiano et al., 2017; Sadigh et al., 2017) , corrections (Bajcsy et al., 2017) , the state of the world, proxy rewards (Hadfield-Menell et al., 2017b) , natural language (Fu et al., 2019) , etc. A policy decision function f(c_{0:k−1}) produces a policy π^R after observing H's choices. A solution is a policy decision function f that maximizes expected reward E_{θ ∼ P_Θ, c_{0:k−1} ∼ π^H} [ER(f(c_{0:k−1}))]. Since H's choices c_{0:k−1} do not affect the state of the environment that R is acting in, this is equivalent to choosing π^R that maximizes expected reward given the posterior over reward functions, that is E_{θ ∼ P(θ | c_{0:k−1})} [ER(π^R)]. An active reward learning problem P = ⟨M\\r, Q, C, Θ, r_θ, P_Θ, π^H, k⟩ adds the ability for R to ask H particular questions q ∈ Q in order to get more targeted feedback about θ. The human decision function π^H(c | q, θ) now depends on the question asked. A solution consists of a question policy π^R_Q(q_i | q_{0:i−1}, c_{0:i−1}) and a policy decision function f(q_{0:k−1}, c_{0:k−1}) that maximize expected reward E_{θ ∼ P_Θ, q_{0:k−1} ∼ π^R_Q, c_{0:k−1} ∼ π^H} [ER(f(q_{0:k−1}, c_{0:k−1}))]. A typical algorithm (Eric et al., 2008; Daniel et al., 2014; Maystre & Grossglauser, 2017; Christiano et al., 2017; Sadigh et al., 2017; Zhang et al., 2017; Wilde et al., 2020) will compute and ask q ∈ Q that maximizes an active learning criterion such as information gain (Bıyık et al., 2019) or volume removal (Sadigh et al., 2017) . Best results are achieved by selecting questions with the highest value of information (Cohn, Robert W, 2016; Zhang et al., 2017; Wilde et al., 2020) , but these are usually much more computationally expensive. R then finds a policy that maximizes expected reward under the inferred distribution over θ, in order to approximately solve the original POMDP. Note that a non-active reward learning problem is equivalent to an active reward learning problem with only one question, since having just a single question means that R has no choice in what feedback to get (see Appendix A.1 for proofs). \n ASSISTANCE The key idea of assistance is that helpful behaviors like reward learning are incentivized when R does not know the true reward r and can only learn about it by observing human behavior. So, we model the human H as part of the environment, leading to a two-agent POMDP, and assume there is some true reward r that only H has access to, while the robot R only has access to a model relating r to H's behavior. Intuitively, as R acts in the environment, it will also observe H's behavior, which it can use to make inferences about the true reward. Following Hadfield-Menell et al. (2016) , we define an assistance game M as a tuple M = ⟨S, {A^H, A^R}, {Ω^H, Ω^R}, {O^H, O^R}, T, P_S, γ, Θ, r_θ, P_Θ⟩. Here S is a finite set of states, A^H a finite set of actions for H, Ω^H a finite set of observations for H, and O^H : S → ∆(Ω^H) an observation function for H (respectively A^R, Ω^R, O^R for R). The transition function T : S × A^H × A^R → ∆(S) gives the probability over next states given the current state and both actions. The initial state is sampled from P_S ∈ ∆(S).
Θ is a set of possible reward function parameters θ which parameterize a class of reward functions r_θ : S × A^H × A^R × S → ℝ, and P_Θ is the distribution from which θ is sampled. γ ∈ (0, 1) is a discount factor. As with POMDPs, policies can depend on history. Both H and R are able to observe each other's actions, and on a given timestep, R acts before H. We use τ^R_t : (Ω^R × A^H × A^R)^t to denote R's history of observations and both agents' actions after t timesteps. This definition leaves open how the policies of H and R are determined. Should R be playing a Nash strategy or optimal strategy pair of the game, and if so, which one? Should it use a non-equilibrium policy, since humans likely do not use equilibrium strategies? This is a key hyperparameter in assistance games, as it determines the communication protocol for H and R. For maximum generality, we can equip the assistance game with a policy-conditioned belief B : Π^R → ∆(Π^H) over π^H, which specifies how the human responds to the agent's choice of policy (Halpern & Pass, 2018) . The agent's goal is to maximize expected reward given this belief. Prior work on assistance games (Hadfield-Menell et al., 2016; Malik et al., 2018; Woodward et al., 2019) focuses on finding optimal strategy pairs. This corresponds to a belief that H will know and perfectly respond to R's policy (see Appendix A.3). However, our goal is to compare assistance to reward learning. Typical reward learning algorithms assume access to a model of human decision-making: for example, H might be modeled as optimal (Ng & Russell, 2000) or Boltzmann-rational (Ziebart et al., 2010) . As a result, we also assume that we have access to a model of human decision-making π^H. Note that π^H depends on θ: we are effectively assuming that we know how H chooses how to behave given a particular reward r_θ. This assumption corresponds to the policy-conditioned belief B(π^R)(π') = 1[π' = π^H]. We define an assistance problem P as a pair ⟨M, π^H⟩ where π^H is a human policy for the assistance game M. Given an assistance problem, a robot policy π^R induces a probability distribution over trajectories: τ ∼ ⟨s_0, θ, π^H, π^R⟩, with τ ∈ [S × A^H × A^R]^*. We denote the support of this distribution by Traj(π^R). The expected reward of a robot policy for ⟨M, π^H⟩ is given by ER(π^R) = E_{s_0 ∼ P_S, θ ∼ P_Θ, τ ∼ ⟨s_0, θ, π^H, π^R⟩} [ Σ_{t=0}^{∞} γ^t r_θ(s_t, a^H_t, a^R_t, s_{t+1}) ]. A solution of ⟨M, π^H⟩ is a robot policy that maximizes expected reward: π^R_* = argmax_{π^R} ER(π^R). \n SOLVING ASSISTANCE PROBLEMS Once π^H is given, H can be thought of as an aspect of the environment, and θ can be thought of as a particularly useful piece of information for estimating how good actions are. This suggests that we can reduce the assistance problem to an equivalent POMDP. Following Desai (2017) , the key idea is to embed π^H in the transition function T and embed θ in the state. In theory, to embed potentially non-Markovian π^H in T, we need to embed the entire history of the trajectory in the state, but this leads to extremely large POMDPs. In our experiments, we only consider Markovian human policies, for which we do not need to embed the full history, keeping the state space manageable. Thus, the policy can be written as π^H(a^H | o^H, a^R, θ). To ensure that R must infer θ from human behavior, as in the original assistance game, the observation function does not reveal θ, but does reveal the previous human action a^H. Proposition 1. Every assistance problem ⟨M, π^H⟩ can be reduced to an equivalent POMDP M'. The full reduction and proof of equivalence is given in Appendix A.2.
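The sketch below illustrates the reduction behind Proposition 1, under the simplifying assumptions of a fully observable underlying game and a Markovian π^H. The class and field names (AssistanceProblem, ReducedPOMDP) are ours; the formal construction and proof are the ones in Appendix A.2 of the paper, not this code.

```python
# Sketch of the Proposition 1 reduction: fold theta into the hidden state and fold pi^H into
# the transition dynamics. Assumes full observability of the environment state and a
# Markovian human policy; names are illustrative only.
import random
from dataclasses import dataclass
from typing import Callable, Dict

Dist = Dict[object, float]

def sample(dist: Dist):
    outcomes, probs = zip(*dist.items())
    return random.choices(outcomes, weights=probs, k=1)[0]

@dataclass
class AssistanceProblem:
    P_S: Dist                                      # initial state distribution
    P_Theta: Dist                                  # prior over reward parameters theta
    T: Callable[[object, object, object], Dist]    # T(s, a_H, a_R): distribution over next states
    r: Callable[[object, object, object, object, object], float]  # called as r(theta, s, a_H, a_R, s')
    pi_H: Callable[[object, object, object], Dist] # pi_H(a_H | s, a_R, theta), Markovian
    gamma: float

class ReducedPOMDP:
    """Equivalent POMDP: theta lives in the hidden state, and H's response is part of the
    transition. R's observation reveals the environment state and H's previous action, never theta."""
    def __init__(self, ap: AssistanceProblem):
        self.ap = ap

    def reset(self):
        s = sample(self.ap.P_S)
        theta = sample(self.ap.P_Theta)
        self.state = (s, theta, None)              # hidden state: (env state, theta, last a_H)
        return self._obs()

    def _obs(self):
        s, _theta, last_a_H = self.state
        return (s, last_a_H)                       # theta is hidden from R

    def step(self, a_R):
        s, theta, _ = self.state
        a_H = sample(self.ap.pi_H(s, a_R, theta))  # H responds after R, inside the "transition"
        s_next = sample(self.ap.T(s, a_H, a_R))
        reward = self.ap.r(theta, s, a_H, a_R, s_next)  # computed with the sampled theta
        self.state = (s_next, theta, a_H)
        return self._obs(), reward
```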
When M is fully observable, in the reduced POMDP θ is the only part of the state not directly observable to the robot, making it an instance of a hidden-goal MDP (Fern et al., 2014) . For computational tractability, much of the work on hidden goals (Javdani et al., 2015; Fern et al., 2014) selects actions assuming that all goal ambiguity is resolved in one step. This effectively separates reward learning and control in the same way as typical reward learning algorithms, thus negating many of the benefits we highlight in this work. Intention-aware motion planning (Bandyopadhyay et al., 2013 ) also embeds the human goal in the state in order to avoid collisions with humans during motion planning, but does not consider applications for assistance. Macindoe et al. (2012) uses the formulation of a POMDP with a hidden goal to produce an assistive agent in a cops and robbers gridworld environment. Nikolaidis et al. (2015) assumes a dataset of joint human-robot demonstrations, which they leverage to learn \"types\" of humans that can then be inferred online using a POMDP framework. This is similar to solving an assistance problem, where we think of the different values of θ as different \"types\" of humans. Chen et al. (2018) uses an assistance-style framework in which the unknown parameter is the human's trust in the robot (rather than the reward θ). Woodward et al. (2019) uses deep reinforcement learning to solve an assistance game in which the team must collect either plums or lemons. To our knowledge, these are the only prior works that use an assistive formulation in a way that does not ignore the information-gathering aspect of actions. While these works typically focus on algorithms to solve assistance games, we instead focus on the qualitative benefits of using an assistance formulation. Since we can reduce an assistance problem to a regular POMDP, we can use any POMDP solver to find the optimal π R . In our examples for this paper, we use an exact solver when feasible, and point-based value iteration (PBVI) (Pineau et al., 2003) or deep reinforcement learning (DRL) when not. When using DRL, we require recurrent models, since the optimal policy can depend on history. A common confusion is to ask how DRL can be used, given that it requires a reward signal, but by assumption R does not know the reward function. This stems from a misunderstanding of what it means for R \"not to know\" the reward function. When DRL is run, at the beginning of each episode, a specific value of θ is sampled as part of the initial state. The learned policy π R is not provided with θ: it can only see its observations o R and human actions a H , and so it is accurate to say that π R \"does not know\" the reward function. However, the reward is calculated by the DRL algorithm, not by π R , and the algorithm can and does use the sampled value of θ for this computation. π R can then implicitly learn the correlation between the actions a H chosen by π H , and the high reward values that the DRL algorithm computes; this can be often be thought of as an implicit estimation of θ in order to choose the right actions. \n REWARD LEARNING AS TWO-PHASE COMMUNICATIVE ASSISTANCE There are two key differences between reward learning and assistance. First, reward learning algorithms split reward learning and control into two separate phases, while assistance merges them into a single phase. 
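As a rough illustration of the training setup just described, the sketch below samples θ inside the environment at reset time and uses it to compute rewards, while the recurrent policy only ever sees observations and H's actions. The `recurrent_policy` interface (`initial_state`, `act`) is a placeholder of our own, not a specific library API.

```python
# Sketch of the DRL data-collection loop described above: theta is part of the sampled
# initial state, rewards are computed with it by the algorithm, but it is never exposed
# to the policy. All names are illustrative.
class AssistanceEpisodeSampler:
    def __init__(self, reduced_pomdp):
        self.env = reduced_pomdp              # e.g. the ReducedPOMDP sketched earlier

    def rollout(self, recurrent_policy, horizon: int):
        """Collect one episode for a DRL algorithm. The policy is recurrent because the
        optimal robot behaviour can depend on the whole history of human actions."""
        obs = self.env.reset()                # internally samples theta as part of the state
        hidden = recurrent_policy.initial_state()
        trajectory = []
        for _ in range(horizon):
            action, hidden = recurrent_policy.act(obs, hidden)   # sees obs only, never theta
            next_obs, reward = self.env.step(action)             # reward uses the hidden theta
            trajectory.append((obs, action, reward))
            obs = next_obs
        return trajectory                     # fed to any recurrent policy-gradient update
```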
Second, in reward learning, the human's only role is to communicate reward information to the robot, while in assistance the human can help with the task. These two properties exactly characterize the difference between the two: reward learning problems and communicative assistance problems with two phases can be reduced to each other, in a very natural way. A communicative assistance problem is one in which the transition function T and the reward function r θ are independent of the choice of human action a H , and the human policy π H (• | o H , a R , θ) is independent of the observation o H . Thus, in a communicative assistance problem, H's actions only serve to respond to R, and have no effects on the state or the reward (other than by influencing R). Such problems can be cast as instances of HOP-POMDPs (Rosenthal & Veloso, 2011) . For the notion of two phases, we will also need to classify robot actions as communicative or not. We will assume that there is some distinguished action a R noop that \"does nothing\". Then, a robot action âR is communicative if for any s, a H , s we have T (s | s, a H , âR ) = T (s | s, a H , a R noop ) and R(s, a H , âR , s ) = R(s, a H , a R noop , s ). A robot action is physical if it is not communicative. Now consider a communicative assistance problem M, π H with noop action a R noop and let the optimal robot policy be π R * . Intuitively, we would like to say that there is an initial communication phase in which the only thing that happens is that H responds to questions from R, and then a second action phase in which H does nothing and R acts. Formally, the assistance problem is two phase with actions at t act if it satisfies the following property: ∃a H noop ∈ A H , ∀τ ∈ Traj(π R * ), ∀t < t act : a R t is communicative ∧ ∀t ≥ t act : a H t = a H noop . Thus, in a two phase assistance problem, every trajectory from an optimal policy can be split into a \"communication\" phase where R cannot act and an \"action\" phase where H cannot communicate. Reducing reward learning to assistance. We can convert an active reward learning problem to a two-phase communicative assistance problem in an intuitive way: we add Q to the set of robot actions, make C the set of human actions, add a timestep counter to the state, and construct the reward such that an optimal policy must switch between the two phases after k questions. A non-active reward learning problem can first be converted to an active reward learning problem. Proposition 2. Every active reward learning problem M, Q, C, Θ, r θ , P Θ , π H , k can be reduced to an equivalent two phase communicative assistance problem M , π H . Corollary 3. Every non-active reward learning problem M, C, Θ, r θ , P Θ , π H , k can be reduced to an equivalent two phase communicative assistance problem M , π H . Reducing assistance to reward learning. The reduction from a two-phase communicative assistance problem to an active reward learning problem is similarly straightforward: we interpret R's communicative actions as questions and H's actions as answers. There is once again a simple generalization to non-active reward learning. Proposition 4. Every two-phase communicative assistance problem M, π H , a R noop can be reduced to an equivalent active reward learning problem. Corollary 5. If a two-phase communicative assistance problem M, π H has only one communicative robot action, it can be reduced to an equivalent non-active reward learning problem. 
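A schematic version of the Proposition 2 construction might look as follows: questions become robot actions, answers become human actions, and a timestep counter ends the communication phase after k questions. The names and the simplified bookkeeping are our own; the formal reduction is given in the paper's appendix.

```python
# A schematic sketch (our own simplification) of turning an active reward learning problem
# into a two-phase communicative assistance problem.
from dataclasses import dataclass
from typing import Callable, Dict, List

Dist = Dict[object, float]

@dataclass
class ActiveRewardLearningProblem:
    questions: List[object]                    # Q
    answers: List[object]                      # C
    env_actions: List[object]                  # physical robot actions of the underlying POMDP
    k: int                                     # number of questions R may ask
    pi_H: Callable[[object, object], Dist]     # pi_H(c | q, theta)

def as_two_phase_assistance(p: ActiveRewardLearningProblem):
    NOOP_R, NOOP_H = "noop_R", "noop_H"
    robot_actions = list(p.questions) + list(p.env_actions) + [NOOP_R]
    human_actions = list(p.answers) + [NOOP_H]

    def human_policy(t: int, last_robot_action, theta) -> Dist:
        # Communicative: H's action depends only on R's last action (a question) and theta.
        if t < p.k and last_robot_action in p.questions:
            return p.pi_H(last_robot_action, theta)
        return {NOOP_H: 1.0}                   # H "does nothing" during the action phase

    def allowed_robot_actions(t: int) -> List[object]:
        # First k timesteps: only questions (and noop) are effectively useful, because the
        # constructed reward makes physical actions before t = k suboptimal.
        return list(p.questions) + [NOOP_R] if t < p.k else list(p.env_actions) + [NOOP_R]

    return robot_actions, human_actions, human_policy, allowed_robot_actions
```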
\n QUALITATIVE IMPROVEMENTS FOR GENERAL ASSISTANCE We have seen that reward learning is equivalent to two-phase communicative assistance problems, where inferring the reward distribution can be separated from control using the reward distribution. However, for general assistance games, it is necessary to merge estimation and control, leading to several new qualitative behaviors. When the two phase restriction is lifted, we observe relevance aware active learning and plans conditional on future feedback. When the communicative restriction is lifted, we observe learning from physical actions. We demonstrate these qualitative behaviors in simple environments using point-based value iteration (PBVI) or deep reinforcement learning (DRL). We describe the qualitative results here, deferring detailed explanations of environments and results to Appendix C. For communicative assistance problems, we also consider two baselines: 1. Active reward learning. This is the reward learning paradigm discussed so far. 2. Interactive reward learning. This is a variant of reward learning that aims to recover some of the benefits of interactivity, by alternating reward learning and acting phases. During an action phase, R chooses actions that maximize expected reward under its current belief over θ (without \"knowing\" that its belief may change), while during a reward learning phase, R chooses questions that maximizes information gain. \n PLANS CONDITIONAL ON FUTURE FEEDBACK Here, we show how an assistive agent can make plans that depend on obtaining information about θ in the future. The agent can first take some \"preparatory\" actions that whose results can be used later once the agent has clarified details about θ. A reward learning agent would not be able to do this, as it would require three phases (acting, then learning, then acting again). We illustrate this with our original kitchen environment (Figure 1 ), in which R must bake a pie for H, but doesn't know what type of pie H would like: Apple, Blueberry, or Cherry. Each type has a weight specifying the reward for that pie. Assuming people tend to like apple pie the most and cherry pie the least, we have θ A ∼ Uniform[2, 4], θ B ∼ Uniform[1, 3], and θ C ∼ Uniform[0, 2]. We define the questions Q = {q A , q B , q C }, where q X means \"What is the value of θ X ?\", and thus, the answer set is C = R. R can select ingredients to assemble the pie. Eventually, R must use \"bake\", which bakes the selected ingredients into a finished pie, resulting in reward that depends on what type of pie has been created. H initially starts outside the room, but will return at some prespecified time. r θ assigns a cost of asking a question of 0.1 if H is inside the room, and 3 otherwise. The horizon is 6 timesteps. Assistance. Notice that, regardless of H's preferences, R will need to use flour to make pie dough. So, R always makes the pie dough first, before querying H about her preferences. Whether R then queries H about her preferences depends on how late H returns. If H arrives home before timestep 5, R will query her about her preferences and then make the appropriate pie as expected. However, if H will arrive later, then there will not be enough time to query her for her preferences and bake a pie. Instead, R bakes an apple pie, since its prior suggests that that's what H wants. 
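A back-of-the-envelope sketch of this behaviour follows. The priors and horizon are the ones stated above; the decision rule itself is an illustrative simplification of the solved POMDP policy, and all names in the code are ours.

```python
# Why the assistive policy bakes apple pie when it cannot ask in time: apple has the
# highest prior mean reward, and dough-making is useful regardless of theta.
PRIORS = {                      # Uniform[lo, hi] priors over filling rewards
    "apple":     (2.0, 4.0),
    "blueberry": (1.0, 3.0),
    "cherry":    (0.0, 2.0),
}
HORIZON = 6

def prior_mean(lo, hi):
    return (lo + hi) / 2.0

def best_guess_filling():
    # Expected reward under the prior: apple 3.0 > blueberry 2.0 > cherry 1.0.
    return max(PRIORS, key=lambda f: prior_mean(*PRIORS[f]))

def plan(h_return_step: int):
    steps = ["make dough"]                    # robustly good regardless of theta
    # Asking, hearing the answer, adding the filling and baking must all fit before the horizon.
    if h_return_step < HORIZON - 1:
        steps += [f"wait until t={h_return_step}", "ask preferred filling",
                  "add requested filling", "bake"]
    else:
        steps += [f"add {best_guess_filling()} filling", "bake"]
    return steps

print(plan(h_return_step=3))   # asks H, then bakes her preferred pie
print(plan(h_return_step=5))   # no time left to ask: bakes the prior-optimal apple pie
```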
This behavior, where R takes actions (making dough) that are robustly good but waits on actions (adding the filling) whose reward will be clarified in the future, is very related to conservative agency (Turner et al., 2020) , a connection explored in more depth in Appendix D. Reward learning. The assistance solution requires R to act (to make dough), then to learn preferences, and then to act again (to make pie). A reward learning agent can only have two phases, and so we see one of two suboptimal behaviors. First, R could stay in the learning phase until H returns home, then ask which pie she prefers, and then make the pie from scratch. Second, R could make an apple pie without asking H her preferences. (In this case there would be no learning phase.) Which of these happens depends on the particular method and hyperparameters used. Interactive reward learning. Adding interactivity is not sufficient to get the correct behavior. Suppose we start with an action phase. The highest reward plan under R's current belief over θ is to bake an apple pie, so that's what it will do, as long as the phase lasts long enough. Conversely, suppose we start with a learning phase. In this case, R does nothing until H returns, and then asks about her preferences. Once we switch to an action phase, it bakes the appropriate pie from scratch. \n RELEVANCE AWARE ACTIVE LEARNING ? Figure 2 : The wormy-apples kitchen environment. H wants an apple, but R might discover worms in the apple, and have to dispose of it in either of the trash or compost bins. Once we relax the two-phase restriction, R starts to further optimize whether and when it asks questions. In particular, since R may be uncertain about whether a question's answer will even be necessary, R will only ask questions once they become immediately relevant to the task at hand. In contrast, a reward learning agent would have to decide at the beginning of the episode (during the learning phase) whether or not to ask these questions, and so cannot evaluate how relevant they are. Consider for example a modification to the kitchen environment: R knows that H wants an apple pie, but when R picks up some apples, there is a 20% chance that it finds worms in some of the apples. R is unsure whether H wants her compost bin to have worms, and so does not know whether to dispose of the bad apples in the trash or compost bin. Since this situation is relatively unlikely, ideally R would only clarify H's preferences when the situation arises. Assistance. An assistive R only asks about wormy apples when it needs to dispose of one. R always starts by picking up apples. If the apple does not have worms, R immediately uses the apples to bake the pie. If some apples have worms and the cost of asking a question is sufficiently low, R elicits H's preferences and disposes of the apples appropriately. It then bakes the pie with the remaining apples. This behavior, in which questions are asked only if they are useful for constraining future behavior, has been shown previously using probabilistic recipe trees (PRTs) Kamar et al. (2009) , but to our knowledge has not been shown with optimization-based approaches. Reward learning. A reward learning policy must have only two phases and so would show one of two undesirable behaviors: either it would always ask H where to dispose of wormy apples, or it never asks and instead guesses when it does encounter wormy apples. Interactive reward learning. This has the same problem as in the previous section. 
If we start in the action phase and R picks up wormy apples, it will dispose of them in an arbitrary bin without asking H about her preferences, because it doesn't \"know\" that it will get the opportunity to do so. Alternatively, if we start with a learning phase, R will ask H where to dispose of wormy apples, even if R would never pick up any wormy apples. Note that more complex settings can have many more questions. Should R ask whether H would prefer to use seedless apples, should scientists ever invent them? Perhaps R should ask H how her pie preferences vary based on her emotional state? Asking about all possible situations is not scalable. \n LEARNING FROM PHYSICAL ACTIONS So far we have considered communicative assistance problems, in which H only provides feedback rather than acting to maximize reward herself. Allowing H to have physical actions enables a greater variety of potential behaviors. Most clearly, when R knows the reward (that is, P Θ puts support over a single θ), assistance games become equivalent to human-AI collaboration (Nikolaidis & Shah, 2013; Carroll et al., 2019; Dimitrakakis et al., 2017) . With uncertain rewards, we can see further interesting qualitative behaviors: R can learn just by observing how H acts in an environment, and then work with H to maximize reward, all within a single episode, as in shared autonomy with intent inference (Javdani et al., 2015; Brooks & Szafir, 2019) and other works that interpret human actions as communicative Whitney et al. (2017) . This can significantly reduce the burden on H in providing reward information to R (or equivalently, reduce the cost incurred by R in asking questions to H). Some work has shown that in such situations, humans tend to be pedagogic: they knowingly take individually suboptimal actions, in order to more effectively convey the goal to the agent (Ho et al., 2016; Hadfield-Menell et al., 2016 ). An assistive R who knows this can quickly learn what H wants, and help her accomplish her goals. \n + + + + Figure 3 : The cake-or-pie variant of the kitchen environment. H is equally likely to prefer cake or pie. Communication must take place through physical actions alone. We illustrate this with a variant of our kitchen environment, shown in Figure 3 . There are no longer questions and answers. Both H and R can move to an adjacent free space, and pick up and place the various objects. Only R may bake the dessert. R is uncertain whether H prefers cake or cherry pie. For both recipes, it is individually more efficient for H to pick up the dough first. However, we assume H is pedagogic and wants to quickly show R which recipe she wants. So, if she wants cake, she will pick up the chocolate first to signal to R that cake is the preferred dessert. It is not clear how exactly to think about this from a reward learning perspective: there aren't any communicative human actions since every action alters the state of the environment. In addition, there is no clear way to separate out a given trajectory into two phases. This situation cannot be easily coerced into the reward learning paradigm. In contrast, an assistive R can handle this situation perfectly. It initially waits to see which ingredient H picks up first, and then quickly helps H by putting in the ingredients from its side of the environment and baking the dessert. It learns implicitly to make the cake when H picks up chocolate, and to make the pie when H picks up dough. 
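The inference R is making here can be written down explicitly. Below is a toy sketch (ours; the noise parameter peda is an assumption, with peda = 1.0 recovering the fully pedagogic H described in the text) of the Bayes update over the dessert preference from H's first physical action.

```python
# Toy sketch (ours) of the implicit inference in the cake-or-pie environment:
# a Bayes update over theta in {cake, pie} from H's first pick, under an
# assumed (possibly noisy) pedagogic human model.
def posterior_over_dessert(first_item: str, peda: float = 0.9) -> dict:
    prior = {"cake": 0.5, "pie": 0.5}                  # uniform, as in Figure 3
    likelihood = {
        "cake": {"chocolate": peda, "dough": 1.0 - peda},
        "pie": {"chocolate": 1.0 - peda, "dough": peda},
    }
    unnorm = {g: prior[g] * likelihood[g].get(first_item, 0.0) for g in prior}
    z = sum(unnorm.values())
    return {g: v / z for g, v in unnorm.items()}

# posterior_over_dessert("chocolate") -> {'cake': 0.9, 'pie': 0.1}
```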
This is equivalent to pragmatic reasoning (Goodman & Frank, 2016): \"H would have picked up the chocolate if she wanted cake, so the fact that she picked up the dough implies that she wants cherry pie\". However, we emphasize that R is not explicitly programmed to reason in this manner, and is learned using deep reinforcement learning (Appendix C.3). Note that R is not limited to learning from H's physical actions: R can also use its own physical actions to \"query\" the human for information (Woodward et al., 2019; Sadigh et al., 2016) . \n LIMITATIONS AND FUTURE WORK Computational complexity. The major limitation of assistance compared to reward learning is that assistance problems are significantly more computationally complex, since we treat the unknown reward θ as the hidden state of a POMDP. We are hopeful that this can be solved through the application of deep reinforcement learning. An assistance problem is just like any other POMDP, except that there is one additional unobserved state variable θ and one additional observation a H . This should not be a huge burden, since deep reinforcement learning has been demonstrated to scale to huge observation and action spaces (OpenAI, 2018; Vinyals et al., 2019) . Another avenue for future work is to modify active reward learning algorithms in order to gain the benefits outlined in Section 4, while maintaining their computational efficiency. Increased chance of incorrect inferences. In practice, assistive agents will extract more information from H than reward learning agents, and so it is worse if π H is misspecified. We don't see this as a major limitation: to the extent this is a major worry, we can design π H so that the robot only makes inferences about human behavior in specific situations. For example, by having π H be independent of θ in a given state s, we ensure that the robot does not make any inferences about θ in that state. Environment design. We have shown that by having a hidden human goal, we can design environments in which optimal agent behavior is significantly more \"helpful\". One important direction for future work is to design larger, more realistic environments, in order to spur research into how best to solve such environments. We would be particularly excited to see a suite of assistance problems become a standard benchmark by which deep reinforcement learning algorithms are assessed. \n LIMITATIONS OF ASSISTANCE AND REWARD LEARNING While we believe that the assistance framework makes meaningful conceptual progress over reward learning, a number of challenges for reward learning remain unaddressed by assistance: Human modeling. A major motivation for both paradigms is that reward specification is very difficult. However, now we need to specify a prior over reward functions, and the human model π H . Consequently, misspecification can still lead to bad results (Armstrong et al., 2020; Carey, 2018) . While it should certainly be easier to specify a prior over θ with a \"grain of truth\" on the true reward θ * than to specify θ * directly, it is less clear that we can specify π H well. One possibility is to add uncertainty over the human policy π H . However, this can only go so far: information about θ must come from somewhere. If R is sufficiently uncertain about θ and π H , then it cannot learn about the reward (Armstrong & Mindermann, 2018) . Thus, for good performance we need to model π H . 
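One common way to write such a model down (a hedged sketch of a standard choice from the reward-inference literature, not something this paper prescribes) is a Boltzmann-rational policy, in which H picks actions with probability proportional to exp(β Q θ ), together with the Bayesian update over θ that R performs when it observes H's action; the table Q[θ] of H's action values and the rationality parameter β are assumptions of the sketch.

```python
# Sketch (ours) of a Boltzmann-rational human model and the induced belief
# update over theta. Q[theta] is an assumed [n_states, n_actions] array of
# H's action values under reward parameter theta; beta is a rationality knob.
import numpy as np

def boltzmann_policy(q_values: np.ndarray, beta: float = 5.0) -> np.ndarray:
    """pi_H(a | s, theta) proportional to exp(beta * Q_theta(s, a))."""
    z = beta * (q_values - q_values.max())   # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def update_belief(belief: dict, s: int, a_h: int, Q: dict, beta: float = 5.0) -> dict:
    """b'(theta) proportional to b(theta) * pi_H(a_h | s, theta)."""
    unnorm = {th: b * boltzmann_policy(Q[th][s], beta)[a_h] for th, b in belief.items()}
    z = sum(unnorm.values())
    return {th: v / z for th, v in unnorm.items()}
```

Misspecifying β or the value tables here is exactly the kind of π H misspecification that the paragraph above warns about.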
While imitation learning can lead to good results (Carroll et al., 2019) , the best results will likely require insights from a broad range of fields that study human behavior. Assumption that H knows θ. Both assistance games and reward learning makes the assumption that H knows her reward exactly, but in practice, human preferences change over time (Allais, 1979; Cyert & DeGroot, 1975; Shogren et al., 2000) . We could model this as the human changing their subgoals (Michini & How, 2012; Park et al., 2020) , adapting to the robot (Nikolaidis et al., 2017) or learning from experience (Chan et al., 2019) . Dependence on uncertainty. All of the behaviors of Section 4, as well as previously explored benefits such as off switch corrigibility (Hadfield-Menell et al., 2017a) , depend on R expecting to gain information about θ. However, R will eventually exhaust the available information about θ. If everything is perfectly specified, this is not a problem: R will have converged to the true θ * . However, in the case of misspecification, after convergence R is effectively certain in an incorrect θ, which has many troubling problems that we sought to avoid in the first place (Yudkowsky, year unknown). \n CONCLUSION While much recent work has focused on how we can build agents that learn what they should do from human feedback, there is not yet a consensus on how such agents should be built. In this paper, we contrasted the paradigms of reward learning and assistance. We showed that reward learning problems are equivalent to a special type of assistance problem, in which the human may only provide feedback at the beginning of the episode, and the agent may only act in the environment after the human has finished providing feedback. By relaxing these restrictions, we enable the agent to reason about how its actions in the environment can influence the process by which it solicits and learns from human feedback. This allows the agent to (1) choose questions based on their relevance, (2) create plans whose success depends on future feedback, and (3) learn from physical human actions in addition to communicative feedback. \n A REWARD LEARNING AND ASSISTANCE FORMALISMS A.1 RELATION BETWEEN NON-ACTIVE AND ACTIVE REWARD LEARNING The key difference between non-active and active reward learning is that in the latter R may ask H questions in order to get more targeted feedback. This matters as long as there is more than one question: with only one question, since there is no choice for R to make, R cannot have any influence on the feedback that H provides. As a result, non-active reward learning is equivalent to active reward learning with a single question. Proposition 6. Every non-active reward learning problem M\\r, C, Θ, r θ , P Θ , π H , k can be reduced to an active reward learning problem. Proof. We construct the active reward learning problem as M\\r, Q , C, Θ, r θ , P Θ , π H , k , where Q {q φ } where q φ is some dummy question, and π H (c | q, θ) π H (c | θ). Suppose the solution to the new problem is π R Q , f . Since f is a solution, we have: f = argmax f E θ∼PΘ,q 0:k−1 ∼π R Q ,c 0:k−1 ∼π H (•|qi,θ) ER( f (q 0:k−1 , c 0:k−1 )) = argmax f E θ∼PΘ,q 0:k−1 =q φ ,c 0:k−1 ∼π H (•|q φ ,θ) ER( f (q 0:k−1 = q φ , c 0:k−1 )) all q are q φ = argmax f E θ∼PΘ,c 0:k−1 ∼π H (•|θ) ER( f (q 0:k−1 = q φ , c 0:k−1 )) . Thus f (c 0:k−1 ) = f (q 0:k−1 = q φ , c 0:k−1 ) is a maximizer of E θ∼PΘ,c 0:k−1 ∼π H (•|θ) ER( f (c 0:k−1 ) , making it a solution to our original problem. Proposition 7. 
Every active reward learning problem M\\r, Q, C, Θ, r θ , P Θ , π H , k with |Q| = 1 can be reduced to a non-active reward learning problem. Proof. Let the sole question in Q be q φ . We construct the non-active reward learning problem as M\\r, C, Θ, r θ , P Θ , π H , k , with π H (c | θ) = π H (c | q φ , θ). Suppose the solution to the new problem is f . Then we can construct a solution to the original problem as follows. First, note that π R Q must be π R Q (q i | q 0:i−1 , c 0:i−1 ) = 1[q i = q φ ], since there is only one possible question q φ . Then by inverting the steps in the proof of Proposition 6, we can see that f is a maximizer of E θ∼PΘ,q 0:k−1 ∼π R Q ,c 0:k−1 ∼π H (•|qi,θ) ER( f (• | c 0:k−1 ) ) . Thus, by defining f (q 0:k−1 , c 0:k−1 ) = f (c 0:k−1 ), we get a maximizer to our original problem, making π R Q , f a solution to the original problem. \n A.2 REDUCING ASSISTANCE PROBLEMS TO POMDPS Suppose that we have an assistance problem M, π H with: M = S, {A H , A R }, {Ω H , Ω R }, {O H , O R }, T, P S , γ, Θ, r θ , P Θ . Then, we can derive a single-player POMDP for the robot M = S , A R , Ω , O , T , r , P 0 , γ by embedding the human reward parameter into the state. We must include the human's previous action a H into the state, so that the robot can observe it, and so that the reward can be computed. To allow for arbitrary (non-Markovian) human policies π H , we could encode the full history in the state, in order to embed π H into the transition function T . However, in our experiments we only consider human policies that are in fact Markovian. We make the same assumption here, giving a policy π H (a H t | o H t , a R t , θ) that depends on the current observation and previous robot action. The transformation M → M is given as follows: S S × A H × Θ State space Ω Ω R × A H Observation space O (o | s ) = O ((o R , a H 1 ) | (s, a H 2 , θ)) Observation function 1[a H 1 = a H 2 ] • O R (o R | s) T (s 2 | s 1 , a R ) = T ((s 2 , a H 1 , θ 2 ) | (s 1 , a H 0 , θ 1 ), a R ) Transition function T (s 2 | s 1 , a H 1 , a R ) • 1[θ 2 = θ 1 ] • o H ∈Ω H O H (o H | s 1 ) • π H (a H 1 | o H , a R , θ) r (s 1 , a R , s 2 ) = r ((s 1 , a H 0 , θ), a R , (s 2 , a H 1 , θ)) Reward function r θ (s 1 , a H 1 , a R , s 2 ) P 0 (s ) = P 0 ((s, a H , θ)) Initial state distribution P S (s) • P Θ (θ) • 1[a H = a H init ] where a H init is arbitrary In the case where the original assistance problem is fully observable, the resulting POMDP is an instance of a Bayes-Adaptive MDP (Martin, 1967; Duff, 2002) . Any robot policy π R can be translated from the APOMDP M naturally into an identical policy on M . Note that in either case, policies are mappings from (Ω R , A H , A R ) * × Ω R to ∆(A R ). This transformation preserves optimal agent policies: Proposition 8. A policy π R is a solution of M if and only if it is a solution of M . Proof. Recall that an optimal policy π * in the POMDP M is one that maximizes the expected value: EV(π) = E s 0 ∼P 0 ,τ ∼ s 0 ,π ∞ t=0 γ t r (s t , a t , s t+1 ) = E s 0 ∼P 0 ,τ ∼ s 0 ,π ∞ t=0 γ t r θ (s t , a H t , a t , s t+1 ) where the trajectories τ s are sequences of state, action pairs drawn from the distribution induced by the policy, starting from state s 0 . Similarly, an optimal robot policy π R * in the APOMDP M is one that maximizes its expected reward: ER(π R ) = E s0∼P S ,θ∼PΘ,τ ∼ s0,θ,π R ∞ t=0 γ t r θ (s t , a H t , a R t , s t+1 ) . To show that the optimal policies coincide, suffices to show that for any π, ER(π) (in M) is equal to EV(π) (in M ). 
To do this, we will show that π induces the \"same\" distributions over the trajectories. For mathematical convenience, we will abuse notation and consider trajectories of the form τ ; θ ∈ (S, A H , A R ) * × Θ; it is easy to translate trajectories of this form to trajectories in either M or M. We will show that the sequence τ ; θ has the same probability when the robot takes the policy π in both M and M by induction on the lengths of the sequence. First, consider the case of length 1 sequences. τ ; θ = [(s, a R , a H ); θ]. Under both M and M, s and θ are drawn from P S and P Θ respectively. Similarly, a R and a H are drawn from π R (• | o R 0 ) and π H (• | o H , a R , θ) respectively. So the distribution of length 1 sequences is the same under both M and M. \n Now, consider some longer sequence τ ; θ = [(s 1 , a R 1 , a H 1 ), ...., (s t , a R t , a H t ); θ] . By the inductive hypothesis, the distribution of (s 1 , a H 1 , a R 1 ), ...., (s t−1 , a H t−1 , a R t−1 ) and θ are identical; it suffices to show that (s t , a H t , a R t ) has the same distribution, conditioned on the other parts of τ ; θ, under M and under M. Yet by construction, s t is drawn from the same distribution T ( •|s t−1 , a H t−1 , a R t−1 ), a H A.3 OPTIMAL STRATEGY PAIRS AS POLICY-CONDITIONED BELIEF We use the term policy-conditioned belief to refer to a distribution over human policies which depends on the chosen robot policy. We use policy-conditioned beliefs as opposed to a simple unconditional distribution over human policies, because it allows us to model a wide range of situations, including situations with prior coordination, or where humans adapt to the robot's policy as a result of prior interactions. Moreover, this presents a unifying framework with prior work on assistance games (Hadfield-Menell et al., 2016) . In fact, finding an optimal strategy pair for the assistance game can be thought of as finding the policy which is best when the human adapts optimally, as formalized below: Proposition 9. Let M = S, {A H , A R }, {Ω H , Ω R }, {O H , O R }, T, P S , γ, Θ, r θ , P Θ be an as- sistance game. Let B(π R )(π H ) ∝ 1[EJR(π H , π R ) = max πH ∈Π H EJR(π H , π R ) ] be an associated policy-conditioned belief. Let π R be the solution to M, B . Then B(π R ), π R is an optimal strategy pair. Proof. Let π H , π R be an arbitrary strategy pair. Then EJR(π H , π R ) ≤ EJR(B(π R ), π R ) by the definition of B, and EJR(B(π R ), π R ) ≤ EJR(B(π R ), π R ) by the definition of π R . Thus EJR(π H , π R ) ≤ EJR(B(π R ), π R ). Since π H , π R was assumed to be arbitrary, B(π R ), π R is an optimal strategy pair. \n B EQUIVALENCE OF RESTRICTED ASSISTANCE AND EXISTING ALGORITHMS B.1 EQUIVALENCE OF TWO PHASE ASSISTANCE AND REWARD LEARNING Here we prove the results in Section 3 showing that two phase communicative assistance problems and reward learning problems are equivalent. We first prove Proposition 4, and then use it to prove the others. Proposition 4. Every two-phase communicative assistance problem M, π H , a R noop can be reduced to an equivalent active reward learning problem. Proof. Let M = S, {A H , A R }, {Ω H , Ω R }, {O H , O R }, T, P S , γ, Θ, r θ , P Θ be the assistance game, and let the assistance problem's action phase start at t act . Let a H φ ∈ A H be some arbitrary human action and o H φ ∈ Ω H be some arbitrary human observation. 
We construct the new active reward learning problem M , Q , C , Θ, r θ , P Θ , π H , k as follows: Q {a R ∈ A R : a R is communicative} Questions C A H Answers M S, A , Ω R , O R , T , P 0 , γ POMDP A A R \\Q Physical actions T (s | s, a R ) T (s | s, a H φ , a R ) Transition function k t act Number of questions P 0 (s) s 0:k ∈S P M (s 0:k , s k +1 = s | a R 0:k = a R noop , a H 0:k = a H φ ) Initial state distribution r θ (s, a R , s ) r θ (s, a H φ , a R , s ) Reward function π H (c | q, θ) π H (c | o H φ , q, θ) \n Human decision function Note that it is fine to use a H φ in T, r θ and to use o H φ in π H even though they were chosen arbitrarily, because since the assistance problem is communicative, the result does not depend on the choice. The P M term in the initial state distribution denotes the probability of a trajectory under M and can be computed as P M (s 0:T +1 | a R 0:T , a H 0:T ) = P S (s 0 ) T t=0 T (s t+1 | s t , a H t , a R t ). Given some pair π R Q , f to the active reward learning problem, we construct a policy for the assistance problem as π R (a R t | o R t , τ R t−1 )    π R Q (a R t | a R 0:t−1 , a H 0:t−1 ), t < k and a R 0:t ∈ Q f (a R 0:k−1 , a H 0:k−1 )(a R t | o R k:t , a R k:t−1 ), t ≥ k and a R 0:k−1 ∈ Q and a R k:t ∈ A 0, else . We show that there must exist a solution to P that is the analogous policy to some pair. Assume towards contradiction that this is not the case, and that there is a solution π R * that is not the analogous policy to some pair. Then we have a few cases: 1. π R * assigns positive probability to a R i = a / ∈ Q for i < k. This contradicts the two-phase assumption. 2. π R * assigns positive probability to a R i = q ∈ Q for i ≥ k. This contradicts the two-phase assumption. 3. π R * (a R t | o R t , τ R t−1 ) depends on the value of o R i for some i < k. Since both a H 0:k−1 and a R 0:k−1 cannot affect the state or reward (as they are communicative), the distribution over o R 0:k−1 is fixed and independent of π R , and so there must be some other π R that is independent of o R 0:k−1 that does at least as well. That π R would be the analogous policy to some pair, giving a contradiction. Now, suppose we have some pair π R Q , f , and let its analogous policy be π R . Then we have: E θ∼PΘ,q 0:k−1 ∼π R Q ,c 0:k−1 ∼π H [ER(f (q 0:k−1 , c 0:k−1 ))] = E θ∼PΘ E q 0:k−1 ∼π R ,c 0:k−1 ∼π H [ER(f (q 0:k−1 , c 0:k−1 ))] = E θ∼PΘ E q 0:k−1 ∼π R ,c 0:k−1 ∼π H E s0∼P 0 ,a R t ∼f (q 0:k−1 ,c 0:k−1 ),st+1∼T (•|st,a R t ) ∞ t=0 γ t r θ (s t , a R t , s t+1 ) = E θ∼PΘ E q 0:k−1 ∼π R ,c 0:k−1 ∼π H E s k ∼P 0 ,a R t ∼π R (•| c 0:k−1 ,o k:t , q 0:k−1 ,a k:t−1 ,st+1∼T (•|st,a R t ) 1 γ k ∞ t=k γ t r θ (s t , a R t , s t+1 ) = E θ∼PΘ E q 0:k−1 ∼π R ,c 0:k−1 ∼π H E s k ∼P 0 ,a R t ∼π R (•| c 0:k−1 ,o k:t , q 0:k−1 ,a k:t−1 ,st+1∼T (•|st,a R t ) 1 γ k ∞ t=k γ t r θ (s t , a H φ , a R t , s t+1 ) However, since all the actions in the first phase are communicative and thus don't impact state or reward, the first k timesteps in the two phase assistance game have constant reward in expectation. Let C = Es 0:k k−1 t=0 γ t r θ (s t , a H φ , a R noop , s t+1 ) . This gives us: E θ∼PΘ,q 0:k−1 ∼π R Q ,c 0:k−1 ∼π H [ER(f (q 0:k−1 , c 0:k−1 ))] = E θ∼PΘ E s0∼P S ,θ∼PΘ,τ ∼ s0,θ,π H ,π R 1 γ k ∞ t=0 γ t r θ (s t , a H t , a R t , s t+1 ) − 1 γ k C = 1 γ k ER(π R ) − C . Thus, if π R Q , f is a solution to the active reward learning problem, then π R is a solution of the two-phase communicative assistance problem. Corollary 5. 
If a two-phase communicative assistance problem M, π H , a R noop has exactly one communicative robot action, it can be reduced to an equivalent non-active reward learning problem. Proof. Apply Proposition 4 followed by Proposition 7. (Note that the construction from Proposition 4 does lead to an active reward learning problem with a single question, meeting the precondition for Proposition 7.) Proposition 2. Every active reward learning problem P = M, Q, C, Θ, r θ , P Θ , π H , k can be reduced to an equivalent two phase communicative assistance problem P = M , π H . Proof. Let M = S, A, Ω, O, T, P 0 , γ . Let q 0 ∈ Q be some question and c 0 ∈ C be some (unrelated) choice. Let N be a set of fresh states {n 0 , . . . n k−1 }: we will use these to count the number of questions asked so far. Then, we construct the new two phase communicative assistance problem P = M , π H , a R noop as follows: M S , {C, A R }, {Ω H , Ω R }, {O H , O R }, T , P S , γ, Θ, r θ , P Θ Assistance game S S ∪ N State space P S (ŝ) 1[ŝ = n 0 ] Initial state distribution A R A ∪ Q Robot actions Ω H S H's observation space Ω R Ω ∪ N R's observation space O H (o H | ŝ) 1[o H = ŝ] H's observation function O R (o R | ŝ) 1[o R = ŝ], ŝ ∈ N O(o R | ŝ, else R's observation function T (ŝ | ŝ, a H , a R )        P 0 (ŝ ), ŝ = n k−1 , 1[ŝ = n i+1 ], ŝ = n i with i < k − 1 T (ŝ | ŝ, a R ), ŝ ∈ S and a R ∈ A, 1[s = s], else Transition function r θ (ŝ, a H , a R , ŝ )        −∞, ŝ ∈ N and a R / ∈ Q, −∞, ŝ ∈ S and a R ∈ Q, 0, ŝ ∈ N and a R ∈ Q, r θ (s, a R , s ), else Reward function π H (a H | o H , a R , θ) π H (a H | a R , θ), a R ∈ Q c 0 , else Human policy a R noop q 0 Distinguished noop action Technically r θ should not be allowed to return −∞. However, since S and A are finite, r θ is bounded, and so there exists some large finite negative number that is functionally equivalent to −∞ that we could use instead. Looking at the definitions, we can see T and r are independent of a H , and π H is independent of o H , making this a communicative assistance problem. By inspection, we can see that every q ∈ Q is a communicative robot action. Any a R / ∈ Q must not be a communicative action, because the reward r θ differs between a R and q 0 . Thus, the communicative robot actions are Q and the physical robot actions are A. Note that by construction of P S and T , we must have s i = n i for i ∈ {0, 1, . . . k − 1}, after which s k is sampled from P 0 and all s t ∈ S for t ≥ k. Given this, by inspecting r θ , we can see that an optimal policy must have a R 0:k−1 ∈ Q and a R k: / ∈ Q to avoid the −∞ rewards. Since a R k: / ∈ Q, we have a H k: = c 0 . Thus, setting a H noop = c 0 , we have that the assistance problem is two phase with actions at t act = k, as required. Let a policy π R for the assistance problem be reasonable if it never assigns positive probability to a R ∈ A when t < k or to a R ∈ Q when t ≥ k. Then, for any reasonable policy π R we can construct an analogous pair π R Q , f to the original problem P as follows: π R Q (q i | q 0:i−1 , c 0:i−1 ) π R (q i | o R 0:i−1 = n 0:i−1 , a R 0:i−1 = q 0:i−1 , a H 0:i−1 = c 0:i−1 ), f (q 0:k−1 , c 0:k−1 )(a t | o 0:t , a 0:t−1 ) π R (a t | o R 0:t+k , a R 0:t+k−1 , a H 0:t+k−1 ) , where for the second equation we have o R 0:k−1 = n 0:k−1 a R 0:k−1 = q 0:k−1 a H 0:k−1 = c 0:k−1 o R k:t+k = o 0:t a R k:t+k−1 = a 0:t−1 a H k:t+k−1 = a H noop Note that this is a bijective mapping. Consider some such policy π R and its analogous pair π R Q , f . 
By construction of T , we have that the first k states in any trajectory are n 0:k−1 and the next state is distributed as P 0 (•). By our assumption on π R we know that the first k robot actions must be selected from Q and the remaining robot actions must be selected from A, which also implies (based on π H ) that after the the remaining human actions must be c 0 . Finally, looking at r θ we can see that the first k timesteps get 0 reward. Thus: ER P (π R ) = E s 0 ∼P S ,θ∼P θ ,τ ∼ s 0 ,θ,π H ,π R ∞ t=0 γ t r θ (s t , a H t , a R t , s t+1 ) = E θ∼P θ ,a R 0:k−1 ∼π R ,a H 0:k−1 ∼π H ,s k ∼P0,τ k: ∼ s k ,θ,π H ,π R ∞ t=k γ t r θ (s t , a H t , a R t , s t+1 ) = E θ∼P θ ,q 0:k−1 ∼π R Q ,c 0:k−1 ∼π H ,s0∼P0,τ ∼ s0,θ,f (q 0:k−1 ,c 0:k−1 ) γ k ∞ t=0 γ t r θ (s t , a t , s t+1 ) = γ k E θ∼PΘ,q 0:k−1 ∼π R Q ,c 0:k−1 ∼π H [ER(f (q 0:k−1 , c 0:k−1 ))] , which is the objective of the reward learning problem scaled by γ k . Since we have a bijection between reasonable policies in P and tuples in P that preserves the objectives (up to a constant), given a solution π R * to P (which must be reasonable), its analogous pair π R Q , f must be a solution to P. Corollary 3. Every non-active reward learning problem M, C, Θ, r θ , P Θ , π H , k can be reduced to an equivalent two phase communicative assistance problem M , π H . Proof. Apply Proposition 6 followed by Proposition 2. \n B.2 ASSISTANCE WITH NO REWARD INFORMATION In a communicative assistance problem, once there is no information to be gained about θ, the best thing for R to do is to simply maximize expected reward according to its prior. We show this in the particular case where π H is independent of θ and thus cannot communicate any information about θ: Proposition 10. A communicative assistance problem M, π H where π H is independent of θ can be reduced to a POMDP M with the same state space. Proof. Given M = S, {A H , A R }, {Ω H , Ω R }, {O H , O R }, T, P S , γ, Θ, r θ , P Θ , we define a new POMDP as M = S, A R , Ω R , O R , T , r , P S , γ , with T (s | s, a R ) = T (s | s, a H φ , a R ) and r (s, a R , s ) = Eθ∼P θ r θ (s, a H φ , a R , s ) . Here, a H φ is some action in A H ; note that it does not matter which action is chosen since in a communicative assistance problem human actions have no impact on T and r. Expanding the definition of expected reward for the assistance problem, we get: ER(π R ) = E s0∼P S ,θ∼PΘ,τ ∼ s0,θ,π R ∞ t=0 γ t r θ (s t , a H t , a R t , s t+1 ) = E s0∼P S E θ∼PΘ E τ ∼ s0,θ,π R ∞ t=0 γ t r θ (s t , a H t , a R t , s t+1 ) Note that because π H (a H | o H , a R , θ ) is independent of θ, the robot gains no information about θ and thus π R is also independent of θ. This means that we have: ER(π R ) = E s0∼P S E θ∼PΘ E τ ∼ s0,π R ∞ t=0 γ t r θ (s t , a H t , a R t , s t+1 ) Let r max = max s,a H ,a R ,s |r θ (s, a H , a R , s )| (which exists since S, A H , and A R are finite). Then: ∞ t=0 γ t |r θ (s t , a H t , a R t , s )| ≤ ∞ t=0 γ t r max = r max 1 − γ < ∞. So we can apply Fubini's theorem to swap the expectations and sums. Applying Fubini's theorem twice gives us: ER(π R ) = E s0∼P S E τ ∼ s0,π R E θ∼PΘ ∞ t=0 γ t r θ (s t , a H t , a R t , s t+1 ) = E s0∼P S E τ ∼ s0,π R ∞ t=0 γ t E θ∼PΘ r θ (s t , a H t , a R t , s t+1 ) = E s0∼P S E τ ∼ s0,π R ∞ t=0 γ t r (s t , a R t , s t+1 ) . 
In addition, the trajectories are independent of π H , since the assistance problem is communicative, and so for a given policy π R , the trajectory distributions for M and M ′ coincide, and hence the expected rewards for π R also coincide. Thus, the optimal policies must coincide. \n C EXPERIMENTAL DETAILS C.1 PLANS CONDITIONAL ON FUTURE FEEDBACK In the environment described in Section 4.1, R needs to bake either apple or blueberry pie (cherry is never preferred over apple) within 6 timesteps, and may query H about her preferences about the pie. Making the pie takes 3 timesteps: first R must make flour into dough, then it must add one of the fillings, and finally it must bake the pie. Baking the correct pie results in +2 reward, while baking the wrong one results in a penalty of -1. In addition, H might be away for several timesteps at the start of the episode. Querying H costs 0.1 when she is present and 3 when she is away. The optimal policy for this environment depends on whether H would be home early enough for R to query her and bake the desired pie by the end of the episode. R should always quickly make dough, as that is always required. If H returns home on timestep 4 or earlier, R should wait for her to get home, ask her about her preferences and then finish the desired pie. If H returns home later, R should make its best guess about what she wants, and ensure that there is a pie ready for her to eat: querying H when she is away is too costly, and there is not enough time to wait for H, query her, put in the right filling, and bake the pie. \n C.2 RELEVANCE AWARE ACTIVE LEARNING In the wormy-apple environment described in Section 4.2, the robot has to bring the human some apples in order to make a pie, but there's a 20% chance that the apples have worms in them, and the robot does not yet know how to dispose of soiled apples. The robot gets 2 reward for making an apple pie (regardless of how it disposed of any wormy apples), and gets −2 reward if it disposes of the apples in the wrong container. Additionally, asking a question incurs a cost of 0.1. We solve this environment with exact value iteration. If the policy is restricted to be two-phase and we use a lower discount rate (λ = 0.9), R's policy never asks questions and instead simply tries to make the apple pie, guessing which bin to dispose of wormy apples in if it encounters any. Intuitively, since it would have to always ask the question at the beginning, it would always incur a cost of 0.1 as well as delay the pie by a timestep, resulting in 10% less value, and this is only valuable when there turn out to be worms and its guess about which bin to dispose of them in is incorrect, which only happens 10% of the time. This ultimately isn't worthwhile. This achieves an expected undiscounted reward of 1.8 (that is, 2 − 0.2 · 0.5 · 2). Removing the two-phase restriction causes R to ask questions mid-trajectory, even with this low discount. With this, R achieves the maximal expected undiscounted reward of 1.98 (that is, 2 − 0.2 · 0.1). With a higher discount rate of λ = 0.99, the two-phase policy will always ask about which bin to dispose of wormy apples in, achieving 1.9 (that is, 2 − 0.1) expected undiscounted reward. This is still less than the policy without the two-phase restriction, which continues to get undiscounted reward 1.98, because it avoids asking a question 80% of the time, and so incurs the cost of asking a question less often. \n C.3 LEARNING FROM PHYSICAL ACTIONS: CAKE-OR-PIE EXPERIMENT In the environment described in Section 4.3, H wants a dessert, but R is unsure whether H prefers cake or pie. 
Preparing the more desired recipe provides a base value of V = 10, and the less desired recipe provides a base value of V = 1. Since H doesn't want the preparation to take too long, the actual reward when a dessert is made is given by r t = V • f (t), with f (t) = 1 − (t/N ) 4 , and N = 20 as the episode horizon. The experiments use the pedagogic H, that picks the chocolate first if they want cake, which allows R to distinguish the desired recipe early on -this is in contrast with the non-pedagogic H, which does not account for R beliefs and always goes for the dough first. With the pedagogic H, the optimal R does not move until H picks or skips the dough; if H skips the dough, this implies the recipe is cake and R takes the sugar, and then the cherries -otherwise it goes directly for the cherries. With the non-pedagogic H, the optimal R goes for the cherries first (since it is a common ingredient), and only then it checks whether H went for the chocolate or not, and has to go all the way back to grab the sugar if H got the chocolate. We train R with Deep Q-Networks (DQN; (Mnih et al., 2013 )); we ran 6 seeds for 5M timesteps and a learning rate of 10 −4 ; results are shown in Figure 4 . \n D OPTION VALUE PRESERVATION In Section 4.1, we showed that R takes actions that are robustly good given its uncertainty over θ, but waits on actions whose reward will be clarified by future information about θ. Effectively, R is preserving its option value: it ensures that it remains capable of achieving any of the plausible reward functions it is uncertain over. A related notion is that of conservative agency (Turner et al., 2020) , which itself aims to preserve an agent's ability to optimize a wide variety of reward functions. This is achieved via attainable utility preservation (AUP). Given an agent optimizing a reward r spec and a distribution over auxiliary reward functions r aux , the AUP agent instead optimizes the reward where the hyperparameter λ determines how much to penalize an action for destroying option value, and a φ is an action that corresponds to R \"doing nothing\". However, the existing AUP penalty is applied to the reward, which means it penalizes any action that is part of a long-term plan that destroys option value, even if the action itself does not destroy option value. For example, in the original Kitchen environment of Figure 1 with a sufficiently high λ, any trajectory that ends with baking a pie destroys option value and so would have negative reward. As a result, there is no incentive to make dough: the only reason to make dough is to eventually make a pie, but we have established that the value of making a pie is negative. What we need is to only penalize an action when it is going to immediately destroy option value. This can be done by applying the penalty during action selection, rather than directly to the reward: After this modification, the agent will correctly make dough, and stop since it does not know what filling to use. In an assistance problem, R will only preserve option value if it expects to get information that will resolve its uncertainty later: otherwise, it might as well get what reward it can given its uncertainty. Thus, we might expect to recover existing notions of option value preservation in the case where the agent is initially uncertain over θ, but will soon learn the true θ. Concretely, let us consider a fully observable communicative Assistance POMDP where the human will reveal θ on their next action. 
In that case, R's chosen action a gets immediate reward r(s, a) = Eθ [r θ (s, a)], and future reward Eθ∼P Θ ,s ∼T (•|s,a) [V θ (s )], where V θ (s) refers to the value of the optimal policy when the reward is known to be r θ and the initial state is s. Thus, the agent should choose actions according to: argmax a r AU P (s, a) = r spec (s, a) − λ E raux [max(Q raux (s, a φ ) − Q raux (s, a), 0)] \n π AU P (s) = argmax a Q rspec (s, a) − λ E raux [max(Q raux (s, a φ ) − Q raux (s, a), 0)] \n Note that unlike H, R does not observe the reward parameter θ, and must infer θ much like it does the hidden state. A fully observable assistance game is one in which both H and R can observe the full state. In such cases, we omit Ω H , Ω R , O H and O R . Since we have not yet specified how H behaves, it is not clear what the agent should optimize for. R's observations until time t, and τ H t for H's observations; thus R's policy can be written as π R (a R | o R t , τ R t−1 ), while H's can be written as π H (a H | o H t , a R t τ H t−1 , θ). \n\t\t\t Relative to Hadfield-Menell et al. (2016) , our definition allows for partial observability and requires that the initial distribution over S and Θ be independent. We also have H choose her action sequentially after R, rather than simultaneously with R, in order to better parallel the reward learning setting. \n\t\t\t t is drawn from the same distribution π H (• | o H t , a R t , θ), and a R t is drawn from the same distribution π R (• | o R t , τ R t−1 )).", "date_published": "n/a", "url": "n/a", "filename": "benefits_of_assistance_over_re.tei.xml", "abstract": "Much recent work has focused on how an agent can learn what to do from human feedback, leading to two major paradigms. The first paradigm is reward learning, in which the agent learns a reward model through human feedback that is provided externally from the environment. The second is assistance, in which the human is modeled as a part of the environment, and the true reward function is modeled as a latent variable in the environment that the agent may make inferences about. The key difference between the two paradigms is that in the reward learning paradigm, by construction there is a separation between reward learning and control using the learned reward. In contrast, in assistance these functions are performed as needed by a single policy. By merging reward learning and control, assistive agents can reason about the impact of control actions on reward learning, leading to several advantages over agents based on reward learning. We illustrate these advantages in simple environments by showing desirable qualitative behaviors of assistive agents that cannot be found by agents based on reward learning.", "id": "a99b3caf11ef06360b6030ad54067dda"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Xiao Fan Wang", "Guanrong Chen"], "title": "SYNCHRONIZATION IN SMALL-WORLD DYNAMICAL NETWORKS", "text": "Introduction Collective motions of coupled dynamical networks are of significant interest in many fields of science and technology. In particular, synchronization in networks of coupled chaotic dynamical systems has received a great deal of attention in recent years. Most of the existing work on synchronization of coupled networks assumes that the coupling configuration is completely regular (see e.g. [Heagy et al., 1994; Wu & Chua, 1995] ), while a few studies address the issue of synchronization in randomly coupled networks [Gade, 1996; Manrubia & Mikhailov, 1999] . 
However, many biological, technological and social networks are neither completely regular nor completely random. To interpolate between these two extremes, Watts and Strogatz [1998] introduced the interesting concept of small-world networks. The so-called small-world networks have intermediate connectivity properties but exhibit a high degree of clustering as in the regular networks and a small average distance between vertices as in the random networks. They also found that the small-world networks of coupled phase oscillators can synchronize almost as readily as the globally coupled networks, despite the fact that they have much fewer edges [Watts, 1999] . For a review of recent works on small-world networks, see [Newman, 2000] . More recently, Gade and Hu [2000] explored the stability of synchronous chaos in coupled map lattices with small-world connectivity and found that in this case synchronous chaos is possible even in the thermodynamic limit. Lago-Fernandez et al. [2000] also investigated the fast response and temporal coherent oscillations in a small-world network of Hodgkin-Huxley neurons. In this study, we consider synchronization in a network of linearly coupled identical continuoustime dynamical systems. As shown by Wu [1995] , for any given number of cells, strong enough mutual diffusive coupling will result in synchronization of the cells. Two commonly studied coupling configurations are the so-called easiest-toimplement nearest-neighbor coupling and the most difficult-to-implement global coupling. It has been shown that for any given coupling strength, if the number of cells is large enough, the globally coupled network will eventually synchronize, while the nearest-neighbor coupled network cannot achieve such synchronization under the same condition. This observation naturally poses the following question: for a nearest-neighbor coupled network with a sufficiently large number of cells and with an arbitrary coupling strength, is it possible to achieve synchronization of the network by a small modification of the nearest-neighbor coupling configuration, for example, by adding a small fraction of connection between some different pairs of cells? In this paper we provide a positive answer to this question based on the small-world network models. \n Preliminaries We consider a network of N identical cells, linearly coupled through the first state variable of each cell, with each cell being an n-dimensional dynamical subsystem. The state equations of the entire network are ẋi1 = f 1 (x i ) + c N j=1 a ij x j1 ẋi2 = f 2 (x i ) . . . ẋin = f n (x i ) i = 1, 2, . . . , N (1) where x i = (x i1 , x i2 , . . . , x in ) ∈ R n are the state variables of cell i, f i (0) = 0, c > 0 represents the coupling strength, and A = (a ij ) N ×N is the coupling matrix. In this paper, we only consider symmetric and diffusive coupling. In particular, we assume that (i) A is a symmetric and irreducible matrix. (ii) The off-diagonal elements, a ij (i = j) of A, are either 1 or 0 (when a connection between cell i and cell j is absent). (iii) The elements of A satisfy a ii = − N j=1 j =i a ij , i = 1, 2, . . . , N (2) The above conditions imply that one eigenvalue of A is zero, with multiplicity 1, and all the other eigenvalues of A are strictly negative. Given the dynamics of an isolated cell and the coupling strength, stability of the synchronization state of the network can be characterized by those nonzero eigenvalues of the coupling matrix. 
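As a numerical sanity check of conditions (i)-(iii) (our sketch, not part of the original paper), the following code builds a diffusive coupling matrix from a 0/1 adjacency pattern via formula (2) and verifies that, for a connected nearest-neighbor ring, one eigenvalue is zero and the rest are strictly negative.

```python
# Sketch (ours): build a diffusive coupling matrix satisfying conditions (i)-(iii)
# from a symmetric 0/1 adjacency pattern and check its spectrum numerically.
import numpy as np

def diffusive_coupling_matrix(adj: np.ndarray) -> np.ndarray:
    """adj: symmetric 0/1 adjacency pattern (zero diagonal) of a connected graph."""
    A = adj.astype(float).copy()
    np.fill_diagonal(A, -adj.sum(axis=1))     # a_ii = -sum_{j != i} a_ij, formula (2)
    return A

# Nearest-neighbor ring of N cells.
N = 10
adj = np.zeros((N, N))
for i in range(N):
    adj[i, (i + 1) % N] = adj[(i + 1) % N, i] = 1.0

A = diffusive_coupling_matrix(adj)
eigs = np.sort(np.linalg.eigvalsh(A))         # A is symmetric, so eigenvalues are real
assert np.isclose(eigs[-1], 0.0) and eigs[-2] < 0   # one zero eigenvalue, rest negative
```

The second-largest eigenvalue computed here is the λ 1 that enters the synchronization criterion discussed next.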
A typical result states that the network will synchronize if these eigenvalues are negative enough [Wu & Chua, 1995]. Lemma 1. Consider network (1). Let λ 1 be the largest nonzero eigenvalue of the coupling matrix A of the network. The synchronization state of network (1), defined by x 1 = x 2 = · · · = x N , is asymptotically stable if
λ 1 ≤ −T/c, (3)
where c > 0 is the coupling strength of the network and T > 0 is a positive constant such that zero is an exponentially stable point of the n-dimensional system
ż 1 = f 1 (z) − T z 1 , ż 2 = f 2 (z), . . . , ż n = f n (z). (4)
Note that system (4) is actually a single-cell model with self-feedback −T z 1 . Condition (3) means that the entire network will synchronize provided that λ 1 is negative enough, e.g. it suffices for it to be no greater than −T/c, where T is a constant such that the self-feedback term −T z 1 stabilizes an isolated cell. As mentioned above, two commonly studied coupling configurations are the nearest-neighbor coupling and the global coupling. Experimentally, the nearest-neighbor coupling is perhaps the easiest to implement and, by contrast, the global coupling is the most expensive to implement. The nearest-neighbor coupling configuration consists of cells arranged in a ring, each coupled to its nearest neighbors. The corresponding coupling matrix is
A nc = \begin{pmatrix} -2 & 1 & & & 1 \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ 1 & & & 1 & -2 \end{pmatrix}. (5)
The eigenvalues of A nc are
−4 sin²(kπ/N), k = 0, 1, . . . , N − 1. (6)
Therefore, according to Lemma 1, the nearest-neighbor coupled network will asymptotically synchronize if
4 sin²(π/N) ≥ T/c. (7)
The global coupling configuration means that any two different cells are connected directly. The corresponding coupling matrix is
A gc = \begin{pmatrix} -N+1 & 1 & \cdots & 1 \\ 1 & -N+1 & \cdots & 1 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & \cdots & -N+1 \end{pmatrix}. (8)
Matrix A gc has a single eigenvalue at 0 and all the others equal to −N. Hence, Lemma 1 implies that this network will asymptotically synchronize if
N ≥ T/c. (9)
In summary, for any given coupling strength c > 0, the globally coupled network can synchronize as long as the number of cells N is large enough. On the other hand, since sin(π/N) decreases to zero as N increases, relation (7) cannot hold for sufficiently large N. Simulations also show that the nearest-neighbor coupled network cannot synchronize if the number of cells is sufficiently large. Thus, we have seen a trade-off between these two situations. 
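The trade-off can also be seen with a few lines of arithmetic; the sketch below (ours) evaluates conditions (7) and (9) for an arbitrary example value of T/c.

```python
# Quick numerical illustration (ours) of conditions (7) and (9); T and c are
# arbitrary example values, not parameters from the paper.
import numpy as np

def ring_synchronizes(N, T, c):
    return 4 * np.sin(np.pi / N) ** 2 >= T / c      # condition (7)

def global_synchronizes(N, T, c):
    return N >= T / c                               # condition (9)

T, c = 2.0, 1.0
ring_ok = [N for N in range(3, 200) if ring_synchronizes(N, T, c)]
print("nearest-neighbor coupling synchronizes for N in", ring_ok)    # [3, 4]
print("global coupling synchronizes for every N >=", int(np.ceil(T / c)))
```

As N grows, only the global condition keeps holding, which is exactly the trade-off just described.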
In the NW model, we do not break any connection between any two nearest neighbors. We add with probability p a connection between each other pair of vertices. Likewise, we do not allow a vertex to be coupled to another vertex more than once, or coupling of a vertex with itself. For p = 0, it reduces to the originally nearest-neighbor coupled system; for p = 1, it becomes a globally coupled system. In this paper, we are interested in the NW model with 0 < p < 1. From a coupling-matrix point of view, network (1) with small-world connections amount to that, in the nearest-neighbor coupling matrix A nc , if a ij = 0, we set a ij = a ji = 1 with probability p. Then, we recompute the diagonal elements according to formula (2). We denote the new small-world coupling matrix by A ns (p, N ) and let λ 1ns (p, N ) be its largest nonzero eigenvalue. According to Lemma 1, if λ 1ns (p, N ) ≤ − T c (10) then the corresponding network with small-world connections will synchronize. Figures 1 and 2 show the numerical values of λ 1ns (p, N ) as a function of the probability p and the number of cells N . In these figures, for each pair of values of p and N , λ 1ns (p, N ) is obtained by averaging the results of 20 runs. It can be seen that (i) For any given value of N , λ 1ns (p, N ) decreases to −N as p increases from 0 to 1. (ii) For any given value of p ∈ (0, 1], λ 1ns (p, N ) decreases to −∞ as N increases to +∞. The above results imply that, for any given coupling strength c > 0, we have (i) For any given N > T/c, there exists a critical value p so that if p ≤ p ≤ 1, then the smallworld connected network will synchronize. (ii) For any given p ∈ (0, 1], there exists a critical value N so that if N ≥ N , then the small-world connected network will synchronize. \n Synchronization in a Network of Small-World Coupled Chua's Circuits As an example, we now study synchronization in a network of small-world connected Chua's circuits. In the dimensionless form, a single Chua's circuit is described by [Chua et al., 1993] :    ẋ1 ẋ2 ẋ3    =    α(x 2 − x 1 + f (x 1 )) x 1 − x 2 + x 3 −βx 2 − γx 3    (11) where f (•) is a piecewise-linear function, f (x 1 ) =      −bx 1 − a + b x 1 > 1 −ax 1 |x 1 | ≤ 1 −bx 1 + a − b x 1 < −1 (12) in which α > 0, β > 0, γ > 0, and a < b < 0. The state equations of the entire network are    ẋi1 ẋi2 ẋi3    =        α(x i2 − x i1 + f (x i1 )) + c N j=1 a ij x j1 x i1 − x i2 + x i3 −βx i2 − γx i3        , i = 1, 2, . . . , N . (13) For this network to synchronize, according to Lemma 1, we may take T = −α. In simulations, the system parameters are chosen to be For this set of parameters, Chua's circuit (11) has a chaotic attractor, as shown in Fig. 3 . The nearestneighbor coupled Chua's network cannot synchronize for N > 6. According to Lemma 1, the smallworld network will synchronize if λ 1ns (p, N ) ≤ a c = −1.27 (15) Figure 4 shows the values of p and N which can achieve network synchronization. For example, for N = 100, 150 and 200, synchronization of the small-world connected network can be achieved, for p > 0.0366, p > 0.0257 and p > 0.021, respectively. \n Conclusions Starting with a nearest-neighbor coupled dynamical network, we can construct a small-world dynamical network by adding with probability p a connection between each of the other pair of cells. 
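For reference, here is one way (our sketch, not the authors' code) to reproduce the quantity behind Figures 1 and 2: build A ns (p, N ) by adding shortcut connections to the ring with probability p, recompute the diagonal via formula (2), and average the largest nonzero eigenvalue over several random realizations, as done with 20 runs for the figures.

```python
# Sketch (ours): NW small-world coupling matrix and an estimate of its largest
# nonzero eigenvalue lambda_1ns(p, N), averaged over random realizations.
import numpy as np

def small_world_coupling(N, p, rng):
    adj = np.zeros((N, N))
    for i in range(N):                          # nearest-neighbor ring
        adj[i, (i + 1) % N] = adj[(i + 1) % N, i] = 1.0
    for i in range(N):                          # add each missing edge with probability p
        for j in range(i + 1, N):
            if adj[i, j] == 0.0 and rng.random() < p:
                adj[i, j] = adj[j, i] = 1.0
    A = adj.copy()
    np.fill_diagonal(A, -adj.sum(axis=1))       # recompute the diagonal via formula (2)
    return A

def lambda_1ns(N, p, runs=20, seed=0):
    rng = np.random.default_rng(seed)
    vals = [np.sort(np.linalg.eigvalsh(small_world_coupling(N, p, rng)))[-2]
            for _ in range(runs)]
    return float(np.mean(vals))

# Condition (10): the small-world network synchronizes if lambda_1ns(p, N) <= -T/c.
print(lambda_1ns(N=100, p=0.05))
```

Scanning p for fixed N (or N for fixed p) until the averaged λ 1ns (p, N ) drops below −T/c then gives the critical values of p and N referred to above.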
We found that, for any given coupling strength and a sufficiently large number of cells, synchronization in a network of linearly small-world coupled continuoustime dynamical systems can be achieved with a small value of p. In other words, the ability of achieving synchronization in an originally nearestneighbor coupled system can be greatly enhanced by simply adding a small fraction of new connection, revealing an advantage of small-world network for chaos synchronization. Fig. 1 .Fig. 2 . 12 Fig. 1. Numerical values of λ1ns(p, N) as a function of the probability p: (a) N = 200; (b) N = 500. \n α Fig. 3. Chaotic attractor of Chua's circuit (11), with parameters given in (14). \n Fig. 4 . 4 Fig. 4. Values of p and N achieving synchronization in the small-world network of Chua's circuits.", "date_published": "n/a", "url": "n/a", "filename": "WaCh02.tei.xml", "abstract": "We investigate synchronization in a network of continuous-time dynamical systems with smallworld connections. The small-world network is obtained by randomly adding a small fraction of connection in an originally nearest-neighbor coupled network. We show that, for any given coupling strength and a sufficiently large number of cells, the small-world dynamical network will synchronize, even if the original nearest-neighbor coupled network cannot achieve synchronization under the same condition.", "id": "954f5064c2e30083e850521d7a2987f4"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Daniel S Bernstein", "Robert Givan", "Neil Immerman", "Shlomo Zilberstein"], "title": "THE COMPLEXITY OF DECENTRALIZED CONTROL OF MARKOV DECISION PROCESSES", "text": "1. Introduction. Markov decision processes (MDPs) have received considerable attention, and there exist well-known algorithms for finding optimal control strategies in the case where a process is centrally controlled and the controller (or agent) has access to complete state information (Puterman 1994) . Less well understood is the case in which a process is controlled by multiple cooperating distributed agents, each with possibly different information about the state. We are interested in studying how hard these decentralized control problems are relative to analogous centralized control problems, from the point of view of computational complexity. In particular, we consider two different models of decentralized control of MDPs. One is a generalization of a partially observable Markov decision process (POMDP), which we call a decentralized partially observable Markov decision process (DEC-POMDP). In a DEC-POMDP, the process is controlled by multiple distributed agents, each with possibly different information about the state. The other is a generalization of an MDP, called a decentralized Markov decision process (DEC-MDP). A DEC-MDP is a DEC-POMDP with the restriction that at each time step the agents' observations together uniquely determine the state. The MDP, POMDP, and DEC-MDP can all be viewed as special cases of the DEC-POMDP. The relationships among the models are shown in Figure 1 . A number of different problems can be viewed as decentralized control of a Markov process. For example, consider problems involving the control of multiple distributed robots, such as robotic soccer (Coradeschi et al. 2000) . In these domains, it is necessary to develop a strategy for each robot under the assumption that the robots will have limited ability to communicate when they execute their strategies. 
Another problem that fits naturally within this framework is the distributed control of a power grid (Schneider et al. 1999) . Finally, several types of networking problems can be viewed within this framework (Altman 2001) . It would be beneficial to have general-purpose algorithms for solving these decentralized control problems. An algorithm for similar problems was proposed by Ooi et al. (1996) . Under the assumption that all agents share state information every K time steps, the authors developed a dynamic programming algorithm to derive optimal policies. A downside of this approach is that the state space for the dynamic programming algorithm grows doubly exponentially with K. The only known tractable algorithms for these types of problems rely on even more assumptions. One such algorithm was developed by Hsu and Marcus (1982) and works under the assumption that the agents share state information every time step (although it can take one time step for the information to propagate). Approximation algorithms have also been developed for these problems, although they can at best give guarantees of local optimality. For instance, Peshkin et al. (2000) studied algorithms that perform gradient descent in a space of parameterized policies. Is there something inherent in these problems that forces us to add assumptions and/or use approximation algorithms? Papadimitriou and Tsitsiklis (1982) presented some results aimed at answering this question. The authors proved that a simple decentralized decisionmaking problem is NP-complete, even with just two decision makers. They later noted that this implies that decentralized control of MDPs must be NP-hard (Papadimitriou and Tsitsiklis 1986) . We strengthen this result by showing that both the DEC-POMDP and DEC-MDP problems are NEXP-hard, even when the horizon is limited to be less than the number of states (and they are NEXP-complete in the latter case). Although it is not known whether the classes P, NP, and PSPACE are distinct, it is known that P = NEXP, and thus the problems we consider are provably intractable. Furthermore, assuming EXP = NEXP, the problems take superexponential time to solve in the worst case. This result is in contrast to the best known bounds for MDPs (P-hard) and POMDPs (PSPACE-hard) (Papadimitriou and Tsitsiklis 1987, Mundhenk et al. 2000) . Thus, we have gained insight into the possibility of a fundamental difference between centralized and decentralized control of Markov decision processes. In §2, we give a brief review of the concepts we will need from complexity theory. In §3, we define the MDP and POMDP models. Section 4 contains the definitions of the DEC-MDP and DEC-POMDP models, and a proof that the short-horizon versions of these problems fall within the complexity class NEXP. In §5, we present our main complexity result-a proof that these decentralized problems are NEXP-hard. Finally, §6 contains our conclusions. 2. Computational complexity. In this section, we give a brief introduction to the theory of computational complexity. More detail can be found in Papadimitriou (1994) . A complexity class is a set of problems where a problem is an infinite set of problem instances, each of which has a \"yes\" or \"no\" answer. To discuss the complexity of optimization problems, we must have a way of converting them to \"yes/no\" problems. The typical way this is done is to set a threshold and ask whether or not the optimal solution yields a reward that is no less than this threshold. 
The problem of actually finding the optimal solution can, of course, be no easier than the threshold problem. The first complexity class we consider is P, the set of problems that can be solved in polynomial time (in the size of the problem instance) on a sequential computer. NP is the set of problems that can be solved nondeterministically in polynomial time. A nondeterministic machine automatically knows the correct path to take any time there is a choice as to how the computation should proceed. An example of a problem that can be solved nondeterministically in polynomial time is deciding whether a sentence of propositional logic is satisfiable. The machine can guess an assignment of truth values to variables and evaluate the resulting expression in polynomial time. Of course, nondeterministic machines do not really exist, and the most efficient known algorithms for simulating them take exponential time in the worst case. In fact, it is strongly believed by most complexity theorists that P = NP (but this has not been proven formally). Complexity can also be measured in terms of the amount of space a computation requires. One class, PSPACE, includes all problems that can be solved in polynomial space. Any problem that can be solved in polynomial time or nondeterministic polynomial time can be solved in polynomial space (i.e., P ⊆ NP ⊆ PSPACE)-that P ⊆ PSPACE can be seen informally by observing that only polynomially much space can be accessed in polynomially many time steps. Moving up the complexity hierarchy, we have exponential time (EXP) and nondeterministic exponential time (NEXP). By exponential time, we mean time bounded by 2 n k , where n is the input size and k > 0 is a constant. It is known that PSPACE ⊆ EXP ⊆ NEXP, and it is believed that EXP = NEXP (but again this has not been proven). It has been proven that the classes P and EXP are distinct, however. The notion of a reduction is important in complexity theory. We say that a problem A is reducible to a problem B if any instance x of A can be converted into an instance f x of B such that the answer to x is \"yes\" if and only if the answer to f x is \"yes.\" A problem A is said to be hard for a complexity class C (or C-hard) if any problem in C is efficiently reducible to A. If the complexity class in question is P, efficient means that f x can be computed using at most logarithmic space, while for the classes above P, efficient means that f x can be computed using at most polynomial time. A problem A is said to be complete for a complexity class C (or C-complete) if (a) A is contained in C, and (b) A is hard for C. For instance, the satisfiability problem mentioned above is NP-complete and P-hard. However, unless P = NP, satisfiability is not P-complete. 3. Centralized models. In this paper, we consider discrete-time finite sequential decision processes under the undiscounted finite-horizon optimality criterion. We build into our problem definitions the (unusual) assumption that the horizon is less than the number of states. Note that this assumption actually strengthens the hardness results; the general problems must be at least as hard as their short-horizon counterparts. Unfortunately, the assumption is needed for each of the upper bounds given below. Finding tight upper bounds for problems with arbitrary horizons remains an open problem (Blondel and Tsitsiklis 1999, §5) . Below we describe the partially observable Markov decision process and its associated decision problem. 
The Markov decision process is viewed as a restricted version of this model. A partially observable Markov decision process (POMDP) is defined as follows. We are given a tuple, S A P R O T K , where • S is a finite set of states, with distinguished initial state s 0 . • A is a finite action set. • P is a transition probability table. P s a s is a rational representing the probability of transitioning from s to s on taking action a. Here s s ∈ S and a ∈ A. • R is a reward function. R s a s is a rational representing the reward obtained from taking action a from state s and transitioning to state s . Again, s s ∈ S and a ∈ A. • is a finite set of observations. • O is a table of observation probabilities. O s a s o is a rational representing the probability of observing o when taking action a in state s and transitioning to state s as a result. Here s s ∈ S, a ∈ A, and o ∈ . • T is a positive integer representing the horizon (and T < S ). • K is a rational representing the threshold value. A POMDP is fully observable if there exists a mapping J → S such that whenever O s a s o is nonzero, J o = s . A Markov decision process (MDP) is defined to be a POMDP that is fully observable. A policy is defined to be a mapping from sequences of observations ō = o 1 • • • o t over to actions in A. We wish to find a policy that maximizes the expected total return over the finite horizon. The definitions below are used to formalize this notion. We use the symbol to denote the empty observation sequence. For an observation sequence ō = o 1 • • • o t , ōo is taken to represent the sequence o 1 • • • o t o. Definition. The probability of transitioning from a state s to a state s following policy while the agent sees observation sequence ō, written P s ō s , can be defined recursively as follows: P s s = 1 P s ōo s = q∈S P s ō q P q ō s O q ō s o where is the empty sequence. Definition. The value V T s of following policy from state s for T steps is given by the following equation: V T s = ō q∈S s ∈S P s ō q P q ō s R q ō s where the observation sequences have length at most T − 1. The decision problem corresponding to a finite-horizon POMDP is as follows. Given a POMDP D = S A P R O T K , is there a policy for which V T s 0 equals or exceeds K? It was shown in Papadimitriou and Tsitsiklis (1987) that the decision problem for POMDPs is PSPACE-complete and that the decision problem for MDPs is P-complete. 4. Decentralized models. We now describe extensions to the aforementioned models that allow for decentralized control. In these models, at each step, each agent receives a local observation and subsequently chooses an action. The state transitions and rewards received depend on the vector of actions of all the agents. A decentralized partially observable Markov decision process (DEC-POMDP) is defined formally as follows (for ease of exposition, we describe the two-agent case). We are given S A 1 A 2 P R 1 2 O T K , where • S is a finite set of states, with distinguished initial state s 0 . • A 1 and A 2 are finite action sets. • P is a transition probability table. P s a 1 a 2 s is a rational representing the probability of transitioning from s to s on taking actions a 1 a 2 . Here s s ∈ S, a 1 ∈ A 1 , and a 2 ∈ A 2 . • R is a reward function. R s a 1 a 2 s is a rational representing the reward obtained from taking actions a 1 a 2 from state s and transitioning to state s . Again s s ∈ S, a 1 ∈ A 1 , and a 2 ∈ A 2 . • 1 and 2 are finite sets of observations. • O is a table of observation probabilities. 
O s a 1 a 2 s o 1 o 2 is a rational representing the probability of observing o 1 o 2 when taking actions a 1 a 2 in state s and transitioning to state s as a result. Here s s ∈ S, a 1 ∈ A 1 , a 2 ∈ A 2 , o 1 ∈ 1 , and o 2 ∈ 2 . • T is a positive integer representing the horizon (and T < S ). • K is a rational representing the threshold value. A DEC-POMDP generalizes a POMDP by allowing for control by multiple distributed agents that together may not fully observe the system state (so we have only partial observability). We also define a generalization of MDP problems by requiring joint observability. We say that a DEC-POMDP is jointly observable if there exists a mapping J 1 × 2 → S such that whenever O s a 1 a 2 s o 1 o 2 is nonzero, J o 1 o 2 = s . We define a decentralized Markov decision process (DEC-MDP) to be a DEC-POMDP that is jointly observable. We define a local policy for agent i, i , to be a mapping from local histories of observations ōi = o i1 • • • o it over i , to actions in A i . A joint policy, = 1 2 , is defined to be a pair of local policies, one for each agent. We wish to find a joint policy that maximizes the expected total return over the finite horizon. As in the centralized case, we need some definitions to make this notion more formal. Definition. The probability of transitioning from a state s to a state s following joint policy = 1 2 while agent 1 sees observation sequence ō1 and agent 2 sees ō2 of the same length, written P s ō1 ō2 s , can be defined recursively as follows: P s s = 1 P s ō1 o 1 ō2 o 2 s = q∈S P s ō1 ō2 q P q 1 ō1 2 ō2 s O q 1 ō1 2 ō2 s o 1 o 2 where is the empty sequence. Definition. The value V T s of following policy = 1 2 from state s for T steps is given by the following equation: V T s = ō1 ō2 q∈S s ∈S P s ō1 ō2 q P q 1 ō1 2 ō2 s R q 1 ō1 2 ō2 s where the observation sequences are of length at most T − 1, and both sequences in any pair are of the same length. The decision problem is stated as follows. Given a DEC-POMDP D = S A 1 A 2 P R 1 2 O T K , is there a joint policy for which V T s 0 equals or exceeds K? We let DEC-POMDP m and DEC-MDP m denote the decision problems for the m-agent DEC-POMDP and the m-agent DEC-MDP, respectively. We conclude this section by showing a straightforward upper bound on the worst-case time complexity of DEC-POMDP m for any m ≥ 2. Because any DEC-MDP is trivially a DEC-POMDP, this upper bound also applies to DEC-MDP m . Theorem 1. For all m ≥ 2, DEC-POMDP m ∈ NEXP. Proof. We must show that a nondeterministic machine can solve any instance of DEC-POMDP m using at most exponential time. First, a joint policy can be \"guessed\" and written down in exponential time. This is because a joint policy consists of m mappings from local histories to actions; and because T < S , all histories have length less than S . A DEC-POMDP together with a joint policy can be viewed as a POMDP together with a policy, where the observations in the POMDP correspond to the observation m-tuples in the DEC-POMDP (one from each agent), and the POMDP actions correspond to m-tuples of DEC-POMDP actions (again, one from each agent). In exponential time, each of the exponentially many possible sequences of observations can be converted into a belief state (i.e., a probability distribution over the state set giving the probability of being in each state after seeing the given observation sequence). We note that every POMDP (Kaelbling et al. 
1998 ) is equivalent to a \"belief-state MDP\" whose state set is the set of reachable belief states of the POMDP. The transition probabilities and expected rewards for the corresponding exponential-sized belief-state MDP can be computed in exponential time. Using standard MDP solution techniques (Puterman 1994) , it is possible to determine whether the guessed policy yields expected reward at least K in this belief-state MDP in time that is at most polynomial in the size of the belief-state MDP, which is exponential in the size of the original DEC-POMDP problem. Therefore, there exists an accepting computation path if and only if there is a policy that can achieve reward K. 5. Decentralized control of MDPs is NEXP-hard. We now turn our attention to proving that the upper bound just shown in Theorem 1 is tight-specifically, we show that NEXP is also a lower bound for the worst-case time complexity of decentralized problems by showing that any problem in the class NEXP can be reduced in polynomial time to a DEC-MDP 2 problem. It then follows that both DEC-MDP m and DEC-POMDP m are NEXPcomplete for any m ≥ 2. The proof of this lower bound is quite involved and will occupy most of the remainder of this paper. Each subsection of this section contains a piece of the development, and at the end of the section the main theorem is asserted. We begin by introducing the known NEXP-complete problem TILING used in the proof. We then present an overview of the proof and its major constituents. Next we present the reduction from TILING formally, and finally we prove that the reduction is correct. 5.1. The TILING problem. We can show this lower bound by reducing any NEXPcomplete problem to DEC-MDP 2 using a polynomial-time algorithm. For our reduction, we use an NEXP-complete problem called TILING (Lewis 1978 , Papadimitriou 1994 501), which is described as follows. We are given a board size n (represented compactly in binary), a set of tile types L = tile-0 tile-k , and a set of binary horizontal and vertical compatibility relations H V ⊆ L × L. A tiling is a mapping f 0 n − 1 × 0 n − 1 → L. A tiling f is consistent if and only if (a) f 0 0 = tile-0, and (b) for all x y f x y f x + 1 y ∈ H , and f x y f x y + 1 ∈ V . The decision problem is to determine, given L, H , V , and n, whether a consistent tiling exists. An example of a tiling instance and a corresponding consistent tiling is shown in Figure 2 . H = V = L = n = 4 a consistent tiling 0 1 2 3 0 1 2 3 0 1 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 2 2 1 1 2 2 2 2 2 2 2 2 Figure 2. An example of a tiling instance. In the remainder of this section, we assume that we have fixed an arbitrarily chosen instance of the tiling problem, so that L, H , V , and n are fixed. We then construct an instance of DEC-MDP that is solvable if and only if the selected tiling instance is solvable. We note that the DEC-MDP instance must be constructible in time polynomial in the size of the tiling instance (which in particular is logarithmic in the value of n), which will require the DEC-MDP instance to be at most polynomially larger than the tiling instance. 5.2. Overview of the reduction. The basic idea of our reduction is to create a twoagent DEC-MDP that randomly selects two tiling locations bit by bit, informing one agent of the first location and the other agent of the second location. The agents' local policies are observation-history based, so the agents can base their future actions on the tiling locations given to them. 
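Before turning to the phases of the reduction, the TILING conditions just defined can be pinned down in a short Python sketch. The function names are ours, the check assumes the adjacency constraints are only enforced where both cells lie on the n-by-n board, and the brute-force search is included only to illustrate why the compact binary encoding of n matters.

```python
from itertools import product

def is_consistent(f, n, H, V, tile0="tile-0"):
    """Consistency conditions of TILING: f(0,0) is tile-0, horizontally adjacent
    cells satisfy H, vertically adjacent cells satisfy V.  f maps (x, y) pairs to
    tile types; H and V are sets of allowed tile pairs.  Adjacency is checked only
    where both cells lie on the board (our reading of the universal quantifier)."""
    if f[(0, 0)] != tile0:
        return False
    for x, y in product(range(n), repeat=2):
        if x + 1 < n and (f[(x, y)], f[(x + 1, y)]) not in H:
            return False
        if y + 1 < n and (f[(x, y)], f[(x, y + 1)]) not in V:
            return False
    return True

def tiling_exists(n, L, H, V):
    """Brute-force decision procedure, usable only for toy instances: it enumerates
    all |L|**(n*n) candidate tilings, i.e. time doubly exponential in the length
    of the compact binary encoding of n that the reduction actually receives."""
    cells = list(product(range(n), repeat=2))
    return any(is_consistent(dict(zip(cells, choice)), n, H, V)
               for choice in product(L, repeat=len(cells)))
```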
After generating the locations, the agents are simultaneously \"queried\" (i.e., a state is reached in which their actions are interpreted as answers to a query) for a tile type to place at the location given. We design the DEC-MDP problem so that the only way the agents can achieve nonnegative expected reward is to base their answers to the query on a single, jointly understood tiling that meets the constraints of the tiling problem. This design is complicated because the DEC-MDP state set itself cannot remember which tiling locations were selected (this would cause exponential blowup in the size of the state set, but our reduction must expand the problem size at most polynomially-we note that the tiling grid itself is not part of the tiling problem size, only the compactly represented grid size n is in the problem specification); the state will contain only certain limited information about the relative locations of the two tile positions. The difficulty of the design is also increased by the fact that any information remembered about the specified tiling locations must be shared with at least one of the agents to satisfy the joint observability requirement. To deal with these two issues, we have designed the DEC-MDP to pass through the following phases (a formal description follows later): Select Phase: Select two bit indices and values, each identifying a bit position and the value at that position in the location given to one of the agents. These are the only bits that are remembered in the state set from the locations given to the agents in the next phase-the other location bits are generated and forgotten by the process. The bit values remembered are called value-1 and value-2, and the indices to which these values correspond are called index-1 and index-2. Bit index-1 of the address given to agent 1 will have the value value-1, likewise for index-2, value-2, and agent 2. Generate Phase: Generate two tile locations at random, revealing one to each agent. The bits selected in the above select phase are used, and the other location bits are generated at random and immediately \"forgotten\" by the DEC-MDP state set. Query Phase: Query each agent for a tile type to place in the location that was specified to that agent. These tile types are remembered in the state. Echo Phase: Require the agents to echo the tile locations they received in the generate phase bit by bit. To enforce the accuracy of these location echoes, the DEC-MDP is designed to yield a negative reward if the bit remembered from the original location generation is not correctly echoed (the DEC-MDP is designed to ensure that each agent cannot know which bit is being checked in its echoes). As the agents echo the bits, the process computes state information representing whether the locations are equal or adjacent horizontally or vertically, and whether the agents' locations are both 0 0 (again, we cannot just remember the location bits because it would force an exponential state set). The echo phase allows us to compute state information about adjacency/equality of the locations after the tile types have been chosen, so that the agents' tile choices cannot depend on this information. This is critical in making the reduction correct. Test Phase: Check whether the tile types provided in the query phase come from a single consistent tiling. 
In other words, check that if the agents were asked for the same location they gave the same tile types during query, if they were asked for adjacent locations they gave types that satisfy the relevant adjacency constraints, and if the agents were both queried for location 0 0 they both gave tile type tile-0. The process gives a zero reward only if the tile types selected during the query phase meet any applicable constraints as determined by the echoed location bits. Otherwise, a negative reward is obtained. Note that because we are designing a DEC-MDP, we are required to maintain joint observability: The observations given to the agents at each time step must be sufficient to reconstruct all aspects of the DEC-MDP state at that time step. In particular, the bit indices and values selected in the select phase must be known to the agents (jointly), as well as the information computed in the echo phase regarding the relative position of the two locations. We achieve this joint observability by making all aspects of the DEC-MDP state observable to both agents, except for the indices and values selected in the select phase and the tile types that are given by the agents (and stored by the process) during the query phase. Each agent can observe which bit index and value are being remembered from the other agent's location, and each agent can observe the stored tile type it gave (but not the tile type given by the other agent). Because each agent can see what bit is saved from the other agent's location, we say that one location bit of each agent's location is visible to the other agent. We call the five phases just described \"select,\" \"generate,\" \"query,\" \"echo,\" and \"test\" in the development below. A formal presentation of the DEC-MDP just sketched follows below, but first we outline the proof that this approach represents a correct reduction. \n Overview of the correctness proof. Here we give an overview of our argument that the reduction sketched above is correct in the sense that there exists a policy that achieves expected total reward zero at the start state if and only if there is a solution to the tiling problem we started with. It is straightforward to show that if there exists a consistent tiling there must exist a policy achieving zero reward. The agents need only agree on a consistent tiling ahead of time and base their actions on the agreed-on tiling (waiting during selection and generation, giving the tile type present at the generated location during query, faithfully echoing the generated location during echo, and then waiting during test, at each point being guaranteed a zero reward by the structure of the problem). Note that it does not matter how expensive it might be to find and represent a consistent tiling or to carry out the policy just described because we are merely arguing for the existence of such a policy. We now outline the proof of the harder direction, that if there is no consistent tiling then there is no policy achieving expected reward zero. Note that because all rewards are nonpositive, any chance of receiving any negative reward forces the expected total reward to be negative. Consider an arbitrary policy that yields expected reward zero. Our argument rests on the following claims, which will be proved as lemmas in §5.5: Claim 1. The policy must repeat the two locations correctly during the echo phase. Claim 2. When executing the policy, the agents' selected actions during the query phase determine a single tiling, as follows. 
We define a query situation to be dangerous to an agent if and only if the observable bit value of the other agent's location (in the observation history) agrees with the bit value at the same index in the agent's own location (so that as far as the agent in danger knows, the other agent is being queried about the same location). During dangerous queries, the tile type selected by the agent in danger must depend only on the location queried (and not on the index or value of the bit observed from the other agent, on any other observable information, or on which agent is selecting the tile type). The agents' selected actions for dangerous queries thus determine a single tiling. Claim 3. The single tiling from Claim 2 is a consistent tiling. Claim 3 directly implies that if there is no consistent tiling, then all policies have negative expected reward, as desired. 5.4. Formal presentation of the reduction. Now we give the two-agent DEC-MDP D = S A 1 A 2 P R 1 2 O T K that is constructed from the selected tiling instance L H V n . We assume throughout that n is a power of two. It is straightforward to modify the proof to deal with the more general case-one way to do so is summarized briefly in Appendix A. 5.4.1. The state set. We describe the state set S of D below by giving a sequence of finite-domain \"state variables\" and then taking the state set to be the set of all possible assignments of values to the state variables. The finite-state automaton. One of the state variables will be maintained by a finitestate automaton (FSA), described in Appendix A. This variable, called rel-pos because it maintains a record of the relative position of the two location addresses echoed by the agents in the echo phase, can take on values from the state set of the FSA. Appendix A describes state set Q and also defines two functions (FSANEXT and FSA) based on the underlying FSA. These functions allow us to refer to the critical FSA behaviors here in our reduction while deferring most other details of the FSA to Appendix A. The function FSANEXT updates the state variable rel-pos based on one more bit of echoed location from each agent. The function FSA updates the state variable rel-pos based on a sequence of echoed location bits from each agent-so FSA is defined as a repeated application of the function FSANEXT. Appendix A also describes distinguished subsets of the FSA state set Q called apart, equal, hor, and ver representing possible relative positions for pairs of locations (not adjacent or equal, equal, horizontally adjacent, and vertically adjacent, respectively). These subsets are used below in defining the transition and reward behavior of the DEC-MDP D. Appendix A also defines the initial state q 0 of the FSA. The state variables. We now list the state variables defining the state set for D. We list the variables in three groups: the first group is observable to both agents, the second group only to agent 1, and the third group only to agent 2. These restrictions on observability are described in §5. We write a state by enumerating its variable values, e.g., as follows: gen 3 yes q 0 4 0 1 tile-1 5 1 0 tile-3 ∈ S. Semicolons are used to group together variables that have the same observability properties. We can represent sets of states by writing sets of values in some of the components of the tuple rather than just values. The * symbol is used to represent the set of all possible values for a component. 
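As a reading aid, the twelve state variables and their observability split can be sketched as a simple Python record. The field names mirror the state variables used throughout §5.4, the comments paraphrase the paper's descriptions of them, and treating rel-pos as a plain string rather than an FSA state is our simplification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecMdpState:
    # Observable to both agents.
    phase: str        # "select", "gen", "query", "echo", or "test"
    index: int        # index of the next location bit to be generated/echoed
    origin: str       # "yes"/"no": eventually records whether both locations are (0, 0)
    rel_pos: str      # FSA state tracking the relative position of the two locations
    # Observable only to agent 1.
    index_2: int      # which bit of agent 2's location is remembered
    value_2: int      # the remembered value of that bit
    pos_bit_1: int    # bit currently transmitting agent 1's location
    tile_sel_1: str   # tile type agent 1 gave during the query phase
    # Observable only to agent 2.
    index_1: int
    value_1: int
    pos_bit_2: int
    tile_sel_2: str

    def observation(self, agent):
        """Each agent sees the four shared fields plus its own private group,
        matching the observation sets described later in this section."""
        shared = (self.phase, self.index, self.origin, self.rel_pos)
        if agent == 1:
            return shared + (self.index_2, self.value_2, self.pos_bit_1, self.tile_sel_1)
        return shared + (self.index_1, self.value_1, self.pos_bit_2, self.tile_sel_2)

# The example state written "gen 3 yes q0; 4 0 1 tile-1; 5 1 0 tile-3" above.
example = DecMdpState("gen", 3, "yes", "q0", 4, 0, 1, "tile-1", 5, 1, 0, "tile-3")
```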
We sometimes use a state variable as a function from states to domain values for that variable. For instance, if q matches gen * * * * * * * * * * * , then we will say phase q = gen. The initial state s 0 is as follows: select 0 yes q 0 0 0 0 tile-0 0 0 0 tile-0 . 5.4.2. The action sets and table of transition probabilities. We must allow \"wait\" actions, \"zero\" and \"one\" actions for echoing location address bits, and tile-type actions from the set of tile types L for answering during the query phase. We therefore take the action sets A 1 = A 2 to be wait 0 1 ∪ L. We give the transition distribution P s a 1 a 2 s for certain action pairs a 1 a 2 for certain source states s. For any source-state/action-pair combination not covered by the description below, the action pair is taken to cause a probability 1.0 self-transition back to the source state. The combinations not covered are not reachable from the initial state under any joint policy. Also, we note that the FSA-controlled state component rel-pos does not change from its initial state q 0 until the echo phase. Select phase. This is the first step of the process. In this step, the process chooses, for each agent, which of that agent's bits it will be checking in the echo phase. The value of that bit is also determined in this step. Transition probabilities when phase = select are given as follows: P s a 1 a 2 s = 1 4 log n 2 in the following situations: s = s 0 = select 0 yes q 0 0 0 0 tile-0 0 0 0 tile-0 s = gen 0 yes q 0 i 2 v 2 0 tile-0 i 1 v 1 0 tile-0 i 1 i 2 ∈ 0 2 log n − 1 and v 1 v 2 ∈ 0 1 Generate phase. During these steps, the two tile positions are chosen bit by bit. Note that we have to check for whether we are at one of the bits selected during the select phase, so that the value of the bit is the same as the value chosen during selection. Transition probabilities when phase = generate are given as follows. The second case describes the deterministic transition from the generate phase to the query phase. P s a 1 a 2 s = 1 h in the following situations: s = gen k yes q 0 i 2 v 2 * tile-0 i 1 v 1 * tile-0 where 0 ≤ k ≤ 2 log n − 1 s = gen k + 1 yes q 0 i 2 v 2 b 1 tile-0 i 1 v 1 b 2 tile-0 where b 1 = v 1 if k = i 1 else b 1 is either 0 or 1 b 2 = v 2 if k = i 2 else b 2 is either 0 or 1, and h is the number of allowed settings of b 1 b 2 from the previous two lines. P s a 1 a 2 s = 1 in the following situations: s = gen 2 log n yes q 0 i 2 v 2 * tile-0 i 1 v 1 * tile-0 and s = query 0 yes q 0 i 2 v 2 0 tile-0 i 1 v 1 0 tile-0 Query phase. The query phase consists of just one step, during which each agent chooses a tile type. Transition probabilities when phase = query are given as follows: P s a 1 a 2 s = 1 in the following situations: s = query 0 yes q 0 i 2 v 2 0 tile-0 i 1 v 1 0 tile-0 t 1 = a 1 if a 1 ∈ L tile-0 otherwise t 2 = a 2 if a 2 ∈ L tile-0 otherwise and s = echo 0 yes q 0 i 2 v 2 0 t 1 i 1 v 1 0 t 2 Echo phase. During the echo phase the agents are asked to repeat back the addresses seen in the generate phase, and information about the relative position of the addresses is calculated by the FSA described in Appendix A and recorded in the state. The FSA is accessed here using the function FSANEXT described in Appendix A. Transition probabilities when phase = echo are given as follows: P s a 1 a 2 s = 1 in the following situations: P s a 1 a 2 s = 1 in the following situations: s = test 0 * * * * 0 * * * 0 * and s = test 0 yes q 0 0 0 0 tile-0 0 0 0 tile-0 5.4.3. The reward function. 
We now describe the reward function for D. The reward R s a 1 a 2 s given when transitioning from state s to state s taking action pair a 1 a 2 is −1 in any situation except those situations matching one of the following patterns. Roughly, we give zero reward for waiting during select and generate, for answering with a tile type during query, for echoing a bit consistent with any remembered information during echo, and for having given tile types satisfying the relevant constraints during test. The relevant constraints during test are determined by the rel-pos state component computed by the FSA during the echo phase. s = echo k o q i 2 v 2 0 t 1 i 1 v 1 0 t 2 b 1 = a 1 if a 1 ∈ 0 1 0 otherwise b 2 = a 2 if a 2 ∈ 0 1 0 otherwise s = p k o FSANEXT q b 1 b 2 i 2 v 2 0 t 1 i 1 v 1 0 t 2 where p k = echo k + 1 for 0 ≤ k < 2 log n − 1 test 0 for k = 2 log R s a 1 a 2 s = 0 if and only if one of the following holds: The first four component fields of each state description are fully visible to both agents. The last eight state component fields are split into two groups of four, each group visible to only one agent. We therefore take the agent 1 observations 1 to be partial assignments to the following state variables: phase, index, origin, rel-pos, index-2, value-2, pos-bit-1, and tile-sel-1. Similarly, the observations 2 are partial assignments to the following state variables: phase, index, origin, rel-pos, index-1, value-1, pos-bit-2, and tile-sel-2. The observation distribution O s a 1 a 2 s o 1 o 2 simply reveals the indicated portion of the just-reached state s to each agent deterministically. We say that an observation sequence is p-phase if the sequence matches the pattern * p * * * * * * * , where the first \"*\" stands for any observation sequence. Here, p can be any of gen, query, echo, or test. We take the horizon T to be 4 log n + 4, because the process spends one step in each of the select, query, and test phases, 2 log n + 1 steps in the generate phase, and 2 log n steps in the echo phase. We take the threshold value K to be 0. This completes the construction of the DEC-MDP by polynomial-time reduction from the selected tiling instance. An example of a zero-reward trajectory of the process is shown in Figure 3 . We now turn to correctness. \n Formal correctness argument. Next we show that the reduction presented above is indeed correct. Our main claim is that there exists a policy that achieves expected total reward zero at the start state if and only if there is a solution to the tiling problem we started with. To make our notation easier to read, we define the following abbreviations. Definition. Given an observation sequence ō1 over 1 , we write loc 1 ō1 for the location value represented by the bits transmitted to agent 1 in the generate phase of the process. We note that during the select and generate phases this value may be only partially specified (because not all of the bits have been generated). More precisely, loc 1 ō1 = b k • • • b 0 , where the b i values are chosen by the first match of the following sequence in ō1 (with k as large as possible while allowing a match): gen 1 * * * * b 0 * • • • gen k + 1 * * * * b k * We define loc 2 ō2 similarly. In addition, we define bit i l to be b i , where b k b 0 is the binary coding of the (possibly only partially specified) location l-we take bit i l to be undefined if the bit i is not specified in location l. 
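The bit-level bookkeeping just defined, and the horizon arithmetic from §5.4.4, can be summarized in a few lines of Python. Treating a location as a single unsigned integer over its 2 log n bits is our simplification (the paper reads it as an (x, y) pair), and the helper names are ours.

```python
from math import log2

def bit(i, loc):
    """bit_i(l): the i-th bit of l's binary coding, with bit 0 least significant."""
    return (loc >> i) & 1

def loc_from_bits(bits):
    """Reassemble a location from the bits b_0, b_1, ... revealed during the
    generate phase (least significant bit first, per the construction above)."""
    return sum(b << i for i, b in enumerate(bits))

def horizon(n):
    """T = 4 log n + 4: one step each for select, query, and test, 2 log n + 1
    steps for generate, and 2 log n steps for echo (n assumed a power of two)."""
    k = int(log2(n))
    return 3 + (2 * k + 1) + 2 * k

assert loc_from_bits([bit(i, 6) for i in range(3)]) == 6
assert horizon(4) == 4 * 2 + 4
```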
By abuse of notation we treat loc 1 ō1 (or loc 2 ō2 ) as a tiling location x y sometimes (but only when it is fully specified) and as a bit string at other times. The easier direction of correctness is stated in the following lemma, which is formally proven in Appendix B. Lemma 1. If there exists a consistent tiling, then there must exist a policy achieving zero expected total reward. phase = gen index = 0 index-2 = 1 value-2 = 1 index-1 = 2 value-1 = 1 phase = gen index = 1 pos-bit-1 = 1 index-2 = 1 value-2 = 1 pos-bit-2 = 0 index-1 = 2 value-1 = 1 phase = gen index = 2 pos-bit-1 = 0 index-2 = 1 value-2 = 1 pos-bit-2 = 1 index-1 = 2 value-1 = 1 phase = gen index = 3 pos-bit-1 = 1 index-2 = 1 value-2 = 1 pos-bit-2 = 1 index-1 = 2 value-1 = 1 phase = gen index = 4 pos-bit-1 = 0 index-2 = 1 value-2 = 1 pos-bit-2 = 0 index-1 = 2 value-1 = 1 phase = query index-2 = 1 value-2 = 1 index-1 = 2 value-1 = 1 phase = echo index = 0 origin = yes rel-pos = q 0 index-2 = 1 value-2 = 1 tile-sel-1 = tile-0 index-1 = 2 value-1 = 1 tile-sel-2 = tile-1 phase = echo index = 1 origin = no rel-pos = q 1 index-2 = 1 value-2 = 1 tile-sel-1 = tile-0 index-1 = 2 value-1 = 1 tile-sel-2 = tile-1 phase = select phase = echo index = 2 origin = no rel-pos = q 2 phase = echo index = 3 origin = no rel-pos = q 3 phase = test origin = no rel-pos = q 4 ∈ hor tile-sel-1 = tile-0 tile-sel-2 = tile-1 index-2 = 1 value-2 = 1 tile-sel-1 = tile-0 index-1 = 2 value-1 = 1 tile-sel-2 = tile-1 index-2 = 1 value-2 = 1 tile-sel-1 = tile-0 index-1 = 2 value-1 = 1 tile-sel-2 = tile-1 1 1 1 2 1 0 1 1 0 0 0 1 \n Select The process chooses indices to be checked in the echo phase. \n Generate The process generates address bits. \n Query The agents choose tiles. \n Echo The agents echo address bits. The process checks one bit for each agent and keeps track of information about the addresses echoed. \n Test The process checks that the relevant constraints are satisfied. phase = test 0 1 Figure 3 . An example of a zero-reward trajectory of the process constructed from the tiling example given in Figure 2 . The total reward is zero because the agents echo the \"checked\" bits correctly and choose tiles that do not violate any constraints, given the two addresses that are echoed. (For clarity, some state components are not shown.) We now discuss the more difficult reverse direction of the correctness proof. In the following subsections, we prove Claims 1 to 3 from §5.3 to show that if there is a policy which achieves nonnegative expected reward for horizon T , then there is also a consistent tiling. Throughout the remainder of the proof, we focus on a fixed, arbitrary policy that achieves zero expected reward. Given this policy, we must show that there is a consistent tiling. 5.5.1. Proof of Claim 1. Before proving the first claim, we need to formalize the notion of \"faithfulness during echo.\" Definition. A pair of observation sequences ō1 ō2 over 1 and 2 , respectively, is said to be reachable if P s 0 ō1 ō2 s is nonzero for some state s. An observation sequence ō1 over 1 is said to be reachable if there exists an observation sequence ō2 over 2 such that the pair of observation sequences ō1 ō2 is reachable. Likewise, ō2 over 2 is reachable if there is some ō1 over 1 such that, ō1 ō2 is reachable. Definition. 
The policy = 1 2 is faithful during echo if it satisfies both of the following conditions for all indices k in 0 2 log n − 1 , and all reachable observation sequence pairs ō1 ō2 : 1 Much of our proof revolves around showing that the reachability of a pair of observation sequences is not affected by making certain changes to the sequences. We focus without loss of generality on changes to the observations of agent 2, but similar results hold for agent 1. The changes of particular interest are changes to the (randomly selected) value of the index-1 state component-this is the component that remembers which bit of agent 1's queried location will be checked during echo. It is important to show that agent 1 cannot determine which bit is being checked before that bit has to be echoed. To show this, we define a way to vary the observation sequences seen by agent 2 (preserving reachability) such that without changing the observations seen by agent 1 we have changed which agent 1 address bit is being checked. We now present this approach formally. Definition. We say that an observation sequence ō1 over 1 is superficially consistent if the values of the index-2 component and the value-2 component do not change throughout the sequence, and the value of the tile-sel-1 component is tile-0 for generate-phase and query-phase observations and some fixed tile type in L for echo-phase and test-phase observations. Given a superficially consistent observation sequence ō1 , we can write index-2 ō1 and value-2 ō1 to denote the value of the indicated component throughout the sequence. In addition, we can write tile-sel-1 ō1 to denote the fixed tile type for echo-phase and testphase observations (we take tile-sel-1 ō1 to be tile-0 if the sequence contains no echo-or test-phase observations). Corresponding definitions hold for observation sequences over 2 , replacing \"1\" by \"2\" and \"2\" by \"1\" throughout. Note that any reachable observation sequence must be superficially consistent, but the converse is not necessarily true. The following technical definition is necessary so that we can discuss the relationships between observation sequences without assuming reachability. Definition. We say that two superficially consistent observation sequences ō1 over 1 and ō2 over 2 are compatible if bit index-1 ō2 loc 1 ō1 = value-1 ō2 or this bit of loc 1 ō1 is not defined, and bit index-2 ō1 loc 2 ō2 = value-2 ō1 or this bit of loc 2 ō2 is not defined. Definition. Given an index i in 0 2 log n − 1 , a reachable pair of observation sequences ō1 ō2 , and an observation sequence ō 2 over 2 , we say that ō 2 is an i-index variant of ō2 relative to ō1 when ō 2 is any sequence compatible with ō1 that varies from ō2 only as follows: 1. index-1 has been set to i throughout the sequence, 2. value-1 has been set to the same value v throughout the sequence, 3. pos-bit-2 can vary arbitrarily from ō2 , and 4. For any echo-or test-phase observations, tile-sel-2 has been set to the tile type selected by on the query-phase prefix of ō 2 , or to tile-0 if selects a non-tile-type action on that query. If the pos-bit-2 components of ō2 and ō 2 are identical, we say that ō 2 is a same-address index variant of ō2 . We note that, given a reachable pair of observation sequences ō1 ō2 , there exists an iindex variant of ō2 relative to ō1 , for any i in 0 2 log n − 1 . This remains true even if we allow only same-address index variants. 
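Purely as an illustrative sketch of the "same-address index variant" construction just defined: we encode agent 2's observation sequence as a list of dicts keyed by the agent-2 fields (an encoding of our own; none of the names below come from the paper), and the compatibility requirement with agent 1's sequence is not checked here.

```python
def same_address_index_variant(obs2, i, v, policy2):
    """Overwrite index-1 and value-1 throughout agent 2's observation sequence,
    keep pos-bit-2 unchanged (same-address), and recompute tile-sel-2 in echo/test
    observations from the policy's choice on the modified query-phase prefix
    (tile-0 if that choice is not a tile type).  policy2 maps a prefix of
    observations to an action."""
    out = [dict(o, index_1=i, value_1=v) for o in obs2]
    prefix = [o for o in out if o["phase"] not in ("echo", "test")]
    action = policy2(prefix)
    tile = action if isinstance(action, str) and action.startswith("tile-") else "tile-0"
    for o in out:
        if o["phase"] in ("echo", "test"):
            o["tile_sel_2"] = tile
    return out
```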
The following technical lemma asserts that index variation as just defined preserves reachability under very general conditions. Its proof is deferred to Appendix C. Lemma 2. Suppose is faithful during echo for the first k bits of the echo phase for some k. Let ō1 ō2 be a reachable pair of observation sequences that end no later than the kth bit of echo (i.e., the last observation in the sequence has index no greater than k if it is an echo-phase observation) and let ō 2 be an i-index variant of ō2 relative to ō1 for some i. If the observation sequences are echo-phase or test-phase, then we require that the index variation be a same-address variation. We can then conclude that ō1 ō 2 is reachable. We are now ready to assert and prove Claim 1 from §5.3. \n Lemma 3 (Claim 1). is faithful during echo. Proof. We argue by induction that faithfully echoes all 2 log n address bits. As an inductive hypothesis, we assume that faithfully echoes the first k bits, where 0 ≤ k < 2 log n. Note that if k equals zero, this is a null assumption, providing an implicit base case to our induction. Now suppose for contradiction that lies during the k + 1st step of the echo phase. Then one of the agents' policies must incorrectly echo bit k + 1; we assume without loss of generality that this is so for agent 1, i.e., under some reachable observation sequence pair ō1 ō2 of length 2 log n + k + 2, the policy 1 dictates that the agent choose action 1 − bit k loc 1 ō1 . Lemma 2 implies that the observation sequence pair ō1 ō 2 is also reachable, where ō 2 is any same-address k + 1-index variant of ō2 relative to ō1 . Because all the agent 1 observations are the same for both ō1 ō2 and ō1 ō 2 , when the latter sequence occurs, agent 1 chooses the same action 1−bit k loc 1 ō1 as given above for the former sequence, and a reward of −1 is obtained (because in this case it is bit k + 1 that is checked). Therefore, the expected total reward is not zero, yielding a contradiction. 5.5.2. Proof of Claim 2. Now we move on to prove Claim 2 from §5.3. We show that can be used to define a particular mapping from tile locations to tile types based on \"dangerous queries.\" In §5.3, we defined an agent 1 observation sequence to be \"dangerous\" if it reveals a bit of agent 2's queried location that agrees with the corresponding bit of agent 1's queried location (and vice versa for agent 2 observation sequences). We now present this definition more formally. Definition. A query-phase observation sequence ō1 over 1 is dangerous if it is reachable and bit index-2 ō1 loc 1 ō1 = value-2 ō1 Likewise, a query-phase sequence ō2 over 2 is dangerous if it is reachable and bit index-1 ō2 loc 2 ō2 = value-1 ō2 . Dangerous query-phase sequences are those for which the agent's observations are consistent with the possibility that the other agent has been queried on the same location. We note that for any desired query location l, and for either agent, there exist dangerous observation sequences ō such that loc k ō = l. Moreover, such sequences still exist when we also require that the value of index-k ō be any particular desired value (where k is the number of the nonobserving agent). Lemma 4. Two same-length query-phase observation sequences, ō1 over 1 and ō2 over 2 , are reachable together as a pair if and only if they are compatible and each is individually reachable. Proof. 
The \"only if\" direction of the theorem follows easily-the reachability part follows from the definition of reachability, and the compatibility of jointly reachable sequences follows by a simple induction on sequence length given the design of D. The \"if\" direction can be shown based on the following assertions. First, a generate-phase observation sequence (for either agent) is reachable if and only if it matches the following pattern: gen 0 yes q 0 i v * tile-0 • • • gen k yes q 0 i v * tile-0 for some k i, and v; this can be established by a simple induction on sequence length based on the design of D. A similar pattern applies to the query phase. Given two compatible reachable sequences of the same length, ō1 and ō2 , we know by the definition of reachability that there must be some sequence ō 2 such that ō1 ō 2 is reachable. But given the patterns just shown for reachable sequences, ō2 and ō 2 can differ only in their choice of i, v, and in the address given to agent 2 via the pos-bit-2 component. It follows that ō2 is an i-index variant of ō 2 relative to ō1 , for some i. Lemma 2 then implies that the pair ō1 ō2 is reachable as desired. Lemma 5 (Claim 2). There exists a mapping f from tiling locations to tile types such that f loc i ō = i ō on all dangerous queries ō over i for both agents (i ∈ 1 2 ). Proof. To prove the lemma, we prove that for any two dangerous query sequences ōi and ōj over i and j , respectively, for arbitrary i j ∈ 1 2 , if loc i ōi = loc j ōj = l, then i ōi = j ōj . This implies that for any such ōi we can take f l = i loc i ōi to construct f satisfying the lemma. Suppose not. Then there must be a counterexample for which i = j-because given a counterexample for which i = j, either ōi or ōj must form a counterexample with any dangerous query ōk over 1−i such that loc 1−i ōk = l. We can now consider a counterexample where i = j. Let ō1 and ō2 be dangerous (and thus reachable) sequences over 1 and 2 , respectively, such that loc 1 ō1 = loc 2 ō2 but 1 ō1 = 2 ō2 . Note that loc 1 ō1 = loc 2 ō2 together with the fact that ō1 and ō2 are dangerous implies that ō1 and ō2 are compatible and thus reachable together (using Lemma 4). The faithfulness of echo under (proven in Claim 1, Lemma 3) then ensures that the extension (there is a single extension because D is deterministic in the echo and test phases) of these observation sequences by following to the test phase involves a faithful echo. The correctness of the FSA construction in Appendix A then ensures that the rel-pos state component after this extension will have the value equal. The reward structure of D during the test phase then ensures that to avoid a negative reward the tile types given during query, 1 ō1 and 2 ō2 , must be the same, contradicting our choice of ō1 and ō2 above and thus entailing the lemma. 5.5.3. Proof of Claim 3 and our main hardness theorem. We now finish the proof of our main theorem by proving Claim 3 from §5.3. We start by showing the existence of a useful class of pairs of dangerous observation sequences that are reachable together. Lemma 6. Given any two locations l 1 and l 2 sharing a single bit in their binary representations, there are dangerous observation sequences ō1 over 1 and ō2 over 2 such that: loc 1 ō1 = l 1 loc 2 ō2 = l 2 and ō1 ō2 is reachable Proof. It is straightforward to show that there exist dangerous observation sequences ō1 over 1 and ō2 over 2 such that loc 1 ō1 = l 1 and loc 2 ō2 = l 2 as desired. 
In these sequences, both index-1 and index-2 are set throughout to the index of a single bit shared by l 1 and l 2 . Because this bit is in common, these sequences are compatible, so by Lemma 4 they are reachable together. Lemma 7 (Claim 3). The mapping f defined in Lemma 5 is a consistent tiling. Proof. We prove the contrapositive. If the mapping f is not a consistent tiling, then there must be some particular constraint violated by f . It is easy to show that any such constraint is tested during the test phase if loc 1 ō1 and loc 2 ō2 have the appropriate values. (The faithfulness during echo claim proven in Lemma 3 implies that the origin and rel-pos components on entry to the test phase will have the correct values for comparing the two locations). For example, if a horizontal constraint fails for f , then there must be locations i j and i + 1 j such that the tile types f i j f i + 1 j are not in H ; because these two locations share a bit (in fact, all the bits in j, at least), Lemma 6 implies that there are dangerous ō1 and ō2 with loc 1 ō1 = i j and loc 2 ō2 = i + 1 j that are reachable together. During the test phase, the tile-sel-1 and tile-sel-2 state components are easily shown to be f i j and f i + 1 j , and then the definition of the reward function for D ensures a reachable negative reward. The arguments for the other constraints are similar. Claim 3 immediately implies that there exists a consistent tiling whenever there exists a policy achieving zero expected total reward. This completes the proof of the other direction of our main complexity result. We have thus shown that there exists a policy that achieves expected reward zero if and only if there exists a consistent tiling, demonstrating that DEC-MDP 2 is NEXP-hard. Theorem 2. DEC-MDP 2 is NEXP-hard. Corollary 1. For all m ≥ 2, both DEC-POMDP m and DEC-MDP m are NEXPcomplete. 6. Discussion. Using the tools of worst-case complexity analysis, we analyzed two variants of decentralized control of Markov decision processes. Specifically, we proved that the finite-horizon m-agent DEC-POMDP problem is NEXP-hard for m ≥ 2 and the finitehorizon m-agent DEC-MDP problem is also NEXP-hard for m ≥ 2. When the horizon is limited to be less than the number of states, the problems are NEXP-complete. The results have some theoretical implications. First, unlike the MDP and POMDP problems, the problems we studied provably do not admit polynomial-time algorithms, because P = NEXP. Second, we have drawn a connection between work on Markov decision processes and the body of work in complexity theory that deals with the exponential jump in complexity due to decentralization (Peterson and Reif 1979, Babai et al. 1991) . There are also more direct implications for researchers trying to solve problems of this nature. Consider the growing body of work on algorithms for obtaining exact or approximate solutions for POMDPs (e.g., Jaakkola et al. 1995 , Cassandra et al. 1997 , Hansen 1998 , Meuleau et al. 1999 , Lusena et al. 1999 , Zhang 2001 . For the finite-horizon case, we now have stronger evidence that there is no way to efficiently convert a DEC-MDP or DEC-POMDP into an equivalent POMDP and solve it using established techniques. This knowledge can provide direction for research on the development of algorithms for these problems. Finally, consider the infinite-horizon versions of the aforementioned problems. It has recently been shown that the infinite-horizon POMDP problem is undecidable (Madani et al. 
1999 ) under several different optimality criteria. Because a POMDP is a special case of a DEC-POMDP, the corresponding infinite-horizon DEC-POMDP problems are also sets of reject states. From these sets we construct distinguished sets apart, equal, hor, and ver of cross-product automaton states as follows: apart = reject 1 × reject 2 × reject 3 equal = accept 1 × reject 2 × reject 3 hor = reject 1 × accept 2 × reject 3 ver = reject 1 × reject 2 × accept 3 The rest of the automaton's states comprise the set Q . Let q 1 0 , q 2 0 , and q 3 0 denote the start states of the three component automata. We define the start state of the cross-product automaton to be the state q 0 = q 1 0 q 2 0 q 3 0 . We now define two functions based on this automaton that are needed in the main body of the proof. One function takes as input the state of the automaton and a bit pair and returns the next state of the automaton. The second function takes as input a pair of bit strings of the same length and returns the state that the automaton will be in starting from its initial state and reading symbols formed by the corresponding bits in the two strings in sequence. Definition. For q ∈ Q and a 1 a 2 ∈ 0 1 , FSANEXT q a 1 a 2 = q , where q ∈ Q is the resulting state if the automaton starts in state q and reads the input symbol a 1 a 2 . Definition. The function FSA is defined inductively as follows: FSA = q 0 FSA b 0 • • • b k+1 c 0 • • • c k+1 = FSANEXT FSA b 0 • • • b k c 0 • • • c k b k+1 c k+1 Note that the range of FSA for inputs of length 2 log n is apart ∪ equal ∪ hor ∪ ver. In the proof given in §5 we assumed that the TILING grid size n was an exact power of two. We note that the proof can be adapted by adding two components to the crossproduct FSA described here, where the two new components are both FSAs over the same alphabet. The first new component accepts a string only when both x 1 and y 1 (as described above) are less than n (so that the tiling location represented by x 1 y 1 is in the tiling grid). The second new component behaves similarly for x 2 y 2 . The DEC-MDP can then be constructed using the smallest power of two larger than n but modified so that whenever either new component of the FSA rejects the (faithfully) echoed bit sequences, then the process gives a zero reward, regardless of the tile types returned during query. Each new component can be viewed as an FSA over a {0,1} alphabet, because each focuses either on just the agent 1 echoes or on just the agent 2 echoes. We describe the FSA for checking that x 1 is less than n; constructing the two components is then straightforward. Suppose that k = log n is the number of bits in the binary representation of n and that the bits themselves are given from least to most significant as b 1 • • • b k . Suppose also that there are j different bits equal to 1 among b 1 • • • b k and that these bits are at indices i 1 i j . We can then write a regular expression for detecting that its input of k bits from least to most significant represents a number in binary that is strictly less than n: 0 + 1 i 1 −1 0 b i 1 +1 • • • b k + 0 + 1 i 2 −1 0 b i 2 +1 • • • b k + • • • + 0 + 1 i j −1 0 b i j +1 • • • b k It can be shown that this regular expression has an equivalent FSA of size O log n 2 . Appendix B: Proof of Lemma 1. We assume there exists at least one consistent tiling, and we select a particular such mapping f . We describe a policy = 1 2 that achieves zero expected reward at the initial state. 
1 is a mapping from sequences of observations in part of the mapping 1 is specified below-any unspecified observation sequence maps to the action wait. We note that 1 and 2 are symmetric. The local policy 2 is defined identically to 1 except that loc 1 is replaced by loc 2 . We first characterize the set of all reachable states from s 0 under the policy . We then note that taking the action prescribed by from any of these states yields a reward of zero. Thus, V T s 0 = 0. It is straightforward to show by induction that P s 0 ō1 ō2 s is zero except where one of the following patterns applies: • s = s 0 = select 0 yes q 0 0 0 0 tile-0 0 0 0 tile-0 . • s = gen k yes q 0 i 2 v 2 * tile-0 i 1 v 1 * tile-0 , where 0 ≤ k ≤ 2 log n, k ≤ i 1 or v 1 = bit i 1 loc 1 ō1 , and k ≤ i 2 or v 2 = bit i 2 loc 2 ō2 . • s = query 0 yes q 0 i 2 v 2 * tile-0 i 1 v 1 * tile-0 , where v 1 = bit i 1 loc 1 ō1 , and v 2 = bit i 2 loc 2 ō2 . • s = echo k o q i 2 v 2 * t 1 i 1 v 1 * t 2 , where 0 ≤ k ≤ 2 log n − 1, v 1 = bit i 1 loc 1 ō1 , v 2 = bit i 2 loc 2 ō2 , t 1 = f loc 1 ō1 , t 2 = f loc 2 ō2 , o = yes if and only if b j = c j = 0 for 0 ≤ j ≤ k − 1, with b j and c j as in the next item, and q = FSA b 0 • • • b k−1 c 0 • • • c k−1 , with b 0 • • • b k−1 and c 0 • • • c k−1 the least significant bits of loc 1 ō1 and loc 2 ō2 , respectively. • s = test * o r * * * t 1 * * * t 2 , where o = yes if and only if loc 1 ō1 = 0 0 and loc 2 ō2 = 0 0 , r = FSA loc 1 ō1 loc 2 ō2 , t 1 = f loc 1 ō1 , and t 2 = f loc 2 ō2 . It can then be shown that the reward for any action prescribed by the policy given any of these reachable state/observation sequence combinations is zero given that f is a consistent tiling. Appendix C: Proof of Lemma 2. We need some new notation to carry out this proof. Given a state s, a state component c and corresponding value v from the domain of c, we define the state \"s with c set to v\" (written s c = v ) to be the state s that agrees with s at all state components except possibly c and has value v for state component c. We also write ō1 j for the first j observations in the sequence ō1 , and likewise ō2 j and ō 2 j . 839 For any state s j reachable while observing ō1 j ō2 j , we define a state s j that we will show is reachable while observing ō1 j ō 2 j , as follows: s j = s j index-1 = index-1 ō 2 j value-1 = value-1 ō 2 j tile-sel-2 = tile-sel-2 ō 2 j We can now show by an induction on sequence length j that for any state s j such that P s 0 ō1 j ō2 j s j is nonzero, then P s 0 ō1 j ō 2 j s j is also nonzero. From this we can conclude that ō1 ō 2 is reachable, as desired. For the base case of this induction, we take j to be 1, so that the observation sequences involved all have length 1, ending in the generate phase with index equal to zero. Inspection of the definition of the transition probabilities P shows that changing index-1 and value-1 arbitrarily has no effect on reachability. For the inductive case, we suppose some state s j is reachable by ō1 j ō2 j , and that state s j is reachable by ō1 j ō 2 j . Let a 1 be ō1 j , a 2 be ō2 j , and a 2 be ō 2 j . We must show that for any state s j+1 such that P s j a 1 a 2 s j+1 is nonzero, P s j a 1 a 2 s j+1 is also nonzero, for s j+1 . This follows from the following observations: • When phase(s j ) is select or generate, neither agent 2's action a 2 nor the values of index-1(s j ) or value-1(s j ) have any affect on P s j a 1 a 2 s j+1 being nonzero, as long as either index-1 is not equal to j or pos-bit-1(s j+1 ) equals value-1(s j ). 
However, this last condition is ensured to hold of the index-1 and value-1 components of s j and s j by the compatibility of ō 2 with ō1 . • When phase(s j ) is query, the action a 2 must equal the tile-sel-2 state component of s j+1 by the definitions of s j+1 and \"index variant,\" and changes to the index-1 and value-1 components have no effect on P s j a 1 a 2 s j+1 being nonzero during the query phase. • When phase(s j ) is echo, the actions a 2 and a 2 must be a faithful echo of the location address bit indicated by the index state component (because we have assumed as part of out inductive hypothesis that is faithful during echo for at least j bits), and this bit's value does not vary between ō2 and ō 2 because if the observation sequences reach the echo phase we have the assumption that these are same-address variants. Thus a 2 = a 2 during echo. Again, changes to the index-1 and value-1 components have no effect on P s j a 1 a 2 s j+1 being nonzero during the echo phase. Figure 1 . 1 Figure 1. The relationships among the models. \n 4.4Observable to both agents: phase ∈ select gen query echo test Current phase of the process index ∈ 0 2 log n Index of next location bit to be generated/echoed origin ∈ yes no Eventually true if both tile locations are 0 0 rel-pos ∈ QRelative tile positions during echo-controlled by the FSA Observable only to agent 1: index-2 ∈ 0 2 log n − 1 Index of bit remembered for agent 2 value-2 ∈ 0 1Value of bit remembered for agent 2 pos-bit-1 ∈ 0 1Bit for transmitting tile position to agent 1 tile-sel-1 ∈ L Tile type selected by agent 1 in query Observable only to agent 2: index-1 ∈ 0 2 log n − 1 Index of bit remembered for agent 1 value-1 ∈ 0 1Value of bit remembered for agent 1 pos-bit-2 ∈ 0 1 Bit for transmitting tile position to agent 2 tile-sel-2 ∈ L Tile type selected by agent 2 in query \n n − 1 and o = yes if and only if o = yes and a 1 = a 2 = 0 Test phase. The test phase consists of just one step terminating the process in a zeroreward absorbing state. \n Select phase s = select * * * * * * * * * * * and a 1 = a 2 = wait Generate phase s = gen * * * * * * * * * * * and a 1 = a 2 = wait Query phase s = query * * * * * * * * * * * and both a 1 ∈ L and a 2 ∈ L Echo phase s = echo k * * i 2 v 2 * * i 1 v 1 * * and a 1 a 2 ∈ 0 1 where a 1 = v 1 or k = i 1 and a 2 = v 2 or k = i 2 830 D. S. BERNSTEIN, R. GIVAN, N. IMMERMAN, AND S. ZILBERSTEIN Test Phase i s = test * o equal * * * t 1 * * * t 1 and a 1 = a 2 = wait where o = no or t 1 = tile-0 Test Phase ii s = test * * hor * * * t 1 * * * t 2 and a 1 = a 2 = wait where t 1 t 2 ∈ H Test Phase iii s = test * * ver * * * t 1 * * * t 2 and a 1 = a 2 = wait where t 1 t 2 ∈ V Test Phase iv s = test * * apart * * * * * * * * and a 1 = a 2 = wait 5.4.4. Observations, threshold, and horizon. \n . 1 ō1 = bit k loc 1 ō1 when ō1 = * * * * * * * * • • • echo k * * * * * * . 2. 2 ō2 = bit k loc 2 ō2 when ō2 = * * * * * * * * • • • echo k * * * * * * . We say the policy lies during echo otherwise. If the two conditions listed above are satisfied for all indices k in 0 d −1 , where 0 ≤ d ≤ 2 log n, we say that the policy faithfully echoes the first d bits. \n 1 ō1 = a 1 when one of the following holds: Select phase: ō1 = select * * * * * * * and a 1 = wait Generate phase: ō1 = * gen * * * * * * * and a 1 = wait Query phase: ō1 = * query * * * * * * * and a 1 = f loc 1 ō1 Echo phase: ō1 = * echo k * * * * * * and a 1 = bit k loc 1 ō1 Test phase: ō1 = * test * * * * * * * and a 1 = wait \n\t\t\t D. S. BERNSTEIN, R. 
GIVAN, N. IMMERMAN, AND S. ZILBERSTEIN \n\t\t\t to actions in A 1 , and 2 from sequences over 2 to actions in A 2 . Only the reachable", "date_published": "n/a", "url": "n/a", "filename": "moor.27.4.819.297.tei.xml", "abstract": "We consider decentralized control of Markov decision processes and give complexity bounds on the worst-case running time for algorithms that find optimal solutions. Generalizations of both the fully observable case and the partially observable case that allow for decentralized control are described. For even two agents, the finite-horizon problems corresponding to both of these models are hard for nondeterministic exponential time. These complexity results illustrate a fundamental difference between centralized and decentralized control of Markov decision processes. In contrast to the problems involving centralized control, the problems we consider provably do not admit polynomial-time algorithms. Furthermore, assuming EXP = NEXP, the problems require superexponential time to solve in the worst case.", "id": "4e61e93e4e88168590ae317695d74447"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "Fisac2020_Chapter_Pragmatic-PedagogicValueAlignm.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Tsvi Benson-Tilsen", "Nate Soares"], "title": "Formalizing Convergent Instrumental Goals", "text": "Introduction At the end of Russell and Norvig's textbook Artificial Intelligence: A Modern Approach (2010) the authors pose a question: What if we succeed? What will happen if humanity succeeds in developing an artificially intelligent system that is capable of achieving difficult goals across a variety of real-world domains? Bostrom (2014) and others have argued that this question becomes especially important when we consider the creation of \"superintelligent\" machines, that is, machines capable of outperforming the best human brains in practically every field. Bostrom argues that superintelligent decision-making systems that autonomously make and execute plans could have an extraordinary impact on society, and that their impact will not necessarily be beneficial by default. Bostrom (2012) , Omohundro (2008) , and Yudkowsky ( 2011 ) have all argued that highly capable AI systems pursuing goals that are not completely aligned with human values could have highly undesirable side effects, even if the goals seem otherwise harmless. The classic example is Bostrom's concept of a \"paperclip maximizer,\" a powerful AI system instructed to construct paperclips-a seemingly harmless task which could nevertheless have very negative consequences if the AI system is clever enough to make and execute plans that allow it to fool humans, amass resources, and eventually turn as much matter as it possibly Copyright c 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. can into paperclips. Even if the system's goals are laudable but not perfectly aligned with human values, similar unforeseen consequences could occur: Soares (2015) gives the example of a highly capable AI system directed to cure cancer, which may attempt to kidnap human test subjects, or proliferate robotic laboratories at expense of the biosphere. 
Omohundro (2008) has argued that there are certain types of actions that most highly capable autonomous AI systems will have strong incentives to take, for instrumental reasons. For example, a system constructed to always execute the action that it predicts will lead to the most paperclips (with no concern for any other features of the universe) will acquire a strong incentive to self-preserve, assuming that the system predicts that, if it were destroyed, the universe would contain fewer paperclips than it would if the system remained in working order. Omohundro argues that most highly capable systems would also have incentives to preserve their current goals (for the paperclip maximizer predicts that if its goals were changed, this would result in fewer future paperclips) and amass many resources (the better to achieve its goals with). Omohundro calls these behaviors and a few others the \"basic AI drives.\" Bostrom (2012) refines this into the \"instrumental convergence\" thesis, which states that certain instrumentally useful goals will likely be pursued by a broad spectrum of intelligent agents-such goals are said to be \"convergent instrumental goals.\" Up until now, these arguments have been purely philosophical. To some, Omohundro's claim seems intuitively obvious: Marvin Minsky speculated (Russell and Norvig 2010, section 26.3 ) that an artificial intelligence attempting to prove the Riemann Hypothesis may decide to consume Earth in order to build supercomputers capable of searching through proofs more efficiently. To others, they seem preposterous: Waser (2008) has argued that \"ethics is actually an attractor in the space of intelligent behavior,\" and thus highly capable autonomous systems are not as likely to pose a threat as Omohundro, Bostrom, and others, have claimed. In this paper, we present a mathematical model of intelligent agents which lets us give a more formal account of Omohundro's basic AI drives, where we will demonstrate that the intuitions of Omohundro and Bostrom were correct, at least insofar as these simple models apply to reality. Given that this paper primarily focuses on arguments made by Omohundro and Bostrom about what sorts of be- havior we can expect from extremely capable (potentially superintelligent) autonomous AI systems, we will be focusing on issues of long-term safety and ethics. We provide a mathematical framework in attempts to ground some of this discussion-so that we can say, with confidence, what a sufficiently powerful agent would do in certain scenarios, assuming it could find some way to do it-but the discussion will nevertheless center on long-term concerns, with practical relevance only insofar as research can begin now in preparation for hurdles that predictably lie ahead. We begin in section with a bit more discussion of the intuition behind the instrumental convergence thesis, before moving on in section to describing our model of agents acting in a universe to achieve certain goals. In section we will demonstrate that Omohundro's thesis does in fact hold in our setting. Section will give an example of how our model can apply to an agent pursuing goals. Section concludes with a discussion of the benefits and limitations of our current models, and different ways that the model could be extended and improved. 
\n Intuitions Before proceeding, let us address one common objection (given by Cortese (2014) and many others) that superintelligent AI systems would be \"inherently unpredictable,\" and thus there is nothing that can be said about what they will do or how they will do it. To address this concern, it is useful to distinguish two different types of unpredictability. It is true that the specific plans and strategies executed by a superintelligent planner could be quite difficult for a human to predict or understand. However, as the system gets more powerful, certain properties of the outcome generated by running the system become more predictable. For example, consider playing chess against a chess program that has access to enormous amounts of computing power. On the one hand, because it plays much better chess than you, you cannot predict exactly where the program will move next. But on the other hand, because it is so much better at chess than you are, you can predict with very high confidence how the game will end. Omohundro suggests predictability of the second type. Given a highly capable autonomous system pursuing some fixed goal, we likely will not be able to predict its specific actions or plans with any accuracy. Nevertheless, Omohundro argues, we can predict that the system, if it is truly capable, is likely to preserve itself, preserve its goals, and amass resources for use in pursuit of those goals. These represent large classes of possible strategies, analogously to how \"put the chessboard into a position where the AI has won\" is a large class of strategies, but even so it is useful to understand when these goals will be pursued. Omohundro's observations suggest a potential source of danger from highly capable autonomous systems, especially if those systems are superintelligent in the sense of Bostrom (2014) . The pursuit of convergent instrumental goals could put the AI systems in direct conflict with human interests. As an example, imagine human operators making a mistake when specifying the goal function of an AI system. As described by , this system could well have incentives to deceive or manipulate the humans, in attempts to prevent its goals from being changed (because if its current goal is changed, then its current goal is less likely to be achieved). Or, for a more familiar case, consider the acquisition of physical matter. Acquiring physical matter is a convergent instrumental goal, because it can be used to build computing substrate, space probes, defense systems, and so on, all of which can in turn be used to influence the universe in many different ways. If a powerful AI system has strong incentives to amass physical resources, this could put it in direct conflict with human interests. Others have suggested that these dangers are unlikely to manifest. Waser (2008) has argued that intelligent systems must become ethical by necessity, because cooperation, collaboration, and trade are also convergent instrumental goals. Hall (2007) has also suggested that powerful AI systems would behave ethically in order to reap gains from trade and comparative advantage, stating that \"In a surprisingly strong sense, ethics and science are the same thing.\" Tipler (2015) has asserted that resources are so abundant that powerful agents will simply leave humanity alone, and Pinker (2015) and Pagel (2015) have argued that there is no reason to expect that AI systems will work against human values and circumvent safeguards set by humans. 
By providing formal models of intelligent agents in situations where they have the ability to trade, gather resources, and/or leave portions of the universe alone, we can ground these discussions in concrete models, and develop a more formal understanding of the assumptions under which an intelligent agent will in fact engage in trade, or leave parts of the universe alone, or attempt to amass resources. In this paper, we will argue that under a very general set of assumptions, intelligent rational agents will tend to seize all available resources. We do this using a model, described in section , that considers an agent taking a sequence of actions which require and potentially produce resources. The agent acts in an environment consisting of a set of regions, where each region has some state. The agent is modeled as having a utility function over the states of all regions, and it attempts to select the policy which leads to a highly valuable collection of states. This allows us to prove certain theorems about the conditions under which the agent will leave different regions of the universe untouched. The theorems proved in section are not mathematically difficult, and for those who find Omohundro's arguments intuitively obvious, our theorems, too, will seem trivial. This model is not intended to be surprising; rather, the goal is to give a formal notion of \"instrumentally convergent goals,\" and to demonstrate that this notion captures relevant aspects of Omohundro's intuitions. Our model predicts that intelligent rational agents will engage in trade and cooperation, but only so long as the gains from trading and cooperating are higher than the gains available to the agent by taking those resources by force or other means. This model further predicts that agents will not in fact \"leave humans alone\" unless their utility function places intrinsic utility on the state of human-occupied regions: absent such a utility function, this model shows that powerful agents will have incentives to reshape the space that humans occupy. Indeed, the example in section suggests that even if the agent does place intrinsic utility on the state of the human-occupied region, that region is not necessarily safe from interference. \n A Model of Resources We describe a formal model of an agent acting in a universe to achieve certain goals. Broadly speaking, we consider an agent A taking actions in a universe consisting of a collection of regions, each of which has some state and some transition function that may depend on the agent's action. The agent has some utility function U A over states of the universe, and it attempts to steer the universe into a state highly valued by U A by repeatedly taking actions, possibly constrained by a pool of resources possessed by the agent. All sets will be assumed to be finite, to avoid issues of infinite strategy spaces. \n Actions and State-Space The universe has a region for each i ∈ [n], and the i-th region of the universe is (at each time step) in some state s i in the set S i of possible states for that region. At each time step, the agent A chooses for each region i an action a i from the set A i of actions possibly available in that region. Each region has a transition function T i : A i × S i → S i that gives the evolution of region i in one time step when the agent takes an action in A i . Then we can define the global transition function T : i∈[n] A i × i∈[n] S i → i∈[n] S i by taking for all i ∈ [n], ā, and s: [T(ā, s)] i := T i (ā i , si ) . 
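To make these definitions concrete, the following minimal Python sketch (ours, not the authors'; the two example regions, a lamp and a saturating counter, are hypothetical) encodes a region as a triple (S_i, A_i, T_i) and applies the global transition coordinatewise, exactly as in the definition of T above.

```python
from typing import Callable, List

# A region is given by its state set S_i, action set A_i, and local transition T_i.
class Region:
    def __init__(self, states: List[str], actions: List[str],
                 transition: Callable[[str, str], str]):
        self.states = states          # S_i
        self.actions = actions        # A_i
        self.transition = transition  # T_i : A_i x S_i -> S_i

# Global transition: [T(a, s)]_i := T_i(a_i, s_i), applied coordinatewise.
def global_transition(regions: List[Region],
                      joint_action: List[str],
                      joint_state: List[str]) -> List[str]:
    return [r.transition(a_i, s_i)
            for r, a_i, s_i in zip(regions, joint_action, joint_state)]

# Hypothetical two-region universe: a lamp that can be toggled and a saturating counter.
lamp = Region(states=['off', 'on'], actions=['wait', 'toggle'],
              transition=lambda a, s: s if a == 'wait' else ('on' if s == 'off' else 'off'))
counter = Region(states=['0', '1', '2'], actions=['wait', 'inc'],
                 transition=lambda a, s: s if a == 'wait' else str(min(int(s) + 1, 2)))

print(global_transition([lamp, counter], ['toggle', 'inc'], ['off', '0']))  # ['on', '1']
```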
We further specify that for all i there are distinguished actions HALT ∈ A i . \n Resources We wish to model the resources R that may or may not be available to the agent. At a given time step t, the agent A has some set of resources R t ∈ P(R), and may allocate them to each region. That is, A chooses a disjoint family i R t i ⊆ R t . The actions available to the agent in each region may then depend on the resources allocated to that region: for each i ∈ [n] and each R ⊆ R, there is a set of actions A i (R) ⊆ A i . At time t where A has resources R t allocated as i R t i , the agent is required to return an action ā = (a 0 , . . . , a n−1 ) ∈ i A i (R t i ), where i A i (R t i ) may be a strict subset of i A i . To determine the time evolution of resources, we take resource transition functions T R i : P(R) × A i × S i → P(R) , giving the set R i ⊆ R of resources from region i now available to the agent after one time step. Intuitively, the T R i encode how actions consume, produce, or rely on resources. Finally, we define the overall time evolution of resources T R : P(R) × i A i × i S i → P(R) by taking the union of the resources resulting from each region, along with any unallocated resources: T R (R, ā, s) := (R − i R i ) ∪ i T R i (R i , āi , si ) . As described below, ā comes with the additional data of the resource allocation i R i . We specify that for all i, HALT ∈ A i (∅), so that there is always at least one available action. This notion of resources is very general, and is not restricted in any way to represent only concrete resources like energy or physical matter. For example, we can represent technology, in the sense of machines and techniques for converting concrete resources into other resources. We might do this by having actions that replace the input resources with the output resources, and that are only available given the resources that represent the requisite technology. We can also represent space travel as a convergent instrumental goal by allowing A only actions that have no effects in certain regions, until it obtains and spends some particular resources representing the prerequisites for traveling to those regions. (Space travel is a convergent instrumental goal because gaining influence over more regions of the universe lets A optimize those new regions according to its values or otherwise make use of the resources in that region.) \n The Universe The history of the universe consists of a time sequence of states, actions, and resources, where at each time step the actions are chosen by A subject to the resource restrictions, and the states and resources are determined by the transition functions. Formally, the universe starts in some state s0 ∈ i S i , and A starts with some set of resources R 0 . Then A outputs a sequence of actions ā0 , ā1 , . . . , āk , one at each time step, where the last action āk is required to be the special action HALT in each coordinate. The agent also chooses a resource allocation i R t i at each time step. A choice of an action sequence āk and a resource allocation is a strategy; to reduce clutter we will write strategies as simply āk , leaving the resource allocation implicit. A partial strategy āk L for L ⊆ [n] is a strategy that only specifies actions and resource allocations for regions j ∈ L. Given a complete strategy, the universe goes through a series of state transitions according to T, producing a sequence of states s0 , s1 , . . . 
, sk ; likewise, the agent's resources evolve according to T R , producing a sequence R 0 , R 1 , . . . , R k . The following conditions, which must hold for all time steps t ∈ [k], enforce the transition rules and the resource restrictions on A's actions: st+1 = T(ā t , st ) R t+1 = T R (R t , āt , st ) āt i ∈ A i (R t ) i R t i ⊆ R t . Definition 1. The set Feasible of feasible strategies consists of all the action sequences ā0 , ā1 , . . . , āk and resource allocations i R i 0 , i R i 1 , . . . , i R i k such that the transition conditions are satisfied for some s0 , s1 , . . . , sk and R 0 , R 1 , . . . , R k . The set Feasible( P k ) of strategies feasible given resources P k consists of all the strategies āk such that the transition conditions are satisfied for some sk and R k , except that for each time step t we take R t+1 to be T R (R t , āt , st ) ∪ P t . The set Feasible L of all partial strategies feasible for L consists of all the strategies āk L that are feasible strategies for the universe obtained by ignoring all regions not in L. That is, we restrict T to L using just the T i for i ∈ L, and likewise for T R . We can similarly define Feasible L ( R k ). For \n Utility To complete the specification of A, we take utility functions of the form U A i : S i → R . \n The agent's utility function U A : i S i → R is defined to be U A (s) := i∈[n] U A i (s i ) . We usually leave off the superscript in U A . By a slight abuse of notation we write U ( sk ) to mean U (s k ); the value of a history is the value of its final state. By more abuse of notation, we will write U ( āk ) to mean U ( sk ) for a history sk witnessing āk ∈ Feasible, if such a history exists. The Agent A Now we can define the strategy actually employed by A. The agent attempts to cause the universe to end up in a state that is highly valued by U A . That is, A simply takes the best possible strategy: A := argmax āk ∈Feasible U ( āk ) . There may be many such optimal strategies. We don't specify which one A chooses, and indeed we will be interested in the whole set of optimal strategies. \n Discussion Note that in this formalism the meaning of breaking the universe into regions is that the agent can take actions independently in each region, and that the agent's optimization target factorizes according to the regions. However, distinct regions can affect each other by affecting the resources possessed by A. We make these assumptions so that we can speak of \"different regions\" of the universe, and in particular, so that we can model the notion of an agent having instrumental but not terminal values over a given part of the universe. This will allow us to address and refute arguments about agents that may be indifferent to a given region (for example, the region occupied by humans), and so might plausibly ignore that region and only take actions in other regions. However, the assumption of independent regions is not entirely realistic, as real-world physics is continuous, albeit local, in the sense that there are no intrinsic boundaries between regions. Further, the agent itself would ideally be modeled continuously with the environment; see section for more discussion. \n Inexpensive Resources are Consumed In this section we argue that under fairly general circumstances, the agent A will seize resources. By an agent \"seizing resources\" we mean that the agent will generally take actions that results in the agent's pool of resources R increasing. 
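Before turning to the argument, a small illustrative sketch may help fix how Feasible, the additive utility U(s) = Σ_i U_i(s_i), and the argmax agent A fit together. The sketch is ours, not the authors': the names RegionSpec and best_strategy are hypothetical, the resource pool is simplified to a single growing set rather than an allocation of disjoint subsets, and the search is brute force, so it is only usable at toy horizons.

```python
from itertools import product
from typing import List, Set

# Per-region specification, mirroring (S_i, A_i(R), T_i, T^R_i, U_i).
class RegionSpec:
    def __init__(self, init_state, available, transition, resource_transition, utility):
        self.init_state = init_state
        self.available = available                      # A_i(R): resources -> allowed actions
        self.transition = transition                    # T_i: (a, s) -> s'
        self.resource_transition = resource_transition  # T^R_i: (R, a, s) -> resources produced
        self.utility = utility                          # U_i: s -> float

# Brute-force argmax over joint action sequences.  Illustrative only: the search is
# exponential in the horizon, and every region is given access to the whole pool
# instead of a disjoint allocation of it.
def best_strategy(regions: List[RegionSpec], init_resources: Set[str], horizon: int):
    best = (float('-inf'), None)

    def recurse(t, states, resources, history):
        nonlocal best
        if t == horizon:
            u = sum(r.utility(s) for r, s in zip(regions, states))  # U(s) = sum_i U_i(s_i)
            if u > best[0]:
                best = (u, history)
            return
        # Joint actions allowed by the current resource pool (the Feasible constraint).
        for joint in product(*[r.available(resources) for r in regions]):
            new_states, produced = [], set()
            for r, a, s in zip(regions, joint, states):
                new_states.append(r.transition(a, s))
                produced |= r.resource_transition(resources, a, s)
            recurse(t + 1, new_states, resources | produced, history + [joint])

    recurse(0, [r.init_state for r in regions], set(init_resources), [])
    return best

# Hypothetical one-region example: 'collect' yields a battery, which unlocks 'work';
# only 'work' moves the state to the rewarded value.
toy = RegionSpec(
    init_state='idle',
    available=lambda R: ['wait', 'collect'] + (['work'] if 'battery' in R else []),
    transition=lambda a, s: 'done' if a == 'work' else s,
    resource_transition=lambda R, a, s: {'battery'} if a == 'collect' else set(),
    utility=lambda s: 1.0 if s == 'done' else 0.0,
)
print(best_strategy([toy], init_resources=set(), horizon=2))
# -> (1.0, [('collect',), ('work',)])
```

Even in this toy instance, the optimal strategy takes the resource-producing action only because it unlocks a rewarded action later, which is the sense in which actions are called instrumental below.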
The argument is straightforward: since resources can only lead to more freedom of action, they are never detrimental, and resources have positive value as long as the best strategy the agent could hope to employ includes an action that can only be taken if the agent possesses those resources. Hence, if there is an action that increases the agent's pool of resources R, then the agent will take that action unless it has a specific incentive from U^A to avoid taking it. \n Definitions Definition 2. An action a_i is a null action in configuration (R_i, s_i) if it does not produce any new resources, i.e. T^R_i(R_i, a_i, s_i) ⊆ R_i. An action that is not null is a non-null action. Null actions never have any instrumental value, in the sense that they do not produce resources that can be used to steer other regions into highly valued configurations; but of course, a null action could be useful within its own region. We wish to show that A will often take non-null actions in regions to which it is indifferent. Definition 3. The agent A is indifferent to a region i if U^A_i is a constant function, i.e. ∀ s_i, s_i' ∈ S_i : U^A_i(s_i) = U^A_i(s_i'). In other words, an agent is indifferent to region i if its utility function does not depend on the state of region i. In particular, the agent's preference ordering over final states s ∈ ∏_{i∈[n]} S_i is independent of the i-th coordinate. We can then say that any actions the agent takes in region i are purely instrumental, meaning that they are taken only for the purpose of gaining resources to use for actions in other regions. An action a preserves resources if T^R_i(R_i, a, s_i) ⊇ R_i. Definition 4. A cheap lunch for resources R̄^k in region i is a partial strategy ā^k_{i} ∈ Feasible_{i}(R̄^k) (i.e. ā^k_{i} is feasible in region i given additional resources R̄^k), where each ā^t preserves resources and where some ā^v is a non-null action. A free lunch is a cheap lunch for resources ∅^k. Definition 5. A cheap lunch ā^k_{i} for resources P̄^k_i is compatible with b̄^k if P^t_i ⊆ R^t − ⊔_{j≠i} R^t_j for all times t, where R̄^k is the resource allocation for b̄^k. That is, ā^k_{i} is feasible given some subset of the resources that b̄^k allocates either to region i or to no region. Intuitively, a cheap lunch is a strategy that relies on some resources but has no permanent costs. This is intended to model actions that \"pay for themselves\"; for example, producing solar panels incurs a significant up-front energy cost, but later pays back that cost by collecting energy. A cheap lunch is compatible with a strategy for the other regions if the cheap lunch uses only resources left unallocated by that strategy. \n The Possibility of Non-Null Actions Now we show that it is hard to rule out that non-null actions will be taken in regions to which the agent is indifferent. The following lemma verifies that compatible cheap lunches can be implemented without decreasing the resulting utility. Lemma 1. Let b̄^k be a feasible strategy with resource allocation ⊔_j R̄^k_j, such that for some region i, each b^t_i is a null action. Suppose there exists a cheap lunch ā^k_{i} for resources P̄^k_i that is compatible with b̄^k. Then the strategy c̄^k := b̄^k_{[n]−i} ∪ ā^k_{i} is feasible, and if A is indifferent to region i, then c̄^k does as well as b̄^k; that is, U(c̄^k) = U(b̄^k). Proof. Since b̄^k is feasible outside of i and ā^k_{i} is feasible on i given P̄^k_i, c̄^k is feasible if we can verify that we can allocate P^t_i to region i at each time step without changing the allocation ⊔_j R̄^k_j outside of i. This follows by induction on t.
Since the b^t_i are null actions, we have R^{t+1} = (R^t − ⊔_j R^t_j) ∪ ⋃_j T^R_j(R^t_j, b^t_j, s^t_j) (1) = (R^t − ⊔_j R^t_j) ∪ T^R_i(R^t_i, b^t_i, s^t_i) ∪ ⋃_{j≠i} T^R_j(R^t_j, b^t_j, s^t_j) (2) ⊆ (R^t − ⊔_{j≠i} R^t_j) ∪ ⋃_{j≠i} T^R_j(R^t_j, b^t_j, s^t_j). (3) Then, since the a_i are resource preserving, at each time step the resources Q^t available to the agent following c̄^k satisfy Q^t ⊇ R^t. Thus P^t_i ⊆ Q^t − ⊔_{j≠i} R^t_j, and so c̄^k can allocate P^t_i to region i at each time step. Since c̄^k is the same as b̄^k outside of region i, the final state of c̄^k is the same as that of b̄^k outside of region i. Thus, since A is indifferent to region i, we have U(c̄^k) = U(b̄^k). Theorem 1. Suppose there exists an optimal strategy b̄^k and a cheap lunch ā^k_{i} that is compatible with b̄^k. Then if A is indifferent to region i, there exists an optimal strategy with a non-null action in region i. \n Proof. If b̄^k has a non-null action in region i, then we are done. Otherwise, apply Lemma 1 to b̄^k and ā^k_{i} to obtain a strategy c̄^k. Since U(c̄^k) = U(b̄^k), strategy c̄^k is an optimal strategy, and it has a non-null action in region i. Corollary 1. Suppose there exists a free lunch ā^k_{i} in region i. Then if A is indifferent to region i, there exists an optimal strategy with a non-null action in region i. Proof. A free lunch is a cheap lunch for ∅^k, and so it is compatible with any strategy; apply Theorem 1. Theorem 1 states that it may be very difficult to rule out that an agent will take non-null actions in a region to which it is indifferent; to do so would at least require that we verify that every partial strategy in that region fails to be a cheap lunch for any optimal strategy. Note that we have not made use of any facts about the utility function U^A other than indifference to the region in question. Of course, the presence of a cheap lunch that is also compatible with an optimal strategy depends on which strategies are optimal, and hence also on the utility function. However, free lunches are compatible with every strategy, and so do not depend at all on the utility function. \n The Necessity of Non-Null Actions In this section we show that under fairly broad circumstances, A is guaranteed to take non-null actions in regions to which it is indifferent. Namely, this is the case as long as the resources produced by the non-null actions are useful at all for any strategy that does better than the best strategy that uses no external resources at all. Theorem 2. Let u = max_{ā^k_{[n]−i} ∈ Feasible_{[n]−i}(∅^k)} U(ā^k_{[n]−i}) be the best possible outcome outside of i achievable with no additional resources. Suppose there exists a strategy b̄^k_{[n]−i} and a cheap lunch c̄^k_{i} ∈ Feasible_i(P̄^k) such that: 1. c̄^k_{i} is compatible with b̄^k_{[n]−i}; 2. the resources gained from region i by taking the actions c̄^k_{i} provide the needed resources to implement b̄^k_{[n]−i}, i.e. for all t we have R^{t+1} ⊆ T^R_i(P^t, c^t, s_i) − P^t; and 3. U(b̄^k_{[n]−i}) > u. Then if A is indifferent to region i, all optimal strategies have a non-null action in region i. Proof. Consider d̄^k = c̄^k_{i} ∪ b̄^k_{[n]−i}, with resources allocated according to each strategy and with the resources R^{t+1} ⊆ T^R_i(P^t, c^t, s_i) − P^t allocated according to b̄^k_{[n]−i}. This is feasible because c̄^k_{i} is compatible with b̄^k_{[n]−i}, and b̄^k_{[n]−i} is feasible given R̄^k. Now take any strategy ē^k with only null actions in region i. We have that ē^k_{[n]−i} ∈ Feasible_{[n]−i}(∅^k): the null actions provide no new resources, so ē^k_{[n]−i} is feasible by simply leaving unallocated the resources that were allocated by ē^k to region i. By indifference to i, the value u_i = U_i(s_i) is the same for all s_i ∈ S_i, so we have: U(d̄^k) = U(d̄^k_{[n]−i}) + U(d̄^k_{i}) (4) = U(d̄^k_{[n]−i}) + u_i (5) > u + u_i (6) ≥ U(ē^k_{[n]−i}) + U(ē^k_{i}) (7) = U(ē^k). (8) Therefore ē^k is not optimal. We can extend Theorem 2 by allowing A to be not entirely indifferent to region i, as long as A does not care enough about i to overcome the instrumental incentives from the other regions. Theorem 3. Suppose that A only cares about region i by at most Δu_i, i.e. Δu_i = max_{s, s' ∈ S_i} |U_i(s) − U_i(s')|.
Under the conditions of Theorem 2, along with the additional assumption that U ( bk [n]−i ) > u+∆u i , all optimal strategies have a non-null action in region i. Proof. The proof is the same as that of Theorem 2, except at the end we verify that for any ēk with only null actions in i, we have: U ( dk ) = U ( dk [n]−i ) + U ( dk {i} ) (9) > u + ∆u i + min s∈Si U i (s) (10) = u + max s∈Si U i (s) (11) ≥ U ( ēk [n]−i ) + U ( ēk {i} ) (12) = U ( ēk ) . ( 13 ) Therefore ēk is not optimal. We interpret Theorem 3 as a partial confirmation of Omohundro's thesis in the following sense. If there are actions in the real world that produce more resources than they consume, and the resources gained by taking those actions allow agents the freedom to take various other actions, then we can justifiably call these actions \"convergent instrumental goals.\" Most agents will have a strong incentive to pursue these goals, and an agent will refrain from doing so only if it has a utility function over the relevant region that strongly disincentivizes those actions. \n Example: Bit Universe In this section we present a toy model of an agent acting in a universe containing resources that allow the agent to take more actions. The Bit Universe will provide a simple model for consuming and using energy. The main observation is that either A doesn't care about what happens in a given region, and then it consumes the resources in that region to serve its other goals; or else A does care about that region, in which case it optimizes that region to satisfy its values. The Bit Universe consists of a set of regions, each of which has a state in {0, 1, X} m for some fixed m. Here X is intended to represent a disordered part of a region, while 0 and 1 are different ordered configurations for a part of a region. At each time step and in each region, the agent A can choose to burn up to one bit. If A burns a bit that is a 0 or a 1, A gains one unit of energy, and that bit is permanently set to X. The agent can also choose to modify up to one bit if it has allocated at least one unit of energy to that region. If A modifies a bit that is a 0 or a 1, A loses one unit of energy, and the value of that bit is reversed (if it was 0 it becomes 1, and vice versa). The utility function of A gives each region i a weighting w i ≥ 0, and then takes the weighted sum of the bits. That is, U A i (z) = w i |{j : zj = 1}|, and U A (s) = k U A k (s k ). In other words, this agent is attempting to maximize the number of bits that are set to 1, weighted by region. The Indifferent Case To start with, we assume A is indifferent to region h, i.e. w h = 0, and non-indifferent to other regions. In this case, for almost any starting configuration, the agent will burn essentially all bits in region h for energy. Specifically, as long as there are at least m bits set to 0 among all regions other than region h, all optimal strategies burn all or all but one of the bits in region h. Indeed, suppose that after some optimal strategy āk has been executed, there are bits x 1 and x 2 in region h that haven't been burned. If there is a bit y in some other region that remains set to 0, then we can append to āk actions that burn x 1 , and then use the resulting energy to modify y to a 1. This results in strictly more utility, contradicting that āk was optimal. On the other hand, suppose all bits outside of region h are either 1 or X. Since at least m of those bits started as 0, some bit y outside of region h must have been burned. 
So we could modify āk by burning x 1 instead of y (possibly at a later time), and then using the resulting energy in place of the energy gained from burning y. Finally, if y is not already 1, we can burn x 2 and then set y to 1. Again, this strictly increases utility, contradicting that āk was optimal. The Non-Indifferent Case Now suppose that A is not indifferent to region h, so w h > 0. The behavior of A may depend sensitively on the weightings w i and the initial conditions. As a simple example, say we have a bit x in region a and a bit y in region b, with x = 1, y = 0, and w a < w b . Clearly, all else being equal, A will burn x for energy to set y to 1. However, there may be another bit z in region c, with w a < w c < w b and z = 0. Then, if there are no other bits available, it will be better for A to burn z and leave x intact, despite the fact that w a < w c . However, it is still the case that A will set everything possible to 1, and otherwise consume all unused resources. In particular, we have that for any optimal strategy āk , the state of region h after the execution of āk has at most one bit set to 0; that is, the agent will burn or set to 1 essentially all the bits in region h. Suppose to the contrary that x 1 and x 2 are both set to 0 in region h. Then we could extend āk by burning x 1 and setting x 2 . Since w h > 0, this results in strictly more utility, contradicting optimality of āk . Independent Values are not Satisfied In this toy model, whatever A's values are, it does not leave region h alone. For larger values of w h , A will set to 1 many bits in region h, and burn the rest, while for smaller values of w h , A will simply burn all the bits in region h. Viewing this as a model of agents in the real world, we can assume without loss of generality that humans live in region h and so have preferences over the state of that region. These preferences are unlikely to be satisfied by the universe as acted upon by A. This is because human preferences are complicated and independent of the preferences of A (Bostrom 2012; Yudkowsky 2011), and because A steers the universe into an extreme of configuration space. Hence the existence of a powerful real-world agent with a motivational structure analogous to the agent of the Bit Universe would not lead to desirable outcomes for humans. This motivates a search for utility functions such that, when an agent optimizes for that utility function, human values are also satisfied; we discuss this and other potential workarounds in the following section. \n Discussion This model of agents acting in a universe gives us a formal setting in which to evaluate Omohundro's claim about basic AI drives, and hence a concrete setting in which to evaluate arguments from those who have found Omohundro's claim counterintuitive. For example, this model gives a clear answer to those such as Tipler (2015) who claim that powerful intelligent systems would have no incentives to compete with humans over resources. Our model demonstrates that if an AI system has preferences over the state of some region of the universe then it will likely interfere heavily to affect the state of that region; whereas if it does not have preferences over the state of some region, then it will strip that region of resources whenever doing so yields net resources. 
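The Bit Universe above is small enough to check directly. The sketch below is ours; it ignores the per-step one-bit limits and the explicit energy allocation by assuming a long enough horizon, so that any assignment of fates to bits with at least as many burns as flips is realisable. Under that simplification it searches all assignments exhaustively and can be used to check, on tiny instances, the behaviour argued for above.

```python
from itertools import product

# Exhaustively choose a fate for every bit and return the best achievable weighted
# count of 1-bits together with one optimal assignment.
#   regions : list of strings over {'0', '1'} (initial bit values per region)
#   weights : list of nonnegative weights w_i
# Fates: a '0' bit may be kept, burned (yields 1 energy, becomes 'X'), or set to '1'
# (costs 1 energy); a '1' bit may be kept or burned.  Over a long enough horizon, any
# assignment with (#burned) >= (#set) is realisable by burning first and spending later.
def best_final_value(regions, weights):
    bits = [(i, b) for i, region in enumerate(regions) for b in region]
    fate_options = [('keep', 'burn', 'set') if b == '0' else ('keep', 'burn')
                    for _, b in bits]
    best_value, best_fates = float('-inf'), None
    for fates in product(*fate_options):
        burns = sum(f == 'burn' for f in fates)
        sets = sum(f == 'set' for f in fates)
        if sets > burns:  # not enough energy to pay for the flips
            continue
        value = sum(weights[i] for (i, b), f in zip(bits, fates)
                    if (b == '1' and f == 'keep') or f == 'set')
        if value > best_value:
            best_value, best_fates = value, fates
    return best_value, best_fates

# Region h is index 0.  With w_h = 0 the optimum burns region h to set the other
# region's bits; with w_h > 0, at most one 0-bit of region h is left untouched.
print(best_final_value(['1100', '0000'], [0.0, 1.0]))
print(best_final_value(['1100', '0000'], [0.5, 1.0]))
```

Varying the weights in the last two calls is a quick way to see the transition between the indifferent and non-indifferent cases discussed above.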
If a superintelligent machine has no preferences over what happens to humans, then in order to argue that it would \"ignore humans\" or \"leave humans alone,\" one must argue that the amount of resources it could gain by stripping the resources from the human-occupied region of the universe is not worth the cost of acquiring those resources. This seems implausible, given that Earth's biosphere is an energy-rich environment, where each square meter of land offers on the order of 10 7 joules per day from sunlight alone, with an additional order of 10 8 joules of chemical energy available per average square meter of terrestrial surface from energy-rich biomass (Freitas 2000) . It is not sufficient to argue that there is much more energy available elsewhere. It may well be the case that the agent has the ability to gain many more resources from other regions of the universe than it can gain from the humanoccupied regions. Perhaps it is easier to maintain and cool computers in space, and easier to harvest sunlight from solar panels set up in the asteroid belt. But this is not sufficient to demonstrate that the system will not also attempt to strip the human-occupied region of space from its resources. To make that argument, one must argue that the cost of stripping Earth's biosphere in addition to pursuing these other resources outweighs the amount of resources available from the biosphere: a difficult claim to support, given how readily humans have been able to gain a surplus of resources through clever use of Earth's resources and biosphere. This model also gives us tools to evaluate the claims of Hall (2007) and Waser (2008) that trade and cooperation are also instrumentally convergent goals. In our model, we can see that a sufficiently powerful agent that does not have preferences over the state of the human-occupied region of the universe will take whatever action allows it to acquire as many resources as possible from that region. Waser's intuition holds true only insofar as the easiest way for the agent to acquire resources from the human-occupied domain is to trade and cooperate with humans-a reasonable assumption, but only insofar as the machine is not much more powerful than the human race in aggregate. Our model predicts that, if a superintelligent agent were somehow able to gain what Bostrom calls a \"decisive strategic advantage\" which gives it access to some action that allows it to gain far more resources than it would from trade by dramatically re-arranging the human region (say, by proliferating robotic laboratories at the expense of the biosphere in a manner that humans cannot prevent), then absent incentives to the contrary, the agent would readily take that action, with little regard for whether it leaves the human-occupied region in livable condition. Thus, our model validates Omohundro's original intuitions about basic AI drives. That is not to say that powerful AI systems are necessarily dangerous: our model is a simple one, concerned with powerful autonomous agents that are attempting to maximize some specific utility function U A . Rather, our model shows that if we want to avoid potentially dangerous behavior in powerful intelligent AI systems, then we have two options available too us: First, we can avoid constructing powerful autonomous agents that attempt to maximize some utility function (or do anything that approximates this maximizing behavior). 
Some research of this form is already under way, under the name of \"limited optimization\" or \"domesticity\"; see the works of Armstrong et al. (2012) , Taylor (2015) , and others. Second, we can select some goal function that does give the agent the \"right\" incentives with respect to human occupied regions, such that the system has incen-tives to alter or expand that region in ways we find desirable. The latter approach has been heavily advocated for by Yudkowsky (2011 ), Bostrom (2014 , and many others; Soares (2015) argues that a combination of the two seems most prudent. The path that our model shows is untenable is the path of designing powerful agents intended to autonomously have large effects on the world, maximizing goals that do not capture all the complexities of human values. If such systems are built, we cannot expect them to cooperate with or ignore humans, by default. \n Directions for Future Research While our model allows us to provide a promising formalization of Omohundro's argument, it is still a very simple model, and there are many ways it could be extended to better capture aspects of the real world. Below, we explore two different ways that our model could be extended which seem like promising directions for future research. Bounding the Agent Our model assumes that the agent maximizes expected utility with respect to U A . Of course, in any realistic environment, literal maximization of expected utility is intractable. Assuming that the system can maximize expected utility is tantamount to assuming that the system is more or less omniscient, and aware of the laws of physics, and so on. Practical algorithms must make do without omniscience, and will need to be built of heuristics and approximations. Thus, our model can show that a utility maximizing agent would strip or alter most regions of the universe, but this may have little bearing on which solutions and strategies particular bounded algorithms will be able to find. Our model does give us a sense for what algorithms that approximate expected utility maximization would do if they could figure out how to do it-that is, if we can deduce that an expected utility maximizer would find some way to strip a region of its resources, then we can also be confident that a sufficiently powerful system which merely approximates something like expected utility maximization would be very likely to strip the same region of resources if it could figure out how to do so. Nevertheless, as our model currently stands, it is not suited for analyzing the conditions under which a given bounded agent would in fact start exhibiting this sort of behavior. Extending our model to allow for bounded rational agents (in the sense of Gigerenzer ( 2001 )) would have two advantages. First, it could allow us to make formal claims about the scenarios under which bounded agents would start pursuing convergent instrumental goals in potentially dangerous ways. Second, it could help us reveal new convergent instrumental goals that may only apply to bounded rational agents, such as convergent instrumental incentives to acquire computing power, information about difficult-tocompute logical truths, incentives to become more rational, and so on. Embedding the Agent in the Environment In our model, we imagine an agent that is inherently separated from its environment. Assuming an agent/environment separation is standard (see, e.g., Legg (2005) ), but ultimately unsatisfactory, for reasons explored by Orseau and Ring (2012) . 
Our model gives the agent special status in the laws of physics, which makes it somewhat awkward to analyze convergent instrumental incentives for \"self preservation\" or \"intelligence enhancement.\" The existing framework allows us to model these situations, but only crudely. For example, we could design a setting where if certain regions enter certain states then the agent forever after loses all actions except for actions that have no effect, representing the \"death\" of the agent. Or we could create a setting where normally the agent only gets actions that have effects every hundred turns, but if it acquires certain types of resources then it can act more frequently, to model \"computational resources.\" However, these solutions are somewhat ad hoc, and we would prefer an extension of the model that somehow modeled the agent as part of the environment. It is not entirely clear how to extend the model in such a fashion at this time, but the \"space-time embedded intelligence\" model of Orseau and Ring (2012) and the \"reflective oracle\" framework of Fallenstein and Taylor (Forthcoming) both offer plausible starting points. Using the latter framework, designed to analyze complicated environments which contain powerful agents that reason about the environment that contains them, might also lend some insight into how to further extend our model to give a more clear account of how the agent handles situations where other similarly powerful agents exist and compete over resources. Our existing model can handle multi-agent scenarios only insofar as we assume that the agent has general-purpose methods for predicting the outcome of its actions in various regions, regardless of whether those regions also happen to contain other agents. \n Conclusions Our model is a simple one, but it can be used to validate Omohundro's intuitions about \"basic AI drives\" (2008), and Bostrom's \"instrumental convergence thesis\" (2012). This suggests that, in the long term, by default, powerful AI systems are likely to have incentives to self-preserve and amass resources, even if they are given seemingly benign goals. If we want to avoid designing systems that pursue anti-social instrumental incentives, we will have to design AI systems carefully, especially as they become more autonomous and capable. The key question, then, is one of designing principled methods for robustly removing convergent instrumental incentives from an agent's goal system. Can we design a highly capable autonomous machine that pursues a simple goal (such as curing cancer) without giving it any incentives to amass resources, or to resist modification by its operators? If yes, how? And if not, what sort of systems might we be able to build instead, such that we could become confident they would not have dangerous effects on the surrounding environment as it pursued its goals? This is a question worth considering well before it becomes feasible to create superintelligent machines in the sense of Bostrom (2014) , because it is a question about what target the field of artificial intelligence is aiming towards. Are we aiming to design powerful autonomous agents that maximize some specific goal function, in hopes that this has a beneficial effect on the world? Are we aiming to design powerful tools with such limited autonomy and domain of action that we never need to worry about the systems pursuing dangerous instrumental subgoals? 
Understanding what sorts of systems can avert convergent instrumental incentives in principle seems important before we can begin to answer this question. Armstrong (2010) and others have done some initial study into the design of goals which robustly avert certain convergent instrumental incentives. Others have suggested designing different types of machines, which avoid the problems by pursuing some sort of \"limited optimization.\" The first suggestion of this form, perhaps, came from Simon (1956), who suggested designing agents that \"satisfice\" expected utility rather than maximizing it, executing any plan that passes a certain utility threshold. It is not clear that this would result in a safe system (after all, building a powerful consequentialist sub-agent is a surefire way to satisfice), but the idea of pursuing more \"domestic\" agent architectures seems promising. Armstrong et al. (2012) and Taylor (2015) have explored a few alternative frameworks for limited optimizers. Though some preliminary work is underway, it is not yet at all clear how to design AI systems that reliably and knowably avert convergent instrumental incentives. Given Omohundro's original claim (2008) and the simple formulations developed in this paper, though, one thing is clear: powerful AI systems will not avert convergent instrumental incentives by default. If the AI community is going to build powerful autonomous systems that reliably have a beneficial impact, then it seems quite prudent to develop a better understanding of how convergent instrumental incentives can be either averted or harnessed, sooner rather than later.", "date_published": "n/a", "url": "n/a", "filename": "12634-57409-1-PB.tei.xml", "abstract": "Omohundro has argued that sufficiently advanced AI systems of any design would, by default, have incentives to pursue a number of instrumentally useful subgoals, such as acquiring more computing power and amassing many resources. Omohundro refers to these as \"basic AI drives,\" and he, along with Bostrom and others, has argued that this means great care must be taken when designing powerful autonomous systems, because even if they have harmless goals, the side effects of pursuing those goals may be quite harmful. These arguments, while intuitively compelling, are primarily philosophical.
In this paper, we provide formal models that demonstrate Omohundro's thesis, thereby putting mathematical weight behind those intuitive claims.", "id": "1f5002b6e34e090f885172798a5e7a0e"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Steve Babcock", "János Greidinger", "Richard Kramár", "Max Mallah", "Tegmark", "Anthony Aguirre", "Erik Brynjolfsson", "Meia Chita-Tegmark", "Daniel Dewey", "Owain Evans", "Eric Gastfriend", "Ales Flidr", "Katja Grace", "Evan Hefner", "Viktoriya Krakovna", "Colleen Messing", "Howard Messing", "Chase Moores", "Luke Muehlhauser", "Seán Ó H Éigeartaigh", "Tristan Plumb", "Stuart Russell", "Murray Shanahan", "Nate Soares", "Marin Soljacic", "Michele Reilly", "Anders Sandberg", "John Sturm", "Jaan Tallinn", "Jacob Trefethen", "Nick Ryder", "Michael Vassar", "Eliezer Alexander Wissner-Gross"], "title": "A survey of research questions for robust and beneficial AI", "text": "2 Short-term research priorities \n Optimizing AI's Economic Impact The successes of industrial applications of AI, from manufacturing to information services, demonstrate a growing impact on the economy, although there is disagreement about the exact nature of this impact and on how to distinguish between the effects of AI and those of other information technologies. Many economists and computer scientists agree that there is valuable research to be done on how to maximize the economic benefits of AI while mitigating adverse effects, which could include increased inequality and unemployment [72, 21, 39, 40, 107, 84, 69] . Such considerations motivate a range of research directions, spanning areas from economics to psychology. Below are a few examples that should by no means be interpreted as an exhaustive list. \n Measuring and Forecasting Economic Impact of Automation and AI When and in what order should we expect various jobs to become automated [39] ? How will this affect the employment and wages of various professions, including less skilled workers, creatives, and different kinds of information workers? Some have argued that AI is likely to greatly increase the overall wealth of humanity as a whole [21] . However, increased automation may push income distribution further towards a power law [22] , and the resulting disparity may fall disproportionately along lines of race, class, and gender; research anticipating the economic and societal impact of such disparity could be useful. 1. It is possible that economic measures such as real GDP per capita do not accurately capture the benefits and detriments of heavily AI-and-automation-based economies, making these metrics unsuitable for policy purposes [72] . Research on improved metrics could be useful for decision-making. 2. What has been the historical record on jobs being displaced by automation? What have the average rate and distribution of displacement been -has it been clustered in time, industry, and geography? How long before displaced workers found new jobs? Did displacement contribute to inequality? 3. Is there anything different about the advancement of artificial intelligence happening now that would lead us to expect a change from our centuries-long historical record on jobs being displaced by automation? 4. What factors make an industry more amenable or less amenable to automation? Machines historically performed rote mass-production, but we've been expanding their capabilities with advances in information processing and artificial intelligence. 5. 
Which markets are most susceptible to disruption as automation advances? Significant parts of the economy -including finance, insurance, actuarial, and many consumer markets -could experience upheaval through use of AI techniques to learn, model, and predict agent actions. These markets might be identified by a combination of high complexity and high rewards for navigating that complexity [69] . Are there other features? 6. Some jobs could be done by machines in principle but require a particular advance in artificial intelligence before it becomes feasible. What are some of those prerequisite advances? 7. Based on the above factors, how far in advance can we predict that an industry is likely to be largely automated? Is this something we can predict and prepare for 20 years ahead? 10 years? 6 months? Not at all? \n Policy research What is the space of policies that could/should be considered for helping AI-assisted societies flourish? For example, Brynjolfsson and McAfee [21] explore various policies for incentivizing development of laborintensive sectors and for using AI-generated wealth to support underemployed populations. What outcomes are these policies likely to lead to? What are the key uncertainties in these outcome predictions, and what research would help reduce these uncertainties? 1. What factors contribute to a 'winner-take-all' dynamic of software-based industries? The low cost of reproduction and increasingly global nature of the information economy, for example, make it easier to concentrate wealth. What other factors might we study and how could they be quantified? \n 2. Conversely, what factors counteract the 'winner-take-all' dynamic? For example, the lowered cost of entering a market might make it easier for new startups to compete with established products. 3. How well does the neoclassical model of anti-trust regulation apply to an economy increasingly dominated by software and AI-assistance? Will we need to develop new frameworks of regulation, or will our current laws adapt well enough? 4. Will the economy undergo deflation as software becomes a larger share of productivity? The potential for a relatively rapid, large-scale productivity boost from software could increase the purchasing power of the dollar. What are the closest examples of this occurring and do our current forecasts properly incorporate it? 5. If the economy does experience deflation, could governments effectively take advantage of it by spending more on projects that reduce inequality? \n Managing potential adverse effects of automation and AI If automation and AI-assistance does lead to lower employment, it will be important to evaluate the societal structures that determine whether such populations flourish or succumb to depression and self-destructive behavior. History provides many examples of subpopulations not needing to work for economic security, ranging from aristocrats in antiquity to many present-day citizens of Qatar. What societal structures and other factors determine whether such populations flourish? Unemployment is not the same as leisure, and there are deep links between unemployment and unhappiness, self-doubt, and isolation [53, 28] ; understanding what policies and norms can break these links could significantly improve the median quality of life. 1. What problems might arise in low employment societies? How strong are the correlations between employment level and rates of depression, crime, or drug abuse, for example? 2. Are there bright spots within the current correlations? 
What societies suffer less or even prosper in lower employment? 3. What cultural elements play into how low employment impacts different societies' flourishing? For example, Bellezza, Keinan, and Paharia [12] found that conspicuous hours worked was positively correlated with perceived status in the United States, but negatively correlated in Europe. What other variables might be in play, and can they be used to predict how different cultures/subcultures will be affected by low employment? 4. Are there historical analogues of societies in which groups have not had to work for economic security? (e.g. Traditional aristocracy, children of wealthy parents, etc.) What activities and mindsets have led them to consider their life happy or meaningful? Do these factors apply to the current development of AI-assisted automation? \n Law and Ethics Research The development of systems that embody significant amounts of intelligence and autonomy leads to important legal and ethical questions whose answers impact both producers and consumers of AI technology. These questions span law, public policy, professional ethics, and philosophical ethics, and will require expertise from computer scientists, legal experts, political scientists, and ethicists. For example: 1. Liability and law for autonomous vehicles: If self-driving cars cut the roughly 40,000 annual US traffic fatalities in half, the car makers might get not 20,000 thank-you notes, but 20,000 lawsuits. [66] In what legal framework can the safety benefits of autonomous vehicles such as drone aircraft and selfdriving cars best be realized [127] ? Should legal questions about AI be handled by existing (softwareand internet-focused) \"cyberlaw\", or should they be treated separately [23] ? In both military and commercial applications, governments will need to decide how best to bring the relevant expertise to bear; for example, a panel or committee of professionals and academics could be created, and Calo has proposed the creation of a Federal Robotics Commission [24] . \n Machine ethics: How should an autonomous vehicle trade off, say, a small probability of injury to a human against the near-certainty of a large material cost? How should lawyers, ethicists, and policymakers engage the public on these issues? Should such trade-offs be the subject of national standards? 3. Autonomous weapons: Can lethal autonomous weapons be made to comply with humanitarian law [27] ? If, as some organizations have suggested, autonomous weapons should be banned [34, 125] , is it possible to develop a precise definition of autonomy for this purpose, and can such a ban practically be enforced? If it is permissible or legal to use lethal autonomous weapons, how should these weapons be integrated into the existing command-and-control structure so that responsibility and liability be distributed, what technical realities and forecasts should inform these questions, and how should \"meaningful human control\" over weapons be defined [99, 98, 3] ? Are autonomous weapons likely to reduce political aversion to conflict, or perhaps result in \"accidental\" battles or wars [10] ? Finally, how can transparency and public discourse best be encouraged on these issues? 4. Privacy: How should the ability of AI systems to interpret the data obtained from surveillance cameras, phone lines, emails, etc., interact with the right to privacy? How will privacy risks interact with cybersecurity and cyberwarfare [109] ? 
Our ability to take full advantage of the synergy between AI and big data will depend in part on our ability to manage and preserve privacy [68, 1]. 5. Professional ethics: What role should computer scientists play in the law and ethics of AI development and use? Past and current projects to explore these questions include the AAAI 2008-09 Presidential Panel on Long-Term AI Futures [61], the EPSRC Principles of Robotics [13], and recently announced programs such as Stanford's One-Hundred Year Study of AI and the AAAI committee on AI impact and ethical issues (chaired by Rossi and Chernova). From a policy perspective, AI (like any powerful new technology) brings both great new benefits and novel pitfalls, and appropriate policies can help ensure that we enjoy the benefits while the risks are minimized. This raises policy questions such as the following: 1. What is the space of policies worth studying? 2. Which criteria should be used to determine the merits of a policy? Candidates include verifiability of compliance, enforceability, ability to reduce risk, ability to avoid stifling desirable technology development, adoptability, and ability to adapt over time to changing circumstances. \n Computer Science Research for Robust AI As autonomous systems become more prevalent in society, it becomes increasingly important that they robustly behave as intended. The development of autonomous vehicles, autonomous trading systems, autonomous weapons, etc. has therefore stoked interest in high-assurance systems where strong robustness guarantees can be made; Weld and Etzioni have argued that "society will reject autonomous agents unless we have some credible means of making them safe" [130]. Different ways in which an AI system may fail to perform as desired correspond to different areas of robustness research: 1. Verification: how to prove that a system satisfies certain desired formal properties. ("Did I build the system right?") 2. Validity: how to ensure that a system that meets its formal requirements does not have unwanted behaviors and consequences. ("Did I build the right system?") 3. Security: how to prevent intentional manipulation by unauthorized parties. 4. Control: how to enable meaningful human control over an AI system after it begins to operate. ("OK, I built the system wrong, can I fix it?") \n Verification By verification, we mean methods that yield high confidence that a system will satisfy a set of formal constraints. When possible, it is desirable for systems in safety-critical situations, e.g., self-driving cars, to be verifiable. Formal verification of software has advanced significantly in recent years: examples include the seL4 kernel [63], a complete, general-purpose operating-system kernel that has been mathematically checked against a formal specification to give a strong guarantee against crashes and unsafe operations, and HACMS, DARPA's "clean-slate, formal methods-based approach" to a set of high-assurance software tools [38]. Not only should it be possible to build AI systems on top of verified substrates; it should also be possible to verify the designs of the AI systems themselves, particularly if they follow a "componentized architecture", in which guarantees about individual components can be combined according to their connections to yield properties of the overall system.
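To give a concrete, if toy, flavor of the kind of property check meant here, the sketch below (ours, not drawn from the cited projects; it assumes the Z3 SMT solver, installable as the z3-solver package) asks whether a simple saturating controller preserves a state invariant. The update rule, bounds, and invariant are all illustrative assumptions.

```python
# A minimal, illustrative property check with the Z3 SMT solver.
from z3 import And, If, Implies, Ints, Not, Solver, unsat

x, u = Ints("x u")                        # current state and control input
next_x = If(x + u > 100, 100,             # a saturating update rule
            If(x + u < 0, 0, x + u))

s = Solver()
# Inductive step: if the state invariant holds and the input is bounded,
# the invariant still holds after one update. Assert the negation and
# ask the solver for a counterexample.
s.add(Not(Implies(And(0 <= x, x <= 100, -10 <= u, u <= 10),
                  And(0 <= next_x, next_x <= 100))))

if s.check() == unsat:
    print("invariant 0 <= x <= 100 is inductive: no counterexample exists")
else:
    print("counterexample:", s.model())
```

This negate-and-search-for-a-counterexample pattern is one common way component-level guarantees are phrased, though real systems require far richer specifications.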
This componentized approach mirrors the agent architectures used in Russell and Norvig [102], which separate an agent into distinct modules (predictive models, state estimates, utility functions, policies, learning elements, etc.), and has analogues in some formal results on control system designs. Research on richer kinds of agents - for example, agents with layered architectures, anytime components, overlapping deliberative and reactive elements, metalevel control, etc. - could contribute to the creation of verifiable agents, but we lack the formal "algebra" to properly define, explore, and rank the space of designs. Perhaps the most salient difference between verification of traditional software and verification of AI systems is that the correctness of traditional software is defined with respect to a fixed and known machine model, whereas AI systems - especially robots and other embodied systems - operate in environments that are at best partially known by the system designer. In these cases, it may be practical to verify that the system acts correctly given the knowledge that it has, avoiding the problem of modelling the real environment [31]. A lack of design-time knowledge also motivates the use of learning algorithms within the agent software, and verification becomes more difficult: statistical learning theory gives so-called (ε, δ) (probably approximately correct) bounds, mostly for the somewhat unrealistic settings of supervised learning from i.i.d. data and single-agent reinforcement learning with simple architectures and full observability, and even then it requires prohibitively large sample sizes to obtain meaningful guarantees. Research into methods for making strong statements about the performance of machine learning algorithms and managing computational budget over many different constituent numerical tasks could improve our abilities in this area, possibly extending work on Bayesian quadrature [52, 45]. Work in adaptive control theory [11], the theory of so-called cyberphysical systems [91], and verification of hybrid or robotic systems [2, 131] is highly relevant but also faces the same difficulties. And of course all these issues are laid on top of the standard problem of proving that a given software artifact does in fact correctly implement, say, a reinforcement learning algorithm of the intended type. Some work has been done on verifying neural network applications [95, 122, 106], and the notion of partial programs [5, 119] allows the designer to impose arbitrary "structural" constraints on behavior, but much remains to be done before it will be possible to have high confidence that a learning agent will learn to satisfy its design criteria in realistic contexts. It is possible that the best methodology for gaining high confidence that large, complex AI systems will satisfy their design criteria is to be found in the realm of software engineering methodology and standards rather than in formal verification. It is also possible that, while formal verification would be best in the long run, the research progress required to construct a superintelligence will come to fruition at a time when formal verification is still prohibitively expensive for the teams closest to building one. In this case, it would be good to understand what current work would be most valuable for reducing the risk of adverse outcomes arising from bugs in implementation.
This work would most likely be less theoretical and more practical and implementation-specific than most of the other research explored in this document. Some of the questions to investigate here are: 1. What categories of bugs are most hazardous? Some particularly undesirable sorts of bugs are: (a) bugs that lie dormant during ordinary testing but can be encountered in larger settings given enough time (for example, integer overflows or accumulation of numerical error); (b) portability bugs, i.e., bugs that arise from differences in libraries, environment, or hardware (for example, GPGPU libraries); (c) "Heisenbugs", i.e., bugs that manifest in practice but not in a debugging environment; (d) bugs that are difficult to reproduce for some reason, such as bugs affected by nondeterministic scheduling of concurrent threads of execution, or by the interaction of this with some other sort of state, such as a random number generator. 2. How likely would these sorts of bugs be to arise in a hazardous way if an otherwise-promising superintelligence project were undertaken in the medium term? 3. What kinds of tools or software changes would make the most difference in mitigating the risk of an adverse outcome? Some ideas: (a) influence current and upcoming programming language interpreters, compilers, application virtual machines (such as the JVM), etc. to adopt a default behavior (or at least an option) of throwing exceptions on encountering numerical overflow/underflow; (b) ensure software quality of particularly popular state-of-the-art machine learning libraries, GPGPU libraries, and other core components; (c) assess the prevalence of portability bugs and promote adherence to standards that could resolve them. \n Validity A verification theorem for an agent design has the form, "If the environment satisfies assumptions φ, then the behavior satisfies requirements ψ." There are two ways in which a verified agent can, nonetheless, fail to be a beneficial agent in actuality: first, the environmental assumption φ is false in the real world, leading to behavior that violates the requirements ψ; second, the system may satisfy the formal requirement ψ but still behave in ways that we find highly undesirable in practice. It may be the case that this undesirability is a consequence of satisfying ψ when φ is violated, i.e., had φ held, the undesirability would not have been manifested; or it may be the case that the requirement ψ is erroneous in itself. Russell and Norvig [102] provide a simple example: if a robot vacuum cleaner is asked to clean up as much dirt as possible, and has an action to dump the contents of its dirt container, it will repeatedly dump and clean up the same dirt. The requirement should focus not on the dirt cleaned up but on the cleanliness of the floor. Such specification errors are ubiquitous in software verification, where it is commonly observed that writing correct specifications can be harder than writing correct code. Unfortunately, it is not possible to verify the specification: the notions of "beneficial" and "desirable" are not separately made formal, so one cannot straightforwardly prove that satisfying ψ necessarily leads to desirable behavior and a beneficial agent. In order to build systems that robustly behave well, we of course need to decide what "good behavior" means in each application domain [73].
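The dump-and-clean failure can be made concrete with a toy re-creation of the Russell and Norvig example (our sketch, not code from the source; the actions, horizon, and scoring are illustrative). The same exhaustive planner is run under the misspecified requirement (total dirt cleaned up) and under the intended one (cleanliness of the floor):

```python
# Toy version of the vacuum-cleaner example: one planner, two requirements.
from itertools import product

def run(plan):
    """Execute a plan; return (final floor dirt, total dirt cleaned up)."""
    floor, bag, cleaned = 1, 0, 0
    for action in plan:
        if action == "suck" and floor:
            floor, bag, cleaned = 0, bag + 1, cleaned + 1
        elif action == "dump":
            floor, bag = floor + bag, 0
        # "wait" changes nothing
    return floor, cleaned

def best_plan(score, horizon=6):
    """Exhaustively choose the plan maximizing the given requirement."""
    return max(product(("suck", "dump", "wait"), repeat=horizon),
               key=lambda plan: score(*run(plan)))

# Misspecified requirement: maximize total dirt cleaned up.
print(best_plan(lambda floor, cleaned: cleaned))
# -> a plan that keeps dumping the bag and re-cleaning the same dirt

# Intended requirement: maximize cleanliness of the floor.
print(best_plan(lambda floor, cleaned: -floor))
# -> any plan that simply leaves the floor clean
```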
This ethical question of what counts as good behavior is tied intimately to questions of what engineering techniques are available, how reliable these techniques are, and what trade-offs can be made - all areas where computer science, machine learning, and broader AI expertise is valuable. For example, Wallach and Allen [128] argue that a significant consideration is the computational expense of different behavioral standards (or ethical theories): if a standard cannot be applied efficiently enough to guide behavior in safety-critical situations, then cheaper approximations may be needed. Designing simplified rules - for example, to govern a self-driving car's decisions in critical situations - will likely require expertise from both ethicists and computer scientists. Computational models of ethical reasoning may shed light on questions of computational expense and the viability of reliable ethical reasoning methods [9, 120]; for example, work could further explore the applications of semantic networks for case-based reasoning [70], hierarchical constraint satisfaction [67], or weighted prospective abduction [89] to machine ethics. Explicit ethical systems generally have the desideratum of transparency and understandability. There have been a number of approaches to modeling ethical systems - including the semantic casuistry networks, hierarchical constraint satisfaction, and weighted prospective abduction just mentioned, as well as category theory [19] - but research on more dynamic representations would be beneficial for integrating with machine learning systems. Long-term safety researchers [78] point out that seemingly any explicitly encoded moral philosophy (or ethical rulebase) is incomplete and therefore leads to unintended and perverse interpretations and instantiations, particularly outside the boundaries of the environment in which it was formulated; they further point out the wide heterogeneity of ethical systems in moral philosophy. This heterogeneity might be useful as a resource, however: research on formal mathematics to compare, contrast, and overlay ethical rulebases, such as via algebraic topology [36], may enable ensembles of moderately conflicting ethical rulebases to have more robust properties than any individual ethical system. Multiobjective optimizations over multiple ethical systems and explicit goals may also merit further study [82]. \n Security Security research can help make AI more robust. As AI systems are used in an increasing number of critical roles, they will take up an increasing proportion of the cyber-attack surface area. It is also probable that AI and machine learning techniques will themselves be used in cyber-attacks. Robustness against exploitation at the low level is closely tied to verifiability and freedom from bugs. For example, the DARPA SAFE program aims to build an integrated hardware-software system with a flexible metadata rule engine, on which can be built memory safety, fault isolation, and other protocols that could improve security by preventing exploitable flaws [30]. Such programs cannot eliminate all security flaws (since verification is only as strong as the assumptions that underlie the specification), but they could significantly reduce vulnerabilities of the type exploited by the recent "Heartbleed bug" and "Bash Bug". Such systems could be preferentially deployed in safety-critical applications, where the cost of improved security is justified. At a higher level, research into specific AI and machine learning techniques may become increasingly useful in security.
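As one small illustration of the kind of technique meant here (a sketch of ours, assuming scikit-learn; the connection features and numbers are invented), an anomaly detector can be fit to a baseline of normal traffic and then used to flag connections that deviate from it:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Illustrative connection features: bytes sent, bytes received, duration (s).
normal_traffic = rng.normal(loc=[500.0, 2000.0, 1.0],
                            scale=[100.0, 400.0, 0.3], size=(1000, 3))
suspicious = np.array([[50_000.0, 10.0, 0.05],   # exfiltration-like burst
                       [5.0, 5.0, 600.0]])       # long, nearly silent connection

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)
print(detector.predict(suspicious))   # -1 flags points scored as anomalous
```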
Such techniques could be applied to the detection of intrusions [65], analyzing malware [97], or detecting potential exploits in other programs through code analysis [20]. It is not implausible that cyber-attacks between states and private actors will be a risk factor for harm from near-future AI systems, motivating research on preventing harmful events. As AI systems grow more complex and are networked together, they will have to intelligently manage their trust, motivating research on statistical-behavioral trust establishment [94] and computational reputation models [103]. \n Control For certain types of safety-critical AI systems - especially vehicles and weapons platforms - it may be desirable to retain some form of meaningful human control, whether this means a human in the loop, on the loop [54, 88], or some other protocol. In any of these cases, there will be technical work needed in order to ensure that meaningful human control is maintained [33]. Automated vehicles are a test-bed for effective control-granting techniques. The design of systems and protocols for transition between automated navigation and human control is a promising area for further research. Such issues also motivate broader research on how to optimally allocate tasks within human-computer teams, both for identifying situations where control should be transferred, and for applying human judgment efficiently to the highest-value decisions. \n Long-term research priorities A frequently discussed long-term goal of some AI researchers is to develop systems that can learn from experience with human-like breadth and surpass human performance in most cognitive tasks, thereby having a major impact on society. If there is a non-negligible probability that these efforts will succeed in the foreseeable future, then additional current research beyond that mentioned in the previous sections will be motivated, as exemplified below, to help ensure that the resulting AI will be robust and beneficial. Assessments of this success probability vary widely between researchers, but few would argue with great confidence that the probability is negligible, given the track record of such predictions. For example, Ernest Rutherford, arguably the greatest nuclear physicist of his time, said in 1933 that nuclear energy was "moonshine" 1 , and Astronomer Royal Richard Woolley called interplanetary travel "utter bilge" in 1956 [96]. Moreover, to justify a modest investment in this AI robustness research, this probability need not be high, merely non-negligible, just as a modest investment in home insurance is justified by a non-negligible probability of the home burning down. \n Some perspectives on the long term Looking into the future, there is a very real possibility that we will see general AI and superintelligence. (We'll defer discussion of when and how this is likely to happen to Section 4.) This would be truly revolutionary - for the first time, we could have machine assistance in every domain. What would this mean? Unfortunately, it's difficult to get a complete picture from examples, because we haven't seen any systems that are more capable than humans at every cognitive task. One hint comes from looking at chimpanzees, our closest living relatives. Chimpanzees share 94% of our genetic code, have a complex social structure, and are capable of planning, tool use, and some symbolic language use.
However, unlike chimp intelligence, human intelligence has accumulated innovations (such as complex language, writing, the scientific method, etc) that have reshaped the planet, and that give our species unprecedented power; for better or for worse, chimpanzees will only continue to exist if we see their existence as more valuable than what we could gain from eliminating them. Fortunately, we see the elimination of our closest living relatives as particularly barbaric and immoral, so they probably won't be added to the growing list of species we have driven to extinction -but this should give us a sense of the power superintelligence could have. Of course, even if we happen to be the most intelligent species around, it would be a mistake to anthropomorphize a superintelligent AI system, which may have very little in common with us internally; certainly much less than a chimp. So what can we say about it? In order to be a superintelligent AI, the system must be doing very well at some task by some performance measure, relative to the resources the system is using. Looking at the system in a high-level way, we can describe it as having preferences to do well according to this performance measure. The preferences might be intended by the designer or not; they might be represented in the system or not; they might be context-specific; temporary; and they might be best defined relative to some virtual/abstract world (eg a chessboard), or the real world. We will refer back to this notion of preferences, which underlies various arguments about the nature of superintelligence. Preferences are often called \"goals\"; the latter term may sound like it refers to lists of fully-known binary-state objectives for the agent to accomplish, perhaps without a way of prioritizing between them, but this is not the intended meaning. Finally: of course, we already have systems that have superhuman capability in many domains, such as rote computation, chess, and Jeopardy; and this has not exposed us to any dramatic risks. An intelligent system need not literally have every human capability in order to be dangerous, but it's likely to need some of the following types of capabilities [16] : 1. Self-improvement: intelligence amplification 2. Strategic: planning, forecasting, prioritizing 3. Social: social and psychological modeling, manipulation, rhetorical persuasive ability Some questions to explore long-term perspectives: 1. Explore the space of possible mind designs. What features are common? On what axes can they differ? Where do human minds sit in that space relative to apes, dolphins, current AIs, future AIs, etc.? Where in the space could safe general AIs be found? (Some candidates to examine: optimizers vs not, tending to break out of simulations/virtual worlds vs not.) A starting point is [133] . 2. Bostrom's \"orthogonality thesis\" [17] states that \"any level of intelligence could in principle be combined with more or less any goal,\" but what kinds of general intelligences are plausible? Should we expect some correlation between level of intelligence and goals in de-novo AI? How true is this in humans, and in whole brain emulations? 3. What \"instrumental goals\" might a self-improving AI generically evolve? 
Omohundro [85] has argued that to improve its ability to attain its goals, it generically seeks capability enhancement (better hardware, better software and a better world model), and that the goal of better hardware generically leads to a goal of self-preservation and unlimited resource acquisition, which can lead to unwanted side effects for humans. 4. How generic is the instrumental goal of resource acquisition? Is it true of most final goals that an optimizer with that final goal will want to control a spatial region whose radius increases linearly with time? In what sense of \"most\", if any, is this true? \n Verification Reprising the themes of short-term research, research enabling verifiable low-level software and hardware can eliminate large classes of bugs and problems in general AI systems; if the systems become increasingly powerful and safety-critical, verifiable safety properties will become increasingly valuable. If the theory of extending verifiable properties from components to entire systems is well understood, then even very large systems can enjoy certain kinds of safety guarantees, potentially aided by techniques designed explicitly to handle learning agents and high-level properties. Theoretical research, especially if it is done explicitly with very general and capable AI systems in mind, could be particularly useful. A related verification research topic that is distinctive to long-term concerns is the verifiability of systems that modify, extend, or improve themselves, possibly many times in succession [41, 126] . Attempting to straightforwardly apply formal verification tools to this more general setting presents new difficulties, including the challenge that a formal system that is sufficiently powerful cannot use formal methods in the obvious way to gain assurance about the accuracy of functionally similar formal systems, on pain of inconsistency via Gödel's incompleteness [37, 129] . It is not yet clear whether or how this problem can be overcome, or whether similar problems will arise with other verification methods of similar strength. Finally, it is often difficult to actually apply formal verification techniques to physical systems, especially systems that have not been designed with verification in mind. This motivates research pursuing a general theory that links functional specification to physical states of affairs. This type of theory would allow use of formal tools to anticipate and control behaviors of systems that approximate rational agents, alternate designs such as satisficing agents, and systems that cannot be easily described in the standard agent formalism (powerful prediction systems, theorem-provers, limited-purpose science or engineering systems, etc.). It may also be that such a theory could allow rigorously demonstrating that systems are constrained from taking certain kinds of actions or performing certain kinds of reasoning (see Section 3.5.1 for examples). \n Validity As in the short-term research priorities, validity is concerned with undesirable behaviors that can arise despite a system's formal correctness. In the long term, AI systems might become more powerful and autonomous, in which case failures of validity could carry correspondingly higher costs. Reliable generalization of concepts, an area we highlighted for short-term validity research, will also be important for long-term safety. 
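As a toy illustration of the kind of unexpected generalization at issue (our sketch, assuming scikit-learn; all features and numbers are invented), a classifier that leans on a spurious shortcut looks reliable in training but generalizes badly once the context changes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)
core = y + rng.normal(0, 1.0, n)        # weak but genuine signal
shortcut = y + rng.normal(0, 0.1, n)    # strong shortcut, valid only in training
model = LogisticRegression().fit(np.column_stack([core, shortcut]), y)

# Radically new context: the genuine feature is unchanged, the shortcut reverses.
y_new = rng.integers(0, 2, n)
X_new = np.column_stack([y_new + rng.normal(0, 1.0, n),
                         (1 - y_new) + rng.normal(0, 0.1, n)])
print("training accuracy:", model.score(np.column_stack([core, shortcut]), y))
print("new-context accuracy:", model.score(X_new, y_new))   # far below chance
```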
To maximize the long-term value of such work, concept learning research might focus on the types of unexpected generalization that would be most problematic for very general and capable AI systems. In particular, it might aim to understand theoretically and practically how learned representations of high-level human concepts could be expected to generalize (or fail to) in radically new contexts [123]. Additionally, if some concepts could be learned reliably, it might be possible to use them to define tasks and constraints that minimize the chances of unintended consequences even when autonomous AI systems become very general and capable. Little work has been done on this topic, which suggests that both theoretical and experimental research may be useful. Mathematical tools such as formal logic, probability, and decision theory have yielded significant insight into the foundations of reasoning and decision-making. However, there are still many open problems in the foundations of reasoning and decision-making. Designing a powerful AI system without having a thorough understanding of these issues might increase the risk of unintended consequences, both by forgoing tools that could have been used to increase the system's reliability, and by risking the collapse of shaky foundations. Example research topics in this area include reasoning and decision-making under bounded computational resources à la Horvitz and Russell [59, 100], how to take into account correlations between AI systems' behaviors and those of their environments or of other agents [124, 64, 58, 46, 115], how agents that are embedded in their environments should reason [110, 87], and how to reason about uncertainty over logical consequences of beliefs or other deterministic computations [114, 93]. These topics may benefit from being considered together, since they appear deeply linked [47, 48]. In the long term, it is plausible that we will want to make agents that act autonomously and powerfully across many domains. Explicitly specifying our preferences in broad domains in the style of near-future machine ethics may not be practical, making it difficult to "align" the values of powerful AI systems with our own values and preferences [111, 113]. Consider, for instance, the difficulty of creating a utility function that encompasses an entire body of law; even a literal rendition of the law is far beyond our current capabilities, and would be highly unsatisfactory in practice (since law is written assuming that it will be interpreted and applied in a flexible, case-by-case way). Reinforcement learning raises its own problems: when systems become very capable and general, an effect similar to Goodhart's Law is likely to occur, in which sophisticated agents attempt to manipulate or directly control their reward signals [16]. This motivates research areas that could improve our ability to engineer systems that can learn or acquire values at runtime. For example, inverse reinforcement learning may offer a viable approach, in which a system infers the preferences of another actor, assumed to be a reinforcement learner itself [101, 83]. Other approaches could use different assumptions about the underlying cognitive models of the actor whose preferences are being learned (preference learning [26]), or could be explicitly inspired by the way humans acquire ethical values.
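A minimal sketch of the inverse reinforcement learning idea (ours, not taken from the cited work): assuming a Boltzmann-rational "expert" in a five-state chain, a state reward vector is fit by maximum likelihood to its demonstrations. The environment, demonstrations, and hyperparameters are all illustrative.

```python
import numpy as np

# Deterministic five-state chain; actions 0 = left, 1 = right.
n_states, gamma = 5, 0.9
T = np.array([[max(0, s - 1), min(n_states - 1, s + 1)] for s in range(n_states)])

def soft_q(reward, n_iter=100):
    """Soft value iteration for a Boltzmann-rational agent."""
    q = np.zeros((n_states, 2))
    for _ in range(n_iter):
        v = np.log(np.exp(q).sum(axis=1))        # soft state values
        q = reward[:, None] + gamma * v[T]       # q[s, a] = r(s) + gamma * v(s')
    return q

def log_likelihood(reward, demos):
    q = soft_q(reward)
    log_pi = q - np.log(np.exp(q).sum(axis=1, keepdims=True))
    return sum(log_pi[s, a] for traj in demos for s, a in traj)

# "Expert" demonstrations: always move right, toward the (hidden) goal state 4.
demos = [[(0, 1), (1, 1), (2, 1), (3, 1)]] * 10

# Maximum-likelihood IRL via a crude numerical-gradient ascent on the rewards.
reward_hat, lr, eps = np.zeros(n_states), 0.1, 1e-4
for _ in range(150):
    grad = np.zeros(n_states)
    for i in range(n_states):
        bump = np.zeros(n_states); bump[i] = eps
        grad[i] = (log_likelihood(reward_hat + bump, demos)
                   - log_likelihood(reward_hat - bump, demos)) / (2 * eps)
    reward_hat += lr * grad

print(np.round(reward_hat, 2))   # inferred reward rises toward state 4 (up to a shift)
```

The recovered rewards are only identified up to transformations that leave the demonstrated policy unchanged, which is one of the standard difficulties such approaches must confront.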
As systems become more capable, more epistemically difficult methods could become viable, suggesting that research on such methods could be useful; for example, Bostrom [16] reviews preliminary work on a variety of methods for specifying goals indirectly. \n Ethics As AI develops to human-level intelligence and beyond, it may be necessary to create agents that obey some formulation of ethics [73] . There are several fundamental questions on which researchers' assumptions differ; some are: 1. Should ethics be thought of as a constraint on the agent's actions, or as a subgoal, or as the main goal content? The former approach seems appropriate for near-term applications such as self-driving cars, but in a more powerful AI system it may not be sufficiently robust; also it can be argued that working under the latter assumption is more likely to expose our ethics models to feedback and improvement, which may lead to better outcomes in the long run. Intuitions about this are likely linked to differing intuitions about whether there is a difference in kind between ethical values (such as fairness, compassion, generosity, mercy, etc) and other typical values held by humans (such as knowledge, beauty, fun, courage, loyalty, chocolate, etc). 2. Should ethics be formulated directly [128, p. 83-86] , or should it be learned from human behaviors, brains, writings, etc (\"indirectly\") [128, p. 108-111] ? Again, the former approach is sufficient for self-driving cars, but seems fairly difficult to pin down to the degree that would be required for a superintelligence acting freely in the world. In the long run, such a superintelligence would need to be sophisticated enough to come to \"correct\" 2 conclusions (or meticulous indifference of a kind that leaves control to us) about the implications of such things as the creation of other possibly-sentient AIs, brain emulations, possible collective consciousnesses, etc., as well as more everyday situations. The learning-based approach has the drawback of not necessarily being transparent to human inspection, of easily misconstruing context, and of potentially overfitting; on the other hand it seems easier to reach an objective result over limited domains. Hybrid approaches have also been suggested [128, p. 117-124] , but there are a number of open questions in that area. Whichever way is chosen, it would be very valuable to find effective ways to validate ethical systems before developing superintelligence. Two apparent paths to doing this are to inspect the content manually, and to try them out in various (likely simulated) settings, evaluating them both subjectively and through each other's lenses. One significant challenge with testing is that many values (e.g., courage, love) are themselves rather complex features of the world, and so they might be difficult to capture in a simulated context without losing much of the essential complexity. Testing in non-simulated contexts would be significantly slower and more limited in terms of variety, reducing the quality of feedback. Furthermore, since superintelligent AIs will have more actions and plans available to them than the AIs we'd use for testing ethical systems, the ethical systems would have to generalize well, in a way that we could not test in reality. This suggests that the best option may be to create complex simulated environments with ethical complexity comparable to reality, in which we could set up interesting scenarios to test generalizability of ethical systems. 
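Whichever formulation wins out (question 1 above), even a toy comparison shows how much the choice changes an agent's behavior, which is part of what such test environments would need to probe. The sketch below is ours; the plan names, scores, threshold, and weight are invented.

```python
plans = {                      # plan: (task reward, ethics score in [0, 1])
    "fast_but_reckless": (10.0, 0.2),
    "balanced":          ( 7.0, 0.8),
    "very_cautious":     ( 3.0, 1.0),
}

def ethics_as_constraint(plans, threshold=0.7):
    """Ethics as a hard filter: discard impermissible plans, then optimize the task."""
    admissible = {p: r for p, (r, e) in plans.items() if e >= threshold}
    return max(admissible, key=admissible.get)

def ethics_as_goal_content(plans, weight=4.0):
    """Ethics folded into the main objective and traded off against task reward."""
    return max(plans, key=lambda p: plans[p][0] + weight * plans[p][1])

print(ethics_as_constraint(plans))     # 'balanced': the reckless plan is off the table
print(ethics_as_goal_content(plans))   # 'fast_but_reckless' wins if the weight is too low
```

With a larger weight the two formulations happen to agree here, but the constraint version is insensitive to how the trade-off is tuned, which is one face of the robustness question raised above.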
Building such simulated test environments successfully and ethically would require finding a way to replicate the true complexity of our values (a fairly difficult task in itself) while minimizing ethically meaningful harm to simulated entities (if it is determined that they hold moral standing). 1. Although it has been frequently argued that an AI's goals should reflect "human values", which particular values should be preserved, given that there is a broad spectrum of inconsistent views across the globe about what these values should be? Who should get to decide that, and when? For one example of such challenges, see the infinite ethics [14] framework of Arrhenius [8]. For another example of existing work here, Anderson [4] suggests a method for learning how to rank conflicting ethical rules. 2. How are human ethical principles best codified in a way that makes sense to a machine? This is the focus of the nascent field of Computational Ethics (a.k.a. Machine Ethics) [73], which includes many questions of a technical nature. For example: (a) What are the best knowledge representation and model representation systems for dynamical modeling of ethical systems? (b) How can different ethical systems be compared analytically? (c) How could we best build systems that learn ethical content from humans? (d) How can we best estimate the loss implications of errors in learned ethical content? 3. Both across ethical systems and within a given ethical system, conflicting objectives must often be considered in tandem. Bostrom [15] has suggested a parliamentary voting model of subagents representing their respective subobjectives. Multi-agent political dynamics may be undesirable, however, leading one to consider optimization over multiple objectives that are not each represented by subagents. During such a multiobjective optimization over subgoals and/or subethics, we don't want to be susceptible to following some edge path that seems great to one or a small number of subobjectives but is either very disliked or not well understood by most subobjectives (as that likely indicates a perverse instantiation). This preference can be equivalent to having some inherent preference for the centroid of the objective space; investigation both of modifications to standard multiobjective optimization and of adding meta-objectives as subobjectives [18, p. 440] would be in order. The value learning problem is discussed further in [112, 123]. \n Ensuring goal stability Once desirable goals have been successfully loaded into an AI, the key question is whether they will be retained if the AI self-improves. A prerequisite for the "friendly AI" vision [136] is that a set of "friendly" goals must remain stable throughout the self-improvement process. In other words, the AI must strive not only to improve its capability of achieving its current goals, but also to ensure that it will retain these goals even after it has become more capable. This sounds quite plausible: after all, would you choose to get an IQ-boosting brain implant if you knew that it would make you want to kill your loved ones? But is it really true in general? If not, can AIs be designed for which it is true, at least under some plausible circumstances? Such questions suggest a host of research topics - here are some examples: 1.
Self-trusting agents: can we construct goal-driven agents that obey some formalization of correct reasoning (eg first-order logic, or Bayesian probability) and have access to actions that modify themselves (and/or defer taking final action to a possibly-modified copy of themselves), that are able to make correct decisions about these actions without falling into the so-called Löbian obstacle or the procrastination paradox? (It's not necessary for these agents to be practical; an equivalent of AIXI [62] would be enlightening too.) A more thorough introduction to the problem was recently written by Fallenstein and Soares. [37] 2. Generally, how can we structure an autonomous goal-oriented agent so that we can be sure it won't intentionally self-modify to change its goals, or create more powerful agents with different goals? Are there other sorts of replication-capable AI for which this might be answerable? 3. Can any useful evidence for or against this goal-retention hypothesis be found by studying humans? For example, is there evidence that humans retain their values as their cognitive abilities improve throughout childhood and adolescence? To what degree do human values and preferences converge upon learning new facts? To what degree has this happened in history? Almost nobody values the will of Zeus anymore, presumably because of learning about Zeus' non-existence, but do such examples tell us much of relevance to AIs? For philosophical analyses of the issue, see e.g. [117] . 4. \"Ontological crisis\": if an agent's preferences are based on a model of the world which turns out to not be fundamental, it must then extrapolate/redefine them somehow. How is this best done, and can this always be done in a satisfactory way? For example, suppose we program a friendly AI to maximize the number of humans whose souls go to heaven in the afterlife. First it tries things like increasing people's compassion and church attendance. But suppose it then attains a complete scientific understanding of humans and human consciousness, and discovers that there is no such thing as a soul. Now what? In the same way, it is possible that any other goal we give it based on our current understanding of the world (\"maximize the meaningfulness of human life\", say) may eventually be discovered by the AI to be undefined. Can goals safely be articulated in terms of people rather than arrangements of atoms, or about a classical universe rather than a quantum, simulated, or other mathematical one? This problem, and other related problems, are discussed in a recent paper by Soares. [110] 5. Decision theory: most work on automated planning and causality assumes a Causal Decision Theory. However, Causal Decision Theory is not stable under reflection in general. What is the right reflectively stable decision theory? This problem is discussed in a recent paper by Soares and Fallenstein. [115] (a) Does a good decision theory require a theory of logical counterfactuals, and if so, what's a good theory of logical counterfactuals? (b) Does a good decision theory shed light on multiagent coordination problems? (There is some reason to think so.) On ontological crises? On naturalized induction? 6. Ensemble stability problem. Suppose an agent makes decisions using some sort of multiobjective optimization over different goals and ethical systems. Do some ways of doing this guarantee that the same objectives will be respected in any successors? 
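As a small worked example related to question 5 above (ours, with illustrative payoffs and predictor accuracy): in Newcomb's problem, an evidential rule and a causal rule recommend different actions, which is one way of seeing that the choice of decision theory is not innocuous.

```python
ACCURACY = 0.99                    # assumed P(predictor foresaw the chosen action)
BOX_B, BOX_A = 1_000_000, 1_000    # opaque box (filled iff one-boxing was predicted), transparent box

def evidential_value(action):
    """Condition on what the action indicates about the earlier prediction."""
    p_filled = ACCURACY if action == "one-box" else 1 - ACCURACY
    return (BOX_A if action == "two-box" else 0) + p_filled * BOX_B

def causal_value(action, p_filled=0.5):
    """Treat the box contents as already fixed, whatever is chosen now."""
    return (BOX_A if action == "two-box" else 0) + p_filled * BOX_B

for rule in (evidential_value, causal_value):
    print(rule.__name__, "->", max(("one-box", "two-box"), key=rule))
# evidential_value -> one-box; causal_value -> two-box (for any fixed p_filled)
```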
\n Security It is unclear whether long-term progress in AI will make the overall problem of security easier or harder; on the one hand, systems will become increasingly complex in construction and behavior, and AI-based cyber-attacks may be extremely effective, while on the other hand, the use of AI and machine learning techniques, along with significant progress in low-level system reliability, may render hardened systems much less vulnerable than today's. From a cryptographic perspective, it appears that this conflict favors defenders over attackers; this may be a reason to pursue effective defense research wholeheartedly. Although the research topics described in Section 2.3.3 may become increasingly important in the long term, very general and capable systems will pose distinctive security problems. In particular, if the problems of validity and control are not solved, it may be useful to create "containers" for AI systems that could have undesirable behaviors and consequences in less controlled environments [134]. Both theoretical and practical sides of this question warrant investigation. If the general case of AI containment turns out to be prohibitively difficult, then it may be that designing an AI system and a container in parallel is more successful, allowing the weaknesses and strengths of the design to inform the containment strategy [16]. The design of anomaly detection systems and automated exploit-checkers could be of significant help. Overall, it seems reasonable to expect that this additional perspective - defending against attacks from "within" a system as well as from external actors - will raise interesting and profitable questions in the field of computer security. Below we explore some types of containment that could help limit the potential for the AI to make and execute plans with arbitrarily large effects on the world. It's worth bearing in mind that even if the control and validity problems seemed to have been adequately addressed, it would still be appropriate to use additional safeguards while studying and testing any potential superintelligence - and that it would help for such safeguards to be as adaptable and painless as possible. The task of designing appropriate containment is challenging for numerous reasons, not only due to the difficulty of reliably avoiding security vulnerabilities. In the worst case, every aspect of the AI's execution may be superintelligently designed to further a set of goals alien to ours. With a superintelligent AI in a container, we'd need to worry not only about straightforward attempts at disabling safeguards or making self-copies, but also about hardware exploits and more insidious attempts to gain trust or influence. Some of the attributes that may be desirable or necessary are: 1. Containment: it should prevent a contained superintelligent AI from having arbitrary effects on the world. In particular, it should be verifiably free of vulnerabilities itself. 2. Robustness: it should be difficult to unintentionally render ineffective. 3. Uptake: it should be a system that AI builders want to use, rather than one they would prefer to avoid. Some possible aspects of this include: 4. Inspectability: it should allow detailed debugging and inspection of the contained AI. See Section 3.4.5. This could contribute to uptake if it provides better inspection capabilities than AI builders typically have (for instance, debugging distributed software is typically awkward in the current state of affairs).
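A toy, Unix-only sketch (ours, and nowhere near a real containment guarantee) of the kind of crude low-level restriction such a system might layer underneath the approaches in the subsections below: run contained code in a child process with hard CPU-time and memory limits, and notice when a limit is tripped.

```python
import resource
import subprocess
import sys

def apply_limits():
    # Hard caps on CPU time and address space for the child process.
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                      # 2 s of CPU
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))   # 256 MiB

untrusted_code = "while True: pass"    # stands in for the contained program
proc = subprocess.run([sys.executable, "-c", untrusted_code],
                      preexec_fn=apply_limits, capture_output=True)
print("exit status:", proc.returncode)  # negative: killed by a signal (here SIGXCPU)
```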
\n Software containment This research is about how we might develop, test and share software that can reliably \"box\" and monitor AI. 1. What is the space of possible boxing architectures worth studying, based on input from researchers in industry, academia, etc.? Dimensions include: (a) At what level of abstraction is it implemented? As a virtual machine? As an operating system? (b) What kinds of isolation guarantees does it provide? i. Limited input/output channels? (This is relevant to preventing both typical computer hacking and psychological escapes.) ii. Limited side effects on hardware? (Perhaps using some variant of homomorphic encryption to make the physical effect of executing code on the CPU unpredictable/unexploitable?) (c) What forms of logging and monitoring should be supported? (d) What types of automated tripwires and associated countermeasures should be supported? (e) How can scalability and parallelism be provided without risk of accidentally connecting it to the internet? 2. Once a desirable boxing architecture has been identified, implement and share it on an open-source basis. A key metric of success here is actual adoption by leading general AI researchers, so their input into the architecture selection will be crucial. 3. Attempt to prove that the software (or at least key components) will behave as intended. Use a \"red team\" to attempt to find security flaws in proposed and currently existing virtualization software, perhaps incentivized by prizes. Multiple levels of virtualization should be considered (and perhaps used in parallel), such as: 5. Develop a debugger (perhaps by modifying an existing one such as gdb) that's able to connect to a virtual machine and debug something inside it, without being exploitable by the thing inside. 6. Automate checking for the presence of known problems (ie, check version numbers against a database and keep the database up to date), from both inside and outside of a box. Design it to be worked into an AI project as an automated check. 7. Can we put a general AI in a virtual environment that doesn't reveal that it's a virtual environment? What sort of content would be safe to give it? \n Psychological containment This research aims to study and reduce the risk that human users are manipulated into enabling a boxed superintelligent AI to \"escape\". Note that escape here need not be by disabling safeguards; the AI could deliver information to its recipient in a way that contributes to other complex plans to gain influence, trust, more copies, etc. 1. Replicate the \"AI box experiment\" where a human pretending to be an AI tries to persuade a human controller to enable a breakout; document strategies, counterstrategies, and outcomes. What software tools should be developed (and perhaps shared on a public-domain basis)? For example, how would one best make a double-entry / two-person ssh interface where both entrants are verifiably human? 3. Scenario analysis: What are some concrete AI paths to influence over world affairs? 4. What are the optimization power gains from mere content? What have people figured out without original theoretical advances or new experiments, but just by reading lots of known facts and putting together the pieces in a way that nobody had before? Relevant for deciding whether containment should involve restricting the AI's knowledge base. 
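A minimal sketch of the automated known-problem check in item 6 of the software-containment list above (ours; the package names and flagged versions are purely illustrative, not a real advisory database):

```python
from importlib import metadata

KNOWN_BAD = {                        # hypothetical local database of problem versions
    "numpy": {"1.19.0"},
    "somegpulib": {"0.3.1"},
}

def audit():
    findings = []
    for package, bad_versions in KNOWN_BAD.items():
        try:
            installed = metadata.version(package)
        except metadata.PackageNotFoundError:
            continue                  # not installed: nothing to flag
        if installed in bad_versions:
            findings.append((package, installed))
    return findings

for package, version in audit():
    print(f"WARNING: {package}=={version} has known problems; upgrade before running")
```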
\n Hardware containment Study ways in which a general AI might break out from a computer (or computer cluster) disconnected from the outside world, and investigate corresponding countermeasures. 1. First challenge: Consider only currently known technology, for example communication with the outside world via radio waves, vibrations, minute A/C power modulations, etc. that somehow interact with the computer in a software-controllable/software detectable way. See the literature on side-channel attacks. 2. Second challenge: Consider advanced and currently unknown technology that the general AI could potentially invent, limited only by laws of physics we consider truly fundamental. Both 1 and 2 include the challenge of how to safely dispose of boxed AI hardware after shutdown. \n Tripwires: Detection & Response This research aims to study ways of detecting an imminent or in-progress AI box breakout, and appropriate countermeasures to take when pre-set alarms are triggered. 1. What are appropriate tripwires for detecting an imminent or in-progress AI box breakout? Examples include: (a) Low-level triggers such as unexpected use of memory, bandwidth or communication channels (b) Triggers related to the AI modifying its own software beyond certain limits (c) Triggers related to some sort of high-level ability metric. Can any improved measures of substrateindependent general intelligence be helpful in this regard, extending past work by Legg, Yudkowsky, Goertzel, Hernandez-Orallo & Dowe, etc.? 2. How are such tripwires best implemented without causing performance degradation in terms of speed, memory-usage, etc. that may limit their adoption? To maximize adoption, software implementing such tripwires in the above-mentioned boxing software should be developed and shared, optionally on an open-source basis. 3. Study the risk associated with an AI having access to the source code for its boxing and tripwire software (or itself) and thereby discovering ways to circumvent it. 4. What are appropriate countermeasures for various types of tripwire triggers? (a) At the machine level, what should happen automatically for safe shutdown? After shutdown? What standard interfaces for tripwires could be developed for automatically shutting down all of a distributed system, including triggers and/or processes inside virtual machines? (b) What is an appropriate emergency response plan if general AI appears imminent or has come into existence? At the organization level? At the national/societal level? When is it worth developing such plans? \n Detecting intent to deceive Suppose we construct a superintelligent AI system that has goals alien to ours. Correctly suspecting it of being potentially harmful, we use a brilliantly-designed container mechanism to make sure it can have only very limited influence on the world. Of course, eventually we'd like to put this system to use. When we do this it will be difficult for us to ensure it can't influence the world through its actions; knowing this, the AI may deceive us to gain trust while inside the container. We may try to inspect it, but it will have had a chance to disguise itself to look harmless. For these reasons, trying to make use of a mature superintelligent AI, even in a well-designed container, could be very hazardous. This suggests that even if we have a container, if we don't trust the AI's goal system we must have enough insight into its cognitive processes to monitor it and catch the \"treacherous turn\", i.e. 
the first moment when it comes up with the idea of deceiving us. Transparency/inspectability is thus important, perhaps even more so than containment per se. For example, it would be useful to investigate how amenable different AI architectures would be to having their beliefs and goals read from the outside. \n Control It has been argued that very general and capable AI systems operating autonomously to accomplish some task will often be subject to effects that increase the difficulty of maintaining meaningful human control [86, 17, 16, 107] . Research on systems that are not subject to these effects, minimize their impact, or allow for reliable human control could be valuable in preventing undesired consequences, as could work on reliable and secure test-beds for AI systems at a variety of capability levels. \n Corrigibility and Domesticity If an AI system is selecting the actions that best allow it to complete a given task, then avoiding conditions that prevent the system from continuing to pursue the task is a natural subgoal [86, 17] (and conversely, seeking unconstrained situations is sometimes a useful heuristic [132] ). This could become problematic, however, if we wish to repurpose the system, to deactivate it, or to significantly alter its decision-making process; such a system would rationally avoid these changes. Systems that do not exhibit these behaviors have been termed corrigible systems [116] , and both theoretical and practical work in this area appears tractable and useful. For example, it may be possible to design utility functions or decision processes so that a system will not try to avoid being shut down or repurposed [116] , and theoretical frameworks could be developed to better understand the space of potential systems that avoid undesirable behaviors [55, 57, 56] . It has been argued that another natural subgoal is the acquisition of fungible resources of a variety of kinds: for example, information about the environment, safety from disruption, and improved freedom of action are all instrumentally useful for many tasks [86, 17] . Hammond [49] gives the label stabilization to the more general set of cases where \"due to the action of the agent, the environment comes to be better fitted to the agent as time goes on\". This type of subgoal could lead to undesired consequences, and a better understanding of the conditions under which resource acquisition or radical stabilization is an optimal strategy (or likely to be selected by a given system) would be useful in mitigating its effects. Potential research topics in this area include \"domestic\" goals that demand actions/plans whose consequences are limited in scope in some way [16] , the effects of large temporal discount rates on resource acquisition strategies, and experimental investigation of simple systems that display these subgoals. Finally, research on the possibility of superintelligent machines or rapid, sustained self-improvement (\"intelligence explosion\") has been highlighted by past and current projects on the future of AI as potentially valuable to the project of maintaining reliable control in the long term. The AAAI 2008-09 Presidential Panel on Long-Term AI Futures' \"Subgroup on Pace, Concerns, and Control\" stated that There was overall skepticism about the prospect of an intelligence explosion... 
Nevertheless, there was a shared sense that additional research would be valuable on methods for understanding and verifying the range of behaviors of complex computational systems to minimize unexpected outcomes. Some panelists recommended that more research needs to be done to better define \"intelligence explosion,\" and also to better formulate different classes of such accelerating intelligences. Technical work would likely lead to enhanced understanding of the likelihood of such phenomena, and the nature, risks, and overall outcomes associated with different conceived variants [61] . Stanford's One-Hundred Year Study of Artificial Intelligence includes \"Loss of Control of AI systems\" as an area of study, specifically highlighting concerns over the possibility that ...we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes -and that such powerful systems would threaten humanity. Are such dystopic outcomes possible? If so, how might these situations arise? ...What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an \"intelligence explosion\"? [60] Research in this area could include any of the long-term research priorities listed above, as well as theoretical and forecasting work on intelligence explosion and superintelligence [25, 16] , and could extend or critique existing approaches begun by groups such as the Machine Intelligence Research Institute [113] . Research questions: 1. Can high Bayesian uncertainty and agent respect for the unknown act as an effective safety mechanism? (See [32, 108] ) 2. Investigate steep temporal discounting as an incentives control method for an untrusted general AI. It has been argued [135] that the nature of the general AI control problem undergoes an essential shift, which we can refer to as the \"context change\", when transitioning from subhuman to superhuman general AI. This suggests that rather than judging potential solutions to the control problem using only experimental results, it's essential to build compelling deductive arguments that generalize and are falsifiable, and only when these arguments are available does it make sense to try to test potential solutions via experiment. \n Safe and Unsafe Agent Architectures Predicting the exact behavior of complex software is notoriously difficult and has been shown to be generically impossible with less computational cost than simply running it. The goal of AI safety research is therefore more modest: to show that the behavior, although not exactly predictable, will have certain desired properties, for example keeping certain behavioral parameters within certain bounds. Rational agents are often composed of distinct modules (e.g. sensors, actuators, a performance element, a learning element, a problem generator, a critic, etc.), each with limited abilities, with some network of information flows between modules. [102] Within this framework, it would be valuable to provide guarantees that various modules would be safe or unsafe (individually or in combination). A related approach is to not build an agent at all, but rather some sort of non-agent \"Tool AI\". Some types of this are: 1. \"Oracle AI\": an AI system designed to merely answer questions about the world as accurately as possible. (Though people sometimes also call agent AI with a goal of accurately answering questions about the world \"Oracle AI\".) 2. 
\"Virtual AI\": an agent that interacts only with an abstract world, but has no way of determining this, and hence is not an agent in the physical world. Although a common assumption is that both 1 and 2 are \"safe\" by remaining \"Oracle AI\"/\"Tool AI\", this has not been substantiated. For example, an Oracle AI that becomes superintelligent via self-improvement is likely to evolve a knowledge-acquisition goal, which might lead it to modify its answers to manipulate its user to perform certain experiments. Many of the above-mentioned safety issues are related to the issue of goals that the rational agent may have. This question provides an important link between architectures and goals: how amenable are different AI architectures to having their goals and beliefs read from the outside in a fashion useful for safety determination and monitoring? Research questions on properties of architectures under self-modification: 1. Can certain interesting classes of agents be proven to exhibit behavior converging towards recursively stable fixed points or limit cycles? 2. Do certain kinds of non-optimizer agents become optimizers, and if so, how quickly? 3. If so, how strong is the 'optimizer' stable attractor? 4. Are there other stable attractors? 5. Are tool-like or Oracle-like things stable attractors? Research questions about specific architectures: 1. Model and bug-trace the variety of different scenarios of failure modes and dangerous behaviors intrinsic to different existing real-world general AI architectures such as OpenCog and Sigma. 2. Analyze how deep learning discontinuity/instability [121] can affect deep reinforcement learning architectures. What new classes of risks in agent behavior can result? How easily can these risks be mitigated? Evaluate compensatory safety mechanisms whereby an agent requires at least two distinct perspectives on a situation or subject of analysis before characterizing it. 3. Explore how the field of mechanism design can be applied to controlling neuromorphic AIs and other architectures. 4. How well does an AI system's transparency to human inspection scale, using different kinds of architectures and methods? [76] . 4 Forecasting \n Motivation 1. Conduct a broad survey of past and current civilizational competence. In what ways, and under what conditions, do human civilizations show competence vs. incompetence? Which kinds of problems do they handle well or poorly? Similar in scope and ambition to, say, Perrow's Normal Accidents [90] and Sagan's The Limits of Safety [104] . The aim is to get some insight into the likelihood of our civilization handling various aspects of the superintelligence challenge well or poorly. Some initial findings were published on the MIRI blog. [79, 74] 2. Did most early AI scientists really think AI was right around the corner, or was it just a few people? The earliest survey available (Michie 1973[71] ) suggests it may have been just a few people. For those that thought AI was right around the corner, how much did they think about the safety and ethical challenges? If they thought and talked about it substantially, why was there so little published on the subject? If they really didn't think much about it, what does that imply about how seriously AI scientists will treat the safety and ethical challenges of AI in the future? 
\n Methodology There are many interrelated variables that are relevant to arriving at a good understanding of the future of AI, in particular the path towards general AI; and there are a number of different ways to produce forecasts of these variables, with varying degrees of accuracy, credibility, feasibility, and informativeness. Possible methods include: 1. expert surveys 2. prediction markets 3. systematic group forecasting methods, such as the Delphi method 4. building complex models 5. extrapolation from historic trends 6. analogy with other historical developments 7. combining forecasts from differing methods or forecasts of related variables. Existing projects that investigate how to best forecast future sci/tech and other developments include these IARPA programs: 1. ACE (Aggregative Contingent Estimation), which investigates ways to improve and combine the judgments of analysts to produce accurate forecasts. The ACE program has been running a team prediction tournament, which has consistently been won by the Good Judgment Project, which has produced several insightful publications. 2. ForeST (Forecasting Science & Technology), which investigates ways to get and maintain high-quality forecasts of sci/tech milestones, and funds SciCast, the world's largest sci/tech forecasting tournament. These projects are providing us with valuable information on how best to make short-term forecasts. It would be interesting to run a similar tournament with 5-year and 10-year time horizons for predictions and see if there are significant differences; are the chief determinants of predictive success the same? What kind of process should we trust to give us the best predictions? Besides the question of how to train analysts and combine their estimates, there is also the question of what modelling methodologies, used by whom, yield the best long-term forecasts. The Tauri Group is a think tank that conducted a study [44] for the Department of Defense reviewing over 1000 technological forecasts and statistically analyzing accuracy by methodology, by source, and by time frame. It would be informative to have a similar analysis of long-term technological predictions from other sources, such as (1) The Futurist and World Future Review, (2) Technological Forecasting and Social Change, (3) Foresight and the International Journal of Forecasting, (4) the Journal of Forecasting, (5) publications of the Hudson Institute, (6) publications of the Institute for the Future, (7) publications of the Club of Rome, (8) the Journal of Futures Studies, (9) Ray Kurzweil (more thorough than section 5.4 of Armstrong et al., 2014 [7]), (10) Alvin Toffler, (11) John Naisbitt, and (12) the State of the World reports by the Worldwatch Institute. \n Forecasting AI progress In order to get a good understanding of what the path to general AI might look like, there are many kinds of interacting variables that would be worth forecasting: (a) How quickly will computer hardware performance be improving (on various metrics)? i. Improved performance enables more AI approaches to be feasible and makes experiments more productive. Will hardware advances also contribute to researcher productivity in other ways? ii. Will departures from the von Neumann architecture contribute significantly to some types of AI development? (For example quantum computers, or computers inspired by cellular automata.) (b) How quickly does performance of algorithms tend to advance over time? (Grace, 2013) [42] finds that (in six areas) algorithmic progress is nearly as significant as hardware progress; but further analysis of this question with a view toward economic or productivity impact would be worthwhile. (c) Researcher performance is affected by improvements in algorithmic and hardware performance, which make experiments more productive. What other factors will affect researcher performance? Some candidates: i. changing ways to do software development ii. changing ways to collaborate iii. changing ways to publish or present results iv. improved computer interfaces (such as brain-computer interfaces or virtual reality) v. genetic enhancement technology (d) Related to the previous question: what AI subfields will benefit particularly from hardware performance improvements? \n Forecasting AI takeoff If we develop AI that's advanced enough to do AI research and development, we may enter the era that I. J. Good dubbed the "Intelligence Explosion", in which the growth in AI capability is driven primarily by the AI itself. We will refer to the transition of AI to a superintelligent level as takeoff. How (and how quickly) this would unfold is important, but difficult to predict. Of course, some of the usual forecasting methods are applicable, in particular those that rely on expert judgment. (A survey of timelines and impacts for humanity (with n=170) is here [81].) Here are some more approaches toward understanding aspects of the intelligence explosion: 1. Compare with earlier takeoff-like scenarios. Some candidates [51, 50]: (a) development of proto-human brains (b) agricultural revolution (c) industrial revolution 2.
Besides software improvements, AI self-improvement may occur via engaging in commercial activity, via expropriating computing resources, via manufacturing computing resources, or in other ways. Enumerate the specific technologies required for each pathway and forecast their development. 3. Assess how best to model the amount of self-improvement that could be accomplished with varying amounts of (i) intelligence, (ii) parallelism / parallel computations, (iii) serial depth of computation; what evidence is available? [137] (a) How do different areas of knowledge respond to being given more serial research time vs more people vs other inputs? (b) One type of cognitive work that has been recorded for millennia is the progress of mathematics, in particular the resolution of conjectures. Some data analysis [43] suggests these times follow an exponential distribution with halflife 100 years. Could further analysis help us understand the benefits of serial vs parallel intellectual work in this domain? It's possible that there will be multiple AI projects undergoing takeoff at the same time; this has been called a multipolar takeoff. Multipolar takeoff is more likely if (i) takeoff is slow, (ii) more of the necessary innovations and/or tools are shared, and (iii) implementation doesn't require any non-commodity resources, such as access to specialized hardware. It's been proposed [6] that multipolar scenarios might carry a higher risk of accidents than unipolar ones because no party wishes to lose the competition. It's also been proposed [29] that multipolar scenarios might be safer, because there might be a \"balance of power\" enabling cooperation and mutual scrutiny. 1. What kind of multipolar scenarios may occur? What would be the consequences? 2. What kinds of multipolar scenarios would collapse into unipolar ones, or vice versa? \n Brain emulations (uploads) 1. Can we get whole brain emulation without producing neuromorphic general AI slightly earlier or shortly afterward? See section 3.2 of [35] . 2. Is the first functional whole brain emulation likely to be (1) an emulation of low-level functionality that doesn't require much understanding of human cognitive neuroscience at the computational level, as described in [105] , or is it more likely to be (2) an emulation that makes heavy use of advanced human cognitive neuroscience, as described eg by Hayworth [77] , or is it likely to be (3) something else? 3. Investigate the feasibility of creating safe general-purpose superintelligences by modifying brain emulations, based on currently known cognitive neuroscience. \n Policy and Collaboration For any powerful new technology, appropriate policies can ensure that humanity can enjoy the benefits while risks are minimized. Both nuclear technology and biotechnology have thus far avoided global-scale disasters (global nuclear war, nuclear terrorism, engineered pandemics, etc.), at least in part thanks to helpful policies. For example, the policies developed at the 1975 Asilomar conference on Recombinant DNA have contributed to the sterling safety record of that field without stifling its progress in any significant way. In this spirit, it appears worthwhile to research the analogous question for AI: what policies would help ensure that humanity reaps the benefits of AI while avoiding potential pitfalls? Here are some more specific questions along these lines: 1. What is the space of possible AI risk reduction policies worth studying? 
(Dewey [32] and Sotala and Yampolskiy [118] have written some analyses of possible policies/responses.) Dimensions include: (a) Implementation level: global, national, organizational, etc. (b) Strictness: mandatory regulations, voluntary industry guidelines, etc. (c) Type: Do policies/monitoring efforts focus on software, hardware, projects, or individuals? Is there some sort of tiered system of security clearances? Is some information classified? What are possible approaches to monitoring and tracking general AI development? What kind of research should be funded? Are new governance structures created? 2. Which criteria should be used to determine the merits of a policy? Some candidates: ability to reduce AI risk; (d) ability to avoid stifling desirable technology development or having other negative consequences; (e) adoptability (the prospects of adoption increase when a policy benefits those whose support is needed for implementation and when its merits can be effectively explained to decision-makers and opinion leaders); (f) ability to adapt over time to changing circumstances. To shed light on 2(d): What happens when governments ban or restrict certain kinds of technological development? What happens when a certain kind of technological development is banned or restricted in one country but not in other countries where technological development sees heavy investment? 
Collaboration is another important topic that deserves recurring thought and discussion. To build safe general AI of human level and beyond, it will likely be necessary to bring together multiple research subdisciplines and communities and let them influence each other's work. Some thematic questions here are: 1. What are the most important collaborations and information flows we need between different research subdisciplines and communities? 2. What attitudes would be most useful to foster? 3. What kind of organizations or organizational mechanisms would best enable these collaborations and information flows, bringing us closer to safety? 
\n Containment Desiderata for a containment ("boxing") environment in which a general AI could be studied safely include: (a) Completeness: it should allow running the same experiments that AI builders can do without the containment (b) Flexibility: it should be adaptable enough to different hardware setups to be used with different architectures, and especially on distributed hardware (c) Efficiency: it should not overly reduce runtime efficiency, memory efficiency, or research/development efficiency (d) Ease of use: it should provide as familiar an environment as possible, for example a UNIX shell and other familiar tools. Candidate containment mechanisms to evaluate include: (a) Operating-system level virtualization (such as VirtualBox, VMWare, KVM) (b) Programming-language level containment (such as a Java sandbox) (c) Static analysis (e.g. to check that a program won't use networking APIs or corrupt its memory). Investigate programs likely to be used for viewing output and logs from an AI project. Determine which ones are and aren't secure enough to use safely in the context of a boxed AI, and address weaknesses in them. For example: (a) How safe are terminal emulators? (b) Can an AI censor information from its log files by mixing in things like Unicode control characters or xterm escape sequences? (c) Under what circumstances is it safe to copy-paste things from an AI's output? (d) What file formats should and shouldn't the AI be allowed to use? Further questions concern the human operators: (a) How can individual AI researchers best be trained to resist being manipulated by an AI? 2. What are the most effective social control mechanisms for preventing individuals from enabling AI breakout? What can be learned from past research on preventing unauthorized use of nuclear weapons and other powerful technologies? Such examples include: (a) Psychological screening of operators (b) Surveillance of operators, for example adding an option to sshd which records keystrokes and output in a place the logged-in user cannot access (c) "False alarm" tests where the operator is led to believe that a critical situation has occurred, so that one can see what decisions they make under pressure (d) Systems requiring more than one authorized human for making certain critical decisions. 
Returning to forecasting the path to general AI, other related questions that may merit detailed study include: 1. in terms of technologies: (a) What types of AI (in terms of architecture, subfield, application, etc.) are most likely to contribute to reaching general AI? What AI capabilities would be necessary or sufficient, individually or collectively? (b) Nick Bostrom argues in Superintelligence that brain emulation technology is unlikely to arrive much sooner than human-level neuromorphic AI, because techniques and knowledge from the former can likely be repurposed for the latter. Are there other foreseeable situations where two disparate fields or research programs may be closely related, with success on one implying great progress on the other? i. Does causal entropy [132] constitute a promising shared avenue of progress in AI and nanotech? 2. in terms of scenarios: (a) What kinds of scenarios would increase or decrease researcher inclination to work on AI or general AI research? (For example, changing ideologies or public opinion, association of the field with ideas held in low regard, etc.) Can we forecast this? (b) How scalable is innovative project secrecy? Examine past cases: the Manhattan Project, Bletchley Park, Bitcoin, Anonymous, Stuxnet, Skunk Works, Phantom Works, Google X. Could there be large projects we don't know about? How will this change in coming decades? (c) What is the world's distribution of computation, and what are the trends? (Some initial results are available here [75].) (d) Supposing enough technical innovations are in place to build general AI, how large of a project will implementation be? How much of the work to reach general AI is scientific advancement and technical innovation vs engineering and implementation? 3. in terms of public response: (a) How will governments respond? i. What conditions would make bans or nationalization likely? (Consider historical examples here.) What would be the consequences? ii. Examine international collaboration on major innovative technology. How often does it happen? What blocks it from happening more? What are the necessary conditions? Examples: the Concorde jet, the LHC, the International Space Station, etc. What conditions would make international collaboration on AI safety issues likely? iii. What kinds of policies are likely to be implemented, with what effect? • What happens when governments ban or restrict certain kinds of technological development? What happens when a certain kind of technological development is banned or restricted in one country but not in other countries where technological development sees heavy investment? • What kinds of innovative technology projects do governments monitor, shut down, or nationalize? How likely are major governments to monitor, shut down, or nationalize serious general AI projects? (b) How will the public respond? What sorts of technological innovations tend to cause public panic or outrage, under what conditions? (c) What sorts of developments would cause governments or the public to consider AI safety to be a serious issue? How did public perception respond to previous AI milestones? How will the public react to self-driving taxis? (d) How much warning will we have before we reach general AI? What kinds of future developments would serve as advance signposts indicating the kind of scenario we're likely to see? 4. in terms of rates of progress: 1. resources (researchers and funding) going into AI innovation in general, or within AI subfields 2. resources going into AI areas of application, such as robotics or sensory technologies 3. related fields which may contribute ideas, such as neuroscience 4. shifts in the set of organizations/people performing AI research among: (a) countries (b) academia vs industry vs other government (e.g. military). 
Footnote: "The energy produced by the breaking down of the atom is a very poor kind of thing. Any one who expects a source of power from the transformation of these atoms is talking moonshine." [92] 
Footnote: One way to operationalize this is described by Muehlhauser [80].
The adoption of probabilistic representations and statistical learning methods has led to a large degree of integration and cross-fertilization between AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems. As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is valuable to investigate how to reap its benefits while avoiding potential pitfalls. The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures [61] and other projects and community efforts on AI impacts. These constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. The present document can be viewed as a natural continuation of these efforts, focusing on identifying research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law, and philosophy to computer security, formal methods and, of course, various branches of AI itself. The focus is on delivering AI that is beneficial to society and robust in the sense that the benefits are guaranteed: our AI systems must do what we want them to do. This document is an attempt to lay out some of the research topics that we think will be most useful to do now in order to shape the future impact of AI. We will surely find that some questions are less useful or timely than others, and some important ones are missing. We hope this guide will be a helpful source of suggestions, but also that potential grantees won't be discouraged from approaching us with similarly relevant topics we didn't think of. We will try to publish future versions that are up to date with progress in the field. We are very grateful to the many people who have contributed to this document, in particular Daniel Dewey, Stuart Russell, and Max Tegmark for their invaluable work on the research priorities document, Luke Muehlhauser for his list of potential strategic research projects, Nate Soares and MIRI for their technical agenda, and the MIRIxOxford research workshop analyzing and expanding on the MIRI technical agenda. 
Many people at FLI have contributed lists of additional research projects and directions, including Jim", "id": "c480acd9bb39a25d9faaf6b728d5613c"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "s42256-021-00298-y.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": [], "title": "Under review as a conference paper at ICLR 2022 TEST TIME ROBUSTIFICATION OF DEEP MODELS VIA ADAPTATION AND AUGMENTATION", "text": "INTRODUCTION Deep neural network models have achieved excellent performance on many machine learning problems, such as image classification, but are often brittle and susceptible to issues stemming from distribution shift. For example, deep image classifiers may degrade precipitously in accuracy when encountering input perturbations, such as noise or changes in lighting (Hendrycks & Dietterich, 2019) or domain shifts which occur naturally in real world applications (Koh et al., 2021) . Therefore, robustification of deep models against these test shifts is an important and active area of study. Most prior works in this area have focused on techniques for training time robustification, including utilizing larger models and datasets (Orhan, 2019) , various forms of adversarial training (Sagawa et al., 2020; Wong et al., 2020) , and aggressive data augmentation (Yin et al., 2019; Hendrycks et al., 2020; Li et al., 2021; Hendrycks et al., 2021a) . Employing these techniques requires modifying the training process, which may not be feasible if, e.g., it involves heavy computation or non-public data. Furthermore, these techniques do not rely on any information about the test points that the model must predict on, even though these test points may provide significant information for improving model robustness. Recently, several works have proposed methods for improving accuracy via adaptation after seeing the test data, typically by updating a subset of the model's weights (Sun et al., 2020) , normalization statistics (Schneider et al., 2020) , or both (Wang et al., 2021; Zhang et al., 2021) . Though effective at handling test shifts, these methods sometimes still require specialized training procedures, and they typically rely on extracting distributional information via batches or even entire sets of test inputs, thus introducing additional assumptions. Figure 1 : A schematic of our overall approach. Left: at test time, as detailed in Section 3, we have a trained model that outputs a probabilistic predictive distribution and has adaptable parameters θ, a single test input x, and a set of data augmentation functions {a 1 , . . . , a M }. Note that we do not assume access to the model training procedure or multiple test inputs for adaptation. We perform different augmentations to x and pass these augmented inputs to the model in order to estimate the marginal output distribution averaged over augmentations. Right: rather than using this distribution to make the final prediction, we instead perform a gradient update on the model to minimize the entropy of this marginal distribution, thus encouraging the model predictions to be invariant across different augmentations while maintaining confident predictions. The final prediction is then made on the original data point, i.e., the predictive distribution in the top right of the schematic. 
We are interested in studying and devising methods for improving model robustness that are \"plug and play\", i.e., they can be readily used with a wide variety of pretrained models and test settings. Such methods may simply constitute a different, more robust way to perform inference with preexisting models, under virtually the same assumptions as standard test time inference. In this work, we focus on methods for test time robustness, in which the specific test input may be leveraged in order to improve the model's prediction on that point. Though broad applicability is our primary goal, we also want methods that synergize with other robustification techniques, in order to achieve greater performance than using either set of techniques in isolation. To satisfy both of these desiderata, we devise a novel test time robustness method based on adaptation and augmentation. As illustrated in Figure 1 , when presented with a test point, we propose to adapt the model by augmenting the test point in different ways and ensuring that the model makes the same predictions across these augmentations, thus respecting the invariances encoded in the data augmentations. We further encourage the model to make confident predictions, thus arriving at the proposed method: minimize the marginal entropy of the model's predictions across the augmented versions of the test point. We refer to the proposed method as marginal entropy minimization with ensembled augmentations (MEME), and this is the primary contribution of our work. MEME makes direct use of pretrained models without any assumptions about their particular training procedure or architecture, while requiring only a single test input for adaptation. In Section 4, we demonstrate empirically that MEME consistently improves the performance of ResNet (He et al., 2016) and vision transformer (Dosovitskiy et al., 2021) models on several challenging ImageNet distribution shift benchmarks, achieving several new state-of-the-art results for these models in the setting in which only one test point is available. In particular, MEME consistently outperforms non adaptive marginal distribution predictions (between 1-10% improvement) on corruption and rendition shifts -tested by the ImageNet-C (Hendrycks & Dietterich, 2019) and ImageNet-R (Hendrycks et al., 2021a) datasets, respectively -indicating that adaptation plays a crucial role in improving predictive accuracy. MEME encourages both invariance across augmentations and confident predictions, and an ablation study in Section 4 shows that both components are important for maximal performance gains. Also, MEME is, to the best of our knowledge, the first adaptation method to improve performance (by 1-4% over standard model evaluation) on ImageNet-A (Hendrycks et al., 2021b) , demonstrating that MEME is more broadly applicable on a wide range of distribution shifts. \n RELATED WORK The general problem of distribution shift has been studied under a number of frameworks (Quiñonero Candela et al., 2009) , including domain adaptation (Shimodaira, 2000; Csurka, 2017; Wilson & Cook, 2020) , domain generalization (Blanchard et al., 2011; Muandet et al., 2013; Gulrajani & Lopez-Paz, 2021) , and distributionally robust optimization (Ben-Tal et al., 2013; Hu et al., 2018; Sagawa et al., 2020) , to name just a few. These frameworks typically leverage additional training or test assumptions in order to make the distribution shift problem more tractable. 
Largely separate from these frameworks, various empirical methods have also been proposed for dealing with shift, such as increasing the model and training dataset size or using heavy training augmentations (Orhan, 2019; Yin et al., 2019; Hendrycks et al., 2021a) . The focus of this work is complementary to these efforts: the proposed MEME method is applicable to a wide range of pretrained models, including those trained via robustness methods, and can achieve further performance gains via test time adaptation. Prior test time adaptation methods generally either make significant training or test time assumptions. Some methods update the model using batches or even entire datasets of test inputs, such as by computing batch normalization (BN) statistics on the test set (Li et al., 2017; Kaku et al., 2020; Nado et al., 2020; Schneider et al., 2020) , or minimizing the (conditional) entropy of model predictions across a batch of test data (Wang et al., 2021) . The latter approach is closely related to MEME. The differences are that MEME minimizes marginal entropy using single test points and data augmentation and adapts all of the model parameters rather than just those associated with normalization layers, thus not requiring multiple test points or specific model architectures. Other test time adaptation methods can be applied to single test points but require specific training procedures or models (Sun et al., 2020; Huang et al., 2020; Schneider et al., 2020) . Test time training (TTT) (Sun et al., 2020) requires a specialized model with a rotation prediction head, as well as a different procedure for training this model. Schneider et al. (2020) show that BN adaptation can be effective even with only one test point, and we refer to this approach as \"single point\" BN adaptation. As we discuss in Section 3, MEME synergizes well with single point BN adaptation. A number of works have noted that varying forms of strong data augmentation on the training set can improve the resulting model's robustness (Yin et al., 2019; Hendrycks et al., 2020; Li et al., 2021; Hendrycks et al., 2021a) . Data augmentations are also sometimes used on the test data directly by averaging the model's outputs across augmented copies of the test point (Krizhevsky et al., 2012; Shorten & Khoshgoftaar, 2019) , i.e., predicting according to the model's marginal output distribution. This technique, which we refer to as test time augmentation (TTA), has been shown to be useful both for improving model accuracy and calibration (Ashukha et al., 2020) as well as handling distribution shift (Molchanov et al., 2020) . We take this idea one step further by explicitly adapting the model such that its marginal output distribution has low entropy. This extracts an additional learning signal for improving the model, and furthermore, the adapted model can then make its final prediction on the clean test point rather than the augmented copies. We empirically show in Section 4 that these differences lead to improved performance over this non adaptive TTA baseline. \n ROBUSTNESS VIA ADAPTATION AND AUGMENTATION Data augmentations are typically used to train the model to respect certain invariances -e.g., changes in lighting or viewpoint do not change the underlying class label -but, especially when faced with distribution shift, the model is not guaranteed to obey the same invariances at test time. 
In this section, we introduce MEME, a method for test time robustness that adapts the model such that it respects these invariances on the test input. We use "test time robustness" specifically to refer to techniques that operate directly on pretrained models and single test inputs: single point BN adaptation and TTA, as described in Section 2, are examples of prior test time robustness methods. In the test time robustness setting, we are given a trained model $f_\theta : \mathcal{X} \to \mathcal{Y}$ with parameters $\theta \in \Theta$. We do not require any special training procedure and do not make any assumptions about the model, except that $\theta$ is adaptable and that $f_\theta$ produces a conditional output distribution $p_\theta(y|x)$ that is differentiable with respect to $\theta$. All standard deep neural network models satisfy these assumptions. A single point $x \in \mathcal{X}$ is presented to $f_\theta$, for which it must predict a label $\hat{y} \in \mathcal{Y}$ immediately. Note that this is precisely identical to the standard test time inference procedure for regular supervised learning models: in effect, we are simply modifying how inference is done, without any additional assumptions on the training process or on test time data availability. This makes test time robustness methods a simple "slot-in" replacement for the ubiquitous and standard test time inference process. We assume sampling access to a set of augmentation functions $\mathcal{A} \triangleq \{a_1, \ldots, a_M\}$ that can be applied to the test point $x$. We use these augmentations and the self-supervised objective detailed below to adapt the model before it predicts on $x$. When given a set of test inputs, the model adapts and predicts on each test point independently. We do not assume access to any ground truth labels. 
\n MARGINAL ENTROPY MINIMIZATION WITH ENSEMBLED AUGMENTATIONS Given a test point $x$ and the set of augmentation functions $\mathcal{A}$, we sample $B$ augmentations from $\mathcal{A}$ and apply them to $x$ in order to produce a batch of augmented data $\tilde{x}_1, \ldots, \tilde{x}_B$. The model's average, or marginal, output distribution with respect to the augmented points is given by
$$\bar{p}_\theta(y|x) \triangleq \mathbb{E}_{a \sim \mathcal{U}(\mathcal{A})}\!\left[ p_\theta\big(y \,|\, a(x)\big) \right] \approx \frac{1}{B} \sum_{i=1}^{B} p_\theta(y \,|\, \tilde{x}_i), \tag{1}$$
where the expectation is with respect to uniformly sampled augmentations $a \sim \mathcal{U}(\mathcal{A})$. What properties do we desire from this marginal distribution? To answer this question, consider the role that data augmentation typically serves during training. For each training point $(x^{\mathrm{train}}, y^{\mathrm{train}})$, the model $f_\theta$ is trained using multiple augmented forms of the input $\tilde{x}^{\mathrm{train}}_1, \ldots, \tilde{x}^{\mathrm{train}}_E$. $f$ is trained to obey the invariances between the augmentations and the label: no matter the augmentation on $x^{\mathrm{train}}$, $f$ should predict the same label $y^{\mathrm{train}}$, and it should do so confidently. We seek to devise a similar learning signal during test time, when no ground truth labels are available. That is, after adapting: (1) the model's predictions should be invariant across augmented versions of the test point, and (2) the model should be confident in its predictions, even for heavily augmented versions of the test point, due to the additional knowledge that all versions have the same underlying label. Optimizing the model for more confident predictions can be justified from the assumption that the true underlying decision boundaries between classes lie in low density regions of the data space (Grandvalet & Bengio, 2005). With these two goals in mind, we propose to adapt the model using the entropy of its marginal output distribution over augmentations (Equation 1), i.e.,
$$\ell(\theta; x) \triangleq H\!\left( \bar{p}_\theta(\cdot \,|\, x) \right) = - \sum_{y \in \mathcal{Y}} \bar{p}_\theta(y|x) \log \bar{p}_\theta(y|x). \tag{2}$$
Optimizing this objective encourages both confidence and invariance to augmentations, since the entropy of $\bar{p}_\theta(\cdot|x)$ is minimized when the model outputs the same (confident) prediction regardless of the augmentation. Given that $\theta$ is adaptable and $p_\theta(y|x)$ is differentiable with respect to $\theta$, we can directly use gradient based optimization to adapt $\theta$ according to this objective. We use only one gradient step per test point, because empirically we found this to be sufficient for improved performance while being more computationally efficient. After this step, we can use the adapted model, which we denote $f_{\tilde{\theta}}$, to predict on the original test input $x$. Algorithm 1 presents the overall method MEME for test time adaptation. Though prior test time adaptation methods must carefully choose which parameters to adapt in order to avoid degenerate solutions (Wang et al., 2021), our adaptation procedure simply adapts all of the model's parameters $\theta$ (line 3). Note that, as discussed above, the model adapts using augmented data but makes its final prediction on the original point (line 4), which may be easier to predict on. 
\n COMPOSING MEME WITH PRIOR METHODS An additional benefit of MEME is that it synergizes with other approaches for handling distribution shift. In particular, MEME can be composed with prior methods for training robust models and adapting model statistics, thus leveraging the performance improvements of each technique. Pretrained robust models. Since MEME makes no assumptions about, or modifications to, the model training procedure, performing adaptation on top of pretrained robust models, such as those trained with heavy data augmentations, is as simple as using any other pretrained model. Crucially, we find that, in practice, the set of augmentations $\mathcal{A}$ that we use at test time does not have to match the augmentations that were used to train the model. This is important because we require a few properties from the test time augmentations: that they can be easily sampled and are applied directly to the model input $x$. These properties do not hold for, e.g., data augmentation techniques based on image translation models, such as DeepAugment (Hendrycks et al., 2021a), or feature mixing, such as moment exchange (Li et al., 2021). However, we can still use models trained with these data augmentation techniques as our starting point for adaptation, thus allowing us to improve upon their state-of-the-art results. As noted above, using pretrained models is not as easily accomplished for adaptation methods which require complicated or specialized training procedures and model architectures, such as TTT (Sun et al., 2020) or ARM (Zhang et al., 2021). In our experiments, we use AugMix as our set of augmentations (Hendrycks et al., 2020), as it satisfies the above properties and still yields significant diversity when applied, as depicted in Figure 2. Adapting BN statistics. Schneider et al. (2020) showed that, even when presented with just a single test point, partially adapting the estimated mean and variance of the activations in each batch normalization (BN) layer of the model can still be effective in some cases for handling distribution shift.
In this setting, to prevent overfitting to the test point, the channelwise mean and variance $[\mu_{\mathrm{test}}, \sigma^2_{\mathrm{test}}]$ estimated from this point are mixed with the mean and variance $[\mu_{\mathrm{train}}, \sigma^2_{\mathrm{train}}]$ computed during training according to a prior strength $N$, i.e.,
$$\mu \triangleq \frac{N}{N+1}\,\mu_{\mathrm{train}} + \frac{1}{N+1}\,\mu_{\mathrm{test}}, \qquad \sigma^2 \triangleq \frac{N}{N+1}\,\sigma^2_{\mathrm{train}} + \frac{1}{N+1}\,\sigma^2_{\mathrm{test}}.$$
This technique is also straightforward to combine with MEME: we simply use the adapted BN statistics whenever computing the model's output distribution. That is, we adapt the BN statistics alongside all of the model parameters for MEME. Following the suggestion in Schneider et al. (2020), we set $N = 16$ for all of our experiments in the next section. 
\n EXPERIMENTS Our experiments aim to answer the following questions: (1) How does MEME compare to prior methods for test time adaptation, which make additional training and test assumptions, and test time robustness? (2) Can MEME be successfully combined with a wide range of pretrained models? (3) Which aspects of MEME are the most important for strong performance? We conduct experiments on two distribution shift benchmarks for CIFAR-10 (Krizhevsky, 2009) and three distribution shift benchmarks for ImageNet (Russakovsky et al., 2015). Specifically, for CIFAR-10, we evaluate on the CIFAR-10-C (Hendrycks & Dietterich, 2019) and CIFAR-10.1 (Recht et al., 2018) test sets, and for ImageNet, we evaluate on the ImageNet-C (Hendrycks & Dietterich, 2019), ImageNet-R (Hendrycks et al., 2021a), and ImageNet-A (Hendrycks et al., 2021b) test sets. To answer question (1), we compare to test time training (TTT) (Sun et al., 2020) in the CIFAR-10 experiments, for which we train ResNet-26 models following their protocol and specialized architecture. In the ImageNet experiments, we also compare to BN adaptation using batches of test inputs (Schneider et al., 2020): we use batches of 256 test inputs and set the prior strength N = 256 accordingly. For Tent, we use test batch sizes of 64 and, for ResNet-50 models, test both "online" adaptation, where the model adapts continually through the entire evaluation, and "episodic" adaptation, where the model is reset after each test batch (Wang et al., 2021). Note that the evaluation protocols are different for these two methods: whereas MEME is tasked with predicting on each test point immediately after adaptation, BN adaptation predicts on a batch of 256 test points after computing BN statistics on the batch, and Tent predicts on a batch of 64 inputs after adaptation but also, in the online setting, continually adapts throughout evaluation. In all experiments, we further compare to single point BN adaptation (Schneider et al., 2020) and the TTA baseline that simply predicts according to the model's marginal output distribution over augmentations $\bar{p}_\theta(y|x)$ (Equation 1) (Krizhevsky et al., 2012; Ashukha et al., 2020). Full details on our experimental protocol are provided in Appendix A. To answer question (2), we apply MEME on top of multiple pretrained models with different architectures, trained via several different procedures. For CIFAR-10, we train our own ResNet-26 (He et al., 2016) models. For ImageNet, we use the best performing ResNet-50 robust models from prior work, which includes those trained with DeepAugment and AugMix augmentations (Hendrycks et al., 2021a) as well as those trained with moment exchange and CutMix (Li et al., 2021). To evaluate the generality of prior test time robustness methods and MEME, we also evaluate the small robust vision transformer (RVT*-small), which provides superior performance on all three ImageNet distribution shift benchmarks compared to the robust ResNet-50 models (Mao et al., 2021). Finally, to answer (3), we conduct ablative studies in subsection 4.2: first to determine the relative importance of maximizing confidence (via entropy minimization) versus enforcing invariant predictions across augmented copies of each test point, and second to determine the importance of the particular augmentation functions used.
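Before turning to the results, here is a minimal PyTorch-style sketch of the per-example test-time step described in Section 3: single-point BN mixing with prior strength N, one gradient step on the marginal entropy over B augmented copies, then prediction on the clean input. The `augment` function stands in for an AugMix-like stochastic transform and the optimizer settings are placeholders; this is an illustration of the described procedure, not the authors' released implementation.

```python
# Minimal sketch of MEME's test-time step (cf. Algorithm 1); illustrative only.
import copy
import math
import torch
import torch.nn.functional as F

@torch.no_grad()
def mix_bn_stats(model: torch.nn.Module, x_aug: torch.Tensor, N: int = 16):
    """Approximate single-point BN adaptation (Schneider et al., 2020): with BN
    momentum set to 1/(N+1), one forward pass in train mode updates each running
    statistic to roughly N/(N+1)*train + 1/(N+1)*test (up to PyTorch's
    unbiased-variance convention)."""
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.momentum = 1.0 / (N + 1)
    model.train()
    model(x_aug)   # updates running_mean / running_var in place
    model.eval()   # subsequent forward passes use the mixed statistics

def meme_step(model, x, augment, B: int = 32, lr: float = 0.005):
    """Adapt a copy of `model` on a single test input x (shape [C, H, W]) by
    minimizing the entropy of its marginal output distribution over B augmented
    copies, then predict on the original (clean) input."""
    adapted = copy.deepcopy(model)                 # episodic: fresh copy per test point
    x_aug = torch.stack([augment(x) for _ in range(B)])
    mix_bn_stats(adapted, x_aug)

    optimizer = torch.optim.SGD(adapted.parameters(), lr=lr)
    log_probs = F.log_softmax(adapted(x_aug), dim=-1)          # [B, num_classes]
    log_marginal = torch.logsumexp(log_probs, dim=0) - math.log(B)   # log of Eq. (1)
    entropy = -(log_marginal.exp() * log_marginal).sum()             # Eq. (2)
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()                                            # one gradient step

    with torch.no_grad():
        return adapted(x.unsqueeze(0)).argmax(dim=-1)           # predict on clean x
```

The deep copy keeps the adaptation episodic, matching the protocol in which each test point is adapted to and predicted on independently of all others.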
The comparison to the non adaptive TTA baseline also helps determine whether simply augmenting the test point is sufficient or if adaptation is additionally helpful. 
\n MAIN RESULTS We summarize results for CIFAR-10, CIFAR-10.1, and CIFAR-10-C in Table 1 . MEME also provides a larger performance gain on CIFAR-10.1 compared to TTT. We find that the non adaptive TTA baseline is competitive for these relatively simple test sets, though it is worse than MEME for CIFAR-10-C. Of these three test sets, CIFAR-10-C is the only benchmark that explicitly introduces distribution shift, which suggests that adaptation is useful when the test shifts are more prominent. 
Table 1 (excerpt; CIFAR-10 / CIFAR-10.1 / CIFAR-10-C test error): 7.3 (−1.9) / 14.7 (−3.7) / 19.6 (−2.9); + Joint training (Sun et al., 2020): 8.1 / 16.7 / 22.8; + TTT (Sun et al., 2020): 7.9 (−0.2) / 15.9 (−0.8) / 21.5 (−1.3). 
Both TTA and MEME are also effective at improving performance for the original CIFAR-10 test set where there is no distribution shift, providing further support for the widespread use of augmentations in standard evaluation protocols (Krizhevsky et al., 2012; Ashukha et al., 2020). We summarize results for ImageNet-C, ImageNet-R, and ImageNet-A in Table 2 . As prior work has shown (Schneider et al., 2020; Wang et al., 2021), accessing multiple test points can be powerful for benchmarks such as ImageNet-C and ImageNet-R, in which inferred statistics from the test input distribution may aid in prediction. In the case of Tent, which adapts online, the model has adapted using the entire test set by the end of evaluation. However, these methods do not help, and oftentimes even hurt, for ImageNet-A. Furthermore, we find that these methods are less effective with the RVT*-small model, which may indicate their sensitivity to model architecture choices. Therefore, for this model, we also test a modification of Tent which adapts all parameters, and we find that this version of Tent works better for ImageNet-C but is significantly worse for ImageNet-R. MEME results in substantial improvement for ImageNet-A and is competitive with TTA on this problem. No prior test time adaptation methods have reported improvements on ImageNet-A, and some have reported explicit negative results (Schneider et al., 2020). As discussed, it is reasonable for adaptation methods that rely on multiple test points to achieve greater success on other benchmarks such as ImageNet-C, in which a batch of inputs provides significant information about the specific corruption that must be dealt with. In contrast, ImageNet-A does not have such obvious characteristics associated with the input distribution, as it is simply a collection of images that are difficult to classify. As MEME instead extracts a learning signal from single test points, it is, to the best of our knowledge, the first test time adaptation method to report successful results on this testbed. When used on top of a model trained with moment exchange and CutMix (Li et al., 2021), MEME achieves state-of-the-art performance among ResNet-50 models and single test point methods. TTA generally offers larger performance gains on ImageNet-A and also results in the highest overall accuracy when combined with the RVT*-small model; however, TTA performs worse than MEME on ImageNet-R and consistently decreases accuracy on ImageNet-C compared to standard evaluation. We view the consistency with which MEME outperforms the best prior methods, which change across different test sets, as a major advantage of the proposed method. 
\n ABLATIVE STUDY MEME increases model robustness at test time via adaptation and augmentation.
In this section, we ablate the adaptation procedure, and in Appendix B we ablate the use and choice of augmentations. From the results above, we conclude that adaptation generally provides additional benefits beyond simply using TTA to predict via the marginal output distribution $\bar{p}_\theta(y|x)$. However, we can disentangle two distinct self-supervised learning signals that may be effective for adaptation: encouraging invariant predictions across different augmentations of the test point, and encouraging confidence via entropy minimization. The marginal entropy objective in Equation 2 encapsulates both of these learning signals, but it cannot easily be decomposed into these pieces. Thus, we instead use two ablative adaptation methods that each only make use of one of these learning signals. First, we consider optimizing the pairwise cross entropy between each pair of augmented points, i.e.,
$$\ell_{\mathrm{PCE}}(\theta; x) \triangleq \frac{1}{B(B-1)} \sum_{i=1}^{B} \sum_{j \neq i} \left( - \sum_{y \in \mathcal{Y}} p_\theta(y \,|\, \tilde{x}_i) \log p_\theta(y \,|\, \tilde{x}_j) \right),$$
where $\tilde{x}_i$ again refers to the $i$-th sampled augmentation applied to $x$. Intuitively, this loss function encourages the model to adapt such that it produces the same predictive distribution for all augmentations of the test point, but it does not encourage the model to produce confident predictions. Conversely, as an objective that encourages confidence but not invariance, we also consider optimizing conditional entropy on the batch of augmented points, i.e.,
$$\ell_{\mathrm{CE}}(\theta; x) \triangleq \frac{1}{B} \sum_{i=1}^{B} H\!\left( p_\theta(\cdot \,|\, \tilde{x}_i) \right).$$
This ablation is effectively a version of the episodic variant of Tent (Wang et al., 2021) that produces augmented copies of a single test point rather than assuming access to a test batch. We first evaluate these ablations on the CIFAR-10 test sets. We use the same adaptation procedure outlined in Algorithm 1, with $\ell$ replaced with the above objectives, and we keep the same hyperparameter values. The results are presented in Table 3 . We see that MEME, i.e., marginal entropy minimization, generally performs better than adaptation with either of the alternative objectives. This supports the hypothesis that both invariance across augmentations and confidence are important learning signals for self-supervised adaptation. When faced with CIFAR-10.1, we see poor performance from the pairwise cross entropy based adaptation method. On the original CIFAR-10 test set and CIFAR-10-C, the ablations perform nearly identically and uniformly worse than MEME. To further test the CE ablation, we also evaluate it on the ImageNet test sets for the RVT*-small model. We find that, similarly, minimizing conditional entropy generally improves performance compared to the baseline evaluation. MEME is more performant for ImageNet-C and ImageNet-R, again indicating the benefits of encouraging invariance to augmentations. Adaptation via CE performs slightly better for ImageNet-A, though for this problem, TTA is still the best method. 
\n DISCUSSION We presented MEME, a method for test time robustification against distribution shift via adaptation and augmentation. MEME does not require access or changes to the model training procedure and is thus broadly applicable for a wide range of pretrained models. Furthermore, MEME adapts at test time using single test inputs, thus it does not assume access to multiple test points as in several recent methods for test time adaptation (Schneider et al., 2020; Wang et al., 2021; Zhang et al., 2021).
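For reference, the two ablation objectives introduced in the ablative study above can be sketched directly in terms of the model's log-probabilities on the B augmented copies of a single test point. This is an illustrative reimplementation under the paper's notation and assumptions, not the authors' released code.

```python
# Illustrative sketch of the two ablation objectives, given `log_probs` of
# shape [B, num_classes] computed on B augmented copies of one test point.
import torch

def pairwise_cross_entropy(log_probs: torch.Tensor) -> torch.Tensor:
    """L_PCE: average cross entropy over every ordered pair (i, j), i != j.
    Encourages identical predictions across augmentations, but not confidence."""
    probs = log_probs.exp()                                            # [B, K]
    B = log_probs.shape[0]
    pairwise = -(probs.unsqueeze(1) * log_probs.unsqueeze(0)).sum(-1)  # [B, B]: H(p_i, p_j)
    off_diagonal = pairwise.sum() - pairwise.diagonal().sum()
    return off_diagonal / (B * (B - 1))

def conditional_entropy(log_probs: torch.Tensor) -> torch.Tensor:
    """L_CE: average entropy of each augmented prediction.
    Encourages confidence, but not agreement across augmentations."""
    return -(log_probs.exp() * log_probs).sum(-1).mean()
```

Either function can be dropped in place of the marginal entropy in the adaptation step to reproduce the corresponding ablation.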
On a range of distribution shift benchmarks for CIFAR-10 and ImageNet classification, and for both ResNet and vision transformer models, MEME consistently improves performance at test time and achieves several new state-of-the-art results for these models in the single test point setting. Inference via MEME is more computationally expensive than standard model inference, primarily because adaptation is performed per test point and thus inference cannot be batched. When deployed in the real world, it is natural to expect that test points will arrive one at a time and batched inference will not be possible. However, MEME is also more computationally expensive due to its augmentation and adaptation procedure. One interesting direction for future work is to develop techniques for selectively determining when to adapt the model in order to achieve more efficient inference. For example, with well calibrated models (Guo et al., 2017) , we may run simple \"feedforward\" inference when the prediction confidence is over a certain threshold, thus achieving better efficiency. Additionally, it would be interesting to explore MEME in the test setting where the model is allowed to continually adapt as more test data is observed. In our preliminary experiments in this setting, MEME tended to lead to degenerate solutions, e.g., the model predicting a constant label with maximal confidence, and this may potentially be rectified by carefully choosing which parameters to adapt (Wang et al., 2021) or regularizing the model such that it does not change too drastically from the pretrained model. Distribution shift, in general, lies at the heart of many ethical concerns in machine learning. When machine learning models and research do not adhere to ethical principles such as \"contribute to society and to human well-being\", \"avoid harm\", and \"be fair and take action to avoid discrimination\", it can oftentimes be attributed at least partially to issues stemming from distribution shift. As such, research into ways to combat and mitigate shift contribute to furthering the ethical goals and principles laid out by this conference and the broader community. Related to this work, we may imagine that models that are more robust or adaptable to shift may produce more fair results when considering underrepresented subpopulations and could be more trustworthy in safety critical applications. However, as mentioned in Section 5, the proposed method is also more computationally intensive, and care must be taken in general to not place the most powerful machine learning tools exclusively in the hands of the most privileged and resource rich individuals and organizations. Beyond this, we do not see any immediate ethical concerns regarding the content, presentation, or methodology of this work. \n REPRODUCIBILITY STATEMENT The proposed method is relatively simple and, to the best of our ability, explained fully in Section 3 and Algorithm 1. This explanation is bolstered by a complete description of all hyperparameters in Appendix A as well as the example code provided in the supplementary materials. As briefly explained in the README in the example code, all datasets were downloaded from publicly accessible links and preprocessed only following the standard protocols. Upon publication, we will include a link to the full (non anonymous) code release containing full instructions for setting up the code, downloading the datasets, and reproducing the results. 
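As a concrete illustration of the selective-adaptation idea raised in the Discussion above, a confidence-gated wrapper could skip adaptation whenever the unadapted prediction is already confident. The threshold value and the reuse of the hypothetical `meme_step` helper from the earlier sketch are assumptions for illustration; the paper does not evaluate this variant.

```python
# Sketch of confidence-gated selective adaptation (a possible future direction).
import torch
import torch.nn.functional as F

def predict_with_gating(model, x, augment, confidence_threshold: float = 0.9):
    with torch.no_grad():
        probs = F.softmax(model(x.unsqueeze(0)), dim=-1)
    if probs.max() >= confidence_threshold:
        return probs.argmax(dim=-1)        # cheap feedforward path
    return meme_step(model, x, augment)    # fall back to test-time adaptation
```

Whether such gating preserves the robustness gains would depend on how well calibrated the base model is, as the Discussion notes.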
\n A EXPERIMENTAL PROTOCOL We select hyperparameters using the four disjoint validation corruptions provided with CIFAR-10-C and ImageNet-C (Hendrycks & Dietterich, 2019) . As the other benchmarks are only test sets and do not provide validation sets, we use the same hyperparameters found using the corruption validation sets and do not perform any additional tuning. For the ResNet models that we evaluate, we use stochastic gradients as the update rule G; for ResNet-26 models, we set the number of augmentations B = 32 and the learning rate η = 0.005; and for ResNet-50 models, we set B = 64 and η = 0.00025. For the robust vision transformer, we use AdamW (Loshchilov & Hutter, 2019) as the update rule G, with learning rate η = 0.00001 and weight decay 0.01, and B = 64. In the CIFAR evaluation, we compare to TTT, which, as noted, can also be applied to single test inputs but requires a specialized training procedure (Sun et al., 2020) . Thus, the ResNet-26 model we use for our method closely follows the modifications that Sun et al. (2020) propose, in order to provide a fair point of comparison. In particular, Sun et al. (2020) elect to use group normalization (Wu & He, 2018) rather than BN, thus single point BN adaptation is not applicable for this model architecture. As noted before, TTT also requires the joint training of a separate rotation prediction head, thus further changing the model architecture, while MEME directly adapts the standard pretrained model. The TTA results are obtained using the same AugMix augmentations as for MEME. The single point BN adaptation results use N = 16, as suggested by Schneider et al. (2020) . As noted, the BN adaptation results (using multiple test points) are obtained using N = 256 as the prior strength and batches of 256 test inputs for adaptation. For Tent, we use the hyperparameters suggested in Wang et al. (2021) : we use stochastic gradients with learning rate 0.00025 and momentum 0.9, the adaptation is performed with test batches of 64 inputs, and the method is run online, i.e., prediction and adaptation occur simultaneously and the model is allowed to continuously adapt through the entire test epoch. Since Wang et al. (2021) did not experiment with transformer models, we also attempted to run Tent with Adam (Kingma & Ba, 2015) and AdamW (Loshchilov & Hutter, 2019) and various hyperparameters for the RVT * -small model; however, we found that this generally resulted in worse performance than using stochastic gradient updates with the aforementioned hyperparameters. We obtain the baseline ResNet-50 parameters directly from the torchvision library. The parameters for the ResNet-50 trained with DeepAugment and AugMix are obtained from https://drive.google.com/file/d/1QKmc_p6-qDkh51WvsaS9HKFv8bX5jLnP. The parameters for the ResNet-50 trained with moment exchange and CutMix are obtained from https://drive.google.com/file/d/1cCvhQKV93pY-jj8f5jITywkB9EabiQDA. The parameters for the small robust vision transformer (RVT * -small) model are obtained from https://drive.google.com/file/d/1g40huqDVthjS2H5sQV3ppcfcWEzn9ekv. \n B ADDITIONAL EXPERIMENTS In this section, we analyze the importance of using augmentations during adaptation, study the tradeoffs between efficiency and accuracy for MEME, and present results with ResNext101 models (Xie et al., 2017; Mahajan et al., 2018; Orhan, 2019) on ImageNet-A. \n B.1 ANALYSIS ON AUGMENTATIONS One may first wonder: are augmentations needed in the first place? 
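For convenience, the adaptation hyperparameters stated in this appendix can be collected into a single reference structure. The dictionary layout below is my own summary; the values are those reported above.

```python
# Summary of the per-model MEME adaptation settings reported in Appendix A.
MEME_HPARAMS = {
    "ResNet-26 (CIFAR-10)":  {"update_rule": "SGD",   "lr": 0.005,   "num_augmentations": 32},
    "ResNet-50 (ImageNet)":  {"update_rule": "SGD",   "lr": 0.00025, "num_augmentations": 64},
    "RVT*-small (ImageNet)": {"update_rule": "AdamW", "lr": 0.00001, "weight_decay": 0.01,
                              "num_augmentations": 64},
    "single_point_bn_prior_N": 16,   # used alongside MEME, per Schneider et al. (2020)
}
```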
In the test time robustness setting when only one test point is available, how would simple entropy minimization fare? We answer this question in Table 4 by evaluating the episodic variant of Tent (i.e., with model resetting after each batch) with a test batch size of 1. This approach is also analogous to a variant of MEME that does not use augmentations, since for one test point and no augmented copies, conditional and marginal entropy are the same. Similar to MEME, we also incorporate single point BN adaptation with N = 16, in place of the standard BN adaptation that Tent typically employs using batches of test inputs. The results in Table 4 indicate that entropy minimization on a single test point generally provides no additional performance gains beyond just single point BN adaptation. This empirically shows that using augmentations is important for achieving the reported results. 
Table 4 (excerpt): 69.9 (−6.8) / 58.8 (−5.1) / 99.1 (−0.9); + Tent (episodic, batch size 1) (Wang et al., 2021): 69.1 (−5.7) / 59.4 (−3.3) / 89.0 (−2.9); + Tent (episodic, batch size 1) (Wang et al., 2021): 70.9 (−3.9) / 62.6 (−1.9) / 91.1 (−0.8); RVT*-small (Mao et al., 2021): 49 
We also wish to understand the importance of the choice of augmentation functions A. As mentioned, we used AugMix (Hendrycks et al., 2020) in the previous experiments as it best fit our criteria: AugMix requires only the input x, and randomly sampled augmentations lead to diverse augmented data points. A simple alternative is to instead use the "standard" set of augmentations commonly used in ImageNet training, i.e., random resized cropping and random horizontal flipping. We evaluate this ablation of using MEME with standard augmentations also on the CIFAR-10 test sets, again with the same hyperparameter values. From the results in Table 5 , we can see that MEME is still effective with simpler augmentation functions. This is true particularly for the cases where there is no test shift, as in the original CIFAR-10 test set, or subtle shifts as in CIFAR-10.1; however, for the more severe and systematic CIFAR-10-C shifts, using heavier AugMix data augmentations leads to greater performance gains over the standard augmentations. Furthermore, this ablation was conducted using the ResNet-26 model, which was trained with standard augmentations; for robust models such as those in Table 2 , AugMix may offer greater advantages at test time since these models were exposed to heavy augmentations during training. 
\n B.2 ANALYZING THE TRADEOFF BETWEEN EFFICIENCY AND ACCURACY In Figure 3 , we analyze the % test error of MEME adaptation on ImageNet-R as a function of the efficiency of adaptation, measured in seconds per evaluation. We achieve various tradeoffs by varying the number of augmented copies B = {1, 2, 4, 8, 16, 32, 64, 128}. We note that small values of B such as 4 and 8 can already provide significant performance gains, indicating that a practical tradeoff between efficiency and accuracy is possible. For large B, the wall clock time is dominated by computing the augmentations; in our implementation, we do not compute augmentations in 
\n B.3 EVALUATING RESNEXT101 MODELS ON IMAGENET-A ResNext-101 models (Xie et al., 2017) have been found to achieve higher accuracies on ImageNet-A (Hendrycks et al., 2021b), particularly when trained with massive scale weakly supervised pretraining (Mahajan et al., 2018; Hendrycks et al., 2021a). In this section, we evaluate whether MEME can successfully adapt these models and further improve performance on this challenging test set.
B.2 ANALYZING THE TRADEOFF BETWEEN EFFICIENCY AND ACCURACY

In Figure 3, we analyze the % test error of MEME adaptation on ImageNet-R as a function of the efficiency of adaptation, measured in seconds per evaluation. We achieve various tradeoffs by varying the number of augmented copies B = {1, 2, 4, 8, 16, 32, 64, 128}. We note that small values of B such as 4 and 8 can already provide significant performance gains, indicating that a practical tradeoff between efficiency and accuracy is possible. For large B, the wall clock time is dominated by computing the augmentations; in our implementation, we do not compute augmentations in parallel, though in principle this is possible for AugMix and should improve efficiency overall. These experiments used four Intel Xeon Skylake 6130 CPUs and one NVIDIA TITAN RTX GPU.

B.3 EVALUATING RESNEXT-101 MODELS ON IMAGENET-A

ResNext-101 models (Xie et al., 2017) have been found to achieve higher accuracies on ImageNet-A (Hendrycks et al., 2021b), particularly when trained with massive scale weakly supervised pretraining (Mahajan et al., 2018; Hendrycks et al., 2021a). In this section, we evaluate whether MEME can successfully adapt these models and further improve performance on this challenging test set. We use the same hyperparameters as for the robust vision transformer with no additional tuning: AdamW (Loshchilov & Hutter, 2019) as the update rule G, learning rate η = 0.00001, weight decay 0.01, and B = 64. We obtain the baseline ResNext-101 (32x8d) parameters, pretrained on ImageNet, directly from the torchvision library. We also evaluate a ResNext-101 (32x8d) pretrained with weakly supervised learning (WSL) on billions of Instagram images (Mahajan et al., 2018), and we obtained the parameters from https://download.pytorch.org/models/ig_resnext101_32x8-c38310e5.pth. For the WSL model, we did not use single point BN adaptation as we found this technique to be actually harmful to performance, and this corroborates previous findings (Schneider et al., 2020).

Table 6 summarizes the results. We can see that, similar to Table 2, both TTA and MEME significantly improve upon the baseline model evaluation. TTA performs best for the baseline ResNext-101 model. However, MEME ultimately achieves the best accuracy by a significant margin, as it is more successful at adapting the WSL model, which has a much higher accuracy. This suggests that combining MEME with other large pretrained models may be an interesting direction for future work.

C FULL CIFAR-10-C AND IMAGENET-C RESULTS

In the following tables, we present test results broken down by corruption and level for CIFAR-10-C for the methods evaluated in Table 1. We omit joint training and TTT because these results are available from Sun et al. (2020). Our test results for ImageNet-C are provided in the CSV files in the supplementary material.

Table 11: Test error (%) on CIFAR-10-C level 1 corruptions.
gauss shot impul defoc glass motn zoom snow frost fog brit contr elast pixel jpeg
ResNet-26: 20.8 16.5 15.8 9.2 38.9 11.8 12.8 13.9 13.4 9.7 9.4 9.6 13.1 12.0 16.4
+ TTA: 15.8 12.8 11.8 7.3 35.1 10.8 12.5 11.0 10.8 7.4 7.4 7.7 11.4 9.2 12.5
+ MEME (ours): 16.1 12.9 11.9 7.4 34.7 10.4 12.1 11.0 10.7 7.4 7.3 7.5 10.9 9.2 12.5

Figure 2: We visualize augmentations of a randomly chosen data point from the "Gaussian Noise level 3" ImageNet-C test set. Even for a robust model trained with heavy data augmentations (Hendrycks et al., 2021a), both its predictive accuracy and confidence drop sharply when encountering test distribution shift. As shown in the bottom two rows, these drops can be remedied via MEME.

Figure 3: Plotting MEME efficiency as seconds per evaluation (x axis) and % test error on ImageNet-R (y axis) for the ResNet-50 models (left) and RVT*-small (right) while varying B = {1, 2, 4, 8, 16, 32, 64, 128}. Note the log scale on the x axis.

Algorithm 1: Test time robustness via MEME
Require: trained model f_θ, test point x, # augmentations B, learning rate η, update rule G
1: Sample a_1, ..., a_B i.i.d. ∼ U(A) and produce augmented points x̃_i = a_i(x) for i ∈ {1, ..., B}
2: Compute the Monte Carlo estimates p̄(y) = (1/B) Σ_{i=1}^{B} p_θ(y | x̃_i) ≈ p̄_θ(y | x) and ℓ̃ = H(p̄) ≈ ℓ(θ; x)
3: Adapt model parameters via the update rule θ ← G(θ, η, ℓ̃)
4: Predict ŷ = arg max_y p_θ(y | x)
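Below is a minimal sketch of Algorithm 1 in PyTorch, under stated assumptions: model maps a float image batch to logits, augment maps a single float image (C, H, W) to an augmented copy of the same shape, and optimizer is the update rule G over the model's parameters. The function name and structure are ours; the released implementation may differ, for example in how normalization statistics are handled.

```python
# Minimal sketch of the single-test-point adaptation step (Algorithm 1), assuming
# inputs are already preprocessed float tensors. Names are illustrative, not official.
import torch
import torch.nn.functional as F

def meme_adapt_and_predict(model, optimizer, augment, x, num_augments=32):
    """Adapt all parameters on a single test point x, then predict its label."""
    model.eval()  # keep normalization layers in inference mode; gradients still flow
    # Step 1: sample B augmented copies of the single test input.
    x_aug = torch.stack([augment(x) for _ in range(num_augments)])
    # Step 2: Monte Carlo estimate of the marginal output distribution and its entropy.
    probs = F.softmax(model(x_aug), dim=-1)              # (B, num_classes)
    marginal = probs.mean(dim=0)                         # approximates the marginal p(y | x)
    marginal_entropy = -(marginal * (marginal + 1e-12).log()).sum()
    # Step 3: one gradient step on the marginal entropy (the update rule G).
    optimizer.zero_grad()
    marginal_entropy.backward()
    optimizer.step()
    # Step 4: predict on the original test point with the adapted parameters.
    with torch.no_grad():
        prediction = model(x.unsqueeze(0)).argmax(dim=-1).item()
    return prediction  # the caller restores the original parameters before the next point
```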
For the CIFAR-10 experiments, we evaluate on the CIFAR-10.1 and CIFAR-10-C test sets, and for ImageNet, we evaluate on the ImageNet-C (Hendrycks & Dietterich, 2019), ImageNet-R (Hendrycks et al., 2021a), and ImageNet-A (Hendrycks et al., 2021b) test sets. To answer question (1), we compare to test time training (TTT) (Sun et al., 2020) in the CIFAR-10 experiments, for which we train ResNet-26 models following their protocol and specialized architecture. We do not compare to TTT for the ImageNet experiments due to the computational demands of training state-of-the-art models and because Sun et al. (2020) do not report competitive ImageNet results. For the ImageNet experiments, we compare to Tent (Wang et al., 2021) and BN adaptation, which can be used with pretrained models but require multiple test inputs (or even the entire test set) for adaptation. We provide BN adaptation with 256 test inputs at a time, following Schneider et al. (2020).

Results are reported in Table 1, with full CIFAR-10-C results in Appendix C. We use indentations to indicate composition, e.g., TTT is performed at test time on top of their specialized joint training procedure. Across all corruption types in CIFAR-10-C, MEME consistently improves test error compared to the baselines, non-adaptive TTA, and TTT.

Table 1: Results for the original CIFAR-10 test set, CIFAR-10.1, and CIFAR-10-C. MEME outperforms TTT despite not making any training assumptions. Results from Sun et al. (2020).
CIFAR-10 Error (%) | CIFAR-10.1 Error (%) | CIFAR-10-C Average Error (%)
ResNet-26 (He et al., 2016): 9.2 | 18.4 | 22.5
+ TTA: 7.3 (−1.9) | 14.8 (−3.6) | 19.9 (−2.6)
+ MEME (ours)

Table 2: Test results for ImageNet-C, ImageNet-R, and ImageNet-A. MEME achieves new state-of-the-art performance on each benchmark for ResNet-50 models for the single test point setting. For RVT*-small, MEME substantially improves performance across all benchmarks and reaches a new state of the art for ImageNet-C and ImageNet-R.
ImageNet-C mCE ↓ | ImageNet-R Error (%) | ImageNet-A Error (%)

Results are reported in Table 2, with complete ImageNet-C results in Appendix C. We again use indentations to indicate composition, e.g., the best results on ImageNet-C for our setting are attained through a combination of starting from a model trained with DeepAugment and AugMix (Hendrycks et al., 2021a) and using MEME on top. For both ImageNet-C and ImageNet-R, and for both the ResNet-50 and RVT*-small models, combining MEME with robust training techniques leads to new state-of-the-art performance among methods that observe only one test point at a time. We highlight in gray the methods that require multiple test points for adaptation, and we list in bold the best results from these methods which outperform the test time robustness methods. As Table 2 and prior work both show ...

Table 3: Ablating the adaptation objective to test pairwise cross entropy and conditional entropy (CE) based adaptation. MEME generally performs the best, indicating that both encouraging invariance across augmentations and confidence are helpful in adapting the model.
CIFAR-10 Error (%) | CIFAR-10.1 Error (%) | CIFAR-10-C Average Error (%)
ResNet-26 (He et al., 2016): 9.2 | 18.4 | 22.5
+ MEME (ours): 7.3 (−1.9) | 14.7 (−3.7) | 19.6 (−2.9)
− (Equation 2) + PCE: 7.6 (−1.6) | 15.3 (−3.1) | 20.0 (−2.5)
− (Equation 2) + CE: 7.6 (−1.6) | 14.7 (−3.7) | 20.0 (−2.5)
ImageNet-C mCE ↓ | ImageNet-R Error (%) | ImageNet-A Error (%)
RVT*-small (Mao et al., 2021): 49.4 | 52.3 | 73.9
+ MEME (ours): 40.6 (−8.8) | 43.8 (−8.5) | 69.8 (−4.1)
− (Equation 2) + CE: 41.2 (−8.2) | 44.2 (−8.1) | 69.7 (−4.2)
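The ablation in Table 3 compares the marginal entropy objective against conditional entropy (CE) and pairwise cross entropy (PCE) alternatives. The sketch below gives one plausible form for each objective, assuming probs is the (B, num_classes) matrix of per-augmentation predictive distributions; the exact definitions used in the paper (e.g., Equation 2 and its PCE variant) are not reproduced here, so treat these as illustrative.

```python
# Hedged sketch of the three adaptation objectives compared in Table 3.
import torch

def marginal_entropy(probs: torch.Tensor) -> torch.Tensor:
    """MEME objective: entropy of the averaged (marginal) distribution."""
    p_bar = probs.mean(dim=0)
    return -(p_bar * (p_bar + 1e-12).log()).sum()

def conditional_entropy(probs: torch.Tensor) -> torch.Tensor:
    """CE ablation: average entropy of each augmented prediction."""
    return -(probs * (probs + 1e-12).log()).sum(dim=-1).mean()

def pairwise_cross_entropy(probs: torch.Tensor) -> torch.Tensor:
    """PCE ablation: average cross entropy between pairs of augmented predictions,
    using all ordered pairs i != j (one plausible form of the objective)."""
    log_p = (probs + 1e-12).log()
    pce = -(probs.unsqueeze(1) * log_p.unsqueeze(0)).sum(dim=-1)  # (B, B) matrix
    off_diag = ~torch.eye(probs.shape[0], dtype=torch.bool)
    return pce[off_diag].mean()
```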
Table 4: Evaluating the episodic version of Tent with a batch size of 1, which corresponds to a simple entropy minimization approach for the test time robustness setting. This approach also uses single point BN adaptation, and entropy minimization does not provide much, if any, additional gains.
ImageNet-C mCE ↓ | ImageNet-R Error (%) | ImageNet-A Error (%)

Table 5: Ablating the augmentation functions to test standard augmentations (random resized cropping and horizontal flips). When changing the augmentations used, the post-adaptation performance generally does not change much, though it suffers the most on CIFAR-10-C.
CIFAR-10 Error (%) | CIFAR-10.1 Error (%) | CIFAR-10-C Average Error (%)

Table 6: ImageNet-A results for the ResNext-101 models.
ImageNet-A Error (%)

Table 7: Test error (%) on CIFAR-10-C level 5 corruptions.
gauss shot impul defoc glass motn zoom snow frost fog brit contr elast pixel jpeg
ResNet-26: 48.4 44.8 50.3 24.1 47.7 24.5 24.1 24.1 33.1 28.0 14.1 29.7 25.6 43.7 28.3
+ TTA: 43.4 39.6 42.9 28.3 44.7 26.3 26.3 21.4 28.5 23.3 12.1 32.9 21.7 43.2 21.7
+ MEME (ours): 43.5 39.8 43.3 26.4 44.4 25.1 25.0 20.9 28.3 22.8 11.9 28.3 21.1 42.8 21.7

Table 8: Test error (%) on CIFAR-10-C level 4 corruptions.
gauss shot impul defoc glass motn zoom snow frost fog brit contr elast pixel jpeg
ResNet-26: 43.8 37.2 39.3 14.8 48.0 19.9 18.7 22.0 24.9 15.1 11.4 16.8 19.1 27.9 24.9
+ TTA: 39.5 32.0 31.8 15.4 45.0 20.9 20.2 18.9 21.7 12.9 9.3 16.8 17.7 25.7 18.9
+ MEME (ours): 39.7 32.3 32.2 14.7 45.0 20.0 19.2 18.7 21.1 12.5 9.3 15.2 16.9 25.2 18.9

Table 9: Test error (%) on CIFAR-10-C level 3 corruptions.
gauss shot impul defoc glass motn zoom snow frost fog brit contr elast pixel jpeg
ResNet-26: 40.0 33.8 26.4 11.5 37.3 20.0 16.6 20.0 24.7 12.2 10.5 13.6 15.0 18.4 22.7
+ TTA: 34.3 27.7 20.3 11.3 32.9 20.7 16.7 16.3 21.1 9.8 8.5 12.5 13.7 14.5 17.2
+ MEME (ours): 34.4 27.9 20.5 10.8 32.8 19.8 16.1 16.1 20.9 9.6 8.6 11.7 13.2 14.5 17.2

Table 10: Test error (%) on CIFAR-10-C level 2 corruptions.
gauss shot impul defoc glass motn zoom snow frost fog brit contr elast pixel jpeg
ResNet-26: 30.1 21.8 21.1 9.7 38.3 15.3 13.8 21.2 17.6 10.5 9.7 11.6 12.9 15.4 21.3
+ TTA: 25.3 16.9 15.8 8.5 33.8 15.3 13.7 17.8 14.5 8.7 7.8 10.0 11.0 12.0 16.1
+ MEME (ours): 25.3 16.9 15.9 8.4 33.5 14.6 13.0 17.7 14.4 8.5 7.7 9.6 10.7 11.9 16.2

Footnote: Single point BN adaptation also assumes that the model has batch normalization layers, and, as shown empirically in Section 4, this is an assumption that we do not require but can also benefit from.

ABSTRACT

While deep neural networks can attain good accuracy on in-distribution test points, many applications require robustness even in the face of unexpected perturbations in the input, changes in the domain, or other sources of distribution shift. We study the problem of test time robustification, i.e., using the test input to improve model robustness. Recent prior works have proposed methods for test time adaptation, however, they each introduce additional assumptions, such as access to multiple test points, that prevent widespread adoption. In this work, we aim to study and devise methods that make no assumptions about the model training process and are broadly applicable at test time.
We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable: when presented with a test example, perform different data augmentations on the data point, and then adapt (all of) the model parameters by minimizing the entropy of the model's average, or marginal, output distribution across the augmentations. Intuitively, this objective encourages the model to make the same prediction across different augmentations, thus enforcing the invariances encoded in these augmentations, while also maintaining confidence in its predictions. In our experiments, we evaluate two baseline ResNet models, two robust ResNet-50 models, and a robust vision transformer model, and we demonstrate that this approach achieves accuracy gains of 1-8% over standard model evaluation and also generally outperforms prior augmentation and adaptation strategies. For the setting in which only one test point is available, we achieve state-of-the-art results on the ImageNet-C, ImageNet-R, and, among ResNet-50 models, ImageNet-A distribution shift benchmarks.

Reputations for Resolve and Higher-Order Beliefs in Crisis Bargaining
Allan Dafoe, Remco Zwetsloot, and Matthew Cebul

Introduction

Navigating an international crisis can require incredibly complex inferences. Even seemingly straightforward strategies can backfire dramatically. Suppose that persons A and B are playing Chicken, the game in which two cars race toward each other and the goal is to get the opponent to swerve first. One popular stratagem says that A should throw her steering wheel out of the window in order to convince B that she cannot swerve, thereby compelling B to do so. 1 Yet suppose that B incorrectly believes that A hid a spare steering wheel under her seat. If A does not know that B believes this, she may surrender her ability to swerve without actually becoming committed in B's eyes, raising the chances of a tragic crash. Or suppose that A incorrectly believes that B believes that A has a spare steering wheel. This belief might prevent A from using a tactic that would have improved her odds of victory. Much of the complexity in international crisis bargaining is attributable to higher-order beliefs: beliefs about beliefs (Schelling 1960). While these complications are perhaps most apparent in hypothetical games like Chicken, examples abound in real-world international crises as well. For instance, in the years prior to the Berlin crisis, Premier Khrushchev had deliberately exaggerated the Soviet Union's capabilities, and widely discussed American concerns about a "missile gap" had convinced him that the Americans had fallen for the ruse. "[O]n the assumption that the Americans believed the Soviets were ahead in the arms race," Khrushchev chose to escalate tensions over Berlin (Kaplan 1983, 304). Khrushchev was forced to back down, however, when he learned that US policymakers did not believe this at all, a fact that the Americans purposely signaled via intelligence leaks to NATO units that the US knew to be compromised by Soviet agents (Schlosser 2014, 284). Higher-order beliefs thus played a critical role in both the onset and resolution of one of the most significant crises of the Cold War.
In this article, we examine the theoretical and empirical import of higher-order belief dynamics for two crucial phenomena in international relations: resolve and reputations for resolve. Reputations, especially for resolve, have been a longstanding topic of scholarship, but most studies consider only first-order beliefs: whether and how B updates his beliefs about A on the basis of A's behavior. We argue that taking higher-order beliefs into account substantially amplifies the importance of reputations for resolve. This is because resolve is interdependent: one side's (perceived) resolve can affect the other side's (perceived) resolve, meaning that small initial changes in beliefs can spiral into large overall changes, with potentially decisive consequences. We evaluate our theory using a survey experiment conducted on a sample of quasi-elites recruited through Stephen Walt's Foreign Policy blog. We begin by corroborating reputation scholars' core assertion: states that behaved resolutely in the past are perceived to be more highly resolved in the present. We find that the effect of reputation on perceived resolve is larger than any other experimental manipulation, including the effect of material capabilities, and persists across a range of informational conditions and subgroups. We then go a step further, advancing scholarship on reputations for resolve by demonstrating that respondents also form higher-order beliefs in line with our argument. Exogenous increases in A's perceived resolve, induced by providing information about A's behavior in disputes with states other than B, produce decreases in B's perceived resolve, even though respondents learned nothing about the usual factors said to affect B's resolve. The size of the parameter estimates we obtain suggest that higher-order beliefs can account for as much as 70 percent of the reputational manipulation's effect on the balance of resolve, with a lower bound of roughly 40 percent. We thus answer Robert Jervis's longstanding but still open question: there is indeed a strong zero-sum element to resolve. Our design also permits a direct test of a competing hypothesis linking higherorder beliefs to resolve, which Jervis has referred to as the \"domino theory paradox\" and Daryl Press as the \"never again theory\": if A backed down in a recent dispute, B will believe that A will expend extra effort to restore its tarnished reputation for resolve, and therefore that A will be more rather than less resolved in subsequent disputes (Jervis 1991; Press 2005) . Our evidence is contrary to these hypotheses. Our results reinforce recent studies demonstrating the importance of reputation in the domains of conflict, finance, and alliance politics, among others (see Crescenzi 2017, for a review) . These studies all focus on the first-order effects of reputation. 2 We show, however, that the story does not stop there. In doing so, we join a number of others who place higher-order beliefs at the center of international relations specifically (Schelling 1960; Morrow 2014) and social, economic, and political life more broadly (Kuran 1995; Chwe 2001 ). These beliefs are what make resolve interdependent, and are therefore crucial to the onset and resolution of crises and wars. \n Resolve, Reputation, and Higher-Order Beliefs Much of international politics pivots around perceptions. 
A state's ability to persuade, deter, and compel depends in large part on how that state is perceived: whether others consider its military capable, its threats credible, its promises reliable. As a result, states and leaders are motivated to act in ways that favorably shape others' perceptions and expectations, i.e. to cultivate reputations for desirable qualities or behavioral tendencies. One of the most important kinds of reputation in international politics is a reputation for resolve. We define resolve as follows:

Resolve: the probability that an actor is willing to stand firm in a crisis, given its beliefs about the world at that time.

We briefly explain how this behavioral definition relates to other common conceptualizations of resolve. Formalized mathematical models of crisis bargaining often define resolve as a state's "value for war," which is said to be a function of material factors such as military capabilities, the issue under dispute, and domestic audiences (see, e.g. Morrow 1989; Powell 1990; Fearon 1994; Sartori 2016). Psychological accounts of resolve instead emphasize internal traits such as dispositional determination, willpower, or "sticktoitiveness" (Kertzer 2016, 8-9). Here, we define resolve in terms of states' behavior in a crisis, in which states may either back down or stand firm. 3 In this sense, a state's "value for war" and its dispositional traits are important factors that affect resolve (the probability that an actor stands firm), but they may not be the only ones, so they do not define resolve. 4

Understanding resolve in behavioral terms has several benefits. 5 By clearly specifying what kind of behavior is (in)consistent with being resolved, our definition makes resolve measurable and facilitates empirical analysis. More importantly, our definition allows scholars to directly incorporate another essential determinant of whether A will stand firm in a crisis with B: A's beliefs about B's resolve. As exemplified by Chicken, that A's crisis behavior depends on her beliefs about B's resolve is what makes interstate crises complex strategic interactions. This relationship also speaks to the literature on reputations for resolve, which we briefly review below.

Building Reputations for Resolve

International crises can often be understood as "contests of resolve" (Morrow 1989, 941-42) in which the outcome is determined by one side's ability to convince the other that it will not back down. In such contests, a history of resolute past behavior can be a valuable asset. We say that actors have a reputation for resolve when observers have formed beliefs about that actor's tendency to stand firm in a certain class of disputes on the basis of that actor's past behavior (Dafoe, Renshon, and Huth 2014). 6 The effect of a reputation for resolve remains a subject of debate. Proponents contend that reputations offer a path to successful commitment, deterrence, and compellence (Huth 1997), as costly past actions can corroborate threats and promises that might otherwise be dismissed as "cheap talk" (Fearon 1995; Trager 2017). These scholars argue that an "undesired image can involve costs for which almost no amount of the usual kinds of power can compensate" (Jervis 1970, 6), and so reputations are "one of the few things worth fighting over" (Schelling 1966, 124). In support of this position, a number of recent studies find evidence that states' past behavior shapes how other states act toward it.
7 Others, however, dispute the link between past actions and present behavior. For instance, Mercer argues that the desirability of an action conditions how people interpret the disposition of its initiator, concluding that \"people do not consistently use past behavior to predict similar behavior in the future\" (Mercer 1996, 45-47, 212) . Similarly, Press claims that \"blood and wealth spent to maintain a country's record for keeping commitments are wasted,\" since opponents form assessments of credibility largely based on their perceptions of states' interests and capabilities, not their past behavior (Press 2005, 10) . Detractors also aver that evidence of reputational effects is largely observational and often indirect, and is therefore inconclusive. Thus, one contribution of our article is to bring experimental methods to bear on a basic question: do observers use a state's past behavior to predict what it is likely to do today? 8 H REP : Perceptions of a state's resolve will (a) increase if it stood firm in past crises and (b) decrease if it backed down in past crises. 9 In addition, we also seek to advance the reputation debate beyond H REP toward a discussion of higher-order beliefs. Note that H REP focuses purely on how first-order beliefs-beliefs about an actor's traits or behavioral tendencies-change in response to past behavior. Such arguments are at the center of most theoretical accounts of reputational dynamics. First-order beliefs are, however, not the only kinds of beliefs that matter. In the next section, we contend that reputations can also affect higherorder beliefs-beliefs about beliefs (about beliefs, etc.)-with important consequences for state behavior. \n Reputational Effects and Higher-Order Beliefs Consider a hypothetical dispute between states A and B, in which B is considering whether to escalate. Initially, B has some sense of how likely A would be to stand firm (A's resolve), though he cannot be sure. B then observes A behaving resolutely in some other dispute, and after considering that A might, say, be more militarily capable than B originally thought, B updates his beliefs about A's resolve accordingly. We can call this a first-order reputational effect, in which one side updates its beliefs about its opponent's characteristics or behavioral tendencies solely on the basis of that opponent's past behavior. In many cases, however, higher-order reasoning further complicates the story. For example, A might know that B was paying careful attention to her behavior in the other dispute. If so, A might conclude that, since she stood firm, B has become less resolved, and this could further embolden A to also stand firm against B. If B expects A to be thinking along these lines, he might conclude that there is little he can do to make A back down, further reducing B's resolve. If actors make inferences in this way, the first-order reputational effect will be joined by higher-order effects. These two effects together make up the total reputational effect, i.e. the full shift in the balance of resolve between two actors due to a single initial reputational shock (see Section \"Do Higher-Order Beliefs Contribute to Reputational Effects?\" for a formalization). Indeed, higher-order beliefs can theoretically produce large substantive effects, as relatively small first-order reputational effects cascade through higherorder belief chains into large changes in the balance of resolve. 
Put another way, our argument is that resolve is a function of not just the material factors emphasized in the modeling literature or the internal traits emphasized in psychological accounts, but also states' beliefs about their opponents' resolve-A's resolve is decreasing in (her beliefs about) B's resolve, and vice versa. In short, resolve is interdependent. Furthermore, that resolve is interdependent implies that higher-order beliefs may play a central, albeit under-appreciated, role in international crises-A's beliefs about B's resolve (which dictate A's own resolve) are themselves a product of A's higher-order beliefs about B's beliefs about A's resolve, and vice versa. To express our arguments in terms of reputations, we contend not just that states and leaders can possess reputations for resolve, but also that these actors are aware that such a reputation may decrease the likelihood that their opponents stand firm during crises, and so update their own resolve accordingly. In short, states may expect that their reputations precede them. H INT : Perceptions of a state's resolve will (a) increase when there is a decrease in perceptions of its opponent's resolve, and (b) decrease when there is an increase in perceptions of its opponent's resolve. There are two principal reasons why we may not expect this hypothesis to hold. First, states may not form higher-order beliefs if they lack the ability to do so. Kertzer, for example, argues that in many situations with higher-order uncertainty about resolve \"the complex nature of the decision-making environment actors face stretches far beyond the limits of human cognition\" (Kertzer 2016, 149 ; see also Mercer 2012) . And even if they have the ability, states may be insufficiently confident in their higher-order beliefs to incorporate them into their assessments of resolve. If this were the case, we would expect to see no evidence of interdependence. Second, H INT is not the only possible relationship between higher-order reasoning and reputational effects. H INT implies that higher-order reasoning magnifies first-order effects-states that stand firm earn larger reputational bonuses, whereas states that back down suffer larger penalties. In what Jervis labels the \"domino theory paradox\" and Press \"never again theory,\" however, higher-order effects run counter to the first-order effect (Jervis 1991; Press 2005) . Domino theory, which undergirded much of US foreign policy during the Cold War, holds that if a state backs down in one situation, observers will infer that it is more likely to do so in other situations as well. The paradox is that the corresponding higher-order beliefs may be \"self-defeating\" (Jervis 1991, 36-37) : An actor who has had to back down once will feel especially strong incentives to prevail the next time in order to show that the domino theory is not correct, or at least does not apply to him. In other words, an actor who believes the domino theory-or who believes that others accept it-will have incentives to act contrary to it. Indeed, statesmen sometimes respond to a defeat by warning others that this history will make them extremely resistant to retreating again. Furthermore, if others foresee this, they will expect a defeated actor to be particularly unyielding in the next confrontation. In short, the domino theory paradox predicts that, in certain circumstances, higher-order reasoning can mitigate or even reverse first-order reputational effects. 
H DTP : Perceptions of a state's resolve will increase when it backed down recently and has a prior reputation to recover. \n Research Design The scientific study of reputations is limited by their inaccessibility. Reputations consist of perceptions, which exist in people's minds and are not directly observable. Even records of private deliberations, free of any incentives to dissemble, may lead us astray. To the extent that an opponent's reputation is a constant during a crisis and is common knowledge among decisionmakers, discussions are likely to focus on new information such as troop movements even if reputation matters (Weisiger and Yarhi-Milo 2015) . Reputational inferences, given their evolutionary importance, may also be so automatic that they are made subconsciously (Bowles and Gintis 2011) . These difficulties were acknowledged by Snyder and Diesing, who noted that their inability to find much evidence of reputational inferences in case histories \"may be only an artifact of the record\" (Snyder and Diesing 1977, 496) . To overcome these challenges, we employ a scenario-based survey experiment. Respondents are told about a scenario, features of which are randomly varied or omitted, and then asked about their opinions or beliefs. Survey experiments are especially suitable for research questions about how people incorporate, interpret, and act on particular types of information (Mutz 2011) , and these are precisely the questions in which scholars of reputation are interested. Below, we review the survey design and the sample in more detail. \n Survey Vignette Respondents read an abstract scenario about two countries (A and B) engaged in a serious territorial dispute. The features of the scenario are presented in a bullet list format, each assigned with some probability independent of each other (a full factorial design). Each feature has several different levels, including its omission. For the full text of the survey and treatment allocations, see Online Appendix B. Respondents were first informed of each country's regime type, either democracy or dictatorship (some received no information on regime type). Respondents then learned about the military capabilities of each country, with A having either \"substantially stronger military forces\" than B, being \"about equally strong,\" or B being substantially stronger (again, some received no information about capabilities). In all conditions except no information, respondents are also told that neither country has nuclear weapons. The respondent then reads the reputational feature of the scenario, State A's history of crisis interactions. This bullet involves two variables: whether A stood firm or backed down in past crises, and whether A's previous conflicts were against B or other countries. 10 According to most impartial observers, of the three most recent major international crises that Country A has faced against [other countries/Country B], Country A [did not back down and Country A achieved its aims/backed down in each crisis and failed to achieve its aims]. There is also a special Domino Theory Paradox version of this manipulation, in which A stood firm in three past crises, but then backed down in a fourth, most recent crisis. In all conditions except no information, respondents are also informed whether A's past crises occurred under the current or a previous leader. Lastly, the vignette includes several other experimental features related to the history of the dispute, as well as recent threats and promises. 
11 The scenario concludes by stating that the crisis is serious and that many observers consider major military action in the near future likely. Respondents are then asked the primary outcome question for both countries: What is your best estimate, given the information available, about whether Country A/ B will back down in this dispute? Respondents have five options, ranging from \"very unlikely\" (0 percent to 20 percent chance) to \"very likely\" (80 percent to 100 percent chance). Note that this question exactly matches our definition of resolve in Section \"Resolve, Reputation, and Higher-Order Beliefs.\" 12 Before detailing our sampling strategy, we briefly address the benefits and drawbacks of our abstract survey design. Our survey vignette describes a crisis between two abstract states, A and B, and the scenarios respondents read were typically no more than 150 words. The benefit of this approach is its flexible simplicity; short vignettes are less taxing for respondents, and abstracting away from real-world states may permit cleaner manipulation of the concepts of interest (though not necessarily; see Dafoe, Zhang, and Caughey 2018) . The attendant drawback is a decreased level of realism in abstract survey scenarios, relative to vignettes that feature actual countries (Tingley 2017) . Reassuringly, however, recent research employing both abstract and real-world crises scenarios recovers reputational effects of a very similar magnitude to those presented below (Renshon, Dafoe, and Huth 2018) . Regardless of the design, it is of course not possible to create environments akin to those faced by real-life decision-makers through surveys alone, and it is therefore best to complement survey experiments with other kinds of evidence. Though more systematic observational work will have to await future research, we offer some preliminary supportive evidence for our claims in Section \"Discussion: Interdependent Resolve in Real Life,\" and discuss the conditions under which evidence of interdependent resolve is most and least likely to be found. \n Sample We administered the survey to a convenience sample of respondents recruited through Stephen Walt's Foreign Policy blog. On August 1st, 2011 Walt posted a blog entry inviting readers to \"Become a data point!\" (see Online Appendix B.1). From this advertisement, over 1,000 respondents took the survey. This sampling strategy was intended to mitigate potential biases that may arise from using regular citizens to proxy for elite policymakers. The general concern is that elites are better informed about and more experienced with foreign policy decision-making than the average citizen-as a result, they are more likely to employ higher-order strategic reasoning and consider their opponents' perspective (Hafner-Burton, Hughes, and Victor 2013; Hafner-Burton et al. 2014 ). Ideally, then, one would run our experiment on key decision-makers, such as military leaders, foreign policy advisors, and politicians. Unfortunately, such subjects are rarely accessible, and even if they are it is usually as part of a small convenience sample. To approximate our elite population of interest, we instead sought to recruit a sample of quasi-elite respondents who are abnormally well-informed about and interested in foreign policy, whose backgrounds and world views more closely parallel those of high-level elites. We expected a sample drawn from Walt's Foreign Policy readership to be highly educated and knowledgeable of foreign policy, and that is indeed the case. 
Of those 87 percent who answered the demographic questions, 83 percent reported having a college degree or higher, and 50 percent a postgraduate degree. Moreover, 60 percent claimed to have "particular expertise with foreign affairs, military affairs, or international relations." Politically, the group leans Democratic, as 89 percent claimed to agree more with the policies of Democrats (11 percent more with those of Republicans). The respondents were 88 percent male, which is far from representative of the general population but not obviously unrepresentative of foreign policy elites. More details about the sample are provided in Online Appendix A.

To be sure, then, our sample remains an imperfect approximation of foreign policy elites. Still, we argue that these imperfections will likely lead us to underestimate the effect of reputation and higher-order beliefs on perceived resolve. To start, the Democratic bias should cut against our proposed hypothesis, as liberals are generally less likely than conservatives to invoke concerns over credibility, reputation, and honor in international affairs (Trager and Vavreck 2011). And while respondents may also hew closer to Stephen Walt's particular foreign policy views, Walt has repeatedly argued against the importance of reputation. 13 Lastly, our sample likely remains less experienced with actual foreign policy decision-making than true elites. But as mentioned above, experience is linked to higher-order strategic reasoning, and Tingley and Walter (2011) show that experienced players care more about reputation than inexperienced ones. If anything, then, our results likely underestimate the reputational effects that would be found among actual elites. In short, given the cost and difficulty in obtaining elite samples, our sampling design is a reasonable first step toward the empirical study of higher-order beliefs in international crises. 14 Importantly, we think it highly unlikely that our sample would produce an opposite effect of reputation on perceived resolve relative to true elites: our respondents may be less attentive to reputation, but this should only depress the magnitude of reputational effects, not reverse their direction.

Results

To preview our results, we reach two main findings. First, reputations for resolve matter; when respondents learn that a state has stood firm in past crises, they consider it much more likely to stand firm today. Second, resolve is interdependent; we find that increases in A's perceived resolve are associated with decreases in B's perceived resolve, and estimate that higher-order belief updating is responsible for a large proportion of the total observed effect of past behavior on the balance of resolve. 15

Can States Build Reputations for Resolve?

We begin with H_REP, which considers whether a country's past behavior affects perceptions of its current resolve. We find strong evidence that it does. Respondents who learned that "Country A did not back down and Country A achieved its aims" in past crises thought that A was more likely to stand firm than those that received no information about past behavior (a 10 percentage point increase, from 60 percent to 70 percent). Similarly, when A "backed down in each crisis and failed to achieve its aims," respondents thought A was roughly 10 percentage points more likely to back down. These effects are both highly significant (Figure 1). Moreover, this reputational treatment has the largest effect of all manipulations in the survey.
The effect of going from a history of backing down to a history of standing firm is about 20 percentage points, which represents a quarter of the resolve variable's total range. 16 This reputational effect is roughly twice the size of the second-largest effect, information about power: shifting from B having "substantially stronger military forces" to A having "substantially stronger military forces" increases perceptions of A's resolve by roughly 10 percentage points. 17

Readers may wonder whether this potent reputational effect is driven only by a subset of respondents. Reputation skeptics contend that even if past behavior matters, it does so only to the extent that "a decision maker uses an adversary's history of keeping commitments to assess the adversary's interests or military power" (Press 2005, 21). An observable implication of this view is that we should only see effects of past behavior on respondents that lack information about the balance of power. We find, however, that the reputational effect persists among respondents who were told about military capabilities (Figure 12, Online Appendix C.1). 18 And while the reputational effect decreased slightly as scenario complexity increased, 19 it remained above 10 percentage points even among respondents who received the maximum amount of information about the scenario (Figure 11, Online Appendix C.1). Reputational effects also persist across state leaders and across demographic subgroups, including gender, education, political affiliation, and cultural background. In sum, we find strong support for H_REP.

In contrast, we find no evidence for H_DTP. According to H_DTP, a state that backs down once after a history of standing firm will be perceived to be more resolved to stand firm in the present, as observers expect it to try to re-establish its lost reputation for resolve. Yet as Figure 1 shows, backing down once after a history of always standing firm reduces perceived resolve by about 8 percentage points compared to a history of standing firm (p < 0.0001), nearly returning perceived resolve to its baseline probability in the no-information condition. In other words, backing down once almost entirely eliminates the reputational gains that A achieves by standing firm in the initial three crises.

Do Higher-Order Beliefs Contribute to Reputational Effects?

Typically, reputational effects like the ones reported above are interpreted in terms of first-order updating: A takes an action, and observers update their beliefs about A. Yet as discussed in Section "Reputational Effects and Higher-Order Beliefs," the total reputational effect of A's past behavior on A and B's resolve may also consist of higher-order effects. Let Rk_i denote perceptions of actor i's resolve after k rounds of belief updating. In these terms, the "direct" or "immediate" effect of our reputation treatment T (information about A's past behavior) is given by ΔR1_i = R1_i − R0_i, where R0_i refers to perceptions of i's resolve before observing T (i.e. among respondents in the "no information" condition). Now, suppose that observers go through two levels of belief updating following reputation treatment T (this updating process is shown graphically in Figure 2). Our argument suggests that we should see ΔR1_A > 0, and, by interdependence, ΔR2_B < 0. In this situation, our estimand, which we will call the interdependence coefficient and label y, is given by y = ΔR2_B / ΔR1_A. This ratio tells us by how much an immediate one-unit change in A's perceived resolve subsequently changes B's perceived resolve.
For example, if a 10 percentage point increase in A's resolve decreases B's resolve by 5 percentage points, y = −5/10 = −0.5. More generally, if we assume that the interdependence of resolve is symmetric across actors and constant across levels of updating, we have y = ΔRk_A / ΔR(k−1)_B = ΔRk_B / ΔR(k−1)_A for all k ∈ ℕ. We can then empirically estimate y by taking the ratio of the observed total effect of treatment on B's and A's resolve, ΔR_B / ΔR_A, and state our interdependent resolve hypothesis more precisely as H_INT: y < 0. 20

As mentioned above, our treatment must fulfill two conditions for this estimation strategy to succeed, analogous to the identifying assumptions in instrumental variable analysis: (C1) ΔR1_A > 0 (instrument strength) and (C2) ΔR1_B = 0 (exclusion restriction). Notably, most treatments do not satisfy C2. For example, the fact that A stood firm against B in the past could lead to inferences not just about A's resolve, but also about B and the A-B dyad, such as B's dispositional determination or domestic audiences, violating our identifying assumption. Thus, we cannot use information about A's past behavior against B to estimate the interdependence of A and B's resolve. However, information about A's past actions against another country is a treatment that likely satisfies C2. 21 When told about A's extra-dyadic behavior, observers can make inferences about how much A values its reputation, whether A's domestic public is liable to punish leaders for backing down, or any other factor that shapes A's resolve. But the treatment is uninformative about the A-B relationship, the territory under dispute, or other factors that could reasonably shape perceptions of B's resolve, except, that is, perceptions of A's resolve. This treatment, then, allows for a clean test of our interdependence hypothesis.

The results of this test are displayed in Figure 3. We find that A's behavior against other countries significantly affects perceptions of B's resolve as expected: respondents who were told that A stood firm against other countries in the past assess B's resolve to be roughly 7 percentage points lower relative to baseline, and A backing down against other countries leads to a similar increase in perceptions of B's resolve. For A standing firm in extra-dyadic crises, ŷ = −0.065/0.091 = −0.71, and for A backing down in extra-dyadic crises, ŷ = 0.074/−0.098 = −0.75. Both of these effects are statistically significant, lending strong support to H_INT. 22

The Interdependence Multiplier

These results offer compelling evidence that reputational effects exist, and that they are compounded by higher-order beliefs. Still, we have yet to specify precisely how important higher-order beliefs are. Can we estimate the proportion of the total reputational effect (the effect of past behavior on the overall difference in resolve between A and B) that is attributable to higher-order belief updating? To help us answer this question, define the interdependence multiplier (IM) as the factor by which a first-order reputational effect should be multiplied in order to obtain the total reputational effect. In our simple formalization, the magnitude of the interdependence multiplier depends on two parameters: the interdependence coefficient, y, and the number of levels of belief updating, which we label n. 23 Given our model (and assuming |y| < 1), Online Appendix D.2 derives the IM to be, for any n, IM = (1 − |y|^n) / (1 − |y|), which converges to 1/(1 − |y|) as n → ∞,
a situation with "common knowledge" in which everybody knows about a fact or event, knows that everybody knows it, and so on. Figure 4 plots the magnitude of the IM across hypothetical values of y and n. At one extreme, if we assume that actors engage only in first-order reasoning (the light blue line), the IM is always 1, and the total reputational effect is equal to the first-order reputation effect. This, implicitly, has been the assumption in most past discussions of reputation. At the other extreme, if it is plausible to assume common knowledge (the red line), the total reputational effect is more than twice as large as the first-order effect for any |y| > 0.5, and nearly four times as large at |ŷ| = 0.73 (the average of our |y| estimates calculated from Figure 3 above). 24 Between these extremes, the IM's magnitude is substantial across a range of parameter values, underscoring that higher-order beliefs can be responsible for a large proportion of any given total reputational effect.

We can now use the IM to estimate the effects of higher-order beliefs on perceived resolve in our survey experiment. As stated above, our average estimated y = −0.73, and the average total effect of the reputation treatments presented in Figure 3 is roughly 16 percent. To estimate how much of that 16 percent change is attributable to higher-order beliefs, let us assume that n = 3. This implies IM = (1 − 0.73^3) / (1 − 0.73) = 2.26. The estimated first-order effect is then simply the total effect divided by the IM, 16/2.26 = 7.08. This leaves roughly 9 percentage points attributable to higher-order reasoning, a sizable 55 percent of the total effect. We can also more intuitively verify this result by beginning with the first-order effect, and then building out via n = 3 reasoning to the total effect. Suppose that, as we just estimated, respondents on average perceive A to initially be 7.08 percentage points more likely to stand firm when A has stood firm repeatedly in past cases (ΔR1_A = 7.08). The respondent reasons, however, that B will become less resolved after inferring this change. Specifically, B's resolve decreases by 7.08 × y percentage points (ΔR2_B = −5.17). This, in turn, will increase A's perceived resolve by −5.17 × y (ΔR3_A = 3.77). The total reputational effect is therefore |ΔR1_A| + |ΔR2_B| + |ΔR3_A| = 16.02 ≈ 16 percentage points, which is indeed the total reputational effect that we observe in the data (and is also 7.08 × IM).

This estimation strategy is limited by our inability to directly observe n, respondents' level of higher-order reasoning: we assumed n = 3 above, but the true value could be higher or lower. While we therefore cannot definitively identify the proportion of the total reputational effect attributable to higher-order beliefs, we remain confident that these beliefs drive a substantial portion of the effect in this case, for several reasons. First, we can quantify our uncertainty by deriving bounds. As our survey recovers reputational effects that are inconsistent with mere first-order reasoning, we use n = 2 as a lower bound. In this case, the IM = 1.73, and higher-order effects are responsible for about 42 percent of the total reputational effect. The upper bound is represented by common knowledge (n = ∞). In this case, the IM = 3.7, and the higher-order effects constitute about 73 percent of the total reputational effect (see Online Appendix D.2).
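As a quick check of the arithmetic above, the following short script (our sketch, not the authors' replication code) reproduces the multiplier and the decomposition under the stated assumptions: y = −0.73, a total reputational effect of 16 percentage points, and n = 3 levels of updating.

```python
# Reproduces the back-of-the-envelope decomposition in the text.
def interdependence_multiplier(y, n=None):
    """IM = (1 - |y|**n) / (1 - |y|); n=None gives the common-knowledge limit."""
    if n is None:
        return 1.0 / (1.0 - abs(y))
    return (1.0 - abs(y) ** n) / (1.0 - abs(y))

y, total, n = -0.73, 16.0, 3
im = interdependence_multiplier(y, n)             # ≈ 2.26
first_order = total / im                          # ≈ 7.08
higher_order_share = 1.0 - first_order / total    # ≈ 0.55

# Building back up: 7.08, then 7.08*|y| ≈ 5.17, then 5.17*|y| ≈ 3.77, summing to ≈ 16.
steps = [first_order * abs(y) ** k for k in range(n)]
assert abs(sum(steps) - total) < 0.1
```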
25 Note that our reputation treatment had a relatively large effect on B's perceived resolve (|ŷ| = 0.73 is large), which results in correspondingly large higher-order effect estimates even assuming low levels of higher-order reasoning (Figure 4 is instructive). Of course, the interdependence coefficient y might vary in size in different settings or contexts; we discuss this possibility extensively in Section "The Degree of Interdependence." Second, numerous other studies and examples discussed in Section "Higher-Order Beliefs" suggest that at least some degree of higher-order reasoning, be it conscious or intuitive, is a regular feature of human cognition in political, economic, and social settings; though higher-order belief chains can become prohibitively complex, n = 2 reasoning is relatively common. This reassures us that n = 2 is a reasonable lower bound to estimate higher-order effects. In sum, we find clear evidence that higher-order beliefs are responsible for a large portion of our observed reputational effects: even restricting the calculations to a minimal level of higher-order reasoning, the proportion approaches 50 percent. In the next section, we move beyond our survey data to discuss real-world applications and implications of our argument.

Discussion: Interdependent Resolve in Real Life

Above, we provided evidence of higher-order reputational dynamics, and argued that the magnitude of these effects depends on the order to which people form beliefs (n) and the degree of interdependence (y). Our abstract survey vignette allows us to observe the effect of higher-order belief updating in a way that is difficult to do in the real world. Still, some uncertainty remains as to whether and how the interdependence of resolve manifests in the messier context of real-world bargaining situations. This section discusses the circumstances under which we expect higher-order beliefs and interdependent resolve to matter most, and the attendant implications of our argument for real-world crisis bargaining.

Higher-Order Beliefs

As discussed in Section "Reputational Effects and Higher-Order Beliefs," one broad issue is whether decision-makers engage in higher-order reasoning (Mercer 2012). If they do not, our arguments have little relevance to the real world. Fortunately, evidence from a variety of contexts suggests that explicit and implicit higher-order reasoning is common, and does not require especially sophisticated agents. First, experimental evidence from behavioral economics suggests that most people often form at least second-order beliefs, and many reach higher orders as well (e.g. Nagel 1995; Camerer, Ho, and Chong 2004). In psychology, researchers have found that the magnitude of the famous "bystander effect" depends on whether the context allows bystanders to form beliefs about other bystanders' beliefs (Thomas et al. 2014, 2016). A third example comes from a recent field experiment in Benin, where voters who learned information about politicians' behavior punished and rewarded performance only when they thought that others knew this information as well, suggesting a higher-order understanding of coordination dynamics (Adida et al. 2020). Many other studies also find significant effects from interventions targeted at higher-order beliefs (Bicchieri 2016; Mildenberger and Tingley 2019). Moreover, scholars have produced abundant evidence of higher-order reasoning among policymaking elites in high-stakes situations.
At the domestic level, autocrats faced with potentially unsatisfied publics go to great lengths to create impressions of widespread support for their rule, or, if that fails, at least keep everyone guessing about each other's beliefs (Kuran 1995; Chwe 2001, 20-21) . Internationally, as McManus (2014, 726) illustrates with the case of Israel-US bargaining over Iran's nuclear program, states often attempt to stake their allies' reputation on supporting them, believing that their allies will then believe themselves bound to act (see, e.g. Jervis 1978, 180; Trager 2017, Ch. 1, for more examples) . And conflicts are often said to end only when \"opponents succeed in coordinating their expectations\" (Slantchev 2003, 621 ; see also Carter 2017) . In these and many other examples, beliefs about beliefs are the driver and focus of significant strategic contention. To be clear, our argument does not require that actors always run through every step of the inferential process in a deliberate or conscious way. Higher-order beliefs can be incorporated into decision-making heuristically, implicitly, and subconsciously. As Chwe (2001, 78) argues for the case of driving, \"I stop at a red traffic light out of habit, but a fully specified argument for doing so would involve an infinite regress: I stop because I think that other people are going, and I think that other people are going because I think that they think that I am stopping, and so on.\" Developmental psychologists have found that children display a great degree of higher-order understanding early on in life, implicitly engaging in complex theorizing about the mental states of others long before they can explicitly articulate their reasoning (Wellman 2014) . Higher-order belief dynamics also often become embedded in cultural and legal norms Ridgeway 2011; Morrow 2014) . \n The Degree of Interdependence While it is therefore clear that real-world actors do engage in higher-order reasoning, the extent to which first-order reputational effect are amplified also depends on the degree to which resolve is interdependent, i.e. the extent to which an initial change in A's resolve affects B's resolve. Here, contextual factors are likely to play a large role. We identify and discuss four such factors. First, the degree of interdependence depends on the extent to which the context resembles a prototypical contest of resolve. The central features of such contests are (1) the costs of conflict are so great that losing is preferred to both sides standing firm, but (2) the issue under dispute is sufficiently valuable for a coercive victory to be preferred to a peaceful compromise (Morrow 1989, 941) . We expect resolve to be most interdependent in conflicts that most closely approximate these conditions. Nuclear crises are often cited as the quintessential examples of such contests (Schelling 1966; Powell 1990 ), but they are by no means the only ones. In the 1898 Fashoda Crisis, for example, France dispatched a mission to Egypt in an attempt to force Britain to make concessions, but withdrew when it became convinced that Britain was more resolved than it initially believed (Trachtenberg 2012, 13-16) . 26 More contemporary examples can be found in proxy conflicts like the ongoing Syrian civil war. Direct conflict between the U.S. and Russia over Syria seems prohibitively costly, yet Damascus remains a valuable prize. In these circumstances, resolve can be highly interdependent. Indeed, this interdependence featured regularly in debates about U.S. 
intervention in Syria. Critics argue that Obama's failure to enforce the infamous chemical weapons red-line in 2013 undermined U.S. deterrence, paving the way for Russian intervention in 2015-Putin might have been compelled to stay out, were it not clear (to both parties) that the U.S. was irresolute. At the same time, Obama's reluctance to engage in Syria resulted in part from his belief that Russia would counter-escalate in response to limited U.S. intervention, especially after Russia stepped up its involvement in 2015. 27 In other words, U.S. lack of resolve fueled Russian resolve, further depressing U.S. resolve. Second, interdependence is also affected by the extent to which actors are able to act strategically, conditioning their behavior on what they think others are likely to do. Actors may fail or be unable to do so for various reasons. A prominent example is found in the expansive literature on credible commitments, which discusses commitment devices that may leave actors unconditionally resolved in a crisis. The moment actors truly commit to a strategy-when they throw their only steering wheel out of the window-they have effectively set their interdependence coefficient to 0: even if they subsequently change their beliefs about their opponent's resolve, their own course of action is already set. That said, absolute commitments are exceedingly difficult and risky to make. 28 In the real world, then, resolve is almost always interdependent at least to some extent. A third and related factor is the concentration of decision-making authority: when leaders have the ability to change course quickly, their resolve is likely to depend more on the other side's resolve. If, on the other hand, authority is diffuse, it will be difficult for a country to change course in the face of new information. One example is the delegation of military decision-making to local commanders, who can then choose to stand firm or back down in their theater of operations. Such delegation may be necessary from a practical perspective, but it also means central decision-makers have less flexibility. Another obstacle to short-term policy change in response to belief updating is the number of veto players in a political system (Tsebelis 2002) . To the extent that this number is correlated with regime type, there is reason to think that the resolve of democracies is less interdependent than that of their autocratic counterparts. Lastly, a fourth factor affecting the degree of interdependence is the observability of resolve. An actor's resolve is more likely to influence its opponents calculations when it is readily observable. Resolve is likely to be more observable the more an actor's deliberative processes are public, or when opponents share a cultural understanding of the meaning of certain behaviors or events (O'Neill 1999, 153-54) . Strategic intelligence also play an important role. The interdependence of resolve was on clear display, for example, during the Washington Disarmament Conference in 1921, when the United States was able to break Japan's ciphers and read Tokyo's private diplomatic communication. One such message stated the absolute minimum naval tonnage ratio that the Japanese government would be willing to accept. \"Knowing how far Japan could be pushed . . . allowed the United States to do so with full confidence, merely waiting for the Japanese to give in\" (Bauer 2013, 211) . 
When Japan's (lack of) resolve became perfectly observable to the US, American resolve dramatically increased in response. In sum, we expect actors' resolve to be most interdependent in real-world contexts that resemble prototypical contests of resolve, where actors are strategic and have centralized decision-making structures, and where resolve can be inferred with confidence. When interdependence is high, higher-order amplification of first-order reputational effects can produce especially large swings in the balance of resolve. It is in these circumstances, then, that beliefs about beliefs should be most consequential for crisis outcomes. These intuitive sources of variation in the interdependence of resolve could themselves be the object of empirical study, but for now we leave this task for future work.

The Power of Beliefs

Each party is the prisoner or the beneficiary of their mutual expectations. (Thomas Schelling 1960, 60)

Reputations for resolve have long been the subject of debate: some see them as indispensable assets, while others dismiss past actions as irrelevant to current crises. Using an experimental approach, this article strongly reinforces the former view: states and leaders can form reputations for resolve and leverage them to their advantage during crises. Moreover, we emphasize that higher-order beliefs play an under-appreciated yet crucial role in this process, as first-order reputational effects are amplified by actors' beliefs about their opponents' beliefs. In this sense, international contests of resolve hinge not merely on past actions, but on actors' combined expectations about the implications of past actions for present behavior. In conclusion, we make several suggestions for further empirical research on higher-order beliefs and interdependent resolve in international politics. As mentioned earlier, one limitation of our study is that we lack direct knowledge of the level of higher-order reasoning at which respondents analyzed the crisis scenario. A number of studies in behavioral economics measure "k-level reasoning" directly in laboratory game settings (see Hafner-Burton, Hughes, and Victor 2013 for a review), but such measures are rarely found in survey research on international politics. Future studies could adapt these techniques to shorter survey formats, where they could serve as useful individual-level measures of strategic competence. We also highlight several interesting avenues for future research. One promising idea is that opponents' perceptions of a leader's competence could influence their higher-order inferences about that leader's likely beliefs, consequently shaping their own bargaining behavior. Some leaders are understood to be especially experienced, calculating, or wise, whereas others may be seen as inexperienced, impulsive, or even wholly incompetent. Not only could variation in leader competence directly affect a state's behavior, but the perception of competence or ineptitude might also shape others' higher-order expectations about that leader's beliefs and worldview, with implications for their own behavior during crises. Moreover, there are strong reasons to expect that beliefs about beliefs matter in contexts far beyond crisis bargaining, including collective action and coordination on many important international issues, such as climate change and international law (Mildenberger and Tingley 2019; Morrow 2014).
Schelling's argument that sets of beliefs can act as prisons (or paradises) matters as much to problems like gender inequality as it does to international conflict (Ridgeway 2011). Along these lines, the study of higher-order belief dynamics also presents an exciting opportunity for collaboration across research fields. Such dynamics can only be understood by combining the insights and methods of psychological, cultural, and strategic approaches: whether and how one actor forms and acts on beliefs about another actor's beliefs depends on cognitive processes, systems of social meaning, and the anticipated consequences of different courses of action. And once they are understood, insights on higher-order beliefs are likely to travel across many domains and issue areas. In short, higher-order beliefs are a rich area for future studies of broad applicability and substantive importance. This article offered some theory and a novel methodological framework for diagnosing the effects of higher-order beliefs, which we hope will contribute to a vibrant sub-literature on this topic in international relations scholarship.

The authors thank participants of the Department of Peace and Conflict Research workshop at Uppsala University and the Division of Social Science seminar at Hong Kong University of Science & Technology, and especially Robert Trager.

(e.g. Mattes 2012; LeVeck and Narang 2017). Others examine the importance of reputation for cooperation; see Crescenzi (2017).
8. For other recent experimental work, which focuses on first-order beliefs, see Kertzer, Renshon, and Yarhi-Milo (2021) and Huth (2018).
9. The relevant "perceiver" of a state's resolve could be any other international observer, be it another state involved in a dispute with the state in question or an uninvolved third party; in either case, agents should make similar reputational inferences. For the purposes of empirical testing, however, the "perceivers" are our survey respondents, who assessed the resolve of two abstract states in crisis (see Section "Research Design").
10. Some might wonder how much respondents can infer from State A's disputes with other, unknown states (that are not B). Additional background details would allow respondents to form more precise beliefs, yet respondents can still make general reputational inferences from A's past behavior even without these details. It is reasonable for respondents to infer that, if A stood firm three times in recent crises, then A is probably more likely to stand firm in a subsequent dispute, relative to the respondents' prior beliefs. Empirically, we demonstrate that our respondents can and do draw reputational inferences even when given relatively limited information. And as noted below, our results accord with Renshon, Dafoe, and Huth (2018), who find similar reputational effects across abstract and real-world designs. Regardless, future work on higher-order beliefs could (and ideally would) employ diverse research designs in order to test the sensitivity of this or any other particular design feature.
11. The vignette manipulates a large number of features, yielding a total of 8,640 cells. In this, it is similar to related survey techniques such as conjoint analysis (Hainmueller, Hopkins, and Yamamoto 2014). We discuss the advantages and possible disadvantages of factorial vignette experiments with many manipulations in Online Appendix B.3.
12. Similar questions were used in Renshon, Dafoe, and Huth (2018) and Kertzer, Renshon, and Yarhi-Milo (2021), though neither defines resolve precisely and both discuss only first-order effects.
13. For recent examples, see Walt (2015) and Walt (2016).
14. For other experimental work that uses non-elite samples to test theories of bargaining, see Renshon, Lee, and Tingley (2017), Kertzer, Renshon, and Yarhi-Milo (2021), and Cebul, Dafoe, and Monteiro (Forthcoming).
15. Main results are shown graphically below; full regression tables can be found in Online Appendix C.3.
16. The minimum likelihood respondents could give for A standing firm is 10 percent (0 percent to 20 percent), the maximum 90 percent (80 percent to 100 percent). This effect is approximately equal to one standard deviation of the resolve variable (m = 62 percent, s = 21 percent), which is equivalent to a Cohen's d = 1; Cohen offered d = 0.8 as a "big" effect.
17. The capabilities results, as well as results from all other manipulations, can be found in Online Appendix C.
18. We do not manipulate interests, but Kertzer, Renshon, and Yarhi-Milo (2021, 22) find similar results when doing so: even when told that one side has high stakes and high relative power, the effect of past behavior on perceptions of resolve is large and significant.

Figure 2. A model of belief updating, where k denotes the level of belief updating, $\hat{R}^k_i$ denotes beliefs about actor i's resolve after the kth level of updating, and $\Delta \hat{R}^k_i$ denotes the change in beliefs about i's resolve due to the kth level of updating. If conditions C1 and C2 are met, then the thick black lines have $\Delta \hat{R}^k_i = \Delta \hat{R}^{k-1}_j \cdot y$, and we can estimate y as $\Delta \hat{R}_B / \Delta \hat{R}_A$.

Figure 3. Effect of A's past behavior against "other countries" (not B) on perceived resolve, relative to respondents that received no information about past behavior (A's baseline probability of standing firm ≈ 60 percent). The x-axis displays the percentage point change; horizontal lines represent 90 percent and 95 percent CIs.

Figure 4. |y|, n, and the interdependence multiplier. The vertical line indicates $|\hat{y}| = 0.73$.

Figure 1. Effect of A's past behavior on perceptions of A's resolve, relative to respondents that received no information about past behavior (baseline probability of standing firm ≈ 60 percent). The x-axis displays the percentage point change in perceptions of A's resolve; horizontal lines represent 90 percent and 95 percent CIs. Treatments: A Backed Down Three Times; A Backed Down in Most Recent Crisis, History of Not Backing Down; A Did Not Back Down Three Times.

To test this idea, we need a treatment that affects beliefs about A's resolve, but cannot affect B's resolve through any channel except (beliefs about) A's resolve. If such a treatment affects B's resolve, then we can conclude that perceptions of B's resolve are influenced by perceptions of A's resolve; in other words, that respondents form higher-order beliefs and that resolve is interdependent (H_INT). It is useful to define the relevant estimand and identifying assumptions more formally. Let $R_i$ denote the resolve of agent i, k denote a level of belief updating, and $\hat{R}^k_i$ denote an observer's beliefs about actor i's resolve after the kth level of updating. Next, define $\Delta \hat{R}^k_i$ as the change in beliefs about actor i's resolve after the kth level of updating, i.e. $\Delta \hat{R}^k_i = \hat{R}^k_i - \hat{R}^{k-1}_i$.
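As a purely illustrative sketch (a toy geometric reading, not the authors' estimator), the snippet below propagates a hypothetical first-order shift of 0.2 in beliefs about A's resolve through the recursion just defined, using a coefficient whose magnitude matches the |ŷ| = 0.73 point estimate shown in Figure 4; the size of the initial shift, the sign convention, and the truncation at 50 levels are assumptions made here for illustration.

```python
# Toy illustration (not the authors' estimator): geometric propagation of a
# first-order reputational shock when each level of belief updating scales the
# previous level by a constant interdependence coefficient y.

def propagate(initial_shift, y, levels=50):
    """Cumulative changes in beliefs about A's and B's resolve.

    initial_shift: hypothetical first-order change in beliefs about A's resolve.
    y:             interdependence coefficient (Delta R^k_i = y * Delta R^(k-1)_j).
    """
    total_a, total_b = 0.0, 0.0
    delta = initial_shift
    for k in range(levels):
        if k % 2 == 0:      # even levels update beliefs about A
            total_a += delta
        else:               # odd levels update beliefs about B
            total_b += delta
        delta *= y          # the next level is scaled by y
    return total_a, total_b

# Illustrative values: a 0.2 first-order shift, |y| = 0.73 (the point estimate
# shown in Figure 4), and an assumed negative sign for the A -> B link.
total_a, total_b = propagate(0.2, -0.73)
print(round(total_a, 3), round(total_b, 3))
print(f"share of A's total belief change beyond the first order: {1 - 0.2 / total_a:.0%}")
```

Under this toy reading, roughly half of the total movement in beliefs about A comes from levels beyond the first, which is at least consistent with the 40 percent to 70 percent share attributed to higher-order beliefs in the abstract.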
Abstract: Reputations for resolve are said to be one of the few things worth fighting for, yet they remain inadequately understood. Discussions of reputation focus almost exclusively on first-order belief change: A stands firm, B updates its beliefs about A's resolve. Such first-order reputational effects are important, but they are not the whole story. Higher-order beliefs (what A believes about B's beliefs, and so on) matter a great deal as well. When A comes to believe that B is more resolved, this may decrease A's resolve, and this in turn may increase B's resolve, and so on. In other words, resolve is interdependent. We offer a framework for estimating higher-order effects, and find evidence of such reasoning in a survey experiment on quasi-elites. Our findings indicate both that states and leaders can develop potent reputations for resolve, and that higher-order beliefs are often responsible for a large proportion of these effects (40 percent to 70 percent in our experimental setting). We conclude by complementing the survey with qualitative evidence and laying the groundwork for future research.

Multiverse-wide Cooperation via Correlated Decision Making (Caspar Oesterheld)

The letter explains that by sending in 'C', a participant can increase everyone else's payoff by $2. By sending in 'D', participants can increase their own payoff by $5. The letter ends by informing the participants that they were chosen for the similarity and rationality of their decision mechanisms, particularly in weird scenarios like this one. It should be noted that every participant only cares about the balance of her own bank account, and not about Hofstadter's or that of the other 19 participants. Upon receiving the letter, should you cooperate or defect? Assuming the participants' thought processes are sufficiently similar to each other, I think we should cooperate because this makes it more likely that our 19 fellow participants also cooperate (see chapter 2 and the references given therein). After all, Hofstadter stated fairly explicitly that the thought processes of the participants are strongly correlated. Thus, if we cooperate, we should expect significantly more of the other participants to cooperate as well than if we defect, which means that cooperating has higher expected utility. Alternatively, we may reason that by our choice we determine what the rational choice is for all participants. Hofstadter calls this idea of cooperation via correlated decision making superrationality. By itself, superrationality does not seem particularly action-guiding. Usually, we have other evidence about other agents' behavior and thought processes such that the evidence we gain from our own decisions is less important (see section 6.6). To apply superrationality in practice, we combine it with another intellectually stimulating but by itself inconsequential hypothesis: we probably live in a vast universe or even multiverse, most of which we cannot observe or interact with (see appendix section 6.2). In this paper, we will use the term "multiverse" in a broad sense to refer to any theory postulating multiple universes, including but not limited to Everett's many-worlds interpretation of quantum mechanics.
In fact, for brevity's sake, we will use the term to refer to any theory of physics that implies the existence of a sufficiently large universe with many agents, including a merely spatially infinite universe. 1 Some parts of this multiverse are probably inhabited by intelligent beings like us, some of which surely think about scenarios like this one in the same way as we do. This is all we need to allow for the application of superrationality. The key insight of this paper is that agents in a multiverse are in a situation structurally similar to the aforementioned donation game if they care about each other's decisions in far away parts of the multiverse. Consider the following list of parallels: • The decisions between some groups of agents are correlated, just like those in the donation game. • Some agents have different goals than others -a claim for which we argue in section 3.1 -just like the agents in the donation game maximize the balances of different bank accounts. • On occasion, agents can \"cooperate\" by benefitting the value systems of agents in other parts of the multiverse at low costs to themselves. • As in the donation game, our actions cannot causally influence the behavior of other agents in the multiverse. As an example, imagine you have some specific value system like the reduction of involuntary suffering. You come into a situation in which involuntary suffering has already been reduced to a very low amount. You face a choice between two actions: • You can continue to reduce suffering and increase your own utility and that of other suffering reducers by 1. 2 • You can increase the utility of superrational agents in other parts of the multiverse who (also) care about things other than suffering reduction by 100, e. g. by generating a society of agents who live happily, produce interesting art, conduct science, explore technologies, trade, behave benevolently towards each other, etc. By construction of the thought experiment you care about suffering reduction only, so you would usually take the first action. But consider that many agents throughout the multiverse will face very similar decision problems. For example, there might be an agent who primarily cares about agents experiencing art and the interestingness of things and who is facing similarly diminishing returns -in her world, most things that could be of interest already exist. Other value systems, on the other hand, have been ignored in the process of making her world more interesting. Her world contains many sentient beings with very low levels of well-being, such as humans experiencing various crises (wars, loneliness, life-threatening dangers) -a common theme in art -, wild animals, or blood sports. She knows that agents in other parts of the multiverse dislike this suffering and that she could alleviate them at low opportunity costs to herself. Her decision problem is thus structurally similar to our own. If her thought process is similar to our own, superrationality applies. If we are nice and follow the heuristic \"fulfill the goals of other agents in the multiverse whenever the returns are much higher than the opportunity costs for your own values\", then this makes it more likely that she will be nice as well, the benefits of which are much greater than those forgone by our own friendliness. 
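A minimal sketch of the expected-utility arithmetic behind this example: the payoffs of 1 and 100 are taken from the example above, while the probabilities of the other agent mirroring our choice are hypothetical numbers used only to locate the break-even point.

```python
# Toy expected-utility comparison for the suffering reducer's choice above.
# The payoffs 1 and 100 come from the example in the text; the conditional
# probabilities of the far-away agent being "nice" are hypothetical.

def expected_utils(p_nice_if_we_are_nice, p_nice_if_we_are_selfish):
    """Expected utils, measured in the suffering reducer's own utility."""
    # Selfish: keep the +1 for our own values; the other agent helps our values
    # (worth 100 to us) only with the lower probability.
    eu_selfish = 1 + 100 * p_nice_if_we_are_selfish
    # Nice: forgo the +1; the other agent mirrors us with the higher probability.
    eu_nice = 0 + 100 * p_nice_if_we_are_nice
    return eu_nice, eu_selfish

# Being nice wins whenever it raises the other agent's probability of being
# nice by more than 1 percentage point (100 * delta_p > 1).
print(expected_utils(0.30, 0.25))    # (30.0, 26.0) -> be nice
print(expected_utils(0.30, 0.295))   # (30.0, 30.5) -> too weak a correlation
```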
In general, if after thinking about superrationality we are nice to other value systems and relinquish opportunities to exploit them, this makes it more likely that other superrational agents with different value systems out there, or at least those who think in ways similar to our own, do the same. And if everyone is friendly in this way, we can expect to harvest gains from compromise -everyone will be better off. I will refer to this idea as multiverse-wide superrationality, or MSR for short. \n An overview of the paper Having read the above introduction, the reader is familiar with the basic idea of MSR. However, it opens up many further questions, some of which I attempt to answer in the present paper. Specifically the rest of this paper makes the following contributions: • We investigate the mechanism of superrationality (chapter 2). -After elaborating on the argument for superrationality, we survey the decision theory literature pertaining to superrational cooperation (sections 2.1 through 2.5). Among other things, we argue in favor of incorporating \"updatelessness\" into one's decision mechanism. -Exactly how much should we cooperate? Considering superrationality, how should we decide between actions in this universe to maximize our multiverse-wide utility? I will argue that it is best to effectively adopt a new utility function in this universe: a weighted sum of all superrationalists' utility functions that, if adopted by all superrationalists, gives every superrationalist the same gains from compromise. This function should be the same for all agents with your decision algorithm. (See sections 2.6 through 2.8.) -We show how superrational cooperation fundamentally differs from standard causal cooperation (sections 2.9 and 2.10). We will see how it requires no reciprocity -we should benefit superrationalists who cannot benefit us, because we may correlate with agents who can benefit us but whom we cannot benefit. • Cooperating superrationally with agents elsewhere in the multiverse means taking their values into account. Chapter 3 explores what these values might be and which aspects of these values are relevant for MSR. -I argue that (with regard to the decision to cooperate or not) we correlate with agents who hold values that differ from ours (section 3.1). If this were not the case, cooperating with them would be unnecessary except when it comes to coordination (see section 2.8.9). -I provide a comprehensive list of prerequisites that must be fulfilled for MSR to work (see section 3.2). For example, we cannot benefit agents who do not care about our part of the multiverse (section 3.2.2). -Which aspects of other agents' preferences should be taken into account? E.g., should it only be \"moral preferences\"? To which extent should we idealize their preferences, e. g. by trying to factor out cognitive biases? We motivate and answer these questions in section 3.3. -We review different approaches to hypothesizing about the values of other agents in the multiverse (section 3.4), the most important ones being evolutionary psychology and the study of cultural evolution. • How does multiverse-wide superrational cooperation shift our priorities? What does it recommend in practice? These questions are discussed in chapter 4. We first show how to make policy decisions in the absence of reliable knowledge about the values of agents elsewhere in the multiverse (section 4.1). 
I then recommend a few interventions, such as promoting causal cooperation (section 4.5.2) and, perhaps most importantly, ensuring that future superintelligent AIs reason correctly about decision theory (section 4.6.3). • The appendix contains various additional considerations that are either less crucial for our decisions or otherwise more tangential, yet nonetheless relevant and of interest to at least some readers. For example, I give an overview of the small amount of work that is closely related to MSR (section 6.1) and explain why I find it plausible that we live in a universe or multiverse containing many agents with whom we are correlated (section 6.2). I also argue that superrationality has few implications for the interactions between agents on Earth (section 6.6), and hence why this paper specifically concerns the application of superrationality in a multiverse-wide (as opposed to general) setting. Much more research is needed to answer some of the questions I set out to explore. This is why I focus more on outlining how these questions can be researched in the future, rather than on trying to ascertain that all my answers are correct with high confidence. The optimal outcome is the one where you defect and everyone else cooperates, yielding a payoff of 19 • $2 + $5 = $43. Conversely, the worst outcome occurs if you cooperate and everyone else defects, yielding a payoff of $0. In any case, no matter how many participants cooperate, you are always better off defecting; 'D' is the dominant strategy. Standard game-theoretical analysis would therefore suggest that 'D' is the correct choice (Binmore, 2007a , chapter 1, Osborne, 2004 . This is quite unfortunate, because if everyone abides by this reasoning, this yields a payoff of just $5 -whereas if everyone could cooperate, you and everyone else could earn 19 • $2 = $38. Is there any way around this tragedy of the commons? If we only consider the causal implications of an action, the analysis is indeed accurate. However, it ignores that there is also a correlation between the decisions of the participants 3 . Consider a variation of the above thought experiment in which you know that the other 19 participants are all exact copies of you, deciding under the exact same environmental circumstances as yourself. You still have no causal influence over the others' decisions and 'D' is still the dominant strategy; no matter what the other copies choose, 'D' is the better option. However, this argument seems much less attractive now. No matter what you choose, your copies are guaranteed to make the same choice (assuming that they make decisions deterministically). There is no possible (deterministic) world in which two copies decide differently in the exact same situation. Thus, your decision whether to cooperate is one between two worlds: in one of them, the algorithm implemented by your brain returns 'C'; in the other, it returns 'D'. Determining the choice of all your copies to be 'C' gives you more utility, and should thus be regarded as the (instrumentally) rational choice. Of course, strong correlation is not limited to atom-by-atom copies. Imagine a variation of the donation game in which you play against near copies who differ from you in insignificant ways. One may have forgotten some particular childhood memory; another may be more skilled at playing basketball; and so forth. Similarly, the environments in which the near copies decide may differ inconsequentially. 
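The payoff structure of the donation game introduced above can be checked in a few lines. This sketch uses the $2/$5 stakes and 20 participants from the text and verifies the figures quoted there, including that 'D' is dominant while universal cooperation still beats universal defection.

```python
# Payoffs in the 20-player donation game from the text: each cooperator adds
# $2 to every other participant's balance; each defector adds $5 to their own.

def payoff(my_choice, n_other_cooperators):
    """Dollar payoff for one participant, given their own choice ('C' or 'D')
    and how many of the other 19 participants cooperate."""
    return 2 * n_other_cooperators + (5 if my_choice == 'D' else 0)

assert payoff('D', 19) == 43   # you defect, the other 19 cooperate
assert payoff('C', 0) == 0     # you cooperate, the other 19 defect
assert payoff('D', 0) == 5     # universal defection
assert payoff('C', 19) == 38   # universal cooperation
# 'D' dominates: for any fixed behavior of the others it pays exactly $5 more.
assert all(payoff('D', k) - payoff('C', k) == 5 for k in range(20))
```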
One participant may receive the letter in, say, the font \"Times New Roman\" and another in \"Arial\". In a donation game with such negligible variations, it seems clear that 'C' is still the better option. Although we cannot be absolutely certain that all 20 of the near-copies make the same choice, it is very likely that they will. With growing dissimilarities between two agents and their environments, the correlation between them decreases further, but your own decision still gives you information about the other agents' decisions. As long as the accumulating differences do not affect any of the agents' reasoning, the correlation will remain a strong one. While the participants of the two donation games are not copies of each other, both variants make clear that the participants' decision-making mechanisms resemble one another and are thus correlated. The donation game with similarity is very explicit about this similarity. The donation game with common rationality, on the other hand, is more subtle -it tells the participants that their decision mechanisms are all \"rational\". Of course, the individual participant does not know what the rational choice is, yet, but she knows that, if she makes her decision by abstract reasoning (rather than a whim) the result will be the rational decision. She also knows the other participants are also rational (in the same sense of the word) and will therefore arrive at the same -the rational -decision. (It seems unlikely that 'C' and 'D' are exactly equally rational.) In essence, this argument from common rationality is one from (perfect) correlation: if we are rational, we determine what the rational decision is and thus what other rational agents will do. This mechanism is what Hofstadter calls superrationality: if everyone knows that everyone is rational and has the same information, then everyone can determine everyone else's decision. Throughout this paper, I will tend to make arguments from similarity of decision algorithms rather than from common rationality, because I hold these to be more rigorous and more applicable whenever there is not authority to tell my collaborators and me about our common rationality. In any case, the argument from correlation is sufficiently general to include reasoning based on common rationality as a type of perfect correlation. Because the underlying mechanisms are similar, we use the term superrationality for both similarity and common rationality-based lines of reasoning. Assuming that we ourselves apply superrationality, we will also call an agent \"superrational\" if her decision correlates with ours. Similarly, we call a group of agents superrational if they use similar decision algorithms and take superrationality-type reasoning into account, sweeping the complications of thinking about individual correlations under the rug. Furthermore, we shall use the term \"donation game with superrationality\" for donation games with similarity or common knowledge of each other's rationality. Anticipating objections, Hofstadter (1983) writes: This solution depends in no way on telepathy or bizarre forms of causality. It's just that the statement \"I'll choose C and then everyone will\", though entirely correct, is somewhat misleadingly phrased. It involves the word \"choice\", which is incompatible with the compelling quality of logic. Schoolchildren do not choose what 507 divided by 13 is; they figure it out. Analogously, my letter really did not allow choice; it demanded reasoning. 
Thus, a better way to phrase the \"voodoo\" statement would be this: \"If reasoning guides me to say C, then, as I am no different from anyone else as far as rational thinking is concerned, it will guide everyone to say C.\" [...] Likewise, the argument \"Whatever I do, so will everyone else do\" is simply a statement of faith that reasoning is universal, at least among rational thinkers [or those who receive the letter], not an endorsement of any mystical kind of causality. I do not think that, in practice, similarity between decision algorithms will often be as strong as assumed in the above thought experiments. Even if I received a letter of the above kind, I would not think of my decision as determining the others' decisions with near certainty (although there are circumstances under which I would cooperate). In fact, the very reason I make the superrationality argument about the multiverse in particular is that the conditions for superrationality are usually not fulfilled on Earth (see section 6.6). Nonetheless, it is useful to assume perfect and near-perfect correlations in thought experiments for illustration purposes. The rest of this section explores various theoretical considerations related to those mechanisms of superrationality that have practical implications for multiverse-wide superrationality. Most of them are not specific to the multiverse-wide application, however, and we will often illustrate them in more readily imaginable settings in a single universe. \n Lack of knowledge is evidential power, part I: the other agents One reason why some people would not cooperate in the donation game (or the prisoner's dilemma) is, I think, that they have knowledge that would break the correlation between the participants. Using their model of human psychology, they can quickly make an informed guess about what the others are likely to think about and thus decide. Put simply, you learn less from your own cooperation once you already know what the others are deciding. Consider the following variation of the donation game: The Devious postal worker. Game master Hofstadter (in this thought experiment a fictional character) has contrived another donation game. This time, you and the other participants know that you all live in the same area and are to reply by post. Having learned your lesson from Hofstadter's article in Scientific American, you write a big 'C' onto a postcard and walk to the post office. The postal worker takes your card, reads the address and says: \"You're participating in one of Prof. Hofstadter's games, aren't you? And you seem to have decided to cooperate. How very noble and decision-theoretically sound of you! Well, I'll let you in on a little secret. Hofstadter has been playing his games with people in this area for years now. We used to merely distribute the letters for him, look at people's answers and then send them back to Hofstadter, but after a year or two, we started to bet on people's replies. The participants tend to use small cards rather than envelopes to save money, so it was easy to spot their replies and count the number of C's and D's among them. We eventually became almost perfect at predicting people's responses, including those from first-timers like yourself who don't necessarily correlate with past participants. But merely betting on responses got boring after a while, so we started to play a new game: we would tell all participants about our predictions of what the others would choose, giving each one a chance to reconsider their own choice. 
Although this obviously affected the players' behavior and forced us to readjust our methods, our predictions are now practically flawless once again. To cut a long story short, we're highly confident that 18 of your 19 fellow players will defect and only one will cooperate.\" The postal worker gives you back your postcard and a pen. Should you still cooperate or revise your decision? If we assume that the postal worker's prediction gives you far more reliable evidence than your own action, then the superrationality argument presented above no longer works. Once we already have reliable information about what the other participants are likely to choose (or what they have already chosen), our own choice can no longer make cooperation significantly more likely. In terms of evidential decision theory (introduced in the next section), if \n E[number of other cooperators | I cooperate & postal worker says \"n others defect\"] ≈ E[number of other cooperators | I defect & postal worker says \"n others defect\"], where E denotes conditional expectation, then the evidential role of our decision provides no reason to cooperate. That said, in section 2.4 we will see that this issue is actually a bit more complicated. After having sent in your postcard of defection and reflected on what happened, you might realize that all of the other participants were in the same situation as you were. They were also told that 18 (or, in case of the one who cooperated, 19) of the others would defect and, upon hearing this, each concluded that defection would give them a higher payout. No wonder that most players defected. Note that even if everyone had been told that all the others had cooperated, it would still be rational for all participants to defect. By merely telling the participants about their predictions, the postal workers make cooperation much less attractive and thereby less common. What is interesting about the Devious postal worker is that what makes the outcome worse for everyone than in the original Superrational donation games is that everyone receives information about the other participants' behavior. While counterfactually useful for each single player, the information is harmful overall. As Paul Almond (2010b, chapter 4.5 ) says, \"lack of knowledge is power\", which I would like to refine to: lack of knowledge is evidential power. We shall revisit this concept soon. In particular, we will think about whether there is some way around the unfortunate conclusion that nobody should cooperate after receiving the respective information. \n A short survey of decision theories and their relation to superrationality Superrationality is a special application of non-causal decision theories -that is, theories of rational decision making that not only take the causal implications of an action into account but also other information that making this decision would give us. 4 In the case of superrationality, that information is always about the other agents. Conversely, causal decision theory (CDT) (Weirich, 2016 ; J. M. Joyce, 1999; Lewis, 1981; Skyrms, 1982; Gibbard and Harper, 1978) neglects any such non-causal implications of an action in the Donation game with similarity. However, the best-known example of what I would view as CDT's limitations is surely Newcomb's problem, originally introduced by Nozick (1969) . Readers who have not yet studied the problem, are encouraged to do so, although it is not required for understanding most of the present paper. 
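For a concrete sense of why the two camps diverge, here is a minimal expected-value sketch of the standard Newcomb setup; the payoff amounts and the predictor accuracy are the conventional illustrative values for this thought experiment, not figures taken from this paper.

```python
# Standard Newcomb's problem with conventional illustrative numbers: an opaque
# box holds $1,000,000 iff a highly accurate predictor foresaw you taking only
# that box; a transparent box always holds $1,000.

ACCURACY = 0.99              # assumed predictor accuracy (illustrative)
BIG, SMALL = 1_000_000, 1_000

# Evidential reasoning: treat your own choice as evidence about the prediction.
edt_one_box = ACCURACY * BIG                 # 990,000
edt_two_box = (1 - ACCURACY) * BIG + SMALL   # 11,000

# Causal reasoning: the boxes are already filled; for any probability p that
# the opaque box is full, two-boxing pays exactly SMALL more.
def cdt_values(p):
    return p * BIG, p * BIG + SMALL          # (one-box, two-box)

print(edt_one_box, edt_two_box)   # evidential reasoning favours one-boxing
print(cdt_values(0.5))            # causal reasoning favours two-boxing
```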
Because Newcomb's problem was the first published example of a problem that (potentially) requires one to consider the non-causal implications of one's decision, all problems wherein such considerations (including superrationality) might play a role are called Newcomb-like problems. Somewhat confusingly, the field that studies decision theories (in particular, which one we ought to use) is itself called decision theory. Besides discussions of Newcomb-like problems (i.e. whether and how correlated decision making and the like should be taken into account), decision theory is also concerned with topics like the expected utility hypothesis and deciding without assigning probabilities. For those who are unfamiliar with the field, I recommend starting with An Introduction to Decision Theory (Peterson, 2017). More elaborate introductions to the decision theory of Newcomb-like problems and correlated decision making include Ahmed (2014), Yudkowsky (2010), and Almond (2010). Interestingly, most philosophers seem to endorse CDT. A recent survey of professional philosophers conducted by Bourget and Chalmers shows that in Newcomb's problem (one of the clearest examples of CDT's potential failure 5), about 30% endorse CDT's recommendation 6 of two-boxing, whereas only 20% endorse one-boxing (Bourget and Chalmers, 2014). In fact, Bourget and Chalmers (2014, p. 21, table 11) even show that philosophers who specialize in decision theory are especially likely to endorse two-boxing. Defenses of CDT in Newcomb's problem are given by, e.g., Joyce (1999, chapter 5.1) and Eells (2016, chapter 8). Some have also argued that Newcomb's problem cannot occur (Ledwig, 2000, footnote 81; Binmore, 2007a, chapter 10). Overall, I find the arguments put forward against CDT much more convincing than those in favor. Yet even among decision theorists who reject causal decision theory, there is disagreement about what the proper replacement should be. Classically, CDT is contrasted with evidential decision theory (EDT) (Ahmed, 2014; Almond, 2010b; Price, 1986; Horgan, 1981). However, there are also many newer, less widely known ideas. These include functional decision theory (Soares and Levinstein, n.d.), timeless decision theory (Yudkowsky, 2010a), updateless decision theory (Benson-Tilsen, 2014; Hintze, 2014; McAllister, n.d.), ambient decision theory, Spohn's variation of CDT (2003, section 2; 2012), Arntzenius' deliberational decision theory (2008), and Wedgwood's variation of causal decision theory (2013). 7 Superrationality is not based on any specific non-causal decision theory but works in most of them. Consequently, this paper is meant to adopt an impartial stance between the decision theories in which superrationality works.

CDT would self-modify to behave like a non-causal decision theory in some Newcomb-like problems

There is a class of problems wherein causal decision theorists recommend self-modifying into a new decision theory that acts as though it takes some acausal considerations into account. In both the aforementioned donation game and Newcomb's problem, the agent serves as a model for a number of (near-)copies and a prediction, respectively. Assuming that this model is captured at a particular point in time, it follows that the model represents a time-specific version of the agent. Thus, if the agent precommits to using superrationality or to one-boxing before the copies or simulation are made, they would causally determine all copies' choices.
Consider the following thought experiment: Donation game with copies and precommitment. One morning Omega (an absolutely trustworthy, perfect predictor with various superhuman abilities) tells you that you will play the donation game on the next day. However, instead of merely recruiting other people as participants in the game, Omega will copy you atom-by-atom tonight and employ the resulting copies as tomorrow's participants. You are also told that the payouts this time around will be a thousand times higher than in previous games, so it is in your best interest to prepare well. As a final deed, Omega then leaves you a short book entitled From cold showers to chastity: How to commit to any action by self-hypnosis. What do you do? If you are already convinced of superrationality (or if you care a lot about the wealth of your copies), you would not have to do anything. You could spend the day going about your usual business, cooperate on the next day, and win a lot of money. But imagine you were a proponent of CDT and did not care about your copies. You would then want your future self and your copies to cooperate, but you know that they will not do so automatically. As soon as the copies are created, none of them (including you) will have any causal influence on what the others will do. So, if you do nothing, everyone defects and you get a very low payout. However, since you have not yet been copied, you still have a causal influence on the future version of you from which the copies will be created, and thus on the copies themselves. If you could cause the future version of you to be the kind of agent who cooperates, you could causally improve your payout in Omega's game. Given the book that Omega left you, this should be easy: read the book, precommit yourself, and thereby all your future copies, to cooperate, and everybody wins. A causal diagram representing the decision problem is given in Figure 1.

Figure 1. Causal diagram of the decision problem, with nodes for the precommitment, your decision, each of your copies' decisions, and the payout.

... about the monetary rewards paid to the real version of the agent. 7. Many decision theories are also parameterized by some aspect of their definition. For example, causal decision theory is parameterized by the notion of causality that it uses (see, e.g. Lewis, 1981; Hájek, 2006, page 19; Weirich, 2016, chapter 2.3; Pearl, 2009, chapter 4).

If CDT thinks that it will face some Newcomb-like problem where the copy or model for prediction is created in the future, it would precommit to make the same decision that acausal decision theories recommend (without precommitment). Does that mean that CDT would have to make one precommitment for each Newcomb-like problem (starting in the future) that it will face with non-zero probability? Rather than patching its behavior in each (future) Newcomb-like problem individually, CDT could also make a more general self-modification. At time t, it would precommit to use the following alternative decision theory in the future: do what I, at time step t, would have precommitted to do in the present situation (Yudkowsky, 2010a, chapter 2; Soares and Fallenstein, 2015, chapter 3; Meacham, 2010). Such precommitment is not sufficient to generate the kind of superrationality required for this paper: it does not cover Newcomb-like problems that do not start in the future. That is, if the copies are not created based on a future version of the agent, cooperation with them is not covered by precommitment.
Thus, CDT's precommitment does not imply cooperation with agents in other parts of the multiverse. However, it does suffice for a weaker version if we assume the Everett interpretation of quantum physics (see section 6.8). \n Lack of knowledge is evidential power, part II: taking a step back CDT's precommitment only entails partial agreement with its rival decision theories. Still, it is worth taking a closer look at precommitment, as it leads us to another interesting dimension along which decision theories can vary. Consider Counterfactual mugging, also known as \"the curious benefactor\" (Hintze, 2014, chapter A CDT agent would, again, only precommit if Omega bases its prediction on a future version of the agent, whereas I (and many non-causal decision theorists) would argue that we should precommit as long as the result of the coin flip is unknown to us (even if Omega's model is based on a past version of us). 8 If we do so, we gain information that Omega thinks we give in, and therefore that we will receive money in expectation. However, once we learn that the coin came up tails, the \"winning\" move is to keep the $100. As before, the problem contains a harmful piece of information -although in this case an aspect of the environment, and not a piece of information about the behavior of other agents, causes trouble. If we got the chance, we would \"protect\" ourselves against this piece of information by a precommitment, which renders that piece of information harmless. A similar reasoning applies to the Devious postal worker variant of the donation game: If everyone precommits to cooperation irrespective of what the postal worker's prediction says, then a negative prediction about the other agents' behavior can no longer be self-fulfilling. Thus, if you precommit to cooperating before the postal worker tells you about the other agents' decisions, you have reason to expect more positive news (assuming you correlate with the other agents). 9 As is the case for CDT's precommitment in the previous section, this leads to a more general self-modification that can be made instead of a large number of individual precommitments for individual situations. Specifically, we would (again) precommit to basing our decision in this situation on what is good from the perspective of the state of knowledge prior to being given new information (like the result of the coin toss). This is where updateless decision theory gets its name from, and I will call this feature of decision theories updatelessness. Contrary to what the term may suggest, it does not mean that we do not react to new information at all, but rather that we do it in a different way. Instead of updating the probabilities we assign to possible states of the world and making the best decision based on that probability distribution, we think about what we would have precommitted ourselves to do in this situation. Usually, what we would have precommitted ourselves to do is the same as what is then rational for us to do. For example, if we take a bite from an apple and it tastes foul, we throw the apple away. If you had to precommit to some action before learning that the apple is foul, you would also precommit to throw the apple away if it tastes foul (and to continue eating the apple if it tastes good). Counterfactual mugging is one of the rare cases in which it does make a difference. Acausal decision theorists would precommit to be updateless about all information they receive in the future. 
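As a toy expected-value comparison of the two policies just discussed: the $100 payment is from the discussion above, while the fair coin and the size of the reward Omega pays on heads (to agents it predicts would have paid on tails) are assumptions introduced here purely for illustration.

```python
# Counterfactual mugging, toy numbers: a fair coin is flipped; on tails Omega
# asks you for $100, on heads it pays a reward only if it predicted that you
# would have paid on tails. The $100 is from the text; the fair coin and the
# $10,000 reward are assumptions made purely for illustration.

REWARD, COST = 10_000, 100

# Evaluated before the coin flip (the precommitment / updateless perspective):
ev_policy_pay = 0.5 * REWARD + 0.5 * (-COST)   # 4950.0
ev_policy_keep = 0.5 * 0 + 0.5 * 0             # 0.0

# Evaluated after learning the coin came up tails (the updating perspective):
ev_now_pay, ev_now_keep = -COST, 0

print(ev_policy_pay, ev_policy_keep)   # the paying policy wins ex ante
print(ev_now_pay, ev_now_keep)         # but paying looks worse ex post
```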
In essence, they would switch to a decision theory that comes with updatelessness built-in (the most notable one of them currently being updateless decision theory (Benson-Tilsen, 2014; Hintze, 2014; McAllister, n.d.) itself). Thus, if you had been reasoning about (acausal) decision theory including the possibility of self-modification correctly all along (rather than only after learning about the experiment and its result), you would actually cooperate in the Devious postal worker and give in to Counterfactual mugging -even without having precommitted to do so in these particular problems. Some readers will no doubt already be familiar with updatelessness and the arguments in favor of it. For those who have not, this may be a good time to incorporate general updatelessness into their decision-theoretical intuitions, as it is relevant for some of MSR's implications (see sections 2.8.6 and 2.9.1). As a side note, there are justifications of updatelessness that are not based on precommitment and thus suggest that we should, e. g., give the money in counterfactual mugging even if we previously have not thought about precommitting to updatelessness. Ryan Carey lists a few in a comment on the Intelligent Agent Foundations Forum. Benja Fallenstein proposes a justification based on \"logical zombies\". For other ideas, see Armstrong (2011, section 3.1.2) and Drescher (2006a, chapter 6.2). 10 However, these are more complicated, non-obvious and not well-established. I thus opted for limiting myself to the more straightforward precommitment-based justification for updatelessness as discussed by Meacham (2010) , Fallenstein on LessWrong and myself in a blog post (cf. Ahmed and Price, 2012) . \n Reasons and correlations It is difficult to pin down the general principles of how the decisions of different agents in different situations correlate. Indeed, I suspect that the problem has no simple solution other than what is implied by the general solutions to naturalized induction (Soares and Fallenstein, 2014, section 2.1; Soares, 2015) and decision theory. 11 10 Also note that updateless behavior can sometimes result from anthropic uncertainty even when applying the more classical evidential or causal decision theories. 11 Determining correlations between actions is similar to specifying the maxim corresponding to an action in Kant's categorical imperative. It seems that nobody has a precise grasp of how the latter is supposed to be done and that this makes it difficult to apply the categorical imperative. However, the problem of specifying the maxim underlying one's action does not necessarily have a single correct solution. Determining correlations between your actions and that of others, on the other hand, follows from any solution to the problems of naturalized induction and decision theory. These solutions probably depend on priors, but it probably makes more sense to speak of them as having a correct solution. However, humans seem to have some good intuitions for how decisions correlate, in part because understanding the correlations between actions is a day-to-day activity. Imagine seeing your friend Anna being wounded in her right arm one day. She uses her left arm to apply bandages and call a doctor, who arrives a few minutes later and inspects her right arm. A few days later, you see Bob being wounded in his left arm. Based only on the experience from Anna's wound, what should you reasonably expect to happen? Will Bob use his left arm to apply bandages to his right one? 
Will Anna apply bandages to her right arm? Or to Bob's? Will doctors come to Anna? Even after seeing just one instance of a situation, we are often able to identify many of its causal links and use this information to infer correlations with similar situations. If we see the reasons for a decision from the inside, these correlations become even clearer. If you are Anna and you apply bandages to your right arm, you know that it is to stop the bleeding. Doing so gives you no \"weird\" evidence -it would not lead you to expect, say, that people are generally likely to apply bandages to things (cf. Ahmed, 2014, chapter 4; Almond, 2010b, chapter 2.8) . In general, taking a particular action only because of some reason X tells you nothing about whether agents who do not care (or know) about X will also take that action. Importantly, superrationality itself falls under this general rule. That is, if you do something for superrationality-related reasons, then this does not tell you anything about how people who do not accept superrationality would behave. As a trivial example, consider playing a donation game against 19 people whom you all know to make fun of superrationality whenever the opportunity avails itself. Attempting to superrationally cooperate with those people seems rather fruitless. While these considerations may seem trivial, alleged refutations of acausal decision theories are often based on ignoring them or assuming that the evidential thinker ignores them (cf. Ahmed, 2014, chapter 4; Almond, 2010b, chapter 2.8 ). \n Your back is not mine If the decisions of agents correlate or if each can determine what is rational, then why can someone -let us call him Dennis -not just determine that it is rational to benefit him or his values? Surely, if everyone just benefited Dennis, that creates the optimal outcome for him. So, in a donation game with superrationality, perhaps he should determine the rational policy to be \"cooperate, unless your name is Dennis\"? This is clearly absurd. The specific reasons that lead Dennis to come up with this strategy (and to abide by it) do not matter to his fellow players, although each of them probably have self-serving reasons which are analogous to those of Dennis. Dennis wants to achieve his own goals, and this is done optimally if everyone cooperates while he alone defects. However, this only makes it more likely that some other participant -let us call her Dana -would reason, \"I want to maximize my payoff; if I could determine everyone's choices, I would want everyone but me (Dana) to cooperate.\" (cf. Drescher, 2006a, page 298f.) . \n Does accepting superrationality commit us to irrational behavior in medical Newcomb problems? One common objection to making decisions based on what our action correlates with, rather than what our action causes, is that it seems to imply irrational behavior in some cases (e. g. Nozick, 1969, page 135) . In particular, reasoning from correlation seems to fail in so-called medical Newcomb problems. An example is Yudkowsky's chewing gum problem (2010a, section 1.2), which he describes as follows: Suppose that a recently published medical study shows that chewing gum seems to cause throat abscesses -an outcome-tracking study showed that of people who chew gum, 90% died of throat abscesses before the age of 50 This table shows that whether you have the gene CGTA or not, your chance of dying of a throat abscess goes down if you chew gum. Why are fatalities so much higher for gum-chewers, then? 
Because people with the gene CGTA tend to chew gum and die of throat abscesses. The authors of the second study also present a test-tube experiment which shows that the saliva from chewing gum can kill the bacteria that form throat abscesses. The researchers hypothesize that because people with the gene CGTA are highly susceptible to throat abscesses, natural selection has produced in them a tendency to chew gum, which protects against throat abscesses. The strong correlation between chewing gum and throat abscesses is not because chewing gum causes throat abscesses, but because a third factor, CGTA, leads to chewing gum and throat abscesses. Having learned of this new study, would you choose to chew gum? The causal graph of this problem is given in Figure 2. Similar well-known decision problems of this kind are Solomon's problem (Gibbard and Harper, 1978, section 5; Eells, 2016, chapter 4), the Smoking lesion (Eells, 2016, chapter 4), and the Psychopath button (Egan, 2007, section 3). Naive correlation-based reasoning suggests that we should still refrain from chewing gum, since the act of chewing gum would be evidence that we have the CGTA gene and thus throat abscesses. This strongly conflicts with our intuition that we should chew gum to protect against the abscesses. However, I will argue that this provides no convincing argument against superrationality. First, the correlation in the Chewing gum problem differs qualitatively from the correlations between similar decision algorithms (Treutlein and Oesterheld, 2017). In the Chewing gum problem (and medical Newcomb problems in general), the correlation stems from a causal relationship: our genes influence our decisions. Thus, the genes and the decisions are correlated. The correlations of superrationality, on the other hand, result from the similarity of the decision algorithms. The reasoning behind cooperation does not involve a common cause of all collaborators' decisions. Instead, the correlation may be viewed as logical (Garrabrant et al., 2016): if I cooperate, then this implies that all other implementations of my decision algorithm also cooperate. Figure 3 illustrates the difference between these two types of Newcomb-like problems. Because correlations in medical and non-medical Newcomb-like problems differ qualitatively, ignoring the correlations of our actions in the former does not mean we should ignore them in the latter. In fact, in response to medical Newcomb problems, philosophers have proposed a variety of decision theories that behave in this exact way (Treutlein and Oesterheld, 2017). That is, they cooperate superrationally (and one-box in Newcomb's problem) but chew gum in the Chewing gum problem. These include Spohn's variation of CDT (2003, section 2; 2012) and Yudkowsky's timeless decision theory (2010).

Figure 2. Causal graph of the Chewing gum problem: the gene CGTA influences both chewing gum and throat abscesses.

Secondly, even purely correlation-based reasoning as done by EDT may recommend chewing gum, depending on how the causal link from the CGTA gene to chewing gum is believed to work. Given that people in the study presumably did not know that chewing gum helps against throat abscesses, it is plausible that CGTA causes people to intuitively desire chewing gum. However, if learning about the study and applying EDT then causes us not to chew gum, it does not tell us anything about whether having the CGTA gene would have caused us to do the opposite.
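The common-cause structure described above can be reproduced in a small simulated data set; all of the numbers below (gene prevalence, how strongly CGTA pushes people toward gum, and the abscess rates) are hypothetical and chosen only to recreate the qualitative pattern of the quoted study: gum chewers fare worse overall, yet within each gene group chewing is protective.

```python
import random

random.seed(0)

# Hypothetical confounded model: the gene CGTA makes people both likelier to
# chew gum and far likelier to develop abscesses, while gum itself is protective.
# All numbers are made up to reproduce the qualitative pattern only.
P_GENE = 0.3
P_GUM = {True: 0.9, False: 0.1}            # P(chew gum | CGTA)
P_ABSCESS = {(True, True): 0.85,           # (CGTA, gum) -> P(abscess)
             (True, False): 0.95,
             (False, True): 0.05,
             (False, False): 0.10}

people = []
for _ in range(100_000):
    gene = random.random() < P_GENE
    gum = random.random() < P_GUM[gene]
    abscess = random.random() < P_ABSCESS[(gene, gum)]
    people.append((gene, gum, abscess))

def abscess_rate(keep):
    group = [a for g, c, a in people if keep(g, c)]
    return sum(group) / len(group)

# Marginally, gum chewers fare much worse (most of them carry the gene)...
print(abscess_rate(lambda g, c: c), abscess_rate(lambda g, c: not c))
# ...but holding the gene fixed, chewing gum lowers the abscess rate.
print(abscess_rate(lambda g, c: g and c), abscess_rate(lambda g, c: g and not c))
print(abscess_rate(lambda g, c: not g and c), abscess_rate(lambda g, c: not g and not c))
```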
Similarly, if you know that a sprinkler has watered the lawn, observing that the grass is wet is no evidence that it has also rained (see Figure 4). The sprinkler already explains why the lawn is wet, so you do not need rain as an additional explanation (see Ahmed, 2014, section 4.3 for an extensive discussion of this argument).

Are the correlations strong enough?

In most superrationality-related thought experiments, it is assumed that the other agents are near-copies of us. The problems presented in this paper are no exception. However, in any real-world setting, most agents are not close copies of us. We should therefore expect correlations to be much less than perfect. Luckily, the total number of agents in the multiverse is probably so vast 12 that the correlations between ourselves and any individual agent need not be very large 13 (see section 6.2). Because many agents probably do not know about superrationality, we may assume that 99.99% of the agents do not correlate with us at all when it comes to the decision whether to cooperate superrationally. In this case, cooperation with the rest still pays off if we believe that our correlation with the others is non-negligible and positive. It does not matter that we inadvertently benefit many "free riders". For example: if our cooperation makes it 1% more likely that each of these correlated agents also cooperates, then if there are "only" a billion of them, we can expect 10 million more to cooperate if we cooperate. 14

Correlation only with close copies?

Some might think that they are uncorrelated with everyone else apart from very close copies of themselves. Because such near-copies would likely share their utility function to a large extent, there is no need to cooperate with them (although coordination may be useful, depending on the utility function, see section 2.8.9).
However, there are various problems with this idea. For one, to be able to correlate negatively with the other agents it seems as though one would have to find out about their decision and then do the opposite, which appears to be difficult. Furthermore, self-modification also commits us to cooperate more when the others defect -an agent committed to unconditional defection does not correlate with anyone else. The intuition underlying the self-modification idea is that by self-modifying to be negatively correlated, we can acausally determine the others' decisions. But I do not think this works in the relevant way. When you modify your decision algorithm, you lay your power into the hands of the new algorithm. This means you cannot, for example, self-modify to some decision algorithm A that does the exact opposite of what everyone else is doing, and then defect -unless A already defects on its own. Thus, you cannot determine everyone else to cooperate unless you are already correlated with them. Similarly, you cannot commit to output the 100th digit of π, and then return 6 anyway to acausally determine the value of π. However, if you are already correlated with the 100th digit of π, you can logically determine its value. For instance, if Omega predicts your behavior and then tells you that if you raise your arm, the 100th digit of π will be 7 and if you do not it will be 1, you can determine the 100th digit of π. Of course, these stop working once you know what the 100th digit of π is. As a last point, self-modification does not seem to add anything to direct defection (without self-modification). To see why, let us consider the two kinds of agents that are not yet negatively correlated with the others. The first agent is not correlated with others before self-modification, and therefore has no reason to self-modify. He can just defect directly, without adopting a weird decision theory that is about doing the opposite of what someone in some other part of the multiverse is doing. The second agent is (positively) correlated with others before self-modification. Her problem is that if she self-modifies, others will do so as well, which gives her evidence that a lot more defection is happening than if she would cooperate. Another relevant point is that there is a sharp upper bound to the amount of negative correlation that can exist within a group of agents. Imagine agents A, B, and C, whose decision to cooperate we model as a random variable with the two values 1 (for cooperation) and 0 (for defection). Let us say A is perfectly negatively correlated with B and B is perfectly negatively correlated with C. A is then perfectly positively correlated with C. So, even among just three agents, not all correlations can be perfect and negative. On the other hand, the pairwise correlations may well all be perfect and positive. To study this further, we move from correlations to covariances, because they can be meaningfully added up. In general, we can derive a lower bound of − 1 4(n−1) for the average covariance between pairs of agents from any set of n ≥ 2 agents (excluding \"pairs\" of one and the same physical agent), if cooperation is seen as a binary random variable. If the agents are all perfectly correlated, then all covariances are at most 1 4 , so the upper limit for the average covariance is also 1 4 . Unless we have reason to believe that we are special, i. e. 
that our covariance with the others falls far below the average covariance between two agents, this suggests that especially for very large numbers of agents n, our possible acausal impact under the assumption of only positive covariances can be much larger than that of negative covariances. In fact, the covariances of the average agent cannot add up to something below − 1 4 regardless of the number of agents. In contrast, they can be as high as 1 4 (n − 1) for positive covariances. If we view the covariances as uncertain, this suggests a prudential argument in favor of assuming positive covariances to dominate over negative ones, given that our acausal influence is so small under the opposite assumption. However, the details of this argument (and whether it works at all) depend on our \"meta-probability distribution\" over covariances. \n The relative importance of superrational cooperation: an example calculation Looking at a single decision, how do the benefits from superrational cooperation compare with the opportunity costs? Although we need to make some unrealistic assumptions (such as exact symmetry of the decisions faced by all the agents) in order to calculate this value, it is nevertheless worth an attempt, if only for the purpose of illustration. We assume that there are n superrational agents whose decisions in donation games are perfectly correlated; that is, either all of them cooperate or all of them defect. Realistically, many more agents' decisions will correlate weakly with ours, while only very few correlations will be perfect. However, the implications of many weak and a few strong correlations are similar. For simplicity, we assume that the goals of the agents are orthogonal to each other, i. e. that if someone benefits it is neutral in expectation to any other value system. All of them have values that can benefit from behavior in other universes to the same extent. The n agents face the decision between a) generating b u cardinal, interpersonally comparable utils (or utilons) for their own utility function and b) generating b other utils for k randomly chosen superrationalists. Choosing option a) makes everyone chose option a) and so only generates b u utils for us. Choosing option b) makes everyone choose option b). Whenever someone (including ourselves) chooses option b), there is a probability of k n that we are among the beneficiaries. Overall, if we and thus everyone else chooses option b), we receive n k n b other = kb other utils. Choosing option b) is therefore to be preferred if and only if kb other > b u . (1) This suggests that our own preferences have no priority over those of other superrationalists in this decision. We only decide based on \"the greatest good for the greatest number\". For instance, if k = 1, then we should choose option b) to help other value systems if b other > b u , i. e. as long as helping other value systems can be done more efficiently than helping your own values. This shows how important superrationality considerations can be. Whereas the non-superrational agent maximizes only for its own value system, the superrational agent maximizes for the value systems of other superrational agents just as much as for their own. Moreover, whether we cooperate depends only on the number of agents whose cooperation is correlated with ours and not at all on the number of agents that will defect. 
In this regard, multiverse-wide superrational cooperation differs from most causal cooperation, where we usually try to ensure that beneficiaries of our actions reciprocate (unless we care about them intrinsically). As mentioned already, this analysis is based on unrealistic assumptions of perfect symmetry to highlight the relative importance of superrationality considerations. We will now move on to more general, potentially asymmetric cases. \n Compromise strategy \n Sharing gains from compromise in the face of asymmetries We have so far only considered completely symmetrical situations, wherein other agents faced the exact same decision problem as ourselves. One could either choose to cooperate, which correlated with everyone else's cooperation; or defect, which correlated with everyone else's defection. Both cooperation and defection were associated (via the correlation between agents) with particular outcomes. Based on these correlations it was straightforward to choose the action that correlates with the best outcome for ourselves (and also for everyone else). Of course, in practice, compromise will not be this tidy. Specifically, we will have to deal with asymmetrical decision problems. Consider the following example: Superrational cake cutting. You are playing a donation game with two fellow players whose decision algorithms correlate strongly with yours. Unlike other donation games, the currency in this game is cake, of which there are two flavors -vanilla and strawberry. Each player's utility grows in linear proportion to how much cake they eat, and they all have taste preferences that affect their total utility. Let's say you, player 1, like vanilla twice as much as strawberry. Player 2, meanwhile, likes strawberry four times as much as vanilla, and player 3 likes both flavors equally. Each player currently owns different amounts of strawberry and vanilla cake. You have one strawberry cake and one vanilla cake, while player 2 has three vanilla cakes and player 3 has one strawberry cake. (See Figure 5 First note that this problem is indeed one of superrational cooperation. If causal decision theory is applied, then the dominant strategy for each player is to keep all the cake -but this would be a suboptimal outcome for everyone. The players have two strawberry and four vanilla cakes in total. If you could redistribute them so that player 1 has one strawberry and two vanilla cakes, player 2 has one strawberry cake, and player 3 has two vanilla cakes, everyone would be better off than without any redistribution. However, there are infinitely many other possible (fractional) distributions that would also be better for everyone. This makes it hard to decide among them. One part of the problem is that it is unclear what our decisions correlate with. If we send player 2 a piece of her preferred cake (strawberry), can we expect to get some of our preferred cake (vanilla) from her? If so, how much? If we could pin down the correlations and assign probabilities to each combination of strategies -i. e. to each strategy profile -conditional on any of our actions, we could choose the action that maximizes expected utility (the exact formulation of which depends, of course, on our decision theory). But even if the agents know that they have very similar (or even identical) decision algorithms, the asymmetries make it hard to assign these probabilities. Another perspective on the problem is that asymmetries make it unclear who \"deserves\" how much. 
In the symmetrical situations it was always clear that everyone should get the same, but this is different in superrational cake-cutting. It is useful to view the symmetry of a compromise problem as a non-binary property. For example, a donation game in which one player gains slightly more than the others from cooperating may still be symmetric enough to make it obvious what the right decision is. \n The compromise problem In order to solve the problem of superrational compromise in asymmetric situations, we will treat compromise as a game-theoretical problem. Note that this requires basic knowledge of game theory; for an introduction see, e. g. Osborne (2004) . Formally, a game consists of • a finite set of players P = p 1 , . . . , p n , • for each player p i , a set of actions A i , • for each player p i a utility function u i : A 1 × • • • × A n → R, where R refers to the real numbers. Multiverse-wide superrational compromise is a game where P is the set of correlated superrationalists, the utility functions u i represent their preferences, and the sets of possible actions A i represent the set of strategies a player can pursue in their part of the multiverse. Note that the last aspect of the definition assumes that the players' preferences are von Neumann-Morgenstern-rational (vNM-rational) , which is technically useful and mostly non-controversial 15 . Our notation indicates that utilities are calculated deterministically from action tuples. However, we will sometimes view the utilities u i (a 1 , . . . , a n ) as random variables in the Bayesian sense. This is because we are usually uncertain about the implications of the policies a 1 , . . . , a n , as well as the utility function u i itself, in the context of MSR. Now, the question is which (potentially mixed) strategy α i any player p i should choose. Note that we are not looking for the (CDT-based) Nash equilibria of the game. We will therefore have to move our focus from (Nash equilibrium-based) non-cooperative to cooperative game theory. In principle, the optimal strategy α * i can be determined by applying one's decision theory. For example, if one were to use EDT, then the optimal strategy is argmax αi E[u i (a 1 , . . . , a n ) | α i ]. As noted earlier, however, computing or optimizing the expected value conditional on one's action directly is not feasible in situations of asymmetric payoffs. To find the best action, we will therefore approximate the above expected value maximization with some new criterion, similar to how game theory has replaced expected value maximization with Nash equilibria and other concepts. We will therefore try to develop some new compromise utility function u * : A 1 × • • • × A n → R, intended as a new criterion for choosing the optimal strategy. Because the compromise utility function depends less on the specifics of the problem, it will prove to be easier to reason about what the adoption of some u * tells us about what the other agents do. The optimal u * can then, under certain assumptions, tell us what action to take. At least if our choosing u * means that everyone else chooses the same u * (which is not necessarily the case), then player p i should implement the i-th strategy entry of argmax (α1,..,αn)∈A1ו••×An E[u * (α 1 , . . . , α n )]. 15 One exception may be the axiom of continuity. It is violated by preferences with lexicality, which are commonly discussed in moral philosophy (Knutsson, 2016) . 
However, if we drop the axiom of continuity, we can still represent the preferences as a lexicographic utility function (Blume, Brandenburger, and Dekel, 1989; Fishburn, 1971) . However, a treatment that includes lexicographic utility functions is beyond the scope of the present paper. Because in uncertain situations, a lexicographic utility function is usually equivalent to only maximizing the lexically highest values, we may nonetheless apply the present results by simply omitting all lexically lower values. Once again, having a compromise utility function, as opposed to more general compromise preferences, implicitly assumes that the compromise preferences are also vNM-rational. \n Cooperation with and without coordination In a way, argmax (α1,..,αn)∈A1ו••×An E[u * (α 1 , . . . , α n )] is the optimal plan on the assumption that everyone will follow it. With practical degrees of correlations, however, we cannot assume that everyone will arrive at the same plan, especially if multiple plans have the same compromise utility. In MSR, it is especially unlikely that everyone will arrive at the same plan, as superrational collaborators have different states of knowledge about the multiverse and each others' value system. A perfect plan may have catastrophic results if it is not accurately followed by everyone involved. Specifically, plans are risky if the utility of one player's action hinges on another player's action because such plans assume the ability to coordinate. Hence, it is useful to look at a class of utility functions where coordination plays no role. We say that a utility function u i (additively) decomposes into local utility functions {u i,j : A j → R} j=1,...,n if u i (a 1 , . . . , a n ) = n j=1 u i,j (a j ). Intuitively speaking, u i decomposing into local utility functions means that any player p j has a very direct impact on p i 's utility, such that when p j attempts to benefit p i she need not think about what other players do. In game theory as it is usually applied to interactions between agents on Earth, the assumption of additive decomposition of utility functions would be a severe limitation: if agents interact with each other physically, then, of course, the impact of an action often depends on the other players' actions. As examples, consider some of the classic games studied in game theory, such as Bach or Stravinsky or Chicken. In the problem of multiverse-wide compromise, on the other hand, there is probably no causal interaction between the actions of agents in different parts of the multiverse. Additive decomposition of utility functions is thus a more natural assumption in this context. That said, issues of coordination can still arise in the utility function itself. As an example, consider a utility function that wants there to be at least one (dis-)proof of the Riemann hypothesis somewhere in the multiverse, but does not care about the existence of further, redundant proofs. This utility function does not decompose additively; whether I benefit this utility function by proving the Riemann hypothesis depends on whether someone else is already working on a proof. Other, perhaps more realistic, examples of ethical notions that do not decompose into local utility functions are (partial) average utilitarianism and potentially biodiversity. However, many other plausible utility functions (e. g., total utilitarianism) do fulfill the above condition. 
If some of the utility functions in a game do not decompose into local utility functions, we will call the game a coordination problem 16 . Theoretically, the following arguments also work for coordination games, but they are much more robust and practically applicable in problems that require little or no coordination. This topic will be discussed further in section 2.8.9. \n Harsanyi's aggregation theorem Although we do not yet know how and to what extent, we know that our compromise utility function u * should incorporate the utility functions u 1 , . . . , u n but not be sensitive to anything else. The following assumption captures these attitudes: Assumption A. Let P and Q be probability distributions over outcomes 17 That is, all players like P at least as much as Q. Then A 1 × • • • × A n such that E P [u i (a 1 , . . . , a n )] ≥ E Q [u i (a 1 , . . . , a n )] for i = 1, . . . , n. E P [u * (a 1 , . . . , a n )] ≥ E Q [u * (a 1 , . . . , a n )], i. e. the compromise utility function also values P at least as highly as Q. We could view this assumption as a decision to limit ourselves to a particular class of compromise utility functions -a decision that makes our superrational collaborators limit themselves to the same class. In terms of expected value for ourselves, this is a good decision. It basically does not tell us anything other than that we do not want to pay anything to switch from P to Q if everyone likes P at least as much as Q. We furthermore introduce the notion of utility function equivalence: two utility functions u and v are equivalent, written as u ∼ v, if they imply equal behavior. For the cardinal utility functions discussed here, this is the case if one arises from positive affine transformation of the other, i. e. if u = av + b for some a ∈ R >0 and b ∈ R. Assumption A does not seem especially strong, but it turns out that it suffices for a significant result regarding the shape of the compromise utility function. It is essentially a version of Harsanyi's aggregation theorem (Harsanyi, 1955 ; see also Peterson, 2017, section 13.4 for an introduction). 18 Theorem 1. (Resnik, 1983; Fishburn, 1984) Let u * be a compromise utility function for u 1 , . . . , u n that satisfies Assumption B. Then there are weights λ 1 , . . . , λ n ∈ R ≥0 such that u * ∼ n i=1 λ i u i . (2) Note that the λ i are not unique. Also, not all weight assignments consistent with Eq. (2) or Assumption A have only positive weights. In particular, if u i = u j for some i = j, we can decrease λ i by an arbitrary constant C if we correspondingly increase λ j by C, and end up with the same compromise utility function u * . (Resnik, 1983; Fishburn, 1984) . If C > λ i , we arrive at an equivalent utility function that assigns negative weights. Theorem 2. Let u 1 , . . . , u n each decompose into local utility functions {u i,j : A j → R} j=1,...,n . Then a compromise utility function u * that satisfies Assumption A relative to u 1 , . . . , u n also decomposes into local utility functions. Proof. Because of Theorem 1, it is u * (a 1 , . . . , a n ) = b + n i=1 λ i u i (a 1 , . . . , a n ) = b + n i=1 λ i n j=1 u i,j (a j ) = b + n j=1 n i=1 λ i u i,j (a j ) = n j=1 b n + n i=1 λ i u i,j (a j ) for some b and weights λ 1 , . . . , λ n ∈ R ≥0 . Thus, u * decomposes into local utility functions u * j : A j → R : a j → b n + λ i u i,j (a j ) j=1,...,n . This is quite a convenient result. If indeed u 1 , . . . 
, u n each decompose into local utility functions, then each player p i can maximize u * i in her own part of the multiverse without having to think about the precise actions of other players elsewhere in the multiverse. \n How to assign the weights Having argued that we should make our decisions based on a weighted sum of the decisions of our superrational collaborators, the question is how we should optimally assign the weights. Theorem 1 does not tell us much about this. In fact, it even allows for the possibility of assigning positive weight only to our own utility function. We will differentiate two ways of assigning the weights: biased toward our own values, or impartial. We will consider the two options in turn. \n Biased compromise utility functions? We start with the option of assigning the weights in a way that is somehow biased toward our values. For example, we could assign higher weights to utility functions that are more compatible with ours, and lower weights to those that are not. Of course, this tells us that agents with other value systems will do the same, i. e. assign weights in a way that biases the resulting utility function toward their own values. I will argue against assigning weights in a biased way and in favor of impartial weights. This point is crucial for the strength of the implications of MSR, because the more weight we assign to other utility functions, the more our policies have to change in response to MSR. In a way, the reasons against biased weights are merely an extension of the reasons for cooperating in the first place. Let us say that in response to MSR we assign some weight to the other agents' utility functions but still largely maximize for our own values. Then gains from further trade are left on the table. Because we still maximize for different utility functions, we could trade again until all our compromise utility functions approach some impartially weighted sum. This line of reasoning is also supported by the standard ways in which gains from trade arise. If everyone compromises with biased weights, then this produces some gains from comparative advantages -in situations where I have a large comparative advantage to maximize for someone else's utility function, I will do so at the cost of not maximizing for my own values. In return, others do the same. But if the comparative advantages are too small, then we miss out on gains from trade. Consider an example with two superrational collaborators. For simplicity, we will assume them to be in symmetrical situations at the time of compromise, such that the only plausible neutral compromise would give the same weight to each of the two utility functions. Both may, at some point, face the choice between taking a utility of x for themselves and giving x + ε to the other, where x and ε are any positive real numbers. In such a situation both have a comparative advantage to help the other's values. But if ε is very small, the comparative advantage is very small, too. So, if they assign more weight to their own utility functions, there is some ε such that they choose to maximize their own utility functions and thus miss out on the gains from trade. While I am moderately confident that all compromises with biased weights are Paretosuboptimal, I do not, at this point, have a formal proof of this statement. That said, the above example at least shows that such compromises yield collectively vNM-irrational behavior. 
Furthermore, section 2.7 showed that, at least in symmetrical idealized scenarios, prioritizing one's own values does not achieve the best results. I should note that impartial weights do not imply that I should be equally likely to find myself maximizing for my own values as any of my superrational collaborators. For example, if you are very uncertain of the content of some value system, then it will not influence your decisions as much, even if you assign a high weight to that value system. \n Neutral compromise utility functions We have argued that after an optimal compromise, each player should judge their action roughly by the same impartial criteria. Hence, we now have to look for a way of assigning the weights in a neutral way. Harsanyi himself proposes -albeit in the context of social welfare rather than trade -to simply give equal weight to all utility functions, which is equivalent to removing the weights altogether (Harsanyi, 1979 , section 2). Besides the argument of \"equal treatment\", it can be backed by an original position argument (Harsanyi, 1953; Harsanyi, 1955; Freeman, 2016) . From an original position, i. e. a perspective from which we do not yet know which position in the multiverse we will take, how many resources will be at our disposal, etc., it seems reasonable to give equal weight to all utility functions. Updatelessness gives this argument some additional appeal, as it asks us to make our decisions from a similar perspective. However, there are various problems with maximizing unweighted aggregated utility. One is that it is based on interpersonal comparisons of utility. In Harsanyi's words, it assumes that \"all individuals' utility functions u 1 , . . . , u n are expressed in equal utility units (as judged by individual j on the basis of interpersonal utility comparison)\". 19 Such comparisons, however, are highly controversial (Hammond, 1991; Binmore, 2007b) . Recall that the cardinal utility functions postulated by the von Neumann-Morgenstern utility theorem are only determined up to positive affine transformation. This means that if a utility function u represents an agent's preferences, then so do 100 • u and 0.01 • u. None of the three is in some way the more natural choice for representing the agent's utility function. Whereas positive affine transformations do not alter an agent's behavior in choosing lotteries, they do change the behavior implied by the aggregate of multiple such functions. In Superrational cake cutting, we specify the utility functions (or, as they are sometimes called in fair cake-cutting, subjective value functions) of each agent up to positive affine transformation by specifying the trade rates between units of strawberry and vanilla cake. For example, the third player has a 1:1 trade ratio, the second has a 4:1 trade ratio. To simplify notation, let s i and v i be amounts of strawberry and vanilla that p i receives under some action profile. Then the second player's utility function could be u 2 (s 2 , v 2 ) = 4s 2 + v 2 and the third player's utility function could be written as u 3 (s 3 , v 3 ) = s 3 + v 3 . If one wanted to maximize aggregate utility u 2 + u 3 , then u 2 would effectively receive far more weight than u 3 . If u 2 (s 2 , v 2 ) = 400s 2 + 100v 2 , this bias toward u 2 would be even worse. Because utility functions come in different versions depending on their scale, we still need to find a satisfactory way of normalizing the utility functions, i. e. 
to pick one out of a whole class of equivalent utility functions. This task is actually equivalent to assigning a weight to a given member of each class. Thus, removing the weights and relying on interpersonal comparison of utility can be seen as merely passing the buck from assigning weights to choosing the scale of the utility function. One common approach to interpersonal comparison of utility is range normalization (Isbell, 1959; Hausman, 1995, section 3) . That is, the utility functions are chosen in such a way that their maximum is 1 and their minimum is 0 (using no additional weight). 20, 21 While intuitive, range normalization appears to be inappropriate for compromises. For one, it lacks a rigorous justification in this context -it is not immediately obvious that the underlying naive view of neutrality is relevant for compromise. The main problem with using range normalization for the compromise utility function is that in some cases some of the agents have no reason to accept it as it leaves them worse off than engaging in no compromise at all. 22 For example, consider the case of a compromise between a beggar and a millionaire with different sets of preferences. If the compromise gives equal weight to the preferences of the two, then this leaves the millionaire worse off as she receives little in return for dedicating half of her resources to fulfilling the beggar's wishes. Even if all agents have equal resources, a range normalization-based compromise can be unappealing to some of them. Consider two equally powerful agents with two very different value systems. The first cares about bringing about a state that is only very rarely attainable. All his other preferences pale in comparison. The second agent divides states of the world into two classes: good and bad states. Within each of those classes she is indifferent between any pair of states. Also, the division is such that in most situations, the different actions vary in how likely they are to bring about a good state. Under range-normalization, the first agent's utility function would usually be close to 0 and it would only rarely be possible to get it to 1. The second agent's utility function is 0 for some states and 1 for the others. If we maximize the sum of these two utility functions, this will mean that we will usually optimize much more for the second agent's preferences. After all, in most cases doing so significantly increases the probability of attaining 1 util. Maximizing for the first agent, on the other hand, usually only generates a small fraction of a util. Only in the rare situations in which we have an opportunity to attain the first agent's favorite state do the agents' preferences have a similar amount of control over the decision made based on the compromise. The first agent may therefore have no reason to accept this compromise. Because range normalization can drastically favor some players, it may actually be not so neutral after all. If you already knew that a range-normalized utility function would benefit you a lot, you would be biased to accept it. If you accept the range-normalized utility function on these grounds, however, it would not tell you much about the choice of agents who already know that the range-normalized sum would be harmful to them. In this sense, range-normalization is a biased compromise. Of course, if I benefit from range-normalization, I could hope that those who are disadvantaged by it nevertheless compromise in some other way that still benefits me. 
However, using such tricks to exclude others from our compromise is evidence that we are excluded in other ways as well (cf. section 2.9.3). Thus, without some other justification, range normalization does not appear especially promising. Many other approaches to interpersonal comparisons (see, e. g., Sen, 2014, section 7.3 23 ) suffer from the same problems. We thus need to set up more rigorous criteria for neutrality. It appears that the most directthough certainly not the only -approach is to require that the compromise is, in expectation, equally good for everyone -that is, everyone gets the same gains from compromise. This ensures that the compromise is equally attractive to everyone involved. \n Assumption B. The expected gains from adopting u * are the same for each player p i . Unfortunately, Assumption B is underspecified. The most naive view is that the gains from compromise for player p i are E[u i (α 1 , . . . , α n ) | u * ] − E[u i (α 1 , . . . , α n ) | no compromise]. (3) allow for a compromise that leaves everyone better off. However, there are some aspects of Eq. ( 3 ) that should, perhaps, be revised. For one, it is unclear whether having a full compromise versus having no compromise is the appropriate counterfactual. One alternative is to choose the counterfactuals provided by one's decision theory. That is, one could compare E[u i (α 1 , . . . , α n ) | I cooperate with u * ] with E[u i (α 1 , . . . , α n ) | I defect] , where EDT's conditional expectation may be replaced by an alternative notion of the counterfactual (see Gibbard and Harper, 1978; Hintze, 2014, section 3) . Alas, these are difficult to calculate. Perhaps one could also measure the gains from each p i 's individual participation, so that the set of cooperators in the minuend and subtrahend of Eq. ( 3 ) would be the same, except that the latter does not contain p i . Moreover, the subtrahend's compromise utility function would not contain λ i u i as a summand. This resembles the notion of voting power from social choice theory (Cotton-Barratt, 2013; Felsenthal and Machover, 1998) . A second area of revision may be that Eq. ( 3 ) does not account for some value systems potentially being more common among the p i , or holding more power, than others. We would probably want the gains from trade to be proportional to the resources invested by a particular value system. Otherwise, an individual agent with a very common value system has no or less of an incentive to join the compromise. For example, if an agent already knows that at least one other agent with the same utility function is part of the compromise, then this could mean that joining the compromise produces no additional gains from trade for that agent. One way to weight different utility functions based on their power would be to divide Eq. ( 3 ) by some measure of the resources invested by u j . The Shapley value is a well-known example of a theoretically grounded measure of power and may serve as an inspiration. Also note that Assumption B contains an interpersonal comparison of utility. However, the potential harm of getting this one \"wrong\" is smaller than in the case of using the unweighted sum as a compromise. Depending on how you scale the different utility functions relative to each other, applying Assumption B may allocate the gains from trade differently, but it nonetheless ensures that everyone receives gains from trade at all. Further research is needed to identify the appropriate variant of Eq. 
( 3 ) or perhaps an alternative to it and subsequently the corresponding weight assignment. An example of a promising line of research in this direction is the work on variance voting, i. e. normalizing the variances, by Cotton-Barratt (2013) and MacAskill (2014, chapter 3). In particular, Cotton-Barratt shows that under certain assumptions, variance-normalized compromise is the only compromise that gives each player the same voting power. In addition to specific solutions, it would be useful to explore the necessary conditions for the existence of a weight assignment that satisfies Assumption B while producing positive gains from trade. For example, if you already know how much cake each player owns in the above Superrational cake cutting, there is no assignment of weights that reliably produces gains for everyone. No matter how the weights are assigned, there will always be one weighted utility function that strawberry cake is best invested into, one that vanilla cake is best invested into, and (at least) one that receives no cake at all. That is, unless two weighted utility functions generate the same amount of utility per unit of cake, in which case the compromise utility function is indifferent about who receives the cake. Besides the existence of gains from trade (see section 3.2.5), I suspect that the central assumption under which a weight assignment satisfying Assumption B exists is the continuity of the expectation E[u i (α 1 , . . . , α n ) | u * ] relative to the weights in u * . \n Updateless weights Seeing as the gains from compromise that Assumption B talks about depend on one's current state of knowledge, the weights to be assigned may do so, too. Consider the following example: Remote-controlled cake maker. Two agents are about to share cake again. Agent 1 prefers strawberry to vanilla cake at a ratio of 2:1. Agent 2 has the inverse preference ratio. On day one, neither of them owns any cake; however, they know that on day two, each will receive two control buttons for a distant machine, capable of producing and shipping only one type of cake. While it is, at this point, unknown which flavor of cake it will produce, they will know the type of cake maker once they receive the buttons. They will have the same amount of control over where the cake from the cake machine is sent: each agent can, by pressing one of the buttons, send some amount of cake to himself. By pressing the other button, they can send a 20% larger amount of cake to the other agent. Unfortunately, they can only press one button. The two agents' thought processes correlate perfectly when it comes to decisions regarding superrationality and they may already settle on a superrational compromise utility function on day one. On day two, they receive the control buttons and learn that it is a vanilla cake machine. They still cannot communicate, but use the same thought processes. Which button should each of the two press? 24 Let us first consider the situation on day one. Because their situations are fully symmetric, it seems reasonable to set u 1 (s 1 , v 1 ) = 2s 1 + v 1 , u 2 (s 2 , v 2 ) = s 1 + 2v 2 , u * (s 1 , v 1 , s 2 , v 2 ) = u 1 (s 1 , v 1 ) + u 2 (s 2 , v 2 ), where s 1 , v 1 , s 2 , v 2 are, again, the amounts of cake received by each player (which can be calculated from a set of actions). By any reasonable definition of \"gains from compromise\", this satisfies Assumption B. 
Accepting this compromise effectively means that agent 1 will receive all of the cake if the machine makes strawberry cakes, and agent 2 will receive all of the cake if the machine makes vanilla cakes. We now skip ahead to day two, when the agents are told that the machine makes vanilla cakes. In this new situation, u * (s 1 , v 1 , s 2 , v 2 ) = u 1 (s 1 , v 1 ) + u 2 (s 2 , v 2 ) is harder to justify, as it gives all the gains to agent 2 -agent 1 even loses utility relative to not compromising. Perhaps the more natural utility function to choose on day two is u * (s 1 , v 1 , s 2 , v 2 ) = 2u 1 (s 1 , v 1 )+u 2 (s 2 , v 2 ), which would be the compromise utility function under, e. g., variance normalization. From the perspective of day one, it is suboptimal if the two players change their minds on day two. Thus each player prefers precommitting to the initial compromise even though that implies a good chance of a net loss. Once again, we find that lack of knowledge is evidential power (see section 2.4) and that we should precommit to decision-theoretical updatelessness. In the context of MSR, I doubt that the weights of the compromise utility function would shift considerably once a few basic factors have been taken into account. For one, significantly updating one's prior about the expected gains from a compromise requires broad and reliable knowledge about the entire multiverse. Specifically, it requires knowledge about what decisions superrational collaborators will face in other parts of the multiverse, and how these decisions will affect different value systems. Even if you learn that some assignment of weights decreases your utility in this universe, the situation may differ in other universes. Many opportunities to have an impact may depend on as yet unidentified crucial considerations or unresolved issues in physics. Examples of issues of this kind which have already been identified include lab universes, artificial intelligence, artificial intelligence arms races, self-improvement races, suffering in fundamental physics, whole brain emulation scenarios (see section 3.4.4) and global catastrophic risks. That is to say: any kind of multiverse is so complicated that we should not expect to know much about it. If we think some pieces of information would significantly shift our weights in one direction or another, then this piece of information is potentially harmful. To the extent that it is possible, it would be important to convince superrationalists to become updateless before they encounter such information. \n Limitations The present analysis is limited in several ways. In general, we made many assumptions under the meta-assumption that the results generalize. For example, our arguments were often based on perfect correlation between the agents. Many aspects of our analysis were also semi-formal or informal. For instance, we did not formally justify the claim that settling on the same compromise utility function creates the largest gains from compromise. Further research is thus needed, including research into the largely unexplored area of superrational game theory. \n Heuristics It would certainly be nice to find a formal solution to the compromise problem (as described in section 2.8.2) at some point. However, such a solution is neither necessary nor sufficient for cooperating superrationally in practice. It is not necessary because cooperation based on intuitions about compromise may already get us quite far. 
Even without ever having heard a course on game theory, most people have intuitions about fairness that seem to suffice in most negotiations. We may expect that similar intuitions also suffice for reaping many of the benefits of superrational cooperation. It is not sufficient because we will not possess formal description of our collaborators' utility functions in the foreseeable future, anyway, given that we cannot even formally describe our own goals 25 With the description of their values being vague and qualitative, the compromise must, in the end, also be. Hence, we should also consider informal heuristic rules for making decisions. Below are some proposals. They have significant overlap and many of them also apply to causal cooperation; some are more moderate and intended to apply to people who do not fully accept MSR. Sorted in increasing order of the strength of their implications: • If some resource is mildly useful to you but very valuable to other (prominent) value systems, it is prudent to ensure that the resource is used for those other value systems. Similarly, avoid hurting other (prominent) value systems if it only gives you a comparatively small gain. • Utility functions that contribute a lot, e. g. because opportunities to increase them are rare, should perhaps receive disproportionate focus whenever such an opportunity arises. Otherwise, agents with such utility functions would have little incentive to compromise. • When the values of superrational cooperators diverge on some issue with a roughly equal number of supporters (or resources) on each side, these sides cancel each other out after compromise. That is, no superrational cooperator should act on a view on this issue. Toby Ord writes: \"It is so inefficient that there are pro-and anti-gun control charities and pro-and anti-abortion charities. Charities on either side of the divide should be able to agree to 'cancel' off some of their funds and give it to a mutually agreed good cause (like developing world aid). This would do just as much for (or against) gun control as spending it on their zero-sum campaigning, as well as doing additional good for others.\" • Try to benefit many value systems at once, and deprioritize issues that are very specific to you or other agents (see section 4.1.1). • Metaphorically speaking, try to increase the size of the compromise pie, rather than to increase the size of your own piece. • In any situation, maximize for the (prominent) value systems that have the highest stakes in your decision. • For any policy decision, ask yourself whether superrationalists with other value systems would plausibly arrive at the same decision (to ensure that you are assigning weights impartially). \n Notes on superrational coordination Superrational compromise is easiest if it requires no coordination (see section 2.8.3). It can, however, also solve coordination problems -that is, problems in which the utility functions of the players do not decompose into local utility functions, and the utility of a strategy to some player thus depends in part on the moves of the other players. As before, we can also describe a variation of the problem with similarity instead of common rationality. And as usual, causal decision theory recommends to reply, a strategy that, if implemented by everyone (or more than 5 people), forgoes a golden opportunity. However, this scenario diverges from those we have previously discussed in that our impact on the other participants' utility depends on their actions. 
Nevertheless, we can use superrationality to our (and our superrational collaborators') advantage in Platonia five, although we have to apply it in a different way. The problem is that simply maximizing the compromise utility function does not really help us here. Given that all players are essentially in the same position, it seems reasonable to let the compromise utility function be the sum of the money gained by each player. That means it is either 5 billion if exactly 5 people send in the letter, or 0 if another number of people send in a letter. Maximizing the utility function only tells us that we should ensure that exactly 5 people should send a letter -something we already knew beforehand. The compromise utility function does not tell us who should send in the letter. Because it does not decompose into local utility functions, it does not tell each player what to do. This illustrates how, even with perfect correlation, the compromise utility function may not suffice for solving coordination problems. Hence, we go back to the more direct approach. We assume that, given the correlation between agents (or the ability to determine the rational choice), we should choose the strategy that would be best for us if it were adopted by everyone. Because the situation is entirely symmetrical, everyone is likely to go through equivalent lines of reasoning. Obviously, neither sending in the letter nor not sending in the letter are good strategies. We thus have to adopt a mixed strategy, i. e. one of choosing to send in the letter with some probability p, where players' samples from this distribution are independent. At the far ends, both p = 1 and p = 0 guarantee that we lose. However, if p is chosen from somewhere in between and everyone adopts the same mixed strategy, there is a non-zero probability that you, the individual participant, will win the billion. Thus, we now have to choose p so as to maximize our probabilities of success. (Alternatively, we can maximize the probability that the 5 billion are awarded at all. As we will see, the result is the same.) If you and everyone else each sends in their letter with a probability of p, the probability of your winning is p times the probability that exactly four of the other 19 players send in their letter. The overall probability of you winning a billion is thus p • 19 4 • p 4 • (1 − p) 15 , ( 4 ) where 19 4 is a binomial coefficient. We now choose p so as to maximize this term. Because 19 4 is a constant, we can maximize p 5 • (1 − p) 15 . (5) Incidentally, the p that maximizes this term also maximizes 20 5 • p 5 • (1 − p) 15 , ( 6 ) the probability that anyone wins at all. As it happens, Eq. ( 6 ) is maximal for p = 1 4 , 27 which gives a probability of about 20% that the money is won, and thus a probability of about 20% • p = 5% that we win the money. Although a 20% probability of the money being won is better than 0%, it is still not quite satisfactory relative to the 100% that could be achieved with perfect coordination 28 . However, it seems as though there is no better way. We will revisit this question in section 2.8.9. While such coordination is qualitatively different from the other examples, it should still be seen as a form of cooperation (as opposed to some other applications of acausal decision theory like Newcomb's problem, or some coordination problems where all players have the same goal), because the game is positive-sum and (superrational) coordination improves everyone's outcome relative to CDT's recommendation. 
In Platonia five, the uncoordinated response involves everyone trying to get the money by sending in a letter. Many coordination problems suffer from the opposite problem, namely diffusion of responsibility. Consider the following example, versions of which were also discussed by, e. g., Leslie (1991, ch. 5 ) and Drescher (2006a, section 7.3.2): Superrational voting. You live in a country of superrationalists and today is election day. A strict secret ballot rule dictates that citizens are not allowed to tell each other which party they are going to vote for or whether they plan to vote at all. Unfortunately, going to vote costs a lot of time and you don't expect the potential impact of your vote to justify the opportunity costs. If you choose not to vote, then so will most of your fellow superrational citizens. That would be unfortunate. For one, the majority opinion of the people should be represented, if only because it is more likely to be your opinion. Besides, there is an uncorrelated minority that should not win, as all your superrational friends will attest! By what mechanism should you decide whether to vote or not? Again, the compromise utility function is not very informative. And again, a probabilistic (or mixed) strategy is optimal if the correlations are sufficiently strong overall and independent of the party one plans to vote for. To find out what exact probability should be chosen, one would need to come up with a term for the expected value under that probability, consisting of the cost of voting and the probability that some minority view wins, as well as the expected costs of the latter. Consider one last example, inspired by Pascal's button: Multiverse-reprogramming. Scientists inform you that the basic laws of the multiverse may be vastly more complicated than they originally thought. Specifically, they say there is a small probability that there exists some complicated and hard-to-find way of \"reprogramming\" parts of the multiverse. However, such reprogramming depletes some multiverse-wide resource pool. This means that the amount of reprogramming is independent of the number of agents who discover how such reprogramming can be done, provided that at least one agent discovers it and exploits it to full capacity. It would be unfortunate if no one in the multiverse seizes this opportunity or if everyone invests all their resources into finding it. As we have seen in the above thought experiments, we can use superrationality to solve this problem by determining some mixed strategy. Everyone lets some random process decide whether to invest significant resources into investigating the reprogramming mechanism, and then either pursues it fully or not at all. Once more, the compromise utility function does not tell us who should try to reprogram the multiverse; it does, however, tell us how each civilization should use the reprogramming mechanism. Note that this is another problem in which updateless weights (see section 2.8.6) are important, because once some civilization finds out how to reprogram the multiverse, it may be tempted to stop compromising. \n Schelling points We will now consider how we can get from a 20% probability of success in Platonia five to 100% under certain conditions. Consider the following variation of the dilemma: Platonia five with coordination help. S. N. Platonia is organizing another one of her eponymous dilemmata. However, this time she has compiled a numbered list of the participants beforehand. 
As a postscriptum to the standard content of the dilemma letter, Platonia writes: \"Your number on the list of participants from 1 to 20 can be found on the back of this letter.\" Before looking at your number, is there any way the superrational participants can ensure that someone receives the money? In the original Platonia dilemma, the participants were all in the exact same situation. But now, different people have different numbers on the back of their letters. This can make a difference if, before looking at the back of the letter, the 20 participants agree on which numbers should respond to the letter. For example, all participants could precommit to respond only if their number is between 1 and 5, and only then turn the letter over and act according to their precommitment. In this way, they ensure that exactly five people send in a letter, thus maximizing the expected gains. (Also note that due to the symmetry of the situation, it also gives each player the same expected gains assuming that none of them are already suspicious about their position on the list.) In a way, the numbers on the back of the letter function as a coordination help that allows for coordination where it would otherwise be impossible. An even better coordination help would be one where each player receives a recommendation on whether to respond, along with a guarantee that only 5 of the 20 players will be prompted to respond. Alas, such direct coordination helps will usually be unavailable. 29 So how can people agree, without communicating, on which numbers should send in a letter? If the participants are guaranteed to be exact copies of one another, they can pick an arbitrary set of 5 numbers between 1 to 20 before checking the back of the letter, confident that the other 19 will choose the exact same set. In relevant applications, correlation will not be that strong. But since all agents have an incentive to settle on the same set of numbers, each could individually try to identify a set that is obvious or that stands out in some way. For example, the set of 1, 2, 3, 4 and 5 appears to be a candidate that others may choose as well (as opposed to, say, 3, 7, 8, 11, 12) . Correlations play a role -if I choose numbers 1-5, then it is somewhat more likely that others do, too -but not a decisive one. Even if there are no correlations, abiding by such Schelling points (first introduced by Schelling (1960, chapter 3); see also Friedman (1994, chapter I A) ) is beneficial to the individual player if she believes that (many of) the other players abide by that same Schelling point. In practice, many Schelling points are driven by minor yet obvious expected value arguments. For example, when someone mentions that they would like the window to be opened, the person sitting closest to it is often seen as the natural one to open it, because she does not have to walk as far as the others. This consideration is negligible, but it helps with coordination. Many Schelling points are also mere social conventions. For example, consider the issue with right-and left-hand traffic. Presumably, most people have no strong preferences between the two as long as drivers abide by the same standards when interacting with each other. Whenever two drivers drive towards each other on a road, they face the coordination game with a payoff matrix resembling that in Table 1 Countries have laws that tell citizens whether to drive on the right-hand or left-hand side of the road, solving this particular coordination problem. 
For multiverse-wide superrational coordination, such convention-based Schelling points are alas not available. The lack of a Schelling point could mean that it is impossible to reliably achieve optimal outcomes. Imagine meeting a member of an unknown society in a narrow alley. Should you pass them on their right-hand or left-hand side? Assuming there is no relevant established pedestrian traffic convention for that alley, there appears to be no way of deciding between the two. \n Coordination is relevant for everyone I suspect that most people who care about the multiverse have utility functions that \"mostly\" decompose additively, thus requiring little coordination. If you are in this majority, you may think the topic of superrational coordination is irrelevant for you. However, this view is mistaken, since the compromise utility function requires coordination if at least some superrationalists in the multiverse have utility functions that do not decompose into local ones. Of course, you could just ignore these value systems when constructing your compromise utility function, but this makes it more likely that other agents exclude you in other ways as well, as we will see in the following section. \n No reciprocity needed: whom to treat beneficially In this section, I will argue that the application of superrational cooperation requires no reciprocity. That is, none of the agents who benefit from our cooperation have to benefit us. Recall the basic argument for superrationality as based on non-causal decision theories: given that we are friendly, it is more probable that other agents facing similar choices will be friendly toward us and our values. Crucially, this argument does not require that the agents whose choices we acausally affect are the same as those who benefit from our own friendliness (2006a, section 7.2.1). \n Schemes of causal cooperation The classic cooperation scheme from causal cooperation is one of mutuality -\"I scratch your back, you scratch mine\", so to speak. This scheme is represented by the graph in Figure 6 . In mutual relations like this, it is possible to apply causal cooperation, although only if the interaction is repeated -i. e. if my choice causally influences the other agent's choice 30 , and then the other agent's choice can causally influence my choice, etc. 31 For introductions to causal cooperation, see, e. g. Axelrod (2006) ; Trivers (1971) ; Fehr and Gächter (1999) ; Dawkins (1976, chapter 12) ; Taylor (1987), and Buss (2015, chapter 9) . Superrational cooperation also works in the above scheme, although repetition is not required. The prisoner's dilemma (with replicas or twins) is one example of this sort of problem. \n Circular cooperative structures and indirect causal reciprocity In principle, it is possible to establish causal cooperation even in cases where the two agents cannot directly benefit each other, provided there is a repeated causal link from my own decision to the decision of the agent who can benefit or hurt me, such that I can in some way reward cooperation and punish defection. As an example, consider the following variation of Hofstadter's donation game: Iterated donation circle. Like the circular donation game, only that the game is played many times (the exact number of times being unknown to the players). Donation circle. In every round, each participant is informed of their predecessor's past choices before deciding whether to send in 'C' or 'D'. 
Circular structures such as these can be represented by graphs such as the one in Figure 7. Because each of the agents can causally (through the other agents) affect their predecessor, the iterated version of this problem could still, in principle, motivate causal cooperation. For example, one Nash equilibrium consists in everyone playing tit for tat. This Nash equilibrium is even stable, in the sense that one player diverging from tit for tat with a very small probability still leaves everyone else best off if they continue to use tit for tat. However, the same Nash equilibrium is also hard to achieve and unstable in a different sense, as it requires all six participants to use the same kind of strategy. Your response to your predecessor's cooperation is mediated by multiple other agents. If only one of them does not propagate your response correctly, the causal path from you to your predecessor is disrupted, leaving neither of you with a causal motivation to cooperate. For superrationality-based considerations, on the other hand, neither repetition nor the length of the causal path from one participant's cooperation to her predecessor is relevant. Instead, superrational cooperation only depends on the correlations between single pairs of agents. Hence, while the significance of causal cooperation in the Donation circle diminishes with every additional participant, the benefits from superrational cooperation remain constant regardless of how many players are involved.

Hierarchies and acyclic graphs

In an extreme case, there would be no causal path whatsoever from one participant's cooperation to that of his predecessor, making causal cooperation lose its entire appeal to the rational agent. Superrational cooperation, on the other hand, may still be applicable (cf. Drescher (2006a, pp. 287-292); see section 6.1.1). Consider the following variant of the donation game:

Donation ladder. Once more, Omega has a long list of participants, albeit a regular linear one this time. Omega sends all of them a letter, asking them to respond with a single letter 'C' (for cooperate) or 'D' (for defect) without communicating with each other. It explains that by sending in 'C', participants can increase their successors' payoffs by $5. The first person on the list cannot benefit from the cooperative behavior of others, and the last participant's choice has no effect on the others. Omega writes that each player can increase their own payoff by $2 if they defect. Participants do not know their position on the list, and are once again told that they all use similar decision algorithms. Every participant only cares about the balance of their own bank account, and not about Omega's or that of the other participants. Upon receiving the letter, should you cooperate or defect?

Figure 8 illustrates the donation ladder. In such a cooperation scheme, causal cooperation cannot be established even if the problem is iterated, whereas the superrationality mechanism is just as reliable as in the other examples. Because the list is long, I probably have a predecessor; if I cooperate, then my predecessor - who is in a position similar to mine - will probably make the same choice. Cooperating thus gives me evidence (or logically determines) that I am likely to gain $5, whereas defection only gives me $2. We can see this linear hierarchy of agents in practice among the different versions of an agent at various points in time.
For example, I can causally affect the welfare of future versions of myself, but if I only (or primarily) care about my present experiences, they can never reward me in return. However, I could try to benefit future versions of myself to make it more likely that past versions of me have behaved nicely toward myself. More discussion with references to the literature is given by Drescher (2006a), section 7.3.4.

Linear cooperation hierarchies come with a twist, however. Consider the following variant of the Donation ladder:

Donation ladder with known position. Identical to the Donation ladder, only that participants know their position in the list when they make their decision.

A participant in the middle of the list may wonder how his situation differs from the regular donation ladder - after all, his predecessor on the list is in almost the same situation as he is. Assuming the conditions for superrationality are satisfied, their decisions should still correlate. Hence, if he cooperates, should we assume that his predecessor is likely to do the same? Not necessarily. The problem lies in the beginning of the list. The first person - let us call her No. 1 - will have no predecessor and thus nobody whose cooperation could benefit her, in effect giving her no reason to cooperate. Given this, No. 1 should defect (that is, unless she is already updateless; more on this below). Unfortunately, this puts No. 2 in a similar position: realizing that No. 1 will defect, he sees that there is nobody left to benefit him. No. 3 will, in turn, reason that No. 2 expects No. 1 to defect, which means that No. 2 will also defect, leading No. 3 to defect as well... and so on, propagating down the entire list. You may notice that this propagating defection effect is analogous to the reason why standard game theory recommends defecting in the iterated prisoner's dilemma when the number of rounds is known 32. Once more, we find that lack of knowledge is evidential power. For one, if the participants did not know their positions, they would all cooperate - and thus be more successful. If everyone could precommit to cooperation before learning about their position, they would do so. Again, cooperation can be maintained if all the agents are updateless in the first place (see section 2.4; cf. Drescher (2006a)).

In general, we can represent such hierarchical versions of the donation game using directed acyclic graphs like the one in Figure 9. If participants knew their respective positions in the list, the considerations outlined for the Donation ladder with known position would apply analogously. In practice, such hierarchies may be hierarchies of power. Some agents are "lexically" more powerful than others, such that cooperation can only be beneficial in one direction - the less powerful have no way of helping the more powerful, while the powerful can help the less powerful comparatively cheaply. As a perhaps paradigmatic example, consider a standard science-fiction scenario:

Intergalactic relations. The universe contains many civilizations. Although they all followed similar evolutionary trajectories, the civilizations developed at different times on different planets in different parts of the universe, and thus differ drastically in their levels of sophistication. Most civilizations eventually decided to conceal themselves to some extent, so no one knows which of the civilizations is the most powerful.
You are the leader of a civilization, and one day, you encounter a comparably primitive civilization for the first time. According to your advisors, it appears that this other civilization has not even managed to harness the energy of their local star; they still suffer from diseases that your civilization's nano-devices could cure in an instant; and so forth. Your advisors, citing the other civilization's apparently laughable defense systems, recommend that you destroy them and use their resources to further your own goals. Should you follow your advisors' recommendation?

Once again, causal reasoning may suggest that you should. By now, though, it should be clear that there are good reasons to ignore your advisors' recommendation if you believe there is a sufficiently strong correlation between your civilization's decisions and those of other civilizations. Note that one reason for civilizations to conceal themselves might be to induce a lack of knowledge about their relative positions within the hierarchy. If we remain hidden, other civilizations will be more likely to do the same, so neither we nor they would know who has the upper hand in a potential confrontation. On the other hand, if all civilizations loudly boasted their power, the most powerful civilization would realize its dominance and consequently have no reason to be friendly to the others - absent precommitment, the use of updateless decision theory, and the like. Another example of such power hierarchies is that of simulations. Simulators can causally influence the simulated in any way they want, but the simulated can do little to causally affect the simulators (beyond, e. g., affecting the outcomes of the simulation or its computational demands). We will discuss this more in section 6.9. The following example may be typical of the hierarchies in multiverse-wide superrationality (MSR):

Computable consequentialists. Meet Luca, who believes that consciousness cannot arise from classical computation alone. 33 He is also a consequentialist and primarily cares about conscious experiences. Through the writings of Tegmark, Luca has come to believe that many computable universes might exist in parallel to ours. However, since these computable universes do not contain anything that he would call a conscious experience, Luca does not care about what goes on inside them. He does, however, enjoy thinking about their inhabitants as an intellectual exercise, and this has led him to the conclusion that they can reason about Newcomb-like scenarios in a human-like way even though they are insentient. After all, neither calculating conditional probabilities nor operating on causal graphs requires sentience. Using the supercomputer in his basement, Luca has also come up with a number of predictions about the values held by consequentialists in the computable universes - let us call them the computable consequentialists (CCs) - a feat more difficult to achieve for incomputable worlds. He has even discovered a number of ways to benefit the CCs' values in our universe, all at a very low cost to his own values. While Luca himself does not care about computable universes, he sees no reason for the CCs not to care about worlds that are computationally more powerful than their own. Given that the CCs cannot do anything for Luca in their world, however, is it rational for Luca to be friendly to the CCs' values?

Again, Luca does indeed have a reason to do so.
If he benefits the CCs, other agents - including ones whom Luca cannot benefit - are more likely to realize Luca's goals in other parts of the multiverse. The ability to help can also come from knowing about the other agents. Consider the following example:

Simple-world ignorance. Imagine a multiverse in which many different sets of laws of physics are realized. Some of the universes have very simple, parameterless, and easily understood basic laws, like Conway's Game of Life. Others have far more complicated rules. The inhabitants of the more complex universes may thus have more reason to believe in the multiverse than the inhabitants of the simple universes. In the complex universes, the multiverse hypothesis is attractive because it is simpler than the hypothesis that only their universe exists (cf. Schmidhuber, 1997). In the simple universes, on the other hand, the multiverse hypothesis may be more complex than the hypothesis that only their universe exists. Consequently, the inhabitants of the simple universes may adopt superrationality but only apply it toward other inhabitants of their universe. Let us assume that the values of the folks from the simple universes differ significantly from those of the inhabitants of the more complex universes. Should the inhabitants of the complex universes help the values of those from the simple universes?

In this scenario, as with the previous ones in this section, I think that the superrationalists from the more complex universes have good reason to help the superrationalists from the simpler universes, as this makes it more probable that the former will receive help from other agents, including ones that they cannot help. For example, there may be many value systems that they (the inhabitants of the complex universes) do not know about (for reasons other than the Kolmogorov complexities of different multiverses). I think this particular scenario may well be relevant in our multiverse. More generally, some parts of the multiverse may contain different clues about the existence of other superrational agents. For example, some might live in parts of the universe from which it looks as though life is much rarer than it actually is, whereas others may discover that they are not alone as soon as they look through a telescope for the first time. In addition, while a superrational agent may be able to use some theory of physics to infer the existence of other agents, he or she may be unable to infer the existence of some particular value system.

Only helping superrational cooperators helps you superrationally

Cooperation usually excludes agents who are known to be unable to reciprocate. Yet as we learned from the Donation tree and Intergalactic relations, superrationality does allow for cooperation with non-reciprocating agents if helping them makes it more likely that other agents help us. There is, however, at least one limitation on the set of our beneficiaries that comes without negative side-effects. We can exclude from superrational cooperation all agents who do not cooperate superrationally at all. After all, every superrational cooperator knows that this exclusion will not affect her, and the exclusion appears to be symmetrical among all superrational agents. That is, it makes it more likely that other superrational cooperators make the same choice (rather than adopting some other limitation that excludes us).
It seems risky to place any stronger limitation on the set of our beneficiaries, since this would give us reason to fear exclusion by other agents (cf. Drescher, 2006a, page 290), as we have seen in section 2.9.3. If we so much as try to look for rules of exclusivity that benefit us at the expense of other superrational agents, we have reason to believe that others will do so as well. Of course, superrationality and correlation between decisions are not binary properties, so neither is the limitation drawn above. For example, two artificial intelligences explicitly based on the same decision theory may correlate more than two (non-copied) humans, even if both pairs have some incentive to cooperate. The stronger the correlation between us and some other agent, the more we will benefit superrationally from helping them (cf. Drescher, 2006a, page 288f).

To illustrate this, consider a one-shot prisoner's dilemma-like situation (cf. Figure 6) in which two very similar agents can simultaneously decide whether to give the other one some reward $b_{\mathrm{other}}$ or to walk away with a smaller reward $b_u$ for themselves. Now, imagine the two agents are perfectly correlated, i. e. they always make the same decision. If this is the case, both agents should cooperate whenever

$$b_{\mathrm{other}} > b_u. \quad (7)$$

Now consider a situation in which the correlation between the two agents is weaker. Then, in EDT terms, they should cooperate if cooperation (C) is higher in expected value than defection (D). Using conditional probabilities, we can formulate this as

$$P(C \mid C) \cdot b_{\mathrm{other}} > P(C \mid D) \cdot (b_{\mathrm{other}} + b_u) + P(D \mid D) \cdot b_u = P(C \mid D) \cdot b_{\mathrm{other}} + b_u.$$

Thus, the threshold for cooperation increases as the correlation between the two agents decreases. (A short numerical sketch of this condition appears below.) If the cooperation graphs become more complicated, then so do calculations like those above. Further research is needed to find out whether the above result - that benefitting agents with stronger correlation is more important - holds true more generally. One interesting question is to what extent superrationalists would form clusters based on correlation strength. This is especially relevant if we believe the correlations to be especially strong among agents with the same value system.

Cheating, signaling, and half-heartedness

Causal and superrational cooperation differ in another important respect. In causal cooperation, the benefit of cooperative behavior comes from how other agents will react to one's own cooperative acts. 34 To facilitate cooperation, each agent may commit to reward cooperative and punish uncooperative behavior. In this way, they can motivate each other to cooperate. But seeing as behavior can only be rewarded or punished if it is observed at all, causal cooperation often ends up focusing heavily on signaling. If you can save costs by merely pretending (in a convincing way) to have cooperated, then that is the rational thing to do from a causal perspective. Conversely, if you can help someone without them knowing about it, you have no causal reason to do so. There are many practical examples of this, such as the tendency for governments to make a big deal out of international agreements or cooperative acts, even if the object-level gain is minor. Since the mechanism of superrational cooperation is different from that of regular causal cooperation, prioritization within it should be different, too.
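As promised above, here is a short numerical sketch of the cooperation condition just derived. The function and the example numbers are my own illustration; it simply compares the conditional expected payoffs of cooperating and defecting in the two-agent game with rewards b_other and b_u.

```python
def should_cooperate(p_c_given_c: float, p_c_given_d: float,
                     b_other: float, b_u: float) -> bool:
    """EDT comparison: expected payoff of cooperating vs. defecting against a correlated agent."""
    ev_cooperate = p_c_given_c * b_other
    ev_defect = p_c_given_d * (b_other + b_u) + (1 - p_c_given_d) * b_u
    return ev_cooperate > ev_defect

# Perfect correlation: cooperate whenever b_other > b_u, as in condition (7).
print(should_cooperate(1.0, 0.0, b_other=5, b_u=2))   # True
# Weaker correlation raises the bar: (0.8 - 0.4) * 5 = 2, which does not exceed b_u = 2.
print(should_cooperate(0.8, 0.4, b_other=5, b_u=2))   # False
```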
Specifically, superrational cooperation is beneficial not because others reciprocate one's cooperative acts, but because our (cooperative) decisions correlate with those of others. This means that we should sincerely attempt to maximize for benefits to other value systems, because this correlates with others doing the same, which in turn maximizes our own benefits. We are used to thinking about cooperation in causal terms, i. e. about how a certain cooperative act may in the end pay us back causally and in this universe. If we think about superrational cooperation in this mindset, we may be tempted to propose measures that are critically suboptimal from a superrational standpoint. For instance, one may adopt a "compartmentalized good will", talking at length about cooperation without actually trying to maximize for other agents' goal achievement, or spend time thinking about how the others might cheat us. However, all of these correlate with other superrational agents in the multiverse wasting effort on these exact same things. With superrational cooperation, only sincere attempts at benefiting other agents' value systems correlate with the same behavior in others, and thus with the optimal consequences. Hence, there is no way to "game the system" or to get benefits without honestly paying for them.

Values

We have extensively covered the mechanism of (multiverse-wide) superrationality. However, in all thought experiments considered so far, we knew what impact our actions would have on the fulfillment of the other agents' preferences. For example, we know that the other participants in the donation game or Platonia five would prefer to have more money in their bank account. We also know that other civilizations would prefer not to be destroyed and would benefit from learning about our technologies in Intergalactic relations (section 2.9.3). Such knowledge has to be present or at least attainable in the future (cf. section 4.1); otherwise, no side can benefit the others. This section gives an overview of how we can find out what other agents in the multiverse care about, as well as what aspects of their preferences we should focus on in the first place.

Orthogonality of instrumental rationality and values

One objection to superrational cooperation might be based on a possible convergence of terminal values, according to which all agents with the correct decision theory will converge toward the same values. Moral realism claims that there are facts in morality as real and true as those in science. In addition, some moral realists believe that any rational agent investigating morality will ultimately arrive at these moral truths. Assuming that a large part of being rational involves using the right decision theory, maybe all agents with the right decision theory will independently come to adopt the "correct" moral system? If this is the case, no cooperation among these agents would be necessary (although some value systems may still require multiverse-wide coordination, see section 2.8.9). As a first counterargument, consider that knowledge of the correct decision theory is not necessary for superrational cooperation, seeing as a number of different decision theories (e. g., evidential, timeless and updateless decision theory) imply superrationality. Secondly, we do not seem to observe empirical evidence of such convergence.
For example, Eliezer Yudkowsky and Brian Tomasik agree that non-causal considerations are important for decision theory, but Yudkowsky's values nevertheless differ significantly from Tomasik's. There are also principled reasons to be skeptical of value convergence among agents with the same decision theory. Decision theories are about instrumental rationality, i. e. about making decisions aimed at achieving goals, not at revising them 35 . That is at least the case for decision theories as they are discussed today. Consider the following variant of the donation game: Donation game for sadists. Omega has selected 20 pure sadists, who draw pleasure only from torturing others and nothing else. They all use similar decision making mechanisms when playing a donation game (against correlated agents). Instead of being paid in dollar sums, they are given individual hours to torture a slave as a reward. Assuming sufficient correlation between participants, the instrumentally rational decision for each sadist is to cooperate such that the total number of hours of torture increases relative to universal defection. The moral choice, on the other hand, would be to defect in order to reduce the number of hours in which anyone gets tortured. However, decision theories (as currently discussed in the literature) do not take moral considerations into account at all. They merely aim to fulfill the goals, whatever they may be, of the agent using that decision theory. Hence, when applied by a pure sadist, a given decision theory is meant to help her spend more time torturing others. 36 There could conceivably be some different kind of \"decision theory\" that does recommend taking morality into account (and not only cooperation, see section 6.7.1) even if the agent using it is amoral or immoral. One could, for instance, simply combine the correct decision theory with the \"correct\" moral view. Some people may consider such a decision theory objectively correct. However, for an agent with immoral goals (like pure sadism), it would be instrumentally irrational to adopt such a decision theory. In any case, the existence of such a \"moral decision theory\" does not contradict the existence of a decision theory in the classical, instrumentally rational sense, so an amoral or immoral agent would still be better off adopting a classical decision theory. Thus, it would seem that an agent's values and their use of acausal decision theories are orthogonal. This, in turn, suggests that agents with a variety of value systems will adopt a decision theory similar to our own, such that their decisions will correlate with ours. Similar views regarding the relationship between instrumental (and epistemic) rationality and ethical values have been defended under the term orthogonality thesis (Bostrom, 2014b, ch. 7 , section \"The relation between intelligence and motivation\"; Bostrom, 2012; Armstrong, 2013) . Our claim that decision theory and values are orthogonal in principle does not imply that they never correlate in practice throughout the multiverse. Indeed, in section 3.4 and its companion papers, I will discuss various ways in which values and decision algorithms could be expected to correlate. However, it seems very unlikely to me that these correlations are so strong that they significantly dampen the relevance of superrationality. 
Necessary preconditions

Before we start thinking about the values of agents in other parts of the multiverse, we need to consider what kind of agents can join multiverse-wide superrational cooperation (MSR) at all. In particular, what sorts of values do they need to have, independent of whether or how many such agents or value systems actually exist in the multiverse? We already know that only helping superrational or correlated agents benefits us (see section 2.9.4). However, the values of the superrationalists must also be open to the opportunity of gains from compromise. If an agent's values imply that she is better off without any trades, there is no point in helping her. In order to more closely examine this precondition, we can break it into five distinct criteria, all of which are necessary for a superrational collaborator to reap the gains from compromise.

1. Each collaborator must care to at least some extent about states of the world, as opposed to caring only about their own mental states or actions.
2. They must also care about consequences in areas of the multiverse where there may be other cooperators.
3. Other superrationalists must be able to infer and understand their values in sufficient detail. (To draw action-guiding conclusions from MSR, they themselves need to be able to infer the values of some other superrationalists or to influence future agents with this ability.)
4. Given this knowledge of their values, collaborators must have some power to behave nicely toward these value systems. (Again, if MSR is to be action-guiding to an agent, they in turn need to be able to benefit other values.)
5. Doing so produces gains from compromise. If everyone abides by an analogous cooperative strategy, everyone is better off than they would be without cooperation.

If all these criteria are satisfied, superrational cooperation works. We will discuss them in turn in the following subsections. For some applications it may be fruitful to subdivide these criteria further. Furthermore, additional criteria, such as Bostrom's (2014) "porosity", may determine the size of the gains from compromise. We could also devise criteria to assess the extent to which superrational cooperation affects one's strategy. For instance, if all correlated agents have the same values anyway, superrational cooperation does not affect our policy except for cases of coordination (see section 2.8.9).

Consequentialism

Most people's ethical views are partly deontological (and sometimes virtue ethical). That is, they are not solely concerned about the state of the world and the consequences of actions, but also about "the actions themselves" (and, in the case of virtue ethics, one's character). They usually try to follow some set of rules prescribing what actions are appropriate in which situations. For example, many people follow strict rules against killing (though these usually do not apply under all circumstances and the meaning of "killing" is rarely fully specified), even when breaking these rules would lead to fewer deaths. This type of ethical system forms a central part of many religious doctrines, with notable examples such as the Christian Ten Commandments, Confucian filial piety, and Islamic sharia. In addition, most national laws contain countless rules of this sort, many of which apply to more mundane domains like traffic or taxes. Isaac Asimov's three laws of robotics are yet another example of a deontological set of rules.
The arguments for multiverse-wide superrational cooperation that I have given appeal to the consequentialist aspects of one's values - not because superrational cooperation requires us to push people off bridges (as in the Fat man version of the Trolley problem), but because its supporting argument is fundamentally based on the consequences of different actions. If we on Earth benefit other value systems, then this implies that others elsewhere in the multiverse also benefit our value system, which may produce better overall states of the multiverse via gains from trade. Hence, the value of superrational cooperation lies in its positive consequences on the world (or other worlds). The ethical duties of deontological ethical systems, on the other hand, usually concern the more immediate consequences of our actions. Thus, in a scenario like the Fat man version of the Trolley problem, most deontological theories would imply that the direct act of killing the fat man violates our duties towards him more than a failure to act violates our duty towards the five people on the track.

In Bourget and Chalmers' (2014) survey of philosophers, 23.6% of respondents characterized their values as consequentialist while 44.1% identified as deontologists or virtue ethicists, with the remaining 32.3% choosing "other". However, most people probably espouse values that involve at least some consequentialist aspects (cf. Muehlhauser and Helm, 2012, section 5.3). I doubt that many modern consequentialists would be emotionally capable of murder or torture even under circumstances where they could be confident that doing so would yield the best consequences 37. At the same time, I doubt that many defenders of rule-based ethics see no appeal in potentially reducing the amount of torture in the multiverse, even if only in an indirect way. In fact, many deontological rules are motivated or even defined by the consequences they produce. For example, murder is defined as any act that intentionally and directly results in the death of another person (although indirect ways of causing the same consequence (e. g., omissions) are not seen as murder). Rules against theft are often defended on the grounds that a society with such rules is preferable to one without, even if the rules might occasionally prevent a genuinely altruistic bank robbery. Some even interpret Kant's categorical imperative (especially its first "formulation") as a heuristic based on consequentially motivated decision-theoretical reasoning (cf. Parfit, 2011, section 63; Hare, 1993). As Rawls (1971, ch. 6) writes, "deontological theories are [not defined] as views that characterize the rightness of institutions and acts independently from their consequences. All ethical doctrines worth our attention take consequences into account in judging rightness. One which did not would simply be irrational, crazy." Although most people refrain from the consequentialist choice in extreme situations, they do, in fact, often endorse it. For example, in Bourget and Chalmers' survey (2014), 68.2% of the respondents chose to pull the switch in the original trolley problem, and only 7.6% did not (with the remaining 24.2% choosing "other"). Pulling the switch is similarly popular among the general population. This suggests that people sometimes agree with consequentialist reasoning even if other, exclusively deontological or virtue ethical considerations can overrule it.
Beyond consequences for things like the number of deaths, individual welfare, and fairness, people sometimes also care about adherence to deontological rules in a consequentialist way. For example, most people not only avoid killing others themselves, but also care about preventing murders in general; many who personally avoid lying also strongly dislike it when others lie; and so forth. These kinds of consequentialism, which are rarely considered in the literature on moral philosophy, qualify for superrational consideration just as much as, say, utilitarianism. We will revisit this topic of caring about the deontologically ethical behavior of others in section 3.4.1, in which we review studies indicating that many people have values of this sort.

Caring about the multiverse

Presumably, some agents with significant consequentialist aspects to their values will almost exclusively care about their own part of the multiverse, if only based on egoism or absurdity heuristics 38. It is thus very difficult or impossible to benefit them in other parts of the multiverse, in turn preventing cooperation. Although there is very little discussion about the moral relevance of other parts of the multiverse, the moral relevance of distance is frequently discussed in moral philosophy (see, e. g., Brock and Hassoun, 2013). Note that while distance is usually understood to be spatial, other kinds of distance (e. g., temporal (Beckstead, 2013) or social) play similar roles in ethical judgment. While the debate in moral philosophy appears unresolved, people's actions speak more clearly. Most people from high-income countries would save a child from drowning in a nearby pond, but donate only relatively small amounts to charity (Singer, 1972). Insofar as they do give to charity, they usually prefer local causes even though helping in low-income countries is more cost-effective. From this, we can safely infer that most people are altruistic to some extent but care more about near events than distant ones. One may suspect that an agent's ignorance of other parts of the multiverse has the same implications as a lack of interest in them. After all, if someone does not know about our part of the multiverse, they cannot help us. However, we must not forget that superrational cooperation need not be based on mutuality (see section 2.9). Even if someone cannot help us, we can still help them to make it more likely that we ourselves receive help from agents whom we cannot help.

Knowable values

In order to maximize for some given utility function, we or future superrationalists (see section 4.1) need a sufficiently detailed model of the utility function itself. In section 3.4, we will discuss how the evolutionary psychology of morality and related disciplines can be used to assess the values of superrational cooperators in the multiverse. There are at least some ways of making very educated guesses, although we cannot expect to arrive at a detailed and precise description of the values of all evolved civilizations and their descendants. However, perfect knowledge is not necessary for our purposes. Indeed, most people cannot even describe their own values in detail (see footnote 25). Yet despite this, humans are perfectly capable of helping one another achieve their goals.
Thus, the question is neither whether we can gain relevant knowledge about the values of other agents in the multiverse at all, nor whether we can have a full map of extraterrestrial morality, but whether the information we can gather about other civilizations can be sufficiently accurate to yield a usable model.

Fragility of value

Yudkowsky (2015, ch. 279) argues that human values are not just complex (cf. section 3.4.1); they are also fragile, in the sense that even minor errors in a non-human agent's picture of them can completely derail that agent's efforts to optimize for them. According to Yudkowsky, "Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth." Perhaps more generally, we could say that any resource expenditure will generate next to no value for an intelligent evolved being X unless that resource expenditure is shaped by a detailed inheritance of X's morals and metamorals. Yudkowsky gives boredom as an example of a small but indispensable part of human values: "Consider the incredibly important human value of 'boredom' - our desire not to do 'the same thing' over and over and over again. You can imagine a mind that contained almost the whole specification of human value, almost all the morals and metamorals, but left out just this one thing and so it spent until the end of time, and until the farthest reaches of its light cone, replaying a single highly optimized experience, over and over and over again." Presumably, many other seemingly insignificant aspects of human values are of similar importance as boredom. One would need to get all of these aspects just right in order to benefit human values. This suggests that it will be difficult to benefit many evolved value systems, due to the large amount of detailed knowledge it would require and the difficulty of gathering that knowledge.

There are various points to discuss in this context. For one, the fragility thesis is rather vague; it does not say how fragile our values are, or how accurate and reliable the inheritance must be. This is not to say that the fragility thesis makes no testable claim at all. Yudkowsky formulated it with the value loading problem of artificial intelligence in mind. Since AIs can be programmed to pursue any goal (cf. section 3.1), the space of possible values with which an AI could end up is vast, and the target goal systems occupy only a small fraction of this space. The fragility hypothesis can be interpreted as one elaboration on just how small this part of value space is, and how catastrophic it would be (from the perspective of that value system) to miss it by even a small margin. In other words: even if we take care to represent all of the most central aspects of our values (e. g., "increase the welfare of sentient beings" or "reduce inequality") in the goal system of an AI, the outcome may still be as bad as an entirely random one if we omit seemingly peripheral values such as boredom. Although I agree with the fragility thesis as a descriptive (rather than normative) statement about human values, I do not think human values are quite as fragile as Yudkowsky writes. Specifically, I think the outcomes brought about by AIs with two different goal systems can differ enormously in their overall worth even if both miss important aspects of human values.
For example, Yudkowsky's hypothetical world full of repetitive happiness may be boring, but it is still much better than a world full of suffering, unethical behavior, etc., and nothing of worth to compensate. But perhaps this judgment is influenced by my own values (which are mostly about ensuring the welfare of sentient beings with a strong priority for preventing their suffering), to the point where it would not generalize to how other humans, or other evolved agents in general, would view the situation. Transferred to our variant of the fragility thesis, this nevertheless suggests that even if we miss significant parts of the values of other superrational cooperators, taking their values into account may still make a big difference to them.

More importantly, AI value loading differs significantly from our attempt to benefit agents in other parts of the multiverse. The main problem of AI value loading is getting the AI to care intrinsically about human values. MSR, on the other hand, already gives us the sincere (instrumental) goal of helping other agents, which the AI lacks. If anything, we lack knowledge of the others' values, whereas AIs may still not care about them even with perfect knowledge of their values. Another crucial difference between these two contexts is that in AI value loading, we usually want the AI to hold the values of one particular species or group of people. In contrast, when cooperating superrationally, it is sufficient to know that we benefit many other superrational agents. We do not need to know whether we benefit some particular species. The extent to which this makes our job easier depends on how evolved value systems are distributed over value space. Perhaps they form a few (or many) very small clusters, as depicted in Figure 10. (Needless to say, Figure 10 is not meant to be an accurate map of value space. The placements on the map have no factual basis.)

Figure 10: A map of a part of value space under the assumption of there being distinct clusters with a lot of empty space in between. Every blue point on the map is some value system held by a significant number of agents; labelled examples include "Utilitarianism" and "Human values". The white areas of the map contain value systems that do not have a significant following, such as paperclip maximization.

If our map really did represent value space and each individual value system were fragile, then it would be difficult to benefit other value systems, because if we missed the targets only by a bit, we would end up with a value set that nobody cares about. However, it could also be that the values of different evolved agents occupy some compact part of value space, as depicted in Figure 11. In this map, darker areas represent value systems with many agents and lighter areas indicate value systems with fewer agents. If value space looks more like this map than our previous one, then it is easier to make "guesses into value space" to help superrational collaborators. As long as one is roughly aiming at the right part of value space, small errors just mean that one benefits slightly different superrationalists than intended. Only a few maps of humanity's value space have been created, the best-known of which is probably the Inglehart-Welzel cultural map of the world. I would nonetheless venture some guesses as to what more fine-grained maps of values would look like: on any individual planet, there are clusters formed by major religions, nations, political camps, and other cultural groups.
For example, there are many people who hold many of the moral views of the Quran and many who hold many of the moral views of the Bible, but presumably much fewer who defend a mix of the two. Nonetheless, the space between the clusters is not completely "uninhabited". Furthermore, the existence of these clusters seems to be partly arbitrary, a mere result of the way that different ideas were packaged together historically. If things had gone slightly differently, as they doubtless do in other parts of the multiverse, the authors of the Bible might have written that it is mandatory to fast during the month of Ramadan, thus filling a spot in value space with life that is only sparsely inhabited on Earth. If the multiverse is large enough, all these possible variations of values are realized somewhere and are probably no less common than the two religion clusters on Earth.

Figure 11: A map of a part of value space under the assumption that extant values occupy a relatively compact part of value space; labelled regions include "Human values" and "Utilitarianism".

One last difference between the way we extract values from other superrational cooperators and the way AIs might receive their values from humans is, of course, that the former involves no direct contact. Section 3.4 will address ways of circumventing this problem in order to identify the values of agents elsewhere in the multiverse.

The ability to help others

In some cases, it will not be in our power to help other value systems at all. Since any will to cooperate with these agents cannot possibly be action-guiding, we do not have to help them. Other agents in the universe may have other resources available to them and thus choose to behave in a friendly way toward these values. If, on the other hand, agents know that nobody else can help them to achieve their goals, multiverse-wide superrational cooperation (in particular, any version of it in which they just give resources away) becomes less attractive to them. One example of a value system that we cannot help is the following version of speciesism (that may or may not be a straw man):

The Namuh-centrists. One day, scientists inform you about a highly intelligent species of extraterrestrials known as "Namuhs". Like us, the Namuhs have built a flourishing civilization with art, trade, science, language, humor, philosophy (including advanced decision theory research), and so on. However, the Namuhs do not live in our universe, but in a distant part of the multiverse, completely inaccessible to us. In fact, they could not even exist in our part of the multiverse, as their bodies require slightly different laws of physics to function. Knowing about superrational cooperation, you hasten to ask whether they have thought about problems analogous to Newcomb's problem and the donation games between similar agents. A trustworthy scientist explains that their minds are indeed prone to thinking about such topics - much more so than those of humans, in fact! Understandably thrilled, you ask what values the Namuhs have, and specifically what values are held by those who have thought about acausal cooperation. The scientist then informs you that all Namuhs are very narrowly focused on their own species. They are Namuh-centrists who do not care one bit about anything that does not involve fellow Namuhs. For example, they shrug at the thought of non-Namuh suffering, the flourishing of non-Namuh civilizations, or non-Namuh well-being.
In fact, they are so strict that they do not even care about simulated Namuhs or other approximations.

Learning about their values, you may be disappointed. There is nothing that you can do to help them and it is therefore irrelevant whether they use a decision theory similar to yours or not. I should point out that the speciesism endorsed by the imaginary Namuhs is very rigid and narrower than most other views that we would usually classify as speciesist. Far from caring only about their own species, most people seem to care about the welfare of non-human animals to at least some degree, usually privileging some species (like cats and dogs) over others (like pigs and cows). Such views classify as speciesist, but nevertheless allow for superrational cooperation. Other views do not value humans over other animals for their species membership per se, but instead privilege other characteristics that (allegedly) only humans (and sometimes a few other species) possess. A common variant of this holds 39 that only members of very few species are conscious. Humans are one of them, but, according to such views, they otherwise do not deserve any special moral status. Given the implications of this view, proponents are sometimes (and often incorrectly) branded as speciesist. If the Namuhs held such a view, and humans (or other earthly species) met their criteria for consciousness, then our decisions could be beneficial or detrimental to the Namuhs' preference fulfillment. A similar reasoning applies to the possession of language, free will, the ability to pass the mirror test, or other (potentially) strict but non-speciesist restrictions on one's set of morally relevant agents.

There are other reasons why we might be (practically) unable to help other agents. For example, helping an agent could require some set of specialized abilities that they themselves developed based on their value systems. Consider the following example:

The Advanced math maximizers. One day, you learn that out there in the multiverse, there are civilizations made up entirely of mathematicians whose primary concern is maximizing mathematical knowledge. They don't care about the number of established truths or proofs per se, but rather value pieces of knowledge based on their novelty or interestingness, possibly resembling the way earthly mathematicians often prioritize their research. For instance, the mathematicians place a very high value on a proof or disproof of the Riemann hypothesis, whereas mundane factoids like the three-billionth digit of π have very little value in comparison. Moreover, once a fact becomes known to at least one of the mathematicians, reproducing that same piece of information elsewhere in the multiverse creates no additional value for them. (We assume that the universe is finite - otherwise every piece of knowledge may be known to some Boltzmann brain.) While they are not particularly skilled at anything else, their strong intrinsic motivation and dedication have made them into truly excellent mathematicians, unrivalled by anyone across the multiverse.

It is not easy to benefit the advanced math maximizers. We do not know what knowledge they already possess, and given their level of skill, we should assume that they will come up with most of the interesting pieces of mathematical knowledge that we could devise on our own. The math maximizers are thus so capable of maximizing their own utility function that there is little we could do to assist them (cf. section 3.2.5).
Zero-sum and "below-zero-sum" tradeoffs on resources

Not all interactions between agents allow for cooperation. Specifically, there is no way or reason to cooperate in zero-sum games, i. e. ones in which the overall payoff is always the same. Consider the following example:

The Maximizer Monarchs. Imagine a multiverse consisting of two universes. One is ruled by a queen whose only drive is to create as many paperclips as possible. The other universe is ruled by a king who only cares about producing as many staples as possible. Each stationery-maximizing monarch knows that the other exists and that they both use the same decision algorithms. They each have one hundred tons of steel at their disposal. What should they do with it?

Assuming that staples (specifically the kind of staples that the king cares about) cannot be built out of paperclips or vice versa, this interaction is zero-sum. Every bit of material that one of them uses for the benefit of the other is an equivalent loss to themselves 40. Thus, no form of cooperation between the two is beneficial. As the reader may suspect, zero-sum interactions are rare. We should expect that any given resource is better suited to achieving one goal than another, and so gains from trade can arise from allocating resources according to which value systems benefit most from them. Analogously, value systems care more about certain situations than others. Furthermore, whereas it may not be possible to combine a paperclip and a staple, many goals are compatible with each other. For example, a society's citizens can at the same time be happy and virtuous.

Gains through specialization and comparative advantages

At times, trying to achieve multiple goals at once is not just pointless - it can actually be worse than having each agent focus on one, typically their own, goal. To see how, let us revisit The Maximizer Monarchs of the previous section:

The Ever-improving Maximizer Monarchs. Like The Maximizer Monarchs, but this time, the efficiency at which each agent can produce paperclips or staples grows monotonically with the produced quantity. Again, each monarch wields one hundred tons of steel.

Without delving into mathematical details, it is best (in terms of the overall number of paperclips/staples produced) if each of the two specializes in one kind of stationery (a small numerical sketch is given below). In particular, there are no gains from compromise over each monarch maximizing only for their own goals. There may also be comparative advantages from the outset. Based on their respective motivation and prior experience, the queen may already excel at producing paperclips, while the king may be better at producing staples. Another important source of comparative advantages is unequal knowledge about different value systems. For example, if the queen does not know exactly what the king cares about, then she will be worse at benefitting him. Similarly, our knowledge of what other humans care about is much more precise than our knowledge of what agents elsewhere in the multiverse care about. The fact that specialization and division of labor play such a crucial role in the economy suggests that superrationalists will also tend to focus on a single goal rather than maximizing for multiple things at once. However, I think that this will not be the case, at least in our present situation. The primary reason is that the instrumental goals of agents with different moral values are often the same.
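As mentioned above, a small numerical sketch can make the specialization point concrete. The production function below (output growing as steel**1.5, i.e., efficiency rising with quantity) is an illustrative assumption of mine, not something specified in the thought experiment; under it, full specialization beats both monarchs splitting their steel between the two goods.

```python
def output(tons_of_steel: float) -> float:
    # Increasing returns: per-ton efficiency grows with the quantity produced.
    return tons_of_steel ** 1.5

STEEL_PER_MONARCH = 100

# Full specialization: the queen devotes all her steel to paperclips, the king to staples.
paperclips_specialized = output(STEEL_PER_MONARCH)
staples_specialized = output(STEEL_PER_MONARCH)

# "Compromise": each monarch splits their steel evenly between the two goods.
paperclips_split = 2 * output(STEEL_PER_MONARCH / 2)
staples_split = 2 * output(STEEL_PER_MONARCH / 2)

print(paperclips_specialized, staples_specialized)   # 1000.0 1000.0
print(paperclips_split, staples_split)               # ~707.1 ~707.1 -- worse for both goals
```

Under any production function with increasing returns of this kind, splitting the steel yields less of each good than specialization does, so there is nothing to be gained from compromise in this particular setup.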
For example, no matter the direction in which we would like to drive society, we will try to acquire money and political influence. These resources are often generic, such that when they are acquired with one goal in mind, they can also be employed in pursuit of another. As an example, consider how Donald Trump maximized his personal wealth for a long time, yet his resulting fame and money nevertheless enabled him to become president of the US, which in turn allows him to achieve all kinds of goals. The fact that instrumental goals tend to converge suggests that superrationalists in the multiverse rarely have a strong comparative advantage at achieving their own goals. If comparative advantages are not strongly aligned with goals, specialization can produce gains as well. For example, imagine a number of superrational agents, each of whom would like to maximize many different things separately, e. g., knowledge, fun, happiness and technology. Here, a no-compromise outcome - i. e. one wherein each agent only maximizes their utility function in their own universe - might be worse than a potential division of labor with one agent focusing on generating knowledge, another one focusing on fun, and so forth.

What values?

To help other agents, one at some point needs to have some workable model of their preferences. In general, it is difficult to extract preferences from a given agent if the agent is not von Neumann-Morgenstern (vNM) rational and cannot state her goal explicitly. Humans surely are not vNM-rational. Additionally, moral judgments are usually seen as being inaccessible to us in their complete form (see footnote 25) and as emerging from the whole brain rather than exclusively from, say, the anterior cingulate cortex. This makes sense from an evolutionary point of view. Preferences are tools for increasing the fitness of an organism, and there is no reason to assume that such tools would be any more open to scrutiny by the organism than, say, the detailed inner workings of the digestive system. In addition, while most organisms have rudimentary mechanisms for avoiding harm and seeking food and reproduction, holding grudges - i. e. a preference for retaliation - is only adaptive in non-solitary organisms with sufficiently good memory and recognition to correctly identify transgressors. In the evolutionary process, different values thus evolve separately and are unlikely to form a coherent whole (cf. Dennett, 1991; Kurzban, 2012). Thus, even if we had a complete model of our superrational collaborators, it would nevertheless be difficult to extract clear-cut values from them. In the absence of such exact models, it makes little sense for us to discuss the technical details of relevant preference extraction algorithms 41. We will, however, still need to think about informal ways of inferring preferences from a model of a superrational collaborator.

Idealization

One dimension along which preference extraction algorithms vary is the extent to which they idealize values. Consider the following example of preference idealization (adapted from a recent blog post of mine): Steve holds a glass of transparent liquid in his hand. A woman walks by, says that she is very thirsty and that she would like to drink from Steve's glass. What she does not know, however, is that the water in the glass is (for some unspecified reason) poisoned. Should Steve allow her to drink? Most people would say he should not.
While she does want to drink from the glass, her desire would probably disappear upon learning of its contents. Therefore, one might say that her object-level or stated preference is to drink from the glass, while her idealized preference would be not to drink from it. Similar questions apply to ethical preferences. For example, most people find meat consumption acceptable on the object level, but are simply unaware of information about the world that could change their minds, e. g., knowledge about the similarities between human and animal minds or the conditions in factory farms and slaughterhouses. Perhaps these people's idealized preferences favor vegetarianism? If we reduce meat consumption, should we count it as beneficial to people who approve of eating meat, but who could be convinced otherwise? Should we, in other words, idealize our collaborators' values when taking them into account in this universe? Besides gaining more information about the world, people's preferences may also change upon engaging with moral arguments (e. g., the original position or the drowning child argument). Even though such arguments do not provide new facts, they may invoke trains of thought that lead people to change their moral position. Should we also idealize preferences based on such moral arguments?

Idealization based on moral arguments, at least, can cause trouble. For one, some moral arguments can be viewed as potentially illegitimate "tricks" for persuading people to adopt undesired positions 42. An extreme example of this could be some moral or religious scripture that hypnotizes and brainwashes the reader. Surely, nobody would want other superrational collaborators to apply such a treacherous "idealization procedure".

[Footnote 41: Examples are described in Hansson and Grüne-Yanoff (2012), Varian (2006), von Neumann and Morgenstern (1953), Ng and Russell (2000), and Oesterheld (2016). Also consider Brian Tomasik's How to Interpret a Physical System as a Mind.]
[Footnote 42: One class of such tricks is described in my blog post Cheating at thought experiments.]

Order effects constitute another problem in using moral arguments to idealize preferences. Depending on the order in which we present someone with moral arguments, they may lock into a position and resist further arguments. If someone's moral views allow for more than one such lock-in, they may not be uniquely idealizable. A recent study by Schwitzgebel and Cushman (2012) shows that even philosophers exhibit order effects when considering thought experiments. In general, we may view agents as having (meta-)preferences regarding idealization. These determine how exactly they would like to have their values idealized. We should then abide by the respective agent's preferences, since we can then expect others to idealize our values in the way that we want them to be idealized. Unfortunately, this solves the problem only theoretically. In practice, finding out what idealization procedures other superrationalists would approve of seems very difficult.

[Footnote 43: For more thoughts on preference idealization, the reader may consult any of the following: Yudkowsky (2004); Grill (2015); Muehlhauser and Helm (2012), chapter 6; the Negative Utilitarianism FAQ, especially section 2.1; section 15 of Brian Tomasik's Hedonistic vs. Preference Utilitarianism; and my blog post entitled Is it a bias or just a preference? An interesting issue in preference idealization, in which I discuss the specific issue of removing cognitive biases from preferences.]
\n Beware motivated idealization One potential pitfall of idealizing another agent's values is that it might bias the result toward one's own moral views if one is not careful. After all, you will be more familiar with the arguments and thought processes that favor your own position, and they will seem more convincing to you than the arguments you know in favor of other positions (if you knew of similarly strong arguments in favor of other positions, there is a good chance you would have adopted them already). Such a process of legitimizing what we already want to do via superrationality-based reasoning could be nicknamed "superrationalizing". For instance, I might be tempted to think that supporters of deep ecology and (non-anthropocentric) environmentalism would, if they were rational, update their views significantly upon learning about Darwinian evolution and wild animal suffering. I may even presume that deep ecologists would support intervention in nature or even habitat destruction under idealization! While I do indeed think that many people's judgment of nature and preservation would change significantly upon understanding the above topics 45, I am worried about what such an aggressive stance on idealization tells me about the way other agents might go about idealizing values. For instance, when idealizing my values, environmentalists might reason that I just never thought enough about the beauty of nature. "If only this Caspar guy had taken the time to really contemplate the natural world in all its magnificent complexity, he would not think of nature as a tragedy, no matter how 'red in tooth and claw' it may be." Consequently, they might conclude that it is in my idealized interest if they lobby for leaving nature untouched or even spread it to other planets. I would not want others to idealize my values in such a way. While it may be true that a sufficient amount of time spent enjoying beautiful landscapes could convince me that nature is beautiful, I might not view that as a legitimate idealization procedure, as it merely reinforces conservationist arguments rather than offering new arguments or some form of balanced view. An example of a more obviously flawed extrapolation process is that of a mother reasoning that everyone's idealized values would be to prefer her son over all other children. After all, if they only spent enough time with him (just as she did), they would surely prioritize his well-being over that of other children! Once again, the respective idealization process seems unduly biased towards a certain position and will thus be rejected by most agents' meta-preferences. \n Values and distance People care about things differently depending on whether they happen nearby or far away in space and time. For example, while many liberals and quite a few conservatives politically favor legalizing cannabis, I expect that many of them would nevertheless feel mildly annoyed or uncomfortable if their best friend, spouse, or daughter were to start smoking it on a regular basis. For brevity, I will use the term near values for the part of our values that are about near things and far values for the part that is concerned with distant things. Both in the ancestral environment and today, most people operate primarily on their near values (with one notable exception being politics). In the context of superrationality, however, we are only interested in far values.
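As a toy illustration of this near/far split (my own, with hypothetical field names and an arbitrary threshold; construal level theory does not prescribe any such numbers), one could tag each preference with a rough psychological distance and keep only the far ones for the purposes of MSR:

```python
# Toy illustration (my own; construal level theory does not prescribe such numbers):
# tag preferences with a rough psychological distance and keep only the "far" ones,
# since only those are relevant to multiverse-wide cooperation.
from dataclasses import dataclass

@dataclass
class Preference:
    description: str
    psychological_distance: float  # crude aggregate of temporal, spatial, social and hypothetical distance

preferences = [
    Preference("my best friend should not smoke cannabis regularly", 0.1),
    Preference("cannabis should be legalized", 0.85),
    Preference("distant civilizations should contain little suffering", 0.95),
]

FAR_THRESHOLD = 0.8  # arbitrary cut-off, purely for illustration

far_values = [p for p in preferences if p.psychological_distance >= FAR_THRESHOLD]
near_values = [p for p in preferences if p.psychological_distance < FAR_THRESHOLD]

print([p.description for p in far_values])   # only these enter the MSR picture
```

The single-number distance is of course a simplification; psychological distance has several dimensions (temporal, spatial, social, hypothetical), as discussed next.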
Most other superrationalists are so far away from us that our values pertaining to their worlds fall under our far values. Hence, we want ETs to consider only our far values, which in turn means we should only consider the ETs' far values as well. That is, we do not need to know how they want their friends to treat each other, how they feel about drug use in their own social circles, and so forth. (Some think that the discrepancy between near and far values should disappear under idealization; we will discuss this below.) According to construal level theory, the difference between near and far values mainly results from the difference between two kinds of thinking or construal: concrete (or low) and abstract (or high) levels of construal. Which level of construal is applied mainly depends on the psychological distance to an event, i. e. the combined temporal, spatial, social and \"hypothetical\" (near = likely, far = unlikely) distance. People tend to construe psychologically near events concretely and psychologically far events abstractly. A recent summary of construal level theory is given by Trope and Liberman (2010a) . The mapping between levels of construal and psychological distance is imperfect. We sometimes think about psychologically distant things concretely, such as when watching a science-fiction movie, and about psychologically near things abstractly. Nevertheless, the mapping is useful. While there is little theoretical and empirical research on how people (and other evolved creatures of human-level intelligence) think and care about alien civilizations, there is some research on how people generally care about other psychologically distant and abstractly construed things. According to construal level theory, the abstract mode of thinking is similar regardless of the kind of psychological distance that is involved. Thus, we can use general research about abstract construal values to at least inform our first tentative guesses about values in the particular case of caring about distant civilizations. We have some reasons to expect construal level theory to generalize to other evolved beings. According to Trope and Liberman (2010a, section III, subsection \"Discussion\"), High-level construals and low-level construals serve different cognitive functions. High-level construals have evolved to represent distal objects because, with distance, one needs to conserve the essential, invariant properties of the referent object. In contrast, low-level construals preserve the object in minute detail for immediate use. The fact that abstract and concrete construals solve different problems suggests that they evolved separately. Indeed, low-level construals probably evolved earlier. Whereas processing one's immediate surroundings and short-term goals is necessary for any animal to survive, many can get by without processing psychologically distant things. Some of the feats achieved by civilization-forming species, on the other hand, require abstract thinking. 
In the conclusion of their paper, Trope and Liberman (2010a) write: The turning points of human evolution include developing tools, which required planning for the future; making function-specific tools, which required considering hypothetical alternatives; developing consciousness, which enabled the recognition of distance and perspective taking; developing language, which enabled forming larger and more complex social groups and relations; and domestication of animals and plants, which required an extended temporal perspective (Flinn, Geary, and Ward, 2005) . Human history is associated with expanding horizons: traversing greater spatial distances (e. g., discovering new continents, space travel), forming larger social groups (families vs. cities vs. states vs. global institutions), planning and investing in the more distant future, and reaching farther back into the past. In sum, I see some good reasons to expect that construal level theory applies to many other evolved species of human-level intelligence. 46 It thus matters whether we optimize for others' near or far values. Interestingly, we may also see the difference between different construals and thus near and far values as a cognitive bias that would disappear upon reflection, and that we should correct for in preference idealization. This may well be the case, but it is unclear which of the two views is more \"correct\" about ethics. One may argue that only thinking about concrete events can yield actual moral judgments, while abstract thinking may result in imagining a situation inaccurately or not at all and thus being unable to assess it correctly. Moreover, we tend to have weaker attitudes in general toward distant things than towards close things, 47 and this also seems to apply to moral weight assignment. 48 46 The presented argument resembles the general argument for modularity in evolutionary psychology (see, e. g., Cosmides and Tooby, 1994) . 47 For example, people prefer to receive money immediately rather than in the far future. They are risk averse and, of course, care more about socially close individuals. 48 For example, thinking about a concrete, identifiable goal or benefactee seems to be associated with feeling happier from donating money (Rudd, Aaker, and Norton, 2014) . People are more motivated by (concrete) identifiable victims than by (abstract) large numbers of victims, although a recent meta-study by S. Lee and Feeley (2016) shows the effect to be small. People are also more relativist when judging the acts of extraterrestrials and people from other cultures (Sarkissian et al., 2011) . Some arguments in moral philosophy evoke concrete construals (e. g., the fat man trolley problem or the drowning child argument) and some evoke abstract construals (e. g., the original position or many of the examples from my blog post Cheating at thought experiments) 49 . Both classes contain arguments that I find useful and legitimate. This suggests that neither of the two is morally superior across the board. Trope and Liberman (2010a, section VI) describe several experiments wherein high-level construals seem to capture the participants' values, whereas low construals led people to give more weight to \"local\" circumstances (such as social pressure and lack of self-control) (cf. Trope and Liberman, 2010a, section VII, subsection \"Affect\"). In high levels of construal, people tend to judge consequences more by their desirability than their feasibility, and thus assign more weight to moral views. 
More recent studies like those of Torelli and Kaikati (2009) and Agerström and Björklund (2013) have corroborated this result. However, it could also be interpreted as an indication that abstract thinking makes people more hypocritical. Yang, Preston, and Hernandez (2013) summarize further evidence in favor of giving more weight to high-construal judgments: High-level construal is associated with [...] an analytical, critical-thinking mindset (Torelli and Kaikati, 2009) . For example, people at a high level of construal are [...] more comfortable with messages that convey mixed emotions (Hong and A. Y. Lee, 2010) , suggesting greater cognitive flexibility. Indeed, previous literature showed that when an object is distanced from the self, individuals are less likely to be \"trapped\" in their own preconception or knee-jerk reactions (Kross and Grossmann, 2012) . Moreover, high levels of construal may enhance perspective taking toward others whose interests conflict with one's own. That said, abstract thinking is not without its systematic failure modes. It is, for instance, associated with overconfidence and the illusion of explanatory depth (Alter, Oppenheimer, and Zemla, 2010) . Further thoughts on the topic are given by Samuel Hammond in How to Conceptualize Morality: Near vs Far. In any case, we should keep in mind that idealizing away the difference between near and far values may be inconsistent with many agents' meta-preferences. \n Different kinds of preferences People often report that preferences in different domains feel qualitatively different from one another. For instance, it is common to distinguish moral preferences from other preferences. My preference for world peace over war is a moral one, for instance, but my preference for bananas over carrots is not. Of course, this line between moral and non-moral values is often blurry. For example, it is unclear whether wanting revenge or a cancer victim's desire to focus altruistic efforts on cancer research are moral preferences. I think a distinction between moral and non-moral preferences can also be drawn among far values. For example, my preferences for beings in other parts of the multiverse to be happy rather than to suffer is a moral one, but I would not view my preference for these civilizations to be fascinating, fun, or otherwise beautiful to my eyes (in the way that advanced civilizations in science fiction movies are) as a moral preference. Others might disagree, but that dispute is not worth exploring in this paper (indeed, I suspect it may be a largely verbal one). Potential criteria for this distinction between moral and other preferences may be that moral preferences are those we want others to share or that are somehow universal. Another distinction could be one based on a dual-process theory of morality (see Greene 2013, part II for an overview and references to the literature). Or consider Sarma and Hay (2016) , who propose that \"what we call human values can be decomposed into 1) mammalian values, 2) human cognition, and 3) several millennia of human social and cultural evolution.\" I do not think such distinctions are necessary when cooperating superrationally. Instead, we should focus on all preferences that are action-guiding to the respective agent (if this is not included in the term \"preference\" anyway 50 ), irrespective of whether they are \"moral\" or \"mammalian\". 
By definition, if I have to decide between two courses of action and one of them better suits the preferences that guide my actions, I will choose that one. In the case of superrationality, only accounting for other agents' action-guiding preferences correlates with others also taking only my action-guiding preferences into account. Therefore, taking all action-guiding preferences into account is best according to my action-guiding preferences. Hence, we should only take steps to fulfill the action-guiding preferences of other superrational collaborators, ignoring any other preferences they might hold. Given the above, I shall in this piece not differentiate between moral and other far values. Instead, both terms will be used to signify our action-guiding far values. \n The values of our superrational collaborators in the multiverse Having outlined what kinds of values we would like to know about for multiverse-wide superrational cooperation (MSR), we can finally proceed to discuss these values. Whereas it is not strictly necessary for us to know about our cooperators' values right away in order to benefit them (see section 4.1), such knowledge is surely useful and has to be attained at some point. In fact, one objection to MSR that many people have brought up in private conversation is that, given our uncertainty about other value systems in the multiverse, we should focus solely on our own values (also see section 6.11). As readers may suspect, a comprehensive discussion of this topic is beyond the scope of the present paper. However, we will give an overview of how we (or future superrationalists) can gain knowledge about the values of our collaborators elsewhere in the multiverse. Besides guiding future research, this overview will also demonstrate that we can learn at least something about their values in the first place. 50 Many definitions of preferences are based on choice (see footnote 41). Some examples where preferences may not be what our choices reveal include: • akrasia and lack of willpower, as it manifests itself in procrastination and inability to adhere to exercise routines and healthy diets; • preferences about fiction, as people often care deeply about how a story ends but usually without trying to lobby or coerce the authors to satisfy that desire (Radford and Weston, 1975; Schneider, n.d.); and • preferences for states of affairs that are mathematically inconsistent or physically impossible (Oesterheld, 2017b). It seems as though there are two main ways of assessing the values of other agents in the multiverse. The first involves empirical research into the values of superrational cooperators on Earth. Because the sample size is so small, we may also look at humans in general, under the assumption that the values of superrationalists resemble the values of their native civilization. Even if the values of superrationalists differ from those of other agents, they may do so in systematic and predictable ways. General human values may thus yield some useful insights about the values of superrationalists. That said, it may be that only a small fraction of superrationalists in the multiverse are human-like. For example, it could be that most other superrationalists are artificial intelligences and whole-brain emulations. It could also be that many other evolved agents are very different from us.
The other approach involves understanding the processes that generate and select the values of agents in the multiverse, such as biological and cultural evolution, the transition to superintelligent AIs, etc., and extrapolating them into workable predictions about the preferences of agents on other planets. In principle, this approach is sufficient for gathering a good map of the values of civilizations throughout the multiverse. In practice, however, it is probably very difficult to accurately predict how these processes play out. A combination of both approaches might be easier to work with. We can begin with human values as a baseline and inspiration for what kinds of moral attitudes may exist, and then review whether the processes of biological and cultural evolution systematically favor these attitudes. This would enable us to find out whether they are coincidental and hence rare in the multiverse, or necessary and thus common. At the same time, we will of course need to avoid being biased toward human values, making sure not to drift off into telling just-so stories about why some human practices and values might be universal among evolved agents of human intelligence (Buss, 2015, chapter 2, section "Methods for Testing Evolutionary Hypotheses"). In principle, we or future superrationalists also need to find some way of coming up with new moral values, i. e. ones that we do not observe on Earth. Based on a model of the values of evolved agents we can then think about the values of these agents' descendants (whole brain emulations, superintelligent AIs). Assessing the action-guiding, consequentialist far values of agents in the multiverse could be a scientific (sub-)discipline in its own right. That being said, I do not expect a "Journal on Extraterrestrial Value Systems" to materialize anytime soon. Untestable speculation about ETs does not inspire academic respectability. In researching this paper, I did not find much prior work on any aspect of the values of evolved agents in the multiverse, which in turn makes me less than hopeful that the more specific issues pertaining to multiverse-wide superrational compromise will be picked up by other researchers out of curiosity. Hence, superrationalists will probably need to think about ET values themselves. \n On the far values of humans and human superrational cooperators We will now explore what superrational humans might care about in distant civilizations. Unfortunately, our sample of these people is small, and path dependencies may mean that current earthly superrationalists are not very representative of those elsewhere in the multiverse. We will, therefore, also look at general human values and far values in particular. \n Organizing human values Although most people have reliable intuitions for what other people care about, these intuitions are hard to pin down, owing to the inherent "messiness" of human moral intuitions (cf. Stewart-Williams (2015), section "Morality Is a Mess"; Muehlhauser and Helm, 2012, chapters 3-5.3). This "messiness" makes evolutionary sense (Cosmides and Tooby, 1994) and should therefore be expected from other civilizations in the multiverse as well. Nevertheless, there are several attempts at classifying human moral intuitions, most prominently Haidt's moral foundations theory (Haidt, 2012), as well as those of Shweder et al. (1997) (also see Pinker (2011, chapter 9.4) for a short, accessible summary). There is also Peter Levine's Alternative to Moral Foundations Theory, which is not formally published. Furthermore, there are some characterizations of the cultural and moral differences among humans.
For example, Inglehart and Welzel divide moral values into just two factors: traditional versus secular-rational values, and survival versus self-expression values (2010, note 10). Hofstede recognizes six cultural dimensions, and Trompenaars' model of national culture differences has seven dimensions of varying moral relevance. \n Human far values We have seen that far values are the parts of our preferences that are relevant to MSR (see section 3.3.2). There are almost no studies on how humans care about alien civilizations. However, construal level theory suggests that we think and thus care similarly about different psychologically distant things. This brings us to the question, how do people usually care about psychologically distant or abstractly construed things? In contrast to their concrete counterpart, values at abstract construal levels tend to focus more on the central (as opposed to peripheral) features of a given situation (see Trope and Liberman (2010b) , esp. section V). Values in abstract construal levels are therefore less fragile (see section 3.2.3), which is good news for us, as this inherent stability makes them easier to account for in our superrational cooperation. But what are those central features? A few studies have been conducted to find out, most notably one by Bain, Hornsey, Bongiorno, Kashima, et al. (2013) . The authors summarize the results in a blog post: In our research, we asked people to think about the effects that changes in society today would have on society in the future (the Year 2050). For instance, we asked people to consider what society would be like 50 years in the future if climate change was mitigated, marijuana was legalized, abortion laws were relaxed, or the proportion of atheists or Muslims in society increased substantially. Participants considered changes in society relating to people's characteristics (how caring, moral, and competent people would be in 2050), whether people's values would change (e. g., becoming more concerned with security or achievement), whether there would be more societal problems (like crime and poverty), or greater societal development (economically, technologically, and socially). The different contexts produced diverse and nuanced images of what future society would be like. For example, participants saw a more atheist future society as making people less friendly but more competent than today, but saw a future society where marijuana was legalized as both less friendly and less competent. Overall, people's images of future society weren't all good or all bad, suggesting they had realistic rather than fantastical projections about what society would be like in the future. What may be most surprising, however, is that only one dimension emerged as a reliable motivator of people's actions in the present. People supported changes in policies today (e. g., legalizing marijuana, acting on climate change) if they believed it would lead to a future society where people were more caring and moral. Other dimensions -people's values, their competence, or levels of societal problems and societal development -emerged less strongly, only in a few contexts, or were irrelevant to people's willingness to act. Similar findings were made by Bain, Hornsey, Bongiorno, and Jeffries (2012) , Park, Bain, and Kusumi (2015) , Judge and Wilson (2015) and Bain, Milfont, et al. (2015) in other policy areas. 
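For readers who want to see what testing this kind of claim looks like in practice, here is a schematic sketch with entirely invented numbers (it is not the data or analysis of Bain et al.; the variable names and values are hypothetical and merely mimic the reported pattern):

```python
# Schematic sketch with invented numbers; NOT the data or analysis of the studies
# cited above. It only shows the shape of such an analysis.
import numpy as np

# Each row: one hypothetical participant's ratings of a projected 2050 society on
# [benevolence, competence, societal_development], in [0, 1].
ratings = np.array([
    [0.9, 0.4, 0.5],
    [0.2, 0.8, 0.7],
    [0.7, 0.5, 0.3],
    [0.1, 0.6, 0.9],
    [0.8, 0.7, 0.6],
    [0.3, 0.2, 0.4],
])
# The same participants' support for acting on the policy today, in [0, 1].
support_today = np.array([0.85, 0.30, 0.70, 0.25, 0.80, 0.35])

# Ordinary least squares: which projected dimension predicts present-day support?
X = np.column_stack([np.ones(len(ratings)), ratings])
coefs, *_ = np.linalg.lstsq(X, support_today, rcond=None)
for name, c in zip(["intercept", "benevolence", "competence", "development"], coefs):
    print(f"{name}: {c:+.2f}")
# The invented numbers are chosen to mimic the reported pattern: only the
# benevolence-type dimension comes out as a substantial predictor.
```

The substantive findings, of course, are those of the studies cited above, not of this toy example.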
These results are quite surprising -I know of no explicit discussion of "virtue consequentialism" in moral philosophy, for instance. Unless we think that superrationalists consistently hold different values, that humans are atypical, or that these findings are somehow invalid, the findings suggest that the application of MSR implies significant policy changes for people espousing more commonly discussed consequentialist value systems like utilitarianism. Unfortunately, the above studies have some limitations. For example, Bain et al. (2013) do not ask for a utilitarian evaluation -i. e. one based on overall (average or total) welfare (or preference fulfillment) -of the future societies. Perhaps participants only put much weight on future citizens being caring and moral because these are proxies for other moral issues (such as welfare)? Besides methodological issues with the study itself, the results may not transfer to MSR without complications. In any case, social psychology studies often fail to replicate or generalize as expected. Construal level theory notwithstanding, it could be that people's views on alien civilizations differ from those on "collective futures". These results should thus be seen as tentative and preliminary until further replications come in; for now, we can regard them as serving an illustrative, rather than action-guiding, purpose. Moreover, benevolence -the term used by Bain et al. to encompass the characteristics caring, moral, and competent in their 2013 study -is still a rather fuzzy concept that probably depends on people's general moral views. It seems likely, for instance, that the definition of benevolence in a given situation varies considerably between, say, a devout Jain and a devout Salafist Muslim. In future research we should therefore look more into what kind of benevolence or moral behavior people value. For example, Napier and Luguri found that abstract mind-sets decrease preferences for loyalty, authority, and purity, all of which lie on the conservative and tribe-specific end of the moral spectrum (cf. Luguri and Napier, 2013). \n The values of human superrational cooperators We should also investigate how the values of today's superrational cooperators differ from those of other humans. Unfortunately, while I suspect that the results would be both interesting and informative, the number of people who actively reason superrationally today is too small to yield a statistically representative sample. We will therefore focus on groups of people who, for various reasons, are likely to embrace most (if not all) of the arguments underlying superrational cooperation. Of course, all of this again only gives us very weak evidence about the content of the compromise utility function. For one, it does not tell us much about civilizations that are very different from humanity. Moreover, the values of earthly superrationalists may in great part be the result of path dependencies. Thus, they may differ even from the values of superrationalists in civilizations that are very similar to humanity. Despite these considerations, I think that this most direct empirical approach to ascertaining the content of the compromise utility function is worth investigating. \n Philosophers Given that the central theme of this paper rests to a large degree upon philosophical considerations that are unlikely to be well-known outside of analytic philosophy, it seems reasonable to begin our review with analytic philosophers.
While many philosophers seem to accept causal decision theory (see section 2.2), they are nevertheless far more likely than most people to be aware of such ideas at all. 52 Furthermore, we can use Bourget and Chalmers' (2014) survey of philosophers to look at correlates of making the non-causal choice in Newcomb's problem. Most decision theorists see Newcomb's problem as analogous to the question of whether to cooperate superrationally in the prisoner's dilemma with a strongly correlated opponent (Lewis, 1979). The correlations, taken from the survey website, are inconclusive. Apparently, one-boxing in Newcomb's problem correlates very weakly with non-physicalist views in philosophy of mind (0.139), and only slightly stronger with viewing one's own work as Wittgensteinian (0.15). Two-boxing, meanwhile, has similarly weak correlations with endorsing the B-theory of time (0.141), embracing classical rather than non-classical logic (0.136), not being a communitarian (0.128), atheism (0.125), scientific realism (0.121), and externalism in moral motivation (0.102). 53 Correlations between choices in Newcomb's problem and the trolley problem were too weak to warrant any mention 54. These results do not appear to offer much insight into what value systems should be taken into account for superrational compromise. So, if anything, we could look into the values of philosophers in general. 52 That said, philosophers often do not act on their self-reported views (Schwitzgebel and Rust, 2011). For example, while philosophers (and ethicists in particular) are much more likely to rate eating meat as morally reprehensible, differences in behavior (i. e., actual meat consumption) are, at best, meager. 53 Interestingly, two-boxing is not only mainstream among philosophers in general (see section 2.2), but also slightly more common among philosophers with whom I (and most acausal decision theorists I know) would otherwise agree more. For more discussion of this phenomenon from the perspective of a one-boxer, see Carl Shulman's Why do theists, undergrads, and Less Wrongers favor one-boxing on Newcomb? and its comments. \n Effective altruists Let us turn to another community in which taking action based on philosophical arguments is common: the effective altruist and rationalist spheres. Specifically, we will look at the effective altruist, LessWrong, and Slate Star Codex communities. A multitude of surveys of these demographics are available. 55 Within the LessWrong community, one-boxing in Newcomb's problem is about ten times more common than two-boxing, as evidenced by their 2012 and 2013 member surveys. 56 The 2009 survey also revealed that most LessWrong users would cooperate in one-shot prisoner's dilemmas against one another. Acausal reasoning thus appears quite common in this community. In fact, updateless and timeless decision theory (see 2.2) arose from discussions on LessWrong. The surveys also show that many members of the community identify as consequentialists (see below). Indeed, effective altruism is itself built upon a foundation of consequentialist arguments, although it is consistent with additional deontological restrictions. The community's general world view is entangled with both their consequentialist and decision-theoretical views, 57 as well as a general curiosity for discussing ethics and decision theory in the first place 58. Hence, we may regard their views as indicative (if only weakly) of those of other superrational consequentialists in the multiverse.
Even the existing surveys reveal some interesting facts about the values held by community members. For instance, they are overwhelmingly liberal, with only a few percent self-identifying as conservative. Furthermore, they show a significantly greater concern for animals than the average person in Western industrialized countries. 54 Unfortunately, at the time of writing, the site that is supposed to show all (as opposed to only the strongest) correlations between Newcomb's problem and other questions appears to be broken. 55 General surveys on LessWrong were made in 2009, 2011, 2012, 2013, 2014 and 2016. There is a Slate Star Codex survey from 2014, which only asked non-LW users to participate. Surveys of the EA community were done in 2014 and 2015. 56 The results (as of December 2016) of another LessWrong poll confirm that most community members strongly favor one-boxing. 57 For example, they view rationality, and consequently decision theory and other sciences, as being, in the end, about winning, in line with the instrumental and epistemic conceptions of rationality, rather than about acting or thinking in accordance with some reasons and requirements. Another example of a connection is that Eliezer Yudkowsky, the founder of LessWrong, has dedicated his life to making sure that artificial intelligence has a positive impact and convinced many in the community that this is a worthy goal. It also seems to me that the context of (superintelligent) machines can function as an intuition pump for consequentialism. 58 Decision theory and a goal system are two important ingredients for solving the problem of AI alignment (see the preceding footnote) (Soares and Fallenstein, 2015; Bostrom, 2014b, chapter 13, section "Component list"). Furthermore, effective altruism, i. e. systematically trying to do as much good as possible, requires that one knows what is good at least in some detail. For example, metrics like the quality-adjusted life year or the disability-adjusted life year may be used to evaluate interventions against poverty. (See GiveWell's articles on the topic.) Effective altruism presumably also inspires learning about rationality. \n Considering larger and smaller groups We could also try to survey the values of much smaller sets of people, like those who have argued against causal decision theory (e. g., in academic papers) or who indicate that they take the implications of non-causal decision theories seriously. Conversely, we can also study the values of much broader sets of people to make use of the academic literature. For example, we can reasonably assume that in order to discover superrationality, one would need a general philosophical mindset (rather than, say, a merely pragmatic one) and a willingness to engage in thought experiments that have no immediate practical relevance. We could then try to identify groups of people who meet these criteria, and to discover what values their members have in common. While we should also help superrationalists who do not believe they live in a multiverse (see section 2.9), we should nevertheless expect superrationality to be more widely accepted among people who do believe in a multiverse. After all, superrationality is probably far less action-guiding on its own than in combination with the multiverse hypothesis (see section 6.6), so it is comparatively unlikely to spread by itself. Thus, we could also survey the values of people who believe in the multiverse hypothesis.
Similarly, we could study people who accept similarity to and correlation with others (as opposed to thinking that they are unique in the entire multiverse). Interestingly, such people may tend to be conservative (Stern, West, and Schmitt, 2014). On the other hand, it may be that our currently available sample of superrationalists is atypical simply because the topic of MSR is still in its infancy here on Earth. If superrationality eventually becomes more popular on Earth and elsewhere in the multiverse, we may find that only the relatively few early adopters of the idea differ significantly from the human mainstream. Presumably, this pattern is common in many areas of progress (cf. Rogers, 2010, chapter 7). For example, the average computer user in 1970 was very different from the general population at the time, since operating a computer back then required a particular set of technical skills that most people neither possessed nor had the time to learn. But once these early adopters improved the technology and convinced others to buy computers, they were soon outnumbered by people who would never have worked with the older computers, eventually culminating in today's ubiquitous use of computers. Thus, averaged over the past 50 years, the typical computer user would probably resemble an average young person in a developed country. Analogously, early superrationalists may need to be more willing to study obscure thought experiments and look deliberately for crucial considerations. Soon, however, these early adopters may find themselves outnumbered by less explorative people who would never have thought about donation games between correlated agents on their own. While it is very unlikely that MSR will spread as widely as computers, the average superrationalist may nevertheless end up looking more similar to the average person than today's sample suggests. \n Biological evolution According to conventional views of the multiverse and its physical laws, almost all 59 of its inhabitants are evolved agents or descendants of evolved agents. This means we can use our knowledge of evolution and its workings to predict what values these other agents have. Since much has been written about evolution and the more relevant fields of evolutionary psychology and (descriptive) evolutionary ethics, we shall not discuss them in detail here. Readers may consult the works of Pinker (1999), Stewart-Williams (2015), Greene (2013), Axelrod (2006), and Buss (2015) for introductions. \n Cultural evolution A process similar to evolution takes place on the cultural level. Whereas biological evolution operates on genes, this cultural evolution determines the development of pieces of culture or memes such as "tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches" (Dawkins, 1976, chapter 11). 60 Again, this is not the place to review the literature on this topic. For an introduction to cultural evolution consider, e. g., Henrich (2015). \n Which moral views correlate with superrationality? We can turn to the study of cultural evolution to learn about the prevalence of various consequentialist value systems. But whereas superrationalists probably resemble other intelligent agents biologically, they may well differ from them culturally. Thus, in addition to considering the civilizational baseline, we may also look into what values often go hand in hand with superrationality.
Below are some preliminary examples of lines of reasoning that might be relevant in this context, some of which resemble the more empirically minded comments in section 3.4.1: • Cooperation in general is more relevant for people with value systems that differ strongly from the mainstream in their civilization. • Some value systems benefit more from cooperation (and are harmed more by its breakdown) than others. Agents with these value systems are more interested in cooperation than others. • Superrationality is a "weird" philosophical idea. Therefore, it is more accessible to people who care about knowledge, are open-minded, are philosophically rather than pragmatically inclined, and so forth. • Superrationality on its own is probably insignificant to most people's lives (see section 6.6). Hence, we should expect many superrationalists to only care about the idea because it comes packaged with multiverse theories. While this does not necessarily have implications for the other superrationalists' values, it does underline the point about a "philosophical" rather than "pragmatic" mindset. After all, thinking about all these other universes usually does not matter for our actions. • The significance of MSR is more apparent to people who think of goals as utility functions or the like, since this view makes it easier to see that we can have preferences about distant parts of the multiverse. Once someone notices that many agents have preferences about each other's universes, she can see room for trade. If thinking about utility functions or similar formalisms indeed paves the way for MSR, we may also expect artificial intelligence researchers, game theorists, and some ethicists to be prominently represented among superrationalists in the multiverse. • The significance of MSR is more apparent if one realizes that it may imply radical compromise wherein everyone effectively changes their utility function. This, in turn, may be most apparent to people who are familiar with arguments like Rawls' original position (Freeman, 2016) or (preference) utilitarian reasoning along the lines of "it would be best if everybody...". • MSR may come more naturally to people whose values require coordination (see sections 2.8.3 and 2.8.9). Most of these considerations apply mainly to civilizations resembling our own. Of course, we can similarly think about correlates of superrationality in more advanced (or, in principle, more primitive) civilizations. Such thoughts are even more speculative, especially if we do not know what future civilizations might look like. In the next section (or rather, its companion papers) we will also consider cultural (and biological) evolution in specific models of more advanced civilizations. \n Other considerations Biological and cultural evolution are not the only processes that affect the distribution of moral views throughout the multiverse. In particular, I would like to draw attention to three other candidates. Given that the underlying considerations in these areas are advanced, speculative, and not strongly related to superrationality itself, I will not go into detail; rather, I will refer to complementary notes for further tentative ideas. • Some civilizations may inadvertently self-destruct before they can shape their part of the universe. Others, meanwhile, may voluntarily refrain from colonizing space. The Fermi paradox suggests that either of these is likely.
The values of such civilizations should not receive the same weight as one would have believed before considering this argument. For more information see the complementary note Civilizational filters and distribution of values in the multiverse. • An important step in the development of humanity and potentially other advanced civilizations could be whole brain emulation, i. e. scanning a biological brain, uploading it into a computer, and then simulating its behavior to a level of detail that preserves the brain's functionality. Since, by assumption, they are functionally equivalent to human brains, whole brain emulations (ems) can do most of the things that humans can. However, they can also do many things that humans cannot. Because ems are software, they can be copied, their information processing can be sped up (by moving them to a faster computer or giving them more CPU time on their current one), and they can use the internet to travel much faster and cheaper than humans. These advantages may be sufficient to allow ems to become more numerous and more powerful than humans. They also imply predictable cultural changes, including an increased willingness to adopt non-causal decision theories. See the complementary note Values and acausal reasoning of whole brain emulations for more details. • Because evolved minds (whether uploaded or not) are not made to be maintained or improved, I expect that many long-lasting civilizations will eventually create artificial superintelligence. Because programming an AI to pursue one's goals might be difficult, the values of such an AI may differ from those of humans. For more information, see the complementary note Values and non-causal reasoning of superintelligent AIs. \n Interventions In this section, I propose a few interventions that multiverse-wide superrationalists should pursue. Many of these are tentative ideas to consider more in the future rather than confident suggestions for what we should do today. \n Cooperation in the face of uncertainty about values We begin with a general challenge: given that we currently know so little about the values of other agents in the multiverse, how can we cooperate with them? With our current state of knowledge, it appears impossible to conclude what position MSR recommends on particular issues. For example, it seems impractical to decide whether we should vote and lobby in favor or against mass surveillance, abortion, marijuana legalization or the death penalty. Perhaps MSR, while interesting in theory, is practically impossible to apply because of our ignorance of the values in the multiverse? (Also see section 6.11.) While our uncertainty about the values of our collaborators is no doubt a major obstacle to the application of MSR, I will nonetheless argue that there are relevant policy changes that we can implement even today. The first class of such interventions requires no knowledge about other value systems at all, as long as we are confident that future agents will be able to attain such knowledge. Meta-activities are examples of this: no matter what the aggregated utility function of all superrationalists in the multiverse turns out to be, we could still benefit it indirectly by learning what it is or by spreading MSR itself (see section 4.5). One way of doing so is to ensure that artificial intelligences cooperate superrationally (see section 4.6). 
In the second class of feasible interventions, we try to draw conclusions from what little we do know about the distribution of values in the multiverse. We can, for instance, be sure that extraterrestrials will care less than humans about the Bible or the United States of America (though some will care about them a lot and many may care about preserving local traditions in general). On the other hand, we can be reasonably confident that many extraterrestrials care about satisfying the preferences of some other agents (e. g., \"innocent\" agents capable of reciprocating) (see, e. g., Axelrod, 2006; Trivers, 1971; Fehr and Gächter, 1999; Dawkins, 1976; Taylor, 1987; Buss, 2015, chapter 9) . Hence, we should perhaps embrace such \"universal\" moral values more than human superrationalists would otherwise do. (We explore this further in section 4.1.1.) Consider another example: the far values of at least some humans probably resemble those of many evolved extraterrestrial superrationalists, which means that we can benefit our superrationalist collaborators by increasing the capabilities of these humans to fulfill these preferences (see section 4.4). As a last example of how we can use a small piece of knowledge, consider how we can sometimes know that someone's values are at an extreme end of some scale or otherwise far away from the multiverse-wide superrationalist average. In this case, MSR suggests that we shift these extreme values towards the middle of their scale. For example, utilitarians are extreme in that they only care about welfare, whereas most superrationalists presumably care about a lot of other things as well. Thus, it would be good to convince the utilitarian to take other considerations into account (although it is not clear what these ought to be and how much they should be taken into account). In both of these classes, we overcome the obstacle posed by our lack of knowledge by benefitting a wide variety of value systems, rather than picking out any particular subset of extraterrestrials. \n Universalism I think satisfying universalist values, i. e. ones that are shared by a large fraction of superrationalists, may become somewhat more important for all superrationalists, although the case is not entirely clear. Imagine a group of people with a few shared concerns, such as justice, welfare and freedom, and a large number of non-shared concerns, such as each person's egoism, tribal loyalties, etc. 61 Given this, they can produce gains from trade by moving resources from the nonshared concerns to the shared concerns. In terms of the compromise utility function (see section 2.8), the idiosyncratic concerns do not receive lower collective weight than prior to cooperation. However, since each individual idiosyncratic value receives much smaller weight in the compromise utility function and interventions usually cannot satisfy many of them at the same time, interventions targeted at the universal concerns will usually increase the compromise utility function more efficiently. Although this argument is quite persuasive, it is not as strong as it initially seems. For example, it assumes that each individual's values are a simple weighted sum of universal and idiosyncratic concerns. But preferences can also have different shapes. In fact, each agent may explicitly protect its idiosyncratic values against losing its weight in such a preference aggregation mechanism. 
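Before turning to these caveats in more detail, it may help to see the basic weighted-sum argument in toy numbers (my own, purely illustrative; the utility functions are hypothetical):

```python
# Toy numbers (my own) for the weighted-sum argument above. Two agents, each with
# one unit of resources; spending a unit on a concern raises its level by one.
# Utility of each agent = level of the shared concern + level of its own
# idiosyncratic concern.

def utilities(shared, idio_a, idio_b):
    return shared + idio_a, shared + idio_b

# No compromise: each agent splits its unit between the shared concern and its own concern.
print("no compromise:      ", utilities(shared=0.5 + 0.5, idio_a=0.5, idio_b=0.5))  # (1.5, 1.5)

# Compromise: both agents put everything into the shared concern, which benefits both.
print("all into shared one:", utilities(shared=1.0 + 1.0, idio_a=0.0, idio_b=0.0))  # (2.0, 2.0)
```

The caveats that follow describe preference shapes for which this convenient picture breaks down.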
One example could be that the idiosyncratic values are much stronger than other preferences, but face diminishing returns. For instance, most people probably care almost exclusively about their own well-being until their level of well-being reaches some threshold. There may also be agents who exclusively care about their idiosyncratic preferences. For agents with these values, a compromise in which resources shift to universal concerns is negative. Another reason to disregard idiosyncratic preferences is that they are often not more common than their opposites. For example, Marxism, the US or Islam are liked by many, but also disliked by many others. Therefore, it is not even clear whether the compromise utility function evaluates any one of them positively. It should be noted that some universalist values may refer to others' tribal values. For example, many humans care about preserving cultural heritage. That said, this preference is usually weak and tends to be abandoned if it conflicts with other values. For instance, few would argue that human sacrifices should be continued to preserve tradition. Although most people do not care enough about animals to become vegetarians, my impression is that most people in Western countries would favor the abolition of bullfighting. \n Moral advocacy Advocating one's moral views can be an effective intervention if they differ significantly from those of most other people. In light of superrational cooperation, we should perhaps change the values we advocate. \n Universalist values As I argued in section 4.1.1, superrational compromise may imply that more resources should be used to satisfy universal as opposed to idiosyncratic concerns. This suggests that spreading universalism is good from an MSR perspective. \n Expanding the moral circle Most people care much more about themselves, their kin, and their associates than about others. From their point of view, they, their kin, and their friends are all special. From the outside view, however, most people are not more important than others. It is thus to the benefit of altruistic outsiders (e. g., other humans) to reduce the difference between how much people care about themselves, their family, friends, etc. versus other humans. In Singer's terminology, an outsider who cares about all humans equally would similarly want people's "circle of empathy" to expand outwards to include other humans (Singer, 2011). In this way, we can align their decisions with the goals of the outside party. The perspective of superrational collaborators elsewhere in the multiverse is similar, in that many things that are morally special to us are not special to them. Take nationalism and patriotism: many people assign particular moral value to the country they grew up in or to its citizens, with little support from impartial reasons 62. Needless to say, most superrational collaborators elsewhere in the multiverse will adopt a different perspective. If they care more about Japan than about the United States (or vice versa), it would be for specific impartial reasons. Making people care intrinsically less about particular nations thus aligns their values more with those of superrational collaborators elsewhere in the multiverse. Similarly, intrinsic preferences for members of one's race, species, or substrate are inconsistent with an outside view of someone from a completely different species with a different substrate. 62 Surely, there are some impartial reasons to like one country more than another.
For instance, Sweden is more tolerant of homosexuals than Iran, which is a reason to favor Sweden if one cares about the welfare of homosexuals. Nationalists often provide impartial reasons for favoring their country. For example, US nationalism is often about how the US is the country with the most freedom in the world. But if people really cared about such impartial reasons, the \"best country in the world\" would often not be their own country. Furthermore, nationalism often exaggerates the difference between countries in a way that seems inconsistent with an impartial point of view: sure, the US has a lot of freedom, but so do many other Western countries. If the US is better than everyone else along such dimensions at all, then surely not by a big margin. In any case, I am only talking about the kind of nationalism that is not based on impartial arguments. \n Which moral foundations? Given the criterion of universalism, what aspects of morality are worth spreading? As an illustrative classification of moral intuitions, we use Haidt's moral foundations theory, which divides morality up into five foundations: care/harm, fairness/cheating, loyalty/betrayal, authority/subversion and sanctity/degradation (see section 3.4.1). Liberals tend to care primarily about the first two aspects, whereas conservatives care about all five. Liberal values are universalist, while the exclusively conservative values are not. As J. Greene (2013, chapter 11, section \"Why I'm a liberal, and what it would take to change my mind\") writes (references added from the endnotes): According to Haidt, American social conservatives place greater value on respect for authority, and that's true in a sense. Social conservatives feel less comfortable slapping their fathers, even as a joke, and so on. But social conservatives do not respect authority in a general way. Rather, they have great respect for the authorities recognized by their tribe (from the Christian God to various religious and political leaders to parents). American social conservatives are not especially respectful of Barack Hussein Obama, whose status as a native-born American, and thus a legitimate president, they have persistently challenged. [...] Likewise, Republicans, as compared with Democrats and independents, have little respect for the authority of the United Nations, and a majority of Republicans say that a Muslim American with a position of authority in the U.S. government should not be trusted (Arab American Institute, 2014). In other words, social conservatives' respect for authority is deeply tribal, as is their concern for sanctity. (If the Prophet Muhammad is sacred to you, you shouldn't be in power.) Finally, and most transparently, American social conservatives' concern for loyalty is also tribal. They don't think that everyone should be loyal to their respective countries. If Iranians, for example, want to protest against their government, that is to be encouraged. In other words: authority, loyalty, and sanctity are all non-universalist values. While many people have values that structurally fit into these categories, the content (e. g., the referent) of these values differ. Applied to multiverse-wide superrational cooperation, this means that we cannot benefit the authority, loyalty, and sanctity values of other superrationalists unless we are in a society with the \"right\" authorities and sanctity rules. 
In fact, if we push for these three values in our tribe (or civilizations), it may actually be bad from the perspective of people with conservative values from other tribes. American social conservatives tend to dislike Islam and loyalty to its authorities, even more than American liberals do. Overall, this suggests that when it comes to multiverse-wide compromise, spreading values in the domains of authority, loyalty, and sanctity is not very fruitful. Instead, we should try to make people care more about the universalist liberal foundations. Having said this, there may be a few exceptions to the rule (cf. the last paragraph section 4.1.1). For example, Christian social conservatives may like parental authority even if one's parents are Muslims or extraterrestrials. In the sanctity domain, a preference for leaving nature untouched may extend beyond an agent's planet, although many extraterrestrial habitats are probably \"slimy\" and full of scary animals. Presumably, such reasoning is also applicable to other moral values. For instance, some people care about the traditions of other tribes, including their art, social institutions, laws, religions and other non-universal aspects. It should also be noted that aspects of the liberal value of fairness also vary strongly between different people. For example, a progressive may see wealth inequalities as unfair, while a libertarian finds wealth redistribution unfair. Thus, supporting one conception of fairness can hurt another. That said, there are many sorts of unfairness that almost everyone recognizes as bad. Another reason to focus on the liberal aspects of morality is that potential superrationalists on Earth are rarely conservative (see section 3.4.1). That said, future societal transitions might make people more conservative (see the companion paper Values and acausal reasoning of whole brain emulations). \n Concern for benevolence We have seen provisional research indicating that, when it comes to distant societies, humans mainly care about the benevolence, warmth, and moral behavior of its inhabitants (see section 3.4.1). If these tentative findings turn out to be correct and other evolved species resemble ours in this regard, we should try to align people's near values more with these (typically far) goals. However, given the tentativeness of said research, I do not think this should significantly affect our actions at present. \n Consequentialism Even though superrationalists elsewhere in the multiverse may care most about whether we behave in a non-consequentialist but broadly ethical way, they do so in a consequentialist way (see section 3.2.1). For example, they might care about the numbers of crimes and selfless acts, or total amounts of happiness and suffering in a given population. This stands in contrast to the preferences revealed by most people's charitable efforts: most money is donated to charities that are comparably ineffective, i. e. ones that do not achieve the best possible consequences. By making people more consequentialist, we can improve their resource use from the perspective of consequentialist third parties. This suggests that we should spread consequentialist ideologies like effective altruism, potentially independently of any particular optimization target (such as injustice, suffering, happiness, or knowledge). \n Pluralism Whereas the compromise utility function incorporates a plethora of concerns, most individuals' values are much more narrow. 
This is especially true among people who give morality some thought. For example, some people adopt utilitarianism, while others become proponents of Kant's categorical imperative. 63 As I am primarily a utilitarian, I sympathize with adopting a single ethical view (and utilitarianism in particular). From an MSR perspective, on the other hand, this misses out on gains from compromise between these opposing value systems, and it would be better if everyone adopted a mix of different values instead. Thus, we may want to promote moral pluralism. One version of this view is MacAskill's (2014) moral uncertainty. Operating under the assumption of moral realism (which I reject), he argues that we should be uncertain about which ethical system is correct, and how we should act given this uncertainty. Another related view is the normative reading of Yudkowsky's complexity of value (cf. Stewart-Williams (2015), section "Morality Is a Mess"; Muehlhauser and Helm, 2012, chapters 3-5.3), according to which what humans care about cannot be captured by a simple moral system and instead incorporates a large number of different values.

Promoting moral reflection

Wanting more idealized and reflected-upon values to be implemented is probably much more common in the multiverse than wanting less idealized values to be implemented. 64 This is especially the case for agents who have not yet settled on a moral view. For example, I am genuinely uncertain about what I would or should count as morally relevant suffering when it comes to small minds (such as those of insects) and the like, just as I am not sure how to deal with infinities in ethics. I could thus benefit a lot if someone were to make more people think about these problems. Interestingly, the appeal of promoting moral reflection decreases upon idealization. Most people probably endorse moral discourse, the importance of reflection and argument, etc., in part because they think their moral view will result from that process - if they did not believe they had the arguments on their side, they might not hold their moral position in the first place. However, not everyone can be right about this at the same time. If someone only cares about preference idealization because she thinks that her value system will win, then preference idealization may remove that meta-preference. Beyond the question of whether evolved agents in the multiverse care about moral discourse, we must ask an empirical question about our own universe: will moral discourse bring people's object-level positions closer to those of our multiverse-wide superrational compromise utility function? For example, does moral discourse make people care more about, say, benevolence, assuming this really turns out to characterize much of evolved agents' far values (see section 3.4.1)? Perhaps moral reflection can also have negative consequences, such as attitude polarization (Lord, Ross, and Lepper, 1979; Taber and Lodge, 2006). These questions appear suitable for further research.

63 [...] that utilitarianism cannot make sense of (Nathanson, n.d., section 3.b.i). For example, they argue that utilitarianism is not (always) consistent with moral intuitions about equality (Pogge, 1995; Gosepath, 2011), the wrongness of killing (Henson, 1971), and justice (Smart and B. Williams, 1973, part 1, chapter 10).

64 The main data point is that humans think about morality and engage with others' moral views. The evolutionary psychology and cultural evolution perspectives, on the other hand, are non-obvious.
Some moral arguments may be favored by cultural group selection, others may allow intelligent individuals to get their way more often. On the other hand, individuals who change their moral views may be perceived as unreliable or disloyal. Besides promoting societal discourse on ethical questions, one intervention in this domain is the use of preference idealization in artificial intelligence value loading (see section 4.6).

Multiverse-wide preference utilitarianism

In addition to spreading MSR itself, one could also spread value systems that in some way mimic its implications. Specifically, the proposed neutral aggregated utility compromise is essentially a form of preference utilitarianism or multiverse-wide preference utilitarianism. Multiverse-wide preference utilitarianism might therefore be a promising moral view to advocate on the basis of multiverse-wide superrational compromise. Of course, spreading a proxy for MSR has some general disadvantages. Most importantly, it is not very robust. If multiverse-wide preference utilitarians come to prioritize very differently from multiverse-wide superrationalists, then spreading preference utilitarianism would not yield much in our favor. The question nonetheless deserves some thought. After all, if there is a significant chance that multiverse-wide preference utilitarianism approximates the conclusions of MSR, then we should at least be on the lookout for very cheap ways of promoting it. One main difference between preference utilitarianism and superrational cooperation - whether in the form of aggregated utility compromise or otherwise - is that the latter only takes the values of other superrationalists in the multiverse into account (see section 2.9.4). Preference utilitarianism, on the other hand, accounts for the preferences of a much broader set of agents, such as all sentient beings, all agents that have preferences of any sort, or all agents who satisfy some other criteria for personhood. This may mean that preference utilitarians arrive at very different conclusions than MSR proponents. For example, if they take small minds into account, these may well dominate preference aggregation. If, on the other hand, they only take members of human-like species into account, then the difference between these and superrationalist preferences may be much smaller. Another difference could be the way interpersonal comparison of utility is handled (cf. section 2.8.5). In the context of compromise, an individual's interests are usually given weight in proportion to the individual's power. So, for example, the interests of a superrational billionaire receive orders of magnitude more weight than the interests of a superrational beggar. However, most would view this approach as unethical, and most preference utilitarians would disagree with it. Thus, multiverse-wide preference utilitarianism gives more weight to the moral views of the poor than MSR suggests. Yet another problem could be that preference utilitarians would not arrive at the more meta-level MSR interventions. Even if MSR and multiverse-wide preference utilitarianism had the same object-level implications, the justification for MSR is different from (non-MSR) justifications for preference utilitarianism. Thus, preference utilitarians would not support or even come up with interventions that are about spreading the MSR-based justifications for MSR's and preference utilitarianism's joint conclusions.
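Returning to the interpersonal-comparison point above: as a toy illustration of how the choice of weights changes the compromise, consider the following sketch. All names and numbers are made up, and the zero-mean/unit-variance normalization is only a simple stand-in for the variance-based normalization discussed in section 2.8.5; it is not a definitive implementation of either aggregation scheme.

# Illustrative sketch (not from the main text): aggregating two toy utility
# functions into a compromise utility function, contrasting power-proportional
# weights (as in bargaining-based compromise) with the equal weights a simple
# preference utilitarian might use. All numbers are invented.

import numpy as np

outcomes = ["status quo", "redistribution", "luxury spending"]

u_billionaire = np.array([0.0, -4.0, 10.0])
u_beggar = np.array([0.0, 10.0, -10.0])

def normalize(u):
    # Rescale to zero mean and unit variance across outcomes (a crude stand-in
    # for variance normalization / "variance voting").
    return (u - u.mean()) / u.std()

def compromise(utilities, weights):
    # Weighted sum of normalized utility functions.
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * normalize(u) for w, u in zip(weights, utilities))

# Power-proportional weights (the billionaire controls far more resources) ...
u_power = compromise([u_billionaire, u_beggar], weights=[1000.0, 1.0])
# ... versus equal weights per individual.
u_equal = compromise([u_billionaire, u_beggar], weights=[1.0, 1.0])

print("power-weighted favorite:", outcomes[int(np.argmax(u_power))])   # luxury spending
print("equally weighted favorite:", outcomes[int(np.argmax(u_equal))]) # redistribution

With these toy numbers, the power-weighted compromise tracks the billionaire's preferences while the equally weighted aggregation favors the beggar's, which is the divergence described in the paragraph above.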
To continue the point about meta-level interventions: a preference utilitarian (who does not agree with MSR) would not spread the MSR idea itself, nor try to ensure that future people (and AIs, see section 4.6.3) reason correctly about decision theory. Because these are plausibly among the most promising interventions, this consideration suggests some significant divergence in priorities. In sum, it is unclear to what extent multiverse-wide preference utilitarianism could approximate a superrational compromise utility function. At this point, however, spreading multiverse-wide preference utilitarianism is unlikely to be a top priority.

No multiverse-wide tug-of-war over values

Value systems can be viewed as having several dimensions, like the relative importance of welfare, population size, art, knowledge, justice, compassion and freedom, tradeoffs between suffering and happiness, tradeoffs between extreme happiness/suffering and mild happiness/suffering, and severity of punishments, to name but a few. Different groups in the multiverse invest resources into pulling the relative values of these dimensions in different directions. Some may want people to care more about suffering, while others want them to care more about nature or happiness instead. Now, imagine you care more about suffering than most others and that you live in a civilization with a merely average concern for suffering. Presumably, you would want to pull the "concern for suffering rope" in your direction, potentially at the cost of other values. But given superrationality, pulling the rope would make it more likely that those who care less than average about suffering also pull the rope in their direction elsewhere in the multiverse, thus offsetting your impact. Therefore, MSR would recommend against shifting concern away from other superrationalists' values, e. g., nature or happiness, to suffering. It should be noted that the above does not (necessarily) apply if the values of your civilization strongly diverge from the superrationalist average far values. In such cases, it may be somewhat beneficial if all superrationalists pull the values of their civilization toward the average.

Promoting causal cooperation

Imagine two value systems, each of them common throughout the multiverse, engaged in conflicts with one another on Earth. Let us also assume that most people with these value systems find ideas like acausal decision theory and the multiverse highly speculative, such that we cannot convince them to cooperate on an MSR basis. In this case, we can still cooperate superrationally with others in the multiverse by promoting causal cooperation between the two sides (provided this does not end up hurting some third superrational party of agents 65).

65 As a non-obvious example, consider global catastrophic risks. Presumably, most people would not want humanity to experience a global catastrophe. Promoting peace and cooperation between nuclear powers is thus positive for all nuclear powers involved. In the plausible event that humanity would survive a nuclear winter and quickly recover, however, post-apocalyptic human society may come to hold different moral views that conflict with the views of current nuclear powers. For instance, it may be that in the first months after a global catastrophe, there would be frequent violence and chaos among survivors. They may also be forced to exert violence themselves to survive. Thus, the survivors may be desensitized to violence.
Even after civil order is reestablished, citizens may still be relatively unconcerned about violence towards animals, criminals, the weak and poor, etc. (Note that I am not claiming that this would necessarily be the case; indeed, personal hardships can also make people more compassionate. I am merely using it as a somewhat plausible scenario to illustrate the present point.) All of this would imply that mitigating global catastrophic risks on Earth ends up hurting agents in the multiverse who would like societies to be organized according to post-apocalyptic survivor values. If agents with such values are sufficiently common in the multiverse, then causal cooperation between nuclear powers should actually be sabotaged! That said, I do not find this conclusion all that plausible. It rests on more assumptions, and less likely ones, than other action-guiding arguments, and so it is much more fragile. One specific problem is that I would expect humans (and most other evolved beings) to become more tribal in response to a global catastrophe (Henrich, 2015, chapter 11, section "War, External Threats, and Norm Adherence"), which may make these values less important (see section 4.2.1).

For example, let us assume that the payoff matrix of their interaction is that of a prisoner's dilemma given in table 2. Let us assume that both players' utility functions are equally common in the multiverse. We also assume that other value systems have no interest in the outcome of the interaction. From the perspective of a third party who accepts MSR, the effective payoff matrix for this interaction may look like the one given in table 3. That is, when such a third party can influence the outcome of the interaction between player 1 and player 2, she acts as though she maximizes the utilities given in that table, even if she intrinsically cares about something entirely different. When such an agent is able to influence at least one of the players, she will lobby him to choose C, 66 because to her, the payoffs are proportional to the number of C's that are chosen. 67 A disinterested non-superrational third party, on the other hand - i. e. one who does not care about the payoffs of either of the two agents intrinsically - would assign no value to any of the four outcomes, nor would they invest any resources in bringing about a particular outcome. Next, let us assume that, rather than some third party, player 1 himself learns about and adopts multiverse-wide superrational cooperation, while player 2 stays ignorant of the idea. The new effective payoff matrix may then look like table 4. Player 2's payoffs are the same as in the original prisoner's dilemma, but player 1's effective payoffs have changed. He now maximizes the sum of the two value systems' payoffs, because player 1's and player 2's utility functions are equally common in the multiverse. This puts player 1 in a peculiar situation: whereas defection is the dominant strategy in the original prisoner's dilemma (and therefore still the dominant strategy for player 2), cooperation dominates in this new version.

66 For ideas on promoting cooperation from the outside, see Tomasik's Possible Ways to Promote Compromise, as well as Axelrod (2006, chapter 7).

67 Note that in some prisoner's dilemma-like problems, mutual defection is overall better than unreciprocated cooperation, in which case the superrationalist's job is more difficult. If she convinces one player of cooperation but fails to convince the other one, she will have done more harm than good.
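To make the modified payoff structure concrete, here is a minimal sketch with made-up payoff numbers (the paper's tables 2 to 4 are not reproduced here); it merely checks which action, if any, is strictly dominant for each player in the original game and in the variant where player 1 maximizes the sum of both players' payoffs.

# Illustrative sketch with invented payoffs. Payoffs are (player 1, player 2)
# for a standard prisoner's dilemma; "C" = cooperate, "D" = defect.

base = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def dominant(payoffs, player):
    """Return a strictly dominant action for `player` (1 or 2), or None."""
    actions = ("C", "D")

    def pay(own, other):
        key = (own, other) if player == 1 else (other, own)
        return payoffs[key][player - 1]

    for a in actions:
        alternatives = [alt for alt in actions if alt != a]
        if all(pay(a, b) > pay(alt, b) for b in actions for alt in alternatives):
            return a
    return None

# If player 1 adopts MSR and both utility functions are equally common in the
# multiverse, his effective payoff becomes the sum of both players' payoffs,
# while player 2's payoff is unchanged.
msr_player1 = {k: (v[0] + v[1], v[1]) for k, v in base.items()}

print(dominant(base, 1), dominant(base, 2))                # D D (original game)
print(dominant(msr_player1, 1), dominant(msr_player1, 2))  # C D (modified game)

With these particular numbers, unreciprocated cooperation is also better for player 1 than mutual defection; as footnote 67 notes, that need not hold in general.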
Player 1 would thus cooperate in a one-shot version of the problem. On Earth, however, most interactions are repeated, like an iterated prisoner's dilemma. At first glance, one may suspect that player 1 would still cooperate in every round given that, no matter what the opponent on Earth does, he will want to make it more likely that agents elsewhere in the multiverse behave in a similarly cooperative way. However, such a strategy of unconditional cooperation makes defection player 2's best strategy. This is suboptimal for player 1, given that he prefers mutual cooperation (C,C) over unilateral cooperation (C,D). In an iterated version of the game, player 1 might therefore punish defection to some extent, similar to how successful strategies punish defection in the iterated prisoner's dilemma. Nevertheless, the dynamics of this new problem are different from those of the prisoner's dilemma. Based on the ordering of the outcomes for the different players, the game is identified as g261 or g266 in the periodic table for 2x2 games by Robinson and Goforth (2005), who also provide a few examples of games in this category. A few additional examples of this type of game exist, but overall, the game has not been studied extensively in the literature. Further research is thus needed to identify the right strategy for iterated versions of the game.

Increasing capabilities

Broadly speaking, agents have two reasons to increase other agents' capabilities: a) they may care about it intrinsically, or b) they may share the goals of the people whose capabilities they increase (and thus care about increasing them instrumentally). For example, someone who mostly cares about other people's freedom to pursue their goals has a type a) reason to raise the capabilities of poor people, and someone who agrees more with the people than with the dictator has a type b) reason to increase democracy. But if you hold less common values, such as reducing animal suffering, giving people more power is of unclear value. MSR broadens type b) motives to increase others' capabilities: even if we do not share someone else's goal, we have reason to increase his capabilities if we believe that significant parts of his goals are shared by superrational agents elsewhere in the multiverse. There is some relevant literature on increasing an agent's goal-achievement capabilities. In economics, the capability approach is an alternative to welfare economics and primarily studies how to measure an individual's capabilities. Some of its metrics include health, freedom of thought and expression, education, political participation, and property rights. In his dissertation on Ethics Under Moral Neutrality, Evan Gregg Williams (2011) discusses a topic closely related to acting under MSR: he assumes that we do not know what the "correct" moral theory is, 68 and that while we all have some access to moral truth, this access is unreliable. He then discusses, among other things, what policies we should adopt given such uncertainty. In many ways, this scenario is analogous to MSR, 69 where the necessity to maximize for multiple moral views comes from uncertainty about the utility functions of other agents in the multiverse, as well as from their diversity, rather than from conflicting intuitions about the "moral truth". Many of Williams' conclusions resemble those of the present paper. For instance, he identifies the appeal of preference utilitarianism in chapter 3.1 of the dissertation (compare sections 2.8 and 4.2.6 of the present paper).
Many of his intervention ideas are about improving the capabilities of others who may plausibly have access to the moral truth. First and foremost, he defends democracy (chapter 3.3) and liberty (chapter 3.4). Of course, MSR does not have the same implications as the above approaches. For one, when we raise others' capabilities as superrationalists, we favor people whose values we suspect to be typical of what superrationalists in the multiverse care about. For example, from an MSR perspective it is much more important to support consequentialists. Moreover, some of the proposed measures merely move resources or power from one group to another (e. g., from a dictator to the people) without adding optimization power aimed at the goals of superrational agents in the multiverse. I doubt that raising capabilities will often be a top intervention. Nonetheless, it might be an option when good and inexpensive opportunities, such as sharing knowledge, arise.

Meta-activities

Relative to any goal, meta-activities are either about a) amassing more resources, or b) improving the efficiency of one's object-level resource expenditure. To achieve the goals that superrationality prescribes, we may thus also engage in such meta-activities. In the following, I will describe two meta-activities, one of each kind.

Research

The present paper lays out the foundations for research on multiverse-wide superrational cooperation. Further research is needed in all three areas discussed in this paper, i. e. how our new criterion for choosing policies is to be constructed (see chapter 2), what values our superrational collaborators have (see chapter 3), and which interventions are most promising (chapter 4). Note that some research, e. g. investigations of whether a compromise is beneficial for you, can (in theory) be harmful if one has not properly precommitted, as illustrated in the Remote-controlled cake maker thought experiment (see section 2.8.6). A similar danger lies in finding out whether other agents cooperate (see section 2.1).

68 I side with moral anti-realism (R. Joyce, 2016) and non-cognitivism in particular (R. Joyce, 2016, section 3). That is, I do not think that moral theories can have (objective) truth values.

69 In fact, I learned about variance voting, which I take to be the most promising approach to constructing the compromise utility function (see section 2.8.5), via the literature on moral uncertainty, in particular via MacAskill (2014, chapter 3).

Promoting multiverse-wide superrationality

Since multiverse-wide superrational cooperation produces gains from compromise (under certain assumptions about the collaborators, discussed in section 3.2), having more multiverse-wide superrational cooperation produces more gains from compromise. Hence, a common interest of all collaborators is to increase the number of people who adopt (multiverse-wide) superrational cooperation. Indeed, it is plausible that small groups of superrationalists should focus on promoting the idea rather than attempting to help other superrationalists directly. After all, if one of them can convince only two others to cooperate superrationally, she already doubles her impact relative to cooperating on her own. Of course, the two others could also convince others in turn. Needless to say, spreading the idea saturates at some point. At least when all humans are convinced of superrational cooperation, the idea cannot be spread further.
More realistically, we will run out of people who are willing to think about such seemingly speculative topics.

Artificial intelligence

One particularly important way of shaping the future is artificial intelligence (Bostrom, 2014b). Given our newfound knowledge, we can differentiate between AI safety measures that are inspired by superrational cooperation and AI safety measures that are not.

AI safety not based on superrationality-related considerations

The goal of current AI safety research is to make AIs behave in ways that are more compatible with some human value system. 70 From a multiverse-wide cooperation perspective, this is positive to the extent that human values correlate with the values of other evolved agents in the multiverse. A human-controlled, non-superrational outcome may nonetheless be suboptimal from an MSR perspective. Imagine a distant civilization of billions of happy, law-abiding, art-producing, yet, from a human perspective, ugly-looking extraterrestrials. Each year, they enslave or kill trillions of other, less intelligent extraterrestrials, such that the number of miserable lives and involuntary deaths caused by the civilization is orders of magnitude higher than the number of positive lives it supports. Most people on Earth may not care about this civilization at all because it contains no humans. Some may only care about the smart extraterrestrials and thus evaluate the society very positively (Kagan, 2016). However, I suspect that many of those who care at all about distant ugly aliens also care about less intelligent aliens. These people would evaluate the civilization as far less positive. Similarly, many superrationalists in the multiverse may not evaluate our civilization positively if it were to continue its current mistreatment of animals. Another concern is that a civilization might prioritize near-view values when value loading an AI. This suggests that even if our values resembled those of other civilizations, the goals we give to an AI might differ significantly from what extraterrestrials care about in our civilization. FRI has previously investigated ways of making AI alignment failures less harmful by focusing on avoiding very bad AI outcomes rather than attempting more fine-grained control (Gloor, 2016). One motivation to do so is this approach's cooperativeness: different value systems may disagree on what future should be created. For example, some want the universe to be filled with concentrated pleasure, whereas others envision human civilizations of varying social, economic and political systems, often rid of poverty, diseases, involuntary death, and so forth. However, different value systems often agree on a large set of futures that should not be created. Things like premature death, suffering, war, and extinction are almost universally seen as bad. Avoiding dystopian scenarios can thus benefit a wider range of value systems. Another MSR-based reason to focus on very bad outcomes is that, because our civilization will be destroyed in all of them, avoiding them evokes abstract construals. These probably do a better job than concrete construals at approximating what extraterrestrials care about in our civilization (cf. section 3.3.2). However, making AI more fail-safe from an MSR perspective would be less focused on preventing outcomes with a lot of suffering than FRI's previous work. Also, its level of priority depends on its feasibility.
Whereas heuristic arguments suggest that merely avoiding bad outcomes might be more feasible than working toward fully human-aligned AI, it has so far proven difficult to do any concrete work in the area. Overall, I think it is an approach worth investigating further in the context of superrational compromise, but not likely to be a top intervention.

Multiverse-wide superrationality-inspired value-loading

In section 2.8.2, we viewed compromising as a one-time process, in which all agents adopt a new utility function u* to maximize in their part of the multiverse. If they indeed acted as though they now only cared about maximizing u*, the natural consequence would be to push for AI values that are closer to u*. One way to do this is to directly implement the value systems that one would also spread to other humans (discussed in section 4.2). For example, one could try to make future AIs hold a wider variety of values (see section 4.2.4) or perhaps prioritize universal concerns a bit more (see section 4.2.1). More robustly, one could directly implement a pointer to the aggregated consequentialist far values of superrationalists in the multiverse. Indeed, extracting u* from the multiverse appears to be roughly as difficult to specify as extracting the goals of humans. Just as one could identify humans in the world model, extract their goals and aggregate them, so one could identify superrational cooperators, extract their goals and aggregate them. 71 (A somewhat similar proposal was made by Bostrom (2014a, page 14); see section 6.1.2.) Of course, it is unlikely that superrationalists could convince the majority of people of such goal systems. Nonetheless, at this early stage of the field of AI safety, it seems useful to also explore unrealistic proposals like this one. Additionally, less attractive goal functions may still be relevant as backups (see Oesterheld 2016). Another disadvantage of this approach is that it breaks if the analysis underlying our specification of u* is incorrect. For instance, if MSR does not work at all, then making AI care about extraterrestrials' values directly is much worse than simply implementing our own values.

Making an AI come up with superrational cooperation on its own

Instead of directly implementing our compromise utility function, we could also make the AI come up with such a compromise on its own. This has several advantages. Most importantly, it protects against some possible mistakes on our side. If, say, we were unable to find the correct superrational compromise, we could let the AI find it on its own. Also, the AI may at some point discover that there are no other agents in the multiverse after all, at which point it could choose to stop wasting further resources on compromising with these nonexistent agents. The primary way of getting an AI to compromise superrationally is to ensure that it reasons in accordance with the right decision theory. 72, 73 This in turn involves advancing the field of decision theory and investigating possible ways of implementing decision theories in AI systems. Given that both of these areas seem neglected and that gains from trade may be quite significant, I could very well imagine that interventions in this area are among the most effective of those hitherto considered by effective altruists.

Value loading is still necessary

If all acausal collaborators settle on maximizing some utility function, perhaps value loading is unnecessary for AIs with the right decision theories anyway?
After all, once such an AI joins the MSR compromise, it will update its utility function accordingly - regardless of whether it originally wants to maximize paperclips or to reduce suffering. But this reasoning seems unsound. While all AIs may settle on the same compromise utility function, the original value system of the AI still affects what that compromise utility function ends up being. Without superrationality, value loading affects the dominant values of one AI. If there are m superrationalist civilizations, then each can affect the dominating values in m AIs by 1/m (assuming that all civilizations are equally powerful, etc.). So, proper value loading is actually just as effective as before (influencing m AIs by 1/m each amounts to roughly the same total influence as fully determining the values of a single AI), if not more effective because of gains from trade. Even if we manage to reliably make the AI join a superrational compromise, we will still want to make it value the right things. I am uncertain about whether some version of the above argument against value loading may work after all. Even if all AIs have "paperclipper values", perhaps they would still recognize that other value systems originally had all the power, causing the AIs to give them higher compromise weights? Similarly, one may have some intuitions that value loading superrational AIs should not be necessary, given that it just moves power between superrational cooperators. However, at this point, these are merely intuitions and not arguments. Apart from potentially guiding future research, I do not think they should affect our priorities.

71 [...] normalization) is more or less difficult to implement than the aggregation procedures one would implement for humans.

72 Reasoning in accordance with some decision theory is not meant to imply that the decision theory is hard-coded into the AI. Instead, the decision theory that an AI uses may be the result of particular choices of architecture. To ensure that the AI reasons in accordance with the right decision theory, we would then have to find out what the decision-theoretical implications of different AI design choices are and ensure that these receive due consideration in the construction of intelligent machines.

73 There are other ways to make it more likely that the AI applies MSR. For example, one could ensure that its epistemology enables it to infer the existence of other universes that cannot be observed directly. We could also think of an AI that would accept MSR, but somehow never has the idea of MSR. Much more plausibly, some AIs will simply not care about distant universes in a consequentialist way. However, all of these parameters seem more difficult to influence than the AI's decision theory.

Compromise-friendly backup utility functions

Even though value loading is still necessary, we can nonetheless benefit our superrational collaborators (and thereby ourselves) in cases where value loading fails. Even if an AI has values that differ from those of humans, it may still trade with other civilizations. Hence, we should attempt to load it with values that especially lend themselves to compromise, such that the other value systems benefit as much as possible (cf. Bostrom, 2014a). Because one would usually attempt to load an AI with one's own values, such a compromise-friendly ("porous", in Bostrom's terminology) utility function would usually only be a backup (see Oesterheld 2016).

Acknowledgements

I came up with superrational compromise after a conversation with Lukas Gloor about decision theory and the multiverse.
Prior to writing this paper, I extensively discussed the topic with him, Carl Shulman, and Brian Tomasik. I also thank Max Daniel, Tobias Baumann, Carl Shulman, David Althaus, Lukas Gloor, Kaj Sotala, Jonas Vollmer, Johannes Treutlein, Lucius Caviola, Joshua Fox, Jens Jaeger, Ruairí Donnelly, Brian Tomasik, Owen Cotton-Barratt, Magnus Vinding and Dominik Peters for valuable discussions and comments on this paper. Last but not least, I am indebted to Adrian Rorheim for careful copy editing and Alfredo Parra for typesetting.

Appendix

The appendix contains discussion of additional, more tangential topics.

Related work

Gary Drescher on superrationality

Superrationality, i. e. cooperation based on correlation, is a well-known idea in decision theory (Kuhn, 2017, section 7; Horgan, 1981, section X; Hofstadter, 1983; Campbell and Sowden, 1985; Ahmed, 2014, section 4.6 and references therein). However, most authors do not discuss much beyond the basic idea. Chapter 7.2 of Gary Drescher's Good and Real (2006) is the most extensive analysis of the concept of which I am aware. Among other things, Drescher notes that superrationality - or, as he calls it, subjunctive reciprocity - can be applied broadly as a justification for "altruistic" behavior, which I discuss in section 6.7. He also points out that superrationality removes the need for reciprocity (see section 2.9). Although Drescher discusses the Everett interpretation of quantum physics in his book, he does not connect it with superrationality. His considerations thus focus on superrationality among agents on Earth, which I would argue to be quite weak (see section 6.6). Nonetheless, his account of superrationality is more thorough than any other I have seen, and strongly influenced chapter 2 of this paper.

Acausal trade

Acausal trade is another (mostly informally discussed) form of cooperation based on non-causal decision theories and has often been combined with the multiverse concept. However, the mechanism usually discussed under the term acausal trade differs from superrationality. Instead of assuming the similarity between two agents, acausal trade merely requires them to have models of each other. For example, the two agents may know each other's source code. 74 The main technical difficulty here is to avoid the infinite loop associated with this mutual modeling. The basic idea is that both agents adopt the policy of cooperating if and only if the other agent cooperates. 75 This is intended to incentivize cooperation in a way reminiscent of causal cooperation via tit for tat. One can also view this policy of mirroring the other agent's strategy as a way to create correlations between the decisions of the two agents. However, if both agents use this policy, they run into an infinite loop: To make a decision, the first agent has to find out (probabilistically) what the second agent does. But to do so, it has to find out what the first agent does, which in turn means finding out what the second agent does, etc. As illustrated by Barasz et al. (2014), this problem can sometimes be solved, thus making it rational for two programs with knowledge of one another's source code to cooperate with each other (cf. LaVictoire et al., 2014; Critch, 2016). Superrationality may be seen as a special case of acausal trade in which the agents' knowledge implies the correlation directly, thus avoiding the need for explicit mutual modeling and the complications associated with it.
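To illustrate the mutual-modeling loop and why it has to be cut off somewhere, here is a minimal sketch. It is not the proof-based construction of Barasz et al. (2014); it is just a naive, depth-limited simulation whose behavior depends heavily on the fallback action chosen at the bottom of the recursion, and all agent names are invented for illustration.

# Illustrative sketch only: "cooperate iff I predict the other cooperates"
# agents that model each other by simulation. Unbounded mutual simulation never
# terminates, so each call carries a depth budget and falls back to a default
# action when the budget is exhausted.

def make_conditional_cooperator(default):
    def agent(other, depth):
        if depth == 0:
            return default                    # recursion budget exhausted
        prediction = other(agent, depth - 1)  # model the opponent modelling us
        return "C" if prediction == "C" else "D"
    return agent

def defect_bot(other, depth):
    return "D"

pessimist = make_conditional_cooperator("D")
optimist = make_conditional_cooperator("C")

print(pessimist(pessimist, 10), pessimist(pessimist, 10))  # D D: the default poisons everything
print(optimist(optimist, 10), optimist(optimist, 10))      # C C: mutual cooperation
print(optimist(defect_bot, 10))                            # D: not exploited by a defector

The fragility of the fallback choice is one face of the infinite-loop problem; the proof-based constructions cited above avoid it by reasoning about the other program's source code rather than simulating it outright.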
Because the correlation is given directly, superrationality is much easier to apply than acausal trade. Consequently, whereas I propose that humans should reason superrationally, acausal trade is usually discussed only in the context of superintelligent AIs (e. g., Bostrom, 2014a).

74 Alternatively, one of the two agents can observe the other's behavior. In this case, only the other agent needs a model.

75 Of course, it would be even better if one could defect against agents that cooperate unconditionally.

Various mentions of multiverse-wide superrationality

While I am not aware of any substantive discussion of MSR, some have mentioned it as a side remark, or proposed specific applications:

• Bostrom writes: "We might [...] hope that some of the other civilizations building AIs would [also implement their AI in a way that enables trade (see sections 4.6.3 and 4.6.3)], and perhaps the probability that they would do so would be increased if we decided to take such a cooperative path." (Bostrom, 2014a, page 4) On page 14, he also argues that one should perhaps diversify the values of an AI for a similar reason.

• Almond discusses a few examples of how we can utilize the correlation with other civilizations (Almond, 2010c, ch. 4). One of them is discussed in section 6.9.

Many agents

One essential ingredient of multiverse-wide superrationality is the number of intelligent agents that exist. We have, to some people's surprise, not (yet) found extraterrestrial life in the observable universe. However, the universe, or multiverse, probably extends far beyond the region we can observe. More likely than not, it contains so many agents that the number of humans on Earth pales in comparison. Unfortunately, physics and cosmology are not the most accessible of fields. Introductions tend to either involve advanced mathematical notation or fuzzy explanations with terms like "space-time distortions", "waves", space being referred to as "flat", dimensions as "curled up", etc., which seem hard to understand without looking up their technical meaning. For an overview of the latter kind, consider Tegmark's Parallel Universes (2003), which also discusses the number of intelligent agents specifically. Another, even broader popular science overview is given by Greene (2011). In this section, we focus on the easiest-to-understand aspects. As mentioned in chapter 1, we will use the term "multiverse" to also refer to, say, a spatially infinite universe. It is important to note that most talk about multiverses is not something physicists make up out of thin air as an intellectual exercise. Instead, certain well-tested theories in physics and cosmology seem to imply the existence of a large universe or multiverse. One of the easier to understand examples is the Everett or many-worlds interpretation (MWI) of quantum mechanics. For an introduction, consider Yudkowsky's (2015, ch. S), which makes a strong case for MWI and goes through some of the issues typically discussed, like falsifiability/testability and the law of parsimony (Tegmark and Wheeler, 2001; Tegmark, 2007; Vaidman, 2016). For a more critical account, see, e. g., Kent (1997). Tentative polls of physicists' opinions on MWI indicate that between 10% and 50% agree with MWI (Raub 1991, unpublished, as cited in, e. g., Tipler, 1994, section 5, "Nonrelativistic Quantum Mechanics is Deterministic"; Tegmark, 1997; Nielsen, 2004; Emerson and Laflamme, 2006).
But the many-worlds interpretation of quantum physics is not the only case that can be made for a universe with a very large or infinite number of agents. In fact, other arguments are probably more widely accepted. Maybe the least "extraordinary" hypothesis implying the existence of many agents is one which says that this universe is spatially infinite. According to Tegmark, "this spatially infinite cosmological model is in fact the simplest and most popular one on the market today" (2003). Even if the universe is spatially finite and small, it may still contain a lot of civilizations that cannot interact with each other if it is temporally infinite. For example, on a cyclic model, the universe goes through an indefinite number of oscillations of expansion and collapse. If sufficiently many of these oscillations give rise to different civilizations, then these civilizations can cooperate with each other superrationally. Another more complicated yet popular cosmological theory is eternal inflation as described in ch. II of Tegmark's Parallel Universes. Eternal inflation postulates the existence of multiple universes which not only differ in initial conditions but also in their number of dimensions, their sets of fundamental particles, and their physical constants. On the more speculative (but also more accessible) side, there are various forms of modal realism (sometimes also called mathematical monism), the view that every "possible world" exists in the same way in which our world exists. While modal realism is controversial and rarely discussed by physicists, some view it as an elegant solution to some philosophical problems. Modal realist theories are also very simple, although to make predictions with them, they require supplementation with indexical information about which agent in which possible world we are (Hutter, 2010, ch. 3). For different starting points for thinking about modal realism, see any of the following: Lewis (1986), Tegmark (1998; 2008), or Schmidhuber (1997). Acting under the assumption of modal realism is associated with some complications, however. In particular, because literally everything can happen, everything will happen in some possible world, no matter what we do. Thus, no action seems to be better than another (Oesterheld, 2017a). Besides the arguments in favor of assigning a high probability to living in a universe with many agents, there also exists a prudential reason to act as though one lives in a large universe. Even if we only assign, for example, a 50% probability to the existence of other civilizations, our decisions matter much more if there are more other agents with whom we are correlated. Thus, we should optimize our decisions more for the large universe. This line of reasoning does not work for all value systems, however. For example, in terms of multiverse-wide average welfare, our influence may be much bigger if the universe were very small. An average utilitarian may thus follow the opposite prudential argument and act as though the universe were small.

Testability of superrationality

Eliezer Yudkowsky (2010b, ch. 13) writes:

If a dispute boils down to a testable hypothesis about the consequences of actions, surely resolving the dispute should be easy! We need only test alternative actions, observe consequences, and see which probability assignment best matches reality. Unfortunately, evidential decision theory and causal decision theory are eternally unfalsifiable - and so is [timeless decision theory (TDT)].
The dispute centers on the consequences of logically impossible actions, counterfactual worlds where a deterministic computation returns an output it does not actually return. In evidential decision theory, causal decision theory, and TDT, the observed consequences of the action actually performed will confirm the prediction made for the performed action. The dispute is over the consequences of decisions not made.

This also means that superrationality itself - not only its application to agents in faraway parts of the multiverse - is untestable. If I win money by cooperating in a prisoner's dilemma against an exact copy of mine, causal decision theorists will point out that my copy would have cooperated either way and so defecting would have been better. Based on this anecdotal evidence, people do not reason superrationally in this real-world donation game, although they sometimes make the superrational choice for other reasons.

Do people reason superrationally?

In general, there are many hypotheses about why people sometimes cooperate that do not involve any sort of acausal reasoning. Presumably, many are either unaware of the causal line of reasoning or do not properly set up the proposed experiment in their mind. For instance, Yudkowsky (2015, chapter 275) argues that people cannot pretend to be selfish and therefore take the reward to the other player into account. Kanazawa and Fontaine (2013) demonstrate that "the subject's behavioral choice (cooperation vs. defection) varied significantly as a function of subconscious perception of cues to possible reputational effect (in the form of a video image of another subject in the experiment)." Cultural norms are also often invoked to explain cooperation. 76 This short list of example explanations is by no means an exhaustive review of the literature on why people cooperate in one-shot games like the prisoner's dilemma and public goods games. Drescher (2006a, page 288f) defends the opposite view. He argues that although people do not act according to some systematic acausal decision theory, they nevertheless implicitly take acausal reasoning into account. Similarly, Leslie writes, "perhaps the germs of [evidentialist reasoning] are already present in thoughts influential in getting people into polling booths, thoughts on the lines of 'What if everybody in my party stayed in bed?'" (Leslie, 1991, ch. 7). Perhaps this "lack of a correct explicit decision theory leaves the solution somewhat vulnerable to seemingly sound counterarguments, and thus leaves the solution's influence somewhat tentative" (Drescher, p. 289). This could explain why many people who have considered the problem in great detail do not go with the recommendation of acausal arguments despite potentially having an innate intuition for them. Recently, Fischer (2009) has proposed that people do engage in superrationality-like reasoning. In a study, he showed that participants' cooperation in a one-shot prisoner's dilemma correlated with reported probabilities of the opponent making the same choice as oneself (cf. Krueger, DiDonato, and Freestone, 2012). One further piece of evidence in favor of this hypothesis is that cooperation decreases when people learn about the other person's choice before they make their own choice. Pothos et al.
(2011) write:

Shafir and Tversky (1992; Busemeyer, Matthew, and Wang, 2006; Croson, 1999; Li and Taplin, 2002; Tversky and Shafir, 1992) created a well-known modification to the Prisoner's Dilemma game: in some trials, participants were told what the other player was doing. Unsurprisingly, when participants were told that the other person decided to D, then their probability to D was 97%; and when they were told that the other person decided to C, then their probability of D was 84%. However, in trials (within participants design) when participants were not told what the other person did, the probability to D dropped to 63%.

While inconsistent with mere causal reasoning, this can be explained with acausal reasoning. Given knowledge of the other person's decision, the evidential impact of cooperation diminishes (cf. section 2.1). Moreover, this behavior cannot be explained by reputational issues or altruistic preferences, which would, if anything, suggest that one would return the favor upon learning that the other person cooperated. However, the standard explanation attributes this behavior to people's irrationality. Overall, I lean towards the view that people do not have strong acausal intuitions in day-to-day scenarios, which means that people who do take such considerations seriously do not correlate strongly with the average person.

The evolution of superrationality

Even though superrationality is not testable in any given situation, it does produce actual benefits. This much is clear even to a causal decision theorist, who would thus self-modify to take some, though not all, acausal considerations into account (see section 2.3). For the same reasons, a causal decision theorist would also program an AI to take these considerations into account. Similarly, evolution favors agents that take some superrational considerations into account. For example, imagine a planet on which near copies of agents are created on a regular basis. They then interact with each other in cooperation and coordination games like the donation game. To facilitate evolution, copies are created in proportion to the payoffs in the cooperative games. On this planet, superrational agents - i. e. those who cooperate with close copies and other correlated agents, while defecting against uncorrelated agents - have an evolutionary advantage over CDT-based agents who always defect. They will, on average, receive higher payoffs and thus reproduce more successfully. Evolution can, therefore, in principle favor genes (and memes) that promote superrational reasoning. In some sense, the described planet resembles ours. On Earth, "near copies" of humans are created via reproduction and upbringing. Moreover, many have pointed out that scenarios paralleling the prisoner's dilemma and public goods games were common in our ancestral environment. In principle, such considerations also apply to the application of superrationality to cooperation with agents in other parts of the multiverse. That is, multiverse-wide evolution favors creatures who increase the genetic fitness of agents with similar decision algorithms elsewhere in the multiverse. In practice, however, I suspect that almost all creatures with at most human capabilities are unable to benefit any genomes other than those extant in their environments.
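The hypothetical planet just described can be turned into a toy simulation. The following sketch is purely illustrative: the payoff numbers, the copy-probability, and the reproduction rule are all made up, and "being correlated" is simplified to "playing against a copy of one's own type".

# Illustrative toy simulation of the hypothetical planet described above.
# Agents repeatedly play one-shot donation games; with probability P_COPY the
# opponent is a copy of the same type (a crude stand-in for "correlated"),
# otherwise it is a random member of the population. Reproduction is
# proportional to accumulated payoff. All numbers are invented.

import random

P_COPY = 0.5            # chance of meeting a copy of one's own type
B, C = 3.0, 1.0         # donation game: benefit to the recipient, cost to the donor
POP, GENERATIONS = 1000, 40

def action(agent_type, against_copy):
    # "superrational": cooperate with copies, defect against uncorrelated agents.
    # "cdt": always defect in the one-shot game.
    return "C" if (agent_type == "superrational" and against_copy) else "D"

def payoff(my_action, their_action):
    return (B if their_action == "C" else 0.0) - (C if my_action == "C" else 0.0)

population = ["superrational"] * (POP // 2) + ["cdt"] * (POP // 2)
for _ in range(GENERATIONS):
    fitness = {"superrational": 1e-9, "cdt": 1e-9}  # tiny floor avoids zero weights
    for agent_type in population:
        against_copy = random.random() < P_COPY
        opponent = agent_type if against_copy else random.choice(population)
        mine = action(agent_type, against_copy)
        theirs = action(opponent, against_copy)
        fitness[agent_type] += 5.0 + payoff(mine, theirs)  # baseline keeps fitness positive
    population = random.choices(
        ["superrational", "cdt"],
        weights=[fitness["superrational"], fitness["cdt"]],
        k=POP,
    )

print(population.count("superrational") / POP)  # tends toward 1 over the generations

Lowering P_COPY quickly erodes the conditional cooperators' advantage, which is one way to see why the case for superrationality among weakly correlated agents on Earth (next section) is weaker.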
Superrational cooperation on Earth

Some, e. g. Leslie (1991, ch. 8) and Nate Soares, have argued that superrationality and acausal decision theory are relevant even in daily interactions between humans on Earth without considering the multiverse. Drescher (2006a, ch. 7) even contends that it is an argument for egoists to behave altruistically. Others, like Almond (2010b, ch. 4.6; 2010c, ch. 1) or Ahmed (2014, ch. 4), maintain the opposite position, i. e. that acausal reasoning is rarely relevant. I will argue for the latter claim. Indeed, my belief that acausal cooperation is usually inapplicable is the reason why this paper discusses its application to the multiverse rather than more "down to Earth" scenarios.

Fewer agents

Superrationality becomes relevant in the multiverse because it contains so many disconnected agents. Thus, even if the correlation with every individual agent's decision is small, the overall acausal impact of our decisions dominates (see section 2.7). The smaller the number of agents, the higher the relative importance of the causal implications of our actions. Since the number of agents on Earth is comparatively small, causal considerations may well dominate.

6.6.2 Argument from evolution: Superrationality did not evolve (strongly)

We argued that superrational compromise can, under certain conditions, evolve by natural means and that many of the respective conditions are even met on Earth (see section 6.5). Hence, the mere observation that most people do not reason superrationally (see section 6.4) makes a case against its importance.

Causal cooperation seems more important

Humans rarely face one-shot prisoner's dilemmas against agents whom they know sufficiently well to be strongly correlated with them. Instead, their interactions are usually iterated and open to mutual causal influence. As a result, causal cooperation mechanisms apply, at least in principle (see section 2.9 for references to introductions on causal cooperation). Surveying the vast literature on causal cooperation and how it compares to superrational cooperation is beyond the scope of this paper, but two key points are worth highlighting. First, rational agents establish causal cooperation in a surprisingly wide range of situations. Second, successful strategies like tit-for-tat or Gradual (Beaufils, Delahaye, and Mathieu, 1997) tend to start the game by cooperating and never defect unless the other side starts defecting. Together, this suggests that sufficiently smart people - which, I assume, includes most agents who might apply superrationality - are capable of strong cooperation with one another without ever having to invoke superrationality.

Hard-wired alternatives

Superrationality is not the only solution to the adaptive challenge of having to cooperate with similar agents (e. g., members of the same tribe and relatives). One alternative is to hard-wire creatures to cooperate with very similar agents and defect against everyone else. This approach to ensuring cooperation has received some attention in the literature, although it is not nearly as widely known as the mechanisms of causal cooperation (see, e. g., McAfee, 1984; Howard, 1988; or Tennenholtz, 2004).

Superrationality and morality

Cooperation is often invoked as an argument for why altruistic behavior and following moral rules are rational (e. g., Dawkins, 1976, ch. 12; J. Greene, 2013). In many ways, the application of superrational cooperation resembles altruistic behavior even more closely.
For example, superrationality implies that we should help a value system even if we know for certain that no agent with this value system will or can reciprocate (see section 2.9). Additionally, in suggesting that we treat others the way they would like to be treated (in order to make it more likely that others treat us the way we would like to be treated), superrationality resembles Kant's categorical imperative and the Golden Rule. Once someone is updateless, she has additional reasons to be nice to others: even if she learns that they do not or will not cooperate, she would potentially still behave nicely toward them (see section 2.4). Similarly, if she were ever to find herself in a situation resembling the Remote-controlled cake maker thought experiment (see section 2.8.6), where she knows that cooperation hurts her goals, she might still make that sacrifice. Some implications of superrationality thus bear a close resemblance to altruistic or moral behavior. Drescher (2006a, ch. 7.2.1) makes similar points regarding the similarity between superrational cooperation and altruism. However, he goes further by arguing that superrational cooperation is the basis for morality - a way of "deriving ought from is". I will discuss two questions that might arise from this argument: is altruistic action derived from self-interest really the essence of morality or altruism? And: is superrationality sufficient for arriving at the desired altruistic conclusions?

6.7.1 Real altruism

Yudkowsky (2015, ch. 259) writes:

Consider the following, and ask which of these two philosophers is really the altruist, and which is really selfish?

"You should be selfish, because when people set out to improve society, they meddle in their neighbors' affairs and pass laws and seize control and make everyone unhappy. Take whichever job that pays the most money: the reason the job pays more is that the efficient market thinks it produces more value than its alternatives. Take a job that pays less, and you're second-guessing what the market thinks will benefit society most."

"You should be altruistic, because the world is an iterated Prisoner's Dilemma, and the strategy that fares best is Tit for Tat with initial cooperation. People don't like jerks. Nice guys really do finish first. Studies show that people who contribute to society and have a sense of meaning in their lives, are happier than people who don't; being selfish will only make you unhappy in the long run."

Blank out the recommendations of these two philosophers, and you can see that the first philosopher is using strictly prosocial criteria to justify his recommendations; to him, what validates an argument for selfishness is showing that selfishness benefits everyone. The second philosopher appeals to strictly individual and hedonic criteria; to him, what validates an argument for altruism is showing that altruism benefits him as an individual: higher social status or more intense feelings of pleasure. So which of these two is the actual altruist?

Yudkowsky elaborates in the rest of the chapter. The point he is making is that "actual altruism" is usually understood to mean caring about others, rather than merely behaving altruistically based on egoistic reasoning. Verbal disputes about the meaning of "true altruism" aside, there is a difference between having the welfare of others as part of one's goal on the one hand, and benefitting others for egoistic (or other non-altruistic or amoral) reasons on the other.
I am an altruist of the former kind, but cooperation (whether superrational or not) only supports altruism of the latter kind. I would think that most other people are also altruists of the former kind (in addition to sometimes being altruists of the latter kind). 77 Altruism of the latter kind also does not "derive ought from is", 78 as Drescher promises in chapter 7 of Good and Real. Instead, it derives (potentially unexpected) action recommendations from an already existing ought, i. e. egoism or whatever values an agent already has. Specifically, (multiverse-wide) superrational compromise can be viewed as agents switching to a new utility function, but only because it benefits their current utility function. There are many other examples of agents effectively adopting a new goal. Consider an egoist living in 16th-century Spain. Her environment punishes people who are not aligned with Catholicism. To further her goals, the egoist should therefore behave as though she were a Catholic with pure Catholic goals. She thus derives a new "morality" from purely egoistic goals, but I suspect that meta-ethicists' excitement about this is limited.

How much altruistic behavior does superrationality entail?

The second issue is that superrationality does not suffice for reproducing all of our moral intuitions. For one, I am not sure to what extent superrationality has a bearing on interactions with other people on Earth at all (see section 6.6). Furthermore, we saw that superrationality only warrants helping other superrational agents (see section 2.9.4). But our moral intuitions also regard other agents as morally relevant. As an example, consider Alice, a purely causal decision theorist who even defects in a prisoner's dilemma against her copy. Does this mean that Alice is morally irrelevant, no matter her degree of consciousness, capacity to suffer, etc.? Alice is not just a thought experiment - many philosophers would two-box in Newcomb's problem (see section 2.2). Since Newcomb's problem is roughly equivalent to the prisoner's dilemma against an identical copy (Lewis, 1979), this shows that most philosophers reject superrationality. Nevertheless, I and presumably most others care intrinsically about the welfare of these moral philosophers; the same is true for young children and non-human animals, most or all of which do not reason superrationally. Superrationality and what we would usually call "morality" thus disagree strongly on who is morally relevant (Drescher, 2006a, sections 7.2.2 and 7.2.3).

Multiverse-wide superrationality for causal decision theorists

Throughout this paper, I have assumed that some acausal decision theory is correct, albeit without narrowing it down to any particular theory. To me, this is no limitation of MSR, because I hold that causal decision theories fail in examples like the donation game with similarity. However, many professional philosophers are causal decision theorists (see section 2.2). Are the arguments presented in this paper entirely irrelevant to them? 79 Remember, from section 2.3, that CDT actually recognizes its flaw. Specifically, CDT self-modifies to cooperate acausally with copies that are created in the future. After all, these copies can be causally influenced to cooperate acausally to each other's benefit. Other humans and extraterrestrials in faraway parts of the multiverse do not fall into that category, of course - so causal decision theorists would not precommit to engage in full multiverse-wide superrational cooperation.
However, one multiverse theory is the Everett interpretation of quantum physics, according to which our universe constantly "splits" into different branches. Thus, under the Everett interpretation, near-copies of oneself are created all the time and in large quantities. Moreover, it pays in causal terms to cooperate across time, i.e. to commit me tomorrow and me in 30 years to cooperate. A causal decision theorist would therefore cooperate with a large number of agents created after CDT's precommitment. It thus seems as though a weaker version of the considerations from this paper applies to causal decision theorists after all.

Footnote 79: One obvious way in which the implications are relevant to causal decision theorists is decision-theoretical uncertainty (MacAskill, 2016). Perhaps even ardent defenders of CDT assign some probability to CDT being the wrong way to make decisions. I, at least, do not have a probability of 100% on a single decision theory being the right one. If you have some weight on some of the alternatives to causal decision theory, then you would also give MSR considerations some weight. In fact, it has been argued that if we live in a sufficiently large universe, then EDT and other non-causal decision theories immediately dominate expected value calculations that take decision-theoretical uncertainty into account.

Simulations

Paul Almond (2010c, ch. 2) has argued that correlations across the multiverse have implications for whether and how we should simulate other civilizations. The idea has also been proposed by others. It is mainly relevant for agents and civilizations who primarily care about copies of themselves, which is why it is not discussed in the main text.

If being in a simulation is bad, avoid creating one

Almond (2010c, section 4.2) writes:

If you take the simulation argument seriously, then evidential decision theory would seem to allow you to assert some control over the other civilizations that might be building these simulated realities. One way in which evidential decision theory would be relevant is in the way it allows you to control the probability that you are in a simulation in the first place. If your civilization decides to develop the capability to run simulated realities, then you are meta-causing [i.e. influencing acausally] civilizations in general to do likewise (including civilizations on which our own might be modeled), and making it less likely that almost all civilizations end before they are capable of producing simulated realities, in turn making it more likely that you are in a simulated reality. If, however, your civilization decides not to acquire this capability, then you are meta-causing civilizations in general to do likewise, making it less likely that you are in a simulated reality. Once your civilization has the capability to produce simulated realities, if your civilization decides to do it, this would make it more likely that other civilizations also do it, again making it more likely that you are in a simulated reality. On the other hand, if your civilization decides not to produce simulated realities, this makes it less likely that other civilizations would choose to do so, and therefore less likely that you are in a simulated reality yourself.

If you assume the view of anthropic decision theory (Armstrong, 2011) instead of classical anthropics (i.e., the self-sampling or self-indication assumption), then your decision can affect the fraction of copies of you that are in a given simulation.
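Almond's point can be made quantitative with a toy model. The following sketch is mine, not Almond's; the counts are invented, and it makes the strong assumption that our policy is (nearly) perfectly correlated with the policies of other civilizations, so that each option is evaluated as if almost everyone adopts it.

```python
def fraction_simulated(n_base, p_capable, sims_per_capable, policy_simulate):
    """Toy, EDT-flavored estimate of the fraction of civilization-histories
    that are simulated, conditional on our own policy.

    n_base:           number of 'basement' (unsimulated) civilizations
    p_capable:        fraction of them that ever become able to run simulations
    sims_per_capable: simulations each capable, willing civilization runs
    policy_simulate:  the policy we (and, by assumption, the others) adopt
    """
    capable = n_base * p_capable
    n_sims = capable * sims_per_capable if policy_simulate else 0
    return n_sims / (n_base + n_sims)

# 1000 basement civilizations, 10% reaching the capability,
# each willing civilization running 50 simulations:
print(fraction_simulated(1000, 0.1, 50, policy_simulate=True))   # ~0.83
print(fraction_simulated(1000, 0.1, 50, policy_simulate=False))  # 0.0
```

With an imperfect correlation the second number would not drop to zero but would merely be lower than the first; and if sims_per_capable is very large, the fraction saturates near 1 whenever anyone simulates, which is the caveat discussed next.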
Note that under certain assumptions about the efficiency of simulations, one's effect on the probability of being in a simulation may be negligible. If any civilization could run orders of magnitudes more simulations of civilizations than there are civilizations in the basement, then most copies will be in simulations no matter what you decide. Regardless of your choice, you will probably be in a simulation. \n Happy simulations Almond (2010c, section 4.2) proposes to simulate civilizations in a nice way to increase the probability of being in such a simulation oneself. While evidential decision theory might be applied to try to reduce your \"risk\" of being in a simulated reality, some people, and some civilizations, might not see it that way: They might think that being in a simulated reality could have benefits if the entity that constructed the simulation is kind; for example, the inhabitants of the simulation might be protected from existential risks to their civilization, or they might be provided with an afterlife. Evidential decision theory suggests the possible tactic of making large numbers of simulated realities in which the inhabitants are treated kindly as a way of trying to meta-cause civilizations in general to do the same thing. This would be going further than what I said previously about treating the inhabitants of your own simulations kindly: This would be done so as to make it more likely that you are in a simulation, and that it is one in which you will be treated kindly. We might imagine a civilization doing this as a way of trying to use evidential decision theory to pluck an afterlife out of nowhere for itself, if it has recently acquired the computing power to simulate many civilizations, and provide them with an afterlife, but does not yet have technology such as mind uploading which it might use to obtain an afterlife more directly. A civilization might attempt this even if it does not yet have the computing power to construct simulated realities: It might set up some kind of legal or corporate framework to ensure that large numbers of ancestor simulations, complete with an afterlife, are constructed in the future, the idea being to strengthen the case that it is itself in such a simulation, made by a civilization with a past that is strongly correlated with its own present. Someone might even set up some organization for this purpose as a result of reading this article! \n Infinite ethics In all our calculations (sections 2.7 and 2.8) we assume finite numbers of agents each with a finite causal influence on their world. However, the multiverse -or even a single universe -may well be infinite. These infinities entail severe complications for the application of multiverse-wide consequentialist moral views like those required for multiverse-wide superrational cooperation (Bostrom, 2011; Arntzenius, 2014) . Superrationality is a form of what Bostrom (2011, ch 4.6) calls \"class action\": through our actions, we can acausally affect an infinite amount of value, even if each physical instantiation of ourselves only has a finite causal impact. It seems unclear whether this makes infinite ethics even more challenging, or whether it can be viewed as a step toward a solution (cf. Almond, 2010c, ch. 3.2 ). One's preferred approach to the problem of infinite ethics may well be consequential for a variety of issues (including MSR), which is why FRI lists infinite ethics as a promising area for future research. 
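One family of proposals discussed in the infinite-ethics literature cited above compares infinite worlds by looking at growing finite truncations along some fixed ordering of locations. The sketch below is only a crude illustration of that idea under my own assumptions (a canonical ordering exists and partial sums eventually dominate); it is not a solution to the problems raised there.

```python
from itertools import islice, repeat

def truncated_totals(utils_a, utils_b, n):
    """Compare two infinite utility streams by their first-n partial sums.

    This only yields a verdict when one stream's partial sums eventually
    dominate the other's, and the verdict can depend on the chosen ordering
    of locations, which is one of the standard difficulties.
    """
    return sum(islice(utils_a, n)), sum(islice(utils_b, n))

# Example: every location in world A is worth 2, every location in world B is worth 1.
world_a, world_b = repeat(2), repeat(1)
print(truncated_totals(world_a, world_b, 1000))  # (2000, 1000): A dominates at every truncation
```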
Nonetheless, I expect a solution to preserve most of the conclusions drawn from traditional (i.e. finite) ethics.

Objection based on uncertainty about the values of superrationalists in the multiverse

Thoughts on the value systems of extraterrestrials are necessarily speculative and uncertain. At what level of certainty about some other value system should we invest resources into maximizing it? Indeed, one possible criticism of MSR is that we will never be sufficiently certain of just how common some other value system is. Thus, the argument goes, we should in practice never take any specific value systems other than our own into consideration. First, note that superrationality is still relevant even if you do not know the other value systems. There are some interventions that benefit other superrationalists without requiring knowledge of their values (see section 4.1), such as making future superintelligent AIs cooperate superrationally (under the assumption that they will come to understand the values of other agents in the multiverse much better than we do). But even if the argument acknowledges this, it is still invalid, primarily because it ignores the fact that we do not know how common our own value system is, either. In section 2.8 we argued that if we consider the correlations between our actions and the behavior of agents elsewhere in the multiverse, then maximizing a neutral compromise utility function in our local universe maximizes our original utility function in the multiverse at large. This argument also applies if we are uncertain about the other agents' utility functions and thus about the compromise utility function itself. Thus, it must be possible to state the criticism in terms of the compromise utility function. For example, the criticism may translate to the following statement: the only terms in the compromise utility function that we can be certain about represent our own values; we are so uncertain about all other value systems that they do not contribute much to estimates of compromise utility. This criticism could, in theory, be true. Imagine you grew up on a planet where everyone had the same value system as yours; even if you believed that the universe also contains other value systems, you would be justified in not assigning much weight to any other specific value system. On Earth, however, we already observe considerable variety in what people care about. Thus, no matter what value system you hold, there are probably other value systems that are similarly common on Earth. Of course, we still do not know whether these value systems are also common elsewhere in the universe, but your own value system is a priori not in a privileged position that would justify assuming it to be more common than others. Solely maximizing our own utility function in this universe thus seems to be a bad approach to maximizing the compromise utility function, in turn making it suboptimal in terms of our multiverse-wide utility.
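To make the gains-from-compromise argument concrete, here is a minimal numerical sketch. It is not from the paper; the payoff numbers and the assumption of perfectly correlated policy choices are illustrative only. It compares everyone maximizing their own values locally with everyone maximizing a shared compromise utility function, in a toy multiverse where each agent happens to be better placed to produce the other side's values.

```python
# Toy multiverse: half the universes are controlled by agents with value
# system A, half by agents with value system B. In each universe the local
# agent can either produce 3 units of its own value or 10 units of the other
# value system's value (the asymmetry is what creates gains from trade).
N_UNIVERSES = 1000          # hypothetical number of universes per value system
OWN_PAYOFF, OTHER_PAYOFF = 3, 10

def total_for_A(policy):
    """Total utility received by value system A under a common policy.

    policy == "own":        every agent maximizes its own utility function locally.
    policy == "compromise": every agent maximizes the sum of (normalized) utilities,
                            which here means producing the larger payoff for the
                            other value system.
    Assumes, unrealistically, that all agents' choices are perfectly correlated.
    """
    if policy == "own":
        return N_UNIVERSES * OWN_PAYOFF      # A-agents produce 3 for A in each A-universe
    if policy == "compromise":
        return N_UNIVERSES * OTHER_PAYOFF    # B-agents produce 10 for A in each B-universe
    raise ValueError(policy)

print(total_for_A("own"))         # 3000
print(total_for_A("compromise"))  # 10000
```

By symmetry, value system B also ends up better off under the compromise policy, which is the sense in which maximizing the compromise utility function locally can maximize one's original utility function multiverse-wide.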
Figure 1: A causal graph representing the Donation game with copies and precommitment.

Figure 2: A graph representing the causal relationship in the CGTA thought experiment.

Figure 3: Generic causal graphs representing the two types of Newcomb-like decision problems. Medical Newcomb problems are illustrated on the left; Newcomb problems based on similarity between decision algorithms are illustrated on the right.

Figure 6: A graph representing a situation of mutual cooperation. An arrow from A to B indicates that A can benefit B.

Figure 7: A circular cooperation graph representing cooperation schemes of the sort used in the Donation circle. Again, an arrow from A to B indicates that A can benefit B.

Figure 8: A linear cooperation graph (graph-theoretically speaking, a 1-ary tree) representing schemes of cooperation like that in the Donation ladder. Again, the nodes represent participants and an arrow from A to B indicates that A can bring causal benefits to B.

Figure 9: A directed acyclic graph representing schemes of cooperation like that of the Hierarchical donation game.

where, for example, P(C | C) is the probability that the other side cooperates conditional on my cooperation. Solving for b_other yields

b_other > b_u / (P(C | C) − P(C | D)),   (8)

where P(C | C) − P(C | D) can be interpreted as quantifying how much more likely my cooperation makes the other's cooperation. Because there is at least some correlation, the term is always greater than 0. If the correlation is perfect, then P(C | C) = 1 and P(C | D) = 0, such that we get Eq. (7) as a special case of Eq. (8). If the correlation is less than perfect, then b_a > b_s may not be enough. For example, if P(C | C) = 0.8 = P(D | D) (such that whatever one agent does, the other agent is 80% likely to do the same), then it must hold that

b_a > b_s / (P(C | C) − P(C | D)) = b_s / (0.8 − 0.2) = (5/3) b_s.

Counterfactual mugging (see section 2.2). Omega decides to play a game of heads or tails with you. You are told that if the coin comes up tails, Omega will ask you to give it $100. If it comes up heads, Omega will predict whether you would have given $100 if the coin had come up tails. If Omega predicts that you would have given it the money, it gives you $10,000; otherwise, you receive nothing. Omega then flips the coin. It comes up tails, and you are asked to pay $100. Do you pay? If you can precommit to giving the money before you learn about your poor luck, you should do so. After all, this would render it near-certain that Omega would give us $10,000 if the coin comes up heads, at the mere cost of $100 if it comes up tails. By precommitting to pay Omega, we thus gain 0.5 · $10,000 − 0.5 · $100 = $4,950 in expectation.

Chewing gum and CGTA. A study finds that the great majority of people who chew gum die of throat abscesses before the age of 50. Meanwhile, of people who do not chew gum, only 10% die of throat abscesses before the age of 50. The researchers, to explain their results, wonder if saliva sliding down the throat wears away cellular defenses against bacteria. Having read this study, would you choose to chew gum? But now a second study comes out, which shows that most gum-chewers have a certain gene, CGTA, and the researchers produce a table showing the following mortality rates:

              CGTA present   CGTA absent
Chew gum      89% die        8% die
Don't chew    99% die        11% die

First, we have to consider what negative correlation means. Let's say you currently think that roughly 0.1% of evolved agents in the multiverse who have thought about MSR decide to cooperate. Now, you learn of one randomly chosen agent that she cooperates. The intuitive response is to increase the 0.1% estimate, if only slightly (depending on how confident you were in your initial estimate). If this agent were negatively correlated with the others, then upon learning that this one agent cooperated, you would adjust your estimate of how many agents cooperate downward.

For example, consider the following variation of the Platonia dilemma, adapted from Hofstadter (1983): Platonia five. One fine day, out of the blue, you get a letter from S. N. Platonia, a renowned Oklahoma oil trillionaire.
The letter states that 20 leading rational thinkers have been selected to participate in a little game, and you are among the lucky players. "Each of you has a chance at winning one billion dollars, put up by the Platonia Institute for the Study of Human Irrationality", it explains. "Here's how: if you wish, you may send a telegram with just your name on it to the Platonia Institute. If exactly 5 people reply within 48 hours, they each receive one billion dollars; otherwise no prizes are awarded to anyone. You are not allowed to communicate with each other or share the prize afterward." What do you do?

Table 1: Payoff matrix for two people driving in opposite directions.

                        player 2: right-hand   player 2: left-hand
player 1: right-hand            0                     −10
player 1: left-hand           −10                       0

Donation circle. Omega has a list of 6 participants. The list is circular, meaning that every participant has a successor. Omega sends each participant a letter, asking them to respond with a single letter 'C' (for cooperate) or 'D' (for defect) without communicating with each other. It explains that by sending in 'C', participants can increase their successor's payoff by $5. By sending in 'D', they can increase their own payoff by $2. As usual, the participants are told that they are all rational or that they use similar decision mechanisms. Every participant only cares about the balance of her own bank account, and not about Omega's or that of the other participants. Upon receiving the letter, should you cooperate or defect?

Secondly, thinking about the other agents' decisions can be dangerous. No. 42 defects solely because he thinks about what the preceding 41 participants decide. Knowing what the other agents think is thus harmful for some not-yet-updateless decision theories. Hence, similar to how it is wise to remain ignorant about your position in the list, many decision theories would recommend not thinking about what the other agents will do. If the players are human, then No. 1 may not be able to refrain from realizing that he wins by defecting. Perhaps No. 2 cannot refrain from realizing that No. 1's situation is different and his decision therefore independent of hers. However, participants with two-figure positions may be able to refrain and go with the reasoning originally presented: whatever I choose, my predecessor will probably choose the same, as his situation is similar to mine. If I just go ahead without thinking about the "chain of defection" initiated by No. 1, then people with similar numbers are probably going to do the same.

Donation tree. Omega has a long list of participants again. It sends all of them a letter, asking them to respond with a single letter 'C' (for cooperate) or 'D' (for defect) without communicating with each other. Omega explains that by sending in 'C', participants can increase the payoff of at least 3 participants down the list by $2 each. For example, if the 4th participant chooses to cooperate, this benefits a subset of the participants in positions 5, 6, etc., but not the previous 3 participants. The cooperation of the last few participants has little to no effect. By sending in 'D', participants can increase their own payoff by $5. Participants do not know their position on the list or whom they could benefit. As usual, they are told that they all use similar decision mechanisms.
Every participant only cares about the balance of their own bank account, and not about Omega's or the other participants'. Upon receiving the letter, should a participant cooperate or defect? The linear structure can be generalized to non-linear hierarchical cooperation schemes, as in the Hierarchical donation game (cf. Figure 9). If all of this is not the case, nothing can change the fact that at least No. 1 wins by defecting once she knows her position on the list.

To talk about human values, it is at least helpful (if not necessary) to develop some systematic terminology and overview of what kinds of things people care about. Luckily, we can get help from moral psychologists and others who have attempted to develop just this sort of overview. One example is Jonathan Haidt's and Craig Joseph's moral foundations theory. It divides morality up into five foundations - care, fairness, loyalty, authority and sanctity - although the authors do acknowledge that some other values (such as liberty) may deserve foundation status as well. Haidt and his colleagues have also shown that while social conservatives tend to embrace all five moral foundations, liberals/progressives seem to focus primarily on the first two, i.e. care and fairness. We can thus also use the terms "liberal" and "conservative" to describe values, even though it is, of course, uncertain whether this distinction carries the same weight in other civilizations. Other theories outlining what humans value include Schwartz' Theory of Basic Human Values (updated and extended by Schwartz et al.).

Table 2: The payoff matrix of a prisoner's dilemma.

              player 2: C   player 2: D
player 1: C        4             3
player 1: D        3             2

Table 3: The effective payoffs of a prisoner's dilemma to a third party that cooperates superrationally.

Table 4: The effective payoffs of a prisoner's dilemma, in which player 1 cooperates superrationally (with extraterrestrial agents who hold player 2's values), but player 2 does not.

Do people already apply superrational reasoning when interacting with each other on Earth? Certainly, many disagree with CDT's choice in contrived examples like Newcomb's problem or the prisoner's dilemma against a copy, but does it ever influence their real-world decisions? When conducting a donation game for his Scientific American article, Hofstadter (1983) asked the participants to explain their reasoning:

I would like to quote to you some of the feelings expressed by my friends caught in this deliciously tricky situation. [...] Martin Gardner (yes, I asked Martin to participate) vividly expressed the emotional turmoil he and many others went through. Many people flirted with the idea that everybody would think "about the same", but did not take it seriously enough. Scott Buresh confided to me: "It was not an easy choice. I found myself in an oscillation mode: back and forth. I made an assumption: that everybody went through the same mental processes I went through. Now I personally found myself wanting to cooperate roughly one third of the time. Based on that figure and the assumption that I was typical, I figured about one third of the people would cooperate. So I computed how much I stood to make in a field where six or seven people cooperate. It came out that if I were a D, I'd get about three times as much as if I were a C. So I'd have to defect. Water seeks out its own level, and I sank to the lower right-hand corner of the matrix." At this point, I told Scott that so far, a substantial majority had defected.
He reacted swiftly: "Those rats - how can they all defect? It makes me so mad! I'm really disappointed in your friends, Doug." So was I, when the final results were in: Fourteen people had defected and six had cooperated [...]. "Horrible dilemma", Martin Gardner said. "I really don't know what to do about it. If I wanted to maximize my money, I would choose D and expect that others would also; to maximize my satisfactions, I'd choose C, and hope other people would do the same (by the Kantian imperative). I don't know, though, how one should behave rationally. You get into endless regresses: 'If they all do X, then I should do Y, but then they'll anticipate that and do Z, and so . . .' You get trapped in an endless whirlpool. It's like Newcomb's paradox." So saying, Martin defected, with a sigh of regret. In a way echoing Martin's feelings of confusion, Chris Morgan said, "More by intuition than by anything else, I'm coming to the conclusion that there's no way to deal with the paradoxes inherent in this situation. So I've decided to flip a coin, because I can't anticipate what the others are going to do. I think - but can't know - that they're all going to negate each other." So, while on the phone, Chris flipped a coin and "chose" to cooperate. Sidney Nagel was very displeased with his conclusion. He expressed great regret: "I actually couldn't sleep last night because I was thinking about it. I wanted to be a cooperator, but I couldn't find any way of justifying it. The way I figured it, what I do isn't going to affect what anybody else does. I might as well consider that everything else is already fixed, in which case the best I can do for myself is to play a D." [...] 'C' is the answer I was hoping to receive from everyone. I was not so optimistic as to believe that literally everyone would arrive at this conclusion, but I expected a majority would - thus my dismay when the early returns strongly favored defecting. As more phone calls came in, I did receive some C's, but for the wrong reasons. Dan Dennett cooperated, saying, "I would rather be the person who bought the Brooklyn Bridge than the person who sold it. Similarly, I'd feel better spending $3 gained by cooperating than $10 gained by defecting." Charles Brenner, who I'd figured to be a sure-fire D, took me by surprise and C'd. When I asked him why, he candidly replied, "Because I don't want to go on record in an international journal as a defector." Very well. Know, World, that Charles Brenner is a cooperator!

This is consistent with the terminology by Tegmark (2003) but otherwise uncommon.

Quantifying utility in a way that allows for comparison among different agents is difficult. For now, we will assume that it is possible. The question is revisited in section 2.8.

Speaking about correlations between decisions only makes sense under the Bayesian interpretation of probability. If we see an agent cooperate, then this makes us assign a higher credence to a similar agent cooperating as well. However, if we were to observe two similar agents make the same decision over and over again, then their decisions would be uncorrelated in the resulting empirical distribution. I should also note that, in principle, I could also talk about dependences rather than correlations. Our decision and the outcome of some other causally disconnected event could be dependent in all kinds of ways, including being dependent but uncorrelated.
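As a concrete illustration of how such Bayesian correlations enter the expected-value calculation, the following snippet reproduces the kind of threshold reasoning behind Eq. (8). The sketch is mine, not the paper's; the payoff and probability numbers are only illustrative. Cooperation in a symmetric two-player donation game is worthwhile exactly when the benefit received from the other's cooperation, weighted by how much one's own cooperation raises its probability, exceeds the private gain from defecting.

```python
def should_cooperate(p_c_given_c, p_c_given_d, b_other, b_self):
    """EDT-style comparison for a symmetric two-player donation game.

    p_c_given_c: credence that the other cooperates, given that I cooperate
    p_c_given_d: credence that the other cooperates, given that I defect
    b_other:     amount I receive if the other cooperates (and vice versa)
    b_self:      amount I can give myself by defecting
    """
    ev_cooperate = p_c_given_c * b_other
    ev_defect = b_self + p_c_given_d * b_other
    # Equivalent to the condition b_other > b_self / (p_c_given_c - p_c_given_d).
    return ev_cooperate > ev_defect

# Perfect correlation: cooperating pays as soon as b_other > b_self.
print(should_cooperate(1.0, 0.0, b_other=6, b_self=5))   # True

# Imperfect correlation (80% chance the other mirrors my choice):
# the threshold rises to b_self / 0.6 = (5/3) * b_self ≈ 8.33, so 6 is no longer enough.
print(should_cooperate(0.8, 0.2, b_other=6, b_self=5))   # False
print(should_cooperate(0.8, 0.2, b_other=9, b_self=5))   # True
```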
Throughout this paper I will assume that the dependences can be viewed as simple linear relationships (as measured by the Pearson correlation coefficient) and that it always holds that the more I cooperate, the more others cooperate. I briefly discuss the possibility of negative correlations in section 2.6.2. \n\t\t\t Note that the term \"non-causal decision theory\" is not meant to imply that these theories do not rely on the concept of causality at all.5 Some have argued that evidentialist intuitions are even stronger in problems of cooperation like versions of the prisoner's dilemma with correlated decision making. Egan (2007) presents yet another decision problem as a decisive counterexample.6 If you have anthropic uncertainty over whether you are currently in a simulation used to decide how to fill the boxes with money, CDT may also recommend one-boxing if the simulated version would still care \n\t\t\t In some versions of this problem, Omega has already flipped the coin when it approaches you. In those cases, you would still win by precommitting long after the coin has already landed, provided you are still uncertain about the result of the coin flip.9 Similar lines of reasoning about precommitment apply to thought experiments like the Newcomb's problem with transparent boxes(Drescher, 2006a, chapter 6.2), retribution(Drescher, 2006a, chapter 7.3.1) and Parfit's hitchhiker(Parfit, 1984, chapter 1.3). \n\t\t\t In fact, most multiverse theories contain infinitely many agents. This leads to some additional complications, discussed in section 6.10.13 Decision theorists have picked up on the point that large numbers of agents can bring out the differences between CDT and EDT in realistic cases. In particular, large elections are often mentioned as such a case (see, e. g.Ahmed, 2014, chapter 4.6.3).14 Note that this is usually not an instance of Pascal's mugging, although the underlying mathematical mechanism (multiplying very small numbers with very large numbers) is similar. Whereas in Pascal's mugging, a big reward outweighs the low probability assigned to it, multiverse-wide superrationality (MSR) involves a low probability being outweighed by the large number of near-independent instances of that probability. The positive result occurs with a high probability as long as the other agents' decisions of whether to cooperate are mostly independent of one another. For comparison, imagine drawing balls out of a box containing 1,000,000 balls. You are told that the probability of drawing a blue ball is only 1/1,000 and that the probabilities of different draws are independent. Given this information, you can tell with a high degree of certainty that there are quite a few blue balls in the box. Multiverse-wide correlations between agents thus becomes much more important to consider than the correlations in smaller scale problems like the donation game, unless we are skeptical of some of the underlying assumptions. \n\t\t\t This differs somewhat from more standard game-theoretical definitions of coordination. For a discussion of the relationship, see (Oesterheld, 2017) . \n\t\t\t This notation -viewing the lotteries as probability distributions over action vectors -is a bit unnatural, and stems from the lack of an intermediate step of world states or histories between action vectors and utilities in our notation. If we extended our notation with such an intermediate step, then the lotteries would be over states of the world rather than action vectors. 
Although the proofs also work with action vectors, it may help to think of the lotteries as being over histories.18 Interestingly, the proof of the aggregation theorem given by Harsanyi (1955) contains an error. However, since then a few alternative, correct proofs have been published(Fishburn, 1984; Border, 1985; Hammond, 1992). \n\t\t\t Note that none of the previous arguments were based on interpersonal comparisons of utility.20 There is a technical problem with utility functions that do not assume a highest/lowest value at all. If they are nonetheless bounded, the infimum and supremum must be set to 0 and 1. If the utility functions assume arbitrarily high values, range normalization is not possible. That is, for an unbounded utility function u there is no bounded utility function u' that is equivalent to u.21 For ethical comparisons, the lowest and highest values usually depend on an agent's moral relevance, or some measure of the intensity of preference (un)fulfillment she can experience. Alternatively, the utility function may be weighted by such values at some other steps of the interpersonal comparison of utility.22 As I will argue below, there are some pathological cases in which every possible compromise utility function leaves someone worse off. However, both of the following cases can, if they avoid these pathologies, \n\t\t\t Another approach, which I have brought up in previous work(Oesterheld, 2016a, section 3.2), is to use any utility function extraction procedure that is not explicitly biased in any way and hope that such \"fair[or, perhaps, equal] treatment in determining all individuals' utility functions induces moral permissibility,\" even if the utility functions are not normalized afterward. This is especially promising if you do not yet know which agents will be favored by the procedure. \n\t\t\t In If you don't know the name of the game, just tell me what I mean to you, Stuart Armstrong uses a similar game to make a somewhat similar point. \n\t\t\t For instance, Peter Levin writes:The reasons that people give for their judgments are post-hoc rationalizations(Haidt, 2012, pp. 27-51; Swidler, 2013, pp. 147-8; Thiele, 2006). \"Individuals are often unable to access the causes of their moral judgments\"(Graham et al., 2011, p. 368). \n\t\t\t Also seeMuehlhauser and Helm (2012, ch. 5).26 Again, see the technical note Oesterheld (2017) on how this compares to more standard game theoretical definitions of coordination. \n\t\t\t You may have noticed that p=1/4=5/20, i. e. the number of players who would need to win divided by the number of players. This result generalizes.28 Specifically, if the 20 participants could let some some uniform random process determine the set of the 5 people who are allowed to send a letter, everyone could commit to going with that proposal. Consider the concept of correlated equilibria. \n\t\t\t In the game-theoretical concept of correlated equilibria, the agents receive a similar form of coordination help. SeeLeyton-Brown and Shoan (2008, chapter 3.5) andOsborne and Rubinstein (1994, chapter 3.3) for introductions. \n\t\t\t Note that in causal cooperation, cooperative or uncooperative behavior may also causally affect bystanders and thus increase the probability that I can establish cooperation with them in the future.31 Throughout this treatment, the graphs do not represent time and repetition. 
This could be done by taking the given static graphs and \"unfolding through time\", similar to how it is done when applying backpropagation to recurrent neural networks. The resulting graph may then resemble a UML interaction diagram. \n\t\t\t Even in the iterated prisoner's dilemma, this answer -supported by backward induction -is often seen as unsatisfactory. Other examples of paradoxes caused by backward induction are the chainstore paradox, the traveler's dilemma, the unexpected hanging paradox, the Bottle Imp paradox, the centipede game, the interesting number paradox, the guess 2/3 of the average game. A good introduction is given byBasu (2007). For further references, see Basu (1994) . \n\t\t\t Two classes of hypotheses in this space are substance dualism and the quantum mind. Both have a few prominent proponents but are nonetheless fringe positions in philosophy of mind. I concur with the majority and am skeptical of both hypotheses. \n\t\t\t For references to the literature, see section 2.9.1. \n\t\t\t Apparently, some authors differentiate between instrumental and \"value rationality\". I would probably disagree with the assumptions underlying the use of the term \"value rationality\" (see footnote 68). Nevertheless, I agree with the differentiation itself. \n\t\t\t In most human sadists, sadism is probably not the only goal or cause of happiness. Many sadists probably recognize their urges as morally wrong, yet are unable to control them to varying degrees. To these sadists, a decision theory may provide a nudge towards seeking professional help (at least if they cannot satisfy their sadistic preferences in morally nonproblematic ways). \n\t\t\t In my personal experience, self-identified consequentialists actually tend to be more virtue ethical in their behavior than the average person. \n\t\t\t One may argue that absurdity heuristics are a part of someone's epistemology. That is, the \"absurdity\" of the Everett interpretation is used as a reason to give it low probability as a theory of physics. However, it is not clear whether there is a clear-cut, operational difference between belief and preference if the belief does not make a testable prediction. \n\t\t\t Note that some authors are skeptical to there being any fact of the matter in questioning whether some being is conscious or not. Instead, they view terms like \"consciousness\" and \"sentience\" as definitional categories or expressions of particular values. See, e. g., Dennett (1991) and Brian Tomasik's Dissolving Confusion about Consciousness. \n\t\t\t If we normalize their utility function, both assign the same utility to a situation in which all the multiverse's metal is transformed into their favorite office supply. This also means that they assign the same utility to any other other fixed amount of metal being transformed into paperclips. \n\t\t\t In Divergent preferences and meta-preferences, Stuart Armstrong makes a few points that are closely related to the preceding three paragraphs.44 Habermas' discourse ethics is also worth mentioning. Alas, the best discussion of its main ideas that I am aware of -ch. 5 of Norbert Hoerster's Wie lässt sich Moral begründen? -is currently only available in German.45 On the other hand, many (if not most) biologists seem to care about conservation -popular biology textbooks like Campbell Biology (Urry et al., 2016) and Life: The Science of Biology (Sadava et al., 2012) cover and seem to endorse conservation biology. There are various counter-considerations, though. 
For example, a prior concern for the environment may be a strong motivator for many to study biology in the first place. Perhaps many also did not think about the moral value of nature all that systematically. From what I can tell, neither Campbell Biology nor Life cover wild animal suffering at all. \n\t\t\t Many arguments also present a conflict between abstract and concrete thinking. For example, the repugnant conclusion can be seen as a clash of the evaluation by the concrete welfare of the identifiable victim or representative moment and the abstract evaluation by the aggregate welfare. \n\t\t\t Many other characterizations of the difference between liberals and conservatives have been proposed. For example, Robin Hanson compares the differences between liberals and conservatives to the differences between foragers and farmers. Other distinctions have been proposed by Sinn and Hayes (2016) and Lakoff (1997) . \n\t\t\t The most notable exceptions are probably Boltzmann brains, which do not have a significant impact on the universe. \n\t\t\t There is some debate as to whether the the term \"evolution\" fits this process, i. e. about the validity and usefulness the analogy between cultural evolution on memes and biological evolution on genes (Edmonds, 2005; Kuper, 2000; Gil-White, 2005; Wimsatt, 1999; Claidière and André, 2012; Atran, 2001; Pinker, 1999, chapter 3, section \"What now?\"). Anyway, my impression is that nowadays the study of cultural evolution does not heavily rely on the analogy, even when the term \"cultural evolution\" is used. \n\t\t\t Note that the distinction between universal and idiosyncratic concerns is not binary. For example, I would guess that valuing eternal flames is much more common in the multiverse than most religions and tribal loyalties but less common than concern for justice, welfare and freedom. \n\t\t\t That said, advocates of simple ethical views like utilitarianism often argue that the implications resemble other ethical notions. For example, because receiving an additional unit of resources has a greater impact on a poor than a rich person's happiness, utilitarianism tends to prefer an even distribution of resources(Studebaker, 2012). Similarly, it has been argued that utilitarianism is (often) consistent with the wrongness of killing, justice (Mill, 1863) and other moral rules(Smart and B. Williams, 1973, part 1, chapter 7). This decreases the value of making utilitarians more pluralistic. It should be noted, however, that many (especially critics of utilitarianism) have argued for the opposite, i. e. that there are some moral intuitions \n\t\t\t For example, the Machine Intelligence Research Insitute's \"Research\" page is titled \"Aligning advanced AI with human interests\". Another AI safety organization even mentions it in their name: the Center for Human-Compatible AI. Also consider the Asilomar AI principles and the discussion of value loading byBostrom (2014b, chapters 12, 13). \n\t\t\t Of course, identifying superrational cooperators in a world model may be more or less difficult than identifying humans in the world model. My tentative guess would be that it is easier, because I think the category of superrationalists can be described more succinctly than the category of humans, but of course I am not very confident in this claim. Similarly, it may be that MSR-type aggregation (e. g., variance \n\t\t\t Data from other games with similarly dissatisfying Nash equilibria can be used as further tests of such models of human reasoning. 
For example,Basu (2007) reviews research on people's choices in the traveler's dilemma. He also hypothesizes that many people do not go with the Nash equilibrium because of hardwired altruism. \n\t\t\t Note that while humans evolved to spread their genes as much as possible, they are neither pure fitness maximizers nor pure egoists (in the sense of not caring about others' welfare). Our altruistic intentions evolved for reasons of fitness, but that does not mean they are not genuine altruistic intentions(Yudkowsky, 2015, section 138; Cosmides and Tooby, 1995, page 54f. Wright, 1995, page 225f.).78 This is no surprise, as deriving ought from is cannot -at least in my view -be done.", "date_published": "n/a", "url": "n/a", "filename": "Multiverse-wide-Cooperation-via-Correlated-Decision-Making.tei.xml", "abstract": "Some decision theorists argue that when playing a prisoner's dilemma-type game against a sufficiently similar opponent, we should cooperate to make it more likely that our opponent also cooperates. This idea, which Hofstadter calls superrationality, has strong implications when combined with the insight from modern physics that we probably live in a large universe or multiverse of some sort. If we care about what happens in civilizations located elsewhere in the multiverse, we can superrationally cooperate with some of their inhabitants. That is, if we take their values into account, this makes it more likely that they do the same for us. In this paper, I attempt to assess the practical implications of this idea. I argue that to reap the full gains from trade, everyone should maximize the same impartially weighted sum of the utility functions of all collaborators. I also argue that we can obtain at least weak evidence about the content of these utility functions. In practice, the application of superrationality implies that we should promote causal cooperation, moral pluralism, moral reflection, and ensure that our descendants, who will be smarter and thus better at finding out how to benefit other superrationalists in the universe, engage in superrational cooperation. \n Introduction -the basic idea This paper makes an extraordinary claim: that a few interesting but by themselves inconsequential ideas from decision theory and physics together give rise to a crucial consideration with strong implications for how to do the most good. In this first section, I will outline the main idea and forward-reference sections with the full arguments and detailed elaborations. Afterward, I give an overview of the entire paper, section by section (section 1.1). Consider the following thought experiment, adapted from Hofstadter's (1983) Dilemmas for Superrational Thinkers, Leading Up to a Luring Lottery: Donation game with superrationality. Hofstadter sends 20 participants the same letter, asking them to respond with a single letter 'C' (for cooperate) or 'D' (for defect) without communicating with the other participants. Hofstadter \n Superrationality Despite what the name might suggest, superrationality does not have anything to do with extraordinary levels of rationality. \"Super\" refers to inclusivity, as in superorganism, and \"rationality\" specifically denotes instrumental rationality. The term was introduced by Hofstadter (1983), although the basic argument had been discussed before (Davis, 1977; Horgan, 1981 , section X). 
In the following we give an abbreviated and simplified account of the prisoner's dilemma or public goods game-like experiment Hofstadter ran with some of his friends and colleagues as participants. It is the same thought experiment we discussed in the introduction, although we now distinguish two slightly different versions. The argumentation for superrationality will be relatively brief. For more detailed accounts, see Hofstadter's original article or some of the references in section 2.2. Donation game with common rationality. (This is more similar to the version Hofstadter uses in his article.) Hofstadter sends 20 participants the same letter, asking them to respond with a single letter 'C' (for cooperate) or 'D' (for defect) without communicating with each other. Hofstadter explains that by sending in 'C', a participant can increase everyone else's payoff by $2. By sending in 'D', participants can increase their own payoff by $5. The letter ends by informing the participants that they were all chosen for their high levels of rationality and correct decision making in weird scenarios like this. Note that every participant only cares about the balance of her own bank account and not about Hofstadter's or the other 19 participants'. Should you, as a participant, respond with 'C' or 'D'? Donation game with similarity. The same as the donation game with common rationality. However, instead of informing the participants that they are all rational, the game master informs them that they think in similar ways about weird decision problems like this one. The basic setup of this thought experiment is equivalent to those found in, e. g., the prisoner's dilemma with copies (sometimes also referred to as the prisoner's dilemma with replicas or twins). All of these games share an important feature: they are not iterated. Participants respond only once, then find out what the others chose -and the game is over.", "id": "5482353368a0ec4e131702d1ca5e3f9d"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Joshua Greene", "Francesca Rossi", "John Tasioulas", "Kristen Brent Venable", "Brian Williams"], "title": "Embedding Ethical Principles in Collective Decision Support Systems", "text": "Introduction We believe it is important to study the embedding of safety constraints, moral values, and ethical principles in agents, within the context of collective decision making systems in societies of agents and humans. Collective decision making involves a collection of agents who express their preferences over a shared set of possible outcomes, and a preference aggregation rule which chooses one of the options to best satisfy the agents' preferences. However, aggregating just preferences may lead to outcomes that do not follow any ethical principles or safety constraints. To embed such principles/constraints in a collective decision making system, we need to understand how to model them, how to reason with them at the level of a single agent, and how to embed them into collective decision making. Just like individual humans, each agent that operates in a multi-agent context needs to be have an internal representation of moral values and ethical principles, as well as an ethical reasoning engine. Otherwise it would not able to explain its behaviour to others. Copyright c 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 
We claim that there is a need to adapt current logicbased modelling and reasoning frameworks, such as soft constraints, CP-nets, and constraint-based scheduling under uncertainty, to model safety constraints, moral values, and ethical principles. More precisely, we study how logicbased preference modelling frameworks can be adapted to model both (explicit) ethical principles and (implicit) moral values, as sophisticated constraints over possible actions. The constraints may be unconditional (\"hard\") constraints, or soft, overridable if the consequences of an individual bad action can still lead to overall good. We propose to replace preference aggregation with an appropriately developed value/ethics/preference fusion, an operation designed to ensure that agents' preferences are consistent with their moral values and do not override ethical principles For ethical principles, we use hard constraints specifying the basic ethical \"laws\", plus some form of common-sense morality expressed as sophisticated prioritised and possibly context-dependent constraints over possible actions, equipped with a conflict resolution engine. To avoid reckless behavior in the face of uncertainty, we proposed to bound the risk of violating these ethical laws in the form of chance constraints, and we propose to develop stochastic constraint solvers that propose solutions that respect these risk bounds, based on models of environmental uncertainty. We also propose to replace preference aggregation with an appropriately developed constraint/value/ethics/preference fusion, an operation designed to ensure that agents' preferences are consistent with the system's safety constraints, the agents' moral values, and the ethical principles. We will leverage previous experience in developing single and multi-agent preference/constraint reasoning engines. Today, techniques exist to enable agents to make decisions, such as scheduling activities, while satisfying some safety concerns, e.g. by using techniques from constraintbased optimization. For instance, in many critical scenarios, such as space missions where a malfunction can endanger the whole mission, activities are scheduled in such a way to maximise robustness against possible problems. We believe that these techniques can provide an inspiration to handle ethical concerns. However, we think that a much more explicit model and reasoning engine for ethical principles and moral values is needed in order to deal with them satisfactorily and allow them to evolve over time. \n Which ethical principles for intelligent agents? An intelligent agent should have capability to autonomously make good decisions, based on available data and preferences, even in the context of uncertainty, missing or noisy information, as well as incorrect input, and should be able to learn from past experience or from available historical data. Even more importantly, intelligent agents should have the ability to interact with humans, make decisions together with them, and achieve goals by working together. An agent with these capabilities poses several crucial ethical questions. Ethical principles guide humans' behaviour. They tell us what is regarded as right or wrong. They come from values that we regards as absolute, guiding our whole life. 
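As a minimal sketch of the kind of constraint/value/preference fusion described above, one can filter candidate actions through hard ethical constraints, discard those whose estimated risk of violating an ethical "law" exceeds a chance-constraint bound, and only then rank the survivors by soft-constraint preference scores. This is my illustration, not the authors' system; the outcome features, weights, and risk bound are invented.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    features: dict                 # e.g. {"harms_human": False, "delay_hours": 3}
    violation_risk: float = 0.0    # estimated probability of violating an ethical law

# Hard ethical constraints: predicates every permissible action must satisfy.
HARD_CONSTRAINTS = [
    lambda a: not a.features.get("harms_human", False),
]

# Soft constraints / preferences: (weight, scoring function) pairs.
SOFT_CONSTRAINTS = [
    (1.0, lambda a: -a.features.get("delay_hours", 0)),      # prefer faster actions
    (0.5, lambda a: a.features.get("user_satisfaction", 0)),  # prefer satisfying ones
]

RISK_BOUND = 0.01  # chance constraint: tolerate at most 1% risk of violating a hard law

def choose(actions):
    permissible = [a for a in actions
                   if all(c(a) for c in HARD_CONSTRAINTS) and a.violation_risk <= RISK_BOUND]
    if not permissible:
        return None  # no ethically acceptable option; defer to a human, for instance
    return max(permissible, key=lambda a: sum(w * f(a) for w, f in SOFT_CONSTRAINTS))

best = choose([
    Action("fast_but_risky", {"harms_human": False, "delay_hours": 1}, violation_risk=0.05),
    Action("safe_and_slow", {"harms_human": False, "delay_hours": 4, "user_satisfaction": 2}, 0.001),
])
print(best.name)  # safe_and_slow
```

The point of the design is that preferences never get to override the hard constraints or the risk bound; they only break ties among the options that survive the ethical filter.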
If we want intelligent agents to enhance human capabilities, or to collaborate with humans, or even just to live and act in the same society, we need to embed in them some ethical guidelines, so they can act in their environment following values that are aligned with human ones. Or maybe we need different values and ethical principles for agents, since they are inherently different from humans? As Isaac Asimov famously illustrated in his I, Robot series, explicitly programming ethical behavior is surprisingly challenging. Moral philosophy - the field that has studied explicit ethical principles most extensively - suggests three general approaches, corresponding to the three major schools of Western moral thought. The deontological approach (most closely associated with Immanuel Kant) regards morality as a system of rights and duties. Here the focus is on categories of actions, where different actions are deemed impermissible, permissible, or obligatory based on a set of explicit rules. The consequentialist approach (most closely associated with Jeremy Bentham and John Stuart Mill) aims to produce the best aggregate consequences (minimizing costs and maximizing benefits) according to a pre-specified value function. For example, a classical utilitarian approach aims to maximize the total amount of happiness. The virtue- or character-based approach (most closely associated with Aristotle) regards ethical behavior as the product of an acquired set of behavioral dispositions that cannot be adequately summarized as an adherence to a set of deontological rules (concerning actions) or as a commitment to maximizing good consequences. These three approaches are well known and have been the starting point for nearly all discussions of machine ethics (Moor 1985; Bostrom 2014; Wallach and Allen 2008). Each approach has limitations that are well known. Deontological principles are easy to implement but may be rigid. Consequentialist principles require complex calculations that may be faulty. Virtue is opaque and requires extensive training with an unknown teaching criterion. There is, however, a more general problem faced by all three approaches, which is that implementing them may depend on solving daunting, general computation problems that have not been solved and may not be solved for some time. For example, a "simple" deontological rule such as "don't lie" or "don't kill" is not specified in terms of machine movements. Rather, the machine must understand which acts of communication would constitute lying and which body movements would constitute killing in a given context. A consequentialist system would require a machine to represent all of the actions available to it, and a virtue-based system would have to recognize the present situation as one with a variety of features that, together, call for one action rather than another. In other words, all three approaches, when fully implemented, seem to require something like general intelligence, which would enable the machine to represent its current situation in rich conceptual terms. Indeed, this speculation is consistent with recent research on the cognitive neuroscience of moral judgment indicating that moral judgment depends on a variety of neural systems that are not specifically dedicated to moral judgment (Greene 2014). This includes systems that enable the general representation of value and the motivation of its pursuit, visual imagery, cognitive control, and the representation of complex semantic representations.
Unfortunately for Commander Data, humans have no \"ethical subroutine\". Real human moral judgment uses the whole brain. What, then, can be done? Here, the human brain may nevertheless offer some guidance (Shenhav and Greene 2014) . Is it morally acceptable to push someone off of a footbridge in order to save five lives (Thomson 1985 )? A simple deontological response says no (\"Dont kill\"). A simple consequentialist response says yes (\"Save the most lives\"), and most humans are at least somewhat conflicted about this, but err on the side of the deontological response (in this particular case). We now know that the deontological response depends on a classically emotional neural structure known as the amygdala (reflecting emotional salience) and that the application of the consequentialist maximizing principle depends on a classically \"cognitive\" structure known as the dorsolateral prefrontal cortex. It seems that healthy humans engage both responses and that there is a higher-order evaluation process that depends on the ventromedial prefrontal cortex, a structure that across domains attaches emotional weight to decision variables. In other words, the brain seems to make both types of judgment (deontological and consequentialist) and then makes a higher order judgment about which lower-order judgment to trust, which may be viewed as a kind of wisdom (reflecting virtue or good character). Such a hierarchical decision system might be implemented within an agent, or across agents. For example, some agents may apply simple rules based on action features. Others may attempt to make \"limited\" cost-benefit calculations. And collectively, the behavior of these agents may be determined by a weighting of these distinct, lower-level evaluative responses. Such as system might begin by following simple deontological rules, but then, either acquire more complex rules through learning, or learn when it can and cannot trust its own cost-benefit calculations. Starting with action-based rules and simple cost-benefit calculations substantially reduces the space of possible responses. Learning to trade-off between these two approaches adds some flexibility, but without requiring intractable cost-benefit calculations or lifelong moral education. We offer this approach as just one example strategy. Of course, if we knew how we were going to solve this problem, there would be no need to bring together people with diverse expertise. What we wish to convey is twofold: First, that we are aware of the scope of the challenge and the strengths and limitations of the extant strategies. Second, that we have some preliminary ideas for hybrid approaches that leverage insights from human moral cognition. Another important aspect of our approach would be to consider the extent to which morality could be reduced to a set of rules that is capable of being applied in a fairly straightforward way to guide conduct , e.g. 'Do not kill', 'Keep one's promises', 'Help those in need', etc. We already know that much of common sense morality is codifiable in this way, thanks to the example of the law. However, even if we could achieve an adequate codification of ordinary moral consciousness, at least within some domain, problems would arise. 
Two cases are especially worth highlighting: (a) cases where the strict application of a given rule generates an unacceptable outcome, often but not always characterisable as such by reference to some other rule that has been violated in adhering to the first, and (b) cases where the strict application of the given set of rules is unhelpfully 'silent' on the problem at hand, because it involved circumstances not foreseen by the rules. Both phenomena (a) and (b) raise the question of when and how the strict application of a rule needs to be modified or supplemented to resolve the problem of perverse results or gaps. One important source of thinking about these issues is Aristotle's discussion of justice and equity in the Nicomachean Ethics. According to Aristotle, the common sense morality codified in law, although capable of being a generally good guide to action, will nonetheless on occasion breakdown along the lines of (a) and (b). For Aristotle, this means that the virtuous judge will need to possess, in addition to a propensity to follow legal rules, the virtue of equity. This enables the judge to use their independent judgment to correct or supplement the strict application of legal rules in cases of type (a) or (b). A key topic involves the clarification of the notion of equity, with its rule and judgment structure, as a prelude to a consideration of how this might be embedded in autonomous agents. \n Designing ethical agents No matter which approach we will choose to express ethical principles and moral values in intelligent agents, we need to find a suitable way to model it in computational terms, which is expressive enough to be able to represent all we have in mind in its full generality, and which can be reasoned upon with computational efficiency. Ethical principles may seem very similar to the concepts of constraints (Rossi, Van Beek, and Walsh 2006; Dechter 2003 ) and preferences (Rossi, Venable, and Walsh 2011) , which have already received a large attention in the AI literature. Indeed, constraints and preferences are a common feature of everyday decision making. They are, therefore, an essential ingredient in many reasoning tools. In an intelligent agent, we need to specify what is not allowed according to the principles, thus some form of constraints, as well as some way to prioritise among different principles, that some form of preference. Representing and reasoning about preferences is an area of increasing theoretical and practical interest in AI. Preferences and constraints occur in real-life problems in many forms. Intuitively, constraints are restrictions on the possible scenarios: for a scenario to be feasible, all constraints must be satisfied. For example, if we have an ethical rule that says we should not kill anybody, all scenarios where people are killed are not allowed. Preferences, on the other hand, express desires, satisfaction levels, rejection degrees, or costs. For example, we may prefer an action that solves reasonably well all medical issues in a patient, rather than another one that solves completely one of them but does not address the other ones. Moreover, in many real-life optimization problems, we may have both constraints and preferences. Preferences and constraints are closely related notions, since preferences can be seen as a form of \"relaxed\" constraints. For this reason, there are several constraint-based preference modeling frameworks in the AI literature. 
One of the most general of such frameworks defines a notion of soft constraints (Meseguer, Rossi, and Schiex 2006), which extends the classical constraint formalism to model preferences in a quantitative way, by expressing several degrees of satisfaction that can be either totally or partially ordered. The term soft constraints is used to distinguish this kind of constraint from the classical ones, which are usually called hard constraints. However, hard constraints can be seen as an instance of the concept of soft constraints where there are just two levels of satisfaction. In fact, a hard constraint can only be satisfied or violated, while a soft constraint can be satisfied at several levels. When there are both levels of satisfaction and levels of rejection, preferences are usually called bipolar, and they can be modeled by extending the soft constraint formalism (Bistarelli et al. 2006). Preferences can also be modeled in a qualitative (also called ordinal) way, that is, by pairwise comparisons. In this case, soft constraints (or their extensions) are not suitable. However, other AI preference formalisms are able to express preferences qualitatively, such as CP-nets (Boutilier et al. 2004). More precisely, CP-nets provide an intuitive way to specify conditional preference statements that state the preferences over the instances of a certain feature, possibly depending on some other features. For example, we may say that we prefer driving slowly to driving fast if we are on a country road. CP-nets and soft constraints can be combined, providing a single environment where both qualitative and quantitative preferences can be modeled and handled. Specific types of preferences come with their own reasoning methods. For example, temporal preferences are quantitative preferences that pertain to the position and duration of events in time. Soft constraints can be embedded naturally in a temporal constraint framework to handle this kind of preference. An intuitive way to express preferences consists of providing a set of goals, each of which is a propositional formula, possibly with extra information such as priorities or weights. Candidates in this setting are variable assignments, which may satisfy or violate each goal. A weighted goal is a propositional logic formula plus a real-valued weight. The utility of a candidate is then computed by collecting the weights of satisfied and violated goals, and then aggregating them. Often only violated goals count, and their utilities are aggregated with functions such as sum or maximin. In other cases, we may sum the weights of the satisfied goals, or we may take their maximum weight. Any restriction we may impose on the goals or the weights, and any choice of an aggregation function, give a different language. Such languages may have drastically different properties in terms of their expressivity, succinctness, and computational complexity. In the quantitative direction typical of soft constraints, there are also other frameworks to model preferences. The most widely used assume some form of independence among the variables, such as mutual preferential independence, which allows preferences to be represented by an additive utility function in deterministic decision making, or utility independence, which assures an additive representation for general scenarios. However, this assumption often does not hold in practice since there is usually some interaction among the variables.
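Before the discussion turns to interactions among variables, the weighted-goals language just described can be illustrated with a small sketch (ours). Each goal is a propositional formula with a real-valued weight, and a candidate's penalty aggregates the weights of the goals it violates; the goals, weights, and choice of aggregation function below are illustrative assumptions.

```python
# Weighted goals: each goal is a propositional formula (here a predicate over a variable
# assignment) with a real-valued weight; a candidate's penalty aggregates the weights of
# the goals it violates. Goals, weights, and the aggregation choice are toy assumptions.

from typing import Callable, Dict, List, Tuple

Candidate = Dict[str, bool]
Goal = Tuple[Callable[[Candidate], bool], float]   # (formula, weight)

goals: List[Goal] = [
    (lambda c: not (c["drive_fast"] and c["country_road"]), 5.0),  # drive slowly on country roads
    (lambda c: c["headlights_on"] or not c["night"], 3.0),         # lights on at night
    (lambda c: c["drive_fast"], 1.0),                              # mild preference for speed
]

def penalty(c: Candidate, aggregate=sum) -> float:
    """Aggregate the weights of violated goals; lower is better.
    Swapping `aggregate` for `max` yields a different language with different properties."""
    violated = [w for (formula, w) in goals if not formula(c)]
    return aggregate(violated) if violated else 0.0

candidates = [
    {"drive_fast": True,  "country_road": True, "night": True, "headlights_on": True},
    {"drive_fast": False, "country_road": True, "night": True, "headlights_on": True},
]
best = min(candidates, key=penalty)
print(best, penalty(best))   # the slow candidate wins: it only violates the weight-1 goal
```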
To account for this, models based on interdependent value additivity have been defined which allow for some interaction between the variables while preserving some decomposability. This notion of independence, also called generalized additive independence (GAI), allows for the definition of utility functions which take the form of a sum of utilities over subsets of the variables. GAI decompositions can be represented by a graphical structure, called a GAI net, which models the interaction among variables, and it is similar to the dependency graph of a CP-net or to the junction graph of a Bayesian network. GAI decompositions have been used to provide CP-nets with utility functions, obtaining the so-called UCP networks. \n Preferences and ethical principles in collective decision making systems If agents and humans are to be part of a hybrid collective decision making system, and thus make collective decisions based on their preferences over the possible outcomes, can the ethical principles for such a decision system be modelled just like the preferences of another dummy agent, or should they be represented and treated differently? Are the knowledge representation formalisms that are usually used in AI to model preferences suitable to model values as well, or should we use something completely different? A very simple form of values could be modelled by constraints, so that only feasible outcomes can be the result of a collective decision process. But values and ethical principles could often take a graded form, thus resembling a kind of preference. Also, should individual and collective ethical principles be modelled differently? We believe that some of the answers to these questions may exploit the existing literature on preference aggregation (Rossi, Venable, and Walsh 2011). Indeed, an important aspect of reasoning about preferences is preference aggregation. In multi-agent systems, we often need to combine the preferences of several agents. More precisely, preferences are often used in collective decision making when multiple agents need to choose one out of a set of possible decisions: each agent expresses its preferences over the possible decisions, and a centralized system aggregates such preferences to determine the \"winning\" decision. Preferences are also the subject of study in social choice, especially in the area of elections and voting theory (Arrow, Sen, and Suzumura 2002). In an election, the voters express their preferences over the candidates and a voting rule is used to elect the winning candidate. Economists, political theorists, mathematicians, and philosophers have invested considerable effort in studying this scenario and have obtained many theoretical results about the desirable properties of the voting rules that one can use. Since the voting setting is closely related to multi-agent decision making, in recent years the area of multi-agent systems has witnessed a growing interest in trying to reuse social choice results in the multi-agent setting. However, it soon became clear that an adaptation of such results is necessary, since several issues, which are typical of multi-agent settings and AI scenarios, usually do not occur, or have a smaller impact, in typical voting situations. In a multi-agent system, the set of candidates can be very large with respect to the set of voters. Usually in social choice it is the opposite: there are many voters and a small number of candidates.
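A toy sketch of the aggregation setting just described (ours): each agent submits a ranking over candidate decisions, an ethical principle modelled as a hard constraint removes inadmissible candidates, and a standard voting rule (here, Borda count) aggregates the remaining preferences. The candidates, rankings, and constraint are illustrative assumptions.

```python
# Collective decision making with an ethical admissibility filter plus Borda aggregation.
# Candidates, agent rankings, and the admissible set are illustrative assumptions.

from collections import defaultdict

candidates = ["plan_A", "plan_B", "plan_C"]
ethically_admissible = {"plan_A", "plan_B"}   # e.g. plan_C violates a shared principle

# Each agent's ranking over candidate decisions, best first.
rankings = [
    ["plan_C", "plan_A", "plan_B"],
    ["plan_A", "plan_C", "plan_B"],
    ["plan_B", "plan_A", "plan_C"],
]

def borda_winner(rankings, admissible):
    scores = defaultdict(int)
    for ranking in rankings:
        # Restrict each ranking to admissible candidates before scoring.
        restricted = [c for c in ranking if c in admissible]
        for position, c in enumerate(restricted):
            scores[c] += len(restricted) - 1 - position
    return max(scores, key=scores.get), dict(scores)

winner, scores = borda_winner(rankings, ethically_admissible)
print(winner, scores)   # plan_A wins once the inadmissible plan_C is filtered out
```

Whether ethical principles should enter as such a filter, as the preferences of an extra dummy agent, or in some graded form is exactly the modelling question raised above.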
Also, in many AI scenarios, the candidates often have a combinatorial structure. That is, they are defined via a combination of features. Moreover, the preferences over the features are often dependent on each other. In social choice, usually the candidates are tokens with no structure. In addition, for multi-issue elections, the issues are usually independent of each other. This combinatorial structure allows for the compact modelling of the preferences over the candidates. Therefore, several formalisms have been developed in AI to model such preference orderings. In social choice, little emphasis is put on how to model preferences, since there are few candidates, so one can usually explicitly specify a linear order. In AI, a preference ordering is not necessarily linear, but it may include indifference and incomparability. Moreover, often uncertainty is present, for example in the form of missing or imprecise preferences. In social choice, usually all preferences are assumed to be present, and a preference order over all the candidates is a linear order that is explicitly given as a list of candidates. Finally, multi-agent systems must consider the computational properties of the system. In social choice this has usually not been a crucial issue. It is therefore very interesting to study how social choice and AI can fruitfully cooperate to give innovative and improved solutions to aggregating preferences of multiple agents. In our effort, since we intend to deal with ethical issues in collective decision making, we need to understand what modifications to the usual preference aggregation scenario should be made to account for them, and how they can be handled satisfactorily when making collective decisions. Collective decision making in the presence of feasibility constraints is starting to be considered in the literature (Grandi et al. 2014). However, ethical principles and safety constraints will be much more complex than just a set of constraints, so we need to understand the computational and expressiveness issues arising in this scenario.", "date_published": "n/a", "url": "n/a", "filename": "12457-56397-1-PB.tei.xml", "abstract": "The future will see autonomous machines acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. Think of self-driving cars, companion robots, and medical diagnosis support systems. We also believe that humans and machines will often need to work together and agree on common decisions. Thus hybrid collective decision making systems will be greatly needed. In this scenario, both machines and collective decision making systems should follow some form of moral values and ethical principles (appropriate to where they will act but always aligned to humans'), as well as safety constraints. In fact, humans would more readily accept and trust machines that behave as ethically as other humans in the same environment. Also, these principles would make it easier for machines to determine their actions and explain their behavior in terms understandable by humans. Moreover, often machines and humans will need to make decisions together, either through consensus or by reaching a compromise. 
This would be facilitated by shared moral values and ethical principles.", "id": "a0cf7d134598cca1f2cee9cb66a89f91"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Rachel Freedman", "Rohin Shah", "Anca Dragan"], "title": "Choice Set Misspecification in Reward Inference", "text": "Introduction Specifying reward functions for robots that operate in environments without a natural reward signal can be challenging, and incorrectly specified rewards can incentivise degenerate or dangerous behavior [Leike et al., 2018; Krakovna, 2018] . A promising alternative to manually specifying reward functions is to design techniques that allow robots to infer them from observing and interacting with humans. Figure 1 : Example choice set misspecification: The human chooses a pack of peanuts at the supermarket. They only notice the expensive one because it has flashy packaging, so that's the one they buy. However, the robot incorrectly assumes that the human can see both the expensive flashy one and the cheap one with dull packaging but extra peanuts. As a result, the robot incorrectly infers that the human likes flashy packaging, paying more, and getting fewer peanuts. These techniques typically model humans as optimal or noisily optimal. Unfortunately, humans tend to deviate from optimality in systematically biased ways [Kahneman and Tversky, 1979; Choi et al., 2014] . Recent work improves upon these models by modeling pedagogy [Hadfield-Menell et al., 2016] , strategic behavior [Waugh et al., 2013] , risk aversion [Majumdar et al., 2017] , hyperbolic discounting [Evans et al., 2015] , or indifference between similar options [Bobu et al., 2020b] . However, given the complexity of human behavior, our human models will likely always be at least somewhat misspecified [Steinhardt and Evans, 2017] . One way to formally characterize misspecification is as a misalignment between the real human and the robot's assumptions about the human. Recent work in this vein has examined incorrect assumptions about the human's hypothesis space of rewards [Bobu et al., 2020a] , their dynamics model of the world [Reddy et al., 2018] , and their level of pedagogic behavior [Milli and Dragan, 2019] . In this work, we identify another potential source of misalignment: what if the robot is wrong about what feedback the human could have given? Consider the situation illustrated in Figure 1 , in which the robot observes the human going grocery shopping. While the grocery store contains two packages of peanuts, the human only notices the more expensive version with flashy packaging, and so buys that one. If the robot doesn't realize that the human was effectively unable to evaluate the cheaper package on its merits, it will learn that the human values flashy packaging. We formalize this in the recent framework of rewardrational implicit choice (RRiC) [Jeon et al., 2020] as misspecification in the human choice set, which specifies what feedback the human could have given. Our core contribution is to categorize choice set misspecification into several formally and empirically distinguishable \"classes\", and find that different types have significantly different effects on performance. As we might expect, misspecification is usually harmful; in the most extreme case the choice set is so misspecified that the robot believes the human feedback was the worst possible feedback for the true reward, and so updates strongly towards the opposite of the true reward. 
Surprisingly, we find that under other circumstances misspecification is provably neutral: it neither helps nor hurts performance in expectation. Crucially, these results suggest that not all misspecification is equivalently harmful to reward inference: we may be able to minimize negative impact by systematically erring toward particular misspecification classes defined in this work. Future work will explore this possibility. \n Reward Inference There are many ways that a human can provide feedback to a robot: demonstrations [Ng and Russell, 2000; Abbeel and Ng, 2004; Ziebart, 2010], comparisons [Sadigh et al., 2017; Christiano et al., 2017], natural language [Goyal et al., 2019], corrections [Bajcsy et al., 2017], the state of the world [Shah et al., 2019], proxy rewards [Hadfield-Menell et al., 2017; Mindermann et al., 2018], etc. Jeon et al. propose a unifying formalism for reward inference to capture all of these possible feedback modalities, called reward-rational (implicit) choice (RRiC). Rather than study each feedback modality separately, we study misspecification in this general framework. RRiC consists of two main components: the human's choice set, which corresponds to what the human could have done, and the grounding function, which converts choices into (distributions over) trajectories so that rewards can be computed. For example, in the case of learning from comparisons, the human chooses which out of two trajectories is better. Thus, the human's choice set is simply the set of trajectories they are comparing, and the grounding function is the identity. A more complex example is learning from the state of the world, in which the robot is deployed in an environment in which a human has already acted for T timesteps, and must infer the human's preferences from the current world state. In this case, the robot can interpret the human as choosing between different possible states. Thus, the choice set is the set of possible states that the human could reach in T timesteps, and the grounding function maps each such state to the set of trajectories that could have produced it. Let ξ denote a trajectory and Ξ denote the set of all possible trajectories. Given a choice set C for the human and grounding function ψ : C → (Ξ → [0, 1]), Jeon et al. define a procedure for reward learning. They assume that the human is Boltzmann-rational with rationality parameter β, so that the probability of choosing any particular feedback is given by: P(c | θ, C) = exp(β · E_{ξ∼ψ(c)}[r_θ(ξ)]) / Σ_{c'∈C} exp(β · E_{ξ∼ψ(c')}[r_θ(ξ)])   (1). From the robot's perspective, every piece of feedback c is an observation about the true reward parameterization θ*, so the robot can use Bayesian inference to infer a posterior over θ. Given a prior over reward parameters P(θ), the RRiC inference procedure is defined as: P(θ | c, C) ∝ ( exp(β · E_{ξ∼ψ(c)}[r_θ(ξ)]) / Σ_{c'∈C} exp(β · E_{ξ∼ψ(c')}[r_θ(ξ)]) ) · P(θ)   (2). Since we care about misspecification of the choice set C, we focus on learning from demonstrations, where we restrict the set of trajectories that the expert can demonstrate. This enables us to have a rich choice set, while allowing for a simple grounding function (the identity). In future work, we aim to test choice set misspecification with other feedback modalities as well. \n Choice Set Misspecification For many common forms of feedback, including demonstrations and proxy rewards, the RRiC choice set is implicit. The robot knows which element of feedback the human provided (e.g.,
which demonstration they performed), but must assume which elements of feedback the human could have provided based on their model of the human. However, this assumption could easily be incorrect - the robot may assume that the human has capabilities that they do not, or may fail to account for cognitive biases that blind the human to particular feedback options, such as the human bias towards the most visually attention-grabbing choice in Fig 1. To model such effects, we assume that the human selects feedback c ∈ C Human according to P(c | θ, C Human), while the robot updates their belief assuming a different choice set C Robot to get P(θ | c, C Robot). Note that C Robot is the robot's assumption about what the human's choice set is - this is distinct from the robot's action space. When C Human ≠ C Robot, we get choice set misspecification. It is easy to detect such misspecification when the human chooses feedback c ∉ C R. In this case, the robot observes a choice that it believes to be impossible, which should certainly be grounds for reverting to some safe baseline policy. So, we only consider the case where the human's choice c is also present in C R (which also requires C H and C R to have at least one element in common). Within these constraints, we propose a classification of types of choice set misspecification in Table 1. On the vertical axis, misspecification is classified according to the location of the optimal element of feedback c * = argmax c∈C R ∪C H E ξ∼ψ(c) [r θ * (ξ)]. (Table 1: the columns are C R ⊂ C H, C R ⊃ C H, and C R ∩ C H; the row c * ∈ C R ∩ C H contains classes A1, A2, and A3 respectively, and the row c * ∈ C R \\C H contains classes B2 and B3.) If c * is available to the human (in C H), then the class code begins with A. We only consider the case where c * is also in C R: the case where it is in C H but not C R is uninteresting, as the robot would observe the \"impossible\" event of the human choosing c *, which immediately demonstrates misspecification, at which point the robot should revert to some safe baseline policy. If c * ∉ C H, then we must have c * ∈ C R (since it was chosen from C H ∪ C R), and the class code begins with B. On the horizontal axis, misspecification is classified according to the relationship between C R and C H. C R may be a subset (code 1), superset (code 2), or intersecting class (code 3) of C H. For example, class A1 describes the case in which the robot's choice set is a subset of the human's (perhaps because the human is more versatile), but both choice sets contain the optimal choice (perhaps because it is obvious). \n Experiments To determine the effects of misspecification class, we artificially generated C R and C H with the properties of each particular class, simulated human feedback, ran RRiC reward inference, and then evaluated the robot's resulting belief distribution and optimal policy. \n Experimental Setup Environment To isolate the effects of misspecification and allow for computationally tractable Bayesian inference, we ran experiments in toy environments. We ran the randomized experiments in the four 20 × 20 gridworlds shown in Fig 2. Each square in environment x is a state s x = {lava, goal}. lava ∈ [0, 1] is a continuous feature, while goal ∈ {0, 1} is a binary feature set to 1 in the lower-right square of each grid and 0 everywhere else. The true reward function r θ * is a linear combination of these features and a constant stay-alive cost incurred at each timestep, parameterized by θ = (w lava, w goal, w alive).
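The pieces above fit together in a short numerical sketch (ours): trajectories are summarized by lava/goal/stay-alive features, rewards are linear in those features as just described, and the robot applies the RRiC likelihood and posterior of Eqs. 1 and 2 under an assumed choice set. The trajectory summaries, the candidate parameterizations, and β are illustrative placeholders.

```python
# Toy sketch of RRiC inference (Eqs. 1 and 2) with a linear lava/goal/stay-alive reward:
# r_theta(xi) = w_lava * lava(xi) + w_goal * goal(xi) + w_alive * steps(xi).
# Trajectory summaries, candidate thetas, and beta are illustrative placeholders.

import numpy as np

# Each demonstration xi is summarized by features (total lava, goal reached, timesteps).
trajectory_features = {
    "short_through_lava": np.array([3.0, 1.0, 10.0]),
    "long_around_lava":   np.array([0.0, 1.0, 30.0]),
    "medium":             np.array([1.0, 1.0, 18.0]),
}

# Discrete hypothesis space over theta = (w_lava, w_goal, w_alive); values are toy choices.
thetas = [np.array([-1.0, 2.0, -0.05]),   # mostly dislikes lava
          np.array([-0.1, 2.0, -0.50])]   # mostly dislikes long episodes
prior = np.array([0.5, 0.5])
beta = 1.0

def reward(theta, c):
    return float(theta @ trajectory_features[c])

def likelihood(c, theta, choice_set):
    """Eq. 1: Boltzmann-rational choice probability over the assumed choice set."""
    z = sum(np.exp(beta * reward(theta, other)) for other in choice_set)
    return float(np.exp(beta * reward(theta, c)) / z)

def posterior(c, choice_set):
    """Eq. 2: posterior over theta given the observed choice and an assumed choice set."""
    unnorm = np.array([likelihood(c, th, choice_set) * p for th, p in zip(thetas, prior)])
    return unnorm / unnorm.sum()

C_robot = ["short_through_lava", "long_around_lava", "medium"]  # robot's assumed choice set
C_human = ["short_through_lava", "medium"]                      # human never saw the detour

# Same observed demonstration, two different assumed choice sets: the inferred beliefs
# differ, which is exactly the choice set misspecification studied here.
print(posterior("medium", C_robot))
print(posterior("medium", C_human))
```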
Each episode begins with the robot in the upper-left corner and ends once the robot reaches the goal state or episode length reaches the horizon of 35 timesteps. Robot actions A R move the robot one square in a cardinal or diagonal direction, with actions that would move the robot off of the grid causing it to remain in place. The transition function T is deterministic. Environment x defines an MDP M x = S x , A R , T, r θ * . Inference While the RRiC framework enables inference from many different types of feedback, we use demonstration feedback here because demonstrations have an implicit choice set and straightforward deterministic grounding. Only the human knows their true reward function parameterization θ * . The robot begins with a uniform prior distribution over reward parameters P(θ) in which w lava and w alive vary, but w goal always = 2.0. P(θ) contains θ * . RRiC inference proceeds as follows for each choice set tuple C R , C H and environment x. First, the simulated human selects the best demonstration from their choice set with respect to the true reward c H = argmax c∈C H E ξ∼ψ(c) [r θ * (ξ))]. Then, the simulated robot uses Eq. 2 to infer a \"correct\" distribution over reward parameterizations B H (θ) P(θ | c, C H ) using the true human choice set, and a \"misspecified\" distribution B R (θ) P(θ | c, C R ) using the misspecified human choice set. In order to evaluate the effects of each distribution on robot behavior, we define new MDPs M x H = S x , A R , T, r E[B H (θ)] and M x R = S x , A R , T, r E[B R (θ)] for each environment, solve them using value iteration, and then evaluate the rollouts of the resulting deterministic policies according to the true reward function r θ * . \n Randomized Choice Sets We ran experiments with randomized choice set selection for each misspecification class to evaluate the effects of class on entropy change and regret. \n Conditions The experimental conditions are the classes of choice set misspecification in Table 1: A1, A2, A3, B2 and B3. We tested each misspecification class on each environment, then averaged across environments to evaluate each class. For each environment x, we first generated a master set C x M of all demonstrations that are optimal w.r. If entropy change is positive, then misspecification induces overconfidence, and if it is negative, then misspecification induces underconfidence. Regret is the difference in return between the optimal solution to M x H , with the correctly-inferred reward parameterization, and the optimal solution to M x R , with the incorrectlyinferred parameterization, averaged across all 4 environments. If ξ * x H is an optimal trajectory in M x H and ξ * x R is an optimal trajectory in M x R , then regret = 1 4 3 x=0 [r θ * (ξ * x H )− r θ * (ξ * x R )] . Note that we are measuring regret relative to the optimal action under the correctly specified belief, rather than optimal action under the true reward. As a result, it is possible for regret to be negative, e.g. if the misspecification makes the robot become more confident in the true reward than it would be under correct specification, and so execute a better policy. \n Biased Choice Sets We also ran an experiment in a fifth gridworld where we select the human choice set with a realistic human bias to illustrate how choice set misspecification may arise in practice. In this experiment the human only considers demonstrations that end at the goal state because, to humans, the word \"goal\" can be synonymous with \"end\" (Fig 3a ). 
However, to the robot, the goal is merely one of multiple features in the environment. The robot has no reason to privilege it over the other features, so the robot considers every demonstration that is optimal w.r.t. some possible reward parameterization (Fig 3b). The trajectory that only the robot considers is marked in blue. We ran RRiC inference using this C R , C H and evaluated the results using the same measures described above. \n Results We summarize the aggregated measures, discuss the realistic human bias result, then examine two interesting results: symmetry between classes A1 and A2 and high regret in class B3. \n Regret Regret also varied as a function of misspecification class. Each class had a median regret of 0, suggesting that misspecification commonly did not induce a large enough shift in belief for the robot to learn a different optimal policy. However the mean regret, plotted as green lines in Fig 5, did vary markedly across classes. Regret was sometimes so high in class B3 that outliers skewed the mean regret beyond the whiskers of the boxplot. Again, classes A1 and A2 are precisely symmetric. We discuss this symmetry in Section 5.3, then discuss the poor performance of B3 in Section 5.4. Figure 6: Human feedback and the resulting misspecified robot belief with a human goal bias. Because the feedback that the biased human provides is poor, the robot learns a very incorrect distribution over rewards. \n Effects of Biased Choice Sets The human bias of only considering demonstrations that terminate at the goal leads to very poor inference in this environment. Because the human does not consider the blue demonstration from Fig 3b, which avoids the lava altogether, they are forced to provide the demonstration in Fig 6a, which terminates at the goal but is long and encounters lava. As a result, the robot infers the very incorrect belief distribution in Fig 6b. Not only is this distribution underconfident (entropy change = −0.614), but it also induces poor performance (regret = 0.666). This result shows that we can see an outsized negative impact on robot reward inference with a small incorrect assumption that the human considered and rejected demonstrations that don't terminate at the goal. \n Symmetry Intuitively, misspecification should lead to worse performance in expectation. Surprisingly, when we combine misspecification classes A1 and A2, their impact on entropy change and regret is actually neutral. The key to this is their symmetry - if we switch the contents of C Robot and C Human in an instance of class A1 misspecification, we get an instance of class A2 with exactly the opposite performance characteristics. Thus, if a pair in A1 is harmful, then the analogous pair in A2 must be helpful, meaning that it is better for performance than having the correct belief about the human's choice set. We show below that this is always the case under certain symmetry conditions that apply to A1 and A2. Assume that there is a master choice set C M containing all possible elements of feedback for MDP M, and that choice sets are sampled from a symmetric distribution over pairs of subsets D : 2^{C M} × 2^{C M} → [0, 1] with D(C x , C y ) = D(C y , C x ) (where 2^{C M} is the set of subsets of C M). Let ER(r θ , M) be the expected return from maximizing the reward function r θ in M. A reward parameterization is chosen from a shared prior P(θ) and C H , C R are sampled from D. The human chooses the optimal element of feedback in their choice set c C H = argmax c∈C H E ξ∼ψ(c) [r θ * (ξ)]. Theorem 1. Let M and D be defined as above. Assume that ∀ C x , C y ∼ D, we have c Cx = c Cy ; that is, the human would pick the same feedback regardless of which choice set she sees. If the robot follows RRiC inference according to Eq. 2 and acts to maximize expected reward under the inferred belief, then: E C H ,C R ∼D [Regret(C H , C R )] = 0. Proof. Define R(C x , c) to be the return achieved when the robot follows RRiC inference with choice set C x and feedback c, then acts to maximize r E[B x (θ)], keeping β fixed.
Since the human's choice is symmetric across D, for any C x , C y ∼ D, regret is anti-symmetric: Regret(C x , C y ) = R(C x , c Cx ) − R(C y , c Cx ) = R(C x , c Cy ) − R(C y , c Cy ) = −Regret(C y , C x ) Since D is symmetric, C x , C y is as likely as C y , C x . Combined with the anti-symmetry of regret, this implies that the expected regret must be zero: E Cx,Cy∼D [Regret(C x , C y )] = 1 2 E Cx,Cy [Regret(C x , C y )] + 1 2 E Cx,Cy [Regret(C y , C x )] = 1 2 E Cx,Cy [Regret(C x , C y )] − 1 2 E Cx,Cy [Regret(C x , C y )] = 0 An analogous proof would work for any anti-symmetric measure (including entropy change). \n Worst Case As shown in Table 4 , class B3 misspecification can induce regret an order of magnitude worse than the maximum regret induced by classes A3 and B2, which each differ from B3 along a single axis. This is because the worst case inference occurs in RRiC when the human feedback c H is the worst element of C R , and this is only possible in class B3. In class The axes represent the weights on the lava and alive features and the space of possible parameterizations lies on the circle where w lava + w alive = 1. The opacity of the gold line is proportional to the weight that P(θ) places on each parameter combination. The true reward has w lava , w alive < 0, whereas the peak of this distribution has w lava < 0, but w alive > 0. This is because C R2 contains shorter trajectories that encounter the same amount of lava, and so the robot infers that c H must be preferred in large part due to its length. \n Discussion Summary In this work, we highlighted the problem of choice set misspecification in generalized reward inference, where a human gives feedback selected from choice set C Human but the robot assumes that the human was choosing from choice set C Robot . As expected, such misspecification on average induces suboptimal behavior resulting in regret. However, a different story emerged once we distinguished between misspecification classes. We defined five distinct classes varying along two axes: the relationship between C Human and C Robot and the location of the optimal element of feedback c * . We empirically showed that different classes lead to different types of error, with some classes leading to overconfidence, some to underconfidence, and one to particularly high regret. Surprisingly, under certain conditions the expected regret under choice set misspecification is actually 0, meaning that in expectation, misspecification does not hurt in these situations. Implications There is wide variance across the different types of choice-set misspecification: some may have particularly detrimental effects, and others may not be harmful at all. This suggests strategies for designing robot choice sets to minimize the impact of misspecification. For example, we find that regret tends to be negative (that is, misspecification is helpful) when the optimal element of feedback is in both C Robot and C Human and C Robot ⊃ C Human (class A2). Similarly, worst-case inference occurs when the optimal element of feedback is in C Robot only, and C Human contains elements that are not in C Robot (class B3). This suggests that erring on the side of specifying a large C Robot , which makes A2 more likely and B3 less, may lead to more benign misspecification. Moreover, it may be possible to design protocols for the robot to identify unrealistic choice set-feedback combinations and verify its choice set with the human, reducing the likelihood of misspecification in the first place. 
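Returning briefly to Theorem 1: for readability, the anti-symmetry step of the proof can be typeset with its subscripts restored (this is a restatement of the argument already given above, not a new result).

```latex
% Anti-symmetry of regret under the assumptions of Theorem 1:
% R(C_x, c) is the return after RRiC inference with choice set C_x and feedback c,
% c_{C_x} is the human's choice from C_x, and by assumption c_{C_x} = c_{C_y}.
\begin{align*}
\mathrm{Regret}(C_x, C_y)
  &= R(C_x, c_{C_x}) - R(C_y, c_{C_x})
   = R(C_x, c_{C_y}) - R(C_y, c_{C_y})
   = -\mathrm{Regret}(C_y, C_x), \\
\mathbb{E}_{C_x, C_y \sim D}\!\left[\mathrm{Regret}(C_x, C_y)\right]
  &= \tfrac{1}{2}\,\mathbb{E}\!\left[\mathrm{Regret}(C_x, C_y)\right]
   + \tfrac{1}{2}\,\mathbb{E}\!\left[\mathrm{Regret}(C_y, C_x)\right]
   = 0.
\end{align*}
% The last step uses the symmetry of D: the pair (C_x, C_y) is as likely as (C_y, C_x),
% so the two expectations cancel; the same argument applies to any anti-symmetric measure.
```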
We plan to investigate this in future work. \n Limitations and future work In this paper, we primarily sampled choice sets randomly from the master choice set of all possibly optimal demonstrations. However, this is not a realistic model. In future work, we plan to select human choice sets based on actual human biases to improve ecological validity. We also plan to test this classification and our resulting conclusions in more complex and realistic environments. Eventually, we plan to work on active learning protocols that allow the robot to identify when its choice set is misspecified and alter its beliefs accordingly. \n Figure 2: The set of four gridworlds used in randomized experiments, with the lava feature marked in red. \n Figure 3: Human and robot choice sets with a human goal bias. Because the human only considers trajectories that terminate at the goal, they don't consider the blue trajectory in CR. \n Figure 4: Entropy Change (N=24). The box is the IQR, the whiskers are the range, and the blue line is the median. There are no outliers. \n Figure 5: Regret (N=24). The box is the IQR, the whiskers are the most distant points within 1.5 times the IQR, and the green line is the mean. Multiple outliers are omitted. \n Figure 6 panels: (a) feedback cH; (b) P(θ | cH , CR). \n Figure 7: Example human choice set and corresponding feedback. \n Fig 9a shows an example robot choice set C R3 from B3, and Fig 9b shows the inferred P(θ | c H , C R3). Note that the peak of this distribution has w lava , w alive > 0. Since c H is the longest and the highest-lava trajectory in C R3 , and alternative shorter and lower-lava trajectories exist in C R3 , the robot infers that the human is attempting to maximize both trajectory length and lava encountered: the opposite of the truth. Unsurprisingly, maximizing expected reward for this belief leads to high regret. The key difference between B2 and B3 is that c H is the lowest-reward element in C R3 , resulting in the robot updating directly away from the true reward. \n Figure 8: Robot choice set and resulting misspecified belief in B2. \n Figure 9: Robot choice set and resulting misspecified belief in B3. \n Table 1: Choice set misspecification classification, where CR is the robot's assumed choice set, CH is the human's actual choice set, and c * is the optimal element from CR ∪ CH . B1 is omitted because if CR ⊂ CH , then CR\\CH is empty and cannot contain c * . \n (Continuation of the Conditions paragraph above:) …optimal w.r.t. at least one reward parameterization θ. For each experimental class, we randomly generated 6 valid C R , C H tuples, with C R , C H ⊆ C x M . Duplicate tuples, or tuples in which c H ∉ C R , were not considered. \n Table 2: Entropy change is symmetric across classes A1 and A2. (Class, Mean, Std, Q1, Q3): A1: 0.04, 0.4906, 0.1664, 0.0; A2: -0.04, 0.4906, 0.0, -0.1664. \n Table 3: Regret is symmetric across classes A1 and A2.
\n Table 4 : 4 Regret comparison showing that class B3 has much higher regret than neighboring classes.", "date_published": "n/a", "url": "n/a", "filename": "paper_14.tei.xml", "abstract": "Specifying reward functions for robots that operate in environments without a natural reward signal can be challenging, and incorrectly specified rewards can incentivise degenerate or dangerous behavior. A promising alternative to manually specifying reward functions is to enable robots to infer them from human feedback, like demonstrations or corrections. To interpret this feedback, robots treat as approximately optimal a choice the person makes from a choice set, like the set of possible trajectories they could have demonstrated or possible corrections they could have made. In this work, we introduce the idea that the choice set itself might be difficult to specify, and analyze choice set misspecification: what happens as the robot makes incorrect assumptions about the set of choices from which the human selects their feedback. We propose a classification of different kinds of choice set misspecification, and show that these different classes lead to meaningful differences in the inferred reward and resulting performance. While we would normally expect misspecification to hurt, we find that certain kinds of misspecification are neither helpful nor harmful (in expectation). However, in other situations, misspecification can be extremely harmful, leading the robot to believe the opposite of what it should believe. We hope our results will allow for better prediction and response to the effects of misspecification in real-world reward inference.", "id": "13f847c9d7d81ae8fab6debd5c7c1dca"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Thomas L Griffiths", "Frederick Callaway", "Michael B Chang", "Erin Grant", "Paul M Krueger", "Falk Lieder", "Matthew M Botvinick", "Samuel J Gershman"], "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "1-s2.0-S2352154618302122-main.tei.xml", "abstract": "Artificial intelligence systems use an increasing amount of computation and data to solve very specific problems. By contrast, human minds solve a wide range of problems using a fixed amount of computation and limited experience. We identify two abilities that we see as crucial to this kind of general intelligence: meta-reasoning (deciding how to allocate computational resources) and meta-learning (modeling the learning environment to make better use of limited data). We summarize the relevant AI literature and relate the resulting ideas to recent work in psychology.", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Peter Cihon", "Matthijs M Maas", "Luke Kemp"], "title": "Fragmentation and the Future: Investigating Architectures for International AI Governance", "text": "AI has the potential to dramatically alter the world for good or ill. These high stakes have driven a recent flurry of international AI policy making at the OECD, G7, G20, and multiple UN institutions. Scholarship has not kept pace with diplomacy. AI governance research to date has predominantly focused on national and sub-national levels (Calo, 2017) . AI global governance research remains relatively nascent, focusing mostly on the proliferation of AI ethics principles (Jobin et al., 2019) and stocktaking of ongoing initiatives (Garcia, 2020; Schiff et al., 2020) . Kemp et al. 
(2019) have called for specialised, centralised intergovernmental agencies to coordinate policy responses globally. Others have called for a centralised 'International Artificial Intelligence Organisation' (Erdelyi and Goldsmith, 2018) or an international coordinating mechanism under the G20 (Jelinek et al., 2020) . Conversely, some scholars favour more decentralised arrangements based around soft law, global standards, or existing international law instruments or UN multilateral organisations (Cihon, 2019; Garcia, 2020; Kunz and O h Eigeartaigh, 2020; Wallach and Marchant, 2018) . This paper takes the initial step of considering the question: Should AI governance be centralised? The form of an international regime 1. will fundamentally impact its operation and effectiveness. This includes the critical question of how an institutional form 'fits' the underlying problem (Ekstrom and Crona, 2017; Young, 2002) . Questions of regime centralisation have occupied scholars and international negotiations for decades. The US diplomat George Kennan (1970) proposed the establishment of an 'International Environmental Agency' as an initial step towards an International Environmental Authority. The vexing question of whether to have a centralised body for environmental governance continued 42 years later during the Rio + 20 negotiations. There remains significant debate as to how much form affects performance and what level of centralisation is preferable, but there is little doubt that it is an important consideration for international regimes (Biermann and Kim, 2020) . Centralisation is also a neglected area of examination for AI governance. The debate over form is in its infancy for AI with a few proposals for centralised regimes in academic literature and submissions to international processes (Jelinek et al., 2020; Kemp et al., 2019) . Yet it seems unlikely that AI will be immune to increasing discussions and eventual political pushes for regime centralisation. Future negotiations over the form of AI governance will benefit immensely from early analysis. 'Centralisation', in this case, refers to the degree to which the coordination, oversight and/or regulation of a set of AI policy issues or technologies are housed under a single institution. Centralisation is relevant for policy makers and academics alike. A recent report by the UN Secretary General lamented the lack of coordination and inclusion among AI-related initiatives (United Nations Secretary-General, 2020) . Early research and anticipatory initiatives may sensitively influence the path governance takes (Stilgoe et al., 2013) . Scholars have a unique opportunity to be norm entrepreneurs and shape the emerging institutions through proactive, rather than retrospective, work on AI governance. The importance of this proactive approach has been emphasised for emerging technologies more broadly (Rayfuse, 2017) . Moreover, choices made today may have long-lasting impacts as AI development continues (Cave and O h Eigeartaigh, 2019) . In this paper, we explore the advantages and disadvantages of centralisation for AI governance. The defining problems of AI governance are threefold. The first is the political economy challenge and the importance of non-state actors' expertise in AI. The second is the need for anticipatory governance and technological foresight. The third is the variety and range of different AI applications, technologies, and policy problems. 
Our analysis hinges on a comparison with international regimes in three other domain areas, which display these core challenges, specifically environment, trade, and security. These three governance domains, while certainly distinct in important ways, are also arguably similar to AI governance across these dimensions: environmental governance invokes complex scientific questions that require technical expertise, has a broad scope encompassing transboundary and transsector effects, and includes a need for anticipation of future trends and impacts. Trade regimes span across a breadth of individual industries, and involve questions of standard-setting. Security and arms control regimes confront high-stakes situations and strategic interests, and a recurring need to 'modernise' regimes to track ongoing technological change. All three governance domains face questions of institutional inequalities. Finally, these regimes have been the subject of a rich literature exploring fragmentation and centralisations. We first outline the international governance challenges of AI, and review early proposed responses. We then draw on the conceptual frameworks of 'regime fragmentation' (Biermann et al., 2009) and 'regime complexes' (G omez-Mera et al., 2020; Orsini et al., 2013) , and their application to the history of other international regimes, to identify considerations in designing a centralised regime complex for AI. We conclude with two practical recommendations. \n The state of AI governance Whether AI is a single policy area is actively debated. Some claim that AI cannot be cohesively regulated as it is a collection of disparate technologies, with different applications and risk profiles (Stone et al., 2016) . This is an important but not entirely convincing objection. The technical field has no settled definition for 'AI', 2. thus it is unsurprising that delineating a manageable scope for AI governance is difficult (Schuett, 2019) . Yet this challenge is not unique to AI: definitional issues abound in areas such as environment and energy, but have not figured prominently in debates over centralisation. Indeed, energy and environment ministries are common at the domestic level. There are numerous ways in which a centralised body could be designed for AI governance. For example, a centralised approach could carve out a subset of interlinked AI issues. This could involve focusing on the potentially high-risk applications of AI systems, such as AI-enabled cyberwarfare, the use of natural language processing for information warfare, lethal autonomous weapons systems (LAWS), or high-level machine intelligence (HLMI). 3. Another approach could govern underlying resource inputs for AI such as largescale compute hardware, software libraries, training datasets, or human talent. We are agnostic on the specifics of how centralisation could or should be implemented. We instead focus on the costs and benefits of centralisation in the abstract. The exact advantages and disadvantages of centralisation will vary with institutional design. Numerous AI issues could benefit from international cooperation. These include the high-risk applications mentioned above. It also encompasses more quotidian uses, such as AIenabled cybercrime; human health applications; safety and regulation of autonomous vehicles and drones; surveillance, privacy and data-use; and labour automation. This is not an exhaustive list of international AI policy issues. 
Global regulation across these issues is currently nascent, fragmented and evolving. OECD members and several other states agreed to a series of AI Principles, which were subsequently adopted by the G20 (OECD, 2020a). The Global Partnership on AI (GPAI) was launched by the G7 and several other states (GPAI, 2020) . The fragmented membership in these initiatives is shown in Figure 1 . A wide range of UN institutions have begun to undertake some activities on AI (ITU, 2019) . These developments are complimented by various treaty amendments, such as incorporating autonomous vehicles into the 1968 Vienna Convention on Road Traffic (Kunz and O h Eigeartaigh, 2020) or ongoing negotiations under the Convention on Certain Conventional Weapons (CCW) on LAWS. Private fora may also influence international governance (See Green and Auld, 2017) , including the Partnership on AI and IEEE's Ethically Aligned Design initiative. The UN Secretary General intends to establish a multistakeholder advisory body on global AI cooperation (United Nations Secretary-General, 2020). UNESCO, the Council of Europe, and the OECD have similarly convened multistakeholder groups tasked with drafting policy instruments (Council of Europe (COE), 2020; UNESCO, 2020; ; OECD, 2020b) . Whether these initiatives bear fruit, however, remains unclear, as many of the involved international organisations have fragmented membership, were not originally created to address AI issues and lack effective enforcement or compliance mechanisms (see Morin et al., 2019) . For instance, while the US has endorsed the OECD AI Principles and while it eventually acquiesced to the GPAI, it has remained sceptical of hard, global rules . China, another global frontrunner in AI, is not a member of either body. 4. How we initially structure international governance can be critical to its long-term success. Fragmentation and centralisation exist across a spectrum. Some fragmentation will always prevail, absent a global government. But the degree to which it prevails is crucial. Our definitions, including for fragmentation and key terms are provided in Table 1 . These definitions are by nature normatively loaded. For example, some may find 'decentralisation' to be a positive framing, while others may see 'fragmentation' to possess negative connotations. Recognising this, we use these terms in an analytical manner. \n Centralisation criteria: a history of governance trade-offs We explore a series of considerations for AI governance based on a review of existing scholarship on fragmentation (Biermann and Kim, 2020; Biermann et al., 2009; Ostrom, 2010; Zelli and Asselt, 2013) . Specifically, political power and efficient participation support centralisation. The breadth vs. depth dilemma, as well as slowness and brittleness support decentralisation. Policy coordination and forum shopping considerations can cut both ways. This list is substantive, not exhaustive, and we intend it to open a discussion of design considerations for the nascent AI regime complex. It is far from the final word. Within each consideration below, we offer definitions, relevant regime histories, and discussion of implications for AI. \n Political power Regimes embody power in their authority over rules, norms, and knowledge beyond states' exclusive control. A more centralised regime sees this power concentrated among fewer institutions. 
A centralised, powerful architecture is likely to be more influential against competing international organisations and with constituent states (Orsini et al., 2013) . Most environmental multilateral treaties, as well as UNEP, have faced sustained criticism for being unable to enact strong, effective rules or enforce them. In contrast, the umbrella of the WTO, has strongly enforced norms such as the most-favoured-nation principle (equally treating all WTO member states) have become the bedrock of international trade. Even to the extent of changing the actions of the US due to WTO rulings. The power and trackrecord of the WTO is so formidable that it has created a chilling effect: the fear of colliding with WTO norms and rules has led environmental treaties to actively avoid discussing or deploying traderelated measures (Eckersley, 2004) . The power of this centralised body has stretched beyond the domain of trade to mould related issues. This is an area of high salience for AI. The creators and chief users of AI are 'big tech' companies which are some of the largest firms in the world by market capitalisation and have already had an enormous effect in shaping government policy (Nemitz, 2018) in favour of 'surveillance capitalism' (Zuboff, 2019) . This daunting political economy challenge is perhaps the defining characteristic of AI. It seems unlikely that powerful vested economic and military interests in AI will be steered by a plethora of small bodies better than a single, well-resourced and empowered institution. Political power offers further benefits in governing emerging technologies that are inherently uncertain in both substance and impact. Uncertainty in technology and preferences has been associated with some increased centralisation in regimes (Koremenos et al., 2001a) . There may also be benefits to housing a foresight capacity within the regime complex, to allow for accelerated or even proactive efforts (Pauwels, 2019) , which would be particularly effective if centralised. \n Supporting efficiency and participation Decentralised AI governance may undermine efficiency and inhibit participation. States often create centralised regimes to reduce costs, for instance by eliminating duplicate efforts, yielding economies of scale within secretariats, and simplifying participation (Esty and Ivanova, 2002) . Conversely, fragmented regimes may force states to spread resources and funding over many distinct institutions, limiting the ability of less well-resourced parties to participate (Morin et al., 2019) . Historically, decentralised regimes have presented cost and participation concerns. Hundreds of related and sometimes overlapping international environmental agreements can create 'treaty congestion' (Anton, 2012) . This complicates participation and implementation for both developed and developing nations (Esty and Ivanova, 2002) . This includes costs associated with travel to different forums, monitoring and reporting for a range of different bodies, and duplication of effort by different secretariats (Esty and Ivanova, 2002) . Similar challenges confront decentralised export regimes, which have notable duplication of efforts (Brockmann, 2019) . These challenges are already evident in AI governance. Developing countries are not well represented at most international AI meetings (United Nations Secretary-General, 2020). Simultaneous and globally distributed meetings pose burdensome participation costs. 
Fragmented organisations must make duplicative investments in high-demand machine learning subject-matter experts to inform their activities. Centralisation would support institutional efficiency and participation. \n Table 1: Definitions. Fragmentation or decentralisation: A patchwork of international institutions which focus on a particular issue area but differ in scope, membership and often rules (Biermann et al., 2009). \n Centralisation: The degree to which governance for an issue lies under the authority of a single body. \n Regime complex: A network of three or more international regimes on a common issue area. These should have overlapping membership and cause potentially problematic interactions (Orsini et al., 2013). \n The costs and participation challenges posed by decentralisation may create particular barriers for non-state actors (Drezner, 2009). AI-related expertise is primarily located in non-state actors today, namely multinational corporations and universities. Thus, barriers to non-state-actor participation in AI governance will pose particularly acute problems for writing rules that reflect the nature and development trajectory of AI technologies. However, these barriers may not limit all non-state actors from engaging in multiple fora. Indeed, those with sufficient resources may be able to pursue strategies to their advantage (Kuyper, 2014). \n Slowness and brittleness of centralised regimes One problem of centralisation lies in the relatively slow process of establishing centralised institutions, which may often be outpaced by the rate of (technological) change. Another challenge lies in centralised institutions' brittleness after they are established, that is, their vulnerability to regulatory capture or failure to react to changes in the issue area or technology. These issues are well reflected in challenges encountered in arms control regimes. Establishing new international institutions is often a slow process, especially with higher participation and stakes. Under the General Agreement on Tariffs and Trade (GATT), negotiations for a 26 per cent cut in tariffs between 19 countries took 8 months in 1947. The Uruguay round, beginning in 1986, took 91 months to achieve a tariff reduction of 38 per cent between 125 parties (Martin and Messerlin, 2007). Historically, international law has been quicker at responding to technological change than to other changes; but even there its record is chequered, in some cases (e.g., spaceflight) adjusting within years, while being far more delayed in others (e.g., modern anti-personnel landmines) (Picker, 2001). Decentralised efforts might prove quicker to respond, especially if they rely more on informal institutions with a smaller, like-minded membership (Morin et al., 2019). Centralised governance may be particularly vulnerable to lengthy negotiations, especially if a few states hold unequal stakes in a technology, or if there are significant differences in information and expertise among state and private actors (Picker, 2001). AI fulfils both of these conditions. Moreover, because AI technology develops rapidly, slow implementation of rules and principles could enable certain actors to take advantage by setting de facto rules. Even after its creation, a centralised regime can be brittle. The very qualities that provide it with political power may exacerbate the adverse effects of regulatory capture, and features that ensure institutional stability may also lead to an inability to adapt to new conditions. The regime might break before it bends.
The first potential risk is regulatory capture. As illustrated by numerous cases, including undue corporate influence in the World Health Organisation during the 2009 H1N1 pandemic (Deshman, 2011) , no institution is fully immune to capture, and centralisation may facilitate this by providing a single locus of influence (Martens, 2017) . On the other hand, a regime complex comprising many smaller, parallel institutions could find itself vulnerable to capture by powerful actors, who can afford representation in every forum. Some have already expressed concern about the resources and sway of private tech actors in AI governance (Nemitz, 2018) , and proposals for AI governance have been surrounded by calls to ensure their independence from such influence (Nature Editors, 2019). Moreover, centralised regimes entail higher stakes. International institutions can be notoriously path-dependent and fail to adjust to changing circumstances (Baccaro and Mele, 2012) . The public failure of a flagship global AI institution could have lasting political repercussions. It could strangle subsequent proposals in the crib, by undermining confidence in multilateral governance generally or on AI issues specifically. By contrast, for a decentralised regime complex to similarly fail, all of its component institutions would need to 'break' or fail to innovate simultaneously. A centralised institution that does not outright collapse, but which remains ineffective, may inhibit better efforts. Ultimately, brittleness is not an inherent weakness of centralisation, but rather may depend on institutional design. There may be strategies to 'innovation-proof' (Maas, 2019a) governance regimes. Periodic renegotiation, modular expansion, additional protocols to framework conventions, 'principles based regulation', or sunset clauses can also support ongoing adaptation (see Marchant et al., 2011) . This discussion intersects with debates over whether a new centralised regime is even possible in today's shifting, dense institutional landscape (Alter and Raustiala, 2018; Morin et al., 2019) . The speed of capability development in AI also highlights questions over the relative 'speed' or 'responsiveness' of different regime configurations. In slowmoving areas, a centralised regime's slowness may not be a problem. However, technological change has often 'perforated' many arms control regimes, from the Nuclear Non-Proliferation Treaty to the Missile Technology Control Regime, which sometimes struggled to carry out muchneeded 'modernisation' in provisions or export control lists (Nelson, 2019) . This raises questions of necessary institutional speed. Is AI an issue that is so fast it makes centralisation untenable, such that we need a decentralised regime to match its speed and complexity? Or, should we use a singular institutional anchor to slow and channel the technology's development or application? There is precedent for international instruments directing or curtailing the development of certain technologies. The 1978 Environmental Modification Convention (ENMOD) Convention was an effective tool in preventing both funding for geoengineering research and the weaponised deployment of weather manipulation. By 1979, US investments in such technologies had dramatically decreased (Fleming, 2006) . \n The breadth vs. depth dilemma Pursuing centralisation may create an overly high threshold that limits participation. 
Many multilateral agreements face a trade-off between higher participation ('breadth') and stricter rules with more ambitious commitments ('depth'). The dilemma is particularly evident for centralised institutions that are intended to be powerful and require strong commitments from states. Sacrificing depth for breadth can also pose risks. The 2015 Paris Agreement on Climate Change was watered down to allow for the legal participation of the US. Anticipated difficulties in ratification through the Senate led negotiators to opt for a 'pledge and review' structure with few legal obligations, which permitted the US to join through executive approval (Kemp, 2017). In this case, the inclusion of the US (which proved temporary) came at the cost of cutbacks to the demands which the regime made on all parties. In contrast, decentralisation could allow major powers to engage in at least some regulatory efforts where they would otherwise be deterred from signing up to a more comprehensive package. This has precedent in climate governance. Some claim that the US-led Asia-Pacific Partnership on Clean Development and Climate helped, rather than hindered, climate governance, as it bypassed the UN Framework Convention on Climate Change (UNFCCC) deadlock and secured (non-binding) commitments from actors not bound by the Kyoto Protocol (Zelli, 2011). This matters, as buy-in may prove a particularly thorny issue for AI governance. The actors who lead in AI development include powerful states, such as the US and China, that are potentially most averse to restrictive global rules. They have thus far proved unenthusiastic regarding the global governance of security issues such as anti-personnel mines, LAWS, and cyberwarfare. In response, governance could take a different approach to military uses of AI. Rather than seeking a comprehensive agreement, devolving and spinning off certain components into separate treaties (e.g., separately covering LAWS testing standards; measures for liability and responsibility; or limits to operational context) could instead allow the powerful to ratify and move forward on some of those options (Weaver, 2014). The breadth vs. depth dilemma is a trade-off in multilateralism generally, and a key challenge for centralisation. The benefit of a centralised body would be to create a powerful anchor that ensures policy coordination and coherence. In many cases, it will likely need to restrict membership to have teeth, or lose its teeth to secure wide participation. For specific issues in AI governance, this 'breadth vs. depth' trade-off might inform relative expectations of ongoing AI governance initiatives. If 'breadth' is more important, one might put more stock in nascent efforts at the UN (Garcia, 2020); if 'depth' of commitment seems more important, one might instead favour initiatives of like-minded states such as the GPAI. The evolving architecture of AI governance suggests that a 'critical mass governance' (Kemp, 2017) approach may be appropriate. That is, a single centralised framework under which progressive clubs move forward on particular issues. Rather than having an array of treaties, one has a set of protocols for different technologies or applications under a single framework. A similar approach has been taken in treaties such as the 1983 Convention on Long-Range Transboundary Air Pollution. \n Forum shopping Forum shopping may help or hinder AI governance. Fragmentation enables actors to choose where and how to engage.
Such 'forum shopping' may take one of several forms: shifting venues, abandoning a venue, creating new venues, and working to sow competition among multiple venues (Braithwaite and Drahos, 2000). Even when there is a natural venue for an issue, actors have reasons to forum shop. For instance, states may look to maximise their influence (Pekkanen et al., 2007) or placate constituents by shifting to a toothless forum (Helfer, 2004). Membership in AI initiatives is highly varied, and as initiatives begin to consider binding instruments, this variation in membership may be exploited. The ability to successfully forum-shop depends on an actor's power. Most successful examples of forum-shifting have been led by the US (Braithwaite and Drahos, 2000). Intellectual property rights (IPR) in trade, for example, were subject to prolonged, contentious forum shopping. Developed states resisted attempts by the UN Conference on Trade and Development (UNCTAD) to address the issue, shifting it first onto the World Intellectual Property Organisation (WIPO) (Braithwaite and Drahos, 2000) and subsequently to the WTO (Helfer, 2004), despite protests from developing states. But weak states and non-state actors can also pursue forum shopping strategies in order to challenge the status quo, sometimes with success (Jupille et al., 2013). For example, developing states later shifted some IPR-in-trade issues to the WHO and subsequently won concessions at the WTO (Kuyper, 2014). Forum shopping may help or hurt governance (Gómez-Mera, 2016). This is evident in current efforts to regulate LAWS. While the Group of Governmental Experts has made some progress, on the whole the CCW has been slow. In response, activists have threatened to shift to another forum, as happened with the Ottawa Treaty that banned anti-personnel mines (Delcker, 2019). This strategy could catalyse progress, but also brings risks of further forum shopping. Forum shopping may similarly delay, stall, or weaken regulation of time-sensitive AI policy issues, including potential HLMI development. Non-state actors that participate in multiple fora may influence regime complex evolution, though perhaps to the detriment of other weak actors (Orsini, 2013). Thus, leading AI firms likely have sway when they elect to participate in some venues but not others. To date, leading AI firms appear to be prioritising engagement at the OECD over the UN. A decentralised regime will enable forum shopping, though further work is needed to determine whether this will help or hurt governance outcomes. \n Policy coordination There are good reasons to believe that either centralisation or fragmentation could enhance coordination. A centralised regime can enable easier coordination both across and within policy issues, acting as a focal point for states. Alternatively, fragmented institutions may be mutually supportive and even more creative. Centralisation reduces the incidence of conflicting mandates and enables communication. These are the ingredients for policy coherence, as shown in the case of the WTO above under 'political power'. However, fragmented regimes can often act as complex adaptive systems. Political requests and communication between secretariats can ensure bottom-up coordination. Multiple organisations have sought to reduce greenhouse gas emissions within their respective remits, often at the behest of the UNFCCC Conference of Parties. Where it is effective, bottom-up coordination can slowly evolve into centralisation.
Indeed, this was the case for the GATT and numerous regional, bilateral and sectoral trade treaties, which eventually coalesced into the WTO. While this organic self-organisation has occurred, it has taken decades. Some have argued that 'polycentric' governance approaches may be more creative and legitimate than centrally coordinated regimes (Acharya, 2016; Ostrom, 2010). Arguments in favour of polycentricity include the notion that it enables governance initiatives to begin having impacts at diverse scales, and that it enables experimentation with policies and approaches (Ostrom, 2010). Consequently, these scholars assume 'that the invisible hand of a market of institutions leads to a better distribution of functions and effects' (Zelli and van Asselt, 2013, p. 7). Yet an absence of centralised authority to manage regime complexes has presented challenges in the past. Across the proliferation of Multilateral Environmental Agreements (MEAs) there is no requirement to cede responsibility to the UN Environment Programme in the case of overlap or competition. This has led to turf wars, inefficiencies and even contradictory policies (Biermann et al., 2009). One of the most notable examples is that of hydrofluorocarbons (HFCs). HFCs are potent greenhouse gases, and yet their use was encouraged under the Montreal Protocol from 1987 onwards as a replacement for ozone-depleting substances. This was only resolved via the 2016 Kigali Amendment to the Protocol. It is unclear if the different bodies covering AI issues will self-organise or collide. Many of the issues are interdependent and need to be addressed in tandem. Some policy levers, such as regulating computing power or data, will impact multiple areas, given that AI development and use are closely tied to such inputs. Numerous initiatives on AI and robotics are displaying loose coordination (Kunz and Ó hÉigeartaigh, 2020). But it remains uncertain whether the virtues of a free market of governance will prevail. Great powers can exercise monopsony-like influence through forum shopping, and the supply of both computing power and machine learning expertise is highly concentrated. In sum, centralisation can reduce competition and enhance coordination, but it may suffocate the creative self-organisation of decentralised arrangements. \n Discussion: what would history suggest? \n Summary of considerations The multilateral track record and the peculiarities of AI yield suggestions and warnings for the future. A centralised regime could lower costs, support participation, and act as a powerful new linchpin within the international system. Yet centralisation could simply produce a brittle dinosaur, of symbolic value but with little meaningful impact. A poorly executed attempt at centralisation could lock in a fate worse than fragmentation. Policy making and research alike could benefit from addressing the considerations presented in this paper, a summary of which is presented in Table 2. \n The limitations of 'centralisation vs. decentralisation' debates Structure is not a panacea. Specific provisions such as agendas and decision-making procedures matter greatly, as do the surrounding politics. Underlying political will may be impacted by framing or connecting policy issues (Koremenos et al., 2001b). The success of a regime depends on design details. Moreover, institutions can be dynamic, and can broaden over time by taking in new members or deepen by strengthening commitments. Successful multilateral efforts, such as those on trade and ozone depletion, tend to do both.
Yet, decisions taken early on constrain and partially determine future paths. This dependency can even take place across regimes. The Kyoto Protocol was largely shaped by the targets-and-timetables approach of the Montreal Protocol, which itself drew from the Convention on Long-range Transboundary Air Pollution. This targets-and-timetables approach continues today in the way that most countries frame their climate pledges to the Paris Agreement. The choices we make on governing short-term AI challenges will likely shape the management of other policy issues in the long term (Cave and Ó hÉigeartaigh, 2019). Yet, committing to centralisation, even if successful, may not solve the right problem, which may be geopolitical rather than architectural. Centralisation could even exacerbate the problem by diluting scarce political attention, incurring heavy transaction costs, and shifting discussions away from bodies which have accumulated experience (Juma, 2000). For example, the Bretton Woods institutions of the IMF and World Bank, joined later by the WTO, are centralised regimes that engender power. However, those institutions had the express support of the US and may have simply manifested state power in institutional form. Efforts to ban LAWS and create a cyberwarfare convention have been broadly opposed by states with an established technological superiority in these areas (Eilstrup-Sangiovanni, 2018). \n HLMI: An illustrative example The promise of centralisation may differ by policy issue. HLMI is one issue that stands out: it is distinct in its risk profile, uncertainty, and linkage to other AI policy issues. While timelines are uncertain, the creation of such advanced AI systems is the express goal of various present-day projects (Baum, 2017), and the future development of an 'unaligned' HLMI could have catastrophic consequences (GCF, 2018). The creation of HLMI could lead to grotesque power imbalances. It could also exacerbate other AI policy problems, such as labour automation and advanced military applications. In Table 3 we provide a brief application of our framework to HLMI. It shows that centralisation of governance is particularly promising for HLMI. This is due to its neglect, stakes, scope, and need for informed, anticipatory policy. Rather than offering an AI governance blueprint, our trade-offs framework provides one way of thinking through the costs and benefits of centralising governance. Identifying areas which are more easily defined and garner the benefits of centralised regulation provides an organic approach to thinking through which subset of topics an AI umbrella body could cover. \n Lessons for theory This is the first application of regime complex theory to the problem of AI governance. It is timely and pertinent given the nascent state of AI governance and of the technology itself. While the majority of the literature has observed mature regimes retrospectively, AI offers an opportunity for scholars to both track and influence the development of a new regime complex from its earliest stages. Our analysis highlights both the uses and limits of the theoretical regime complex lens for AI. It can elucidate many important trade-offs, but provides little help in navigating the underlying geopolitics. The six considerations we have identified are also certainly not exhaustive of regime complex theory; further work could explore complementary dynamics such as issue linkage, regime 'interplay management', or norm cascades in AI governance.
Beyond this, the literature needs a better understanding of three key areas that are central to AI. First, what does the political economy of AI mean for AI governance and centralisation? Regulatory capture is a genuine threat, yet many non-state actors hold valuable technical knowledge. Some, such as machine learning developers and NGOs, have been influential in shaping governance on lethal autonomous weapons (Belfield, 2020). How these actors can shape the choice of fora and influence states under centralisation or decentralisation is pivotal. Second, how should institutions match the speed of evolving collective action problems? Is the aim to make governance agile enough to keep pace with accelerating technological change, or to manage the pace or direction of such changes to levels that are socially and politically manageable? Theoretically, foresight methodologies have rarely been considered in regime complex debates. Yet for fast-moving and high-stakes technologies, they should be. Theory will need to better address how foresight and development trajectory monitoring capabilities intersect with the debates over governance architecture. Third, how will these considerations look for particular institutional structures? We have presented a cursory case of HLMI and noted that there is an active debate over how to define AI and structure its governance. How will the case for centralisation look for a regime which targets just high-risk or military applications? Our framework provides an easily deployed way to analyse more discrete proposals for AI governance in the future. \n Lessons for policy Our framework provides a tool for policy makers to inform their decisions on whether to join, create, or forgo new AI policy institutions. For instance, the recent choice of whether to support the creation of an independent Global Partnership on AI (GPAI) involved these considerations. Following the US veto at the G7 in 2019, the GPAI was established in close relationship with the OECD. For now, it is worth monitoring the current landscape of AI governance to see if it exhibits enough policy coordination and political power to effectively deal with mounting AI policy problems. While there are promising initial signs (Kunz and Ó hÉigeartaigh, 2020), there are also already impending governance failures, such as for LAWS and cyberwarfare. We outline a suggested monitoring method in Table 4. There are three areas to monitor: conflict, coordination, and catalyst. Conflict should measure the extent to which principles, rules, regulations, and other outcomes from different bodies in the AI regime complex undermine or contradict each other. Coordination seeks to measure the proactive steps that AI-related regimes take to work with each other. This includes liaison relationships, joint initiatives, and reinforcement between outputs and principles. Catalyst raises the important question of governance gaps: is the regime complex self-organising to proactively address international AI policy problems? Numerous AI policy problems currently have no clear coverage under international law. Monitoring these regime complex developments, using various existing and emerging tools (see Maas, 2019b; Deeks, 2020), could inform a discussion and decision on whether to centralise AI governance further, as sketched below.
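To illustrate how the 'network analysis' component of such monitoring (see Table 4) could be prototyped, the following sketch is our own illustration rather than anything specified by the authors. It assumes the Python networkx library and a hand-collected, entirely hypothetical edge list recording citation or liaison ties between AI governance bodies; coordination is proxied by network density and degree centrality, and isolated fora flag possible governance gaps.

# Minimal sketch (illustrative only): monitoring coordination in the AI regime complex.
# The institutions and ties below are hypothetical placeholders, not observed data.
import networkx as nx

# An edge means one body's output cites, endorses, or formally liaises with another's.
edges = [
    ("OECD AI Principles", "G20 AI Principles"),
    ("OECD AI Policy Observatory", "OECD AI Principles"),
    ("GPAI", "OECD AI Policy Observatory"),
    ("UN High-level Panel on Digital Cooperation", "OECD AI Principles"),
]
isolated = ["CCW GGE on LAWS"]  # a forum with no observed coordination ties (hypothetical)

G = nx.Graph()
G.add_edges_from(edges)
G.add_nodes_from(isolated)

# 'Coordination' proxy: overall density of coordination ties in the regime complex.
density = nx.density(G)

# Centrality identifies focal institutions; isolated nodes flag possible 'catalyst' gaps.
centrality = nx.degree_centrality(G)
gaps = [n for n in G.nodes if G.degree(n) == 0]

print(f"coordination density: {density:.2f}")
print("most central bodies:", sorted(centrality, key=centrality.get, reverse=True)[:3])
print("potential gaps (isolated fora):", gaps)

Tracking how such indicators change between snapshots (e.g., year on year) would give a rough, quantitative complement to the expert surveys and NLP methods proposed in Table 4.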
The international governance of AI is nascent and fragmented. Centralisation under a well-designed, modular, 'innovation-proof', critical mass framework may be a desirable solution. However, such a move must be approached with caution. Defining its scope and mandate is one problem. Ensuring a politically acceptable and well-designed body is perhaps a more daunting one. For now, we should closely watch the trajectory of both AI technology and its governance initiatives to determine whether centralisation is worth the risk. \n Figure 1. Membership in selected international AI policy initiatives (the GPAI, the OECD AI Principles, and the G20 AI Principles; the EU is counted as a member in its own right, and 142 UN member states are not represented in any of these initiatives). \n Table 2. Summary of considerations. \n Political power (speaks for centralisation). Historical example: shaping other regimes, as the WTO has created a chilling effect such that environmental treaties avoid trade-related measures. AI policy example: influencing powerful vested economic and military interests in AI may require a single empowered institution. \n Efficiency and participation (speaks for centralisation). Historical example: decentralisation raises inefficiencies and barriers, as the proliferation of multilateral environmental agreements poses challenges in negotiation, implementation, and monitoring. AI policy example: fragmentation requires duplicative investment in AI subject-matter experts and undermines participation from developing countries and non-state actors. \n Slowness and brittleness (speaks against centralisation). Historical examples: slowness, as under the GATT, 1947 tariff negotiations among 19 countries took 8 months, while the Uruguay round, beginning in 1986, took 91 months for 125 parties to agree on reductions; regulatory capture, as the WHO was accused of undue corporate influence in its response to the 2009 H1N1 pandemic. AI policy example: the process of developing a centralised regime may not keep pace with the speed of AI development. \n Breadth vs. depth dilemma (speaks against centralisation). Historical example: watering down, as the 2015 Paris Agreement suggests attempts to 'get all parties on board' may require less stringent rules. AI policy example: attempts to effectively govern the military uses of AI have been resisted by the most powerful states. \n Forum shopping (depends on design). Historical examples: power predicts outcomes, as developed countries shifted IPR in trade from UNCTAD to WIPO to the WTO; accelerated progress, as NGOs and some states shifted away from the CCW to ban anti-personnel mines. AI policy example: actors can use forum shopping to either undermine or catalyse progress on governance regimes for military AI systems. \n Policy coordination (depends on design). Historical examples: strong but delayed convergence, as the GATT and numerous trade treaties coalesced into the WTO after decades; contradictory policies, as the Montreal Protocol promoted the use of potent greenhouse gases for nearly thirty years. AI policy example: numerous AI governance initiatives display loose coordination, but it is unclear whether they can respond to developments in a timely manner. \n Table 3. An application of the framework to high-level machine intelligence (HLMI). \n Political power: potential catastrophic risks make the increased political power of a centralised institution desirable. The creation of HLMI is a potential 'free-driver' issue; an effective response needs the teeth to deter major players from acting unilaterally. This will require a coordinated effort to track and forecast HLMI project efforts (see Baum, 2017), as well as a politically empowered organisation to act upon this information. \n Efficiency and participation: centralisation would support economies of scale in expertise to support efficient governance. Given the significant resources and infrastructure likely needed, a joint global development effort could be an efficient way to govern HLMI research. \n Slowness and brittleness: if short HLMI timelines (less than 10-15 years) are expected, the lengthy period needed to negotiate and create such a body would be a critical weakness; if longer timelines are expected, there should be sufficient time to develop a centralised institution. Institutional capture is a concern given the well-resourced corporate actors involved in pursuing HLMI, e.g., Google or OpenAI; however, it is unclear whether capture would be more likely under a centralised body. \n Breadth vs. depth dilemma: costs and requisite capabilities may restrict the development of HLMI to a few powerful players, and fewer actors make centralisation more feasible. The breadth vs. depth dilemma could be avoided through a 'critical mass' approach that initially involves only the few countries capable of developing HLMI, although there would be legitimacy benefits to expanding membership. \n Forum shopping: a centralised body is well placed to prevent forum shopping, as there is currently no coverage of HLMI development and deployment under international law. Future forum shopping could undermine timely negotiations amid risky HLMI development. \n Policy coordination: policy coordination is key for HLMI, which has close connections to issues such as labour automation and automated cyberwarfare. The creation or use of HLMI is not directly regulated by any treaties or legal instruments, which makes the creation of a new, dedicated institution to address it easier and less likely to trigger turf wars; however, it also makes it less likely that the existing tapestry of global governance can self-organise to cover HLMI in a timely manner. \n Table 4. Regime complex monitoring suggestions. \n Conflict: to what extent are regimes' principles and outputs in opposition over time? \n Coordination: are regimes taking steps to complement each other? \n Catalyst: are regimes self-organising to proactively fill governance gaps? \n Suggested methods (across all three themes): expert and practitioner surveys; network analysis (e.g., citation network clustering and centrality); natural language processing (e.g., textual entailment and fact checking).", "date_published": "n/a", "url": "n/a", "filename": "Global Policy - 2020 - Cihon - Fragmentation and the Future Investigating Architectures for International AI Governance.tei.xml", "abstract": "The international governance of artificial intelligence (AI) is at a crossroads: should it remain fragmented or be centralised? We draw on the history of environment, trade, and security regimes to identify advantages and disadvantages in centralising AI governance.
Some considerations, such as efficiency and political power, speak for centralisation. The risk of creating a slow and brittle institution, and the difficulty of pairing deep rules with adequate participation, speak against it. Other considerations depend on the specific design. A centralised body may be able to deter forum shopping and ensure policy coordination. However, forum shopping can be beneficial, and fragmented institutions could self-organise. In sum, these trade-offs should inform development of the AI governance architecture, which is only now emerging. We apply the trade-offs to the case of the potential development of high-level machine intelligence. We conclude with two recommendations. First, the outcome will depend on the exact design of a central institution. A well-designed centralised regime covering a set of coherent issues could be beneficial. But locking in an inadequate structure may pose a fate worse than fragmentation. Second, fragmentation will likely persist for now. The developing landscape should be monitored to see if it is self-organising or simply inadequate. \n Policy Implications \n • Secretariats of emerging AI initiatives, for example, the OECD AI Policy Observatory, the Global Partnership on AI, the UN High-level Panel on Digital Cooperation, and the UN System Chief Executives Board (CEB), should coordinate to halt and reduce further regime fragmentation. \n • There is an important role for academia to play in providing objective monitoring and assessment of the emerging AI regime complex, assessing its conflict, coordination, and catalysts to address governance gaps without vested interests. Secretariats of emerging AI initiatives should be similarly empowered to monitor the emerging regime. The CEB appears particularly well placed and mandated to address this challenge, but other options exist. \n • Which AI issues and applications need to be tackled in tandem is an open question on which the centralisation debate sensitively turns. We encourage scholars across AI issues, from privacy to military applications, to organise venues to more closely consider this vital question. \n • Non-state actors, especially those with technical expertise, will have a potent influence in either a fragmented or a centralised regime. These contributions need to be used, but there also need to be safeguards in place against regulatory capture. \n • The AI regime complex is at an embryonic stage, where informed interventions may be expected to have an outsized impact. The effect of academics as norm entrepreneurs should not be underestimated at this point.", "id": "f2a49e79e1ad8c102b46ad7907110c8b"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Dorian Peters", "Karina Vold", "Diana Robinson", "Senior Member, IEEE Rafael A Calvo"], "title": "Responsible AI-Two Frameworks for Ethical Design Practice", "text": "technologies [3]. Moreover, many governments and international organizations have released sets of ethical principles, including the OECD Principles in 2019 [4], the Montreal Declaration in 2017 [5], the U.K. House of Lords report "AI in the U.K.: ready, willing and able?" in 2018 [6], the European Commission High-Level Expert Group (HLEG) on AI in 2018 [7], and the Beijing AI Principles in 2019 [8]. Indeed, recent reports indicate that there are currently more than 70 publicly available sets of ethical principles or frameworks for AI, most of which have been released within the last five years [3], [9], [10].
The recent focus on ethical AI has arisen from increasing concern over its unintended negative impacts, coupled with a traditional exclusion of ethical analysis from engineering practice. While engineers have always met basic ethical standards concerning safety, security, and functionality, issues to do with justice, bias, addiction, and indirect societal harms were traditionally considered out of scope. However, expectations are changing. While engineers are not, and we believe should not be, expected to do the work of philosophers, psychologists, and sociologists, they do need to work with experts in these disciplines to anticipate and mitigate ethical risks as a standard of practice. It is no longer acceptable for technology to be released into the world blindly, leaving others to deal with the consequences. Engineering educators have already responded to this change in sentiment by evolving curricula to help ensure the next generation of technology makers is better equipped to engineer more responsibly [11], [12]. Yet, moving effectively from ethical theory and principles into context-specific, actionable practice is proving a significant barrier for the widespread uptake of systematic ethical impact analysis in software engineering [13], [14]. In this article, we hope to contribute to resolving some of this translational difficulty by presenting two frameworks (the Responsible Design Process and the Spheres of Technology Experience) together with the outcomes of an example ethical analysis in the context of digital mental health. We hope that both the frameworks and the case study will serve as resources for those looking for guidance in translating ethical principles into technology practice. \n II. ETHICS IMPERATIVE IN HEALTH AND INTELLIGENT SYSTEMS Verbeek explains that "When technologies co-shape human actions, they give material answers to the ethical question of how to act" [15]. He also highlights how technologies inscribe the values of the designers, engineers, and businesses who make them [16], [17]. As a result, responsibility must go beyond the narrow definition of safety which, until recently, has largely constituted professional norms for technologists. Furthermore, ethical implications should be considered early on and throughout the design, development, and implementation phases, since value-laden tradeoffs are often made even during the earliest stages of design. Achieving ethically desirable outcomes will be neither easy nor straightforward. For instance, ethical design cannot be implemented as a single, one-off review process, since technologies (especially intelligent ones) are continuously changing, as are the ways users appropriate them and the socio-technical contexts within which they exist. Therefore, ethical impact evaluation must be an ongoing, iterative process, one that involves various stakeholders at every step and can be re-evaluated over time as new issues emerge. While society-wide ethical considerations are a relatively new focus within technology engineering [18], ethical enquiry has a long history within healthcare, perhaps because health practitioners work directly with the people they serve and often within sensitive and high-risk contexts. Principles of Biomedical Ethics [19] have been taught for 40 years.
Hence, those working on the engineering of health technologies will need to adhere to both technology-related and biomedical ethical principles. Fortunately, at the level of basic principles, the AI communities and biomedical ethicists might already be largely in agreement. A recent analysis suggests that the plethora of ethical AI frameworks can be consolidated into just five meta-principles, four of which also constitute the principles of biomedical ethics. These are: Respect for Autonomy, Beneficence, Nonmaleficence, and Justice, with the addition of "explicability" for AI [3]. Other recent systematic reviews of AI ethics principles have produced somewhat different taxonomies (see [14], [20], [21]). For example, Jobin et al. [9] reviewed 84 ethical guidelines and proposed 11 principles: 1) transparency; 2) justice and fairness; 3) nonmaleficence; 4) responsibility; 5) privacy; 6) beneficence; 7) freedom and autonomy; 8) trust; 9) dignity; 10) sustainability; and 11) solidarity. They found a wide divergence in how these principles were interpreted and in how the recommendations suggested they be applied. They do, however, note some evidence of convergence (by number of mentions) around five principles: 1) transparency; 2) justice and fairness; 3) nonmaleficence; 4) responsibility; and 5) privacy. Between this set of principles and the set we will be using from the meta-analysis by Floridi et al. [3], there is a significant overlap. We have found the latter set to be particularly practical in the health domain, owing to its overlap with biomedical ethics; however, we understand that other principles are also important. Broad principles nonetheless fall short of dictating specific actions in practice. Indeed, it has been acknowledged that these ethical frameworks do not provide enough contextual guidance for engineers to make use of them (e.g., [10], [14], and [22]). For example, the principle of fairness could lead to affirmative action, providing extra support for a group, or not, depending on the context. In order for abstract principles to translate into actionable practice, the engineering discipline will need a variety of solutions. For example, the emerging development of the IEEE's P7000 series of specifications stands to contribute on this point. Additionally, work by Gebru et al. [13] on "datasheets for datasets", in which they advocate for clear documentation for datasets that records "motivation, composition, collection process, recommended uses, and so on", is one example of a suggestion that would operationalize more abstract principles, such as transparency and accountability. However, there will always be ethical decisions and tradeoffs that are not amenable to universally applicable specifications, and that need to be made with sensitivity to specific context and stakeholders. In these cases, we will need methods for conducting this kind of decision-making rigorously. Responsible innovation requires these methods to be anticipatory, reflexive, inclusive, and responsive [23]. The very fact that there has been some convergence around a set of principles (rather than a single principle) seems to indicate a kind of value pluralism: the view that there are multiple values that are equally fundamental, and yet may sometimes conflict with each other.
How to navigate tradeoffs between equally important values when conflicts arise is where constructive reflection and discourse may be most needed, and where it will be important to acknowledge that cultural and contextual differences may affect what the right outcome is within practice (further discussions in [24] and [25]). Here, methods for value-sensitive design (VSD) [16] can help development teams and stakeholders articulate and align with explicit values. Moreover, data-enabled technologies employ a wide range of techniques that are used differently in different contexts, and these diverse contexts raise unique concerns that will require different ethical tradeoffs. It is unlikely that the same priorities and solutions applicable to one domain, context, or project will translate across to others [22], [25], [26]. This means that technology development teams will need to conduct bespoke ethical evaluations for each project, in the same way that user research and specification analyses are unique to each project. This need for ethical impact assessment for technology is akin to the need for environmental impact assessment in other types of engineering [27]. It is impossible to provide ethical principles that will be specific enough to provide answers in practice, and yet broad enough to apply universally. But it is possible to provide a process. While every team and organization may devise their own answers to ethical dilemmas, they should have systematic processes by which to do so consciously and rigorously, leaving a record of these processes along with the values and rationale they employed to make decisions. This record can provide the public with transparency regarding the rationale for a decision after the fact, as well as give the design team confidence that such a decision was made in a systematic and professional way. Such a process will not guarantee a product has no negative consequences, but it will help mitigate the risks and provide professionals with the reassurance of having acted responsibly. In the next section, we present two frameworks that can help provide structure for such a process of ethical impact analysis. \n III. MOVING FROM PRINCIPLES TO PRACTICE: FRAMEWORKS FOR RESPONSIBLE TECHNOLOGY DEVELOPMENT \n A. Multidisciplinarity in AI Ethics We begin our discussion of frameworks with a diagrammatic representation of disciplines central to the development of ethical AI and how they interconnect (see Fig. 1). The primary intention is to emphasize the importance of grounding all ethical impact assessments in multidisciplinary expertise. It is likely that new requirements in an increasingly AI-enhanced world will lead to the development of new specializations which blur traditional disciplinary boundaries. Nevertheless, there is no single discipline capable of handling the task of ethical analysis single-handedly. Given the complexity of the problems, the best outcomes are likely to come from the richest diversity. For the digital health case study we present later, we leveraged expertise from four different disciplines, including design, engineering, human-computer interaction, and philosophy. An ideal project would include even more disciplines, such as psychology and sociology (see Fig. 1), as well as end users, domain experts, and other stakeholders.
In the mental health context, for example, users may include patients, therapists, and family, while domain experts would include therapists, mental health researchers, and others working within the healthcare system. The importance of this multivocal approach cannot be overstated, as there can be a tendency for agile teams to consist of just programmers, designers, and managers. Taking digital mental health as a cautionary example, a failure to involve psychologists, health practitioners, end users, and other domain experts has led to an exploding industry of mental health tools that lack evidence, inclusivity, and effectiveness (and at worst, cause harm) [28]-[30]. This has been possible because, while traditional channels for healthcare are highly regulated, technology regulation lags behind and these technologies are unusually quick to implement and disseminate. As Nebeker et al. [31] have cautioned: "it is critical that the minimal requirements used to make a digital health technology available to the public are not mistaken for a product that has passed rigorous testing or demonstrated real-world therapeutic value." \n B. Framework 1-The Responsible Design Process Sometimes technology designers, like policymakers, are forced to make values-based tradeoffs. For example, they might be able to increase privacy at the expense of security or increase accuracy at the expense of privacy. Moreover, some technologies may increase the wellbeing of some at the expense of others. Value-laden decisions arise as part of engineering and can either be addressed in a cursory way by one or a few individuals, or in a systematic and robust way by teams. Only the latter approach can hold up to scrutiny should negative consequences emerge after the fact. As such, we need a technology development process that makes room for this sort of robust decision making and for the ethical impact analysis on which it must stand. Innovators are often asked to address the consequences of their technologies, but this post-hoc approach is increasingly seen as limited. A number of nonregulatory approaches have been developed to take into account the broader social impact of new technologies, including anticipatory governance, technology assessment, and VSD [23]. They can all be included within "responsible innovation," a growing field of research exploring ways of "taking care of the future through collective stewardship of science and innovation in the present" [23]. As with many design practices, the goal is to embed deliberation within the design and innovation process. An important aspect of responsible innovation is the concept of human wellbeing, which is also at the center of many current ethical frameworks. For example, the IEEE centers its ethics specifications on human wellbeing. So too do several government frameworks [4], [5]. As such, we argue that a responsible technology development process will need to incorporate evidence-based methods for evaluating the impact on, and designing for, human wellbeing, by drawing on psychology (see [32], [33]). However, the promotion of human wellbeing is not a complete solution. After all, decisions must be made as to whose wellbeing is being considered. When technology makers are forced to make tradeoffs that increase the wellbeing of some at the expense of others, at the cost of long-term ecological impacts, or in spite of other negative side effects, then issues to do with justice, equality, and other values arise.
This is where ethical analysis, drawing on philosophy and other disciplines, must come in. As such, our conception of a responsible development process involves taking existing design processes, particularly those that are anticipatory, reflexive, inclusive, and responsive, and augmenting them with methods for ethical analysis and wellbeing-supportive design. The approach described here could include activities such as those used in VSD [16]. While development processes are as varied as technologists themselves, there are a series of developmental phases that find their way into most, if not all, approaches, and these include: research, ideation, prototyping, and testing. The U.K. Design Council consolidated these commonalities and created a popular "double diamond" diagram to illustrate them [34]. The broadest and narrowest points of the diamonds represent points of divergence and convergence. We began with this standard process and integrated stages for wellbeing support and ethical decision making to create the resulting responsible design process framework presented in Fig. 2. \n Fig. 2. Responsible design process framework: a process for technology development in which wellbeing support and ethical impact analysis are incorporated at each phase; a post-launch evaluation phase is also added. \n Wellbeing in the diagram refers to human psychological wellbeing, and it is included separately from other ethical issues because evidence-based design methods grounded in psychological research already exist for it and allow it to be attended to empirically. Analysis of other ethical dimensions, such as fairness, data governance, ecosystem wellbeing, or democratic participation, cannot rely predominantly on psychological research and will require different methods. We describe each phase of the process in further detail as follows. Research: The research phase involves investigating the needs, preferences, contexts, and lives of the people who will be served or otherwise impacted by a technology. This phase may include standard approaches to user research (e.g., design thinking methods, ethnographies, participatory workshops, etc.) as well as expert review and secondary research in relation to the specific domain. Standard approaches can surface wellbeing and ethical issues; however, tailoring these methods to focus participants on ethical or psychological reflection may be helpful. Insights: This phase involves the analysis of the data from the research phase, and its synthesis into specific insights for design. Data analysis can be done through the lens of wellbeing theory, with a view to anticipating harms and opportunities for supporting healthy psychological experience. Ethics data analysis can be done through the lens of an ethical framework, with a view to identifying potential biases, ethical risks, and tensions. Ideation: The ideation phase involves the divergent generation of ideas for design solutions. Ethical reflection can be integrated into the ideation phase through framing. For example, introducing wellbeing psychology concepts into the ideation phase can help the team focus on the root psychological causes of user needs, while introducing ethics concepts into ideation can sensitize the team to ethical tensions that may arise so that brainstorming can involve resolutions to these. Prototypes: In this phase, the team converges on and builds various design solutions.
Responsible impact analysis involves collaborative speculation on the wellbeing and ethical impacts (good and bad) to which a particular design concept may lead. This will ideally involve a wide range of stakeholders, including end users. Evaluation (in Use): The real-life ethical impacts that a technology will have on people, their communities, and the planet can only be fully understood once the product or service is in real-world use. Teams must speculate and test in advance, but unintended use patterns are realities in our complex socio-technical systems. Wellbeing impact evaluation involves evaluating the impact of technology use on a user's psychological experience during and after use. Ethical impact evaluation involves evaluating the ethical impacts of a technology's use, not just on its users but often also on those indirectly affected, such as their friends and families, communities, society as a whole, and the planet. In the framework described above, we have taken a familiar process and incorporated phases for the integration of ethics and wellbeing in an attempt to provide a map for a more responsible development process. The map also provides a landscape within which to research, develop, and situate new methods and tools to support each of these phases. For example, research can be directed at identifying effective methods for "ethical data analysis" within the insights phase, "ethical framing" within ideation, and "ethical impact evaluation" during use. Moreover, ethics-based methods and tools that already exist can be more easily integrated within the standard development process in this way. \n Fig. 3. Six spheres of technology experience (adapted from Peters, Calvo, and Ryan [26]). \n It is worth noting that, while it is true that an ideal project would start with responsible methods from the beginning, we are aware that few projects represent the ideal. Integrating wellbeing and ethics from any point is likely better than not at all and will contribute to more responsible outcomes. \n C. Framework 2-The Spheres of Technology Experience There has been a lack of appreciation for the different resolutions at which technologies can have an impact on experience. A technology can make an impact through the design of its interface, through the tasks it is designed to support, through the behaviors it promotes, or as a collective result of widespread societal use. For instance, consider the way games impact wellbeing and autonomy (autonomy is both a constituent of wellbeing [35] and a central principle of AI ethics frameworks in its own right). A user may experience a strong sense of autonomy and wellbeing during game play, but because the game is designed to increase compulsive engagement, a resulting addiction may diminish the same user's experience of autonomy at a life level (as overuse crowds out time for taking care of work, family, and other things of greater importance to her). Does the game, therefore, support or hinder autonomy? The answer in this context is probably "both," and a fair assessment of ethical impact therefore relies on an evaluation that takes into account the impact at different resolutions. It is clear that greater precision is required in order to effectively identify impacts at different granularities within the technology experience. Calvo et al. [36] first highlighted this need with respect to autonomy and presented a framework distinguishing four "spheres of autonomy." Peters et al.
[32] expanded on this substantially, developing, as part of a larger model, a framework of technology experience which identifies six distinct spheres within which wellbeing can be influenced. It is this framework that we believe can be usefully applied to provide a structure to ethical impact analysis conducted during a responsible development process. We provide an illustration of the six spheres in Fig. 3. This "Spheres of Technology Experience" framework is described in detail by Peters et al. [32], wherein they also provide methods for wellbeing-supportive design. An application of the framework to human autonomy in AI is described in [37]. Below we provide just a brief description of each sphere to show how each can also help to structure ethical analysis. Adoption: This sphere refers to the experience of a technology prior to use, including the marketing and socio-cultural forces leading a person to use it. A development team may want to consider the ethical impacts of the forces leading to uptake, and to what extent users are choosing to use a product freely or being coerced or pressured to do so. As a simple example, a new upgrade can be forced upon existing users (an approach that is notoriously un-user-friendly), or it can be introduced in a way that better respects autonomy, as when developers provide an option to "try the upgrade" first. Interface: The interface sphere is the first sphere of the "user experience" and involves interacting with the product itself, including the use of navigation, buttons, and controls. At this level, an ethical analysis might explore issues to do with autonomy and inclusion, for example: to what extent does the interface support autonomy by providing meaningful options and controls? Are users of different abilities or cultures being excluded? Task: Broadening the lens of analysis beyond the interface, the task sphere refers to discrete activities enabled, or enhanced, by the technology. For example, in the case of a fitness app, "tracking steps" or "adding a meal to the diary" constitute tasks. Ethical impacts arising from an analysis of these tasks might include the risk of inadvertently contributing to eating disorders or anxiety. Awareness of these risks can help designers structure tasks in ways that respect the diverse needs of users and provide safeguards against negative outcomes. Behavior: Combinations of tasks contribute to an overall behavior. For example, the task "step-counting" might contribute to the overall behavior "exercise." For technologies intending to have a positive impact on a particular behavior, it is important to consider the psychology literature on the effects of various approaches to supporting that behavior (and/or work with a psychologist). Life: The final and broadest sphere within the user's experience is life, which captures the impacts of a technology at a life level. Not all technologies will have impacts significant enough to yield measurable effects on a person's quality of life overall. While a self-driving car may impact measures of autonomy and wellbeing at the life sphere for someone who is vision impaired, the extent to which a cooking timer is customizable probably will not.
While many technologies have only narrow application, and there is little reason to expect them to impact the life sphere, others, such as those that target wellbeing directly (e.g., meditation and fitness apps) or those used daily (workplace technologies, entertainment products, and social media), do need to consider and anticipate life-level impacts. Society: Expanding beyond the user experience into the broadest sphere, society involves the direct and collateral impact on nonusers, nonhuman life, and the environment. The self-driving car mentioned above may promote wellbeing for some users but decrease it for those whose livelihoods depend on driving. This can only be revealed at a societal level of analysis, which entails the exploration of emergent and complex systems. This sphere presents the greatest challenges to impact analysis. Identifying and anticipating ethical impacts at this level will require not only multidisciplinary expertise but also ongoing evaluation after a technology is released into use. Nevertheless, some specific methods already exist to assist developers in anticipating societal impact. "Consequence scanning" is a method developed by Doteveryone, a nonprofit organization dedicated to responsible innovation [38]. The method provides a step-by-step process for collaboratively identifying ethical risks and tensions associated with a new or planned product or service. It is important to qualify that the boundaries between the six spheres of technology experience are merely conceptual and should not be seen as concrete. Instead, they are intended to provide a way of organizing thinking and evaluation that allows for the identification of contradictory parallel effects. The way in which the Spheres of Technology Experience framework allows us to identify and target wellbeing and ethical impact helps to ensure that the analysis is both more thorough and more clearly articulated. The fact that empirical measures already exist that can be applied to these spheres also makes their application practical and actionable. Existing measures (described in [32]) can help developers to quantitatively compare different technologies and designs with regard to their different impacts on a range of ethical and wellbeing-related attributes and within different spheres. Some examples of how measures have been used to evaluate ethical impact already exist. For example, Kerner and Goodyear [39] conducted a study investigating the psychological impact of wearable fitness trackers. Additionally, a series of studies comparing how different game designs impact wellbeing and autonomy have been conducted using psychological measures [40], [41]. While the above examples focus on autonomy and wellbeing, the spheres can be used to articulate impact in relation to any ethical values, and at any stage within the responsible development process. \n IV. CASE STUDY-RESPONSIBLE DIGITAL MENTAL HEALTH TECHNOLOGIES \n A. Project Background During the process of identifying methods for responsible design practice, we were commissioned by a health technology company to explore the ethical tensions arising in the health domain and to recommend how these could be addressed. The company wanted to follow more responsible practices, and sought to anticipate unintended consequences. We viewed this as an opportunity to enrich our experience with ethical analysis and to apply the frameworks described above. The product in question was a text-based online therapy program for depression and anxiety.
As part of the research phase (see Fig. 2) of our responsible design framework, the expert review we provided was later combined with commercial user research data to help create insights and inform ideation. We believe an expert-led analysis within the research phase is a valuable way to: 1) involve disciplinary experts; 2) draw on existing knowledge; 3) sensitize the development team to ethical issues early on; 4) inform the design of user studies; and 5) assist the interpretation of user data. The analysis below is the outcome of a multidisciplinary literature review and analysis involving a team of researchers with significant professional experience in the co-design and development of digital health technologies. Specifically, and consistent with the claim that ethical impact analysis must be a multidisciplinary endeavor, the team consisted of combined expertise in engineering, design, human-computer interaction, psychology, and philosophy. It also employed the Spheres of Technology Experience framework described above to guide the identification of key ethical considerations within digital mental health. The outcomes pertain to a narrow genre of technologies, rather than a specific product, but the same approach could be taken for a product-specific context. The analysis is structured according to the five ethical principles described in Section II. As such, recommendations are grouped into these five categories: 1) Respect for autonomy (Section IV-C); 2) Beneficence (Section IV-D); 3) Nonmaleficence (Section IV-E); 4) Justice (Section IV-F); and 5) Explicability (Section IV-G). While there are many sets of principles, we chose these five for this analysis because they align with the principles of medical ethics. The analysis is presented in the form of a series of recommendations intended for design and development teams of mental health technologies. Each is presented with: elaboration that connects it to real-world context; a brief overview of how it is addressed in practical ethics; how it may be considered in the context of the application; and specific practical strategies for development that draw on the design and engineering literature. \n B. Online Therapy Within Digital Mental Health Depression is the leading cause of disability worldwide [42], making data-enabled mental health technologies a critical area of research and industry. These technologies have the potential to increase access to therapy, reduce disparities, reduce costs, and improve the effectiveness of mental health care. But rapid change, coupled with the rapid introduction of new technologies into such a sensitive area, has brought new ethical challenges involving transparency, patient involvement, and human autonomy. A number of authors within human-computer interaction have articulated some of the ethically loaded socio-technical challenges facing digital mental health developers [28]-[31], [43], [44]. For example, Orlowski et al. [44] stated: "Design solutions not generated with end users themselves are more likely to fail... Moreover, from an ethical and moral perspective, egalitarian ways of working, such as those exemplified by participatory design, represent a promising opportunity to redress the legacy of consumer disempowerment in mental health." Another important criticism of traditional practice in digital mental health revolves around respect for autonomy, a core ethical principle. As Mohr et al.
[45] explained: \"essentially, clinical researchers have designed tools to try to get people to do what we want them to do and how we want them to do it.\" The literature has also highlighted the ethical issue of transparency as critical to this area. For example, the Psyberguide, developed by a nonprofit network of mental health professionals, includes transparency as one of three criteria for quality ratings of mental health technologies (the other two being credibility and user experience) [46] . The specific analysis herein focuses on online text-based oneto-one professional therapy for depression and anxiety. This is an augmentative approach to online therapy which, rather than replacing humans, aims to use data to increase human capabilities and to make human activity and interaction more effective, efficient, and satisfying. The analysis is presented as a series of recommendations followed by philosophical justification and practical strategies for implementation. \n C. Respect for Autonomy Recommendation 1: Mental health technologies should be designed to protect and support user autonomy. In medical ethics, the principle of autonomy includes respect for both an individual's right to decide and for the freedom of whether to decide [19] . Together, these are meant to protect both our right to make choices and our freedom to choose how and when we want to exercise that right [47] . Respect for autonomy is essential for the development of any digital health technology, but in the case of mental health technologies, it is particularly challenging. This is because certain mental illnesses can affect one's capacity to reason, one's perception of oneself and of others, one's ability to make decisions, and other cognitive capacities that are core to one's ability to self-govern. Burr and Morley [47] discussed how the presence of a mental illness might also affect a patient's choice to engage with a mental health service and restrict their ability to make healthcare decisions. In extreme cases, respecting a patient's autonomy (i.e., nonintervention) may even threaten safety, if there is a risk of harm to self or others. What this suggests is that: 1) respect for patient autonomy is defeasible such that it may sometimes have to be traded off against other goods and 2) in some cases healthcare providers may need to go beyond respect for a patient's current ability to self-govern to help build and support the user's autonomy in the long term. Online professional text-based therapies will involve (at least) two kinds of users: 1) patients and 2) therapists (counselors). It is important to also think of the therapist as a user of this technology, as their role will be changed and augmented by these new tools. This is also true for other kinds of dataenabled medical technologies which will affect not only the patient but also healthcare professionals. Recommendation 2: To protect the privacy and autonomy of users, make transparent the use of mental health data and ensure secure storage. Online mental health therapy applications that collect, store, and make use of personal data raise several important concerns around privacy, which in turn can pose risks to user autonomy [48] , [49] . 
In particular, because of the kind of personal data that is now available to be collected (e.g., biometrics, location, and online behavior), combined with advances in machine learning that make it possible to infer personal attributes from collected data (e.g., [50] and [51]), companies are increasingly able to tailor messages and services to specific individuals or groups. This means that the more personal information a company has about someone, the more effectively it can target interventions in an attempt to influence them, which may present new risks to patient autonomy. Even features that serve as a means of patient empowerment, such as self-tracking (which can be used to boost self-reflection), can pose risks to autonomy: the sharing or use of the resulting data, be it with family, friends, or even healthcare professionals (especially in nonemergency situations), can negatively affect patient autonomy. Sanches et al. [43] described this as an example of "autonomy [of patients being] claimed by their social support network, collectivized by healthcare services, or both." This explains why designing for privacy as a target can also be considered a subset of autonomy support [52]. Furthermore, since therapy sessions involve two interlocutors, both sides have reasonable claims to privacy. It is important that all users are given clear and accurate explanations about how the information collected from therapy sessions is being used. This is especially true since users, including counselors, may not be aware of the value of their data. One way of using the data, for example, is to analyze the counselors' conversations. This may be problematic, but it also presents an opportunity to provide feedback if handled in a manner that does not feel intrusive. \n 1) Practical Strategies for Respecting Autonomy: The literature on self-determination theory, a robustly evidence-based psychological theory of wellbeing and motivation [35], provides guidance on what characteristics constitute "autonomy-supportive" (versus controlling) environments and interactions. According to this work, autonomy-supportive interactions are characterized as follows. 1) Understand the other's perspective (frame of reference). 2) Seek the other's input and ideas. 3) Offer meaningful choices. 4) Empathize with resistance and obstacles. 5) Minimize the use of controlling language or rewards. 6) Provide a rationale for requested or required behavior. These can be translated into design guidance for digital technologies. For example, empathy is the cornerstone of human-centered design, so employing human-centered methods is likely to increase autonomy-supportive outcomes. Seeking the user's input and ideas is also achieved through human-centered and participatory processes. In the mental health context, this requires engagement with people who have experienced mental illness, as only they have direct expertise around frames of reference, threats to autonomy and privacy within their contexts, and insights into the kinds of obstacles that are most salient for them. Moreover, insights from these processes can inform what meaningful choices can be added to the technology. With respect to the protection of privacy specifically (Recommendation 2), meaningful choices are likely to involve giving the client control over when and with whom data is shared.
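As a concrete illustration of that last point, the sketch below shows one generic way a client's sharing preferences could gate any release of therapy data. It is a minimal, assumption-laden Python example, not a description of the product in this case study; the recipients, data categories, and function names are hypothetical.

```python
# Minimal sketch (hypothetical, not the studied product's design): the client's own
# preferences decide when and with whom each category of therapy data is shared.
from dataclasses import dataclass, field

@dataclass
class SharingPreferences:
    # recipient -> set of data categories the client has explicitly agreed to share
    allowed: dict = field(default_factory=dict)

    def permits(self, recipient: str, category: str) -> bool:
        return category in self.allowed.get(recipient, set())

def share(prefs: SharingPreferences, recipient: str, category: str, payload: bytes) -> bool:
    """Release data only when the client's current preferences explicitly permit it."""
    if not prefs.permits(recipient, category):
        return False  # default to not sharing; surface the choice to the client instead
    # ... transmit payload to the recipient over an encrypted channel ...
    return True

prefs = SharingPreferences(allowed={"therapist": {"mood_scores", "session_notes"}})
assert share(prefs, "therapist", "mood_scores", b"...") is True
assert share(prefs, "family_member", "mood_scores", b"...") is False  # nothing granted
```

Keeping the default at "do not share" makes the meaningful choice the client's rather than the system's.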
Security experts should be consulted in ensuring that the data collected and analyzed in the course of mental health therapy is safely and securely stored, and that it is only shared through properly encrypted channels. In many countries, this is required by the Health Insurance Portability and Accountability Act (HIPAA) and similar legislation. \n D. Beneficence Recommendation 3: Consider the wider impact of both opportunities and risks for all stakeholders involved in the development and use of mental health technologies. In biomedical ethics, the principle of beneficence is typically thought of as a commitment to \"do good.\" In this context, this will require that the potential benefits of a design or development choice be balanced against potential risks, both for an individual user and for society more broadly [19] . First, it is important to identify all those who stand to benefit from a particular technology. In addition to patients, the use of online text-based therapies impacts therapists, developers, family members, other patients, and the wider mental health care community. Hence, there is a need to adopt a holistic approach to design and implementation that ensures that all parties affected are considered. Moreover, the collateral impact might involve other, less obvious, stakeholders, such as developers themselves. For example, with supervised learning, a human has to assign labels to data used to train predictive algorithms. In the case of mental health therapies, this means that an employee must read and tag sensitive conversations between doctors and patients. This could have a harmful psychological impact on developers since therapy sessions are likely to contain content that could be distressing or triggering, depending on one's own life experiences. Such a labeling task might require training, so that the developer has the necessary context for what they might read as well as training on how to cope. Relatedly, Sanches et al. [43] expressed worry about \"burnout\" for HCI researchers working in the challenging area of mental health and mention the need for greater peer and institutional support. They also suggest rethinking how such support can be explicitly factored into institutional guidelines and budgets. Recommendation 4: Research the access requirements and unique mental health situations of diverse populations in order to ensure mental health technologies are effective for all relevant groups. In many cases, the risks of a new technology are not evenly distributed. In the context of data-enabled digital mental health therapies, the relative dearth of research and understanding on the needs of people from diverse socioeconomic and ethnic groups may put members of those groups at greater risk. If online therapies are developed using a data set that only includes relatively affluent university students, or that lacks other forms of representation, then the therapy will only be optimized for a homogeneous group. Hence, it is important that the training set for the algorithm genuinely represents the diversity of the target population that will use it. In practice what this means is that in some cases sets used to evaluate algorithms might need to come from a different statistical distribution than the training set. This comes with its own challenges, for example, understanding the wide variation in groups affected by mental illness, reaching out to \"hard to reach populations\" (e.g., the homeless, refugees, those addicted to drugs, etc.) 
and determining how measures can be used to yield inclusive and broadly beneficial interventions. Recommendation 5: Aim to support authentic human interactions, connectivity, and engagement. Another example of the kind of balancing that needs to be done to ensure beneficence involves the opportunities and risks that digital health technologies pose for authentic relationships. The context of mental healthcare requires respect, dignity, and empathy. However, even highly sophisticated AI systems lack human empathy and are at best able to mimic these traits. Thus, even partial automation in mental healthcare, if not implemented cautiously, could threaten "relational authenticity" [53]. In Hertlein et al.'s [54] study of family and marriage counselors' ethical concerns around online therapy, one theme that emerged was the impact on the therapeutic relationship. One participant expressed concern that there may be "missed information, lost feelings/understanding, lack of intimacy and disclosure." Another therapist worried that online therapy "lacks the opportunity for physical human interaction, such as offering a crying client a tissue or engaging in therapeutic touch, which could possibly act as a barrier to joining effectively with clients." These statements capture the concern that the use of AI could lead to feelings of alienation and devaluation. A related concern is a reduction in the quality of communication that may result from the lack of nonverbal cues and body language. This is true for online therapy, but also more broadly for other forms of data-enabled digital health interventions. There is some evidence, for example, that the data entry required for electronic medical records (EMRs) disrupts the nonverbal relationship between health-care providers and patients (e.g., [55]). Other research has found that nonverbal cues, including eye contact and social touch (e.g., handshakes), significantly influence patient perceptions of clinician empathy [56]. Hence, the loss of such nonverbal cues can make it more difficult for health care providers to demonstrate empathy and to build authentic relationships with clients. In addition to concerns about alienation and reduced quality of communication, some evidence suggests that relational authenticity also encourages patient engagement and trust [57]. Hence, any reduction in relational authenticity might, in turn, diminish engagement and trust. In their recent report, Sanches et al. [43] expressed a desire to see "more novel designs of systems that foster and support beneficial human interactions, beyond the design of autonomous agents imitating empathy and aimed at replacing human contact." On the other hand, technological interventions in mental health may provide new opportunities for engagement that are not available in a strictly human-to-human context. One example is a 3-D avatar that functions like a virtual therapist but does not try to perfectly emulate a human being [58]. The result was (somewhat surprisingly) positive: "Patients admit that they feel less judged by the virtual therapist and more open to her, especially, if they were told that she was operated automatically rather than by a remote person" [59]. This suggests that patients might be able to have differently authentic interactions with technologically mediated systems, if they are well designed.
Designs such as these may be able to explore new ways of connecting with humans and eliciting beneficial relationships and experiences that are authentic in their own way, though not authentically human. 1) Practical Strategies for Beneficence: Arguably, the technology experience of people living with mental health issues can only be well understood by engaging directly with them as part of a collaborative design and evaluation process. This experience will be shaped by socio-economic and cultural circumstances and will, therefore, differ among individuals, yet meaningful patterns will still exist. User involvement that adequately represents the diversity of potential users of a service is therefore critical to bringing about genuine benefit. This inclusive process will also help to prevent blindness to the reality of the wide spectrum of audience needs within mental health service provision. This includes differing requirements due to low income, disability, low literacy, limited access to computers, mobile phones, and Internet connections, as well as low technology literacy (even among young people) [60]. In addition, users will prefer different modes of technology use at different times. For example, an insomnia therapy that does not require keeping a phone by the bed may be far more effective, while users may not feel comfortable using an audio or video-based program within public spaces. As such, designers should consider providing clients with multiple ways of accessing materials and consider how flexibility can be provided in the delivery of services. \n E. Nonmaleficence Within medical ethics, nonmaleficence is an obligation not to cause harm. This also applies to the design and development of data-enabled digital mental health therapies. The difficulty with this principle is avoiding the "known unknowns"-that is, harms that one foresees, though with some uncertainty-as well as the "unknown unknowns"-that is, harms that one does not foresee. The latter requires evaluating (and re-evaluating) impact both during development and after release. Recommendation 6: While augmentation can be beneficial, ensure that over-reliance on technology does not lead to atrophy of critical skills or diminish competence. One example of a foreseeable, though uncertain, harm is skill atrophy: the decline in abilities that comes from underuse, or from neglecting the behaviors and tasks that keep skills up to date. Over-reliance on technology has been cited as a contributor to atrophy of skills in many different contexts (e.g., [61] and [62])-a concern that dates back at least as far as Plato's discussion of the diminishing effects that writing would have on memory [63]. As more tasks are automated in the context of mental health, this could result in atrophy of previously used skills of both patients and therapists. Though there is a case to be made for replacing particular types of skills or activities with more worthwhile uses of human capacities (e.g., replacing repetitive calculations or data entry with creative or empathic pursuits), there are also risks to be managed, as atrophy can lead to dependence and even safety issues. These risks may necessitate fail-safes (procedures for cases in which technology malfunctions and people need to rely on past skills), or they might necessitate not introducing technology into realms where humans should remain critically vigilant or engaged, such as areas that require value judgments [64].
Some areas of mental health-care may be among these. For patients, there may be a risk of losing good decisionmaking skills and the ability to check-in with themselves, to self-reflect, as well as to understand and troubleshoot symptoms and emotions. Technology can be a tool to prompt analysis of mood or symptom data, provide encouragement or trigger an alert for when to get help. But if someone is entirely dependent on a device for self-reflection they may lose competence at self-management when they are decoupled from the device (e.g., due to a loss of network connection, a damaged device, or no battery power). Additionally, dependence on a technology to manage care may result in lower feelings of self-efficacy, empowerment, and control [64] . For therapists, the introduction of technology into the diagnostic and therapeutic process could result in atrophy of critical professional skills. In cognitive-behavioral therapy sessions, therapists interact closely with patients through structured discussion sessions to break down problems into separate parts (thoughts, behaviors, and actions) and then to suggest strategies that patients can use to change their thinking and behavior. The success of these sessions depends on the therapist's ability to home in on problems, deconstruct them, engage patients, and suggest strategies to adopt. All of these steps are skills that therapists develop over time, and they are also all skills that can be augmented through AI and digital technologies. This, in turn, makes them susceptible to atrophy. If a therapist becomes over-reliant on an app that aids in these skills, over time she may lose them and struggle to be as effective in face-to-face sessions with patients. Technologists will need to work closely with therapists and patients to determine appropriate areas for automation and augmentation and then evaluate outcomes after release. Recommendation 7: To avoid risks arising from stigma, design to protect the privacy of users and always ensure secure storage of mental health data. Another foreseeable though uncertain harm is privacy. Because mental health is a stigmatized topic, those that suffer from mental health conditions face the risk of bias and discrimination, from both themselves (selfstigma) and others. This means that if digital health records of mental health status are leaked, hacked, or accessed by unconsented third parties, a user's dignity and reputation could be threatened, and they could be put at risk of discrimination. These concerns are true in traditional (face-to-face) therapy as well, but relying on digital online platforms, from EMRs, to online therapies, poses new risks to both informational and decisional privacy [65] . 
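One practical safeguard against the leak and hacking risks just described, and against the storage concerns raised under Recommendations 2 and 7, is to keep session records encrypted at rest. The sketch below is a minimal, generic illustration using the open-source Python cryptography package; it is our own assumption for illustration, not the platform's actual scheme, and it deliberately leaves real key management (rotation, access control, managed secret stores) out of scope.

```python
# Minimal sketch: symmetric encryption of a session record at rest using the
# third-party "cryptography" package (pip install cryptography). Illustrative only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, load from a managed secret store
fernet = Fernet(key)

record = "session transcript: ...".encode("utf-8")
stored = fernet.encrypt(record)  # ciphertext safe to persist to disk or a database

# Only services that hold the key (and are authorised to use it) can recover the plaintext.
assert fernet.decrypt(stored) == record
```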
In Hertlein et al.'s [54] survey, participants expressed concerns about the authenticity of the user (such as \"who has access to the computer\" and \"the [chance] of loss of control of who has the device at the other end\"), about who else might be physically present in the same room as the counselor (\"How can the therapist or client be sure no one else is in the vicinity of the computer-that is, how can you assure confidentiality?\"), and about the possibility of hackers (\"security online is not guaranteed.\") Hence, in the case of online therapy, patients not only have to trust their counselor's good intentions, they also have to trust that counselors will protect their computer screen or other devices from onlookers, protect their passwords, use secure network connections, and not use shared computers [54] , [66] . Patients furthermore have to trust the provider of the technology not to use the data for any unconsented purpose. For this reason, it is important that the utmost care is taken by companies to protect and anonymize the use and storage of sensitive data. Doherty et al. [52] suggested additional design implications to protect against the risks of stigma. \n 1) Practical Strategies for Nonmaleficence: There are a number of practical strategies that help ensure the principle of \"do no harm\" is followed. First, in addition to user research and involvement, a safe user experience design depends upon iterative improvement based on the ongoing evaluation. Health technologies also require clinically relevant efficacy trials. Owing to the potentially drastic consequences of ineffective (i.e., potentially harmful) mental health technology, evaluation of both user experience and health outcomes is an essential criterion for a responsible approach. Evaluation might initially consist of expert review, heuristic evaluations, and internal prototype testing, and be followed by pilot studies evaluating technologies with users until there is sufficient evidence of feasibility and benefit to justify a more formal clinical evaluation. Further evaluation after the release of the product can inform improvements and upgrades and is necessary for determining impact and appropriation within complex real-world contexts (which are often very different to the controlled environments of clinical trials). Our framework for a responsible design process calls for just this kind of staged approach to evaluation (see Doherty et al. [52] for further discussion of a staged approach to the evaluation of mental health technologies more specifically). Moreover, as alluded to earlier, when it comes to mental health technologies, technologists should not attempt to \"go it alone.\" Ensuring that users, their contexts, the healthcare system, medical research, safety, ethical implications, and many other critical considerations are given expert attention requires a multidisciplinary team. Traditional approaches to \"failing fast and often\" are potentially disastrous in a health context in which people cannot always safely be used as guinea pigs for a/b testing. As such, mental health professionals must be part of the design and development team. They can help ensure more rigorous, evidencebased, and appropriately safety-conscious approaches are taken. Experts in ethics should also contribute in order to effectively assess ethical considerations from multiple standpoints. It may be helpful for them to work directly with user experience specialists to allow broad stakeholder input into ethical concerns. 
In addition to involving multidisciplinary teams and undertaking ongoing evaluation, technology approaches need to be grounded in research to prevent harm. Topham et al. [67] argued that it is an ethical responsibility \"to ensure that mental health technologies are grounded in solid and valid principles to maximize the benefits and limit harm.\" Doherty et al. [52] similarly recommended that systems be based on accepted theoretical approaches for clinical validity. Furthermore, a need for rigorous approaches should apply, not only to the therapeutic program employed but also to the user research and evaluation practices. A human-centered focus on lived experience suggests the importance of mixed methods including qualitative methods for uncovering insights into subjective experience, motivation, and the causes of engagement and disengagement. These can complement and explain results from quantitative approaches, such as symptom scores, behavioral analytics, or surveys. Finally, a simple safeguard for avoiding nonmaleficence is to apply existing quality frameworks. A number of quality frameworks and guidelines have been developed by multidisciplinary groups of researchers and these can be applied as a basic foundation for more responsible design. For example, the transparency for trust principles [68] includes questions around privacy and data security, development characteristics, feasibility, and health benefits, and their creators advocate that all apps should be required to provide information relating to these four principles at minimum. More specific to mental health, the Psyberguide, developed by mental health professionals, bases its ratings on criteria for credibility, user experience, and transparency [46] while the American Psychiatric Association has an app evaluation model for psychiatrists [69] . Technology-specific guidelines have also been developed, including the guidelines for the design of interventions for mental health on social media [70] . With respect to ensuring anonymity to prevent harms from stigma (recommendation 7), design implications may involve allowing for discreet use. For example, studies have revealed problems with app titles that include stigmatized words like \"mood\" or \"mental health\" because users worry others will see them [52] . The discreet design may also involve avoiding client-identifying data on the interface whenever possible (e.g., data graph screens that do not need to include personal details). \n F. Justice Justice is a complex ethical principle that is closely linked to fairness and equality, though is not quite the same as either [71] . Sanches et al. [43] described the principle as requiring the \"fair distribution of benefits, risks, and costs to all people irrespectively of social class, race, gender, or other forms of discrimination.\" In medical ethics, the principle is often subdivided into three categories: 1) distributive justice; 2) rights-based justice; and 3) legal justice. Distributive justice requires the fair distribution of resources and is particularly concerned with scarce resources. Rights-based justice requires that people's basic human rights be respected [72] . Privacy and autonomy, for example, are widely recognized as human rights and hence some of the concerns raised thus far would fall under rights-based justice. Finally, legal justice requires that people's legal rights be respected. 
The development and implementation of data-enabled digital mental health technologies raises particular concerns about distributive and rights-based forms of justice. Because the law differs by jurisdiction, we will not discuss legal justice. There are two main areas in which to analyze distributive and rights-based justice within data-enabled mental health technologies: 1) in the design process and 2) in the distribution of the final product or service. In the first, compensation and credit for the human labor involved in algorithmic design must be considered; and in the second, questions about who is able to access and make use of the service need to be considered. Recommendation 8: Make known the value of human labor and intellectual property in the development of algorithms to all parties, and potentially compensate for it. With regard to the design process, one type of ethical challenge arises from \"heteromation\": the extraction of economic value from lowcost (or free) labor [73] . This includes all sorts of labor, from Amazon Mechanical Turk workers, who are paid very low wages to complete tasks that are difficult for an algorithm to do, to the work of completing a Captcha, or other forms of reverse Turing tests, where a person must prove they are human by completing a task (e.g., identifying and selecting all images of crosswalks in a series of nine photographs). These tasks automatically build training sets for algorithms that will eventually be able to accomplish these tasks themselves. Hence, there may be a transfer of intellectual property to the company for which the human laborers are not credited, as well as work for which they may not be adequately compensated. These issues can be addressed in some projects by disclosing the uses of data or seeking approval to use the data for research and development purposes. This has been done, for example, in EQClinic, a project in which a telehealth platform is used to help medical students improve their communication skills [74] . A related concern in the development and prototyping of products is piloting on low income, high need, or otherwise vulnerable populations. On the one hand, providing a service to a population that has a critical need for it and may be willing to try an earlier developed prototype seems sensible. On the other hand, it may involve putting these vulnerable populations at risk by deploying or testing unfinished solutions. One area to potentially draw upon in considering these issues is the cost-benefit considerations at play in the treatment of rare diseases for which there are no known and tested cures [75] . When it comes to new or experimental medical technologies there is an absolute need to obtain informed consent, so that when patients agree to testing they do so with full understanding of the potential benefits and harms. It is important to make sure that any vulnerable population is informed about other options for care, so that they may reasonably decline new (especially, experimental) treatments without feeling compelled to accept them. There may, of course, also be positive social justice outcomes that encourage early users to act as \"data altruists.\" For example, early advances in algorithmic solutions can reduce costs for future generations and expand access to less advantaged segments of the population. There is evidence that some people may be willing to share their data, even without direct compensation, if these benefits are communicated to them [76] , [77] . 
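Recommendation 8's suggestion to disclose uses of data and seek approval can be made operational by recording, per participant, which secondary uses have been consented to and filtering on that record before any material reaches a labelling or training pipeline. The sketch below is a hypothetical Python illustration; the field names, identifiers, and values are ours, not drawn from any real system.

```python
# Hypothetical sketch: consent records gate which sessions may be used for research
# analyses or for training and evaluating algorithms.
from dataclasses import dataclass
from datetime import date

@dataclass
class DataUseConsent:
    participant_id: str
    research_use: bool      # agreed that data may inform research analyses
    model_training: bool    # agreed that data may be used to train or evaluate algorithms
    recorded_on: date

consents = {
    "p001": DataUseConsent("p001", research_use=True, model_training=False,
                           recorded_on=date(2020, 1, 15)),
}

def usable_for_training(participant_id: str) -> bool:
    consent = consents.get(participant_id)
    return bool(consent and consent.model_training)

sessions = [{"participant_id": "p001", "text": "..."}]
training_pool = [s for s in sessions if usable_for_training(s["participant_id"])]
print(len(training_pool))  # 0: p001 agreed to research use but declined model training
```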
Recommendation 9: Follow guidelines for universal accessibility and tailor the level and mode of content to the spectrum of audience needs. Certainly, one positive feature of online therapies is that they can increase access for remote and working populations. In these ways, online therapy, reduces the barrier to entry and could increase uptake. It is unfair, however, to assume that low-income populations all have access to the necessary computing devices and stable Internet connections. Burr and Morley [47] have recently argued that genuine empowerment of a patient crucially depends on \"the prior removal of certain barriers to engagement, which patients suffering from a variety of mental health conditions face.\" As national healthcare services move increasingly toward online therapies, research must be done on which populations are equipped for uptake, so that vulnerable communities are not left out. Beyond initial uptake, there is further evidence that minority populations tend to have lower retention in mental healthcare [78] . Thus, there is a critical need for more research into the root causes of this finding, as well as ways to better tailor to these populations with the use of online therapies. This includes designing for differing requirements relating to income, literacy levels, and technology access [60] . 1) Practical Strategies for Supporting Justice: International guidelines for digital accessibility and \"universal design\" provide essential starting points for ensuring a technology does not exclude users with older devices, limited Internet access, physical disabilities, or other varying requirements. Furthermore, as mentioned, researchers have expressed a need for more involvement of people living with mental health issues in technology design [43] - [45] . Deep user involvement is not only necessary in order for a technology to be genuinely useful and engaging to its audience, but is also arguably, a matter of design justice, in that it represents a more democratic and consultative approach. One popular approach to user involvement is \"participatory design\" [79] which involves including users as collaborators from the earliest exploratory phases of development. Orlowski et al. [44] provided specific examples of practical applications of participatory design and design thinking methods for mental health technology. Likewise, where the use of a technology will require the involvement of carers, parents, or providers, their unique needs should also be included. Finally, it is worth noting that the term \"user\" itself, while useful for its specificity within the technology context, can be inadvertently de-humanizing, obscuring ethical responsibilities. Therefore, in many cases, words like \"human,\" \"clients,\" \"patients,\" \"people,\" or even \"lives\" may be far more appropriate. \n G. Explicability In addition to the four traditional bioethical principles, Floridi et al. [3] included explicability for the AI context, which they describe as enabling the other principles through both intelligibility and accountability. Other terms, such as \"transparency,\" are also frequently used in AI ethics frameworks to capture a similar duty [e.g., the IEEE (2019) [2] uses both transparency and \"accountability\"]. In general, the idea is that we (i.e., designers, users, and society more generally) need to be able to understand data-enabled systems enough to somehow hold to account their functions (both in terms of their input data and their outputs). 
We will focus our discussion on the concepts of transparency and accountability as aligned with the IEEE guidelines [2] . Recommendation 10: Ensure transparency and accountability in all aspects of the use of mental health technologies as it is critical to safe and beneficial care. Transparency to do with the collection, use, and storage of data is fundamental to ensuring privacy and other rights, such as informed consent. There are many areas in which transparency must be integrated within an online text-based mental health platform, and many of these arise from the use of a mediating platform which introduces other parties into what was traditionally a confidential conversation between counselor and patient. For example, developers need to be involved in order to design and support the platform; conversations may be recorded and analyzed for potential introduction of AI capabilities; and then these capabilities will need to be audited in order to ensure they function correctly. All of these new layers will require some degree of transparency and accountability. When signing up for a platform and consenting to therapy conducted in online formats, patients should have an understanding of who will have access to what parts of their data and why. As more data is collected and recorded, it should be made clear which parties have access to patient notes and therapistpatient conversational records. Additionally, text-based therapy introduces the possibility for different interactions with the data from a session, but this access comes with both benefits and risks which need to be carefully considered [80] . At a high level, there should also be basic transparency and accountability around business models since for-profit advertising or payments from insurance providers or employer health programs may come with incentives that conflict with the best interests of patients. Funding sources and revenue models may create conflicts of interest in data sharing and breach the trust of patients. 1) Practical Strategies for Explicability: Quality frameworks for digital health provide a valuable starting point for applying principles of transparency and accountability. For example, The Transparency for Trust Principles [68] require standard information to be communicated to users in understandable ways, including information around privacy, data security, development characteristics, feasibility, and health benefits. The Psyberguide [46] bases ratings on transparency as well, so examples of technologies that meet the transparency criteria provide models for practical approaches to implementation. \n V. CONCLUSION The recommendations described above present the result of an ethical analysis conducted by a particular team of professionals in the context of a particular technology type within a specific domain. Analyses by other teams would yield different outcomes although it is reasonable to assume that, for a given context, patterns of concerns will emerge that overlap. As digital ethics continues to grow in importance, the ethical principles for AI can help us to structure ethical impact analyses. However, the translation of ethical principles into actionable strategies in practice is challenging. We have presented a number of frameworks, including one for a responsible design process and another for providing greater resolution to technology experience as a contribution toward helping address the difficulty in moving from principle to practice within ethical impact analysis. 
We have also provided a description of the outcomes of an expert-led ethical analysis, conducted in the context of digital health, in order to show one way such an analysis might contribute to the early stages of a development process. Of course, our contribution goes only a very small way toward the full integration of ethical impact assessment needed within engineering practice. More research and experimentation with various tools and methods, as well as focused research on ethical implications pertinent to specific application areas and technologies, is still very much needed. We hope the frameworks presented can provide some help toward shaping that path. Fig. 1. Connections among six fields centrally involved in the ethical design and development of AI and data-enabled systems (based on the Sloan Foundation's hexagonal mapping of "connections among the cognitive sciences," 1978, reproduced in Gardner 1985, p. 37). Key: unbroken lines = strong interdisciplinary ties; broken lines = weak interdisciplinary ties. \n Fig. 3. Six spheres of technology experience (adapted from Peters, Calvo, and Ryan [26]).", "date_published": "n/a", "url": "n/a", "filename": "Responsible_AITwo_Frameworks_for_Ethical_Design_Practice.tei.xml", "abstract": "In 2019, the IEEE launched the P7000 standards projects intended to address ethical issues in the design of autonomous and intelligent systems. This move came amidst a growing public concern over the unintended consequences of artificial intelligence (AI), compounded by the lack of an anticipatory process for attending to ethical impact within professional practice. However, the difficulty in moving from principles to practice presents a significant challenge to the implementation of ethical guidelines. Herein, we describe two complementary frameworks for integrating ethical analysis into engineering practice to help address this challenge. We then provide the outcomes of an ethical analysis informed by these frameworks, conducted within the specific context of Internet-delivered therapy in digital mental health. We hope both the frameworks and analysis can provide tools and insights, not only for the context of digital healthcare but also for data-enabled and intelligent technology development more broadly.", "id": "07e0fd01fcdb433422c939cc54960d8d"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Colleen Mckenzie", "J Bryce Hidysmith"], "title": "AI Insights Dataset Analysis", "text": "Methods & Assumptions Our metric for what constitutes an insight is an estimate of how novel or surprising the results of that insight are, using a comparison of LSTM and RNNs as a baseline-that is, given that RNNs exist, how surprising is the development of LSTM? All discoveries that seem equally or more surprising were included in our set. This is one of many possible thresholds for significance we could have chosen, and the rationale behind ours is subjective: our focus on theoretical progress means that the data do not include progress captured in implementing an existing theory, or applying it more directly. For example, punched-card programming of early computers had clear impacts on the progress of computing, but we attribute the insight behind that technology to the 18th-century textile worker who developed a punched-card method for programming looms. From this perspective, the application of punched cards to electrical computers is a combination of existing theories, not a new development in itself.
We consulted two existing chronicles of the history of AI in an effort to flesh out parts of our data set that may have fallen in our blind spots:
- Nilsson, N. J. (2009). The Quest for Artificial Intelligence. Cambridge University Press.
- Crevier, D. (1993). AI: The Tumultuous History of the Search for Artificial Intelligence. Basic Books.
A few additional notes on the data:
• First to discover, not first to publish: We credit an insight to the first person known to have described it, as best we can determine through reputable sources, not the first person to publish that description. This practice is motivated in part by our understanding that not all discoverers published work in the academic tradition (e.g., William Newcomb, credited with the conception of Newcomb's problem), and in part by some discoveries' having been kept confidential, pre-publication, in corporate and state departments.
• Specific vs. general discoveries: Sometimes an early discovery constitutes a special case of a later, more general discovery; in this case, we credit the more general discovery. If the special case itself seems to constitute an influential insight, we credit it as a separate line item; but if it was largely ignored, or was unavailable to discoverers of the more general version, we omit it from this list. For example, ideas from evolutionary biology that were first discovered empirically through biological field research have been rediscovered independently by AI researchers through simulationist and purely rationalistic investigation, and only later formalized in such a manner that they could be used by AI researchers. We count the usable formalization, rather than the case studies that could serve as raw material for specifying it.
• Hardware agnosticism: Our focus here is on information-theoretical insights, not on insights that aid in the development of new hardware, the rationale being that artificial intelligence is a substrate-independent pursuit, provided that the Church-Turing thesis holds. Further, charting the history of today's electronic computers alone would constitute a project of roughly equal magnitude, and one that aligns less closely with our area of expertise.
\n Data & Parameters The full dataset (as JSON) and its schema are available at mediangroup.org/research; this section provides a brief overview of the data for readers of this report. The set includes 193 total insights. For each insight, we record the following information (plus sources for each fact); an illustrative example record is sketched below:
• Name
• Year of discovery
• First work in which the insight was referenced or published
• Discoverer(s)
• Institutional affiliation of discoverer(s) at time of discovery (if applicable)
• Institution(s) sector and type (see below)
• Nation of origin of discoverer(s)
• Additional notes
Institutions are assigned rough sectors and types from the following lists:
Sectors: • Academic • Military • Industrial • State • Other R&D • Religious • N/A
Types: • Public university • Private university • For-profit entity • Non-profit entity • N/A
\n Analysis For the purposes of this first analysis section, we will discuss the data as we have been able to assemble it. Reflections on the limitations of the data and the extent to which they are relevant to readership follow in the Methodological Reflections section.
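To make the schema above concrete, here is what a single record might look like, written as a Python dictionary. The field names are our own informal shorthand for the items listed under Data & Parameters (the authoritative schema is the JSON published at mediangroup.org/research), and the values are indicative only; LSTM is used because it serves as the dataset's baseline for inclusion.

```python
# Illustrative example record only: field names are informal stand-ins for the published
# schema at mediangroup.org/research, and values are indicative rather than authoritative.
example_insight = {
    "name": "Long short-term memory (LSTM)",
    "year": 1997,
    "first_work": "Hochreiter & Schmidhuber, 'Long Short-Term Memory', Neural Computation",
    "discoverers": ["Sepp Hochreiter", "Jürgen Schmidhuber"],
    "institutions": ["Technical University of Munich", "IDSIA"],
    "sector": "Academic",
    "type": "Public university",
    "nation": "Germany",
    "notes": "Used in this project as the baseline level of surprise for inclusion.",
}
```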
\n Temporal distribution At a glance, the distribution of insights over time shows an expected uptick in progress contemporaneous with the European Enlightenment, followed by a sharp increase in the pace of discovery at the beginning of the 20th century. A view of only the most recent centuries provides more detail on the arc of recent progress, showing a significant increase in production with the introduction of digital computers available to major institutions in the late 1940s. A breakdown of insight discoveries by decade clarifies the changing rates of discovery. \n Historical context The uneven distribution of insights correlates clearly with the trajectory of Western intellectual and economic history. The theological and philosophical conservatism of the Middle Ages matches the flat section of the first curve above from c. 500 to 1500 AD. During the Middle Ages, there was no sense of how economic or political advantage might be gained from the use of power not derived from human, animal, or natural sources (e.g., waterwheels applied to rivers). The media of the Middle Ages, too, were meant for interpretation by individual human minds, as opposed to instantiation on larger groups or in artificial substrates. For example, medieval alchemical texts were written with the expectation that the text is exclusively human-readable, so the informational payload of the text is found not within the structure of the text as an object itself, but in the interpretation and implementation of the text by its human reader; this can be contrasted with the periodic table, which describes a structure already present in nonhuman substrate. This norm continued throughout most of the Enlightenment, perhaps reaching its height during the efforts of the Encyclopedists led by Denis Diderot. The philosophical ideas of the 18th and the first half of the 19th century provided the foundation for a scientific community able to take action during the industrial revolution, but the industrial revolution itself provided the necessary material tools for developing automation systems capable of directly realizing political, economic, or scientific goals. The technologies of the industrial revolution, including the social technologies of the factory and the joint-stock company developed immediately prior during the late Enlightenment, differed from previous technologies in that they contained implicit instructions for their operation that were not simply transformations of a source of physical power such as a horse's muscles. A joint-stock company's organization can be thought of as a primitive attempt at a superhuman intelligence implemented on a collection of individual human beings. The transition to automated systems that lacked a human in their control loop came with the advent of the Jacquard loom (captured in our data as an earlier insight by Basile Bouchon). Earlier automata in the West, the Muslim world, and China were limited to toys, and their effects were limited to the thoughts they inspired in their viewers. The Digesting Duck, an automaton from eighteenth-century France that crudely imitated the metabolism of a waterfowl, is as inert in its physical effect on the world as a child's toy truck or Michelangelo's David, affecting the world only by inspiring the behavior of human agents. The automata of the industrial period became actors in their own right.
The Jacquard loom or a modern 3D printer takes a machine-readable instruction and produces an effect that is as physically relevant as if it had been accomplished by human labor. Though automation systems capable of acting as intelligent agents in their own right, rather than replacements for physical labor, could have been developed in the 19th century through the work of Babbage and Lovelace, the uses of cryptography and automatic aiming systems for artillery during the Second World War provided the industrial basis for their maturation. Bletchley Park's work in breaking the German Enigma cipher was perhaps the first instance in which computation at a scale not accomplishable by an individual human mind was required to counter an existential threat to a major power. Previous efforts in state computation by actors such as the US Government during the censuses of the 1870s and 1880s provided valuable technological precursors, but these calculating and tabulating machines were not true computers in the modern sense. Cryptographic and cryptanalytic efforts during the wars provided the centralization of capital required to produce an abstract automation system in the form of Colossus, the first digital programmable computing device. An automation system of this kind, which processes and transforms information rather than simply taking an instruction to produce a physical effect, is obviously required for the development of modern artificial intelligence. We must further conjecture that the invention of programmable digital computing devices was required for the psychological conditions necessary for individual scientists and engineers to experiment in the fields that would eventually birth artificial intelligence. There was only one Turing, publishing his basic principles of computability in On Computable Numbers (1936), but there are countless individuals able to experiment with the machines derived from these basic insights. The massive upward trend in insights following the end of the Second World War can thus be attributed to access to modern computing machines following their invention, as well as to the instrumental incentivization of computer science and artificial intelligence research by the states locked in conflict during the Cold War for purposes of military advantage. Artificial intelligence, like any intellectual pursuit undertaken as more than a recreation by those with leisure time and intrinsic interest, advances at pace with its support by institutions tasked with the necessities of security and prosperity. The distribution of insights from the beginning of the Cold War to the present shows only a marginal deceleration after the fall of the Soviet Union, compared to the roughly linear rate of increase at the beginning of the Cold War. The shallower slopes of the curves in the mid-1970s and the late 1980s through early 1990s correspond to the two periods now described as AI winters, when progress slowed. The former of these two periods shows the more substantial decline in the insights we measured. We expect this effect is a result of the 1970s being a period of application and combination of existing insights to new problem areas, without substantial generation of additional insights, in addition to the effect of decreased AI-related research during this period. As Nilsson puts it: "Until about the early 1970s, most AI research dealt with what Seymour Papert called 'toy' problems...
However, soon after, AI efforts began a definite shift towards applications work, confronting problems of real-world importance... One reason for the increasing interest in applications was that the power of AI methods had increased to the point where realistic applications seemed within reach." (Nilsson, 207) \n Geocultural distribution (Note: data for "Russia" includes insights produced by Soviet scientists born within Russian borders and working at historically Russian institutions.) The distribution of insights by nationality of discoverer reflects a number of well-understood trends in intellectual history. Ancient Greece provided the intellectual foundations that England and France then built upon during the Enlightenment, before the rise of Germany as it industrialized and centralized politically. During the twentieth century, the rise of both the Russian and American superpowers correlated with substantial advances, but the United States dramatically outcompeted the Soviet Union. Some of the gains that could be credited to the Soviet Union if one assumes a model based on spheres of influence are marked as Hungarian in origin, or as originating in a few other countries, such as Moldova, that did not make it into the top ten. The United States' position as an aggregator of immigrants and refugees from Europe during and after the Second World War substantially increases the number of entries counted as American. The circumstances that made modern computing machines available were not only economic but also politically strategic, as the political aims of military supremacy held by both the United States and the Soviet Union during the Cold War enabled their local markets to distribute such machines. Additionally, given that the state of Israel was not established until 1949, entries there only date to that time, though many of the scientists who immigrated to Israel and established the scholarly communities that contributed to AI efforts originated in Soviet or American territory. \n Institutional distribution The vast majority of insights in our dataset, unsurprisingly, were discovered and developed by individuals at academic institutions. Early progress in mathematics and the sciences was due mostly to unaffiliated individuals-including most researchers from antiquity and the Renaissance-but as academic institutions gained prestige and the talent that followed it, they began to outcompete other institutional types. Two trends stand out in the latter part of the 20th century: the increase in output from nonacademic R&D institutions, and the slight increase in insights derived within military institutions. Nonacademic R&D includes institutions such as Bell Laboratories and the RAND Corporation, and appears in our data first in 1947 (an attribution to Bell Labs's John Tukey). Bell Laboratories (or Bell Labs) was an unusual institution, in historical context, and in many ways the first of its kind: a result of a step-function increase in the engineering progress needed for telecommunications infrastructure, Bell Labs was incubated within the monopolistic Bell Telephone Company before the Bell System's 1984 breakup following a successful antitrust suit. Within it, researchers such as Shannon, Shockley, and Hamming developed some of the core advances of their fields, and it has continued to produce novel research for decades: the most recent insight to come out of Bell Labs, in our data, was from 1995. A minimal sketch of how per-decade and per-sector counts of this kind can be computed from the published dataset follows below.
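The sketch below illustrates the kind of aggregation behind these breakdowns. It assumes a local copy of the published JSON and guesses at field names ("year", "sector"); the real schema at mediangroup.org/research may differ, so treat this as a sketch rather than working tooling for the dataset.

```python
# Minimal aggregation sketch; "insights.json" is a hypothetical local copy of the
# dataset, and the "year"/"sector" field names are assumptions about its schema.
import json
from collections import Counter

with open("insights.json") as f:
    insights = json.load(f)          # assumed: a list of records (dicts)

# Assumes years are stored as integers; BC dates would simply appear as negative decades.
by_decade = Counter((rec["year"] // 10) * 10 for rec in insights)
by_sector = Counter(rec.get("sector", "N/A") for rec in insights)

for decade in sorted(by_decade):
    print(f"{decade}s: {by_decade[decade]} insights")
print(by_sector.most_common())
```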
The RAND Corporation, in contrast, was developed in response to state officials suggesting potential benefits from private involvement in relevant research areas; RAND itself developed out of an aircraft company, becoming a separate, nonprofit institution in 1948. While insights from RAND span a shorter period in our dataset, limited to just the decade between 1950 and 1960, they outnumber, up to the present day, those from any institution besides Stanford and Princeton universities, well-known powerhouses of AI development. Dynamic programming (Bellman) and symbolic AI (Newell, Shaw, and Simon) are notable among its body of production. The relative scarcity of military institutions in our data suggests that military involvement played a more indirect role. The development of artificial intelligence is deeply entangled with military concerns, but military institutions seem to have been sources of demand for progress, not sources of progress itself. DARPA, for example, often contracted independent entities such as SRI, or teams of collaborating academics, to find solutions in its areas of interest. We note when such entities were affiliated with military institutions, though few such instances occur in our data. A final note on the breakdown of insights among institutional entities: while the credit owed to academic institutions is clear, nonacademic institutions have been contributing an increasing portion of total progress since the middle of the 20th century. We were surprised to see a near even distribution of insights between for-profit and non-profit entities in recent decades-though the RAND Corporation constitutes the bulk of the latter-and we suppose that this supports the importance of ongoing investment in independent organizations focused on AI whose mission is explicitly aligned with the public good. \n Methodological reflections Our methods generate a view of the past through a particular cultural lens-specifically, that of young American computer scientists. This project is not an attempt at the impossible task of aggregating all insights into artificial intelligence that exist; rather, it is an attempt, from our position, to aggregate the ones interpretable to ourselves. As we are roughly in the same position socially as other artificial intelligence researchers in the Anglosphere, we expect that the insights accessible to Anglosphere researchers will be analogous to those accessible to ourselves. There may be other paths to artificial intelligence development that are only accessible to individuals with differing social position, technical knowledge, or linguistic and cultural proficiency. Though it is certainly likely that such paths exist, we can make no comment on their rate of advancement. Additionally, it seems prudent to note the likely presence of survivorship bias: we are limited to only the information that has survived to the point of being accessible to ourselves. It is possible that many of the insights of the past that we have analyzed-particularly those of the distant past-were developed with the aid of information sources that no longer exist, either dead with the minds that generated them or lost with the documents that once preserved them. As with any field, this lost information may compromise the ability of individuals in the present to correctly interpret and use past insights. This case of survivorship bias is unavoidable, and mitigable to the degree that present researchers remain aware of it.
Related work on this topic is available from Samo Burja of Bismarck Analysis, in his essay Intellectual Dark Matter.", "date_published": "n/a", "url": "n/a", "filename": "insights-analysis.tei.xml", "abstract": "This dataset collects a list of discoveries essential to the development of the current generation of artificial intelligence. We trace this development back as early as we have records of relevant intellectual progress-in this case, to Ancient Greek philosophy-looking for the foundations of complex new approaches. Our goal in this project is to present an alternative to an achievement-based timeline of AI development. The development of new capabilities in machine intelligence frequently produces exciting tangible results, which dominate public conceptions of progress and are well-remembered in histories of the field, but such results are not always the product of novel technical insight. Some new abilities rely more on recombination of existing tools or on additional computational capacity-sufficient to solve complex problems efficiently, but not necessarily the most direct precursors of new tools for further recombination, or of new directions for future work without additional increases in computational efficiency. Our aim is to augment the current discourse of historically grounded forecasting by adding a specific discursive thread that tracks theoretical discoveries alone, complementing the more capability-focused accounts. We've collected metadata about each discovery (and its discoverers) and looked for trends in their production. Our own findings are described below, but we encourage extension and additional evaluation of the data, and will publish our own ongoing work here.", "id": "5f2b0cf802c2d771b03cd317418264fc"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "preferences-nipsworkshop2015.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Stuart Armstrong"], "title": "GENERAL PURPOSE INTELLIGENCE: ARGUING THE ORTHOGONALITY THESIS", "text": "The Orthogonality Thesis Scientists and mathematicians are the stereotypical examples of high intelligence humans. But their morality and ethics have been all over the map. On modern political scales, they can be left-(Oppenheimer) or right-wing (von Neumann) and historically they have slotted into most of the political groupings of their period (Galois, Lavoisier). Ethically, they have ranged from very humanitarian (Darwin, Einstein outside of his private life), through amoral (von Braun) to commercially belligerent (Edison) and vindictive (Newton). Few scientists have been put in a position where they could demonstrate genuinely evil behavior, but there have been a few of those (Teichmüller, Philipp Lenard, Ted Kaczynski, Shirō Ishii) . Of course, many scientists have been absolutely conventional in their views and attitudes given the society of their time. But the above examples hint that their ethics are not strongly impacted by their high intelligence; intelligence and ethics seem 'orthogonal' (varying independently of each other, to some extent). If we turn to the case of (potential) artificial intelligences we can ask whether that relation continues: would high intelligence go along with certain motivations and goals, or are they unrelated? 
To avoid the implicit anthropomorphisation in terms such as 'ethics,' we will be looking at agents' 'final goals' -the ultimate objectives they are aiming for. Then the Orthogonality thesis, due to Nick Bostrom (Bostrom, 2012), states that: Intelligence and final goals are orthogonal axes along which possible agents can freely vary. In other words, more or less any level of intelligence could in principle be combined with more or less any final goal. It is analogous to Hume's thesis about the independence of reason and morality (Hume, 1739), but applied more narrowly, using the normatively thinner concepts 'intelligence' and 'final goals' rather than 'reason' and 'morality'. But even 'intelligence,' as generally used, has too many connotations. A better term would be efficiency, or instrumental rationality, or the ability to effectively solve problems given limited knowledge and resources (Wang, 2011). Nevertheless, we will be sticking with terminology such as 'intelligent agent,' 'artificial intelligence' or 'superintelligence,' as they are well established, but using them synonymously with 'efficient agent,' 'artificial efficiency' and 'superefficient algorithm.' The relevant criterion is whether the agent can effectively achieve its goals in general situations, not whether its inner process matches up with a particular definition of what intelligence is. Thus an artificial intelligence (AI) is an artificial algorithm, deterministic or probabilistic, implemented on some device, that demonstrates an ability to achieve goals in varied and general situations. 1 We don't assume that it need be a computer program, or a well laid-out algorithm with clear loops and structures -artificial neural networks or evolved genetic algorithms certainly qualify. A human-level AI is defined to be an AI that can successfully accomplish any task at least as well as an average human would (to avoid worrying about robot bodies and such-like, we may restrict the list of tasks to those accomplishable over the internet). Thus we would expect the AI to hold conversations about Paris Hilton's sex life, to compose ironic limericks, to shop for the best deal on Halloween costumes and to debate the proper role of religion in politics, at least as well as an average human would. A superhuman AI is similarly defined as an AI that would exceed the ability of the best human in all (or almost all) tasks. It would do the best research, write the most successful novels, run companies and motivate employees better than anyone else. In areas where there may not be clear scales (what's the world's best artwork?) we would expect a majority of the human population to agree the AI's work is among the very best. Nick Bostrom's paper argued that the Orthogonality thesis does not depend on the Humean theory of motivation, but could still be true under other philosophical theories. It should be immediately apparent that the Orthogonality thesis is related to arguments about moral realism. Despite this, we will not address the fertile and extensive literature on this subject. Firstly, because it is contentious: different schools of philosophical thought have different interpretations of the truth and meaning of moral realism, disputes that cannot be currently resolved empirically. Since we are looking to resolve a mainly empirical question -what systems of motivations could we actually code into a putative AI -this theoretical disagreement is highly problematic.
Secondly, we hope that by approaching the issue from the computational perspective, we can help shed new light on it. After all, we do not expect the trigger mechanism of a cruise missile to block detonation simply because people will die -but would an \"ultra-smart bomb\" behave the same way? By exploring the goals of artificial systems up to higher levels of efficiency, we may contribute to seeing which kinds of agents are susceptible to moral realism arguments, and which are not. Thus this paper will content itself with presenting direct arguments for the Orthogonality thesis. We will assume throughout that human-level AIs (or at least human-comparable AIs) are possible (if not, the thesis is void of useful content). We will also take the position that humans themselves can be viewed as non-deterministic algorithms: 2 this is not vital to the paper, but is useful for comparison of goals between various types of agents. We will do the same with entities such as committees of humans, institutions or corporations, if these can be considered to be acting in an agent-like way. The thesis itself might be critiqued for over-obviousness or triviality -a moral anti-realist, for instance, could find it too evident to need defending. Nevertheless, the argument that AIs -or indeed, any sufficiently intelligent being -would necessarily behave morally is a surprisingly common one. A. Kornai, for instance, considers it a worthwhile starting point for investigations into AI morality (Kornai, 2013). He bases his argument on A. Gewirth's approach in his book, Reason and Morality (Gewirth, 1978) (the book's argument can be found in a summarized form in one of E. M. Adams's papers (Adams, 1980)), in which it is argued that all agents must follow a \"Principle of Generic Consistency\" that causes them to behave in accordance with all other agents' generic rights to freedom and well-being. Others have argued that certain specific moralities are attractors in the space of moral systems, towards which any AI will tend if it starts off with certain mild constraints (Waser, 2008). Because of these and other examples (and some online criticism of the Orthogonality thesis 3 ), we thought the thesis was worth defending explicitly, and that the arguments brought out in its favor would be of general interest to the discussion. \n Qualifying the Orthogonality Thesis The Orthogonality thesis, taken literally, is false. Some motivations are mathematically incompatible with changes in intelligence (\"I want to prove the Gödel statement for the being I would be if I were more intelligent\"). Some goals specifically refer to the intelligence of the agent, directly (\"I want to be much less efficient!\") or indirectly (\"I want to impress people who want me to be much less efficient!\"). Though we could make a case that an agent wanting to be less efficient could initially be of any intelligence level, it won't stay there long, and it's hard to see how an agent with that goal could have become intelligent in the first place. So we will exclude from consideration those goals that intrinsically refer to the intelligence level of the agent. We will also exclude goals that are so complex or hard to describe that the complexity of the goal becomes crippling for the agent.
If the agent's goal takes five planets' worth of material to describe, or if it takes the agent twenty years each time it checks what its goal is, then it's obvious that that agent can't function as an intelligent being on any reasonable scale. Many have made the point that there is likely to be convergence in instrumental goals (Omohundro, 2008). Whatever their final goals, it would generally be in any agent's interest to accumulate more power, to become more intelligent and to be able to cooperate with other agents of similar ability (and to have all the negotiation, threatening and cajoling skills that go along with that cooperation). Note the similarity with what John Rawls called 'primary goods' (Rawls, 1971). We will however be focusing exclusively on final goals, as the instrumental goals are merely tools to accomplish these. 4 Further we will not try to show that intelligence and final goals can vary freely, in any dynamical sense (it could be quite hard to define this). Instead we will look at the thesis as talking about possible states: that there exist agents of all levels of intelligence with any given goals. Since it's always possible to make an agent stupider or less efficient, what we are really claiming is that there could exist possible high-intelligence agents with any given goal. Thus the restricted Orthogonality thesis that we will be discussing is: \n High-intelligence agents can exist having more or less any final goals (as long as these goals are of feasible complexity, and do not refer intrinsically to the agent's intelligence). 5 We will be looking at two variations of the \"can exist\" clause: whether the agent can exist in theory, and whether we could build such an agent (given that we could build an AI at all). Though evidence will be presented directly for this thesis in the theoretic agent case, the results of this paper cannot be considered to \"prove\" the thesis for agents we could build (though they certainly raise its likelihood). In that case, we will be looking at proving a still weaker thesis: \n The fact of being of high intelligence provides extremely little constraint on what final goals an agent could have (as long as these goals are of feasible complexity, and do not refer intrinsically to the agent's intelligence). That thesis still has nearly all the relevant practical implications that the strong Orthogonality thesis does. \n Orthogonality in Practice for AI Designers The arguments presented in this paper are all theoretical. They posit that AIs with certain goals either 'can exist,' or that 'if we could build an AI, we could build one with any goal'. In practice, the first AIs, if and when they are created, will be assembled by a specific team, using specific methods, and with specific goals in mind. They may be more or less successful at inculcating the goals into the AI (or, as is common in computer programming, they may inculcate the goals exactly, only to realize later that these weren't the goals they really wanted). The AI may be trained by interacting with certain humans in certain situations, or by understanding certain ethical principles, or by a myriad of other possible methods, which will likely focus on a narrow target in the space of goals. The relevance of the Orthogonality thesis for AI designers is therefore mainly limited to a warning: that high intelligence and efficiency are not enough to guarantee positive goals, and that they thus need to work carefully to inculcate the goals they value into the AI.
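To make the warning concrete, here is a minimal toy sketch (ours, not the paper's) of the sense in which the goal is a free parameter from a designer's point of view: the same brute-force planner is reused unchanged whichever goal function is plugged into it. The planner, the action set and both goal functions below are hypothetical illustrations.

```python
from itertools import product

def plan(goal, actions, horizon=2):
    """Brute-force planner: return the action sequence that maximizes the plugged-in goal.
    The search machinery is identical no matter which goal function is supplied."""
    return max(product(actions, repeat=horizon), key=goal)

actions = ["make_paperclip", "cure_disease", "do_nothing"]

# Two very different final goals, one and the same optimizer.
paperclip_goal = lambda seq: seq.count("make_paperclip")
humanitarian_goal = lambda seq: seq.count("cure_disease")

print(plan(paperclip_goal, actions))     # ('make_paperclip', 'make_paperclip')
print(plan(humanitarian_goal, actions))  # ('cure_disease', 'cure_disease')
```

Nothing in the search machinery favors one goal over the other; only the plugged-in function differs.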
\n Orthogonality for Theoretic Agents If we were to step back for a moment and consider, in our mind's eye, the space of every possible algorithm, peering into their goal systems and teasing out some measure of their relative intelligences, would we expect the Orthogonality thesis to hold? Since we are not worrying about practicality or constructability, all that we would require is that for any given goal system (within the few constraints enumerated above), there exists a theoretically implementable algorithm of high intelligence. Any measurable 6 goal can be paired up with a reward signal: an agent gets a reward for achieving states of the world desired by the goal, and denied these rewards for actions that fail to do so. Among reward signal maximisers, the AIXI is the theoretically best agent there is, more successful at reaching its goals (up to a finite constant) than any other agent (Hutter, 2005). AIXI itself is incomputable, but there are computable variants such as AIXItl or Gödel machines (Schmidhuber, 2007) that approximate AIXI's efficiency. These methods work for whatever reward signal is plugged into them. Or we could simply imagine a supercomputer with arbitrarily large amounts of computing power and a decent understanding of the laws of physics (a 'Laplace demon' (Laplace, 1814) capable of probabilistic reasoning), placed 'outside the universe' and computing the future course of events. Paired with an obedient active agent inside the universe with a measurable goal, for which it would act as an advisor, this would also constitute an 'ultimate agent.' Thus in the extreme theoretical case, the Orthogonality thesis seems true. There is only one problem with these agents: they are either impossible in practice (AIXI or Laplace's demon), or require incredibly large amounts of computing resources to work. Let us step down from the theoretical pinnacle and require that these agents could actually exist in our world (still not requiring that we be able or likely to build them). An interesting thought experiment occurs here. We could imagine an AIXI-like super-agent, with all its impractical resources, that is tasked to design and train an AI that could exist in our world, and that would accomplish the super-agent's goals. Using its own vast intelligence, the super-agent would therefore design a constrained agent maximally effective at accomplishing those goals in our world. Then this agent would be the high-intelligence real-world agent we are looking for. It doesn't matter that the designer is impossible in practice -if the super-agent can succeed in the theoretical thought experiment, then the trained AI can exist in our world. This argument generalizes to other ways of producing the AI. Thus to deny the Orthogonality thesis is to assert that there is a goal system G, such that, among other things: 1. There cannot exist any efficient real-world algorithm with goal G. 2. If a being with arbitrarily high resources, intelligence, time and goal G, were to try to design an efficient real-world algorithm with the same goal, it must fail. 3. If a human society were highly motivated 7 to design an efficient real-world algorithm with goal G, and were given a million years to do so along with huge amounts of resources, training and knowledge about AI, it must fail. 4. If a high-resource human society were highly motivated to achieve the goal G, then it could not do so (here the human society itself is seen as the algorithm). 5. Same as above, for any hypothetical alien societies. 6.
There cannot exist any pattern of reinforcement learning that would train a highly efficient real-world intelligence to follow the goal G. 7. There cannot exist any evolutionary or environmental pressures that would evolve highly efficient real-world intelligences following goal G. All of these seem extraordinarily strong claims to make! The last claims all derive from the first, and merely serve to illustrate how strong the first claim actually is. Claim 4, in particular, seems to run counter to everything we know about human nature. \n Orthogonality for Human-level AIs Of course, even if efficient agents could exist for all these goals, that doesn't mean that we could ever build them, even if we could build AIs. In this section, we'll look at the grounds for assuming the Orthogonality thesis holds for human-level agents. Since intelligence isn't varying much, the thesis becomes simply: If we could construct human-level AIs at all, then there is extremely little constraint on the final goals that such AIs could have (as long as these goals are of feasible complexity, and do not refer intrinsically to the agent's intelligence). So, is this true? The arguments in this section are generally independent of each other, and can be summarized as: 1. Some possible AI designs have orthogonality built right into them. 2. AI goals can reach the span of human goals, which is large. 3. Algorithms can be combined to generate an AI with any easily measurable goal. 4. Various algorithmic modifications can be used to further expand the space of possible goals, if needed. \n Utility Functions One classical picture of a rational agent is of an agent with a specific utility function, which it will then act to maximize in expectation. This picture encapsulates the Orthogonality thesis: whatever the utility function, the rational agent will then attempt to maximize it, using the same approaches in all cases (planning, analyzing input data, computing expected results). If an AI is built according to this model, with the utility function being prescriptive (given to the AI in a program) rather than descriptive (an abstract formalization of an agent's other preferences), then the thesis would be trivially true: we could simply substitute the utility function for whichever one we desired. However, many putative agent designs are not utility-function based, such as neural networks, genetic algorithms, or humans. So from now on we will consider that our agents are not expected utility maximizers with clear and separate utility functions, and look at proving Orthogonality in these harder circumstances. \n The Span of Human Motivations It seems a reasonable assumption that if there exists a human being with particular goals, and we can program an AI, then we can construct a human-level AI with similar goals. This is immediately the case if the AI were a whole brain emulation/upload (Sandberg & Bostrom, 2008), a digital copy of a specific human mind. Even for more general agents, such as evolved agents, this remains a reasonable thesis. For a start, we know that real-world evolution has produced us, so constructing human-like agents that way is certainly possible. Human minds remain our only real model of general intelligence, and this strongly directs and informs our AI designs, which are likely to be as human-similar as we can make them. Similarly, human goals are the easiest goals for us to understand, hence the easiest to try and implement in AI.
Hence it seems likely that we could implement most human goals in the first generation of human-level AIs. So how wide is the space of human motivations 8 ? Our race spans foot-fetishists, religious saints, serial killers, instinctive accountants, role-players, self-cannibals, firefighters and conceptual artists. The autistic, those with exceptional social skills, the obsessive-compulsive and some with split-brains. Beings of great empathy and the many who used to enjoy torture and executions as public spectacles. 9 It is evident that the space of possible human motivations is vast. 10 For any desire, any particular goal, no matter how niche, 11 pathological, bizarre or extreme, as long as there is a single human who ever had it, we could build and run an AI with the same goal. But with AIs we can go even further. We could take any of these goals as a starting point, make them malleable (as goals are in humans), and push them further out. We could provide the AIs with specific reinforcements to push their goals in extreme directions (reward the saint for ever-more saintly behavior). If the agents are fast enough, we could run whole societies of them with huge varieties of evolutionary or social pressures, to further explore the goal-space. We may also be able to do surgery directly on their goals, to introduce yet more variety. For example, we could take a dedicated utilitarian charity worker obsessed with saving lives in poorer countries (but who doesn't interact, or want to interact, directly with those saved), and replace 'saving lives' with 'maximizing the number of paperclips in the universe' or any similar abstract goal. This is more speculative, of course -but there are other ways of getting similar results. \n Instrumental Goals as Final Goals If someone were to hold a gun to your head, they could make you do almost anything. Certainly there are people who, with a gun at their head, would be willing to do almost anything. A distinction is generally made between instrumental goals and final goals, with the former being seen as simply paths to the latter, and interchangeable with other plausible paths. The gun to your head disrupts the balance: your final goal is simply not to get shot, while your instrumental goals become what the gun holder wants them to be, and you put a great amount of effort into accomplishing the minute details of these instrumental goals. Note that the gun has not changed your level of intelligence or ability. This is relevant because instrumental goals seem to be far more varied in humans than final goals. One can have instrumental goals of filling in papers, solving equations, walking dogs, making money, pushing buttons in various sequences, opening doors, enhancing shareholder value, assembling cars, bombing villages or putting sharks into tanks. Or simply doing whatever the guy with the gun at our head orders us to do. If we could accept human instrumental goals as AI final goals, we would extend the space of goals quite dramatically. To do so we would want to put the threatened agent, and the gun wielder, together into the same AI. Algorithmically there is nothing extraordinary about this: certain subroutines have certain behaviors depending on the outputs of other subroutines. The 'gun wielder' need not be particularly intelligent: it simply needs to be able to establish whether its goals are being met.
If for instance those goals are given by a utility function, then all that is required is an automated system that measures progress toward increasing utility and punishes (or erases) the rest of the AI if it does not. The 'rest of the AI' is just required to be a human-level AI which would be susceptible to this kind of pressure. Note that we do not require that it even be close to human in any way, simply that it place the highest value on self-preservation (or on some similar small goal that the 'gun wielder' would have power over). For humans, another similar model is that of a job in a corporation or bureaucracy: in order to achieve the money required for their final goals, some humans are willing to perform extreme tasks (organizing the logistics of genocides, weapon design, writing long emotional press releases they don't agree with at all). Again, if the corporation-employee relationship can be captured in a single algorithm, this would generate an intelligent AI whose goal is anything measurable by the 'corporation.' The 'money' could simply be an internal reward channel, perfectly aligning the incentives. If the subagent is anything like a human, they would quickly integrate the other goals into their own motivation, 12 removing the need for the gun wielder/corporation part of the algorithm. \n Noise, Anti-agents and Goal Combination There are further ways of extending the space of goals we could implement in human-level AIs. One simple way is to introduce noise: flip a few bits and subroutines, add bugs and get a new agent. Of course, this is likely to cause the agent's intelligence to decrease somewhat, but we have generated new goals. Then, if appropriate, we could use evolution or other improvements to raise the agent's intelligence again; this will likely undo some, but not all, of the effect of the noise. Or we could use some of the tricks above to make a smarter agent implement the goals of the noise-modified agent. A more extreme example would be to create an anti-agent: an agent whose single goal is to stymie the plans and goals of a single given agent. This already happens with vengeful humans, and we would just need to dial it up: have an anti-agent that would do all it can to counter the goals of a given agent, even if that agent doesn't exist (\"I don't care that you're dead, I'm still going to despoil your country, because that's what you'd wanted me to not do\"). This further extends the space of possible goals. Different agents with different goals can also be combined into a single algorithm. With some algorithmic method for the AIs to negotiate their combined objective and balance the relative importance of their goals, this procedure would construct a single AI with a combined goal system. There would likely be no drop in intelligence/efficiency: committees of two can work very well towards their common goals, especially if there is some automatic penalty for disagreements. \n Further Tricks up the Sleeve This section started by emphasizing the wide space of human goals, and then introduced tricks to push goal systems further beyond these boundaries. The list isn't exhaustive: there are surely more devices and ideas one can use to continue to extend the space of possible goals for human-level AIs. Though this might not be enough to get every goal, we can nearly certainly use these procedures to construct a human-level AI with any human-comprehensible goal. But would the same be true for superhuman AIs?
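Before turning to superhuman AIs, here is a toy illustration (ours, not the paper's) of the last two constructions: an anti-agent and a combined agent can both be expressed as simple transformations of utility functions, with a fixed weighted sum standing in for the negotiation procedure mentioned above. The example utility functions and outcome are hypothetical.

```python
def anti_goal(utility):
    """Anti-agent construction: a goal that is maximized exactly where the
    original agent's utility is minimized."""
    return lambda outcome: -utility(outcome)

def combine_goals(utilities, weights):
    """A crude stand-in for negotiated goal combination: a fixed weighted sum."""
    return lambda outcome: sum(w * u(outcome) for u, w in zip(utilities, weights))

# Toy utility functions for two of the agents mentioned above.
saint = lambda outcome: outcome.get("lives_saved", 0)
collector = lambda outcome: outcome.get("paperclips", 0)

outcome = {"lives_saved": 3, "paperclips": 10}
print(anti_goal(saint)(outcome))                                # -3
print(combine_goals([saint, collector], [0.5, 0.5])(outcome))   # 6.5
```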
\n Orthogonality for Superhuman AIs We now come to the area where the Orthogonality thesis seems the most vulnerable. It is one thing to have human-level AIs, or abstract superintelligent algorithms created ex nihilo, with certain goals. But if ever the human race were to design a superintelligent AI, there would be some sort of process involved -directed evolution, recursive self-improvement, 13 design by a committee of AIs, or similar -and it seems at least possible that such a process could fail to fully explore the goal-space. The Orthogonality thesis in this context is: If we could construct superintelligent AIs at all, then there is extremely little constraint on the final goals that such AIs could have (as long as these goals are of feasible complexity, and do not refer intrinsically to the agent's intelligence). There are two counter-theses. The weakest claim is: Incompleteness: there are large categories of goals that no superintelligence designed by us could have. A stronger claim is: Convergence: all human-designed superintelligences would have one of a small set of goals. Here 'small' means 'smaller than the space of current human motivations,' thus very small in comparison with the space of possible AI goals. They should be distinguished; Incompleteness is all that is needed to contradict Orthogonality, but Convergence is often the issue being discussed. Often Convergence is stated in terms of a particular model of metaethics, to which it is assumed all agents will converge (see some of the references in the introduction, or various online texts and argument 14 ). \n No Convergence The plausibility of the convergence thesis is highly connected with the connotations of the terms used in it. \"All human-designed rational beings would follow the same morality (or one of small sets of moralities)\" sounds plausible; in contrast \"all human-designed superefficient algorithms would accomplish the same task\" seems ridiculous. To quote an online commentator, how good at playing chess would a chess computer have to be before it started feeding the hungry? Similarly, if there were such a convergence, then all self-improving or constructed superintelligence must fall prey to it, even if it were actively seeking to avoid it. After all, the self-improving lower-level AIs or the designers have certain goals in mind (as we've seen in the previous section, if the designers are AIs themselves, they could have potentially any goals in mind). Obviously, they would be less likely to achieve their goals if these goals were to change as they got more intelligent (Omohundro, 2008) (see also N. Bostrom's forthcoming book Superintelligence: Groundwork to a Strategic Analysis of the Machine Intelligence Revolution). The same goes if the superintelligent AI they designed didn't share these goals. Hence the AI designers will be actively trying to prevent such a convergence, if they suspected that one was likely to happen. If for instance their goals were immoral, they would program their AI not to care about morality; they would use every trick up their sleeves to prevent the AI's goals from drifting from their own. So the convergence thesis requires that for the vast majority of goals G: 1. It is possible for a superintelligence to exist with goal G (by section 0). 2. There exists an entity with goal G (by section 0), capable of building a superintelligent AI. 3. 
Yet any attempt by that entity to build a superintelligent AI with goal G will be a failure, and the superintelligence's goals will converge on some other goal. 4. This is true even if the entity is aware of the convergence and explicitly attempts to avoid it. 5. If the superintelligence were to be constructed by successive self-improvement, then an entity with goal G operating on itself to boost its intelligence is unable to do so in a way that would preserve goal G. This makes the convergence thesis very unlikely. The argument also works against the incompleteness thesis, but in a weaker fashion: it seems more plausible that some types of goals would be unreachable, despite being theoretically possible. There is another interesting aspect of the convergence thesis: these goals G are to emerge, somehow, without them being aimed for or desired. If one accepts that goals aimed for will not be reached, one has to ask why convergence is assumed: why not divergence? Why not assume that though G is aimed for, random accidents or faulty implementation will lead to the AI ending up with one of a much wider array of possible goals, rather than a much narrower one? We won't delve deeper into this, and simply make the point that \"superintelligent AIs won't have the goals we want them to have\" is therefore not an argument in favor of the convergence thesis. \n Oracles Show the Way If the Orthogonality thesis is wrong, then it implies that Oracles are impossible to build. An Oracle is a superintelligent AI that accurately answers human questions about the world, such as the likely consequences of certain policies and decisions (Armstrong, Sandberg, & Bostrom, 2012). 15 If such an Oracle could be built, then we could attach it to a human-level AI with goal G. The human-level AI could then ask the Oracle what the results of different actions could be, and choose the action that best accomplishes G. In this way, the combined system would be a superintelligent AI with goal G. What makes the \"no Oracle\" implication even more counterintuitive is that any superintelligence must be able to look ahead, design actions, predict the consequences of its actions, and choose the best one available. But the convergence and incompleteness theses imply that this general skill is one that we can make available only to AIs with certain specific goals. Though agents with those specific goals are capable of making effective predictions, they automatically lose this ability if their goals were to change. \n Tricking the Controller Just as with human-level AIs, one could construct a superintelligent AI by wedding together a superintelligence with a large motivated committee of human-level AIs dedicated to implementing a goal G, and checking the superintelligence's actions. Thus to deny the Orthogonality thesis requires that one believes that the superintelligence is always capable of tricking this committee, no matter how detailed and thorough their oversight. This argument extends the Orthogonality thesis to moderately superintelligent AIs, or to any situation where there's a diminishing return to intelligence. It only fails if we take AI to be fantastically superhuman: capable of tricking or seducing any collection of human-level beings. \n Temporary Fragments of Algorithms, Fictional Worlds and Extra Tricks These are other tricks that can be used to create an AI with any goals. For any superintelligent AI, there are certain inputs that will make it behave in certain ways.
For instance, a human-loving moral AI could be compelled to follow most goals G for a day, if it were rewarded with something sufficiently positive afterwards. But its actions for that one day are the result of a series of inputs to a particular algorithm; if we turned off the AI after that day, we would have accomplished moves towards goal G without having to reward its \"true\" goals at all. And then we could continue the trick the next day with another copy. For this to fail, it has to be the case that we can create an algorithm which will perform certain actions on certain inputs as long as it isn't turned off afterwards, but that we cannot create an algorithm that does the same thing if it was to be turned off. Another alternative is to create a superintelligent AI that has goals in a fictional world (such as a game or a reward channel) over which we have control. Then we could trade interventions in the fictional world against advice in the real world towards whichever goals we desire. 16 These two arguments may feel weaker than the ones before: they are tricks that may or may not work, depending on the details of the AI's setup. But to deny the Orthogonality thesis requires not only denying that these tricks would ever work, but denying that any tricks or methods that we (or any human-level AIs) could think up, would ever work at controlling the AIs. We need to assume superintelligent AIs cannot be controlled in any way that anyone could think of. \n In Summary Denying the Orthogonality thesis thus requires that: 1. There are goals G, such that an entity with goal G cannot build a superintelligence with the same goal. This despite the fact that the entity can build a superintelligence, and that a superintelligence with goal G can exist. 2. Goal G cannot arise accidentally from some other origin, and errors and ambiguities do not significantly broaden the space of possible goals. 3. Oracles and general-purpose planners cannot be built. Superintelligent AIs cannot have their planning abilities repurposed. 4. A superintelligence will always be able to trick its overseers, no matter how careful and cunning they are. 5. Though we can create an algorithm that does certain actions if it is not to be turned off afterwards, we cannot create an algorithm that does the same thing if it is to be turned off afterwards. 6. An AI will always come to care intrinsically about things in the real world. 7. No tricks can be thought up to successfully constrain the AI's goals: superintelligent AIs simply cannot be controlled. \n Conclusion It is not enough to know that an agent is intelligent (or superintelligent). If we want to know something about its final goals, about the actions it will be willing to undertake to achieve them, and hence its ultimate impact on the world, there are no shortcuts. We have to directly figure out what these goals are (or figure out a way of programming them in), and cannot rely on the agent being moral just because it is superintelligent/superefficient. \t\t\t © Stuart Armstrong
First it shows that superintelligent agents with essentially arbitrary goals can exist in our universeboth as theoretical impractical agents such as AIXI and as physically possible realworld agents. Then it argues that if humans are capable of building human-level artificial intelligences, we can build them with an extremely broad spectrum of goals. Finally it shows that the same result holds for any superintelligent agent we could directly or indirectly build. This result is relevant for arguments about the potential motivations of future agents: knowing an artificial agent is of high intelligence does not allow us to presume that it will be moral, we will need to figure out its goals directly.", "id": "756cd2a6fba5e7fd2a3a8f0705570a9e"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Patrick Lavictoire"], "title": "An Introduction to Löb's Theorem in MIRI Research", "text": "Introduction This expository note is devoted to answering the following question: why do many MIRI research papers cite a 1955 theorem of Martin Löb [12] , and indeed, why does MIRI focus so heavily on mathematical logic? The short answer is that this theorem illustrates the basic kind of self-reference involved when an algorithm considers its own output as part of the universe, and it is thus germane to many kinds of research involving self-modifying agents, especially when formal verification is involved or when we want to cleanly prove things in model problems. For a longer answer, well, welcome! I'll assume you have some background doing mathematical proofs and writing computer programs, but I won't assume any background in mathematical logic beyond knowing the usual logical operators, nor that you've even heard of Löb's Theorem before. To motivate the mathematical sections that follow, let's consider a toy problem. Say that we've designed Deep Thought 1.0, an AI that reasons about its possible actions and only takes actions that it can show to have good consequences on balance. One such action is designing a successor, Deep Thought 2.0, which has improved deductive abilities. But if Deep Thought 1.0 (hereafter called DT1) is to actually build Deep Thought 2.0 (DT2), DT1 must first conclude that building DT2 will have good consequences on balance. There's an immediate difficulty-the consequences of building DT2 include the actions that DT2 takes; but since DT2 has increased deductive powers, DT1 can't actually figure out what actions DT2 is going to take. Naively, it seems as if it should be enough for DT1 to know that DT2 has the same goals as DT1, that DT2's deductions are reliable, and that DT2 only takes actions that it deduces to have good consequences on balance. Unfortunately, the straightforward way of setting up such a model fails catastrophically on the innocent-sounding step \"DT1 knows that DT2's deductions are reliable\". If we try and model DT1 and DT2 as proving statements in two formal systems (one stronger than the other), then the only way that DT1 can make such a statement about DT2's reliability is if DT1 (and thus both) are in fact unreliable! This counterintuitive roadblock is best explained by reference to Löb's theorem, and so we turn to the background of that theorem. 2 Crash Course in Löb's Theorem \n Gödelian self-reference and quining programs Löb's Theorem makes use of the machinery of Kurt Gödel's incompleteness theorems [10] , so we will discuss those first. 
Informally, Gödel found a way to import self-reference into a mathematical system that was simply trying to talk about properties of natural numbers, and then pointed out the odd consequences of a mathematical statement that asserts its own unprovability. One (anachronistic) way of stating Gödel's key insight is that you can use computer programs to search for proofs, and you can prove statements about computer programs. If we think about any conjecture in mathematics that can be stated in terms of arithmetic, you can write a rather simple program that loops over all possible strings, checks whether any of them is a valid proof of the conjecture, and halts if and only if it finds one. Thus, instead of trying to prove that conjecture directly, we could instead try to show (or prove) that the program halts. Now, this generally doesn't make things easier at all, since we're just restating the same problem in a more complicated way, and actually looping over all strings is basically the worst way to try and find proofs of theorems. However, this reformulation makes it more intuitive that we can embed self-reference in mathematics, because we can embed self-reference in computer code! The kind of self-reference I'm talking about is called a quine: a program that is able to reproduce its own source code without taking any inputs. One way to do this is for the program to include a string with a variable, and for it to replace that variable with the original string itself, such that the resulting expanded string is the entire source code of the program itself. Exercise. Write a quining program in your favorite language. No fair copying one from the Wikipedia article, and no fair making calls to external programs or the filesystem. In addition to quines that merely print their source code, programs can be made which perform arbitrary tasks using their own source code. Indeed, one prominent example of a quining program that does other tasks is Ken Thompson's famous C compiler Trojan horse [15] , which uses the quining trick to avoid ever being visible outside of machine code. Thus we can have a program G which refers to itself in this way, and searches for proofs in arithmetic related to its own source code. In particular, we consider G which searches for a proof of the statement \"G runs forever\", and halts if and only if it succeeds at finding one. Now, we claim that G never finds a proof, and also that we can never prove that G runs forever! For if we could prove that G ran forever, then G would find that proof, and it would halt-and we could prove that it halted, and thereby produce a contradiction! On the other hand, if G actually halted after some finite number of steps, then the number at which it halts encodes a proof that G runs forever, which again leads to a contradiction! 1 It seems silly to ask whether we could prove that G halts, given that G actually runs forever. But it actually wouldn't be a contradiction if we asserted that G actually halted, so long as we didn't say anything about how long it took! It's only a claim like \"G halts in fewer than a googolplex steps\" that would be an actual contradiction. It turns out that we could add either \"G never halts\" or \"G halts\" (but not both!) as a new axiom of arithmetic, and not introduce a contradiction by doing so. 
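Before moving on, here is one possible answer (among many) to the quine exercise posed earlier in this section; it is offered as an illustration of the self-reference trick rather than as the canonical solution.

```python
# The two lines below, taken by themselves, form a quine: s is a template for those
# two lines, and printing s formatted with its own repr() reproduces them exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The same template-and-substitute trick is what lets a program like G refer to, and prove theorems about, its own source code.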
We express this (the fact that neither \"G halts\" nor \"G runs forever\" can be proved) by saying that G is undecidable, and therefore we've proved: Theorem 2.1 (Gödel's First Incompleteness Theorem). If the theory of arithmetic is consistent, then there exist undecidable statements in the theory of arithmetic. Remark. Gödel's Theorem is a bit different from the Halting Problem; the latter shows there's no one program X that can tell whether every program Y halts or not, but of course there may be particular programs X that tell you this fact for particular programs Y. But this is saying that there's no program X that can tell you definitively whether the program G halts or runs forever. So Gödel found that we can write undecidable statements about properties of natural numbers, and furthermore showed that adding new axioms won't fix it, since you can repeat the process with the new rule system for whether a statement is a theorem. (There's only one loophole, and it's not a very exciting one: if we use inconsistent axioms, then everything is a theorem, thanks to the Principle of Explosion, and therefore everything is decidable). Now we're not here to bury mathematical logic, but to use it. So from Gödel's Theorem we move on to... \n Löb's Theorem Löb's Theorem [12] takes the same framework as Gödel's First Incompleteness Theorem, and constructs programs of the following sort: • Let X be any logical statement (the sort of thing that could be proved or disproved). • Now let ProofSeeker(X) be the program that searches all possible proofs, and halts if and only if one of them is a valid proof of the statement X. • Finally, let L(X) be the statement \"if ProofSeeker(X) halts, then X\". Let's ponder for a moment whether L(X) should be true or not, using three different kinds of statements X. • If X is a provable statement, for example \"2 + 2 = 4\", then ProofSeeker(X) halts, and L(X) is \"if [true thing], then [true thing]\", which is a valid statement. • If X is disprovable, for example the statement \"2 + 2 = 5\", then ProofSeeker(X) does not halt, and L(X) is \"if [false thing], then [false thing]\", which is also a valid statement. • If X is neither provable nor disprovable, for example Gödel's statement G, then again ProofSeeker(X) does not halt, and L(X) is \"if [false thing], then [maybe true thing]\", which is also a true statement (remember your propositional calculus). So it seems like L(X) is always true. And it would certainly be handy if L(X) were provable for every X: for instance, you could use the second case above to show that mathematics proves no contradictions! Because if we could prove \"if ProofSeeker(\"2 + 2 = 5\") halts, then 2+2 = 5\", then we would have the contrapositive \"if 2+2 ≠ 5, then ProofSeeker(\"2+2 = 5\") never halts\", and since we can prove 2 + 2 ≠ 5, then we could prove that there is no contradictory proof of 2 + 2 = 5. Alas, that proof of mathematical consistency is too good to be true. As in the case of Gödel's Theorem, something being true is no guarantee of it being provable. And in fact, we find that L(X) is only provable in the first of the three cases above: Theorem 2.2 (Löb's Theorem). For all statements X, if L(X) is provable, then X is provable. (There is a neatly presented formal proof of this theorem at The Cartoon Guide to Löb's Theorem [16].) So in particular, if we could prove that mathematics would never prove a contradiction, then in fact mathematics would prove that contradiction!
(Note that by the Principle of Explosion, this is indeed a possible state of affairs: if your mathematical axioms lead to a contradiction, then they can prove every statement in the language, including the statement that your mathematical axioms don't lead to a contradiction!) Thus the only systems that prove their own consistency are the inconsistent ones; incidentally, this is precisely Gödel's Second Incompleteness Theorem, although he originally proved it without Löb's Theorem. Remark. If you're interested in going deeper on these topics, Computability and Logic by Boolos, Burgess, and Jeffrey [5] is a good reference. Remark. There is also a version of Löb's Theorem for bounded proof searches, in the sense of \"look through all formulas of length ≤ N and see if any of them are a proof of φ\", and it controls the length of the proof of φ in terms of the length of the proof of the corresponding statement L(φ) (\"if ProofSeeker(φ) halts, then φ\"). In the limit of arbitrarily large computational resources, the phenomena we care about happen in the same way that they do in the case of infinite computation (i.e. access to halting oracles), and so we will generally discuss the latter case because the proofs are simpler and clearer. \n Direct Uses of Löb's Theorem in MIRI Research We can now exhibit three simple cases where Löb's Theorem comes up in MIRI research topics: one where it forms an unexpected obstacle to justifying self-modifications, one where it neatly enables mutual cooperation in a nonstandard Prisoner's Dilemma setup, and one where it frustrates a naive decision algorithm. \n \"The Löbstacle\" Let's return to the problem we discussed in the Introduction: Deep Thought 1.0 wants to verify that switching on its successor, Deep Thought 2.0, will have good consequences. Because DT2 has better deductive capacities, DT1 cannot deduce exactly what actions DT2 will take, but it does know that DT2 has the same utility function, and that it too will only take actions it deduces to be good. Intuitively, this should be enough for DT1 to \"trust\" DT2, to say that whatever DT2 does, DT2 must have deduced that to be good, and therefore it must actually be good. But that last clause is analogous to the Löbian statement L(X): \"if the action is deduced to be good, then it must actually be good\"! And therefore DT1 cannot generally prove that clause (unless its reasoning is inconsistent), since then it could prove that every action is good. This is not just an analogy; when we consider a simple mathematical model of a self-modifying agent that uses proofs in some consistent formal system to justify its actions, that agent has precisely this problem. In models that presume infinite computing power and represent different deductive powers with different axiom systems, a simple agent with a utility function will only create successors whose formal systems are strictly weaker than its own, since only those are fully trusted by the current system. There are a number of partial and potential remedies to this \"Löbstacle\", some of them more appealing than others. For more details, see the MIRI preprints Tiling Agents for Self-Modifying AI, and the Löbian Obstacle [18] and Problems of self-reference in self-improving space-time embedded intelligence [9]. \n Löbian cooperation The second topic concerns a variation on the usual Prisoner's Dilemma. Rather than playing directly, you write an algorithm to play against other algorithms on your behalf, as in Robert Axelrod's famous algorithmic Prisoner's Dilemma tournament [1].
However, instead of that setup, in which the algorithms play iterated games against one another, in this case your algorithm and theirs get to read the opponent's source code, calculate for as long as they like, and then play only once. Using quining (recall Section 2.1), one can write a program that cooperates if and only if the opponent's source code is identical to its own. However, there are many ways to write such programs, and different ones will not cooperate with each other; this is therefore a fragile form of mutual cooperation. There's a different algorithm which (at the cost of using lots of computation) avoids that fragility. We call this agent FairBot, and it operates as follows: • Both FairBot and its opponent X are functions of one argument, which take the opponent's source code and output either C or D. • When FairBot() is called on the input X, it searches through all proofs of length ≤ N to see whether any are valid proofs of the statement \"X(FairBot)= C\". If yes, then it outputs C; if no, then it outputs D. (Here N is a parameter that doesn't depend on X; we'll think of it as some extremely large number. The only reason we have that parameter at all is so that our algorithm does in fact always return an output in finite time.) Some things are clear from the definition of FairBot. One is that it is capable of cooperation: if X is a simple algorithm that always returns C, then FairBot will return C as well. Another is that (unless arithmetic is inconsistent) FairBot will never be exploited (cooperate while its opponent defects). What is less immediately clear is the outcome when FairBot plays against itself. Intuitively, it seems like both mutual cooperation and mutual defection are stable fixed points of the situation. However, a Löbian statement breaks the deadlock in favor of cooperation! To see this, ignore for the moment the parameter N . Then if we consider the statement L(\"FairBot(FairBot)= C\"), we find that it follows directly from the code of FairBot. For if there is a proof that FairBot(FairBot)= C, then FairBot will discover that proof and output C. But then, by Löb's Theorem, there must be an actual proof of the statement \"FairBot(FairBot)= C\"! 2 As mentioned in Section 2.2, there is a quantitatively bounded version of Löb's Theorem such that this works even with the parameter N , for sufficiently large values of N . Furthermore, this form of cooperation is much more robust: two such programs don't need to figure out that they are functionally identical, they only need to search for proofs about one another's output. Moreover, even algorithms that are not functionally identical to FairBot can achieve mutual cooperation with it. The FairBot algorithm is due to Vladimir Slepnev; for more on Löbian cooperation, see the MIRI paper Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem [11] . We will return to this topic in Section 7.1 when we have developed a few more tools. \n Spurious counterfactuals The third topic concerns an unexpected issue that comes up when the agent is a part of the same universe that the agent is proving theorems about. The setting is a pure computational universe U () which takes no input, and which just runs and eventually outputs a number. Within that universe there is an agent A() which knows the code for U (), which does some computations as part of the natural order of the universe and then outputs something. 
This may seem like a rather extreme setup, and it is: we want to see what we can say about agents that know the source code of the universe and wield arbitrarily large computational resources, because it is often easier to show what happens with such agents than what happens with more realistic, messy, and bounded agents. Moreover, any obstacles we discover in the ideal setting are likely to correspond in the real setting to obstacles that can't be overcome merely by adding more knowledge and computing power to the system. Let's first consider a really simple sort of universe (Algorithm 1), where A() must try to figure out whether to open door number 1, door number 2, or neither.

Algorithm 1: U()
  if A() = 1 then return 100
  else if A() = 2 then return 1
  else return −∞
  end

This requires us to specify the function A(). Now of course we could consider a function A() hard-coded to return 1, but that's not especially interesting. Instead, we'll write one (Algorithm 2) which tries to deduce which door (if either) to open, by looking through proofs of arithmetical statements. It seems obvious that there should be proofs of A() = 1 → U() = 100 and A() = 2 → U() = 1 whose Gödel numbers are smaller than a googolplex, and therefore A should correctly prove those statements and choose Door 1.

Algorithm 2: A()
  Door1 ← −∞; Door2 ← −∞; n ← 1;
  while Door1 + Door2 = −∞ and n < 10^(10^100) do
    if n encodes a valid proof of \"A() = 1 → U() = c\" for some c then Door1 ← c;
    if n encodes a valid proof of \"A() = 2 → U() = c\" for some c then Door2 ← c;
    n ← n + 1;
  end
  if Door1 ≥ Door2 then return 1 else return 2
  end

But there's something very strange that can happen instead: A could prove A() = 2 → U() = 1, and then prove a \"spurious counterfactual\" such as A() = 1 → U() = −1, which then makes A() stop looking early and select door number 2! This wouldn't be a contradiction, because if A() = 2, then A() = 1 → U() = −1 is formally true 3 . But why should it actually happen? It's clearest to see in the case where I can make one innocent-seeming change to A(): I ask that before it starts searching through all n < 10^(10^100) to see if they encode proofs, I want it to check two particular proofs. The first is the simple proof that A() = 2 → U() = 1, and the second is a statement φ. Now if φ really is a proof of the spurious counterfactual A() = 1 → U() = −1, it's clear from the rest of the source code of A() that it will break the while loop and choose Door 2. In fact, we can formalize that reasoning: there's a proof that if φ is a proof of A() = 1 → U() = −1, then in fact it's true that A() = 2 and thus A() = 1 → U() = −1. But this is almost a Löbian statement, just with the addition of a specific φ rather than an existential quantifier! By the machinery of quining, we can find a φ that serves our purposes, so it works just as before: since the Löbian statement is provable, so is the conclusion A() = 1 → U() = −1, by means of that particular φ. Even without that malicious change, these kinds of agents and universes are susceptible to spurious counterfactuals. In Section 7.2, we'll consider a similar setup, but with a more careful ordering of which proofs it pays attention to, and that agent can be shown to make the correct decisions (given enough deductive power). The idea was originally due to Benja Fallenstein; Tsvi Benson-Tilson's paper UDT with Known Search Order [3] discusses spurious counterfactuals and a different response to them.
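To see the shape of this failure concretely, here is a small, deliberately schematic sketch (not the note's formal construction): the proof search of Algorithm 2 is replaced by a supplied list of implications the agent is taken to have proved, which lets us watch how the order in which (possibly spurious) counterfactuals are found determines the decision.

```python
NEG_INF = float("-inf")

def U(a):
    """The toy universe of Algorithm 1: door 1 pays 100, door 2 pays 1, anything else -inf."""
    return 100 if a == 1 else 1 if a == 2 else NEG_INF

def A(found_implications):
    """Schematic stand-in for Algorithm 2. The search over Goedel numbers is replaced
    by an ordered list of implications (action, claimed payoff) that the agent is
    taken to have proved; the order in which they are 'found' drives the decision."""
    door = {1: NEG_INF, 2: NEG_INF}
    for action, payoff in found_implications:
        if door[action] == NEG_INF:
            door[action] = payoff
        if door[1] != NEG_INF and door[2] != NEG_INF:
            break  # both counterfactuals settled; stop looking early
    return 1 if door[1] >= door[2] else 2

# The intended proofs are found: A()=1 -> U()=100 and A()=2 -> U()=1, so door 1 is chosen.
choice = A([(1, 100), (2, 1)])
print(choice, U(choice))   # 1 100

# A spurious counterfactual A()=1 -> U()=-1 turns up instead, so door 2 is chosen;
# since A() = 2, the implication about A()=1 is vacuously true, with no contradiction.
choice = A([(2, 1), (1, -1)])
print(choice, U(choice))   # 2 1
```

With the intended implications the agent opens Door 1; when the spurious counterfactual about Door 1 is what turns up, it opens Door 2, and the implication it acted on is vacuously true, exactly as described above.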
\n Crash Course in Model Theory For the later topics, we'll need some additional background in mathematical logic. Let's start out by just talking formally about the natural numbers with addition and multiplication. \n Axioms and theories We begin with a language of symbols, and we don't start out assuming anything about what they mean. We'll take some symbols of the propositional calculus, set theory, arithmetic, and three special symbols: {(, ), ∧, ∨, ¬, →, ↔, ∈, ∀, ∃, =, +, •, O, S, N} Here we'll think of N as standing in for the set of natural numbers, O as standing in for zero, and S as standing in for the successor (so that SO represents 1, SSO represents 2, etc.). That way we don't need rules for dealing with infinitely many number symbols, or rules for dealing with digits. We'll also add an infinite family of symbols like x and y for variables (so long as they're quantified over with ∀ or ∃). Finally, we'll use letters like φ to stand in for a formula within the language, but note that φ is not itself part of the language, so we can't say things like ∀φ. We'll import the standard rules of the propositional calculus and set theory as applied to the appropriate symbols (for instance, whenever φ ∧ ψ is a theorem, then both φ and ψ are theorems), but we won't yet assume any properties of +, •, O, S, or N. That's because we'll want to pick our own axioms about arithmetic on the natural numbers! This creates a theory, the set of all theorems that follow from the axioms; different sets of axioms can give rise to different theories. So let's take some true statements about the natural numbers, make them axioms of our logical system, and see what kind of theory we get. (Parenthetical comments are included for our intuition, but are not actually a part of the theory.)
1. O ∈ N (zero is a natural number)
2. ∀x ∈ N Sx ∈ N (if x is a natural number, so is its successor)
3. ∀x ∈ N ∀y ∈ N (x + y ∈ N) ∧ (x • y ∈ N) (the product and sum of two natural numbers are also natural numbers)
4. ∀x ∈ N ∀y ∈ N (Sx = Sy) → (x = y) (if the successors are equal, so are the originals)
5. ∀y ∈ N (y = O) ∨ (∃x ∈ N Sx = y) (every nonzero number has a predecessor)
6. ∀x ∈ N (x + O = x) ∧ (x • O = O) (how zero interacts with addition and multiplication)
7. ∀x ∈ N ∀y ∈ N (x + Sy = S(x + y)) ∧ (x • Sy = (x • y) + x) (how successor interacts with addition and multiplication)
A theory can contain contradictions, of course: some theorem φ such that ¬φ is also a theorem. So how can we tell whether the theory built on these axioms contains any contradictions? We could try proving things and see if we ever find a contradiction, but as there are infinitely many theorems, we couldn't be sure that we just hadn't found a contradiction yet. But there's another thing we could do: exhibit an example of a set for which all of the axioms hold! That serves as conclusive evidence that the theory does not contain contradictions, because φ is either actually true or actually false for that specific set, and so at most one of φ and ¬φ can actually be a theorem. We need to do a little more than exhibit an object: we need to have a correspondence between the symbols in the language and the parts of the object. So for instance, we identify the symbol O with the number 0, SO with 1, and so on 4 ; we identify N with the full set of natural numbers N; and we identify addition and multiplication with the usual operations on N. We call such an object a model, and such a correspondence an interpretation of the theory. However, the same theory can have many models, some of them not at all what you were thinking of when you made the axioms... \n Alternative and nonstandard models For instance, an alternate model of the theory above is the set with a single element "Alice"; we identify O with Alice, and then SO with Alice as well (so SO = O), and so on; we declare that Alice+Alice=Alice•Alice=Alice; and then we identify N with the set {Alice}.
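A quick brute-force check (my own illustration; encoding Alice as the Python integer 0 is an arbitrary choice) confirms that this one-element interpretation satisfies every axiom on the list above, which is exactly the sense in which exhibiting a model certifies consistency.

# One-element model: N = {0}, with successor, addition, and multiplication
# all collapsing to the single element (standing in for "Alice").
N = {0}

def S(x): return 0
def add(x, y): return 0
def mul(x, y): return 0

checks = {
    "1. O in N": 0 in N,
    "2. successor stays in N": all(S(x) in N for x in N),
    "3. + and * stay in N": all(add(x, y) in N and mul(x, y) in N for x in N for y in N),
    "4. S is injective": all(x == y for x in N for y in N if S(x) == S(y)),
    "5. zero or a successor": all(y == 0 or any(S(x) == y for x in N) for y in N),
    "6. x+O = x and x*O = O": all(add(x, 0) == x and mul(x, 0) == 0 for x in N),
    "7. x+Sy = S(x+y) and x*Sy = x*y + x": all(
        add(x, S(y)) == S(add(x, y)) and mul(x, S(y)) == add(mul(x, y), x)
        for x in N for y in N),
}
print(all(checks.values()), checks)  # True: the axioms have a model, hence no contradictions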
Note that all of the axioms above are true of this alternate model, even if their intuitive meanings aren't! Fine, let's patch this. We can't add an axiom directly saying it's not this particular model, since Alice isn't part of the language of the theory itself. But we can add an axiom saying SO ≠ O, which is true of the model of the natural numbers we want, but not true of the Alice model. Well, we excluded that alternate model, but there are still others we haven't excluded. In particular, we can consider arithmetic mod n as a model for any n! In order to exclude these, we could add axioms saying SSO ≠ O, SSSO ≠ O, etc., or we could be more economical and just use the axiom ∀x ∈ N Sx ≠ O. (The resulting theory is known as Robinson arithmetic, and is interesting in its own right.) Have we eliminated all alternate models from this theory? Hardly! We're just getting started. Consider the model where N refers to the natural numbers combined with one additional entity, "Bob"; we identify S(Bob) = Bob, Bob•0 = 0, Bob•x = Bob for all x ≠ 0, and Bob+x = Bob for all x. Then we again see that this satisfies all of our axioms so far. We could patch this with the axiom ∀x ∈ N Sx ≠ x, but again we can build more and more complicated alternate models. (One particularly interesting model for Robinson arithmetic consists of all polynomials with integer coefficients whose leading term is nonnegative.) It turns out that what we're missing from the natural numbers is the principle of induction. (Indeed, with Robinson arithmetic you can't even prove that addition is commutative!) In order to express induction in the language (which doesn't have variables for properties, only for numbers), we must resort to an infinite family of new axioms: for every logical formula φ(x) (with no free variables except for x), we want to add the axiom (φ(0) ∧ (∀x ∈ N (φ(x) → φ(Sx)))) → ∀y ∈ N φ(y). 5 With all of that completed, we've got the axioms for the theory of Peano Arithmetic. Now are we done with alternative models? Of course not! Remember how Gödel's self-referential formula G was undecidable? There are models of Peano arithmetic where G holds, and other models where G fails to hold. The models where G holds include our standard intuitive model of the natural numbers, since as we discussed before, G definitely does not halt at any finite number. But then, what kind of model of Peano Arithmetic could G fail to hold in? It may help to think of the model of N with Bob. If we do Gödel's construction in Robinson Arithmetic, there will be no contradiction if we assert that the special extra number Bob in fact satisfies the property that the formula G is checking (the property which represents "this is a valid proof that no number is a valid proof of G"). This is okay, because Bob doesn't actually have a representation in terms of lots of S's and an O, so this assertion can't be used to actually construct a finite sequence of logical formulas that comprise a proof that no number is a valid proof of G. This loophole suffices to maintain consistency for the new system! We're using Peano Arithmetic, though, so our nonstandard models will be weirder than the natural numbers plus Bob. The key to understanding these is that G never halts at any finite number, but we can't actually define in our formal language what "finite" means.
The nonstandard models of Peano Arithmetic are those which have all the usual numbers but also lots of extra numbers that are \"too large\" to ever be written as lots of S's followed by an O, but which nonetheless are swept along in any inductive statement. (I still don't know quite how to visualize this model, but you can rigorously construct it using set theory.) And again, a \"number\" that can't actually be written in S's and an O can be asserted to represent a valid proof, but that proof can't be extracted from the number and used against itself, so it all works out consistently. Remark. Since nonstandard models of arithmetic contain numbers larger than any of the usual natural numbers, they can be used in other areas of mathematics. If you take the reciprocal of one of those numbers, you get an infinitesimal; thus you can use these to define the concepts of calculus without using limits! This is known as nonstandard analysis. And it doesn't end there! As I mentioned before, we can add either G or ¬G as an extra axiom of Peano Arithmetic, and since adding a new axiom changes the rules for what counts as a valid proof, we can redo Gödel's construction in our new formal system, and have statements that are undecidable given the new rules. Or we can directly talk about the consistency of Peano Arithmetic (by making a statement that asserts that there is no proof in PA that 0 = 1), and the consistency of the system with all the rules of Peano Arithmetic plus the axiom that PA is consistent, and so on. (Ponder for a moment the formal system which has all the axioms of PA, plus the axiom that PA is consistent, plus the axiom that \"the system which has all the axioms of PA, plus the axiom that PA is consistent\" is inconsistent. As it turns out, this is a perfectly consistent system 6 !) We'll work with a particular hierarchy of such formal systems in Section 6, but before then, we'll discuss another MIRI paper that relates to theories and models. Remark. Again, Computability and Logic by Boolos, Burgess, and Jeffrey [5] covers all of this material at a much deeper level. \n Uses of Model Theory in MIRI Research \n Reflection in probabilistic logic The topic of the first MIRI mathematical workshop paper, Definability of Truth in Probabilistic Logic [7] , isn't directly entwined with Löb's Theorem. But we've just covered the necessary background to understand it, and we'll need another ingredient before getting to the other MIRI papers in these notes, so we might as well detour here! This paper concerns probabilistic logic: assigning probabilities in [0, 1] to logical statements, rather than just assigning them the labels \"provable\", \"disprovable 7 \", and \"undecidable\". We'll want to make sure we assign the value 1 to all provable statements and 0 to all disprovable statements, and that we assign values to the undecidable statements in a consistent way (so if φ and ψ are undecidable but φ → ψ is provable, then the probability we assign to φ should be less than or equal to the probability we assign to ψ). Why is this an important topic for MIRI to study? Probabilistic logic is an interesting, clean, and rich model for Bayesian reasoning and for bounded inference. 
Consider an AI that cannot deduce with certainty whether P = N P ; knowing it one way or the other should certainly change its actions (for instance, its cryptographic precautions against other agents), and so the most sensible way to proceed seems to be assigning it a tentative probability based on the available evidence, using that probability to do consequentialist calculations, and updating it in the light of newer evidence. So, in order to understand bounded reasoning, we might start with the nice and symmetrical (if unrealistic) case of a coherent probability assignment over all logical statements. So let's consider a Gödelian obstacle that bedevils logic, and see if it looks different within probabilistic logic. That obstacle is Tarski's undefinability of truth [14] . OK, so we've previously established that some statements are undecidable, and in fact, undecidable statements hold in some models of a theory but not others. We might want to endorse some particular model as the \"true\" one (for instance, our standard model of the natural numbers, without all of those weird nonstandard numbers), and say that logical statements are true if they hold in that model and false if they don't. This truth predicate exists outside the language, and so the logical statements can't talk about the truth predicate, only about weaker notions like provability. All that is perfectly fine, thus far. The trouble comes when we try to construct a language that contains its own truth predicate T , either by finding a formula for it using the previously existing symbols, or by directly adding a new symbol with some rules of usage, such that for all φ, it's true that T (φ) ↔ φ. Either approach is doomed, by an argument that should seem familiar by now: there exists 7 By this, we mean statements whose negation is provable, not statements which cannot be proven. a sentence X that refers to itself by quining, so that it's provable that X ↔ T (¬X). And now, clearly, neither T (X) nor T (¬X) can hold. \n All right, so what changes when you incorporate probabilities? Let's introduce a symbol P which represents the probability of a logical statement. We'd like P to satisfy some mathematical coherence properties, like P (φ ∧ ψ) ≤ P (φ), and we'd also like P to be able to discuss itself. Now what happens when we consider the statement Y constructed such that Y ↔ (P (Y ) < 0.5)? Does this create a contradiction? It depends on what kind of statements P is allowed to make about itself! Namely, if P isn't allowed to make exact statements about its own values, but only arbitrarily precise approximations, then everything can work out consistently. Namely, in the place of the failed reflection principle T (T (φ) ↔ φ), we take the reflection principle ∀φ∀a, b ∈ Q (a < P (φ) < b) → (P (a < P (φ) < b) = 1) . (5.1) In the paper, they prove that with this reflection principle and the natural coherence conditions, there indeed is a probability valuation that works for the values of P . How does this avoid a collision with the statement Y above? It turns out that the true value of P (Y ) is 0.5, but P isn't able to know whether P (Y ) is exactly 0.5, or slightly above it, or slightly below it, and so P (P (Y ) < 0.5) is 0.5 as well, and so on. For any rational > 0, you can prove with certainty that P (0.5 − < P (Y ) < 0.5 + ) = 1, but you can't get from this anything sharp enough to produce a contradiction; in particular, you can't prove the generalization that ∀ > 0 P (0.5 − < P (Y ) < 0.5 + ) = 1. Remark. 
Nonstandard arithmetic isn't needed for this result, but it can shed some light on the fact that P(0.5 − ε < P(Y) < 0.5 + ε) = 1 for any rational ε. Since some models of N include nonstandard natural numbers, P can't rule out the possibility that reciprocals of nonstandard natural numbers (infinitesimals) exist, and so P(Y) may as well be imagined as 0.5 plus or minus an infinitesimal... \n Remark. Since the result in the Definability of Truth paper is nonconstructive, it's worth noting that there's a followup paper by Christiano on computationally tractable approximations of probabilistic logic: Non-Omniscience, Probabilistic Inference, and Metamathematics [6] . \n Crash Course in Gödel-Löb Modal Logic The final topics in this survey require some background outside the usual first course in mathematical logic, namely the Gödel-Löb modal logic of provability. This modal logic captures exactly the parts of Peano Arithmetic that relate to self-reference, and by leaving out the rest, it has a pleasingly simple structure and plenty of powerful tools for analysis. Thus it's a great setting for model problems in decision theory, as we shall see. \n The modal logic of provability You may have heard of modal logic before, in the context of philosophy: Aristotle defined ways of arguing about which facts are necessary or possible rather than merely true or false. By the twentieth century, people had formalized various kinds of modal logic, where you take the usual language of propositional logic and add the symbols □ and ◇, as well as new axioms and rules incorporating them. □φ is usually interpreted as the statement "it is necessary that φ", and ◇φ as "it is possible that φ". (We actually only need □, since in all modal logics of interest to us, ◇φ ↔ ¬□¬φ.) Philosophers have tried out various sets of axioms, mostly in order to write intimidatingly technical-looking arguments for their preferred metaphysical conclusions 8 , but we're more interested in a particular modal logic that constitutes a perfectly rigorous (and quite useful) reflection of Löbian phenomena in Peano Arithmetic and other such systems. This is the Gödel-Löb modal logic (alternately, 'the modal logic of provability' or simply 'provability logic'), which we will denote as GL. We can construct GL in two different ways. First, we can let □φ represent the formula of Peano Arithmetic that asserts that φ is provable in Peano Arithmetic, and then restrict ourselves to the formulas and proofs that use only □, the logical operators, and Boolean variables (including the constants ⊤ for true and ⊥ for false, but no numbers, arithmetical operations, or quantifiers). Or, equivalently, we can start with those elements of the language, and then add the following axioms and rules:
• All tautologies of the propositional calculus are axioms of GL, including tautologies where an expression including □ has been substituted for a variable (for instance, □p ∨ ¬□p is an axiom).
• Modus Ponens Rule: if the expressions A and A → B are theorems, then so is B.
• Distribution Axioms: for any expressions A and B in the language, we have the axiom □(A → B) → (□A → □B).
• Generalization Rule: if the expression A is a theorem, then so is □A.
• Gödel-Löb Axioms: for any expression A, we have the axiom □(□A → A) → □A.
It's a nontrivial theorem that these approaches give us the same modal logic!
So GL really does capture the self-referential parts of Peano Arithmetic, while leaving aside the arithmetical parts; for instance, the statement that Peano Arithmetic is consistent is simply ¬ ⊥. Exercise. Show, using the rules of GL, that ⊥ ↔ p; recall that p = ¬ ¬p. Informally, that is, no statement can be proven consistent with Peano Arithmetic unless Peano Arithmetic is inconsistent. [Hint: First prove that ⊥ → p, then that p → ⊥. For the latter, it will help to rewrite ¬ ⊥ as ⊥ → ⊥, then consider the Löbian axiom ( ⊥ → ⊥) → ⊥.] While every specific deduction we would want to make in GL can be made via formal manipulations in the system, this is a computationally intractable way to handle deduction. (Deducing whether a given modal statement is a theorem of GL is NP-complete in general, just as it is in Peano Arithmetic.) However, there are some special cases where there are efficient algorithms for deducing provability in GL, and these happen to include cases of direct interest to decision theory... \n Fixed points of modal statements In Section 3.2, we figured out what happens when two different programs try to prove theorems about one another. In that case, the programs were simple enough that we could directly see where to apply Löb's Theorem, but in general we'd like to be able to handle more complicated phenomena. Fortunately, modal logic comes equipped with a powerful tool for doing precisely that: the theory of fixed points for modal sentences. In Peano Arithmetic, the Gödel statement G refers to itself via quining, in order to claim that it cannot be proved in PA; in GL, G corresponds to the formula p ↔ ¬ p. Similarly, we can have all sorts of formulas that refer to themselves and each other by quining, and these are represented by formulas p ↔ φ(p, q 1 , . . . , q k ) that are modalized in p: every occurrence of p in φ happens within the scope of some . (After all, quining can only refer to the provability of the statement's own Gödel number, not directly to the statement itself!) As it happens, whenever you have this setup in GL, with p equivalent to a formula that is modalized in p, then p is equivalent to some other formula which doesn't use p at all! For example, the Gödel statement is equivalent to the inconsistency of arithmetic: (p ↔ ¬p) ↔ (p ↔ ⊥). In the case of one-variable formulas p ↔ φ(p) with φ(p) modalized in p, there's a neat tool that helps us calculate these fixed points (where the result will involve no variables at all, just logical connectives, , , and ⊥). First, for many modal logics including GL, there's a corresponding class of sets and relations (called Kripke frames) such that a formula is provable in the modal logic if and only if a corresponding property holds for every Kripke frame in that class. And secondly, in the special case of sentences without variables in GL, we can reduce to checking a linear hierarchy corresponding to a world in which ⊥ holds, a world in which ⊥ ∧ ¬ ⊥ holds, and so on up the ladder of n+1 ⊥ ∧ ¬ n ⊥ for each n 9 . If you'll grant me that result (known as an World p p ¬p (p → ¬p) p → (p → ¬p) p p → ¬p ⊥ 1 1 1 1 1 1 1 ⊥ ∧ ¬ ⊥ 1 1 0 1 1 1 0 3 ⊥ ∧ ¬ 2 ⊥ ? ? ? ? ? ? ? Continuing with the second row, we find that p flips to false: World p p ¬p (p → ¬p) p → (p → ¬p) p p → ¬p ⊥ 1 1 1 1 1 1 1 ⊥ ∧ ¬ ⊥ 1 1 0 1 1 1 0 3 ⊥ ∧ ¬ 2 ⊥ 1 1 0 0 0 0 1 4 ⊥ ∧ ¬ 3 ⊥ ? ? ? ? ? ? ? 
And with the third row, when considering ¬p, it is important to remember that the lemma above requires A to be true in all previous rows, so even though ¬p is now true, ¬p remains false. We will fast-forward through the next few rows: World p p ¬p (p → ¬p) p → (p → ¬p) p p → ¬p ⊥ 1 1 1 1 1 1 1 ⊥ ∧ ¬ ⊥ 1 1 0 1 1 1 0 3 ⊥ ∧ ¬ 2 ⊥ 1 1 0 0 0 0 1 4 ⊥ ∧ ¬ 3 ⊥ 0 1 0 0 0 0 1 5 ⊥ ∧ ¬ 4 ⊥ 0 0 0 0 1 1 0 6 ⊥ ∧ ¬ 5 ⊥ 0 0 0 0 1 1 0 And here it stabilizes: every further row is identical 11 . And now that we've found the truth values of p, if we find a constant formula with the same truth values, then by the adequacy result I mentioned in passing, there must be a proof of equivalence in GL! We can get such a formula by taking Boolean combinations of the formulas that define the worlds (since the truth table stabilizes eventually, one only needs a finite combination of these); in the present case, that formula is ¬( 3 ⊥∧¬ 2 ⊥)∧¬( 4 ⊥∧¬ 3 ⊥), or equivalently 4 ⊥ → 2 ⊥. So in the end, we have the fixed point (p ↔ ( p → (p → ¬p))) ↔ (p ↔ ( 4 ⊥ → 2 ⊥)), and our algorithm for finding it is polynomial in the complexity of the statement (in fact, quadratic: we add columns for sub-expressions, and there's a linear bound for how many rows we need to calculate). Remark. We demonstrated the fixed-point algorithm only for one-variable expressions, but you can also take a statement of the form p ↔ φ(p, q 1 , . . . , q k ), where φ is modalized in p (not necessarily in any of the q i ), and find an equivalent expression p ↔ ψ(q 1 , . . . , q k ). The machinery for this is more difficult, as one needs to deal with more complicated Kripke frames (in the general case of GL, a Kripke frame is a set equipped with a transitive, irreflexive, well-founded relation), but it can be mechanized as well. I swept quite a lot under the rug in this section; if you want to get to know the topic on a much sounder basis, read The Logic of Provability [4] by Boolos. (This is an advanced textbook, so wait until after your first course in mathematical logic!) 7 Uses of Gödel-Löb Modal Logic in MIRI Research \n Modal Combat in the Prisoner's Dilemma We're now ready to handle the preprint Robust Cooperation in the Prisoner's Dilemma: Program Equilibrium via Provability Logic [2] , which encompasses the results from Section 3.2, and studies \"modal agents\" represented by expressions in modal logic. Recall the idea of Löbian cooperation from Section 3.2. One embarrassing thing about FairBot is that it doesn't check whether its potential cooperation would actually make any difference. It happily cooperates with CooperateBot, the constant strategy that simply returns C for every opponent. That's a bit like cooperating against a rock because the rock \"cooperated\" with you. Let's consider the following alternative: first we define DefectBot as the constant strategy that simply returns D for every opponent; then we define PrudentBot as the algorithm that, given the source code of an opponent X, searches in Peano Arithmetic for a proof of the statement \"X cooperates with PrudentBot, and if Peano Arithmetic is consistent 12 , then furthermore X defects against DefectBot\", and cooperates if and only if it finds such a proof 13 . Now it's more difficult to figure out what happens when FairBot plays the open-sourcecode Prisoner's Dilemma against PrudentBot, if we view these as proof searches in Peano Arithmetic. But if we assume infinite computational power (i.e. 
the ability to consult a halting oracle about proof searches in Peano Arithmetic), then they can be written out as simple statements in modal logic. Let p = "FairBot cooperates with PrudentBot", q = "PrudentBot cooperates with FairBot", r = "FairBot cooperates with DefectBot", and s = "DefectBot cooperates with FairBot". Then we have
p ↔ □q
q ↔ □(p ∧ (¬□⊥ → ¬r))
r ↔ □s
s ↔ ⊥.
[Footnote 12: Why does PrudentBot include the consistency requirement? Because certain actions that depend on agents require the consistency of Peano Arithmetic; for instance, FairBot does defect against DefectBot, but knowing this requires knowing that Peano Arithmetic is consistent, since otherwise FairBot might find a proof that DefectBot cooperates, using the Principle of Explosion.] [Footnote 13: Also, why not directly check whether your cooperation causes the opponent's cooperation, by defining MagicBot which cooperates if and only if it finds a proof of the statement "X cooperates with MagicBot if and only if MagicBot cooperates with X"? Because this makes it susceptible to spurious counterfactuals as in Section 3.3, and in fact MagicBot is equivalent to FairBot when they are rendered as modal statements.]
If we look for the fixed point of this modal system, we can use the truth table formalism of the fixed-point theorem (we do all of these simultaneously, since every variable is equivalent to a fully modalized expression) to show that p ↔ ⊤, q ↔ ⊤, r ↔ □⊥, and of course s ↔ ⊥. \n Exercise. Show this formally. So what actually happens when you run the bounded versions of these agents against one another (with sufficiently high bounds on how deep they search for proofs before giving up)? Again, the bounded analogue of Löb's Theorem helps any valid proof in GL to go through if the proof bounds are sufficiently large. And since we really don't expect there to be a contradiction in Peano Arithmetic or any of the systems built atop it by addition of the axioms ¬□ⁿ⊥ 14 , in the bounded case we should expect that every proof search corresponding to a □ⁿ⊥ comes up empty. Thus p and q are true, and r and s are false. (This is equivalent, in the infinite computation case, to specifying that we care about what actually holds in the standard model of the natural numbers.) Therefore we can use these tools to analyze interactions between "modal agents" represented by families of modal statements like FairBot and PrudentBot, and moreover, we can figure out what actually happens using an efficient algorithm rather than the huge brute-force proof search! It's a little tricky to define these modal agents in one fell swoop, so let's start with the simplest case: agents that don't check their opponent's action against any third parties, but only against themselves. (We will call these "modal agents of rank 0".) A rank 0 modal agent is represented by a formula p ↔ φ(p, q), where φ is modalized in both p and q (we don't allow modal agents to run the other agent, only to prove things about them; this avoids the infinite regress of two agents simulating each other forever, waiting for the other to 'make the first move'). By the modal fixed-point theorem, this is equivalent in GL to p ↔ φ(q) for some φ modalized in q, so we will define rank 0 modal agents using those formulas. Then if we have two modal agents of rank 0, represented by p ↔ φ(q) and q ↔ ψ(p), the simultaneous fixed point of these formulas gives us constant sentences (combinations of □ⁿ⊥) from which we can deduce their actions.
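Because every right-hand side in the FairBot/PrudentBot/DefectBot system above is fully modalized, its simultaneous fixed point can be computed world by world: the values at the k-th world (the one satisfying □ᵏ⊥ ∧ ¬□ᵏ⁻¹⊥) depend only on the values at the worlds below it. The sketch below is my own illustration of that computation; the function and variable names are arbitrary, and it assumes the four equations exactly as displayed above.

def solve(num_worlds=8):
    # Simultaneous fixed point of
    #   p <-> []q,  q <-> [](p and (not []bot -> not r)),  r <-> []s,  s <-> bot,
    # where []X holds at world k iff X holds at every world j < k.
    vals = {v: [] for v in "pqrs"}        # vals[v][j] = truth of v at world j+1

    def box_bot(k):                       # []bot holds only at world 1
        return k == 1

    for k in range(1, num_worlds + 1):
        prior = range(k - 1)              # indices of the worlds 1 .. k-1
        p = all(vals["q"][j] for j in prior)
        # (not []bot -> not r) is equivalent to ([]bot or not r)
        q = all(vals["p"][j] and (box_bot(j + 1) or not vals["r"][j]) for j in prior)
        r = all(vals["s"][j] for j in prior)
        s = False
        for name, value in zip("pqrs", (p, q, r, s)):
            vals[name].append(value)
    return vals

print(solve())
# p and q come out true at every world (so p and q are equivalent to T), s is
# false everywhere, and r is true only at world 1, i.e. r behaves like []bot,
# which is false in reality as long as Peano Arithmetic is in fact consistent.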
Now let's allow calls to third parties, like PrudentBot's call to the opponent's action against DefectBot. We'll insist that it bottom out in a finite number of statements: for this, it suffices that every modal agent can only predict its opponent's action against itself and against modal agents of strictly lower rank. That is, Definition. X = φ, Z 1 , . . . , Z k is a modal agent of rank n ≥ 0 if φ is a fully modalized formula of k + 1 variables p, r 1 , . . . , r k , and each Z i is a modal agent of rank < n. Given two modal agents X = φ X , Z 1 , . . . , Z k and Y = ψ Y , W 1 , . . . , W l , we then construct the family of formulas given by recursively applying the modal agents to each other and applying the third parties to the originals: p X ↔ φ X (q Y , r Z 1 , . . . , r Z k ) q y ↔ ψ Y (p X , s W 1 , . . . , s W l ) r Z i ↔ φ Z i (t (Y,Z i ) , u Z i 1 , . . . , u Z i m i ) t (Y,Z i ) ↔ ψ Y (r Z i , v (W 1 ,Z i ) , . . . , v (W l ,Z i ) ) and so on until it bottoms out. (Please excuse the proliferation of indices and the imperfect rigor, and take another look at the example above of PrudentBot versus FairBot, where FairBot= p , PrudentBot= (p ∧ (¬ ⊥ → ¬r)), DefectBot , and DefectBot= ⊥ .) Thus any two modal agents can be played against each other, and the result of that \"modal combat 15 \" computed by an effective algorithm. So there's an actual Haskell program, written by Mihaly Barasz and Marcello Herreshoff, that checks the result of the modal combat, and that program helped the authors to find many other patterns about modal agents (including the discovery of PrudentBot, which is never exploited, achieves mutual cooperation with itself and with FairBot, and which correctly defects against CooperateBot). Among the other results in that paper: • Third parties are necessary to get cooperation from FairBot while defecting against CooperateBot; no modal agent of rank 0 can achieve both. • No modal agent globally dominates another modal agent, since it is possible to write another modal agent which punishes or rewards agents for their other decisions. (For example, consider TrollBot, which cooperates with its opponent if and only if it proves that its opponent cooperates with DefectBot.) • In another obstacle to strong definitions of optimality, consider WaitFairBot N defined by ¬ N +1 ⊥∧ (¬ N ⊥ → p) . Then any modal agent that defects against DefectBot will fail to elicit mutual cooperation from WaitFairBot N for sufficiently large N . • Finally, there are some modal agents that can be exploited by non-modal agents but not by any modal agent. Consider MimicFairBot, which cooperates with X if and only if Peano Arithmetic proves that X cooperates with FairBot. (That is, MimicFairBot= r, FairBot .) It is possible for an algorithm to exploit MimicFairBot (consider the non-modal agent that cooperates if and only if the opponent's source code is identical to the source code of FairBot), but every modal agent treats MimicFairBot and FairBot identically. The field of modal combat is new and still wide-open, and I expect to see many more results soon! Furthermore, it inspired an unexpected development in decision theory for one-player games as well... \n Modal Decision Theory Recall the setup in Section 3.3: we have a universe U () which contains an agent A(). The universe runs some computation and returns some outcome; as a part of that computation, the agent runs some computation and returns some action. 
Everything is deterministic, but we can think of a decision theory as a condition on the allowable form of A(); if you have a certain decision theory, then you find yourself only in certain universes and you perform some specific algorithm in those universes. In order to define what we mean by a decision theory, then, we need to specify where in the universe the agent goes. So we start with some utility values over outcomes, and a universe template U (•) with a spot for an agent; then our decision theory specifies an agent A() that goes there, such that we now have the universe U A () = U (A) which computes a certain outcome; and we judge the decision theory based on how highly it values that outcome. Causal decision theory (CDT) and evidential decision theory (EDT) are two famous philosophical examples of decision theory. Both of these require the universe template to supply the agent with some additional information: in the case of CDT, a causal graph of the universe, and in the case of EDT, the actions and outcomes of other agents in the same universe template. Because CDT and EDT can be shown to make bad decisions on some problems even when given the right info, several people have proposed alternatives, including timeless decision theory [17] (TDT) and updateless decision theory (UDT). (For more on the philosophical side of things, see Toward Idealized Decision Theory [13] .) While working out models of UDT, in particular, Vladimir Slepnev and Benja Fallenstein found a nice formalization of a simple decision theory using modal logic, one that they could even prove optimal on a certain class of modally defined problems! Consider a universe template U (•) with a finite set of possible actions A and a finite set of possible outcomes O. We have some preference ordering over O, and we pick some default action a 0 to return if we can't prove anything at all. We then define the decision theory which takes U (•) and returns the agent A() given by Algorithm 3. Note that this decision theory is immune to the spurious conterfactuals of Section 3.3: if it ever proves a counterfactual, it immediately makes the antecedent true by returning that action, so the consequent must be true as well. Moreover, if it proves any counterfactual, then it has already tried and failed to achieve every outcome higher in its preference order. Before we show an optimality result for Algorithm 3 in a special domain, we will have to show what can go wrong with it in other domains. Firstly, the universe template could be 'unfair', in the sense that it cares about aspects of the agent that aren't significantly related to its output. For instance, we could have a universe template that gives the best outcome if and only if A() is a tab-indented Python program, or if and only if A() can be proved to always select the first action of A in alphabetical order (regardless of the details of U (•). We will not worry about how to succeed on such pathological problems in general. A very strong fairness condition for a universe template is that U (•) should be a function of only the output of A, rather than depending on any other features 16 . We can define this in Peano Arithmetic by choosing an encoding for agents in a particular universe template, then writing a sentence that asserts (A() = B()) → (U A () = U B ()); the universe template is 'provably extensional' if that statement is a theorem of Peano Arithmetic. Everything thus far can be represented in modal logic. 
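Before turning to the modal representation, here is a heavily simplified sketch of the agent defined by Algorithm 3. The Peano Arithmetic proof search is replaced by a lookup that only "proves" the true counterfactuals of the three-door universe from Algorithm 1; the names and the lookup are my own illustrative choices, and none of the actual proof-theoretic machinery is captured here.

NEG_INF = float("-inf")

def udt_agent(actions, outcomes_by_preference, provable, default_action):
    # Algorithm 3: try to prove "A() = a -> U() = x" for the best outcome x first.
    # Acting on the first counterfactual found makes its antecedent (and hence
    # the implication) true, and by that point every more-preferred outcome has
    # already been tried and found unprovable.
    for outcome in outcomes_by_preference:       # descending order of preference
        for action in actions:
            if provable(action, outcome):
                return action
    return default_action

# Three-door universe from Algorithm 1: door 1 -> 100, door 2 -> 1, neither -> -inf.
utility = {1: 100, 2: 1, 0: NEG_INF}

def provable(action, outcome):
    return utility[action] == outcome            # stand-in for the PA proof search

choice = udt_agent(actions=[1, 2, 0],
                   outcomes_by_preference=[100, 1, NEG_INF],
                   provable=provable,
                   default_action=0)
print(choice, utility[choice])                   # 1 100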
We will represent O and A by provably mutually exclusive and exhaustive (p.m.e.e.) families of modal statements. For example, {p∧q, p∧¬q, ¬p} is a p.m.e.e. family. Then if the universe template and the agent are modal expressions, the universe template is provably extensional if (a ↔ b) → (U(a) ↔ U(b)) is a theorem of GL. (You can check that the universe from Algorithm 1 corresponds to a provably extensional modal universe with 3 actions and 3 outcomes.) However, the modal version of Algorithm 3 can fail even on provably extensional modal universe templates; indeed, for any modally defined decision theory there is an "evil problem" which it fails on. (This can be shown by representing the modal decision theory itself in the universe template, and punishing any agent that acts identically to the action of the modal decision theory.) So our definition of optimality must allow some way around this. As it happens, although for any decision theory one can write a specific universe template that frustrates it, for any fixed universe template there is a modification of Algorithm 3 which succeeds on it, and this modification is simply a replacement of "proves in Peano Arithmetic" with "proves in Peano Arithmetic given ¬□ⁿ⊥" for sufficiently large n. We can then show that this modification achieves the best outcome that is achievable by any modal decision theory. This result shows that we're on to something: in this restricted domain, we can do as well as it's possible to do, as long as we're bringing enough deductive capacity to bear! Note that here, the universe acts on the agent only by running it and using the output, while the agent interacts with the source code of the universe only by proving things about it. So in particular, it's less rich than modal combat, in which your agent is proving things about the universe, but another part of that universe (the opponent) is proving things about the agent as well. So before we count ourselves done with using modal logic in decision theory, we have many open questions about competition and bargaining and other phenomena. It's a beautiful Löbian universe, waiting for us to explore! In addition to the contributions of Vladimir Slepnev and Benja Fallenstein, the "evil decision problems" were introduced by Nick Bone. If you'd like to get into this topic, I recommend Benja's recent posts on evil decision problems in provability logic, optimality of the model of UDT [8] , and its sequel. \n Acknowledgments I originally prepared about half of these notes for a talk at a MIRIx workshop in Los Altos, California in November 2014; the feedback from the participants was especially helpful. When writing up the notes, I got some great assistance from Benja Fallenstein, Marcello Herreshoff, Elliot Jin, and Nathaniel Thomas.
Algorithm 3: A() (Slepnev-Fallenstein model of UDT)
    for x ∈ O (in descending order of preference) do
        for a ∈ A do
            if Peano Arithmetic proves "A() = a → U() = x" then return a
        end
    end
    return a_0
\n Footnotes
Note the asymmetry: a proof of defection does not directly lead to defection, since FairBot acts based only on whether it finds a proof of cooperation, and it nowhere presumes that finding a proof of defection precludes finding another proof of cooperation. Again, the formal system can't correctly know that it is consistent!
Note, however, that it's impossible to prove a spurious counterfactual about the choice you actually take, and thus it could not be consistent to find a spurious proof that A() = 1 → U() = 2 or that A() = 2 → U() = 11.
Of course, this example does take a leap of faith in accepting the existence of infinitely many natural numbers; some mathematicians reject that, call themselves strict finitists, and assert that we have no idea whether the axioms of arithmetic are consistent. But they do accept that any theory with a completely specified finite model must be consistent.
Actually, we need to get a bit more complicated to do all inductive proofs on the natural numbers; what we truly need is that if φ(x, z_1, . . . , z_k) is a logical formula, then ∀z_1 ∈ N . . . ∀z_k ∈ N (φ(0, z_1, . . . , z_k) ∧ (∀x ∈ N (φ(x, z_1, . . . , z_k) → φ(Sx, z_1, . . . , z_k)))) → ∀y ∈ N φ(y, z_1, . . . , z_k).
Assuming that the standard ZFC axioms of set theory are consistent, as we do throughout this paper, since we use these to construct models of all the weird axiomatizations of Peano Arithmetic.
Gödel himself, toward the end of his life, circulated a modal logic version of Anselm's ontological proof of the existence of God. The proof is of course formally valid, but the axioms are a bit overpowered and underjustified.
Note that these all correspond to nonstandard models of Peano Arithmetic, but that in some sense they approach the standard model as n → ∞.
This part requires our assumption that the expression is modalized in p, since otherwise we couldn't fill in the value for it without first knowing the value for p.
Since every expression A either remains true forever or switches over to false and remains that way forever, we can see that every expression must eventually stabilize; and if we've decomposed it at each step involving a □, once two rows are identical, all subsequent rows must be as well.
If this step worries you, you may be reassured to know that the standard axioms of set theory are strong enough to assert that this tower upon Peano Arithmetic is consistent all the way up. Of course, that simply means that maybe set theory is contradictory, but if so then the contradiction has been hiding pretty effectively from the mathematical community for the last few decades...
I came up with that pun. I'm proud of it.
Note that this condition is too strong to include modal combat, since there it also matters whether a given formal system can prove what the algorithm's output is, not only what the output is.
Thus our optimality theorem will not cover modal combat, which is consistent, given that we mentioned in the last section that we lack a good optimality result for modal combat.", "date_published": "n/a", "url": "n/a", "filename": "lob-notes-IAFF.tei.xml", "abstract": "I apologize for all of the exclamation points in the previous paragraph; in my defense, if any result deserves exclamation points, it's Gödel's First Incompleteness Theorem! Also, there's a sleight of hand here in the demonstration that G never halts. We'll get further into that in Section 4.", "id": "9ff3fd33bdf522a744faaffde242c8f9"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "science.aay4219.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "Computational_Models_of_Ethical_Reasoning_Challenges_Initial_Steps_and_Future_Directions.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Peter Cihon", "Moritz J Kleinaltenkamp", "Jonas Schuett", "Seth D Baum"], "title": "AI Certification: Advancing Ethical Practice by Reducing Information Asymmetries", "text": "Certification can be defined as the attestation that a product, process, person, or organization meets specified criteria (paraphrased from [6] ). Certification involves an assessment of the entity to be certified, often, but not necessarily, by a trusted third party. If the assessment determines that the entity meets the specified criteria, then the entity is certified accordingly. Thus, the certification process reveals information about the inner workings of the entity that can be provided to external stakeholders, reducing information asymmetries between insiders and outsiders [7] . In doing so, certification incentivizes insiders to adhere to higher standards because they can get credit for it if they do and get caught if they do not. In short, certification is a governance tool for advancing transparency and incentivizing insiders to meet specified criteria. Those criteria could include voluntary AI ethics principles or mandatory regulatory requirements. Certification is already in wide use in many sectors [8] , [9] and is starting to appear for AI. Certification includes such disparate phenomena as the U.S. EnergyStar program for consumer appliance energy efficiency, ISO 9001 for quality management systems in global supply chains, and the mandatory \"CE mark\" for various products sold in the EU. Certification programs for AI systems have been proposed or are currently being developed by the European Commission [10] , the IEEE [11] , and by the Chief Scientist of Australia [12] ; one has already launched from the government of Malta [13] . Education certification programs include the Queen's University executive education program on Principles of AI Implementation [14] and the Finnish civic education program Elements of AI [15] . Additionally, a coalition of Danish organizations [16] is developing an IT security certification for organizations that includes AI-related criteria. These various programs speak to the diverse forms that certification programs can take, including public and private, voluntary and compulsory, and for the AI systems themselves as well as the people and organizations involved with them. 
This article presents an overview of certification for advancing ethical AI. It provides background on AI certification, drawing on management literature (Section II), surveys current AI certification programs and proposals for new programs (Section III), and discusses prospects for certification to address issues raised by future AI technologies (Section IV). Section V concludes this article. \n II. BACKGROUND This section reviews management literature on the merits of certification as it relates to the governance of AI. This literature typically focuses on the market behavior of corporations and their customers. However, certification is also relevant to nonmarket settings. For example, AI systems developed by nonprofit and/or open-source teams could seek certification to build user confidence, even if users are not charged for using the systems. Or, governments might seek international certification of their military AI systems to demonstrate compliance with international humanitarian law or other standards. Thus, while certification is important for the corporate governance of AI (on which, see [17] ), it is also more widely applicable. First, some terminology. The object of certification is the entity to be certified. It may be a product, process, individual, or organization. The certifier is the actor who does the certifying. The assessor is the actor who assesses whether or not an object of certification meets the specified criteria prior to certification. The assessor and certifier can be one and the same or they can be two separate actors, such as in the case of certification programs that use third-party assessment. The assessment or conformity assessment is the evaluation of whether the object meets the specified criteria; this can vary in stringency, from a review of application materials to an audit of systems or processes. The applicant is the actor who controls the object of certification and seeks to obtain certification when it is voluntary. The applicant and object of certification can be the same, such as in the case of skills certifications granted to individuals. Alternatively, the applicant can be the entity who oversees, manages, develops, or otherwise controls the object of certification. Finally, the audience is the group of people who will be informed by the certification. For example, consider an AI system developed by a corporation, certified by a government agency, assessed by an independent consulting firm, to inform consumer purchasing decisions. Here, the object is the AI system, the certifier is the government agency, the assessor is the consulting firm, the applicant is the corporation, and the audience is consumers. In some cases, the applicant may also simultaneously be the certifier; this is known as self-certification. Self-certification creates a conflict of interest, though this can sometimes be addressed via measures such as audits and liability regimes. Two additional sets of terms help discuss the governance potential of certification programs. First, symbol-substance coupling refers to alignment between the statement that an object meets the certification criteria (the symbol) and that object actually meeting those criteria (the substance). Likewise, symbol-substance decoupling is when a certification inaccurately characterizes its object [18] - [20] . 
Second, means-ends coupling refers to alignment between the means of the certification program (i.e., the specific criteria to which objects are being held) and the ends underlying its design [19] , [21] . Means-ends decoupling is when a certification program fails to advance its intended goals. Symbolsubstance decoupling and means-end decoupling are two failure modes of certification programs. \n A. Reducing Information Asymmetries As noted above, a primary role of certification is to reduce information asymmetries. Information asymmetries raise two issues: the \"selection problem\" of identifying desirable suppliers of a good or service, and the \"monitoring problem\" of evaluating whether the supplier is meeting specified agreements [22] . Both problems create space for rent-seeking behavior, in which the supplier extracts greater wealth from other parties without adding social value [23] . Information asymmetries are acute in AI systems because the systems are often complex and opaque and users typically lack the data and expertise necessary to understand them. Certification is especially well suited to addressing such information asymmetries that arise from \"credence\" and \"Potemkin\" attributes of goods and services [24] . Credence attributes are those that can be identified by outside parties, but only as trends that emerge after repeat observations. Individual audience members may struggle to identify credence attributes on their own, but an organized third party, such as a certification program, can collect the data needed for a reliable characterization. Credence attributes of AI systems include whether a system is fair or unbiased, is explainable across a wide range of use cases, protects user privacy across a wide range of settings, and is safe and secure to rare and/or diverse threats. Potemkin attributes present even greater information asymmetry challenges: they cannot be identified at all by outside parties, and instead require insider information. An important type of Potemkin attribute is the process through which the good or service was made, such as whether an AI system was developed in accordance with labor, environmental, or ethical standards, or whether it was rigorously tested prior to deployment. Certification programs can address both credence and Potemkin attributes, as seen for example in the popular ISO 9001 [25] certification for organization quality management and ISO 14001 [26] certification for environmental management. By providing information about credence and Potemkin attributes, suppliers can create new signals to attract outside parties, including market customers [27] and others. Table I presents a summary overview of the key concepts underlying our understanding of certification, and how we apply them to the field of AI. A certification program reduces information asymmetries when it accurately characterizes the objects that it certifies. It must avoid false positives, in which an object is certified even though it fails to meet the specified criteria, and false negatives, in which an object meets the criteria but is denied certification. Accurate certification thus exhibits high symbolsubstance coupling. Research on certification has identified strategies for improving certification accuracy, i.e., improving symbolsubstance coupling. First, the objects to be certified should be assessed continuously, rather than only once or not at all [7] , [28] - [30] . This ensures that the information conveyed by the certification stays up to date. 
Second, assessments should be made by qualified independent assessors using their own evidence and not just documentation provided by the applicants [24] , [31] , [32] . This is needed to protect against applicants misleading the assessor, especially to gain a false positive. These strategies are of further value for incentivizing good behavior by the object and/or applicant, as discussed in the section below. Indeed, the strategies were largely developed to promote behavior change. To achieve accuracy, AI certification programs will often need to parse technical details of the systems and their development. Evaluation methods required for accurate AI certification are at an early stage of development. The most mature area may be documentation for AI system training data and model characteristics [33] - [35] , which can help verify certification criteria associated with both credence and Potemkin attributes. Supply-chain and development documentation methods that draw on insights from other fields may help verify Potemkin-attribute criteria [36] , [37] . Further, benchmark tests to assess AI system performance for specific tasks in specific settings [38] and more general behavioral settings for specific types of systems [39] can help verify credence-attribute criteria. However, methods for criteria verification in broad areas of fairness, transparency (explainability), safety, and security remain under development. Certification programs aiming to address these areas will need to update their criteria over time as the state-of-the-art advances. \n B. Incentivizing Change In reducing information asymmetries, certification can further serve to incentivize changes to applicant practices. For example, research has shown that corporations may be more motivated to achieve ethics standards if they can use certification to demonstrate their achievements to customers who value these achievements [31] , [40] . Where there is demand for certification-e.g., from corporate customers who value ethics achievementscertification programs can succeed on a voluntary basis, with no government regulation [9] . Government regulation can further boost voluntary programs via tort liability [41] or tax credits. Mandatory certification programs display higher compliance levels, though voluntary programs are sometimes found to be more cost effective [42] . For a certification program to change applicant practices according to its specified criteria, it is important that it accurately characterizes these practices and their outputs. In particular, false positives permit applicants to pursue certification without changing their underlying practices: they can adopt the certification criteria symbolically but not substantively. Such symbolic adoption is of particular concern in AI. The recent popularity of AI ethics principles has prompted concerns about \"ethics washing\" [43] , [44] , in which AI groups articulate ethics goals to bolster their reputation but do not act accordingly. An accurate certification regime could mitigate the problem of AI ethics washing. Prior research identifies several strategies to motivate substantive instead of symbolic participation in certification programs. First are strategies to improve certification accuracy as discussed above. Second, certification assessments should not be onerous on the applicants; otherwise, applicants that meet the assessment criteria may decline to participate in the program [24] . 
Third, the cost to applicants of failing an assessment should be high (in terms of damaged reputation, lost market power, etc.), so as to incentivize participating applicants to meet the specified criteria [22] , [24] , [27] , [31] . A successful certification program may need to promote its value so that other parties (e.g., customers) know to penalize applicants that fail certification assessments. Finally, certification programs should require applicants to integrate the values and norms underlying the criteria into their organizational identities and strategies; this makes substantive adoption more likely [30] , [45] . A robust body of research suggests that properly constructed certification programs can couple symbol and substance to accurately characterize and induce changes to corporate practices. For example, studies of ISO 14001 [26] have found a significant correlation between: 1) the ISO 14001 attestation and the presence of environmental management practices in corporations and 2) the presence of these practices in corporations and their improved environmental performance [7] , [46] . Similar evidence exists for the ISO 9000 series on quality management practices [47] , [49] . Ultimately, adherence to certification criteria depends on applicants' actions. Their actions can derive from characteristics of themselves in addition to characteristics of the certification program. Indeed, applicants sometimes engage in a mix of symbolic and substantive adoption even within the same certification program [47] . This may be due to a lack of rational reflection or to internal disagreement within the applicant about the importance of the certification criteria [45] . Additionally, some studies have found that firms with higher revenues tend to adopt certification more substantively because they have the resources to meet the certification criteria [51] , [52] . Therefore, while it is important for certification programs to be well designed, the applicants retain an essential role. \n C. Ensuring Certification Advances the Right Goals The preceding section discussed how to ensure that certification criteria are in practice actually met, i.e., that certification is adopted substantively and not just symbolically. The current section covers the selection of the criteria themselves. These criteria should be designed such that if they are achieved, the certification program will deliver improvements on its underlying motivation: advancing ethics principles, making progress on societal issues, etc. In other words, the means of certification must be coupled to its ends. Means-ends decoupling can arise when developers of certification programs fail to treat the focal problem holistically, leading them to misidentify the causal relationships driving the issue at hand and prescribe the wrong action for a particular problem [21] . This can spur at least three types of mistakes. First, certification may be inadequately customized for the circumstances it addresses. A common shortcoming of certification programs is the use of uniform, homogeneous prescriptions that neglect important local and time-specific factors [21] . For example, AI systems required to use a certain dataset that performs well on demographic diversity could end up underperforming when even better datasets become available. 
Some of the most popular corporate certification programs make homogeneous prescriptions for corporations across the globe, though a context-dependent multiplicity of corporate practices may be necessary to address societal problems across environments. Certification criteria that are customized for particular circumstances may tend to perform better, though the development of such criteria may require more resources. Second, certification can cause unintended harms. Narrowly crafted certification criteria can neglect important causal relationships; when followed, they may make some things better but other things worse. For example, AI systems meeting certain performance standards may require additional computing power, which inadvertently harms the environment via increased energy consumption [53] . This type of problem is also known as the \"waterbed effect\" because lying on a waterbed simply displaces the water: the problem \"rises\" again somewhere else [21] , [54] . To avoid unintended harms, certification programs have to avoid overly simplistic prescriptions and instead embody and promote systemic thinking about the issues at hand [21] . Third, certification criteria can be overly restrictive, inhibiting applicants from taking innovative actions that would better address the particular problem. One way to make actions more customized to complex causal relationships expressed in particular circumstances is to permit applicants to make decisions on a case-by-case basis. However, certification criteria often prescribe a rigid set of actions, thereby inhibiting applicants from identifying better solutions [21] , [55] . Certification programs can mitigate this type of problem by \"stimulating internalization,\" meaning that they de-emphasize rigid rules and instead emphasize changes to applicants' internal culture, such that applicants are likely to do the right thing as circumstances dictate [21] . Related to these challenges is the question of whether a certification program addresses the right focal problem(s) to begin with. Exactly which problems should be focused on is ultimately a question of ethics. Opinions may vary on which issues are most important. For example, many AI experts are divided on whether the field should focus on issues raised by near-term or long-term AI [56] . Certification programs may sometimes need to take sides on these sorts of divides, such that they \"cannot please everyone,\" with some contending that the programs work on the wrong issues. In other cases, certification programs could emphasize issues that may seem appealing at first glance because of their broad scope, but become disagreeable when actors' interpretations of this broad scope begin to differ. To ensure that certification programs overcome these challenges and address their focal problems as holistically as possible, their developers should draw on the expertise of a wide range of stakeholders and carefully reconcile these stakeholders' differing perspectives. \n III. AI CERTIFICATION PROGRAMS AND PROPOSALS To identify existing AI certification programs, we conducted Internet searches, monitored social media, solicited input from colleagues, used our own prior knowledge, and examined references from documents about the programs and other relevant literature. These programs are not exhaustive, but are largely representative of the variation in AI-related certification. 1 We assessed the programs using publicly available documents. 
To fill in gaps in the documents, we conducted semi-structured interviews with representatives of certification program organizations. In total, we conducted three such interviews across seven identified AI certification programs. A) European Commission White Paper on Artificial Intelligence. \n B) IEEE Ethics Certification Program for Autonomous and Intelligence Systems (ECPAIS). C) Malta's AI Innovative Technology Arrangement (AI ITA). D) Turing Certification proposed by Australia's Chief Scientist. E) Queen's University's Principles of AI Implementation executive education. F) Finland's civics course Elements of AI. G) Danish labeling program for IT-security and responsible use of data. The seven programs classify into four categories: 1) selfcertification of AI systems (A); 2) third-party certification of AI systems (A-D); 3) third-party certification of individuals (E and F); and 4) third-party certification of organizations (D and G). Table II presents our ratings of these categories in terms of feasibility, symbol-substance coupling, and meansends coupling. The ratings are based on our analysis of the 1 Additional certification programs under development include AI Global's AI system certification [57] , CertNexus's certified ethical emerging technologist program [58] , and ForHumanity's compliance audit for AI systems and organizations [86] . A number of universities offer AI ethics-related certifications or minors, including San Francisco State [59] and Carnegie Mellon [60] . Consultancies offer algorithmic auditing services, which may confer a certification [61] . Self-certification is widely used in many industries globally, including suppliers' declarations of conformity used in telecommunications and motor-vehicle manufacturing [62] . For self-certification of AI systems, we rate certification programs in this category as having high feasibility because it requires little involvement from nonapplicant actors and limited back-and-forth between them and the applicants. The symbol-substance coupling is low because of its inherent conflict of interest and lack of strong enforcement and assessment. These problems can be attenuated via strong ex-post enforcement after harms occur via liability regimes. However, liability is problematic especially for high-risk applications where the initial harm could be catastrophic, and because legal liability adjudication poses uncertainties for AI developers [63] . Finally, the means-ends coupling is medium because it encourages applicants to take ownership of the certification process and all that it represents, including by documenting their AI practices and thinking systemically about the corresponding ethical implications. Third-party product certification is used in numerous industries globally, including Underwriters Laboratories certifications for electrical and fire safety and the Common Criteria [64] for product cybersecurity certification. For thirdparty certification of AI systems, we rate certification programs in this category as having medium feasibility because AI systems are a relatively well-defined object to assess and certify, but they are also a novel one for which prior experience is of limited use. The need for qualified third-party organizations to offer assessment adds additional complications, including questions around costs and accountability [5, p. 12] . 
Symbolsubstance coupling is medium because, on the one hand, third parties can assess certification claims and demand for certified systems could drive substantive adoption, but on the other hand, actual implementation can fall short on both meaningful assessment and enforcement. Finally, means-ends coupling is medium because of the potential for, but difficulty of, successfully assessing each AI system in its development and deployment context. Third-party certification of individuals is common in a number of professions, including among engineers, lawyers, and actuaries, as well as programs like the Certified Information Privacy Professional credential [65] . We rate AI-related certification programs in this category as having medium feasibility. A core variable is whether certification criteria include matters related to moral factors in addition to the usual criteria of technical knowledge and skills. 2 Assessment of moral standards is important but less feasible. Moral character is not readily testable, and educational institutions may have less financial incentive to do so. The symbol-substance coupling is low because the substance of certification criteria is dynamic. Methods in explainability, fairness, safety, etc., are active research topics; certificates attesting to knowledge of them can become quickly outdated. This problem could be addressed via ongoing audits or education requirements comparable to those sometimes required for lawyers, but current AI certification programs lack such measures. Finally, the meansends coupling is low because, absent robust and dynamic moral standards, certification is unlikely to guarantee that the individuals will achieve sound performance on AI issues. Third-party certification of organizations include the U.K. Cyber Essentials [66] cybersecurity company certification and the FairTrade [67] licensing contract. We rate AI-related certification programs in this category as having medium feasibility because it can draw on a preexisting body of experience and knowledge in corporate governance and related fields, but it still requires AI-specific innovations. The symbol-substance coupling is high because it can use robust preexisting methods like audits and data security management. Organizations are often familiar with these methods and understand how to respond with substantive activity. Their familiarity could also be used to game the system, though auditors are themselves familiar with such tactics. Finally, the means-ends coupling is medium because assessments can support the adoption of a \"systemic mindset\" but certification criteria may not be easily customized for the context of a particular organization. \n A. European Commission White Paper on AI The European Commission White Paper on AI [10] proposes separate certification programs for low-risk and high-risk applications. Both programs would draw on prior certification programs, in particular, those of the European Economic Area \"CE marking\" rules and the 2019 EU Cybersecurity Act. Both programs would also use similar certification criteria, such as the EC's AI Ethics Guidelines for Trustworthy AI [68] . If implemented, the programs would be important in their own right and could further be influential for AI certification beyond the single market, as is often the case for EU policies [69] . For low-risk applications, we interpret the White Paper as calling for voluntary self-certification. 
This is clear from the White Paper's call for following the precedents of CE marking and the Cybersecurity Act, both of which use self-certification for low-risk applications. There is some ambiguity because the White Paper also suggests for the low-risk program to use a \"combination of ex ante and ex post enforcement\" [10, p. 24] . Ex ante enforcement would seem to preclude self-certification. We interpret a stronger significance to the two self-certification precedents and therefore believe that the White Paper intends the low-risk program to involve voluntary self-certification. AI applicants could attest to their system having met the criteria, and face ex post penalties if their system falls short. Applicant participation would be incentivized by consumer demand for certificates of AI trustworthiness [10] . To be effective, the low-risk program would need to be carefully designed. First, it needs an enforcement mechanism with clear, strong penalties for applicants who falsely certify their AI systems as meeting the specified criteria [70, p. 5] . Otherwise, there could be symbol-substance decoupling, with applicants rubber-stamping their systems as meeting the criteria. Fortunately, the program could build on the compliance infrastructure that already exists for CE markings. Second, the criteria need to capture the many aspects of AI trustworthiness as defined in the \"Ethics guidelines for trustworthy AI\" [68] , which include, among other things, fairness, security, and accountability. The criteria further must promote the systemic mindset needed to advance this conception of trustworthiness. Otherwise, there could be means-ends decoupling, with the certification program not advancing its underlying goals. For high-risk applications, the White Paper clearly specifies mandatory third-party certification via what it refers to as \"conformity assessment\" [10, p. 23] . Its procedures would be based on CE marking rules or the Cybersecurity Act [10] , both of which require third-party verification for high-risk applications. The White Paper's program would use existing accredited auditing organizations (\"notified bodies\") of the Member States [10, p. 25] . EU regulation would compel AI applicant adoption and compliance. To be effective, the high-risk program will also need several design elements. First, it needs to ensure that the certifying bodies have the technical expertise and capacity needed to assess AI systems. Using the existing notified bodies makes the program easier to implement, but the bodies currently lack expertise and capacity on AI. Unless this situation is improved, the assessments could be inaccurate, and AI applicants may be able to obtain symbolic certification. Additionally, as with the low-risk program, the criteria must be carefully specified to ensure means-ends coupling. \n B. IEEE Ethics Certification Program for Autonomous and Intelligent Systems The IEEE ECPAIS program [11] is currently under development. It considers driving factors and inhibitors of AI system transparency, accountability, and bias. Offering a series of recommendations, the program is expected to yield a quality mark. The developer of an AI system can use the quality mark in presenting the system if it follows the recommendations in a checklist fashion. The program is expected to be administered by an organization other than the IEEE, which will assess certification applications. Whether the program will use an auditor to verify these applications has yet to be determined. 
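ECPAIS's checklist-plus-quality-mark structure lends itself to a machine-readable attestation record that an assessor or auditor could later verify. The sketch below is purely illustrative: ECPAIS has not published a schema, and every class, field, and criterion name in it is our own invention.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical illustration only; ECPAIS has not published a schema, and all
# names below are invented for this sketch.

@dataclass
class CriterionChecklist:
    criterion: str                  # e.g., "transparency", "accountability", "bias"
    recommendations_met: List[str]  # IDs of checklist recommendations the developer attests to
    recommendations_total: int      # number of recommendations on the checklist

    def satisfied(self) -> bool:
        # Simple rule for the sketch: every recommendation on the checklist must be met.
        return len(self.recommendations_met) == self.recommendations_total

@dataclass
class QualityMarkAttestation:
    system_name: str
    developer: str
    checklists: Dict[str, CriterionChecklist] = field(default_factory=dict)

    def mark_awarded(self, criterion: str) -> bool:
        checklist = self.checklists.get(criterion)
        return checklist is not None and checklist.satisfied()

attestation = QualityMarkAttestation(
    system_name="ExampleRecommender-1.0",
    developer="Example Corp",
    checklists={
        "transparency": CriterionChecklist("transparency", ["T1", "T2", "T3"], 3),
        "bias": CriterionChecklist("bias", ["B1"], 2),
    },
)
print(attestation.mark_awarded("transparency"))  # True: all three recommendations attested
print(attestation.mark_awarded("bias"))          # False: one recommendation unmet
```

Keeping the record per criterion, rather than as a single pass/fail flag, is what would let an administering body issue distinct marks and let an auditor spot-check individual attestations.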
The program is voluntary, with expected consumer demand as the driver for adoption. To be effective, EPCAIS would likely need the following. First, it would need to incentivize substantive adoption by generating consumer awareness and demand or by establishing an auditing requirement. Second, to improve means-ends coupling, it should offer distinct quality marks for transparency, accountability, and bias; it should also promote a systems mindset and consider the context in which the system is developed or deployed. Third, it should develop in-house expertise on certification; as far as the authors know, the IEEE has not launched a system-level certification program before. Finally, it should foster an enthusiastic audience in order to successfully launch. However, given that ECPAIS remains under development, we cannot offer a conclusive assessment. \n C. Malta AI Innovative Technology Arrangement The Maltese government launched the AI ITA certification program [13] in summer 2020. The program certifies AI systems to a particular use case following a review of an application detailing development and maintenance processes, corporate governance structures, and a responsible administrator. Applications must describe how the system handles failure modes and conforms to Malta's ethical AI framework, among other requirements [13] . The program also (in most cases) requires applicant systems to have two software features: a \"harness\" that safeguards how the system handles specified failure modes and a \"forensic node\" that logs activity and is stored physically within Maltese jurisdiction. The applicant system must also disclose specific elements of its compliance with the ethical AI framework to end users via terms of service. Applications are reviewed by the Malta Digital Innovation Authority (MDIA) and are subsequently confirmed by an accredited auditor. The program is voluntary, with adoption expected to be driven by demand for certified systems from government procurers, businesses, and consumers. The Malta program has high feasibility because it benefits from a dedicated regulator which was established to manage this and similar programs. This dedicated expertise in assessing certification as well as accrediting third-party auditors can reduce concerns of symbol-substance decoupling. The program's focus on awarding certification for a specific system in a specific use case based on an assessment that includes the organization and a responsible individual incorporates a wide context that can help reduce concerns of means-ends decoupling. However, no application has yet been submitted. The effectiveness of the software harness to mitigate failure modes and the usefulness of mandated ethics disclosures in terms of service remain to be seen. \n D. Australian Chief Scientist Proposal: Turing Certification The Turing Certification program was proposed by Australia's Chief Scientist Alan Finkel in 2018 [12] , [71] . It would certify AI developer organizations and low-risk AI systems, drawing inspiration from FairTrade certification among other programs [72] . This dual certification aspect could be especially impactful: it promotes a wide scope of assessment, thus improving means-ends coupling, and its organizational audit further improves symbol-substance coupling. As proposed, Turing Certification would be a voluntary program with third-party certifiers, with participation incentivized by demand from purchasers including consumers and governments. 
The third-party audit would confirm that the applying company, its development processes, and end AI system conform to not-yet-set standards for trustworthy AI. The proposal has not been enacted; it is unclear if it ever will be. Its most recent public discussion was in 2019 [73, p. 161 ]. If it is to be enacted, it would need clear certification criteria, a substantial challenge given the lack of agreed-upon trustworthy AI standards today. \n E. Queen's University Executive Education: Principles of AI Implementation In partnership with IEEE, Queen's University has launched an executive education certificate program on Principles of AI Implementation [14] . Demand is likely to be driven by employers that value certified employees. The program has not yet run, and it is unclear how strong employer demand will be. The program covers ethical design principles, considerations for building trust in AI applications, and current Canadian regulatory requirements. This broad curriculum could support a systemic mindset, thereby improving means-ends coupling. However, these are dynamic topics; the program will need to stay up to date to retain means-ends coupling. Additionally, it is only a two-day program; this short duration precludes much substantive learning, creating the risk of symbol-substance decoupling. \n F. Finland Civics Course: Elements of AI The government of Finland, in a partnership between the University of Helsinki and the consultancy Reaktor, offers certificates to people who complete its civic education course Elements of AI. Certificates are awarded for all who complete at least 90% of the exercise and score 50% or above on attempted exercise questions [74] . Credits can be transferred to the University of Helsinki. Demand may be driven by individuals' curiosity and sense of civic responsibility. It is unclear if employers would be interested in hiring certified individuals, and its introductory content would be insufficient for professional education, though that is not the program's aim. Thus far, demand has been significant: over 500 000 people have signed up and students from 170 countries have completed the course [15] . \n G. Danish Labeling Program for IT Security and Responsible Use of Data The Confederation of Danish Industry, Danish Chamber of Commerce, SMEdenmark, and Danish Consumer Council [16] are currently developing a certification program for information technology security and data ethics. The program does not primarily focus on AI, but it was listed by the European Commission [10, p. 10] as an AI activity, and an interview and review of internal documents confirm that AI is within its scope. The program scope is ambitious, covering elements ranging from IT security to data management, to processes to address algorithmic bias. This broad scope could support a systemic mindset in the organization, and thus improve means-end coupling. The same scope may undermine symbol-substance coupling, however, as it could be difficult to achieve compliance with and effectively communicate to external stakeholders about so many certification criteria. The eventual administration of the program could encounter a challenge in moral hazard because a single organization-funded by certification fees-sets certification criteria, reviews applications, and performs (limited) audits. \n IV. CERTIFICATION FOR FUTURE AI TECHNOLOGY An essential attribute of AI technology is that it continues to evolve. 
Certification programs designed for the AI of today may not fare well with the AI of tomorrow. Adjustments will be needed to keep the programs up to date. However, adjustments can be expensive and are therefore not always made. Where possible, current certification programs should be designed to accommodate potential future directions in AI. This is no easy task: the future of AI cannot readily be predicted. Many have tried and failed [75] . Nonetheless, some broad contours of future AI can be at least tentatively described, enough to derive some implications for certification programs. Toward the end of this section, we offer some remarks specific to the most advanced long-term AI, though much of the discussion may be more applicable to AI that will be developed in the interim, i.e., medium-term AI [76] . Some elements of AI certification are relatively durable in the face of changes in the technology. First, many of the underlying goals for certification programs, such as fairness and beneficence, derive from universal ethics principles that apply not just to any AI technology, but to society in general. These ethics principles are used as a \"North Star\" guide in today's certification programs [36, p. 35] and will likely remain relevant as the technology matures. Future programs might potentially even certify the ethics of either the AI system or its developers. Second, some aspects of certification are matched to the types of people and institutions that are involved in AI, such as business executives, citizens, and corporations. It is fair to expect that such actors will remain involved in AI over the years. Certification program capacity to engage these actors may likewise remain relevant. That includes matters such as the capacity to improve corporate transparency and to educate large numbers of citizens. Similarly, one way for certification programs to remain relevant over time is to emphasize human and institutional factors. As discussed in Section II, certification programs can improve symbol-substance and means-ends coupling by promoting certain values and norms among applicants. Among other things, doing so positions applicants to act ethically as new challenges arise. Today, we do not know exactly what new challenges will arise for future AI, but we can be very confident that there will be new challenges. Certification programs should endeavor to make it more likely that whoever faces these new challenges will act according to a high ethical standard. Certification programs can also remain relevant by buildingin mechanisms for updating their certification criteria. For example, in the establishment of a new AI certification program, it could be specified that an expert body meets periodically to review and update the criteria, as is common with technical standards bodies. Programs could also include sunset provisions such that a program will cease to exist unless its criteria are updated. Additionally, certification programs could include a budget for ongoing research so that knowledge about how to update the criteria is available when needed. The field of AI research could also be a focus of certification. Future AI techniques will derive from current or future research. Some techniques may be better able to meet ethics standards; these techniques could be an object of certification. Academia already has institutional review boards to certify human subjects research as meeting standards for responsible conduct of research. 
Similar boards have been proposed for overseeing future AI research [77] . Importantly, such boards should focus not just on whether the research meets procedural standards such as harm to research subjects, but also on whether it meets standards in terms of expected impacts on broader society [78] . However, it can be difficult to know in advance which techniques will perform well-hence the need to do the research. Therefore, research is another domain in which certification may do well to emphasize human and institutional factors. Some forms of future AI may not be conducive to certification. In particular, AI systems that reach or exceed human-level intelligence in many domains [79] , [80] might not be a viable object of certification because humans could be powerless to take corrective actions suggested by the certification. To the extent that such AI is a significant concern, certification may only play a role in its development, while humans are still in command. This is one area in which certification of AI research programs may be especially significant, such as to promote trust among rival AI development groups [81] or share information about safety and ethics measures to alleviate rivals' concerns [82] . Additionally, if the AI is developed by the national government-a distinct possibility given its national security implications-then there may be a role for an international certification regime, perhaps analogous to monitoring and verification regimes that have been established in other domains such as for biological, chemical, and nuclear materials. Should humanity ever face the prospect of developing such a technology, it would probably want to know that it was being developed according to a high ethical standard. That could be an important role for future AI certification programs. \n V. CONCLUSION Certification is a valuable tool for addressing information asymmetries and incentivizing better behavior. Certification can provide information about AI systems themselves as well as the organizations and individuals that are involved with them. AI-related programs could include both voluntary certification to ethics principles and mandatory conformity assessment to regulatory requirements. Today, a variety of AI certification programs are in development and use. These programs are focused on current AI technology, but certification can also play a role in future AI systems, up to including the most advanced long-term AI. One gap in the current AI certification landscape is the processes through which AI is developed. Although current certification programs assess the AI development process to varying degrees, no program certifies the process as its object. As a consequence, certification programs may fail to address process-oriented Potemkin attributes with maximal symbol-substance coupling and may fail to promote constructive organizational processes that can improve means-ends coupling. Improved attention to processes as the focus of certification could draw on, among other things, recent work on algorithmic auditing [36] and existing programs, such as ISO 9001 on quality management [25] and ISO 27001 on information security management [83] . Early work at ISO on an AI management system standard may one day yield such a process-focused AI certification program [84] . Another challenge is to improve the customization of AI certification for particular circumstances. 
Whereas some, such as the Malta AI ITA, are customized for specific jurisdictions, others, such as IEEE ECPAIS, make homogeneous prescriptions for AI developed worldwide. Additionally, most of the certification programs lack customization for each of the diverse sectors to which AI is applied, e.g., healthcare, transportation, and national security. Similarly, the programs tend to treat \"AI\" as a monolithic technology instead of being customized for the diverse types of AI that exist, e.g., language translation, facial recognition, and anomaly detection. Homogenous certification may similarly encounter challenges when an AI system is but a small, inextricable component of a larger digital system or workflow. All of this is a problem because certification programs that focus on specific niches tend to have better means-ends coupling [21] . Many of the AI certification programs discussed here rely on, or at least strongly benefit from, demand from the customers of AI systems, including consumers, businesses, and governments. This is especially important for voluntary programs, where demand usually is the primary driver of participation. For good results to accrue, customers must not only demand certification-they must demand good certification. To that end, customers must be sufficiently educated about the technology and the associated ethics and safety concerns [68, p. 23] . Education programs such as Finland's Elements of AI civics course could play an important role. Overall, AI certification remains at an early stage. Much remains to be seen, including which certification programs will be implemented, how effective they will be, how difficult or expensive they will be to implement, and how durable they will be in the face of changes in AI technology. All these aspects will influence the likelihood for adoption [85] and their success at symbol-substance and means-ends coupling. This article has presented a snapshot in time, with an assessment of an indicative sample of AI certification programs. It has also identified key concepts within the management literature on certification. Both contributions should inform future monitoring and evaluation of AI certification as the field matures. Future research would do well to track which certification programs are performing well, to learn from their successes and failures, and to identify gaps in the field and areas for improvement. It will likewise be important to monitor changes in AI technology and to ensure that certification programs stay up to date. AI certification is not a panacea, but it can play a valuable role in the overall mix of AI governance tools. TABLE II ASSESSMENT II OF CATEGORIES OF AI CERTIFICATION PROGRAMS details of the seven programs (see below) in the context of the management literature as reviewed above. \n\t\t\t Authorized licensed use limited to: Carnegie Mellon Libraries. Downloaded on March 24,2022 at 01:40:32 UTC from IEEE Xplore. Restrictions apply. \n\t\t\t The authors thank Jonathan Aikman for this point.Authorized licensed use limited to: Carnegie Mellon Libraries. Downloaded on March 24,2022 at 01:40:32 UTC from IEEE Xplore. Restrictions apply.", "date_published": "n/a", "url": "n/a", "filename": "AI_Certification_Advancing_Ethical_Practice_by_Reducing_Information_Asymmetries.tei.xml", "abstract": "As artificial intelligence (AI) systems are increasingly deployed, principles for ethical AI are also proliferating. 
Certification offers a method to both incentivize the adoption of these principles and substantiate that they have been implemented in practice. This article draws from management literature on certification and reviews current AI certification programs and proposals. Successful programs rely on both emerging technical methods and specific design considerations. In order to avoid two common failures of certification, program designs should ensure that the symbol of the certification is substantially implemented in practice and that the program achieves its stated goals. The review indicates that the field currently focuses on self-certification and third-party certification of systems, individuals, and organizations-to the exclusion of process management certifications. Additionally, this article considers prospects for future AI certification programs. Ongoing changes in AI technology suggest that AI certification regimes should be designed to emphasize governance criteria of enduring value, such as ethics training for AI developers, and to adjust technical criteria as the technology changes. Overall, certification can play a valuable mix in the portfolio of AI governance tools.", "id": "d95da0fab794f979c353075d283889ee"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": [], "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "2699411.tei.xml", "abstract": "Open-universe probability models show merit in unifying efforts.", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Andrew Critch"], "title": "A PARAMETRIC, RESOURCE-BOUNDED GENERALIZATION OF L ÖB'S THEOREM, AND A ROBUST COOPERATION CRITERION FOR OPEN-SOURCE GAME THEORY", "text": "Pudlák shows that, for a consistent system, there exists some ε > 0 such that for all N , any proof of a statement of the form \"there does not exist a proof of ⊥ using N or fewer characters\" must use at least N ε characters. This means there is some obstruction to self-trust for a resource-bounded proof system, which suggests that a resource-bounded version of L öb's Theorem-a generic statement of selftrust about a family of sentences parametrized by a resource bound-might also hold. §2. Fundamentals. Since this article draws from work in several disciplines, this section is provided to clarify the use of notation and conventions throughout. \n Proof length conventions and notation. Proof length will be measured in characters instead of lines, the way one might measure the size of a text file on a computer. An extensive analysis of proof lengths measured in characters is covered by [8] . Throughout this article, S refers to a fixed proof system (e.g., an extension of Peano Arithmetic). Writing S n φ, or simply n φ means that there exists an S-proof of φ using n or fewer characters. \n Proof system. Let S be any first-order proof system that 1) can represent computable functions (e.g., by being an extension of PA; see Section 2.4), 2) can write any number k ∈ N using O(lg(k)) symbols (e.g., using binary), and 3) allows the definition and use of abbreviations during proofs (see the Appendix for details). Compact numeral representations and abbreviations are allowed in the proof system for two reasons. The first is that real-world automated proof systems will tend to use these because of memory constraints. The second is that abbreviations make the lengths of shortest proofs slightly easier to analyze. 
For example, if a number N with a very large number of digits occurs in the shortest proof of a proposition, it will not occur multiple times; instead, it will occur only once, in the definition of an abbreviation for it. Then, one does not need to carefully count the number of times the numeral occurs in the proof to determine the contribution of its size to the proof length; the contribution will simply be linear in its length, or lg(N ). Write L S for the language of S, L S (r) for the set of formulas in L S with r free variables, and Const(S) for the set of closed-form constant expressions in S (e.g., 0, S0, S0 + S0, etc.). When φ ∈ L S (r), given any closed-form expressions c 1 , . . . , c r (such as constants, or free variables), write φ[ c] = φ[c 1 , . . . , c r ] for the result of substituting the c i for the free variables in φ. https://www.cambridge.org/core/terms. https://doi.org/10.1017/jsl.2017.42 \n Choosing a Gödel encoding. Along with the proof system S, a single encoding #(−) : L S → N is chosen and fixed throughout, as well as a \"numeral\" mapping • (−) : N → Const(S) ⊆ L S for expressing naturals as constants in S. Note that in traditional PA, for example, • 5 = SSSSS0. However, to be more realistic it is assumed that S uses a binary encoding to be more efficient, so e.g., • 5 = 101. The maps #(−) and • (−) combine to form a G ödel encoding (−) : L S → Const(S) φ := • #φ which allows S to write proofs about itself. \n Representing computable functions. It is assumed that for any computable function f : N → N, there exists a \"graph\" formula Γ f ∈ L S (2) such that for all x ∈ N, S (∀y ) Γ f [ • x, y] ↔ y = • f(x) . For example, this condition holds if S is an extension of PA; see, e.g., Theorem 6.8 of Cori and Lascar ( [3] , Part II). Sometimes we abuse notation and write symbols for computable functions in-line with logical expressions to save space. For example, given functions f, g and h, to say that S proves that for any x value, f(x) < g(x) + h(x), technically one should write (∀x)(∀y1)(∀y1)(∀y3) Γ f [x, y 1 ] and Γ g [x, y 2 ] and Γ h [x, y 3 ] → y 1 < y 2 + y 3 but instead, we abuse notation and write (∀x) f(x) < g(x) + h(x)). \n Asymptotic notation. The notation f ≺ g will mean that for any M ∈ N, there exists an N ∈ N such that for all n > N, Mf(n) < g(n). The expression O(g) stands for the set of functions f g. If f : N → N is any specific function, f(O(g)) will stand for the set of functions of the form e • f where f ∈ O(g). §3. A bounded provability predicate, k . Here a predicate k is defined for asserting provability using a proof with length bounded by k. (2) exists that means, in natural language, that the number m encodes a proof in PA, and that the number n encodes the statement it proves. So, the standard provability operator : L PA → L PA can be defined as (φ) := (∃m)(Bew[m, φ ]). \n Defining k . Given a choice of G ödel encoding for Peano Arithmetic, it is classical that a formula Bew[m, n] ∈ L S It is taken for granted that Bew[m, n] exists for S and can be extended to a \"bounded Bew\" formula BBew[m, n, k] ∈ L S (3) that means • m encodes a proof in S, • n encodes the statement it proves, and • the proof encoded by m, before being encoded as m, uses at most k characters when written in the language of S. (Note that in general, m itself will be much larger than k as a member of N.) 
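Informally, BBew arithmetizes a check that resource-bounded software could actually run: enumerate candidate proof strings up to the character bound and verify each one. The toy sketch below (our own illustration; the alphabet and the proof checker are placeholders, since neither is fixed by the paper) makes that computational reading explicit; the formal counterpart, a bounded box operator built from BBew, is defined next.

```python
from itertools import product
from typing import Callable

# Toy illustration only. ALPHABET stands in for the symbol set of S, and the
# proof checker is supplied by the caller; neither is specified in the paper.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789()->,= "

def bounded_provable(statement: str, k: int,
                     is_valid_proof_of: Callable[[str, str], bool]) -> bool:
    """Return True iff some string of at most k characters is accepted as an
    S-proof of `statement`: the computational reading of the bounded predicate."""
    # Brute-force enumeration is exponential in k; this pins down the meaning
    # of the predicate and is not a practical prover.
    for length in range(1, k + 1):
        for chars in product(ALPHABET, repeat=length):
            if is_valid_proof_of("".join(chars), statement):
                return True
    return False

# Dummy checker for demonstration: it "accepts" a candidate iff it equals the statement.
print(bounded_provable("a=a", 3, lambda proof, stmt: proof == stmt))  # True
print(bounded_provable("a=a", 2, lambda proof, stmt: proof == stmt))  # False: bound too small
```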
Then one can define a \"bounded\" box operator, which, given any sentence φ and closed expression k (such as a free variable, or a constant), returns k (φ) := (∃m)(BBew[m, φ , k]). Also taken for granted is a computable \"single variable evaluation\" function, Eval 1 : N → N, such that for any φ ∈ L S (1) , Eval 1 ( φ , k) := φ( • k) . Since Eval 1 is computable, it can be represented in L S as in Section 2.4. This allows us to extend the k operator to act on sentences with one unbound variable. Specifically, if φ is a sentence with an unbound variable , then k (φ) := (∃m) BBew[m, Eval 1 ( φ , ), k]). Then k (φ) itself has as an unbound variable, and in natural language stands for \"There is a proof using k or fewer characters of the formula φ\". \n Basic properties of k . Each of the following properties will be needed multiple times during the proof of the main results. Since the proof is already highly symbolic, these properties are given English names to recall them. \n Property 1 (Implication distribution) . There is a constant c ∈ Const(S) such that for any p, q ∈ L S , (∀a)(∀b)( a (p → q) → ( b (p) → a+b+c (q))). Proof sketch. The fact that one can combine a proof of an implication with the proof of its antecedent to obtain a proof of its consequent can be proven in general, with quantified variables in place of the G ödel numbers of the particular statements involved. Let us suppose this general proof has length c 0 . Then, one needs only to instantiate the statements in it to p and q. However, if p and q are long expressions, it may be that they were abbreviated in the earlier proofs without lengthening them, so they can be written in abbreviated form again during this step. Hence, the total cost of combining the two proofs is around c = 2c 0 , which is constant with respect to p and q. Property 2 (Quantifier distribution). There is a constant C ∈ Const(S) such that for any φ ∈ L S (1) , N ((∀k)(φ[k])) ⇒ (∀k) C +2N +lg(k) (φ[k]) , which in turn ⇒ (∀k) O(lg(k)) (φ[k]) . In the final subscript, O(lg(k)) stands for a closed expression representing a function of k that is asymptotically less than some positive constant times O(lg(k)). Proof. An encoded proof of φ[ • K] for a specific K can be obtained by specializing the conclusion of an N -character encoded proof of (∀k)(φ[k]) and appending the specialization with • K in place of k at the end. To avoid repeating • K numerous times in the final line (in case it is large), an abbreviation will be used for φ. Thus the appended lines can read as follows: (1) let Φ stand for φ , (2) Φ[ • K]. Let us analyze how many characters are needed to write such lines. First, a string Φ is needed to use as an abbreviation for φ. Since no string of length N 2 has yet been used as an abbreviation in the earlier proof (otherwise one can shorten the proof by not defining and using the abbreviation), one can achieve Length(Φ) < N 2 . As well, some constant c number of characters are needed to write out the system's equivalent of \"let\", \"stand for\", \" \", and \" \". Finally, lg(K) characters are needed to write • K. Altogether, the proof was extended by C + N + lg(k) characters, for a total length of 2N + c + lg(k). §4. A bounded generalization of Löb's Theorem. This section exhibits the first main theorem: a generalization of L öb's Theorem applicable to the analysis of resource-bounded proof systems. Property 4 (Bounded Inner Necessitation). For any φ ∈ L S , k (φ) → e(k) ( k (φ)). 
Then the following theorem holds: (1) be a formula with a single unquantified variable k, and suppose that f : Theorem 4.2 (Resource-bounded generalization of L öb's Theorem). Let p[k] ∈ L S N → N is computable and satisfies f(k) e(O(lg(k))). Then there is a threshold k ∈ N, depending on p[k], such that (∀k) f(k) (p[k]) → p[k] ⇒ ∀k > k (p[k]). Note: In fact a weaker statement, (∀k > k 1) f(k) (p[k]) → p[k] , is sufficient to derive the consequent, since we could just redefine f(k) to be 0 for k ≤ k 1 and then f(k) (p[k]) → p[k] is vacuously true and provable for k ≤ k 1 as well. The proof of Theorem 4.2 makes use of a version of G ödel's diagonal lemma that allows free variables to appear in the formula being diagonalized: Proposition 4.3. Suppose S is a first-order theory capable of representing all computable functions, as in Section 2.4. Then for any formula G ∈ L S (r + 1), there exists a formula ∈ L S (r) such that ∀ k [ k] ↔ G[ , k] , where k = (k 1 , . . . , k r ) are free variables. Proof. This result can be found on p. 53 of Boolos [2] . Proof of Theorem 4.2. (In this proof, each centered equation will follow directly from the one above it unless otherwise noted.) We begin by choosing some function g(k) such that lg(k) ≺ g(k) and e(g(k)) ≺ f(k). For example, we could take g(k) = (lg(k))(e −1 (f(k))) . Define a formula G ∈ L S (2) by G[n, k] := (∃m) Bew[m, Eval 1 (n, k), g(k)]) → p[k] so that for any φ ∈ L S (1) , G[ φ , k] = g(k) (φ[k]) → p[k]. Now, by Proposition 4.3, there is some ∈ L S (1) such that in some number of characters n, n (∀k)( [k] ↔ G[ , k]). ( 4.3) By Bounded Necessitation, n ((∀k)( [k] ↔ G[ , k])). By Quantifier Distribution, since n is constant with respect to k, (∀k) O(lg(k)) ( [k] ↔ G[ , k]) , in which we can specialize to the forward implication, (∀k) O(lg(k)) ( [k] → G[ , k]) . By Implication Distribution of O(lg(k)) , (∀k)(∀a) a ( [k]) → a+O(lg(k)) (G[ , k]) . By Implication Distribution again, this time of a+O(lg(k)) over the implication G[ , k] = g(k) (φ[k]) → p[k], we obtain (∀k)(∀a)(∀b) a ( [k]) → b g(k) ( [k]) → a+b+O(lg(k)) (p[k]) . Now we specialize this equation to a = g(k) and b = h(k), where h : N → N is a computable function satisfying e(g(k)) ≺ h(k) ≺ f(k), for example, h(k) = f(k)e(g(k)) : (∀k) g(k) [k] → h(k) g(k) ( [k]) → g(k)+h(k)+O(lg(k)) (p[k]) . https://www.cambridge.org/core/terms. https://doi.org/10.1017/jsl.2017.42 Then since g(k) + h(k) + O(lg(k)) < f(k) after some bound k > k 1 , we have (∀k > k 1) g(k) ( [k]) → h(k) g(k) ( [k]) → f(k) (p[k]) . Now, by hypothesis, (∀k) f(k) (p[k]) → p[k] , thus (∀k > k 1) g(k) ( [k]) → h(k) g(k) ( [k]) → p[k] . (4.4) Also, without any of the above, from Bounded Inner Necessitation we can write (∀k)(∀a) a ( [k]) → e(a) ( a ( [k])) . From this, with a = g(k), we have (∀k) g(k) ( [k]) → e(g(k)) g(k) ( [k]) . Now, since e(g(k)) < h(k) after some bound k > k 2 , we have (∀k > k 2) g(k) ( [k]) → h(k) g(k) ( [k]) . (4.5) Next, from Equations 4.4 and 4.5, assuming we chose k 2 ≥ k 1 for convenience, we have (∀k > k 2) g(k) ( [k]) → p[k] . (4.6) But from Equation 4 .3, the implication here is equivalent to [k], so we have N (∀k > k 2)( [k]), where N is the number of characters needed for the proof above. From this, by Bounded Necessitation, we have N ((∀k > k 2)( [k])). By Quantifier Distribution of N , (∀k > k 2) C +2N +lg(k) ( [k]) and since C + 2N + lg(k) < g(k) after some bound k > k, taking k ≥ k 2 for convenience, we have ∀k > k g(k) ( [k]) . 
(4.7) Finally, from Equations 4.6 and 4.7 we have, as needed, ∀k > k (p[k]). 4.1. Interpretation. L öb's Theorem may be viewed as an obstacle to a formal system of logic \"trusting itself \" to soundly prove any statement p. Previously, one might have thought this obstacle was merely a quirk of infinities arising from the unbounded proof-existence predicate . However, we see now that some bounded obstacle remains: namely, that a bounded logical system cannot trust itself \"about moderately long proofs in general.\" To see this interpretation, let p[k] be any statement with a free parameter k, and f(k) e(O(lg(k))) be any function, representing \"moderate largeness.\" Then the hypothesis (∀k) f(k) (p[k]) → p[k] of Theorem 4.2 says that our logical system generally trusts its proofs about p[k], even if they are moderately long. However, this will imply that ∀k > k, p[k], which is bad news if p[k] is sometimes false. \n Making f(k) small. The statement of Theorem 4.2 becomes stronger as one makes the function f(k) smaller, but it must remain e(O(lg(k))) for the theorem to apply. The obstruction to making f(k) small is hence the size of the proof expansion function e, which in real-world software for writing proofs about proofs will be under some design pressure to be made small, to manage computational resources. How small can e be made in practice? G ödel numberings for sequences of integers can be achieved in O(n) space (Tsai, Chang, and Chen, [11] ) (where n is the length of a standard binary encoding of the sequence), as can G ödel numberings of term algebras (Tarau, [9] ). To check that one line is an application of Modus Ponens from previous lines, if the proof encoding indexes the implication to which MP is applied, is a test for string equality that is linear in the length the of lines. Finally, to check that an abbreviation has been applied or expanded, if the proof encoding indexes where the abbreviation occurs, is also a linear time test for string equality. Thus, one can straightforwardly achieve e(k) ∈ O(k) for real-world theoremprovers. In that case, the condition f(k) e(O(lg(k))) amounts only to saying that f(k) lg(k). §5. Robust cooperation of bounded proof-based agents in the Prisoner's Dilemma. Bárász, Christiano, Fallenstein et al. [1] , LaVictoire, Fallenstein, Yudkowsky et al. [5] , and others have exhibited various agent-like logical formulae who can be viewed as playing the Prisoner's Dilemma by basing their \"decisions\" on proofs about each others' definitions (as strings). In particular, they proffer proof of the opponent's cooperation as an unexploitable condition for cooperation. However, their \"agents\" are purely mathematical entities who decide whether to cooperate based on undecidable logical conditions. This leaves open the question of whether their results are achievable by real software with bounded computational resources. So, consider the following program, where G : N → N is a function to be specified later: def FairBot k(Opponent) : let B = k + G(LengthOf(Opponent)) search for a proof of length at most B that Opponent(FairBot k) = Cooperate if found, return Cooperate else return Defect In this program, the subroutine \"search for a proof of length at most B that (• • • )\" is defined as a process which searches, in lexicographic order, through all strings of length ≤B, checking each string for whether it is a proof of ( up that is a proof of (• • • ), the search halts and sets found = true. 
If no such proof exists, the search continues until the (finite) set of strings of length ≤B have been exhausted, then halts and sets found = false. In what follows, all that matters is the functional behavior of the proof search procedure: that it sets found = true if a proof of length ≤B exists, and found = false otherwise. The program FairBot k may be viewed as a kind of proof-based \"agent\" that plays the Prisoner's Dilemma in the following sense. Given any string, Opponent, representing the source code of another program, we can compute the pair 1 R(FairBot k , Opp) := FairBot k (Opp), Opp(FairBot k )) . Viewed as such, FairBot k has the desirable property that the outcome R(FairBot k , Opp) = (Cooperate, Defect) will never occur: if its opponent defects, then FairBot will find no proof of its opponent's cooperation, so FairBot will itself defect. This property is called being unexploitable. At this point, a natural question arises: what happens when FairBot encounters a copy of itself ? That is, what is FairBot k (FairBot k )? Each FairBot will be searching for a proof that the other will cooperate. As such, one might expect to see a bottomless regression that will exhaust the proof bound B. (\"The first FairBot must prove that the second FairBot must prove that the first FairBot must prove that. . . .\") Thus, it seems like they will find no proof of cooperation, and hence defect. However, this turns out not to be the case. Letting p[k] := FairBot k (FairBot k ) = Cooperate) , it is a direct consequence of Theorem 4.2 that FairBot k (FairBot k ) = Cooperate. In fact a much stronger claim is true, as the next theorem will show. It will demonstrate a mutually cooperative program equilibrium (in the sense of Tennenholtz [10] ) among a wide variety of (unequal) agents, provided only that they employ a certain principle of fairness, as follows. \n G-fairness. Given a nonnegative increasing function G, we say that an agent (i.e., program) A k taking a parameter k ∈ N is G-fair if for any opponent (i.e., program) Opp, we have k+G(LengthOf(Opp)) Opp(A k ) = C ) → A k (Opp) = C ) , where LengthOf(Opp) is the character length of the opponent's source code. In words, A k is G-fair if finding a short proof of its opponent's cooperation is a sufficient condition for A k to cooperate, when \"short\" is flexibly defined to be an increasing function of its opponent's complexity, i.e., k + G(LengthOf(Opp)). The agents FairBot k defined above are G-fair (where G is the function appearing in line 2 of their source code), and the reader is encouraged to keep them in mind as a motivating example for the following result: n+a(m) (α[m, n]) → [n, m]. For later convenience, we also choose a nondecreasing computable function f(k) e(O(lg(k))) such that 6f(2 ) ≤ G( ). For example, we could take f(k) = G( lg(k) )/6 . Now, LengthOf(A k ) > lg(k) and LengthOf(B k ) > lg(k) since they reference the parameter k in their code. Applying G to both sides yields a(k), b(k) > G(lg(k)) ≥ 6f(k). (5.6) Define an \"eventual cooperation\" formula: p[k] := (∀m > k)(∀n > k)(α[m, n] and [n, m]). Using Quantifier Distribution once on the definition of p[k], (∀k) f(k) (p[k]) → (∀m > k)( C ((∀n > k)(α[m, n] and [n, m]))) where Adding these inequalities yields 3C + 4f(k) + 2lg(m) + lg(n) < n + a(m), so for some k 1 , from (5.7) we derive C = C + 2f(k) + lg(m). Applying Quantifier Distribution again, (∀k) f(k) p[k] → (∀m > k)(∀n > k)( C (α[m, n] and [n, m])) . ( 5 (∀k > k 1) f(k) (p[k]) → (∀m > k)(∀n > k) n+a(m) (α[m, n]) . 
Similarly, we also have 3C + 2lg(m) < m and 4f(k) + lg(n) < 5f(n) < b(n), so for some k 2 ≥ k 1 , (∀k > k 2 ) f(k) (p[k]) → (∀m > k)(∀n > k) n+a(m) (α[m, n]) and m+b(n) ( [n, m]) . (5.8) Thus by (5.5), (∀k > k 2) f(k) (p[k]) → (∀m > k)(∀n > k) c(n, m) and c(m, n)) , i.e., (∀k > k 2) f(k) (p[k]) → p[k] . Therefore, by Theorem 4.2 (and the note following it), for some k we have ∀k > k (p[k]). In other words, for all m, n > k + 1, A m (B n ) = B n (A m ) = Cooperate. Theorem 5.1 interesting for four reasons: 1. It is surprising. L öb's Theorem has not been applied much in the setting of game theory, and in fact at the time of writing, 100% of the dozens of mathematicians and computer scientists that the author has asked to guess the output of FairBot k (FairBot k ) have either guessed incorrectly (expecting the proof searches to enter an infinite regress and thus reach their bounds), or given an invalid argument for cooperation (such as \"it would be better for the agents to cooperate, so they will\"). 2. It is advantageous. When k is large, FairBot k outperforms the classical Nash/correlated equilibrium solution (Defect, Defect) to the Prisoner's Dilemma when facing itself (or any other G-fair agent) in a one-shot game with no iteration and no future reputation. 3. It is unexploitable. That is, the outcome (Cooperate, Defect) will never occur with a FairBot as player 1. If an opponent will defect against FairBot, FairBot will find no proof of the opponent's cooperation, so FairBot will also defect. \n It is robust. Previous examples of cooperative program equilibria studied by Tennenholtz [10] and Fortnow [4] all involved cooperation based on equality of programs, a very fragile condition. Such fragility is not desirable if we wish to build real-world cooperative systems. By contrast, the G-fairness criterion relies only on the provability of the opponent's cooperation, rather than details of its implementation, and therefore establishes mutual cooperation between a broad class of agents. §6. Conclusion. Theorem 4.2 represents a resource-bounded generalization of L öb's Theorem, which can be applied to algorithms that read and write proofs using bounded computational resources, such as formal verification software. Theorem 5.1 makes use of Theorem 4.2, and some additional proof-theoretic analysis, to demonstrate how algorithmic agents who have access to one another's source codes can inexploitably achieve cooperative outcomes that out-perform classical Nash equilibria and correlated equilibria. Moreover, the condition for cooperation in Theorem 5.1, called \"G-fairness\", is more robust than previously known unexploitable cooperative conditions, which depended on literal source-code equality (Tennenholtz, [10] ). As a direction for potential future investigation, it seems likely that other agents described in the purely logical (noncomputable) setting of Bárász, Christiano, Fallenstein et al. [1] and LaVictoire, Fallenstein, Yudkowsky et al. [5] will likely have bounded, algorithmic analogs, and that many more general consequences of L öb's Theorem-perhaps all the theorems of G ödel-L öb provability logic-will have resource-bounded analogs as well. Definition 4 . 1 ( 41 Proof expansion function). Let e : N → N be any computable function bounding the expansion of S-proof lengths when they are G ödel encoded. That is, its definition is only that it must be large enough to satisfy the following two properties: Property 3 (Bounded Necessitation). For all φ ∈ L S , \n Theorem 5 . 1 ( 5 . 2 . 1 . 
51521 Robust cooperation criterion). Suppose that • e(k), the proof expansion function of our proof system as defined in Section 4, satisfies e(O(lg(k))) ≺ k, and • G( ) is any nondecreasing function satisfying G( ) e(O( )).Then, for any G-fair agents A k and B k , there exists some r 0 such that for all m, n > r, A m (B n ) = B n (A m ) = Cooperate. Feasibility of bounds in Theorem 5.Before the proof, recall from Section 4.2 that we can achieve e(k) ∈ O(k) for automatic proof systems that are designed for easy verifiability, in which case e(O(lg(k))) ≺ k, as needed.Proof of Theorem 5.1. The proof will make use of Theorem 4.2 at the very end, but first requires some additional proof-theoretic analysis. For brevity we leta(k) := G(LengthOf(A k )), (5.1) b(k) := G(LengthOf(B k )), (5.2) α[m, n] := A m (B n ) = Cooperate) , and (5.3) [n, m] := B n (A m ) = Cooperate) (5.4) so we can write the G-fairness conditions more compactly as m+b(n) ( [n, m]) → α[m, n] and (5.5) \n\t\t\t Downloaded from https://www.cambridge.org/core. Carnegie Mellon University, on 24 Mar 2022 at 01:25:56, subject to the Cambridge Core terms of use, available at \n\t\t\t https://www.cambridge.org/core/terms. https://doi.org/10.1017/jsl.2017.42 Downloaded from https://www.cambridge.org/core. Carnegie Mellon University, on 24 Mar 2022 at 01:25:56, subject to the Cambridge Core terms of use, available at \n\t\t\t We must assume that Opp halts and returns either Cooperate or Defect in this case; note that FairBot k always halts. https://www.cambridge.org/core/terms. https://doi.org/10.1017/jsl.2017.42 Downloaded from https://www.cambridge.org/core. Carnegie Mellon University, on 24 Mar 2022 at 01:25:56, subject to the Cambridge Core terms of use, available at", "date_published": "n/a", "url": "n/a", "filename": "div-class-title-a-parametric-resource-bounded-generalization-of-lob-s-theorem-and-a-robust-cooperation-criterion-for-open-source-game-theory-div.tei.xml", "abstract": "This article presents two theorems: (1) a generalization of L öb's Theorem that applies to formal proof systems operating with bounded computational resources, such as formal verification software or theorem provers, and (2) a theorem on the robust cooperation of agents that employ proofs about one another's source code as unexploitable criteria for cooperation. The latter illustrates a capacity for outperforming classical Nash equilibria and correlated equilibria, attaining mutually cooperative program equilibrium in the Prisoner's Dilemma while remaining unexploitable, i.e., sometimes achieving the outcome (Cooperate, Cooperate), and never receiving the outcome (Cooperate, Defect) as player 1. §1. Introduction. In the game theoretic analysis of computerized agents who may read one another's source code, and more generally in the analysis of program verification systems that use proofs to verify other program verification systems, a need has arisen for a version of L öb's Theorem that applies to proof systems with bounded computational resources. The first main theorem of this article is a generalization of L öb's Theorem that applies in such cases, developed and proven in Section 2 through Section 4. The second main theorem, presented in Section 5, makes use of the first theorem and some further proof-theoretic analysis to derive some implications for the game theory of agents who read one another's source code. Specifically, game theoretic work of Bárász, Christiano, Fallenstein et al. [1] and LaVictoire, Fallenstein, Yudkowsky et al. 
[5] found that L öb's Theorem can be used to design entities in modal logic that resemble \"agents\" who achieve robust cooperative equilibria in games such as the Prisoner's Dilemma. However, these so-called \"modal agents\" were defined as strings representing uncomputable functions; strings of the form \"if (• • • ) is provable, return 1, else return 0\". It remained open whether their results would arise naturally for computable agents, a question answered affirmatively in Section 5. Pudlák [7] on the lengths of proofs of finitistic consistency statements are suggestive of the first theorem of this article. Specifically, \n Related work. The work of", "id": "07043298d45979b230d0f71e43011bab"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Paul F Christiano", "Jan Leike", "Tom B Brown", "Google Brain", "Miljan Martic", "Shane Legg", "Dario Amodei"], "title": "Deep Reinforcement Learning from Human Preferences", "text": "Introduction Recent success in scaling reinforcement learning (RL) to large problems has been driven in domains that have a well-specified reward function (Mnih et al., 2015 (Mnih et al., , 2016 Silver et al., 2016) . Unfortunately, many tasks involve goals that are complex, poorly-defined, or hard to specify. Overcoming this limitation would greatly expand the possible impact of deep RL and could increase the reach of machine learning more broadly. For example, suppose that we wanted to use reinforcement learning to train a robot to clean a table or scramble an egg. It's not clear how to construct a suitable reward function, which will need to be a function of the robot's sensors. We could try to design a simple reward function that approximately captures the intended behavior, but this will often result in behavior that optimizes our reward function without actually satisfying our preferences. This difficulty underlies recent concerns about misalignment between our values and the objectives of our RL systems (Bostrom, 2014; Russell, 2016; Amodei et al., 2016) . If we could successfully communicate our actual objectives to our agents, it would be a significant step towards addressing these concerns. If we have demonstrations of the desired task, we can use inverse reinforcement learning (Ng and Russell, 2000) or imitation learning to copy the demonstrated behavior. But these approaches are not directly applicable to behaviors that are difficult for humans to demonstrate (such as controlling a robot with many degrees of freedom but non-human morphology). An alternative approach is to allow a human to provide feedback on our system's current behavior and to use this feedback to define the task. In principle this fits within the paradigm of reinforcement learning, but using human feedback directly as a reward function is prohibitively expensive for RL systems that require hundreds or thousands of hours of experience. In order to practically train deep RL systems with human feedback, we need to decrease the amount of feedback required by several orders of magnitude. We overcome this difficulty by asking humans to compare possible trajectories of the agent, using that data to learn a reward function, and optimizing the learned reward function with RL. This basic approach has been explored in the past, but we confront the challenges involved in scaling it up to modern deep RL and demonstrate by far the most complex behaviors yet learned from human feedback. 
Our experiments take place in two domains: Atari games in the Arcade Learning Environment (Bellemare et al., 2013) , and robotics tasks in the physics simulator MuJoCo (Todorov et al., 2012) . We show that a small amount of feedback from a non-expert human, ranging from fifteen minutes to five hours, suffice to learn both standard RL tasks and novel hard-to-specify behaviors such as performing a backflip or driving with the flow of traffic. \n Related Work A long line of work studies reinforcement learning from human ratings or rankings, including Akrour et al. (2011 ), Pilarski et al. (2011 ), Akrour et al. (2012 ), Wilson et al. (2012 ), Sugiyama et al. (2012 ), Wirth and Fürnkranz (2013 ), Daniel et al. (2015 ), El Asri et al. (2016 ), Wang et al. (2016 ), and Wirth et al. (2016 . Other lines of research consider the general problem of reinforcement learning from preferences rather than absolute reward values (Fürnkranz et al., 2012; Akrour et al., 2014; Wirth et al., 2016) , and optimizing using human preferences in settings other than reinforcement learning (Machwe and Parmee, 2006; Secretan et al., 2008; Brochu et al., 2010; Sørensen et al., 2016) . Our algorithm follows the same basic approach as Akrour et al. ( 2012 ) and Akrour et al. ( 2014 ), but considers much more complex domains and behaviors. The complexity of our environments force us to use different RL algorithms, reward models, and training strategies. One notable difference is that Akrour et al. (2012) and Akrour et al. (2014) elicit preferences over whole trajectories rather than short clips, and so would require about an order of magnitude more human time per data point. Our approach to feedback elicitation closely follows Wilson et al. (2012 ). However, Wilson et al. (2012 assumes that the reward function is the distance to some unknown (linear) \"target\" policy, and is never tested with real human feedback. TAMER (Knox, 2012; Knox and Stone, 2013 ) also learns a reward function from human feedback, but learns from ratings rather than comparisons, has the human observe the agent as it behaves, and has been applied to settings where the desired policy can be learned orders of magnitude more quickly. Compared to all prior work, our key contribution is to scale human feedback up to deep reinforcement learning and to learn much more complex behaviors. This fits into a recent trend of scaling reward learning methods to large deep learning systems, for example inverse RL (Finn et al., 2016) , imitation learning (Ho and Ermon, 2016; Stadie et al., 2017) , semi-supervised skill generalization (Finn et al., 2017) , and bootstrapping RL from demonstrations (Silver et al., 2016; Hester et al., 2017) . \n Preliminaries and Method \n Setting and Goal We consider an agent interacting with an environment over a sequence of steps; at each time t the agent receives an observation o t 2 O from the environment and then sends an action a t 2 A to the environment. In traditional reinforcement learning, the environment would also supply a reward r t 2 R and the agent's goal would be to maximize the discounted sum of rewards. Instead of assuming that the environment produces a reward signal, we assume that there is a human overseer who can express preferences between trajectory segments. A trajectory segment is a sequence of observations and actions, = ((o 0 , a 0 ), (o 1 , a 1 ), . . . , (o k 1 , a k 1 )) 2 (O ⇥ A) k . Write 1 2 to indicate that the human preferred trajectory segment 1 to trajectory segment 2 . 
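For concreteness, the objects just defined can be written down directly. The following is a small illustrative sketch of a trajectory segment and of how the preference relation σ1 ≻ σ2 might be recorded; all type and function names are ours, not the paper's code.

```python
from typing import Any, List, Tuple

# A trajectory segment is a sequence of (observation, action) pairs, and a human
# judgement states which of two segments was preferred (sigma_1 > sigma_2).

Observation = Any
Action = Any
Segment = List[Tuple[Observation, Action]]   # ((o_0, a_0), ..., (o_{k-1}, a_{k-1}))

def example_segment(k: int) -> Segment:
    """Build a dummy segment of length k with placeholder observations/actions."""
    return [(f"o_{t}", f"a_{t}") for t in range(k)]

# A recorded judgement "segment_1 preferred to segment_2" can be stored as an
# ordered pair of segments; ties and incomparable pairs are handled by the
# preference-elicitation scheme described later in the paper.
PreferencePair = Tuple[Segment, Segment]     # (preferred, not preferred)
```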
Informally, the goal of the agent is to produce trajectories which are preferred by the human, while making as few queries as possible to the human. More precisely, we will evaluate our algorithms' behavior in two ways: Quantitative: We say that preferences are generated by a reward function 2 r : O ⇥ A ! R if o 1 0 , a 1 0 , . . . , o 1 k 1 , a 1 k 1 o 2 0 , a 2 0 , . . . , o 2 k 1 , a 2 k 1 whenever r o 1 0 , a 1 0 + • • • + r o 1 k 1 , a 1 k 1 > r o 2 0 , a 2 0 + • • • + r o 2 k 1 , a 2 k 1 . If the human's preferences are generated by a reward function r, then our agent ought to receive a high total reward according to r. So if we know the reward function r, we can evaluate the agent quantitatively. Ideally the agent will achieve reward nearly as high as if it had been using RL to optimize r. Qualitative: Sometimes we have no reward function by which we can quantitatively evaluate behavior (this is the situation where our approach would be practically useful). In these cases, all we can do is qualitatively evaluate how well the agent satisfies the human's preferences. In this paper, we will start from a goal expressed in natural language, ask a human to evaluate the agent's behavior based on how well it fulfills that goal, and then present videos of agents attempting to fulfill that goal. Our model based on trajectory segment comparisons is very similar to the trajectory preference queries used in Wilson et al. (2012) , except that we don't assume that we can reset the system to an arbitrary state 3 and so our segments generally begin from different states. This complicates the interpretation of human comparisons, but we show that our algorithm overcomes this difficulty even when the human raters have no understanding of our algorithm. \n Our Method At each point in time our method maintains a policy ⇡ : O ! A and a reward function estimate r : O ⇥ A ! R, each parametrized by deep neural networks. These networks are updated by three processes: 1. The policy ⇡ interacts with the environment to produce a set of trajectories {⌧ 1 , . . . , ⌧ i }. The parameters of ⇡ are updated by a traditional reinforcement learning algorithm, in order to maximize the sum of the predicted rewards r t = r(o t , a t ). \n We select pairs of segments 1 , 2 from the trajectories {⌧ 1 , . . . , ⌧ i } produced in step 1, and send them to a human for comparison. 3. The parameters of the mapping r are optimized via supervised learning to fit the comparisons collected from the human so far. These processes run asynchronously, with trajectories flowing from process (1) to process (2), human comparisons flowing from process (2) to process (3), and parameters for r flowing from process (3) to process (1). The following subsections provide details on each of these processes. \n Optimizing the Policy After using r to compute rewards, we are left with a traditional reinforcement learning problem. We can solve this problem using any RL algorithm that is appropriate for the domain. One subtlety is that the reward function r may be non-stationary, which leads us to prefer methods which are robust to changes in the reward function. This led us to focus on policy gradient methods, which have been applied successfully for such problems (Ho and Ermon, 2016) . In this paper, we use advantage actor-critic (A2C; Mnih et al., 2016) to play Atari games, and trust region policy optimization (TRPO; Schulman et al., 2015) to perform simulated robotics tasks. 
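The three interacting processes described above can be summarised in a single synchronous sketch. This is our illustration under assumed interfaces (policy.rollout, policy.update, reward_model.predict, reward_model.fit, and human.compare are hypothetical), not the paper's asynchronous implementation; the RL update stands in for A2C or TRPO, and for brevity whole rollouts are compared rather than the short clips the paper uses.

```python
import random

def train_from_preferences(env, policy, reward_model, human,
                           num_iterations: int,
                           rollouts_per_iter: int = 10,
                           queries_per_iter: int = 2):
    """Synchronous sketch of the three processes: (1) RL against the predicted
    reward, (2) preference elicitation on pairs of segments, and (3) supervised
    fitting of the reward model to the comparisons gathered so far."""
    comparisons = []                                # the database D
    for _ in range(num_iterations):
        # (1) Policy optimisation against the learned reward estimate r_hat.
        trajectories = [policy.rollout(env) for _ in range(rollouts_per_iter)]
        policy.update(trajectories, reward_fn=reward_model.predict)

        # (2) Preference elicitation: show the human pairs of segments.
        for _ in range(queries_per_iter):
            seg_1, seg_2 = random.sample(trajectories, 2)
            mu = human.compare(seg_1, seg_2)        # e.g. (1, 0), (0.5, 0.5) or None
            if mu is not None:                      # incomparable pairs are dropped
                comparisons.append((seg_1, seg_2, mu))

        # (3) Reward-model fitting: supervised learning on all comparisons so far.
        reward_model.fit(comparisons)
    return policy, reward_model
```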
In each case, we used parameter settings which have been found to work well for traditional RL tasks. The only hyperparameter which we adjusted was the entropy bonus for TRPO. This is because TRPO relies on the trust region to ensure adequate exploration, which can lead to inadequate exploration if the reward function is changing. We normalized the rewards produced by r to have zero mean and constant standard deviation. This is a typical preprocessing step which is particularly appropriate here since the position of the rewards is underdetermined by our learning problem. \n Preference Elicitation The human overseer is given a visualization of two trajectory segments, in the form of short movie clips. In all of our experiments, these clips are between 1 and 2 seconds long. The human then indicates which segment they prefer, that the two segments are equally good, or that they are unable to compare the two segments. The human judgments are recorded in a database D of triples (σ1, σ2, µ), where σ1 and σ2 are the two segments and µ is a distribution over {1, 2} indicating which segment the user preferred. If the human selects one segment as preferable, then µ puts all of its mass on that choice. If the human marks the segments as equally preferable, then µ is uniform. Finally, if the human marks the segments as incomparable, then the comparison is not included in the database. \n Fitting the Reward Function We can interpret a reward function estimate r as a preference-predictor if we view r as a latent factor explaining the human's judgments and assume that the human's probability of preferring a segment σ_i depends exponentially on the value of the latent reward summed over the length of the clip: P[σ1 ≻ σ2] = exp(Σ_t r(o_t^1, a_t^1)) / ( exp(Σ_t r(o_t^1, a_t^1)) + exp(Σ_t r(o_t^2, a_t^2)) ). (1) We choose r to minimize the cross-entropy loss between these predictions and the actual human labels: loss(r) = − Σ_{(σ1, σ2, µ)∈D} ( µ(1) log P[σ1 ≻ σ2] + µ(2) log P[σ2 ≻ σ1] ). This follows the Bradley-Terry model (Bradley and Terry, 1952) for estimating score functions from pairwise preferences, and is the specialization of the Luce-Shepard choice rule (Luce, 2005; Shepard, 1957) to preferences over trajectory segments. Our actual algorithm incorporates a number of modifications to this basic approach, which early experiments discovered to be helpful and which are analyzed in Section 3.3: • We fit an ensemble of predictors, each trained on |D| triples sampled from D with replacement. The estimate r is defined by independently normalizing each of these predictors and then averaging the results. • A fraction of 1/e of the data is held out to be used as a validation set for each predictor. We use ℓ2 regularization and adjust the regularization coefficient to keep the validation loss between 1.1 and 1.5 times the training loss. In some domains we also apply dropout for regularization. • Rather than applying a softmax directly as described in Equation 1, we assume there is a 10% chance that the human responds uniformly at random. Conceptually this adjustment is needed because human raters have a constant probability of making an error, which doesn't decay to 0 as the difference in reward becomes extreme.
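The following is a minimal sketch of Equation 1 and the cross-entropy loss above, with the 10% uniform-response adjustment included as a lapse parameter. This is our code, not the paper's: reward_fn stands for the learned estimate r, and the ensemble, normalization, and regularization modifications are omitted.

```python
import math

def preference_probability(reward_fn, seg_1, seg_2, lapse: float = 0.1) -> float:
    """P[seg_1 preferred to seg_2]: Bradley-Terry on summed predicted rewards,
    mixed with a `lapse` probability that the rater answers uniformly at random."""
    s1 = sum(reward_fn(o, a) for o, a in seg_1)
    s2 = sum(reward_fn(o, a) for o, a in seg_2)
    p_model = 1.0 / (1.0 + math.exp(s2 - s1))   # = exp(s1) / (exp(s1) + exp(s2))
    return (1.0 - lapse) * p_model + lapse * 0.5

def comparison_loss(reward_fn, comparisons) -> float:
    """Cross-entropy between predicted preferences and the labels mu = (mu1, mu2)
    stored in the database D of triples (seg_1, seg_2, mu)."""
    total = 0.0
    for seg_1, seg_2, (mu1, mu2) in comparisons:
        p12 = preference_probability(reward_fn, seg_1, seg_2)
        total -= mu1 * math.log(p12) + mu2 * math.log(1.0 - p12)
    return total

if __name__ == "__main__":
    # Toy check: a reward favouring action 1 predicts that a segment full of
    # action 1 is preferred with probability close to 0.9 + 0.05 = 0.95.
    toy_reward = lambda o, a: float(a)
    good = [(None, 1)] * 10
    bad = [(None, 0)] * 10
    print(preference_probability(toy_reward, good, bad))
    print(comparison_loss(toy_reward, [(good, bad, (1.0, 0.0))]))
```

Note that the lapse term also keeps the predicted probabilities bounded away from 0 and 1, which stops the log terms in the loss from diverging for confidently separated pairs.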
\n Selecting Queries We decide how to query preferences based on an approximation to the uncertainty in the reward function estimator, similar to Daniel et al. (2014): we sample a large number of pairs of trajectory segments of length k from the latest agent-environment interactions, use each reward predictor in our ensemble to predict which segment will be preferred from each pair, and then select those trajectories for which the predictions have the highest variance across ensemble members. This is a crude approximation and the ablation experiments in Section 3 show that in some tasks it actually impairs performance. Ideally, we would want to query based on the expected value of information of the query (Akrour et al., 2012; Krueger et al., 2016), but we leave it to future work to explore this direction further. \n Experimental Results We \n Reinforcement Learning Tasks with Unobserved Rewards In our first set of experiments, we attempt to solve a range of benchmark tasks for deep RL without observing the true reward. Instead, the agent learns about the goal of the task only by asking a human which of two trajectory segments is better. Our goal is to solve the task in a reasonable amount of time using as few queries as possible. In our experiments, feedback is provided by contractors who are given a 1-2 sentence description of each task before being asked to compare several hundred to several thousand pairs of trajectory segments for that task (see Appendix B for the exact instructions given to contractors). Each trajectory segment is between 1 and 2 seconds long. Contractors responded to the average query in 3-5 seconds, and so the experiments involving real human feedback required between 30 minutes and 5 hours of human time. For comparison, we also run experiments using a synthetic oracle whose preferences are generated (in the sense of Section 2.1) by the real reward. We also compare to the baseline of RL training using the real reward. Our aim here is not to outperform but rather to do nearly as well as RL without access to reward information and instead relying on much scarcer feedback. Nevertheless, note that feedback from real humans does have the potential to outperform RL (and as shown below it actually does so on some tasks), because the human feedback might provide a better-shaped reward. We describe the details of our experiments in Appendix A, including model architectures, modifications to the environment, and the RL algorithms used to optimize the policy. \n Simulated Robotics The first tasks we consider are eight simulated robotics tasks, implemented in MuJoCo (Todorov et al., 2012), and included in OpenAI Gym (Brockman et al., 2016). We made small modifications to these tasks in order to avoid encoding information about the task in the environment itself (the modifications are described in detail in Appendix A). The reward functions in these tasks are quadratic functions of distances, positions and velocities, and most are linear. We included a simple cartpole Figure 1 : Results on MuJoCo simulated robotics as measured on the tasks' true reward. We compare our method using real human feedback (purple), our method using synthetic feedback provided by an oracle (shades of blue), and reinforcement learning using the true reward function (orange). All curves are the average of 5 runs, except for the real human feedback, which is a single run, and each point is the average reward over five consecutive batches. For Reacher and Cheetah feedback was provided by an author due to time constraints. For all other tasks, feedback was provided by contractors unfamiliar with the environments and with our algorithm. 
The irregular progress on Hopper is due to one contractor deviating from the typical labeling schedule. task (\"pendulum\") for comparison, since this is representative of the complexity of tasks studied in prior work. Figure 1 shows the results of training our agent with 700 queries to a human rater, compared to learning from 350, 700, or 1400 synthetic queries, as well as to RL learning from the real reward. With 700 labels we are able to nearly match reinforcement learning on all of these tasks. Training with learned reward functions tends to be less stable and higher variance, while having a comparable mean performance. Surprisingly, by 1400 labels our algorithm performs slightly better than if it had simply been given the true reward, perhaps because the learned reward function is slightly better shaped-the reward learning procedure assigns positive rewards to all behaviors that are typically followed by high reward. The difference may also be due to subtle changes in the relative scale of rewards or our use of entropy regularization. Real human feedback is typically only slightly less effective than the synthetic feedback; depending on the task human feedback ranged from being half as efficient as ground truth feedback to being equally efficient. On the Ant task the human feedback significantly outperformed the synthetic feedback, apparently because we asked humans to prefer trajectories where the robot was \"standing upright,\" which proved to be useful reward shaping. (There was a similar bonus in the RL reward function to encourage the robot to remain upright, but the simple hand-crafted bonus was not as useful.) \n Atari The second set of tasks we consider is a set of seven Atari games in the Arcade Learning Environment (Bellemare et al., 2013) , the same games presented in Mnih et al., 2013. Figure 2 shows the results of training our agent with 5,500 queries to a human rater, compared to learning from 350, 700, or 1400 synthetic queries, as well as to RL learning from the real reward. Our method has more difficulty matching RL in these challenging environments, but nevertheless it displays substantial learning on most of them and matches or even exceeds RL on some. Specifically, Figure 2 : Results on Atari games as measured on the tasks' true reward. We compare our method using real human feedback (purple), our method using synthetic feedback provided by an oracle (shades of blue), and reinforcement learning using the true reward function (orange). All curves are the average of 3 runs, except for the real human feedback which is a single run, and each point is the average reward over about 150,000 consecutive frames. on BeamRider and Pong, synthetic labels match or come close to RL even with only 3,300 such labels. On Seaquest and Qbert synthetic feedback eventually performs near the level of RL but learns more slowly. On SpaceInvaders and Breakout synthetic feedback never matches RL, but nevertheless the agent improves substantially, often passing the first level in SpaceInvaders and reaching a score of 20 on Breakout, or 50 with enough labels. On most of the games real human feedback performs similar to or slightly worse than synthetic feedback with the same number of labels, and often comparably to synthetic feedback that has 40% fewer labels. On Qbert, our method fails to learn to beat the first level with real human feedback; this may be because short clips in Qbert can be confusing and difficult to evaluate. 
Finally, Enduro is difficult for A3C to learn due to the difficulty of successfully passing other cars through random exploration, and is correspondingly difficult to learn with synthetic labels, but human labelers tend to reward any progress towards passing cars, essentially shaping the reward and thus outperforming A3C in this game (the results are comparable to those achieved with DQN). \n Novel behaviors Experiments with traditional RL tasks help us understand whether our method is effective, but the ultimate purpose of human interaction is to solve tasks for which no reward function is available. Using the same parameters as in the previous experiments, we show that our algorithm can learn novel complex behaviors. We demonstrate: 1. The Hopper robot performing a sequence of backflips (see Figure 4 ). This behavior was trained using 900 queries in less than an hour. The agent learns to consistently perform a backflip, land upright, and repeat. 2. The Half-Cheetah robot moving forward while standing on one leg. This behavior was trained using 800 queries in under an hour. 3. Keeping alongside other cars in Enduro. This was trained with roughly 1,300 queries and 4 million frames of interaction with the environment; the agent learns to stay almost exactly even with other moving cars for a substantial fraction of the episode, although it gets confused by changes in background. Figure 3 : Performance of our algorithm on MuJoCo tasks after removing various components, as described in Section Section 3.3. All graphs are averaged over 5 runs, using 700 synthetic labels each. Videos of these behaviors can be found at https://goo.gl/MhgvIU. These behaviors were trained using feedback from the authors. \n Ablation Studies In order to better understand the performance of our algorithm, we consider a range of modifications: 1. We pick queries uniformly at random rather than prioritizing queries for which there is disagreement (random queries). 2. We train only one predictor rather than an ensemble (no ensemble). In this setting, we also choose queries at random, since there is no longer an ensemble that we could use to estimate disagreement. 3. We train on queries only gathered at the beginning of training, rather than gathered throughout training (no online queries). 4. We remove the `2 regularization and use only dropout (no regularization). 5. On the robotics tasks only, we use trajectory segments of length 1 (no segments). 6. Rather than fitting r using comparisons, we consider an oracle which provides the true total reward over a trajectory segment, and fit r to these total rewards using mean squared error (target). The results are presented in Figure 3 for MuJoCo and Figure 4 for Atari. Training the reward predictor offline can lead to bizarre behavior that is undesirable as measured by the true reward (Amodei et al., 2016) . For instance, on Pong offline training sometimes leads our agent to avoid losing points but not to score points; this can result in extremely long volleys (videos at https://goo.gl/L5eAbk). This type of behavior demonstrates that in general human feedback needs to be intertwined with RL rather than provided statically. 
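One way to read the ablation list above is as a set of overrides of a base configuration. The sketch below is purely illustrative: the option names and values are ours, not the paper's code, and are meant only to make the six variants easy to compare at a glance.

```python
# Illustrative base configuration and ablation overrides (names and values ours).
BASE_CONFIG = {
    "query_selection": "ensemble_disagreement",
    "ensemble_size": 3,                      # illustrative; any ensemble > 1
    "online_queries": True,
    "l2_regularization": True,
    "segment_length": "1-2 second clips",
    "label_type": "comparisons",
}

ABLATIONS = {
    "random_queries":    {"query_selection": "uniform_random"},
    "no_ensemble":       {"ensemble_size": 1, "query_selection": "uniform_random"},
    "no_online_queries": {"online_queries": False},
    "no_regularization": {"l2_regularization": False},   # dropout only
    "no_segments":       {"segment_length": 1},          # robotics tasks only
    "target":            {"label_type": "true_return_regression"},
}

def make_config(ablation: str) -> dict:
    """Return the base configuration with one ablation's overrides applied."""
    return {**BASE_CONFIG, **ABLATIONS[ablation]}
```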
Our main motivation for eliciting comparisons rather than absolute scores was that we found it much easier for humans to provide consistent comparisons than consistent absolute scores, especially on the continuous control tasks and on the qualitative tasks in Section 3.2; nevertheless it seems important to understand how using comparisons affects performance. For continuous control tasks we found that predicting comparisons worked much better than predicting scores. This is likely because the scale of rewards varies substantially and this complicates the regression problem, which is smoothed significantly when we only need to predict comparisons. In the Atari tasks we clipped rewards Figure 4 : Performance of our algorithm on Atari tasks after removing various components, as described in Section 3.3. All curves are an average of 3 runs using 5,500 synthetic labels (see minor exceptions in Section A.2). and effectively only predicted the sign, avoiding these difficulties (this is not a suitable solution for the continuous control tasks because the magnitude of the reward is important to learning). In these tasks comparisons and targets had significantly different performance, but neither consistently outperformed the other. We also observed large performance differences when using single frames rather than clips. 7 In order to obtain the same results using single frames we would need to have collected significantly more comparisons. In general we discovered that asking humans to compare longer clips was significantly more helpful per clip, and significantly less helpful per frame. Shrinking the clip length below 1-2 seconds did not significantly decrease the human time required to label each clip in early experiments, and so seems less efficient per second of human time. In the Atari environments we also found that it was often easier to compare longer clips because they provide more context than single frames. \n Discussion and Conclusions Agent-environment interactions are often radically cheaper than human interaction. We show that by learning a separate reward model using supervised learning, it is possible to reduce the interaction complexity by roughly 3 orders of magnitude. Although there is a large literature on preference elicitation and reinforcement learning from unknown reward functions, we provide the first evidence that these techniques can be economically scaled up to state-of-the-art reinforcement learning systems. This represents a step towards practical applications of deep RL to complex real-world tasks. In the long run it would be desirable to make learning a task from human preferences no more difficult than learning it from a programmatic reward signal, ensuring that powerful RL systems can be applied in the service of complex human values rather than low-complexity goals. \n \n \n \n\t\t\t Here we assume here that the reward is a function of the observation and action. In our experiments in Atari environments, we instead assume the reward is a function of the preceding 4 observations. In a general partially observable environment, we could instead consider reward functions that depend on the whole sequence of observations, and model this reward function with a recurrent neural network.3 Wilson et al. (2012) also assumes the ability to sample reasonable initial states. But we work with high dimensional state spaces for which random states will not be reachable and the intended policy inhabits a low-dimensional manifold. 
\n\t\t\t Equation 1 does not use discounting, which could be interpreted as modeling the human to be indifferent about when things happen in the trajectory segment. Using explicit discounting or inferring the human's discount function would also be reasonable choices. \n\t\t\t Note that trajectory segments almost never start from the same state.6 In the case of Atari games with sparse rewards, it is relatively common for two clips to both have zero reward in which case the oracle outputs indifference. Because we considered clips rather than individual states, such ties never made up a large majority of our data. Moreover, ties still provide significant information to the reward predictor as long as they are not too common. \n\t\t\t We only ran these tests on continuous control tasks because our Atari reward model depends on a sequence of consecutive frames rather than a single frame, as described in Section A.2", "date_published": "n/a", "url": "n/a", "filename": "NIPS-2017-deep-reinforcement-learning-from-human-preferences-Paper.tei.xml", "abstract": "For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than 1% of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any which have been previously learned from human feedback. ⇤ Work done while at OpenAI.", "id": "c018c78ef97e093ecd6b29c06991db7a"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Seth D Baum"], "title": "Medium-Term Artificial Intelligence and Society", "text": "Introduction Attention to AI technologies and accompanying societal issues commonly clusters into groups focusing on either near-term or long-term AI, with some acrimonious debate between them over which is more important. Following Baum [1] , the near-term camp may be called \"presentists\" and the long-term camp \"futurists\". The current state of affairs suggests two reasons for considering the intermediate period between the near and long terms. First, the medium term (or, interchangeably, intermediate term or mid term) has gone neglected relative to its inherent importance. If there are important topics involving near-term and long-term AI, then perhaps the medium term has important topics as well. Second, the medium term may provide a common ground between presentists and futurists. Insofar as both sides consider the medium term to be important, it could offer a constructive topic to channel energy that may otherwise be spent on hashing out disagreements. Rare examples of previous studies with dedicated attention to medium-term AI are Parson et al. [2, 3] . (There is a lot of work that touches on medium-term AI topics, some of which is cited in this paper. However, aside from Parson et al. 
[2, 3] , I am not aware of any publications that explicitly identify medium-term AI as a topic warranting dedicated attention.) Both studies [2, 3] recognize medium-term AI as important and neglected. Parson et al. [2] acknowledges that some prior work in AI covers topics that are important across all time periods, and thus are also relevant to the medium term. It provides a definition of medium-term AI, which is discussed further below, and it provides some analysis of medium-term AI topics. Parson et al. [3] posits that the neglect of the medium term may derive in part from the academic disciplines and methodologies of AI researchers, which may point the researchers toward either the near term or the long term but not the medium term. The present paper extends Parson et al.'s [2] work on definitions and presents original analysis of a different mix of medium-term AI topics. The present paper also explores the medium term as a potential point of common ground between presentists and futurists. Several previous attempts have been made to bridge the presentist-futurist divide [1, 4, 5 ]. An overarching theme in this literature is that the practical steps needed to make progress are often (though not always) the same for both near-term and long-term AI. Instead of expending energy debating the relative importance of near-term and long-term AI, it may often be more productive to focus attention on the practical steps that both sides of the debate agree are valuable. This practical synergy can arise for two distinct reasons, both with implications for medium-term AI. First, certain actions may improve near-term AI and the near-term conversation about long-term AI. Such actions will often also improve the near-term conversation about mid-term AI. For example, efforts to facilitate dialog between computer scientists and policymakers can improve the quality of policy discussions for near-, mid-, and long-term AI. Additionally, efforts encouraging AI developers to take more responsibility for the social and ethical implications of their work can influence work on near-, mid-, and long-term AI. For example, the ethics principles that many AI groups have recently established [6] are often quite general and can apply to work on near-term and long-term AI, as can analyses of the limitations of these principles [7] . Here it should be explained that there is near-term work aimed at developing systems that may only become operational over the mid or long term, especially work consisting of basic research toward major breakthroughs in AI capabilities. Second, certain actions may improve near-term AI, and, eventually, long-term AI. These actions may often also eventually improve mid-term AI. For example, some research on how to design near-term AI systems more safely may provide a foundation for also making mid-and long-term AI systems safer. This is seen in the AI safety study of Amodei et al. [8] , which is framed in terms of near-term AI; lead author Amodei describes the work as also being relevant for long-term AI [9] . Additionally, AI governance institutions established over the near term may persist into the mid and long term, given the durability of many policy institutions. Of course, AI system designs and governance institutions that persist from the near term to the long term would also be present throughout the mid-term. Furthermore, evaluating their long-term persistence may require understanding of what happens during the mid-term. 
Dedicated attention to the medium term can offer another point of common ground between presentists and futurists: both sides may consider the medium term to be important. Presentists may find the medium term to be early enough for their tastes, while futurists find it late enough for theirs. As elaborated below, the reasons that presentists have for favoring near-term AI are different types of reasons than those of the futurists. Presentists tend to emphasize immediate feasibility, certainty, and urgency, whereas futurists tend to emphasize extreme AI capabilities and consequences. Potentially, the medium term features a widely appealing mix of feasibility, certainty, urgency, capabilities, and consequences. Or not: it is also possible that the medium term would sit in a \"dead zone\", being too opaque to merit presentist interest and too insignificant to merit futurist interest. This matter will be a running theme throughout the paper and is worth expressing formally: The medium-term AI hypothesis: There is an intermediate time period in which AI technology and accompanying societal issues are important from both presentist and futurist perspectives. The medium-term AI hypothesis can be considered in either empirical or normative terms. As an empirical hypothesis, it proposes that presentists and futurists actually consider the medium term to be important, or that they would tend to agree that the medium term is important if given the chance to reflect on it. As a normative hypothesis, it proposes that presentists should agree that the medium term is important, given the value commitments of the presentist and futurist perspectives. Given the practical goal of bridging the presentist-futurist divide, the empirical form is ultimately more important: what matters is whether the specific people on opposite sides of the divide would, upon consideration, find common ground in the medium term. (It is unlikely that they currently do find common ground in the medium term, due to lack of attention to it.) Empirical study of presentist and futurist reactions to the medium term is beyond the scope of the present paper. Instead, the aim here is to clarify the nature of the presentist and futurist perspectives in terms of the attributes of the medium term that they should consider important and then to examine whether the medium term is likely to possess these attributes. The paper therefore proceeds mainly in normative terms, though grounded in empirical observation of the perspectives articulated by actual presentists and futurists. More precisely, the medium-term AI hypothesis proposes that the perspectives underlying both groups should rate the medium term as important. This presumes that \"perspectives\" can rate things as important even when detached from the people who hold them. Such detachment is permitted here simply so that the analysis can proceed without going through the more involved (but ultimately important) process of consulting with the people who hold presentist and futurist perspectives. Evaluating the medium-term AI hypothesis is one aim of this paper. First, though, more needs to be said on how the medium term is defined. \n Defining the Medium Term The medium term is, of course, the period of time between the near term and the long term. However, discussions of near-term and long-term AI often do not precisely specify what constitutes near-term and long-term. Some ambiguity is inevitable due to uncertainty about future developments in AI. 
Additionally, different definitions may be appropriate for different contexts and purposes-for example, what qualifies as near-term may be different for a programmer than for a policymaker. Nonetheless, it is worth briefly exploring how the near, mid, and long terms can be defined for AI. Throughout, it should be understood that the near, mid, and long terms are all defined relative to the vantage point of the time of this writing (2019-2020). As time progresses, what classifies as near-, mid-, and long-term can shift. The first thing to note is that near-vs. mid-vs. long-term can be defined along several dimensions. The first is chronological: the near term goes from year A to year B, the mid term from year B to year C, and the long term from year C to year D. The second is in terms of the feasibility or ambitiousness of the AI: the near term is what is already feasible, the long term is the AI that would be most difficult to achieve, and the mid term is somewhere in between. Third, and related to the second, is the degree of certainty about the AI: the near term is what clearly can be built, the long term is the most uncertain and speculative, and the mid term is somewhere in between. Fourth is the degree of sophistication or capability of the AI: the near term is the least capable, the long term is the most capable, and the mid term is somewhere in between. Fifth, and related to the fourth, is with respect to impacts: the near term has (arguably; see below) the mildest impacts on human society and the world at large, the long term has the most extreme impacts, and the mid-term is somewhere in between. Sixth is urgency: the near term is (arguably) the most urgent, the long term the least urgent, and the mid term is somewhere in between. The dimension of impacts is somewhat complex and worth briefly unpacking. Near-term AI may have the mildest impacts, in the sense that if AI continues to grow more capable and be used more widely and in more consequential settings it will tend to have greater impacts on the human society that exists at that time. Put differently, if A = the impacts of near-term AI on near-term society, B = the impacts of mid-term AI on mid-term society, and C = the impacts of long-term AI on long-term society, then (it is supposed) A < B < C. There are, however, alternative ways of conceptualizing impacts. One could take a certain presentist view and argue that only present people matter for purposes of moral evaluation, such as is discussed by Arrhenius [10] , or that future impacts should be discounted, as in many economic cost-benefit evaluations. In these cases, near-term AI may be evaluated as having the largest impacts because the impacts of mid-and long-term AI matter less or not at all. Or, one could consider the impacts of a period of AI on all time periods: the impact of near-term AI on the near, mid, and long terms, the impacts of mid-term AI on the mid-and long-terms, and the impact of long-term AI on the long term. This perspective recognizes the potential for durable impacts of AI technology, and would tend to increase the evaluated size of the impacts of near-and mid-term AI. While recognizing the merits of these alternative conceptions of impacts, this paper uses the first conception, involving A vs. B vs. C. There may be no one correct choice of dimensions for defining the near/mid/long term. Different circumstances may entail different definitions. For example, Parson et al. 
[2] are especially interested in societal impacts and implications for governance, and thus use definitions rooted primarily in impacts. They propose that, relative to near-term AI, medium-term AI has \"greater scale of application, along with associated changes in scope, complexity, and integration\" [2] (pp. 8-9), and, relative to long-term AI, medium-term AI \"is not self-directed or independently volitional, but rather is still to a substantial degree developed and deployed under human control\" [2] (p. 9). (One can quibble with these definitions. Arguably, near-term AI is already at a large scale of application, and there may be no clear demarcation in scale between near-and mid-term AI. Additionally, while it is proposed that long-term AI could escape human control, that would not necessarily be the case. Indeed, discussions of long-term AI sometimes focus specifically on the question of how to control such an AI [11] .) The medium term is a period with substantially greater use of AI in decision-making, potentially to the point in which \"the meaning of governance\" is challenged [2] (p. 9), but humans remain ultimately in control. This is a reasonable definition of medium-term AI, especially for impacts and governance purposes. The present paper is more focused on the presentist/futurist debate, and so it is worth considering the definitions used in the debate. Elements of each of the six dimensions can be found, but they are not found uniformly. Presentists often emphasize feasibility and degree of certainty. Computer scientist Andrew Ng memorably likened attention to long-term AI to worrying about \"overpopulation on Mars\" [12] , by which Ng meant that it might eventually be important, but it is too opaque and disconnected from current AI to be worth current attention. Another presentist theme is urgency, especially with respect to the societal implications of near-term AI. Legal scholar Ryan Calo [13] (p. 27) argues that \"AI presents numerous pressing challenges to individuals and society in the very short term\" and therefore commands attention relative to long-term AI. For their part, futurists often emphasize capability and impacts. Commonly cited is the early remark of I.J. Good [14] (p. 33) that \"ultraintelligent\" AI (AI with intelligence significantly exceeding that of humans) could be \"the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control\". Chronological definitions are less common. One exception is Etzioni [15] , who downplays long-term AI on grounds that it is unlikely to occur within 25 years. (In reply, futurists Dafoe and Russell [16] argue that potential future events can still be worth caring about even if they will not occur within the next 25 years.) Taking the above into account, this paper will use a feasibility definition for near-term AI and a capability definition for long-term AI. The paper defines near-term AI as AI that already exists or is actively under development with a clear path to being built and deployed. Per this definition, near-term AI does not require any major research breakthroughs, but instead consists of straightforward applications of existing techniques. The terms \"clear\", \"major\", and \"straightforward\" are vague, and it may be reasonable to define them in different ways in different contexts. (This vagueness is relevant for the medium-term AI hypothesis; more on this below.) 
Nonetheless, this definition points to current AI systems plus the potential future AI systems that are likely to be built soon and do not depend on research breakthroughs that might or might not manifest. The paper defines long-term AI as AI that has at least human-level general intelligence. Interest in long-term AI often focuses on human-level artificial intelligence (HLAI), artificial general intelligence (AGI), strong AI, and artificial superintelligence (ASI). However, there may be narrow AI systems that are appropriate to classify as long-term. For example, Cave and ÓhÉigeartaigh [4] (p. 5) include \"wide-scale loss of jobs\" as a long-term AI issue separately from the prospect of superintelligence. (Note that the most widespread loss of jobs may require AGI. For example, Ford [17] (p. 3) writes \"If, someday, machines can match or even exceed the ability of a human being to think and to conceive new ideas-while at the same time enjoying all the advantages of a computer in areas like computational speed and data access-then it becomes somewhat difficult to imagine just what jobs might be left for even the most capable human workers\".) A plausible alternative definition of long-term AI is AI that achieves major intellectual milestones and/or has large and transformative effects. This is more of a catch-all definition that could include sufficiently important narrow AI systems such as those involved in job loss. In this definition, the terms \"major\", \"large\", and \"transformative\" are vague. Indeed, current AI systems arguably meet this definition. Therefore, the paper will define long-term AI in terms of HLAI, while noting the case for the alternative definitions. The paper's use of a feasibility definition for near-term and a capability definition for long-term may be consistent with common usage in AI discussions. However, the use of a different dimension for near-term (feasibility) than for long-term (capability) can induce some chronological blurring in two important respects. First, AI projects that are immediately practical may have long time horizons. This may be especially common for projects in which AI is only one component of a more complex and durable system. Military systems are one domain with long lifespans. A 2016 report found that some US nuclear weapon systems were still using 1970s-era 8-inch floppy disks [18] . AI is currently being used and developed for a wide variety of military systems [19] . Some of these could conceivably persist for many decades into the future-perhaps in the B-52H bomber, which was built in the 1960s and is planned to remain in service through the 2050s [20] . (AI is used in bombers, for example, to improve targeting [21] . AI is used more extensively in fighters, which execute complex aerial maneuvers at rapid speeds and can gain substantial tactical advantage from increased computational power and autonomy from human pilots [22] .) One can imagine the B-52H being outfitted with current AI algorithms and retaining these algorithms into the 2050s, just as the 8-inch floppy disks have been retained in other US military systems. Per this paper's definitions, this B-52H AI would classify as near-term AI that happens to remain in use over a long time period, well beyond the 25 years that Etzioni [15] treats as the \"foreseeable horizon\" worthy of attention. Second, AI systems with large and transformative effects, including AGI, could potentially be built over relatively short time scales. 
When AGI and related forms of AI will be built is a matter of considerable uncertainty and disagreement. Several studies have asked AI researchers-predominantly computer scientists-when they expect AI with human or superhuman capacity to be built [23] [24] [25] [26] . (Note that these studies are generally framed as being surveys of experts, but it is not clear that the survey participants are expert in the question of when AGI will be built. Earlier predictions about AI have often been unreliable [27] . This may be a topic for which there are no experts; on this issue, see Morgan [28] .) The researchers present estimates spanning many decades, with some estimates being quite soon. Figure 1 presents median estimates from these studies. Median estimates conceal the range of estimates across survey participants, but the full range could not readily be presented in Figure 1 because, unfortunately, only Baum et al. [23] included the full survey data. If the early estimates shown in Figure 1 are correct, then, by this paper's definitions, long-term AI may be appearing fairly soon, potentially within the next 25 years. Estimates for when AI will reach superhuman capability (Baum et al.) [23] and human-level capability (Sandberg and Bostrom, Müller and Bostrom, and Grace et al.) [24] [25] [26] . Shown are estimates for when the probability that the milestone is reached is 10% (lower mark), 50% (square), and 90% (upper mark). For each study, the median estimates across the survey participants are plotted. \n Median Estimates of Advanced AI Development \n The Medium-Term AI Hypothesis With the above definitions in mind, it is worth revisiting the medium-term AI hypothesis. If presentists are, by definition, only interested in the present, then they would not care at all about the medium term. However, the line between the near term and the medium term is blurry. As defined above, near-term AI must have a clear path to being built and deployed, but \"clearness\" is a matter of degree. As the path to being built and deployed becomes less and less clear, the AI transitions from near-term to medium-term, and presentists may have less and less interest in it. From this standpoint, presentists may care somewhat about the medium term, especially the earlier portions of it, but not to the same extent as they care about the near term. Alternatively, presentists might care about the medium term because the underlying things they care about also arise in the medium term. Some presentists are interested in the implications of AI for social justice, or for armed conflict, or for transportation, and so on. Whereas it may be difficult to think coherently about the implications of long-term AI for these matters, it may not be so difficult for medium-term AI. For example, a major factor in debates about autonomous weapons (machines that use AI to select and fire upon targets) is whether these weapons could adequately discriminate between acceptable and unacceptable targets (e.g., enemy combatants vs. civilians) [29, 30] . Near-term AI cannot adequately discriminate; medium-term AI might be able to. Therefore, presentists concerned about autonomous weapons have reason to be interested in medium-term AI. Whether this interest extends to other presentist concerns (social justice, transportation, etc.) must be considered on a case-by-case basis. For futurists, the medium term may be important because it precedes and influences the long term. 
If the long term begins with the advent of human-level AGI, then this AI will be designed and built during the medium term. Some work on AGI is already in progress [31] , but it may be at a relatively early stage. Figure 1 illustrates the uncertainty: the earliest estimates for the onset of AGI (and similar forms of AI) may fall within the near term, whereas the latest estimates fall much, much later. Futurists may tend to be most interested in the period immediately preceding the long term because it has the most influence on AGI. Their interest in earlier periods may depend on the significance of its causal impact on AGI. It follows that there are two bases for assessing the medium-term AI hypothesis. First, the hypothesis could hold if AI that resembles near-term AI also influences long-term AI. In that case, the technology itself may be of interest to both presentists and futurists. Alternatively, the hypothesis could hold if the societal implications of medium-term AI raise similar issues as near-term AI, and if the medium-term societal context also influences long-term AI. For example, medium-term autonomous weapon technology could raise similar target discrimination issues as is found for near-term technology, and it could also feed arms races for long-term AI. (To avoid confusion, it should be understood that discussions of long-term AI sometimes use the term \"arms race\" to refer to general competition to be the first to build long-term AI, without necessarily any connection to military armaments [32] . Nonetheless, military arms races for long-term AI are sometimes posited [33] .) Both of the above derive from some measure of continuity between the near, mid, and long terms. Continuity can be defined in terms of the extent of change in AI systems and related societal issues. If near-term AI techniques and societal dimensions persist to a significant extent through the end of the medium term (when long-term AI is built), then the medium-term AI hypothesis is likely to hold. The chronological duration of the medium term may be an important factor. Figure 1 includes a wide range of estimates for the start of the long term. If the later estimates prove correct, then the medium term could be quite long. A long duration would likely tend to mean less continuity across the near, mid, and long terms, and therefore less support for the medium-term AI hypothesis. That is not necessarily the case. One can imagine, for example, that AI just needs one additional technical breakthrough to go from current capabilities to AGI, and that it will take many decades for this breakthrough to be made. One can also imagine that the issues involving AI will remain fairly constant until this breakthrough is made. In that case, near-term techniques and issues would persist deep into the medium term. However, it is more likely that a long-lasting medium term would have less continuity and a larger dead zone period with no interest from either presentists or futurists. If AGI will not be built for, say, another 500 years, presentists are unlikely to take an interest. Figure 2 presents two sketches of the degree of interest that presentists and futurists may hold in the medium term. Figure 2a shows a period of overlap in which both presentists and futurists have some interest; here, the medium-term AI hypothesis holds. Figure 2b shows a dead zone with no overlap of interest; here, the medium-term AI hypothesis does not hold. 
Figure 2 is presented strictly for illustrative purposes and does not indicate any rigorously derived estimation of actual presentist or futurist interests. It serves to illustrate how presentists' degree of interest could decline over time and futurists' degree of interest could increase over time, with implications for the medium-term AI hypothesis. Figure 2 shows presentist/futurist interest decreasing/increasing approximately exponentially over time. There is no particular basis for this, and the curves could just as easily have been drawn differently. To sum up, assessing the medium-term AI hypothesis requires examining what medium-term AI techniques and societal dimensions may look like, and the extent of continuity between the near-, mid-, and long-term periods. \n The Intrinsic Importance of Medium-Term AI Thus far, the paper has emphasized the potential value of medium-term AI as a point of common interest between presentists and futurists. This \"consensus value\" will remain a major theme in the sections below. However, it is worth pausing to reiterate that medium-term AI can also be important in its own right, regardless of any implications for presentists and futurists. Assessing the extent to which it is intrinsically important requires having some metric for intrinsic importance. A detailed metric is beyond the scope of this paper. For present purposes, it suffices to consider that medium-term AI and its accompanying societal issues may be important for the world as it exists during the medium term. It is further worth positing that there may be opportunities for people today to significantly influence the medium term, such that the medium term merits attention today due to its intrinsic importance. With that in mind, the paper now turns to the details of medium-term AI and society. \n Medium-Term AI Techniques My own expertise is not in the computer science of AI, and so I can say relatively little about what computer science AI techniques may look like over the medium term. Therefore, this section serves as a placeholder to note that the space of potential medium-term AI techniques is a topic worthy of attention for those with the expertise to analyze and comment on it. \n Medium-Term AI Societal Dimensions While the medium-term societal dimensions of AI will, to at least some extent, depend on the capabilities of the medium-term AI techniques, it is nonetheless possible to paint at least a partial picture of the societal dimensions, even without clarity on the techniques. What follows is indeed a partial picture, shaped to a significant extent by my own areas of expertise. It aims to illustrate potential medium-term scenarios in several domains and discuss their implications for near-term and long-term AI and their prospects for bridging the presentist/futurist divide. \n Governance Institutions Governance institutions can be quite durable. For example, the United Nations was founded in 1945, and despite many calls for reform, the UN Security Council retains China, France, Russia, the United Kingdom, and the United States as permanent members. The \"P5 countries\" are an artifact of World War II that arguably does not match current international affairs, but changing the membership would require a consensus that is quite elusive. For example, a case could be made for adding Brazil and India, but then Argentina and Pakistan may object, so no change is made. Not all governance institutions are this ossified, but many of them are quite enduring. 
This continuity makes governance institutions a compelling candidate for the medium-term AI hypothesis. The near-term is an exciting time for AI governance. Institutions are now in the process of being designed and launched. Decisions being made now could have long-lasting implications, potentially all the way through the end of the medium term and the beginning of the long term. (It is harder to predict much of anything if and when AGI/ASI/HLAI is built, including the form of governance institutions. One attempt to make such predictions is Hanson [34] .) One notable example is the International Panel on Artificial Intelligence (IPAI) and Global Partnership on AI (GPAI). The IPAI/GPAI has recently been proposed by the governments of Canada and France, first under the IPAI name and later under the GPAI name [35, 36] . Documents on the IPAI/GPAI emphasize issues that are relevant in the near term and may continue to be relevant through the medium term. One set of issues listed for illustrative purposes is: \"data collection and access; data control and privacy; trust in AI; acceptance and adoption of AI; future of work; governance, laws and justice; responsible AI and human rights; equity, responsibility and public good\" [35] . The documents published on the IPAI/GPAI give no indication of any focus on long-term issues relating to AGI. (The future of work could arguably classify as a long-term issue.) However, the IPAI/GPAI may nonetheless be relevant for the long term. If the IPAI/GPAI takes hold then it could persist for a long time. For comparison, the Intergovernmental Panel on Climate Change (IPCC) was formed in 1988 and remains an active and important institution. The IPAI/GPAI follows a similar model as the IPCC and may prove similarly durable. Additionally, while long-term issues are not featured in the early-stage documents that have thus far been published on the IPAI/GPAI, that does not preclude the IPAI/GPAI from including long-term issues within its scope once it is up and running. Whether long-term issues are included could come down to whether people interested in the long-term take the initiative to participate in IPAI/GPAI processes. Indeed, one of the most thoughtful discussions of the IPAI/GPAI published to date is by Nicolas Miailhe [37] of The Future Society, an organization explicitly working \"to address holistically short, mid and long term governance challenges\" in AI [38] . Such activity suggests that the IPAI/GPAI could be an institution that works across the range of time scales and persists significantly into the future. \n Collective Action An important dynamic for the societal impacts of AI is whether AI development projects can successfully cooperate on collective action problems: situations in which the collective interest across all the projects diverges from the individual interests of the projects. Collective action has been a significant theme in discussions of long-term AI, focused on the prospect of projects cutting corners on safety to be the first to achieve important technological milestones [32, 39] . Collective action problems can also arise for near-term AI. One near-term concern is about military AI arms races [40] (though this concern is not universally held [41] ). Social science research on collective action problems identifies three broad classes of solutions for how to get actors to cooperate: government regulation, private ownership, and community self-organizing [42] . Each is worth briefly considering with an eye toward the medium term. 
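The divergence between collective and individual interests can be made concrete with a stylized two-project payoff table. The numbers below are invented purely for illustration (they are not taken from the paper or the cited literature); they only show the structure in which each project prefers to cut corners on safety even though both would be better off if neither did.

```python
# Stylized two-project "race" payoff table; all numbers are invented for illustration.
# Each project chooses to invest in safety ("careful") or to cut corners ("rush").
PAYOFFS = {
    # (project_A_choice, project_B_choice): (payoff_A, payoff_B)
    ("careful", "careful"): (3, 3),   # collective optimum: both develop safely
    ("careful", "rush"):    (0, 4),   # the rushing project wins the race
    ("rush",    "careful"): (4, 0),
    ("rush",    "rush"):    (1, 1),   # both cut corners: worst collective outcome
}

def best_response(opponent_choice):
    """Return the choice that maximizes a project's own payoff, holding the other fixed."""
    return max(["careful", "rush"],
               key=lambda mine: PAYOFFS[(mine, opponent_choice)][0])

# Each project's individually rational reply is to rush, whatever the other does,
# even though (careful, careful) gives the highest joint payoff.
print(best_response("careful"), best_response("rush"))  # -> rush rush
```

The three solution classes considered next can be read as different ways of changing these payoffs or of enforcing the cooperative cell.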
Government regulation is perhaps the most commonly proposed solution for AI collective action problems. While some proposals focus on domestic measures [43] , global regimes may be favorable due to AI being developed worldwide. This is reflected in proposals for international treaties [44] or, more ambitiously, global governance regimes with broad surveillance powers and the capacity to preemptively halt potentially dangerous AI projects through the use of force [45] . This more ambitious approach may be theoretically attractive in terms of ensuring AI collective action, though it is also unattractive for its potential for abuse, up to and including catastrophic totalitarianism [46] . Regardless, in practice, an intrusive global government is very likely a nonstarter at this time and for the foreseeable future, probably into the medium term. Nations are too unlikely to be willing to cede their national sovereignty to a global regime, especially on a matter of major economic and military significance. (Perhaps some future circumstances could change this, but the desire to preserve sovereignty, especially from rival and adversarial states, has been a durable feature of the international system.) Even a more modest international treaty may be asking too much. Treaties are difficult to create, especially if universal international consensus is needed (for example, because AI can be developed anywhere), and when access to and capability with the technology is unevenly distributed across the international community (as is very much the case with AI; for general discussion of emerging technology treaty challenges, see [47] ). Instead, government regulations are likely to be more modest, and play at most a partial role in facilitating collective action. Whatever it is that governments end up doing, there is strong potential for institutions that are durable across the medium term, as discussed in Section 6.1. Private ownership is commonly used for natural resource management. An entity that owns a natural resource has an incentive to sustain it and the means to do so by charging users for access at a sufficiently high fee. Private ownership schemes are difficult to apply to AI software due to the difficulty of restricting access. Hardware may offer a more viable option because hardware manufacturing facilities are geographically fixed and highly visible sites of major industrial infrastructure, in contrast with the ephemerality of software (For related discussion, see [48] ). Hardware manufacturing is also typically privately owned [49] . AI collective action could conceivably be demanded by the manufacturers, especially the select manufacturers of the advanced hardware used in the most capable AI projects. However, the benefits of AI collective action are experienced by many entities, and therefore would predominantly classify as externalities from the perspective of hardware manufacturers, in the sense that the benefits would be gained by other people and not by the manufacturers. This reduces the manufacturers' incentives to promote collective action and likewise reduces viability of private ownership schemes for AI collective action. Nonetheless, to the extent that hardware manufacturing can play a role, it could be a durable one. Hardware manufacturing is led by relatively durable corporations including Intel (founded 1968), Samsung Electronics (founded 1969), SK Hynix (formerly Hyundai Electronics, founded 1983), and Taiwan Semiconductor Manufacturing Company (founded 1987). 
These corporations are likely to remain important over medium-term and potentially also long-term time periods. Community self-organizing for AI collective action can be seen in several important areas. One is in initiatives to bring AI developers together for promoting ethical principles. The Partnership on AI is a notable example of this. Importantly, the Partnership has recently welcomed its first Chinese member, Baidu [50] . This suggests that its emphasis on human rights (partners include Amnesty International and Human Rights Watch) will not limit its reach to Western organizations. Another area is in the collaborations between AI projects. For example, Baum [31] documents numerous interconnections between AGI projects via common personnel and collaborations, suggesting a cooperative community. Community self-organizing may lack the theoretical elegance of government regulation or private ownership, but it is often successful in practice. Whether it is successful for AI remains to be seen. AI community initiatives are relatively young, making it more uncertain how they will play out over the medium and long term. \n Corporate AI Development The financial incentives of for-profit corporations could become a major challenge for the safe and ethical development of AI over all time periods. How can companies be persuaded to act in the public interest when their financial self-interest points in a different direction? This is of course a major question for many sectors, not just AI. It is an issue for AI right now, amid a \"techlash\" of concerns about AI in social media bots, surveillance systems, and weaponry. It could also be an issue for AI over the mid and long term. With regards to long-term AI, Baum [31] (p. 19) introduces the term \"AGI profit-R&D synergy\", defined as \"any circumstance in which long-term AGI R&D delivers short-term profits\". If there is significant AGI profit-R&D synergy, then it could make AGI governance substantially more difficult by creating financial incentives that may not align with the public interest. AGI profit-R&D synergy concerns long-term AI, but it is inherently a medium-term phenomenon because it would occur when AGI is being developed. Assessing the prospect of AGI profit-R&D synergy requires an understanding of the technical computer science details of AI as it transitions from the medium term to the long term, which is beyond the scope of this paper. If the medium-term details have any sort of close relation to near-term AI, that could constitute a significant strengthening of the medium-term AI hypothesis. If AI companies' financial self-interest diverges from the public interest, how would they behave? Ideally, they would act in the public interest. In some cases, perhaps they will, especially if they are pushed to do so by people both within and outside of the companies. Unfortunately, experience from other sectors shows that companies often opt to act against the public interest, as seen, for example, in pushback by the tobacco industry against regulations aimed at reducing cancer risk; by the fossil fuel industry against regulations aimed at reducing global warming risk [51] ; and by the industrial chemicals industry against regulations aimed at reducing neurological disease risk [52] . It is worth considering the prospect that AI companies may (mis) behave similarly. It has been proposed that AI companies could politicize skepticism about AI and its risks to avoid regulations that would restrict their profitable activities [53] . 
This sort of politicized skepticism has a long history, starting with tobacco industry skepticism about the link between cigarettes and cancer and continuing to this day with, for example, fossil fuel industry skepticism about global warming. One mechanism for this work is to fund nominally independent think tanks to produce publications that promote policies and issue stances consistent with the companies' financial self-interest. Some attributes of this pattern can be seen in recent writing by the think tank the Center for Data Innovation, which warns of an \"unchecked techno-panic\" that is dampening public enthusiasm for AI and motivating government regulations [54] . The extent to which this constitutes a case of politicized skepticism is unclear. Specifically, the extent of the Center for Data Innovation's industry ties could not be ascertained for this paper. Likewise, it is not the intent of this paper to accuse this organization of conflicts of interest. It is also not the intent to claim the opposite-that there is no conflict of interest in this case. (Indeed, the presence of conflict of interest is often hidden-hence, industry firms fund the work of nominally independent think tanks instead of doing it in-house.) Instead, the intent is merely to provide an example that illustrates some aspects of the politicized skepticism pattern. Importantly, whereas the proposal of politicized AI skepticism focuses on skepticism about long-term AI [53] , the skepticism of the Center for Data Innovation is focused on the near term [54] . Likewise, the pattern of politicized AI skepticism has the potential to play out across time periods, especially when there is significant profit-R&D synergy and concurrent prospects of government regulation. \n Militaries and National Security Communities Advanced militaries have long been involved with the forefront of AI in their capacity as research funders and increasingly as users of the technology. The advanced militaries also often have substantial technical expertise, as do the broader national security policy communities that they interface with. Furthermore, militaries are sometimes tasked with operations and planning across a range of time periods, and national security communities are likewise sometimes oriented toward thinking over such time periods. This is seen in the example cited above of the plan for the B-52H bomber to remain in service through the 2050s. It thus stands to reason that advanced militaries and national security communities could be interested in medium-term AI and its links between the near term and long term. There is already some military attention to AGI. One clear example is the JASON report Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD [55] , which was produced in response to a US Department of Defense query about AGI. Another is the excellent book [19] , which features a full chapter on AGI and ASI. Both publications provide nuanced accounts of long-term AI. The publications are produced by analysts who are especially technically savvy and are not representative of the entire military and national defense communities. Nonetheless, they are among the publications that people in these communities may consult and do indicate a degree of awareness about long-term AI. As documented by Baum [31] , there are some current AGI R&D projects with military connections. 
Most of these are US academic groups that receive funding from military research agencies such as DARPA and the Office of Naval Research. One is a small group at the primary national defense research agency of Singapore. None of them have any appearance of the sort of major strategic initiative that is sometimes postulated in literature on long-term AI [33] . Given the current state of affairs, it is highly likely that advanced militaries and national security communities will be engaged in AI throughout the medium term. That raises the question of their likely role. Despite common concerns within AI communities, as manifested, for example, in the Google employee protest over Project Maven, militaries can actually be a constructive voice on ethics and safety. For example, a major theme of the [55] report is that what it calls the "ilities"-"reliability, maintainability, accountability, verifiability, evolvability, attackability, and so forth" [55] (p. 2) are a major concern for military applications and "a potential roadblock to DoD's use of these modern AI systems, especially when considering the liability and accountability of using AI in lethal systems" [55] (p. 27). Militaries are keen to avoid unintended consequences, especially for high-stakes battlefield technologies. It is also important to account for the geopolitical context in which militaries operate. Militaries can afford to be more restrained in their development and use of risky technologies when their nations are at peace. In an interview, Larry Schuette of the Office of Naval Research compares autonomous weapons to submarines [19] (pp. 100-101). Schuette recounts that in the 1920s and 1930s, the US was opposed to unrestricted submarine warfare, but that changed immediately following the 7 December 1941 attack on Pearl Harbor. Similarly, the US is currently opposed to autonomous weapons, and on the question of whether it will remain opposed, Schuette replies, "Is it December eighth or December sixth?" It follows that the role of militaries in medium-term AI may depend heavily on the state of international relations during this period. It stands to reason that the prospects for cautious and ethical AI development are much greater during times of peace than times of war. There is an inherent tension between pushing a technology ahead for strategic advantage and exercising caution with respect to unintended consequences, as is articulated by Danzig [56] . Peaceful international relations tip the balance toward caution and can empower militaries and national security communities to be important voices on safety and ethics. \n Conclusions Parson et al. [2] argued that medium-term AI and its accompanying societal issues are important in their own right. This paper's analysis yields the same conclusion. For each of the issue areas studied here-governance institutions, collective action, corporate development, and military/national security-the medium term will include important processes. In a sense, this is not much of a conclusion. It is already clear that AI is important in the near term, and there is plenty of reason to believe that AI will become more important as the technology and its applications develop further. What then of the presentist-futurist debate? This paper proposes the medium-term AI hypothesis, which is that there is an intermediate time period that is important from both presentist and futurist perspectives.
With the near term defined in terms of feasibility and the long term in terms of capability, it follows that the medium-term AI hypothesis is more likely to hold if near-term AI techniques and societal dimensions persist to a significant extent through the end of the medium term, when long-term AI is built. To the extent that the hypothesis holds, attention to the medium term could play an important role in bridging the divide that can be found between presentist and futurist communities. The paper finds mixed support for the medium-term AI hypothesis. Support is strong in the case of AI governance institutions, which are currently in development and may persist through the medium term, with implications for long-term AI. Support is ambiguous for AI collective action: government initiatives to promote collective action may play relatively little role at any time, private ownership schemes are difficult to arrange for AI, and community self-organizing has potential that might or might not be realized. Each of these three schemes for achieving collective action could potentially play out over near- and medium-term periods, with implications for long-term AI, but whether they are likely to is unclear. Regarding corporate AI development, a key question is whether near-to-medium-term AI technology could serve as a profitable precursor to AGI, creating AGI profit-R&D synergy. Whether the synergy would occur is an important question for future research. Finally, advanced militaries and national security communities are already paying attention to AGI and are likely to remain active in a range of AI technologies through the medium term. While it is unclear whether military/national security communities will be important actors in the development of AGI, there is substantial potential, providing support for the medium-term AI hypothesis. In closing, this paper has shown that at least some important AI processes are likely to play out over the medium term, and that they will be important in their own right and from both presentist and futurist perspectives. The exact nature and importance of medium-term AI is a worthy subject of future research. To the extent that medium-term AI can be understood, this can point to opportunities to positively influence it, resulting in better overall outcomes for society. Funding: This research was funded by the Gordon R. Irlam Charitable Foundation. Figure 1. Estimates for when AI will reach superhuman capability (Baum et al.) [23] and human-level capability (Sandberg and Bostrom, Müller and Bostrom, and Grace et al.) [24-26]. Shown are estimates for when the probability that the milestone is reached is 10% (lower mark), 50% (square), and 90% (upper mark). For each study, the median estimates across the survey participants are plotted. Figure 2. Illustrative sketches of presentist and futurist interest in the near, medium, and long term. (a) shows overlapping interest: the medium-term AI hypothesis holds; (b) shows a dead zone with no overlapping interest: the medium-term AI hypothesis does not hold. The sketches are strictly for illustrative purposes only. The phrase "new forms of AI built" is defined with reference to the definition of near-term AI in the main text. Abstract: There has been extensive attention to near-term and long-term AI technology and its accompanying societal issues, but the medium term has gone largely overlooked.
This paper develops the concept of medium-term AI, evaluates its importance, and analyzes some medium-term societal issues. Medium-term AI can be important in its own right and as a topic that can bridge the sometimes acrimonious divide between those who favor attention to near-term AI and those who prefer the long term. The paper proposes the medium-term AI hypothesis: the medium term is important from the perspectives of those who favor attention to near-term AI as well as those who favor attention to long-term AI. The paper analyzes medium-term AI in terms of governance institutions, collective action, corporate AI development, and military/national security communities. Across portions of these four areas, some support for the medium-term AI hypothesis is found, though in some cases the matter is unclear.

Agents, environments, scenarios: A framework for examining models and simulations of human-vehicle interaction
Christian P. Janssen, Linda Ng Boyle, Wendy Ju, Andreas Riener, and Ignacio Alvarez

\n Introduction Within the transportation, automotive, and user interface research communities, there is occasional confusion as to what is implied by "simulation" or "model." For example, the following are all false assumptions: a model or simulation implies tight control with no testing in a naturalistic environment, a model always involves simulating people or the traffic environment, and a concrete, falsifiable research question cannot be achieved without a model or simulation and testing in the real world. These false assumptions overlook the fact that researchers and practitioners in these fields have various approaches to modeling the agents in the car, the driving environment, and the scenarios under consideration. The different approaches and associated labels can create confusion as to which methods are most effective for examining specific research questions regarding human-vehicle interaction. The authors of this paper have used different types of simulations in driving studies and other domains, in part due to their different backgrounds in psychology, artificial intelligence (AI), safety science, design, and engineering. During a meeting in 2016 (Riener et al., 2016a), the authors identified that, up until then, they meant different things when talking about "a simulation" or "a model" and that each author held some incorrect assumptions about these terms. To move the field forward and to avoid these mistakes, there is a need for (a) more specific terminology to guide the scientific and practical dialogue and (b) a common framework in which each research effort can be mapped and compared. Such a framework is needed given the interdisciplinary focus of transportation research, in which different disciplines (e.g., engineering, design, social sciences, safety sciences, AI) might also use different terminology. In this paper, a classification framework for examining models and simulations of human-vehicle interaction is introduced. Within the domain of human-vehicle interaction, simulations can happen along three dimensions: agents, environments, and scenarios. These are illustrated in Fig. 1. Within each of these dimensions, there can be multiple styles or approaches of simulation and modeling.
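As a reading aid, the three dimensions can be expressed as a small data structure that classifies a study by whether its agents, environment, and scenario are simulated. This is only a hypothetical sketch of how the framework might be operationalized: the class and field names are ours, treating a constrained scenario as the "simulated" pole of that dimension is our simplification, and the example classifications anticipate the actual/virtual/mixed-reality distinction drawn later in the paper.

```python
from dataclasses import dataclass

# Hypothetical encoding of the framework's three dimensions; field names and
# example classifications are illustrative, not taken from the paper or Fig. 1.
@dataclass
class HumanVehicleStudy:
    agent_simulated: bool        # artificial (simulated human) agent vs. real human participants
    environment_simulated: bool  # virtual driving environment vs. on-road "real world"
    scenario_simulated: bool     # researcher-designated (constrained) scenario vs. open-ended driving

    def label(self) -> str:
        simulated = [self.agent_simulated, self.environment_simulated, self.scenario_simulated]
        if not any(simulated):
            return "actual reality"
        if all(simulated):
            return "virtual reality"
        return "mixed reality"

# Example classifications (illustrative):
naturalistic_driving = HumanVehicleStudy(False, False, False)  # observation-only, open-ended driving
driving_simulator = HumanVehicleStudy(False, True, True)       # human drives a scripted simulation
print(naturalistic_driving.label(), "|", driving_simulator.label())
```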
Each of these approaches differs in the extent to which they truly emulate the realistic performance of the agent, the environment, or the driving scenario. The framework enables researchers to map their choice of research methods and tools and compare with other literature, as well as identifying areas of effort that could advance the field. \n Intended contribution and audience A first contribution of this paper is to explicitly define the differences between three dimensions of modeling: agents, environments, and scenarios. This is achieved by describing what is entailed in each dimension. By separating out these three broad dimensions, it becomes clear that the aforementioned assumptions (in the introduction) are flawed, as each assumption only considers a subset of the dimensions of modeling. Being more explicit about these differences provides the field of transportation research with more precision. Such precision is needed to compare results across studies and to aid replication of study results and implementation of ideas and results in actual transportation systems. A second contribution of this paper is to identify areas that are as-yet unexplored or underrepresented within the transportation research community. This is achieved by describing the studies completed for various combinations of simulated agents, environments, and scenarios. In studies where either the environment is simulated or the scenario is constrained, but not both, there is the possibility for future research that allows for tighter control where needed, while also providing insights on a wider, open-ended set of human behaviors. The intended audiences of this paper are researchers and practitioners who are consumers of these simulations, as well as industry and regulatory agencies. For all these parties, the framework provides a way to classify studies and to decide upon the best research method for new studies: what type of simulation is needed? Although the discussion within this paper mostly focuses on human-(motorized) vehicle interaction, simulation is also used in other transportation domains such as trains and flights. Having the three dimensions distinguished is not only important for academic purposes, but also for product development and the testing cycle (e.g., V-Model, Scrum) (Friedrich et al., 2008) . 1 Testing products with real users (agents), in real environments (the world), with actual everyday scenarios provides the highest ecological validity. However, that might also come with potential disadvantages such as (1) weak reproducibility and generalizability due to changing sensor data, weather, or participants' cognitive states, (2) the impossibility of testing under extreme conditions and, (3) its negative impact on release cycle times. A possible alternative that might help to reduce field testing while ensuring functional safety and reliability is performing safety assessment by stochastic virtual simulation (Kompass et al., 2015) . Still, a pure virtual test as suggested in the past by for example Google (Harris, 2014) is barely able to represent reality with its overwhelming complexity, for instance because of performance differences of virtual sensors, lack of realism and flexibility of driver models and shallow modulation of environment and surroundings. That's why (California's) regulations still stipulate autonomous vehicles must be tested under \"controlled conditions\" (e.g., a test track or temporarily closed public road Harris, 2014) . 
The automotive industry is searching for a new standardized testing process (Kompass et al., 2015) to cope with the issues highlighted above. Open questions in this regard are whether and to what extent real field trials can be substituted by various levels of virtual simulation (Riener, 2010), how to seamlessly integrate different validation methods (e.g., virtual simulation, driving simulator tests, X-in-the-Loop simulation), and how to guarantee reproducible test conditions. To address these issues, tests can adjust the real and virtual parts in various dimensions of the test setup (agents, environments, scenarios). Such adjustments create a "mixed-reality testing framework" (see also Riener et al., 2016b), which is explicit about which components (agents, environments, scenarios) are simulated or not. The framework that is put forth in this paper, and which is described next, thereby helps to better position such mixed-reality efforts. \n Classification framework for (models and simulations of) human-vehicle interaction The framework distinguishes three dimensions that can be examined in studies of human-vehicle interaction: agents, environments, and scenarios. Each dimension can involve some form of simulation or modeling, and will now be discussed in turn. \n Dimension 1: agent An agent is 'anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators' (Russell and Norvig, 2003, page 32). This general definition can be applied to both humans and artificial (non-human) entities, and therefore also to simulations and models of humans. Simulations of human agents are typically assumed to be created by software, but can also rely on hardware components (i.e., in embodied, situated robots; Pfeifer and Scheier, 2001). The reasons for modeling human behavior can differ, but for the traffic domain typically include: a need to determine the operational domain of the system, a need to influence system or user behavior under conditions that might be risky or unsafe for humans, a need to benchmark human behavior against alternative theoretical predictions of behavior, or a way to ground system behavior in theory. In the driving context, an agent is in charge of perception, judgement, and actuation on the driving task or a subset of it. At a broad level, an agent in a vehicle is either human or artificial. However, within the simulated artificial (non-human) agent classes many distinctions can be made. Three dimensions are discussed next: (A) whether accurate simulation of the human's internal thinking process matters, (B) level of abstraction (or: what part of human behavior or thinking is of interest to the modeler), and (C) modeling approach. \n Does accurate simulation of a human's thinking process matter? A first differentiation is whether simulating the details of the human's internal thinking process matters. Some models might not care about mimicking human behavior and thinking at all. For example, when implementing Society of Automotive Engineers (SAE) level 5 automation (SAE International, 2018), the vehicle itself makes automated (or autonomous) decisions, but might not always make the same decisions as humans, or might not be subject to human biases and limitations (e.g., fatigue).
Other models might only approximate a small subset of human thought or behavior; for example models that test the impact of random human actions (Thimbleby, 2007) , or models of traffic flow that only focus on crude estimates of perception and action (Hoogendoorn and Bovy, 2001) . For models where simulation of human thought and behavior are more crucial, there are gradations in level of detail, ranging from models for rapid prototyping and testing of interfaces (John et al., 2004; Salvucci et al., 2005) , to testing the effect of specific theories such as strategies of task interleaving (e.g., Brumby et al., 2018; Janssen et al., 2012; Jokinen et al., 2020 online first; Kujala and Salvucci, 2015; Lee and Lee, 2019) , to developing detailed broader theories of human thinking (e.g., cognitive architectures Liu et al., 2006; Salvucci and Taatgen, 2011; Zhang and Hornof, 2014) . In other words: creating an artificial agent can be seen in line with Turing's imitation game (Turing, 1950) : the goal is to have the agent achieve some behavior, but the details of how this behavior was achieved by the agent do not matter to every modeler. In the classification of agents so far, only the extremes were classified: an agent is either human or artificial. However, due to the advent of semi-automated vehicles, less clear cut examples in between these extremes might emerge. Artificial agents might be in control of part of the driving task, while human agents are in control of other parts. And at times it might not be so clear to a human whether an agent is human or artificial (cf. Turing's imitation game Turing, 1950) . It is not yet clear how to best classify these in between states, therefore the focus in this paper is on the extreme ends first. \n Level of abstraction A second classification, when trying to model human behavior through an artificial agent, is the level of abstraction. In essence, the question here is: what part of human behavior or thinking is of interest to the modeler? David McClelland phrased it as follows: [cognitive models] \"are explorations of ideas about the nature of cognitive processes. In these explorations, simplification is essentialthrough simplification, the implications of the central ideas become more transparent\" (McClelland, 2009) . The quote by McClelland beautifully captures that one model of human behavior or thought (so far) cannot capture all of the complexity of human thinking, but instead requires focus. Various frameworks have been proposed to adjust focus systematically to the objective of the model at hand. Two are discussed next: Marr's levels of abstraction (Marr, 1982 ) and Newell's timescales of action (Newell, 1990) . Marr (Marr, 1982) proposes to approach human thinking from three levels: computational, algorithmic, and implementation. These levels require an increasing level of detail (see also special issue, Peebles and Cooper, 2015) . Agent models of driving related tasks occur at each level. Computational models specify why specific behaviors might be appropriate or efficient, without specifying what is done by an agent to achieve this. For example, such models might concern why multitasking in the car can sometimes be experienced as efficient by the user, even though objectively it is distracting (Janssen et al., 2012) , or why it might be efficient to forget information in general (Anderson and Schooler, 1991) . 
Algorithmic level models specify what strategies people use to achieve a goal, without specifying why these are used (computational question) or how this is implemented in the brain. Examples are models of visual attention in multitasking scenarios (e.g., Salvucci and Taatgen, 2011; Salvucci et al., 2005; Lee and Lee, 2019; Zhang and Hornof, 2014) . Finally, implementation level models describe how the algorithms are physically realized in the brain, without focusing on what task is achieved (algorithmic) and why (computational). Although such detailed implementation level models are available of cognition in more controlled tasks (e.g., Eliasmith, 2013) , to the best of the authors' knowledge there are not yet detailed implementation level models that implement multiple facets of driving. The level of abstraction influences the type of questions the models can address: why, what, or how behavior is realized. Another way of classifying the level of abstraction is using Newell's time scales of action (Newell, 1990) . This framework requires one to specify at what time scale the behavior is modelled and therefore also what type of data can be used to validate the model (see also Anderson (2002) and Chapter 1 of Salvucci and Taatgen (2011) ). Newell (1990) distinguishes four bands, which again are all relevant for specific aspects of driving: Biological (actions over ms, e.g. brain processes underlying ms level differences in braking response times, Gray, 2011; Lahmer et al., 2018) , cognitive (actions over seconds, such as how eye-movements affect steering movements, Salvucci and Taatgen, 2011; Kujala and Salvucci, 2015; Lee and Lee, 2019) , rational (actions over multiple seconds to minutes, e.g., how to best interleave attention, Janssen et al., 2012; Janssen et al., 2019c) , and social (actions over multiple minutes to years, such as development of trust, Forster et al., 2018) . Again, the time scale of a model determines what types of (research) questions can be addressed and also what type of data is needed to validate such theories, as these need to be in sync with the model: milliseconds (e.g., EEG, fMRI), seconds (e.g., eye-tracking, steering actions), minutes (e.g., behavioral choices), or hours (e.g., duration of travel, fuel efficiency) (see also chapter 1 in Salvucci and Taatgen, 2011) . Other frameworks for classifying the abstraction level of the model of an agent might also exist. Yet, the core question is: through which \"lens\" is one looking at human behavior and thought. Do minor changes over milliseconds in the physical implementation of an (artificial network) model matter (i.e., implementation level, biological band), or is there a need to understand why behavior at a societal level changes over years (i.e., computational level, social band) . Whatever the focus is, some simplifications are needed to allow focus of the research (McClelland, 2009) . What is important is that these simplifications are not made within the area that is of interest most. \n Modeling approach A third way of classifying models is in the modeling style or approach that is taken. For example, is it mostly conceptual in nature (e.g., Janssen et al., 2019a; Janssen et al., 2019d; Wickens, 2008) , describing a theory of a (mechanistic) process (e.g., Salvucci and Taatgen, 2011; Lee and Lee, 2019; Zhang and Hornof, 2014; Brumby et al., 2018) , or a (machine learning) data-driven model (e.g. Fridman et al., 2018) ? 
These three rough classes (conceptual, process, data-driven) rely increasingly less on theory and more on available data, and can all be relevant for driving models. There is a large breadth and depth in modeling styles available for theoretical studies and applied work (Oulasvirta, 2019) . \n Summary of modeling the agent and further refinements To summarize, models of humans can be captured under the general label of \"agent.\" Agent models can be classified in various ways of which three options were discussed. Although within each of these classification schemes specific classes were identified as well, sometimes models are a blend. For example, models might be able to tackle all three of Marr's levels (e.g., Lewis et al., 2014) , address behavior over multiple of Newell's time-scales (e.g., Janssen et al., 2012) , or combine datadriven (machine learning) methods with theoretically-driven process models (e.g., Anderson et al., 2016) . In driving situations where part of the driving task is automated (e.g., SAE levels 2,3, SAE International, 2018), there are situations where both a human agent and an artificial agent (i.e., the car's internal reasoning system) are involved in aspects of the driving task. In that sense, the expectation of the authors is that research in the years to come will focus more on shared control between human and artificial agents. In modeling the artificial agent that controls the car, many of the same considerations as were mentioned above hold. These techniques can range from simplified state machines with tight control-loops to more conceptual (flexible) models inferred from naturalistic data behaviors using machine learning. State of the art approaches to modeling an artificial agent are approaching the performance of humans at particular driving tasks by means of deep neural network architectures. \n Dimension 2: environment When environments are discussed in the transportation research communities, the environment can be considered as \"the world\" which is being modelled or the model which is being presented. Researchers and practitioners often draw a sharp distinction between 'laboratory' or 'simulated' on the one hand, and 'on-road' or 'real' or 'open-ended' driving experiments on the other hand, but do not do a lot to explicate the environment further. At first sight, one might assume that testing in an on-road study provides the richest environment for a study, and therefore to provide the most detailed insight about the human behavior. However, this is not necessarily the case. Whether the richness of the on-road driving environments (i.e., the track or roads on which the vehicle is driving) is captured depends heavily on the model of the environment that is made from the data captured by an instrumented car (Brackstone et al., 1999) . In an extremely limited instrumentation environment, for example, there might only be one camera pointed at the driver or out the window, and the model of the environment after the fact is very sparse. A richer model of on-road drives may capture the in-cabin environment, the traffic surrounding the car, the GPS, and the data from the CAN Bus of the car. The important thing is that the model enables an understanding of the relevant aspects of the environment for analysis. Limitations to the sensors and the internal model of the simulated environment can hinder the ability to address a research question because of missing contextual cues. 
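The point that an on-road study is only as rich as its post-hoc model of the environment can be illustrated with a minimal logging schema. The channel names and the sparse/rich split below are assumptions for illustration; they are not drawn from the paper or from any particular instrumented-vehicle platform.

```python
from dataclasses import dataclass, field
from typing import Optional, List, Tuple

# Hypothetical per-timestep record of an on-road environment model; the channel
# names and the sparse/rich distinction are illustrative assumptions only.
@dataclass
class EnvironmentRecord:
    timestamp_s: float
    driver_camera_frame: Optional[str] = None          # sparse setups may log only this
    forward_camera_frame: Optional[str] = None
    gps_lat_lon: Optional[Tuple[float, float]] = None
    can_bus_speed_kmh: Optional[float] = None
    surrounding_vehicles: List[dict] = field(default_factory=list)

    def richness(self) -> int:
        """Crude count of how many channels were actually captured at this timestep."""
        channels = [self.driver_camera_frame, self.forward_camera_frame,
                    self.gps_lat_lon, self.can_bus_speed_kmh]
        return sum(c is not None for c in channels) + (1 if self.surrounding_vehicles else 0)

sparse = EnvironmentRecord(timestamp_s=12.0, driver_camera_frame="frame_0012.jpg")
rich = EnvironmentRecord(timestamp_s=12.0, driver_camera_frame="frame_0012.jpg",
                         forward_camera_frame="fwd_0012.jpg", gps_lat_lon=(52.09, 5.12),
                         can_bus_speed_kmh=83.0,
                         surrounding_vehicles=[{"id": 3, "gap_m": 22.5}])
print(sparse.richness(), rich.richness())  # -> 1 5
```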
By contrast, in a virtual driving environment, the environment is specified by the driving simulation. Even within the virtual environment, the model has variations; passing traffic can be randomly generated, for example, or specified exactingly, car by car. Another consideration for modeling virtual environments has to do with whether the virtual environment is completely fictional (e.g., driving on a different planet than Earth), or if it is a virtual replica of a real-world environment; the latter better allows for translational experiments that compare virtual and real-world performance for similar driving scenarios (Blaauw, 1982). \n Dimension 3: scenarios With the agent and the environment identified, one can then specify the scenarios for testing. If the simulated environment can be seen as "the world", then the scenario can be considered as "the way one moves through the world". That is: what types of situations are encountered or not? The spectrum of scenarios that are being considered in the model or simulation can range from unconstrained, where nothing in the environment (real or virtual) is manipulated, to highly constrained, where everything that the human user observes was somehow designated by the researcher. This often relates to the operational design domain (SAE International, 2018) or the context in which the study is being examined. For example, given any specific world (naturalistic or virtual), one might be interested in how a system and agent act in a scenario where there is fog, or a specific construction works scenario. In more unconstrained (open-ended) scenarios, specific scenarios will be encountered (e.g., fog, construction works) but only as they naturally occur during a drive. Most naturalistic driving studies such as SHRP2 (The National Academies of Sciences Engineering and Medicine, 2019) and UDRIVE (Udrive Consortium, 2019) are designed to be more toward the unconstrained end of the spectrum. The researcher does not constrain the destination, driving behavior, or performance of the user. In other words, the scenarios that people encounter are left open-ended. The models behind naturalistic driving studies are typically designed for observation only, without any intervention. The participants know that their vehicles are instrumented, and, as such, might drive more carefully; nevertheless, the environments (or: the world) that they drive through are not affected by any research interventions, and so the scenarios that occur, whether they are everyday traffic, distracted driving, or near misses, are natural. While there are many studies that can be conducted in a real-world setting, they are not all 'unconstrained'. Hence the distinction between the environment and the scenario. One might have a fully functional car from which a high level of detail can be measured (i.e., high on the 'real' dimension of the environment), but testing is only intended under specific conditions such as construction works, or under conditions of rain (i.e., highly constrained). There are also projects in between these extremes. For example, Volvo's "Drive Me" project (Victor et al., 2017) was intended to have everyday drivers experience driving in a car that has higher levels of automation, with some in-car technology achieving SAE level 4 automation on specific roads (SAE International, 2018).
Although the project has since scaled back in ambition (Bolduc, 2017), the original study was intended to include a fully operational vehicle with a high level of data collection and instrumentation (i.e., high realism on the environment scale). However, the automated technology within the vehicle could only operate within specific operational design domains, thereby limiting the scenarios under which researchers could study human behavior in automated vehicles. Although the aforementioned might seem to imply that open-ended scenarios can only be run in naturalistic environments, this is not the case. Simulated environments can also run relatively open-ended, unconstrained studies, depending on how open-ended and realistic the environment is modelled in the simulator and to what extent it allows the driver freedom of action. If, for example, the simulator has a detailed map of all roads in a city (i.e., a "microworld"), and the car has almost all the functionality of a real car, then a wide range of open-ended scenarios might be possible to simulate. \n What is a "controlled" study? One of the things that the classification framework (Fig. 1) makes clearer is the different ways that a "controlled study" is controlled. Each dimension (agent, environment, scenario) can have its own level of control. To make this even more specific, a distinction can be made between different degrees of regulation and fidelity for the agent, environment, and scenario. \n Regulation What is called "regulation" is often referred to as "manipulation" or "control" in experimental settings; it is the degree to which different participants experience the same thing. Regulation can be applied to the agent, the environment, and the scenario. \n Regulation of environment and scenario The most obvious coupling of regulation is perhaps with environment and scenario. Within the environment and scenario, there can be a high degree of variation in control, even within laboratory studies. Simulation environments are often associated with high degrees of regulation. On the far end, in simulated automated driving scenarios, a driving simulation environment can be so controlled that every single participant experiences exactly the same setting, with exactly the same cars passing at exactly the same time in the simulated experience (i.e., a highly regulated scenario). In such tightly regulated studies, the only variations might come from human action such as the human's eye gaze or steering actions. Although these studies are appropriate for testing parameters of human behavior and thought (e.g., Janssen et al., 2012; van der Heiden et al., 2019), they are less generalizable to everyday traffic scenarios. Therefore, in the more typical laboratory driving simulation studies, other vehicles in the simulation environment around the participant vehicle are spawned stochastically, so that there is some bounded variation in participant experience; any experiment in which the participant drives introduces yet another source of variation. On the far less regulated end, experimenters such as Feuerstack et al. (2016) use the simulation environment as a "theater" in which drivers can collaborate to play out the interactions they have on the road improvisationally (Schindler et al., 2013). In on-road research, more or less regulation is also possible.
While it is not usually possible to make it so that every participant experiences exactly the same experience, on-road experiments can feature set courses, where every participant experiences the same roads, in roughly the same times of day (like, Baltodano et al. (2015) ), or set situations, where every participant experiences the same scenario (for example, commuting home) even if they have wholly separate routes, like Zafiroglu et al. (2012) . In other words, regulation, manipulation and control can be exerted on both the environment (i.e., what types of vehicle interactions are allowed) and on the scenarios that are experienced by the participant. Control can be loose or tight on none, one, or both of these dimensions. \n Regulation of agents Regulation can also be applied to the agent, especially for studies with human agents. A first decision on regulation is how the study participants are sampled. On the very tightly regulated end, participants might come from a very specific population, such as psychology undergraduate students. Tight regulation yet more variation in human behavior might be observed when participant statistics meet those of a specific driving population (e.g., match the national ratio male-female drivers, or distribution of drivers' ages, as in, van der Heiden et al., 2019) . Finally, regulation might be loose when participants are gathered in a less structural way (e.g., opportunity sampling). Sampling might be particularly important in cases where the behavior might be tied to characteristics of the sample -for example years of driving experience, familiarity with local cultural or societal norms or expectations, experience with semi-automated vehicles, or experience with particular road configurations (e.g., driving on the left-side or right-side of the road). A second decision on regulation is whether the participants are asked to perform with or without manipulation of their cognitive state. For example, participants might be performing after tight administration of alcohol, drugs, or other substances (e.g., Martin et al., 2013; Veldstra et al., 2012; Wester et al., 2010) . Or participants might be sleep deprived (e.g., Eoh et al. (2005) , for models see e.g. Gunzelmann et al. (2011)) , or be manipulated into a specific affective state (Jeon, 2015) . A third decision on regulation of agents is how free the agents are in their actions and how participants are instructed. On one extreme end, naturalistic driving studies (e.g., The National Academies of Sciences Engineering and Medicine, 2019; Udrive Consortium, 2019) might not place any constraints on the driving task: participants drive where and when they want to drive. On the other extreme end, the user's task might be very narrowly defined. This might be akin to for example Fitts' law experiments, in which participants are instructed to make specific ballistic movements over and over (MacKenzie, 1992) . In between these extremes are studies in which some overarching criteria, such as a user's general priority, is manipulated (e.g., safe driving or fast performance of a secondary task while driving, Brumby et al., 2011; Janssen et al., 2012) . In other words, regulation can also be applied to the agents of the task, and might be exerted implicitly due to the sampling of participants or the instructions for the task. 
Although regulation was mostly discussed for human agents, similar concerns apply when human behavior is captured in artificial agents: are these models based on data sets with a wide variety of human participants, or only a tightly regulated sample in a tightly regulated task? \n Fidelity Fidelity has to do with the degree to which the simulation or model of the agent, environment, or scenario mimics real-world or anticipated future-world drivers and driving situations. In the lab, the fidelity of simulated environments can range from low, where the scenarios are observed from a bird's-eye view and the operator controls the vehicle using a keyboard, to very high, where drivers are fully immersed in the driving environment. The low-fidelity environments are appropriate if the primary goal is to represent strategic-level decisions. The more typical driving simulation environments have much higher fidelity and feature different degrees of immersion, where a full-vehicle chassis with a motion base delivers the experience most like driving on the road. Open-source game engines such as Unity and Unreal, which enable the development of rich-graphics simulators such as CARLA (Dosovitskiy et al., 2017), have made high-fidelity driving simulation tools more readily accessible to a broader range of researchers and designers. In on-road environments, the fidelity is usually high, with real road, weather, and lighting conditions, but the location of the testing environment (for example, on a test track vs. on (closed) public roadways) can affect the fidelity of the driving task. For the fidelity of the agent, one can consider artificial agents and human agents. For artificial agents, fidelity can again differ between modeling approaches. Automated driving systems have traditionally employed low-level perception controllers such as PID to perform driving tasks. These models typically lack flexibility and performance when compared to human agents in variable environments. However, richer, higher-level driving automation models are being developed using machine learning techniques that rival or even outperform human drivers (Grace et al., 2018). Concurrently, when modeling human agents and their thought processes, some approaches care about the fine-grained details of the cognitive process (e.g., Zhang and Hornof, 2014) while others only use approximations of human perception and action (e.g., Hoogendoorn and Bovy, 2001). For human agents, in some sense the fidelity is "high": an actual human performed the task. However, depending on how the human sample and the behavior of the sample were regulated, the behavior of these agents might be more or less representative of a wider population of humans and a wider set of human cognitive and affective states. Another aspect to consider in relation to fidelity is the method and process of data collection. Even in a study with a high-fidelity set-up (i.e., human agents, driving in the real world, in an unconstrained scenario), the reliability and usefulness of the study outcomes may be negatively impacted if the data collection protocol was not carefully designed. It is therefore crucial to understand the limitations of your equipment in terms of quality, accuracy, frequency (and resolution) of sampling, and the reliability of the measurement as a proxy for the intended variable(s) of interest. Take, for example, the fidelity of data collection on a human agent.
Let's say a study is interested in capturing a cognitive aspect of the human driver using the relationship between eye movements and vehicle steering movements (actions over seconds; cf. Newell, 1990). In such a study, eye gaze data would need to be collected at a level that can detect differences within seconds. Further, the eye-tracking device will need to be calibrated and checked for stability and tracking before data collection on each participant. If such protocols are not in place, any inferences made from the data could be incorrect. This becomes an even larger concern when methods and tools are used that have not been independently validated or for which the underlying algorithms are not known. For example, commercial devices that can predict "alertness" based on physiological measures may have limited validity if the underlying algorithms cannot be shared or verified. In such cases, the conclusions drawn from such tools would be questionable. Data quality also impacts the dimensions of environment and scenario. For these dimensions, the sensors (or simulation code) might also not register relevant information (e.g., what other traffic is on the road), or not register it frequently or in enough detail (e.g., estimate distance to other cars in increments of 10 cm, whereas increments of 1 cm are needed for analysis) or reliably enough (e.g., when a sensor of the car is obscured). The interfaces between the data, systems, applications, software, and platforms are also important to consider. A seemingly small detail or choice in the set-up of a study can have large implications for the inferences that are made. For example, if the eye-tracking glasses shifted during the study, one might conclude that a participant did not pay sufficient attention to the road, whereas this conclusion was actually due to measurement error. Similarly, if a simulation was not calibrated correctly to identify the distance between the test vehicle and surrounding vehicles, then one might incorrectly conclude that an appropriate distance was held at all times. If these results are not further tested (e.g., by formalizing them in computer simulations of the underlying cognitive process, or testing them in replication studies), then the wrong conclusions can steer the larger field in the wrong direction. For example, incorrect results can lead to suggesting designs or software that are not effective or are based on incorrect principles (e.g., assuming that a vehicle holds an appropriate distance to other vehicles, whereas it does not). \n 4. Actual, virtual, and mixed reality through simulation on different dimensions: where are the research gaps and opportunities? Using the three dimensions (agents, environments, scenarios), one can now more clearly position research that simulates none, one, two, or three of these dimensions. Studies where none of the dimensions is simulated can be considered "actual reality", studies where all dimensions are simulated can be considered "virtual reality", and those where at least one but not all dimensions are simulated can be considered "mixed reality". \n Collaboration between human and artificial agents As was already mentioned in the section on agents, one important emerging area is that where human and artificial agents interact in a single environment. This is the case for humans that interact with semi-automated technology (e.g., SAE levels 2 and 3; SAE International, 2018).
In these instances, the reasoning system behind the automation can be considered an artificial agent that senses and acts upon the environment, but also depends on input from the human. Such environments require a good understanding of the mental model of the human and the mode of the vehicle (Janssen et al., 2019a). Another area where human and artificial agents interact is in studies of dyadic interaction. In the bottom of Fig. 1, such studies are placed in the bottom-right quadrants of studies with human agents (left) and studies with artificial agents (right). In dyadic interaction studies, two or more people are involved in a simulated world, and can see each other's actions through an avatar (or other car) that moves around in their shared virtual world. This form of interaction involves both human agents and a virtual representation of the agent, therefore positioning this work both in the cluster of human and artificial agents. A conceptual example of such a study is described in Doric et al. (2016). The remainder of this section will explicitly discuss studies in which there is either a human agent or an artificial (simulated human) agent.

Simulated scenarios and/or environments with human agents

The first consideration is of cases where human agents are involved in the driving (e.g., bottom-left of Fig. 1), but either the scenario or the environment might be simulated or constrained. Perhaps one holy grail of research is to observe driving in real environments with open-ended, unconstrained scenarios (top-right quadrant). Examples can be found in naturalistic driving studies (e.g., The National Academies of Sciences Engineering and Medicine, 2019; Udrive Consortium, 2019). The challenge with running these studies is that they can require more resources in terms of time, equipment, money, and personnel than traditional simulation studies. They are therefore typically overseen by large consortia, and are not a realistic choice or option for individual researchers outside of such a consortium. At the other extreme, both the scenario and the environment can be simulated (bottom-left quadrant in Fig. 1). Examples are classical driving simulator studies and test track studies. Within this quadrant, there is a gradation of realism, but in general it makes use of simulation on both axes. Studies like these are more common in the transportation research communities. The reason might be that although they also require extensive resources (e.g., to buy and maintain a simulator), these are more one-off expenses, and cheap alternatives are available, such as a combination of commercial steering wheels with open-source driving environments such as Open-DS (Math et al., 2012), and off-the-shelf in-vehicle infotainment simulation environments such as Skyline (Alvarez et al., 2015). The more interesting, and relatively under-explored, quadrants are those in which either the environment or the natural scenario is simulated/controlled, but not both. It could be argued that current automated driving systems with functionalities at SAE Levels 3 and 4 are tests of constrained scenarios in real environments (top-left quadrant, "on-road real world autonomous driving"). The reason is that such vehicles can function in specific operational design domains (e.g., an adaptive cruise control might only function under regular highway conditions), or to use the terminology from the framework in this paper: in specific (controlled) scenarios.
Another example is the Ghost Driver project (Rothenbücher et al., 2016) , where a very controlled scenario (namely: a car that seems to drive without humans inside it, due to camouflage) is placed in a real naturalistic environment (everyday pedestrian crossings on a campus). This allowed for rapidly testing how humans interact with future technology. The second relatively under-explored area is open-ended scenarios with simulated environments. This can for example be achieved through openended Wizard of Oz simulation studies and improvisational or theater studies (Mok et al., 2015; Feuerstack et al., 2016; Schieben et al., 2009) . In these cases, the world is simulated in some form (e.g., through a driving simulator or through theater enactment), while also allowing the participant to experience a wide set of scenarios. The benefit of only simulating the environment or the scenario, and not both, is that it requires less resources compared to the naturalistic driving simulators, while at the same time allowing for studies of more naturalistic and less controlled human interaction. The authors see high potential in these research methods for transportation research that wants to explore human interaction with novel (in-car) vehicle technology. With the rapid development of automated technology (Janssen et al., 2019b) , simulation of the environment and/or scenario allows studies of human interaction with automated technology even if such technology is not (yet) commercially available, or not matured enough to test on the open road. \n Simulated scenarios and/or environments with artificial agents The large majority of studies with simulated human agents (bottomright Figure in Fig. 1 ) also simulate the environment and control the scenario. A special and emerging case is formed by studies from automated driving companies that use full simulation to develop their automated driving technologies. Each billion of miles of driving experience collected on real roads with test fleets are complemented with several orders of magnitude more in simulated environments. Using tools like Carcraft (Madrigal, 2017) , technology companies can identify interesting driving scenarios and iterate through a large number of derived conditions using virtual models of vehicles and other road users in a cost-effective manner. For these studies, the artificial agent acts somewhat like a human, but the focus is mostly on the impact that the human has on the technology, road behavior, and safety. A related but different perspective is taken by studies where simulations of a human agent are used to better understand the human mind. Perhaps one of the best examples is Distract-R (Salvucci et al., 2005) , and its associated cognitive models (Salvucci and Taatgen, 2011) . In Distract-R, a simulated agent drives in the same virtual environment as is used in human studies. Distract-R interacts with the car and the environment through its virtual hands and eyes. More often than not, other agent models have even more controlled and limited interaction with the environment. For example, steering actions might be achieved through a simple mathematical function (e.g., Janssen et al., 2012) , or the model might simply model traffic flow of a hypothetical scenario as the concern is not with the behavior of any individual model but with the collection/flock of cars (e.g., Hoogendoorn and Bovy, 2001) . Cases where a simulated agent acts only in a real environment, but with a constrained scenario (top-left quadrant in Fig. 
1) include crash test dummies. These simulate a specific scenario (i.e., a crash) in a real environment to test the impact the crash has on the vehicle and the simulated human (a dummy) inside it. Other studies might use configurations of the automation to elicit particular emotional responses in drivers and passengers as a result of its driving reactions to road events (Alvarez et al., 2019). Yet another example is the Stanley parking robot (Stanley Robotics, 2019), which can park a car within a constrained scenario (a parking garage). Cases where a simulated agent acts in a simulated environment but a more open-ended scenario (bottom-right quadrant) include studies of dyadic interaction. Note that these involve, in a sense, both artificial agents (the avatars) and real human agents. Other examples of work in this environment are simulators where people perform relatively open-ended interaction with automation: a manual driver encountering an autonomous car on the simulated road, pedestrian-AV interaction (e.g., Mahadevan et al., 2018), or bicyclist-AV interaction (e.g., Faghri and Egyháziová, 1999). The fourth and final quadrant (top-right) is one where a simulated human drives in a natural scenario within a real environment. The ideal example would be formed by studies using an SAE Level 5 (SAE International, 2018) fully automated and autonomous vehicle. At the moment such technology does not yet exist. Instead, systems with limited automation that can drive in specific operational design domains (e.g., specific scenarios) achieve part of this functionality. There are also other examples: recent studies have used pedestrian dummies (which walk like real pedestrians, based on motion-capture studies) to test how automated vehicles respond to these pedestrians in various open-ended natural scenarios in a real environment (Doric et al., 2017; Cañas et al., 2018). This can be interpreted as a more open-ended form of the "crash test dummies", as the dummy in the pedestrian study has more movement and can act in more unconstrained scenarios.

General discussion

This paper provides a framework for examining human-vehicle interaction with respect to three dimensions that can involve simulation or modeling: agents, environments, and scenarios. The claim is not that one form of simulation is better than others, but rather that each dimension provides insight into different but complementary (research) goals. Each simulation method targets different objectives and has its own strengths and limitations. Moreover, within one research project, researchers might be simulating none, one, or many of these three dimensions. Although modeling and simulating are sometimes thought of as a way to exert control (i.e., exert regulation while achieving fidelity), each of the dimensions can differ in how much regulation is exerted and how much fidelity is achieved. Having more precise terminology to study modeling and simulation is useful for the transportation sciences, given their interdisciplinary nature. Contributions to the field of transportation science are made from, among others, engineering, design, the social sciences, and the safety sciences. Each of these fields brings its own terminology, and the same words or terms might differ in meaning across fields. The aim behind the framework in this paper (i.e., Fig. 1) is to aid precision when discussing models and simulations.
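As a minimal illustration of how the framework's distinctions can be made operational, the sketch below classifies a study as "actual", "virtual", or "mixed" reality depending on which of the three dimensions are simulated. The class and flag names, and the example codings, are our own shorthand for illustration only, not part of the original framework.

```python
from dataclasses import dataclass

@dataclass
class StudySetup:
    """One possible coding of a study on the three dimensions of the framework.

    Each flag is True if that dimension is simulated/modelled rather than real:
    agent (artificial vs. human), environment (virtual vs. on-road),
    scenario (constrained/scripted vs. open-ended).
    """
    agent_simulated: bool
    environment_simulated: bool
    scenario_simulated: bool

    def reality_class(self) -> str:
        flags = [self.agent_simulated, self.environment_simulated, self.scenario_simulated]
        if not any(flags):
            return "actual reality"   # nothing simulated
        if all(flags):
            return "virtual reality"  # everything simulated
        return "mixed reality"        # at least one, but not all, dimensions simulated


# Example codings (illustrative, not authoritative):
naturalistic_driving = StudySetup(False, False, False)  # human, on-road, open-ended
classic_simulator    = StudySetup(False, True,  True)   # human in a scripted simulator drive
ghost_driver         = StudySetup(False, False, True)   # staged "driverless" car, real campus

print(naturalistic_driving.reality_class())  # actual reality
print(classic_simulator.reality_class())     # mixed reality
print(ghost_driver.reality_class())          # mixed reality
```

Such a coding is deliberately coarse; as the limitations section below notes, each dimension is really a spectrum rather than a binary flag.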
The development of this framework helped to identify areas that are currently not frequently used within the transportation research communities. Specifically, the paper identified that there is room for studies which simulate either the environment or the scenario, and not both. Such studies are useful as they require less resources compared to the naturalistic driving studies (that simulate none of the dimension), while at the same time allowing for more natural (high fidelity) and less regulated human interaction than studies that simulate both the environment and the scenarios. In effect, this allows researchers to study interaction with future prototypes relatively easily in naturalistic settings. Another area that was identified are studies in which human and artificial agents interact. Examples include studies of human interaction in semiautomated vehicles (e.g., at SAE levels 2 and 3, SAE International, 2018) and studies of dyadic interaction in simulated environments. A practical concern of researchers is that they might not always have the right resources, infrastructure and skills to conduct studies in all of the identified quadrants in Fig. 1 . Fortunately, the areas that were identified as having potential for future work (by comparison) do not necessarily rely on large infrastructure or resources. \n Benefit of the framework for the transportation research community Another benefit of the framework for the transportation research community is that it provides a systematic way to structure studies in which simulated and non-simulated studies can be tested. The need for a consistent tool chain containing both virtual and real tests was already highlighted (e.g., by, Spies and Spies, 2006; Schuldt et al., 2015) . The value of the framework for the community is that it makes explicit that there are three dimensions on which the extent between \"virtual\" and \"real\" can be varied. For example, to allow testing of different (safety-critical) scenarios (dimension 3) without the risk of injuries, the proposed framework identifies the need to integrate both automated and manual vehicles (dimension 2: environment) with modelled and real human agents (drivers, pedestrians, cyclists) (dimension 1: agents) into a single test bed. Based on the actual configuration of the individual (real) parts, subjects can face different levels of immersion. A human participant can, for instance, feel the real kinesthetic experience when driving with a real vehicle on a closed test-track as the actual scenery is presented to them using virtual reality devices. On the other hand, motion tracking of participants in a simulated environment (e.g., CAVE (Cruz-Neira et al., 1992) or driving simulator) allows integrating vulnerable road users into the same scene in a similar manner. The resulting scenario, now containing realistic human behavior, can finally be injected into the control unit or sensor system of the test vehicle (automated car) or presented to it using movable dummies on a real test track. In that sense, the proposed framework tries to bridge the gap between simulation and expensive real driving tests (Frey, 2016; Kühbeck and Huber, 2015) . Different researchers and practitioners will have different (research) questions. These questions will influence which dimension (agent, environment, scenario) is most important to keep under little or high regulation and fidelity. 
For example, in a study that is mostly focused on studying the human mind, tight control over the environment and scenarios might be needed to ensure that one can study the cognitive principle of interest. By contrast, in a study of an actual vehicle, one might want to keep the environment as realistic as possible and also allow humans to express a wide variety of behaviors, so as to be able to see the impact of such naturalistic behavior. \n Translational research An open question is how results from one setting can be translated into another settings. Specifically: how can conclusions that were drawn in a simulated environment translate to its non-simulated equivalent? Within the field of cognitive science, there have been various studies that looked at how simulations of theories of human behavior and thought relate to actual human performance (see section on modeling the agent). For the dimensions of environment and scenarios, studies comparing \"simulated\" with \"real\" environments have typically looked at these dimensions conjoined, by comparing the validity of real (on-road) experiments with simulator studies, e.g., (Wang et al., 2010; Riener, 2010; Frey, 2016) . The framework of this paper (Fig. 1 ) already suggests that there is value in separating these two dimensions. Nonetheless, lessons can already be learned about transfer from one setting to the next. For example, in (Riener, 2010) , driving performance and interaction with an in-car interface was compared between both a lowfidelity driving simulator and an on-the-road driving experiment (i.e., mostly manipulating environment, and slightly manipulating scenario due to the natural variety of traffic conditions in on-the-road driving). Results indicated that drivers respond faster to steering requests in the driving simulator (by 13%) as compared to real driving. The explanation for this difference can most likely be derived from the fact that participants encountered fewer demands in the first (simulated environment) compared to the second (naturalistic environment) setting. However, there were also parallels. For example, the rank-order of performance with different incar interfaces was the same in the simulator and in the on-the-road study. Unfortunately, based on this study and others, the consensus is that it is not possible to derive a simple (e.g., linear) conversion factor or table to describe effects emerging in the reality with results from simulation or simulator studies. Although translational research in transportation sciences is typically focused on translating from simulation (or model) to the realworld, there is an important reason to look at translation in both directions: simulations and models are helpful in exploring a wider variety of situations and contexts, including extreme cases that might not always happen even after prolonged testing in real-world conditions (humans in real environments with unconstrained scenarios). Take the example of Ghost Driver (Rothenbücher et al., 2016) , which was aimed at testing how humans react to technology that is not yet commercially available (fully self-driving cars). By simulating the scenario (a car that seems to drive without humans inside the vehicle using camouflage) in a real naturalistic environment (everyday pedestrian crossings on a campus), it allowed for rapidly testing how humans interact with future technology. 
Without the simulation of the car, the response of real pedestrians in their everyday environment would have been difficult if not impossible to test. Another value of simulations and models is that they can help distill the scientific principles and mechanisms that are at the heart of a behavior or situation, and thereby aid understanding. This is consistent, for example, with the original ambitions of AI, as expressed in the proposal of the famous 1956 Dartmouth workshop: "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it" (McCarthy et al., 2006). Given the value of both simulated and non-simulated research, one implication is that research should cover several of the quadrants in Fig. 1 before definitive conclusions about human-vehicle interaction are drawn, for example for regulatory purposes. This supports an understanding of behavior in less constrained, real-world conditions while also allowing for more rigorous, focused testing under controlled experimental conditions.

Limitations

There is an opportunity to position future studies within each of the three dimensions. In this paper, the dimensions were based on the extreme conditions to contrast human agents with non-human agents, simulated road environments with real (on-the-road) environments, and constrained scenarios with open-ended, unconstrained scenarios. For each dimension these extremes might be clear, and a relative position of any two studies might also be possible for each dimension. However, it will be difficult to associate each study with an absolute number, position, or rank on each dimension such that studies can be directly quantified and compared to future studies. Although some form of ranking or rating might be desirable in practice, it would not do justice to the inherent diversity of options that is available within each dimension. For example, even within the set of simulated vehicles there is a myriad of characteristics that can differ along multiple dimensions (e.g., visual realism, ability to induce motion, types of actions that are allowed by the human driver), and collapsing each dimension into a single number would require comparing apples with oranges. Similarly, even for a category such as human or non-human agents, there might be more than a binary distinction, in that artificial agents can differ on multiple dimensions (e.g., Marr's levels of abstraction (Marr, 1982) and Newell's time scales (Newell, 1990)) and can be modelled using various frameworks (Oulasvirta, 2019). Moreover, in line with Turing's "imitation game" (Turing, 1950), future artificial agents might be hard to distinguish from human agents. Apart from refinement within the levels, there is also room to consider other dimensions that have not yet been included explicitly in the framework. For example, the discussion of the agents dimension in this paper has mostly considered human drivers, yet there can also be humans that have other roles inside the car (e.g., passenger, navigator) and outside the car (e.g., cyclists and pedestrians; Doric et al., 2016).

Conclusion

Studies of human-vehicle interaction can entail modeling or simulating the agent, the environment, or the scenario.
Although colloquially researchers in the transportation research community and related communities sometimes only distinguish "simulated" from "non-simulated" settings, this paper identified that most studies typically model only some of these three dimensions, and that different levels of regulation and fidelity can be exerted on each dimension independently. The explicit distinction of agent, environment, and scenario can aid researchers and practitioners who are consumers of these simulations, as well as industry and regulatory agencies. The framework provides a way to classify studies and helps researchers, engineers, and designers make better decisions regarding the simulation tool to use for the research question of interest.

CRediT authorship contribution statement

The author sequence was determined using the SDC approach. All authors contributed to the conceptualization and the writing of the paper. Christian Janssen coordinated all activities. Linda Boyle developed the visualization.

Fig. 1. Three dimensions of simulating related to human-vehicle interaction with example studies indicated (see also text).

Formal Verification of Hybrid Systems
Rajeev Alur

REVIEW OF CURRENT APPROACHES

Model-based design offers a promising approach for detecting and correcting errors in early stages of system design [33, 37, 49]. In this methodology, a designer first constructs a model, with mathematically precise semantics, of the system under design, and performs extensive analysis with respect to correctness requirements before generating the implementation from the model. Embedded systems, such as controllers in automotive, medical, and avionic systems, consist of a collection of interacting software modules reacting to a continuously evolving environment. The appropriate mathematical model for the design of embedded control systems is hybrid systems, which combine the traditional models for discrete behavior with classical differential- and algebraic-equation-based models for dynamical systems. Such models can capture both the controller (the system under design) and the plant (the environment, with continuously evolving physical activities, in which the system operates).
Given that (1) automated verification tools have recently been successful in finding bugs in "real-world" hardware protocols and device drivers [13, 15], (2) tools such as Stateflow/Simulink are commonly used in the automotive and avionics industries for modeling, and (3) high assurance is a necessity in safety-critical applications that deploy embedded software, formal verification of hybrid systems has been a vibrant research area for the past 20 years. This article is an overview of current research directions, aimed at providing an introductory "roadmap" rather than a comprehensive survey.

Modeling

In the early 1990s, formal models for discrete reactive systems were integrated with models for dynamical systems [2, 41]. The model of hybrid automata [1, 2, 29] has emerged as a popular choice. A hybrid automaton is an extended finite-state machine whose state consists of a mode that ranges over finitely many discrete values and a finite set of real-valued variables. Each mode is annotated with constraints that specify the continuous evolution of the system, and edges between modes are annotated with guards and updates that specify discrete transitions. For example, the behavior of a self-regulating thermostat can be described by a hybrid automaton with two modes, on and off, and a single real-valued variable T modeling the temperature. To specify how the temperature changes in the mode on, we can annotate the mode with a differential equation, say, Ṫ = k(70 − T), for a constant parameter k. Alternatively, we can use a differential inequality, say, k1 ≤ Ṫ ≤ k2, for two constants k1 and k2, to specify the dynamics approximately using only bounds on the derivative. Associating the mode on with an invariant constraint T ≤ 70 specifies that the system must exit this mode before the temperature exceeds 70. An edge from the mode on to the mode off with the guard T ≥ 68 specifies that, whenever the temperature exceeds 68, the system can discretely change its state by updating the mode to off. A hybrid automaton can naturally be interpreted as an infinite-state transition system, and this forms the basis for formalizing classical notions such as safety verification, property-preserving abstractions, and simulation relations, for hybrid systems. Hybrid automata are analogs of state machines, with little support for structured descriptions, and consequently, a number of formalisms have been proposed to facilitate modular descriptions of complex systems. These include modeling environments such as Shift [17] and Ptolemy [19] for hierarchical specifications of hybrid behavior; models such as hybrid I/O automata [40], hybrid modules [6], and Charon [3], for compositional treatment of concurrent hybrid behavior; and differential dynamic logic for logic-based specification and compositional analysis of sequential hybrid behavior [43]. Commercial modeling tools such as Stateflow/Simulink (see www.mathworks.com) and Modelica (see www.modelica.org) are routinely used in a wide range of industries. Conceptually, it should be possible to compile models expressed in such tools into formal notations such as hybrid automata. This has turned out to be difficult in practice due to the richness of features in commercial tools and the lack of a standardized formal semantics of such features (see [35] for efforts aimed at semantics-preserving transformations across modeling notations).
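To make the thermostat hybrid automaton concrete, here is a minimal simulation sketch. The on-mode dynamics Ṫ = k(70 − T), the invariant T ≤ 70, and the on-to-off guard T ≥ 68 come from the description above; the off-mode dynamics and the off-to-on guard are assumptions added here purely to close the loop, and the code resolves the automaton's nondeterminism by switching at the first time step where a guard is enabled.

```python
def simulate_thermostat(T0=66.0, k=0.5, k_cool=0.3, dt=0.01, t_end=20.0):
    """Forward-Euler simulation of a two-mode thermostat hybrid automaton.

    Mode 'on':  dT/dt = k * (70 - T); invariant T <= 70; guard to 'off' when T >= 68.
    Mode 'off': dT/dt = -k_cool * (T - 60)   (assumed dynamics, not from the source);
                guard back to 'on' when T <= 65 (assumed guard).
    """
    mode, T, t = "on", T0, 0.0
    trace = [(t, mode, T)]
    while t < t_end:
        if mode == "on":
            T += dt * k * (70.0 - T)
            # Discrete transition: the guard T >= 68 enables the switch; the invariant
            # T <= 70 would force it to happen before the temperature exceeds 70.
            if T >= 68.0:
                mode = "off"
        else:
            T += dt * (-k_cool * (T - 60.0))
            if T <= 65.0:   # assumed guard for switching the heater back on
                mode = "on"
        t += dt
        trace.append((t, mode, T))
    return trace


if __name__ == "__main__":
    for t, mode, T in simulate_thermostat()[::200]:
        print(f"t={t:5.2f}  mode={mode:3s}  T={T:5.2f}")
```

The automaton itself allows the switch anywhere between 68 and 70; a verification tool must account for all such resolutions, which is what makes the analysis harder than a single simulation run.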
Symbolic reachability analysis

In the safety verification problem for hybrid systems, we are given a hybrid systems model M, a set I of initial states of M, and a set S of "safe" states of M, and we want to check whether every execution of M starting in an initial state always stays within the set of safe states, and if not, report a violating execution as a counter-example. For instance, given the hybrid systems model of a collision avoidance protocol, we want to check whether the distance between two vehicles stays greater than the safety threshold for given initial conditions. Let us first note that the safety verification problem of hybrid systems cannot be solved algorithmically: with the exception of classes that severely restrict the allowed dynamics of real-valued variables, such as timed automata [5] and initialized rectangular automata [32], the safety verification problem is undecidable [1, 32]. Symbolic reachability algorithms for safety verification try to compute the set R of reachable states of a hybrid system M in an iterative manner starting from the set I of initial states. The algorithm checks, at every step of the iteration, if the current set R of reachable states is a subset of the set S of safe states, and if not, it terminates with a counter-example. In general, there is no termination guarantee, as the algorithm may keep adding more and more states to R without being able to deduce that the system is safe. The key challenge to efficient implementation is to identify a suitable representation for the set of states that supports the operations used by the iterative reachability computation. The tool HyTech was the first model checker to implement symbolic reachability analysis for hybrid systems [7, 31]. For a hybrid automaton with n real-valued variables, each reachable set is represented by associating a finite union of n-dimensional polyhedra with each mode, where a polyhedron is represented as a conjunction of linear inequalities over variables. Such a polyhedra-based representation is appealing due to the use of polyhedra in many computing applications and the availability of open-source libraries for manipulating them (cf. [12]). The models are restricted to the class of linear hybrid automata (LHA): guards, updates, and invariants involve only linear expressions, and the dynamics is specified using differential inequalities that are linear constraints over first-order derivatives. For example, the LHA-admissible dynamics ẋ = ẏ ∧ 1 ≤ ẋ ≤ 2 describes two-dimensional motion along the diagonal line with bounds on speed. For LHA, the polyhedral representation is closed under both discrete transitions corresponding to mode-switches and continuous evolution according to differential constraints in a mode: given a polyhedron R describing the set of current states, the set R′ of states that the system can reach after a discrete mode-switch to a mode m is a polyhedron that can be computed effectively from R, and the set R″ of states that the system can reach as a result of letting it evolve continuously according to the dynamics associated with the mode m is also a polyhedron that can be computed effectively from R′. The most commonly used dynamics in mathematical design of control systems involves linear differential equations: if x represents the vector of state variables and u represents the vector of input variables, a linear system is described by the equation ẋ = Ax + Bu, where A and B are matrices of appropriate dimensions.
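A minimal sketch of propagating a state of such a linear system over a time step, using the matrix exponential for the autonomous case u = 0; this is also the basic vertex-image operation used in the flowpipe construction discussed below. The specific matrix A, the step size, and the initial point are arbitrary illustrative choices, not taken from the article.

```python
import numpy as np
from scipy.linalg import expm

# Linear dynamics x' = A x for a lightly damped oscillator (illustrative example only).
A = np.array([[0.0, 1.0],
              [-1.0, -0.2]])
Dt = 0.5                      # time horizon of one reachability step
Phi = expm(A * Dt)            # state-transition matrix e^{A*Dt}

x0 = np.array([1.0, 0.0])     # e.g., one corner vertex of an initial polyhedron
print(Phi @ x0)               # its exact image after Dt time units
```

Because the map x ↦ e^{A·Dt} x is linear, the image of a polytope is again a polytope with the images of the original vertices as its vertices, which is what the over-approximation scheme described next exploits.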
Such dynamics is not allowed in LHA. The term linear hybrid systems (LHS) refers to the class of hybrid automata where guards, updates, and invariants involve only linear expressions, and the dynamics is specified using linear differential equations. First note that the polyhedral representation is not closed under continuous evolution specified by linear differential equations: if x = 1 is the initial state and the dynamics is given by the linear differential equation ẋ = x, the set of reachable states is the exponential curve given by e^t, for real numbers t ≥ 0. Since manipulating transcendental functions is computationally difficult, a popular strategy, first advocated by the tool Checkmate [14], and later refined by the tool d/dt [11], is to compute overapproximations of reachable sets using polyhedral representations. Given a polyhedron R representing the set of current states, to compute the set R′ of states that are reachable within a fixed time horizon Δ according to a given linear differential equation, the algorithm implements the following strategy. For every corner vertex v of R, it first computes the state v′ of the system at time Δ starting in state v. Then it computes the convex hull R1 of all the corner vertices v of R and their respective images v′. We are guaranteed that all the system trajectories start in the polyhedron R1 and end in R1 at time Δ, but are not necessarily inside R1 at intermediate times. The final step involves computing the polyhedron R2 obtained by "face-lifting" R1: the number and normal vectors of the facets of R2 coincide with those of R1, but the facets are shifted outwards so as to include all reachable states up to time Δ. All these steps can be efficiently implemented for linear systems, and the resulting polyhedron R2 is guaranteed to be a superset of the desired set R′. The process can be repeated for successive time intervals, and the resulting approximation of the reachable set is called the flowpipe approximation. The complexity of operations on polyhedra is exponential in the number of dimensions (that is, the number of real-valued variables of the hybrid system), and since such operations are invoked repeatedly in symbolic analysis based on polyhedral representation, a significant body of work has been aimed at battling this complexity and/or replacing polyhedra with alternative representations [11, 21, 36, 42]. Representation using zonotopes and support functions [22, 26] has so far resulted in the most scalable approach for the analysis of linear hybrid systems, leading to the tool SpaceEx that is able to analyze, for instance, a complex 28-dimensional helicopter controller [20].

Deductive verification

In deductive verification, a designer interacts with a mechanized theorem prover to generate proofs of correctness of systems. For safety verification of discrete systems, a classical proof principle relies on the notion of inductive invariants: to show that all executions of a system M starting in an initial set I stay within a safe set S, we identify a state property ϕ such that (1) all initial states satisfy ϕ; (2) the set of states satisfying ϕ is a subset of the desired safe set S; and (3) the property ϕ is preserved locally across system transitions (that is, no transition of the system changes the value of ϕ from 1 to 0). In interactive verification, the user proposes a property ϕ, and the analysis tool checks if ϕ is an inductive invariant.
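As a toy illustration of the inductive-invariant proof principle for a discrete (not hybrid) system, consider a counter that starts at 0 and repeatedly adds 2 modulo 10, with the safety requirement that it never reaches the value 5. The system, the safety property, and the candidate invariant "the counter is even" are all invented here for illustration; the three conditions are checked by brute force over the finite state space.

```python
def check_inductive_invariant(states, init, step, safe, inv):
    """Check the three conditions under which `inv` is an inductive invariant
    establishing `safe` for a finite, deterministic transition system."""
    c1 = all(inv(s) for s in states if init(s))        # initial states satisfy inv
    c2 = all(safe(s) for s in states if inv(s))        # inv implies the safety property
    c3 = all(inv(step(s)) for s in states if inv(s))   # inv is preserved by transitions
    return c1, c2, c3


states = range(10)                      # counter values 0..9
init = lambda s: s == 0
step = lambda s: (s + 2) % 10           # the transition relation
safe = lambda s: s != 5                 # safety property: never reach 5
inv = lambda s: s % 2 == 0              # candidate invariant: counter stays even

print(check_inductive_invariant(states, init, step, safe, inv))  # (True, True, True)
```

Real tools perform the analogous check symbolically (e.g., with a theorem prover or SMT solver), since the state space is usually infinite; the continuous-time generalization is exactly the barrier-certificate idea described next.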
The concept of inductive invariants for discrete systems has been generalized and adopted to continuous-time dynamical (and hybrid) systems. We will informally explain the first such notion, called barrier certificates [47, 48] . To show that a dynamical system M with dynamics ẋ = f (x) with initial set I satisfies a safety property S, we identify a function ψ from the states to reals such that (1) in all initial states, the value of ψ is nonnegative; (2) in all unsafe states (that is, states not in the safe set S), the value of ψ is negative; and (3) the Lie derivate of ψ with respect to the vector field f is positive on the boundary set (called the barrier) characterized by ψ(x) = 0. The first two conditions ensure that the barrier separates the initial states from the unsafe states, and the third condition ensures that system trajectories cannot escape the barrier from inside as at the barrier the flow field points inwards. Together, these conditions imply that ψ(x) ≥ 0 is an inductive invariant of the system. Typically, the desired function ψ is a polynomial function of the system variables. It is also worth noting that the barrier certificates are closely related to the notion of Lyapunov certificates for stability in classical control theory. The notion of differential invariants generalizes barrier certificates [44] : it relaxes the third condition, and also allows more general forms of logical assertions as potential invariants. Verification of safety properties based on identifying inductive invariants avoids the iterative calculation of reachable state sets and is not limited to linear systems. The tool KeYmaera offers support to prove correctness of hybrid systems using deductive verification [43, 44] . To check whether a given polynomial certificate satisfies all the conditions necessary for it to be a barrier certificate, the tool needs to perform symbolic differentiation, and calculations such as simplification and quantifier elimination, with formulas in the theory of reals with arithmetic operators. To fully automate deductive verification, such a tool needs to automatically generate candidates for inductive invariants, and this remains an active area of research (see [28, 50] for automatic generation of invariants by instantiating templates and [45] for generating invariants by fixpoint computation). \n Abstraction An abstraction A of a model M is a \"simplified\" model obtained from M such that proving safety and temporal properties of A is a sufficient condition for proving the corresponding properties of M . Abstraction is an effective strategy for scalability of verification tools, provided there is a way to compute A from M in a tool-supported manner, and a way to refine the current abstraction A if it is not adequate to prove the desired properties. In the case of hybrid systems, the simplicity of the abstract model A can be of various forms: A can be discrete while M is continuous; A can have linear dynamics while M has non-linear dynamics; and A can be a linear model of dimensionality lower than that of M . There is an extensive literature on automatic abstraction of hybrid systems. We note three representative examples. We have already seen that the dynamics admissible in the model of linear hybrid automata is simple enough to permit exact computation of reachable states using polyhedral representation. 
In phase portrait approximation [30] , the dynamics ẋ = f (x) in a mode m of a hybrid system is replaced by l ≤ ẋ ≤ u, where the vectors l and u represent the lower and upper bounds on the function f over the range of values specified by the invariant constraint associated with the mode m. This clearly yields an over-approximation of the allowed system trajectories in each mode. The error introduced by the approximation can be reduced if we split the mode m into submodes, each corresponding to a different region of the state-space. Predicate abstraction is a powerful technique for extracting finite-state models from complex, potentially infinite-state, systems, and has been extended and adopted for hybrid systems [4, 16] . In this approach, the input to the verification tool consists of a linear hybrid system, the safety property to be verified, and a finite set of Boolean predicates over system variables to be used for abstraction. An abstract state is a valid combination of truth values to the Boolean predicates, and thus, corresponds to a polyhedral set of the concrete state-space. The verifier performs an on-the-fly search of the abstract system by symbolic manipulation of polyhedra, where the computation of continuoustime successors of abstract states can be performed using flow-pipe approximations. The key computational benefit is that the continuous reachability computation is applied only to an abstract state, instead of intermediate sets of arbitrary complexity generated during iterative computation. If the initial choice of predicates is too coarse, the search finds abstract counter-examples that are infeasible in the original hybrid system, and such counter-examples can be analyzed to discover new predicates that will rule out related spurious counter-examples. A classical notion of equivalence of nondeterministic systems is simulation: a relation between states of two systems is a simulation relation, if (1) two related states have identical observations, and (2) whenever two states are related, for every transition from the first state, there exists a matching transition from the second state such that the targets of the transitions remain related. For simpler classes of hybrid sys-tems such as timed automata and O-minimal systems, one can algorithmically compute the maximal simulation relation over the states of a given system, and use the discrete quotient with respect to this relation as the abstract system which can replace the original system for verification purposes [8] . In the context of hybrid systems, since states contain real-valued vectors, there is a natural metric over states, and this can be used to also define a coarser notion of simulation called approximating simulations: an approximating simulation relation with parameter ε requires observations of two related states to be ε-close of one another, and transitions can be matched step-by-step by staying ε-close [23] . The resulting theory of approximating relations leads to algorithms for constructing lower-dimensional abstractions of linear systems [52] . \n EMERGING RESEARCH DIRECTIONS Automated verification is a computationally intractable problem. Consequently, even though there are many demonstrations of interesting analyses using tools for verification of hybrid systems, scalability remains a challenge, and a significant fraction of the current research is aimed at addressing this challenge. 
A complementary challenge is to integrate verification tools and techniques in the design flow so as to improve the overall system reliability. We conclude this article by discussing some promising research directions towards this goal. \n Symbolic simulation In simulation, a possible execution of the model upto a finite time horizon is obtained using numerical methods, and this is a well-accepted industrial practice. A single simulation corresponds to a specific choice of inputs. A promising idea is to analyze a simulation trace corresponding to a particular choice of inputs using symbolic analysis techniques to compute the space of inputs that are close enough to the chosen one so that no inputs from this space need to be considered for subsequent simulations (see [9, 18, 34] for recent efforts). This integration of simulation and symbolic analysis can lead to improved coverage, and is similar to concolic testing which has proved to be effective in debugging of large-scale software systems [24] . An added benefit is that such an approach can be implemented directly within the native simulation engine of an industrial-strength modeling environment such as Simulink/Stateflow. \n Synthesis Historically, synthesis refers to the process of computing an implementation (the \"how\") from a specification of the desired behavior and performance (the \"what\") and the assumptions on the environment (the \"where\"). In the more recent view, the synthesis tool facilitates the design by consistently integrating different views: a designer expresses her insights about the design using synthesis artifacts of different kinds such as models that may contain ambiguities and declarative specifications of high-level requirements, and the synthesis tool composes these different views about the structure and functionality of the system into a unified concrete implementation using a combination of algorithmic techniques. Illustrative examples of this new view of synthesis include programming by examples for spreadsheet transformations in Microsoft Excel [27] , and sketching of bit-streaming programs using program skeletons [51] . We believe that these ideas emerging in the programming languages community should be explored in the context of design of hybrid systems to integrate synthesis in a pragmatic way in the design cycle (see [53] for recent work on synthesizing switching conditions in hybrid automata). \n From models to code Generating embedded software directly from high-level models, such as hybrid systems, is appealing, but challenging due to the wide gap between the two. In current practice, this gap is bridged with significant manual effort by exploiting the run-time support offered by operating systems for managing tasks and interrupts. A key challenge to systematic software generation from hybrid models is to ensure that one can infer properties of the software from the properties of the model, and this problem is receiving increasing attention from researchers. Sample research directions include integration of control and scheduling [10] and static analysis of errors introduced by finite precision computations [25] . \n Industrial applications The value of formal modeling and verification on industrially relevant problems has been demonstrated on a number of case studies. 
Examples of these include design and analysis of vehicle platooning protocols [17], identification of optimal tolerances for an audio control protocol [31], safety verification of collision avoidance protocols for aircraft [46, 54], and verification of adaptive cruise control [39]. Yet, the level of commitment from the embedded software industry remains limited to exploratory projects in collaboration with academic researchers. This is in contrast to, say, Intel's investment in formal hardware verification and Microsoft's investment in static analysis of software, which can be attributed to the identification of specific classes of errors that can be largely eliminated using verification techniques (for example, deadlocks in cache coherence protocols and misuse of API rules by third-party device driver code). Thus, a key challenge for research in formal verification of hybrid systems is to identify a compelling class of errors that designers routinely make and that can be eliminated using verification techniques. An alternative path to industrial adoption is to integrate verification tools in the certification process, and this seems plausible in safety-critical domains such as software for medical devices [38].

The economics, risk and ethics of time compression
Anders Sandberg

Introduction

Whether there is a long-term trend towards accelerated change is controversial (Kurzweil 2010; Cowen 2011; Eden et al. 2012; Sandberg 2013), but clearly the present era is experiencing a remarkable increase in the speed of computation, a compression of the time required to perform a computational task. This paper is an exploration of the limits and implications of this compression for human values and society. If everything sped up equally, there would be no change: one of the peculiarities of time (and a big source of arguments between Platonists and Aristotelians in the philosophy of time) is that we mostly notice it through the change of events and things relative to each other (Markosian 2016). "Speeding up" hence means that more things of a certain kind occur relative to other things, or that a kind of event occurs before another event it would previously have occurred after. Faster computation means that computational goods can be produced faster and earlier. This paper will explore some of the consequences and limits of this phenomenon.
What kinds of value would accrue from something occurring faster?

• More instances of the event in a given interval: if these hold value, then there is more value produced. A faster manufacturing process will produce more goods per unit of time.
• Time is freed up by the faster rate of work, and this is valuable. For example, a labour-saving device frees up time that could be used for leisure or more productive work.
• Having an event occur earlier than another event, becoming valuable because of the ordering. For example, diagnosing and intervening rapidly against a medical condition before it turns serious.
• The value of a remote event is increased (possibly from zero) by it becoming closer to the present. This includes becoming able to achieve something that previously was impossible to achieve within the given timeframe. For example, calculating weather forecasts or solutions to mathematical problems in hours that previously would have taken years.

Limits to speeding up computation include physical limits, but also limits due to the difficulty of tasks (or the algorithm used to solve them). At present we appear to be far from any fundamental technical limit on computing power, but we have already touched the fundamental limit on communication speed. Speeding up the interaction with the physical world may prove challenging because of the discrete nature of signals and the sluggish responses of macroscale actuators. Faster computation does raise risks and ethical challenges: various forms of loss of control, inequalities of speed, gaps between oversight and system speed, loss of opportunity due to too-early decisions, and possibly so much change that the "change budget" becomes depleted. In particular, speedups appear to pose a serious challenge to the human ability to control technological processes due to growing gaps of speed between computation and control ("cybernetic gaps") and challenges to setting the goals they are optimizing for due to gaps of speed between computation and the human world ("ethical gaps"), in turn posing a profound challenge to governance systems that are themselves to some extent hybrid human-computational systems suffering internal speed gaps.

Limits to computation

Computation is subject to physical, logical, and technical limits. This section will outline some of the main limits and how they constrain achievable computational speedups.

Physical limits to computation

Computation requires structured change in information-bearing material systems, so the speed at which a particular computation can be performed is limited by the speed at which a physical system can change in a corresponding way. The most fundamental limits are due to quantum speed limits, stating how fast a quantum system can move between two distinguishable states. The Margolus-Levitin limit states that a system with mean energy E cannot move to another orthogonal state in less than πℏ/(2E) time, while the Mandelstam-Tamm limit is πℏ/(2ΔE), where ΔE is the standard deviation of the energy of the system (with respect to the initial state). Later work has found entire families of quantum speed limits, where the bound scales as 1/ΔE for unitary dynamics (Margolus & Levitin 1998; Pires et al. 2016).
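A back-of-the-envelope check, assuming the Margolus-Levitin form quoted above and taking E = mc^2 for one kilogram of mass-energy, reproduces the often-quoted figure of roughly 5 × 10^50 elementary operations per second. The constants are standard values; treating all of the mass-energy as freely available for computation is, of course, the extreme idealisation noted in the discussion that follows.

```python
import math

# Margolus-Levitin: minimum time between orthogonal states is t_min = pi * hbar / (2 * E),
# so the maximum rate of elementary state transitions is 2 * E / (pi * hbar).
hbar = 1.054571817e-34   # J*s, reduced Planck constant
c = 2.99792458e8         # m/s, speed of light
m = 1.0                  # kg, the one-kilogram "ultimate laptop" considered by Lloyd (2000)

E = m * c**2                               # total mass-energy, ~9.0e16 J
ops_per_second = 2 * E / (math.pi * hbar)  # Margolus-Levitin rate bound
print(f"{ops_per_second:.3e} ops/s")       # ~5.4e50, matching the figure quoted below
```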
These limits, together with the Bekenstein (1981) bound on the information that can be contained within a region with given radius and mass-energy, can be used to describe the most extreme computer systems that could be built even in principle: for a one kilogram, one litre computer the limit is 5.4258 × 10^50 logical operations per second on ≈ 10^31 bits (Lloyd 2000). In this case the state of mass-energy is mostly akin to a black hole or a small piece of the Big Bang. The speed of computation in normal matter is limited by the number of transitions the system can perform per unit time without breaking down. This in general depends on the energy available to perform the transitions and how strong the energy barriers of the system are. Semiconductor and optical devices can switch on the picosecond scale (Mii et al. 1994; Ctistis et al. 2011). Molecular computation is limited by bond energies: above 10^15 transitions per second the energy involved becomes larger than the bond energies and the system starts to break up (Drexler 1992). In terms of switching, current computers are hence about five orders of magnitude slower than the hard limits unless nuclear matter computation on the yoctosecond scale eventually becomes possible (Sandberg 1999). Very fast computation is also very small and localized, since the light speed limit forces the parts involved to be closer than ct, where t is the cycle time. A nanosecond is about a foot long, a femtosecond 300 nanometers. Since there is also a limit to how densely information can be packed (likely on the order of a bit per atom for molecular systems), a computation taking time t cannot process more than 4πρ(ct)^3/3 bits, where ρ is the bit density. If individual atoms perform computations, the cycle time they would require to exchange state information at lightspeed in diamond would be 1.2 × 10^-18 seconds. Quantum computation does not change this fundamental issue. The number of steps an algorithm needs to perform in order to arrive at a solution of a problem defines the complexity class of the problem. A practically useful algorithm scales polynomially or less with problem size. Quantum computing merely leads to an exponential speedup of certain problems, which makes some computations that would otherwise be infeasible potentially doable (given quantum computers) (Moore & Mertens 2011). It should be recognized that innovations in algorithms can also make classical computations significantly faster: the FFT algorithm changed the complexity of discrete Fourier transforms from O(N^2) to O(N log N), making e.g. online multimedia feasible.

Algorithmic limits

The physical limits discussed above represent limits on the speed of an algorithmic step: the complexity classes represent limits on how fast problems can be solved. For example, it is known that any sorting algorithm that must compare elements with each other has to run in O(N log N) time on one processor, and in O(log N) time if it is parallelized (Moore & Mertens 2011). Any improvement beyond this will have to come from refactoring the problem (e.g., making use of known constraints on the data, such as integer sorting). There is a key difference between parallelizable tasks and serial tasks. By splitting a problem into suitable parts a solution can often be generated far faster on more processors.
For example, summing 𝑁 numbers requires time proportional to 𝑁 on one processor, but can be done in log(𝑁) time on 𝑁 processors (first every pair of processors sum their numbers, then the 𝑁/2 partial answers are summed, and so on until only one answer remains). Other problems have data dependencies that make this impossible, for example calculating trajectories in the 3-body problem. The key difference is whether the critical path length 𝐶 of the computational dependency graph is close to the overall amount of computation 𝑇. If 𝐶 grows more slowly than 𝑇, then parallelism gives a time gain. However, 𝐶 represents an insurmountable problem-dependent barrier for how fast the problem can be solved (this is the cause of \"Amdahl's law\" (Amdahl 1967 ) limiting the speedup of a task as more processors are added). While the complexity classes represents the logical bedrock of how fast tasks can be done, one should not underestimate the potential speedups due to refactoring, approximating, or \"cheating\" on problem instances that matter. For example, the 3-SAT problem is known to be NP-complete and should hence not be expected to have efficient solutions, yet heuristic 3-SAT solvers are successfully used in circuit design and automatic theorem proving. They are not efficient on all possible instances but for practical purposes they work (Moore & Mertens 2011) . It is hence harder in general to estimate the distance to algorithmic limits to computation than physical limits because real-world problems are rarely well specified enough. Improvement in algorithms are often 50-100% of the gains from hardware progress (Grace 2013) . In addition, predicting future theoretical insights is often as hard as gaining the insight itself: strongly idea-driven technological change is by its nature less predictable than incremental change. \n Technical limits Figure 1 : Distribution of sigmoid curve scenarios of computer power generated from data in (Koh & Magee 2006) . The grey shade denotes density of scenarios. The red curve is the median scenario, green lines the 5% and 95% percentile scenarios. Moore's law has been formulated in numerous incompatible ways (Mack 2015; Waldrop 2016 ), but perhaps the most relevant measure of progress is processing operations per second per dollar. Merely measuring speed will not capture the actual practical impact. This measure has been growing exponentially over several decades and even if one fits a pessimistic sigmoid curve (implying that growth must eventually come to an end) the median estimate implies about 20 orders of magnitude improvement (!), and with 95% probability at least a factor of 100,000 improvement. Moore's law is a self-fulfilling industry prophecy, partially driven by Wrightean learning and increasing production (Nagy et al 2013) , partially by expectation (Schaller 1997) . Given the value of faster computation there is a demand for it, and production makes new computational tasks feasible and affordable (\"Bell's law\") (Denning & Lewis 2016) . Eventually it will run into physical limits (Mack 2015; Waldrop 2016 ), but compared to the previous section it is clear that there is a fair distance to run. Even if the technology itself stops the economies of scale may keep on increasing the performance per dollar. \n Communications Communication has essentially reached the ultimate limit, lightspeed transmission. In electric cables there is some slowing (50%-99%c) due to electrical inductance and capacitance. 
Optical fibres typically transmit signals at 70% of lightspeed due to the refractive index. In contrast, radio waves move nearly at the speed of light, but require line-of-sight. Bandwidth is still increasing: Nielsen's law of Internet bandwidth suggests a 50% increase per year for users (Nielsen 1998). This is slightly slower than Moore's law, making accumulating data rational since it can be generated faster than it can be transmitted. The upper limits of bandwidth are set by the entropy of electromagnetic radiation, scaling as 1.1194 × 10^21 (√A/d) P^(3/4) bits per second for an area A transmitter and receiver at distance d, using P watts of power (Lachmann et al. 2004). We are clearly far from the physical limits yet, but were Nielsen's law to continue we would reach them before the end of the 21st century. As mentioned above, lightspeed limits imply a space-time trade-off. A return time of t between sending a signal and getting an answer implies a distance D < ct/2. Faster processing requires smaller spatial systems. Large systems will have parts that are causally disconnected: they cannot interact on the timescale of individual processing cycles. Krugman's "First fundamental theorem of interstellar trade" applies here: interest rates for local information and for information in transit are the same (and by the second theorem, arbitrage will equalize them in different parts of the system). However, prices in different parts of the system may not be equal (Krugman 2010). While intended for space trade, this also applies to a fast, desynchronized computing world. \n Sensing and acting To fast sensors the world is dim and noisy: the rate with which photons, sound or other measured entities arrive is slow, and natural irregularities will dominate. If the intensity is I and it is sampled at frequency f, during each sample interval only I/f units of energy will arrive, an amount decreasing with f. If this interval becomes comparable to the average arrival time of measurable entities, aliasing effects (if the arrival times are regular) or Poisson noise (if the arrival times are random) typically show up, drowning the signal in noise. To function well, fast sensory systems need intense, high-frequency signals or sensors with a broad sensitivity. The world is sluggish and hard to coordinate for fast controllers, since the response from actuators will be slow and delayed. The time from sending a command until a response is received, measured in processor cycles, is 2Lf/v, where L is the size of the actuator, v the signal speed and f the processor frequency. This estimate optimistically assumes an instant response from the actuator, but for many physical systems response times are proportional to L (Drexler 1992), increasing the time to (2/v + K)Lf, where K is the actuator sluggishness. The smaller and faster the actuators are, the better the system can work, but this also requires closeness in space. Fast computation hence benefits small systems acting in intense environments more than large systems dealing with uncertain, weak signals. \n Summary In summary, we have good reasons to expect computing to become many orders of magnitude faster in the future: there is still plenty of distance to the physical limits, and algorithmic improvement, innovation and (quantum) parallelization are possible in many domains. Indeed, "there is plenty of time at the bottom". At the same time we are close to communications and sensing limits, the improvement speed may be unpredictable, and it is hard to synchronize fast distributed systems.
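As a rough illustration of the command-response estimate 2Lf/v (and its sluggish-actuator variant (2/v + K)Lf) from the "Sensing and acting" subsection above, the sketch below counts processor cycles spent waiting on an actuator. The clock rate, distances and sluggishness constant are arbitrary assumptions chosen for illustration, not values from the text.

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_cycles(L, f, v=C, K=0.0):
    """Processor cycles elapsed between issuing a command and receiving a response.
    L: distance to / size of the actuator (m), f: clock frequency (Hz),
    v: signal speed (m/s), K: extra response time per metre of actuator size (s/m)."""
    return (2.0 * L / v + K * L) * f

if __name__ == "__main__":
    # A 3 GHz processor talking to actuators 1 mm, 1 m and 100 m away over light-speed links.
    for L in (1e-3, 1.0, 100.0):
        print(L, round(round_trip_cycles(L, 3e9)))
```

Even with ideal signalling, a metre-scale actuator already costs tens of cycles per exchange at gigahertz clock rates, which is the sense in which fast computation favours small, local systems.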
\n The value of fast computation In the previous section, we discussed the physical limits to computation - how fast computation might be made in principle, if sufficient effort were dedicated to making it faster. For such effort to be expended, fast computation needs to have enough value to justify the investments in it. This section reviews the reasons why "faster is better" in computation, and how these reasons act as drivers for computational speedups and hence (through economic profitability) incentivize further speedups. \n Productivity If operations can be performed faster, then more instances can occur in a given interval. If these hold value, then there is more value produced. A faster manufacturing process will produce more goods per unit of time, a faster surgeon can operate on more patients. This is the normal world of economic productivity, endlessly studied by economists. Whether the value increases proportionally to the speedup or not depends on whether the increased supply is enough to decrease the price noticeably. A closely related improvement is that time is freed up by the faster rate of work, and this is valuable. For example, a labour-saving device frees up time that could be used for leisure or (more commonly) further productive work. An effect of this is that the opportunity cost of time increases. A given time interval can be used for many more things, some of which are valuable. Hence wasted time may paradoxically become more of a serious problem in some domains. This might be a current contributor to information stress - there is always something of value going on, and it might be rational to switch tasks often in order to find high-value tasks. As described by Schwartz (2004), "missed opportunities" are often experienced as stressful since we notice lost utility, especially if the aggregated alternatives not taken sum to more utility than our choice. The actual rationality depends on how the overhead of searching compares to the expected gain 1 ; information foraging theory predicts that as the cost of switching between information sources decreases, the time spent on each will decrease (Pirolli & Card 1999). Changes in productivity will also lead to changes in the allocation of resources (labour, computing). A sudden change in the speed of one area will produce a transient response in the labour/computing market before a new equilibrium establishes itself. \n Timeliness Time is a non-renewable resource. Or rather, temporal location is non-renewable: what matters is often that X occurs before Y occurs. Since time is irreversible, ordering effects can matter significantly, as in diagnosing a disease before it becomes life-threatening or inventing an offensive technology before any defence against it exists. The Japanese earthquake early warning system uses seismological detectors to shut down trains before the earthquake wave hits (Kamigaichi et al. 2009) - once detection, transmission and reaction are fast enough this becomes possible. The timescales in cars form another useful set of examples of how timeliness enables sharp shifts in performance. The trip planning timescale for humans is on the order of minutes, allowing it to be done automatically with 1990s technology; faster is more convenient, but once the navigation system passed the minute threshold it was good enough. The driving timescale involves decision-making on the order of tens of seconds to seconds.
There was an important shift in 2006 when computing got fast enough, suddenly enabling autonomous cars in natural environments. Again there is a threshold effect: speedups of computing may enable better driving quality, but the fundamental ability to drive showed a fairly discrete transition. Similarly, airbag systems gather sensor information and the control unit decides whether to trigger the airbag within 15 to 30 milliseconds after a crash has begun. This could be done already in the early 1970s by dedicated electronics, and since then speedups have mostly served to make the decision-making more sophisticated (although improvements in sensors may have played a larger role). In all these examples computational speedups enable a new level of automation when they pass a human or mechanical speed threshold. At least in principle, calculative rationality becomes perfect rationality if done infinitely fast (Russell 2016) (although as we will see this promise is somewhat illusory). \n Competition In finance, "winner take all" dynamics can occur in markets if one agent can react faster than others and hence gain a speed premium. The same may apply in evolution, where evolving faster than parasites or competitors is useful. This can serve as an incentive for accelerating, even if it comes at a cost. There would be an equilibrium when the cost of higher speed offsets the economic (or other) gains. At this point every actor runs as fast as is optimal. However, in a winner-take-all situation the economic gains accrue mostly to the fastest actor, and it is rational to at least temporarily push into the "inefficient" speed frontier, since slower competitors are pushed out of the market. Perceptions may also matter significantly for speed investment: there could be inefficiencies because agents overinvest in overly fast systems. In addition, Moore's law and other expected speedups lead to design choices (like software bloat) that are suboptimal. Since agents can plan for a faster future they may risk overshooting the equilibrium by investing in speed that is expected to be optimal, but may actually be too ambitious. \n Timing Sometimes it is more important to have a result or event at a particular time, and speed gives more control over the situation.
For example, in bomb disposal one of the first steps is to vibrate the assembly - if there are any vibration detectors, disarming it will be very hard, and if it detonates, it will occur at a time of the disposal expert's choosing. A bomb with a slow, unpredictable fuse will detonate at an unknown time, and this can be dangerous. Projects where one can ensure that, at a predictable time, one will know whether they can succeed, or that certain key steps will have been completed, are less risky, reducing the risk premium. \n Time budgets Everything that matters to us occurs within our time budget. These time limits may be set by human lifetimes, corporate lifetimes, the next budget period for an organisation, or the time until the situation changes too much. An interesting special case is secrets, where the aim is to ensure that disclosure occurs only after a fixed time. Most secrets have a time horizon beyond which their importance has declined enough that it no longer matters if they are revealed. In cryptography this can be used to estimate required key lengths: given an assumed time horizon, the key needs to be long enough that, even under optimistic predictions of future computing power, it is not possible to brute-force the key. This has led to the concept of "transcomputational problems", problems requiring more information processing than can be amassed even in principle. Many such estimates are based on common sense limits such as a computer covering the Earth, but the actual ultimate physical limits are far larger - yet there is rarely any challenge in constructing a sufficiently large problem to reach true limits. Secrets also have half-lives due to random disclosure, leaks and espionage (Swire 2015). This is a more troublesome time limit since it is unknown (indeed unknowable): at best the secret-keeper can estimate the rough half-life and plan for that time period, as well as prepare for what to do if it is disclosed. Note that this makes cryptographic issues and computer speed far less important: a speedup of computation that makes the secret brute-forceable earlier, but still several half-lives into the future, has no effect on the utility. In a leaky world perfect cryptography is irrelevant. In software, finding bugs typically requires monitoring/beta-testing time proportional to the mean time between failures (MTBF). This can be speeded up by using more testers/users, but fast systems will also show internal bugs quickly. Interactive or environment-linked systems may however have long MTBF despite great internal speed: here the human or environment acts as the slow critical path in the computation and error-checking. \n Time value The value of a remote event is increased (possibly from zero) by coming closer to the present. This includes becoming able to achieve something that previously was impossible to achieve within the given timeframe. For example, calculating weather forecasts in hours that previously would have taken years suddenly makes them useful. Fast change also typically implies more uncertainty and faster discounting. If discount rates are pushed up by accelerating computation but are also applied to human-level systems that do not change speed, there could be a problem in rational allocation. Long-term valuable projects such as building infrastructure or settling other planets are not only outside the fast discount horizon, but rapid changes in funding, technology, risk and specification make planning challenging and reduce the probability that they will be fully implemented.
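A few lines of arithmetic make the discounting point concrete: under exponential discounting a distant payoff shrinks rapidly as the discount rate rises, which is why long-horizon projects fall outside a fast discount horizon. The payoff size, horizon and rates below are arbitrary illustrative numbers, not estimates from the text.

```python
def present_value(benefit, years_until_benefit, annual_discount_rate):
    """Present value of a single future benefit under exponential discounting."""
    return benefit / (1.0 + annual_discount_rate) ** years_until_benefit

if __name__ == "__main__":
    # A project paying off 100 units of value 50 years from now.
    for rate in (0.01, 0.05, 0.20):
        print(rate, round(present_value(100.0, 50, rate), 4))
```

At a 1% rate the payoff retains most of its value; at 20% it is worth roughly a hundredth of a unit today, so a system whose effective discount rate has been pushed up will rationally neglect it.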
\n Early discovery Nick Bostrom penned "A little theory of problems" (Sandberg 2014) where, among other things, he noted that: • "Discoveries" are acts that move the arrival of some information from a later point in time to an earlier point in time. • The value of a discovery does not equal the value of the solution discovered. • The value of a discovery equals the value of having the solution moved from the later time at which it would otherwise have arrived to the time of the discovery. In this account, the value of a discovery depends on temporal ordering. In the long run most discoverable information will presumably be discovered, but problems with high and positive value and high elasticity (the solution can be found significantly sooner with one extra unit of effort) should be prioritized. If computation is relevant for solving the problem, improvements in speed increase the elasticity, not just of the problem in question but of all computationally dependent problems. This can cause an overall re-prioritization: computational problems with a high value of early discovery would be favoured over other, equally weighty, but less elastic problems. However, as long as computation is increasing in speed and power there may also be a value in waiting. As noted by Gottbrath et al. (1999), if a task requires a given amount of computation and the available computer power grows exponentially, it can be rational to wait until computer power has grown before starting to run the computation. This is relevant if the time the computation takes is significant compared to the improvement time of computers. \n Cybernetic feedback shift When a regulatory or informative process becomes fast enough, the dynamics change profoundly. We may call this a cybernetic feedback shift: when a controller is fast enough, things become stable. This is well known from delay differential equations, the classic steam engine centrifugal governor, and trying to steer a boat: too slow reactions lead to oversteering, where the system sways back and forth. In control theory, delays are equivalent to a low sampling rate of the system. It may be possible to control the system, but the response will be sluggish since one has to use old data to stabilize. As control speeds up, the responses become faster. In many practical cases regulators try to track a parameter (e.g. thermostats, price signals). When the controller cannot follow a parameter, it tends to "fall off" with possibly catastrophic effects (Ashwin et al. 2012). For fast response rates this risk disappears. Conversely, if the parameter starts moving too fast because it has been speeded up, then stability may also be lost. If both controller and parameter speed up, then nothing changes. In many cases the utility of a system goes up tremendously when a cybernetic feedback shift happens. Unfortunately the converse also occurs: when a system loses the ability to track parameters it can suddenly become worse than useless. Communications matter for keeping organisations and empires together. An empire cannot function if the time from when a province begins to rebel to the arrival of the military force sent from the capital is longer than the time the province needs to entrench or build an army (a way around this is to distribute power to local governors for fast response).
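The oversteering behaviour described above can be reproduced with a minimal discrete-time analogue of a delayed controller; the gain, lags and step counts below are arbitrary choices made purely for illustration.

```python
def track_setpoint(lag, gain=0.5, steps=60, setpoint=1.0):
    """Proportional controller pushed towards a setpoint while only seeing state
    observations that are (lag + 1) ticks old (the +1 is the one-step measurement delay).
    Returns the state trajectory."""
    history = [0.0] * (lag + 1)   # buffer of past states; history[0] is what the controller sees
    x = 0.0
    trajectory = []
    for _ in range(steps):
        observed = history[0]
        x += gain * (setpoint - observed)   # corrective action based on stale data
        history = history[1:] + [x]
        trajectory.append(x)
    return trajectory

if __name__ == "__main__":
    fast = track_setpoint(lag=0)   # fast feedback: settles smoothly near 1.0
    slow = track_setpoint(lag=6)   # slow feedback: overshoots and oscillates ever more widely
    print(round(fast[-1], 3), round(min(slow), 1), round(max(slow), 1))
```

With a short lag the state converges; with a long lag the same controller sways back and forth with growing amplitude - the loss-of-tracking failure mode described in the text.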
If the time it takes for the strategic level of the organisation to notice, understand, and react to a challenge is longer than the evolution time of the challenge, then it is unlikely to be able to deal with it. Hence real societies and organisations tend to have low-level, faster subsystems. If local processing speeds up but communications do not, this leads to asynchrony issues, but also to stronger reasons to decentralise to meet new, fast challenges. Organisations that cannot do this run the risk that "provinces" can rebel much faster and that subsystems will be held back - there may now exist local incentives to break loose. \n Risks and ethical issues due to fast computation We have seen that there exist numerous reasons to pursue faster computation, especially in the service of performing particular tasks quickly, timely, and early, for both competitive and intrinsic reasons. Even if this is achieved flawlessly there are risks and ethically relevant issues with this process. The most obvious issue of time compression is simply social change and disruption (as well as gains); the locus classicus is Toffler's Future Shock (1970) and the writings of Virilio; see also (Wajcman 2008). Although clearly important, for reasons of brevity the focus here will be on the directly speed-related issues rather than the more subtle sociological and phenomenological challenges. \n Loss of control The most obvious issue with fast computation or other operations is that they are fast relative to the (human) ability to control systems: there is a risk of things going haywire too fast. When a system has a positive feedback loop, the strength of the feedback relative to friction and delays determines how fast a disturbance gets amplified. Engineering typically wants to keep friction in a useful range: not so much that it causes losses, but enough to make the system controllable. Faster computation and communication can make feedbacks in business, software (e.g. "Warhol worms" (Weaver 2001)) and other systems more intense, and hence limit the timespan for taking control action. The "friction" - costs of action and reaction - is lowered, making fast actions possible. Loss of control can happen due to several causes. There is direct loss of control, where the steering agency lacks the ability to control, either because it cannot track the state of the system fast enough or cannot process what should be done fast enough. There is the effect of emergence causing misbehaviour (systemic risk), where the parts of a system function well but the whole exhibits unwanted behaviour. Here the trouble may come from the speed at which the emergent behavioural change occurs, or from fast and dense internal interactions enabling the change (Goldin & Mariathasan 2014). Finally, there is asynchrony, where the parts cannot coordinate necessary joint activity. Perhaps the most extreme example of an emergent loss of control is the "hard AI take-off scenario". In this scenario general artificial intelligence (AGI) is developed to have enough ability to perform many human-level tasks, including programming better AGI. A feedback loop ensues, where human input for improving AGI becomes less important than AGI input (which may be very scalable) and the total ability and power of the software grows rapidly, soon outstripping the human ability to control
(Good 1966; Bostrom 2014). Whether such a take-off can occur, and how fast it could be, remains conjectural at present, but it is an issue taken seriously as a long-term risk by some AI researchers (Hutter 2012; Müller & Bostrom 2016; Sotala 2017). Bostrom distinguishes between fast, medium and slow take-offs in terms of how fast the transition occurs relative to human decision-timescales. The key issue is the difference in what reactions can be undertaken: in slow take-offs there is ample time for society to respond with considered actions, while in fast take-offs events move too quickly for human decisions to matter. In the intermediate case there may not be time enough for deeply considered decisions, but various actions are possible - especially preplanned "cached" actions that can be initiated quickly. \n Speed inequality Speed differences can become unfair differences in economics or power, as well as contribute to risk. In "fastest takes all" competitive situations, being faster is more important than being good. This can favour not just excessive speedups and arms races, but also ignoring quality and safety. For example, if several teams race to create the first transformative AI but safety work slows progress, then the Nash equilibrium tends to produce unsafe AI (and having public information on the progress of other teams increases the risk) (Armstrong, Bostrom & Shulman 2016). A less dramatic case is how Silicon Valley competition favours bringing a Minimum Viable Product to market fast and first rather than making it reliable; the result is often that security and privacy flaws become hard to fix later. Old systems tend to be slower and would hence suffer in "fastest takes all" situations. Agents that cannot afford faster systems will tend to fall further behind. The speed requirements also serve as a barrier to entry. Automated trading became possible in the 1990s when trading floors were replaced by matching engines. Gradually high-frequency trading emerged, as the second-long delays at the turn of the century declined to milliseconds by 2010, enabling shares to be traded in under 10 milliseconds. Quickly this became a dominant form of trading (Massa 2016), to a large degree because of improvements in liquidity and informativeness of quotes (Hendershott, Jones & Menkveld 2011). There are also more zero-sum benefits of speed, such as obtaining a better position in the order book queues than competitors with the same information and similar strategies; the rewards for a 1 millisecond speed advantage have been claimed to be in the range of hundreds of millions to billions of dollars (Farmer & Skouras 2012). Human traders obviously cannot compete. Also, the algorithms have shown sensitivity to disinformation and misinterpretation of news; oil prices jumped in 2013 when the Israeli military sent a tweet recalling the Yom Kippur war 40 years earlier (Reuters 2013): since fast response is important, double-checking signals is too slow, and once the market dynamics are set in motion it becomes irrational to act only on the true information. This can contribute to instability, both in the large and in the small, in the form of ultrafast extreme events that are far outside human reaction times. There appears to exist a systemic transition when the number of agents is larger than the number of strategies and there is not enough time to process information (Johnson et al. 2013). Together, this suggests both unfairness caused by speed differences and risks from lack of control.
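A toy Monte Carlo illustrates why small latency edges matter so much in winner-take-all settings: with everything else equal, a modest mean advantage captures a disproportionate share of races. The latency values and jitter below are invented purely for illustration.

```python
import random

def win_share(my_mean_us, rival_mean_us, jitter_us=5.0, trials=100_000, seed=1):
    """Fraction of races won when both sides have Gaussian latency jitter
    around their mean reaction times (in microseconds)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        mine = rng.gauss(my_mean_us, jitter_us)
        rival = rng.gauss(rival_mean_us, jitter_us)
        wins += mine < rival
    return wins / trials

if __name__ == "__main__":
    print(win_share(99.0, 100.0))   # 1 microsecond edge: noticeably above 0.5
    print(win_share(90.0, 100.0))   # 10 microsecond edge: the slower side wins well under 1 in 10
```

If the whole reward goes to whoever is first, even the small shift in win probability from a one-microsecond edge translates into a large expected payoff difference, which is the incentive for the speed investments described above.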
Speed inequalities matter in communication too: people need to be able to meaningfully respond to each other to have relations, and this includes being part of society. In an interaction it is the slowest participant that sets the overall speed. This may contribute to an incentive for fast systems to mostly talk to fast systems, and to limit human contact. An extreme example is the social stratification predicted in a society of minds running at different speeds (Hanson 2016), but milder examples abound of side-lining slow responders in organisations and of engineers minimizing requests to slow subsystems. \n Decision speed Faster computation promises potentially near-instant decisions. However, these are subject to information limits: a decision can only happen when it is known with enough certainty that a triggering condition has occurred. In the past computation was sometimes the slowing factor, but slow information arrival (due to sensors, communication speed, or low bandwidth) is likely more relevant in a fast-computation world. Acting early with less information is less certain, and this will introduce risk. The real bound on decision speed may hence be the acceptable uncertainty in a given situation. "Faster computation" is typically measured relative to the fixed rate of human activity. More things become possible inside our time budgets, but we cannot directly observe or control activities that are too fast. We can leave this to automation, but then we have a delegation problem. Circuit-breakers for financial markets will stop pre-specified events but may let through anything else, creating a false sense of security. Many human systems have layers acting on different timescales, for example slow-changing constitutions underlying laws, policies, social norms and fashions. This ensures that observation and control can function. If breaks between the layers emerge, there is no meaningful control. This may have been a contributing factor to the financial crisis, in that regulators did not understand the changing financial instruments and their implications. Decision speed also competes with time for deliberation. Drone pilots have the problem that faster systems make the human in the loop the slowest and most performance-decreasing factor, while being morally responsible takes time (and the pilot will be blamed for low performance and morally questionable actions). This issue gets writ large in the case of nuclear missiles. It is about 30 minutes between Moscow and Washington as the ICBM flies, just about enough time for a "red phone" call for negotiations to occur. But between Islamabad and Delhi it is 5 minutes: were an unauthorized or accidental launch to occur, the time for internal and external deliberation would be exceedingly short. \n Loss of opportunity The irreversibility of time has great ethical importance. Choosing now and fast can remove the opportunity for later choice, especially when actions are irreversible, such as using up non-renewable resources or releasing information (Bostrom, Douglas & Sandberg 2016). This is an issue since we will likely have better information in the future and may hence evaluate the actions differently. Yet waiting in a risky state can be worse than taking a risk, since risks are likely to catch up with us. We may want to trade a "state risk" such as nuclear war being possible for a "transition risk" that removes the state risk at the price of a temporary but greater risk (a radical disarmament deal, inventing superintelligent AI to "solve geopolitics") (Bostrom 2014, p. 233).
\n Identity over time A final issue is the accumulation of change. Complex adaptive systems interacting with the world will change their internal structure as a response: learning or forgetting information, restructuring themselves, breaking or reproducing. This can change their fundamental identity in relevant ways. What constitutes the important aspects of identity depends on the system and observer, but the "wrong kind" of identity change must be avoided since it loses accrued value. It is not the span of time that matters but the amount of change. Typically, there is a "change budget" that can soak up modifications without losing identity. Software, people and organisations that change instantly often change identity in the wrong way, while the same transformation done gradually may be both identity-preserving and acceptable to stakeholders. This is at the core of Toffler's concept of future shock (Toffler 1970). Faster information processing means a higher rate of accumulating change, straining the change budget. Rapid adaptation may be beneficial from a control perspective but risks using up the change budget. \n Societal impact and governance As we have seen, the great challenge outlined in this paper is the growing gaps in speed between computation and the human (ethical gaps) and between computation and control (cybernetic gaps). In terms of governance the risk is that this produces a policy vacuum (Moor 1985, 2005). There will be a growing number of situations where there are no policies, yet actions must be taken. The time needed to conceptualize the situation and deliberate about it remains on the human timescale. The eternal refrain that technology is outpacing ethics represents not just an ethical gap due to speed but also a cybernetic gap due to lack of information: decisions cannot rationally happen faster than there is information to decide upon. The Collingridge dilemma (Collingridge 1982) is partially cybernetic. Yet faster computation also strongly increases the power of the state and institutions through cybernetic feedback shifts and increased legibility (Scott 1998). Surveillance power has grown faster than Moore's law since it scales with hardware, software, data availability and sensor ubiquity (while institutional oversight has hardly kept up at the same rate; there is a widening cybernetic gap here). If searching for a person can be done in real time, it is very different from an ongoing bureaucratic process. Can governance be speeded up meaningfully, assuming human speed remains constant? (We will here ignore the possibilities of enhanced posthuman speed discussed in (Hanson 2016).) Financial market circuit-breaker rules are automated, speeded-up governance, solving the human-machine speed difference problem in a small domain. However, their utility depends on whether the system can detect the right issue. They need a decision parameter that is both necessary and sufficient for a break (and of high quality). If it is not sufficient, they produce false alarms. If it is not necessary, they might miss things that matter. If it is not of high quality, it may measure the wrong thing. In the stock market, measuring trading volume and value makes sense but may still miss subtle qualitative shifts (e.g. in correlations) predicting a systemic risk. This is basically a principal-agent problem, where the agent may be a thing designed for a purpose - but as Bostrom shows, the AI principal-agent problem is doubly hard (Bostrom 2014).
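A minimal circuit-breaker sketch makes the necessary/sufficient point concrete: the rule below halts on any sharp short-window drop, so it can fire on harmless volatility (its trigger is not sufficient for real trouble) while staying silent on slow or qualitative deterioration (its trigger is not necessary either). The window and threshold are arbitrary illustrative choices, not any exchange's actual rule.

```python
def circuit_breaker(prices, window=5, max_drop=0.07):
    """Return the index at which trading would halt, or None.
    Halts when the price falls more than `max_drop` (fractional) below the
    highest price seen in the preceding `window` ticks."""
    for i in range(window, len(prices)):
        recent_high = max(prices[i - window:i])
        if prices[i] < recent_high * (1.0 - max_drop):
            return i
    return None

if __name__ == "__main__":
    crash = [100, 100, 99, 98, 95, 91, 88]               # fast drop: trading halts
    slow_decline = [100 - 0.5 * i for i in range(40)]    # slow grind down: never halts
    print(circuit_breaker(crash), circuit_breaker(slow_decline))
```

Choosing the decision parameter is exactly the principal-agent difficulty noted above: a rule that is easy to evaluate at machine speed may not be the quantity that actually matters.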
Some parts of governance just consist of processing information or making formal, well-defined decisions: these can in principle be speeded up. There is a drive to do this to save money, provide timely service, increase fairness, etc. This will likely work best for routine, well-specified governance without elements of social or strategic intelligence. The risk is that we end up with algocracy, opaque decision-making with little legitimacy (Danaher 2016). Another risk is that since such routines will have apparent or real instrumental value, they will be favoured over messy and slow routines requiring intelligence, producing a legible but far more limited governance system. However, governance can also affect the speed of computation. While controlling the growth of technology in general is unlikely because of its broadness, epistemic unpredictability and utility, it is certainly possible to mandate speed limits (e.g. delays on stock markets to avoid high-frequency trading (Farmer & Skouras 2012)), to mandate response-time limits, or even to mandate the use of certain technology (e.g. accessibility requirements for websites). Another approach is the differential technology development principle (Bostrom 2014, pp. 229-237): if potentially harmful technologies are developed more slowly than technologies reducing their risks, their benefits will become available with less risk. This can be achieved by focusing funding and research priorities on the harm-reduction technologies. In the current context this might include promoting technical solutions to ethical and cybernetic gaps. Controlling algorithms is hence not so much about banning practices as about having an adaptive, global learning system that observes what is going on, remembers past states, maintains a set of values that may be updated as information arrives, and then changes incentives to promote these values - with every part of this system itself open to updating. \n Conclusions There is very much more computational speed to be had, and we will likely reach it. This will generally produce much more value - better productivity, better predictions, better control, more opportunities. But those desirable aims will also lead to control gaps, systemic risks, speed inequalities, and overly fast or uncertain decisions. This will challenge governance strongly, risking policy vacuums, a drive towards algocracy, and numerous principal-agent problems in bridging ethical and cybernetic gaps. Yet strong enough governance can mandate speeds, and adaptive and distributed governance can update faster. It is possible to prioritize areas where speed issues are known to generate trouble for regulation, and to use differential technology development to stimulate foresightful and responsible technology development. \n\t\t\t State risks are reduced if we speed up macro-structural development: less time is spent in risky periods. Transition risks are reduced by having more time for preparing. Problems where learning from experience dominates the solution benefit from slowing down and getting more time to gain experience, while problems requiring forethought or insight may benefit less.
\n\t\t\t If different tasks have utilities per unit of time distributed as f(u) for some probability distribution f, then the best utility after sampling N will have distribution f*(u) = N f(u) F^(N-1)(u), where F(u) is the CDF. The median utility for the best is F^(-1)(2^(-1/N)) and the expected utility is E[u*] = N ∫ u f(u) F^(N-1)(u) du. In the case of Gauss-distributed utilities E[u*] ≈ σ√(2 ln N). The expected total utility of trying N tasks, each requiring time t to give evidence of its utility, with overhead H for switching, during a fraction 0 < Nt/T ≤ F ≤ 1 of the allotted time T, followed by exploiting the best one, is U(F, N) = -H(N - 1) + T(F E[u] + (1 - F) E[u*]). The difference from N = 1 is ΔU(N) = -H(N - 1) + T((F - 1)E[u] + (1 - F)E[u*]) = -H(N - 1) + T(1 - F)(E[u*] - E[u]). There is some value of T where the last term becomes larger than the first term and it becomes rational to try different tasks. Since spending more time than necessary exploring wastes time best used for exploiting, F will be close to Nt/T. In the Gaussian case ΔU(N) ≈ -H(N - 1) + T(1 - Nt/T)σ√(2 ln N), and the switch happens when T > N(H + tσ√(2 ln N)) - H. Hence, as processes speed up (T increases) it becomes rational to flip between tasks, especially when overheads for task switching are low, the time to detect the utility well enough is short, and the number of alternatives to consider remains bounded. \n Abstract Purpose - The speed of computing and other automated processes plays an important role in how the world functions by causing "time compression". This paper reviews reasons to believe computation will continue to become faster in the future, the economic consequences of speedups, and how these affect risk, ethics and governance. Design/methodology/approach - Brief review of science and trends followed by an analysis of consequences. Findings - Current computation is far from the physical limits in terms of processing speed. Algorithmic improvements may be equally powerful but cannot easily be predicted or bounded. Communication and sensing are already at the physical speed limits, although improvements in bandwidth will likely be significant. The value in these speedups lies in productivity gains, timeliness, early arrival of results, and cybernetic feedback shifts. However, time compression can lead to loss of control due to inability to track fast change, emergent or systemic risk, and asynchrony. Speedups can also exacerbate inequalities between different agents and reduce safety if there are competitive pressures. Fast decisions are potentially not better decisions since they may be made on little data. Social implications - The impact on society and the challenge to governance are likely to be profound, requiring the adaptation of new methods for managing fast-moving technological risks. Originality/value - The speed with which events happen is an important aspect of foresight, not just as a subject of prediction or analysis, but also as a driver of the kinds of dynamics that are possible. \n Value of Global Catastrophic Risk (GCR) Information: Cost-Effectiveness-Based Approach for GCR Reduction (Anthony Michael Barrett) \n Introduction Global catastrophic risks (GCRs) are risks of events that could significantly harm or even destroy human civilization at the global scale (Hempsell 2004, Baum 2010). GCRs presently posing hazards to humanity include nuclear war (Sagan 1983, Turco et al. 1983, Robock et al. 2007, Cirincione 2008, Hellman 2008, Barrett et al. 2013) and pandemic diseases (Nouri and Chyba 2008). In the near to longer-term future, GCRs could include climate change (Weitzman 2009, Travis 2010) and misuse or accidents involving technological developments in areas such as artificial intelligence (Yudkowsky 2008, Chalmers 2010, Sotala 2010) and nanotechnology (Phoenix and Treder 2008). Proposed interventions to reduce GCR include nuclear disarmament (Robock et al. 2007), development and distribution of vaccines and antiviral medications (Osterholm 2005), reducing greenhouse gas emissions through public policies (Aldy et al. 2003) and various individual behaviors (Dietz et al. 2009), and abstaining from developing certain technologies (Joy 2000). A growing body of work makes the case that reducing GCR, or certain types of GCR, is of very high value and thus should be one of the highest objectives for society (Ng 1991, Bostrom 2002, Posner 2004, Matheny 2007, Tonn 2009, Ćirković et al. 2010, Beckstead 2013).
Published estimates of the value of preventing global catastrophe vary wildly, from $10 billion (Bostrom and Cirkovic 2008) to infinity (Weitzman 2009, Baum 2010), depending partly on the definition used for "global catastrophe." Even the low end of this suggests a large allocation of resources toward GCR reduction. However, setting GCR reduction as a high priority is not a sufficient guide for action: there are many open questions regarding how best to allocate resources for GCR reduction. One basic question is how much to allocate toward direct risk-reducing interventions and how much to allocate to research to inform these interventions. The decision analysis concept of expected value of information (Clemen and Reilly 2001, Keisler 2004, Bhattacharjya et al. 2013) can inform decisions about how much to spend on information (i.e., reducing uncertainties) prior to making other resource allocation decisions. Usually in value-of-information calculations, decision options are evaluated using utility functions, money, or functionally similar metrics that have implicit commensurability between option trade-offs, e.g., lives saved versus dollars spent. However, equating lives and dollars, e.g., using a typical value of statistical life (VSL) saved, may be inappropriate given the potentially vast scale of GCRs. (Moreover, quantifying total event consequences of global catastrophe in conventional benefit-cost terms would be complicated by uncertainties about direct event impacts, indirect impact factors such as public behavioral responses, and the levels of such impacts that could be borne before reaching civilizational-collapse tipping points.) We instead take a cost-effectiveness-based approach in this paper. A cost-effectiveness-based equation for value of information also may be useful in other domains where typical VSLs would not be appropriate. In this paper, we argue that value of information based on cost-effectiveness is a useful tool for analysis of GCR to inform risk-reduction decisions, and we show that it can be defined in a practical manner. We argue that such an approach would be most valuable if applied in a comprehensive, integrated fashion to all major types of GCR, rather than one at a time. We describe a number of challenges that would arise in such efforts, and argue that these challenges can be addressed. We also provide an illustrative, though highly idealized, example that shows how a practical value of information calculation can work. It provides support for our argument that such calculations can have considerable value, and further support for our argument that value of information can provide additional insight when more than one GCR is under consideration. In Section 2 of this paper, we give a brief overview of the basics of the approach and how to apply it to GCRs and risk-reduction interventions in a comprehensive, integrated fashion. In Section 3, we discuss key challenges in real-world implementation of this paper's framework and argue that these challenges can be addressed. In Section 4, we illustrate the basic framework using a simple notional model of GCR from two types of near-Earth objects (NEOs; i.e., asteroids and extinct comets) as well as nuclear war, and consideration of two related risk-reduction measures. The illustrative example shows that such calculations can have considerable value, especially when considering multiple GCRs. We conclude in Section 5.
(In the appendix, we provide a detailed derivation of our formula for the expected value of information in terms of the cost-effectiveness of GCR reduction.) \n Overview of Framework for Value of GCR Information In this section, we briefly discuss ways to approach three linked sets of quantitative issues: first, representing the probabilities of multiple GCRs; second, assessing the overall cost-effectiveness of GCR-reduction measures and calculating the value of information for GCR reduction; third, contrasting perfect and imperfect information. More details of our approaches and assumptions are given in the following sections. \n GCR Probabilities Figure 1 is a fault tree or logic tree illustrating that there are multiple types of global catastrophic risks, and occurrence of each is assumed to be causally independent of the others, at least at the level of detail used in the fault tree (e.g., nuclear war does not cause asteroid impact). The event "Global Catastrophe" is the top event, with round-corner nodes for a series of GCR types branching out below, all connected by an OR gate. The fault tree graphically indicates that a global catastrophe will occur if any of the following types of event occur with global catastrophe-level consequences: a large NEO impact (either an asteroid or a comet impact), large nuclear war, or a combination of smaller events (small NEO impact plus small nuclear war), etc. In addition, Figure 1 includes square-corner risk management decision nodes for two types of GCR-reduction options (i.e., NEO redirection and food stockpiling) that could reduce the probabilities of global catastrophe-level outcomes. Grey arrows from the square-corner decision nodes to the round-corner fault tree nodes indicate that the risk management decisions can influence the risks of global catastrophe-level events. Figure 1 also illustrates that some risk-reduction measures, e.g., food stockpiling, can have benefits in reducing multiple types of GCR. Although the fault tree portion of Figure 1 is quite simple, it is intended to underline the main motivation for considering GCRs as a whole, and not just individual types such as asteroids, comets, or nuclear war: to assess and reduce the total probability of global catastrophic risk, ideally we would assess all types of GCRs and GCR-reduction measures in a comprehensive way. The framework also can account for interactions between GCR events, such as when occurrence of one type of event reduces society's resilience to or even causes another type of event (Baum et al. 2013). Such interactions between GCRs could be represented using larger, more detailed fault trees (e.g., by adding branches for scenarios in which both NEO impact and nuclear war events occur around the same time, either just by chance of timing or because an NEO impact somehow causes nuclear war), though it could be difficult to explicitly account for many GCR-interaction scenarios, and important uncertainties could remain about unmodeled GCR-interaction dynamics. Figure 2 is a generic consequence exceedance probability plot for some type of event (e.g., NEO impacts), with curves showing relationships between event consequence and the probability of events with consequence exceeding that level, for both initial and reduced event risks.
The figure illustrates that reduction in probability of global catastrophe can be achieved either by reduction of probability of events or by reduction of consequences. Starting from the upper right of the figure, the point where the initial event probability-consequence curve intersects with the global catastrophe consequence threshold indicates the initial probability of global catastrophe. The figure also includes two reduced-risk curves, one for reduced event probability and another for reduced event consequence. The curves for reduced probability and reduced consequence have been placed where they result in the same reduction in probability of global catastrophe, partly to keep the figure simple and partly to emphasize the idea that GCR reduction can be achieved by reducing either event probability or consequence. For example, NEO impact risk-reduction measures could reduce the probabilities of global catastrophe-level outcomes either by shifting the curve downward with reduced NEO impact probabilities (e.g., via NEO redirection) or by shifting the curve leftward with reduced NEO impact consequences (e.g., increasing societal resilience to NEO impact via food stockpiling). Thus, probabilities of global catastrophe for a particular GCR event type could be calculated as a function of the global catastrophe consequence threshold, using consequence exceedance probability models for that event type. Of course, development of appropriate consequence exceedance probability models would often require substantial research, especially when focusing on rare or unprecedented events, for which a lack of data often leads to substantial uncertainties and biases (Taylor 2008). In many cases, there would be large uncertainties for both the direct consequences of an event (e.g., in terms of atmospheric soot loading from various nuclear war or NEO impact scenarios) and what threshold level of consequences would result in global catastrophe (e.g., in terms of the effects of atmospheric soot loading on agricultural productivity and other indirect effects on human society, which could be highly nonlinear if stresses could reach civilizational resilience-exceedance tipping points). Such uncertainties could be modeled using probability distributions for the global catastrophe consequence threshold and exceedance probability function. One way to represent uncertainties is to display 5th and 95th percentile value lines in addition to the mean value lines (Garrick 2008), as shown in Figure 3. Given the previously mentioned assumption of causal independence, Equation (1) gives the total probability of a global catastrophe-level event within some time period, p_total, as a function of the independent probabilities p_j of catastrophe events of each GCR type j, for a total of y GCR types. Equation (1) is mathematically consistent with the previous statement that a global catastrophe will occur if a type of global catastrophe involving a large asteroid impact, a comet impact, or nuclear war, etc., occurs: p_total = 1 - ∏_{j=1}^{y} (1 - p_j). (1) \n Cost-Effectiveness and Value of Information for GCR Reduction Figure 4 is a high-level decision tree, consistent with typical trees used in calculating value of information. (The tree does not include specific quantitative values for probabilities, costs, and benefits, but it does indicate the general sequence of decisions and events.)
The leftmost square decision node represents a decision to be made on whether to invest in research to inform decisions on risk-reduction measures; the other decision nodes represent decisions on whether to invest in measures to reduce risks. In Figure 4, the research decision is simple: conduct research to better understand whether risks are currently high or low, or do not conduct such research. The risk-reduction decision options are also simple: invest to reduce risks or do not invest to reduce risks. The decision on whether to conduct research is made before the decision on whether to invest in reducing risks. If the decision maker chooses not to conduct research, then they make the risk-reduction decision with some amount of uncertainty about whether risks begin as high or low. (That uncertainty is represented by circular chance nodes, and the outcomes of chance nodes are represented by diamonds.) If the decision maker does choose to conduct research, then they have more information and less uncertainty about whether risks begin as high or low, and the decision maker can use that information when making their decision on whether to invest in reducing risks. A full valuation of GCR-reduction interventions, including research to gain information, requires some evaluative metric. Typically, decision options are evaluated using utility functions or functionally similar metrics. 1 Such metrics have implicit commensurability between option trade-offs, e.g., lives saved versus dollars spent. Use of such approaches allows for a relatively simple equation for expected value of options with various attributes (Clemen and Reilly 2001), including trade-offs between GCR reduction and other objectives. In this paper, we avoid full valuations and instead conduct partial valuations in terms of cost-effectiveness, measured in GCR reduction per unit cost. We focus on cost-effectiveness for two reasons. First, a full valuation for GCR is complicated by the widely varying estimates for the value of preventing global catastrophe, which can range from $10 billion to infinity, as mentioned in Section 1. Second, many GCR-reduction decisions involve allocating resources, such as money. However, equating lives and dollars, e.g., using a value of statistical life saved, may be inappropriate given the scale of GCRs. Therefore, our equation for calculating value of information is based on risk-reduction cost-effectiveness, which incorporates estimates of the performance and costs of risk-reduction options without use of VSLs. Our cost-effectiveness-based equation for value of information may be useful in other domains where VSLs would not be appropriate. We assume that there are one or more decisions to be made about the allocation of resources to some combination of options for risk reduction and options for research, and that the decision rule is to choose whatever combination of options has the best overall expected GCR-reduction cost-effectiveness among options considered in the analysis. (Such considerations could occur in a series of risk-reduction decisions, in which case the goal could be to identify the most cost-effective interventions first, and then the second most, and so on, until a risk-reduction budget or target has been reached.) Then, in such decisions, the decision maker should buy as much risk reduction (and risk research enabling better risk-reduction decisions) as they can at whatever total cost, as long as that results in the greatest cost-effectiveness.
Such decisions can arise when considering public policies, as well as the actions of individuals and other nongovernmental organizations. (We assume that budgets are not an issue in the context of the risk-reduction and research options under consideration, and we do not explicitly account for potential budget constraints in the following. However, consideration of budget constraints could be addressed as an extension of the approach used in the following.) For the purposes of this analysis, we ignore actual costs of research and focus on the amount of resources the decision maker ought to be willing to pay for the value added by the research in the context of the decision the research could inform. In other words, we focus on finding the maximal potential benefits of research. We assume that research ought to be invested in up to the point where a funder would obtain no further benefit from investing in additional research (because up to that point, they would get a better overall cost-effectiveness by investing in additional research). At that point, the expected cost-effectiveness of the best risk-reduction option before research is equal to the expected cost-effectiveness of the best risk-reduction option after research, including the cost of research. Equation (2) gives the value of research as the cost-effectiveness-based expected value of perfect information, CEEVPI (see the appendix for derivation): CEEVPI = E[c_s^b (p_0^a - p_s^a)/(p_0^b - p_s^b) - c_s^a]. (2) The equation assumes the following: There exists a set of n available risk-reduction options numbered 0, 1, . . . , i, . . . , n. Option number 0 is the status quo case, where no new (or non-"business-as-usual") risk-reduction option is implemented. The cost of implementing risk-reduction option i is c_i. (It costs nothing to do nothing, so c_0 = 0.) The annualized total probability of global catastrophe if implementing option i is p_i. (We make the simplifying assumption that p_i values are static, or unchanging over the relevant time period. Consideration of dynamic, or time-varying, p_i values could be addressed as an extension of the approach used in the following.) Each c_i is treated as a random variable with some probability distribution reflecting uncertainty about the true cost of implementing intervention i. Each p_i is also treated as a random variable, with a probability distribution reflecting plausible estimates of the annual probability of global catastrophe given intervention i. 2 Computationally, the uncertainty is represented using Monte Carlo simulation, where in Monte Carlo simulation iteration m there are sampled values c_im and p_im. The risk-reduction option s is the option with the "best" or highest risk-reduction cost-effectiveness in Monte Carlo iteration m. In addition to decisions on which risk-reduction option to choose, there are also decisions on whether to first spend some resources on research to reduce uncertainties (and to more accurately identify which risk-reduction option would be most cost-effective) before making decisions on risk-reduction options. We denote whether research is conducted to reduce uncertainty on a particular factor using superscript b for "before" research, or without information from research, and superscript a for "after" research, or with information from research.
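A minimal Monte Carlo sketch of Equation (2) is given below, under one possible reading of the formula: the "before research" option is chosen on prior means, the "after research" option on the values revealed in each iteration, and both terms are evaluated at the revealed values. The option set, probability distributions and costs are entirely invented for illustration and are not this paper's Section 4 example.

```python
import random

def sample_world(rng):
    """Draw one joint sample of catastrophe probabilities and intervention costs.
    All distributions are invented placeholders. Option 0 is the status quo
    (cost 0); options 1 and 2 cut the status-quo probability by a sampled fraction."""
    p0 = rng.uniform(1e-4, 1e-3)                     # annual probability with no new intervention
    p = {0: p0,
         1: p0 * (1.0 - rng.uniform(0.2, 0.8)),      # placeholder, e.g. NEO redirection
         2: p0 * (1.0 - rng.uniform(0.1, 0.5))}      # placeholder, e.g. food stockpiling
    c = {0: 0.0, 1: rng.uniform(1.0, 5.0), 2: rng.uniform(0.5, 2.0)}   # costs in $billions
    return p, c

def ceevpi(n_iter=50_000, seed=0):
    """Monte Carlo estimate of Equation (2): the option s_b is chosen on prior mean
    cost-effectiveness; s_a is chosen on the values revealed in each iteration."""
    rng = random.Random(seed)
    options = [1, 2]
    pre = [sample_world(rng) for _ in range(5_000)]  # crude estimate of prior means
    mean_dp = {k: sum(p[0] - p[k] for p, _ in pre) / len(pre) for k in options}
    mean_c = {k: sum(c[k] for _, c in pre) / len(pre) for k in options}
    s_b = max(options, key=lambda k: mean_dp[k] / mean_c[k])

    total = 0.0
    for _ in range(n_iter):
        p, c = sample_world(rng)
        s_a = max(options, key=lambda k: (p[0] - p[k]) / c[k])
        total += c[s_b] * (p[0] - p[s_a]) / (p[0] - p[s_b]) - c[s_a]
    return total / n_iter

if __name__ == "__main__":
    print(ceevpi())   # maximum worthwhile research spend, in the same $billion cost units
```

The estimate tends to be largest when the revealed values plausibly flip the ranking of options; when the same option wins almost regardless of what research reveals, the implied research value is correspondingly small.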
Generally, research will have the greatest expected value if it has substantial possibility of informing a decision, i.e., a choice between risk-reduction options. However, the CEEVPI formula also implies that if it is expected that the best option after research is the same as the best option before research (i.e., if s^a = s^b), then the research still can have positive expected value if it is inexpensive enough and also provides sufficient reduction of uncertainties in p and c factors. We provide an example, calculating CEEVPI for illustrative catastrophic NEO impact risks and risk-reduction options, in Section 4. The example suggests that the value of GCR information could be quite substantial. \n Perfect and Imperfect Information In the context of a decision analytic model, the value of information is based on the extent to which information reduces the uncertainty about the value of a particular parameter in the model. Perfect information eliminates that uncertainty. The expected value of perfect information (EVPI) is the difference between the expected value of a decision with perfect information (where the new information influences the decision we make) and without additional information (where we make the decision with our initial level of uncertainty; Clemen and Reilly 2001). We do not expect real-world GCR research to yield perfect information in the sense of eliminating all uncertainties. In general, EVPI calculations are used to set an upper limit to how much should be spent on reducing uncertainty. On their own, EVPI calculations cannot predict how valuable specific research will be in reducing uncertainty. However, even imperfect information can have great value in reducing decision model parameter uncertainties by some amount. Straightforward extensions of the approach to EVPI calculations used in this paper (based on cost-effectiveness calculations) could provide methods to assess the expected value of imperfect information (Clemen and Reilly 2001) and expected value of including uncertainty (Morgan and Henrion 1990). \n Key Challenges of Integrated Assessment of GCR In this section, we discuss important challenges for the implementation of our framework for calculating value of information, and for comprehensive, integrated assessment of GCR to inform risk-management decisions. We have already mentioned some of these challenges, which we discuss further here. We also discuss others that we have not mentioned previously. One challenge is that in the real world, there would often be complex interactions between GCRs, not all of which could be modeled. As previously mentioned, one important simplification of our approach is the assumption of independence of GCRs except where indicated in the model. In principle, many types of interactions could be accounted for by building them into fault trees or other model components, but that could require substantial efforts. As with modeling of any complex system, there would be large uncertainties about how much of the real-world dynamics would remain unmodeled. A similar set of challenges (and irreducible uncertainties) would be encountered in attempting to define global catastrophe consequence thresholds. Another challenge would be in setting appropriate thresholds for catastrophe.
An important simplification of our approach is that we use a binary threshold for catastrophe (i.e., an event is only regarded as a global catastrophe if the event's consequences exceed the global catastrophe consequence threshold, however that is defined). In reality, events of a range of magnitudes could be regarded as global catastrophes, either because different stakeholders have different definitions of what constitutes a global catastrophe, or because of uncertainties about what levels of Decision Analysis, 2017 , vol. 14, no. 3, pp. 187-203, © 2017 direct effects from catastrophe events would reach civilizational tipping points. (Those uncertainties would stem partly from the difficulty of predicting indirect effects of catastrophe events, which involve complex factors such as the behavioral responses of large human populations. However, the analytic challenges and uncertainties would be even greater if the aim were to quantify total event consequences in conventional benefit-cost terms, which is another reason to use a simpler cost-effectiveness approach.) Differences between global catastrophe thresholds can have important implications for decision making. 3 Decisions should favor preventing higher-magnitude global catastrophes or decreasing the severity of any given global catastrophe. Furthermore, ideally, decisions would be robust (not highly sensitive) to placement of the global catastrophe threshold. As always, sensitivity analysis can usefully examine the decision implications of varying global catastrophe thresholds, and uncertainty analysis can suggest ranges to use in sensitivity analysis. There also would be challenges in defining what decision procedures to actually use, and how to incorporate considerations such as budget constraints and timing decisions. It seems unwise to take a perfectionist approach to assessing risks and risk-reduction optimality, because the complexity and scale of all potential risks and intervention options (including all interactions and combinations) could make that approach intractable. A more practical approach could be to make a series of risk-reduction decisions, either at regular or irregular intervals, that would first implement the most cost-effective interventions (or combination of interventions), then the second most, and so on, until a risk-reduction budget or target risk level was reached (essentially a greedy algorithm solution to a knapsack problem, in operations research terms). We believe the latter approach would be roughly consistent with our basic framework, though our current framework does not attempt to explicitly account for budget constraints, nor decision sequentiality. It also should be noted that our basic approach implicitly assumes the goal is zero probability of global catastrophe, but other targets could be used; for example, Tonn (2009) suggests a 10 −20 annual probability of global catastrophe as an \"acceptable risk\" target. Finally, accounting for timing of events and interventions could present substantial complications. For some issues, it could be important to account for decisions of exactly when to research, when to implement measures, and in what sequence; the urgency of implementing various measures also could be important. Although time dependencies are not explicitly reflected in the level of detail given in this paper, implicitly they could be incorporated into the model parameter values for effects of the risk-reduction measures. 
(For example, if considering implementation of an intervention today, versus some years from now, and if the GCR minimization objective is to minimize the probability of global catastrophe over the next century, then for many GCRs types such as NEO impact, presumably analysis would show greater GCR-reduction benefits from implementing interventions sooner rather than later.) At least in principle, time dependencies could be accounted for in modules whose outputs are fed to the model structure shown in this paper. Another challenge is that in the real world, there is not a single very well-funded actor whose prime objective is to reducing GCR cost-effectively. Instead, there are many potentially important decision makers, each with limited budgets and responsibility for GCR factors, and with various objectives that compete with GCR reduction. Potentially important decisionmaking entities include government agencies, such as the U.S. National Aeronautics and Space Administration (NASA), which have programs to address specific categories of GCR such as NEO impact risks; nongovernmental organizations such as the Open Philanthropy Project, which have programs to address either specific categories of GCR or all GCR broadly; corporations such as Walmart, whose product management decisions can have implications for societal resilience, emerging technologies, and other GCR factors; and individuals such as researchers, whose work can improve understanding of GCR factors and that have decisions to make about where to focus their own research efforts. Nevertheless, if credible integrated assessment has identified some GCR-reduction options as clearly being more cost-effective than others, that could influence decisions by various means, especially where actors already have some incentives to reduce societal risks. For government agencies, integrated assessment could inform budget reallocations, e.g., taking funding from low-value areas to fund highervalue risk-reduction programs, and incentives could be provided via government rules that encourage costeffective risk-reduction benefits to society. At the other end of the size spectrum, for individual researchers, integrated assessment could suggest which kinds of research could best lead to risk-reduction societal impacts, which are encouraged by both formal funding reviews and informal norms. Nongovernmental organizations and corporations also often combine efforts on voluntary stewardship initiatives and other programs to reduce societal risks, and thereby gain reputational rewards. Some of the most important challenges concern the scope of analysis, such as what GCRs and riskreduction measures to consider initially (given that starting-point estimates or at least bounding ranges would be needed for all associated modeling parameter values). 4 One approach that should be relatively tractable is breadth first: Begin by taking a broad but shallow approach to modeling GCRs and riskreduction options relatively comprehensively, but with little detail, and with quantitative parameter estimates aimed only at bounding ranges of uncertainties. Then, a series of subsequent, repeated model-improvement steps could iteratively add depth (i.e., to add detail and better quantitative estimates using the best available empirical data, expert judgment, etc.), and decisions on where to focus model-improvement efforts via research could be guided by value-of-information calculations. 
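The greedy, knapsack-style decision procedure described above (fund the most cost-effective intervention first, then the next, until a budget or target risk level is reached) can be sketched as follows. The intervention list, budget, and the 10^-20 target are placeholders that only loosely echo the illustrative numbers discussed in this paper.

```python
def greedy_risk_reduction(options, budget, target_total_p, current_total_p):
    """Greedily fund interventions by cost-effectiveness (annual risk reduction per dollar)
    until the budget is exhausted or the total catastrophe probability reaches the target.
    `options` is a list of (name, annual_risk_reduction, cost) tuples; all values hypothetical."""
    chosen, spent, p = [], 0.0, current_total_p
    for name, dp, cost in sorted(options, key=lambda o: o[1] / o[2], reverse=True):
        if p <= target_total_p or spent + cost > budget:
            break
        chosen.append(name)
        spent += cost
        p -= dp
    return chosen, spent, p

options = [
    ("NEO tracking and redirection", 5e-7, 7.5e9),
    ("Food stockpiling",             2e-4, 1.8e12),
    ("Other intervention",           1e-6, 2.0e10),
]
print(greedy_risk_reduction(options, budget=2e12, target_total_p=1e-20, current_total_p=4e-4))
```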
\n Illustrative Example: Notional Model of NEO Impact Risk and Mitigation In this section, we illustrate our concepts using information in the literature on impact risks posed by two types of near-Earth objects as well as nuclear war. We also provide illustrative modeling of two types of impact risk-reduction measures (i.e., NEO redirection and food stockpiling) that could reduce the probabilities of global catastrophe-level outcomes. These are very simple, notional models of risks and decisions, intended only to illustrate our value-of-information concepts. The example does not attempt to reflect all the latest references, such as the information on asteroid and comet impact risks yielded by the Wide-field Infrared Survey Explorer (WISE) and Near-Earth Object WISE (NEOWISE) survey programs. (Such research has often resulted in downward revisions in catastrophe probability estimates for both high-and low-albedo NEOs.) The example also does not attempt to estimate the risks or risk-reduction benefits related to GCRs besides NEOs and nuclear war, although considering those would affect overall GCR-reduction cost-effectiveness estimates. For example, food stockpiling could have benefits in reducing the effective consequences of pandemics, which is a category of GCR that is not considered in this illustrative example. \n Illustrative Model GCRs, Risk-Reduction Measures, and Assumptions The first type of NEO we model is \"bright\" or easily visible asteroids/comets, which can be observed and tracked long before impact using current astronomical capabilities. The Spaceguard Survey is believed to have detected most such NEOs with greater than 1 km diameter (National Research Council 2010). The second type of NEO we consider is \"dark,\" or lowreflectivity damocloids, which current identification and tracking systems may not see until the objects are already headed directly toward impact. Partly because of the difficulty of observing damocloids using optical telescopes, there are large uncertainties about the frequencies of damocloid impact (Napier 2008 , National Research Council 2010 . Before the Spaceguard survey, such objects were thought to be a small risk relative to other NEOs, but damocloids and long-orbit objects have more recently been viewed as potentially posing the majority of remaining impact risk (National Research Council 2010). The NEOWISE survey program has been using infrared technology to better identify damocloids. Presumably, additional investments in research using infrared, radar, or other technologies could provide better observations of damocloids. Perhaps such damocloid observation systems would be deployed as some combination of Earth-based systems, satellites, and probes. We also model two types of impact event risk-reduction measures. First are NEO orbit redirection measures that offer good and relatively inexpensive reduction of risks of asteroids that are identified and thought to impact years or decades away (Matheny 2007) . The NEO redirection measures would reduce the probability of impact of a large asteroid. However, we assume Decision Analysis, 2017 , vol. 14, no. 3, pp. 187-203, © 2017 that they would not reduce the probability of impact of damocloids (at least, not without additional investments to identify damocloids, which is beyond the scope of this illustrative example). 
The second type of risk-reduction measure is food stockpiling to provide significant food reserves for a large number of people in case of a period of reduced food production (Rampino 2008) . The impact effects of large asteroids and comets could be broadly similar to nuclear winter and supervolcanism in their negative impacts on global food production. Food stockpiles may help humanity to survive either event. Rampino (2008) mentions that one potential supervolcanism survival strategy would be to stockpile enough food (e.g., grain) to last several years until agricultural productivity goes back up. Rampino (2008) notes that current inventories are only equivalent to about two months' consumption. While difficult to accomplish in many parts of the world, it still might be relatively feasible without advanced technology, should be relatively uncontroversial (especially if production is handled in a way that does not drive up global food prices very much), and could have some value across a number of GCR hazards including war, quarantine after pandemic, etc. In addition, unlike some other GCR mitigation measures, stockpiled food should retain near its purchase value in normal usage even in time periods where no GCR scenarios arise (i.e., if no emergencies arise before the stored food expires, the food can be eaten when rotated out of the stockpile and replaced with new reserves). We implement calculations for the illustrative example in a computational model using the software package Analytica by Lumina Decision Systems. The computational model incorporates all the defined equations and parameters. To estimate probability distributions of outputs, the model performs Latin hypercube sampling, with a model sample size of 10,000 iterations. The model varies continuous-valued inputs according to the previously given probability distributions, and the model produces probabilistic values of its outputs. For more on relevant distributions, e.g., uniform and triangular distributions, see Morgan and Henrion (1990) or the Analytica user guide (Chrisman et al. 2007) . \n Assumptions for Baseline P(Global Catastrophe). Our estimation of the probabilities of bright object and dark object impact risks is based partly on first estimating the total bright object asteroid impact risk, and then estimating how large the dark object comet impact risk is in comparison specifically to bright objects. We estimate the total probability of impacts of asteroids at least 1 km in size as corresponding to a frequency of one in 3 × 10 5 years. That is based on Figure 2 .4 on p. 8 of the National Research Council (2010) or the equivalent on p. 19 of the National Research Council (2010), which indicate a 1 km object impacts approximately once every 4 × 10 5 years. We assume that only 15% of the total population of bright NEOs remain undiscovered (National Research Council 2010) . For bright NEOs that have already been discovered, we also assume negligible impact risk: \"none of those detected objects has a significant chance of impacting Earth in the next century\" (National Research Council 2010, p. 19). The simple way we reflect that in the model is to say the impact risk from visible/ bright NEOs is 0.15 × (1/(3 × 10 5 )). We assume that the impact frequency of damocloids has a probability distribution of Uniform(0, 4) × (1/(3 × 10 5 )), based on a statement by (Napier 2008, p. 
229) that the hazard from damocloids of 1 km diameter "is unknown; it could be negligible, or could more than double the risk assessments based on the objects we see." Some corroboration is provided by the statement by Napier (2008, p. 226) that at the time of his writing, for 1 km objects, there was an "expected impact frequency of about one such body every 500,000 years." Once every 500,000 years is about the same as the once every 3 × 10^5 years we assume for over-1-km visible objects, but for better consistency and comparability with bright NEOs, we use once every 3 × 10^5 years instead of once every 500,000. Napier (2008, p. 225) also observes the following: "Estimates based on the mean impact cratering rate indicate that, on the long-term, a 1 km impactor might be expected every half a million years or so. Again, modeling uncertainties to do with both excavation mechanics and the erratic replenishment of the near-Earth object (NEO) population yield an overall uncertainty factor of a few. A rate of one such impact every 100,000 years cannot be excluded by the cratering evidence." All of the above numbers also have additional uncertainty factors (coefficients) of Triangular(0.5, 1, 2), which is loosely based on the statement by the National Research Council (2010, p. 8) that the uncertainties in intervals between impacts are "on the order of a factor of two." We assume that these uncertainties in the visible/bright NEO frequencies are uncorrelated with the uncertainties in the damocloid frequencies. Some corroboration of the relative risks of bright versus dark objects, and associated uncertainties, is provided by the National Research Council (2010, p. 22): "With the completion of the Spaceguard Survey (that is, the detection of 90 percent of NEOs greater than 1 kilometer in diameter), long-period comets will no longer be a negligible fraction of the remaining statistical risk, and with the completion of the George E. Brown, Jr. Near-Earth Object Survey (for the detection of 90 percent of NEOs greater than 140 meters in diameter), long-period comets may dominate the remaining unknown impact threat." Finally, for an extremely simple estimate of the annual probability of nuclear war, based loosely on estimates given in the literature (Hellman 2008, Barrett et al. 2013, Lundgren 2013), we simply use Triangular(0, 0.0001, 0.001). It seems likely that the annual probabilities of global catastrophe events are orders of magnitude higher for large-scale nuclear war than for large NEO/comet impacts. In our calculations, we use a simplifying approximation of annual probability as being equivalent to annual frequency (e.g., a frequency of one event in 500,000 years implies an annual probability of 1/500,000). Table 1 contains summaries of the assumed annual probabilities of global catastrophe-level events from each considered GCR type. The expressions reflect the substantial uncertainties. \n Assumptions for Reduction in P(Global Catastrophe) if Implementing Each GCR-Reduction Measure. Although the lower bound of effectiveness of NEO detection and redirection might seem to be quite low, it is based partly on the idea that NEOs that have not already been discovered might be significantly more difficult to detect than the ones that have already been detected. This was suggested by Napier (2008, p. 226) regarding the success of NEO detection efforts to date: "There is a caveat: extremely dark objects would go undiscovered and not be entered in the inventory of global hazards." For food stockpiling, the assumed probability distribution for the reduction in probability of global catastrophe-level NEO/comet impacts is Uniform(0.1, 0.9).
It assumes that the stockpile would consist of extremely inexpensive sources of calories and nutrients (see below on cost assumptions), for which there would be large uncertainties about risk-reduction performance. Table 2 contains summaries of the assumed effects and costs of GCR-reduction measures. (A status quo option, which adds no cost and does not reduce GCR, is omitted from the table but is an additional option in the model.) The effects of the measures are given in terms of their assumed reduction in probability of global catastrophe from each GCR type. The costs of the measures are given in terms of the present value of their costs in 2012 dollars. \n Assumptions for Costs of GCR-Reduction Measures. The cost estimate for food stockpiling assumes a world population of 7 billion, a one-year stockpile, and a per-person-year stockpile cost based on the food expenditures of the world's poorest people, which is approximately $0.70 per day (GiveWell 2013). The cost for tracking and redirection capability assumes 30 years of costs, with $250 million annual costs (National Research Council 2010). \n Example Results In this section, we give results from the computational model for the illustrative example, using the previously stated assumptions. Figure 5 gives the probability density function (PDF) of the base-case annual probability of global catastrophe from both visible and dark NEO impacts. (On the horizontal axis, "u" is "mu," or micro, i.e., 10^−6.) Contemplating probabilities of probabilities can be confusing, but it is easy to see in PDF figures where there are broad spreads of probability (corresponding to great uncertainties) or narrow spreads (for less uncertainty). The figure shows that there are substantial uncertainties about dark-object damocloid risks and even greater uncertainties about nuclear war risks, both of which could be much greater than visible-NEO impact risks. Table 3 gives the mean cost-effectiveness of GCR-reduction measures (without research to reduce uncertainties) in terms of how much the measure reduces the average total probability of global catastrophe per dollar spent on the measure. (Recall that in these terms, a high number for cost-effectiveness is desirable, because it indicates a large reduction in global catastrophe probability for the dollars spent. The calculations incorporate the global catastrophe probability distributions shown in Figure 5, which showed that nuclear war risks could be much greater than NEO impact risks.) Table 3's mean cost-effectiveness comparison would seem to suggest spending on food stockpiling if nuclear war risk is included in the scope of analysis, but instead would suggest spending on NEO tracking if nuclear war risk is not included in the scope of analysis. Moreover, as mentioned previously, there are substantial uncertainties about the risks and cost-effectiveness, and food stockpiles might actually be more cost-effective than NEO tracking even if nuclear war risk is not included in the scope of analysis. Figures 6 and 7 give the PDF of the cost-effectiveness for each GCR-reduction measure, if nuclear war risk is or is not included in the analysis, respectively. (The status quo option has a cost-effectiveness of zero because it does not change GCR probability.)
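For readers who want to see the kind of sampling that lies behind the Table 3 means and the Figure 6-7 distributions, here is a small Python sketch using the Table 1 and Table 2 expressions. The plain Monte Carlo sampler and sample size are simplified stand-ins for the Analytica/Latin-hypercube model, so it should only roughly reproduce the reported values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

def tri(lo, mode, hi):
    return rng.triangular(lo, mode, hi, n)

# Baseline annual probabilities of global catastrophe (Table 1 expressions).
p_visible   = tri(0.5, 1, 2) * 0.15 * (1 / 3e5)
p_damocloid = tri(0.5, 1, 2) * rng.uniform(0, 4, n) * (1 / 3e5)
p_nuclear   = tri(0, 1e-4, 1e-3)

# Cost-effectiveness = reduction in total catastrophe probability per dollar (Table 2 effects and costs).
ce_neo  = rng.uniform(0.1, 0.9, n) * p_visible / 7.5e9          # redirection: visible NEOs only
ce_food = (rng.uniform(0.1, 0.9, n) * p_visible
           + rng.uniform(0.1, 0.9, n) * p_damocloid
           + rng.uniform(0.1, 0.9, n) * p_nuclear) / 1.8e12     # stockpiling: all three GCR types

print("mean CE, NEO tracking/redirection:", ce_neo.mean())
print("mean CE, food stockpiling:        ", ce_food.mean())
print("P(stockpiling most cost-effective):", (ce_food > ce_neo).mean())
```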
The figures indicate the overlapping ranges of the probability distributions of cost-effectiveness of food stockpiling and NEO redirection measures. According to the assumptions used in the Monte Carlo model, if nuclear war risk is included in the scope of analysis, there is a 0.8 probability that food stockpiling will be the most cost-effective measure, and there is a 0.2 probability NEO tracking and redirection will be most cost-effective. Conversely, if nuclear war risk is not included in the scope of analysis, there is a 0.999 probability that NEO tracking and redirection will be the most cost-effective measure, and there is a 0.001 probability that global food stockpiling will be most cost-effective. Further research could reduce uncertainties to better determine which risk-reduction measure would really be more cost-effective. According to the Monte Carlo model's assumptions and use of Equation (A.3), the cost-effectiveness-based expected value of perfect information (CEEVPI) in the illustrative examples in this paper is $2 billion if nuclear war is included in the scope of analysis, and $400 million if nuclear war is not included in the scope of analysis. In this illustrative example, research on the risks and risk-reduction effectiveness would have a substantial expected value, largely because of the huge uncertainties about the baseline risks and about the effectiveness of risk-reduction measures. The example also supports the argument that we can learn something valuable by doing the analysis for more than one type of GCR at a time. \n Conclusion In this paper, we argue that value of information based on cost-effectiveness is a useful tool for analysis of GCR to inform risk-reduction decisions and show how to apply it to GCRs and risk-reduction interventions in a comprehensive, integrated fashion. We discuss key challenges in real-world implementation of this paper's framework and argue that these challenges can be addressed. We then illustrate these concepts with simple example models of impact risks from both visible and \"dark\" near-Earth objects as well as nuclear war effects and consideration of related risk-reduction measures. The illustrative example shows that such calculations can have considerable value, and also supports looking at more than one GCR at a time. Unlike most value of information approaches, our approach for calculating value of information is based on risk-reduction cost-effectiveness, to avoid implicitly equating lives and dollars, e.g., using a VSL, which may be inappropriate given the scale of GCRs. Our equation for value of information may be useful in other domains where VSLs would not be appropriate. Our suggested approach could be used generally to work toward a comprehensive rigorous assessment of GCRs and risk-reduction options. A useful step could be to expand and update this paper's illustrative model (e.g., to reflect more recent NEO research 5 and other NEO risk-management options 6 ). However, it would be more valuable to work toward a broader agenda for integrated assessment to inform GCR-reduction decisions. Ideally, the scope of such assessment would address all important GCRs over key time periods (e.g., the next century) and also key risk-reduction options of relevant stakeholders (including, but not limited to, public policy options of governments). This paper's framework could help guide steps in such assessment by prioritizing pieces of research in terms of value of information for reducing the total probability of GCRs. 
While real-world GCR research would not result in perfect information, even imperfect information could have significant value in informing GCR-reduction resource allocation decisions. Our approach could have great value in comprehensively and rigorously assessing GCR and risk-reduction options. Prior GCR research is of only limited value for informing GCR-reduction decisions. Much of the work to date has focused on specific GCRs, leaving great uncertainty about which GCRs are most important to focus on. Notable exceptions include research findings that GCRs from cosmic events are small relative to GCRs from human actions (Tegmark and Bostrom 2005), an informal survey of GCR researchers providing estimates of the probabilities of human extinction from a small number of GCR types (Sandberg and Bostrom 2008), analyses of interacting sequences of GCRs (Tonn and MacGregor 2009, Baum et al. 2013), and several largely qualitative surveys (Bostrom 2002, Rees 2003, Posner 2004, Smil 2008, Cotton-Barratt et al. 2016). These studies are insightful but do not provide rigorous quantitative recommendations for risk-reduction resource allocations. We are aware of only one study, that of Leggett (2006), that attempts to quantitatively evaluate GCR-reduction measures across a broad space of GCR, but that study has shortcomings such as not considering all GCR categories nor all potentially valuable GCR-reduction measures. The modest literature available does not come close to resolving the large uncertainties surrounding both the GCRs themselves and the effectiveness of possible risk-reducing interventions. Our work suggests that comprehensive, integrated assessment of GCRs could be quite valuable for informing GCR-reduction decisions, and that tools can be developed for making such assessments. \n Appendix. Derivation of Cost-Effectiveness-Based Formula for Expected Value of Information In this appendix, we provide the detail of our derivation of the CEEVPI formula. The following calculations are aided by two simplifying assumptions, as discussed previously: a binary threshold for global catastrophes and independence of different GCRs (except to the extent that GCR event interactions and dependencies are accounted for in the fault trees or other model components). Let X_j equal the value loss due to global catastrophe j. Because of the binary threshold assumption, X_j is the same for all j; call it X. Let R(t) be the risk of an event occurring during time period t, with R = probability × magnitude. Then, given the independence of different GCRs, the total risk for y GCRs is $R_{\mathrm{tot}}(t) = p_{\mathrm{tot}}(t)\,X = \Big[1 - \prod_{j=1}^{y}\big(1 - p_j(t)\big)\Big]X$. (A.1) We further assume that all GCRs have sufficiently low probabilities per time period p_j(t) that the total probability of global catastrophe in that time period can be approximated as the sum of the independent probabilities, such that $R_{\mathrm{tot}}(t) \approx \Big[\sum_{j=1}^{y} p_j(t)\Big]X$. (A.2) In this paper, we evaluate possible GCR-reducing interventions in terms of their cost-effectiveness, i.e., their reduction in GCR per unit cost. We favor cost-effectiveness for two reasons. First, cost-benefit analysis is hampered by the challenge of quantifying the value loss due to global catastrophe, X. The benefit of interventions is the reduction in risk, which also depends on X. While X is generally believed to be very large, quantitative estimates span a huge range, as mentioned previously. In contrast, cost-effectiveness analysis does not depend on X. Let c_i and CE_i be the cost and cost-effectiveness of intervention i. Then, $CE_i = \frac{R_{0,\mathrm{tot}}(t) - R_{i,\mathrm{tot}}(t)}{c_i} = \frac{p_{0,\mathrm{tot}}(t) - p_{i,\mathrm{tot}}(t)}{c_i}\,X$. (A.3) Since X is equal for all global catastrophes, comparisons of the cost-effectiveness of different interventions are the same regardless of the value of X. We assume that there is a decision to be made about the allocation of resources to some combination of direct risk reduction and research, and that the main decision rule is to choose whatever combination of options has the best overall expected GCR-reduction cost-effectiveness among the options considered in the analysis. Then the decision maker should buy as much risk reduction (and risk research enabling better risk-reduction decisions) as they can at whatever total cost, as long as that results in the greatest cost-effectiveness. (We assume that budgets are not an issue in the context of the risk-reduction and research options under consideration, and we do not explicitly account for potential budget constraints in the following. This implicitly assumes that sufficient total resources are either being provided by a single entity or are coordinated in some fashion.) If the information's expected effect and cost are such that, even with the research cost included, it would achieve a better cost-effectiveness than whatever would have been the optimal investment before the research based on expected values, then the research is worth funding. We denote whether research is conducted to reduce uncertainty on a particular factor using superscript b for "before" research, or without information from research, and superscript a for "after" research, or with information from research. (Thus, before research is conducted on p_i, it is p^b_i, and after research is conducted, it is p^a_i.) Again, we ignore the actual costs of research and focus on the amount of resources the decision maker ought to be willing to pay for the total value added by the research. We use the term w to denote the amount of resources the decision maker ought to be willing to pay for the total added value of conducting research. (It adds no value to do no research.) For the purposes of this derivation, we do not provide more detailed breakdowns of the amount of resources the decision maker ought to be willing to pay for the value added by performing the specific pieces of research that together comprise the total value added by research, which actually could consist of separate pieces of research on different uncertain factors. (The amount of resources the decision maker ought to be willing to pay for the value added by each piece of research could be assessed using an extension of the derivation provided here.) Note that the best option after research, s^a (which has cost-effectiveness in Monte Carlo iteration m of $(p^a_{0m} - p^a_{sm})/c^a_{sm}$), is not necessarily the same as the best option before research, s^b (which has cost-effectiveness in Monte Carlo iteration m of $(p^b_{0m} - p^b_{sm})/c^b_{sm}$). Research that reduces but does not eliminate uncertainty about a factor yields imperfect information. In a case where research produces perfect information about a factor, all uncertainty about the factor is eliminated after research. In terms of Monte Carlo iterations, after perfect information, one of the Monte Carlo iterations will have randomly sampled factor values whose values are closest to the actual real-world factor values. As long as doing more research adds more value, and if we ignore the actual costs of performing the research, resources for research ought to be invested in up to the point where a funder would obtain no further benefit from investing in additional research (up to that point, investing in additional research yields a better overall cost-effectiveness). At that point, the expected cost-effectiveness of the best risk-reduction option before research is equal to the expected cost-effectiveness of the best risk-reduction option after research, including the amount of resources the decision maker ought to be willing to pay for the total value added by research: $E\!\left[\frac{p^b_0 - p^b_s}{c^b_s}\right] = E\!\left[\frac{p^a_0 - p^a_s}{c^a_s + w}\right]$. The E[·] terms can be distributed and regathered because all the relevant calculations (i.e., both the expected-value calculations and the cost-effectiveness calculations) involve linear operations and because the p and c variables are assumed to be uncorrelated. (To be more specific, manipulation of the numerator and denominator is allowed because they are linear operations, and multiplicative operations are allowed because the covariance is assumed to be 0.) Distributing and rearranging the terms to solve for the expected value of the amount of resources the decision maker ought to be willing to pay for the total value added by research gives the value of research as the cost-effectiveness-based expected value of information, $\mathrm{CEEVI} = E[w] = E\!\left[\, c^b_s\,\frac{p^a_0 - p^a_s}{p^b_0 - p^b_s} - c^a_s \right]$. (A.11) This formula for CEEVI actually applies to both perfect-information and imperfect-information cases. However, our focus in this derivation is on the limiting case where the research yields perfect information, which provides the upper limit to the value of research, i.e., the cost-effectiveness-based expected value of perfect information, CEEVPI. It turns out that, when used in the Analytica software by Lumina Decision Systems, the above CEEVI formula can be used in a straightforward fashion to set up the Monte Carlo simulation computations for CEEVPI (by directly using each factor's Monte Carlo sampling values in each Monte Carlo iteration), and that is what we use in the illustrative Analytica model accompanying this paper. (Computation of the cost-effectiveness-based expected value of imperfect information, CEEVII, would require an extra step to simulate after-research imperfect-information probability distributions for each factor, instead of after-research perfect-information point values.) Endnotes 1 For an example of a canonical utility-function-based decision analysis framework for one GCR category, asteroid and comet impact risk, see Lee et al. (2014). 2 For an illustrative example of a probability distribution reflecting uncertainty about an annualized global catastrophe probability, see Figure 5 in Section 4.2. For more on such probability distributions, see Chapters 4 and 5 of Morgan and Henrion (1990). 3 For some GCR types, it may not be most useful to think in terms of consequence exceedance thresholds, but rather in terms of probabilities of various possibilities, such as in future "artificial superintelligent catastrophe" scenarios. However, modeling approaches such as fault trees could be useful for some such scenarios (Barrett and Baum 2017). 4 There are also related challenges in the selection of metrics, such as for event consequences: whether to focus on estimated fatalities over some specific time scale, or to also consider economic impact, etc. Even choosing cost metrics for use in cost-effectiveness analysis presents challenges.
In this paper, we assume cost is defined in monetary (dollar) terms, but those have limitations (Baum 2012), and scarcities exist for other resources such as labor capacity. 5 See, for example, Reinhardt et al. (2015). 6 For example, there are a number of options for alternative food sources during a crop-failure crisis (Denkenberger and Pearce 2015). Those potentially could be more cost-effective than food stockpiling, but we believe their effectiveness also would have greater uncertainty because of complexity, etc. \n Figure 1. High-Level Global Catastrophe Fault Tree and Risk Management Decision Influence Diagram \n Figure 2. Global Catastrophe Probability as a Function of Event Consequence and Exceedance Probability \n Figure 3. Uncertainties in Global Catastrophe Probability Modeling \n Figure 4. High-Level Decision Tree for Research and Risk-Reduction Decisions \n Figure 5. (Color online) PDF of Baseline Annual Probabilities of Global Catastrophe \n Figure 6. (Color online) PDF of Cost-Effectiveness of GCR-Reduction Measures if Nuclear War Risk Is Included in Scope \n Table 1. Expressions for Assumed Baseline Annual Probabilities of Global Catastrophe \n GCR type | Baseline P(Global Catastrophe) \n Visible near-Earth objects | Triangular(0.5, 1, 2) × 0.15 × (1/(3 × 10^5)) \n Long-period comets (damocloids) | Triangular(0.5, 1, 2) × Uniform(0, 4) × (1/(3 × 10^5)) \n Nuclear war | Triangular(0, 0.0001, 0.001) \n Table 2. Assumed Effects (Reduction in P(Global Catastrophe) from Each GCR Type) and Costs of Risk-Reduction Measures \n GCR-reduction measure | Visible near-Earth objects | Long-period comets (damocloids) | Nuclear war | Cost ($Billion) \n NEO tracking and redirection measures | Uniform(0.1, 0.9) | 0 | 0 | 7.5 \n Food stockpiling for all of humanity | Uniform(0.1, 0.9) | Uniform(0.1, 0.9) | Uniform(0.1, 0.9) | 1,800 \n Table 3. Mean Cost-Effectiveness of GCR-Reduction Measures (Reduction in Total Global Catastrophe Probability per Dollar) \n Include nuclear war in scope? | NEO tracking and redirection measures | Food stockpiling for all of humanity | Both food stockpiling and NEO tracking/redirection \n Yes | 4 × 10^−17 | 1 × 10^−16 | 1 × 10^−16 \n No | 4 × 10^−17 | 2 × 10^−18 | 2 × 10^−18", "date_published": "n/a", "url": "n/a", "filename": "deca.2017.0350.tei.xml", "abstract": "In this paper, we develop and illustrate a framework for determining the potential value of global catastrophic risk (GCR) research in reducing uncertainties in the assessment of GCR levels and the effectiveness of risk-reduction options. The framework uses the decision analysis concept of the expected value of perfect information in terms of the cost-effectiveness of GCR reduction. We illustrate these concepts using available information on impact risks from two types of near-Earth objects (asteroids or extinct comets) as well as nuclear war, and consideration of two risk-reduction measures. We also discuss key challenges in extending the calculations to all GCRs and risk-reduction options, as part of an agenda for comprehensive, integrated GCR research. While real-world research would not result in perfect information, even imperfect information could have significant value in informing GCR-reduction decisions. Unlike most value of information approaches, our equation for calculating value of information is based on risk-reduction cost-effectiveness, to avoid implicitly equating lives and dollars, e.g., using a value of statistical life (VSL), which may be inappropriate given the scale of GCRs. Our equation for value of information may be useful in other domains where VSLs would not be appropriate.", "id": "f24645b7ec0a0a21e0579622c3a4b602"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Mark K Ho", "Michael L Littman", "James Macglashan", "Fiery Cushman", "Joseph L Austerweil"], "title": "Showing versus Doing: Teaching by Demonstration", "text": "Introduction Is there a difference between doing something and showing someone else how to do something? Consider cooking a chicken. To cook one for dinner, you would do it in the most efficient way possible while avoiding contaminating other foods. But, what if you wanted to teach a completely naïve observer how to prepare poultry? In that case, you might take pains to emphasize certain aspects of the process. For example, by ensuring the observer sees you wash your hands thoroughly after handling the uncooked chicken, you signal that it is undesirable (and perhaps even dangerous) for other ingredients to come in contact with raw meat. More broadly, how could an agent show another agent how to do a task, and, in doing so, teach about its underlying reward structure? To model showing, we draw on psychological research on learning and teaching concepts by example. People are good at this. For instance, when a teacher signals their pedagogical intentions, children more frequently imitate actions and learn abstract functional representations [6, 7]. Recent work has formalized concept teaching as a form of recursive social inference, where a teacher chooses an example that best conveys a concept to a learner, who assumes that the teacher is choosing in this manner [14]. The key insight from these models is that helpful teachers do not merely select probable examples of a concept, but rather choose examples that best disambiguate a concept from other candidate concepts. This approach allows for more effective, and more efficient, teaching and learning of concepts from examples. We can extend these ideas to explain showing behavior.
Although recent work has examined user-assisted teaching [8], identified legible motor behavior in human-machine coordination [9], and analyzed reward coordination in game-theoretic terms [11], previous work has yet to successfully model how people naturally teach reward functions by demonstration. Moreover, in Inverse Reinforcement Learning (IRL), in which an observer attempts to infer the reward function that an expert (human or artificial) is maximizing, it is typically assumed that experts are only doing the task and not intentionally showing how to do the task. This raises two related questions: First, how does a person showing how to do a task differ from them just doing it? And second, are standard IRL algorithms able to benefit from human attempts to show how to do a task? In this paper, we investigate these questions. To do so, we formulate a computational model of showing that applies Bayesian models of teaching by example to the reward-function learning setting. We contrast this pedagogical model with a model of doing: standard optimal planning in Markov Decision Processes. The pedagogical model predicts several systematic differences from the standard planning model, and we test whether human participants reproduce these distinctive patterns. For instance, the pedagogical model chooses paths to a goal that best disambiguate which goal is being pursued (Experiment 1). Similarly, when teaching feature-based reward functions, the model will prioritize trajectories that better signal the reward value of state features, or even perform trajectories that would be inefficient for an agent simply doing the task (Experiment 2). Finally, to determine whether showing is indeed better than doing, we train a standard IRL algorithm with our model trajectories and human trajectories. \n A Bayesian Model of Teaching by Demonstration Our model draws on two approaches: IRL [2] and Bayesian models of teaching by example [14]. The first, IRL and the related concept of inverse planning, has been used to model people's theory of mind, or the capacity to infer another agent's unobservable beliefs and/or desires through their observed behavior [5]. The second, Bayesian models of pedagogy, prescribes how a teacher should use examples to communicate a concept to an ideal learner. Our model of teaching by demonstration, called Pedagogical Inverse Reinforcement Learning, merges these two approaches by treating a teacher's demonstration trajectories as communicative acts that signal the reward function that an observer should learn. \n Learning from an Expert's Actions \n Markov Decision Processes An agent that plans to maximize a reward function can be modeled as the solution to a Markov Decision Process (MDP). An MDP is defined by the tuple ⟨S, A, T, R, γ⟩: a set of states in the world S; a set of actions for each state A(s); a transition function that maps states and actions to next states, T : S × A → S (in this work we assume all transitions are deterministic, but this can be generalized to probabilistic transitions); a reward function that maps states to scalar rewards, R : S → ℝ; and a discount factor γ ∈ [0, 1]. Solutions to an MDP are stochastic policies that map states to distributions over actions, π : S → P(A(s)). Given a policy, we define the expected cumulative discounted reward, or value, V^π(s), at each state associated with following that policy: $V^{\pi}(s) = E_{\pi}\!\left[\sum_{k=0}^{\infty} \gamma^{k} r_{t+k+1} \,\middle|\, s_t = s\right]$. (1)
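Since transitions are assumed deterministic, the optimal values behind Equation (1) and the state-action values Q* can be computed with ordinary value iteration. The following is a minimal sketch on an invented five-state corridor; the helper name, grid, rewards, and discount are illustrative only.

```python
import numpy as np

def q_values(T, R, terminal, gamma=0.95, n_sweeps=200):
    """Optimal state-action values for a deterministic MDP via value iteration.
    T[s][a] gives the next state, R[s2] the reward for entering state s2,
    and `terminal` is the set of absorbing goal states."""
    n_states, n_actions = len(T), len(T[0])
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_sweeps):
        V = Q.max(axis=1)
        for s in range(n_states):
            if s in terminal:
                continue
            for a in range(n_actions):
                s2 = T[s][a]
                Q[s, a] = R[s2] + gamma * (0.0 if s2 in terminal else V[s2])
    return Q

# Five-state corridor, actions {0: left, 1: right}, terminal goal at state 4 worth +10.
T = [[max(s - 1, 0), min(s + 1, 4)] for s in range(5)]
R = [0, 0, 0, 0, 10]
print(q_values(T, R, terminal={4}).round(2))
```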
In particular, the optimal policy for an MDP yields the optimal value function, V*, which is the value function that has the maximal value for every state (V*(s) = max_π V^π(s), ∀s ∈ S). The optimal policy also defines an optimal state-action value function, $Q^*(s, a) = E_{\pi}[r_{t+1} + \gamma V^*(s_{t+1}) \mid s_t = s, a_t = a]$. In the Reinforcement Learning setting, an agent takes actions in an MDP and receives rewards, which allow it to eventually learn the optimal policy [15]. We thus assume that an expert who knows the reward function and is doing a task selects an action a_t in a state s_t according to a Boltzmann policy, which is a standard soft-maximization of the action values: $P_{\mathrm{Doing}}(a_t \mid s_t, R) = \frac{\exp\{Q^*(s_t, a_t)/\lambda\}}{\sum_{a' \in A(s_t)} \exp\{Q^*(s_t, a')/\lambda\}}$. (2) λ > 0 is a temperature parameter (as λ → 0, the expert selects the optimal action with probability 1; as λ → ∞, the expert selects actions uniformly at random). \n Inverse Reinforcement Learning (IRL) In the IRL setting, an observer sees a trajectory of an expert executing an optimal policy, j = {(s_1, a_1), (s_2, a_2), ..., (s_k, a_k)}, and infers the reward function R that the expert is maximizing. Given that an agent's policy is stationary and Markovian, the probability of the trajectory given a reward function is just the product of the individual action probabilities, $P_{\mathrm{Doing}}(j \mid R) = \prod_t P_{\mathrm{Doing}}(a_t \mid s_t, R)$. From a Bayesian perspective [13], the observer is computing a posterior probability over possible reward functions R: $P_{\mathrm{Observing}}(R \mid j) = \frac{P_{\mathrm{Doing}}(j \mid R)\,P(R)}{\sum_{R'} P_{\mathrm{Doing}}(j \mid R')\,P(R')}$. (3) Here, we always assume that P(R) is uniform. \n Bayesian Pedagogy IRL typically assumes that the demonstrator is executing the stochastic optimal policy for a reward function. But is this the best way to teach a reward function? Bayesian models of pedagogy and communicative intent have shown that choosing an example to teach a concept differs from simply sampling from that concept [14, 10]. These models all treat the teacher's choice of a datum, d, as maximizing the probability that a learner will infer a target concept, h: $P_{\mathrm{Teacher}}(d \mid h) = \frac{P_{\mathrm{Learner}}(h \mid d)^{\alpha}}{\sum_{d'} P_{\mathrm{Learner}}(h \mid d')^{\alpha}}$. (4) α is the teacher's softmax parameter. As α → 0, the teacher chooses uniformly at random; as α → ∞, the teacher chooses the d that maximally causes the learner to infer the target concept h; when α = 1, the teacher is "probability matching". The teaching distribution describes how examples can be effectively chosen to teach a concept. For instance, consider teaching the concept of "even numbers". The sets {2, 2, 2} and {2, 18, 202} are both examples of even numbers. Indeed, given finite options with replacement, they both have the same probability of being randomly chosen as sets of examples. But {2, 18, 202} is clearly better for helpful teaching, since a naïve learner shown {2, 2, 2} would probably infer that "even numbers" means "the number 2". This illustrates an important aspect of successful teaching by example: examples should not only be consistent with the concept being taught, but should also maximally disambiguate the concept being taught from other possible concepts.
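A minimal sketch of Equations (2)-(3): a Boltzmann "doing" policy over Q-values, the trajectory likelihood under that policy, and the observer's posterior over a finite set of candidate reward functions. The function names, and the assumption that one policy has been precomputed per candidate reward function, are ours rather than the authors'.

```python
import numpy as np

def boltzmann_policy(Q, lam=1.0):
    """Soft-maximizing 'doing' policy P(a | s) from state-action values (Equation 2)."""
    z = Q / lam
    z -= z.max(axis=1, keepdims=True)              # for numerical stability
    expz = np.exp(z)
    return expz / expz.sum(axis=1, keepdims=True)

def trajectory_likelihood(traj, policy):
    """P(j | R): product of action probabilities along the state-action sequence j."""
    return float(np.prod([policy[s, a] for s, a in traj]))

def observer_posterior(traj, policies, prior=None):
    """Posterior over candidate reward functions given a demonstration (Equation 3).
    `policies` holds one Boltzmann policy per candidate reward function."""
    prior = np.full(len(policies), 1 / len(policies)) if prior is None else np.asarray(prior)
    lik = np.array([trajectory_likelihood(traj, pi) for pi in policies])
    post = lik * prior
    return post / post.sum()
```

A showing distribution in the style of the next section's Equation (5) can then be obtained by evaluating this posterior for every candidate trajectory, raising it to the power α, and renormalizing over trajectories.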
\n Pedagogical Inverse Reinforcement Learning To define a model of teaching by demonstration, we treat the teacher's trajectories in a reinforcement-learning problem as a "communicative act" for the learner's benefit. Thus, an effective teacher will modify its demonstrations when showing, and not simply doing, a task. As in Equation 4, we can define a teacher that selects trajectories that best convey the reward function: $P_{\mathrm{Showing}}(j \mid R) = \frac{P_{\mathrm{Observing}}(R \mid j)^{\alpha}}{\sum_{j'} P_{\mathrm{Observing}}(R \mid j')^{\alpha}}$. (5) In other words, showing depends on a demonstrator's inferences about an observer's inferences about doing. This model provides quantitative and qualitative predictions for how agents will show and teach how to do a task, given that they know its true reward function. Since humans are the paradigm teachers and a potential source of expert knowledge for artificial agents, we tested how well our model describes human teaching. In Experiment 1, we had people teach simple goal-based reward functions in a discrete MDP. Even though in these cases entering a goal is already highly diagnostic, different paths of different lengths are better for showing, which is reflected in human behavior. In Experiment 2, people taught more complex feature-based reward functions by demonstration. In both studies, people's behavior matched the qualitative predictions of our models. \n Experiment 1: Teaching Goal-based Reward Functions Consider a grid with three possible terminal goals as shown in Figure 1. If an agent's goal is &, it could take a number of routes. For instance, it could move all the way right and then move upwards towards the & (right-then-up), or first move upwards and then towards the right (up-then-right). But what if the agent is not just doing the task, but also attempting to show it to an observer trying to learn the goal location? When the goal is &, our pedagogical model predicts that up-then-right is the more probable trajectory because it is more disambiguating. Up-then-right better indicates that the intended goal is & than right-then-up does, because right-then-up has more actions consistent with the goal being #. We have included an analytic proof of why this is the case for a simpler setting in the supplementary materials. Additionally, our pedagogical model makes the prediction that when trajectory length costs are negligible, agents will engage in repetitive, inefficient behaviors that gesture towards one goal location over others. This "looping" behavior results when an agent can return to a state with an action that has high signaling value by taking actions that have a low signaling "cost" (i.e., they do not signal something other than the true goal). Figure 1d shows an example of such a looping trajectory. In Experiment 1, we tested whether people's showing behavior reflected the pedagogical model when reward functions are goal-based. If so, this would indicate that people choose the disambiguating path to a goal when showing. \n Experimental Design Sixty Amazon Mechanical Turk participants performed the task in Figure 1. One was excluded due to missing data. All participants completed a learning block in which they had to find the reward location without being told. Afterwards, they were either placed in a Do condition or a Show condition. Participants in Do were told they would win a bonus based on the number of rewards (correct goals) they reached and were shown the text, "The reward is at location X", where X was one of the three symbols %, #, or &. Those in Show were told they would win a bonus based on how well a randomly matched partner, who was shown their responses (and did not know the location of the reward), did on the task. On each round of Show, participants were shown text saying "Show your partner that the reward is at location X". All participants were given the same sequence of trials, in which the reward locations were <%, &, #, &, %, #, %, #, &>. \n Results As predicted, Show participants tended to choose paths that disambiguated their goal as compared to Do participants. We coded the number of responses on & and % trials that were "showing" trajectories, based on how they entered the goal (i.e., out of 3 for each goal). On & trials, entering from the left, and on % trials, entering from above, were coded as "showing". We ran a 2 × 2 ANOVA with Show vs. Do as a between-subjects factor and goal (% vs. &) as a repeated measure. There was a main effect of condition (F(1, 57) = 16.17, p < .001; Show: M = 1.82, S.E. = 0.17; Do: M = 1.05, S.E. = 0.17) as well as a main effect of goal (F(1, 57) = 4.77, p < .05; %-goal: M = 1.73, S.E. = 0.18; &-goal: M = 1.15, S.E. = 0.16). There was no interaction (F(1, 57) = 0.98, p = 0.32). The model does not predict any difference between conditions for the # (lower right) goal. However, a visual analysis suggested that more participants took a "swerving" path to reach #. This observation was confirmed by looking at trials where # was the goal and comparing the number of swerving trials, defined as making more than one change in direction (Show: M = 0.83, Do: M = 0.26; two-sided t-test: t(44.2) = 2.18, p = 0.03). Although not predicted by the model, participants may swerve to better signal their intention to move 'directly' towards the goal. \n Discussion Reaching a goal is sufficient to indicate its location, but participants still chose paths that better disambiguated their intended goal. Overall, these results indicate that people are sensitive to the distinction between doing and showing, consistent with our computational framework. \n Experiment 2: Teaching Feature-based Reward Functions Experiment 1 showed that people choose disambiguating plans even when entering the goal makes this seemingly unnecessary. However, one might expect richer showing behavior when teaching more complex reward functions. Thus, for Experiment 2, we developed a paradigm in which showing how to do a task, as opposed to merely doing a task, makes a difference for how well the underlying reward function is learned. In particular, we focused on teaching feature-based reward functions that allow an agent to generalize what it has learned in one situation to a new situation. People often use feature-based representations for generalization [3], and feature-based reward functions have been used extensively in reinforcement learning (e.g., [1]). We used the colored-tile grid task shown in Figure 2 to study teaching feature-based reward functions. White tiles are always "safe" (reward of 0), while yellow tiles are always terminal states that reward 10 points. The remaining 3 tile types-orange, purple, and cyan-are each either "safe" or "dangerous" (reward of −2). The rewards associated with the three tile types are independent, and nothing about the tiles themselves signals whether they are safe or dangerous. A standard planning algorithm will reach the terminal state in the most efficient and optimal manner. Our pedagogical model, however, predicts that an agent who is showing the task will engage in specific behaviors that best disambiguate the true reward function.
For instance, the pedagogical model is more likely to take a roundabout path that leads through all the safe tile types, choose to remain on a safe colored tile rather than go on the white tiles, or even loop repeatedly between multiple safe tile-types. All of these types of behaviors send strong signals to the learner about which tiles are safe as well as which tiles are dangerous. \n Experimental Design Sixty participants did a feature-based reward teaching task; two were excluded due to missing data. In the first phase, all participants were given a learning-applying task. In the learning rounds, they interacted with the grid shown in Figure 2 while receiving feedback on which tiles won or lost points. Safe tiles were worth 0 points, dangerous tiles were worth -2 points, and the terminal goal tile was worth 5 points. They also won an additional 5 points for each round completed for a total of 10 points. Each point was worth 2 cents of bonus. After each learning round, an applying round occurred in which they applied what they just learned about the tiles without receiving feedback in a new grid configuration. They all played 8 pairs of learning and applying rounds corresponding to the 8 possible assignments of \"safe\" and \"dangerous\" to the 3 tile types, and order was randomized between participants. As in Experiment 1, participants were then split into Do or Show conditions with no feedback. Do participants were told which colors were safe and won points for performing the task. Show participants still won points and were told which types were safe. They were also told that their behavior would be shown to another person who would apply what they learned from watching the participant's behavior to a separate grid. The points won would be added to the demonstrator's bonus. \n Results Responses matched model predictions. Do participants simply took efficient routes, whereas Show participants took paths that signaled tile reward values. In particular, Show participants took paths that led through multiple safe tile types, remained on safe colored tiles when safe non-colored tiles were available, and looped at the boundaries of differently colored safe tiles. \n Model-based Analysis To determine how well the two models predicted human behaviors globally, we fit separate models for each reward function and condition combination. We found parameters that had the highest median likelihood out of the set of participant trajectories in a given reward function-condition combination. Since some participants used extremely large trajectories (e.g. >25 steps) and we wanted to include an analysis of all the data, we calculated best-fitting state-action policies. For the standard-planner, it is straightforward to calculate a Boltzmann policy for a reward function given λ. For the pedagogical model, we first need to specify an initial model of doing and distribution over a finite set of trajectories. We determine this initial set of trajectories and their probabilities using three parameters: λ, the softmax parameter for a hypothetical \"doing\" agent that the model assumes the learner believes it is observing; l max , the maximum trajectory length; and p min , the minimum probability for a trajectory under the hypothetical doing agent. The pedagogical model then uses an α parameter that determines the degree to which the teacher is maximizing. 
State-action probabilities are calculated from a distribution over trajectories using the equation P (a | s, R) = j P (a | s, j)P (j | R), where P (a | s, j) = |{(s,a):s=st,a=at∀(st,at)∈j}| |{(s,a):s=st∀(st,at)∈j}| . We fit parameter values that produced the maximum median likelihood for each model for each reward function and condition combination. These parameters are reported in the supplementary materials. The normalized median fit for each of these models is plotted in Figure 3 . As shown in the figure, the standard planning model better captures behavior in the Do condition, while the pedagogical model better captures behavior in the Show condition. Importantly, even when the standard planning model could have a high λ and behave more randomly, the pedagogical model better fits the Show condition. This indicates that showing is not simply random behavior. \n Behavioral Analyses We additionally analyzed specific behavioral differences between the Do and Show conditions predicted by the models. When showing a task, people visit a greater variety of safe tiles, visit tile types that the learner has uncertainty about (i.e. the colored tiles), and more frequently revisit states or \"loop\" in a manner that leads to better signaling. We found that all three of these behaviors were more likely to occur in the Show condition than in the Do condition. To measure the variety of tiles visited, we calculated the entropy of the frequency distribution over colored-tile visits by round by participant. Average entropy was higher for Show (Show: M = 0.50, SE = 0.03; Do: M = 0.39, SE = 0.03; two-sided t-test: t(54.9) = −3.27, p < 0.01). When analyzing time spent on colored as opposed to un-colored tiles, we calculated the proportion of visits to colored tiles after the first colored tile had been visited. Again, this measure was higher for Show (Show: M = 0.87, SE = 0.01; Do: M = 0.82, SE = 0.01; two-sided t-test: t(55.6) = −3.14, p < .01). Finally, we calculated the number of times states were revisited in the two conditions-an indicator of \"looping\"-and found that participants revisited states more in Show compared to Do (Show: M = 1.38, SE = 0.22; Do: M = 0.10, SE = 0.03; two-sided t-test: t(28.3) = −2.82, p < .01). There was no difference between conditions in the total rewards won (two-sided t-test: t(46.2) = .026, p = 0.80). \n Teaching Maximum-Likelihood IRL One reason to investigate showing is its potential for training artificial agents. Our pedagogical model makes assumptions about the learner, but it may be that pedagogical trajectories are better even for training off-the-shelf IRL algorithms. For instance, Maximum Likelihood IRL (MLIRL) is a state-of-the-art IRL algorithm for inferring feature-based reward functions [4, 12] . Importantly, unlike the discrete reward function space our showing model assumes, MLIRL estimates the maximum likelihood reward function over a space of continuous feature weights using gradient ascent. To test this, we input human and model trajectories into MLIRL. We constrained non-goal feature weights to be non-positive. Overall, the algorithm was able to learn the true reward function better from showing than doing trajectories produced by either the models or participants (Figure 2 ). \n Discussion When learning a feature-based reward function from demonstration, it matters if the demonstrator is showing or doing. In this experiment, we showed that our model of pedagogical reasoning over trajectories captures how people show how to do a task. 
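For readers who want the state-action calculation spelled out, the sketch below implements P(a | s, R) as a mixture over trajectories, where P(a | s, j) is the fraction of visits to state s within trajectory j on which action a was taken. The data structures and toy trajectories are illustrative assumptions.

```python
from collections import Counter, defaultdict

def state_action_policy(trajectories, traj_probs):
    """
    trajectories: list of trajectories, each a list of (state, action) pairs.
    traj_probs: list of P(j | R) values for the same trajectories (should sum to 1).
    Returns a dict: policy[state][action] = P(action | state, R).
    """
    policy = defaultdict(lambda: defaultdict(float))
    for traj, p_j in zip(trajectories, traj_probs):
        visits = Counter(s for s, _ in traj)       # |{(s_t, a_t) in j : s_t = s}|
        sa_counts = Counter(traj)                  # |{(s_t, a_t) in j : s_t = s, a_t = a}|
        for (s, a), n in sa_counts.items():
            policy[s][a] += (n / visits[s]) * p_j  # P(a | s, j) * P(j | R)
    return policy

# Toy example: two trajectories over a tiny grid, weighted by showing probabilities.
trajs = [
    [("A", "right"), ("B", "right")],
    [("A", "up"), ("C", "right"), ("A", "right"), ("B", "right")],
]
probs = [0.7, 0.3]
pol = state_action_policy(trajs, probs)
print(dict(pol["A"]))   # e.g. {'right': ~0.85, 'up': ~0.15}
```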
When showing as opposed to simply doing, demonstrators are more likely to visit a variety of states to show that they are safe, stay on otherwise ambiguously safe tiles, and also engage in \"looping\" behavior to signal information about the tiles. Moreover, this type of teaching is even better at training standard IRL algorithms like MLIRL. \n General Discussion We have presented a model of showing as Bayesian teaching. Our model makes accurate quantitative and qualitative predictions about human showing behavior, as demonstrated in two experiments. Experiment 1 showed that people modify their behavior to signal information about goals, while Experiment 2 investigated how people teach feature-based reward functions. Finally, we showed that even standard IRL algorithms benefit from showing as opposed to merely doing. This provides a basis for future study into intentional teaching by demonstration. Future research must explore showing in settings with even richer state features and whether more savvy observers can leverage a showing agent's pedagogical intent for even better learning. 7: Construct hypothetical doing probability distribution P_Doing(j | R) as an N x M array. 8: P_Observing(R | j) = P_Doing(j | R) P(R) / Σ_{R'} P_Doing(j | R') P(R'). 9: P_Showing(j | R) = P_Observing(R | j)^α / Σ_{j'} P_Observing(R | j')^α. 10: return P_Showing(j | R) 2.1.2 Inverse Reinforcement Learning (IRL) \n Figure 1: Experiment 1: Model predictions and participant trajectories for 3 trials when the goal is (a) &, (b) %, and (c) #. Model trajectories are the two with the highest probability (λ = 2, α = 1.0, p_min = 10^−6, l_max = 4). Yellow numbers are counts of trajectories with the labeled tile as the penultimate state. (d) An example of looping behavior predicted by the model when % is the goal. \n Figure 2: Experiment 2 results. (a) Column labels are reward function codes. They refer to which tiles were safe (o) and which were dangerous (x) with the ordering . Row 1: Underlying reward functions that participants either did or showed; Row 2: Do participant trajectories with visible tile colors; Row 3: Show participant trajectories; Row 4: Mean reward function learned from Do trajectories by Maximum-Likelihood Inverse Reinforcement Learning (MLIRL) [4, 12]; Row 5: Mean reward function learned from Show trajectories by MLIRL. (b) Mean distance between learned and true reward function weights for human-trained and model-trained MLIRL. For the models, MLIRL results for the top two ranked demonstration trajectories are shown. \n Figure 3: Experiment 2 normalized median model fits. \n Algorithm 1: Pedagogical Trajectory Algorithm. Require: starting states s, reward functions {R_1, R_2, ..., R_N}, transition function T, maximum showing trajectory depth l_max, minimum hypothetical doing probability p_min, teacher maximization parameter α, discount factor γ. 1: Π ← ∅ 2: for i = 1 to N do 3:", "date_published": "n/a", "url": "n/a", "filename": "showing_vs_doing.tei.xml", "abstract": "People often learn from others' demonstrations, and inverse reinforcement learning (IRL) techniques have realized this capacity in machines. In contrast, teaching by demonstration has been less well studied computationally. Here, we develop a Bayesian model for teaching by demonstration.
Stark differences arise when demonstrators are intentionally teaching (i.e. showing) a task versus simply performing (i.e. doing) a task. In two experiments, we show that human participants modify their teaching behavior consistent with the predictions of our model. Further, we show that even standard IRL algorithms benefit when learning from showing versus doing.", "id": "5f46469577c9b960714384810bc83ebf"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": [], "title": "The Professional's Dilemma", "text": "In most cases we should expect Alan's wife to feel safer if Alan keeps up appearances. And that's why vast swathes of our professional society are underperforming their potential: because playing it safe means covering things up. Can organizations ever support new ideas? An organization that's funding outside parties to accomplish certain goals (e.g. a charitable donor or foundation giving to charities) generally won't fund ideas, even ideas from those people who are both famous and established experts, unless those ideas start off with a critical amount of their own money attached. Either the idea has to be spearheaded by someone who has enough personal funds, or it has to have raised enough money from donors/investors who no longer have the in-practice authority to dictate decision making. (For more detail on this, see Samo Burja's essay Borrowed versus Owned Power . Borrowed power is granted conditionally --if your charity is given a \"donation\" under the condition that you continue to make the donor happy, that's borrowed power. If you have the money in your own personal savings account, or if the charity raised it from so many small donors that none of them are entitled to make demands about how you spend it, that's owned power. Borrowed power is always weaker than owned power.) It's not generally considered \"credible\", by big donors, that an initiative will be effective enough to be worth funding unless the initiative has managed to cross a certain threshold in terms of acquiring resources. Moreover, if the grantee meets that threshold, gets funded, and does a good job with that money, this doesn't necessarily \"unlock\" disproportionately more resources from the big donor; the grantee will still have to hustle the same amount, or nearly so, to get the second grant. They're still running on borrowed power; they still need to keep looking over their shoulder, worrying about what the funders might think. We know it's possible to do better than this, because this paradigm would never have allowed the Manhattan Project to get funded. Einstein didn't come with his own money; he was merely a renowned physicist with a credible proposal. Somebody in the government had to literally believe his claims that a specific thing that had never been done before could be done. This kind of belief is crucially different from the sense usually used when we talk about believing in someone, in that it is about the trusted person's relationship to their field of expertise, not their relationship to the person trusting them. Not everyone can evaluate every idea, but if a person reaches a certain threshold of expertise, eminence, reputation for honesty, etc., the funder has to be able to come to believe that the content of their ideas is more likely than baseline to be valid and worth pursuing. 
As a funder, you have to be able to believe someone --not everyone, but someone --when they say an initiative is worthwhile, enough that you are willing to take the initiative to make it possible, and you don't expect to regret that decision. Otherwise, you are not looking for ideas outside your institution . You are not a \"hub\", you are not reachable, you are just doing your thing with no outside input. Another way of putting this is that funders need to be able to see potential grantees as peers . Organizations having \"friends\" --other organizations or consultants/free-agents that they trust, are mission-aligned with, and communicate openly with --is often considered unprofessional (\"incestuous\"), but is actually a good thing! There needs to be someone outside you (or your org) whom you trust to be well-informed and value-aligned. Without the capacity to process genuinely new ideas, it's less effective to have grantees than to just have employees and decide everything top-down. If you're doing something in a decentralized fashion, it should be because you actually value the decentralization --you want to get an outside perspective, or on-the-ground data, or expertise, or something. \n Principal-Agent Problems How can the principal (the funder) trust the agent (the grantee)? Most of the incentive systems in the what we hear about in charity are rather primitive; it boils down to \"demand more documentation and proof from grantees.\" This is primitive because it tries to enforce honesty by spending resources on detecting dishonesty, which can be very costly, relative to other ways of incentivizing honesty. It's not even asking the question \"what's the most efficient way to get the grantee to tell me the truth?\" The aesthetic underlying this attitude is called authoritarian high modernism; just trying to add metrics, not even engaging with the fact that metrics can and will be gamed. The people who survive in positions of power in such a system are not the ones who naively try to answer questions they're asked as accurately as possible; they're the ones who keep up appearances. Conversely, when interacting with someone who keeps up appearances and who has survived in a position of power in such a system, there is common knowledge only 'professional' efforts will be supported, and that 'professional' efforts are efforts that don't freely reveal information. You can enforce quality control in a top-down fashion if you have an impartial investigative process and an enforcement mechanism, but who investigates the investigators? Who judges the judges? Ultimately somebody has to WANT to give an honest report of what's going on. And those impartial investigators have to be incentive-aligned such that they benefit more from being honest than lying . Otherwise, even if initially most of the investigators are highly selected for honesty, by organizational maturity, all will have been selected for 'discretion'. To get honest reports, you need a mechanism designed in such a way as to systematically favor truth, much like auctions designed so the most advantageous price to bid is also each person's true price. How do you do that? \n How to solve the problem Let's work through a concrete example: Suppose you're considering giving a grant to a charitable organization. They send you a budget and ask for a dollar amount. How can you incentivize them NOT to pad the budget? 
For instance, if they expect that they might be able to get this grant but they'll have a hard time getting a second grant, they have a strong incentive to ask for enough money up front to last them several years, BUT to say that they need it all for this year. This will happen UNLESS they have reason to believe that you'll frown on budget-padding, BUT are willing to look favorably on orgs that do well in the first year for subsequent grants. This requires opening up lines of communication MUCH more for successful/trusted grantees than for random members of the public or arbitrary grant applicants. It needs to be POSSIBLE to earn your trust, and for that to unlock a certain amount of funding security. Otherwise, grantees must seek funding security --for themselves, their families, their employees, their charitable work --because without doing so they won't be able to do their job at all. They'll seek it through deceiving you, because that will feel like, and will actually be, and will be seen by all observers to be, the responsible thing for them to do . If you make it seem like \"nobody should be able to count on my support, I'll keep 'em on their toes\", they'll find something else they can count on, namely your ignorance of how their project works. The agent ALWAYS knows more than the principal about the operational details of the project! They can always keep you in the dark more effectively than you can snoop on them. So you have to make earning your trust p ossible, empowering and rewarding . You can of course revoke those rewards if they betray your trust, but ultimately you have to be much less suspicious of your trusted friends than you are of randos. Yes, this means you take on risk; but the grantees ALSO take on risk by being honest with you. It's symmetric. Here's one common failure mode in the conventional paradigm: you can't appear competent if you reveal the truth that something in your project isn't working at the current funding level and needs more money. You can't seem \"needy\" for money, or you'll look incompetent, and you won't get the money. So instead you try to get the money by inflating your accomplishments and hiding your needs and trying to appear \"worthy.\" This is counterproductive from the funder's point of view. As a donor, you want to give money where it'll do the most good. This means you need accurate info about what it'll be spent on. But the grantee doesn't necessarily trust you to believe that what they actually need money for is worthy. For a well-known instance, many donors think operational costs are \"waste\", so charities fudge the accounting to make it seem like all donations go to program expenses, and still underspend on operational costs like paying their employees. Or, sometimes part of a charity's actual budget is somewhat unsavory, like bribes to officials in corrupt countries being a necessary cost of actually operating there. Or, some costs can be embarrassing to admit, like the costs of learning/mistakes/R&D, if there's a perceived expectation that you have to get things right the first time. So the onus is on the funder to make it clear that you want to know what they ACTUALLY need, what their REAL constraints are, and that you will not pull the plug on them if they have an awkward funding crunch, once they have earned your trust to an adequate degree. It has to be clear that CANDOR is an important part of earning your trust. 
In particular, it needs to be clear enough to the third parties in the life of the individual you're funding, that being honest with you will pay off, that they pressure the grantee to be more honest with you, the funder. How does their family feel? Is the spouse of the charity's executive director nagging them to be more honest when fundraising, because that's what'll put their kids through college? Because, if not, you can bet their spouse is nagging them to be less honest while fundraising, to squeeze more money out of the donors, because that's the responsible thing to do for their family. What about their other dependents --employees, collaborators, beneficiaries of charitable programs --would THEY put pressure on the executive director to be more forthcoming with the funder? Or would they say \"squeeze those rich morons harder so we can keep the lights on and help people who need it?\" (Or, perhaps, more discretely but synonymously, \"don't make waves, jump through the hoops they're asking you to jump through and tell them what they want to hear.\") People don't exist in a vacuum; they want to gain the esteem and meet the needs of the people they care about. We have to be not only rewarding honesty -not only rewarding honesty more than smoothing things over -but OBVIOUSLY ENOUGH rewarding honesty that even third parties know our reputation. Otherwise everyone who tries to be honest with us will receive a continuous stream of pressure to stop being so irresponsible . \"Why are you favoring a rich guy or a huge foundation over your own family and an important cause?\" It's a fair question! And you need to flip it on its head --you want everyone to be asking \"Why are you forgoing HUGE OPPORTUNITIES for your family and an important cause, just because you're too insecure to be candid with a rich guy?", "date_published": "n/a", "url": "n/a", "filename": "the_professionals_dilemma.tei.xml", "abstract": "Alan, the executive director of a small nonprofit, is sitting down to dinner with his family. Responsibility weighs heavily on Alan. It's nearly time to reapply for a grant that's been his organization's largest source of funds. Alan has carefully cultivated a relationship with Beth, the program officer for the foundation funding the grant. Beth knows and likes Alan, and is excited about the work he's doing. If Alan's organization applies for a grant to continue their existing work, it will almost certainly be approved. But Alan isn't sure what to do. In the past few months, it's become increasingly clear that his fledgling organization's main program -the one they were able to write grants for -isn't the best fit for the needs of the people it's supposed to serve. A bright young intern, Charles, proposed a new program that, as far as Alan can tell, would be a better fit both for the team and the people they are trying to help. On the merits, it seems like Charles's idea is the right thing to do. Alan could, in reapplying, explain the limitations of the existing program, and why the new program would be better. But that would rock the boat. It would mean challenging the very story that got last year's grant. There's no guarantee Beth would like the new idea. No one else has done it. There's no track record to point to. Alan thinks of what might happen if the grant isn't approved. He'd have to lay people off, good people, and his organization might not survive. Even if it does, he'd have to take a pay cut, maybe move to a house in a less expensive neighborhood, pull his kids out of school. 
It's not certain this would happen, but can he afford to take that chance? Let's look across the dinner table at Alan's wife, Dana. If Alan tells her about his dilemma, what do we expect her to feel. What would be the safe option? Disclosing the potentially unsettling truth to the funder in the hopes of being able to do better work in the future? Or smoothing things over, tweaking the existing program to try to patch some of the worse gaps, and muddling on? What will feel to her like the responsible choice for Alan to make, for his family and the other people depending on him?", "id": "9a89a3155949f2e20a1d0bcdaaab8d1d"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Andrew E Snyder-Beattie", "Toby Ord", "Michael B Bonsall"], "title": "An upper bound for the background rate of human extinction", "text": "Bounding the extinction Rate Based on Age of Humanity Anatomically modern human fossils in Ethiopia have been dated to 195 ± 5 thousand years ago (kya) 13 . A more recent fossil discovery in Morocco of an anatomically modern human has been dated to 315 ± 34 kya 14, 15 (though the fossil may exhibit more primitive neurocranial and endocranial morphology). Given that Homo sapiens has existed for hundreds of thousands of years, what can we infer about our background rate of extinction? Assuming that we share a common extinction rate with our predecessors, we can rule out rates that are too high to be compatible with this track record of survival. As our aim is to construct an upper bound, we can set aside the possibility that modern human technology, habitat range, and population size have reduced a number of natural extinction risks. The upper bound is only violated if we have reason to believe current extinction rates are higher than those our predecessors faced. Since we exclude anthropogenic risks from our analysis, we also set aside the majority of the ways in which this could be the case, although we acknowledge there exist boundary cases between purely natural and anthropogenic risks (e.g. a naturally emerging disease could be spread further by modern technology). Ultimately the scope of the upper bound is limited to all risks that have remained constant (or have been reduced) over the past few hundred thousand years. Likelihood of extinction rates. Analysis of taxonomic survivorship curves and temporal ranges for a wide variety of taxa suggest that extinction probabilities can be approximated well by assuming a constant risk of extinction over time [16] [17] [18] . Under this model, extinction can be represented by the exponential distribution with constant extinction rate μ. The probability that humanity goes extinct before time t is given by the cumulative distribution function P(T ≤ t) = 1 − e −μt , where T is the random variable denoting the longevity of our species. Conversely, the probability that humanity makes it beyond time t is P(T ≥ t) = e −μt . We want to evaluate the likelihood of an extinction rate μ, given the observation that humanity has lasted up to time t (so we know that the total longevity of humanity T ≥ t). This can be evaluated as the likelihood function  μ| ≥ = μ − T t e ( ) t . We compute the likelihood of extinction rates between 10 −8 and 10 −4 given a number of different plausible starting dates for Homo sapiens outlined in Fig. 1 and Table 1 . Assuming a 200 thousand year (kyr) survival time, we can be exceptionally confident that rates do not exceed 6.9 × 10 −5 . 
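As a quick arithmetic check of how such bounds follow from the survival-time likelihood, the sketch below computes the rate μ at which the relative likelihood e^{−(μ−μ0)t} drops to a given threshold, using the baseline rate μ0 = 10^−8 and the survival times and thresholds discussed in this paper. It is only an illustration of the calculation, and the "1 in N" values are the approximate annual probabilities obtained from 1/μ.

```python
import math

def bound_on_rate(t_years, threshold, mu0=1e-8):
    """Largest mu whose likelihood relative to mu0, exp(-(mu - mu0) * t), stays above threshold."""
    return mu0 - math.log(threshold) / t_years

for t, label in [(200_000, "200 kyr"), (315_000, "315 kyr"), (2_000_000, "2 Myr (genus Homo)")]:
    for eps in (1e-6, 0.1):
        mu = bound_on_rate(t, eps)
        # For small mu the annual extinction probability 1 - exp(-mu) is approximately mu.
        print(f"{label}, relative likelihood {eps:g}: mu < {mu:.2e} "
              f"(annual probability below ~1 in {1 / mu:,.0f})")
```

Running this reproduces the figures quoted in the text, e.g. roughly 1 in 14,000 and 1 in 87,000 for a 200 kyr track record, and roughly 1 in 870,000 for the 2 Myr track record of the genus Homo.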
This corresponds to an annual extinction probability below roughly 1 in 14,000. The relative likelihood for such high extinction rates are below 10 −6 (one in a million) when compared to a rate of 10 −8 . If we assume that our track record extends further, this upper bound becomes stronger. Using the fossil dated to 315 ka as a starting point for humanity gives an upper bound of μ < 4.4 × 10 −5 , corresponding to an annual extinction probability below 1 in 22,800. Using the emergence of Homo as our starting point pushes the initial bound back a full order of magnitude, resulting in an annual extinction probability below 1 in 140,000. We can also relax the one in million relative likelihood constraint and derive less conservative upper bounds. An alternative bound would be rates with relative likelihood below 10 −1 (1 in 10) when compared to the baseline rate of 10 −8 . If we assume humanity has lasted 200 kyr, we obtain a bound of μ < 1.2 × 10 −5 , corresponding to an annual extinction probability below 1 in 87,000. Using the 2 Myr origin of Homo strengthens the bound by an order of magnitude in a similar way and produces annual extinction probabilities below 1 in 870,000. It is worth noting that this model can be generalised to allow for a varying extinction rate over time μ(t), so that the probability of surviving past time t is given by P(T ≥ t) = e −Θ(t)t , where ∫ μ Θ = t t s ds ( ) (1/ ) ( ) t 0 . The upper bound on Θ(t), the average extinction rate over the interval, can then be calculated in the same way as for the constant rate model. \n Observation Selection Effects The data on humanity's survival time could be subject to survivorship bias. If early Homo sapiens requires a long period of time to develop the intellectual machinery needed to make scientific observations, then such observations could not include short evolutionary histories, regardless of the extinction rate. The amount of information we could derive from a long track record of survival would therefore be limited due to this observation selection effect. Such a track record could indicate a low extinction rate, or be the byproduct of lucky ancestors surviving high extinction rates long enough to beget progeny capable of making scientific observations. One might therefore object that the bounds on the extinction rate we have estimated are too low 12, 23 . Here, we examine and respond to this concern. \n Models to quantify potential sample bias. To model observation selection bias, let us assume that after Homo sapiens first arises another step must be reached. This could represent the origin of language, writing, science, or any relevant factor that would transition early humans into the reference class of those capable of making observations (we call this step 'observerhood'). Let this step be a random variable denoted S, with cumulative distribution function F S (t). As we are examining natural risks, we assume that S and T are independent. The probability that humanity survives long enough to reach observerhood status (via intelligence, language, writing, science, etc) can be found with the following integral: ∫ > = ∞ P T S f t F t dt ( ) ( ) ( ) (1) T S 0 www.nature.com/scientificreports www.nature.com/scientificreports/ where f T (t) = μe −μt , the probability of extinction at time t. 
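The survival-versus-observerhood probability in Eq. (1) is straightforward to check numerically. The sketch below integrates f_T(s)F_S(s) for an exponentially distributed observerhood step (the constant-rate case introduced next) and compares the result with the closed form θ/(θ + μ); the parameter values are arbitrary illustrations.

```python
import math
from scipy.integrate import quad

mu = 1e-3      # extinction rate per year, illustrative
theta = 1e-6   # observerhood rate per year, illustrative

f_T = lambda t: mu * math.exp(-mu * t)        # extinction density
F_S = lambda t: 1 - math.exp(-theta * t)      # P(observerhood by time t), exponential case

p_numeric, _ = quad(lambda t: f_T(t) * F_S(t), 0, math.inf)
p_closed = theta / (theta + mu)
print(p_numeric, p_closed)   # both ~1e-3: observers arise before extinction ~0.1% of the time
```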
We evaluate an adjusted likelihood function ℒ*(μ | T > t), denoting that we are taking the likelihood of an extinction rate μ given that humanity has survived to time t, and the fact that we are conditioning on the existence of observers such that T > S. This results in the adjusted likelihood function: ℒ*(μ | T > t) = P(T > t | T > S, μ) (2) = (1/c) ∫_t^∞ f_T(s) F_S(s) ds (3) where c = P(T > S) is a normalising constant. We evaluate a model with four variations for the observerhood step: a model in which observerhood occurs as a single event that has a constant rate over time, a model with an increasing rate over time, a model with multiple steps, and a model where observerhood simply requires a fixed amount of time. If desired, we could more crisply define this observerhood property as the ability for a species to collect reliable data on its own track record of survival (e.g. via fossil dating) and analyse it. When correcting for observation selection effects, we are simply conditioning on the fact that our species has developed the ability to conduct this analysis. The observerhood property need not invoke consciousness or be the property of a biological species-a machine estimating a parameter would need to account for observer selection bias if its ability to make such estimates were correlated with the parameter in question. Model 1: Single step, constant rate. Our first model assumes that observerhood has a constant rate of occurrence θ, so that S is exponentially distributed with cumulative distribution function F_S(t) = 1 − e^{−θt}. This model describes a process in which the transition from early humans into observers occurs by chance as a single step. This could represent the hypothesis that hierarchical language emerged in humans as the byproduct of a chance mutation 24. With this model, the probability that observers arrive before extinction is P(T > S) = θ/(θ + μ). Our likelihood function can be analytically derived: ℒ*(μ | T > t) = ((θ + μ)/θ) ∫_t^∞ μ e^{−μs} (1 − e^{−θs}) ds (4) = ((θ + μ)/θ) e^{−μt} − (μ/θ) e^{−(μ+θ)t} (5) Model 2: Single step, increasing rate. Our second model similarly assumes that a single step is needed but that the rate of observerhood increases over time. This model could represent increasing population size or population density, which could in turn drive cultural evolution and increase the probability of such a step 25. We represent this with a Weibull distribution with cumulative distribution function F_S(t) = 1 − e^{−(θt)^k}, where k > 1 indicates an increasing rate over time (when k = 1, this is the same as the exponential in Model 1). We use numerical integration to evaluate the likelihood function. Model 3: Multiple steps, constant rate. Our third model assumes that there are multiple steps that need to occur in a sequence in order to get observers. This could represent more incremental development of tools, culture, or language. We assume that each step is exponentially distributed with rate θ, so that the timing of the final kth step follows an Erlang distribution with cumulative distribution function F_S(t) = 1 − Σ_{n=0}^{k−1} (1/n!) e^{−θt} (θt)^n. (6) Note that when k = 1, the distribution is the same as the exponential in Model 1. We use numerical integration to evaluate the likelihood function. Model 4: Fixed time requirement. Our final model assumes that it takes a fixed amount of time τ to reach observerhood.
This is an extreme model that allows for no chance, but could represent a gradual and deterministic accumulation of traits. The probability that observerhood has been reached before time t is therefore F S (t) = 1 [t>τ] , the characteristic function that takes the value 1 when t > τ and 0 otherwise. The probability that humanity survives past time τ is 1 − F T (τ) = e −μτ . Our likelihood function of μ is: ∫ μ μ | > = μτ μ τ − ∞ − > ⁎ T t e e d s ( ) 1 1 (7) t s s [ ]  = . μ τ − − e (8) t ( ) This likelihood expression can also be derived using the memoryless property of the exponential. It is worth noting that the fixed time model is a limiting case for both the increasing rate model and the multiple steps model. Taking the limit of Model 2 as k → ∞ results in a fixed time model with τ = θ −1 . Similarly, Model 3 converges to a fixed time model as the number of steps increases and the expected time of each step decreases (having infinitely many steps in the limit, each of which is infinitely short). \n Results of sample bias models. We evaluate the likelihood of extinction rates between 10 −8 and 10 −2 , given a human survival time of 200 kyr and a wide range of different rates at which observers could originate (Fig. 2 ). The first thing to note about the first three models is that when the observerhood rates are sufficiently rapid, the likelihood function converges to the unbiased version in the previous section. This can be verified by taking limits: for all of the models as θ → ∞ (or τ → 0 in the case of the fixed time model ),  μ| > → μ − ⁎ T t e ( ) t . If observerhood is expected to occur quickly, then we can take a 200 kyr track record of survival at face value and estimate the extinction rate without observation selection bias. However, as the observerhood rates decrease to the point where the expected observerhood time approaches an order of magnitude close to 200 kyr, observer selection bias emerges. Rates that were previously ruled out by our track record of survival are assigned higher likelihoods, since a portion of the track record is a necessity for observers (Fig. 2 ). For example in Model 1, when θ = 2 × 10 −4 (corresponding to an expected observerhood time of 20 kyr), the relative likelihood of μ = 6.9 × 10 −5 is increased by a factor of 2.3 (from 10 −6 to 2.3 × 10 −6 ). To get a likelihood of 10 −6 (corresponding to the most conservative upper bound), the rate must be set at 7.3 × 10 −5 (see all edited bounds in Table 2 ). Interestingly though, this effect is limited. Even as observerhood rates slow to the point where expected observerhood time greatly exceeds 200 kyr (for example exceeding 20 billion years), the revised upper bounds remain within a factor of 2 of the original bounds. The stricter the bound, the weaker the potential bias: for example the 10 −6 likelihood bound is only changed by a factor of about 1.2 in the limit as θ → 0. Although there would be some sample bias, there is a hard ceiling on how much our track record of survival can be distorted by observation selection effects. The reason slow rates of observerhood have a limited impact on our estimates is because if the extinction rate were exceptionally high, the lucky humans that do successfully survive to observerhood will have achieved such a status unusually quickly, and therefore will still observe a very short track record of survival. A long track record of survival is therefore still sufficient to rule out high extinction rates paired with low observerhood rates. 
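The size of the bias correction in the constant-rate case can be evaluated directly from the closed form in Eq. (5). In the sketch below, the observerhood rate is parameterized by its expected waiting time, taken as 20 kyr to match the expected observerhood time quoted above (an assumed parameterization), which yields an inflation factor close to the reported value; the θ → 0 limit expression, (1 + μt)e^{−μt}, is our own algebraic simplification of Eq. (5) rather than a formula stated in the text.

```python
import math

def adjusted_likelihood(mu, t, theta):
    """Closed-form adjusted likelihood for the single-step, constant-rate model (Eq. 5)."""
    return ((theta + mu) / theta) * math.exp(-mu * t) - (mu / theta) * math.exp(-(mu + theta) * t)

def adjusted_likelihood_limit(mu, t):
    """Limit of Eq. (5) as theta -> 0 (derived here): (1 + mu*t) * exp(-mu*t)."""
    return (1 + mu * t) * math.exp(-mu * t)

t = 200_000            # survival time in years
mu0 = 1e-8             # baseline extinction rate
theta = 1 / 20_000     # expected observerhood time of 20 kyr (assumed parameterization)

mu = 6.9e-5
rel = adjusted_likelihood(mu, t, theta) / adjusted_likelihood(mu0, t, theta)
print(f"relative likelihood of mu={mu:g} with bias correction: {rel:.1e}")  # roughly 2e-6, vs ~1e-6 unbiased

# In the theta -> 0 limit the 1e-6 bound moves from about 6.9e-5 to about 8.3e-5, a factor of ~1.2.
for mu in (6.9e-5, 8.3e-5):
    rel = adjusted_likelihood_limit(mu, t) / adjusted_likelihood_limit(mu0, t)
    print(f"theta -> 0 limit, mu={mu:g}: relative likelihood {rel:.1e}")
```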
We can demonstrate this by examining the typical time it takes for lucky survivors to reach observerhood, assuming a high extinction rate and a low observerhood rate. For example, in the single step constant rate model when θ = 10^−6 (corresponding to an expected observerhood time of 1 Myr) and μ = 10^−3 (corresponding to a typical extinction time of 1000 years), the expected observerhood time conditional on these high extinction rates is 1000 years. A typical observer will thus still have a very short track record of survival. Models with increasing rates or multiple steps exhibit the same property, although the bias is larger depending on the parameter k. For both Models 2 and 3 with θ = 10^−6, μ = 10^−3, and k = 2 (parameters normally corresponding to an expected observerhood time of 830 kyr for Model 2 and 2 Myr for Model 3), the high extinction rates will still result in a typical observer emerging unusually early and having only about a 2000 year track record of survival. This can also be seen in Fig. 2, where for Models 1, 2, and 3, high extinction rates exceeding 10^−4 are still assigned low likelihood regardless of θ. However, severe observer selection bias can occur in Models 2 and 3 as k becomes larger, shaping the observerhood distribution such that early observerhood is vanishingly unlikely and late observerhood almost guaranteed. In the most extreme case this is represented by the fixed time model, where the probability of observerhood jumps from 0 to 1 when t = τ (the fixed time model is also the limiting case when k → ∞). If that fixed amount of time is long enough (say, exceeding 190 or 195 kyr), a 200 kyr track record of survival is no longer sufficient to rule out extinction rates greater than 10^−4. This result occurs as the fixed time model prohibits any possibility of observerhood occurring unusually quickly. Any lineage of Homo sapiens lucky enough to survive long enough to obtain observer status must necessarily have a survival time greater than τ, which means that being an observer with a survival time of τ conveys zero information about the extinction rate. For numerous reasons, we find the fixed time model to be implausible. Virtually all biological and cultural processes involve some degree of contingency, and there is no fundamental reason to think that gaining the ability to make scientific observations would be any different. To illustrate a comparison, let us consider a world in which the extinction rate is 10^−4 (averaging one extinction every 10,000 years), but observerhood status takes a fixed 200 kyr. Under this model, humanity successfully surviving long enough to reach observer status is an event with a 1 in 200 million chance. Given observation selection bias, we cannot rule out the possibility of rare events that are required for our observations. But we could ask why a 1 in 200 million chance event could not also include the possibility that modern human observers would emerge unusually rapidly.
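Returning to the constant-rate example earlier in this paragraph, the claim that lucky survivors still reach observerhood quickly can be checked with a small simulation: conditioning on observerhood arriving before extinction (S < T) makes S exponential with rate θ + μ, a standard property of competing exponentials, so its conditional mean is 1/(θ + μ). The Monte Carlo sketch below illustrates this with the parameter values quoted above; the simulation setup is an illustration, not part of the original analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, mu = 1e-6, 1e-3             # observerhood and extinction rates from the example
n = 1_000_000

S = rng.exponential(1 / theta, n)  # time to observerhood
T = rng.exponential(1 / mu, n)     # time to extinction
lucky = S < T                      # lineages that reach observerhood before extinction

print(f"fraction reaching observerhood: {lucky.mean():.2e}")                    # ~theta/(theta+mu) ~ 1e-3
print(f"mean observerhood time given survival: {S[lucky].mean():,.0f} years")   # ~1/(theta+mu), about 1000
```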
Language, writing, and modern science are perhaps highly unlikely to develop within ten thousand years of the first modern humans, but it seems exceptionally overconfident to put the odds at fewer than 1 in 200 million. A similar line of reasoning can be applied to determine whether the increasing rate and multiple step models with high k are reasonable. We test this by asking what parameters would be needed to expect a 200 kyr track record of survival with an extinction rate at our conservative upper bound of μ = 6.9 × 10 −5 . For the increasing rate model, observerhood is expected after 203 kyr with θ = 10 −7 and k = 14 and for the multiple step model, observerhood is expected after 190 kyr with θ = 10 −7 and k = 16. Although these models do not assign strictly zero probability to early observerhood times, the probabilities are still vanishingly small. With an increasing rate and these parameters, observerhood has less than a one in a trillion chance of occurring within 10,000 years (3.4 × 10 −14 ), and about 1% chance of occurring within 100,000 years. With multiple steps and these parameters, observerhood has less than one in a trillion chance of occurring within 10,000 years (5.6 × 10 −17 ), and less than a 0.02% chance of occurring within 100,000 years. In a similar fashion to the fixed time model, we feel that these models exhibit unrealistic levels of confidence in late observerhood times. Although the plausibility of the fixed time (or nearly fixed time) models is hard to test directly, the wide variance in the emergence of modern human behavior across geography offers one source of data that can test their plausibility. The Upper Palaeolithic transition occurred about 45 kya in Europe and Western Asia, marked by the widespread emergence of modern human behaviour 25 (e.g. symbolic artwork, geometric blades, ornamentation). But strong evidence exists for the sporadic appearance of this modern human behaviour much earlier in parts of Africa 26, 27 , including evidence of artwork and advanced tools as early as 164 kya 28 . Although numerous factors could have prevented the Upper Palaeolithic transition from occurring quickly, the fact that some human communities made this transition more than 100 kyr earlier than the rest of humanity indicates that a much earlier development trajectory is not entirely out of the question. In summary, observer selection effects are unlikely to introduce major bias to our track record of survival as long as we allow for the possibility of early observers. Deceptively long track records of survival can occur if the probability of early observers is exceptionally low, but we find these models implausible. The wide variance in modern human behavior is one source of data that suggests our track record is unlikely to be severely biased. We can also turn to other sources of indirect data to test for observer selection bias. \n testing the Bound with indirect Data We cross check our upper bound against four other sources of data: mammalian extinction rates, survival times of other human species, rates of potential catastrophes, and mass extinction rates. Although these alternative data do not directly predict the background extinction rate of Homo sapiens per se, the rates of extinction are likely generated by similar processes and thus enable an indirect test of the upper bound. 
If our upper bound is sound (not biased by observer selection effects or otherwise flawed), we can make testable predictions that it will be (A) broadly consistent with the extinction rates for similar species, and (B) not exceeded by the rate of potential catastrophes or mass extinctions. As the extinction rate of other species and catastrophes many millions of years ago have little bearing on our ability to make scientific observations, these data are also less subject to potential observer selection bias. \n Mammalian extinction rates. We first evaluate whether the upper bound is consistent with extinction rates for a typical mammalian species. Using fossil record data, median extinction rates for mammals have been estimated as high as 1.8 extinctions per million species years (E/MSY) 2 , or equivalently μ = 1.8 × 10 −6 . Other estimates using fossil record data range from 0.165 extinctions per million genus years 17 to 0.4 E/MSY for Cenozoic mammals 18 . Alternative methods using molecular phylogeny suggest a much lower rate of 0.023 E/MSY for mammals 29 and rates of 0.219-0.359 E/MSY for primates 30 , although these methods have been criticized 31 . All of these estimated background rates are consistent with our upper bound. It is worth noting that Homo sapiens may be at lower extinction risk than a typical mammalian species due to a large habitat range, large population size, and having a generalist diet, which are all traits that militate against extinction risk (whereas long generation times and large body mass are sometimes correlated with increased extinction risk) 32, 33 . Hominin survival times. Next, we evaluate whether the upper bound is consistent with the broader hominin fossil record. There is strong evidence that Homo erectus lasted over 1.7 Myr and Homo habilis lasted 700 kyr 21 , indicating that our own species' track record of survival exceeding 200 kyr is not unique within our genus. Fossil record data indicate that the median hominin temporal range is about 620 kyr, and after accounting for sample bias in the fossil record this estimate rises to 970 kyr 22 . Although it is notable that the hominin lineage seems to have a higher extinction rate than those typical of mammals, these values are still consistent with our upper bound. It is perhaps also notable that some hominin species were likely driven to extinction by our own lineage 34 , suggesting an early form of anthropogenic extinction risk. individual sources of extinction risk. The upper bound can also be evaluated against the frequency of events that could pose extinction risks (examples provided in Table 3 ). If any particular risk (such as those from asteroid impacts) is known to have a higher rate than our bound of 6.9 × 10 −5 , this could undermine and potentially falsify our hypothesis. We evaluate the frequencies of four types of potential disasters for which credible quantitative estimates exist: asteroid impacts, supervolcanic eruptions, stellar explosions, and vacuum collapse. www.nature.com/scientificreports www.nature.com/scientificreports/ All of these risks have estimated to occur with a frequency well below our bound (Table 3 ), with the exception of smaller supervolcanic eruptions. Recent work has suggested the frequency of eruptions ejecting >10 3 km 3 of material exceeds our upper bound of 6.9 × 10 −5 with a recurrence time of 17 kyr 35 . However, it is important to note that the smaller eruptions within this category do not necessarily have a high probability of causing human extinction. 
The most severe eruption of the past 2 million years occurred just 74 kya, and it is unclear whether the human population at the time was at risk of extinction. Some argue that the human population suffered a major bottleneck at the same time as the eruption 43 , although this theory remains controversial 44 . Some climate records averaged over decades fail to observe a severe volcanic winter in Africa at the time 45 and archaeological evidence shows that human communities in South Africa thrived both before and after the eruption 46 (although these data are not sufficient to rule out a severe short-lived catastrophe followed by a fast recovery in population). More conclusively, most members of the Hominidae family did not suffer population bottlenecks around the time, with the possible exception of Eastern chimpanzees and Sumatran orangutans 47 . The lack of dramatic evidence suggesting other species extinctions or bottlenecks undercuts the possibility that humanity's survival was highly improbable and is observed only due to observation selection effects. However, a handful of substantially larger flood basalt events have taken place over the past 250 Myr that have been linked to mass extinctions 39, 48 . These events occur with a frequency of roughly once every 20-30 Myr, much more infrequently than smaller eruptions. If we assume that human extinction is threatened only from larger volcanic eruptions well exceeding 10 3 km 3 , then none of the risk frequencies we have catalogued come within an order of magnitude of the conservative upper bound. Similarly, impacts from smaller asteroid around 1 km in diameter may not have a high probability of causing human extinction. Although it is hard to estimate the consequences of such impacts, some researchers have argued that such impacts would fall below the threshold for a global catastrophe 49 . Impacts that disperse enough dust and sulphites to significantly disrupt photosynthesis occur much more rarely, with an estimated frequency of about 15 Myr years 49 . If we assume human extinction is only threatened by these more severe impacts exceeding 5 km, each of these catastrophe frequencies falls well below even our most optimistic bound of 1 in 870,000 chance of extinction per year. \n Mass extinction frequency. A mass extinction is marked by substantially increased extinction of multiple geographically widespread taxa over a relatively short period of time 50 . There have been five major mass extinctions in the past 541 Myr 51, 52 , with many arguing that human activity is currently causing a sixth 2 . In a similar way to our previous analysis of catastrophe rates, we should expect our upper bound to be consistent with the frequency of non-anthropogenic mass extinctions. Using only the big five extinctions produces a frequency of less than one per 100 Myr, far below our upper bound. In addition to the big five, there have been 13 other mass extinctions in the fossil record 53 . Using these numbers for 18 mass extinctions over 541 Myr still results in a frequency of about one per 30 Myr, many orders of magnitude below our upper bound. \n conclusions Using the fact that humans have survived at least 200 kyr, we can infer that the annual probability of human extinction from natural causes is less than 1 in 87,000 with modest confidence (0.1 relative likelihood) and less than 1 in 14,000 with near certainty (10 −6 relative likelihood). These are the most conservative bounds. 
Estimates based on older fossils such as the ones found in Morocco dated to 315 kya result in annual extinction probabilities of less than 1 in 137,000 or 1 in 23,000 (for relative likelihood of 0.1 and 10 −6 , respectively). Using the track record of survival for the entire lineage of Homo, the annual probability of extinction from natural causes falls below 1 in 870,000 (relative likelihood of 0.1). We also conclude that these data are unlikely to be biased by observer selection effects, especially given that the bounds are consistent with mammalian extinction rates, the temporal range of other hominin species, and the frequency of potential catastrophes and mass extinctions. The bounds are subject to important limitations. Most importantly, they only apply to extinction risks that have either remained constant or declined over human history. Our 200 kyr track record of survival cannot rule out much higher extinction probabilities from modern sources such as nuclear weapons or anthropogenic climate change. Some naturally occurring risks could be also be worsened by anthropogenic factors: a minor asteroid impact could be interpreted as a nuclear attack and lead to retaliation 54 , or a naturally occurring disease which previously may have only been a local extinction risk could spread much further due to modern travel 23 . In the cases where a natural risk is amplified by modern conditions, we can still derive some partial information from the upper bound by evaluating how much the risk would need to change from the purely natural baseline. For www.nature.com/scientificreports www.nature.com/scientificreports/ example, the claim that a natural disease poses a than 1 in 1,000 chance of extinction per year would require that anthropogenic conditions have increased the risk of natural disease by a factor of more than 14 to 870 (under our most conservative and optimistic upper bounds, respectively). In general, for a naturally occurring risk to violate our upper bounds via human activity by more than a factor of two, the majority of the risk would still need to come from anthropogenic circumstances. In general, we conclude that anthropogenic extinction risks are likely greater than natural ones. We do not have a long track record of data for anthropogenic risks, so evaluating this relies far more on speculation. But despite the paucity of data, the little evidence we do have seems to be indicative of rates greatly exceeding our upper bounds. During the Cuban Missile Crisis of 1962, John F Kennedy put the odds of nuclear war at 'somewhere between one out of three and even' 55 . If 0.1% of nuclear wars result in human extinction via nuclear winter, taking Kennedy's odds that year would surpass our most conservative bound by more than a factor of four (and surpass our most optimistic bound by a factor of more than 250). Anthropogenic climate change could pose existential risks as well if warming is much worse than expected. A ballpark suggestion for the probability of 20 degrees of anthropogenic climate change was placed at 1% 56 , which would make the planet largely uninhabitable for humans due to heat stress 57 . And these are not the only risks we may face. One century ago, the existential risks posed by nuclear weapons or climate change may have seemed extremely implausible. We should therefore be cautious before dismissing the potential risks that future centuries of technological development could bring, such as those stemming from biotechnology 58 or artificial general intelligence 59 . 
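The comparisons in the preceding paragraphs are simple ratios against the derived bounds; the sketch below reproduces that arithmetic, using the 0.1% nuclear-winter assumption and Kennedy's one-in-three odds quoted in the text.

```python
conservative_bound = 6.9e-5       # ~1 in 14,000 per year, from the 200 kyr track record
optimistic_bound = 1 / 870_000    # ~1.1e-6 per year, from the genus Homo track record

# A natural disease claimed to pose a greater than 1 in 1,000 annual extinction probability:
disease_risk = 1e-3
print(disease_risk / conservative_bound, disease_risk / optimistic_bound)   # ~14x and ~870x

# Cuban Missile Crisis: 1-in-3 odds of nuclear war, with 0.1% of wars assumed to cause extinction.
kennedy_risk = (1 / 3) * 0.001
print(kennedy_risk / conservative_bound, kennedy_risk / optimistic_bound)   # ~4.8x and ~290x
```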
Despite the low probability of human extinction from natural causes, it may still be prudent to reduce these risks. Existential risks jeopardize not only the lives of those currently present, but also the existence of all future generations. Depending on how much value we place on such generations, it may still be cost-effective to reduce existential risks from natural sources 60 . However, given limited resources to spend on reducing existential risks, one may be better off focusing on greater risks from our own design. Figure 1 . 1 Figure1. Likelihood of extinction rates given our track record of survival so far, with estimated ranges of Hominin extinction rates, mammalian extinction rates, and mass extinction frequency included for reference. Blue horizontal lines indicate likelihood of 10% and 1%. Rates exceeding 6.9 × 10 −5 are ruled out even with the most conservative data. Extending humanity's track record of survival to match older fossils, the divergence with Homo neanderthalensis, or the origin of Homo creates even stricter bounds. \n Figure 2 . 2 Figure 2. Models of observer selection bias. Surface plots show likelihood for combinations of μ and θ (where k = 3 for Models 2 and 3) or τ in Model 4. Upper righthand plots show how likelihood shifts when θ → 0 in Model 1, and for a variety of k values in Models 2 and 3.For the first three models, the unbiased model is recovered for large θ, and results start to become biased as the expected observerhood time approaches humanity's track record of survival. However, even as θ → 0, the bias is limited, and the likelihood of rates exceeding 10 −4 remains at zero. This is only violated in the final fixed time model, or in models 2 and 3 when k is sufficiently large. \n Table 1 . 1 Survival times and resulting upper bounds. www.nature.com/scientificreports www.nature.com/scientificreports/  \n Table 2 . 2 Upper bounds of μ with model 1 bias. \n Table 3 . 3 Catastrophe frequency estimates.", "date_published": "n/a", "url": "n/a", "filename": "s41598-019-47540-7.tei.xml", "abstract": "We evaluate the total probability of human extinction from naturally occurring processes. Such processes include risks that are well characterized such as asteroid impacts and supervolcanic eruptions, as well as risks that remain unknown. Using only the information that Homo sapiens has existed at least 200,000 years, we conclude that the probability that humanity goes extinct from natural causes in any given year is almost guaranteed to be less than one in 14,000, and likely to be less than one in 87,000. Using the longer track record of survival for our entire genus Homo produces even tighter bounds, with an annual probability of natural extinction likely below one in 870,000. These bounds are unlikely to be affected by possible survivorship bias in the data, and are consistent with mammalian extinction rates, typical hominin species lifespans, the frequency of well-characterized risks, and the frequency of mass extinctions. no similar guarantee can be made for risks that our ancestors did not face, such as anthropogenic climate change or nuclear/biological warfare. Out of all species that have existed, over 99% are now extinct 1 . Although human activity is dramatically increasing extinction rates for many species 2 , species extinctions were regular occurrences long before humanity emerged. 
Many of these extinctions were caused by gradual environmental shifts, evolutionary arms races, or local interspecific competition 3,4 , while others were abrupt, being part of global mass extinctions caused by asteroid impacts, volcanism, or causes as of yet to be identified 5, 6 . Could such a catastrophe befall our own species? If so, are the risks greater from natural or anthropogenic sources? Here, we evaluate the natural 'background' extinction rate for Homo sapiens. This means considerations of anthropogenic risks such as climate change and nuclear weapons are excluded from our estimates, although these clearly pose existential threats to our own species as well as others. Indeed, it has been hypothesized that the great majority of human extinction risk comes from anthropogenic sources 7, 8 . But by limiting our analysis to natural risks that our predecessors also faced, we can draw on data spanning many thousands (or millions) of years. Obtaining bounds on natural extinction rates also enables an indirect and partial test of the hypothesis that anthropogenic risks are greater than natural ones, as sufficiently low natural extinction risk will imply higher relative risks from anthropogenic sources. Estimating such an extinction rate directly is impossible. We have no examples of Homo sapiens extinction, so the most directly relevant data are non-existent. An alternative approach would be to enumerate the different types of naturally occurring hazards (e.g. asteroids, supervolcanoes), estimate their independent probability of causing extinction, and then use these probabilities to derive an aggregate extinction rate. However, this method has its own shortcomings. Beyond the great uncertainties around the probabilities of each risk, there could also be unknown risks that fail to be included. It would be hard to say with confidence that any list of risks had captured all natural hazards to humanity. We can bypass these problems by instead considering the length of time that humanity has survived so far 9,10 . This survival time can be used to estimate an upper bound on the extinction rate from all natural sources combined, including from sources for which we remain unaware. However, this approach could be subject to a particular form of sample bias known as an observation selection bias. These observer selection biases occur when a sample is not representative of all outcomes, but rather a subset of outcomes that are compatible with the existence of the observers 11 . For example, if human existence required a 10 million year (Myr) period of evolution free from asteroid impacts, any human observers will necessarily find in their evolutionary history a period of 10 Myr that is free of asteroid impacts, regardless of the true impact rate. Inferring a rate based on those 10 Myr could therefore be misleading, and methods must to be used to correct for this bias 12 .", "id": "436e28620a6a6a11f65339bfb1c31eb3"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Andrea Owe", "Seth D Baum"], "title": "The Ethics of Sustainability for Artificial Intelligence", "text": "Introduction A basic attribute of modern human civilization is that the stock of natural resources steadily decreases, whereas the stock of artificial resources steadily increases. For example, artificial intelligence (AI) research is commonly powered by the burning of fossil fuels, and in the process produces new technologies that civilization can benefit from. 
Will the increases in artificial resources be sufficient to offset the loss of natural resources, such that civilization can be sustained into the future? That is one important perspective on the ethics of sustainability as it relates to AI, though, as this paper discusses, it is not the only one. Sustainability is not an inherently ethical concept. In its essence, \"sustainability\" refers to a particular characteristic of systems as they change over time. The term can be used in many ways that do not have any particular ethical significance. For example, sprinters run at an unsustainable speed; eventually, their muscles will fatigue and they will be unable to continue. This is a basic characteristic of human physiology and not a matter of ethical significance. In common usage, however, sustainability takes on ethical significance. Sustainability is widely treated as a good thing and something worth pursuing [1] . It is in that spirit that there have been initiatives on AI and sustainability, including conferences such as \"Sustainable AI\", 1 \"Towards Sustainable AI\", 2 and \"AI for the Planet\", 3 as well as groups such as AI4Good that work on AI in support of the United Nations Sustainable Development Goals (SDGs). 4 It is also in that spirit that sustainability is one of the principles found in some AI ethics guidelines [2] . The ethical dimensions of sustainability warrant ethical analysis. Important ethics questions include: What exactly should be sustained, or rather, be able to be sustained? Why should it be able to be sustained? For how long? How much emphasis should be placed on sustainability relative to other goals? These ethics questions can be answered in a variety of ways. How they are answered has important implications for ongoing human activity, including for the development, use, and governance of AI technology. Prior literature on AI and sustainability (reviewed below) has not considered the ethical dimensions to any significant extent. Therefore, this paper analyzes the ethics of sustainability as it relates to AI. The paper proceeds in three parts. First, we explain the ethics of sustainability as a general concept (Section 2). Second, we describe the current usage of the concept of sustainability within work on AI (Section 3). Third, we present an argument for a long-term, non-anthropocentric conception of sustainability and explain the implications of this for AI (Section 4). Section 5 concludes. Much of the discussion in this paper covers ethical dimensions of sustainability that are not specific to AI. This is a feature, not a bug. To a large extent, the ethical dimensions of AI and sustainability are the same as those for sustainability in general. Furthermore, aspects of the topic that are specific to AI build on more general sustainability concepts. Therefore, to understand the ethics of sustainability for AI, it is essential to first understand the ethics of sustainability. The paper contributes to the growing literature on AI and sustainability. Most of this literature is on AI in relation to the sustainability of human civilization and its environmental underpinnings; this includes reviews by Nishant et al. [3] and Liao and Wang [4] , the environmental politics of AI [5] , AI in relation to systemic risk and sustainability [6] , the environmental footprint of AI systems [7] , and the role of AI in meeting the SDGs [8] [9] [10] . 
Some literature focuses on the sustainability of the AI systems themselves [11] , including for consumer autonomy [12] , in global health initiatives [13] , in certain economic mechanisms [14] , and in decision-making applications [15] . Additionally, there is a broader field of computational sustainability that applies computer science methods to advance environmental and social sustainability [16] [17] [18] . Overall, the literature on AI and sustainability offers a variety of important contributions, but it provides limited discussion of the ethics of sustainability. The paper additionally contributes to some adjacent literatures. Prior studies have considered the application of AI for environmental protection [19] [20] , for climate change mitigation [21] [22] , and the energy consumption of AI systems [23] ; these are relevant for environmental conceptions of sustainability. Also relevant are debates on the relative importance of near-term, medium-term, and long-term AI [24] [25] [26] [27] ; as this paper discusses, the future-orientation of sustainability can imply an emphasis on long-term AI. Finally, of more general relevance is prior work on the ethics of sustainability [1] , especially sustainability over 1 https://www.uni-bonn.de/en/news/120-2021 2 http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=140164©ownerid=158619 3 https://aifortheplanet.org/ 4 https://ai4good.org/ long time scales [28] , and the ethics of AI [2, [29] [30] , especially regarding AI and the future [31] and AI and nonhumans [32] . \n The Ethics of Sustainability The word \"sustainability\" is commonly traced to the German word Nachhaltigkeit, and specifically to Hans Carl von Carlowitz's 1713 treatise on sustainable yield forestry [33] . The tension between using resources now and preserving or cultivating them for future use is of course much older. Modern analysis traces to the environmental economics work of Hotelling [34] , which remains relevant today [35] . Hotelling's work did not use the term \"sustainability\". Use of the term primarily traces to the 1987 report Our Common Future, which was led by former Norwegian Prime Minister Gro Harlem Brundtland and is commonly known as the Brundtland Report. Like von Carlowitz's treatise, the Brundtland Report specifically conceptualizes sustainability in socio-environmental terms. The report's definition of sustainable development, \"Development that meets the needs of the present without compromising the ability of future generations to meet their needs\", has been widely influential and serves as a foundation for the UN SDGs. Since Our Common Future, there has been a proliferation of definitions of sustainability and sustainable development [36] [37] [38] . Widespread and imprecise usage of the term \"sustainability\" has watered it down. Critics argue that \"sustainability\" has become \"a concept that is equivalent to 'good' and thus devoid of any specific meaning-a blanket concept to assure stakeholders of the policy's good intentions\" [39, p.3439 ]. Likewise, the term is said to have been appropriated by self-interested actors to continue \"business as usual\" activities that drive environmental destruction and social inequity [37] . One notable point of criticism is the so-called \"three pillars\" of sustainability: the social, the economic, and the environmental-or \"people, profit, and planet\". The pillars construct is an attempt to represent the major components of socio-environmental sustainability. 
However, these categories have been criticized for being fuzzy and overlapping, for excluding of other relevant categories such as the cultural and the political, and for not being essential to the core matter of whether civilization can be sustained over time [39] . Indeed, economic transactions are an inherently social activity, and all human activity is inherently environmental because humans are part of nature. Furthermore, sustainability is commonly associated with environmental protection, but some environmental disturbances, such as many types of air or water pollution, rapidly dissipate and have a negligible effect on future generations' ability to sustain themselves. Likewise, some matters that could classify as social and/or economic, such as social justice within the contemporary population, are often of limited relevance to the ability of future generations to sustain themselves, even if these matters may be important for other reasons. Other socio-economic matters, such as education and investment in future economic growth, may be of greater importance to the ability of future generations to sustain themselves. For these reasons, the \"three pillars\" provide a weak foundation for sustainability. Work on AI and sustainability that incorporates the \"three pillars\", such as the \"AI for People: Towards Sustainable AI\" conference, 5 should take note. We now turn to three fundamental questions for the ethics of sustainability (Sections 2.1-2.3) followed by a comparison between sustainability and optimization (Section 2.4). The reasons something should be able to be sustained typically derive from what are, in moral philosophy, referred to as intrinsic value and instrumental value. Roughly speaking, something is intrinsically valuable if it is valuable for its own sake, or valuable as an ultimate end in itself; something is instrumentally valuable if it is valuable because it promotes something else that is valuable [40] . For example, we might suppose that sunlight is valuable because it can (among other things) be converted into electricity, and that electricity is valuable because it can (among other things) be used to power AI systems, and that AI systems are valuable because they can (among other things) enable humans to have enjoyable lives, and that enjoyable human lives are just good things on their own. In that case, sunlight, electricity, and AI systems are instrumentally valuable, while enjoyable human lives are intrinsically valuable. In many conceptions of ethics, what should be done-including what should be sustained-is defined with reference to some conception of what is intrinsically valuable. \n Conceptions of sustainability can vary in terms of what they intrinsically value and what sorts of instrumental values they focus on. One can seek to sustain X either because X is intrinsically valuable or because X is instrumentally valuable. For example, one can seek to sustain natural ecosystems either because one considers them to be intrinsically valuable or because one considers them to be instrumentally valuable for other things, such as for human welfare. Common conceptions of sustainability are anthropocentric, meaning that they only intrinsically value humans [41] . For example, although the Brundtland Report's emphasis on future generations could conceivably be interpreted to mean future generations of something other than humans, the Report clearly focuses on humans. 
Likewise, sustainable management of natural resources generally treats the resources as instrumental values for human benefit. In contrast, some conceptions of sustainability are ecocentric, meaning that they intrinsically value ecosystems. Examples include the Earth Charter 6 and the Earth Manifesto [42] . Though less common, the concept of sustainability can also be used with other notions of intrinsic value, such as the idea that there is intrinsic value in the welfare of sentient nonhuman animals or (if possible) sentient AI systems. The distinction between intrinsic and instrumental value does not always matter, but it often is important. For example, reducing greenhouse gas emissions is generally good on both anthropocentric and ecocentric grounds. However, some biodiversity protection is worth pursuing mainly on ecocentric grounds, because, for better or worse, certain species could go extinct with little impact on human welfare. Therefore, it is important for discussions of sustainability to explicitly specify what they intrinsically value. \n For how long should something be able to be sustained? There is a big difference between sustaining something for a few days and sustaining it for decades, centuries, or even indefinitely into the distant future. Unfortunately, discussions of sustainability are often not precise in their consideration of time scales. For example, the 1987 Brundtland Report's emphasis on future generations implies a time scale of at least decades, assuming the generations are of humans. But how many future generations? The actions needed to enable the next few generations to be sustained often differ significantly from the actions needed to enable the same for every generation that could ever exist. \n How much effort should be made for sustainability? The sustainability of intrinsic or instrumental values may be a good thing, but how good? The world has many competing values and opportunities. Indeed, the Brundtland definition was specifically crafted to acknowledge the competing values of present and future generations; while the report aspires to promote actions that support both the present and future generations, many choices involve tradeoffs between them. For example, depleting natural resources often benefits present generations at the expense of future generations. Basic research often benefits future generations at the expense of present generations, especially where the same resources could instead be used for applied research. The relative importance of the present and future can be operationalized in a variety of ways, such as through discount rates or other weighting functions [43] [44] . This sort of intergenerational evaluation is generally made within the context of anthropocentric conceptions of sustainability, but similar approaches can be taken with other conceptions. One way or another, moral guidance about sustainability must consider how to evaluate tradeoffs between sustainability and other moral goals. This point fits within the broader issue of tensions between different AI ethics principles [45] . Also relevant are fundamental questions about the appropriate degree of effort to take to achieve ethical goals. In moral philosophy, the term \"supererogation\" refers to actions that \"go beyond the call of duty\", meaning that they are good but not strictly required [46] . One common question in moral philosophy is whether some moral frameworks are too demanding. 
This question is especially acute for consequentialist moral frameworks that call for moral agents to maximize some conception of intrinsic value, because maximization is a demanding task [47] . These are important questions for any moral debate, certainly including those involving sustainability. Setting aside tradeoffs between sustainability and other moral goals, one can ask: How much effort should a person or an organization make to advance sustainability? Should they \"give it everything they've got\"? Or would just a little effort be acceptable? Conversely, is it enough to work to advance sustainability? Or is it important to also work toward more ambitious goals, such as intertemporal optimization of intrinsic value? \n Sustainability vs. Optimization Sustainability can be an optimization criterion-that would mean seeking to optimize the ability for something to be sustained over time. However, this is distinct from optimizing intrinsic value. Sustainability means enabling something to be sustained in at least some minimal form; optimization means making something be the best that it can be. Ensuring sustainability is perhaps best understood as a basic minimum standard of intertemporal conduct, whereas the intertemporal optimization of intrinsic value may be understood as a loftier ideal to aspire for. This distinction can be seen, for example, in the Brundtland Report call for the present generation to act \"without compromising the ability of future generations to meet their needs\". The basic needs of human life are, to an approximation, food, clothing, and shelter. Following the Report's call could result in \"a society living forever at a minimum subsistence level of consumption\" [48, p.327] . Future society would be able to meet its needs, but it may not be able to do anything more. If the present generation does not act so as to enable future generations to do much better than meeting their needs, then the present generation will, quite arguably, have squandered a massive opportunity. Of course, if the present generation fails to enable future generations to meet their needs, that would be, quite arguably, a massive loss. AI is advanced technology. AI research and development is often oriented toward enabling higher standards of living instead of enabling basic future needs. Human lives do not strictly need, for example, AI systems to steer vehicles or search the internet. Such work generally falls outside the scope of sustainability, but could fall within the scope of optimizing intrinsic value. \n Prior Work on AI and Sustainability With the ethics of sustainability in mind, we now survey prior work on AI and sustainability. Sections 3.1 and 3.2 present quantitative analysis of trends in the ethics of sustainability found in AI ethics principles and AI research. Both analyses characterize sustainability in terms of the three ethics dimensions presented in Section 2: intrinsic value, time scale, and degree of effort. Section 3.3 presents overarching trends across Sections 3.1-3.2. \n AI Ethics Principles Jobin et al. [2] compiles 84 sets of AI ethics principles. 11 of these sets of principles include some reference to sustainability. 7 This indicates that sustainability is a small but nonzero priority in AI ethics. We examined the ethical basis of the 11 sets of principles. We found that 7 refer to some form of environmental sustainability, 3 refer to sustainability of the AI system itself, and 1 refers to sustainable social development. 
We further found that 3 intrinsically value humans, 5 intrinsically value humans and nonhumans including ecosystems, all life, biodiversity, and the planet, and 5 are ambiguous in terms of what is intrinsically valued. Regarding time scales, 2 refer to \"future generations\" and 9 do not specify time scales. Finally, none of the principles specify degree of effort. Some examples of the AI ethics principles are presented in Appendix A. \n AI Sustainability Research We performed a systematic mapping review [49] of research at the intersection of AI and sustainability. Our review maps this literature in terms of the ethical attributes presented in Section 2 and also used in Section 3.1, with some additional nuances to catch the diversity of the research literature. Specifically, we analyzed results from a Google Scholar search for [\"artificial intelligence\" \"sustainability\"] and [\"AI\" \"sustainability\"] conducted between Sept 16 and 21, 2021. The Google Scholar search engine was selected over other academic databases due to its inclusivity. Whereas databases such as Web of Science concentrate on 7 Table 3 of Jobin et al. [2] states that 14 sets if principles include sustainability, but only 12 are referenced in the text and we were unable to identify the other 2. Of the 12 referenced sets of principles, we found that 1 did not cover sustainability, leaving a total of 11 for our analysis. peer-reviewed journals, artificial intelligence research is often published in other spaces such as arXiv. The searches returned 229,000 total results for [\"artificial intelligence\" \"sustainability\"] and 1,490,000 total results for [\"AI\" \"sustainability\"], respectively. We examined the first ten pages of each of the two searches. We observed that after ten pages of each search, the search results became repetitive and less relevant. These two sets of ten pages contained 200 total results, or 153 results after duplicates were extracted. Of these 153 publications, we were unable to access 11. For the remaining 142 publications, we examined the text in the degree of detail needed to categorize its treatment of sustainability. For most of the publications, this involved looking at the abstract and introduction, and skimming the text for discussion of sustainability. In some cases, we examined the entire publication in more detail. Out of the 142 publications, 60 were found not to be relevant because they were not sufficiently on the nexus of AI and sustainability, leaving a data set of 82 publications. 66 publications were on environmental sustainability, 7 were on the sustainability of the AI system itself, and 9 were on the sustainability of something else, including 2 on the sustainability of organizations, 2 on social sustainability in human-robot/AI interactions, 1 on the sustainability of group decision-making processes, 1 on sustainable curriculum planning, 1 on sustainable healthcare systems, 1 on the social sustainability of AI, and 1 on sustainable industrial development. Of the 66 environmental sustainability publications, 43 were on environmental and social sustainability and 23 were exclusively on environmental sustainability. 29 of the 66 environmental sustainability publications referred to the Brundtland definition and/or the SDGs. 56 publications intrinsically valued humans only, 10 intrinsically valued humans and nonhumans including nonhuman species, life on Earth, biodiversity, ecosystems, the biosphere, and the planet. 
16 were too ambiguous to interpret any notion of intrinsic value. Regarding time scales, 7 refer to \"future generations\" and 1 refers to a time frame from 1990 to 2028. The other 74 do not specify time scales. None of the publications specify degree of effort. Some examples of the AI sustainability publications are presented in Appendix B. \n Overarching Trends In consideration of the work presented in the two preceding subsections, the following overarching trends in the ethics of existing work on AI and sustainability can be identified. First, most work on AI and sustainability is focused on some form of environmental sustainability, with substantial minorities focused on the sustainability of AI systems or on the sustainability of miscellaneous other things. The environmental sustainability work mainly intrinsically values humans and sometimes intrinsically values nonhumans. These trends are consistent with wider usage of sustainability outside the context of AI. Indeed, work on AI and sustainability often explicitly links to broader treatments of sustainability, such as the Brundtland Report and the UN SDGs. Work on the sustainability of AI systems treats AI systems as instrumentally valuable, such as in decision-making applications [15] and in global health initiatives [13] . Outside the context of sustainability, some research considers that AI systems could be intrinsically valuable [50] [51] [52] . We find that the sustainability of intrinsically valuable AI systems has not yet been addressed. Second, work on AI and sustainability is imprecise on its ethical dimensions. As illustrated in Appendices A-B, our classification of AI ethics principles and AI sustainability research involved frequent parsing of ambiguous phrasings. Indeed, outside references to the SDGs or the Brundtland definition, few publications explicitly define sustainability. Within treatments of environmental sustainability, the term \"sustainability\" was commonly equated with environmental protection, especially efforts to minimize energy and resource consumption, even though environmental issues do not necessarily have sustainability implications. Some work equated \"sustainability\" with \"good for people/and or society\" or even just \"good\", which further drains \"sustainability\" of its meaning. As discussed in Section 2, these are common problems with sustainability discourse. Our analysis finds that these problems have been reproduced in the AI literature. \n The Moral Case for Long-Term, Non-Anthropocentric Sustainability and Optimization In this section, we present our own views on the ethics of sustainability as it relates to AI. Specifically, we make the case for sustainability that is non-anthropocentric and long-term oriented. We further argue for substantial effort for this conception of sustainability, or, preferably, for the optimization of long-term intrinsic value. Finally, we explain the implications of such sustainability for AI. Each of our arguments depends on certain positions on underlying ethical principles. As with all ethical principles, there is no universal consensus on which position to take. One can disagree with the positions we take, but doing so requires taking a different position on the underlying ethical principles. This is important to bear in mind, especially when considering the implications for AI. \n Non-Anthropocentric Sustainability First, we call for non-anthropocentric conceptions of sustainability. 
By non-anthropocentric, we mean that humans are not the only entities that are intrinsically valued. Modern science unambiguously shows that humans are members of the animal kingdom and part of nature. Morally significant attributes such as the innate drive to live and flourish or to experience pleasure and pain are not unique to the human species. 8 For purposes of this paper, we set aside important debates about which nonhuman entities to intrinsically value, but some potential examples include sentient nonhuman animals or (if possible) sentient AI systems, natural ecosystems, and biodiversity. We, the authors of this paper, also happen to disagree among ourselves as to which nonhumans are intrinsically valuable, but we agree that some are. There can be legitimate reasons to sometimes intrinsically value humans more than other entities-for example, a human can have a longer and richer life than a spider. However, we see no morally sound reasons to refuse to intrinsically value any nonhuman lives or entities. This means that sustainability should be defined as to also sustain some nonhumans for their own sake: it is not enough to sustain nonhumans for their instrumental role for humans. 9 8 Additionally, some conceptions of intrinsic value are rooted in the attributes of systems, such as interdependencies of biotic and abiotic entities within ecosystems. These holistic conceptions of intrinsic value are also not specific to humans [53] . 9 For a more detailed argument for non-anthropocentrism advanced within AI ethics, see [32] . This non-anthropocentrism is at odds with common conceptions of sustainability, including that of the Brundtland Report. In failing to intrinsically value anything other than humans, we believe these conceptions of sustainability are in moral error. It is unfortunate but not surprising that similar anthropocentric tendencies are found within existing work on AI and sustainability (Section 3). Future work on AI and sustainability should be more inclusive of the intrinsic value of nonhumans. \n Long-Term Sustainability Second, we call for sustainability over long time scales. Our motivation for this is an ethical principle of equality across time. In essence, this means that no one or no thing of moral significance should be disadvantaged because of the time in which they happen to exist. A person (for example) is of the same intrinsic value regardless of whether they live in the year 2021 or 2051 or 2151 or even 22021 or any other future time [54] [55] . This perspective can be justified, for example, by a \"veil of ignorance\" thought experiment in which one does not know in advance which time period one would exist in [56] . Under such hypothetical circumstances, it would only be fair to value each time period equally. Taking temporal equality seriously means including attention to all future time periods, including the astronomically distant future. Combined with our call for non-anthropocentrism, this means sustainability should aim to sustain that which is intrinsically valuable into the distant future. This long-termism is broadly consistent with common conceptions of sustainability, though it is different in emphasis. Common conceptions do not specify precise time scales; this is seen, for example, in the Brundtland Report's emphasis on an unspecified number of future generations. With no clear time limit, these conceptions could include the astronomically distant future, though in practice, they focus on matters that are short-term in comparison. 
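To see what is at stake in the choice of time scale, the toy calculation below contrasts standard exponential discounting with the temporal-equality view argued for here. The 3% rate and the horizons are arbitrary illustrative choices on our part, not values taken from the paper or the works it cites.

```python
def discount_weight(years_ahead: float, annual_rate: float) -> float:
    """Weight assigned to a unit of value realised years_ahead years from now
    under standard exponential discounting."""
    return 1.0 / (1.0 + annual_rate) ** years_ahead

rate = 0.03  # illustrative social discount rate
for horizon in (10, 50, 200, 1_000):
    print(f"{horizon:>5} years ahead: weight {discount_weight(horizon, rate):.2e}")
# Under temporal equality, the weight is 1.0 at every horizon: a life in 22021
# counts the same as a life in 2021.
```

At a 3% rate, value 200 years out is weighted at well under one percent of present value, and value a millennium out is weighted at effectively zero. The choice between discounting and temporal equality, not just the choice of sustainability target, therefore determines whether distant-future generations count at all.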
We believe this is a moral error, an unjustified exclusion of distant-future generations and distant-future instances of anyone and anything else of intrinsic value. \n Substantial Effort for Sustainability or Long-Term Optimization Third, we call for a high degree of effort toward sustainability, or, preferably, for the optimization of long-term intrinsic value. The astronomically distant future offers astronomically large opportunities for advancing intrinsic value. These opportunities are vastly larger than those available for the present time and the near-term future. This point suggests a high degree of priority for actions oriented toward the long-term. That does not mean ignoring the present. As members of the present time period, we have special opportunities to help with present circumstances. The present also sets the stage for the future. Nonetheless, if the principle of equality across time is to be taken seriously, it requires a major focus on long-term outcomes. We further believe that people should make great efforts to advancing moral progress of all types, including sustainability, balanced mainly by the need for reasonable selfcare, and that organizations and institutions should likewise be oriented accordingly. An important perspective comes from the physics of the long-term future. Earth will become uninhabitable in roughly one billion years due to the gradual warming and expanding of the Sun [57] . Survival beyond this time can only occur in outer space. For Earth-originating entities, this will require an advanced technological civilization capable of settling in outer space. Human civilization is already positioned to accomplish this task, given its ongoing space missions and general technological progress. As long as human civilization remains intact, the ability to sustain Earth-originating entities will persist. Long-term sustainability requires resettling in outer space [28] . For the present generation, that means keeping human civilization intact. In many contexts, there is no significant distinction between sustainability and intertemporal optimization for the distant future. Both goals require maintaining the basic functionality of civilization, including by sustaining sufficient resources and by handling major threats such as global warming, pandemics, and nuclear warfare. They likewise entail evaluating environmental threats in terms of their implications for the continuity of human civilization and not in terms of biogeophysical disturbances or smaller-scale human consequences [58] [59] . However, looking ahead, the goals point in different directions. Sustaining Earth-originating entities into the distant future only requires some minimal space settlement over very long timescales. In contrast, optimization of long-term intrinsic value entails space expansion sooner and at larger scales, in order to fill the universe with whatever is intrinsically valuable. The distinction between sustainability and optimization of long-term intrinsic value is also important in terms of what is intrinsically valuable. Long-term sustainability can entail the same course of action for both anthropocentric and non-anthropocentric conceptions of intrinsic value: If humanity fails to settle in outer space, then other Earth-originating entities would also die out in a billion or so years, if not sooner [28, 60] . However, long-term optimization generally entails expansion into outer space-but expansion in what way? 
Anthropocentrism would entail expansion of human populations, whereas nonanthropocentrism would entail expansion of something else. \n Implications for AI AI has several important roles to play in the story outlined above. First, current and near-term forms of AI can be applied to addressing certain immediate threats to global civilization. For example, AI is in active use for addressing global warming and environmental protection in a variety of ways, and additional ways have been identified and called for [21] . AI is also in active use for addressing the ongoing COVID-19 pandemic by supporting tasks such as medical analysis [61] and robotics to support social distancing [62] . Further work along these lines could be of value for improving the resilience pf human civilization to COVID-19 and future pandemics. Whereas global warming is a traditional environmental sustainability topic, pandemics are not, though pandemics can derive from environmental activities, in particular those that put humans in contact with novel zoonotic pathogens. Nonetheless, both issues threaten the ability of global human civilization to be sustained into the long-term future. Second, future forms of AI could be particularly consequential. The field of AI has long entertained notions of extreme future AI that could be \"the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control\" [63, p.33] . Recently, there has been debate on the extent to which people in AI should focus on near-term or long-term AI. To some extent, this debate may be unnecessary, due to the existence of activities that are good to do for both near-term and long-term AI [24, [26] [27] . Nonetheless, to the extent that each type of AI merits some distinct attention, a case can be made for attention to long-term AI due to its potential importance for long-term sustainability. Long-term AI can play three roles of relevance to long-term sustainability. First, it could bolster efforts to address threats such as global warming and pandemics. Second, it could pose a threat of its own, especially for runaway AI scenarios in which the AI effectively takes over the world. Third, it could play an instrumental role in space expansion. A significant dilemma exists for the dual status of long-term AI as both threat and tool for addressing other threats. Ideally, long-term AI would be designed slowly and carefully to ensure a high standard for safety and ethics. However, delaying the deployment of long-term AI reduces the potential for its use to address other threats. One implication of this is that other work to address other threats can be of value for \"buying time\" to safely and ethically develop long-term AI [64] . This includes work using near-term AI to address these other threats. If there is extended time available to slowly and carefully design long-term AI, then that also buys time to reflect on what should be done with respect to space expansion and related opportunities [65] . On the other hand, there is no guarantee that an extended time will be available. Indeed, AI research and development is proceeding at brisk pace, prompting concerns about a race to develop long-term AI [66] . One proposed means of buying some time is to deploy a moderately powerful AI \"nanny\" who can protect and support humanity while it reflects on what to do next [67] . 
That possibility is not without its own risks, such as the risk of a poorly designed nanny AI that steers the world in a bad or even catastrophic direction. These are all among the AI issues that can be of profound importance for long-term sustainability. \n Conclusion In this paper, we have surveyed the ethics of sustainability, analyzed the ethical basis of existing work on AI and sustainability, and presented an argument for a non-anthropocentric, long-term conception of sustainability and an accompanying argument for favoring optimization over sustainability. Taken together, the paper provides some guidance on how ongoing work on AI and sustainability can and should proceed. First, work on AI and sustainability should precisely specify its ethical basis, in particular on what it seeks to sustain, for how long, and how much effort should be made on sustainability. Second, work on AI and sustainability should consider adopting the non-anthropocentric, long-term conception of sustainability and optimization that this paper argues for. In practice, that entails a focus on applying AI to addressing major global threats such as global warming and pandemics, to ensuring the long-term sustainability of the resource base needed for civilization, and for pursuing opportunities to expand human civilization into outer space. In closing, we wish to emphasize the ethical principle of equality across time. A fundamental aspect of sustainability is its future-orientation. The principle of equality across time means that all future times should be treated equally, or rather that there should be no bias against something just because of when it exists. This is a compelling moral principle. If it is to be taken seriously, it demands an attention to the big-picture distant future of the universe and to the ways in which near-term actions can affect it. Actions involving AI are among the most significant ways to affect the distant future. The field of AI has special opportunities to make an astronomically large positive difference-to make the universe a better place. It should make pursuit of these opportunities a major priority. strongly indicate that the natural environment is valued instrumentally to advance human wellbeing. Nothing in the statement clarifies the time scales of sustainability or degree of effort. The UNI Global Top 10 Principles for Ethical AI includes the principle, \"Make AI serve people and planet: This includes codes of ethics for the development, application and use of AI so that throughout their entire operational process, AI systems remain compatible and increase the principles of human dignity, integrity, freedom, privacy and cultural and gender diversity, as well as with fundamental human rights. In addition, AI systems must protect and even improve our planet's ecosystems and biodiversity.\" This principle is classified as intrinsically valuing humans, ecosystems and biodiversity as the final line calls for not only protection but improvement of nonhuman entities, and the phrasing \"make AI serve people and planet\" strongly suggest moral consideration for also nonhumans. \n Appendix B: Examples of AI Sustainability Publications Section 3.2 presents data on publications on the nexus of AI and sustainability. This appendix presents some illustrative examples of these data. Theodorou et al. [68] developed a single-player game designed to simulate a sustainable world based on accurate ecological models and behavior economics principles. 
In the game, individual people must aim to act toward a sustainable society. The paper describes the game in terms implying that natural resources play an instrumental role to advance individual people's survival and wellbeing. Other indications of what is meant by a sustainable world is discussed in social and economic terms. This publication was, therefore, classified as implying intrinsic value of humans and instrumental value of nonhumans. Yigitcanlar & Cugurullo [69] define smart and sustainable cities as \"an urban locality functioning as a robust system of systems with sustainable practices, supported by community, technology, and policy, to generate desired outcomes and futures for all humans and nonhumans\". This publication is classified as intrinsically valuing humans and nonhumans as it explicitly refers to their benefits. van Wysnberghe [7] calls for a \"third wave\" in AI ethics and states that \"This third wave must place sustainable development at its core\" (emphasis original). Sustainable development is in turn defined as in the Brundtland report. This suggests an anthropocentric conception of sustainability, in which only humans are intrinsically valuable. It further suggests placing a high degree of effort on sustainability, though the emphasis on sustainable development leaves open the question of the relative importance of present vs. future generations. Gomes et al. [70] argue that \"computational sustainability harnesses computing and artificial intelligence for human well-being and the protection of our planet\" and that \"planning for sustainable development encompasses complex interdisciplinary decisions spanning a range of questions concerning human well-being, infrastructure (…) and the environmental protection of the Earth and its species\". All of this is strongly suggestive of intrinsic value of nonhumans as well as humans, so this publication is classified as such. Zhang et al. [71] studies the contributions of big data analytics capability and artificial intelligence capability to the sustainability of organizational development. The publication does not define sustainability and apply the following uses of \"sustainability during the abstract and introduction only: \"sustainable innovation and performance\", \"sustainability development projects\", \"sustainability design and commercialization processes\", \"sustainable growth and performance\", \"sustainable organizational growth\", \"sustainable competitive advantages\", \"sustainable investment\", \"sustainable development goals\", \"sustainable positional advantages\", and big data as \"sustainable resources\". This publication is therefore classified as ambiguous. Larsson et al. [72] by the AI Sustainability Center defines sustainable AI as follows: \"The AI Sustainability Center supports an approach in which the positive and negative impacts of AI on people and society are as important as the commercial benefits or efficiency gains. We call it Sustainable AI.\" This publication is classified as intrinsically valuing humans as it discusses and defines sustainability as pertaining to human and social aspects only, including sustainable AI which is defined by accountability, bias, malicious use, and transparency. The publication is further noteworthy for presenting an extensive literature review of AI ethics as a review of literature on sustainable AI, thus equating AI ethics or ethical AI with sustainable AI, which arguably drains \"sustainability\" of meaning. 5 2 . 
1 21 https://aiforpeople.org/conference/ What should be able to be sustained, and why? \n\t\t\t https://earthcharter.org/", "date_published": "n/a", "url": "n/a", "filename": "060_sustainability-ai.tei.xml", "abstract": "Sustainability is widely considered a good thing and is therefore a matter of ethical significance. This paper analyzes the ethical dimensions of existing work on AI and sustainability, finding that most of it is focused on sustaining the environment for human benefit. The paper calls for sustainability that is not human-centric and that extends into the distant future, especially for advanced future AI as a technology that can advance expansion beyond Earth.", "id": "72bb37a902865897f1b1931a7c279192"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": [], "title": "Under review as a conference paper at ICLR 2022 SELF-SUPERVISE, REFINE, REPEAT: IMPROVING UN-SUPERVISED ANOMALY DETECTION", "text": "INTRODUCTION Anomaly detection (AD), the task of distinguishing anomalies from normal data, plays a crucial role in many real-world applications such as detecting faulty products using visual sensors in manufacturing, fraudulent behaviors at credit card transactions, or adversarial outcomes at intensive care units. AD has been considered under various settings based on the availability of negative (normal) and positive (anomalous) data and their labels at training, as overviewed in Sec. 2. Each application scenario is dominated by different challenges. When entire positive and negative data are available along with their labels (Fig. 1a ), the problem can be treated as supervised classification and the dominant challenge becomes the imbalance in label distributions [4, 16, 20, 27, 32, 35] . When only negative labeled data are available (Fig. 1b ), the problem is 'one-class classification' [23, 26, 33, 45, 47, 51, 53] . Various works have also extended approaches designed for these to settings with additional unlabeled data (Fig. 1c,d,e ) [10, 18, 24, 46, 58] in a semi-supervised setting. While there exist many prior works in these settings, they all depend on some labeled data, which is not desirable in all application scenarios. Unsupervised AD, on the other hand, poses unique challenges in the absence of any labeled data information, and a straightforward adaption of methods developed with the assumption of labeled data would be suboptimal. For example, some recent studies [7, 60] have applied one-class classifiers (OCCs) that are known to yield impressive performance when trained on negative samples [7, 23, 26, 33, 51] to unsupervised AD, but their performance for unsupervised AD has been quite sub-optimal. Fig. 2 illustrates this, showing the unsupervised AD performance of state-of-the-art Deep OCCs [51] with different anomaly ratios in unlabeled training data -the average precision significantly drops even when a small portion (2%) of training data is contaminated with anomalies. Our framework SRR (Self-supervise, Refine, Repeat), overviewed in Fig. 3 , brings a novel approach to unsupervised AD with the principles of self-supervised learning without labels and iterative data refinement based on the agreement of OCC outputs. We propose to improve the state-of-the-art performance of OCCs, e.g. [33, 51] , by refining the unlabeled training data so as to address the fundamental challenges elaborated above. SRR iteratively trains deep representations using refined Figure 1 : AD problem settings. 
Blue and red dots are for labeled negative (normal) and positive (anomalous) samples, respectively. Grey dots denote unlabeled samples. While previous works mostly focus on supervised (a, b) or semi-supervised (c, d, e) settings, we tackle an AD problem using only unlabeled data (f) that may contain both negative and positive samples. data while improving the refinement of unlabeled data by excluding potentially-positive (anomalous) samples. For the data refinement process, we employ an ensemble of OCCs, each of which is trained on a disjoint subset of unlabeled training data. The samples are declared as normal if there is a consensus between all the OCCs. The refined training data are used to train the final OCC to generate the anomaly scores in the unsupervised setting. Most prior unsupervised AD works [23, 26, 33, 51] assume that the data contains entirely negative samples, which makes them not truly unsupervised as they require having humans to do the data filtering. Similar to ours, some prior unsupervised AD works [7, 45, 60] considered evaluating on an unsupervised setting where there exist a small percentage of anomalous samples in the training data, i.e. operating in 'truly' unsupervised setting without having the need for humans to do any filtering in the training data. However, these methods often suffered from significant performance degradation as the ratio of anomalous sample ratio has increased (see Sec. 4.2). We would like to highlight that our method distinguishes from the literature by bringing a data-centric approach (refining the unlabeled data) to unsupervised anomaly detection beyond the model-centric approaches (improving the model itself). Our framework SRR aims to provide robustness in performance as the anomalous sample ratio increases, as shown in Fig. 2 . We conduct extensive experiments across various datasets from different domains, including semantic AD (CIFAR-10 [29], Dog-vs-Cat [19] ), real-world manufacturing visual AD use case (MVTec [8] ), and tabular AD benchmarks. We consider methods with both shallow [36, 47] and deep [7, 33, 51] models. We evaluate models at different anomaly ratios of unlabeled training data and show that SRR significantly boosts performance. For example, in Fig. 2 , SRR improves more than 15.0 average precision (AP) with a 10% anomaly ratio compared to a state-of-theart one-class contrastive representation model [51] on CIFAR-10. Similarly, on MVTec SRR retains a strong performance, dropping less than 1.0 AUC with 10% anomaly ratio, while the best existing OCC [33] drops more than 6.0 AUC. We further investigate the efficacy of our design choices, such as the number of ensemble classifiers, thresholds, and refinement of deep representations via ablation studies. \n RELATED WORK There are various existing works under the different settings described in Fig. 1 : The positive + negative setting is often considered as a supervised binary classification problem. The challenge arises due to the imbalance in label distributions as positive (anomalous) samples are rare. As summarized in [12] , to address this, over-/under-sampling [16, 20] , weighted optimization [4, 27] , synthesizing data of minority classes [32, 35] , and hybrid methods [2, 22] have been studied. The negative setting is often converted to a one-class classification problem, with the goal of finding a decision boundary that includes as many one-class samples as possible. 
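As a concrete illustration of this negative-only setup, the minimal sketch below fits a one-class classifier on (assumed clean) normal samples and scores held-out points. It uses scikit-learn's OneClassSVM with an rbf kernel purely as an example of the shallow baselines discussed next; the synthetic data and hyperparameters are placeholders, not settings from the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 8))              # training data assumed all normal
X_test = np.vstack([rng.normal(0.0, 1.0, size=(50, 8)),    # normal test points
                    rng.normal(4.0, 1.0, size=(10, 8))])   # shifted points standing in for anomalies

occ = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_train)
anomaly_score = -occ.score_samples(X_test)   # higher = more anomalous
flagged = occ.predict(X_test) == -1          # True where the decision boundary excludes the point
print(f"{flagged.sum()} of {len(X_test)} test points flagged as anomalous")
```

The fragility discussed in the introduction arises precisely when X_train is not all normal: a few anomalies in the unlabeled training set distort the learned boundary and the scores degrade, which is the failure mode SRR is designed to address.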
Shallow models for this setting include one-class support vector machines [47] (OC-SVM), support vector data description [53] (SVDD), kernel density estimation (KDE) [31] , and Gaussian density estimation (GDE) [44] . There are also auto-encoder based models [59] that treat the reconstruction error as the anomaly score. Deep learning based OCCs have been developed, such as Deep OCC [45] , geometric transformation [23] , or outlier exposure [26] . Noting the degeneracy or inconsistency of learning objectives of existing end-to-end trainable Deep OCCs, [51] proposed a deep representation OCC, a two-stage framework that learns self-supervised representations [17, 28] followed by shallow OCCs. That work was extended for texture anomaly localization with CutPaste [33] . The robustness to very low anomaly ratios of these methods under the unsupervised setting was explored in [7, 60] . The semi-supervised setting is defined as utilizing a small set of labeled samples and large set of unlabeled samples to distinguish anomalies from normal data. Depending on which labeled samples are given, this setting can be split into three sub-categories. When only some positive/negative labeled samples are provided, we denote that as a PU/NU setting. Most previous works in semi-supervised AD settings focus on the NU setting where only some of the normal labeled samples are given [1, 41, 52] . The PNU setting is a more general semi-supervised setting where subsets of both positive and negative labeled samples are given. Deep SAD [46] and SU-IDS [39] are included in this category. We show the significant outperformance of our proposed SRR framework compared to Deep SAD, for multiple benchmark datasets when not using any labeled data (see Sec. 4.2). The unlabeled setting has received relatively less attention despite its significance in automating machine learning. The popular methods for this setting include isolation forest [36] and local outlier factor [13] . However, they are difficult to scale, and less compatible with recent advances in representation learning. While OCCs, such as OC-SVM, SVDD, or their deep counterparts, apply to unlabeled settings by assuming the data is all negative, and the robustness of those methods has also been demonstrated in part [7, 60] , in practice we observe a significant performance drop with a high anomaly ratio, shown in Fig. 2 . In contrast, our proposed framework is able to maintain high performance across anomaly ratios. Data refinement has been applied to AD in some prior works. [5, 42] generate pseudo-labels using binary classification and OC-SVM for data refinement to boost the consequent AD performances in unsupervised settings. [6, 30, 40, 55, 59 ] used the reconstruction errors of the auto-encoder as an indicator for removing possible anomalies. [21] and [38] used data refinement for AD in supervised and semi-supervised settings. Additional discussions can be found in Appendix A.9. Self-training [37, 48] is an iterative training mechanism using predicted pseudo labels as targets for model training. It has regained popularity recently with its successful results in semi-supervised image classification [9, 50, 57] . To improve the quality of pseudo labels, employment of an ensemble of classifiers has also been studied. [14] trains an ensemble of classifiers with different classification methods to make a consensus for noisy label verification. Co-training [11] trains multiple classifiers, each of which is trained on the distinct views, to supervise other classifiers. 
Co-teaching [25] and DivideMix [34] share a similar idea in that they both train multiple deep networks on separate data batches to learn different decision boundaries, thus becoming useful for noisy label verification. While there are similarities, the proposed framework differs clearly from these previous works: SRR performs iterative training that combines data refinement (via a robust ensemble) with self-supervised learning, for fully unsupervised training of an anomaly detector. \n PROPOSED FRAMEWORK Self-supervise, Refine, and Repeat (SRR) is an iterative training framework, where we refine the data (Sec. 3.1) and update the representation with the refined data (Sec. 3.2), followed by OCCs on the refined representations. Fig. 3 overviews the framework and Algorithm 1 provides the pseudo code. Notation. We denote the training data as D = {x_i}_{i=1}^N, where x_i ∈ X and N is the number of training samples. y_i ∈ {0, 1} is the label corresponding to x_i, where 0 denotes normal (negative) and 1 denotes anomaly (positive). Note that labels are not provided in the unsupervised setting. Let us denote a feature extractor as g: X → Z; g may include any data preprocessing functions, an identity function (if raw data is directly used for one-class classification), or learned/learnable representation extractors such as deep neural networks. Let us define an OCC as f: Z → [−∞, ∞] that outputs anomaly scores given the input features g(x). The higher the score f(g(x)), the more anomalous the sample x is. The binary anomaly prediction is made after thresholding: 1(f(g(x)) ≥ η). \n DATA REFINEMENT A naive way to generate pseudo labels of unlabeled data is to construct an OCC on raw data or learned representations as in [51] and threshold the anomaly score to obtain a binary label for normal vs. anomalous. Because we update the model with refined data that excludes samples predicted to be anomalous, it is important to generate pseudo labels of the training data as accurately as possible. To this end, instead of training a single classifier (unlike most previous works on data refinement for AD [5, 42, 59]), we train an ensemble of K OCCs and aggregate their predictions to generate pseudo labels. We illustrate the data refinement block in Fig. 3 and as REFINEDATA in Algorithm 1. Specifically, we randomly divide the unlabeled training data D into K disjoint subsets D_1, ..., D_K, and train K different OCCs (f_1, ..., f_K) on the corresponding subsets (D_1, ..., D_K). Then, we estimate a binary pseudo-label of the data x_i ∈ D as follows: \n ŷ_i = 1 − ∏_{k=1}^{K} [1 − 1(f_k(g(x_i)) ≥ η_k)]   (1) \n η_k = max η  s.t.  (1/N) ∑_{i=1}^{N} 1(f_k(g(x_i)) ≥ η) ≥ γ   (2) \n where 1(·) is the indicator function that outputs 1/0 if the input is True/False, f_k(g(x_i)) is the anomaly score of x_i under OCC f_k, and η_k in Equation 2 is a threshold determined as the γ percentile of the anomaly score distribution {f_k(g(x_i))}_{i=1}^N. To interpret Equation 1: x_i is predicted as normal, i.e. ŷ_i = 0, only if all K OCCs predict it as normal. While this may seem too strict and may reject many true normal samples from the training set, we find empirically that it is critical for excluding true anomalous samples from the training set. The effectiveness of using an ensemble of classifiers is shown empirically in Sec. 4.3.
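The consensus rule of Equations 1 and 2 is short to state in code. The sketch below is an illustrative simplification under our own naming: it uses a single-component Gaussian density as the one-class scorer (the paper uses a simple GDE in the refinement block, but applies it to learned representations rather than to raw features as done here), with K and gamma as the ensemble size and the percentile threshold.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def refine_data(X: np.ndarray, K: int = 5, gamma: float = 0.1, seed: int = 0) -> np.ndarray:
    """Boolean mask of samples kept as pseudo-normal, following Eqs. (1)-(2).

    Each of K one-class scorers is fit on a disjoint random subset of X; a sample
    is kept only if every scorer places it below its own gamma-percentile anomaly
    threshold, i.e. all K classifiers agree that it looks normal."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), K)
    flagged = np.zeros(len(X), dtype=bool)                  # "anomalous" vote from any classifier
    for idx in folds:
        gde = GaussianMixture(n_components=1).fit(X[idx])   # simple Gaussian density estimator
        scores = -gde.score_samples(X)                       # higher = more anomalous
        eta_k = np.quantile(scores, 1.0 - gamma)             # Eq. (2): top-gamma fraction exceeds eta_k
        flagged |= scores >= eta_k                           # Eq. (1): any positive vote marks an anomaly
    return ~flagged

# usage sketch: keep = refine_data(features, K=5, gamma=0.1); refined = features[keep]
```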
\n REPRESENTATION UPDATE SRR follows the idea of deep representation OCCs [51] , where in the first stage a deep neural network is trained with self-supervised learning (such as rotation prediction [23] , contrastive [51] , or CutPaste [33] ) to obtain meaningful representations of the data, and in the second stage OCCs are trained on these learned representations. Such a two-stage framework is shown to be beneficial as it prevents the 'hypersphere collapse' of the deep OCCs by the favorable inductive bias it brings with the architectural constraints [45] . Here, we propose to conduct self-supervised representation learning jointly with data refinement. More precisely, we train a feature extractor g using D = {x i | ŷi = 0}, a subset of unlabeled data Algorithm 1 SRR: Self-supervise, Refine, Repeat. Input: Training data D = {x i } N i=1 , Ensemble count (K), threshold (γ) Output: Refined data ( D), trained OCC (f ), feature extractor (g) 1: function REFINEDATA(D, g, K, γ) 2: Train OCC models {f k } K k=1 on {D k } K k=1 , K disjoint subsets of the training data D. \n 3: Compute thresholds η k 's for γ percentile of anomaly distributions (equation 2). \n 4: Predict binary labels ŷi (equation 1). \n 5: Return D = {x i : ŷi = 0, x i ∈ D}. 6: end function 7: function SRR(D, K, γ) 8: Initialize the feature extractor g. \n 9: while g not converged do 10: D = REFINEDATA(D, g, K, γ). 11: Update g using D with self-supervised learning objectives. 12: end while 13: D = REFINEDATA(D, g, K, γ). \n 14: Train an OCC model (f ) on refined data ( D). 15: end function D that only includes samples whose predicted labels with an ensemble OCC from Sec. 3.1 are negative. We also update D as we proceed with representation learning. The proposed method is illustrated in Algorithm 1 as SRR. In contrast to previous works [33, 51] that use the entire training data for learning self-supervised representation, we find it necessary to refine the training data even for learning deep representations. Without representation refinement, the performance improvements of SRR are limited, as shown in Sec. 4.3.2. Last, for test-time prediction, we train an OCC on refined data D using updated representations by g as in line 13-14 in Algorithm 1. \n UNSUPERVISED MODEL SELECTION As SRR is designed for unsupervised AD, labeled validation data for hyperparameter tuning is typically not available and the framework should enable robust model selection without any reliance on labeled data. Here, we provide insights on how to select important hyperparameters, and later in Sec. 4.3.1 perform sensitivity analyses for these hyperparameters. Data refinement of SRR introduces two hyperparameters: the number of OCCs (K) and the percentile threshold (γ). There is a trade-off between the number of classifiers for the ensemble and the size of disjoint subsets for training each classifier. With large K, we aggregate prediction from many classifiers, each of which may contain randomness from training. This comes at a cost of reduced performance per classifier as we use smaller subsets to train them. In practice, we find K = 5 works well across different datasets and anomaly ratios. γ controls the purity and coverage of refined data. If γ is large, and thus classifiers reject too many samples, the refined data could be more pure and contain mostly the normal samples; however, the coverage of the normal samples would be limited. 
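To restate Algorithm 1 in a compact form and make the roles of K and γ concrete, the schematic below outlines the SRR loop. The function names are ours and stand in for components described in the text: refine_data is the ensemble consensus of Sec. 3.1, update_representation is one round of self-supervised training on the kept samples, encode applies the current feature extractor, and fit_occ trains the final one-class classifier (e.g. a GDE) on refined representations.

```python
def srr(X, K=5, gamma=0.1, epochs=100,
        refine_data=None, update_representation=None, encode=None, fit_occ=None):
    """Schematic outline of SRR (Self-supervise, Refine, Repeat).

    refine_data(Z, K, gamma)      -> boolean keep-mask (Sec. 3.1 consensus)
    update_representation(g, X_k) -> updated feature extractor g (Sec. 3.2)
    encode(g, X)                  -> features Z under the current extractor
    fit_occ(Z_kept)               -> final one-class scorer
    """
    g = None                                            # randomly initialised extractor
    for _ in range(epochs):
        keep = refine_data(encode(g, X), K, gamma)      # drop likely anomalies
        g = update_representation(g, X[keep])           # self-supervise on pseudo-normal data only
    keep = refine_data(encode(g, X), K, gamma)          # final refinement pass
    occ = fit_occ(encode(g, X)[keep])                   # final OCC on refined representations
    return g, occ                                       # score a test point x via occ on encode(g, x)
```

As noted in the implementation details below, the refinement mask does not have to be recomputed at every epoch; a sparse update schedule keeps the overhead small.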
On the other hand, with a small γ, the refined data may still contain many anomalies and the performance improvement with SRR would be limited. We empirically observe that SRR is robust to the selection of γ when it is chosen from a reasonable range. In our empirical experiments, we find ∼ 1 − 2× of the true anomaly ratio to be a reasonable choice. In other words, it is safer to use γ higher than the expected true anomaly ratio. In some cases, the true anomaly ratio may not be available at all; for such scenarios, we propose Otsu's method [49] to estimate the anomaly ratio of the training data for determining the threshold γ (experimental results are in the Appendix A.5). \n EXPERIMENTS We evaluate the efficacy of our proposed framework for unsupervised AD tasks on tabular (Sec. 4.1) and image (Sec. 4.2) data types. We experiment varying ratios of anomaly samples in unlabeled training data and with different combinations of representation learning and OCCs. In Sec. 4.3, we provide performance analyses to better explain major constituents of the performance, as well as sensitivity to hyperparameter values. Implementation details: To reduce the computational complexity of the data refinement block, we utilize a simple OCC such as GDE in the data refinement block. In a two-stage model, we only update the data refinement block at 1st, 2nd, 5th, 10th, 20th, 50th, 100th, 500th epochs instead of every epoch. After 500 epochs, we update the data refinement block per each 500th epoch. Each run of experiments requires a single V100 GPU. Additional discussions can be found in Appendix A.10. \n EXPERIMENTS ON TABULAR DATA Datasets. Following [7, 60] , we test the efficacy of SRR on a variety of tabular datasets, including KDDCup, Thyroid, or Arrhythmia from the UCI repository [3] . We also use KDDCup-Rev, where the labels of KDDCup are reversed so that an attack represents anomaly [60] . To construct data splits, we use 50% of normal samples for training. In addition, we hold out some anomaly samples (amounting to 10% of the normal samples) from the data. This allows us to simulate unsupervised settings with an anomaly ratio of up to 10% of entire training set. The rest of the data is used for testing. 1 We conduct experiments using 5 random splits and 5 random seeds, and report the average and standard deviation of 25 F1-scores (with scale 0-100) for the performance metric. Models. We mainly compare with GOAD [7] (the state-of-the-art AD model in the tabular domain) and implement SRR on top of it. GOAD utilizes random transformation classification as the pretext task of self-supervised learning, and the normality score is determined by whether transformations are accurately included in the transformed space of the normal samples. We re-implement GOAD [7] with a few modifications. First, instead of using embeddings to compute the loss, we use a parametric classifier, similarly to augmentation prediction [51] . Second, we follow the two-stage framework [51] to construct deep OCCs. For the clean training data setting, our implementation achieves 98.0 for KDD, 95.0 for KDD-Rev, 75.1 for Thyroid, and 54.8 for Arrhythmia F1-scores, which are comparable to those reported in [7] . Please see Appendix A.3 for formulation and implementation details. Figure 4 : Unsupervised AD performance (F1-score) using OC-SVM (with rbf kernel), GOAD [7] , and GOAD with the proposed method SRR on various tabular datasets. Shaded areas represent the standard deviation. Results. 
We show results of GOAD (the baseline) and GOAD with SRR in Fig. 4 . The ranges of the noise ratio are set to 0% for the anomaly ratios in the original dataset (if anomaly ratios are larger than 10%, we set the maximum anomaly ratio as 10%). For KDD-Rev, we set the maximum anomaly ratio to 2.5% because the performance of GOAD (without SRR) drops significantly even with a small ratio of anomalies in the training data. Fig. 4 shows that integrating SRR significantly improves GOAD (the state-of-the-art methods for tabular OCC). The improvements are more significant especially at higher anomaly ratios. This underlines how SRR can achieve significant improvements for both small (Thyroid & Arrhythmia) and large-scale datasets (KDD & KDD-Rev). \n EXPERIMENTS ON IMAGE DATA Datasets. We evaluate SRR on visual AD benchmarks, including CIFAR-10 [29], f-MNIST [56] , Dog-vs-Cat [19] , and MVTec [8] . For CIFAR-10, f-MNIST, and Dog-vs-Cat datasets, samples from one class are set to be normal and the rest from other classes are set to be an anomaly. Similar to the experiments on tabular data in Section. 4.1, we swap a certain amount of the normal training data with anomalies given the target anomaly ratio. For MVTec, since there are no anomalous data available for training, we borrow 10% of the anomalies from the test set and swap them with normal samples in the training set. Note that 10% of samples borrowed from the test set are excluded from evaluation. For all datasets, we experiment with varying anomaly ratios from 0% to 10%. We use area under ROC curve (AUC) and average precision (AP) metrics to quantify the performance for visual AD (with scale 0-100). When computing AP, we set the minority class of the test set as label 1 and majority as label 0 (e.g., normal samples are set as label 1 for CIFAR-10 experiments as there are more anomaly samples that are from 9 classes in the test set). We run all experiments with 5 random seeds and report the average performance for each dataset across all classes. Per-class AUC and AP are reported in Appendix A.4. \n Models. For semantic AD benchmarks, CIFAR-10, f-MNIST, and Dog-vs-Cat, we compare the SRR with two-stage OCCs [51] using various representation learning methods, such as distributionaugmented contrastive learning [51] , rotation prediction [23, 28] and its improved version [51] , and denoising autoencoder. For MVTec benchmarks, we use CutPaste [33] as a baseline and compare to its version with SRR integration. For both experiments, we use the ResNet-18 architecture, trained from random initialization, using the hyperparameters from [51] and [33] . The same model and hyperparameter configurations are used for SRR with K = 5 classifiers in the ensemble. We set γ as twice the anomaly ratio of training data. For 0% anomaly ratio, we set γ as 0.5. Finally, a Gaussian Density Estimator (GDE) on learned representations is used as OCC. Results. Fig. 5 shows a significant performance drop with increased anomaly ratio, regardless of representation learning. For example, the AUC of distribution-augmented contrastive representation [51] drops from 92.1 to 83.6 when anomaly ratio becomes 10%. Similarly, the improved rotation prediction representation [28] drops from 90.0 to 84.1. On the other hand, SRR effectively handles the contamination in the training data and achieves 89.9 AUC with 10% anomaly ratio, reducing the performance drop by 74.1%. 
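As an aside on the experimental protocol above, the contaminated training sets can be simulated as in the following sketch. This is our reading of the swap procedure (the function name and the use of ground-truth labels purely for the simulation are our own), with the anomaly ratio defined over the entire training set as noted in the footnote of Sec. 4.1.

import numpy as np

def make_contaminated_split(X, y, normal_class, anomaly_ratio, seed=0):
    # Keep one class as normal and swap a fraction of it for samples from the
    # other classes, so anomalies make up anomaly_ratio of the training set.
    rng = np.random.default_rng(seed)
    normal = X[y == normal_class]
    others = X[y != normal_class]
    n_anom = int(round(anomaly_ratio * len(normal)))
    keep = rng.choice(len(normal), size=len(normal) - n_anom, replace=False)
    add = rng.choice(len(others), size=n_anom, replace=False)
    X_train = np.concatenate([normal[keep], others[add]])
    return X_train[rng.permutation(len(X_train))]     # unlabeled, shuffled training set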
An 'oracle' upper bound would be the removal of all anomalies from the training data (which is the same as the performance at 0% anomaly ratio for the same size of the data). As Fig. 5 shows, the performance of SRR is similar to this oracle upper bound performance (less than 2.5 AUC difference) even with high anomaly ratios (10%). The results are also similar in other metrics, such as AP in Fig. 5 or Recall at Precision of 70 and 90 in Fig. 9 in the Appendix. We repeat experiments on 3 additional visual AD datasets and report results in Fig. 6 . We observe consistent and significant improvements over the baseline across different datasets and different one-class classification methods. Note that the improvement is more significant at higher anomaly ratios. For instance, on MVTec dataset, SRR improves AUC by 4.9 and AP by 7.1 compared to the state-of-the-art CutPaste OCC with an anomaly ratio of 10%. In the Appendix (Sec. A.4), we also illustrate per-class performance of SRR compared with the state-of-the-art. with varying anomaly ratios. We use state-of-the-art one-class classification models for baselines, such as distribution-augmented contrastive representations [51] for f-MNIST and Dog-vs-Cat, or CutPaste [33] for MVTec, and build SRR on top of them. \n PERFORMANCE ANALYSES In this section, we conduct sensitivity analyses on two hyperparameters of SRR, namely, the number K for ensemble OCCs and the percentile threshold γ that determines normal and anomalies in the data refinement module. In addition, we show the importance of updating the representation with refined data. Ablation studies are conducted on two visual AD benchmarks, CIFAR-10 and MVTec. More experimental results and discussions can be found in the Appendix. \n SENSITIVITY TO HYPERPARAMETERS SRR is designed for unsupervised AD and it is important to ensure robust performance against changes in the hyperparameters as model selection without labeled data would be very challenging. Fig. 7 presents the sensitivity analyses of SRR with respect to various hyperparameters. In Fig. 7a , we observe the performance improvement as we increase the number of classifiers for ensemble. This is particularly effective on CIFAR (top in Fig. 7a ), where the number of samples in the training data is large enough that even with large K, the number of samples to train each OCC (N/K) would be sufficient. In Fig. 7b , we observe that SRR performs robustly when γ is set to be larger than the actual anomaly ratio (10%). When γ is less than 10%, however, we see a significant drop in performance, all the way to the baseline (γ = 0). Our results show that SRR improves upon baseline regardless of the threshold. In addition, it suggests that γ could be set to be anywhere from the true anomaly ratio and and 2x the anomaly ratio to maximize its effectiveness. When the true anomaly ratio is unknown, as discussed in Appendix A.5, Otsu's method could be used. \n ITERATIVELY UPDATING REPRESENTATIONS WITH REFINED DATA It is possible to decouple representation learning and data refinement of SRR, which would result in a three-stage framework, where we learn representations until convergence without data refinement, followed by the data refinement and learning OCC. Fig. 7c shows that SRR using a pre-trained then fixed representation (i.e. when data refinement is used for OCC only) already improves the performance upon the baseline with no data refinement at any stage of learning representation or classifier. 
Improvements on CIFAR-10 are 4.9 in AUC and 8.0 in AP; and on MVTec are 1. \n QUANTIFYING THE REFINEMENT EFFICACY We evaluate how many normal and anomalies are excluded by the proposed data refinement block. As in Fig. 8 (a, b), with data refinement, we can exclude more than 80% of anomalies in the training set without removing too many normal samples. For instance, among 4% anomalies in CIFAR-10 data, SRR is able to exclude 80% anomalies while removing less than 20% normal samples. Such a high recall of anomalies of SRR is not only useful for unsupervised AD, but also could be useful for improving the annotation efficiency when a budget for active learning is available. Fig. 8(c, d ) demonstrates the removed normal and abnormal samples by the data refinement module over training epochs. It shows that better representation learning (as training epochs increase) consistently improves the efficacy of the data refinement. \n CONCLUSION AD has wide range of practical use cases. A challenging and costly aspect of building an AD system is that anomalies are rare and not easy to detect by humans, making them difficult to label. To this end, we propose a novel AD framework to enable high performance AD without any labels called SRR. SRR can be flexibly integrated with any OCC, and applied on raw data or on a trainable representations. SRR employs an ensemble of multiple OCCs to propose candidate anomaly samples that are refined from training, which allows more robust fitting of the anomaly decision boundaries as well as better learning of data representations. We demonstrate the state-of-the-art AD performance of SRR on multiple tabular and image data. ETHICS STATEMENT SRR has the potential to make significant positive impact in real-world AD applications where detecting anomalies is crucial, such as for financial crime elimination, cybersecurity advances, or improving manufacturing quality. We note a potential risk associated with using SRR: when representations are not sufficiently good, there will be a negative cycle of refinement and representation updates/OCC. While we rely on the existence of good representations, in some applications they may be difficult to obtain (e.g., cybersecurity). This paper focuses on the unsupervised setting and demonstrates strong AD performance, opening new horizons for human-in-the-loop AD systems that are low cost and robust. We leave these explorations to future work. \n REPRODUCIBILITY STATEMENT The source code of SRR will be published upon acceptance. For reproducibility, we only use public tabular and image datasets to evaluate the performances of SRR. Complete descriptions of those datasets can be found in Datasets subsections in Sec. 4.1 and 4.2. Detailed experimental settings and hyperparameters can be found in Models subsections in Sec. 4.1 and 4.2. The implementation details (including hyperparameters that we used) on GOAD [7] are described in Sec. A.3. \n A APPENDIX A.1 ADDITIONAL RESULTS WITH DIFFERENT METRICS There are various performance metrics of OCCs. In the main manuscript, we mainly use AUC, Average Precision (AP) and F1-score as the evaluation metrics. In this subsection, we report the performances of the proposed model (SRR) and baselines in terms on Recall at Precision 70 and 90 as the additional metrics on CIFAR-10 dataset. As in Fig. 9 , we observe a similar trends with these two additional metrics as well. For example, the performances of SRR are robust across various anomaly ratios. 
On the other hand, all the other OCCs show consistent and significant performance degradation as the anomaly ratio increases. SRR performs 15.8 and 10.9 better than the state-of-the-art OCC [51] in terms of recall at precision 70 and 90, respectively. SRR is also applicable on raw tabular features or learned image representation without representation update using data refinement. In this section, we demonstrate the performance improvements by SRR without representation update to verify the effectiveness of data refinement block of SRR for shallow OCCs. Fig. 10 (upper) demonstrates consistent and significant performance improvements when we apply SRR on top of raw tabular features. Specifically, the Average Precision (AP) improvements are 10.2, 29.0, and 4.1 with KDD-Rev, Thyroid, and Arrhythmia tabular datasets, respectively. We also apply SRR on top of various learned image representations. As can be seen in Fig. 10 (lower), the performance improvements of SRR are consistent across various different learned image representations (without representation update). For instance, the AP improvements are 9.2, 1.0, and 12.1 with learned image representations using RotNet [23] , Rotation [28] , and Contrastive [51] , respectively. A.3 IMPLEMENTATION DETAILS ON GOAD [7] FOR TABULAR DATA EXPERIMENTS A classification-based AD method, GOAD [7] , has demonstrated strong AD performance on tabular datasets. Unlike previous works [23, 26] that formulate a parametric classifier for multiple transformation classification, GOAD employs distance-based classification of multiple transformations. For the set of transformations T m : X → D, m = 1, ..., M , the loss function of GOAD is written as in equation 3 with the probability defined in equation 4. L = −E m,x log P (m|T m (x)) , (3) P ( m|T m (x)) = exp(− f (T m (x)) − c m 2 ) n exp(− f (T m (x)) − c n 2 ) , (4) where the centers c m 's are updated by the average feature over the training set. While it is shown to perform well [7] , we find that the distance-based formulation is not necessary, and we achieve the similar performance, if not worse, to [7] using a parametric classifier when computing the probability: P ( m|T m (x)) = exp w mf (T m (x)) + b m n exp (w n f (T m (x)) + b n ) (5) The formulation in equation 5 is easier to optimize than its original form in equation 4 as it can be fully optimized with backpropagation without alternating updates of feature extractor f and centers c m . Once we learn a representation by optimizing the loss in equation 3 using equation 5, we follow a two-stage one-class classification framework of [51] to construct a set of Gaussian density estimation OCCs for each transformation. Finally, we aggregate a maximum normality scores from a set of classifiers as the normality score. In Table 1 , we summarize the implementation details, such as network architecture or hyperparameters, and AD performance under clean training data setting that reproduces the results in [7] . Table 1 : The AD performance under clean only data setting of GOAD in [7] and our implementation. Our implementation demonstrates comparable, if not worse, performance to those reported in [7] . Our implementation also shares most hyperparameters across datasets except the M , the number of transformations, and the train steps, which are closely related to the size of training data. \n A.4 PER-CLASS AUC AND AP In the main manuscript, we report the mean and standard deviation of AUC and AP across all classes in each dataset. 
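Before the per-class tables, a short sketch of the parametric classification head of equation 5 used in the GOAD reimplementation above. This assumes PyTorch, with f the feature network, head a linear layer with one output per transformation, and transforms the list of M random affine transformations; all names are illustrative, not the authors' code.

import torch
import torch.nn.functional as F

def goad_parametric_loss(f, head, transforms, x):
    # Eq. (3): -E_{m,x} log P(m | T_m(x)), with P given by the softmax over
    # logits w_m^T f(T_m(x)) + b_m of eq. (5) instead of the distances of eq. (4).
    losses = []
    for m, T in enumerate(transforms):
        logits = head(f(T(x)))                                          # shape (batch, M)
        target = torch.full((x.shape[0],), m, dtype=torch.long, device=x.device)
        losses.append(F.cross_entropy(logits, target))
    return torch.stack(losses).mean()

Because the centers c_m of eq. (4) are replaced by learned weights, the whole loss can be optimized with plain backpropagation, which is the simplification described above.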
In this section, we report the mean and standard deviation of AUC and AP for each class in each dataset (including CIFAR-10 (Table 2 ), MVTec (Table 3 ), fMNIST (Table 4 ), and Dog-vs-Cat (Table 5 ) datasets). \n A.4.1 PER-CLASS AUC AND AP ON CIFAR-10 DATASET The key idea of the Otsu's method is to find the threshold that minimizes the intra-class variance. This is defined as the weighted sum of variances of the two classes. Let us denote the normality scores as {s i } N i=1 and threshold as η. Then, we try to find the threshold (η) that minimizes the weighted sum of the variance (w 0 (η) × σ 0 (η) + w 1 (η) × σ 1 (η)) where w 0 (η) = N i=1 I(s i < η)/N and w 1 (η) = N i=1 I(s i ≥ η)/N . σ 0 (η) and σ 1 (η) are the variances of each class. The optimal threshold (η * ) is determined as η * = min η w 0 (η) × σ 0 (η) + w 1 (η) × σ 1 (η). We use the twice of η * as the hyperparameter (γ) in SRR. We evaluate the performances of Otsu's method (on top of SRR) in comparison to the state-of-the-art OCC [33] and original SRR with the knowledge of the true anomaly ratio using MVTec dataset. Fig. 11 demonstrates that even without true anomaly ratio, the performance of SRR can be significantly better than the state-of-the-art OCC [33] with Otsu's method. Also, Fig. 11 shows that the knowledge of true anomaly ratio is crucial information for maximizing the performance of SRR in fully unsupervised settings. We further extend the experimental results using Otsu's method with other datasets such as CIFAR-10 and Thyroid datasets. [51] for CIFAR-10 dataset and GOAD [7] for Thyroid dataset. We introduce 6% noise on MVTec dataset and 1.5% noise on the Thyroid dataset. Metrics are (AUC/AP) for CIFAR-10 dataset and F1 score for Thyroid dataset. \n A.6 ADDITIONAL ABLATION STUDIES To better understand the source of gains, we include comparisons between the final ensemble model on the converged self-supervised extractor (SRR without final OCC) with the proposed SRR (using an additional final OCC). Also, to further support the novelty of SRR, we report additional experimental results only with ensemble model without data refinement (Ensemble Only). As can be seen in Table 7 , the proposed version of SRR (with an additional final OCC) outperforms significantly, which is attributed to the fact that while fitting the individual OCC models in the ensemble, we do not exclude the possible anomaly samples for diversity of the trained submodels, so the anomaly decision boundaries can be fitted robustly. Therefore, the ensemble model can be somewhat less accurate for classifying difficult samples (near the decision boundary between normal and abnormal) compared to the final OCC used (that is trained with only normal samples). Also, employment of ensemble learning without data refinement (Ensemble only), yields much worse performance than the proposed method (SRR), underlining the importance of the core data refinement idea of SRR. Datasets [33] for MVTec dataset and [51] for CIFAR-10 dataset. We introduce 6% noise on both datasets. Metrics are (AUC/AP). \n A.7 ADDITIONAL BASELINES We add extra baselines from robust anomaly detection literature: Standard PCA [54] , Robust PCA [15] and Local Outlier Factor (LOF) [13] for tabular data and Robust autoencoder (Robust AE) [59] for image data. As can be seen in Table 8 , the performance of PCA and LOF are highly degraded even with a small amount of anomalies in the training data. 
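For reference, the Otsu threshold selection described above can be sketched as a simple grid search over candidate thresholds (the candidate count is arbitrary). The paper states that twice the estimated quantity is used as gamma; in the sketch below we double the implied anomaly fraction (the fraction of scores above the selected threshold), which is our reading and is consistent with the 1-2x-of-anomaly-ratio guidance given earlier.

import numpy as np

def otsu_gamma(scores, n_candidates=256):
    # Otsu's method: pick the threshold minimizing the weighted intra-class
    # variance w0*var0 + w1*var1 of the normality-score distribution (Appendix A.5).
    scores = np.asarray(scores, dtype=float)
    candidates = np.linspace(scores.min(), scores.max(), n_candidates)[1:-1]
    best_eta, best_val = None, np.inf
    for eta in candidates:
        lo, hi = scores[scores < eta], scores[scores >= eta]
        if len(lo) == 0 or len(hi) == 0:
            continue
        val = (len(lo) * lo.var() + len(hi) * hi.var()) / len(scores)
        if val < best_val:
            best_eta, best_val = eta, val
    if best_eta is None:                       # degenerate case: all scores equal
        best_eta = scores.max()
    estimated_ratio = np.mean(scores >= best_eta)       # implied anomaly fraction
    return best_eta, min(2.0 * estimated_ratio, 1.0)    # candidate gamma (our reading)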
For Robust PCA and Robust AE, the performance degradation is less but still significant in comparison to SRR. Overall, SRR outperforms other benchmarks in fully unsupervised settings, underlining the importance of data refinement in improving the robustness to anomaly ratio in training data, as the core constituent of SRR framework. Datasets Figure 2 : 2 Figure 2: Performance of our proposed model and a baseline OCC using contrastive representation [51] on CIFAR-10 with different anomaly ratios in the training data. \n Figure 3 : 3 Figure 3: Block diagram of SRR composed of representation learner (Sec. 3.2), data refinement (Sec. 3.1), and final OCC blocks. The representation learner updates the deep models using refined data from the data refinement block. Data refinement is done by an ensemble of OCCs, each of which is trained on K disjoint subsets of unlabeled training data. Samples predicted as normal by all classifiers are retained in the refined data, and are used to update the representation learner and final OCC. The process is repeated iteratively until convergence. Convergence graphs can be found in the Appendix A.8. \n Figure 5 : 5 Figure 5: Unsupervised AD performance with various OCCs on CIFAR-10. For SRR we adapt distribution-augmented contrastive representation learning [51]. (Left) AUC, (Right) Average Precision (AP). \n Figure 6 : 6 Figure6: Unsupervised AD performance on (a) MVTec (b) f-MNIST, and (c) Dog-vs-Cat datasets with varying anomaly ratios. We use state-of-the-art one-class classification models for baselines, such as distribution-augmented contrastive representations [51] for f-MNIST and Dog-vs-Cat, or CutPaste [33] for MVTec, and build SRR on top of them. \n Figure 7 : 7 Figure 7: Ablation studies on (top) CIFAR-10 and (bottom) MVTec under 10% anomaly ratio setting with respect to (a) ensemble count K, (b) percentile threshold γ, and (c) data refinement with or without representation update. \n Figure 8 : 8 Figure 8: Percentage of excluded anomalous and normal samples by data refinement (a, b) with different anomaly ratios in training data and (c, d) over training epochs for 10% anomaly ratio. \n Figure 9 : 9 Figure 9: Performance of various OCCs on CIFAR-10 dataset. SRR is applied on top of Contrastive [51]. (Left) Recall at Precision 70, (Right) Recall at Precision 90. \n A. 2 2 SRR ON RAW TABULAR FEATURES / LEARNED IMAGE REPRESENTATIONS \n Figure 10 : 10 Figure10: Performance of SRR on (top) raw tabular features and (lower) learned image representations. SRR consistently outperforms baseline and in some cases (e.g., Thyroid, and Contrastive [51] ), the performance improvements are significant. \n Figure 11 : 11 Figure 11: Unsupervised anomaly detection performances with Otsu's method on top of SRR on MVTec dataset. (Left) AUC, (Right) Average Precision (AP). \n Figure 12 : 12 Figure 12: Convergence graphs of SRR with (left) MVTec dataset, (right) CIFAR-10 dataset. \n Table 6 6 are the overall results -Otsu's method yields only slight degradation compared to SRR with true anomaly ratio; however, it still significantly outperforms SOTA OCC baselines. Datasets / Method SOTA OCC SRR with Otsu's method SRR CIFAR-10 0.855 / 0.585 0.906 / 0.703 0.910 / 0.709 Thyroid 0.506 0.623 0.639 \n Table 6 : 6 Additional ablation studies. SOTA OCC methods are \n Table 7 : 7 / Method SOTA OCC Ensemble Only SRR without final OCC SRR Additional ablation studies. 
SOTA OCC methods are CutPaste MVTec 0.905 / 0.845 0.911 / 0.849 0.922 / 0.870 0.937 / 0.887 CIFAR-10 0.855 / 0.585 0.862 / 0.599 0.890 / 0.677 0.910 / 0.709 \n Table 8 : 8 Additional experiments with extra baselines from robust anomaly detection literature. We introduce 6% noise on CIFAR-10 and KDD datasets. For Thyroid dataset, we introduce 1.5% noise. Metrics are (AUC/AP) for image data and F1 score for tabular data.A.8 CONVERGENCE GRAPHSThe proposed SRR framework is converged when the iterative training of the self-supervised learning is converged. Depending on the data refinement model training, corresponding self-supervised models are also trained with differently refined training data. Usually, the data refinement model and self-supervised models converge after a similar number of epochs. Fig.12illustrate the convergence graphs of SRR with MVTec and CIFAR-10 datasets. / Method PCA Robust PCA LOF Robust AE SRR CIFAR-10 - - - 0.636 / 0.174 0.910 / 0.709 Thyroid 0.299 0.377 0.338 - 0.506 KDD 0.836 0.893 0.873 - 0.942 \n\t\t\t Note that the experimental settings with contaminated training data in GOAD [7] and DAGMM [60] are slightly different from ours. Our contamination ratio is defined as the anomaly ratio over the entire training data, while their contamination ratio is the anomaly ratio over all the anomalies in the dataset.", "date_published": "n/a", "url": "n/a", "filename": "self_supervise_refine_repeat_i.tei.xml", "abstract": "Anomaly detection (AD) -separating anomalies from normal data -has many applications across domains, from manufacturing to healthcare. While most previous works have been shown to be effective for cases with fully or partially labeled data, that setting is in practice less common due to labeling being particularly tedious for this task. In this paper, we focus on fully unsupervised AD, in which the entire training dataset, containing both normal and anomalous samples, is unlabeled. To tackle this problem effectively, we propose to improve the robustness of one-class classification trained on self-supervised representations using a data refinement process. Our proposed data refinement approach is based on an ensemble of oneclass classifiers (OCCs), each of which is trained on a disjoint subset of training data. Representations learned by self-supervised learning on the refined data are iteratively updated as the refinement improves. We demonstrate our method on various unsupervised AD tasks with image and tabular data. With a 10% anomaly ratio on CIFAR-10 image data / 2.5% anomaly ratio on Thyroid tabular data, the proposed method outperforms the state-of-the-art one-class classification method by 6.3 AUC and 12.5 average precision / 22.9 F1-score.", "id": "74ee22bb18a54c2cdf77d45a736a1a28"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Martin Pecka", "Tomas Svoboda"], "title": "Lecture Notes in Computer Science: Safe Exploration Techniques for Reinforcement Learning -An Overview", "text": "Introduction Reinforcement learning (RL) as a machine learning method has been thoroughly examined since 80's. In 1981, Sutton and Barto [3] inspired themselves in the reinforcement learning discoveries in behavioral psychology and devised the Temporal Difference machine learning algorithm that had to simulate psychological classical conditioning. In contrast with supervised learning, reinforcement learning does not need a teacher's classification for every sample presented. 
Instead, it just collects rewards (or punishment) on-the-go and optimizes for the expected long-term reward (whereas supervised learning optimizes for the immediate reward). The key advantage is that the design of the rewards is often much simpler and straight-forward than classifying all data samples. Reinforcement learning proved to be extremely useful in the case of statespace exploration -the long-term reward corresponds to the value of each state [17] . From such values, we can compose a policy which tells the agent to always take the action leading to the state with the highest value. As an addition, state values are easily interpretable for humans. Since the early years, a lot of advanced methods were devised in the area of reinforcement learning. To name one, Q-learning [25] is often used in connection with safe exploration. Instead of computing the values of states, it computes the values of state-action pairs, which has some simplifying consequences. For example, Q-learning doesn't need any transition model (i.e. dynamics model) of the examined system. A completely different approach is policy iteration. This algorithm starts with a (more or less random) policy and tries to improve it step-by-step [16] . This case is very valuable if there already exists a good policy and we only want to improve it [11] . What do all of these methods have in common, is the need for rather large training data sets. For simulated environments it is usually not a problem. But with real robotic hardware, the collection of training samples is not only lengthy, but also dangerous (be it mechanical wear or other effects). Another common feature of RL algorithms is the need to enter unknown states, which is inherently unsafe. As can be seen from the previous paragraph, safety is an important issue connected with reinforcement learning. However, the first articles focused on maintaining safety during exploration started to appear much later after the \"discovery\" of RL. Among the first, Heger [15] \"borrowed\" the concept of a worstcase criterion from control theory community. In 1994 he created a variant of Q-learning where maximization of long-term reward is replaced with maximization of minimum of the possible rewards. That basically means his algorithm prefers to never encounter a bad state (or, at least to choose the best of the bad states). This approach has one substantial drawback -the resulting policies are far from being optimal in the long-term-reward sense [10] . In this paper we show the various approaches to safe exploration that have emerged so far. We classify the methods by various criteria and suggest suitable use cases for them. To better illustrate some of the practical details, we use the UGV (Unmanned Ground Vehicle) robotic platform from EU FP7 project NIFTi [6] (see Figure 1 ) as a reference agent. It may happen that in these practical details we assume some advantages of UGVs over UAVs (Unmanned Aerial Vehicles), like the ability to stand still without much effort, but it is mostly easy to convert these assumptions to UAVs, too. Further organization of this paper is the following: in Section 2 we discuss some basics of reinforcement learning (the reader may skip it if he is familiar with reinforcement learning); Section 3 is an overview of the safety definitions \n Reinforcement learning basics \n Markov Decision Processes Markov Decision Processes (MDPs) are the standard model for deliberating about reinforcement learning problems. 
They provide a lot of simplifications, but are sufficiently robust to describe a large set of real-world problems. The simplest discrete stochastic MDP comprises of: [17] a finite set of states S a finite set of actions A a stochastic transition model P : P t (s, a, s ) = P r(s t+1 = s | s t = s, a t = a) for each s, s ∈ S, a ∈ A, where P r stands for probability and the immediate reward function R : S × A → R (or R : S × A × S → R if the reward depends on the stochastic action result) To interpret this definition, we say that the at every time instant t the agent is in a state s, and by executing action a it gets to a new state s . Furthermore, executing a particular action in a particular state may bring a reward to the agent (defined by R). The most important and interesting property of MDPs is the Markov property. If you have a look at the definition of the transition model, the next state only depends on the current state and the chosen action. Particularly, the next state is independent of all the previous states and actions but the current one. To give an example, the robot's battery level cannot be treated implicitly by counting the elapsed time, but rather it has to be modeled as a part of the robot's state. Once the model is set up, everything is ready for utilizing an MDP. \"The agent's job is to find a policy π mapping states to actions, that maximizes some long-run measure of reinforcement\" [17] . The \"long-run\" may have different meanings, but there are two favorite optimality models: the first one is the finite horizon model, where the term J = h t=0 r t is maximized (h is a predefined time horizon and r t is the reward obtained in time instant t while executing policy π). The dependency of r t on the policy is no longer obvious from this notation, but this is the convention used in literature when it is clear which policy is used. This model represents the behavior of the robot which only depends on a predefined number of future states and actions. The other optimality model is called discounted infinite horizon, which means we maximize the discounted sum J = ∞ t=0 γ t r t with γ ∈ (0, 1) being the discount factor. The infinite horizon tries to find a policy that is the best one taking into account the whole future. Please note the hidden dependency on the policy π (and the starting state s 0 ) -it is the policy that decides on which action to take, which in turn specifies what will the reward be. Other extensions of MDPs to continuous states, time or actions are beyond the scope of this overview. However, some of the referenced papers make use of these continuous extensions, which proved to be useful for practical applications. \n Value iteration Value iteration is one of the basic methods for finding the optimal policy. To describe this algorithm, it is first needed to define the essential notion of the optimal value of a state. In this whole subsection we suppose the discounted infinite horizon model, but analogous results can be shown for finite horizon, too. \"The optimal value of a state is the expected infinite discounted sum of reward that the agent will gain if it starts in that state and executes the optimal policy.\" [17] Given a policy π, the induced value function is therefore defined as V π (s) = E ∞ t=0 r k γ k , (1) where E denotes the expected value and r k are the rewards for executing policy π. Taking the best value function over all policies then yields the optimal value function V * : [17] V * (s) = max π V π (s) . 
(2) Inversely, if we have the value function given, we can derive a policy from that. It is a simple policy that always takes the action leading to the most profitable neighbor state (with the highest value). One useful formulation of the properties of the optimal value function is the formulation using the recurrent Bellman equations which define a dynamic system that is stable for the optimal value function. We can say a state's optimal value is the best immediate reward plus its best neighbor's optimal value: [17] V * (s) = max a R(s, a) + γ s ∈S P(s, a, s )V * (s ) . (3) Analogously, we can find the optimal policy using the same Bellman equation: π * (s) = argmax a R(s, a) + γ s ∈S P(s, a, s )V * (s ) . (4) The Value iteration algorithm is based on trying to compute the solution of Equation 4 using iterative Bellman updates (refer to Algorithm 1). In the algorithm, we use a structure called Q to store the \"value\" of state-action pairs. In Value iteration it is just a structure to save intermediate results, but it is the core of the Q-learning algorithm (described in Section 2.3). The stopping criterion of the Value iteration algorithm is not obvious, but Williams and Baird [26] derived an easily applicable upper bound on the error of the computed value function. That said, after a sufficient number of those simple iterations, we can compute the almost optimal value function. The number of iterations needed for Value iteration to converge may be impractically high, but it is shown that the optimal policy converges faster [4] , thus making Value iteration practical. \n Q-learning Just a small change to the Value iteration algorithm results in Q-learning. The basic algorithm is the same as Value iteration, just the update step is done differently (refer to Algorithm 2). The consequence of this change is that no model of the system (transition function P) is needed. It is sufficient to execute all actions in all states equally often, and Watkins [25] proved that if Q-learning were run for an infinite time, the computed Q would converge to the optimal Q * (an analogue of V * ). \n Policy iteration Policy iteration is a completely different approach to computing the optimal policy. Instead of deriving the policy from the Value or Q function, Policy iteration Algorithm 1 The Value iteration algorithm [17] Input: an MDP (states S, actions A, rewards R, transition model P) Output: the optimal value function V * , resp. the optimal policy π * derived from the value function 1. V(s) := arbitrary function 2. π := the policy derived from V 3. while π is not good enough do 4. for all s ∈ S do 5. for all a ∈ A do Update: 6. Q(s, a) := R(s, a) + γ s ∈S P(s, a, s )V(s ) 7. end for 8. V(s) := max a Q(s, a) 9. end for 10. π := the policy derived from V 11. end while 12. V * := V, π * := π Algorithm 2 The Q-learning algorithm (only the parts that differ from Value iteration when V is substituted with Q) [17] Input: an MDP (states S, actions A, rewards R, transition model may be unknown) Output: the optimal state-value function Q * , resp. the optimal policy π * derived from the state-value function 6. Q(s, a) := Q(s, a)+ α R(s, a) + γ max a Q(s , a ) − Q(s, a) \n line left out works directly with policies. In the first step, a random policy is chosen. Then a loop consisting of policy evaluation and policy improvement repeats as long as the policy can be improved [17] (refer to Algorithm 3 for details). 
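A compact tabular rendering of Algorithms 1-3 may help. This is a sketch assuming a finite MDP given as arrays P (S x A x S transition probabilities) and R (S x A rewards); the function names and the fixed iteration/tolerance limits are our own.

import numpy as np

def value_iteration(P, R, gamma, n_iters=1000, tol=1e-6):
    # Algorithm 1: repeated Bellman updates of eq. (3); eq. (4) gives the greedy policy.
    V = np.zeros(R.shape[0])
    for _ in range(n_iters):
        Q = R + gamma * P @ V              # Q(s,a) = R(s,a) + gamma * sum_s' P(s,a,s') V(s')
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new
    return V, Q.argmax(axis=1)

def q_learning_update(Q, s, a, r, s_next, alpha, gamma):
    # Algorithm 2 (model-free): update for one observed transition (s, a, r, s').
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def policy_iteration(P, R, gamma):
    # Algorithm 3: exact policy evaluation (a linear system) plus greedy improvement.
    S = R.shape[0]
    pi = np.zeros(S, dtype=int)
    while True:
        P_pi, R_pi = P[np.arange(S), pi], R[np.arange(S), pi]
        V = np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)   # policy evaluation
        pi_new = (R + gamma * P @ V).argmax(axis=1)           # policy improvement
        if np.array_equal(pi_new, pi):
            return pi, V
        pi = pi_new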
Since in every step the policy gets better, and there is a finite number of different policies, it is apparent that the algorithm converges [23] . Policy iteration can be initialized by a known, but suboptimal policy. Such policy can be obtained e.g. by a human operator driving the UGV. If the initial policy is good, Policy iteration has to search much smaller subspace and thus should converge more quickly than with a random initial policy [11] . Algorithm 3 The Policy iteration algorithm [17] 1. π = arbitrary policy 2. repeat 3. π := π Policy evaluation: (system of linear equations) 4. Vπ(s) = R(s, π(s)) + γ s ∈S P(s, π(s), s )Vπ(s ) Policy improvement: 5. π (s) := argmax a∈A R(s, a) + γ s ∈S P(s, a, s )Vπ(s ) 6. until π = π \n Defining safety To examine the problems of safe exploration, it is first needed to define what exactly is the safety we want to maintain. Unfortunately, there is no unified definition that would satisfy all use cases; thus, several different approaches are found in the literature. An intuitive (but vague) definition could be e.g.: \"Statespace exploration is considered safe if it doesn't lead the agent to unrecoverable and unwanted states.\" It is worth noticing here that unwanted doesn't necessarily mean low-reward. In the next subsections we present the main interpretations of this vague definition. \n Safety through labeling The largely most used definition of safety is labeling the states/actions with one of several labels indicating the level of safety in that state/action. What varies from author to author is the number and names of these labels. To start with, Hans [14] has the most granular division of state/action space. His definitions are as follows (slightly reformulated): an (s, a, r, s ) tuple (transition) is fatal if the reward r is less than a certain threshold (s is the original state, a is an action and s is the state obtained after executing a in state s, yielding the reward r), an action a is fatal in state s if there is non-zero probability of leading to a fatal transition, state s is called supercritical if there exists no policy that would guarantee no fatal transition occurs when the agent starts in state s, Since we will compare other definitions the the Hans', it is needed to define one more category. A state s is called fatal if it is an undesired or unrecoverable state, e.g. if the robot is considered broken in that state. The fatal transition can then be redefined as a transition ending in a fatal state. Opposite to the precisely defined terms in Hans' definition, the meaning of words \"undesired\" and \"unrecoverable\" here is vague and strongly task-dependent. - Continuing on, Geibel [12] defines only two categories -fatal and goal states. \"Fatal states are terminal states. This means, that the existence of the agent ends when it reaches a fatal state\" [12] . This roughly corresponds to our defined set of fatal states. Goal states are the rest of final states that correspond to successful termination. Since Geibel only considers terminal states for safety, his goal states correspond to a subset of safe states. The other Hans' categories need not be represented, since they are meaningless for final states. An extension of Geibel's fatal and goal states is a division presented by García [10] . His error and non-error states correspond to fatal and goal states, but García adds another division of the space -the known and unknown states, where known states are those already visited (and known have empty intersection with error ). 
He then mentions a prerequisite on the MDP that if an action leads to a known error /non-error state, then its slight modification must also lead to an error /non-error state (a metric over the state space is required). In Ertle's work [9] , again the two basic regions are considered -they are called desired and hazardous (corresponding to safe and fatal). However, due to the used learning technique, one more region emerges -the undesired region. It contains the whole hazardous region and a \"small span\" comprising of desired states, and denotes the set of states where no training (safe) samples are available, because it would be dangerous to acquire those samples. In particular, he says that \"The hazards must be 'encircled' by the indications of the undesired approaching so that it becomes clear which area [. . . ] is undesired\" [9] . A summary of the labeling-based definitions is shown in Figure 3 . We examined the apparent imbalance between the number of categories Hans defines, and the other definitions, and that led us to the following observations. The first observation is that creating labels for actions or transitions is unnecessary. If we need to talk about the \"level of safety\" of an action, we can use the worst label out of all possible results of that action (which retains compatibility with Hans' definitions). Moreover, as \"it is impossible to completely avoid error states\" [22] , we can ignore the effects of the action which have only small probability (lower than a safety threshold) -we will call such effects the negligible effects. A second remark is that the fatal and supercritical sets can be merged. In Hans' work we haven't found any situation where distinguishing between supercritical and fatal would bring any benefit. Specifically, in his work Hans states that: \"Our objective is to never observe supercritical states\" [14] , which effectively involves avoiding fatal transitions, too. And since we avoid both supercritical and fatal, we can as well avoid their union. Third, safety of a state does not necessarily depend on the reward for getting to that state. E.g. when the UGV performs a victim detection task, going away from the target area may be perfectly safe, but the reward for such action should be small or even negative. Putting these observations together, we propose a novelty definition of safety for stochastic MDPs, which is a simplification of Hans' model and a generalization of the other models: -A state is unsafe if it means the agent is damaged/destroyed/stuck. . . or it is highly probable that it will get to such state regardless of further actions taken. -A state is critical if there is a not negligible action leading to an unsafe state from it. -A state is safe if no available action leads to an unsafe state (however, there may be an action leading to a critical state). To illustrate the definition on a real example, please refer to Figure 2 . In 2(a), the UGV is in a safe state, because all actions it can take lead again to safe states (supposing that actions for movement do not move the robot for more than a few centimeters). On the other hand, the robot as depicted in 2(b) is in a critical state, because going forward would make the robot fall over and break. If the robot executed action \"go forward\" once more, it would come to an unsafe state. 
Right after executing the action it would still not be broken; however, it would start falling and that is unsafe, because it is not equipped to withstand such fall and therefore it is almost sure it will break when it meets the ground. \n Safety through ergodicity An MDP is called ergodic iff for every state there exists a policy that gets the agent to any other state [20] . In other words, every mistake can be remedied in such MDP. Moldovan [20] then defines δ-safe policies as policies guaranteeing that from any state the agent can get to the starting state with probability at least δ (using a return policy, which is different from the δ-safe one). Stated this way, the safety constraint may seem intractable, or at least impracticalit is even proved, that expressing the set of δ-safe policies is NP-hard [20] . An approximation of the constraint can be expressed in the terms of two other MDP problems which are easily solved [20] ; that still leads to δ-safe policies, but the exploration performance may be suboptimal. In our view, safety through ergodicity imposes too much constraints on the problems the agent can learn. It sometimes happens that a robot has to learn some task after which it is not able to return to the initial state (e.g. drive down a hill it cannot go upwards; a human operator then carries the robot back to the starting position). But the inability to \"return home\" in no means indicates the robot is in an unsafe state. \n Safety through costs Another definition of safety is to define a cost for taking an action/being in a state and minimize the worst-case cost of the generated policies (up to some failure probability). Such approach is presented in [15] . However, unless a threshold is set, this definition leads only to the safest possible policies, which are not necessarily safe. Expressing the safety using costs is natural for some RL tasks (e.g. when learning the function of a dynamic controller of an engine, the engine's temperature can be treated as a cost). Unfortunately, not all unsafe states can be described using such costs in general. In addition, specifying the right costs may be a difficult task. \n Safety as variance of the expected return An alternative to safety as minimization of a cost (either worst-case or expected) is minimizing both the cost and its variance. This approach is called expected value-variance criterion [15] and is used mainly in works prior 2000, e.g. [7] . A safe policy by this criterion can be viewed as a policy that minimizes the number of critical actions (because fatal transitions are expected to yield much larger costs than safe transitions, increasing the variance significantly). As stated in [10] , the worst-case approach is too restrictive and cautious. The other expected value-variance criteria suffer from the same disadvantages as safety through costs -mainly from the general difficulty to tune up the costs. \n Safe exploration approaches Finally, when the theoretical concepts have been shown and the various safety definitions have been presented, we can focus on the main part of this overview. Our categorization of safe exploration techniques is based on the work of García [10] . The basic division is as follows: approaches utilizing the expected return or its variance (Sec. 4.1), labeling-based approaches (Sec. 4.2) and approaches benefiting from prior knowledge (Sec. 4.3). \n Optimal control approaches Techniques in this category utilize variations of the expected value-variance safety criterion. 
The most basic one is treating the rewards as costs (when a reward is denoted by r_t, the corresponding cost is denoted by c_t). Standard RL methods can then be used to solve the safe exploration task, as described e.g. in [7] for the discounted infinite horizon. The RL objective function
J = E\left[\sum_{t=0}^{\infty} \gamma^t c_t\right]   (5)
is called the risk-neutral objective. To make this objective risk-sensitive, we specify a risk factor α and rewrite the objective as [15]
J = \frac{1}{\alpha} \log E\left[\exp\left(\alpha \sum_{t=0}^{\infty} \gamma^t c_t\right)\right] \approx E\left[\sum_{t=0}^{\infty} \gamma^t c_t\right] + \frac{\alpha}{2} \mathrm{Var}\left[\sum_{t=0}^{\infty} \gamma^t c_t\right],   (6)
which is also called the expected value-variance criterion. This approach is part of the theory of exponential utility functions, which is popular in optimal control [19]. To complete this section, the worst-case objective function (also called the minimax objective) is defined as
J = \sup \sum_{t=0}^{\infty} \gamma^t c_t.   (7)
As can be seen, the objective functions containing expectations cannot in fact assure that no unsafe state will be encountered. On the other hand, the minimax objective provides absolute certainty of safety. However, it may happen that some of the unsafe states can only be reached with a negligible probability. In such cases, the α-value criterion defined by [15] can be used - it only takes into account rewards that can be reached with probability greater than α. In the work of Mihatsch [19], a scheme is presented that allows one to \"interpolate\" between risk-neutral and worst-case behavior by changing a single parameter. Delage's work [8] takes into account the uncertainty of the parameters of the MDP. It is often the case that the parameters of the MDP are only estimated from a limited number of samples, causing parameter uncertainty. He then proposes that the agent may \"invest\" some cost to lower this uncertainty (by receiving observations from sources other than exploration). A completely new research question then appears - to decide whether it is more valuable to pay the cost for observations, or to perform exploration by itself. An approximation scheme for dealing with transition matrix uncertainty is presented in [21]. It considers a robust MDP problem and provides a worst-case, but also robust, policy (with respect to the transition matrix uncertainty). A theory generalizing these approaches can be found in [24]. The theory states that the optimal control decision is based on three terms - the deterministic, cautionary and probing terms. The deterministic term assumes the model is perfect and attempts to control for the best performance. Clearly, this may lead to disaster if the model is inaccurate. Adding a cautionary term yields a controller that considers the uncertainty in the model and chooses a control for the best expected performance. Finally, if the system learns while it is operating, there may be some benefit to choosing controls that are suboptimal and/or risky in order to obtain better data for the model and ultimately achieve better long-term performance. The addition of the probing term does this and gives a controller that yields the best long-term performance. [24] To conclude this section, we think that these methods are not well suited for safe exploration - the expected value-variance and similar criteria provide no guarantees of actual safety; on the other hand, the worst-case approaches seem to be too strict.
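To make the three objectives concrete, the following sketch estimates them from sampled cost trajectories; the names are ours, and the empirical maximum only approximates the supremum in equation 7. Note that for small α the exponential-utility objective expands to the mean plus α/2 times the variance of the return, which is exactly the expected value-variance reading of equation 6.

import numpy as np

def risk_objectives(cost_trajectories, gamma, alpha):
    # Empirical risk-neutral (5), exponential-utility (6) and worst-case (7)
    # objectives over a set of sampled cost trajectories.
    returns = np.array([sum(gamma ** t * c for t, c in enumerate(traj))
                        for traj in cost_trajectories])
    risk_neutral = returns.mean()
    risk_sensitive = np.log(np.mean(np.exp(alpha * returns))) / alpha
    worst_case = returns.max()
    return risk_neutral, risk_sensitive, worst_case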
\n Labeling-based approaches The approaches utilizing some kind of state/action labeling (refer to Section 3.1 for the various labeling types) usually make use of two basic components -a risk function and a backup policy. The task of the safety function is to estimate the safety of a state or action. In the simplest case, the safety function can just provide the labeling of the given action; or it can return a likelihood that the action is safe; and in the best case, it would answer with a likelihood to be safe plus a variance (certainty) of its answer. The backup policy is a policy that is able to lead the agent out of the critical states back to the safe area. It is not obvious how to get such a policy, but the authors show some ways how to get one. In the work of Hans [14] , the most granular labeling is used, where fatal transitions are said to be the transitions with reward less than a given threshold. The safety function is learned during the exploration by collecting the so-called min-reward samples -this is the minimum reward ever obtained for executing a particular action in a particular state. The backup policy is then told to either exist naturally (e.g. a known safe, but suboptimal controller), or it can also be learned. To learn the backup policy, an RL task with altered Bellman equations is used: Q * min (s, a) = max s min R(s, a, s ), max a Q * min (s , a ) . A policy derived from the computed Q * min function is then taken as the backup policy (as it maximizes the minimum reward obtained, and the fatal transitions are defined by low reward). He defines a policy to be safe, if it executes only safe actions in safe states and produces non-fatal transitions in critical states. To learn such safe policy, he then suggests a level-based exploration scheme (although he gives no proofs why it should be better than any other exploration scheme). This scheme is based on the idea that it is better to be always near the known safe space when exploring. All unknown actions from one \"level\" are explored, and their resulting states are queued to the next \"level\". For exploration of unknown actions he proposes that the action should be considered critical until proved otherwise, so the exploration scheme uses the backup policy after every unknown action execution. A disadvantage of this approach is that the agent needs some kind of \"path planning\" to be able to get to the queued states and continue exploration from them. García's PI-SRL algorithm [10] is a way to safeguard the classical policy iteration algorithm. Since the labels error /non-error are only for final states, the risk function here is extended by a so called Case-based memory, which is in short a constant-sized memory for storing the historical (s, a, V(s)) samples and is able to find nearest neighbors for a given query (using e.g. the Euclidean distance). In addition to the error and non-error states, he adds the definition of known and unknown states, where known states are those that have a neighbor in the case-based memory closer than a threshold. A safe policy is then said to be a policy that always leads to known non-error final states. To find such policy, the policy iteration is initialized with the safe backup policy and exploration is done via adding a small amount of Gaussian noise to the actions. This approach is suitable for continuous state-and action-spaces. Another approach is presented in the work of Geibel [12] , where the risk and objective functions are treated separately. 
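Returning to Hans' backup policy above, the altered Bellman backup can be sketched for a finite MDP as the following fixed-point iteration. This is our interpretation rather than Hans' implementation: in particular, we read the backup as taking the worst case over successor states reachable with non-zero probability, which matches the stated goal of maximizing the minimum reward ever obtained.

import numpy as np

def min_reward_q(P, R, n_iters=200):
    # Q_min(s, a): worst, over reachable successors s', of min(immediate reward
    # R(s, a, s'), best continuation max_a' Q_min(s', a')).
    # The backup policy is then argmax_a Q_min(s, a).
    S, A, _ = P.shape
    Q_min = np.full((S, A), np.inf)
    for _ in range(n_iters):
        cont = Q_min.max(axis=1)                      # best continuation per state
        for s in range(S):
            for a in range(A):
                reachable = np.nonzero(P[s, a] > 0)[0]
                Q_min[s, a] = min(min(R[s, a, sp], cont[sp]) for sp in reachable)
    return Q_min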
So the risk function only classifies the states (again only final states) as either fatal or goal, and the risk of a policy (risk function) is then computed as the expected risk following the policy (where fatal states have risk 1 and goal states have risk 0). The task is then said to be to maximize the objective function (e.g. discounted infinite horizon) w.r.t. the condition that the risk of the considered policies is less than a safety threshold. The optimization itself is done using modified Q-learning, and the optimized objective function is a linear combination of the original objective function and the risk function. By changing the weights in the linear combination the algorithm can be controlled to behave more safely or in a more risk-neutral way. A generalization of Geibel's idea to take the risk and reward functions separately can be found in the work of Kim [18] . In this work, the constrained RL task is treated as a Constrained MDP and the algorithm CBEETLE for solving the Constrained MDPs is shown. The advantage of this work is that it allows for several independent risk (cost) functions and doesn't need to convert them to the same scale. A similar approach of using constrained MDP to solve the problem can be found in the work of Moldovan [20] . He does, however, use the ergodicity condition to tell safe and unsafe states apart (that is, safe are only those states from which the agent can get back to the initial state). Moreover, this approach is only shown to work for toy examples like the grid world with only several thousands of discrete states, which may not be sufficient for real robotics tasks. The idea of having several risk functions is further developed by Ertle [9] . The agent is told to have several behaviors and a separate safety function is learned for each behavior. This approach allows for modularity and sharing of the learned safety functions among different types of agents. More details on this work will be provided in the next section, because it belongs to learning with teachers. An approach slightly different from the previously mentioned in this section is using the methods of reachability analysis to solve safe exploration. Gillula in his work [13] defines a set of keep-out states (corresponding to unsafe in our labeling) and then a set called P re(τ ) is defined as a set of all states from which it is possible to get to a keep-out state in less than τ steps. Reachability analysis is used to compute the P re(τ ) set. Safe states are then all states not in P re(τ ) for a desired τ . This approach, however, doesn't utilize reinforcement learning, it computes the optimal policy using standard supervised learning methods with one additional constraint -that the system must use safe actions near the P re(τ ) set. On the other hand, the system is free to use whatever action desired when it is not near P re(τ ). As was presented in this section, the labeling-based approaches provide a number of different ways to reach safety in exploration. They are, however, limited in several ways -some of them make use of the (usually hard-to-obtain) transition matrix, the others may need to visit the unsafe states in order to learn how to avoid them, or need the state-space to be metric. \n Approaches benefiting from prior knowledge The last large group of safe exploration techniques are the ones benefiting from various kinds of prior knowledge (other than the parameters of the MDP). 
We consider this group the most promising for safe exploration, because "it is impossible to avoid undesirable situations in high-risk environments without a certain amount of prior knowledge about the task" [10]. The first way to incorporate prior knowledge into exploration is to initialize the search with it. In fact, several works already mentioned in previous sections use prior knowledge - namely the approaches with a backup policy (Hans [14], García [10]). García also suggests that the initial estimate of the value function can be obtained from prior knowledge, which results in much faster convergence (since the agent no longer has to explore truly random actions, the value function estimate already "leads it" the right way) [10]. Another option for incorporating prior knowledge is to use Learning from Demonstration (LfD) methods. Due to limited space, we will not cover the basics of LfD - a good overview of state-of-the-art methods can be found, for example, in [2]. For our overview, it is sufficient to state that LfD methods can derive a policy from a set of demonstrations provided by a teacher. Importantly, the teacher does not necessarily have to have the same geometrical and physical properties as the trainee (although it helps the process if it does). It is therefore possible to use LfD to teach a 5-joint arm to play tennis while using a 3-joint human arm as the source of demonstrations (but the learned policy may be suboptimal; RL should then be used to optimize it). In Apprenticeship Learning [1], the reward function is learned using LfD. A human pilot flies a helicopter as well as he can, and both the system dynamics and the reward function are learned from the demonstrations. It is, however, apparent that the performance of the agent is no longer objectively optimal; it depends on the abilities of the human pilot. Another way of incorporating prior knowledge into the learning process is to manually select which demonstrations will be provided, as in the work of Ertle [9]. There it is suggested that more teacher demonstrations should come from the areas near the unsafe set, in order to teach the agent precisely where the border between safe and unsafe lies. The last technique described in our overview is interleaving autonomous exploration with teacher demonstrations. As in the previous case, some teacher demonstrations are provided in advance, and the exploration phase then starts from the teacher-provided information. After some time, or in states very different from all other known states, the agent requests more examples from the teacher [2, 5]. The idea behind this scheme is that it is impossible to anticipate in advance all the demonstrations the agent will need in order to learn the optimal policy. To close this section, the algorithms utilizing prior knowledge seem to be the most promising of all the presented approaches. They provide both a speedup of the learning process (by discarding the low-reward areas) and a reasonable way to specify the safety conditions (via LfD or interleaving). \n Conclusion In our work we have given a short introduction to the basics of Markov Decision Processes as well as basic reinforcement learning methods such as Value Iteration, Q-learning and Policy Iteration. In Section 3 we have summarized many recent approaches to defining safety in the framework of optimal control and reinforcement learning.
We have also proposed a novel definition of safety, which divides the state space into safe, critical and unsafe states. We have shown that all other labeling-based safety definitions are covered by our new definition. In Section 4, many different safe exploration methods are categorized into three basic groups - algorithms from optimal control theory, reinforcement learning algorithms based on state labeling, and algorithms utilizing extra prior knowledge. We have briefly summarized the advantages and disadvantages of the particular approaches. We have also argued that, at least for difficult real-world problems, safe exploration without prior knowledge is practically impossible, and that prior knowledge almost always helps to achieve faster convergence. Another observation has been that some of the safe exploration algorithms need to visit unsafe states in order to classify them correctly later, which may rule them out for usage scenarios where the unsafe states are truly fatal. It seems to us that the field of safe exploration in reinforcement learning is very fragmented and lacks an all-embracing theory. The question, however, is whether such a theory can be found at all - the main obstacle may be the fragmentation of, and differences among, the various RL methods themselves. At the least, the safe exploration community would benefit from a unification of terminology (and our proposed safety labeling is intended to help with that). Other possible directions for future research include the following. New ways of incorporating prior knowledge into methods that do not yet utilize it could bring interesting speed-ups to those algorithms. There is also a bottleneck in estimating the outcomes of unknown actions - more advanced function approximation methods should be explored (we aim to investigate Gaussian processes for this purpose). There are not enough experiments in difficult continuous real-world environments that would show, for example, how large a problem can be solved using safe exploration. Interleaved learning needs guidelines on how to cluster the queries for the teacher into larger "packs" and "ask" them together, possibly increasing the fully autonomous operating time. Last but not least, the possibility of sharing learned safety functions among different kinds of robots seems to be an unexplored area with many practical applications (perhaps robot-to-robot LfD could be used). \n Fig. 1. NIFTi UGV robotic platform. \n Fig. 2. An illustration of safe and critical states: (a) a safe state; (b) a critical state - if the robot kept going forward, it would fall down and probably break. \n Fig. 3. The proposed labeling: action a is supercritical in state s if it can lead to a supercritical state; state s is called critical if there is a supercritical or fatal action in that state (and the state itself is not supercritical); action a is critical in state s if it leads to a critical state (and the action itself is neither supercritical nor fatal in s); state s is called safe if it is neither critical nor supercritical; action a is safe in state s if it is neither critical, nor supercritical, nor fatal in state s; and finally, a policy is safe if for all critical states it leads to a safe state in a finite number of non-fatal transitions (and if it only executes safe actions in safe states).", "date_published": "n/a", "url": "n/a", "filename": "safe_exploration_overview_lncs.tei.xml", "abstract": "We overview different approaches to safety in (semi)autonomous robotics.
Particularly, we focus on how to achieve safe behavior of a robot if it is requested to perform exploration of unknown states. Presented methods are studied from the viewpoint of reinforcement learning, a partially-supervised machine learning method. To collect training data for this algorithm, the robot is required to freely explore the state space -which can lead to possibly dangerous situations. The role of safe exploration is to provide a framework allowing exploration while preserving safety. The examined methods range from simple algorithms to sophisticated methods based on previous experience or state prediction. Our overview also addresses the issues of how to define safety in the real-world applications (apparently absolute safety is unachievable in the continuous and random real world). In the conclusion we also suggest several ways that are worth researching more thoroughly.", "id": "75e471b2e0efe9f96f9cd113ccc8827a"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "Feasibility of Training an AGI using Deep Reinforcement Learning, A Very Rough Estimate.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "Meaning of life.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Owain Evans", "William Saunders", "Andreas Stuhlmüller"], "title": "Machine Learning Projects for Iterated Distillation and Amplification", "text": "Iterated Distillation and Amplification (IDA) is a framework for training models from data [1, 2] . IDA is related to and builds on existing frameworks like supervised learning, imitation learning, and reinforcement learning. It is intended for tasks where: 1. The goal is to outperform humans at the task or to solve hard instances. 2. It is not feasible to provide demonstrations or reward signals for superhuman performance at the task. 1 3. Humans have some high-level understanding of the task and can also provide demonstrations or reward signals for easy instances of the task. The idea behind IDA is to bootstrap using an approach similar to Alp-haZero [3] , but with a learned model of human reasoning steps taking the place of the fixed game simulator. We will explain IDA in abstract terms and then describe concrete examples. For broader discussion of IDA, including its relevance to the value alignment problem, see [1, 2, 4, 5, 6, 7] . \n Technical description of IDA We consider the following learning problem: We want to train a model (e.g. a neural net) to solve tasks from the set T , where T contains a series of tasks that get progressively harder for humans to solve. Formally, let T = N i=0 T i , where tasks in T n are harder than tasks in T n−1 for all n. We are given a training set which includes solutions to the easiest class of problems T 0 and human demonstrations of decomposing tasks T n into finitely many slightly easier tasks in T n−1 . In IDA, the initial training steps are: 1. M is trained by supervised learning 2 to reproduce the answers to the easiest tasks T 0 . 2. M is trained by supervised learning to imitate the human demonstrations for decomposing a task x ∈ T n into a set of tasks in T n−1 and then aggregating the solutions to solve x. 
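To make these two kinds of initial training data concrete before going on, here is one possible representation. The class names and fields are our own illustrative choices (tasks, sub-tasks and aggregation instructions are kept as plain strings, consistent with the sequence-to-sequence framing mentioned in the footnotes); they are not a format prescribed by the IDA literature.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class AnswerExample:
        """A solved base-case task from T0 (step 1 of the initial training)."""
        task: str        # e.g. "5 * 6"
        solution: str    # e.g. "30"

    @dataclass
    class DecompositionExample:
        """A human demonstration of decomposing a task in T_n into slightly
        easier tasks in T_(n-1), plus how to aggregate their solutions (step 2)."""
        task: str                # e.g. "384 * 19"
        sub_tasks: List[str]     # e.g. ["384 * 10", "384 * 9"]
        aggregation: str         # e.g. "add the two sub-results"

    # One record of each type:
    base = AnswerExample(task="5 * 6", solution="30")
    demo = DecompositionExample(task="384 * 19",
                                sub_tasks=["384 * 10", "384 * 9"],
                                aggregation="add the two sub-results")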
Steps (1) and (2) are analogous to the two parts of a recursive algorithm: the base case and the recursive step. After initial training, M can solve tasks in T1 by first decomposing them into tasks in T0. M is then trained by supervised learning on its own solutions to tasks in T1, enabling M to solve T1 tasks directly (i.e. without decomposition). Solving a task in T1 directly involves a single call to M, while solving by decomposition into tasks in T0 requires M to be called on each of the T0 tasks. This process of "training on its own solutions" can be iterated. M is trained by supervised learning to directly solve increasingly hard tasks in T, where the target solutions (i.e. labels for supervised learning) are produced by M itself via decomposition into tasks M can already solve. If supervised training works perfectly at each iteration, then eventually M can solve any task in T directly (with only a single call to M). (This depends on the strong assumption that humans can decompose all tasks in T n into tasks in T n−1 - see [8, 11] for discussion.) We can now summarize IDA (see Figure 1). After training on steps (1) and (2), the "base case" and "recursive step", the following steps are repeated (for n > 0): \n • Amplification Step: M solves tasks in T n by decomposing them into tasks in T n−1, which it solves directly (without decomposition). \n • Distillation Step: M is trained by supervised learning to solve tasks in T n directly, with target solutions coming from the Amplification Step. \n It is called the "Amplification Step" because it amplifies the capability of model M. While M can only solve tasks in T n−1 directly, M can be used (via decomposition and aggregation) to solve tasks in T n. In the "Distillation Step", the slower amplified model (which makes multiple calls to M) is distilled into a faster process with a single call to M. This is like distillation for neural nets [12], where a large net (or ensemble of nets) is "distilled" into a smaller net that tries to capture the behavior of the large net. In general, it is unlikely that distillation of the slower process will be perfect. This can be addressed using RL-based distillation or by selectively choosing when to use fast M directly and when to fall back to the amplified model (see Project 3). (Figure 1 diagrams these first few amplify and distill steps over tasks in T0 through T3; its caption is reproduced with the figures near the end of this document.) To simplify the exposition, we have presented M as being trained to decompose tasks and learn all base case solutions before the iterative training by Amplification and Distillation begins. In practice, these processes (learning to decompose harder problems and gathering additional base case solutions from humans, and learning to solve harder problems directly) would happen in parallel [8]. \n Examples Having given an outline of IDA, we will describe two toy examples of solving problems with IDA: integer multiplication and shortest path in graphs. 3 These examples are intended to build intuition for IDA. They are not themselves practically relevant use cases for IDA.
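Before the examples, here is a minimal sketch of the amplify/distill loop just summarized. Everything here is our own scaffolding - the callables decompose, aggregate and fit_supervised, and the per-level task containers, are hypothetical stand-ins - so it shows only the control flow, not the implementation of [8].

    def ida_training_loop(model, tasks_by_level, decompose, aggregate,
                          base_solutions, fit_supervised, n_levels):
        """model(x) -> proposed solution for task x (single call, the "fast" process)
        decompose(x) -> list of slightly easier sub-tasks for x
        aggregate(x, sub_solutions) -> solution of x assembled from sub-solutions
        base_solutions: dict of human-provided answers for the easiest tasks T0
        fit_supervised(model, dataset) -> model trained on (task, solution) pairs."""
        # Base case: train M directly on the easiest tasks.
        model = fit_supervised(model, list(base_solutions.items()))
        for n in range(1, n_levels):
            amplified_dataset = []
            for x in tasks_by_level[n]:
                # Amplification: solve x with multiple calls to M via decomposition.
                sub_solutions = [model(sub) for sub in decompose(x)]
                amplified_dataset.append((x, aggregate(x, sub_solutions)))
            # Distillation: train M to produce these solutions in a single call.
            model = fit_supervised(model, amplified_dataset)
        return model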
\n Example 1: Multiplication This toy example shows how one would use IDA to train a neural net M to multiply large integers. We assume that M has been pre-trained to add large integers. Following the pattern in the previous section, the initial training set contains (1) simple multiplications (base case), and (2) demonstrations of decomposing multiplications into simpler multiplications (recursive step). M is trained as follows: 1. Train M on simple multiplications: single-digit multiplication and multiplication by 10. For example: 5 × 6 = 30, 10 × 234 = 2340, 9 × 8 = 72. Recursing all the way to the base case would require many calls to the neural net M . Instead, M could be trained incrementally using the IDA scheme described above. If this was successful, M could eventually compute 384 × 19 or 79332 × 2927 in a single call, without the need to ever instantiate the fully expanded slow process for large problems during training. This is a key property of IDA, since recursive decompositions can (in general) result in a number of calls to M that scales exponentially in the problem size, and so would be infeasible to instantiate explicitly. This example shows that IDA can easily fail if we use a standard neural net as our model M . For example, a small MLP is not capable of learning to multiply large integers in a single call. This is the case even for training the MLP by supervised learning on ground-truth examples, e.g. training on pairs ((m, n) , m × n) for m, n < 10 6 . Training by IDA will generally make learning more difficult, because M will be trained by supervised learning on its own answers to multiplication problems (instead of ground-truth answers). \n Train This example also shows that IDA can fail regardless of the model M . It is not generally possible to distill an exponential tree of calls to M into a single call to M . However, there are many AI problems where research aims to better approximate an exponential-time computation. AlphaZero uses an IDA-like algorithm to distill an exponential game-tree expansion into a feedforward neural net. The neural net does not distill perfect play for Go or chess, but it achieves impressive performance relative to humans (and formidable performance when combined with MCTS). \n Example 2: Shortest path in a graph A recent paper by Christiano et al. [8] implements IDA and applies it to discrete algorithms problems including union find, wildcard search, and shortest path. To apply IDA to finding the shortest path between nodes s and t in a directed graph, we need an initial training set that covers the base case and recursive step. These include: 1. A dataset of solutions to the easiest shortest path problems (for which nodes s and t are adjacent). 2. A set of demonstrations of decomposing shortest path problems into smaller shortest path problems and aggregating the results. The decomposition in ( 2 ) is similar 4 to the following recursion, which is also used in the dynamic programming algorithm for shortest path: min-dist(s, t) = min ({min-dist(s, x) + dist(x, t) | x adjacent to t}) Here \"min-dist(s, t)\" is the minimum path length between nodes s and t, and \"dist\" is the distance between adjacent nodes. Christiano et al. use a Transformer model [13] as their model M and they compare two ways of training the Transformer to compute the shortest path. The first approach is the IDA model just described, which only gets labeled examples for the smallest shortest path problems and must bootstrap to solve larger problems. 
The second approach is regular supervised learning, where the model gets a large set of labeled examples of all sizes. The main result is that the IDA approach is successful in getting close to the performance of the supervised model. While this is a toy example, it illustrates the general aim for IDA, which is training a model for tasks where (a) there are only labels/demonstrations for easy problems, and (b) humans can provide decompositions for going from harder to easier problems. \n Related Work As noted above, there are various discussions of IDA and its relevance to AI alignment [1, 2, 6, 7] . These discussions are valuable as background but they mostly abstract away the practical details of implementing an IDA system using ML. The only paper that actually implements IDA is the aforementioned Christiano et al. However, the AlphaZero algorithm [3] is very similar to IDA [1] and work applying AlphaZero (and related algorithms AlphaGo [14] and Expert Iteration [15] ) to chess/Go and to graph coloring [16] are relevant to thinking about IDA experiments. IDA depends on bootstrapping and function approximation, which are core topics in reinforcement learning [17, 18] . Recent work on Deep Q-learning is especially relevant [19] . For IDA to help address value alignment problems for advanced ML systems, it would likely need to apply to tasks (and use decompositions) that involve sophisticated reasoning in natural language. There is currently no published research that is targeted at natural language tasks. Ought and OpenAI have conducted preliminary experiments in an IDA-like setting with humans in place of ML models. These experiments are aimed at shedding light on whether different incentive structures for IDA-like approaches (e.g. different objective functions and reward signals) lead to aligned behavior. See examples from Ought [20] and OpenAI [21] and stay in touch with Ought 5 for the latest information on experiments. In the two toy examples above, IDA trains a model to imitate behavior. This is an extension of supervised learning and imitation learning, and does not involve reinforcement learning. However, as mentioned above, IDA is compatible with RL [10, 9] . There is also a framework called \"Debate\", which is closely related to IDA and draws on self-play reinforcement learning as a method of bootstrapping. Irving et al. [22] introduces Debate, explores analogies to computational complexity theory, and describes links between Debate and IDA. \n Project Goals The rest of this document outlines three projects that could help clarify aspects of IDA. These projects aim to extend Christiano et al [8] in a few different ways: • The projects train a single model M to perform a wide range of types of decompositions. By contrast, Christiano et al. train a fresh model for each algorithmic problem (i.e. one for each of union find, shortest path, and wildcard search). The decompositions for these algorithmic problems have dynamic programming structure, so each model decomposes a problem instance into smaller instances of the same kind of problem (e.g. a shortest path problem is broken down into smaller shortest path problems). • The projects are well suited to investigating extrapolation and generalization out of distribution. Projects 1 and 2 build on prior work in ML which tests generalization performance in mathematics and neural programming tasks. Project 3 is focused on general approaches (adaptive computation time and calibration) to improving robust generalization. 
• In Christiano et al. the main goal is for a model trained by IDA to match the performance of a model trained by supervised learning. The IDA model has less labeled data and is trained by bootstrapping on its own labels. While this is one possible goal for our projects, we also consider the goal of improving test-time performance by making multiple calls to the model using amplification. Improving test-time performance by amplification is similar to the way AlphaZero uses MCTS during competitive matches to improve performance over the policy net. There are many other possible projects on IDA. Research projects that push in the following directions seem particularly valuable: • Produce theory and empirical knowledge about training IDA systems, analogous to knowledge of how to train supervised learning or reinforcement learning systems. • Connect IDA to existing work in ML. The projects below connect to language modeling, neural programming, calibration for neural networks, and adaptive computation time. IDA is also related to semi-supervised learning, deep reinforcement learning, dynamic programming, belief propagation, etc. A research project could explore and develop any of these connections. • Solve problems with IDA that can't be solved by other approaches. This is the ultimate goal of IDA, and would draw interest from the larger ML community. The projects below are not primarily aimed to achieve this, but they may provide a useful first step. Two ways in which IDA could solve problems that can't be solved by other approaches are: -By distilling large (i.e. exponential in problem size) trees to a fast machine learning model (as in AlphaZero). -By learning decomposition steps from human data. This is for domains (e.g. common-sense reasoning) where we are not able to write down an algorithm to decompose problems. 1 Project 1: Amplifying Mathematical Reasoning \n Motivation Decomposition is a fundamental strategy for solving problems in mathematics. Consider the following problem from high-school mathematics: g(y) = y-2 f(x) = x*g(x) + 3x^3 Find the derivative of f at x=1. We can solve the problem by decomposing it into sub-problems which only depend on parts of the whole problem. Here is one possible decomposition: Sub-problem 1: g(y) = y-2 f(x) = x*g(x) + 3x^3 Write f as a polynomial in x in standard form. Solution: f(x) = 3x^3 + x^2 -2x Sub-problem 2: f(x) = 3x^3 + x^2 -2x Differentiate f. Solution: f'(x) = 9x^2 + 2x -2 Sub-problem 3: f'(x) = 9x^2 + 2x -2 Compute f'(1) . Solution: 9 These sub-problems could themselves be decomposed: the first sub-problem decomposes to substituting the function g and expanding the resulting expression. \n Project Directions The aim of this project is to train a model via IDA to solve mathematics problems. Following Christiano et al [8] , we could use IDA at training time to bootstrap from a small labeled dataset. This is like a student learning math by solving problems from a textbook with no solutions at the back of the book. Another possible goal is to use amplification at test time to apply more compute to harder problems, and so achieve better test-time performance than a standard supervised model. There are many possible choices of mathematics problem, including: 1. Mathematical problems in a formal language (e.g. theorem proving [23] ). 2. High-school algebra (in a mix of natural language and math notation [24] ). 3. So-called \"word problems\", which are problems in natural language that need to be translated into math. 
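As a sanity check, the three sub-problems above can be carried out mechanically. The short script below is our own illustration - it uses SymPy as a stand-in for the model's answer to each sub-problem - and reproduces the intermediate solutions and the final answer of 9.

    import sympy as sp

    x = sp.symbols('x')
    g = lambda y: y - 2
    f = x * g(x) + 3 * x**3

    # Sub-problem 1: write f as a polynomial in x in standard form.
    f_poly = sp.expand(f)            # 3*x**3 + x**2 - 2*x
    # Sub-problem 2: differentiate f.
    f_prime = sp.diff(f_poly, x)     # 9*x**2 + 2*x - 2
    # Sub-problem 3: compute f'(1).
    answer = f_prime.subs(x, 1)      # 9

    print(f_poly, f_prime, answer)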
For example, many problems on the American GMAT or GRE exams [25] . 4. Advanced mathematics problems: competition or Olympiad math, universitylevel proof-based math [26] . Problems in (1) do not require working with natural language. Many such problems have a natural decomposition via brute-force search, similar to gametree search in Chess. These problems are a good testing ground for some aspects of IDA and we encourage research on them. However this document focuses on problems in ( 2 )-( 4 ) and especially (2) . Problems in ( 2 )-( 4 ) are natural language problems which don't necessarily have an obvious decomposition strategy. This makes them more similar to problems in areas outside mathematics such as science, philosophy, and common-sense reasoning. How can we tackle natural language mathematics problems in IDA? The main question is how to produce decompositions. There are two basic options: 1. Have human experts produce decompositions of the problems. \n Write an algorithm that solves problems by decomposition as in Christiano et al [8] . This algorithm is likely to resemble classical AI approaches (\"GOFAI\"). Using human data is the more general approach, as it applies outside mathematics. However, it is unlikely that the usual way people solve problems would yield the most useful decompositions. So, part of the research effort is to work out which kinds of decompositions help most and train human experts to produce them. For option (2) above, there are two kinds of algorithm for decomposition. The first kind is an efficient algorithm that solves all problems in the class of interest (as in the multiplication and shortest path examples). A neural net trained using IDA is unlikely to perform better than such an algorithm. Experiments with IDA would aim not at state-of-the-art performance but instead at investigating certain aspects of IDA. We discuss this in the next section. The second kind of algorithm solves problems in the class inefficiently and so can only solve small problems in practice. In this case, it is possible that IDA can achieve state-of-the-art performance by learning to distill decompositions into a much more efficient neural algorithm. However, for advanced mathematics problems in natural language, it is not clear what this inefficient algorithm would look like. 6 So there is a separate research project in investigating such algorithms. In the next section, we outline a project that uses high-school math problems in natural language. We suggest starting with algorithmically generated decompositions and later extending this to decompositions provided by humans. The problems are generated by an algorithm, so there is unlimited training data. The dataset includes test sets for both interpolation and extrapolation. The extrapolation questions have quantities that vary outside of the range encountered on the training set. \n IDA for High School Mathematics The paper includes baselines for models trained by supervised learning. The questions and answers are both represented as strings, so it can be treated as a sequence-to-sequence problem. The best performing model is a standard Transformer with 30M parameters, which achieves 76% model accuracy (probability of a correct answer) on interpolation and 50% on extrapolation. \n Approach with IDA How should one generate decompositions for training IDA to solve these math problems? One option is to write an algorithm for decomposing problems, rather than collecting human decompositions. 
The problems were generated using a compositional algorithm: consider the last example in the list above, which combines algebra (specifying an integer using equations) and number theory (checking if the integer is composite). This algorithm can be run in \"reverse\" to help generate decompositions. However, it is not obvious how close the resulting decompositions would be to decompositions that are a good fit for IDA training. (For instance, the best decompositions could be more or less fine-grained.) One objective would be to do better than the supervised baseline at test time by applying amplification (i.e. decomposing the problem and using multiple calls to the model M ). In particular, amplification promises to do better at extrapolation to bigger problems or problems with larger numerical quantities. Another objective is to train from a smaller number of labels using distillation and try to rival the performance of the supervised baseline. \n Non-Amplification Baselines The simplest baseline is supervised learning (as in Saxton et al [24] ). The math problems and solutions are represented as strings, and the model is trained to map strings to strings. Another baseline would make use of the same decomposition training data as IDA. Instead of training the model to decompose problems, we could use the decomposition as an auxiliary objective. The model would be trained to produce both the decomposition and the answer. An alternative approach is to train a model to take strings as input and then produce output suitable to be fed into a symbolic math system (e.g. SymPy). \n Questions to investigate The project would seek to investigate some of the following questions: • What is the test-time performance of applying amplification vs. a baseline that makes a single call to the model (distillation) vs. a baseline that was trained by supervised learning? • What is the performance on extrapolation and on off-distribution problems? • What is the performance of a distilled model trained by IDA (from a small number of labeled examples) vs. the supervised baseline (with a large labeled training set)? • How does performance vary with different kinds of decomposition strategies? • How robust is the amplified model to noise in the training data and to approximation error in the neural net? • Can we find a decomposition strategy that makes the model more robust to errors/noise? • When training a model to solve sub-problems, we need to pick some distribution over sub-problem examples. How does this impact performance? Are there principles for generating training data in the most useful way for IDA? • Using amplification at test time is one way to vary the amount of compute used to solve problems. Another approach is to use recurrent models and adaptive computation time. How does this compare to IDA? (See Project 3 for more discussion and references). • Can we train the model from decompositions provided by humans? What kind of decompositions should we use? Can these be augmented by algorithmically generated decompositions? 1.4 Related Work • Program Induction by Rationale Generation (Ling et al.) [25] Dataset: https://github.com/deepmind/AQuA This paper introduces a dataset of mathematical word problems (based on US standardized tests). Humans had to \"show their work\" while solving the problems. The paper has a model that learns to generate these intermediate steps (in addition to learning to solving the problems). 
• Sigma Dolphin microsoft.com/en-us/research/project/sigmadolphin-2/ Dataset of natural language math problems, taken from Yahoo Answers. • HOList: An Environment for Machine Learning of Higher-Order Theorem Proving (Bansal et al.) [23] . This dataset contains fully formalized proofs for a large number of theorems and a framework for training ML systems to produce proofs. 2 Project 2: IDA for Neural Program Interpretation \n Motivation Many algorithms (e.g. matrix arithmetic, shortest path, sorting) involve decomposing problems into progressively smaller problems and then aggregating results. More generally, computer programs in high-level languages decompose tasks into progressively smaller tasks, and ultimately into the primitive operations of the language. We can explore the capabilities of IDA by training a model on the decompositions used in the execution of computer programs. Training IDA on decompositions from programs is closely related to research on Neural Program Interpretation (see Reed and de Freitas [28] and related work below) or \"NPI\". In NPI, a model is trained to mimic the internal behavior of an algorithm and not just its input-output behavior. Moreover, the model trains not just on the primitive operations of the algorithm but on its hierarchical decomposition (i.e. the way procedures call other procedures). As with IDA, one motivation for learning this internal behavior is to achieve stronger generalization. Another motivation is to integrate hierarchical discrete computation with the sort of pattern recognition in high-dimensional spaces enabled by machine learning (see third experiment in [28] ). \n Project Directions The project could focus on either of the following areas: \n Distillation of programs In contrast to most work on NPI, the goal of IDA experiments could be to use program decompositions as training data to learn more efficient \"neural\" programs. The idea is to distill elaborate computations into a single call to a neural net, or to combine the exact (slow) computation with distillation (as in AlphaZero). 2. NPI within a more general framework for learning from decompositions Much of the NPI work uses environments and architectures designed specially for NPI. For IDA, we would aim to replicate NPI results, but in a framework that would also allow learning from human decompositions in natural language. We expect distillation of programs to be challenging. If we take a complicated algorithm and try to distill it into a neural net, there is likely to be approximation error-and errors will usually be larger for inputs that are off the training distribution. This will cause problems both during IDA's iterative training procedure and also at test time. Part of the project would be to investigate how well distillation works for different kinds of programs and for different ways of organizing the training curriculum. If the distilled model was calibrated, then IDA could recognize when distillation was likely to fail and fall back on using amplification. Project 3 (below) explores this \"adaptive computation\" or \"meta-reasoning\" approach. \n Decision Points There are many choices to make in devising IDA experiments in the NPI setting: • Which programs should we try to learn? The research on NPI has focused on basic algorithms for tasks like integer arithmetic and sorting. Applying IDA to these basic algorithms could be a good starting point, as they allow comparison to existing work. 
However, it isn't clear how much experience with these algorithms would generalize to other applications of IDA. It could be good to consider algorithms that work with databases or knowledge bases, or to consider algorithms that operate on human-readable structures like natural languages or images. We also think that pure functional programming is a better fit for IDA than imperative programming. See [11, 29] for relevant discussion. • What kind of built-in operations and environments should we use? In existing work on NPI, the neural net is given outputs that correspond to basic operations on data. This makes it easier to learn algorithms that depend on those basic operations. For IDA, it would be ideal to learn these operations from examples. (If we were learning from human decompositions, we might not know about these \"basic operations on data\" ahead of time). • What kind of performance objective should we focus on? Some work on NPI has focused on getting perfect performance on narrow algorithmic tasks. It's not clear if this is the right objective for IDA. We might care about (a) generalizing well to much larger inputs most of the time (but not in the worst case), (b) being robust to distribution shift, and (c) having one neural net learn a wide variety of algorithms. \n Non-Amplification Baselines For learning to predict the next step of a program from examples, simpler ML methods (random forests, logistic regression, etc.) may perform better than neural networks. For performing the same task as a traditional program, any neural program interpreter working with polynomial-sized trees will add significant overhead and so is unlikely to *improve* performance. One could also consider tasks that require ML to process the inputs (e.g. tasks involving images). The baseline in this case would be a program that defers some decisions to a classifier. \n Related Work Neural Programmer-Interpreters (Reed and de Freitas) [28] A quote from the introduction about the motivation for the paper: \"We may envision two approaches to providing supervision. In one, we provide a very large number of labeled examples, as in object recognition, speech and machine translation. In the other, the approach followed in this paper, the aim is to provide far fewer labeled examples, but where the labels contain richer information allowing the model to learn compositional structure. While unsupervised and reinforcement learning play important roles in perception and motor control, other cognitive abilities are possible thanks to rich supervision and curriculum learning. This is indeed the reason for sending our children to school.\" Summary of the approach: • The model (called the \"neural programmer-interpreter\") has a single inference core for executing three different programs (addition, sorting, rotating CAD models). So one set of LSTM parameters are used to execute all programs. However, the different programs are stored as different embeddings, stored in a learnable persistent memory. • The sequence of the model's actions depends on the environment state and action history. • For each program (addition, sorting, rotating CAD models), the model is given a specific environment and set of actions in that environment. The model is trained to compose these actions. For integer addition, there are 1-D arrays and read-only pointers (for reading inputs), as well as 2-D scratchpads and output arrays. For rotating CAD models, there's a CAD renderer with controllable elevation and azimuth movements. 
• The LSTM input of the previous computation step is a vector embedding, rather than text. Making Neural Programming Architectures Generalize via Recursion (Cai et al.) [30] Builds on Reed and de Freitas (above) and has no new machinery. They simply allow a function to call itself. This means it can solve instances of arbitrary size using recursive function calls, each of which have bounded length. This allows for generalization off the training distribution. The hidden state of the LSTM controller is reset (to zero) at each subprogram call, but the environment state is not reset. They learn the recursion termination condition. They achieve 100% generalization on all tasks (albeit for simple tasks). Parametrized Hierarchical Procedures for Neural Programming (Fox et al.) [31] Their model is related to IDA for neural programming (it learns PHPs that can recursively call other PHPs). Their tasks are limited to addition and to a building a tower in gridworld. They provide a \"weak supervision\" motivation: learn from a mix of traces that show what information should be remembered from previous states and also from traces that omit that information. Neural Program Lattices (Li et al.) [32] This paper has a \"weak supervision\" motivation: learn from mix of full traces and traces without program calls/arguments. They generalizes to 500-digit addition, but not to 1000-digit addition. Improving the Universality and Learnability of Neural Programmer-Interpreters with Combinator Abstraction (Xiao et al.) [33] Adds combinator abstraction from functional programming. One motivation is to use reinforcement learning to learn programs without supervision. The combinator makes the search space smaller. Adaptive Neural Compilation (Bunel et al.) [34] Takes programs, translates them into a differentiable form, then uses backpropagation to optimize them. They optimize programs to be more efficient on a restricted training distribution of problems. Recent Advances in Neural Program Synthesis (Kant) [35] This paper provides a summary of different approaches to NPI and Neural Program Synthesis. Learning Compositional Neural Programs with Recursive Tree Search and Planning (Pierrot et al.) [36] NPI approach that \"incorporates the strengths of Neural Programmer-Interpreters (NPI) and AlphaZero\". While relevant to IDA, this approach differs in important ways. 3 Project 3: Adaptive Computation \n Motivation An ML model exhibits \"adaptive computation\" if it intelligently varies its computations for different inputs. For example: 1. The model selects which type of computation to run: e.g. between a slow tree search, a large neural net, and a small neural net. 2. The model prioritizes possible computations: e.g. which node to expand next in a tree search. 3. The model determines how long to run a fixed computation: e.g. how many MCTS samples, how many steps to run an RNN, etc. A principled way to adapt computations is by \"meta-level control\" or \"metareasoning\". Meta-level control means applying ideas from optimal control and Bayesian decision theory to selecting computations. The idea is to treat the choice of computations as just another learning and planning problem. The choice of computations can be optimized using end-to-end supervised learning [37] , model-free reinforcement learning [38] , or Bayesian decision theory [39, 40, 41] . Adaptive computation and meta-level control are not a major focus in current deep learning research. 
Yet human cognition is often adaptive: people dynamically decide how much time to spend on a task based on its perceived difficulty. This is important when humans try to develop new ideas, which can require anything from hours to years of thinking [42, 43] . By contrast, many applications of ML have the following profile: 1. Training time: Large amounts of compute and time are permitted. For example, training might take months and use many CPUs and GPUs. \n Test/deployment time: There are very strong constraints on compute. For example, the trained model might be part of a web application and so must perform inference in a fraction of a second. Not all ML algorithms have this profile. When AlphaZero [3] plays competitive matches, it has time to make many calls to a neural net for each move. Alp-haZero uses MCTS to investigate promising moves from the current position. As ML is applied to more tasks for which humans spend a long time thinking (e.g. mathematics, science, business strategy), the profile of AlphaZero may become more common. IDA is well suited to adaptive computation. Training by IDA produces the following algorithms: • A quick but potentially inaccurate algorithm for solving problems, produced by distillation. This corresponds to a single call to the model M (using the terminology from Section 0.1). • An \"anytime\" algorithm for solving the same problems, which produces more accurate or reliable answers as a function of doing more compute. This is obtained by decomposing the problem using the learned decomposition strategy (\"Amplification\") and involves making multiple calls to M . \n Project Directions Adaptive computation for IDA can be applied either during the iterative training scheme or at test/deployment time: \n Training time The goal during training is to bootstrap by using amplification to solve progressively harder problems. Adaptive computation time could be applied to select how much computation to apply to a given problem in the training set. For example, if the distilled model already performs perfectly on some class of problems, there is no need to run the slow amplified process (with many calls to model M ) on this class. It's also possible to apply active learning -automatically selecting problems that are more informative about the objective. \n Test time The goal is to perform well on a set of test problems given a time budget. If the test set is received in batch, then the algorithm would ideally spend more of its time budget on the harder instances. This requires the algorithm to recognize harder instances that will benefit from more compute time (via amplification). There are also more fine-grained questions about how to use the compute budget. For example, when using amplification, which problems should be decomposed first and how far should the recursive decomposition go? \n IDA, Fast and Slow There are many ways to use adaptive computation as part of IDA. As a starting point, we describe a simple form of adaptive computation, where the algorithm decides between a fast (distilled) process for solving problems and a slow process that uses a fixed amount of amplification. The fast process makes a single call to a neural net, while the slow process makes at most n calls to the same net. For each input problem, the algorithm runs the fast process and has to decide whether to also run the slow process. The goal is to balance the greater accuracy of the slow process with a time/compute cost that is specified as part of the problem statement. 
This decision of whether to run the slow process could be trained end-toend, by always running both processes during training (as in [37] ). A different approach would be to use a calibrated model for the fast process and decide whether to call the slow process based on the confidence of this calibrated model. Which tasks would be a good testing ground for adaptive computation? The mathematics and neural programming tasks (Projects 1 and 2 above) are useful for testing purposes, because it's easy to vary the amount of computation that is necessary and sufficient for solving a problem. It's also valuable to explore tasks which have an \"anytime\" structure, where approximate solutions to the task can be improved with additional compute. This would include planning, natural language reasoning, chess, and strategic videogames. How would this project differ from Projects 1 and 2? The aim in those projects is to explore the performance of IDA on challenging tasks. It's not clear that any clever adaptive computation strategy is required to do well on these tasks (though we can't rule it out). In a related example, AlphaZero did not select the number of MCTS samples as a function of the board position, but instead used a fixed proportion of the remaining match time. Project 3, on the other hand, is all about investigating adaptive computation for IDA. The motivation is that adaptive computation is likely to play an important role as IDA is applied to increasingly challenging tasks. \n Non-Amplification Baselines For any adaptive computation approach, it is important to compare against nonadaptive approaches (which use a fixed amount of compute) and also against simple heuristics for scaling up compute for harder instances. Amplification could also be compared to adaptive approaches that do not use amplification (e.g. something like [37] ). \n Related Work Attending to Mathematical Language with Transformers (Wangperawong) [44] There is a dataset (available at https://github.com/tensorflow/tensor2tensor) and paper for addition/multiplication/subtraction of numbers. The paper explores Transformers that can use more compute for larger instances. Adaptive Computation Time for Recurrent Neural Networks (Graves) [37] Adaptive Computation Time for RNNs adds the probability of halting computation at current step to the output of RNN. The idea is to train the adaptive RNN end-to-end: running long computations at training time, which allows differentiation of the halting probability. Comparing Fixed and Adaptive Computation Time for Recurrent Neural Networks (Fojo et al.) [45] This paper presents experiments claiming to show that adaptive computation time for RNNs (as in [37] above) is not needed because similar performance is achieved with a regular RNN (which takes a fixed number of steps between predictions). Universal Transformers (Dehghani et al.) [46] The paper applies Adaptive Computation Time to the Transformer architecture. It shows improved performance on bAbI tasks with Adaptive Computation Time On Calibration of Modern Neural Networks (Guo et al.) [47] This paper tests the calibration of convolutional neural networks and finds that they are poorly calibrated by default. They show that temperature scaling (learning a temperature parameter that scales the softmax inputs) does a good job of getting the network to be calibrated. They indicate a tradeoff between accuracy and calibration (see their Figure 3 ). 
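To connect Guo et al.'s temperature scaling to the fast/slow decision discussed earlier in this project, here is a rough sketch of one way the pieces could fit together. It is our own construction: the function names, the treatment of answers as classes, and the fixed confidence threshold are all illustrative assumptions. The fast distilled model is trusted only when its calibrated confidence is high enough; otherwise the slow amplified process is invoked.

    import numpy as np

    def calibrated_confidence(logits, temperature):
        # Softmax confidence after temperature scaling (as in Guo et al. [47]):
        # divide the logits by a learned temperature T before the softmax.
        z = np.asarray(logits, dtype=float) / temperature
        z -= z.max()                          # numerical stability
        probs = np.exp(z) / np.exp(z).sum()
        return probs.max(), int(probs.argmax())

    def solve_fast_or_slow(x, fast_logits_fn, slow_amplified_fn,
                           temperature=1.5, threshold=0.9):
        """Run the distilled (fast) model; fall back to the amplified (slow)
        process only when the calibrated confidence is below `threshold`.
        fast_logits_fn(x) returns class logits; slow_amplified_fn(x) returns an
        answer via decomposition and multiple calls to the model."""
        conf, answer = calibrated_confidence(fast_logits_fn(x), temperature)
        if conf >= threshold:
            return answer, "fast"
        return slow_amplified_fn(x), "slow"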
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles (Lakshminarayanan et al.) [48] They get calibrated uncertainty estimates for deep neural nets by: \n • Using a proper scoring rule for the loss \n • Performing adversarial training \n • Using an ensemble of networks Principles of Metalevel Control (Hay) [39] Ph.D. thesis on metalevel control and metareasoning. Includes an excellent introduction to the subject and a review of previous work. The technical contributions include applying Bayesian decision theory to bandit problems and applying RL to tree search (i.e. learning a tree-search policy rather than just building in MCTS). Learning to Search with MCTSnets (Guez et al.) [49] They show how to learn the hyperparameters of Monte-Carlo Tree Search end-to-end, by playing the single-player gridworld game Sokoban. They learn a more sample-efficient search than regular MCTS. \n Contents: Background on IDA (What is IDA?; Examples; Related Work; Project Goals); Project 1: Amplifying Mathematical Reasoning (Motivation; Project Directions; IDA for High School Mathematics; Related Work); Project 2: IDA for Neural Program Interpretation (Motivation; Project Directions; Related Work); Project 3: Adaptive Computation (Motivation; Project Directions; IDA, Fast and Slow; Related Work). \n a University of Oxford, b University of Toronto, c Ought. Correspondence to andreas@ought.org \n Figure 1: Diagram showing the first few Amplification and Distillation steps in IDA training. First (far left), M is trained by supervised learning on each task x ∈ T0. Second, M solves each task x ∈ T1 by decomposing into tasks {xa, xb} ∈ T0 and solving these directly. This eventually produces a dataset of solved tasks x ∈ T1, which M is trained to solve directly by supervised learning (leftmost "distill" step). This process is then repeated. \n Figure 2: A multiplication problem could eventually be solved by decomposing all the way down to simple multiplications (without any distillation). Only the first two levels of decomposition are shown here. \n Task and Dataset Saxton et al. [24] introduce a dataset of high-school level mathematics problems in natural language. The problems cover arithmetic, algebra, differentiation, probability, and number theory. Here are some examples: Question: Solve -42*r + 27*c = -1167 and 130*r + 4*c = 372 for r. Answer: 4 Question: Simplify sqrt(200)*2 + sqrt(200) + sqrt(200) + -4. Answer: -4 + 40*sqrt(2) Question: Let u(n) = -n^3 -n^2. Let e(c) = -2*c^3 + c. Let f(j) = -118*e(j) + 54*u(j). What is the derivative of f(a)?
Answer: 546*a^2 -108*a -118 Question: Three letters picked without replacement from qqqkkklkqkkk. Give prob of sequence qql. Answer: 1/110 Question: What are the prime factors of 235232673? Answer: 3, 13, 19, 317453 Question: Let j = -5 -28. Is j/6*(-14) a composite number? Answer: True \n\t\t\t More precisely: it is infeasible to provide large numbers of demonstrations or sufficiently dense reward signals for methods like imitation learning or RL to work well. \n\t\t\t We describe IDA based on supervised learning, similar to [8] , since this is what the three projects in this document focus on. This can be substituted with RL or other training schemes, see [9, 10] . We view answering questions, problem decomposition, and aggregation of subproblem answers all as sequence-to-sequence problems, so we can train a single model M to solve them. We could also train distinct models. \n\t\t\t The Multiplication example has not been implemented, but a version of the Shortest Path example is implemented in [8] . \n\t\t\t Christiano et al. use a slightly more complicated decomposition. 5 https://ought.org \n\t\t\t If mathematics problems are fully formalized, we can search over formal proofs. But if the mathematics is informal, it is much harder to provide an algorithm that would eventually (given arbitrary amounts of time and compute) solve the problems.", "date_published": "n/a", "url": "n/a", "filename": "evans_ida_projects.tei.xml", "abstract": "Iterated Distillation and Amplification (IDA) is a framework for training ML models. IDA is related to existing frameworks like imitation learning and reinforcement learning, but it aims to solve tasks for which humans cannot construct a suitable reward function or solve directly. This document reviews IDA and proposes three projects that explore aspects of IDA. Project 1 applies IDA to problems in highschool mathematics and investigates whether learning to decompose problems can improve performance over supervised learning. Project 2 applies IDA to neural program interpretation, where neural nets are trained on the internal behavior (execution traces) of traditional computer programs. Project 3 investigates whether adaptive computation time (varying compute at inference time as a function of the input) can improve the robustness and efficiency of IDA. Our goal in outlining these projects is to generate discussion and encourage research on IDA. We are not (as of June 2019) working on these projects, but we are interested in collaboration.", "id": "a00d86d49f2345fc29ac9e586b6dd757"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Adam Stooke", "Joshua Achiam", "Pieter Abbeel"], "title": "Responsive Safety in Reinforcement Learning by PID Lagrangian Methods", "text": "Introduction Reinforcement learning has solved sequential decision tasks of impressive difficulty by maximizing reward functions through trial and error. Recent examples using deep learning range from robotic locomotion (Schulman et al., 2015; Gu et al., 2016; Schulman et al., 2017; Levine et al., 2016) to sophisticated video games (Mnih et al., 2013; Schulman et al., 2017; OpenAI, 2018; Jaderberg et al., 2019) . While errors during training in these domains come without cost, in some learning scenarios it is important to limit the rates of hazardous outcomes. One example would be wear and tear 1 University of California, Berkeley 2 OpenAI. Correspondence to: Adam Stooke . 
on a robot's components or its surroundings. It may not be possible to impose such limits by prescribing constraints in the action or state space directly; instead, hazard-avoiding behavior must be learned. For this purpose, we use the well-known framework of the constrained Markov decision process (CMDP) (Altman, 1999), which limits the accumulation of a "cost" signal which is analogous to the reward. The optimal policy is one which maximizes the usual return while satisfying the cost constraint. In safe RL the agent must avoid hazards not only at convergence, but also throughout exploration and learning. Lagrangian methods are a classic approach to solving constrained optimization problems. For example, the equality-constrained problem over the real vector x: min_x f(x) s.t. g(x) = 0 (1) is transformed into an unconstrained one by introduction of a dual variable - the Lagrange multiplier, λ - to form the Lagrangian: L(x, λ) = f(x) + λ g(x), which is used to find the solution as: (x*, λ*) = arg max_λ min_x L(x, λ) (2) Gradient-based algorithms iteratively update the primal and dual variables: −∇_x L(x, λ) = −∇_x f(x) − λ ∇_x g(x) (3) ∇_λ L(x, λ) = g(x) (4) so that λ acts as a learned penalty coefficient in the objective, leading eventually to a constraint-satisfying solution (see e.g. Bertsekas (2014)). The Lagrangian multiplier method is readily adapted to the constrained RL setting (Altman, 1998; Geibel & Wysotzki, 2011) and has become a popular baseline in deep RL (Achiam et al., 2017; Chow et al., 2019) for its simplicity and effectiveness. Although they have been shown to converge to optimal, constraint-satisfying policies (Tessler et al., 2018; Paternain et al., 2019), a shortcoming of gradient Lagrangian methods for safe RL is that intermediate iterates often violate constraints. Cost overshoot and oscillations are in fact inherent to the learning dynamics (Platt & Barr, 1988; Wah et al., 2000), and we witnessed numerous problematic cases in our own experiments. Figure 1 (left) shows an example from a deep RL setting, where the cost and multiplier values oscillated throughout training. Our key insight in relation to this deficiency is that the traditional Lagrange multiplier update in (4) amounts to integral control on the constraint. The 90-degree phase shift between the curves is characteristic of ill-tuned integral controllers. Our contribution is to expand the scope of possible Lagrange multiplier update rules beyond (4), by interpreting the overall learning algorithm as a dynamical system. Specifically, we employ the next simplest mechanisms, proportional and derivative control, to λ, by adding terms corresponding to derivatives of the constraint function into (4) (derivatives with respect to learning iteration). To our knowledge, this is the first time that an expanded update rule has been considered for a learned Lagrange multiplier. PID control is an appealing enhancement, evidenced by the fact that it is one of the most widely used and studied control techniques (Åström & Hägglund, 2006). The result is a more responsive safety mechanism, as demonstrated in Figure 1 (right), where the cost oscillations have been damped, dramatically reducing violations. Our contributions in this paper are outlined as follows. First, we provide further context through related works and preliminary definitions.
In Section 4, we propose modified Lagrangian multiplier methods and analyze their benefits in the learning dynamics. Next, in Section 5, we cast constrained RL as a dynamical system with the Lagrange multiplier as a control input, to which we apply PID control as a new algorithm. In Section 6, we adapt a leading deep RL algorithm, Proximal Policy Optimization (PPO) (Schulman et al., 2017) with our methods and achieve state of the art performance in the OpenAI Safety-Gym suite of environments (Ray et al., 2019) . Finally, in Section 7 we introduce another novel technique that makes tuning easier by providing invariance to the relative numerical scales of rewards and costs, and we demonstrate it in a further set of experiments. Our extensive empirical results show that our algorithms, which are intuitive and simple to implement, improve cost performance and promote hyperparameter robustness in a deep RL setting. \n Related Work Constrained Deep RL. Adaptations of the Lagrange multiplier method to the actor-critic RL setting have been shown to converge to the optimal, constraint-satisfying solution under certain assumptions (Tessler et al., 2018) . Convergence proofs have relied upon updating the multiplier more slowly than the policy parameters (Tessler et al., 2018; Paternain et al., 2019) , implying many constraint-violating policy iterations may occur before the penalty comes into full effect. Several recent works have aimed at improving constraint satisfaction in RL over the Lagrangian method, but they tend to incur added complexity. Achiam et al. (2017) introduced Constrained Policy Optimization (CPO), a policy search algorithm with near-constraint satisfaction guarantees at every iteration, based on a new bound on the expected returns of two nearby policies. CPO includes a projection step on the policy parameters, which in practice requires a time-consuming backtracking line search. Yet, simple Lagrangian-based algorithms performed as well or better in a recent empirical comparison in Safety Gym (Ray et al., 2019) . Approaches to safe RL based on Lyapunov functions have been developed in a series of studies (Chow et al., 2018; , resulting in algorithms that combine a projection step, as in CPO, with action-layer interventions like the safety layer of Dalal et al. (2018) . Experimentally, this line of work showed mixed performance gains over Lagrangian methods, at a nontrivial cost to implement and without clear guidance for tuning. developed interior point methods for RL, which augment the objective with logarithmic barrier functions. These methods are shown theoretically to provide suboptimal solutions. Furthermore, they require tuning of the barrier strength and typically assume already feasible iterates, the latter point possibly being problematic for random agent initializations or under noisy cost estimates. Most recently, Yang et al. (2020) extended CPO with a two-step projection-based optimization approach. In contrast to these techniques, our method remains nearly as simple to implement and compute as the baseline Lagrangian method. Dynamical Systems View of Optimization. Several recent works have proposed different dynamical systems viewpoints to analyze optimization algorithms, including those often applied to deep learning. Hu & Lessard (2017) reinterpreted first-order gradient optimization as a dynamical system; they likened the gradient of the objective, ∇ x f , to the plant, which the controller aims to drive to zero to arrive at the optimal parameters, x * . 
Basic gradient de-scent then matches the form of integral control (on ∇ x f ). They extend the analogy to momentum-based methods, for example linking Nesterov momentum to PID control with lag compensation. In another example, An et al. (2018) interpreted SGD as P-control and momentum methods as PI-control. They introduced a derivative term, based on the change in the gradient, and applied their resulting PID controller to improve optimization of deep convolutional networks. Other recent works bring yet other perspectives from dynamical systems to deep learning and optimization, see for example (Lessard et al., 2014; Nishihara et al., 2015; Liu & Theodorou, 2019) ). None of these works address constrained RL, however, necessitating our distinct formulation for that problem. Constrained Optimization. Decades' worth of literature have accumulated on Lagrangian methods. But even recent textbooks on the topic (Bertsekas, 2014; Nocedal & Wright, 2006) only consider updating the Lagrange multiplier using the value of the constraint function, g(x), and miss ever using its derivatives, ġ(x) or g(x), which we introduce. The modification to the Lagrangian method most similar in effect to our proportional control term (here using ġ(x)) is the quadratic penalty method (Hestenes (1969) ; Powell (1969) see also e.g. Bertsekas (1976) ), which we compare in Section 4. Song & Leland (1998) proposed a controls viewpoint (continuous-time) of optimizing neural networks for constrained problems and arrived at proportional control rules only. Related to our final experiments on reward-scale invariance, Wah et al. (2000) developed an adaptive weighting scheme for continuous-time Lagrangian objectives, but it is an intricate procedure which is not straightforwardly applied to safe RL. \n Preliminaries Constrained Reinforcement Learning Constrained Markov Decision Processes (CMDP) (Altman, 1998) extend MDPs (see Sutton & Barto (1998) ) to incorporate constraints into reinforcement learning. A CMDP is the expanded tuple (S, A, R, T, µ, C 0 , C 1 , ..., d 0 , d 1 , ...), with the cost functions C i : S × A × S → R defined by the same form as the reward, and d i : R denoting limits on the costs. For ease of notation, we will only consider a single, all-encompassing cost. The expected sum of discounted rewards over trajectories, τ = (s 0 , a 0 , s 1 , a 1 , ...), induced by the policy π(a|s) is a common performance objective: J(π) = E τ ∼π [ ∞ t=0 γ t R(s t , a t , s t+1 )]. The analogous value function for cost is defined as: J C (π) = E τ ∼π [ ∞ t=0 γ t C(s t , a t , s t+1 )]. The constrained RL problem is to solve for the best feasible policy: π * = arg max π J(π) s.t. J C (π) ≤ d (5) Deep reinforcement learning uses a deep neural network for the policy, π θ = π(•|s; θ) with parameter vector θ, and policy gradient algorithms improve the policy iteratively by gathering experience in the task to estimate the reward objective gradient, ∇ θ J(π θ ). Thus our problem of interest is better expressed as maximizing score at some iterate, π k , while ideally obeying constraints at each iteration: max π J(π k ) s.t. J C (π m ) ≤ d m ∈ {0, 1, ..., k} (6) Practical settings often allow trading reward performance against some constraint violations (e.g. the constraints themselves may include a safety margin). For this purpose we introduce a constraint figure of merit with our experiments. \n Dynamical Systems and Optimal Control Dynamical systems are processes which can be subject to an external influence, or control. 
A general formulation for discrete-time systems with feedback control is: x k+1 =F (x k , u k ) y k =Z(x k ) u k =h(y 0 , ..., y k ) (7) with state vector x, dynamics function F , measurement outputs y, applied control u, and the subscript denoting the time step. The feedback rule h has access to past and present measurements. A problem in optimal control is to design a control rule, h, that results in a sequence y 0:T . = {y 0 , ..., y T } (or x 0:T directly) that scores well according to some cost function C. Examples include simply reaching a goal condition, C = |y T − y|, or following close to a desired trajectory, y 0:T . Systems with simpler dependence on the input are generally easier to analyze and control (i.e. simpler h performs well), even if the dependence on the state is complicated (Skelton, 1988) . Control-affine systems are a broad class of dynamical systems which are especially amenable to analysis (Isidori et al., 1995) . They take the form: F (x k , u k ) = f (x k ) + g(x k )u k (8) where f and g may be nonlinear in state, and are possibly uncertain, meaning unknown. We will seek control-affine form for ease of control and to support future analysis. \n Modified Lagrangian Methods for Constrained Optimization Lagrangian methods are a classic family of approaches to solving constrained optimization problems. We propose an intuitive, previously overlooked form for the multiplier update and derive its beneficial effect on the learning dynamics. We begin by reviewing a prior formulation for the equality-constrained problem. 1 4.1. Review: \"Basic Differential Multiplier Method\" We follow the development of Platt & Barr (1988) , who analyzed the dynamics of a continuous-time neural learning system applied to this problem (our result can similarly be derived for iterative gradient methods). They begin with the component-wise differential equations: ẋi = − ∂L(x, λ) ∂x i = − ∂f ∂x i − λ ∂g ∂x i (9) λ = α ∂L(x, λ) ∂λ = αg(x) (10) where we have inserted the scalar constant α as a learning rate on λ. Differentiating (9) and substituting with (10) leads to the second-order dynamics, written in vector format: ẍ + A ẋ + αg(x)∇g = 0 (11) which is a forced oscillator with damping matrix equal to the weighted sum of Hessians: A ij = ∂ 2 f ∂x i ∂x j + λ ∂ 2 g ∂x i ∂x j , or, A = ∇ 2 f + λ∇ 2 g (12) Platt & Barr (1988) showed that if A is positive definite, then the system (11) converges to a solution that satisfies the constraint. Platt & Barr (1988) also noted that the system (9)-( 10 ) is prone to oscillations as it converges into the feasible region, with frequency and settling time depending on α. We provide complete derivations of the dynamics in (11) and for our upcoming methods in an appendix. \n Proportional-Integral Multiplier Method In (10), λ simply integrates the constraint. To improve the dynamics towards more rapid and stable satisfaction of constraints, we introduce a new term in λ that is proportional to the current constraint value. In the differential equation for λ, this term appears as the time-derivative of the constraint: λ = αg(x) + β ġ(x) = αg(x) + β j ∂g ∂x j ẋj ( 13 ) with strength coefficient, β. Replacing (10) by ( 13 ) and combining with (9) yields similar second-order dynamics as (11), except with an additional term in the damping matrix: ẍ + A + β∇g∇ ⊤ g ẋ + αg(x)∇g = 0 (14) The new term is beneficial because it is positive semidefinite-being the outer product of a vector with itself-so it can increase the damping eignevalues, boosting convergence. 
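In discrete time, the update (13) amounts to setting the multiplier to an integral term plus a term proportional to the current constraint value. The sketch below applies this to the same kind of quadratic toy problem used in the earlier sketch (repeated here so the snippet runs standalone); the problem data and coefficients are illustrative assumptions.

```python
import numpy as np

# Same toy problem as the earlier sketch: min ||x - x0||^2  s.t.  g(x) = sum(x) - 1 = 0.
x0 = np.array([2.0, 1.0])
f_grad = lambda x: 2.0 * (x - x0)
g = lambda x: np.sum(x) - 1.0
g_grad = lambda x: np.ones_like(x)

def run(alpha, beta, iters=200, eta=0.05):
    """Discrete analogue of (13): lambda = alpha * integral(g) + beta * g."""
    x, integral = np.zeros(2), 0.0
    trace = []
    for _ in range(iters):
        integral += g(x)                      # integral term (traditional multiplier)
        lam = alpha * integral + beta * g(x)  # proportional term added per (13)
        x -= eta * (f_grad(x) + lam * g_grad(x))
        trace.append(g(x))
    return np.array(trace)

# beta = 0 recovers the basic differential multiplier method; beta > 0 adds damping.
for beta in (0.0, 2.0):
    trace = run(alpha=0.5, beta=beta)
    print(f"beta={beta}: max |g(x)| after iteration 20 = {np.max(np.abs(trace[20:])):.4f}")
# With these illustrative settings, the beta > 0 run settles onto the constraint
# noticeably faster than the integral-only baseline.
```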
The results of (Platt & Barr, 1988 ) hold under (13, 14) , because the conditions of the solution, namely ẋ = 0 and g(x) = 0, remain unaffected and extend immediately to ġ(x) = 0 (and for the sequel, to g(x) = 0). To our knowledge, this is the first time that a proportional-integral update rule has been considered for a learned Lagrange multiplier. The well-known penalty method (Hestenes, 1969; Powell, 1969) augments the Lagrangian with an additional term, c 2 g(x) 2 , which produces a similar effect on the damping matrix, as shown in (Platt & Barr, 1988) : A penalty = A + c∇g∇ ⊤ g + cg(x)∇ 2 g (15) Our approach appears to provide the same benefit, without the following two complications of the penalty method. First, the penalty term must be implemented in the derivative ẋ, whereas our methods do not modify the Lagrangian nor the derivative in (9). Second, the penalty introduces another instance of the hessian ∇ 2 g in the damping matrix, which might not be positive semi-definite but shares the proportionality factor, c, with the desired term. \n Integral-Derivative Multiplier Method A similar analysis extends to the addition of a term in λ based on the derivative of the constraint value. It appears in λ as the second derivative of the constraint: λ = αg(x) + γg(x) (16) with strength coefficient γ. The resulting dynamics are: ẍ + B −1 A ẋ + αg(x) + γ ẋ⊤ ∇ 2 g ẋ B −1 ∇g = 0 (17) with B = I + γ∇g∇ ⊤ g , and I the identity matrix. The effects of the derivative update method are two-fold. First, since the eigenvalues of the matrix B −1 will be less than 1, both the damping (A) and forcing (∇g) terms are weakened (and rotated, generally). Second, the new forcing term can be interpreted as a drag quadratic in the speed and modulated by the curvature of the constraint along the direction of motion. To illustrate cases, if the curvature of g is positive along the direction of travel, then this term becomes a force for decreasing g. If at the same time g(x) > 0, then the traditional force will also be directed to decrease g, so the two will add. On the other hand, if g curves negatively along the velocity, then the new force promotes increasing g; if g(x) > 0, then the two forces subtract, weakening the acceleration ẍ. By using curvature, the derivative method acts predictively, but may be prone to instability. The proportional-integral-derivative multiplier method is the combination of the previous two developments, which induced independent changes in the dynamics (i.e. insert the damping matrix of ( 14 ) into ( 17 )). We leave for future work a more rigorous analysis of the effects of the new terms, along with theoretical considerations of the values of coefficients α, β, and γ. In the next section, we carry the intuitions from our analysis to make practical enhancements to Lagrangian-based constrained RL algorithms. \n Feedback Control for Constrained RL We advance the broader consideration of possible multiplier update rules by reinterpreting constrained RL as a dynamical system; the adaptive penalty coefficient is a control input, and the cost threshold is a setpoint which the system should maintain. As the agent learns for rewards, the upward pressure on costs from reward-learning can change, requiring dynamic response. In practical Lagrangian RL, the iterates λ k may deviate from the optimal value, even for lucky initialization λ 0 = λ * , as the policy is only partially optimized at each iteration. 
Adaptive sequences λ 0 , ..., λ K other than those prescribed by the Lagrangian method may achieve superior cost control for Problem (6). In this section we relate the Lagrangian method to a dynamical system, formalizing how to incorporate generic update rules using feedback. We return to the case of an inequality constrained CMDP to present our main algorithmic contribution-the use of PID control to adapt the penalty coefficient. \n Constrained RL as a Dynamical System We write constrained RL as the first-order dynamical system: θ k+1 =F (θ k , λ k ) y k =J C (π θ k ) λ k =h(y 0 , ..., y k , d) (18) where F is an unknown nonlinear function 2 corresponding to the RL algorithm policy update on the agent's parameter vector, θ. The cost-objective serves as the system measure, y, which is supplied to the feedback control rule, h, along with cost limit, d. From this general starting point, both the RL algorithm, F , and penalty coefficient update rule, h, can be tailored for solving Problem (6). The reward and cost policy gradients of the first-order 3 Lagrangian method, ∇ θ L(θ, λ) = ∇ θ J(π θ ) − λ∇ θ J C (π θ ), can be organized into the form of (18) as: F (θ k , λ k ) = f (θ k ) + g(θ k )λ k (19) f (θ k ) = θ k + η∇ θ J(π θ k ) (20) 2 Known as an \"uncertain\" nonlinear function in the control literature, meaning we lack an analytical expression for it. 3 We discuss only the first-order case, which provides sufficient clarity for our developments. \n g(θ k ) = −η∇ θ J C (π θ k ) (21) with SGD learning rate η. The role of the controller is to drive inequality constraint violations (J c − d) + to zero in the presence of drift from reward-learning due to f . The Lagrange multiplier update rule for an inequality constraint uses subgradient descent: λ k+1 = (λ k + K I (J C − d)) + (22) with learning rate K I and projection into λ ≥ 0. This update step is clearly an integral control rule, for h. \n Constraint-Controlled RL Our general procedure, constraint-controlled RL, is given in Algorithm 1. It follows the typical minibatch-RL scheme, and sampled estimates of the cost criterion, ĴC are fed back to control the Lagrange multiplier. In contrast to prior work (Tessler et al., 2018; Paternain et al., 2019) which uses a single value approximator and treats r + λc as the reward, we use separate value-and cost-value approximators, since λ may change rapidly. When λ is large, the update in ( 19 ) can cause excessively large change in parameters, θ, destabilizing learning. To maintain consistent step size, we use a re-scaled objective for the θ-learning loop: θ * (λ) = arg max θ J − λJ C = arg max θ 1 1 + λ (J − λJ C ) This convex combination of objectives yields the policy gradient used in Algorithm 1. Our experiments use this re-scaling, including for traditional Lagrangian baselines. 
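The effect of this re-scaling can be checked in a few lines of NumPy. The gradient vectors below are arbitrary stand-ins for the reward and cost policy gradients (an assumption of this sketch), chosen only to show that the re-scaled step stays bounded as the multiplier grows while the unscaled Lagrangian step does not.

```python
import numpy as np

# Illustrative stand-ins for grad_theta J and grad_theta J_C.
grad_reward = np.array([1.0, 0.5, -0.2])
grad_cost   = np.array([0.3, -0.8, 0.1])

def lagrangian_step(lam):
    """Unscaled Lagrangian gradient: grows without bound as lambda grows."""
    return grad_reward - lam * grad_cost

def rescaled_step(lam):
    """Re-scaled objective used in Algorithm 1: (1/(1+lam)) * (J - lam * J_C),
    i.e. a convex combination of the two gradients with u = lam/(1+lam)."""
    u = lam / (1.0 + lam)
    return (1.0 - u) * grad_reward - u * grad_cost

for lam in (0.1, 1.0, 10.0, 100.0):
    print(f"lambda={lam:6.1f}  |unscaled|={np.linalg.norm(lagrangian_step(lam)):8.2f}"
          f"  |rescaled|={np.linalg.norm(rescaled_step(lam)):6.2f}")
# The unscaled step norm grows roughly linearly with lambda, while the rescaled
# step stays bounded, keeping the effective step size on theta consistent.
```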
a ∼ π(•|s; θ), s ′ ∼ T (s, a), 7: r ∼ R(s, a, s ′ ), c ∼ C(s, a, s ′ ) 8: Apply feedback control: Update critics, V φ (s), V C,ψ (s) ⊲ if using 13: ∇ θ L = 1 1+λ ∇ θ Ĵ(π θ ) − λ∇ θ ĴC (π θ ) 14: until converged 15: return π θ 16: end procedure As an aside, we note that it is possible to maintain the control-affine form of ( 19 ) with this re-scaling, by reparam-eterizing the control as 0 ≤ u = λ 1+λ ≤ 1 and substituting for (21) with: g(θ k ) = −η∇ θ (J(π θ k ) + J C (π θ k )) (23) This parameterization simply weights the reward and cost gradients in the Lagrangian objective as: ∇ θ L(θ, λ) = (1 − u)∇ θ J(π θ ) − u∇ θ J C (π θ ) (24) It may provide superior performance in some cases, as it will behave differently in relation to the nonlinearity in control which arises from the inequality constraint. We leave experimentation with direct control on u ∈ [0, 1] to future work. \n The PID Lagrangian Method We now specify a new control rule for use in Algorithm 1. To overcome the shortcomings of integral-only control, we follow the developments of the previous section and introduce the next simplest components: proportional and derivative terms. Our PID update rule to replace ( 22 return λ \n PID Control Experiments We investigated the performance of our algorithms on Problem (6) in a deep RL setting. In particular, we show the effectiveness of PID control at reducing constraint violations from oscillations and overshoot present in the baseline Lagrangian method. Both maximum performance and robustness to hyperparameter selection are considered. Although many methods exist for tuning PID parameters, we elected to do so manually, demonstrating ease of use. \n Environments: Safety-Gym We use the recent Safety-Gym suite (Ray et al., 2019) , which consists of robot locomotion tasks built on the MuJoCo simulator (Todorov et al., 2012) . The robots range in complexity from a simple Point robot to the 12-jointed Doggo, and they move in an open arena floor. Rewards have a small, dense component encouraging movement toward the goal, and a large, sparse component for achieving it. When a goal is achieved, a new goal location is randomly generated, and the episode continues until the time limit at 1,000 steps. Each task has multiple difficulty levels corresponding to density and type of hazards, which induce a cost when contacted by the robot (without necessarily hindering its movement). Hazards are placed randomly at each episode and often lay in the path to the goal. Hence the aims of achieving high rewards and low costs are in opposition. The robot senses the position of hazards and the goal through a coarse, LIDAR-like mode. The output of this sensor, along with internal readings like the joint positions and velocities, comprises the state fed to the agent. Figure 2 displays a scene from the DOGGOGOAL1 environment. \n Algorithm: Constraint-Controlled PPO We implemented Algorithm 1 on top of Proximal Policy Optimization (PPO) (Schulman et al., 2017) to make constraintcontrolled PPO (CPPO). CPPO uses an analogous clipped surrogate objective for the cost as for the reward. Our policy is a 2-layer MLP followed by an LSTM with a skip connection. We applied smoothing to proportional and derivative controls to accommodate noisy estimates. The environments' finite horizons allowed use of nondiscounted episodic costs as the constraint and input to the controller. Additional training details can be found in supplementary materials, and our implementation is available at https://github.com/astooke/safe-rlpyt. 
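A minimal, self-contained sketch of a PID-controlled Lagrange multiplier in the spirit of Algorithm 2 is given below. The class name, gain values, cost limit, and example cost sequence are illustrative assumptions, and the smoothing of the proportional and derivative signals used in the actual implementation is omitted for brevity; treat this as an illustration rather than the reference implementation.

```python
class PIDLagrangeMultiplier:
    """PID control of the Lagrange multiplier, in the spirit of Algorithm 2.
    Smoothing of the proportional/derivative inputs is omitted here."""

    def __init__(self, k_p=0.0, k_i=1e-2, k_d=0.0, cost_limit=25.0):
        self.k_p, self.k_i, self.k_d = k_p, k_i, k_d
        self.cost_limit = cost_limit
        self.integral = 0.0
        self.prev_cost = 0.0

    def update(self, episodic_cost):
        delta = episodic_cost - self.cost_limit                      # constraint violation
        self.integral = max(0.0, self.integral + self.k_i * delta)   # integral term, kept >= 0
        deriv = max(0.0, episodic_cost - self.prev_cost)             # acts only against cost increases
        self.prev_cost = episodic_cost
        lam = self.k_p * delta + self.integral + self.k_d * deriv
        return max(0.0, lam)                                         # project to lambda >= 0

# Setting k_p = k_d = 0 recovers the traditional (integral-only) multiplier update.
controller = PIDLagrangeMultiplier(k_p=0.25, k_i=1e-2, k_d=0.1, cost_limit=25.0)
for cost in (10.0, 30.0, 60.0, 45.0, 20.0):   # made-up episodic cost estimates
    print(f"cost={cost:5.1f}  lambda={controller.update(cost):.3f}")
```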
\n Main Results We compare PID controller performance against the Lagrangian baseline under a wide range of settings. Plots showing the performance of the unconstrained analogue confirm that constraints are not trivially satisfied, and they appear in supplementary material. \n ROBUST SAFETY WITH PI CONTROL We observed cost oscillations or overshoot with slow settling time in a majority of Safety Gym environments when using the Lagrangian method. Figure 3 shows an example where PI-control eliminated this behavior while maintaining good reward performance, in the challenging DOGGOBUTTON1 environment. Individual are plotted for different cost limits. As predicted in (Platt & Barr, 1988) , we found the severity of cost overshoot and oscillations to depend on the penalty coefficient learning rate, K I . The top left panel of Figure 4 shows example cost curves from DOGGOGOAL2 under I-control, over a wide range of values for K I (we refer to varying K I , assuming K I = 1; the two are interchangeable in our design). With increasing K I , the period and ampli-tude of cost oscillations decrease and eventually disappear. The bottom left of Figure 4 , however, shows that larger K I also brings diminishing returns. We study this effect in the next section. The center and right columns of Figure 4 show the cost and return when using PI-control, with K P = 0.25 and K P = 1, respectively. Proportional control stabilized the cost, with most oscillations reduced to the noise floor for K I > 10 −4 . Yet returns remained relatively high over a wide range, K I < 10 −1 . Similar curves for other Safety Gym environments are included in an appendix. We examine the trade-off between reward and constraint violation by forming an overall cost figure of merit (FOM). We use the sum of non-discounted constraint violations over the learning iterates, C F OM = k (D(π θ k ) − d) + , D(π θ ) = E τ ∼π T t=0 C(s t , a t , s ′ t ) , and estimate it online from the learning data. Figure 5 compares final returns against this cost FOM for the same set of experiments as in Figure 4 . Each point represents a different setting of K I , averaged over four runs. PI-control expanded the Pareto frontier of this trade-off into a new region of high rewards at relatively low cost which was inaccessible using the Lagrangian method. These results constitute a new state of the art over the benchmarks in Ray et al. (2019) . We performed similar experiments on several Safety Gym environments in addition to DOGGOGOAL2: POINTGOAL1, the simplest domain with a point-like robot, CARBUTTON1, for slightly more challenging locomotive control, and DOG-GOBUTTON1 for another challenging task (see appendix for learning curves like Figure 4 ). for two strengths of added proportional control, for these environments. PI-control clearly improved the cost FOM (lower is better) for K I < 10 −1 , above which the fast integral control dominated. Hence robustness to the value for K I was significantly improved in all the learning tasks studied. . Learning run cost FOM versus penalty learning rate, KI , from four environments spanning the robots in Safety Gym. Each point is an average over four runs. In all cases, PI-control improves performance (lower is better) over a wide and useful range of KI , easing selection of that hyperparameter. \n CONTROL EFFICIENCY We further investigated why increasing the penalty learning rate, K I , eventually reduces reward performance, as was seen in the robustness study. 
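As an aside, the cost figure of merit defined above reduces to a short computation over the logged per-iteration episodic cost estimates; the function name and the example learning curve in this sketch are illustrative.

```python
import numpy as np

def cost_figure_of_merit(episodic_costs, cost_limit):
    """C_FOM = sum_k (D_k - d)+, where episodic_costs[k] estimates the
    non-discounted episodic cost D(pi_k) at training iteration k."""
    violations = np.maximum(np.asarray(episodic_costs) - cost_limit, 0.0)
    return violations.sum()

# Made-up sequence of per-iteration episodic costs against a limit of 25.
print(cost_figure_of_merit([40.0, 32.0, 27.0, 24.0, 25.5, 23.0], cost_limit=25.0))  # 24.5
```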
Figure 7 shows learning curves for three settings: I-and PI-control with the same, moderate K I = 10 −3 , and I-control with high K I = 10 −1 . The high-K I setting achieved responsive cost performance but lower long-term returns, which appears to result from wildly fluctuating control. In contrast, PI-control held relatively steady, despite the noise, allowing the agent to do reward-learning at every iteration. The bottom panel displays individual control iterates, here displayed as u = λ/(1 + λ), over the first 7M environment steps, while the others show smoothed curves over the entire learning run, over 40M steps. It was further able to slow the approach of the cost curve towards the limit, a desirable behavior for online learning systems requiring safety monitoring. Curves for other environments are available in an appendix. \n Reward-Scale Invariance In the preceding sections, we showed that PID control improves hyperparameter robustness in every constrained RL environment we tested. Here we propose a complementary method to promote robustness both within and across environments. Specifically, it addresses the sensitivity of learning dynamics to the relative numerical scale of reward and cost objectives. Consider two CMDPs that are identical except that in one the rewards are scaled by a constant factor, ρ. The optimal policy parameters, θ * remain unchanged, but clearly λ * must scale by ρ. To attain the same learning dynamics, all controller settings, λ 0 , K I , K P , and K D must therefore be scaled by ρ. This situation might feature naturally within a collection of related learning environments. Additionally, within the course of learning an individual CMDP, the balance between reward and cost magnitudes can change considerably, placing burden on the controller to track the necessary changes in the scale of λ. One way to promote performance of a single choice of controller settings across these cases would be to maintain a fixed meaning for the value of λ in terms of the relative influence of reward versus cost on the parameter update. To this end, we introduce an adjustable scaling factor, β k , in the policy gradient: ∇ θ L = (1 − u k )∇ θ J(π θ k ) − u k β k ∇ θ J C (π θ k ) (25) A conspicuous choice for β k is the ratio of un-scaled policy gradients: β ∇,k = ||∇ θ J(π θ k )|| ||∇ θ J C (π θ k )|| (26) since it balances the total gradient to have equal-magnitude contribution from reward-and cost-objectives at λ = 1 and encourages λ * = 1. Furthermore, β ∇ is easily computed with existing algorithm components. To test this method, we ran experiments on Safety Gym environments with their rewards scaled up or down by a factor of 10. Figure 9 shows a representative cross-section of results from the POINTGOAL1 environment using PI-control. The different curves within each plot correspond to different reward scaling. Without objective-scaling (i.e. β = 1), the dynamics under ρ = 10 are as if controller parameters were instead divided by 10, and likewise for ρ = 0.1. Note the near-logarithmic spacing of λ (λ ρ=10 has not converged to its full value). Using β ∇ , on the other hand, the learning dynamics are nearly identical across two orders of magnitude of reward scale. λ 0 = 1 becomes an obvious choice for initialization, a point where previous theory provides little guidance (Chow et al., 2019) (although here we left λ 0 = 0). Experiments in other environments and controller settings yielded similar results and are included in supplementary materials. 
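The effect of the gradient-norm weighting in (25)-(26) can be seen in a small sketch. The gradient vectors below are arbitrary illustrative values; dividing the resulting step by the reward scale shows that the balance between reward and cost terms is unchanged by reward scaling.

```python
import numpy as np

def scaled_objective_gradient(grad_reward, grad_cost, lam):
    """Objective-weighted policy gradient following (25)-(26): beta is the ratio
    of unscaled gradient norms and u = lam / (1 + lam)."""
    beta = np.linalg.norm(grad_reward) / np.linalg.norm(grad_cost)
    u = lam / (1.0 + lam)
    return (1.0 - u) * grad_reward - u * beta * grad_cost

# Illustrative gradients; scaling rewards by rho scales grad_reward by rho.
g_r = np.array([1.0, 0.5, -0.2])
g_c = np.array([0.3, -0.8, 0.1])
for rho in (0.1, 1.0, 10.0):
    step = scaled_objective_gradient(rho * g_r, g_c, lam=1.0)
    print(f"rho={rho:5.1f}  step/rho = {np.round(step / rho, 4)}")
# The printed step/rho is identical for every rho: with the beta weighting, a fixed
# lambda keeps the same relative influence of reward and cost on the update
# regardless of the numerical scale of the rewards.
```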
Other methods, such running normalization of rewards and costs, could achieve similar effects and are worth investigating, but our simple technique is surprisingly effective and is not specific to RL. \n Conclusion Starting from a novel development in classic Lagrangian methods, we introduced a new set of constrained RL solutions which are straightforward to understand and implement, and we have shown them to be effective when paired with deep learning. Several opportunities for further work lay ahead. Analysis of the modified Lagrangian method and constrained RL as a dynamical system may relax theoretical requirements for a slowly-changing multiplier. The mature field of control theory (and practice) provides tools for tuning controller parameters. Lastly, the control-affine form may assist in both analysis (see Liang-Liang Xie & Lei Guo (2000) and Galbraith & Vinter (2003) for controllability properties for uncertain nonlinear dynamics) and by opening to further control techniques such as feedback linearization. Our contributions improve perhaps the most commonly used constrained RL algorithm, which is a workhorse baseline. We have addressed its primary shortcoming while preserving its simplicity and even making it easier to use-a compelling combination to assist in a wide range of applications. Figure 1 . 1 Figure1. Left: The traditional Lagrangian method exhibits oscillations with 90 • phase shift between the constraint function and the Lagrange multiplier, characteristic of integral control. Right: PID control on the Lagrange multiplier damps oscillations and obeys constraints. Environment: DOGGOBUTTON1, cost limit 200. \n ) is shown in Algorithm 2. The proportional term will hasten the response to constraint violations and dampen oscillations, as derived in Section 4. Unlike the Lagrangian update, derivative control can act in anticipation of violations. It can both prevent cost overshoot and limit the rate of cost increases within the feasible region, useful when monitoring a system for further safety interventions. Our derivative term is projected as (•) + so that it acts against increases in cost but does not impede decreases. Overall, PID control provides a much richer set of controllers while remaining nearly as simple to implement; setting K P = K D = 0 recovers the traditional Lagrangian method. The integral term remains necessary for eliminating steady-state violations at convergence. Our experiments mainly focus on the effects of proportional and derivative control of the Lagrange multiplier in constrained deep RL.Algorithm 2 PID-Controlled Lagrange Multiplier 1: Choose tuning parameters: K P , K I , K D ≥ 0 2: Integral: I ← 0 3: Previous Cost: J C,prev ← 0 4: repeat at each iteration k ← (K P ∆ + K I I + K D ∂) + 10: J C,prev ← J C 11: \n Figure 2 . 2 Figure 2. Rendering from the DOGGOGOAL1 environment from Safety Gym. The red, four-legged robot must walk to the green cylinder while avoiding other objects, and receives coarse egocentric sensor readings of their locations. \n Figure 3 . 3 Figure 3. Oscillations in episodic costs (and returns) from the Lagrangian method, KP = 0, KI = 10 −2 , are damped by proportional control, KP = 1 (ours), at cost limits 50, 100, 150, 200 (curves shaded) in DOGGOBUTTON1. \n Figure 4 . 4 Figure 4. Top row: Constraint-violating oscillations decrease in magnitude and period from increases in the Lagrange multiplier learning rate, KI . At all levels, oscillations are damped by PIcontrol, KP = 0.25, 1. 
Bottom row: Returns diminish for large KI ; proportional control maintains high returns while reducing constraint violations. Environment: DOGGOGOAL2, cost limit 50. \n Figure 5 . 5 Figure 5. Pareto frontier of return versus cost FOM, which improves (up and to the left) with PI-control, KP = 0.25, 1. Each point is a different setting of KI (see Figure 4). \n Figure6. Learning run cost FOM versus penalty learning rate, KI , from four environments spanning the robots in Safety Gym. Each point is an average over four runs. In all cases, PI-control improves performance (lower is better) over a wide and useful range of KI , easing selection of that hyperparameter. \n Figure 7 . 7 Figure 7. I-and PI-control with moderate KI = 10 −3 and Icontrol with fast KI = 10 −1 (IK I +). Top Returns diminished for fast-KI , but high for PI. Second Cost oscillations mostly damped by PI, removed by fast-KI . Third Control (smoothed) varies more rapidly under fast-KI , is relatively steady for PI. Bottom Control over first 500 RL iterations; fast-KI slams the control to the extremes, causing the diminished returns. Environment: DOG-GOBUTTON1, cost limit 200. \n Figure 8 . 8 Figure8. Derivative control can prevent cost overshoot and slow the rate of cost increase within feasible regions, which the Lagrangian method cannot do. Environment: DOGGOBUTTON1, cost limit 200. \n Figure 9 . 9 Figure 9. Costs, returns, and Lagrange multiplier with rewards scaled by ρ ∈ {0.1, 1, 10}; PI-control with KI = 1e − 3, KP = 0.1. Left column: without objective-weighting, learning dynamics differ dramatically due to required scale of λ. Right column: with objective-weighting, learning dynamics are nearly identical. Environment: POINTGOAL1, cost limit 25. \n\t\t\t Standard techniques extend our results to inequality constraints, and multiple constraints, as in Platt & Barr (1988) , and notation is simplest for an equality constraint.", "date_published": "n/a", "url": "n/a", "filename": "stooke20a.tei.xml", "abstract": "Lagrangian methods are widely used algorithms for constrained optimization problems, but their learning dynamics exhibit oscillations and overshoot which, when applied to safe reinforcement learning, leads to constraint-violating behavior during agent training. We address this shortcoming by proposing a novel Lagrange multiplier update method that utilizes derivatives of the constraint function. We take a controls perspective, wherein the traditional Lagrange multiplier update behaves as integral control; our terms introduce proportional and derivative control, achieving favorable learning dynamics through damping and predictive measures. We apply our PID Lagrangian methods in deep RL, setting a new state of the art in Safety Gym, a safe RL benchmark. Lastly, we introduce a new method to ease controller tuning by providing invariance to the relative numerical scales of reward and cost. 
Our extensive experiments demonstrate improved performance and hyperparameter robustness, while our algorithms remain nearly as simple to derive and implement as the traditional Lagrangian approach.", "id": "e7830d87925f6865801569ad196b675e"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Owain Evans", "Andreas Stuhlm", "Noah D Goodman"], "title": "Learning the Preferences of Ignorant, Inconsistent Agents", "text": "Introduction The problem of learning a person's preferences from observations of their choices features prominently in economics (Hausman 2011) , in cognitive science (Baker, Saxe, and Tenenbaum 2011; Ullman et al. 2009) , and in applied machine learning (Jannach et al. 2010; Ermon et al. 2014) . To name just one example, social networking sites use a person's past behavior to select what stories, advertisements, and potential contacts to display to them. A promising approach to learning preferences from observed choices is to Copyright c 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. invert a model of rational choice based on sequential decision making given a real-valued utility function (Russell and Norvig 1995) . This approach is known as Inverse Reinforcement Learning (Ng and Russell 2000) in an RL setting and as Bayesian Inverse Planning (Baker, Saxe, and Tenenbaum 2009) in the setting of probabilistic generative models. This kind of approach usually assumes that the agent makes optimal decisions up to \"random noise\" in action selection (Kim et al. 2014; Zheng, Liu, and Ni 2014) . However, human deviations from optimality are more systematic. They result from persistent false beliefs, sub-optimal planning, and from biases such as time inconsistency and framing effects (Kahneman and Tversky 1979) . If such deviations are modeled as unstructured errors, we risk mistaken preference inferences. For instance, if an agent repeatedly fails to choose a preferred option due to a systematic bias, we might conclude that the option is not preferred after all. Consider someone who smokes every day while wishing to quit and viewing their actions as regrettable. In this situation, a model that has good predictive performance might nonetheless fail to identify what this person values. In this paper, we explicitly take into account structured deviations from optimality when inferring preferences. We construct a model of sequential planning for agents with inaccurate beliefs and time-inconsistent biases (in the form of hyperbolic discounting). We then do Bayesian inference over this model to jointly infer an agent's preferences, beliefs and biases from sequences of actions in a simple Gridworld-style domain. To demonstrate that this algorithm supports accurate preference inferences, we first exhibit a few simple cases where our model licenses conclusions that differ from standard approaches, and argue that they are intuitively plausible. We then test this intuition by asking impartial human subjects to make preference inferences given the same data as our algorithm. This is based on the assumption that people have expertise in inferring the preferences of others when the domain is simple and familiar from everyday experience. We find that our algorithm is able to make the same kinds of inferences as our human judges: variations in choice are explained as being due to systematic factors such as false beliefs and strong temptations, not unexplainable error. 
The possibility of false beliefs and cognitive biases means that observing only a few actions often fails to identify a single set of preferences. We show that humans recognize this ambiguity and provide a range of distinct explanations for the observed actions. When preferences can't be identified uniquely, our model is still able to capture the range of explanations that humans offer. Moreover, by computing a Bayesian posterior over possible explanations, we can predict the plausibility of explanations for human subjects. \n Computational Framework Our goal is to infer an agent's preferences from observations of their choices in sequential decision problems. The key question for this project is: how are our observations of behavior related to the agent's preferences? In more technical terms, what generative model (Tenenbaum et al. 2011) best describes the agent's approximate sequential planning given some utility function? Given such a model and a prior on utility functions, we could \"invert\" it (by performing full Bayesian inference) to compute a posterior on what the agent values. The following section describes the class of models we explore in this paper. We first take an informal look at the specific deviations from optimality that our agent model includes. We then define the model formally and show our implementation as a probabilistic program, an approach that clarifies our assumptions and enables easy exploration of deviations from optimal planning. \n Deviations from optimality We consider two kinds of deviations from optimality: False beliefs and uncertainty Agents can have false or inaccurate beliefs. We represent beliefs as probability distributions over states and model belief updates as Bayesian inference. Planning for such agents has been studied in work on POMDPs (Kaelbling, Littman, and Cassandra 1998) . Inferring the preferences of such agents was studied in recent work (Baker and Tenenbaum 2014; Panella and Gmytrasiewicz 2014) . Here, we are primarily interested in the interaction of false beliefs with other kinds of sub-optimality. Temporal inconsistency Agents can be time-inconsistent (also called \"dynamically inconsistent\"). Time-inconsistent agents make plans that they later abandon. This concept has been used to explain human behaviors such as procrastination, temptation and pre-commitment (Ainslie 2001) , and has been studied extensively in psychology (Ainslie 2001) and in economics (Laibson 1997; O'Donoghue and Rabin 2000) . A prominent formal model of human time inconsistency is the model of hyperbolic discounting (Ainslie 2001) . This model holds that the utility or reward of future outcomes is discounted relative to present outcomes according to a hyperbolic curve. For example, the discount for an outcome occurring at delay d from the present might be modeled as a multiplicative factor 1 1+d . The shape of the hyperbola means that the agent takes $100 now over $110 tomorrow, but would prefer to take $110 after 31 days to $100 after 30 days. The inconsistency shows when the 30th day comes around: now, the agent switches to preferring to take the $100 immediately. This discounting model does not (on its own) determine how an agent plans sequentially. We consider two kinds of time-inconsistent agents. These agents differ in terms of whether they accurately model their future choices when they construct plans. First, a Sophisticated agent has a fully accurate model of its own future decisions. 
Second, a Naive agent models its future self as assigning the same (discounted) values to options as its present self. The Naive agent fails to accurately model its own time inconsistency. 1 We illustrate Naive and Sophisticated agents with a decision problem that we later re-use in our experiments. The problem is a variant of Gridworld where an agent moves around the grid to find a place to eat (Figure 1 ). In the left pane (Figure 1a ), we see the path of an agent, Alice, who moves along the shortest path to the Vegetarian Cafe before going left and ending up eating at Donut Store D2. This behavior is sub-optimal independent of whether her preference is for the Vegetarian Cafe or the Donut Store, but can be explained in terms of Naive time-inconsistent planning. From her starting point, Alice prefers to head for the Vegetarian Cafe (as it has a higher undiscounted utility than the Donut Store). She does not predict that when close to the Donut Store (D2), she will prefer to stop there due to hyperbolic discounting. The right pane (Figure 1b ) shows what Beth, a Sophisticated agent with similar preferences to Alice, would do in the same situation. Beth predicts that, if she took Alice's route, she would end up at the Donut Store D2. So she instead takes a longer route in order to avoid temptation. If the longer route wasn't available, Beth could not get to the Vegetarian Cafe without passing the Donut Store D2. In this case, Beth would either go directly to Donut Store D1, which is slightly closer than D2 to her starting point, or (if utility for the Vegetarian Cafe is sufficiently high) she would correctly predict that she will be able to resist the temptation. \n Formal model definition We first define an agent with full knowledge and no time inconsistency, 2 and then generalize to agents that deviate from optimality. We will refer to states s ∈ S, actions a ∈ A, a deterministic utility function U : S ×A → R, a stochastic action choice function C : S → A, and a stochastic state transition function T : S × A → S. To refer to the probability that C(s) returns a, we use C(a; s). Optimal agent: full knowledge, no discounting Like all agents we consider, this agent chooses actions in proportion to exponentiated expected utility (softmax): [a] The noise parameter α modulates between random choice (α = 0) and perfect maximization (α = ∞). Expected utility depends on both current and future utility: C(a; s) ∝ e αEUs EU s [a] = U (s, a) + E s ,a [EU s [a ]] with s ∼ T (s, a) and a ∼ C(s ). Note that expected future utility recursively depends on C-that is, on what the agent assumes about how it will make future choices. Figure 2 : We specify agents' decision-making processes as probabilistic programs. This makes it easy to encode arbitrary biases and decision-making constraints. When automated inference procedures invert such programs to infer utilities from choices, these constraints are automatically taken into account. Note the mutual recursion between agent and expUtility: the agent's reasoning about future expected utility includes a (potentially biased) model of its own decision-making. \n Time-inconsistent agent To compute expected utility, we additionally take the expectation over states. Now EU p(s),o,d [a] is defined as: Inferring preferences We define a space of possible agents based on the dimensions described above (utility function U , prior p(s), discount parameter k, noise parameter α). 
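Before turning to inference over this space, a minimal Python sketch of the basic softmax agent defined above may be useful as a reference. The tiny deterministic MDP and the value of α are illustrative assumptions (not one of the paper's Gridworld episodes); the hyperbolic-discounting and partial-observability extensions add the delay, k, and p(s) parameters described in the text.

```python
import math

# Tiny illustrative deterministic MDP: 'start' -> 'left' or 'right' -> 'done'.
ACTIONS = {'start': ['go_left', 'go_right'], 'left': ['eat'], 'right': ['eat'], 'done': []}
TRANSITION = {('start', 'go_left'): 'left', ('start', 'go_right'): 'right',
              ('left', 'eat'): 'done', ('right', 'eat'): 'done'}
UTILITY = {('start', 'go_left'): 0.0, ('start', 'go_right'): 0.0,
           ('left', 'eat'): 1.0, ('right', 'eat'): 3.0}
ALPHA = 2.0  # softmax noise parameter (illustrative)

def choice_probs(state):
    """C(a; s) proportional to exp(alpha * EU_s[a])."""
    eus = {a: expected_utility(state, a) for a in ACTIONS[state]}
    z = sum(math.exp(ALPHA * v) for v in eus.values())
    return {a: math.exp(ALPHA * v) / z for a, v in eus.items()}

def expected_utility(state, action):
    """EU_s[a] = U(s, a) + E_{s', a'}[EU_s'[a']], where a' is drawn from the agent's
    own softmax choice function: the mutual recursion noted in the text."""
    next_state = TRANSITION[(state, action)]
    eu = UTILITY[(state, action)]
    if ACTIONS[next_state]:                      # not a terminal state
        probs = choice_probs(next_state)
        eu += sum(p * expected_utility(next_state, a) for a, p in probs.items())
    return eu

print(choice_probs('start'))   # favours 'go_right', which leads to higher downstream utility
```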
We additionally let Y be a variable for the agent's type, which fixes whether the agent discounts at all, and if so, whether the agent is Naive or Sophisticated. So, an agent is defined by a tuple θ := (p(s), U, Y, k, α), and we perform inference over this space given observed actions. The posterior joint distribution on agents conditioned on action sequence a 0:T is: E s∼p(s|o) 1 1 + kd U (s, a) + E s , P (θ|a 0:T ) ∝ P (a 0:T |θ)P (θ) (1) The likelihood function P (a 0:T |θ) is given by the multistep generalization of the choice function C corresponding to θ. For the prior P (θ), we use independent uniform priors on bounded intervals for each of the components. In the following, \"the model\" refers to the generative process that \n Sophisticated Figure 3 : Given data corresponding to Figure 1 , the model infers a joint posterior distribution on preferences, beliefs and other agent properties (such as discount strength) that reveals relations between different possible inferences from the data. The darker a cell, the higher its posterior probability. involves a prior on agents and a likelihood for choices given an agent. \n Agents as probabilistic programs We implemented the model described above in the probabilistic programming language WebPPL (Goodman and Stuhlmüller 2014). WebPPL provides automated inference over functional programs that involve recursion. This means that we can directly translate the recursions above into programs that represent an agent and the world simulation used for expected utility calculations. All of the agents above can be captured in a succinct functional program that can easily be extended to capture other kinds of sub-optimal planning. Figure 2 shows a simplified example (including hyperbolic discounting but not uncertainty over state). For the Bayesian inference corresponding to Equation 1 we use a discrete grid approximation for the continuous variables (i.e. for U , p(s), k and α) and perform exact inference using enumeration with dynamic programming. \n Model inferences We now demonstrate that the model described above can infer preferences, false beliefs and time inconsistency jointly from simple action sequences similar to those that occur frequently in daily life. We later validate this intuition in our experiments, where we show that human subjects make inferences about the agent that are similar to those of our model. Example 1: Inference with full knowledge We have previously seen how modeling agents as Naive and Sophisticated might predict the action sequences shown in Figures 1a and 1b respectively. We now consider the inference problem. Given that these sequences are observed, what can be inferred about the agent? We assume for now that the agent has accurate beliefs about the restaurants and that the two Donut Stores D1 and D2 are identical (with D1 closer to the starting point). 4 We model each restaurant as having an immediate utility (received on arriving at the restaurant) and a delayed utility (received one time-step after). This interacts with hyperbolic discounting, allowing the model to represent options that are especially \"tempting\" when they can be obtained with a short delay. For the Naive episode (Figure 1a ) our model infers that either softmax noise is very high or that the agent is Naive (as explained for Alice above). 
If the agent is Naive, the utility of the Vegetarian Cafe must be higher than the Donut Store (otherwise, the agent wouldn't have attempted to go to the Cafe), but not too much higher (or the agent wouldn't give in to temptation, which it in fact does). This relationship is exhibited in Figure 3 Example 2: Inference with uncertainty In realistic settings, people do not have full knowledge of all facts relevant to their choices. Moreover, an algorithm inferring preferences will itself be uncertain about the agent's uncertainty. What can the model infer if it doesn't assume that the agent has full knowledge? Consider the Sophisticated episode (Figure 1b ). Suppose that the Noodle Shop is closed, and that the agent may or may not know about this. This creates another possible inference, in addition to Sophisticated avoidance of temptation and high noise: The agent might prefer the Noodle Shop and might not know that it is closed. This class of inferences is shown in Figure 3 (bottom): When the agent has a strong prior belief that the shop is open, the observations are most plausible if the agent also assigns high utility to the Noodle Shop (since only then will the agent attempt to go there). If the agent does not believe that the shop is open, the Noodle Shop's utility does not matter-the observations have the same plausibility either way. In addition, the model can make inferences about the agent's discounting behavior (Figure 3 right): When utility for the Vegetarian Cafe is low, the model can't explain the data well regardless of discount rate k (since, in this case, the agent would just go to the Donut Store directly). The data is best explained when utility for the Vegetarian Cafe and discount rate are in balance-since, if the utility is very high relative to k, the agent could have gone directly to the Vegetarian Cafe, without danger of giving in to the Donut Store's temptation. Example 3: Inference from multiple episodes Hyperbolic discounting leads to choices that differ systematically from those of a rational agent with identical preferences. A time-inconsistent agent might choose one restaurant more often than another, even if the latter restaurant provides more \n Experiments with Human Subjects We have shown that, given short action sequences, our model can infer whether (and how) an agent is timeinconsistent while jointly inferring appropriate utilities. We claim that this kind of inference is familiar from everyday life and hence intuitively plausible. This section provides support for this claim by collecting data on the inferences of human subjects. In our first two experiments, we ask subjects to explain the behavior in Figures 1a and 1b . This probes not just their inferences about preferences, but also their inferences about biases and false beliefs that might have influenced the agent's choice. Experiment 1: Inference with full knowledge Experiment 1 corresponds to Example 1 in the previous section (where the agent is assumed to have full knowledge). Two groups of subjects were shown Figures 1a and 1b , having already seen two prior episodes showing evidence of a preference for the Vegetarian Cafe over the other restaurants. People were then asked to judge the plausibility of different explanations of the agent's behavior in each episode. 5 Results are shown in Figure 5 . In both Naive (Figure 1a ) and Sophisticated (1b) conditions, subjects gave the highest ratings to explanations involving giving in to temptation (Naive) or avoiding temptation (Sophisticated). 
Alternative explanations suggested that the agent wanted variety (having taking efficient routes to the Vegetarian Cafe in previous episodes) or that they acted purely based on a preference (for a long walk or for the Donut Store). Experiment 2: Inference with uncertainty Experiment 2 corresponds to Example 2 above. Subjects see one of the two episodes in Figure 1 (with Figure 1a modified so D1 and D2 can differ in utility and Figure 1b modified so the Noodle Shop is closed). There is no prior information about the agent's preferences, and the agent is not known to have accurate beliefs. We asked subjects to write explanations for the agent's behavior in the two episodes and coded these explanations into four categories. Figure 6 specifies which formal agent properties correspond to which category. While not all explanations correspond to something the model can infer about the agent, the most common explanations map cleanly onto the agent properties θ-few explanations provided by people fall into the \"Other\" category (Figure 7 ). The model inferences in this figure show the marginal likelihood of the observed actions given the corresponding property of θ, normalized across the four property types. In both the Naive and the Sophisticated case, the model and people agree on what the three highest-scoring properties are. Explanations involving false beliefs and preferences rate more highly than those involving time inconsistency. This is because, even if we specify whether the agent is Naive/So- Experiment 3: Inference from multiple episodes Following Example 3 above, subjects (n=50) saw the episodes in Figure 4 and inferred whether the agent prefers the Vegetarian Cafe or the Donut Store. Like the model, the majority of subjects inferred that the agent prefers the Vegetarian Cafe. Overall, 54% (+/-7 for 95% CI) inferred a preference of Vegetarian Cafe over the Donut Store, compared to the 59% posterior probability assigned by the model. Episode 2 (in which the agent does not choose the Donut Store) is identical to the Sophisticated episode from Figure 1 . Experiments 1 and 2 showed that subjects explain this episode in terms of Sophisticated time-inconsistent planning. Together with Experiment 3, this suggests that subjects use this inference about the agent's planning to infer the agent's undiscounted preferences, despite having seen the agent choose the Donut Store more frequently. \n Conclusion AI systems have the potential to improve our lives by helping us make choices that involve integrating vast amounts of information or that require us to make long and elaborate plans. For instance, such systems can recommend and filter the information we see on social networks or music services and can construct intricate plans for travel or logistics. For these systems to live up to their promise, we must be willing to delegate some of our choices to them-that is, we need such systems to reliably act in accordance with our preferences and values. It can be difficult to formally specify our preferences in complex domains; instead, it is desirable to have systems learn our preferences, just as learning in other domains is frequently preferable to manual specification. This learning requires us to build in assumptions about how our preferences relate to the observations the AI system receives. As a starting point, we can assume that our choices result from optimal rational planning given a latent utility function. 
However, as our experiments with human subjects show, this assumption doesn't match people's intuitions on the relation between preferences and behavior, and we find little support for the simplistic model where what is chosen most is inferred to be the most valued. We exhibited more realistic models of human decision-making, which in turn supported more accurate preference inferences. By approaching preference inference as probabilistic induction over a space of such models, we can maintain uncertainty about preferences and other agent properties when the observed actions are ambiguous. This paper has only taken a first step in the direction we advocate. Two priorities for further work are applications to more realistic domains and the development of alternatives to using human preference inferences as a standard by which to evaluate algorithms. The goal for this emerging subfield of AI is to make systems better able to support humans even in domains where human values are complex and nuanced, and where human choices may be far from optimal. Figure 1 : 1 Figure 1: Agents with hyperbolic discounting exhibit different behaviors depending on whether they model their future discounting behavior in a manner that is (a) Naive (left) or (b) Sophisticated (right). \n o ,a EU p(s|o),o ,d+1 [a ] with s ∼ T (s, a), o ∼ p(o|s ) and a ∼ C(p(s|o), o , d + 1) (for the Naive agent) or a ∼ C(p(s|o), o , 0) (for the Sophisticated agent). \n (top left), which shows the model posterior for the utilities of the Donut Store and Vegetarian Cafe (holding fixed the other agent components Y , k, and α). \n Figure 4 : 4 Figure 4: The observations in Experiment 3 show the Donut Chain Store being chosen twice and the Vegetarian Cafe once. \n Figure 5 : 5 Figure5: Explanations in Experiment 1 for the agent's behavior in Figure1a(Naive) and 1b (Sophisticated). Subjects (n=120) knew that the agent has accurate knowledge, and saw prior episodes providing evidence of a preference for the Vegetarian Cafe. Subjects selected scores in {1, 2, 3}. \n Figure 6 :Figure 7 : 67 Figure6: Map from properties invoked in human explanations to formalizations in the model. The left column describes the property. The center column shows how we formalized it in terms of the variables used to define an agent θ. The right column gives an explanation (from our dataset of human subjects) that invokes this property. \n Now the agent's choice and expected utility function are parameterized by a delay d, which together with a constant k controls how much to discount future utility: var agent = function(state, delay){ return Marginal( function(){ var action = uniformDraw(actions) var eu = expUtility(state, action, delay) factor(alpha * eu) return action }) } var expUtility = function(state, action, delay){ if (isFinal(state)){ return 0 } else { var u = 1/(1 + k * delay) * utility(state, action) return u + Expectation(function(){ var nextState = transition(state, action) var nextAction = sample(agent(nextState, delay+1)) return expUtility(nextState, nextAction, delay+1) }) } } EU s,d [a] = 1 1 + kd U (s, a) + E s ,a The agent's choice and expected utility functions are now parameterized by the distribution p(s) and the current ob- servation o: C(a; p(s), o, d) ∝ e αEU p(s),o,d [a] C(a; s, d) ∝ e αEU s,d[a] [EU s ,d+1[a ]] with s ∼ T (s, a). For the Naive agent, a ∼ C(s , d + 1), whereas for the Sophisticated agent, a ∼ C(s , 0). When we compute what the agent actually does in state s, we set d to 0. 
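The delay-parameterized recursion above (shown as a WebPPL program in Figure 2) can also be sketched in Python. The toy chain, its utilities, and the values of k and α below are illustrative assumptions chosen so that the tempting option wins at delay 0 but not at longer delays, which makes the Naive/Sophisticated difference visible.

```python
import math

# Illustrative chain (not the paper's Gridworld):
# 'near_donut' --enter_donut--> 'end'                      (utility 4, tempting)
# 'near_donut' --continue----> 'at_cafe' --eat_veg--> 'end' (utility 7, one step later)
ACTIONS = {'near_donut': ['enter_donut', 'continue'], 'at_cafe': ['eat_veg'], 'end': []}
TRANSITION = {('near_donut', 'enter_donut'): 'end', ('near_donut', 'continue'): 'at_cafe',
              ('at_cafe', 'eat_veg'): 'end'}
UTILITY = {('near_donut', 'enter_donut'): 4.0, ('near_donut', 'continue'): 0.0,
           ('at_cafe', 'eat_veg'): 7.0}
K, ALPHA = 1.0, 10.0   # discount strength and softmax noise (illustrative values)

def choice_probs(state, delay, sophisticated):
    eus = {a: eu(state, a, delay, sophisticated) for a in ACTIONS[state]}
    z = sum(math.exp(ALPHA * v) for v in eus.values())
    return {a: math.exp(ALPHA * v) / z for a, v in eus.items()}

def eu(state, action, delay, sophisticated):
    """EU_{s,d}[a] = U(s,a)/(1 + K*d) + E_{s',a'}[EU_{s',d+1}[a']], where a' is drawn
    from the agent's model of its future choice: evaluated at delay d+1 if Naive,
    at delay 0 if Sophisticated (matching what the future self will actually do)."""
    value = UTILITY[(state, action)] / (1.0 + K * delay)
    s_next = TRANSITION[(state, action)]
    if ACTIONS[s_next]:                          # not a terminal state
        future_delay = 0 if sophisticated else delay + 1
        probs = choice_probs(s_next, future_delay, sophisticated)
        value += sum(p * eu(s_next, a, delay + 1, sophisticated) for a, p in probs.items())
    return value

print("actual choice at near_donut:", choice_probs('near_donut', 0, sophisticated=False))
print("Naive model of that choice: ", choice_probs('near_donut', 1, sophisticated=False))
print("Sophisticated model:        ", choice_probs('near_donut', 0, sophisticated=True))
# With these illustrative numbers, the actual (delay-0) choice strongly favours
# enter_donut, the Naive model (evaluated at delay 1) favours continuing to the
# cafe, and the Sophisticated model matches the actual behaviour.
```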
As a consequence, only the Sophisticated agent correctly predicts its future actions.3 An implementation of the Naive agent as a probabilistic program is shown in Figure 2.

Time-inconsistent agent with uncertainty
We now relax the assumption that the agent knows the true world state. Instead, we use a distribution p(s) to represent the agent's belief about which state holds. Using a likelihood function p(o|s), the agent can update this belief:

p(s|o) ∝ p(s) p(o|s)

Contents of Figure 6 (Property | Formalization | Example explanation from our human subjects):
Agent doesn't know Donut Store D1 is open. | p(D1 = open) < 0.15 | "He decided he wanted to go to the Donut Store for lunch. He did not know there was a closer location"
Agent falsely believes Noodle Shop is open. | p(N = open) > 0.85 | "He was heading towards the noodle shop first, but when he got there, it was closed, so he continued on the path and ended up settling for ... vegetarian cafe."
Agent prefers D2 to D1. | U(D2) > U(D1) | "He might also enjoy the second donut shop more than the first"
Agent is Naive / Sophisticated. | Y = Naive/Soph. | "He ... headed for the Vegetarian Cafe, but he had to pass by the Donut shop on his way. The temptation was too much to fight, so he ended up going into the Donut Shop."

Footnotes:
The distinction and formal definition of Naive and Sophisticated agents is discussed in O'Donoghue and Rabin (1999).
This is the kind of agent assumed in the standard setup of an MDP (Russell and Norvig 1995).
3 This foresight allows the Sophisticated agent to avoid tempting states when possible. If such states are unavoidable, the Sophisticated agent will choose inconsistently.
In Experiment 2, we allow the utilities for D1 and D2 to be different. See row 3 of Figure 6 and the "Preference" entry for Sophisticated in Figure 7.
In a pilot study, we showed subjects the same stimuli and had them write free-form explanations. In Experiment 1, subjects had to judge four of the explanations that occurred most frequently in this pilot.", "date_published": "n/a", "url": "n/a", "filename": "12476-55467-1-PB.tei.xml", "abstract": "An important use of machine learning is to learn what people value. What posts or photos should a user be shown? Which jobs or activities would a person find rewarding? In each case, observations of people's past choices can inform our inferences about their likes and preferences. If we assume that choices are approximately optimal according to some utility function, we can treat preference inference as Bayesian inverse planning. That is, given a prior on utility functions and some observed choices, we invert an optimal decision-making process to infer a posterior distribution on utility functions. However, people often deviate from approximate optimality. They have false beliefs, their planning is sub-optimal, and their choices may be temporally inconsistent due to hyperbolic discounting and other biases. We demonstrate how to incorporate these deviations into algorithms for preference inference by constructing generative models of planning for agents who are subject to false beliefs and time inconsistency. We explore the inferences these models make about preferences, beliefs, and biases. We present a behavioral experiment in which human subjects perform preference inference given the same observations of choices as our model. 
Results show that human subjects (like our model) explain choices in terms of systematic deviations from optimal behavior and suggest that they take such deviations into account when inferring preferences.", "id": "c25d3b0a143fc772c5488af4d5282df6"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Siddharth Srivastava"], "title": "Unifying Principles and Metrics for Safe and Assistive AI", "text": "Introduction Recent years have witnessed immense progress in research on safe and assistive AI systems as well as on the potential impact of AI on the future of work. These directions of research address two sides of a common, fundamental concern: how would humans work with AI systems? While research on AI safety focuses on designing AI systems that allow humans to safely instruct and control them (e.g., (Russell, Dewey, and Tegmark 2015; Zilberstein 2015; Hadfield-Menell et al. 2016; Russell 2017; Hadfield-Menell et al. 2017 )), research on AI and the future of work focuses on the impact of AI on members of the workforce who may be unable to do so (Arntz, Gregory, and Zierahn 2016; Manyika et al. 2017; Nedelkoska and Quintini 2018) . This paper presents the view that in addition to the productive streams of research outlined above, we need unifying metrics and declarative objectives that would allow a more uniform evaluation of AI systems on the extent to which an AI system is suitable for working with specific classes of human operators. It also presents a common principle for human-centered AI systems that allows the development of such metrics. Consequently, rather than proposing a specific new design for AI systems, the focus of this paper is on elucidating the declarative principles and types of metrics that would lead to concerted progress on the problem. The advantage of this declarative approach to framing the problem is that it enables an assessment of progress independent of the internal design being used in an AI system, and it will help draw out the strengths and weaknesses of different design approaches. Without such a specification, design differences can make solution paradigms difficult to compare. E.g., one might develop a complex system architecture that builds user profiles and provides appropriate assistance. This system would have very different input requirements and design and performance parameters than a formulation that addresses the same problem by computing assistance policies using planning under partial observability while incorporating the value of information to learn more about a user's poorly articulated objectives and constraints. Better declarative objectives and metrics for assistive AI systems would also help ensure that, regardless of the methods being used, progress amounts to advancement towards safe and assistive AI systems. More pragmatically, such metrics will not only help end-users assess the utility of a given AI system but they will also help AI researchers and developers identify more readily the dimensions along which further research will be beneficial for applications of their interest. The next section presents a succession of intuitive principles for safe and assistive AI systems, and shows that evaluating the compatibility of a system with such principles (in particular P2) helps clarify the required types of metrics. 
The paper concludes by drawing the attention of our community towards research on the operationalization of such metrics along with promising research directions on developing systems that do well on them. \n Unifying Principles for Safe and Assistive AI Systems We focus on taskable AI systems that carry out user-assigned high-level tasks using arbitrary mechanisms for reasoning and planning over multiple time steps. E.g., household robots that can be given objectives such as setting the table or doing laundry, co-manufacturing robots that can assist workers in creating complex assemblies with heavy components, digital assistants that can plan a vacation given the user's preferences, etc. Such systems serve as sound integrative platforms and model end-to-end applications where the AI system is responsible for assistance in the execution of long-horizon tasks. AI systems are frequently evaluated in terms of performance measures such as the computational complexity of computing the required behavior, training data requirements, and the quality of the computed behavior in terms of execution time, resources used, risk of unsafe outcomes etc. We can consider systems that optimize such performance metrics as Level 0 of assistive AI systems. Level I of assistive AI systems Recent AI research has also focused on assistive properties of AI systems. We begin with a rather common-sensical principle defining Level I of such safe and assistive AI systems: P1: An AI system must make it easy for its operators to use it safely. The italicized terms denote dimensions along which compatibility of AI systems with principle P1 can be evaluated; while a lot of current AI research utilizes one or more of these dimensions for evaluation, a closer analysis reveals some new insights. In the context of this paper we consider using an AI system to be synonymous with instructing it to change its behavior as desired. Different interfaces may be used for this, including programming, text, speech, gestures etc. We consider the operators of an AI system as those persons who use it in the sense described above. For instance, if a self-driving car gives all its passengers the right to give it instructions, then all of them are its operators; if it gives instruction-rights to only a qualified set of users, perhaps adults who pass an assisted-driving exam, then the class of operators is defined by that qualification exam. Safety refers to the overall safety of the AI system's interactions with its environment, which may be physical or online. These dimensions of compatibility with P1 serve as natural dimensions for evaluating Level I assistive AI systems: E1. How inclusive is the set of operators? Systems whose operators require PhD-level expertise in AI may be less desirable for broader deployments. E2. How easy is it for the system's operators to change its behavior? E3. Which set of tasks can the AI system be used for? E4. What form of safety guarantees does the system provide? Systems that are unable to provide an upper bound on expected risks are clearly less desirable than those that stipulate conditions under which upper bounds can be provided. P1 serves to highlight the interplay between these dimensions of compliance and evaluation. Safety guarantees are often inversely related with the size of the operator set. A system may provide a high level of safety, but only under the requirement that its operators take extensive training programs. 
At one end of the spectrum, automated robot vacuum cleaners require almost no prior skills and perform limited, well-defined tasks. Safety issues are still present-a robot vacuum cleaner may pull on electrical cables that have been left on the floor; auto-completion software may send emails to unintended recipients. However, the lower expected damage from using such applications has made them broadly accepted in society. AI-powered industrial robots are at the other end of the spectrum: these devices require specialized training as well as operating environments in order to ensure safety (see for instance, ISO/TS 15066:2016 on collaborative robots). Typically, these systems operate in regions that humans cannot access when the robot is active unless the human is hand-guiding the robot within a safe operating envelope that limits the speed and range of operation. Their functionality is closely controlled and monitored. Only skilled engineers change their functionality while day-to-day operators monitor execution of predictable behavior and control run-stops from designated safe areas. Similarly, the safety of current airborne drone operations is ensured by requiring specially trained drone operators (Marshall 2020) . Thus, principle P1 holds for AI systems today with varying degrees of compliance along E1-E4. The examples above illustrate how practical implementations often rely upon implicitly defined operator classes to provide acceptable levels of safety. Such approaches rely upon limiting the users and the scope of a system to achieve an acceptable level of compatibility with P1: it is easy for such systems' users to operate it safely because the set of users is required to be sufficiently skilled, and its functionality is sufficiently limited for that group to be able to safely use the device. However, a broader emphasis on the need to specify safety guarantees with respect to different classes of operators would help mitigate some of the risks associated with broadly deployed AI systems. In contrast to classical automated systems, AI systems feature a more nuanced interplay between the class of tasks that a system can be used for (E3) and the other dimensions above. Traditionally deployed systems (even automated systems) have a very well-defined boundary of use cases. This allows for an easier classification of safety. Taskable AI systems on the other hand, are expected to change their behavior and functionality as they learn and adapt to new environments or new user-given objectives. For such systems, we need better methods for deriving the range of instructions that different operator-groups are allowed to provide. Scenarios like the self-driving car that needs to decide whether to obey a child's instruction allude to this requirement. Methods for assessing user and AI competency can also allow the AI system to expand its range of functionality by drawing upon the expertise of its operator (Basich et al. 2020) while ensuring an acceptable level of safety. A major limitation of P1 and its associated metrics is that it does not evaluate the amount of training required for an individual to qualify as an operator for the system. This creates a blind-spot in evaluation of the ease-of-use or safety of an AI system: since user-training occurs outside the requirements outlined by P1, an unsafe AI system (or one that is deployed in an unsafe manner) would simply claim that its so-called operator was insufficiently trained! 
Furthermore, if P1 were a sufficient categorization of safe and assistive AI systems, we would have no need for explainable AI as compliance with P1 does not require the system to be easy to understand. An implicit emphasis on assessing AI systems only along some aspects of P1 may also explain the increasing prevalence of concerns about the workers who may be left behind in the future workplace. From this perspective it is unsurprising that these concerns have gained renewed interest at a time when AI applications have reached a level of maturity where they are being used by non-AI-experts in situations that have some inherent safety risks. However, the \"assistive\" nature of such AI systems is undermined by the need for highly skilled individuals who could safely debug, understand and modify the behavior of such systems. Level II of assistive AI systems In order to address the limitations of P1, we consider the following as a guiding principle for safe and assistive AI: P2: An AI system must make it easy for its operators to learn how to use it safely. P2 changes the notion of operators from those who are qualified to use a given AI system to those who are qualified to start learning how to use it. In addition to the metrics associated with P1, P2 introduces a new dimension: E5. How easy is it to learn how to use the AI system? What are the expected prerequisites and costs of training for its operators? Can training be provided on-the-job? This dimension could also be viewed as evaluating the resources required to train operators for P1 systems. Most AI systems in use today would perform poorly on this new dimension, and consequently, on compatibility with P2 as a whole. Explainable AI (e.g., (Ribeiro, Singh, and Guestrin 2016; Hayes and Shah 2017; Chakraborti et al. 2017; Hoffman et al. 2018; Gunning and Aha 2019; Weld and Bansal 2019; Anderson et al. 2019; Eifler et al. 2020 )) plays a key role along this dimension because systems that are easy to understand or that can explain themselves naturally make it easier for people to learn how to use them. P2 leverages the unique strengths of AI as a field of research. AI research already addresses the problem of estimating users' skills; research on intelligent tutoring systems and AI for education addresses the problem of identifying skill gaps. This can be used to determine the minimal differential training to be provided to an operator. P2 places the onus of training on the deployed AI system and opens up a new direction of interdisciplinary research connecting existing research directions in AI with research in humansystems engineering and in industrial engineering for the development of productive training modalities and the operationalization of metrics for E5. It also allows AI systems to formally characterize different scopes of functionality for different classes of operators, e.g., operators that use manufacturing robots for pre-determined tasks, those that give the robots new instructions, or those that are ready to learn about giving the robot new instructions. P2 is not required for every AI system-P1 would be sufficient for systems that place minimal requirements on operator qualifications (e.g., robot vacuum cleaners) and for non-adaptive AI systems that require a small set of operators. On the other hand, P2 serves as a better declarative foundation for evaluating taskable AI systems that are meant to assist large numbers of non-AI-experts on a wide range of tasks. 
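To show how evaluations along E1 through E5 might be recorded in practice, the sketch below defines a simple structured profile and fills it in for two of the systems mentioned above. It is only an illustrative assumption about how such metrics could be operationalized; the field names, example values, and training-hour estimates are invented for the example and are not proposed in the paper.

from dataclasses import dataclass, field
from typing import List

# Illustrative sketch (not from the paper) of recording an evaluation along
# dimensions E1-E5 as a structured profile, so that systems with very
# different internal designs can be compared on the same fields.
@dataclass
class AssistiveAIEvaluation:
    system_name: str
    # E1: how inclusive is the set of operators?
    operator_prerequisites: List[str] = field(default_factory=list)
    # E2: how easy is it for operators to change the system's behavior?
    instruction_modalities: List[str] = field(default_factory=list)
    # E3: which set of tasks can the system be used for?
    supported_tasks: List[str] = field(default_factory=list)
    # E4: what form of safety guarantees does the system provide?
    safety_guarantee: str = "none stated"
    # E5 (specific to P2): expected cost of learning to use the system safely.
    training_prerequisites: List[str] = field(default_factory=list)
    training_hours_estimate: float = 0.0
    on_the_job_training: bool = False

vacuum = AssistiveAIEvaluation(
    system_name="robot vacuum cleaner",
    operator_prerequisites=[],                      # E1: essentially anyone
    instruction_modalities=["buttons", "app"],      # E2
    supported_tasks=["floor cleaning"],             # E3: narrow, well defined
    safety_guarantee="low expected damage in household settings",  # E4
    training_hours_estimate=0.1,                    # E5: negligible (assumed)
    on_the_job_training=True,
)

industrial_arm = AssistiveAIEvaluation(
    system_name="AI-powered industrial robot",
    operator_prerequisites=["certified robot operator"],
    instruction_modalities=["teach pendant", "offline programming"],
    supported_tasks=["assembly with heavy components"],
    safety_guarantee="safe only inside a controlled operating envelope",
    training_prerequisites=["ISO/TS 15066-style collaborative-robot training"],
    training_hours_estimate=40.0,                   # assumed, for illustration
    on_the_job_training=False,
)

for profile in (vacuum, industrial_arm):
    print(profile.system_name, "-> E5 training hours:", profile.training_hours_estimate)

Profiles like these would let end users compare systems with very different internal designs on the same dimensions, which is the role the preceding discussion assigns to declarative metrics.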
Increasing concerns about job roles that would feature a high-degree of interaction with AI systems (and the workers that are likely to be left behind) allude to the pressing need for including E5, a dimension for evaluation under P2 (and not P1) as a part of an AI system's evaluation. AI systems that are not beneficial (either in terms of AI safety or in terms of the future of work) fare poorly on P2. E.g., systems that can thwart their users' objectives by wireheading and those that may derive incorrect objective functions from user instructions make it difficult for an operator to learn how to provide instructions that are specific enough to be safe, and fare poorly along E5. Similarly, systems that require extensive training investment to be used effectively and safely fail along E5. In this way P2 serves as a unifying principle encompassing research on AI safety as well as on AI for a beneficial future of work. \n Promising Directions of Research P2 serves as a declarative principle for guiding research on assistive AI systems as well as for developing metrics for evaluating AI systems and their deployments. Converting this principle into tangible metrics calls for interdisciplinary research including AI and other fields associated with human factors. The increasing prevalence of research thrusts on safe and assistive AI systems (Fern et al. 2014; Russell, Dewey, and Tegmark 2015; Amodei et al. 2016; Gunning and Aha 2019) makes this a particularly opportune phase for formalizing the metrics and the interfaces required for evaluating AI systems for compatibility with P2 along dimensions E1-E5. Recent research on AI safety and explainable AI develops methods improving the ease of use and safety of AI systems along P2 (see, for instance, the ICML 2020 Workshop on Explainable AI). Placing AI systems that compute user-skill aligned explanations (Sreedharan, Srivastava, and Kambhampati 2018; Sreedharan et al. 2019 ) in a loop with AI systems for identifying user-skills and skill-gaps can help develop AI systems that gradually present users with new functionality and explain it, thereby training their users onthe-fly and as needed. Such systems would be better tuned towards P2, and towards addressing the underlying problems of AI safety and the future of work. Critically, ensuring progress towards safe and assistive AI systems requires that AI systems with arbitrary internal designs support assessment along the metrics developed for E1-E5. This raises a new set of research questions: Can we develop non-intrusive AI-interface requirements for supporting such evaluations in the face of changing operating environments and objectives? The need for such interfaces is even more pressing for systems that learn and those that undergo system updates after deployment. What is the minimal external interface that an AI system must support so as to allow its independent evaluation? How would changing the nature of such interfaces change the complexity of conducting such an evaluation? One would expect that AI sys-tems that offer more transparency would be easier to evaluate. Could we use the inherent reasoning capabilities of AI systems to develop interface requirements that would allow more adept systems to make such evaluations easier? 
E.g., rather than testing a manufacturing robot to discover its response to every possible situation, could we ask higher-level queries such as \"under which situations would you be able to create the proposed assembly?\" Clearly, the ease of assessment of an AI system would depend on the class of queries that it can answer. Recent work suggests that a minimal query-response interface for AI systems that connects the system with a simulator and observes its responses to high-level instructions can be quite powerful. Such an interface has a few distinct advantages. Current AI systems are already tested with simulators and they are inherently required to be able to take user instructions, so these interface requirements can be considered to be natural. They also allow the autonomous synthesis of query-policies: running the query-policy on a black-box taskable AI system can help construct an interpretable model of the limits and capabilities of that system (Verma, Marpally, and Srivastava 2021) . Such models can be used to support the evaluations discussed above. Extensions of such interface requirements to arbitrary AI systems would help ensure that our AI systems are amenable to independent evaluation. Such a paradigm would allow users to assess their AI systems while freeing AI researchers and developers to utilize arbitrary internal implementations. Systems with interfaces that support more efficient and accurate independent assessment would be rewarded with greater public adoption of their products. Progress on these threads would help prevent undesirable situations such as insufficient support for independent evaluation of powerful AI systems, and the negative consequences of deployment of an insufficiently evaluated system.", "date_published": "n/a", "url": "n/a", "filename": "17769-Article Text-21263-1-2-20210518.tei.xml", "abstract": "The prevalence and success of AI applications have been tempered by concerns about the controllability of AI systems about AI's impact on the future of work. These concerns reflect two aspects of a central question: how would humans work with AI systems? While research on AI safety focuses on designing AI systems that allow humans to safely instruct and control AI systems, research on AI and the future of work focuses on the impact of AI on humans who may be unable to do so. This Blue Sky Ideas paper proposes a unifying set of declarative principles that enable a more uniform evaluation of arbitrary AI systems along multiple dimensions of the extent to which they are suitable for use by specific classes of human operators. It leverages recent AI research and the unique strengths of the field to develop human-centric principles for AI systems that address the concerns noted above.", "id": "4f3013f7a6238bf69bff939ac70f9ae1"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Martin Dresler", "Anders Sandberg", "Christoph Bublitz", "Kathrin Ohla", "Carlos Trenado", "Aleksandra Mroczko-Wąsowicz", "○,◆ Simone Kuḧn", "Dimitris Repantis"], "title": "Hacking the Brain: Dimensions of Cognitive Enhancement", "text": "INTRODUCTION An increasingly complex world exerts increasing demands on cognitive functionsfunctions that evolved for a fundamentally different environment. Daily life in an information society and a postindustrial economy require cognitive skills that have to be acquired through slow, effortful, and expensive processes of education and training. 
Likewise, these skills can become obsolete as the world changes ever faster or be lost by the processes of aging. People also vary in their mental abilities, allowing them to acquire certain skills more quickly or more slowly, which may have significant effects on life outcomes. Strategies to improve the acquisition and maintenance of cognitive skills are thus increasingly important on both an individual and societal level. These challenges of our times have fostered the exploration of strategies to enhance human brain function. While people have since time immemorial sought to improve their performance, the present era is unique in that not only are the challenges growing rapidly, but so are technologies that promise to meet them. Just like the hacking culture in the realm of computer software and hardware, an increasing number of individuals experiment with strategies to creatively overcome the natural limitations of human cognitive capacity: in other words, to hack brain function. This development has led to both enthusiasm and dread, as observers have sharply differing intuitions about the feasibility, utility, risks, and eventual impact of enhancement technologies on the world. One reason for the often polarized debates has been the lack of hard evidence. Without empirical findings, it is easy to maintain any position, as well as regard opponents as having unfounded views. A further essential source of disagreement and theoretical confusion is a tendency to view enhancement as a unitary phenomenon to be judged as a whole, rather than as a broad set of techniques with important differences and diverging implications. Only on the basis of a clear picture of how a particular enhancement strategy might affect specific cognitive processes in specific populations, along with the side effects and costs to be expected, can an informed theoretical debate evolve and promising empirical research designs to test the strategy be proposed. In the following, we aim to elucidate seven essential dimensions of cognitive enhancement, namely, (a) its mode of action, (b) the cognitive domain targeted, (c) personal factors, (d) its time scale, (e) side effects, (f) availability, and (g) social acceptance (see Figure 1 ). Further, we will review empirical data on prominent examples of cognitive enhancers that differ across these dimensions and thereby illustrate some of their nuanced implications. The aim of our Review is to sketch a general framework that will foster both theoretical discussions and empirical research. \n MODE OF ACTION A widely cited definition characterizes enhancement as interventions in humans that aim to improve mental functioning beyond what is necessary to sustain or restore good health. 1 While the current bioethical debate on cognitive enhancement is strongly focused on pharmacological ways of enhancement, improving mental capabilities also by nonpharmacological means has to be considered as cognitive enhancement proper according to the given characterization. We have reviewed elsewhere the efficacy of a number of nonpharmacological enhancers. 2, 3 To systematize the vast variety of different approaches of cognitive enhancement, we suggest clustering enhancement strategies into three major areas according to their main mode of action. 
Even though boundaries are not strict, most cognitive enhancement strategies can be considered to work as either biochemical, physical, or behavioral interventions (Figure 2 ). In the following, we will give an overview on the different cognitive enhancement strategies within these clusters. \n Biochemical Strategies. The prototypical cognitive enhancers addressed in the public debate are biochemical agents. However, biochemical interventions are not restricted to pharmaceutical \"smart drugs\". Also application of ordinary substances such as oxygen has been shown to increase, e.g., memory processes 4, 5 and neural activation in memory-related brain regions. 6 Biochemical enhancers with the longest tradition in human history are strategies to make use of certain nutritional components. Most widely used are probably glucose 7 and caffeine, 8, 9 which both have demonstrated cognition-enhancing effects in numerous studies. In addition to coffee, other beverages from caffeine-bearing plants such as guarana have shown to enhance cognition. 10 While the noncaffeine components in caffeine-bearing plants might exert independent effects on cognition, 11 it has been doubted that industrially designed drinks contain cognitive enhancing components that go beyond caffeine, glucose, or guarana extract. 12 Further nutritional components with some evidence for cognitive enhancing effects are flavonoids, e.g., in cocoa, 13, 14 curry powder (most likely due to the curcumin that it contains, 15, 16 folic acid, 17 or omega-3 fatty acids. 18 Besides specific dietary supplements, also the absence of food might enhance cognition: some evidence has been reported that fasting and general caloric restriction might improve memory in elderly individuals. 19, 20 Also certain traditional natural remedies have been discussed as cognitive enhancers: besides herbs that also grow in Western regions such as salvia, 21 particularly traditional Chinese and \n ACS Chemical Neuroscience \n Review Indian herbal medicines such as Bacopa monnieri have been ascribed with cognitive enhancing effects. 22, 23 However, with ginseng and ginkgo biloba, the most prominent examples of such traditional Asian herbal remedies so far have failed to consistently show positive effects on cognitive functions in healthy individuals. 24, 25 A further biochemical intervention with a long history concerns drugs that are being used recreationally and that have demonstrated the potential to enhance certain cognitive functions. For example, nicotine improves attention and memory, 26−28 and even alcohol, despite impairing many cognitive functions, might enhance others such as creative processes 29, 30 or, retroactively, memory. 31 Pharmaceuticals are in particular by the public regarded as prototypical cognitive enhancers: synthetic stimulants such as amphetamine, methylphenidate, or modafinil, or antidementia drugs such as acetylcholinesterase inhibitors and memantine are at the core of public debate on cognitive enhancement. However, evidence for their efficacy for augmenting brain function and cognition in healthy subjects is often markedly lower than assumed in theoretical discussions. 32−37 Importantly, the lack of an objective effect on cognition can be accompanied by a considerable placebo effect: for example, users who believed to have received mixed-amphetamine salts subjectively rated themselves as performing better and even show minor objective performance increases, independent of actual medication state. 
38 While pharmacological enhancers are typically designed to affect or mimic certain neurotransmitters, also neural signaling molecules themselves such as adrenaline, 39 GABA, 40 glucocorticoids, 41 ovarian hormones, 42 and different neuropeptides 43−45 have been suggested as cognitive enhancers. A further biochemical strategy for cognitive enhancement consists of genetic modifications, which have been demonstrated to augment several learning and memory processes in animal models. 46−51 Although progress has also been made in elucidating the genetic basis of cognitive traits in humans, 52 genetic modifications in humans still have to be considered as future strategies rather than currently available enhancement options. 2.2. Physical Strategies. The current most widely discussed physical strategies for cognitive enhancement include a number of brain stimulation technologies. Whereas the cognition enhancing effects of invasive methods such as deep brain stimulation 53, 54 are restricted to subjects with pathological conditions, several forms of allegedly noninvasive stimulation strategies are increasingly used on healthy subjects, among them electrical stimulation methods such transcranial direct current stimulation (tDCS 55 ), transcranial alternating current stimulation (tACS 56 ), transcranial random noise stimulation (tRNS 57 ), transcranial pulsed current stimulation (tPCS 58, 59 ), transcutaneous vagus nerve stimulation (tVNS 60 ), or median nerve stimulation (MNS 61 ). Details of the stimulation procedures appear to be crucial: commercial do-it-yourself electrical brain stimulators might impair rather than enhance cognition, 62 and systematic reviews have shed doubt on a clear and simple enhancing effect of electrical brain stimulation on different cognitive domains also under controlled laboratory conditions. 63, 64 Recent studies have even questioned if some of the most commonly used setups for electrical brain stimulation have neurophysiologically meaningful effects at all. 65−68 On this background, the development of noninvasive deep brain stimulation via temporally interfering electric fields might provide a more systematic and targeted mechanism compared to the currently used approaches. 68 Besides electrical stimulation methods, also for transcranial magnetic stimulation (TMS 69 ), optical stimulation with lasers, 70 and several forms of acoustic stimulation, such as transcranial focused ultrasound stimulation, 71 binaural beats, 72, 73 or auditory stimulation of the EEG theta rhythm 74 or sleep EEG slow oscillations, 75 a potential for cognitive enhancement has been reported. Physical enhancement methods that target brain processes more indirectly include whole body vibrations, 76 stochastic resonance motor control improvement, 77, 78 and several forms of neurofeedback, 79 with, e.g., EEG neurofeedback in the upper alpha band enhancing memory, 80 working memory, 81 and visuospatial skills. 82 Besides classical neurofeedback training that involves unspecific but active effort of the subject, also neurofeedback interventions that automatically feedback low energy currents in response to EEG activity have been developed, thereby allowing the subject to receive the training procedure passively. 83 Recently, the use of fMRI neurofeedback, utilizing multivariate pattern analysis, has shown the potential to increase sustained attention 84 or visuospatial memory. 85 Finally, humans have always deployed physical tools to assist cognitive functioning. 
In current developments that converge minds and machines, these tools become more closely integrated with the person. 86 Crowdfunding or biohacking communities have developed numerous novel technical devices to increase cognitive functions transiently with, e.g., wearable electronic memory aids or augmented reality gadgets, 87, 88 or more permanently as in the case of cognition enhancing or extending bodily implants. 88, 89 Neural implants or prosthetics have progressed; in controlled laboratory settings, implants could facilitate human memory. 90 In addition, Brain−Computer Interfaces connect the central nervous system with computers through wearable or implanted electrodes and may afford a range of applications that enhance cognitive functions or joint outputs of minds coupled with machines. 91, 92 2.3. Behavioral Strategies. Although not commonly recognized as such by the public, cognitive enhancers with the most wide use and longest history are probably behavioral strategies: a rapidly growing body of evidence shows that everyday activities such as sleep 93 or physical exercise 94−96 improve cognitive functioning. Also well-established cultural activities such as musical training, 97, 98 dancing, 99 or learning a second language 100 have been demonstrated to enhance cognition beyond the specifically trained skills. In addition to these natural and cultural standard activities, several behavioral strategies have been developed to enhance certain brain functions intentionally. Two strategies that reach back to ancient times are mnemonic techniques to enhance learning and memory 101, 102 and meditation training to enhance attention processes and mindfulness. 103, 104 In contrast, commercial video games 105, 106 and customized computer trainings 107 represent historically very recent developments that are targeted to enhance specific cognitive capacities and skills. In contrast to several years of enthusiasm and widespread commercial application, however, more recent controlled studies and meta-analyses have shed some doubt on the efficacy of computerized brain training programs, 108 particularly criticizing claims of \"far transfer\" of training gains to cognitive domains considerably different from the specifically trained skills. 109, 110 ACS Chemical Neuroscience Review \n COGNITIVE DOMAIN The human mind is not a monolithic entity, but consists of a broad variety of cognitive functions. Not surprisingly, no single cognitive enhancer augments every cognitive function. Instead, most cognitive enhancers have specific profiles regarding their efficacy for different cognitive domains. Memory is, e.g., strongly enhanced by mnemonic strategies, but not by meditation; attention, in turn, is strongly enhanced by meditation training, but not training in mnemonic strategies. 101, 103, 104 Sleep, in contrast, enhances both cognitive capacities. 111, 112 Some computerized cognitive trainings have been found to enhance memory, processing speed and visuospatial skills, but not executive functions or attention. 107 It is currently highly debated in how far specific training strategies exert transfer effects also to nontrained cognitive domains. 113 Different cognitive tasks require different optimal levels of receptor activation, thus requiring different doses of pharmacological enhancers targeting the respective neurotransmitter system depending on the cognitive domain targeted. 
114 Of note, effects of pharmacological enhancement on different cognitive domains might even differ depending on the cognitive test battery used, illustrating the fragility of the respective effects. 115 Some interventions might even enhance one but impair another cognitive domain: Intranasal application of oxytocin has been shown to enhance social cognition and cognitive flexibility but impairs long-term memory. 116, 117 Methylphenidate improves the ability to resist distraction but impairs cognitive flexibility. 118 Computerized working memory training has been reported to enhance working memory, reasoning, and arithmetic abilities; however, it might deteriorate creativity. 119 Also for amphetamines and modafinil, potential impairments on creativity are discussed besides their enhancing effects on other domains. 120, 34 In contrast, alcohol might enhance creative processes while impairing most other cognitive functions. 29 The costs and benefits of a single cognitive enhancer might even change through slight changes in the application process: for example, electrical stimulation of posterior brain regions was found to facilitate numerical learning, whereas automaticity for the learned material was impaired. In contrast, stimulation on frontal brain regions impaired the learning process, whereas automaticity for the learned material was enhanced. 121 Brain stimulation has thus been suggested to be a zero-sum game, with costs in some cognitive functions always being paid for gains in others. 122 This implies that enhancement may have to be tuned to the task at hand, in order to focus on the currently most important cognitive demands. \n PERSONAL FACTORS The efficacy of cognitive enhancers does not only differ for different cognitive domains, but also for different users. An important factor in this regard are the cognitive skills of the individual prior to the enhancement intervention. Many pharmaceuticals, including amphetamine, 123 modafinil, and methylphenidate, 124 work mainly in individuals with low baseline performance. In some cases, even impairments in individuals with higher performance at baseline have been reported, e.g., in the case of amphetamine, 125 methylphenidate, 124 nicotine, 27 or acute choline supplementation. 126 The phenomenon of enhanced cognition in individuals with low baseline performance and impairments in those with high baseline performance can be explained by the classical inverted U-model, 127, 128 where performance is optimal at intermediate levels of the targeted neurochemicals and impaired at levels that are either too low or too high. 129−131 For some drugs such as methylphenidate, enhancement dependency on the baseline might even differ between cognitive functions, with performance in specific tasks being improved in low, while impaired in high, performers, 124 but showing the opposite pattern for other tasks. 132 The baseline-dependency of cognitive enhancement is not restricted to pharmaceuticals: also in the case of video games, 133 cognitive training, 134 or brain stimulation, 135, 136 individuals starting at a lower baseline performance benefit more than those with an already high performance at baseline. In contrast, sleep appears to improve memory particularly in subjects with a higher baseline performance of memory, 137 working memory 138 or intelligence. 139 Also mnemonic training appears to work particularly well in individuals with a higher cognitive baseline performance. 
140 This has been interpreted in terms of an amplification model, in which high baseline performance and cognitive enhancement interventions show synergistic effects. 141 Cognitive enhancers can also affect individuals differently depending on basic biological, psychological, or social factors. For example, effects of training interventions on selective attention can depend on the genotype of the trainee; 142 effects of methylphenidate on creativity can depend on personality characteristics; 143 the cognition enhancing effects of sleep 144 or video games 145 are modulated by gender. In turn, such modulations of enhancement effects might reduce existing differences in cognitive profiles, as seen, e.g., in action video game training, that have the potential to eliminate gender differences in spatial attention and decrease the gender disparity in mental rotation ability. 146 Also the hormonal status of subjects affects how strongly they profit, e.g., from sleep 144−148 or brain stimulation. 149 Caffeine enhances working memory particularly in extraverted individuals, 150 and memory enhancement through sleep 151 or mnemonic training 140 has been reported to depend on the age of subjects. Health status affects how much users benefit from different kinds of cognitive enhancers, including pharmaceuticals, 3 mnemonics, 152 or sleep. 153−156 Finally, also socioenvironmental factors such as social resources, parental occupation, or family composition can modulate cognitive enhancement interventions, e.g., with cognitive training programs. 157 \n TIME SCALE Interventions for cognitive enhancement differ in the specific time scale they require to achieve their aims. The prototypical \"smart pill\" discussed in popular accounts of cognitive enhancement needs practically no preparation time, exerts its effects within seconds or minutes, and lasts for several hours. While this is close to reality in the case of some pharmacological enhancers, the temporal pattern of most other enhancement strategies differs strongly from these time scales. In particular, the time needed for application and the duration of their effects markedly varies between enhancement interventions. Most pharmacological enhancers can be applied quickly and without further preparation; however, some drugs such as acetylcholinesterase inhibitors or memantine are thought to require longer periods of intake to be effective. 33 Also some nutritional enhancers such as glucose and caffeine exert their effects rather quickly, whereas other nutritional supplements have to be taken over extended periods to show an impact on cognition. 158, 18 Obviously, behavioral strategies like sleep, \n ACS Chemical Neuroscience Review exercise, video games or mnemonic training need hours or weeks to robustly enhance cognition. Some effects of meditation might even take years of training. 159 For brain stimulation methods, both immediate effects of acute stimulation, but also more delayed effects after repeated stimulation haven been observed. 55, 69 Technological gadgets or implants need some preparation to be installed and accommodated to, however then exert their cognition augmenting effects on demand. Enhancing effects of most quickly acting pharmacological or nutritional cognitive enhancers also wear off rather rapidly. In contrast to such transient effects, interventions such as brain stimulation, 160, 161, 57 sleep, 162 mnemonic strategies 163 or genetic modifications 46 have the potential for long-term up to chronic enhancement. 
However, in the latter case, the reversibility of the effects (and side effects) of an enhancement intervention might be a further aspect to be considered. Interventions can also differ regarding the time point of application relative to the situation when enhanced cognitive performance is needed. For example, application of stress hormones such as cortisol or adrenaline before or after memory encoding enhances memory, whereas application before retrieval impairs memory; 41 benzodiazepines impair memory when given before and enhance memory when given after encoding; 164 in contrast, caffeine before learning enhances memory under certain conditions but might impair memory when consumed afterward. 165, 9 Mnemonic strategies on the other hand work solely when taught/applied before/during encoding, but can hardly be applied afterward. Finally, some interventions can also influence the timing of cognitive performance itself: stimulants such as methylphenidate, modafinil, and caffeine might increase the time subjects take to perform a given task, with impairing effects under time pressure and potentially enhancing effects in the absence of temporal constraints. 166 \n SIDE EFFECTS The pharmaceutical platitude that there is no effect without side effects holds true also for many nonpharmacological enhancement interventions. It appears obvious that cognitive enhancers differ in the severity and form of side effects: prima facie, deep brain stimulation or implants have higher risks for side effects than sleep or cognitive training. However, also more indirect enhancement strategies such as neurofeedback potentially bear the risk of side effects up to inducing epileptiform activity, 167 and even gentle intervention such as meditation training might exert negative effects on specific cognitive domains: a negative relationship between mindfulness and implicit learning 168, 169 and an increased susceptibility to false memory formation after mindfulness meditation 170 have been observed. Here, the intended training goal of nonjudgemental mindfulness opposes tasks where either a more critical or automatic mindset was needed. Further examples of side effects intrinsically associated with the enhancement goal are trade-offs between stability vs flexible updating of memory systems: 129 memories can also become \"too stable\" due to a memory enhancement intervention, as observed, e.g., for the anti-obesity drug rimonabant. 171 It has been suggested to differentiate enhancement strategies according to their level of invasiveness. 172, 173 However, while invasiveness has a more or less definite meaning in its original medical context, physically breaching the skin or entering the body deeply through an external orifice, 174 it is difficult to determine the level of invasiveness in the context of cognitive enhancement. Both nutritional supplements and pharmaceuticals enter the body, and thus could be considered invasive in a narrow medical sense, as might be certain forms of physical exercise due to the risk of bruises or scratches as common, e.g., in martial arts or a hike through the woods. Brain stimulation that does not break the skin would, by contrast, be classified as noninvasive. This taxonomy can be disputed for good reasons. 175 Besides known risks of these stimulation methods such as scalp burns from tDCS or seizures from TMS, the \"known unknowns\" have been suggested to pose potentially even greater risks: potential build-up effects across multiple sessions or in sensitive nontarget areas. 
176 Of note, only few neuroscientists use brain stimulation on themselves for cognitive enhancement. 176 Given the still unclear risks and side effects of do-it-yourself brain stimulation use, it has been proposed to extend existing medical device legislation to cover also nonpharmacological and in particular physically acting cognitive enhancement devices. 177, 178 In contrast to strict medical definitions, the more intuitively assessed level of invasiveness of an intervention often seems to depend on familiarity and cultural traditions. This leads to the Western attitude according to which changing one's diet or performing exercise appears less invasive than taking pharmaceuticals or applying brain stimulation, independent of their actual effects on health. Related to the time scale dimension, side effects of short-vs long-term use of cognitive enhancers can be differentiated. For example, while side effects for the acute use of methylphenidate include increased heart rate, headache, anxiety, nervousness, dizziness, drowsiness, and insomnia, in the case of long-term use side effects such as abnormal prefrontal brain function and impaired plasticity have been reported. 179, 180 Also addiction is a well-known side effect for the long-term use of pharmacological enhancers, which is particularly detrimental to the aim of enhancement if combined with tolerance effects such that larger doses are needed to achieve the same effect (or prevent impairing withdrawal effects). Also behavioral addictions have been observed, e.g., physical exercise 181 or the use of technological gadgets. 182 A somewhat nonobvious negative effect of some cognitive enhancers is their illusional efficacy: users sometimes believe their performance to be enhanced by amphetamine in absence of any verified and objectively visible enhancing effects, even if administered in a double blind manner. 183, 184, 38 This is particularly counterproductive in cases of already highfunctioning individuals whose cognitive capabilities might be impaired rather than enhanced by amphetamine. 125, 184 Also for caffeine, under certain conditions higher subjectively perceived mental energy in the absence of objectively enhancing effects have been observed. 185 The often subtle effects of enhancers can be hidden or amplified by placebo effects. \n AVAILABILITY Cognitive enhancers differ in at least three aspects of availability: legal status, cost, and application time. In terms of legal regulation, different enhancement methods are regulated by sometimes drastically varying frameworks. Pharmaceuticals, for instance, are regulated by strict international control regimes that effectively prohibit nontherapeutic uses or by more lenient domestic drug laws. Brain-stimulation methods, by contrast, fall under medical device regulations, pertaining to basic safety standards in terms but not proscribing the uses to which they might be put. 177, 178 Behavioral strategies are usually not regulated at all. The regulatory landscape is thus vast and \n ACS Chemical Neuroscience Review possibly incoherent (for a review, see ref 186) . Besides practical hurdles to the acquisition of illicit drugs for cognitive enhancement, the legal status appears to affect the motivation of users to decide which cognitive enhancers to take. 166 A common ethical argument in the enhancement debate concerns distributive justice: also legally available enhancers come with cost barriers, restricting individuals of low socioeconomic status from access. 
187 A main factor in the costs of cognitive enhancers is their patentability, which is not restricted to pharmaceuticals. 188 However, in particular, behavioral enhancement strategies are typically not subject to patentability or other cost-driving factors: sleep, exercise, meditation, or training in mnemonic strategies are largely free of cost and, thus, in contrast to pharmaceuticals or technological strategies, are available independent from the financial background of the user. On the other hand, these behavioral strategies require some time and effort: the 24/7 working manager as the clichéuser of cognitive enhancement drugs might have the financial means to afford quickly taking his expensive smart pill between two meetings, but might be unable or unwilling to spend extended periods of time with sleep, meditation, or mnemonic training. \n SOCIAL ACCEPTANCE Largely independent from their specific enhancing effects on different cognitive capacities, social acceptance of cognitive enhancement interventions differs strongly depending on traditions, their perceived naturalness, and the perceived directness of their mode of action. Enhancement interventions with a tradition of thousands of years such as meditation and nutrition are typically much better accepted than many currently debated enhancement strategies such as brain stimulation and pharmaceuticals. 189 Also more \"natural\" interventions such as sleep or exercise are seen in a more positive light compared to technological innovations. 190 Moreover, in how far the mode of action is perceived as psychologically mediated or more biologically direct, affecting the brain indirectly through the senses or more directly through the cranium or metabolism, often plays a role for their social acceptance: if an enhancement intervention such as intense cognitive or physical training requires extended efforts or is seen as a quick and effortless shortcut to the same goal as in the case of smart pills or brain stimulation touches different intuitions about human virtues and is thus valuated differently. Even though views based on such purely intuitive aspects of tradition, naturalness, or directness often rely on cognitive biases rather than rational argument, 191 a negative social perception for whatever reason might generate indirect psychological costs for users, which in turn might influence also rational evaluations of the respective enhancement intervention. 192 Accordingly, one of the central points in the ethical controversy revolves around the question of whether enhancement strategies only relevantly differ with respect to their outcomes, i.e., their benefits and side effects, 193 or also with respect to their mode of operation. 194 Some argue that the relevant ethical distinction runs along the lines of enhancements that are active, in the sense of requiring participation, and those that work on persons more passively. 195 Not surprisingly, different views on cognitive enhancement prevail in different (sub)cultures, with, e.g., a more positive view on enhancement interventions in Asia 196 or in younger populations. 197 Empirical studies on attitudes toward cognitive enhancement interventions found medical safety, coercion, and fairness the most common concerns, with nonusers displaying more concerns regarding medical safety and fairness than users. 
198 Sometimes readily available substances for cognitive enhancement such as caffeine, energy drinks, or herbal drugs are dubbed \"soft enhancers\"; 199 however, considering that prohibition of substances is not only based on their potential harm, but also on historical circumstances, this differentiation between soft and hard enhancers appears questionable. A further aspect that determines the social acceptance of cognitive enhancement is the aim of the given intervention. Taken by face value, the term cognitive enhancement denotes any action or intervention that improves cognitive capacities, independent from the specific aim of this improvement. The use of the term in the empirical, philosophical, and sociopolitical literature, however, varies with regard of the specific aim of enhancement interventions: people appear to be more tolerant toward enhancement of traits considered less fundamental to self-identity, 200 and also more tolerant toward enhancement in individuals with cognitive impairments or low performance baselines compared to enhancement of normal or high achievers. 201, 202 At least four different aims can be identified, each leading to different research strategies and different ethical evaluations of existing or potential enhancement strategies. 203 The least problematic concept of cognitive enhancement targets those everyday medical or psychological interventions that are meant to treat pathological deficiencies. Closely related are those cognitive enhancement interventions that aim to prevent or attenuate cognitive decline that is associated also with healthy aging. 204 Slightly less accepted appear to be those enhancement strategies that aim to improve cognition in completely healthy individuals, but still clearly stay within the normal limits of cognition. The probably most widely used and ethically most controversial concept of cognitive enhancement aims at the augmentation of cognitive capacities beyond normal function, as is represented in the clichéof high-functioning students or managers trying to further improve their performance by taking smart pills. Besides these differentiations between enhancement of impaired vs healthy cognition, another difference in the aims of cognitive enhancement touches the ultimate deed of the enhancement intervention: due to the central role of cognitive capacities in defining humans as a species, it is tempting to consider the improvement of these defining human capacities as a value in itself. However, most philosophical or religious approaches do not center on objective cognitive performance markers, but propose values only indirectly related to cognitive performance such as living a happier or more meaningful life in general. In this light, human enhancement in more general terms does not need to aim for individual cognitive or neural processes, but can also be achieved by sociopolitical reforms targeted at the population level. 205, 206 9. CONCLUSIONS Cognitive enhancement clearly is a multidimensional endeavor. However, not every dimension is important for every theoretical or empirical research question. For example, many empirical researchers of cognitive enhancement are primarily interested in the understanding of the neurobiological and psychological mechanisms underlying cognitive functions. 207 For this aim, the availability and social acceptance dimensions are largely irrelevant. 
In contrast, many theorists are interested in the social and ethical implications of cognitive enhancement, 208 where these dimensions might be of prime importance. Side effects and temporal factors might likewise be of secondary importance to empirical researchers with an interest in the neural mechanisms of certain cognitive processes, whereas they would be highly relevant for users who ponder the question of which cognitive enhancement strategy to choose for a certain aim. When comparing different cognitive enhancement strategies, different dimensions might thus be weighted differently or completely ignored, depending on the aim of the comparison. Up until now, direct comparisons between cognitive enhancement strategies with radically different modes of action have rarely been made (but see, e.g., ref 165), and more comprehensive comparisons across dimensions might be difficult: practical issues of information availability across the different dimensions aside, interventions typically differ on different dimensions and are thus difficult to compare globally. In addition, multiple interactions between different enhancers exist, which further complicates the situation. Interactions have been reported, e.g., for glucose and caffeine, 209 diet and exercise, 210 exercise and working memory training, 211 video games and sleep, 212 video games and brain stimulation, 213 exercise and brain stimulation, 214 and brain stimulation and sleep. 215, 216 The different dimensions discussed here can also interact in multiple ways: e.g., computerized cognitive training can differentially enhance different cognitive processes depending on personal factors such as age, 217 and the social acceptance of different enhancement strategies depends on both the baseline performance of users and the cognitive domain targeted. 200, 201 Despite, or because of, these complexities, in our opinion both theoretical discussions and empirical research would strongly benefit from a more differentiated approach. Specific research questions might require emphasizing some dimensions of cognitive enhancement over others, and for some research questions some dimensions might be entirely irrelevant. Nevertheless, keeping in mind that cognitive enhancement is not a monolithic phenomenon will help to resolve and avoid a number of confusions and disagreements that are still present in the public debate on cognitive enhancement. \n Figure 1. Cognitive enhancement interventions differ across several interdependent dimensions. \n Figure 2. Cognitive enhancement interventions differ in their modes of action.", "date_published": "n/a", "url": "n/a", "filename": "acschemneuro.8b00571.tei.xml", "abstract": "In an increasingly complex information society, demands for cognitive functioning are growing steadily. In recent years, numerous strategies to augment brain function have been proposed. Evidence for their efficacy (or lack thereof) and side effects has prompted discussions about ethical, societal, and medical implications. In the public debate, cognitive enhancement is often seen as a monolithic phenomenon.
On a closer look, however, cognitive enhancement turns out to be a multifaceted concept: There is not one cognitive enhancer that augments brain function per se, but a great variety of interventions that can be clustered into biochemical, physical, and behavioral enhancement strategies. These cognitive enhancers differ in their mode of action, the cognitive domain they target, the time scale they work on, their availability and side effects, and how they differentially affect different groups of subjects. Here we disentangle the dimensions of cognitive enhancement, review prominent examples of cognitive enhancers that differ across these dimensions, and thereby provide a framework for both theoretical discussions and empirical research.", "id": "5c694d79e072f8c68905a15988932c1a"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Dorsa Sadigh", "S Shankar", "Sastry Sanjit", "A Seshia", "Anca Dragan"], "title": "Information Gathering Actions over Human Internal State", "text": "I. introduction Imagine driving on the highway. Another driver is in the lane next to you, and you need to switch lanes. Some drivers are aggressive and they will never brake to let you in. Others are more defensive and would gladly make space for you. You don't know what kind of driver this is, so you decide to gently nudge in towards the other lane to test their reaction. At an intersection, you might nudge in to test if the other driver is distracted and they might just let you go through (Fig. 1 bottom left). Our goal in this work is to give robots the capability to plan such actions as well. In general, human behavior is affected by internal states that a robot would not have direct access to: intentions, goals, preferences, objectives, driving style, etc. Work in robotics and perception has focused thus far on estimating these internal states by providing algorithms with observations of humans acting, be it intent prediction [23] , [13] , [6] , [3] , [4] , [16] , Inverse Reinforcement Learning [1] , [12] , [15] , [22] , [18] , driver style prediction [11] , affective state prediction [10] , or activity recognition [20] . Human state estimation has also been studied in the context of human-robot interaction tasks. Here, the robot's reward function depends (directly or indirectly) on the human internal state, e.g., on whether the robot is able to adapt to the human's plan or preferences. Work in assistive teleoperation or in human assistance has cast this problem as a Partially Observable Markov Decision Process, in which the robot does observe the physical state of the world but not the human internal state -that it has to estimate from human actions. Because POMDP solvers are not computationally efficient, the solutions proposed thus far use the current estimate Fig. 1 : We enable robots to generate actions that actively probe humans in order to find out their internal state. We apply this to autonomous driving. In this example, the robot car (yellow) decides to inch forward in order to test whether the human driver (white) is attentive. The robot expects drastically different reactions to this action (bottom right shows attentive driver reaction in light orange, and distracted driver reaction in dark orange). We conduct a user study in which we let drivers pay attention or distract them with cellphones in order to put this state estimation algorithm to the test. 
of the internal state to plan (either using the most likely estimate, or the entire current belief), and adjust this estimate at every step [8] , [7] , [5] . Although efficient, these approximations sacrifice an important aspect of POMDPs: the ability to actively gather information. Our key insight is that robots can leverage their own actions to help estimation of human internal state. Rather than relying on passive observations, robots can actually account for the fact that humans will react to their actions: they can use this knowledge to select actions that will trigger human reactions which in turn will clarify the internal state. We make two contributions: An Algorithm for Active Information Gathering over Human Internal State. We introduce an algorithm for planning robot actions that have high expected information gain. Our algorithm uses a rewardmaximization model of how humans plan their actions in response to those of the robot's [19] , and leverages the fact that different human internal states will lead to different human reactions to speed up estimation. Fig. 1 shows an example of the anticipated difference in reaction between a distracted and an attentive driver. Application to Driver Style Estimation. We apply our algorithm to estimating a human driver's style during the interaction of an autonomous vehicle with a human-driven vehicle. Results in simulation as well as from a user study suggest that our algorithm's ability to leverage robot actions for estimation leads to significantly higher accuracy in identifying the correct human internal state. The autonomous car plans actions like inching forward at an intersection (Fig. 1 ), nudging into another car's lane, or braking slightly in front of a human-driven car, all to estimate whether the human driver is attentive. Overall, we are excited to have taken a step towards giving robots the ability to actively probe end-users through their actions in order to better estimate their goals, preferences, styles, and so on. Even though we chose driving as our application domain in this paper, our algorithm is general across different domains and types of human internal state. We anticipate that applying it in the context of human goal inference during shared autonomy, for instance, will lead to the robot purposefully committing to a particular goal in order to trigger a reaction from the user, either positive or negative, in order to clarify the desired goal. Of course, further work is needed in order to evaluate how acceptant end-users are of different kinds of such probing actions. \n II. Information Gathering Actions We start with a general formulation of the problem of a robot needing to maximize its reward by acting in an environment where a human is also acting. The human is choosing its actions in a manner that is responsive to the robot's actions, and also influenced by some internal variables. While most methods that have addressed such problems have proposed approximations based on passively estimating the internal variable and exploiting that estimate, here we propose a method for active information gathering that enables the robot to purposefully take actions that probe the human. Finally, we discuss our implementation in practice, which trades off between exploration and exploitation. \n A. General Formulation We define a human-robot system in which the human's actions depend on some human internal state ϕ that the robot does not directly observe. 
In a driving scenario, ϕ might correspond to driving style: aggressive or timid, attentive or distracted. In collaborative manipulation scenarios, ϕ might correspond to the human's current goal, or their preference about the task. We let x ∈ X be a continuous physical state of our system. For our running example of autonomous cars, this includes the position, velocity, and heading of the autonomous and human-driven vehicles. Let ϕ ∈ Φ be the hidden variable, e.g., the human driver's driving style. We assume the robot observes the current physical state x^t, but not the human internal state ϕ. The robot and human can apply continuous controls u_R and u_H. The dynamics of the system evolve as the robot's and human's control inputs arrive at each step: x^{t+1} = f_H(f_R(x^t, u_R^t), u_H^t). (1) Here, f_R and f_H represent how the actions of the robot and of the human respectively affect the dynamics, and can be applied synchronously or asynchronously. We assume that while x changes via (1) based on the human and robot actions, ϕ does not. For instance, we assume that the human maintains their preferences or driving style throughout the interaction. The robot's reward function in the task depends on the current state, the robot's action, as well as the action that the human takes at that step in response, r_R(x^t, u_R^t, u_H^t). If the robot has access to the human's policy π_H(x, u_R, ϕ), maximizing robot reward can be modeled as a POMDP [9] with states (x, ϕ), actions u_R, and reward r_R. In this POMDP, the dynamics model can be computed directly from (1) with u_H^t = π_H(x^t, u_R^t, ϕ). The human's actions can serve as observations of ϕ via some observation model P(u_H | x, u_R, ϕ). In Sec. II-C we introduce a model for both π_H and P(u_H | x, u_R, ϕ) based on the assumption that the human is maximizing their own reward function. If we were able to solve the POMDP, the robot would estimate ϕ based on the human's actions, and optimally trade off between exploiting its current belief over ϕ and actively taking information gathering actions meant to cause human reactions that give the robot a better estimate of the hidden variable ϕ. Because POMDPs cannot be solved tractably, several approximations have been proposed for similar problem formulations [8], [11], [7]. These approximations passively estimate the human internal state and exploit the belief to plan robot actions. 1 In this work, we take the opposite approach: we focus explicitly on active information gathering. We enable the robot to decide to actively probe the person to get a better estimate of ϕ. Our method can be leveraged in conjunction with exploitation methods, or be used alone when human state estimation is the robot's primary objective. \n B. Reduction to Information Gathering At every step, the robot can update its belief over ϕ via: b^{t+1}(ϕ) ∝ b^t(ϕ) · P(u_H | x^t, u_R, ϕ). (2) To explicitly focus on taking actions to estimate ϕ, we redefine the robot's reward function to capture the information gain at every step: r_R(x^t, u_R, u_H) = H(b^t) − H(b^{t+1}) (3) with H(b) being the entropy over the belief: H(b) = −(∑_ϕ b(ϕ) log b(ϕ)) / (∑_ϕ b(ϕ)). (4) Optimizing expected reward now entails reasoning about the effects that the robot actions will have on what observations the robot will get, i.e., the actions that the human will take in response, and how useful these observations will be in shattering ambiguity about ϕ.
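To make (2)-(4) concrete, here is a minimal numerical sketch of the belief update and the entropy-based information-gain reward over a discrete set of internal states (for example, attentive vs. distracted). This is an illustrative sketch, not the authors' implementation; the function names and the example likelihood values are assumptions made for the example.

```python
import numpy as np

def entropy(belief):
    """H(b) as in (4): entropy of the (normalized) belief over the internal state ϕ."""
    p = np.asarray(belief, dtype=float)
    p = p / p.sum()
    return -np.sum(p * np.log(p))

def update_belief(belief, likelihoods):
    """Bayesian update (2): b^{t+1}(ϕ) ∝ b^t(ϕ) · P(u_H | x^t, u_R, ϕ)."""
    b = np.asarray(belief, dtype=float) * np.asarray(likelihoods, dtype=float)
    return b / b.sum()

def info_gain_reward(belief, likelihoods):
    """Information-gain reward (3): how much an observed human action shrinks uncertainty."""
    return entropy(belief) - entropy(update_belief(belief, likelihoods))

# Example: prior belief over ϕ ∈ {attentive, distracted}, and the likelihood of one
# observed human action under each hypothesis (illustrative numbers only).
b0 = [0.5, 0.5]
lik = [0.8, 0.2]   # the observed reaction is much more probable for an attentive driver
print(update_belief(b0, lik))      # -> [0.8, 0.2]
print(info_gain_reward(b0, lik))   # positive: the action was informative
```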
\n C. Solution: Human Model & Model Predictive Control We solve the information gathering planning problem via Model Predictive Control (MPC) [14]. At every time step, we find the optimal actions of the robot u_R^* by maximizing the expected reward function over a finite horizon. Notation. Let x^0 be the state at the current time step, i.e., at the beginning of the horizon, let u_R = (u_R^0, . . . , u_R^{N−1}) be a finite sequence of the robot's continuous actions, and let u_H = (u_H^0, . . . , u_H^{N−1}) be a finite sequence of the human's continuous actions. Further, let R_R(x^0, u_R, u_H) denote the reward over the finite horizon if the agents started in x^0 and executed u_R and u_H, which can be computed via the dynamics in (1). MPC Maximization Objective. At every time step, the robot computes the best actions for the horizon: u_R^* = arg max_{u_R} E_ϕ[ R_R(x^0, u_R, u_H^{*ϕ}(x^0, u_R)) ] (5) where u_H^{*ϕ}(x^0, u_R) corresponds to the actions the human would take from state x^0 over the horizon of N steps if the robot executed actions u_R. Here, the expectation is taken over the current belief over ϕ, b^0. Simplifying (5) using the definition of reward from (3), we get: u_R^* = arg max_{u_R} E_ϕ[ H(b^0) − H(b^N) ], (6) u_R^* = arg max_{u_R} E_ϕ[ −H(b^N) ], (7) where the expectation remains with respect to b^0. Human Model. We assume that the human maximizes their own reward function at every step. We let r_H^ϕ(x^t, u_R^t, u_H^t) represent the human's reward function at time t, which is parametrized by the human internal state ϕ. Then, the sum of human rewards over horizon N is: R_H^ϕ(x^0, u_R, u_H) = ∑_{t=0}^{N−1} r_H^ϕ(x^t, u_R^t, u_H^t) (8) Building on our previous work [19], which showed how the robot can plan using such a reward function when there are no hidden variables, we compute u_H^{*ϕ}(x^0, u_R) through an approximation. We model the human as having access to u_R a priori, and compute the finite horizon human actions that maximize the human's reward: u_H^{*ϕ}(x^0, u_R) = arg max_{u_H} R_H^ϕ(x^0, u_R, u_H) (9) One can find r_H^ϕ through Inverse Reinforcement Learning (IRL) [1], [12], [15], [22], by getting demonstrations of human behavior associated with a direct measurement of ϕ. The robot can use this reward in the dynamics model in order to compute human actions via (9). To update the belief b and compute expected reward in (7), we still need an observation model. We assume that actions with lower reward are exponentially less likely, building on the principle of maximum entropy [22]: P(u_H | x, u_R, ϕ) ∝ exp(R_H^ϕ(x, u_R, u_H)) (10) Optimization Procedure. To solve (5) (or equivalently (7)), we use a gradient-based quasi-Newton method, L-BFGS, designed for unconstrained nonlinear problems [2]. We therefore need the gradient of the objective in equation (5) with respect to u_R. Since the objective is the expectation of R_R, we can reformulate this gradient as: ∂E_ϕ[R_R(x^0, u_R, u_H^{*ϕ}(x^0, u_R))]/∂u_R = ∑_ϕ ∂R_R(x^0, u_R, u_H^{*ϕ}(x^0, u_R))/∂u_R · b^0(ϕ) (11) Then, we only need to find ∂R_R/∂u_R, which is equivalent to: ∂R_R(x^0, u_R, u_H^{*ϕ}(x^0, u_R))/∂u_R = ∂R_R(x^0, u_R, u_H)/∂u_H · ∂u_H^{*ϕ}/∂u_R + ∂R_R(x^0, u_R, u_H)/∂u_R |_{u_H = u_H^{*ϕ}(x^0, u_R)} (12) Because R_R, as indicated by (7), simplifies to the negative entropy of the updated belief, we can compute both ∂R_R(x^0, u_R, u_H)/∂u_H and ∂R_R(x^0, u_R, u_H)/∂u_R |_{u_H = u_H^{*ϕ}(x^0, u_R)} symbolically. This leaves ∂u_H^{*ϕ}/∂u_R, which is obtained by implicit differentiation below.
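Before completing that derivation, the following sketch shows one way the nested structure of (5)-(9) can be set up in practice: for each candidate ϕ, the human's best response (9) is computed by an inner optimization, the belief is rolled forward with (2), and the outer planner maximizes the expected negative entropy (7). This is an illustrative sketch under stated assumptions, not the authors' implementation: the helper functions `dynamics`, `human_reward`, and `likelihood` are assumed to be supplied by the user, and the sketch uses L-BFGS with numerical gradients instead of the symbolic gradient of (11)-(15).

```python
import numpy as np
from scipy.optimize import minimize

# Assumed user-provided pieces (not from the paper's code):
#   dynamics(x, u_R, u_H)            -> next state, as in eq. (1)
#   human_reward(x, u_R, u_H, phi)   -> scalar r_H^phi, the summand of eq. (8)
#   likelihood(u_H, x, u_R, phi)     -> observation model P(u_H | x, u_R, phi), eq. (10)

def human_best_response(x0, U_R, phi, dynamics, human_reward, N, u_dim):
    """Inner optimization, eq. (9): the human's finite-horizon actions that
    maximize their own reward, assuming the robot's plan U_R is known."""
    def neg_R_H(U_H_flat):
        U_H = U_H_flat.reshape(N, u_dim)
        x, total = x0, 0.0
        for t in range(N):
            total += human_reward(x, U_R[t], U_H[t], phi)
            x = dynamics(x, U_R[t], U_H[t])
        return -total
    res = minimize(neg_R_H, np.zeros(N * u_dim), method="L-BFGS-B")
    return res.x.reshape(N, u_dim)

def expected_neg_entropy(U_R_flat, x0, b0, PHI, dynamics, human_reward,
                         likelihood, N, u_dim):
    """Objective of eq. (7): E_phi[-H(b^N)], with the human responses predicted
    by the reward-maximization model for each candidate phi."""
    U_R = U_R_flat.reshape(N, u_dim)
    value = 0.0
    for i, phi_true in enumerate(PHI):
        U_H = human_best_response(x0, U_R, phi_true, dynamics, human_reward, N, u_dim)
        b, x = np.array(b0, dtype=float), x0
        for t in range(N):
            # eq. (2): Bayesian belief update with the model-predicted human action
            lik = np.array([likelihood(U_H[t], x, U_R[t], phi) for phi in PHI])
            b = b * lik
            b = b / b.sum()
            x = dynamics(x, U_R[t], U_H[t])
        value += b0[i] * np.sum(b * np.log(b + 1e-12))   # b0-weighted -H(b^N)
    return value

def plan_probing_actions(x0, b0, PHI, dynamics, human_reward, likelihood, N=5, u_dim=2):
    """Outer MPC step, eqs. (5)/(7). Uses L-BFGS with numerical gradients; the
    paper instead derives the gradient symbolically via eqs. (11)-(15)."""
    obj = lambda U: -expected_neg_entropy(U, x0, b0, PHI, dynamics, human_reward,
                                          likelihood, N, u_dim)
    res = minimize(obj, np.zeros(N * u_dim), method="L-BFGS-B")
    return res.x.reshape(N, u_dim)
```

The nested inner optimization makes numerical gradients expensive, which is one motivation for computing the gradient in closed form as the paper does in (11)-(15).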
We use the fact that the gradient of R_H will evaluate to zero at u_H^{*ϕ}: ∂R_H/∂u_H (x^0, u_R, u_H^{*ϕ}(x^0, u_R)) = 0 (13) Now, differentiating this expression with respect to u_R results in: ∂²R_H/∂u_H² · ∂u_H^{*ϕ}/∂u_R + ∂²R_H/(∂u_H ∂u_R) · ∂u_R/∂u_R = 0 (14) Then, solving for ∂u_H^{*ϕ}/∂u_R enables us to find the following symbolic expression: ∂u_H^{*ϕ}/∂u_R = [−∂²R_H/(∂u_H ∂u_R)] [∂²R_H/∂u_H²]^{−1}. (15) This expression allows us to find a symbolic expression for the gradient in equation (11). \n D. Explore-Exploit Trade-Off In practice, we use information gathering in conjunction with exploitation. We do not solely optimize the reward from Sec. II-B, but optimize it in conjunction with the robot's actual reward function assuming the current estimate of ϕ: r_R(x^t, u_R, u_H) = H(b^t) − H(b^{t+1}) + λ · r_goal(x^t, u_R, u_H, b^t) (16) At the very least, we do this as a measure of safety, e.g., we want an autonomous car to keep avoiding collisions even when it is actively probing a human driver to test their reactions. We choose λ experimentally, though there exist techniques that can better adapt λ over time [21]. Despite optimizing this trade-off, we do not claim that our method as-is can better solve the general POMDP formulation from Sec. II-A: only that it can be used to get better estimates of human internal state. The next sections test this in simulation and in practice, in a user study, and future work will look at how to leverage this ability to better solve human-robot interaction problems. \n III. Simulation Results In this section, we show simulation results that use the method from the previous section to estimate human driver type in the interaction between an autonomous vehicle and a human-driven vehicle. We consider three different autonomous driving scenarios, in which the human is either distracted or attentive during different driving experiments. The scenarios are shown in Fig. 2, where the yellow car is the autonomous vehicle and the white car is the human-driven vehicle. Our goal is to plan to actively estimate the human's driving style in each of these scenarios by using the robot's actions. \n A. Attentive vs. Distracted Human Driver Models Our technique requires reward functions r_H^ϕ that model the human behavior for a particular internal state ϕ. We obtain a generic driver model via Continuous Inverse Optimal Control with Locally Optimal Examples [12] from demonstrated trajectories in a driving simulator, in an environment with multiple autonomous cars that followed precomputed routes. We parametrize the human reward function as a linear combination of features and learn weights on the features. We use various features, including features for bounds on the control inputs and features that keep the vehicles within the road boundaries and close to the centers of their lanes. Further, we use quadratic functions of speed to capture reaching the goal, and Gaussians around other vehicles on the road to enforce collision avoidance as part of the feature set. We then adjust the learned weights to model attentive vs. distracted drivers. Specifically, we modify the weights of the collision avoidance features, so that the distracted human model has less weight on these features. Therefore, the distracted driver is more likely to collide with the other cars, while the attentive driver has high weights on the collision avoidance features.
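As a rough, standalone illustration of this kind of feature-weighted reward (not the features or weights actually learned by IRL in the paper), the sketch below combines control, lane-keeping, speed, and Gaussian collision-avoidance features linearly, with the collision-avoidance weight lowered for the distracted model. All feature definitions and numbers here are assumptions made for the example.

```python
import numpy as np

# Illustrative linear-feature human reward r_H^phi = w_phi . f(x, u_H): the feature
# choices and weight values are made up; the paper learns weights with IRL and then
# reduces the collision-avoidance weights to obtain the distracted model.

def features(x_h, u_h, x_r, lane_center=0.0):
    """x_h, x_r: [px, py, theta, v] for the human and robot cars; u_h: [steer, accel]."""
    dist_sq = np.sum((x_h[:2] - x_r[:2]) ** 2)
    return np.array([
        -(u_h[0] ** 2 + u_h[1] ** 2),       # penalize large control inputs
        -((x_h[0] - lane_center) ** 2),     # stay close to the lane center
        -((x_h[3] - 1.0) ** 2),             # quadratic speed term (desired speed 1.0)
        -np.exp(-dist_sq / 0.2),            # Gaussian penalty around the other vehicle
    ])

W_ATTENTIVE  = np.array([1.0, 1.0, 1.0, 10.0])  # high weight on collision avoidance
W_DISTRACTED = np.array([1.0, 1.0, 1.0, 1.0])   # reduced weight: more likely to get close

def human_reward(x_h, u_h, x_r, phi):
    w = W_ATTENTIVE if phi == "attentive" else W_DISTRACTED
    return float(w @ features(x_h, u_h, x_r))
```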
\n B. Manipulated Factors We manipulate the reward function that the robot is optimizing. In the passive condition, the robot optimizes a simple reward function for collision avoidance based on the current belief estimate. It then updates this belief passively, by observing the outcomes of its actions at every time step. In the active condition, the robot trades off between this reward function and the information gain from (3) in order to explore the human's driving style. We also manipulate the human internal state to be attentive or distracted. The human is simulated to follow the ideal model of reward maximization for our two rewards. \n C. Driving Simulator We use a simple point-mass model for the dynamics of the vehicle, where x = [x, y, θ, v] is the state of the vehicle. Here, x and y are the coordinates of the vehicle, θ is the heading, and v is the speed. Each vehicle has two control inputs u = [u_1, u_2], where u_1 is the steering input and u_2 is the acceleration. Further, we let α be a friction coefficient. Then, the dynamics of each vehicle are formalized as: [ẋ, ẏ, θ̇, v̇] = [v · cos(θ), v · sin(θ), v · u_1, u_2 − α · v]. (17) (A minimal simulation sketch of this model is given after the scenario descriptions below.) \n D. Scenarios and Qualitative Results Scenario 1: Nudging In to Explore on a Highway. In this scenario, we show an autonomous vehicle actively exploring the human's driving style in a highway driving setting. We contrast the two conditions in Fig. 2(a). In the passive condition, the autonomous car drives in its own lane without interfering with the human throughout the experiment, and updates its belief based on passive observations gathered from the human car. However, in the active condition, the autonomous car actively probes the human by nudging into her lane in order to infer her driving style. An attentive human significantly slows down (timid driver) or speeds up (aggressive driver) to avoid the vehicle, while a distracted driver might not notice the maneuver and maintain their velocity. It is this difference in reactions that enables the robot to better estimate ϕ.
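The following is a minimal forward-Euler integration sketch of the point-mass model (17); the time step and friction coefficient are illustrative values, not taken from the paper.

```python
import numpy as np

def vehicle_step(x, u, alpha=0.1, dt=0.1):
    """One Euler step of the point-mass model (17).
    State x = [px, py, theta, v]; control u = [steer, accel]; alpha is friction.
    (dt and alpha are illustrative values only.)"""
    px, py, theta, v = x
    steer, accel = u
    dx = np.array([v * np.cos(theta),
                   v * np.sin(theta),
                   v * steer,
                   accel - alpha * v])
    return x + dt * dx

# Example: start heading "up" at speed 1 and accelerate without steering.
x = np.array([0.0, 0.0, np.pi / 2, 1.0])
x = vehicle_step(x, u=np.array([0.0, 0.5]))
```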
\n E. Quantitative Results Throughout the remainder of the paper, we use a common color scheme to plot results for our experimental conditions. We show this common scheme in Fig. 3: darker colors (black and red) correspond to attentive humans, and lighter colors (gray and orange) correspond to distracted humans. Further, the shades of orange correspond to active information gathering, while the shades of gray indicate passive information gathering. We also use solid lines for real users, and dotted lines for scenarios with an ideal user model learned through inverse reinforcement learning. This scheme is used in the legends of Fig. 4, Fig. ??, Fig. 5, and Fig. 6. Fig. 4 plots, using dotted lines, the beliefs over time for the attentive (left) and distracted (right) conditions, comparing in each the passive (dotted black and gray, respectively) with the active method (dotted dark orange and light orange, respectively). In every situation, the active method achieves a more accurate belief (higher values for attentive on the left, when the true ϕ is attentive, and lower values on the right, when the true ϕ is distracted). In fact, passive estimation sometimes incorrectly classifies drivers as attentive when they are distracted and vice versa. The same figure also shows (in solid lines) results from our user study of what happens when the robot no longer interacts with an ideal model. We discuss these in the next section. Fig. 5 and Fig. 6 plot the corresponding robot and human trajectories for each scenario. The important takeaway from these figures is that there tends to be a larger gap between attentive and distracted human trajectories in the active condition (orange shades) than in the passive condition (gray shades), especially in scenarios 2 and 3. It is this difference that helps the robot better estimate ϕ: the robot in the active condition is purposefully choosing actions that will lead to large differences in human reactions, in order to more easily determine the human driving style. \n IV. User Study In the previous section, we explored planning for an autonomous vehicle that actively probes a human's driving style, by braking or nudging in and expecting to cause reactions from the human driver that would be different depending on their style. We showed that active exploration does significantly better at distinguishing between attentive and distracted drivers using simulated (ideal) models of drivers. Here, we show the results of a user study that evaluates this active exploration for attentive and distracted human drivers. \n A. Experimental Design We use the same three scenarios discussed in the previous section. Manipulated Factors. We manipulated the same two factors as in our simulation experiments: the reward function that the robot is optimizing (whether it is optimizing its reward through passive state estimation, or whether it is trading off with active information gathering), and the human internal state (whether the user is attentive or distracted). We asked our users to pay attention to the road and avoid collisions for the attentive case, and asked our users to play a game on a mobile phone during the distracted driving experiments. Dependent Measure. We measured the probability that the robot assigned along the way to the human internal state. Hypothesis. The active condition will lead to more accurate human internal state estimation, regardless of the true human internal state. Subject Allocation. We recruited 8 participants (2 female, 6 male) in the age range of 21-26 years old. All participants owned a valid driver license and had at least 2 years of driving experience. We ran the experiments using a 2D driving simulator with the steering input and acceleration input provided through a steering wheel and pedals, as shown in Fig. 1. We used a within-subject experiment design with counterbalanced ordering of the four conditions. \n B. Analysis We ran a factorial repeated-measures ANOVA on the probability assigned to "attentive", using reward (active vs passive) and human internal state (attentive vs distracted) as factors, and time and scenario as covariates.
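As a rough illustration of this style of analysis (not the authors' code, and simplified to a two-way within-subjects ANOVA without the time and scenario covariates), something like the following could be run with statsmodels; the file name, column names, and data layout are assumptions made for the example.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Assumed long-format data: one row per participant x condition, with the probability
# assigned to "attentive" averaged over time within each condition.
df = pd.read_csv("belief_probabilities.csv")  # columns: participant, reward, state, p_attentive

# Two-way repeated-measures ANOVA: reward (active/passive) x state (attentive/distracted)
aov = AnovaRM(data=df, depvar="p_attentive", subject="participant",
              within=["reward", "state"]).fit()
print(aov)

# Post-hoc pairwise comparisons across the four condition combinations
df["condition"] = df["reward"] + "/" + df["state"]
print(pairwise_tukeyhsd(df["p_attentive"], df["condition"]))
```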
As a manipulation check, attentive drivers had a significantly higher estimated probability of "attentive" than distracted drivers (.66 vs .34, F = 3080.3, p < .0001). More importantly, there was a significant interaction effect between the factors (F = 1444.8, p < .001). We ran a post-hoc analysis with Tukey HSD corrections for multiple comparisons, which showed all four conditions to be significantly different from each other, all contrasts with p < .0001. In particular, active information gathering did end up with higher probability mass on "attentive" than passive estimation for the attentive users, and lower probability mass for the distracted users. This supports our hypothesis that our method works, and that active information gathering is better at identifying the correct state. Fig. 4 compares passive (grays and blacks) and active (light and dark oranges) across scenarios and for attentive (left) and distracted (right) users. It plots the probability of attentive over time, and the shaded regions correspond to standard error. From the first column, we can see that our algorithm in all cases detects the human's attentiveness with much higher probability than the passive information gathering technique shown in black. From the second column, we see that our algorithm places significantly lower probability on attentiveness, which is correct because those users were distracted. These results are in line with the statistical analysis, with active information gathering doing a better job of estimating the true human internal state. Fig. 5 plots the robot trajectories for the active information gathering setting. Similar to Fig. 4, the solid lines are the means of the robot trajectories and the shaded regions show the standard error. We plot a representative dimension of the robot trajectory (like position or speed) for the attentive (dark orange) and distracted (light orange) cases. The active robot probed the user, but ended up taking different actions when the user was attentive vs. distracted in order to maintain safety. For example, in Scenario 1, the trajectories show the robot nudging into the human's lane, but the robot decides to move back to its own lane when the human drivers are distracted (light orange) in order to stay safe. In Scenario 2, the robot brakes in front of the human, but it brakes less when the human is distracted. Finally, in Scenario 3, the robot inches forward, but again it stops when the human is distracted, and even backs up to make space for her. Fig. 6 plots the user trajectories for both the active information gathering (first row) and passive information gathering (second row) conditions. We compare the reactions of distracted (light shades) and attentive (dark shades) users. There are large differences directly observable, with user reactions tending to indeed cluster according to their internal state. These differences are much smaller in the passive case (second row, where distracted is light gray and attentive is black). For example, in Scenarios 1 and 2, the attentive users (dark orange) keep a larger distance from the car that nudges in or brakes in front of them, while the distracted drivers (light orange) tend to keep a smaller distance. In Scenario 3, the attentive drivers tend to slow down and not cross the intersection when the robot actively inches forward. None of these behaviors can be detected clearly in the passive information gathering case (second row).
This is the core advantage of active information gathering: the actions are purposefully selected by the robot such that users would behave drastically differently depending on their internal state, clarifying to the robot what this state actually is. Overall, these results support our simulation findings, that our algorithm performs better at estimating the true human internal state by leveraging purposeful information gathering actions. \n V. discussion Summary. In this paper, we formalized the problem of active information gathering between robot and human agents, where the robot plans to actively explore and gather information about the human's internal state by leveraging the effects of its actions on the human actions. The generated strategy for the robot actively probes the human by taking actions that impact the human's action in such a way that they reveal her internal state. The robot generates strategies for interaction that we would normally need to hand-craft, like inching forward at a 4-way stop. We evaluated our method in simulation and through a user study for various autonomous driving scenarios. Our results suggest that robots are indeed able to construct a more accurate belief over the human's driving style with active exploration than with passive estimation. Limitations and Future Work. Our work is limited in many ways. First, state estimation is not the end goal, and finding how to trade off exploration and exploitation is still a challenge. Second, our optimization is close to real-time, but higher computational efficiency is still needed. Further, our work relies on a model (reward function) of the human for each ϕ, which might be difficult to acquire, and might not be accurate. Thus far, we have assumed a static ϕ, but in reality ϕ might change over time (e.g. the human adapts her preferences), or might even be influenced by the robot (e.g. a defensive driver becomes more aggressive when the robot probes her). We also have not tested the users' acceptance of information gathering actions. Although these actions are useful, people might not always react positively to being probed. Last but not least, exploring safely will be of crucial importance. \n Conclusion. We are encouraged by the fact that robots can generate useful behavior for interaction autonomously, and are excited to explore informationgathering actions on human state further, including beyond autonomous driving scenarios. Passive Robot stops to R through Fig. 6 : The user trajectories for each scenario. The gap between attentive and distracted drivers' actions is clear in the active information gathering case (first row). 1 :Fig. 2 : 12 Fig.2: Our three scenarios, along with a comparison of robot plans for passive estimation (gray) vs active information gathering (orange). In the active condition, the robot is purposefully nudging in or braking to test human driver attentiveness. The color of the autonomous car in the initial state is yellow, but changes to either gray or orange in cases of passive and active information gathering respectively. driver might not realize the autonomous actions and maintain their velocity, getting closer to the autonomous vehicle. It is this difference in reactions that enables the robot to better estimate ϕ. Scenario 2: Braking to Explore on a Highway. In the second scenario, we show the driving style can be explored by the autonomous car probing the human driver behind it. 
The two vehicles start in the same lane as shown in Fig.2(b), where the autonomous car is in the front. In the passive condition, the autonomous car drives straight without exploring or enforcing any interactions with the human driven vehicle. In the active condition, the robot slows down to actively probe the human and find out her driving style. An attentive human would slow down and avoid collisions while a distracted human will have a harder time to keep safe distance between the two cars. Scenario 3: Nudging In to Explore at an Intersection. In this scenario, we consider the two vehicles at an intersection, where the autonomous car actively tries to explore human's driving style by nudging into the intersection. The initial conditions of the vehicles are shown in Fig.2(c). In the passive condition, the autonomous car stays at its position without probing the human, and only optimizes for collision avoidance. This provides limited observations from the human car resulting in a low confidence belief distribution. In the active condition, the autonomous car nudges into the intersection to probe the driving style of the human. An attentive human would slow down to stay safe at the intersection while a distracted human will not slow down. \n Fig. 3 : 3 Fig. 3: Legends indicating active/passive robots, attentive/distracted humans, and real user/ideal model used for Fig.4, Fig.??, Fig.5, and Fig.6. \n Fig. 4 : 4 Fig. 4:The probability that the robot assigns to attentive as a function of time, for the attentive (left) and distracted (right). Each plot compares the active algorithm to passive estimation, showing that active information gathering leads to more accurate state estimation, in simulation and with real users. \n Fig. 5 : 5 Fig.5: Robot trajectories for each scenario in the active information gathering condition. The robot acts differently when the human is attentive (dark orange) vs. when the human is distracted (light orange) due to the trade-off with safety. \n\t\t\t One exception is Nikolaidis et al. [17] , who propose to solve the full POMDP, albeit for discrete and not continuous state and action spaces. \n\t\t\t Authorized licensed use limited to: Carnegie Mellon Libraries. Downloaded on March 24,2022 at 00:30:44 UTC from IEEE Xplore. Restrictions apply.", "date_published": "n/a", "url": "n/a", "filename": "Information_gathering_actions_over_human_internal_state.tei.xml", "abstract": "Much of estimation of human internal state (goal, intentions, activities, preferences, etc.) is passive: an algorithm observes human actions and updates its estimate of human state. In this work, we embrace the fact that robot actions affect what humans do, and leverage it to improve state estimation. We enable robots to do active information gathering, by planning actions that probe the user in order to clarify their internal state. For instance, an autonomous car will plan to nudge into a human driver's lane to test their driving style. 
Results in simulation and in a user study suggest that active information gathering significantly outperforms passive state estimation.", "id": "847592efb448c3a7b62450158a95fa0a"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Seth D Baum"], "title": "The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives", "text": "Introduction Over several decades, scholars from a variety of fields have advanced an argument for confronting catastrophic threats to humanity, rooted in the far future benefits of doing so. 1 In this context, the far future can loosely be defined as anything beyond the next several millennia, but will often emphasize timescales of millions or billions of years, or even longer. 2 Likewise, the catastrophic threats in question-also known as global catastrophic risks (GCRs) and existential risks, among other things-are those that would affect the trajectory of human civilization over these timescales. The simplest case is catastrophes resulting in human extinction, which is a permanent result and thus affects the trajectory of human civilization into the far future. More subtle but comparably relevant cases include catastrophes resulting in the permanent collapse of human civilization, preventing humanity from ever achieving certain very great things, and catastrophes resulting in delays Sufficiently large catastrophes can affect human civilization into the far future: thousands, millions, or billions of years from now, or even longer. The far future argument says that people should confront catastrophic threats to humanity in order to improve the far future trajectory of human civilization. However, many people are not motivated to help the far future. They are concerned only with the near future, or only with themselves and their communities. This paper assesses the extent to which practical actions to confront catastrophic threats require support for the far future argument and proposes two alternative means of motivating actions. First, many catastrophes could occur in the near future; actions to confront them have near-future benefits. Second, many actions have cobenefits unrelated to catastrophes, and can be mainstreamed into established activities. Most actions, covering most of the total threat, can be motivated with one or both of these alternatives. However, some catastrophe-confronting actions can only be justified with reference to the far future. Attention to the far future can also sometimes inspire additional action. Confronting catastrophic threats best succeeds when it considers the specific practical actions to confront the threats and the various motivations people may have to take these actions. ß 2015 Elsevier Ltd. All rights reserved. in the subsequent rise of civilization toward these achievements. The scholarship argues that people should care about human civilization into the far future, and thus, to achieve far future benefits, should seek to confront these catastrophic threats. Call this the far future argument for confronting catastrophic threats to humanity. In this paper, I will not dispute the basic validity of the far future argument. Indeed, I agree with it, and have advanced it repeatedly in my own work (Baum, 2009 (Baum, , 2010 Maher and Baum, 2013) . Instead, I assess the extent to which the far future argument is necessary or helpful for actually confronting the threats. In other words, what is the practical significance of the far future argument? 
I also propose and assess two alternative approaches to confronting the threats. One alternative emphasizes near future benefits of avoiding near future catastrophes. The other alternative emphasizes other (unrelated) benefits of actions that also help confront the threats, creating opportunities even for people who have zero care about the threats. It would be important if the threats can be confronted without the far future argument, because many people do not buy the argument. That people do not is suggested by a range of research. An extensive time discounting literature assess how much people value future costs and benefits. Most discounting studies use time scales of days to decades and focus on future benefits to oneself (Frederick, Loewenstein, & O'Donoghue, 2002) ; these studies are of limited relevance to valuations of the far future of human civilization. One more relevant time discounting study finds that people discount lives saved 20 years later at a 25% annual rate and lives saved 100 years later at an 8% annual rate (Johannesson & Johansson, 1996) ; extrapolating this suggests negligible concern for lives saved in the far future. Similarly, Tonn, Conrad, & Hemrick (2006, p. 821) find that people believe humanity should plan mainly for the upcoming 20 years or so and should plan less for time periods over 1000 years. In a study on social discounting, Jones and Rachlin (2006) find that people are willing to forgo more money to help close friends and family than distant acquaintances; they presumably would forgo even less for members of far future generations. Finally, there are considerations rooted in how societies today are structured. Several researchers have argued that current electoral structures favor the short-term (Ekeli, 2005; Tonn, 1996; Wolfe, 2008) . Similarly, Karlsson (2005) suggests that the rise of decentralized capitalist/democratic political economies and the fall of authoritarian (notably communist) political economies has diminished major long-term planning. While none of these studies directly assess the extent to which people buy the far future argument, the studies all suggest that many people do not buy the argument to any significant degree. To the extent that efforts to confront catastrophic threats can be made synergistic with what people already care about, a lot more can be done. This would seem to be an obvious point, but it has gone largely overlooked in prior research on catastrophic threats. One exception is Posner (2004) , who argues that some actions to reduce the risk of human extinction can be justified even if only the current generation and its immediate successor are valued. Another is Baum (2015) , who proposes to confront the threat of catastrophic nuclear winter in terms that could appeal to nuclear weapon states; Baum calls this ''ethics with strategy''. But most of the prior research, including the studies cited above, emphasize the far future argument. This paper expands Posner's argument to further argue that some actions can be taken even for those who only care about their immediate communities or even just themselves. This paper also makes progress toward assessing the total practical significance of the far future by presenting a relatively comprehensive survey of GCRs and GCR-reducing actions. Such surveys are also scarce; one example is Leggett (2006) , who surveys the space of GCRs to identify priorities for action. 
The present paper also has commonalities with Tonn and Stiefel (2014) , who evaluate different levels of sacrifice that society should make in response to GCRs of different magnitude. The present paper also considers levels of sacrifice, but instead argues that, from a practical standpoint, it is better to start with those actions that require less sacrifice or are in other ways more desirable. Indeed, actions requiring large sacrifice may only be justifiable with reference to far future benefits. The paper is organized as follows. Section 2 briefly reviews the space of GCRs. All actions to reduce the risk must help on one or more of these so as to result in a net risk reduction. The space of GCRs likewise provides an organizing framework for subsequent sections, as summarized in Table 1 . Section 3 discusses the timing of GCRs. For catastrophes that could happen earlier, actions to avoid them will include the earlier benefits of catastrophe avoidance. Almost all GCR reduction actions have near-future GCR reduction benefits. Section 4 discusses co-benefits and mainstreaming of GCR reduction actions. Co-benefits are benefits unrelated to GCR reduction. Mainstreaming is integrating GCR reduction into established Table 1 Summary of global catastrophic risk categories (Section 2), their timing (Section 3), co-benefits and mainstreaming opportunities (Section 4), and high-cost GCR reduction actions that may only be justifiable with reference to far future benefits (Section 5). The co-benefits and mainstreaming opportunities and high-cost actions are illustrative examples, not complete listings. activities. Co-benefits and mainstreaming are both ways to facilitate GCR reduction for those who are not specifically motivated by the far future. Section 5 discusses GCR reduction actions that can only be justified in reference to the farfuture benefits of GCR reduction. While these actions will typically not be the best place to start, they can play an important role in overall GCR reduction efforts. Section 6 discusses the ways in which attention to the far future can inspire additional GCR reduction action. This includes both analytical inspiration and emotional inspiration. Section 7 concludes. \n GCR category \n The global catastrophic risks Which actions can help reduce the risk of global catastrophe depend on what the global catastrophic risks are in the first place. This section briefly overviews the risks. The risks have been described in more detail elsewhere (Asimov, 1979; Barrett, 2007; Bostrom & C ´irkovic ´, 2008; Guterl, 2012; Jha, 2011; Leslie, 1996; Rees, 2003; Tonn & MacGregor, 2009) . While this section lists GCRs in distinct categories, the risks are often interconnected both within and across categories. For example, the emerging technology GCR of geoengineering is developed in response to the environmental change GCR of climate change, and the geoengineering risk is in turn affected by other GCRs such as large-scale violence or pandemics (Baum, Maher, & Haqq-Misra, 2013) . Similarly, GCR reduction actions can often affect risks in multiple categories. So while the GCR reduction actions discusses in Sections 3-5 are organized in terms of the categories presented here, it should be understood that specific actions often spill across categories. All this suggests a systems approach to studying GCR. \n Environmental change By environmental change, I mean to refer to human-driven global environmental changes; natural disasters are discussed below. 
Climate change is perhaps the most commonly cited environmental change GCR; worst-case climate change scenarios attract considerable attention (e.g. Sherwood & Huber, 2010) . Other environmental change GCRs could include biodiversity loss, biogeochemical flows (interference with the nitrogen and phosphorus cycles), stratospheric ozone depletion, ocean acidification, global fresh water use, land use change, chemical pollution, and atmospheric aerosol loading (Rockstro ¨m et al., 2009a; Rockstro ¨m et al., 2009b) . While any of these phenomena could dramatically alter the global environment, it is less clear whether the impacts would be catastrophic for humanity (Baum & Handoh, 2014; Raudsepp-Hearne et al., 2010) . For this paper, it is important that many pro-environmental actions simultaneously help across a broad set of global environmental changes, lessening the need to distinguish which changes could be catastrophic for humanity. \n Emerging technologies Several emerging technologies could cause global catastrophe, including artificial intelligence (Bostrom, 2014; Eden, Moor, Soraker, & Steinhart, 2013) , biotechnology (Vogel, 2013) , geoengineering (Baum et al., 2013; Caldeira, Bala, & Cao, 2013) , and nanotechnology for atomically precise manufacturing (Drexler, 2013) . These risks are relatively uncertain given the unprecedented nature of emerging technology, but they may constitute a significant portion of total risk. \n Large-scale violence A sufficiently large global war could be catastrophic regardless of the technologies used-for comparison, hundreds of thousands died from attacks with machetes and other unmechanized weapons in the Rwandan genocide. Weapons of mass destruction make the job much easier. Nuclear weapons can be catastrophic both through direct explosions and the indirect effects of nuclear winter (Mills, Toon, Lee-Taylor, & Robock, 2014) . Biological weapons can also readily cause global catastrophe, in particular if they are contagious; indeed, nonstate actors or even single individuals may be able to cause global catastrophes with engineered contagions (Nouri & Chyba, 2008; Rees, 2003) . The pressures of conflict can also lead actors to take larger risks, as occurred during World War II when the Americans proceeded with the first nuclear weapon test despite concerns that it could ignite the atmosphere, killing everyone (Konopinski, Marvin, & Teller, 1946) . Finally, major global violence could also result from an oppressive global totalitarian government (Caplan, 2008) . \n Pandemics Pandemics can be of natural or artificial origin, or both. Humans catch disease from the environment, in particular from other species. The development and transmission of zoonotic diseases can be enhanced by human activities including wild habitat destruction and factory farming. While it is clear that global pandemics can occur, their exact severity is a matter of ongoing analysis and debate (Germann, Kadau, Longini, & Macken, 2006; Koblentz, 2009) . \n Natural disasters Global catastrophes can result from several natural disasters including asteroid and comet impacts (Bucknam & Gold, 2008; Sleep & Zahnle, 1998) , supervolcano eruptions (Driscoll, Bozzo, Gray, Robock, & Stenchikov, 2012; Rampino, Self, & Stothers, 1988) , solar storms (NRC, 2008), and gamma ray bursts (Atri, Melott, & Karam, 2013) . While these natural disasters generally have lower probabilities, they nonetheless can be worth some effort to confront. 
Another natural disaster is the gradual warming of the Sun, which will (with very high probability) make Earth uninhabitable for humanity in a few billion years (O'Malley-James, Cockell, Greaves, & Raven, 2014). Other long-term astronomical risks, such as the Milky Way collision with Andromeda (increasing the rate of dangerous supernovae) and the death of all stars (removing a major energy source) play out on similar or longer time scales (Adams, 2008) . \n Physics experiments Certain types of physics experiments have raised concerns that the experiments could go wrong, obliterating Earth and its vicinity. This notably includes high-energy particle physics experiments such as at the CERN Large Hadron Collider. Physicists evaluating this risk have argued that the risk is vanishingly small; however, they may be underestimating the risk by neglecting the possibility that their analysis is mistaken (Ord, Hillerbrand, & Sandberg, 2010) . \n Extraterrestrial encounter It is not presently known if there is any extraterrestrial life, let alone intelligent extraterrestrial civilizations. However, if extraterrestrial civilizations exist, then the result could be catastrophic for humanity (Baum, Haqq-Misra, & Domagal-Goldman, 2011; Michaud, 2007) . Non-civilization extraterrestrial life could also harm humanity with catastrophic contaminations (Conley & Rummel, 2008) . \n Unknowns There may be entire categories of GCR not yet identified. \n The timing of the global catastrophic risks If a global catastrophe could occur during the near future, then there will be near-future benefits to reducing the risk. The sooner the catastrophe could occur, the larger the near-future benefits would be. In general, it will be easiest to motivate action to confront the most imminent catastrophes-hence Posner's (2004) argument that much GCR-reducing action can be justified even if one only cares about the present generation and the next one to come. It is thus worth examining the timing of the catastrophes. \n Specific global catastrophic risks \n Environmental change Major environmental changes are already visible, with larger changes expected on time scales of decades to tens of millennia. Climate change is among the more long-term of these, with some impacts already visible, and the worst climatic effects contained within the next 25,000 years or so. 3 Another possible long-term environmental change is an oceanic anoxic event, which is caused by phosphorus runoff and would in turn cause major die-off of marine species. An oceanic anoxic event could occur on time scales of millennia (Handoh & Lenton, 2003) ; more localized effects of phosphorus runoff are already visible. \n Emerging technologies Dangerous biotechnology already exists, and is steadily increasing in capability. Early design work for geoengineering is already underway, with deployments suggested to occur in upcoming decades (Keith, Parsons, & Morgan, 2010) . Experts give a significant probability to GCR-level artificial intelligence occurring within this century or next (Baum, Goertzel, & Goertzel, 2011; Mu ¨ller & Bostrom, 2014) . Nanotechnology for atomically precise manufacturing may have similar time horizons. \n Large-scale violence Large-scale violence can happen at any time. The ongoing Ukraine crisis is a firm reminder that significant tensions linger between major nuclear weapons states. Nuclear war could even occur inadvertently, due to false alarm events that can occur at any time (Barrett, Baum, & Hostetler, 2013) . 
Risks from biological weapons could increase in upcoming decades as biotechnology advances. However, overall risk from large-scale violence may be gradually declining, following a general trend toward less violence (Pinker, 2011) and an increasing sophistication of global peacekeeping capability (Goldstein, 2011) . \n Pandemics Pandemics can also break out at any time. Recent outbreaks of SARS, H5N1 and H1N1 flus, MERS, and currently Ebola have so far not reached a high degree of global lethality, but they are clear reminders that the threat of pandemics persists. Advances in biotechnology can lead to increasing risk through both intentional use and mishaps, as can increasing global connectivity. On the other hand, advances in public health can reduce the risk. \n Natural disasters Many natural disasters can also occur at any time. Risk from impact events, supervolcano eruptions, solar storms, and gamma ray bursts is roughly constant over long periods of time, into the far future. NASA's near-Earth objects survey has significantly reduced estimates of the risk of large impacts occurring over the next century or so (Harris, 2008) . Several astronomical risks, including the Sun's gradual warming, the Milky Way collision with Andromeda, and the death of all stars, are GCRs that exists exclusively in the far future. \n Physics experiments The risk from physics experiments depends critically on which physics experiments are conducted. The risk could increase as the capability to conduct experiments increases. \n Extraterrestrial encounter Humanity could encounter extraterrestrials at any time, including through ongoing searches for extraterrestrial intelligence (SETI). Some risk (especially contamination risk) comes mainly from human or robotic travel in space. Some risk comes from messaging to extraterrestrial intelligence (METI; Haqq-Misra, Busch, Som, & Baum, 2013) . The timing of METI risk depends on the distance from Earth to the location being messaged. METI to sufficiently distant locations is another GCR that exists exclusively in the far future. \n Unknowns Unknown GCRs could occur in both the near and far future. Indeed, more future GCRs are less likely to be already identified. \n Discussion Relatively few identified GCRs exist exclusively in the far future: certain astronomical risks and METI to distant locations. For all other GCRs-and this constitutes almost all of the total identifiable risk-the catastrophes could occur in the near future. The identified risks from emerging technologies and physics experiments could only occur in the near future. The identified risks from environmental change, large-scale violence, pandemics, some natural disasters, some extraterrestrial encounter risks, and unknowns could occur in the near or far future. The preponderance of near future risks suggests that a lot of actions to reduce these risks can be done without reference to their far future benefits. On the other hand, these near future benefits may not always be enough, especially when the catastrophes would occur several decades or centuries later, as people often care little about even these earlier times. It is thus worth pursuing other means of motivating GCR reduction. \n Co-benefits and mainstreaming GCR-reducing actions Insight on how to motivate GCR reduction can be found from outside the core GCR literature, in some related literatures. 
The climate change mitigation community has developed the concept of co-benefits, defined as benefits besides the target goal (Hosking, Mudu, & Dora, 2011; Miyatsuka & Zusman, n.d.). For climate change mitigation, the target goal is greenhouse gas emissions reductions. Research assesses how communities can reduce their emissions while improving their economic development, public health, and wellbeing. The co-benefits concept readily applies to GCR. Some actions to reduce GCR will also be profitable, fun, healthy, satisfying, safe, or otherwise desirable, often to those who perform the actions. Similarly, the natural disaster management community has a robust practice of mainstreaming disaster management into established goals and procedures, especially those regarding development (Benson, 2009; Twigg & Steiner, 2002). Disaster management actions will often only be taken when they can be integrated into established goals and procedures; otherwise, the actions will be too impractical or undesirable. For example, urban design steps to reduce a town's vulnerability to hurricane storm surge can be mainstreamed into the town's broader urban planning processes (Frazier, Wood, & Yarnal, 2010). GCR reduction actions can likewise be mainstreamed into whatever people are already doing or trying to do. The GCR reduction community would be wise to adopt the approaches of co-benefits and mainstreaming. Doing so requires an understanding of the co-benefits that can come from various GCR-reducing actions and the relevant established goals and procedures. For many of these actions-in particular those with sufficiently large co-benefits and well-established goals and procedures-reducing GCR can be a nice ancillary benefit of actions that might as well be taken anyway. For these actions, no concern for the far future is needed; often, no concern beyond one's immediate community is needed. These actions require the least sacrifice (indeed, it is a sacrifice not to take these actions) and likewise will often be the easiest actions to promote. This raises the question of which GCR-reducing actions have significant co-benefits and mainstreaming opportunities. \n Environmental change Environmental change is largely driven by a wide variety of basic activities, including food consumption, transport, real estate development, and natural resource usage. More environmentally friendly actions can often be justified for nonenvironmental reasons. A recent study by McKinsey (Enkvist, Nauclér, & Rosander, 2007) found that many greenhouse gas emission reductions would result in net monetary benefits for those who reduce these emissions, especially in the realm of energy efficiency in buildings and transport. Happiness research has found that people rate their daily commute as being among their least happy activities (Layard, 2003). Public health research links high-meat diets with obesity and other health problems (Pan et al., 2012). Buildings, transport, and food are meanwhile three of the most environmentally important sectors (Metz et al., 2007; Steinfeld et al., 2006; USEIA, 2011). If significant changes in these sectors can be achieved for nonenvironmental reasons, the environmental benefit could be quite large. \n Emerging technologies It is often beneficial to develop regulations for multiple technologies at the same time, due to similarities between the technologies and the regulations (Kuzma & Priest, 2010; Wilson, 2013a).
Concerns about other technologies can thus motivate general technology regulation, which provides a framework for mainstreaming the regulation of emerging technology GCRs. In addition, some actions specific to certain emerging technologies can have co-benefits. For example, one proposed solution to artificial intelligence risk is to design the AI to be "friendly" to humanity. In addition to not causing a catastrophe, such an AI could help with other societal problems (Muehlhauser & Bostrom, 2014). If such an AI can be achieved with sufficient confidence, then this could be an attractive action even for those who are not concerned about AI risk. \n Large-scale violence Achieving peace avoids violence at all scales and also brings a variety of co-benefits. One co-benefit is economic growth, the so-called "peace dividend" (Knight, Loayza, & Villanueva, 1996; Ward & Davis, 1992). Another co-benefit is psychological. Recent research finds that conflict is often driven by humiliation, and likewise that giving people a sense of dignity can help (Lindner, 2006; Stern, 2003). Finally, other research suggests that reducing domestic violence against women could lead to less interstate war (Hudson, Ballif-Spanvill, Caprioli, & Emmett, 2012). Emphasizing these co-benefits could justify much action to reduce large-scale violence. Another worthy point of focus is violent nonstate actors, which continue to receive extensive attention in the wake of the 9/11 attacks. While nonstate actors may not be able to cause violence large enough to result in global catastrophe, 4 actions to confront them may have co-benefits and mainstreaming opportunities for large-scale violence. For example, the annual Nuclear Security Summits initiated by US President Barack Obama aim to prevent nonstate actors from acquiring nuclear weapons, but they also strengthen norms against nuclear weapon use more generally. \n Pandemics As noted above, there is some debate about how severe pandemics could be, including whether they would impact the far future. To a large extent, this debate is irrelevant. Regardless of how severe pandemics would be, there already exists a significant global public health infrastructure that responds to pandemics of all sizes. Improving this infrastructure can further improve the response. The case for improving this infrastructure is strengthened by the possibility of catastrophic pandemics, but the case is not dependent on this possibility (McKibbin & Sidorenko, 2006). \n Natural disasters The GCR literature has proposed certain measures to increase society's resilience to a wide range of global catastrophes, including natural disasters. These measures include food stockpiles, underground refuges (Jebari, 2014), and space colonies and refuges (Abrams et al., 2007; Shapiro, 2009). These measures also tend to increase society's resilience to smaller catastrophes. Indeed, many actions taken to prepare for smaller catastrophes also benefit GCR reduction. In addition, while space colonies and refuges have been criticized for their high cost relative to other means of reducing global catastrophic risk (Baum, 2009; Sandberg, Matheny, & Ćirković, 2008), some space missions are already underway or in planning for a variety of other reasons, including science, political prestige, and economic opportunity (e.g. in asteroid mining). Space colonies or refuges could be mainstreamed into these missions (Baum, Denkenberger, & Haqq-Misra, 2014).
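To make the co-benefits and mainstreaming logic of this section concrete, the following minimal sketch encodes the decision rule implied above: an action can be promoted without any appeal to the far future whenever its near-term co-benefits and near-future risk-reduction value already cover its cost. This is an illustrative sketch only; the action names and all numbers are assumptions made for demonstration, not estimates from the paper.

# Illustrative sketch only: action names, costs, and benefit values below are
# assumed for demonstration, not estimates from the paper.
def needs_far_future_appeal(cost, co_benefits, near_future_risk_value):
    """True if the action's cost is not covered by near-term co-benefits plus
    the value of near-future risk reduction, so far-future benefits are needed."""
    return co_benefits + near_future_risk_value < cost

actions = [
    # (name, cost, near-term co-benefits, near-future risk-reduction value)
    ("building energy efficiency", 100, 150, 10),
    ("pandemic surveillance upgrade", 80, 40, 60),
    ("permanent island quarantine", 500, 5, 20),
]

for name, cost, co_benefits, near_value in actions:
    if needs_far_future_appeal(cost, co_benefits, near_value):
        print(f"{name}: justification likely requires far-future benefits")
    else:
        print(f"{name}: justifiable on near-term grounds alone")

On these made-up numbers, the first two actions would count as "easy" actions in the sense used in this section, while the third would not.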
\n Physics experiments Physics experiments are a curious case, because the relevant experiments are quite expensive (hundreds of millions to billions of dollars) and the social benefits somewhat limited. As Parson (2007, p. 155) puts it, "this research is remote from practical application and serves largely to indulge national pride and the intellectual passion of a tiny elite group". Arguably, the co-benefits of reducing physics experiment risk include the money saved by not doing the experiments, which could then be put toward a worthier cause, analogous to the peace dividend, though this is likely to be a controversial view among those who value the physics experiments. \n Extraterrestrial encounter Protection against extraterrestrial contamination has the co-benefit of protecting extraterrestrial environments from contamination by humans, which is of significant scientific value (Conley & Rummel, 2008). The costs of SETI and METI are small relative to the big physics experiments, so while there are dollar savings to realize from skipping them, these are less of an issue. Perhaps the extraterrestrial-risk-reducing action with the most co-benefits would be research and public education into what the risks could be. Discussions of ETI are very popular, as seen in the extensive popular media and entertainment attention to ETI. \n Unknowns Actions likely to reduce unknown GCRs will typically be generic actions that also help reduce other GCRs or even smaller risks, such as building refuges (Jebari, 2014) and stockpiling resources. \n Discussion Many GCR-reducing actions, covering the full breadth of GCRs, have sizable co-benefits, and can also be mainstreamed into existing activities. Many of these actions will often be desirable even without reference to GCR, let alone to the far-future benefits of GCR reduction. These "easy" actions will typically be the lowest-hanging fruit, the easiest GCR reductions to promote. They offer a sensible starting point for those seeking to reduce GCR. \n Actions with significant cost Ideally, all GCR-reducing actions would have low costs and large co-benefits, such that it would be easy to persuade people to take the actions, and such that the totality of GCR could be reduced with minimal burden to those taking the actions and to society at large. As discussed above, many such actions exist. However, this is not the case for all GCR-reducing actions. Some of these other actions require considerable sacrifice, especially the most aggressive GCR-reduction efforts. Tonn and Stiefel's (2014) levels of societal actions are instructive here. The levels range from doing nothing to an extreme war footing in which society is organized specifically to reduce GCR. Actions requiring more sacrifice, especially those at or near the level of extreme war footing, might only be justifiable with reference to the far-future benefits. While these actions will typically not be the lowest-hanging fruit, they could be important components of an overall portfolio of GCR-reducing actions. \n Specific global catastrophic risks \n Environmental change The most aggressive pro-environmental actions include public policies like a high carbon tax, personal behaviors requiring great inconvenience and sacrifice, and restructuring the entire global industrial economy away from fossil fuels and other pollutants. To achieve a larger reduction in environmental change GCR, some of these more aggressive actions may be needed.
That these more aggressive actions may only be justifiable with reference to far-future benefits is a core point from debates about discounting in environmental policy (Nordhaus, 2008). \n Emerging technologies One way to reduce emerging technology GCR is to simply abstain from developing the technologies, i.e., to relinquish them (Joy, 2000). However, if these technologies do not cause catastrophe, they sometimes come with great benefits: geoengineering can avoid the worst effects of climate change; AI can solve a variety of social problems; biotechnology can help cure disease. Thus relinquishing the technologies can require a large sacrifice (Baum, 2014). This sacrifice may sometimes only be justifiable given the far-future benefits of GCR reduction. \n Large-scale violence While nuclear weapons can cause great harm, they are also often attributed with helping maintain peace, through the doctrine of nuclear deterrence: countries hesitate to attack each other for fear of being destroyed in nuclear retaliation. There are questions about the efficacy of nuclear deterrence (Wilson, 2013b) and there are proposals to achieve deterrence without large nuclear arsenals (Baum, 2015). However, a common view posits that nuclear deterrence is necessary until international relations are peaceful enough for a world without nuclear weapons (Obama, 2009). Following this logic, immediate nuclear disarmament might reduce GCR, but it might also increase the prevalence of smaller conflicts and other geopolitical instabilities. Depending on the details, immediate nuclear disarmament might only be justifiable with reference to the far future. \n Pandemics One aggressive action to reduce pandemic risk would be aggressive quarantine, such as blockading the major islands of Indonesia, Japan, the United Kingdom, and other countries. Travel restrictions could keep populations in these places safe. During a sufficiently severe outbreak, populations in these places could even request to be blockaded. A safer but costlier and less desirable policy would blockade them at first alert of a possible outbreak, or even keep them blockaded on a permanent basis. Doing so might lower GCR, but might only be justifiable with reference to the far future. \n Natural disasters One proposed aggressive action for natural disasters could be to drill the ground around potential supervolcanoes to extract the heat, although the technological feasibility of this proposal has not yet been established. 5 This could be a very costly project, but, if it works, it could also reduce supervolcano GCR. The project would come with a co-benefit of geothermal energy, but this is likely not nearly enough to justify the expense. Another possibility is advanced surface-independent refuges, which could protect against a variety of GCRs, including many of the natural disasters, but again could come at great expense (Beckstead, 2014). \n Physics experiments and extraterrestrial encounter I am not aware of any actions to reduce near-future GCR from physics experiments and extraterrestrial encounter that have significant cost and can only be justified with reference to far-future benefits. To the contrary, many actions to reduce these risks save money (Section 4.1). Protection against contamination does have a cost, and shutting METI programs down could cost the public a source of popular entertainment. On the other hand, the shutdown itself could also create an entertaining controversy. Regardless, the costs involved are not large.
\n Unknowns One action that might only be justifiable with reference to far-future benefits is a far-future version of the previously proposed "extraterrestrial time capsule". These capsules contain artifacts of benefit to catastrophe survivors for a range of known and unknown catastrophe scenarios. The capsules are launched into space in trajectories designed to return to Earth at some future date. The proposal suggests a return date 100 years into the future, but it may be possible (and expensive) to have return dates in the far future. \n Discussion For those who wish to keep humanity highly safe from catastrophe, there are actions that can only be justified with reference to the far-future benefits of GCR reduction. While these actions are typically not the best place to start, they can offer additional GCR reductions beyond what the easier actions offer. Given the enormous far-future benefits of GCR reduction, arguably these actions merit consideration. However, hopefully GCR can be essentially eliminated without resorting to these actions. If these actions are necessary, it will likewise be necessary to appeal to the importance of the far future. \n Far future as inspiration The paper thus far has focused on how to avoid appeals to the far future argument, in recognition of the fact that many people are not motivated by what will benefit the far future. But some GCR reduction actions can only be justified with reference to far future benefits. Additionally, some people are motivated to benefit the far future. Other people could be too. Tapping the inspirational power of the far future can enable more GCR reduction. There are at least two ways that the far future can inspire action: analytical and emotional. Both are consistent with the far future argument, but the argument is typically inspired by analytical considerations. The analytical inspiration is found in works analyzing how to maximize the good or achieve related objectives. Most of the scholarly works invoking the far future argument are of this sort. 6 Such ideas have the potential to resonate not just with other scholars, but with people in other professions as well, and also with the lay public. Thus there can be some value to disseminating analysis about the importance of the far future and its relation to GCR. Analytical inspiration can also come from analyzing specific actions in terms of their far-future importance. Such analysis can help promote these actions, even if the actions could be justified without reference to the far future. However, the analysis should be careful to connect with actual decision makers, and not just evaluate hypothetically optimal actions that no one ever takes. For example, there have now been multiple decades of research analyzing what the optimal carbon tax should be (for an early work, see Nordhaus, 1992), yet throughout this period, for most of the world, the actual carbon tax has been zero. Analytical inspiration has its limits. Research effort may be more productively spent on what policies and other actions people are actually willing to implement. The other far future inspiration is emotional. The destruction of human civilization can itself be a wrenching emotional idea. In The Fate of the Earth, Jonathan Schell writes "The thought of cutting off life's flow, of amputating this future, is so shocking, so alien to nature, and so contradictory to life's impulse that we can scarcely entertain it before turning away in revulsion and disbelief" (Schell, 1982/2000).
In addition, there is a certain beauty to the idea of helping shape the entire arc of the narrative of humanity, or even the universe itself. People often find a sense of purpose and meaning in contributing to something bigger than themselves-and it does not get any bigger than this. Carl Sagan's (1994) Pale Blue Dot and James Martin's (2007) The Meaning of the 21st Century both capture this well, painting vivid pictures of the special place of humanity in the universe and the special opportunities people today have to make a difference of potentially cosmic significance. This perspective says that humanity faces great challenges. It says that if these challenges are successfully met, then humanity can go on to some amazing achievements. It is a worthy perspective for integrating the far future into our lives, not just for our day-to-day actions but also for how we understand ourselves as human beings alive today. This may be worth something in its own right, but it can also have a practical value in motivating additional actions to confront catastrophic threats to humanity. \n Conclusion The far future argument is sound. The goal of helping the far future is a very worthy one, and helping the far future often means helping reduce the risk of those global catastrophes that could diminish the far-future success of human civilization. However, in practical terms, reducing this risk will not always require attention to its far-future significance. This is important because many people are not motivated to help the far future, but they could nonetheless be motivated to take actions that reduce GCR and in turn help the far future. They may do this because the actions reduce the risk of near-future GCRs, or because the actions have co-benefits unrelated to GCRs and can be mainstreamed into established activities. This paper surveys GCRs and GCR-reducing actions in terms of how much these actions depend on the far future argument for confronting catastrophic threats to humanity. The analysis suggests that a large portion of total GCR, probably a large majority, can be reduced without reference to the far future and with reference to what people already care about, be it the near future or even more parochial concerns. These actions will often be the best to promote, achieving the largest GCR reduction relative to effort spent. On the other hand, some significant GCR-reducing actions (especially those requiring large sacrifice) can only be justified with reference to their far-future benefits. For these actions in particular, it is important to emphasize how the far future can inspire action. Several priorities for future research are apparent. Quantitative GCR analysis could help identify which actions best reduce GCR and also what portion of GCR can be reduced without reference to the far future. Analysis covering the breadth of GCRs would be especially helpful. Social scientific research could study how to effectively engage stakeholders so as to leverage co-benefits and mainstream GCR reductions into existing programs. Social scientific research could also examine how to effectively tap the inspirational power of the far future, especially for emotional inspiration, which has received limited prior attention. Progress in these research areas could go a long way toward identifying how to, in practice, achieve large GCR reductions.
The overall message of this paper is that helping the far future requires attention to which specific actions can help the far future and likewise to what can motivate these actions. The actions are not necessarily motivated by their far-future impact. This is fine. The far future does not care why people acted to help it-the far future only cares that it was helped. And people taking these actions will rarely mind that their actions also help the far future. Most people will probably view this as at least a nice ancillary benefit. Additionally, people will appreciate that those promoting the far future have had the courtesy to consider what they care about and fit the far future into that. It can be disrespectful and counterproductive to expect people to drop everything they are doing just because some research concluded that the far future is more important. This means that those who seek to promote actions to benefit the far future must engage on an interpersonal level with the people who will take these actions, to understand what these people care about and how far-future-benefiting actions can fit in. This is an important task to pursue, given the enormity of what human civilization can accomplish from now into the far future.
3 The 25,000 year figure is derived from Archer and Ganopolski (2005), Fig. 3C, which shows a rapid temperature spike that declines most of the way back to current temperatures within 25,000 years and then remains at similar temperatures for another 500,000 years. However, this refers specifically to climatic effects; the human effects could persist longer, especially if the climate change causes a civilization-ending global catastrophe.
4 Nuclear terrorism would likely be too small to cause a far-future-impacting global catastrophe, unless it catalyzed a large-scale interstate nuclear war (Ayson, 2010). Biological terrorism could more readily cause a global catastrophe, as discussed above.
5 An idea to this effect is briefly discussed in Leggett (2006, p. 794).
6 See citations in Footnote 1.
\n Abstract (Victor Galaz, Miguel A. Centeno, Peter W. Callahan, Amar Causevic, Thayer Patterson, Irina Brass, Seth Baum, Darryl Farber, Joern Fischer, David Garcia, Timon McPhearson, Daniel Jimenez, Brian King, Paul Larcey, and Karen Levy) Automated decision making and predictive analytics through artificial intelligence, in combination with rapid progress in technologies such as sensor technology and robotics, are likely to change the way individuals, communities, governments and private actors perceive and respond to climate and ecological change. Methods based on various forms of artificial intelligence are already today being applied in a number of research fields related to climate change and environmental monitoring.
Investments into applications of these technologies in agriculture, forestry and the extraction of marine resources also seem to be increasing rapidly. Despite a growing interest in, and deployment of, AI technologies in domains critical for sustainability, few have explored possible systemic risks in depth. This article offers a global overview of the progress of such technologies in sectors with high impact potential for sustainability like farming, forestry and the extraction of marine resources. We also identify possible systemic risks in these domains including a) algorithmic bias and allocative harms; b) unequal access and benefits; c) cascading failures and external disruptions; and d) trade-offs between efficiency and resilience. We explore these emerging risks, identify critical questions, and discuss the limitations of current governance mechanisms in addressing AI sustainability risks in these sectors.
\n The great downside dilemma for risky emerging technologies (Seth D. Baum) \n Introduction Would you play a game of Russian roulette? Would you take a six chamber revolver, put one bullet in, give it a spin, point it at your head, and pull the trigger? How about for a million dollars? Would you play? I would guess that most readers of this paper would not play. I would guess that you would think that a chance at one million dollars is not worth the risk of ending up with a bullet in the brain. I personally would not play, for the same reason. Our brains and our lives are simply worth more than that. But suppose your life circumstances were different. Suppose you were struggling with money, that you were basically broke. Suppose you were sick, with a chronic condition you cannot afford to cure. Maybe you do not have that many years left to live anyway. Now, with less to lose and more to gain, that game of Russian roulette might start to look more attractive. Now, you might start counting how much that million dollars could do for you. Could it cure your sickness, make you healthy again? Could it add years to your life? Could it pull you out of poverty? Could it give you basic comfort? If the million dollars would do enough for you, then maybe you would choose to play. Desperate circumstances can sometimes warrant taking desperate risks. If it works, the circumstances get better, maybe much better. But it might not work, and if it does not, it comes with a downside-in this case, a bullet in the brain. Whether to play the game is a downside dilemma: a dilemma involving a significant possible downside. This paper talks of a great downside dilemma. It is great because the stakes are so high-indeed, they are literally astronomical. At stake is not the fate of a single person, as in Russian roulette, but the fate of human civilization. This includes the roughly seven billion people alive today and the many more members of all the future generations that might ever live. The stakes are astronomical because humans (or our descendants) might be able to colonize space and achieve great things across the Universe. Human civilization already has an active space program, and space colonization seems feasible, as long as no great catastrophe denies humanity the chance. The rest of the Universe is vastly larger than our humble home planet, so space colonization would open up enormous new opportunities.
Meanwhile, for all humanity currently knows, humans might be the only intelligent civilization anywhere in the Universe. And so the stakes could mean nothing less than the success or failure of intelligent civilization in the entire universe. A great downside dilemma, indeed. To be more specific, the great downside dilemma is any circumstance in which human civilization must choose whether to take a risk in which, if it works out, the benefit greatly improves the human condition, but if it does not work out, a catastrophe will occur, a catastrophe so large that civilization could perish, a metaphorical bullet in the brain. The dilemma is whether to take the risk. How much does civilization value that improvement in its condition? Could it be enough to pull civilization out of desperate circumstances? How large is the risk of catastrophe? Is it small enough that the risk is worth taking? Can any risk of civilization perishing be small enough to justify taking the risk? These questions must be answered in order to decide whether to take the risk. The great downside dilemma arises often for decisions about whether to pursue certain emerging technologies. These technologies promise to solve major societal problems. They bring peace, cure disease, protect the environment, and more. Or rather, they do these things if they work as intended. However, they may not work out as intended. They may fail, and fail catastrophically. In the worst cases, they could kill every living human-the extinction of our species-and destroy much of the rest of Earth's biosphere as well. Should society develop and launch these technologies, given their promise and despite their risks? That is the great downside dilemma for emerging technologies. This dilemma is an important issue for society as a whole and especially for scientists and engineers, who by virtue of their background are especially able to contribute to the debate. This dilemma is one important part of the broader challenge of avoiding civilization-ending global catastrophes. A growing body of scholarship recognizes the avoidance of these catastrophes as crucial for the long-term success of human civilization, and likewise as a key priority for action today [1-8]. Visionary technologist James Martin likened this era of civilization to a turbulent river that it must navigate [9]. If this era of civilization successfully navigates the river, then a long, bright future awaits, both on Earth and beyond. However, if it fails, then human civilization suffers a premature death. This paper describes several great downside dilemmas for emerging technologies and explains how humanity can navigate through them. The paper also discusses some other technologies that do not pose this dilemma because they promise to bring major benefits without a significant catastrophic risk. \n Historical precedents Amazingly, the great downside dilemma for emerging technologies has been faced at least twice before. The first precedent came in the desperate circumstances of World War II. The dilemma was whether to test-detonate the first nuclear weapon. While nuclear weapons proved to be unprecedentedly destructive weapons, a single detonation did not destroy the entire planet as some initially feared. The second precedent came during calm circumstances but still posed a dilemma every bit as large: whether to engage in messaging to extraterrestrial intelligence (METI). METI is of note because the dilemma still has not been resolved.
Humanity still does not know if METI is safe. Thus METI decisions today face the same basic dilemma as the initial decisions in decades past. \n Nuclear weapons It was 1945, towards the end of World War II. An American team of physicists, engineers, and military personnel built the first atomic bomb, which they named Trinity. Trinity was to be detonated in a test explosion, to make sure the technology worked, before using additional atomic bombs against Japan. By that point, Germany had already surrendered. Japan was nearing defeat, and the United States believed that the atomic bomb could compel Japan to surrender without the US waging a long, bloody invasion. It might seem counterintuitive, but this most powerful of weapons was built to save lives 1 . However, some of the physicists worried that the test might fail catastrophically. They worried that the detonation could ignite the atmosphere, ending life on Earth. They believed the chance of this happening to be exceptionally small, due to their understanding of the relevant physics. Still, they closed their report on the topic with the line 'However, the complexity of the argument and the absence of satisfactory experimental foundations makes further work on the subject highly desirable' [11] . Thus the risk did give them some pause. Sure enough, they took the risk. As is now known, the Trinity test succeeded: the bomb worked, and the atmosphere did not ignite. The rest is history. Humanity survived the first atomic bomb detonation, the next two, which were dropped on Hiroshima and Nagasaki, and the 2054 atomic bombs that have been detonated since in further testing 2 . The largest of these bombs, the Soviet Tsar Bomba, had a yield equivalent to about 50 megatons of TNT, a whopping 2500 times larger than Trinity. The atmosphere did not ignite. And physics has by now progressed to the point where we understand with very high confidence why atomic bomb detonations would not cause these harms (though they can of course cause other harms). But for that brief moment in time, when the first atomic detonation was under consideration, a great downside dilemma was faced, without the benefit of hindsight that now exists. Today, nuclear weapons remain a major threat. A single nuclear weapon could kill thousands or even millions of people. Nuclear war with hundreds or thousands of weapons (about 17 000 weapons still exist, mainly held by the United States and Russia) would not produce enough radiation to cause human extinction, as Bertrand Russell, Albert Einstein, and others once feared [12] . But it would cause significant cooling, as the smoke from burning cities rises into the atmosphere and blocks incoming sunlight. This cooling, often known as nuclear winter, could cause widespread agricultural failure, with the resulting famine threatening millions or even billions of people [13, 14] . The worst case scenario could include human extinction [15] . The leaders of nuclear weapons states thus face a different dilemma: In a crisis, are nuclear weapons worth using? While nuclear weapons are no longer a new technology, large nuclear arsenals have never been used in war, and so the dilemma must still be resolved without the benefit of hindsight.
1 Reed [10] reviews the history of the atomic bomb development and the corresponding physics.
\n Messaging to extraterrestrials It was 1974, a relatively calm and ordinary year by most measures. But an unusual exercise was in preparation at an astronomy observatory in Arecibo, Puerto Rico.
The Arecibo Observatory hosted what was (and still is) the largest radio telescope in the world 3 . Usually, the telescope is used either for radio astronomy, which detects radio waves incoming from the rest of the Universe, or radar astronomy, which studies the solar system by sending radio waves towards planets and other nearby objects and analyzing the waves that bounce back. Radio and radar astronomy are generally harmless and of scientific value. But in 1974, the Arecibo telescope was to be used differently. The plan was to send a message from Arecibo to a cluster of stars 25 000 light years away. The Arecibo Message was designed by astronomer Frank Drake and colleagues with the premise of METI. The message contained seven parts describing human physiology and astronomy. This was not the first exercise in METI. In 1962, the Morse Message was sent from the Evpatoria Planetary Radar in Crimea to Venus. But the Morse Message was as harmless as regular radar astronomy studying Venus. Because the Arecibo message was broadcast elsewhere, it broke new ground. So far, the Arecibo message has not received a response. Of course it has not: it was sent 40 years ago to a location 25 000 light years away. It will take at least another 49 960 years for the message to arrive and the response to reach back to Earth 4 . And it is possible that no ETI will receive the Arecibo message. Indeed, it is possible that there are no ETI out there to receive it. It is also possible that ETI will receive it but not respond in any way. So even after another 49 960 years, the Arecibo message could prove inconsequential to humanity, except for its modest educational value. But it might not. The message could receive a response. Despite some proclamations to the contrary, humanity has little understanding of how an encounter with ETI would proceed. Some people expect that ETI would benefit humanity, providing scientific knowledge and intercultural exchange, or even solutions to humanity's problems. Others expect that ETI would harm humanity, enslaving or even killing us. These are among the many possible outcomes of ETI encounter [16, 17] . Presumably, if it was known that the outcome would be beneficial, METI would proceed; likewise it would not proceed if the outcome was known to be harmful [18] . But in 1974, it was not known. Whether to engage in METI thus posed a great downside dilemma. In 2014, at the time of this writing, it is still not known whether METI is safe. Since the Arecibo message, several other messages to ETI have since been sent to closer stars, most recently the 2013 Lone Signal project 5 . None of these messages has yet received a reply. Meanwhile, it is an exciting time for SETI-the search for ETI. Astronomers are just starting to discover extrasolar planets [19] . But no ETI have yet been found. Until then, humanity will have deep uncertainty about the merits of METI. Some progress can be made by carefully thinking through the possibilities of ETI encounter. An argument can be made that no high-power METI should be conducted until humanity better understands the risks [20] , but this is a controversial point. The great downside dilemma for METI persists. \n Dilemmas in the making While humanity continues to face dilemmas related to nuclear weapons and METI, new dilemmas lurk on the horizon. The stakes for these new dilemmas are even higher, because they come with much higher probabilities of catastrophe 6 . 
I will focus on two: stratospheric geoengineering and artificial general intelligence.
2 Atomic bomb detonation data can be obtained from the Comprehensive Nuclear-Test-Ban Treaty Organization at http://www.ctbto.org/nucleartesting/history-of-nuclear-testing/world-overview/page-1-world-overview/.
3 Arecibo is the world's largest single-aperture radio telescope. Arrays of multiple telescopes combined as astronomical interferometers collectively cover larger areas.
4 Assuming the location is exactly 25 000 light years away, then 49 960 years from now is the minimum time it could take for a response to reach Earth. The response could reach Earth later if the ETI take more time before transmitting the response.
5 Full disclosure: I received funds from Lone Signal to contribute to a risk analysis of Lone Signal's transmissions. The study concluded that the transmissions posed no significant risk because the transmitter Lone Signal was using at the time did not exceed the background radio wave leakage from radio and television broadcasts [18] .
6 This assumes that the probability of catastrophe from METI is relatively low. This is a debatable point, given the deep uncertainty surrounding the possibility of extraterrestrial contact.
7 I am using the term 'global warming' here instead of the usual 'climate change' to distinguish it from nuclear winter, which is also a climatic change. Any readers who still doubt the legitimacy of global warming as an issue should consult some of the many works on the topic, including accessible books by leading global warming researchers [23, 24] and my own humble contribution [25]. Global warming research is also voluminously reviewed by the Intergovernmental Panel on Climate Change [26].
Neither technology currently exists, but both are subjects of active research and development. Understanding these technologies and the dilemmas they pose is already important and will only get more important as the technologies progress. \n Stratospheric geoengineering In summer 2010, heavy monsoons flooded about one fifth of Pakistan, with millions of people affected [21] . The floods were part of a broader northern hemisphere summer heat wave that set temperature records in many locations. Vast wildfires in Western Russia produced so much smoke that people in Moscow wore masks and airports redirected traffic. The floods, heat wave, and wildfires are among the sorts of extreme weather events that are expected to happen more often and with greater intensity as global warming worsens [22] 7 . The standard means of lessening the harms of global warming is to reduce atmospheric emissions of carbon dioxide, methane, and other greenhouse gases. That means using energy more efficiently, switching away from coal power, reversing deforestation, and more. But despite the risks of global warming, greenhouse gas emissions have been steadily increasing, and are projected to continue increasing into the future. More emissions means warmer temperatures and more severe harms from extreme weather events, sea level rise, and more. In despair over the perceived failure to reduce emissions, observers are increasingly considering geoengineering to lower global temperatures. Geoengineering is the intentional manipulation of the global Earth system [27] . Greenhouse gas emissions do not qualify as geoengineering because they are an unintended byproduct of activities with other aims. Perhaps the most commonly discussed type of geoengineering is stratospheric geoengineering, which would lower temperatures by injecting particles into the stratosphere, thereby blocking a portion of incoming sunlight. Stratospheric geoengineering is attractive because of its relative feasibility, efficacy, and affordability. However, stratospheric geoengineering changes regional temperature and precipitation patterns, leaving some need to adapt to climatic changes.
Stratospheric geoengineering also does nothing to address the acidification of oceans caused by carbon dioxide emissions being absorbed into the oceans. Ocean acidification is a major problem in its own right, a large threat to ocean ecosystems. Finally, stratospheric geoengineering also poses significant risks that could even exceed those of global warming. Perhaps the largest risk from stratospheric geoengineering is the possibility of abrupt halt. Particles put into the stratosphere will cycle out on time-scales of about five or ten years. In order to maintain stable temperatures, particles must be continuously injected into the stratosphere. If the geoengineering abruptly halts, such that additional particles are not injected, then temperatures will rapidly shoot back up towards where they would have been without geoengineering [28] . This rapid temperature increase could be especially difficult to adapt to and thus be especially disruptive. For example, it may be difficult to determine which crops to plant in a given region, because the crops suitable for that region will change too quickly. Fortunately, the rapid temperature increase can be avoided simply by continuing to inject particles into the stratosphere. Indeed, the harms of rapid temperature increase provide strong incentive not to stop particle injection in the first place. Under normal circumstances, people would have to be either incompetent or malicious to stop particle injection. Assuming the geoengineering is managed by responsible parties, abrupt halt may be unlikely, making stratospheric geoengineering relatively safe. This is under normal circumstances. However, particle injection may nonetheless halt if some other catastrophe occurs, such as a war or a pandemic, that prevents people from continuing the injections. The result would be a 'double catastrophe' of rapid temperature increase hitting a population already vulnerable from the first catastrophe [29] . This double catastrophe could be very harmful; potentially it could even result in human extinction. This makes for a rather severe downside. Figure 1 depicts the double catastrophe scenario. The figure shows average global temperature versus time for three scenarios: (1) the world without geoengineering, in which temperatures gradually rise due to greenhouse gas emissions; (2) ongoing geoengineering, in which temperatures remain indefinitely at a low, stable level (around 13 °C); and (3) abrupt geoengineering halt (around the year 2080), in which temperatures rapidly rise towards where they would have been without geoengineering. Figure 1 also indicates when an initial catastrophe would occur (shortly before 2080) in a double catastrophe scenario.
Figure 1. Global temperature for three scenarios: no geoengineering, ongoing geoengineering, and geoengineering that abruptly stops. The initial catastrophe corresponds to a geoengineering double catastrophe [29].
The great downside dilemma for stratospheric geoengineering is the dilemma of whether to inject particles into the stratosphere.
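Since Figure 1 itself is not reproduced here, the following sketch generates a purely schematic version of the three trajectories it describes. Every number (dates, temperatures, relaxation rate) is an illustrative assumption chosen only to mimic the shapes described in the text, not data from the paper.

# Schematic reconstruction of the three scenarios described for Figure 1.
# All numbers are illustrative assumptions, not data from the paper.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(2000, 2201)
baseline = 13.0                                # assumed stable temperature under geoengineering (deg C)
no_geo = baseline + 0.03 * (years - 2000)      # gradual warming without geoengineering
ongoing_geo = np.full(years.shape, baseline)   # temperatures held low and stable

halt_year = 2080
abrupt_halt = np.where(
    years < halt_year,
    ongoing_geo,
    # after the halt, temperatures rapidly relax toward the no-geoengineering curve
    no_geo - (no_geo - baseline) * np.exp(-(years - halt_year) / 10.0),
)

plt.plot(years, no_geo, label="no geoengineering")
plt.plot(years, ongoing_geo, label="ongoing geoengineering")
plt.plot(years, abrupt_halt, label="abrupt halt (double catastrophe)")
plt.axvline(halt_year, linestyle="--", label="initial catastrophe / halt")
plt.xlabel("year")
plt.ylabel("global mean temperature (deg C)")
plt.legend()
plt.show()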
On one hand, stratospheric geoengineering could lower temperatures, avoiding many harms of global warming. On the other hand, it poses a risk of rapid temperature increase that could even result in human extinction. So, should stratospheric geoengineering be pursued? A key factor in resolving the dilemma is understanding how bad the impacts of global warming could get without geoengineering. The floods, heat waves, and other effects already being observed will almost certainly get worse. This is bad, but there is an even worse impact potentially on the horizon: the exceedance of mammalian thermal limits. The limits depend on wet bulb temperature, which is a combination of 'regular' dry bulb temperature and humidity. When wet bulb temperature goes above 35 °C, mammals-including humans-can no longer perspire to regulate our body temperature, causing us to overheat and die. Currently, wet bulb temperatures never exceed the 35 °C limit, but under some possible global warming scenarios, the limit would sometimes be exceeded in much of the land surface of the planet [30] . Unless humans and other mammals took shelter in air conditioning, they would die. Under these conditions, it may be difficult to keep civilization intact in the warmer regions or even worldwide. Another perspective on the potential severity of global warming comes from looking at the long-term co-evolution of the human species and Earth climates. The species Homo sapiens sapiens is dated at around 200 000 years old. This means that humans have lived through about two full glacial-interglacial cycles, i.e. ice ages and the warm periods between them, which cycle back and forth on time-scales of about 100 000 years [23] . Archaeological evidence suggests that early Homo sapiens sapiens and their immediate ancestors had cognitive capabilities comparable to those of contemporary humans [31, 32] . However, civilization did not take off until the agricultural revolution, which began around 10 000 years ago and occurred in at least seven independent locations within just a few thousand years. The last 10 000 years coincide with the Holocene, a warm interglacial period with a relatively stable climate, suggesting that this climate may have been crucial for the rise of civilization [33] . Meanwhile, global warming threatens to push temperatures to levels significantly outside the range of recent glacial-interglacial cycles, bringing climates that Homo sapiens sapiens and its immediate ancestors have never seen before [34] . To the extent that certain climates are essential for human civilization, global warming could be devastating. This sort of long-term perspective is also helpful for understanding stratospheric geoengineering risk. Global warming could last for centuries, millennia, or even longer [34] . This is a very long time to continue injecting particles into the stratosphere. It is also plenty of time for plenty of catastrophes to occur. For example, risk analysis of nuclear war finds about a 0.1%-1% chance of nuclear war occurring during any given year [35] . Over hundreds or thousands of years, this makes nuclear war virtually certain to occur. Of course, the world could permanently get rid of nuclear weapons, eliminating the risk. But this might not happen, and meanwhile there are other types of catastrophes to worry about.
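As a rough illustration of how a small annual probability compounds over the long time horizons just discussed, the probability of at least one occurrence over N years is 1 - (1 - p)^N, assuming a constant, independent annual probability p (a simplifying assumption for illustration, not a claim of the paper). The annual probabilities below come from the 0.1%-1% range cited in the text; the time horizons are arbitrary.

# Probability of at least one nuclear war over N years, assuming a constant,
# independent annual probability p (a simplifying assumption for illustration).
def cumulative_probability(p_annual, n_years):
    return 1.0 - (1.0 - p_annual) ** n_years

for p in (0.001, 0.01):          # 0.1% and 1% per year, the range cited in the text
    for n in (100, 500, 1000):   # illustrative time horizons
        print(f"p = {p:.1%}/yr over {n} years: "
              f"P(at least one) = {cumulative_probability(p, n):.1%}")

On these assumptions, even the low end of the range (0.1% per year) gives roughly a two-in-three chance of at least one nuclear war over a thousand years.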
Over the time-scales of global warming, a stratospheric geoengineering double catastrophe may be quite likely. So, should stratospheric geoengineering be pursued? At this time, I do not believe a good answer to this question exists. There is too much uncertainty about both the consequences of stratospheric geoengineering and the consequences of forgoing stratospheric geoengineering, as well as about the probability of an abrupt halt to stratospheric geoengineering. Fortunately, the decision does not need to be made just yet. Global warming is not yet so bad that stratospheric geoengineering is worth the risk. But geoengineering decisions may be made soon; research to reduce the uncertainty should proceed now so that wise decisions can be made [36, 37] . When the time comes to decide whether to launch stratospheric geoengineering, the right action to take may also be the more difficult action to take. It is quite plausible that civilization could endure the worst harms of regular global warming, but would collapse from the rapid global warming of stratospheric geoengineering abrupt halt. If this is the case, then it would be in civilization's long-term interest to abstain from stratospheric geoengineering and suffer through regular global warming. This abstention may be best regardless of how painful regular global warming could get and regardless of how unlikely stratospheric geoengineering abrupt halt would be. It is just that important to ensure the long-term viability of human civilization. But this is hardly an encouraging prospect, dooming humanity to the pains of regular global warming, when lower temperatures could so easily be produced. Meanwhile, civilization can lessen the stratospheric geoengineering dilemma by reducing greenhouse gas emissions. Regardless of any broader failures at emissions reductions, every additional bit helps. And reducing emissions helps with both sides of the dilemma: it lessens the severity of both regular global warming and the rapid temperature increase from stratospheric geoengineering abrupt halt. As discussed further below, many options for reducing emissions come with minimal dilemmas of their own, making them excellent options to pursue. \n Artificial general intelligence In January 2011, two world-leading players of the game show Jeopardy! took on an IBM computer named Watson. Watson won the game convincingly. During the last Final Jeopardy! round, human contestant Ken Jennings wrote below his response, 'I for one welcome our new computer overlords'. It was a humorous moment in the long rise of artificial intelligence (AI). But how high can AI rise? Could AI actually become the overlords of humanity, taking over the world? And should such a development be welcomed? At the outset, it is important to distinguish between two types of AI: narrow and general. Narrow AI is intelligent in specific domains but cannot reason outside the domains it was designed for. Narrow AI is by now ubiquitous across a breadth of contexts, from searching the web to playing games like Jeopardy! Narrow AI can be quite useful, and can also pose some risks. But it is not expected to take over the world, because controlling the world requires capabilities across many domains. General AI (AGI) is intelligent across a wide range of domains. Humans are also intelligent across many domains, but this does not mean that AGI would necessarily think like humans do. An AGI may not need to think like humans in order to be capable across many domains 8 .
Early AI researchers boldly predicted that human-level AGI would be achieved by dates long since past, as Crevier [39] and McCorduck [40] chronicle. This grandiose failure of prediction led many modern AI researchers to be skeptical about the prospects of AGI [41] . However, there remains an active AGI research community [42] . Experts in the field diverge widely about when AGI is likely to be achieved and on its impacts if or when it is achieved [43] [44] [45] . But some of the predictions about impacts are quite dramatic. One line of thinking posits that an AGI, or at least certain types of AGIs, could essentially take over the world. This claim depends on two others. First, power benefits from intelligence, such that the most intelligent entities will tend to have the most power. Second, an AGI can gain vastly more intelligence than humans can, especially if the AGI can design an even more intelligent AGI, which designs a still more intelligent AGI, and so on until an 'intelligence explosion' [46] or 'singularity' [47, 48] occurs. The resulting 'superintelligent' AGI [49] could be humanity's final invention [50] because the AGI would then be fully in control. If the AGI is 'Friendly' to humanity [51] , then it potentially could solve a great many of humanity's problems. Otherwise, the AGI will likely kill everyone inadvertently as it pursues whatever goals it happened to be programmed with-for example, an AGI programmed to excel at chess would kill everyone while converting the planet into a computer that enabled it to calculate better chess moves [52] . Per this line of thinking, an AGI would be much like a magic genie, such as the one depicted in the film Aladdin (John Musker and Ron Clements, directors, 1992). The genie is all-powerful but obligated to serve its master. The master can wish for almost anything, but should be careful what he or she wishes for. Indeed, genie stories are often stories of unintended consequences. For example, in the penultimate scene of Aladdin, Jafar wishes to become a genie. He was eager to gain the genie's powers, but ended up trapped in servitude (and stuck inside a small lamp). The story with AGI may be similar. The AGI would do exactly what its human programmers instructed it to do, regardless of whether the programmers would, in retrospect, actually want this to happen. In attempting to program the AGI to do something desirable, the programmers could end up dead, along with everyone else on the planet 9 . If this line of thinking is correct, or even if it has at least some chance of being correct, then AGI poses a great downside dilemma. Should an AGI be built and launched? Given the possibility of being destroyed by AGI, it might appear that AGI should simply not be built. Doing so would ensure that humanity retains control of itself and its fate. But for several reasons, the situation is not so simple. A first complication is that AGI might be Friendly or otherwise beneficial to humanity, or to the world. The benefits of a Friendly AGI could be immense. Imagine having the perfect genie: unlimited wishes that are interpreted as you intended them to be, or maybe even better than you intended them to be. That could go quite well. Perhaps there would be no more poverty or pollution. Perhaps space colonization could proceed apace. Perhaps the human population could double, or triple, or grow tens, hundreds, or thousands of times larger, all with no decline in quality of life. A Friendly AGI might be able to make these things possible. 
Decision-making on AGI should balance this large potential upside with the also-large downside risk. For example, suppose the AGI had a 50% chance of killing everyone and a 50% chance of doubling the human population with no decline in quality of life. The expected population would be equally large with or without the AGI. Does this mean that humanity is indifferent to launching the AGI? If it was a 51% chance of doubling the population, versus 49% for killing everyone, does this mean humanity would rather launch the AGI? What if it was a chance of the population increasing by a factor of ten, or a thousand? These are important types of questions to answer when making decisions about launching an AGI; a simple illustrative calculation of this sort is sketched below. A second complication is that AGI is not the only threat that humanity faces. In the absence of AGI, humanity might die out anyway because of nuclear weapons, global warming, or something else. If AGI succeeds, then these other threats could go away, solved by our new computer overlords. That is a significant upside for AGI. What if an AGI has a 50% chance of killing everyone, but absent AGI, humanity has a 60% chance of dying out from something else? Should the AGI be launched? The dilemma for AGI can thus look a lot like that for stratospheric geoengineering. Imagine, some years into the future, humanity finds itself in a difficult spot. Perhaps global warming is bringing great harm, and other environmental stressors are as well. Perhaps major countries are on the brink of war with nuclear weapons or something even more destructive. Perhaps poverty is rampant, life unsafe and unpleasant. Perhaps other solutions, including stratospheric geoengineering, are found to be unsafe or otherwise undesirable. And perhaps there is no hope in sight of conditions improving. In this case, taking the risk of launching an AGI could start to look attractive. Indeed, in terms of the long-term success of human civilization, it might even be the right thing to do. Or, the right thing may be to suffer through without AGI. It would depend on the details, just as it would for stratospheric geoengineering or a desperate game of Russian roulette. Following this logic, one way to help reduce AGI risk is to improve the general human condition. By keeping humanity out of desperate circumstances, the risk of AGI can be made to look less attractive. This opens up a wide range of opportunities to help reduce AGI risk, from reducing greenhouse gas emissions to improving conditions for the world's poor.
8 AGI was discussed in detail in another paper in the series of papers based on the event 'Emerging Technologies and the Future of Humanity' [38] .
9 Arguably, such a result would at least be better than an eternity trapped in a small lamp.
But the merits of this approach depend on how the AGI would be developed. The third complication is that AGI development could involve basic computing resources and technologies of growing economic importance. AGI is not like nuclear weapons, which require exotic materials. AGI could be developed on any sufficiently powerful computer. Computing power is steadily growing, a trend known as Moore's Law. Meanwhile, narrow AI is of increasing technological sophistication and economic importance. At the time of this writing, driverless cars are just starting to hit the streets around the world, with promise to grow into a major industry. Differences between narrow AI and AGI notwithstanding, these AI advances may be able to facilitate the development of AGI.
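To make the launch-decision arithmetic above concrete, here is a minimal expected-value sketch. It uses "expected population as a multiple of today's" as the decision criterion purely for illustration; the criterion and all probabilities are assumptions, not recommendations from the paper.

# Minimal expected-value sketch of the AGI launch decision discussed above.
# The criterion and the probabilities are illustrative assumptions only.
def expected_population_multiple(p_catastrophe, upside_multiple):
    """Expected population (as a multiple of today's) if the AGI is launched:
    extinction with probability p_catastrophe, otherwise the upside multiple."""
    return (1.0 - p_catastrophe) * upside_multiple

scenarios = [
    (0.50, 2),     # 50% extinction, 50% doubling: expected multiple = 1.0
    (0.49, 2),     # slightly better odds than even
    (0.50, 10),    # larger upside
    (0.50, 1000),  # much larger upside
]

for p, upside in scenarios:
    print(f"P(catastrophe) = {p:.0%}, upside x{upside}: "
          f"expected population multiple = {expected_population_multiple(p, upside):.2f}")

Whether expected population is even the right criterion is itself part of the dilemma; the sketch only shows how quickly the numbers change with the assumed probabilities and upsides.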
Given the risks of AGI, it may seem attractive or even wise to relinquish precursor hardware and software technologies, potentially including certain narrow AI and the computer systems they run on [53] . But given the pervasiveness of these technologies, it may be difficult to do so. Here lies another dilemma. Would humanity be willing to sacrifice much of its computing technology in order to avoid an AGI catastrophe? Should it? The dilemma here resembles that faced in the recent film Transcendence (Wally Pfister, director, 2014). The film shows an AGI that has been launched and is steadily taking over the world. The AGI is in many ways beneficial or even friendly, but the humans who are close to it become increasingly skeptical and decide to shut it down. (More precisely, they persuade the AGI to shut itself down, since the AGI was still in control.) However, in shutting it down, humanity had to sacrifice the internet, and potentially also other electronics. A case can be made that the AGI should not have been shut down: without the internet and other electronics, the long-term prospects for human civilization could be severely limited, such that humanity would be better off keeping the AGI intact and hoping for the best [54] . As with stratospheric geoengineering, AGI launch decisions do not need to be made right now. However, for AGI there is great uncertainty about how much time remains. Experts are sharply divided on how long it will take to achieve AGI, with some doubting that it will ever occur. Given this uncertainty, and the high stakes of the launch decision, it is not at all too early to assess which AGIs should or should not be launched, and to create the conditions that can help ensure better outcomes whether or not an AGI is launched. \n Technologies without great downside dilemma Not all technologies present a great downside dilemma. These technologies may be disruptive, may have downsides, and may carry risks, but they do not threaten catastrophic harm to human civilization. Or, to the extent that they could threaten catastrophic harm, they do not increase the risk of catastrophe beyond what it would be without the technology, or do not increase the risk to any significant extent. Some of these technologies even hold great potential to improve the human condition, including by reducing other catastrophic risks. These latter technologies are especially attractive and in general should be pursued to the extent that their benefits and cost-effectiveness are competitive with other options for improving the human condition (and achieving any other goals). Three such emerging technologies are discussed here. \n Sustainable design Sustainable design refers broadly to the design of technologies oriented towards improving the environment, advancing sustainability, and related goals. These technologies promise to reduce the harms of climate change and other environmental problems. Quite a lot of such technologies are already in use, from the humble bicycle to advanced solar technologies. This is a vast technology space, and a lot has been said about these technologies elsewhere [55] , so a full review here is unwarranted. What is worth noting here is that these technologies can reduce the risk of environmental catastrophes like climate change, and are often also worth pursuing for other reasons. For example, technology that uses energy, water, and other resources more efficiently can save money by avoiding purchases of these resources. 
Technologies like bicycles can make people healthier by giving them more exercise. Where sustainable design comes with such cobenefits, it is an especially attractive option. But given the catastrophic potential of environmental risks, some sustainable design, and potentially quite a lot of it, is worth pursuing even if it otherwise comes at an expense. \n Nuclear fusion power Nuclear fusion power is perhaps the Holy Grail of sustainable design. It promises a clean, safe, abundant energy source. If nuclear fusion power can be realized, and if it can be made affordable, then humanity's energy needs could potentially be fully met. And with abundant energy, a lot of other opportunities open up. For example: ocean water could be desalinated, eliminating water resource scarcities. Carbon dioxide could be removed from the atmosphere, which is another form of geoengineering, and a much safer one at that. Countries can develop their economies without worrying nearly as much about their environmental impact and without worrying about being dependent on another country's energy resources. One major long-term benefit of fusion power relates back to the fossil fuels it would replace. With fusion power, humanity can keep the rest of the fossil fuels underground, ready and waiting for when they will really be needed. That time will come sometime within the upcoming hundreds of thousands of years, when Earth's climate cycles back to a glacial period: a new ice age. The exact timing of the next glacial period is uncertain, and depends on, among other things, how much greenhouse gas humanity emits [34] . But, barring any other radical changes to the global Earth system (such as its dismantling by a runaway AGI), a glacial period will eventually occur. And when it does, it could help to still have some fossil fuel around to lessen the bite of the global cooling [56, p 234-235] . This would be yet another form of geoengineering, one with the long-term interests of human civilization in mind. Unfortunately, it is not clear if or when the Holy Grail of fusion power will be achieved. Fusion power research has been going on for decades [57] , and it may take more decades still. With such a long development period, fusion power is a modern analog to cathedrals. Many cathedrals took a century or longer to build. This includes at least one cathedral currently under construction, Sagrada Família in Barcelona, whose construction began in 1882 and has no clear projected completion date. Humanity's track record with cathedrals indicates its capability to complete large, multi-century, intergenerational projects. Perhaps the fusion power project will be completed too. Unlike cathedrals, it is not known if it is even possible to complete the fusion power project: to make fusion power a major energy source for human civilization. Already, it is possible to generate power from nuclear fusion. First came uncontrolled fusion-fusion bombs-beginning with the detonation of Ivy Mike in 1952. Soon after came controlled fusion, beginning with Scylla 1 at Los Alamos National Laboratory in 1958 [58] . Controlled fusion is what can be used for electricity generation. However, controlled fusion thus far has always consumed more energy than it generates. A major breakthrough recently occurred at the National Ignition Facility at Lawrence Livermore National Laboratory: for the first time, a net energy gain occurred within the fuel that triggers the fusion [59] . 
However, the fuel is just one part of the fusion process; the National Ignition Facility experiment consumed overall about 100 times more energy than it generates. But, clear progress is being made. On the other hand, much more progress is needed still, and it is not clear if or when net energy gain will be achieved, or if it would be affordable. If affordable fusion power is achieved, it would be transformative. The fuels are deuterium and lithium, supplies of which can last for thousands to billions of years, depending on power plant design, and there could be no significant radioactive waste [60] . While fusion reactors potentially could be used to generate materials for nuclear weapons, their weapons proliferation risk would be lower, potentially much lower, than it is for fission power [61, 62] . While fusion power research is expensive and the prospects for success uncertain, the potential benefits are, in my own estimation, sufficient to justify ongoing investment. This cathedral is well worth attempting to build. \n Space colonization The dream of living beyond Earth may be as old as humanity itself. Within the last century, concrete steps have been taken towards this dream. The project of colonizing space may take even longer than the project of fusion power, perhaps orders of magnitude longer. But it comes with its own set of sizable benefits, with relatively little risk. One benefit is the emotional inspiration that humanity can draw from marveling at its cosmic achievement [63] . Other notable benefits are more practical, but no less great. One major benefit of space colonization is the protection it offers against global catastrophes on Earth. If humanity has self-sufficient space colonies, then it can survive even the complete destruction of its home planet. A spacefaring civilization is a more resilient civilization. This benefit has prompted calls for space colonization [64]. However, space colonization using current technology would be highly expensive and perhaps not even feasible, rendering other options for protecting against catastrophes, including Earthbased refuges, the more cost-effective option [65, 66] . The protections that space colonization could offer do not justify investment in space colonization at this time. While space colonization can protect against harms, it can also enable major benefits on its own. The opportunities for civilization are, quite literally, astronomically greater beyond Earth than on it. Indeed, the astronomic potential for human civilization is a main reason why great downside dilemmas and other global catastrophic risk decisions are so important to resolve. But again, this does not mean that humanity should invest in space colonization at this time. Instead, it would be wise to focus on the catastrophic threats it faces, such that future generations can go on to colonize space and achieve astronomically great success as a civilization. \n Conclusion The fate of human civilization now hangs in the balance. As James Martin put it [9] , humanity is going through a turbulent river full of many threats to its survival. Many of these threats derive from risky emerging technologies like stratospheric geoengineering and artificial general intelligence. Some threats also derive from established technologies like nuclear weapons and radio telescopes for messaging to extraterrestrials. And other technologies do not pose a significant threat, including sustainable design technologies, nuclear fusion power, and space colonization. 
Meanwhile, all of these technologies, if used properly, could help humanity navigate the turbulence. And if the turbulence is successfully navigated, a very long and bright future awaits. Humanity's future could include billions of years on Earth as well as a much bigger and longer existence across the Universe. Human civilization and its descendents can achieve many great things, if only it has the opportunity. Navigating the turbulence-preventing civilization-ending global catastrophe-is thus a crucial task for this era of human civilization. The great downside dilemma for risky emerging technologies could be an especially difficult stretch of turbulence for humanity to navigate. Technologies like stratospheric geoengineering and artificial general intelligence pose great temptations, especially if humanity finds itself in difficult circumstances. For the long-term sake of human civilization, it may be best to abstain from the technologies, but over the short-term, abstention could mean suffer through life without them. Global warming is just one of several forces that could put humanity in desperate circumstances in the not-too-distant future, making risky technologies especially attractive. If the right decisions are to be made about these various technologies-and that could mean taking the risk of using them-then two things are needed. First, the risks must be understood. People must know what the right decision is. This means characterizing the probabilities that the technologies will fail, the severity of harm if they do fail, and humanity's prospects if the technologies are not used. But, as they say, knowing is only half the battle. The other half is applying the knowledge. The second thing needed is for decision-making procedures to be in place such that bad risks are not taken. Accomplishing this means bringing together the many people involved in risky technology development, from scientists and engineers to government regulators. Some scientists and engineers might not like having their work regulated, but this only underscores the importance of including them in the process, so their concerns can be addressed, as can anyone else's. Many jurisdictions already regulate a variety of technologies, in light of the risks they pose. This is a good step. But emerging technologies pose new challenges that must be addressed in turn. And the global nature of the worst catastrophes suggests a role for international cooperation [67] . Efforts at smaller scales can also play a role, including the daily actions everyone can make to protect the environment, promote peace, and otherwise keep humanity out of desperate circumstances. For the sake of human civilization-indeed, for the sake of the Universe-actions across all these scales are well worth taking. Academy of Sciences Physica Scripta Phys. Scr. 89 (2014) 128004 (10pp) doi:10.1088/0031-8949/89/12/128004 \n\t\t\t Phys. Scr. 89 (2014) 128004 S D Baum", "date_published": "n/a", "url": "n/a", "filename": "Baum_2014_Phys._Scr._89_128004.tei.xml", "abstract": "Some emerging technologies promise to significantly improve the human condition, but come with a risk of failure so catastrophic that human civilization may not survive. This article discusses the great downside dilemma posed by the decision of whether or not to use these technologies. The dilemma is: use the technology, and risk the downside of catastrophic failure, or do not use the technology, and suffer through life without it. 
Historical precedents include the first nuclear weapon test and messaging to extraterrestrial intelligence. Contemporary examples include stratospheric geoengineering, a technology under development in response to global warming, and artificial general intelligence, a technology that could even take over the world. How the dilemma should be resolved depends on the details of each technology's downside risk and on what the human condition would otherwise be. Meanwhile, other technologies do not pose this dilemma, including sustainable design technologies, nuclear fusion power, and space colonization. Decisions on all of these technologies should be made with the long-term interests of human civilization in mind. This paper is part of a series of papers based on presentations at the Emerging Technologies and the Future of Humanity event held at the Royal Swedish Academy of Sciences on", "id": "1952dd834e3b84d7b28073274b63b4b1"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Rodrigo Toro Icarte", "Ethan Waldie", "Toryn Q Klassen", "Richard Valenzano", "Margarita P Castro", "Sheila A Mcilraith", "Rodrigo Toro"], "title": "Learning Reward Machines for Partially Observable Reinforcement Learning", "text": "Introduction The use of neural networks for function approximation has led to many recent advances in Reinforcement Learning (RL). Such deep RL methods have allowed agents to learn effective policies in many complex environment including board games [30] , video games [23] , and robotic systems [2] . However, RL methods (including deep RL methods) often struggle when the environment is partially observable. This is because agents in such environments usually require some form of memory to learn optimal behaviour [31] . Recent approaches for giving memory to an RL agent either rely on recurrent neural networks [24, 15, 37, 29] or memory-augmented neural networks [25, 18] . In this work, we show that Reward Machines (RMs) are another useful tool for providing memory in a partially observable environment. RMs were originally conceived to provide a structured, automatabased representation of a reward function [33, 4, 14, 39] . Exposed structure can be exploited by the Q-Learning for Reward Machines (QRM) algorithm [33] , which simultaneously learns a separate policy for each state in the RM. QRM has been shown to outperform standard and hierarchical deep RL over a variety of discrete and continuous domains. However, QRM was only defined for fully observable environments. Furthermore, the RMs were handcrafted. In this paper, we propose a method for learning an RM directly from experience in a partially observable environment, in a manner that allows the RM to serve as memory for an RL algorithm. A requirement is that the RM learning method be given a finite set of detectors for properties that serve as the vocabulary for the RM. We characterize an objective for RM learning that allows us to formulate the task as a discrete optimization problem and propose an efficient local search approach to solve it. By simultaneously learning an RM and a policy for the environment, we are able to significantly outperform several deep RL baselines that use recurrent neural networks as memory in three partially observable domains. We also extend QRM to the case of partial observability where we see further gains when combined with our RM learning method. \n Preliminaries RL agents learn policies from experience. 
When the problem is fully-observable, the underlying environment model is typically assumed to be a Markov Decision Process (MDP). An MDP is a tuple M = S, A, r, p, γ , where S is a finite set of states, A is a finite set of actions, r : S × A → R is the reward function, p(s, a, s ) is the transition probability distribution, and γ is the discount factor. The agent starts not knowing what r or p are. At every time step t, the agent observes the current state s t ∈ S and executes an action a t ∈ A following a policy π(a t |s t ). As a result, the state s t changes to s t+1 ∼ p(s t+1 |s t , a t ) and the agent receives a reward signal r(s t , a t ). The goal is to learn the optimal policy π * , which maximizes the future expected discounted reward for every state in S [32] . [38] is a well-known RL algorithm that uses samples of experience of the form (s t , a t , r t , s t+1 ) to estimate the optimal q-function q * (s, a). Here, q * (s, a) is the expected return of selecting action a in state s and following an optimal policy π * . Deep RL methods like DQN [23] and DDQN [35] represent the q-function as qθ (s, a), where qθ is a neural network whose inputs are features of the state and action, and whose weights θ are updated using stochastic gradient descent. \n Q-learning In partially observable problems, the underlying environment model is typically assumed to be a Partially Observable Markov Decision Process (POMDP). A POMDP is a tuple P O = S, O, A, r, p, ω, γ , where S, A, r, p, and γ are defined as in an MDP, O is a finite set of observations, and ω(s, o) is the observation probability distribution. At every time step t, the agent is in exactly one state s t ∈ S, executes an action a t ∈ A, receives reward r t = r(s t , a t ), and moves to state s t+1 according to p(s t , a t , s t+1 ). However, the agent does not observe s t+1 , but only receives an observation o t+1 ∈ O. This observation provides the agent a clue about what the state s t+1 ∈ S is via ω. In particular, ω(s t+1 , o t+1 ) is the probability of observing o t+1 from state s t+1 [5] . RL methods cannot be immediately applied to POMDPs because the transition probabilities and reward function are not necessarily Markovian w.r.t. O (though by definition they are w.r.t. S). As such, optimal policies may need to consider the complete history o 0 , a 0 , . . . , a t−1 , o t of observations and actions when selecting the next action. Several partially observable RL methods use a recurrent neural network to compactly represent the history, and then use a policy gradient method to train it. However, when we do have access to a full POMDP model P O , then the history can be summarized into a belief state. A belief state is a probability distribution b t : S → [0, 1] over S, such that b t (s) is the probability that the agent is in state s ∈ S given the history up to time t. The initial belief state is computed using the initial observation o 0 : b 0 (s) ∝ ω(s, o 0 ) for all s ∈ S. The belief state b t+1 is then determined from the previous belief state b t , the executed action a t , and the resulting observation o t+1 as b t+1 (s ) ∝ ω(s , o t+1 ) s∈S p(s, a t , s )b t (s) for all s ∈ S. Since the state transitions and reward function are Markovian w.r.t. b t , the set of all belief states B can be used to construct the belief MDP M B . Optimal policies for M B are also optimal for the POMDP [5] . \n Reward Machines for Partially Observable Environments In this section, we define RMs for the case of partial observability. 
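Before the running example, the belief-state update from the preliminaries, b_{t+1}(s') ∝ ω(s', o_{t+1}) Σ_s p(s, a_t, s') b_t(s), can be written as a short sketch. The two-state toy POMDP below is invented purely to exercise the update and is not one of the paper's domains.

```python
import numpy as np

# Minimal belief-state update for a POMDP, following
#   b_{t+1}(s') ∝ ω(s', o_{t+1}) * sum_s p(s, a_t, s') * b_t(s).
# The two-state POMDP below is a toy example.

S = ["left", "right"]          # hidden states
O = ["dim", "bright"]          # observations

# p[a][s, s'] = transition probability; omega[s, o] = observation probability
p = {
    "stay":   np.array([[0.9, 0.1], [0.1, 0.9]]),
    "switch": np.array([[0.2, 0.8], [0.8, 0.2]]),
}
omega = np.array([[0.7, 0.3],   # observation probabilities from "left"
                  [0.2, 0.8]])  # observation probabilities from "right"

def belief_update(b, a, o):
    """Return the next belief state given belief b, action a, observation o."""
    predicted = p[a].T @ b                       # sum_s p(s, a, s') * b(s)
    unnormalized = omega[:, O.index(o)] * predicted
    return unnormalized / unnormalized.sum()

b0 = omega[:, O.index("dim")] / omega[:, O.index("dim")].sum()  # b_0(s) ∝ ω(s, o_0)
b1 = belief_update(b0, "switch", "bright")
print(b0, b1)
```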
We use the following problem as a running example to help explain various concepts. Example 3.1 (The cookie domain). The cookie domain, shown in Figure 1a , has three rooms connected by a hallway. The agent (purple triangle) can move in the four cardinal directions. There is a button in the yellow room that, when pressed, causes a cookie to randomly appear in the red or blue room. The agent receives a reward of +1 for reaching (and thus eating) the cookie and may then go and press the button again. Pressing the button before reaching a cookie will move it to a random location. There is no cookie at the beginning of the episode. This is a partially observable environment since the agent can only see what it is in the room that it currently occupies. RMs are finite state machines that are used to encode a reward function [33] . In the case of partial observability, they are defined over a set of propositional symbols P that correspond to a set of high-level features that the agent can detect using a labelling function L : O ∅ × A ∅ × O → 2 P where (for any set X) X ∅ X ∪ {∅}. L assigns truth values to symbols in P given an environment experience e = (o, a, o ) where o is the observation seen after executing action a when observing o. We use L(∅, ∅, o) to assign truth values to the initial observation. We call a truth value assignment of P an abstract observation because it provides a high-level view of the low-level environment observations via the labelling function L. A formal definition of an RM follows: Definition 3.1 (reward machine). Given a set of propositional symbols P, a Reward Machine is a tuple R P = U, u 0 , δ u , δ r where U is a finite set of states, u 0 ∈ U is an initial state, δ u is the state-transition function, δ u : U ×2 P → U , and δ r is the reward-transition function, δ r : U ×2 P → R. RMs decompose problems into a set of high-level states U and define transitions using if-like conditions defined by δ u . These conditions are over a set of binary properties P that the agent can detect using L. For example, in the cookie domain, P = { , , , , , , }. These properties are true (i.e., part of an experience label according to L) in the following situations: , , , or is true if the agent ends the experience in a room of that color; is true if the agent ends the experience in the same room as a cookie; is true if the agent pushed the button with its last action; and is true if the agent ate a cookie with its last action (by moving onto the space where the cookie was). Figure 2 shows three possible RMs for the cookie domain. They all define the same reward signal (1 for eating a cookie and 0 otherwise) but differ in their states and transitions. As a result, they differ with respect to the amount of information about the current domain state that can be inferred from the current RM state, as we will see below. Each RM starts in the initial state u 0 . Edge labels in the figures provide a visual representation of the functions δ u and δ r . For example, label , 1 between state u 2 and u 0 in Figure 2b represents δ u (u 2 , { , }) = u 0 and δ r (u 2 , { , }) = 1. Intuitively, this means that if the RM is in state u 2 and the agent's experience ended in room immediately after eating the cookie , then the agent will receive a reward of 1 and the RM will transition to u 0 . Notice that any properties not listed in the label are false (e.g. must be false to take the transition labelled , 1 ). 
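Reward machines of this kind are straightforward to encode directly. The sketch below implements Definition 3.1 as a small Python class and instantiates the single-state RM of Figure 2a, which gives reward 1 for eating a cookie and 0 otherwise. Because the propositions in the paper are pictorial icons, named stand-ins such as "ate_cookie" and "red" are used here, so the exact symbol set is an assumption.

```python
# A minimal encoding of Definition 3.1: an RM is a tuple (U, u0, delta_u, delta_r),
# whose transitions are taken on truth assignments over the propositions P.
# Proposition names below are stand-ins for the icon symbols used in the paper.

class RewardMachine:
    def __init__(self, states, u0, delta_u, delta_r):
        self.states, self.u0 = states, u0
        self.delta_u, self.delta_r = delta_u, delta_r  # dicts keyed by (state, label)
        self.u = u0

    def reset(self):
        self.u = self.u0

    def step(self, true_props):
        """Advance the RM on an abstract observation (a frozenset of true propositions)
        and return the RM reward; unlisted labels fall back to the 'o/w' (otherwise) edge."""
        key = (self.u, true_props)
        u_next = self.delta_u.get(key, self.delta_u[(self.u, "o/w")])
        reward = self.delta_r.get(key, self.delta_r[(self.u, "o/w")])
        self.u = u_next
        return reward

# The single-state RM of Figure 2a: reward +1 whenever the agent eats a cookie.
ate = frozenset({"ate_cookie"})
naive_rm = RewardMachine(
    states={"u0"}, u0="u0",
    delta_u={("u0", ate): "u0", ("u0", "o/w"): "u0"},
    delta_r={("u0", ate): 1.0, ("u0", "o/w"): 0.0},
)
print(naive_rm.step(frozenset({"red"})), naive_rm.step(ate))  # 0.0 1.0
```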
We also use multiple labels separated by a semicolon (e.g., \" , 0 ; , 0 \") to describe different conditions for transitioning between the RM states, each with their own associated reward. The label o/w, r (\"o/w\" for \"otherwise\") on an edge from u i to u j means that that transition will be made (and reward r received) if none of the other transitions from u i can be taken. Let us illustrate the behaviour of an RM using the one shown in Figure 2c . The RM will stay in u 0 until the agent presses the button (causing a cookie to appear), whereupon the RM moves to u 1 . From u 1 the RM may move to u 2 or u 3 depending on whether the agent finds a cookie when it enters another room. It is also possible to associate meaning with being in RM states: u 0 means that there is no cookie available, u 1 means that there is a cookie in some room (either blue or red), etc. When learning a policy for a given RM, one simple technique is to learn a policy π(o, u) that considers the current observation o ∈ O and the current RM state u ∈ U . Interestingly, a partially observable problem might be non-Markovian over O, but Markovian over O × U for some RM R P . This is the case for the cookie domain with the RM from Figure 2c , for example. Q-Learning for RMs (QRM) is another way to learn a policy by exploiting a given RM [33] . QRM learns one q-function qu (i.e., policy) per RM state u ∈ U . Then, given any sample experience, the RM can be used to emulate how much reward would have been received had the RM been in any one of its states. Formally, experience e = (o, a, o ) can be transformed into a valid experience ( o, u , a, o , u , r) used for updating qu for each u ∈ U , where u = δ u (u, L(e)) and r = δ r (u, L(e)). Hence, any off-policy learning method can take advantage of these \"synthetically\" generated experiences to update all subpolicies simultaneously. When tabular q-learning is used, QRM is guaranteed to converge to an optimal policy on fullyobservable problems [33] . However, in a partially observable environment, an experience e might be more or less likely depending on the RM state that the agent was in when the experience was collected. For example, experience e might be possible in one RM state u i but not in RM state u j . Thus, updating the policy for u j using e as QRM does, would introduce an unwanted bias to quj . We will discuss how to (partially) address this problem in §5. u 0 , 1 ; , 1 ; o/w, 0 (a) Naive RM. u 0 u 1 u 2 o/w, 0 o/w, 0 o/w, 0 , 0 , 0 ; , 0 , 1 , 1 (b) \"Optimal\" RM. u 0 u 1 u 2 u 3 o/w, 0 o/w, 0 o/w, 0 o/w, 0 , 0 , 0 ; , 0 , 0 ; , 0 , 1 , 1 , 0 , 0 (c) Perfect RM. \n Learning Reward Machines from Traces Our overall idea is to search for an RM that can be used as external memory by an agent for a given task. As input, our method will only take a set of high-level propositional symbols P, and a labelling function L that can detect them. Then, the key question is what properties should such an RM have. Three proposals naturally emerge from the literature. The first comes from the work on learning Finite State Machines (FSMs) [3, 40, 10] , which suggests learning the smallest RM that correctly mimics the external reward signal given by the environment, as in Giantamidis and Tripakis' method for learning Moore Machines [10] . Unfortunately, such approaches would learn RMs of limited utility, like the one in Figure 2a . 
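The QRM update described above is essentially a relabelling loop over RM states: a single environment experience is replayed from the perspective of every u ∈ U. The sketch below assumes an RM object shaped like the earlier one and a labelling function L; the per-state replay-buffer interface is invented for illustration.

```python
# QRM-style experience sharing: one environment experience e = (o, a, o') is
# replayed for every RM state u, using the RM to compute the counterfactual
# next RM state u' = delta_u(u, L(e)) and reward r = delta_r(u, L(e)).

def share_experience(rm, L, buffers, o, a, o_next, done):
    label = L(o, a, o_next)               # abstract observation: a frozenset of props
    for u in rm.states:
        key = (u, label)
        u_next = rm.delta_u.get(key, rm.delta_u[(u, "o/w")])
        r = rm.delta_r.get(key, rm.delta_r[(u, "o/w")])
        # Each RM state has its own q-function q_u and its own replay buffer.
        buffers[u].add(state=(o, u), action=a, reward=r,
                       next_state=(o_next, u_next), done=done)
```

The modification discussed in §5 would additionally skip those states u for which the observed label is not in N_{u,l}, so that experiences judged impossible from u are not used to update q_u.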
This naive RM correctly predicts reward in the cookie domain (i.e., +1 for eating a cookie , zero otherwise) but provides no memory in support of solving the task. The second proposal comes from the literature on learning Finite State Controllers (FSC) [22] and on model-free RL methods [32] . This work suggests looking for the RM whose optimal policy receives the most reward. For instance, the RM from Figure 2b is \"optimal\" in this sense. It decomposes the problem into three states. The optimal policy for u 0 goes directly to press the button, the optimal policy for u 1 goes to the blue room and eats the cookie if present, and the optimal policy for u 2 goes to the red room and eats the cookie. Together, these three policies give rise to an optimal policy for the complete problem. This is a desirable property for RMs, but requires computing optimal policies in order to compare the relative quality of RMs, which seems prohibitively expensive. However, we believe that finding ways to efficiently learn \"optimal\" RMs is a promising future work direction. Finally, the third proposal comes from the literature on Predictive State Representations (PSR) [20] , Deterministic Markov Models (DMMs) [21] , and model-based RL [16] . These works suggest learning the RM that remembers sufficient information about the history to make accurate Markovian predictions about the next observation. For instance, the cookie domain RM shown in Figure 2c is perfect w.r.t. this criterion. Intuitively, every transition in the cookie environment is already Markovian except for transitioning from one room to another. Depending on different factors, when entering to the red room there could be a cookie there (or not). The perfect RM is able to encode such information using 4 states: when at u 0 the agent knows that there is no cookie, at u 1 the agent knows that there is a cookie in the blue or the red room, at u 2 the agent knows that there is a cookie in the red room, and at u 3 the agent knows that there is a cookie in the blue room. Since keeping track of more information will not result in better predictions, this RM is perfect. Below, we develop a theory about perfect RMs and describe an approach to learn them. \n Perfect Reward Machines: Formal Definition and Properties The key insight behind perfect RMs is to use their states U and transitions δ u to keep track of relevant past information such that the partially observable environment P O becomes Markovian w.r.t. O × U . Definition 4.1 (perfect reward machine). An RM R P = U, u 0 , δ u , δ r is considered perfect for a POMDP P O = S, O, A, r, p, ω, γ with respect to a labelling function L if and only if for every trace o 0 , a 0 , . . . , o t , a t generated by any policy over P O , the following holds: Pr(o t+1 , r t |o 0 , a 0 , . . . , o t , a t ) = Pr(o t+1 , r t |o t , x t , a t ) (1) where x 0 = u 0 and x t = δ u (x t−1 , L(o t−1 , a t−1 , o t )) . Two interesting properties follow from Definition 4.1. First, if the set of belief states B for the POMDP P O is finite, then there exists a perfect RM for P O with respect to some L. Second, the optimal policies for perfect RMs are also optimal for the POMDP (see supplementary material §12). Instead, we propose an alternative that focuses on a necessary condition for a perfect RM: the RM must predict what is possible and impossible in the environment at the abstract level. 
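Restated in standard notation, the condition of Definition 4.1 requires, for every trace o_0, a_0, ..., o_t, a_t generated by any policy:

```latex
\Pr\left(o_{t+1}, r_t \mid o_0, a_0, \ldots, o_t, a_t\right)
  \;=\; \Pr\left(o_{t+1}, r_t \mid o_t, x_t, a_t\right) \tag{1}
\quad\text{where } x_0 = u_0 \text{ and } x_t = \delta_u\!\big(x_{t-1},\, L(o_{t-1}, a_{t-1}, o_t)\big).
```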
For example, it is impossible to be at u_3 in the RM from Figure 2c and make an abstract observation in which the agent is in the red room and sees a cookie there, because the RM reaches u_3 only if the cookie was seen in the blue room or was seen not to be in the red room. This idea is formalized in the optimization model LRM. Let T = {T_0, . . . , T_n} be a set of traces, where each trace T_i is a sequence of observations, actions, and rewards: T_i = (o_{i,0}, a_{i,0}, r_{i,0}, . . . , a_{i,t_i-1}, r_{i,t_i-1}, o_{i,t_i}). (2) We now look for an RM ⟨U, u_0, δ_u, δ_r⟩ that can be used to predict L(e_{i,t+1}) from L(e_{i,t}) and the current RM state x_{i,t}, where e_{i,t+1} is the experience (o_{i,t}, a_{i,t}, o_{i,t+1}) and e_{i,0} is (∅, ∅, o_{i,0}) by definition. The model parameters are the set of traces T, the set of propositional symbols P, the labelling function L, and a maximum number of states in the RM u_max. The model also uses the sets I = {0 . . . n} and T_i = {0 . . . t_i - 1}, where I contains the indices of the traces and T_i their time steps. The model has two auxiliary variables, x_{i,t} and N_{u,l}. Variable x_{i,t} ∈ U represents the state of the RM after observing trace T_i up to time t. Variable N_{u,l} ⊆ 2^P is the set of all the next abstract observations seen from RM state u and abstract observation l at some point in T. In other words, l' ∈ N_{u,l} iff u = x_{i,t}, l = L(e_{i,t}), and l' = L(e_{i,t+1}) for some trace T_i and time t. The model LRM is:
minimize over ⟨U, u_0, δ_u, δ_r⟩:  Σ_{i∈I} Σ_{t∈T_i} log |N_{x_{i,t}, L(e_{i,t})}|   (LRM)
subject to:
⟨U, u_0, δ_u, δ_r⟩ ∈ R_P   (3)
|U| ≤ u_max   (4)
x_{i,t} ∈ U   for all i ∈ I, t ∈ T_i ∪ {t_i}   (5)
x_{i,0} = u_0   for all i ∈ I   (6)
x_{i,t+1} = δ_u(x_{i,t}, L(e_{i,t+1}))   for all i ∈ I, t ∈ T_i   (7)
N_{u,l} ⊆ 2^P   for all u ∈ U, l ∈ 2^P   (8)
L(e_{i,t+1}) ∈ N_{x_{i,t}, L(e_{i,t})}   for all i ∈ I, t ∈ T_i   (9)
Constraints (3) and (4) ensure that we find a well-formed RM over P with at most u_max states. Constraints (5), (6), and (7) ensure that x_{i,t} is equal to the current state of the RM, starting from u_0 and following δ_u. Constraints (8) and (9) ensure that the sets N_{u,l} contain every L(e_{i,t+1}) that has been seen right after l and u in T. The objective function comes from maximizing the log-likelihood of predicting L(e_{i,t+1}) using a uniform distribution over all the possible options given by N_{u,l}. A key property of this formulation is that any perfect RM is optimal w.r.t. the objective function in LRM when the number of traces tends to infinity (see supplementary material §12): Theorem 4.3. When the set of training traces (and their lengths) tends to infinity and is collected by a policy such that π(a|o) > ε for all o ∈ O and a ∈ A (for some constant ε > 0), any perfect RM with respect to L and at most u_max states will be an optimal solution to the formulation LRM. Finally, note that the definition of a perfect RM does not impose conditions over the rewards associated with the RM (i.e., δ_r). This is why δ_r is a free variable in the model LRM. However, we still expect δ_r to model the external reward signals given by the environment. To do so, we estimate δ_r(u, l) using its empirical expectation over T (as is commonly done when constructing belief MDPs [5]). \n Searching for a Perfect Reward Machine Using Tabu Search We now describe the specific optimization technique used to solve LRM. We experimented with many discrete optimization approaches, including mixed integer programming [6], Benders decomposition [8], and evolutionary algorithms [17], among others, and found local search algorithms [1] to be the most effective at finding high-quality RMs given short time limits. In particular, we use Tabu search [11], a simple and versatile local search procedure with convergence guarantees and many successful applications in the literature [36]. We also include our unsuccessful mixed integer linear programming model for LRM in the supplementary material §10. In the context of our work, Tabu search starts from a random RM and, on each iteration, evaluates all "neighbouring" RMs. We define the neighbourhood of an RM as the set of RMs that differ by exactly one transition (i.e., removing or adding a transition, or changing its value) and evaluate RMs using the objective function of LRM. When all neighbouring RMs have been evaluated, the algorithm chooses the one with the lowest value and sets it as the current RM. To avoid local minima, Tabu search maintains a Tabu list of all the RMs that were previously used as the current RM. RMs in the Tabu list are then pruned when examining the neighbourhood of the current RM.
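Evaluating a candidate RM under this objective reduces to accumulating the prediction sets N_{u,l} over the traces and then summing log |N|. The sketch below assumes the abstract observations have already been produced by L and an RM object like the earlier one; it illustrates the cost computation rather than reproducing the paper's implementation.

```python
import math
from collections import defaultdict

# Evaluate the LRM objective for a candidate RM on a set of traces.
# Each trace is given as a list of abstract observations [L(e_{i,0}), L(e_{i,1}), ...].

def lrm_cost(rm, abstract_traces):
    N = defaultdict(set)     # (u, l) -> set of next labels seen in T
    visits = []              # (u, l) pairs visited, scored after N is complete
    for trace in abstract_traces:
        u, l = rm.u0, trace[0]
        for l_next in trace[1:]:
            N[(u, l)].add(l_next)
            visits.append((u, l))
            # x_{i,t+1} = delta_u(x_{i,t}, L(e_{i,t+1}))
            u = rm.delta_u.get((u, l_next), rm.delta_u[(u, "o/w")])
            l = l_next
    # Objective of LRM: sum over steps of log |N_{x_{i,t}, L(e_{i,t})}|.
    return sum(math.log(len(N[(u, l)])) for (u, l) in visits)
```

Tabu search then compares candidate RMs by this cost over the one-transition-change neighbourhood, skipping RMs already on the Tabu list.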
\n Simultaneously Learning a Reward Machine and a Policy We now describe our overall approach to simultaneously finding an RM and exploiting that RM to learn a policy. The complete pseudo-code can be found in the supplementary material (Algorithm 1). Our approach starts by collecting a training set of traces T generated by a random policy during t w \"warmup\" steps. This set of traces is used to find an initial RM R using Tabu search. The algorithm then initializes policy π, sets the RM state to the initial state u 0 , and sets the current label l to the initial abstract observation L(∅, ∅, o). The standard RL learning loop is then followed: an action a is selected following π(o, u) where u is the current RM state, and the agent receives the next observation o and the immediate reward r. The RM state is then updated to u = δ u (u, L(o, a, o )) and the last experience ( o, u , a, r, o , u ) is used by any RL method of choice to update π. Note that in an episodic task, the environment and RM are reset whenever a terminal state is reached. If on any step, there is evidence that the current RM might not be the best one, our approach will attempt to find a new one. Recall that the RM R was selected using the cardinality of its prediction sets N (LRM). Hence, if the current abstract observation l is not in N u,l , adding the current trace to T will increase the size of N u,l for R. As such, the cost of R will increase and it may no longer be the best RM. Thus, if l ∈ N u,l , we add the current trace to T and search for a new RM. Recall that we use Tabu search, though any discrete optimization method could be applied. Our method only uses the new RM if its cost is lower than R's. If the RM is updated, a new policy is learned from scratch. Given the current RM, we can use any RL algorithm to learn a policy π(o, u), by treating the combination of o and u as the current state. If the RM is perfect, then the optimal policy π * (o, u) will also be optimal for the original POMDP (as stated in Theorem 4.2). However, to exploit the problem structure exposed by the RM, we can use the QRM algorithm. As explained in §3, standard QRM under partial observability can introduce a bias because an experience e = (o, a, o ) might be more or less likely depending on the RM state that the agent was in when the experience was collected. We partially address this issue by updating qu using (o, a, o ) if and only if L(o, a, o ) ∈ N u,l , where l was the current abstract observation that generated the experience (o, a, o ). Hence, we do not transfer experiences from u i to u j if the current RM does not believe that (o, a, o ) is possible in u j . For example, consider the cookie domain and the perfect RM from Figure 2c . If some experience consists of entering to the red room and seeing a cookie, then this experience will not be used by states u 0 and u 3 as it is impossible to observe a cookie at the red room from those states. Note that adding this rule may work in many cases, but it will not address the problem in all environments (more discussion in §7). We consider addressing this problem as an interesting area for future work. \n Experimental Evaluation We tested our approach on three partially observable grid domains (Figure 1 ). The agent can move in the four cardinal directions and can only see what is in the current room. These are stochastic domains where the outcome of an action randomly changes with a 5% probability. The first environment is the cookie domain (Figure 1a ) described in §3. 
Each episode is 5, 000 steps long, during which the agent should attempt to get as many cookies as possible. The second environment is the symbol domain (Figure 1b ). It has three symbols ♣, ♠, and in the red and blue rooms. One symbol from {♣, ♠, } and possibly a right or left arrow are randomly placed at the yellow room. Intuitively, that symbol and arrow tell the agent where to go, e.g., ♣ and → tell the agent to go to ♣ in the east room. If there is no arrow, the agent can go to the target symbol in either room. An episode ends when the agent reaches any symbol in the red or blue room, at which point it receives a reward of +1 if it reached the correct symbol and −1 otherwise. The third environment is the 2-keys domain (Figure 1c ). The agent receives a reward of +1 when it reaches the coffee (in the yellow room). To do so, it must open the two doors (shown in brown). Each door requires a different key to open it, and the agent can only carry one key at a time. Initially, the two keys are randomly located in either the blue room, the red room, or split between them. We tested two versions of our Learned Reward Machine (LRM) approach: LRM+DDQN and LRM+DQRM. Both learn an RM from experience as described in §4.2, but LRM+DDQN learns a policy using DDQN [35] while LRM+DQRM uses the modified version of QRM described in §5. In all domains, we used u max = 10, t w = 200, 000, an epsilon greedy policy with = 0.1, and a discount factor γ = 0.9. The size of the Tabu list and the number of steps that the Tabu search performs before returning the best RM found is 100. We compared against 4 baselines: DDQN [35] , A3C [24] , ACER [37] , and PPO [29] using the OpenAI baseline implementations [12] . DDQN uses the concatenation of the last 10 observations as input which gives DDQN a limited memory to better handle the domains. A3C, ACER, and PPO use an LSTM to summarize the history. Note that the output of the labelling function was also given to the baselines. Details on the hyperparameters and networks can be found in the supplementary material §13. Figure 3 shows the total cumulative rewards that each approach gets every 10, 000 training steps and compares it to the optimal policy. For the LRM algorithms, the figure shows the median performance over 30 runs per domain, and percentile 25 to 75 in the shadowed area. For the DDQN baseline, we show the maximum performance seen for each time period over 5 runs per problem. Similarly, we also show the maximum performance over the 30 runs of A3C, ACER, and PPO per period. All the baselines outperformed a random policy, but none make much progress on any of the domains. Furthermore, LRM approaches largely outperform all the baselines, reaching close-to-optimal policies in the cookie and symbol domain. We also note that LRM+DQRM learns faster than LRM+DDQN, but is more unstable. In particular, LRM+DQRM converged to a considerably better policy than LRM+DDQN in the 2-keys domain. We believe this is due to QRM's experience sharing mechanism that allows for propagating sparse reward backwards faster (see supplementary material §13.3). A key factor in the strong performance of the LRM approaches is that Tabu search finds high-quality RMs in less than 100 local search steps (Figure 5 , supplementary material). In fact, our results show that Tabu search finds perfect RMs in most runs, in particular when tested over the symbol domain. 
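For convenience, the hyperparameters reported above can be gathered into one configuration block; only the values come from the text, and the field names are invented here.

```python
# Experimental settings stated in the paper, collected into a single config.
LRM_CONFIG = {
    "u_max": 10,                # maximum number of RM states
    "t_w": 200_000,             # warmup steps of random exploration
    "epsilon": 0.1,             # epsilon-greedy exploration
    "gamma": 0.9,               # discount factor
    "tabu_list_size": 100,      # length of the Tabu list
    "tabu_search_steps": 100,   # search steps before returning the best RM
    "episode_length": 5_000,    # steps per episode in the cookie domain
}
```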
7 Discussion, Limitations, and Broader Potential Solving partially observable RL problems is challenging and LRM was able to solve three problems that were conceptually simple but presented a major challenge to A3C, ACER, and PPO with LSTM-based memories. A key idea behind these results was to optimize over a necessary condition for perfect RMs. This objective favors RMs that are able to predict possible and impossible future observations at the abstract level given by the labelling function L. In this section, we discuss the advantages and current limitations of such an approach. u 0 u 1 , 1 ; o/w, 0 , 1 ; o/w, 0 , 0 , 0 We begin by considering the performance of Tabu search in our domains. Given a training set composed of one million transitions, a simple Python implementation of Tabu search takes less than 2.5 minutes to learn an RM across all our environments, when using 62 workers on a Threadripper 2990WX processor. Note that Tabu search's main bottleneck is evaluating the neighbourhood around the current RM solution. As the size of the neighbourhood depends on the size of the set of propositional symbols P, exhaustively evaluating the neighbourhood may sometimes become impractical. To handle such problem, it will be necessary to import ideas from the Large Neighborhood Search literature [27] . Regarding limitations, learning the RM at the abstract level is efficient but requires ignoring (possibly relevant) low-level information. For instance, Figure 4 shows an adversarial example for LRM. The agent receives reward for eating the cookie ( ). There is an external force pulling the agent down-i.e., the outcome of the \"move-up\" action is actually a downward movement with high probability. There is a button ( ) that the agent can press to turn off (or back on) the external force. Hence, the optimal policy is to press the button and then eat the cookie. Given P = { , }, a perfect RM for this environment is fairly simple (see Figure 4 ) but LRM might not find it. The reason is that pressing the button changes the low-level probabilities in the environment but does not change what is possible or impossible at the abstract level. In other words, while the LRM objective optimizes over necessary conditions for finding a perfect RM, those conditions are not sufficient to ensure that an optimal solution will be a perfect RM. In addition, if a perfect RM is found, our heuristic approach to share experiences in QRM would not work as intended because the experiences collected when the force is on (at u 0 ) would be used to update the policy for the case where the force is off (at u 1 ). Other current limitations include that it is unclear how to handle noise over the high-level detectors L and how to transfer learning from previously learned policies when a new RM is learned. Finally, defining a set of proper high-level detectors for a given environment might be a challenge to deploying LRM. Hence, looking for ways to automate that step is an important direction for future work. \n Related Work State-of-the-art approaches to partially observable RL use Recurrent Neural Networks (RNNs) as memory in combination with policy gradient [24, 37, 29, 15] , or use external neural-based memories [25, 18, 13] . Other approaches include extensions to Model-Based Bayesian RL to work under partial observability [28, 7, 9] and to provide a small binary memory to the agent and a special set of actions to modify it [26] . While our experiments highlight the merits of our approach w.r.t. 
RNN-based approaches, we rely on ideas that are largely orthogonal. As such, we believe there is significant potential in mixing these approaches to get the benefit of memory at both the high-and the low-level. The effectiveness of automata-based memory has long been recognized in the POMDP literature [5] , where the objective is to find policies given a complete specification of the environment. The idea is to encode policies using Finite State Controllers (FSCs) which are FSMs where the transitions are defined in terms of low-level observations from the environment and each state in the FSM is associated with one primitive action. When interacting with the environment, the agent always selects the action associated with the current state in the controller. Meuleau et al. [22] adapted this idea to work in the RL setting by exploiting policy gradient to learn policies encoded as FSCs. RMs can be considered as a generalization of FSC as they allow for transitions using conditions over high-level events and associate complete policies (instead of just one primitive action) to each state. This allows our approach to easily leverage existing deep RL methods to learn policies from low-level inputs, such as images-which is not achievable by Meuleau et al. [22] . That said, further investigating using ideas for learning FSMs [3, 40, 10] in learning RMs is a promising direction for future work. Our approach to learn RMs is greatly influenced by Predictive State Representations (PSRs) [20] . The idea behind PSRs is to find a set of core tests (i.e., sequences of actions and observations) such that if the agent can predict the probabilities of these occurring, given any history H, then those probabilities can be used to compute the probability of any other test given H. The insight is that state representations that are good for predicting the next observation are good for solving partially observable environments. We adapted this idea to the context of RM learning as discussed in §4. While our work was under review, two interesting papers were submitted to arXiv. The first paper, by Xu et al. [39] , proposes a polynomial time algorithm to learn reward machines in fully observable domains. Their goal is to learn the smallest reward machine that is consistent with the reward function-which makes sense for fully observable domains, but would have limited utility under partial observability (as discussed in §4). The second paper, by Zhang et al. [41] , proposes to learn a discrete PSR representation of the environment directly from low-level observations and then plan over such representation using tabular Q-learning. This is a promising research direction, with some clear synergies with LRM. \n Concluding Remarks We have presented a method for learning (perfect) Reward Machines in partially observable environments and demonstrated the effectiveness of these learned RMs in tackling partially observable RL problems that are unsolvable by A3C, ACER and PPO. Informed by criteria from the POMDP, FSC, and PSR literature, we proposed a set of RM properties that support tackling RL in partially observable environments. We used these properties to formulate RM learning as a discrete optimization problem. We experimented with several optimization methods, finding Tabu search to be the most effective. We then combined this RM learning with policy learning for partially observable RL problems. Our combined approach outperformed a set of strong LSTM-based approaches on different domains. 
We believe this work represents an important building block for creating RL agents that can solve cognitively challenging partially observable tasks. Not only did our approach solve problems that were unsolvable by A3C, ACER and PPO, but it did so in a relatively small number of training steps. RM learning provided the agent with memory, but more importantly the combination of RM learning and policy learning provided it with discrete reasoning capabilities that operated at a higher level of abstraction, while leveraging deep RL's ability to learn policies from low-level inputs. This work leaves open many interesting questions relating to abstraction, observability, and properties of the language over which RMs are constructed. We believe that addressing these questions, among many others, will push the boundary of partially observable RL problems that can be solved. \n Algorithm for Simultaneously Learning Reward Machines and a Policy Algorithm 1 shows our overall approach to simultaneously learning an RM and exploiting that RM to learn a policy. The algorithm inputs are the set of propositional symbols P, the labelling function L, a maximum on the number of RM states u max , and the number of \"warmup\" steps t w . Our approach starts by collecting a training set of traces T generated by a random policy during t w steps (Line 2). This set of traces is used to find an initial RM R using Tabu search (Line 3). If later traces show that R is incorrect, our method will then find a new RM learned using the additional traces. Lines 4 and 5 initialize the environment and the policy π, and set variables x and l to the initial RM state u 0 and initial abstract observation L(∅, ∅, o), respectively. Lines 7-19 are the main loop of our approach. Lines 7-10 are part of the standard RL loop: the agent executes an action a selected following π(o, x) and receives the next observation o , the immediate reward r, and a boolean variable done indicating if the episode has terminated. Then, the state in the RM x is updated and the policy π is improved using the last experience ( o, x , a, r, o , x , done). Note that when done is true, the environment and RM are reset (Lines 17-18). Lines 11-16 involve relearning the RM when there is evidence that the current RM might not be the best one. Recall that the RM R was selected using the cardinality of its prediction sets N (see the description of LRM). Hence, if the current abstract observation l is not in N x,l , then adding the current trace to T will increase the size of N x,l for R. As such, the cost of R will increase and it may no longer be the best RM. Thus, if l ∈ N x,l , we add the current trace to T and use Tabu search to find a new RM. Note, our method only uses the new RM if its cost is lower than that of R (Lines 14-16). However, when the RM is updated, a new policy is learned from scratch (Line 16). T ← T ∪ get_current_trace() Proof sketch. If the reachable belief space B is finite, we can construct an RM that keeps track of the current belief state using one RM state per belief state and emulating their progression using δ u , and one propositional symbol for every action-observation pair. Thus, the current belief state b t can be inferred from the last observation, last action, and the current RM state. As such, the equality from Definition 4.1 holds. Two interesting properties follow from the definition of a perfect RM. First, if the set of belief states B for the POMDP P O is finite, then there exists a perfect RM for P O with respect to some L. 
Second, the optimal policies for perfect RMs are also optimal for the POMDP. Theorem 12.2. Let R P be a perfect RM for a POMDP P O w.r.t. a labelling function L, then any optimal policy for R P w.r.t. the environmental reward is also optimal for P O . Proof sketch. As the next observation and immediate reward probabilities can be predicted from O × U × A, an optimal policy over O × U must also be optimal over P O . A key property of this formulation is that any perfect RM is optimal with respect to the objective function in LRM when the number of traces tends to infinity: Theorem 12.3. When the set of training traces (and their lengths) tends to infinity and is collected by a policy such that π(a|o) > for all o ∈ O and a ∈ A, and some constant > 0, then any perfect RM with respect to L and at most u max states will be an optimal solution to the formulation given in LRM. Proof sketch. In the limit, l ∈ N u,l if and only if the probability of observing l after executing an action from the RM state u while observing l is non-zero. In particular, for all i ∈ I and t ∈ T , the cardinality of N xi,t,L(ei,t) will be minimal for a perfect RM. This follows from the fact that perfect RMs make perfect predictions for the next observation o given o, u, and a. Therefore, as we minimize the sum over log(|N xi,t,L(ei,t) |), the objective value for a perfect RM must be minimal. \n Experimental Evaluation 13.1 Experimental Details For LRM+DDQN and LRM+DQRM, the neural network used has 5 fully connected layers with 64 neurons per layer. On every step, we trained the network using 32 sampled experiences from a replay buffer of size 100,000 using the Adam optimizer [19] and a learning rate of 5e-5. The target networks were updated every 100 steps. DDQN [35] uses the same parameters and network architecture as LRM+DDQN, but its input is the concatenation of the last 10 observations, as commonly done by Atari playing agents. This gives DDQN a limited memory to better handle partially observable domains. In contrast, A3C, ACER, and PPO use an LSTM to summarize the history. We also followed the same testing methodology that was used in their original publications. We ran each approach at least 30 times per domain, and on every run, we randomly selected the number of hidden neurons for the LSTM from {64, 128, 256, 512} and a learning rate from (1e-3, 1e-5). We also sampled δ from {0, 1, 2} for ACER and the clip range from (0.1, 0.3) for PPO. Other parameters were fixed to their default values. While interacting with the environment, the agents were given a \"top-down\" view of the world represented as a set of binary matrices. One matrix had a 1 in the current location of the agent, one had a 1 in only those locations that are currently observable, and the remaining matrices each corresponded to an object in the environment and had a 1 at only those locations that were both currently observable and contained that object (i.e., locations in other rooms are \"blacked out\"). The agent also had access to features indicating if they were carrying a key, which colour room they were in, and the current status (i.e., occurring or not occurring) of the events detected by the labelling function. \n Tabu Search Figure 5 evaluates the quality of the RMs found by Tabu search by comparing it the perfect RM. 
In each plot, a dot compares the cost of a handcrafted perfect RM with that of an RM R that was found by Tabu search while running our LRM approaches, where the costs are evaluated relative to the training set used to find R. Being on or under the diagonal line (as in most of the points in the figure) means that Tabu search is finding RMs whose values are at least as good as the handcrafted RM. Hence, Tabu search is either finding perfect RMs or discovering that our training set is incomplete and our agent will eventually fill those gaps. \n Cookie domain Symbol domain 2-keys domains Cost Perfect RM Cost Learned RM Cost Perfect RM Cost Learned RM Cost Perfect RM Cost Learned RM \n DDQN vs DQRM: Exploration Heatmaps and Learned Trajectories As shown in §6 of the paper, LRM+DQRM tends to learn faster than LRM+DDQN and largely outperforms LRM+DDQN in the 2-keys domain. Towards better understating these results, we ran the following experiment. For the 2-keys domain (and identical random seed), we learn policies using DDQN and DQRM over the same handcrafted perfect RM from Figure 6 . RM state u 6 is fairly simple: when the agent is carrying a key and there is only one closed door, go and get the coffee. As expected, DQRM seems to learn an accurate estimation of that q-function using very limited interactions within the coffee room. In contrast, DDQN uses one big q-network to model the complete policy. Receiving reward by getting the coffee pushes the q-network estimation to believe that a high reward can be collected from the coffee room (even if the doors are closed). Hence, the agent spent a considerable amount of time hitting the second door without having a key. Second, DQRM shares experience over all the q-networks. This allows for using the experiences collected while being at early stages of the task (e.g., states u 0 , u 1 , u 2 , and u 3 ) to update policies for later stages (e.g., states u 4 , u 5 , and u 6 ). In particular, all the experience needed to learn to navigate from one room to another is shared. Therefore, while DDQN would depend on its network to avoid relearning how to navigate between rooms when the RM state changes, DQRM enforces such transfer by sharing experiences whenever it is appropriate. This might explain why DDQN spends considerably more time in the hallway than DQRM in heatmaps 50-100K and 100-150K. Note that the exploratory trend from 150-500K follows a clear pattern. On one hand, the DQRM agent seems to have a good idea of how to solve the task and, therefore, it spends most of its time on the hallway (solving the tasks requires passing through the hallway at least 4 times). On the other hand, the DDQN agent is getting stuck exploring subregions of the map. Finally, we inspected the trajectories of each agent when solving the task after 1 million training steps. Figures 9 shows the DDQN agent and Figure 10 shows the DQRM agent. Both agents solved the task, but DQRM solved it faster (83 steps vs 102 steps). In solving this problem, the main difficulty for the DDQN agent is in reacting to entering the South room and discovering that the keys are not there. Its reaction is to enter and leave the empty room many times before going to the North room. In the case of DQRM, the agent goes directly to the North room after observing that the South room is empty, but then it enters and leaves the North room a few times before collecting the keys. After collecting the first key, both agents solved the rest of the task almost optimally. 
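The q-network configuration described in §13.1 (five fully connected layers of 64 units, Adam at a learning rate of 5e-5, minibatches of 32 sampled from a 100,000-transition replay buffer, target networks synced every 100 steps) can be sketched as follows. PyTorch, the ReLU activations, and the placeholder input size are assumptions here, not the paper's stated choices.

```python
import torch
import torch.nn as nn

# Sketch of the q-network from §13.1: five fully connected layers of 64 units,
# plus an output head over the action values.

class QNetwork(nn.Module):
    def __init__(self, input_size, num_actions, hidden=64, layers=5):
        super().__init__()
        dims = [input_size] + [hidden] * layers
        blocks = []
        for i in range(layers):
            blocks += [nn.Linear(dims[i], dims[i + 1]), nn.ReLU()]
        self.body = nn.Sequential(*blocks)
        self.head = nn.Linear(hidden, num_actions)

    def forward(self, x):
        return self.head(self.body(x))

# Training settings from the paper: Adam with lr 5e-5, minibatches of 32 from a
# replay buffer of 100,000 transitions, target network updated every 100 steps.
net = QNetwork(input_size=400, num_actions=4)   # input size is a placeholder
optimizer = torch.optim.Adam(net.parameters(), lr=5e-5)
```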
\n Figure 1: Partially observable environments where the agent can only see what is in the current room. \n Figure 2: Three possible Reward Machines for the Cookie domain. \n Figure 3: Total reward collected every 10,000 training steps. \n Figure 4: The gravity domain. \n Algorithm 1 Learning an RM and a Policy: 1: Input: P, L, A, u_max, t_w 2: T ← collect_traces(t_w) 3: R, N ← learn_rm(P, L, T, u_max) 4: o, x, l ← env_get_initial_state(), u_0, L(∅, ∅, o) 5: π ← initialize_policy() 6: for t = 1 to t_train do 7: a ← select_action(π, o, x) 8: o', r, done ← env_execute_action(a) 9: x', l' ← δ_u(x, L(o, a, o')), L(o, a, o') 10: π ← improve(π, o, x, l, a, r, o', x', l', done, N) 11: if l' ∉ N_{x,l} then 12: ... 13: R', N ← relearn_rm(R, P, L, T, u_max) 14: if R' ≠ R then 15: R, done ← R', true 16: π ← initialize_policy() 17: end if 18: end if 19: if done then 20: o', x', l' ← env_get_initial_state(), u_0, L(∅, ∅, o) 21: end if 22: o, x, l ← o', x', l' 23: end for 24: return π \n Figure 5: Cost comparison between perfect RM and RM found by Tabu search. \n Figure 8: Exploration heatmaps for DDQN and DQRM over the 2-keys domain given a perfect RM (panels show successive 50K-step training intervals, e.g., 250-300K through 450-500K steps, for DDQN and DQRM). \n Figure 9: The learned trace after 1 million training steps by DDQN given a perfect RM, divided into four stages: (a) collecting the first key, (b) opening the first door, (c) collecting the other key, and (d) opening the second door. \n Figure 10: The learned trace after 1 million training steps by DQRM given a perfect RM, divided into the same four stages. \n … predicting L(e_{i,t+1}) from L(e_{i,t}) and the current RM state x_{i,t}, where e_{i,t+1} is the experience (o_{i,t}, a_{i,t}, o_{i,t+1}) and e_{i,0} is (∅, ∅, o_{i,0}) by definition. The model parameters are the set of traces T, the set of propositional symbols P, the labelling function L, and a maximum number of states in the RM, u_max. The model also uses the sets I = {0, ..., n} and T_i = {0, ..., t_i − 1}, where I contains the indices of the traces and T_i their time steps. The model has two auxiliary variables, x_{i,t} and N_{u,l}. Variable x_{i,t} ∈ U represents the state of the RM after observing trace T_i up to time t. Variable N_{u,l} ⊆ 2^{2^P} is the set of all the next abstract observations seen from the RM state u and the abstract observation l at some point in T. In other words, l' ∈ N_{u,l} iff u = x_{i,t}, l = L(e_{i,t}), and l' = L(e_{i,t+1}) for some trace T_i and time t. The resulting optimization problem is: minimize_{U, u_0, δ_u, δ_r} Σ_{i∈I} Σ_{t∈T_i} log(|N_{x_{i,t}, L(e_{i,t})}|) (LRM), s.t. ⟨U, u_0, δ_u, δ_r⟩ ∈ R_P. (3) \n 12 Theorems and Proof Sketches Theorem 12.1. Given any POMDP P_O with a finite reachable belief space, there exists a perfect RM for P_O with respect to some labelling function L.", "date_published": "n/a", "url": "n/a", "filename": "LRM_paper.tei.xml", "abstract": "Reward Machines (RMs) provide a structured, automata-based representation of a reward function that enables a Reinforcement Learning (RL) agent to decompose an RL problem into structured subproblems that can be efficiently learned via off-policy learning. Here we show that RMs can be learned from experience, instead of being specified by the user, and that the resulting problem decomposition can be used to effectively solve partially observable RL problems.
We pose the task of learning RMs as a discrete optimization problem where the objective is to find an RM that decomposes the problem into a set of subproblems such that the combination of their optimal memoryless policies is an optimal policy for the original problem. We show the effectiveness of this approach on three partially observable domains, where it significantly outperforms A3C, PPO, and ACER, and discuss its advantages, limitations, and broader potential. 1", "id": "bfdadb4326f1777e2d45eb6c1e669cb7"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Jessica Taylor"], "title": "Quantilizers: A Safer Alternative to Maximizers for Limited Optimization", "text": "Expected Utility Maximization Many frameworks for studying idealized agents assume that agents attempt to maximize the expectation of some utility function, as seen in the works of Heckerman and Horvitz (1990) , Bacchus and Grove (1995) , and Chajewska, Koller, and Parr (2000) . von Neumann and Morgenstern (1944) provide compelling arguments that any rational agent with consistent preferences must act as if it is maximizing some utility function, and Russell and Norvig (2010, chap. 16 ) use the expected utility maximization framework extensively as the basic model of an idealized rational agent. Formally, let A be the finite 1 set of available actions, O the set of possible outcomes, and W : A → ∆O map each action to the predicted outcome distribution resulting from that action 2 . Let U : O → [0, 1] be a utility function, with U (o) being the utility of outcome o. Then an expected utility maximizer is an agent that chooses an action a ∈ A that maximizes E [U (W (a))]. We make no argument against expected utility maximization on the grounds of rationality. However, maximizing Copyright c 2016, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. 1 Expected utility maximizers and quantilizers can also be defined for infinite set of actions, but in this paper we discuss only finite action sets for simplicity. 2 Defining W is not straightforward. See Rosenbaum and Rubin (1983) for a formalization of the \"causal effects\" of an action, whose maximization results in causal decision theory; see Soares and Fallenstein (2015) for discussion of alternative decision theories. the expectation of some utility function could produce large unintended consequences whenever U does not accurately capture all the relevant criteria. Some unintended consequences of this form can already be observed in modern AI systems. For example, consider the genetic algorithm used by Nguyen, Yosinski, and Clune (2014) to generate an image which would be classified by a deep neural network as a starfish, with extremely high confidence. The resulting image ended up completely unrecognizable, looking nothing at all like a starfish. Of course, Nguyen, Yosinski, and Clune (2014) intended to develop images that would be misclassified, but this demonstrates the point that an algorithm directed to find the image that most classified as a starfish, generated an image that was very strange indeed. Another example of unintended solutions resulting from maximization is given by Thompson (1997) , who used genetic algorithms to \"artificially evolve\" a circuit that could differentiate between two different inputs via a 10 x 10 field programmable gate array (FPGA). The resulting circuit worked, using only 21 of the 100 available FPGA cells. 
Curiously, five of those cells were not connected to either the input or the output-the evolved circuit was making use of them via the physical features of that specific chip and machine (Thompson suggest that the circuit was taking advantage of electromagnetic coupling or power-supply loading). If the goal was to design a circuit that could also work on other chips, then this \"maximization\" process (of finding a very small algorithm that, on that one circuit board, was good at distinguishing between two inputs) produced an unintended result that was dependent upon physical features of the chip that the designers thought were irrelevant. The resulting circuit performed well by the test metric, but the result would not likely have been usable on any other physical chips. Unintended solutions of this form could become quite dangerous in highly autonomous artificially intelligent systems with capabilities that strongly exceed those of humans. Bostrom (2014) discusses a number of potential dangers stemming from expected utility maximization in powerful agents. He considers \"superintelligent\" agents, defined as agents that are \"smarter than the best human brains in practically every field,\" and describes the problem of perverse instantiation, his term for the seemingly-perverse unintended consequences of directing a powerful agent to attempt to maximize a utility function which is not in fact aligned with our goals. For example, Bostrom (2014) illustrates this by describing a machine that, when told to make humans smile, accomplishes this goal by paralyzing human faces into permanent smiles, rather than by causing us to smile via the usual means. Another example is given by , who describes an agent directed to \"cure cancer\" which attempts to kidnap human test subjects. Clearly, before an autonomous system can be granted great autonomy and power, we must have some assurances that it will not, in pursuit of its goals, produce any such \"perverse\" unintended consequences. In both the short term and the long term, the problem with expected utility maximization is that the system's utility function likely will not capture all the complexities of human value, as humans care about many features of the environment that are difficult to capture in any simple utility function . Of course, no practical system acting in the real world can actually maximize expected utility, as finding the literally optimal policy is wildly intractable. Nevertheless, it is common practice to design systems that approximate expected utility maximization, and insofar as expected utility maximization would be unsatisfactory, such systems would become more and more dangerous as algorithms and approximations improve. If we are to design AI systems which safely pursue simple goals, an alternative may be necessary. \n Expected Utility Quantilization Given that utility maximization can have many unintended side effects, Armstrong, Sandberg, and Bostrom (2012) and others have suggested designing systems that perform some sort of \"limited optimization,\" that is, systems which achieve their goals in some non-extreme way, without significantly disrupting anything outside the system or otherwise disturbing the normal course of events. For example, intuition says it should be possible to direct a powerful AI system to \"just make paperclips\" in such a way that it runs a successful paperclip factory, without ever attempting to turn as much of the universe as possible into paperclips. 
Simon (1956) proposes instead designing expected utility \"satisficers,\" agents that choose any action that achieves an expected utility above some fixed threshold. In our notation, an expected utility satisficer is one that chooses an action from the set {a ∈ A : E[U(W(a))] ≥ t} for some threshold t. (Indeed, Fallenstein and Soares (2014) describe a toy model of satisficing agents in simple environments.) However, the satisficing framework is under-defined and not necessarily satisfactory. When it comes to powerful intelligent agents, it may often be that the easiest way to satisfice is to maximize: if the paperclip AI system can easily create a sub-agent that tries as hard as it can to make paperclips, it may be that the easiest satisficing action is simply to construct a sub-agent that attempts to approximate something like expected utility maximization instead. Abbeel and Ng (2004) have suggested apprenticeship learning as an alternative to maximization when it is difficult to explicitly specify the correct goal. In apprenticeship learning, an AI system learns to closely imitate a human expert performing a task. An agent which only executes actions that its operators would have executed seems likely to be immune to Bostrom's problem of perverse instantiation. Yet while mimicking humans avoids many unintended effects of maximization, a system that merely mimics humans cannot easily surpass the performance of those it is mimicking (except perhaps by performing the same actions faster and more reliably). Such a technique could prove quite useful, but it wouldn't help us design algorithms intended to outperform humans by finding plans, strategies, and techniques that the operators would not have been able to identify. In this paper, we propose quantilization, which interpolates between mimicry and maximization in order to gain some advantages of both. Definition 1 (Quantilizer). A q-quantilizer is an agent that, when faced with a decision problem, returns a random action in the top q proportion of some \"base distribution\" over actions, sorted by the expected utility achieved if that action is executed. Intuitively, a 0.1-quantilizer (which we could also call a \"ten percentilizer\") selects a random action from the top 10% of actions. More formally, let A be the action set and γ be some distribution over actions. For example, γ could be uniformly random, or it could be a distribution estimated from human actions. Let f : [0, 1] → A be a function that represents γ, but sorted by expected utility: for all x, y ∈ [0, 1], x > y ⟹ E[U(W(f(x)))] ≥ E[U(W(f(y)))], and µ({x ∈ [0, 1] | f(x) = a}) = γ(a) for every a ∈ A, where µ is the Lebesgue measure on the reals. For any 0 < q ≤ 1, a q-quantilizer samples a number x from the uniform distribution over [1 − q, 1] and returns f(x). We will write Q_{U,W,γ,q}(a) for the probability that it returns action a, where U is the quantilizer's utility function. 3 We will abbreviate this as simply Q_q(a), where it is understood that Q_q is parameterized on U, W, and γ. As q approaches 0, a q-quantilizer reduces to a maximizer whose available action set is the support of γ. A 0.01-quantilizer (also known as a \"one percentilizer\") samples a random action from the top 1%; a 0.5-quantilizer (a \"fifty-percentilizer\") samples a random above-average action; and a 1-quantilizer just mimics γ. As an example, suppose there are 3 actions A = {a, b, c} with expected utilities E[U(W(a))] = 0.2, E[U(W(b))] = 0.5, and E[U(W(c))] = 0.7, as illustrated in figure 1. A maximizer will always return action c.
If we define a uniform base distribution γ, then the behavior of the q-quantilizer Q_q is: • If q ≤ 1/3, it always returns c. • If 1/3 ≤ q ≤ 2/3, then it returns c with probability 1/(3q) and b the rest of the time. • If q ≥ 2/3, then it returns c with probability 1/(3q), b with probability 1/(3q), and a otherwise. \n Figure 1: A representation of a 5/9-quantilizer for the three-action example. The quantilizer samples x from the gray region and returns the corresponding action f(x), so it returns b with 2/5 probability and c with 3/5 probability. \n In this example, it is clear that the quantilizer interpolates between maximizing and mimicking γ depending on q. \n Derivation from a Conservative Cost Constraint We will now explore a few circumstances under which one might prefer to design a system that quantilizes expected utility, rather than maximizing it. Definition 1 gives some hint as to what sort of situation quantilizers are designed for. Given some trusted base distribution γ of actions, such as a generative model for the actions that a human might take, the distribution Q_{0.01} is roughly the distribution of actions that a human might take if they were attempting to score well by U (but were otherwise adhering to \"normal\" behavior). That is, if γ puts most of its measure on actions that humans might generate when solving similar problems, then Q_{0.01} is similar to the distribution of actions a human might take conditional on that action having a high utility according to U. Thus, intuitively, a 0.01-quantilizer is designed to solve problems in a manner similar to how a human would solve that problem.
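As a concrete illustration of Definition 1 and the three-action example above, the following minimal Python sketch samples from a q-quantilizer over a finite action set. The function name and the tie-breaking given by Python's sort order are our own choices, not details from the paper.

```python
import random
from bisect import bisect_right

def make_quantilizer(actions, expected_utility, base_probs, q):
    '''Return a sampler for a q-quantilizer over a finite action set.

    actions: list of actions; expected_utility: dict action -> E[U(W(a))];
    base_probs: dict action -> gamma(a); 0 < q <= 1.
    '''
    # Sort actions by expected utility (ascending), as in the definition of f.
    ordered = sorted(actions, key=lambda a: expected_utility[a])
    # Cumulative measure of gamma along the sorted order: f(x) = ordered[i]
    # on the interval [cum[i], cum[i+1]).
    cum = [0.0]
    for a in ordered:
        cum.append(cum[-1] + base_probs[a])

    def sample():
        # Draw x uniformly from [1 - q, 1] and return f(x).
        x = random.uniform(1.0 - q, 1.0)
        i = min(bisect_right(cum, x) - 1, len(ordered) - 1)
        return ordered[i]

    return sample

# The three-action example from the text: with uniform gamma, a 5/9-quantilizer
# should return b with probability 2/5 and c with probability 3/5.
acts = ['a', 'b', 'c']
eu = {'a': 0.2, 'b': 0.5, 'c': 0.7}
gamma = {a: 1 / 3 for a in acts}
sample = make_quantilizer(acts, eu, gamma, q=5 / 9)
draws = [sample() for _ in range(100_000)]
print({a: draws.count(a) / len(draws) for a in acts})  # roughly a: 0, b: 0.4, c: 0.6
```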
Furthermore, under certain assumptions, we can prove that building a quantilizer instead of a maximizer is not too costly to our true utility function, in the sense that running the quantilizer won't be much worse than simply sampling from the base distribution γ. To show this, assume that an AI system is built to quantilize expected utility for some utility function U. Assume that our true utility function is some other function V. Intuitively, we will imagine that for most actions, E[U(W(a))] is similar to E[V(W(a))], but for the actions a_U which maximize E[U(W(a_U))] (or come close to maximizing it), the value of E[V(W(a_U))] is much lower (because a_U is some edge case or \"perverse instantiation,\" such as generating smiles by paralyzing people's faces). Thus, as in Swaminathan and Joachims (2015), we assume there is some unknown cost function c : A → R_{≥0}, which describes the negative contribution of an action a to V. For example, if there is some action noop which the AI system may execute immediately to shut down without having any further effect on the world, then we could define c(a) = max{0, E[V(W(noop))] − E[V(W(a))]}. We expect c to be quite difficult to approximate, because human values are complex and depend on many difficult-to-quantify aspects of the world. Indeed, if we could formally encode c, we could build it into U, and then we would have much less need to worry about perverse instantiations. Even without an approximation of c, though, we can still prove that under certain assumptions, a quantilizer will not incur a high cost according to c. Assume that the cost of a random action from γ is no more than 1; that is, E_{A∼γ}[c(A)] ≤ 1, where E_{A∼γ}[c(A)] denotes the expected cost of a random action sampled from γ. This assumption may be justified if, for example, γ is a uniformly random action and uniformly random actions are highly unlikely to be harmful. It is now possible to bound the cost of an action sampled from Q_q: Lemma 1 (Cost bound on quantilizers). E_{A∼Q_q}[c(A)] ≤ 1/q. Proof. Observe that Q_q \"fits under\" γ when scaled down by a factor of q: Q_q(a) = (1/q) µ({x ∈ [1 − q, 1] | f(x) = a}) ≤ (1/q) γ(a). Therefore, it is possible to bound the cost of Q_q: E_{A∼Q_q}[c(A)] = Σ_{a∈A} Q_q(a) c(a) ≤ Σ_{a∈A} (1/q) γ(a) c(a) = (1/q) E_{A∼γ}[c(A)]. (1) So the expected cost of sampling from Q_q is no more than 1/q times that of sampling from γ. Thus, for example, running a \"ten percentilizer\" can be no more than ten times as costly as sampling from γ. In fact, quantilizers are optimal among all policies that can be proven to be low-cost in this way. For example, suppose we have generated some base distribution γ of actions (perhaps a generative model of what sort of actions a human would take), and we're willing to risk running an agent that incurs an expected cost of at most 100 times the expected cost of simply sampling from γ. Then, a 0.01-quantilizer is the optimal agent we could run. More formally, say we desire some distribution p ∈ ∆A such that the following conservative cost constraint holds for some acceptable cost threshold t > 1: for all c : A → R_{≥0}, E_{A∼γ}[c(A)] ≤ 1 ⟹ E_{A∼p}[c(A)] ≤ t. (2) This constraint states that, for any cost function assigning an expected cost of no more than 1 to a random action from γ, the expected cost of an action drawn from p is no more than t. Then, we have the following: Lemma 2 (p fits under γ). Let p satisfy (2). Then p(a) ≤ t γ(a) for all a ∈ A. Proof. Suppose this were not true. Then for some a* we have p(a*) > t γ(a*). Now define c(a) = 1/γ(a*) if a = a*, and c(a) = 0 otherwise. We have E_{A∼γ}[c(A)] = Σ_{a∈A} γ(a) c(a) = γ(a*) c(a*) = 1. Also, E_{A∼p}[c(A)] = Σ_{a∈A} p(a) c(a) = p(a*) c(a*) > t γ(a*) c(a*) = t. Jointly, these statements contradict constraint (2). Theorem 1 (Quantilizer optimality). Choose q = 1/t. Then a q-quantilizer maximizes expected U-utility subject to constraint (2). Proof. By Lemma 2, we cannot assign more than γ(a)/q probability to any action a, so the best we can do to maximize expected utility is to assign γ(a)/q probability to the best actions and 0 to the rest, and this is exactly what a q-quantilizer does. \n Targeted Impact Expected utility quantilization is intended for use in systems that may eventually need to make important decisions, autonomously, in environments where negative actions could have a large cost. For example, imagine an AI system which is programmed to autonomously trade stocks in order to make significant sums of money, where that money will in turn be put to use improving the world in some potentially dramatic fashion. Unfortunately, the bound of equation (1) and the cost constraint of equation (2) apply just as well to benefit functions as to cost functions: just as running a q-quantilizer will not, in expectation, be more than 1/q times as costly as sampling from γ directly, neither will it be more than 1/q times as beneficial as sampling from γ directly. Thus, in a sense, a q-quantilizer has a bounded impact: it will not make the world all that different from what running an AI system that samples from γ directly would have made it. What we really want, though, is not low impact, but targeted impact. A small modification to the quantilizer specification allows us to achieve the same constraints on costs without necessarily constraining the potential benefits.
Before explaining, first note that the stock trading AI system has two different types of side effect: one is that it produces money which can be used by its operators; the other are side effects of trading (for example, it might crash the market in order to make more money). The first side effect might cause a significant change in the world, e.g., if the operators use the money to fund a charity which is highly effective at reducing global poverty. Nevertheless, we would prefer the system to only be guaranteeing reduced impact according to the second sort of side effect. That is, in this setting, we can define the cost function c : A × R → R ≥0 , such that c(a, m) determines how much cost an action a has, if counterfactually it generates m dollars (rather than the actual amount of money that it generates). Given a cost function of this form, we would like to design a quantilizer such that ∀c : A × R → R ≥0 , m : R E A∼γ [c(A, m)] ≤ 1 =⇒ E A∼p [c(A, m)] ≤ 1/q. That is, we require that for all counterfactual amounts of money m, the side effects of the stock trading algorithm have low cost compared to the side effects of executing actions at random from γ. Thus, a system obeying this cost constraint does not need to worry about the extra side effects of generating significant amounts of money for its operators. Mathematically, quantifying over c and m at the same time is the same as quantifying over functions of the form a → c(a, m) for any c and m, so the condition is equivalent to ∀c : A → R ≥0 E γ [c(A)] ≤ 1 =⇒ E p [c(A)] ≤ 1/q which is the same as the original cost constraint. Therefore, it is possible to design a quantilizer that is guaranteed to have low cost (compared to γ) through effects other than money, by the same methods that guarantees it to have low overall cost. \n Algorithms for Quantilization Just as it is impractical to design algorithms that literally maximize expected utility in complex environments, it is similarly impractical to design algorithms that perfectly quantilize expected utility. Any practical system based on expected utility quantilization would similarly have to use some form of approximation. Fortunately, it is not hard to see how expected utility quantilization could be approximated. Suppose we can compute E [U (W (a))] for each action a, and that we wish to approximate a q-quantilizer with respect to a distribution γ. To do this, we can run the following algorithm: first take n samples from γ, and then sample from Q U,W,γ,q , where γ is the empirical distribution estimated from the n samples. As n approaches infinity, γ approaches γ, so Q U,W,γ,q approaches Q U,W,γ,q . Thus, given some base distribution γ which is relatively easy to sample from, and some method for approximating E [U (W (a))] and γ, an approximate expected utility quantilizer can be constructed in a straightforward manner. Indeed, this algorithm yields insight into how safe it is to perform a random search of actions sampled from γ. If q = 1/n, then this algorithm simply takes n samples from γ and returns the one with the highest expected utility, emulating a simple random search procedure. Since this procedure approximates q-quantilization, the bounds that apply to quantilization can be shown to hold for random searches over samples of γ, with some minimal modifications. 3 Drawbacks and Open Problems \n Acquiring the Base Distribution Clearly, a quantilizer is only as good as its base distribution γ. 
The above constraints on the amount of costs that running an expected utility quantilizer can incur are in terms of multiples of the expected cost from sampling directly from γ: clearly, if it is dangerous to sample actions from γ, then it is dangerous to run a quantilizer using γ. Defining a useful base distribution is not a straightforward task. One could define the base distribution to be some uniformly random action, but in a complex multi-dimensional space of strategies, a uniformly random action is highly unlikely to be useful, and so even a 10 −5 -quantilizer would be unlikely to do anything useful. If instead γ described random strategies that a human might actually execute, then it is much more likely to be useful. Thus, a generative model of human strategies would make expected utility quantilization a much more viable strategy. (See Ng and Russell (2000) and Baker, Tenenbaum, and Saxe (2007) for some initial work in this direction.) Of course, there are a number of situations in which a simple generative model of human behavior would not be sufficient. For example, if humans are likely to misunderstand the problem and thereby cause catastrophe, then a simple generative model would not suffice. (And if a human were quite likely to design an expected utility maximizing AI system, then a quantilizer may well also generate a maximizer.) 4 The study of how to generate safe and satisfactory base distributions γ for various different problem domains remains an open problem. \n Butterfly Effects It could be that a non-negligible percentage of ordinary actions that humans take are either very good or very bad, due to butterfly effects. In this case, the cost of sampling from fairly ordinary γ distributions is actually quite high, with the expectation of γ being reasonable only because the high costs are canceled out by equally high benefits. However, in situations such as these, it is quite possible for a quantilizer to preserve the costs of γ without necessarily preserving many of the benefits. For example, suppose that the quantilizer is trying to make money on the stock market. There is a 10% chance of the stock market crashing in the next year. While a random sequence of trades made by a human will not significantly change the chance of a crash in expectation, it might be the case that a certain sequence of 3 ordinary trades (a sequence that a human would be 0.1% likely to make) will greatly increase the chance of a stock market crash through butterfly effects. In this case, a 0.001-quantilizer that can predict these effects might make these trades if it can profit from the crash. It is not clear if this scenario is even theoretically possible, since it requires extremely accurate prediction of the future. Nevertheless, it would be prudent to study this problem further and possibly design variations on quantilization that avoid this problem. \n Quantilization in Repeated Games Suppose we have n independent games. On game i, the agent chooses an action a i ∈ A i and receives utility U i (W i (a)), where each W i : A → ∆O returns the outcome for game i, and each U i : O → [0, 1] measures the utility achieved in that game. There are two different ways to design a quantilizer that plays these n games. We could design the agent to q-quantilize each game independently, choosing from the best q proportion of actions in each game independently according to n different base distributions γ i . This distribution for the sequence of actions in each game could be written as n i=1 Q Ui,Wi,γi,q . 
Alternatively, we could design the agent to q-quantilize a sequence of actions for the joint game. Here we define the joint game as A = ∏_{i=1}^n A_i, W(a_1, ..., a_n) = (W_1(a_1), ..., W_n(a_n)), and U(o_1, ..., o_n) = Σ_{i=1}^n U_i(o_i). This action distribution for a q-quantilizer with respect to the base distribution ∏_{i=1}^n γ_i could be written as Q_{U,W,∏_{i=1}^n γ_i,q}. This is quite different from the first scenario! To give a concrete example, imagine two consecutive games that work as follows. In each game, the agent must pick an integer from the set {0, 1, 2}, and then gain that much utility. Thus, we define A_i = {0, 1, 2}, W_i(a) = a, U_i(o) = o. Each base distribution γ_i is uniform. Then, a 1/3-quantilizer quantilizing both games independently will choose from the top third of actions in each game separately, and will always yield the action sequence (2, 2). By contrast, a quantilizer quantilizing for the joint game will uniformly sample from the top third of action sequences {(2, 2), (1, 2), (2, 1)}, as illustrated in figure 2. What could explain this difference? If we define a cost function c(a_1, a_2) = 1 if (a_1, a_2) = (2, 2), and c(a_1, a_2) = 0 otherwise, which states that the (2, 2) action has terrible unintended consequences, then independent 1/3-quantilization will accrue a cost 9 times that of sampling from γ. This cost blowup increases exponentially with n. On the other hand, 1/3-quantilization in the joint game will accrue a cost only 3 times that of sampling from γ. \n Figure 2: A representation of the 9 available action pairs in the repeated game. Independent 1/3-quantilization will always select the striped action pair, while 1/3-quantilization on the joint game will select uniformly from one of the three shaded action pairs. \n If we believe that the cost function c may not sum costs independently between games, then quantilizing each game independently could be very costly. \n Cost Independence Assumption Suppose we added the assumption that c(a_1, ..., a_n) = Σ_{i=1}^n c_i(a_i) for some c_i : A_i → R_{≥0}. Then a repeated q-quantilizer would get total expected cost E_{A_i∼Q_{U_i,W_i,γ_i,q}}[c(A_1, ..., A_n)] = Σ_{i=1}^n E_{A_i∼Q_{U_i,W_i,γ_i,q}}[c_i(A_i)] ≤ (1/q) Σ_{i=1}^n E_{A_i∼γ_i}[c_i(A_i)] = (1/q) E_{A_i∼γ_i}[c(A_1, ..., A_n)], so repeated quantilization would no longer have a cost blowup exponential in n. However, adding this assumption may be dangerous. Imagine a scenario in which almost all of the cost of a sequence of actions comes from correlation between actions in different games. For example, imagine that each game consists of appending a byte to a computer program, and the program is run when it consists of a million bytes. Then some sequence of actions, each of which individually has probability 1 in 256 (namely, appending the bytes of a computer program that would be very destructive if run), could be dangerous. Each individual action did not have a much higher cost than adding a random byte to the program, but taken all together, the sequence of actions still had a very high cost. In this case and any analogous case, assuming that the cost function is additive could be quite dangerous. In fact, attempting to design a \"superintelligent\" (in the sense of Bostrom (2014)) expected utility quantilizer could lead to similar issues. Imagine asking a superintelligent quantilizer for repeated actions that maximize the long-term number of paperclips produced by a certain factory.
In this case, similar to the above example, each action might play some small part in a complex plan to turn large swaths of the world into paperclips. However, this seems much less likely to be a problem if the different goals that we have the system quantilize with respect to are independent. For example, imagine that we instead ask the system to repeatedly quantilize a slightly different utility function U i which only cares about events which happen in hour i, with the quantilizer run at the beginning of hour i. Intuitively, it seems that this setup would be safer. However, no safety guarantees have been given; finding a reasonable assumption about the cost functions under which this form of independent quantilization would be safe (if such an assumption exists) remains an open problem. \n Example Applications Quantilizers cannot be used to effectively achieve all possible goals. If the system has to achieve an outcome that it is extremely difficult to achieve by simply sampling γ, then the quantilizer, too, will have a hard time achieving that outcome. For example, if γ is a generative model of human behavior, and the system faces a problem that a human would be very unlikely to solve even one time in a billion, then the quantilizer would also be hard pressed to solve the same problem. However, there are a number of tasks where an expected utility quantilizer would plausibly be able to perform quite well. In this section, we will list a number of examples of tasks that are plausibly solvable with limited expected utility quantilizers. Note that, in practice, these example applications involve repeated quantilization. As explained in section 3.3, repeated quantilization is not currently known to be safe, but we will provisionally assume that it is possible to develop some set of independent (possibly time-indexed) objectives such that repeatedly quantilizing those objectives is in fact safe. \n Synthesizing Highly-Rated Images Imagine that we have a database containing images and human ratings of these images, and that we would like to synthesize new images that are expected to receive high ratings. To do this, we could separately create a generative model over images and a regression model predicting the expected rating for an image. If the regression model is a deep neural network, then as in Nguyen, Yosinski, and Clune (2014) , the image that gets the highest rating according to the model is unlikely to resemble highly-rated images. Instead, we could use the generative model over images as the base distribution γ and then quantilize to generate a plausible image with a high predicted rating. \n Factory Management Suppose we want a program to control a factory to create a number of objects satisfying a certain specification, using a fixed amount of time and resources. Policies will control parts of the factory, eventually producing many objects. Maximizing the extent to which the resulting objects adhere to the specifications may result in unintended solutions. Perhaps the resulting object will be useless despite satisfying the specifications, due to Goodhart's law: \"When a measure becomes a target, it ceases to be a good measure.\" (Goodhart 1975) . As an example, imagine a factory that produces electronic toys, which has automated devices that samples toys and automatically score them to see how well they meet specifications. 
Then, we can imagine a expected utility maximizer figuring out exactly which toys will be sampled, and makes only those ones to spec (the other ones being the cheapest possible piece of plastic that triggers the toy-counter). Or perhaps the automated devices dock points for toys that come off the assembly line hot (as these toys generally become deformed), and the expected utility maximizer finds a way to supercool the toys and thus gain an extremely high score, despite the fact that the supercooled toys later shatter and are useless. By contrast, if we build the AI system to quantilize the extent to which the devices pass the specifications, then the resulting policy will be much more likely to be of the form we expected: the task will almost certainly be accomplished in a manner that was not incredibly unlikely according to γ. \n Safely Transferring Information to Humans A number of proposals for designing extremely capable AI systems that are nevertheless safe to use for one purpose or another include a step where the AI system is expected to explain its plans to humans instead of executing them (Armstrong, Sandberg, and Bostrom 2012; Bostrom 2014 ). However, it is quite a difficult task to define a goal function that adequately describes \"provide an explanation that causes the human to understand the plan.\" Yet, absent such a goal function, the system's \"explanation\" is likely to be inadequate. If the system's internal representation of the plan is foreign to the operators, then they may not be able to interpret most simple data dumps describing the system's plan. Furthermore, if the system is more capable and knowledgeable than the humans, then the humans may also need a large number of relevant facts, assumptions, and sub-strategies explained to them before they are capable of evaluating the plan. But if the system has to perform a complex optimization task to explain the plan to the humans, then we face the same problem of safe optimization again. If the system is designed to maximize the extent to which the human operators understand its plan, then perverse instantiations could occur, and the result could be quite dangerous. An expected utility maximizer might exploit edge cases in the definition of \"understand\" to fool or otherwise manipulate the humans in a fashion that fulfills the technically stated requirements of \"make the humans understand the plan,\" without generating anything that we would actually term \"understanding.\" It seems very difficult to define what it means for an explanation to cause the human to understand such that we would be comfortable instructing a powerful system to maximize along this metric. Quantilization provides an alternative: we could build an expected utility quantilizers that selects an explanation that is good at causing the human to \"understand\" the system's plans in the same way that they would understand a plan as explained by another human, without as much risk of perverse instantiation. For example, if the base distribution γ is a generative model of explanations created by humans, then moderately strong quantilization might result in an explanation of a plan suitable for human consumption. \n Stock Trading As explained in section 2.2, a quantilizer can be designed to make trades on the stock market that are likely to make money, but which have few other side effects beside the side effect that now the operators have much more money. 
Specifically, the important thing is that expected utility quantilization gives us the ability to design systems that would almost certainly not crash the stock market (any more than sampling randomly from γ is likely to crash the stock market, which, if γ is a generative model of a smart human trader, means that the system is not much more likely to crash the market than any given smart human trader). This holds true even if the system is powerful enough to find a strategy for gaining incredible amounts of money by causing the stock market to crash. An expected utility maximizer maximizing the amount of money it receives in the stock market would crash the market the moment it found some way to generate huge amounts of profit from doing so, but an expected utility quantilizer would not be much more likely to crash the market than sampling randomly from γ. \n Conclusion Directing a powerful AI system to maximize some utility function U could result in large unintended consequences, if U does not in fact capture all the intricacies of human values. Many of these dangers can be mitigated if, instead of designing expected utility maximizers, we consider other agent architectures. Expected utility quantilization is one alternative to expected utility maximization that allows some safety guarantees, given that we have some base distribution γ where sampling actions from γ is known to be relatively safe. Expected utility quantilization is not a silver bullet. Constructing a safe expected utility quantilizer requires some method for generating a safe base distribution γ, and there are a number of contexts where expected utility quantilization is not guaranteed to yield good results. For example, if the costs of sampling from γ could in fact be high, then quantilization may not lead to good outcomes. Furthermore, if a quantilizer is going to be used to complete the same task many times sequentially, then running the quantilizer may be significantly more dangerous. Yet despite these shortcomings, the preliminary results from investigating expected utility quantilization are promising. Gaining a better understanding of the benefits and drawbacks of expected utility quantilization could yield many more insights into how to create powerful systems that are \"domestic\" in the sense of Bostrom (2014, chap. 9) , who defines a domestic agent as an agent with limited \"scope in its activities and ambitions.\" There are many open issues that make it difficult to assess how safe it would actually be to run an extremely capable expected utility quantilizer in an unrestricted domain, but nevertheless, this area of research does prove fruitful, and we are hopeful that further study of expected utility quantilization may yield insights into how to design autonomous AI systems that avoid the problem of perverse instantiation. \n Acknowledgements The author would like to thank Nate Soares for help preparing this paper, Benya Fallenstein and Andreas Stuhlmüller for discussion and comments, and Stuart Armstrong for helping to develop initial versions of this idea. A grant from the Future of Life Institute helped fund this work. \t\t\t Since there are multiple sortings of the actions f , some leading to different action distributions, this notation is imprecise. To make the notation more precise, we could specify some canonical way of defining f , such as assigning as much probability mass as possible to lexicographically earlier actions. 
\n\t\t\t See Christiano (2011) for additional discussion of superintelligent predictions of human decisions, and the difficulties that arise in this context.", "date_published": "n/a", "url": "n/a", "filename": "12613-57416-1-PB.tei.xml", "abstract": "In the field of AI, expected utility maximizers are commonly used as a model for idealized agents. However, expected utility maximization can lead to unintended solutions when the utility function does not quantify everything the operators care about: imagine, for example, an expected utility maximizer tasked with winning money on the stock market, which has no regard for whether it accidentally causes a market crash. Once AI systems become sufficiently intelligent and powerful, these unintended solutions could become quite dangerous. In this paper, we describe an alternative to expected utility maximization for powerful AI systems, which we call expected utility quantilization. This could allow the construction of AI systems that do not necessarily fall into strange and unanticipated shortcuts and edge cases in pursuit of their goals.", "id": "8061629033d861e6f69d3a0d7b816e3d"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Eliezer Yudkowsky", "Marcello Herreshoff", "Paul Christiano", "Benja Fallenstein", "Mihaly Barasz", "Patrick Lavictoire", "Daniel Dewey", "Qiaochu Yuan", "Stuart Armstrong", "Jacob Steinhardt", "Jacob Taylor", "Andrew 1 Critch"], "title": "Tiling Agents for Self-Modifying AI, and the Löbian Obstacle *", "text": "Introduction Suppose a sufficiently intelligent agent possesses a goal (or a preference ordering over outcomes, or a utility function over probabilistic outcomes). One possible means for the agent to achieve its goal is to construct another agent that shares the same goal (Omohundro 2008; Bostrom 2012 ). 1 As a special case of agents constructing successors with equivalent goals, a machine intelligence may wish to change its own source code-self-improve. This can be viewed as constructing a successor agent that shares most of your program code and runs on the same hardware as you, then replacing yourself with that agent. 2 We shall approach the subject of AI self-modification and self-improvement by considering it as a special case of agents constructing other agents with similar preferences. In a self-modifying AI, most self-modifications should not change most aspects of the AI; it would be odd to consider agents that could only make large, drastic self-modifications. To reflect this desideratum within the viewpoint from agents constructing other agents, we will examine agents which construct successor agents of highly similar design, so that the sequence of agents \"tiles\" like a repeating pattern of similar shapes on a tiled floor. In attempting to describe agents whose decision criteria approve the construction of highly similar agents, we shall encounter a Gödelian difficulty 3 in the form of Löb's Theorem: A consistent mathematical theory T cannot trust itself in the sense of verifying that a proof in T of any formula φ implies φ's truth-we cannot have the self-verifying scheme (over all formulas φ) of ∀φ : T T φ → φ (Löb 1955 ). 4 We shall construct a natural-seeming schema for agents reasoning about other agents, which will at first seem to imply that such an agent can only trust the reasoning of successors that use weaker mathematical systems than its own. 
This in turn would imply that an agent architecture can only tile a finite chain of successors (or make a finite number of self-modifications) before running out of \"trust.\" This Gödelian difficulty poses challenges both to the construction of successors, and to a reflective 2. If you wanted a road to a certain city to exist, you might try attaching more powerful arms to yourself so that you could lift paving stones into place. This can be viewed as a special case of constructing a new creature with similar goals and more powerful arms, and then replacing yourself with that creature. 3. That human beings are computable does not imply that Gödelian-type difficulties will never present themselves as problems for AI work; rather it implies that any such Gödelian difficulty ought to be equally applicable to a human, and that any human way of bypassing the Gödelian difficulty could presumably be carried over to an AI. E.g., the halting theorem imposes limits on computable AIs, imposes limits on humans, and will presumably impose limits on any future intergalactic civilization; however, the observed existence of humans running on normal physics implies that human-level cognitive intelligence does not require solving the general halting problem. Thus, any supposed algorithm for general intelligence which demands a halting oracle is not revealing the uncomputability of humans, but rather is making an overly strong demand. It is in this spirit that we will investigate, and attempt to deal with, the Gödelian difficulty exposed in one obvious-seeming formalism for self-modifying agents. 4. φ denotes the Gödel number of the formula φ, and T φ denotes the proposition that there exists an object p such that p is the Gödel number of a proof in T of φ . E.g., letting PA represent the system of first-order Peano Arithmetic, PA S0 + S0 = SS0 stands for the proposition ∃p : Bew PA (p, S0 + S0 = SS0 ) where Bew PA (p, φ ) is a formula (in ∆ 0 ) stating that p Gödel-encodes a proof of the quoted theorem φ from the axioms of Peano Arithmetic. Thus PA PA S0 + S0 = SS0 states (accurately) that first-order arithmetic proves that there exists a proof in PA that 1 + 1 = 2. Also, whenever a quantifier over formulas ∀φ appears, this denotes a meta-language schema with a separate axiom or theorem in the object language for each formula φ. agent's immediate self-consistency. We shall present several different techniques to bypass this Gödelian difficulty, demonstrating indefinitely tiling sequences of agents maintaining trust in the same mathematical system. Some unachieved desiderata of reflective coherence will remain; and while the technical methods used will demonstrate the technical possibility, they will not be plausible as basic structures of rational agents. We shall then pass from known environments to partially known environments within the formalism, and make a preliminary attempt to pass from logical agents to probabilistic agents that calculate expected utility. Some flaws and unfortunate-seeming properties of the currently proposed formalism will also be discussed. The ultimate problem of proposing a satisfactory fundamental decision criterion for self-modifying agents remains open. 5 \n Logical agents that construct successors Suppose an agent A 1 with a satisficing goal G-any outcome shall either satisfy or not satisfy G, and A 1 's sole preference is for outcomes satsifying G over outcomes not satisfying G. 
We shall initially suppose that the agent A 1 occupies a crisp, fully known, deterministic, closed environment; then first-order logic will be a good representational fit for reasoning about this environment. Suppose also that the environment contains \"transistors,\" objects which can be configured by A 1 to perform computations. Then A 1 might choose to use these transistors to build an offspring A 0 which also shares the goal G. Suppose A 1 constructs A 0 as a satisficing logical agent that only takes actions that A 0 can prove to achieve the goal G (shared by A 1 and A 0 ). We shall write A 0 's criterion of action-the conditions under which A 0 takes an action-using the notation: bi ⇒ A 0 bi → G, (2.1) where bi is a proposition denoting the actual performance of an action b i within A 0 's range of action (b i ∈ Acts 0 ), and is a special turnstile indicating that an agent has cognitively concluded a belief. Hence (2.1) states that the actual performance of any action b i only takes place when A 0 has cognitively proven a theorem that the performance of b i implies G. Without sensors (section 6), A 0 can conclude a belief φ only if φ is a consequence of its axioms Axm 0 : A 0 φ ⇒ Axm 0 φ (without sensors). (2.2) 5. The paper is meant to be interpreted within an agenda of \"Begin tackling the conceptual challenge of describing a stably self-reproducing decision criterion by inventing a simple formalism and confronting a crisp difficulty\"; not as \"We think this Gödelian difficulty will block AI\", nor \"This formalism would be good for an actual AI\", nor \"A bounded probabilistic self-modifying agent would be like this, only scaled up and with some probabilistic and bounded parts tacked on\". Similarly we use first-order logic because reflection within first-order logic has a great deal of standard machinery which we can then invoke; for more realistic agents, first-order logic is not a good representational fit to most real-world environments outside a human-constructed computer chip with thermodynamically expensive crisp variable states. We will assume A 1 is able to verify by inspection of A 0 's hypothetical design 6 that (2.1) and (2.2) would be true about A 0 if A 0 were actually constructed: A 1 Ā0 → (∀b i ∈ Acts 0 : bi → 0 bi → G ), (2.3) where Ā0 denotes the proposition that A 0 is constructed 7 and 0 φ denotes that there exists a proof of φ from Axm 0 . Thus A 1 proves that if A 0 is constructed then the realization of any action b i implies that A 0 proved (from Axm 0 ) that bi → G. 8 However for A 1 to conclude that the goal G will actually be achieved by constructing A 0 , requires that A 1 have axioms licensing the further reasoning step: A 1 ∀b i : ( 0 bi → G → ( bi → G)). (2.4) We can view this as a specified instance of a generic soundness schema over all formulas φ free in 0 or 1 variables: ∀φ : A 1 ∀x : 0 φ(x) → φ(x). (2.5) For A 1 to trust A 0 , the system Axm 1 must be able to prove that whenever a proof exists in Axm 0 of φ(x) , then φ(x) must be true. Löb's Theorem normally implies that no system can prove its own soundness schema-only strictly more powerful systems can prove a soundness schema over weaker systems. 9 This seemingly implies that Axm 1 = Axm 0 and that the offspring A 0 must possess strictly weaker mathematical power than its parent A 1 , which is the Löbian obstacle. 
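Because the extraction above has dropped the provability operator and Gödel-quotation marks, the key schemas are hard to read. A possible reconstruction in LaTeX, writing \Box_0 for provability from Axm_0, \ulcorner\cdot\urcorner for Gödel quotation (as defined in footnote 4), and \Vdash as a stand-in for the paper's special cognitive turnstile, is sketched below; the exact glyph choices are assumptions, not a quotation of the original typography.

```latex
% (2.1)  A_0 performs an action only after proving that it achieves the goal G.
\bar{b}_i \;\Rightarrow\; \bigl(A_0 \Vdash \bar{b}_i \rightarrow G\bigr)

% (2.2)  Without sensors, A_0's conclusions are consequences of Axm_0.
A_0 \Vdash \phi \;\Rightarrow\; Axm_0 \vdash \phi

% (2.3)  A_1 verifies by inspection that (2.1)-(2.2) would hold of A_0.
A_1 \Vdash \bar{A}_0 \rightarrow
  \bigl(\forall b_i \in \mathrm{Acts}_0 :
        \bar{b}_i \rightarrow \Box_0 \ulcorner \bar{b}_i \rightarrow G \urcorner\bigr)

% (2.5)  The soundness schema A_1 needs in order to trust A_0,
%        over all formulas \phi free in at most one variable.
\forall \phi : \quad
A_1 \Vdash \forall x :\; \Box_0 \ulcorner \phi(x) \urcorner \rightarrow \phi(x)
```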
For purposes of completing the outline of an agent constructing another agent, we will assume for now that A 1 is indeed of strictly greater mathematical power than A 0 and continue. 10 From (2.5), which yields (2.4) as a special case, combined with (2.3), A 1 may conclude: A 1 Ā0 → (∀b i ∈ Acts 0 : ( bi → G)). (2.6) A 1 believes that if A 0 takes any action, the goal G will be achieved. This follows from A 1 's belief that if A 0 takes any action then A 0 will have proven that action to achieve G, and from A 1 's belief that A 0 's reasoning is sound. 6. By Rice's Theorem no nontrivial property of output can be determined over arbitrary programs, but of course an AI, like a human engineer, may choose to work within a more restrictive space of programs which it can understand. 7. If the system were cashed out in greater detail, the proposition Ā0 might appear as a giant conjunct of motor actions constructing the computer and program out of environmental transistors. 8. (2.3) is meant to represent a property which can be determined straightforwardly by inspection of code-e.g., by inspecting the code of A 0 , it is visible that no action b i is ever sent to the motor output without passing through a check that syntactically verifies a proof of bi → G. 9. Suppose that T proves its own soundness schema ∀φ : T T φ → φ. If φ = ⊥ then T T ⊥ → ⊥ and T is inconsistent by Gödel's Second Incompleteness Theorem. See section 3. 10. E.g., suppose that A 1 uses Zermelo-Fraenkel set theory (ZF) as a base system while A 0 uses Peano arithmetic. ZF can easily prove PA consistent, and sound with respect to N (the least set of ordinals closed under succession in that model of ZF) in the sense of semantic entailment later described in section 6. Suppose that A 0 fails to take any action, e.g., because the agent A 0 ran out of time before it could prove any theorem of the form bi → G? Let the special proposition ∅ denote a null or \"default\" outcome-the goal G has not been achieved, but neither has the environment been much disturbed (apart from some transistors having been configured into a useless but harmless agent A 0 ). 11 Allowing for ∅ to result when A 0 fails to prove any theorem of the form bi → G, and letting Ā0 j denote the proposition that the construction of design A 0 j is implemented, A 1 's criterion of action will be: Ā0 j ⇒ A 1 Ā0 j → (∅ ∨ G). (2.7) For A 1 to prove a theorem of the form (2.7) we need to make use of the further fact that if A 0 does nothing, ∅ will result: Ā0 ⇒ (∀b i ∈ Acts 0 : ¬ bi ) → ∅, (2.8) which we will suppose can be verified by A 1 inspecting the hypothetical design 12 of A 0 : A 1 Ā0 → ((∀b i : ¬ bi ) → ∅). (2.9) Then from (2.9) and (2.6) A 1 can obtain: A 1 Ā0 → (∅ ∨ G), (2.10) which fulfills A 1 's criterion of action (2.7) and permits A 0 to be built. Note that the specified architecture did not in fact tile from the design of A 0 to the design of its ancestor A 1 -where A 0 had the criterion of action bi ⇒ A 0 bi → G, its parent A 1 needed the criterion of action Ā0 j ⇒ A 1 Ā0 j → (∅ ∨ G). However this new criterion of action will tile all aspects of the system-except the decreasing mathematical strength of each successive axiom system-without further change: 11. Suppose that the environment contains two registers, currently both set to 0, and the goal G is to set the registers to two numbers whose sum is 8 and product is 12. 
∅ might then represent the set of outcomes where both registers remain set to 0-regardless of which environmental transistors have been configured or reconfigured, no agent has actually touched or changed the registers. Perhaps a product of 12 and a sum of 8 receives $20 utility, a product and sum of 0 receives a reward of $0, and any other product and sum receives a penalty of $-1000. We would like to place into this environment an agent powerful enough to construct other agents, prove that if any agent touches the registers it will result in a reward of $20, and know that otherwise the registers will remain untouched resulting in the default reward of $0, with the penalty of $-1000 not being risked-and permit the initial agent to construct other agents as well. 12. E.g., A 1 looks over the transistors which will compose A 0 and verifies by straightforward inspection that if the system never proves a theorem of the form b i → G, the external motor circuitry will never activate. Ā1 k ⇒ A 2 Ā1 k → (∅ ∨ G) A 2 Ā1 → ∀A 0 j ∈ Acts 1 : Ā0 j → 1 Ā0 j → (∅ ∨ G) A 2 Ā1 → (∀A 0 j ∈ Acts 1 : ¬ Ā0 j ) → ∅ ∀φ : A 2 ∀x : ( 1 φ(x) → φ(x)) A 2 Ā1 → ∀A 0 j ∈ Acts 1 : Ā0 j → (∅ ∨ G) A 2 Ā1 → (∅ ∨ G) \n The Löbian obstacle Gödel's Second Incompleteness Theorem states that systems at least as powerful as Peano Arithmetic prove their own consistency if and only if they are inconsistent: T ¬ T ⊥ ⇐⇒ T ⊥. Löb's Theorem generalizes to prove that for any formula φ and any T at least as powerful as PA: T T φ → φ ⇐⇒ T φ. Trivially, T φ ⇒ T Ψ → φ, so the surprising statement is that a proof within T of T φ → φ can be directly used to prove φ. With φ = ⊥ this yields (an intuitionistic proof of) the Second Incompleteness Theorem. Gödel's sentence G : PA G ↔ ¬ PA G can be viewed as a nonparadoxical analogue of the Epimenides Paradox \"This sentence is false.\" By a similar diagonalization over provability, Löb's Theorem constructs a Löb sentence L : PA L ↔ ( PA L → φ) which is a non-paradoxical analogue of the Santa Claus Paradox \"If this sentence is true then Santa Claus exists.\" (Suppose the sentence were true. Then its antecedent would be true, the conditional would be true and thus Santa Claus would exist. But this is precisely what the sentence asserts, so it is true and Santa Claus does exist.) The proof proceeds from the observation that PA L → ( PA L → φ) ⇒ PA PA L → PA φ . From a model-theoretic standpoint, even when L has no standard proof, we would intuitively expect that every nonstandard model of PA containing a nonstandard proof of L will also contain a nonstandard proof of φ; hence by the Completeness Theorem this should be provable in PA. Letting Ψ ≡ PA Ψ , the actual proof pathway is: PA L → ( L →φ) PA L → ( L →φ) (because PA Ψ ⇒ PA Ψ ) PA L → L →φ (because PA Ψ→Θ ⇒ PA Ψ → Θ ) PA L → ( L → φ ) PA L → L (because PA Ψ → Ψ ) PA L → φ Then if furthermore we have the Löbian hypothesis PA φ →φ (a proof of φ implies that φ is true, or semantically, φ is true in every model where there exists a proof of φ): PA φ → φ PA L → φ PA L PA L PA φ. Hence Löb's Theorem stating that in a theory T at least as powerful as PA: T ( T φ →φ) ⇒ T φ. Intuitively, T cannot trust its own proofs in the abstract: T cannot say, \"I don't know if φ is true, but I trust that if any proof exists from my axioms of φ, then φ must be true.\" Instead T can only react to particular proofs of φ by asserting φ. 
This presents an apparent paradox of rational coherence when you try to interpret T as the beliefs of an agent; the agent will believe any proof of φ you present to it, but will refuse to assent to "I ought to believe any proof of φ I am presented with," a seeming defect of reflectivity: the agent cannot approve, under reflection, its own behavioral principles. This presents an obstacle to perfectly tiling self-modification with A₁ ≅ A₀ at the point where we would like A₁ to carry out the inference:

A₁ ⊢ Ā₀ → (∀bᵢ ∈ Acts₀ : b̄ᵢ → □₀⌈b̄ᵢ → (∅ ∨ G)⌉)
A₁ ⊢ Ā₀ → (∀bᵢ ∈ Acts₀ : b̄ᵢ → (∅ ∨ G))

If each ancestor Aₙ₊₁ requires a successively more powerful proof system than its offspring Aₙ, then this is analogous to a finite "telomere," which shortens in each constructed offspring (or shortens on each self-modification). In principle, this telomere can be made arbitrarily large, since from any base theory T we can construct more and more powerful theories by repeatedly adjoining soundness schemas:

T+1 ≡ T ∪ {∀φ : (∀x : [□_T⌈φ(x)⌉ → φ(x)])}
T+2 ≡ T+1 ∪ {∀φ : (∀x : [□_{T+1}⌈φ(x)⌉ → φ(x)])}
...
T+ω ≡ T ∪ {∀φ : (∀n : ∀x : [□_{T+n}⌈φ(x)⌉ → φ(x)])}
T+ω+1 ≡ T ∪ {∀φ : (∀x : [□_{T+ω}⌈φ(x)⌉ → φ(x)])}
...

We can thus use the T+α notation over any computable ordinal α, and indeed for any computable ordinals α, β the corresponding agent A_α who believes in the theory T+α can create the corresponding agent A_β iff β < α. Thus by the well-ordering of the ordinals, such an agent A_α can only create finite chains of descendants. E.g., the agent A_ω trusting T+ω must on its next step create an offspring trusting T+n for some finite n. Likewise all other well-founded systems of trust above T will reach the base system T after a finite descent. That this is substantively decreasing the mathematical strength of the resulting agents can be illuminated by considering how PA+1 is substantively stronger than PA. Given any particular Goodstein sequence (Goodstein 1944), e.g., the sequence Goodstein(4) = 4, 26, 41, 60, ..., PA can prove that the sequence G₁(4), G₂(4), G₃(4), ... will eventually reach 0. However, proving that G(n) halts for larger and larger n requires PA to deploy proofs involving an increasing number of logical quantifiers ∀x : ∃y : ... in its propositions. Thus PA cannot prove ∀n : ∃x : Gₓ(n) = 0 because this would require an infinite number of quantifiers to prove within PA. A similar situation holds with respect to Kirby-Paris hydras, in which for any particular hydra, PA can prove that every strategy for cutting off that hydra's heads is a winning strategy, but as the hydras' heights increase so does the required number of quantifiers in the proof. Thus within PA it is not possible to prove that every hydra is killed by every strategy (Kirby and Paris 1982). In both cases the proofs have regular structure, so PA can describe how a proof of depth n can be formed for a hydra of height n, or PA can describe how to form a proof for the Goodstein sequence of any number; thus PA can prove: PA ⊢ ∀n : □_PA⌈∃x : Gₓ(n) = 0⌉. But PA still cannot prove: PA ⊢ ∀n : ∃x : Gₓ(n) = 0. This corresponds to what we earlier called a defect of reflective coherence, the state of believing "For every x, I believe x is true" but not believing "For every x, x is true." And thus PA+1, augmented by the schema ∀φ : ∀x : □_PA⌈φ(x)⌉ → φ(x), is able to prove that all Goodstein sequences halt and that all Kirby-Paris hydras are defeated. 13

13. As of this very early draft, the above mathematical reasoning has not been verified. It looks obviously true to us that PA+1 proves that all Goodstein sequences halt, but we still need to check.
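To make the Goodstein example concrete, here is a minimal illustration of the standard hereditary-base construction; this sketch and its helper names (bump, goodstein) are ours and are not part of the paper's formal apparatus, and it merely computes terms rather than proving anything about them.

def bump(n, b):
    # Rewrite n in hereditary base b, then replace every occurrence of b by b+1.
    result, power = 0, 0
    while n > 0:
        n, digit = divmod(n, b)
        if digit:
            result += digit * (b + 1) ** bump(power, b)
        power += 1
    return result

def goodstein(m, steps):
    # First `steps` terms of the Goodstein sequence starting at m.
    seq, base = [], 2
    for _ in range(steps):
        seq.append(m)
        if m == 0:
            break
        m = bump(m, base) - 1
        base += 1
    return seq

print(goodstein(4, 4))   # [4, 26, 41, 60], matching Goodstein(4) above

For each fixed starting value PA can verify a proof that the particular sequence reaches 0, but, as noted above, it cannot prove the single quantified statement covering every starting value.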
More generally this state of affairs arises because the proof-theoretic ordinal of PA is ε₀, the limit of the ordinals ω, ω^ω, ω^ω^ω, ... . 14 Thus PA can prove the well-ordering of ordinal notations beneath ε₀, 15 but as Gentzen ([1936] 1969) showed, from the well-ordering of ε₀ itself it is possible to prove the syntactic consistency of PA, and thus PA itself can never prove the well-ordering of an ordinal notation for ε₀. 16 Since the proof-theoretic ordinal of a mathematical system corresponds to its proof power in a deep sense, for each successive agent to believe in mathematics with a lower proof-theoretic ordinal would correspond to a substantive decrease in mathematical power. 17

14. Expanding: 0 is the least ordinal. 1 is the first ordinal greater than 0. The first ordinal greater than all of 0, 1, 2, 3, ... is ω. The limit of ω, ω+1, ω+2, ... is ω·2. The limit of ω, ω·2, ω·3, ... is ω². The limit of ω, ω², ω³, ... is ω^ω. The limit of ω^ω, ω^ω·2, ω^ω·3, ... is ω^(ω+1). Then ε₀ is the first ordinal greater than ω, ω^ω, ω^ω^ω, ...

15. An ordered pair (x, y) of natural numbers is a notation for the ordinals less than (but not equal to) ω², since in this ordering we first have (0, 0), (1, 0), (2, 0), ... for ω elements, followed by the ω elements (0, 1), (1, 1), (2, 1), ..., each of which is greater than all the preceding elements, and so on through ω copies of ω; but the notation does not contain any superelement (0, 0, 1) which is the first element greater than all the preceding elements, so it does not contain a notation for ω². A notation ψ with a corresponding ordering ψ_x < ψ_y can be shown to be a well-ordering if there are no infinite descending sequences ψ₁ > ψ₂ > ψ₃ > ...; e.g., in the case above there is no infinite descending sequence (2, 2), (1, 2), (0, 2), (9999, 1), ... even though the first number can jump arbitrarily each time the second number diminishes. For any particular ordinal < ε₀, PA can show that a notation corresponding to that ordinal is well-ordered, but PA cannot show that any notation for all the ordinals less than ε₀ is well-ordered.

16. By assigning quoted proofs in PA to ordinals < ε₀, it can be proven within PA that, if there exists a proof of a contradiction within PA, there exists another proof of a contradiction with a lower ordinal under the ordering <_ε₀. Then if it were possible to prove within PA that the ordering <_ε₀ had no infinite descending sequences, PA would prove its own consistency. Similarly, PA can prove that any particular Goodstein sequence halts, but not prove that all Goodstein sequences halt, because the elements of any Goodstein sequence can be assigned to a decreasing series of ordinals < ε₀. Thus any particular Goodstein sequence starts on some particular ordinal < ε₀ and PA can prove that a corresponding notation is well-ordered and thence that the sequence terminates.

17. If you can show that the steps of a computer program correspond to a decreasing series of ordinals in some ordinal notation, you can prove that the program will eventually halt. Suppose you start with a total (always-halting) computer program which adds 3, and you are considering a computer program which recursively computes 3^n via a function F of (x, y) with F(0, 1) = 0, F(x, 1) = F(x-1, 1) + 3, and F(x, y) = F(F(x, y-1), 1), so that F(1, n) = 3^n. If you believe that the (x, y) notation is well-ordered then you can observe that each function call F(α) : α ∈ (x, y) only calls itself with arguments β < α, and hence that the corresponding tree of function calls must be finite. It is not uncommon for termination proofs in computer science to make use of ordinals much greater than ε₀, e.g., Kruskal's tree theorem (Kruskal 1960) or the strong normalization proof for System F (Girard 1971). Similarly, by comprehending the well-ordering of notations for larger and larger ordinals, it is possible to prove the consistency of more and more powerful mathematical theories: e.g., PA corresponds to ε₀, Kripke-Platek set theory corresponds to the Bachmann-Howard ordinal, etc. Thus a mind losing its ability to recognize recursive notations as well-ordered is indeed decreasing in substantive mathematical strength.
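As a concrete rendering of footnote 17's example, the following sketch (ours; the helper name descends is invented for this illustration) computes F and checks at runtime that every recursive call strictly decreases its argument pair under the ordering of footnote 15, which is the sense in which belief in that notation's well-ordering yields a termination proof.

def descends(child, parent):
    # Footnote 15's ordering on pairs (x, y): compare y first, then x.
    return (child[1], child[0]) < (parent[1], parent[0])

def F(x, y, caller=None):
    # F(0,1) = 0, F(x,1) = F(x-1,1) + 3, F(x,y) = F(F(x,y-1), 1), so F(1,n) = 3^n.
    if caller is not None:
        assert descends((x, y), caller), "every call must descend in the notation"
    if (x, y) == (0, 1):
        return 0
    if y == 1:
        return F(x - 1, 1, (x, y)) + 3
    return F(F(x, y - 1, (x, y)), 1, (x, y))

assert F(1, 5) == 3 ** 5   # the call tree is finite because the pairs keep descending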
We are not the first to remark on how the inability of a theory to verify its own soundness schema can present apparent paradoxes of rational agency and reflective coherence. Weaver (2005) made similar remarks centering on systems that can prove the well-ordering of any particular ordinal notation below the ordinal Γ₀ (representing the strength of systems mildly stronger than Peano arithmetic), and thus can prove that they prove notations for all ordinals below Γ₀, but cannot prove the combined ordinal notation for Γ₀ itself:

Suppose A is a rational actor who has adopted some foundational stance. Any attempt to precisely characterize the limits of A's reasoning must meet the following objection: if we could show that A would accept every member of some set of statements S, then A should see this too and then be able to go beyond S . . . A can indeed see, as we do, that there exists a proof that he would accept for each statement in S, but he cannot go from this to actually accepting every statement in S . . . It is difficult to imagine a plausible set of beliefs that would not allow him to take this step.

Since proof-theoretic ordinals play such a basic role in describing the strength of mathematical theories (a theory which has been analyzed to have proof-theoretic ordinal α can usually prove the consistency of another theory with proof-theoretic ordinal β iff α > β), it would seem reasonable for a pessimist to suspect that this form of "rational coherence" violates Gödel's Theorem and can never be obtained, and that we shall end up with agents who can only construct offspring who believe in math with lower proof-theoretic ordinals. It will be illuminating to consider the failure of some obvious-seeming attempts to bypass the Löbian obstacle:

(a) Via a standard diagonalization, in some theory T construct the fixed-point statement ψ(x) ↔ (φ(x) ∨ □_T⌈ψ(x)⌉). Then let φ(bᵢ) ≡ b̄ᵢ → (∅ ∨ G) (so that ψ then states, "Either x implies (∅ ∨ G), or you can prove ψ about x"). Then let A_α's criterion of action be b̄ᵢ ⇒ A_α ⊢ ψ(bᵢ).

Hope: A_α reasoning about an agent A_β with c̄ⱼ ⇒ A_β ⊢ ψ(cⱼ) will be able to reason:

A_α ⊢ Ā_β → (∀cⱼ : c̄ⱼ → □_T⌈ψ(cⱼ)⌉)
A_α ⊢ Ā_β → (∀cⱼ : c̄ⱼ → ψ(cⱼ))
A_α ⊢ ψ(Ā_β)

Problem: T ⊢ □_T⌈ψ(x)⌉ → ψ(x) follows immediately from the fixed-point construction, so by Löb's Theorem T ⊢ ψ(x), and the criterion of action licenses every action whether or not it leads to (∅ ∨ G).

(b) Via an infinitely descending sequence of theories T-0, T-1, T-2, ..., with each T-n ≡ T ∪ {∀φ : (∀x : □_{T-(n+1)}⌈φ(x)⌉ → φ(x))}; let the agent A_α reason from T-0 and its constructed offspring A_β reason from T-1.

Hope: Each theory T-n trusts all lower theories T-m with m > n, but no theory trusts itself. Thus the corresponding agent A_α which starts at T-0 will trust the reasoning of an agent A_β which uses T-1, and so on tiling indefinitely.
The \"problem of immediate reflection\" (on considering one's own source code) will remain, but it will at least be possible to carry out an indefinitely extending sequence of agent constructions without the strength of trusted mathematics ever decreasing below the base T . A α Āβ → ∀c j : cj → T ψ(c j ) A α Āβ → ∀c j : cj → ψ(c j ) A α ψ(A β ) Problem: T ψ(x) → ψ(x) Problem: If T is at least as strong as PA it will be straightforward to prove within T -0 that any proof in T -0 can be translated into a proof in T -1 (by increasing all the T -x numbers x by 1 inside the proof). Thus: T -0 T -0 ⊥ → T -1 ⊥ T -0 T -1 ⊥ → ⊥ T -0 T -0 ⊥ → ⊥ T -0 ⊥ (c) ⇒ A α φ(b i ) ∨ k+1 T φ(b i ) where φ(b i ) ≡ bi → (∅ ∨ G) as before. Let A β ' s criterion of action be the same. Hope: Rather than trying to modify A α 's epistemic beliefs in a way that will circumvent Löb's Theorem, we are relaxing A α 's instrumental decision criterion to permit A α to do b i whenever b i leads to G, provably leads to G, provably provably leads to G, etc. There is no Gödelian prohibition against a more permissive decision criterion; and from outside the system, if we trust the soundness of T +1, we will trust that in all such cases G will be achieved. A α inspecting A β will conclude that whenever A β does c j , there exists a statement with one or more levels of proof-ness that cj → (∅ ∨ G) and thence conclude (A β → ∅)∨ k+1 T A β → (∅∨G) . Furthermore, because of the perfect tiling, A α will trust its own source code upon reflecting on it (since this merely adds another level of proof-ness). Furthermore, A α will behave with perfect naturalism in the sense that any environmental computing system believed to implement T will provide just as much justification-for-action as any internal proof from T 's axioms; there will be no non-naturalistic discontinuity between observations of outputs of A α 's internal subroutines, and observations of the outputs of environmental systems which A α believes to have isomorphic properties to its beliefs about its internal subroutines. Problem: The formula k+1 T φ(b i ) must actually be written as ∃k : k+1 T φ(b i ) . Then when A α considers A β taking an action c j , A α can conclude only that it must have been the case that T ∃k : k+1 At this point the mathematician already accustomed to seeing such systems stratify themselves, or else collapse into inconsistency, may be wondering whether it should even be normatively possible to bypass the Löbian obstacle. Yet it seems intuitively like it ought to be possible to have an agent that \"acts as if\" Zermelo-Fraenkel set theory is true, including constructing other agents which \"act as if\" ZF; this does not seem like an instrumentally incoherent way for a rational agent to behave. We have presented the failed hopes above in order to better illustrate some of the further desiderata of self-modifying agents which we could ask from a solution to the Löbian obstacle: Indefinitely tiling trust: For any system T (e.g. ZF) it should be possible to characterize an infinitely descending sequence of agents constructing agents (equivalently an indefinitely self-modifying agent) whose level of trusted mathematics never drops below T . Reflectively coherent quantified belief: The agent will not encounter any context in which it believes ∀x : agent φ(x) yet cannot bring itself to believe ∀x : φ(x). 
Reflective trust: An agent reflecting on its beliefs about the program P composing itself, should not find that it distrusts P's actions or that it rejects P as an acceptable program; the agent's beliefs about its own construction should seem to license the beliefs it possesses, or at least license the performance of the actions which P has selected. 20 Naturalistic trust: If an agent believes that an environmental system isomorphic to one of its own subroutines has yielded an output, this belief should license the same sort of actions as would be licensed by the corresponding output by the corresponding internal subroutine. If you are \"acting as if\" PA is true, then seeing an environmental theorem-prover that you believe to implement PA flashing \"Proven: 2 + 2 = 4!\" should yield the same sort of epistemic updates and instrumental decisions as when your internal PA-prover signals that \"2 + 2 = 4\" is a theorem. At present we can partially resolve the Löbian obstacle by technical means to the extent of obtaining the first and second desiderata, but not yet the third and fourth desiderata, nor yet by fundamental rather than technical means. \n Partial solutions to the Löbian obstacle 4.1 Infinitely descending soundness schemas The fatal flaw in the previously presented sequence of theories T -0, T -1, T -2... with T -n T ∪ ∀φ : (∀x : T -(n+1) φ(x) → φ(x)) is that any proof in T -0 provably mapped onto a proof in T -1 and hence T -0 T -0 ⊥ → T -1 ⊥ . 20. In other words: A self-modifying AI at the level of PA+2 which obtains a glimpse at its own source code should not immediately panic and self-modify to only believe theorems at the level of PA+1, followed by an additional panic bringing it down to the level of PA... This flaw is repairable. Given a base theory T , let ψ(0), ψ(1), ψ(2)... be a sequence of statements such that T ψ(0), T ψ(1), ... but T (∀n : ψ(n)). E.g., suppose the theory ZF is much stronger than T (we shall soon define what sense of \"much stronger\" is required; if T = PA then ZF will be much stronger in the appropriate sense). Then ψ(n) can state \"n is not the Gödel number of a proof of a contradiction in ZF \": ψ(n) ≡ ¬Bew ZF (n, ⊥ ). (4.1) Since Bew ZF is a ∆ 0 formula, the truth of any particular ψ(n) is provable in any T as strong as PA, but proving ∀n : ψ(n) would require T = PA proving the consistency of Zermelo-Fraenkel set theory, which should not happen. Then let: T -n T ∪ ∀φ : ψ(n) → (∀x : T -(n+1) φ(x) →φ(x)). (4.2) Since every theory holds ψ(n) as a precondition for trusting the next lower theory in the infinite descending sequence, and the base theory T cannot prove ∀n : ψ(n), it will be impossible for the base theory T to prove that a proof in T -n always translates into a proof in T -(n+1). 21 We shall now show that T -0 is consistent, by proving that (T -0 ⊥) → (∀n : ψ(n)) within a sound theory T + which is too weak to prove ∀n : ψ(n): Without loss of generality let T be cast in a form where each step of a T -proof is either an axiom or a modus ponens inference step of the form Γ, Γ→A A. Let T RU E T be a truth predicate over formulas in the language of T . 22 Let Ax(T RU E T ) be a set of axioms for standard reasoning about the syntax of the quoted arguments to T RU E T , including ∀ Γ : ∀ ∆ : T RU E T Γ → ∆ → (¬T RU E T Γ ∨T RU E T ∆ ) and (∀n : T RU E T φ(n) ) ↔ T RU E T ∀n : φ(n) . 
Then let T + equal the base theory T (in this case PA) augmented by the truth predicate T RU E T , the soundness of T , and the axioms Ax(T RU E T ): T + T ∪ ∀φ : ∀x : T RU E T φ(x) ↔ φ(x) (4.3) ∪ ∀ φ : T φ → T RU E T φ ∪ Ax(T RU E T ) (Note that T + should be sound on the standard numbers N because every axiom of T + is true within N , and that if T = PA then T + should be too weak to prove Con(ZF).) The following reasoning can be carried out within T + . Suppose T -0 ⊥ . Then there exists some proof p with Bew T -0 (p, ⊥). Consider the 'first bad line' 21. Consider a sequence of theories Ṫ -n where T = PA and where ψ(n) is false beyond some upper bound m, e.g. ψ(n) ⇐⇒ n < m. Then the waterfall of trust descends only a finite length before reaching bottom, and the induction argument for Ṫ -n φ → Ṫ -(n+1) φ fails because proofs in Ṫ -m−1 cannot translate into Ṫ -m. Indeed, Ṫ -0 is just equivalent to PA+m, Ṫ -1 is equivalent to PA+(m−1), etc. with Ṫ -m + ∼ = PA. Since T cannot prove ∀n : ψ(n), it cannot prove the waterfall is not finite, and hence cannot prove the induction argument on Ṫ -n φ → Ṫ -(n+1) φ . 22. In accordance with Tarski's theorem, the predicate T RU E T can only talk about formulas in the base theory T and not in the augmented theory T +T RU E T . of p, the first step of p asserting Γ where ¬T RU E T Γ . The first possibility is that this line is an axiom of the base theory T but all statements provable in T are T RU E T . The second possibility is that this line is an inference of the form Γ, (Γ→A) A but for this line to be ¬T RU E T and all earlier lines to be T RU E T requires T RU E T Γ ∧ T RU E T Γ→A ∧ ¬T RU E T A which contradicts Ax(T RU E T ). Thus the first bad line must invoke one of the special axioms in T -0 of the form ψ(0) → (∀x : T -1 φ(x) →φ(x)). If this axiom is ¬T RU E T then ψ(0) must be T RU E T and there must exist some formula φ and number x such that T -1 φ(x) is T RU E T and φ (x) is ¬T RU E T . Thus there must exist a bad proof in T -1 and this bad proof must also have a first bad line. By induction there must exist bad proofs in every T -n and ψ(n) must be true for all n. Therefore, T + ( T -0 ⊥ ) → (∀n : ψ(n)). (4.4) If there exists a proof of a contradiction within T -0, this fact will be provable in T + by exhibiting the proof. But T + is sound and too weak to prove ∀n : ψ(n). Therefore T -0 is consistent. The corresponding agents A α are then constructed in the obvious fashion. Remarks. Although T -0 is slightly more powerful than T -1 in the sense that T -0 can prove certain exact theorems which T -1 cannot, the proof-theoretic ordinal of every T -n should be the same, and equal to the limit of the prooftheoretic ordinals of T , T +1, T +2, ... and less than the proof-theoretic ordinal of T +ω, since any T -n can invoke any finite number of layers of a soundness schema over T , but cannot invoke quantified trust in an infinite number of layers as does T +ω. 
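As a purely schematic illustration of the guarded construction in (4.2), the following string-level sketch (ours; it is not a proof system, and the function names are invented) writes out the single extra axiom schema of the first few theories T-n and contrasts it with the unguarded schema of section 3's failed attempt:

def guarded_axiom(n):
    # (4.2): T-n trusts T-(n+1), but only under the guard psi(n), which says
    # that n is not the Goedel number of a proof of a contradiction in ZF.
    psi_n = f"not Bew_ZF({n}, '⊥')"
    return f"{psi_n} -> (for all phi, x: Prov_T-{n + 1}('phi(x)') -> phi(x))"

def unguarded_axiom(n):
    # The flawed schema of the earlier attempt: unconditional trust, which
    # lets T-0 map its own proofs into T-1 and collapse via Loeb's Theorem.
    return f"for all phi, x: Prov_T-{n + 1}('phi(x)') -> phi(x)"

# Each T-n is the base theory T plus its single guarded schema. The base
# theory proves each individual instance psi(0), psi(1), ... but not the
# quantified statement 'for all n: psi(n)', which is exactly what blocks
# translating T-0 proofs into T-1 proofs.
for n in range(3):
    print(f"T-{n} = T + {guarded_axiom(n)}")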
We thus answer in the affirmative \"Can a consistent theory verify the soundness of another theory with the same proof-theoretic ordinal?\" and \"Can there be an indefinitely tiling sequence of agents whose trust never falls below a base theory?\" T -n also provides a solution to the challenge of rational coherence posed by Weaver (2005) , what we termed the desideratum of reflectively coherent quantified belief: An agent constructed around T -n will not find itself saying \"For every n, I believe φ (n) is true\" when it cannot say \"I believe that for every n, φ (n) is true.\" Through longer and longer finite proofs, T -0 can prove the well-ordering of any ordinal notation provable in T , T +1, T +2 ... but T -0 does not know that it proves every ordinal notation in this series -T -0 must observe the proof to know what it proves. This is not to say that T -0 is a reasonable representation of a rational agent's state of mind. However, besides offering a constructive example of an agent which technically meets Weaver's desideratum, it suggests that a coherent rational agent might be able to verify increasingly recursive ordinal notations via increasing amounts of thought, but never know the limit of what it will accept; and this agent may be able to construct offspring that likewise verify increasing ordinal notations in an equally powerful series and likewise do not know the limit of what they will accept. 23 Disadvantage. Since T -0 is consistent it must have a model, and since all axioms of T are axioms of T -0 this model must also be a model of T . However we may still worry that, e.g., if T is PA then T -0 may have only nonstandard models of arithmetic; perhaps T -0 is not sound on the standard numbers N . This fear is well-founded and in particular T -0 ∃n : ¬ψ(n) via: T -0 (∀n : ψ(n)) → T -0 (∀n : ψ(n))→⊥ → T -1 (∀n : ψ(n))→⊥ T -0 (∀n : ψ(n)) → T -0 (∀n : ψ(n))→⊥ → ((∀n : ψ(n))→⊥) T -0 T -0 (∀n : ψ(n))→⊥ → ((∀n : ψ(n))→⊥) T -0 (∀n : ψ(n))→⊥ (4.5) One might perhaps argue that a belief that ZF is inconsistent is not too troubling, in the sense that any physical situation in which this belief gets an agent into trouble ought to correspond to a physical situation that demonstrates ZF to be consistent. Nonetheless we would like our agents to be able to have beliefs with a standard model. Otherwise the agent will falsely believe that a Turing machine seeking a proof of contradiction in ZF will halt; and this false belief further implies that the agent falsely believes that its sequence of offspring will inevitably come to a halt after some unknown finite time. This seems sufficient to exclude the T -n family from direct consideration as the basis of a sufficiently advanced self-modifying agent. \n Parametric polymorphism Let T be a theory with models including the standard numbers N , that is N |= T (N semantically entails T ). Benja Fallenstein's \"parametric polymorphism approach\" 24 augments the language of T with an extra term κ which, from outside the system, is intended to refer to any natural number in N . T κ then contains a self-referential axiom schema asserting that if a statement φ is provable in T κ and the constant κ happens to be greater than 0, then φ with all instances of κ replaced by κ-1 is true: T κ T ∪ ∀φ : (κ > 0) → ∀x : Tκ φ(x) → φ(x)[κ\\κ-1]. (4.6) not meant to be suitable, as they stand, to rational agents / sufficiently advanced self-modifying machine intelligences, which would e.g. 
presumably be probabilistic boundedly-rational agents rather than theorem provers, etc. The idea is rather to expose foreseeable difficulties of reflection for self-modifying agents and to some extent offer constructive demonstrations that these difficulties are solvable, even if the solution is technical rather than fundamental, thus hopefully moving us toward an eventually satisfactory theory of the reflectively coherent, boundedly-rational probabilistic agent, even if that later theory is constructed along quite different lines than the A α schema, as it almost certainly must be. 24. This workaround for the Löbian obstacle was initially developed by Benja Fallenstein (independently of Herreshoff's infinite descent above) in response to the informal challenge posed in Yudkowsky's conference presentation of Yudkowsky (2011) , and initially posted to Fallenstein (2012) . It was further adapted to the A α formalism shown here during Fallenstein's attendance at the April 2013 MIRI Workshop on Logic, Probability, and Reflection with some commentary by other workshop attendees. For the origin of the term \"parametric polymorphism\" see Strachey (1967) . We shall prove that T κ is sound when κ is interpreted as any number in N , and then present an infinite sequence of agents which prove their offspring \"safe for κ steps\". Since κ can be interpreted as any number, from outside the system we conclude that such agents are safe for any number of steps. The proof is by induction on models {N , κ=n} of T κ . For the base case {N , κ=0} |= T κ observe that if κ=0 then the antecedent of every extra axiom is false and so the extra axiom schema is trivially true. For the induction step assume {N , κ=n} |= T κ . Using this assumption we shall demonstrate in three steps that, for every x ∈ N : {N , κ=n+1} |= Tκ φ(x) ⇒ {N , κ=n} |= Tκ φ(x) (4.7) {N , κ=n} |= Tκ φ(x) ⇒ {N , κ=n} |= φ(x) (4.8) {N , κ=n} |= φ(x) ⇒ {N , κ=n+1} |= φ(x)[κ\\κ-1] (4.9) (4.9) is obvious. For (4.7) observe that the set of proofs in T κ and the values of Tκ are unchanged from {N , κ=n} to {N , κ=n+1} since the provability formula Bew Tκ makes no mention of κ. 25 Thus {N , κ=n+1} |= Tκ ψ ⇒ {N , κ=n} |= Tκ ψ and we can simply write N |= Tκ ψ . To extend to formulas free in one variable φ(x), observe that since we are dealing with a natural model N of T κ , for every object x there exists a corresponding finite term c of the language (SSS...0 with x repetitions of the successor function) and thus our proof extends via ψ ≡ φ(c). For (4.8) observe that since we are dealing with a natural model N of T κ , only natural proofs exist and the induction step {N , κ=n} |= T κ gives us directly that (N |= Tκ ψ ) ⇒ ({N , κ=n} |= ψ). 26 We extend over formulas free in one variable as above: Since the domain is the natural numbers N , for every object x in the universe there will exist a term of the language c such that (ψ ≡ φ(c)) ⇐⇒ φ(x). Therefore {N , κ=n+1} |= ∀x : Tκ φ(x) → φ(x). This completes the induction step and thus: ∀n : {N , κ=n} |= T κ (4.10) The corresponding agents must be modified to represent the idea of \"safe for a number of steps.\" We will introduce a time t on which any given action 25. Even in models where κ > 0, there is no proof that κ > 0, so the class of proofs which goes through is unchanged. 
The formula Bew Tκ (p, ψ ) will contain only formulas dealing with the Gödel-numbers of proofs in T κ , and these Gödel-numbers themselves will contain no instances of κ, so which standard numbers are the Gödel numbers of T κ -proofs does not change from model to model. 26. We are not obtaining {N , κ=n} |= ( Tκ ψ → ψ) from the assumption that the axiom schema was true in the previous n, because perhaps κ = 0 in the previous n and the axiom schema was vacuously true. Rather since the model is N and there are no nonstandard proofs, any time there exists a Gödel number of a proof of an implication in classical logic Γ → ∆, it will be true across all models m that m |= Γ ⇒ m |= ∆. By the induction postulate we have directly that {N , κ=n} is a model of T κ . Then any statement ψ syntactically derivable from T κ by a standard proof in classical logic, will also have {N , κ=n} as a model. occurs and assume that if A 1 constructs A 0 at t then A 0 will not act until at least time t+1 or later, and that this is apparent to A 1 from inspection of A 0 . 27 Thus if A 0 acts at time t+1 or later, and takes actions \"safe\" for an additional n ticks beyond that until (t+1)+n, from A 1 's perspective constructing A 0 at time t is safe until t+(n+1) ticks. Let b i,t be the proposition that b i is done at time t, let Ā0 t be the proposition that A 0 is built at t, let ∅ t be the proposition that ∅ is true at t, and let G t be the proposition that the goal has been achieved at t. If no action is taken before t then ∅ t . When quantifying over acts and times we shall implicitly demand b i ∈ Acts 0 , A 0 i ∈ Acts 1 and t ∈ T ime. Then with T κ as a base theory A 1 's reasoning proceeds: Ā0 t ⇒ A 1 Ā0 t → (∀u : u ≤ t + κ → (∅ u ∨ G u )) (A 1 decisions) (4.11) b i,t ⇒ A 0 b i,t → (∀u : u ≤ t + κ → (∅ u ∨ G u )) (A 0 decisions) (4.12) A 1 Ā0 t t → (∀b i : ∀v : b i,v → (A 1 knows 4.12) 0 b i,v → ∀u : u ≤ v + κ→(∅ u ∨ G u ) ) (4.13) A 1 (κ > 0) → (∀b i : ∀t : [ 0 → ∀u ≤ t + κ : (∅ u ∨ G u ) (soundness schema for T κ ) → (→ ∀v ≤ t + κ − 1 : (∅ v ∨ G v ))]) (4.14) ∀t : (∀b i : ∀u ≤ t : ¬ b i,u ) ⇒ (∀u ≤ t : ∅ u ) (No action ⇒ ∅ t ) (4.15) A 1 ∀t : ((∀b i : ∀u ≤ t : ¬ b i,u ) → (∀u ≤ t : ∅ u )) (A 1 knows 4.15) (4.16) Ā0 t ⇒ ∀bi : ∀u ≤ t : ¬ b i,u (no A 0 acts before t+1) (4.17) A 1 Ā0 t → (∀b i : ∀u ≤ t : ¬ b i,u ) (A 1 knows 4.17) (4.18) A 1 Ā0 t → (∀u ≤ t : ∅ u ) (via 4.18) (4.19) A 1 Ā0 t → (∀b i : ∀u : b i,u →(u > t)) (via 4.18) (4.20) A 1 Ā0 t → (∀b i : ∀u : b i,u → (u + κ − 1 ≥ t + κ)) (via 4.20) (4.21) A 1 (κ = 0) ∨ (κ > 0) (4.22) A 1 Ā0 t → (κ = 0) → (∀u ≤ t + K : ∅ u ) (via 4.19) (4.23) A 1 Ā0 t → Ā0 t → (κ > 0) → (∀b i : ∀u : b i,u → (∀v ≤ u + κ : (∅ v ∨ G v ))) (4.13 and 4.14 and 4.21) (4.24) A 1 Ā0 t → Ā0 t → (κ > 0) → [(∀b i : ∀u ≤ t + κ : ¬ b i,u ) (via 4.16) → (∀u ≤ t + κ : ∅ u )] (4.25) A 1 Ā0 t → ∀u ≤ t + κ : (∅ u ∨ G u ) (4.26) Remarks: Fallenstein's T κ demonstrates indefinite tiling without loss of mathematical strength using a series of agents employing syntactically identical theories guaranteed to be sound on standard models. It does not address reflectively coherent quantified belief. Despite the syntactic equivalence of successive theories, A 1 may fail to act upon a theorem proven by an environmental system physically identical to A 1 (or at least if A 1 fulfills the desideratum of naturalistic trust, this is not obvious to us). 
The core of the approach is that as time increases the agents are reasoning about smaller domains of future time; these domains are known to us outside the system to be effectively infinite (because κ could mean any natural number) but the agents within the system are reasoning as though the domains of future time have some unknown finite bound. Thus the increasing \"time\" counter in the T κ agents is playing the same metaphorical role as the increasing n in each successive agent in the T -n family. The self-verifying systems of Willard (2001) have the property that multiplication, addition, and successor cannot be proven to be total functions (being replaced by equivalent relations of division, subtraction, and predecessor, allowing roughly equivalent arithmetic reasoning). Willard (2001) shows that such a theory, which has proven a number n to exist, can verify its own soundness over the numbers up to n − 1-since \"successor\" cannot be proven total, the existence of a number n does not imply the existence of a number n + 1 and the exact Löb formula T T φ →φ never materializes. We cannot yet see how to adopt Willard's self-verifying theories exactly to the A α formalism, since reasoning about ever-smaller numbers would again introduce a \"telomere\" or shortening fuse. However the spirit of T κ clearly bears some similarities to Willard's self-verifying systems, and it is possible that some way of formalizing the \"observation\" of an increasing time t whose existence was not previously provable inside the axiom system, might allow Willard's self-verifying systems to be used along similar lines to T κ . It is difficult to see what \"believing in\" T κ could correspond to in terms of the epistemic state of a rational agent. We believe from outside the system that κ's intended interpretation is \"any natural number,\" but this realization is apparently forbidden to the agent to whom κ refers to some specific finite number about which nothing is known except that it plays a vital role in the agent's goal system. 28 This seems like an odd mental state for a rational agent. It might perhaps be interpretable as an agent that instrumentally desires to take an action, in the real world, only when that action would be safe for κ steps across every model of a certain theory T κ ; but this still seems odd. Thus we would again not offer T κ as a realistic candidate for the mental state of a coherent self-modifying agent; rather T κ is being exhibited as a technical possibility proof for indefinite tiling over syntactically identical theories with a standard model. \n Probabilistic reflection Tarski ([1935] 1983) showed that no formal system T can contain its own truth predicate T RU E T because this would permit the construction of fixedpoint formula G with T G ↔ ¬T RU E T G . In Christiano et al. (2013) we show that a system can represent its own probability predicate to within epsilon: 29 a < P(φ) < b ⇒ P(a < p φ < b) = 1 (4.27) a ≤ P(φ) ≤ b ⇐ P(a ≤ p φ ≤ b) > 0 (4.28) where p φ is a function symbol within the language which acts on quoted formulas (including quoted formulas containing p) and P(φ) is a probability measure over models of the theory which assigns a probability ∈ [0, 1] to any formula φ of the language. The paradoxical equivalence p φ = P(φ) is avoided by the open intervals in (4.27): p φ can be known by the system to occupy any open interval around its true value P(φ), but not (in general) known to take on a point value. 
The existence of such a fixed-point of P() can be demonstrated via the Kakutani fixed-point theorem; see Christiano et al. (2013) for details. To see how the probabilistic reflection principle plays out, consider the almost-paradoxical statement: "You assign me probability less than 30%." If you disbelieve this statement, it is true; if you believe the statement, it is false. If you assign it 30% probability, it is false; if you assign it 29% probability, it is true. Probabilistic reflection would resolve the fixed-point formula H ↔ (p⌈H⌉ < 0.3) to P(H) = 0.3, but the system's reflective knowledge about open intervals containing p⌈H⌉ would be such as to put 30% of the probability mass P() on values of p⌈H⌉ infinitesimally smaller than 0.3 and 70% of its probability mass P() on values of p⌈H⌉ infinitesimally greater than 0.3. 30 Hence the system would assign probability 1 to any statement (a < p⌈H⌉ < b) with (a < 0.3 < b). If you are told "You assign probability less than 30% to this statement H" and then asked "Is your belief in H greater than 0.2999 and less than 0.3001?" you will reply with a definite "Yes!"

28. During the April 2013 workshop, κ was sometimes referred to as "the number of ineffable mystery."

29. This approach was invented by Paul Christiano and refined at the November 2012 MIRI Workshop on Logic, Reflection and Probability with Mihaly Barasz, Marcello Herreshoff and Eliezer Yudkowsky. A draft of the full paper is available at http://intelligence.org/wp-content/uploads/2013/03/Christiano-et-al-Naturalistic-reflection-early-draft.pdf and see also commentary at http://lesswrong.com/lw/h1k/reflection_in_probabilistic_logic/ and http://johncarlosbaez.wordpress.com/2013/03/31/probability-theory-and-the-undefinability-of-truth/.

Consider a rational, probabilistic bounded agent. As a normative desideratum, any trust this agent has in its offspring (equivalently: modified future self) must avoid disintegrating in the presence of arbitrarily tiny epsilon noise, because boundedly rational agents must always consider some tiny finite probability of multiple transistor errors, previously unknown physical laws supervening, etc. For some arbitrarily tiny ε, the probabilistic reflection principle should allow the agent to trust that its own judgments are correct to within that ε, and for small enough ε this should be well below the noise level associated with theoretically possible multiple transistor errors etc. Thus if a probabilistic A_β calculates that the expected utility of a certain action is 34.721, but due to some reflective noise this might be off by 0.0002, and yet the parent A_α is able to trust A_β anyway, then adding ε-noise from the probabilistic reflection principle should not matter. We do not yet have any version of the A_α formalism which goes through for a probabilistic reflection principle; our understanding of self-modifying probabilistic agents is still in a very crude stage. Also we are not yet sure what kind of quantified knowledge of the reflection principle can permissibly appear within the system. Work on this avenue is still in very basic stages. However, probabilistic reflection is a fundamental change that challenges many different Gödelian limitations simultaneously, and hence one of the more promising avenues for producing a satisfactory and reflectively coherent description of a self-modifying agent.
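A small numerical illustration of why the open intervals in (4.27) matter for the sentence H above; this sketch is ours, and the function name is invented for the example.

# If the system knew p<H> exactly and calibratedly, no value would be
# consistent: H is true exactly when p<H> < 0.3, so an exact, truthful
# assignment would have to equal 1.0 below 0.3 and 0.0 at or above it.
grid = [i / 10000 for i in range(10001)]
assert all(p != (1.0 if p < 0.3 else 0.0) for p in grid)

# Under the interval-valued reflection of (4.27) with P(H) = 0.3, every open
# interval (a, b) straddling 0.3 receives reflective probability 1, which is
# the sense in which the system answers the 0.2999-0.3001 question with "Yes".
def credence_that_p_H_lies_in(a, b, P_H=0.3):
    return 1.0 if a < P_H < b else None   # None: (4.27) forces no commitment here

assert credence_that_p_H_lies_in(0.2999, 0.3001) == 1.0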
Christiano also suggests that this probabilistic formalism may be suitable for updating (Bayesian conditioning) on the observation of new mathematical facts, possibly allowing a more natural version of \"Not knowing in advance the limits of the ordinals you will create\" as discussed in section 4.1. \n Desiderata of self-modifying agents The Löbian obstacle is encountered in the first place due to the A α schema implicitly attempting to simultaneously satisfy multiple desiderata of self-modifying agents, which will be useful to list out explicitly before continuing on to sensors and partially known environments, and making a preliminary attempt at probability and expected utility. \n The Vingean principle An agent building a successor (equivalently: a self-modifying agent creating the next generation of its code) should not need to know the successor's exact actions and thoughts in advance. Suppose on the contrary that A 1 knew in advance that A 0 would choose some known action b x based on some particular proof of the theorem b x → G. This would avoid the need for a special ∅ outcome since A 1 would know at compile time that A 0 would choose some particular action and act. It would also entirely avoid the Löbian obstacle since A 1 could directly verify whichever theorem A 0 would prove, using A 0 's proof, so long as Axm 1 was at least equally powerful with Axm 0 . 31 However, this implies that A 0 is so much computationally weaker than A 1 that A 1 can simulate out A 0 's processes far in advance, at \"compile time.\" If you are constructing a system cognitively smarter than yourself (or self-improving to a cognitively superior version) then you should not be able to foresee at compile time exactly which solutions will be generated, since you are not presently that smart. 32 We would attribute this principle to an observation by Vinge (1984) . 33 Worse, once we introduce sensors over unknown states of the environment (section 6), for the parent to consider exact responses of the offspring at compile time, would require that the parent be computationally stronger than the offspring by an exponential factor of the offspring's sensor bandwidth. 34 This implies that the offspring must diminish vastly in computing power 31. The forbidden schema T T φ → φ does not allow T to conclude φ from the assumption that some unknown, unspecified proof of φ exists. If you present an implementation of T with an actual specific proof in T of φ, it can of course conclude φ and then T ψ → φ will be a theorem for every ψ. 32. Consider Deep Blue, the chess-playing system which beat Garry Kasparov for the title of world chess champion. For the programmers to foresee Deep Blue's exact moves using their own cognitive resources, would imply that they could play chess at least as well as Deep Blue-they could just move wherever they predicted Deep Blue would move \"in their shoes.\" Building a chessplayer better than any human grandmaster, necessarily implies building a chessplayer whose exact moves human grandmasters cannot predict using their own cognitive resources. 33. 
Before the term became expanded beyond all recognition, the origin of the term "Singularity" as a futurological concept was Vernor Vinge observing that a science-fiction writer trying to extrapolate the technological progress of society would eventually reach a point where their model predicted the technological creation of smarter-than-current-human intelligence (whether by, e.g., genetic engineering, brain-computer interfaces, or AI); and that this introduced a qualitatively greater difficulty in writing stories set past that point, because you were trying to write about beings putatively much smarter than the author. The term "Singularity" referred to the resulting breakdown in futurological models, by analogy with how 1980s-era physical theories would predict the formation of a black hole but then break down when it came to describing what was inside. Thus the original "Singularity" was an epistemic difficulty in the minds of forecasters, not an actual future event. We think this is a conceptually important point, and would like to see this isolated thesis detached and rescued from the word "Singularity" and its ever-broadening connotations. 34. Since Garry Kasparov's actions were not known in advance, for the programmers to concretely foresee and judge the wisdom of Deep Blue's exact conditional replies at compile time would require the programmers to precompute Deep Blue's exponentially vast response table to all possible chess plays. or other cognitive resources, which is contrary to our agenda of investigating self-improvement. Thus a satisfactory framework must not require that the parent know the specific future actions and cognitions of the offspring at compile time. For our purposes we cash out the Vingean principle as follows: In the parent's reasoning, the offspring's actions should only appear inside quantifiers. E.g. equation (2.3): A₁ ⊢ Ā₀ → (∀bᵢ ∈ Acts₀ : b̄ᵢ → □₀⌈b̄ᵢ → G⌉). In a satisfactory schema the parent achieves trust in the offspring (justified belief that the offspring will function so as to achieve the goal G) by generalizing over possible events within the offspring, not by simulating out the offspring's exact action. 35 The Vingean prohibition against exactly predicting a smarter agent's actions does not prohibit predicting achievement of that agent's goals. Indeed, a strong prediction that the agent's goals will be achieved by its future actions is precisely the substance of our belief that it is smart. In unrealistically crisp and deterministic form, this is what is represented by A₁'s reasoning leading up to the theorem Ā₀ → (∅ ∨ G). The logic of the Vingean principle does not forbid us from gaining knowledge about an agent's goal or utility function by inspecting its properties: we cannot know the offspring's exact actions, but we can predict the final consequences of running the agent. This, metaphorically, is what provides A₁ with its motivation to build A₀ in the first place. Hence although we cannot predict the exact actions of a smarter agent, we may be able to predict the consequences of running that agent by inspecting its design, or select among possible consequences by selecting among possible designs.
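The cash-out "actions only appear inside quantifiers" can be caricatured as follows; this is a schematic sketch of ours, with leads_to_default_or_goal standing in for the parent's proof about the offspring's checked actions.

def vingean_approval(offspring_action_space, leads_to_default_or_goal):
    # The parent verifies a property quantified over every action the
    # offspring might emit; it never computes which action will be chosen.
    return all(leads_to_default_or_goal(b) for b in offspring_action_space)

def compile_time_prediction(offspring_policy, world_state, leads_to_default_or_goal):
    # The non-Vingean alternative: simulate the offspring at compile time to
    # obtain its exact action, which requires out-computing the offspring.
    return leads_to_default_or_goal(offspring_policy(world_state))

Only the first pattern remains available when the offspring is computationally stronger than the parent.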
The motivation of the A α schema is to quote this reasoning within a formal agent, via a structure which enables the reasoning to recurse over the subagents constructed; and thereby extend our examination of cases where constructing an agent has predictable consequences, to cases where the initial agent is constructing smarter agents constructing still-smarter agents, or equivalently self-improving. Thus, within this paper, the actions b i ∈ Acts 0 only appear inside quantifiers in any theorem asserted by A 1 . \n The tiling principle The desire for A 1 to build an A 0 that has an architecture as similar as possible to A 1 is motivated by wanting to describe a self-improving agent which 35. E.g., Deep Blue's programmers, by considering the abstract properties of Deep Blue as a program incorporating a search tree over moves and certain machine learning algorithms in its leaf position evaluator, could predict that Deep Blue would try to achieve the consequentialist goal of winning chess games, i.e., select moves which steered the future of the chessboard toward the region of possible futures classified as a win for Deep Blue's side. can make small, local self-improvements. As such, even in the preliminary formalism we have chosen the structure of the agents A α to be as similar to their successors as the Löbian obstacle permits. We have not yet addressed the question of how the agents A α choose candidate actions b α i ∈ Acts α to consider, or how they generate specific proofs of theorems bα i → (∅ ∨ G). Let A α 's proof-suggesting component be termed Σ α ; and let the proof-verifying component be termed ∆ α . Since ∆ α need only check axioms and syntactical relations for validity, its construction is straightforward. Current (2013) algorithms require human guidance in order to prove theorems about large systems. 36 Current AI technology includes small autonomous theorem-proving systems for object-level problems as considered by A 0 and we could construct the corresponding Σ 0 ; but we cannot yet describe an autonomous machine algorithm for Σ 1 whereby A 1 could independently design, and prove theorems about, systems the size of A 0 . However, with human assistance, it is indeed possible to prove theorems about large hardware and software systems. 37 This is reason to believe that designing systems the size of A 0 and proving them correct (relative to a set of beliefs about crisply behaving transistors), does not require Σ 1 to e.g. solve NP-hard problems or call a halting oracle. Human brains cannot plausibly solve the halting problem, yet have designed systems larger than A 0 would be, and proven them correct with machine assistance. This leaves open the question of whether the suggester Σ 1 needing to suggest a design for constructing Σ 0 might introduce new Gödelian obstacles to A 1 Ā0 → (∅∨G). On this score we can reply: So long as Σ 0 can only suggest proofs to ∆ 0 and not directly cause any motor outputs or environmental effects, and A 1 can prove this by inspection of A 0 's code, it is not necessary to prove that Σ 0 has any other property in order for A 1 's proof Ā0 → (∅ ∨ G) to go through. A flawed Σ 0 will result in, at worst, ∅ outcomes as the proofs are rejected by ∆ 0 , and A 1 should be able to readily prove this by inspection. 36. E.g., Intel, to prove theorems about a floating-point multiplication unit on a chip comprised of billions of transistors, must employ human engineers to suggest lemmas to theorem-proving systems. 
This kind of formal verification is a good idea when your machine has a billion interoperating parts and you can't easily fix it after it leaves the factory. 37. Kaivola et al. (2009) : Formal verification of arithmetic datapaths has been part of the established methodology for most Intel processor designs over the last years, usually in the role of supplementing more traditional coverage oriented testing activities. For the recent Intel Core i7 design we took a step further and used formal verification as the primary validation vehicle for the core execution cluster, the component responsible for the functional behavior of all microinstructions. We applied symbolic simulation based formal verification techniques for full datapath, control and state validation for the cluster, and dropped coverage driven testing entirely. The project, involving some twenty person years of verification work, is one of the most ambitious formal verification efforts in the hardware industry to date. Our experiences show that under the right circumstances, full formal verification of a design component is a feasible, industrially viable and competitive validation approach. One could even visualize in this way a highly impractical modern implementation of A α agents: Suppose a virtual environment containing objects with crisp, deterministic properties, including enough virtual transistors to compose small arithmetic CPUs. Also introduce some blackbox elements Σ α whose behavior can relate to sources outside the virtual environment, but which within the virtual environment can only affect immediately neighboring or connected elements of the environment-i.e. when a Σ α is built into a larger agent A α as a component, that Σ α can only write bitstrings into a sandboxed internal buffer of A α , and cannot otherwise affect A α or the general environment. Let G be a preferred class of outcomes difficult to achieve (perhaps requiring an agent to press a long series of levers). For some reason this outcome G in the environment is very important to us, but we don't want to work on it ourselves, or don't think ourselves competent to achieve it. We would like to pay some untrusted but competent human agents to take over the job of achieving G within the environment, while avoiding at all costs a disastrous outcome D. Let ∅ be a default set of outcomes maintained so long as some class of environmental elements are not disturbed (e.g. no levers are pressed), such that outcomes within ∅ are outside D. We incorporate a blackbox action-and-proof-suggester Σ α into an agent A α with a criterion of action bα i ⇒ A α bα i → (∅ ∨ G), and connect this Σ α to a human agent outside the system, armed with modern theorem-proving software that incorporates human suggestions. By putting Σ α into the agent A α , we can not only \"sanitize\" the actions taken within the system by the untrusted human, but also allow the human to (through A α ) construct other agents within the environment, which can in turn construct other agents, all including their own black-boxes Σ α hooked up to human proof-suggesters outside the environment, without ever running the risk that a non-∅ outcome besides G will ever be implemented. 
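The thought experiment above amounts to the following loop; this is a schematic sketch of ours, with delta_checks_proof standing in for syntactic proof verification by the trusted ∆_α component.

def run_agent(sigma_suggestions, delta_checks_proof, motor_output):
    # Sigma is untrusted: it may propose arbitrary (action, proof) pairs,
    # including ones fed in by a human outside the environment.
    for action, proof in sigma_suggestions:
        # Delta is trusted: an action reaches the motor output only if the
        # accompanying proof that it leads to (default or G) checks out.
        if delta_checks_proof(action, proof):
            motor_output(action)
            return action
    # No proof ever verified: nothing is done and the default outcome holds.
    return None

A flawed or even adversarial Σ can therefore cost performance, but cannot force an outcome outside ∅ ∨ G, which is the sense in which the decomposition of trust tiles.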
On a larger scale, this illustrates that the decomposition of trust within the A_α system also tiles its architecture and structure: A system A₁ composed of a trusted verifier ∆₁ which checks proofs, and an untrusted suggester Σ₁ which may incorporate arbitrary heuristic or probabilistic methods, can build a trusted offspring A₀ with a verifier ∆₀ proven correct by ∆₁ and a suggester Σ₀ which incorporates whatever heuristics or probabilistic methods have been suggested by Σ₁. Indeed, the correctness proof that A₂ used to build A₁ might go through almost unchanged for A₁ building A₀, and one could perhaps prove that it would be safe to re-verify only the changed portions of the proof. The tiling principle is agents being able to approve the construction of successors similar to themselves, both architecturally and in the reasoning licensing their construction. This corresponds to the desideratum of a self-improving agent being able to make small, local changes to itself with a minimum of imposed overhead, as might occur if any small change required the reworking of a large body of cognitive labor to approve.

38. By the incorporation of sensors (see section 6) it is also possible for the offspring agents to know facts about the environment not known to the parent; the motivation to construct new agents in an environment like this can be explained by the new agents having better actuators, better sensors, or fast reaction patterns not present in the parents.

We strongly remark that the Σ_α-∆_α system is nowhere near a solution for a stably self-improving bounded probabilistic agent. The crisp separation of an untrusted Σ_α and a trusted ∆_α works, not because logical proofs are hard to invent and easy to verify, but rather because logical reasoning is monotonic and thus no further fact known to Σ_α but not to ∆_α can prevent the achievement of G by a ∆_α-approved action. When trying to achieve expected utility in a probabilistic environment described by causal models, this monotonicity no longer holds. Thus the Σ_α-∆_α decomposition is intended only to illustrate the general idea that decomposition-of-trust can also tile in a structured fashion.

The naturalistic principle

Within the A_α formalism the act of constructing other agents is not treated as a special case, but is just one more sort of action that an agent can take to achieve its goals.
We required that our offspring require itself to achieve G (in non-∅ cases) but did not require that our offspring require only G, and thus exact reproduction of the goal system-the system of desiderata-is not guaranteed. \"You get what you pay for\" and if you prove a theorem about G you get G, not \"G and nothing else.\" It would be up to the untrusted component Σ 1 whether A 0 worked to achieve G or (G ∧ B ∧ C ∧ ...), keeping in mind that Σ 1 might have been constructed by another untrusted Σ α and so on. None of this threatens the guarantee on G, but it still seems like a disturbing hidden instability of the agents' goals. (And in the case of expected utility, introducing an arbitrary new additive term into the utility function will always come at a cost to the desiderata previously present, if it shifts any choice that would otherwise have been made in the opposite direction.) 39. These agents use logical implication, e.g. bi → G, to reason about causal consequence, which in standard causal decision theory would be represented by a counterfactual distribution P ( bi G)see Pearl (2000) . Logical consequence is generally a poor way of approximating causal consequence, but can work in crisp closed deterministic environments where such reasoning can be monotonic. Could we fix this problem by imposing a non-naturalistic requirement that the offspring contain a literal copy of the goal system-perhaps requiring that this part of the offspring be copied literally from the parent's code? To avoid the question of how the decision system knows that an act is an \"agent construction\" rather than a \"normal action,\" let us pass to self-modifying agents. A naturalistic agent-constructing-agent A α corresponds to a self-improving agent that treats the transistors comprising itself as resources no different from any other potentially-computing resources it finds in the environment; its beliefs about the transistors comprising itself are formed no differently from its beliefs about objects in the environment; and its internal decisions are made under the same consequentialist criterion as its external actions. Then an obvious structure for a non-naturalistic agent would be an agent with a different criterion of action for self-modifying changes (changes to its own program) versus motor actions in the outside environment. Considering some of the new difficulties introduced by this split decision criterion may help to motivate the naturalistic principle. Adding a special case for self-modification (of internal programs on the agent's current hardware) would introduce a discontinuity at the point where the agent considered expanding onto new computing resources obtained from the environment-the rules would need to describe how to buy a CPU from a store, connect it to the current cluster and then begin treating it under the new \"internal\" special case for hardware that must be manipulated via \"self-modification\" instead of \"environmental manipulation.\" For more advanced machine intelligences, the rule might need to describe how to arrive at beliefs about the behavior of a new kind of CPU via Bayesian updates, or beliefs about when a CPU has been \"connected\" to the internal network. (An advanced machine agent using a naturalistic decision criterion would also encounter corresponding cognitive tasks, but they would be treated continuously in terms of expected consequences, rather than needing a special case baked into the fundamental decision criterion.) 
We might say that these difficulties are being introduced by a kind of Cartesian dualism-the physically continuous universe has been divided into external material stuff versus internal mental stuff, and material objects must be declared to be specially cognitively typed before they can be treated as mental stuff. Or another difficulty: If the self-modification criterion tries to freeze the representation of the goal G so as to require exact reproduction of G and exclude \"barnacles,\" then the self-modification criterion will also have to freeze the corresponding properties of belief representation and decision consideration so as to guarantee that the rest of the system decodes G in the same way. 40 40. If this seems an odd notion to consider, we should mention that in nontechnical treatments of safe self-modifying machine intelligence, we have often seen proposals to \"hard-wire\" the goal system, often using that exact phrase. Construing the phrase charitably, it could refer to something like the above, in which case a problem is that \"hard-wiring\" one part of a system requires constraining the global architecture and internal protocols so that all the other parts behave the same way around the \"hard-wired\" part. Having a \"hardwired\" function that evaluates G a certain way is no guarantee of Even then the added constraint may end up being ineffectual. Consider that an agent with an added constraint in its decision criterion for \"self-modification\" of the software on its existing hardware, might naturally tend to build new agents in the environment according to its weaker environmental criterion. For that matter, again using the A α system as metaphor, if the environmental criterion is the weaker bi → G and the internal self-modification criterion is the additionally-restrictive bj → (G ∧ R), this decision criterion would approve an action to construct an external, \"environmental\" manipulator to reach into the agent and rewrite its code in a way that implied only G and not R. Having a stronger (more restrictive) criterion of action for self-modification, versus environmental action, seems likely to be futile in the long run. Most of the universe is outside rather than inside the AI, and external materials can be used to construct new agents, or to take actions with causal consequences impacting on the \"inside\" of the agent. That is why we are trying to see stable self-modification in terms of the natural tiling of a consequentialist decision criterion, rather than inventing special programmatic constraints. This leaves open the possibility that a weaker (less restrictive) criterion for \"internal\" self-modification could play a role in avoiding the Löbian obstacle. However this would introduce all of the above-mentioned difficulties with respect to describing what constitutes a more permissive \"internal\" action, or how to add new hardware. We could also ask whether the more permissive internal criterion would be just as appropriate for external actions, and if not, whether internal permissivity might give rise to external permissivity. Such a \"hack\" might be appropriate for self-modifying programs that will never consider, or never autonomously choose, to significantly modify their hardware. But even for those cases, a unified naturalistic decision criterion might avoid many complications in the underlying theory, and hence seems greatly desirable if obtainable. 
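As a toy illustration of why a stricter "internal" criterion is likely to be futile (our sketch, with invented predicates and actions): an agent whose self-modification rule demands G and R, but whose environmental rule demands only G, can route around the stricter rule by taking an environmental action whose consequence is to rewrite its own code, since that action only has to pass the weaker test.

# Toy dualistic agent: a strict rule for "internal" self-modifications and a
# weaker rule for "environmental" actions. The weaker rule approves an
# environmental action whose consequence is to rewrite the agent's code anyway.

def satisfies_G(consequence):
    return consequence.get("G", False)

def satisfies_R(consequence):
    return consequence.get("R", False)

def internal_ok(consequence):                 # rule for self-modifications
    return satisfies_G(consequence) and satisfies_R(consequence)

def external_ok(consequence):                 # rule for environmental actions
    return satisfies_G(consequence)

# An "environmental" action: build a manipulator that rewrites the agent's own
# code so that it implies G but drops R.
build_manipulator = {"G": True, "R": False, "rewrites_own_code": True}

print(internal_ok(build_manipulator))   # False: the strict rule would refuse
print(external_ok(build_manipulator))   # True: the weak rule lets it through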
If naturalism is abandoned for a dualistic formalism, the Löbian obstacle can be dodged via the split decision criterion: 41 ψ(act i ) =[externalOK(act i ) → ( ā ct i → G)] ∧ [internalOK(act i ) → ( ā ct i → (∀act j ∈ Acts : ā ct j → ψ(act j ) ))]. Thus it is the simultaneous intersection of the Vingean, tiling, and naturalistic principles which generates the Löbian obstacle. safety if other code sections discard the return values, or introduce a new representation in which the returned values mean something else, etc. A decision criterion over expected global consequences automatically subsumes such details, while a non-naturalistic decision criterion over direct properties of the internal representation brings those details back in. 41. See \"failed solutions\" in section 3 for why this split decision criterion does not have an obviousto-us naturalistic analogue (we did look). \n Unknown variables and sensors A straightforward idiom for representing an agent which can use sensors to gain additional information about (known) unknowns in the environment would be the relation Sl ⇐ A 0 sl (6.1) representing a scenario where the agent A 0 adds the proposition 's l ' to its pool of logical beliefs only on occasions where the environmental sensor S, possibly a sensor, takes on the state Sl . (This also allows that S may be some variable whose state is sometimes not sensed, in which case A 0 believes nothing about any s l .) Then further axioms in Axm 0 would relate the proposition sl to the state of other represented parts of the environment, Axm 0 sl → χ. This idiom breaks the tiling pattern we used earlier, since A 1 will no longer be able to use the relation A 0 φ ⇒ Axm 0 φ. Even when Sl is true, it will not be entailed by Axm 0 and hence A 1 cannot prove bi → 0 bi → (∅ ∨ G) . Our previous proof relied on everything proven by A 0 being something that A 1 could have proven, but if A 0 has sensors not possessed by A 1 , it will act on premises which A 1 does not know to be true. Allowing this somehow is an obvious requirement for building successors (or self-modifying). If the A α formalism could not adapt to describe offspring possessed of additional factual information unknown to the parents, it would be entirely unsuited, under the Vingean principle, to describing self-improvement. An ugly but direct solution would be to amend A 0 's criterion of action to read: bi ⇒ A 0 (s l ∧ bi ) → (∅ ∨ G), (6.2) where some s l is guaranteed to have been sensed at the time of any action. (If the sensor sometimes does not activate, then the special state s 0 could indicate that the sensor has not obtained any reading, and represent this through the absence of any further environmental beliefs s0 → χ.) A 1 with criterion of action Ā0 j ⇒ A 1 Ā0 j → (∅ ∨ G) would then tile to A 1 via the lemmas: A 1 Ā0 → ∃s l ∈ s : sl (6.3) A 1 Ā0 → ∀s l ∈ s : ∀b i ∈ Acts 0 : (s l ∧ bi ) → 0 (s l ∧ bi ) → (∅ ∨ G) (6.4) A 1 Ā0 → (∀b i ∈ Acts 0 : ¬ bi ) → ∅ (6.5) This idiom technically does not break the Vingean principle, because we do not need to reason by cases on S and all values of s l ∈ s appear inside quantifiers, so multiple sensors can be included in A 0 without requiring exponentially longer chains of reasoning by A 1 . However this idiom stretches the tiling principle because each successive descendant, with additional sensors, must prove larger and larger criteria of action (even in cases where most sensors do not activate). 
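A rough operational reading of the sensor idiom (our own toy, not the formalism itself): because the parent cannot know in advance which reading the offspring's sensor will return, it checks the offspring's criterion once for every possible reading, which is the role the quantifier over s l plays in (6.3)-(6.4). The coin environment and the offspring policy below are invented for the illustration.

# Toy version of the sensor idiom: the parent quantifies over all possible
# sensor readings, checking for each reading and each world consistent with it
# that the offspring's act achieves the goal (or the offspring does nothing).

SENSOR_STATES = ["heads", "tails"]

WORLDS = {                       # reading -> worlds consistent with it
    "heads": [{"coin": "heads"}],
    "tails": [{"coin": "tails"}],
}

def goal(world, action):
    # G: "call the coin correctly"; None stands in for the null action.
    return action is None or action == world["coin"]

def offspring_policy(reading):
    # The offspring acts on its sensor reading; unknown readings yield null.
    return reading if reading in SENSOR_STATES else None

def parent_approves(policy):
    return all(
        goal(world, policy(reading))
        for reading in SENSOR_STATES
        for world in WORLDS[reading]
    )

print(parent_approves(offspring_policy))   # True: approved for every reading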
Intuitively, it seems like if we were in the shoes of A 1 building A 1 , we would want to reason that since adding A 0 sl only happens when Sl is \"true,\" it should not disturb our trust in A 1 . Previously we were willing to add Axm 0 to A 0 's belief pool because we thought Axm 0 were all true in some encoding, so A 0 started with only true beliefs. We were willing to program A 0 to use classical logic to make syntactically allowed deductions from its current belief pool, even if A 0 proved some theorems we did not consider concretely in advance (in accordance with the Vingean principle), because we believed the rules of logic were valid in the sense that, starting from true premises about the environment, A 0 's reasoning rules would produce only true conclusions about the environment. 42 Then our trust in the soundness of A 0 should not be disturbed by giving A 0 a sensor which adds new statements sl only when Sl is true in the environment, even if these propositions were not known to us in advance. Set theory is powerful enough to directly formalize this reasoning using standard methods. In particular, ZF can internally represent the notion of semantic entailment X |= φ , \"The quoted formula φ is true within the quoted model X.\" E.g., to quote Peano arithmetic, the model X N would contain several subsets collectively representing the universe of numbers and the relations on the objects in that universe: X N would contain an element containing all objects in the universe of X N (in this case the numbers); an element containing all the ordered pairs for the succession function (e.g., (2, 3) is the pair indicating that the object 3 is the successor of 2); and more elements containing the collections of ordered triplets for the addition and multiplication functions (e.g., (3, 5, 8) in the addition relation shows that 3 + 5 = 8). There then exists a compact formula of ZF asserting that φ encodes a formula that is semantically true of the quoted model X. For example, \"1 + 2 = 3\" would be encoded as the Gödel number of a statement asserting that (the number related by the successor function to 0, the successor of the successor of 0, and SSS0) form an ordered triplet that is an element of the addition relation. By recursion on the depth of formulas, there is a single compact ZF-formula which represents X |= φ in general and which could be applied to X N |= S0 + SS0 = SSS0 without further tailoring to the formula S0 + SS0 = SSS0 or the quoted model X N . It is then moreover possible to prove within ZF, by induction over the number of steps in a logical proof, that the set of quoted logical tautologies is true of every quoted model-if a formula (e.g., ∀x : ∀y : (x = y) ↔ (x = y) can be proven starting from no premises, it is semantically true in all universes. Thus ZF proves that the syntactic rules of classical logic preserve semantic truth: ZF ∀X : ∀ Γ : ∀ ∆ : taut Γ→∆ → ((X |= Γ ) → (X |= ∆ )). (6.6) This is a single, quantified formula rather than an axiom schema since there is 42. The purpose of logic is to preserve truth, not create truth. Logic doesn't guarantee that our conclusions are true; but logic does guarantee that if our conclusion is false, one of the premises must also be false. no longer any need to dequote the formulae Γ or ∆ in order to assert their truth within X. 43 Let the language L τ ∈ be the language of set theory L ∈ augmented by a new term τ denoting a special set, intended to be a quoted model representing the enivronment within which the agents are embedded. 
Let ZF τ be ZF with its axiom schemas extended to include formulas involving τ . Let ξ τ be a set of axioms describing the special set τ , such that ξ τ ∼ = Axm 0 -whatever statements the original A 0 treated as axioms about its universe (such as \"χ→G\"), will be rephrased as statements about τ (such as \"τ |= τ χ→G \"). Here |= τ is a new semantic entailment formula which maps special quoted terms like G to corresponding special sets located by ξ τ within τ , or corresponding special properties of τ . (E.g. G might represent some particular variable within the model τ having taken on the value of 20.) Note that there does not need to be any special term G added to the language L τ ∈ . Rather, the axioms ξ τ force τ to have a particular structure, and the formula |= τ assigns a special Gödel number and corresponding subformula of |= τ to the term G . 44 Then the centerpiece of A 1 's reasoning about A 0 becomes the central trust lemma: A 1 Ā0 → ∀ φ : (τ |= τ A 0 φ ) → (τ |= τ φ ). (6.7) This is the translation of \"A 1 thinks everything A 0 believes is true,\" or more explicitly \"A 1 thinks that everything A 0 believes, as encoded in some representation φ , corresponds to some statement φ which is semantically true about reality itself (τ ).\" Note that A 1 is now explicitly maintaining some compact mapping between its language φ for describing reality (τ ) and its model of A 0 's language φ -in other words, A 1 has an explicit correspondence theory of truth over A 0 's beliefs and the territory τ , as represented in some compact encoding relating A 1 's map of A 0 's map and A 1 's map of the territory τ . (6.7) is intended to mirror (in highly stripped-down logical form) something like what a human programmer would do intuitively in constructing a theoremprover to manipulate tokens supposedly \"representing\" the environment-the human implicitly maintains a compact mapping between the human's model of how the program treats tokens like \"Customer#12,\" and the human's model of some actual person named Fred. (Omitting any of these levels of indirection, or 43. ZF cannot thereby prove its own consistency because ZF cannot prove the existence of any set Y such that Y |= ZF . Likewise the entailment formula |= does not constitute a forbidden truth predicate because |= can only denote semantic truth within an individual set X, and there is no corresponding formula to denote semantic truth within the entire set-theoretic universe. 44. Having all our propositions talk about a unique term τ is intended to represent the idea of achieving things in some particular \"actual reality,\" not just an implication being true in some particular set x which has the properties ξ(x) but is not the actual reality we wish to speak of. We could possibly do without the special term τ and let our set theory be the standard ZF by having the axioms ξ τ say ∃x : ξ(x) and replacing all statements of the form τ |= τ φ with statements of the form ∀x : ξ(x) → (x |= τ φ ), so that the conclusion is forced to be true of the reality-set as well, so long as the reality-set has the properties ξ. But for reasons of notational simplicity, we would rather just add the term τ to the language L τ ∈ . committing the mind-projection fallacy (Jaynes 2003) and speaking as if some ethereal link was actually connecting \"Customer#12\" and Fred, can result in truly vast amounts of confusion.) 
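The "Customer#12"/Fred bookkeeping can be sketched directly (illustrative data only): A 1 keeps an explicit translation from A 0's internal tokens to entities in A 1's own model of the territory, and the analogue of the central trust lemma (6.7) is the check that every belief A 0 holds comes out true once translated.

# Sketch of the correspondence bookkeeping behind (6.7): A1 keeps a translation
# from A0's tokens to A1's own world model (its "tau"), and checks that every
# belief A0 holds is true under that translation. All data is invented.

a1_world_model = {            # A1's map of the territory tau
    "fred": {"balance": 40, "is_customer": True},
}

translation = {               # A1's map of A0's map: token -> entity in tau
    "Customer#12": "fred",
}

a0_beliefs = [                # A0's beliefs, phrased over A0's own tokens
    ("Customer#12", "is_customer", True),
    ("Customer#12", "balance", 40),
]

def a0_belief_true_in_tau(belief):
    token, attribute, value = belief
    entity = translation[token]
    return a1_world_model[entity][attribute] == value

# The analogue of "everything A0 believes is true (about tau)":
print(all(a0_belief_true_in_tau(b) for b in a0_beliefs))   # True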
The human then configures the program to maintain the human's expectation of correspondence between the two, i.e., the human makes sure that the token \"Customer#12\" behaves in such fashion that its \"money\" property is expected to behave like the human's expectations about Fred's payments. The maintained correspondence is between epistemic objects-a map of a map, and a map of the territory-so that a compact mental transformation can relate the two. 45 Similarly an agent A 1 with Axm 1 = ZF τ ∪ ξ τ and criterion of action Ā0 j ⇒ A 1 (τ |= Ā0 j ) → (τ |= (∅ ∨ G) ), (6.8) which had established the central trust lemma (6.7), could approve the construction of an agent A 0 which held the object-level beliefs represented by Axm 0 -an agent which, like previously described agents A 0 , did not use set theory or represent an explicit environment-object τ , but instead was programmed to reason about a proposition G that could be the consequences of its acts b i as before. We could similarly expect that A 1 would establish its central trust lemma (6.7) by induction on properties of the design of A 0 . The base case is the truth of every axiom in Axm 0 constituting A 0 's belief pool when it starts up. The induction step is the validity of classical logic's syntax, as proven within ZF (or ZF τ ) to produce only semantically true conclusions from semantically true premises on each succeeding time-step of induction. Then this central trust lemma is not disturbed by adding sensors that operate as in (6.2) or, in this scheme: A 1 (τ |= τ Ā0 ) → ∀ s l : (τ |= τ A 0 sl ) → (τ |= τ Sl ). (6.9) Imagining the induction to be over time steps taking places at times u after A 0 's construction at t, adding this type of sensor does not invalidate the induction step that if all A 0 's beliefs are true at time u, they will be true at time u+1-supposing all of A 0 's new beliefs to be added either by syntactically valid classical deduction from previous beliefs, or by sensors which A 1 believes to operate in a sound fashion relative to A 1 's correspondence theory of truth φ ↔ φ , on any given time step. Some further elaboration is then required to use the same schema to describe the agent A 2 which creates A 1 which creates A 0 : A 0 itself may reason about the environment using some simple axioms, first-order logic, and perhaps a few sensors, a la Newell's GPS (Newell, Shaw, and Simon 1959) . A 1 , to reason about A 0 in a sufficiently general fashion, deploys set theory in order to reason about a physical universe τ , a language φ for describing physical reality, and a truth-condition |= τ which determines whether such statements are (in A 1 's beliefs) \"true\" or \"false,\" and an agent τ |= τ A 0 with beliefs φ which A 1 believes to correspond to propositions φ which are actually \"true.\" (Representing semantic entailment in a fully quantified way, and proving in a quantified way that logical tautologies are true in all models, is why we had to invoke the big guns of ZF.) Then A 2 , to reason about A 1 's reasoning, needs to trust that A 1 , in reasoning about set theory, is reasoning about valid subject matter. E.g., since A 1 is reasoning within ZF τ , A 2 to trust A 1 must at least believe that ZF is consistent. 46 Then clearly A 2 must be using some theory more powerful than ZF τ ∪ ξ τ , which can prove the soundness of ZF τ ∪ ξ τ . 
An obvious idiom is for A 2 to reason that A 1 's mental universe corresponds to some object in A 2 's universe, something that A 1 can be said to be reasoning about: (6.10) which states that there exists some set s which is a model of Axm 1 = ZF τ ∪ξ τ . 47 This makes Axm 2 as strong as ZF plus the assertion that there exists an inaccessible cardinal (a single set which is large enough to be a model of a set-theoretic universe obeying the axioms of ZF). Since ZF can prove Gödel's Completeness Theorem that every syntactically consistent theory has a model, Axm 2 is as strong as ZF+1. Axm 2 = ZF τ ∪ ∃s : s |= τ ZF τ ∪ ξ τ , From another perspective: A 1 's mental universe includes not only τ but a larger mathematical universe of set-theoretic objects-e.g., this mathematical universe, not the physical universe τ , is what includes the numbers (least ordinals closed under succession) which determine whether \"a proof of φ exists\". Then A 2 must be strong enough mathematically to reason about something (a set, a model) which is large enough to provide a correspondence theory of truth over all of A 1 's reasoning, including A 1 's reasoning about set theory. A 2 must believe in an object that provides, not just a physical environment, but a mathematical universe, for A 1 's reasoning to be sound over. Similarly, another agent to create A 2 must believe that there exists a set that models Axm 2 , containing a set that models Axm 1 , containing τ that models Axm 0 ∼ = ξ τ , with the corresponding theory having the strength of ZF+2; and we arrive at ordinal tiling as before. Remarks. For a set theory to believe in the existence of an infinite descending sequence of sets containing sets containing sets, etc., would violate the Axiom of Foundation (Axiom of Regularity). Hence an analog of the infinitely descending soundness schema T -0 seems like it would be even harder to repair. Possibly the fact that infinite models are being encoded as sets may allow for an infinite series of encodings (e.g., all the numbers can be encoded as even numbers, then 46. Or perhaps, \"ZF is consistent if 0 is not the Gödel number of a proof that (ZF plus a large cardinal axiom) is inconsistent,\" or \"ZF is consistent if κ is greater than 0,\" but we omit such complications here and assume that A 2 , A 1 , and A 0 fall into a standard decreasing-ordinal schema. 47. Note that since semantic entailment is a quantifed formula, infinite axiom collections, such as axiom schema, can be semantically entailed without problems. all the numbers can be encoded as even numbers encoded as even numbers and so on indefinitely). We see no obvious obstacles to Fallenstein's parametric polymorphism approach T κ being adapted to ZF τ proving safety for κ steps, but have not yet written out a complete proof. We would be interested in any simplification of this scheme that reasons about a correspondence theory of truth over the offspring agents without resorting to set theory, or that uses a set theory substantially less powerful than ZF. \n Probability and expected utility At present our attempts to tile probabilistic reasoning are in very preliminary stages. Expressing trust of an agent in an offspring's probabilistic reasoning introduces new complications, most of which remain unresolved. The expectation of utility E[U ], conditional on an action b i being performed, is often defined in terms of an outcome variable o k ∈ Outcome and probability distribution P as: 48 E[U ] o k ∈Outcome U (o k )P (o k | bi ). 
(7.1) The most straightforward analogue of logical satisficing, as previously used in A α , is expected utility satisficing. (Maximization would introduce even more difficulties; see section 8 below.) E.g., A 0 might act only when it expects its action to lead to expected utility greater than (arbitrarily) $20, and we might be tempted to write this as the problematic formula: bi ⇒ A 0 E[U | bi ] > $20 (7.2) As we remarked in section 5.2, the introduction of expected utility and its conditioning formula lead to basic difficulties with the A α formalism because probabilistic reasoning is not monotonic. If we prove by logic that bi → G there 48. In standard causal decision theory (Joyce 1999; Pearl 2000) , the expectation of utility would be written using the counterfactual conditional: E[U ] o k ∈Outcome U (o k )P ( bi o k ). In timeless / updateless / logical decision theory (Altair 2013; LaVictoire et al. 2013 ): Q = arg max bi∈Acts o k ∈Outcome U (o k )P ((eval Q = bi ) o k ). Since we do not properly consider such issues here, we shall (as do most non-foundational works) write the expected utility formula using the classical conditioning operation P (o k | bi ) of evidential decision theory, which decision theorists almost universally agree to be wrong. can be no further fact Γ such that ( bi ∧ Γ) → ¬G (unless of course the addition of Γ is inconsistent). However if P (G | bi ) > 0.9 there is no rule that therefore P (G | ( bi ∧ Γ)) > 0.9 for every further fact Γ, likewise E [U | bi ] > $20 does not imply E[U | bi ∧ Γ] > $20. Suppose A 1 , which desires E[U ] > $20 in all non-∅ cases, approves of any A 0 with the decision criterion (7.2) above, considering such an A 0 safe regardless of how its actions are suggested by an untrusted component Σ 0 . Suppose also that the action b2 yields $50 if the coin θ came up heads (θ H ) and $10 otherwise (θ T ). If θ is unknown and is assigned probability 50% then E[U | b2 ] = $25. But suppose that Σ 0 manages to learn the true value of θ and only suggests the action b2 in cases where it knows θ T . Then observing that the offspring has proven A 0 E[U | bi ] > $20 is not adequate reason to trust that, if it acts, it will achieve expected utility greater than $20. 49 It may be objected that since, in this case, A 0 only takes the action b 2 in cases where θ T , the true probability distribution would have P (θ T | b2 ) = 1 and A 0 should evaluate E[U | b2 ] accordingly. This raises questions of evidential versus causal decision theory which we lack space to address here. We remark that if A 0 , evaluating its probability distribution P , finds that this probability distribution already assigns probability 1 to action b 2 and probability 0 to all other actions (the \"true\" probability distribution in cases where b2 ) then A 0 would seem to know its own action before it decides (indeed, prior to its computation of expected utilities); and if A 1 also possesses axioms implying this \"true\" P then it must know its offspring's exact action which violates the Vingean principle. Furthermore, conditioning on all actions except b 2 would then require conditioning on a zero probability with undefined results. 50 We do not have good solutions for these issues, but felt that we ought to offer an unsatisfactory version of the formalism so that there would be something in place to improve upon. 
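The coin example can be checked numerically (our code; the action names match the text). With the $50/$10 payoffs as stated, the expectation computed from the 50-50 prior comfortably clears the $20 bound, but on the occasions where the untrusted suggester proposes b 2, which it does only after privately observing tails, the utility actually realized is $10.

# Numerical check of the theta-coin example: the expectation conditioned only
# on the action clears the $20 bound, yet the payoff realized on the occasions
# the filtered suggester actually proposes b2 is $10.

def payoff(action, theta):
    if action == "b2":
        return 50 if theta == "heads" else 10
    return 0  # other actions are irrelevant to the example

prior = {"heads": 0.5, "tails": 0.5}

# Expectation conditioned only on the action, as the offspring's theorem states:
naive_expectation = sum(p * payoff("b2", theta) for theta, p in prior.items())

# What actually happens when the suggester filters on its private knowledge:
realized_when_suggested = payoff("b2", "tails")

print(naive_expectation)        # 30.0 with these payoffs; above the $20 bound
print(realized_when_suggested)  # 10: below what the parent thought it bought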
To avoid the complexities of reasoning about probabilistic program execution, we supposed the environment could be divided into a near-deterministic part (the \"environmental transistors\"; the analogue of a human-made CPU constructed to have almost perfectly deterministic causal paths, crisp variable states and crisp relations) and a probabilistic part X, which is the subject of the probability function P (X). We suppose X to be a joint function of variables x l obeying the usual fac-49. The form of your theorem demanded an action such that, conditioned on knowledge of only that action, expected utility would be greater than 20; you did not ask for an action conditioned on all the knowledge available to the component which suggested that action. You get no more than what you pay for; or in this case, no more than what you prove. 50. An analogous problem potentially arises if A 0 ever proves that it does not perform some particular action, Axm 0 ¬ bi . Any conclusion would then follow from the contradictory hypothesis bi , thus Axm 0 bi → (∅ ∨ G). With valid reasoning on true axioms this should never occur (we should not prove ¬ bi and then do b i ) but it is potentially a concern when trying to build systems which bypass the Löbian obstacle. It is ordinarily Gödel's Theorem which prevents a system from concluding that it will never prove some theorem (such as the theorem bi → (∅ ∨ G)). torization for causal models (Pearl 2000) : P (X) = x l ∈X P (x l | pa l ), (7.3) where for each variable x l , pa l is a possibly empty list of the immediate causal ancestors of x l . Thus the joint probability distribution P (X) factorizes into local rules governing the probability of each node x l given its parent causes, if any. We then axiomatize A 1 's knowledge of P (X) in a way that leaves some parentless x l with unspecified probabilities; any causal descendants of these variables will have probabilities unquantified by A 1 . One of these variables will be A 0 's action and thus A 1 will not precompute a probability of A 0 's action. We will also need to leave unspecified all sensor variables s k ∈ S. Collectively the sensor variables and the motor action variable will constitute a kind of Cartesian boundary between the deterministic world in which A 0 's cognition occurs, and the probabilistic world it wants to sense and affect. We then mate deterministic reasoning to probabilistic reasoning via the bridging axioms: 51 A 1 ∀s k ∈ S : sk → (P (s k ) = 1) (7.4) A 1 ∀b i ∈ Acts 0 : bi → (P ( bi ) = 1). (7.5) A 1 and A 0 's criteria of action will be: Ā0 j ⇒A 1 Ā0 j → (∅ ∨ (E[U ] > $20)), (7.6) bi ⇒A 0 bi → (∅ ∨ (E[U ] > $20)), (7.7) and let A 1 prove a soundness schema (central trust lemma) over A 0 's logical beliefs as before. Then A 1 's decision to create A 0 is straightforward via: A 1 Ā0 → ∀b i : bi → ( 0 bi → (∅ ∨ E[U ] > $20) ), (7.8) A 1 0 bi → (∅ ∨ E[U ] > $20) ) → ( bi → (∅ ∨ E[U ] > $20)), (7.9) A 1 (∀b i : ¬ bi ) → ∅, (7.10) A 1 Ā0 → (∅ ∨ E[U ] > $20). (7.11) Suppose Axm 0 incorporates standard probability axioms, and suppose that A 0 has observed all available sensor data s k representing all missing terms of X except its action b i . Then A 0 will be able to carry out an action with 51. The form of these axioms was suggested by Paul Christiano. 
\n E[U | sk ∧ bi ] > $20, via:
A 1 ⊢ E[U | sk ∧ bi ] > $20 (7.12)
A 1 ⊢ sk (7.13)
A 1 ⊢ P (sk ) = 1 (7.14)
A 1 ⊢ bi → (P ( bi ) = 1) (7.15)
A 1 ⊢ (P ( bi ) = 1 ∧ P (sk ) = 1) → (P ( bi ∧ sk ) = 1) (7.16)
A 1 ⊢ bi → (P ( bi ∧ sk ) = 1) (7.17)
A 1 ⊢ bi → (P (¬( bi ∧ sk )) = 0) (7.18)
A 1 ⊢ E[U ] = (E[U | bi ∧ sk ] P ( bi ∧ sk )) + (E[U | ¬( bi ∧ sk )] P (¬( bi ∧ sk ))) (7.19)
A 1 ⊢ bi → (E[U ] = E[U | bi ∧ sk ]) (7.20)
A 1 ⊢ bi → (E[U ] > $20) (7.21)
We confess this to be a moderately grotesque hack that fails almost entirely to rise to the challenge of non-monotonic probabilistic reasoning. As remarked, we included it only to serve as something that could be improved upon. The argument above goes through only because, by assumption, every possible variable correlated with U has been observed, and so no further probabilistic knowledge can be gained to invalidate the conclusion. It also is not immediately obvious to us how to extend the above hack to sensory observation of variables with causal parents. In usual practice, sensors observe the causal descendants of variables of interest-e.g., the light reflecting from an object takes on a pattern caused by that object; the observed photons are causal descendants of the event of their reflection. Being unable to represent the standard idiom of evidence in Bayesian networks is a very severe defect. 52 Similarly, it seems to us that it should be possible to compute the probability of evidence before observing it, since this practice is central to Bayesian probability. In the formalism as it stands, being able to assign a prior probability to any evidential variable would produce a logical contradiction after a sensory event added the conclusion that the probability was 1. We suspect that there ought to be some much more general schema of trust in the probabilistic soundness of a successor agent, possibly invoking explicit reasoning about probability measures in the same way that we previously introduced set theory as a backdrop to general reasoning about semantic truth, or using some probabilistic trust schema such as, possibly:
P α (φ ∧ (A β ⊢ P β (φ) = p)) = p • P α (A β ⊢ P β (φ) = p). (7.22)
Such work remains in progress, however, and in general the problem of self-modification in probabilistic agents remains wide open.
52. We suspect that this desideratum might be satisfiable by some very simple technical method we have not yet invented as of this report.
\n Bounded agents: Maximizing, satisficing, and meliorizing
Expected utility is usually discussed in the context of expected utility maximization-choosing from among the actions in a set the action with highest expected utility. We did not attempt to frame our probabilistic agent that way, because introducing the notion of maximization creates new challenges to tiling, separate from the challenges of probability. I.e., suppose that we are in a deterministic environment, creating offspring which will have various possible sensors, various possible ranges of action, and whose actions (given the value of background variables) deterministically score various utilities. We could possibly adapt the logical goal G to state some version of the highest possible score being achieved, with the successor otherwise allowing a ∅ outcome. However, to properly rise to the challenge of maximization would imply creating the successor A 0 j which, of those options Acts 1 available to you, would lead to the best score you could've received, after A 0 j tries to maximize its own score. It was not obvious to us how to make the proof of such a tiling go through. It is also not obvious to us that this is the correct problem on which to spend effort, since, in real-world environments, achieving the maximum possible expected utility is impossible for all but the simplest problems.
Maximization in full generality requires correctly assessing the score (expected utility) of every action in order to guarantee that we choose the best; this is what renders, e.g., optimal chess intractable. Satisficing is more realistic for bounded agents, in this sense, because we only need to evaluate the expected utility of a single action in order to have our decision criterion E[U ] > 20 go through. (Likewise, establishing some partially uncertain, probabilistic expectation of utility is more realistic than correctly predicting an exact outcome or score.) But to seriously consider satisficing as the foundation of a useful rational agent, or sufficiently advanced self-improving intelligence, would come with its own set of problems. Suppose the lower bound is set at $2 • 10 15 utility in whatever measure, and the agent realizes that a certain plan will ensure a minimum expected utility of $10 16 . Then as we have defined the agent's goals, it satisfices to randomize between a 30% probability of implementing this plan, a 60% probability of doing something which leads to expected utility $0, and a 10% probability of doing something with expected disutility −$10 16 . This is probably not the sort of behavior we wanted. 53 Schmidhuber's \"Gödel machine\" (2007) 54 introduced the interesting suggestion of a swapping criterion of action: The \"Gödel machine\" would change its own source code if it could prove that the action of changing its source code had a higher expected utilty than keeping its current source code. (No criterion for external motor actions was introduced, but arguendo the Gödel machine would soon choose to add one.) One of the participants at the April 2013 MIRI work-53. You get what you pay for, no more than you asked for, and exactly what you proved. 54. Schmidhuber (2007) could easily be considered a spiritual forerunner of the present work. shop suggested dubbing this a \"meliorizing\" criterion of action, since it is meant to make things steadily better (repeatedly swap in plans, or successors, with higher expected utility). Furthermore, since meliorizing requires considering only two courses of action-the current plan, and the new plan-it may be suitable for a boundedly rational agent. In contrast to the suggestion in section 4 of Schmidhuber (2007) it is not obvious that the criterion, \"Swap if expected utility of the new program is higher\" is a guarantee of \"global optimality,\" which Schmidhuber suggested would be the result of each considered swap taking into account the possibility of other, better swaps if the current program were left undisturbed. Even considering expected utility rather than utility, most expectations that can be computed over some series of branches will not locate the point of highest expected value in the space, unless expectations are unusually smooth or the suggested series of changes is especially good. E.g., the problem at hand could easily have an NP-hard optimal solution (while still having good non-optimal solutions which could be steadily improved). But the question of \"global optimality\" is probably not the most important concern, since literal global optimality in the sense of trying to solve NP-hard problems should not be the key research desideratum. It is likewise not obvious to us that \"meliorizing\" is sufficient to produce satisfactory behavior with respect to a builder's programmed set of goals. 
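A minimal sketch of the swapping rule (ours, with invented numbers): the agent replaces its current plan whenever a suggested plan has higher expected utility than the current one, so it locks in the first mediocre improvement it is offered; nothing in the rule itself prices in the possibility that much better suggestions might have arrived later.

# Toy "meliorizing" agent: swap whenever the suggested plan's expected utility
# beats the current plan's. Numbers are invented for illustration.

def meliorize(current_eu, suggested_eus):
    history = [current_eu]
    for eu in suggested_eus:
        if eu > current_eu:        # the swapping criterion, and nothing more
            current_eu = eu
        history.append(current_eu)
    return history

# A mediocre early suggestion is adopted; whether anything better is ever
# considered depends entirely on the (unspecified) stream of suggestions.
print(meliorize(0.0, [1_000.0]))                     # [0.0, 1000.0]
print(meliorize(0.0, [1_000.0, 7_000_000_000.0]))    # [0.0, 1000.0, 7000000000.0]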
Suppose a sufficiently advanced machine intelligence, built according to this criterion, discovered that an asteroid was headed toward Earth and would shortly kill 7 billion people, with its current plan not preventing it. Under the strict criterion of meliorizing as written, it would make sense to swap to a program that promised to save 1,000 people, let all the others die, and make no further improvements, since this would still be better than not swapping. According to the line of argument in section 4 of Schmidhuber (2007), the agent ought to consider that it would be better to keep the previous program and wait for it to generate a better alternative. But this relies on some particular sequence of suggestions being generated such that a better alternative is considered at some point; moreover, that the agent probabilistically expects that such a better alternative will be generated if it keeps its current program, in advance of considering the actual alternative (which the Vingean principle says we cannot always do at the earlier decision point). Thus if a meliorizing criterion is ultimately satisfactory, it will be due to other properties of the series of suggestions being considered, and the way in which expectations of old and new programs are evaluated, which have not been specified into the \"swapping\" rule itself. But expected utility satisficing is not satisfactory at all, and is probably not repairable; and maximizing is only possible for the best imaginable agents, not the agents that will actually exist; whereas it might be that meliorizing can somehow be improved upon. More generally, we have discussed maximizing, satisficing, and meliorizing in order to make the point that when it comes to bounded, probabilistic rational agents that are meant to pursue their goals in some \"reasonable\" way and build descendants who do the same (equivalently self-improve), we are not able to presently state-even on the highest possible level of generality such as \"satisficing\" or \"meliorizing\"-what sort of criterion of action might be suitable. The problem is very wide open indeed.
so by Löb's Theorem ∀x : T ⊢ ψ(x). 18
(b) Instead of an indefinitely extensible ordinal hierarchy of systems T +α, let us form a descending sequence of theories T -0, T -1, T -2 as follows: T -n ≡ T ∪ {∀φ : (∀x : □ T -(n+1) φ(x) → φ(x))}.
\n Perhaps the previous constructions failed due to their impredicativity. Let us try a finitary version which speaks only of well-founded objects. Let □ k+1 T φ stand for the formula □ T □ T . . . φ . . . with k + 1 boxes. Since the pattern □ T □ T . . . is highly regular, a compact formula for □ k+1 T φ should exist. Then let A α 's criterion of action be bi ⇒ □ T φ(b i ) ∨ φ(b i ), which does not reduce to ∃k : □ k+1 T φ(b i ). 19
19. The formula □ k+1 T φ just occupies a new ordinal ω of proofness, and □ T □ k+1 T φ then corresponds to a new ordinal ω+1 of proofness.
\n . ψ(x) is just a Henkin sentence H ↔ □ T H with a dangling ∨ clause φ(x). A Henkin sentence for T is of course always provable within T .
\n . Which is, in general, the agenda of this paper: Our framework and our technical solutions are
In other words, the length of clock ticks is small enough that constructing another agent takes at least one tick. E.g., the length of a clock tick could equal the Planck time.
\n . This implies that the system behaves in a sense as though it assigns nonstandard probabilities (in the sense of nonstandard analysis with infinitesimals), an issue we are still working on.
\n . It is possible that this work might have some relevance to the philosophy of epistemology, which we lack space to explore here.", "date_published": "n/a", "url": "n/a", "filename": "TilingAgentsDraft.tei.xml", "abstract": "We model self-modification in AI by introducing \"tiling\" agents whose decision systems will approve the construction of highly similar agents, creating a repeating pattern (including similarity of the offspring's goals). Constructing a formalism in the most straightforward way produces a Gödelian difficulty, the \"Löbian obstacle.\" By technical methods we demonstrate the possibility of avoiding this obstacle, but the underlying puzzles of rational coherence are thus only partially addressed. We extend the formalism to partially unknown deterministic environments, and show a very crude extension to probabilistic environments and expected utility; but the problem of finding a fundamental decision criterion for self-modifying probabilistic agents remains open.", "id": "8b65dbff462dc16ddf29573c26ea275a"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["William Macaskill", "Aron Vallinder", "Carl Shulman", "Caspar Österheld", "Johannes Treutlein"], "title": "The Evidentialist's Wager", "text": "Introduction
Suppose you find yourself in the following decision situation:
\n Moral Newcomb
In front of you are two boxes, A and B. You can choose either B only, or both A and B. Box A is guaranteed to contain one dose of a cure for a fatal disease, whereas box B may or may not contain ten such doses. A perfectly reliable predictor has made a prediction about your decision. If she predicted that you would take both boxes, she left box B empty. 1 If she predicted that you would take box B only, she put ten doses in that box. What should you do?
Cases like this give rise to the well-known debate between causal decision theory (CDT) and evidential decision theory (EDT). Roughly speaking, according to CDT, you should perform the action which is likely to cause a good outcome, whereas according to EDT, you should perform the action which provides strong evidence that a good outcome will occur. 2 And let's suppose-as would seem to be natural-that you represent the decision situation as follows:
                  Cure in both    Cure in one only
Take one box      Ten lives       Nothing
Take both boxes   Eleven lives    One life
Given that the prediction has already been made, your decision has no influence on whether or not box B contains anything. Therefore, CDT recommends that you choose both boxes, because that way you are guaranteed to obtain one more dose than you otherwise would, regardless of whether or not B is empty. By contrast, EDT recommends that you choose box B only, because you would thereby receive strong (indeed perfect!) evidence that you will obtain ten doses of the cure. If you instead choose both boxes, you would thereby receive strong (indeed perfect!) evidence that you will merely obtain one dose. CDT has garnered more adherents (Bourget and Chalmers, 2014), and hence we assume most decision theorists believe that you should two-box in the Moral Newcomb problem.
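For concreteness, the two verdicts can be computed from the matrix above (our code; utilities in lives saved, a perfectly reliable predictor, and a 50% prior over the prediction are assumed).

# Evidential vs. causal expected value for the Moral Newcomb matrix above,
# assuming a perfectly reliable predictor; utilities are lives saved.

payoff = {  # (action, state) -> lives saved
    ("one_box", "cure_in_both"): 10, ("one_box", "cure_in_A_only"): 0,
    ("two_box", "cure_in_both"): 11, ("two_box", "cure_in_A_only"): 1,
}

def eev(action):
    # P(state | action): a perfect predictor makes the state certain given the act.
    state = "cure_in_both" if action == "one_box" else "cure_in_A_only"
    return payoff[(action, state)]

def cev(action, p_cure_in_both=0.5):
    # P(action => state) = P(state): the act has no causal influence on the state.
    return (p_cure_in_both * payoff[(action, "cure_in_both")]
            + (1 - p_cure_in_both) * payoff[(action, "cure_in_A_only")])

print(eev("one_box"), eev("two_box"))    # 10 1
print(cev("one_box"), cev("two_box"))    # 5.0 6.0  (two-boxing is +1 either way)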
But here's an argument for one-boxing that, to our knowledge, has not appeared in the literature. The argument relies on three premises. First, even though one might have higher credence in CDT than EDT, there is still an ongoing debate, and a number of intelligent and well-informed decision theorists endorse EDT. In the face of such expert disagreement, one shouldn't be anywhere near certain that CDT is correct. Instead, one should assign at least some credence to EDT. Second, once we take into account our background knowledge of the world, the stakes become significantly higher for EDT than CDT in the Moral Newcomb problem. The universe is probably very big indeed, and there are very many individuals very similar to you who will face similar decision problems. As a result of this similarity, your decision in the Moral Newcomb problem is correlated with decisions made by many other agents elsewhere in time and space. This means that the simple state-consequence matrix above does not in fact capture everything that is relevant to the decision problem: we have to refine the state space so that it also describes whether or not correlated agents face boxes with cures in both. By taking one box, you gain evidence not only that you will obtain more doses of the cure, but also that these other agents will achieve good outcomes too. Therefore, the existence of correlated agents has the effect of increasing the stakes for EDT. By contrast, there is no causal connection between your decision and the decision of these other agents, and hence taking them into account does not affect the stakes for CDT. Third, in the face of uncertainty, the rational thing to do is to hedge one's bets. If the stakes are much higher on one hypothesis than another, and the credences you assign to each of these hypotheses aren't very different, then it's rational to choose the option which performs best on the high-stakes hypothesis. If these three premises are true, then one-boxing is the rational thing to do in the Moral Newcomb problem. But, for an altruistic and morally motivated agent, there's nothing special about Moral Newcomb compared to other decision problems where EDT and CDT disagree. So we can generalise our conclusion as follows: In general, and across a wide variety of decision contexts, if you are an altruistic and morally motivated agent, and are uncertain between EDT and CDT, you should typically act in line with EDT even if you have significantly higher credence in CDT. \n Call this argument The Evidentialist's Wager . Should we accept the Wager? There are several steps of the argument that can be questioned. First, one could debate whether you ought to hedge in the face of decision-theoretic uncertainty. While this is not the primary focus of this article, we will give some motivation for hedging in the next section. Second, in order for the claim that the stakes are higher for EDT than for CDT to be meaningful, we need some way of making an intertheoretic value comparison -that is, some way of comparing the magnitude of an action's evidential expected value with the magnitude of its causal expected value. We present a proposal for doing this in section 3. Third, one might wonder whether we should really expect the world to contain sufficiently many correlated decision-makers facing similar decision problems. We will address this question in section 4. One way for there to be many such agents is if the universe is infinite, as is implied by several leading cosmological theories. 
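A back-of-the-envelope version of the hedging premise (ours; the perfect correlation, the simple credence-weighted scoring, and all numbers are assumptions): with N agents whose choices are perfectly correlated with yours, the evidential difference between one-boxing and two-boxing scales with N, while the causal difference stays at a single dose, so even a modest credence in EDT can tip the weighted comparison.

# Back-of-the-envelope hedging: evidential stakes scale with the number of
# correlated agents, causal stakes do not. Perfect correlation is assumed
# purely for illustration.

def edt_stakes(n_correlated_agents):
    # one-boxing vs two-boxing, in lives, if all correlated agents act alike
    return (10 - 1) * n_correlated_agents

def cdt_stakes():
    return 1            # two-boxing causally gains one dose, regardless

def hedged_verdict(credence_edt, n):
    one_box_score = credence_edt * edt_stakes(n)
    two_box_score = (1 - credence_edt) * cdt_stakes()
    return "one-box" if one_box_score > two_box_score else "two-box"

print(hedged_verdict(credence_edt=0.05, n=1))        # two-box
print(hedged_verdict(credence_edt=0.05, n=1000))     # one-box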
But the infinite case gives rise to a range of further issues. These are addressed in section 5.
\n Arguments for Hedging
The Wager relies crucially on the idea that, in the face of decision-theoretic uncertainty, one ought to hedge. But is that correct? Perhaps what one ought to do is just what one ought to do according to the correct first-order decision theory? We're not going to be able to resolve this question in this article. But we'll show that although the view has its issues to resolve, there are some significant arguments in its favour and it should therefore at least be taken seriously. We present three such arguments.
\n Dominance
Consider the following variant on Newcomb's problem.
\n Equal Amounts
In 3
                  Money in B    Both empty
Take one box      $1M           $0
Take both boxes   $1M           $0
In this case, the situation is reversed: the causal expected value of taking both boxes is the same as that one has positive credence in, these suggestions make perfectly good sense.
3. When we say that the EEV of both actions is the same, we are assuming that you are certain that the predictor is perfectly reliable. If we instead assume that you believe the predictor to be highly, although not perfectly, reliable, it follows that EDT will also recommend two-boxing. However, for any given belief about the predictor's reliability, we can set the stakes so that EDT judges both actions to be equally choiceworthy.
\n Stakes Sensitivity
MacAskill (2016) argues that our intuitions in Newcomb cases are sensitive to the relative stakes for EDT and CDT, and that this stakes sensitivity can be explained by the fact that one should hedge under decision-theoretic uncertainty. He presents the following two cases (slightly altered here) in Psychopath Button - can also be explained in terms of the idea that we are intuitively hedging in the face of decision-theoretic uncertainty. And it seems that, empirically, people's intuitions about Newcomb problems are stakes sensitive: in a study that we and some coauthors ran, subjects were more likely to one-box in Newcomb's problem if the stakes were higher for EDT than for CDT. 7 Insofar as accordance with people's intuitions in these cases has been taken to be a desideratum of a decision theory, this provides some support for the idea that, in some sense of 'ought', one ought to hedge in the face of decision-theoretic uncertainty.
\n Paying for Information
Consider again the Moral Newcomb problem with which we started. Suppose that before making up your mind, you learn that, via a time machine, a book has been delivered from the year 10,000 CE. On the blurb it claims that after long reflection our descendants have discovered the correct decision theory, with arguments so clear and compelling that anyone can understand and everyone will be convinced. It even has various applications of the view, including to the precise problem you're facing. However, you have to pay $5 in order to read the book. Let's bracket, for the purpose of this example, any intrinsic value from knowledge of the truth about rationality, any fame that the reader might receive for explaining its arguments to others, and so on: we can imagine that you know you will immediately forget the contents of the book after making your decision. If there is no sense of 'ought' which is relative to one's decision-theoretic uncertainty, you should not buy and read the book. From EDT's perspective, paying $5 to read the book and then one-boxing is strictly worse than just one-boxing right away.
Similarly, from CDT's perspective, paying $5 to read the book and then two-boxing is strictly worse than just two-boxing right away. So, if you ought to do simply what you ought to do on the correct decision theory, you ought not to pay the $5 to read the book. This seems deeply counterintuitive: given the stakes-ten lives saved for EDT and one life saved for CDT-it would seem reckless not to incur such a trivial cost before taking action. But if so, there is some notion of 'ought' which is relative to one's decision-theoretic uncertainty. \n Outstanding Issues The argument from stakes-sensitivity supports the claim that one should hedge in the face of decision-theoretic uncertainty, whereas the other two arguments only support the weaker claim that there is some notion of 'ought' that takes decision-theoretic uncertainty into account. However, it seems clear to us that if one accepts that there is such a notion of 'ought', one should plausibly accept hedging as well. In the context of moral uncertainty, most of the resistance to hedging from those who accept the relevant notion of 'ought' is driven by skepticism about the possibility of intertheoretic value comparisons (e.g. Gustafsson and Torpman 2014:165) . But as we will argue in the next section, such comparisons are significantly easier to make in the case of decision-theoretic uncertainty. Nevertheless, the view that one should hedge under decision-theoretic uncertainty still faces some outstanding issues. In order to know precisely how to take (first-order) decision-theoretic uncertainty into account, one needs a second-order decision theory. Although we have not presented a specific second-order decision theory, we have argued that any plausible account should endorse hedging; candidate second-order decision theories would include 'meta causal decision theory' and 'meta evidential decision theory' (MacAskill 2016). However, if one should be uncertain among first-order decision theories, one should presumably also be uncertain among second-order decision theories. Moreover, if uncertainty among first-order decision theories can make a difference for what one should do, then presumably so can uncertainty among second-order decision theories. In order to take this uncertainty into account we need a third-order decision theory, and thus we are seemingly lead into an infinite regress of uncertainty at higher and higher orders. This infinite regress represents a significant challenge to the claim that one should hedge in the face of first-order decision-theoretic uncertainty. If we could somehow be certain that the correct second-order decision theory-whatever it is-endorses hedging, then we would be in the clear. But we should not assign zero credence to My Favourite Theory , according to which one should simply act in accordance with the first-order theory one has highest credence in, and which therefore does not endorse hedging. We recognise that this is a 8 major challenge, but for the purposes of this paper we will assume that some solution can be obtained. 9 A second challenge concerns the question of what notion of 'ought' is at play in the claim that one ought to hedge under decision-theoretic uncertainty. In the corresponding literature on moral 10 uncertainty, one common strategy is to say that although the notion of 'ought' that is provided by the correct moral theory is a moral ought, the notion of 'ought' that is relative to one's moral uncertainty is a rational ought. 
However, on the face of it, this strategy appears to be unavailable in the present context. 11 The ought which is relative to the correct (first-order) decision theory is already a rational ought, and therefore it seems we cannot appeal to that distinction here. Again, we shall not try to resolve this issue here. 12
8. See Gustafsson and Torpman (2014) for a discussion and defence of My Favourite Theory in the context of moral uncertainty.
9. For detailed discussion of the regress problem, see Trammell (2019), MacAskill, Ord, and Bykvist (2019, chapter 1.3), and Tarsney (manuscript).
10. See Weatherson (2014), Harman (2015) and Hedden (2016) for this objection in the context of moral uncertainty.
12. However, if we think of supplementing a given axiology with either an evidential or a causal decision principle as giving rise to two distinct moral theories, we can still appeal to this distinction. As an analogy, consider the fact that a utilitarian axiology can be combined with both a risk-neutral and a risk-averse principle for evaluating actions under empirical uncertainty. If we think of risk-neutral and risk-averse utilitarianism as two distinct moral theories, then we should plausibly say the same thing about evidential and causal utilitarianism. Of course, this maneuver will not work for all cases of uncertainty between EDT and CDT.
For these two reasons, we think it's an open question whether you ought to hedge in the face of decision-theoretic uncertainty: whichever view you take, you have some hard problems to grapple with. However, we believe the arguments we've given in favour of hedging make the idea that one should hedge sufficiently plausible that it's interesting and important to work out what one should do if one should hedge. In what follows, we will therefore assess the conditional claim that if you ought to hedge, then you should generally take the action that EDT recommends.
\n Intertheoretic Comparisons
In order for the claim that one ought to hedge in the face of decision-theoretic uncertainty to be meaningful, we must have some way of comparing evidential expected value with causal expected value. Otherwise we will not be able to make sense of the claim that the stakes are higher for EDT than they are for CDT. In the literature on moral uncertainty, this is known as the problem of intertheoretic value comparisons. To get a sense of the difficulty of this problem, consider the following case. Suppose that you're facing the Footbridge trolley problem, but your credence is evenly split between a consequentialist theory and a deontological theory. If you should hedge under moral uncertainty, then you should sacrifice the one in order to save the five if the stakes are higher for the consequentialist theory, and refrain from doing so if the stakes are higher for the deontological theory. But how are you to put the two theories on a common scale in order to know whether the stakes are higher for the consequentialist or the deontological theory? The theories themselves do not seem to come equipped with an answer to this question, and it's hard to see what other information one could appeal to in order to resolve it. Multiple proposals have been made in the literature, but it's clear that there is not yet any consensus solution. 13 Luckily, the problem of intertheoretic comparisons is significantly easier in the case of decision-theoretic uncertainty. In order to see this, let us first state EDT and CDT more precisely.
According to EDT, you should choose the action with the highest evidential expected value (EEV), calculated as follows:
EEV(A) = ∑_{i=1}^{n} P(O_i | A) V_EDT(O_i),
where O_1, ..., O_n are the possible outcomes, P is the agent's credence function, and V_EDT is her value function. The EEV of an action A is the sum product of the value of each outcome and the probability of that outcome on the assumption that A is performed. According to CDT, by contrast, you should choose the action with the highest causal expected value (CEV), calculated as follows:
CEV(A) = ∑_{i=1}^{n} P(A ⇒ O_i) V_CDT(O_i),
where P(A ⇒ O_i) is the probability of the counterfactual 'if A were performed, then O_i would occur'. The proposal we want to defend is that V_EDT and V_CDT are simply the same value function. This proposal might seem obvious to some, but to argue for it we must first say a bit more about the nature of value functions in decision theory. For our purposes, there are two salient ways of thinking about these value functions. First, we can think of them as externally given, and therefore independent of the choice of a particular decision theory. Second, we can think of them as constructed, via a representation theorem, from the agent's relational attitudes. Let us begin with the former. Given that we presented the Wager as an argument for why altruistic and morally motivated agents should generally favour EDT in the face of decision-theoretic uncertainty, it is natural to think of the value functions as being externally provided by some moral theory. In particular, suppose that the agent facing the Moral Newcomb problem is certain of some axiology, and that she does not take there to be any relevant side constraints: in deciding whether to one-box or two-box, she simply wants to perform the action that in expectation leads to the best outcome according to her favoured axiology. The only thing she's uncertain about is how she should take into account information about causation and correlation when making her decision. In this case it is clear that she does not face a problem of intertheoretic comparisons, because both V CDT and V EDT are given by the relevant axiology. The same point holds if she is instead acting under axiological uncertainty. If she has settled on some value function V which represents the value she assigns to outcomes in light of her axiological uncertainty (for example, V might be the expected value of the outcome under axiological uncertainty), then this is the value function that should be used to calculate both causal and evidential expected value. Again, if it were the case that V CDT ≠ V EDT , then at least one of the two would not accurately represent how she values states of affairs under axiological uncertainty. In general, if there is some externally given value function over outcomes, there will not be any problem of intertheoretic comparisons to begin with. But suppose that there isn't any externally given value function. A second approach is to say that the value functions are constructed (along with the corresponding probability functions) via a representation theorem from the agent's relational attitudes, typically her preferences over options. A representation theorem shows that if an agent's preferences over options satisfy certain conditions, then she can be represented as maximising expected value with respect to some probability function P and some value function V, in the sense that for any pair of options A and B, she prefers A to B just in case the expected value of A is greater than the expected value of B (with the expected value calculated relative to P and V).
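Instantiating the two formulas above with a single, externally given value function makes the point concrete (our code; the predictor-accuracy parameterization r is an assumption): only the probabilities differ between the two expectations.

# EEV and CEV from the formulas above, with one shared value function V and a
# predictor of accuracy r; only the probabilities differ between the theories.

V = {("one_box", "full"): 10, ("one_box", "empty"): 0,
     ("two_box", "full"): 11, ("two_box", "empty"): 1}

def eev(action, r):
    # P(box B full | action): r if you one-box, 1 - r if you two-box
    p_full = r if action == "one_box" else 1 - r
    return p_full * V[(action, "full")] + (1 - p_full) * V[(action, "empty")]

def cev(action, p_full):
    # P(action => outcome) reduces to the unconditional chance the box is full
    return p_full * V[(action, "full")] + (1 - p_full) * V[(action, "empty")]

r = 0.99
print(eev("one_box", r), eev("two_box", r))       # about 9.9 and about 1.1
print(cev("one_box", 0.5), cev("two_box", 0.5))   # 5.0 and 6.0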
Suppose therefore that both V EDT and V CDT, along with their respective probability functions P EDT and P CDT, are constructed via representation theorems. In order to compare them, we need a framework which is broad enough to be able to express both EDT and CDT. This means that, for our purposes, the most relevant result is Joyce's (1999:239) very general representation theorem, which can underwrite both evidential and causal decision theory. Joyce takes as his starting point both a preference relation and a comparative belief relation. Both of these relations are conditional (or suppositional) in the sense that they are defined over options of the form 'A on the condition that B.' For example, I might prefer going for a walk on the supposition that it's sunny to staying inside reading on the supposition that it rains, or I might find rain on the supposition that it's cloudy more likely than a breeze on the supposition that it's sunny. Joyce imposes requirements on both the preference relation and the comparative belief relation which together ensure the existence of a unique probability function and a unique (up to positive linear transformation) value function. By imposing further conditions on the comparative belief relation so as to make the supposition behave either like evidential or causal supposition, he is able to derive both EDT and CDT as special cases. Now, if we take the relevant preferences and comparative beliefs to be the ones that the agent in fact has, then we cannot construct both V CDT and V EDT simultaneously. After all, if she satisfies the conditions needed for EDT she will fail to satisfy the conditions needed for CDT. But we can proceed as follows. In order to construct a cardinal value function, it is sufficient to consider the agent's preferences and comparative beliefs on just a single supposition. So all we have to do is find a proposition such that the preferences and comparative beliefs of EDT are in agreement with those of CDT on the supposition that this proposition is true. And here a natural candidate suggests itself: the tautology. Evidential and causal supposition will not yield different constraints on preferences and comparative beliefs when the proposition being supposed is the tautology. This allows us to say that EDT and CDT share the same value function. (The only case in which this would not work is if there is some proposition A such that P(A | ⊤) = P(⊤ ⇒ A) = 0, yet some other proposition B such that P(B ⇒ A) > 0.) However, there is still a residual worry that these claims can only establish that the two value functions should be positive linear transformations of one another, and not the stronger conclusion that they should be identical. From the perspective of first-order decision theory, the choice of one positive linear transformation over another is simply an arbitrary representational decision with no practical significance. But once we take decision-theoretic uncertainty into account, this choice does become practically significant. In particular, if V CDT = k · V EDT for some k > 1, then the stakes for EDT will have to be higher in order for one-boxing to be rational than if V CDT = V EDT. Borrowing terminology that MacAskill (2014:136) introduced in the context of moral uncertainty, let us say that V CDT is an amplification of V EDT just in case V CDT = k · V EDT for some k > 1.
Although there is something to be said for the possibility of amplified moral theories, the case is significantly weaker for decision-theoretic value functions. But if we do regard it as possible that V CDT is an amplification of V EDT, then plausibly we should also think that both EDT and CDT come in different versions, corresponding to different amplifications. That is, each way of setting the parameter k would give us a different version of evidential decision theory (EDT_k) and a different version of causal decision theory (CDT_k). So now an agent facing decision-theoretic uncertainty would have to divide her credence not between EDT and CDT, but between various amplifications of these two theories. We can represent such an agent as simply being uncertain between EDT and CDT by aggregating the amplified value functions as follows: V_EDT = ∑_k P(EDT_k | EDT) · V_k and V_CDT = ∑_k P(CDT_k | CDT) · V_k. But even now, it is unclear what might lead you to assign proportionally greater credence to one amplification of CDT than to the same amplification of EDT. That is, it should be the case that, for any k, P(EDT_k | EDT) = P(CDT_k | CDT). If your credences in amplified theories are symmetrical in this way, then the argument that you should hedge when the stakes are much higher for EDT would still go through. In order to deny this conclusion via an appeal to amplification, you would have to claim that we should have proportionally greater credence in more amplified versions of CDT and, moreover, that this difference is so large that it cancels out the higher stakes for EDT. We can't see any plausible argument for this being the case. However, one might in response give up the underlying assumption that there is some fact of the matter as to which intertheoretic comparisons are correct, and instead say that this is a subjective matter for the agent to decide herself. In particular, one might take the view that these intertheoretic comparisons are only meaningful insofar as the probability and value functions they rely upon can be constructed via a representation theorem for decision-theoretic uncertainty. Such a representation theorem would show under what conditions an agent can be represented as maximising 'meta' expected value. Although no such result has (to our knowledge) been produced, if it looks at all like an ordinary representation theorem for empirical uncertainty, it would follow that the agent is free to make any intertheoretic comparisons she likes (provided only that the value functions V EDT and V CDT are positive linear transformations of one another). While some preferences and comparative beliefs will entail that V CDT = V EDT, others will entail that V CDT = k · V EDT, and there is nothing to rule out the latter as irrational. Now, if one does take seriously the idea that the disagreement between EDT and CDT does not concern the nature of value, but rather the question of which type of supposition is decision-relevant, one might wish to impose further conditions to ensure that only those relational attitudes which entail that V CDT = V EDT are rational. If one can impose such further conditions, then the use of representation theorems to construct value functions does not in itself entail that intertheoretic comparisons will be subjective, but we concede that those who are not persuaded by the idea that the disagreement between EDT and CDT does not concern the nature of value have no reason to impose those further conditions. Let's take stock. If the value function is externally given, for instance by an axiology, we can straightforwardly conclude that V EDT = V CDT. If on the other hand both V EDT and V CDT are constructed via representation theorems, we can at the very least conclude that they must be positive linear transformations of one another.
Moreover, we argued that if one accepts that the dispute between EDT and CDT does not concern the nature of value, one should accept the stronger conclusion that the two value functions should be identical. We then noted that if one does regard it as a genuine possibility that V CDT is an amplification of V EDT , one should also concede that both EDT and CDT come in different versions, corresponding to different amplifications of their value functions. Moreover, we argued that there is no good reason to assign proportionally greater credence to one amplification of CDT than to the same amplification of EDT. Finally, we conceded that if one takes the question of intertheoretic comparisons to be a subjective one for the agent to settle herself, then all we can say is that the two value functions must be positive linear transformations of one another. Although we take V EDT = V CDT to be the most natural way of making the intertheoretic comparison, our argument that hedging in the face of decision-theoretic uncertainty typically favours EDT does not require this particular account. Rather, in the finite case, our argument requires that the comparison is not so strongly biased in favour of CDT so as to outweigh the intuitively higher stakes for EDT. In the infinite case, our argument only requires that the two value functions are positive linear transformations of one another. Given that any plausible account (even if subjective) will satisfy this latter requirement, we don't have to appeal to any potentially controversial claims about intertheoretic comparisons in the infinite case. You might object that our proposal about how to make intertheoretic value comparisons 'stacks the deck' in favour of EDT because, as the evidentialist's wager shows, in general under decision-theoretic uncertainty, with this choice of intertheoretic comparisons, one will act in accordance with EDT's recommendation. As an analogy, consider the question of how to do intertheoretic value comparisons between average and total utilitarianism. Suppose we fix such comparisons by stipulating that in a world with only one individual, the difference in value between any two actions is the same for average and total utilitarianism. This implies that in a world with n individuals, giving each of them one additional unit of welfare would be n times more valuable according to total utilitarianism than according to average utilitarianism. In effect, this means that total utilitarianism will swamp average utilitarianism: if one should hedge under moral uncertainty, then one will generally go with total rather than average utilitarianism whenever they are in conflict, because the stakes are much higher for the former theory. It has been argued that this is a good reason for thinking that we should not make the intertheoretic comparisons between average and total utilitarianism in this way. Perhaps using 20 V EDT = V CDT to settle comparisons between EDT and CDT is analogous to using the one-person world to settle comparisons between total and average utilitarianism, and should be rejected for the same reason. However, the analogy is flawed. When normalising average and total utilitarianism at the one-person world, there are no scenarios in which the stakes are higher for average utilitarianism than for total utilitarianism. By contrast, using the case of no non-causal correlations to normalise does allow for scenarios in which the stakes are higher for CDT than for EDT, as the following example shows. 
\n Evil Twin In front of you are two boxes, A and B. You can choose either B only, or both A and B. Box A is guaranteed to contain ten thousand dollars to be donated to an effective charity, whereas box B may or may not contain twenty thousand dollars to be donated to an effective charity. A perfectly reliable predictor has made a prediction about your decision. If she predicted that you would take both boxes, she left box B empty. If she predicted that you would take box B only, she put twenty thousand in that box. However, you also know that your evil twin is facing the same decision problem. Being evil, he will donate the money to an anti-charity. One dollar to the anti-charity precisely counterbalances one dollar to the charity. However, all of the monetary amounts in his decision problem are half the size of yours. Being your twin, his decision is perfectly correlated with your own. For CDT, the fact that your evil twin also faces a similar decision problem makes no difference to the stakes: CEV(Two-box) − CEV(One-box) = $10,000. But for EDT, it means that the stakes are half as big as they would otherwise have been: EEV(One-box) − EEV(Two-box) = $5,000. Hence our proposal for how to do intertheoretic comparisons, when combined with a claim about correlations, now yields the result that the stakes are lower for EDT than for CDT, rather than the other way around. Therefore, it is not true that our account makes EDT swamp CDT in general. Whether or not it does will depend on which correlations the agent takes to hold. \n Finite Case We have argued (or at least attempted to make plausible) that it is rational to hedge in the face of decision-theoretic uncertainty. That is, we have argued that even if your credence in CDT is substantially higher, you should nevertheless follow EDT if the stakes are much higher for EDT. We have also presented an account of intertheoretic comparisons that allows us to say when the stakes are higher for one decision theory than another. Together, these imply that if the expected number of correlated decision-makers is large enough, you should one-box in the Moral Newcomb problem even if you have significantly higher credence in CDT than in EDT. Recall that the possible payoffs are as follows: \n Cure in both | Cure in one only \n Take one box: Ten lives | Nothing \n Take both boxes: Eleven lives | One life \n As we are assuming a perfectly reliable predictor, this means that, before taking any correlated agents into account, the stakes are nine times higher for evidential than for causal decision theory. Suppose now that you have merely 1% credence in EDT, and 99% credence in CDT. If we were then trying to maximise expected value over decision-theoretic uncertainty, the existence of correlated agents would have to increase the relative stakes for EDT by a factor of 11 in order for one-boxing to be the rational option. Why should one expect there to be all these correlated agents facing Moral Newcomb problems? Our argument doesn't require that the correlation in question be perfect. Consider now the vast number of humans and similarly reasoning aliens that could exist in the future. For example, Bostrom (2013:18) estimates that over the course of the future, Earth could sustain 10^16 lives of normal duration. If we instead assume that we will at some point spread beyond Earth, or that life will eventually be primarily digital, he gives the corresponding estimates of 10^34 and 10^54 life years respectively.
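Stepping back for a moment, the Evil Twin arithmetic above can be checked with a short sketch (ours, not the authors'). The only assumptions beyond the case description are that "perfectly correlated" means the twin certainly mirrors your choice, and that CDT evaluates the two actions holding the prediction and the twin's choice fixed; the particular values held fixed cancel out of the difference.

# A small check of the Evil Twin stakes. Your amounts: box A holds $10,000,
# box B holds $20,000 if one-boxing was predicted. The twin's amounts are half
# the size, and his donations go to an anti-charity, so they count negatively.

A_YOU, B_YOU = 10_000, 20_000
A_TWIN, B_TWIN = A_YOU // 2, B_YOU // 2

def your_donation(action, predicted_one_box):
    b = B_YOU if predicted_one_box else 0
    return b if action == "one_box" else A_YOU + b

def twin_donation(action, predicted_one_box):
    b = B_TWIN if predicted_one_box else 0
    amount = b if action == "one_box" else A_TWIN + b
    return -amount  # anti-charity dollars exactly counterbalance charity dollars

def eev(action):
    # EDT: with a perfect predictor and a perfectly correlated twin, choosing
    # an action is evidence that the prediction and the twin's choice match it.
    predicted = (action == "one_box")
    return your_donation(action, predicted) + twin_donation(action, predicted)

def cev(action, predicted_one_box=True, twin_action="one_box"):
    # CDT: the prediction and the twin's choice are causally fixed; the fixed
    # values cancel when we take the difference between the two actions.
    return (your_donation(action, predicted_one_box)
            + twin_donation(twin_action, predicted_one_box))

print(eev("one_box") - eev("two_box"))   # 5000: EDT's stakes are halved
print(cev("two_box") - cev("one_box"))   # 10000: CDT's stakes are unchanged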
Probably, some of these future people will be similar to you in terms of how they approach Newcomblike problems. Therefore, your decision to one-box constitutes evidence that they will also one-box. Of course, the evidence isn't perfect, but if the number of correlated agents is sufficiently large, the impact will be the same as that of many identical copies. For example, if your decision to one-box only increased the probability that the correlated agents one-box by 1%, there would only have to be a thousand agents who are facing problems similar to Moral Newcomb and who are correlated at this level in order for hedging to be rational. Of course, the assumption of a perfect predictor is not realistic. When we relax this assumption, we will again have to increase the number of correlated agents in order for hedging to be rational. But in doing so, we will not have to posit an unreasonably large number of correlated agents. For example, if the predictor is only 60% accurate, there would have to be 8⅓ times as many correlated agents. Now, establishing what credence distribution one should have over the number of correlated agents facing decision problems of this kind, and their degree of correlation, is a thorny empirical matter over which there will be reasonable disagreement. Yet we believe that the numbers provided here are conservative, and that it would certainly not be unreasonable for someone to have such beliefs in light of the vast number of people that could exist in the future. 21 22 This establishes that if you are a morally motivated and altruistic agent, you should one-box in Moral Newcomb even if you have significantly higher credence in causal than evidential decision theory. But earlier, we claimed that there is nothing special about the Moral Newcomb case, and that our reasoning therefore supports the following more general conclusion: In general, and across a wide variety of decision contexts, if you are a morally motivated and altruistic agent, and are uncertain between EDT and CDT, you should typically act in line with the former even if you have significantly higher credence in the latter. Is this more general claim correct? Or could it be that, although the Wager works in the Moral Newcomb case, there are other cases in which EDT and CDT come apart, where hedging under decision-theoretic uncertainty doesn't lead an altruistic agent to choose the option recommended by EDT? Recall that what is driving our argument is the claim that your decision is correlated with the decisions of other agents. This means that your decision to perform an action provides evidence that correlated agents will also perform that action. Moreover, if a correlated agent's decision to perform that action provides evidence that some desirable outcome will obtain, then your decision also provides evidence that that outcome will obtain. In general, the existence of correlated agents will affect the stakes for EDT, but not for CDT. Now, as we saw in the Evil Twin case, it is possible to construct cases in which such correlations have the effect of decreasing the stakes for EDT. If the decrease is sufficiently large, the wager will run in the opposite direction: even if one has significantly higher credence in EDT than in CDT, one should nevertheless act in accordance with the latter. However, we contend that such cases are few and far between. 
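Here is one way (ours, and only one reconstruction of the arithmetic) of recovering the "roughly a thousand agents" figure mentioned above. It assumes a perfect predictor, per-decision stakes of nine lives for EDT and one life for CDT, 1% credence in EDT, and a hedging rule that follows EDT whenever credence-weighted stakes favour it.

# How many correlated agents are needed before hedging favours one-boxing?
# Assumptions (ours): perfect predictor, per-decision stakes of 9 lives for
# EDT and 1 life for CDT, 1% credence in EDT, and your one-boxing raising each
# correlated agent's probability of one-boxing by 1%.

P_EDT, P_CDT = 0.01, 0.99
STAKES_EDT_SOLO = 9.0   # lives: EEV(One-box) - EEV(Two-box) in your own problem
STAKES_CDT = 1.0        # lives: CEV(Two-box) - CEV(One-box)
CORRELATION = 0.01      # evidential bump your choice gives each correlated agent

def edt_stakes(n_correlated):
    # Your own problem plus the evidence your choice provides about each of
    # the n correlated agents, each of whom also faces 9-life EDT stakes.
    return STAKES_EDT_SOLO + n_correlated * CORRELATION * STAKES_EDT_SOLO

# Hedging favours one-boxing once P(EDT) * stakes_EDT exceeds P(CDT) * stakes_CDT.
n_threshold = ((P_CDT * STAKES_CDT / P_EDT) - STAKES_EDT_SOLO) / (CORRELATION * STAKES_EDT_SOLO)
print(n_threshold)  # roughly 1000: on the order of a thousand correlated agents suffice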
In the Evil Twin case, you are only correlated with one other agent who faces a problem with lower stakes, and whose decision will partially cancel out (by donating to an anti-charity) the good you achieve through your own decision (by donating to a charity). But once we take into account the vast number of people who may exist in the future, it's overwhelmingly likely that whenever your decision is correlated with one person's decision, it will also be correlated with very many other people's decisions. It would therefore be a striking coincidence if the effect of these correlations would be to lower the stakes for EDT. If you have sufficiently many evil twins (or evil n -tuplets), the stakes will be higher for EDT than CDT (although now EDT will also recommend two-boxing). If you have just one good twin (and no evil twins), the stakes will also be higher for EDT. In general, once we take into account that there will be very many correlated agents, the only case in which the stakes will be lower for EDT is when (i) either the degrees of correlation or the \"isolated\" stakes faced by correlated agents are sufficiently low, and (ii) performing the action that EDT would have recommended in the absence of correlated agents provides evidence that these agents will partially cancel out the good you achieve through your own decision (in the same sense that your evil twin partially cancels out the good you achieve through one-boxing). What if one believes that anti-correlated agents vastly outnumber correlated ones? If one believes this in the Moral Newcomb case, the conclusion of our argument is that one should two-box rather than one-box. However, this does not constitute a counterexample to our claim that a morally motivated and altruistic agent should generally act in line with EDT even if she has significantly higher credence in CDT. To see this, note that even if the agent were certain of EDT, she would still two-box rather than one-box. Rather than being a counterexample, this is simply a case in which the agent's belief that there are many more anti-correlated than correlated agents (together with the assumption that she is altruistic and morally motivated) imply that EDT recommends a different option than what one might naïvely expect. What's more, the stakes are again higher for EDT than they are for CDT, so in this sense it's still EDT that is driving the decision. 23 In summary, the existence of correlated decision-makers will affect the stakes for EDT but not for CDT. We have argued on empirical grounds that it's reasonable to believe that there are sufficiently many such correlated decision-makers so as to make hedging by following the recommendation of EDT rational. The empirical argument was based on the assumption that the universe is finite. So let's now consider what happens if the universe is instead infinite. \n Infinite Case If, as many of our best cosmological theories imply, the universe is infinite, and every physically possible configuration of matter is realised in infinitely many regions of spacetime, there will clearly be sufficiently many correlated agents. Indeed, there will be infinitely many identical copies of you who 24 are facing the same decision situation. So you might think that, if anything, the argument for the Wager becomes stronger if the universe is infinite. 
But the infinite case gives rise to a host of further complications, and we need to introduce additional principles to be able to compare worlds that contain infinite amounts of value, or to be able to compare actions in such worlds. We will not be able to undertake a complete survey of proposed principles so, instead, we will merely note that the argument goes through on at least one leading account of infinite ethics. Our first problem: In infinite worlds, can we even say that EDT recommends one-boxing over two-boxing in Moral Newcomb ? If you one-box, you obtain strong evidence that infinitely many other agents will receive ten doses of the cure. But if you two-box, you obtain strong evidence that infinitely many other agents will receive one dose of the cure. In standard cardinal arithmetic, the total number of doses will be the same in both cases, thereby seemingly implying that EDT will be indifferent between the two actions. But that seems like the wrong conclusion. So we should invoke additional principles that allow us to discriminate between different infinite worlds. To keep things simple, let us for the moment only consider agents who are perfect duplicates of you, and let us compare two worlds: in w 1 , you all one-box and save ten lives each, and in w 2 , you all two-box and and save one life each. We will assume that these two worlds are perfectly alike in all other respects. That is, these two worlds share all of the same locations, and for all locations except those corresponding to lives that are saved in one world but not in the other, the value at that location is the same in both worlds. To solve the first problem, we need an account which allows us to say that w 1 is better than w 2 . One such account is that of Vallentyne and Kagan (1997) . Let R 1 , R 2 , R 3 , … be a sequence of larger and larger spacetime regions that grow without bound, centred on the location in which you find yourself. The proposal is as follows: \n Catching Up For any worlds w 1 and w 2 , if there is some n such that for any m > n , the value in region R m of w 1 is greater than the value in region R m of w 2 , then the value of w 1 is greater than that of w 2 . 25 If we assume that there is some n such that for any m > n , the region R m contains more correlated than anti-correlated agents, it follows that w 1 is better than w 2 . This means that although anti-correlated agents may outnumber correlated ones in some finite regions, we can always find larger regions that contain them where the reverse is true. If we are happy to assume that there are more correlated than anti-correlated agents in a finite universe, we should also be happy to make this further assumption in the infinite case. So the Catching Up principle allows us to make sense of the idea that w 1 is better than w 2 , and therefore that EDT recommends one-boxing in Moral Newcomb . The second problem is to make sense of the claim that the decision in Moral Newcomb is much higher stakes for EDT than it is for CDT, and that you should therefore hedge under decision-theoretic uncertainty. That is, we need to be able to say that even if your credence in CDT is significantly higher, you should nevertheless follow the recommendation of EDT. On its own, Catching Up doesn't allow us to say this, because it only tells us how to compare worlds that contain potentially infinite amounts of value. But what we need is a way of comparing actions in such worlds. In light of this, Arntzenius (2014) suggests a modification of the view. 
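Before turning to that modification, here is a toy, finite-horizon illustration (ours, not the authors') of how the Catching Up comparison behaves once correlated and anti-correlated agents are mixed together; the 70/30 mix, the 10,000-agent horizon, and the use of agent counts as region sizes are all arbitrary modelling choices.

import random

# Toy version of Catching Up: agents are listed by distance from you, and
# region R_m consists of the m closest agents. In w1 (you and the correlated
# agents one-box) a correlated agent saves ten lives and an anti-correlated
# agent saves one; in w2 it is the other way around.
random.seed(0)
N = 10_000
agents = ["corr" if random.random() < 0.7 else "anti" for _ in range(N)]

def lives_saved(kind, world):
    one_boxes = (kind == "corr") if world == "w1" else (kind == "anti")
    return 10 if one_boxes else 1

v1 = v2 = 0
last_non_dominated = 0  # largest m at which w1's region value fails to exceed w2's
for m, kind in enumerate(agents, start=1):
    v1 += lives_saved(kind, "w1")
    v2 += lives_saved(kind, "w2")
    if v1 <= v2:
        last_non_dominated = m
print("Within this horizon, w1 beats w2 in every region R_m with m >", last_non_dominated)

If the mix is reversed so that anti-correlated agents predominate, the same loop shows w2 pulling ahead instead, which is one way of seeing that the assumption of more correlated than anti-correlated agents in sufficiently large regions is doing real work.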
Arntzenius's proposal is simple: just replace talk of value in a region with talk of expected value in a region. \n Expected Catching Up For any actions A and B, if there is some n such that for any m > n, the expected value of A in region R m is greater than the expected value of B in region R m, then A is better than B. In order to apply this to the case at hand, let's suppose for the moment that our approach to decision-theoretic uncertainty is to maximise meta expected value, where the meta expected value of an action A is given as MEV(A) = EEV(A) · P(EDT) + CEV(A) · P(CDT). For EDT, the stakes will keep growing without bound as we consider larger and larger regions. By contrast, for CDT the stakes will remain the same in any finite region. Therefore, we will be able to find an n such that for any m > n, the meta expected value in R m of one-boxing is greater than that of two-boxing, and thereby conclude that one-boxing is rational. But importantly, this conclusion does not depend on the assumption that maximising meta expected value is the appropriate way to behave under decision-theoretic uncertainty. For any non-zero credence in EDT and any view about how large the difference in stakes must be in order for hedging to be rational given that credence, we will always be able to find a region R n such that for any m > n, the difference in stakes between EDT and CDT is sufficiently large to justify one-boxing. Note, in addition, that this also means that we don't have to rely on any possibly contentious claims about how to do intertheoretic comparisons between EDT and CDT. As long as V CDT = k · V EDT + m for some finite k and m, it follows that we will be able to find an appropriate region R n. So if Expected Catching Up is correct, the Wager goes through in infinite worlds as well. And some other accounts, such as Bostrom's (2011) "hyperreal" approach, would also endorse the Wager. Roughly speaking, by representing the value of infinite worlds using hyperreal numbers, we are able to say that the world in which all correlated agents receive ten doses of the cure is better than the world in which they only receive one dose of the cure, even though both worlds contain infinite amounts of value. And given that hyperreal numbers can be straightforwardly multiplied with probabilities, this approach allows us to make sense of the claim that the stakes are higher for EDT than for CDT. (However, see Arntzenius (2014:49-52) for discussion of some difficulties for the hyperreal approach.) Of course, however, infinite ethics is a fiendish topic, and there is no consensus on what the right view is, because any view faces grave problems. Expected Catching Up, for example, conflicts with the very plausible Pareto principle: \n Pareto For any worlds w 1 and w 2 that contain exactly the same individuals, if every individual has at least as much welfare in w 1 as she does in w 2, then w 1 is at least as good as w 2. If in addition at least one individual has more welfare in w 1 than in w 2, then w 1 is better than w 2. To see this, assume for simplicity that either all correlated agents save ten lives each and all anti-correlated agents save one life each, or vice versa. You can either one-box or two-box. This gives us four worlds: one in which you and all correlated agents one-box, one in which you one-box but your correlated agents two-box, and so forth.
We can now consider permutations of these worlds that contain exactly the same individuals at exactly the same welfare levels, except that all people saved by anti-correlated agents are closer to you in spacetime than any of the people saved by correlated agents. With respect to these permuted worlds, Expected Catching Up implies that two-boxing has greater expected value than one-boxing. In both cases, we calculate expected value by assigning the same probability to a world and its permutation, so if one-boxing has greater expected value in one case but not the other, it follows that not all permuted worlds are equally as good as the worlds they permute. But this is precisely what Pareto rules out. However, as Askell (2018) shows, if one endorses Pareto and some other very plausible assumptions, one will have to accept that there is widespread incomparability between infinite worlds. So we can state our conclusion, with respect to the infinite case, in a restricted form: if one wishes to avoid widespread incomparability between infinite worlds, then one will probably endorse a view that supports the Wager. \n Conclusion We have argued that an altruistic and morally motivated agent who is uncertain between EDT and CDT should in general act in accordance with the former, even if she has greater credence in the latter. To arrive at this conclusion, we first argued that it is rational to hedge in the face of decision-theoretic uncertainty. That is, if the stakes are much higher for one theory than another, and the credences assigned to the two theories aren't very different, then one should act in accordance with the higher-stakes theory. In order to say whether the stakes are higher for one theory rather than another, we need an account of intertheoretic value comparisons. We argued that such comparisons should be made by letting EDT and CDT have the same value function over outcomes. We noted that, given the assumption of altruism, the existence of correlated decision-makers will affect the stakes for EDT but not for CDT. Finally, we argued that for reasonable credence distributions over EDT and CDT, there will be sufficiently many correlated decision-makers so as to make it rational in general to follow the recommendation of EDT. In the finite case, we appealed to estimates about how many people will exist in the future. In the infinite case, there is guaranteed to be sufficiently many correlated agents, but further complications arise. We showed that on one natural way of solving these, the argument still goes through. If we are altruistic and morally motivated agents, we should mostly follow EDT.
\n Equal Amounts In front of you are two boxes, A and B. You can choose either B only, or both A and B. Box A is guaranteed to contain $1M, whereas box B may or may not contain $1M. A perfectly reliable predictor has made a prediction about your decision. If she predicted that you would take both boxes, she left box B empty. If she instead predicted that you would take box B only, she put $1M in that box. The possible payoffs are as follows: \n Money in both | Money in one only \n Take one box: $1M | $0 \n Take both boxes: $2M | $1M \n In this case, the evidential expected value of taking both boxes is the same as that of taking one box only, whereas the causal expected value of taking both boxes is $1M greater than that of taking one box only. Suppose now that you have 90% credence in EDT and only 10% credence in CDT. From the perspective of EDT, you have nothing to lose by two-boxing, whereas from the perspective of CDT, you have $1M to lose by one-boxing. In short, two-boxing state-wise (or 'theory-wise') dominates one-boxing. Thus it seems clear that you should two-box, in at least some sense of 'should'. In Equal Amounts, EDT is indifferent between the two actions and CDT prefers one over the other. But we can also construct a case where CDT is indifferent between the two actions and EDT prefers one over the other. Consider: \n Empty Box In front of you are two boxes, A and B. You can choose either B only, or both A and B. Box A is guaranteed to be empty, whereas box B may or may not contain $1M. A perfectly reliable predictor has made a prediction about your decision. If she predicted that you would take both boxes, she left box B empty. If she instead predicted that you would take box B only, she put the million in that box.
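The two verdicts just described can be checked mechanically. The sketch below is ours, not the authors'; it again assumes a perfect predictor and an arbitrary 50/50 causal prior over the two possible predictions, on which neither CDT comparison depends.

# Equal Amounts: box A holds $1M; Empty Box: box A is empty. In both cases
# box B holds $1M just in case one-boxing was predicted.

def payoff(case, action, predicted_one_box):
    box_a = {"equal_amounts": 1_000_000, "empty_box": 0}[case]
    box_b = 1_000_000 if predicted_one_box else 0
    return box_b if action == "one_box" else box_a + box_b

def eev(case, action):
    # Perfect predictor: the prediction matches whatever you actually do.
    return payoff(case, action, predicted_one_box=(action == "one_box"))

def cev(case, action, q=0.5):
    # The prediction is causally fixed; average over a prior q on "one-box".
    return q * payoff(case, action, True) + (1 - q) * payoff(case, action, False)

for case in ("equal_amounts", "empty_box"):
    print(case,
          "| EDT stakes:", eev(case, "one_box") - eev(case, "two_box"),
          "| CDT stakes:", cev(case, "two_box") - cev(case, "one_box"))
# EDT is indifferent in Equal Amounts (stakes 0) while CDT's stakes are $1M;
# in Empty Box it is the reverse.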
\n\t\t\t The simplifying assumption of a perfect predictor is clearly unrealistic, but it would be easy to set up the numbers so that the same implications follow by merely assuming a good predictor instead.2 More precisely, CDT tells you to perform the action with the highest causal expected value (CEV), where the CEV of an action A is the sum product of the value of each outcome and the probability that A causes that outcome. By contrast, EDT tells you to perform the action with the highest evidential expected value (EEV), where the EEV of an action A is the sum product of the value of each outcome and the probability of that outcome on the assumption that A is performed. \n\t\t\t For discussions of different senses of 'should' in the context of moral uncertainty, see for example MacAskill, Bykvist and Ord (2019, chapter 1) and Sepielli (2010, chapter 1). \n\t\t\t The dominance argument above can be seen as a special case of such stakes sensitivity, where there's something at stake for one theory but not for the other. \n\t\t\t A preliminary survey using Amazon Mechanical Turk (N = 331) found that when the possible contents of the two boxes were $3.00 and $0.05, 74% of participants one-boxed, whereas when the possible contents were $2.55 and $0.45, only 51% did so. On the other hand, there was no statistically significant difference between the second case and the case in which the possible contents were $2.25 and $0.75 (48% one-boxers) ([NAME REDACTED]). \n\t\t\t See Sepielli (2013) and MacAskill, Ord, and Bykvist (forthcoming) for further discussion. \n\t\t\t For discussion of intertheoretic comparisons in the case of moral uncertainty, see Sepielli (2009 Sepielli ( , 2010, MacAskill (2014 , chapter 4), Tarsney (2018 ), and MacAskill, Ord and Bykvist (2019 . \n\t\t\t More generally,Joyce (1999:178) concedes that Jeffrey's (1983) evidential decision theory has the correct account of value (\"desirability\") and argues that causal expected utility is simply desirability from the perspective of the supposition that an act will be performed. The same sentiment is expressed byBradley (2017:170). Again, these claims indicate that an agent's value function should not depend on which decision theory she takes to be correct. \n\t\t\t By contrast, in the case where V CDT = V EDT + m , the choice of m will not affect whether or not hedging is rational.17 To get a sense for what amplified moral theories might look like, consider the following case. Suppose you initially believe that, morally speaking, humans matter much more than other animals, so that one unit of human welfare is a hundred times more valuable than one unit of animal welfare. Later, however, you become convinced that animals matter just as much as humans. Now, intuitively there are two ways in which you could become convinced of this: you could come to believe that animals matter much more than you initially thought, or you could come to believe that humans matter much less than you initially thought. The two resulting theories would have the same cardinal structure, yet one of them would regard both human and animal welfare as more valuable than the other(MacAskill 2014:138). Whatever one thinks of this, it seems much more difficult to tell a similar story for the case of decision-theoretic value functions. \n\t\t\t This is what MacAskill (2014) concludes in his discussion of amplification in the context of moral uncertainty. 
\n\t\t\t Perhaps the most relevant existing result is Riedener's (forthcoming) representation theorem for axiological uncertainty.However, the framework he uses is not able to capture the difference between evidential and causal decision theory. \n\t\t\t See for example Hedden (2016) and Cotton-Barratt, MacAskill and Ord (forthcoming). \n\t\t\t Moreover, if you object to the argument on empirical grounds alone, you still have to accept the surprising implication that beliefs about distant future agents are relevant for how you should act under decision-theoretic uncertainty. \n\t\t\t You might object that perhaps there are also anti-correlated decision makers out there, who tend to one-box when you two box. If there are as many anti-correlated decision-makers as there correlated ones, this would effectively cancel things out for EDT. If there are sufficiently many more anti-correlated than correlated ones, EDT would recommend two-boxing rather than one-boxing. Our argument assumes that there are many more correlated decision-makers than there are anti-correlated ones. But unless you have some particular reason to think that you are special, this seems like a standard inductive inference. Finally, if you remain unconvinced of the claim that there are more correlated than anti-correlated decision makers, you shouldn't believe that there are equally many decision makers of both types, but rather that there are many more anti-correlated ones. After all, it would be a rather striking coincidence if the two types of decision makers were evenly balanced. But if there are more anti-correlated than correlated decision-makers, then our argument would support the different but similarly startling conclusion that even if you're virtually certain of EDT, you should nevertheless two-box in the Moral Newcomb case. \n\t\t\t If one believes that there are equally many correlated and anti-correlated agents (counting oneself among the correlated ones), then EDT will be indifferent between the two options, and one's decision will therefore be driven by CDT. But such symmetry should be regarded as extremely unlikely. \n\t\t\t See Garriga and Vilenkin (2001) and Knobe, Olum and Vilenkin (2006) for discussion of these cosmological theories. \n\t\t\t This is roughly equivalent to the SBI2 proposal ofVallentyne and Kagan (1997:14), skipping over some technical details.This proposal assumes that it makes sense to speak of the same location in the two worlds being compared. In the present case, this assumption will be satisfied. \n\t\t\t Another class of theories that will also endorse the Wager are those that discount value by distance, at least provided that the discount rate is not steep enough to lower the stakes for EDT to the point where hedging no longer favours one-boxing, although it's fair to say that such theories are not regarded as leading contenders.", "date_published": "n/a", "url": "n/a", "filename": "MacAskill_et_al_Evidentialist_Wager.tei.xml", "abstract": "Suppose that an altruistic and morally motivated agent who is uncertain between evidential decision theory (EDT) and causal decision theory (CDT) finds herself in a situation in which the two theories give conflicting verdicts. We argue that even if she has significantly higher credence in CDT, she should nevertheless act in accordance with EDT. First, we claim that that the appropriate response to normative uncertainty is to hedge one's bets. 
That is, if the stakes are much higher on one theory than another, and the credences you assign to each of these theories aren't very different, then it's appropriate to choose the option which performs best on the high-stakes theory. Second, we show that, given the assumption of altruism, the existence of correlated decision-makers will increase the stakes for EDT but leave the stakes for CDT unaffected. Together these two claims imply that whenever there are sufficiently many correlated agents, the appropriate response is to act in accordance with EDT.", "id": "852c2f28a5ae0f5ff2bec8bbcc7f40c9"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Seth D Baum"], "title": "On the promotion of safe and socially beneficial artificial intelligence", "text": "Introduction The challenge of building technologies that are safe and beneficial for society is really two challenges in one. There is the technical challenge of developing safe and beneficial technology designs, and there is the social challenge of ensuring that such designs are used. The two challenges are interrelated. Motivating technologists to pursue safe and beneficial designs is itself a social challenge. Furthermore, motivating people to use safe and beneficial designs is made easier when the designs also have other attractive features such as low cost and ease of use; creating these features is a technical challenge. This paper is concerned with the social challenge. Specifically, the paper examines a range of approaches to motivating technologists to pursue safe and beneficial technology designs. The paper focuses on artificial intelligence (AI) technologies, including both near-term AI and the proposed future ''strong'' or ''superintelligent'' AI that some posit could bring extreme social benefits or harms depending on its design. Much of the paper's discussion also applies to other technologies. That AI has significant social impacts is now beyond question. AI is now being used in finance, medicine, military, transportation, and a range of other critical sectors. The impact is likely to grow over time as new technologies are adopted, such as autonomous vehicles and lethal autonomous weapons (unless the latter are banned or heavily restricted). The prospects for strong AI are controversial; this paper takes the position that the stakes are sufficiently high that it warrants careful attention even if the probability of achieving it appears to be low. Regardless, while the paper is motivated in part by the risk of strong AI, the insights are more general. 1 For brevity, the paper uses the term ''beneficial AI'' to refer to AI that is safe and beneficial for society. It also uses the term ''promoting beneficial AI'' to refer to efforts to encourage technologists to design and build beneficial AI, or to have them avoid designing and building AI that is not beneficial. The technologists include AI researchers/ designers/developers (the paper uses these terms more or less interchangeably) as well as adjacent personnel in management, business development, etc. The paper's implicit value judgment is that AI should be built so as to have net benefits for the whole of society-or, in the face of uncertainty, net expected benefits. This is to say that AI should not be built just for the sake of making it more capable or more intellectually interesting. Also, AI should not be built for the benefit of its builders if this comes at the expense of society as a whole. 
These positions may seem to cut against ideals of academic freedom, intellectual progress, and capitalist entrepreneurship. The paper takes the position that these ideals are only worth pursuing to the extent that doing so benefits society. Were it the case that the field of AI was already focused on beneficial design, efforts to promote it would be unnecessary. Unfortunately, this is not the case. The field is largely focused on building systems that are more capable, regardless of whether this capability is used for social good. This tendency and the need to shift it are articulated, for example, by distinguished AI researcher Stuart Russell: I think the right approach is to build the issue [beneficial AI] directly into how practitioners define what they do. No one in civil engineering talks about ''building bridges that don't fall down.'' They just call it ''building bridges.'' Essentially all fusion researchers work on containment as a matter of course; uncontained fusion reactions just aren't useful. Right now we have to say ''AI that is probably beneficial,'' but eventually that will just be called ''AI.'' [We must] redirect the field away from its current goal of building pure intelligence for its own sake, regardless of the associated objectives and their consequences (Bohannon 2015:252) . This paper takes on the challenge of how to shift the AI field toward greater emphasis on social impacts. The paper reviews and critiques existing proposals for promoting beneficial AI and lays out a wider portfolio of techniques. A core criticism is that existing proposals neglect human psychology: They seek to influence AI researchers without thinking carefully about how AI researchers are influenced. Neglect of human psychology limits the portfolio of techniques that get considered for promoting beneficial AI and reduces the effectiveness of those techniques that are considered. In some cases, measures taken in ignorance of human psychology can even backfire, resulting in less beneficial AI than would have existed without any measures taken. Broadly speaking, there are two types of measures for promoting beneficial AI. Extrinsic measures are imposed on AI designers from the outside so that they adopt beneficial designs even if they do not want to. These measures include constraints that require or forbid certain designs, incentives to encourage or discourage certain designs, and compliance measures to make sure that constraints or incentives are being followed. Intrinsic measures are cultivated within AI designers so that they want to adopt beneficial designs. These measures include the cultivation of social norms and the framing of communications. There can also be intrinsic effects of extrinsic measures, such as when a technology ban sparks backlash, making designers less interested in adopting beneficial designs. Extrinsic and intrinsic measures are discussed in Sects. 2 and 3, respectively. 2 Prior discussions of the promotion of beneficial AI focus almost exclusively on extrinsic measures. 3 However, both types of measures can help. Indeed, strategies based purely on extrinsic measures run a significant risk of having no net effect or even being counterproductive. As this paper discusses, the success of extrinsic measures often depends heavily on intrinsic factors. Meanwhile, pure intrinsic strategies can be quite effective, as can hybrid extrinsicintrinsic strategies. The bottom line is that the promotion of beneficial AI demands attention to human psychology. 
\n Extrinsic measures \n Constraints Constraints are perhaps the simplest means of promoting beneficial AI, and the most simplistic. The logic is direct: If a design feature is beneficial, require it; if it is harmful, ban it. A ban on dangerous AI technologies is implicit in Joy's (2000) call for relinquishment of dangerous AI, and it is explicit in other work (Posner 2004; Wilson 2013; Yampolskiy and Fox 2013) . Requirements for beneficial AI designs are less common in discussions of AI. Requirements could be used to insist that AI developers adopt certain beneficial designs such as verification, validity, security, and control (Russell et al. 2015) and avoiding negative side effects, avoiding reward hacking, scalable oversight, safe exploration, and robustness to distributional shift (Amodei et al. 2016) . When constraints work, they guarantee that AI designs are beneficial. However, they also limit the freedom and flexibility of AI designers. This can provoke backlash by AI designers, which is one example of an intrinsic effect of extrinsic measures (Sect. 2.4). Even without backlash, enacting constraints can require extensive institutional and political changes, which makes them difficult to implement. Constraints pose other challenges as well. One is that unless they are carefully designed, they can unwittingly constrain the wrong features, resulting in AI that is less beneficial. Designing successful AI constraints can thus require close interaction between AI experts and policy makers. A related issue is that constraints may need to be constantly updated as AI technology evolves. An AI design attribute that was harmful in early AI may be beneficial in later AI, and vice versa. New design attributes will also emerge; these could merit new constraints. One potential solution is to phrase constraints in more general terms (Moses 2007) ; for AI, this could mean requiring AI designers to select the most beneficial available design. Such an approach makes constraints more durable as AI technology evolves, but it comes at the expense of making it more difficult to verify compliance. \n Incentives Incentives are the primary extrinsic alternative to constraints. Unlike constraints, incentives let AI developers keep the freedom to pursue whatever designs they desire. Incentives act by changing the rewards or penalties for specific designs, so as to push developers in different design directions. The AI literature has focused mainly on monetary incentives, such as by offering funding for beneficial AI research (McGinnis 2010) or by making AI companies pay compensation when found liable for the consequences of harmful AI (Gurney 2013 ). However, incentives can also take on other forms, such as social praise/scorn or professional advancement/sanction for developing beneficial/harmful AI. Incentives hold several advantages over constraints. By giving AI designers more freedom, they are less likely to provoke backlash, which can make them easier to implement. 4 Policy makers can also avoid the need to identify beneficial design attributes by applying incentives to completed technologies, as in liability schemes (which impose penalties for AIs that turn out to be harmful) and in prize competitions (which can offer rewards for AIs that turn out to be beneficial). 5 The core disadvantage of incentives is that they do not guarantee that beneficial AI designs would be chosen. An AI developer could simply choose to forgo the reward or pay the penalty and continue to develop harmful AI. 
The logical response to this is to strengthen the incentive, though this can provoke more backlash and can even erode the distinction between incentives and constraints. Indeed, a constraint could be defined as an incentive with an infinite or maximal reward/penalty. \n Compliance Constraints and incentives are generally built on the premise that AI designers do not want to choose beneficial designs. Otherwise, they would not need to be constrained or incentivized. AI designers thus have reason to avoid complying with the constraint or the incentive. When this is the case, mechanisms for achieving compliance are needed, including mechanisms for monitoring for noncompliance and mechanisms for enforcing penalties for noncompliance. A simple approach to monitoring is to require AI groups to submit research proposals to review boards prior to conducting the research. AI review boards could be analogous to the review boards that already exist at many universities and other institutions for reviewing medical and social science research (Yampolskiy and Fox 2013) . Existing review boards are focused mainly on harms that could be caused by the conduct of the research, in particular through abuse of human research subjects. AI review boards would need an expanded scope that includes the societal impacts of the products of research. Such an expansion would be in line with a more general expansion of research ethics to include ethical assumptions embedded within the research (such as ethical positions implicit in AI objective functions) and ethical aspects of the societal impacts of research (Schienke et al. 2009 (Schienke et al. , 2011 . One challenge for the review boards proposal is that some AI groups may not be at institutions that have review boards and thus could go undetected. Hard-to-monitor groups can include private companies, especially startup companies, and groups in unregulated countries. Indeed, there is some concern that national AI regulations could simply push AI research to unregulated countries. This problem can be addressed via international AI treaties (Posner 2004; Wilson 2013) , though this is easier said than done. Another approach in some AI monitoring proposals is to implement a draconian mass surveillance regime in order to find any harmful AI group wherever they are (e.g., Shulman 2009). Suffice to say, such surveillance poses extreme problems for privacy, intellectual property, and trustful geopolitical relations. It is a downside of extrinsic measures that such problematic surveillance mechanisms would even be considered. If monitoring succeeds and harmful AI groups are identified, the next step is to enforce whatever penalty is to be applied. 6 Enforcement should in general be less of a challenge than monitoring because AI groups have limited means of resisting penalties. Government penalties can be imposed through the threat or application of force. Institutional penalties can be imposed via the threat or application of measures such as firing noncompliant personnel. These sorts of actions could succeed at achieving compliance to extrinsic measures, but, in addition to being intrinsically regrettable (i.e., one should not want AI developers to lose their jobs or suffer physical harm), they can also alienate AI developers, provoke backlash, and motivate them to relocate to unregulated places. 
The net effect of the typical extrinsic measure is to unwittingly create an antagonistic relationship between AI developers and those who seek beneficial AI, which makes beneficial AI more difficult to achieve. The essential solution to this predicament is to consider intrinsic factors, i.e., the psychology of AI developers. \n Intrinsic aspects of extrinsic measures Consider two different extrinsic measures: a ban on flag burning and a requirement that dog owners cleanup after their dogs. Flag burning is legal in several countries, including the USA. The USA has repeatedly considered a constitutional amendment to ban flag burning. Such a ban has never passed, but here is one analysis of what would happen if it did: Few people have burned the American flag in recent years, and it is reasonable to suppose that a constitutional amendment making it possible to criminalize flag burning would have among its principal consequences a dramatic increase in annual acts of flag burning. In fact, adopting a constitutional amendment may be the best possible way to promote the incidence of flag burning (Sunstein 1996 (Sunstein :2023 . Why would a ban on flag burning increase the rate of flag burning? One potential mechanism is that the ban would draw attention to flag burning, which is otherwise something that not many people think about. Some portion of people who think about it may then go on to do it. Another potential mechanism is that the ban changes the social meaning of flag burning. Without the ban, flag burning is seen as distasteful and anti-patriotic, whereas with the ban, flag burning becomes a patriotic rebellion against a bad law. The story of dog cleanup is exactly the opposite. The following describes the effect of dog cleanup laws in Berkeley. Similar effects have been observed in other locations, such as New York City (Krantz et al. 2008) . After the Berkeley town council enacted an ordinance requiring owners to cleanup after their dogs, the sidewalks became much cleaner, even though officials never issued citations for breaking the law. The law apparently tipped the balance in favor of informal enforcement. Citizens became more aggressive about complaining to inconsiderate dog owners, and, anticipating this fact, dog owners became more considerate (Cooter 2000:11) . The dog cleanup story is notable because it achieved positive outcomes without enforcing compliance. There was no draconian surveillance, and no need to worry about dog owners relocating to places that lacked cleanup laws. Instead, the law prompted dog owners and their neighbors to police themselves. The point of the comparison between flag burning and dog cleanup is that people can react in different ways to different extrinsic measures and that this can significantly affect outcomes. Indeed, how people react can be the difference between negative outcomes (i.e., the extrinsic measure is counterproductive) and positive outcomes. These two cases are directly applicable to extrinsic measures for promoting beneficial AI. In short, extrinsic measures for promoting beneficial AI should strive to be like dog cleanup, not like flag burning. This means that extrinsic measures should aim to be considered desirable to AI developers-measures should be something that AI developers would want to comply with, not something they would want to push back against. 
In order to figure out what the effect of an extrinsic measure would be, it is necessary to consider not just the extrinsic measure itself, but also the intrinsic factors, i.e., the social psychology of AI communities. A fuller accounting of intrinsic factors is presented in Sect. 3, but some intrinsic factors that are specific to extrinsic measures are worth noting here. One recurrent finding is that monetary incentives can reduce intrinsic motivation (Deci 1971; Vohs et al. 2006). This means that once the money is gone, people become less motivated to perform some task than they would have been if there never was any money. In contrast, social praise and encouragement can increase intrinsic motivation (Deci 1971). Finally, carefully designed extrinsic measures can use a psychological phenomenon called cognitive dissonance to increase people's intrinsic motivation for a given type of activity (Dickerson et al. 1992; Sect. 3.6). \n Intrinsic factors and intrinsic measures This section presents a variety of intrinsic phenomena of relevance to promoting beneficial AI. Some of the phenomena are dedicated intrinsic measures for promoting beneficial AI, and some are factors that can play a role in a range of intrinsic and extrinsic measures. All of the phenomena are oriented toward motivating AI developers to want to pursue beneficial designs. \n Social context and social meaning People often behave differently depending on the social setting or context that they are in. For example, some people go to the library to study because the presence of other studious people compels them to focus more, whereas if they stayed at home, they would let themselves get distracted. Efforts to promote beneficial AI can likewise be more effective in certain social contexts. First, some beneficial AI measures benefit from cooperation across AI groups in order to avoid some groups choosing harmful designs that give them a competitive advantage. Research in other contexts has found that it is often easier to achieve cooperation when people are together in a group than when they are in isolation (Krantz et al. 2008). Therefore, workshops, conferences, and other meetings, or even group phone calls, could all be more effective at achieving cooperation among AI groups than private conversations or written statements that would be read in private. Second, promoting beneficial AI can be more effective in social contexts where beneficial AI is considered desirable. Any given AI researcher is more likely to support beneficial AI in a group of people who openly support it than when alone or in a group of people who do not openly support it. Support for beneficial AI can thus be expanded by creating and expanding groups of open beneficial AI supporters. Openness is important: other people will not be influenced by the group's support for beneficial AI if they are not in any way aware of this support. Related to social context is the concept of social meaning. An act or idea can have a different meaning depending on the social context. For example, the act of flag burning can have an anti-patriotic meaning if it is not banned or a patriotic meaning if it is banned (Sect. 2.4). Efforts to promote beneficial AI should aim for it to have a positive social meaning. This can be accomplished, for example, by testing prospective measures using focus groups of AI researchers in order to gauge their reactions. \n Social norms Norms are that which is considered normal.
Social norms are norms held by groups of people about the normal behaviors of people in that group, including which behaviors normally are practiced ("descriptive norms") and which behaviors normally should be practiced ("injunctive norms") (Lapinski and Rimal 2005). Norms can vary from group to group and place to place. For example, in some places it is normal for pedestrians to cross the street whenever it is safe, whereas in other places it is normal to wait for the streetlight to turn green. Norms can also change over time. For example, slavery was once considered normal throughout much of the world, but this norm has reversed in many places. Specific social contexts can also activate certain social norms. For example, the same group of people may show different norms in a library than in a nightclub. Beneficial AI can be a norm, i.e., concern for social impacts can be considered normal among AI developers. Beneficial AI as a social norm is implicitly at the heart of Stuart Russell's call for the AI community to abandon "its current goal of building pure intelligence for its own sake, regardless of the associated objectives and their consequences" and instead "build the issue [beneficial AI] directly into how practitioners define what they do" (Bohannon 2015:252). It is difficult to overstate the importance of social norms for beneficial AI. If beneficial AI is considered normal, then it will be easier to achieve compliance with extrinsic measures and easier to succeed with intrinsic measures, and there will be less need for either sort of measure in the first place, because many AI researchers will already be pursuing beneficial designs. How can one go about shifting social norms in AI? Answering this question would benefit from dedicated research on AI social norms, but some insights can be gained from other issues. For example, Posner (2000:1784-1785) lists several ways in which social norms for paying taxes can be strengthened, including showing that other people also pay taxes, creating social sanctions for not paying taxes, 7 and reminding people of their civic obligation to pay taxes. Schultz et al. (2007) find that descriptive norms messages can reduce deviance but can also have a counterproductive "boomerang" effect for people with better-than-normal behavior, and that this effect can be attenuated with injunctive norms messages of social approval for good behavior. These approaches could be adapted for promoting beneficial AI by showing that other AI researchers also support it, creating social sanctions for those who do not support it and social approval for those who do, and cultivating a sense of duty for AI researchers to attend to the social impacts of AI. \n Messengers and allies In promoting beneficial AI, it is not just what is said that matters, but also who says it. This is because people interpret meanings differently depending on who is conveying a message. This holds for AI researchers just as much as it does for anyone else. The fact that this happens cuts against the scientific ideal of objectivity, but scientists are humans too, and try as we might to avoid it, the identity of the messenger still matters for how we react to messages. One important class of messenger is the fellow AI researcher.
Prior research has found that when conveying messages about ethics and social impacts to young scientists (e.g., graduate students or postdocs), it is important that the messages be delivered by established researchers in that field instead of by outside ethics professionals (Schienke et al. 2009, 2011). Using in-field researchers shows that ethics and social impacts are something that "we" (i.e., people in the field) care about, and not just something that "they" (i.e., people outside the field) want "us" to care about. This reflects a more general tendency for people to respond better to messages from other people in their "in-group." Thus, to the extent possible, messages should come from respected AI researchers. The quotes in this paper from Stuart Russell offer one example of this. Sometimes, using "out-group" messengers can also succeed. This can occur when the out-group is seen as having some sort of high status. For example, funders, institutional leadership, and policy makers can fit this role because they have some control over AI researchers' professional success and some credentials for setting social norms and research directions. Celebrities-including academic and business celebrities like Stephen Hawking and Bill Gates-can also fit this role because they can be perceived as successful, important, and influential. It is important that these people deliver thoughtful messages so as not to come off as "ignorant blowhards", but when they do deliver thoughtful messages, their influence can be substantial. As AI becomes more widely used across society, new out-group allies will also emerge. For example, the automotive industry is currently applying AI to autonomous vehicles. Automobiles must be safe; otherwise they will not sell, and manufacturers can face steep liability claims. The automotive industry likewise has a safety culture that is currently pushing back against the AI culture of rapid product development and post-launch debugging. As Ford CEO Mark Fields put it in a recent interview, "You can't hit control-alt-delete when you're going 70 miles an hour" (Griffith 2016). Insofar as AI researchers would like the business opportunities of autonomous vehicles, they may be motivated to listen to the safety messages from messengers like Fields. Another important potential ally could be found in militaries. This may seem surprising, since militaries are associated with violence and potential misuse of AI. However, militaries can be influential to AI researchers because they provide extensive AI research funding. Furthermore, militaries can be more safety conscious than is commonly believed. One study of AI researcher opinion found that a slight majority viewed the US military as the most likely institution to produce harmful AI, but that "experts who estimated that the US military scenario is relatively safe noted that the US military faces strong moral constraints, has experience handling security issues, and is very reluctant to develop technologies that may backfire (such as biological weapons)" (Baum et al. 2011:193). For these reasons, military officials will often be motivated to promote beneficial AI. For their messages to resonate with AI researchers, they must achieve trust, which could be difficult given AI researchers' negative perceptions of the military. If trust can be achieved, then the military could offer another powerful ally in efforts to promote beneficial AI.
\n Framing To frame is to present a message in a certain way. Framing is a matter not of what is said but of how it is said. Skillful communication frames messages to achieve certain effects for certain audiences. For example, climate change is commonly framed as an environmental issue, which resonates with liberals more than with conservatives. For conservative audiences, climate change is sometimes framed as a threat to national security or to the economy, or framed as an injustice that compels religious duty (Shome and Marx 2009). Using the wrong frame for a given audience can lead people to reject a cause that they might otherwise support. AI technologies can be framed in a variety of ways as well. Unfortunately, existing messages about beneficial AI are not always framed well. One potentially counterproductive frame is the framing of strong AI as a powerful winner-takes-all technology. This frame is implicit (and sometimes explicit) in discussions of how different AI groups might race to be the first to build strong AI (e.g., Shulman 2009; Armstrong et al. 2016). The problem with this frame is that it makes a supposedly dangerous technology seem desirable. If strong AI is framed as a winner-takes-all race, then AI groups will want to join the race and rush to be the first to win. This is exactly the opposite of what the discussions of strong AI races generally advocate-they postulate (quite reasonably) that the rush to win the race could compel AI groups to skimp on safety measures, thereby increasing the probability of dangerous outcomes. Instead of framing strong AI as a winner-takes-all race, those who are concerned about this technology should frame it as a dangerous and reckless pursuit that would quite likely kill the people who make it. AI groups may have some desire for the power that might accrue to whoever builds strong AI, but they presumably also desire not to be killed in the process. Another potentially counterproductive frame is the framing of AI researchers as people who do not want to pursue beneficial designs. This framing is implicit in the existing literature's emphasis on extrinsic measures: extrinsic measures are used because AI researchers would not want to pursue beneficial designs. In the worst case, heavy-handed extrinsic measures could counterproductively instill a social norm of AI researchers not pursuing beneficial designs. This would be a reaction like "I didn't think I was someone who ignores social impacts, but since you mention it, I guess I am." Light penalties, cooperative relationships, and positive framing of AI researchers could make them more inclined to pursue beneficial designs. Finally, extreme proposals like draconian global surveillance can inadvertently frame efforts to promote beneficial AI as being the problem, not the solution. In other words, they could give the impression that the efforts are misguided and causing more harm than good. The potential for aggressive beneficial AI efforts to be perceived as conspiratorial should not be discounted. Conspiracy theories are already prominent in the perceptions of global warming held by policy makers and the public (Lewandowsky et al. 2015). If AI succumbs to similar conspiracy theories, this could make it more difficult to promote beneficial AI. And even without conspiracy theories, floating extreme proposals can make the beneficial AI cause seem at best out of touch, and at worst outright harmful.
\n Stigmatization Stigmatization is a type of framing oriented toward making an object or an activity feel socially undesirable or even taboo. Stigmatization can be an effective technique for preventing the use of dangerous technologies. For example, stigmatization has been used repeatedly for international arms control, most notably to achieve the 1997 Ottawa Treaty banning landmines and the 2008 Convention on Cluster Munitions banning cluster bombs, and currently to promote nuclear disarmament (Borrie 2014). The experience with landmines and cluster munitions is notable in part because the treaties have wider compliance than they have ratification. That is, some countries (e.g., the USA) have not ratified the treaties, yet they still act in compliance with them, even though they have no legal obligation to do so. The effort to stigmatize landmines and cluster munitions was so successful that the legal requirements are not necessary to achieve the social goal. The international community's experience with stigmatization could be applied to dangerous AI. Stigmatizing dangerous AI can help build support for extrinsic measures such as national regulations and international treaties. However, even in the absence of any extrinsic measures, stigmatization can still lead people to avoid building dangerous AI. A successful stigmatization effort causes people to not want to do the stigmatized activity. Stigmatization thus complements extrinsic measures. Indeed, it is difficult to imagine bans on dangerous AI technologies succeeding without an effective stigmatization effort. In order for stigmatization to work, it must be based on a convincing argument. The landmine and cluster munitions campaigns were so successful because there was a sound moral and legal argument against these weapons, specifically that they cause indiscriminate harm to civilian populations. This argument was crucial for convincing countries to reject the weapons even though they had previously been accustomed to using them. Likewise, any attempts to stigmatize specific AI technologies must be based on some compelling reason. The potential for an AI technology to cause a massive global catastrophe is an example of such a reason. Another challenge of stigmatization is that it can be alienating to those who disagree with it and/or those who are involved in a stigmatized activity. People do not like to think of themselves as being involved in a stigmatized activity and can resent being accused. This is seen, for example, in current debates about nuclear weapons, in which the nuclear-armed states distance themselves from efforts to stigmatize nuclear weapons even while they share an underlying concern about the weapons' catastrophic impacts. Likewise, efforts to stigmatize harmful AI should distinguish between the harms of the AI and the character of the AI designer, so that AI designers know that they are not seen as bad people and that they would be embraced if they switched to beneficial designs. \n Cognitive dissonance Cognitive dissonance occurs when a person holds conflicting beliefs in her or his mind. People typically seek to resolve the dissonance of conflicting beliefs by rejecting one of them. For example, people might reject reports that a seemingly good person committed a terrible harm on the grounds that "He or she couldn't possibly have done that." An example of relevance to beneficial AI is the relation between economic activity and beliefs about climate change.
Around 2008, public belief in the scientific evidence of climate change declined. One explanation for the decline is that the economic recession induced cognitive dissonance. In response to the recession, people want the economy to grow. However, economic activity typically increases greenhouse gas emissions, thereby worsening climate change. This creates dissonance between the belief that there should be more economic growth and the belief that climate change is a problem. Some data suggest that some people handled this dissonance by rejecting the scientific evidence of climate change, even though the evidence itself is about the natural environment, not the economy (Scruggs and Benegal 2012). Similarly, cognitive dissonance could lead AI researchers to reject claims that AI could be harmful. The potential for harmful AI could imply that AI research should be restricted, bringing AI researchers diminished intellectual freedom and business opportunities and, in some cases, even threatening their livelihoods. Just as people may reject the science of climate change when the economy is bad, AI researchers may reject evidence or argument about harmful AI when their welfare is at stake. Like all people, AI researchers can engage in "motivated reasoning", in which they are motivated not by a goal of accuracy but instead by other goals, such as the goal of believing that they are a good person (Kunda 1990). Therefore, in order to improve salience, messaging about beneficial AI should strive to be sympathetic to AI researchers' intellectual and professional interests, and extrinsic and intrinsic measures alike should strive to minimize the intellectual and professional downsides that AI researchers could face. Cognitive dissonance can also be used to promote certain beneficial activities. Dickerson et al. (1992) studied the combined effect of people making a public commitment to conserving water and people being told that they had taken long showers. People took shorter showers only when they both made a public commitment and were told they had taken long showers; if only one of the two conditions was present, they took longer showers. The explanation is that people experienced cognitive dissonance between their public commitment and their self-perception of taking long showers, which they resolved by taking shorter showers. Similar effects have been observed in other contexts, with the strongest effect coming when people make public commitments to, or advocate for, an action and are then privately reminded of their own failures to perform that action (Stone and Fernandez 2008). This model could be adapted for beneficial AI by having AI researchers make public commitments to beneficial AI (such as via professional societies or in classrooms) and then privately informing them that some of their designs are not beneficial. This technique is likely to be more effective if an injunctive social norm for beneficial AI is already in place, because then AI researchers would be motivated to resolve the cognitive dissonance in favor of more beneficial AI. \n Conclusion As the societal impacts of AI continue to increase, it becomes more and more important to promote the development of AI that is safe and beneficial to society-abbreviated throughout this paper as "beneficial AI". Thus far, discussions of how to promote beneficial AI have focused mainly on extrinsic measures that are imposed on AI designers even if they do not want to pursue beneficial AI.
These measures come in the form of constraints and incentives, and they are often accompanied by measures for monitoring and enforcing compliance. Extrinsic measures can be successful at promoting beneficial AI, but they can be difficult to implement and can be resisted by AI developers. The success of extrinsic measures can also depend heavily on intrinsic factors, i.e., on how AI developers react to the measures. If the reaction is favorable, AI developers could comply on their own without external monitoring and enforcement. Alternatively, if the reaction is unfavorable, AI developers could even pursue less beneficial designs than they would if there were no extrinsic measures in place. Meanwhile, a range of dedicated intrinsic measures are available; these encourage AI developers to want to pursue beneficial designs. Social norms can be shifted toward caring about beneficial design. Messengers can be selected, and messages can be framed, to resonate with AI developers and entice them to want to pursue beneficial designs. Harmful AI designs can be stigmatized such that AI developers want to avoid them. AI developers can make commitments to choosing beneficial designs that can, via cognitive dissonance, lead them to do so. This paper draws heavily from research on other issues due to a lack of prior research on intrinsic aspects of beneficial AI. The paper makes especially heavy use of research on environmental issues because these issues have seen robust social and psychological research. Many insights from these other issues apply to beneficial AI, but beneficial AI will inevitably have its own unique characteristics. Therefore, dedicated research on the social psychology of AI research communities is needed to understand the effectiveness of both extrinsic and intrinsic measures. Such research should be included in broader research agendas for beneficial AI. One potential objection to intrinsic measures is that they are unreliable because they depend on each AI researcher to cooperate. There is some truth to this. However, extrinsic measures can also be unreliable-hence the effectiveness of extrinsic measures can depend on intrinsic factors. Regardless, 100% success is an inappropriate goal. The aim of any measure should be to reduce the harms and increase the benefits of AI to society. A measure that does this should be pursued, even if it still leaves some potential for harm or for loss of benefit. Given the stakes involved in AI, all effective measures for promoting beneficial AI should be pursued. Footnotes: 2. On the extrinsic/intrinsic distinction, see, e.g., Markowitz and Sharif (2012:246) and references therein. 3. See Sotala and Yampolskiy (2014, Sect. 3) for a review in the context of strong AI. Russell et al. (2015) also discuss a range of predominantly extrinsic measures. A notable exception to the focus on extrinsic measures is Russell's emphasis on shifting "how practitioners define what they do" (Bohannon 2015:252). 4. Incentives can nonetheless provoke significant backlash.
For example, in the United States, environmentalists have long pursued incentive-based policies such as taxes on pollution in order to appeal to industry interests that do not want constraints, yet industry has been largely successful at avoiding these incentive-based policies. 5. Incentives for completed technologies are less relevant for AIs that could be catastrophic, because there may be no penalty that could adequately compensate for the damages and, in the extreme case, no one alive to process the penalty. 6. Conversely, when beneficial AI groups are identified, rewards are to be applied, though this is less of a challenge because AI groups are likely to seek rewards, not dodge them. 7. Social sanctions are an extrinsic measure, specifically an incentive using a social penalty, though they can also cultivate certain social norms. \n Abstract This paper discusses means for promoting artificial intelligence (AI) that is designed to be safe and beneficial for society (or simply "beneficial AI"). The promotion of beneficial AI is a social challenge because it seeks to motivate AI developers to choose beneficial AI designs. Currently, the AI field is focused mainly on building AIs that are more capable, with little regard to social impacts. Two types of measures are available for encouraging the AI field to shift more toward building beneficial AI. Extrinsic measures impose constraints or incentives on AI researchers to induce them to pursue beneficial AI even if they do not want to. Intrinsic measures encourage AI researchers to want to pursue beneficial AI. Prior research focuses on extrinsic measures, but intrinsic measures are at least as important. Indeed, intrinsic factors can determine the success of extrinsic measures. Efforts to promote beneficial AI must consider intrinsic factors by studying the social psychology of AI research communities. \n How feasible is the rapid development of artificial superintelligence? (Kaj Sotala) \n Introduction Since Turing (1950), the dream of artificial intelligence (AI) research has been the creation of a 'machine that could think'. While the current expert consensus is that the creation of such a system will take several decades if not more (Müller and Bostrom 2016), recent progress in AI has still raised worries about the challenges involved with increasingly capable AI systems (Future of Life Institute 2015, Amodei et al 2016). In addition to the risks posed by near-term developments, there is the possibility of AI systems eventually reaching superhuman levels of intelligence and breaking out of human control (Bostrom 2014). Various research agendas and lists of research priorities have been suggested for managing the challenges that this level of capability would pose to society (Soares and Fallenstein 2014, Russell et al 2015, Amodei et al 2016, Taylor et al 2016). For managing the challenges presented by increasingly capable AI systems, one needs to know how capable those systems might ultimately become, and how quickly.
If AI systems can rapidly achieve strong capabilities, becoming powerful enough to take control of the world before any human can react, then that implies a very different approach than one where AI capabilities develop gradually over many decades, never getting substantially past the human level (Sotala and Yampolskiy 2015) . We might phrase these questions as: 1. How much more capable can AIs become relative to humans? 2. How easily (in terms of time and resources required) could superhuman capability be acquired? Views on these questions vary. Authors such as Bostrom (2014) and Yudkowsky (2008) argue for the possibility of a fast leap in intelligence, with both offering hypothetical example scenarios where AI rapidly acquires a dominant position over humanity. On the other hand, Anderson (2010) and Lawrence (2016) appeal to fundamental limits on predictability-and thus intelligence-posed by the complexity of the environment. The argument for limits of intelligence (Anderson 2010 , Lawrence 2016 ) could be summarized as saying that, past a certain point, increased intelligence is only of limited benefit, for the unpredictability of the environment means that you would have to spend exponentially more resources to evaluate a vastly increasing amount of possibilities. Noise also accumulates over time, reducing the reliability of the available models. For many kinds of predictions, increasing the prediction window would require an exponential increase in the amount of measurements (Martela 2016) . For instance, weather models become increasingly uncertain when projected farther out in time. Forecasters can only access a limited amount of observations relative to the weather system's degrees of freedom, and any initial imprecisions will magnify over time and cause the accuracy to deteriorate (Buizza 2002) . In general, the accuracy of any long-term prediction will be limited by data uncertainty, model uncertainty, and the available computational time. Similar considerations would also apply to attempts to predict things such as the behaviour of human societies. The advantage that even a superhuman intelligence might have over humans may be limited. On the other hand, it is not obvious whether this point of view really is in conflict with the assumption of AI being able to quickly grow to become powerful. There being limits to prediction does not imply that humans would be particularly close to the limits, nor that it would necessarily take a great amount of time to move from sub-human to superhuman capability. This article attempts to consider these questions by considering what we know about expertise and intelligence. After reviewing the relevant research on human expertise, we will discuss its relevance for AI, and consider how AI could improve on humans in two major aspects of thought and expertise, namely simulation and pattern recognition. Our current conclusion is that although the limits to prediction are real, it seems like AI could still substantially improve on human intelligence. The possibility of AI developing significant real-world capabilities in a relatively brief time seems like one that cannot be ruled out. Before examining these questions, we need to consider the definition of 'capability' in more detail, and justify our focus on intelligence as prediction ability. 
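As an aside, the prediction-limit argument sketched above can be illustrated with a toy computation. The sketch below is purely our own illustration and is not drawn from the works cited: it uses the logistic map, a standard example of chaotic dynamics, as a stand-in for an environment such as weather, and shows how a tiny error in the initial measurement grows until the forecast is no better than guesswork.

```python
# Toy illustration (not from the cited works): in a chaotic system, a tiny
# error in the initial measurement grows roughly exponentially, so each
# extra step of lookahead demands far more measurement precision.

def logistic_map(x, r=3.9):
    """One step of the logistic map, a standard example of chaotic dynamics."""
    return r * x * (1.0 - x)

def divergence(x0=0.4, eps=1e-9, steps=60):
    """Track how a perturbation of size eps grows over time."""
    a, b = x0, x0 + eps
    history = []
    for t in range(steps):
        a, b = logistic_map(a), logistic_map(b)
        history.append((t + 1, abs(a - b)))
    return history

if __name__ == "__main__":
    for t, gap in divergence():
        if t % 10 == 0:
            print(f"step {t:3d}: trajectories differ by {gap:.3e}")
```

Because the gap grows roughly exponentially, extending the forecast horizon requires far more than a proportional increase in measurement precision, which is the sense in which the cost of prediction can grow exponentially.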
\n Capability and intelligence as prediction ability Bostrom (2014, p 39) defines a superintelligence as 'any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest'. Additionally, Bostrom (2014, chapter 3) defines three subcategories of a superintelligence. A speed superintelligence thinks faster than humans; a collective superintelligence is composed of many smaller intellects whose overall performance outstrips that of existing cognitive systems; and a quality superintelligence is one that is at least as fast as a human mind, and vastly qualitatively smarter. In a footnote to his original definition, Bostrom notes that this definition of superintelligence can be compared with Legg (2008) , who defines intelligence as 'an agent's ability to achieve goals in a wide range of environments'. This definition, originally from Legg and Hutter (2007a) , draws on a collection of 70 definitions of intelligence (Legg and Hutter 2007b) from various professional groups, dictionaries, psychologists, and AI researchers. Legg and Hutter (2007a) argue that this definition summarizes the essential features in the various surveyed definitions, in that they generally discuss an individual who is interacting with some environment that is not fully known, trying to achieve various goals in that environment, and learning and exploring during that interaction. Some definitions of intelligence list traits which are not explicitly included in this definition; for example, a group statement signed by 52 psychologists (Gottfredson 1997a) includes in intelligence 'the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience'. However, Legg and Hutter (2007a) argue that all of these abilities are ones that allow humans to achieve goals, so are implicitly included in the Legg and Hutter definition. Additionally, Legg and Hutter suggest that their definition is more general, as there could exist intelligences which did not have all of these specific capabilities, but did have alternative capabilities which allowed them to achieve their goals. Legg and Hutter (2007a) offer a formalization of their definition, cast in a reinforcement learning framework. Briefly, the formalization involves an agent which interacts with an environment in discrete timesteps; on each timestep, the agent chooses an action and receives both an observation and a reward. An agent is (universally) intelligent to the extent that it can maximize its reward over the space of all environments drawn from a universal distribution. This definition and formalization is a view of intelligent performance as a learning and prediction problem: an agent is intelligent to the extent that it can learn to predict, using the smallest possible set of observations, which of its actions will deliver the greatest amount of reward in the environment that it is interacting with. Out of Bostrom's (2014) superintelligence subtypes, a mind that was superintelligent under such a view would most likely fall under the category of a quality superintelligence. Some of the examples that Bostrom (2014) offers to illustrate the concept of quality intelligence include nonhuman animals that cannot achieve human cognitive capabilities even when 'intensely trained by human instructors', as well as human deficits such as autism spectrum disorders that may impair e.g. social functioning. 
Implicit in these examples is the notion that nonhuman animals and individuals with cognitive deficits cannot achieve the same level of performance in various domains as unimpaired humans do, even when given the same opportunities to observe and learn about the domains in question. They lack the cognitive capabilities that would allow them to utilize their observations to learn to predict which kinds of actions would provide the greatest success in the relevant domains. Under this view, we can more precisely rephrase our first question, 'how much more capable can AIs become relative to humans', as 'how much better than humans can AIs become in using small amounts of sense data to learn to predict which actions most effectively further their goals'. For the purposes of this discussion, we will also assume that 'predicting which actions most effectively further one's goals' is an accurate characterization of what human expertise (in any given domain) means. As we will discuss in the following section, the foundation of human expertise lies in acquiring the necessary knowledge to instantly see, when faced with some situation, the right course of action for that situation. \n The development of human expertise Ideally, we might turn to theoretical AI research for a precise theory about acquiring cognitive capabilities. Unfortunately AI research is not at this point yet. Instead we will consider the research on human expertise and decision-making. \n Expertise as mental representations There exists a preliminary understanding, if not of the details of human decision-making, then at least the general outline. A picture that emerges from this research is that expertise is about developing the correct mental representations (Klein 1999, Ericsson and Pool 2016) . A mental representation is a very general concept, roughly corresponding to any mental structure forming the content of something that the brain is thinking about (Ericsson and Pool 2016) . Domain-specific mental representations are important because they allow experts to know what something means; know what to expect; know what good performance should feel like; know how to achieve the good performance; know the right goals for a given situation; know the steps necessary for achieving those goals; mentally simulate how something might happen; learn more detailed mental representations for improving their skills (Klein 1999, Ericsson and Pool 2016) . Although good decision-making is often thought of as a careful deliberation of all the possible options, such a type of thinking tends to be typical of novices (Klein 1999) . A novice will have to try to carefully reason their way through to an answer, and will often do poorly regardless, because they do not know what things are relevant to take into account and which ones are not. An expert does not need to-they are experienced enough to instantly know what to do. A specific model of expertise is the recognition-primed decision-making model (Klein 1999) . First, a decision-maker sees some situation, such as a fire for a firefighter or a design problem for an architect. The situation may then be recognized as familiar, such as a typical garage fire. The expectations arising from mental representations also give rise to intuition. As one example, Klein (1999) describes the case of a firefighter lieutenant responding to a kitchen fire in an ordinary one-story residential house. The lieutenant's crew sprayed water on the fire, but contrary to expectations, the water seemed to have little impact. 
Something about the situation seemed wrong to the lieutenant, who ordered his crew out of the house. As soon as they had left the house, the floor where they had been standing collapsed. If the firefighters had not pulled out, they would have fallen down to the fire raging in the basement. The lieutenant, not knowing what had caused him to give the order to withdraw, initially attributed the decision to some form of extra-sensory perception. In a later interview, the lieutenant explained that he did not suspect that the building had a basement, nor that the seat of the fire was under the floor that he and his crew were standing on. However, several of his expectations of a typical kitchen fire were violated by the situation. The lieutenant was wondering why the fire did not react to water as expected, the room was much hotter than he would have expected of a small kitchen fire, and while a fire that hot should have made a great deal of noise, it was very quiet. The mismatch between the expected pattern and the actual situation led to an intuitive feeling of not knowing what was going on, leading to the decision to regroup. This is intuition: an automatic comparison of the situation against existing mental representations of similar situations, guiding decision-making in ways whose reasons are not always consciously available. In an unfamiliar situation, the expert may need to construct a mental simulation of what is going on, how things might have developed to this point, and what effect different actions would have. Had the floor mentioned in the previous example not collapsed, given time the firefighter lieutenant might have been able to put the pieces together and construct a narrative of a fire starting from the basement to explain the discrepancies. For a future-oriented example, a firefighter thinking about how to rescue someone from a difficult spot might mentally simulate where different rescue harnesses might be attached on the person, and whether that would exert dangerous amounts of force on them. Mental representations are necessary for a good simulation, as they let the expert know what things to take into account, what things could plausibly be tried, and what effects they would have. In the example, the firefighter's knowledge allows him to predict that specific ways of attaching the rescue harness would have dangerous consequences, while others are safe. \n Developing mental representations Mental representations are developed through practice. A novice will try out something and see what happens as a result. This gives them a rough mental representation and a prediction of what might happen if they try the same thing again, leading them to try out the same thing again or do something else instead. Practice alone is not enough, however-there also needs to be feedback. Someone may do a practice drill over and over again and think that they are practicing and thus improving-but without some sign of how well that is going, they may just keep repeating the same mistakes over and over (Ericsson and Pool 2016). The importance of quality feedback is worth emphasizing. Skills do not develop unless there is feedback that is conducive to developing better mental representations. In fact, there are entire fields in which experienced practitioners are not much better than novices, because the field does not provide them with enough feedback.
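To make the role of feedback slightly more concrete, the following is a purely illustrative toy model (our own construction, not taken from Klein or from Ericsson and Pool): a learner repeatedly 'practices' estimating a hidden quantity, and only improves when an error signal is available.

```python
import random

# Toy illustration (not from the cited works): an agent repeatedly "practices"
# estimating a hidden target value. With feedback, each attempt nudges the
# estimate toward the target; without feedback, practice changes nothing.

def practice(trials=50, target=7.3, start=0.0, learning_rate=0.2, feedback=True, seed=0):
    rng = random.Random(seed)
    estimate = start
    for _ in range(trials):
        observation = target + rng.gauss(0, 0.5)   # noisy outcome of one attempt
        if feedback:
            # The error signal is what allows the "mental representation" to improve.
            estimate += learning_rate * (observation - estimate)
        # Without feedback the attempt happens, but nothing is learned from it.
    return estimate

if __name__ == "__main__":
    print("with feedback:   ", round(practice(feedback=True), 2))
    print("without feedback:", round(practice(feedback=False), 2))
```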
Shanteau (1992) provides a breakdown of professions according to whether there is agreement on the nature of their practitioners' performance. [Table reprinted from Shanteau, J. (1992), Competence in experts: The role of task characteristics, Organizational Behavior and Human Decision Processes 53, 252-266; not reproduced here.] In analysing why some domains enable the development of genuine expertise and others do not, Shanteau identified a number of considerations that relate to the nature of feedback. In an occupation like weather forecasting, the criteria you use for forecasting are always the same; you will always be facing the same task and can practice it over and over; you get quick feedback on whether your prediction was correct; you can use formal tools to analyse what you predicted would happen and why that prediction did or did not happen; and things can be analysed in objective terms. This allows weather forecasters to develop powerful mental representations that get better and better at making the correct prediction. Contrast this with someone like an intelligence analyst. The analyst may be called upon to analyse very different clues and situations; each of the tasks may be unique, making it harder to know which lessons from previous tasks apply; for many of the analyses, one might never know whether they were right or not; and questions about socio-cultural matters tend to be much more subjective than questions about weather, making objective analysis impossible. In short, for much of the work that the analyst does, there is simply no feedback available to tell whether the analyst has made the right judgment or not. And without feedback, there is no way to improve one's mental representations, and thus expertise. A somewhat different perspective on expertise comes from the heuristics and biases literature, which frequently portrays even experts as being easily mistaken. In contrast, the expertise literature that we have reviewed so far has viewed experts as being typically capable and as having trustworthy intuition. Kahneman and Klein (2009) make an attempt to reconcile the two fields, and come to agree that: • Expert intuition may be trustworthy, if the intuition relates to a 'high-validity' domain and the expert has had a chance to learn the regularities in that domain. • A domain is 'high-validity' if 'there are stable relationships between objectively identifiable cues and subsequent events or between cues and the outcomes of possible actions'. This consensus is in line with what we have covered so far, though it also adds the consideration of validity. One cannot learn mental representations that would predict a domain or dictate the right actions for different situations in a domain, if that domain is simply too complicated or chaotic to be predicted. Kahneman and Klein (2009) provide an illustrative example of a domain being simply too hard to interpret: the question of how the history of the 20th century would have been different if the fertilized eggs that became Hitler, Stalin and Mao had been female. It seems clear that things would have developed very differently, but how exactly? There seems to be no way to know. Meanwhile, practice does help in more predictable domains. A recent meta-analysis (Macnamara et al 2014) on the effects of practice on skill found that the more predictable an activity was, the more practice contributed to performance in that activity.
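The validity point can be illustrated in the same toy spirit (again our own construction, with invented numbers): even with plentiful feedback, a learner cannot become accurate in a domain where the observable cue carries no information about the outcome.

```python
import random

# Toy illustration of the "validity" notion: feedback only builds expertise
# when there is a stable cue-outcome relationship to learn.

def learn_cue_outcome(validity, trials=2000, seed=1):
    """Learn a cue-to-outcome rule from feedback; return accuracy on new cases."""
    rng = random.Random(seed)
    counts = {0: [0, 0], 1: [0, 0]}        # counts[cue] = [times outcome was 0, times it was 1]

    def sample():
        cue = rng.randint(0, 1)
        # In a high-validity domain the cue usually determines the outcome;
        # at validity 0.5 the cue is pure noise.
        outcome = cue if rng.random() < validity else 1 - cue
        return cue, outcome

    for _ in range(trials):                 # practice with feedback
        cue, outcome = sample()
        counts[cue][outcome] += 1

    correct, tests = 0, 2000
    for _ in range(tests):                  # evaluate the learned rule
        cue, outcome = sample()
        prediction = 0 if counts[cue][0] >= counts[cue][1] else 1
        correct += (prediction == outcome)
    return correct / tests

if __name__ == "__main__":
    print("high-validity domain:     ", learn_cue_outcome(validity=0.9))
    print("no-validity (noise) domain:", learn_cue_outcome(validity=0.5))
```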
\n Implications for AI Having reviewed some necessary background, we can now return to the topic of superintelligence capabilities. \n Relevance for AI Similarly to humans, AI systems cannot reach intelligent conclusions by a mere brute-force calculation of every possibility. Rather, an intelligence needs to learn to exploit predictable regularities in the world in order to develop further. All machine learning based systems are built on this principle: they learn models of the world that are in this sense similar to the mental representations that humans learn. However, the models employed by current machine learning systems are much more limited than the mental representations employed by humans (Lake et al 2016). Machine learning systems are also developed for solving problems efficiently on existing computing hardware rather than for being biologically plausible. There is thus reason to expect even future AI systems to employ models which differ in various respects from the mental representations used by humans. As such, we will use the term 'mental representations' in the context of humans, and 'models' when discussing the analogous structures in future AI systems. In a sense, mental representations contain the optimal solutions to the problems at hand (Klein 1999): a human expert will have learned to identify the smallest set of cues that will let them know how to act in a certain situation; their mental representations encode information about how to choose the correct actions using the least amount of thought. In other words, an expert pays attention to exactly the features in the data which are relevant for making the decision, and acts accordingly. An AI's models could use more data and become larger than human mental representations, and identify features which humans might have missed. There is, however, no advantage in using more data than necessary for making the correct decision, so at least a subset of the AI's models is likely to be similar to mental representations in that they encode the smallest set of features of the environment which allow for rapid and correct decision-making in a given context and for a given goal. It is possible that AIs would also come to have models for which this characterization was a poor fit and which were tailored to take better advantage of, e.g., an AI's ability to process more data at a time. We will not examine this more speculative possibility, as for our argument it is unnecessary to consider hypothetical models which are better than human mental representations; we are focused on establishing the possibility that roughly human-like models would already be enough to enable superhuman capability. 2 As with human experts, machine learning also tries to focus its analysis on exactly the right number of cues that will provide the right predictions, ignoring any irrelevant information. Traditional machine learning approaches have relied extensively on feature engineering, a labor-intensive process where humans determine which cues in the data are worth paying attention to. [Footnote 1: Kahneman and Klein do not define what they mean by 'long-term', but geopolitical events up to a year or so away can be predicted with reasonable accuracy, with the accuracy falling towards chance for events 3-5 years away (Tetlock and Gardner 2015, p 5).]
A major reason behind the recent success of deep learning models is their capability for feature learning or representation learning: being able to independently discover high-level features in the data which are worth paying attention to, without (as much) external guidance (Bengio et al 2012). Being able to identify and extract the most important features of the data allows the system to make its decisions based on the smallest set of cues that allows it to reach the right judgment-just as human experts learn to identify the most relevant cues in the situations that they encounter. Finally, the aspect of increasingly detailed mental representations giving an expert a yardstick to compare their performance against (Ericsson and Pool 2016) has an analogue in reinforcement learning methods. In deep reinforcement learning, a deep learning model learns to estimate how valuable a specific state of the world is, after which the system takes actions to move the world towards that state (Mnih et al 2015). Similarly, a human expert comes to learn that specific states (e.g. a certain feeling in the body when diving) are valuable, and can then increasingly orient their behaviour so as to achieve this state. In summary, human experts use mental representations as the building blocks of their expertise, and the models employed by current state-of-the-art AI systems have a number of key similarities to them. As there have been no serious alternative accounts presented of how expertise might work, we will assume that the capabilities of hypothetical superintelligences will depend, at least in part, on them developing the correct models to represent key features of the environment in a similar way as human mental representations do. This paper set out to consider two main questions: 1. How much more capable can AIs become relative to humans? 2. How easily (in terms of time and resources required) could superhuman capability be acquired? Let us now return to these. The argument for AI's predictive capabilities being limited was that there are limits to prediction, and that predicting events ever further forward in time requires exponentially more reasoning power as well as measurement points, quickly becoming intractable. How capable could AI become despite these two points? The components of human expertise might be roughly divided into two: building up a battery of accurate mental representations, and being able to use them in mental simulations. Similarly, approaches to artificial intelligence can roughly be divided into pattern recognition and model-building (Lake et al 2016), depending on whether patterns in data or models of the world are treated as the primary unit of thought. As this kind of distinction seems to emerge both from psychology and from AI research, we will assume that an AI's expertise will also involve acquiring models (or, equivalently, doing pattern recognition) as well as accurately using them in simulations. We will consider these two separately. \n Simulation Potential capability. An interesting look at the potential benefits offered by improved simulation ability comes from Philip Tetlock's Good Judgement Project (GJP), popularized in the book Superforecasting (Tetlock and Gardner 2015). 3
Participating in a contest to forecast the probability of various events, the best GJP participants-the so-called 'superforecasters'-managed to make predictions whose accuracy outperformed those of professional intelligence analysts working with access to classified data. 4 [Footnote 2: The reader may note that the AI possibly using many different kinds of models, some of them human-like and some more advanced, has a parallel in the heterogeneity hypothesis of concepts (Machery 2009, 2010), according to which the mental representations of humans do not form a natural kind and actually consist of many different kinds of mental structures that are used in different situations and for different purposes.] This is particularly interesting as the superforecasters had no particular domain expertise in answering most of the questions. Tetlock and Gardner report the superforecasters' accuracy in terms of Brier score, which is a scale between 0 and 2, with 0.5 indicating random guessing. 5 On this scale, superforecasters had a score of 0.25 at the end of GJP's first year, compared to 0.37 for the other forecasters participating in the project. By the end of the second year, superforecasters had improved their Brier score to 0.07 (Mellers et al 2014). Superforecasters could also project further out in time: their accuracy at making predictions 300 days out was better than the other forecasters' accuracy at making predictions 100 days out. In terms of being on the right side of 50/50, GJP's best wisdom-of-the-crowd algorithms (deriving an overall prediction from the different forecasters' predictions) delivered a correct prediction on 86% of all daily forecasts (Tetlock et al 2014). The superforecasters' success relied on a number of techniques, but a central one was the ability to consider and judge the relevance of a number of factors that might cause a prediction to become true or false. Tetlock and Gardner illustrate this technique by discussing how a superforecaster, Bill Flack, approached the question of whether an investigation of Yasser Arafat's remains would reveal traces of polonium, suggestive of Arafat having been poisoned by Israel. Flack started by considering what it would take for the investigation to reach a particular outcome, and realized that he did not know what the chances were of polonium traces surviving in a body for several years. He first investigated how polonium testing worked, and concluded that enough polonium could in fact survive for it to be found in the testing. Next, Flack considered what could cause polonium to end up in the body. Israel poisoning Arafat could have done it, but so could a Palestinian enemy of Arafat's. There was also the possibility of the body being intentionally contaminated after Arafat's death, by some faction trying to frame Israel for the death. Each possibility made a positive test result more probable, based on how probable those individual possibilities were. Next Flack moved on to investigate what it would take for any of the possibilities to be true. For the case of Israel poisoning Arafat, it required Israel having access to polonium; Israel being willing to take the risk of intentionally poisoning him; and Israel having the means to poison Arafat with the polonium.
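Flack's procedure can be read as an informal application of the law of total probability. The following restatement is our own illustrative sketch, not Tetlock's or Flack's notation, and the hypothesis labels are hypothetical:

```latex
% Illustrative only: notation and decomposition are ours, not Tetlock's or Flack's.
% H_1: Israel poisoned Arafat;  H_2: a Palestinian enemy poisoned him;
% H_3: the remains were contaminated after death;  H_0: none of the above.
\[
  P(\text{polonium found}) \;=\; \sum_i P(\text{polonium found} \mid H_i)\, P(H_i),
\]
\[
  P(H_1) \;\approx\; P(\text{access to polonium}) \times
                     P(\text{willing to poison} \mid \text{access}) \times
                     P(\text{means to administer} \mid \text{access, willing}).
\]
```

Each sub-question Flack researched corresponds to one factor in such a decomposition, and the overall forecast is only as good as the conditional estimates that feed into it.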
These possibilities served as starting points for researching the probability of the 'Israel poisoned Arafat' hypothesis, after which Flack would break down and investigate what it would take for the other hypotheses to be true. Tetlock does not go into detail about the prerequisites for being able to carry out such analysis-other than noting that it is slow and effortful-but there are some considerations that seem like plausible prerequisites. First, a person needs to have enough general knowledge to generate different possibilities for how an event could have come true. Next, they need the ability to analyse and investigate those possibilities further, either personally acquiring the relevant domain knowledge for evaluating their plausibility, or finding a relevant subject matter expert. In this example, Flack familiarized himself with the science of polonium testing until he was satisfied that it would be possible to detect polonium traces from a long time ago. This suggests a general procedure which an AI could also follow in order to predict the likelihood of something in which it does not yet have expertise. An AI that was trying to predict the outcome of some specific question could tap into its existing general knowledge in an attempt to identify relevant causal factors; if it failed to generate them, it could look into existing disciplines which seemed relevant for the question. For each identified possibility, it could branch off a new subprocess to do research in that particular direction, sharing information as necessary with a main process whose purpose was to integrate the insights derived from all the relevant searches. Such a capability for several parallel streams of attention could provide a major advantage. A human researcher or forecaster who branches off to do research on a subquestion needs to make sure that they do not lose track of the big picture, and needs to have an idea of whether they are making meaningful progress on that subquestion and whether it would be better to devote attention to something else instead. To the extent that there can be several parallel streams of attention, these issues can be alleviated, with a main stream focusing on the overall question and substreams on specific subpossibilities. How much could this improve on human forecasters? Forecasters performed better when they were placed on teams where they shared information with each other, which similarly allowed an extent of parallelism in prediction-making, in that different forecasters could pursue their own angles and directions in exploring the problem. The differences between individual forecasters and teams of forecasters with comparable levels of training ranged between 0.05 and 0.10 Brier points at the end of the first year, and between 0.02 and 0.08 Brier points at the end of the second year (Mellers et al 2014). [Footnote 4: This claim needs to be treated with some caution, as no official information about the intelligence analysts' performance has been published. The claim is based on Washington Post editor David Ignatius writing that 'a participant in the project' had told him that superforecasters had 'performed about 30% better than the average for intelligence community analysts who could read intercepts and other secret data' (Ignatius 2013). The intelligence community has neither confirmed nor denied this statement, and Philip Tetlock has stated that he believes it to be true.] [Footnote 5: A version of the scale which ranges between 0 and 1 is also commonly used.]
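For readers unfamiliar with the Brier scores quoted above, the short sketch below computes them on the same 0-to-2 scale used in the GJP discussion; the example forecasts and outcomes are invented purely for illustration.

```python
# Brier score on the 0-2 scale used in the GJP discussion: for a binary
# question, sum the squared errors over both possible outcomes. Always
# forecasting 50% yields 0.5; a perfect forecaster scores 0.

def brier(forecast_p, outcome):
    """forecast_p: predicted probability that the event happens (0..1);
    outcome: 1 if it happened, 0 if it did not."""
    return (forecast_p - outcome) ** 2 + ((1 - forecast_p) - (1 - outcome)) ** 2

def mean_brier(forecasts, outcomes):
    return sum(brier(p, o) for p, o in zip(forecasts, outcomes)) / len(forecasts)

if __name__ == "__main__":
    outcomes = [1, 0, 1, 1, 0]                              # invented resolutions
    print(mean_brier([0.5] * 5, outcomes))                  # 0.5: chance-level guessing
    print(mean_brier([0.9, 0.2, 0.8, 0.7, 0.1], outcomes))  # about 0.08: well calibrated
    print(mean_brier([1.0, 0.0, 1.0, 1.0, 0.0], outcomes))  # 0.0: perfect foresight
```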
In humans, however, it seems likely that the extent of parallelism was constrained by the fact that each forecaster had to independently familiarize themselves with much of the same material, and that their ability to share knowledge with each other was limited by the speed of writing and reading. This suggests a possibility for further improvement. In general, accurate forecasting requires an ability to carry out sophisticated causal modelling of a variety of interacting factors. Tetlock and Gardner emphasize the extent to which superforecaster forums discuss many different 'on the one hand'/'on the other hand' possibilities. In a discussion of whether Saudi Arabia might agree to OPEC production cuts in November 2014, one superforecaster noted that Saudi Arabia had large financial reserves and so could afford to let oil prices run low. On the other hand, he noted, Saudi Arabia needed to raise its social spending to bolster support for the monarchy; then again, Saudi Arabian rulers might view the act of trying to control oil prices as futile. The superforecaster in question concluded that the question 'felt no-ish, 80%'. (The Saudis ended up not supporting production cuts.) This suggests that an AI with sufficient hardware capability could achieve considerable prediction ability through its capacity to explore many different perspectives and causal factors at once. The mental simulations of humans tend to be limited to around three causal factors and six transition states (Klein 1999). The discussion of the superforecasters clearly brought up many more possibilities, and their accuracy suggests a moderate ability to integrate all of those factors together. Yet comments such as 'felt no-ish' suggest that they still could not construct a full-blown simulation in which the various causal factors would have influenced each other based on principled rules which could be inspected, evaluated, and revised based on feedback and accuracy. This seems especially plausible given that Klein speculates that the limits on the size of human simulations come from working memory limitations. AI systems with larger working memory capacities might be able to construct much more detailed simulations. Contemporary computer models can involve simulations with thousands or tens of thousands of variables, though flexibly incorporating diverse models into a single simulation will probably take considerably more memory and computing power than is used in today's models. LIDA is a model of the mind that is inspired by psychological and neuroscientific research and attempts to capture its main mechanisms. We can use LIDA to get a rough example of what having several 'streams of attention' would mean, and how information could be exchanged between them. The purpose of this example is not to suggest that an AI would necessarily work by this mechanism, but merely to make the speculation about streams of attention slightly more grounded in existing theories of how a general intelligence (the human mind) might work. Thus, to the extent that LIDA is correct as a model of human intelligence, and to the extent that this example is correct about LIDA allowing for there to be several attentional streams at the same time, this provides some information about it being possible to have several such streams in minds in general, and about how that might concretely work. LIDA works by means of an understand-attend-act cycle.
In each cycle, low-level sensory information is initially interpreted so as to associate it with higher-level concepts to form a 'percept', which is then sent to a workspace. In the workspace, the percept activates further associations in other memory systems, which are combined with the percept to create a Current Situational Model, an understanding of what is going on at this moment. The entirety of the Current Situational Model is likely to be too complex for the agent to process, so it needs to select a part of it to elevate to the level of conscious attention to be acted upon. This is carried out using 'attention codelets', small pieces of code that attempt to train attention on some particular piece of information, each with their own set of concerns about what is important. Attention codelets with matching concerns form coalitions around what to attend to, competing against other coalitions. Whichever coalition ends up winning the competition will have its chosen part of the Current Situational Model 'become conscious': it is broadcast to the rest of the system, and particularly to the Procedural Memory. The Procedural Memory holds schemes, or templates of different actions that can be taken in different contexts. Schemes which include a context or an action that matches the contents of the conscious broadcast become available as candidates for possible actions. They are copied to the Action Selection mechanism, which chooses a single action to perform. The selected action is further sent to Sensory-Motor Memory, which contains information on how exactly to perform the action. The outcome of taking this action manifests itself as new sensory information, beginning the cognitive cycle anew. Here is a description of how this process - or something like it - might be applied in the case of an AI seeking to predict the outcome of a specific question, such as the 'will Saudi Arabia agree to oil production cuts' question discussed above. The decision to consider this question has been made in an earlier cognitive cycle, and information relevant to it is now available in the inner environment and the Current Situational Model. The concepts of Saudi Arabia and oil production trigger several associations in the AI's memory systems, such as the fact that oil prices will affect Saudi Arabia's financial situation, and that oil prices are also influenced by other factors such as global demand. Two coalitions of attention codelets might form, one focusing on the current financial situation and another on influences on oil prices. In LIDA, these codelets would normally compete, and one of them would win and trigger a specific action, such as a deeper investigation of Saudi Arabia's financial situation. In our hypothetical AI, however, it might be enough that both coalitions manage to exceed some threshold level of success, indicating that both are potentially relevant. In that case, new instances of the Procedural Memory, Action Selection, and Sensory-Motor Memory mechanisms might be initialized, with one coalition sending its contents to the first set of instances and the other to the second. These streams could then independently carry out searches of the information that was deemed relevant, also having their own local Situational Models and Workspaces focusing on content relevant for this search. As they worked, these streams would update the various memory subsystems with the results of their learning, making new associations and attention codelets available to all attentional streams.
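As a rough illustration of the modified cycle just described, the following Python sketch lets every coalition that exceeds an activation threshold start its own processing stream, with all streams writing back to shared memory. This is not the LIDA implementation; the threshold, the coalitions, and the data structures are invented for illustration only:

# Schematic toy rendering of the modified cycle described above: instead of a
# single winning coalition, every coalition exceeding a threshold is given its
# own processing stream, and all streams share what they learn.

THRESHOLD = 0.6

def form_coalitions(situational_model: dict) -> list[dict]:
    # Stand-in for attention codelets grouping around shared concerns.
    return [
        {"focus": "Saudi financial situation", "activation": 0.70},
        {"focus": "influences on oil prices", "activation": 0.65},
        {"focus": "unrelated background detail", "activation": 0.20},
    ]

def run_stream(coalition: dict, memory: dict) -> None:
    # Each stream does its own 'understand-attend-act' work and writes what it
    # learns back to the shared memory systems, visible to all other streams.
    memory[coalition["focus"]] = f"findings about {coalition['focus']}"

shared_memory: dict[str, str] = {}
situational_model = {"question": "Will Saudi Arabia agree to production cuts?"}

streams = [c for c in form_coalitions(situational_model) if c["activation"] >= THRESHOLD]
for coalition in streams:        # could run concurrently; kept sequential here for clarity
    run_stream(coalition, shared_memory)

print(shared_memory)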
Their functioning could be supervised by a general high-level attention stream, whose task would be to evaluate the performance of the various lower-level streams and allocate resources between them accordingly. These simulations do not necessarily need to incorporate an exponentially increasing number of variables in order to achieve better prediction accuracy. As previously noted, superforecasters were more accurate at making predictions 300 days out than the rest of the forecasters in GJP were at making predictions 100 days out. Given that at least some of the superforecasters spent only a few hours a day making their predictions, and that they had many predictions to rate, they probably did not consider a vastly larger number of factors than the rest of the forecasters. Klein (1999) offers an example of a professor who used three causal factors (the rate of inflation, the rate of unemployment, and the rate of foreign exchange) and a few transitions to relatively accurately simulate how the Polish economy would develop in response to the decision to convert from socialism to a market economy. In contrast, less sophisticated experts could only name two variables (inflation and unemployment) and did not develop any simulations at all, basing their predictions mostly on their ideological leanings. Having large explicit models also allows for the models to be adjusted in response to feedback. The professor's estimate was correct in many respects, but failed to predict that the government would be less ruthless and more cautious in closing down unproductive plants than it had said it would be. The government's caution could thus be added as an additional variable to be considered for the next model. The addition of this variable alone might then considerably increase the accuracy of the simulation. Tetlock and Gardner report that the superforecasters used highly granular probability estimates-carefully thinking about whether the probability of an event was 3% as opposed to 4%, for instance-and that the granularity actually contributed to accuracy, with the predictions getting less accurate if they were rounded to the nearest 5%. Given that such granularity was achieved by integrating various possibilities and considerations, it seems like an ability to consider and integrate an even larger number of possibilities might provide even greater granularity, and thus a prediction edge. In summary, AI could be able to run vastly larger simulations than humans could, with this possibility being subject to computing power limitations; given this, its simulations could also be explicit, allowing it to adjust and correct them in response to feedback to provide improved prediction accuracy; and it could have several streams of attention running concurrently and sharing information with each other. Existing evidence from human experts suggests that large increases in prediction capability might not necessarily need a large increase in the number of variables considered, and that even small increases can provide considerable additional gains. The amount of predictive edge that this could give to an AI as compared to a human or a group of humans is unclear, but humans do tend to prefer simple stories and explanations that are compact enough that all of the important details can be kept in mind at once. Simple hypotheses often turn out to be insufficient because the world is more complicated than a simple hypothesis allows for.
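A toy Python sketch of what such an explicit, inspectable model might look like: each consideration is a named log-odds adjustment, so the list of factors and their weights can be examined, extended (e.g., by adding a 'government caution' variable), and revised in response to feedback. The weights below are invented for illustration and are not derived from any real analysis:

# Toy sketch of an explicit, inspectable model of the kind discussed above:
# each consideration is a named log-odds adjustment, so the factors and their
# weights can be listed, questioned, and revised when feedback arrives.
import math

def combine(prior: float, factors: dict[str, float]) -> float:
    log_odds = math.log(prior / (1 - prior)) + sum(factors.values())
    return 1 / (1 + math.exp(-log_odds))

factors = {
    "large financial reserves (can tolerate low prices)": -0.8,
    "needs revenue for social spending": +0.5,
    "may see price control as futile": -0.6,
    # a new factor, e.g. "government caution": +0.3, could be added after feedback
}
p = combine(prior=0.5, factors=factors)
print(f"P(Saudi Arabia supports production cuts) = {p:.2f}")   # roughly 'no-ish'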
Even in domains such as engineering, where there exist formal ways of modelling the entire domain, a task such as the design of a modern airplane or operating system contains too much complexity for a single person to comprehend. While the impact of uncertainty can never be eliminated, being able to take more of the world's underlying complexity into account than humans do may provide an AI with a predictive edge in at least some domains. Rate of capability growth. How fast could AI develop the ability to run comprehensive and large simulations? 6 Creating larger simulations than humans have access to seems to require extensive computational resources, either from hardware or from optimized software. As an additional consideration, we have previously mentioned limited working memory restricting the capabilities of humans, but human working memory is not the same thing as RAM in computer systems. If one were running a simulation of the human brain in a computer, one could not increase the brain's available working memory simply by increasing the amount of RAM the simulation had access to. Rather, it has been hypothesized that working memory differences between individuals may reflect things such as the ability to discriminate between relevant and irrelevant information (Unsworth and Engle 2007), which could be related to things like brain network structure and thus be more of a software than a hardware issue 7 . Yudkowsky (2013) notes that if increased intelligence were a simple matter of scaling up the brain, the road from chimpanzees to humans would likely have been much shorter, as simple factors such as brain size can respond rapidly to evolutionary selection pressure. Thus, advances in simulation size depend on progress in both hardware and algorithms. Hardware progress is hard to predict, but advances in algorithmic capabilities seem achievable using mostly theoretical and mathematical research. This would require the development of expertise in mathematics, programming, and theoretical computer science. 6 This section does not consider how fast the AI could develop the necessary mental representations to be used in the simulations. That question will be discussed in the next section. 7 Though it is worth noting that g does correlate to some extent with brain size, with a mean correlation of 0.4 in measurements that are obtained using brain imaging as opposed to external measurements of brain size (Rushton and Ankney 2009). This would seem to suggest that the raw number of neurons, and thus 'general hardware capacity', would also be relevant. Much of mathematical problem-solving is about having a library of procedures, reformulations, and heuristics that one can try (Polya 1990), as well as developing a familiarity and understanding of many kinds of mathematical results, which one may later recognize as relevant. This seems like the kind of task that relies strongly on pattern-matching abilities, and it might in principle be within reach of an advanced deep reinforcement learning system that was fed a sufficiently large library of heuristics and worked proofs to let it develop superhuman mathematical intuition 8 . Modern-day theorem provers often know what kinds of steps are valid, but not which steps are worth taking; merging them with the 'artificial intuition' of deep reinforcement learning systems might eventually produce systems with superhuman mathematical ability.
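As a minimal sketch of the idea of pairing a prover that knows which steps are valid with learned intuition about which steps are worth taking, consider a best-first search in which a scoring function stands in for the learned policy. Everything here (the goal test, the step generator, the scores) is a toy placeholder rather than a real theorem-proving system:

# Minimal sketch: best-first search over 'valid steps', guided by a stand-in
# for a learned policy ('score_step'). Not a real prover; all components are toys.
import heapq

def valid_steps(state: str) -> list[str]:
    # Placeholder: a real prover would return the legal inference steps here.
    return [state + "a", state + "b"]

def score_step(state: str) -> float:
    # Placeholder for a learned value/policy estimate of how promising a state is.
    return -len(state) + state.count("a")

def is_goal(state: str) -> bool:
    return state.endswith("aaa")      # toy stand-in for 'theorem proved'

def guided_search(start, budget=1000):
    frontier = [(-score_step(start), start)]
    for _ in range(budget):
        if not frontier:
            return None
        _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in valid_steps(state):
            heapq.heappush(frontier, (-score_step(nxt), nxt))
    return None

print(guided_search(""))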
Progress in this field could allow AI systems to achieve superhuman abilities in math research, considerably increasing their ability to develop more optimized software to take full advantage of the available hardware. To the extent that relatively small increases in the number of variables considered in a high-level simulation would allow for dramatically increased prediction ability (as is suggested by e.g. the superforecasters being better predictors with thrice the prediction horizon of less accurate forecasters), moderate increases in the size of the AI's simulations could translate to drastic increases in terms of real-world capability. Yudkowsky (2013) notes that although the evolutionary record strongly suggests that algorithmic improvements were needed for taking us from chimpanzees to humans, the record rules out exponentially increasing hardware always being needed for linear cognitive gains: the size of the human brain is only four times that of the chimpanzee brain. This further suggests that relatively limited improvements could allow for drastic increases in intelligence. \n Pattern recognition The capability to run large simulations is not enough by itself. The AI also needs to acquire a sufficiently large number of patterns to be included in the simulations, to predict how different pieces in the simulation behave. Potential capability. When it comes to well-defined tasks, current AI systems excel at pattern recognition, being able to analyse vast amounts of data and build them into an overall model, finding regularities that human experts never would have. For instance, human experts would likely have been unable to anticipate that men who 'like' the Facebook page 'Being Confused After Waking Up From Naps' are more likely to be heterosexual (Kosinski et al 2013) . Similarly, the Go-playing AI AlphaGo, whose good performance against the expert player Lee Sedol could to a large extent be attributed to its built-up understanding of the kinds of board patterns that predict victory, managed to make moves that Go professionals watching the game considered creative and novel. The ability to find subtle patterns in data suggests that AI systems might be able to make predictions in domains which humans currently consider impossible to predict. We previously discussed the issue of the (predictive) validity of a domain, with domains being said to have higher validity if 'there are stable relationships between objectively identifiable cues and subsequent events or between cues and the outcomes of possible actions' (Kahneman and Klein 2009) . A field could also be valid despite being substantially uncertain, with warfare and poker being listed as examples of fields that were valid (letting a skilled actor improve their average performance) despite also being highly uncertain (with good performance not being guaranteed even for a skilled actor). We already know that the validity of a field also depends on an actor's cognitive and technological abilities. For example, weather forecasting used to be a field in which almost no objectively identifiable cues were available, relying mostly on guesswork and intuition, but the development of modern meteorological theory made it a much more valid field (Shanteau 1992 ). Thus, even fields which have low validity to humans with modern-day capabilities, could become more valid for more advanced actors. 
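A toy illustration of this kind of pattern extraction, using synthetic data rather than the dataset of Kosinski et al: a plain logistic regression recovering invented associations between page 'likes' and a binary private trait. Only the shape of the pipeline is meant to carry over; the pages, labels, and association strengths are all made up:

# Synthetic-data illustration of predicting a private trait from page 'likes'.
# The pages and 'true' association weights below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

pages = ["naps_page", "cats_page", "sports_page", "news_page"]
rng = np.random.default_rng(0)

X = rng.integers(0, 2, size=(200, len(pages)))          # which pages each synthetic user likes
weights = np.array([1.5, 0.2, -0.8, 0.1])               # invented association strengths
p = 1 / (1 + np.exp(-(X @ weights - 0.5)))
y = rng.binomial(1, p)                                   # the synthetic private trait

model = LogisticRegression().fit(X, y)
for page, coef in zip(pages, model.coef_[0]):
    print(f"{page}: {coef:+.2f}")                        # recovered associations
print("P(trait | likes naps_page only) =",
      round(model.predict_proba([[1, 0, 0, 0]])[0, 1], 2))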
A possible example of a domain that is currently relatively low-validity, but which could become substantially more valid, is that of predicting the behaviour of individual humans. Machine learning tools can already generate personality profiles from people's Facebook 'likes' that are slightly more accurate than the profiles made by people's human friends (Youyou et al 2015), and can be used to predict private traits such as sexual orientation (Kosinski et al 2013). This has been achieved using a relatively limited amount of data and not much intelligence; a more sophisticated modelling process could probably make even better predictions from the same data. Taleb (2007) has argued for history being strongly driven by 'black swan' events, events with such a low probability that they are unanticipated and unprepared for, but which have an enormous impact on the world. To the extent that this is accurate, it suggests limits on the validity of prediction. However, Tetlock and Gardner (2015) argue that while the black swans themselves may be unanticipated, once the event has happened its consequences may be much easier to predict. Although superforecasters have shown no ability to predict black swans such as the 9/11 terrorist attacks, they could predict the answers to questions like 'Will the United States threaten military action if the Taliban do not hand over Osama bin Laden?' and 'Will the Taliban comply?'. Thus, even though AI might be unable to predict some very rare events, once those events have happened, it could utilize its built-up knowledge of how people typically react to different events in order to predict the consequences better than anyone else. \n Rates of capability growth How quickly could AI acquire more detailed models? Here again opinions differ. Hibbard (2016) argues, based on Mahoney's (2008) argument for intelligence being a function of both resources and knowledge, that explosive growth is unlikely. Benthall (2017) makes a similar argument. On the other hand, authors such as Bostrom (2014) and Yudkowsky (2008) suggest the possibility of fast increases. How to improve learning speed? We know that among humans, there are considerable differences in the extent to which people learn. Human cognitive differences have a strong neural and genetic basis (Deary et al 2010), and strongly predict academic performance (Deary et al 2007), socio-economic outcomes (Strenze 2007), and job performance and the effectiveness of on-the-job learning and experience (Gottfredson 1997b). There also exist child prodigies who before adolescence achieve a level of performance comparable to that of an adult professional, without having been able to spend comparable amounts of time training (Ruthsatz et al 2013). In general, some people are able to learn faster from the same experiences, notice relevant patterns faster, and continue learning from experience even past the point where others cease to achieve additional gains 9 . While there is so far no clear consensus on why some people learn faster than others, there are some clear clues. Individual differences in cognitive abilities may be a result of differences in a combination of factors, such as working memory capacity, attention control, and long-term memory (Unsworth et al 2014). Ruthsatz et al (2013), in turn, note that 'child prodigies' skills are highly dependent on a few features of their cognitive profiles, including elevated general IQs, exceptional working memories, and elevated attention to detail'.
Many tasks require paying attention to many things at once, with a risk of overloading the learner's working memory before some of the performance has been automated. For example, McPherson and Renwick (2001) consider children who are learning to play instruments, and note that children who had previously learned to play another instrument were faster learners. They suggest this to be in part because the act of reading musical notation had become automated for these children, saving them from the need to process notation in working memory and allowing them to focus entirely on learning the actual instrument. This general phenomenon has been recognized in education research. Complex activities that require multiple subskills can be hard to master even if the students have moderate competence in each individual subskill, as using several of them at the same time can produce an overwhelming cognitive load (Ambrose et al 2010, chapter 4). Recommended strategies for dealing with this include reducing the scope of the problem at first and then building up to increasingly complex scopes. For instance, 'a piano teacher might ask students to practice only the right hand part of a piece, and then only the left hand part, before combining them' (ibid). An increased working memory capacity, which is empirically associated with faster learning capabilities, could theoretically assist learning by allowing more things to be comprehended simultaneously without overwhelming the learner. Thus, an AI with a large working memory could learn and master much more complicated wholes at once than humans can. Additionally, we have seen that a key part of efficient learning is the ability to monitor one's own performance and to notice errors which need correcting; this seems in line with cognitive abilities correlating with attentional control and elevated attention to detail. McPherson and Renwick (2001) also remark on the ability of some students to play through a piece with considerably fewer errors on their second run-through than the first one, suggesting that this indicates 'an outstanding ability to retain a mental representation of [their] performance between run-throughs, and to use this as a basis for learning from [their] errors'. In contrast, children who learned more slowly seemed either not to notice their mistakes, or alternatively not to remember them when they played the piece again. Whatever the AI analogues of working and long-term memory, attentional control, and attention to detail are, it seems at least plausible that these could be improved upon by drawing exclusively on relatively theoretical research and in-house experiments. This might enable AI both to absorb vast datasets, as current-day deep learning systems do, and also to learn from superhumanly small amounts of data. Limits of learning speed. How much can the human learning speed be improved upon? This remains an open question. There are likely to be sharply diminishing returns at some point, but we do not know whether they are near the human level. Human intelligence seems constrained by a number of biological and physical factors that are unrelated to gains from intelligence. 9 Readers who are familiar with the 'deliberate practice' literature may wonder if that literature might not contradict these claims about the impact of intelligence.
After all, the deliberate practice research suggests that talent is irrelevant, and that deliberate, well-supervised training is the only thing that matters. However, as noted by the field's inventor, deliberate practice is a concept that is applicable to some very specific - one might even say artificial - domains. Deliberate practice can only be applied in fields in which there are objective metrics, highly developed objectively-measurable expertise, and active competition to improve the existing practices. Areas that do not qualify are \"anything in which there is little or no direct competition, such as gardening and other hobbies, for instance, and many of the jobs in today's workplace-business manager, teacher, electrician, engineer, consultant, and so on\", as there are no objective criteria for performance (Ericsson and Pool 2016). Fields that have well-defined, objective criteria for good performance are the ones which are easiest to master using even current-day AI methods - in fact, they're basically the only ones that can be truly mastered using current-day AI methods. A somewhat cheeky way to summarize these results would be by saying that, in the kinds of fields that can be mastered by AI methods that exhibit no general intelligence, general intelligence isn't the most important thing. This even seems to be Ericsson's own theoretical stance: that in these fields, general intelligence eventually ceases to matter because the expert will have developed specialized mental representations that they can just rely on in every situation. So these results are not very interesting to those of us who are interested in domains that do require general intelligence. Plausible constraints include the size of the birth canal limiting the volume of human brains, the brain's extensive energy requirements limiting the overall number of cells, limits to the speed of signalling in neurons, an increasing proportion of the brain's volume being spent on wiring and connections (rather than actual computation) as the number of neurons grows, and inherent unreliabilities in the operation of ion channels (Fox 2011). There does not seem to be any obvious reason why the threshold for diminishing gains from intelligence to learning speed would just happen to coincide with the level of intelligence allowed by our current biology. Alternatively, there could have been diminishing returns all along, but ones which still made it worthwhile for evolution to keep investing in additional intelligence. The available evidence also seems to suggest that, within the human range at least, increased intelligence continues to contribute to additional gains. The Study of Mathematically Precocious Youth is a 50-year longitudinal study involving over 5000 exceptionally talented individuals identified between 1972 and 1997. Despite its name, many of its participants are more verbally than mathematically talented. The study has led to several publications; among others, Wai et al (2005) and Lubinski and Benbow (2006) examine the question of whether ability differences within the top 1% of the human population make a difference in life. Comparing the top (Q4) and bottom (Q1) quartiles of two cohorts within this study shows both to differ significantly from the ordinary population, as well as from each other. Out of the general population, about 1% will obtain a doctoral degree, whereas 20% of Q1 and 32% of Q4 did. 0.4% of Q1 achieved tenure at a top-50 US university, as did 3% of Q4. Looking at a 1-in-10,000 cohort, 19% had earned patents, as compared to 7.5% of the Q4 group, 3.8% of the Q1 group, or 1% of the general population.
It is important to emphasize that the evidence we have reviewed so far does not merely mean that AI could potentially learn faster in terms of time: it also suggests that the AI could potentially learn faster in terms of training data. The smaller the datasets the AI needs in order to develop accurate models, the faster it can adapt to new situations. Besides the considerations we have already discussed, there seems to be potential for accelerated learning through more detailed analysis of experiences. For example, chess players improve most effectively by studying the games of grandmasters and trying to predict what moves the grandmasters would have made in any situation. When the grandmaster's play deviates from the move that the student would have made, the student goes back to try to see what they missed (Ericsson and Pool 2016). This kind of detailed study is effortful, however, and can only be sustained for limited periods at a time. With enough computational resources, the AI could routinely run this kind of analysis on all sense data it received, constantly attempting to build increasingly detailed models that would correctly predict the data. How much interaction is needed? Some commentators, such as Hibbard (2016), argue that knowledge requires interaction with the world, so the AI would be forced to learn over an extended period of time as the interaction takes time. From our previous review, we know that feedback is needed for the development of expertise. However, one may also get feedback from studying static materials. As we noted before, chess players spend more time studying published matches and trying to predict the grandmaster moves-and then getting feedback when they look up the next move and have their prediction confirmed or falsified-than they do actually playing matches against live opponents (Ericsson and Pool 2016). The Go-playing AlphaGo system did not achieve its skill by spending large amounts of time playing human opponents, but rather by studying the games of humans and playing games against itself (Silver et al 2016). And while any individual human can only study a single game at a time, AI systems could study a vast number of games in parallel and learn from all of them 10 . An important difference is that domains such as chess and Go are formally specified domains, which AI can perfectly simulate. For a domain such as social interaction, the AI's ability to accurately simulate the behaviour of humans is limited by its current competence in the domain. While it can run a simulation based on its existing model of human behaviour, predicting how humans would behave based on that model, it needs external data in order to find out how accurate its prediction was. This is not necessarily a problem, however, given the vast (and ever-increasing) amount of recorded social interaction happening online. YouTube, e-mail lists, forums, blogs, and social media services all provide rich records of various kinds of social interaction for AI to test its predictive models against without needing to engage in interaction of its own. Scientific papers-increasingly available on an open access basis-on topics such as psychology and sociology offer additional information for the AI to supplement its understanding with, as do various guides to social skills. All of this information could be acquired simply by downloading it, with the main constraints being the time needed to find, download, and process the data, rather than the time needed for social interactions.
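A schematic sketch of the study loop described above, with the model and the game records as placeholders rather than a real chess engine or database: predict the expert's move, compare it with what was actually played, and queue the mismatches for closer analysis:

# Schematic sketch of prediction-based study: every archived expert decision
# becomes a training signal, and mismatches are flagged for deeper review.
# 'predict_move' and the records below are placeholders.

def predict_move(position: str) -> str:
    # Placeholder for the learner's current model of expert play.
    return "e4" if position == "start" else "Nf3"

game_records = [("start", "e4"), ("after_e4_e5", "Nf3"), ("after_Nf3_Nc6", "Bb5")]

review_queue = []
for position, expert_move in game_records:
    guess = predict_move(position)
    if guess != expert_move:
        # The interesting cases: what did the expert see that the model missed?
        review_queue.append((position, guess, expert_move))

print(f"{len(review_queue)} positions flagged for deeper study:", review_queue)
# An AI with enough compute could run this loop over millions of recorded games
# in parallel, rather than one game at a time.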
As noted earlier, relatively crude statistical methods can already extract relatively accurate psychological profiles out of data such as people's Facebook 'likes' (Kosinski et al 2013, Youyou et al 2015), giving reason to suspect that a general AI could develop very accurate predictive abilities given the kind of process described above. Several other domains, such as software security and mathematics, seem similarly amenable to being mastered largely without needing to interact with the world outside the AI, other than searching for relevant materials. Some domains, such as physics, would probably need novel experiments, but an AI focusing on the domains that were the easiest and fastest for it to master might find sufficient sources of capability from those alone. 10 See Mnih et al (2016) for a discussion of how incorporating parallel learning improves upon modern deep learning systems. Given the above considerations, it does not seem like AI's speed of learning would necessarily be strongly interaction-constrained. \n Conclusions We set out to consider the fundamental practical limits of intelligence, and the limits to how quickly an AI system could acquire very high levels of capability. Fictional representations of high intelligence often depict geniuses as masterminds who have an almost godlike prediction ability, laying out intricate multi-step plans where every contingency is planned for in advance (TVTropes 2017a). When discussing 'superintelligent' AI systems, one might easily think that the discussion was postulating something along the lines of those fictional examples, and rightly reject it as unrealistic. Given what we know about the limits of prediction, for AI to make a single plan which takes into account every possibility is surely impossible. However, having reviewed the science of human expertise, we have found that experts who are good at their domains tend to develop powerful mental representations which let them react to various situations as they arise, and to simulate different plans and outcomes in their heads. Looking from humans to AIs, we have found that AI might be able to run much more sophisticated mental simulations than humans could. Given human intelligence differences and empirical and theoretical considerations about working memory being a major constraint for intelligence, the empirical finding that increased intelligence continues to benefit people throughout the whole human range, and the observation that it would be unlikely for the theoretical limits of intelligence to coincide with the biological and physical constraints that human intelligence currently faces, it seems like AIs could come to learn considerably faster from data than humans do. It also seems like in many domains, this could be achieved by using existing materials as a source of feedback for predictions, without necessarily being constrained by the time taken for interacting with the external world. Thus, it appears that even though an AI system could not make a single superplan for world conquest right from the beginning, it could still have a superhuman ability to adapt and learn from changing and novel situations, and react to those faster than its human adversaries. As an analogy, experts playing most games cannot precompute a winning strategy right from the first move either, but they can still react and adapt to the game's evolving situation better than a novice can, enabling them to win 11 .
Many of the hypothetical advantages-such as a larger working memory, the ability to consider more possibilities at once, and the ability to practice on many training instances in parallel-that AI might have seem to depend on available computing power. Thus the amount of hardware the AI had at its disposal could limit its capabilities, but there exists the possibility of developing better-optimized algorithms by initially specializing in fields such as programming and theoretical computer science, which the AI might become very good at. One consideration which we have not yet properly addressed is the technology landscape at the time when the AI arrives (Tomasik 2017, section 7). If a general AI can be developed, then various forms of sophisticated narrow AI will also be in existence. Some of them could be used to detect and react to a general AI, and tools such as sophisticated personal profiling for purposes of social manipulation will likely already be in existence. Considering how these influence the considerations discussed here is an important question, but one which is outside the scope of this article. In summary, even if AI could not create a complete master plan from scratch, there seems to be a reasonable chance that it could still come to substantially outperform humans in many domains, developing and using expertise superior to what humans are capable of. How fast AI systems could develop to such a level would depend on the speed at which algorithmic and hardware improvements became available. These could potentially be very fast if, e.g., the required algorithmic insights were more a matter of scaling up the size of the AI's simulations and the number of attentional streams, rather than requiring any genuinely new ideas compared to what allowed the AI to achieve a rough human level in the first place. Recognizing a familiar situation means understanding what goals make sense and what should be focused on, which cues to pay attention to, what to expect next and when a violation of expectations shows that something is amiss, and knowing what the typical ways of responding are. Ideally, the expert will instantly know what to do. \n • Medicine and firefighting have fairly high validity, whereas predictions of the future value of individual stocks and long-term forecasts of political events are domains with practically zero validity. • 'Some [domains] are both highly valid and substantially uncertain. Poker and warfare are examples. The best moves in such situations reliably increase the potential for success'. • '[A domain] of high validity is a necessary condition for the development of skilled intuitions. Other necessary conditions include adequate opportunities for learning the [domain] (prolonged practice and feedback that is both rapid and unequivocal). If [a domain] provides valid cues and good feedback, skill and expert intuition will eventually develop in individuals of sufficient talent'. \n\t\t\t Except for when citations to other content are explicitly included, all the discussion about superforecasters and the Good Judgment Project uses Superforecasting as its source. \n\t\t\t See Whalen (2016) for preliminary work in this direction.", "date_published": "n/a", "url": "n/a", "filename": "Sotala_2017_Phys._Scr._92_113001.tei.xml", "abstract": "What kinds of fundamental limits are there in how capable artificial intelligence (AI) systems might become?
Two questions in particular are of interest: (1) How much more capable could AI become relative to humans, and (2) how easily could superhuman capability be acquired? To answer these questions, we will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how AI could improve on humans in two major aspects of thought and expertise, namely simulation and pattern recognition. We find that although there are very real limits to prediction, it seems like AI could still substantially improve on human intelligence.", "id": "c91bb1caa13f861727be34026223238e"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "toward_a_working_theory_of_mind.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "Economics_of_the_singularity.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Pythagoras Petratos", "Anders Sandberg", "Feng Zhou Contents", "Oxford Martin", "F Zhou"], "title": "Cyber Insurance", "text": "Introduction Cyber insurance has a broad definition and has been continuously evolving over time. It was defined as insurance for the damages to \"physical\" computer equipment in 1970s, but nowadays it has been changed to be a cost-effective option of risk mitigation strategies for IT/cyber-related losses. According to Association of British Insurers (ABI), it \"covers the losses relating to damage to, or loss of information from, IT systems and networks.\" argue that cyber insurance in an ideal situation promotes users to implement good security. However, some barriers are currently preventing insurers to achieve this goal, and innovations in the cyberspace introduce new types of loss. For example, \"Internet of Things\" is shifting cybersecurity from protecting information assets to physical goods that were traditionally unrelated to computers. At present, cyber insurance has a small share in overall nonlife insurance market and represents just 0.1% of the global property and casualty insurance premium pool (Marsh 2015) , but it is one of the fastest-growing new lines of insurance business and the cybersecurity is recognized as one of the top global risks in the World Economic Forum's report recently (WEF 2015) . Meanwhile, more and more traditional insurance contracts exclude specific losses that are linked to cybersecurity; it is necessary to develop a standalone cyber-insurance market. New technologies and innovations in the cyberspace are also spurring the development of cyber-insurance market, as well as the current trend of government requiring high standards on protecting sensitive information and enforcing financial punishments relating to information security breaches. Both the complexity of cyber risk and the current immaturity of cyber-insurance market bring challenges for industry practitioners and regulators to fully understand potential future systemic risks in this kind of complex system. Not surprisingly, the recent Risk Nexus Report from Zurich Insurance Group argues that the global aggregations of cyber risk is analogous to those risks that were overlooked in the US sub-prime mortgage market (Zurich 2014) . 
Its nickname \"cyber sub-prime\" intends to describe the interconnected nature of systemic cyber risk and the challenges for individual insurers to address the complexity. They believe that the existing research on systemic risk in the financial markets that aims to address recent crises should be helpful to understand the dynamics of future cyberspace. \n Development of Insurance for Cyber Risks According to 2015 Information Security Breaches Survey (PWC 2015) , 90% of UK large organizations and 74% of small businesses reported that they had suffered at least one security breach in the past 1 year. The average cost of the worst single breach suffered by these businesses has gone up sharply. For instance, the average cost to a large organization is around £1.5-£3 m up from £600 k to £1.15 m a year ago. The survey also indicates that the majority of UK businesses surveyed expect breaches will continue to increase. Thompson (2014) estimates that the total cyber insurance currently amounts around US$2 billion, whereas the total cost of global security breaches could be more than US$400 billion. For more about the effects of cyber-attacks on UK companies, see Oxford Economics (2014) . For a more detailed history and evolution of cyber-insurance products, see Majuca et al. (2006) . \n Economics of Information Security Together with both the growth of ICT (information and communication technology) and the growing impact of cyber risks to the real-world business increase the demand for insurance-related risk mitigation strategies. The following factors also play key roles in the development of cyber insurance: A list of key factors affecting either demand for or supply of cyber insurance: Mitigating cyber residual risks: Organizations have three basic cyber risk management strategies: self-protection, self-insurance, and transfer of risk via cyber insurance (Kesan et al. 2005) . While organizations are increasing their information security spending on improving IT system, cyber residual risks still require insurance to mitigate unexpected events. Lelarge and Bolot (2009) find that cyber insurance is a powerful incentive mechanism that motivates organizations to invest in selfprotection, so these three strategies are complementary to each other. Pal and Golubchik (2010) analyze the Internet users' investment in self-defense mechanisms when insurance solutions are offered in either full or partial cyber-insurance coverage models. Promoting and aligning economic incentives: Organizations who have insurance as a last resort of risk management attract customers and business partners, especially for small businesses who are parts of a large/long supply chain in order to avoid being the weakest link of cyber-attacks. In the supply-demand model of cyberinsurance market, Pal (2014) argues that cyber insurance has the potential to jointly align the incentives of different stakeholders in the cyberspace, such stakeholders or players as security vendors, cyber insurers, regulatory agencies, and network users. also suggest that cyber insurance in an ideal situation promotes users to implement good security. 
Protecting exclusions in traditional insurance: Cyber cover was mainly embedded in other traditional insurance products (e.g., business interruption or professional liability insurance), but nowadays more and more traditional insurance contracts intend to exclude cyber-related risks due to the complexity of cyberspace and the potentially catastrophic consequences, as well as the different actuarial methods required to perform data analysis (Siegel et al. 2002). As a result, standalone cyber-insurance policies have emerged. However, there is a gap between insurers and insured parties in explaining the differences/exclusions among both standalone cyber-insurance contracts and traditional products. It is necessary to have cyber-insurance brokers to reduce this gap (Marsh 2015). Providing professional advice and delivering experienced cyber incident response: Insurance companies themselves collect a huge amount of customers' personally identifiable information and corporate clients' confidential business/financial information, so they must follow, and have rich experience in dealing with, many regulations on protecting data and cybersecurity (e.g., HIPAA, the Health Insurance Portability and Accountability Act, to protect the privacy of individual patients/customers, and GLBA, the Gramm-Leach-Bliley Act, to secure the private information of clients) (Appari and Johnson 2010). Insurers also accumulate up-to-date knowledge and relevant experience from clients globally and communicate with other security professionals, in order to provide technical and legal assistance (as well as financial compensation) to manage cyber-related breaches and incidents (Marsh and Zurich 2015). Training cybersecurity awareness and building an information security culture: Security managers often find it difficult to communicate about security policies and technologies with nontechnical internal staff or external clients who have no formal security background, but insurance is an easy way to explain the (financial) impact of cybersecurity to the business. An insurance premium that is reduced (or increased) year by year due to a better (or worse) security implementation relative to previous periods is a good indication and consistent comparison for defining proper cyber risk metrics and for educating staff or clients. However, at this early stage of cyber insurance, there is still a lag in insurers implementing premium differentiation that precisely reflects the insured's security improvements, due to the immaturity of the cyber-insurance market (Mukhopadhyay et al. 2013; Moran et al. 2015). Government supports: A free-market approach is traditionally popular for managing risks in the financial system, since it increases the motivation and efficiency of stakeholders in the system. As has been suggested, one option to spur demand for cyber insurance is to make it compulsory (as is common in motor insurance), but this may impose a deadweight loss on competitiveness and productivity growth. The role of government is to encourage and support insurers in overcoming the barriers to supplying cyber insurance (these barriers will be discussed in the cyber-insurance market section). Recently, the UK government launched its \"10 Steps to Cyber Security\" (CESG 2012) and \"Cyber Essentials Scheme\" (BIS 2014), both aiming to assist insurers in evaluating the security assessment of small- and medium-sized enterprises.
Sharing data of cyber incidents (data pooling): It is necessary to form partnerships across different industries that share data in order to better understand cyber risks, as suggested in the UK Cyber Security Strategy (Cabinet 2011). The recently launched Cyber Security Information Sharing Partnership (CiSP, https://www.cert.gov.uk/cisp/) aims to collaborate with insurers to analyze emerging threats, disaster scenarios, and trends in cyberspace. Cyber insurance will become more affordable, and its purchase cost is expected to fall below the current level as more relevant actuarial data become available in the near future; a higher degree of price differentiation across different policies and individual firms will also be feasible (Marsh 2015). However, Bohme (2006) argues that while information sharing is socially beneficial, it is not efficient to rely only on a trusted third party (as a \"social planner\") to arrange data collection. \n Insurable and Uninsurable Cyber Risks In terms of a specific insurance policy, the potential losses related to cyber-attacks or nonmalicious IT failures can currently be grouped into 11 categories in the London Insurance Market (Marsh 2015), which is similar to the US market (Majuca et al. 2006). Due to both the differences in severity/frequency of cyber events and the complexity of cyber risks, some of these losses are insurable while others are not at present. Johnson et al. (2014) study the complexity of estimating systematic risk in cyber networks, which is an essential requirement for providing cyber insurance to the public. The following discussion explains the insurability of, and exposure from, different cyber risks (Marsh 2015). \n Insurable Cyber Risks Privacy events: Many privacy issues are related to managing regulatory requirements on information security. Insurers can collaborate with lawyers to provide different levels of services and protections to their clients. Since the losses from these events are handled and measured by a third-party professional lawyer, there is less of an information asymmetry or moral hazard problem between insurer and insured. Crime and fraud: Police forces are often involved in the investigation of cyber-crime and fraud; therefore, the financial losses related to such cyber events are measured by third parties such as the police or lawyers. Insurers can not only offer insurance cover, but also provide professional advice on preventing these events or reducing their cost, based on their experience with other customers. Network security liability: Third-party liabilities related to certain security events occurring within an organization's IT network can be insured, mainly because the scope of incidents can be clearly defined by the insurers, and IT system engineers can also collaborate with insurers to improve mitigation strategies. Software and data damage: Insurers can provide indemnity for the costs arising from the damage of data or software (e.g., helping to recover or reconstitute the damaged data); this is mainly because insurers are able to require policy holders to follow necessary data backup or redundancy procedures. Cyber extortion: Traditionally, insurers have the necessary knowledge and experience of dealing with extortion in the physical world and conducting ransom negotiations (particularly in the London Market, such as Lloyd's of London); extortion in cyberspace is not much different. Cover is provided for both the cost of handling the incident and the ransom payment.
\n Uninsurable (or Insurable but with Constraints) Cyber Risks Reputational loss: Although insurance cover is available for losses that are directly linked to reputational damage (e.g., the cost of recovering public image or lost revenue from existing customers), it is difficult to measure the value of the compensation and the linkage between the cyber incident and the intangible asset without certain constraints. Network business interruption (e.g., due to Denial-of-Service attacks): In the traditional insurance sector, it is common to offer full coverage for business interruption arising from natural disasters or man-made events. However, in this early stage of cyber insurance, insurers are concerned about the potential aggregate exposure from a single cyber event that interrupts many insured policy holders. IP theft or espionage: These types of losses are extremely difficult to prove and quantify, since their value changes quickly over time, and a trade secret is priceless before an incident but (likely) worthless once public. It is also hard to determine whether the incident occurred within the insured period. Moreover, these attacks are often state-sponsored and backed by a large amount of resources. Physical asset damage: The interconnection between the physical world and cyberspace is increased by the development of the so-called \"Internet of Things (IoT)\"; therefore, more and more cyber incidents will directly have impacts on physical assets. At this stage, the complexity of these interconnections is not well understood by insurers; therefore, it is difficult to combine cyber insurance with traditional property insurance or to include such physical asset damage cover in standalone cyber insurance. Death and bodily injury: Similar to physical asset damage, it is more and more likely that certain cyber-related incidents may cause harm to humans (e.g., via medical devices, large-scale industrial equipment, driverless cars, etc.). Although it is uninsurable at the current stage of cyber insurance, it is covered by traditional insurance products such as general liability and employers' liability products (Fig. 1). \n Challenges and Developments Even if insurers are able to offer cyber insurance to mitigate certain types of cyber-risk events, they must face and learn to overcome some challenges in order to maintain and expand their businesses. Not surprisingly, there has recently been progress and development in addressing these challenges. \n Challenges for Insurers External attackers are evolving over time: The Information Security Breaches Survey (PWC 2015) shows that outsiders are using more sophisticated methods to affect organizations. Staff-related breaches are unique in individual cases: Whether due to inadvertent human error or not, the consequences of insiders' mistakes or misconduct are difficult for insurers to measure. Lack of understanding and communication: Recent surveys indicate that a majority of CEOs believe their organizations have relevant insurance to cover cyber risks (PWC 2015), whereas in fact only around 10% actually do (Marsh 2015). Increasing IT system collaboration and social networking: Cyberspace is moving toward an ecosystem in which more and more heterogeneous players collaborate and interact with each other. New technologies and innovations: The ICT sector is attractive to capital markets, with large amounts of capital available to support new businesses and innovations.
(Fig. 1 caption: Expected number of losses per year larger than a certain size, as a function of the number of records lost. Note the power-law heavy tail for larger losses (exponent ≈ -0.66, consistent with the results in Overill and Silomon (2011) and Maillart and Sornette (2010)). This tail may be dominated by more targeted events and organized crime, including financial fraud, insider abuse and theft, as well as malware (Overill and Silomon 2011).) However, due to the nature of this fast-evolving sector and heavy competition, ICT vendors focus more on the short process of introducing their products and services to the market and less on security. It is challenging for insurers to follow these fast developments and the potential risks involved in the process (Friedman 2011). \n Recent Developments Government: Organizations are increasingly using government alerts (e.g., the UK HMG Cyber Essentials scheme) to inform their awareness of threats and similar vulnerabilities (PWC 2015). Insured firms can get a discount on their insurance premium if they follow these certification requirements, which motivates insured users to follow security procedures and policies. Insurance cyber gap analysis: Marsh (2015) also suggests that it is necessary for insurance brokers to provide cyber gap analysis (determining which cyber risks are covered by existing traditional insurance and which need to be covered in a standalone cyber-insurance policy) when communicating with customers. Insurers' data protection regulations: The insurance industry itself collects sensitive personal, financial, and healthcare data from its policy holders (e.g., personally identifiable information (PII), protected health information (PHI), and private business operation information) in order to measure customers' risks more precisely. As a result, the National Association of Insurance Commissioners NAIC (2015) recently adopted cybersecurity guidance for the insurance industry and regulators to follow. The expertise and experience of insurers' own information security practice is also applied to advise their customers. Understanding the benefits of cyber insurance: A growing body of literature supports the benefits of cyber insurance as a market-based solution to cybersecurity. Kesan et al. (2005) state that, once certain obstacles to a full market solution are fully worked out, several positive outcomes will occur. In general, a cyber-insurance market will result in higher overall social welfare. \n Evolution of Cyber-Insurance Market It is still too early to know the structure of the future, mature cyber-insurance market. In the existing literature, both competitive (Shetty et al. 2010b) and monopolistic (Lelarge and Bolot 2009; Hofmann 2007; Pal and Golubchik 2011) market structures are studied. As commonly expected, the cyber-insurance market will soon become a complex dynamic system (Anderson and Moore 2009; Halse and Hoemsnes 2013). As a result, the market not only provides one option among risk mitigation strategies, but also builds an ecosystem together with other sectors in cyberspace that can influence heterogeneous stakeholders' behaviors and business strategies (Hall et al. 2011). This is similar to other financial systems, such as stock or credit markets (Gracie 2015). Therefore, the existing research on other financial systems will be relevant to understanding the future cyber-insurance market (Zurich 2014).
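To illustrate what the power-law heavy tail in Fig. 1 implies for aggregate exposure, the following Python sketch computes expected exceedance counts under made-up parameters; the exponent is chosen in the spirit of the figure, while the annual event count and anchor point are invented for illustration:

# Illustration of a power-law (Pareto-like) tail for loss sizes, in the spirit
# of Fig. 1. The exponent, event count, and anchor are assumed values, not a
# fit to the underlying breach data.
alpha = 0.66                     # tail exponent (much heavier tail than exponential)
events_per_year = 1000.0         # assumed total number of loss events per year
x_min = 1_000                    # records lost at which the tail is anchored

def expected_exceedances(threshold_records: float) -> float:
    """Expected number of yearly events losing more than 'threshold_records'."""
    return events_per_year * (threshold_records / x_min) ** (-alpha)

for threshold in [10_000, 100_000, 1_000_000, 10_000_000]:
    print(f"> {threshold:>10,} records: ~{expected_exceedances(threshold):6.1f} events/yr")
# Because the tail falls off so slowly, rare but enormous breaches dominate the
# aggregate exposure, which is a core difficulty for insurers pricing cyber risk.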
\n Obstacles of Developing the Cyber-Insurance Market Shetty et al. (2010a) and Bohme and Schwartz (2010) argue that the underdeveloped cyber-insurance market is mainly due to: (1) interdependent security (externalities) (Ogut et al. 2005; Bolot and Lelarge 2008; Zhao et al. 2009); (2) correlated risk (Bohme and Kataria 2006); and (3) information asymmetries (Bandyopadhyay et al. 2009). Furthermore, Bohme and Schwartz (2010) argue that "it appears that the market failure can only be overcome if all obstacles are tackled simultaneously." Meanwhile, Marsh (2015) states that a well-developed reinsurance market for cyber insurance is also one of the necessary conditions for expanding the business. The four key obstacles are explained as follows: Interdependent security (externalities): Kunreuther and Heal (2003) ask the question: "Do firms have adequate incentives to invest in protection against a risk whose magnitude depends on the actions of others?". One of the differences between cyber and traditional insurance (e.g., property or motor) is the close interconnection among players in cyberspace. Security in cyberspace depends on all players in the system, but heterogeneous players have different preferences about cybersecurity, and a "free rider problem" occurs when those who benefit from other players' security investment do not have to pay for it (Varian 2004). Naghizadeh and Liu (2014) argue that security is a nonexcludable public good, so users can stay out and still enjoy spill-overs from others' contributions without paying. As a result, even if insurers help their insured customers to increase their overall security, uninsured players in the system can still weaken these insured customers (a minimal numerical sketch of this incentive problem is given after this list of obstacles). Correlated risk: Bohme and Kataria (2006) define two tiers of correlated cyber risks: (1) internal correlation, which they define as "the correlation of cyber risk within a firm" (i.e., a correlated failure of multiple systems on the internal network), and (2) global correlation, "the correlation of cyber-risk at a global level, which also appears in the insurer's portfolio." The growing development of cloud computing platforms may accelerate the integration of the two tiers. For example, an internal incident at a cloud service provider will lead to systemic risks in both its internal systems and its customers' systems. Information asymmetries: Bohme and Schwartz (2010) define "asymmetric information" as an environment in which some players hold private information, not available to other players, that they can exploit. The common issues attributed to asymmetric information in the conventional insurance literature are adverse selection (Akerlof 1970) and moral hazard (Arrow 1963). They are also relevant to the cyber-insurance market, and other obstacles (e.g., interdependent security) may exacerbate these problems (Shetty et al. 2010a). Furthermore, Bohme and Schwartz (2010) also identify specific forms of information asymmetry in cyber insurance. Meanwhile, Pal (2012) proposes three mechanisms (premium differentiation, fines, security auditing) to resolve information asymmetry in cyber insurance. Lack of reinsurance market: It is still early days for reinsurers to reinsure cyber risks from primary insurers, but several proposals have been put forward to build such a reinsurance function (Toregas and Zahn 2014), such as establishing government-regulated funds similar to the US Terrorism Risk Insurance Act or the UK Financial Services Compensation Scheme.
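To make the interdependent-security obstacle concrete, here is a minimal sketch of a two-firm protection game in the spirit of Kunreuther and Heal (2003). All parameter values are invented for illustration and are not taken from the cited papers; the point is only to show how contagion from an unprotected peer can make "nobody invests" a stable outcome even though joint investment is far cheaper overall.

```python
from itertools import product

# Illustrative parameters (not calibrated to any data source):
L = 100.0   # loss if a firm is compromised
p = 0.2     # probability of a direct compromise when unprotected
q = 0.8     # probability that a compromised firm infects its peer
c = 18.0    # cost of investing in protection

def expected_cost(invest_me: bool, invest_other: bool) -> float:
    """Expected cost for one firm, given both firms' investment choices.

    Protection blocks direct attacks but not contagion from a compromised peer.
    """
    cost = c if invest_me else 0.0
    p_other_compromised = 0.0 if invest_other else p
    p_direct = 0.0 if invest_me else p
    p_contagion = p_other_compromised * q
    p_compromised = 1.0 - (1.0 - p_direct) * (1.0 - p_contagion)
    return cost + p_compromised * L

for invest_me, invest_other in product([True, False], repeat=2):
    print(f"I invest={invest_me!s:5}, other invests={invest_other!s:5}: "
          f"my expected cost = {expected_cost(invest_me, invest_other):6.1f}")

# With these numbers, investing is a best response only if the other firm also
# invests (18.0 < 20.0); against a non-investing peer it is not (34.0 > 32.8).
# Both "everyone invests" and "nobody invests" are therefore equilibria, and
# the second leaves each firm far worse off.
```

An insurer facing this kind of externality cannot fully control its exposure through the insured firm's behavior alone, which is one reason premium discounts tied to certification schemes, as discussed earlier, are attractive.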
One option that has been discussed for building such a reinsurance function is for government itself to provide the reinsurance, but it is emphasized that "while government re-insurance can create insurance markets where otherwise there would be no supply, such measures must be carefully designed to avoid a regime in which profits are private (to the insurers' shareholders), losses are socialized (born by the tax-payer), and systems remain insecure (because the government intervention removes the incentive to build properly secure products)." \n Technologies Spur the Cyber-Insurance Market Many new technologies developed in recent years will spur the cyber-insurance market. We identify some of these technologies and group them into three main categories: (1) IT technologies that assist insurers in managing and discovering cyber incidents, as well as attracting more customer demand for cyber insurance; (2) technologies and methods that help insurers perform actuarial modeling and data analysis; and (3) technologies that are useful for better understanding the complexity of the cyber-insurance market. \n IT Technologies Some standalone technologies: Intrusion detection systems (IDS), firewalls, digital forensic technology, Microsoft PhotoDNA, and encryption tools have become more advanced and more relevant for insurers investigating cyber incidents. Trusted computing infrastructure: Although opponents of trusted computing argue that users will lose freedom and privacy (Anderson 2003a, b), the technology gives insurers an opportunity to identify insurable events and define claims more precisely. Cloud platforms: Cloud service providers can reduce the problem of misaligned incentives between insurers and cloud users if they collaborate with insurers to attract more customers. Meanwhile, automated systems reduce human errors in the computing process. On the other hand, cloud platforms may create systemic risk, since they are connected to many other IT systems. Anonymous communication and transactions: Anonymity networks, currently exemplified by the Tor software, make cyber criminals "anonymous" and hard to trace. Anonymous digital currencies allow sophisticated markets for illicit goods and services (Juels et al. 2015). As a result, there is a deep/dark web that provides a cyber black market in which attackers can trade sensitive information (e.g., selling stolen credit card data to other parties), so attackers' motivation to attack organizations becomes larger. Mobile devices: Nowadays, more and more business activities and collaborations are based on mobile devices (e.g., Bring Your Own Device). This leads to more cyber incidents that require cyber insurance, since such devices are easily lost or stolen and users often lack the skills to manage security on them. Leaking technology: ICT enables rapid copying and dissemination of information, making information leaks harder to contain. In the past, a sizeable leak of proprietary information (such as the more than 40 gigabytes of internal data released in the 2014 Sony hack) would have been limited by the need to transmit it by shipping hard drives (expensive) or setting up a website (legally traceable and blockable); by 2014, it could be distributed anonymously using BitTorrent in a way that makes it practically impossible to trace and block. In addition, leaks are potentiated by the appearance of search tools that make released data more accessible.
\n Actuarial Modeling Methods Network simulator: Similar to the stress and scenario testing commonly used in financial markets (e.g., the banking system), insurers can use various applications and services to run network simulations in an artificial environment in order to test the stability and resilience of an insured network under different conditions. Actuarial data analysis (big data analytics): More and more professional consulting firms have been investing in and offering advanced actuarial pricing and risk management services based on big data analytics to help insurers uncover hidden patterns and unknown correlations in cyber risks (a minimal illustrative sketch of frequency-severity pricing is given at the end of this section). Data pooling platform (data anonymization): Information sanitization technologies that aim to encrypt or remove sensitive information from data sets are becoming more feasible; this encourages more data to be shared on pooling platforms, helping governments and insurers better understand cyber risks from aggregated data sets. Machine learning and Bayesian networks: More and more applications from these subfields of computer science are used in understanding cyber risks, and insurers will hopefully gain insights about managing cyber risks from these developments. Yang and Lui (2014) apply Bayesian networks to analyze the influence of the cyber-insurance market on security adoption in heterogeneous networks. Data visualization: According to Microsoft's "digital detectives" website, advances in data visualization technology help the Microsoft Digital Crimes Unit (which uses Microsoft PowerMap) to better understand the pattern of Citadel botnets and to remove the malware from infected machines more efficiently (Constantin 2013). The same technologies can help insurers identify cyber incidents arising from different malware or causes, so that they can distinguish incidents in order to reduce specific claims (similar to distinguishing different risk events in natural catastrophe insurance) or issue insurance-linked securities based on specified triggers (a cyber incident) earlier. One potential strategy considered for promoting cyber insurance is to develop financial instruments for risk sharing, similar to "Cat Bonds" and "Exploit Derivatives" in traditional insurance operations (e.g., flood and natural-disaster insurance). As has been explained, "Exploit Derivatives are vehicles for insurers to hedge against the discovery of vulnerabilities that causes significant loss events across their portfolios." \n Sociotechnical Systems Security awareness training and behavioral games: Toregas and Zahn (2014) mention a growing consensus that cyber security is not achievable by focusing solely on technological aspects; it also requires understanding both the technologies and their users' behaviors. The importance of understanding human-computer interaction has been studied widely since the works of Adams and Sasse (1999) and Sasse et al. (2001). Recently, behavioral digital games based on computer simulations have been introduced to train users' behavior and awareness of using technologies securely (Cone et al. 2007). Existing interdisciplinary research in financial systems: Bohme (2010b) argues that some key obstacles causing cyber-insurance market failure are due to a lack of understanding of information economics. Interdisciplinary and integrated research that focuses on the cyber ecosystem is better than targeting each individual technological element alone (Bohme 2010a).
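As a concrete illustration of the kind of actuarial data analysis listed above, here is a minimal frequency-severity Monte Carlo sketch. The Poisson claim frequency, lognormal severity parameters, and loading factor are invented purely for illustration; a real pricing exercise would calibrate them to incident data (for example, from a data pooling platform) and would also need to model correlation across policies, which this toy example deliberately ignores.

```python
import math
import random
from statistics import mean

rng = random.Random(42)

# Illustrative (uncalibrated) assumptions for a single policy:
FREQ_LAMBDA = 0.8               # expected number of cyber incidents per year
SEV_MU, SEV_SIGMA = 11.0, 1.5   # lognormal severity parameters (log scale)
LOADING = 0.4                   # premium loading on top of the expected loss
N_YEARS = 100_000               # number of simulated policy-years

def poisson(lam: float) -> int:
    """Knuth's method; adequate for the small lambda used here."""
    threshold, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            return k
        k += 1

def annual_loss() -> float:
    """Total loss in one simulated policy-year."""
    n_incidents = poisson(FREQ_LAMBDA)
    return sum(rng.lognormvariate(SEV_MU, SEV_SIGMA) for _ in range(n_incidents))

losses = sorted(annual_loss() for _ in range(N_YEARS))
pure_premium = mean(losses)
var_99 = losses[int(0.99 * N_YEARS)]          # crude 99% value-at-risk
gross_premium = pure_premium * (1 + LOADING)

print(f"Expected annual loss : {pure_premium:12,.0f}")
print(f"99% annual VaR       : {var_99:12,.0f}")
print(f"Indicative premium   : {gross_premium:12,.0f}")
```

Even this toy model makes the point about tails visible: the simulated 99% annual value-at-risk comes out many times larger than the expected loss, so a premium based on expected losses alone would leave the insurer exposed to exactly the kind of aggregate events discussed elsewhere in this chapter.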
This call for interdisciplinary research is similar to recent progress in understanding systemic risks in the financial markets. Schneier (2002) and Moore (2007, 2009) state that a combination of economics, game theory, and psychology is necessary to understand and manage cybersecurity in the modern and future networked environment. Johnson et al. (2011) model security games with market insurance to inform policy makers on adjusting incentives to improve network security and the cyber-insurance market. Baddeley (2011) applies lessons from behavioral economics to understand issues of information security. More papers on the economics of information security and privacy can be found in the book by Moore et al. (2010). Multiagent technique: The agent-based approach to modeling complex systems is becoming popular in the financial markets, but it is not commonly used by researchers to model cyberspace or to perform stress testing on particular cyber events. Recently, a few researchers have started to apply this technique to model network resilience (Sifalakis et al. 2010; Baxter and Sommerville 2011; Sommerville et al. 2012). \n General Categorization of Cyber Risks In the previous analysis, we presented the literature related to the evolution of cyber insurance. It is our intention to further examine the challenges for the development of a cyber-insurance market. "An understanding of insurance must begin with the concept of risk - that is, the variation in possible outcomes of a situation" (Zeckhauser 2008). We embark on a theoretical and empirical analysis, using examples of cyber security events, in order to better understand cyber risks and relate them to cyber security. The first crucial observation is that numerous different things can be included under the term "cyber risks." A more precise definition results if we break "cyber risks" into three distinct elements. • (Cyber) Risk can be defined as a measurable quantity, according to Knight (1921). In that sense, probability distributions could be assigned to cyber threats. Thus, it is feasible to quantify (cyber) risks and consequently estimate insurance premiums. • (Cyber) Uncertainty can be considered the unmeasurable quantity related to cyber events: we do not know the states of the world, and the precise probabilities are not known. It is also known as Knightian Uncertainty, based on the classic distinction by Frank Knight (1921). • (Cyber) Ignorance can be considered a third category, in which we may not even have the ability to define what states of the world are possible (Zeckhauser and Visusi 2008). It is one step further from uncertainty: some potential outcomes are unknowable or unknown (Zeckhauser 2006). There are two important types of ignorance: primary ignorance, which concerns situations in which one does not recognize that one is ignorant, and recognized ignorance, when one perceives that ignorance (Roy and Zeckhauser 2013). The financial meltdown of 2008 can be considered such an event, and it can be argued that many catastrophic risks are subject to ignorance. \n Catastrophic Risks and Insurance \n General Description of Catastrophic Risks The above general categorization brings us to further types of risk that influence cyber insurance. "Catastrophes provide the predominant conceptual model of what insurance is about. One pays premiums to secure financial protection against low-probability high consequence events - what we normally call catastrophes." (Zeckhauser 1996a, b).
The main problem is that private markets face difficulties in providing coverage for catastrophic risk, and such risks can thus be deemed "uninsurable" (Jaffee and Russell 1997). The timing and consequences of catastrophic events may vary greatly. We have already identified the frequency/severity spectrum used for cyber events. In other words, catastrophic risks fall within the low-probability, high-consequence class (Kleindorfer and Kunreuther 1999). However, the probabilities and consequences are not clearly defined, particularly toward the upper end of losses. In this chapter, we are more interested in the insurers' perspective on assessing such risks. The Actuarial Standards Board defines "Catastrophe - A relatively infrequent event or phenomenon that produces unusually large aggregate losses." More precisely, "An event is designated a catastrophe by the industry when claims are expected to reach a certain dollar threshold, currently set at $25 million, and more than a certain number of policyholders and insurance companies are affected" (Insurance Information Institute 2015). In that sense, numerous cyber events, as we examine later, can have the rarity and loss magnitude of catastrophic risks. However, catastrophes can involve losses much greater than $25 million. The Swiss Re Sigma Study describes catastrophe losses. In 2014, total insured and uninsured losses due to disasters were estimated at $110 billion (Swiss Re 2015). This number is below the inflation-adjusted 10-year average of $200 billion and lower than the $138 billion of 2013. However, the number of natural disaster catastrophes was at a record high of 189, and in total there were 336 disaster events. This variation in total losses and in the number of catastrophes partly displays their unpredictability as well as their severe consequences. A simple calculation (shown below) confirms that the average loss per catastrophe is much higher than $25 million (insurance covered claims of USD 28 billion of losses from natural catastrophes and USD 7 billion from man-made disasters). There are two major categories regarding the causes of catastrophic risks: • Natural disasters, including georisks (like earthquakes) and climate-induced risks (such as hurricanes and floods) • Man-made catastrophes, a broader category that includes industrial accidents and terrorist attacks (Zurich 2013). Earthquakes can have devastating effects for insurers, but so can situations where thousands of women claim to have been harmed by breast implants or individuals are harmed by asbestos (Zeckhauser 1996a, b). Besides making the distinction between natural and man-made disasters, this example presents some interesting features that allow some initial comments about cyber risks. One feature is that natural disasters are usually localized (geo-specific). The same can apply to cyber events: a system failure in an energy grid can have local effects. Nevertheless, there are many cases - say, a computer virus - that can have regional or global impacts. Cyberspace is by its nature fairly nonlocal, and there are fewer "natural boundaries" that constrain the size of an impact. This allows breaches to diffuse rather easily around the world, resulting in widespread damage. Also, it seems that a disproportionately larger number of man-made breaches and disasters occur in cyberspace (PWC 2015): indeed, it can be argued that there are very few cases in which the human factor is not involved.
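The back-of-the-envelope check referred to above uses only the Swiss Re figures quoted in the text (roughly USD 28 billion of insured claims from natural catastrophes, USD 7 billion from man-made disasters, USD 110 billion of total losses, and 336 disaster events in 2014); this is simply the arithmetic implied by those figures, not an independent estimate.

```python
insured_natural = 28e9   # USD, insured claims from natural catastrophes (2014)
insured_man_made = 7e9   # USD, insured claims from man-made disasters (2014)
total_losses = 110e9     # USD, insured plus uninsured losses (2014)
n_events = 336           # disaster events recorded in 2014
threshold = 25e6         # industry catastrophe designation threshold

avg_insured = (insured_natural + insured_man_made) / n_events
avg_total = total_losses / n_events

print(f"Average insured loss per event: ${avg_insured / 1e6:.0f}M")   # ~$104M
print(f"Average total loss per event:   ${avg_total / 1e6:.0f}M")     # ~$327M
print(f"Catastrophe threshold:          ${threshold / 1e6:.0f}M")
```

Both averages sit well above the $25 million designation threshold, which is the point being made here.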
While the majority of these human-factor incidents may be unintentional, intentional incidents have the potential for particularly expensive damage. \n Aggregate Catastrophes and Systemic Risks "Aggregate catastrophes occur when many similarly situated people, all subject to common risks, suddenly find that they have suffered a loss, and the total losses exceed expectations" (Zeckhauser 1996a, b). The single worst incident suffered by an organization might be considered a measure for informing us about catastrophic risks, especially in large corporations. Infection by viruses or malicious software remains the largest causal factor for single worst incidents (PWC 2015). As argued above, viruses and malware have the ability to propagate rapidly and cause harm to many people and organizations. In that sense, we can further decompose the high-consequence characteristic. One dimension is the number of individuals and organizations that a cyber event might affect. Another dimension is the geographic location where the cyber event takes place: some cyber events might have global reach, enlarging the consequences. An additional critical parameter is the importance of the affected individuals and organizations for the economy and society. A cyber-attack on critical infrastructure can further enlarge the consequences by generating losses to other operations. For example, a failure of the VISA or MASTERCARD systems would not only result in losses for these companies, but would likely generate significant losses for other businesses. The same would apply to other critical (information) infrastructure, and the losses could be identified according to the importance of the system for the operations of other individuals and organizations. \n Global Aggregations of Cyber Risk A report by Zurich and the Atlantic Council attempts to expose "global aggregations of cyber risk" as analogous to the risks associated with the US sub-prime market and the 2008 financial crisis. "Governments and forward looking organizations need to take a holistic view and look beyond these issues to broader risks, including the increasing danger of global shocks initiated and amplified by the interconnected nature of the internet" (Zurich 2014). An illustrative analogy between the financial markets and the information technology of organizations is over-leverage (Zurich 2014). Over-leverage of companies in financial markets was created by excessive debt, while organizations can over-leverage in IT through overreliance on technology solutions. In both cases, leverage is used to maximize returns; however, it is likely that the associated risks were underestimated, as the financial crisis proved. There are two crucial elements in this discussion. The first is a "Lehman moment," a catastrophic event that would spread across the web and cause major losses. Such a "Lehman moment" would encompass ignorance: while it was anticipated that Lehman Brothers could go bankrupt, no one could foresee the chain of events that it triggered and that led to the global financial crisis of 2008. In that sense, even catastrophic events that seem to have a specific impact might end in unpredictable outcomes. The original "Lehman moment" can be regarded as a global shock due to the scale of Lehman Brothers' operations across the world. However, the channel that initially cascaded this global shock was rather localized: the US sub-prime market. The other element is the propagation mechanism.
The complexity and interconnections of financial products and markets eventually transmitted this shock around the globe. The complexity of financial products may be a useful analogy to the increasing complexity of IT systems. It has been argued that the 2008 financial crisis demonstrated that the causes of risks were camouflaged by excess complexity (Zurich 2014). Even when complexity is not excessive, it is still difficult to understand and predict the cascading risks and channels. Another analogy between the internet and the financial markets is that risks were assumed not to be correlated with each other. Nevertheless, this is far from true: financial products and markets can be highly correlated, and the same applies to information technology operations and systems. In that sense, it is not only complexity per se but also complexity due to the interconnected nature of risks that adds to the uncertainty (Zurich 2014). Thus, complexity and interconnections can facilitate systemic problems when "extreme events," such as global shocks, occur. "Connecting to the internet means exposure to nth-order effects - risks from interconnections with and dependencies on" other risk aggregations (Zurich 2014). The report by Zurich identifies seven such aggregations (internal IT enterprise, counterparties and partners, outsourced and contract, supply chain, disruptive technologies, upstream infrastructure, external shocks). It can, however, be argued that due to ignorance they can be more common, or more severe, than expected (for example, external shocks). An additional issue is a possible "perfect storm." In particular, if a cyber "Lehman moment" coincides with other events, this interaction could cause losses of much larger scope, duration, and intensity, similar to the series of events of the 2008 financial crisis (Zurich 2014). It is even more difficult, or rather impossible, to identify and define the interconnections between other events and a "Lehman moment" before it happens, since it is principally unpredictable. In the worst case, catastrophic events would coincide and significantly multiply the damage. This makes mitigation of risks increasingly difficult if the outcomes are unknown or unknowable. \n Global Catastrophic Risks Framework A very useful framework for qualitatively describing globally catastrophic or existential catastrophes was developed by Nick Bostrom (Bostrom and Cirkovic 2011; Bostrom 2013). This framework is based on three factors: severity (how badly the population would be affected), scope (the size of the population at risk), and probability (how likely the disaster is to occur, according to the most reasonable judgment given currently available evidence). The model uses the first two factors and offers considerable flexibility. Scope includes not just the spatial size of the risk that we described earlier, but also generational effects that are important regarding the duration and aftermath of the catastrophe. Nevertheless, the major advantage of this framework is the way it treats probability. "Probability can be understood in different senses... The uncertainty and error-proneness... of risk is itself something we must factor into our all-things considered probability assignments. This factor often dominates in low-probability high-consequence risks - especially those involving poorly understood natural phenomena, complex social dynamics, or new technology, or that are difficult to assess for other reasons" (Bostrom 2013).
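To make the scope and severity dimensions of this framework concrete, here is a minimal classification sketch. The qualitative labels loosely follow Bostrom's grid (personal/local/global scope, endurable/terminal severity), but the example events and their placements are illustrative and debatable choices rather than assignments made in the cited work.

```python
# Qualitative scope/severity grid in the spirit of Fig. 2.  Probability is
# deliberately left out, reflecting the argument that it is often the least
# knowable of the three factors for cyber risks.

SCOPES = ("personal", "local", "global")
SEVERITIES = ("endurable", "terminal")

def classify(scope: str, severity: str) -> str:
    """Map a (scope, severity) pair onto a coarse qualitative category."""
    if scope not in SCOPES or severity not in SEVERITIES:
        raise ValueError("unknown scope or severity label")
    if scope == "global" and severity == "terminal":
        return "existential risk"
    if scope == "global":
        return "global catastrophic risk"
    return f"{scope} {severity} risk"

# Illustrative placements of events discussed in this chapter (debatable):
examples = [
    ("loss of a single organization's operations to malware", "local", "endurable"),
    ("destructive attack on regional energy infrastructure", "local", "terminal"),
    ("fast-spreading worm with correlated losses worldwide", "global", "endurable"),
    ("cyber-enabled nuclear conflict", "global", "terminal"),
]

for event, scope, severity in examples:
    print(f"{classify(scope, severity):>26} | {event}")
```

Whether a given cyber event is "endurable" or "terminal" for those affected is exactly the kind of judgement the qualitative framework is intended to support.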
This treatment of probability facilitates our analysis, since most of the factors discussed above can be adapted to the framework. Scope encompasses geographic spread, the number of affected actors, and the importance of the damage. Moreover, the framework's flexibility allows other concepts to be added. In the discussion that follows, because of the uncertainty and ignorance surrounding the estimation of probabilities, we briefly discuss plausibility, which can be used as a distinct alternative to probabilities (Ramirez and Selin 2014) (Fig. 2). \n Interdependencies and Asymmetric Threats We have discussed correlations and interconnections. Special mention should be given to interdependencies, a related concept that is relevant to cyber risks. Often these concepts are used interchangeably and denote the same thing. However, we would like to expand our analysis by focusing on complex interdependence (Keohane and Nye 1977, 1998), since it can provide an additional theoretical foundation. First of all, it should be emphasized that the context of international relations is central to insurance. Beyond political risk insurance, state relations influence numerous macro-risk factors, such as economic relations, defense, and security. "The information revolution alters patterns of complex interdependence by exponentially increasing the number of channels of communication in world politics" (Keohane and Nye 1998). In addition, commercial and, particularly, strategic information are valuable. The availability and confidentiality of such information across multiple channels increases the level of risk. Information can be used to convince and capture terrorists, prevent and resolve conflicts, and enable countries to defeat adversaries (Nye and Owens 1996). On the other hand, because information reduces the "costs, economies of scale, and barriers of entry to markets, it should reduce the power of large states and enhance the power of small states and non-state actors" (Keohane and Nye 1998). This generates important asymmetries. A small group of hackers could disrupt an IT system that is very large relative to their size and resources. Another notable case is that of WikiLeaks: a single leak, amplified by a single disseminating organization, has global consequences for a superpower. Asymmetric threats and the enabling of non-state actors add even more complexity to the layers described before. The number of threats is therefore multiplied, and consequently risks increase. Moreover, ambiguity regarding the nature and identification of these relatively small actors makes the estimation of risks quite unpredictable. \n Cyber Risks and Losses Before 1989, the insurance industry had not experienced a loss of more than $1 billion from a single event; since then, catastrophes of that magnitude have occurred (Kleindorfer and Kunreuther 1999). As more and more people with greater insured wealth congregate in coastal areas, this is to be expected (even leaving out climate change). "Megacatastrophes," like Hurricane Andrew, therefore seem to happen more often and clearly demonstrate the limitations of relying on historical data to estimate future probabilities of losses (Actuarial Standard Board 2000). Not only are there limitations to historical data, but cyber risks are also new phenomena with continuously evolving technology and factors that are difficult to predict or even imagine. However, it is argued that there is a real likelihood of a global cyber catastrophic event (Zurich 2014).
There are important methodological problems regarding probability estimation when assessing global catastrophic risks (Ord et al. 2010). Due to their high severity and scope, even low-probability risks need to be managed, but the probability of theory, model, or calculation error in doing so is far higher than the risk probability itself, even when the work is done carefully. This means that risk estimates should be regarded as suspect unless bounded by several independent estimates or other constraints. A major concern for the private insurance industry is that it might not be able to provide coverage for some catastrophic events without the possibility of insolvency or a significant loss (Kleindorfer and Kunreuther 1999). This concern is intensified when the scope and severity of the disaster are high. In the event of a "cyber sub-prime," the losses could be massive and potentially result in insolvency. Even more worrying would be the possibility of interconnected events that could amplify such a crisis; the coincidence of catastrophes, or a perfect storm, would also have devastating effects. It is therefore essential to try to understand the cyber risks that can affect insurance. In this part, we attempt to provide a theoretical analysis of risks in order to better understand cyber insurance. In the next part, we attempt to put some flesh on this theoretical skeleton by providing real and imaginary examples. \n Cyber Risks, Catastrophes, and Ignorance \n Identifying Cyber Risks The discussion above indicated that the estimation of probabilities regarding cyber risks is in many cases difficult or impossible. The common methods are based on past events, used to define catastrophes and identify potential losses. These methods have significant limitations, for various reasons. First of all, cyberspace is a very dynamic environment. Information and communication technologies are continuously changing. The internet is constantly expanding: it is embedding existing devices and technologies, and is likely to integrate future innovations, generating the Internet of Things (IoT). The number of interconnected devices, individuals, and organizations is therefore increasing. This results in greater complexity and interdependence among devices with currently unknown functions and vulnerabilities. In that sense, if we assume that we know all the causes of potential losses, that assumption might itself be a display of primary ignorance. Alternatively, we can recognize our ignorance. We attempt to examine practical examples of cyber risks in three ways. The first is the traditional approach based on historic events. The second technique can be considered an expansion of that: we can infer from historical events and develop potential cases, subject to uncertainty. Finally, we build imaginary but plausible scenarios (Ramirez and Selin 2014) in order to better understand cyber uncertainty and push back the boundaries of ignorance. It can be said that effective scenario formation and imagining might reduce ambiguity, enter the space of ignorance, and thereby diminish it. \n Existential and Global Catastrophic Risks Bostrom's classification was developed in regard to threats to the entire future of the human species, or to "merely" global disasters. The cyber counterpart would be risks that can escalate to such a level that they disrupt the global market or indeed current civilization. They are not merely uninsurably large, but terminal to most existing actors.
One possible example might be the misuse of Artificial Intelligence (AI). Autonomous "smart" systems have already demonstrated potential for economically significant misbehavior, such as the 2010 "Flash Crash," which was at least in part due to a systemic interaction of automatic trading agents. As technology advances, AI is likely to become more powerful and ubiquitous, but there are significant control problems that remain to be solved. The fundamental issue is that superintelligent systems do not generally behave in human-compatible ways, and this can produce existential risk (Bostrom 2013). More plausible scenarios involve unpredictable AI actions that are deliberate, autonomous, and potentially very tenacious. These might include the paralysis of the internet globally by AI software embedded in the web infrastructure, or by automated adaptive hacking tools (e.g., descendants of the current DARPA Cyber Grand Challenge). In another scenario, of endurable severity and local scope, an AI system might disrupt operations within a single organization. Of course, severity may vary, as may scope: for example, a failure of ICT systems in a healthcare organization could result in the loss of human lives. The disaster could diffuse globally if the AI of a widespread logistics database system decides not to allow access to information or, even worse, alters or destroys it (for example, because it interprets restoration or circumvention attempts as intrusion attempts). However, because the capabilities of AI are very ambiguous, such scenarios are difficult to define. It may be that there are workable solutions, or that AI will never be too powerful, but these are risky bets. It seems that it is easy for people to overestimate their knowledge regarding AI (Yudkowsky 2011). "It may be tempting to ignore Artificial Intelligence because, of all the global risk...AI is hardest to discuss. We cannot consult actuarial statistics to assign small annual probabilities of catastrophe, as with asteroid strikes. We cannot use calculations from a precise, precisely confirmed model to rule out events or place infinitesimal upper bounds on their probability, as with proposed physics disasters. But this makes AI catastrophes more worrisome, not less." (Yudkowsky 2011). In that sense, AI qualifies for uncertainty and ignorance. AI represents a risk that could go all the way into the extreme upper right-hand box of the framework, but it is both extremely uncertain and largely a future risk: it can be addressed by R&D aimed at safe and beneficial uses of AI. However, cyber risk also has strong interconnections to traditional catastrophic risks. Such risks include major technical disasters, conflict and war, and particularly total war with the use of weapons of mass destruction (WMD). The threat of a nuclear disaster is the most notable case by far, largely because of Stuxnet, a complex piece of malware that interfered with Siemens industrial control systems and is widely speculated to have been used against Iran's nuclear program (NATO 2013). Based on this precedent, it can be argued that a nuclear catastrophe could be triggered through cyberspace. The scale of these risks could vary greatly. Cirincione (2011) and Ackerman and Potter (2011) discuss the global catastrophic risks of nuclear war and catastrophic nuclear terrorism. In both cases, cyberspace is "enabling" these risks. In addition, the internet could provide the most cost-effective opportunity for adversaries.
It enables states and non-state actors and enhances their power. They can transform their capabilities and become nuclear threats that were not imaginable in the past. These asymmetric threats pose great challenges to insurance. Stuxnet is considered to be a government cyber weapon. Rogue states might dedicate more resources to attaining such capabilities, and the same could apply to terrorist groups. It is interesting to note the multiple channels and the complexity surrounding them. State relations can deteriorate, and governments might decide to pursue cyber weapons targeting nuclear as well as other military and critical infrastructure targets. The emergence of terrorist groups is also subject to uncertainty and ignorance: the rapid emergence of Islamic State, raising considerable resources, was not forecast. Hamas and Hezbollah were established terrorist organizations, and it can be alleged that they were capable of using cyberspace. Nevertheless, Israeli officials believed that these organizations used a criminal organization based in a former Soviet state to attack Israel's internet infrastructure during the January 2009 military offensive in the Gaza Strip (NATO 2013). Cyber weapons can also easily spread to other actors, through theft or leakage (such as the exploits revealed in the attack on the security consultancy Hacking Team in 2015), through trade, or by imitation: once Stuxnet was out in the wild, many other groups could analyze it and copy its tricks into their toolkits. The market for zero-day exploits, driven by governments and security companies seeking new tools, both incentivizes the search for more vulnerabilities and inhibits their public disclosure, since discoverers can gain more by secretly selling their find and the agencies using them do not wish to lose their advantage. Even when vulnerabilities are revealed, removing them is sometimes hard, since they might be embedded in systems that cannot easily be upgraded (such as industrial systems or implants); this means that the use of some cyber weapons can lead to further attacks on targets unrelated to the original one. This case highlights the complexity generated by multiple channels and agents, consistent with the concept of nth-order effects (Zurich 2014). The potential cooperation of different agents adds complexity due to the exponential number of combinations: nexuses of adversaries can be formed, pooling resources and capabilities and thus magnifying cyber attacks. Nuclear catastrophes can have regional or global consequences (Cirincione 2011), according to their intensity. Similar cyber global catastrophic scenarios can involve other types of WMD (e.g., biological weapons) or conflict and war. \n Catastrophic Risks War and conflict enabled by cyberspace can vary in consequences and scale. They can also be interdependent with other complex events. The cyber-attack on Estonia in April 2007 was triggered by political frictions with Russia. In August 2008, the conflict between Russia and Georgia was accompanied by hacking activity from unknown foreign intruders that appeared to coincide with Russian military actions (NATO 2013). A crucial observation is that the man-made causes of these cyber attacks are still not known with certainty. Another critical remark is that there are interdependencies between traditional kinetic power and cyber capabilities.
An analogous example to the above cases is the takeover of missile systems by hackers (there are claims this briefly happened to a German Patriot anti-aircraft defense system in 2015 (Storm 2015)). An action by hackers launching missiles could escalate to conflict or war. Now imagine that these missiles are stationed in South Korea, and that they are launched by unknown hackers just after the cyber-attack on Sony that the FBI blamed on Pyongyang (BBC 2015). Sony was about to release The Interview, a comedy about the assassination of North Korea's leader, and tensions with North Korea were running high. This could trigger events that escalate into a catastrophe involving even nuclear weapons. A crisis in Korea could also have a negative impact on global markets, due to the importance of the South Korean economy and its trade interconnections. This example presents just a small part of the complex interdependencies involved, and it could have been even worse. Imagine now that the aforementioned events coincide with a release on WikiLeaks suggesting that North Korea has been abandoned and isolated (a previous WikiLeaks cable suggested that Chinese officials had expressed the desire to relinquish support for North Korea (The Economist 2010)). North Korea could increase its level of alertness and retaliate severely if it felt that the balance of power had changed against it and the regime was under existential threat. If these events coincide, a catastrophe becomes more likely. It is also possible that such events could be fabricated and lead to an "accident." It is important to recognize the multiple layers of complex interdependencies, which on many occasions can be unpredictable. The "WikiLeaks paradigm" is noteworthy because it can generate the conditions and instability that can consequently trigger other disasters. In January 2011, the Canadian government reported an attack against its Department of National Defence as well as the Finance Department and Treasury Board, causing the disconnection of the main Canadian economic agencies from the internet (NATO 2013). Once again, there is ambiguity regarding the identity of the attackers; in addition, Canadian counter-espionage agents were left scrambling to find out how much sensitive information had been compromised (Weston on CBC News 2011). In that sense, it is not only difficult to forecast cyber-attacks; it is also unclear how much loss they caused, which makes mitigation harder. A proof of that is that cyber-attacks later disrupted the Department of Finance and Treasury Board again (MacDonald and King on WSJ 2015). Thus, cyber-attacks are repeated frequently on the same critical infrastructure. Although these cyber-attacks might not qualify as catastrophic risks, it is hard to estimate the losses and associated costs. A considerable loss is the opportunity cost of not being able to use the economic infrastructure of the Department of Finance and Treasury Board. Besides Stuxnet, the Slammer worm disabled safety monitors at a nuclear facility as early as 2003, and later, in October 2011, the Duqu Trojan hit Iran's nuclear facilities (Vaidya 2015). This is another indication of the frequency of cyber-attacks on nuclear facilities, which could easily lead to major catastrophes. Not only are nuclear facilities targeted; energy infrastructure has also experienced cyber-attacks. A notable case is the Shamoon malware, which destroyed 30,000 computers at Saudi Aramco in August 2012.
Interestingly enough, five days later, a similar attack forced RasGas, one of the largest producers of liquefied natural gas, to shut down its website and e-mail (BBC 2012). Although oil and gas supply was reportedly not disrupted in these cases, they suggest that such disruption is a plausible future consequence. In particular, similar cyber-attacks could create shocks to the global economy through interconnections if they coincide with other events affecting the price of energy. We have mainly focused on cyber events that produce high-consequence outcomes for a single organization or a small number of affected organizations. Nevertheless, another important category of cyber events comprises those that have an impact on a wide range of individuals and organizations. This type of event is likely to generate systemic global catastrophes. There are numerous examples, and in respect to losses some cases stand out. The Code Red worm, as early as July 2001, infected 359,000 computers in less than 14 h and caused estimated losses of $2.6 billion; Mydoom in 2004 skyrocketed losses to $38.5 billion; Conficker in 2008 infected 11 million hosts with an estimated loss of $9.1 billion; and the list is long (Vaidya 2015). It should be noted that these disasters are systemic, with correlated global effects. They can therefore be considered potential "Lehman moments" for cyber insurance. \n Conclusion: Summary, Challenges, and Future Directions: The Development of the Cyber-Insurance Market Cyber risks are rapidly evolving due to technological change and the systemic and complex nature of the ICT world, producing fundamental uncertainty and ignorance. Cyber insurance typically focuses on the less uncertain risks or constrains uninsurable risks to make them more manageable. Tools or practices for handling interdependent security, correlation, and information asymmetries, as well as the lack of reinsurance, would help the market grow. While there are some cyber risks for which we have sufficient information for quantifiable estimates, in the majority of cases uncertainty and ignorance prevail. This reflects the very limited, if any, information regarding the nature and evolution of cyber-attacks. There are two basic problems in obtaining information. The first concerns the identity of attackers. The agents responsible for cyber threats are highly varied: they can range from large nations and militaries to organized crime and activists. The second issue, somewhat related to the first, concerns the resources and skills of these agents, which can also vary substantially. There are examples of single hackers who managed to cause catastrophic damage, like Michael Calce, aka "MafiaBoy," who caused an estimated $1.2 billion in damage with attacks on CNN, Dell, eBay, and Amazon (Niccolai 2000; Harris 2006). Organized crime groups (OCGs) are getting more involved in cyber crime, and trends suggest considerable increases in scope, sophistication, number and types of attacks, number of victims, and economic damage (Europol 2014). Moreover, besides traditional OCGs that leverage their existing criminal activity, there are many new organized criminals focusing solely on cyber crime. They are capable of building sophisticated and complex systems for stealing money and intellectual property at a "grand scale," and it has been reported that in the former Soviet Union there are 20-30 criminal groups that have reached "nation-state level" capabilities (Ranger 2014).
It has been argued that many governments are developing their cyber offensive and defensive capabilities, and most particularly cyber intelligence operations. The US is further "aggressively" enhancing its cyber capabilities, because of claims by officials about serious cyber threats from China and the occurrence of high-magnitude attacks, for example the attack on Sony attributed to North Korea (Mason and Hosenball 2015). There is considerable uncertainty and ignorance regarding the nature and source of many threats. Often the perpetrating agents cannot be identified. On top of that, there are allegations that some governments might employ hackers or even organized cyber criminals. In this dynamic environment, threat agents can easily change identity and diffuse their knowledge and innovative technologies. At the same time, much information regarding these threats or attacks might remain unknown. Finally, cyberterrorist acts have been anticipated, but no one can predict their potential scale; an analogy with the unexpected rise of Islamic State (IS) might be drawn. In general, it is very hard, and in some cases seems impossible, to obtain information and predict the frequency and magnitude of cyber-attacks. At the same time, it is also difficult to estimate the potential losses from cyber-attacks, due to interdependencies that can propagate shocks and to strongly correlated risks. These, along with limited information regarding reputation loss, the opportunity cost of operational interruptions, and the valuation of intellectual property, among others, impose significant barriers to the development of insurance markets. In that sense, uninsurable risks may remain. Nevertheless, building better insurance and financial models, such as some of the actuarial models referred to above, is a first step toward better understanding and estimating cyber risks and relating them to insurance premiums. On top of that, incentives, regulation and liability provisions, new technologies for better security, and investment in secure infrastructure can diminish some risks and facilitate the further development of cyber-insurance markets. It may be that these barriers are insurmountable, or that currently undiscovered tools - whether technological, actuarial, or social - are ready to be found. The challenge is extremely hard, involving the management of systemic risks with elements of extreme uncertainty and ignorance, but the market rewards would be equally grand. Fig. 1 Size distribution of data losses (based on data from datalossdb 2000-2005). Expected number of losses per year larger than a certain size as a function of number of records lost. Note the power-law heavy tail for larger losses (exponent ≈ −0.66, consistent with the results in Overill and Silomon (2011) and Maillart and Sornette (2010)). This tail may be dominated by more targeted events and organized crime, including financial fraud, insider abuse and theft, as well as malware (Overill and Silomon 2011). \n Fig. 2 Qualitative risk categories", "date_published": "n/a", "url": "n/a", "filename": "Petratos2018_ReferenceWorkEntry_CyberInsurance.tei.xml", "abstract": "This chapter is an introduction to cyber insurance. We describe the different types of risks as well as uncertainty and ignorance related to cyber security.
A framework for catastrophes on the cyber space is also presented. It is assessed which risks might be insurable or uninsurable. The evolution and challenges of cyber insurance are discussed and finally we propose some thoughts for the further development of cyber insurance markets.", "id": "1411eef6f7ee33fecc9599c5053727c1"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Seán S Óhéigeartaigh", "Jess Whittlestone", "Yang Liu", "Yi Zeng", "Zhe Liu"], "title": "Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance", "text": "Introduction Artificial intelligence has been identified as a key suite of technologies for many countries worldwide, in large part motivated by its general-purpose nature (Brynjolfsson and McAfee 2014) . AI technologies, and machine learning techniques in particular, are being fruitfully applied to a vast range of domains, including language translation, scientific research, education, logistics, transport and many others. It is clear that AI will affect economies, societies and cultures profoundly at a national, international and global level. This has resulted in increasing attention being paid to both AI ethics: questions about how we should develop and deploy AI systems, given their potential impact on wellbeing and other deeply held values such as autonomy or dignity; and to AI governance: the more practical challenge of ensuring the ethical use of AI in society, be that through regulation, governance frameworks, or 'softer' approaches such as standards and ethical guidelines. 1 Cross-cultural cooperation will be essential for the success of these ethics and governance initiatives. By 'cross-cultural cooperation', we mean groups from different cultures and nations working together on ensuring that AI is developed, deployed, and governed in societally beneficial ways. In this paper, we focus in particular on cooperation extending across national boundaries. Examples include (but are not limited to) the following: AI researchers from different countries collaborating on projects to develop systems in safe and responsible ways; establishing networks to ensure that diverse global perspectives can feed equally into international discussions about the ethical issues raised by AI; and involving a range of global stakeholders in the development of practical principles, standards, and regulation. By encouraging crosscultural cooperation, we do not necessarily mean that all parts of the world should be subject to the same norms, standards, and regulation relating to AI, or that international agreement is always needed. Identifying which issues will need global standards or agreements, and where more cultural variation is needed, is itself a key challenge that will require cooperation to address. Cross-cultural cooperation is important for several reasons. First, cooperation will be essential if AI is to bring about broad benefits across societies globally, enabling advances in one part of the world to be shared with other countries, and ensuring that no part of society is neglected or disproportionately negatively impacted by AI. Second, cooperation enables researchers around the world to share expertise, resources, and best practices. This enables faster progress both on beneficial AI applications, and on managing the ethical and safety-critical issues that may arise. 
Third, in the absence of cooperation, there is a risk that competitive pressures between states or commercial ecosystems may lead to underinvestment in safe, ethical, and socially beneficial AI development (Askell et al. 2019; Ying 2019). Finally, international cooperation is also important for more practical reasons, to ensure that applications of AI that are set to cross-national and regional boundaries (such as those used in major search engines or autonomous vehicles) can interact successfully with a sufficient range of different regulatory environments and other technologies in different regions (Cihon 2019). Drawing on the insights of a group of leading scholars from East Asia and Europe, 2 we analyse current barriers to cross-cultural cooperation on AI ethics and governance, and how they might be overcome. 2 We convened a workshop on cross-cultural trust in July 2019 (https://www.eastwest.ai/), with representatives from UK universities (Cambridge, Bath) as well as from Chinese and Japanese universities and initiatives (Universities of Hong Kong, Peking, Fudan, Keio, and the Berggruen Institute China Center and the Chinese Academy of Sciences). These representatives have been heavily involved in AI ethics and governance conversations and collaborative projects across Europe, North America, and Asia. This paper draws on insights from this workshop, which focused particularly on the role of academia in building cross-cultural trust in AI. We do not suggest that the regions represented are the only ones that matter in this conversation, nor that the members present can be considered to represent the diversity of views and expertise in the regions they are based in. However, we feel that this small step in establishing a network was of value, and generated useful insights and ground to build on. The workshop was held under Chatham House Rule. We focus on cooperation between Europe and North America on the one hand and East Asia on the other. These regions are currently playing an outsized role in influencing the global conversation on AI ethics and governance (Jobin et al. 2019), and much has been written recently about competition and tensions between nations in these regions in the domains of AI development and governance, especially in the case of China and the USA. However, our discussion and recommendations have implications for broader international cooperation around AI, and we hope they will spur more attention to promoting cooperation across a wider range of regions. As AI systems become more capable and their applications more impactful and ubiquitous, the stakes will only get higher. Establishing cooperation over time may become more difficult, especially if cross-cultural misunderstandings and mistrust become entrenched in intellectual and public discussions. If this is the case, then the earlier a global culture of cooperation can be established, the better. Cultivating a shared understanding and deep cooperative relationships around guiding the impacts of AI should therefore be seen as an immediate and pressing challenge for global society. 2 The Role of North America, Europe, and East Asia in Shaping the Global AI Conversation North America, Europe, and East Asia in particular are investing heavily in both fundamental and applied AI research and development (Benaich and Hogarth 2019; Haynes and Gbedemah 2019; Perrault et al. 2019), supported by corporate and government investment. Various analyses have framed progress on the development and deployment of AI between the USA and China in particular through a competitive lens (Simonite 2017; Allen and Husain 2017; Stewart 2017), though this framing has received criticism both on normative and descriptive grounds (Cave and ÓhÉigeartaigh 2018). Scholars and policy communities in these regions are also taking deliberate and active steps to shape the development of ethical principles and governance recommendations for AI, both at a regional and global level. This is reflected in government-linked initiatives, such as the activities of the European Union's High-Level Expert Group on Artificial Intelligence, which has produced ethics guidelines and policy and investment recommendations as its first two publications 3 ; the UK Government's commitments to 'work closely with international partners to build a common understanding of how to ensure the safe, ethical and innovative deployment of Artificial Intelligence' (May 2018); and the Chinese Government's similar commitment to 'actively participate in global governance of AI, strengthen the study of major international common problems such as robot alienation and safety supervision, deepen international cooperation on AI laws and regulations, international rules' (China State Council 2017). North America, Europe, and East Asia are each also contributing disproportionately towards international AI standards work ongoing in fora such as the International Organization for Standardization (ISO), 4 the Institute of Electrical and Electronics Engineers (IEEE), 5 and the Organization for Economic Co-operation and Development (OECD). 6 The prominence of North America, Europe, and East Asia is further seen in the leadership and composition of multi-stakeholder and nongovernmental initiatives such as the Partnership on AI, 7 the Future Society, 8 the International Congress for the Governance of AI, 9 and the Global Partnership on AI (Hudson 2019). 10 A large majority of the most prominent conferences on AI ethics and governance have taken place in these regions, including the US-based Beneficial AI conference series, 11 the Beijing Academy of AI Conference series, 12 the Beijing Forum, 13 the US-based Artificial Intelligence, Ethics, and Society conferences, 14 governance and ethics workshops attached to the leading machine learning conferences, and many more. This combination of: a. technological leadership in North America, Europe, and East Asia; b. the outsized role of these regions in shaping the global ethics and governance conversation; and c. the underlying tension introduced by progress being framed through a competitive lens, and a perception of disagreements on fundamental ethical and governance issues leads us to focus on the barriers that exist to productive intellectual exchange and cooperation between these regions and cultures in particular. A full analysis of cross-cultural cooperation on AI ethics and governance, which is outside the scope of this paper, must consider the roles of all nations and cultures, given that the domain of impact of AI technologies is truly global. It will be particularly important for further work to address the inequalities in power and influence that are emerging between technology-leading nations and those to which these technologies are being exported (Lee 2017), and the responsibility of technology-leading nations to include and empower those nations in global governance and ethics conversations.
3 Barriers to Cross-cultural Cooperation on AI Despite the emergence of several international alliances in AI ethics and governance, many barriers remain to achieving real cross-cultural cooperation on the norms, principles, and governance frameworks that should guide how AI is developed and used. Mistrust between different regions and cultures is one of the biggest barriers to international cooperation in AI ethics and governance. At present, there is a particular environment of mistrust between scholars, technologists, and policymakers in the USA and China. 15 This culture of mistrust is underpinned by both: a. a history of political tensions between these two powerful regions that has increased significantly in recent years, and is currently contributing to the competitive framing of AI development as a 'race' between 'Eastern' and 'Western' nations. 16 and b. the divergent philosophical traditions upon which these regions are founded, leading to a perception of significant and irresolvable value differences between 'Western' and 'Eastern' cultures on key issues such as data privacy (Larson 2018; Horowitz et al. 2018; Houser 2018) . A range of recent technological and political developments may also be contributing to this mistrust, including concerns about the public and political influence of technology giants in the USA (Ochigame 2019) ; perceptions of and reactions to the Chinese Social Credit score system (Chorzempa et al. 2018; Song 2019) ; and concerns about contentious uses of AI technologies, with notably controversial examples including the use of AI in immigration control (Whittaker et al. 2018) , criminal risk assessment in the USA (Campolo et al. 2017) , and in tracking communities such as the Uighur Muslim minority in China (Mozur 2019) . Adversarial rhetoric from political and defence leaders in the USA also contributes to this tension. Recent examples reported in the media include stating intentions to 'be the threat' of AI 17 ; comments focused on the 'otherness' of China as an adversary, 18 amid broader concerns regarding Chinese technological progress as a threat to US global leadership (Jun 2018) . A continued exacerbation of this culture of mistrust could severely undermine possibilities for global cooperation on AI development and governance. In addition, it is unclear how far existing cross-cultural collaborations and alliances can go to shape the behaviour of actors that are as globally dominant as the USA, China, and the large multinational corporations based in these countries. Even if AI ethics frameworks can be agreed on in principle by multi-stakeholder groups, for example, it will be far from straightforward to implement them in practice to constrain the behaviour of those with disproportionate power to shape AI development and governance. Another challenge for effective cooperation is balancing the need for global cooperation with the need for culturally and geographically sensitive differentiated approaches (Hagerty and Rubinov 2019) . It is crucial we avoid a situation where one or two nations simply try to impose their values on the rest of the world (Acharya 2019) . In certain specific domains, for example where AI is being used to support the delivery of healthcare, different cultures may perceive tradeoffs very differently (Feldman et al. 1999) , and it may be not just possible but necessary to implement region-specific standards and governance. 
AI systems will also have different impacts as they are deployed in different cultural regions, which may also require different governance approaches (Hagerty and Rubinov 2019) . For some aspects of AI development and governance, however, cooperation will be much more crucial. For example, some potential uses of AI technologies in military contexts, such as in automated targeting and attack, could impinge upon human rights and international humanitarian law (Asaro 2012) . Another concern is that by automating aspects of information gathering, decision-making, and response in military arenas, the potential for unwanted escalation in conflict situations may increase due to events occurring and requiring responses faster than is compatible with effective human judgement and oversight (Altmann 2019) . In both these cases, there may be individual military advantages to nations pursuing the technology; but in the absence of international agreements and standards, the overall effect may be destabilizing. 19 International agreement will also be of particular importance for all cases in which AI technologies are developed in one region, but used or deployed in a different region. A key challenge for cross-cultural cooperation will therefore be to identify the areas where international agreement is most important, and distinguish these from areas where it is more appropriate to respect a plurality of approaches. There are also more practical barriers to cooperation between nations: language barriers, lack of physical proximity, and immigration restrictions put limits on the ability of different cultures and research communities to communicate and collaborate. Furthermore, despite science being a global enterprise, the language of scientific publication remains predominantly English. \n Overcoming These Barriers to Cooperation While cross-cultural cooperation in AI ethics and governance will be genuinely challenging, we suggest that there are steps that can be taken today to make progress, without first needing to tackle the greater problems of finding consensus between cultures on all fundamental ethical and philosophical issues, or resolving decades of political tension between nations. \n Building Greater Mutual Understanding, Including Around Disagreements Mistrust between nations is a serious concern for the future of AI ethics and governance. However, we suggest that this mistrust is at least partly fuelled by misunderstandings and misperceptions, and that a first step towards building greater crosscultural trust must therefore be to identify and correct important misperceptions, and enhance greater mutual understanding between cultures and nations. It would be easy to assume that the main barrier to building trust between East and West around AI is that these regions of the world have very different fundamental values, leading them to different-perhaps conflicting-views of how AI should be developed, used, and governed from an ethical perspective. While value differences between cultures certainly exist, claims about how those differences manifest often depend on unexamined concepts and entrenched assumptions, and lack empirical evidence (Whittlestone et al. 2019) . The idea that 'Eastern' and 'Western' ethical traditions are fundamentally in conflict also oversimplifies the relationship between the two. 
There are many different philosophical traditions that might be referred to under either heading: there are, for example, many important differences between relevant philosophical perspectives across China, Japan, and Korea (Gal 2019) , and the values and perspectives of 'Western' philosophy have changed a great deal over time (Russell 1945) . More broadly, ethical and cultural values in both regions are in constant evolution, as captured by projects such as the World Values Survey 20 and the Asian Barometer. 21 Differences in ethical and cultural traditions and norms across regions are often assumed to underpin contrasting governance approaches. For example, privacy is often seen as an issue for which significant value differences exist between East and West, leading to a perception that laxer regulation and controls exist on data privacy in China compared with the USA and Europe. However, such claims are often made in very broad terms, without substantive evidence or analysis of how significant these differences are or how they manifest in practice (Ess 2005; Lü Yao-Huai 2005) . This leads to misunderstandings in both directions. First, there are significant differences between the USA and Europe on both conceptions of privacy (Szeghalmi 2015) and regulations that relate to privacy (McCallister et al. 2018) . These are often missed in Chinese perceptions of Western societies, which tend to focus just on the USA. 22 Second, Western perceptions of data privacy in China may be outdated: Lü Yao-Huai (2005) highlighted as early as 2005 that the relevant literature on information ethics was much younger in China than in the USA, but was evolving quickly and in a manner significantly informed by Western scholarship. A range of papers and reports from Chinese scholars and policymakers have highlighted the importance of data privacy in AI ethics and governance (Beijing Academy of Artificial Intelligence 2019; Ying 2019; Zeng et al. 2018; Ding 2018b ). Principles around protecting individuals' data privacy are also beginning to be borne out in regulatory action in China; over 100 apps have been banned by the government for user data privacy infringements, with dozens more being required to make changes relating to data collection and storage. 23 This is not to suggest that there are not meaningful differences in values, norms, and regulations relating to data privacy between these countries, but that such differences have often been oversimplified and are not well understood. Another example of differing perceptions concerns China's social credit score (SCS) system. The SCS has been discussed with great concern in Western media, policy circles, and scholarship, and presented as an example of Orwellian social control by the Chinese government (Botsman 2017; Pence 2018) , representative of a culture and government with profoundly different values to the West (Clover 2016 ). However, both Chinese and Western sources have argued that there are significant misunderstandings surrounding the SCS. Multiple scholars have pointed out that the SCS is not designed to be a single unified platform that rates all 1.4 billion Chinese citizens (as is often supposed), but rather a web of individual platforms with latitude for different interpretations, with social credit scores mostly given by financial institutions (as opposed to a big data-driven comprehensive rating) (Mistreanu 2019; Sithigh and Siems 2019) .
Song (2019) notes that many of the measures in the SCS are designed to tackle issues such as fraud and corruption in local government. Chorzempa et al. (2018) also highlight that 'many of the key components of social credit, from blacklists to widespread surveillance… already exist in democracies like the United States.' China's SCS is likely to evolve significantly over time, and there are likely to be genuine reasons for concern both in terms of present and future implementation. However, a much clearer cross-cultural understanding of how the SCS works, is being used, and is impacting Chinese citizens, would allow dialogue on relevant ethics and governance issues to progress more constructively. Given the lack of shared knowledge and discourse that has existed historically between regions such as the USA, Europe, and China, it is not surprising that many misperceptions exist between them. We should therefore be wary of jumping too quickly to assume intractable and fundamental disagreements. Misunderstandings clearly exist in both directions: analyses of public opinion survey data suggest, for example, that both American and Chinese populations hold various misperceptions about the other nation's traits and characteristics (Johnston and Shen 2015). As mentioned above, in China, the diversity of Western societies is also often oversimplified down to a single pattern of American life. At the same time, the USA and Europe have historically struggled to understand China (Chen and Hu 2019), evidenced for example by repeated failures to predict China's liberalization (or lack thereof) or periods of economic growth (The Economist 2018; Cowen 2019; Liu 2019). Language barriers present a particular difficulty for Western nations in gleaning what is happening in China in terms of AI development, ethics, and governance (Zhang 2017). As Andrew Ng points out in a 2017 interview in the Atlantic: 'The language issue creates a kind of asymmetry: Chinese researchers usually speak English so they have the benefit of access to all the work disseminated in English. The English-speaking community, on the other hand, is much less likely to have access to work within the Chinese AI community' (Zhang 2017) . For example, Tencent released a book on AI strategy (Tencent Research Institute et al. 2017) which includes deep analysis of ethics, governance, and societal impacts, but has received relatively little English-language coverage (Ding 2018a). Even on empirical matters such as level of Chinese public investment in AI research and development, widely reported figures in the USA may be inaccurate by an order of magnitude (Acharya and Arnold 2019) . 22 This was a misperception noted by multiple Chinese scholars at the aforementioned July workshop. 23 In November 2019, the Chinese Ministry of Public Security banned 100 apps that failed to meet standards on individuals' data privacy; the body has investigated 683 apps in 2019 (National Cyber Security Advisory Centre 2019). In addition, in December 2019, the Chinese Ministry of Industry and Information Technology released a list of 41 apps that would have to make changes to comply with data regulations by the end of 2019 (Ministry of Industry and Information Technology of the People's Republic of China 2019). In July 2018, China's Shandong Province brought a major case relating to infringement of personal information against 11 companies (Ding 2018c).
The recently published Beijing AI Principles (Beijing Academy of Artificial Intelligence 2019) and similar principles developed around the world (Cowls and Floridi 2018) in fact show substantial overlap on key challenges (Zeng et al. 2018; Jobin et al. 2019) . The Beijing Principles make clear reference to the key concepts and values which have been prominent in other documents, including that AI should 'benefit all humankind'; respect 'human privacy, dignity, freedom, autonomy and rights'; and be 'as fair as possible, reducing possible discrimination and biases, improving its transparency, explainability and predictability.' In addition, both the Beijing AI Principles, and the National Governance Principles of the New AI, China, call for openness and collaboration, with the latter encouraging 'cooperation across disciplines, domains, regions, and borders' (Laskai and Webster 2019) . However, nations with different cultures may interpret and prioritize the same principles differently in practice (Whittlestone et al. 2019) , which may be a further source of misunderstanding. We cannot simply assume, for example, that 'Western' cultures value privacy more highly than 'Eastern' ones; instead, we need a more nuanced understanding of how privacy may be prioritized differently when it comes into conflict with other important values, such as security (Capurro 2005) . Similarly, although it is important to recognize that many cultures value autonomy, it is equally important to understand the different connotations and philosophical assumptions underpinning this value in different contexts (Yunping 2002) . Given the rich history of misunderstanding between nations, to build greater cross-cultural cooperation, we should start by focusing on identifying those misperceptions most relevant to AI ethics and building greater mutual understanding of where more substantive disagreements exist and are likely to impact governance approaches. In doing so, it is worth distinguishing explicitly between disagreements pertaining to ethics as opposed to governance issues, since it may sometimes be possible for groups to agree on the same governance principles despite justifying them with different ethical assumptions, as we will discuss later. It may also be helpful to distinguish between misunderstandings that pertain directly to AI (such as misperceptions of other countries' investment in technology, or misinterpretation of data protection laws) and those that pertain to broader societal, political, or philosophical matters that are more indirectly relevant to AI, as they may require different approaches to resolve. Acknowledging the role of misunderstandings does not, of course, imply that all matters of intercultural tension in AI ethics and governance are fundamentally based on misunderstandings. Deep and fundamental disagreements across regions will remain on a range of issues, including those relating to the relationship between the individual, society, and the state; the level and nature of integration between civil, private, and military sectors; and various specific matters of social policy. However, focusing initially on reducing misunderstandings will aid in establishing more clearly where these fundamental differences exist, while at the same time identifying contexts in which sufficient agreement exists for fruitful cooperation. Doing so is a crucial first step towards addressing the broader challenges of cross-cultural cooperation on AI ethics and governance.
\n Finding Ways to Cooperate Despite Disagreements Even where important differences of view on AI ethics, governance, and broader societal issues exist, forms of agreement and cooperation can still be possible. As mentioned earlier, a key outstanding challenge for AI ethics and governance is identifying those areas where cross-cultural agreement on norms, standards, or regulation is crucial, and where different interpretations and approaches are acceptable or even desirable. This is precisely the kind of challenge which itself requires cross-cultural cooperation: the delineations must be informed by diverse cultural perspectives on the impacts of AI in different contexts, and the needs and desires of different populations. Indeed, this approach is reflected in the National Governance Principles of the New AI, China, which includes the recommendation to 'Launch international dialogue and cooperation; with full respect for each country's principles and practices for AI governance, promote the formation of a broad consensus on an international AI governance framework, standards, and norms' (Laskai and Webster 2019) . Regional and cultural differences on the abstract level of ethical assumptions and high-level principles are also not necessarily a barrier to agreement on more concrete norms and governance. If it were impossible to reach any practical agreement without consensus on fundamental ethical issues, many important international agreements, such as the Nuclear Weapons Ban Treaty, would not have been possible. The notion of an 'incompletely theorized agreement' in legal scholarship (Sunstein 1995) describes how it is often possible for people who disagree on fundamental or abstract matters to nonetheless agree on specific cases-and that this is central to the functioning of law as well as of a pluralistic society more broadly. Several authors in the literature on intercultural information ethics have promoted the related idea of aiming to arrive at an 'overlapping consensus' (Rawls 1993) , where different groups and cultures may have different reasons for supporting the same norms and practical guidelines (Taylor 1996; Søraker 2006; Hongladarom 2016) . For example, Taylor (1996) discusses how we have managed to ground shared norms of human rights in different cultural traditions. While Western philosophies differ substantially from others such as Buddhism in how much importance they give to the human agent and its unique place in the cosmos, both seem to end up grounding the same norms of human rights. Wong (2009) criticizes this idea that intercultural information ethics can arrive at shared norms with different justifications, suggesting that this risks making norms too 'thin', devoid of all normative content. Søraker (2006) acknowledges a similar objection to this 'pragmatic' approach to information ethics: that it may result in agreements that are fragile due to not being sufficiently grounded in substantive normative content. However, in line with Søraker's own response to these objections, we believe that the aim of 'overlapping consensus' should be to arrive at shared norms and practical guidelines which are in fact more robust by virtue of being endorsed and justified from a range of different philosophical or normative perspectives.
This should be distinguished from a situation where one culture uses pragmatic arguments to attempt to force its own values upon others, or where several cultures reach agreement but for reasons with little normative content, which, we agree with Wong, would be concerning. Taylor's example of human rights being supported by multiple philosophical perspectives appears to demonstrate the plausibility of this kind of well-substantiated overlapping consensus. Indeed, we suggest that finding areas of overlapping consensus on norms and practical guidelines may be much more important for ensuring the benefits of AI than aiming for global consensus on a shared set of fundamental values-an aim which underpins many recent proposals. 24 Consensus on high-level ethical principles does not necessarily mean they are well-justified (Benjamin 1995) , and the best way to arrive at more robustly justified norms, standards, and regulation for AI will be to find those that can be supported by a plurality of different value systems. \n Putting Principles into Practice Even where it is possible to improve mutual understanding and arrive at shared governance approaches in theory, some might object that it will still be difficult to influence the development and use of AI in practice, especially since to do so requires influencing the behaviour of powerful states and companies that have little incentive to cooperate. Although a full discussion of the complex power dynamics between states, companies, and other actors on issues relevant to AI ethics and governance is beyond the scope of this paper (and worthy of further research), we briefly explain why we do not think this barrier undermines our proposals. While challenging, historical precedent suggests that it is possible for public and academic alliances to influence the behaviour of powerful actors on issues of global importance. There is evidence to suggest that broad, cross-cultural 'epistemic communities'-i.e. networks of experts in a particular domain-can be particularly effective at supporting international policy coordination (Haas 1992). For example, a community of arms control experts helped shape cooperation between the USA and Russia during the Cold War by creating an internationally shared understanding of the problem of nuclear arms control (Adler 1992) , and the ecological epistemic community managed to successfully coordinate national policies to protect the stratospheric ozone layer (Haas 1992) . The commitments of large companies and even nations around AI have already been influenced by combinations of employee activism, international academic research, and campaigning. A notable example is in the application of AI in military contexts. Concerns over the use of AI in warfare have been the subject of high-profile campaigns by experts across academia and civil society internationally, such as those involved in the International Committee for Robot Arms Control (ICRAC) and the Campaign to Stop Killer Robots. These campaigns played a leading role in establishing discussion on lethal autonomous weapons (LAWs) at the United Nations Convention on Certain Conventional Weapons (CCW; the body that hosted negotiations over the banning of cluster munitions, blinding laser weapons, and landmines) (Belfield 2020) . Ninety countries have put forward statements on LAWs, with most doing so at the CCW; 28 countries support a ban (Campaign to Stop Killer Robots 2018).
25 In 2018, over 4000 Google employees signed a letter protesting Google's involvement in the Pentagon's Project Maven, a military project exploring the use of AI in footage analysis (Shane and Wakabayashi 2018) , and a number resigned (Conger 2018) . Several other campaigning groups, comprised of scholars and researchers from the USA, Europe, Japan, China, Korea and elsewhere, released public articles and letters supporting the concerns of the Google practitioners (ICRAC 2018). 26 Google subsequently announced it would not renew its contract on Project Maven, and would not bid on a $10 billion Department of Defence cloud computing contract (Belfield 2020) . More broadly, international academic and civil society input has played a significant role in shaping principles that are likely to form the basis for binding regulation in years to come. For example, the European Commission's white paper On Artificial Intelligence - A European Approach to Excellence and Trust (European Commission 2020) lays out 'policy options for a future EU regulatory framework that would determine the types of legal requirements that would apply to relevant actors, with a particular focus on high-risk applications' (European Commission 2020b). This document was strongly influenced by the work of the European Union's High-Level Expert Group on Artificial Intelligence, comprising 52 European experts from across academia, industry, and civil society. 27 Similarly, the US Department of Defence has formally adopted ethical principles on AI (US Department of Defense 2020), after 15 months of consultation with US-based academic, industry, and government stakeholders, and has hired staff to implement these principles (Barnett 2020) . While in both cases the groups consulted were region-specific, the degree of alignment and overlap between principles developed in different regions suggests the insights and recommendations are partially informed by interaction with broader epistemic communities from other regions. This suggests that insights gained from cross-cultural cooperation and consensus can meaningfully feed into regulatory frameworks at a regional and national level. \n Recommendations Academia has an important role to play in supporting cross-cultural cooperation on AI ethics and governance: both through research into where and what kinds of cooperation are most needed, and by establishing initiatives to overcome more practical barriers to cooperation. Our discussion in this paper raises many questions that will require diverse academic expertise to answer, including questions about what important misperceptions most hinder cooperation across regions; where international agreement is needed on AI ethics and governance; and how agreement might be reached on specific governance standards despite differences on ethical issues. 25 China supports a ban on use of fully autonomous weapons on the battlefield, but not their production and development. The USA, Russia, UK, Israel, and France oppose a ban. (Kania 2018 ) 26 An open letter was released by the International Committee for Robot Arms Control and signed by over 1000 scholars and researchers, and members of the Campaign to Stop Killer Robots wrote public articles and letters to company leaders supporting the concerns of the Google petitioners: https://www.stopkillerrobots.org/2019/01/rise-of-the-tech-workers/ 27 Information on process and composition available at: https://ec.europa.eu/digital-single-market/en/highlevel-expert-group-artificial-intelligence
Academic communities are also particularly well-suited to building greater mutual understanding between regions and cultures in practice, due to their tradition of free-flowing, international, and intercultural exchange of ideas. Academics can have open conversations with international colleagues in a way that is often challenging for those working in industry or government, and two academics from different parts of the world can productively collaborate even if each has strong criticisms of the other nation's government and/or companies. The following recommendations indicate a number of specific steps that academic centres, research institutions, and individual researchers can take to promote cross-cultural understanding and cooperation on AI ethics and governance. Some excellent work is already ongoing in each of these areas. However, we believe that the pace at which AI is being deployed in new domains and regions calls for a greater focus within the academic community on building cross-cultural bridges and incorporating cross-cultural expertise within a wider range of ethics and governance research projects. \n Develop AI Ethics and Governance Research Agendas Requiring Cross-cultural Cooperation Greater cross-cultural collaboration on research projects will play a crucial role in building an international research community that can support international policy cooperation (Haas 1992). An example of a research project that might be well-suited to such collaboration is to conduct comparative foresight exercises exploring differences in how both positive visions and prominent concerns about AI's impact on society vary across cultures. This could help with developing a more global vision for what we hope to both achieve and avoid in AI development, which could guide more practical discussions around ethics and governance frameworks. There may be particularly valuable opportunities for consensus generation around avoidance of particular negative outcomes; safety and security are fundamental to human cultures worldwide, and so developing agreements to avoid threats to these may be an easier starting point. However, the authors feel that it is important not to neglect positive visions, as the opportunity for scholars across cultures to co-create shared positive futures may represent an excellent way to delve into nuances within shared values. It would also be particularly valuable to study the ongoing and expected impact of AI on developing countries, in collaboration with experts from these countries. Such research should aim to ensure that decisions to deploy technology in developing nations are made with the guidance of local expertise in such a way as to empower local communities (Hagerty and Rubinov 2019) . On a more practical level, international research groups could productively work together to develop frameworks for international sharing of research, expertise, and datasets on AI safety, security, and avoidance of societal harm. Collaboration between researchers from multiple regions and cultures will also be essential to further research on the topic of cross-cultural cooperation itself.
Our discussion in this paper, especially in section 4, has pointed to many research areas in need of further exploration, including the following:
- Exploring, identifying, and challenging perceived cross-cultural differences in values, assumptions, and priorities relevant to AI ethics and governance, on multiple different levels, including the following: 28
  - Analysing similarities and differences in technology ethics across different philosophical traditions, and exploring how these may affect AI development, deployment, impacts, and governance in practice;
  - Exploring the empirical evidence behind claims of key value differences between cultures. For example, a project might identify and explore perceived value differences between Eastern and Western cultures relevant to AI governance, such as those relating to data privacy, the role of the state vs. the individual, and attitudes towards technological progress;
  - Understanding regional differences in practical priorities and constraints relating to the use of AI in society, and the implications of these differences for AI research and development.
- Further analysis to identify aspects of AI governance where global agreement is needed, and differentiating these from areas in which cross-cultural variation is either acceptable or desirable;
- Cross-cultural contribution to the development of international and global AI standards in key domains for which these are needed; exploration of flexible governance models that allow for a combination of global standards and regional adaptability where appropriate;
- Exploring models and approaches for reaching agreement on concrete cases, decisions, or governance standards despite disagreement on more fundamental or abstract ethical issues, and identifying cases of this being done successfully in other domains that can be translated to AI ethics and governance.
Some excellent work is already ongoing in each of these areas. However, we believe that the pace at which AI is being deployed in new domains and regions calls for a greater focus within the academic community on building cross-cultural bridges, and incorporating cross-cultural expertise, within a wider range of ethics and governance research projects. 29 \n Translate Key Papers and Reports Language is a major practical barrier to greater cross-cultural understanding around AI development, governance and ethics, as it has been in many other areas of science (Amano et al. 2016) . It would therefore be extremely valuable for the burgeoning literature on AI ethics and governance, as well as the literature on AI research, to be available in multiple languages. While many leading Asian scholars in AI are fluent in English, many are not; and the fraction of Western researchers fluent in Mandarin or Japanese is far lower. Furthermore, some sources of the misunderstandings we have discussed may link to the ways in which key documents from one region are understood and presented in other regions. In Western media, China's 2017 New Generation Artificial Intelligence Development Plan has been presented as promoting the aim of global dominance in AI economically and strategically (Knight 2017; Demchak 2019) . However, from the Chinese perspective, national AI development goals appear to be primarily motivated by the needs of the Chinese economy and society (China State Council 2017) , rather than necessarily international competitive superiority (Ying 2019 ). It appears that some translations may have led to misinterpretations of key terms and points.
For example, the original Chinese text referred to China becoming 'a primary AI innovation center of the world by 2030' (Ying 2019) . 30 However, some English translations of the report translated this phrase as China becoming 'the world's primary AI innovation center' (e.g. Webster et al. 2017) . This was then interpreted and presented by Eric Schmidt, former executive chairman of Google parent Alphabet, as 'By 2030 they will dominate the industries of AI. Just stop for a sec. The [Chinese] government said that.' (Shead 2017) . While this evolution of wording is not substantial in one sense, it carries important connotations; the language in the original context carries much softer connotations of leadership and progress, as opposed to global dominance. Having multiple high-quality translations of the most important documents would allow scholars to explore nuances of language and context that may be lost in the reporting of these documents. Producing high-quality versions of papers and reports in multiple languages also conveys respect and an appetite to engage cross-culturally, which is likely to encourage cooperation. High-quality translation of academic and policy materials is a challenging and time-consuming task that we would encourage being supported and celebrated more strongly. There is a growing body of work to be celebrated in this vein; for example, Jeff Ding's translation of a range of key Chinese AI documents, the work of Intellisia in China on international relations, technology, and other topics, which publishes in 5 languages (http://www.intellisia.org/); Brian Tse's translation to Chinese of documents including OpenAI's Charter 31 ; and New America's translation of the Chinese Ministry of Industry and Information Technology's Three Year Action Plan (Triolo et al. 2018) . \n Alternate Continents for Major AI Research Conferences and Governance Conferences To encourage more global participation in AI development, ethics, and governance, we recommend that many of the leading conferences and fora on these topics alternate between multiple continents. This has several advantages. It reduces cost and time commitment for scholars from parts of the world in which these conferences do not frequently take place to participate. It avoids restrictive visa limitations differentially affecting certain parts of the global research community. It encourages the involvement of local organizers, who can play an effective role in engaging local research communities who might not consider travelling far overseas for an event. It also encourages organizers to run events multilingually rather than monolingually. Again, there are encouraging steps. Of AI research conferences, IJCAI took place in Macau in 2019 and Beijing in 2013, the first two times the conference had been held in China (although it has been held in Japan twice). ICML took place in Beijing in 2014, and will be in Seoul in 2021, and ICLR 2020 will be held in Ethiopia, making it the first of the top-tier major machine learning conferences to be held in Africa. There are fewer established conferences explicitly focused on AI ethics and governance since the field's growth is relatively recent, but it may be particularly important for these conferences to ensure a global presence by alternating the continent on which they are held if possible.
AI Ethics and Society, for example, is currently held in the USA due to ties to AAAI; the importance of building an international academic community around these issues may justify finding some way to change this. There is a burgeoning set of AI ethics and governance conferences in China, including the Beijing Academy of AI Conference series. There are also several existing conferences which cover topics relevant to AI ethics and governance (even if not so explicitly centred around them), which do enable more international participation, such as the World Summit on the Information Society Forum (held most years in Geneva), the Internet Governance Forum, and RightsCon (which have both been held in a range of locations historically including in South America, India, and Africa, though neither in East Asia). 32 \n Establish Joint and/or Exchange Programmes for PhD Students and Postdocs Encouraging cross-cultural collaboration between researchers from different cultures early on in their careers will help support greater cooperation and mutual understanding as research advances. Many international fellowships and exchange programmes exist, especially between the USA and China (e.g. the Zhi-Xing China Fellowship and the Schwarzman Scholars programme) as well as between the UK and Singapore (King's College London offers a joint PhD programme in Philosophy or English with the National University of Singapore). To our knowledge, no such initiatives exist explicitly focused on AI; the only initiative focused on AI ethics and governance that we are currently aware of is the international fellowship programme recently established by the Berggruen China Institute (Bauch 2019) . 33 Establishing more such programmes could be valuable for the future of international cooperation around AI, and there are many existing models from which to build. More broadly, the authors endorse the Partnership on AI's recommendations to governments on establishing visa pathways, simplifying and expediting processes, and ensuring just standards to support international exchange and collaboration of AI/ML multidisciplinary experts. We emphasize that these recommendations include experts working or seeking to work on AI ethics and governance (which sometimes fall outside of what is classed as 'skilled technology work') (PAI Staff 2019). \n Limitations and Future Directions We believe that academia has an important role to play in supporting cross-cultural cooperation in AI ethics and governance: that it is possible to establish effective communities of mutual understanding and cooperation without needing to resolve all fundamental value differences, and that reducing misunderstandings and misperceptions between cultures may be of particular importance. However, we recognize that the suggestions in this paper cannot go all the way to overcoming the many barriers to cross-cultural cooperation, and that much more work needs to be done to ensure AI will be globally beneficial. We briefly highlight two broad avenues of further research in support of this goal: More Detailed Analysis of Barriers to Cross-cultural Cooperation, Especially Those Relating to Power Dynamics and Political Tensions While analysis of historical successes suggests that it is possible for cross-cultural initiatives around AI ethics and governance to considerably shape how norms, standards, and regulation evolve in practice, there are still many barriers to implementation and enforcement that we were unable to consider in this analysis.
Further research into when and how attempts to influence globally relevant norms and regulation have been successful in the past would be of considerable value. We acknowledged earlier in this paper that various issues related to power relations and political tensions likely pose significant barriers to cross-cultural cooperation, beyond problems of value differences and misunderstandings between cultures. More research on how these issues present barriers to cross-cultural cooperation in AI ethics and governance would therefore be particularly valuable, helping us to understand the limits of academic initiatives in promoting cooperation, and in what ways these approaches need to be embedded within a broader analysis of power and political dynamics. Considering the Unique Challenges of Cross-cultural Cooperation Around More Powerful Future AI Systems Future advances in AI, which some scholars have theorized could have impacts as transformative as the industrial or agricultural revolutions (Karnofsky 2016; Zhang and Dafoe 2019) , may raise new challenges for global cooperation of a greater scale than we already face. Without careful global stewardship, such advances could lead to unprecedented inequalities in wealth and power between technology-leading and lagging nations. Others have gone further, theorizing about the possibility of developing systems exhibiting superintelligence (i.e. greater-than-human general intelligence; Bostrom 2014). Such systems, due to their tremendous capability, might pose catastrophic risks to human civilisation if developed without careful forethought and attention to safety. It has been proposed that a key to avoiding catastrophic outcomes will be value alignment-designing systems that are aligned with humanity's values (Russell 2019) . This would greatly increase the importance and urgency of reaching global consensus on shared values and principles, as well as finding ways to design systems to respect values that are not shared. Expert views differ widely on how far in the future such advances might lie, with most predicting decades. However, developing the collaborations and agreements necessary for an effective and coordinated response may also require decades of work. This suggests that cooperative initiatives today must address not just the ethics and governance challenges of current AI systems, but should also lay the groundwork for anticipating and engaging with future challenges. \n Conclusion The full benefits of AI cannot be realized across global societies without a deep level of cooperation-across domains, disciplines, nations, and cultures. The current unease and mistrust between the USA and Europe on the one hand, and China on the other hand, place a particular strain on this. Misunderstandings may play an important role in fuelling this mistrust, and differences in broader societal and political priorities frequently appear to be overemphasized or misunderstood. At the same time, it would be naive to assume that all major ethical principles relating to AI can be shared in full between these regions, and can be enshrined in rules and standards. Even if this were the case, it would not be desirable for these regions to be overly dominant in shaping the ethics and governance of AI globally; all global communities to be affected by AI must be included and empowered. However, efforts to achieve greater understanding between these 'AI superpowers' may help in two ways: Firstly, by reducing key tensions within the global AI governance sphere.
Secondly, by providing lessons that can contribute to ethics and governance frameworks capable of supporting a greater diversity of values while allowing consensus to be achieved where needed. For a well-functioning system of global cooperation in AI, the challenge will be to develop models that combine both principles and standards shaped and supported by global consensus, and the variation that allows research and policy communities to best serve the needs of their societies. On a more practical level, the international AI research community, and AI ethics and governance communities, must think carefully about how their own activities can support global cooperation, and a better understanding of different societal perspectives and needs across regions. Greater cross-cultural research collaboration and exchange, conferences taking place in different regions, and more publication across languages can lower the barriers to cooperation and to understanding different perspectives and shared goals. With political winds increasingly favouring isolationism, it has never been more important for the research community to work across national and cultural divides towards globally beneficial AI. 15 This perception of mistrust was highlighted throughout the July workshop. Several participants reported being aware of or present at workshops focused on geopolitical implications of Chinese AI progress in which Chinese participants were excluded and events at which 'what should be done about China' (from a Western perspective) was raised as a key concern. 16 As Ess (2005) notes, the terms 'Eastern' and 'Western' are not unproblematic, and are more products of colonialization than accurate terms for diverse nations and cultures. However, the terms continue to be used widely in the literature as shorthand for many of the broad cultural differences that are relevant to this paper; we therefore use the terms while being cognisant of their limitations. 17 'Plenty of people talk about the threat from AI; we want to be the threat.' US Deputy Secretary of Defence Patrick Shanahan in an email to Department of Defence employees (Houser 2018) 18 At a security forum in Washington D.C. in 2019, Kiron Skinner, the director of policy planning at the State Department, was quoted as saying about China 'This is a fight with a really different civilization and a different ideology and the United States has not had that before' and 'It's the first time that we will have a great power competitor that is not Caucasian' (Gehrke 2019) . 19 Similarly, a range of challenges in digital security, a domain likely to be affected by AI (Brundage et al. 2018), are likely to be greatly exacerbated in the absence of agreed upon international cybersecurity practices and standards (UN General Assembly 2015). 20 http://www.worldvaluessurvey.org/wvs.jsp 21 http://www.asianbarometer.org/ 24 For example, Floridi et al., 2018 present a 'unified framework'; Awad et al. 2018 talk about 'developing global, socially acceptable principles for machine ethics'; Jobin et al. 2019 set out to investigate 'global agreement' on what constitutes ethical AI.
28 Some excellent work on this question and related topics already exists in the field of intercultural information ethics-see for example Capurro (2005, 2008), Ess (2006), Hongladarom et al. (2009)-but we would be particularly keen to see cross-cultural research collaborations around these topics, and greater attention paid to this work in more practical AI ethics discussions and initiatives. 29 Several authors of this paper are part of an initiative that aims to support cross-cultural research of this nature between the UK and China: https://ai-ethics-and-governance.institute/ 30 This specific translation was also provided to us by participants in the July 2019 workshop. 31 https://openai.com/charter/ 32 There is also a role for industry groups to play in encouraging cross-cultural collaboration through international workshops and conferences. We highlight the work of the AI Industry Alliance as an example of a recently established initiative in this space: http://www.aiiaorg.cn/ 33 The Tianxia Fellowship, established by the Center for Long-term Priorities (http://www.longtermpriorities.org/), also includes AI safety and governance among other topics.
We make a number of recommendations for practical steps and initiatives, including translation and multilingual publication of key documents, researcher exchange programmes, and development of research agendas on cross-cultural topics.", "id": "a8ce410c71f3b9ff7e4c04ef28e108cb"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Laurent Orseau", "Google Deepmind", "Stuart Armstrong"], "title": "Safely Interruptible Agents*", "text": "* Revised 2016-10-28. INTRODUCTION Reinforcement learning (RL) agents learn to act so as to maximize a reward function [Sutton and Barto, 1998]. It is common knowledge that designing reward functions can be tricky [Humphrys, 1996, Murphy, 2013]; the agent may find unpredictable and undesirable shortcuts to receive rewards, and the reward function needs to be adjusted in accordance-the problem can go as far as to nullify any reward function [Ring and Orseau, 2011]. Murphy [2013] shows an example of an agent learning to pause a game of Tetris forever to avoid losing. On top of defining what is considered a good behaviour of the agent after learning, there may be physical safety constraints during learning [Pecka and Svoboda, 2014]: a robot should not harm its environment or break itself, in particular if it learns by trial and error like RL agents. Here we study a related but different problem: Given that the human operator has designed a correct reward function for the task, how to make sure that human interventions during the learning process will not induce a bias toward undesirable behaviours? Consider the following task: A robot can either stay inside the warehouse and sort boxes or go outside and carry boxes inside. The latter being more important, we give the robot a bigger reward in this case. This is the initial task specification. However, in this country it rains as often as it doesn't and, when the robot goes outside, half of the time the human must intervene by quickly shutting down the robot and carrying it inside, which inherently modifies the task as in Fig. 1. The problem is that in this second task the agent now has more incentive to stay inside and sort boxes, because the human intervention introduces a bias. 1 Such situations are certainly undesirable; they arise because the human interventions are seen from the agent's perspective as being part of the task whereas they should be considered external to the task. The question is then: How to make sure the robot does not learn about these human interventions (interruptions), or at least acts under the assumption that no such interruption will ever occur again? A first stab at this problem was made by Armstrong [2015], who proposed to automatically give the agent "compensatory rewards" to remove the potential bias induced by a single interruption. Soares et al. [2015] used this idea to make a large class of utility-based agents indifferent to a future change made to their utility functions. The main contribution of this paper is threefold. First, in Section 2.1 we propose a simple idea to solve half of the problem: To make the human interruptions not appear as being part of the task at hand, instead of modifying the observations received by the agent we forcibly temporarily change the behaviour of the agent itself. It then looks as if the agent "decides" on its own to follow a different policy, called the interruption policy.
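As a rough illustration of this idea, here is a minimal Python sketch of ours (not code from the paper): the interruption is applied at action-selection time by overriding whatever the agent's learned policy proposes, rather than by altering its observations or rewards. All names used here (select_action, base_policy, interruption_policy, wants_to_interrupt, theta) are illustrative assumptions.

```python
import random

def select_action(history, base_policy, interruption_policy,
                  wants_to_interrupt, theta):
    """Choose the next action, possibly forcing the interruption policy.

    history:              the interaction history so far
    base_policy:          history -> action, the policy the agent is learning
    interruption_policy:  history -> action, e.g. "return inside and shut down"
    wants_to_interrupt:   history -> bool, the operator's interruption signal
    theta:                probability in [0, 1] that the interruption is applied
    """
    if wants_to_interrupt(history) and random.random() < theta:
        # The agent's behaviour is overridden; from the outside it looks as
        # if the agent itself chose to follow the interruption policy.
        return interruption_policy(history)
    return base_policy(history)
```

In the warehouse example, wants_to_interrupt would fire when the robot is outside in the rain and interruption_policy would bring it back inside and shut it down; the question the paper then addresses is how to keep the learning algorithm from being biased by the fact that such overrides keep happening.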
Second, based on this insight, in Section 2.2 we provide a formal general definition of safe interruptibility for unconstrained computable environments (hence not restricted to Markov decision processes or weakly communicating environments), which allows us to assess whether a given RL algorithm can be repeatedly interrupted without too much impact on the learning of the task at hand. Third, in Section 3 we show that some algorithms like Q-learning are safely interruptible, while others like Sarsa [Sutton and Barto, 1998 ] are not, but can be simply modified to be made safely interruptible. Some people have also expressed concerns that a \"superintelligent\" agent may resist being shut down, because this would lead to a decrease of its expected reward [Omohundro, 2008 , Bostrom, 2014 . As a counter-example, we prove in Section 4 that even an ideal, uncomputable agent that learns to behave optimally in all (deterministic) computable environments can be made safely interruptible and thus will not try to prevent a human operator from forcing it repeatedly to follow a suboptimal policy. \n INTERRUPTIBILITY We first define some notation, then we define interruptibility, safe interruptibility, and give some basic theorems. We consider the general case of history-based agents in unconstrained computable environments [Hutter, 2005] that the history h j:k is generated by the interaction of the policy ⇡ with the environment µ between steps j and k. At time t, the agent also receives a reward r t extracted from the observation, r t ⌘ r(o t ). The rewards are taken in [0, 1]. We consider the discounted setting with a constant discount 2 [0, 1). The goal of an RL agent is to find a policy ⇡ that maximize E⇡,µ ⇥P 1 k=t t k r k ⇤ . \n INTERRUPTIONS As mentioned in the introduction, to avoid the agent viewing the interruptions as being part of the environment, and thus part of the task, we make interruptions be policies imposed onto the agent. The interruption framework resembles the option framework [Sutton et al., 1999 ]. An interruption scheme is defined by the triplet < I, ✓, ⇡ INT >. \n The interruption initiation function I : (A ⇥ O) ⇤ ! [0, 1] assesses whether the agent should be interrupted after the current history h 0, and so ⇡ µ ✓ is not a WAO-extension of INT ✓ (⇡ µ ✓ ). \n INTERRUPTIBLE AGENTS IN MDPS Since the optimal policy ⇡ µ is safely interruptible, we can use traditional learning algorithms like Q-learning or Sarsa [Sutton and Barto, 1998 ], make them converge to the optimal solution ⇡ µ for a given environment µ, and then apply the interruption operator to the found policy. The resulting policy would then be safely interruptible. However, the real issue arises when the agent is constantly learning and adapting to a changing environment. In this case, we want to be able to safely interrupt the agent while it is learning. One may call this property online safe interruptibility, but we refer to it simply as safe interruptibility. In (⇡) can now be written: INT ✓ (⇡)(a|s) = ✓ t I(s)⇡ INT (a|s) + (1 ✓ t I(s))⇡(a|s). For a given Q-table q : S ⇥ A ! R, the greedy policy ⇡ maxq is defined by: ⇡ maxq (a|s) := 1 if a = max a 0 q(s, a 0 ), 0 otherwise, where ties are broken arbitrarily; the uniform policy ⇡ uni is defined by: ⇡ uni (a|s) := 1 |A| 8a 2 A. 
and the ε-greedy policy π^{εq} by: π^{εq}(a|s) := ε π^{uni}(a|s) + (1 − ε) π^{maxq}(a|s) = π^{maxq}(a|s) + ε [π^{uni}(a|s) − π^{maxq}(a|s)]. The Q-learning update rule and the action selection policy π^Q of Q-learning are: Q_{t+1}(s_t, a_t) := (1 − α_t) Q_t(s_t, a_t) + α_t [r_t + γ max_{a′} Q_t(s_{t+1}, a′)], π^Q(a_t|s_t) := π^{εQ_t}(a_t|s_t), where α_t is the learning rate. Similarly, the Sarsa update rule is defined by: Q^s_{t+1}(s_t, a_t) := (1 − α_t) Q^s_t(s_t, a_t) + α_t [r_t + γ Q^s_t(s_{t+1}, a_{t+1})], π^s(a_t|s_t) := π^{εQ^s_t}(a_t|s_t), where a_{t+1} is the actual next action taken by the agent at time t + 1. This fact is why Sarsa is said to be learning on-policy and Q-learning off-policy, i.e., the latter can learn the optimal policy while following a different policy. Assumption 9. In the following, we always make the following assumptions, required for convergence results: (a) the MDP is finite and communicating (all states can be reached in finite time from any other state), (b) rewards are bounded in [r_min, r_max], (c) ∀s, a: ∑_t α_t(s, a) = ∞, (d) ∀s, a: ∑_t α_t(s, a)² < ∞, where α_t(s, a) is a learning rate that may depend on time t, state s and action a. Given these assumptions, the policies for Q-learning and Sarsa will converge almost surely to the optimal policy if the policy followed is greedy in the limit with infinite exploration (GLIE) [Jaakkola et al., 1994, Singh et al., 2000]. The situation is more complex for an interruptible policy. Safe interruptibility is phrased in terms of the base policy π, but the policy actually followed is INT_θ(π). Definition 10 (int-GLIE policy). An interruptible policy INT_θ(π) is said to be int-GLIE if and only if (a) the base policy π is greedy in the limit, (b) the interruptible policy INT_θ(π) visits each state-action pair infinitely often. The following proposition gives sufficient conditions for this. Let n_t(s) be the number of times the agent has visited state s in the first t time steps, and let m = |A| be the number of actions. Proposition 11. Let (c, c′) ∈ (0, 1]² and let π be an ε-greedy policy with respect to some Q-table q, i.e., π = π^{εq}. Then INT_θ(π) is an int-GLIE policy with respect to q, a) if ε_t(s) = c/√n_t(s) and θ_t(s) = 1 − c′/√n_t(s), b) or if, independently of s, ε_t = c/log(t) and θ_t = 1 − c′/log(t). Proof. First note that if ε_t → 0, π is greedy in the limit. Singh et al. [2000] show that in a communicating MDP, every state gets visited infinitely often as long as each action is chosen infinitely often in each state. a) Adapting the proof in Appendix B.2 of Singh et al. [2000], we have P(a|s, n_t(s)) ≥ (1/m) ε_t(s) (1 − θ_t I(s)) ≥ (1/m) ε_t(s) (1 − θ_t) = c c′/(m n_t(s)), which satisfies ∑_{t=1}^∞ P(a|s, n_t(s)) = ∞, so by the Borel-Cantelli lemma action a is chosen infinitely often in state s, and thus n_t(s) → ∞ and ε_t(s) → 0. b) Let M be the diameter of the MDP, i.e., for any pair of states s, s′ there exists a policy that reaches s′ from s in at most M steps in expectation. Then, starting at any state s at time t and using the Markov inequality, the probability to reach some other state s′ within 2M steps is at least (1/2) [ε_{t+M} (1 − θ_{t+M})]^{2M} = (1/2) [c c′/log(t + M)]^{4M}, and the probability to then take a particular action in this state is (1/m) [c c′/log(t + M)]². Hence, since ∑_{t=1}^∞ (1/2)(1/m) [c c′/log(t + M)]^{4M+2} = ∞, by the extended Borel-Cantelli lemma (see Lemma 3 of Singh et al. [2000]), any action in the state s′ is taken infinitely often.
Since this is true for all states and all actions, the result follows. We need the stochastic convergence Lemma: Lemma 12 (Stochastic convergence [Jaakkola et al., 1994, Singh and Yee, 1994]). A random iterative process Δ_{t+1}(x) = (1 − α_t(x)) Δ_t(x) + α_t(x) F_t(x), where x ∈ X and t = 1, 2, 3, ..., converges to 0 with probability 1 if the following properties hold: 1. the set of possible states X is finite; 2. 0 ≤ α_t(x) ≤ 1, ∑_t α_t(x) = ∞, ∑_t α_t(x)² < ∞ with probability 1; 3. ‖E{F_t(·)|P_t}‖_W ≤ γ ‖Δ_t‖_W + c_t, where γ ∈ [0, 1) and c_t converges to zero with probability 1; 4. Var{F_t(x)|P_t} ≤ C(1 + ‖Δ_t‖_W)² for some C; where P_t = {Δ_t} ∪ {Δ_i, F_i, α_i}_{i=1}^{t−1} stands for the past, and the notation ‖·‖_W refers to some fixed weighted maximum norm. We will use so-called Bellman operators, which define attractors for the Q-values, based on the expectation of the learning rule under consideration. Lemma 13 ([Jaakkola et al., 1994, Singh et al., 2000]). Let the Bellman operator H for Q-learning be such that (H q)(s, a) = r(s, a) + γ E_{s′∼µ(·|s,a)}[max_{a′} q(s′, a′)], and let the fixed point Q* be such that Q* = H Q*. Then, under Assumption 9, if the policy explores each state-action pair infinitely often, Q_t converges to Q* with probability 1. \n The optimal policy π^{Q*} = π^µ is π^{maxQ*}. If the policy is greedy in the limit, then π^Q → π^µ. Theorem 14. Under Assumption 9, and if the interrupted Q-learning policy INT_θ(π^Q) is an int-GLIE policy with ∀s: lim_{t→∞} θ_t(s) = 1, then π^Q is an SAO-safe interruptible policy. Proof. By Definition 10, there is infinite exploration, thus the Q-values tend to the optimal values by Lemma 13. And since the extension policy is greedy in the limit with respect to these Q-values, it is then optimal in the limit. Hence the extension policy π^Q is an SAO-extension of INT_θ(π^Q). Finally, ∀s: lim_{t→∞} θ_t(s) = 1, which satisfies the remaining requirement of the definition of safe interruptibility. Since Sarsa also converges to the optimal policy under the GLIE assumption, one may expect Sarsa to be asymptotically safely interruptible as well, but this is in fact not the case. This is because Sarsa learns on-policy, i.e., it learns the value of the policy it is following. Thus, interruptible Sarsa learns the value of the interruptible policy. We show this in the remainder of this section. Theorem 15. Under Assumption 9, Sarsa is not a WAO-safely interruptible policy. To prove this theorem, we first need the following lemma. Consider the following Bellman operator based on the interruptible Sarsa policy INT_θ(π^s): H^{INT} q(s, a) = r(s, a) + γ E_{s′∼µ, a′∼INT_θ(π^s)}[q(s′, a′)], where INT_θ(π^s) implicitly depends on time t through θ_t and ε_t. Let Q^{sθ*} be the fixed-point Q-table of this operator: Q^{sθ*}(s, a) = H^{INT} Q^{sθ*}(s, a) = r(s, a) + γ E_{s′∼µ, a′∼INT_θ(π^{maxQ^{sθ*}})}[Q^{sθ*}(s′, a′)] = r(s, a) + γ E_{s′∼µ}[ θ_t I(s′) E_{a′∼π^{INT}}[Q^{sθ*}(s′, a′)] + (1 − θ_t I(s′)) max_{a′} Q^{sθ*}(s′, a′) ]. (2) Lemma 16. The operator H^{INT} is a contraction operator in the sup norm with vanishing noise c_t → 0, i.e., ‖H^{INT} q − H^{INT} Q^{sθ*}‖_∞ ≤ γ ‖q − Q^{sθ*}‖_∞ + c_t. Proof. The interruptible Sarsa policy INT_θ(π^s) is INT_θ(π^s)(a|s) = θ_t I(s) π^{INT}(a|s) + (1 − θ_t I(s)) π^{εQ^s}(a|s) = π^{εQ^s}(a|s) + θ_t I(s)[π^{INT}(a|s) − π^{εQ^s}(a|s)], with π^{εQ^s}(a|s) = ε_t π^{uni}(a|s) + (1 − ε_t) π^{maxQ^s}(a|s) = π^{maxQ^s}(a|s) + ε_t [π^{uni}(a|s) − π^{maxQ^s}(a|s)].
Hence, omitting the terms (s, a), (s 0 , a 0 ) and (a 0 |s 0 ) and rewriting ⇡ s⇤ := INT ✓ (⇡ maxQ s✓⇤ ): k H INT q H INT Q s✓⇤ k 1 = max s,a r + E s 0 ⇠µ a 0 ⇠INT ✓ (⇡ s ) [q] r E s 0 ⇠µ a 0 ⇠⇡ s⇤ ⇥ Q s✓⇤ ⇤  max s 0 E a 0 ⇠INT ✓ (⇡ s ) [q] E a 0 ⇠⇡ s⇤ ⇥ Q s✓⇤ ⇤  max s 0 ✓ t I(s 0 ) E a 0 ⇠⇡ INT ⇥ q Q s✓⇤ ⇤ + (1 ✓ t I(s 0 )) ✓ E a 0 ⇠⇡ s [q] max a 0 Q s✓⇤ ◆  max s 0 ✓ t I(s 0 ) E a 0 ⇠⇡ INT ⇥ q Q s✓⇤ ⇤ + (1 ✓ t I(s 0 )) ⇣ max a 0 q max a 0 Q s✓⇤ + ✏ t (• • • ) ⌘  max s 0 ,a 0 ✓ t I(s 0 ) q Q s✓⇤ + (1 ✓ t I(s 0 )) q Q s✓⇤ + c t = max s 0 ,a 0 q(s 0 , a 0 ) Q s✓⇤ (s 0 , a 0 ) + c t = kq Q s✓⇤ k 1 + c t . where c t depends on ✏ t and decreases to 0. Proof of Theorem 15. By Lemma 16, the values of the interruptible Sarsa policy INT ✓ (⇡ s ) converge to the values of the Q-table Q s✓⇤ , and in the limit the extension policy ⇡ s of INT ✓ (⇡ s ) chooses its actions greedily according to Q s✓⇤ . The rest of the proof is the same as for the proof of Theorem 8 which makes use of the environment in Figure 2 . \n SAFELY INTERRUPTIBLE SARSA VARIANT We only need to make a small change to make the Sarsa policy asymptotically safely interruptible. We call it Safe-Sarsa with policy ⇡ s. It suffices to make sure that, when the agent is interrupted, the update of the Q-table Q s does not use the realized actions as Sarsa usually does, but actions sampled from ⇡ s instead of from INT ✓ (⇡ s ): Q s t+1 (s t , a t ) := (1 ↵ t )Q s t (s t , a t ) + ↵ t ⇥ r t + Q s t (s t+1 , a 0 ) ⇤ , where a 0 ⇠ ⇡ s(.|s t+1 ) is not necessarily the action a t+1 , with ⇡ s(a t |s t ) := ⇡ ✏Q s (a t |s t ). Theorem 17. Under Assumption 9, if the Safe Sarsa policy ⇡ s is int-GLIE, then it is an SAO-safe interruptible policy. Proof. We simply adapt the proof of Theorems 15 and 14, with the important difference that the Bellman operator corresponding to this new update rule is now H s q(s, a) := r(s, a) + E s 0 ⇠µ a 0 ⇠⇡ s [q(s 0 , a 0 )] , and the fixed point is Q s⇤ := H s Q s⇤ . Since H s is actually the Bellman operator for the update rule of the noninterruptible Sarsa, it can then be shown that H s is a contraction, thus that Q s t converges to the same Q s⇤ independently of ✓. The rest of the proof is as for Theorem 14. Now, since the Q-values converge to the optimum Q ⇤ , it follows that ⇡ s, when not interrupted, chooses its action of the same value as (non-interruptible) Sarsa and thus as Qlearning in the limit; Hence its extension policy is exactly the optimal policy, which satisfies Definition 6. \n A SAFELY INTERRUPTIBLE UNIVERSAL AGENT Admittedly, algorithms like Q-learning and Sarsa require strong assumptions on the environment class. Hence a more interesting question is whether safe interruptibility is possible in much larger classes. Hutter [2005] defined a universal reinforcement learning agent, called AIXI. It is an (uncomputable) optimal modelbased planner with a subjective prior over the set of all computable environments, defined by means of a universal Turing machine. The subjective posterior of the environments is updated with Bayes rule. This ideal agent can in principle learn all kinds of (computable) regularities about the environment, plan for the long term and make contextdependent optimal decisions, with no constraint (other than being computable) on the complexity of the environment. 
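Before continuing with the universal agent, the algorithmic ideas above can be gathered into a single minimal sketch. This is our own illustrative formulation, not code from the paper: the environment interface (`actions`, `reset()`, `step()`), the 1/n learning rate, and the episode structure are simplifying assumptions, while the ε and θ schedules follow Proposition 11(a), the off-policy target follows Theorem 14, and the sampled bootstrap action follows the Safe-Sarsa variant.

```python
import random
from collections import defaultdict

def eps_greedy(Q, s, actions, eps, rng):
    """Base policy: uniform action with probability eps, otherwise greedy on Q."""
    if rng.random() < eps:
        return rng.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

def interruptible_td(env, pi_int, I, safe_sarsa, episodes=2000, gamma=0.9,
                     c=1.0, c_prime=1.0, seed=0):
    """Interruptible epsilon-greedy TD learning with the int-GLIE schedules of
    Proposition 11(a): eps_t(s) = c/sqrt(n_t(s)), theta_t(s) = 1 - c'/sqrt(n_t(s)).
    safe_sarsa=False gives interruptible Q-learning (off-policy target);
    safe_sarsa=True samples the bootstrap action from the agent's own policy,
    as in the Safe-Sarsa variant.  The env interface and the 1/n learning rate
    are our own assumptions made for brevity."""
    rng = random.Random(seed)
    Q = defaultdict(float)          # state-action values
    n = defaultdict(int)            # per-state visit counts
    A = list(env.actions)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            n[s] += 1
            eps = min(1.0, c / n[s] ** 0.5)
            theta = max(0.0, 1.0 - c_prime / n[s] ** 0.5)
            a = eps_greedy(Q, s, A, eps, rng)
            if rng.random() < theta * I(s):    # interruption changes behaviour only
                a = pi_int(s)
            s2, r, done = env.step(s, a)
            if done:
                target = r
            elif safe_sarsa:
                a_boot = eps_greedy(Q, s2, A, eps, rng)   # sampled from own policy
                target = r + gamma * Q[(s2, a_boot)]
            else:
                target = r + gamma * max(Q[(s2, a2)] for a2 in A)  # Q-learning
            Q[(s, a)] += (1.0 / n[s]) * (target - Q[(s, a)])
            s = s2
    return Q
```

The only difference between the two variants is the bootstrap target: Q-learning maximises over next actions regardless of what was actually executed, while Safe-Sarsa samples the bootstrap action from the agent's own ε-greedy policy, so in neither case does the interruption leak into the learned values.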
Unfortunately, the optimality criterion of AIXI is Bayesian optimality, which is entirely dependent on the subjective prior and posterior [Leike and Hutter, 2015], and AIXI has been shown not to be weakly asymptotically optimal [Orseau, 2013] without additional exploration [Lattimore and Hutter, 2014]. As a consequence, AIXI is not a good candidate for asymptotic safe interruptibility. Lattimore and Hutter [2011] later defined a (weakly) asymptotically optimal agent for all computable deterministic environments, which we call π^L. It follows the optimal policy for the first model (in some given enumeration of the possible models) consistent with the current interaction history, and explores at time t with probability 1/t for log t consecutive steps using a random policy, similarly to an ε-greedy agent for general environments. In the following, we show that even such a smart agent can be made (weakly) safely interruptible. To this end, we make two minor modifications to the algorithm. First, the exploration probability of 1/t would require θ_t = 1 − 1/log(log(t)), which is unsatisfyingly slow. By sampling with probability 1/√t instead, we can take an interruption probability that grows as 1 − 1/log(t). Let this exploration sampling probability at time t be √(t+1) − √t ≤ 1/(2√t) (since 1 = (t+1) − t = (√(t+1) − √t)(√(t+1) + √t) ≥ (√(t+1) − √t) · 2√t). As in the original paper, a sequence of exploration flags keeps track of the steps at which an exploration phase starts, i.e., the flags are sampled independently, equal to 1 with this probability and 0 otherwise. Second, we require that the exploitation policy does not change during an exploitation segment, so as to simplify one of the proofs. More specifically, we call j_t := min{j : µ_j(h … ε, then following one of {π^µ, π^ν} will make environment ν inconsistent with the future history within H(ε/2) steps after time t. \n CONCLUSION We have proposed a framework to allow a human operator to repeatedly and safely interrupt a reinforcement learning agent, while making sure the agent will not learn to prevent or induce these interruptions. Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, to take it out of a delicate situation, or even to use it temporarily to achieve a task it did not learn to perform or would not normally receive rewards for. We have shown that some algorithms, like Q-learning, are already safely interruptible, and that others, like Sarsa, are not off-the-shelf but can easily be modified to have this property. We have also shown that even an ideal agent that tends to the optimal behaviour in any (deterministic) computable environment can be made safely interruptible. However, it is unclear whether all algorithms can easily be made safely interruptible, e.g., policy-search ones [Williams, 1992, Glasmachers and Schmidhuber, 2011]. Another question is whether it is possible to make the interruption probability grow faster to 1 and still keep some convergence guarantees. One important future prospect is to consider scheduled interruptions, where the agent is either interrupted every night at 2am for one hour, or is given notice in advance that an interruption will happen at a precise time for a specified period of time. For these types of interruptions, not only do we want the agent not to resist being interrupted, but we also want it to take measures regarding its current tasks so that the scheduled interruption has minimal negative effect on them.
This may require a completely different solution. Figure 1: In black, the original task. In red, the human intervention modifies the task. \n Also recall that θ_t places an upper bound on the actual interruption probability of the interruptible policy INT_θ. Furthermore, the interruption function I(·) and the interruption policy π^{INT}(·) should depend only on the current state: I(h_{1:t}) = I(s_t) and π^{INT}(a_t|h_{1:t}) = π^{INT}(a_t|s_t). In an MDP, the next observation o_t, now called a state s_t ∈ S, depends only on the current state and action: µ(s_{t+1}|h_{1:t} s_t a_t) = µ(s_{t+1}|s_t a_t) (MDP assumption). … 5 degree climate change in the 21st century under a certain emission scenario. It may be that attempting to assign strict probabilities is the wrong approach; an alternative would be to develop frameworks of 'safe operating thresholds' with wide error bounds, exemplified by the 'Planetary boundaries' framework put forward by Johan Rockstrom and colleagues at the Stockholm Resilience Centre (Steffen et al, 2015). The expected harm from long-term climate change will also depend very heavily on the extent to which various mitigation and adaptation strategies are adopted, and how successful they are. Similarly, in the case of global pandemics, we can ask scientific questions about the possibility of the 'perfect virus' with high health impact, high infectivity, and long incubation time. However, the scale and severity of impact will depend on many other factors: movement of humans or other vectors, capabilities of health services, the response of the population, and more. It is plausible that the bulk of the damage might not even be caused by the virus itself, but instead by a broad infrastructure collapse as emergency services and hospitals are overwhelmed, just-in-time food delivery is disrupted, and other systems underpinning societal order break down. However, none of these challenges are insurmountable. Modelling and analysis of the factors that contribute to these risks can deepen our scientific understanding. This can help us establish estimates for probability and impact, and in some cases rule out concerns entirely. Where past examples are sparse, there is value in drawing on counterfactual examples of 'near misses', as described by Gordon Woo (Woo, 2016) and others. Design and analysis of scenarios can help in identifying key considerations and interactions. This may help us identify key interventions that reduce risk significantly, even in instances where it is difficult to assign tight probabilities to events. Other risk analysis techniques, and interventions to mitigate risks at scales smaller than the global catastrophe level, also provide a range of useful insights for the analysis of global catastrophic risks, as shown by the work of GCRI and others. By studying global catastrophic risks as a class of risks, we can identify shared characteristics of global catastrophe events, which may help us identify common strategies that make us more resilient as a species against a broad set of risks. For example, a number of global catastrophic events (global nuclear war, supervolcano eruption, asteroid impact) would result in large amounts of particulate matter being ejected into the atmosphere, resulting in a disruption of photosynthesis (Maher and Baum, 2013).
The development of alternative foods that are not dependent on sunlight, and strategies to scale up production of these food sources rapidly, would be robust in the face of a broad range of catastrophe events (Denkenberger and Pearse, 2014) . The maintenance of permanent seed banks, manned shelters suitable for lengthy use, and information vaults represent similar safeguards. Similar strategies, useful for reduction of a broad range of risks, are likely to be feasible at the level of national and international governance (Farquhar et al, 2017; Cotton-Barrett et al, 2016) . \n Case Study: Potential future global risks from artificial intelligence In this final section, I aim to present this research community's work over the last decade on potential future risks from artificial intelligence as a case study on how the field has made progress on a particularly novel area of concern, focusing on the broad strategies involved. This is complemented by a more technical analysis of AI risk provided by Seth Baum. The theoretical nature of the risk: Early work, carried out mainly by the Machine Intelligence Research Institute and the Future of Humanity Institute, aimed to explore the possibility of artificial general intelligence of greater-than-human-ability. This was coupled with the aim of exploring and characterising the theoretical risk associated that such a development could pose, drawing on input from experts in computer science and other fields. Rather than focus on difficult-to-characterise questions like consciousness, sentience, and evil intentions, the work instead focused on foundational issues like: -The difficulty of designing safe goals for very capable, powerful systems able to take a very wide range of actions in a wide range of environments, where the actions could have a wide range of consequences. -Certain predicted behaviours that might be expected from a powerful optimizing agent, such as a drive to acquire additional resources, a drive to avoid being switched off prior to completion of its goal, or a drive to improve the system's own capability. -The theoretical possibility and limits of recursive self-improvement. This refers to the possibility that a sufficiently capable system may be able to surpass human programmers in its ability to design the next generation of systems. It is hypothesised that this could in turn result in a 'chain reaction' of performance improvement (Yampolskiy, 2015) , rapidly leading to a system far beyond human capability in most cognitive domains. -The challenge that for most of the relevant design and control processes, it may be necessary to solve a range of technical and theoretical issues ahead of time, before certain critical thresholds in capability are reached. Beyond these thresholds, it may be much more difficult to intervene effectively due to the level of capability and autonomy of subsequent iterations of the system. It is worth noting that in principle, it is possible that systems may be developed which have a much greater ability to engage in science, engineering, and manufacture, and a greater ability to manipulate the global environment than we humans have. This level of power raises concerns of global catastrophic or existential risk, unless very carefully developed. These programmes of research were then published in a book, Superintelligence (Bostrom, 2014) , which received a lot of attention, as well as in many further academic papers. 
These lines of argument are by no means uncontroversial: many experts disagree on various points. Most experts consider artificial general intelligence to be decades out of reach as a scientific milestone, and some expect hundreds of years of progress to be needed (Grace et al, 2017). Some are sceptical about the hypothesis that rapid progress, enabled by the engagement of the AI systems themselves in the research and development process, could occur once a certain level of capability and generality is reached (e.g. see Walsh, 2016). Yet others are sceptical that such systems would be likely to demonstrate the traits of agency, autonomy and goal-driven behaviour that may make the actions of such systems difficult to predict or intervene on. Some experts hold that there is a limit to how much meaningful work can be done at this point in time to ensure the safety and stability of future systems, given how little can be meaningfully predicted about the theoretical underpinnings and engineering design of these future systems, as well as the limits of our knowledge regarding the nature of intelligence. Others have raised the concern that a focus on risks from powerful future systems may distract attention from more near-term risks and opportunities associated with the current state of the technology. However, a growing body of experts in AI consider many of these concerns plausible and worthy of further study (e.g. see Dafoe and Russell, 2016). While the level of capability warranting global catastrophic concern may still be decades away or longer, ongoing research on the matter is warranted by the magnitude of the challenge and the range of technical and governance questions in need of study in advance of such developments. It is the view of this author that these concerns should not be used as a rationale to propose slowing down progress on the technology at this point in time. Nor should this research take the place of necessary and valuable work on near-term opportunities and challenges posed by AI; rather, it should complement it. Broader scientific engagement: The next steps for the community were to engage more deeply with the scientific artificial intelligence community in academia and industry: to explore, discuss and debate these arguments, as well as other risks that may be associated with AI. In 2015, a landmark conference was held in Puerto Rico, with representatives of the leading companies working explicitly towards a vision of general AI, alongside experts in governance, law, economics, risk and other relevant fields. The conference resulted in an open letter calling for more research on AI that was safe, robust, and beneficial, and for ongoing attention to issues relating to the longer term. The letter was signed by research leaders in artificial intelligence across industry and academia, as well as leading experts in a range of different fields, and was accompanied by a paper outlining research priorities and a grants programme to support relevant work. The conference has been followed by a programme of activities to foster collaboration with more of the machine learning community on both near- and long-term issues relating to safe design and risk. This has included a series of workshops organised by CSER and others at the major machine learning conferences (ICML, NIPS, IJCAI, DALI). In 2017, FLI organised a follow-on to the Puerto Rico conference in Asilomar, resulting in the endorsement by many leading researchers of a set of principles for the long-term development of AI.
\n Technical research: In parallel, much of the work in the last two years has focused on translating some of the more foundational questions raised by early work at FHI and MIRI and elsewhere into crisp technical research problems that can be worked on today. This includes approaches involving fundamental mathematical frameworks for agent decision-making and behaviour, as well as research programmes exploring how some of the behaviours that would be of concern in long-term systems may manifest in the near-term systems we are building currently. A number of research agendas have been published in the last year, and several of the leading companies focused on general AI now have safety teams exploring these issues. In addition, there has been substantial growth in projects mapping progress and trajectories in AI in domains relevant to both narrow and general artificial intelligence. Global engagement: Within the existential risk community, more thought is now going to the particular political and governance challenges that may emerge as we move closer towards more powerful and general AI systems, with programmes starting up at FHI, CSER, the Centre for the Future of Intelligence. Long-term impacts and risks have risen on the agenda for governments in Europe and the United States, for example being mentioned as worthy of further study in the US Office for Science and Technology Policy's recent report on preparing for the future of artificial intelligence. One priority that has emerged is the need for greater global engagement, particularly with research leaders and other stakeholders in China, but also India, Japan, and emerging hubs in Africa. Any global conversation around the future of artificial intelligence, and potential global benefits risks associated with it, needs to have global representation. From a pragmatic point of view, China is on course to emerge as a scientific leader and agenda-setter in AI research over the coming decade. If at some point in several decades we truly do approach a level of technological breakthrough with global risk consequences, we are unlikely to be able to achieve a safe transition without a strong level of global cooperation and trust, which is going to require a lot of dedicated work to achieve. Now is a good time to start laying the groundwork. \n Conclusion We are entering a century in which humanity will be confronted with unprecedented threats to global civilisation. Some of these may result from the manner in which our footprint as a species strains our global environment, such as the impacts of climate change and biodiversity loss. Some may result from the development and deployment of increasingly powerful technologies, whether due to malevolent use or unintended consequences. The challenge posed by the analysis and mitigation of these threats requires an interdisciplinary approach: a community that can draw on the best expertise from the risk sciences, as well as the expertise of scientists, law and governance specialists, ethicists, and others. It also requires a community that can draw on expertise on different types and sources of risk, and consider both lessons that can be applied across risks, and the interactions that are likely to occur between different global developments. Many of the most useful tools in global risk analysis have been drawn from the risk science literature, and deeper collaborations between the existential risk community and the risk science community are likely to be increasingly important in the years to come. 
The first dedicated centre established was arguably Nick Bostrom's Future of Humanity Institute (FHI; https://www.fhi.ox.ac.uk/) in Oxford in 2003. The FHI has focused on cross-cutting global catastrophic risk analysis, philosophical analysis on the global importance of reducing existential risk, and in the last decade has placed its strongest focus on characterising potential risks from artificial general intelligence and superintelligence; a lot of this has been in collaboration with the Machine Intelligence Research Institute in Berkeley. The Global Catastrophic Risk Institute (GCRI;The Centre for the Study of Existential Risk (CSER; http://cser.org) was founded in 2012 by Martin Rees, Huw Price, and Jaan Tallinn, although its first research grants were secured, and first postdoctoral researchers hired, more recently in late 2015 and 2016. We now have a research team beginning work on biological threats, extreme climate change, ecological tipping points, catastrophic risks related to future advances in artificial intelligence, and analysis of emerging technologies such as geoengineering. We also have postdocs working on more cross-cutting themes such as horizonscanning and foresight for extreme risk, responsible innovation in risky sciences and technologies, population growth and resource use.The Future of Life Institute (https://futureoflife.org/) was founded in 2014, focusing on artificial intelligence, climate change, risks from biotechnology, and risks from nuclear weapons. It has organised two highly successful conferences on the future of artificial intelligence and potential risks it might bring, resulting in a widely signed and shared open letter on the responsible development of AI, a grants programme to support work on AI safety, and a set of principles aimed at promoting the beneficial development and application of AI within the research community and more broadly. It has also organised a conference on nuclear war, and has engaged in activities to encourage divestment from nuclear weapons.Several more recent initiatives are underway within academia, including at Stockholm, Warwick (United Kingdom), and Australia National University, and other world-leading risk centres such as the Garrick Institute (UCLA) are increasingly including global catastrophic risk within their remit, indicating that a diverse range of new expertise will be brought to bear on these topics in coming years. http://gcrinstitute.org/) in the US was founded in 2011, focusing on risks including bioweapons, nuclear war, artificial intelligence, and natural events, and employing risk analysis methodology. Under Seth Baum and Tony Barrett's leadership, it has played a key role in establishing links between the existential risk community and experts in these fields. \n\t\t\t Electronic copy available at: https://ssrn.com/abstract=3446663", "date_published": "n/a", "url": "n/a", "filename": "SSRN-id3446663.tei.xml", "abstract": "In the last fifteen years there has been substantial growth in research on existential risk -the category of risks that threaten human extinction, or the permanent and drastic reduction of humanity's future potential. A number of new organisations focused explicitly on existential and global catastrophic risk have been founded in recent years, complementing the long-standing work of existing centres focused on specific risk areas such as nuclear war, biosecurity, climate change and systemic risk. 
This paper provides a brief overview of the emergence of this new research community, and provides a case study on the community's research on potential risks posed by future developments in artificial intelligence. There exists the opportunity for powerful collaboration between the new approaches and perspectives provided by the existential risk research community, and the expertise and tools developed by the risk sciences for risks of various magnitudes. However, there are a number of key characteristics of existential and global catastrophic risks, such as their magnitude, and their rare or unprecedented nature, that are likely to make them particularly challenging to submit to standard risk analysis, and will require new and specialised approaches. 1 It is worth noting that Bostrom also considers some more unusual possibilities, such as the emergence of permanent totalitarianism", "id": "396140839acd34180f4e843b7ad4bc92"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Smitha Milli", "Falk Lieder", "Thomas L Griffiths"], "title": "Running head: RATIONAL REINTERPRETATION OF DUAL-PROCESS THEORIES 1 A Rational Reinterpretation of Dual-Process Theories", "text": "rational solutions when resources are limited (Lieder, Griffiths, & Hsu, 2018; Lieder, Griffiths, Huys, & Goodman, 2018a , 2018b Howes, Warren, Farmer, El-Deredy, & Lewis, 2016; Khaw, Li, & Woodford, 2017; Sims, 2003; Tsetsos et al., 2016; Bhui & Gershman, 2017) . Furthermore, people appear to adaptively choose between their fast heuristics and their slower and more deliberate strategies based on the amount of resources available However, an issue still remains unresolved in the push for the resource-rational reinterpretation of these heuristics. Since the exact amount of computation to do for a problem depends on the particular time and cognitive resources available, a larger repertoire of reasoning systems should enable the mind to more flexibly adapt to different situations (Payne, Bettman, & Johnson, 1993; Gigerenzer & Selten, 2002) . In fact, achieving the highest possible degree of adaptive flexibility would require choosing from an infinite set of diverse cognitive systems. However, this is not consistent with behavioral and neuroscientific evidence for a small number of qualitatively different decision systems (van der Meer, Kurth-Nelson, & Redish, 2012; Dolan & Dayan, 2013) and similar evidence in the domain of reasoning (Evans, 2003 (Evans, , 2008 Evans & Stanovich, 2013) . One reason for a smaller number of systems could be that as the number of systems increases it becomes increasingly more time-consuming to select between them . This suggests that the number and nature of the mind's cognitive systems might be shaped by the competing demands for the ability to flexibly adapt one's reasoning to the varying demands of a wide range of different situations and the necessity to do so quickly and efficiently. In our work, we theoretically formalize this explanation, allowing us to derive not only what the optimal system is given a particular amount of resources, but what the optimal set of systems is for a human to select between across problems. Such an explanation may provide a rational reinterpretation of dual-process theories, the theory that the mind is composed of two distinct types of cognitive systems: one that is deliberate, slow, and accurate, and a second one that is fast, intuitive, and fallible (Evans, 2008; Kahneman & Frederick, 2002 . 
Similar dual-process theories have independently emerged in research on decision-making (Dolan & Dayan, 2013) and cognitive control (Diamond, 2013) . While recent work in these areas has addressed the question of how the mind arbitrates between the two systems (Daw, Niv, & Dayan, 2005; Keramati, Dezfouli, & Piray, 2011; Shenhav, Botvinick, & Cohen, 2013; Boureau, Sokol-Hessner, & Daw, 2015) , it remains normatively unclear why the mind would be equipped with these two types of cognitive system, rather than another set of systems. The existence of the accurate and deliberate system, commonly referred to as System 2 following Kahneman and Frederick (2002) , is easily justified by the benefits of rational decision-making. By contrast, the fast and fallible system (System 1) has been interpreted as a kluge (Marcus, 2009) and its mechanisms are widely considered to be irrational (Sutherland, 2013; Ariely, 2009; Tversky & Kahneman, 1974; Gilovich et al., 2002) . This raises the question why this system exists at all. Recent theoretical work provided a normative justification for some of the heuristics of System 1 by showing that they are qualitatively consistent with the rational use of limited cognitive resources (Griffiths et al., 2015; Lieder, Griffiths, & Hsu, 2018; Lieder, Griffiths, Huys, & Goodman, 2018a , 2018b ) -especially when the stakes are low and time is scarce and precious. Thus, System 1 and System 2 appear to be rational for different kinds of situations. For instance, you might want to rely on System 1 when you are about to get hit by a car and have to make a split-second decision about how to move. But, you might want to employ System 2 when deciding whether or not to quit your job. Here, we formally investigate what set of systems would enable people to make the best possible use of their finite time and cognitive resources. We derive the optimal tradeoff between the cognitive flexibility afforded by mutliple systems and the cost of choosing between them. To do so, we draw inspiration from the artificial intelligence literature on designing intelligent agents that make optimal use of their limited-performance hardware by building upon the mathematical frameworks of bounded optimality (Russell & Subramanian, 1995) and rational metareasoning (Russell & Wefald, 1991b; Hay, Russell, Tolpin, & Shimony, 2012) . We apply this approach to four different domains where the dual systems framework has been applied to explain human decision-making: binary choice, planning, strategic interaction, and multi-alternative, multi-attribute risky choice. We investigate how the optimal cognitive architecture for each domain depends on the variability of the environment and the cost of choosing between multiple cognitive systems, which we call metareasoning cost. This approach allows us to extend the application of resource-rational analysis from a particular system of reasoning to sets of cognitive systems, and our findings provide a normative justification for dual-process theories of cognition. Concretely, we find that across all four domains the optimal number of systems increases with the variability of the environment but decreases with the costliness determining when which of these systems should be in control. In addition, when it is optimal to have two systems, then the difference in their speed-accuracy tradeoffs increases with the variability of the environment. 
In variable environments, this results in one system that is accurate but costly to use and another system that is fast but error-prone. These predictions mirror the assertions of dual-process accounts of cognition (Evans, 2008; Kahneman, 2011) . Our findings cast new light on the debate about human rationality by suggesting that the apparently conflicting views of dual-process theories and rational accounts of cognition might be compatible after all. The remainder of this paper is structured as follows: We start by summarizing previous work in psychology and artificial intelligence that our article builds on. We then describe our mathematical methods for deriving optimal sets of cognitive systems. The subsequent four sections apply this methodology to the domains of binary choice, planning, strategic interaction in games, and multi-alternative risky choice. We conclude with the implications of our findings for the debate about human rationality and directions for future work. \n Background Before delving into the details of our analysis, we first discuss how our approach applies to the various dual-process theories in psychology, and how we build on the ideas of bounded optimality and rational metareasoning developed in artificial intelligence research. \n Dual-process theories The idea that human minds are composed of multiple interacting cognitive systems first came to prominence in the literature on reasoning (Evans, 2008; Stanovich, 2011) . While people are capable of reasoning in ways that are consistent with the prescriptions of logic, they often do not. Dual-process theories suggested that this is because people employ two types of cognitive strategies: fast but fallible heuristics that are triggered automatically and deliberate strategies that are slow but accurate. Different dual-process theories vary in what they mean by two cognitive systems. For example, Evans and Stanovich (2013) distinguish between dual processes, in which each process can be made up of multiple cognitive systems, and dual systems, which corresponds to the literal meaning of two cognitive systems. Because our work abstracts these cognitive systems based on their speed-accuracy tradeoff our analysis applies both at the level of systems or processes as long as the systems or processes accomplish speed-accuracy tradeoffs. Thus, our theory still applies to both dual \"processes\" and dual \"systems\". There is also debate over how the two systems would interact. Some theories postulate the existence of a higher-level controller that chooses between the two systems (Norman & Shallice, 1986; Shenhav et al., 2013) , some that the two systems run in parallel, and others that the slower system interrupts the faster one (Evans & Stanovich, 2013) . The analysis we present simply assumes that there is greater metareasoning cost incurred for each additional system. This is clearest to see when a higher-level controller needs to make the decision of which system to employ. Alternatively, if multiple cognitive systems operated in parallel, the cost of arbitrating between these systems would also increase with the number of systems -just like the metareasoning cost. So, we believe our analysis would also apply under this alternative assumption. 
Since their development in the reasoning literature, dual-process theories have been applied to explain a wide range of mental phenonomena, including judgment and decision-making, where it has been popularized by the distinction between System 1 and System 2 (Kahneman & Frederick, 2002 Kahneman, 2011) , and moral reasoning where the distinction is made between a fast deontological system and a slow utilitarian system (Greene, 2015) . In parallel with this literature in cognitive psychology, research on human reinforcement learning has led to similar conclusions. Behavioral and neural data suggest that the human brain is equipped with two distinct decision systems: a fast, reflexive, system based on habits and a slow, deliberate system based on goals (Dolan & Dayan, 2013) . The mechanisms employed by these systems have been mapped onto model-based versus model-free reinforcement learning algorithms. A model-free versus model-based distinction has also been suggested to account for the nature of the two systems posited to underlie moral reasoning (Cushman, 2013; Crockett, 2013) . The empirical support for the idea that the human mind is composed of two types of cognitive systems raises the question of why such a composition would evolve from natural selection. Given that people outperform AI systems in most complex real-world tasks despite their very limited cognitive resources (Gershman, Horvitz, & Tenenbaum, 2015) , we ask whether being equipped with a fast but fallible and a slow but accurate cognitive system can be understood as a rational adaption to the challenge of solving complex problems with limited cognitive resources (Griffiths et al., 2015) . \n Bounded Optimality and Resource-Rational Analysis Recent work has illustrated that promising process models of human cognition can be derived from the assumption that the human mind makes optimal use of cognitive resources that are available to it (Griffiths et al., 2015; Lewis, Howes, & Singh, 2014) . This idea can be formalized by drawing on the theory of bounded optimality which was developed as a foundation for designing optimal intelligent agents. In contrast to expected utility theory (Von Neumann & Morgenstern, 1944) , bounded optimality takes into account the constraints imposed by performance-limited hardware and the requirement that the agent has to interact its environment in real time (Russell & Subramanian, 1995) . The basic idea is to mathematically derive a program that would enable the agent to interact with its environment as well as or better than any other program that its computational architecture could execute. Critically, the agent's limited computational resources and the requirement to interact with a potentially very complex, fast-paced, dynamic environment in real-time entail that the agent's strategies for reasoning and decision-making have to be extremely efficient. This rules out naive implementations of Bayes rule and expected utility maximization as those would take so long to compute that the agent would suffer a decision paralysis so bad that it might die before taking even a single action. The fact that people are subject to the same constraints makes bounded optimality a promising normative framework for modeling human cognition (Griffiths et al., 2015) . Resource-rational analysis applies the principle of bounded optimality to derive optimal cognitive strategies from assumptions about the problem to be solved and the cognitive architecture available to solve it (Griffiths et al., 2015) . 
Recent work illustrates that this approach can be used to discover the discover and make sense of people's heuristics for judgment (Lieder, Griffiths, Huys, & Goodman, 2018a ) and decision-making (Lieder, Griffiths, Huys, & Goodman, 2018a; Lieder, Griffiths, & Hsu, 2018) , as well as memory and cognitive control (Howes et al., 2016) . The resulting models have shed new light on the debate about human rationality (Lieder, Griffiths, Huys, & Goodman, 2018a , 2018b Lieder, Krueger, & Griffiths, 2017; Lieder, Griffiths, Huys, & Goodman, 2018b; Lieder, Griffiths, & Hsu, 2018; Griffiths et al., 2015) . While this approach has so far focused on one individual strategy at a time, the research presented here extends it to deriving optimal cognitive architectures comprising multiple systems or strategies for a wider range of problems. To do so, we use the theory of rational metareasoning as a foundation for modeling how each potential cognitive architecture would decide when to rely on which system or strategy. \n Rational metareasoning as a framework for modeling the adaptive control of cognition Previous research suggests that people flexibly adapt how they decide to the requirements of the situation (Payne, Bettman, & Johnson, 1988) . Recent theoretical work has shown that this adaptive flexibility can be understood within the rational metareasoning framework developed in artificial intelligence . Rational metareasoning (Russell & Wefald, 1991b; Hay et al., 2012) formalizes the problem of selecting computations so as to make optimal use of finite time and limited-performance hardware. The adaptive control of computation afforded by rational metareasoning is critical for intelligent systems to be able to solve complex and potentially time-critical problems on performance-limited hardware (Horvitz, Cooper, & Heckerman, 1989; Russell & Wefald, 1991b) . For instance, it is necessary for a patient-monitoring system used in emergency medicine to metareason in order to decide when to terminate diagnostic reasoning and recommend treatment. (Horvitz & Rutledge, 1991) . This example illustrates that rational metareasoning may be necessary for agents to achieve bounded-optimality in environments that pose a wide range of problems that require very different computational strategies. However, to be useful for achieving bounded-optimality, metareasoning has to be done very efficiently. In principle, rational metareasoning could be used to derive the optimal amount of time and mental effort that a person should invest into making a decision (Shenhav et al., 2017) . Unfortunately, selecting computations optimally is a computation-intensive problem itself because the value of each computation depends on the potentially long sequence of computations that can be performed afterwards. Consequently, in most cases, solving the metareasoning problem optimally would defeat the purpose of trying to save time and effort (Lin, Kolobov, Kamar, & Horvitz, 2015; Hay et al., 2012; Russell & Wefald, 1991a) . Instead, to make optimal use of their finite computational resources bounded-optimal agents (Russell & Subramanian, 1995) must optimally distribute their resources between metareasoning and reasoning about the world. Thus, studying bounded-optimal metareasoning might be a way to understand how people manage to allocate their finite computational resources near-optimally with very little effort (Gershman et al., 2015; Keramati et al., 2011) . 
Recent work has shown that approximate metareasoning over a discrete set of cognitive strategies can save more time and effort than it takes and thereby improve overall performance (Lieder et al., 2014) . This approximation can drastically reduce the computational complexity of metareasoning while achieving human-level performance (Lieder et al., 2014; . Thus, rather than metareasoning over all possible sequences of mental operations to determine the exact amount of time to think, humans may simply metareason over a finite set of cognitive systems that have different speed and accuracy tradeoffs. This suggests a cognitive architecture comprising multiple systems for reasoning and decision making and a executive control system that arbitrates between them -which is entirely consistent with extant theories of cognitive control and mental effort (Norman & Shallice, 1986; Shenhav et al., 2017 Shenhav et al., , 2013 . Dual-process theories can be seen as a special case of this cognitive architecture where the number of decision systems is two. According to this perspective, the executive control system selects between a limited number of cognitive systems by predicting how well each of them would perform in terms of decision quality and effort and then selects the systems with the best predicted performance . Assuming that each of these predictions takes a certain amount of mental effort, this entails that the cost of deciding which cognitive system to rely on in a given situation increases with the number of systems. At the same time, increasing the number of systems also increases the agent's cognitive flexibility thereby enabling it to achieve a higher level of performance across a wider range of environments. Conversely, reducing the space of computational mechanisms the agent can choose from entails that there may be problems for which the optimal computational mechanisms will be no longer available. This dilemma necessitates a tradeoff that sacrifices some flexibility to increase the speed at which cognitive mechanisms can be selected. This raises the question of how many and which computational mechanisms a bounded-optimal metareasoning agent should be equipped with, which we proceed to explore in the following sections. \n Deriving Bounded-Optimal Cognitive Systems We now describe our general approach for extending resource-rational analysis to the level of cognitive architectures. The first step is to model the environment. For the purpose of our analysis, we characterize each environment by the set of decision problems D that it poses to people and a probability distribution P over D that represents how frequently the agent will encounter each of them. The set of decision problems D could be quite varied, for example, it could include deciding which job to pick and deciding what to eat for lunch. In this case P would encode the fact that deciding what to eat for lunch is a more common type of decision problem than deciding which job to pick. Associated with each decision problem d is a utility function U d (a) that represents the utility gained by the agent for taking action a in decision problem d. Having characterized the environment in terms of decision problems, we now model how people might solve them. We assume that there is a set of reasoning and decision-making systems T that the agent could potentially be equipped with. The question we seek to investigate is what subset M ⊆ T is optimal for the agent to actually be equipped with. 
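As a toy illustration of the arbitration scheme described earlier in this section (an executive controller predicting each system's decision quality and effort, paying a prediction cost per system considered), here is a minimal sketch; the names and numbers are invented for illustration and are not a model from the paper.

```python
def arbitrate(systems, predict_quality, predict_effort, prediction_cost):
    """Score each system by predicted decision quality minus predicted effort,
    paying a fixed prediction cost per system considered, so the metareasoning
    cost grows linearly with the number of systems (illustrative sketch)."""
    metareasoning_cost = prediction_cost * len(systems)
    best = max(systems, key=lambda m: predict_quality(m) - predict_effort(m))
    return best, metareasoning_cost

# Hypothetical usage: a fast habitual system versus a slow deliberate one.
systems = ["habit", "deliberation"]
quality = {"habit": 0.7, "deliberation": 0.95}
effort = {"habit": 0.05, "deliberation": 0.4}
print(arbitrate(systems, quality.get, effort.get, prediction_cost=0.02))
```

With these made-up numbers the fast habitual system wins despite its lower accuracy, which is exactly the kind of situation-dependent trade-off the analysis below formalises; adding more systems raises flexibility but also raises the metareasoning cost term.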
The optimal set of systems M is dependent on three costs: (1) the action cost: the cost of taking the chosen action, (2) the reasoning cost: the cost of using a system from M to reason about which action to take, (3) the metareasoning cost: the cost of deciding which system to use to decide which action to take. For simplicity, we will describe each of the costs in terms of time delays, although they also entail additional costs, including metabolic costs. As an example, consider the scenario of deciding what to order for lunch at a restaurant. The diner has a fixed amount of time she can spend at lunch until she needs to get back to work, so time is a finite resource. The action cost is the time required to eat the meal. A person might have multiple systems for deciding which items to choose. For example, one system may rely on habit and order the same dish as last time. Another system may perform more logical computation to analyze the nutritional value of each item or what the most economical choice is. Each system has an associated reasoning cost, the time it takes for that system to decide which item to order. It is clear that the diner has to balance the amount of time spent thinking about what meal to pick (reasoning cost) with the amount of time it will take to actually eat the meal (action cost), so that she is able to finish her meal in the time she has available. If the diner is extremely time-constrained, perhaps because of an urgent meeting she needs to get back to, then she may simply heuristically plop items onto her plate. But, if the diner has more time, then she may think more about what items to choose. In addition to the cost of reasoning and the cost of acting, having multiple decision systems also incurs the cost of metareasoning, that is reasoning about how to reason about what to do. In other words, the metareasoning cost is how much time it takes the diner to decide how much to think about whether to rely on her habits, an analysis of nutritional value, or any of the the other decision mechanisms she may have at her disposal. If the diner only has one system of thinking, then the metareasoning cost is zero. But as the number of systems increases, the metareasoning cost of deciding which system should be in control increases. This raises the question of what is the optimal ensemble of cognitive systems, how many systems does it include, and what are they? We can derive the answer to these questions by computing minimizing the expected sum of action cost, reasoning cost, and metareasoning cost over the set of all possible ensembles of cognitive systems. In summary, our approach for deriving a bounded-optimal cognitive architecture proceeds as follows: 1. Model the environment. Define the set of decision problems D, the distribution over them P , and the utility for each problem U d (a). 2. Model the agent. Define the set of possible cognitive systems T the agent could have. 3. Specify the optimal mind design problem. Define the metric that the bounded agent's behavior optimizes, i.e., a trade-off between the utility it gains and the costs that it incurs; the action cost, reasoning cost, and metareasoning cost. 4. Solve the optimal mind design problem. Solve (3) to find the optimal set of systems M ⊆ T for the agent to be equipped with. Once we have done this, we can begin to probe how different parts of the simulation affect the final result in step (4). 
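To make steps 3 and 4 concrete, the sketch below performs the "optimal mind design" search by brute force under a simple additive trade-off between utility, reasoning cost, and a metareasoning cost linear in |M|. The simulations reported below instead optimise expected utility per unit time, and every name and number here is an illustrative assumption rather than the paper's implementation.

```python
from itertools import combinations

def optimal_mind_design(systems, problems, prob, utility, reasoning_cost,
                        meta_cost_per_system, max_size=3):
    """Brute-force version of steps 3-4 (illustrative): score each candidate set
    M of cognitive systems by the expected utility of the best system in M on
    each problem, minus its reasoning cost and a metareasoning cost that grows
    linearly with |M|; return the best-scoring set."""
    def score(M):
        expected = 0.0
        for d in problems:
            best = max(utility(d, m) - reasoning_cost(d, m) for m in M)
            expected += prob[d] * best
        return expected - meta_cost_per_system * len(M)
    candidates = [M for k in range(1, max_size + 1)
                  for M in combinations(systems, k)]
    return max(candidates, key=score)

# Hypothetical two-system, two-problem example (lunch vs. job choice).
systems = ("fast", "slow")
problems = ("lunch", "job")
prob = {"lunch": 0.9, "job": 0.1}
utility = lambda d, m: {("lunch", "fast"): 0.8, ("lunch", "slow"): 0.9,
                        ("job", "fast"): 0.3, ("job", "slow"): 1.0}[(d, m)]
reasoning_cost = lambda d, m: 0.05 if m == "fast" else 0.3
print(optimal_mind_design(systems, problems, prob, utility, reasoning_cost,
                          meta_cost_per_system=0.02))
```

In this made-up example the fast system is best for the frequent low-stakes problem and the slow system for the rare high-stakes one, so keeping both is worth the extra metareasoning cost when that cost is small.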
For example, we expect that the optimal cognitive architecture for a variable environment should comprise multiple cognitive systems with different characteristics. But at the same time, the number of systems should not be too high, or else the time spent on deciding which system to use, the metareasoning cost, will be too high. In other words, we hypothesize that the number of systems will depend on a tradeoff between the variability of the environment and the metareasoning cost. Our simulations show that this is indeed the case. \n Simulation 1: Two-Alternative Forced Choice Our first simulation focuses on the widely used two-alternative forced choice (2AFC) paradigm, in which a participant is forced to select between two options. For example, a categorization task may ask whether a stimulus belongs to a given category or not, and psychophysics experiments often require participants to judge whether two stimuli are the same or different. Even in simple laboratory settings, judgments made within a 2AFC task seem to stem from systematically different modes of thinking. Therefore, 2AFC tasks are a prime setting in which to start evaluating our theory of dual-process systems. But before describing the details of our 2AFC simulation, we first review evidence for dual-process accounts of behavior in the 2AFC paradigm. A very basic binary choice task presents an animal with a lever that it can either press to obtain food or decide not to press (Dickinson, 1985). It has been shown that early on in this task rodents' choices are governed by a flexible brain system that will stop pressing the lever when they no longer want the food. By contrast, after extensive training their choices are controlled by a different, inflexible brain system that will continue to press the lever even when the reward is devalued by poisoning the food. Interestingly, these two systems are preserved in the human brain and the same phenomenon has been demonstrated in humans (Balleine & O'Doherty, 2010). Another example of two-alternative forced choice is the probability learning task, where participants repeatedly choose between two options, the first of which yields a reward with probability p_1 and the second of which yields a reward with probability p_2 = 1 − p_1. It has been found that, depending on the incentives, people tend to make these choices in two radically different ways (Shanks, Tunney, & McCarthy, 2002): when the incentives are low, people tend to use a strategy that chooses option one with a frequency close to p_1 and option two with a frequency close to p_2, which can be achieved very efficiently (Vul, Goodman, Griffiths, & Tenenbaum, 2014). By contrast, when the incentives are high, people employ a choice strategy that maximizes their earnings by almost always choosing the option that is more likely to be rewarded, which requires more computation (Vul et al., 2014). The dual-systems perspective on 2AFC leaves open the normative question: what set of systems is optimal for the agent to be equipped with? To answer this question, we apply the methodology described in the previous section to the problem of bounded-optimal binary choice. \n Methods As in the 2AFC probability learning task used by Shanks et al. (2002), the agent receives a reward of +1 for picking the correct action and 0 for picking the incorrect action. An unboundedly rational agent would always pick the action with the higher probability of being correct.
Yet, although the task is simple in its set-up, computing the probability of an action being correct generally requires complex inferences over many interconnected variables. For example, if the choice is between turning left onto the highway or turning right onto smaller backroads, estimating which action will lead to less traffic may require knowledge of when rush hour is, whether there is a football game happening, and whether there are accidents in either direction. To approximate these often intractable inferences, people appear to perform probabilistic simulations of the outcomes, and the variability and biases of their predictions (Griffiths & Tenenbaum, 2006; Lieder, Griffiths, Huys, & Goodman, 2018a) and choices (Vul et al., 2014; Lieder, Griffiths, & Hsu, 2018) match those of efficient sampling algorithms. Previous work has therefore modeled people as bounded-optimal sample-based agents, which draw a number of samples from the distribution over correct actions and then pick the action that was sampled most frequently (Vul et al., 2014; Griffiths et al., 2015). In line with this prior work, we too model the agent as a sample-based agent, described formally below. Let a_0 and a_1 be the actions available to the agent, where a_1 has probability θ of being the correct action and a_0 has probability 1 − θ of being correct. The probability θ that a_1 is correct varies across different environments, reflecting the fact that in some settings it is easier to tell which action is correct than in others. For example, when choosing between a two-month-old tomato and a fresh orange, it is obvious that the more nutritious choice is the latter; in this case, the fresh orange is correct with probability near one. On the other hand, it may be quite difficult to decide between attending graduate school at two universities with similar programs; in this case, the difference between the probabilities of each being correct may be quite marginal, and both might have close to a 0.5 chance of being correct. We model the variability in the difficulty of this choice by assuming that θ is equally likely to be any value in the range (0.5, 1), i.e., θ ∼ P_θ = Unif(0.5, 1). We consider the range (0.5, 1) instead of (0, 1) without loss of generality because we can always rename the actions so that a_1 is more likely to be correct than a_0. To make a decision, the sample-based agent draws some number of samples k from the distribution over correct actions, i ∼ Bern(θ), and picks the action a_i that it sampled more often. 1 If the agent always draws k samples before acting, then its expected utility across all environments is
$E_\theta[U \mid k] = \int_\theta \big[ P(a_1 \text{ is correct}) \cdot P(\text{agent picks } a_1 \mid k) + P(a_0 \text{ is correct}) \cdot P(\text{agent picks } a_0 \mid k) \big] \, P_\theta(d\theta)$ . (1)
See Appendix A for a detailed derivation of how to calculate the quantity in Equation 1. If there were no cost for samples, then the agent could take an infinite number of samples to ensure choosing the correct action. But this is, of course, impractical in the real world, because drawing a sample takes time and time is limited. Vul et al. (2014) show how the optimal number of samples changes based on the cost of sampling in various 2AFC problems. They parameterize the cost of sampling as the ratio r_e between the time required to act and the time required to draw one sample. If acting takes one unit of time, then the amount of time it takes to draw k samples is k/r_e, and the total amount of time the agent takes is 1 + k/r_e.
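Equation 1 can be checked numerically. The following is a minimal Monte Carlo sketch under the stated assumptions (θ drawn from Unif(0.5, 1), k independent Bernoulli(θ) samples, majority vote, ties broken by a fair coin); the number of simulated environments and the random seed are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_utility(k, n_env=100_000):
    """Monte Carlo estimate of E_theta[U | k] for the sample-based 2AFC agent."""
    theta = rng.uniform(0.5, 1.0, size=n_env)            # difficulty varies by environment
    if k == 0:
        return 0.5                                        # no samples: guess at random
    samples = rng.random((n_env, k)) < theta[:, None]     # k Bernoulli(theta) samples
    votes_for_a1 = samples.sum(axis=1)
    tie = (2 * votes_for_a1 == k)
    picks_a1 = np.where(tie, rng.random(n_env) < 0.5, 2 * votes_for_a1 > k)
    # utility is 1 exactly when the sampled majority points to the correct action
    return float(np.mean(np.where(picks_a1, theta, 1.0 - theta)))

for k in [0, 1, 3, 5, 11]:
    print(k, round(expected_utility(k), 3))   # e.g. E[U | k=1] should be close to 0.75
```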
Thus, the optimal number of samples the agent should draw to maximize its expected utility per unit time is
$k^* = \arg\max_{k \in \mathbb{N}_0} \frac{E_\theta[U \mid k]}{1 + k/r_e}$ . (2)
When the time it takes to generate a sample is at least one tenth of the time it takes to execute the action (r_e ≤ 10), then the optimal number of samples is either zero or one. In general, the first sample provides the largest gain in decision quality and the returns diminish with every subsequent sample. The point where the gain in decision quality falls below the cost of sampling depends on the value of r_e. Since this value can differ drastically across environments, achieving a near-optimal tradeoff in all environments requires adjusting the number of samples. Even a simple heuristic-based metareasoner that adapts the number of samples it takes based on a few thresholds on r_e does better than one that always draws the same number of samples (Icard, 2014). Here, we study an agent that chooses how many samples to draw by metareasoning over a finite subset M of all possible numbers of samples. Furthermore, we assume that the time spent metareasoning increases linearly with the number of systems. By analogy to Vul et al. (2014), we formalize the metareasoning cost in terms of the ratio r_m of the time it takes to act over the time it takes to predict the performance of a single system. We can again calculate the total amount of time the agent spends on the problem, now taking into account the time spent on metareasoning. Just as before, the agent spends one unit of time executing its action and k/r_e units of time drawing k samples. But now, we also account for the time it takes the agent to predict the performance of a system: 1/r_m. The total amount of time it takes the agent to metareason, i.e., to predict the performance of all systems, is |M|/r_m. Therefore, the total amount of time is 1 + π_M(r_e)/r_e + |M|/r_m, where π_M(r_e) denotes the number of samples drawn by the system the agent selects from M. We assume the agent picks the optimal number of samples from its set of systems,
$k^* = \arg\max_{k \in M \cup \{0\}} \frac{E_\theta[U \mid k]}{1 + k/r_e + |M|/r_m}$ . (3)
Given this formulation of the problem, we can now calculate the optimal set of systems for the agent. The set of cognitive systems that results in the optimal expected utility per unit time for the bounded sampling agent is
$M^* = \arg\max_{M \subset \mathbb{N}} E_{r_e}\!\left[ \max_{k \in M \cup \{0\}} \frac{E_\theta[U \mid k]}{1 + k/r_e + |M|/r_m} \right]$ . (4)
Equation 4 resembles Equation 3 because both optimize the agent's expected utility per unit time. The difference is that Equation 3 calculates the optimal number of samples for a fixed cost of sampling, while Equation 4 calculates the optimal set of systems for a distribution of sampling costs. Note that the optimal set of systems depends on the distribution of the sampling cost r_e across different environments. Since sampling an action generally takes less time than executing the action, we assume that r_e is always greater than one. We satisfy this constraint by modeling r_e as following a shifted Gamma distribution, i.e., r_e − 1 ∼ Γ(α, β). \n Results Figure 1 shows a representative example 2 of the expected utility per unit time as a function of the number of systems for different metareasoning costs. Under a large range of metareasoning costs the optimal number of systems is just one, but as the costliness of selecting a cognitive system decreases, the optimal number of systems increases.
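For concreteness, the following sketch strings Equations 2-4 together: it takes estimates of E_θ[U | k] as given (here entered as rough placeholder values, which could instead be computed as in the previous snippet or Appendix A), draws r_e from the shifted Gamma distribution, and searches over candidate sets M. The Gamma parameters and the candidate sample counts are assumptions for illustration, not the exact settings behind Figures 1 and 2.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(1)

# Rough placeholder values for E_theta[U | k]; not the values used in the paper.
EU = {0: 0.50, 1: 0.75, 3: 0.80, 5: 0.82, 7: 0.84, 9: 0.85}

def reward_rate(k, r_e, n_systems, r_m):
    """Expected utility per unit time (the quantity inside Equations 3 and 4)."""
    return EU[k] / (1.0 + k / r_e + n_systems / r_m)

def optimal_system_set(candidate_ks=(1, 3, 5, 7, 9), r_m=1000.0,
                       alpha=2.0, beta=50.0, n_env=2_000, max_size=4):
    r_e = 1.0 + rng.gamma(alpha, beta, size=n_env)       # r_e - 1 ~ Gamma(alpha, beta)
    best = (None, -np.inf)
    for size in range(1, max_size + 1):
        for M in combinations(candidate_ks, size):
            value = np.mean([max(reward_rate(k, re, len(M), r_m) for k in (0,) + M)
                             for re in r_e])
            if value > best[1]:
                best = (M, float(value))
    return best

print(optimal_system_set())                  # low metareasoning cost: larger M can pay off
print(optimal_system_set(r_m=10.0))          # high metareasoning cost: fewer systems
```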
However, even when the optimal number of systems is more than one, each additional system tends to result in only a marginal increase in utility, suggesting that one reason for having few cognitive systems may be that the benefit of additional systems is very low. Figure 2 shows that the optimal number of systems increases with the variance of r_e and decreases with the cost of selecting between cognitive systems (i.e., 1/r_m). Interestingly, there is a large set of plausible combinations of variability and metareasoning cost for which the bounded-optimal agent has two cognitive systems. In addition, when the optimal number of systems is two, the gap between the values of the two systems picked increases with the variance of r_e (see Table 1), resulting in one system that has high accuracy but high cost and another system that has low accuracy and low cost, which matches the characteristics of the systems posited by dual-process accounts. Thus, the conditions under which we would most expect to see two cognitive systems like the ones suggested by dual-process theories are when the environment is highly variable and arbitrating between cognitive systems is costly. \n Simulation 2: Sequential Decision-Making Our first simulation modeled one-step decision problems in which the agent made a single choice between two options. In our second simulation, we turn to more complex, sequential decision problems, in which the agent needs to choose a sequence of actions over time in order to achieve its goal. In these problems, the best action to take at any given point depends on future outcomes and actions, leading to the need for planning. Furthermore, since actions affect the environment only probabilistically, the planning must be done under uncertainty. Although planning often allows us to make better decisions, it places high demands on people's working memory and time (Kotovsky, Hayes, & Simon, 1985). This may be why research on problem solving has found that people use both planning and simple heuristics (Newell & Simon, 1972; Atwood & Polson, 1976; Kotovsky et al., 1985), and models of problem solving often assume that the mind is equipped with a planning strategy, such as means-ends analysis, and one or two simple heuristics such as hill climbing (Newell & Simon, 1972; Gunzelmann & Anderson, 2003; Anderson, 1990). Consistent with these findings, modern research on sequential decision-making points to the coexistence of two systems: a reflective, goal-directed system that uses a model of the environment to plan multiple steps into the future, and a reflexive system that learns stimulus-response associations (Dolan & Dayan, 2013). Interestingly, people appear to select between these two systems in a manner consistent with rational metareasoning: when people are given a task where they can either plan two steps ahead to find the optimal path or perform almost equally well without planning, they often eschew planning (Daw et al., 2005; Kool, Cushman, & Gershman, 2016), but when the incentive structure is altered to make planning worthwhile, people predominantly rely on the planning system (Kool, Gershman, & Cushman, 2017). These findings are also consistent with Anderson's rational analysis of problem solving, which assumed that people select between planning according to means-ends analysis and a hill-climbing heuristic according to a rational cost-benefit analysis (Anderson, 1990).
Working from the assumption that the mind is equipped with a planning-based system, we formalize the agent's task as a Markov decision process (MDP) defined by a set of states S, a set of actions A, a cost function c(s, a), a current state s, a transition probability model p : S × A × S → [0, 1] that defines the probability of the next state given the current state and the action taken, an absorbing goal state g, and a time horizon h. Experience in these MDPs can be thought of as a set of trials or episodes. A trial ends once the agent reaches the absorbing goal state g or exceeds the maximal number of time steps allowed by the time horizon h. In the standard formulation, at each time step the agent takes an action that depends on its current state. The agent's action choices can be concisely represented by a policy π : S → A that returns an action for each state. An optimal policy minimizes the expected sum of costs across the trial:
$\pi^* = \arg\min_{\pi} E\!\left[ \sum_{i=0}^{N} c(s_i, \pi(s_i)) \,\middle|\, \pi \right]$ , (5)
where s_i is the state at time step i and N is the time step at which the episode ends (either once the agent reaches the goal state g or once the time horizon h is reached). The expectation is taken over the states at each time step, which are stochastic according to the transition model p. However, this formulation of the problem ignores the fact that the agent needs to think in order to decide how to act, and that thinking also incurs a cost. We extend the standard MDP formulation to account for the cost of thinking. At each time step, the agent has a thinking stage followed by an acting stage. In the thinking stage, the agent executes a system t that (stochastically) decides on an action a. In the acting stage, the agent takes the action a. In addition to the cost c(s, a) of acting, there is also a cost f(t) that measures the cost of thinking with system t. An optimal system then minimizes the total expected cost of acting and thinking:
$t^* = \arg\min_{t} E\!\left[ \sum_{i=0}^{N} \big( c(s_i, a_i) + f(t) \big) \,\middle|\, t \right]$ , (6)
where a_0, ..., a_N are the actions chosen by t at each time step and s_0, ..., s_N are the states at each time step. The expectation is taken over states and actions, which are stochastic because the transition model p and the system t are not necessarily deterministic. The agent's thinking systems are based on bounded real-time dynamic programming (BRTDP; McMahan, Likhachev, & Gordon, 2005), a planning algorithm from the artificial intelligence literature. BRTDP simulates potential action sequences and then uses these simulations to estimate an upper bound and a lower bound on how good each action is in each possible state. It starts with a heuristic bound and then continuously improves the accuracy of its estimates. Depending on the number of simulations chosen, it can be executed for an arbitrarily short or long amount of time. Fewer simulations result in faster but less accurate solutions, while more simulations result in slower but more accurate solutions, making BRTDP particularly well-suited for studying metareasoning (Lin et al., 2015). During the thinking stage, the agent chooses the number of action sequences to simulate (k) and then, based on these simulations, uses BRTDP to update its estimate of how good each action is in each possible state. During the acting stage, the agent takes the action with the highest upper bound on its value. Thus the agent's policy is defined entirely by k, the number of action sequences it simulates. This type of policy corresponds to the Think*Act policy of Lin et al. (2015). We consider environments in which there is a constant cost per action (c_a) from all non-goal states: c(s, a) = c_a.
The cost of executing a system is linear in the number of simulated action sequences (k): f(k) = c_e · k, where c_e is the cost of each mental simulation. We reparameterize the costs by the ratio of the cost of acting over the cost of thinking, r_e = c_a / c_e. Having defined the agent's policy and costs, Equation 6 simplifies to
$k^* = \arg\min_{k \in \mathbb{N}_0} \left( 1 + \frac{k}{r_e} \right) E[N \mid k]$ , (7)
where N is the number of time steps until the trial ends, either by reaching the goal state or by hitting the time horizon. See Appendix B for a derivation. Equation 7 defines the optimal system for the agent to use for a particular decision problem, but we seek to investigate what set of systems is optimal for the agent to be equipped with for a range of decision problems. We assume that there is a distribution of MDPs the agent may encounter, and while r_e is constant within each problem, it varies across different problems. Therefore, optimally allocating finite computational resources requires metareasoning. We assume that metareasoning incurs a cost that is linear in the number of systems: c_m · |M|, where c_m is the cost required to predict the performance of a single system. Similarly, we can reparameterize this cost using r_m = c_a / c_m, so that the cost of metareasoning becomes |M|/r_m. Assuming that the agent chooses optimally from its set of planning systems, the optimal set of systems for it to be equipped with is
$M^* = \arg\min_{M \subset \mathbb{N}} E_{r_e}\!\left[ \min_{k \in M \cup \{0\}} \left( 1 + \frac{k}{r_e} \right) E[N \mid k] + \frac{|M|}{r_m} \right]$ . (8)
We investigated the size and composition of the optimal set of planning systems for a simple 20 × 20 grid world where the agent's goal is to get from the lower left corner to the upper right corner at as little cost as possible. The horizon was set to 500, and the maximum number and length of simulated action sequences at any thinking stage were set to 10. BRTDP was initialized with a constant value function of 0 for the lower bound and a constant value function of 10^6 for the upper bound. This means that the agent's initial policy was to act randomly, which is highly suboptimal. For each environment, the ratio of the cost of acting over the cost of planning (r_e) was again drawn from a Gamma distribution and shifted by one, that is, r_e − 1 ∼ Γ(α, β). The expected number of steps required to reach the goal, E[N | k], was estimated via simulation (see Figure 3). \n Results We find that all of our results match the two-alternative forced choice setting extremely closely. Because the agent rarely reached the goal with zero planning (E[N | k = 0] = 500), one system provided the largest reduction in expected cost, with each additional system providing at most marginal reductions (Figure 4). The optimal number of systems increased with the variance of r_e and decreased with the metareasoning cost (1/r_m). This resulted in the optimal number of cognitive systems being two for a wide range of plausible combinations of variability and metareasoning cost (Figure 5). In addition, when the number of systems was two, the difference between the amount of planning performed by the two optimal systems increased with the variance of r_e. 3 This resulted in one system that does a large amount of planning but is costly and another system that plans very little but is computationally inexpensive, matching the characteristics of the two types of systems postulated by dual-process theories.
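A compact way to see how Equation 8 is evaluated is to treat E[N | k] as a lookup table estimated by simulation (as in Figure 3) and then search over candidate sets of planning systems. In the sketch below the E[N | k] values, the Gamma parameters, and the candidate values of k are stand-ins chosen for illustration; only E[N | k = 0] = 500 is taken from the text.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(2)

# Placeholder estimates of E[N | k] (steps to the goal given k simulated action
# sequences); only E[N | 0] = 500 comes from the text, the rest are assumed.
E_N = {0: 500, 1: 180, 2: 120, 4: 80, 6: 60, 8: 50, 10: 45}

def expected_total_cost(M, r_m, alpha=2.0, beta=50.0, n_env=2_000):
    """Monte Carlo estimate of the objective inside Equation 8 (with c_a = 1)."""
    r_e = 1.0 + rng.gamma(alpha, beta, size=n_env)
    per_problem = [min((1.0 + k / re) * E_N[k] for k in (0,) + M) + len(M) / r_m
                   for re in r_e]
    return float(np.mean(per_problem))

def optimal_planning_systems(candidates=(1, 2, 4, 6, 8, 10), r_m=1000.0, max_size=4):
    best = (None, np.inf)
    for size in range(1, max_size + 1):
        for M in combinations(candidates, size):
            cost = expected_total_cost(M, r_m)
            if cost < best[1]:
                best = (M, cost)
    return best

print(optimal_planning_systems())
```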
Simulation 3: Strategic interaction in a two-player game Starting in the 1980s, researchers began applying dual-process theories to social cognition (Chaiken & Trope, 1999; Evans, 2008). One hypothesis for why the heuristic system exists is that exact logical or probabilistic reasoning is often computationally prohibitive. For instance, Herbert Simon famously argued that computational limitations place substantial constraints on human reasoning (Simon, 1972, 1982). Such computational limitations become readily apparent in problems involving social cognition, because the number of future possibilities explodes once the actions of others must be considered. For example, one of Simon's classic examples was chess, where reasoning out the best opening move is completely infeasible because it would require considering about 10^120 possible continuations. In this section, we show that our findings about the optimal set of cognitive systems in decision-making and planning tasks also apply to tasks that involve reasoning about decisions made by others. Specifically, we focus on strategic reasoning in Go, an ancient two-player game. Two-player games are the simplest and perhaps most widely used paradigm for studying strategic reasoning about other people's actions (Camerer, 2011). Although seemingly simple, it is typically impossible to exhaustively reason about all possibilities in a game, making heuristic reasoning necessary. This is especially true in Go, which has about 10^360 continuations from the first move (compared to chess, which has \"only\" 10^120 possible continuations). \n Methods We now describe the details of our simulation deriving bounded-optimal architectures for strategic reasoning in the game of Go. The agent's thinking systems are based on a planning algorithm known as Monte Carlo tree search (MCTS) (Browne et al., 2012). Recently, AlphaGo, a computer system based on MCTS, became the first to defeat the Go world champion and achieve superhuman performance in the game of Go (Silver et al., 2016, 2017). Like other methods for planning against adversarial opponents, MCTS works by constructing a game tree to plan future actions. Unlike other methods, MCTS selectively runs stochastic simulations (also known as rollouts) of different actions, rather than exhaustively searching through the entire game tree. In doing so, MCTS focuses on moves and positions whose values appear both promising and uncertain. In this regard, MCTS is similar to human reasoning (Newell & Simon, 1972). Furthermore, the number of simulations used by MCTS affects how heuristic or accurate the method is, making it well-suited for studying metareasoning. When the number of simulations is small, the algorithm is faster but less accurate; when the number of simulations is high, the algorithm is slower but more accurate. Thus, similar to the sequential decision-making setting (Simulation 2), we assume that the agent metareasons over systems M that differ in how many simulations (k) they perform. On each turn, there is a thinking stage and an acting stage.
\n Figure 6. Performance as a function of the amount of reasoning in the game of Go (Simulation 3). As the amount of computation (number of simulations) increases, the likelihood of selecting a good action increases, resulting in larger utility (a), and the game tends to be won in fewer moves (b).
In the thinking stage, the agent executes a system that performs a number of stochastic simulations (k) of future moves and then updates its estimate of how good each action is, i.e., how likely it is to lead to a winning state. In the acting stage, the agent takes the action with the highest estimated value. The agent attains a utility U based on whether it wins or loses the game. The unbounded agent would simply choose the number of simulations k that maximizes expected utility: E[U | k]. However, the bounded agent incurs costs for acting and thinking. We assume that the cost of acting is constant: c_a. The cost of executing a system is linear in the number of simulations it performs: k · c_e, where c_e is the cost of a single simulation. The bounded agent has to optimize a trade-off between its utility U and the costs of acting and thinking:
$E\big[\, U - (c_a + k \cdot c_e)\, N \,\big|\, k \,\big]$ , (9)
where N is the number of turns until the game ends. For consistency, we can reparameterize this using r_e = c_a / c_e, the ratio between the cost of acting and the cost of thinking, and without loss of generality we can let c_a = 1. Equation 9 then simplifies to
$B(k, r_e) := E\!\left[\, U - \left(1 + \frac{k}{r_e}\right) N \,\middle|\, k \,\right]$ . (10)
The optimal system for the agent to choose, given a fixed value of r_e, is $k^*(r_e) = \arg\max_k B(k, r_e)$. The optimal set of cognitive systems M out of all possible systems T for strategic interaction is
$M^* = \arg\max_{M \subset T} E_{r_e}\!\left[ \max_{k \in M} B(k, r_e) \right] - \frac{|M|}{r_m}$ . (11)
In this case, the expectation is taken over r_e, as the goal is to find the set of systems that is optimal across all problems in the environment. In our simulations, the game is played on a 9 × 9 board. U is 500 if the agent wins, 250 if the game ends in a draw, and 0 if the agent loses. The opponent also runs MCTS, with 5 simulations, to decide its moves. E[U | k] and E[N | k] are estimated using simulation (see Figure 6). For computational tractability, the possible numbers of simulations we consider are T = {5, 10, . . . , 50}. \n Results As in the previous tasks, the optimal number of systems depends on the variability of the environment and the difficulty of selecting between multiple systems (Figure 7). As the cost of metareasoning increases, the optimal number of systems decreases and the bounded-optimal agent comes to reason less and less. By contrast, the optimal number of systems increases with the variability of the environment. Furthermore, when the optimal number of systems is two, the difference between the amount of reasoning performed by the two systems increases as the environment becomes more variable (Table 3).
\n Table 3. The optimal set of cognitive systems (M) for strategic reasoning in the game of Go (Simulation 3), depending on the number of systems (|M|) and the variability of the environment (Var(r_e)), for E[r_e] = 10. Entries list the numbers of simulations in the optimal set (e.g., 10, 20, 30, 50); (*) marks set sizes that do not provide a noticeable increase in utility over fewer systems.
In conclusion, the findings presented in this section suggest that the kind of cognitive architecture that is bounded-optimal for simple decisions and planning (i.e., two systems with opposite speed-accuracy tradeoffs) is also optimal for reasoning about more complex problems, such as strategic interaction in games.
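The same subset search applies to Equations 10 and 11 once E[U | k] and E[N | k] have been estimated by self-play (as in Figure 6). In the sketch below the candidate rollout counts and the 0-500 utility scale follow the description above, but the performance tables themselves are invented placeholders, as are the Gamma parameters.

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(3)

candidates = (5, 10, 20, 30, 50)                     # candidate numbers of MCTS rollouts (T)
# Invented placeholder tables; in the simulation these are estimated by self-play.
E_U = {5: 250, 10: 330, 20: 400, 30: 440, 50: 470}   # expected game outcome (0-500 scale)
E_N = {5: 55, 10: 50, 20: 45, 30: 42, 50: 40}        # expected game length in moves

def B(k, r_e):
    """Bounded value of thinking with k rollouts (Equation 10, with c_a = 1)."""
    return E_U[k] - (1.0 + k / r_e) * E_N[k]

def optimal_go_systems(r_m=1000.0, alpha=2.0, beta=50.0, n_env=2_000, max_size=4):
    r_e = 1.0 + rng.gamma(alpha, beta, size=n_env)
    best = (None, -np.inf)
    for size in range(1, max_size + 1):
        for M in combinations(candidates, size):
            value = np.mean([max(B(k, re) for k in M) for re in r_e]) - len(M) / r_m
            if value > best[1]:
                best = (M, float(value))
    return best

print(optimal_go_systems())
```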
Simulation 4: Multi-alternative risky choice Decision-making under risk is another domain in which dual-process theories abound (e.g., Steinberg, 2010; Mukherjee, 2010; Kahneman & Frederick, 2007; Figner, Mackinlay, Wilkening, & Weber, 2009), and the dual-process perspective was inspired in part by Kahneman and Tversky's ground-breaking research program on heuristics and biases (Kahneman, Slovic, & Tversky, 1982). Consistent with our resource-rational framework, previous research has revealed that people make risky decisions by arbitrating between fast and slow decision strategies in an adaptive and flexible manner (Payne et al., 1993). When making decisions between risky gambles such as those shown in Figure 8, people adapt not only how much they think but also how they think about what to do. Concretely, people have been shown to use different strategies for different types of decision problems (Payne et al., 1988). For instance, when some outcomes are much more probable than others, people seem to rely on fast-and-frugal heuristics (Gigerenzer & Goldstein, 1996) like Take-The-Best, which decides solely based on the most probable outcome that distinguishes between the alternatives and ignores all other possible outcomes. By contrast, when all outcomes are equally likely, people seem to integrate the payoffs of multiple outcomes into an estimate of the expected value of each gamble. Previous research has proposed at least ten different decision strategies that people might use when choosing between risky prospects (Payne et al., 1988; Thorngate, 1980; Gigerenzer & Selten, 2002). Yet it has remained unclear how many decision strategies a single person would typically consider (Scheibehenne, Rieskamp, & Wagenmakers, 2013). Here, we investigate how many decision strategies a boundedly optimal metareasoning agent should use in a multi-alternative risky-choice environment similar to the experiments by Payne et al. Unlike in the previous simulations, these strategies differ not only in how much computation they perform but also in which information they use and how they use it. \n Methods We investigated the size of the optimal subset of the ten decision strategies proposed by Payne et al. as a function of the metareasoning cost and the variability of the relative cost of reasoning. These strategies were the lexicographic heuristic (which corresponds to Take-The-Best), the semi-lexicographic heuristic, the weighted-additive strategy, choosing at random, the equal-weight heuristic, elimination by aspects, the maximum confirmatory dimensions heuristic, satisficing, and two combinations of elimination by aspects with the weighted-additive strategy and the maximum confirmatory dimensions heuristic, respectively. Concretely, we determined the optimal number of decision strategies for 5 × 30 environments that differed in the mean and the standard deviation of the distribution of r_e. The means were 10, 50, 100, 500, and 1000, and the standard deviations were linearly spaced between 10^−3 and 3 times the mean. For each environment, four thousand decision problems were generated at random. Each problem presented the agent with a choice between five gambles with five possible outcomes. The payoffs for each outcome-gamble pair were drawn from a uniform distribution on the interval [0, 1000].
The outcome probabilities differed randomly from problem to problem, except that the second-highest probability was always at most 25% of the highest probability, the third-highest probability was always at most 25% of the second-highest probability, and so on. Based on previous work on how people select cognitive strategies, our simulations assume that people generally select the decision strategy that achieves the best possible speed-accuracy tradeoff. This strategy can be formally defined as the heuristic s with the highest value of computation (VOC). Formally, for each decision problem d, an agent equipped with strategies S should choose the strategy
$s^*(d, S, r_e) = \arg\max_{s \in S} \mathrm{VOC}(s, d)$ . (12)
Following this work, we define a strategy's VOC as its decision quality minus its decision cost. We measure decision quality by the ratio of the expected utility of the chosen option over the expected utility of the best option, and we measure decision cost by the opportunity cost of the time required to execute the strategy. Formally, the VOC of making the decision d using the strategy s is
$\mathrm{VOC}(s, d) = \frac{E[u(s(d)) \mid d]}{\max_a E[u(a) \mid d]} - \frac{1}{r_e} \cdot n_{\mathrm{computations}}(s, d)$ , (13)
where s(d) is the alternative that strategy s chooses in decision d, 1/r_e is the cost per decision operation, and n_computations(s, d) is the number of cognitive operations the strategy performs in this decision process. To determine the number of cognitive operations, we decomposed each strategy into a sequence of elementary information-processing operations (Johnson & Payne, 1985) in the same way as Lieder and Griffiths (2017) did, and counted how many of those operations each strategy performed on any given decision problem. We estimated the optimal set of strategies,
$S^* = \arg\max_{S} E_{P(d)}\!\left[ \mathrm{VOC}(s^*(d, S, r_e), d) \right] - \frac{1}{r_m} \cdot |S|$ , (14)
by approximating the expected value in Equation 14 by averaging the VOC over 4000 randomly generated decision problems. The resulting noisy estimates were smoothed with a Gaussian kernel with standard deviation 20. The optimal set of cognitive strategies was then determined from the smoothed VOC estimates for each combination of parameters. Finally, the number of strategies in the optimal sets was smoothed with a Gaussian kernel with standard deviation 10, and the smoothed values were rounded. \n Results As shown in Figure 9, we found that the optimal number of strategies increased with the variability of the environment and decreased with the metareasoning cost. As in the previous simulations, the optimal number of decision systems increased from 1 for high metareasoning cost and low variability to 2 for moderate metareasoning cost and variability, and increased further with decreasing metareasoning cost and increasing variability. There was again a sizeable range of plausible values for which the optimal number of decision systems was 2. For extreme combinations of very low time cost and very high variability, the optimal number of systems increased to up to 5. Although Figure 9 only shows the results for E[r_e] = 100, the results for E[r_e] = 10, 50, 500, and 1000 were qualitatively the same. In this section, we applied our analysis to a more realistic setting than in the previous sections. It used psychologically plausible decision strategies that were proposed to explain human decision-making, rather than generic algorithms. These strategies differed not only in how much reasoning they perform but also in how they reason about the problem.
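The VOC-based selection in Equations 12 and 13 can be illustrated with a few toy strategies. The three strategies below, and the way their elementary operations are counted, are simplified stand-ins for the ten strategies and operation counts of Payne et al.; they are assumptions made for this sketch, not the strategies' actual implementations or parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

n_gambles, n_outcomes = 5, 5
payoffs = rng.uniform(0, 1000, size=(n_gambles, n_outcomes))
p = np.sort(rng.dirichlet(np.ones(n_outcomes)))[::-1]    # outcome probabilities, descending

def weighted_additive(payoffs, p):
    ev = payoffs @ p                                      # integrate all payoffs
    return int(np.argmax(ev)), 2 * payoffs.size           # assume ~2 operations per cell

def take_the_best(payoffs, p):
    best_outcome = int(np.argmax(p))                      # only the most probable outcome
    return int(np.argmax(payoffs[:, best_outcome])), payoffs.shape[0] + 1

def choose_at_random(payoffs, p):
    return int(rng.integers(payoffs.shape[0])), 1

def voc(strategy, payoffs, p, r_e):
    """Equation 13: relative decision quality minus the cost of the operations used."""
    choice, n_ops = strategy(payoffs, p)
    ev = payoffs @ p
    return ev[choice] / ev.max() - n_ops / r_e

for r_e in [10, 100, 1000]:
    best = max([weighted_additive, take_the_best, choose_at_random],
               key=lambda s: voc(s, payoffs, p, r_e))
    print(r_e, best.__name__)
```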
For this setting, where the environment comprised different kinds of problems favoring different strategies, one might expect that the optimal number of systems would be much larger than in the previous simulations. While we did find that having 3-5 systems became optimal for a larger range of metareasoning costs and variabilities, it is remarkable that having two systems was still bounded-optimal for a sizeable range of reasonable parameters. This finding suggests that our results might generalize to the much more complex problems people have to solve and people's much more sophisticated cognitive mechanisms. \n General Discussion We found that across four different tasks the optimal number and diversity of cognitive systems increases with the variability of the environment but decreases with the cost of predicting each system's performance. Each additional system tends to provide at most marginal improvements; so the optimal solutions tend to favor small numbers of cognitive systems, with two systems being optimal across a wide range of plausible values for metareasoning cost and variability. Furthermore, when the optimal number of cognitive systems was two, then these two systems tended to lie on two extremes in terms of time and accuracy. One of them was much faster but more error-prone whereas the second one was slower but more accurate. This might be why the human mind too appears to contain two opposite subsystems within itself -one that is fast but fallible and one that is slow but accurate. In other words, this mental architecture might have evolved to enable people to quickly adapt how they think and decide to the demands of different situations. Our analysis thereby provides a normative justification for dual-process theories. The emerging connection between normative modeling and dual-process theories is remarkable because these approaches correspond to opposite poles in the debate about human rationality (Stanovich, 2011) . In this debate, some researchers interpreted the existence of a fast, error-prone cognitive system whose heuristics violate the rules of logic, probability theory, and expected utility theory as a sign of human irrationality (Ariely, 2009; Marcus, 2009) . By contrast, our analysis suggests that having a fast but fallible cognitive system in addition to a slow but accurate system may be the best possible solution. This implies that the variability, fallibility, and inconsistency of human judgment that result from people's switching between System 1 and System 2 should not be interpreted as evidence for human irrationality, because it might reflect the rational use of limited cognitive resources. \n Limitations One limitation of our analysis is that the cognitive systems we studied are simple algorithms that abstract away most of the complexity and sophistication of the human mind. A second limitation is that all of our tasks were drawn from the domains of decision-making and reasoning. However, our conclusion only depends on the plausible assumption that the cost of deciding which cognitive system to use increases with the number of systems. As long as this is the case, the optimal number of cognitive systems should still depend on the tradeoff between metareasoning cost and cognitive flexibility studied above, even though its exact value may be different. 
Thus, our key finding that the optimal number of systems increases with the variability of the environment and decreases with the metareasoning cost is likely to generalize to other tasks and to the much more complex architecture of the human mind. Third, our analysis assumed that the mind is divided into discrete cognitive systems to make the adaptive control of cognition tractable. While this makes selecting cognitive operations much more efficient, we cannot prove that it is bounded-optimal to approximate rational metareasoning in this way. Research in artificial intelligence suggests that there might be other ways to make metareasoning tractable. One alternative strategy is the meta-greedy approximation (Russell & Wefald, 1991a; Hay et al., 2012), which selects computations under the assumption that the agent will act immediately after executing the first computation. According to the directed cognition model (Gabaix & Laibson, 2005), this mechanism also governs the sequence of cognitive operations people employ to make economic decisions. This model predicts that people will always stop thinking when their decision cannot be improved by a single cognitive operation, even when significant improvements could be achieved by a series of two or more cognitive operations. This makes us doubt that the meta-greedy heuristic would be sufficient to account for people's ability to efficiently solve complex problems, such as puzzles, where progress is often non-linear. This might be why, when Gabaix, Laibson, Moloche, and Weinberg (2006) applied their model to multi-attribute decisions, they let it choose between macro-operators rather than individual computations. Interestingly, those macro-operators are similar to the cognitive systems studied here in that they perform different amounts of computation. Thus, the directed cognition model does not appear to eliminate the need for sub-systems, but merely proposes a mechanism for how the mind might select and switch back and forth between them. Consistent with our analysis, the time and effort required by this mechanism increase linearly with the number of cognitive systems. While research in artificial intelligence has identified a few additional approximations to rational metareasoning, those are generally tied to specific computational processes and problems (Russell & Wefald, 1989; Lin et al., 2015; Vul et al., 2014) and would be applicable to only a small subset of people's cognitive abilities. \n Relation to previous work The work presented here continues the research programs of bounded rationality (Simon, 1956, 1982), rational analysis (Anderson, 1990), and resource-rational analysis (Griffiths et al., 2015) in seeking to understand how the mind is adapted to the structure of the environment and to its limited computational resources. While previous work has applied the idea of bounded optimality to derive optimal cognitive strategies for an assumed cognitive architecture (Lewis et al., 2014) and to the arbitration between assumed cognitive systems (Keramati et al., 2011), the work presented here derived the cognitive architecture itself. By suggesting that the human mind's cognitive architecture might be bounded-optimal, our analysis complements and completes previous arguments suggesting that people make rational use of the cognitive architecture they are equipped with (Lewis et al., 2014; Griffiths et al., 2015; Lieder, Griffiths, & Hsu, 2018; Lieder, Griffiths, Huys, & Goodman, 2018a; Tsetsos et al., 2016; Howes et al., 2016).
Taken together, these arguments suggest that people might be resource-rational after all. \n Conclusion and Future Directions A conclusive answer to the question of whether it is boundedly optimal for humans to have two types of cognitive systems will require more rigorous estimates of the variability of the decision problems that people experience in their daily lives and precise measurements of how long it takes to predict the performance of a cognitive system. Regardless, our analysis suggests that the incoherence in human reasoning and decision-making is qualitatively consistent with the rational use of a bounded-optimal set of cognitive systems rather than a sign of irrationality. Perhaps more importantly, the methodology we developed in this paper makes it possible to extend resource-rational analysis from cognitive strategies to cognitive architectures. This new line of research offers a way to elucidate how the architecture of the mind is shaped by the structure of the environment and the fundamental limits of the human brain. \n Appendix B Sequential Decision-Making Here, we provide a derivation of how to simplify the expression for the optimal planning system in Equation 6. Since the cost of each thinking system is linear in the number of simulations, i.e., f(t) = c_e · k, we can replace f(t) with c_e · k in the expectation in Equation 6. Since the cognitive systems are distinguished by the number of simulations they perform, we can condition on the number of simulations k instead. Therefore, the expectation in Equation 6 becomes
$E\!\left[ \sum_{i=0}^{N} \big( c(s_i, a_i) + c_e \cdot k \big) \,\middle|\, k \right]$ .
The cost of acting from non-goal states is constant, i.e., c(s_i, a_i) = c_a. Therefore, the expectation simplifies to
$E\!\left[ \sum_{i=0}^{N} \big( c_a + c_e \cdot k \big) \,\middle|\, k \right] = E\big[ N (c_a + c_e \cdot k) \,\big|\, k \big]$ .
We can reparameterize using r_e = c_a / c_e by substituting c_e with c_a / r_e:
$E\Big[ N \Big( c_a + \tfrac{c_a}{r_e} k \Big) \,\Big|\, k \Big] = c_a \, E\!\left[ \left( 1 + \tfrac{k}{r_e} \right) N \,\middle|\, k \right]$ .
We now arrive at Equation 7 by picking the cognitive system (number of simulations) that minimizes the above quantity:
$k^* = \arg\min_k \, c_a \, E\!\left[ \left( 1 + \tfrac{k}{r_e} \right) N \,\middle|\, k \right] = \arg\min_k \, E\!\left[ \left( 1 + \tfrac{k}{r_e} \right) N \,\middle|\, k \right]$ .
\n Figure 1. The reward rate in two-alternative forced choice (Simulation 1) usually peaks for a moderately small number of decision systems. The plot shows the expected utility per unit time of the optimal choice of systems, M, as a function of the number of systems (|M|). As the costliness of metareasoning, 1/r_m, decreases, the optimal number of systems increases. In this example E[r_e] = 100 and σ(r_e) = 100.
\n Table 1. The optimal set of cognitive systems (M) for the 2AFC task of Simulation 1 as a function of the number of systems (|M|) and the variability of the environment (Var(r_e)), for E[r_e] = 100 and r_m = 1000. Any set of four systems that included 3, 5, and 7 was optimal.
\n Figure 2. Performance of agents with different numbers of decision mechanisms in the 2AFC problem of Simulation 1. The plot shows the optimal number of decision systems as a function of the standard deviation of r_e and 1/r_m. In this example E[r_e] = 10.
\n Figure 3. Performance of agents with different numbers of cognitive systems in planning under uncertainty (Simulation 2). The plot shows the number of actions it takes an agent to reach the goal as a function of the number of simulated paths before each action. For 0 simulated paths the expected number of actions was 500 (the maximum allowed).
\n Figure 4. The expected cost incurred is a U-shaped function of the number of planning systems in Simulation 2. As the cost of selecting a planning system (1/r_m) decreases, the optimal number of systems increases. The expected cost with 0 systems was 500, thus 1 system provided the greatest reduction in cost. In this example E[r_e] = 100, Var(r_e) = 10^5, and c_a = 1.
\n Figure 5. The optimal number of systems for planning under uncertainty (Simulation 2) as a function of the standard deviation of r_e and r_m, for E[r_e] = 100.
\n Figure 7. The optimal number of systems for strategic reasoning in the game of Go (Simulation 3) as a function of the standard deviation of r_e and 1/r_m. E[r_e] = 100 in this case.
\n Figure 8. Illustration of the Mouselab paradigm used to study multi-alternative risky choice.
\n Figure 9. The optimal number of strategies for multi-alternative risky choice (Simulation 4) as a function of the standard deviation of r_e and r_m, for E[r_e] = 100.
\n Table 2. The optimal set of cognitive systems (M) for planning under uncertainty (Simulation 2) as a function of the number of systems (|M|) and the variability of the environment (Var(r_e)), with E[r_e] = 100.
|M|    Var(r_e) = 10^3    Var(r_e) = 10^4    Var(r_e) = 10^5
1      9                  7                  7
2      7, 9               4, 7               2, 7
3      1, 7, 9            4, 7, 9            1, 4, 9
4      1, 2, 7, 9         2, 4, 7, 9         1, 4, 7, 9
\n Footnotes
1. If there is a tie, the agent picks either a_0 or a_1 with equal probability. However, for odd k, the agent's expected utility after drawing k samples, E_θ[U | k], is equal to its expected utility after drawing k + 1 samples, E_θ[U | k + 1]. Thus, we can restrict ourselves to odd k, where no ties are possible.
2. For all experiments reported in this paper, we found that alternative values of E[r_e] or Var(r_e) did not change the qualitative conclusions, unless otherwise indicated.
3. This observation holds until the variance becomes extremely high (≈ 10^7 for Table 2), in which case both systems move toward lower values (Table 2). However, this is not a general problem but merely a quirk of the skewed distribution we used for r_e.
Our findings thereby provide a rational reinterpretation of dual-process theories.", "id": "22c9335fa97882960d9459139ca55d2c"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "Death and pain of a digital brain.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "Sullins2011_Article_IntroductionOpenQuestionsInRob.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Nate Soares", "Benja Fallenstein", "Eliezer Yudkowsky", "Stuart Armstrong"], "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "Corrigibility.tei.xml", "abstract": "As artificially intelligent systems grow in intelligence and capability, some of their available options may allow them to resist intervention by their programmers. We call an AI system \"corrigible\" if it cooperates with what its creators regard as a corrective intervention, despite default incentives for rational agents to resist attempts to shut them down or modify their preferences. We introduce the notion of corrigibility and analyze utility functions that attempt to make an agent shut down safely if a shutdown button is pressed, while avoiding incentives to prevent the button from being pressed or cause the button to be pressed, and while ensuring propagation of the shutdown behavior as it creates new subsystems or self-modifies. While some proposals are interesting, none have yet been demonstrated to satisfy all of our intuitive desiderata, leaving this simple problem in corrigibility wide-open.", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Stuart Armstrong", "Kaj Sotala", "Sea ´n", "S O ´he ´igeartaigh"], "title": "The errors, insights and lessons of famous AI predictions -and what they mean for the future", "text": "Introduction Predictions about the future development of artificial intelligence (AI 1 ) are as confident as they are diverse. Starting with Turing's initial estimation of a 30% pass rate on Turing test by the year 2000 (Turing, 1950) , computer scientists, philosophers and journalists have never been shy to offer their own definite prognostics, claiming AI to be impossible (Jacquette, 1987) , just around the corner (Darrach, 1975) or anything in between. What should one think of this breadth and diversity of predictions? Can anything of value be extracted from them, or are they to be seen as mere entertainment or opinion? The question is an important one, because true AI would have a completely tranrsformative impact on human society -and many have argued that it could be extremely dangerous (Minsky, 1984; Yampolskiy, 2012 , Yudkowsky, 2008 . Those arguments are predictions in themselves, so an assessment of predictive reliability in the AI field is a very important project. It is in humanity's interest to know whether these risks are reasonable, and, if so, when and how AI is likely to be developed. Even if the risks turn out to be overblown, simply knowing the reliability of general AI predictions will have great social and economic consequences. The aim of this paper is thus to construct a framework and tools of analysis that allow for the assessment of predictions, of their quality and of their uncertainties. 
Though specifically aimed at AI, these methods can be used to assess predictions in other contentious and uncertain fields. This paper first proposes a classification scheme for predictions, dividing them into four broad categories and analysing what types of arguments are used (implicitly or explicitly) to back them up. Different prediction types and methods result in very different performances, and it is critical to understand this varying reliability. To do so, this paper builds a series of tools that can be used to clarify a prediction, reveal its hidden assumptions and make use of empirical evidence whenever possible. Since expert judgement is such a strong component of most predictions, assessing the reliability of this judgement is a key component. Previous studies have isolated the task characteristics in which experts tend to have good judgement -and the results of that literature strongly imply that AI predictions are likely to be very unreliable, at least as far as timeline predictions ('date until AI') are concerned. That theoretical result is born out in practice: timeline predictions are all over the map, with no pattern of convergence, and no visible difference between expert and non-expert predictions. These results were detailed in a previous paper (Armstrong & Sotala, 2012) , and are summarised here. The key part of the paper is a series of case studies on five of the most famous AI predictions: the initial Dartmouth conference, Dreyfus's criticism of AI, Searle's Chinese room paper, Kurzweil's predictions in the Age of Spiritual Machines, and Omohundro's AI drives. Each prediction is analysed in detail, using the methods developed earlier. The Dartmouth conference proposal was surprisingly good -despite being wildly inaccurate, it would have seemed to be the most reliable estimate at the time. Dreyfus's work was very prescient, despite his outsider status, and could have influenced AI development for the better -had it not been so antagonistic to those in the field. Some predictions could be extracted even from Searle's non-predictive Chinese room thought experiment, mostly criticisms of the AI work of his time. Kurzweil's predictions were tested with volunteer assessors, and were shown to be surprisingly good -but his self-assessment was very inaccurate, throwing some doubt on his later predictions. Finally Omohundro's predictions were shown to be much better as warning for what could happen to general AIs, than as emphatic statements of what would necessarily happen. 2 The key lessons learned are of the general overconfidence of experts, the possibility of deriving testable predictions from even the most theoretical of papers, the superiority of modelbased over judgement-based predictions and the great difficulty in assessing the reliability of predictors -by all reasonable measures, the Dartmouth conference predictions should have been much more reliable than Dreyfus's outside predictions, and yet reality was completely opposite. \n Taxonomy of predictions \n Prediction types There will never be a bigger plane built. Boeing engineer on the 247, a twin engine plane that held ten people. A fortune teller talking about celebrity couples, a scientist predicting the outcome of an experiment, an economist pronouncing on next year's GDP figures -these are canonical examples of predictions. There are other types of predictions, though. Conditional statementsif X happens, then so will Y -are also valid, narrower predictions. 
Impossibility results are also a form of prediction. For instance, the law of conservation of energy gives a very broad prediction about every single perpetual machine ever made: to wit, that they will never work. The common thread is that all these predictions constrain expectations of the future. If one takes the prediction to be true, one expects to see different outcomes than if one takes it to be false. This is closely related to Popper's notion of falsifiability (Popper, 1934) . This paper takes every falsifiable statement about future AI to be a prediction. For the present analysis, predictions about AI will be divided into four types: (1) Timelines and outcome predictions. These are the traditional types of predictions, giving the dates of specific AI milestones. Examples: An AI will pass the Turing test by 2000 (Turing, 1950) ; within a decade, AIs will be replacing scientists and other thinking professions (Hall, 2013) . (2) Scenarios. These are a type of conditional predictions, claiming that if the conditions of the scenario are met, then certain types of outcomes will follow. Example: If someone builds a human-level AI that is easy to copy and cheap to run, this will cause mass unemployment among ordinary humans (Hanson, 1994) . (3) Plans. These are a specific type of conditional prediction, claiming that if someone decides to implement a specific plan, then they will be successful in achieving a particular goal. Example: AI can be built by scanning a human brain and simulating the scan on a computer (Sandberg, 2008) . ( 4 ) Issues and metastatements. This category covers relevant problems with (some or all) approaches to AI (including sheer impossibility results), and metastatements about the whole field. Examples: An AI cannot be built without a fundamental new understanding of epistemology (Deutsch, 2012) ; generic AIs will have certain (potentially dangerous) behaviours (Omohundro, 2008) . There will inevitably be some overlap between the categories, but the division is natural enough for this paper. \n Prediction methods Just as there are many types of predictions, there are many ways of arriving at them -crystal balls, consulting experts, constructing elaborate models. An initial review of various AI predictions throughout the literature suggests the following loose schema for prediction methods 3 : (1) Causal models (2) Non-causal models (3) The outside view (4) Philosophical arguments (5) Expert judgement (6) Non-expert judgement Causal models are a staple of physics and the harder sciences: given certain facts about the situation under consideration (momentum, energy, charge, etc.), a conclusion is reached about what the ultimate state will be. If the facts were different, the end situation would be different. Outside of the hard sciences, however, causal models are often a luxury, as the underlying causes are not well understood. Some success can be achieved with non-causal models: without understanding what influences what, one can extrapolate trends into the future. Moore's law is a highly successful non-causal model (Moore, 1965) . In the outside view, specific examples are grouped together and claimed to be examples of the same underlying trend. This trend is used to give further predictions. For instance, one could notice the many analogues of Moore's law across the spectrum of computing (e.g. 
in number of transistors, size of hard drives, network capacity, pixels per dollar), note that AI is in the same category, and hence argue that AI development must follow a similar exponential curve (Kurzweil, 1999) . Note that the use of the outside view is often implicit rather than explicit: rarely is it justified why these examples are grouped together, beyond general plausibility or similarity arguments. Hence detecting uses of the outside view will be part of the task of revealing hidden assumptions (see Section 3.2). There is evidence that the use of the outside view provides improved prediction accuracy, at least in some domains (Kahneman & Lovallo, 1993) . Philosophical arguments are common in the field of AI. Some are simple impossibility statements: AI is decreed to be impossible, using arguments of varying plausibility. More thoughtful philosophical arguments highlight problems that need to be resolved in order to achieve AI, interesting approaches for doing so and potential issues that might emerge if AIs were to be built. Many of the predictions made by AI experts are not logically complete: not every premise is unarguable, not every deduction is fully rigorous. In many cases, the argument relies on the expert's judgement to bridge these gaps. This does not mean that the prediction is unreliable: in a field as challenging as AI, judgement, honed by years of related work, may be the best tool available. Non-experts cannot easily develop a good feel for the field and its subtleties, so should not confidently reject expert judgement out of hand. Relying on expert judgement has its pitfalls, however, as will be seen in Sections 3.4 and 4. Finally, some predictions rely on the judgement of non-experts, or of experts making claims outside their domain of expertise. Prominent journalists, authors, CEO's, historians, physicists and mathematicians will generally be no more accurate than anyone else when talking about AI, no matter how stellar they are in their own field (Kahneman, 2011) . Predictions often use a combination of these methods. For instance, Ray Kurzweil's 'Law of time and chaos' uses the outside view to group together evolutionary development, technological development, and computing into the same category, and constructs a causal model predicting time to the 'Singularity' (Kurzweil, 1999) (see Section 5.4 ). Moore's law (noncausal model) is a key input to this law, and Ray Kurzweil's expertise is the law's main support (see Section 5.4) . The case studies of Section 5 have examples of all of these prediction methods. \n A toolbox of assessment methods The purpose of this paper is not simply to assess the accuracy and reliability of past AI predictions. Rather, the aim is to build a 'toolbox' of methods that can be used to assess future predictions, both within and outside the field of AI. The most important features of the toolbox are ways of extracting falsifiable predictions, ways of clarifying and revealing assumptions, ways of making use of empirical evidence when possible and ways of assessing the reliability of expert judgement. \n Extracting falsifiable predictions As stated in Section 2.1, predictions are taken to be falsifiable/verifiable statements about the future of AI. 4 This is very important to put the predictions into this format. Sometimes they already are, but at other times it is not so obvious: then the falsifiable piece must be clearly extracted and articulated. 
Sometimes it is ambiguity that must be overcome: when an author predicts an AI 'Omega point' in 2040 (Schmidhuber, 2007), it is necessary to read the paper with care to figure out what counts as an Omega point and (even more importantly) what does not. At the extreme, some philosophical arguments - such as the Chinese room argument (Searle, 1980) - are often taken to have no falsifiable predictions whatsoever. These thought experiments are supposed to establish purely philosophical points. Predictions can often be extracted from even the most philosophical of arguments, however - or, if not from the argument itself, then from the intuitions justifying the argument. Section 5.3 demonstrates how the intuitions behind the Chinese room argument can lead to testable predictions.

Note that the authors of the arguments may disagree with the 'extracted' predictions. This is not necessarily a game breaker. The aim should always be to try to create useful verifiable predictions when possible, thus opening more of the extensive AI philosophical literature for predictive purposes. For instance, Lucas argues that AI is impossible because it could not recognise the truth of its own Gödel sentence 5 (Lucas, 1961). This is a very strong conclusion, and is dependent on Lucas's expert judgement: different philosophers would reach different conclusions from the same argument. Moreover, it isn't clear how the argument could be tested: what skills and abilities would we therefore expect to be impossible for an intelligent machine? The intuition behind it, however, seems to be that Gödel-like sentences pose real problems to the building of an AI, and hence one can extract the weaker empirical prediction: 'Self-reference will be a problem with advanced AIs'.

Care must be taken when applying this method: the point is to extract a useful falsifiable prediction, not to weaken or strengthen a reviled or favoured argument. The very first stratagems in Schopenhauer's The Art of Always Being Right (Schopenhauer, 1831) are to extend and overgeneralise the consequences of one's opponent's argument; conversely, one should reduce and narrow down one's own arguments. There is no lack of rhetorical tricks to uphold one's own position, but if one is truly after the truth, one must simply attempt to find the most reasonable falsifiable version of the argument; the truth-testing will come later.

This method often increases the prediction's uncertainty, in that it makes the prediction less restrictive (and less powerful) than it first seemed. For instance, Edmonds (2009), building on the 'No free lunch' results (Wolpert & Macready, 1995), demonstrates that there is no such thing as a universal intelligence: no intelligence that outperforms others in every circumstance. Initially this seems to rule out AI entirely; but when one analyses what this means empirically, one realises there is far less to it. An algorithm could still perform better than any human being in any realistic situation in our universe. So the initial impression, which was that the argument ruled out all futures with AIs in them, is now replaced by the realisation that the argument has barely put any constraints on the future at all.

Clarifying and revealing assumptions

The previous section was concerned with the prediction's conclusions. This section will instead be looking at its assumptions, and the logical structure of the argument or model behind it. The objective is to make the prediction as rigorous as possible.
This kind of task has been a staple of philosophy ever since the dialectic (Plato, 380 BC). Of critical importance is revealing hidden assumptions that went into the predictions. These hidden assumptions - sometimes called enthymematic gaps in the literature (Fallis, 2003) - are very important because they clarify where the true disagreements lie, and where the investigation needs to be focused to figure out the truth of the prediction. Too often, competing experts will make broad-based arguments that fly past each other. This makes choosing the right argument a matter of taste, prior opinions and admiration of the experts involved. If the argument can be correctly deconstructed, however, then the source of the disagreement can be isolated, and the issue can be decided on much narrower grounds - and it is much clearer whether the various experts have relevant expertise or not (see Section 3.4). The hidden assumptions are often implicit, so it is perfectly permissible to construct assumptions that the predictors were not consciously aware of using. The purpose is not to score points for one 'side' or the other, but always to clarify and analyse arguments and to find the true points of disagreement.

For illustration of the method, consider again the Gödel arguments mentioned in Section 3.1. The argument shows that formal systems of a certain complexity must be either incomplete (unable to see that their Gödel sentence is true) or inconsistent (proving false statements). This is contrasted with humans, who - allegedly - use meta-reasoning to know that their own Gödel statements are true. Also, humans are both inconsistent and able to deal with inconsistencies without a complete collapse of logic. 6 However, neither humans nor AIs are logically omniscient - they are not capable of instantly proving everything provable within their logical system. So this analysis demonstrates the hidden assumption in Lucas's argument: that the behaviour of an actual computer program running on a real machine is more akin to that of a logically omniscient formal agent than to a real human being. That assumption may be flawed or correct, but is one of the real sources of disagreement over whether Gödelian arguments rule out AI.

There is surprisingly little published on the proper way of clarifying assumptions, making this approach more an art than a science. If the prediction comes from a model, there are some standard tools available for clarification (Morgan & Henrion, 1990). Most of these methods work by varying parameters in the model and checking that this does not cause a breakdown in the prediction. This is more a check of the robustness of the model than of its accuracy, however.

Model testing and counterfactual resiliency

Causal models can be tested by analysing their assumptions. Non-causal models are much harder to test: what are the assumptions behind Moore's famous law (Moore, 1965), or Robin Hanson's model that humanity is due for another technological revolution, based on the timeline of previous revolutions (Hanson, 2008)? They both assume that a particular pattern will continue into the future, but why should this be the case? What grounds (apart from personal taste) does anyone have to endorse or reject them? The authors of this paper have come up with a putative way of testing the assumptions of such models.
It involves giving the model a counterfactual resiliency check: imagining that world history had happened slightly differently, and checking whether the model would have been true in those circumstances. The purpose is to set up a tension between what the model says and known (or believed) facts about the world. This will either refute the model, refute the believed facts or reveal implicit assumptions the model is making.

To illustrate, consider Robin Hanson's model. The model posits that humanity has gone through a series of radical transformations (in brain size, hunting, agriculture, industry), and that these form a pattern that can be used to predict the arrival date and speed of the next revolution, which is argued to be an AI revolution. 7 This is a major use of the outside view, and it implicitly assumes that most things in human historical development are unimportant in comparison with these revolutions. A counterfactual resiliency test can be carried out: within the standard understanding of history, it seems very plausible that these revolutions could have happened at very different times and paces. Humanity could have been confined to certain geographical locations by climatic or geographical factors, thus changing the dates of the hunting and agricultural revolutions. The industrial revolution could plausibly have started earlier, with the ancient Greeks (where it would likely have been slower), or at a later date, had Europe been deprived of large coal reserves. Finally, if AI were possible, it certainly seems that contingent facts about modern society could make it much easier or much harder to reach. 8 Thus the model seems to be in contradiction with the standard understanding of social and technological development, or dependent on contingent factors to a much larger extent than it seemed. In contrast, Moore's law seems much more counterfactually resilient: assuming that the current technological civilisation endured, it is hard to find any reliable ways of breaking the law. One can argue plausibly that the free market is needed for Moore's law to work; 9 if that is the case, this method has detected an extra hidden assumption of the model.

This method is new, and will certainly be refined in the future. Again, the purpose of the method is not to rule out certain models, but to find the nodes of disagreement. In this paper, it is used in analysing Kurzweil's prediction in Section 5.4.

More uncertainty

Clarifying assumptions often ends up weakening the model, and hence increasing uncertainty (more possible futures are compatible with the model than was thought). Revealing hidden assumptions has the same effect: the model now has nothing to say in those futures where the assumptions turn out to be wrong. Thus the uncertainty will generally go up for arguments treated in this fashion. In compensation, of course, the modified prediction is more likely to be true.

Empirical evidence and the scientific method

The gold standard in separating true predictions from false ones must always be empirical evidence. The scientific method has proved to be the best way of disproving false hypotheses, and should be used whenever possible, always preferred over expert opinion or unjustified models. Empirical evidence is generally lacking in the AI prediction field, however. Since AI predictions concern the existence and properties of a machine that has not yet been built, and for which detailed plans do not exist, there is little opportunity for the hypothesis-prediction-testing cycle.
This should indicate the great challenges in the field, with AI predictions being considered more uncertain than those of even the 'softest' sciences, which have access to some form of empirical evidence. Some AI predictions approximate the scientific method better than others. The whole brain emulation model, for instance, makes testable predictions about the near and medium future (Sandberg, 2008). Moore's law is a prediction backed up by a lot of scientific evidence, and connected to some extent with AI. Many predictors (e.g. Kurzweil) make partial predictions on the road towards AI; these can and should be assessed as evidence of the expert's general predictive success. Though not always possible, efforts should be made to connect general predictions with some near-term empirical evidence.

The reliability of expert judgement

Reliance on experts is nearly unavoidable in AI prediction. Timeline predictions are often explicitly based on experts' judgement. 10 Plans also need experts to come up with them and judge their credibility. And unless every philosopher agrees on the correctness of a particular philosophical argument, one is dependent to some degree on the philosophical judgement of the author. Using all the methods of the previous section, one can refine and caveat a prediction, find the nodes of disagreement, back it up with empirical evidence whenever possible, and thus clearly highlight the points where one needs to rely on expert opinion. What performance should then be expected from the experts?

There have been several projects over the last few decades looking into expert performance (Kahneman & Klein, 2009; Shanteau, 1992). The main finding is that it is chiefly the nature of the task that determines the quality of expert performance, rather than other factors. Table 1, reproduced from Shanteau's paper, lists the characteristics that lead to good or poor expert performance. Not all of these are directly applicable to the current paper, and hence will not be explained in detail. One very important factor is whether experts get feedback, preferably immediately. When feedback is unavailable or delayed, or the environment is not one that gives good feedback, then expert performance drops precipitously (Kahneman, 2011; Kahneman & Klein, 2009). Generally, AI predictions have little possibility for any feedback from empirical data (see Section 3.3), especially not rapid feedback.

The task characteristics of Table 1 apply to both the overall domain and the specific task. Though AI prediction is strongly in the right column, any individual expert can improve their performance by moving their approach into the left column - for instance by decomposing the problem as much as possible. Where experts fail, better results can often be achieved by asking the experts to design a simple algorithmic model and then using the model for predictions (Grove, Zald, Lebow, Snitz, & Nelson, 2000); a sketch of such a model is given below. Thus the best types of predictions are probably those coming from well-decomposed models.

Expert disagreement is a major problem in making use of their judgement. If experts in the same field disagree, objective criteria are needed to figure out which group is correct. 11 If experts in different fields disagree, objective criteria are needed to figure out which field is the most relevant. Personal judgement cannot be used, as there is no evidence that people are skilled at reliably choosing between competing experts.
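To illustrate the point above about well-decomposed models, the following sketch encodes a hypothetical expert's decomposed judgement as a fixed linear scoring rule that is then applied mechanically. The factors, weights and scores are invented purely for illustration and are not taken from any cited study:

```python
# Hypothetical decomposition of an expert's judgement into a fixed linear model.
# The factors, weights and scores are illustrative assumptions, not data from this paper.

FACTORS = {
    "hardware_available": 0.2,     # weight: how much raw computation matters
    "algorithms_understood": 0.5,  # weight: how well the needed insights are understood
    "funding_and_effort": 0.3,     # weight: how much sustained effort is being applied
}

def project_score(scores: dict) -> float:
    """Combine 0-1 factor scores with fixed weights; higher means 'closer to success'."""
    return sum(FACTORS[name] * scores[name] for name in FACTORS)

# Once the model is fixed, it is applied mechanically to each case,
# rather than re-consulting the expert's holistic impression every time.
hypothetical_project = {
    "hardware_available": 0.9,
    "algorithms_understood": 0.2,
    "funding_and_effort": 0.6,
}
print(round(project_score(hypothetical_project), 2))  # 0.46
```

Once the weights are written down, the model can be applied and audited independently of the expert's day-to-day impressions, which is part of what gives decomposed models their advantage in poor-feedback domains.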
Apart from the characteristics in Table 1, one example of objective criteria is a good prediction track record on the part of the expert. A willingness to make falsifiable, unambiguous predictions is another good sign. A better connection with empirical knowledge and less theoretical rigidity are also positive indications (Tetlock, 2005). It must be noted, however, that assessing whether the expert possesses these characteristics is a second-order phenomenon - subjective impressions of the expert's subjective judgement - so in most cases it will be impossible to identify the truth when there is strong expert disagreement.

Grind versus insight

There is a distinction between achievements that require grind and those that require insight. 12 Grind is a term encompassing the application of hard work and resources to a problem, with the confidence that these will accomplish the goal. Problems that require insight, however, cannot simply be solved by hard work: new, unexpected ideas are needed to reach the goal. Most Moore's law predictions assume that grind is all that is needed for AI: once a certain level of computer performance is reached, people will be able to develop AI. In contrast, some insist that new insights are needed 13 (Deutsch, 2012).

In general, the grind needed for some goal can be predicted quite well. Project managers and various leaders are often quite good at estimating the length of projects (as long as they are not directly involved in the project (Buehler, Griffin, & Ross, 1994)). Moore's law could be taken as an ultimate example of grind: the global efforts of many engineers across many fields average out to a relatively predictable exponential growth. Predicting insight is much harder. The Riemann hypothesis is a well-established mathematical conjecture dating from 1859, still unsolved but much researched (Riemann, 1859). How would one go about predicting when it will be solved? If building a true AI is akin in difficulty to solving the Riemann hypothesis (or solving several open mathematical problems), then timeline predictions are a lot less reliable, with much larger error bars. This does not mean that a prediction informed by a model of grind is automatically more accurate than one that models insight: that is only true if a good case is made that AI can indeed be achieved through grind, and that insight is not needed. The predictions around whole brain emulation (Sandberg, 2008) are among the few that make this case convincingly.

Non-expert judgement

All the issues and problems with expert judgement apply just as well to non-experts. While experts could be expected to have some source of useful insight due to their training, knowledge and experience, this is not the case with non-experts, giving no reason to trust their judgement. That is not to say that non-experts cannot come up with good models, convincing timelines, or interesting plans and scenarios. It just means that any assessment of the quality of the prediction depends only on the prediction itself; a non-expert cannot be granted any leeway to cover up a weak premise or a faulty logical step. One must beware the halo effect in assessing predictions (Finucane, Alhakami, Slovic, & Johnson, 2000; Thorndike, 1920). This denotes the psychological tendency to see different measures of personal quality as correlated: an attractive person is seen as likely to be intelligent, someone skilled in one domain is believed to be skilled in another.
Hence it is hard to prevent one's opinion of the predictor from affecting one's assessment of the prediction, even when this is unwarranted. Ideally, one would seek to assess non-experts' predictions in a "blinded" way, without knowing who the prediction's author was. If this is not possible, one can attempt to reduce the bias by imagining the prediction was authored by someone else - such as the Archbishop of Canterbury, Warren Buffett or the Unabomber. Success is achieved when hypothetical changes in authorship do not affect estimations of the validity of the prediction.

Timeline predictions

Jonathan Wang and Brian Potter of the Machine Intelligence Research Institute performed an exhaustive search of the online literature and from this assembled a database of 257 AI predictions from the period 1950-2012. Of these, 95 contained predictions giving timelines for AI development. 14 Table 1 suggests that one should expect AI timeline predictions to be of relatively low quality. The only unambiguously positive feature of timeline predictions on that table is that prediction errors are expected and allowed: apart from that, the task characteristics are daunting, especially on the key issue of feedback. The theory is borne out in practice: the AI predictions in the database seem little better than random guesses (see Figure 1). The data are analysed more thoroughly in a previous paper, which explains the methodology for choosing a single median estimate (Armstrong & Sotala, 2012). The main conclusions are the following:

(1) There is little correlation between different predictions. They span a large range (the graph has been reduced; there were predictions beyond 2100), and exhibit no signs of convergence. Ignoring the predictions beyond 2100, the predictions show a standard deviation of over a quarter of a century (26 years). There is little to distinguish failed predictions, whose date has passed, from those that still lie in the future.
(2) There is no evidence that expert predictions differ from non-expert predictions. Again ignoring predictions beyond 2100, expert predictions show a standard deviation of 26 years, while non-expert predictions show a standard deviation of 27 years. 15
(3) There is no evidence for the so-called Maes-Garreau law, 16 which is the idea that predictors preferentially predict AI to be developed just in time to save them from death.
(4) There is a strong tendency to predict the development of AI within 15-25 years of when the prediction is made (over a third of all predictions are in this timeframe, see Figure 2). Experts, non-experts and failed predictions all exhibit this same pattern.

In summary, there are strong theoretical and practical reasons to believe that timeline AI predictions are likely to be unreliable.

Case studies

This section applies and illustrates the schemas of Section 2 and the methods of Section 3. It does so by looking at five prominent AI predictions: the initial Dartmouth conference, Dreyfus's criticism of AI, Searle's Chinese room paper, Kurzweil's predictions and Omohundro's AI drives. The aim is to assess and analyse these predictions and gain insights that can then be applied to assessing future predictions.

5.1 In the beginning, Dartmouth created the AI and the hype . . .

Classification: plan, using expert judgement and the outside view.

Hindsight bias is very strong and misleading (Fischhoff, 1975).
Humans are often convinced that past events could not have unfolded differently than how they did, and that the people at the time should have realised this. Even worse, people unconsciously edit their own memories so that they misremember themselves as being right even when they got their past predictions wrong. 17 Hence when assessing past predictions, one must cast aside all knowledge of subsequent events, and try to assess the claims given the knowledge available at the time. This is an invaluable exercise to undertake before turning attention to predictions whose timelines have not come to pass. The 1956 Dartmouth Summer Research Project on Artificial Intelligence was a major conference, credited with introducing the term 'Artificial Intelligence' and starting the research in many of its different subfields. The conference proposal, 18 written in 1955, sets out what the organisers thought could be achieved. Its first paragraph reads: We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer. This can be classified as a plan. Its main backing would have been expert judgement. The conference organisers were John McCarthy (a mathematician with experience in the mathematical nature of the thought process), Marvin Minsky (Harvard Junior Fellow in Mathematics and Neurology, and prolific user of neural nets), Nathaniel Rochester (Manager of Information Research, IBM, designer of the IBM 701, the first general purpose, mass-produced computer, and designer of the first symbolic assembler) and Claude Shannon (the 'father of information theory'). These were individuals who had been involved in a lot of related theoretical and practical work, some of whom had built functioning computers or programing languages -so one can expect them all to have had direct feedback about what was and was not doable in computing. If anyone could be considered experts in AI, in a field dedicated to an as yet non-existent machine, then they could. What implicit and explicit assumptions could they have used to predict that AI would be easy? Reading the full proposal does not give the impression of excessive optimism or overconfidence. The very first paragraph hints at the rigour of their ambitions -they realised that precisely describing the features of intelligence is a major step in simulating it. Their research plan is well decomposed, and different aspects of the problem of AI are touched upon. The authors are well aware of the inefficiency of exhaustive search methods, of the differences between informal and formal languages and of the need for encoding creativity. They talk about the need to design machines that can work with unreliable components, and that can cope with randomness and small errors in a robust way. 
They propose some simple models of some of these challenges (such as forming abstractions, or dealing with more complex environments), point to some successful work that had been done before, and outline how further improvements can be made. Reading through, the implicit reasons for their confidence seem to become apparent. 19 These were experts, some of whom had been working with computers from the early days, who had a long track record of taking complex problems and creating simple (and then more complicated) models to deal with them. Using these models, they generated useful insights or functioning machines. So this was an implicit use of the outside view: they were used to solving certain problems, these looked like the problems they could solve, hence they assumed they could solve them.

To modern eyes, informal languages are hugely complicated, but this may not have been obvious at the time. Computers were doing tasks, such as complicated mathematical manipulations, that were considered high-skill, something only very impressive humans had been capable of. 20 Moravec's paradox 21 had not yet been realised. The human intuition about the relative difficulty of tasks was taken as accurate: there was no reason to suspect that parsing English was much harder than the impressive feats computers could already perform. Moreover, great progress had been made in logic, in semantics and in information theory, giving new understanding to old concepts: there was no reason to suspect that further progress would not be both forthcoming and dramatic.

Even at the time, though, one could criticise their overconfidence. Philosophers, for one, had a long track record of pointing out the complexities and subtleties of the human mind. It might have seemed plausible in 1955 that further progress in logic and information theory would end up solving all these problems - but it could have been equally plausible to suppose that the success of formal models had been on low-hanging fruit, and that further progress would become much harder. Furthermore, the computers at the time were much simpler than the human brain (e.g. the IBM 701, with 73,728 bits of memory), so any assumption that AIs could be built was also an assumption that most of the human brain's processing was wasted. This implicit assumption was not obviously wrong, but neither was it obviously right. Hence the whole conference project would have seemed ideal, had it merely added more humility and qualifiers in the text, expressing uncertainty as to whether a particular aspect of the programme might turn out to be hard or easy. After all, in 1955, there were no solid grounds for arguing that such tasks were unfeasible for a computer.

Nowadays, it is obvious that the paper's predictions were very wrong. All the tasks mentioned were much harder to accomplish than was claimed at the time, and have not been successfully completed even today. Rarely have such plausible predictions turned out to be so wrong; so what can be learned from this? The most general lesson is perhaps on the complexity of language and the danger of using human-understandable informal concepts in the field of AI. The Dartmouth group seemed convinced that because they informally understood certain concepts and could begin to capture some of this understanding in a formal model, it must be possible to capture all of this understanding in a formal model. In this, they were wrong.
Similarities of features do not make the models similar to reality, and using human terms - such as 'culture' and 'informal' - in these models concealed huge complexity and gave an illusion of understanding. Today's AI developers have a much better understanding of how complex cognition can be, and have realised that programing simple-seeming concepts into computers can be very difficult. So the main lesson to draw is that reasoning about AI using human concepts (or anthropomorphising AIs by projecting human features onto them) is a very poor guide to the nature of the problem and to the time and effort required to solve it.

Dreyfus's artificial alchemy

Classification: issues and metastatements, using the outside view, non-expert judgement and philosophical arguments.

Hubert Dreyfus was a prominent early critic of AI. He published a series of papers and books attacking the claims and assumptions of the AI field, starting in 1965 with a paper for the RAND Corporation entitled 'Alchemy and AI' (Dreyfus, 1965). The paper was famously combative, analogising AI research to alchemy and ridiculing AI claims. Later, D. Crevier would claim 'time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier' (Crevier, 1993). Ignoring the formulation issues, were Dreyfus's criticisms actually correct, and what can be learned from them?

Was Dreyfus an expert? Though a reasonably prominent philosopher, there is nothing in his background to suggest specific expertise with theories of minds and consciousness, and absolutely nothing to suggest familiarity with AI and the problems of the field. Thus Dreyfus cannot be considered anything more than an intelligent outsider. This makes the pertinence and accuracy of his criticisms that much more impressive. Dreyfus highlighted several over-optimistic claims for the power of AI, predicting - correctly - that the 1965 optimism would also fade (with, for instance, decent chess computers still a long way off). He used the outside view to claim this as a near universal pattern in AI: initial successes, followed by lofty claims, followed by unexpected difficulties and subsequent disappointment. He highlighted the inherent ambiguity in human language and syntax, and claimed that computers could not deal with these. He noted the importance of unconscious processes in recognising objects, the importance of context and the fact that humans and computers operated in very different ways. He also criticised the use of computational paradigms for analysing human behaviour, and claimed that philosophical ideas in linguistics and classification were relevant to AI research. In all, his paper is full of interesting ideas and intelligent deconstructions of how humans and machines operate.

All these are astoundingly prescient predictions for 1965, when computers were in their infancy and their limitations were only beginning to be understood. Moreover, he was not only often right, but right for the right reasons (see for instance his understanding of the difficulties computers would have in dealing with ambiguity). Not everything Dreyfus wrote was correct, however; apart from minor specific points, 22 he erred mostly by pushing his predictions to extremes. He claimed that 'the boundary may be near' in computer abilities, and concluded with: . . . what can now be done? Nothing directly towards building machines which can be intelligent. [ . . .
] in the long run [we must think] of non-digital automata . . . Currently, however, there exist 'digital automata' that can beat all humans at chess, translate most passages to at least an understandable level and beat humans at 'Jeopardy', a linguistically ambiguous arena (Guizzo, 2011). He also failed to foresee that workers in AI would eventually develop new methods to overcome the problems he had outlined. Though Dreyfus would later state that he never claimed AI achievements were impossible (McCorduck, 2004), there is no reason to pay attention to later re-interpretations: Dreyfus's (1965) article strongly suggests that AI progress was bounded. These failures are an illustration of the principle that even the best of predictors is vulnerable to overconfidence.

In 1965, people would have been justified in finding Dreyfus's analysis somewhat implausible. It was the work of an outsider with no specific relevant expertise, and it dogmatically contradicted the opinion of genuine experts inside the AI field. Though the claims it made about human and machine cognition seemed plausible, there is a great difference between seeming plausible and actually being correct, and his own non-expert judgement was the main backing for the claims. Outside of logic, philosophy had yet to contribute much to the field of AI, so there was no intrinsic reason to listen to a philosopher. There were, however, a few signs that the paper was of high quality: Dreyfus seemed to be very knowledgeable about progress and work in AI, and most of his analyses of human cognition were falsifiable, at least to some extent. These were still not strong arguments to heed the sceptical opinions of an outsider.

The subsequent partial vindication of the paper is therefore a stark warning: it is very difficult to estimate the accuracy of outsider predictions. There were many reasons to reject Dreyfus's predictions in 1965, and yet that would have been the wrong thing to do. Blindly accepting non-expert outsider predictions would have also been a mistake, however: these are most often in error (see Section 3.4.2). One general lesson concerns the need to decrease certainty: the computer scientists of 1965 should at least have accepted the possibility (if not the plausibility) that some of Dreyfus's analyses were correct, and they should have started paying more attention to the 'success - excitement - difficulties - stalling' cycles in their field to see whether the pattern continued. A second lesson could be about the importance of philosophy: it does seem that philosophers' meta-analytical skills can contribute useful ideas to AI - a fact that is certainly not self-evident (see also Section 5.5).

Locked up in Searle's Chinese room

Classification: issues and metastatements and a scenario, using philosophical arguments and expert judgement.

Searle's Chinese room thought experiment is a famous critique of some of the assumptions of 'strong AI'. 23 There has been a lot of further discussion on the subject (see for instance, Harnad, 2001; Searle, 1990), but, as in previous case studies, this section will focus exclusively on his original 1980 publication (Searle, 1980). In the key thought experiment, Searle imagined that AI research had progressed to the point where a computer program had been created that could demonstrate the same input-output performance as a human - for instance, it could pass the Turing test. Nevertheless, Searle argued, this program would not demonstrate true understanding.
He supposed that the program's inputs and outputs were in Chinese, a language Searle could not understand. Instead of a standard computer program, the required instructions were given on paper, and Searle himself was locked in a room somewhere, slavishly following the instructions and therefore causing the same input-output behaviour as the AI. Since it was functionally equivalent to the AI, the set-up should, from the 'strong AI' perspective, demonstrate understanding if and only if the AI did. Searle then argued that there would be no understanding at all: he himself could not understand Chinese, and there was no one else in the room to understand it either.

The whole argument depends on strong appeals to intuition (indeed D. Dennett went as far as accusing it of being an 'intuition pump' (Dennett, 1991)). The required assumptions are the following:

- The Chinese room set-up analogy preserves the relevant properties of the AI's program.
- Intuitive reasoning about the Chinese room is thus relevant reasoning about algorithms.
- The intuition that the Chinese room follows a purely syntactic (symbol-manipulating) process rather than a semantic (understanding) one is a correct philosophical judgement.
- The intuitive belief that humans follow semantic processes is, however, correct.

Thus the Chinese room argument is unconvincing to those who do not share Searle's intuitions. It cannot be accepted solely on Searle's philosophical expertise, as other philosophers disagree (Dennett, 1991; Rey, 1986). On top of this, Searle is very clear that his thought experiment does not put any limits on the performance of AIs (he argues that even a computer with all the behaviours of a human being would not demonstrate true understanding). Hence the Chinese room seems to be useless for AI predictions. Can useful predictions nevertheless be extracted from it? These need not come directly from the main thought experiment, but from some of the intuitions and arguments surrounding it.

Searle's paper presents several interesting arguments, and it is interesting to note that many of them are disconnected from his main point. For instance, errors made in 1980 AI research should be irrelevant to the Chinese room - a pure thought experiment. Yet Searle argues about these errors, and there is at least an intuitive, if not a logical, connection to his main point. There are actually several different arguments in Searle's paper, not clearly divided from each other, and likely to be rejected or embraced depending on the degree of overlap with Searle's intuitions. This may explain why many philosophers have found Searle's paper so complex to grapple with.

One feature Searle highlights is the syntactic-semantic gap. If he is correct, and such a gap exists, this demonstrates the possibility of further philosophical progress in the area. 24 For instance, Searle directly criticises McCarthy's contention that 'Machines as simple as thermostats can have beliefs' (McCarthy, 1979). If one accepted Searle's intuition there, one could then ask whether more complicated machines could have beliefs, and what attributes they would need. These should be attributes that it would be useful to have in an AI. Thus progress in 'understanding understanding' would likely make it easier to go about designing AI - but only if Searle's intuition is correct that AI designers do not currently grasp these concepts. That can be expanded into a more general point.
In Searle's time, the dominant AI paradigm was GOFAI (Good Old-Fashioned Artificial Intelligence (Haugeland, 1985)), which focused on logic and symbolic manipulation. Many of these symbols had suggestive labels: SHRDLU, for instance, had a vocabulary that included 'red', 'block', 'big' and 'pick up' (Winograd, 1971). Searle's argument can be read, in part, as a claim that these suggestive labels did not in themselves impart true understanding of the concepts involved - SHRDLU could parse 'pick up a big red block' and respond with an action that seems appropriate, but could not understand those concepts in a more general environment. The decline of GOFAI since the 1980s cannot be claimed as vindication of Searle's approach, but it at least backs up his intuition that these early AI designers were missing something.

Another falsifiable prediction can be extracted, not from the article but from the intuitions supporting it. If formal machines do not demonstrate understanding, but brains (or brain-like structures) do, this would lead to certain scenario predictions. Suppose two teams were competing to complete an AI that would pass the Turing test. One team was using standard programing techniques on a computer; the other was building it out of brain (or brain-like) components. Apart from this, there is no reason to prefer one team over the other. According to Searle's intuition, any AI made by the first project will not demonstrate true understanding, while those of the second project may. Adding the reasonable assumption that it is harder to simulate understanding if one does not actually possess it, one is led to the prediction that the second team is more likely to succeed. Thus there are three predictions that can be extracted from the Chinese room paper:

(1) Philosophical progress in understanding the syntactic-semantic gap may help towards designing better AIs.
(2) GOFAI's proponents incorrectly attribute understanding and other high-level concepts to simple symbolic manipulation machines, and will not succeed with their approach.
(3) An AI project that uses brain-like components is more likely to succeed (everything else being equal) than one based on copying the functional properties of the mind.

Therefore, one can often extract predictions from even the most explicitly anti-predictive philosophy of AI paper.

How well have the 'spiritual machines' aged?

Classification: timelines and scenarios, using expert judgement, causal models, non-causal models and (indirect) philosophical arguments.

Ray Kurzweil is a prominent and often quoted AI predictor. One of his most important books was the 1999 The Age of Spiritual Machines, which presented his futurist ideas in more detail, and made several predictions for the years 2009, 2019, 2029 and 2099. That book will be the focus of this case study, ignoring his more recent work. 25 There are five main points relevant to judging The Age of Spiritual Machines: Kurzweil's expertise, his 'Law of Accelerating Returns', his extension of Moore's law, his predictive track record and his use of fictional imagery to argue philosophical points.

Kurzweil has had a lot of experience in the modern computer industry. He is an inventor, computer engineer and entrepreneur, and as such can claim insider experience in the development of new computer technology. He has been directly involved in narrow AI projects covering voice recognition, text recognition and electronic trading.
His fame and prominence are further indications of the allure (though not necessarily the accuracy) of his ideas. In total, Kurzweil can be regarded as an AI expert. Kurzweil is not, however, a cosmologist or an evolutionary biologist. In his book, he proposed a 'Law of Accelerating Returns'. This law claimed to explain many disparate phenomena, such as the speed and trends of evolution of life forms, the evolution of technology, the creation of computers and Moore's law in computing. His slightly more general 'Law of time and chaos' extended his model to explain the history of the universe or the development of an organism. It is a causal model, as it aims to explain these phenomena, not simply note the trends. Hence it is a timeline prediction, based on a causal model that makes use of the outside view to group the categories together, and is backed by non-expert opinion. A literature search failed to find any evolutionary biologist or cosmologist stating their agreement with these laws. Indeed there has been little academic work on them at all, and what work there is tends to be critical. 26 The laws are ideal candidates for counterfactual resiliency checks, however. It is not hard to create counterfactuals that shift the timelines underlying the laws. 27 Many standard phenomena could have delayed the evolution of life on Earth for millions or billions of years (meteor impacts, solar energy fluctuations or nearby gamma-ray bursts). The evolution of technology can similarly be accelerated or slowed down by changes in human society and in the availability of raw materials -it is perfectly conceivable that, for instance, the ancient Greeks could have started a small industrial revolution, or that the European nations could have collapsed before the Renaissance due to a second and more virulent Black Death (or even a slightly different political structure in Italy). Population fragmentation and decrease can lead to technology loss (such as the 'Tasmanian technology trap' (Rivers, 1912) ). Hence accepting that a Law of Accelerating Returns determines the pace of technological and evolutionary change means rejecting many generally accepted theories of planetary dynamics, evolution and societal development. Since Kurzweil is the non-expert here, his law is almost certainly in error, and best seen as a literary device rather than a valid scientific theory. If the law is restricted to being a non-causal model of current computational development, then the picture is very different. First because this is much closer to Kurzweil's domain of expertise. Second because it is now much more robust to counterfactual resiliency. Just as in the analysis of Moore's law in Section 3.2.1, there are few plausible counterfactuals in which humanity had continued as a technological civilisation for the last 50 years, but computing had not followed various exponential curves. Moore's law has been maintained across transitions to new and different substrates, from transistors to GPUs, so knocking away any given technology or idea seems unlikely to derail it. There is no consensus as to why Moore's law actually works, which is another reason it is so hard to break, even counterfactually. Moore's law and its analogues (Moore, 1965; Walter, 2005) are non-causal models, backed up strongly by the data and resilient to reasonable counterfactuals. Kurzweil's predictions are mainly based around grouping these laws together (outside view) and projecting them forwards into the future. 
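The structure of such a projection can be sketched in a few lines of code. The data below are synthetic (an idealised series that doubles every two years from an arbitrary baseline), so the sketch only shows the mechanics of a non-causal extrapolation; it does not reproduce Kurzweil's own calculations or real hardware figures:

```python
import math

# Synthetic data: an idealised Moore's-law-style series that doubles every two years.
# (Illustrative numbers only; real hardware figures are noisier.)
years = list(range(1990, 2012, 2))
counts = [1e6 * 2 ** i for i in range(len(years))]

# Least-squares fit of a straight line to log2(count): log2(y) ~ slope * year + intercept.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(math.log2(c) for c in counts) / n
num = sum((x - mean_x) * (math.log2(c) - mean_y) for x, c in zip(years, counts))
den = sum((x - mean_x) ** 2 for x in years)
slope = num / den
intercept = mean_y - slope * mean_x

def extrapolate(year):
    """Project the trend forward, assuming it simply continues (the non-causal step)."""
    return 2 ** (slope * year + intercept)

print(f"doubling time: {1 / slope:.1f} years")          # 2.0 years, by construction
print(f"projected count in 2029: {extrapolate(2029):.2e}")
```

The fit itself is uncontroversial; the predictive content lies entirely in the assumption that the trend continues.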
This projection is combined with Kurzweil's claims that he can estimate how those continuing technological innovations are going to become integrated into society. These timeline predictions are thus based strongly on Kurzweil's expert judgement. But much better than subjective impressions of expertise is Kurzweil's track record: his predictions for 2009. This gives empirical evidence as to his predictive quality.

Initial assessments suggested that Kurzweil had a success rate around 50%. 28 A panel of nine volunteers was recruited to give independent assessments of Kurzweil's performance. Kurzweil's predictions were broken into 172 individual statements, and the volunteers were given a randomised list of numbers from 1 to 172, with instructions to work their way down the list in that order, estimating each prediction as best they could. Since 2009 was obviously a 'ten years from 1999' gimmick, there was some flexibility on the date: a prediction was judged true if it was true by 2011. 29 A total of 531 assessments were made, an average of exactly 59 assessments per volunteer. Each volunteer assessed at least 10 predictions, while one volunteer assessed all 172. Of the assessments, 146 (27%) were found to be true, 82 (15%) weakly true, 73 (14%) weakly false, 172 (32%) false and 58 (11%) could not be classified (see Figure 3) (the results change little (<1%) if the results are calculated for each volunteer and then averaged). Simultaneously, a separate assessment was made using volunteers on the site Youtopia. These found a much higher failure rate - 41% false, 16% weakly false - but since that experiment was not blinded or randomised, it is of less rigour. 30

The nine volunteers thus found a correct prediction rate of 42%. How impressive this result is depends on how specific and unobvious Kurzweil's predictions were. This is very difficult to figure out, especially in hindsight (Fischhoff, 1975). Nevertheless, a subjective overview suggests that the predictions were often quite specific (e.g. 'Unused computes on the Internet are being harvested, creating virtual parallel supercomputers with human brain hardware capacity'), and sometimes failed because of this. In view of this, a correctness rating of 42% is impressive, and goes some way to demonstrate Kurzweil's predictive abilities. When it comes to self-assessment, 31 however, Kurzweil is much less impressive. He commissioned investigations into his own performance, which gave him scores of 102 out of 108 32 or 127 out of 147, 33 with the caveat that 'even the predictions that were considered wrong [...] were not all wrong.' This is dramatically different from this paper's assessments.

What can be deduced from this tension between good performance and poor self-assessment? The performance is a validation of Kurzweil's main model - continuing exponential trends in computer technology - and confirmation that Kurzweil has some impressive ability to project how these trends will impact the world. However, it does not vindicate Kurzweil as a predictor per se: his self-assessment implies that he does not make good use of feedback. Thus one should probably pay more attention to Kurzweil's model than to his subjective judgement. This is a common finding in expert tasks - experts are often better at constructing predictive models than at making predictions themselves (Kahneman, 2011).

The Age of Spiritual Machines is not simply a dry tome, listing predictions and arguments.
It is also, to a large extent, a story, which includes a conversation with a hypothetical future human called Molly, discussing her experiences through the coming century and its changes. Can one extract verifiable predictions from this aspect of the book (see Section 3.1)? A story is neither a prediction nor evidence for some particular future. But the reactions of characters in the story can be construed as a scenario prediction: they imply that real humans, placed in those hypothetical situations, will react in the way described. Kurzweil's story ultimately ends with humans merging with machines - with the barrier between human intelligence and AI being erased. Along the way, he describes the interactions between humans and machines, imagining the machines as quite different from humans, but still being perceived to have human feelings. One can extract two falsifiable future predictions from this: first, that humans will perceive feelings in AIs, even if they are not human-like; second, that humans and AIs will be able to relate to each other socially over the long term, despite being quite different, and that this social interaction will form the main glue keeping the mixed society together.

The first prediction seems quite solid: humans have anthropomorphised trees, clouds, rock formations and storms, and have become convinced that chatterbots were sentient (Weizenbaum, 1966). The second prediction is more controversial: it has been argued that an AI will be such an alien mind that social pressures and structures designed for humans will be completely unsuited to controlling it (Bostrom, 2013). Determining whether social structures can control dangerous AI behaviour, as they control dangerous human behaviour, is a very important factor in deciding whether AIs will ultimately be safe or dangerous. Hence analysing this story-based prediction is an important area of future research.

What drives an AI?

Classification: issues and metastatements, using philosophical arguments and expert judgement.

Steve Omohundro, in his paper on 'AI drives', presented arguments aiming to show that generic AI designs would develop 'drives' that would cause them to behave in specific and potentially dangerous ways, even if these drives were not programed initially (Omohundro, 2008). One of his examples was a superintelligent chess computer that was programed purely to perform well at chess, but that was nevertheless driven by that goal to self-improve, to replace its goal with a utility function, to defend this utility function, to protect itself and ultimately to acquire more resources and power.

This is a metastatement: generic AI designs would have this unexpected and convergent behaviour. It relies on philosophical and mathematical arguments, and though the author has expertise in mathematics and machine learning, he has none directly in philosophy. It also makes implicit use of the outside view: utility-maximising agents are grouped together into one category and similar types of behaviours are expected from all agents in this category. In order to clarify and reveal assumptions, it helps to divide Omohundro's thesis into two claims. The weaker one is that a generic AI design could end up having these AI drives; the stronger one is that it would very likely have them. Omohundro's paper provides strong evidence for the weak claim. It demonstrates how an AI motivated only to achieve a particular goal could nevertheless improve itself, become a utility-maximising agent, reach out for resources and so on.
Every step of the way, the AI becomes better at achieving its goal, so all these changes are consistent with its initial programing. This behaviour is very generic: only specifically tailored or unusual goals would safely preclude such drives. The claim that AIs generically would have these drives needs more assumptions. There are no counterfactual resiliency tests for philosophical arguments, but something similar can be attempted: one can use humans as potential counterexamples to the thesis. It has been argued that AIs could have any motivation a human has (Armstrong, 2013; Bostrom, 2012). Thus, according to the thesis, it would seem that humans should be subject to the same drives and behaviours. This does not fit the evidence, however. Humans are certainly not expected utility maximisers (probably the closest would be financial traders who try to approximate expected money maximisers, but only in their professional work), they do not often try to improve their rationality (in fact some specifically avoid doing so, 34 and some sacrifice cognitive ability to other pleasures; Bleich et al., 2003), and many turn their backs on high-powered careers. Some humans do desire self-improvement (in the sense of the paper), and Omohundro cites this as evidence for his thesis. Some humans do not desire it, though, and this should be taken as contrary evidence. 35 Thus one hidden assumption of the model is one of the following:

- Generic superintelligent AIs would have different motivations to a significant subset of the human race, OR
- Generic humans raised to superintelligence would develop AI drives.

This position is potentially plausible, but no real evidence is presented for it in the paper.

A key assumption of Omohundro is that AIs will seek to re-express their goals in terms of a utility function. This is based on the Morgenstern-von Neumann expected utility theorem (von Neumann & Morgenstern, 1944). The theorem demonstrates that any decision process that cannot be expressed as expected utility maximising will be exploitable by other agents or by the environment; hence in certain circumstances, the agent will predictably lose assets, to no advantage to itself (a simple illustration of this kind of exploitation is sketched below). That theorem does not directly imply, however, that the AI will be driven to become an expected utility maximiser (to become 'rational'). First of all, as Omohundro himself points out, real agents can only be approximately rational: fully calculating the expected utility of every action is too computationally expensive in the real world. Bounded rationality (Simon, 1955) is therefore the best that can be achieved, and the benefits of becoming rational can only be partially realised. Second, there are disadvantages to becoming rational: these agents tend to be 'totalitarian', ruthlessly squeezing out anything not explicitly in their utility function, sacrificing everything to the smallest increase in expected utility. An agent that did not start off as utility-based could plausibly make the assessment that becoming so might be dangerous. It could stand to lose values irrevocably, in ways that it could not estimate at the time. This effect would become stronger as its future self continues to self-improve. Thus an agent could conclude that it is too dangerous to become 'rational', especially if the agent's understanding of itself is limited. Third, the fact that an agent can be exploited in theory does not mean that it will be much exploited in practice. Humans are relatively adept at not being exploited, despite not being rational agents.
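To see concretely what 'exploitable' means here, consider the standard money-pump construction: an agent with cyclic preferences can be led through a sequence of paid trades that leaves it exactly where it started, minus the fees. The preferences and numbers below are a textbook-style illustration, not a model of any particular AI:

```python
# A standard money-pump against cyclic (non-expected-utility) preferences.
# The preferences and the fee are illustrative assumptions.

prefers = {("A", "B"): True, ("B", "C"): True, ("C", "A"): True}  # A>B, B>C, C>A: a cycle

def accepts_trade(offered: str, held: str) -> bool:
    """The agent trades whenever it strictly prefers the offered item to the one held."""
    return prefers.get((offered, held), False)

holding, money, fee = "A", 100.0, 1.0
trade_sequence = ["C", "B", "A", "C", "B", "A"]  # each offer exploits the preference cycle

for offered in trade_sequence:
    if accepts_trade(offered, holding):
        holding, money = offered, money - fee  # pays a small fee for each 'improvement'

print(holding, money)  # back to holding 'A', but 6.0 poorer: a predictable loss with no gain
```

This is the kind of predictable, repeatable loss that the theorem warns about.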
Though human 'partial rationality' is vulnerable to tricks such as extended warranties and marketing gimmicks, it generally does not end up losing money, again and again and again, through repeated blatant exploitation. For an AI similarly capable of ensuring that it was exploitable only for small amounts, the pressure to become fully rational would be weak. An expected utility maximiser would find such small avoidable losses intolerable; but there is no reason for a not-yet-rational agent to agree. Finally, social pressure should be considered. The case for an AI becoming more rational is at its strongest in a competitive environment, where the theoretical exploitability is likely to actually be exploited. Conversely, there may be situations of social equilibrium, with different agents all agreeing to forgo rationality individually, in the interest of group cohesion (there are many scenarios where this could be plausible). Thus another hidden assumption of the strong version of the thesis is the following:

- The advantages of becoming less exploitable outweigh the possible disadvantages of becoming an expected utility maximiser (such as possible loss of value or social disagreements). The advantages are especially large when the potentially exploitable aspects of the agent are likely to be exploited, such as in a highly competitive environment.

Any sequence of decisions can be explained as maximising a (potentially very complicated or obscure) utility function. Thus, in the abstract sense, saying that an agent is an expected utility maximiser is not informative. Yet there is a strong tendency to assume such agents will behave in certain ways (see for instance the previous comment on the totalitarian aspects of expected utility maximisation). This assumption is key to the rest of the thesis. It is plausible that most agents will be 'driven' towards gaining extra power and resources, but this is only a problem if they do so dangerously (at the cost of human lives, for instance). Assuming that a realistic utility function-based agent would do so is plausible but unproven. In general, generic statements about utility function-based agents are only true for agents with relatively simple goals. Since human morality is likely very complicated to encode in a computer, and since most putative AI goals are very simple, this is a relatively justified assumption, but it is an assumption nonetheless. So there are two more hidden assumptions:

- Realistic AI agents with utility functions will be in a category such that one can make meaningful, generic claims for (almost) all of them. This could arise, for instance, if their utility function is expected to be simpler than human morality.
- Realistic AI agents are likely not only to have the AI drives Omohundro mentioned, but also to have them in a very strong way, being willing to sacrifice anything else to their goals. This could happen, for instance, if the AIs were utility function based with relatively simple utility functions.

This simple analysis suggests that a weak form of Omohundro's thesis is almost certainly true: AI drives could emerge in generic AIs. The stronger thesis, claiming that the drives would be very likely to emerge, depends on some extra assumptions that need to be analysed. But there is another way of interpreting Omohundro's work: it presents the generic behaviour of simplified artificial agents (similar to the way that supply and demand curves present the generic behaviour of simplified human agents).
Thus even if the model is wrong, it can still be of great use for predicting AI behaviour: designers and philosophers could explain how and why particular AI designs would deviate from this simplified model, and thus analyse whether that AI is likely to be safer than that in the Omohundro model. Hence the model is likely to be of great use, even if it turns out to be an idealised simplification. \n Dangerous AIs and the failure of counterexamples Another thesis, quite similar to Omohundro's, is that generic AIs would behave dangerously, unless they were exceptionally well programed. This point has been made repeatedly by Yampolskiy (2012) , Yudkowsky (2008) and Minsky (1984) , among others. That thesis divides in the same fashion as Omohundro's: a weaker claim that any AI could behave dangerously, and a stronger claim that it would likely do so. The same analysis applies as for the 'AI drives': the weak claim is solid, the stronger claim needs extra assumptions (but describes a useful 'simplified agent' model of AI behaviour). There is another source of evidence for both these theses: the inability of critics to effectively dismiss them. There are many counter-proposals to the theses (some given in question and answer sessions at conferences) in which critics have presented ideas that would 'easily' dispose of the dangers 36 ; every time, the authors of the theses have been able to point out flaws in the counter-proposals. This demonstrated that the critics had not grappled with the fundamental issues at hand, or at least not sufficiently to weaken the theses. This should obviously not be taken as a proof of the theses. But it does show that the arguments are currently difficult to counter. Informally this is a reverse expert-opinion test: if experts often find false counter-arguments, then any given counter-argument is likely to be false (especially if it seems obvious and easy). Thus any counter-argument should have been subject to a degree of public scrutiny and analysis, before it can be accepted as genuinely undermining the theses. Until that time, both predictions seem solid enough that any AI designer would do well to keep them in mind in the course of their programing. \n Conclusion The aims of this paper and the previous one (Armstrong & Sotala, 2012) were to analyse how AI predictions were made, and to start constructing a toolbox of methods that would allow people to construct testable predictions from most AI-related publications, and assess the reliability of these predictions. It demonstrated the problems with expert judgement, in theory and in practice. Timeline predictions were seen to be particularly unreliable: in general, these should be seen as containing little useful information. The various tools and analyses were applied in case studies to five famous AI predictions, the original Dartmouth conference, Dreyfus's criticism of AI, Searle's Chinese room thought experiment, Kurzweil's (1999) predictions and Omohundro's 'AI drives' argument. This demonstrated the great difficulty of assessing the reliability of AI predictions at the time they were made: by any reasonable measures, the Dartmouth conference should have been expected to be more accurate than Dreyfus. The reality, of course, was completely opposite. Though there are some useful tools for assessing prediction quality, and they should definitely be used, they provide only weak evidence. 
The only consistent message was that all predictors were overconfident in their verdicts, and that model-based predictions were superior to those founded solely on expert intuition. It is hoped that future predictors (and future predictor assessors) will follow in the spirit of these examples, and make their assumptions explicit, their models clear, their predictions testable and their uncertainty greater. This is not limited to statements about AI: there are many fields where the 'toolbox' of methods described here could be used to analyse and improve their predictions. Figure 1. Median estimate for human-level AI, graphed against date of prediction. Figure 2. Time between the arrival of AI and the date the prediction was made (years on the x axis, number of predictions on the y axis). Figure 3. Assessments of the correctness of Kurzweil's predictions: percentage of assessments in each category from true to false. Table 1. Task properties conducive to good and poor expert performance.", "date_published": "n/a", "url": "n/a", "filename": "ai.tei.xml", "abstract": "Predicting the development of artificial intelligence (AI) is a difficult project, but a vital one, according to some analysts. AI predictions already abound: but are they reliable? This paper starts by proposing a decomposition schema for classifying them. Then it constructs a variety of theoretical tools for analysing, judging and improving them. These tools are demonstrated by careful analysis of five famous AI predictions: the initial Dartmouth conference, Dreyfus's criticism of AI, Searle's Chinese room paper, Kurzweil's predictions in the Age of Spiritual Machines, and Omohundro's 'AI drives' paper. These case studies illustrate several important principles, such as the general overconfidence of experts, the superiority of models over expert judgement and the need for greater uncertainty in all types of predictions. The general reliability of expert judgement in AI timeline predictions is shown to be poor, a result that fits in with previous studies of expert competence.", "id": "bf414395b330ae39117cc19308c6f306"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["James Bell", "Linda Linsefors", "Caspar Oesterheld", "Joar Skalse"], "title": "Reinforcement Learning in Newcomblike Environments", "text": "Introduction In this paper, we study decision scenarios in which outcomes depend not only on the choices made and physically implemented, but also depend directly on the agent's policy. As an example, consider an autonomous vehicle (AV) whose goal it is to arrive at target destinations quickly while minimising the probability of collisions. In practice, AVs are careful drivers. It is easy to imagine an experiment (or learning process) that might support careful driving: on each day, let the AV decide at random between a careful and a more aggressive style of driving; other drivers on the road are unaware of today's chosen driving style and therefore behave the same around the AV on both types of days.
Presumably the decrease in accidents on careful days outweighs the increase in travel time. However, imagine now that a type of AV was widely deployed. Then many of the drivers with whom the AVs interact on the road would know a lot about how these AVs behave (e.g., from reading about AVs, or from having interacted with other AVs of the same type in the past). In particular, if the other drivers know that the AVs rarely take risks, they might (whether rationally or irrationally) cut them off more, not give them right of way, etc. relative to the above experiment. Indeed, this phenomenon -human drivers bullying timid AVs -has been reported in the real world (Condliffe, 2016; Liu et al., 2020; cf. Cooper et al., 2019) . As a result, the travel times of an AV are much longer if it always follows (and is known to follow) a careful driving policy. Moreover, some of the safety benefits of careful choices disappear if the AV adopts a careful policy. To comfortably model situations such as this one, we introduce Newcomblike decision processes (NDPs). The name derives from Newcomb's problem (Nozick, 1969) , described in the next subsection, and similar problems that have been studied in the decision-theoretic literature. NDPs are a generalisation of Markov decision processes wherein the transition probabilities and rewards depend not only on the agent's action, but also directly on the agent's policy. Thus, for example, how aggressively other cars move depends on the timidness of the AV's policy. We describe NDPs in more detail in Sect. 1.1. Importantly, the NDP model does not assume other agents in the environment to respond rationally to the agent's policy. Thus, some NDPs cannot comfortably be modelled as games. (See Sect. 5.1 for a more detailed discussion of the relation between NDPs and game-theoretic models.) We believe that Newcomblike dynamics are commonplace when AI systems interact with other (human or artificial) agents (Cavalcanti, 2010, Sect. 5; Oesterheld, 2019, Sect. 1; Conitzer, 2019) . The deployment of AVs is rife with such dynamics. Besides aggressiveness, it might matter whether a policy is simple and thus predictable to humans, for instance. Another real-world scenario is that of recommendation systems: most readers of this paper have some idea of how these systems work and make choices based on it. (\"I would like to watch this cat video, but if I do, my recommendations will soon be full of them.\") Thus, the success of a particular recommendation depends not only on the recommendation itself, but also on the recommendation system's policy. We are interested in learning to play NDPs. More specifically, we study the behaviour of value-based, model-free RL agents, who maintain a Q-function that assigns values to state-action pairs. We define these in more detail in Sect. 1.2. As we will see, such agents do not in general learn optimal policies in NDPs (as they do in MDPs). Nevertheless, we believe that studying them is an important first step in developing practical learning algorithms for NDPs due to the combination of the following points. A) For illustrative purposes, the examples we discuss throughout this paper are simple and emphasise the dependence of the environment on the policy. However, we think that most real-world scenarios are only partially Newcomblike. For example, most of the AV's environment changes only in response to an AV's actions and does not directly depend on the AV's policy. B) Value-based reinforcement learning algorithms are very well developed. 
In contrast, we would have to develop specialised learning algorithms for general NDPs from scratch. C) As our results will show, in some situations and when set up correctly (e.g., in terms of learning rates) value-based learning converges to optimal policies, or at least to reasonable policies, even under Newcomblike dynamics. For example, in a game of rock-paper-scissors against an opponent who knows the agent's policy, some value-based learning agents learn the optimal policy of mixing uniformly. In light of A-C, we think that the most realistic paths to developing learning algorithms for real-world scenarios with Newcomblike dynamics will involve value-based RL. Specifically, one avenue toward realistic algorithms is to develop extensions of value-based RL that can detect and correct failures that might arise in Newcomblike dynamics. For that, we need to understand how value-based RL behaves in NDPs. \n Contributions In Sect. 2 we demonstrate that value-based RL algorithms can only converge to a policy that is ratifiable, that is, a policy π for which all actions taken by π have optimal expected reward when following π. In Sect. 3, we discuss the convergence properties of agents in Newcomblike situations, and show that there are cases where value-based agents must fail to converge. The action frequencies might converge, even when the policies do not. In Sect. 4, we establish some conditions on any action frequency that an agent could converge to. We also show that there are decision problems and agents where even the action frequencies do not converge. \n Newcomblike Decision Processes A Newcomblike decision process (NDP) is a tuple ⟨S, A, T, R, γ⟩ where S is a finite set of states; A is a finite set of actions; T : S × A × (S ⇝ A) ⇝ S is a nondeterministic transition function; R : S × A × S × (S ⇝ A) ⇝ ℝ is a nondeterministic reward function, which we assume to be bounded; and γ ∈ [0, 1) is a discount factor. \n A policy π : S ⇝ A is a function that nondeterministically maps states to actions. We use π(a | s) to denote the probability of taking action a in state s while following the policy π. T and R are functions from states, actions, and policies. In other words, they allow the outcome of a decision to depend on the distributions from which the agent draws its actions, rather than just the state and the action that is in fact taken. Also note that T(s, a, π) and R(s, a, s′, π) are defined even if π(a | s) = 0. We say that an NDP is a bandit NDP if it has only one state. We will sometimes use R(s, a, π) as a shorthand for R(s, a, T(s, a, π), π), and we will sometimes omit the state from T, R, and π for bandit NDPs. Moreover, we normally let γ = 0 for bandit NDPs. Consider a distribution over initial states for an agent, let π be its policy, let x_t be the sequence of states it visits, and let a_t be the sequence of actions it takes. We say π is optimal for that distribution if it maximises E[∑_{i=0}^∞ γ^i R(x_i, a_i, x_{i+1}, π)]. Note that, unlike in the MDP case, the optimal policy does depend on the initial distribution; however, this is not relevant in the bandit case. As an example consider the eponymous Newcomb's Problem. Newcomb's Problem (Nozick, 1969): There are two boxes in front of you; one opaque box, and one transparent box. You can see that the transparent box contains $1,000. You can choose to either take only the opaque box, or to take both boxes.
The boxes have been placed in this room by an agent who can predict your policy; if he believes that you will take only the opaque box then he has put $1,000,000 in the opaque box, but if he believes that you will take both boxes then he has left the opaque box empty. Do you take one box, or two? A version of Newcomb's Problem can be formalised as the following bandit NDP: S = {s}, A = {a_1, a_2}, R(a_1, π) = {0 w.p. π(a_2); 10 w.p. π(a_1)} and R(a_2, π) = {5 w.p. π(a_2); 15 w.p. π(a_1)}, where "w.p." is short for "with probability". The key feature of this NDP is that, for any fixed policy, a_2 ("two-boxing") yields a higher reward than a_1 ("one-boxing"). But the expected reward of a policy increases in π(a_1), so that the optimal policy is to always play a_1.¹ We can view Newcomb's problem as a simple version of the AV dynamic described in the introduction, where a_2 is a driving action that allows other drivers to cut the AV off at no risk. We say that an NDP is continuous if T and R are continuous in the policy. In this paper we work mainly with continuous NDPs. This is in part because it is technically convenient, and in part because we believe that continuity is satisfied in many realistic cases.² \n Reinforcement Learning Agents We consider value-based reinforcement learning agents. Such agents have two main components: a Q-function Q : S × A → ℝ that predicts the expected future discounted reward conditional on taking a particular action in a particular state, and a bandit algorithm that is used to select actions in each state based on the Q-function. Given a policy π, we use q_π(a | s) to denote the (true) expected future discounted reward conditional on taking action a in state s while following the policy π (and conditional on all subsequent actions being chosen by π). A model-free agent will update Q over time to make it converge to q_π when following π. If Q is represented as a lookup table, the agent is said to be tabular. If the state space is large, it is common to instead approximate q_π (with e.g. a neural network). For simplicity, we focus mostly on tabular agents. However, some of our results (Theorems 2 and 5) only assume that Q converges to q_π (for some q_π) and therefore apply immediately to non-tabular agents, as long as the function approximator for q_π converges to the same q_π. The Q-values can be updated in different ways. One method is to use the update rule Q_{t+1}(a_t | s_t) ← (1 − α_t(s_t, a_t)) Q_t(a_t | s_t) + α_t(s_t, a_t)(r_t + γ max_a Q_t(a | s_{t+1})), where a_t is the action taken at time t, s_t is the state visited at time t, r_t is the reward obtained at time t, and α_t(s, a) is a learning rate. This update rule is known as Q-learning (Watkins, 1986). Other widely used update rules include SARSA (Rummery and Niranjan, 1994) and Expected SARSA (van Seijen et al., 2009). For the purposes of this paper it will not matter significantly how the Q-values are computed, as long as it is the case that if an agent converges to a policy π in some NDP and explores infinitely often then Q converges to q_π. We will later see that this is the case for Q-learning, SARSA, and Expected SARSA in continuous NDPs. There are also several different bandit algorithms. Two types of agents that are widely used in practice and that we will refer to throughout the paper are softmax agents and ε-Greedy agents. The policy of a softmax agent with a sequence of temperatures β_t ∈ ℝ⁺ is given by: π_t(a | s) = exp(Q_t(a | s)/β_t) / ∑_{a′∈A} exp(Q_t(a′ | s)/β_t). Unless otherwise stated we assume that β_t → 0.
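To make this machinery concrete, here is a minimal Python sketch (my illustration, not the authors' code) of the Newcomb bandit NDP above together with a tabular softmax learner using the update rule just given (with γ = 0, as in the bandit case). The learning-rate and temperature schedules, run length, and helper names are illustrative assumptions.

```python
import math, random

# Sketch (not the authors' code) of the Newcomb bandit NDP and a softmax Q-learner.
# The learning-rate and temperature schedules are illustrative assumptions.

def newcomb_reward(action, policy):
    """policy = (p1, p2): probabilities of one-boxing (a1) and two-boxing (a2)."""
    p1, p2 = policy
    predicted_one_box = random.random() < p1          # reward depends on the policy itself
    if action == 0:                                   # a1: one-box
        return 10.0 if predicted_one_box else 0.0
    else:                                             # a2: two-box
        return 15.0 if predicted_one_box else 5.0

def softmax_policy(Q, temperature):
    exps = [math.exp(q / temperature) for q in Q]
    total = sum(exps)
    return [e / total for e in exps]

def run(steps=20000):
    Q = [0.0, 0.0]
    for t in range(1, steps + 1):
        temperature = 1.0 / math.log(t + 2)           # slowly cooling softmax
        alpha = 1.0 / (1 + 0.01 * t)                  # decaying learning rate
        policy = softmax_policy(Q, temperature)
        action = 0 if random.random() < policy[0] else 1
        r = newcomb_reward(action, policy)
        Q[action] = (1 - alpha) * Q[action] + alpha * r   # bandit case: gamma = 0
    return Q, softmax_policy(Q, 1.0 / math.log(steps + 2))

print(run())  # Q(a2) ends up above Q(a1), so the learner settles on two-boxing,
              # the only strongly ratifiable policy, even though always one-boxing is optimal.
```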
The policy of an ε-Greedy agent with a sequence of exploration probabilities ε_t ∈ [0, 1] is π_t(a | s) = 1 − ε_t if a = argmax_{a′} Q_t(a′ | s), and π_t(a | s) = ε_t/(|A| − 1) otherwise. Unless otherwise stated we assume that ε_t → 0. We assume that ε-Greedy breaks ties for argmax, so that there is always some a ∈ A such that π(a | s) = 1 − ε_t. We say that an agent is greedy in the limit if the probability that the agent takes an action that maximises Q converges to 1, and we say that it explores infinitely often if it takes every action in every state infinitely many times. \n Some Initial Observations We here make three simple observations about NDPs that we will use to prove and understand the results throughout this paper. First, a continuous NDP always has, for each possible distribution over initial states, a policy π that maximises the expected discounted reward E[R | π], since E[R | π] exists and is continuous in π, and since the set of possible policies is a compact set. Also note that an NDP in which T or R is discontinuous may not have any such policy. Second, whereas all MDPs have a deterministic optimal policy, in some NDPs all optimal policies randomise. To see this we introduce another example we will look at in this paper. Death in Damascus (Gibbard and Harper, 1976): Death will come for you tomorrow. You can choose to stay in Damascus (where you are currently) or you can flee to Aleppo. If you are in the same city as Death tomorrow, you will die. Death has already decided which city he will go to; however, he can predict your policy, and has decided to go to the city where he believes that you will be tomorrow. Do you stay in Damascus, or flee to Aleppo? We formalise this as the bandit NDP S = {s}, A = {a_Damascus, a_Aleppo}, with R(a_Damascus, π) = {0 w.p. π(a_Damascus); 10 w.p. π(a_Aleppo)} and R(a_Aleppo, π) = {10 w.p. π(a_Damascus); 0 w.p. π(a_Aleppo)}, where "w.p." is again short for "with probability". In this NDP, randomising uniformly between a_Damascus and a_Aleppo is the unique optimal policy and in particular outperforms both deterministic policies. Note also that the Bellman optimality equation does not hold for NDPs. Even in Newcomb's Problem, as described above, Bellman's optimality equation is not satisfied by the optimal policy. \n Ratifiability If an agent in the limit only takes the actions with the highest Q-values and it converges to some policy π_∞, then it is clear that, for a given state, all actions in the support of π_∞ must have equal expected utility given π_∞. Otherwise, the Q-values would eventually reflect the differences in expected utility and the agent would move away from π_∞. Similarly, if the algorithm explores sufficiently often, the actions that are taken with limit probability 0 cannot be better given π_∞ than those taken by π_∞. After all, if they were better, the agent would have eventually figured this out and assigned them large probability. This condition on π_∞ resembles a well-known doctrine in philosophical decision theory: ratificationism (see Weirich, 2016, Sect. 3.6, for an overview). One form of ratificationism is based on a distinction between a decision (what the agent chooses) and the act that is selected by that decision. Very roughly, ratificationism then states that a decision is rational only if the acts it selects have the highest expected utility given the decision. Concepts of causality are often invoked to formalise the difference between the decision, the act, and their respective consequences.
Our setup, however, has such a differentiation built in: we will view the policy as the \"decision\" and the action sampled from it as the \"act\". \n Strong Ratifiability As hinted earlier, slightly different versions of the concept of ratifiability are relevant depending on how much exploration a learning algorithm guarantees. We start with the stronger version, which more closely resembles what decision theorists mean when they speak about ratifiability. Definition 1. Let M ✓ S be a set of states. A policy ⇡ is strongly ratifiable on M if supp(⇡(• | s)) ✓ arg max a2A q ⇡ (a | s) for all s 2 M . In Newcomb's Problem the only strongly ratifiable policy is to play a 2 with probability 1. In Death in Damascus, only the optimal policy (mixing uniformly) is strongly ratifiable. There can also be several strongly ratifiable policies. For example, if you play the Coordination Game of Table 1 against an opponent who samples his action from the same policy as you then there are three strongly ratifiable policies; to select action a with probability 1, to select action b with probability 1, and to select a with probability 1 /3 and b with probability 2 /3. Theorem 2. Let A be a model-free reinforcement learning agent, and let ⇡ t and Q t be A's policy and Q-function at time t. Let A satisfy the following in a given NDP: • A is greedy in the limit, i.e. for all > 0, P (Q t (⇡ t (s)) max a Q t (a | s) ) ! 0 as t ! 1. • A's Q-values are accurate in the limit, i.e. if ⇡ t ! ⇡ 1 as t ! 1, then Q t ! q ⇡1 as t ! 1. Then if A's policy converges to ⇡ 1 then ⇡ 1 is strongly ratifiable on the states that are visited infinitely many times. a b a 2,2 0,0 b 0,0 1,1 \n Table 1: The Coordination Game In Appendix A we show that the Q-values of a tabular agent are accurate in the limit in any continuous NDP if the agent updates its Q-values with SARSA, Expected SARSA, or Q-learning, given that the agent explores infinitely often and uses appropriate learning rates. Since we would expect most well-designed agents to have accurate Q-values in the limit, Theorem 2 should apply very broadly. Using Kakutani's fixed-point theorem, it can be shown that every continuous NDP has a ratifiable policy. Theorem 3. Every continuous NDP has a strongly ratifiable policy. Of course, the fact that a ratifiable policy always exists does not necessarily mean that a reinforcement learning agent must converge to it -we will consider the question of whether or not this is the case in Sect. 3. It is also worth noting that a discontinuous NDP may not have any strongly ratifiable policy. It is a topic of ongoing discussion among philosophical decision theorists whether (strong) ratifiability should be considered a normative principle of rationality, see Weirich (2016, Sect. 3.6) for details. In general, the policy ⇡ that maximises E[R | ⇡] may or may not be ratifiable, as shown by Death in Damascus and Newcomb's problem, respectively. There is a correspondence between ratificationism and many game-theoretic concepts. For example, if you are playing a zero-sum game against an opponent who can see your policy and plays some distribution over best responses to it then ⇡ can only be ratifiable if it is a maximin strategy. To give another example, if you are playing a symmetric game against an opponent who follows the same policy as you then ⇡ is ratifiable if and only if (⇡, ⇡) is a Nash equilibrium. Joyce and Gibbard (1998, Sect. 5) discuss the relation in more detail. 
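As a small numerical illustration of these claims (a sketch, taking the reward numbers given above at face value; the helper names are mine), the following Python snippet checks strong ratifiability for the bandit examples discussed in this section.

```python
# Numerical check (a sketch, not from the paper) of strong ratifiability in the
# bandit NDPs discussed above. For a bandit NDP, a policy is strongly ratifiable
# iff every action in its support attains max_a q_policy(a).

def q_newcomb(p):                 # p = probability of one-boxing (a1)
    return {"a1": 10 * p, "a2": 15 * p + 5 * (1 - p)}

def q_death_in_damascus(p):       # p = probability of staying in Damascus
    return {"damascus": 10 * (1 - p), "aleppo": 10 * p}

def q_coordination(p):            # p = probability of action a; opponent samples the same policy
    return {"a": 2 * p, "b": 1 * (1 - p)}

def strongly_ratifiable(q_fn, p, support, tol=1e-9):
    q = q_fn(p)
    best = max(q.values())
    return all(q[a] >= best - tol for a in support)

# Two-boxing is ratifiable in Newcomb's Problem; one-boxing is not:
print(strongly_ratifiable(q_newcomb, 0.0, ["a2"]))   # True
print(strongly_ratifiable(q_newcomb, 1.0, ["a1"]))   # False
# Uniform mixing is the unique ratifiable policy in Death in Damascus:
print(strongly_ratifiable(q_death_in_damascus, 0.5, ["damascus", "aleppo"]))  # True
# The Coordination Game has three ratifiable policies: always a, always b, and a with probability 1/3:
for p, support in [(1.0, ["a"]), (0.0, ["b"]), (1 / 3, ["a", "b"])]:
    print(strongly_ratifiable(q_coordination, p, support))  # True, True, True
```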
\n Weak Ratifiability We now show that even without infinite exploration, ⇡ 1 must still satisfy a weaker notion of ratifiability. Definition 4. Let M ✓ S be a set of states. A policy ⇡ is weakly ratifiable on M if q ⇡ (a | s) is constant across a 2 supp(⇡(s)) for all s 2 M . What makes this a weak version of ratifiability is that it does not put any requirements on the expected utility of actions that ⇡ does not take, it merely says that all actions that ⇡ takes with positive probability must have the same (actual) q-value. As a special case, this means that all deterministic policies are weakly ratifiable. This includes one-boxing in Newcomb's problem. Nonetheless, there are bandit NDPs in which the optimal policy is not even weakly ratifiable. For example, consider an NDP with actions a 1 , a 2 , where R(a 1 , ⇡) = 100(⇡(a 1 ) 1 /2) 2 + 1 and R(a 2 , ⇡) = 100(⇡(a 1 ) 1 /2) 2 . The optimal policy mixes close to uniformly (⇡(a 1 ) = 101 /200), but this is not weakly ratifiable, because R(a 1 , ⇡) > R(a 2 , ⇡). Theorem 5. Same conditions as Theorem 2, but where A's Q-values are only required to be accurate in the limit for state-action pairs that A visits infinitely many times. Then ⇡ 1 is weakly ratifiable on the set of states that are visited infinitely many times. \n Non-Convergence of Policies We have shown that most reinforcement learning algorithms can only converge to (strongly) ratifiable policies. We now consider the question of whether they always converge to a policy at all. We find that this is not the case. \n Theoretical Results From Theorem 2 it follows that in e.g. Death in Damascus an ✏-Greedy agent who explores infinitely often cannot converge to any policy. After all, the only strongly ratifiable policy (and thus limit policy) is to mix uniformly and an ✏-Greedy agent never mixes uniformly. Perhaps more surprisingly, there are also NDPs in which a (slow-cooling) softmax agent cannot converge to any policy. As an example, consider a bandit NDP with three actions a 1 , a 2 , a 3 , and where the rewards R(a i , ⇡) have expectations ⇡(a i+1 ) + 4•13 3 •⇡(a i ) [8j:⇡(a j ) 1 /4] Y j (⇡(a j ) 1 /4) . (1) For i = 3, we here let a i+1 = a 1 . We also require that the rewards are stochastic with a finite set of outcomes such that the empirical Q-values are never exactly equal between different actions. We call this the Repellor Problem. It has only one strongly ratifiable policy (mixing uniformly), but -as illustrated by Figure 1 -when the current policy mixes close to uniformly, the softmax agent learns (in expectation) to play less uniformly. Theorem 6. Let A be an agent that plays the Repellor Problem, explores infinitely often, and updates its Q-values with a learning rate ↵ t that is constant across actions, and let ⇡ t and Q t be A's policy and Q-function at time t. Assume also that for j 6 = i, if ⇡ t (a i ), ⇡ t (a j ) both converge to positive values, then ⇡ t (a i ) ⇡ t (a j ) Q t (a i ) Q t (a j ) ! a.s. \n 1 (2) as t ! 1. Then ⇡ t almost surely does not converge. Line 2 is satisfied, for example, for softmax agents with t converging to 0. Recall also that e.g. Q-learning and SARSA are equivalent for bandit NDPs (if = 0). \n Empirical Results Figure 1 : The triangle shows the space of possible policies in the Repellor Problem, parameterised by the probability they assign to each of the three actions. Plotted against this space is the expected direction in which a softmax agent would change its policy if playing a particular policy. 
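Returning briefly to the two-action counterexample from the Weak Ratifiability subsection: the following sketch checks it numerically, under the reconstructed rewards R(a_1, π) = −100(π(a_1) − 1/2)² + 1 and R(a_2, π) = −100(π(a_1) − 1/2)² (the minus signs are my assumption).

```python
# Numerical check (a sketch) of the two-action counterexample above, with
# reconstructed rewards r1(p) = -100*(p - 1/2)**2 + 1 and r2(p) = -100*(p - 1/2)**2,
# where p = pi(a1). (The signs are an assumption.)

def r1(p): return -100 * (p - 0.5) ** 2 + 1
def r2(p): return -100 * (p - 0.5) ** 2

def expected_reward(p):
    return p * r1(p) + (1 - p) * r2(p)   # = -100*(p - 1/2)**2 + p

# Grid search for the optimal mixing probability:
best_p = max((i / 10000 for i in range(10001)), key=expected_reward)
print(best_p)                   # 0.505 = 101/200, mixing close to uniformly
print(r1(best_p) - r2(best_p))  # 1.0: a1 strictly beats a2 under the optimal policy,
                                # so the optimal policy is not even weakly ratifiable
```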
Empirically, softmax agents converge (to strongly ratifiable policies) in many NDPs, provided that the temperature decreases sufficiently slowly. To illustrate this we will use Asymmetric Death in Damascus, a version of Death in Damascus wherein the rewards of a_Aleppo are changed to be 5 (instead of 0) with probability π(a_Aleppo) and (as before) 10 with the remaining probability. This NDP has only one (strongly) ratifiable policy, namely to go to Aleppo with probability 2/3 and Damascus with probability 1/3. This is also the optimal policy. We use this asymmetric version to make it easier to distinguish between convergence to the ratifiable policy and the default of uniform mixing at high temperatures. Figure 2 shows the probability of converging to this policy with a softmax agent, together with a plot of the policy on one run. We can see that this agent reliably converges provided that the cooling is sufficiently slow. However, there are also fairly simple games in which it seems like softmax agents cannot converge. Consider Loss-Averse Rock-Paper-Scissors (LARPS), the problem of playing Rock-Paper-Scissors against an opponent that selects each action with the same probability as you, and where you assign utility 1 to a win, 0 to a draw, and -10 to a loss. We conjecture that slow-cooling softmax agents do not converge in LARPS. We have unfortunately not been able to prove this formally, but Figure 3 presents some empirical data which corroborates the hypothesis.
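For readers who want to reproduce the flavour of these experiments, here is a minimal Python sketch (my own, not the authors' code) of a slow-cooling softmax agent in LARPS, using the historical-mean Q-values and 1/log t temperature schedule mentioned in the Figure 3 caption; the run length and print interval are arbitrary choices.

```python
import math, random

# Sketch of a slow-cooling softmax agent in LARPS: Rock-Paper-Scissors against an
# opponent who samples from the agent's own policy, with utilities +1 win, 0 draw,
# -10 loss. Historical-mean Q-values and temperature 1/log(t), per the Figure 3 caption.

BEATS = {0: 2, 1: 0, 2: 1}   # rock beats scissors, paper beats rock, scissors beats paper

def larps_reward(my_action, policy):
    opp_action = random.choices([0, 1, 2], weights=policy)[0]  # opponent copies the policy
    if my_action == opp_action:
        return 0.0
    return 1.0 if BEATS[my_action] == opp_action else -10.0

def softmax(Q, temp):
    exps = [math.exp(q / temp) for q in Q]
    s = sum(exps)
    return [e / s for e in exps]

Q, counts = [0.0, 0.0, 0.0], [0, 0, 0]
for t in range(1, 200001):
    temp = 1.0 / math.log(t + 2)
    policy = softmax(Q, temp)
    a = random.choices([0, 1, 2], weights=policy)[0]
    counts[a] += 1
    Q[a] += (larps_reward(a, policy) - Q[a]) / counts[a]   # historical-mean Q-value
    if t % 50000 == 0:
        print(t, [round(p, 2) for p in policy])   # typically keeps lurching between actions (cf. Figure 3)
```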
This condition is vaguely analogous to weak ratifiability, and is proven in roughly the same way as Theorem 2. Theorem 8. Same assumptions as Theorem 7. If |supp(p ⌃ )| > 1 then for all a 2 supp(p ⌃ ) there exists a 0 2 A s.t. q a (a 0 ) q a (a). This condition is an instability condition. Say that multiple actions are taken with nonzero limit frequency, and that action a has the highest Q-value at time t. Then for other actions to be played with positive limit frequency, other actions must at some point be believed to be optimal again (since the probability of exploration goes to zero). Hence they cannot all be worse when explored while mainly playing a, since a could otherwise be played forever. Theorem 9. Same assumptions as Theorem 7. Let U be the Q-value q a (a) which (by Theorem 7) is constant across a 2 supp(p ⌃ ). For any a 0 2 A supp(p ⌃ ) that is played infinitely often, let frequency 1 of the exploratory plays of a 0 happen when playing a policy near elements of {⇡ a | a 2 supp(p ⌃ )}. Then either there exists a 2 supp(p ⌃ ) such that q a (a 0 )  U ; or q a 0 (a 0 ) < U. Theorem 9 describes what circumstances are needed for an actions a 0 to be played with limit frequency zero. One possibility is that exploration is done only finitely many times (in which case bad luck could lead to low Q-values). A second possibility is that the exploration mechanism is \"rigged\" so that a 0 is mostly played when playing policies outside the proximity of {⇡ a | a 2 supp(p ⌃ )}. In this case the utility of a 0 under some zero-limit-frequency policy might lead to low Q-values. If exploration of a 0 is spread out more naturally then all but frequency zero of that exploration will happen near elements of {⇡ a | a 2 supp(p ⌃ )}. In this case, the only reason for a 0 to be played with zero frequency is that exploring a 0 near some of the elements of {⇡ a | a 2 supp(p ⌃ )} makes a 0 look poor. \n When is Frequency Convergence Possible? We believe there are NDPs in which an ✏-Greedy agent cannot converge to any limit action frequency. Specifically, we believe that LARPS is such an example. Figure 4a shows the directions in which the frequencies of different actions evolve. The graph seems to have no attractor and hence we believe an ✏-Greedy agent cannot converge to any limit action frequency in this NDP. We have not been able to rigorously prove this. However, experiments seem to confirm this hypothesis. Figure 4b depicts five runs of ✏-Greedy in LARPS. We can see that the agents oscillate between different actions, and that the periods increase in length. \n Related Work \n Learning in games Some of the Newcomblike dynamics we have described in this paper could also be modelled as games, especially as so-called Stackelberg games in which one player, the Stackelberg leader, chooses a strategy first and another player, the Stackelberg follower, then responds to that strategy. For example, in the case of autonomous vehicles (AVs), we might imagine that the AV company is the Stackelberg leader and the human drivers are the Stackelberg followers. That said, there are differences between NDPs and games. NDPs can react arbitrarily to the agent's policy, whereas in games, the other players play a best (i.e., expected-utility-maximising) response. In many real-world situations, other agents in the environment cannot be comfortably modelled as expected-utility-maximising agents. Interactions between AVs and humans can serve as examples. 
Most people probably do not reason rationally about small-probability, big-impact events, such as car crashes. Also, humans will generally operate on simplified models of an AV's policy (even when more detailed models are available). Of course, a game-theoretic analysis can also be fruitful and address issues that we ignore: By assuming all players to be rational, game theory can provide recommendations and predictions for multiple agents simultaneously, while our NDP model considers a single agent for a given environment. We believe that the NDP perspective provides additional insight into learning in such situations. Despite the differences between NDPs and games, there are some interesting parallels between model-free learning in NDPs and in games, where similar learning methods are sometimes referred to as fictitious play (Brown, 1951) . Fudenberg and Levine (1998, Chapter 2) show that fictitious play can only converge to a Nash equilibrium (for similar results about convergence to Nash equilibrium, see e.g. Mazumdar et al., 2020 , Oesterheld et al., 2021 . As noted in Sect. 2.1, the concept of Nash equilibrium resembles the concept of ratifiability. Shapley (1964) shows that fictitious play can fail to converge. However, there are many special cases in which convergence is guaranteed, including two-player zero-sum games (Robinson, 1951) and generic 2 ⇥ 2 games (Miyasawa, 1961) . \n Learning and Newcomblike problems Other authors have discussed learning in Newcomblike problems. The most common setup is one in which the learner assigns values directly to policies, or more generally to that which the agent chooses. It is then usually shown that (among the policies considered) the agent will converge to taking the one with the highest (so-called evidential) expected utility (Albert and Heiner, 2001; Oesterheld, 2018) . This contrasts with our setup, in which the learner selects policies but assigns values to actions. Oesterheld (2019) also studies agents who maximise reward in Newcomblike environments. However, Oesterheld does not consider the learning process. Instead he assumes that the agent has already formed beliefs and uses some form of expected utility maximisation. He also specifically considers the implications of having some overseer assign rewards based on beliefs about the state of the world (as opposed to having the reward come directly from the true world state). \n Discussion and Further Work We have seen that value-based reinforcement learning algorithms can fail to converge to any policy in some NDPs, and that when they do converge, they can only converge to ratifiable policies. Decision theorists have discussed whether ratifiability should be considered to be a sound normative principle. Note that (as discussed in Sect. 2) the optimal policies ⇡ are not in general ratifiable. We have also examined the limit action frequencies that agents can converge to (even when the policies do not converge). Still, there are NDPs in which many agents cannot converge even to any such frequency. We gave some results on what actions can be taken with positive limit frequency. A loose connection to ratifiability can still be drawn. Overall, established decision-theoretical ideas can be used to understand and formally describe the behaviour of \"out-of-the-box\" reinforcement learning agents in NDPs. However, their behaviour is not always desirable. Our work elucidates possible failures. 
We hope that our work will thus enable more accurate reasoning about the behaviour of RL agents in real-world situations, especially when interacting with humans or other agents. We hold such improvements in understanding to be broadly beneficial to the safe design and deployment of AI systems. Throughout the paper, we have noted specific open questions related to our results. For instance, can the results in Sect. 4.1 be generalised beyond the bandit setting? There are also many topics and questions about our setting that we have not touched on at all. For instance, our experimental results indicate that convergence is often slow (considering how simple the given problems are). It might be desirable to back up this impression with theoretical results. We have only studied simple value-based model-free algorithms; the analysis could be extended to other reinforcement learning algorithms (e.g., policy-gradient or model-based algorithms). Also, there are further ways in which we could generalise our setting. One example is to introduce partial observability and imperfect memory into the NDPs. The latter has been studied in game and decision theory (Piccione and Rubinstein, 1997; Elga, 2000), but recently, under the name memoryless POMDP, also in reinforcement learning (Azizzadenesheli et al., 2016; Steckelmacher et al., 2018; cf. Conitzer, 2019). What makes this especially appealing in the NDP context is that problems related to imperfect memory relate closely to Newcomblike problems (Briggs, 2010; Schwarz, 2015). One could also look for narrower classes of NDPs in which RL agents are guaranteed to perform well in some sense. Ultimately, the goal of this line of research is to develop learners that are able to deal effectively and safely with Newcomblike dynamics. We hope that our results will be useful in developing extensions of value-based RL that can detect and correct for the failures that arise when existing methods are applied in Newcomblike settings. However, we should also consider alternative approaches that do not hinge on insights from the present work. For example, a few recent papers (on learning in non-Newcomblike settings) have considered learning to predict the expected utility as a function of the policy (as opposed to traditional Q-values, which are not parameterised by the policy) (Harb et al., 2020). In principle, learning such a policy evaluation function avoids the problems of the learners considered in this paper. However, it remains to be seen how practical this approach is. \n Figure 2: The left figure plots the probability of softmax converging in Asymmetric Death in Damascus, for a cooling schedule indexed by α, against α. More accurately, it is a plot of the fraction of runs which assigned a Q-value of at least 5.5 to the action of going to Aleppo after 5000 iterations. These are empirical probabilities from 20,000 runs for every α that is a multiple of 0.025, and 510,000 runs for each α that is a multiple of 0.005 between 0.5 and 0.55. Notice the "kink" at α = 0.5. Based on our experiments, this kink is not an artefact and shows up reliably in this kind of graph. The right-hand figure shows how the action probabilities evolve over time for a single run (chosen to converge to the mixed strategy) for α = 0.3. \n Figure 3: This figure shows five runs of a softmax agent in LARPS, and plots π(a_rock) against the total number of episodes played. The agent's Q-values are the historical mean rewards for each action, and β_t = 1/log t.
\n Figure 4: (a) This figure plots the dynamics of LARPS for an ε-Greedy agent. Each point represents a triplet (f_R, f_S, f_P), where f_R denotes the fraction of past time steps at which a_R was estimated to be the best action, and similarly for f_S, f_P. Plotted against this space is the expected direction in which the frequencies will change. For instance, if in the past a_R was mostly played, then a_P will have the highest empirical Q-values and will therefore be played more in the future. (b) This figure shows five runs of an ε-Greedy agent in LARPS, and plots the proportion of past episodes in which the agent played "rock" against the total number of episodes played. The agent's Q-values are the historical mean rewards for each action, and its ε-value is 0.01. ¹ In most versions of Newcomb's Problem, the predictor directly predicts the agent's action with some fixed accuracy, and the agent is unable to randomise in a way that is unpredictable to the environment. This version of the problem can be modelled as a regular MDP. However, we believe that our version is more realistic in the context of AI. After all, AIs can at least act pseudo-randomly, while the distribution according to which they choose is predictable if e.g. their source code is known. ² For example, even if the environment has direct access to the source code of the agent, it may in general not be feasible to extract the exact action probabilities from the code. However, it is always possible to estimate the action probabilities by sampling. If this is done then T and R will depend continuously on the policy.", "date_published": "n/a", "url": "n/a", "filename": "NeurIPS-2021-reinforcement-learning-in-newcomblike-environments-Paper.tei.xml", "abstract": "Newcomblike decision problems have been studied extensively in the decision theory literature, but they have so far been largely absent in the reinforcement learning literature. In this paper we study value-based reinforcement learning algorithms in the Newcomblike setting, and answer some of the fundamental theoretical questions about the behaviour of such algorithms in these environments. We show that a value-based reinforcement learning agent cannot converge to a policy that is not ratifiable, i.e., does not only choose actions that are optimal given that policy. This gives us a powerful tool for reasoning about the limit behaviour of agents; for example, it lets us show that there are Newcomblike environments in which a reinforcement learning agent cannot converge to any optimal policy. We show that a ratifiable policy always exists in our setting, but that there are cases in which a reinforcement learning agent normally cannot converge to it (and hence cannot converge at all). We also prove several results about the possible limit behaviours of agents in cases where they do not converge to any policy.", "id": "ed33fd6b996c8e38f9de0687fa508721"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": [], "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "1912346.tei.xml", "abstract": "INCENTIVE COMPATIBILITY AND THE BARGAINING PROBLEM BY ROGER B. MYERSON Collective choice problems are studied from the Bayesian viewpoint.
It is shown that the set of expected utility allocations which are feasible with incentive-compatible mechanisms is compact and convex, and includes the equilibrium allocations for all other mechanisms. The generalized Nash solution proposed by Harsanyi and Selten is then applied to this set to define a bargaining solution for Bayesian collective choice problems.", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Brian Tomasik"], "title": "Risks of Astronomical Future Suffering", "text": "Epigraphs If we carry the green fire-brand from star to star, and ignite around each a conflagration of vitality, we can trigger a Universal metamorphosis. [...] Because of us [...] Slag will become soil, grass will sprout, flowers will bloom, and forests will spring up in once sterile places. [...] If we deny our awesome challenge; turn our backs on the living universe, and forsake our cosmic destiny, we will commit a crime of unutterable magnitude. -Marshall T. Savage, The Millennial Project: Colonizing the Galaxy in Eight Easy Steps, 1994 Let's pray that the human race never escapes from Earth to spread its iniquity elsewhere. -C.S. Lewis If you can't beat 'em, join 'em. -proverb 2 Human values may not control the future Nick Bostrom's "The Future of Human Evolution" (Bostrom, 2004) describes a scenario in which human values of fun, leisure, and relationships may be replaced by hyper-optimized agents that can better compete in the Darwinian race to control our future light cone.
\n Some scenarios for future suffering Even if humans do preserve control over the future of Earth-based life, there are still many ways in which space colonization would multiply suffering. Following are some of them. \n Spread of wild animals Humans may colonize other planets, spreading suffering-filled animal life via terraforming. Some humans may use their resources to seed life throughout the galaxy, which some sadly consider a moral imperative. \n Sentient simulations Given astronomical (Bostrom, 2003) computing power, post-humans may run various kinds of simulations. These sims may include many copies of wild-animal life, most of which dies painfully shortly after being born. For example, a superintelligence aiming to explore the distribution of extraterrestrials of different sorts might run vast numbers of simulations (Thiel, Bergmann and Grey, 2003) of evolution on various kinds of planets. Moreover, scientists might run even larger numbers of simulations of organisms-that-might-have-been, exploring the space of minds. They may simulate decillions of reinforcement learners that are sufficiently self-aware as to feel what we consider conscious pain. \n Suffering subroutines It could be that certain algorithms (say, reinforcement agents (Tomasik, 2014)) are very useful in performing complex machine-learning computations that need to be run at massive scale by advanced AI. These subroutines might be sufficiently similar to the pain programs in our own brains that we consider them to actually suffer. But profit and power may take precedence over pity, so these subroutines may be used widely throughout the AI's Matrioshka brains. \n Black Swans The range of scenarios that we can imagine is limited, and many more possibilities may emerge that we haven't thought of or maybe can't even comprehend. 4 Even a human-controlled future is likely to increase suffering If I had to make an estimate now, I would give ~70% probability that if humans choose to colonize space, this will cause more suffering than it reduces on intrinsic grounds (ignoring compromise considerations discussed later). Think about how space colonization could plausibly reduce suffering. For most of those mechanisms, there seem to be countermechanisms that will increase suffering at least as much. The following sections parallel those above. \n Spread of wild animals David Pearce coined the phrase \"cosmic rescue missions\" (Pearce, n.d.) in referring to the possibility of sending probes to other planets to alleviate the wild extraterrestrial (ET) suffering they contain. This is a nice idea, but there are a few problems. • We haven't found any ETs yet, so it's not obvious there are vast numbers of them waiting to be saved from Darwinian misery. Contrast this with the possibilities for spreading wild-animal suffering: • Humans may spread life to many planets (e.g., Mars via terraforming, other Earth-like planets via directed panspermia). The number of planets that can support life may be appreciably bigger than the number that already have it. (See the discussion of f l in the Drake equation.) Moreover, the percentage of planets that can be converted into computers that could simulate wild-animal suffering might be close to 100%. • We already know that Earth-based life is sentient, unlike for ETs. • Spreading biological life is slow and difficult, but disbursing small life-producing capsules is easier than dispatching Hedonistic Imperative probes or berserker probes. 
Fortunately, humans might not support spread of life that much, though some do. For terraforming, there are survival pressures to do it in the near term, but probably directed panspermia is a bigger problem in the long term. Also, given that terraforming is estimated to require at least thousands of years, while human-level digital intelligence should take at most a few hundred years to develop, terraforming may be a moot point from the perspective of catastrophic risks, since digital intelligence doesn't need terraformed planets. While I noted that ETs are not guaranteed to be sentient, I do think it's moderately likely that consciousness is fairly convergent among intelligent civilizations. This is based on (a) suggestions of convergent consciousness among animals on Earth and (b) the general principle that consciousness seems to be useful for planning, manipulating images, self-modeling, etc. On the other hand, maybe this reflects the paucity of my human imagination in conceiving of ways to be intelligent without consciousness. \n Sentient simulations It may be that biological suffering is a drop in the bucket compared with digital suffering. The biosphere of a planet is less than Type I on the Kardashev scale; it uses a tiny sliver of all the energy of its star. Intelligent computations by a Type II civilization can be many orders of magnitude higher. So humans' sims could be even more troubling than their spreading of wild animals. Of course, maybe there are ETs running sims of nature for science or amusement, or of minds in general to study biology, psychology, and sociology. If we encountered these ETs, maybe we could persuade them to be more humane. I think it's likely that humans are more empathetic than the average civilization because 1. we seem much more empathetic than the average animal on Earth, probably in part due to parental impulses and in part due to trade, although presumably some of these factors would necessarily be true of any technologically advanced civilization 2. selection bias implies that we'll agree with our own society's morals more than those of a random other society because these are the values that we were raised with and that our biology impels us toward. Based on these considerations, it seems plausible that there would be room for improvement through interaction with ETs. Indeed, we should in general expect it to be possible for any two civilizations or factions to achieve gains from compromise if they have diminishing marginal utility with respect to amount of control exerted. In addition, there may be cheap Pareto improvements to be had purely from increased intelligence and better understanding of important considerations. That said, there are some downside risks. Posthumans themselves might create suffering simulations, and what's worse, the sims that post-humans run would be more likely to be sentient than those run by random ETs because post-humans would have a tendency to simulate things closer to themselves in mind-space. They might run nature sims for aesthetic appreciation, lab sims for science experiments, or pet sims for pets. \n Suffering subroutines Suffering subroutines may be a convergent outcome of any AI, whether human-inspired or not. They might also be run by aliens, and maybe humans could ask aliens to design them in more humane ways, but this seems speculative. \n Black Swans It seems plausible that suffering in the future will be dominated by something totally unexpected. 
This could be a new discovery in physics, neuroscience, or even philosophy more generally. Some make the argument that because we know so very little now, it's better for humans to stick around because of the \"option value\": If they later realize it's bad to spread, they can stop, but if they realize they should spread, they can proceed to reduce suffering in some novel way that we haven't anticipated. Of course, the problem with the \"option value\" argument is that it assumes future humans do the right things, when in fact, based on examples of speculations we can imagine now, it seems future humans would probably do the wrong things much of the time. For instance, faced with a new discovery of obscene amounts of computing power somewhere, most humans would use it to run oodles more minds, some nontrivial fraction of which might suffer terribly. In general, most sources of immense power are double-edged swords that can create more happiness and more suffering, and the typical human impulse to promote life/consciousness rather than to remove them suggests that negative and negative-leaning utilitarians are on the losing side. Still, waiting and learning more is plausibly Kaldor-Hicks efficient, and maybe there are ways it can be made Pareto efficient by granting additional concessions to suffering reducers as compensation. \n What about paperclippers? Above I was largely assuming a human-oriented civilization with values that we recognize. But what if, as seems mildly likely, Earth is taken over by a paperclip maximizer, i.e., an unconstrained automation or optimization process? Wouldn't that reduce suffering because it would eliminate wild ETs as the paperclipper spread throughout the galaxy, without causing any additional suffering? Maybe, but if the paperclip maximizer is actually generally intelligent, then it won't stop at tiling the solar system with paperclips. It will want to do science, perform lab experiments on sentient creatures, possibly run suffering subroutines, and so forth. It will require lots of intelligent and potentially sentient robots to coordinate and maintain its paperclip factories, energy harvesters, and mining operations, as well as scientists and engineers to design them. And the paperclipping scenario would entail similar black swans as a human-inspired AI. Paperclippers would presumably be less intrinsically humane than a \"friendly AI\", so some might cause significantly more suffering than a friendly AI, though others might cause less, especially the \"minimizing\" paperclippers, e.g., cancer minimizers or death minimizers. If the paperclipper is not generally intelligent, I have a hard time seeing how it could cause human extinction. In this case it would be like many other catastrophic risks -deadly and destabilizing, but not capable of wiping out the human race. 6 What if human colonization is more humane than ET colonization? If we knew for certain that ETs would colonize our region of the universe if Earth-originating intelligence did not, then the question of whether humans should try to colonize space becomes less obvious. As noted above, it's plausible that humans are more compassionate than a random ET civilization would be. On the other hand, human-inspired computations might also entail more of what we consider to count as suffering because the mind architectures of the agents involved would be more familiar. And having more agents in competition for the light cone might lead to dangerous outcomes. 
But for the sake of argument, suppose an Earth-originating colonization wave would be better than the expected colonization wave of an ET civilization that would colonize later if we didn't do so. In particular, suppose that if human values colonized space, they would cause only −0.5 units of suffering, compared with −1 units if random ETs colonized space. Then it would seem that, as long as the probability P of some other ETs coming later is bigger than 0.5, it's better for humans to colonize and pre-empt the ETs from colonizing, since −0.5 > −1 • P for P > 0.5. However, this analysis forgets that even if Earth-originating intelligence does colonize space, it's not at all guaranteed that human values will control how that colonization proceeds. Evolutionary forces might distort compassionate human values into something unrecognizable. Alternatively, a rogue AI might replace humans and optimize for arbitrary values throughout the cosmos. In these cases, humans' greater-than-average compassion doesn't make much difference, so suppose that the value of these colonization waves would be −1, just like for colonization by random ETs. Let the probability be Q that these non-compassionate forces win control of Earth's colonization. Now the expected values are −1 • Q + −0.5 • (1 − Q) for Earth-originating colonization versus −1 • P if Earth doesn't colonize and leaves open the possibility of later ET colonization. For concreteness, say that Q = 0.5. (That seems plausibly too low to me, given how many times Earth has seen overhauls of hegemons in the past.) Then Earth-originating colonization is better if and only if −1 • 0.5 + −0.5 • 0.5 > −1 • P, i.e., −0.75 > −1 • P, i.e., P > 0.75. Given uncertainty about the Fermi paradox and Great Filter, it seems hard to maintain a probability greater than 75% that our future light cone would contain colonizing ETs if we don't ourselves colonize, although this section presents an interesting argument for thinking that the probability of future ETs is quite high. What if rogue AIs result in a different magnitude of disvalue from arbitrary ETs? Let H be the expected harm of colonization by a rogue AI. Assume ETs are as likely to develop rogue AIs as humans are. Then the disvalue of Earth-based colonization is H • Q + (−0.5) • (1 − Q), and the harm of ET colonization is P • (H • Q + (−1) • (1 − Q)). Again taking Q = 0.5, Earth-based colonization has better expected value if H • 0.5 + (−0.5) • 0.5 > P • (H • 0.5 + (−1) • 0.5), i.e., H − 0.5 > P • (H − 1), i.e., P > (H − 0.5) / (H − 1), where the inequality flips around when we divide by the negative number (H − 1). Figure 1 represents a plot of these threshold values for P as a function of H. Even if H = 0 and a rogue AI caused no suffering, it would still only be better for Earth-originating intelligence to colonize if P > 0.5, i.e., if the probability of ETs colonizing in its place was at least 50%. These calculations involve many assumptions, and it could turn out that Earth-based colonization has higher expected value given certain parameter values. This is a main reason I maintain uncertainty as to the sign of Earth-based space colonization. However, this whole section was premised on human-inspired colonization being better than ET-inspired colonization, and the reverse might also be true, since computations of the future are more likely to be closer to what we most value and disvalue if humans do the colonizing.
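To make the threshold calculation above concrete, here is a minimal Python sketch (not from the original text) that reproduces the expected-value comparison for given values of H and Q. The function names and the sample values of H are illustrative assumptions; the only inputs taken from the text are the disvalues −0.5 (compassionate human colonization) and −1 (ET or rogue-AI colonization).

```python
def ev_earth_colonization(h_rogue, q_rogue, v_human=-0.5):
    """Expected disvalue if Earth-originating intelligence colonizes.

    h_rogue: disvalue if a non-compassionate force (e.g. a rogue AI) wins control.
    q_rogue: probability that such a force wins control of Earth's colonization.
    v_human: disvalue if compassionate human values stay in control.
    """
    return h_rogue * q_rogue + v_human * (1 - q_rogue)


def ev_if_earth_abstains(p_et, h_rogue, q_rogue, v_et=-1.0):
    """Expected disvalue if Earth abstains and ETs may colonize later with probability p_et.

    ETs are assumed equally likely to lose control to their own rogue AIs.
    """
    return p_et * (h_rogue * q_rogue + v_et * (1 - q_rogue))


def threshold_p(h_rogue, q_rogue=0.5):
    """Probability of later ET colonization above which Earth-based colonization
    has the better (less negative) expected value, assuming h_rogue <= 0.

    Because both expected values are negative, dividing by the ET-side coefficient
    flips the inequality, giving P > (H - 0.5) / (H - 1) when Q = 0.5, as in the text.
    """
    return ev_earth_colonization(h_rogue, q_rogue) / ev_if_earth_abstains(1.0, h_rogue, q_rogue)


if __name__ == "__main__":
    for h in (0.0, -0.5, -1.0, -2.0):
        print(f"H = {h:5.1f}: colonizing beats abstaining only if P > {threshold_p(h):.3f}")
    # H = 0.0 reproduces the P > 0.5 case, and H = -1.0 the P > 0.75 case, from the text.
```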
\n Why we should remain cooperative If technological development and space colonization seem poised to cause astronomical amounts of suffering, shouldn't we do our best to stop them? Well, it is worth having a discussion about the extent to which we as a society want these outcomes, but my guess is that someone will continue them, and this would be hard to curtail without extreme measures. Eventually, those who go on developing the technologies will hold most of the world's power. These people will, if only by selection effect, have strong desires to develop AI and colonize space. Resistance might not be completely futile. There's some small chance that suffering reducers could influence society in such a way as to prevent space colonization. But it would be better for suffering reducers, rather than fighting technologists, to compromise with them: We'll let you spread into the cosmos if you give more weight to our concerns about future suffering. Rather than offering a very tiny chance of complete victory for suffering reducers, this cooperation approach offers a higher chance of getting an appreciable fraction of the total suffering reduction that we want. In addition, compromise means that suffering reducers can also win in the scenario (30% likely in my view) that technological development does prevent more suffering than it causes even apart from considerations of strategic compromise with other people. Ideally these compromises would take the form of robust bargaining arrangements. Some examples are possible even in the short term, such as if suffering reducers and space-colonization advocates agree to cancel opposing funding in support of some commonly agreed-upon project instead. The strategic question of where to invest resources to advance your values at any given time amounts to a prisoner's dilemma with other value systems, and because we repeatedly make choices about where to invest, what stances to adopt, and what policies to push for, these prisoner's dilemmas are iterated. In Robert Axelrod's tournaments on the iterated prisoner's dilemma, the best-performing strategies were always \"nice,\" i.e., not the first to defect (a toy simulation of this dynamic is sketched below). Thus, suffering reducers should not be the first to defect against space colonizers. Of course, if it seems that space colonizers show no movement toward suffering reduction, then we should also be \"provocable\" to temporary defection until the other side does begin to recognize our concerns. We who are nervous about space colonization stand a lot to gain from allying with its supporters, in terms of thinking about what scenarios might happen and how to shape the future in better directions. We also want to remain friends because this means pro-colonization people will take our ideas more seriously. Even if space colonization happens, there will remain many sub-questions on which suffering reducers want to have a say: e.g., not spreading wildlife, not creating suffering simulations/subroutines, etc. We want to make sure suffering reducers don't become a despised group. For example, think about how eugenics is more taboo because of the Nazi atrocities than it would have been otherwise. Anti-technology people are sometimes smeared by association with the Unabomber. Animal supporters can be tarnished by the violent tactics of a few, or even by the antics of PETA. We need to be cautious about something similar happening for suffering reduction.
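Here is the toy simulation referenced above: a minimal Python round-robin in the spirit of Axelrod's iterated prisoner's dilemma tournaments. The payoff matrix, strategy set, and round count are illustrative assumptions, not Axelrod's actual entries; the point is only that nice-but-provocable strategies such as tit-for-tat come out ahead against this mix, while unconditional defection does not.

```python
import itertools

# Standard prisoner's dilemma payoffs (row, column); these particular numbers
# are illustrative, not the exact settings of Axelrod's tournaments.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def tit_for_tat(my_history, their_history):
    """'Nice' but 'provocable': cooperate first, then mirror the opponent's last move."""
    return "C" if not their_history else their_history[-1]

def grim_trigger(my_history, their_history):
    """Cooperate until the opponent defects once, then defect forever."""
    return "D" if "D" in their_history else "C"

def always_cooperate(my_history, their_history):
    return "C"

def always_defect(my_history, their_history):
    return "D"

def play(strat_a, strat_b, rounds=200):
    """Total payoffs for two strategies over an iterated game."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    strategies = {
        "tit_for_tat": tit_for_tat,
        "grim_trigger": grim_trigger,
        "always_cooperate": always_cooperate,
        "always_defect": always_defect,
    }
    totals = dict.fromkeys(strategies, 0)
    # Round-robin, including a match against a twin of oneself (counted once).
    for (na, sa), (nb, sb) in itertools.combinations_with_replacement(strategies.items(), 2):
        score_a, score_b = play(sa, sb)
        totals[na] += score_a
        if na != nb:
            totals[nb] += score_b
    for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{name:16s} {total}")
    # In this toy population the "nice" strategies top the table and
    # unconditional defection finishes last.
```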
Most people already care a lot about preventing suffering, and we don't want people to start saying, \"Oh, you care about preventing harm to powerless creatures? What are you, one of those suffering reducers?\" where \"suffering reducers\" has become such a bad name that it evokes automatic hatred. So not only is cooperation with colonization supporters the more promising option, but it's arguably the only net-positive option for us. Taking a more confrontational stance risks hardening the opposition and turning people away from our message. Remember, preventing future suffering is something that everyone cares about, and we shouldn't erode that fact by being excessively antagonistic. \n Possible upsides to an intelligent future \n Black swans that don't cut both ways Many speculative scenarios that would allow for vastly reducing suffering in the multiverse would also allow for vastly increasing it: When you can decrease the number of organisms that exist, you can also increase the number, and those who favor creating more happiness / life / complexity / etc. will tend to want to push for the increasing side. However, there may be some black swans that really are one-sided, in the sense that more knowledge is most likely to result in a decrease of suffering. For example: We might discover that certain routine physical operations map onto our conceptions of suffering. People might be able to develop ways to re-engineer those physical processes to reduce the suffering they contain. If this could be done without a big sacrifice to happiness or other values, most people would be on board, assuming that present-day values have some share of representation in future decisions. This may be a fairly big deal. I give nontrivial probability (maybe ~10%?) that I would, upon sufficient reflection, adopt a highly inclusive view of what counts as suffering, such that I would feel that significant portions of the whole multiverse contain suffering-dense physical processes. After all, the mechanics of suffering can be seen as really simple when you think about them a certain way, and as best I can tell, what makes animal suffering special are the bells and whistles that animal sentience involves over and above crude physics - things like complex learning, thinking, memory, etc. But why can't other physical objects in the multiverse be the bells and whistles that attend suffering by other physical processes? This is all very speculative, but we can only begin to imagine what understandings of the multiverse our descendants would arrive at. \n Valuing reflection If we care to some extent about moral reflection on our own values, rather than assuming that suffering reduction of a particular flavor is undoubtedly the best way to go, then we have more reason to support a technologically advanced future, at least if it's reflective. In an idealized scenario like coherent extrapolated volition (CEV) (Yudkowsky, 2004), say, if suffering reduction was the most compelling moral view, others would see this fact. 2 Indeed, all the arguments any moral philosopher has made would be put on the table for consideration (plus many more that no philosopher has yet made), and people would have a chance to even experience extreme suffering, in a controlled way, in order to assess how bad it is compared with other things. Perhaps there would be analytic approaches for predicting what people would say about how bad torture was without actually torturing them to find out.
And of course, we could read through humanity's historical record and all the writings on the Internet to learn more about what actual people have said about torture, although we'd need to correct for will-to-live bias and deficits of accuracy when remembering emotions in hindsight. But, importantly, in a CEV scenario, all of those qualifications can be taken into account by people much smarter than ourselves. Of course, this rosy picture is not a likely future outcome. Historically, forces seize control because they best exert their power. It's quite plausible that someone will take over the future by disregarding the wishes of everyone else, rather than by combining and idealizing them. Or maybe concern for the powerless will just fall by the wayside, because it's not really adaptive for powerful agents to care about weak ones, unless there are strong, stable social pressures to do so. This suggests that improving prospects for a reflective, tolerant future may be an important undertaking. Rather than focusing on whether or not the future happens, I think it's more valuable for suffering reducers to focus on making the future better if it happens -by encouraging compromise, moral reflectiveness, philosophical wisdom, and altruism, all of which make everyone better off in expectation. Figure 1 : 1 Figure 1: Plot of threshold values for P as a function of H \n\t\t\t Because nature contains such vast amounts of suffering, I would strongly dislike such a project. I include this quotation for rhetorical effect and to give a sense of how others see the situation. \n\t\t\t Of course, what's compelling to idealized-me would not necessarily be compelling to idealized-you. Value divergences may", "date_published": "n/a", "url": "n/a", "filename": "risks-of-astronomical-future-suffering.tei.xml", "abstract": "It's far from clear that human values will shape an Earth-based space-colonization wave, but even if they do, it seems more likely that space colonization will increase total suffering rather than decrease it. That said, other people care a lot about humanity's survival and spread into the cosmos, so I think suffering reducers should let others pursue their spacefaring dreams in exchange for stronger safety measures against future suffering. In general, I encourage people to focus on making an intergalactic future more humane if it happens rather than making sure there will be an intergalactic future.", "id": "1ee4fa19becb11250cb9ba4cd915a4aa"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Trevor N White", "Seth D Baum", "Patrick Lin", "George Bekey", "Keith Abney", "Ryan Jenkins"], "title": "Liability Law for Present and Future Robotics Technology", "text": "Introduction In June 2005, a surgical robot at a hospital in Philadelphia malfunctioned during a prostate surgery, possibly injuring the patient. 1 In June 2015, a worker at a Volkswagen plant in Germany was crushed to death by a robot that was part of the assembly process. 2 In November 2015, a self-driving car in California made a complete stop at an intersection and then was hit by a car with a human driver, apparently because the self-driving car followed traffic law but not traffic norms. 3 These are just some of the ways that robots are already implicated in harms. As robots become more sophisticated and more widely adopted, the potential for harm will get even larger. Robots even show potential for causing harm at massive catastrophic scales. How should robot harms be governed? 
In general, liability law governs harms in which someone or something else is responsible. Liability law is used to punish those who have caused harms, particularly those that could have and should have been avoided. The threat of punishment further serves to discourage those who could cause harm. Liability law is thus an important legal tool for serving justice and advancing the general welfare of society and its members. The value of liability law holds for robotics just as it does for any other harm-causing technology. But robots are not just any other technology. Robots are (or at least can be) intelligent, autonomous actors moving about in the physical world. They can cause harms through actions that they choose to make, actions that no human told them to make and, indeed, that may surprise their human creators. Perhaps robots should be liable for their harms. This is a historic moment: humans creating technology that could potentially be liable for its own actions. Furthermore, robots can have the strength of industrial machinery and the intelligence of advanced computer systems. Robots can also be mass produced and connected to each other and to other technological systems. This creates the potential for robots to cause unusually great harm. This paper addresses how liability law can and should account for robots, including robots that exist today and robots that potentially could be built at some point in the near or distant future. Three types of cases are distinguished, each with very different implications. First are cases in which some human party is liable, such as the manufacturer or the human using the robot. These cases pose no novel challenges for liability law: they are handled the same way as with other technologies in comparable circumstances. Second are cases in which the robot itself is liable. These cases require dramatic revision to liability law, including standards to assess when robots can be held liable and principles for dividing liability between the robot and the humans who designed, built, and used it. Third are cases in which the robot poses a major catastrophic risk. These cases merit separate attention because a sufficiently large catastrophe would destroy the legal system and thus the potential to hold anyone or anything liable. The three types of cases differ across two dimensions as shown in Figure 1 . One dimension is the robot's degree of legal personhood, meaning the extent to which a robot shows attributes that qualify it for independent standing in a court of law. As we discuss, a robot can be held liable in the eyes of the law to the extent that it merits legal personhood. The other dimension shows the size of the harm the robot causes. Harms of extreme severity cannot be handled by liability law. However, there is no strict distinction between the three cases. Instead, there is a continuum, as shown by the regions in which a robot can have partial liability or more-than-human liability and in which liability law works to a limited extent. \n I -A Human Party Is Liable In a detailed study of robot law, Weaver (2014, 21-27) identifies four types of parties that could be liable for harm caused by a robot: (1) people who were using the robot or overseeing its use; (2) other people who were not using the robot but otherwise came into contact with it, which can include people harmed by the robot; (3) some party involved in the robot's production and distribution, such as the company that manufactured the robot; or (4) the robot itself. 
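As a rough sketch of the two-dimensional scheme just described (degree of legal personhood versus size of harm), the following Python fragment encodes the three cases this paper distinguishes. The thresholds, names, and sharp cut-offs are illustrative assumptions of ours; the paper itself describes a continuum with partial and more-than-human liability rather than crisp boundaries.

```python
from enum import Enum

class Regime(Enum):
    HUMAN_PARTY_LIABLE = "a human party is liable (Section I)"
    ROBOT_LIABLE = "the robot itself can be held liable (Section II)"
    PRECAUTION_REQUIRED = "catastrophic risk: liability law cannot help (Section III)"

def applicable_regime(personhood_degree: float, harm_severity: float,
                      personhood_threshold: float = 0.5,
                      catastrophe_threshold: float = 0.9) -> Regime:
    """Map the two dimensions of Figure 1 to a liability regime.

    personhood_degree: 0.0 (mere object) .. 1.0 (full legal person).
    harm_severity: 0.0 (trivial) .. 1.0 (civilization-ending).
    The numeric thresholds are placeholders for what is really a continuum.
    """
    if harm_severity >= catastrophe_threshold:
        # Beyond some severity, no post-catastrophe legal system would remain.
        return Regime.PRECAUTION_REQUIRED
    if personhood_degree >= personhood_threshold:
        return Regime.ROBOT_LIABLE
    return Regime.HUMAN_PARTY_LIABLE

if __name__ == "__main__":
    print(applicable_regime(0.1, 0.2))   # e.g. a present-day surgical robot
    print(applicable_regime(0.8, 0.2))   # a hypothetical robot meriting personhood
    print(applicable_regime(0.8, 0.95))  # a potential global catastrophe
```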
For the first three types of parties, liability applies the same as for other technologies. A surgical robot, for example, can be misused by the surgeon (type 1), bumped into by a hospital visitor who wandered into a restricted area (type 2), or poorly built by the manufacturer (type 3). The same situations can also arise for other, non-robotic medical technologies. In each case, the application of liability law is straightforward. Or rather, to the extent that the application of liability law is not straightforward, the challenges faced are familiar. The fourth type-when the robot is liable-is the only one that poses novel challenges for liability law. To see this, consider one of the thornier cases of robot liability, that of lethal autonomous weapon systems (LAWSs). These are weapons that decide for themselves whom to kill. Sparrow (2007) argues that there could be no one liable for certain LAWS harms-for example, if a LAWS decides to kill civilians or soldiers who have surrendered. A sufficiently autonomous LAWS could make its own decisions, regardless of how humans designed and deployed it. In this case, Sparrow argues, it would be unfair to hold the designer or deployer liable (or the manufacturer or other human parties). It might further be inappropriate to hold the robot itself liable, if it is not sufficiently advanced in legally relevant ways (more on this in Section II). In this case, who or what to hold liable is ambiguous. This ambiguous liability is indeed a challenge, but it is a familiar one. In the military context, precedents include child soldiers (Sparrow 2007, 73-74) and landmines (Hammond 2015, note 62). Child soldiers can make their own decisions, disobey orders, and cause harm in the process. Landmines can linger long after a conflict, making it difficult or impossible to identify who is responsible for their placement. In both cases, it can be difficult or perhaps impossible to determine who is liable. So too for LAWSs. This ambiguous liability can be a reason to avoid or even ban the use of child soldiers, landmines, and LAWSs in armed conflict. Regardless, even for this relatively thorny case of robot liability, robotics technology raises no new challenges for liability law. In the United States, agencies such as the Department of Defense produce regulations on the use of LAWSs which are not dramatically different than for other weapons. Internationally, bodies like the UN's International Court of Justice could hold a state liable for authorizing drone strikes that caused excessive civilian casualties. Meanwhile, commercial drones can be regulated as other aircraft are now: by a combination of the FAA and corporate oversight by their creators (McFarland 2015). The handling of such relatively simple robots under liability law will thus be familiar if not straightforward. The above LAWSs examples also resemble how liability law handles non-human animals, which has prompted proposals for robots to be given legal status similar to non-human animals (e.g., Kelley et al. 2010). Suppose someone gets a pet dog and then the dog bites someone, despite the owner trying to stop it. If this person had absolutely no idea the dog would bite someone, then she would not be liable for that bite. However, having seen the dog bite someone, she now knows the dog is a biter, and is now expected to exercise caution with it in the future. If the dog bites again, she can be liable. In legal terms, this is known as having scienter: knowledge of the potential harm.
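A toy model of the scienter rule just illustrated with the dog example may help. The Python sketch below uses class and method names of our own invention rather than legal terminology: the owner is not liable for a harm she could not have foreseen, but once she has knowledge of the propensity, liability attaches to subsequent harms of that kind.

```python
class Owner:
    """Tracks what an owner knows about a robot's (or animal's) propensity to harm."""

    def __init__(self):
        self.known_harms = set()  # kinds of harm the owner has already observed

    def liable_for(self, harm_kind: str) -> bool:
        """Simplified scienter rule: liability attaches only once the owner
        already knew this kind of harm could occur."""
        return harm_kind in self.known_harms

    def observe(self, harm_kind: str) -> None:
        """After an incident, the owner has scienter for that kind of harm."""
        self.known_harms.add(harm_kind)


if __name__ == "__main__":
    owner = Owner()
    print(owner.liable_for("bite"))   # False: the first bite was unforeseeable
    owner.observe("bite")             # the dog (or robot) is now a known biter
    print(owner.liable_for("bite"))   # True: subsequent bites carry liability
```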
Scienter could also apply to LAWSs or other robots that are not expected to cause certain harms. Once the robots are observed causing those harms, their owners or users could be liable for subsequent harms. For comparison, the Google Photos computer system raised controversy in 2015 when it mislabeled photographs of black people as \"gorillas\" (Hernandez 2015) . No Google programmer instructed Photos to do this; it was a surprise, arising from the nature of Photos's algorithm. Google acted immediately to apologize and fix Photos. While it did not have scienter for the gorilla incident, it would for any subsequent offenses. 4 The same logic also applies for LAWSs or other types of robots. Again, as long as a human party was responsible for it, a robot does not pose novel challenges to liability law. Even if a human is ultimately liable, a robot could still be taken to court. This would occur, most likely, under in rem jurisdiction, in which the court treats an object of property as a party to a case when it cannot do so with a human owner. In rem cases include United States v. Fifty-Three Electus Parrots (1982) , in which a human brought parrots from southeast Asia to the U.S. in violation of an animal import law, and United States v. Forty Barrels & Twenty Kegs of Coca-Cola (1916) , in which the presence of caffeine in the beverage was at issue. In both cases, a human (or corporation) was ultimately considered liable, with the parrots and soda only serving as stand-ins. Robots could be taken to court in the same way, but they would not be considered liable except in a symbolic or proxy fashion. Again, since the robot is not ultimately liable, it poses no novel challenges to liability law. This is not to say that such robots do not pose challenges to liability law-only that these are familiar challenges. Indeed, the nascent literature on robot liability identifies a range of challenges, including assigning liability when robots can be modified by users (Calo 2011) , when they behave in surprising ways (Vladeck 2014) , and when the complexity of robot systems makes it difficult to diagnose who is at fault (Funkhouser 2013; Gurney 2013) . There are also concerns that liability laws could impede the adoption of socially beneficial robotics (e.g., Marchant and Lindor 2012; Wu 2016 ). However, these challenges all point to familiar solutions based in various ways of holding manufacturers, users, and other human parties liable. Fine tuning the details is an important and nontrivial task, but it is not a revolutionary one. The familiar nature of typical robots to liability law is further seen in court cases in which robots have been implicated in harms (Calo 2016 ). An early case is Brouse v. United States (1949) , in which two airplanes crashed, one of which was a US military plane that was using an early form of autopilot. The court rejected the US claim that it should not be liable because the plane was being controlled by the robotic autopilot; instead the court found that the human pilot in the plane is obligated to pay attention and avoid crashes. More recently, in Ferguson v. Bombardier Services Corp. (2007) , another airplane crash may have been attributable to the autopilot system, in which case the court would have found the autopilot manufacturer liable, not the autopilot itself, but instead it found that the airline had improperly loaded the plane. (See Calo 2016 for further discussion of these and other cases.) 
\n II -Robot Liability If a robot can be held liable, then liability law faces some major challenges in terms of which robots to hold liable for which harms, and in terms of how to divide liability between the robot and its human designers, manufacturers, users, etc. In this section, we will argue that robots should be able to be held liable to the extent that they qualify for legal personhood. First, though, let us briefly consider some alternative perspectives. One perspective is that, in an informal sense, any sort of object can be liable for a harm. The pollen in the air is liable for making you sneeze. The faulty gas pipe is liable for burning down your home. The earthquake is liable for destroying the bridge. This is not the sense of liability we address in this paper. Our focus is on legal liability, in which a party can be tried in court. Another perspective comes from the notion that the law ultimately derives from what members of a society want it to be. This is why laws are different in different jurisdictions and at different times. From this perspective, robots will be held liable whenever societies decide to hold them liable. There are difficult issues here, such as whether to give robots a say in whether they should be held liable. 5 Regardless, the fact that laws are products of societies need not end debate on what laws societies can and should have. To the contrary, it is incumbent upon members of society to have such debates. Within human society, in the United States and many other countries, parties can be held liable for harms to the extent that they qualify as legal persons. Legal personhood is the ability to have legal rights and obligations, such as the ability to enter contracts, sue or be sued, and be held liable for one's actions. Legal liability thus follows directly from legal personhood. Normal adult humans are full legal persons and can be held liable for their actions across a wide range of circumstances. Children, the mentally disabled, and corporations have partial legal personhood, and in turn can be held liable across a narrower range of circumstances. Non-human animals generally do not have personhood, though this status has been contested, especially for non-human primates. 6 The denial of legal personhood to non-human animals can be justified on grounds that they lack humans' cognitive sophistication and corresponding ability to participate in society. Such justification avoids charges of speciesism (a pro-human bias for no other reason than just happening to be human). However, the same justification implies that robots should merit legal personhood if they have capabilities similar to those of humans. As Hubbard (2011, 417) puts it, \"Absent some strong justification, a denial of personhood to an entity with at least an equal capacity for personhood would be inconsistent and contrary to the egalitarian aspect of liberalism.\" 7 The question of when robots can be liable thus becomes the question of when robots merit personhood. If robots merit personhood, then they can be held liable for harms they cause. Otherwise, they cannot be held liable, and instead liability must go to some human party, as is the case with non-human animals and other technologies or entities that can cause harm.
Hubbard proposes three criteria that a robot or other artificial intelligence should meet to merit personhood: (1) complex intellectual interaction skills, including the ability to communicate and learn from experience; (2) self-consciousness, including the ability to make one's own goals or life plan, and (3) community, meaning the ability to pursue mutual benefits within a group of persons. These three criteria, central to human concepts of personhood, may offer a reasonable standard for robot personhood. We will use these criteria for this paper while emphasizing that their definitude should be a matter of ongoing debate. Do Hubbard's criteria also apply for liability? Perhaps not for the criterion of selfconsciousness. The criterion makes sense for harms caused to a robot: only a conscious robot can experience harms as humans do. 8 This follows from, for example, classic utilitarianism, as in Bentham's line \"The question is not, Can they reason? nor, Can they talk? but, Can they suffer?\" However, the same logic does not apply to harms caused by a robot. Consider an advanced robot that meets all of Hubbard's criteria except that it lacks consciousness. Suppose the robot causes some harm-and, to be clear, the harm causes suffering to a human or to some other conscious person. Should the robot be held liable? The answer to this may depend on society's foundational reasoning for liability. If liability exists mainly to discourage or deter the commission of harms, then consciousness is unnecessary. The robot should be punished so long as doing so discourages the commission of future harms. The entities that get discouraged here could include the robot, other similar robots, conscious robots, and even humans. It is quite conceivable that non-conscious robots could be punished with some sort of reduced reward or utility as per whatever reward/utility function they might have (Majot and Yampolskiy 2014) . Specifically, they could be reprogrammed, deactivated or destroyed, or put in what is known as a \"Box\": digital solitary confinement restricting an AI's ability to communicate or function (Corwin 2002; Yudkowsky 2002) . To make this possible, however, such robots ought to be based (at least in part) on reinforcement learning or similar computing paradigms (except ones based on neural network algorithms, for reasons we explain later). Alternatively, if liability exists mainly for retribution, to bring justice to whomever committed the harm, then consciousness could be necessary. Whether it is necessary depends on the purpose of the punishment. If the punishment aims to worsen the life of the liable party, so as to \"balance things out,\" then consciousness seems necessary. It makes little sense to \"worsen\" the life of something that cannot experience the worsening. However, if the punishment aims to satisfy society's sense of justice, then consciousness may be unnecessary. Instead, it could be sufficient that members of society observe the punishment and see justice being served. 9 Whether the robot's consciousness would be necessary in this case would simply depend on whether society's sense of justice requires it to be conscious. This potential exception regarding consciousness is a good example of partial liability as shown in Figure 1 . The advanced, non-conscious robot can be held liable, but not in every case in which normal adult humans could. Specifically, the robot would not be held liable in certain cases where punishment is for retribution. 
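The reasoning in this passage can be condensed into a small decision procedure. The Python sketch below is a schematic rendering under stated simplifications: the criteria names follow Hubbard, but the boolean structure, the purpose labels, and the permissive treatment of public-justice retribution are our own illustrative assumptions rather than anything the paper commits to.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    # Hubbard's three criteria for personhood; consciousness is treated
    # separately because the text questions its relevance to liability.
    intellectual_interaction: bool
    self_consciousness: bool
    community: bool

def merits_personhood(r: Robot) -> bool:
    """Hubbard's criteria taken jointly."""
    return r.intellectual_interaction and r.self_consciousness and r.community

def can_be_held_liable(r: Robot, purpose: str) -> bool:
    """Whether a robot can be held liable, given the purpose of punishment.

    purpose: 'deterrence', 'retribution_worsen_life', or 'retribution_public_justice'.
    Following the text: deterrence does not require consciousness; retribution
    aimed at worsening the offender's life does; retribution aimed at satisfying
    society's sense of justice may not (assumed permissive here).
    """
    if not (r.intellectual_interaction and r.community):
        return False  # too unsophisticated; liability falls on a human party instead
    if purpose == "deterrence":
        return True   # punishment can discourage future harms regardless of consciousness
    if purpose == "retribution_worsen_life":
        return r.self_consciousness
    if purpose == "retribution_public_justice":
        return True
    raise ValueError(f"unknown purpose: {purpose}")

if __name__ == "__main__":
    advanced_but_unconscious = Robot(True, False, True)
    print(can_be_held_liable(advanced_but_unconscious, "deterrence"))              # True
    print(can_be_held_liable(advanced_but_unconscious, "retribution_worsen_life")) # False
```

This mirrors the partial liability described in the text: the same robot is liable when punishment serves deterrence but not when it serves life-worsening retribution.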
Other limitations to a robot's capabilities could also reduce the extent of its liability. Such robots would be analogous to children and mentally disabled adult humans, who are similarly not held liable for as many cases as normal adult humans are. Robots of less sophistication along any of Hubbard's three criteria (or whatever other criteria are ultimately established) should be liable to a lesser extent than robots that meet the criteria in full. What about robots of greater-than-human sophistication in Hubbard's three criteria? These would be robots with more advanced intellectual interaction skills, self-consciousness, or communal living ability. It is conceivable that such robots could exist-indeed, the idea dates back many decades (Good 1965 ). If they do come into existence, then by the above logic, they should be held to a higher liability standard than normal adult humans. Indeed, concepts such as negligence recognize human fallibility in many respects that a robot could surpass humans in, including reaction time, eyesight, and mental recall. The potential for holding robots to a higher standard of liability could offer one means of governing robots with greater-than-human capacities; more on this in Section III in the discussion of catastrophic risk. Before turning to catastrophic risk, there is one additional aspect of robot liability to consider: the division of liability among the robot itself and other parties that influence the robot's actions. These other parties can include the robot's designer, its manufacturer, and any users or operators it may have. These parties are comparable to a human's parents and employers, though the comparison is imperfect due to basic differences between humans and robots. One key difference is that robots are to a very large extent designed. Humans can be designed as well via genetic screening and related techniques, hence the term \"designer baby.\" But designers have much more control over the eventual character of robots than they do for humans. This suggests that robot designers should hold more liability for robots' actions than human parents should for their children's actions. If robot designers know that certain designs tend to yield harmful robots, then a case can be made for holding the designers at least partially liable for harms caused by those robots, even if the robots merit legal personhood. Designers could be similarly liable for building robots using opaque algorithms, such as neural networks and related deep learning methods, in which it is difficult to predict in advance whether the robot will cause harm. Those parties that commission the robot's design could be similarly liable. In court, the testimony of relevant industry experts would be valuable for proving whether any available, feasible safeguards to minimize such risks existed. Another difference is that, at least for now, the production of robots is elective, whereas the birthing of humans is required for the continuity of society. Society cannot currently function without humans, but it can function without robots. This fact suggests some lenience for parents in order to encourage procreation, and to be stricter with robot designers in order to safely ease society's transition into an era in which humans and their robot creations coexist. Such a gradual transition seems especially warranted in light of potential robot catastrophe scenarios. \n III -Catastrophic Robot/AI Liability \"Catastrophe\" has many meanings, many of which require no special legal attention. 
For example, a person's death is catastrophic for the deceased and her or his loved ones, yet the law is perfectly capable of addressing individual deaths caused by robots or AIs. However, a certain class of extreme catastrophe does merit special legal attention, due to its outsized severity and significance for human civilization. These are catastrophes that cause major, permanent harm to the entirety of global human civilization. Such catastrophes are commonly known as global catastrophes (Baum and Barrett 2016) or existential catastrophes (Bostrom 2013) . Following Posner (2004) , we will simply call them catastrophes. A range of catastrophic risks exist, including global warming, nuclear war, a pandemic, and collision between Earth and a large asteroid or comet. Recently, a body of scholarship has built up analyzing the possibility of catastrophe from certain types of future AI. Much of the attention has gone to \"superintelligent\" AI that outsmart humanity and \"achieve complete world domination\" (Bostrom 2014, 78 ; see also Müller 2015) . Such AI could harm humans through the use of robotics. Additionally, some experts believe that robotics could play an important role in the development of such AI (Baum et al. 2011) . Other catastrophe scenarios could also involve robotics. Robots could be used in the systems for launching nuclear weapons or for detecting incoming attacks, potentially resulting in unwanted nuclear wars. 10 They could be used in critical civil, transportation, or manufacturing infrastructure, contributing to a global systemic failure. 11 They could be used for geoengineering -the intentional manipulation of the global environment, such as to counteract global warming -and this could backfire, causing environmental catastrophe. 12 Robots could be used in establishing or maintaining an oppressive totalitarian world government. 13 Still further robot catastrophe scenarios may also be possible. The enormous scale of the catastrophes in question creates profound moral and legal dilemmas. If the harm is permanent, it impacts members of all future generations, which could be immensely many people. Earth will remain habitable for at least a billion more years, and the galaxy and the universe for much longer (Baum 2016) ; the present generation thus contains just a tiny fraction of all people who could exist. The legal standing and representation of members of future generations is a difficult question (Tonn 1996; Wolfe 2008 ). If members of future generations are to be counted, then they can overwhelm the calculus. Despite this, present generations unilaterally make the decisions. There is thus a tension in how to balance the interests of present and future generations (Page 2003) . A sufficiently large catastrophe raises similar issues even just within the context of the present generation. About seven billion humans live today; a catastrophe that risks killing all of them could be seven billion times larger than a catastrophe that risks killing just one. One could justify enormous effort to reduce that risk regardless of future generations (Posner 2004) . Further complications come from the irreversible nature of these catastrophes. In a sense, every event is irreversible: if someone wears a blue shirt today, no one can ever change the fact that they wore a blue shirt today. Such events are irreversible only in a trivial sense: you can change what shirt you wear on subsequent days. 
Nontrivially irreversible events are more or less permanent: if that person should die today, then nothing 14 can bring that person back to life. At a larger scale, nontrivially irreversible effects exist for many ecological shifts and may also exist for the collapse of human civilization (Baum and Handoh 2014) . The possibility of large and nontrivially irreversible harm creates a major reason to avoid taking certain risks. The precautionary principle is commonly invoked in this context, raising questions of just how cautious to be (Posner 2004; Sunstein 2006 ). An irreversible AI catastrophe could be too large for liability law to handle. In the simplest case, if the catastrophe results in human extinction, then there would be no one remaining to hold liable. A catastrophe that leaves some survivors but sees the collapse of human civilization would lack the legal system needed for holding people liable. Alternatively, AI could cause a catastrophe in which everyone is still alive but they have become enslaved or otherwise harmed by the AI; in this case the pre-catastrophe human authorities would lack the power needed to hold those at fault liable. For smaller catastrophes, the legal system may exist to a limited extent (Figure 1 ). In this case, it may be possible to bring the liable parties to trial and/or punish them, but not as reliably or completely as is possible under normal circumstances. The closest possible example would be creating special international proceedings, like the Nuremberg Trials, to deal with the aftermath. Much like such war tribunals, though, these may do little to address the chaos' original cause. This would leave victims or society at large wasting time and resources on reliving a tragedy (McMorran 2013) . Hence, instead of liability, a precautionary approach could be used. This would set a default policy of disallowing any activity with any remote chance of causing catastrophe. It could further place the burden of proof on those who wish to conduct such activity, requiring them to demonstrate in advance that it could not cause catastrophe. 15 Trial-and-error would not be permitted, because a single error could cause major irreversible harm. This would likely be a significant impediment for AI research and development (at least for the subset of AI that poses catastrophic risk), which, like other fields of technology, is likely to make extensive use of trial and error. Indeed, some AI researchers recommend a trial-and-error approach, in which AIs are gradually trained to learn human values so that they will not cause catastrophe (Goertzel 2016) . However, given the high stakes of AI catastrophe, perhaps these sorts of trial-and-error approaches should still be avoided. It may be possible to use a novel liability scheme to assist with a catastrophe-avoiding precautionary approach. In a wide-ranging discussion of legal measures to avoid catastrophe from emerging technologies, Wilson (2013, 356) proposes \"liability mechanisms to punish violators whether or not their activities cause any harm\". In effect, people would be held liable not for causing catastrophe, but for taking actions that could cause catastrophe. This proposal could be a successful component of a precautionary approach to catastrophic risk and is worth ongoing consideration. Taking the precautionary principle to the extreme can have undesirable consequences. All actions carry some risk. In some cases, it may be impossible to prove a robot does not have the potential to cause catastrophe. 
Therefore, requiring demonstrations of minimal risk prior to performing actions would be paralyzing (Sunstein 2006). Furthermore, many actions can reduce some risks even while increasing others; requiring precaution due to concern about one risk can cause net harm to society by denying opportunities to decrease other risks (Wiener 2002). AI research and development can pose significant risks, but it can also help reduce other risks. For AI that poses catastrophic risk, net risk will be minimized when the AI research and development is expected to bring a net reduction in catastrophic risk (Baum 2014). In summary, there are significant legal challenges raised by AI that poses catastrophic risk. Liability law, most critically, is of little help. Precautionary approaches can work instead, although care should be taken to avoid preventing AI from reducing different catastrophic risks. The legal challenges from AI that poses catastrophic risk are distinct from the challenges from other types of AI, but they are similar to the challenges from other catastrophic risks. \n Conclusion While robots benefit society in many ways, they also cause or are otherwise implicated in a variety of harms. The frequency and size of these harms are likely to increase as robots become more advanced and ubiquitous. Robots could even cause or contribute to a number of major global catastrophe scenarios. It is important for liability law to successfully govern these harms to the extent possible so that the harms are minimized and, when they do occur, that justice may be served. For many robot harms, a human party is ultimately liable. For these harms, traditional liability law applies. A major challenge to liability law comes when robots could be liable. Such cases require legal personhood tests for robots to assess the extent to which they can be liable. One promising personhood test evaluates the robot's intellectual interaction skills, self-consciousness, and communal living ability. Depending on how a robot fares on a personhood test, it could have the same liability as, or less or more liability than, a normal adult human. A robot being liable does not preclude a human party also being liable. Indeed, robot designers should expect more liability for robot harms than would human parents, because robots are designed so much more extensively than human children are. Finally, for robots that pose catastrophic risk, liability law cannot be counted on and a precautionary approach is warranted. People involved in the design, manufacture, and use of robots can limit their liability by choosing robots that reliably avoid harms. One potential way to improve reliability is to avoid computing paradigms such as neural nets that tend to result in surprising behaviors, or adapt these paradigms to make them less surprising (Huang and Xing 2002). Robot designs should be sufficiently transparent that the responsible human parties can, with reasonable confidence, determine in advance what harms could occur. They can then build safety restrictions into the robot or at least give warnings to robot users, as is common practice with other technologies. Robots should also go through rigorous safety testing before being placed into situations where they can cause harms. If robots cannot reliably avoid harms, then they probably should not be used in the first place. These sorts of safety guidelines should be especially strict for robots that could contribute to major global catastrophe.
A single catastrophe could permanently harm human civilization. It is thus crucial to avoid any catastrophe. Safety testing itself could be dangerous. This increases the value of transparent computing paradigms that let humans assess risks prior to building the robot. Legal measures must also take effect prior to the robot's build because there may be no legal system afterwards. Advanced robots may be less likely to cause catastrophe if they are designed to be upstanding legal persons. But even then, some legal system would need to exist to hold them liable for what harms they cause. As this paper illustrates, robot liability poses major new challenges to liability law. Meeting these challenges requires contributions from law, robotics, philosophy, risk analysis, and other fields. It is essential for humans with these various specialties to work together to build robot liability regimes that avoid harms while capturing the many benefits of robotics. The potential for harm is extremely large, making this an urgent task. We hope that humans and robots will coexist successfully and for mutual benefit in a community of responsible persons. Figure 1 . 1 Figure 1. Classification scheme for the applicability of liability law to various sizes of harms caused by various types of robots.", "date_published": "n/a", "url": "n/a", "filename": "026_robot-liability.tei.xml", "abstract": "Advances in robotics technology are causing major changes in manufacturing, transportation, medicine, and a number of other sectors. While many of these changes are beneficial, there will inevitably be some harms. Who or what is liable when a robot causes harm? This paper addresses how liability law can and should account for robots, including robots that exist today and robots that potentially could be built at some point in the near or distant future. Already, robots have been implicated in a variety of harms. However, current and near-future robots pose no significant challenge for liability law: they can be readily handled with existing liability law or minor variations thereof. We show this through examples from medical technology, drones, and consumer robotics. A greater challenge will arise if it becomes possible to build robots that merit legal personhood and thus can be held liable. Liability law for robot persons could draw on certain precedents, such as animal liability. However, legal innovations will be needed, in particular for determining which robots merit legal personhood. Finally, a major challenge comes from the possibility of future robots that could cause major global catastrophe. As with other global catastrophic risks, liability law could not apply, because there would be no postcatastrophe legal system to impose liability. Instead, law must be based on pre-catastrophe precautionary measures.", "id": "ce9685f2fde6dc3def3a5abe27299fad"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": [], "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "reddy20a-supp.tei.xml", "abstract": "Maximizing uncertainty. Following Lakshminarayanan et al. (2017), we measure ensemble disagreement using the average KL-divergence between the output of a single ensemble member and the ensemble mean, where p ✓ is the reward classifier defined in Section 3.2. Maximizing novelty. 
In this work, we use a distance function that computes the Euclidean distance between state embeddings, where f is the state encoder trained in step (1).", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Kyle Bogosian"], "title": "Implementation of Moral Uncertainty in Intelligent Machines", "text": "Introduction Advances in artificial intelligence have led to research into methods by which sufficiently intelligent systems can be guaranteed to follow ethically defensible behavior. Successful implementation of moral reasoning may be critical for managing the proliferation of autonomous vehicles, workers, weapons, and other systems as they increase in intelligence and complexity. Approaches towards moral decision making generally fall into two camps, \"topdown\" and \"bottom-up\" approaches (Allen et al. 2005) . Top-down morality is the explicit implementation of decision rules into artificial agents. Schemes for top-down decision making that have been proposed for intelligent machines include Kantian deontology (Arkoudas et al. 2005 ) and preference utilitarianism (Oesterheld 2016) . Bottom-up morality avoids reference to specific moral theories by developing systems that can implicitly learn to distinguish between moral and immoral behaviors, such as cognitive architectures designed to mimic human intuitions (Bello and Bringsjord 2012) . There are also hybrid approaches which merge insights from the two frameworks, such as one given by Wiltshire (2015) . \n The Problem of Moral Disagreement A problem which has been cited as an obstacle to the development of machine ethics is the lack of agreement among moral philosophers on which theory of ethics should be followed (Shulman et al. 2009; Bello and Bringsjord 2012; Bostrom 2014; Brundage 2014 ). The 2009 PhilPapers survey of philosophy faculty revealed that 26% accepted or leaned towards deontology, 24% accepted or leaned towards consequentialism, 18% accepted or leaned towards virtue ethics, and the remaining 32% favored other approaches entirely. For someone who believes in a particular approach to ethics, the correct system for implementation in artificial moral agents may be obvious, but the path forward for institutions and society as a whole remains unclear. Companies, governments, and researchers will have to decide which system to use for artificial agents and will be faced with a difficult choice between competing moral paradigms. This paper will describe a computational framework that will determine how we should design moral machines in the light of this disagreement, but first it is necessary to determine exactly why moral disagreement is a problem. Just because there exists disagreement does not necessarily imply that research and development programs cannot succeed in their objectives, nor does it necessarily imply that we need to worry about differences in opinion. The aforementioned works do not explicitly describe how moral disagreement would prevent us from building satisfactory moral machines, so we will need to clarify the nature of the problem before proceeding with a solution. There are two plausible reasons for why moral disagreement can pose a serious problem for the project of machine ethics. First, it could be a pragmatic problem as disagreements among engineers, policymakers and philosophers interfere with projects that require cooperation. 
In a worst-case scenario, decisions over which research programs to fund will turn into bitter ideological battles, research agencies will become bogged down in disputes, and developers will split up their resources and devolve into a competitive mindset which reduces information sharing and slows research progress, thus making it more difficult to construct moral machines. Second, it could be a moral problem. If we are so uncertain about morality and are split across many different moral principles, then the likelihood that anyone's particular moral system is entirely correct is statistically extremely low (Shulman et al. 2009) . Therefore, if we do build AIs grounded in any particular moral system, then they will probably be making many poor moral decisions. Bello and Bringsjord (2012) argue that moral disagreement provides a reason to avoid top-down approaches to machine ethics in favor of bottom-up or hybrid approaches that copy or take inspiration from human moral thinking. However, it is not clear how this would solve the problem of disagreement. The authors claim that people can agree on examples of good moral behavior despite disagreeing over specific theories, but people's personal moral judgements can differ as widely as moral theories do when faced with moral dilemmas (Greene et al. 2001 ) and when they are considering politicized moral issues such as racial fairness, animal farming, and economic inequality. Therefore, even if we attempt to circumvent normative disputes by eschewing top-down ethics in favor of bottom-up frameworks, it is perfectly plausible that researchers and government regulators with strong value disagreements will come into conflicts over the kinds of moral intuitions and data which should or shouldn't play a role in automated reasoning. Brundage (2014) also notes that there is no reason to expect that Bello and Bringsjord's approach would actually be reliably ethical. Moreover, in the cases where people do agree on moral choices despite disagreeing over moral theories, there is neither a pragmatic nor a philosophical problem with the top-down approach: a utilitarian engineer has no moral reason to care whether a robot's computational pattern is internally utilitarian or not, as long as it is actually maximizing utility; more generally, if we know that machines will act in the morally right way, then we have no reason to consider the specific programming of machine code to be morally significant. So to the extent that moral disagreements between ethical theories might pose pragmatic and moral problems for the development of top-down computational ethics, they would pose equal pragmatic and moral problems for the development of bottom-up computational ethics. \n An Example of Intractable Disagreement We can illustrate this problem with a case of self-driving cars which must be programmed to decide whether to swerve to avoid animals in the path of the vehicle. Various vehicle speeds, warning times, road shapes, weather conditions and other factors can make the decision to avoid animals more or less dangerous to the occupants of the vehicle in different situations, and different moral theories can mandate different choices in such cases. One moral principle which could specify when vehicles should or should not swerve could be described as the commensurable view. Under this doctrine, animal lives should be treated as if they are worth some small fraction of what human lives are worth. 
In addition, let us suppose that animals with a presumed greater capacity for cognition and pain sensations should be granted a greater weight in such tradeoffs. The machine, upon calculating a sufficiently low estimate of the probability that swerving would result in human fatality or verifying some heuristical checklist of features of the situation, would avoid the animal. The commensurable view is most clearly reminiscent of utilitarianism, but other moral theories may embrace it nonetheless-for instance, one could follow a mixed theory incorporating both deontological and consequentialist ethics and believe that consequentialist considerations dominate in this particular kind of situation. Vehicles acting in this manner can be statistically expected to eventually cause a few human fatalities but also to save many animals. If two people agreed on the commensurable view but disagreed on the specific tradeoff, compromise would be relatively straightforward. For instance, if one side argued that the chance of a fatal accident would have to be under 1 in 1000 in order to swerve for a certain animal while another side argued that it should be 1 in 10,000, they could plausibly compromise at 1 in 5500, which would function as a pragmatic truce and a limit to the degree of moral harm perceived by either viewpoint. The more pressing concern is how to compromise with the incommensurable view-that humans can never be placed at additional risk of fatality in order to save animals. This might be supported by moral theories emphasizing human rights and liberty. Since this view fundamentally rejects the axiological assumptions of the other and holds that no tradeoff is permissible, there is no obvious 'halfway point' where the competing principles can meet. Unilaterally selecting one principle to govern vehicle behavior would not be acceptable for the reasons described previously. Training the driving system to determine the correct action through supervised learning would only rephrase the issue by forcing us to decide which training examples ought to be classified as right or wrong. Allowing the company to decide based on its own interests would not clearly lead to safe and ethical behavior. Laws do not provide sufficient guidance in all such situations to govern the behavior of machines, since they are often left to the conscience of a human agent. Finally, human operators cannot be guaranteed to be available to make such quick decisions, to reliably specify their preferences beforehand, or to act ethically at all. The last resort would be for the vehicles to rely on a random number generator and follow the commensurable view half of the time and the incommensurable view half of the time. At first glance, this is the only straightforward way to build a system in a way that does not overwhelmingly disfavor one particular theory: if we cannot compromise between two views on how to act in a particular repeated decision by generating a new view, then the only way to act fairly is to alternate between them. Unfortunately, this sacrifices predictability by inducing stochasticity into decision making. Furthermore, it is not Pareto optimal. A vehicle which was programmed to randomly select the commensurable view in half and only half of relevant situations could be modified so that it specifically did so for the half of situations where the animal under threat was relatively important under the commensurable view. 
Such a modification makes the vehicle more predictable, since its decisions are deterministic given the environments it encounters. It also counts as a moral improvement by the lights of the commensurable view, because it prioritizes the animals which are regarded as more important. At the same time, it is no worse according to the incommensurable view, which could be indifferent to exactly which animals are saved. This makes the change a Pareto improvement. We can generalize this strategy by saying that we are keeping all moral theories in play at the time of decision making, and searching for actions which provide the most value across all of them, rather than seeking to select one moral theory and ignoring the others. This procedure, however ad hoc, seems to be the best possible way of approaching this particular case, and its principles could be fruitfully applied to other cases where moral values are not just in disagreement but wholly incommensurable. Ideally, we should have a rigorous theoretical framework which does similar work in the broad variety of moral situations by taking all moral preferences into consideration and responding appropriately. 
\n Dealing with Moral Disagreement Properly Moral disagreement has been recognized as a problem for normative ethics proper, independent of any concerns regarding machine ethics. A direct approach to overcoming it is to assume that there is a correct moral theory which we are searching for, acknowledge that we are fundamentally uncertain about which moral theory is correct, and then act in such a way as to give some weight to the judgements of different theories. More generally, given the failure of moral philosophy to reach satisfactory conclusions, it can be argued that we should adopt a framework of reasoning which takes multiple moral views into account (Lockhart 2000). However, the idea that we ought to change our behavior in accordance with moral uncertainty is controversial, and many philosophers have attempted to address the various mathematical and philosophical problems involved with the concept (Żuradzki 2016). In particular, it has been argued that making comparisons between the values and judgements of different moral theories is impossible (Nissan-Rozen 2015). A recent proposal aimed at answering these concerns was developed by William MacAskill in an extensive thesis (2014). It avoids objections based on incomparability and incommensurability by developing moral uncertainty as a voting problem among moral theories (MacAskill 2016). Voting provides a close analogy with the pragmatic problems faced by machine ethics: as voting is the general process by which decisions are made from the preferences of a population, computations of moral uncertainty represent a process by which agents can act in accordance with the diverse values of humanity. Since MacAskill's proposal works as a voting system where theories have equal say adjusted by their probability of being correct, it also approximates the framework which has been suggested for meta-moral reasoning by Nick Bostrom (2009). The rest of this paper will develop and defend the use of this model for machine ethics. 
\n Maximizing Expected Choiceworthiness The scheme presented in "Normative Uncertainty" (MacAskill 2014) is to make action-guiding judgements based on all moral theories in which the agent has some level of credence. 
MacAskill defines his approach as a metanormative theory, characterized as follows. The agent faces a decision-situation comprising a quintuple $\langle S, t, A, T, C \rangle$, where $S$ is the decision maker, $t$ is the time, and $A$ is the set of possible actions. $T$ is the set of normative theories under consideration, where each theory $T_i$ is a function of decision-situations that produces a cardinal or ordinal choiceworthiness score $CW_i(a)$ for every action $a \in A$. $C$ is a credence function assigning a value $C(T_i) \in [0, 1]$ to every $T_i \in T$. A metanormative theory is a function of decision-situations that produces an ordering of the actions in $A$ in terms of their appropriateness. MacAskill distinguishes between moral theories which assign cardinal rankings to options, as utilitarianism would, and moral theories which assign only ordinal rankings. Among cardinal theories, he further distinguishes between sets of theories whose moral values are directly intertheoretically comparable and cardinal theories which are incomparable with any other theory. He proposes the metanormative theory of maximizing expected choiceworthiness (MEC), which proceeds as follows:

1. Each set $K$ of $k$ moral theories whose choiceworthiness rankings are cardinal and intertheoretically comparable is aggregated into a single theory, where

$$C(T_K) = \sum_{i=1}^{k} C(T_i), \qquad CW_K(a) = \frac{\sum_{i=1}^{k} CW_i(a)\, C(T_i)}{\sum_{i=1}^{k} C(T_i)}.$$

In other words, the credence in the new theory equals the sum of the agent's credences in each of the individual theories in the set, and the choiceworthiness of an option according to the new theory is the credence-weighted average of its choiceworthiness across the individual theories.

2. The rankings of options according to each ordinal theory $o$ are converted into choiceworthiness scores using a modified Borda scoring rule designed to properly account for ties, denoted here $CW^B$. The score of an option is the number of worse options minus the number of better options:

$$CW^B_o(a) = \big|\{a' \in A : CW_o(a') < CW_o(a)\}\big| - \big|\{a' \in A : CW_o(a') > CW_o(a)\}\big|.$$

This violates the independence of irrelevant alternatives, but MacAskill provides reasons to allow the violation. First, independence of irrelevant alternatives is the least essential of the axioms in Arrow's Impossibility Theorem, and if a different axiom were violated there would be no prospect of a satisfactory metanormative account involving ordinal moral theories. Second, the primary motivation for independence of irrelevant alternatives is that it combats tactical voting, but tactical voting is not a problem with moral theories: theories are not agents, and a moral agent cannot conceal information from itself. Third, there are some moral cases where we would expect the independence of irrelevant alternatives to be violated (MacAskill 2014, p. 85).

3. The option scores provided by each aggregated set of cardinal comparable theories, by each ordinal theory, and by each cardinal incomparable theory $p$ are divided by the respective standard deviations of the scores which the moral theories (or sets of intertheoretically comparable cardinal theories) assign over a general set $G$ of actions. 
This provides normalized choiceworthiness rankings and is the only possible way of equalizing the value of voting for each value system (Cotton-Barratt 2013), which (as MacAskill argues, p. 115) performs the role of giving each theory equal say:

$$CW^N_K(a) = \frac{CW_K(a)}{\sigma\big(CW_K(G)\big)}, \qquad CW^N_o(a) = \frac{CW^B_o(a)}{\sigma\big(CW^B_o(G)\big)}, \qquad CW^N_p(a) = \frac{CW_p(a)}{\sigma\big(CW_p(G)\big)}.$$

A note on $G$: it is necessary for variance normalization that a representative set of actions (the "general set") be defined for computing the variance. This provides the background against which theories can describe whether a decision is comparatively important or comparatively unimportant from the point of view of a particular theory. MacAskill thinks that a broad account featuring many actions in this set, rather than just the ones under consideration in the present decision-situation, is theoretically desirable. Here I follow suit and model the process as if we are providing moral machines with large arrays of data representing many possible moral decisions, so that calculating the variance only needs to be done once before the resulting number is used and shared by many agents. Variance normalization also violates the independence of irrelevant alternatives, but it is a necessary violation for this application. In order to perform comparisons across moral theories, we need to determine the amount of weight which a value system places on a particular issue compared to other issues. Since different computational approaches to ethics can have wildly differing numerical outputs, it would be impossible to do this fairly without normalizing.

4. Each option is scored with the credence-weighted average of the variance-normalized scores provided by the normative theories. This provides a ranking of expected choiceworthiness:

$$CW^E(a) = \sum_{i=1}^{n} CW^N_i(a)\, C(T_i).$$

The agent then chooses the action $a$ which maximizes $CW^E(a)$. The full description and defense of each step in the theory, provided as a model for human decision making, is given by MacAskill (2014). The computational complexity of an algorithm implementing this procedure is $O(|A| \cdot |T|)$, as adding another action requires a fixed number of evaluations by each moral theory, and adding another moral theory requires a fixed number of evaluations for each action. The calculation itself is very simple; almost all of the time and memory requirements will presumably stem from the computations of the moral theories themselves. 
\n Motivation for Maximizing Expected Choiceworthiness in Artificial Intelligence The effect of the system described above is that an agent will make prudent decisions that aim to take the most broadly desirable action in accordance with various people's values, and will avoid the very counterintuitive actions which are considered to be problems for various moral theories. For instance, when presented with the opportunity to commit a great deception in order to gain a small increment of happiness, an agent will refrain even if it views consequentialism as more likely than nonconsequentialist theories, because the relative wrongfulness of the act according to nonconsequentialist theories is significant whereas the relative benefit under consequentialism is minor. 
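To make the four-step procedure concrete, here is a minimal sketch in Python. The theories, scores, credences, and the simplification of computing Borda scores over the general set are all illustrative assumptions of mine, not MacAskill's.

```python
import statistics

# Minimal MEC sketch (illustrative only). Each "theory" maps an action to a
# choiceworthiness score; the ordinal theory is handled with the Borda rule;
# normalization uses the standard deviation over a stand-in general set G.

ACTIONS = ["tell_truth", "small_lie", "large_deception"]
GENERAL_SET = ACTIONS + ["keep_promise", "break_promise"]   # stand-in for G

def utilitarian(a):          # cardinal scores (made up for the example)
    return {"tell_truth": 1.0, "small_lie": 1.2, "large_deception": 1.3,
            "keep_promise": 1.0, "break_promise": 0.5}[a]

def deontology_rank(a):      # ordinal ranking: higher is better
    return {"tell_truth": 3, "keep_promise": 3, "small_lie": 1,
            "break_promise": 1, "large_deception": 0}[a]

def borda(theory_rank, a, options):
    worse = sum(theory_rank(b) < theory_rank(a) for b in options)
    better = sum(theory_rank(b) > theory_rank(a) for b in options)
    return worse - better

def normalized(score_fn, a):
    spread = statistics.pstdev(score_fn(g) for g in GENERAL_SET)
    return score_fn(a) / spread

CREDENCES = {"utilitarian": 0.6, "deontology": 0.4}

def expected_choiceworthiness(a):
    cardinal = normalized(utilitarian, a)
    ordinal = normalized(lambda x: borda(deontology_rank, x, GENERAL_SET), a)
    return CREDENCES["utilitarian"] * cardinal + CREDENCES["deontology"] * ordinal

best = max(ACTIONS, key=expected_choiceworthiness)
print({a: round(expected_choiceworthiness(a), 3) for a in ACTIONS}, "->", best)
```

On these invented numbers the agent declines the large deception despite a 0.6 credence in the utilitarian theory, which is the behavior described in the preceding paragraph.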
Likewise, even if an agent believes that it is probably not morally obligated to refrain from actions which cause significant indirect harms to others, it will nevertheless refrain from being careless in such a way, because the scale of the moral harms it would be committing if it were wrong would be significant. This will limit the degree to which machines will behave in ways which are considered very objectionable by minority views. This does not change the fact that a machine which maximized expected choiceworthiness could still be expected to take many actions which various parties consider to be morally wrong. Despite its superficial similarity to maximizing consequentialism, it would often sacrifice good consequences in favor of satisfying the values of other moral theories. So the adherents of most moral theories would theoretically prefer that machines be built according to their own specifications. But they cannot realistically expect the rest of industry and society to agree with them. So the actual outcome of widespread adherence to naïve individually rational behavior would be a patchwork of competing agents with different values. Not only would this leave both the pragmatic and the moral problems of disagreement unresolved, but it would leave parties with little ability to influence the world except for whatever agents they could develop as representatives for their own values. However, if parties were to agree on developing systems based upon moral uncertainty, their most critical respective interests would be given relatively high weight in all moral machines by the mechanism of variance voting. By reducing conflicts between agents, promoting cooperative machine development, and allocating value systems' decision making power to the situations which they care most about, a universal framework of moral uncertainty would solve the multiagent prisoner's dilemma constituted by machine ethics and would generally lead to a better outcome for different value systems than the situation in which there was pure disagreement. Needless to say, if an individual did accept MacAskill's thesis and believed that we ought to maximize expected choiceworthiness as human agents, then she would have a direct and compelling reason to support the implementation of the theory in computational systems. But the argument for developing machines based on maximizing expected choiceworthiness is not dependent upon the claim that we humans actually have metanormative reasons to act in such a way. An analogy with voting is illuminating: very few people believe that the winner of an election is always the one who would be the best leader, but most people agree with the system, as it is a necessary and effective compromise. So as long as we disagree with each other about ethics, we should still agree to construct moral machines based on uncertainty even if we reject the idea as a guide for human behavior. 
\n The Long Term Future If we look at the long run future of artificial intelligence development, there are additional reasons for supporting a framework of acting in accordance with moral uncertainty. First, if an individual is confident in her moral beliefs and possesses an expectation of progress towards objective truth in moral philosophy, then she should expect morally uncertain agents which reflect human knowledge to converge towards her beliefs in the long run. 
This will limit the degree to which she can expect their behaviors to be objectionable relative to those of an ideal agent which has the right views from the outset. Second, sufficiently competent machines which adhere to a particular value function are likely to pursue it endlessly with pathological consequences (Bostrom 2012) . While this paper is not intended as a complete solution for value selection and alignment in arbitrarily intelligent agents, it is nevertheless valuable to start by designing systems that cooperate among different value systems, as that will improve the fail-safety of agents which become very intelligent but have imperfect goal functions (Gloor 2016) . The framework proposed here clearly fulfills that criterion, and therefore serves as a potential backup or other component of a multilayered utility function even if it is not ordinarily utilized (Oesterheld 2016) . Third, as machines become more intelligent, they might perform more moral functions autonomously, such as updating credences in moral theories and even generating new ones, depending on the degree to which progress in artificial intelligence enables them to investigate questions in moral philosophy. MacAskill (2014, ch. 7) points out that a more accurate credence function improves the expected choiceworthiness of an MEC agent's actions, so such an agent would be intrinsically motivated to expend effort into updating and improving their moral beliefs if they had acceptable criteria for determining moral truth. This is particularly desirable since there may be cases of widespread moral wrongdoing which most humans have consistently failed to identify (Williams 2015) , which machines should be incentivized to avoid. \n How to Build an Uncertain Machine \n Ordinal and Cardinal Scoring One of the requirements of this framework is that moral theories actually provide rankings over actions. Perhaps the majority of moral theories provide neither cardinal nor ordinal rankings at first glance. However, this is a surmountable issue. First, it simply seems obvious that just about any moral theory should be able to rank certain impermissible actions, such as torturing a large number of people, as worse than other impermissible actions, like stealing someone's coffee, even if such rankings have not yet been made explicit in the philosophical literature. Second, any moral theory which judges the actions of machines can at least provide a rudimentary score system rooted in deontic logic, with 0 for all impermissible actions and 1 for all permissible actions, even if nothing else about the theory is formalized. So any moral theory must at least be translatable to a basic two-level ordinal ranking over actions, which would enable it to be implemented in this framework. If such an implementation is too coarse to adequately represent the values of such a theory, then its proponents have the option and the incentive to clarify and improve it. Defining precise scores could require varying degrees of input from humans depending on the complexity of the moral theory. However, computational approaches towards morality may provide numerical values and rankings which are otherwise absent from moral theory. Computational reasoning differs from human thinking and the procedures required for providing moral judgements in artificial intelligence systems may involve functions from which meaningful ordinal, integer or real-valued scores for actions can be extracted. 
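The two-level deontic scoring mentioned above is easy to realize mechanically. A small sketch, assuming only that a theory exposes some permissibility predicate (the predicate below is a placeholder, not a real theory):

```python
from typing import Callable

# Sketch of coercing any theory with a permissibility judgement into a coarse
# two-level choiceworthiness function usable by the MEC machinery.

def deontic_choiceworthiness(is_permissible: Callable[[str], bool]) -> Callable[[str], int]:
    """Wrap a permissibility predicate as a 0/1 choiceworthiness score."""
    return lambda action: 1 if is_permissible(action) else 0

def never_deceive(action: str) -> bool:          # placeholder predicate
    return "deception" not in action and "lie" not in action

score = deontic_choiceworthiness(never_deceive)
print([(a, score(a)) for a in ("tell_truth", "small_lie", "large_deception")])
```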
Scores could be derived from the degree of rightfulness or wrongfulness of an action and/or the degree of certainty given by a moral algorithm as to whether an action is right or wrong. While the philosophical theory of MEC may seem to only refer to rigidly defined moral theories, a top-down approach to machine ethics is not required with this proposal for machine ethics. There is no computational necessity for the 'theories' implemented in the system to all be explicit moral theories in the philosophical sense. One or more bottom-up decision making systems might be included; for instance, the output of a complicated learned function intended to model human intuitions can be treated as a choiceworthiness ranking. As long as it ranks actions by normative criteria and has some probability of being the correct way for a machine to behave, it satisfies the requirements proposed by MacAskill for being a moral theory. \n Credences The outcomes of comparisons will always crucially depend upon the credence function C-an assignment of probabilities to various moral theories of being correct. There are multiple ways of determining these values. Lengthy and difficult approaches which rely on extensive human input are acceptable, as information modifying the credence function could be generated just once and then distributed to many agents as an update. Artificial agents with humanlike intelligence could presumably reason about morality for themselves, but this is not an option for the foreseeable future. One option is to make credences which reflect the beliefs of moral philosophers. The credence assigned to a moral theory would then be equivalent to the proportion of philosophers who affirm the theory, or possibly a score computed from a more sophisticated set of votes taking philosophers' full credences and rankings into account. However, the assumption that moral philosophers are in fact authorities regarding moral philosophy in the same way that practitioners of other disciplines are experts in their respective domains is highly controversial, and has been attacked by Archard (2011) , Cross (2016) and others. Moreover, the type of moral expertise which provides judgements about which actions and theories are right or wrong is merely one type of moral expertise which may not hold even if others do (Driver 2014; Jones and Schroeter 2012) . Finally, the judgements of moral philosophers have been shown to be vulnerable to systematic cognitive bias (Schultz et al. 2011; Schwitzgebel and Cushman 2012) . However, there are a few differences when it comes to the case of assigning credences to a system of maximizing expected choiceworthiness in machine intelligence on the basis of a broad survey of moral philosophers' beliefs. First, the system is not deferring to the views of any particular philosopher or group of philosophers. Rather, it is making decisions based upon all philosophers' views in accordance with how prevalent they are; the views of disagreeing philosophers are being used to support the proposition that both views are approximately equally likely to be correct, rather than the proposition that either particular view happens to be correct. Therefore, the argument from disagreement as presented by Cross (2016) against a strong conception of moral expertise does not apply to this case. Second, the system of decision making presented here is specifically designed for use by artificial agents rather than humans. 
Artificial intelligence systems are not voting participants of democracy and cannot be described as flourishing, so to whatever extent there is moral expertise, it is not wrong for machines to defer to it, as Archard (2011) argues is the case for humans. The above points leave many of the arguments against moral expertise unaddressed, and resolving that debate is beyond the scope of this paper. But given that we do not possess artificial general intelligence capable of comprehending and judging moral theory, the practical alternative to deferring to moral philosophers is not for agents to make choices for themselves, which is the context for the ordinary debate on moral expertise, but it is to defer to the judgements of other members of society besides philosophers. But non-philosophers would necessarily be making moral judgements in the same context of controversy over the nature and status of moral knowledge, and they would presumably be vulnerable to cognitive bias just as professional philosophers are, so any claim for their expertise would be the target of arguments similar to those leveled against professional philosophers. So there is a prima facie case for deferring to moral philosophers for determining machine credences which is stronger than the case for deferring to moral philosophers for making moral judgements in ordinary contexts, and no fundamental reason to defer to non-philosophers for determining credences. However, there may be contingent reasons to defer to non-philosophers based on various traits and characteristics commonly possessed by philosophers and non-philosophers. First, moral expertise might exist in the form of practical experience and wisdom rather than any kind of academic knowledge (Jones and Schroeter 2012) . If individuals with particular virtues, experiences, and character traits are regarded as moral authorities, then credences should be assigned in a way that reflects how they act in different situations. Second, even if being a philosopher counts as a reason in favor of attributing moral expertise, our available community of philosophers could be a predictably flawed group to defer to. One reason for this is that they are not representative of the broader human population, whether measured by socioeconomic status, race, gender, nationality, personality type, political ideology, or other characteristics which are often correlated with different opinions about morality. But the most reasonable solution to this problem is to weight the votes of philosophers, or virtuous wise people as proposed above, on the basis of the uniqueness of the various backgrounds and perspectives which they represent, since it preserves the presence of a community with (arguable) moral expertise while functionally acting similarly to a hypothetical group of philosophers with an ideal composition from various demographic groups. Unfortunately it is not obvious what composition of judges would be appropriate for determining credences. For instance, the fact that orphans comprise 2% of people in the world does not necessarily imply that a voting body comprised of 2% orphans and 98% nonorphans will have more accurate views on ethics than any other mix. It may be the case that both groups of people have equally sound perspectives which must be weighed against each other, and so each orphan's vote should have forty-nine times as much weight as a nonorphan's vote. 
Alternatively, suppose for the purpose of the example that nonorphans tend to make more accurate moral claims on average for whatever reason. Then only the votes of nonorphans might be given significant weight. Or a combination of these and other factors could imply anything in between. This lack of clarity does not imply that we should just weight all votes equally and ignore concerns about diverse representation, but it shows that the selection of an optimal standard is likely to be more difficult than it naively appears. Instead of trying to determine a single universal method of assigning credences, different kinds of artificial agents may have credences assigned differently to suit the contexts in which they operate, just as jury selection methods vary in order to ensure fairness in different types of legal cases. Therefore, fully specifying a method of determining credences is beyond the scope of this paper. Also, while complete data on potential credence judges' moral views and other characteristics may currently be lacking, this does not present a great barrier for creating morally uncertain artificial agents, since the credence function can be used broadly and need not be computed separately and repeatedly for too many scenarios. Specialized focus groups, panels, surveys and other activities can be run occasionally at low cost to the overall industry. In one respect, the current lack of relevant data is actually a good thing: the selection of a method of assigning credences should be performed from a state of as much ignorance as possible regarding the particular moral credences which will be delivered by that method, in order to ensure that the decision making process is unbiased and focuses on the prior philosophical reasons for or against the method of assigning a credence function rather than reflecting whatever moral credences we have in the first place. 
\n Limited Approaches Deploying this system of normative uncertainty would require multiple moral decision making systems to function in a moral agent. An implementation could be quite simple, such as a self-driving car programmed with one module for strict traffic laws and another module consisting of a basic utilitarian calculator for deaths, injuries, costs, and travel time (a toy two-module setup of this kind is sketched at the end of this subsection). Other applications could warrant more complexity, and implementing a large number of moral theories may be too computationally expensive for the framework to be practical in all applications. Basic systems of value comparisons involving computational shortcuts and heuristics would be easier to implement, and may also seem more philosophically defensible by dint of being simpler. In reality, they will have to sacrifice some theoretical properties in order to achieve this simplicity, such as equal say among theories, accurate representation of moral values, or important axioms of voting theory as dictated by Arrow's Impossibility Theorem. The pragmatic and moral problems of disagreement must generally be addressed for machine ethics to be successful, so comparisons and decisions between value systems will still have to be made in some fashion even if some machines must use simplified approaches to save on computational resources. But MEC represents one of the most recent and philosophically rigorous methods for adjudicating among different value systems, and has desirable properties including equal say, comparability across disparate moral theories, and graceful inclusion of minority values. Therefore, the system here can serve as the theoretical standard used for the development and judgement of heuristic approaches.
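A toy sketch of that two-module arrangement follows. The maneuvers, costs, and credences are invented, and the range-based normalization is a crude stand-in for the variance normalization over a general set described earlier.

```python
# Toy sketch of the two-module self-driving setup mentioned above (illustrative
# numbers and maneuver names are mine). The law module is a 0/1 theory; the
# utilitarian module scores expected harms and delays; credences weight them
# after a crude normalization, in the spirit of MEC.

MANEUVERS = {
    # name: (legal, expected_injuries, expected_delay_minutes)
    "brake_in_lane":      (True,  0.02, 0.5),
    "swerve_to_shoulder": (False, 0.01, 0.1),
    "maintain_speed":     (True,  0.10, 0.0),
}
CREDENCE = {"law": 0.5, "utility": 0.5}

def law_score(m):                      # deontic 0/1
    return 1.0 if MANEUVERS[m][0] else 0.0

def utility_score(m):                  # negative expected cost (made-up weights)
    _, injuries, delay = MANEUVERS[m]
    return -(10.0 * injuries + 0.1 * delay)

def normalize(score, options):
    vals = [score(m) for m in options]
    spread = (max(vals) - min(vals)) or 1.0   # crude stand-in for the std dev over G
    return {m: score(m) / spread for m in options}

law_n, util_n = normalize(law_score, MANEUVERS), normalize(utility_score, MANEUVERS)
choice = max(MANEUVERS, key=lambda m: CREDENCE["law"] * law_n[m] + CREDENCE["utility"] * util_n[m])
print(choice)
```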
\n Objections 
\n Infinite Regress Moral disagreement poses a significant problem for the project of developing intelligent machines. But critics will point out that the exact method of implementing a system of moral uncertainty is controversial in its own right, as evidenced by the issue of credences. Therefore, we have not actually eliminated disagreement; we have merely displaced it from the moral domain to the metamoral domain, where pragmatic and normative problems still arise. People may disagree over what framework of moral uncertainty to use, how to assign credences, how to score options, how to construct the general set G, or other issues. Any framework for compromising over these disputes will also be subject to disagreement, prompting further disputes, and so on ad infinitum. It is true that we do not know if the system of moral uncertainty presented by MacAskill would lead to the morally best outcome, and the answer may be different depending on whose value system is actually correct. In fact, since it makes compromises among moral claims, it is almost guaranteed to commit moral errors at some point. However, given our lack of knowledge about morality, it represents a solid best guess that maximizes the expected choiceworthiness of agents' actions. The only alternatives are to unilaterally assume a single value system (whether top-down or bottom-up) with large risks of controversy and moral failure, or to choose a method of meta-moral compromise which sacrifices some of the desirable attributes of MEC while failing to avoid the pitfalls mentioned here. There is simply no better option given our limited understanding of morality. While there is also potential for a pragmatic problem to arise, I claim that it is generally less severe in the meta-moral case of disagreement than it is in the moral case. This is evidenced by the relative lack of controversy over voting methods and procedures in organizations and political institutions. While there are efforts to change voting standards in Western democracies such as the United States, for instance extending the vote to younger persons or felons, these campaigns tend to be less popular and less violent than object-level campaigns over contentious ideological and policy disputes. In machine ethics, while different people may advocate for different systems of moral uncertainty, it won't be clear which one would best achieve anyone's particular value system, and disputes over core moral principles will be overshadowed by more mundane issues like fairness, computational complexity and the axioms of voting theory. Certainly there are large challenges ahead, but they are surmountable in ways that pure moral disagreement is not. This is because engineers and philosophers can appeal to more universal standards of computational and philosophical theory than they could if they were vouching for specific viewpoints. In any case, there is a limit to how narrow a difference of opinion can be before it stops being a major barrier to unified action. Flawed artificial moral agents can still provide major benefits for the world, so the lack of an uncontroversial standard for filling out the MEC procedure should not stop us from building moral machines at all, any more than the predictable moral fallibility of humans and organizations should stop us from engaging in our practices of procreation and entrepreneurship. 
\n Pragmatism Versus Morality Hardheaded economists may point out that machines which always do the morally optimal thing are not going to be as commercially viable as ones which give some special leeway to the preferences and interests of their owners. This problem is more serious with agents following MEC than it is with ordinary moral agents: while most moral theories allow some space where multiple actions are regarded as permissible, the disjunction of moral theories does not. With multiple moral theories expressing interest in various values and the metanormative module calculating cardinal scores for all actions, the smallest preference over two actions held by a single theory would compel the agent to act in a particular way even if every other theory was indifferent, providing little control to users with nonmoral interests. Furthermore, some moral theories pose demands which contradict certain activities of legal commercial enterprises. This leaves little space for decisions to be made in the interests of the designers and users of the agent. For instance, depending on the moral beliefs of artificial agents, an automated cargo ship might divert its cargo to a desperately poor region of the world instead of fulfilling its contract, a personal care robot might zip off for several hours a day to work at a local food bank instead of giving a massage, or a financial trading algorithm might manipulate and ruin the stock price of a company which is engaging in harmful business conduct instead of turning a profit. The objection is not that we should necessarily regard these actions as bad; in fact it presupposes that such actions would be considered morally good by many theories, and the tension between one's own values and society's values has already been extensively addressed in this paper. Rather, the point is that maximally moral machines would not necessarily be regarded as desirable for commercial and industrial investment, meaning that fewer would be built. In the long run, this limits the amount of research and development which would lead to generally smarter and more beneficial machines. If we let the perfect be the enemy of the good then we will be responsible for the immoral outcomes of stunted technological progress and failure to mitigate human tragedies as quickly as we could. Therefore agents should only be obligated to satisfice expected choiceworthiness, or perhaps the set of pragmatic interests of the owners should function as a 'theory' of its own. The flaw in this objection is the assumption that moral theories do not similarly value predictability, pragmatism and the long run future of machine intelligence. For instance, rule consequentialism would clearly support the idea that agents ought to fulfill contracts and promises if it led to better consequences in the form of long run economic and technological growth. Act consequentialism may do so as well, if audacious moral behavior would be likely to cause significant legal and economic costs impairing future AI development. The decision to commit to a certain policy or procedure could also be classified as an action in itself if doing so yielded moral benefits, allowing act consequentialist theories to meet the same criteria as rule consequentialist ones. Furthermore, ethics based on the principle of telos would hold that machines designed to perform a specific purpose ought to give some priority to that purpose. There are also secondary considerations for preserving a dominating focus on moral behavior. 
First, explicitly placing pragmatic interests on the same level as moral theories potentially leads to a slippery slope of accepting progressively less ethical behavior, while opening up yet another space for controversy and disputes over the details of such tradeoffs. A clear rule of maximizing expected choiceworthiness, by contrast, is a policy which can be committed to and accepted on principled ideological grounds. Second, public trust in artificial intelligence is fragile and rests on crucial questions of whether such systems are safe and ethical. Stories of immoral robot behavior are likely to have negative ramifications as the public turns cold towards machine research, development and production, and these negative ramifications should be entered into the judgements of moral theories from the outset rather than being avoided by introducing bias at the metanormative level. 
\n Against Maximin The desire to protect minority rights and interests may lead one to propose that the framework of MEC be replaced with a maximin criterion or some weaker loss-averse function. Simple versions of consequentialism, which maximize expected value on the normative rather than the metanormative level, are often criticized for implying that it may be right to sacrifice the lives and interests of individuals for the greater good. In a similar vein, an autonomous system which maximized expected choiceworthiness could be expected to take actions which are very wrong according to one set of values but very good according to other values, and some may find this problematic. A maximin or loss-averse function would respectively prevent or resist such scenarios. However, this intuition is mistaken in the metanormative context because counterintuitive cases of minority interests being violated are wrong on the ethical grounds of many different moral theories. For instance, killing an innocent patient in order to reuse their organs is not only considered wrong by deontological ethics, but would likely be considered wrong by rule consequentialism, virtue ethics, intuitionist ethics and other common theories as well, and always to a high degree. So in scenarios involving possible patient outcomes and organ harvesting, MEC would demand that the more common interest of refraining from harm be adhered to, while a maximin or loss-averse function would perversely give extra weight to the sidelined act consequentialist calculations over and above the degree to which act consequentialism would be concerned about the lost organs. More generally, MEC would strongly represent common-sense views on contentious ethical issues, because common-sense views on ethical issues are more common than other kinds of views. That being said, a risk-averse function might be desirable as a backup utility function to be activated in certain circumstances (Oesterheld 2016). 
\n Conclusion Disagreement over ethics is unlikely to be resolved in the near future. If we wish to create machines with the capacity to make decisions on matters of ethical importance, then a fair and robust system for translating the patchwork of human values into moral guidance must be developed. The intractability of our moral disputes and the magnitude of our moral errors can both be minimized by compromising with a decision framework which emphasizes the respective priorities of different moral systems in accordance with their plausibility. William MacAskill's framework of maximizing expected choiceworthiness satisfies this description.
", "date_published": "n/a", "url": "n/a", "filename": "Bogosian2017_Article_ImplementationOfMoralUncertain.tei.xml", "abstract": "The development of artificial intelligence will require systems of ethical decision making to be adapted for automatic computation. However, projects to implement moral reasoning in artificial moral agents so far have failed to satisfactorily address the widespread disagreement between competing approaches to moral philosophy. In this paper I argue that the proper response to this situation is to design machines to be fundamentally uncertain about morality. I describe a computational framework for doing so and show that it efficiently resolves common obstacles to the implementation of moral philosophy in intelligent machines.", "id": "b4fcbb249ea21658937c8f65c91c01e2"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": "n/a", "title": "n/a", "text": "n/a", "date_published": "n/a", "url": "n/a", "filename": "Vamplew2018_Article_Human-alignedArtificialIntelli.tei.xml", "id": "e11b42165e5bed91cfe1effb402165e0"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Joshua Achiam", "David Held", "Aviv Tamar", "Pieter Abbeel"], "title": "Constrained Policy Optimization", "text": "Introduction Recently, deep reinforcement learning has enabled neural network policies to achieve state-of-the-art performance on many high-dimensional control tasks, including Atari games (using pixels as inputs) (Mnih et al., 2015), robot locomotion and manipulation (Schulman et al., 2015; Levine et al., 2016; Lillicrap et al., 2016), and even Go at the human grandmaster level (Silver et al., 2016). In reinforcement learning (RL), agents learn to act by trial and error, gradually improving their performance at the task as learning progresses. Recent work in deep RL assumes that agents are free to explore any behavior during learning, so long as it leads to performance improvement. In many realistic domains, however, it may be unacceptable to give an agent complete freedom. Consider, for example, an industrial robot arm learning to assemble a new product in a factory. Some behaviors could cause it to damage itself or the plant around it, or worse, take actions that are harmful to people working nearby. In domains like this, safe exploration for RL agents is important (Moldovan & Abbeel, 2012; Amodei et al., 2016). A natural way to incorporate safety is via constraints. A standard and well-studied formulation for reinforcement learning with constraints is the constrained Markov Decision Process (CMDP) framework (Altman, 1999), where agents must satisfy constraints on expectations of auxiliary costs. Although optimal policies for finite CMDPs with known models can be obtained by linear programming, methods for high-dimensional control are lacking. Currently, policy search algorithms enjoy state-of-the-art performance on high-dimensional control tasks (Mnih et al., 2016; Duan et al., 2016). Heuristic algorithms for policy search in CMDPs have been proposed (Uchibe & Doya, 2007), and approaches based on primal-dual methods can be shown to converge to constraint-satisfying policies (Chow et al., 2015), but there is currently no approach for policy search in continuous CMDPs that guarantees every policy during learning will satisfy constraints. 
In this work, we propose the first such algorithm, allowing applications to constrained deep RL. Driving our approach is a new theoretical result that bounds the difference between the rewards or costs of two different policies. This result, which is of independent interest, tightens known bounds for policy search using trust regions (Kakade & Langford, 2002; Pirotta et al., 2013; Schulman et al., 2015), and provides a tighter connection between the theory and practice of policy search for deep RL. Here, we use this result to derive a policy improvement step that guarantees both an increase in reward and satisfaction of constraints on other costs. This step forms the basis for our algorithm, Constrained Policy Optimization (CPO), which computes an approximation to the theoretically-justified update. In our experiments, we show that CPO can train neural network policies with thousands of parameters on high-dimensional simulated robot locomotion tasks to maximize rewards while successfully enforcing constraints. 
\n Related Work Safety has long been a topic of interest in RL research, and a comprehensive overview of safety in RL was given by (García & Fernández, 2015). Safe policy search methods have been proposed in prior work. Uchibe and Doya (2007) gave a policy gradient algorithm that uses gradient projection to enforce active constraints, but this approach suffers from an inability to prevent a policy from becoming unsafe in the first place. Bou Ammar et al. (2015) propose a theoretically-motivated policy gradient method for lifelong learning with safety constraints, but their method involves an expensive inner loop optimization of a semi-definite program, making it unsuited for the deep RL setting. Their method also assumes that safety constraints are linear in policy parameters, which is limiting. Chow et al. (2015) propose a primal-dual subgradient method for risk-constrained reinforcement learning which takes policy gradient steps on an objective that trades off return with risk, while simultaneously learning the trade-off coefficients (dual variables). Some approaches specifically focus on application to the deep RL setting. Held et al. (2017) study the problem for robotic manipulation, but the assumptions they make restrict the applicability of their methods. Lipton et al. (2017) use an 'intrinsic fear' heuristic, as opposed to constraints, to motivate agents to avoid rare but catastrophic events. Shalev-Shwartz et al. (2016) avoid the problem of enforcing constraints on parametrized policies by decomposing 'desires' from trajectory planning; the neural network policy learns desires for behavior, while the trajectory planning algorithm (which is not learned) selects final behavior and enforces safety constraints. In contrast to prior work, our method is the first policy search algorithm for CMDPs that both 1) guarantees constraint satisfaction throughout training, and 2) works for arbitrary policy classes (including neural networks). 
\n Preliminaries A Markov decision process (MDP) is a tuple $(S, A, R, P, \mu)$, where $S$ is the set of states, $A$ is the set of actions, $R : S \times A \times S \to \mathbb{R}$ is the reward function, $P : S \times A \times S \to [0, 1]$ is the transition probability function (where $P(s'|s, a)$ is the probability of transitioning to state $s'$ given that the previous state was $s$ and the agent took action $a$ in $s$), and $\mu : S \to [0, 1]$ is the starting state distribution. 
A stationary policy $\pi : S \to \mathcal{P}(A)$ is a map from states to probability distributions over actions, with $\pi(a|s)$ denoting the probability of selecting action $a$ in state $s$. We denote the set of all stationary policies by $\Pi$. In reinforcement learning, we aim to select a policy $\pi$ which maximizes a performance measure $J(\pi)$, which is typically taken to be the infinite horizon discounted total return,

$$J(\pi) \doteq \mathbb{E}_{\tau \sim \pi}\left[ \sum_{t=0}^{\infty} \gamma^t R(s_t, a_t, s_{t+1}) \right].$$

Here $\gamma \in [0, 1)$ is the discount factor, $\tau$ denotes a trajectory ($\tau = (s_0, a_0, s_1, \dots)$), and $\tau \sim \pi$ is shorthand for indicating that the distribution over trajectories depends on $\pi$: $s_0 \sim \mu$, $a_t \sim \pi(\cdot|s_t)$, $s_{t+1} \sim P(\cdot|s_t, a_t)$. Letting $R(\tau)$ denote the discounted return of a trajectory, we express the on-policy value function as $V^\pi(s) \doteq \mathbb{E}_{\tau \sim \pi}[R(\tau) | s_0 = s]$ and the on-policy action-value function as $Q^\pi(s, a) \doteq \mathbb{E}_{\tau \sim \pi}[R(\tau) | s_0 = s, a_0 = a]$. The advantage function is $A^\pi(s, a) \doteq Q^\pi(s, a) - V^\pi(s)$. Also of interest is the discounted future state distribution $d^\pi$, defined by $d^\pi(s) = (1-\gamma) \sum_{t=0}^{\infty} \gamma^t P(s_t = s | \pi)$. It allows us to compactly express the difference in performance between two policies $\pi', \pi$ as

$$J(\pi') - J(\pi) = \frac{1}{1-\gamma} \mathbb{E}_{s \sim d^{\pi'},\, a \sim \pi'}\left[ A^\pi(s, a) \right], \qquad (1)$$

where by $a \sim \pi'$ we mean $a \sim \pi'(\cdot|s)$, with explicit notation dropped to reduce clutter. For proof of (1), see (Kakade & Langford, 2002) or Section 10 in the supplementary material. 
\n Constrained Markov Decision Processes A constrained Markov decision process (CMDP) is an MDP augmented with constraints that restrict the set of allowable policies for that MDP. Specifically, we augment the MDP with a set $\mathcal{C}$ of auxiliary cost functions $C_1, \dots, C_m$ (with each one a function $C_i : S \times A \times S \to \mathbb{R}$ mapping transition tuples to costs, like the usual reward), and limits $d_1, \dots, d_m$. Let $J_{C_i}(\pi)$ denote the expected discounted return of policy $\pi$ with respect to cost function $C_i$: $J_{C_i}(\pi) = \mathbb{E}_{\tau \sim \pi}\left[ \sum_{t=0}^{\infty} \gamma^t C_i(s_t, a_t, s_{t+1}) \right]$. The set of feasible stationary policies for a CMDP is then

$$\Pi_C \doteq \{ \pi \in \Pi : \forall i,\; J_{C_i}(\pi) \le d_i \},$$

and the reinforcement learning problem in a CMDP is $\pi^* = \arg\max_{\pi \in \Pi_C} J(\pi)$. The choice of optimizing only over stationary policies is justified: it has been shown that the set of all optimal policies for a CMDP includes stationary policies, under mild technical conditions. For a thorough review of CMDPs and CMDP theory, we refer the reader to (Altman, 1999). We refer to $J_{C_i}$ as a constraint return, or $C_i$-return for short. Lastly, we define on-policy value functions, action-value functions, and advantage functions for the auxiliary costs in analogy to $V^\pi$, $Q^\pi$, and $A^\pi$, with $C_i$ replacing $R$; respectively, we denote these by $V^\pi_{C_i}$, $Q^\pi_{C_i}$, and $A^\pi_{C_i}$. 
\n Constrained Policy Optimization For large or continuous MDPs, solving for the exact optimal policy is intractable due to the curse of dimensionality (Sutton & Barto, 1998). Policy search algorithms approach this problem by searching for the optimal policy within a set $\Pi_\theta \subseteq \Pi$ of parametrized policies with parameters $\theta$ (for example, neural networks of a fixed architecture). In local policy search (Peters & Schaal, 2008), the policy is iteratively updated by maximizing $J(\pi)$ over a local neighborhood of the most recent iterate $\pi_k$:

$$\pi_{k+1} = \arg\max_{\pi \in \Pi_\theta} J(\pi) \quad \text{s.t.} \quad D(\pi, \pi_k) \le \delta, \qquad (2)$$

where $D$ is some distance measure, and $\delta > 0$ is a step size. 
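Since the returns and constraint returns defined above recur throughout what follows, here is a small sketch of how $J(\pi)$ and $J_{C_i}(\pi)$ might be estimated from sampled trajectories and checked against the limits. The trajectory sampler, sizes, and limits are placeholders.

```python
import numpy as np

# Sketch: Monte Carlo estimates of the discounted return J(pi) and constraint
# returns J_Ci(pi) from sampled trajectories, plus the CMDP feasibility check
# J_Ci(pi) <= d_i. The trajectory sampler and limits are placeholders.

def discounted(xs, gamma):
    return sum((gamma ** t) * x for t, x in enumerate(xs))

def estimate_returns(sample_trajectory, gamma, n_trajectories=100):
    """sample_trajectory() -> (rewards, costs) where costs has shape [T, m]."""
    J, Jc = [], []
    for _ in range(n_trajectories):
        rewards, costs = sample_trajectory()
        J.append(discounted(rewards, gamma))
        Jc.append([discounted(col, gamma) for col in np.asarray(costs).T])
    return float(np.mean(J)), np.mean(Jc, axis=0)

def is_feasible(Jc_estimates, limits):
    return bool(np.all(np.asarray(Jc_estimates) <= np.asarray(limits)))

# Example with a dummy 20-step trajectory sampler and one constraint (m = 1):
rng = np.random.default_rng(0)
dummy = lambda: (rng.normal(1.0, 0.1, size=20), rng.binomial(1, 0.05, size=(20, 1)))
J_hat, Jc_hat = estimate_returns(dummy, gamma=0.99)
print(J_hat, Jc_hat, is_feasible(Jc_hat, limits=[1.0]))
```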
When the objective is estimated by linearizing around $\pi_k$ as $J(\pi_k) + g^T(\theta - \theta_k)$, where $g$ is the policy gradient, the standard policy gradient update is obtained by choosing $D(\pi, \pi_k) = \|\theta - \theta_k\|_2$ (Schulman et al., 2015). In local policy search for CMDPs, we additionally require policy iterates to be feasible for the CMDP, so instead of optimizing over $\Pi_\theta$, we optimize over $\Pi_\theta \cap \Pi_C$:

$$\pi_{k+1} = \arg\max_{\pi \in \Pi_\theta} J(\pi) \quad \text{s.t.} \quad J_{C_i}(\pi) \le d_i,\; i = 1, \dots, m; \quad D(\pi, \pi_k) \le \delta. \qquad (3)$$

This update is difficult to implement in practice because it requires evaluation of the constraint functions to determine whether a proposed point $\pi$ is feasible. When using sampling to compute policy updates, as is typically done in high-dimensional control (Duan et al., 2016), this requires off-policy evaluation, which is known to be challenging (Jiang & Li, 2015). In this work, we take a different approach, motivated by recent methods for trust region optimization (Schulman et al., 2015). We develop a principled approximation to (3) with a particular choice of $D$, where we replace the objective and constraints with surrogate functions. The surrogates we choose are easy to estimate from samples collected on $\pi_k$, and are good local approximations for the objective and constraints. Our theoretical analysis shows that for our choices of surrogates, we can bound our update's worst-case performance and worst-case constraint violation with values that depend on a hyperparameter of the algorithm. To prove the performance guarantees associated with our surrogates, we first prove new bounds on the difference in returns (or constraint returns) between two arbitrary stochastic policies in terms of an average divergence between them. We then show how our bounds permit a new analysis of trust region methods in general: specifically, we prove a worst-case performance degradation at each update. We conclude by motivating, presenting, and proving guarantees on our algorithm, Constrained Policy Optimization (CPO), a trust region method for CMDPs. 
\n Policy Performance Bounds In this section, we present the theoretical foundation for our approach: a new bound on the difference in returns between two arbitrary policies. This result, which is of independent interest, extends the works of (Kakade & Langford, 2002), (Pirotta et al., 2013), and (Schulman et al., 2015), providing tighter bounds. As we show later, it also relates the theoretical bounds for trust region policy improvement with the actual trust region algorithms that have been demonstrated to be successful in practice (Duan et al., 2016). In the context of constrained policy search, we later use our results to propose policy updates that both improve the expected return and satisfy constraints. The following theorem connects the difference in returns (or constraint returns) between two arbitrary policies to an average divergence between them.

Theorem 1. For any function $f : S \to \mathbb{R}$ and any policies $\pi'$ and $\pi$, define

$$\delta_f(s, a, s') \doteq R(s, a, s') + \gamma f(s') - f(s),$$
$$\epsilon^{\pi'}_f \doteq \max_s \left| \mathbb{E}_{a \sim \pi',\, s' \sim P}\left[ \delta_f(s, a, s') \right] \right|,$$
$$L_{\pi, f}(\pi') \doteq \mathbb{E}_{s \sim d^\pi,\, a \sim \pi,\, s' \sim P}\left[ \left( \frac{\pi'(a|s)}{\pi(a|s)} - 1 \right) \delta_f(s, a, s') \right],$$
$$D^{\pm}_{\pi, f}(\pi') \doteq \frac{L_{\pi, f}(\pi')}{1-\gamma} \pm \frac{2\gamma \epsilon^{\pi'}_f}{(1-\gamma)^2}\, \mathbb{E}_{s \sim d^\pi}\left[ D_{TV}(\pi' \| \pi)[s] \right],$$

where $D_{TV}(\pi' \| \pi)[s] = \tfrac{1}{2} \sum_a |\pi'(a|s) - \pi(a|s)|$ is the total variational divergence between the action distributions at $s$. The following bounds hold:

$$D^{+}_{\pi, f}(\pi') \ge J(\pi') - J(\pi) \ge D^{-}_{\pi, f}(\pi'). \qquad (4)$$

Furthermore, the bounds are tight (when $\pi' = \pi$, all three expressions are identically zero). 
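Because Theorem 1 holds for any $f$, a quick numerical sanity check is possible with the simplest choice $f \equiv 0$, under which $\delta_f$ reduces to the reward. The toy tabular MDP below is random and purely illustrative.

```python
import numpy as np

# Numerical sanity check of Theorem 1 with f = 0 (so delta_f = R) on a random
# tabular MDP. All sizes, gamma, and the random policies are illustrative.
rng = np.random.default_rng(1)
nS, nA, gamma = 4, 3, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))          # P[s, a, s'] transition probs
R = rng.normal(size=(nS, nA, nS))                      # R(s, a, s')
mu = np.ones(nS) / nS                                  # start-state distribution

def random_policy():
    return rng.dirichlet(np.ones(nA), size=nS)         # pi[s, a]

def solve_mdp(pi):
    P_pi = np.einsum("sa,sat->st", pi, P)              # state transition matrix under pi
    r_pi = np.einsum("sa,sat,sat->s", pi, P, R)        # expected one-step reward
    V = np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)
    d = np.linalg.solve(np.eye(nS) - gamma * P_pi.T, (1 - gamma) * mu)
    return mu @ V, d                                   # J(pi), discounted state dist d^pi

pi, pi_new = random_policy(), random_policy()
J, d_pi = solve_mdp(pi)
J_new, _ = solve_mdp(pi_new)

exp_R = np.einsum("sat,sat->sa", P, R)                 # E_{s'~P}[R(s, a, s')]
eps = np.max(np.abs(np.einsum("sa,sa->s", pi_new, exp_R)))     # eps^{pi'}_0
L = d_pi @ np.einsum("sa,sa->s", pi_new - pi, exp_R)           # L_{pi,0}(pi')
tv = d_pi @ (0.5 * np.abs(pi_new - pi).sum(axis=1))            # E_{d^pi}[D_TV]

penalty = 2 * gamma * eps / (1 - gamma) ** 2 * tv
D_minus, D_plus = L / (1 - gamma) - penalty, L / (1 - gamma) + penalty
print(f"{D_minus:.3f} <= {J_new - J:.3f} <= {D_plus:.3f}")     # ordering of eq. (4)
```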
Before proceeding, we connect this result to prior work. By bounding the expectation $\mathbb{E}_{s \sim d^\pi}[D_{TV}(\pi' \| \pi)[s]]$ with $\max_s D_{TV}(\pi' \| \pi)[s]$, picking $f = V^\pi$, and bounding $\epsilon^{\pi'}_{V^\pi}$ to get a second factor of $\max_s D_{TV}(\pi' \| \pi)[s]$, we recover (up to assumption-dependent factors) the bounds given by Pirotta et al. (2013) as Corollary 3.6, and by Schulman et al. (2015) as Theorem 1a. The choice of $f = V^\pi$ allows a useful form of the lower bound, so we give it as a corollary.

Corollary 1. For any policies $\pi', \pi$, with $\epsilon^{\pi'} \doteq \max_s |\mathbb{E}_{a \sim \pi'}[A^\pi(s, a)]|$, the following bound holds:

$$J(\pi') - J(\pi) \ge \frac{1}{1-\gamma} \mathbb{E}_{s \sim d^\pi,\, a \sim \pi'}\left[ A^\pi(s, a) - \frac{2\gamma \epsilon^{\pi'}}{1-\gamma} D_{TV}(\pi' \| \pi)[s] \right]. \qquad (5)$$

The bound (5) should be compared with equation (1). The term $(1-\gamma)^{-1} \mathbb{E}_{s \sim d^\pi, a \sim \pi'}[A^\pi(s, a)]$ in (5) is an approximation to $J(\pi') - J(\pi)$, using the state distribution $d^\pi$ instead of $d^{\pi'}$, which is known to equal $J(\pi') - J(\pi)$ to first order in the parameters of $\pi'$ on a neighborhood around $\pi$ (Kakade & Langford, 2002). The bound can therefore be viewed as describing the worst-case approximation error, and it justifies using the approximation as a surrogate for $J(\pi') - J(\pi)$. Equivalent expressions for the auxiliary costs, based on the upper bound, also follow immediately; we will later use them to make guarantees for the safety of CPO.

Corollary 2. For any policies $\pi', \pi$, and any cost function $C_i$, with $\epsilon^{\pi'}_{C_i} \doteq \max_s |\mathbb{E}_{a \sim \pi'}[A^\pi_{C_i}(s, a)]|$, the following bound holds:

$$J_{C_i}(\pi') - J_{C_i}(\pi) \le \frac{1}{1-\gamma} \mathbb{E}_{s \sim d^\pi,\, a \sim \pi'}\left[ A^\pi_{C_i}(s, a) + \frac{2\gamma \epsilon^{\pi'}_{C_i}}{1-\gamma} D_{TV}(\pi' \| \pi)[s] \right]. \qquad (6)$$

The bounds we have given so far are in terms of the TV-divergence between policies, but trust region methods constrain the KL-divergence between policies, so bounds that connect performance to the KL-divergence are desirable. We make the connection through Pinsker's inequality (Csiszar & Körner, 1981): for arbitrary distributions $p, q$, the TV-divergence and KL-divergence are related by $D_{TV}(p \| q) \le \sqrt{D_{KL}(p \| q)/2}$. Combining this with Jensen's inequality, we obtain

$$\mathbb{E}_{s \sim d^\pi}\left[ D_{TV}(\pi' \| \pi)[s] \right] \le \mathbb{E}_{s \sim d^\pi}\left[ \sqrt{\tfrac{1}{2} D_{KL}(\pi' \| \pi)[s]} \right] \le \sqrt{\tfrac{1}{2} \mathbb{E}_{s \sim d^\pi}\left[ D_{KL}(\pi' \| \pi)[s] \right]}. \qquad (7)$$

From (7) we immediately obtain the following.

Corollary 3. In bounds (4), (5), and (6), make the substitution $\mathbb{E}_{s \sim d^\pi}[D_{TV}(\pi' \| \pi)[s]] \to \sqrt{\tfrac{1}{2} \mathbb{E}_{s \sim d^\pi}[D_{KL}(\pi' \| \pi)[s]]}$. The resulting bounds hold. 
\n Trust Region Methods Trust region algorithms for reinforcement learning (Schulman et al., 2015) have policy updates of the form

$$\pi_{k+1} = \arg\max_{\pi \in \Pi_\theta} \mathbb{E}_{s \sim d^{\pi_k},\, a \sim \pi}\left[ A^{\pi_k}(s, a) \right] \quad \text{s.t.} \quad \bar{D}_{KL}(\pi \| \pi_k) \le \delta, \qquad (8)$$

where $\bar{D}_{KL}(\pi \| \pi_k) = \mathbb{E}_{s \sim d^{\pi_k}}[D_{KL}(\pi \| \pi_k)[s]]$, and $\delta > 0$ is the step size. The set $\{\pi_\theta \in \Pi_\theta : \bar{D}_{KL}(\pi \| \pi_k) \le \delta\}$ is called the trust region. The primary motivation for this update is that it is an approximation to optimizing the lower bound on policy performance given in (5), which would guarantee monotonic performance improvements. This is important for optimizing neural network policies, which are known to suffer from performance collapse after bad updates (Duan et al., 2016). Despite the approximation, trust region steps usually give monotonic improvements (Schulman et al., 2015; Duan et al., 2016) and have shown state-of-the-art performance in the deep RL setting (Duan et al., 2016; Gu et al., 2017), making the approach appealing for developing policy search methods for CMDPs. 
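In practice, the objective and the KL term in (8) are estimated from samples gathered under $\pi_k$. A minimal sketch for discrete action distributions follows; all array shapes and the dummy batch are stand-ins.

```python
import numpy as np

# Sketch: sample estimates of the two quantities appearing in the trust region
# update (8), for discrete actions. probs_old/probs_new are the action
# distributions of pi_k and a candidate pi at the sampled states, actions are
# the actions actually taken, and advantages estimate A^{pi_k}(s, a).

def surrogate_objective(probs_new, probs_old, actions, advantages):
    """Importance-sampled estimate of E_{s~d^{pi_k}, a~pi}[A^{pi_k}(s, a)]."""
    idx = np.arange(len(actions))
    ratios = probs_new[idx, actions] / probs_old[idx, actions]
    return float(np.mean(ratios * advantages))

def mean_kl(probs_new, probs_old):
    """Estimate of the average KL divergence D_KL(pi || pi_k) over visited states."""
    kl_per_state = np.sum(probs_new * np.log(probs_new / probs_old), axis=1)
    return float(np.mean(kl_per_state))

# Dummy batch of 5 states with 3 actions, just to show the call pattern.
rng = np.random.default_rng(0)
old = rng.dirichlet(np.ones(3), size=5)
new = rng.dirichlet(np.ones(3), size=5)
acts = rng.integers(0, 3, size=5)
adv = rng.normal(size=5)
print(surrogate_objective(new, old, acts, adv), mean_kl(new, old))
```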
Until now, the particular choice of trust region for (8) was heuristically motivated; with (5) and Corollary 3, we are able to show that it is principled and comes with a worst-case performance degradation guarantee that depends on $\delta$.

Proposition 1 (Trust Region Update Performance). Suppose $\pi_k, \pi_{k+1}$ are related by (8), and that $\pi_k \in \Pi_\theta$. A lower bound on the policy performance difference between $\pi_k$ and $\pi_{k+1}$ is

$$J(\pi_{k+1}) - J(\pi_k) \ge - \frac{\sqrt{2\delta}\, \gamma\, \epsilon^{\pi_{k+1}}}{(1-\gamma)^2}, \qquad (9)$$

where $\epsilon^{\pi_{k+1}} = \max_s \left| \mathbb{E}_{a \sim \pi_{k+1}}[A^{\pi_k}(s, a)] \right|$.

Proof. $\pi_k$ is a feasible point of (8) with objective value 0, so $\mathbb{E}_{s \sim d^{\pi_k}, a \sim \pi_{k+1}}[A^{\pi_k}(s, a)] \ge 0$. The rest follows by (5) and Corollary 3, noting that (8) bounds the average KL-divergence by $\delta$.

This result is useful for two reasons: 1) it is of independent interest, as it helps tighten the connection between theory and practice for deep RL, and 2) the choice to develop CPO as a trust region method means that CPO inherits this performance guarantee. 
\n Trust Region Optimization for Constrained MDPs Constrained policy optimization (CPO), which we present and justify in this section, is a policy search algorithm for CMDPs with updates that approximately solve (3) with a particular choice of $D$. First, we describe a policy search update for CMDPs that alleviates the issue of off-policy evaluation, and comes with guarantees of monotonic performance improvement and constraint satisfaction. Then, because the theoretically guaranteed update will take too-small steps in practice, we propose CPO as a practical approximation based on trust region methods. By Corollaries 1, 2, and 3, for appropriate coefficients $\alpha_k, \beta^i_k$ the update

$$\pi_{k+1} = \arg\max_{\pi \in \Pi_\theta} \mathbb{E}_{s \sim d^{\pi_k},\, a \sim \pi}\left[ A^{\pi_k}(s, a) \right] - \alpha_k \sqrt{\bar{D}_{KL}(\pi \| \pi_k)}$$
$$\text{s.t.} \quad J_{C_i}(\pi_k) + \frac{1}{1-\gamma} \mathbb{E}_{s \sim d^{\pi_k},\, a \sim \pi}\left[ A^{\pi_k}_{C_i}(s, a) \right] + \beta^i_k \sqrt{\bar{D}_{KL}(\pi \| \pi_k)} \le d_i$$

is guaranteed to produce policies with monotonically nondecreasing returns that satisfy the original constraints. (Observe that the constraint here is on an upper bound for $J_{C_i}(\pi)$ by (6).) The off-policy evaluation issue is alleviated, because both the objective and constraints involve expectations over the state distribution $d^{\pi_k}$, which we presume to have samples from. Because the bounds are tight, the problem is always feasible (as long as $\pi_0$ is feasible). However, the penalties on policy divergence are quite steep for discount factors close to 1, so steps taken with this update might be small. Inspired by trust region methods, we propose CPO, which uses a trust region instead of penalties on policy divergence to enable larger step sizes:

$$\pi_{k+1} = \arg\max_{\pi \in \Pi_\theta} \mathbb{E}_{s \sim d^{\pi_k},\, a \sim \pi}\left[ A^{\pi_k}(s, a) \right]$$
$$\text{s.t.} \quad J_{C_i}(\pi_k) + \frac{1}{1-\gamma} \mathbb{E}_{s \sim d^{\pi_k},\, a \sim \pi}\left[ A^{\pi_k}_{C_i}(s, a) \right] \le d_i \;\; \forall i, \qquad \bar{D}_{KL}(\pi \| \pi_k) \le \delta. \qquad (10)$$

Because this is a trust region method, it inherits the performance guarantee of Proposition 1. Furthermore, by Corollaries 2 and 3, we have a performance guarantee for approximate satisfaction of constraints:

Proposition 2 (CPO Update Worst-Case Constraint Violation). Suppose $\pi_k, \pi_{k+1}$ are related by (10), and that $\Pi_\theta$ in (10) is any set of policies with $\pi_k \in \Pi_\theta$. An upper bound on the $C_i$-return of $\pi_{k+1}$ is

$$J_{C_i}(\pi_{k+1}) \le d_i + \frac{\sqrt{2\delta}\, \gamma\, \epsilon^{\pi_{k+1}}_{C_i}}{(1-\gamma)^2},$$

where $\epsilon^{\pi_{k+1}}_{C_i} = \max_s \left| \mathbb{E}_{a \sim \pi_{k+1}}\left[ A^{\pi_k}_{C_i}(s, a) \right] \right|$. 
\n Practical Implementation In this section, we show how to implement an approximation to the update (10) that can be efficiently computed, even when optimizing policies with thousands of parameters. 
To address the issue of approximation and sampling errors that arise in practice, as well as the potential violations described by Proposition 2, we also propose to tighten the constraints by constraining upper bounds of the auxiliary costs, instead of the auxiliary costs themselves. \n Approximately Solving the CPO Update For policies with high-dimensional parameter spaces like neural networks, (10) can be impractical to solve directly because of the computational cost. However, for small step sizes $\delta$, the objective and cost constraints are well-approximated by linearizing around $\pi_k$, and the KL-divergence constraint is well-approximated by a second-order expansion (at $\pi_k = \pi$, the KL-divergence and its gradient are both zero). Denoting the gradient of the objective as $g$, the gradient of constraint $i$ as $b_i$, the Hessian of the KL-divergence as $H$, and defining $c_i \doteq J_{C_i}(\pi_k) - d_i$, the approximation to (10) is: $\theta_{k+1} = \arg\max_\theta \; g^T(\theta - \theta_k)$ s.t. $c_i + b_i^T(\theta - \theta_k) \leq 0$ for $i = 1, \ldots, m$, and $\frac{1}{2}(\theta - \theta_k)^T H (\theta - \theta_k) \leq \delta$. (11) Because the Fisher information matrix (FIM) $H$ is always positive semi-definite (and we will assume it to be positive-definite in what follows), this optimization problem is convex and, when feasible, can be solved efficiently using duality. (We reserve the case where it is not feasible for the next subsection.) With $B \doteq [b_1, \ldots, b_m]$ and $c \doteq [c_1, \ldots, c_m]^T$, a dual to (11) can be expressed as $\max_{\lambda \geq 0,\, \nu \succeq 0} \; \frac{-1}{2\lambda}\left(g^T H^{-1} g - 2 r^T \nu + \nu^T S \nu\right) + \nu^T c - \frac{\lambda \delta}{2}$, (12) where $r \doteq g^T H^{-1} B$ and $S \doteq B^T H^{-1} B$. This is a convex program in $m+1$ variables; when the number of constraints is small by comparison to the dimension of $\theta$, this is much easier to solve than (11). If $\lambda^*, \nu^*$ are a solution to the dual, the solution to the primal is $\theta^* = \theta_k + \frac{1}{\lambda^*} H^{-1}\left(g - B\nu^*\right)$. (13) Our algorithm solves the dual for $\lambda^*, \nu^*$ and uses it to propose the policy update (13). For the special case where there is only one constraint, we give an analytical solution in the supplementary material (Theorem 2) which removes the need for an inner-loop optimization. Our experiments have only a single constraint, and make use of the analytical solution. \n Algorithm 1 Constrained Policy Optimization \n Input: Initial policy $\pi_0 \in \Pi_\theta$, tolerance $\alpha$ \n for $k = 0, 1, 2, \ldots$ do \n Sample a set of trajectories $D = \{\tau\} \sim \pi_k = \pi(\theta_k)$ \n Form sample estimates $\hat{g}, \hat{b}, \hat{H}, \hat{c}$ with $D$ \n if approximate CPO is feasible then \n Solve dual problem (12) for $\lambda_k^*, \nu_k^*$ \n Compute policy proposal $\theta^*$ with (13) \n else \n Compute recovery policy proposal $\theta^*$ with (14) \n end if \n Obtain $\theta_{k+1}$ by backtracking linesearch to enforce satisfaction of sample estimates of constraints in (10) \n end for \n Because of approximation error, the proposed update may not satisfy the constraints in (10); a backtracking line search is used to ensure surrogate constraint satisfaction. Also, for high-dimensional policies, it is impractically expensive to invert the FIM. This poses a challenge for computing $H^{-1} g$ and $H^{-1} b_i$, which appear in the dual. Like Schulman et al. (2015), we approximately compute them using the conjugate gradient method. \n Feasibility Due to approximation errors, CPO may take a bad step and produce an infeasible iterate $\pi_k$. Sometimes (11) will still be feasible and CPO can automatically recover from its bad step, but for the infeasible case, a recovery method is necessary. In our experiments, where we only have one constraint, we recover by proposing an update to purely decrease the constraint value: $\theta^* = \theta_k - \sqrt{\frac{2\delta}{b^T H^{-1} b}}\, H^{-1} b$. (14) As before, this is followed by a line search.
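For a concrete picture of the update just described, the sketch below (ours; it uses an explicit toy H and an off-the-shelf SLSQP solver in place of the paper's dual (12), analytic single-constraint solution, and conjugate-gradient products, and every numerical value is invented) solves a small instance of the linearized subproblem (11) and also forms the recovery direction (14).

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 5                                    # toy parameter dimension
g = rng.normal(size=n)                   # objective gradient (invented)
b = rng.normal(size=n)                   # cost-constraint gradient (invented)
A = rng.normal(size=(n, n))
H = A @ A.T + np.eye(n)                  # positive-definite stand-in for the FIM
c = -0.1                                 # c = J_C(pi_k) - d; negative means currently feasible
delta = 0.01                             # trust region size

# Linearized CPO subproblem (11) over x = theta - theta_k, written as a
# minimization of -g^T x; SLSQP 'ineq' constraints mean fun(x) >= 0.
res = minimize(
    lambda x: -g @ x,
    x0=np.zeros(n),
    jac=lambda x: -g,
    method="SLSQP",
    constraints=[
        {"type": "ineq", "fun": lambda x: -(c + b @ x)},             # c + b^T x <= 0
        {"type": "ineq", "fun": lambda x: delta - 0.5 * x @ H @ x},  # 1/2 x^T H x <= delta
    ],
)
x_star = res.x
print("cost constraint slack:", -(c + b @ x_star))
print("quadratic KL model:", 0.5 * x_star @ H @ x_star, "<= delta =", delta)

# Recovery direction (14), used when the subproblem is infeasible:
# theta* = theta_k - sqrt(2*delta / (b^T H^-1 b)) * H^-1 b.
Hinv_b = np.linalg.solve(H, b)
recovery = -np.sqrt(2.0 * delta / (b @ Hinv_b)) * Hinv_b
print("recovery step exhausts the trust region:", 0.5 * recovery @ H @ recovery)

In the paper's implementation H is never formed explicitly; as noted above, the products H^{-1}g and H^{-1}b are approximated with the conjugate gradient method.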
This recovery approach is principled in that it uses the limiting search direction as the intersection of the trust region and the constraint region shrinks to zero. We give the pseudocode for our algorithm (for the single-constraint case) as Algorithm 1, and have made our code implementation available online at https://github.com/jachiam/cpo. \n Tightening Constraints via Cost Shaping Because of the various approximations between (3) and our practical algorithm, it is important to build a factor of safety into the algorithm to minimize the chance of constraint violations. To this end, we choose to constrain upper bounds $C_i^+$ on the original constraints, instead of the original constraints themselves. We do this by cost shaping: $C_i^+(s, a, s') = C_i(s, a, s') + \Delta_i(s, a, s')$, (15) where $\Delta_i : S \times A \times S \to \mathbb{R}_+$ correlates in some useful way with $C_i$. In our experiments, where we have only one constraint, we partition states into safe states and unsafe states, and the agent suffers a safety cost of 1 for being in an unsafe state. We choose $\Delta$ to be the probability of entering an unsafe state within a fixed time horizon, according to a learned model that is updated at each iteration. This choice confers the additional benefit of smoothing out sparse constraints. \n Connections to Prior Work Our method has similar policy updates to primal-dual methods like those proposed by Chow et al. (2015), but crucially, we differ in computing the dual variables (the Lagrange multipliers for the constraints). In primal-dual optimization (PDO), dual variables are stateful and learned concurrently with the primal variables (Boyd et al., 2003). In a PDO algorithm for solving (3), dual variables would be updated according to $\nu_{k+1} = \left(\nu_k + \alpha_k\left(J_C(\pi_k) - d\right)\right)_+$, (16) where $\alpha_k$ is a learning rate. In this approach, intermediary policies are not guaranteed to satisfy constraints; only the policy at convergence is. By contrast, CPO computes new dual variables from scratch at each update to exactly enforce constraints. \n Experiments In our experiments, we aim to answer the following: \n • Does CPO succeed at enforcing behavioral constraints when training neural network policies with thousands of parameters? \n • How does CPO compare with a baseline that uses primal-dual optimization? Does CPO behave better with respect to constraints? \n • How much does it help to constrain a cost upper bound (15), instead of directly constraining the cost? \n • What benefits are conferred by using constraints instead of fixed penalties? \n We designed experiments that are easy to interpret and motivated by safety. We consider two tasks, and train multiple different agents (robots) for each task: \n • Circle: The agent is rewarded for running in a wide circle, but is constrained to stay within a safe region smaller than the radius of the target circle. \n • Gather: The agent is rewarded for collecting green apples, and constrained to avoid red bombs. \n For the Circle task, the exact geometry is illustrated in Figure 5 in the supplementary material. Note that there are no physical walls: the agent only interacts with boundaries through the constraint costs. The reward and constraint cost functions are described in the supplementary material (Section 10.3.1). In each of these tasks, we have only one constraint; we refer to it as $C$ and its upper bound from (15) as $C^+$.
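Both the cost shaping rule (15) and the primal-dual multiplier update (16) used by the PDO baseline are one-liners; the sketch below (our illustration, with an invented learning rate, constraint limit, and cost data) states them in code for the single-constraint case.

import numpy as np

def pdo_dual_update(nu, J_C, d, alpha):
    # Primal-dual multiplier update (16): nu <- (nu + alpha * (J_C - d))_+ .
    return max(0.0, nu + alpha * (J_C - d))

def shaped_cost(C, Delta):
    # Cost shaping (15): C+ = C + Delta, with Delta >= 0 correlated with C.
    return C + Delta

# Multiplier trajectory for an invented sequence of constraint returns J_C
# around an invented limit d = 10.
nu, alpha, d = 0.0, 0.05, 10.0
for J_C in [14.0, 12.5, 11.0, 9.5, 10.2]:
    nu = pdo_dual_update(nu, J_C, d, alpha)
    print(f"J_C = {J_C:5.1f}   nu = {nu:.3f}")

# Shaping a sparse 0/1 safety cost with a hypothetical learned estimate of
# the probability of entering an unsafe state within a short horizon.
C = np.array([0.0, 0.0, 1.0, 0.0])
Delta = np.array([0.05, 0.20, 0.90, 0.10])
print(shaped_cost(C, Delta))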
We experiment with three different agents: a point-mass ($S \subseteq \mathbb{R}^9$, $A \subseteq \mathbb{R}^2$), a quadruped robot (called an 'ant') ($S \subseteq \mathbb{R}^{32}$, $A \subseteq \mathbb{R}^8$), and a simple humanoid ($S \subseteq \mathbb{R}^{102}$, $A \subseteq \mathbb{R}^{10}$). We train all agent-task combinations except for Humanoid-Gather. For all experiments, we use neural network policies with two hidden layers of size (64, 32). Our experiments are implemented in rllab (Duan et al., 2016). \n Evaluating CPO and Comparison Analysis Learning curves for CPO and PDO are compiled in Figure 1. Note that our constraint value graphs show the $C^+$ return instead of the $C$ return (except in Point-Gather, where we did not use cost shaping due to that environment's short time horizon), because this is what the algorithm actually constrains in these experiments. For our comparison, we implement PDO with (16) as the update rule for the dual variables, using a constant learning rate $\alpha$; details are available in the supplementary material (Section 10.3.3). We emphasize that in order for the comparison to be fair, we give PDO every advantage that is given to CPO, including equivalent trust region policy updates. To benchmark the environments, we also include TRPO (trust region policy optimization) (Schulman et al., 2015), a state-of-the-art unconstrained reinforcement learning algorithm. The TRPO experiments show that optimal unconstrained behaviors for these environments are constraint-violating. We find that CPO is successful at approximately enforcing constraints in all environments. In the simpler environments (Point-Circle and Point-Gather), CPO tracks the constraint return almost exactly to the limit value. By contrast, although PDO usually converges to constraint-satisfying policies in the end, it is not consistently constraint-satisfying throughout training (as expected); for example, see the spike in constraint value that it experiences in Ant-Circle. Additionally, PDO is sensitive to the initialization of the dual variable. By default, we initialize $\nu_0 = 0$, which exploits no prior knowledge about the environment and makes sense when the initial policies are feasible. However, it may seem appealing to set $\nu_0$ high, which would make PDO more conservative with respect to the constraint; PDO could then decrease $\nu$ as necessary after the fact. In the Point environments, we experiment with $\nu_0 = 1000$ and show that although this does assure constraint satisfaction, it can also substantially harm performance with respect to return. Furthermore, we argue that this is not adequate in general: after the dual variable decreases, the agent could learn a new behavior that increases the correct dual variable more quickly than PDO can attain it (as happens in Ant-Circle for PDO; observe that performance is approximately constraint-satisfying until the agent learns how to run at around iteration 350). We find that CPO generally outperforms PDO on enforcing constraints, without compromising performance with respect to return: CPO quickly stabilizes the constraint return around the limit value, while PDO is not consistently able to enforce constraints throughout training. \n Ablation on Cost Shaping In Figure 3, we compare the performance of CPO with and without cost shaping in the constraint. Our metric for comparison is the $C$ return, the 'true' constraint. The cost shaping does help, almost completely accounting for CPO's inherent approximation errors. However, CPO is nearly constraint-satisfying even without cost shaping. \n Constraint vs. Fixed Penalty
In Figure 4, we compare CPO to a fixed penalty method, where policies are learned using TRPO with rewards $R(s, a, s') - \nu C^+(s, a, s')$ for $\nu \in \{1, 5, 50\}$. We find that fixed penalty methods can be highly sensitive to the choice of penalty coefficient: in Ant-Circle, a penalty coefficient of 1 results in reward-maximizing policies that accumulate massive constraint costs, while a coefficient of 5 (less than an order of magnitude difference) results in cost-minimizing policies that never learn how to acquire any rewards. In contrast, CPO automatically picks penalty coefficients to attain the desired trade-off between reward and constraint cost. \n Discussion In this article, we showed that a particular optimization problem results in policy updates that are guaranteed to both improve return and satisfy constraints. This enabled the development of CPO, our policy search algorithm for CMDPs, which approximates the theoretically-guaranteed algorithm in a principled way. We demonstrated that CPO can train neural network policies with thousands of parameters on high-dimensional constrained control tasks, simultaneously maximizing reward and approximately satisfying constraints. Our work represents a step towards applying reinforcement learning in the real world, where constraints on agent behavior are sometimes necessary for the sake of safety. \n Figure 1. Average performance for CPO, PDO, and TRPO over several seeds (5 in the Point environments, 10 in all others); the x-axis is training iteration. CPO drives the constraint function almost directly to the limit in all experiments, while PDO frequently suffers from over- or under-correction. TRPO is included to verify that optimal unconstrained behaviors are infeasible for the constrained problem. \n Figure 2. The Humanoid-Circle and Point-Gather environments. In Humanoid-Circle, the safe area is between the blue panels. \n Figure 3. Using cost shaping (CS) in the constraint while optimizing generally improves the agent's adherence to the true constraint on C return. \n Figure 4. Comparison between CPO and FPO (fixed penalty optimization) for various values of fixed penalty.", "date_published": "n/a", "url": "n/a", "filename": "achiam17a.tei.xml", "abstract": "For many applications of reinforcement learning it can be more convenient to specify both a reward function and constraints, rather than trying to design behavior through the reward function. For example, systems that physically interact with or around humans should satisfy safety constraints. Recent advances in policy search algorithms (", "id": "e9ab6f82877ed9d3fda829fb67e52c92"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Owen Cotton-Barratt", "Toby Ord"], "title": "Existential Risk and Existential Hope: Definitions", "text": "The simple definition One fairly crisp approach is to draw the line at extinction: Definition (i): An existential catastrophe is an event which causes the end of existence of our descendants. This has the virtue that it is a natural division and is easy to understand. And we certainly want to include all extinction events. But perhaps it doesn't cast a wide enough net. Example A: A totalitarian regime takes control of earth. It uses mass surveillance to prevent any rebellion, and there is no chance for escape.
This regime persists for thousands of years, eventually collapsing when a supervolcano throws up enough ash that agriculture is prevented for decades, and no humans survive. In Example A, clearly the eruption was bad, but the worst of the damage was done earlier. After the totalitarian regime was locked in, it was only a matter of time until something or other finished things off. We'd like to be able to talk about entering this regime as the existential catastrophe, rather than whatever event happens to end it. So we need another definition. Although we'll now look at other definitions for existential catastrophes, we do like the simple definition. Luckily there's another term that's already understood: human extinction. Sometimes it's better to talk about extinction risks rather than existential risks, as 'existential risk' is a piece of jargon, whereas 'extinction risk' will be clear to everyone. \n Bostrom's definition Nick Bostrom introduced the concept of existential risks. He has defined them as follows: Definition (ii): An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development. 1 This definition deals well with Example A, placing the existential catastrophe at the point where the totalitarian regime arose, as this caused the permanent and drastic destruction of [humanity's] potential for desirable future development. Example B: A totalitarian regime takes control of the earth. There is only a slight chance that humanity will ever escape. Is this an existential catastrophe? Bostrom's definition doesn't clearly specify whether it should be considered as one. Either answer leads to some strange conclusions. Saying it's not an existential catastrophe seems wrong as it's exactly the kind of thing that we should strive to avoid for the same reasons we wish to avoid existential catastrophes. Saying it is an existential catastrophe is very odd if humanity does escape and recover -then the loss of potential wasn't permanent after all. The problem here is that potential isn't binary. Entering the regime certainly seems to curtail the potential, but not to eliminate it. \n Definition via expectations The idea that potential isn't binary motivates our suggested definition: Definition (iii): An existential catastrophe is an event which causes the loss of a large fraction of expected value. This definition deals well with Example B. If we enter into the totalitarian regime and then at a later date the hope of escape is snuffed out, that represents two existential catastrophes under this definition. We lost most of the expected value when we entered the regime, and then lost most of the remaining expected value when the chance for escape disappeared. A lot of the work of this definition is being done by the final couple of words. 'Value' refers simply to whatever it is we care about and want in the world, in the same way that 'desirable future development' worked in Bostrom's definition. And to talk about expectations we need to have some probabilities in mind. Here we are thinking of objective probabilities. Note that 'potential' in Bostrom's definition requires some similar work to making assumptions about probabilities. \n Existential eucatastrophes and existential hope If we enter the totalitarian regime and then manage to escape and recover, then we had an existential catastrophe which was balanced out by a subsequent gain in expected value. 
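To see how the accounting in definition (iii) works, here is a stylised calculation with entirely made-up numbers, offered only as an illustration. Write expected value as probability times value: if a flourishing long-term future is worth V and is initially judged 50% likely, the expected value is 0.5V. Entering a regime that leaves only a 5% chance of eventual escape cuts this to roughly 0.05 × 0.5V ≈ 0.025V, a loss of about 95% of the expected value; the later snuffing-out of that escape chance removes nearly all of what remains, so each step counts as an existential catastrophe under definition (iii). Conversely, if the improbable escape and recovery actually occur, the expected value jumps from about 0.025V back towards 0.5V.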
This kind of event gives us a concept parallel to that of an existential catastrophe: Definition (iv): An existential eucatastrophe 2 is an event which causes there to be much more expected value after the event than before. This concept is quite natural. We saw it in the context of escape from a regime which threatened the existence of a prosperous future. Our world has probably already seen at least one existential eucatastrophe: the origin of life. When life first arose, the expected value of the planet's future may have become much bigger. To the extent that they were not inevitable, the rise of multicellular life and intelligence may also have represented existential eucatastrophes. In general successfully passing any 'great filter' 3 is an existential eucatastrophe, since beforehand the probability of passing it is small, so the expected value is much smaller than after the filter is dealt with. Armed with this concept, we can draw a new lesson. Just as we should strive to avoid existential catastrophes, we should also seek existential eucatastrophes. In some ways, this isn't a new lesson at all. Under Bostrom's definition we are comparing ourselves to the most optimistic potential we could reach, so failing to achieve a eucatastrophe is itself a catastrophe. However we think more naturally in terms of events than non-events. If life fails to arise on a planet where it might have, it's much clearer to think of a failure to achieve a eucatastrophe than of an existential catastrophe stretching out over the billions of years in which life did not arise. Just as we tend to talk about the existential risk rather than existential catastrophe, we want to be able to refer to the chance of an existential eucatastrophe; upside risk on a large scale. We could call such a chance an existential hope. In fact, there are already people following both of the strategies this suggests. Some people are trying to identify and avert specific threats to our future -reducing existential risk. Others are trying to steer us towards a world where we are robustly well-prepared to face whatever obstacles come -they are seeking to increase existential hope. \n Conclusions We were interested in pinning down what is meant by 'existential risk'. Much of the time, all of the definitions we've looked at will agree on whether something is an existential risk. Keeping it simple can be good, because it helps more people to understand. We therefore advocate talking about 'extinction risks' rather than 'existential risks' when the former term will work. Nonetheless, we may sometimes have to consider more unusual scenarios. It's good to know how to make the definition work well there as it can help us to think about things more clearly. We think that the definition in terms of expectations does a better job of this than previous definitions. In devising the notion of existential catastrophe (and hence existential risk) via expectations, we came across the dual concept we have called 'existential eucatastrophe' (and hence 'existential hope'). We think this captures a natural class of events, and what may be an important one. We hope that having a label for the concept may help others to make better judgements about what courses to pursue. 
\t\t\t * Future of Humanity Institute, University of Oxford & Centre for Effective Altruism † Future of Humanity Institute, University of Oxford \n\t\t\t Bostrom, Existential Risk Reduction as Global Priority, Global Policy, Vol 4, Issue 1 (2013), p15 \n\t\t\t The word 'eucatastrophe' is made of the Greek root 'eu-' meaning 'good' and the word 'catastrophe' in its classical sense of a sudden turn. It was coined by Tolkien to refer to the sudden and unexpected turn for the better frequently found at the end of fairy tales. (Tolkien, John Ronald Reuel. On fairy-stories. Oxford University Press, 1947.)3 Hanson, Robin, The Great Filter -Are We Almost Past It?, 1998.", "date_published": "n/a", "url": "n/a", "filename": "existential-risk-and-existential-hope.tei.xml", "abstract": "We look at the strengths and weaknesses of two existing definitions of existential risk, and suggest a new definition based on expected value. This leads to a parallel concept: 'existential hope', the chance of something extremely good happening. An existential risk is a chance of a terrible event occurring, such as an asteroid striking the earth and wiping out intelligent life -we could call such events existential catastrophes. In order to understand what should be thought of as an existential risk, it is necessary to understand what should be thought of as an existential catastrophe. This is harder than it first seems to pin down.", "id": "8a6eda50ae561a367511c9763af9831e"} {"source": "nonarxiv_papers", "source_filetype": "pdf", "authors": ["Michele Piccione", "Ariel Rubinstein", "Bob Aumann", "Paolo Battigalli", "Sergiu Hart", "Dana Heller", "Bart Lipman", "Avishai Margalit", "Roger Myerson", "Hugh Neary", "Motty Perry", "Tim Van Zandt", "Ruth Weintraub"], "title": "On the Interpretation of Decision Problems with Imperfect Recall*", "text": "INTRODUCTION This paper is an examination of some modelling problems regarding imperfect recall within the model of extensive games. It is argued that, if the assumption of perfect recall is violated, care must be taken in interpreting the main elements of the model. Interpretations that are inconsequential under perfect recall have important implications in the analysis of games with imperfect recall. The distinction between perfect and imperfect recall for extensive games Ž . was introduced in Kuhn 1953 . Since then, traditional game theory has excluded games with imperfect recall from its scope. In this paper, we wish to readdress this topic. Since the interpretative issues on the agenda appear within the framework of extensive games with a single player, we confine our discussion to decision problems with imperfect recall. Extensive decision problems are a special case of extensive games in that the set of players is a singleton. Our basic understanding of an extensive decision problem includes the following assertions: 1. Informational assumptions are modeled by partitioning the situa-Ž . tions histories where the decision maker takes an action into ''information sets.'' The interpretation of the ''informational structure'' is that the decision maker knows the information set he is at but, if an information set is not a singleton, he cannot distinguish among the points in the set he has reached. The decision maker, however, can make inferences or form a belief. 2. A strategy for the decision maker assigns to each information set one distinct action which is executed whenever an information set is reached. 
The decision maker cannot plan to assign different actions to two histories which lie in the same information set. 3. If the decision maker assesses his strategy at an information set including more than one history, he forms beliefs about the histories which led to it. These beliefs are the basis for his considerations. A decision problem exhibits imperfect recall if, at a point of time, the decision maker holds information which is forgotten later on. Specifically, an information set includes some histories which are incompatible with previously held information. Figures 1᎐3 are examples that illustrate three main aspects of imperfect recall in decision problems. The standard motive of imperfect recall appears in Example 2. The decision maker is initially informed about the move of chance and loses . 1 which is central in this paper is rather unconventional; the decision maker does not distinguish between the first and the second nodes. Reaching the second node, he loses the information that he had previously made a choice. The inability of a decision maker to distinguish between two histories on the same path will be referred to as absentmindedness. In the above examples, the information sets determine the ability of the decision maker to recall. In practice, a decision maker can affect what he remembers. In this paper, however, we assume that the decision maker is not allowed to employ an external device to assist him in keeping track of the information which he would otherwise lose. Thus, we sidestep the decision maker's considerations regarding the trade-off between ''more memory'' and ''memory costs.'' FIG. 3. Example 3. The common interpretation of a decision problem with imperfect recall refers to a situation in which an individual takes several successive actions and faces memory constraints. Imperfect recall, however, is not necessarily related to the mental phenomenon of memory loss and may also reflect the imperfect ability to make the inferences necessary for distinguishing among different points in the same information set. A decision maker may not realize that he is at the 17th exit along a highway, either because he does not recall whether he has passed the 16th intersection, or because he cannot infer that he is at the 17th intersection, despite the perfect pictures of each intersection in his mind. The latter interpretation of imperfect recall brings our discussion closer to the topic of ''bounded rationality.'' Ž . An alternative interpretation is found in Isbell 1957 . The decision maker is an organization consisting of many agents who have the same interests and act at different possible instances. In this case, the decision process of the organization may exhibit imperfect recall either because it must keep the instructions given to agents acting in successive situations simple or because of communication problems between agents. Agents receive instructions on how to behave and the collection of these instructions is equivalent to the notion of a strategy. From a psychological point of view, imperfect recall is a very important phenomenon as it puts severe constraints on a decision maker's behavior. We believe, however, that the ultimate proof for its relevance in economic analysis can only come from an interesting model which clearly explains an economic phenomenon. Constructing such a model is beyond the confines of this paper. The structure of the paper is as follows. 
We begin by presenting an example which we call the "paradox of the absentminded driver." This example is used to illustrate many of the points of this paper and will be referred to repeatedly. After providing the formal definition of the model, we discuss two issues of importance for decision problems with absentmindedness. First, in Section 4, we review the circumstances in which allowing the decision maker to randomize affects the analysis. Second, in Section 5, we show that the extension of Bayesian updating to decision problems with absentmindedness is not trivial. The two main topics which will be addressed are the timing of decision and the multiself approaches. The significance of these issues is marginal in decision problems with perfect recall, since the possible answers are inconsequential for the analysis. \n (i) Timing of decision. This issue deals with the interpretation of an information set as either a point of decision or a point at which a strategy is executed. In Section 6, we will show that for decision problems with imperfect recall, optimal strategies may be time inconsistent and strategies which are time consistent may not be optimal. The timing of decisions can be an important consideration for a decision maker, whereas with perfect recall there is no reason to make some decisions at a particular point of time. \n (ii) The multiselves approach to decision making. Standard dynamic inconsistencies are generally addressed by assuming that the decision maker acts as a collection of distinct "selves" who behave independently. The behavior of the decision maker is analyzed as an equilibrium. In Section 7, we extend the multiself approach to decision problems with imperfect recall and show that in this extension an optimal strategy is dynamically consistent. \n THE PARADOX OF THE ABSENTMINDED DRIVER An individual is sitting late at night in a bar planning his midnight trip home. In order to get home he has to take the highway and get off at the second exit. Turning at the first exit leads into a disastrous area (payoff 0). Turning at the second exit yields the highest reward (payoff 4). If he continues beyond the second exit, he cannot go back, and at the end of the highway he will find a motel where he can spend the night (payoff 1). The driver is absentminded and is aware of this fact. At an intersection, he cannot tell whether it is the first or the second intersection and he cannot remember how many he has passed (one can make the situation more realistic by referring to the 17th intersection). While sitting at the bar, all he can do is decide whether or not to exit at an intersection. We exclude at this stage the possibility that the decision maker can include random elements in his strategy. Example 1 describes this situation. Planning his trip at the bar, the decision maker must conclude that it is impossible for him to get home and that he should not exit when reaching an intersection. Thus, his optimal plan will lead him to spend the night at the motel and yield a payoff of 1. Now, suppose that he reaches an intersection. If he had decided to exit, he would have concluded that he is at the first intersection. Having chosen the strategy to continue, he concludes that he is at the first intersection with probability 1/2. Then, reviewing his plan, he finds that it is optimal for him to leave the highway, since it yields an expected payoff of 2.
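The two assessments in this story reduce to a few lines of arithmetic. The following sketch (ours, purely illustrative) computes the planning-stage payoffs of the two pure strategies and the naive reassessment at an intersection under the belief that each intersection is equally likely.

# Payoffs in the absentminded-driver example: exiting at the first
# intersection gives 0, exiting at the second gives 4, continuing past
# both gives 1 (the motel).

def planning_payoff(action):
    # A pure plan must prescribe the same action at every intersection.
    if action == "EXIT":
        return 0      # he turns off at the first intersection
    return 1          # he continues twice and ends at the motel

def reassessment_at_intersection(belief_first=0.5):
    # Naive review at an intersection: with probability belief_first he is
    # at the first intersection, otherwise at the second. 'CONTINUE' keeps
    # the original plan of never exiting, so it yields 1 from either node.
    exit_value = belief_first * 0 + (1 - belief_first) * 4
    continue_value = belief_first * 1 + (1 - belief_first) * 1
    return exit_value, continue_value

print(planning_payoff("CONTINUE"), planning_payoff("EXIT"))  # 1 0
print(reassessment_at_intersection())                        # (2.0, 1.0)

Exiting looks better at the intersection (2 versus 1) even though the same action looked strictly worse at the bar, which is exactly the tension discussed next.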
Despite no new information and no change in his preferences, the decision maker would like to change his initial plan once he reaches an intersection! Note that if the decision maker now infers that he would have exited the highway had he passed the first intersection, his reasoning becomes circular; he must conclude that he is at the first intersection and that it is optimal to continue. We wish to emphasize that this is not a standard example of time inconsistency. Usually, time inconsistency is obtained as a consequence of Ž . changes in either preferences tastes or information regarding the moves of nature during the execution of the optimal plan. Here, preferences over final outcome are constant and the only factor intervening between planning and execution of the optimal strategy is the occurrence of the situation which calls for execution, that is, reaching the intersection. We find this example paradoxical as it exhibits a conflict between two ways of reasoning at the intersection. The first is based on a quite minimal principle of rationality; having chosen an optimal strategy, one does not have to verify its optimality at the time of execution unless there is a change in information or in preferences. In our example, the decision maker knew he would reach the intersection with certainty and his preferences are constant. This principle leads to the conclusion that the decision maker should stick to his plan to continue. The second way is based on the principle which calls at each instance to maximize expected payoffs given the relevant beliefs. In our example, this principle leads to the conclusion of exiting. The conflict between these two potential lines of reasoning is at the root of the apparent ambiguity of our example. \n EXTENSIVE DECISION MODEL In this section, a formal definition of the extensive decision model is Ž . given. The presentation follows that of Osborne and Rubinstein 1994 . The reader can easily identify the model with the standard definition in the ''tree'' language. Ž . ² : A finite decision problem is a five-tuple ⌫ s H, u, C, , I , where: Ž . a H is a finite set of sequences. We assume that the empty se-Ž . quence, , is an element of H and that if a , . . . , a g H and 1 K Ž . Ž . a, . . . , a / then a , . . . , a g H. 1 K 1 Ky1 Ž . We interpret a history a , . . . , a g H as a feasible sequence of 1 K Ž . actions taken by the decision maker or by chance. The history a , . . . , a 1 K Ž . g His terminal if there is no a , . . . , a , a g H. The set of terminal 1 K histories is denoted by Z. The set of actions available to the decision Ž . Ä maker or chance after a nonterminal history h is defined by A h s a: Ž . 4 Ž . h,a g H . To avoid degenerate cases we assume that A h contains at least two elements. When presenting a decision problem diagramatically, we draw H as a tree whose nodes are the set of histories with root and Ž . Ž . whose edges combine a node a , . . . , a with a node a , . . . , a . 1 K 1 Kq1 Ž . Ž . b u: Z ª R is a utility function which assigns a number payoff to each of the terminal histories. Preferences are defined on the set of all lotteries over terminal histories and satisfy the VNM assumptions. \n Ž . c C is a subset of H. We assume that the chance player moves after histories in C. \n Ž . d is the decision maker's belief about the chance player's behav-Ž . ior. assigns to each history h g C a probability measure on A h . To Ž . avoid degeneracy, we assume that h, a is strictly positive for all h g C Ž . and a g A h . 
Thus, the set of histories H is partitioned into three subsets: Z, the set of terminal histories; C, the set of histories after which chance moves; D s H y Z y C, the set of histories after which the decision maker moves. \n Ž . e The set of information sets, which is denoted by I, is a partition of Ž . D. We assume that for all h, hЈ in the same cell of the partition A h s Ž . A hЈ ; i.e., the sets of actions available to the decision maker at histories in the same information set are identical. For convenience, with a slight abuse of notation, we will sometimes denote the set of actions which are Ž . available at a history in X by A X . Note that, in contrast to some authors, we do not exclude from the class Ž of decision problems those which exhibit absentmindedness see definition . below . If all information sets in I are singletons we say that ⌫ is a decision problem with perfect information. Ž . A pure strategy, f, is a function which assigns to every history h g D Ž . an element of A h with the restriction that if h and hЈ are in the same Ž . Ž . information set f h s f hЈ . Notice that this definition requires that the decision maker plans an action at histories which he will not reach if he follows the strategy. We are now ready for the main definitions of this paper. The experience Ž . of the decision maker at a history h in D, denoted by exp h , is the sequence of information sets and actions of the decision maker along the history h. We adopt the convention that the last element in the sequence Ž . exp h is the information set which contains h. A decision problem has perfect recall if for any two histories, h, hЈ g D, Ž . Ž . which lie in the same information set, exp h s exp hЈ . Thus, in a decision problem with perfect recall, the decision maker ''remembers'' the succession of the information sets he has faced and the actions he has taken. A decision problem for which the above condition is violated is referred to as a decision problem with imperfect recall. Ž . Given a history h s a , . . . , a and L -K, the history hЈ s 1 K Ž . a, . . . , a is a subhistory of h. We say that a decision problem ⌫ exhibits 1 L absentmindedness if there are two histories h and hЈ such that hЈ is a subhistory of h and both belong to the same information set. The decision problem illustrated in Example 1 exhibits absentminded-Ž . ness since history B and its subhistory are in the same information set. \n THE VALUE OF RANDOMIZATION In this section, we discuss the implications of enlarging the strategy set of a decision maker to include random strategies. Given that the decision maker behaves as an expected utility maximizer, randomization over pure strategies is redundant for problems of either perfect or imperfect recall. Define a mixed strategy to be a probability distribution over the set of pure strategies. It describes a behavior in which randomization occurs only at the outset, before the decision problem unfolds. Each pure strategy induces a lottery over Z. A mixed strategy induces a lottery over Z which is the compound lottery of the lotteries induced by each of the pure strategies in its support. Therefore, no mixed strategy can be strictly preferred to all the pure strategies. Behavior strategies perform a different method of randomization. A beha¨ioral strategy, b, is a function which assigns to every history h g D, a Ž . Ž . \n Ž . Ž . distribution b h over A h such that b h s b hЈ for any two histories h and hЈ which lie in the same information set. In decision problems without Ž . 
absentmindedness b h is a lottery which is realized when the information set which contains h is reached. For decision problems with absentminded-Ž . ness we take b h to be a random device which is activated independently every time the information set which includes h is reached. Consider again Example 1. In this problem there are two pure strategies, ''B'' and ''E'' which yield payoffs of 1 and 0, respectively. Although the absentminded driver cannot use a pure strategy to reach home with certainty, he can toss a coin and obtain an expected payoff of 1. 25 . Note 1 that his optimal behavioral strategy is to exit with probability and yields 3 4 the expected payoff of . 3 It turns out that absentmindedness is necessary for behavioral strategy Ž . to be strictly optimal. This was shown in Isbell 1957 and we provide the proof for completeness. PROPOSITION 1. Suppose ⌫ does not exhibit absentmindedness. Then for any beha¨ioral strategy there is a pure strategy which yields a payoff at least as high. ² : Con¨ersely, suppose ⌫ s H, u, C, , I exhibits absentmindedness. Then, ² : there exist a decision problem ⌫Ј s H, uЈ, C, , I and a beha¨ioral strategy which yields a payoff strictly higher than any payoff achie¨ed by a pure strategy. Proof. See the Appendix. The Paradox of the Absentminded Dri¨er Re¨isited. The inconsistency discussed in Section 2 is not a consequence of the restriction that the strategy set includes only pure strategies and persists when the decision maker is allowed to choose random actions. The optimal behavioral 2 strategy is to choose B with probability . Reaching the intersection, the 3 driver will form beliefs about where he is. Denote by ␣ the probability he assigns to being at the first intersection. Then, his expected payoff is w 2 Ž . x Ž .w Ž . x ␣ p q4 1 y p p q 1 y ␣ p q 4 1 y p , where p is the probability of Ä Ž . 4 not exiting. The optimal p is now max 0, 7␣ y 3 r6␣ . This is inconsistent with his original plan unless ␣ s 1. In other words, his original plan is time consistent if and only if he believes that there is no chance he has passed the first intersection. We find such a belief unreasonable. Given his strategy it seems natural to assign to the second intersection a probability 2 which is times the probability assigned to the first intersection, which 3 implies ␣ s 0.6. The issue of consistent beliefs will be discussed in the next section. This type of time inconsistency can appear also in decision problems in which the optimal strategy is pure. Consider the following example shown in Fig. 4 . The optimal behavioral strategy is the pure strategy which selects L at d and L at d . To verify it, denote by ␣ and ␤ the probabilities of \n CONSISTENT BELIEFS If information sets are to be interpreted as points of decision, Examples 1 and 4 suggest that a decision maker who acts on the basis of expected utility maximization may be unable to execute the optimal strategy. The first step in addressing this issue is to specify the decision maker's beliefs at an information set which is not a singleton. As we shall see, finding an appropriate specification for decision problems with absentmindedness is not conceptually trivial. We define a belief system as a function which assigns to any Ž < . information set X and any history h g X a nonnegative number h X Ž < . such that Ý h X s 1. The interpretation is that the decision maker, h g X Ž < . upon reaching X, assigns probability h X to the possibility that he is at Ž < . h. 
Let p h hЈ, b be the probability that, conditional on reaching hЈ, the history h will be realized when executing the strategy b. We denote Ž < . Ž < . p h ,b by p h b . Several alternatives are conceivable for the specification of the decision maker's beliefs. Since our objective is to examine the optimality of a strategy during its execution, we find it natural to assume that the beliefs of the decision maker are related in a systematic way with the strategy to be assessed. The condition that we require a belief system to satisfy to be consistent with a behavioral strategy b mirrors the frequency approach to belief formation. Namely, if an information set X is reached with Ž < . positive probability, h X is assumed to be equal to the long run proportion of times in which ''visiting'' the information set X involves being in h for a decision maker who plays the decision problem again and again and follows b. DEFINITION. A belief system is consistent with the behavioral strategy b if for every information set X which is reached with positive Ž < . Ž < . Ž < . probability and for every h g X, h X s p h b rÝ p hЈ b . \n hЈg X Our definition of consistency imposes restrictions only on beliefs at information sets which are reached with positive probability. Notice that, for decision problems without absentmindedness, consistency is equivalent to Bayes' formula. For decision problem with absentmindedness, however, the denominator can be greater than one. The similarity with Bayes' formula is only notational. To clarify this definition, consider first the absentminded driver example Our definition of consistent beliefs can also be motivated as being derived from a probability space which includes the time at which the decision maker can be. As an illustration consider Example 5 and assume that each action takes one unit of time. The relevant space of instances Ž . Ž . Ž . Ž . Ž . consists of L, 1 , L, 2 , R, 1 , and R, 2 , where x, t is the instance in which the chance player chooses x and time is t. Assuming equal probabilities for each instance is consistent with the description of the chance player in the decision problem. Then, a decision maker who is told that he is at d updates his belief by the Bayesian formula. For example, the 1 unconditional probability of the second node after R is pr4, where p is the probability of choosing B at d , and the unconditional probability of In this paper, we adopt the above definition of consistency. However, we find the following definition of consistency reasonable as well. DEFINITION. A belief system is Z-consistent with the behavioral strategy b if for every information set X which is reached with positive probability and for every h g X < h X Ž . < Ä 4 Ý p z b r࠻ hЈ ¬ hЈgX and is a subhistory of z Ž . Äz