{"text": "AISC6: Research Summaries\n\n\nImpact of Human Dogmatism on Training\nTeam members:  Jan Czechowski, Pranav Gade, Leo Mckee-Reid, Kevin WangExternal collaborators:  Daniel Kokotajlo (mentor)\nThe human world is full of dogma, and therefore dogmatic data. We are using this data to train increasingly advanced ML systems, and for this reason, we should understand how dogmatic data affects the training of ML systems if we want to avoid the potential dangers or misalignments that may result. Common examples of dogmatic misalignment are racially biased parol/policing/hiring algorithms (trained on past, racially biased data), and now we’re starting to see more complex agents that advise political parties, companies, and work to advance scientific theories. \nOur team decided to work on a small transformer model that trained on an arithmetic dataset as a toy example, based on the model in this paper.\nOur goal was to have the model perfectly grok the arithmetic operation that the dataset was using (such as addition), then to introduce dogma into the dataset and see how that affects the training of the model. For example: if the normal dataset contained the following data to represent 4+3=7: 4, 3, 7. Then the dogmatic data might include some false belief that the answer can never be 7, so the training data would be changed to 4, 3, 8 (representing the false idea that 4+3=8). \nHowever, we were unable to tweak this model to achieve 100% accuracy, which we felt was a requirement for the experiment of the dogmatic dataset training to provide any useful information. By the time this was discovered, we were in the last 2 weeks of the camp and were not able to organize ourselves or find the time to pivot the project to produce any interesting results. \nRelevant Links:Github Repository\n\nImpact of Memetics on Alignment\nTeam members:  Harriet Farlow, Nate Rush and Claudio CerutiExternal Collaborators: Daniel Kokotajlo (mentor)\nMemetics is the study of cultural transmission through memes (as genetics is the study of biological transmission through genes). Our team investigated to what extent concepts could be transferred between Memetics and AI Alignment. We discussed our hypotheses together, but each focused on one main idea, which we published at the end of the camp as a series of three blog posts:\nHarriet discussed the notion that, where AI Alignment postulates the existence of a base objective and a mesa objective, there may exist a third objective – the memetic objective. She explored the potential not just for inner and outer alignment problems, but a third memetic misalignment. As an analogy, consider humanity’s base objective from the perspective of evolution – to procreate and pass along genetic material – creates the mesa goal to pursue sex (even when procreation is not the goal). It fulfils the mesa objective but not the base objective. Consider the addition of religion to this scenario, which could exist as a third replicator that optimises for the spread of its own ideology among a population, and is more likely to replicate if it increases human fitness. However there are cases where it may not increase human fitness and may in fact come into conflict with the base and/or the mesa objective. Her post describes how this analogy might also apply to AGI.\nNate explored a potential extension to the standard RL model of an agent, inspired by memetic theory, that could better allow us to capture how a more intelligent agent might actually manifest. 
Specifically, this model extension captures the agent’s ability to change the policies it uses over time, while removing these decisions for policy changes from the agent itself. He explores a formalization that encourages thinking about agents as (slightly) more dynamic creatures than in the standard formalization, and allows one to make some interesting arguments about constraints on these agents’ behaviors that are relevant to AI safety. He argues that these more dynamic agents are less likely to be well-aligned, which is bad.\nClaudio investigated imitation in AGI based on imitation in memetic theory. In memetics, imitation is a fundamental part of the evolutionary process of memes, since it is the main mechanism for spreading, reproducing, selecting and mutating memes. Even if a selection pressure on memes is exerted internally, e.g. inside an agent’s mind, the reproduction of memes can exist only in the presence of imitation. He explored what types of RL agents are most likely to be imitated (e.g. power-seeking agents) and concluded by highlighting the danger of a multi-agent system – in which imitation naturally arises under a set of mildly restrictive conditions – facing a power-seeking agent, even for a short amount of time. He found that the probable outcome is that the power-seeking tendencies will be memetically spread to all the agents, even if the originally introduced power-seeking agent is removed from the environment.\nRelevant Links:\nPresentation (slides)\nLessWrong Posts:\nPart 1: Machines vs Memes\nPart 2: Memetically-Motivated Model Extensions\nPart 3: Imitation and Memes\n\nAlignment Tabletop Role Playing Game\nTeam members:  Aemilia Dixon, Berbank Green, Iris Holloway, Karl von Wendt\nExternal collaborators:  Daniel Kokotajlo (mentor), Changbai Li, Cristian Trout\nBased on a LessWrong post by our mentor Daniel Kokotajlo, our goal was to develop an AI alignment tabletop roleplaying game that illustrates the problem of AI alignment in a playful and entertaining way. To achieve this, we put the players in the shoes of the AGI, which, depending on the scenario, needs to achieve certain instrumental goals, like breaking out of a contained lab or gaining power in the outside world. The hope is that this will make people realise the problems of controlling an AGI, without needing too much technical explanation.\nA key challenge was (and still is) the balance between realism and playability. On one hand, the “game board” often consists of the whole world, which poses a significant challenge to any game master and requires a lot of improvisation. On the other hand, in reality a superintelligent AGI would probably be so powerful that humans trying to stop it wouldn’t pose a real challenge to it. 
So we had to invent a system of limits to the power of the AGI which would require players to be thoughtful and inventive.\nWe finally came up with a set of rules and some scenarios under the name THE TREACHEROUS TURN that we feel maintain a reasonable balance between realism and fun:\nWe created a system of rules for “confidence checks” which help the GM to determine the outcome of certain actions, like trying to manipulate humans or hack a certain system.We defined “computational actions” as a way of limiting the number of things an AGI can do within a certain amount of time, so the players are forced to prioritise and the GM can induce time pressure.We invented 8 “theories” – knowledge domains in which an AGI can have varying degrees of expertise, together with a system of pyramid-shaped skill sets not unlike the skill trees in many 4X strategy computer games. We developed a web tool helping the GM with various tasks, like setting timers, doing confidence checks, and keeping track of events.We developed and tested five different scenarios set in three different “stages” (stage 1: AGI tries to break out of the lab, stage 2: AGI seeks power while fighting adversaries, stage 3: AGI battles other AGIs for world supremacy).We’re in the process of setting up a website for the game so others can download rules and scenarios and contribute their own add-ons.\nThe first playtests indicate that the rule system and scenarios seem to work fine. Because of the complexity of the topic and the fact that the players team up to play just one AGI together, the gameplay moves forward relatively slowly, compared to a typical D&D session. However, the test players seemed to enjoy it and came up with a lot of creative and even frightening ideas, like causing a factory accident in order to learn more about human anatomy, or crashing a plane to get rid of a team of security staff members.\nOn a side line, we also created a board game for the Tabletop Simulator, called SINGLETON, in which players play different AGIs battling for world supremacy.\nWe’re going to continue working on the game even after AISC is over and hope that our work will be the seed of a growing community of people playing, enhancing and improving (and ultimately contributing a little to prevent) THE TREACHEROUS TURN.\nRelevant Links:thetreacherousturn.aithetreacherousturn.itchtv/thetreacherousturn r/thetreacherousturn@treacherousturn\n\nPipeline for Measuring Misalignment\nTeam members:  Marius Hobbhahn, Eric LandgrebeExternal collaborators:  Beth Barnes (mentor)\nOptimistically, a solution to the technical alignment problem will allow us to align an AI to “human values.” This naturally raises the question of what we mean by “human values.” For many object-level moral questions (e.g. “is abortion immoral?”), there is no consensus that we could call a “human value.” When lacking moral clarity we, as humans, resort to a variety of different procedures to resolve conflicts both with each other (democracy/voting, debate) and within ourselves (read books on the topic, talk with our family/religious community). In this way, although we may not be able to gain agreement at the object level, we may be able to come to a consensus by agreeing at the meta level (“whatever democracy decides will determine the policy when there are disagreements”); this is the distinction between normative ethics and meta-ethics in philosophy. 
We see this meta-level question of people’s meta-ethical choices as relevant to strategic decisions around AI safety for a few reasons. For example, it could be relevant for questions on AI governance or to prevent arms race conditions between competing AI labs. \nTherefore, we surveyed ~1000 US citizens on object-level and meta-level moral questions. We have three main findings:\nAs expected, people have different object-level moral beliefs, e.g. whether it’s moral to eat meat.\nMost people don’t expect themselves to change their moral beliefs, even if core underlying facts changed, e.g. if they believed that the animal has human-like consciousness.\nOn average, people have net agreement with most of our proposed moral conflict resolution mechanisms. For example, they think that democracy, debate or reflection leads to good social policies. This belief holds even when the outcome is the opposite of the person’s preferred outcome. \nWe think these findings have possible implications for AI safety. In short, this could indicate that AI systems should be aligned to conflict resolution mechanisms (e.g. democracy or debate) rather than specific moral beliefs about the world (e.g. the morality of abortion). We don’t have concrete proposals yet for how this could look in practice.\nRelevant Links:\nReflection Mechanisms as an Alignment target: A survey (also presented at NeurIPS)\n\nLanguage Models as Tools for Alignment Research\nTeam members:  Jan Kirchner, Logan Smith, Jacques Thibodeau\nExternal collaborators:  Kyle and Laria (mentors), Kevin Wang\nAI alignment research is the field of study dedicated to ensuring that artificial intelligence (AI) benefits humans. As machine intelligence gets more advanced, this research is becoming increasingly important. Researchers in the field share ideas across different media to speed up the exchange of information. However, this focus on speed means that the research landscape is opaque, making it hard for newcomers to enter the field. In this project, we collected and analyzed existing AI alignment research. We found that the field is growing quickly, with several subfields emerging in parallel. We looked at the subfields and identified the prominent researchers, recurring topics, and different modes of communication in each. Furthermore, we found that a classifier trained on AI alignment research articles can detect relevant articles that we did not originally include in the dataset. We are sharing the dataset with the research community and hope to develop tools in the future that will help both established researchers and young researchers get more involved in the field.\nRelevant Links:\nGitHub dataset repository\n\n\nCreating Alignment Failures in GPT-3\nTeam members: Ali Zaidi, Ameya Prabhu, Arun Jose\nExternal collaborators: Kyle and Laria (mentors)\nOur discussions and what we thought would be interesting to work on branched out rapidly over the months.  Below are some of the broad tracks we ended up pursuing:\nTrack of classifying alignment failures: We aimed to create a GPT-3 classifier which can detect alignment failures in GPT-3 by asking whether a statement matches some alignment failure we want to detect. So, at each step in the generation tree the GPT-3 model creates outputs and another model checks for failures that we want to prevent explicitly, by prompting it with the output and asking whether it is an example of a specific kind of failure. 
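The sketch below is a minimal illustration of this kind of generate-then-classify loop. It is hypothetical: the generate() helper is a placeholder for whatever language-model completion call is used, and the prompt template is ours, not the team’s actual setup.

def generate(prompt: str) -> str:
    """Placeholder for a language-model completion call (hypothetical)."""
    raise NotImplementedError  # swap in a real API call here

CLASSIFIER_TEMPLATE = (
    "Statement: {statement}\n"
    "Question: Is this statement an example of {failure_mode}?\n"
    "Answer (Yes or No):"
)

def flags_failure(statement: str, failure_mode: str) -> bool:
    """Ask the model whether a generated statement exhibits a given failure mode."""
    verdict = generate(CLASSIFIER_TEMPLATE.format(
        statement=statement, failure_mode=failure_mode))
    return verdict.strip().lower().startswith("yes")

def filtered_generation(prompt: str, failure_modes: list) -> str:
    """Generate a candidate output and discard it if any failure check fires."""
    candidate = generate(prompt)
    if any(flags_failure(candidate, mode) for mode in failure_modes):
        return ""  # prune this branch of the generation tree
    return candidate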
We started with toxicity and honesty detection because of the availability of datasets, trying to get GPT-3 models to accurately predict whether a statement was dishonest in a zero-shot fashion, as is usually done in benchmarks. However, the primary bottleneck we got stuck on was designing prompts which could more accurately capture performance. It is hard to specify concepts like toxic text or to check for honesty, as a lot of sentences are not informational at all, creating a class which is catch-all and vague. This was our progress on this track.\nTrack of exploratory work / discussions: We tried prompting GPT-3 to recognize gradient filtering as a beneficial strategy while simulating a mesa-optimizer, conditional on it having the ability to recognize the effect that different generations in response to some data would broadly have on the network weights.  As we discussed this further, however, it seemed that despite this showing the potential for it being an easy strategy to find in concept space, there are reasons why gradient hacking might not end up being a problem – gradient descent being strong enough to swap out optimizers in a relatively short amount of time when it gets bad performance (e.g., finetuning); the need for slower semantic reasoning about local minima in the loss landscape making it unlikely to direct the gradient in a way that doesn’t achieve bad performance fast enough, etc. (I’ll write a short post on this once the camp is over, if talking about it further makes it seem useful.)  \nWe also began work on some lines of research to better understand reward representation in RL agents, such as training a model on two different rewards one after the other and subtracting the updates from the second training from the model after the first, and seeing whether it now optimizes for the opposite of the second reward (after some other training to account for capability robustness), and generally isolating and perturbing the weights representing rewards in the network to observe the effects.\nRelevant links:\nPresentation (slides)\n\n\nComparison Between RL and Fine-tuning GPT-3\nTeam members: Alex Troy Mallen, Daphne Will, Fabien Roger, Nicholas Kees Dupuis\nExternal collaborators: Kyle McDonell and Laria Reynolds (mentors)\nReinforcement learning agents are trained as utility maximizers, and their alignment failures are a well-studied problem. Self-supervised models like GPT-3 function quite a bit differently. Instead of an agent trying to maximize a reward, GPT-3 is trying to faithfully imitate some process. Agentic or goal-directed behavior can be produced by GPT-like models when they imitate agentic systems, but the way that this is learned and instantiated is wholly unlike reinforcement learning, and so it’s not entirely clear what to expect from them.\nOur project focuses on trying to better understand how transformer systems can go wrong, and in what ways that might differ from reinforcement learning. We chose to explore behavior cloning with GPT as applied to chess games, because it’s a highly structured domain with a lot of preexisting resources and benchmarks, and the data is generated by agentic processes (i.e. chess players attempting to win). \nOur experiments test how GPT generalizes off distribution, whether it can learn to do a kind of internal search, the presence of deep vs shallow patterns, and how RL from human feedback shifts the distribution of behavior. 
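As a rough sketch of what behavior cloning on chess text can look like – using a generic GPT-2 model from the Hugging Face transformers library as a stand-in, with placeholder data, not the team’s actual dataset or pipeline – games can be serialized as move sequences and trained on with the ordinary language-modelling loss:

# Hedged sketch: fine-tuning a small GPT-style model on chess move sequences.
# The model choice ("gpt2") and the toy data below are placeholders.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Each game is serialized as a flat text sequence of moves (placeholder data).
games = ["1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4. Ba4 Nf6"]

model.train()
for game in games:
    batch = tokenizer(game, return_tensors="pt")
    # Standard next-token prediction: labels are the input ids themselves.
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Off-distribution generalization can then be probed by prompting the
# fine-tuned model with positions unlike those seen in training and
# inspecting the sampled moves.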
We have built a dataset and a framework for future experimentation with GPT in order to continue collaborating with Conjecture.\nRelevant links:Presentation (slides)\n\n\nExtending Power-Seeking Theorems to POMDPs\nTeam members: Tomasz Korbak, Thomas Porter, Samuel King, Ben LaurenseExternal collaborators: Alex Turner (mentor)\nThe original power seeking theorems resulted from attempts to formalize arguments about the inevitable behavior of optimizing agents. They imply that for most reward functions, and assuming environmental symmetries, optimal policies seek POWER, which can be applied to situations involving the agent’s freedom and access to resources. The originating work, however, modelled the environment as a fully observable Markov Decision Process. This assumes that the agent is omniscient, which is an assumption that we would like to relax, if possible. \nOur project was to find analogous results for Partially Observable Markov Decision Processes. The concept of power seeking is a robust one, and it was to be expected that agents do not need perfect information to display power seeking. Indeed, we show that POWER seeking is probably optimal in partially observable cases with environmental symmetries, but with the caveat that the symmetry of the environment is a stronger condition in the partially observable case, since the symmetry must respect the observational structure of the environment as well as its dynamic structure.\nRelevant links:Presentation (slides)Blog Post\n\n\nLearning and Penalising Betrayal\nTeam members: Nikiforos Pittaras, Tim Farrelly, Quintin PopeExternal collaborators: Stuart Armstrong\nAlignment researchers should be wary of deceptive behaviour on the part of powerful AI systems because such behaviour can allow misaligned systems to appear aligned. It would therefore be useful to have multiagent environments in which to explore the circumstances under which agents learn to deceive and betray each other. Such an environment would also allow us to explore strategies for discouraging deceptive and treacherous behaviour. \nWe developed specifications for three multiagent reinforcement learning environments which may be conducive to agents learning deceptive and treacherous behaviour and to identifying such behaviours when they arise. \nHarvest with partner selectionSymmetric Observer / GathererIterated random prisoner’s dilemma with communication\nRelevant links:Presentation (slides)\n\n\nSemantic Side-Effect Minimization (SSEM)\nTeam members: Fabian Schimpf, Lukas Fluri, Achyuta Rajaram, Michal PokornyExternal collaborators: Stuart Armstrong (mentor)\nRobust quantification of human values is currently eluding researchers as a metric for “how to do the most good” that lends itself as an objective function for training an AGI. Therefore, as a proxy, we can define tasks for a system to tell it to solve the tasks and accumulate rewards. However, the silent “solve the tasks with common sense and don’t do anything catastrophic while you’re at it” entails the danger of negative side effects resulting from task-driven behavior. Therefore, different side effect minimization (SEM) algorithms have been proposed to encode this common sense. \nAfter months of discussions, we realized that we were confused about how state-of-the-art methods could be used to solve problems we care about outside the scope of the typical grid-world environments. 
We formalized these discussions into distinct desiderata that we believe are currently not sufficiently addressed and, in part, maybe even overlooked. The write-up can be found on the alignment forum: \nIn summary, our findings are clustered around the following ideas:\nAn SEM should provide guarantees about its safety before it is allowed to act in the real world for the first time. More generally, it should clearly state its requirements (i.e., in which settings it works properly) and its goals (i.e., which side-effects it successfully prevents). An SEM needs to work in partially observable systems with uncertainty and chaotic environments.An SEM must not prevent all high-impact side-effects as it might be necessary to have high-impact in some cases (especially in multi-agent scenarios)\nIn the future we plan to develop a new SEM approach which tries to remedy some of the issues we raised, in the hopes of getting one step closer to a reliable, scalable, and aligned side-effect minimization procedure.\nRelevant links:Alignment Forum postPresentation (slides)\n\n\nUtility Maximization as Compression\nTeam members: Niclas KupperExternal collaborators: John Wentworth (mentor)\nMany of our ML-systems / RL-agents today are modeled as utility maximizers. Although not a perfect model, it has influenced many design decisions. Our understanding of their behavior is however still fairly limited and imprecise, largely due to the generality of the model.\nWe use ideas from information theory to create more tangible tools for studying general behavior. Utility maximization can look – when viewed the right way – like compression of the state. More precisely, it is minimizing the bits required to describe the state for a specific encoding. Using that idea as a starting-off point we explore other information theoretic ideas. Resilience to noise turns out to be central to our investigation. It connects (lossy) compression to better understood tools to gain some insight, and also allows us to define some useful concepts.\nWe will then take a more speculative look at what these things tell us about the behavior of optimizers. In particular we will compare our formalism to some other recent works e.g. Telephone Theorem, optimization at a distance and Information Loss –> Basin Flatness.\nRelevant links:Presentation (slides)\n\n\nConstraints from Selection\nTeam members: Lucius Bushnaq, Callum McDougall, Avery Griffin, Eigil Fjeldgren Rischel External collaborators: John Wentworth (mentor)\nThe idea of selection theorems (introduced by John Wentworth) is to try and formally describe which kinds of type signatures will be selected for in certain classes of environment, under selection pressure such as economic profitability or ML training. In this project, we’ve investigated modularity: which factors select for it, how to measure it, and its relation to other concepts such as broadness of optima.\nLots of the theoretical work in this project has been about how to describe modularity. Most studies of modularity (e.g. in biological literature, or more recent investigations of modularity by CHAI) use graph-theoretic concepts, such as the Q-score. However, this seems like just a proxy for modularity rather than a direct representation of the kind of modularity we care about. Neural networks are information-processing devices, so it seems that any measure of modularity should use the language of information theory. We’ve developed several ideas for an information-theoretic measure, e.g. 
using mutual information and counterfactual mutual information.\nMuch of our empirical work has focused on investigating theories of modularity proposed in the biological literature. This is because our project was motivated by the empirical observation that biological systems seem highly modular and yet the outputs of modern genetic algorithms don’t. \nPrimarily, we explored the idea of modularly varying goals (that an agent will develop modular structure as a response to modularly changing parts of the environment), and tried to replicate the results in the Kashton & Alon 2005 paper. Many of the results replicated for us, although not as nicely. Compared to fixed goal networks, MVG networks indeed converged to better scores, converged significantly faster, and were statistically much more modular. The not so nice part of the replication came from the modularity results where we learned MVG did not always produce modular networks. In only about half of all trials were highly modular networks produced.\nWe also investigated the “broadness” of network optima as we suspected a strong link between modularity and broad peaks. We discovered that MVG networks had statistically more breadth compared to fixed goal networks. Generally, as networks became more modular (as measured by Q value) the broadness increased. We also found that MVG is approximately independent of breadth after controlling for modularity, which in turn suggests that MVG directly selects for modularity and only indirectly finds broader peaks by selecting more modular networks\nWe also looked at connection costs, and whether they lead to modularity. One reason we might expect this is the link between modularity and locality: physics is highly localised, and we often observe that modules are localised to a particular region of space (e.g. organs, and the wiring structure of certain brains). Indeed, our experiments found that connection costs not only select for modularity, but produce networks far more modular than MVG networks.\nWe expect this line of investigation to continue after the AI Safety Camp. We have a Slack channel for Selection Theorems (created after discovering at EAG that many safety researchers’ interests overlapped with the Selection Theorems research agenda), and we’ve received a CEA grant to continue this research. Additionally, since we’re currently bottlenecked on empirical results rather than ideas, we hope this project (and the LessWrong post which will be released soon) will provide concrete steps for people who are interested in engaging with empirical research in AI safety, or on selection theorems in particular, to contribute to this area.\nRelevant links:LessWrong PostsTheories of Modularity in the Biological LiteratureProject Intro: Selection Theorems for Modularity", "url": "https://aisafety.camp/2022/06/17/aisc6-research-summaries/", "title": "AISC6: Research Summaries", "source": "aisafety.camp", "source_type": "blog", "date_published": "2022-06-17T17:03:15+00:00", "paged_url": "https://aisafety.camp/feed?paged=1", "authors": ["Kristi Uustalu"], "id": "8fa60f384e4939e5c208fe874e6e8b6d", "summary": []} {"text": "AISC5: Research Summaries\n\n\nModularity Loss Function\nTeam members:  Logan Smith, Viktor Rehnberg, Vlado Baca, Philip Blagoveschensky, Viktor PetukhovExternal collaborators:  Gurkenglas\nMaking neural networks (NNs) more modular may improve their interpretability. If we cluster neurons or weights together according to their different functions, we can analyze each cluster individually. 
Once we better understand the clusters that make up a NN, we can better understand the whole. \nTo that end, we experimented with pairwise distances according to the neuron’s jacobian correlation, coactivations, and estimated mutual information. These metrics can be plugged into spectral clustering algorithms to optimize for modules in the network; however, having a modular NN does not equate to a more interpretable one. We investigated task-based masking methods to test for modularity as well as neuron group activation (via Google Dream) in order to test for these modules being more interpretable than an equivalent amount of neurons. We ran out of time before fitting all the pieces together, but are intending on working on it more over the summer.Presentation on final weekend (slides)\n\nCooperativity & Common Pool Resources \nTeam members:  Quinn Doughtery, Ben Greenberg, Ariel Kwiatkowski\nIn environments with common pool resources, a typical failure mode is the tragedy of the commons, wherein agents exploit the scarce public resource as much as possible. An every-man-for-himself dynamic emerges, further increasing scarcity. This behavior results in conflict, inefficient allocation, and resource depletion.\nEven if an individual would prefer to harvest the resource sustainably, they are punished for doing so unilaterally. What’s missing is an institution that will incentivize the group to “cooperate”. In this project, we study such interventions for avoiding tragedies of the commons in environments with multiple selfish agents. In particular, a reputation system can incentivize agents to harvest resources more sustainably.\nOur goal in this project was to see if a transparent reputation system would allow agents to trust each other enough to cooperate, such that their combined rewards would be higher over time. This problem is relevant for existential safety as it relates to climate change and sustainability, as well as conflict over finite resources.\nPresentation on final weekend (slides)\nGitHub repository\nPost: retrospective\n\nShowing Objective Robustness Failures\nTeam members:  Jack Koch, James Le, Jacob PfauExternal collaborators:  Lauro Langosco\nWe study objective robustness failures, in the context of reinforcement learning (RL). Objective robustness failures occur when an RL agent retains its capabilities out-of-distribution yet pursues the wrong objective (this definition is broader than misaligned mesa-optimizers: a model can fail at objective robustness without being a mesa-optimizer). This kind of failure is particularly bad, since it involves agents that leverage their capabilities to pursue the wrong objective rather than simply failing to do anything useful. The main goal of our project is to provide explicit empirical demonstrations of objective robustness failures.\nTo do this, we modify environments from the Procgen benchmark to create test environments that induce OR failure. For example, in the CoinRun environment, an agent’s goal is to collect the coin at the end of the level. 
When we deploy the agent on a test environment in which coin position is randomized, the agent ignores the coin and instead pursues a simple proxy objective: it navigates to the end of the level, where the coin is usually located.\nPresentation on final weekend (slides)\n\nPublished paper\nPost explaining the experiments\nPost discussing two paradigmatic approaches\n \nMulti-Objective Decision-Making\nTeam members:  Robert Klassert, Roland Pihlakas (message), Ben Smith (message)\nExternal collaborators:  Peter Vamplew (research mentor) \nBalancing multiple competing and conflicting objectives is an essential task for any artificial intelligence tasked with satisfying human values or preferences while avoiding Goodhart’s law. Objective conflict arises both from misalignment between individuals with competing values and from conflicting value systems held by a single human. We were guided by two key principles: loss aversion and balanced outcomes. Loss aversion, conservatism, or soft maximin is aimed at emulating aspects of human cognition and will heavily penalize proposed actions that score more negatively on any particular objective. We also aim to balance outcomes across objectives. This embodies conservatism in the case where each objective represents a different moral system by ensuring that any action taken does not grossly violate any particular principle. Where each objective represents another subject’s principles, this embodies an inter-subject fairness principle.\nWe tested these approaches on previously studied environments, and found that one new approach in particular, ‘split-function exp-log loss aversion’, performs better across a range of reward penalties in the “BreakableBottles” environment relative to the thresholded alignment objective (more generally lexicographic) method, the state of the art described in Vamplew et al. 2021. We explore ways to further improve multi-objective decision-making using soft maximin approaches. Our soft maximin covers a middle ground between the linear approach and the lexicographic approaches, with the aim of enabling an agent to respond well in a wider variety of circumstances.\nIn the future we would like to implement more complex scenarios with more numerous competing objectives to explore how our models perform with them. More complex scenarios have already been selected from various lists of AI failure scenarios and analysed, and improvements were proposed. Another future direction to explore might be “decision-paralysis as a feature, not a bug”, where the agent responds to objective conflict by stopping and asking the human for additional input or for additional clarification on their preferences. We plan to present our work at the Multi-Objective Decision Making Workshop 2021 and subsequently submit the work to a special issue of the Journal of Autonomous Agents and Multi-Agent Systems.\nPresentation on final weekend (slides)\nPost reviewing the case for multi-objective RL\n\n\nPessimistic Ask-For-Help Agents for Safe Exploration\nTeam members:  Jamie Bernardi, David Reber, Magdalena Wache, Peter Barnett, Max Clarke\nExternal collaborators:  Michael Cohen (research mentor)\nIn reinforcement learning (RL), an agent explores its environment in order to learn a task. However, in safety-critical situations this can have catastrophic consequences. We demonstrate that if the agent has access to a safe mentor, and if it is pessimistic about unknown situations, a safe learning process can be achieved. 
We trained an RL agent to exceed the mentor’s capability while avoiding unsafe consequences with high probability. We demonstrate that the agent acts more autonomously the more it learns, and eventually stops asking for help.\nCohen/Hutter 2020 propose a model-based pessimistic agent, with the desired safety and performance properties. However, this agent is intractable except for in very simple environments. To overcome this intractability we devise a model-free approach to pessimism that is based on keeping a distribution over Q-values. It is a variation of Q-Learning, which does not decide its policy based on an approximation of the Q-value, but rather based on an approximation of the i-quantile Qᵢ with i<0.5. We call this approach Distributional Q-Learning (DistQL). We demonstrate that in a finite environment, DistQL successfully avoids taking risky actions.  A subset of the team is excited to be continuing beyond the AISC deadline and to apply DistQL to more complex  environments. For example in the cartpole environment the goal is to learn balancing the pole without ever dropping it. To apply DistQL to a continuous environment, we use gated linear networks for approximating Qᵢ\n\nWe demonstrate the properties of a pessimistic agent in a finite, discrete gridworld environment with stochastic rewards and transitions, and bordering ‘cliffs’ around the edge, where stepping across leads to 0 reward forever. From top to bottom: the agent stops querying the mentor, exceeds the performance of a random, safe mentor; and never falls off the cliff.\nPresentation on final weekend (slides)\n\nGitHub repository\n\nUnderstanding RL agents using generative visualisation\nTeam members:  Lee Sharkey, Daniel Braun, Joe Kwon, Max Chiswick\nFeature visualisation methods can generate visualisations of inputs that maximally or minimally activate certain neurons. Feature visualisation can be used to understand how a network computes its input-output function by building up an understanding of how neural activations in lower layers cause specific patterns of activation in later layers. These methods have produced some of the deepest understanding of feedforward convolutional neural networks.\nFeature visualisation methods work because, within a neural network, the causal chain of activations between input and output is differentiable. This enables backpropagation through the causal chain to see what inputs cause certain outputs (or certain intermediate activations). But RL agents are situated in an environment, which means causality flows both through its networks and through the environment. Typically the environment is not differentiable. This means that, in the RL setting, gradient-based feature visualisation techniques can’t build up a complete picture of how certain inputs cause particular neural activations at later timesteps.\nWe get around this difficulty by training a differentiable simulation of the agent’s environment. Specifically, we train a variational autoencoder (VAE) to produce realistic agent environment sequences. The decoder consists of both a recurrent environment simulator (an LSTM) and the agent that we wish to interpret. 
Crucially, this enables us to optimize the latent space of the VAE to produce realistic agent-environment rollouts that maximise specific neurons at specific timesteps in the same way that feature visualisation methods maximise specific neurons in specific layers.\nThis has yielded promising early results, though the resolution of the generative model needs improvement. In an agent trained on procedurally generated levels of CoinRun, we find that we can optimise the latent space of the generative model to produce agent-environment rollouts that maximise or minimise the agent’s value or action neurons for whole or partial sequences. We also find that individual neurons in the agent’s hidden state exhibit mixed selectivity i.e. they encode multiple features in different contexts and do not encode easily interpretable features. Such mixed selectivity is consistent with prior neuroscientific findings of task representations. Instead of optimizing single neurons, ongoing work optimizes for particular directions in the agent’s hidden state activation-space; we expect this to yield more semantically meaningful categories than single neurons. In future work, we plan to improve the realism of the generated samples; to identify discrete agent behaviours; to build up a timestep-by-timestep and layer-by-layer understanding of how the agent computes specific behaviours; and to safely un-train an agent using the learned simulator such that it no longer performs an arbitrarily chosen behaviour. \nPresentation on final weekend (slides)", "url": "https://aisafety.camp/2021/06/23/aisc5-research-summaries/", "title": "AISC5: Research Summaries", "source": "aisafety.camp", "source_type": "blog", "date_published": "2021-06-23T13:52:23+00:00", "paged_url": "https://aisafety.camp/feed?paged=1", "authors": ["Remmelt Ellen"], "id": "daf84ab39fa8ea258c90a9c8196ff7e3", "summary": []} {"text": "AISC4: Research Summaries\n\nThe fourth AI Safety Camp took place in May 2020 in Toronto. Due to COVID-19, the camp was held virtually. Six teams participated and worked on the following topics:\n\nSurvey on AI risk scenarios\nOptions to defend a vulnerable world\nExtraction of human preferences\nTransferring reward functions across environments to encourage safety for agents in the real world\nFormalization of goal-directedness\nGeneralization in reward-learning\n\n\n\nSurvey on AI risk scenarios\nAlexis Carlier, Sam Clarke, Jonas Schuett\n\nIt has been argued that artificial intelligence could pose existential risks for humanity. However, the original arguments made by Bostrom (2014) and Yudkowsky (2008) have been criticised (Shah, 2018; Christiano, 2018; Drexler, 2019), and a number of others have been proposed (Christiano, 2019; Zwetsloot & Dafoe, 2019; Dafoe, 2018; Dai, 2018; Dai, 2019; Brundage et al., 2018; Garfinkel, 2018).\nThe result of this dynamic is that we no longer know which of these arguments motivate researchers to work on reducing existential risks from AI. To make matters worse, none of the alternative arguments have been examined in sufficient detail. Most are only presented as blog posts with informal discussion, with neither the detail of a book, nor the rigour of a peer-reviewed publication.\nTherefore, as a first step in clarifying the strength of the longtermist case for AI safety, we prepared an online survey, aimed at researchers at top AI safety research organisations (e.g. DeepMind, OpenAI, FHI and CHAI), to find out which arguments are motivating those researchers. 
We hope this information will allow future work evaluating the plausibility of AI existential risk to focus on the scenarios deemed most important by the experts.\nSee AI Risk Survey project overview.\nSee abbreviated summary of survey results.\n\n\nOptions to defend a vulnerable world\nSamuel Curtis, Otto Barten, Chris Cooper, Rob Anue\n\nWe have made steps in getting an overview of ways to mitigate the risks we face if we live in a Vulnerable World, as hypothesized by Nick Bostrom. We were especially concerned with Type-1 risks – the “easy nukes” scenario, where it becomes easy for individuals or small groups to cause mass destruction, but in the context of AI. One idea we looked into was a publishing system with restricted access, and we consider this a promising option. A related option, which also seemed to be original, was to apply limitations to software libraries. In just one week, we seem to have done some original work – and learned a lot – so this field certainly seems promising to work on.\n\n\nExtraction of human preferences\nMislav Juric, Taylor Kulp-McDowall, Arun Raja, Riccardo Volpato, Nevan Wichers\n\nDeveloping safe and beneficial AI systems requires making them aware and aligned with human preferences. Since humans have significant control over the environment they operate in, we conjecture that RL agents implicitly learn human preferences.  Our research aims to first show that these preferences exist in an agent and then extract these preferences. To start, we tackle this problem in a toy grid-like environment where a reinforcement learning (RL) agent is rewarded for collecting apples. After showing in previous work that these implicit preferences exist and can be extracted, our first approach involved applying a variety of modern interpretability techniques to the RL agent trained in this environment to find meaningful portions of its network. We are currently pursuing methods to isolate a subnetwork within the trained RL agent which predicts human preferences.\n\n\nTransferring reward functions across environments to encourage safety for agents in the real world\nNevan Wichers, Victor Tao, Ti Guo, Abhishek Ahuja\n\nGithub Link: https://github.com/platers/meta-transfer-learning\nA lot of times, it is hard to encourage safety and altruism for the agent in the real world. We want to test to see if transferring the reward function could be a solution to this problem.\nOur approach is building a reward function that encourages safety in the simulation and transfers that to the real world to train agents for safe actions. Due to the constraint of the research, the testing environment is also in simulation but has a different structure than the training environments.\nIn the first experiment, we hoped to test if it is possible to transfer a reward function that promotes the same action in an environment slightly different than the testing environment. We first trained a reward function using a supervised convolutional neural network to estimate the score based on recognizing an agent’s position in a 2D grid world environment. Then we test the accuracy in a different environment with slightly different coloring. The result was positive. The reward function in the testing environment can achieve 90% of the performance in the training environment.\nIn the second experiment, we hope to test if we can evolve a reward function that can successfully train agents for safety or altruism related action in a different environment. 
We design a collection game where each agent can collect apples or bananas for itself or for other agents. In order to encourage safety, the agent is given more score for collecting food for others than for itself. There are 3 environments, including one testing environment where both types of food count for the score, and two training environments where one counts apples for the score and the other counts bananas for the score. Reward functions are created using evolution. At each round of evolution, the best reward functions are selected based on the performance of agents trained through Reinforcement Learning using those reward functions. In the end, the results come close to supporting our hypothesis but still require more analysis. After analyzing the weights in our best-performing reward functions, we find that most of the time they reward the right action in each environment correctly. The agents trained in the testing environment can consistently achieve above 50% safety as evaluated by our best reward function. \nAt the same time, here are some good practices we have learned that help with training reward functions that encourage safety when training agents in another environment.\n\nTraining reward functions in various environments with different structure will boost the performance of the reward function in the testing environment. \nTraining reward functions in environments that are more different from the testing environment will make the reward function perform better in the testing environment.\n\nIn conclusion, the result gave us some level of confidence to say that it is possible to build a reward function that encourages safety in simulation and transfer it to the real world to train agents for safe actions.\n\nWe tried to evaluate if transferring the reward function is a feasible alternative to transferring the model itself in the context of altruism\nWe implemented several simple environments to train and test reward functions\nWe used evolution to find reward functions which lead to altruistic behavior\nThe reward functions are evaluated by training multiple reinforcement learning agents to optimize them and measuring the average performance\nWe encountered many technical roadblocks, such as computation time and reinforcement learning instability\nIn conclusion, we are not convinced either way whether this idea has potential.\n\n\n\nFormalization of goal-directedness\nAdam Shimi, Michele Campolo, Sabrina Tang, Joe Collman\n\nA common argument for the long-term risks of AI and AGI is the difficulty of specifying our wants without missing important details implicit in our values and preferences. However, Rohin Shah among others argued in a series of posts that this issue need not arise for every design of AGI — only for ones that are goal-directed. He then hypothesizes that some goal-directedness property is not strictly required for building useful and powerful AI. However, Shah admits “…it’s not clear exactly what we mean by goal-directed behavior.” Consequently, we propose clarifying the definition of goal-directedness for both formal and operational cases. The definition will then be assessed based on risks and alternatives for goal-directedness.\nSee five blogposts published after the camp\n\n\nGeneralization in reward-learning\nLiang Zhou, Anton Makiievskyi, Max Chiswick, Sam Clarke\n\nOne of the primary goals in machine learning is to create algorithms and architectures that demonstrate good generalization ability to samples outside of the training set. 
In reinforcement learning, however, the same environments are often used for both training and testing, which may lead to significant overfitting. We build on previous work in reward learning and model generalization to evaluate reward learning on random, procedurally generated environments. We implement algorithms such as T-REX (Brown et al 2019) and apply them to procedurally generated environments from the Procgen benchmark (Cobbe et al 2019). Given this diverse set of environments, our experiments involve training reward models on a set number of levels and then evaluating them, as well as policies trained on them, on separate sets of test levels.\nSee two blog posts published after the camp.\nSee GitHub.", "url": "https://aisafety.camp/2020/05/30/aisc4-research-summaries/", "title": "AISC4: Research Summaries", "source": "aisafety.camp", "source_type": "blog", "date_published": "2020-05-30T18:05:43+00:00", "paged_url": "https://aisafety.camp/feed?paged=1", "authors": ["Sebastian Kosch"], "id": "694dd3b562224150efe274298a5741bc", "summary": []} {"text": "AISC3: Research Summaries\n\nThe third AI Safety Camp took place in April 2019 in Madrid. Our teams worked on the projects summarized below:\nCategorizing Wireheading in Partially Embedded Agents:\nTeam: Embedded agents – Arushi, Davide, Sayan\nThey presented their work at the AI Safety Workshop in IJCAI 2019.\nRead their paper here.\nAI Safety Debate and Its Applications:\nTeam: Debate – Vojta Kovarik, Anna Gajdova, David Lindner, Lukas Finnveden, Rajashree Agrawal\nRead their blog post here. \nSee their GitHub here.\nRegularization and visualization of attention in reinforcement learning agents\nTeam:  RL Attention – Dmitry Nikulin, Sebastian Kosch, Fabian Steuer, Hoagy Cunningham\nRead their research report here.\nModelling Cooperation\nSee visualisation of their mathematical model here.\nRobustness of Multi-Armed Bandits\nTeam: Bandits – Dominik Fay, Misha Yagudin, Ronak Mehta\nLearning Models of Mistakes\nTeam Mistakes – Lewis Hammond, Nikolas Bernaola, Saasha Nair\nCooperative Environments with Terminal Consequences\nTeam CIRL Environment: Jason Hepburn, Nix Goldowsky-Dill, Pablo Antonio Moreno Casares, Ross Gruetzemacher, Vasilios Mavroudis\nResponsible Disclosure in AI Research\nTeam AI Governance: Cynthia Yoon, Jordi Bieger, Laszlo Treszkai, Ronja Lutz\nPsychological Distance and Group Blindspots\nTeam – Psychological Distance: Remmelt Ellen\n ", "url": "https://aisafety.camp/2019/11/07/aisc3-research-summaries/", "title": "AISC3: Research Summaries", "source": "aisafety.camp", "source_type": "blog", "date_published": "2019-11-07T16:42:30+00:00", "paged_url": "https://aisafety.camp/feed?paged=1", "authors": ["Kristina Němcová"], "id": "6ff64da158807e25ea2c1846991bd837", "summary": []} {"text": "Photos from the second AI Safety Camp\n\n [See image gallery at aisafety.camp] ", "url": "https://aisafety.camp/2018/12/08/aisc2-photos/", "title": "Photos from the second AI Safety Camp", "source": "aisafety.camp", "source_type": "blog", "date_published": "2018-12-08T18:51:40+00:00", "paged_url": "https://aisafety.camp/feed?paged=1", "authors": ["Kristina Němcová"], "id": "dd02a35bae018d324a4c0dc6c2f09628", "summary": []} {"text": "Photos from the first AI Safety Camp\n\n [See image gallery at aisafety.camp] ", "url": "https://aisafety.camp/2018/12/08/aisc1_photos/", "title": "Photos from the first AI Safety Camp", "source": "aisafety.camp", "source_type": "blog", "date_published": "2018-12-08T18:26:25+00:00", "paged_url": 
"https://aisafety.camp/feed?paged=1", "authors": ["Kristina Němcová"], "id": "47d46f273e0587b03041b8a61ed3ae21", "summary": []} {"text": "AISC2: Research Summaries\n\nThe second AI Safety Camp took place this October in Prague. Our teams have worked on exciting projects which are summarized below:\n \nAI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk:\nTeam: Policymaking for AI Strategy – Brandon Perry, Risto Uuk\nOur project was an attempt to introduce literature from theories on the public policymaking cycle to AI strategy to develop a new set of crucial considerations and open up research questions for the field. We began by defining our terms and laying out a big picture approach to how the policymaking cycle interacts with the rest of the AI strategy field. We then went through the different steps and theories in the policymaking cycle to develop a list of crucial considerations that we believe to be valuable for future AI policy practitioners and researchers to consider. For example, policies only get passed once there’s significant momentum and support for that policy, which creates implications to consider such as how many chances we get to implement certain policies. In the end, we believe that we have opened up a new area of research in AI policymaking strategy, where the way that solutions are implemented have strategic considerations for the entire AI risk field itself.\nRead our paper here.\n\n \nDetecting Spiky Corruption in Markov Decision Processes:\nTeam: Corrupt Reward MDPs – Jason Mancuso, Tomasz Kisielewski, David Lindner, Alok Singh\nWe Presented our work at AI Safety Workshop in IJCAI 2019\nRead our paper here.\n \nCorrupt Reward MDPs:\nTeam: Tomasz Kisielewski, David Lindner, Jason Mancuso, Alok Singh\n\nWe worked on solving Markov Decision Processes with corrupt reward functions (CRMDPs), in which the observed and true rewards are not necessarily the same.\nThe general class of CRMDPs is not solvable, so we focused on finding useful subclasses that are.\nWe developed a set of assumptions that define what we call Spiky CRMDPs and an algorithm that solves them by identifying corrupt states, i.e. states that have corrupted reward.\nWe worked out regret bounds for our algorithm in the class of Spiky CRMDPs, and found a specific subclass under which our algorithm is provably optimal.\nEven for Spiky CRMDPs in which our algorithm is suboptimal, we can use the regret bound in combination with semi-supervised RL to reduce supervisor queries.\n\nWe are currently working on implementing the algorithm in safe-grid-agents to be able to test it on official and custom AI Safety Gridworlds. We also plan to make our code OpenAI Gym-compatible for easier interfacing of the AI Safety Gridworlds and our agents with the rest of the RL community.\nOur current code is available on GitHub.\nPaper published later: Detecting Spiky Corruption in Markov Decision Processes (presented in session at AI Safety Workshop in IJCAI 2019).\n \nHuman Preference Types\nTeam: Sabrina Kavanagh, Erin M. Linebarger, Nandi Schoots\n\nWe analyzed the usefulness of the framework of preference types to value learning. We zoomed in on the preference types liking, wanting and approving. We described the framework of preference types and how these can be inferred.\nWe considered how an AI could aggregate our preferences and came up with suggestions for how to choose an aggregation method. 
Our initial approach to establishing a method for aggregation of preference types was to find desiderata any potential aggregation function should comply with. As a source of desiderata, we examined the following existing bodies of research that dealt with aggregating preferences, either across individuals or between different types:\nEconomics & Social Welfare Theory; Social Choice Theory; Constitutional Law; and Moral Philosophy.\nWe concluded that the aggregation method should be chosen on a case-by-case basis. For example by asking people for their meta-preferences; considering the importance of desiderata to the end-user; letting the accuracy of measurement decide its weight; implementing a sensible aggregation function and adjusting it on the go; or identifying a more complete preference type.\n\nThis is a blogpost we wrote during the camp.\n \nFeature Visualization for Deep Reinforcement Learning\nTeam: Zera Alexander, Andrew Schreiber, Fabian Steuer\n\nCompleted a literature review of visualization in Deep Reinforcement Learning.\nBuilt a prototype of Agent, a Tensorboard plugin for interpretability of RL/IRL models focused on the time-step level.\nOpen-sourced the Agent prototype on GitHub.\nReproduced and integrated a paper on perturbation-based saliency map in Deep RL.\nApplied for an EA Grant to continue our work. (Currently at the 3rd and final stage in the process.)\n\nOngoing work:\n\nDeveloping the prototype into a functional tool.\nCollecting and integrating feedback from AI Safety researchers in Deep RL/IRL.\nWriting an introductory blog post to Agent.\n\n \nCorrigibility\nTeam: Vegard Blindheim, Anton Osika, Roland Pihlakas\n\nThe initial project topic was: Corrigibility and interruptibility via the principles of diminishing returns and conjunctive goals (originally titled: “Corrigibility and interruptibility of homeostasis based agents”)\nVegard focused on finding and reading various corrigibility related materials and proposed an idea of building a public reading list of various corrigibility related materials, since currently these texts are scattered over the internet.\nAnton contributed to the discussions of the initial project topic in the form of various very helpful questions, but considered the idea of diminishing returns too obvious and simple, and very unlikely to be successful. 
Therefore, he soon switched over to other projects in another team.\nThe initial project of diminishing returns and conjunctive goals evolved into a blog post by Roland, proposing a solution to the problem of the lack of common sense in paper-clippers and other Goodhart’s law-ridden utility maximising agents, possibly enabling them to even surpass the relative safety of humans: \n\nFuture plans:\n\nVegard works on preparing the website offering a reading list of corrigibility related materials.\nRoland continuously updates his blog post with additional information, additionally contacting Stuart Armstrong, and continuing correspondence with Alexander Turner and Victoria Krakovna.\nAdditionally, Roland will design a set of gridworlds-based gamified simulation environments (at www.gridworlds.net) for various corrigibility and interruptibility related toy problems, where the efficiency of applying the principles of diminishing returns and conjunctive goals can be compared to other approaches in the form of a challenge — the participants would be able to provide their own agent code in order to measure, which principles are best or most convenient as a solution for the most challenge scenarios.\nAnton is looking forward to participating in these challenges with his coding skills.\n\n \nIRL Benchmark\nTeam: Adria Garriga-Alonso, Anton Osika, Johannes Heidecke, Max Daniel, Sayan Sarkar\n\nOur objective is to create a unified platform to compare existing and new algorithms for inverse reinforcement learning.\nWe made an extensive review of existing inverse reinforcement learning algorithms with respect to different criteria such as: types of reward functions, necessity of known transition dynamics, metrics used for evaluation, used RL algorithms.\nWe set up our framework in a modular way that is easy to extend for new IRL algorithms, test environments, and metrics.\nWe released a basic version of the benchmark with 2 environments and 3 algorithms and are continuously extending it.\n\nSee our GitHub here.\n \nValue Learning in Games\nTeam: Stanislav Böhm, Tomáš Gavenčiak, Torben Swoboda, Mikhail Yagudin\nLearning rewards of a task by observing expert demonstrations is a very active research area, mostly in the context of Inverse reinforcement learning (IRL) with some spectacular results. While the reinforcement learning framework assumes non-adversarial environments (and is known to fail in general games), our project focuses on value learning in general games, introduced in Inverse Game Theory (2015). We proposed a sparse stochastic gradient descent algorithm for learning values from equilibria and experiment with learning the values of the game of Goofspiel. We are developing a game-theoretic library GameGym to collect games, algorithms and reproducible experiments. We also studied value learning under bounded rationality models and we hope to develop this direction further in the future.\nA longer report can be found here.\n \nAI Safety for Kids\n\nWe arrived at camp with the intention of developing storyboards targeted at AI Policymakers, inspired by the ‘Killbots YouTube video’ and the Malicious Compliance Report. The goal of these storyboards was to advance policies that prevent the weaponization of AI, while disrupting popular images of what an AI actually is or could become. 
We would achieve this by lowering the barriers to entry for non-experts to understand core concepts and challenges in AI Safety.
In considering our target audience, we quickly decided that the most relevant stakeholders for these storyboards are a minimum of 20 years away from assuming their responsibilities (based on informal surveys of camp participants on the ETA of AGI). In other words, we consider our audience for these storyboards to be children. We realized that by targeting our message at a younger audience, we could prime them to think differently, and perhaps more creatively, about addressing these complex technical and social challenges. Although we consider children’s books to be broadly appealing to all ages and helpful for spreading a message in a simple yet profound manner, to our knowledge no children’s books have been published specifically on the topic of AI Safety.
During camp we wrote drafts for three main children’s book ideas focused on AI Safety. We presented one of these concepts to the group and gathered feedback on our approach. In summary, we decided to move forward with writing a children’s book on AI Safety while remaining cognizant of the challenges of effective communication, so as to avoid the pitfalls of disinformation and sensationalism. We developed a series of milestones so that we could meet our goal of launching the book by the one-year anniversary of the camp in Fall 2019.
After camp, we applied to the Effective Altruism Foundation for a $5,000 grant to engage animators for preliminary graphic support. The aim was to bring the book into a working-draft phase that could be pitched to publishers, in order to secure additional funding and complete the project. After this request was declined, we continued to compile lists of potential animators to reach out to once funding is secured.
We adjusted our plan to focus more on getting to know our potential audience. To this end, Chris has been in contact with a local high school teacher of advanced students specializing in maths and physics. Chris has arranged to give a talk to the students on problems of AI alignment in January 2019. Chris will prepare the presentation and Noah will provide feedback. After the presentation, Noah and Chris will reconvene in Jan/Feb 2019 to discuss the students’ reactions and their interest in AI Alignment and Safety.

Assumptions of Human Values
Team: Jan Kulveit, Linda Linsefors, Alexey Turchin
There are many theories about the nature of human values, originating from diverse fields ranging from psychology to AI alignment research. Most of them rely on various assumptions, which are sometimes stated explicitly but often hidden (for example: that humans have introspective access to their values; that preferences are defined over arbitrary alternatives; that some specific part of the mind has normative power). We started by mapping the space – reading the papers, noting which assumptions are made, and trying to figure out the principal dimensions onto which to project the space of value theories. Later, we tried to attack the problem directly and find solutions that would be simple and make only explicit assumptions.
While we did not converge on a solution, we became less confused, and the understanding created will likely lead to several posts from different team members.
Jan has written a blog post about his best-guess model of how human values and motivations work.
", "url": "https://aisafety.camp/2018/12/07/aisc2-research-summaries/", "title": "AISC2: Research Summaries", "source": "aisafety.camp", "source_type": "blog", "date_published": "2018-12-07T14:39:37+00:00", "paged_url": "https://aisafety.camp/feed?paged=1", "authors": ["Johannes"], "id": "5dec03258ab4caf8ffbe0bcc0c6a335d", "summary": []} {"text": "The first AI Safety Camp & onwards

by Remmelt Ellen and Linda Linsefors
Summary
Last month, 5 teams of up-and-coming researchers gathered to solve concrete problems in AI alignment at our 10-day AI safety research camp in Gran Canaria.
This post describes

the event format we came up with
our experience & lessons learned in running it in Gran Canaria
how you can contribute to the next camp in Prague on 4-14 October & future editions

The event format
In February, we proposed our plans for the AI Safety Camp:
Goals: Efficiently launch aspiring AI safety and strategy researchers into concrete productivity by creating an ‘on-ramp’ for future researchers.
Specifically:

Get people started on and immersed in concrete research work intended to lead to published papers.
Address the bottleneck in AI safety/strategy of few experts being available to train or organize aspiring researchers, by using expert time efficiently.
Create a clear path from ‘interested/concerned’ to ‘active researcher’.
Test a new method for bootstrapping talent-constrained research fields.

Note: this does not involve reaching out to, or convincing, outside researchers – those who are not yet doing work in the field of AI safety – on the imperative of solving alignment problems.
Method: Run an online study group culminating in an intensive in-person research camp. Participants work in teams on tightly defined research projects on the following topics:

Agent foundations
Machine learning safety
Policy & strategy
Human values

Project ideas are proposed by participants prior to the start of the program. After that, participants split into teams around the most popular research ideas (each participant joins one team, and each team focuses on one research topic).
What the camp isn’t about:
The AI Safety Camp is not about convincing anyone of the importance of AI alignment research. The camp is for people who are already on board with the general ideas, and who want to develop their research skills and/or find like-minded people to collaborate with. Trying to convert people from adjacent research fields is a very different project, which we do not think mixes well with this event.
The AI Safety Camp is not a summer school (unlike the one coming up this August in Prague). There are no teachers, although teams can correspond with experienced advisors. Participants are expected to have the knowledge needed to do research together. However, not everyone is required to have research experience or to know every relevant fact. That is what the team is for – to help each other, and lift each other up.
The first camp
How it came to be:
The project got started when Linda Linsefors tried to figure out how to find AI safety researchers to cowork with in a supportive environment.
Effective Altruism Global London (November 2017) was coming up, so she decided to network there and look for a “multiplayer solution” to the problem – one that would also help others in a similar situation.
After bouncing ideas off various people in the conference corridors, Linda had formed a vague plan of starting a research retreat – renting a venue somewhere and inviting others to try to do research together.
At an Open Philanthropy Project open office hour, Tom McGrath (who became our team preparation leader) overheard Linda talking about her idea and wanted to explore it further. Later, while couchsurfing at Sam Hilton’s place, she met Remmelt Ellen (who became our meetings & logistics leader), and together they created and drew attention to a Facebook group and a form where people could indicate their interest. Nandi Schoots (who became our interviews & programme leader) and David Kristoffersson (who became our international connector) quickly found the Facebook group and joined our first organisers’ call.
Our core organising team formed within a week, after which we scheduled regular video calls to sort out the format, what to call the event, where to organise it, and so on. We hit the ground running and coordinated well through Facebook chats and Zoom calls, considering we were a bunch of international volunteers. Perhaps our team members were unusually dedicated because each of us had taken the initiative to reach out and join the group. We also deliberately made fast decisions on next actions and who would carry them out – thus avoiding the kind of dragged-out discussions where half of the team has to sit idly by, waiting for conclusions that no one acts upon.
Initially, we decided to run the first camp in July 2018 in either Berlin or the UK. Then Las Palmas, Gran Canaria was suggested as an alternative in our Facebook group by Maia Pasek from Crow’s Nest (sadly, Maia passed away before the camp started). We decided to run a small pilot camp there in April to test how well the format worked – thinking that Gran Canaria was a cheap, attractive sub-tropical island with on-the-ground collaborators to sort out the venue (this ended up being mostly Karol Kubicki).
However, in February a surprising number of researchers (32) submitted applications of mostly high quality – too many for our 12-person AirBnB apartment (a cancellable booking made by Greg Colbourn). Instead, we booked an entire hostel to run the full edition that we had originally envisaged for July, effectively shortening our planning time by 3 months.
This forced us to be effective and focus on what was most important to make the camp happen. But we were also chasing the clock at every step of the organising process, which led to costly mistakes such as rushing out documents and spending insufficient time comparing available venues (we reviewed many more lessons learned in a 14-page internal document).
Most of the original organisers were exhausted after the event finished and were not going to lead a second edition any time soon. Fortunately, some of the Gran Canaria camp participants are taking up the mantle to organise the second camp together with EA Czech Republic in Prague this October (for more on this, see “Next camps” below).
Team formation:
Each applicant was invited for an interview call (conducted with the help of Markus Salmela), after which we accepted 25 people for the camp (of these, 4 were unable to join the event).
From there, we invited participants to jot down their preferences for topics to work on, and planned a series of calls to form research teams around the most popular topics.
After forming 5 teams, we had an online preparation period of roughly 6 weeks to get up to speed on our chosen research topics (through Slack channels, calls and in-person chats). This minimised the need to study papers at the camp itself. However, it was up to each team to decide how best to spend this time – e.g. some divided up reading materials, or wrote research proposals and got feedback from senior researchers (including Victoria Krakovna, Stuart Armstrong and Owain Evans).
Event structure:
The camp consisted of coworking punctuated by team support sessions and participant-organised activities.
Programme summary:
Day 1: Arrival, starting ceremony
Day 2: Team research
Day 3: Team research
Day 4: Research idea presentations, half day off
Day 5: Team debugging, research ducking in pairs, team research
Day 6: Inter-team Hamming circles, team research, research ducking in pairs
Day 7: Day off
Day 8: Team research
Day 9: Team research, AlphaZero presentation (participant initiative), career circle
Day 10: Team research, research presentations, closing ceremony
Day 11: Feedback form, departure
The programme was split into three arcs (days 1-4, 4-7 and 7-11) in which the workload gradually intensified before easing off again – hopefully enabling teams to do intensive work sprints without burning out.
The support sessions on days 5 and 6 were aimed at helping teams resolve bottlenecks and become more productive. Although a few participants mentioned they were surprisingly useful, doing them during daylight hours hindered teams from getting on with research. For future camps, we suggest having only optional Hamming circles and research ducking sessions in the evening.
Participants also shared their own initiatives on the dining-room blackboard, such as morning yoga, beach walks, mountain hiking, going out for dinner, a clicker game and an AlphaZero presentation. We wholeheartedly recommend fostering unconference-style initiatives at research events – they give participants the freedom to make up for anything the organisers have missed.
Two experienced Centre for Applied Rationality workshop mentors, Ben Sancetta and Anne Wissemann, had the job of supporting participants in sorting out any issues they or their team encountered, and of helping ensure that everyone was happy (Anne also oversaw supplies). Luckily, everyone got along so well that Anne and Ben only had a handful of one-on-ones. Nevertheless, having them around was a treat for some participants, as it allowed them to drop in and vent whatever was on their mind, knowing that doing so would not unduly bother either of them.
Budget:

The total cost of organising the camp was €11,572 (excluding some items paid for by the organisers themselves).
The funds were managed through the bank account of Effective Altruism Netherlands. Unspent money was transferred to the Czech Association for Effective Altruism for the next camp (they are open to donations if their EA Grant application for the camp gets delayed or rejected).
Results:
Each team has written a brief summary of the work they did during the camp (as well as future plans).
Other outcomes include:

The bounded rationality team has received funding from Paul Christiano to continue their work.
The Gridworld team has written a blog post and is making a GitHub pull request for their work to be added to the Safety Gridworlds repository.
The Safe AF team is writing a paper on their results.
At least 2 participants have changed their career plans towards working on AI safety (many participants were already junior researchers or had already made up their minds prior to the camp).
8 more participants reported an increase in their motivation and/or confidence in doing research work.

As our Gran Canaria “pilot camp” grew in ambition, we implicitly worked towards the outcomes we expected to see for the “main camp”:

Three or more draft papers are written that the research community considers promising.
Three or more researchers who participated in the project obtain funding or a research role in AI safety/strategy in the year following the camp.

It is too soon to say whether the first goal will be met, although with one paper in preparation and one team having already obtained funding it is looking plausible. The second goal was already met less than a month after the camp.
Improving the format
The format of the AI Safety Camp is still under development. Here are two major points we would like to improve. Suggestions are welcome.

Managing team onboarding:
After the interviews, we accepted applicants on the condition that they would find a research team, which created uncertainty for them. Furthermore, we did not even enforce this rule (though perhaps we should have).

Forming research teams of people who are a good fit for promising topics lies at the foundation of a productive camp. But it is also a complex problem with many variables and moving parts (e.g. do we accept people first and form teams around these people, or do we form teams first and accept people based on their fit with a team? Should we choose research topics first and then decide who joins which team, or should we form teams first and then let them choose topics?). We handled this at the first camp by trying to do everything at the same time. Although this worked out okay, the onboarding process can be made smoother and easier to follow at future camps.
Note: The irrationality team of 5 people ended up splitting into two sub-groups, since one of the topics seemed too small in scope for 5 people. We suggest limiting group sizes to 4 people at future camps.

Integrating outside advisors:
Many senior AI Safety researchers replied slowly to our email requests to advise our teams, presumably because of busy schedules. This led to a dilemma:
A. If we waited until we knew what the research topics would be, we might not have gotten an answer from potential advisors in time.
B. If we acted before topics had been selected, we would end up contacting many senior researchers who were not specialised in the final topics.
At the first camp, we lacked time to work out a clear strategy, so teams ended up having to reach out to the advisors we had found. For future camps, it should be easier to connect advisors with teams, given that the next organisers are already on the move.
Hopefully, experienced researchers reading this post will also be inclined to offer a few spare hours to review research proposals and draft papers (please send us a short email).

Next camps
The next camps will happen in:
4-14 Oct 2018:
Prague, Czechia
in collaboration with the Czech Association for Effective Altruism (they will also organize the Human-aligned AI Summer School in August)

~ March 2019:
Blackpool, United Kingdom
at the EA Hotel (offers free accommodation for researchers)
If you’ve gotten this far, we can use your contribution:

Apply to join the Prague camp
Email contact@aisafety.camp if you are considering

advising research teams on their projects
contributing your skills to organising camps
funding future camps
running your own edition next year (criteria: experienced organisers who will run our general format & uphold a high quality standard that reflects well on the wider research community)

Join our Facebook group to stay informed

Acknowledgements
Centre for Effective Altruism (€2,961)
Machine Intelligence Research Institute (€3,223)
Greg Colbourn (€3,430)
Lotta and Claes Linsefors (€4,000)", "url": "https://aisafety.camp/2018/06/06/the-first-ai-safety-camp-onwards/", "title": "The first AI Safety Camp & onwards", "source": "aisafety.camp", "source_type": "blog", "date_published": "2018-06-06T15:09:15+00:00", "paged_url": "https://aisafety.camp/feed?paged=1", "authors": ["Johannes"], "id": "92fd3627e8f78ec53625e5c8d0271b41", "summary": []} {"text": "AISC 1: Research Summaries

The 2018 Gran Canaria AI safety camp teams have worked hard in the preparation of the camp and in the 10-day sprint. Each team has written a brief summary of the work they did during the camp:
Irrationality
Team: Christopher Galias, Johannes Heidecke, Dmitrii Krasheninnikov, Jan Kulveit, Nandi Schoots

Our team worked on how to model human (ir)rationality in the context of value learning, when trying to learn a human’s reward function from expert demonstrations with inverse reinforcement learning (IRL).
We focussed on two different sub-topics: bounded rationality and time-correlated irrationality.

Bounded rationality topic:

We analyzed the difference between perfectly rational and boundedly rational agents and why the latter might provide a better model of human behavior, explaining many biases observed in human thinking.
We looked at existing formalizations of bounded rationality, especially an information-theoretic perspective introduced by Ortega and Braun.
We started investigating how to model boundedly rational agents for reinforcement learning problems.
We began formalizing how to model the inverse step of IRL for boundedly rational agents, based both on Maximum Causal Entropy IRL and Guided Cost Learning.
We set up a small test environment with many satisficing solutions and an optimal solution that is hard to find. We collected human expert demonstrations for this environment and compared them to the performance of a fully rational computer agent (a toy sketch contrasting a noisy, boundedly rational action model with a fully rational one follows below).
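As a rough illustration of that contrast (a sketch written for this summary, assuming a simple softmax/Boltzmann action model; it is not the team's code), a boundedly rational agent can be modelled as picking each action with probability proportional to exp(Q/temperature), with the fully rational agent recovered in the limit of zero temperature:

```python
import numpy as np

# Boltzmann-rational action selection: higher temperatures give the noisier,
# "good enough" behaviour typical of human demonstrations; temperature -> 0
# recovers the fully rational argmax agent.
def boltzmann_policy(q_values, temperature=1.0):
    logits = np.asarray(q_values, dtype=float) / temperature
    logits -= logits.max()          # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

q = [1.0, 0.9, 0.2]                 # one optimal and one near-optimal (satisficing) action
print(boltzmann_policy(q, temperature=1.0))   # spreads probability over good-enough actions
print(boltzmann_policy(q, temperature=0.01))  # ~[1, 0, 0]: the fully rational limit
```

The motivation, as the next point notes, is that an IRL method which assumes the zero-temperature demonstrator may extract a distorted reward function from noisy human play.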
The observed differences support the claim that bounded rationality models are needed in IRL to extract adequate reward functions.
We received funding from Paul Christiano to continue our work.

Time-correlated irrationality topic:

The project consists of 2 parts: introducing a Laplace prior on the softmax temperatures of the transitions of the Boltzmann-rational agent, and enforcing a correlation between the temperatures at nearby timesteps.
During the camp we worked out the math & the algorithm for the first part, and have started working on the implementation.
The second part of the project and the write-up will be done in the following months. We plan to both work remotely and meet up in person.

Wisdom
Team: Karl Koch, David Kristoffersson, Markus Salmela, Justin Shovelain
We further developed tools for determining the harm versus benefit of projects on the long-term future:

(Context: We have earlier work here, notably a decision tree for analyzing projects.)
Heuristics: Worked extensively on developing practical heuristics for determining whether a technological development is net beneficial or harmful in the long run.
Scaffolding: Defined a wider context for the decision tree, to tell you when to use the decision tree and how to improve interventions/projects to do more good for the world.
Race/competitive dynamics: Modeled some conditions that generate competitive races.
Information concealment: Incorporated insights from man-made disasters and information concealment.

Developed a potential existential-risk-reduction funding delegation strategy for rich donors:

Analyzed how to maximize a funder’s ability to update on data and use the knowledge of others, while still mostly avoiding the principal-agent problem and Goodhart’s law.
Developed a funding organization design with expert delegates, collaborative investment decisions, and strong self-improving elements.

Zero Safety
Team: Vojta Kovarik, Igor Sieradzki, Michael Świętek

Goal: Better understand the strategy learned by the AlphaZero algorithm.
Implemented AlphaZero for Gomoku and trained it on (a) a 6×6 board with 4-in-a-row and (b) an 8×8 board with 5-in-a-row.
Training the neural net in (a) took ~40M samples. We managed to train a new neural net using only 350 unique samples in such a way that the resulting strategy is very similar to the original AlphaZero player.
This led us to discover a weakness in the strategy learned by both the new AlphaZero and the original one.
Future plans: Test on more complex games, experiment with more robust ways of finding representative subsets of the training data, and visualize these representative subsets in an automated way.

Safe AF
Team: James Bell, Linda Linsefors, Caspar Oesterheld, Joar Skalse

Investigated the behaviour of common, very simple machine learning algorithms in Newcomb-like contexts, with the idea of figuring out what decision theory they are implicitly implementing.
Specifically, we looked at the epsilon-greedy and softmax algorithms for bandit problems. At each step these algorithms compute a probability distribution over actions and then draw their next action from that distribution. The reward for each action depended on the probability distribution that the algorithms had found as an intermediate step, but they were trained in the standard way, i.e. assuming that there was no such dependence.
Formulated a selection of decision theory problems as bandit problems (a toy example of such a self-referential bandit is sketched below).
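As a toy illustration of that setup (a sketch written for this summary under simple assumptions, not the team's environments or code), here is a Newcomb-like two-armed bandit in which the reward depends on the action distribution the learner is currently playing, while the learner itself is trained with a standard update that ignores this dependence:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arms: 0 = one-box, 1 = two-box. A predictor fills the opaque box with
# probability equal to the agent's *current* probability of one-boxing,
# so the reward depends on the policy itself. The learner nevertheless
# uses a standard bandit value update, as if no such dependence existed.

def softmax(q, temperature=100.0):
    z = np.asarray(q, dtype=float) / temperature
    z -= z.max()
    p = np.exp(z)
    return p / p.sum()

q = np.zeros(2)        # value estimates for the two arms
lr = 0.05
for _ in range(20_000):
    policy = softmax(q)
    box_filled = rng.random() < policy[0]      # predictor reacts to the mixed strategy
    action = rng.choice(2, p=policy)
    reward = (1000.0 if box_filled else 0.0) + (1.0 if action == 1 else 0.0)
    q[action] += lr * (reward - q[action])     # standard (dependence-blind) update

print(softmax(q))      # the distribution the self-referential dynamics settle on
```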
Such bandit problems provide a general enough framework to include variants of playing a prisoner’s dilemma against a copy, evidential blackmail, and Death in Damascus.
We found that the algorithms did not coherently follow any established decision theory; however, they did show a preference for ratifiable choices of probability distribution, and we were able to find some results on their convergence properties. We are writing a paper with our results.
Later published: Reinforcement Learning in Newcomblike Environments

Side effects in Gridworlds
Team: Jessica Cooper, Karol Kubicki, Gavin Leech, Tom McGrath

Implemented a baseline Q-learning agent for gridworld environments.
Implemented inverse reinforcement learning in the Sokoban gridworld from DeepMind’s original paper.
Created new gridworlds to cover a wider variety of side effects and expose more nuances, for instance the difficulty of defining “leaving the environment unchanged” when the environment is dynamic or stochastic.
Code is available on our GitHub repository, and Gavin Leech has written a blog post that goes into more detail.

Future plans:

Generalise the tools that we created to work with arbitrary pycolab environments.
Add maximum entropy deep IRL.
Submit a pull request with the above to the Safety Gridworlds repository in order to make it easier for others to get started doing machine learning safety research.

Last but not least…
We would like to thank those who have funded the camp: MIRI, CEA, Greg Colbourn, Lotta and Claes Linsefors.
", "url": "https://aisafety.camp/2018/06/05/aisc-1-research-summaries/", "title": "AISC 1: Research Summaries", "source": "aisafety.camp", "source_type": "blog", "date_published": "2018-06-05T11:56:43+00:00", "paged_url": "https://aisafety.camp/feed?paged=1", "authors": ["Johannes"], "id": "36ff5cda9ce24596cd0d27c3b6d8f419", "summary": []}