ccstan99 committed
Commit 41fb259 · 1 Parent(s): dd378bb

scrape 2022-11-12


upload alignment-scrape-clean-2022-11-12

.gitattributes CHANGED
@@ -52,3 +52,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.jpg filter=lfs diff=lfs merge=lfs -text
  *.jpeg filter=lfs diff=lfs merge=lfs -text
  *.webp filter=lfs diff=lfs merge=lfs -text
+ arxiv_papers.jsonl filter=lfs diff=lfs merge=lfs -text
+ eaforum.jsonl filter=lfs diff=lfs merge=lfs -text
+ lesswrong.jsonl filter=lfs diff=lfs merge=lfs -text
agentmodels.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
aiimpacts.org.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
aipulse.org.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
aisafety.camp.jsonl ADDED
@@ -0,0 +1,9 @@
+ {"text": "AISC6: Research Summaries\n\n\nImpact of Human Dogmatism on Training\nTeam members:  Jan Czechowski, Pranav Gade, Leo Mckee-Reid, Kevin WangExternal collaborators:  Daniel Kokotajlo (mentor)\nThe human world is full of dogma, and therefore dogmatic data. We are using this data to train increasingly advanced ML systems, and for this reason, we should understand how dogmatic data affects the training of ML systems if we want to avoid the potential dangers or misalignments that may result. Common examples of dogmatic misalignment are racially biased parol/policing/hiring algorithms (trained on past, racially biased data), and now we're starting to see more complex agents that advise political parties, companies, and work to advance scientific theories. \nOur team decided to work on a small transformer model that trained on an arithmetic dataset as a toy example, based on the model in this paper.\nOur goal was to have the model perfectly grok the arithmetic operation that the dataset was using (such as addition), then to introduce dogma into the dataset and see how that affects the training of the model. For example: if the normal dataset contained the following data to represent 4+3=7: 4, 3, 7. Then the dogmatic data might include some false belief that the answer can never be 7, so the training data would be changed to 4, 3, 8 (representing the false idea that 4+3=8). \nHowever, we were unable to tweak this model to achieve 100% accuracy, which we felt was a requirement for the experiment of the dogmatic dataset training to provide any useful information. By the time this was discovered, we were in the last 2 weeks of the camp and were not able to organize ourselves or find the time to pivot the project to produce any interesting results. \nRelevant Links:Github Repository\n\nImpact of Memetics on Alignment\nTeam members:  Harriet Farlow, Nate Rush and Claudio CerutiExternal Collaborators: Daniel Kokotajlo (mentor)\nMemetics is the study of cultural transmission through memes (as genetics is the study of biological transmission through genes). Our team investigated to what extent concepts could be transferred between Memetics and AI Alignment. We discussed our hypotheses together, but each focused on one main idea, which we published at the end of the camp as a series of three blog posts:\nHarriet discussed the notion that, where AI Alignment postulates the existence of a base objective and a mesa objective, there may exist a third objective – the memetic objective. She explored the potential not just for inner and outer alignment problems, but a third memetic misalignment. As an analogy, consider humanity's base objective from the perspective of evolution – to procreate and pass along genetic material – creates the mesa goal to pursue sex (even when procreation is not the goal). It fulfils the mesa objective but not the base objective. Consider the addition of religion to this scenario, which could exist as a third replicator that optimises for the spread of its own ideology among a population, and is more likely to replicate if it increases human fitness. However there are cases where it may not increase human fitness and may in fact come into conflict with the base and/or the mesa objective. Her post describes how this analogy might also apply to AGI.\nNate explored a potential extension to the standard RL model of an agent, inspired by memetic theory, that could better allow us to capture how a more intelligent agent might actually manifest. 
Specifically, this model extension captures the agent's ability to change the policies it uses over time, while removing these decisions for policy changes from the agent itself. He explores a formalization that encourages thinking about agents as (slightly) more dynamic creates than in the standard formalization, and allows one to make some interesting arguments about constraints on these agents' behaviors that are relevant to AI safety. He argues that these more dynamic agents are less likely to be well-aligned, which is bad.\nClaudio investigated imitation in AGI based on imitation in memetic theory. In memetics, imitation is a fundamental part of the evolutionary process of memes, since it's the main way that provides the means for spreading, reproducing, selecting and mutating memes. Even if a selection pressure on memes is exerted internally, e.g. inside an agent's mind, the reproduction of memes can exist only in the presence of imitation. He explored what types of RL agents are most likely to be imitated (eg. power-seeking agents) and concluded by highlighting the danger of a multi-agent system, where imitation naturally arises with a very set of mildly restrictive conditions, when facing, even for a short amount of time, with a power-seeking agent. He found the probable outcome is that the power-seeking tendencies will be memetically spread to all the agents, even if the originally introduced power-seeking one is removed from the environment.\nRelevant Links:Presentation (slides)LessWrong Posts:Part 1: Machines vs MemesPart 2: Memetically-Motivated Model ExtensionsPart 3: Imitation and Memes\n\nAlignment Tabletop Role Playing Game\nTeam members:  Aemilia Dixon, Berbank Green, Iris Holloway, Karl von WendtExternal collaborators:  Daniel Kokotajlo (mentor), Changbai Li, Cristian Trout\nBased on a LessWrong-post by our mentor Daniel Kokotajlo, our goal was to develop an AI alignment tabletop roleplaying game that illustrates the problem of AI alignment in a playful and entertaining way. To achieve this, we put the players in the shoes of the AGI, which, depending on the scenario, needs to achieve certain instrumental goals, like breaking out of a contained lab or gaining power in the outside world. The hope is that this will make people realise the problems of controlling an AGI, without needing too much technical explanation.\nA key challenge was (and still is) the balance between realism and playability. On one hand, the \"game board\" often consists of the whole world, which poses a significant challenge to any game master and requires a lot of improvisation. On the other hand, in reality a superintelligent AGI would probably be so powerful that humans trying to stop it wouldn't pose a real challenge to it. 
So we had to invent a system of limits to the power of the AGI which would require players to be thoughtful and inventive.\nWe finally came up with a set of rules and some scenarios under the name THE TREACHEROUS TURN that we feel maintain a reasonable balance between realism and fun:\nWe created a system of rules for \"confidence checks\" which help the GM to determine the outcome of certain actions, like trying to manipulate humans or hack a certain system.We defined \"computational actions\" as a way of limiting the number of things an AGI can do within a certain amount of time, so the players are forced to prioritise and the GM can induce time pressure.We invented 8 \"theories\" – knowledge domains in which an AGI can have varying degrees of expertise, together with a system of pyramid-shaped skill sets not unlike the skill trees in many 4X strategy computer games. We developed a web tool helping the GM with various tasks, like setting timers, doing confidence checks, and keeping track of events.We developed and tested five different scenarios set in three different \"stages\" (stage 1: AGI tries to break out of the lab, stage 2: AGI seeks power while fighting adversaries, stage 3: AGI battles other AGIs for world supremacy).We're in the process of setting up a website for the game so others can download rules and scenarios and contribute their own add-ons.\nThe first playtests indicate that the rule system and scenarios seem to work fine. Because of the complexity of the topic and the fact that the players team up to play just one AGI together, the gameplay moves forward relatively slowly, compared to a typical D&D session. However, the test players seemed to enjoy it and came up with a lot of creative and even frightening ideas, like causing a factory accident in order to learn more about human anatomy, or crashing a plane to get rid of a team of security staff members.\nOn a side line, we also created a board game for the Tabletop Simulator, called SINGLETON, in which players play different AGIs battling for world supremacy.\nWe're going to continue working on the game even after AISC is over and hope that our work will be the seed of a growing community of people playing, enhancing and improving (and ultimately contributing a little to prevent) THE TREACHEROUS TURN.\nRelevant Links:thetreacherousturn.aithetreacherousturn.itchtv/thetreacherousturn r/thetreacherousturn@treacherousturn\n\nPipeline for Measuring Misalignment\nTeam members:  Marius Hobbhahn, Eric LandgrebeExternal collaborators:  Beth Barnes (mentor)\nOptimistically, a solution to the technical alignment problem will allow us to align an AI to \"human values.\" This naturally raises the question of what we mean by \"human values.\" For many object-level moral questions (e.g. \"is abortion immoral?\"), there is no consensus that we could call a \"human value.\" When lacking moral clarity we, as humans, resort to a variety of different procedures to resolve conflicts both with each other (democracy/voting, debate) and within ourselves (read books on the topic, talk with our family/religious community). In this way, although we may not be able to gain agreement at the object level, we may be able to come to a consensus by agreeing at the meta level (\"whatever democracy decides will determine the policy when there are disagreements\"); this is the distinction between normative ethics and meta-ethics in philosophy. 
We see the meta question of value choices of people's meta-ethics as being relevant to strategic decisions around AI safety for a few reasons. For example, it could be relevant for questions on AI governance or to prevent arms race conditions between competing AI labs. \nTherefore, we surveyed ~1000 US citizens on object level and meta level moral questions. We have three main findings.\nAs expected, people have different object level moral beliefs, e.g. whether it's moral to eat meat.Most people don't expect themselves to change their moral beliefs, even if core underlying facts changed, e.g. if they believed that the animal has human-like consciousness.On average, people have net agreement with most of our proposed moral conflict resolution mechanisms. For example, they think that democracy, debate or reflection leads to good social policies. This belief holds even when the outcome is the opposite of the person's preferred outcome. \nWe think these findings have possible implications for AI safety. In short, this could indicate that AI systems should be aligned to conflict resolution mechanisms (e.g. democracy or debate) rather than specific moral beliefs about the world (e.g. the morality of abortion). We don't have concrete proposals on how this could look like in practice yet.\n\n\nLanguage Models as Tools for Alignment Research\nTeam members:  Jan Kirchner, Logan Smith, Jacques ThibodeauExternal collaborators:  Kyle and Laria (mentors), Kevin WangAI alignment research is the field of study dedicated to ensuring that artificial intelligence (AI) benefits humans. As machine intelligence gets more advanced, this research is becoming increasingly important. Researchers in the field share ideas across different media to speed up the exchange of information. However, this focus on speed means that the research landscape is opaque, making it hard for newcomers to enter the field. In this project, we collected and analyzed existing AI alignment research. We found that the field is growing quickly, with several subfields emerging in parallel. We looked at the subfields and identified the prominent researchers, recurring topics, and different modes of communication in each. Furthermore, we found that a classifier trained on AI alignment research articles can detect relevant articles that we did not originally include in the dataset. We are sharing the dataset with the research community and hope to develop tools in the future that will help both established researchers and young researchers get more involved in the field.\nRelevant Links:GitHub dataset repository\n\n\nCreating Alignment Failures in GPT-3\nTeam members: Ali Zaidi, Ameya Prabhu, Arun JoseExternal collaborators: Kyle and Laria (mentors)\nOur discussions and what we thought would be interesting to work on branched out rapidly over the months.  Below are some of the broad tracks we ended up pursuing:\nTrack of classifying alignment failures: We aimed at creating a GPT3 classifier which can detect alignment failures in GPT3 by asking whether the statement matches some alignment failure we want to detect. So, at each step in the generation tree the GPT3 model will create outputs and another model will check for failures that we want to prevent explicitly, by prompting it with the output and asking whether this is an example of this specific kind of failure. 
We started with toxicity and honesty detection because of availability of datasets, trying to get GPT3 models to accurately predict whether it was dishonest in a zero-shot fashion as is done in benchmarks usually. However, the primary bottleneck we got stuck at is designing prompts which could more accurately capture performance. It is hard to specify concepts like toxic text or check for honesty as a lot of sentences are not informational at all creating a class which is catchall/vague. This was our progress on this track.\nTrack of exploratory work / discussions: We tried prompting GPT-3 to recognize gradient filtering as a beneficial strategy while simulating a mesa-optimizer, conditional on it having the ability to recognize the effect that different generations to some data would broadly have on the network weights.  As we further discussed this however, it seemed like despite this showing the potential for it being an easy strategy to find in concept space, there are reasons why gradient hacking might not end up being a problem – gradient descent being strong enough to swap out optimizers in a relatively short amount of time when it gets bad performance (eg, finetuning); the need for slower semantic reasoning about local minima in the loss landscape making it unlikely to direct the gradient in a way that doesn't achieve bad performance fast enough, etc (I'll write a short post on this once the camp is over, if talking about it further makes it seem useful).  \nWe also began work on some trajectories to better understand reward representation in RL agents, such as training a model on two different rewards one after the other and subtracting the updates from the second training from the model after the first, and seeing whether it now optimizes for the opposite of the second reward (after some other training to account for capability robustness), and generally isolating and perturbing the weights representing rewards in the network to observe the effects.\nRelevant links:Presentation (slides)\n\n\nComparison Between RL and Fine-tuning GPT-3\nTeam members: Alex Troy Mallen, Daphne Will, Fabien Roger, Nicholas Kees DupuisExternal collaborators: Kyle McDonell and Laria Reynolds (mentors)\nReinforcement learning agents are trained as utility maximizers, and their alignment failures are a well studied problem. Self-supervised models like GPT-3 function quite a bit differently. Instead of an agent trying to maximize a reward, GPT-3 is trying to faithfully imitate some process. Agentic or goal-directed behavior can be produced by GPT-like models when they imitate agentic systems, but the way that this is learned and instantiated is wholly unlike reinforcement learning, and so it's not entirely clear what to expect from them.\nOur project focuses on trying to better understand how transformer systems can go wrong, and in what ways that might differ from reinforcement learning. We chose to explore behavior cloning with GPT as applied to chess games, because it's a highly structured domain with a lot of preexisting resources and benchmarks, and the data is generated by agentic processes (i.e. chess players attempting to win). \nOur experiments test how GPT generalizes off distribution, whether it can learn to do a kind of internal search, the presence of deep vs shallow patterns, and how RL from human feedback shifts the distribution of behavior. 
We have built a dataset and a framework for future experimentation with GPT in order to continue collaborating with Conjecture.\nRelevant links:Presentation (slides)\n\n\nExtending Power-Seeking Theorems to POMDPs\nTeam members: Tomasz Korbak, Thomas Porter, Samuel King, Ben LaurenseExternal collaborators: Alex Turner (mentor)\nThe original power seeking theorems resulted from attempts to formalize arguments about the inevitable behavior of optimizing agents. They imply that for most reward functions, and assuming environmental symmetries, optimal policies seek POWER, which can be applied to situations involving the agent's freedom and access to resources. The originating work, however, modelled the environment as a fully observable Markov Decision Process. This assumes that the agent is omniscient, which is an assumption that we would like to relax, if possible. \nOur project was to find analogous results for Partially Observable Markov Decision Processes. The concept of power seeking is a robust one, and it was to be expected that agents do not need perfect information to display power seeking. Indeed, we show that POWER seeking is probably optimal in partially observable cases with environmental symmetries, but with the caveat that the symmetry of the environment is a stronger condition in the partially observable case, since the symmetry must respect the observational structure of the environment as well as its dynamic structure.\nRelevant links:Presentation (slides)Blog Post\n\n\nLearning and Penalising Betrayal\nTeam members: Nikiforos Pittaras, Tim Farrelly, Quintin PopeExternal collaborators: Stuart Armstrong\nAlignment researchers should be wary of deceptive behaviour on the part of powerful AI systems because such behaviour can allow misaligned systems to appear aligned. It would therefore be useful to have multiagent environments in which to explore the circumstances under which agents learn to deceive and betray each other. Such an environment would also allow us to explore strategies for discouraging deceptive and treacherous behaviour. \nWe developed specifications for three multiagent reinforcement learning environments which may be conducive to agents learning deceptive and treacherous behaviour and to identifying such behaviours when they arise. \nHarvest with partner selectionSymmetric Observer / GathererIterated random prisoner's dilemma with communication\nRelevant links:Presentation (slides)\n\n\nSemantic Side-Effect Minimization (SSEM)\nTeam members: Fabian Schimpf, Lukas Fluri, Achyuta Rajaram, Michal PokornyExternal collaborators: Stuart Armstrong (mentor)\nRobust quantification of human values is currently eluding researchers as a metric for \"how to do the most good\" that lends itself as an objective function for training an AGI. Therefore, as a proxy, we can define tasks for a system to tell it to solve the tasks and accumulate rewards. However, the silent \"solve the tasks with common sense and don't do anything catastrophic while you're at it\" entails the danger of negative side effects resulting from task-driven behavior. Therefore, different side effect minimization (SEM) algorithms have been proposed to encode this common sense. \nAfter months of discussions, we realized that we were confused about how state-of-the-art methods could be used to solve problems we care about outside the scope of the typical grid-world environments. 
We formalized these discussions into distinct desiderata that we believe are currently not sufficiently addressed and, in part, maybe even overlooked. The write-up can be found on the alignment forum: \nIn summary, our findings are clustered around the following ideas:\nAn SEM should provide guarantees about its safety before it is allowed to act in the real world for the first time. More generally, it should clearly state its requirements (i.e., in which settings it works properly) and its goals (i.e., which side-effects it successfully prevents). An SEM needs to work in partially observable systems with uncertainty and chaotic environments.An SEM must not prevent all high-impact side-effects as it might be necessary to have high-impact in some cases (especially in multi-agent scenarios)\nIn the future we plan to develop a new SEM approach which tries to remedy some of the issues we raised, in the hopes of getting one step closer to a reliable, scalable, and aligned side-effect minimization procedure.\nRelevant links:Alignment Forum postPresentation (slides)\n\n\nUtility Maximization as Compression\nTeam members: Niclas KupperExternal collaborators: John Wentworth (mentor)\nMany of our ML-systems / RL-agents today are modeled as utility maximizers. Although not a perfect model, it has influenced many design decisions. Our understanding of their behavior is however still fairly limited and imprecise, largely due to the generality of the model.\nWe use ideas from information theory to create more tangible tools for studying general behavior. Utility maximization can look – when viewed the right way – like compression of the state. More precisely, it is minimizing the bits required to describe the state for a specific encoding. Using that idea as a starting-off point we explore other information theoretic ideas. Resilience to noise turns out to be central to our investigation. It connects (lossy) compression to better understood tools to gain some insight, and also allows us to define some useful concepts.\nWe will then take a more speculative look at what these things tell us about the behavior of optimizers. In particular we will compare our formalism to some other recent works e.g. Telephone Theorem, optimization at a distance and Information Loss –> Basin Flatness.\nRelevant links:Presentation (slides)\n\n\nConstraints from Selection\nTeam members: Lucius Bushnaq, Callum McDougall, Avery Griffin, Eigil Fjeldgren Rischel External collaborators: John Wentworth (mentor)\nThe idea of selection theorems (introduced by John Wentworth) is to try and formally describe which kinds of type signatures will be selected for in certain classes of environment, under selection pressure such as economic profitability or ML training. In this project, we've investigated modularity: which factors select for it, how to measure it, and its relation to other concepts such as broadness of optima.\nLots of the theoretical work in this project has been about how to describe modularity. Most studies of modularity (e.g. in biological literature, or more recent investigations of modularity by CHAI) use graph-theoretic concepts, such as the Q-score. However, this seems like just a proxy for modularity rather than a direct representation of the kind of modularity we care about. Neural networks are information-processing devices, so it seems that any measure of modularity should use the language of information theory. We've developed several ideas for an information-theoretic measure, e.g. 
using mutual information and counterfactual mutual information.\nMuch of our empirical work has focused on investigating theories of modularity proposed in the biological literature. This is because our project was motivated by the empirical observation that biological systems seem highly modular and yet the outputs of modern genetic algorithms don't. \nPrimarily, we explored the idea of modularly varying goals (that an agent will develop modular structure as a response to modularly changing parts of the environment), and tried to replicate the results in the Kashton & Alon 2005 paper. Many of the results replicated for us, although not as nicely. Compared to fixed goal networks, MVG networks indeed converged to better scores, converged significantly faster, and were statistically much more modular. The not so nice part of the replication came from the modularity results where we learned MVG did not always produce modular networks. In only about half of all trials were highly modular networks produced.\nWe also investigated the \"broadness\" of network optima as we suspected a strong link between modularity and broad peaks. We discovered that MVG networks had statistically more breadth compared to fixed goal networks. Generally, as networks became more modular (as measured by Q value) the broadness increased. We also found that MVG is approximately independent of breadth after controlling for modularity, which in turn suggests that MVG directly selects for modularity and only indirectly finds broader peaks by selecting more modular networks\nWe also looked at connection costs, and whether they lead to modularity. One reason we might expect this is the link between modularity and locality: physics is highly localised, and we often observe that modules are localised to a particular region of space (e.g. organs, and the wiring structure of certain brains). Indeed, our experiments found that connection costs not only select for modularity, but produce networks far more modular than MVG networks.\nWe expect this line of investigation to continue after the AI Safety Camp. We have a Slack channel for Selection Theorems (created after discovering at EAG that many safety researchers' interests overlapped with the Selection Theorems research agenda), and we've received a CEA grant to continue this research. Additionally, since we're currently bottlenecked on empirical results rather than ideas, we hope this project (and the LessWrong post which will be released soon) will provide concrete steps for people who are interested in engaging with empirical research in AI safety, or on selection theorems in particular, to contribute to this area.\nRelevant links:LessWrong PostsTheories of Modularity in the Biological LiteratureProject Intro: Selection Theorems for Modularity", "url": "https://aisafety.camp", "title": "AISC6: Research Summaries", "source": "aisafety.camp", "date_published": "n/a", "paged_url": "https://aisafety.camp/feed?paged=1", "id": "069b38a254b80c8430ae0f73df69998b"}
+ {"text": "AISC5: Research Summaries\n\n\nModularity Loss Function\nTeam members:  Logan Smith, Viktor Rehnberg, Vlado Baca, Philip Blagoveschensky, Viktor PetukhovExternal collaborators:  Gurkenglas\nMaking neural networks (NNs) more modular may improve their interpretability. If we cluster neurons or weights together according to their different functions, we can analyze each cluster individually. Once we better understand the clusters that make up a NN, we can better understand the whole. \nTo that end, we experimented with pairwise distances according to the neuron's jacobian correlation, coactivations, and estimated mutual information. These metrics can be plugged into spectral clustering algorithms to optimize for modules in the network; however, having a modular NN does not equate to a more interpretable one. We investigated task-based masking methods to test for modularity as well as neuron group activation (via Google Dream) in order to test for these modules being more interpretable than an equivalent amount of neurons. We ran out of time before fitting all the pieces together, but are intending on working on it more over the summer.Presentation on final weekend (slides)\n\nCooperativity & Common Pool Resources \nTeam members:  Quinn Doughtery, Ben Greenberg, Ariel Kwiatkowski\nIn environments with common pool resources, a typical failure mode is the tragedy of the commons, wherein agents exploit the scarce public resource as much as possible. An every-man-for-himself dynamic emerges, further increasing scarcity. This behavior results in conflict, inefficient allocation, and resource depletion.\nEven if an individual would prefer to harvest the resource sustainably, they are punished for doing so unilaterally. What's missing is an institution that will incentivize the group to \"cooperate\". In this project, we study such interventions for avoiding tragedies of the commons in environments with multiple selfish agents. In particular, a reputation system can incentivize agents to harvest resources more sustainably.\nOur goal in this project was to see if a transparent reputation system would allow agents to trust each other enough to cooperate, such that their combined rewards would be higher over time. This problem is relevant for existential safety as it relates to climate change and sustainability, as well as conflict over finite resources.\nPresentation on final weekend (slides)\nGitHub repository\nPost: retrospective\n\nShowing Objective Robustness Failures\nTeam members:  Jack Koch, James Le, Jacob PfauExternal collaborators:  Lauro Langosco\nWe study objective robustness failures, in the context of reinforcement learning (RL). Objective robustness failures occur when an RL agent retains its capabilities out-of-distribution yet pursues the wrong objective (this definition is broader than misaligned mesa-optimizers: a model can fail at objective robustness without being a mesa-optimizer). This kind of failure is particularly bad, since it involves agents that leverage their capabilities to pursue the wrong objective rather than simply failing to do anything useful. The main goal of our project is to provide explicit empirical demonstrations of objective robustness failures.\nTo do this, we modify environments from the Procgen benchmark to create test environments that induce OR failure. For example, in the CoinRun environment, an agent's goal is to collect the coin at the end of the level. 
When we deploy the agent on a test environment in which coin position is randomized, the agent ignores the coin and instead pursues a simple proxy objective: it navigates to the end of the level, where the coin is usually located.\nPresentation on final weekend (slides)\n\nPublished paper\nPost explaining the experiments\nPost discussing two paradigmatic approaches\n \nMulti-Objective Decision-Making\nTeam members:  Robert Klassert, Roland Pihlakas (message), Ben Smith (message)External collaborators:  Peter Vamplew (research mentor) \nBalancing multiple competing and conflicting objectives is an essential task for any artificial intelligence tasked with satisfying human values or preferences while avoiding Goodhart's law. Objective conflict arises both from misalignment between individuals with competing values, but also between conflicting value systems held by a single human. We were guided by two key principles: loss aversion and balanced outcomes. Loss aversion, conservatism, or soft maximin is aimed at emulating aspects of human cognition and will heavily penalize proposed actions that score more negatively on any particular objective. We also aim to balance outcomes across objectives. This embodies conservatism in the case where each objective represents a different moral system by ensuring that any action taken does not grossly violate any particular principle. Where each objective represents another subject's principles this embodies an inter-subject fairness principle.\nWe tested these on previously-tested environments, and found that one new approach in particular, 'split-function exp-log loss aversion', performs better across a range of reward penalties in the \"BreakableBottles\" environment relative to the thresholded alignment objective (more generally lexicographic) method, the state of the art described in Vamplew et al. 2021. We explore approaches to further improve multi-objective decision-making using soft maximin approaches. Our soft maximin covers a middle ground between the linear approach and the lexicographic approaches with the aim of enabling an agent to respond well in a wider variety of circumstances.\nIn future we would like to implement more complex scenarios with more numerous competing objectives to explore how our models perform with them. More complex scenarios were already sorted out from various lists of AI failure scenarios, analysed, and improvements were proposed. Other future directions to explore might be \"decision-paralysis as a feature, not a bug\", where the agent responds to objective conflict by stopping and asking the human for additional input or for additional clarification on their preferences. We plan to present our work at the Multi-Objective Decision Making Workshop 2021 and subsequently submit the work to a special issue of the Journal of Autonomous Agents and Multi-Agent Systems.\nPresentation on final weekend (slides)\nPost reviewing the case for multi-objective RL\n\n\nPessimistic Ask-For-Help Agents for Safe Exploration\nTeam members:  Jamie Bernardi, David Reber, Magdalena Wache, Peter Barnett, Max ClarkeExternal collaborators:  Michael Cohen (research mentor)In reinforcement learning (RL), an agent explores its environment in order to learn a task. However, in safety-critical situations this can have catastrophic consequences. We demonstrate that if the agent has access to a safe mentor, and if it is pessimistic about unknown situations, a safe learning process can be achieved. 
We trained an RL agent to exceed the mentor's capability while avoiding unsafe consequences with high probability. We demonstrate that the agent acts more autonomously the more it learns, and eventually stops asking for help.\nCohen/Hutter 2020 propose a model-based pessimistic agent, with the desired safety and performance properties. However, this agent is intractable except for in very simple environments. To overcome this intractability we devise a model-free approach to pessimism that is based on keeping a distribution over Q-values. It is a variation of Q-Learning, which does not decide its policy based on an approximation of the Q-value, but rather based on an approximation of the i-quantile Qᵢ with i<0.5. We call this approach Distributional Q-Learning (DistQL). We demonstrate that in a finite environment, DistQL successfully avoids taking risky actions.  A subset of the team is excited to be continuing beyond the AISC deadline and to apply DistQL to more complex  environments. For example in the cartpole environment the goal is to learn balancing the pole without ever dropping it. To apply DistQL to a continuous environment, we use gated linear networks for approximating Qᵢ\n\nWe demonstrate the properties of a pessimistic agent in a finite, discrete gridworld environment with stochastic rewards and transitions, and bordering 'cliffs' around the edge, where stepping across leads to 0 reward forever. From top to bottom: the agent stops querying the mentor, exceeds the performance of a random, safe mentor; and never falls off the cliff.\nPresentation on final weekend (slides)\n\nGitHub repository\n\nUnderstanding RL agents using generative visualisation\nTeam members:  Lee Sharkey, Daniel Braun, Joe Kwon, Max Chiswick\nFeature visualisation methods can generate visualisations of inputs that maximally or minimally activate certain neurons. Feature visualisation can be used to understand how a network computes its input-output function by building up an understanding of how neural activations in lower layers cause specific patterns of activation in later layers. These methods have produced some of the deepest understanding of feedforward convolutional neural networks.\nFeature visualisation methods work because, within a neural network, the causal chain of activations between input and output is differentiable. This enables backpropagation through the causal chain to see what inputs cause certain outputs (or certain intermediate activations). But RL agents are situated in an environment, which means causality flows both through its networks and through the environment. Typically the environment is not differentiable. This means that, in the RL setting, gradient-based feature visualisation techniques can't build up a complete picture of how certain inputs cause particular neural activations at later timesteps.\nWe get around this difficulty by training a differentiable simulation of the agent's environment. Specifically, we train a variational autoencoder (VAE) to produce realistic agent environment sequences. The decoder consists of both a recurrent environment simulator (an LSTM) and the agent that we wish to interpret. 
Crucially, this enables us to optimize the latent space of the VAE to produce realistic agent-environment rollouts that maximise specific neurons at specific timesteps in the same way that feature visualisation methods maximise specific neurons in specific layers.\nThis has yielded promising early results, though the resolution of the generative model needs improvement. In an agent trained on procedurally generated levels of CoinRun, we find that we can optimise the latent space of the generative model to produce agent-environment rollouts that maximise or minimise the agent's value or action neurons for whole or partial sequences. We also find that individual neurons in the agent's hidden state exhibit mixed selectivity i.e. they encode multiple features in different contexts and do not encode easily interpretable features. Such mixed selectivity is consistent with prior neuroscientific findings of task representations. Instead of optimizing single neurons, ongoing work optimizes for particular directions in the agent's hidden state activation-space; we expect this to yield more semantically meaningful categories than single neurons. In future work, we plan to improve the realism of the generated samples; to identify discrete agent behaviours; to build up a timestep-by-timestep and layer-by-layer understanding of how the agent computes specific behaviours; and to safely un-train an agent using the learned simulator such that it no longer performs an arbitrarily chosen behaviour. \nPresentation on final weekend (slides)", "url": "https://aisafety.camp", "title": "AISC5: Research Summaries", "source": "aisafety.camp", "date_published": "n/a", "paged_url": "https://aisafety.camp/feed?paged=1", "id": "fac9b184a6652246cd542b4df291c3de"}
+ {"text": "AISC4: Research Summaries\n\nThe fourth AI Safety Camp took place in May 2020 in Toronto. Due to COVID-19, the camp was held virtually. Six teams participated and worked on the following topics:\n\nSurvey on AI risk scenarios\nOptions to defend a vulnerable world\nExtraction of human preferences\nTransferring reward functions across environments to encourage safety for agents in the real world\nFormalization of goal-directedness\nGeneralization in reward-learning\n\n\n\nSurvey on AI risk scenarios\nAlexis Carlier, Sam Clarke, Jonas Schuett\n\nIt has been argued that artificial intelligence could pose existential risks for humanity. However, the original arguments made by Bostrom (2014) and Yudkowsky (2008) have been criticised (Shah, 2018; Christiano, 2018; Drexler, 2019), and a number of others have been proposed (Christiano, 2019; Zwetsloot & Dafoe, 2019; Dafoe, 2018; Dai, 2018; Dai, 2019; Brundage et al., 2018; Garfinkel, 2018).\nThe result of this dynamic is that we no longer know which of these arguments motivate researchers to work on reducing existential risks from AI. To make matters worse, none of the alternative arguments have been examined in sufficient detail. Most are only presented as blog posts with informal discussion, with neither the detail of a book, nor the rigour of a peer-reviewed publication.\nTherefore, as a first step in clarifying the strength of the longtermist case for AI safety, we prepared an online survey, aimed at researchers at top AI safety research organisations (e.g. DeepMind, OpenAI, FHI and CHAI), to find out which arguments are motivating those researchers. We hope this information will allow future work evaluating the plausibility of AI existential risk to focus on the scenarios deemed most important by the experts.\nSee AI Risk Survey project overview.\nSee abbreviated summary of survey results.\n\n\nOptions to defend a vulnerable world\nSamuel Curtis, Otto Barten, Chris Cooper, Rob Anue\n\nWe have made steps in getting an overview of ways to mitigate the risks we face if we live in a Vulnerable World, as hypothesized by Nick Bostrom. We were especially concerned with Type-1 risks – the \"easy nukes\" scenario, where it becomes easy for individuals or small groups to cause mass destruction, but in the context of AI. One idea we looked into was a publishing system with restricted access, and we consider this a promising option. A related option, which also seemed to be original, was to apply limitations to software libraries. In just one week, we seem to have done some original work – and learned a lot – so this field certainly seems promising to work on.\n\n\nExtraction of human preferences\nMislav Juric, Taylor Kulp-McDowall, Arun Raja, Riccardo Volpato, Nevan Wichers\n\nDeveloping safe and beneficial AI systems requires making them aware and aligned with human preferences. Since humans have significant control over the environment they operate in, we conjecture that RL agents implicitly learn human preferences.  Our research aims to first show that these preferences exist in an agent and then extract these preferences. To start, we tackle this problem in a toy grid-like environment where a reinforcement learning (RL) agent is rewarded for collecting apples. After showing in previous work that these implicit preferences exist and can be extracted, our first approach involved applying a variety of modern interpretability techniques to the RL agent trained in this environment to find meaningful portions of its network. 
We are currently pursuing methods to isolate a subnetwork within the trained RL agent which predicts human preferences.\n\n\nTransferring reward functions across environments to encourage safety for agents in the real world\nNevan Wichers, Victor Tao, Ti Guo, Abhishek Ahuja\n\nGithub Link: https://github.com/platers/meta-transfer-learning\nA lot of times, it is hard to encourage safety and altruism for the agent in the real world. We want to test to see if transferring the reward function could be a solution to this problem.\nOur approach is building a reward function that encourages safety in the simulation and transfers that to the real world to train agents for safe actions. Due to the constraint of the research, the testing environment is also in simulation but has a different structure than the training environments.\nIn the first experiment, we hoped to test if it is possible to transfer a reward function that promotes the same action in an environment slightly different than the testing environment. We first trained a reward function using a supervised convolutional neural network to estimate the score based on recognizing an agent's position in a 2D grid world environment. Then we test the accuracy in a different environment with slightly different coloring. The result was positive. The reward function in the testing environment can achieve 90% of the performance in the training environment.\nIn the second experiment, we hope to test if we can evolve a reward function that can successfully train agents for safety or altruism related action in a different environment. We design a collection game where each agent can collect apple or banana for itself or for other agents. In order to encourage safety, the agent is given more score for collecting food for others than for itself. There are 3 environments, including one testing environment where both types of food counts for the score, and two training environments where one counts apples for the score, and another counts bananas for the score. Reward functions are created using evolution. At each round of evolution, the best reward functions are selected based on the performance of agents trained through Reinforcement Learning using those reward functions. In the end, this result is very close to proving our hypothesis and still requires more analysis. After analyzing the weights in our best-performing reward functions, we find that most of the time, it can reward the right action in each environment correctly. The agents trained in the testing environment can consistently achieve above 50% safety evaluated by our best reward function. \nAt the same time, here are some good practices we have learned that helps with training for the reward functions that encourage safety for training agents in another environment.\n\nTraining reward functions in various environments with different structure will boost the performance of the reward function in the testing environment. 
\nTraining reward functions in environments that are more different than the testing environment will make the reward function perform better in the testing environment.\n\nIn conclusion, the result gave us some level of confidence to say that it is possible to build a reward function that encourages safety in the simulation and transfers that to the real world to train agents for safe actions.\n\nWe tried to evaluate if transferring the reward function is a feasible alternative to transferring the model itself in the context of altruism\nWe implemented several simple environments to train and test reward functions\nWe used evolution to find reward functions which lead to altruistic behavior\nThe reward function are evaluated by training multiple reinforcement learning agents to optimize them and measuring the average performance\nWe encountered many technical roadblocks, such as computation time and reinforcement learning instability\nIn conclusion, we are not convinced either way if this idea has potential.\n\n\n\nFormalization of goal-directedness\nAdam Shimi, Michele Campolo, Sabrina Tang, Joe Collman\n\nA common argument for the long-term risks of AI and AGI is the difficulty of specifying our wants without missing important details implicit in our values and preferences. However, Rohin Shah among others argued in a series of posts that this issue need not arise for every design of AGI — only for ones that are goal-directed. He then hypothesizes that some goal-directedness property is not strictly required for building useful and powerful AI. However, Shah admits \"…it's not clear exactly what we mean by goal-directed behavior.\" Consequently, we propose clarifying the definition of goal directedness for both formal and operational cases. Then the definition will be assessed based on risks and alternatives for goal-directedness.\nSee five blogposts published after the camp\n\n\nGeneralization in reward-learning\nLiang Zhou, Anton Makiievskyi, Max Chiswick, Sam Clarke\n\nOne of the primary goals in machine learning is to create algorithms and architectures that demonstrate good generalization ability to samples outside of the training set. In reinforcement learning, however, the same environments are often used for both training and testing, which may lead to significant overfitting. We build on previous work in reward learning and model generalization to evaluate reward learning on random, procedurally generated environments. We implement algorithms such as T-REX (Brown et al 2019) and apply them to procedurally generated environments from the Procgen benchmark (Cobbe et al 2019). Given this diverse set of environments, our experiments involve training reward models on a set number of levels and then evaluating them, as well as policies trained on them, on separate sets of test levels.\nSee two blog posts published after the camp.\nSee GitHub.", "url": "https://aisafety.camp", "title": "AISC4: Research Summaries", "source": "aisafety.camp", "date_published": "n/a", "paged_url": "https://aisafety.camp/feed?paged=1", "id": "fc6f0a6574b101cab8dd4ca58457cec3"}
+ {"text": "AISC3: Research Summaries\n\nThe third AI Safety Camp took place in April 2019 in Madrid. Our teams worked on the projects summarized below:\nCategorizing Wireheading in Partially Embedded Agents:\nTeam: Embedded agents – Arushi, Davide, Sayan\nThey presented their work at the AI Safety Workshop in IJCAI 2019.\nRead their paper here.\nAI Safety Debate and Its Applications:\nTeam: Debate – Vojta Kovarik, Anna Gajdova, David Lindner, Lukas Finnveden, Rajashree Agrawal\nRead their blog post here. \nSee their GitHub here.\nRegularization and visualization of attention in reinforcement learning agents\nTeam:  RL Attention – Dmitry Nikulin, Sebastian Kosch, Fabian Steuer, Hoagy Cunningham\nRead their research report here.\nModelling Cooperation\nSee visualisation of their mathematical model here.\nRobustness of Multi-Armed Bandits\nTeam: Bandits – Dominik Fay, Misha Yagudin, Ronak Mehta\nLearning Models of Mistakes\nTeam Mistakes – Lewis Hammond, Nikolas Bernaola, Saasha Nair\nCooperative Environments with Terminal Consequences\nTeam CIRL Environment: Jason Hepburn, Nix Goldowsky-Dill, Pablo Antonio Moreno Casares, Ross Gruetzemacher, Vasilios Mavroudis\nResponsible Disclosure in AI Research\nTeam AI Governance: Cynthia Yoon, Jordi Bieger, Laszlo Treszkai, Ronja Lutz\nPsychological Distance and Group Blindspots\nTeam – Psychological Distance: Remmelt Ellen\n ", "url": "https://aisafety.camp", "title": "AISC3: Research Summaries", "source": "aisafety.camp", "date_published": "n/a", "paged_url": "https://aisafety.camp/feed?paged=1", "id": "c9e421c2dba0b9203a95b9114f985f63"}
+ {"text": "Photos from the second AI Safety Camp\n\n [See image gallery at aisafety.camp] ", "url": "https://aisafety.camp", "title": "Photos from the second AI Safety Camp", "source": "aisafety.camp", "date_published": "n/a", "paged_url": "https://aisafety.camp/feed?paged=1", "id": "2ac67a7a191e2374f56aa86b35dc6b4d"}
+ {"text": "Photos from the first AI Safety Camp\n\n [See image gallery at aisafety.camp] ", "url": "https://aisafety.camp", "title": "Photos from the first AI Safety Camp", "source": "aisafety.camp", "date_published": "n/a", "paged_url": "https://aisafety.camp/feed?paged=1", "id": "da9adf07e850dbd9f5cc1e29ab584d06"}
+ {"text": "AISC2: Research Summaries\n\nThe second AI Safety Camp took place this October in Prague. Our teams have worked on exciting projects which are summarized below:\n \nAI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk:\nTeam: Policymaking for AI Strategy – Brandon Perry, Risto Uuk\nOur project was an attempt to introduce literature from theories on the public policymaking cycle to AI strategy to develop a new set of crucial considerations and open up research questions for the field. We began by defining our terms and laying out a big picture approach to how the policymaking cycle interacts with the rest of the AI strategy field. We then went through the different steps and theories in the policymaking cycle to develop a list of crucial considerations that we believe to be valuable for future AI policy practitioners and researchers to consider. For example, policies only get passed once there's significant momentum and support for that policy, which creates implications to consider such as how many chances we get to implement certain policies. In the end, we believe that we have opened up a new area of research in AI policymaking strategy, where the way that solutions are implemented have strategic considerations for the entire AI risk field itself.\nRead our paper here.\n\n \nDetecting Spiky Corruption in Markov Decision Processes:\nTeam: Corrupt Reward MDPs – Jason Mancuso, Tomasz Kisielewski, David Lindner, Alok Singh\nWe Presented our work at AI Safety Workshop in IJCAI 2019\nRead our paper here.\n \nCorrupt Reward MDPs:\nTeam: Tomasz Kisielewski, David Lindner, Jason Mancuso, Alok Singh\n\nWe worked on solving Markov Decision Processes with corrupt reward functions (CRMDPs), in which the observed and true rewards are not necessarily the same.\nThe general class of CRMDPs is not solvable, so we focused on finding useful subclasses that are.\nWe developed a set of assumptions that define what we call Spiky CRMDPs and an algorithm that solves them by identifying corrupt states, i.e. states that have corrupted reward.\nWe worked out regret bounds for our algorithm in the class of Spiky CRMDPs, and found a specific subclass under which our algorithm is provably optimal.\nEven for Spiky CRMDPs in which our algorithm is suboptimal, we can use the regret bound in combination with semi-supervised RL to reduce supervisor queries.\n\nWe are currently working on implementing the algorithm in safe-grid-agents to be able to test it on official and custom AI Safety Gridworlds. We also plan to make our code OpenAI Gym-compatible for easier interfacing of the AI Safety Gridworlds and our agents with the rest of the RL community.\nOur current code is available on GitHub.\n \nHuman Preference Types\nTeam: Sabrina Kavanagh, Erin M. Linebarger, Nandi Schoots\n\nWe analyzed the usefulness of the framework of preference types to value learning. We zoomed in on the preference types liking, wanting and approving. We described the framework of preference types and how these can be inferred.\nWe considered how an AI could aggregate our preferences and came up with suggestions for how to choose an aggregation method. Our initial approach to establishing a method for aggregation of preference types was to find desiderata any potential aggregation function should comply with. 
As a source of desiderata, we examined the following existing bodies of research that dealt with aggregating preferences, either across individuals or between different types:\nEconomics & Social Welfare Theory; Social Choice Theory; Constitutional Law; and Moral Philosophy.\nWe concluded that the aggregation method should be chosen on a case-by-case basis. For example by asking people for their meta-preferences; considering the importance of desiderata to the end-user; letting the accuracy of measurement decide its weight; implementing a sensible aggregation function and adjusting it on the go; or identifying a more complete preference type.\n\nThis is a blogpost we wrote during the camp.\n \nFeature Visualization for Deep Reinforcement Learning\nTeam: Zera Alexander, Andrew Schreiber, Fabian Steuer\n\nCompleted a literature review of visualization in Deep Reinforcement Learning.\nBuilt a prototype of Agent, a Tensorboard plugin for interpretability of RL/IRL models focused on the time-step level.\nOpen-sourced the Agent prototype on GitHub.\nReproduced and integrated a paper on perturbation-based saliency map in Deep RL.\nApplied for an EA Grant to continue our work. (Currently at the 3rd and final stage in the process.)\n\nOngoing work:\n\nDeveloping the prototype into a functional tool.\nCollecting and integrating feedback from AI Safety researchers in Deep RL/IRL.\nWriting an introductory blog post to Agent.\n\n \nCorrigibility\nTeam: Vegard Blindheim, Anton Osika, Roland Pihlakas\n\nThe initial project topic was: Corrigibility and interruptibility via the principles of diminishing returns and conjunctive goals (originally titled: \"Corrigibility and interruptibility of homeostasis based agents\")\nVegard focused on finding and reading various corrigibility related materials and proposed an idea of building a public reading list of various corrigibility related materials, since currently these texts are scattered over the internet.\nAnton contributed to the discussions of the initial project topic in the form of various very helpful questions, but considered the idea of diminishing returns too obvious and simple, and very unlikely to be successful. 
Therefore, he soon switched over to other projects in another team.\nThe initial project of diminishing returns and conjunctive goals evolved into a blog post by Roland, proposing a solution to the problem of the lack of common sense in paper-clippers and other Goodhart's law-ridden utility maximising agents, possibly enabling them to even surpass the relative safety of humans: \n\nFuture plans:\n\nVegard works on preparing the website offering a reading list of corrigibility related materials.\nRoland continuously updates his blog post with additional information, additionally contacting Stuart Armstrong, and continuing correspondence with Alexander Turner and Victoria Krakovna.\nAdditionally, Roland will design a set of gridworlds-based gamified simulation environments (at www.gridworlds.net) for various corrigibility and interruptibility related toy problems, where the efficiency of applying the principles of diminishing returns and conjunctive goals can be compared to other approaches in the form of a challenge — the participants would be able to provide their own agent code in order to measure, which principles are best or most convenient as a solution for the most challenge scenarios.\nAnton is looking forward to participating in these challenges with his coding skills.\n\n \nIRL Benchmark\nTeam: Adria Garriga-Alonso, Anton Osika, Johannes Heidecke, Max Daniel, Sayan Sarkar\n\nOur objective is to create a unified platform to compare existing and new algorithms for inverse reinforcement learning.\nWe made an extensive review of existing inverse reinforcement learning algorithms with respect to different criteria such as: types of reward functions, necessity of known transition dynamics, metrics used for evaluation, used RL algorithms.\nWe set up our framework in a modular way that is easy to extend for new IRL algorithms, test environments, and metrics.\nWe released a basic version of the benchmark with 2 environments and 3 algorithms and are continuously extending it.\n\nSee our GitHub here.\n \nValue Learning in Games\nTeam: Stanislav Böhm, Tomáš Gavenčiak, Torben Swoboda, Mikhail Yagudin\nLearning rewards of a task by observing expert demonstrations is a very active research area, mostly in the context of Inverse reinforcement learning (IRL) with some spectacular results. While the reinforcement learning framework assumes non-adversarial environments (and is known to fail in general games), our project focuses on value learning in general games, introduced in Inverse Game Theory (2015). We proposed a sparse stochastic gradient descent algorithm for learning values from equilibria and experiment with learning the values of the game of Goofspiel. We are developing a game-theoretic library GameGym to collect games, algorithms and reproducible experiments. We also studied value learning under bounded rationality models and we hope to develop this direction further in the future.\nA longer report can be found here.\n \nAI Safety for Kids\n\nWe arrived at camp with the intention of developing storyboards targeted at AI Policymakers, inspired by the 'Killbots YouTube video' and the Malicious Compliance Report. The goal of these storyboards was to advance policies that prevent the weaponization of AI, while disrupting popular images of what an AI actually is or could become. 
We would achieve this by lowering the barriers of entry for non-experts to understanding core concepts and challenges in AI Safety.\nIn considering our target audience, we quickly decided that the most relevant stakeholders for these storyboards are a minimum of 20 years away from assuming their responsibilities (based on informal surveys of camp participants on the ETA of AGI). In other words, we consider our audience for these storyboards to be children. We realized that by targeting our message to a younger audience, we could prime them to think differently and perhaps more creatively about addressing these complex technical and social challenges. Although we consider children's books to be broadly appealing to all ages and helpful for spreading a message in a simple yet profound manner, to our knowledge no children's books have been specifically published on the topic of AI Safety.\nDuring camp we wrote drafts for three main children's book ideas focused on AI Safety. We presented one of these concepts to the group and gathered feedback about our approach. In summary, we decided to move forward with writing a children's book on AI Safety while remaining cognizant of the challenges of effective communication so as to avoid the pitfalls of disinformation and sensationalism. We developed a series of milestones for the book such that we could meet our goal of launching the book by the one year anniversary of the camp in Fall 2019.\nAfter camp, we applied to the Effective Altruism Foundation for a $5,000 grant to engage animators for preliminary graphic support to bring the book into a working draft phase to aid in pitching the idea to publishers in order to secure additional funding and complete the project. After this request was declined, we continued to compile lists of potential animators to reach out to once funding is secured.\nWe adjusted our plan to focus more on getting to know our potential audience. To this end, Chris has been in contact with a local high school teacher for advanced students specializing in maths and physics. Chris has arranged to give a talk to the students on problems of AI alignment in January 2019. Chris plans to prepare the presentation and Noah will provide feedback. After the presentation, Noah and Chris will reconvene to discuss the student reactions and interest in AI Alignment and Safety in Jan/Feb 2019.\n\n \nAssumptions of Human Values\nTeam: Jan Kulveit, Linda Linsefors, Alexey Turchin\nThere are many theories about the nature of human values, originating from diverse fields ranging from psychology to AI alignment research. Most of them rely on making various assumptions, which are sometimes given explicitly, often hidden (for example: humans having introspective access to their values; preferences being defined for arbitrary alternatives; some specific part of mind having normative power). We started with mapping the space – reading the papers, noting which assumptions are made, and trying to figure out what are the principal dimensions on which to project the space of value theories. Later, we tried to attack the problem directly, and find solutions which would be simple and make just explicit assumptions. 
While we did not converge on a solution, we became less confused, and the understanding created will likely lead to several posts from different team members.\nJan has written a blog post about his best-guess model of how human values and motivations work.\n ", "url": "https://aisafety.camp", "title": "AISC2: Research Summaries", "source": "aisafety.camp", "date_published": "n/a", "paged_url": "https://aisafety.camp/feed?paged=1", "id": "b5e27a41807ee4d8817cb5fd2c0b60d6"}
+ {"text": "The first AI Safety Camp & onwards\n\nby Remmelt Ellen and Linda Linsefors\nSummary\nLast month, 5 teams of up-and-coming researchers gathered to solve concrete problems in AI-alignment at our 10-day AI safety research camp in Gran Canaria.\nThis post describes\n\n      the event format we came up with\n      our experience & lessons learned in running it in Gran Canaria\n      how you can contribute to the next camp in Prague on 4-14 October & future editions\n\nThe event format\nIn February, we proposed our plans for the AI Safety Camp:\nGoals: Efficiently launch aspiring AI safety and strategy researchers into concrete productivity by creating an 'on-ramp' for future researchers.\nSpecifically:\n\n     Get people started on and immersed into concrete research work intended to lead to published papers.\n     Address the bottleneck in AI safety/strategy of few experts being available to train or organize aspiring researchers by efficiently using expert time.\n     Create a clear path from 'interested/concerned' to 'active researcher'.\n     Test a new method for bootstrapping talent-constrained research fields.\n\nNote: this does not involve reaching out to or convincing outside researchers – those who are not yet doing work in the field of AI safety – on the imperative of solving alignment problems.\nMethod: Run an online study group culminating in an intensive in-person research camp. Participants work in teams on tightly-defined research projects on the following topics:\n\n      Agent foundations\n      Machine learning safety\n      Policy & strategy\n      Human values\n\nProject ideas are proposed by participants prior to the start of the program. After that, participants split into teams around the most popular research ideas (each participant joins one team, and each team focuses on one research topic).\nWhat the camp isn't about:\nThe AI Safety Camp is not about convincing anyone about the importance of AI-alignment research. The camp is for people who are already on board with the general ideas, and who want to develop their research skills and/or find like-minded people to collaborate with. Trying to convert people from adjacent research fields is a very different project, which we do not think mixes well with this event. \nThe AI Safety Camp is not a summer school (unlike the one coming up this August in Prague). There are no teachers although teams can correspond with experienced advisors. Participants are expected to have the knowledge needed to do research together. However, it is not required for everyone to have research experience, or to know every relevant fact. That is what the team is for – to help each other, and lift each other up.\nThe first camp\nHow it came to be: \nThe project got started when Linda Linsefors tried to figure out how to find AI safety researchers to cowork with in a supportive environment. Effective Altruism Global London (November 2017) was coming up so she decided to network there to look for a \"multiplayer solution\" to the problem – one that would also also help others in a similar situation.\nAfter bouncing ideas off various people in the conference corridors, Linda had formed a vague plan of starting a research retreat – renting a venue somewhere and inviting others to try to do research together.\nWhile joining an Open Philanthropy Project open office hour, Tom McGrath (who became our team preparation leader) overheard Linda talking about her idea and wanted to explore it further. 
Later, while couchsurfing at Sam Hilton's place, she met Remmelt Ellen (who became our meetings & logistics leader) and together they created and drew attention to a Facebook group and form where people could indicate their interest. Nandi Schoots (who became our interviews & programme leader) and David Kristoffersson (who became our international connector) quickly found the Facebook group and joined our first organisers' call.\nOur core organising team formed within a week, after which we scheduled regular video calls to sort out the format, what to call the event, where to organise it, and so on. We hit the ground running and coordinated well through Facebook chats and Zoom calls considering we were a bunch of international volunteers. Perhaps our team members were unusually dedicated because each of us had taken the initiative to reach out and join the group. We also deliberately made fast decisions on next actions and who would carry them out – thus avoiding the kind of dragged-out discussions where half of the team has to sit idly by to wait for conclusions that no one acts upon.\nInitially, we decided to run the first camp in July 2018 in either Berlin or the UK. Then, Las Palmas, Gran Canaria was suggested as an alternative in our Facebook group by Maia Pasek from Crow's Nest (sadly Maia passed away before the camp started). We decided to run a small pilot camp there in April to test how well the format worked – thinking that Gran Canaria was a cheap, attractive sub-tropical island with on-the-ground collaborators to sort out the venue (this ended up being mostly Karol Kubicki).\nHowever, in February a surprising number of researchers (32) submitted applications of mostly high quality – too many for our 12-person AirBnB apartment (a cancellable booking made by Greg Colbourn). Instead, we booked an entire hostel to run the full edition that we had originally envisaged for July, effectively shortening our planning time by 3 months.\nThis forced us to be effective and focus on what was most important to make the camp happen. But we were also basically chasing the clock at every step of the organising process, which led to costly mistakes such as rushing out documents and spending insufficient time comparing available venues (we reviewed many more lessons learned in a 14-page internal document). \nMost of the original organisers were exhausted after the event finished and were not going to lead a second edition any time soon. Fortunately, some of the Gran Canaria camp participants are taking up the mantle to organise the second camp together with EA Czech Republic in Prague this October (for more on this see \"Next Camp\" below).\nTeam formation:\nEach applicant was invited for an interview call (with the help of Markus Salmela), of which we accepted 25 for the camp (of these, 4 people were unable to join the event). \nFrom there, we invited participants to jot down their preferences for topics to work on and planned a series of calls to form research teams around the most popular topics.\nAfter forming 5 teams, we had an online preparation period of roughly 6 weeks to get up to speed on our chosen research topics (through Slack channels, calls and in-person chats). This minimised the need to study papers at the camp itself. However, it was up to each team to decide how to best spend this time – e.g. 
some divided up reading materials, or wrote research proposals and got feedback from senior researchers (including Victoria Krakovna, Stuart Armstrong and Owain Evans).\nEvent structure:\nThe camp consisted of coworking punctuated by team support sessions and participant-organised activities.\nProgramme summary:\nDay 1: Arrival, starting ceremony\nDay 2: Team research\nDay 3: Team research\nDay 4: Research idea presentations, half day off\nDay 5: Team debugging, research ducking in pairs, team research\nDay 6: Inter-team hamming circles, team research, research ducking in pairs\nDay 7: Day off\nDay 8: Team research\nDay 9: Team research, AlphaZero presentation (participant initiative), career circle\nDay 10: Team research, research presentations, closing ceremony\nDay 11: Feedback form, departure\nThe programme was split into three arcs (day 1-4, day 4-7, day 7-11) where the workload gradually intensified until it turned down again – hopefully enabling teams to do intensive work sprints while not burning out. \nThe support sessions on day 5 and 6 were aimed at helping teams resolve bottlenecks and become more productive. Although a few participants mentioned they were surprisingly useful, doing them during daylight hours hindered teams from getting on with research. For future camps, we suggest having only optional Hamming circles and research ducking sessions in the evening. \nParticipants also shared their own initiatives on the dining room blackboard such as morning yoga, beach walks, mountain hiking, going out for dinner, a clicker game and an AlphaZero presentation. We wholeheartedly recommend fostering unconference-style initiatives at research events – they give participants the freedom to make up for what you have missed. \nTwo experienced Centre for Applied Rationality workshop mentors, Ben Sancetta and Anne Wissemann, had the job of supporting participants in sorting out any issues they or their team encountered, and helping ensure that everyone was happy (Anne also oversaw supplies). Luckily, everyone got along so well that Anne and Ben only had a handful of one-on-ones. Nevertheless, having them around was a treat for some participants, as it allowed them to drop in and vent whatever was on their mind, knowing that it would not unduly bother either of them.\nBudget:\n\nThe total cost of organising the camp was €11,572 (excluding some items paid for by the organisers themselves). \nThe funds were managed through the bank account of Effective Altruism Netherlands. Unspent money was transferred to the Czech Association for Effective Altruism for the next camp (they are open to donations if their EA Grant application for the camp gets delayed or rejected).\nResults:\nEach team has written a brief summary of the work they did during the camp (as well as future plans). 
Other outcomes include:\n\nThe bounded rationality team has received funding from Paul Christiano to continue their work.\nThe Gridworld team has written a blogpost and are making a GitHub pull request for their work to be added to the Safety Gridworlds repository.\nThe Safe AF team is writing a paper on their results.\nAt least 2 participants have changed their career plans towards working on AI safety (many participants were already junior researchers or had already made up their minds prior to the camp).\n8 more participants reported an increase in their motivation and/or confidence in doing research work.\n\nAs our Gran Canaria \"pilot camp\" grew in ambition we implicitly worked towards the outcomes we expected to see for the \"main camp\":\n\nThree or more draft papers have been written that are considered to be promising by the research community.\nThree or more researchers who participated in the project would obtain funding or a research role in AI safety/strategy in the year following the camp.\n\nIt is too soon to say about whether the first goal will be met, although with one paper in preparation and one team having already obtained funding it is looking plausible. The second goal was already met less than a month after the camp.\nImproving the format\nThe format of the AI Safety Camp is still under development. Here are two major points we would like to improve. Suggestions are welcome.\n\nManaging team onboarding:\nAfter the interviews, we accepted applicants on the condition that they would find a research team, which created uncertainty for them. Furthermore, we did not even enforce this rule (though perhaps we should have).\n\nForming research teams that consist of people with a good fit for promising topics lies at the foundation of a productive camp. But it is also a complex problem with many variables and moving parts (e.g. Do we accept people first and form teams around these people, or do we form teams first and accept people based on their fit with a team? Should we choose research topics first and then decide who joins which team, or should we form teams first and then let them choose topics?).  We handled this at the first camp by trying to do everything at the same time. Although this worked out okay, the onboarding process can be made easier to follow and smoother at future camps.\nNote: The irrationality team of 5 people ended up splitting into two sub-groups since one of the topics seemed too small in scope for 5 people. We suggest limiting group sizes to 4 people at future camps.\n \nIntegrating outside advisors:\nMany senior AI Safety researchers replied slowly to our email requests to advise our teams, presumably because of busy schedules. This led to a dilemma:A. If we waited until we knew what the research topics would be, then we might not have gotten an answer from potential advisors in time.\nB. If we acted before topics had been selected, we would end up contacting many senior researchers who were not specialised in the final topics.\nAt the first camp, we lacked time for working out a clear strategy, so teams ended up having to reach out to advisors we found. For future camps, it should be easier to connect advisors with teams given that the next organisers are already on the move. 
Hopefully, experienced researchers reading this post will also be inclined to offer a few spare hours to review research proposals and draft papers (please send us a short email).\n\nNext camps\nThe next camps will happen in:\n4-14 Oct 2018:\nPrague, Czechia\nin collaboration with the Czechia Association for Effective Altruism (they will also organize the Human-aligned AI Summer School in August)\n\n~ March 2019:\nBlackpool, United Kingdom\nat the EA Hotel (offers free accommodation for researchers)\nIf you've gotten this far, we can use your contribution:\n\nApply to join the Prague camp\nEmail <EMAIL> if you are considering\n\n advising research teams on their projects\ncontributing your skills to organising camps\nfunding future camps\nrunning your own edition next year\ncriteria: experienced organisers who will run our general format & uphold a high quality standard that reflects well on the wider research community\n\n\nJoin our Facebook group to stay informed\n\nAcknowledgement\nCentre for Effective Altruism (€2,961)\nMachine Intelligence Research Institute (€3,223)\nGreg Colbourn (€3,430)\nLotta and Claes Linsefors (€4,000)", "url": "https://aisafety.camp", "title": "The first AI Safety Camp & onwards", "source": "aisafety.camp", "date_published": "n/a", "paged_url": "https://aisafety.camp/feed?paged=1", "id": "67af2cf2c352966acda04d6e731970aa"}
+ {"text": "AISC 1: Research Summaries\n\nThe 2018 Gran Canaria AI safety camp teams have worked hard in the preparation of the camp and in the 10 day sprint. Each team has written a brief summary of the work they did during the camp:\nIrrationality\nTeam: Christopher Galias, Johannes Heidecke, Dmitrii Krasheninnikov, Jan Kulveit, Nandi Schoots\n\nOur team worked on how to model human (ir)rationality in the context of value learning when trying to learn a human's reward function based on expert demonstrations with inverse reinforcement learning (IRL).\nWe focussed on two different sub-topics: bounded rationality and time-correlated irrationality.\n\nBounded rationality topic:\n\nWe analyzed the difference between perfectly rational and boundedly rational agents and why the latter might provide a better model for human behavior, explaining many biases observed in human thinking.\nWe looked at existing formalizations of bounded rationality, especially an information theoretic perspective introduced by Ortega and Braun.\nWe started investigating how to model bounded rational agents for reinforcement learning problems.\nWe began formalizing how to model the inverse step of IRL for bounded rational agents, based both on Maximum Causal Entropy IRL and Guided Cost Learning.\nWe set up a small test environment with many satisficing solutions and an optimal solution which is hard to find. We collected human expert demonstrations for this environment and compared it to the performance of a fully rational computer agent. The observed differences support the claim that bounded rationality models are needed in IRL to extract adequate reward functions.\nWe received funding from Paul Christiano to continue our work.\n\n\n\nTime-correlated irrationality topic:\n\nThe project consists of 2 parts: introducing a Laplace prior on the softmax temperatures of the transitions of the Boltzmann-rational agent, and enforcing a correlation between the temperatures at nearby timesteps.\nDuring the camp we worked out the math & the algorithm for the first part, and have started working on the implementation.\nThe second part of the project and the writeup will be done in the following months. 
We plan to both work remotely and meet up in person.\n\n\n\nWisdom\nTeam: Karl Koch, David Kristoffersson, Markus Salmela, Justin Shovelain \nWe further developed tools for determining the harm versus benefit of projects on the long-term future:\n\n\n(Context: We have earlier work here, notably a decision tree for analyzing projects.)\nHeuristics: Worked extensively on developing practical heuristics for determining whether a technological development is net beneficial or harmful in the long run\nScaffolding: Defined a wider context for the decision tree, to tell you when to use the decision tree and how to improve interventions/projects to be more good for the world.\nRace/competitive dynamics: Modeled some conditions of generating competitive races.\nInformation concealment: Incorporated information from man-made disasters and information concealment\n\nDeveloped a potential existential risk reduction funding delegation strategy for rich donors:\n\nAnalyzed how to maximize a funder's ability to update on data and use the knowledge of others, and yet mostly avoid the principal agent problem and Goodhart's law\nDeveloped a funding organization design with expert delegates, collaborative investment decisions, and strong self-improving elements\n\nZero Safety\nTeam: Vojta Kovarik, Igor Sieradzki, Michael Świętek\n\nGoal: Better understand the strategy learned by Alpha Zero algorithm\nImplemented Alpha Zero in Gomoku, trained Alpha Zero in (a) 6*6 board, 4-in-a-row and (b) 8*8, 5-in-a-row\nTraining the neural net in (a) took ~40M samples. We managed to train a new neural net using only 350 unique samples in such a way that the resulting strategy is very similar to the original Alpha Zero player.\nThis led us to discover a weakness in the strategy learned by both the new Alpha Zero and the original one.\nFuture plans: Test on more complex games, experiment with more robust ways of finding representative subsets of the training data, visualize these representative subsets in an automated way.\n\nSafe AF\nTeam: James Bell, Linda Linsefors, Caspar Oesterheld, Joar Skalse\n\n\nInvestigated the behaviour of common very simple machine learning algorithms in Newcomb like contexts, with the idea of trying to figure out what decision theory they are implicitly implementing.\nSpecifically we looked at the epsilon-greedy and softmax algorithms for bandit problems. At each step these algorithms compute a probability distribution over actions and then draw their next action from that distribution. The reward for each action depended on the probability distribution that the algorithms had found as an intermediate step but they were trained in the standard way i.e. assuming that there was no such dependence.\nFormulated a selection of decision theory problems as bandit problems. Such bandit problems provide a general enough framework to include variants of playing a prisoners dilemma against a copy, evidential blackmail and death in Damascus.\nWe found that the algorithms did not coherently follow any established decision theory, however they did show a preference for ratifiable choices of probability distribution and we were able to find some results on their convergence properties. 
We are writing a paper with our results.\n\n\nSide effects in Gridworlds\nTeam: Jessica Cooper, Karol Kubicki, Gavin Leech, Tom McGrath\n\nImplemented a baseline Q-learning agent for gridworld environments.\nImplemented inverse reinforcement learning in the Sokoban gridworld from Deepmind's original paper.\nCreated new gridworlds to cover a wider variety of side effects and expose more nuances, for instance the difficulty in defining \"leaving the environment unchanged\" when the environment is dynamic or stochastic.\nCode is available on our Github repository and Gavin Leech has written a blog post that goes into more detail.\n\nFuture plans:\n\nGeneralise the tools that we created to work with arbitrary pycolab environments.\nAdd maximum entropy deep IRL.\nSubmit a pull request with the above to the Safety Gridworlds repository in order to make it easier for others to get started doing machine learning safety research.\n\n\n\nLast but not least…\nWe would like to thank those who have funded the camp: MIRI, CEA, Greg Colbourne, Lotta and Claes Linsefors.\n", "url": "https://aisafety.camp", "title": "AISC 1: Research Summaries", "source": "aisafety.camp", "date_published": "n/a", "paged_url": "https://aisafety.camp/feed?paged=1", "id": "ba945635da790b61b8e30c83b7304b48"}
alignment_newsletter.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
arbital.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
arxiv_papers.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7df3e729f57f069188300faec15e8142de4f66795fc040eba4cd7081b7c13934
+ size 48555071
audio_transcripts.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
carado.moe.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
cold.takes.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
deepmind.blog.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
distill.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
eaforum.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c0a0655775a336e305c31c8de93001e1c7741dca2a1f4b4e56cff4e758dec382
+ size 227235566
gdocs.jsonl ADDED
@@ -0,0 +1,12 @@
+ {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": "", "date_published": "n/a", "text": "n/a", "url": "n/a", "docx_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/gdocs/README.docx", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": "", "date_published": "n/a", "text": "n/a", "url": "n/a", "docx_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/gdocs/NeurIPSorICML_7oalk.docx", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": "", "date_published": "n/a", "text": "n/a", "url": "n/a", "docx_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/gdocs/individuallyselected_92iem.docx", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": "", "date_published": "n/a", "text": "n/a", "url": "n/a", "docx_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/gdocs/individuallyselected_zlzai.docx", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": "", "date_published": "n/a", "text": "n/a", "url": "n/a", "docx_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/gdocs/individuallyselected_7ujun.docx", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": "", "date_published": "n/a", "text": "n/a", "url": "n/a", "docx_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/gdocs/individuallyselected_84py7.docx", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": "", "date_published": "n/a", "text": "n/a", "url": "n/a", "docx_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/gdocs/NeurIPSorICML_bj9ne.docx", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": "", "date_published": "n/a", "text": "n/a", "url": "n/a", "docx_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/gdocs/individuallyselected_w5cb5.docx", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": "", "date_published": "n/a", "text": "n/a", "url": "n/a", "docx_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/gdocs/NeurIPSorICML_a0nfw.docx", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": "", "date_published": "n/a", "text": "n/a", "url": "n/a", "docx_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/gdocs/NeurIPSorICML_cvgig.docx", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": "", "date_published": "n/a", "text": "n/a", "url": "n/a", "docx_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/gdocs/NeurIPSorICML_lgu5f.docx", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "gdocs", "source_filetype": "docx", "converted_with": "pandoc", "title": "Word Document", "authors": "", "date_published": "n/a", "text": "n/a", "url": "n/a", "docx_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/gdocs/NeurIPSorICML_q243b.docx", "id": "274b68192b056e268f128ff63bfcd4a4"}
gdrive_ebooks.jsonl ADDED
@@ -0,0 +1,21 @@
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "The Age of Em: Work, Love and Life when Robots Rule the Earth", "date_published": "0101-01-01T00:00:00+00:00", "chapter_names": ["Title page", "Copyright", "Preface", "Acknowledgements", "Contents", "Introduction", "I. Basics", "1. Start", "Overview", "Summary", "2. Modes", "Precedents", "Prior Eras", "Our Era", "Era Values", "Dreamtime", "Limits", "3. Framing", "Motivation", "Forecasting", "Scenarios", "Consensus", "Scope", "Biases", "4. Assumptions", "Brains", "Emulations", "Complexity", "Artificial Intelligence", "5. Implementation", "Mindreading", "Hardware", "Security", "Parallelism", "II. Physics", "6. Scales", "Speeds", "Bodies", "Lilliput", "Meetings", "Entropy", "Miserly Minds", "7. Infrastructure", "Climate", "Cooling", "Air and Water", "Buildings", "Manufacturing", "8. Appearances", "Virtual Reality", "Comfort", "Shared Spaces", "Merging Real and Virtual", "9. Information", "Views", "Records", "Fakery", "Simulations", "10. Existence", "Copying", "Rights", "Many Ems", "Surveillance", "11. Farewells", "Fragility", "Retirement", "Ghosts", "Ways to End", "Defining Death", "Suicide", "III. Economics", "12. Labor", "Supply and Demand", "Malthusian Wages", "First Ems", "Selection", "Enough Ems", "13. Efficiency", "Clan Concentration", "Competition", "Efficiency", "Eliteness", "Qualities", "14. Work", "Work Hours", "Spurs", "Spur Uses", "Social Power", "15. Business", "Institutions", "New Institutions", "Combinatorial Auctions", "Prediction Markets", "16. Growth", "Faster Growth", "Growth Estimate", "Growth Myths", "Finance", "17. Lifecycle", "Careers", "Peak Age", "Maturity", "Preparation", "Training", "Childhood", "IV. Organization", "18. Clumping", "Cities", "City Structure", "City Auctions", "Choosing Speed", "Transport", "19. Groups", "Clans", "Managing Clans", "Firms", "Firm-Clan Relations", "Teams", "Mass versus Niche Teams", "20. Conflict", "Inequality", "Em Inequality", "Redistribution", "War", "Nepotism", "Fake Experts", "21. Politics", "Status", "Governance", "Clan Governance", "Democracy", "Coalitions", "Factions", "22. Rules", "Law", "Efficient Law", "Innovation", "Software", "Lone Developers", "V. Sociology", "23. Mating", "Sexuality", "Open-Source Lovers", "Pair Bonds", "Gender", "Gender Imbalance", "24. Signals", "Showing Off", "Personal Signals", "Group Signals", "Charity", "Identity", "Copy Identity", "25. Collaboration", "Ritual", "Religion", "Swearing", "Conversation", "On Call Advice", "Synchronization", "26. Society", "Culture", "Divisions", "Farmer-Like", "Travel", "Stories", "Clan Stories", "27. Minds", "Humans", "Unhumans", "Partial Minds", "Psychology", "Intelligence", "Intelligence Explosion", "VI. Implications", "28. Variations", "Trends", "Alternatives", "Transition", "Enabling Technologies", "Aliens", "29. Choices", "Evaluation", "Quality of Life", "Policy", "Charity", "Success", "30. Finale", "Critics", "Conclusion", "References", "Index"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/Robin Hanson - The Age of Em_ Work, Love and Life when Robots Rule the Earth (2016, Oxford University Press) - libgen.lc.epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies", "date_published": "2014-01-13T00:00:00+00:00", "chapter_names": ["Title", "Dedication", "Contents", "Chapter 1 The Big Stories", "Chapter 2 The Skills of the New Machines: Technology Races Ahead", "Chapter 3 Moore’s Law and the Second Half of the Chessboard", "Chapter 4 The Digitization of Just About Everything", "Chapter 5 Innovation: Declining or Recombining?", "Chapter 6 Artificial and Human Intelligence in the Second Machine Age", "Chapter 7 Computing Bounty", "Chapter 8 Beyond Gdp", "Chapter 9 The Spread", "Chapter 10 The Biggest Winners: Stars and Superstars", "Chapter 11 Implications of the Bounty and the Spread", "Chapter 12 Learning to Race With Machines: Recommendations for Individuals", "Chapter 13 Policy Recommendations", "Chapter 14 Long-Term Recommendations", "Chapter 15 Technology and the Future (Which Is Very Different from “Technology Is the Future”)", "Acknowledgments", "Notes", "Illustration Sources", "Index", "More Praise for The Second Machine Age", "Also by Erik Brynjolfsson and Andrew McAfee", "Copyright"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/Erik Brynjolfsson & Andrew McAfee - The Second Machine Age_ Work, Progress, and Prosperity in a Time of Brilliant Technologies (2014, W. W. Norton) - libgen.lc.epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "The Myth of the Rational Voter: Why Democracies Choose Bad Policies", "date_published": "2011-08-15T05:00:00+00:00", "chapter_names": ["Half title", "Title", "Copyright", "Dedication", "Contents", "Preface to the Paperback Edition", "Acknowledgments", "Introduction The Paradox of Democracy", "Chapter 1 Beyond the Miracle of Aggregation", "Chapter 2 Systematically Biased Beliefs about Economics", "Chapter 3 Evidence from the Survey of Americans and Economists on the Economy", "Chapter 4 Classical Public Choice and the Failure of Rational Ignorance", "Chapter 5 Rational Irrationality", "Chapter 6 From Irrationality to Policy", "Chapter 7 Irrationality and the Supply Side of Politics", "Chapter 8 “Market Fundamentalism” versus the Religion of Democracy", "Conclusion In Praise of the Study of Folly", "Notes", "References", "Index"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/Bryan Caplan - The Myth of the Rational Voter_ Why Democracies Choose Bad Policies (2007, Princeton University Press) - libgen.lc.epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "Superintelligence: Paths, Dangers, Strategies", "date_published": "2014", "chapter_names": ["Lists of Figures, Tables, and Boxes", "1. Past developments and present capabilities", "Growth modes and big history", "Great expectations", "Seasons of hope and despair", "State of the art", "Opinions about the future of machine intelligence", "2. Paths to superintelligence", "Artificial intelligence", "Whole brain emulation", "Biological cognition", "Brain–computer interfaces", "Networks and organizations", "Summary", "3. Forms of superintelligence", "Speed superintelligence", "Collective superintelligence", "Quality superintelligence", "Direct and indirect reach", "Sources of advantage for digital intelligence", "4. The kinetics of an intelligence explosion", "Timing and speed of the takeoff", "Recalcitrance", "Non-machine intelligence paths", "Emulation and AI paths", "Optimization power and explosivity", "5. Decisive strategic advantage", "Will the frontrunner get a decisive strategic advantage?", "How large will the successful project be?", "Monitoring", "International collaboration", "From decisive strategic advantage to singleton", "6. Cognitive superpowers", "Functionalities and superpowers", "An AI takeover scenario", "Power over nature and agents", "7. The superintelligent will", "The relation between intelligence and motivation", "Instrumental convergence", "Self-preservation", "Goal-content integrity", "Cognitive enhancement", "Technological perfection", "Resource acquisition", "8. Is the default outcome doom?", "Existential catastrophe as the default outcome of an intelligence explosion?", "The treacherous turn", "Malignant failure modes", "Perverse instantiation", "Infrastructure profusion", "Mind crime", "9. The control problem", "Two agency problems", "Capability control methods", "Boxing methods", "Incentive methods", "Stunting", "Tripwires", "Motivation selection methods", "Direct specification", "Domesticity", "Indirect normativity", "Augmentation", "Synopsis", "10. Oracles, genies, sovereigns, tools", "Oracles", "Genies and sovereigns", "Tool-AIs", "Comparison", "11. Multipolar scenarios", "Of horses and men", "Wages and unemployment", "Capital and welfare", "The Malthusian principle in a historical perspective", "Population growth and investment", "Life in an algorithmic economy", "Voluntary slavery, casual death", "Would maximally efficient work be fun?", "Unconscious outsourcers?", "Evolution is not necessarily up", "Post-transition formation of a singleton?", "A second transition", "Superorganisms and scale economies", "Unification by treaty", "12. Acquiring values", "The value-loading problem", "Evolutionary selection", "Reinforcement learning", "Associative value accretion", "Motivational scaffolding", "Value learning", "Emulation modulation", "Institution design", "Synopsis", "13. Choosing the criteria for choosing", "The need for indirect normativity", "Coherent extrapolated volition", "Some explications", "Rationales for CEV", "Further remarks", "Morality models", "Do What I Mean", "Component list", "Goal content", "Decision theory", "Epistemology", "Ratification", "Getting close enough", "14. 
The strategic picture", "Science and technology strategy", "Differential technological development", "Preferred order of arrival", "Rates of change and cognitive enhancement", "Technology couplings", "Second-guessing", "Pathways and enablers", "Effects of hardware progress", "Should whole brain emulation research be promoted?", "The person-affecting perspective favors speed", "Collaboration", "The race dynamic and its perils", "On the benefits of collaboration", "Working together", "15. Crunch time", "Philosophy with a deadline", "What is to be done?", "Seeking the strategic light", "Building good capacity", "Particular measures", "Will the best in human nature please stand up", "Notes", "Bibliography", "Index"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/Superintelligence Paths, Dangers, Strategies by Nick Bostrom (z-lib.org).epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "Human Compatible", "date_published": "2019-10-08", "chapter_names": ["Cover", "Also by Stuart Russell", "Title Page", "Copyright", "Dedication", "Contents", "Preface", "Why This Book? Why Now?", "Overview of the Book", "1. If We Succeed", "How Did We Get Here?", "What Happens Next?", "What Went Wrong?", "Can We Fix It?", "2. Intelligence in Humans and Machines", "Intelligence", "Computers", "Intelligent Computers", "3. How Might AI Progress in the Future?", "The Near Future", "When Will Superintelligent AI Arrive?", "Conceptual Breakthroughs to Come", "Imagining a Superintelligent Machine", "The Limits of Superintelligence", "How Will AI Benefit Humans?", "4. Misuses of AI", "Surveillance, Persuasion, and Control", "Lethal Autonomous Weapons", "Eliminating Work as We Know It", "Usurping Other Human Roles", "5. Overly Intelligent AI", "The Gorilla Problem", "The King Midas Problem", "Fear and Greed: Instrumental Goals", "Intelligence Explosions", "6. The Not-So-Great AI Debate", "Denial", "Deflection", "Tribalism", "Can’t We Just . . .", "The Debate, Restarted", "7. AI: A Different Approach", "Principles for Beneficial Machines", "Reasons for Optimism", "Reasons for Caution", "8. Provably Beneficial AI", "Mathematical Guarantees", "Learning Preferences from Behavior", "Assistance Games", "Requests and Instructions", "Wireheading", "Recursive Self-Improvement", "9. Complications: Us", "Different Humans", "Many Humans", "Nice, Nasty, and Envious Humans", "Stupid, Emotional Humans", "Do Humans Really Have Preferences?", "10. Problem Solved?", "Beneficial Machines", "Governance of AI", "Misuse", "Enfeeblement and Human Autonomy", "Appendix A: Searching for Solutions", "Appendix B: Knowledge and Logic", "Appendix C: Uncertainty and Probability", "Appendix D: Learning from Experience", "Acknowledgments", "Notes", "Image Credits", "Index", "About the Author", "Cover", "Cover", "Title Page", "Table of Contents", "Start", "Copyright", "i", "ii", "iii", "iv", "v", "vi", "vii", "viii", "ix", "x", "xi", "xii", "xiii", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63", "64", "65", "66", "67", "68", "69", "70", "71", "72", "73", "74", "75", "76", "77", "78", "79", "80", "81", "82", "83", "84", "85", "86", "87", "88", "89", "90", "91", "92", "93", "94", "95", "96", "97", "98", "99", "100", "101", "102", "103", "104", "105", "106", "107", "108", "109", "110", "111", "112", "113", "114", "115", "116", "117", "118", "119", "120", "121", "122", "123", "124", "125", "126", "127", "128", "129", "130", "131", "132", "133", "134", "135", "136", "137", "138", "139", "140", "141", "142", "143", "144", "145", "146", "147", "148", "149", "150", "151", "152", "153", "154", "155", "156", "157", "158", "159", "160", "161", "162", "163", "164", "165", "166", "167", "168", "169", "170", "171", "172", "173", "174", "175", "176", "177", "178", "179", "180", "181", "182", "183", "184", "185", "186", "187", "188", "189", "190", "191", "192", "193", "194", "195", "196", "197", "198", "199", "200", "201", "202", "203", "204", "205", "206", "207", "208", "209", "210", "211", "212", "213", "214", "215", "216", "217", "218", "219", 
"220", "221", "222", "223", "224", "225", "226", "227", "228", "229", "230", "231", "232", "233", "234", "235", "236", "237", "238", "239", "240", "241", "242", "243", "244", "245", "246", "247", "248", "249", "250", "251", "252", "253", "254", "255", "256", "257", "258", "259", "260", "261", "262", "263", "264", "265", "266", "267", "268", "269", "270", "271", "272", "273", "274", "275", "276", "277", "278", "279", "280", "281", "282", "283", "284", "285", "286", "287", "288", "289", "290", "291", "292", "293", "294", "295", "296", "297", "298", "299", "300", "301", "302", "303", "304", "305", "306", "307", "308", "309", "310", "311", "312", "313", "314", "315", "316", "317", "318", "319", "320", "321", "322", "323", "324", "325", "326", "327", "328", "329", "330", "331", "332", "333", "334", "335", "336"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/Human Compatible Artificial Intelligence and the Problem of Control by Stuart Russell (z-lib.org).epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "Life 3.0: Being Human in the Age of Artificial Intelligence", "date_published": "2017-08-29T04:00:00+00:00", "chapter_names": ["Other Titles", "Title Page", "Copyright", "Contents", "Dedication", "Acknowledgments", "Prelude: The Tale of the Omega Team", "1 Welcome to the Most Important Conversation of Our Time", "A Brief History of Complexity", "The Three Stages of Life", "Controversies", "Misconceptions", "The Road Ahead", "2 Matter Turns Intelligent", "What Is Intelligence?", "What Is Memory?", "What Is Computation?", "What Is Learning?", "3 The Near Future: Breakthroughs, Bugs, Laws, Weapons and Jobs", "Breakthroughs", "Bugs vs. Robust AI", "Laws", "Weapons", "Jobs and Wages", "Human-Level Intelligence?", "4 Intelligence Explosion?", "Totalitarianism", "Prometheus Takes Over the World", "Slow Takeoff and Multipolar Scenarios", "Cyborgs and Uploads", "What Will Actually Happen?", "5 Aftermath: The Next 10,000 Years", "Libertarian Utopia", "Benevolent Dictator", "Egalitarian Utopia", "Gatekeeper", "Protector God", "Enslaved God", "Conquerors", "Descendants", "Zookeeper", "1984", "Reversion", "Self-Destruction", "What Do You Want?", "6 Our Cosmic Endowment: The Next Billion Years and Beyond", "Making the Most of Your Resources", "Gaining Resources Through Cosmic Settlement", "Cosmic Hierarchies", "Outlook", "7 Goals", "Physics: The Origin of Goals", "Biology: The Evolution of Goals", "Psychology: The Pursuit of and Rebellion Against Goals", "Engineering: Outsourcing Goals", "Friendly AI: Aligning Goals", "Ethics: Choosing Goals", "Ultimate Goals?", "8 Consciousness", "Who Cares?", "What Is Consciousness?", "What’s the Problem?", "Is Consciousness Beyond Science?", "Experimental Clues About Consciousness", "Theories of Consciousness", "Controversies of Consciousness", "How Might AI Consciousness Feel?", "Meaning", "Epilogue: The Tale of the FLI Team", "Notes"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/Max Tegmark - Life 3.0_ Being Human in the Age of Artificial Intelligence (2017, Alfred A. Knopf) - libgen.lc.epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "Global Catastrophic Risks", "date_published": "2008", "chapter_names": ["Cover Page", "Title Page", "Copyright Page", "Acknowledgements", "Foreword", "Contents", "1 Introduction", "1.1 Why?", "1.2 Taxonomy and organization", "1.3 Part I: Background", "1.4 Part II: Risks from nature", "1.5 Part III: Risks from unintended consequences", "1.6 Part IV: Risks from hostile acts", "1.7 Conclusions and future directions", "Part I Background", "2 Long-term astrophysical processes", "2.1 Introduction: physical eschatology", "2.2 Fate of the Earth", "2.3 Isolation of the local group", "2.4 Collisionwith Andromeda", "2.5 The end of stellar evolution", "2.6 The era of degenerate remnants", "2.7 The era of black holes", "2.8 The Dark Era and beyond", "2.9 Life and information processing", "2.10 Conclusion", "Suggestions for further reading", "References", "3 Evolution theory and the future of humanity", "3.1 Introduction", "3.2 The causes of evolutionary change", "3.3 Environmental changes and evolutionary changes", "3.3.1 Extreme evolutionary changes", "3.3.2 Ongoing evolutionary changes", "3.3.3 Changes in the cultural environment", "3.4 Ongoing human evolution", "3.4.1 Behavioural evolution", "3.4.2 The future of genetic engineering", "3.4.3 The evolution of other species, including those on which we depend", "3.5 Future evolutionary directions", "3.5.1 Drastic and rapid climate change without changes in human behaviour", "3.5.2 Drastic but slower environmental change accompanied by changes in human behaviour", "3.5.3 Colonization of new environments by our species", "Suggestions for further reading", "References", "4 Millennial tendencies in responses to apocalyptic threats", "4.1 Introduction", "4.2 Types of millennialism", "4.2.1 Premillennialism", "4.2.2 Amillennialism", "4.2.3 Post-millennialism", "4.3 Messianism and millenarianism", "4.4 Positive or negative teleologies: utopianism and apocalypticism", "4.5 Contemporary techno-millennialism", "4.5.1 The singularity and techno-millennialism", "4.6 Techno-apocalypticism", "4.7 Symptoms of dysfunctional millennialism in assessing future scenarios", "4.8 Conclusions", "Suggestions for further reading", "References", "5 Cognitive biases potentially affecting judgement of global risks", "5.1 Introduction", "5.2 Availability", "5.3 Hindsight bias", "5.4 Black Swans", "5.6 Confirmation bias", "5.7 Anchoring, adjustment, and contamination", "5.8 The affect heuristic", "5.9 Scope neglect", "5.10 Calibration and overconfidence", "5.11 Bystander apathy", "5.12 A final caution", "5.13 Conclusion", "Suggestions for further reading", "References", "6 Observation selection effects and global catastrophic risks", "6.1 Introduction: anthropic reasoning and global risks", "6.2 Past-future asymmetry and risk inferences", "6.2.1 A simplified model", "6.2.2 Anthropic overconfidence bias", "6.2.3 Applicability class of risks", "6.2.4 Additional astrobiological information", "6.3 Doomsday Argument", "6.4 Fermi’s paradox", "6.4.1 Fermi’s paradox and GCRs", "6.4.2 Risks following from the presence of extraterrestrial intelligence", "6.5 The SimulationArgument", "6.6 Making progress in studying observation selection effects", "Suggestions for further reading", "References", "7 Systems-based risk analysis", "7.1 Introduction", "7.2 Risk to interdependent infrastructure and sectors of the economy", "7.3 Hierarchical holographic modelling and the theory of scenario structuring", "7.3.1 
Philosophy and methodology of hierarchical holographic modelling", "7.3.2 The definition of risk", "7.3.3 Historical perspectives", "7.4 Phantom system models for risk management of emergent multi-scale systems", "7.5 Risk of extreme and catastrophic events", "7.5.1 The limitations of the expected value of risk", "7.5.2 The partitioned multi-objective risk method", "7.5.3 Risk versus reliability analysis", "Suggestions for further reading", "References", "8 Catastrophes and insurance", "8.1 Introduction", "8.2 Catastrophes", "8.3 What the business world thinks", "8.4 Insurance", "8.5 Pricing the risk", "8.6 Catastrophe loss models", "8.7 What is risk?", "8.8 Price and probability", "8.9 The age ofuncertainty", "8.10 New techniques", "8.10.1 Qualitative risk assessment", "8.10.2 Complexity science", "8.10.3 Extreme value statistics", "8.11 Conclusion: against the gods?", "Suggestions for further reading", "References", "9 Public policy towards catastrophe", "References", "Part II Risks from nature", "10 Super-volcanism and other geophysical processes of catastrophic import", "10.1 Introduction", "10.2 Atmospheric impact of a super-eruption", "10.3 Volcanic winter", "10.4 Possible environmental effects of a super-eruption", "10.5 Super-eruptions and human population", "10.6 Frequency of super-eruptions", "10.7 Effects of a super-eruptions on civilization", "10.8 Super-eruptions and life in the universe", "Suggestions for further reading", "References", "11 Hazards from comets and asteroids", "11.1 Something like a huge mountain", "11.2 How oftenare we struck?", "11.2.1 Impact craters", "11.2.2 Near-Earth object searches", "11.2.3 Dynamical analysis", "11.3 The effects of impact", "11.4 The role of dust", "11.5 Ground truth?", "11.6 Uncertainties", "Suggestions for further reading", "References", "12 Influence of Supernovae, gamma-ray bursts, solar flares, and cosmic rays on the terrestrial environment", "12.1 Introduction", "12.2 Radiationthreats", "12.2.1 Credible threats", "12.2.2 Solar flares", "12.2.3 Solar activity and global warming", "12.2.4 Solar extinction", "12.2.5 Radiation from supernova explosions", "12.2.6 Gamma-ray bursts", "12.3 Cosmic ray threats", "12.3.1 Earth magnetic field reversals", "12.3.2 Solar activity, cosmic rays, and global warming", "12.3.3 Passage through the Galactic spiral arms", "12.3.4 Cosmic rays from nearby supernovae", "12.3.5 Cosmic rays from gamma-ray bursts", "12.4 Origin of the major mass extinctions", "12.5 The Fermi paradox and mass extinctions", "12.6 Conclusions", "References", "Part III Risks from unintended consequences", "13 Climate change and global risk", "13.1 Introduction", "13.2 Modelling climate change", "13.3 A simple model of climate change", "13.3.1 Solar forcing", "13.3.2 Volcanic forcing", "13.3.3 Anthropogenic forcing", "13.4 Limits to current knowledge", "13.5 Defining dangerous climate change", "13.6 Regional climate risk under anthropogenic change", "13.7 Climate risk and mitigation policy", "13.8 Discussion and conclusions", "Suggestions for further reading", "References", "14 Plagues and pandemics: past, present, and future", "14.1 Introduction", "14.2 The baseline: the chronic and persisting burden of infectious disease", "14.3 The causation of pandemics", "14.4 The nature and source of the parasites", "14.5 Modes of microbial and viral transmission", "14.6 Nature of the disease impact: high morbidity, high mortality, or both", "14.7 Environmental factors", "14.8 Humanbehaviour", "14.9 Infectious diseases as contributors to 
other natural catastrophes", "14.10 Past Plagues and pandemics and their impact on history", "14.11 Plagues of historical note", "14.11.1 Bubonic plague: the Black Death", "14.11.2 Cholera", "14.11.3 Malaria", "14.11.4 Smallpox", "14.11.5 Tuberculosis", "14.11.6 Syphilis as a paradigm of sexually transmitted infections", "14.11.7 Influenza", "14.12 Contemporary plagues and pandemics", "14.12.1 HIV/AIDS", "14.12.2 Influenza", "14.12.3 HIV and tuberculosis: the double impact of new and ancient threats", "14.13 Plagues and pandemics of the future", "14.13.1 Microbes that threaten without infection: the microbial toxins", "14.13.2 Iatrogenic diseases", "14.13.3 The homogenization of peoples and cultures", "14.13.4 Man-made viruses", "14.14 Discussion and conclusions", "Suggestions for further reading", "References", "15 Artificial Intelligence as a positive and negative factor in global risk", "15.1 Introduction", "15.2 Anthropomorphic bias", "15.3 Predictionand design", "15.4 Underestimating the power of intelligence", "15.5 Capability and motive", "15.5.1 Optimization processes", "15.5.2 Aiming at the target", "15.6 Friendly Artificial Intelligence", "15.7 Technical failure and philosophical failure", "15.7.1 An example of philosophical failure", "15.7.2 An example of technical failure", "15.8 Rates of intelligence increase", "15.9 Hardware", "15.10 Threats and promises", "15.11 Local and majoritarian strategies", "15.12 Interactions of Artificial Intelligence with other technologies", "15.13 Making progress on Friendly Artificial Intelligence", "15.14 Conclusion", "References", "16 Big troubles, imagined and real", "16.1 Why look for trouble?", "16.2 Looking before leaping", "16.2.1 Accelerator disasters", "16.2.2 Runaway technologies", "16.3 Preparing to Prepare", "16.4 Wondering", "Suggestions for further reading", "References", "17 Catastrophe, social collapse, and human extinction", "17.1 Introduction", "17.3 Social growth", "17.4 Social collapse", "17.5 The distribution of disaster", "17.6 Existential disasters", "17.7 Disaster policy", "17.8 Conclusion", "References", "Part IV Risks from hostile acts", "18 The continuing threat of nuclear war", "18.1 Introduction", "18.1.1 US nuclear forces", "18.1.2 Russiannuclear forces", "18.2 Calculating Armageddon", "18.2.1 Limited war", "18.2.2 Global war", "18.2.3 Regional war", "18.2.4 Nuclear winter", "18.3 The current nuclear balance", "18.4 The good news about proliferation", "18.5 A comprehensive approach", "18.6 Conclusion", "Suggestions for further reading", "19 Catastrophic nuclear terrorism: a preventable peril", "19.1 Introduction", "19.2 Historical recognition of the risk of nuclear terrorism", "19.3 Motivations and capabilities for nuclear terrorism", "19.3.1 Motivations: the demand side of nuclear terrorism", "19.3.2 The supply side of nuclear terrorism", "19.4 Probabilities of occurrence", "19.4.1 The demand side: who wants nuclear weapons?", "19.4.2 The supply side: how far have terrorists progressed?", "19.4.3 What is the probability that terrorists will acquire nuclear explosive capabilities in the future?", "19.4.4 Could terrorists precipitate a nuclear holocaust by non-nuclear means?", "19.5 Consequences of nuclear terrorism", "19.5.1 Physical and economic consequences", "19.5.2 Psychological, social, and political consequences", "19.6 Risk assessment and risk reduction", "19.6.1 The risk of global catastrophe", "19.6.2 Risk reduction", "19.7 Recommendations", "19.7.1 Immediate priorities", "19.7.2 Long-term priorities", 
"19.8 Conclusion", "Suggestions for further reading", "References", "20 Biotechnology and biosecurity", "20.1 Introduction", "20.2 Biological weapons and risks", "20.3 Biological weapons are distinct from other so-called weapons of mass destruction", "20.4 Benefits come with risks", "20.5 Biotechnology risks go beyond traditional virology, micro- and molecular biology", "20.6 Addressing biotechnology risks", "20.6.1 Oversight of research", "20.6.2 ‘Soft’ oversight", "20.6.3 Multi-stakeholder partnerships for addressing biotechnology risks", "s20.6.4 A risk management framework for de novo DNA synthesis technologies", "20.6.5 From voluntary codes of conduct to international regulations", "20.6.6 Biotechnology risks go beyond creating novel pathogens", "20.6.7 Spread of biotechnology may enhance biological security", "20.7 Catastrophic biological attacks", "20.8 Strengthening disease surveillance and response", "20.8.1 Surveillance and detection", "20.8.2 Collaboration and communication are essential for managing outbreaks", "20.8.3 Mobilization of the public health sector", "20.8.4 Containment of the disease outbreak", "20.8.5 Research, vaccines, and drug development are essential components of an effective defence strategy", "20.8.6 Biological security requires fostering collaborations", "20.9 Towards a biologically secure future", "Suggestions for further reading", "References", "21 Nanotechnology as global catastrophic risk", "21.1 Nanoscale technologies", "21.1.1 Necessary simplicity of products", "21.1.2 Risks associated with nanoscale technologies", "21.2 Molecular manufacturing", "21.2.1 Products of molecular manufacturing", "21.2.2 Nano-built weaponry", "21.2.3 Global catastrophic risks", "21.3 Mitigation of molecular manufacturing risks", "21.4 Discussion and conclusion", "Suggestions for further reading", "References", "22 The totalitarian threat", "22.1 Totalitarianism: what happened and why it (mostly) ended", "22.2 Stable totalitarianism", "22.3 Risk factors for stable totalitarianism", "22.3.1 Technology", "22.3.2 Politics", "22.4 Totalitarian risk management", "22.4.1 Technology", "22.4.2 Politics", "22.5 ‘What’s your p?’", "Suggestions for further reading", "References", "Authors’ biographies", "Index", "Foot Note", "ch01 footnote", "ch02 footnote", "ch05 footnote", "ch06 footnote", "ch08 footnote", "ch09 footnote", "ch13 footnote", "ch14 footnote", "ch15 footnote", "ch16 footnote", "ch18 footnote", "ch19 footnote", "ch20 footnote", "ch22 footnote"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/Global Catastrophic Risks by Bostrom, NickCirkovic, Milan M.Rees, Martin J.Milan M. Ćirković (z-lib.org).epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "Robot Sex", "date_published": "2017-10-27", "chapter_names": ["Title page", "Copyright page", "Acknowledgments", "I Introducing Robot Sex", "1 Should We Be Thinking about Robot Sex?", "2 On the Very Idea of Sex with Robots", "II Defending Robot Sex", "3 The Case for Sexbots", "4 Should We Campaign Against Sex Robots?", "5 Sex Robots and the Rights of the Disabled", "III Challenging Robot Sex", "6 Religious Perspectives on Sex with Robots", "7 The Symbolic-Consequences Argument in the Sex Robot Debate", "8 Legal and Moral Implications of Child Sex Robots", "IV The Robot’s Perspective", "9 Is It Good for Them Too? Ethical Concern for the Sexbots", "10 Was It Good for You Too? The New Natural Law Theory and the Paradoxical Good of Sexbots", "V The Possibility of Robot Love", "11 Automatic Sweethearts for Transhumanists", "12 From Sex Robots to Love Robots: Is Mutual Love with a Robot Possible?", "VI The Future of Robot Sex", "13 Intimacy, Bonding, and Sex Robots: Examining Empirical Results and Exploring Ethical Ramifications", "14 Deus Sex Machina: Loving Robot Sex Workers and the Allure of an Insincere Kiss", "15 Sexbot-Induced Social Change: An Economic Perspective", "Contributors", "Index", "Table 7.1 Van De Poel’s Principles for Ethical Technological Experiment.", "Table 13.1 Background questions about the subjects’ views on what sex robots are capable of, and percentages of subjects who agreed with the capabilities on the HRI16 data before the current and after the current sex robots questions.", "Table 13.2 Questions about the subjects’ views on the possible advantages of sex robots, and percentages of subjects who agreed with the possible advantages.", "Table 13.3 Questions about the subjects’ views on the possible advantages of sex robots and percentages of subjects who agreed with the possible advantages.", "Table 13.4 Questions about the subjects’ general views on sex robots and percentages of subjects who agreed with the statements.", "Figure 13.1 Comparison of appropriate uses between HRI 2016 data and the current data showing that there are no significant differences in subjects’ views of appropriate uses. Ratings are on a scale from 1=“completely inappropriate” to 7=“completely appropriate.” Error bars depict standard errors.", "Figure 13.2 Comparison of appropriate forms between HRI 2016 data and the current data showing that there are no significant differences in subjects’ views of appropriate forms. Ratings are on a scale from 1=“completely inappropriate” to 7=“completely appropriate.” Error bars depict standard errors.", "Figure 14.1 Mori’s Uncanny Valley. Based on Masahiro Mori, “The Uncanny Valley,” trans. K. F. MacDorman and N. Kageki,", "Figure 14.2 Robot Accommodation Process Theory (RAPT)", "Figure 14.3 The climb out of the Uncanny Valley; RAPT overlaid Mori’s Uncanny Valley", "Figure 14.4 The Robot Cultural Accommodation Cycle", "Cover", "Table of Contents"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/Danaher, John_McArthur, Neil - Robot sex_ social and ethical implications (2018_2017, MIT Press) - libgen.lc.epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "The Precipice", "date_published": "2020-03-24", "chapter_names": ["Cover", "Title Page", "Copyright", "Dedication", "List of Figures", "List of Tables", "PART ONE: THE STAKES", "Introduction", "1. Standing at the Precipice", "How We Got Here", "Where We Might Go", "The Precipice", "2. Existential Risk", "Understanding Existential Risk", "Looking to the Present", "Looking to Our Future", "Looking to Our Past", "Civilizational Virtues", "Cosmic Significance", "Uncertainty", "Our Neglect of Existential Risks", "PART TWO: THE RISKS", "3. Natural Risks", "Asteroids & Comets", "Supervolcanic Eruptions", "Stellar Explosions", "Other Natural Risks", "The Total Natural Risk", "4. Anthropogenic Risks", "Nuclear Weapons", "Climate Change", "Environmental Damage", "5. Future Risks", "Pandemics", "Unaligned Artificial Intelligence", "Dystopian Scenarios", "Other Risks", "PART THREE: THE PATH FORWARD", "6. The Risk Landscape", "Quantifying the Risks", "Combining and Comparing Risks", "Risk Factors", "Which Risks?", "7. Safeguarding Humanity", "Grand Strategy for Humanity", "Risks Without Precedent", "International Coordination", "Technological Progress", "Research on Existential Risk", "What You Can Do", "8. Our Potential", "Duration", "Scale", "Quality", "Choices", "Resources", "Acknowledgments", "Discover More", "Appendices", "Note on the Author", "Note on the Type", "Further Reading", "Bibliography", "Notes", "Begin Reading", "Table of Contents"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/The Precipice Existential Risk and the Future of Humanity by Toby Ord (z-lib.org).epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "2016 International Symposium on Experimental Robotics", "date_published": "n/a", "chapter_names": ["Cover", "Frontmatter", "1. Aerial Robots 1", "Learning Transferable Policies for Monocular Reactive MAV Control", "A micro-UAS to Start Prescribed Fires", "Research on Hammering Test System by Unmanned Aerial Vehicles for Infrastructure Surveillance", "Uncertainty Quantification for Small Robots Using Principal Orthogonal Decomposition", "Collaborative 3D Reconstruction Using Heterogeneous UAVs: System and Experiments", "2. Actuation", "A Modular Folded Laminate Robot Capable of Multi Modal Locomotion", "Combined Energy Harvesting and Control of Moball: A Barycentric Spherical Robot", "Control of Pneumatic Actuators with Long Transmission Lines for Rehabilitation in MRI", "Terrain-Dependant Control of Hexapod Robots Using Vision", "Untethered One-Legged Hopping in 3D Using Linear Elastic Actuator in Parallel (LEAP)", "Discrete Foot Shape Changes Improve Dynamics of a Hopping Robot", "3. Grasping 1", "Learning Grasps in a Synergy-based Framework", "Experimental Evaluation of a Perceptual Pipeline for Hierarchical Affordance Extraction", "Core Actuation Promotes Self-manipulability on a Direct-Drive Quadrupedal Robot", "Experiments with Hierarchical Reinforcement Learning of Multiple Grasping Policies", "Learning Hand-Eye Coordination for Robotic Grasping with Large-Scale Data Collection", "Improving Grasp Performance Using In-Hand Proximity and Dynamic Tactile Sensing", "4. Manipulation", "Learning Object Orientation Constraints and Guiding Constraints for Narrow Passages from One Demonstration", "Meta-level Priors for Learning Manipulation Skills with Sparse Features", "Automatic Object Modeling Through Integrating Perception and Robotic Manipulation", "ZMP Features for Touch Driven Robot Control via Tactile Servo", "Data-Driven Classification of Screwdriving Operations", "A System for Multi-step Mobile Manipulation: Architecture, Algorithms, and Experiments", "Application of Robot Manipulator for Cardiopulmonary Resuscitation", "Experimental Analysis of Human Control Strategies in Contact Manipulation Tasks", "5. Human-Robot Interaction 1", "Hybrid Human Motion Prediction for Action Selection Within Human-Robot Collaboration", "Design and Control of Lightweight Supernumerary Robotic Limbs for Sitting/Standing Assistance", "Integrated Intelligence for Human-Robot Teams", "EUROPtus: A Mixed-Initiative Controller for Multi-vehicle Oceanographic Field Experiments", "Implicitly Assisting Humans to Choose Good Grasps in Robot to Human Handovers", "Initial Data and Theory for a High Specific-Power Ankle Exoskeleton Device", "6. Mobile Robots 1", "High-Speed Wall-Contacting Drive for Underground Automatic Transport Vehicle", "Realizing Robust Control of Autonomous Vehicles", "Learning to Plan for Visibility in Navigation of Unknown Environments", "Parallel Manipulation of Millirobot Swarms Using Projected Light Fields", "Improving the Accuracy of Stereo Visual Odometry Using Visual Illumination Estimation", "Experimental Validation of a Template for Navigation of Miniature Legged Robots", "7. Perception", "Fruit Pose Estimation and Stem Touch Detection for Green Pepper Automatic Harvesting", "From Localized Shearing to Localized Slippage Perception", "Fit for Purpose? 
Predicting Perception Performance Based on Past Experience", "Deep Multispectral Semantic Scene Understanding of Forested Environments Using Multimodal Fusion", "Vision-Based Apple Counting and Yield Estimation", "Towards Learning to Perceive and Reason About Liquids", "8. Aerial Robots 2", "Vision-Based Obstacle Avoidance for Micro Air Vehicles Using an Egocylindrical Depth Map", "Transformable Multirotor with Two-Dimensional Multilinks: Modeling, Control, and Whole-Body Aerial Manipulation", "Localization of a Ground Robot by Aerial Robots for GPS-Deprived Control with Temporal Logic Constraints", "On the VINS Resource-Allocation Problem for a Dual-Camera, Small-Size Quadrotor", "Catching a Flying Ball with a Vision-Based Quadrotor", "Experience-Based Models of Surface Proximal Aerial Robot Flight Performance in Wind", "“On-the-Spot Training” for Terrain Classification in Autonomous Air-Ground Collaborative Teams", "Safe Navigation of Quadrotor Teams to Labeled Goals in Limited Workspaces", "9. Grasping 2", "Using Vision for Pre- and Post-grasping Object Localization for Soft Hands", "Grasping and Manipulation by Underactuated Hand with Multi-Joint Fingers", "Generalizing Regrasping with Supervised Policy Learning", "Experimental Validation of Contact Dynamics for In-Hand Manipulation", "Iterative Visual Recognition for Learning Based Randomized Bin-Picking", "Mechanism and Control of Whole-Body Electro-Hydrostatic Actuator Driven Humanoid Robot Hydra", "10. Planning and Control", "Gait Synthesis for Modular Soft Robots", "Discovering and Manipulating Affordances", "Experimental Evaluation of Hybrid Conditional Planning for Service Robotics", "Improved Learning of Dynamics Models for Control", "11. Mobile Robots 2", "Data Correlation and Comparison from Multiple Sensors Over a Coral Reef with a Team of Heterogeneous Aquatic Robots", "Multi Robot Object-Based SLAM", "Particle Filter Localization on Continuous Occupancy Maps", "Experimental Methods for Mobility and Surface Operations of Microgravity Robots", "Multi-Sensor SLAM with Online Self-Calibration and Change Detection", "Experimental Comparison of Open Source Vision-Based State Estimation Algorithms", "12. Human-Robot Interaction 2", "Human Pose Estimation from Imperfect Sensor Data via the Extended Kalman Filter", "Influence of Emotional Motions in Human-Robot Interactions", "Energy Based Control for Safe Human-Robot Physical Interaction", "Psychological Evaluation on Influence of Appearance and Synchronizing Operation of Android Robot", "Collective Cognition and Sensing in Robotic Swarms via an Emergent Group-Mind", "Recognizing Unfamiliar Gestures for Human-Robot Interaction Through Zero-Shot Learning", "Erratum to: Application of Robot Manipulator for Cardiopulmonary Resuscitation", "Backmatter"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/2017_Book_2016InternationalSymposiumOnEx.epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "On the Future: Prospects for Humanity", "date_published": "2018-10-16T00:00:00+00:00", "chapter_names": ["More Praise for On the Future", "Title Page", "Copyright Page", "Contents", "Preface", "Introduction", "1. Deep in the Anthropocene", "1.1. Perils and Prospects", "1.2. Nuclear Threats", "1.3. Eco-Threats and Tipping Points", "1.4. Staying within Planetary Boundaries", "1.5. Climate Change", "1.6. Clean Energy—and a ‘Plan B’?", "2. Humanity’s Future on Earth", "2.1. Biotech", "2.2. Cybertechnology, Robotics, and AI", "2.3. What about Our Jobs?", "2.4. Human-Level Intelligence?", "2.5. Truly Existential Risks?", "3. Humanity in a Cosmic Perspective", "3.1. The Earth in a Cosmic Context", "3.2. Beyond Our Solar System", "3.3. Spaceflight—Manned and Unmanned", "3.4. Towards a Post-Human Era?", "3.5. Alien Intelligence?", "4. The Limits and Future of Science", "4.1. From the Simple to the Complex", "4.2. Making Sense of Our Complex World", "4.3. How Far Does Physical Reality Extend?", "4.4. Will Science ‘Hit the Buffers’?", "4.5. What about God?", "5. Conclusions", "5.1. Doing Science", "5.2. Science in Society", "5.3. Shared Hopes and Fears", "Notes", "Index"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/Martin Rees - On the Future_ Prospects for Humanity (2018, Princeton University Press) - libgen.lc.epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "Smarter Than Us: The Rise of Machine Intelligence", "date_published": "2014-02-01T00:00:00+00:00", "chapter_names": ["Acknowledgments", "Terminator versus the AI", "Strength versus Intelligence", "What Is Intelligence? Can We Achieve It Artificially?", "How Powerful Could AIs Become?", "Talking to an Alien Mind", "Our Values Are Complex and Fragile", "What, Precisely, Do We Really (Really) Want?", "We Need to Get It All Exactly Right", "Listen to the Sound of Absent Experts", "A Summary", "That’s Where You Come In . . .", "About the Author", "Bibliography"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/Smarter than us the rise of machine intelligence by Armstrong, Stuart (z-lib.org).epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "Solomon's Code", "date_published": "2018-10-12", "chapter_names": ["Cover", "Title", "Contents", "Foreword", "Introduction", "1: Where Human Meets Machine", "2: A New Power Balance", "3: Think Symbiosis", "4: Frontiers of a Smarter World", "5: The Race for Global AI Influence", "6: Pandora’s Box", "7: Life and love in 2035", "8: A World Worth Shaping", "Afterword", "Acknowledgments", "Copyright", "v", "vi", "vii", "viii", "ix", "x", "xi", "xii", "xiii", "xiv", "xv", "xvi", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63", "64", "65", "66", "67", "68", "69", "70", "71", "72", "73", "74", "75", "76", "77", "78", "79", "80", "81", "82", "83", "84", "85", "86", "87", "88", "89", "90", "91", "92", "93", "94", "95", "96", "97", "98", "99", "100", "101", "102", "103", "104", "105", "106", "107", "108", "109", "110", "111", "112", "113", "114", "115", "116", "117", "118", "119", "120", "121", "122", "123", "124", "125", "126", "127", "128", "129", "130", "131", "132", "133", "134", "135", "136", "137", "138", "139", "140", "141", "142", "143", "144", "145", "146", "147", "148", "149", "150", "151", "152", "153", "154", "155", "156", "157", "158", "159", "160", "161", "162", "163", "164", "165", "166", "167", "168", "169", "170", "171", "172", "173", "174", "175", "176", "177", "178", "179", "180", "181", "182", "183", "184", "185", "186", "187", "188", "189", "190", "191", "192", "193", "194", "195", "196", "197", "198", "199", "200", "201", "202", "203", "204", "205", "206", "207", "208", "209", "210", "211", "212", "213", "214", "215", "216", "217", "218", "219", "220", "221", "222", "223", "224", "225", "226", "227", "228", "229", "230", "231", "232", "233", "234", "235", "236", "237", "238", "239", "240", "241", "242", "243", "244", "245", "246", "247", "248", "249", "250", "251", "252", "253", "254", "255", "256", "257", "258", "259", "260", "261", "262", "263", "264", "265", "266", "267", "268", "Cover", "Title Page", "Contents", "Start Reading"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/Olaf Groth, Mark Nitzberg - Solomon’s Code_ Humanity in a World of Thinking Machines (2018, Pegasus Books) - libgen.lc.epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "Fundamental Issues of Artificial Intelligence", "date_published": "n/a", "chapter_names": ["Cover", "Front Matter", "1. New Developments in the Philosophy of AI", "Part I. Computing", "2. Rationality and Intelligence: A Brief Update", "3. Computation and Multiple Realizability", "4. When Thinking Never Comes to a Halt: Using Formal Methods in Making Sure Your AI Gets the Job Done Good Enough", "5. Machine Intelligence and the Ethical Grammar of Computability", "6. Is There a Role for Computation in the Enactive Paradigm?", "7. Natural Recursion Doesn’t Work That Way: Automata in Planning and Syntax", "Part II. Information", "8. AI, Quantum Information, and External Semantic Realism: Searle’s Observer-Relativity and Chinese Room, Revisited", "9. Semantic Information and Artificial Intelligence", "10. Information, Computation, Cognition. Agency-Based Hierarchies of Levels", "11. From Simple Machines to Eureka in Four Not-So-Easy Steps: Towards Creative Visuospatial Intelligence", "Part III. Cognition and Reasoning", "12. Leibniz’s Art of Infallibility, Watson, and the Philosophy, Theory, and Future of AI", "13. The Computational Theory of Cognition", "14. Representational Development Need Not Be Explicable-By-Content", "15. Toward a Theory of Intelligent Complex Systems: From Symbolic AI to Embodied and Evolutionary AI", "16. The Anticipatory Brain: Two Approaches", "17. General Homeostasis, Passive Life, and the Challenge to Autonomy", "18. Ad Hoc Hypotheses and the Monsters Within", "19. Arguably Argumentative: A Formal Approach to the Argumentative Theory of Reason", "20. Explaining Everything", "21. Why Emotions Do Not Solve the Frame Problem", "22. HeX and the Single Anthill: Playing Games with Aunt Hillary", "23. Computer Models of Constitutive Social Practice", "Part IV. Embodied Cognition", "24. Artificial Intelligence: The Point of View of Developmental Robotics", "25. Tacit Representations and Artificial Intelligence: Hidden Lessons from an Embodied Perspective on Cognition", "26. Machine Art or Machine Artists?: Dennett, Danto, and the Expressive Stance", "27. Perception, Action and the Notion of Grounding", "28. The Seminal Speculation of a Precursor: Elements of Embodied Cognition and Situated AI in Alan Turing", "29. Heideggerian AI and the Being of Robots", "Part V. Ethics", "30. The Need for Moral Competency in Autonomous Agent Architectures", "31. Order Effects, Moral Cognition, and Intelligence", "32. Artificial Intelligence and Responsible Innovation", "33. Future Progress in Artificial Intelligence: A Survey of Expert Opinion", "Cover", "Table of Contents", "Body Matter"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/2016_Book_FundamentalIssuesOfArtificialI.epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "The AI Does Not Hate You: Superintelligence, Rationality and the Race to Save the World", "date_published": "2019-06-13T07:00:00+00:00", "chapter_names": ["Dedication", "Title Page", "Contents", "Introduction: ‘I don’t expect your children to die of old age’", "PART ONE: INTRODUCTIONS", "1: Introducing the Rationalists", "2: The cosmic endowment", "PART TWO: THE PAPERCLIP APOCALYPSE", "3: Introducing AI", "4: A history of AI", "5: When will it happen?", "6: Existential risk", "7: The cryptographic rocket probe, and why you have to get it right first time", "8: Paperclips and Mickey Mouse", "9: You can be intelligent, and still want to do stupid things", "10: If you want to achieve your goals, not dying is a good start", "11: If I stop caring about chess, that won’t help me win any chess games, now will it?", "12: The brief window of being human-level", "13: Getting better all the time", "14: ‘FOOOOOM’", "15: But can’t we just keep it in a box?", "16: Dreamed of in your philosophy", "17: ‘It’s like 100 per cent confident this is an ostrich’", "PART THREE: THE WAYS OF BAYES", "18: What is rationality?", "19: Bayes’ theorem and optimisation", "20: Utilitarianism: shut up and multiply", "PART FOUR: BIASES", "21: What is a ‘bias’?", "22: The availability heuristic", "23: The conjunction fallacy", "24: The planning fallacy", "25: Scope insensitivity", "26: Motivated scepticism, motivated stopping and motivated continuation", "27: A few others, and the most important one", "PART FIVE: RAISING THE SANITY WATERLINE", "28: Thinking probabilistically", "29: Making beliefs pay rent", "30: Noticing confusion", "31: The importance of saying ‘Oops’", "PART SIX: DECLINE AND DIASPORA", "32: The semi-death of LessWrong", "33: The IRL community", "PART SEVEN: DARK SIDES", "34: Are they a cult?", "35: You can’t psychoanalyse your way to the truth", "36: Feminism", "37: The Neoreactionaries", "PART EIGHT: DOING GOOD BETTER", "38: The Effective Altruists", "39: EA and AI", "PART NINE: THE BASE RATE OF THE APOCALYPSE", "40: What are they doing to stop the AI apocalypse?", "41: The internal double crux", "42: Life, the universe and everything", "Acknowledgements", "Notes", "Copyright"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/Tom Chivers - The AI Does Not Hate You_ Superintelligence, Rationality and the Race to Save the World.epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "A Citizen's Guide to Artificial Intelligence", "date_published": "n/a", "chapter_names": ["Title Page", "Copyright", "Table of Contents", "Preface", "Acknowledgments", "Prologue: What’s All the Fuss About?", "1. What Is Artificial Intelligence?", "2. Transparency", "3. Bias", "4. Responsibility and Liability", "5. Control", "6. Privacy", "7. Autonomy", "8. Algorithms in Government", "9. Employment", "10. Oversight and Regulation", "Epilogue", "About the Authors", "Index", "Cover", "Table of Contents", "Acknowledgments", "Start of Content", "Index", "Cover Page", "iii", "iv", "v", "vii", "viii", "ix", "xi", "xii", "xiii", "xiv", "xv", "xvi", "xvii", "xviii", "xix", "xx", "xxi", "xxii", "181", "182", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "183", "184", "185", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "186", "187", "188", "61", "62", "63", "64", "65", "66", "67", "68", "69", "70", "71", "72", "73", "74", "75", "76", "77", "78", "189", "79", "80", "81", "82", "83", "84", "85", "86", "87", "88", "89", "90", "91", "190", "191", "93", "94", "95", "96", "97", "98", "99", "100", "101", "102", "103", "104", "105", "106", "192", "193", "107", "108", "109", "110", "111", "112", "113", "114", "115", "116", "117", "118", "119", "120", "121", "122", "123", "124", "125", "126", "194", "195", "127", "128", "129", "130", "131", "132", "133", "134", "135", "136", "137", "138", "139", "140", "141", "142", "143", "144", "145", "146", "147", "196", "197", "149", "150", "151", "152", "153", "154", "155", "156", "157", "198", "199", "159", "160", "161", "162", "163", "164", "165", "166", "167", "168", "169", "170", "171", "172", "173", "174", "200", "201", "175", "176", "177", "179", "203", "204", "205", "207"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/John Zerilli - A Citizen's Guide to Artificial Intelligence (MIT Press) - libgen.li.epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "The Alignment Problem", "date_published": "2020-08-27", "chapter_names": ["Cover", "Title", "Contents", "Prologue", "Introduction", "I. Prophecy", "1. Representation", "2. Fairness", "3. Transparency", "II. Agency", "4. Reinforcement", "5. Shaping", "6. Curiosity", "III. Normativity", "7. Imitation", "8. Inference", "9. Uncertainty", "Conclusion", "Acknowledgments", "Notes", "Bibliography", "Index", "Also by Brian Christian", "About the Author", "Copyright", "Cover", "Title", "Contents", "v", "vii", "viii", "ix", "x", "xi", "xii", "xiii", "xiv", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", "28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", "39", "40", "41", "42", "43", "44", "45", "46", "47", "48", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "59", "60", "61", "62", "63", "64", "65", "66", "67", "68", "69", "70", "71", "72", "73", "74", "75", "76", "77", "78", "79", "80", "81", "82", "83", "84", "85", "86", "87", "88", "89", "90", "91", "92", "93", "94", "95", "96", "97", "98", "99", "100", "101", "102", "103", "104", "105", "106", "107", "108", "109", "110", "111", "112", "113", "114", "115", "116", "117", "118", "119", "120", "121", "122", "123", "124", "125", "126", "127", "128", "129", "130", "131", "132", "133", "134", "135", "136", "137", "138", "139", "140", "141", "142", "143", "144", "145", "146", "147", "148", "149", "150", "151", "152", "153", "154", "155", "156", "157", "158", "159", "160", "161", "162", "163", "164", "165", "166", "167", "168", "169", "170", "171", "172", "173", "174", "175", "176", "177", "178", "179", "180", "181", "182", "183", "184", "185", "186", "187", "188", "189", "190", "191", "192", "193", "194", "195", "196", "197", "198", "199", "200", "201", "202", "203", "204", "205", "206", "207", "208", "209", "210", "211", "212", "213", "214", "215", "216", "217", "218", "219", "220", "221", "222", "223", "224", "225", "226", "227", "228", "229", "230", "231", "232", "233", "234", "235", "236", "237", "238", "239", "240", "241", "242", "243", "244", "245", "246", "247", "248", "249", "250", "251", "252", "253", "254", "255", "256", "257", "258", "259", "260", "261", "262", "263", "264", "265", "266", "267", "268", "269", "270", "271", "272", "273", "274", "275", "276", "277", "278", "279", "280", "281", "282", "283", "284", "285", "286", "287", "288", "289", "290", "291", "292", "293", "294", "295", "296", "297", "298", "299", "300", "301", "302", "303", "304", "305", "306", "307", "308", "309", "310", "311", "312", "313", "314", "315", "316", "317", "318", "319", "320", "321", "322", "323", "324", "325", "326", "327", "328", "329", "330", "331", "332", "333", "334", "335", "336", "337", "338", "339", "340", "341", "342", "343", "344", "345", "346", "347", "348", "349", "350", "351", "352", "353", "354", "355", "356", "357", "358", "359", "360", "361", "362", "363", "364", "365", "366", "367", "368", "369", "370", "371", "372", "373", "374", "375", "376", "377", "378", "379", "380", "381", "382", "383", "384", "385", "386", "387", "388", "389", "390", "391", "392", "393", "394", "395", "396", "397", "398", "399", "400", "401", "402", "403", "404", "405", "406", "407", "408", "409", "410", "411", "412", "413", "414", "415", "416", "417", "418", "419", "420", "421", "422", "423", "424", "425", "426", "427", "428", "429", "430", "431", "432", 
"433", "434", "435", "436", "437", "438", "439", "440", "441", "442", "443", "444", "445", "446", "447", "448", "449", "450", "451", "452", "453", "454", "455", "456", "457", "458", "459", "460", "461", "462", "463", "464", "465", "466", "467", "468", "469", "470", "471", "472", "473", "474", "475", "476", "i", "ii", "iii", "iv", "477", "478", "479", "480", "481", "482", "vi"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/Brian Christian - The Alignment Problem (2020, W. W. Norton & Company) - libgen.li.epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "End Times: A Brief Guide to the End of the World", "date_published": "2019-08-27T05:00:00+00:00", "chapter_names": ["COVER", "TITLE PAGE", "COPYRIGHT", "DEDICATION", "EPIGRAPH", "INTRODUCTION", "1. ASTEROID: The Universe Is Trying to Kill Us", "2. VOLCANO: A Decade Without a Summer", "3. NUCLEAR: The Final Curtain on Mankind", "4: CLIMATE CHANGE: What Do We Owe the Future?", "5. DISEASE: Twenty-First-Century Plague", "6. BIOTECHNOLOGY: Engineering a Killer", "7: ARTIFICIAL INTELLIGENCE: Summoning the Demon", "8. ALIENS: Where Is Everybody?", "9. SURVIVAL: The Day After", "10. THE END: Why We Fight", "ACKNOWLEDGMENTS", "DISCOVER MORE", "NOTES"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/Bryan Walsh - End Times_ A Brief Guide to the End of the World (2019, Hachette Books) - libgen.lc.epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "The Technological Singularity", "date_published": "n/a", "chapter_names": ["Cover", "Frontmatter", "1. Introduction to the Technological Singularity", "1. Risks of, and Responses to, the Journey to the Singularity", "2. Risks of the Journey to the Singularity", "3. Responses to the Journey to the Singularity", "2. Managing the Singularity Journey", "4. How Change Agencies Can Affect Our Path Towards a Singularity", "5. Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda", "6. Risk Analysis and Risk Management for the Artificial Superintelligence Research and Development Process", "7. Diminishing Returns and Recursive Self Improving Artificial Intelligence", "8. Energy, Complexity, and the Singularity", "9. Computer Simulations as a Technological Singularity in the Empirical Sciences", "10. Can the Singularity Be Patented? (And Other IP Conundrums for Converging Technologies)", "11. The Emotional Nature of Post-Cognitive Singularities", "12. A Psychoanalytic Approach to the Singularity: Why We Cannot Do Without Auxiliary Constructions", "3. Reflections on the Journey", "13. Reflections on the Singularity Journey", "14. Singularity Blog Insights", "Backmatter"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/2017_Book_TheTechnologicalSingularity.epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "The Technological Singularity", "date_published": "2015", "chapter_names": ["Half Title", "Series List", "Title", "Copyright", "Epigraph", "Table of Contents", "Series Foreword", "Preface", "Introduction", "1 Routes to Artificial Intelligence", "2 Whole Brain Emulation", "3 Engineering AI", "4 Superintelligence", "5 AI and Consciousness", "6 The Impact of AI", "7 Heaven or Hell", "Glossary", "Notes", "Further Reading", "Index"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/[The MIT Press Essential Knowledge series] Murray Shanahan - The Technological Singularity (2015, The MIT Press) - libgen.lc.epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
+ {"source": "ebook", "source_filetype": "epub", "converted_with": "pandoc", "title": "out-output", "date_published": "n/a", "chapter_names": ["Start"], "text": "n/a", "url": "n/a", "file_name": "/Users/janhendrikkirchner/code/2022/10/alignment-research-dataset/align_data/common/../../data/raw/books_text/Stuart Russell, Peter Norvig - Artificial Intelligence_ A Modern.epub", "id": "274b68192b056e268f128ff63bfcd4a4"}
generative.ink.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
gwern_blog.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
intelligence.org.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
jsteinhardt.wordpress.com.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
lesswrong.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad4917da6ff3a8031a909dc1c42a6e3901fd8ec06588c1b5eeb8203a829afee6
+ size 128323993
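lesswrong.jsonl is stored via Git LFS, so the diff above shows only the three-line pointer file (spec version, SHA-256 object id, and size in bytes, roughly 128 MB) rather than the data itself. A minimal sketch of parsing such a pointer in Python, assuming the pointer file is what is currently checked out (running git lfs pull would replace it with the actual data):

```python
def parse_lfs_pointer(path):
    """Parse a Git LFS pointer file into a dict of its space-separated key/value fields."""
    fields = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            if key:
                fields[key] = value
    return fields

# Expect "version", "oid" (e.g. "sha256:..."), and "size" keys for a valid pointer.
pointer = parse_lfs_pointer("lesswrong.jsonl")
print(pointer["oid"], int(pointer["size"]))
```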
markdown.ebooks.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
nonarxiv_papers.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
qualiacomputing.com.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
reports.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
stampy.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
vkrakovna.wordpress.com.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
waitbutwhy.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
www.yudkowsky.net.jsonl ADDED
The diff for this file is too large to render. See raw diff