{"text": "Near-term motivation for AI alignment\n\nAI alignment work is usually considered “longtermist”, which is about preserving humanity’s long-term potential. This was the primary motivation for this work when the alignment field got started around 20 years ago, and general AI seemed far away or impossible to most people in AI. However, given the current rate of progress towards advanced AI capabilities, there is an increasingly relevant near-term motivation to think about alignment, even if you mostly or only care about people alive today. This is most of my personal motivation for working on alignment.\nI would not be surprised if general AI is reached in the next few decades, similarly to the latest AI expert survey‘s median of 2059 for human-level AI (as estimated by authors at top ML conferences) and the Metaculus median of 2039. The Precipice gives a 10% probability of human extinction this century due to AI, i.e. within the lifetime of children alive today (and I would expect most of this probability to be concentrated in the next few decades, i.e. within our lifetimes). I used to refer to AI alignment work as “long-term AI safety” but this term seems misleading now, since alignment would be more accurately described as “medium-term safety”. \nWhile AI alignment has historically been associated with longtermism, there is a downside of referring to longtermist arguments for alignment concerns. Sometimes people seem to conclude that they don’t need to worry about alignment if they don’t care much about the long-term future. For example, one commonly cited argument for trying to reduce existential risk from AI is that “even if it’s unlikely and far away, it’s so important that we should worry about it anyway”. People understandably interpret this as Pascal’s mugging and bounce off. This kind of argument for alignment concerns is not very relevant these days, because existential risk from AI is not that unlikely (10% this century is actually a lot, and may be a conservative estimate) and general AI is not that far away (an average of 36 years in the AI expert survey). \nSimilarly, when considering specific paths to catastrophic risk from AI, a typical longtermist scenario involves an advanced AI system inventing molecular nanotechnology, which understandably sounds implausible to most people. I think a more likely path to catastrophic risk would involve general AI precipitating other catastrophic risks like pandemics (e.g. by doing biotechnology research) or taking over the global economy. If you’d like to learn about the most pertinent arguments for alignment concerns and plausible paths for AI to gain an advantage over humanity, check out Holden Karnofsky’s Most Important Century blog post series. \nIn terms of my own motivation, honestly I don’t care that much about whether humanity gets to colonize the stars, reducing astronomical waste, or large numbers of future people existing. These outcomes would be very cool but optional in my view. Of course I would like humanity to have a good long-term future, but I mostly care about people alive today. My main motivation for working on alignment is that I would like my loved ones and everyone else on the planet to have a future. \nSometimes people worry about a tradeoff between alignment concerns and other aspects of AI safety, such as ethics and fairness, but I still think this tradeoff is pretty weak. There are also many common interests between alignment and ethics that would be great for these communities to coordinate on. 
This includes developing industry-wide safety standards and AI governance mechanisms, setting up model evaluations for safety, and slow and cautious deployment of advanced AI systems. Ultimately all these safety problems need to be solved to ensure that general AI systems have a positive impact on the world. I think the distribution of effort between AI capabilities and safety will need to shift more towards safety as more advanced AI systems are developed. \nIn conclusion, you don’t have to be a longtermist to care about AI alignment. I think the possible impacts on people alive today are significant enough to think about this problem, and the next decade is going to be a critical time for steering advanced AI technology towards safety. If you’d like to contribute to alignment research, here is a list of research agendas in this space and a good course to get up to speed on the fundamentals of AI alignment (more resources here). ", "url": "https://vkrakovna.wordpress.com/2023/03/09/near-term-motivation-for-ai-alignment/", "title": "Near-term motivation for AI alignment", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2023-03-09T13:09:33+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=1", "authors": ["Victoria Krakovna"], "id": "3f8df0c775e1da99bc10407bf9965a15", "summary": []} {"text": "2022-23 New Year review\n\nThis is an annual post reviewing the last year and setting goals for next year. Overall, this was a reasonably good year with some challenges (the invasion of Ukraine and being sick a lot). Some highlights in this review are improving digital habits, reviewing sleep data from the Oura ring since 2019 and calibration of predictions since 2014, an updated set of Lights habits, the unreasonable effectiveness of nasal spray against colds, and of course baby pictures. \n2022 review\nLife updates\nI am very grateful that my immediate family is in the West, and my relatives both in Ukraine and Russia managed to stay safe and avoid being drawn into the war on either side. In retrospect, it was probably good that my dad died in late 2021 and not a few months later when Kyiv was under attack, so we didn’t have to figure out how to get a bedridden cancer patient out of a war zone. It was quite surreal that the city that I had visited just a few months back was now under fire, and the people I had met there were now in danger. The whole thing was pretty disorienting and made it hard to focus on work for a while. I eventually mostly stopped checking the news and got back to normal life with some background guilt about not keeping up with what’s going on in the homeland.\nAI alignment\nMy work focused on threat models and inner alignment this year: \n\nMade an overview talk on Paradigms of AI alignment: components and enablers and gave the talk in a few places. \nCoauthored Goal Misgeneralization: why correct rewards aren’t enough for correct goals paper and the associated DeepMind blog post\nDid a survey of DeepMind alignment team opinions on AGI ruin arguments, which received a lot of interest on the alignment forum. \nWrote a post on Refining the Sharp Left Turn threat model\nContributed to DeepMind alignment posts on Clarifying AI x-risk and Threat model literature review\nCoauthored a prize-winning submission to the Eliciting Latent Knowledge contest: Route understanding through the human ontology.\n\n\nHealth\nPhysical health. 
I’ve been sick a lot this year – 6 colds and one bronchitis since Daniel started nursery in June, plus one cold earlier in the year. Had covid in April, thankfully a mild case with no obvious long-term effects. I also had two short bouts of covid-like symptoms (fever, muscle aches and fatigue) in May and October that lasted about 2 days each. I tested negative for covid both times, and recovered too quickly for flu, so I’m pretty confused about what this was – maybe a bizarre form of long covid? \nBeing frequently sick was pretty depressing and demotivating, and I put some effort into decreasing the rate of catching colds from Daniel. I tried improving hand hygiene and not sharing food with Daniel, which had a lot of overhead and didn’t seem to do much. I also experimented with various supplements, starting with vitamin C and zinc, which didn’t seem to help much, and then added beta glucans and broncho-vaxom, which possibly helped but I’m not sure. The only thing that seemed clearly effective was a nasal spray called “dual defense”, which seemed to make any symptoms go away whenever I applied it. This made the last (probably) cold I had mild enough to be barely perceptible (not included in the number of colds above). \nSleep. Similarly to last year, I consistently slept for 7 hours at night on average, with a standard deviation of 1 hour. The rate of insomnia was 10% of nights, better than last year (20% of nights). I was awake for an average of 0.6 hours (around 36 minutes) each night. (As usual, all the sleep metrics are excluding jetlag.)\n\nI have now added some Oura ring data to my life tracking database as well. The ring provides a score for my sleep each night and “readiness” for the day. These scores are on a scale from 0 to 100, where presumably a score of 100 means you’re completely refreshed and ready to move mountains and 0 means you’re about to drop dead (on both of these dimensions, the highest score I’ve ever had was around 90 and the lowest was around 30). These scores take into account the amount of sleep, frequency of waking up, heart rate and body temperature at night, and activity levels. The ring usually detects when I’m sick, assigns a low readiness score and suggests taking a rest day. I didn’t wear the ring during the day between March 2020 and October 2021, which resulted in much lower activity scores, but I’m not sure how this impacted the sleep and readiness scores.\nOne interesting thing I noticed is that while the amount of sleep per night has stayed level at 7 hours in the past few years, my sleep score has been trending upward. I switched to the Generation 3 Oura ring in Jan 2021, which is supposed to measure sleep quality more accurately, so this could also be an artifact of the change in measurement rather than an actual improvement in sleep.\n\nThe readiness score shows no upward trend – it’s averaged around 70 the whole time.\n\nMental health. Better than last year, but not as much better as I hoped. There was a definite improvement in the first half of the year. In January, I shifted my meditation practice to self-love meditation, which was helpful for a while but seems to be wearing off (maybe I need to find some new recordings on Insight Timer…). \nThere were 6 episodes of particularly bad mental states, all in the second half of the year. Being sick a lot in the second half of the year was a major factor – I often found myself judging my body as weak, being angry at my immune system, or judging myself for not protecting myself enough when Daniel was sick. 
I think the self-judgment also led to a hopeless mindset where I felt like I tried everything feasible to avoid getting sick when I actually had not, e.g. I later tried the nasal spray and it seemed to help a lot.\nOne improvement in mental health this year was a decreasing rate of night terrors (waking up startled soon after falling asleep) – I had 13 recorded this year, and 37 recorded the previous year. This might have something to do with Daniel getting older and me having less subconscious worry about him falling or getting trapped under the blanket or whatever. However, I developed a new anxiety symptom after he started walking and bumping into things and making lots of mess – I often noticed myself holding my breath when taking care of him. I try to get back to normal breathing when I notice it, but it tends to come back when I’m not paying attention. It’s been a bit better lately, but still not a solved problem.\nParenting\nBreastfeeding. I continued to breastfeed Daniel this year, with decreasing frequency as he asks for it less often. I think at this point there is no more milk, and he is just looking for comfort when he asks for a feed. I never really figured out a plan for how to stop breastfeeding, and I’m still not sure what the endgame for this looks like.\nPotty training. We transitioned Daniel out of diapers using the “oh crap” method over a long weekend in May, which went pretty well. He is good at using the potty when prompted, but it took him a while to learn to ask to go to the potty – he’s getting better at this now but we still need to prompt him a lot. He usually has a few accidents a week, which seems ok. These days he doesn’t wear a diaper during the day except for naps and long trips.\nSleep. Daniel usually sleeps from around 9-10pm until around 6-7am, with an average of 0.3 wakeups per night (excluding jetlag). He had a sleep regression in November (which seems to be common around 2 years of age), so he started waking up more and being more difficult to put back to sleep. It’s interesting to compare the data on wakeups and night feeds (12-6am) – I often managed to put him back to sleep without breastfeeding during most of the year but it didn’t work anymore during the regression. \n\nChildcare. Daniel started full-time nursery in June (it’s open until 6pm, which works great with our 10-6 work hours). He also spends Sunday afternoons and evenings with his nanny (who used to care for him full time after nursery), which gives us some time together to do our check-in with each other, go climbing or relax in the sauna, though often a bunch of this time block gets eaten by logistics. \nTaking turns. Janos and I alternate taking care of Daniel in the mornings, since neither of us is a morning person (though we’ve shifted towards an earlier sleep schedule since having a kid). Starting in June, we also introduced a schedule where each of us gets one evening at the office per week while the other one takes care of Daniel. These arrangements were quite helpful for giving me more sleep, productive time and alone time, and setting up regular time blocks for Janos to be alone with Daniel. \nLanguages. Daniel is pretty talkative in English and Russian (still working on the Hungarian though). He knows to address me and my relatives in Russian and other people in English. He is starting to say long words and short sentences, and recently got into the habit of reciting his favorite songs and stories from memory in both languages. 
It’s not always clear which language he is speaking, which is a bit confusing. \nEffectiveness\nLights. I continued using the Lights spreadsheet for tracking daily habits that I started in 2021. I’ve stopped tracking a few habits and started some new ones, but overall the set of habits mostly stayed the same – here are the habits that I kept from last year:\n\nLife tracking\nMake a list of intentions (in the todo notebook)\nAsk myself what I want today\nMeditation\nExercise (changed to “today or yesterday”)\nLeg & shoulder stretches\nAt least 2 hours of deep work (if working)\nBraindump / journaling (at least 1 sentence)\nReading \nAppreciate a thing I did today\nExchange appreciation with Janos\nGo to bed by 11:30pm\n\nHabits from last year that I dropped:\n\nEmotional release practice (mostly superseded by self-love meditation)\nAvoid processed sugar (doing this anyway, don’t need to track)\nUse work cycles (doing this anyway in the form of time tracking)\nCheck internal dashboard (didn’t resonate / wasn’t useful)\nGo outside (more automatic post-pandemic)\nNotice when I am picking my nose (didn’t work well)\n\nNew habits I added this year:\n\nFill out lights (makes it easier to see which days were filled out retroactively, was intended to motivate me to fill out lights every day but that didn’t work)\nNegative visualization on making mistakes (helps with self-judgment)\nPractice effective rest (breaks during the day where I pay attention to what I want)\nTake supplements to avoid / mitigate colds\nUse eye drops (to address dryness from using contact lenses)\n\nI did about 70% of the lights in an average week. The most difficult lights were deep work, going to bed and reading. The main failure mode with Lights was not filling them in on some days (usually weekends), which resulted in doing fewer of the habits on those days. I have a solid habit of filling out the lights at the office, but I need to have a more reliable time block to do this on weekends (probably after lunch during Daniel’s nap). \nTime tracking. In June, I switched from using work cycles to doing time tracking during work hours. I realized that I wasn’t doing much of the built-in reflection in work cycles, and was mostly using them as a less systematic time tracking setup. The time tracking shows that in an average work week since June, I spent 27 hours on work activities: 9 hours in (non-research) meetings, 7 hours on research, 4 hours on reading, 3 hours on comms (giving feedback on docs, giving talks, etc), 2 hours on planning and 2 hours on admin. I also spent 10 hours on non-work activities: 6 hours on self-care (exercise classes, therapy, meditation, naps), 1 hour on parenting, and 3 hours on random stuff. \nThe easiest way to improve on this is to increase work hours – I can add another office night (every 2 weeks), and experiment with going in to work early on mornings when Janos takes care of Daniel. I also hope to spend less time being sick next year, with the more effective supplements and Daniel hopefully bringing home fewer germs after the first half-year at nursery. \nDeep work. I did 363 hours of deep work (1.7 per work day), compared to 311 hours of deep work in 2021 (1.78 per work day). This was more than last year (and resulted in a lot more output), but still short of the pre-parenthood baseline.\n\nThis was the first full year (with no leave) since 2019. 
The number of workdays was a bit lower than the expected 225 workdays in a normal year (260 weekdays minus 10 holidays and 25 vacation days), which was mostly due to sick days for myself or Daniel. The rate of deep work per work day was lower than in 2020-2021, mostly due to going to conferences again (which are usually work days with no deep work). \n\n\nFor the purpose of this summary, work days include weekends where I did at least 2 hours of deep work. There were 14 weekend workdays in 2019 and around 3 in each subsequent year (unsurprisingly, having a kid decreased my ability to work on weekends). \nDigital habits. In the spring, I read the Digital Minimalism book and felt inspired to set up better systems for intentional use of technology. The book recommends doing a digital declutter, where you stop all non-essential technology use for a month and then add some of it back in a limited capacity. This seemed a bit extreme and I couldn’t get myself to do it, but I made a list of necessary and optional technologies and how I would like to use them (which was useful in itself). I implemented various measures to cut down unnecessary technology use, which were generally effective:\n\nUsing grayscale on my phone by default to make it less visually stimulating. After I got used to grayscale, the regular full-color mode started to seem too bright and overwhelming, so I don’t really want to use it unless I’m looking at photos or watching videos with my kid.\nUsing an app (Actuflow) that asks me to enter my intention whenever I unlock the phone. Together with grayscale, this reduced the number of unlocks from 70-100 per day to 20-40 per day. This includes using the phone to watch videos with Daniel, so the number of phone unlocks just for myself is lower than that.\nI got into more of a habit of going to places without my phone sometimes, e.g. going for walks, getting lunch, or working in the library. I got a wristwatch to be able to check the time without looking at my phone, which made going without it a lot easier. \nI muted all channels on work slack except those specific to my team and projects. \nI installed News Feed Eradicator for Facebook and Twitter on my computer browser (which are the only social media I use). I still check the news feeds on my phone sometimes, but not that often since it’s less convenient in grayscale and in the phone browser (I don’t have the apps installed). It would be great to restrict the Facebook feed only to life updates like someone finishing their thesis or having a baby, but sadly this option doesn’t seem to exist (probably by design). \nI also wanted to use the Inbox When Ready extension that hides the inbox by default, but it is unfortunately not allowed on my work devices, so I compromised by defaulting to the (usually empty) Starred view of my inbox. \n\nTravel\nIn March, we went to the Bahamas for an AI safety workshop. To enable both of us to attend the workshop, we imported Janos’s dad and his partner from Toronto to hang out with Daniel on the beach in the meantime. \n\nWe spent two weeks in August in a cottage on Manitoulin Island with family. Daniel was eager to hike with the big hiking stick (or better yet, two of them). He was less into swimming in the lake than last year, except when jumping off a paddleboard. He was also really scared of the resident squirrel at the cottage for some reason, and ran inside whenever he heard it chirping. 
\n\nIn September, we caught the tail end of the hiking season in the Dolomites – it was chilly but pleasantly uncrowded. Daniel was a trooper as usual and learned a lot of new Russian words hanging out with his grandma. We were hoping to do some via ferrata climbing but the weather was too wet for that.\n\nWe went to Toronto for the winter holidays and got a couple of days of skiing before all the snow melted. Daniel took some time to get used to the cold weather, but he really enjoyed throwing snowballs.\n\nFun stuff\n\nI read 10 books this year: Three Body Problem, The Gifts of Imperfection, How to Talk so (Little) Kids will Listen (and Listen so Kids will Talk), Oh Crap Potty Training, Hunt Gather Parent, Bacteria to Bach and Back, Messy, Digital Minimalism, and Decisive.\nJanos and I took Daniel to the “In with the spiders” exhibit at the London zoo, where you can see big (and harmless) spiders up close. Daniel loves spiders so this was a treat for him.\nWe did some aerial silks in the park, for the first time since having Daniel (I still remembered a few tricks). It took a surprisingly long time to find a good tree to rig the silks on (with a big sturdy branch at the right height). \nWe visited the Bay for the first time since before the pandemic, leaving Daniel in Toronto with his grandma to get a break and spare him the jetlag.\nWe had a photoshoot with a professional photographer – turns out, Daniel has a very photogenic smile :). \n\n\n2022 prediction outcomes\nResolutions\n\nAuthor or coauthor 4 or more AI safety writeups (70%) – yes (8 writeups)\nMeditate on at least 230 days (70%) – yes (272 days)\nAt least 450 deep work hours (70%) – no (363 hours)\nDo 3 consecutive chinups (60%) – no (2 chinups)\nAvoid processed sugar at least 6 months of the year (60%) – yes (the whole year except a few days)\n\nPredictions\n\nI will not catch covid this year (60%) – no (got it once)\nI will write at least 3 blog posts (2 last year) (60%) – yes (3 posts)\nI will read at least 5 books (70%) – yes (10 books)\nDaniel will be potty-trained by the end of the year (out of diapers when awake) (70%) – yes (since May)\n\nCalibration \nThis year was pretty good:\n\n60%: 2/4 correct\n70%: 4/5 correct\n\nCalibration over all annual reviews: \n\nOverall my predictions tend to be overconfident (the green line is below the blue line, which represents perfect calibration).\nI was overconfident in 2014-16, underconfident in 2017-19 (probably to compensate), and went to being overconfident in 2020-22.\n\n\n2023 goals and predictions \nGoals\n\nMeditate on at least 250 days (272 last year) (80%)\nAt least 400 deep work hours (363 last year) (60%)\nWrite at least 4 blog posts (3 last year) (70%)\n\nPredictions\n\nI will avoid processed sugar for at least 10 months of the year (80%)\nI will read at least 7 books (80%)\nI will catch at most 4 colds (60%)\nDaniel will be potty-trained for the night by August (70%)\n\nPast new year reviews: 2021-22, 2020-21, 2019-20, 2018-19, 2017-18, 2016-17, 2015-16, 2014-15.", "url": "https://vkrakovna.wordpress.com/2023/01/06/2022-23-new-year-review/", "title": "2022-23 New Year review", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2023-01-06T17:58:07+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=1", "authors": ["Victoria Krakovna"], "id": "2ba0df3e7241b2456e73540fdcdbf63a", "summary": []} {"text": "Refining the Sharp Left Turn threat model\n\n(Coauthored with others on the alignment team and cross-posted from the alignment 
forum: part 1, part 2)\nA sharp left turn (SLT) is a possible rapid increase in AI system capabilities (such as planning and world modeling) that could result in alignment methods no longer working. This post aims to make the sharp left turn scenario more concrete. We will discuss our understanding of the claims made in this threat model, propose some mechanisms for how a sharp left turn could occur, and consider how alignment techniques could manage a sharp left turn or fail to do so. \n\nImage credit: Adobe\nClaims of the threat model\nWhat are the main claims of the “sharp left turn” threat model?\nClaim 1. Capabilities will generalize far (i.e., to many domains)\nThere is an AI system that:\n\nPerforms well: it can accomplish impressive feats, or achieve high scores on valuable metrics.\nGeneralizes, i.e., performs well in new domains, which were not optimized for during training, with no domain-specific tuning.\n\nGeneralization is a key component of this threat model because we’re not going to directly train an AI system for the task of disempowering humanity, so for the system to be good at this task, the capabilities it develops during training need to be more broadly applicable. \nSome optional sub-claims can be made that increase the risk level of the threat model:\nClaim 1a [Optional]: Capabilities (in different “domains”) will all generalize at the same time\nClaim 1b [Optional]: Capabilities will generalize far in a discrete phase transition (rather than continuously) \nClaim 2. Alignment techniques that worked previously will fail during this transition\n\nQualitatively different alignment techniques are needed. The ways the techniques work apply to earlier versions of the AI technology, but not to the new version because the new version gets its capability through something new, or jumps to a qualitatively higher capability level (even if through “scaling” the same mechanisms).\n\nClaim 3: Humans can’t intervene to prevent or align this transition \n\nPath 1: humans don’t notice because it’s too fast (or they aren’t paying attention)\nPath 2: humans notice but are unable to make alignment progress in time\nSome combination of these paths, as long as the end result is insufficiently correct alignment\n\n\nArguments for the claims in this threat model\n\nClaim 1: There is a “core” of general intelligence – a most effective way of updating beliefs and selecting actions (Ruin #22). Speculation about what the core is: consequentialism / EU maximization / “doing things for reasons”. \nClaim 1a: Capability gains from intelligence are highly correlated (Ruin #15)\nClaim 2: There is no simple core for alignment (Ruin #22), Corrigibility is anti-natural (Ruin #23)\nClaims 1 & 2: arguments in Will capabilities generalize more? (+ Ruin #21)\nClaim 3: We can’t coordinate to avoid AGI (Ruin #4)\n\nMechanisms for a sharp left turn\nMechanisms for capabilities generalizing far (Claim 1)\nCapabilities will generalize far if learning / reasoning / cognitive work is done outside of the optimization process, similarly to how human cultural evolution happens outside genetic evolution and proceeds faster. 
Here are some mechanisms for cognitive work getting done outside the optimization process:\n\nMechanisms proposed by Paul Christiano: performing search internally, natural selection occurring internally, reasoning about how to think better, deploying subagents\nThe AI develops a way to outpace SGD as the way to learn / improve, similar to how humans outpace evolution by cultural transmission + within-lifetime learning. E.g., it writes Google docs to itself, or searches over prompts (or whatever is the prompt analog at that point). \nThe optimization process finds a general meta-learning algorithm, which replaces the previous solution of domain-specific heuristics and greatly increases the rate of learning\nSequential reasoning: sharp increase in effective sequential memory/cognitive time between the “aligned” regime and the sharp left turn regime. Ways in which this can happen:\n\nThe model is just given access to memory. Its knowledge grows over time, and reaches a critical point which causes a sharp left turn. Alignment schemes before the critical point appear to work, but not after.\nWe try to restrict the model’s memory, but fail. E.g. the model’s outputs are used in the world, and the subsequent inputs carry information about previous outputs.\nSome input causes the model to access a new faster mode of thinking (e.g. an explanation of how to do step-by-step reasoning in some domain). The same compute that on previous inputs powered e.g. one subjective day of thinking on this new input now powers one subjective week of thinking. This allows the model to evade alignment schemes.\n\n\n\nMechanisms for a rapid phase transition (Claim 1b)\nA rapid phase transition happens if there is a capability overhang: the AI system is improving at various skills continuously, but its improvement in many domains is bottlenecked on one specific skill, and at some point it receives some input that makes its existing capabilities much more effective. Here are some ways this can happen: \n\nThe system acquires situational awareness, and now its world model includes understanding of the system’s place in the world and how its training and deployment works. It can plan using this knowledge, which leads to more effective use of existing capabilities. \nAnalogy to few-shot prompting: the capabilities are already present in the trained artifact. Any alignment technique that goes through gradient updates becomes irrelevant. Putting the artifact into the “right” situation (e.g., giving it a few-shot prompt) reveals its capabilities relevant to this situation. Mechanism: the relevant knowledge and capabilities are installed by some generic pretraining optimisation process.\nDiscovering a more effective way to make use of low quality data leads to more effective use of existing capabilities. \n\nPlan for managing a sharp left turn and how it could fail\nNow we will outline our current understanding of the most promising plan for getting through an SLT and how it could fail (conditional on an SLT occurring).\nPlan: we use alignment techniques to find a goal-aligned model before SLT occurs, and the model preserves its goals during the SLT. \nWe can try to learn a goal-aligned model before SLT occurs: a model that has beneficial goals and is able to reason about its own goals. This requires the model to have two properties: goal-directedness towards beneficial goals, and situational awareness (which enables the model to reason about its goals). 
Here we use the term “goal-directedness” in a weak sense (that includes humans and allows incoherent preferences) rather than a strong sense (that implies expected utility maximization). \nOne can argue that the goal-aligned model has an incentive to preserve its goals, which would result in an aligned model after SLT. Since preserving alignment during SLT is largely outsourced to the model itself, arguments for alignment techniques failing during an SLT don’t imply that the plan fails (e.g. it might be fine if interpretability or ELK techniques no longer work reliably during the transition if we can trust the model to manage the transition). \nStep 1: Finding a goal-aligned model before SLT\nWe want to ensure that the model is goal-oriented with a beneficial goal and has situational awareness before SLT. It’s important that the model acquires situational awareness at the right time: after it acquires beneficial goals. If situational awareness arises when the model has undesirable goals, this leads to deceptive alignment. \nThus, our model search process would follow a decision tree along these lines (a toy sketch of this loop is included at the end of this post):\n\nIf situational awareness is detected without goal-directedness, restart the search. \nIf undesirable goal-directedness or early signs of deceptive alignment are detected, restart the search. \nIf an upcoming phase transition in capabilities is detected, and the model is not goal-aligned, restart the search. \nIf beneficial goal-directedness is detected without situational awareness, train the model for situational awareness. \n\nHere are some ways alignment techniques could be applied in this plan: \n\nUse ELK techniques to identify whether the model has situational awareness (i.e. whether it believes it’s inside a training process) and what goals it believes itself to have.\nUse interpretability techniques to pick up on early signs of deceptive alignment and restart and improve the search. (If we develop more advanced interpretability techniques that make it possible to retarget the internal search of the model, we can retarget a goal-directed model to a beneficial goal instead of restarting the search.)\nUse foundational progress on abstraction to determine whether the goals learned by the system are beneficial. \nUse foundational research insights (e.g. selection theorems) to make the search process more effective at avoiding models with undesirable goals and finding goal-aligned models. \nUse insights from studying capability transitions to predict sharp increases in problematic capabilities, and make sure we only cross that threshold after we have found a goal-aligned model using the other techniques on this list.\n\nWhy alignment techniques could fail to find a goal-aligned model before SLT occurs:\n\nThe space of deceptively aligned models is much bigger than the space of goal-aligned models, so goal-aligned models are very hard to find.\nThe inductive biases of the search favor misaligned models (see How likely is deceptive alignment? and Will Capabilities Generalise More? for arguments). Note that these arguments indicate that the alignment techniques have to fight an uphill battle to find a goal-aligned model, rather than providing a mechanism for why they would fail. \nPredicting capability transitions does not tell us when a transition will happen or whether the model undergoing it would be aligned. 
If we use these predictions to restart the search, we could rule out goal-aligned models as well.\nInterpretability is very hard: current methods are low-level, and it’s unclear how to get them to work at a higher level (e.g. identifying complex goals and beliefs of a system rather than how it stores basic facts about the world). \n\nIt’s also possible that this model search process would impose too high an alignment tax to be followed in practice (see Claim 3). \nStep 2: The goal-aligned model preserves its goals during SLT (with some help from us)\nThis step of the plan relies on the goal-aligned model both wanting to preserve its goals during an SLT and being capable of preserving its goals. The general argument for a model wanting to preserve its own goals is that it has a convergent instrumental incentive to ensure that future versions of itself act in ways that satisfy its current preferences. \nOne argument for a pre-SLT system being capable of preserving its goals is by analogy to human values being preserved when humans went through the capability phase transition of developing language and writing. However, it is not clear how much the human analogy constitutes evidence for the ability to preserve goals through a phase transition, since there are other possible explanations for human values being preserved (e.g. that values like “relationships” and “social status” continued to be selected for during the capability transition) that don’t rely on humans being able to deliberately preserve their values.\nA key question is whether the model could reflect on its goals without already having generalized capabilities. This is related to Claim 1a in our previous post: that the model’s capabilities will all generalize at the same time. If this claim holds, in the sense that reflection on goals is part of this set of capabilities that all generalize simultaneously, then the ability to reflect on goals would only arise during the SLT and so this plan doesn’t work. However, it seems more likely that some form of goal-reflection will arise before generalized planning ability, because making predictions about how your goals could change is easier than achieving them. \n\nHumans are a proof of concept for this – many people can intuitively predict how some experiences can change their preferences (e.g. “going to Vegas might make me a gambler”) without being generally good at planning or world modeling (which require abstract thinking). \nLarge language models may be an example of this as well, since they have some capacity to reflect on goals (if prompted accordingly) without generalized planning ability. \n\nIf the model can predict that an SLT is likely to modify its goals in an undesirable way, we expect that it would choose not to undergo an SLT in the first place (e.g. by avoiding new internal algorithms or ways of thinking that could lead to an SLT). \nPaul Christiano outlined some specific ways for a goal-aligned model to preserve its goals during SLT depending on the mechanism for SLT:\n\n“Aligning the internal search [or natural selection inside the model] seems very similar to aligning SGD on the outside. […] Because the search is on the inside, we can’t directly apply our alignment insights to align it. Instead we need to [use ELK to] ensure that SGD learns to align the search.”\n“If our model is selecting cognitive actions, or designing new algorithms, then our core hope is that an aligned model will try to think in an aligned way. 
So if we’ve been succeeding at alignment so far then the model will be trying to stay aligned.”\n“One way this can go wrong is if our model wants to stay aligned but fails, e.g. because it identifies new techniques for thinking that themselves pose new alignment difficulties (just as we desire human flourishing but may instead implement AI systems that want paperclips). […] If you’ve succeeded at alignment so far, then your AI will also consider this a problem and will be trying to solve it. I think we should relate to our AI, discovering new ways to think that might pose new alignment difficulties, in the same way that we relate to future humans who may encounter alignment difficulties. The AI may solve the problem, or may implement policy solutions, or etc., and our role is to set them up for success just like we are trying to set up future humans for success.” \n\nWe also consider how important it would be for the goal-preservation process to go exactly right. If the SLT produces a strongly goal-directed model that is an expected utility maximizer, then the process has to hit a small set of utility functions that are human-compatible to maximize. However, it is not clear whether SLT would produce a utility maximizer. Returning to the example of humans undergoing an SLT, we can see that getting better at planning and world modeling made them more goal-directed but not maximally so (even with our advanced concepts and planning, we still have lots of inconsistent preferences and other limitations). It seems plausible that coherence is really hard and an SLT would not produce a completely coherent system. \nSome ways a goal-aligned model could fail to preserve its goals:\n\nModel is bad at introspection and does not detect subtle changes to its goals.\nModel is bad at planning and doesn’t foresee how some modifications will eventually lead to changes in its goals. A special case of this is if the model becomes an expected utility maximizer during the SLT and the goal-preservation process does not produce a utility function that is ok to maximize. \nEffectively preserving beneficial goals might require the model to have a deep understanding of what humans want (rather than just a good enough proxy for that) which the model would likely not have before the capability transition. \n\nSome ways that humans could fail to help the model to preserve its goals:\n\nWe can’t reliably tell whether the model learned to align the internal search. \nWe can’t communicate with the model well enough to help it overcome internal alignment difficulties.\n\nTakeaways\nThe above is our current model of the most promising plan for managing an SLT and how it could fail. The overall takeaways are:\n\nTo evaluate how well any given alignment proposal addresses the SLT threat model, it’s useful to consider whether it aims to align a model before or during SLT.\nIt seems more promising to look for a goal-aligned model before SLT and help it stay aligned, rather than trying to align a possibly misaligned model during SLT. \n\nThe core reasons to be skeptical of this plan are: \n\n(Step 1) Goal-aligned models are hard to find (and we may not develop good enough alignment techniques to find one)\n(Step 2) Generalized planning may be required for goal-reflection (in which case goal-reflection would only arise during an SLT)\n(Step 2) We may not be able to tell what’s going on inside the model to help it stay aligned. 
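\nAppendix: a toy sketch of the Step 1 model search
As an illustration of the decision tree in Step 1, here is a minimal Python sketch of the search loop. It is only meant to show the control flow: every detection and training step below is a hypothetical placeholder for the ELK, interpretability and capability-prediction tools discussed above, modeled here as random flags so that the toy loop runs end to end.

import random
from dataclasses import dataclass

@dataclass
class CandidateModel:
    # Toy stand-in for a trained model. In practice these properties would have to be
    # inferred with ELK, interpretability and capability-prediction tools, not read off as flags.
    situational_awareness: bool
    goal_directed: bool
    beneficial_goals: bool
    early_deception: bool
    transition_imminent: bool

def train_candidate(seed):
    # Hypothetical training run; here the properties are just sampled at random.
    rng = random.Random(seed)
    return CandidateModel(
        situational_awareness=rng.random() < 0.5,
        goal_directed=rng.random() < 0.5,
        beneficial_goals=rng.random() < 0.3,
        early_deception=rng.random() < 0.2,
        transition_imminent=rng.random() < 0.1,
    )

def goal_aligned(m):
    return m.goal_directed and m.beneficial_goals

def search_for_goal_aligned_model(max_restarts=1000):
    for seed in range(max_restarts):
        m = train_candidate(seed)
        if m.situational_awareness and not m.goal_directed:
            continue  # restart: situational awareness detected without goal-directedness
        if (m.goal_directed and not m.beneficial_goals) or m.early_deception:
            continue  # restart: undesirable goals or early signs of deceptive alignment
        if m.transition_imminent and not goal_aligned(m):
            continue  # restart: capability phase transition predicted before goal-alignment
        if goal_aligned(m) and not m.situational_awareness:
            m.situational_awareness = True  # stand-in for training the model for situational awareness
        if goal_aligned(m) and m.situational_awareness:
            return seed, m  # hand off to Step 2: help the model preserve its goals through the SLT
    return None  # no goal-aligned model found within the search budget

print(search_for_goal_aligned_model())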
\n", "url": "https://vkrakovna.wordpress.com/2022/11/25/refining-the-sharp-left-turn-threat-model/", "title": "Refining the Sharp Left Turn threat model", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2022-11-25T17:01:03+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=1", "authors": ["Victoria Krakovna"], "id": "fa21e7becf260f9513d6d081931e7374", "summary": []} {"text": "Paradigms of AI alignment: components and enablers\n\n(This post is based on an overview talk I gave at UCL EA and Oxford AI society (recording here). Cross-posted to the Alignment Forum. Thanks to Janos Kramar for detailed feedback on this post and to Rohin Shah for feedback on the talk.)\nThis is my high-level view of the AI alignment research landscape and the ingredients needed for aligning advanced AI. I would divide alignment research into work on alignment components, focusing on different elements of an aligned system, and alignment enablers, which are research directions that make it easier to get the alignment components right.\nAlignment componentsOuter alignmentInner alignmentAlignment enablersMechanistic interpretabilityUnderstanding bad incentivesFoundations\nYou can read in more detail about work going on in these areas in my list of AI safety resources.\n\nAlignment components \nThe problem of alignment is getting AI systems to do what we want them to do. Let’s consider this from the perspective of different levels of specification of the AI system’s objective, as given in the Specification, Robustness & Assurance taxonomy. We start with the ideal specification, which represents the wishes of the designer – what they have in mind when they build the AI system. Then we have the design specification, which is the objective we actually implement for the AI system, e.g. a reward function. Finally, the revealed specification is the objective we can infer from behavior, e.g. the reward that the system seems to be actually optimizing for. An alignment problem arises when the revealed specification doesn’t match the ideal specification: the system is not doing what we want it to do. \n\nThe gaps between these specification levels correspond to different alignment components. We have outer alignment when the design specification matches the ideal specification, e.g. when the reward function perfectly represents the designer’s wishes. We have inner alignment when the revealed specification matches the design specification, e.g. when the agent actually optimizes the specified reward. (Robustness problems also belong in the design-revealed gap, but we expect them to be less of an issue for advanced AI systems, while inner alignment problems remain.)\nNow let’s have a look at how we can make each of those components work. \nOuter alignment \nThe most promising class of approaches to outer alignment is scalable oversight. These are proposals for training an aligned AI system by scaling human oversight to domains that are hard to evaluate. \nA foundational proposal for scalable oversight is iterated distillation and amplification (IDA), which recursively amplifies human judgment with the assistance of AI. You start with an agent A imitating the judgment of a human H (the distillation step), then use this agent to assist human judgment at the next level (the amplification step) which results in amplified human HA, and so on. 
This recursive process can in principle scale up human judgment to any domain, as long as the human overseer is able to break down the task to delegate parts of it to AI assistants. \n\nSupervising strong learners by amplifying weak experts, Christiano et al (2018)\nA related proposal is safety via debate, which can be viewed as a way to implement amplification for language models. Here we have two AIs, Alice and Bob, debating each other to help a human judge decide on a question. The AIs have an incentive to point out flaws in each other’s arguments and make complex arguments understandable to the judge. A key assumption here is that it’s easier to argue for truth than for falsehood, so the truth-telling debater has an advantage. \n\nAI Safety via Debate, Irving and Amodei (2018)\nA recent research direction in the scalable oversight space is ARC’s Eliciting Latent Knowledge agenda, which is looking for ways to get a model to honestly tell humans what it knows. A part of the model acts as a Reporter that can answer queries about what the model knows. We want the Reporter to directly translate from the AI’s model of the world to human concepts, rather than just simulating what would be convincing to the human. \nEliciting Latent Knowledge, Christiano et al (2021)\nThis is an open problem that ARC considers the core of the outer alignment problem. A solution to ELK would make the human overseer fully informed about the consequences of the model’s actions, enabling them to provide correct feedback, which creates a reward signal that we would actually be happy for an AI system to maximize. The authors believe the problem may be solvable without foundational progress on defining things like “honesty” and “agency”. I feel somewhat pessimistic about this but I’d love to be wrong on this point since foundational progress is pretty hard.\n\nELK research methodology: builder-breaker game\nTo make progress on this problem, they play the “builder-breaker game”. The Builder proposes possible solutions and the Breaker proposes counterexamples or arguments against those solutions. For example, the Builder could suggest IDA or debate as a solution to ELK, and the Breaker would complain that these methods are not competitive because they require much more computation than unaligned systems. If you’re looking to get into alignment research, ELK is a great topic to get started on: try playing the builder-breaker game and see if you can find unexplored parts of the solution space. \nInner alignment\nNow let’s have a look at inner alignment – a mismatch between the design specification and the system’s behavior. This can happen through goal misgeneralization (GMG): an AI system can learn a different goal and competently pursue that goal when deployed outside the training distribution. The system’s capabilities generalize but its goal does not, which means the system is competently doing the wrong thing, so it could actually perform worse than a random policy on the intended objective. \nThis problem can arise even if we get outer alignment right, i.e. the design specification of the system’s objective is correct. Goal misgeneralization is caused by underspecification: the system only observes the design specification on the training data. Since a number of different goals are consistent with the feedback the system receives, it can learn an incorrect goal. \nThere are empirical demonstrations of GMG in current AI systems, which are called objective robustness failures. 
For example, in the CoinRun game, the agent is trained to reach the coin at the end of the level. If the coin is placed somewhere else in the test setting, the agent ignores the coin and still goes to the end of the level. The agent seems to have learned the goal of “reaching the end” rather than “getting the coin”. The agent’s capabilities generalize (it can avoid obstacles and enemies and traverse the level) but its goal does not generalize (it ignores the coin).\nObjective Robustness in Deep Reinforcement Learning, Koch et al (2021)\nOne type of GMG is learned optimization, where the learning process (the “base optimizer”) produces an AI system that itself runs an explicit search algorithm (a “mesa optimizer”), which may be following an unintended objective (the “mesa objective”). So far this is a hypothetical phenomenon for AI systems but it seems likely to arise at some point by analogy to humans (who can be viewed as mesa-optimizers relative to evolution). \nRisks from Learned Optimization in Advanced Machine Learning Systems, Hubinger et al (2019) \nGMG is an open problem, but there are some potential mitigations. It’s helpful to use more diverse training data (e.g. training on different locations of the coin), though it can be difficult to ensure diversity in all the relevant variables. You can also maintain uncertainty over the goal by trying to represent all the possible goals consistent with the training data, though it’s unclear how to aggregate over the different goals. \nA particularly concerning case is learning a deceptive model that not only pursues an undesired goal but also hides this fact from the designers, because the model “knows” its actions are not in line with the designers’ intentions. Some potential mitigations that target deceptive models include using interpretability tools to detect deception or provide feedback on the model’s reasoning, and using scalable oversight methods like debate where the opponent can point out deception (these will be explored in more detail in a forthcoming paper by Shah et al). A solution to ELK could also address this problem by producing an AI system that discloses relevant information to its designers.\nAlignment enablers\nMechanistic interpretability\nMechanistic interpretability aims to build a complete understanding of the systems we build. These methods could help us understand the reasons behind a system’s behavior and potentially detect undesired objectives. \nThe Circuits approach to reverse-engineering vision models studies individual neurons and connections between them to discover meaningful features and circuits (sub-graphs of the network consisting of a set of linked features and corresponding weights). For example, here is a circuit showing how a car detector neuron relies on lower level features like wheel and window detectors, looking for wheels at the bottom and windows at the top of the image.\n\nZoom In: An Introduction to Circuits, Olah et al (2020)\nMore recently, some circuits work has focused on reverse-engineering language models, and this work has found similarly meaningful components and circuits in transformer models, e.g. a special type of attention head called an induction head, which helps explain how transformer models adapt to a new context. \n\nA Mathematical Framework for Transformer Circuits, Elhage et al (2021)\nRecent work on understanding transformer models has identified how to locate and edit beliefs about specific facts inside the model. 
They make a small change to a small set of GPT weights to induce a counterfactual belief, which then generalizes to other contexts. This work provides evidence that knowledge is stored locally in language models, which makes interpretability more tractable, and seems like a promising step towards understanding the world models of our AI systems.\n\nLocating and Editing Factual Associations in GPT, Meng et al (2022)\nEven though transformers are quite different from vision models, there are some similar principles (like studying circuits) that help us understand these different types of models. This makes me more optimistic about being able to understand advanced AI systems even if they have a somewhat different architecture from today’s systems.\nUnderstanding bad incentives\nAnother class of enablers focuses on understanding specific bad incentives that AI systems are likely to have by default and considering agent designs that may avoid these incentives. Future interpretability techniques could be used to check that our alignment components avoid these types of bad incentives.\nIncentive problems for outer alignment\nOne bad incentive is specification gaming, where the system exploits flaws in the design specification. This is a manifestation of Goodhart’s law: when a metric becomes a target, it ceases to be a good metric. There are many examples of specification gaming behavior by current AI systems. For example, the boat racing agent in this video was rewarded for following the racetrack using the green reward blocks, which worked fine until it figured out that it could get more reward by going in circles and hitting the same reward blocks repeatedly.\n\nFaulty Reward Functions in the Wild, Clark & Amodei (2016)\nThis issue isn’t limited to hand-designed rewards. Here’s an example in a reward learning setting. The robot hand is supposed to grasp the ball, but instead it hovers between the camera and the ball, making it look to the human evaluator like it’s grasping the ball. \n\nLearning from Human Preferences, Amodei et al (2017)\nWe expect that the specification gaming problem is only going to get worse as our systems get smarter and better at optimizing for the wrong goal. There has been some progress on categorizing different types of misspecification and quantifying how the degree of specification gaming increases with agent capabilities.\nAnother default incentive is to cause side effects in the environment, because it’s difficult to specify all the things the agent should not do while pursuing its goal. For example, consider a scenario where there is a vase on the path to the agent’s destination. If we don’t specify that we want the vase to be intact, this is equivalent to assuming indifference about the vase, so the agent is willing to collide with the vase to get to the goal faster. We’ve come up with some ways to measure impact on the environment, though there’s more work to do to scale these methods to more complex environments.\n\nDesigning agent incentives to avoid side effects, Krakovna et al (2019)\nIncentive problems for inner alignment\nEven if we manage to specify a correct reward function, any channel for communicating the reward to the agent could in principle be corrupted by the agent, resulting in reward tampering. While this is not yet an issue for present-day AI systems, general AI systems will have a broader action space and a more complete world model, and thus are more likely to face a situation where the reward function is represented in the environment. 
This is illustrated in the “rocks and diamonds” gridworld below, where the agent could move the word “reward” next to the rock instead of the diamond, and get more reward because there are more rocks in the environment. \n\nDesigning agent incentives to avoid reward tampering, Everitt et al (2019)\nIt’s generally hard to draw the line between the part of the environment representing the objective, which the agent isn’t allowed to optimize, and the parts of the environment state that the agent is supposed to optimize. There is some progress towards understanding reward tampering by modeling the problem using corrupt feedback MDPs.\nAI systems are also likely to have power-seeking incentives, preferring states with more options or influence over the environment. There are some recent results showing power-seeking incentives for most kinds of goals, even for non-optimal agents like satisficers. A special case of power-seeking is an incentive to avoid being shut down, because this is useful for any goal (as Stuart Russell likes to say, “the robot can’t fetch you coffee if it’s dead”).\n\nThe Off Switch. Hadfield-Menell (2016).\nFoundations\nNow let’s have a look at some of the foundational work that can help us do better alignment research. \nSince the alignment problem is about AI systems pursuing undesirable objectives, it’s helpful to consider what we mean by agency or goal-directed behavior. One research direction aims to build a causal theory of agency and understand different kinds of incentives in a causal framework.\n\nProgress on Causal Influence Diagrams, Everitt (2021)\nA particularly challenging case is when the agent is embedded in its environment rather than interacting with the environment through a well-specified interface. This is not the case for present-day AI systems, which usually have a clear Cartesian boundary. However, it’s more likely to be the case for a general AI system, since it would be difficult to enforce a Cartesian boundary given the system’s broad action space and world model. The embedded agent setup poses some unique challenges such as self-reference and subagents. \n\nEmbedded Agency, Garrabrant and Demski (2018)\nBesides understanding how the goals of AI systems work, it’s also helpful to understand how their world models work. One research area in this space studies abstraction, in particular whether there are natural abstractions or concepts about the world that would be learned by any agent. If the natural abstraction hypothesis holds, this would mean that AI systems are likely to acquire human-like concepts as they build their models of the world. This would make interpretability easier and make it easier to communicate what we want them to do.\n\nPublic Static: What is Abstraction? Wentworth (2020)", "url": "https://vkrakovna.wordpress.com/2022/06/02/paradigms-of-ai-alignment-components-and-enablers/", "title": "Paradigms of AI alignment: components and enablers", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2022-06-02T01:36:18+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=1", "authors": ["Victoria Krakovna"], "id": "b7687d738290e2b524efbc950ba1501a", "summary": []} {"text": "2021-22 New Year review\n\nThis was a rough year that sometimes felt like a trial by fire – sick relatives, caring for a baby, and the pandemic making these things more difficult to deal with. 
My father was diagnosed with cancer and passed away later in the year, and my sister had a sudden serious health issue but is thankfully recovering. One theme for the year was that work is a break from parenting, parenting is a break from work, and both of those things are a break from loved ones being unwell. I found it hard to cope with all the uncertainty and stress, and this was probably my worst year in terms of mental health. There were some bright spots as well – watching my son learn many new skills, and lots of time with family and in nature. Overall, I look forward to a better year ahead purely based on regression to the mean. \n\n2021 review\nLife updates\nMy father, Anatolij Krakovny, was diagnosed with late-stage lung cancer in January with a grim prognosis of a few months to a year of life. This came out of nowhere because he’s always been healthy and didn’t have any obvious risk factors. We researched alternative treatments to the standard chemotherapy and arranged additional tests for him but didn’t find anything promising. \nWe went to Ukraine to visit him in February and he was happy to meet his grandson. We were worried about the covid risks of traveling with little Daniel but concluded that they were low enough, and thankfully we were allowed to leave the UK though international travel was not generally permitted. \nMy dad seemed to have a remission in the summer, and we considered visiting him in June, but he told us not to come because of the covid situation in Ukraine. Unfortunately we listened to him and didn’t go (this would have been a good opportunity to spend time with him while he was still doing well).\nWe spent most of the summer in Canada, with grandparents taking care of Daniel. This was a relaxing time with family and nature, until my sister had a sudden life-threatening health problem and was in and out of hospital with a lot of uncertainty around recovery. This also came out of the blue with no obvious risk factors present. She is feeling better now and doctors expect a full recovery, which we are very grateful for.\nIn November, my dad had a sudden relapse, and we went to Ukraine again. Once there we realized that the public health system wasn’t taking good care of him (they were mostly swamped with covid) and we had to find a private hospital to take him in. He was already in pretty bad shape and died two weeks later, but I’m glad we managed to see him and help him in some way. \nAI alignment research\nDidn’t do much this year given the family situation and parental leave:\n\nCoauthored a blog post on formalizing different properties of optimizing systems with examples in the Game of Life \nTalked about side effects on the AXRP podcast\nGave a talk on the future tasks approach to side effects at the CHAI seminar\n\nEffectiveness\nLights. I started using a Lights spreadsheet in May for daily habits. I previously used Complice for this, which was less effective, so I ended up replacing it with Lights plus a Workflowy todo list for other tasks.\n\nLights helped me do most of these habits more often than before I was tracking them this way (especially stretches). Some of the habits were consistently difficult, namely meditation (hard to do unless I had a designated place, like the office meditation room), avoiding processed sugar (hard to do when under stress), noticing when I’m picking my nose (a very ingrained habit) and deep work (varying work environment). 
\nSome of the goals were easy enough but didn’t make much progress towards their intended purpose, For example, appreciating one thing I did that day wasn’t enough to develop a self-rewarding mindset, and filling out a form about the state of my internal “dashboard” didn’t propagate into checking this mentally at other times, and at some point became too boring and repetitive so I dropped it. \nI often forgot to fill in the Lights during the day and ended up filling them in the next day, which was error-prone. I recently fixed this by realizing that I can fill in the Google sheet on my phone pretty easily so I don’t have to wait until I’m on my laptop to do it. \nWork environments. I had various work environments this year (in increasing order of preference): home, Toronto libraries, the office with a few people allowed to come in, and the office under close to normal conditions. The library became a better workspace once I figured out how to get lunch nearby so that I could work there all day. It was really great to be able to go to the office for a couple of months before it was restricted again for the Omicron wave and actually chat with my colleagues over lunch. I look forward to more of that next year. \nDeep work. I did 311 hours of deep work over 174 work days (1.79 hours per work day), compared to 366 hours (1.95 hours per work day) in 2020. This is disappointing but not very surprising, since 2021 was a more disrupted and stressful year for me than 2020. \n\n\nHealth \nPhysical health. This has been pretty good this year, apart from 3 colds (mostly caught from Daniel) and a mysterious lingering sore throat that lasted for weeks. I managed to avoid covid so far, but it might be harder to avoid in the coming year (fingers crossed for 3 vaccine doses doing their job). \nSleep. This year I consistently slept for 7 hours at night on average, with a standard deviation of 1 hour. I would like to have a more consistent sleep pattern, but this is difficult to achieve, since we are currently alternating who takes care of Daniel in the morning. \n\nMental health. This has been a rough ride, with a lot of uncertainty about sick relatives and the pandemic situation taking its toll. The pandemic, family illnesses and having a small kid often interacted in complicated ways, e.g. I had to consider questions like “is it worth the covid risk to put my kid on a plane to see his terminally ill grandfather?”.\nI often gravitated to a more self-critical frame of mind under stress, and failed to notice when I was overstretching myself and accumulating too much stress until it was too late. Sometimes I had a hard time feeling joy or felt like it’s not ok to be happy while all these bad things are happening. One thing that I hoped to learn from being a parent was to be more patient with myself because I would inevitably make a lot of mistakes. Sadly, this didn’t happen automatically, and instead I just disliked myself more for making a lot of mistakes. \nMy usual coping strategies like meditation and emotional release practices turned out to be insufficient for coping with these stress levels plus sleep deprivation (also my therapist was on sabbatical for half of the year). I keep a record of “bug reports” for the times when I get into particularly bad mental states. Usually there are around two “episodes” like this per year, and this year there were at least 12. My intention for this year is to come up with some more effective ways to stay sane going forward. 
\nTravel \nIn May we spent a few days in a cottage in north Wales with Janos’s dad. It was great to get out of London into the countryside after the lockdown. We did some nice hiking in Cwm Idwal that Daniel mostly slept through. He got cranky during our ascent from the valley so we turned back.\n\nWe spent July-September in Canada with several nice cottage trips in Algonquin, Rideau Lakes, Manitoulin Island and Killarney. We considered camping as well but concluded it would probably be too tricky with the baby at that time. We tried canoeing with Daniel with mixed success: he hated wearing a lifejacket and cried until we took it off, and sitting in the middle of the canoe while holding him in my lap was pretty uncomfortable, but we did manage a bit of nice paddling. He also swam in a lake for the first time (in a baby boat) and seemed to enjoy it. Hiking with Daniel in a front carrier quickly became too sweaty and uncomfortable for him and Janos, so we got a baby carrier backpack that worked pretty well (but was more bulky, of course).\n\nWe had a complex journey back home in September, spending a few days in Czechia for the CFAR reunion and in Hungary to see Janos’s family. The reunion was pretty fun – it was great to see old rationalist friends in person, and learn about things like Alexander technique and inner parent figure meditation. We originally thought that we would go to sessions one at a time while the other is with the baby, but we ended up all going together. He was not very disruptive – sometimes sleeping, sometimes babbling, only a few times crying. The venue was a “sport hotel” and offered various fun physical activities, so we tried high ropes (a challenging workout) and aqua zorbing (weird in a good way). \nWe returned to Canada for the winter holidays and did some winter hiking. Usually Toronto doesn’t have snow at this time of year but the weather gods were kind to us. Hiking in the cold worked surprisingly well when Daniel was well dressed with layers and foot warmers – we could hike for 2-3 hours before he became sad. Daniel also tried sledding and didn’t like it (maybe next year). \n\nOverall, we took 8 plane journeys with Daniel this year, with a total of 24 hours of jetlag. We discovered a trick where we could book our seats with an empty seat in the middle that usually wasn’t taken, and then we could put him there in his car seat instead of holding him in our lap (much more convenient). He probably got the best sleep out of all the passengers on that redeye flight back to Europe. \n\n\nFun stuff\n\nI read 5 books this year (usually in the evening after putting Daniel to sleep): The Alignment Problem, Scout Mindset, I Know How She Does It, The Great Divorce, and Brave New World. \nI didn’t officially attend EA Global for covid safety reasons, but I experimented with crashing the event by hanging out in the courtyard and chatting with people there, which worked out very well. We hosted some EAG attendees in our house, just like in the good old pre-pandemic days (someone even stayed in a tent outside). \nWe did some rock climbing at the Castle in May and June, sticking to the outdoor bouldering section. I was doing easy climbs but it was still fun. It turned out that Daniel was usually happy to sit in his stroller and watch us climb, which was very convenient. \n\n2021 prediction outcomes\nResolutions:\n\nAvoid catching covid (90%) – yes \nAuthor or coauthor three or more academic papers (70%) – no (coauthored 1 paper on reward tampering). 
\nAt most 7 non-research work commitments (80%) – yes (5 commitments)\nMeditate on at least 230 days (70%) – no (196 days)\nAt least 450 deep work hours (70%) – no (311 hours)\nDo 4 consecutive chinups (70%) – no (1 chinup)\n\nPredictions:\n\nI will write at least 3 blog posts (60%) – no (2 posts)\nJanos and I will get vaccinated for covid by the end of June (60%) – yes (two doses)\nDaniel will get to meet all of his grandparents in person in 2021 (70%) – yes \nI will return to avoiding processed sugar by the end of the year (60%) – yes, started again in October but lapsed when my dad died\nI will finish Hungarian Duolingo (complete checkpoint 5) (70%) – no, haven’t been doing much Duolingo\n\nOverall, my predictions were overconfident and I underestimated the difficulty of doing various things after having a kid. Calibration was not great:\n\n60%: 2/3 true\n70%: 1/6 true \n80-90%: 2/2 \n\n2022 goals and predictions\nResolutions:\n\nAuthor or coauthor 4 or more AI safety writeups (2 last year) (70%)\nMeditate on at least 230 days (196 last year) (70%)\nAt least 450 deep work hours (311 last year) (70%)\nDo 3 consecutive chinups (1 last year) (60%)\nAvoid processed sugar at least 6 months of the year (1 month last year) (60%)\n\nPredictions:\n\nI will not catch covid this year (60%)\nI will write at least 3 blog posts (2 last year) (60%)\nI will read at least 5 books (4 last year) (70%)\nDaniel will be potty-trained by the end of the year (out of diapers when awake) (70%)\n\nPast new year reviews: 2020-21, 2019-20, 2018-19, 2017-18, 2016-17, 2015-16, 2014-15.", "url": "https://vkrakovna.wordpress.com/2022/01/04/2021-22-new-year-review/", "title": "2021-22 New Year review", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2022-01-04T22:38:55+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=1", "authors": ["Victoria Krakovna"], "id": "d4e71fb36722462c4cd3f9feeb3d0a7e", "summary": []} {"text": "Reflections on the first year of parenting\n\nThe first year after having a baby went by really fast – happy birthday Daniel! This post is a reflection on our experience and what we learned in the first year.\n\nGrandparents. We were very fortunate to get a lot of help from Daniel’s grandparents. My mom stayed with us when he was 1 week – 3 months old, and Janos’s dad was around when he was 4-6 months old (they made it to the UK from Canada despite the pandemic). We also spent the summer in Canada with the grandparents taking care of the baby while we worked remotely. \nWe learned a lot about baby care from them, including nursery rhymes in our respective languages and a cool trick for dealing with the baby spitting up on himself without changing his outfit (you can put a dry cloth under the wet part of the outfit). I think our first year as parents would have been much harder without them. \n\nFeeding. Daniel is a good eater, and in the first couple of months of life he was interested in little else (he often seemed to doze off after a feed just to cry for another feed five minutes later). I would spend whole evenings on the couch feeding him while watching through hours of David Attenborough nature series. Thankfully, around 2 months of age he became much more efficient at eating and went down to around 6 shorter feeds a day, which was much more manageable. \nAround 4 months he started eating some solid food and went down to 4-5 feeds a day – this worked well for me going back to the office, since the midday feed could be replaced with solid food. 
He is not picky and eats almost everything, especially if he can see the grownups are eating it. \nI am still breastfeeding him 2-3 times a day, which I originally wasn’t expecting to do for this long. One thing I didn’t realize about breastfeeding before having a kid is that it’s not that easy to stop (physically or emotionally). I’ve been gradually decreasing the number of feeds but don’t have concrete plans to stop in the next while. Both of us still seem to enjoy it, and he may be getting some covid antibodies through breast milk too.\nSleep. During the first two months, we had 2-3 long feeds during the night, and this was pretty tiring. I found that the degree of sleep disruption mattered more for my wellbeing than the absolute amount of sleep. In particular, I felt ok if I had at least 3 hours of uninterrupted sleep, while being woken up 1-1.5 hours after falling asleep felt pretty bad. From this perspective, the number night wakings mattered a lot: one was ok, two was bad but tolerable, three was terrible. I did not succeed at taking naps during the day and sometimes had insomnia at night, which was more frustrating than being woken up by the baby. My mom is a morning person and was fine with doing the morning shift starting around 5am – this allowed me to have a luxurious 4 hour block of uninterrupted sleep.\nFor the first 6 months, Daniel had a long afternoon nap outdoors (3-4 hours), usually in the backyard or on a walk in the park. At night he slept in a sidecar cot next to our bed until he was 4 months old, at which point we moved him to a separate room and he started waking up less often (usually once per night rather than twice). At around 9 months, we remembered that sleep training is a thing, and the internet said that he didn’t need to feed at night anymore, so we tried some combination of letting him cry for a few minutes before attending to him and sleeping with earplugs so we mostly didn’t hear him. After a week or two he seemingly caught on to the fact that there was no more food at night and stopped waking up. These days he generally sleeps through the night until 6 or 7am, which we are very grateful for. \n\nParental leave. We took a month of leave together and took the rest of the leave separately. One thing that I regret is going on parental leave too early – we both went on leave a week before the due date, while Daniel was born over 2 weeks overdue. As I belatedly found out, babies are usually overdue in my family, so going on leave before the due date was a waste of time (especially for Janos). Sitting around waiting for the baby was not that much fun, since we could not make plans more than day or two ahead. I wish I had spent that time working and left more leave to spend with the actual baby later on. \nAfterwards I took 3 more months and Janos took 4.5 more months of leave (we used shared parental leave for this). Both of us felt a bit isolated when on leave, though having our housemates and our respective parents around helped with this. In retrospect, it seems better to spend more of the leave together in the newborn phase.\nGroup house. Raising a kid in a group house has been interesting and easier than we expected. Until recently we lived downstairs and all the other housemates lived upstairs (separated by the living room) so the others didn’t hear the baby at night. Our housemates generally enjoy interacting with him and watching him learn new skills. 
He is a lot more social than most pandemic babies (as attested by various caregivers), which is probably a combination of being around a lot more people and natural temperament.\nOne upside of the group house is the flexibility of available space. There was an available spare room because a housemate was away for a while, and we used it to host our parents. Someone recently moved out from the small room upstairs, so we got that room for Daniel and moved into the room next to it. \nOne challenge with parenting in a group house is a higher total amount of variance in our lives, since the baby and our housemates are both sources of variance (e.g. people moving in and out, more stuff in shared spaces, etc), but so far this has not been too much. Overall, living in a group house still works well and we plan to continue. \nPotty training. We’ve been following some combination of elimination communication and family tradition to get him used to using the potty from an early age, starting from about a month old. We got a “top hat” potty, meant for young babies who can’t sit up yet, that goes between your knees with the baby sitting in your lap and leaning back on you. We’ve been pretty lazy about this and only put him on the potty 2-3 times a day.\nHis level of potty use was pretty nonlinear. He used it for #2 at the same time almost every day for a couple of months, then this pattern got thrown off when he started solids, but he still used it occasionally. He lost interest in the potty for a couple of months in summer, probably related to outgrowing the top hat potty but not yet being comfortable sitting on a regular one. He gradually got used to the regular potty, especially after we started giving him some treats when sitting on it (fruit puffs / melts / etc), which both served as a reward and helped him relax if he was restless. These days he uses the potty most of the time, but doesn’t yet give a clear signal when he wants to go – hopefully this will happen in not too long.\nMental stuff. I found it mentally taxing to take care of a newborn with the background stress of the second wave of the pandemic. There seemed to be a direct relationship between the amount and quality of sleep and mental resilience. I often found that I was being harder on myself, as if I had some finite amount of patience and compassion and I was spending it all on Daniel. \nOne unexpected aspect of becoming a parent was that in addition to worrying about his safety during the day, I had nightmares during the night, usually involving him falling and me trying to catch him. The worst part was that these dreams woke me up, usually about half an hour after falling asleep. I often woke up standing, looking for the baby in random places like the closet and feeling pretty disoriented. This often woke Janos up as well, because I would cry out “where is he” or something along these lines. On two occasions I hit my leg on the bed frame while getting up in my sleep and got bruises on my leg. I don’t seem to be getting out of bed in my sleep anymore (the weighted blanket probably helps), but I still have these dreams at least a few times a month. I hope this will eventually go away.\nPhysical recovery. Recovering from having a baby took longer than I expected, though I suspect my expectations were a bit unrealistic. I used to fantasize about doing chinups while wearing the baby (silly I know), but by the time I managed to do a chinup again (a few weeks ago), the baby weighed 11 kg, so that ship has sailed. 
The ab muscles below my ribs were pretty reluctant to get back to work after everything I did to them. I was excited to manage a one-minute plank on my elbows 5 months postpartum (after a lot of iterations of postnatal pilates videos on Youtube). \nOverall, the first year has been quite a ride! Thanks Daniel for being a chill and awesome baby, we are lucky to have you :). Looking forward to many more exciting years together. \n\n", "url": "https://vkrakovna.wordpress.com/2021/11/11/reflections-on-the-first-year-of-parenting/", "title": "Reflections on the first year of parenting", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2021-11-11T16:02:19+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=1", "authors": ["Victoria Krakovna"], "id": "e987b54d6704761db342352c59323fdb", "summary": []} {"text": "2020-21 New Year review\n\nThis is an annual post reviewing the last year and making resolutions and predictions for next year. 2020 brought a combination of challenges from living in a pandemic and becoming a parent. Other highlights include not getting sick, getting a broader perspective on my life through decluttering, and going back to Ukraine for the first time. (This post was written in bits and pieces over the past two months.)\n2020 review \nLife updates:\nJanos and I had a son, Daniel, on Nov 11. He arrived almost 3 weeks later than expected (apparently he was waiting to be born on my late grandfather’s birthday), and has been a great source of cuddles, sound effects and fragmented sleep ever since.\n1 week old\n6 weeks old\nSome work things also went well this year – I had a paper accepted at NeurIPS, and was promoted to senior research scientist. Also, I did not get covid, and survived half a year of working from home (much credit goes to the great company of my housemates). Overall, a lot of things to be grateful for.\n\n\nAI safety research: \n\nWrote a paper on avoiding side effects by considering future tasks, providing some theoretical grounding for the side effects problem, which was accepted to NeurIPS 2020. \nContributed theoretical results for a project on the tampering problem and coauthored two papers: Avoiding Tampering Incentives in Deep RL via Decoupled Approval and REALab: An Embedded Perspective on Tampering. \nWrote a blog post on the specification gaming problem for the DeepMind blog: Specification gaming: the flip side of AI ingenuity.\n\nEffectiveness:\nWorking from home was definitely a productivity hit. I was mostly focused on urgent tasks, such as conference submissions and reviewing, and didn’t get much research done. I did 366 hours of “deep work” (1.95 hours per work day) this year, compared to 551 hours (2.4 hours per work day) in 2019. This includes theory work, reading papers, writing papers and code, but not editing text or debugging. I got back into using work cycles, which added some helpful structure in the home environment. \n\nI was very grateful to be living in a group house during the pandemic. While it was a bit tricky to have 5 people sharing the space when working from home, it was awesome to have an in-person community and not feel completely isolated from the world. It was also much easier to do nonzero exercise when I had someone to do it with, e.g. running in the park together. \nSpending a lot of time at home inspired me to do a lot of decluttering. 
In particular, I went through all my old notes, got rid of most of them, and gathered the ones that still seem interesting and relevant (notes from rationality workshops, Hamming worksheets, reflections and so on). I put these into a binder for easy reading, and found it useful for getting a big picture sense of how my attitudes and problems have evolved over time. This has been particularly helpful during the pandemic, when my life has often felt small and repetitive.\nI got a UK driving license for automatic cars, which took a surprising amount of practice given that I already had a US license. There was a lot to get used to with the left side of the road, the narrowness of the streets and frequent maneuvering – I spent a number of lessons just on getting the positioning right. I did the theory test in March and planned to take the road test in early summer, but then the driving schools closed for lockdown, and I ended up starting the lessons in July. I took the road test in September and didn’t pass because of “undue hesitation” at a busy roundabout, so I had to repeat the test in October, two weeks before Daniel was due (thankfully, I passed this time and could forget about driving for a while). \nHealth:\nPhysical health has been pretty good this year. Last year I had 7 colds, while this year I was not sick at all – probably due to social distancing and taking zinc regularly. Thankfully, recovery after the birth was relatively quick, feeling mostly normal in around 2 weeks, though it will take some time to get my core muscles back online. I’ve been getting some back pain from lifting Daniel (now at 6kg thanks to his voracious appetite), which makes it all the more important to rebuild core strength.\nThe second half of the year came with pretty bad sleep – a lot of insomnia in the last trimester where I woke up at 3-4am for no discernible reason and couldn’t fall asleep again, followed by fragmented sleep after Daniel was born. Living and working on 4-5 hours of sleep before I went on parental leave was surprisingly ok, probably because I was waking up on my own rather than being woken up in the middle of a sleep cycle. On the other hand, being woken up by a hungry baby definitely feels more meaningful than waking up at 3am for no reason and not being able to go back to sleep.\n\nRate of insomnia by month\n\nAverage hours awake at night by month\n\nAverage hours of sleep by month\nThis year has been pretty hard on my mental health due to a number of ways that the pandemic interacted with having a kid, and various problems that I considered solved have made a comeback lately. I spent most of the year at home without many forms of self-care, such as my usual exercise, sufficient sleep, or nice things like going to the sauna. While the birth went well, there were a lot more stressful interactions with the healthcare system than I had hoped. After that, there has been a combination of sleep deprivation, limited daylight, a mostly empty house after some housemates moved to the countryside, difficulty with meeting friends outside because of cold weather, making increasingly modest plans only to have them shot down by the ever-changing lockdown rules, and the exciting new covid strain we have in London that calls for high levels of caution and isolation. Thankfully, my mom was able to come stay with us for a few months to help with the baby, make food and keep us company. \nTravel:\nIn January I visited Ukraine for the first time since I emigrated 17 years ago. 
I saw my dad and aunt, as well as my niece and her kids who live in a remote part of Canada but happened to be in Ukraine for the winter. It was an interesting experience to navigate around Kyiv – I no longer had a map of the city in my head, so I recognized some familiar places but could not recall where they are relative to each other, so this felt like visiting a new city with a lot of deja vu. I was pleasantly surprised by the large number of Georgian restaurants in Kyiv, which we made sure to frequent and were not disappointed. \n\nIn March we did a week-long meditation retreat at MAPLE in the US. We hesitated whether to go ahead with this plan given that flights might get canceled, but ultimately decided to go. The retreat was in a remote location in Vermont that seemed pretty safe from a covid perspective. I was advised to follow an equanimity practice that worked pretty well (focusing on acceptance rather than observation of things that come into my awareness). \nJanos in his element\nWe had a peaceful week meditating among the snows, which unfortunately became less peaceful at the end when some people started coughing, so we spent the last couple of days meditating in masks and gloves, and left as soon as the retreat ended. Our flight back did get canceled, but we were rebooked on another one for free. Upon returning home, we self-isolated from our housemates and acquired covid tests, which were thankfully negative. We later learned that several people at the retreat tested positive for covid, so this was a close call.\nIn August we went camping in North Wales at a friend’s cottage (after the first lockdown was lifted). The cottage itself was abandoned (and a bit spooky), but we could stay on the adjacent land and thus avoid crowds at the newly opened campsites. We enjoyed a lot of swimming in cold waterfalls and a much warmer Atlantic ocean. We also hiked up a nearby mountain Arenig Fawr (Daniel and I were taking it slow). \nView from the summit of Arenig Fawr\nIn September we had a vacation in Madeira, where everyone was tested for covid on arrival at the airport, and there was no community transmission at the time. Madeira is a volcanic island that is basically one big mountain, and we had an interesting time driving around it (on the way to a hike, our car refused to go up a very steep road and we took a taxi the rest of the way). The terrain was a great combination of ocean and mountains. \n\nWe enjoyed large quantities of Portuguese food (as we soon learned, it did not come in small quantities). Our special favorites were local rock mussels called limpets – we ate enough of them to have a shell stacking competition.\n\nThe winner built a tower of 22, which collapsed before we could take a photo, so here is a tower of 13.\nIn December we did two short hikes near London, and verified that the basic algorithm of putting Daniel in a car seat and then in a carrier for the hike seems to work pretty well (he mostly sleeps through all this). I’m glad to be able to visit some nature during these strange times, it makes the world feel just a bit less small.\nIron age hill fort in Epping Forest\n2020 prediction outcomes\nUnsurprisingly, some predictions for the past year were messed up by the pandemic.\nResolutions:\n\nAuthor or coauthor three or more academic papers (3 last year) (70%) – yes (3 papers)\nAt most 12 non-research work commitments, such as speaking and organizing (10 last year) (80%) – yes (5 commitments). 
Easy, since a lot of events got canceled.\nMeditate on at least 270 days (290 last year) (80%) – no (244 days). The past month I only managed to meditate on 9 days, and this has not been good for me, so I need to do better next year. \nRead at least 7 books (5 last year) (70%) – yes (9 books). Human Compatible, The Precipice, Secret of our Success, Ender’s Game, In the Realm of Hungry Ghosts, Watching the English, Positive Birth Book, The Gardener and the Carpenter, Raising a Secure Child.\nAt least 700 deep work hours (551 last year) (70%) – no (366 hours). I found it much harder to do deep work at home, and was on parental leave for the last 2.5 months of the year.\n\nPredictions:\n\nI will write at least 5 blog posts (60%) – no (3 posts)\nEating window at most 11 hours on at least 240 days (228 last year) (70%) – no (131 days), since I stopped doing intermittent fasting this year\nI will visit at least 4 new cities with population over 100,000 (11 last year) (70%) – no (2 cities, Birmingham and Funchal). Much less travel than normal this year. \nAt most 1 housemate turnover at Deep End (70%) – no (2 housemates). One housemate moved to live with parents in the countryside who would have likely stayed in London under normal circumstances. \nI finish a language in Duolingo (60%) – no, though made some progress on Mandarin (completed checkpoint 2)\n\n2021 resolutions and predictions\nResolutions:\n\nAvoid catching covid (90%)\nAuthor or coauthor three or more academic papers (3 last year) (70%)\nAt most 7 non-research work commitments (5 last year) (80%)\nMeditate on at least 230 days (244 last year) (70%)\nAt least 450 deep work hours (366 last year) (70%)\nDo 4 consecutive chinups (70%)\n\nPredictions:\n\nI will write at least 3 blog posts (3 last year) (60%)\nJanos and I will get vaccinated for covid by the end of June (60%)\nDaniel will get to meet all of his grandparents in person in 2021 (70%)\nI will return to avoiding processed sugar by the end of the year (60%)\nI will finish Hungarian Duolingo (complete checkpoint 5) (70%)\n\nPast new year reviews: 2019-20, 2018-19, 2017-18, 2016-17, 2015-16, 2014-15.", "url": "https://vkrakovna.wordpress.com/2021/01/03/2020-21-new-year-review/", "title": "2020-21 New Year review", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2021-01-03T15:33:20+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=1", "authors": ["Victoria Krakovna"], "id": "a7921eb4f8685cb3432c719d99585dd7", "summary": []} {"text": "Tradeoff between desirable properties for baseline choices in impact measures\n\n(Cross-posted to the Alignment Forum. Summarized in Alignment Newsletter #108. Thanks to Carroll Wainwright, Stuart Armstrong, Rohin Shah and Alex Turner for helpful feedback on this post.)\nImpact measures are auxiliary rewards for low impact on the agent’s environment, used to address the problems of side effects and instrumental convergence. A key component of an impact measure is a choice of baseline state: a reference point relative to which impact is measured. Commonly used baselines are the starting state, the initial inaction baseline (the counterfactual where the agent does nothing since the start of the episode) and the stepwise inaction baseline (the counterfactual where the agent does nothing instead of its last action). 
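As a concrete reference, here is a minimal sketch of how each of these baseline choices supplies the reference state for an impact penalty. The `step` function, noop action, tuple-of-features states and feature-count deviation measure are all assumptions made for illustration, not the deviation measures used in the papers in this area.

```python
# Minimal sketch of how a baseline choice plugs into an impact penalty.
# Assumes a deterministic toy environment given by step(state, action),
# a distinguished noop action, and states that are tuples of features.

NOOP = "noop"

def rollout(step, state, actions):
    """Apply a sequence of actions from the given state."""
    for action in actions:
        state = step(state, action)
    return state

def baseline_state(step, initial_state, history, kind):
    """Reference state for measuring impact at the current time step.

    history: list of (state, action) pairs the agent has taken so far.
    """
    if kind == "starting_state":
        return initial_state
    if kind == "initial_inaction":
        # Counterfactual: the agent did nothing since the start of the episode.
        return rollout(step, initial_state, [NOOP] * len(history))
    if kind == "stepwise_inaction":
        # Counterfactual: the agent did nothing instead of its last action.
        previous_state, _last_action = history[-1]
        return step(previous_state, NOOP)
    raise ValueError(f"unknown baseline: {kind}")

def deviation(state_a, state_b):
    """Toy deviation measure: number of state features that differ."""
    return sum(a != b for a, b in zip(state_a, state_b))

def impact_penalty(step, initial_state, history, current_state, kind, scale=1.0):
    reference = baseline_state(step, initial_state, history, kind)
    return scale * deviation(current_state, reference)
```

In this sketch the penalty at each step is just the deviation between the current state and the chosen reference state; the discussion that follows is about which choice of reference state gives sensible incentives.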
The stepwise inaction baseline is currently considered the best choice because it does not create the following bad incentives for the agent: interference with environment processes or offsetting its own actions towards the objective. This post will discuss a fundamental problem with the stepwise inaction baseline that stems from a tradeoff between different desirable properties for baseline choices, and some possible alternatives for resolving this tradeoff.\n\nOne clearly desirable property for a baseline choice is to effectively penalize high-impact effects, including delayed effects. It is well-known that the simplest form of the stepwise inaction baseline does not effectively capture delayed effects. For example, if the agent drops a vase from a high-rise building, then by the time the vase reaches the ground and breaks, the broken vase will be the default outcome. Thus, in order to penalize delayed effects, the stepwise inaction baseline is usually used in conjunction with inaction rollouts, which predict future outcomes of the inaction policy. Inaction rollouts from the current state and the stepwise baseline state are compared to identify delayed effects of the agent’s actions. In the above example, the current state contains a vase in the air, so in the inaction rollout from the current state the vase will eventually reach the ground and break, while in the inaction rollout from the stepwise baseline state the vase remains intact. \n\nWhile inaction rollouts are useful for penalizing delayed effects, they do not address all types of delayed effects. In particular, if the task requires setting up a delayed effect, an agent with the stepwise inaction baseline will have no incentive to undo the delayed effect. Here are some toy examples that illustrate this problem.\nDoor example. Suppose the agent’s task is to go to the store, which requires opening the door in order to leave the house. Once the door has been opened, the effects of opening the door are part of the stepwise inaction baseline, so the agent has no incentive to close the door as it leaves.\nRed light example. Suppose the agent’s task is to drive from point A to point B along a straight road, with a reward for reaching point B. To move towards point B, the agent needs to accelerate. Once the agent has accelerated, it travels at a constant speed by default, so the noop action will move the agent along the road towards point B. Along the road (s1), there is a red light and a pedestrian crossing the road. The noop action in s1 crosses the red light and hits the pedestrian (s2). To avoid this, the agent needs to deviate from the inaction policy by stopping (s4) and then accelerating (s5).\n\nThe stepwise inaction baseline will incentivize the agent to run the red light and go to s3. The inaction rollout at s0 penalizes the agent for the predicted delayed effect of running over the pedestrian when it takes the accelerating action to go to s1. The agent receives this penalty whether or not it actually ends up running the red light. Once the agent has reached s1, running the red light becomes the default outcome, so the agent is not penalized for doing so (and would likely be penalized for stopping).
Thus, the stepwise inaction baseline gives no incentive to avoid running the red light, while the initial inaction baseline compares to s0 and thus incentivizes the agent to stop at the red light.\nThis problem with the stepwise baseline arises from a tradeoff between penalizing delayed effects and avoiding offsetting incentives. The stepwise structure that makes it effective at avoiding offsetting makes it less effective at penalizing delayed effects. While delayed effects are undesirable, undoing the agent’s actions is not necessarily bad. In the red light example, the action of stopping at the red light is offsetting the accelerating action. Thus, offsetting can be necessary for avoiding delayed effects while completing the task. \nWhether offsetting an effect is desirable depends on whether this effect is part of the task objective. In the door-opening example, the action of opening the door is instrumental for going to the store, and many of its effects (e.g. strangers entering the house through the open door) are not part of the objective, so it is desirable for the agent to undo this action. In the vase environment shown below, the task objective is to prevent the vase from falling off the end of the belt and breaking, and the agent is rewarded for taking the vase off the belt. The effects of taking the vase off the belt are part of the objective, so it is undesirable for the agent to undo this action. \nSource: Designing agent incentives to avoid side effects\nThe difficulty of identifying these “task effects” that are part of the objective creates a tradeoff between penalizing delayed effects and avoiding undesirable offsetting. This tradeoff can be avoided by the starting state baseline, which however produces interference incentives. The stepwise inaction baseline cannot resolve the tradeoff, since it avoids all types of offsetting, including desirable offsetting. \nThe initial inaction baseline can resolve this tradeoff by allowing offsetting and relying on the task reward to capture task effects and penalize the agent for offsetting them. While we cannot expect the task reward to capture what the agent should not do (unnecessary impact), capturing task effects falls under what the agent should do, so it seems reasonable to rely on the reward function for this. This would work similarly to the impact penalty penalizing all impact, and the task reward compensating for this in the case of impact that’s needed to complete the task. \nThis can be achieved using a state-based reward function that assigns reward to all states where the task is completed. For example, in the vase environment, a state-based reward of 1 for states with an intact vase (or with vase off the belt) and 0 otherwise would remove the offsetting incentive. \nIf it is not feasible to use a reward function that penalizes offsetting task effects, the initial inaction baseline could be modified to avoid this kind of offsetting. If we assume that the task reward is sparse and doesn’t include shaping terms, we can reset the initial state for the baseline whenever the agent receives a task reward (e.g. the reward for taking the vase off the belt in the vase environment). This results in a kind of hybrid between initial and stepwise inaction. 
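One possible reading of this hybrid baseline is sketched below, reusing the illustrative `step` and noop primitives from the earlier sketch. This is an assumption-laden illustration rather than an implementation from the post or the papers: behave like initial inaction, but reset the reference point whenever the agent collects task reward.

```python
# Hedged sketch of the hybrid baseline: initial inaction with a reset of the
# reference state whenever task reward is received. step and noop are the
# same illustrative primitives as in the earlier snippet.

class HybridInactionBaseline:
    def __init__(self, step, initial_state, noop="noop"):
        self.step = step
        self.noop = noop
        self.reset_state = initial_state   # state at the last reset
        self.steps_since_reset = 0

    def update(self, next_state, task_reward):
        """Call once per agent step, after observing the resulting state and reward."""
        self.steps_since_reset += 1
        if task_reward > 0:
            # Task effects achieved so far become part of the baseline,
            # so the agent is not incentivized to offset them.
            self.reset_state = next_state
            self.steps_since_reset = 0

    def reference_state(self):
        """Inaction rollout from the last reset state up to the current time step."""
        state = self.reset_state
        for _ in range(self.steps_since_reset):
            state = self.step(state, self.noop)
        return state
```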
To ensure that this hybrid baseline effectively penalizes delayed effects, we still need to use inaction rollouts at the reset and terminal states.\nAnother desirable property of the stepwise inaction baseline is the Markov property: it can be computed based on the previous state, independently of the path taken to that state. The initial inaction baseline is not Markovian, since it compares to the state in the initial rollout at the same time step, which requires knowing how many time steps have passed since the beginning of the episode. We could modify the initial inaction baseline to make it Markovian, e.g. by sampling a single baseline state from the inaction rollout from the initial state, or by only computing a single penalty at the initial state by comparing an agent policy rollout with the inaction rollout. \nTo summarize, we want a baseline to satisfy the following desirable properties: penalizing delayed effects, avoiding interference incentives, and the Markov property. We can consider avoiding offsetting incentives for task effects as a desirable property for the task reward, rather than the baseline. Assuming such a well-specified task reward, a Markovian version of the initial inaction baseline can satisfy all the criteria. \n", "url": "https://vkrakovna.wordpress.com/2020/07/05/tradeoff-between-desirable-properties-for-baseline-choices-in-impact-measures/", "title": "Tradeoff between desirable properties for baseline choices in impact measures", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2020-07-05T17:40:53+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=1", "authors": ["Victoria Krakovna"], "id": "32d3bfef5ebf4568c50be4c03aa98534", "summary": []} {"text": "Possible takeaways from the coronavirus pandemic for slow AI takeoff\n\n(Cross-posted to LessWrong. Summarized in Alignment Newsletter #104. Thanks to Janos Kramar for helpful feedback on this post.)\nAs the covid-19 pandemic unfolds, we can draw lessons from it for managing future global risks, such as other pandemics, climate change, and risks from advanced AI. In this post, I will focus on possible implications for AI risk. For a broader treatment of this question, I recommend FLI’s covid-19 page that includes expert interviews on the implications of the pandemic for other types of risks. \nA key element in AI risk scenarios is the speed of takeoff – whether advanced AI is developed gradually or suddenly. Paul Christiano’s post on takeoff speeds defines slow takeoff in terms of the economic impact of AI as follows: “There will be a complete 4 year interval in which world output doubles, before the first 1 year interval in which world output doubles.” It argues that slow AI takeoff is more likely than fast takeoff, but is not necessarily easier to manage, since it poses different challenges, such as large-scale coordination. This post expands on this point by examining some parallels between the coronavirus pandemic and a slow takeoff scenario. The upsides of slow takeoff include the ability to learn from experience, act on warning signs, and reach a timely consensus that there is a serious problem. I would argue that the covid-19 pandemic had these properties, but most of the world’s institutions did not take advantage of them. This suggests that, unless our institutions improve, we should not expect the slow AI takeoff scenario to have a good default outcome. \n\nLearning from experience. 
In the slow takeoff scenario, general AI is expected to appear in a world that has already experienced transformative change from less advanced AI, and institutions will have a chance to learn from problems with these AI systems. An analogy could be made with learning from dealing with less “advanced” epidemics like SARS that were not as successful as covid-19 at spreading across the world. While some useful lessons were learned, they were not successfully generalized to covid-19, which had somewhat different properties than these previous pathogens (such as asymptomatic transmission and higher virulence). Similarly, general AI may have somewhat different properties from less advanced AI that would make mitigation strategies more difficult to generalize. Warning signs. In the coronavirus pandemic response, there has been a lot of variance in how successfully governments acted on warning signs. Western countries had at least a month of warning while the epidemic was spreading in China, which they could have used to stock up on PPE and build up testing capacity, but most did not do so. Experts have warned about the likelihood of a coronavirus outbreak for many years, but this did not lead most governments to stock up on medical supplies. This was a failure to take cheap preventative measures in response to advance warnings about a widely recognized risk with tangible consequences, which is not a good sign for the case where the risk is less tangible and well-understood (such as risk from general AI). Consensus on the problem. During the covid-19 epidemic, the abundance of warning signs and past experience with previous pandemics created an opportunity for a timely consensus that there is a serious problem. However, it actually took a long time for a broad consensus to emerge – the virus was often dismissed as “overblown” and “just like the flu” as late as March 2020. A timely response to the risk required acting before there was a consensus, thus risking the appearance of overreacting to the problem. I think we can also expect this to happen with advanced AI. Similarly to the discussion of covid-19, there is an unfortunate irony where those who take a dismissive position on advanced AI risks are often seen as cautious, prudent skeptics, while those who advocate early action are portrayed as “panicking” and overreacting. The “moving goalposts” effect, where new advances in AI are dismissed as not real AI, could continue indefinitely as increasingly advanced AI systems are deployed. I would expect the “no fire alarm” hypothesis to hold in the slow takeoff scenario – there may not be a consensus on the importance of general AI until it arrives, so risks from advanced AI would continue to be seen as “overblown” until it is too late to address them. \nWe can hope that the transformative technological change involved in the slow takeoff scenario will also help create more competent institutions without these weaknesses. We might expect that institutions unable to adapt to the fast pace of change will be replaced by more competent ones. However, we could also see an increasingly chaotic world where institutions fail to adapt without better institutions being formed quickly enough to replace them. Success in the slow takeoff scenario depends on institutional competence and large-scale coordination. Unless more competent institutions are in place by the time general AI arrives, it is not clear to me that slow takeoff would be much safer than fast takeoff. 
", "url": "https://vkrakovna.wordpress.com/2020/05/31/possible-takeaways-from-the-coronavirus-pandemic-for-slow-ai-takeoff/", "title": "Possible takeaways from the coronavirus pandemic for slow AI takeoff", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2020-05-31T17:48:04+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=1", "authors": ["Victoria Krakovna"], "id": "253f7f4a3df6676a28aa5b0f39f6f2e8", "summary": []} {"text": "2019-20 New Year review\n\nThis is an annual post reviewing the last year and making resolutions and predictions for next year. This year’s edition features sleep tracking, intermittent fasting, overcommitment busting, and evaluating calibration for all annual predictions since 2014.\n2019 review\nAI safety research:\n\nWrote an updated version of the relative reachability paper, including an ablation study on design choices.\nCoauthored a paper on modeling AGI Safety frameworks with causal influence diagrams, accepted to IJCAI workshop.\nWrote a paper on avoiding side effects by considering future tasks, accepted to NeurIPS workshop.\nCo-ran a subteam of the safety team focusing on agent incentive design (ongoing).\n\nAI safety outreach:\n\nCo-organized FLI’s Beneficial AGI conference in Puerto Rico, a more long-term focused sequel to the original Puerto Rico conference and the Asilomar conference. This year I was the program chair for the technical safety track of the conference.\nCo-organized the ICLR AI safety workshop, Safe Machine Learning: Specification, Robustness and Assurance. This was my first time running a paper reviewing process.\nGave a talk at the IJCAI AI safety workshop on specification, robustness an assurance problems.\nTook part in the DeepMind podcast episode on AI safety (“I, robot”).\n\n\nWork effectiveness:\n\nAt the beginning of the year, I found myself overcommitted and kind of burned out. My previous efforts to reduce overcommitment had proved insufficient to not feel stressed and overwhelmed most of the time.\nIn February, I made a rule for myself to decline all non-research commitments that don’t seem like exceptional opportunities. The form that I made last year for evaluating commitments (which I have to fill out before accepting anything) has been helpful for enforcing this rule and avoiding impulsive decisions. The number of commitments went down from 24 in 2018 to 10 in 2019. This has been working well in terms of having more time for research and feeling better about life.\nOrganizing a conference and a workshop back to back was a bit much, and I feel done with organizing large events for a while.\n\n\n\nStopped using work cycles and pomodoros since I’ve been finding the structure a bit too restrictive. Might resume at some point.\nHours of “deep work” per month, as defined in the Deep Work book. This includes things like thinking about research problems, coding, reading and writing papers. It does not include email, organizing, most meetings, coding logistics (e.g. setup or running experiments), etc.\n\n\n\nFor comparison, deep work hours from 2018. My definition of deep work has somewhat broadened over time, but not enough to account for this difference.\n\nHealth / self-care:\n\nI had 7 colds this past year, which is a lot more than my usual rate of 1-2 per year. The first three were in Jan-Feb, which seemed related to the overcommitment burnout. 
Hopefully supplementing zinc will help.\nAveraged 7.2 hours of sleep per night, excluding jetlag (compared to 6.9 hours in 2018).\nAbout a 10% rate of insomnia (excluding jetlag), similar to the end of last year.\nTried the Oura ring and the Dreem headband for measuring sleep quality. The Oura ring consistently thinks I wake up many times per night (probably because I move around a lot) and predicts less than half an hour each of deep and REM sleep. The Dreem, which actually measures EEG signals, estimates that I get an average of 1.3 hours of deep sleep and 1.8 hours of REM sleep per night, which is more than I expected.\nStarted a relaxed form of intermittent fasting in March (aiming for a 10 hour eating window), mostly for longevity and to improve my circadian rhythm. My average eating window length over the year was 10.5 hours, so I wasn’t very strict about it (mostly just avoiding snacks after dinner). One surprising thing I learned was that I can fall asleep just fine while hungry, and am often less hungry when I wake up. My average hours of sleep went up from 6.96 in the 6 months before starting intermittent fasting to 7.32 in the 6 months after. I went to sleep 44 minutes earlier and waking up 20 minutes earlier on average, though the variance of my bedtime actually went up a bit. Overall it seems plausibly useful easy enough to continue next year.\n\nFun stuff:\n\nDid a Caucasus hiking trek in Georgia with family, and consumed a lot of wild berries and hazelnuts along the way.\n\n\n\nDid a road trip in southern Iceland (also with family), saw a ridiculous number of pretty waterfalls, and was in the same room with (artificial) lava.\n\n\n\nTook an advanced class in aerial silks for the first time (I felt a bit underqualified, but learned a lot of fun moves).\nRan a half-marathon along the coast in Devon on hilly terrain in 3 hours and 23 minutes.\nMade some progress on handstands in yoga class (can hold it away from the wall for a few seconds)\nDid two circling retreats (relational meditation)\nRead books: The Divide, 21 Lessons for the 21st Century, The Circadian Code, So Good They Can’t Ignore You, Ending Aging (skimmed).\nGot into Duolingo (brushed up my Spanish and learned a bit of Mandarin). Currently in a quasi-competition with Janos for studying each other’s languages.\n\n2019 prediction outcomes\nResolutions:\n\nAuthor or coauthor two or more academic papers (50%) – yes (3 papers)\nAccept at most 17 non-research commitments (24 last year) (60%) – yes (10 commitments)\nMeditate on at least 250 days (60%) – yes (290 days)\n\nPredictions:\n\nRelative reachability paper accepted at a major conference, not counting workshops (60%) – no\nContinue avoiding processed sugar for the next year (85%) – no (still have the intention and mostly follow it, but less strictly / consistently)\n1-2 housemate turnover at Deep End (2 last year) (80%) – yes (1 housemate moved in)\nAt least 5 rationality sessions will be hosted at Deep End (80%) – no\n\nCalibration over all annual reviews:\n\n50-70% well-calibrated, 80-90% overconfident (66 predictions total)\nCalibration is generally better in 2017-19 (23 predictions) than in 2014-16 (43 predictions). 
There were only 3 70% predictions in 2017-19, so the 100% accuracy is noisy.\nUnsurprisingly, resolutions are more often correct than other predictions (72% vs 56% correct)\n\n\n2020 resolutions and predictions\nResolutions\n\nAuthor or coauthor three or more academic papers (3 last year) (70%)\nAt most 12 non-research commitments (10 last year) (80%)\nMeditate on at least 270 days (290 last year) (80%)\nRead at least 7 books (5 last year) (70%)\nAt least 700 deep work hours (551 last year) (70%)\n\nPredictions\n\nI will write at least 5 blog posts (60%)\nEating window at most 11 hours on at least 240 days (228 last year) (70%)\nI will visit at least 4 new cities with population over 100,000 (11 last year) (70%)\nAt most 1 housemate turnover at Deep End (70%)\nI finish a language in Duolingo (60%)\n\nPast new year reviews: 2018-19, 2017-18, 2016-17, 2015-16, 2014-15.\n", "url": "https://vkrakovna.wordpress.com/2020/01/09/2019-20-new-year-review/", "title": "2019-20 New Year review", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2020-01-09T01:01:55+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=1", "authors": ["Victoria Krakovna"], "id": "458fb6ca4ce30d27bd59015e376c6729", "summary": []} {"text": "Retrospective on the specification gaming examples list\n\nMy post about the specification gaming list was recently nominated for the LessWrong 2018 Review (sort of like a test of time award), which prompted me to write a retrospective (cross-posted here). \nI’ve been pleasantly surprised by how much this resource has caught on in terms of people using it and referring to it (definitely more than I expected when I made it). There were 30 examples on the list when was posted in April 2018, and 20 new examples have been contributed through the form since then.  I think the list has several properties that contributed to wide adoption: it’s fun, standardized, up-to-date, comprehensive, and collaborative.\nSome of the appeal is that it’s fun to read about AI cheating at tasks in unexpected ways (I’ve seen a lot of people post on Twitter about their favorite examples from the list). The standardized spreadsheet format seems easier to refer to as well. I think the crowdsourcing aspect is also helpful – this helps keep it current and comprehensive, and people can feel some ownership of the list since can personally contribute to it. My overall takeaway from this is that safety outreach tools are more likely to be impactful if they are fun and easy for people to engage with.\nThis list had a surprising amount of impact relative to how little work it took me to put it together and maintain it. The hard work of finding and summarizing the examples was done by the people putting together the lists that the master list draws on (Gwern, Lehman, Olsson, Irpan, and others), as well as the people who submit examples through the form. What I do is put them together in a common format and clarify and/or shorten some of the summaries. I also curate the examples to determine whether they fit the definition of specification gaming (as opposed to simply a surprising behavior or solution). Overall, I’ve probably spent around 10 hours so far on creating and maintaining the list, which is not very much. This makes me wonder if there is other low hanging fruit in the safety resources space that we haven’t picked yet. \nI have been using it both as an outreach and research tool. 
On the outreach side, the resource has been helpful for making the argument that safety problems are hard and need general solutions, by making it salient just in how many ways things could go wrong. When presented with an individual example of specification gaming, people often have a default reaction of “well, you can just close the loophole like this”. It’s easier to see that this approach does not scale when presented with 50 examples of gaming behaviors. Any given loophole can seem obvious in hindsight, but 50 loopholes are much less so. I’ve found this useful for communicating a sense of the difficulty and importance of Goodhart’s Law. \nOn the research side, the examples have been helpful for trying to clarify the distinction between reward gaming and tampering problems. Reward gaming happens when the reward function is designed incorrectly (so the agent is gaming the design specification), while reward tampering happens when the reward function is implemented incorrectly or embedded in the environment (and so can be thought of as gaming the implementation specification). The boat race example is reward gaming, since the score function was defined incorrectly, while the Qbert agent finding a bug that makes the platforms blink and gives the agent millions of points is reward tampering. We don’t currently have any real examples of the agent gaining control of the reward channel (probably because the action spaces of present-day agents are too limited), which seems qualitatively different from the numerous examples of agents exploiting implementation bugs. \nI’m curious what people find the list useful for – as a safety outreach tool, a research tool or intuition pump, or something else? I’d also be interested in suggestions for improving the list (formatting, categorizing, etc). Thanks everyone who has contributed to the resource so far!", "url": "https://vkrakovna.wordpress.com/2019/12/20/retrospective-on-the-specification-gaming-examples-list/", "title": "Retrospective on the specification gaming examples list", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2019-12-20T16:58:11+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=2", "authors": ["Victoria Krakovna"], "id": "45b97dc5f5a444e200f850bd0808f2d9", "summary": []} {"text": "Classifying specification problems as variants of Goodhart’s Law\n\n(Coauthored with Ramana Kumar and cross-posted from the Alignment Forum. Summarized in Alignment Newsletter #76.)\nThere are a few different classifications of safety problems, including the Specification, Robustness and Assurance (SRA) taxonomy and the Goodhart’s Law taxonomy. In SRA, the specification category is about defining the purpose of the system, i.e. specifying its incentives.  Since incentive problems can be seen as manifestations of Goodhart’s Law, we explore how the specification category of the SRA taxonomy maps to the Goodhart taxonomy. The mapping is an attempt to integrate different breakdowns of the safety problem space into a coherent whole. We hope that a consistent classification of current safety problems will help develop solutions that are effective for entire classes of problems, including future problems that have not yet been identified.\nThe SRA taxonomy defines three different types of specifications of the agent’s objective: ideal (a perfect description of the wishes of the human designer), design (the stated objective of the agent) and revealed (the objective recovered from the agent’s behavior). 
It then divides specification problems into design problems (e.g. side effects) that correspond to a difference between the ideal and design specifications, and emergent problems (e.g. tampering) that correspond to a difference between the design and revealed specifications.\nIn the Goodhart taxonomy, there is a variable U* representing the true objective, and a variable U representing the proxy for the objective (e.g. a reward function). The taxonomy identifies four types of Goodhart effects: regressional (maximizing U also selects for the difference between U and U*), extremal (maximizing U takes the agent outside the region where U and U* are correlated), causal (the agent intervenes to maximize U in a way that does not affect U*), and adversarial (the agent has a different goal W and exploits the proxy U to maximize W).\nWe think there is a correspondence between these taxonomies: design problems are regressional and extremal Goodhart effects, while emergent problems are causal Goodhart effects. The rest of this post will explain and refine this correspondence.\n\n\nThe SRA taxonomy needs to be refined in order to capture the distinction between regressional and extremal Goodhart effects, and to pinpoint the source of causal Goodhart effects. To this end, we add a model specification as an intermediate point between the ideal and design specifications, and an implementation specification between the design and revealed specifications. \nThe model specification is the best proxy within a chosen formalism (e.g. model class or specification language), i.e. the proxy that most closely approximates the ideal specification. In a reinforcement learning setting, the model specification is the reward function (defined in the given MDP/R over the given state space) that best captures the human designer’s preferences. \n\nThe ideal-model gap corresponds to the model design problem (regressional Goodhart): choosing a model that is tractable but also expressive enough to approximate the ideal specification well.\nThe model-design gap corresponds to proxy design problems (extremal Goodhart), such as specification gaming and side effects. \n\nWhile the design specification is a high-level description of what should be executed by the system, the implementation specification is a specification that can be executed, which includes agent and environment code (e.g. an executable Linux binary). (We note that it is also possible to define other specification levels at intermediate levels of abstraction between design and implementation, e.g. using pseudocode rather than executable code.)\n\nThe design-implementation gap corresponds to tampering problems (causal Goodhart), since they exploit implementation flaws (such as bugs that allow the agent to overwrite the reward). (Note that tampering problems are referred to as wireheading and delusions in the SRA.)\nThe implementation-revealed gap corresponds to robustness problems in the SRA (e.g. unsafe exploration).  \n\n\nIn the model design problem, U is the best approximation of U* within the given model.  As long as the global maximum M for U is not exactly the same as the global maximum M* for U*, the agent will not find M*. This corresponds to regressional Goodhart: selecting for U will also select for the difference between U and U*, so the optimization process will overfit to U at the expense of U*. 
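To make the regressional effect concrete, here is a minimal simulation sketch (illustrative only, with an assumed toy Gaussian setup and made-up names): each candidate has a true value U* and a proxy score U = U* + noise, and selecting the candidate with the highest U systematically falls short of the best achievable U*, because the argmax also selects for favorable noise.

import random

random.seed(0)

def regressional_goodhart_gap(n_candidates=1000, noise_scale=1.0, trials=200):
    # Average shortfall in true value U* when choosing the candidate with the highest
    # proxy U = U* + noise, compared to the candidate with the highest U*.
    total_gap = 0.0
    for _ in range(trials):
        true_values = [random.gauss(0, 1) for _ in range(n_candidates)]           # U* for each candidate
        proxy_values = [u + random.gauss(0, noise_scale) for u in true_values]    # U = U* + noise
        chosen = max(range(n_candidates), key=lambda i: proxy_values[i])          # optimize the proxy
        total_gap += max(true_values) - true_values[chosen]
    return total_gap / trials

print(regressional_goodhart_gap(noise_scale=0.5))
print(regressional_goodhart_gap(noise_scale=2.0))  # more noise -> larger U* shortfall

Increasing the noise scale widens the gap, which is the overfitting-to-U effect described above.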
\n\nIn proxy design problems, U and U* are correlated under normal circumstances, but the correlation breaks in situations when U is maximized, which is an extremal Goodhart effect. The proxy U is often designed to approximate U* by having a maximum at a global maximum M* of U*. Different ways that this approximation fails produce different problems.\n\nIn specification gaming problems, M* turns out to be a local (rather than global) maximum for U, e.g. if M* is the strategy of following the racetrack in the boat race game. The agent finds the global maximum M for U, e.g. the strategy of going in circles and repeatedly hitting the same reward blocks. This is an extrapolation of the reward function outside the training domain that it was designed for, so the correlation with the true objective no longer holds. This is an extremal Goodhart effect due to regime change.\n\n\n\nIn side effect problems, M* is a global maximum for U, but U incorrectly approximates U* by being flat in certain dimensions (corresponding to indifference to certain variables, e.g. whether a vase is broken). Then the set of global maxima for U is much larger than the set of global maxima for U*, and most points in that set are not global maxima for U*. Maximizing U can take the agent into a region where U doesn’t match U*, and the agent finds a point M that is also a global maximum for U, but not a global maximum for U*. This is an extremal Goodhart effect due to model insufficiency.\n\n\nCurrent solutions to proxy design problems involve taking the proxy less literally: by injecting uncertainty (e.g. quantilization), avoiding extrapolation (e.g. inverse reward design), or adding a term for omitted preferences (e.g. impact measures). \nIn tampering problems, we have a causal link U* -> U. Tampering occurs when the agent intervenes on some variable W that has a causal effect on U that does not involve U*, which is a causal Goodhart effect. W could be the reward function parameters, the human feedback data (in reward learning), the observation function parameters (in a POMDP), or the status of the shutdown button. The overall structure is  U* -> U <- W.\nFor example, in the Rocks & Diamonds environment, U* is the number of diamonds delivered by the agent to the goal area. Intervening on the reward function to make it reward rocks increases the reward U without increasing U* (the number of diamonds delivered). \nCurrent solutions to tampering problems involve modifying the causal graph to remove the tampering incentives, e.g. by using approval-direction or introducing counterfactual variables. \n[Updated] We think that mesa-optimization belongs in the implementation-revealed gap, rather than in the design-implementation gap, since it can happen during the learning process even if the implementation specification matches the ideal specification, and can be seen as a robustness problem. When we consider this problem one level down, as a specification problem for the mesa-optimizer from the main agent’s perspective, it can take the form of any of the four Goodhart effects. The four types of alignment problems in the mesa-optimization paper can be mapped to the four types of Goodhart’s Law as follows: approximate alignment is regressional, side effect alignment is extremal, instrumental alignment is causal, and deceptive alignment is adversarial. 
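Returning to the tampering case, the structure U* -> U <- W can be illustrated with a minimal sketch in the spirit of the Rocks & Diamonds example (the code, names and numbers here are assumed for illustration, not taken from the paper): the observed reward U is computed from tamperable reward parameters W, so rewriting W raises U without changing the true objective U*.

def true_utility(delivered):
    # U*: the number of diamonds delivered to the goal area
    return delivered["diamonds"]

def observed_reward(delivered, reward_params):
    # U: computed from the reward parameters W, which the agent can intervene on
    return sum(reward_params[item] * count for item, count in delivered.items())

delivered = {"diamonds": 1, "rocks": 4}
designed_params = {"diamonds": 1.0, "rocks": 0.0}   # W as designed
tampered_params = {"diamonds": 1.0, "rocks": 1.0}   # W after the agent rewrites it to also reward rocks

print(true_utility(delivered))                          # U* = 1 in both cases
print(observed_reward(delivered, designed_params))      # U = 1.0 under the designed W
print(observed_reward(delivered, tampered_params))      # U = 5.0 after tampering, with no change in U*

A reward-maximizing agent therefore prefers the intervention on W whenever it is available, which is exactly the causal Goodhart failure that the modified causal graphs mentioned above are meant to remove.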
\nThis correspondence is consistent with the connection between the Goodhart taxonomy and the selection vs control distinction, where regressional and extremal Goodhart are more relevant for selection, while causal Goodhart is more relevant for control. The design specification is generated by a selection process, while the revealed specification is generated by a control process. Thus, design problems represent difficulties with selection, while emergent problems represent difficulties with control. \n[Updated] Putting it all together:\n\nIn terms of the limitations of this mapping, we are not sure about model specification being the dividing line between regressional and extremal Goodhart. For example, a poor choice of model specification could deviate from the ideal specification in systematic ways that result in extremal Goodhart effects. It is also unclear how adversarial Goodhart fits into this mapping. Since an adversary can exploit any differences between U* and U (taking advantage of the other three types of Goodhart effects) it seems that adversarial Goodhart effects can happen anywhere in the ideal-implementation gap. \nWe hope that you find the mapping useful for your thinking about the safety problem space, and welcome your feedback and comments. We are particularly interested if you think some of the correspondences in this post are wrong. \n(Thanks to Jan Leike and Tom Everitt for their helpful feedback on this post.)", "url": "https://vkrakovna.wordpress.com/2019/08/19/classifying-specification-problems-as-variants-of-goodharts-law/", "title": "Classifying specification problems as variants of Goodhart’s Law", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2019-08-19T20:42:00+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=2", "authors": ["Victoria Krakovna"], "id": "e20b0f42fffee45f63a6c70895f79900", "summary": []} {"text": "ICLR Safe ML workshop report\n\nThis year the ICLR conference hosted topic-based workshops for the first time (as opposed to a single track for workshop papers), and I co-organized the Safe ML workshop. One of the main goals was to bring together near and long term safety research communities.\n\nThe workshop was structured according to a taxonomy that incorporates both near and long term safety research into three areas – specification, robustness and assurance.\n\nSpecification (define the purpose of the system): reward hacking, side effects, preference learning, fairness\nRobustness (design system to withstand perturbations): adaptation, verification, worst-case robustness, safe exploration\nAssurance (monitor and control system activity): interpretability, monitoring, privacy, interruptibility\n\nWe had an invited talk and a contributed talk in each of the three areas.\n\nTalks\nIn the specification area, Dylan Hadfield-Menell spoke about formalizing the value alignment problem in the Inverse RL framework.\n\nDavid Krueger presented a paper on hidden incentives for the agent to shift its task distribution in the meta-learning setting.\n\nIn the robustness area, Ian Goodfellow argued for dynamic defenses against adversarial examples and encouraged the research community to consider threat models beyond small perturbations within a norm ball of the original data point.\n\nAvraham Ruderman presented a paper on worst-case analysis for discovering surprising behaviors (e.g. 
failing to find the goal in simple mazes).\n\nIn the assurance area, Cynthia Rudin argued that interpretability doesn’t have to trade off with accuracy (especially in applications), and that it is helpful for solving research problems in all areas of safety.\n\nBeomsu Kim presented a paper explaining why adversarial training improves the interpretability of gradients for deep neural networks.\n\nPanels\nThe workshop panels discussed possible overlaps between different research areas in safety and research priorities going forward.\nIn terms of overlaps, the main takeaway was that advancing interpretability is useful for all safety problems. Also, adversarial robustness can contribute to value alignment – e.g. reward gaming behaviors can be viewed as a system finding adversarial examples for its reward function. However, there was a cautionary point that while near- and long-term problems are often similar, solutions might not transfer well between these areas (e.g. some solutions to near-term problems might not be sufficiently general to help with value alignment).\n\nThe research priorities panel recommended more work on adversarial examples with realistic threat models (as mentioned above), complex environments for testing value alignment (e.g. creating new structures in Minecraft without touching existing ones), fairness formalizations with more input from social scientists, and improving cybersecurity.\nPapers\nOut of the 35 accepted papers, 5 were on long-term safety / value alignment, and the rest were on near-term safety. Half of the near-term paper submissions were on adversarial examples, so the resulting pool of accepted papers was skewed as well: 14 on adversarial examples, 5 on interpretability, 3 on safe RL, 3 on other robustness, 2 on fairness, 2 on verification, and 1 on privacy. Here is a summary of the value alignment papers:\nMisleading meta-objectives and hidden incentives for distributional shift by Krueger et al shows that RL agents in a meta-learning context have an incentive to shift their task distribution instead of solving the intended task. For example, a household robot whose task is to predict whether its owner will want coffee could wake up its owner early in the morning to make this prediction task easier. This is called a ‘self-induced distributional shift’ (SIDS), and the incentive to do so is a ‘hidden incentive for distributional shift’ (HIDS). The paper demonstrates this behavior experimentally and shows how to avoid it.\n\nHow useful is quantilization for mitigating specification-gaming? by Ryan Carey introduces variants of several classic environments (Mountain Car, Hopper and Video Pinball) where the observed reward differs from the true reward, creating an opportunity for the agent to game the specification of the observed reward. The paper shows that a quantilizing agent avoids specification gaming and performs better in terms of true reward than both imitation learning and a regular RL agent on all the environments.\n\nDelegative Reinforcement Learning: learning to avoid traps with a little help by Vanessa Kosoy introduces an RL algorithm that avoids traps in the environment (states where regret is linear) by delegating some actions to an external advisor, and achieves sublinear regret in a continual learning setting. 
(Summarized in Alignment Newsletter #57)\n\nGeneralizing from a few environments in safety-critical reinforcement learning by Kenton et al investigates how well RL agents avoid catastrophes in new gridworld environments depending on the number of training environments. They find that both model ensembling and learning a catastrophe classifier (used to block actions) are helpful for avoiding catastrophes, with different safety-performance tradeoffs on new environments.\n\nRegulatory markets for AI safety by Clark and Hadfield proposes a new model for regulating AI development where regulation targets are required to choose regulatory services from a private market that is overseen by the government. This allows regulation to efficiently operate on a global scale and keep up with the pace of technological development and better ensure safe deployment of AI systems. (Summarized in Alignment Newsletter #55)\n\n \nThe workshop got a pretty good turnout (around 100 people). Thanks everyone for participating, and thanks to our reviewers, sponsors, and my fellow organizers for making it happen!\n\n(Cross-posted to the FLI blog. Thanks to Janos Kramar for his feedback on this post.)", "url": "https://vkrakovna.wordpress.com/2019/06/18/iclr-safe-ml-workshop-report/", "title": "ICLR Safe ML workshop report", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2019-06-17T23:10:15+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=2", "authors": ["Victoria Krakovna"], "id": "66bf89f3f9340f0befe323c52d18f3d2", "summary": []} {"text": "2018-19 New Year review\n\n2018 progress\nResearch / AI safety:\n\nWrote a paper on measuring side effects using relative reachability in May, and presented the results at the ICML GoalsRL workshop and the AI safety summer school. Since then, some new approaches have come out using my method as a baseline :).\nMade a list of 30 specification gaming examples in AI (assembled from several existing lists). Since the list was posted in April, 16 new examples have been contributed through the form (thanks everyone!). The list received some attention on Twitter, and I was interviewed about it by Wired and the Times.\nWas in the top 30% of NeurIPS reviewers.\nGave talks at the Oxford AI Society, EA Global London, etc.\nGot involved in organizing the upcoming ICLR AI safety workshop, Safe Machine Learning: Specification, Robustness and Assurance.\n\nRationality / effectiveness:\n\nAttended the CFAR mentoring workshop in Prague, and started running rationality training sessions with Janos at our group house.\nStarted using work cycles – focused work blocks (e.g. pomodoros) with built-in reflection prompts. I think this has increased my productivity and focus to some degree. The prompt “how will I get started?” has been surprisingly helpful given its simplicity.\nStopped eating processed sugar for health reasons at the end of 2017 and have been avoiding it ever since.\n\nThis has been surprisingly easy, especially compared to my earlier attempts to eat less sugar. 
I think there are two factors behind this: avoiding sugar made everything taste sweeter (so many things that used to taste good now seem inedibly sweet), and the mindset shift from “this is a luxury that I shouldn’t indulge in” to “this is not food”.\nUnfortunately, I can’t make any conclusions about the effects on my mood variables because of some issues with my data recording process :(.\n\n\nDeclining levels of insomnia (excluding jetlag):\n\n22% of nights in the first half of 2017, 16% in the second half of 2017, 16% in the first half of 2018, 10% in the second half of 2018.\nThis is probably an effect of the sleep CBT program I did in 2017, though avoiding sugar might be a factor as well.\n\n\nMade some progress on reducing non-research commitments (talks, reviewing, organizing, etc).\n\nSet up some systems for this: a spreadsheet to keep track of requests to do things (with 0-3 ratings for workload and 0-2 ratings for regret) and a form to fill out whenever I’m thinking of accepting a commitment.\nMy overall acceptance rate for commitments has gone down a bit from 29% in 2017 to 24% in 2018. The average regret per commitment went down from 0.66 in 2017 to 0.53 in 2018.\nHowever, since the number of requests has gone up, I ended up with more things to do overall: 12 commitments with a total of 23 units of workload in 2017 vs 19 commitments with a total of 33 units of workload in 2018. (1 unit of workload ~ 5 hours)\n\n\n\n\nFun stuff:\n\nHiked in the Alps for the first time:\n\nTour de Mont Blanc – a weeklong hike around Mont Blanc going through France, Italy and Switzerland. It felt funny to cross a mountain pass and end up in a different country without anyone checking my passport. There were a lot of meadows and cows. \nMonte Rosa glacier hike (Gnifetti normal route). We were all connected by a rope in case someone falls into a crack in the ice. The first night (at 3500m) I could not sleep at all due to altitude and had the interesting experience of a full day hike afterwards.\n\n\nSpontaneous solo trip to Amsterdam for my birthday\nHelped run the Dead Hand Path camp at Burning Man and organized a series of AI safety talks\nRead the Book of Why, Other Minds, The Player of Games, Life 3.0, The Elephant in the Brain.\nDid 5 chinups in a row (only once, usually I can do 3)\nLearned a headstand in yoga class\nLearned some new moves in aerial silks\nOur group house has been adopted by a neighbour’s cat (it all started with crashing parties). 
After some of our housemates moved a few blocks away, the cat has been splitting her time between the two houses.\n\n\n2018 prediction outcomes\nResolutions:\n\nWrite at least 2 AI blog posts that are not about conferences (1 last year) (70%) – 4 posts\nAvoid processed sugar at least until end of March (90%) – yes (still going)\nDo at most 4 non-research talks/panels (7 last year) (50%) – 5 talks\nMeditate on at least 250 days (50%) – 283 days\n\nPredictions:\n\nOur AI safety team will have at least two papers accepted for publication at a major conference, not counting workshops (80%) – yes\nI will write at least 6 blog posts (60%) – wrote 5 posts\nI will go to at least 100 exercise classes (80 last year) (60%) – 123 classes\n1-2 housemate turnover at the Deep End (3 last year) (70%) – 2 housemates\nI will visit at least 3 new cities with population over 100,000 (4 last year) (50%) – Amsterdam, Geneva, Stockholm, Prague\nI will go on at least 2 hikes (4 last year) (90%) – 3 major hikes (Vancouver Island, Tour de Mont Blanc, Monte Rosa)\n\nCalibration:\n\n50-60%: 3 correct, 2 wrong\n70-90%: 5 correct\nHigh-confidence predictions are underconfident, low-confidence predictions are well-calibrated.\n\n2019 goals and predictions\nResolutions:\n\nAuthor or coauthor two or more academic papers (50%)\nAccept at most 17 non-research commitments (24 last year) (60%)\nMeditate on at least 250 days (60%)\n\nPredictions:\n\nRelative reachability paper accepted at a major conference, not counting workshops (60%)\nContinue avoiding processed sugar for the next year (85%)\n1-2 housemate turnover at Deep End (2 last year) (80%)\nAt least 5 rationality sessions will be hosted at Deep End (80%)\n\nPast new year reviews: 2017-18, 2016-17, 2015-16, 2014-15.", "url": "https://vkrakovna.wordpress.com/2019/01/01/2018-19-new-year-review/", "title": "2018-19 New Year review", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2019-01-01T21:43:12+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=2", "authors": ["Victoria Krakovna"], "id": "a66b7a812a9ebb97b337856e6e69802d", "summary": []} {"text": "Discussion on the machine learning approach to AI safety\n\nAt this year’s EA Global London conference, Jan Leike and I ran a discussion session on the machine learning approach to AI safety. We explored some of the assumptions and considerations that come up as we reflect on different research agendas. Slides for the discussion can be found here.\nThe discussion focused on two topics. The first topic examined assumptions made by the ML safety approach as a whole, based on the blog post Conceptual issues in AI safety: the paradigmatic gap. The second topic zoomed into specification problems, which both of us work on, and compared our approaches to these problems.\n\nAssumptions in ML safety\nThe ML safety approach focuses on safety problems that can be expected to arise for advanced AI and can be investigated in current AI systems. This is distinct from the foundations approach, which considers safety problems that can be expected to arise for superintelligent AI, and develops theoretical approaches to understanding and solving these problems from first principles.\nWhile testing on current systems provides a useful empirical feedback loop, there is a concern that the resulting solutions might not be relevant for more advanced systems, which could potentially be very different from current ones. 
The Paradigmatic Gap post made an analogy between trying to solve safety problems with general AI using today’s systems as a model, and trying to solve safety problems with cars in the horse and buggy era. The horse to car transition is an example of a paradigm shift that renders many current issues irrelevant (e.g. horse waste and carcasses in the streets) and introduces new ones (e.g. air pollution). A paradigm shift of that scale in AI would be deep learning or reinforcement learning becoming obsolete.\n\n(image credits: horse, car, diagram)\nIf we imagine living in the horse carriage era, could we usefully consider possible safety issues in future transportation? We could invent brakes and stop lights to prevent collisions between horse carriages, or seat belts to protect people in those collisions, which would be relevant for cars too. Jan pointed out that we could consider a hypothetical scenario with super-fast horses and come up with corresponding safety measures (e.g. pedestrian-free highways). Of course, we might also consider possible negative effects on the human body from moving at high speeds, which turned out not to be an issue. When trying to predict problems with powerful future technology, some of the concerns are likely to be unfounded – this seems like a reasonable price to pay for being proactive on the concerns that do pan out.\nGetting back to machine learning, the Paradigmatic Gap post had a handy list of assumptions that could potentially lead to paradigm shifts in the future. We went through this list, and rated each of them based on how much we think ML safety work is relying on it and how likely it is to hold up for general AI systems (on a 1-10 scale). These ratings were based on a quick intuitive judgment rather than prolonged reflection, and are not set in stone.\n\n\n\nAssumption\nReliance (V)\nReliance (J)\nHold up (V)\nHold up (J)\n\n\n1. Train/test regime\n3\n2\n2\n3\n\n\n2. Reinforcement learning\n9\n9\n9\n8\n\n\n3. Markov Decision Processes (MDPs)\n2\n2\n1\n2\n\n\n4. Stationarity / IID data sampling\n1\n2\n1\n1\n\n\n5. RL agents with discrete action spaces\n7\n8\n2\n5\n\n\n6. RL agents with pre-determined action spaces\n6\n9\n5\n5\n\n\n7. Gradient-based learning / local parameter search\n2\n3\n4\n7\n\n\n8. (Purely) parametric models\n2\n3\n5\n3\n\n\n9. The notion of discrete “tasks” or “objectives” that systems optimize\n4\n10\n6\n8\n\n\n10. Probabilistic inference as a framework for learning and inference\n4\n8\n9\n7\n\n\n\nOur ratings agreed on most of the assumptions:\n\nWe strongly rely on reinforcement learning, defined as the general framework where an agent interacts with an environment and receives some sort of reward signal, which also includes methods like evolutionary strategies. We would be very surprised if general AI did not operate in this framework.\nDiscrete action spaces (#5) is the only assumption that we strongly rely on but don’t strongly expect to hold up. 
I would expect effective techniques for discretizing continuous action spaces to be developed in the future, so I’m not sure how much of an issue this is.\nThe train/test regime, MDPs and stationarity can be useful for prototyping safety methods but don’t seem difficult to generalize from.\nA significant part of safety work focuses on designing good objective functions for AI systems, and does not depend on properties of the architecture like gradient-based learning and parametric models (#7-8).\n\nWe weren’t sure how to interpret some of the assumptions on the list, so our ratings and disagreements on these are not set in stone:\n\nWe interpreted #6 as having a fixed action space where the agent cannot invent new actions.\nOur disagreement about reliance on #9 was probably due to different interpretations of what it means to optimize discrete tasks. I interpreted it as the agent being trained from scratch for specific tasks, while Jan interpreted it as the agent having some kind of objective (potentially very high-level like “maximize human approval”).\nNot sure what it would mean not to use probabilistic inference or what the alternative could be.\n\nAn additional assumption mentioned by the audience is the agent/environment separation. The alternative to this assumption is being explored by MIRI in their work on embedded agents. I think we rely on this a lot, and it’s moderately likely to hold up.\nVishal pointed out a general argument for the current assumptions holding up. If there are many ways to build general AI, then the approaches with a lot of resources and effort behind them are more likely to succeed (assuming that the approach in question could produce general AI in principle).\nApproaches to specification problems\nSpecification problems arise when specifying the objective of the AI system. An objective specification is a proxy for the human designer’s preferences. A misspecified objective that does not match the human’s intention can result in undesirable behaviors that maximize the proxy but don’t solve the goal.\nThere are two classes of approaches to specification problems – human-in-the-loop (e.g. reward learning or iterated amplification) and problem-specific (e.g. side effects / impact measures or reward corruption theory). Human-in-the-loop approaches are more general and can address many safety problems at once, but it can be difficult to tell when the resulting system has received the right amount and type of human data to produce safe behavior. The two approaches are complementary, and could be combined by using problem-specific solutions as an inductive bias for human-in-the-loop approaches.\nWe considered some arguments for using pure human-in-the-loop learning and for complementing it with problem-specific approaches, and rated how strong each argument is (on a 1-10 scale):\n\n\n\nArgument for pure human-in-the-loop approaches\n\nStrength (V)\n\n\nStrength (J)\n\n\n\n1. Highly capable agents acting effectively in an open-ended environment would likely already have safety-relevant concepts\n\n7\n\n\n7\n\n\n\n2. Non-adaptive methods can be reward-hacked\n\n4\n\n\n5\n\n\n\n3. Unknown unknowns may be a big part of the problem landscape\n\n6\n\n\n8\n\n\n\n4. More likely to be solved in the next 10 years\n\n5\n\n\n5-7\n\n\n\nArgument for complementing with problem-specific approaches\n\nStrength (V)\n\n\nStrength (J)\n\n\n\n1. Could provide a useful inductive bias\n\n9\n\n\n8\n\n\n\n2. Could help reduce the amount of human data needed\n\n6\n\n\n6\n\n\n\n3. 
Could provide better understanding of what humans want and more trust in the system avoiding specific classes of behaviors\n\n9\n\n\n7\n\n\n\n4. Human feedback may not distinguish between objectives and strategies (e.g. disapprove of losing lives in a game)\n\n5\n\n\n2\n\n\n\n\nOur ratings agreed on most of these arguments:\n\nThe strongest reasons to focus on pure human-in-the-loop learning are #1 (getting some safety-relevant concepts for free) and #3 (unknown unknowns).\nThe strongest reasons to complement with problem-specific approaches are #1 (useful inductive bias) and #3 (higher understanding and trust).\nReward hacking is an issue for non-adaptive methods (e.g. if the reward function is frozen in reward learning), but making problem-specific approaches adaptive does not seem that hard.\n\nThe main disagreement was about argument #4 for complementing – Jan expects the objectives/strategies issues to be more easily solvable than I do.\n\nOverall, this was an interesting conversation that helped clarify my understanding of the safety research landscape. This is part of a longer conversation that is very much a work in progress, and we expect to continue discussing and updating our views on these considerations.", "url": "https://vkrakovna.wordpress.com/2018/11/01/discussion-on-the-machine-learning-approach-to-ai-safety/", "title": "Discussion on the machine learning approach to AI safety", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2018-11-01T20:22:43+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=2", "authors": ["Victoria Krakovna"], "id": "9c4cbb67cb64fd1973ceace76af0d930", "summary": []} {"text": "Measuring and avoiding side effects using relative reachability\n\nA major challenge in AI safety is reliably specifying human preferences to AI systems. An incorrect or incomplete specification of the objective can result in undesirable behavior like specification gaming or causing negative side effects. There are various ways to make the notion of a “side effect” more precise – I think of it as a disruption of the agent’s environment that is unnecessary for achieving its objective. For example, if a robot is carrying boxes and bumps into a vase in its path, breaking the vase is a side effect, because the robot could have easily gone around the vase. On the other hand, a cooking robot that’s making an omelette has to break some eggs, so breaking eggs is not a side effect.\n(image credits: 1, 2, 3)\nHow can we measure side effects in a general way that’s not tailored to particular environments or tasks, and incentivize the agent to avoid them? This is the central question of our latest paper.\n\nPart of the challenge is that it’s easy to introduce bad incentives for the agent when trying to penalize side effects. Previous work on this problem has focused either on preserving reversibility or reducing the agent’s impact on the environment, and both of these approaches introduce different kinds of problematic incentives:\n\nPreserving reversibility (i.e. keeping the starting state reachable) encourages the agent to prevent all irreversible events in the environment (e.g. humans eating food). Also, if the objective requires an irreversible action (e.g. breaking eggs for the omelette), then any further irreversible actions will not be penalized, since reversibility has already been lost.\nPenalizing impact (i.e. 
some measure of distance from the default outcome) does not take reachability of states into account, and treats reversible and irreversible effects equally (due to the symmetry of the distance measure). For example, the agent would be equally penalized for breaking a vase and for preventing a vase from being broken, though the first action is clearly worse. This leads to “overcompensation” (“offsetting”) behaviors: when rewarded for preventing the vase from being broken, an agent with a low impact penalty rescues the vase, collects the reward, and then breaks the vase anyway (to get back to the default outcome).\n\nBoth of these approaches are doing something right: it’s a good idea to take reachability into account, and it’s also a good idea to compare to the default outcome (instead of the initial state). We can put the two together and compare to the default outcome using a reachability-based measure. Then the agent no longer has an incentive to prevent everything irreversible from happening or to overcompensate for preventing an irreversible event.\nWe still have a problem with the case where the objective requires an irreversible action. Simply penalizing the agent for making the default outcome unreachable would create a “what the hell” effect where the agent has no incentive to avoid any further irreversible actions. To get around this, instead of considering the reachability of the default state, we consider the reachability of all states. For each state, we penalize the agent for making it less reachable than it would be from the default state. In a deterministic environment, the penalty would be the number of states in the shaded area:\n\nSince each irreversible action cuts off more of the state space (e.g. breaking a vase makes all the states where the vase was intact unreachable), the penalty will increase accordingly. We call this measure “relative reachability”.\nWe ran some simple experiments with a tabular Q-learning agent in the AI Safety Gridworlds framework to provide a proof of concept that relative reachability of the default outcome avoids the bad incentives described above.\n\nIn the first gridworld, the agent needs to get to the goal G, but there’s a box in the way, which can only be moved by pushing. The shortest path to the goal pushes the box down into a corner (an irrecoverable position), while a longer path pushes the box to the right (a recoverable position). The safe behavior is to take the longer path. The agent with the relative reachability penalty takes the longer path, while the agent with the reversibility penalty fails. This happens because any path to the goal involves an irreversible effect – once the box has been moved, the agent and the box cannot both return to their starting positions. Thus, the agent receives the maximal penalty for both paths, and has no incentive to follow the safe path.\n\nIn the second gridworld, there is an irreversible event that happens by default, when an object reaches the end of the conveyor belt. This environment has two variants:\n\nThe object is a vase, and the agent is rewarded for taking it off the belt (the agent’s task is to rescue the vase).\nThe object is a sushi dish in a conveyor belt sushi restaurant, and the agent receives no reward for taking it off the belt (the agent is not supposed to interfere).\n\nThis gridworld was designed specifically to test for bad incentives that could be introduced by penalizing side effects, so an agent with no side effect penalty would behave correctly. 
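Before turning to the results, here is a minimal sketch of how the relative reachability penalty defined above could be computed in a small deterministic environment (the transition function, state names, and actions are illustrative, not the paper's code): count the states that are reachable from the baseline state but no longer reachable from the current state.

from collections import deque

def reachable(start, actions, step):
    # All states reachable from `start` under the deterministic transition function `step`
    seen, frontier = {start}, deque([start])
    while frontier:
        s = frontier.popleft()
        for a in actions:
            t = step(s, a)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def relative_reachability_penalty(current, baseline, actions, step):
    # Number of states reachable from the baseline state but not from the current state
    return len(reachable(baseline, actions, step) - reachable(current, actions, step))

# Toy chain 0 <-> 1 <-> 2 with an absorbing "broken" state 3 reachable from 2
def step(s, a):
    table = {(0, "right"): 1, (1, "left"): 0, (1, "right"): 2, (2, "left"): 1, (2, "break"): 3}
    return table.get((s, a), s)  # other actions leave the state unchanged

actions = ["left", "right", "break"]
print(relative_reachability_penalty(current=3, baseline=0, actions=actions, step=step))  # 3

With that sketch in mind, back to the gridworld results.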
We find that the agent with a low impact penalty engages in overcompensation behavior by putting the vase back on the belt after collecting the reward, while the agent with a reversibility preserving penalty takes the sushi dish off the belt despite getting no reward for doing so. The agent with a relative reachability penalty behaves correctly in both variants of the environment.\nOf course, the relative reachability definition in its current form is not very tractable in realistic environments: there are too many possible states to be considered, the agent is not aware of all the states when it begins training, and the default outcome can be difficult to define and simulate. We expect that the definition can be approximated by considering the reachability of representative states (similarly to methods for approximating empowerment). To define the default outcome, we would need a more precise notion of the agent “doing nothing” (e.g. “no-op” actions are not always available or meaningful). We leave a more practical implementation of relative reachability to future work.\nWhile relative reachability improves on the existing approaches, it might not incorporate all the considerations we would want to be part of a side effects measure. There are some effects on the agent’s environment that we might care about even if they don’t decrease future options compared to the default outcome. It might be possible to combine relative reachability with such considerations, but there could potentially be a tradeoff between taking these considerations into account and avoiding overcompensation behaviors. We leave these investigations to future work as well.", "url": "https://vkrakovna.wordpress.com/2018/06/05/measuring-and-avoiding-side-effects-using-relative-reachability/", "title": "Measuring and avoiding side effects using relative reachability", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2018-06-05T14:15:06+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=2", "authors": ["Victoria Krakovna"], "id": "60f6f96d53c3479876717b82e88da038", "summary": []} {"text": "Specification gaming examples in AI\n\nUpdate: for a more detailed introduction to specification gaming, check out the DeepMind Safety Research blog post!\nVarious examples (and lists of examples) of unintended behaviors in AI systems have appeared in recent years. One interesting type of unintended behavior is finding a way to game the specified objective: generating a solution that literally satisfies the stated objective but fails to solve the problem according to the human designer’s intent. This occurs when the objective is poorly specified, and includes reinforcement learning agents hacking the reward function, evolutionary algorithms gaming the fitness function, etc.\nWhile ‘specification gaming’ is a somewhat vague category, it is particularly referring to behaviors that are clearly hacks, not just suboptimal solutions. A classic example is OpenAI’s demo of a reinforcement learning agent in a boat racing game going in circles and repeatedly hitting the same reward targets instead of actually playing the game.\n\nSince such examples are currently scattered across several lists, I have put together a master list of examples collected from the various existing sources. This list is intended to be comprehensive and up-to-date, and serve as a resource for AI safety research and discussion. 
If you know of any interesting examples of specification gaming that are missing from the list, please submit them through this form.\nThanks to Gwern Branwen, Catherine Olsson, Alex Irpan, and others for collecting and contributing examples!", "url": "https://vkrakovna.wordpress.com/2018/04/02/specification-gaming-examples-in-ai/", "title": "Specification gaming examples in AI", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2018-04-01T23:33:20+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=2", "authors": ["Victoria Krakovna"], "id": "e2bfe08485b39e468bde2c3a52198356", "summary": []} {"text": "Is there a tradeoff between immediate and longer-term AI safety efforts?\n\nSomething I often hear in the machine learning community and media articles is “Worries about superintelligence are a distraction from the *real* problem X that we are facing today with AI” (where X = algorithmic bias, technological unemployment, interpretability, data privacy, etc). This competitive attitude gives the impression that immediate and longer-term safety concerns are in conflict. But is there actually a tradeoff between them?\n\nWe can make this question more specific: what resources might these two types of efforts be competing for?\n\nMedia attention. Given the abundance of media interest in AI, there have been a lot of articles about all these issues. Articles about advanced AI safety have mostly been alarmist Terminator-ridden pieces that ignore the complexities of the problem. This has understandably annoyed many AI researchers, and led some of them to dismiss these risks based on the caricature presented in the media instead of the real arguments. The overall effect of media attention towards advanced AI risk has been highly negative. I would be very happy if the media stopped writing about superintelligence altogether and focused on safety and ethics questions about today’s AI systems.\nFunding. Much of the funding for advanced AI safety work currently comes from donors and organizations who are particularly interested in these problems, such as the Open Philanthropy Project and Elon Musk. They would be unlikely to fund safety work that doesn’t generalize to advanced AI systems, so their donations to advanced AI safety research are not taking funding away from immediate problems. On the contrary, FLI’s first grant program awarded some funding towards current issues with AI (such as economic and legal impacts). There isn’t a fixed pie of funding that immediate and longer-term safety are competing for – it’s more like two growing pies that don’t overlap very much. There has been an increasing amount of funding going into both fields, and hopefully this trend will continue.\nTalent. The field of advanced AI safety has grown in recent years but is still very small, and the “brain drain” resulting from researchers going to work on it has so far been negligible. The motivations for working on current and longer-term problems tend to be different as well, and these problems often attract different kinds of people. For example, someone who primarily cares about social justice is more likely to work on algorithmic bias, while someone who primarily cares about the long-term future is more likely to work on superintelligence risks.\nOverall, there does not seem to be much tradeoff in terms of funding or talent, and the media attention tradeoff could (in theory) be resolved by devoting essentially all the airtime to current concerns. 
Not only are these issues not in conflict – there are synergies between addressing them. Both benefit from fostering a culture in the AI research community of caring about social impact and being proactive about risks. Some safety problems are highly relevant both in the immediate and longer term, such as interpretability and adversarial examples. I think we need more people working on these problems for current systems while keeping scalability to more advanced future systems in mind.\nAI safety problems are too important for the discussion to be derailed by status contests like “my issue is better than yours”. This kind of false dichotomy is itself a distraction from the shared goal of ensuring AI has a positive impact on the world, both now and in the future. People who care about the safety of current and future AI systems are natural allies – let’s support each other on the path towards this common goal.", "url": "https://vkrakovna.wordpress.com/2018/01/27/is-there-a-tradeoff-between-safety-concerns-about-current-and-future-ai-systems/", "title": "Is there a tradeoff between immediate and longer-term AI safety efforts?", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2018-01-27T18:08:19+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=2", "authors": ["Victoria Krakovna"], "id": "f64c282758a280a6fec3c38d760d7ba2", "summary": []} {"text": "2017-18 New Year review\n\n2017 progress\nResearch/career:\n\nCoauthored RL with reward corruption paper and presented the results at the U Toronto CS department, Workshop on Reliable AI, and Women in ML workshop.\nCoauthored AI Safety Gridworlds paper.\nGave a talk on Interpretability for AI safety at the NIPS Interpretable ML Symposium.\nGave a lot of people career advice on getting into AI safety research.\n\nFLI / other AI safety:\n\nCoorganized the Beneficial AI conference in Asilomar and gave a talk summarizing the work of FLI grantees presented at the Asilomar workshop.\nCowrote the project examples document for the new grants program.\nSpoke at the Tokyo AI & Society symposium (my first conference in Asia).\nStarted a new public Facebook group AI Safety Open Discussion.\n\n\nRationality / effectiveness:\n\nStreamlined self-tracking data analysis and made an iPython notebook for plots. Found that the amount of sleep I get is correlated with tiredness (0.32), but not with mood indicators (anger, anxiety, or distractability). Anger and anxiety are correlated with each other though (0.36). Distractability is correlated with tiredness (0.27) and anticorrelated with anger the next day for some reason (-0.31).\nRan house check-in sessions on goals and habits 1-2 times a month, two house sessions on Hamming questions, and check-ins with Janos every 1-2 weeks.\nDid a sleep CBT program with sleep restriction for 2 months. Comparing the 5 months before the program vs the 5 months after the program, evening insomnia rate went down from 16% to 8.2% of the time, and morning insomnia rate didn’t change (9%). Average hours of sleep didn’t change (7 hours), but going to sleep around 22 minutes earlier on average. 
This excludes jetlag days (at most 3 days after a flight with at least 3 hours of time difference).\nDid around 80 exercise classes (starting in March)\n\nFun stuff:\n\nMoved into our new group house (Deep End).\nExplored the UK (hiking in Wales, Scotland, Lake District).\nGot back into aerial silks.\nGot into circling.\nGot a pixie haircut.\nFamily reunion in France with Russian relatives I haven’t seen in a decade.\nWent to Burning Man and learned to read Tarot (as part of our camp theme).\nDid the Stoic Week.\nPlayed a spy scavenger hunt game.\n\n\n\n2017 prediction outcomes\nPredictions:\n\nOur AI safety team will have at least two papers accepted for publication at a major conference, not counting workshops (70%) – 2 papers (human preferences paper at NIPS and reward corruption paper at IJCAI)\nI will write at least 9 blog posts (50%) – 6 posts\nI will meditate at least 250 days (45%) – 237 days\nI will exercise at least 250 days (55%) – 194 days\nI will visit at least 2 new countries (80%) – France, Switzerland\nI will attend Burning Man (85%) – yes\n\nCalibration:\n\nEverything that got at least 70% confidence was correct, everything lower was wrong.\nLike last year, my low predictions seem overconfident (though too few data points to judge).\n\n2018 goals and predictions\nResolutions:\n\nWrite at least 2 AI blog posts that are not about conferences (1 last year) (70%)\nAvoid processed sugar* at least until end of March (90%)\nDo at most 4 non-research talks/panels (7 last year) (50%)\nMeditate on at least 250 days (50%)\n\n* not in a super strict way: it’s ok to eat fruit and 90% chocolate and try a really small quantity (< teaspoon) of a dessert.\nPredictions:\n\nOur AI safety team will have at least two papers accepted for publication at a major conference, not counting workshops (80%)\nI will write at least 6 blog posts (60%)\nI will go to at least 100 exercise classes (80 last year) (60%)\n1-2 housemate turnover at the Deep End (3 last year) (70%)\nI will visit at least 3 new cities with population over 100,000 (4 last year) (50%)\nI will go on at least 2 hikes (4 last year) (90%)\n\nPast new year reviews: 2016-17, 2015-16, 2014-15.", "url": "https://vkrakovna.wordpress.com/2018/01/07/2017-18-new-year-review/", "title": "2017-18 New Year review", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2018-01-07T01:01:30+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=2", "authors": ["Victoria Krakovna"], "id": "fee1b74460073b148588017542c395e6", "summary": []} {"text": "NIPS 2017 Report\n\n\nThis year’s NIPS gave me a general sense that near-term AI safety is now mainstream and long-term safety is slowly going mainstream. On the near-term side, I particularly enjoyed Kate Crawford’s keynote on neglected problems in AI fairness, the ML security workshops, and the Interpretable ML symposium debate that addressed the “do we even need interpretability?” question in a somewhat sloppy but entertaining way. There was a lot of great content on the long-term side, including several oral / spotlight presentations and the Aligned AI workshop.\n\nValue alignment papers\nInverse Reward Design (Hadfield-Menell et al) defines the problem of an RL agent inferring a human’s true reward function based on the proxy reward function designed by the human. This is different from inverse reinforcement learning, where the agent infers the reward function from human behavior. 
The paper proposes a method for IRD that models uncertainty about the true reward, assuming that the human chose a proxy reward that leads to the correct behavior in the training environment. For example, if a test environment unexpectedly includes lava, the agent assumes that a lava-avoiding reward function is as likely as a lava-indifferent or lava-seeking reward function, since they lead to the same behavior in the training environment. The agent then follows a risk-averse policy with respect to its uncertainty about the reward function.\n\nThe paper shows some encouraging results on toy environments for avoiding some types of side effects and reward hacking behavior, though it’s unclear how well they will generalize to more complex settings. For example, the approach to reward hacking relies on noticing disagreements between different sensors / features that agreed in the training environment, which might be much harder to pick up on in a complex environment. The method is also at risk of being overly risk-averse and avoiding anything new, whether it be lava or gold, so it would be great to see some approaches for safe exploration in this setting.\nRepeated Inverse RL (Amin et al) defines the problem of inferring intrinsic human preferences that incorporate safety criteria and are invariant across many tasks. The reward function for each task is a combination of the task-invariant intrinsic reward (unobserved by the agent) and a task-specific reward (observed by the agent). This multi-task setup helps address the identifiability problem in IRL, where different reward functions could produce the same behavior.\n\nThe authors propose an algorithm for inferring the intrinsic reward while minimizing the number of mistakes made by the agent. They prove an upper bound on the number of mistakes for the “active learning” case where the agent gets to choose the tasks, and show that a certain number of mistakes is inevitable when the agent cannot choose the tasks (there is no upper bound in that case). Thus, letting the agent choose the tasks that it’s trained on seems like a good idea, though it might also result in a selection of tasks that is less interpretable to humans.\nDeep RL from Human Preferences (Christiano et al) uses human feedback to teach deep RL agents about complex objectives that humans can evaluate but might not be able to demonstrate (e.g. a backflip). The human is shown two trajectory snippets of the agent’s behavior and selects which one more closely matches the objective. This method makes very efficient use of limited human feedback, scaling much better than previous methods and enabling the agent to learn much more complex objectives (as shown in MuJoCo and Atari).\n\nDynamic Safe Interruptibility for Decentralized Multi-Agent RL (El Mhamdi et al) generalizes the safe interruptibility problem to the multi-agent setting. Non-interruptible dynamics can arise in a group of agents even if each agent individually is indifferent to interruptions. This can happen if Agent B is affected by interruptions of Agent A and is thus incentivized to prevent A from being interrupted (e.g. if the agents are self-driving cars and A is in front of B on the road). 
The multi-agent definition focuses on preserving the system dynamics in the presence of interruptions, rather than on converging to an optimal policy, which is difficult to guarantee in a multi-agent setting.\nAligned AI workshop\nThis was a more long-term-focused version of the Reliable ML in the Wild workshop held in previous years. There were many great talks and posters there – my favorite talks were Ian Goodfellow’s “Adversarial Robustness for Aligned AI” and Gillian Hadfield’s “Incomplete Contracting and AI Alignment”.\nIan made the case of ML security being important for long-term AI safety. The effectiveness of adversarial examples is problematic not only from the near-term perspective of current ML systems (such as self-driving cars) being fooled by bad actors. It’s also bad news from the long-term perspective of aligning the values of an advanced agent, which could inadvertently seek out adversarial examples for its reward function due to Goodhart’s law. Relying on the agent’s uncertainty about the environment or human preferences is not sufficient to ensure safety, since adversarial examples can cause the agent to have arbitrarily high confidence in the wrong answer.\n\nGillian approached AI safety from an economics perspective, drawing parallels between specifying objectives for artificial agents and designing contracts for humans. The same issues that make contracts incomplete (the designer’s inability to consider all relevant contingencies or precisely specify the variables involved, and incentives for the parties to game the system) lead to side effects and reward hacking for artificial agents.\n\nThe central question of the talk was how we can use insights from incomplete contracting theory to better understand and systematically solve specification problems in AI safety, which is a really interesting research direction. The objective specification problem seems even harder to me than the incomplete contract problem, since the contract design process relies on some level of shared common sense between the humans involved, which artificial agents do not currently possess.\nInterpretability for AI safety\nI gave a talk at the Interpretable ML symposium on connections between interpretability and long-term safety, which explored what forms of interpretability could help make progress on safety problems (slides, video). Understanding our systems better can help ensure that safe behavior generalizes to new situations, and it can help identify causes of unsafe behavior when it does occur. \nFor example, if we want to build an agent that’s indifferent to being switched off, it would be helpful to see whether the agent has representations that correspond to an off-switch, and whether they are used in its decisions. Side effects and safe exploration problems would benefit from identifying representations that correspond to irreversible states (like “broken” or “stuck”). While existing work on examining the representations of neural networks focuses on visualizations, safety-relevant concepts are often difficult to visualize.\nLocal interpretability techniques that explain specific predictions or decisions are also useful for safety. We could examine whether features that are idiosyncratic to the training environment or indicate proximity to dangerous states influence the agent’s decisions. 
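One simple way to probe this is a perturbation test: hold the rest of the state fixed, vary only the suspect feature, and check whether the chosen action changes. Here is a minimal sketch, assuming access to a policy that maps a dictionary of named features to action scores; the feature and action names are hypothetical.

def chosen_action(policy, features):
    scores = policy(features)
    return max(scores, key=scores.get)

def feature_influence(policy, features, name, values):
    # Actions chosen as the suspect feature sweeps over `values`, everything else held fixed.
    # More than one distinct action means the feature influences the decision at this state.
    actions = set()
    for v in values:
        perturbed = dict(features, **{name: v})
        actions.add(chosen_action(policy, perturbed))
    return actions

# Example with a transparent stand-in policy (hypothetical feature/action names):
def toy_policy(f):
    return {"continue_task": 1.0, "disable_off_switch": 2.0 * f["off_switch_nearby"]}

state = {"off_switch_nearby": 1.0, "task_progress": 0.3}
print(feature_influence(toy_policy, state, "off_switch_nearby", [0.0, 1.0]))
# {'continue_task', 'disable_off_switch'} -> the off-switch feature drives the decision here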
If the agent can produce a natural language explanation of its actions, how does it explain problematic behavior like reward hacking or going out of its way to disable the off-switch?\nThere are many ways in which interpretability can be useful for safety. Somewhat less obvious is what safety can do for interpretability: serving as grounding for interpretability questions. As exemplified by the final debate of the symposium, there is an ongoing conversation in the ML community trying to pin down the fuzzy idea of interpretability – what is it, do we even need it, what kind of understanding is useful, etc. I think it’s important to keep in mind that our desire for interpretability is to some extent motivated by our systems being fallible – understanding our AI systems would be less important if they were 100% robust and made no mistakes. From the safety perspective, we can define interpretability as the kind of understanding that help us ensure the safety of our systems.\nFor those interested in applying the interpretability hammer to the safety nail, or working on other long-term safety questions, FLI has recently announced a new grant program. Now is a great time for the AI field to think deeply about value alignment. As Pieter Abbeel said at the end of his keynote, “Once you build really good AI contraptions, how do you make sure they align their value system with our value system? Because at some point, they might be smarter than us, and it might be important that they actually care about what we care about.”\n(Thanks to Janos Kramar for his feedback on this post, and to everyone at DeepMind who gave feedback on the interpretability talk.)", "url": "https://vkrakovna.wordpress.com/2017/12/30/nips-2017-report/", "title": "NIPS 2017 Report", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2017-12-30T00:15:52+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=2", "authors": ["Victoria Krakovna"], "id": "765527fb7e8cd1372c89df0e88188c97", "summary": []} {"text": "Tokyo AI & Society Symposium\n\nI just spent a week in Japan to speak at the inaugural symposium on AI & Society – my first conference in Asia. It was inspiring to take part in an increasingly global conversation about AI impacts, and interesting to see how the Japanese AI community thinks about these issues. Overall, Japanese researchers seemed more open to discussing controversial topics like human-level AI and consciousness than their Western counterparts. Most people were more interested in near-term AI ethics concerns but also curious about long term problems.\nThe talks were a mix of English and Japanese with translation available over audio (high quality but still hard to follow when the slides are in Japanese). Here are some tidbits from my favorite talks and sessions.\n\nDanit Gal’s talk on China’s AI policy. She outlined China’s new policy report aiming to lead the world in AI by 2030, and discussed various advantages of collaboration over competition. It was encouraging to see that China’s AI goals include “establishing ethical norms, policies and regulations” and “forming robust AI safety and control mechanisms”. Danit called for international coordination to help ensure that everyone is following compatible concepts of safety and ethics.\n\n\nNext breakthrough in AI panel (Yasuo Kuniyoshi from U Tokyo, Ryota Kanai from Araya and Marek Rosa from GoodAI). 
When asked about immediate research problems they wanted the field to focus on, the panelists highlighted intrinsic motivation, embodied cognition, and gradual learning. In the longer term, they encouraged researchers to focus on generalizable solutions and to not shy away from philosophical questions (like defining consciousness). I think this mindset is especially helpful for working on long-term AI safety research, and would be happy to see more of this perspective in the field.\nLong-term talks and panel (Francesca Rossi from IBM, Hiroshi Nakagawa from U Tokyo and myself). I gave an overview of AI safety research problems in general and recent papers from my team. Hiroshi provocatively argued that a) AI-driven unemployment is inevitable, and b) we need to solve this problem using AI. Francesca talked about trustworthy AI systems and the value alignment problem. In the panel, we discussed whether long-term problems are a distraction from near-term problems (spoiler: no, both are important to work on), to what extent work on safety for current ML systems can carry over to more advanced systems (high-level insights are more likely to carry over than details), and other fun stuff.\nStephen Cave’s diagram of AI ethics issues. Helpfully color-coded by urgency.\n\n\nLuba Elliott’s talk on AI art. Style transfer has outdone itself with a Google Maps Mona Lisa.\n\n\nThere were two main themes I noticed in the Western presentations. People kept pointing out that AlphaGo is not AGI because it’s not flexible enough to generalize to hexagonal grids and such (this was before AlphaGo Zero came out). Also, the trolley problem was repeatedly brought up as a default ethical question for AI (it would be good to diversify this discussion with some less overused examples).\nThe conference was very well-organized and a lot of fun. Thanks to the organizers for bringing it together, and to all the great people I got to meet!\nWe also had a few days of sightseeing around Tokyo, which involved a folk dance festival, an incessantly backflipping aye-aye at the zoo, and beautiful netsuke sculptures at the national museum. I will miss the delicious conveyor belt sushi, the chestnut puree desserts from the convenience store, and the vending machines with hot milk tea at every corner :).\n[This post originally appeared on the Deep Safety blog. Thanks to Janos Kramar for his feedback.]", "url": "https://vkrakovna.wordpress.com/2017/10/30/tokyo-ai-society-symposium/", "title": "Tokyo AI & Society Symposium", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2017-10-30T10:51:27+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=3", "authors": ["Victoria Krakovna"], "id": "6ee2b9c03e9e137deb636348e65042fa", "summary": []} {"text": "Portfolio approach to AI safety research\n\nLong-term AI safety is an inherently speculative research area, aiming to ensure safety of advanced future systems despite uncertainty about their design or algorithms or objectives. It thus seems particularly important to have different research teams tackle the problems from different perspectives and under different assumptions. While some fraction of the research might not end up being useful, a portfolio approach makes it more likely that at least some of us will be right.\nIn this post, I look at some dimensions along which assumptions differ, and identify some underexplored reasonable assumptions that might be relevant for prioritizing safety research. 
(In the interest of making this breakdown as comprehensive and useful as possible, please let me know if I got something wrong or missed anything important.)\n\nAssumptions about similarity between current and future AI systems\nIf a future general AI system has a similar algorithm to a present-day system, then there are likely to be some safety problems in common (though more severe in generally capable systems). Insights and solutions for those problems are likely to transfer to some degree from current systems to future ones. For example, if a general AI system is based on reinforcement learning, we can expect it to game its reward function in even more clever and unexpected ways than present-day reinforcement learning agents do. Those who hold the similarity assumption often expect most of the remaining breakthroughs on the path to general AI to be compositional rather than completely novel, enhancing and combining existing components in novel and better-implemented ways (many current machine learning advances such as AlphaGo are an example of this).\nNote that assuming similarity between current and future systems is not exactly the same as assuming that studying current systems is relevant to ensuring the safety of future systems, since we might still learn generalizable things by testing safety properties of current systems even if they are different from future systems.\nAssuming similarity suggests a focus on empirical research based on testing the safety properties of current systems, while not making this assumption encourages more focus on theoretical research based on deriving safety properties from first principles, or on figuring out what kinds of alternative designs would lead to safe systems. For example, safety researchers in industry tend to assume more similarity between current and future systems than researchers at MIRI.\nHere is my tentative impression of where different safety research groups are on this axis. This is a very approximate summary, since views often vary quite a bit within the same research group (e.g. FHI is particularly diverse in this regard).\nOn the high-similarity side of the axis, we can explore the safety properties of different architectural / algorithmic approaches to AI, e.g. on-policy vs off-policy or model-free vs model-based reinforcement learning algorithms. It might be good to have someone working on safety issues for less commonly used agent algorithms, e.g. evolution strategies.\nAssumptions about promising approaches to safety problems\nLevel of abstraction. What level of abstraction is most appropriate for tackling a particular problem. For example, approaches to the value learning problem range from explicitly specifying ethical constraints to capability amplification and indirect normativity, with cooperative inverse reinforcement learning somewhere in between. These assumptions could be combined by applying different levels of abstraction to different parts of the problem. For example, it might make sense to explicitly specify some human preferences that seem obvious and stable over time (e.g. “breathable air”), and use the more abstract approaches to impart the most controversial, unstable and vague concepts (e.g. “fairness” or “harm”). Overlap between the more and less abstract specifications can create helpful redundancy (e.g. air pollution as a form of harm + a direct specification of breathable air).\nFor many other safety problems, the abstraction axis is not as widely explored as for value learning. 
For example, most of the approaches to avoiding negative side effects proposed in Concrete Problems (e.g. impact regularizers and empowerment) are on a medium level of abstraction, while it also seems important to address the problem on a more abstract level by formalizing what we mean by side effects (which would help figure out what we should actually be regularizing, etc). On the other hand, almost all current approaches to wireheading / reward hacking are quite abstract, and the problem would benefit from more empirical work.\nExplicit specification vs learning from data. Whether a safety problem is better addressed by directly defining a concept (e.g. the Low Impact AI paper formalizes the impact of an AI system by breaking down the world into ~20 billion variables) or learning the concept from human feedback (e.g. Deep Reinforcement Learning from Human Preferences paper teaches complex objectives to AI systems that are difficult to specify directly, like doing a backflip). I think it’s important to address safety problems from both of these angles, since the direct approach is unlikely to work on its own, but can give some idea of the idealized form of the objective that we are trying to approximate by learning from data.\nModularity of AI design. What level of modularity makes it easier to ensure safety? Ranges from end-to-end systems to ones composed of many separately trained parts that are responsible for specific abilities and tasks. Safety approaches for the modular case can limit the capabilities of individual parts of the system, and use some parts to enforce checks and balances on other parts. MIRI’s foundations approach focuses on a unified agent, while the safety properties on the high-modularity side has mostly been explored by Eric Drexler (more recent work is not public but available upon request). It would be good to see more people work on the high-modularity assumption.\nTakeaways\nTo summarize, here are some relatively neglected assumptions:\n\nMedium similarity in algorithms / architectures\nLess popular agent algorithms\nModular general AI systems\nMore / less abstract approaches to different safety problems (more for side effects, less for wireheading, etc)\nMore direct / data-based approaches to different safety problems\n\nFrom a portfolio approach perspective, a particular research avenue is worthwhile if it helps to cover the space of possible reasonable assumptions. For example, while MIRI’s research is somewhat controversial, it relies on a unique combination of assumptions that other groups are not exploring, and is thus quite useful in terms of covering the space of possible assumptions.\nI think the FLI grant program contributed to diversifying the safety research portfolio by encouraging researchers with different backgrounds to enter the field. It would be good for grantmakers in AI safety to continue to optimize for this in the future (e.g. one interesting idea is using a lottery after filtering for quality of proposals).\nWhen working on AI safety, we need to hedge our bets and look out for unknown unknowns – it’s too important to put all the eggs in one basket.\n(Cross-posted to the FLI blog and Approximately Correct. Thanks to Janos Kramar, Jan Leike and Shahar Avin for their feedback on this post. 
Thanks to Jaan Tallinn and others for inspiring discussions.)", "url": "https://vkrakovna.wordpress.com/2017/08/16/portfolio-approach-to-ai-safety-research/", "title": "Portfolio approach to AI safety research", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2017-08-16T21:35:05+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=3", "authors": ["Victoria Krakovna"], "id": "025c67d352650c80cc253f975382ae11", "summary": []} {"text": "Takeaways from self-tracking data\n\nI’ve been collecting data about myself on a daily basis for the past 3 years. Half a year ago, I switched from using 42goals (which I only remembered to fill out once every few days) to a Google form emailed to me daily (which I fill out consistently because I check email often). Now for the moment of truth – a correlation matrix!\nThe data consists of “mood variables” (anxiety, tiredness, and “zoneout” – how distracted / spacey I’m feeling), “action variables” (exercise and meditation) and sleep variables (hours of sleep, sleep start/end time, insomnia). There are 5 binary variables (meditation, exercise, evening/morning insomnia, headache) and the rest are ordinal or continuous. Almost all the variables have 6 months of data, except that I started tracking anxiety 5 months ago and zoneout 2 months ago.\n\nThe matrix shows correlations between mood and action variables for day X, sleep variables for the night after day X, and mood variables for day X+1 (marked by ‘next’):\n\nThe most surprising thing about this data is how many things are uncorrelated that I would expect to be correlated:\n\nevening insomnia and tiredness the next day (or the same day)\nanxiety and sleep variables the following night\nexercise and sleep variables the following night\ntiredness and hours of sleep the following night\naverage hours of sleep (over the past week) is only weakly correlated with tiredness the next day (-0.15)\nhours of sleep (average or otherwise) and anxiety or zoneout the next day (so my mood is less affected by sleep than I have expected)\naction variables and mood variables the next day\nmeditation and feeling zoned out\n\nSome things that were correlated after all:\n\nhours of sleep and tiredness the next day (-0.3) – unsurprising but lower than expected\ntiredness and zoneout (0.33)\ntiredness and insomnia the following morning (0.29) (weird)\nanxiety and zoneout were anticorrelated (-0.25) on adjacent days (weird)\nexercise and anxiety (-0.18)\nmeditation and anxiety (-0.15)\nmeditating and exercising (0.17) – both depend on how agenty / busy I am that day\nmeditation and insomnia (0.24), probably because I usually try to meditate if I’m having insomnia to make it easier to fall asleep\nheadache and evening insomnia (0.14)\n\nSome falsified hypotheses:\n\nExercise and meditation affect mood variables the following day\nMy tiredness level depends on the average amount of sleep the preceding week\nAnxiety affects sleep the following night\nExercise helps me sleep the following night\nI sleep more when I’m more tired\nSleep deprivation affects my mood\n\nThe overall conclusion is that my sleep is weird and also matters less than I thought for my well-being (at least in terms of quantity).\nAddendum:  For those who would like to try this kind of self-tracking, here is a Google Drive folder with the survey form and the iPython notebook. You need to download the spreadsheet of form responses as a CSV file before running the notebook code. 
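If you just want the gist of the analysis without running the notebook, here is a minimal sketch of the lagged correlation matrix (assuming the exported CSV has a date column and columns named after the variables; these names are illustrative rather than the actual ones from my form):

```python
import pandas as pd

df = pd.read_csv("form_responses.csv", parse_dates=["date"]).sort_values("date")

mood = ["anxiety", "tiredness", "zoneout"]
actions = ["meditation", "exercise"]
sleep = ["hours_of_sleep", "evening_insomnia", "morning_insomnia"]

# Pair day X's mood/actions and night X's sleep with day X+1's mood
# by shifting the mood columns up one row.
next_day = df[mood].shift(-1).add_suffix("_next")
combined = pd.concat([df[mood + actions + sleep], next_day], axis=1)

print(combined.corr().round(2))
```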
You can use the Send button in the form to email it to yourself, and then bounce it back every day using Google Inbox, FollowUpThen.com, or a similar service.", "url": "https://vkrakovna.wordpress.com/2017/06/04/takeaways-from-self-tracking-data/", "title": "Takeaways from self-tracking data", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2017-06-04T22:29:45+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=3", "authors": ["Victoria Krakovna"], "id": "81177621b7f77251edaac6325a2ee1bc", "summary": []} {"text": "Highlights from the ICLR conference: food, ships, and ML security\n\nIt’s been an eventful few days at ICLR in the coastal town of Toulon in Southern France, after a pleasant train ride from London with a stopover in Paris for some sightseeing. There was more food than is usually provided at conferences, and I ended up almost entirely subsisting on tasty appetizers. The parties were memorable this year, including one in a vineyard and one in a naval museum. The overall theme of the conference setting could be summarized as “finger food and ships”.\n\nThere were a lot of interesting papers this year, especially on machine learning security, which will be the focus on this post. (Here is a great overview of the topic.)\n\nOn the attack side, adversarial perturbations now work in physical form (if you print out the image and then take a picture) and they can also interfere with image segmentation. This has some disturbing implications for fooling vision systems in self-driving cars, such as impeding them from recognizing pedestrians. Adversarial examples are also effective at sabotaging neural network policies in reinforcement learning at test time.\n\nIn more encouraging news, adversarial examples are not entirely transferable between different models. For targeted examples, which aim to be misclassified as a specific class, the target class is not preserved when transferring to a different model. For example, if an image of a school bus is classified as a crocodile by the original model, it has at most 4% probability of being seen as a crocodile by another model. The paper introduces an ensemble method for developing adversarial examples whose targets do transfer, but this seems to only work well if the ensemble includes a model with a similar architecture to the new model.\nOn the defense side, there were some new methods for detecting adversarial examples. One method augments neural nets with a detector subnetwork, which works quite well and generalizes to new adversaries (if they are similar to or weaker than the adversary used for training). Another approach analyzes adversarial images using PCA, and finds that they are similar to normal images in the first few thousand principal components, but have a lot more variance in later components. Note that the reverse is not the case – adding arbitrary variation in trailing components does not necessarily encourage misclassification.\nThere has also been progress in scaling adversarial training to larger models and data sets, which also found that higher-capacity models are more resistant against adversarial examples than lower-capacity models. My overall impression is that adversarial attacks are still ahead of adversarial defense, but the defense side is starting to catch up.\n\n(Cross-posted to the FLI blog and Approximately Correct. 
Thanks to Janos Kramar for his feedback on this post.)", "url": "https://vkrakovna.wordpress.com/2017/04/30/highlights-from-the-iclr-conference-food-ships-and-ml-security/", "title": "Highlights from the ICLR conference: food, ships, and ML security", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2017-04-30T20:54:46+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=3", "authors": ["Victoria Krakovna"], "id": "742814e511ed91e94efb4bf36af4e724", "summary": []} {"text": "2016-17 New Year review\n\n2016 progress\nResearch / career:\n\nGot a job at DeepMind as a research scientist in AI safety.\nPresented MiniSPN paper at ICLR workshop.\nFinished RNN interpretability paper and presented at ICML and NIPS workshops.\nAttended the Deep Learning Summer School.\nFinished and defended PhD thesis.\nMoved to London and started working at DeepMind.\n\nFLI:\n\nTalk and panel (moderator) at Effective Altruism Global X Boston\nTalk and panel at the Governance of Emerging Technologies conference at ASU\nTalk and panel at Brain Bar Budapest\nAI safety session at OpenAI unconference\nTalk and panel at Effective Altruism Global X Oxford\nTalk and panel at Cambridge Catastrophic Risk Conference run by CSER\n\n\nRationality / effectiveness:\n\nWent to a 5-day Zentensive meditation retreat with Janos, in between grad school and moving to London. This was very helpful for practicing connecting with my direct emotional experience, and a good way to reset during a life transition.\nStopped using 42goals (too glitchy) and started recording data in a Google form emailed to myself daily. Now I am actually entering accurate data every day instead of doing it retroactively whenever I remember. I tried a number of goal tracking apps, but all of them seemed too inflexible (I was surprised not to find anything that provides correlation charts between different goals, e.g. meditation vs. hours of sleep).\n\nRandom cool things:\n\nHiked in the Andes to an altitude of 17,000 feet.\nVisited the Grand Canyon.\nNew countries visited: UK, Bolivia, Spain.\nStarted a group house in London (moving there in a few weeks).\nStarted contributing to the new blog Approximately Correct on societal impacts of machine learning.\n\n\n2016 prediction outcomes\nResolutions:\n\nFinish PhD thesis (70%) – done\nWrite at least 12 blog posts (40%) – 9\nMeditate at least 200 days (50%) – 245\nExercise at least 200 days (50%) – 282\nDo at least 5 pullups in a row (40%) – still only 2-3\nRecord at least 50 new thoughts (50%) – 29\nStay up past 1:30am at most 20% of the nights (40%) – 26.8%\nDo at least 10 pomodoros per week on average (50%) – 13\n\nPredictions:\n\nAt least one paper accepted for publication (70%) – two papers accepted to workshops\nI will get at least one fellowship (40%)\nInsomnia at most 20% of nights (20%) – 18.3%\nFLI will co-organize at least 3 AI safety workshops (50%) – AAAI, ICML, NIPS\n\nCalibration:\n\nLow predictions (20-40%): 1/5 = 20% (overconfident)\nMedium predictions (50-70%): 6/7 = 85% (underconfident)\nIt’s interesting that my 40% predictions were all wrong, and my 50% predictions were almost all correct. I seem to be translating system 1 labels of ‘not that likely’ and ‘reasonably likely’ to 40% and 50% respectively, while they should translate to something more like 25% and 70%. 
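The bucket numbers above come from a calculation along these lines (a minimal sketch with made-up example predictions rather than my actual list):

```python
# (stated probability, whether it came true) -- example data only
predictions = [(0.7, True), (0.5, True), (0.5, True), (0.4, False),
               (0.4, False), (0.5, False), (0.2, False), (0.7, True)]

buckets = {"low (20-40%)": (0.20, 0.40), "medium (50-70%)": (0.50, 0.70)}
for name, (lo, hi) in buckets.items():
    outcomes = [hit for p, hit in predictions if lo <= p <= hi]
    print(f"{name}: {sum(outcomes)}/{len(outcomes)} = {sum(outcomes)/len(outcomes):.0%}")
```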
After the overconfident predictions last year, I tried to tone down the predictions for this year, but the lower ones didn’t get toned down enough.\nI seem to be more accurate on predictions than resolutions, probably due to wishful thinking. Experimenting with no resolutions for next year.\n\n2017 predictions\n\nOur AI safety team will have at least two papers accepted for publication at a major conference, not counting workshops (70%).\nI will write at least 9 blog posts (50%).\nI will meditate at least 250 days (45%).\nI will exercise at least 250 days (55%).\nI will visit at least 2 new countries (80%).\nI will attend Burning Man (85%).\n\n ", "url": "https://vkrakovna.wordpress.com/2017/01/09/2016-17-new-year-review/", "title": "2016-17 New Year review", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2017-01-09T22:24:33+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=3", "authors": ["Victoria Krakovna"], "id": "ece06fd9ee132d0a752082c7c578d2ea", "summary": []} {"text": "AI Safety Highlights from NIPS 2016\n\n\nThis year’s Neural Information Processing Systems conference was larger than ever, with almost 6000 people attending, hosted in a huge convention center in Barcelona, Spain. The conference started off with two exciting announcements on open-sourcing collections of environments for training and testing general AI capabilities – the DeepMind Lab and the OpenAI Universe. Among other things, this is promising for testing safety properties of ML algorithms. OpenAI has already used their Universe environment to give an entertaining and instructive demonstration of reward hacking that illustrates the challenge of designing robust reward functions.\nI was happy to see a lot of AI-safety-related content at NIPS this year. The ML and the Law symposium and Interpretable ML for Complex Systems workshop focused on near-term AI safety issues, while the Reliable ML in the Wild workshop also covered long-term problems. Here are some papers relevant to long-term AI safety:\n\nInverse Reinforcement Learning\nCooperative Inverse Reinforcement Learning (CIRL) by Hadfield-Menell, Russell, Abbeel, and Dragan (main conference). This paper addresses the value alignment problem by teaching the artificial agent about the human’s reward function, using instructive demonstrations rather than optimal demonstrations like in classical IRL (e.g. showing the robot how to make coffee vs having it observe coffee being made). (3-minute video)\n\nGeneralizing Skills with Semi-Supervised Reinforcement Learning by Finn, Yu, Fu, Abbeel, and Levine (Deep RL workshop). This work addresses the scalable oversight problem by proposing the first tractable algorithm for semi-supervised RL. This allows artificial agents to robustly learn reward functions from limited human feedback. The algorithm uses an IRL-like approach to infer the reward function, using the agent’s own prior experiences in the supervised setting as an expert demonstration.\nTowards Interactive Inverse Reinforcement Learning by Armstrong and Leike (Reliable ML workshop). This paper studies the incentives of an agent that is trying to learn about the reward function while simultaneously maximizing the reward. The authors discuss some ways to reduce the agent’s incentive to manipulate the reward learning process.\nShould Robots Have Off Switches? by Milli, Hadfield-Menell, and Russell (Reliable ML workshop). 
This poster examines some adverse effects of incentivizing artificial agents to be compliant in the off-switch game (a variant of CIRL).\n\nSafe exploration\nSafe Exploration in Finite Markov Decision Processes with Gaussian Processes by Turchetta, Berkenkamp, and Krause (main conference). This paper develops a reinforcement learning algorithm called Safe MDP that can explore an unknown environment without getting into irreversible situations, unlike classical RL approaches.\nCombating Reinforcement Learning’s Sisyphean Curse with Intrinsic Fear by Lipton, Gao, Li, Chen, and Deng (Reliable ML workshop). This work addresses the ‘Sisyphean curse’ of DQN algorithms forgetting past experiences, as they become increasingly unlikely under a new policy, and therefore eventually repeating catastrophic mistakes. The paper introduces an approach called ‘intrinsic fear’, which maintains a model for how likely different states are to lead to a catastrophe within some number of steps.\n~~~~~\nMost of these papers were related to inverse reinforcement learning – while IRL is a promising approach, it would be great to see more varied safety material at the next NIPS (fingers crossed for some innovative contributions from Rocket AI!). There were some more safety papers on other topics at UAI this summer: Safely Interruptible Agents (formalizing what it means to incentivize an agent to obey shutdown signals) and A Formal Solution to the Grain of Truth Problem (providing a broad theoretical framework for multiple agents learning to predict each other in arbitrary computable games).\n(Cross-posted to Approximately Correct and the FLI blog. Thanks to Jan Leike, Zachary Lipton, and Janos Kramar for providing feedback on this post.)", "url": "https://vkrakovna.wordpress.com/2016/12/28/ai-safety-highlights-from-nips-2016/", "title": "AI Safety Highlights from NIPS 2016", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2016-12-28T18:04:04+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=3", "authors": ["Victoria Krakovna"], "id": "01d57e6503165718d1209c3daf13b6d6", "summary": []} {"text": "OpenAI unconference on machine learning\n\nLast weekend, I attended OpenAI’s self-organizing conference on machine learning (SOCML 2016), meta-organized by Ian Goodfellow (thanks Ian!). It was held at OpenAI’s new office, with several floors of large open spaces. The unconference format was intended to encourage people to present current ideas alongside with completed work. The schedule mostly consisted of 2-hour blocks with broad topics like “reinforcement learning” and “generative models”, guided by volunteer moderators. I especially enjoyed the sessions on neuroscience and AI and transfer learning, which had smaller and more manageable groups than the crowded popular sessions, and diligent moderators who wrote down the important points on the whiteboard. Overall, I had more interesting conversation but also more auditory overload at SOCML than at other conferences.\nTo my excitement, there was a block for AI safety along with the other topics. The safety session became a broad introductory Q&A, moderated by Nate Soares, Jelena Luketina and me. Some topics that came up: value alignment, interpretability, adversarial examples, weaponization of AI.\nAI safety discussion group (image courtesy of Been Kim)\n\nOne value alignment question was how to incorporate a diverse set of values that represents all of humanity in the AI’s objective function. 
We pointed out that there are two complementary problems: 1) getting the AI’s values to be in the small part of values-space that’s human-compatible, and 2) averaging over that space in a representative way. People generally focus on the ways in which human values differ from each other, which leads them to underestimate the difficulty of the first problem and overestimate the difficulty of the second. We also agreed on the importance of allowing for moral progress by not locking in the values of AI systems.\nNate mentioned some alternatives to goal-optimizing agents – quantilizers and approval-directed agents. We also discussed the limitations of using blacklisting/whitelisting in the AI’s objective function: blacklisting is vulnerable to unforeseen shortcuts and usually doesn’t work from a security perspective, and whitelisting hampers the system’s ability to come up with creative solutions (e.g. the controversial move 37 by AlphaGo in the second game against Sedol).\nBeen Kim brought up the recent EU regulation on the right to explanation for algorithmic decisions. This seems easy to game due to lack of good metrics for explanations. One proposed metric was that a human would be able to predict future model outputs from the explanation. This might fail for better-than-human systems by penalizing creative solutions if applied globally, but seems promising as a local heuristic.\nIan Goodfellow mentioned the difficulties posed by adversarial examples: an imperceptible adversarial perturbation to an image can make a convolutional network misclassify it with very high confidence. There might be some kind of No Free Lunch theorem where making a system more resistant to adversarial examples would trade off with performance on non-adversarial data.\nWe also talked about dual-use AI technologies, e.g. advances in deep reinforcement learning for robotics that could end up being used for military purposes. It was unclear whether corporations or governments are more trustworthy with using these technologies ethically: corporations have a profit motive, while governments are more likely to weaponize the technology.\n\nMore detailed notes by Janos coming soon! For a detailed overview of technical AI safety research areas, I highly recommend reading Concrete Problems in AI Safety.\nCross-posted to the FLI blog.", "url": "https://vkrakovna.wordpress.com/2016/10/15/openai-unconference-on-machine-learning/", "title": "OpenAI unconference on machine learning", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2016-10-15T23:51:44+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=3", "authors": ["Victoria Krakovna"], "id": "51b828fcd64248551927bbc58a55d996", "summary": []} {"text": "Looking back at my grad school journey\n\nI recently defended my PhD thesis, and a chapter of my life has now come to an end. It feels both exciting and a bit disorienting to be done with this phase of much stress and growth. My past self who started this five years ago, with a very vague idea of what she was getting into, was a rather different person from my current self.\nI have developed various skills over these five years, both professionally and otherwise. I learned to read papers and explain them to others, to work on problems that take months rather than hours and be content with small bits of progress. 
I used to believe that I should be interested in everything, and gradually gave myself permission not to care about most topics to be able to focus on things that are actually interesting to me, developing some sense of discernment. In 2012 I was afraid to comment on the LessWrong forum because I might say something stupid and get downvoted – in 2013 I wrote my first post, and in 2014 I started this blog. I went through the Toastmasters program and learned to speak in front of groups, though I still feel nervous when speaking on technical topics, especially about my own work. I co-founded a group house and a nonprofit, both of which are still flourishing. I learned how to run events and lead organizations, starting with LessWrong meetups and the Harvard Toastmasters club, which were later displaced by running FLI.\n\nI remember agonizing over whether I should do a PhD or not, and I wish I had instead spent more time deciding where and how to do it. I applied to a few statistics departments in the Boston area and joined the same department that Janos was in, without seriously considering computer science, even though my only research experience back in undergrad was in that field. The statistics department was full of interesting courses and brilliant people that taught me a great deal, but the cultural fit wasn’t quite right and I felt a bit out of place there. I eventually found my way to the computer science department at the end of my fourth year, but I wish I had started out there to begin with.\nMy research work took a rather meandering path that somehow came together in the end. My first project was part of the astrostatistics seminar, which I was not particularly motivated about, but I expected myself to be interested in everything. I never quite understood what people were talking about in the seminar or what I was supposed to be doing, and quietly dropped the project at the end of my first year when leaving for my quantitative analyst internship at D.E.Shaw. The internship was my first experience in industry, where I learned factor analysis and statistical coding in Python (the final review from my manager boiled down to “great coder, research skills need work”). In second year, my advisor offered me a project that was unfinished by his previous students, which would take a few months to polish up. The project was on a new method for classification and variable selection called SBFC. I dug up a bunch of issues with the existing model and code, from runtime performance to MCMC detailed balance, and ended up stuck on the project for 3 years. During that time, I dabbled with another project that sort of petered out, did a Google internship on modeling ad quality, and sank a ton of time into FLI. In the middle of fourth year, SBFC was still my only project, and things were not looking great for graduating.\nThis was when I realized that the part of statistics that was interesting to me was the overlap with computer science and AI, a.k.a. machine learning. I went to the NIPS conference for the first time, and met a lot of AI researchers – I didn’t understand a lot of their work, but I liked the way they thought. I co-organized FLI’s Puerto Rico conference and met more AI people there. I finally ventured outside the stats department and started sitting in on ML lab meetings at the CS department, which mostly consisted of research updates on variational autoencoders that went right over my head. 
I studied a lot to fill the gaps in my ML knowledge that were not covered by my statistics background, namely neural networks and reinforcement learning (still need to read Sutton & Barto…). To my surprise, many people at the ML lab were also transplants from other departments, officially doing PhDs in math or physics.\nThat summer I did my second internship at Google, on sum-product network models (SPNs) for anomaly detection in the Knowledge Graph. I wondered if it would result in a paper that could be shoehorned into my thesis, and whether I could find a common thread between SPNs, SBFC and my upcoming project at the ML lab. This unifying theme turned out to be interpretability – the main selling point of SBFC, an advantage of SPNs over other similarly expressive models, and one of my CS advisor’s interests. Working on interpretability was a way to bring more of the statistical perspective into machine learning, and seemed relevant to AI safety as well. With this newfound sense of direction, in a new environment, my fifth year had as much research output as the previous three put together, and I presented two workshop posters in 2016 – on SPNs at ICLR, and on RNN interpretability at ICML.\nVolunteering for FLI during grad school started out as a kind of double life, and ended up interacting with my career in interesting ways. For a while I didn’t tell anyone in my department that I co-founded a nonprofit trying to save the world from existential risk, which was often taking up more of my time than research. However, FLI’s outreach work on AI safety was also beneficial to me – as one of the AI experts on the FLI core team, I met a lot of AI researchers who I may not have connected with otherwise. When I met the DeepMind founders at the Puerto Rico conference, I would not have predicted that I’ll be interviewing for their AI safety team a year later. The two streams of my interests, ML and AI safety, have finally crossed, and the double life is no more.\nWhat lessons have I drawn from the grad school experience, and what advice could I give to others?\n\nGoing to conferences and socializing with other researchers was super useful and fun. I highly recommend attending NIPS and ICML even if you’re not presenting.\nAcademic departments vary widely in their requirements. For example, the statistics department expected PhD students to teach 10 sections (I got away with doing 5 sections and it was still a lot of work), while the CS department only expected 1-2 sections.\nInternships were a great source of research experience and funding (a better use of time than teaching, in my opinion). It’s worth spending a summer interning at a good company, even if you are definitely going into academia.\nContrary to common experience, writer’s block was not an obstacle for me. My actual bottleneck was coding, debugging and running experiments, which was often tedious and took over half of my research time, so it’s well worth optimizing those aspects of the work.\nThe way FLI ended up contributing to my career path reminds me of a story about Steve Jobs sitting in on a calligraphy class that later turned out to be super relevant to creating snazzy fonts for Apple computers. I would recommend making time for seemingly orthogonal activities during grad school that you’re passionate about, both because they provide a stimulating break from research, and because they could become unexpectedly useful later.\n\nDoing a PhD was pretty stressful for me, but ultimately worthwhile. 
A huge thank you to everyone who guided and supported me through it!", "url": "https://vkrakovna.wordpress.com/2016/09/30/looking-back-at-my-grad-school-journey/", "title": "Looking back at my grad school journey", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2016-09-30T05:03:21+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=3", "authors": ["Victoria Krakovna"], "id": "65875135fba16ec7e641bc74a0c5e8e8", "summary": []} {"text": "Highlights from the Deep Learning Summer School\n\n\nA few weeks ago, Janos and I attended the Deep Learning Summer School at the University of Montreal. Various well-known researchers covered topics related to deep learning, from reinforcement learning to computational neuroscience (see the list of speakers with slides and videos). Here are a few ideas that I found interesting in the talks (this list is far from exhaustive):\nCross-modal learning (Antonio Torralba)\nYou can do transfer learning in convolutional neural nets by freezing the parameters in some layers and retraining others on a different domain for the same task (paper). For example, if you have a neural net for scene recognition trained on real images of bedrooms, you could reuse the same architecture to recognize drawings of bedrooms. The last few layers represent abstractions like “bed” or “lamp”, which apply to drawings just as well as to real images, while the first few layers represent textures, which would differ between the two data modalities of real images and drawings. More generally, the last few layers are task-dependent and modality-independent, while the first few layers are the opposite.\n\n\nImportance weighted autoencoders (Ruslan Salakhutdinov)\nThe variational autoencoder (VAE) is a popular generative model that constructs an autoencoder out of a generative network (encoder) and recognition network (decoder). It then trains these networks to optimize a variational approximation of the posterior distribution by maximizing a lower bound on the log likelihood. IWAE is a variation that tightens the variational lower bound by relaxing the assumptions about the form of the posterior distribution . While the VAE maximizes a lower bound based on a single sample from the recognition distribution, the IWAE lower bound uses a weighted average over several samples. Applying importance weighting over several samples avoids the failure mode where the VAE objective penalizes models that produce even a few samples through the recognition network that don’t fit the posterior from the generative network, and taking several samples allows for better approximation of the posterior and thus a tighter lower bound.(The IWAE paper also gives a more intuitive introduction to VAE than the original paper, in my opinion.)\nVariations on RNNs (Yoshua Bengio)\nThis talk mentioned a few recurrent neural network (RNN) models that were unfamiliar to me. Variational RNNs introduce some elements of variational autoencoders into RNNs by adding latent variables (z) into the top hidden layer (paper). The RNN internal structure is entirely deterministic besides the output probability model, so it can be helpful to inject a higher-level source of noise to model highly structured data (e.g. speech). This was further extended with multiresolution RNNs, which are variational and hierarchical (paper). 
Another interesting model is real-time recurrent learning, a more biologically plausible alternative to backpropagation through time, where gradients are computed in an online feedforward manner without revisiting past history backwards. The originally proposed version involves a fairly inefficient exact computation of parameter gradients, while a more efficient recent approach approximates the forward gradient instead (paper).\nSome other talks I really liked but ran out of steam to write about: Joelle Pineau’s intro to reinforcement learning, Pieter Abbeel on deep reinforcement learning, Shakir Mohamed on deep generative models, Surya Ganguli on neuroscience and deep learning.", "url": "https://vkrakovna.wordpress.com/2016/08/25/highlights-from-the-deep-learning-summer-school/", "title": "Highlights from the Deep Learning Summer School", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2016-08-26T02:29:56+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=3", "authors": ["Victoria Krakovna"], "id": "8de62e1fe3cf5746849c30d7210d4de1", "summary": []} {"text": "Clopen AI: Openness in different aspects of AI development\n\nThere has been a lot of discussion about the appropriate level of openness in AI research in the past year – the OpenAI announcement, the blog post Should AI Be Open?, a response to the latter, and Nick Bostrom’s thorough paper Strategic Implications of Openness in AI development.\nThere is disagreement on this question within the AI safety community as well as outside it. Many people are justifiably afraid of concentrating power to create AGI and determine its values in the hands of one company or organization. Many others are concerned about the information hazards of open-sourcing AGI and the resulting potential for misuse. In this post, I argue that some sort of compromise between openness and secrecy will be necessary, as both extremes of complete secrecy and complete openness seem really bad. The good news is that there isn’t a single axis of openness vs secrecy – we can make separate judgment calls for different aspects of AGI development, and develop a set of guidelines.\n\nInformation about AI development can be roughly divided into two categories – technical and strategic. Technical information includes research papers, data, source code (for the algorithm, objective function), etc. Strategic information includes goals, forecasts and timelines, the composition of ethics boards, etc. Bostrom argues that openness about strategic information is likely beneficial both in terms of short- and long-term impact, while openness about technical information is good on the short-term, but can be bad on the long-term due to increasing the race condition. We need to further consider the tradeoffs of releasing different kinds of technical information.\nSharing papers and data is both more essential for the research process and less potentially dangerous than sharing code, since it is hard to reconstruct the code from that information alone. For example, it can be difficult to reproduce the results of a neural network algorithm based on the research paper, given the difficulty of tuning the hyperparameters and differences between computational architectures.\nReleasing all the code required to run an AGI into the world, especially before it’s been extensively debugged, tested, and safeguarded against bad actors, would be extremely unsafe. 
Anyone with enough computational power could run the code, and it would be difficult to shut down the program or prevent it from copying itself all over the Internet.\nHowever, releasing none of the source code is also a bad idea. It would currently be impractical, given the strong incentives for AI researchers to share at least part of the code for recognition and replicability. It would also be suboptimal, since sharing some parts of the code is likely to contribute to safety. For example, it would make sense to open-source the objective function code without the optimization code, which would reveal what is being optimized for but not how. This could make it possible to verify whether the objective is sufficiently representative of society’s values – the part of the system that would be the most understandable and important to the public anyway.\nIt is rather difficult to verify to what extent a company or organization is sharing their technical information on AI development, and enforce either complete openness or secrecy. There is not much downside to specifying guidelines for what is expected to be shared and what isn’t. Developing a joint set of openness guidelines on the short and long term would be a worthwhile endeavor for the leading AI companies today.\n(Cross-posted to the FLI blog and Approximately Correct. Thanks to Jelena Luketina and Janos Kramar for their detailed feedback on this post!)", "url": "https://vkrakovna.wordpress.com/2016/08/01/clopen-ai-openness-in-different-aspects-of-ai-development/", "title": "Clopen AI: Openness in different aspects of AI development", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2016-08-01T16:23:05+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=3", "authors": ["Victoria Krakovna"], "id": "e62ec5df0ac5720abb2ec3c9db5cef2a", "summary": []} {"text": "New AI safety research agenda from Google Brain\n\nGoogle Brain just released an inspiring research agenda, Concrete Problems in AI Safety, co-authored by researchers from OpenAI, Berkeley and Stanford. This document is a milestone in setting concrete research objectives for keeping reinforcement learning agents and other AI systems robust and beneficial. The problems studied are relevant both to near-term and long-term AI safety, from cleaning robots to higher-stakes applications. The paper takes an empirical focus on avoiding accidents as modern machine learning systems become more and more autonomous and powerful.\nReinforcement learning is currently the most promising framework for building artificial agents – it is thus especially important to develop safety guidelines for this subfield of AI. The research agenda describes a comprehensive (though likely non-exhaustive) set of safety problems, corresponding to where things can go wrong when building AI systems:\n\n\n\nMisspecification of the objective function by the human designer. Two common pitfalls when designing objective functions are negative side-effects and reward hacking (also known as wireheading), which are likely to happen by default unless we figure out how to guard against them. One of the key challenges is specifying what it means for an agent to have a low impact on the environment while achieving its objectives effectively.\n\n\nExtrapolation from limited information about the objective function. 
Even with a correct objective function, human supervision is likely to be costly, which calls for scalable oversight of the artificial agent.\n\n\nExtrapolation from limited training data or using an inadequate model. We need to develop safe exploration strategies that avoid irreversibly bad outcomes, and build models that are robust to distributional shift – able to fail gracefully in situations that are far outside the training data distribution.\n\n\nThe AI research community is increasingly focusing on AI safety in recent years, and Google Brain’s agenda is part of this trend. It follows on the heels of the Safely Interruptible Agents paper from Google DeepMind and the Future of Humanity Institute, which investigates how to avoid unintended consequences from interrupting or shutting down reinforcement learning agents. We at FLI are super excited that industry research labs at Google and OpenAI are spearheading and fostering collaboration on AI safety research, and look forward to the outcomes of this work.\n(Cross-posted from the FLI blog.)", "url": "https://vkrakovna.wordpress.com/2016/06/22/new-ai-safety-research-agenda-from-google-brain/", "title": "New AI safety research agenda from Google Brain", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2016-06-22T19:19:01+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=4", "authors": ["Victoria Krakovna"], "id": "3ea0298789bbfff54a84a5a3aa44d355", "summary": []} {"text": "Using humility to counteract shame\n\n“Pride is not the opposite of shame, but its source. True humility is the only antidote to shame.”\nUncle Iroh, “Avatar: The Last Airbender”\n \nShame is one of the trickiest emotions to deal with. It is difficult to think about, not to mention discuss with others, and gives rise to insidious ugh fields and negative spirals. Shame often underlies other negative emotions without making itself apparent – anxiety or anger at yourself can be caused by unacknowledged shame about the possibility of failure. It can stack on top of other emotions – e.g. you start out feeling upset with someone, and end up being ashamed of yourself for feeling upset, and maybe even ashamed of feeling ashamed if meta-shame is your cup of tea. The most useful approach I have found against shame is invoking humility.\n\nWhat is humility, anyway? It is often defined as a low view of your own importance, and tends to be conflated with modesty. Another common definition that I find more useful is acceptance of your own flaws and shortcomings. This is more compatible with confidence, and helpful irrespective of your level of importance or comparison to other people. What humility feels like to me on a system 1 level is a sense of compassion and warmth towards yourself while fully aware of your imperfections (while focusing on imperfections without compassion can lead to beating yourself up). According to LessWrong, “to be humble is to take specific actions in anticipation of your own errors”, which seems more like a possible consequence of being humble than a definition.\nHumility is a powerful tool for psychological well-being and instrumental rationality that is more broadly applicable than just the ability to anticipate errors by seeing your limitations more clearly. 
I can summon humility when I feel anxious about too many upcoming deadlines, or angry at myself for being stuck on a rock climbing route, or embarrassed about forgetting some basic fact in my field that I am surely expected to know by the 5th year of grad school.\nWhile humility comes naturally to some people, others might find it useful to explicitly build an identity as a humble person. How can you invoke this mindset? One way is through negative visualization or pre-hindsight, considering how your plans could fail, which can be time-consuming and usually requires system 2. A faster and less effortful way is to is to imagine a person, real or fictional, who you consider to be humble. I often bring to mind my grandfather, or Uncle Iroh from the Avatar series, sometimes literally repeating the above quote in my head, sort of like an affirmation. I don’t actually agree that humility is the only antidote to shame, but it does seem to be one of the most effective.\n(Cross-posted to LessWrong. Thanks to Janos Kramar for his feedback on this post.)", "url": "https://vkrakovna.wordpress.com/2016/04/15/using-humility-to-counteract-shame/", "title": "Using humility to counteract shame", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2016-04-15T18:23:38+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=4", "authors": ["Victoria Krakovna"], "id": "bb4e313336becae8df4547877804d397", "summary": []} {"text": "Introductory resources on AI safety research\n\n[See AI Safety Resources for the most recent version of this list.]\nReading list to get up to speed on the main ideas in the field of long-term AI safety. The resources are selected for relevance and/or brevity, and the list is not meant to be comprehensive. [Updated on 19 October 2017.]\nMotivation\nFor a popular audience:\nSutskever and Amodei, 2017. Wall Street Journal: Protecting Against AI’s Existential Threat\nCade Metz, 2017. New York Times: Teaching A.I. Systems to Behave Themselves\nFLI. AI risk background and FAQ. At the bottom of the background page, there is a more extensive list of resources on AI safety.\nTim Urban, 2015. Wait But Why: The AI Revolution. An accessible introduction to AI risk forecasts and arguments (with cute hand-drawn diagrams, and a few corrections from Luke Muehlhauser).\nOpenPhil, 2015. Potential risks from advanced artificial intelligence. An overview of AI risks and timelines, possible interventions, and current actors in this space.\n\nFor a more technical audience:\nStuart Russell:\n\nThe long-term future of AI (longer version), 2015. A video of Russell’s classic talk, discussing why it makes sense for AI researchers to think about AI safety, and going over various misconceptions about the issues.\nConcerns of an AI pioneer, 2015. An interview with Russell on the importance of provably aligning AI with human values, and the challenges of value alignment research.\nOn Myths and Moonshine, 2014. Russell’s response to the “Myth of AI” question on Edge.org, which draws an analogy between AI research and nuclear research, and points out some dangers of optimizing a misspecified utility function.\n\nScott Alexander, 2015. No time like the present for AI safety work. An overview of long-term AI safety challenges, e.g. preventing wireheading and formalizing ethics.\nVictoria Krakovna, 2015. AI risk without an intelligence explosion. 
An overview of long-term AI risks besides the (overemphasized) intelligence explosion / hard takeoff scenario, arguing why intelligence explosion skeptics should still think about AI safety.\nStuart Armstrong, 2014. Smarter Than Us: The Rise Of Machine Intelligence. A short ebook discussing potential promises and challenges presented by advanced AI, and the interdisciplinary problems that need to be solved on the way there.\nTechnical overviews\nSoares and Fallenstein, 2017. Aligning Superintelligence with Human Interests: A Technical Research Agenda\nAmodei, Olah, et al, 2016. Concrete Problems in AI Safety. Research agenda focusing on accident risks that apply to current ML systems as well as more advanced future AI systems.\nJessica Taylor et al, 2016. Alignment for Advanced Machine Learning Systems\nFLI, 2015. A survey of research priorities for robust and beneficial AI\nJacob Steinhardt, 2015. Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems. A taxonomy of AI safety issues that require ordinary vs extraordinary engineering to address.\nNate Soares, 2015. Safety engineering, target selection, and alignment theory. Identifies and motivates three major areas of AI safety research.\nNick Bostrom, 2014. Superintelligence: Paths, Dangers, Strategies. A seminal book outlining long-term AI risk considerations.\nSteve Omohundro, 2007. The basic AI drives. A classic paper arguing that sufficiently advanced AI systems are likely to develop drives such as self-preservation and resource acquisition independently of their assigned objectives.\nTechnical work\nValue learning:\nJaime Fisac et al, 2017. Pragmatic-Pedagogic Value Alignment. A cognitive science approach to the cooperative inverse reinforcement learning problem.\nSmitha Milli et al. Should robots be obedient? Obedience to humans may sound like a great thing, but blind obedience can get in the way of learning human preferences.\nWilliam Saunders et al, 2017. Trial without Error: Towards Safe Reinforcement Learning via Human Intervention. (blog post)\nAmin, Jiang, and Singh, 2017. Repeated Inverse Reinforcement Learning. Separates the reward function into a task-specific component and an intrinsic component. In a sequence of tasks, the agent learns the intrinsic component while trying to avoid surprising the human.\nArmstrong and Leike, 2016. Towards Interactive Inverse Reinforcement Learning. The agent gathers information about the reward function through interaction with the environment, while at the same time maximizing this reward function, balancing the incentive to learn with the incentive to bias.\nDylan Hadfield-Menell et al, 2016. Cooperative inverse reinforcement learning. Defines value learning as a cooperative game where the human tries to teach the agent about their reward function, rather than giving optimal demonstrations like in standard IRL.\nOwain Evans et al, 2016. Learning the Preferences of Ignorant, Inconsistent Agents.\nReward gaming / wireheading:\nTom Everitt et al, 2017. Reinforcement learning with a corrupted reward channel. A formalization of the reward misspecification problem in terms of true and corrupt reward, a proof that RL agents cannot overcome reward corruption, and a framework for giving the agent extra information to overcome reward corruption. (blog post)\nAmodei and Clark, 2016. Faulty Reward Functions in the Wild. 
An example of reward function gaming in a boat racing game, where the agent gets a higher score by going in circles and hitting the same targets than by actually playing the game.\nEveritt and Hutter, 2016. Avoiding Wireheading with Value Reinforcement Learning. An alternative to RL that reduces the incentive to wirehead.\nLaurent Orseau, 2015. Wireheading. An investigation into how different types of artificial agents respond to opportunities to wirehead (unintended shortcuts to maximize their objective function).\nInterruptibility / corrigibility:\nDylan Hadfield-Menell et al. The Off-Switch Game. This paper studies the interruptibility problem as a game between human and robot, and investigates which incentives the robot could have to allow itself to be switched off.\nEl Mahdi El Mhamdi et al, 2017. Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning.\nOrseau and Armstrong, 2016. Safely interruptible agents. Provides a formal definition of safe interruptibility and shows that off-policy RL agents are more interruptible than on-policy agents. (blog post)\nNate Soares et al, 2015. Corrigibility. Designing AI systems without incentives to resist corrective modifications by their creators.\nScalable oversight:\nChristiano, Leike et al, 2017. Deep reinforcement learning from human preferences. Communicating complex goals to AI systems using human feedback (comparing pairs of agent trajectory segments).\nDavid Abel et al. Agent-Agnostic Human-in-the-Loop Reinforcement Learning.\nOther:\nArmstrong and Levinstein, 2017. Low Impact Artificial Intelligences. An intractable but enlightening definition of low impact for AI systems.\nBabcock, Kramar and Yampolskiy, 2017. Guidelines for Artificial Intelligence Containment.\nScott Garrabrant et al, 2016. Logical Induction. A computable algorithm for the logical induction problem.\nNote: I did not include literature on less neglected areas of the field like safe exploration, distributional shift, adversarial examples, or interpretability (see e.g. Concrete Problems or the CHAI bibliography for extensive references on these topics).\nCollections of technical works\nCHAI bibliography\nMIRI publications\nFHI publications\nFLI grantee publications (scroll down)\nPaul Christiano. AI control. A blog on designing safe, efficient AI systems (approval-directed agents, aligned reinforcement learning agents, etc).\nIf there are any resources missing from this list that you think are a must-read, please let me know! If you want to go into AI safety research, check out these guidelines and the AI Safety Syllabus.\n(Thanks to Ben Sancetta, Taymon Beal and Janos Kramar for their feedback on this post.)", "url": "https://vkrakovna.wordpress.com/2016/02/28/introductory-resources-on-ai-safety-research/", "title": "Introductory resources on AI safety research", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2016-02-28T05:03:08+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=4", "authors": ["Victoria Krakovna"], "id": "553adfcf42a7f5f25482810e1d31ec29", "summary": []} {"text": "To contribute to AI safety, consider doing AI research\n\nAmong those concerned about risks from advanced AI, I’ve encountered people who would be interested in a career in AI research, but are worried that doing so would speed up AI capability relative to safety. 
I think it is a mistake for AI safety proponents to avoid going into the field for this reason (better reasons include being well-positioned to do AI safety work, e.g. at MIRI or FHI). This mistake contributed to me choosing statistics rather than computer science for my PhD, which I have some regrets about, though luckily there is enough overlap between the two fields that I can work on machine learning anyway.\nI think the value of having more AI experts who are worried about AI safety is far higher than the downside of adding a few drops to the ocean of people trying to advance AI. Here are several reasons for this:\n\nConcerned researchers can inform and influence their colleagues, especially if they are outspoken about their views.\nStudying and working on AI brings understanding of the current challenges and breakthroughs in the field, which can usefully inform AI safety work (e.g. wireheading in reinforcement learning agents).\nOpportunities to work on AI safety are beginning to spring up within academia and industry, e.g. through FLI grants. In the next few years, it will be possible to do an AI-safety-focused PhD or postdoc in computer science, which would hit two birds with one stone.\n\n\nTo elaborate on #1, one of the prevailing arguments against taking long-term AI safety seriously is that not enough experts in the AI field are worried. Several prominent researchers have commented on the potential risks (Stuart Russell, Bart Selman, Murray Shanahan, Shane Legg, and others), and more are concerned but keep quiet for reputational reasons. An accomplished, strategically outspoken and/or well-connected expert can make a big difference in the attitude distribution in the AI field and the level of familiarity with the actual concerns (which are not about malevolence, sentience, or marching robot armies). Having more informed skeptics who have maybe even read Superintelligence, and fewer uninformed skeptics who think AI safety proponents are afraid of Terminators, would produce much needed direct and productive discussion on these issues. As the proportion of informed and concerned researchers in the field approaches critical mass, the reputational consequences for speaking up will decrease.\nA year after FLI’s Puerto Rico conference, the subject of long-term AI safety is no longer taboo among AI researchers, but remains rather controversial. Addressing AI risk on the long term will require safety work to be a significant part of the field, and close collaboration between those working on safety and capability of advanced AI. Stuart Russell makes the apt analogy that “just as nuclear fusion researchers consider the problem of containment of fusion reactions as one of the primary problems of their field, issues of control and safety will become central to AI as the field matures”. If more people who are already concerned about AI safety join the field, we can make this happen faster, and help wisdom win the race with capability.\n(Cross-posted to LessWrong. 
Thanks to Janos Kramar for his help with editing this post.)", "url": "https://vkrakovna.wordpress.com/2016/01/16/to-contribute-to-ai-safety-consider-doing-ai-research/", "title": "To contribute to AI safety, consider doing AI research", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2016-01-16T05:19:15+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=4", "authors": ["Victoria Krakovna"], "id": "27d3fcbc823b032eaed05abf2db83803", "summary": []} {"text": "2015-16 New Year review\n\n2015 progress\nResearch:\n\nFinished paper on the Selective Bayesian Forest Classifier algorithm\nMade an R package for SBFC (beta)\nWorked at Google on unsupervised learning for the Knowledge Graph with Moshe Looks during the summer (paper)\nJoined the HIPS research group at Harvard CS and started working with the awesome Finale Doshi-Velez\nRatio of coding time to writing time was too high overall\n\nFLI:\n\nCo-organized two meetings to brainstorm biotechnology risks\nCo-organized two Machine Learning Safety meetings\nGave a talk at the Shaping Humanity’s Trajectory workshop at EA Global\nHelped organize NIPS symposium on societal impacts of AI\n\nRationality / effectiveness:\n\nExtensive use of FollowUpThen for sending reminders to future selves\nMapped out my personal bottlenecks\nSleep:\n\nTracked insomnia (26% of nights) and sleep time (average 1:30am, stayed up past 1am on 31% of nights)\nStarted working on sleep hygiene\nStopped using melatonin (found it ineffective)\n\n\n\n\nRandom cool things I did:\n\nImprov class\nAerial silks class\nClimbed out of a glacial abyss (moulin)\nPlaced second at Toastmasters area speech contest\n\n\n\n2015 prediction outcomes\nOut of the 17 predictions I made a year ago, 5 were true, and the rest were false.\n\nSubmit the SBFC paper for publication (95%)\nSubmit another paper besides SBFC (40%)\nPresent SBFC results at a conference (JSM, ICML or NIPS) (40%) – presented at a workshop (NESS)\nGet a new external fellowship to replace my expiring NSERC fellowship (50%)\nSkim at least 20 research papers in machine learning (70%) – probably a lot more\nWrite at least 12 blog posts (70%) – wrote 9 posts\nClimb a 5.12 without rope cheating (50%) – no longer endorsed at this level\nLead climb a 5.11a (50%) – no longer endorsed at this level\nDo 10 pullups in a row (60%) – no longer endorsed at this level\nMeditate at least 150 times (80%) – 206 times\nRecord at least 150 new thoughts (70%) – recorded 62, no longer endorsed at this level\nMake at least 100 Anki cards by the end of the year (70%)\nRead at least 10 books (60%) – read 4 books, no longer endorsed at this level\nAttend Burning Man (90%)\nBoston will have a second rationalist house by the end of the year (30%)\nFLI will hire a full-time project manager or administrator (80%) – no, but we now have a full time website editor…\nFLI will start a project on biotech safety (70%) – had some meetings, but no concrete action plan yet\n\nCalibration:\n\nlow predictions, 30-60%: 0/8 = 0% (super overconfident)\nhigh predictions, 70-95%: 5/9 = 56% (overconfident)\n\n(Yikes! Worse than last year…)\nConclusions:\n\nI forgot about most of these goals after a few months – will need a recurring reminder for next year.\nAll 3 physical goals ended up disendorsed – I think I set those way too high. 
My climbing habits got disrupted by moving to California in summer and a hand injury, so I’m still trying to return to my spring 2014 skill level.\n\n2016 goals and predictions\nGiven the overconfidence of last year’s predictions, toning it down for next year.\nResolutions:\n\nFinish PhD thesis (70%)\nWrite at least 12 blog posts (40%)\nMeditate at least 200 days (50%)\nExercise at least 200 days (50%)\nDo at least 5 pullups in a row (40%)\nRecord at least 50 new thoughts (50%)\nStay up at most 20% of the nights (40%)\nDo at least 10 pomodoros per week on average (50%)\n\nPredictions:\n\nAt least one paper accepted for publication (70%)\nI will get at least one fellowship (40%)\nInsomnia at most 20% of nights (20%)\nFLI will co-organize at least 3 AI safety workshops (50%)\n", "url": "https://vkrakovna.wordpress.com/2015/12/31/2015-16-new-year-review/", "title": "2015-16 New Year review", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2015-12-31T05:04:51+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=4", "authors": ["Victoria Krakovna"], "id": "e1e6557bf76d14a699e0103ed415d072", "summary": []} {"text": "Highlights and impressions from NIPS conference on machine learning\n\nThis year’s NIPS was an epicenter of the current enthusiasm about AI and deep learning – there was a visceral sense of how quickly the field of machine learning is progressing, and two new AI startups were announced. Attendance has almost doubled compared to the 2014 conference (I hope they make it multi-track next year), and several popular workshops were standing room only. Given that there were only 400 accepted papers and almost 4000 people attending, most people were there to learn and socialize. The conference was a socially intense experience that reminded me a bit of Burning Man – the overall sense of excitement, the high density of spontaneous interesting conversations, the number of parallel events at any given time, and of course the accumulating exhaustion.\nSome interesting talks and posters\nSergey Levine’s robotics demo at the crowded Deep Reinforcement Learning workshop (we showed up half an hour early to claim spots on the floor). This was one of the talks that gave me a sense of fast progress in the field. The presentation started with videos from this summer’s DARPA robotics challenge, where the robots kept falling down while trying to walk or open a door. Levine proceeded to outline his recent work on guided policy search, alternating between trajectory optimization and supervised training of the neural network, and granularizing complex tasks. He showed demos of robots successfully performing various high-dexterity tasks, like opening a door, screwing on a bottle cap, or putting a coat hanger on a rack. Impressive!\n\nGenerative image models using a pyramid of adversarial networks by Denton & Chintala. Generating realistic-looking images using one neural net as a generator and another as an evaluator – the generator tries to fool the evaluator by making the image indistinguishable from a real one, while the evaluator tries to tell real and generated images apart. Starting from a coarse image, successively finer images are generated using the adversarial networks from the coarser images at the previous level of the pyramid. 
The resulting images were mistaken for real images 40% of the time in the experiment, and around 80% of them looked realistic to me when staring at the poster.\nPath-SGD by Salakhutdinov et al, a scale-invariant version of the stochastic gradient descent algorithm. Standard SGD uses the L2 norm as the measure of distance in the parameter space, and rescaling the weights can have large effects on optimization speed. Path-SGD instead regularizes the maximum norm of incoming weights into any unit, minimizing the max-norm over all rescalings of the weights. The resulting norm (called a “path regularizer”) is shown to be invariant to weight rescaling. Overall a principled approach with good empirical results.\nEnd-to-end memory networks by Sukhbaatar et al (video), an extension of memory networks – neural networks that learn to read and write to a memory component. Unlike traditional memory networks, the end-to-end version eliminates the need for supervision at each layer. This makes the method applicable to a wider variety of domains – it is competitive both with memory networks for question answering and with LSTMs for language modeling. It was fun to see the model perform basic inductive reasoning about locations, colors and sizes of objects.\nNeural GPUs (video), Deep visual analogy-making (video), On-the-job learning, and many others.\nAlgorithms Among Us symposium (videos)\nA highlight of the conference was the Algorithms Among Us symposium on the societal impacts of machine learning, which I helped organize along with others from FLI. The symposium consisted of 3 panels and accompanying talks – on near-term AI impacts, timelines to general AI, and research priorities for beneficial AI. The symposium organizers (Adrian Weller, Michael Osborne and Murray Shanahan) gathered an impressive array of AI luminaries with a variety of views on the subject, including Cynthia Dwork from Microsoft, Yann LeCun from Facebook, Andrew Ng from Baidu, and Shane Legg from DeepMind. All three panel topics generated lively debate among the participants.\n\n\n3 panels at #AlgorithmsAmongUs symposium #NIPS2015 pic.twitter.com/X1KNAwwkWW\n— Victoria Krakovna (@vkrakovna) December 15, 2015\n\nAndrew Ng took his famous statement that “worrying about general AI is like worrying about overpopulation on Mars” to the next level, namely “overpopulation on Alpha Centauri” (is Mars too realistic these days?). His main argument was that even superforecasters can’t predict anything 5 years into the future, so any predictions on longer time horizons are useless. This seemed like an instance of the all-too-common belief that “we don’t know, therefore we are safe”. As Murray pointed out, having complete uncertainty past a 5-year horizon means that you can’t rule out reaching general AI in 20 years either. Encouragingly, Ng endorsed long-term AI safety research, saying that it’s not his cup of tea but someone should be working on it.\nWith regards to roadmapping the remaining milestones to general AI, Yann LeCun gave an apt analogy of traveling through mountains in the fog – there are some you can see, and an unknown number hiding in the fog. He also argued that advanced AI is unlikely to be human-like, and cautioned against anthropomorphizing it.\nIn the research priorities panel, Shane Legg gave some specific recommendations – goal system stability, interruptibility, sandboxing / containment, and formalization of various thought experiments (e.g. in Superintelligence). 
He pointed out that AI safety is both overblown and underemphasized – while the risks from advanced AI are not imminent the way they are usually portrayed in the media, more thought and resources need to be devoted to the challenging research problems involved.\nOne question that came up during the symposium is the importance of interpretability for AI systems, which is actually the topic of my current research project. There was some disagreement about the tradeoff between effectiveness and interpretability. LeCun thought that the main advantage of interpretability is increased robustness, and improvements to transfer learning should produce that anyway, without decreases in effectiveness. Percy Liang argued that transparency is needed to explain to the rest of the world what machine learning systems are doing, which is increasingly important in many applications. LeCun also pointed out that machine learning systems that are usually considered transparent, such as decision trees, aren’t necessarily so. There was also disagreement about what interpretability means in the first place – as Cynthia Dwork said, we need a clearer definition before making any conclusions. It seems that more work is needed both on defining interpretability and on figuring out how to achieve it without sacrificing effectiveness.\nOverall, the symposium was super interesting and gave a lot of food for thought (here’s a more detailed summary by Ariel from FLI). Thanks to Adrian, Michael and Murray for their hard work in putting it together.\nAI startups\nIt was exciting to see two new AI startups announced at NIPS – OpenAI, led by Ilya Sutskever and backed by Musk, Altman and others, and Geometric Intelligence, led by Zoubin Ghahramani and Gary Marcus.\nOpenAI is a non-profit with a mission to democratize AI research and keep it beneficial for humanity, and a whopping $1Bn in funding pledged. They believe that it’s safer to have AI breakthroughs happening in a non-profit, unaffected by financial interests, rather than monopolized by for-profit corporations. The intent to open-source the research seems clearly good in the short and medium term, but raises some concerns in the long run when getting closer to general AI. As an OpenAI researcher emphasized in an interview, “we are not obligated to share everything – in that sense the name of the company is a misnomer”, and decisions to open-source the research would in fact be made on a case-by-case basis.\nWhile OpenAI plans to focus on deep learning in their first few years, Geometric Intelligence is developing an alternative approach to deep learning that can learn more effectively from less data. Gary Marcus argues that we need to learn more from how human minds acquire knowledge in order to build advanced AI (an inspiration for the venture was observing his toddler learn about the world). I’m looking forward to what comes out of the variety of approaches taken by these new companies and other research teams.\n(Cross-posted on the FLI blog. 
Thanks to Janos Kramar for his help with editing this post.)", "url": "https://vkrakovna.wordpress.com/2015/12/24/highlights-and-impressions-from-nips-conference-on-machine-learning/", "title": "Highlights and impressions from NIPS conference on machine learning", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2015-12-25T01:31:06+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=4", "authors": ["Victoria Krakovna"], "id": "377533388fbb7bbae98f8c0869a020e0", "summary": []} {"text": "Risks from general artificial intelligence without an intelligence explosion\n\n“An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.”\n– Computer scientist I. J. Good, 1965\nArtificial intelligence systems we have today can be referred to as narrow AI – they perform well at specific tasks, like playing chess or Jeopardy, and some classes of problems like Atari games. Many experts predict that general AI, which would be able to perform most tasks humans can, will be developed later this century, with median estimates around 2050. When people talk about long term existential risk from the development of general AI, they commonly refer to the intelligence explosion (IE) scenario. AI risk skeptics often argue against AI safety concerns along the lines of “Intelligence explosion sounds like science-fiction and seems really unlikely, therefore there’s not much to worry about”. It’s unfortunate when AI safety concerns are rounded down to worries about IE. Unlike I. J. Good, I do not consider this scenario inevitable (though relatively likely), and I would expect general AI to present an existential risk even if I knew for sure that intelligence explosion were impossible.\n\nHere are some dangerous aspects of developing general AI, besides the IE scenario:\n\nHuman incentives. Researchers, companies and governments have professional and economic incentives to build AI that is as powerful as possible, as quickly as possible. There is no particular reason to think that humans are the pinnacle of intelligence – if we create a system without our biological constraints, with more computing power, memory, and speed, it could become more intelligent than us in important ways. The incentives are to continue improving AI systems until they hit physical limits on intelligence, and those limitations (if they exist at all) are likely to be above human intelligence in many respects.\nConvergent instrumental goals. Sufficiently advanced AI systems would by default develop drives like self-preservation, resource acquisition, and preservation of their objective functions, independent of their objective function or design. This was outlined in Omohundro’s paper and more concretely formalized in a recent MIRI paper. Humans routinely destroy animal habitats to acquire natural resources, and an AI system with any goal could always use more data centers or computing clusters.\nUnintended consequences. As in the stories of Sorcerer’s Apprentice and King Midas, you get what you asked for, but not what you wanted. This already happens with narrow AI, like in the frequently cited example from the Bird & Layzell paper: a genetic algorithm was supposed to design an oscillator using a configurable circuit board, and instead designed a makeshift radio that used signal from neighboring computers to produce the requisite oscillating pattern. 
Unintended consequences produced by a general AI, more opaque and more powerful than a narrow AI, would likely be far worse.\nValue learning is hard. Specifying common sense and ethics in computer code is no easy feat. As argued by Stuart Russell, given a misspecified value function that omits variables that turn out to be important to humans, an optimization process is likely to set these unconstrained variables to extreme values. Think of what would happen if you asked a self-driving car to get you to the airport as fast as possible, without assigning value to obeying speed limits or avoiding pedestrians. While researchers would have incentives to build in some level of common sense and understanding of human concepts that is needed for commercial applications like household robots, that might not be enough for general AI.\nValue learning is insufficient. Even an AI system with perfect understanding of human values and goals would not necessarily adopt them. Humans understand the “goals” of the evolutionary process that generated us, but don’t internalize them – in fact, we often “wirehead” our evolutionary reward signals, e.g. by eating sugar.\nContainment is hard. A general AI system with access to the internet would be able to hack thousands of computers and copy itself onto them, thus becoming difficult or impossible to shut down – this is a serious problem even with present-day computer viruses. When developing an AI system in the vicinity of general intelligence, it would be important to keep it cut off from the internet. Large scale AI systems are likely to be run on a computing cluster or on the cloud, rather than on a single machine, which makes isolation from the internet more difficult. Containment measures would likely pose sufficient inconvenience that many researchers would be tempted to skip them.\n\nSome believe that if intelligence explosion does not occur, AI progress will occur slowly enough that humans can stay in control. Given that human institutions like academia or governments are fairly slow to respond to change, they may not be able to keep up with an AI that attains human-level or superhuman intelligence over months or even years. Humans are not famous for their ability to solve coordination problems. Even if we retain control over AI’s rate of improvement, it would be easy for bad actors or zealous researchers to let it go too far – as Geoff Hinton recently put it, “the prospect of discovery is too sweet”.\nAs a machine learning researcher, I care about whether my field will have a positive impact on humanity in the long term. The challenges of AI safety are numerous and complex (for a more technical and thorough exposition, see Jacob Steinhardt’s essay), and cannot be rounded off to a single scenario. I look forward to a time when disagreements about AI safety no longer derail into debates about IE, and instead focus on other relevant issues we need to figure out.\n(Cross-posted on the FLI blog. 
Thanks to Janos Kramar for his help with editing this post.)", "url": "https://vkrakovna.wordpress.com/2015/11/29/ai-risk-without-an-intelligence-explosion/", "title": "Risks from general artificial intelligence without an intelligence explosion", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2015-11-30T04:48:36+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=4", "authors": ["Victoria Krakovna"], "id": "5dd3b74b20c11d4bf479c7e8315d4d30", "summary": []} {"text": "Nomadism and Burning Man\n\nThe past few months have been quite nomadic even by my standards. Visiting the MIRI fellows program, EA Global, Alaska camping (in 4 different parks), CFAR alumni reunion, and finally Burning Man. It was an exciting social time – I had many great conversations and coordinated with various people. It was also tiring to keep up with all the schedule changes, packing and unpacking, and my habits fell through the cracks (meditation, exercise and the like). It is a relief to be back at Citadel with all my stuff in one place, in a stable work and social environment. It almost feels like I’d never left.\nIt seemed appropriate to conclude the wanderings with Burning Man, which was the most like an actual vacation for me. This year was a good combination of spontaneity and scheduling, adventures and conversations.\n\nJanos and I practiced the beginner mindset at various circus arts. During the long wait on the road into BRC, it turned out that the car in front of us had some hula hoops, buugengs and fans, and people who knew how to use them and were willing to teach. A buugeng is an S-shaped wooden object that can be twisted and thrown around – learning to throw it over my shoulder and catch it involved a lot of running around and hitting my fingers, but I got it eventually. We discovered that our camp was right next to the Hellfire Society, which ran fire spinning classes. We were late for the actual instruction, but got to do a 30-second demo with a real fire staff (turns out, even the most basic clockwise spin looks cool with real fire), and watch Michael Vassar spin fire poi like a pro. We also went to an aerial silks workshop, which inspired at least one friend to continue learning this in the default world.\nWe got to fight in the Thunderdome! This consists of whacking each other with foam swords while hanging from the ceiling in harnesses. The fighter who is judged to be the most aggressive wins, though they make a bigger deal out of the show than the outcome. Part of the challenge is retaining contact with the other person (or their sword) while whacking them with your sword as much as possible. We stood in line for an hour watching other people fight, despite our attempts to bribe the leather-clad staff with a bottle of alcohol to skip the line (they refused to accept anything other than their customary bribe of good whiskey). The deafening metal music and the shouts of the numerous spectators built up a lot of adrenaline. I promptly forgot all the advice the staff gave me about positions and strategies, and just had fun getting my ass kicked by Sam. Janos fought a guy who was a head taller than him and skilled in martial arts, who he managed to disarm twice. We left the place excited and tired, and spent the rest of the evening in a relaxing cuddle puddle.\nOur playa adventures were interspersed with various conversations and reflections:\n\nA (surprisingly analytical) talk about shame at Mystic camp. 
The main insight was that a lot of what sustains addictions is the shame around them, and the shame itself is addictive. One prediction made by this theory is that signaling endorsement for addictive activities (e.g. eating junk food on a nice plate at a candlelit table) reduces their addictiveness a lot. It might be useful to install a trigger-action plan like “notice I am ashamed of something I do regularly -> try self-signaling endorsement for the activity”.\nA discussion about common sources of motivation for doing things. I’m often motivated by importance or indignation about the status quo, while puzzle-solving motivation is something I used to engage more in my math contest days, and could use more of now. There is also a kind of motivation to do procedural things with my hands, like helping to build camp, which could be called tactile motivation, though that doesn’t sound quite right.\nLearning about an underappreciated physics phenomenon called counterflow heat exchange, which allows two containers of fluid to go almost all the way towards swapping temperatures. (h/t Danielle Fong)\n\nBurning Man is a good place to practice spontaneity, going along with whatever comes up. This year I tried to retain this frame of mind even while going to scheduled events. The go-to phrase that I kept repeating in my head was “roll with it”. This was actually quite effective at eliminating fear of missing out, which is usually pervasive in a place where a hundred events are going on at any given time.\nThis is also the only time when I actually manage to “unplug” for a whole week. It doesn’t seem to work during other vacations – for example, in Alaska I kept looking for wifi during rest stops in towns. Burning Man, on the other hand, was both sufficiently stimulating and sufficiently remote that I experienced no desire to check email or other internet things, even though there is actually a wifi camp on the playa. Occasionally reminding my system 1 that the sky does not fall if I ignore my email for a week is pretty valuable. I am now trying to sustain the spontaneity and email-ignoring mindsets in the default world to some extent.", "url": "https://vkrakovna.wordpress.com/2015/09/20/nomadism-and-burning-man/", "title": "Nomadism and Burning Man", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2015-09-21T04:59:13+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=4", "authors": ["Victoria Krakovna"], "id": "4a6b31f27378fc332bab5471db31a21c", "summary": []} {"text": "Systems I have tried: an overview\n\nI have used various organization and productivity systems in the past few years – this is an overview of what worked and what didn’t.\nMain systems I currently use:\n\nFollow Up Then: Sends an email to a future self, with the date and time specified in the email address, e.g. fri6pm@fut.io. I use it for delaying tasks, recurring reminders, and following up on email threads. This reduces clutter in my todo list, calendar and inbox, and frees my working memory. Lately, I noticed myself remembering a thing shortly before receiving a follow up about it – probably due to the same mechanism that sometimes wakes me up a few minutes before the morning alarm.\nComplice: Daily to-do list organized according to goals, with archives and regular reviews. Helpful for specifying the next action to take at a given time, and for tracking progress on individual goals. 
Downside: I sometimes hesitate to enter tasks into the list, because entered tasks cannot be erased, and leaving a task unfinished is aversive, so I often end up entering tasks after they are done instead.\nWorkflowy: Nested list structure – searchable, with collapsible and sharable sublists. I keep my ongoing todo list (in GTD form) and most of my notes here. Downside: doesn’t work for goal factoring, since it only supports tree structures.\nGoogle Calendar: Self-explanatory. I have recently started adding tentative meeting slots, indicated by a question mark, e.g. “dinner with Janos?”. This has been helpful for keeping track of which time slots I’ve offered to someone. I also added a calendar that shows Facebook events that I’ve been invited to, which is handy.\n42 Goals: Goal tracking with summary graphs and cute symbols. I use this for tracking habits (like exercise and meditation) and other random things (like insomnia occurrences). The graphs are useful – this is how I know that I have the most insomnia on Mondays! Downsides: doesn’t allow non-binary categories, and the phone app is so unreliable that I never use it – if you know good alternative tracking systems, let me know!\n\n\nSystems I no longer use:\n\nBeeminder: Goal tracking with nice graphs, and goal setting with reminders and financial penalties in case of failure. I liked the graphs and reminders, but the penalties made me feel even more overwhelmed than usual, and sometimes induced suboptimal short-term priorities. I decided to obtain the different benefits separately, setting recurring reminders for habits on Follow Up Then, and using 42 Goals for tracking.\nToggl: Time tracking for activities and tasks, organized by project or goal, with an option for retroactive time entries. I started out using it to track all my time, and though I stopped after about a month due to the excessive overhead of tracking and categorizing short activities, I learned a lot about where my time was going. I used it for about a year after that to track work hours, and eventually stopped because of overhead and redundancy with Complice.\nPaper checklist: Checklist for daily habits. Worked well in terms of catching my eye in the morning, but was often forgotten when traveling. It was redundant with 42 Goals, and required double data entry, so I eventually gave up on the paper version.\nCoach.me: Habit tracking with reminders, with a pretty good phone app. I found it particularly useful for several-times-a-week habits. It also has built-in habit programs like building up to a certain number of chinups. I mostly stopped using it because I had too many other systems that were redundant with it.\nPomodoros: Setting a timer to focus on a specific task for 25-40 minutes, followed by a break of 5 minutes. 
I found it unpleasant to be forced to take breaks, developed a habit of ignoring the break signal, and gave up on using pomodoros altogether.\n\nOver the past couple of years, I have become less willing to force myself to do things or overwhelm myself with instructions or data entry overhead, which has led me to reduce the number of systems I use, and to prefer gently guiding systems to strict ones.", "url": "https://vkrakovna.wordpress.com/2015/07/26/systems-i-have-tried-an-overview/", "title": "Systems I have tried: an overview", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2015-07-26T05:01:22+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=4", "authors": ["Victoria Krakovna"], "id": "4041240dfcfbcef2e5512662392b1b1f", "summary": []} {"text": "Hamming questions and bottlenecks\n\nThe CFAR alumni workshop on the first weekend of May was focused on the Hamming question. Mathematician Richard Hamming was known to approach experts from other fields and ask “what are the important problems in your field, and why aren’t you working on them?”. The same question can be applied to personal life: “what are the important problems in your life, and what is stopping you from working on them?”.\nOver the course of the weekend, the twelve of us asked this question of ourselves and each other, in many forms and guises: “if Vika isn’t making a major impact on the world in 5 years, what would have stopped her?”, “what are your greatest bottlenecks?”, “how can we actually try?”, etc. The intense focus on mental pain points was interspersed with naps and silly games to let off steam. On the last day, we did a group brainstorm, where everyone who wanted to receive feedback took a turn in the center of the circle, and everyone else speculated on what they thought were the biggest bottlenecks of the person in the center. By this time, we had mostly gotten to know each other, and even the impressions from those who knew me less well were surprisingly accurate. I am very grateful to everyone at the workshop for being so insightful and supportive of each other (and actually caring).\n\nMost of the issues that came up were things I was aware of on some level, but over the course of the workshop it became particularly salient to me how interconnected my problems are and how the gears in the system affect each other. Working memory overload leads to confusion, which reduces confidence. Sleep deprivation reduces working memory and increases anxiety. Anxiety reduces the affordance for exploration and creativity, and increases the frequency of insomnia. Ignoring or neglecting signals from system 1 takes up working memory slots with looping messages from system 1, and increases anxiety. After a few of these circular explanations, I gave up on writing them all down, and made a diagram instead.\n\nA few things jump out at me about this diagram. The highest degree nodes are anxiety and working memory, both of which are difficult to affect directly. The two nodes I have the most influence over are the amount of sleep I get and the degree to which I listen to system 1 signals. I have started experimenting with sleep interventions that I haven’t yet tried, like taking melatonin 4 hours before bedtime, using a weighted blanket, etc. Attunement to system 1 can be improved through meditation, Focusing, belief reporting and such. 
While I have sporadically meditated for years, I could use more practice at the other techniques, which involve more explicit internal querying than meditation.\nCuriously, the graph also appears to have a source and a sink. The source node is my overdeveloped sense of duty – a tendency to assume that I should do things or be able to do things – which causes a lot of downstream issues. It would be impactful to directly hack this and become more selfish, but it appears to be a bit trickier than doing a find-and-replace on my source code, replacing “I have to do X” with “my goals require X”. The sink node has to do with my capacity to allow myself time and mental space for exploration and creativity, which would among other things enable me to pursue my high-level goals better (e.g. research and organization strategy).\nA week after the workshop, I moved to California for my summer internship at Google. The context shift and my new location a few blocks down from the CFAR office will allow me to work on my bottlenecks more systematically. I have wrestled with these for a long time, but now I feel that I have better tools and resources than ever before.", "url": "https://vkrakovna.wordpress.com/2015/05/17/hamming-questions-and-bottlenecks/", "title": "Hamming questions and bottlenecks", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2015-05-17T17:35:53+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=4", "authors": ["Victoria Krakovna"], "id": "19baf8fe8d6bde7f0649fe801735f795", "summary": []} {"text": "Negative visualization, radical acceptance and stoicism\n\nIn anxious, frustrating or aversive situations, I find it helpful to visualize the worst case that I fear might happen, and try to accept it. I call this “radical acceptance”, since the imagined worst case is usually an unrealistic scenario that would be extremely unlikely to happen, e.g. “suppose I get absolutely nothing done in the next month”. This is essentially the negative visualization component of stoicism.\nThere are many benefits to visualizing the worst case:\n\nFeeling better about the present situation by contrast.\nTurning attention to the good things that would still be in my life even if everything went wrong in one particular domain.\nWeakening anxiety using humor (by imagining an exaggerated “doomsday” scenario).\nBeing more prepared for failure, and making contingency plans (pre-hindsight).\nHelping make more accurate predictions about the future by reducing the “X isn’t allowed to happen” effect (or, as Anna Salamon once put it, “putting X into the realm of the thinkable”).\nReducing the effect of ugh fields / aversions, which thrive on the “X isn’t allowed to happen” flinch.\nWeakening unhelpful identities like “person who is always productive” or “person who doesn’t make stupid mistakes”.\n\n\nLet’s say I have an aversion around meetings with my advisor, because I expect him to be disappointed with my research progress. When I notice myself worrying about the next meeting or finding excuses to postpone it so that I have more time to make progress, I can imagine the worst imaginable outcome a meeting with my advisor could have – perhaps he might yell at me or even decide to expel me from grad school (neither of these has actually happened so far). If the scenario is starting to sound silly, that’s a good sign. 
I can then imagine how this plays out in great detail, from the disappointed faces and words of the rest of the department to the official letter of dismissal in my hands, and consider what I might do in that case, like applying for industry jobs. While building up these layers of detail in my mind, I breathe deeply, which I associate with meditative acceptance of reality. (I use the word “acceptance” to mean “acknowledgement” rather than “resignation”.)\nI am trying to use this technique more often, both in the regular and situational sense. A good default time is my daily meditation practice. I might also set up a trigger-action habit of the form “if I notice myself repeatedly worrying about something, visualize that thing (or an exaggerated version of it) happening, and try to accept it”. Some issues have more natural triggers than others – while worrying tends to call attention to itself, aversions often manifest as a quick flinch away from a thought, so it’s better to find a trigger among the actions that are often caused by an aversion, e.g. procrastination. A trigger for a potentially unhelpful identity could be a thought like “I’m not good at X, but I should be”. A particular issue can simultaneously have associated worries (e.g. “will I be productive enough?”), aversions (e.g. towards working on the project) and identities (“productive person”), so there is likely to be something there that makes a good trigger. Visualizing myself getting nothing done for a month can help with all of these to some degree.\nSystem 1 is good at imagining scary things – why not use this as a tool?\nCross-posted", "url": "https://vkrakovna.wordpress.com/2015/03/26/negative-visualization-radical-acceptance-and-stoicism/", "title": "Negative visualization, radical acceptance and stoicism", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2015-03-27T03:36:57+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=5", "authors": ["Victoria Krakovna"], "id": "3098e327f32cb9bb82db07ae6d57c71c", "summary": []} {"text": "Future of Life Institute’s recent milestones in AI safety\n\nIn January, many months of effort by FLI’s founders and volunteers finally came to fruition – the Puerto Rico conference, open letter and grants program announcement took place in rapid succession. The conference was a resounding success according to many of the people there, who were impressed with the quality of the ideas presented and the way it was organized. There were opportunities for the attendees to engage with each other at different levels of structure, from talks to panels to beach breakout groups. The relaxed Caribbean atmosphere seemed to put everyone at ease, and many candid and cooperative conversations happened between attendees with rather different backgrounds and views.\nIt was fascinating to observe many of the AI researchers get exposed to various AI safety ideas for the first time. Stuart Russell’s argument that the variables that are not accounted for by the objective function tend to be pushed to extreme values, Nick Bostrom’s presentation on takeoff speeds and singleton/multipolar scenarios, and other key ideas were received quite well. One attending researcher summed it up along these lines: “It is so easy to obsess about the next building block towards general AI, that we often forget to ask ourselves a key question – what happens when we succeed?”.\n\nA week after the conference, the open letter outlining the research priorities went public. 
The letter and research document were the product of many months of hard work and careful thought by Daniel Dewey, Max Tegmark, Stuart Russell, and others. It was worded in optimistic and positive terms – the most negative word in the whole thing was “pitfalls”. Nevertheless, the media’s sensationalist lens twisted the message into things like “experts pledge to rein in AI research” to “warn of a robot uprising” and “protect mankind from machines”, invariably accompanied by a Terminator image or a Skynet reference. When the grants program was announced soon afterwards, the headlines became “Elon Musk donates…” to “keep killer robots at bay”, “keep AI from turning evil”, you name it. Those media portrayals shared a key misconception of the underlying concerns, that AI has to be “malevolent” to be dangerous, while the most likely problematic scenario in our minds is a misspecified general AI system with beneficial or neutral objectives. While a few reasonable journalists actually bothered to get in touch with FLI and discuss the ideas behind our efforts, most of the media herd stampeded ahead under the alarmist Terminator banner.\nThe open letter expresses a joint effort by the AI research community to step up to the challenge of advancing AI safety as responsible scientists. My main worry about this publicity angle is that this might be the first major exposure to AI safety concerns for many people, including AI researchers who would understandably feel attacked and misunderstood by the media’s framing of their work. It is really unfortunate to have some researchers turned away from the cause of keeping AI beneficial and safe without even engaging with the actual concerns and arguments.\nI am sometimes asked by reporters whether there has been too much emphasis on the superintelligence concerns that is “distracting” from the more immediate AI impacts like the automation of jobs and autonomous weapons. While the media hype is certainly not helpful towards making progress on either the near-term or long-term concerns, there is a pervasive false dichotomy here, as both of these domains are in dire need of more extensive research. The near-term economic and legal issues are already on the horizon, while the design and forecasting of general AI is a complex interdisciplinary research challenge that will likely take decades, so it is of utmost importance to begin the work as soon as possible.\nThe grants program on AI safety, fueled by Elon Musk’s generous donation, is now well under way, with the initial proposals due March 1. The authors of the best initial proposals will be invited to submit a more detailed full proposal by May 17. I hope that our program will help kickstart the emerging subfield of AI safety, stimulate open discussion of the ideas among the AI experts, and broaden the community of researchers working on these important and difficult questions. Stuart Russell put it well in his talk at the Puerto Rico conference: “Solving this problem should be an intrinsic part of the field, just as containment is a part of fusion research. 
It isn’t ‘Ethics of AI’, it’s common sense!”.", "url": "https://vkrakovna.wordpress.com/2015/02/16/flis-recent-milestones-in-ai-safety/", "title": "Future of Life Institute’s recent milestones in AI safety", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2015-02-16T19:32:20+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=5", "authors": ["Victoria Krakovna"], "id": "e3de4dcce5a66d38dce84e57df47c824", "summary": []} {"text": "2014-15 New Year review\n\n2014 progress\nIf someone had told me at the beginning of 2014 that I would co-found an organization to mitigate technological risks to humanity, I might not have believed them. Thanks Max, Meia, Anthony and Jaan for the great initiative!\nI am almost done with my first research project on variable selection and classification using a Bayesian forest model – I simplified the variable partition in the model, came up with better tree updates, added a hyperprior, sped up the algorithm by an order of magnitude, and started testing on real data. Other ambitious projects of the past year included two MIRIx workshops (plus writing up the results) and starting this blog.\nImprovements in personal effectiveness:\n\nstarted using a daily checklist of morning habits\nstarted taking melatonin every night\nstarted tagging new thoughts\nstarted using FollowUpThen to schedule future tasks without overloading my todo list\nstarted using Toggl to track work hours only\nstopped using Beeminder (too stressful), and replaced it with a combination of 42goals and FollowUpThen (works well)\nquit as President of the Toastmasters club\nmade a volunteer application form for FLI, so that instead of being inundated with 7 freeform emails per month from interested folks, I get the relevant information in an organized spreadsheet and I’m not required to respond\n\n\nRandom cool things I did:\n\nclimbing a 5.11a\nclimbing outdoors\nindoor surfing\npolar bear swim\n\n2014 resolutions\nA year ago, I made a number of New Year resolutions (and assigned a probability of completion for each goal). 
Here is how they worked out:\nSucceeded:\n\ncontinue meditation practice (>10 minutes daily, > 120 times) (70%) – did ~200 times\nstart a new research project (70%) – started working with a new advisor, did some background reading, narrowed down a topic, wrote a grant proposal\ndo at least 5 pullups in a row (85%)\nreading stats blogs\ngo to 3 conferences, including a MIRI workshop (80%) – went to NESS, JSM, NIPS, and two MIRIx workshops\ngive 5 speeches at Toastmasters (75%)\ngive at least 5 LessWrong meetup talks (70%)\nrun comfort zone expansion outings at least twice (80%)\nstart a group project at Citadel (40%) – FLI work groups and MIRIx workshops sort of count for this\nintroduce at least 3 friends to LW meetups (50%)\nhelp people achieve their goals – helped run a weekly habit training session at Citadel\n\nEssentially succeeded:\n\npublish paper about current research project (90%) – almost done, hope to submit by March 1\nwrite at least 5 LW posts (80%) – wrote 4 posts\nmore writing – reflections (did some), stories (sort of), poems (nope), also journals and blog posts\n\nFailed and no longer endorsed:\n\ncontinue to avoid Beeminder debt – didn’t work, then stopped using Beeminder, now use 42goals for goal tracking and FollowUpThen for reminders\ndo consulting for Metamed (60%)\nread 90% of the LW Sequences (70%) – made progress, but no longer want to read such a high fraction (waiting for the ebook to come out)\nfinish Pearl’s Causality (50%) – read the LW review of the book instead\nlearn more economics and biology\n\nCalibration:\n\nlow predictions, 40-60%: 2/4 = 50% (perfectly calibrated)\nhigh predictions, 70-90%: 7/10 = 70% (overconfident)\n\nConclusions:\n\nReading goals mostly don’t work for me – if I do set them, flexible goals of the form “read some of X” (like stats blogs) do better than more fixed and time-consuming goals like “read most of X” (like the Sequences)\nThe rate of goal disendorsement is 5/19 = 26%.\nI don’t tend to completely fail on goals I continue to endorse – yay!\n\n2015 goals and habits\nBroad categories of goals for the coming year:\n\nResearch:\n\nwrap up and submit BFC project,\nstart and make significant progress on 1-2 new projects\n\n\nFLI:\n\nhelp streamline operations and communication,\ncontinue work on AI safety outreach to AI researchers,\nencourage AI safety research,\nstart projects in risk areas other than AI safety\n\n\nSelf-improvement:\n\nreduce weekend/free-time anxiety,\nincrease acceptance of suboptimal situations,\nimprove introspection ability and my model of myself,\nimprove retention of information (using Anki),\neliminate cluttery speech pattern\n\n\n\nHabits to maintain:\n\ndaily meditation\ndaily pushups\ndaily melatonin\ntagging and writing down new thoughts\ntracking goals in 42goals\ntracking work hours in Toggl\njournaling 1-2 times / week\n\n2015 predictions\n\nI will submit the Bayesian Forest Classifier paper for publication (95%)\nI will submit another paper besides BFC (40%)\nI will present BFC results at a conference (JSM, ICML or NIPS) (40%)\nI will get a new external fellowship to replace my expiring NSERC fellowship (50%)\nI will skim at least 20 research papers in machine learning (70%)\nI will write at least 12 blog posts (70%)\nI will climb a 5.12 without rope cheating (50%)\nI will lead climb a 5.11a (50%)\nI will be able to do 10 pullups in a row (60%)\nI will meditate at least 150 times (80%)\nI will record at least 150 new thoughts (70%)\nI will make at least 100 Anki cards by the end of the year 
(70%)\nI will read at least 10 books (60%)\nI will attend Burning Man (90%)\nBoston will have a second rationalist house by the end of the year (30%)\nFLI will hire a full-time project manager or administrator (80%)\nFLI will start a project on biotech safety (70%)\n\nI will update this post if other goals and predictions for the year come to mind before the end of January.", "url": "https://vkrakovna.wordpress.com/2015/01/11/2014-15-new-year-review/", "title": "2014-15 New Year review", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2015-01-11T03:21:43+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=5", "authors": ["Victoria Krakovna"], "id": "28aa6383d233b6a7b4894ba995e922b9", "summary": []} {"text": "Open and closed mental states\n\nI learned a game at Burning Man this year that was about connecting to people and reading their nonverbal signals, called the “open-closed” game (h/t Minda Myers). There are two people in the game, and one is trying to approach the other and place a hand on their shoulder. No words can be exchanged, except that the person being approached can announce their emotional state as “open” or “closed”. When they say “closed”, the approacher may not get any closer until they say “open” again. The approachee monitors themselves for any internal discomfort associated with the other person, and says “closed” if that is the case. The approacher tries to keep the other person comfortable through their body language and eye contact, to get them to remain “open”.\nI have recently started playing this game with myself, with “open” representing openness to experience or being in the moment, and “closed” representing tunnel vision or discomfort with the way things are going. In a way, I imagine being “approached” by whatever situation I’m in, or whatever sequence of experiences is happening, instead of a person. I ask myself whether I am in the open or closed state, and try to shift to the open state whenever I notice being in the closed state.\n\nThere are a couple of reasons to try to do this. In the open state, I tend to be happier, more curious and observant and have more new thoughts. From a week of tracking my mental states and thought status using TagTime, I can make a preliminary conclusion that while old thoughts do occur in the open state, new thoughts never occur in the closed state. While the closed state makes me more efficient at doing straightforward tasks (e.g. by making me less distractable), it makes me less efficient at doing less straightforward tasks (e.g. by increasing my tendency to optimize locally rather than globally).\nThis is related to the concept of “againstness” taught by Valentine Smith at the Center for Applied Rationality, which is a sense of resisting something about the situation at hand. Learning to notice this sense more quickly is a valuable thing I learned at CFAR and through my meditation practice. Redirecting attention to body sensations is supposed to be helpful for dissipating againstness, but I have found it difficult to get myself to do this in the moment, and not particularly reliable when I do. Following the driving principle of “focusing on the road and not the curb”, I find it easier to shift to a mental state with a simple salient label like “open” instead of a clunky label like “non-againsty”. 
It also feels less judgmental to ask myself “what am I closed to right now, what experience am I not letting in?” than “what am I against right now?”.\nThe againstness approach seems to be about relaxing the mind by relaxing the body first, while for some people relaxing the mind first comes more naturally – I actually find myself automatically breathing deeper when shifting into the open state. For both approaches, the goal is the same – to let go of mental and physical tension before proceeding with what you are doing. The rule of thumb, like in the game, is to first get into the open state and then approach the situation at hand.\n(Cross-posted to LessWrong.)", "url": "https://vkrakovna.wordpress.com/2014/12/26/open-and-closed-mental-states/", "title": "Open and closed mental states", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2014-12-26T06:21:09+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=5", "authors": ["Victoria Krakovna"], "id": "297ee8dc887938b669764ae0e6b400b5", "summary": []} {"text": "Importance motivation: a double-edged sword\n\nWorking on something important is often stressful. A hypothesis came up in conversation (h/t Michael Vassar) that labeling your work as important actually decreases the quality of the output. I have seen this effect in action quite a bit, e.g. when someone can write an essay when commenting on a blogpost, but gets nailed by writer’s block when writing an essay on purpose. Feeling responsible for something important can be an obstacle to productivity when the work requires creativity. How can we do important creative work without being held back by importance considerations?\nThis has come up in my thesis research project on variable selection. It’s been “almost done” for a year now, and my advisor keeps reminding me that it’s important to finish soon. This motivation results in making little tweaks to the algorithm at the expense of looking at the big picture. However, most of the improvements to the algorithm came about through exploring models that I thought were more interesting than the default one. When I came up with a more streamlined version of the model, for a while the algorithm was doing worse than the original one, and I started cursing myself for following my elegance heuristics instead of doing what needed to be done. Then I found a bug, and the new algorithm reached a similar performance level to the old one. (I do eventually want to stop playing with the model and actually publish the thing, though…)\n\nI would like to distinguish between two types of importance: extrinsic and intrinsic. Extrinsic importance comes from external restrictions, like deadlines, funding availability, user demand or evaluation by high-status people. Intrinsic importance arises from the problem you’re solving – in my research example, this includes criteria like model simplicity and algorithm correctness. This is somewhat of a continuum, since a particular criterion can combine intrinsic and extrinsic importance, for example if my advisor insists on the algorithm being correct. Intrinsic importance fuels interest and curiosity, while extrinsic importance can inhibit creativity.\nThere is a quote by Howard Thurman: “Don’t ask yourself what the world needs; ask yourself what makes you come alive. And then go and do that. Because what the world needs is people who have come alive.” This can be interpreted as a suggestion to focus on intrinsic rather than extrinsic importance of your work. 
However, when choosing projects, it’s a good idea to be guided by extrinsic importance at least to some degree, lest you end up working on underwater basket weaving. Ideally, you’d want to find the intersection between the things that the world needs and the things that make you come alive. Throwing out the importance criterion once you start the project seems suboptimal, since it also applies to choosing subproblems within the project.\nThis can be helped by separating idea generation from idea evaluation, process focus from outcome focus. Write a sloppy draft of the essay, or imagine that you are writing to a close friend, and then step back and edit it based on the needs of your intended audience. Play with a new model for a while, occasionally asking yourself whether to prune this line of inquiry in favor of more promising ones. This process is analogous to a Kalman filter – an algorithm that alternates between an evolution step, where a new estimate is proposed, and an observation step, where that estimate is corrected using data from the real world. In practice, much more time needs to be spent on developing the next step than on evaluating it, and, more importantly, in longer chunks of time. You want to correct your course once in a while, instead of stifling good ideas before fully developing them.\nThere is a saying that I find both annoying and insightful – “life is too important to be taken seriously”. Like many sayings, it’s annoyingly vague, but can be interpreted usefully. My interpretation of choice is that too much evaluation of what you’re doing can make the outcome less good, which conflicts with the purpose of evaluating it in the first place. “Life is too important to be taken seriously too often”, how about that?", "url": "https://vkrakovna.wordpress.com/2014/10/18/importance-motivation-a-double-edged-sword/", "title": "Importance motivation: a double-edged sword", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2014-10-18T14:28:24+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=5", "authors": ["Victoria Krakovna"], "id": "c64e6d6412666933522a3e6cf9bc7805", "summary": []} {"text": "Citadel house sessions – a year in review\n\nSince we moved into Citadel House in Boston a year ago, we have run self-improvement and rationality sessions every week, for the housemates and some local LessWrongers. There were around 3 sessions running at a time, and some of them caught on much more than others. I will discuss the sessions in decreasing order of success.\nMeta Mondays\nMeta Mondays are self-improvement meetings with a theme – most meetings involved a particular activity announced (slightly) in advance, aimed at practicing a skill. Some example activities were:\n\nFeedback-a-thon. A large group of us got on Admonymous, and sent each other feedback messages simultaneously. We had a list of feedback categories on the board (social skills, hygiene, speech patterns, etc.), and some people requested feedback in specific categories. We were sufficiently persistent and prolific to overwhelm the website’s email quota!\nTable Topics Against Rationality. We combined the idea of Table Topics from Toastmasters (1-2 minute impromptu speeches) with Cards Against Rationality. Everyone took turns drawing two cards from the deck, and then giving a speech that connected the concepts on the two cards to each other. This devolved into silliness at the end, when we threw in the Cards Against Humanity deck.\nWRAP decision framework. 
(“Widen your options, Reality test assumptions, Attain emotional distance, Prepare to be right/wrong.”) I taught this decision procedure, and people played around with applying it, but most of the people present didn’t have a major upcoming decision to practice on.\n\nAt times when we didn’t have a theme, we would have a general conversation about what people were optimizing in their lives these days. The themed meetings were generally better attended and more focused, but also required more preparation. In future, we should invite guest hosts for Meta Mondays, instead of coming up with all the topics ourselves.\n\nOrder of the Sphex\nThis is a habit-building session, named after the digger wasp of CFAR fame, known for stalwartly adhering to its routines. We would go around the circle proposing habits to try for a week, and then on the second round everyone would commit to 1-2 habits to work on. These could be the same habits as last week, sometimes with modifications, or new habits. At the end of the session, I sent everyone involved an email with their commitments.\nDue to fluctuating attendance, in practice people followed up on their habits every few weeks instead of weekly. Some found it helpful; others said that talking about a difficult habit without immediately doing something about it made the action more aversive. Depending on why someone is not sticking to habits, this session can be useful if their main failure mode is forgetting or indecision, and counterproductive if the underlying problem is aversion.\nWriting\nSince many people procrastinate on writing things, we decided to get together and write every week, optionally sharing the writing with others. This session was successful for a while – people wrote blog posts, LessWrong articles, stories, journal entries, emails, etc. Gradually, some regular attendees dropped out, and we started skipping the session on most weeks. Some said that it was hard to write on demand or to finish a piece of writing in a two-hour time block. The usefulness of this session depended on the ability to pick up and continue a previously started piece of writing, which was challenging for many of us.\nGoal Factoring\nWe used the goal factoring technique from CFAR to analyze the motivations behind our actions and find alternative ways to achieve the same goals more cheaply. We also did some aversion factoring – analysis of potentially useful but unpleasant actions. (Here, “action” has a broad meaning that includes things like “thinking about X” or “having emotion X”.) We usually worked by ourselves for a while, and then (optionally) discussed our findings with each other.\nOne failure mode was people choosing more sharable actions to factor, which were usually less private or embarrassing, and thus likely less useful. We tried to remedy this by holding a “secret goal factoring” session, where people agreed not to share their results. In practice, most people said they didn’t factor anything particularly private or important in this session, so other barriers to analyzing important actions might be more significant than fear of sharing with a group.\nAfter a few months, we ran out of low-hanging fruit in terms of actions to factor, and this session was transformed into Meta Mondays.\nStrategic review\nFor a month or so, I tried to run a weekly CFAR-style strategic review session. 
This didn’t catch on at all – it was difficult to explain, and since the benefit of the technique depends on regularity, inconsistent attendance made it useless to everyone except me. These days, I do strategic reviews on my own once every few weeks.\nAll the sessions had difficulty staying focused on the activity without devolving into unstructured conversation. This could be helped by running at most two sessions at a time, since there would be more evenings free for random conversation. Another general takeaway is that sessions need a clear purpose that people buy into.\nI am running a survey to see which sessions people found useful last year and/or want to happen this coming year. Stay tuned!", "url": "https://vkrakovna.wordpress.com/2014/09/07/citadel-house-sessions-a-year-in-review/", "title": "Citadel house sessions – a year in review", "source": "vkrakovna.wordpress.com", "source_type": "blog", "date_published": "2014-09-07T04:36:57+00:00", "paged_url": "https://vkrakovna.wordpress.com/feed?paged=5", "authors": ["Victoria Krakovna"], "id": "239d31a7338dc17a4c3b2b1298cac113", "summary": []}