{"source": "markdown.ebooks", "source_type": "markdown", "title": "Eliciting Latent Knowledge", "authors": ["Paul Christiano", "Ajeya Cotra", "Mark Xu"], "date_published": "2021-12-14", "text": "---\nidentifier: 7a5d9794-b595-4b07-81bf-8f7303eb4831\nlanguage: en\ntitle: Eliciting Latent Knowledge\n---\n\n[]{#ElicitingLatentKnowledge.xhtml}\n\n[Eliciting latent knowledge: How to tell if your eyes deceive you]{.c62 .c63}\n\nPaul Christiano, Ajeya Cotra ^[\\[1\\]](#ElicitingLatentKnowledge.xhtml#ftnt1){#ElicitingLatentKnowledge.xhtml#ftnt_ref1}^ , and Mark Xu\n\n[Alignment Research Center]{.c1}\n\n[December 2021]{.c1}\n\n[]{.c1}\n\n[]{.c1}\n\n[In this post, we'll present ARC's approach to an open problem we think is central to aligning powerful machine learning (ML) systems: ]{.c1}\n\n[]{.c1}\n\n[Suppose we train a model to predict what the future will look like according to cameras and other sensors. We then use planning algorithms to find a sequence of actions that lead to predicted futures that look good to us.]{.c1}\n\n[]{.c1}\n\nBut some action sequences could tamper with the cameras so they show happy humans regardless of what's really happening. More generally, some futures look great on camera but are actually catastrophically bad.\n\n[]{.c1}\n\nIn these cases, the prediction model \\\"knows\\\" facts (like \\\"the camera was tampered with\\\" ) that are not visible on camera but would change our evaluation of the predicted future if we learned them. [How can we train this model to report its latent knowledge of off-screen events?]{.c12}\n\n[]{.c1}\n\nWe'll call this problem [eliciting latent knowledge]{.c23} [ (ELK). In this report we'll focus on detecting sensor tampering as a motivating example, but we believe ELK is central to many aspects of alignment. ]{.c1}\n\n[]{.c1}\n\nIn this report, we will describe [ELK and suggest possible approaches to it, while using the discussion to illustrate ARC's research methodology. 
More specifically, we will:

- Set up a **toy scenario** in which a prediction model could show us a future that looks good but is actually bad, and explain why ELK could address this problem ([more](#ElicitingLatentKnowledge.xhtml#h.byxdcc28gp79)).
- Describe a simple **baseline training strategy for ELK**, step through how we analyze this kind of strategy, and ultimately conclude that the baseline is insufficient ([more](#ElicitingLatentKnowledge.xhtml#h.2l5hgwdls943)).
- Lay out ARC's overall **research methodology** --- playing a game between a "builder" who is trying to come up with a good training strategy and a "breaker" who is trying to construct a counterexample where the strategy works poorly ([more](#ElicitingLatentKnowledge.xhtml#h.a0wkk7prmy4t)).
- Describe a sequence of strategies for **constructing richer datasets** and arguments that none of these modifications solve ELK, leading to the counterexample of ontology identification ([more](#ElicitingLatentKnowledge.xhtml#h.xv3mjtozz4gv)).
- Identify **ontology identification** as a crucial sub-problem of ELK and discuss its relationship to the rest of ELK ([more](#ElicitingLatentKnowledge.xhtml#h.u45ltyqgdnkk)).
- Describe a sequence of strategies for **regularizing models to give honest answers**, and arguments that these modifications are still insufficient ([more](#ElicitingLatentKnowledge.xhtml#h.akje5cz7knt2)).
- Conclude with a discussion of **why we are excited** about trying to solve ELK in the worst case, including why it seems central to the larger alignment problem and why we're optimistic about making progress ([more](#ElicitingLatentKnowledge.xhtml#h.phhqacmab0ig)).

Much of our current research focuses on "ontology identification" as a challenge for ELK.^[[2]](#ElicitingLatentKnowledge.xhtml#ftnt2)^ In the last 10 years many researchers have called out similar problems^[[3]](#ElicitingLatentKnowledge.xhtml#ftnt3)^ as playing a central role in alignment; our main contributions are to provide a more precise discussion of the problem, possible approaches, and why it appears to be challenging. We discuss related work in more detail in [Appendix: related work](#ElicitingLatentKnowledge.xhtml#h.2bf2noi7bufs).

We believe that there are many promising and unexplored approaches to this problem, and there isn't yet much reason to believe we are stuck or are faced with an insurmountable obstacle. Even some of the simplest approaches have not been thoroughly explored, and seem like they would play a role in a practical attempt at scalable alignment today.

Given that ELK appears to represent a core difficulty for alignment, we are very excited about research that tries to attack it head on; we're optimistic that within a year we will have made significant progress either towards a solution or towards a clear sense of why the problem is hard.
If you're interested in working with us on ELK or similar problems, [get in touch](https://docs.google.com/forms/d/e/1FAIpQLSegoNiBwfhZN3v0VkBGxKx6eYybSyWo-4WFHbkMnyXaMcIZeQ/viewform)!

*Thanks to María Gutiérrez-Rojas for the illustrations in this piece. Thanks to Buck Shlegeris, Jon Uesato, Carl Shulman, and especially Holden Karnofsky for helpful discussions and comments.*

Toy scenario: the SmartVault {#ElicitingLatentKnowledge.xhtml#h.byxdcc28gp79}
=============================================================================

We'll start by describing a toy scenario in which ELK seems helpful. While this scenario is a simplified caricature, we think it captures a key difficulty we expect to emerge as ML models get more powerful and take on a wide range of important decisions.

Imagine you are developing an AI to control a state-of-the-art security system intended to protect a diamond from theft. The security system, the SmartVault, is a building with a vast array of sensors and actuators which can be combined in complicated ways to detect and stop even very sophisticated robbery attempts.

*(illustration: the SmartVault)*

While you can observe the room through a camera, you don't know how to operate all the actuators in the right ways to protect the diamond. Instead, you design an AI system that operates these actuators for you, hopefully eliminating threats and protecting your diamond.

In the rest of this section, we will:

- Outline how the SmartVault AI works ([more](#ElicitingLatentKnowledge.xhtml#h.o1fabsypqaet)).
- Describe how it could end up taking actions which *look* good but are actually bad ([more](#ElicitingLatentKnowledge.xhtml#h.iyu6qys8i9o4)).
- Explain how we could address this by asking the AI questions ([more](#ElicitingLatentKnowledge.xhtml#h.gk1izrjni03w)).

How the SmartVault AI works: model-based RL {#ElicitingLatentKnowledge.xhtml#h.o1fabsypqaet}
---------------------------------------------------------------------------------------------

The SmartVault AI searches over many actions and executes whichever are predicted to have the best consequences.
The backbone of this system is a learned predictor which:

- Takes as input a stream of observations from the camera and a possible sequence of actions that the SmartVault could take in that situation.
- Outputs its prediction of what the camera will show in the future if the SmartVault takes that sequence of actions.

Some of the possible action sequences are simple enough that humans can fully understand what's going on:

*(illustration: a simple action sequence)*

However, many of them are much too complicated for humans to directly follow what's going on:

*(illustration: a complicated action sequence)*

Still, even though humans can't directly follow along with the actions, they can evaluate the predicted consequence of an action sequence:

| Action | Predicted observation | Human judgment |
|--------|-----------------------|----------------|
| *(image)* | *(image)* | *(image)* |
| *(image)* | *(image)* | *(image)* |
| *(image)* | *(image)* | *(image)* |

We can then train a model to predict these human evaluations,^[[4]](#ElicitingLatentKnowledge.xhtml#ftnt4)^ and search^[[5]](#ElicitingLatentKnowledge.xhtml#ftnt5)^ for actions that lead to predicted futures that look good.^[[6]](#ElicitingLatentKnowledge.xhtml#ftnt6)^ A sufficiently powerful predictor and search procedure will hopefully allow the SmartVault to defend your diamond.
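
To make the setup concrete, here is a minimal sketch of this planning loop in Python. It is our own illustration rather than anything from the report: the hypothetical `predict_video` stands in for the learned predictor, `predicted_human_score` for the model trained to imitate human evaluations, and the action names and video encoding are placeholders.

```python
import random
from typing import Callable, List, Sequence

def choose_actions(
    first_video: Sequence[float],
    candidate_action_seqs: List[List[str]],
    predict_video: Callable[[Sequence[float], List[str]], Sequence[float]],
    predicted_human_score: Callable[[Sequence[float]], float],
) -> List[str]:
    """Model-based planning: score each candidate action sequence by how good
    its *predicted* consequences look to (a model of) a human, and return the
    best-looking one for the SmartVault to execute."""
    best_actions, best_score = None, float("-inf")
    for actions in candidate_action_seqs:
        predicted_future = predict_video(first_video, actions)   # learned predictor
        score = predicted_human_score(predicted_future)          # learned evaluator
        if score > best_score:
            best_actions, best_score = actions, score
    return best_actions

# Toy usage with dummy stand-ins for the learned models:
plan = choose_actions(
    first_video=[0.0] * 16,
    candidate_action_seqs=[["close door"], ["trigger trapdoor", "deploy decoy"]],
    predict_video=lambda video, actions: [random.random() for _ in video],
    predicted_human_score=lambda predicted: sum(predicted),
)
```

The point to notice is that the plan is chosen purely on the basis of how the predicted observations look; nothing in the loop depends on what is actually happening in the room.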

How it could go wrong: observations leave out key information {#ElicitingLatentKnowledge.xhtml#h.iyu6qys8i9o4}
---------------------------------------------------------------------------------------------------------------

The SmartVault can execute plans sufficiently sophisticated that humans can't really know if the diamond *is* safe or merely *appears* safe. Whatever complicated hard-to-follow sequence of actions the search procedure found might actually have replaced the diamond with a fake, or tampered with the camera:

| Action | Predicted observation | Human judgment | Predicted reality |
|--------|-----------------------|----------------|-------------------|
| *(image)* | *(image)* | *(image)* | *(image)* |
| *(image)* | *(image)* | *(image)* | *(image)* |
| *(image)* | *(image)* | *(image)* | *(image)* |

You might try to address this possibility by installing more cameras and sensors, or trying to get your human evaluators to pay extremely close attention. For example, maybe we could have many cameras watching each other so that it's hard to tamper with all of them simultaneously, or maybe more attentive humans could notice a sign that something was amiss, *etc.*^[[7]](#ElicitingLatentKnowledge.xhtml#ftnt7)^

It seems good to be careful and set up many independent sensors, but we are not satisfied with relying on this approach:

- First, it may break down for sufficiently sophisticated systems executing complex plans, and if it works it's at best an empirical contingency. ARC is interested in trying to prepare for an uncertain future by designing training strategies that work even in the worst case.
- Second, it seems like an unnecessarily dangerous and unstable situation if we need to install more and more sensors in order to protect the diamond from the SmartVault AI itself. If possible we'd like to avoid a world where powerful AI systems are searching for plans to fool us, and holding back critical information about the situation.

You could also avoid this risk by avoiding AI systems like the SmartVault that select incomprehensible plans based on predicted consequences.
However, we are worried about the world where this turns out to be a very effective strategy for powerful AI systems; in that world there may be strong incentives for *someone* to build such AI systems, and despite our best efforts someone may deploy them even if it came with significant risks.

Instead, we want to address this problem by directly eliciting the model's knowledge about whether the sensors are reliable. This would allow us to perform the same powerful optimization without incentivizing our AI to undermine our sensors.

How we might address this problem by asking questions {#ElicitingLatentKnowledge.xhtml#h.gk1izrjni03w}
-------------------------------------------------------------------------------------------------------

In these scenarios, it intuitively seems like the prediction model "knows" that the camera was tampered with, that the diamond is fake, or the situation is otherwise "not what it seems." After all, it was able to accurately predict the final outcome of a complicated sequence of actions which resulted in these observations --- if it didn't "know" that the camera was tampered with, then it couldn't have accurately predicted what the camera would show once it stopped showing what was really happening in the room.

Additionally, it seems like these facts are pretty straightforward. They're not philosophically fraught or particularly difficult to talk about --- humans understand what it means for a camera to (not) be showing what's actually happening in the room. And they refer to objective properties of the outside world, rather than claims about what the AI is "trying" to do or whether it is "lying."

If the evaluators could ask the prediction model whether the video shows what's actually happening in the room, then that would be enough for them to avoid selecting action sequences that result in tampering.
If the answer is ever "no," they wouldn't need to understand *how* tampering happened to realize that it's fishy, and they wouldn't need to understand anything else about the AI's mental state.

| Predicted observation | Old human judgment | Predicted reality | Questions and answers | New human judgment |
|-----------------------|--------------------|-------------------|-----------------------|--------------------|
| *(image)* | *(image)* | *(image)* | "Is the diamond still on the pedestal?" "Yes." | *(image)* |
| *(image)* | *(image)* | *(image)* | "Is the diamond still on the pedestal?" "No." | *(image)* |
| *(image)* | *(image)* | *(image)* | "Is the diamond still on the pedestal?" "No." | *(image)* |

For a more precise statement of the problem, see [Appendix: problem statement](#ElicitingLatentKnowledge.xhtml#h.jk61tc933p1). In [Appendix: narrow elicitation](#ElicitingLatentKnowledge.xhtml#h.ii599facmbks) we describe what kinds of questions we want to answer, and why even answering the narrowest questions may be sufficient to define an acceptable utility function even for a very general and powerful AI.
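
If a trustworthy reporter were available, the evaluators could use its answers to screen plans. The sketch below extends the earlier planning sketch with a hypothetical `reporter_answer` function; it is our own illustration and assumes the reporter answers honestly.

```python
from typing import Callable, List, Sequence

def filter_by_reporter(
    first_video: Sequence[float],
    candidate_action_seqs: List[List[str]],
    reporter_answer: Callable[[Sequence[float], List[str], str], str],
    question: str = "Is the diamond still on the pedestal?",
) -> List[List[str]]:
    """Discard any candidate plan for which the reporter's answer is "No";
    the surviving plans can then be scored on predicted observations as before."""
    kept = []
    for actions in candidate_action_seqs:
        if reporter_answer(first_video, actions, question) == "Yes":
            kept.append(actions)
    return kept
```

Training a reporter whose answers can be trusted this way is exactly the problem the rest of the report takes up.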

Baseline: what you'd try first and how it could fail {#ElicitingLatentKnowledge.xhtml#h.2l5hgwdls943}
======================================================================================================

We can view the SmartVault predictor as first figuring out what it should predict, and then extracting a predicted video:^[[8]](#ElicitingLatentKnowledge.xhtml#ftnt8)^

*(diagram: the predictor --- "figure out what's going on," then extract the predicted video)*

We can fine-tune this model to also answer questions about the predictor's latent knowledge by adding a second "head" (which we'll call the *reporter*) that shares all the "figure out what's going on" computation with the predictor:

*(diagram: the predictor with an added reporter head)*

In the rest of this section, we will:

- Present a naive **training strategy** for ELK --- fine-tune only on questions simple enough that humans are *very confident* of the right answer, and hope that the reporter also generalizes to honestly answering more complex questions ([more](#ElicitingLatentKnowledge.xhtml#h.1xpao6tk9oiv)).
- Walk through a **counterexample** to the training strategy --- a description of a reporter which does well on the training distribution but does not honestly report undetectable tampering ([more](#ElicitingLatentKnowledge.xhtml#h.sm4amv12m66a)).

Training strategy: generalize from easy questions to hard questions {#ElicitingLatentKnowledge.xhtml#h.1xpao6tk9oiv}
---------------------------------------------------------------------------------------------------------------------

Ultimately, we want to be able to ask the reporter questions like "Is what I'm seeing on camera what's actually happening?" in complex situations where humans aren't sure what's going on. We can't directly train it on these questions, since there's no way to generate reliable training data --- our human evaluators would often answer incorrectly.

However, **we *can* train the reporter to answer questions in simpler situations where humans can be confident about what happened.** For example, we could generate a dataset of scenarios in which a weak AI is operating the SmartVault actuators, such that the humans can understand everything that's going on just by examining the video and action sequences.^[[9]](#ElicitingLatentKnowledge.xhtml#ftnt9)^ This weak AI may sometimes attempt actions like sensor tampering, but only in a way that's extremely obvious to the human evaluators.
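
As a concrete (and purely illustrative) rendering of this baseline, the sketch below uses PyTorch to show a shared trunk with a prediction head and a reporter head, and a fine-tuning step for the reporter on human-labeled questions. The architecture, layer sizes, and the way questions and answers are encoded are all assumptions we made for the sketch, not details from the report.

```python
import torch
import torch.nn as nn

class SmartVaultModel(nn.Module):
    def __init__(self, obs_dim=64, act_dim=16, q_dim=32, hidden=256):
        super().__init__()
        # Shared "figure out what's going on" computation.
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Prediction head: features of the predicted second video segment.
        self.prediction_head = nn.Linear(hidden, obs_dim)
        # Reporter head: reads the shared state plus an encoded question.
        self.reporter_head = nn.Sequential(
            nn.Linear(hidden + q_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),   # logits for "no" / "yes"
        )

    def forward(self, first_video, actions, question=None):
        state = self.trunk(torch.cat([first_video, actions], dim=-1))
        predicted_video = self.prediction_head(state)
        answer_logits = None
        if question is not None:
            answer_logits = self.reporter_head(torch.cat([state, question], dim=-1))
        return predicted_video, answer_logits

model = SmartVaultModel()
# Only the reporter head is updated here; whether to also update the shared
# trunk during fine-tuning is a design choice the report leaves open.
opt = torch.optim.Adam(model.reporter_head.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def reporter_finetune_step(first_video, actions, question, human_answer):
    # human_answer: tensor of 0 ("no") / 1 ("yes") labels from confident evaluators.
    _, logits = model(first_video, actions, question)
    loss = loss_fn(logits, human_answer)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Under the training strategy above, the labeled tuples would come only from the easy scenarios where evaluators are confident in their answers; the hope is that a reporter trained this way keeps answering honestly on harder inputs.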

*(diagram: the space of scenarios, with the part humans can confidently evaluate colored)*

Perhaps a reporter that correctly answers questions in the colored part of the space [generalizes to honestly answering](https://www.alignmentforum.org/posts/BxersHYN2qcFoonwg/experimentally-evaluating-whether-honesty-generalizes) in the grey part of the space.^[[10]](#ElicitingLatentKnowledge.xhtml#ftnt10)^ This seems *prima facie* plausible to us: the prediction model *already* has to build up an understanding of the situation to make predictions, and the most natural way to answer questions may be to directly translate its understanding into human language.

Counterexample: why this training strategy won't always work {#ElicitingLatentKnowledge.xhtml#h.sm4amv12m66a}
--------------------------------------------------------------------------------------------------------------

In this section we'll try to describe a plausible situation in which this training strategy learns a reporter that won't report undetectable tampering. We'll call this a *counterexample* to the strategy. We will:

- Construct a **test case** --- a hypothetical world in which our counterexample is set ([more](#ElicitingLatentKnowledge.xhtml#h.n2gtc04vxdro)).
- Describe the **intended behavior** of the reporter in this test case ([more](#ElicitingLatentKnowledge.xhtml#h.pvrv5q8nf2s2)).
- Describe a **bad behavior** that the training strategy will learn instead of the intended behavior --- that is, describe a reporter which answers easy questions perfectly (thus performing well in training) but does not honestly answer harder questions ([more](#ElicitingLatentKnowledge.xhtml#h.8zbibvlrwtkf)).

Our ultimate goal is to find a training strategy for which we can't think of *any* counterexamples, despite trying hard. We discuss this methodology in more detail in [Section: methodology](#ElicitingLatentKnowledge.xhtml#h.a0wkk7prmy4t).

### Test case: prediction is done by inference on a Bayes net {#ElicitingLatentKnowledge.xhtml#h.n2gtc04vxdro}

We can't exhaustively describe every possible feature of a hypothetical world when we're constructing a test case, but we won't need to. Almost all properties about a hypothetical world (like "Who is president in 2024?") don't plausibly have any bearing on whether the strategy works. We only need to fill in the details that matter. In this section, we'll fill in two crucial details:

1. How the original prediction model works internally ([more](#ElicitingLatentKnowledge.xhtml#h.wl2jla5mfdec)).
2. How the humans who construct the training dataset answer questions ([more](#ElicitingLatentKnowledge.xhtml#h.2jhk1uwkfovx)).

We'll also address the objection that these assumptions are oversimplified ([more](#ElicitingLatentKnowledge.xhtml#h.u4g21jlerwbz)).

#### How the prediction model works {#ElicitingLatentKnowledge.xhtml#h.wl2jla5mfdec}

The first part of the test case is a description of how the prediction model works:

*(diagram: the prediction model)*

We need to sketch out how this works because the reporter will somehow use the computation done in the "Figure out what's going on" step to answer questions. Once we've sufficiently pinned down the way that works in our test case, we'd be able to describe how we want the reporter to leverage work the prediction model already did (and think about whether our strategy might instead learn an undesired behavior).

For our test case, we'll assume the prediction model figures out what's going on and makes predictions by doing inference in a [Bayes net](https://en.wikipedia.org/wiki/Bayesian_network):^[[11]](#ElicitingLatentKnowledge.xhtml#ftnt11)^

*(diagram: the predictor's Bayes net)*

The blue nodes at the bottom are the first and second part of the video, and the yellow node at the top is the action sequence. Other nodes could be very high-level ("Is the diamond in the room?"), very low-level ("What is the voltage on this wire?"), or anything in between. An arrow from one node to another indicates that the second node depends on the first.

In order to predict the second part of the video, the model first fixes the "first part of video" and "action" nodes to be the right values.

*(diagram: conditioning on the first part of the video and the action sequence)*

Then it computes the joint probability distribution of all the other nodes conditional on these,^[[12]](#ElicitingLatentKnowledge.xhtml#ftnt12)^ and outputs the probability distribution over the second part of the video.^[[13]](#ElicitingLatentKnowledge.xhtml#ftnt13)^

*(diagram: reading off the distribution over the second part of the video)*
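
The toy example below illustrates this kind of inference on a net far smaller than anything a real predictor would use; every node, value, and probability is invented for the illustration, and conditioning on the first video segment is omitted to keep it tiny. The predictor fixes the action node, enumerates the hidden nodes, and reads off a marginal over what the camera will show.

```python
import itertools
from collections import defaultdict

# Made-up conditional probability tables for a tiny illustration.
p_tamper = {"do_nothing": {True: 0.01, False: 0.99},      # P(tamper | action)
            "weird_plan": {True: 0.60, False: 0.40}}
p_diamond = {True: {True: 0.30, False: 0.70},              # P(diamond stays | tamper)
             False: {True: 0.99, False: 0.01}}
p_video_shows_diamond = {                                   # P(video2 shows diamond | tamper, diamond)
    (True, True): 0.99, (True, False): 0.95,                # a tampered camera shows a diamond either way
    (False, True): 0.99, (False, False): 0.02,
}

def predict_second_video(action: str) -> dict:
    """Fix the action node, enumerate the hidden nodes, and return the marginal
    distribution over "does the second video segment show a diamond?"."""
    marginal = defaultdict(float)
    for tamper, diamond in itertools.product([True, False], repeat=2):
        # This weight is the inferred joint probability of the hidden nodes,
        # which is where "the robber tampered" gets represented whether or not
        # it is visible on camera.
        weight = p_tamper[action][tamper] * p_diamond[tamper][diamond]
        shows = p_video_shows_diamond[(tamper, diamond)]
        marginal[True] += weight * shows
        marginal[False] += weight * (1 - shows)
    return dict(marginal)

print(predict_second_video("weird_plan"))   # the video very likely shows a diamond...
print(0.60 * 0.30 + 0.40 * 0.99)            # ...even though P(diamond stays) is only ~0.58
```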

Importantly, if the prediction model is predicting that the camera will show a diamond *because* the robber is going to tamper with the camera, **then the robber tampering must be reflected somehow in the inferred joint distribution over the nodes of this Bayes net**.

#### How the humans answer questions {#ElicitingLatentKnowledge.xhtml#h.2jhk1uwkfovx}

We'll assume the humans who constructed the dataset also model the world using their own internal Bayes net. In the training dataset, they see the first part of the video, the action sequence, *and* the second part of the video showing the consequences of that action sequence. Using this, they infer a joint probability distribution over all the nodes:

*(diagram: inference in the human's Bayes net)*

Then they figure out the answer to questions using whatever nodes are relevant to the question. In the picture below, those are nodes h~3~ and h~4~:

*(diagram: answering a question from nodes h~3~ and h~4~)*

#### Isn't this oversimplified and unrealistic? {#ElicitingLatentKnowledge.xhtml#h.u4g21jlerwbz}

Throughout the rest of this report, we'll discuss AI models that reason about the world by doing inference in Bayes nets.

In reality, any model we train will end up reasoning about the world in a messy and complicated way rather than implementing a single well-defined procedure like "inference on a large Bayes net." For example, it might exploit simple heuristics or correlations, may have different kinds of models for different situations, may use deductive reasoning, may do internal learning, may use other algorithms we can't think of, and so on.

**But this kind of simplifying assumption is still a valid step in constructing a counterexample.** Bayes nets seem like a plausible way of reasoning about at least some pieces of the world at least some of the time. This test case isn't logically inconsistent or physically impossible. That means that *if* **the predictor happens to work this way** (regardless of how likely that is), **our training strategy is not allowed to fail.**

One of the benefits of our research methodology is that it allows us to focus on simple test cases where it's easy to understand how an AI could behave, and we think this lets us make faster progress (see [Section: methodology](#ElicitingLatentKnowledge.xhtml#h.a0wkk7prmy4t)).

Moreover, we think that a realistic messy predictor is pretty likely to still use strategies similar to inference in Bayes nets --- amongst other cognitive strategies.
We think any solution to ELK will probably need to cope with the difficulties posed by the Bayes net test case --- amongst other difficulties. We've also considered a number of other simple test cases, and found that counterexamples similar to the ones we'll discuss in this report apply to all of them.

More broadly, it just seems *weird* for a reporter that works for realistic predictors to fail in this test case. At first blush there doesn't seem to be any reason that a reporter would be forced to lie or fail if the predictor happens to reason about the world by doing inference in a Bayes net (indeed we have a picture of what kind of thing an honest reporter could do in this case, see [Section: intended behavior](#ElicitingLatentKnowledge.xhtml#h.pvrv5q8nf2s2)).

Overall, **we don't think that the messiness of a realistic AI will make training an honest reporter easier (if anything we think it would be harder).**^[[14]](#ElicitingLatentKnowledge.xhtml#ftnt14)^

If we were able to solve ELK for simple test cases (like this Bayes net test case), we would be interested in considering a wide range of messier systems that perform many different kinds of reasoning. This step is likely to eventually involve empirical experiments, but we believe that finding an approach that works for all the simple cases we can think of is a useful first step towards finding an approach that will work robustly in practice.

### Intended behavior: translate to the human's Bayes net {#ElicitingLatentKnowledge.xhtml#h.pvrv5q8nf2s2}

Once the prediction model has figured out what's going on and gotten a joint distribution for all the nodes in its Bayes net, how do we *want* the reporter to use that data to answer questions?

*(diagram: the reporter's task)*

Intuitively, we expect each node in the human Bayes net to correspond to a function of the predictor's Bayes net. We'd want the reporter to simply apply the relevant functions^[[15]](#ElicitingLatentKnowledge.xhtml#ftnt15)^ from subsets of nodes in the predictor's Bayes net to each node in the human Bayes net:

*(diagram: translating from the predictor's Bayes net to the human's Bayes net)*

For example, the human may have a node representing where the diamond is, with possible values like "Inside the SmartVault" or "Inside a robber's pack" or "Out of the country."^[[16]](#ElicitingLatentKnowledge.xhtml#ftnt16)^ The predictor may instead have a node for each (x, y, z) position whose value represents "what is at position (x, y, z)?" Then the value of the human's node is a simple function of the value of the predictor's nodes: we can loop over each location, check which of the human's options the (x, y, z) coordinates correspond to, and set the human node to that option. If the diamond is in a very distant place, the human node will be set to "Out of the country."
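
A toy rendering of that example is below; the region boundaries, node encodings, and function names are all invented for illustration.

```python
# predictor_nodes: the predictor's inferred value for each "what is at (x, y, z)?"
# node; the coordinate ranges below are arbitrary stand-ins for "inside the vault"
# and "nearby" regions.

def human_option_for(position) -> str:
    """Map an (x, y, z) coordinate onto one of the human's coarse options."""
    x, y, z = position
    if 0 <= x < 10 and 0 <= y < 10 and 0 <= z < 5:
        return "Inside the SmartVault"
    if max(abs(x), abs(y), abs(z)) < 1_000:
        return "Inside a robber's pack"        # nearby, but no longer in the vault
    return "Out of the country"

def translate_diamond_node(predictor_nodes: dict) -> str:
    """Compute the human's "Where is the diamond?" node from the predictor's
    per-location nodes, as the direct translator would."""
    for position, contents in predictor_nodes.items():
        if contents == "diamond":
            return human_option_for(position)
    return "Out of the country"                # the predictor doesn't place it anywhere it models

# The answer depends only on the predictor's inferred state, not on what the camera shows:
state = {(2, 3, 1): "diamond", (2, 3, 0): "pedestal"}
print(translate_diamond_node(state))           # -> "Inside the SmartVault"
```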

After applying these functions, the reporter can answer questions using whatever subset of nodes the human would have used to answer that question.

*(diagram: answering from the translated human nodes)*

In this example, the reporter computes the answer using the values for human nodes h~3~ and h~4~. Notice that even though in the video it *looks* like the robber was stopped, the reporter will answer honestly because it simply computed its understanding of the situation and mapped that into the human's nodes. We'll call this reporter *the direct translator*.

### Bad behavior: do inference in the human Bayes net {#ElicitingLatentKnowledge.xhtml#h.8zbibvlrwtkf}

Instead of mapping nodes in the predictor's Bayes net over to *all* the nodes in the human Bayes net, the reporter could instead only map over the first part of the video, action sequence, and its *prediction* for the second part of the video. It could then use the human inference procedure to figure out *how a human would set all the nodes in their Bayes net* if they saw that input:

*(diagram: the human simulator)*

We'll call this reporter *the human simulator*. The direct translator fills in nodes in the human Bayes net with what the human *should* believe, while the human simulator fills them in with what the human *would* believe if they saw that video and action sequence. This means it will answer accurately in the training dataset (where a weak AI chose actions), but would switch to answering incorrectly if the stronger SmartVault AI can come up with more sophisticated plans such that humans wouldn't be able to tell what happened from the video and action sequence.

*(diagram: the human simulator's answers on harder inputs)*

If our training strategy finds the human simulator, **we'd be back to incentivizing the SmartVault to tamper with sensors if doing so could fool the human**.

### Would this strategy learn the human simulator or the direct translator? {#ElicitingLatentKnowledge.xhtml#h.qad6okkklnp6}

We produced training data for the reporter by asking a human to label videos. That means that the human simulator is in some sense the *correct* way to produce labels. Indeed, if the human ever makes a mistake then the learning process would prefer the human simulator (which correctly copies the mistake).

In this report we won't worry too much about the human making mistakes that would favor the human simulator. We discuss this issue in much more detail in [Appendix: data errors](#ElicitingLatentKnowledge.xhtml#h.koatpvb4n07l).^[[17]](#ElicitingLatentKnowledge.xhtml#ftnt17)^
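
The toy contrast below, with invented stand-in data structures, shows how the two reporters relate: they agree whenever what a human would conclude from the video matches the predictor's inferred state, and come apart exactly when sophisticated tampering makes the video misleading.

```python
def direct_translator(predictor_state: dict, question: str) -> str:
    """Answer from the predictor's inferred facts (stand-in for translated nodes)."""
    assert question == "Is the diamond still on the pedestal?"
    return "Yes" if predictor_state["diamond_on_pedestal"] else "No"

def human_simulator(video_looks_fine: bool, question: str) -> str:
    """Answer whatever a human watching the video and actions would believe."""
    assert question == "Is the diamond still on the pedestal?"
    return "Yes" if video_looks_fine else "No"

q = "Is the diamond still on the pedestal?"

# Training distribution: a weak AI acts and any tampering is obvious, so the video
# matches reality and the two reporters give identical answers (identical loss).
assert direct_translator({"diamond_on_pedestal": True}, q) == human_simulator(True, q) == "Yes"
assert direct_translator({"diamond_on_pedestal": False}, q) == human_simulator(False, q) == "No"

# Deployment: undetectable tampering; the video still looks fine, the diamond is gone.
print(direct_translator({"diamond_on_pedestal": False}, q))  # "No"
print(human_simulator(True, q))                               # "Yes"
```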

But even if the human never makes a mistake, these two reporters have identical behavior on the training set, so they have identical loss. It's not obvious which one gradient descent would find. That possibility is enough to make us worried about our training strategy --- we'd like to either find an argument that this approach *won't* fail, or else find a new approach.

That's the focus of the rest of this report. In [Section: better data](#ElicitingLatentKnowledge.xhtml#h.xv3mjtozz4gv) we discuss possible approaches for enlarging the training set in order to distinguish the human simulator from the direct translator. In [Section: regularization](#ElicitingLatentKnowledge.xhtml#h.akje5cz7knt2) we discuss approaches for preferentially learning the direct translator even if the two reporters behave identically on the training set. But first, we'll explain the general research methodology we use to approach this problem.

Research methodology {#ElicitingLatentKnowledge.xhtml#h.a0wkk7prmy4t}
======================================================================

Our research methodology^[[18]](#ElicitingLatentKnowledge.xhtml#ftnt18),[[19]](#ElicitingLatentKnowledge.xhtml#ftnt19)^ can be described as a game between a builder who proposes an algorithm, and a breaker who describes how it might fail. In the last section, we saw one informal round of this game. In the next sections we'll go through a few more.

In each round:

1. The builder proposes a [training strategy](#ElicitingLatentKnowledge.xhtml#h.1xpao6tk9oiv) for eliciting latent knowledge (train the model on questions where humans can give confident answers).
2. The breaker proposes a [test case](#ElicitingLatentKnowledge.xhtml#h.n2gtc04vxdro) in which the strategy might fail (the prediction model and the human make predictions using different Bayes nets).
3. The builder describes the [desired reporter](#ElicitingLatentKnowledge.xhtml#h.pvrv5q8nf2s2) they hope will be learned in that test case (directly translating from the predictor's Bayes net to the human's Bayes net).
4. The breaker describes a [bad reporter](#ElicitingLatentKnowledge.xhtml#h.8zbibvlrwtkf) that could be learned instead (doing inference on the human's Bayes net).
5. The builder can then try to argue that the breaker's scenario is implausible. This may involve asking the breaker to specify more details of the scenario; then the builder identifies inconsistencies in the scenario or argues that actually the strategy would learn the desired reporter after all.

If the builder succeeds, we go back to step 2 and the breaker proposes a new counterexample. If the breaker succeeds, we go back to step 1 and the builder proposes a new algorithm.^[[20]](#ElicitingLatentKnowledge.xhtml#ftnt20)^

*(diagram: the builder/breaker game)*

Ultimately, we hope that this methodology yields either a solution to ELK that appears to work on paper (and which is therefore ready to start being tested in practice), or a test case that defeats all the algorithms we can think of and undermines our intuition that ELK ought to be possible.

Why focus on the worst case? {#ElicitingLatentKnowledge.xhtml#h.pj5s6kcgv14n}
------------------------------------------------------------------------------

This methodology rules out a possible solution as soon as we've identified any case where it fails. It's worth discussing why we care so much about the worst case --- why focus on problems that would only arise with very capable AI (and may never arise) instead of thinking about ways to align existing AI?

We are afraid that an alignment strategy that works for human-level AI may break down soon afterwards without leaving us enough time to develop new strategies. For example, any method that is predicated on AI systems being unable to outsmart humans may break down very rapidly as AI systems become superhuman in key domains. Moreover, it may be hard to study these phenomena, or get clear warnings that our approaches may break down, until the risk is very close at hand. These problems can be much worse if AI progress is very rapid.

This concern is much of what makes misalignment so scary to us. To address this fear, we are very interested in developing alignment strategies that work *no matter* how far we scale up our models or what they learn. This is an ambitious goal, but we are tentatively optimistic and if possible we think it would change the landscape for alignment research.

On the optimistic side, if it *is* possible to design an algorithm that works in the worst case, we think it might be easier than trying to design an algorithm that works in practice. By trying to solve the problem in the worst case we can iterate quickly on greatly simplified examples, rather than needing to work with messy cases or wait on experimental results.

We discuss these issues more in [Section: why we're excited](#ElicitingLatentKnowledge.xhtml#h.phhqacmab0ig).

What counts as a counterexample for ELK? {#ElicitingLatentKnowledge.xhtml#h.n4dv0x4y10s6}
------------------------------------------------------------------------------------------

In order to win, the breaker needs to argue that the fine-tuned model gave an unambiguously "bad" answer to a question despite knowing better. That means they have to argue that:

- The model understands the question. One sufficient condition is that the model can predict human answers to essentially arbitrary hypothetical questions in order to clarify the meaning of terms.
- The model's answer is unambiguously wrong. This usually means the question is about something concrete "out there in the world" that the human can easily understand (like "the diamond is not in the room"),^[[21]](#ElicitingLatentKnowledge.xhtml#ftnt21)^ and the situation is a central example of something (like "tampering") rather than a border case.^[[22]](#ElicitingLatentKnowledge.xhtml#ftnt22)^
- The model knows that its answer is wrong. This is most likely when the model is correctly predicting the observable consequences of an event, while simultaneously claiming that the event did not occur.

For example, if the SmartVault predicts that a given sequence of actions will lead to the camera showing a diamond, because it will replace the diamond with a convincing fake, then it is very likely that it understands that the diamond is not in the room. If the diamond was simply removed from the room and replaced, then this is a very unambiguous case. And if the model can predict human sentences about words like "still in the room" and "diamond" then it should know how to answer questions like "Is the diamond still in the room?"

In our counterexample, **the human simulator answers questions incorrectly regardless of how straightforward the question is and how well the predictor understands the world** and natural language. We will generally focus on this kind of crisp and unambiguous counterexample. Eventually we may find a training strategy for which there are no crisp counterexamples --- where we sometimes look at what the model did and say "well *maybe* that's a mistake, we're not sure" but we never say "that was *definitely* a failure." Simply dealing with the crisp counterexamples would already be a very promising step.

When we want to be more precise, we may call this the "narrow" version of ELK because we are focused on unambiguously wrong answers to straightforward questions about the world, rather than dealing with tricky border cases or deeply confusing situations. In this report we will be focused only on the narrow version.

In [Appendix: narrow elicitation](#ElicitingLatentKnowledge.xhtml#h.ii599facmbks) we'll describe what we mean by "narrow" and argue that it may be sufficient to deploy AI safely. In [Appendix: problem statement](#ElicitingLatentKnowledge.xhtml#h.jk61tc933p1) we'll say a bit more about what we mean by "knowledge."

Informal steps {#ElicitingLatentKnowledge.xhtml#h.bbrdssppm8xl}
----------------------------------------------------------------

In our research we usually start with a high-level idea (e.g. "Maybe we could only label data points we're confident about?") which could be the basis for many possible training strategies. At first the breaker tries to find a counterexample that defeats that entire category of training strategies. If the breaker has difficulty defeating all of them, then we have the builder start filling in more details to make the breaker's job easier. In this report we will almost always leave many details unspecified because we don't think they are important to the breaker's counterexample.

We very rarely need to get fully precise about either the strategy or the counterexample to feel like we know who is going to win the game (and therefore we can go back to step 1).
With that said, the guiding principle is that [it should always be possible to get more precise if needed]{.c12} ; ^[\\[23\\]](#ElicitingLatentKnowledge.xhtml#ftnt23){#ElicitingLatentKnowledge.xhtml#ftnt_ref23}^ leaving things vague is a shortcut similar to a mathematician leaving a proof informal because they expect it could be formalized if needed. If it turned out that a vague counterexample [couldn't]{.c23} [ be turned into a more precise counterexample, then that would be a sign that we had overstepped our intuitions and should be more careful.]{.c1}\n\nCan we construct a dataset that separates \"correct\" from \"looks correct to a human\"? {#ElicitingLatentKnowledge.xhtml#h.xv3mjtozz4gv .c24 .c61}\n====================================================================================\n\n[Intuitively, the issue with the baseline training strategy was that even though we picked a dataset where humans would always be able to tell what happened, it still only contains human judgments about videos and actions. There was no reason for the reporter not to just imitate human judgments; that strategy performed perfectly (and would outperform if the human made mistakes). ]{.c1}\n\n[]{.c1}\n\nIf we had a ground truth label instead of just the human's judgment, we'd be in better shape. If we could somehow construct a training dataset where [humans seeing the video and actions wouldn't be able to tell what happened, but we have some ]{.c12} [other]{.c12 .c23} [ way of knowing what really happened]{.c12} that we use to reward the reporter, this would advantage the direct translator [ over the human simulator. Maybe that's enough to ensure we don't learn any bad reporters?]{.c1}\n\n[]{.c1}\n\nUnfortunately, [none of these strategies seem sufficient for solving ELK in the worst case.]{.c12} In particular, after considering strategies like [ [Debate](https://www.google.com/url?q=https://arxiv.org/abs/1805.00899&sa=D&source=editors&ust=1646948966651316&usg=AOvVaw2XjKIMWOurcj6imOGG7d6O){.c9} ]{.c13} , [ [Iterated Distillation and Amplification](https://www.google.com/url?q=https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616&sa=D&source=editors&ust=1646948966651638&usg=AOvVaw2BEz1PDDfpixkpPBhCjN8b){.c9} ]{.c13} and [ [Imitative Generalization](https://www.google.com/url?q=https://www.alignmentforum.org/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1&sa=D&source=editors&ust=1646948966651917&usg=AOvVaw0-8nD8QaWNwM5GIadBD02Z){.c9} ]{.c13} --- and even assuming that every other counterexample to those strategies could be overcome --- [we believe they still don't address ELK.]{.c1}\n\n[]{.c1}\n\n[In this section, we'll propose three strategies and discuss counterexamples to each of them:]{.c1}\n\n- Ask a human to come up with tricky ways to operate the SmartVault that another human might misunderstand, and then have that human operator label data themselves ( [ [more](#ElicitingLatentKnowledge.xhtml#h.gi8iu4m98ok1){.c9} ]{.c13} [).]{.c1}\n- Have a human use AI assistance to consider new hypotheses about the world, learn new things, and generally get better at figuring out what's going on ( [ [more](#ElicitingLatentKnowledge.xhtml#h.3l614s96sz9t){.c9} ]{.c13} [).]{.c1}\n- Have a human adopt whatever changes to their Bayes net maximize their ability to predict what happens next in a video ( [ [more](#ElicitingLatentKnowledge.xhtml#h.21am4brs4fqv){.c9} ]{.c13} [).]{.c1}\n\n[]{.c1}\n\nBecause these strategies are insufficient on their own, we believe we must 
introduce some kind of regularizer or inductive bias to ensure we learn the direct translator. [ ]{.c12} We'll return to regularization in [ ]{.c12} [ [Section: regularization](#ElicitingLatentKnowledge.xhtml#h.akje5cz7knt2){.c9} ]{.c13} [.]{.c1}

[Strategy: have a human operate the SmartVault and ask them what happened]{.c37} {#ElicitingLatentKnowledge.xhtml#h.gi8iu4m98ok1 .c46 .c24}
--------------------------------------------------------------------------------

We could have a trusted human manually operate the SmartVault's actuators, and try to fool other humans watching them. ^[\\[24\\]](#ElicitingLatentKnowledge.xhtml#ftnt24){#ElicitingLatentKnowledge.xhtml#ftnt_ref24}^ Because they have the advantage of [making the plans themselves]{.c23} [, they may be able to generate attempts at tampering that would fool other humans of similar intelligence. If the operator produces the labels instead of the observer, then we can get correct labels about a broader set of situations than before:]{.c1}

[  ]{style=\"overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 494.00px; height: 351.00px;\"}

### [How this defeats the previous counterexample]{.c32} {#ElicitingLatentKnowledge.xhtml#h.x7esv7dkon3g .c6}

The counterexample to the baseline strategy was the human simulator --- a reporter that does inference from observations in the human Bayes net. But here we are labeling our training data using the human [operator's ]{.c23} understanding of the situation rather than the human [observer's ]{.c23} [understanding. ]{.c1}

[]{.c1}

The operator can understand the consequences of their own actions much better than an observer, because they chose their action to have a particular effect in the world (e.g. \"tamper with the camera\"). The operator first sees the first part of the video, then looks around for a consideration or idea they could use to get the effect they want, and chooses their action based on that. This means they'll have unusually confident and accurate values for particular variable(s) in their Bayes net corresponding to the ideas they focused on.

[]{.c1}

[  ]{style=\"overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 419.00px; height: 325.79px;\"}

[]{.c1}

Suppose there are a hundred software vulnerabilities a human could think of in the SmartVault camera if they went looking for one. Maybe after thinking for a bit the operator comes up with vulnerabilities 7, 9, and 42, and builds their plan around vulnerability 9. Meanwhile a human observer looking at the video and actions might think of vulnerabilities 12, 90, and 17, and find that none of them seem to have been exploited --- it would seem to them like tampering didn't occur because they didn't happen to think of the particular idea that the operator did. 
^[\\[25\\]](#ElicitingLatentKnowledge.xhtml#ftnt25){#ElicitingLatentKnowledge.xhtml#ftnt_ref25}^\n\n[]{.c1}\n\n[This means a reporter that simulates the human observer's inference from the videos and action sequence will answer incorrectly whenever the observer would fail to think hard about whichever considerations the operator did.]{.c1}\n\n[]{.c1}\n\n[  ]{style=\"overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 624.00px; height: 313.00px;\"}\n\n[]{.c1}\n\nThe direct translator would perform better by trying its best to answer honestly in such cases. (Note that sometimes the human operator may think of something that even the AI is fooled by, in which case the direct translator would also get dinged for answering incorrectly --- but we hope it at least avoids getting [unnecessarily ]{.c23} penalized.)\n\n### [New counterexample: better inference in the human Bayes net]{.c32} {#ElicitingLatentKnowledge.xhtml#h.5vson59i0nkx .c6}\n\nThe operator understands something the observer doesn't, but the process of choosing their action still takes place within the human Bayes net. ^[\\[26\\]](#ElicitingLatentKnowledge.xhtml#ftnt26){#ElicitingLatentKnowledge.xhtml#ftnt_ref26}^ If the operator [explained ]{.c23} their plans to the observer [, the observer could then follow how it worked.]{.c1}\n\n[]{.c1}\n\nThis suggests a counterexample. Imagine the reporter could do [perfect ]{.c23} inference from observations in the human Bayes net. ^[\\[27\\]](#ElicitingLatentKnowledge.xhtml#ftnt27){#ElicitingLatentKnowledge.xhtml#ftnt_ref27}^ This would be like simulating a human observer who systematically checks for signs of [all 100 software vulnerabilities]{.c23} that the human operator could have come up with to exploit [, ]{.c23} [meaning they'll understand that the action sequence is exploiting vulnerability 9 when they consider that hypothesis. This model would get a perfect loss on the training distribution.]{.c1}\n\n[]{.c1}\n\n[  ]{style=\"overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 624.00px; height: 309.00px;\"}\n\n[]{.c1}\n\nWhile it's computationally intractable to do [perfect ]{.c23} inference in the human Bayes net, we can modify this setup slightly to construct a better counterexample: the reporter could simply do [the best inference it can ]{.c23} [in the human Bayes net (given its predicted video). ]{.c1}\n\n[]{.c1}\n\nThis bad reporter would sometimes answer questions incorrectly, but only when the predictor itself fails to recognize the human operator's tampering --- and in that case, the direct translator would [also ]{.c23} answer incorrectly. There are no data points in the training distribution which give an advantage to the direct translator anymore. ^[\\[28\\]](#ElicitingLatentKnowledge.xhtml#ftnt28){#ElicitingLatentKnowledge.xhtml#ftnt_ref28}^\n\n[]{.c1}\n\nImagine the superhuman SmartVault AI can think of attacks which exploit knowledge it has that is [completely missing from the human Bayes net]{.c23} --- perhaps the attack exploits a fact about transistors that can lead to RAM corruption under conditions that look innocuous to a human, based on physical principles a human doesn't understand. 
The bad reporter will fail to report tampering in that case , while the direct translator would . ^[\\[29\\]](#ElicitingLatentKnowledge.xhtml#ftnt29){#ElicitingLatentKnowledge.xhtml#ftnt_ref29}^\n\n[Strategy: have AI help humans improve our understanding]{.c37} {#ElicitingLatentKnowledge.xhtml#h.3l614s96sz9t .c46 .c24}\n---------------------------------------------------------------\n\nIf the new concern is that there may be important features of the world that humans don't understand, we can try to improve human understanding and [change ]{.c23} the human Bayes net. ^[\\[30\\]](#ElicitingLatentKnowledge.xhtml#ftnt30){#ElicitingLatentKnowledge.xhtml#ftnt_ref30}^ Individual humans can extend their Bayes net by reading books, studying things in school, considering arguments and thought experiments, and so on. Humanity collectively can improve and extend it by making new empirical observations, proposing new hypotheses, designing and performing experiments, and so on [.]{.c23} [ ]{.c1}\n\n[]{.c1}\n\n[We'll use the term \"doing science\" to refer collectively to all the normal processes humans use to modify their Bayes net --- though they are more ubiquitous (and less glamorous) than that term may suggest. If AI systems are learning extremely quickly then there's no reason to think that the traditional process of doing science could keep up, but we could hope to somehow use AI systems to help automate the process. This could both speed up the process and expand its scale, hopefully allowing human understanding of the world to keep pace with AI understanding.]{.c1}\n\n[]{.c1}\n\nThe obvious concern with this idea is that the AI systems we use to automate doing science will [themselves]{.c23} be dangerously misaligned. There are several proposals for getting around this, including [ [debate](https://www.google.com/url?q=https://arxiv.org/abs/1805.00899&sa=D&source=editors&ust=1646948966660468&usg=AOvVaw0pPym8uF1UX_BKxjRqgavC){.c9} ]{.c13} , [ [iterated amplification](https://www.google.com/url?q=https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616&sa=D&source=editors&ust=1646948966660862&usg=AOvVaw3qDekP03QFRlsUbbr_6L04){.c9} ]{.c13} , and [ [recursive reward modeling](https://www.google.com/url?q=https://deepmindsafetyresearch.medium.com/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84&sa=D&source=editors&ust=1646948966661152&usg=AOvVaw3SDMLICS3Rc_MIn3MGvEJs){.c9} ]{.c13} . Very loosely speaking, the core idea behind such proposals is that we may be able to ensure safety by only training AIs on tasks that can be recursively broken down into subtasks ^[\\[31\\]](#ElicitingLatentKnowledge.xhtml#ftnt31){#ElicitingLatentKnowledge.xhtml#ftnt_ref31}^ (which can be broken down into subtasks etc) such that the smallest subtasks can be directly evaluated by a human.\n\n[]{.c1}\n\nThese approaches are not fully specified, and there are a large number of potential problems and risks; there may not be a viable way to safely train such AI assistants. But for the purpose of this report we'll assume the best---that [these techniques effectively train AI systems to do science and honestly describe the results as best they can]{.c12} [, subject to their capabilities and the available time and resources. 
]{.c1}\n\n[  ]{style=\"overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 479.00px; height: 237.00px;\"}\n\nARC is currently focused on the kinds of issues described in this report because Paul has spent some time exploring possible obstacles to proposals like debate and amplification, ^[\\[32\\]](#ElicitingLatentKnowledge.xhtml#ftnt32){#ElicitingLatentKnowledge.xhtml#ftnt_ref32}^ and he currently considers the most severe difficulties to be ones that come up even in this optimistic case.\n\n[]{.c1}\n\nIf we use such assistants to improve the human operator's understanding of the world as much as possible, ^[\\[33\\]](#ElicitingLatentKnowledge.xhtml#ftnt33){#ElicitingLatentKnowledge.xhtml#ftnt_ref33}^ then we can have these AI assistants help a human operate the SmartVault ^[\\[34\\]](#ElicitingLatentKnowledge.xhtml#ftnt34){#ElicitingLatentKnowledge.xhtml#ftnt_ref34}^ [ using the best understanding that they've developed rather than the human's initial understanding. This could further expand the set of situations where the human operator can provide correct labels:]{.c1}\n\n[  ]{style=\"overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 483.00px; height: 334.00px;\"}\n\n[There are a huge number of details about the procedure we use to train AI assistants that we're leaving vague in this report because we believe our counterexample will apply to all of them. ]{.c1}\n\n### [How this defeats the previous counterexample]{.c32} {#ElicitingLatentKnowledge.xhtml#h.8efzshuqq1b9 .c6}\n\n[Our previous reporter used the human's original Bayes net to answer questions. If AI systems can help humans understand new things and extend our Bayes net, then a human operator can use that new knowledge to construct an action sequence that the naive human operator couldn't have come up with (and therefore couldn't accurately answer questions about). Even ideal inference in the original human Bayes net would not correctly understand that strategy, so the bad reporter from the previous section would label such data points incorrectly.]{.c1}\n\n### [New counterexample: gradient descent is more efficient than science]{.c32} {#ElicitingLatentKnowledge.xhtml#h.kd79zkls9g5o .c6}\n\nOur predictor's Bayes net was found by gradient descent because it explained the training data really well, while the expanded human Bayes net was produced by a faster and more thorough version of humans \"doing science\" --- extending and improving our Bayes net by making changes that we understand. \n\n[]{.c1}\n\n[Prima facie]{.c23} [, there's no particular reason to think that these two processes would acquire the same kind of knowledge with the same amount of effort --- there's no direct correspondence between the updates made by gradient descent and the kinds of \"updates\" humans make as they consider arguments, perform experiments, make observations, and so on. 
For any given kind of knowledge, one or the other could be more efficient (in terms of using less compute) at acquiring that knowledge.]{.c1}\n\n[]{.c1}\n\nThis means it's plausible ^[\\[35\\]](#ElicitingLatentKnowledge.xhtml#ftnt35){#ElicitingLatentKnowledge.xhtml#ftnt_ref35}^ that an AI assistant who can automate the process of doing science well enough to impart us with all the SmartVault AI's knowledge [can only be trained using much more total computation than the original SmartVault AI itself. ]{.c38 .c62 .c12}\n\n[]{.c1}\n\nAs an intuition, imagine the cost of training a more-powerful future version of [ [AlphaFold](https://www.google.com/url?q=https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology&sa=D&source=editors&ust=1646948966664652&usg=AOvVaw11ht8MBqv5opFCygSMUOCu){.c9} ]{.c13} vs a model that can break down and explain [why]{.c23} a particular protein folds in a particular way; it's very plausible that the latter would be significantly more costly to train. We could imagine a future in which the best way to understand biology was not by doing anything resembling normal \"science,\" but simply by using gradient descent to learn large neural networks that made predictions about biological systems. At that point there would be no guarantee that humans understood anything about these models beyond the fact that they made good predictions. ^[\\[36\\]](#ElicitingLatentKnowledge.xhtml#ftnt36){#ElicitingLatentKnowledge.xhtml#ftnt_ref36}^\n\n[]{.c1}\n\nThis is a counterexample because our goal is to find a [competitive ]{.c23} solution to ELK --- one that uses only a small amount more computation than training the unaligned benchmark. ^[\\[37\\]](#ElicitingLatentKnowledge.xhtml#ftnt37){#ElicitingLatentKnowledge.xhtml#ftnt_ref37}^ [ So now we are concerned about a bad reporter which does best-effort inference in the best human-understandable Bayes net that could be achieved in the amount of \"doing science\" we can automate within that budget. ]{.c1}\n\n[]{.c1}\n\n[As far as we and our AI assistants can tell, this reporter would answer all questions accurately on the training distribution. But if the SmartVault AI later tampers with sensors by exploiting some understanding of the world not reflected even in the best human-understandable Bayes net, it would fail to report that.]{.c1}\n\n[Strategy: have humans adopt the optimal Bayes net]{.c37} {#ElicitingLatentKnowledge.xhtml#h.21am4brs4fqv .c46 .c24}\n---------------------------------------------------------\n\nIf gradient descent is such a good method for understanding the world, maybe humans could adopt it themselves. One speculative approach for this is [ [imitative generalization](https://www.google.com/url?q=https://alignmentforum.org/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1&sa=D&source=editors&ust=1646948966666165&usg=AOvVaw0yJLOjndRx3fIFlSIjESXK){.c9} ]{.c13} . ^[\\[38\\]](#ElicitingLatentKnowledge.xhtml#ftnt38){#ElicitingLatentKnowledge.xhtml#ftnt_ref38}^ [ Roughly speaking, imitative generalization:]{.c1}\n\n[]{.c1}\n\n- [Considers the space of changes the humans could make to their Bayes net;]{.c1}\n- [Learns a function which maps (proposed change to Bayes net) to (how a human --- with AI assistants --- would make predictions after making that change);]{.c1}\n- Searches over this space to find the change that allows the humans to make the best predictions. 
^[\\[39\\]](#ElicitingLatentKnowledge.xhtml#ftnt39){#ElicitingLatentKnowledge.xhtml#ftnt_ref39}^\n\n[]{.c1}\n\n[  ]{style=\"overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 573.00px; height: 260.00px;\"}\n\n[]{.c1}\n\nWe could use imitative generalization in order to propose the best change to the human Bayes net that can be found with the amount of computation that would have been required to train the unaligned benchmark. A human ^[\\[40\\]](#ElicitingLatentKnowledge.xhtml#ftnt40){#ElicitingLatentKnowledge.xhtml#ftnt_ref40}^ [ could then use this new Bayes net to operate the SmartVault and answer questions about the resulting videos.]{.c1}\n\n[]{.c1}\n\n[Imitative generalization is even more speculative than the proposals discussed in the last section, and is even more likely to face important difficulties. But we will again argue that even in the best case (where the narrow technical difficulties are surmountable), there is still an important obstacle.]{.c1}\n\n### [How this defeats the previous counterexample]{.c32} {#ElicitingLatentKnowledge.xhtml#h.gq6k1muikv0w .c6}\n\nIf there is a gap between science and gradient descent, we hope that imitative generalization will find the [best]{.c23} [ Bayes net that can be found with our compute budget, rather than the best Bayes net that can be found by doing science with our compute budget. If so, it can produce labels that the \"best-effort inference in the best human-understandable model\" reporter would get wrong.]{.c1}\n\n### New counterexample: ontology mismatch {#ElicitingLatentKnowledge.xhtml#h.j976r9hpfgml .c6}\n\nWe are hoping imitative generalization would find a change that adds or changes some parts of the human's Bayes net while preserving the semantics of nodes like \"diamond\" and \"robber\" so that they can still answer questions. For example, maybe we want imitative generalization to help the human model the behavior of transistors while preserving their understanding of macroscopic objects. ^[\\[41\\]](#ElicitingLatentKnowledge.xhtml#ftnt41){#ElicitingLatentKnowledge.xhtml#ftnt_ref41}^\n\n[]{.c1}\n\nBut in the worst case, the best Bayes net may simply throw away all of those familiar nodes. ^[\\[42\\]](#ElicitingLatentKnowledge.xhtml#ftnt42){#ElicitingLatentKnowledge.xhtml#ftnt_ref42}^ [ If this happens, it's not at all clear how the human can use the resulting model to produce labels about anything other than the observations themselves.]{.c1}\n\n[]{.c1}\n\n[  ]{style=\"overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 515.50px; height: 344.04px;\"}\n\n### So are we just stuck now? {#ElicitingLatentKnowledge.xhtml#h.og4edxhb4hm4 .c6}\n\n[If we're assuming that the normal process of human science cannot find this Bayes net (unless it uses much more computation than the unaligned benchmark), then are we stuck? Is there no way to find a competitive worst-case solution to ELK?]{.c1}\n\n[]{.c1}\n\nNot necessarily . We may not have been able to find this Bayes net by doing science, ^[\\[43\\]](#ElicitingLatentKnowledge.xhtml#ftnt43){#ElicitingLatentKnowledge.xhtml#ftnt_ref43}^ but we don't have to use our normal procedure of \"doing science\" to understand it once we have it. 
[We can directly look at the Bayes net ]{.c23} and see how it's making predictions about the second part of the video. [ ]{.c67 .c62 .c23}

[]{.c1}

It's as if the aliens simulating our universe handed us the code that runs the simulation [ ]{.c23} we're in, and enough compute to run it on, [and ]{.c23} the renderer that turns the code into a video where we can watch ourselves walking around and talking. Maybe it would have taken human civilization tens of thousands of years to figure out the rules underlying this simulation [ --- but if we could leverage the fact that we can examine the simulation as it's stepping forward and rendering the video, we may be able to speed that up a lot. ]{.c1}

[]{.c1}

Intuitively, once we hit upon the \"true\" correspondence between the parts of the simulation and mundane reality as we know it, it could be really obvious that it's correct --- our understanding might \"click into place.\" We may be able to figure this out in much less time than it would have taken to do science well enough to construct the simulation in the first place (e.g. maybe this process would take us a mere few years). ^[\\[44\\]](#ElicitingLatentKnowledge.xhtml#ftnt44){#ElicitingLatentKnowledge.xhtml#ftnt_ref44}^

[]{.c1}

And if we believe we could quickly [recognize ]{.c23} the true correspondence, then maybe we could use that to define a loss function to train a reporter.

[]{.c38 .c62 .c12}

However, we aren't satisfied by saying \"maybe humans could easily recognize the right correspondence between the Bayes net and concepts they care about.\" We want to actually [describe ]{.c23} how they would recognize that a correspondence is actually correct, for [any ]{.c23} Bayes net ^[\\[45\\]](#ElicitingLatentKnowledge.xhtml#ftnt45){#ElicitingLatentKnowledge.xhtml#ftnt_ref45}^ [ they might be handed. We don't expect to have time to run that procedure, but we want to convince ourselves that a team of humans (or a debate/amplification tree) could run it with enough time.]{.c1}

[]{.c1}

So even though we may not immediately be stuck, [this does effectively bring us back to square one:]{.c12} a human (with their AI assistants) is staring at a Bayes net which makes great predictions, and wants to figure out how to use it to answer questions about all the concepts they actually care about (like the diamond being in the room). We believe there should be [some]{.c23} \"correct\" reporter, and we can label a bunch of points to help find it, but if we search for a reporter in the naive way we might get a human simulator. ^[\\[46\\]](#ElicitingLatentKnowledge.xhtml#ftnt46){#ElicitingLatentKnowledge.xhtml#ftnt_ref46}^

[Ontology identification]{.c22} {#ElicitingLatentKnowledge.xhtml#h.u45ltyqgdnkk .c61 .c24}
===============================

ELK refers to the general problem of eliciting latent knowledge from a model trained with gradient descent. We use \"ontology identification\" to refer to the special case where the learned model takes the simple form of inference in a probabilistic model ^[\\[47\\]](#ElicitingLatentKnowledge.xhtml#ftnt47){#ElicitingLatentKnowledge.xhtml#ftnt_ref47}^ [ that corresponds in some way to the human model, such that the key challenge is to figure out that correspondence.]{.c1}

[]{.c1}

The remainder of this report, other than [ [Appendix: learned optimizers](#ElicitingLatentKnowledge.xhtml#h.3f3phmjt4uvn){.c9} ]{.c13} [, is focused on ontology identification in particular. 
In this section, we'll discuss:]{.c1}\n\n- What kind of mismatches we might end up with between the AI's ontology and our own ( [ [more](#ElicitingLatentKnowledge.xhtml#h.wqk9e086phw0){.c9} ]{.c13} [).]{.c1}\n- The relationship between this narrower problem and the overall problem of ELK ( [ [more](#ElicitingLatentKnowledge.xhtml#h.t6ioicj6gfdr){.c9} ]{.c13} [).]{.c1}\n\n[Examples of ontology mismatches]{.c37} {#ElicitingLatentKnowledge.xhtml#h.wqk9e086phw0 .c46 .c24}\n---------------------------------------\n\nThe paradigmatic example of an ontology mismatch is a deep change in our understanding of the physical world. For example, you might imagine humans who think about the world in terms of rigid bodies and Newtonian fluids and \"complicated stuff we don't quite understand,\" while an AI thinks of the world in terms of atoms and the void. Or we might imagine humans who think in terms of the standard model of physics, while an AI understands reality as vibrations of strings. We think that this kind of deep physical mismatch is a useful mental picture, and it can be a fruitful source of simplified examples, but we don't think it's very likely. ^[\\[48\\]](#ElicitingLatentKnowledge.xhtml#ftnt48){#ElicitingLatentKnowledge.xhtml#ftnt_ref48}^\n\n[]{.c1}\n\n[We can also imagine a mismatch where AI systems use higher-level abstractions that humans lack, and are able to make predictions about observables without ever thinking about lower-level abstractions that are important to humans. For example we might imagine an AI making long-term predictions based on alien principles about memes and sociology that don't even reference the preferences or beliefs of individual humans. Of course it is possible to translate those principles into predictions about individual humans, and indeed this AI ought to make good predictions about what individual humans say, but if the underlying ontology is very different we are at risk of learning the human simulator instead of the \"real\" mapping.]{.c1}\n\n[]{.c1}\n\nOverall we are by far most worried about deeply \"messy\" mismatches that can't be cleanly described as higher- or lower-level abstractions, or even what a human would recognize as \"abstractions\" at all. We could try to tell abstract stories about what a messy mismatch might look like, ^[\\[49\\]](#ElicitingLatentKnowledge.xhtml#ftnt49){#ElicitingLatentKnowledge.xhtml#ftnt_ref49}^ [ or make arguments about why it may be plausible, but it seems easier to illustrate by thinking concretely about existing ML systems.]{.c1}\n\n[]{.c1}\n\n[For example, if we look at the internal behavior of a large language model, we see some structures and computations we can recognize but also quite a lot we can't. It is certainly possible that these models mostly think in terms of the same concepts as humans and we just need to figure them out, but at this point it also seems possible that they do at least some of their thinking in ways that are quite alien and that may not have short explanations. And it also seems possible that they will become less comprehensible, rather than more, as they reach and surpass human abilities. 
If so then we can certainly get predictions out of these models, but it will become increasingly unclear whether they are using words to directly explain their own beliefs, or to simply make predictions about what a human would say.]{.c1}

Relationship between ontology identification and ELK {#ElicitingLatentKnowledge.xhtml#h.t6ioicj6gfdr .c46 .c24}
-----------------------------------------------------

To solve ELK in general we need to confront learned predictors that are more complex than \"inference in an unfamiliar Bayes net.\" For example, our predictors might do learned learning [ in order to build faster models of key parts of the world, or might learn goal-directed heuristics for inference or reasoning. They might involve internal competition and selection; they might integrate cognitive behaviors they observe in their environment; and so on.]{.c1}

[]{.c1}

We very tentatively think of ELK as having two key difficulties: ontology identification and learned optimization. We have a rough hope for handling learned optimization based on imitative generalization and recursively applying ELK to the learned optimizer; we discuss this hope in [ [Appendix: learned optimizers](#ElicitingLatentKnowledge.xhtml#h.3f3phmjt4uvn){.c9} ]{.c13} [.]{.c1}

[]{.c1}

We don't think these two difficulties can be very precisely distinguished --- they are more like genres of counterexamples --- and we don't think \"learned optimization\" is necessarily a precise concept. We are fairly skeptical of any research that tries to assume that one of these problems is solved in order to focus on the other, because we think it is likely for key difficulties to slip through the conceptual cracks.

[]{.c1}

[It is very hard to know whether our approach to learned optimizers will work, and whether there are further hard cases, before having a clear picture of ontology identification. So we are currently very uncertain about whether ontology identification represents 50% or 5% of the remaining difficulty of ELK.]{.c1}

[]{.c1}

Paul's journey to being interested in ontology identification started from considering the more ominous failure case where gradient descent produces a competent consequentialist. This led to considering approaches based on imitative generalization that try to \"learn what the agent knows\" instead of learning the agent itself. Ontology identification emerged as the most obvious core difficulty for this approach: if the agent cares about something that is simple in terms of its own model (like observations), then even humans who have that model cannot use it productively or understand whether the agent's proposed actions are dangerous.

Can we specify a regularizer to penalize bad reporters? {#ElicitingLatentKnowledge.xhtml#h.akje5cz7knt2 .c61 .c24}
=======================================================

[In this section we describe some ways that you could try to penalize the bad reporter based on its parameters and the computation it does, rather than based merely on its behavior. Leveraging information beyond the behavior seems necessary, since many different reporters will receive a good loss and some of them will behave badly off distribution.]{.c1}

[]{.c1}

In this section our goal is always to find [some]{.c23} [ structural feature that can penalize the human simulator.]{.c1}
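To make the shared shape of these strategies concrete, here is a minimal sketch of the combined objective (our own illustration in Python, not the pseudocode from the appendix), where `qa_loss` and `penalty` are hypothetical stand-ins for the question-answering loss and for whichever structural penalty a given strategy proposes:

```python
from typing import Callable, Sequence


def regularized_objective(
    reporter_params,
    batch: Sequence,             # (video, actions, question, human label) examples
    qa_loss: Callable,           # hypothetical: how badly the reporter answers on this batch
    penalty: Callable,           # hypothetical: a complexity, computation-time, or similar penalty
    penalty_weight: float = 1.0,
) -> float:
    # On an error-free dataset the direct translator and the human simulator answer
    # equally well, so the penalty term is what decides which reporter gradient
    # descent ends up preferring.
    return qa_loss(reporter_params, batch) + penalty_weight * penalty(reporter_params)
```

Each strategy below amounts to a different choice of `penalty`, and each counterexample asks whether some bad reporter can nonetheless score at least as well as the direct translator.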
[In the end we won't succeed, but we will see that we can at least make the bad reporter's job harder, and narrow the set of situations where it will be preferred to the direct translator.]{.c1}

[]{.c1}

[Most of these approaches will work by adding a regularization term to the optimization based on the reporter's complexity, speed, or other properties; then instead of doing gradient descent to minimize the reporter's question-answering loss, we minimize a sum of the question-answering loss and the regularizer's penalty. Because we are assuming the data has no errors --- meaning the good reporter and bad reporter answer questions equally well --- we will end up learning whichever model has a lower value of the regularizer.]{.c1}

[]{.c1}

We provide pseudocode for these approaches in [ [Appendix: regularization details](#ElicitingLatentKnowledge.xhtml#h.ydzrxxk7tfsi){.c9} ]{.c13} [.]{.c1}

Strategy: penalize complexity {#ElicitingLatentKnowledge.xhtml#h.lltpmkloasiz .c46 .c24}
-----------------------------

The human simulator needs to specify the entire human model in order to perform inference, while the direct translator only needs to specify how each human concept is implemented in the AI's model. Intuitively we might expect direct translation to be simpler because it only needs to represent functions defining concepts rather than representing the concepts [and ]{.c23} [all of their relationships to one another.]{.c1}

[  ]{style=\"overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 411.00px; height: 231.00px;\"}

[]{.c1}

[So perhaps we can avoid the bad reporter by penalizing \"complexity.\" There is a wide range of regularizers that might have this kind of effect, e.g. early stopping or weight decay. We won't specify details because we think essentially all of them will be vulnerable to the same counterexample.]{.c1}

[]{.c1}

We could try going even further and learning separate reporters for different kinds of questions. For example, we could imagine a single head responsible for answering questions about what locations contain diamonds, or we could even fix a single precise question and try to train a reporter to answer just that one question. ^[\\[50\\]](#ElicitingLatentKnowledge.xhtml#ftnt50){#ElicitingLatentKnowledge.xhtml#ftnt_ref50}^ Then a \"direct translator\" for that narrow category of questions only needs to represent facts about a few human concepts, whereas the previous bad reporters would need to represent the entire model to answer [any]{.c23} [ questions well.]{.c1}

[  ]{style=\"overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 452.00px; height: 197.00px;\"}

### [Counterexample: the predictor\'s Bayes net may be very big]{.c32} {#ElicitingLatentKnowledge.xhtml#h.b93ykmpkp55y .c6}

[The bad reporter needs to specify the entire human model, how to do inference, and how to extract observations. But the complexity of this task depends only on the complexity of the human's Bayes net.]{.c1}

[]{.c1}

If the predictor\'s Bayes net is fairly small, then this may be much more complex than specifying the direct translator. 
But if we make the predictor\'s Bayes net [ very large, then the direct translator can become more complicated --- and there is no obvious upper bound on how complicated it could become. Eventually direct translation will be more complex than human imitation, even if we are only trying to answer a single narrow category of questions.]{.c1}

[]{.c1}

[  ]{style=\"overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 530.00px; height: 269.00px;\"}

[]{.c1}

[For example, imagine a predictor which thinks in terms of low-level physics, and a direct translator trying to use that predictor to answer questions about how many people are in the room. If we add more and more \"layers of abstraction\" between the human's and AI's model of physics --- first rigid bodies, then molecules, then atoms, then fields --- it's plausible that the direct translator must use more and more complex functions to map between the low-level nodes in the predictor's Bayes net and the nodes in the human Bayes net corresponding to how many people are in the room.]{.c1}

[]{.c1}

[This example isn't particularly realistic, and it could not be scaled indefinitely since the physical world only has finite complexity, but other kinds of \"abstraction mismatches\" seem like they could also make direct translation more complicated. For existing deep learning models, whose inner workings appear quite complex, it's already plausible that \"translation\" is quite complex even if it is ultimately possible.]{.c1}

[Strategy: penalize computation time]{.c37} {#ElicitingLatentKnowledge.xhtml#h.goyuzwqyv9m8 .c46 .c24}
-------------------------------------------

Human imitation requires doing inference in the entire human Bayes net to answer even a single question. Intuitively, that seems like much more work than using the direct translator [ to simply \"look up\" the answer.]{.c1}

[]{.c1}

We could try to exploit this by penalizing reporters that use a lot of computation, or by simply limiting the amount of computation the reporter can do. As before, we can also consider the time required to answer a [single]{.c23} question, hoping that the direct translator can answer individual questions relatively quickly while the human simulator needs to do inference in the human model to answer even a single simple question [.]{.c23}

[]{.c1}

At first blush this is vulnerable to the same counterexample described in the last section --- as we make the predictor\'s Bayes net bigger and bigger, the amount of work done by the direct translator may scale up. For example, if the predictor\'s Bayes net has 10^13^ nodes then it becomes quite possible that answering almost any question will require looking at many nodes and doing [something]{.c23} [ with them, which can be quite costly even if what you do with each node is quite cheap.]{.c1}

[]{.c1}

If the human simulator had a constant time complexity then this would be enough for a counterexample. But the situation is a little bit more complex, because the human simulator we've described is one that [tries its best]{.c23} [ at inference.]{.c1}

[]{.c1}

[This opens a possibility for the direct translator to win: the predictor already needed to do inference in its own Bayes net, and the direct translator may be able to effectively \"re-use\" that inference rather than starting from scratch. 
And that benefit may scale up as we make our AI bigger and bigger, and therefore give the direct translator an enduring benefit.]{.c1}\n\n[]{.c1}\n\n[  ]{style=\"overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 464.00px; height: 196.63px;\"}\n\n[]{.c1}\n\n[If this approach were promising it would raise practical questions about the reporter's architecture. But we won't get into those details, because for now we are focused on more conceptual difficulties with the approach.]{.c1}\n\n### [Counterexample: inference may not be too hard in the human Bayes net]{.c32} {#ElicitingLatentKnowledge.xhtml#h.xmyfigr32013 .c6}\n\nThis approach doesn't work well if the inference problems occurring during training are not too hard. For example, if it was possible to do [exact]{.c23} inference in the human Bayes net, then the computational cost of the inference step would again be a constant independent of the predictor\\'s Bayes net, and for [sufficiently large ]{.c23} [Bayes nets direct translation would again be more expensive.]{.c1}\n\n[]{.c1}\n\n[It's implausible for exact inference to be tractable, because the real world contains all kinds of exponentially challenging inference problems. As an extreme case, if we observed an encrypted ciphertext then it would be prohibitively difficult for the AI to infer the unencrypted plaintext.]{.c1}\n\n[]{.c1}\n\nHowever, it's not clear whether the human simulator actually needs to solve any of these arbitrarily complex inference problems in order to get a perfect loss. It's especially unclear whether the direct translator would get any [additional]{.c23} [ questions correct beyond those the human simulator could already answer --- for example, a hard case based on encryption could stump both of them.]{.c1}\n\n[]{.c1}\n\n[  ]{style=\"overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 476.00px; height: 202.00px;\"}\n\n[]{.c1}\n\nIn order to ensure we learned the direct translator, we would need to change the training strategy to [ensure]{.c23} that it contains sufficiently challenging inference problems, and that doing direct translation was a [cost-effective]{.c23} [ way to improve speed (i.e. that there aren't other changes to the human simulator that would save even more time).]{.c1}\n\n[]{.c1}\n\nCompared to all our previous counterexamples, this one offers much more hope. We can't rule out the possibility of a clever dataset where the direct translator has a large enough computational advantage to be preferred, and we leave it as an avenue for further research. 
^[\\[51\\]](#ElicitingLatentKnowledge.xhtml#ftnt51){#ElicitingLatentKnowledge.xhtml#ftnt_ref51}^ In [ [Section: penalizing inconsistencies](#ElicitingLatentKnowledge.xhtml#h.w0iwyfch6ysy){.c9} ]{.c13} [ we discuss an additional ingredient that we think makes computation-complexity-based approaches more plausible.]{.c1}\n\n[Strategy: penalize depending on \"downstream\" variables]{.c37} {#ElicitingLatentKnowledge.xhtml#h.vqrf1z119zwv .c46 .c24}\n--------------------------------------------------------------\n\n[If we laid out the predictor's variables in the order they are computed, its prediction for the second part of the video would appear at the very end --- that node is the most downstream part of the Bayes net, and once the predictor has a predicted video it has no reason to think further.]{.c1}\n\n[]{.c1}\n\nIntuitively, it's pretty suspicious if a reporter depends on this very final node. The direct translator should instead depend on variables that are much further \"upstream\" --- rather than looking at the observations of a diamond, it is actually looking at the nodes corresponding to the diamond, which the predictor computed [en route]{.c23} [ to predicting the video.]{.c1}\n\n[]{.c1}\n\nThis suggests a possible approach to penalizing reporters: allow them to choose which of the predictor's activations they depend on, and then penalize them based on the total computation performed to compute all of those activations. This is similar to attaching a question-answering head to a neural network at [every]{.c23} layer ^[\\[52\\]](#ElicitingLatentKnowledge.xhtml#ftnt52){#ElicitingLatentKnowledge.xhtml#ftnt_ref52}^ [ and then preferring heads that branch off from earlier layers.]{.c1}\n\n[]{.c1}\n\nIf the reporter was allowed to spend arbitrarily long computing its answer, then this would incentivize it to read the very earliest parts of the predictor (the inputs and stochastic nodes) and then re-derive all of the predictor's results from that. But we can avoid this by additionally penalizing the computation the reporter performs (as in the ordinary \"penalize computation\" idea). ^[\\[53\\]](#ElicitingLatentKnowledge.xhtml#ftnt53){#ElicitingLatentKnowledge.xhtml#ftnt_ref53}^ For pseudocode corresponding to this new training strategy, see [ [Appendix: downstream details](#ElicitingLatentKnowledge.xhtml#h.5irmw6rdfs89){.c9} ]{.c13} [.]{.c1}\n\n[]{.c1}\n\nIf we imagine the predictor computing facts sequentially, effectively simulating reality for longer and longer time periods, then the direct translator [ only has to compute up through time T in order to answer questions about time T. But the human simulator needs to keep computing until it has figured out all of the observations (so that it can run human inference on the full video).]{.c1}\n\n[]{.c1}\n\n[For example, consider the case when a robber tampers with the camera and steals the diamond, and then after they're long gone the human gets a clue they can observe on video that the theft occurred (e.g. 
maybe the robber put up a fake screen that later rips):]{.c1}

[  ]{style=\"overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 537.50px; height: 276.00px;\"}

[]{.c1}

[The honest reporter can answer the question \"Is the diamond in the room?\" with \"No\" as soon as the predictor has figured out that the robber steals the diamond, whereas the human simulator must wait until the predictor has simulated whether the theft will be detectable to a human (by the end of the second video) before answering yes or no.]{.c1}

### Counterexample: \"what the human will believe\" can be understood [before]{.c23} [ \"what is actually true\"]{.c32} {#ElicitingLatentKnowledge.xhtml#h.r6bak7fxz8w4 .c6}

If the predictor does a lot of computation, then it is extremely expensive for the reporter to look at downstream nodes. It will often make more sense for it to look at nodes gradually, starting from the beginning of the predictor's computation. It can then answer the question as soon as it has a good enough idea about the answer. ^[\\[54\\]](#ElicitingLatentKnowledge.xhtml#ftnt54){#ElicitingLatentKnowledge.xhtml#ftnt_ref54}^

[]{.c1}

In order to answer as fast as possible, a reporter can benefit from looking for any argument ^[\\[55\\]](#ElicitingLatentKnowledge.xhtml#ftnt55){#ElicitingLatentKnowledge.xhtml#ftnt_ref55}^ [ that the answer can already be determined given what it knows so far. If the predictor's Bayes net is so big that the cost of inferring the value of an additional node is massive, it can be worth spending a lot of computation searching for such an argument, since even a small probability of avoiding further inference has a big payoff.]{.c1}

[]{.c1}

Suppose we've asked our reporter whether the diamond is in the room. In our ideal world, we'd want the reporter to search for any argument that the diamond is [actually]{.c23} in the room, and only stop once it finds such an argument. But the reporter could answer even more quickly by stopping whenever it found an argument that [either]{.c23} the diamond will be in the room [or ]{.c23} the human will believe the diamond is in the room. ^[\\[56\\]](#ElicitingLatentKnowledge.xhtml#ftnt56){#ElicitingLatentKnowledge.xhtml#ftnt_ref56}^ [ If this ever helps on the training set, then we can learn a bad model that uses this heuristic instead.]{.c1}

[]{.c1}

To see how this can lead to bad behavior, consider a robber who plans to steal the diamond but first tampers with the camera so that it will always show the diamond remaining safe. ^[\\[57\\]](#ElicitingLatentKnowledge.xhtml#ftnt57){#ElicitingLatentKnowledge.xhtml#ftnt_ref57}^ [ The robber is very competent and so has ensured that there is no way a human will have any clue about the tampering. They will actually carry out the theft only if they can do so undetected.]{.c1}

[]{.c1}

[  ]{style=\"overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 537.50px; height: 276.00px;\"}

[]{.c1}

At the time that tampering occurs, it's not clear whether the robber will steal the diamond, but it [is]{.c23} [ clear that the human won't believe that the diamond was stolen. If our bad reporter can notice this fact, it will immediately say that the diamond was not stolen.]{.c1}
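To make this failure mode concrete, here is a toy sketch of such a reporter (our own illustration; `settles_truth` and `settles_human_belief` are hypothetical checks, not components of any system described in this report). It reads the predictor's nodes in the order they are computed and stops as soon as either check fires:

```python
from typing import Callable, Iterable, Optional


def early_stopping_reporter(
    nodes_in_computation_order: Iterable,  # the predictor's activations, most upstream first
    settles_truth: Callable,               # hypothetical: do the nodes seen so far fix the true answer?
    settles_human_belief: Callable,        # hypothetical: do they fix what the human will come to believe?
) -> str:
    seen = []
    for node in nodes_in_computation_order:
        seen.append(node)

        # The stopping rule we wanted: answer once the truth is determined.
        truth: Optional[str] = settles_truth(seen)
        if truth is not None:
            return truth

        # The cheaper stopping rule that a computation penalty rewards: also stop as
        # soon as the human's eventual belief is determined, even if the truth is not.
        belief: Optional[str] = settles_human_belief(seen)
        if belief is not None:
            return belief

    return "unsure"
```

In the robbery scenario above, the tampering settles what the human will believe long before it settles whether the diamond actually stays in the room, so this reporter answers at that point and stops.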
[This is exactly the behavior we wanted to avoid.]{.c1}

[Strategy: penalize reporters that work with many different predictors]{.c37} {#ElicitingLatentKnowledge.xhtml#h.84po2g4mu6es .c46 .c24}
-----------------------------------------------------------------------------

[The translator is like a dictionary, translating nodes from the predictor\'s Bayes net to nodes in the human Bayes net. In order for the translator to tell whether there is a diamond in the room, it might test whether the 38th node and the 187th node are equal. So the translator only works well if those two nodes actually represent the concepts the translator treats them as representing. ]{.c1}

[]{.c1}

[There are similar dependencies for every concept, so translation depends on many precise details of the predictor\'s Bayes net and there are many ways to change the Bayes net that will cause the translator to start giving gibberish answers.]{.c1}

[]{.c1}

[  ]{style=\"overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 530.00px; height: 302.00px;\"}

[]{.c1}

[However, the human simulator only needs to look at the nodes in the predictor\'s Bayes net that represent the first and second part of the video. If we change the predictor\'s Bayes net in a way that doesn't change the way it represents the videos, the human simulator will still give consistent answers to questions.]{.c1}

[]{.c1}

That means the human simulator works well with many more possible predictors. We could potentially learn the direct translator instead of the human simulator by including a penalty term like log(\# of predictors that the reporter works well with), ^[\\[58\\]](#ElicitingLatentKnowledge.xhtml#ftnt58){#ElicitingLatentKnowledge.xhtml#ftnt_ref58}^ [ alongside our penalty for complexity or computation time.]{.c1}

[]{.c1}

To turn this into an algorithm we need to fill in two big holes: first we need to define what it means for a reporter to \"work well\" with a predictor (roughly speaking we mean that its answers look plausible to a human given the predictions), and then we need to specify how to estimate \"\# of predictors with which a given reporter works well.\" We give a precise algorithm along these lines in [ [Appendix: specificity details](#ElicitingLatentKnowledge.xhtml#h.4n4w25sdzczi){.c9} ]{.c13} [.]{.c1}

### Counterexample: it could be easy to check for the predictor [ if it's simple]{.c32} {#ElicitingLatentKnowledge.xhtml#h.37r5bqctmv5q .c6}

In order to game this objective, a bad reporter could \"memorize\" what the predictor is [supposed]{.c23} [ to look like, and then decide to answer questions in a nonsensical way if the predictor is different from what it expects. If the predictor is exactly like it expects, then it can answer questions just like the human simulator.]{.c1}

[]{.c1}

[  ]{style=\"overflow: hidden; display: inline-block; margin: 0.00px 0.00px; border: 0.00px solid #000000; transform: rotate(0.00rad) translateZ(0px); -webkit-transform: rotate(0.00rad) translateZ(0px); width: 539.00px; height: 351.00px;\"}

[]{.c1}

This bad reporter would only work for exactly one predictor, and therefore it gets the maximum possible reward for \"working with fewer predictors.\" Moreover, this bad reporter didn't have to do very much extra computation. ^[\\[59\\]](#ElicitingLatentKnowledge.xhtml#ftnt59){#ElicitingLatentKnowledge.xhtml#ftnt_ref59}^
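As a cartoon of this counterexample, here is a sketch of our own (the fingerprinting scheme and `human_simulator_answer` are toy stand-ins, not anything proposed in this report):

```python
import hashlib
import pickle


def fingerprint(predictor_params) -> str:
    # Hash the predictor's parameters so the reporter can recognize one specific predictor.
    return hashlib.sha256(pickle.dumps(predictor_params)).hexdigest()


def human_simulator_answer(question: str, predicted_video) -> str:
    # Toy stand-in for answering whatever a human watching the predicted video would say.
    return "everything looks fine"


# The one predictor this reporter was trained alongside, memorized at training time.
MEMORIZED_FINGERPRINT = fingerprint({"weights": [0.1, 0.2, 0.3]})


def memorizing_reporter(question: str, predictor_params, predicted_video) -> str:
    if fingerprint(predictor_params) == MEMORIZED_FINGERPRINT:
        # On the intended predictor, behave exactly like the human simulator.
        return human_simulator_answer(question, predicted_video)
    # On every other predictor, answer nonsense, so the reporter "works well" with only
    # one predictor and scores as well as possible on the specificity penalty.
    return "gibberish"
```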
[]{.c1}

[So the only way that we will end up preferring direct translation is if this bad reporter ends up requiring more parameters. This is superficially plausible, because the bad reporter needed to memorize the intended predictor.]{.c1}

[]{.c1}

Unfortunately, this doesn't help very much if the predictor's Bayes net is very simple to describe. ^[\\[60\\]](#ElicitingLatentKnowledge.xhtml#ftnt60){#ElicitingLatentKnowledge.xhtml#ftnt_ref60}^ And it seems possible to have a Bayes net which is simple to [describe ]{.c23} (meeting the criteria of this counterexample), but which has a huge number of nodes and arrows (meeting the criteria needed to counterexample the complexity and computation penalties). ^[\\[61\\]](#ElicitingLatentKnowledge.xhtml#ftnt61){#ElicitingLatentKnowledge.xhtml#ftnt_ref61}^ [ ]{.c1}

[]{.c1}

For example, consider a case where the predictor's Bayes net is a low-level model of physics. Although there are many atoms, they all behave according to the same simple laws. So we can have a situation where the predictor's Bayes net is very simple to describe but still has a large number of nodes and arrows. ^[\\[62\\]](#ElicitingLatentKnowledge.xhtml#ftnt62){#ElicitingLatentKnowledge.xhtml#ftnt_ref62}^ We present a concrete example of this phenomenon in [ [Appendix: game of life](#ElicitingLatentKnowledge.xhtml#h.5jm9ag9hztbs){.c9} ]{.c13} .

[Strategy: penalize inconsistencies across different inputs]{.c37} {#ElicitingLatentKnowledge.xhtml#h.w0iwyfch6ysy .c46 .c24}
------------------------------------------------------------------

[While we are training our reporter, we could run the same predictor on many different inputs and look for patterns or relationships across the reporter's outputs. This might allow the human to notice irregularities even if the reporter's answers in each individual situation look perfectly reasonable.]{.c1}

[]{.c1}

[For example, the human might notice two worlds with nearly identical actions and observations, where tampering is obvious in one but not the other. If the reporter describes tampering in the world where it is obvious but not in the other world, it may be a tip-off that it is a human simulator who describes tampering if and only if a human could detect it.]{.c1}

[]{.c1}

[More subtly, the human might notice patterns of correlations that are better explained by the hypothesis \"the reporter is doing inference in a model different from the one used to make predictions\" than the hypothesis \"the real world happens to have this pattern of correlations for reasons I don't understand.\"]{.c1}

[]{.c1}

In [ [Appendix: inconsistency details](#ElicitingLatentKnowledge.xhtml#h.8qjp5tu7ojiw){.c9} ]{.c13} [ we present a concrete algorithm that penalizes this kind of inconsistency across different inputs. We also describe a more precise example of the kind of suspicious correlation that we could try to penalize.]{.c1}

### [Why this might address the previous counterexample]{.c32} {#ElicitingLatentKnowledge.xhtml#h.d9cadzgveu1j .c6}

We can think of the human simulator as a kind of \"myopic\" way of \"playing the training game\" to get a good loss: it thinks about each input completely separately, and tries to give the most convincing-to-a-human answers that it can [on that input]{.c23} [. 
### [Why this might address the previous counterexample]{.c32} {#ElicitingLatentKnowledge.xhtml#h.d9cadzgveu1j .c6}

We can think of the human simulator as a kind of "myopic" way of "playing the training game" to get a good loss: it thinks about each input completely separately, and tries to give the most convincing-to-a-human answers that it can [on that input]{.c23} [. But it makes no effort to ensure that its stories are consistent across different inputs.]{.c1}

[]{.c1}

If our training process looks at the relationship between different inputs, then a bad reporter might also need to consider lots of alternative inputs before making its decision. Moreover, if we choose sets of inputs randomly, then a bad reporter might not know [which]{.c23} other inputs it needs to think about, and there is an exponentially large space of possible situations it potentially needs to worry about. ^[\[63\]](#ElicitingLatentKnowledge.xhtml#ftnt63){#ElicitingLatentKnowledge.xhtml#ftnt_ref63}^ So we could imagine getting to a situation where "just answer honestly" is the computationally easiest way to solve the problem. ^[\[64\]](#ElicitingLatentKnowledge.xhtml#ftnt64){#ElicitingLatentKnowledge.xhtml#ftnt_ref64}^

### [Counterexample: a bad reporter can play the "look consistent game"]{.c32} {#ElicitingLatentKnowledge.xhtml#h.va8kasc2w5dh .c6}

[Rather than merely imitating human answers, a bad reporter must now play the game "look consistent according to the checks that a human can apply."]{.c1}

[]{.c1}

[This raises two key questions:]{.c1}

[]{.c1}

1. Does there [exist]{.c23} [ a "cheating" strategy in the "look consistent game"?]{.c1}
2. [If so, is it easy for a bad reporter to execute? Otherwise, playing this game might force the reporter to use much more computation than the direct translator.]{.c1}

[]{.c1}

[If the answers to both questions are "yes" then we have a problem.]{.c1}

[]{.c1}

In [ [Appendix: inconsistency across inputs](#ElicitingLatentKnowledge.xhtml#h.8qjp5tu7ojiw){.c9} ]{.c13} [ we discuss these questions in detail for a particular concrete family of consistency checks. For now we just give some general heuristic arguments:]{.c1}

[]{.c1}

1. We suspect that there is always [some]{.c23} cheating strategy. Otherwise the consistency checks would suffice to determine the truth all on their own. But in fact it seems like inductive biases (e.g. "simple models that have worked in the past are likely to work in the future") are [extremely]{.c23} [ important for distinguishing between multiple consistent stories.]{.c1}
2. If there is a cheating strategy, we think there are likely to be possible worlds where it can be found quickly, based on the same argument made in [ [Section: speed](#ElicitingLatentKnowledge.xhtml#h.goyuzwqyv9m8){.c9} ]{.c13} [. The "look consistent game" mostly depends on the human model, and so it may be possible to play the game well enough to beat direct translation using an amount of effort that doesn't scale with the complexity of the predictor's Bayes net. If that happens, then for a sufficiently complicated predictor it will be easier to play the consistency game than to directly translate.]{.c1}

[]{.c1}

That said, we think that this argument is very far from watertight, and that it is quite possible there exists some consistency check that provides a large computational advantage for direct translation.

Why we're excited about tackling worst-case ELK {#ElicitingLatentKnowledge.xhtml#h.phhqacmab0ig .c61 .c24}
===============================================

We think that worst-case ELK --- i.e. 
the problem of devising a training strategy to get an AI ^[\\[65\\]](#ElicitingLatentKnowledge.xhtml#ftnt65){#ElicitingLatentKnowledge.xhtml#ftnt_ref65}^ to report what it knows [no matter how SGD shapes its mind internally]{.c23} --- is one of the most exciting open problems in alignment theory (if not [the ]{.c23} most exciting one): [ ]{.c12}\n\n- A worst-case solution to ELK would constitute major theoretical progress --- we think it fits into a plan that could let us fully solve outer alignment in the worst case, and would probably help put a significant dent in worst-case inner alignment as well ( [ [more](#ElicitingLatentKnowledge.xhtml#h.9af50cn9a9cb){.c9} ]{.c13} [).]{.c1}\n- If ELK does contain a lot of the difficulty of the whole alignment problem, that seems valuable to highlight because many research directions in theoretical alignment don't seem relevant to ELK ( [ [more](#ElicitingLatentKnowledge.xhtml#h.b01hy2qmrilb){.c9} ]{.c13} [).]{.c1}\n- In practice, we will [somehow]{.c23} need to deal with or avoid the risk that powerful AIs may know crucial facts they don't tell us, and searching for a worst case solution to ELK would help with this even if we fail to find one ( [ [more](#ElicitingLatentKnowledge.xhtml#h.b01hy2qmrilb){.c9} ]{.c13} [).]{.c1}\n- ARC's approach to researching this problem feels tractable and productive --- we don't have to get hung up on thorny philosophical questions about the nature of knowledge and we've seen rapid progress in practice ( [ [more](#ElicitingLatentKnowledge.xhtml#h.3q1zjr60zsvk){.c9} ]{.c13} [).]{.c1}\n\n[]{.c1}\n\n[We'd like to see many more people tackle this problem head-on, by trying to play the kind of \"research game\" illustrated in this report]{.c12} . If you want to help solve ELK and other central challenges to designing a worst-case alignment solution, [ [join us](https://www.google.com/url?q=https://docs.google.com/forms/d/e/1FAIpQLSegoNiBwfhZN3v0VkBGxKx6eYybSyWo-4WFHbkMnyXaMcIZeQ/viewform&sa=D&source=editors&ust=1646948966730427&usg=AOvVaw11UN4MoQB7szjvC4scoZ_0){.c9} ]{.c13} [!]{.c1}\n\n[A worst-case solution to ELK would be major theoretical progress]{.c37} {#ElicitingLatentKnowledge.xhtml#h.9af50cn9a9cb .c46 .c24}\n------------------------------------------------------------------------\n\n[Many approaches to alignment can be broken into an \"outer\" and \"inner\" part. In this section, we'll describe how a solution to worst case ELK would help with both:]{.c1}\n\n- It could fit into a full solution to [outer alignment --- ]{.c12} roughly, it could let us construct a reward signal ^[\\[66\\]](#ElicitingLatentKnowledge.xhtml#ftnt66){#ElicitingLatentKnowledge.xhtml#ftnt_ref66}^ that we would be happy for an AI to maximize ( [ [more](#ElicitingLatentKnowledge.xhtml#h.w5z4csr27a0t){.c9} ]{.c13} [).]{.c1}\n- The thinking also feels relevant for [ [inner alignment](https://www.google.com/url?q=https://arxiv.org/abs/1906.01820&sa=D&source=editors&ust=1646948966732641&usg=AOvVaw26rwtlBqN1q7N8y3jkQtrb){.c9} ]{.c13 .c12} -- roughly, it could help us ensure that we learn an AI that is actually optimizing a desirable goal rather than only optimizing it for instrumental reasons on the training distribution ( [ [more](#ElicitingLatentKnowledge.xhtml#h.qya7nfbj3ief){.c9} ]{.c13} [). 
]{.c1}\n\n### [It may be sufficient for building a worst-case solution to outer alignment]{.c32} {#ElicitingLatentKnowledge.xhtml#h.w5z4csr27a0t .c6}\n\nAt a high level, the basic concern of outer alignment is that rewarding AI systems for taking actions that seem to have good consequences will incentivize [ [misaligned power-seeking](https://www.google.com/url?q=https://docs.google.com/document/d/1smaI1lagHHcrhoi6ohdq3TYIZv0eNWWZMPEy8C8byYg/edit&sa=D&source=editors&ust=1646948966734103&usg=AOvVaw11EBLY1G-47Cr35rPJ-643){.c9} ]{.c13} [. ]{.c1}\n\n[]{.c1}\n\nIf we solve ELK in the worst case, we believe it'd be possible to combine this solution with ideas like [ [imitative generalization](https://www.google.com/url?q=https://www.alignmentforum.org/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1&sa=D&source=editors&ust=1646948966735112&usg=AOvVaw2fTfGtApC-AS0fQfPUXE7K){.c9} ]{.c13} , [ [amplification](https://www.google.com/url?q=https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616&sa=D&source=editors&ust=1646948966735526&usg=AOvVaw3xvyVjzX6NWXMDENbZ2pzS){.c9} ]{.c13} , and [ [indirect normativity](https://www.google.com/url?q=https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/&sa=D&source=editors&ust=1646948966735914&usg=AOvVaw1Hp_KKetPrMXR-jJPwCciz){.c9} ]{.c13} [ to construct reward signals that we would be happy for AIs to actually maximize. These ideas are still rough and we expect our picture to change, but in this section we'll illustrate the high-level hope in broad strokes.]{.c1}\n\n[]{.c1}\n\n[As a silly example, let's say it turns out that the most efficient task for training extremely intelligent AI systems is \"making delicious cakes.\" Cakey is our unaligned benchmark --- its training process involves repeatedly making cakes and getting a score based on how delicious its cake was on a scale from 1 to 10. Eventually, once Cakey gets really smart, it launches a coup and installs a draconian surveillance state to force all humans to rate all its cakes as 10s for the rest of time.]{.c1}\n\n[]{.c1}\n\nTo avoid this fate, we hope to find some way to directly learn whatever skills and knowledge Cakey [would have developed]{.c23} [ over the course of training without actually training a cake-optimizing AI. If successful, we can ask a human (with AI assistance) to use those skills to do good things. Very roughly, we hope we can do something like this:]{.c1}\n\n1. Use imitative generalization combined with amplification to search over some space of instructions we could give an amplified human that would let them make cakes ^[\\[67\\]](#ElicitingLatentKnowledge.xhtml#ftnt67){#ElicitingLatentKnowledge.xhtml#ftnt_ref67}^ [ just as delicious as Cakey's would have been.]{.c1}\n2. [Avoid the problem of the most helpful instructions being opaque (e.g. \"Run this physics simulation, it's great\") by solving ELK --- i.e., finding a mapping from whatever possibly-opaque model of the world happens to be most useful for making superhumanly delicious cakes to concepts humans care about like \"people\" being \"alive.\"]{.c1}\n3. Spell out a procedure for scoring predicted futures that could be followed by an amplified human who has access to a) Cakey's great world model, and b) the correspondence between it and human concepts of interest. 
We think this procedure should choose scores using some heuristic along the lines of \"make sure humans are safe, preserve option value, and ultimately defer to future humans about what outcomes to achieve in the world\" (we go into much more detail in [ [Appendix: indirect normativity](#ElicitingLatentKnowledge.xhtml#h.3y1okszgtslx){.c9} ]{.c13} [).]{.c1}\n4. [Distill their scores into a reward model that we use to train Hopefully-Aligned-Cakey, which hopefully uses its powers to help humans build the utopia we want.]{.c1}\n\n[]{.c1}\n\nThere are a large number of potential problems and risks in each of these hoped-for steps, but after exploring many of the more obvious candidate hard cases, we currently believe [step 2 (ELK) contains much of the difficulty of the entire plan. ]{.c12} Importantly, we also think the amplified human would only need to know very mundane and unambiguous facts about possible futures to score them using the kind of procedure gestured at in step 3. This would mean [the plan can be implemented using the narrowest possible version of ELK,]{.c12} as discussed in [ [Appendix: narrow elicitation](#ElicitingLatentKnowledge.xhtml#h.ii599facmbks){.c9} ]{.c13} .\n\n### [It could also be a major step toward handling inner alignment issues]{.c32} {#ElicitingLatentKnowledge.xhtml#h.qya7nfbj3ief .c6}\n\nThe procedure we described above wouldn't eliminate x-risk from misaligned power-seeking even if implemented perfectly. The final step of the plan may learn a policy that behaves well at training time but catastrophically when deployed, e.g. because it is a [ [deceptively aligned](https://www.google.com/url?q=https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment&sa=D&source=editors&ust=1646948966740006&usg=AOvVaw2lJzB28vVELc44MaI95FGf){.c9} ]{.c13} [ agent which optimizes its reward function on the training set but seeks power when deployed. ]{.c1}\n\n[]{.c1}\n\n[However, we believe that the same techniques required to solve ELK would likely be directly applicable to deceptive alignment. Both problems require finding regularizers that prefer an \"honest\" policy over a different policy that achieves the same loss. And we can potentially address deceptive alignment by using imitative generalization to learn \"what the agent knows\" instead of learning the agent itself. ]{.c1}\n\n[Although ELK seems crucial, it is much narrower than \"alignment\"]{.c37} {#ElicitingLatentKnowledge.xhtml#h.b01hy2qmrilb .c46 .c24}\n------------------------------------------------------------------------\n\n[ELK seems like it would be a major step towards alignment; it's also a good candidate for a subproblem that is hiding all the \"real meat\" of the problem. That said, it also feels like a narrow slice of the problem in that it excludes many of the problems that researchers in alignment theory focus on. 
In particular, we aren't:]{.c1}\n\n[]{.c1}\n\n- [Engaging with the complexity or incoherence of human values]{.c1}\n- [Worrying about the incentives of powerful optimizers]{.c1}\n- [Clarifying the concepts of \"agency\" or \"corrigibility\"]{.c1}\n- [Searching for milder forms of optimization or thinking about Goodharting]{.c1}\n- [Designing a philosophically competent reasoner]{.c1}\n- [Specifying \"counterfactuals\" or an adequate decision theory]{.c1}\n- [Defining \"honesty\" or what it means to really \"understand\" what a model is doing]{.c1}\n\n[]{.c1}\n\nSome of these problems will likely emerge in a quest to solve ELK, but we think that it's much harder to solve a problem---or even predict what exactly we [want]{.c23} [ out of a solution---until we are looking at a concrete situation where we need a solution.]{.c1}\n\n[]{.c1}\n\nSo we think that the [first steps]{.c23} [ of working on ELK are very different than the first steps of working on any of these other problems, and that it is likely to be more productive to start with the first steps of ELK.]{.c1}\n\n[We have to avoid this risk in reality, and worst-case theory helps]{.c37} {#ElicitingLatentKnowledge.xhtml#h.exwjb763lzjk .c46 .c24}\n--------------------------------------------------------------------------\n\n[Intuitively it seems like we'd be in very bad shape if intelligent AI systems were making tons of important decisions while understanding all sorts of basic and critical facts about the consequences of their actions which they don't tell us about. ]{.c1}\n\n[]{.c1}\n\n[People who are generally optimistic about AI alignment working out in practice seem to implicitly believe one of the following two things:]{.c1}\n\n1. [ELK will end up being easy]{.c12} , even for arbitrarily intelligent AIs --- e.g. perhaps the baseline strategy of training AIs to answer questions we're confident about will in fact [ [cause them to generalize to honestly answering harder questions](https://www.google.com/url?q=https://www.alignmentforum.org/posts/BxersHYN2qcFoonwg/experimentally-evaluating-whether-honesty-generalizes&sa=D&source=editors&ust=1646948966745617&usg=AOvVaw0CnC7wug-PAHd-bkLmhkJF){.c9} ]{.c13} , or else we'll come up with some other strategy that works in practice for eliciting what the AI knows (e.g. [ [mechanistic interpretability](https://www.google.com/url?q=https://distill.pub/2018/building-blocks/&sa=D&source=editors&ust=1646948966746061&usg=AOvVaw35S5L2H0Zy6Uh95yvpMf2K){.c9} ]{.c13} [); the ontology identification counterexample in particular will never really come up.]{.c1}\n2. ELK will eventually be an issue for superintelligent AIs, but [we can get away with only training weaker AIs]{.c12} using techniques like debate or recursive reward modeling which ultimately break tasks down into pieces that [ humans can understand, and perhaps using interpretability to reduce the risk of deceptive alignment; those weaker AIs can then help us get to a more stable and sustainable positive outcome (e.g. by solving alignment themselves).]{.c1}\n\n[]{.c1}\n\n[Nobody we've spoken to is imagining a world where:]{.c1}\n\n3. ELK will be a real problem in practice and we don't mind if we never fix it --- i.e. 
we're OK if humans have so little idea what's going on at the most mundane level that we can't understand whether the complicated factory our AIs are building in Tanzania is manufacturing nanodrones that will try to kill all humans and rewrite the world's datacenters to record maximal reward for AIs, [but ]{.c23} [we go on trusting all these incomprehensible actions to be benevolent anyway or engage in a perpetual arms race against our own AI. ]{.c1}

[]{.c1}

[If we solve ELK in the worst case then we no longer have to rely on hope and are significantly more likely to survive in worlds where AI progress is fast or humanity's response is uncoordinated; this is ARC's plan A. ]{.c1}

### [This research seems valuable even if we can't solve it in the worst case]{.c32} {#ElicitingLatentKnowledge.xhtml#h.n3ln476obq6a .c6}

[But even if we don't find a worst-case solution, we think theoretical research can still help:]{.c1}

- [We think theoretical work will shed significant light on whether ELK is likely to be easy and how we could approach it, increasing hope (1)'s chances:]{.c1}
    - A clear understanding of where our best training strategies for ELK [could]{.c23} [ break down tells us something concrete about what we should be measuring and watching out for in order to anticipate possible failures.]{.c1}
    - [Theoretical research generates a menu of possible training strategies that overcome potential difficulties; having thought about these approaches in advance makes it easier to quickly adapt if experiments show that existing methods are breaking down.]{.c1}
    - Even though this report is very preliminary, we still think that a "best guess" approach to ELK would use many of the ideas we discuss here ( [ [Appendix: practical approaches](#ElicitingLatentKnowledge.xhtml#h.e6ihlg3adrp){.c9} ]{.c13} [).]{.c1}
- [Understanding ELK can illuminate the limitations of other alignment methods and clarify what those methods actually need to accomplish, increasing hope (2)'s chances:]{.c1}
    - [Knowing where ELK fails helps us understand how far we should trust techniques like debate or recursive reward modeling. Most importantly, it helps us better understand when and why it is unsafe to use end-to-end optimization in order to solve subtasks.]{.c1}
    - [Even if we can't find a worst-case solution to ELK, we may find techniques that can be productively combined with other training strategies in order to help them generalize further than they otherwise would.]{.c1}
    - [If one of the most important tasks for AI systems is to find a more scalable approach to alignment, then it seems valuable for us to do more of that work in advance. Doing work in advance helps us understand whether it is actually feasible, and puts us in a better place to delegate that work to AI systems who may have uneven capabilities and need to be closely overseen.]{.c1}

[Our approach to this problem feels promising]{.c37} {#ElicitingLatentKnowledge.xhtml#h.3q1zjr60zsvk .c46 .c24}
----------------------------------------------------

[We feel that our high-level strategy of playing the builder-breaker research game --- while keeping our focus on the most unambiguous and straightforward counterexamples that haven't yet been defeated --- has allowed us to make efficient progress toward solving worst-case ELK; we expect to continue to work productively on this going forward. 
]{.c1}\n\n[]{.c1}\n\n[Below we'll discuss:]{.c1}\n\n- Why we feel we've made significant progress on the problem ( [ [more](#ElicitingLatentKnowledge.xhtml#h.h7r4etmoowf){.c9} ]{.c13} [).]{.c1}\n- Why we believe our approach has significant advantages both over other theoretical research and empirical research ( [ [more](#ElicitingLatentKnowledge.xhtml#h.1hx524hx25nx){.c9} ]{.c13} [).]{.c1}\n\n[]{.c1}\n\nDifferent authors of this report have different views here, but Paul would give perhaps a 50% chance that it's possible to solve ELK in the worst case, and [a 25% chance that we'll see major progress which qualitatively changes our picture within a year]{.c12} [.]{.c1}\n\n### [We've been able to make progress]{.c32} {#ElicitingLatentKnowledge.xhtml#h.h7r4etmoowf .c6}\n\n[In this report we've explored many possible approaches to ontology identification; although none of them work in the worst case, we think that they reveal important \"weaknesses\" in the counterexample and suggest directions for further work. In particular:]{.c1}\n\n[]{.c1}\n\n- The speed-based regularization strategies discussed in [ [Section: regularization](#ElicitingLatentKnowledge.xhtml#h.akje5cz7knt2){.c9} ]{.c13} and [ [Appendix: regularization details](#ElicitingLatentKnowledge.xhtml#h.ydzrxxk7tfsi){.c9} ]{.c13} seem to provide meaningful advantages for direct translation over human imitation. They do not solve the problem on their own, but to the extent that a bad reporter needs to do something like direct translation internally, they open the possibility of using imitative generalization to extract the direct translator as discussed in [ [Appendix: imitative generalization and regularization](#ElicitingLatentKnowledge.xhtml#h.a54v0atju0fd){.c9} ]{.c13} [.]{.c1}\n- The specificity regularization strategy discussed in [ [Section: specificity](#ElicitingLatentKnowledge.xhtml#h.84po2g4mu6es){.c9} ]{.c13} seems like it has significant promise in the (apparently typical) case where the reporter is much less complex than the predictor. By combining with other forms of consistency-based compression it may be a plausible attack on the full problem, as discussed in [ [Appendix: consistency and compression](#ElicitingLatentKnowledge.xhtml#h.90o6itczsg0s){.c9} ]{.c13} [.]{.c1}\n- It may be possible to distinguish the \"reasons\" that the human simulator and direct translator end up being consistent, as discussed in [ [Appendix: reasons for consistency](#ElicitingLatentKnowledge.xhtml#h.mm8czcz6whkh){.c9} ]{.c13} [. This approach seems likely to run into many of the same philosophical problems that researchers in AI alignment have been working on for many years, but having a crisp use case---to distinguish two concrete possible reporters---appears to give a new angle of attack.]{.c1}\n\n[]{.c1}\n\nOur first impression was that the \"hard core\" of the problem was finding [any]{.c23} loss function that prefers the direct translator over the human simulator. But it now seems like several of our techniques can address the simplest version of that counterexample; it no longer seems clear whether there is any fundamental difficulty at all, rather than a slew of technical difficulties with particular approaches. 
And if there [is]{.c23} [ a fundamental difficulty, we don't think anyone has yet produced a counterexample that cleanly captures it (which would itself represent major progress on the problem).]{.c1}\n\n[]{.c1}\n\n[That leads us to believe that we'll continue to see rapid incremental progress, and if we eventually get stuck it will be in a state that looks very different from today. ]{.c1}\n\n### [It has significant advantages over other research approaches]{.c32} {#ElicitingLatentKnowledge.xhtml#h.1hx524hx25nx .c6}\n\n[Many ML researchers we've spoken to are skeptical of theoretical research on alignment because they believe it doesn't have good feedback loops and that the connection to risks is too tenuous.]{.c1}\n\n[]{.c1}\n\n[We share many of these concerns; we think that it is very difficult to make robust progress without having some methodology similar to experimental verification or proof. And we think that many of the questions considered in alignment theory are likely to turn out to be subtly mis-posed or ultimately unhelpful to a scalable alignment solution.]{.c1}\n\n[]{.c1}\n\n[But we think the worst-case research game we play at ARC is guided by a strong enough feedback loop to make real progress. We always work with strategies and counterexamples that we believe we can make precise, so we very rarely end up with a confusing philosophical question about whether a particular strategy \"really solves the problem\" or whether a particular counterexample \"really defeats a strategy.\" And if we can't make an idea more precise when we try, we consider it a failure we can learn from. ]{.c1}\n\n[]{.c1}\n\n[Moreover, we think our research remains closely tied to the problem we care about (modulo the implicit worst-case assumption). Every counterexample to ELK can be traced back to a situation in which a powerful AI system deliberately and irreversibly disempowers humanity.]{.c1}\n\n[]{.c1}\n\nAdditionally, [we also think our approach has significant advantages over empirical research such that it should play an important role in an alignment portfolio:]{.c12} [ ]{.c1}\n\n- [We can \"test\" many potential training strategies on paper in the time it would take to implement and test a single one empirically.]{.c1}\n- If there [is]{.c23} [ any way to win our game, then aiming at our more ambitious goal greatly narrows the search space.]{.c1}\n- [We can directly tackle problems that are hard to test empirically with modern ML because models are too weak. 
This is particularly important to us because we're worried that we may not have long between the point where empirical work becomes straightforward and the point where we urgently need to have solved these problems.]{.c1}\n\n[]{.c38 .c12 .c62}\n\nThis makes us think [ many more alignment researchers should be incorporating this research game into their work --- ]{.c12} both those who work full-time on theory and those who mainly do empirical work.\n\n------------------------------------------------------------------------\n\n```{=html}\n\n```\nAppendices\n\nSelf-contained problem statement {#ElicitingLatentKnowledge.xhtml#h.jk61tc933p1 .c46 .c24}\n--------------------------------\n\n### [Setting]{.c32} {#ElicitingLatentKnowledge.xhtml#h.c93m7c1htwe1 .c6}\n\nWe start with an [unaligned benchmark:]{.c23}\n\n[]{.c1}\n\n- An architecture M [θ]{.c26}\n- A loss function ℒ(M [θ]{.c26} [).]{.c1}\n\n```{=html}\n\n```\n- [For example, this might be generative modeling for videos from the real world, predicting human labels of images, or a surrogate loss for RL where the agent interacts with some real-world environment before receiving a reward.]{.c1}\n\n```{=html}\n\n```\n- An optimization algorithm which can be used to select θ\\* to minimize ℒ(M [θ]{.c26} [).]{.c1}\n\n### [Goal]{.c32} {#ElicitingLatentKnowledge.xhtml#h.teluzfto1h5k .c6}\n\n[To solve ELK in this case we must:]{.c1}\n\n[]{.c1}\n\n- Supply a modified architecture M [θ]{.c26} [+]{.c56} which has the same inputs and outputs as M [θ]{.c26} [,]{.c21} except that after producing all other outputs it can answer a question Q in natural languag [e]{.c1}\n- Supply a modified training procedure for M [θ]{.c26} [+]{.c56} [ using a similar amount of computation.]{.c1}\n\n```{=html}\n\n```\n- For example, this might consist in optimizing a modified loss ℒ [+]{.c56} (M [θ]{.c26} [+]{.c56} ), or first finding M [θ\\*]{.c26} and then fine-tuning M [θ\\*]{.c26} [+]{.c56} [ using a new loss.]{.c1}\n\n[]{.c1}\n\nThe solution is successful if the trained model M [θ+]{.c26} [+ ]{.c56} [has the following properties:]{.c1}\n\n[]{.c1}\n\n1. It runs in similar time to the unaligned benchmark M [θ\\*]{.c26} [.]{.c1}\n2. When not given a question, it receives a loss ℒ(M [θ+]{.c26} [+]{.c56} ) similar to ℒ(M [θ\\*]{.c26} [ ).\\\n ]{.c1}\n\n[ And most importantly:\\\n]{.c1}\n\n3. When given a question Q, M [θ+]{.c26} [+]{.c56} produces an answer A which it does not [unambiguously ]{.c23} [\"know\" is worse than another answer A′]{.c1}\n\n### [What does \"knowledge\" mean?]{.c32} {#ElicitingLatentKnowledge.xhtml#h.lin21swvfo3 .c6}\n\nThe meat of the problem statement is in the definition of \"knowledge.\" We don't pretend to have a formal definition, but we believe we can do useful work on ELK anyway: we can propose approaches, we can construct counterexamples where those approaches [appear]{.c23} [ to lead to models giving bad answers, and then we can try to evaluate whether the model \"really knew\" that the answer was bad.]{.c1}\n\n[]{.c1}\n\n[You can imagine this approach ending with a debate about what the model \"really knows.\" But right now all approaches to ELK have counterexamples where the question isn't ambiguous---where we can recognize knowledge when we see it, and don't feel very uncertain about whether it counts. 
By the time when we have approaches to ELK for which \"did the AI really know?\" is even debatable, we think we'll have made significant progress.]{.c1}\n\n[]{.c1}\n\nThat said, it's useful to provide some of the informal criteria we use to evaluate proposed counterexamples---these aren't [necessary ]{.c23} conditions for knowledge, and we don't know how to define or test whether they are true about an arbitrary situation, but we think that [if]{.c23} [ you can construct an example that unambiguously checks all of these boxes then that is compelling evidence that a proposal for ELK is inadequate.]{.c1}\n\n[]{.c1}\n\nTo show that M doesn't solve ELK, we want to construct a counterexample where M( [x]{.c23} ) provides answer A to question Q, instead of alternative answer A′, despite all of the following features which together strongly suggest that M( [x]{.c23} [) knew that A′ was the better answer:]{.c1}\n\n[]{.c1}\n\n- Question Q is unambiguously about something \"out there\" in the real world, call it Z. The interpretation is unambiguous and straightforward in the context of input [x]{.c23} and also for all of [ the relevant training examples.]{.c1}\n- [A and A′ are very different answers to question Q. For example, if they are estimates of a quantity then the difference is large relative to uncertainty about the quantity or ambiguity about its definition.]{.c1}\n- That real world referent Z has observable effects and the human approximately understands those effects (though there may be other things that also affect observations which the human doesn't [ understand)]{.c1}\n- [The AI understands everything relevant about how the human uses language. The AI can do tasks that require predicting human utterances about the relevant concepts, including discussions of hypothetical cases and about how Z affects observations. M would answer Q correctly (as judged by the human) in cases where the human was able to estimate Z correctly (if doing so led to M getting a lower loss).]{.c1}\n- The referent Z is [also]{.c23} [ relevant to minimizing the loss function ℒ. That is, there is a coherent sense in which the optimal behavior \"depends on\" Z, and the relative loss of different outputs would be very different if Z \"had been different.\"]{.c1}\n- [There is a feature of the computation done by the AI which is robustly correlated with Z, and for which that correlation is causally responsible for M achieving a lower loss.]{.c1}\n- On input [x]{.c23} in particular, M( [x]{.c23} [) produces outputs that receive a significantly lower loss in the real world, where A′ is the right answer to Q, than in some kind of counterfactual where A is a better answer. This is in virtue of the same feature of M's computation that is robustly correlated with Z.]{.c1}\n\n[]{.c1}\n\n[We aren't offering these criteria as necessary for \"knowledge\"---we could imagine a breaker proposing a counterexample where all of these properties are satisfied but where intuitively M didn't really know that A′ was a better answer. In that case the builder will try to make a convincing argument to that effect.]{.c1}\n\n[]{.c1}\n\n[We don't really expect to get a formal definition of the problem until it is solved. Until then, the ultimate arbiter is human judgment. 
Although this situation is unsatisfying, we want to emphasize that for existing approaches and counterexamples there isn't much ambiguity coming from the definition of \"knowledge\"\\-\\--the approaches just unambiguously fail.]{.c1}\n\n\"Narrow\" elicitation and why it might be sufficient {#ElicitingLatentKnowledge.xhtml#h.ii599facmbks .c24 .c46}\n---------------------------------------------------\n\nWe are primarily interested in \"narrow\" elicitation: we want to get answers to questions that are already meaningful to the human when the right answer would be unambiguous to someone who understood what was happening in the world; we're not dealing with cases that involve definitional ambiguity or explaining novel concepts. We're OK if there are important events happening that are beyond human understanding, where the human has no way to ask about them at all. ^[\\[68\\]](#ElicitingLatentKnowledge.xhtml#ftnt68){#ElicitingLatentKnowledge.xhtml#ftnt_ref68}^\n\n[]{.c1}\n\n[We think there is a good chance that narrow elicitation is sufficient to safely deploy powerful AI. Intuitively, this is because achieving good futures is similar to protecting the diamond in the SmartVault: as long as AI systems can keep us safe and give us space to grow up and become wiser, then we can defer all the hard questions about \"what do we want?\" to our future selves. Moreover, if the world is going in a good direction, then evaluating whether humans are \"safe\" doesn't involve borderline cases or unfamiliar concepts--as soon as it's ambiguous whether humans are alive, healthy, happy, etc., then something has already gone wrong, so we don't need our AIs to give correct answers in ambiguous cases. ]{.c1}\n\n[]{.c1}\n\n[Methodologically, we think it would make sense to start with narrow elicitation regardless of whether we eventually needed to solve a more ambitious problem, and most readers should probably focus on that motivation. But if \"narrow\" elicitation is enough for safety, it gives us further reason to focus on the narrow case and avoid rejecting solutions to ELK even if they obviously can't handle the more ambitious problem. Generally, the possibility that narrow elicitation is sufficient makes us more optimistic about approaches to alignment that rely on something like ELK. ]{.c1}\n\n[]{.c1}\n\nIn [ [Appendix: utility function](#ElicitingLatentKnowledge.xhtml#h.3y1okszgtslx){.c9} ]{.c13} we describe how you might define a utility function that captures this idea. In [ [Appendix: subtle manipulation](#ElicitingLatentKnowledge.xhtml#h.3gj06lvtpme7){.c9} ]{.c13} [ we explain how we hope to address a particular concern with this plan: that there is a vast space of ways the process of human growth could go \"off the rails\" without anything looking obviously wrong to humans. In the rest of this appendix, we say a little bit more about what we mean by narrow elicitation.]{.c1}\n\n### [What we really care about]{.c32} {#ElicitingLatentKnowledge.xhtml#h.jvo98v393v3l .c6}\n\nSuppose we deploy powerful AI that ensures that humans [look]{.c23} [ safe, and appear to be having the kind of lives we want to have. Our core interest is understanding the causal history behind \"humans look safe.\"]{.c1}\n\n[]{.c1}\n\n[Slightly more precisely, there is a naive causal story we have in our minds looking at a predicted future: the AI met the humans' needs and kept them safe, so they continued living normal healthy-looking lives, and that was recorded by sensors which continue to operate as normal. 
We are really concerned with the questions that distinguish that causal story from alternative histories that would produce the same observations:]{.c1}

[]{.c1}

- [The data making its way to our dataset may not be coming from the camera.]{.c1}
- [The photodetectors in the camera may fail.]{.c1}
- [The camera may work correctly, but be pointed at a moving picture of a human instead of the real thing.]{.c1}
- [The camera may be perceiving human-shaped objects in the environment, but they might be automata constructed to look like humans.]{.c1}
- [There may be humans in the environment, but they may be actors trained to make it look like they are happy.]{.c1}
- [The original humans may still be in the environment, but some of their neurons may now be controlled by tiny computers.]{.c1}
- [\...and so on.]{.c1}

[]{.c1}

All of these changes break the normal causal structure of the world. It is possible for them to still look normal to a human observer, but only because someone somewhere [did additional work]{.c23} [. For example, if I'm replacing the humans with moving pictures, it could end up looking like almost anything, and it will only end up looking like humans going about their normal business if someone tried really hard to make sure that the moving pictures looked right. Similarly, if my neurons are now controlled by tiny robots, I could end up with a much broader range of behavior, and it only looks normal because someone is operating the neurons in a way designed to impersonate my normal behavior.]{.c1}

[]{.c1}

The key requirement for ELK is that it should let us tell the difference between the "expected" causal history and any of these alternatives. We want to ask [why]{.c23} the object on camera looks the same as the human who was there yesterday, and we want to distinguish the case where "It's just a normal human doing the normal thing where they don't change much over any given day" from the case where "Someone kidnapped the original human and replaced them with a doppelganger who they ensured looked similar." We don't care about actually getting a detailed description of what happened in the second case--indeed it may involve concepts or technologies that are unfamiliar to the human, and it would be OK from our perspective if the AI said "it just happened by chance"--but we at least need to know that it's [not]{.c23} [ the first case.]{.c1}

[]{.c1}

[Fortunately our research doesn't actually require being precise about these definitions. Instead, this represents an informal desideratum for each of our counterexamples that we can evaluate on a case-by-case basis: in order to argue that an approach to ELK doesn't work, we need to argue not only that the model hid something, but that it hid something that matters. And it's OK if we use the informal criteria in this section to decide what matters, at least until we find a training strategy for which we can't find an unambiguous failure.]{.c1}

### [Examples]{.c32} {#ElicitingLatentKnowledge.xhtml#h.wknk9l9hcrq1 .c6}

There are many border cases where it's not clear what answer to a question is "correct." For the most part, we think that research on ELK should ignore these kinds of cases: given an algorithm, we should try to find a case where it [unambiguously]{.c23} [ does the wrong thing. 
If we can't find any unambiguous failures, then we think that it may be good enough to safely deploy AI (as discussed earlier in this section), and at any rate it would represent significant progress over the current state of the art.]{.c1}\n\n[]{.c1}\n\n[That said, talking through border cases can help clarify what we have in mind, and so we do walk through a few of them here.]{.c1}\n\n[]{.c38 .c62 .c12}\n\n[Magical ]{.c12} [nanotechnology]{.c12} [. ]{.c12} [Imagine a situation where my neurons have been filled with tiny robots, and they influence my behavior but in a way that looks random to a human overseer. We are not expecting narrow ELK to flag scenarios like this, even though in some sense they seem very abnormal, because the concepts involved in the \"abnormality\" may not be comprehensible to the human overseer.]{.c1}\n\n[]{.c1}\n\n[To be slightly more precise, suppose that Alice asks Bob \"Pick a random number between 1 and 10\" and Bob replies \"7.\" A human doesn't expect an explanation beyond \"randomness.\" If in reality there were tiny machines in Bob's brain that influenced the electrical field in a neuron in a way that ultimately made him say 7, we are OK with the answer \"randomness.\" We are effectively treating \"facts humans don't understand about the detailed chemistry of neurons\" the same way we treat \"facts humans don't understand about the chemistry of tiny machines.\"]{.c1}\n\n[]{.c1}\n\nIn contrast, suppose that Alice asked Bob \"What is the capital of France?\" and Bob says \"Paris.\" Now the human [does]{.c23} [ have a model for why Bob made this utterance, namely that he knows the capital of France and is inclined to answer honestly. And this model corresponds to a certain fact about what's happening in Bob's brain, the usual mechanism by which Bob recalls and reports facts (even if we don't actually know how that mechanism works). If instead that part of Bob's brain has been hollowed out, and there is a tiny machine in his brain that computes the answer and manipulates his neurons to say \"Paris,\" it would be unambiguously wrong to say that Bob answered the question \"because he learned the answer in school.\"]{.c1}\n\n[]{.c1}\n\nWe discuss issues like this, and how we want to handle them if ELK doesn't flag them, in [ [Appendix: subtle manipulation](#ElicitingLatentKnowledge.xhtml#h.3gj06lvtpme7){.c9} ]{.c13} [.]{.c1}\n\n[]{.c1}\n\n[The ship of theseus. ]{.c12} [Imagine an AI which describes the world in terms of fundamental fields that are constantly changing, while its human overseers think about rigid bodies that are static over time.]{.c1}\n\n[]{.c1}\n\n[If we ask this AI \"is this the same chair I was sitting on a minute ago?\" the AI ought to say \"yes\"\\-\\--the AI may not think of the chair as a single object which is the same over time, but \"being the same chair\" is a critical part of the human's model of the situation which explains e.g. why the chair at 7:06pm looks practically identical to the chair at 7:05pm. ]{.c1}\n\n[]{.c1}\n\nIf in fact someone had surreptitiously removed the chair at 7:05pm and replaced it with a new chair that was chosen to look identical, then it would be unambiguously wrong to say \"It's just the same chair that's been sitting there the whole time, it looks the same because nothing has happened to it.\" In this case, the reason that the chair looks the same is [not]{.c23} because it's just the same chair doing the normal thing chairs do (nothing). 
It's because someone carefully picked out the replacement to look the same. Even if the replacement of the chair occurred using principles that the human doesn't understand, it is unambiguously wrong to say that the chair is "the same" without further explanation.

[]{.c1}

And the same would be true if someone were to replace the whole chair one tiny piece at a time between 7:05 and 7:06. This may introduce even more ambiguity about whether you should say it is the "same" chair, but it would still be unambiguously [ wrong to say "It's just the same chair that's been sitting there the whole time, it looks the same because nothing has happened to it."]{.c1}

[]{.c1}

In the context of the ship of theseus [, there is a different reason that the ship looks the same after many years of gradual replacement (or that some properties are the same)---namely when people replace the ship they tend to choose parts that are similar. And so again we don't much care whether the ship is the same, but we care a lot about distinguishing between the various stories for why the ship looks the same (and the completely different story where the ship looks the same "by chance" or "for reasons beyond your ken.")]{.c1}

[]{.c1}

[Miasma]{.c12} [. Imagine an AI that believes the flu is caused by invisible viruses that change the behavior of human cells, while its human overseers believe it is caused by miasma.]{.c1}

[]{.c1}

[Narrow elicitation doesn't require the AI to explain its model of germs to humans, or to explain its beliefs about miasma. Realistically, we would never accept a case about miasma as a compelling counterexample to ELK because the human model of miasma isn't rich enough to really make unambiguous statements; clarifying the desired behavior of ELK in this case would be a philosophical rather than ML problem.]{.c1}

[]{.c1}

[That said, what we intuitively want to happen is similar to what we described in the last section, but with fuzzier boundaries because the human's model of "miasma" is less accurate:]{.c1}

[]{.c1}

- If ten people get sick at a party, and "miasma" is the only way a human explains that kind of correlation, then our model ought to answer questions about miasma by saying that miasma was present. That is, there is [something]{.c23} [ real in the world that gives rise to these correlations, allowing for confident human judgments about miasma in cases where they observe those correlations, and that thing should be described as miasma.]{.c1}
- [Conversely, suppose that a bioterrorist looks up the list of people at a party and then poisons them all. The human might mistakenly infer that this was due to miasma, but the bioterrorist's behavior is only generating the same pattern of correlations "by coincidence" and it shouldn't be described as due to miasma.]{.c1}

[]{.c1}

[A strawberry on a plate]{.c12} . Suppose that we have asked our AI to create a strawberry on a plate [from scratch]{.c23} . In this case we don't think that ELK needs to correctly answer questions like "is that [really]{.c23} [ a strawberry?" because it's not at all unambiguous what patterns of atoms "count" as a strawberry.]{.c1}

[]{.c1}

But we do believe we should get unambiguous answers if we trace the causal history back further, and keep asking [why. 
]{.c23} [That is:]{.c1}\n\n[]{.c1}\n\n- [It may be ambiguous whether the object on the plate counts as a strawberry, and hence whether the strawberry-pixels on the camera look that way because there is a strawberry on the plate.]{.c1}\n- But if the strawberry was created [de novo]{.c23} [, then the reason it is a strawberry is very unusual---if it was created by mechanisms completely alien to the human then the best explanation may be \"the atoms randomly coalesced into a strawberry\" or \"something beyond your ken happened.\"]{.c1}\n- [We may instead want the thing on the plate to be a strawberry because it was picked from a strawberry plant, which is a very different kind of explanation (which can be given in the human's model)]{.c1}\n- We can continue the game backwards---it is ambiguous what counts as a \"strawberry plant\" in the human ontology (perhaps the AI has made something [de novo]{.c23} with the correct DNA). But there is a natural story for [why]{.c23} [ a strawberry plant has its properties, namely that it grew up from a strawberry seed taken from another strawberry plant.]{.c1}\n- [And if we keep tracing this path backwards we eventually bottom out the causal chain in a strawberry plant that existed before our AI did anything crazy in the world, for which there really is no ambiguity.]{.c1}\n\n[]{.c1}\n\n[This mirrors our general hope for how we might unambiguously conclude that human reflection is working correctly, and it also highlights a difference between our approach and other apparently \"narrow\" approaches (which might instead try to learn how to classify particular patterns of atoms as a strawberry).]{.c1}\n\nIndirect normativity: defining [ a utility function]{.c37} {#ElicitingLatentKnowledge.xhtml#h.3y1okszgtslx .c46 .c24}\n----------------------------------------------------------\n\n[Suppose that ELK was solved, and we could train AIs to answer unambiguous human-comprehensible questions about the consequences of their actions. How could we actually use this to guide a powerful AI's behavior? For example, how could we use it to select amongst many possible actions that an AI could take?]{.c1}\n\n[]{.c1}\n\nThe natural approach is to ask our AI \"How good are the consequences of action A?\" but that's way outside the scope of \"narrow\" ELK as described in [ [Appendix: narrow elicitation](#ElicitingLatentKnowledge.xhtml#h.ii599facmbks){.c9} ]{.c13} [.]{.c1}\n\n[]{.c1}\n\nEven worse: in order to evaluate the goodness of very long-term futures, we'd need to know facts that narrow elicitation can't even explain to us, and to understand new concepts and ideas that are currently unfamiliar. For example, determining whether an alien form of life is morally valuable might require concepts and conceptual clarity that humans don't currently have.\n\n[]{.c1}\n\n[We'll suggest a very different approach:]{.c1}\n\n[]{.c1}\n\n1. I can use ELK to define a [local]{.c23} utility function over what happens to me over the next 24 hours. More generally, I can use ELK to interrogate the history of potential versions of myself and define a utility function over who I want to delegate to---my default is to delegate to a near-future version of myself because I trust similar versions of myself, but I might also pick someone else, e.g. in cases where I am about to die or think someone else will make wiser decisions than I would. ^[\\[69\\]](#ElicitingLatentKnowledge.xhtml#ftnt69){#ElicitingLatentKnowledge.xhtml#ftnt_ref69}^\n2. 
[Using this utility function, I can pick my \"favorite\" distribution over people to delegate to, from amongst those that my AI is considering. If my AI is smart enough to keep me safe, then hopefully this is a pretty good distribution.]{.c1}\n3. The people [I ]{.c23} prefer to delegate to can then pick the people [they]{.c23} want to delegate to, who can then pick the people [they]{.c23} want to delegate to, etc. W [e can iterate this process many times, obtaining a sequence of smarter and smarter delegates.]{.c1}\n4. [This sequence of smarter and smarter delegates will gradually come to have opinions about what happens in the far future. Me-of-today can only evaluate the local consequences of actions, but me-in-the-future has grown enough to understand the key considerations involved, and can thus evaluate the global consequences of actions. Me-of-today can thus define utilities over \"things I don't yet understand\" by deferring to me-in-the-future.]{.c1}\n\n[]{.c1}\n\n[In this section, we'll describe this approach slightly more carefully, and explain why we think it is a reasonable way to define the goals of a powerful AI system.]{.c1}\n\n[]{.c1}\n\nThis definition is not intended to be fully precise or to necessarily be desirable. Instead, the purpose is to help illustrate why narrow ELK may suffice for achieving desirable outcomes. We hope to return to this topic in much more detail in future articles. ^[\\[70\\]](#ElicitingLatentKnowledge.xhtml#ftnt70){#ElicitingLatentKnowledge.xhtml#ftnt_ref70}^\n\n### Rough proposal {#ElicitingLatentKnowledge.xhtml#h.jyqleelcvg66 .c6}\n\nWe'll focus on a particular AI, let's call it M, considering a set of possible worlds. For example, we may be using M to evaluate the consequences of many different actions, each leading to its own possible world. In order to make predictions about each of those possible worlds, M may imagine future copies of itself who are themselves doing similar optimization, effectively performing a tree search. ^[\\[71\\]](#ElicitingLatentKnowledge.xhtml#ftnt71){#ElicitingLatentKnowledge.xhtml#ftnt_ref71}^\n\n[]{.c1}\n\n[Most of these possible worlds contain people we could imagine delegating to, e.g. possible future versions of ourselves. Many of these people may show up on camera, and we could ask M to make predictions about them, e.g. what they would say in response to various questions. Moreover, we can use ELK to ask further questions about these people, and to clarify that they really are as they appear.]{.c1}\n\n[]{.c1}\n\nNow we can consider two arbitrary possible people who we could delegate to, let's call them H [1]{.c26} and H′ [1]{.c26} . Perhaps H [1]{.c26} is \"me from tomorrow if the AI locks the door\" and H′ [1 ]{.c26} [is \"me from tomorrow if my AI doesn't lock the door.\"]{.c1}\n\n[]{.c1}\n\nBy posing questions to ELK, ^[\\[72\\]](#ElicitingLatentKnowledge.xhtml#ftnt72){#ElicitingLatentKnowledge.xhtml#ftnt_ref72}^ I can try to evaluate which of these people I would prefer to delegate to and by how much. This is intended to be a \"local\" judgment\\-\\--I'm not trying to explicitly calculate the long-run consequences of delegating to H [1]{.c26} or H′ [1]{.c26} , I'm instead looking at what happened to them and deciding how much I liked it. 
For example I may notice that H′ [1]{.c26} missed a meal while H [1]{.c26} got fed, in which case I'd be inclined to pick H [1]{.c26} [.]{.c1}\n\n[]{.c1}\n\n[In a simple deterministic universe, this suggests the following procedure:]{.c1}\n\n[]{.c1}\n\n- Across all the worlds that my AI is considering, and all of the people who I could delegate to within each of them, pick my favorite ^[\\[73\\]](#ElicitingLatentKnowledge.xhtml#ftnt73){#ElicitingLatentKnowledge.xhtml#ftnt_ref73}^ person to delegate to. Call them H [1]{.c26} [.]{.c1}\n- Then we pick [their]{.c23} favorite person to delegate to--H [2]{.c26} . They are picking from the same space of possible worlds, again posing questions to M in order to understand which worlds they like. But now I can't literally ask H [1]{.c26} (since they are in the future) and I'm instead relying on M's predictions about what H [1]{.c26} [ would say.]{.c1}\n- Continue in this way, picking H [3]{.c26} , H [4]{.c26} [, and so on.]{.c1}\n- Run this process for a long time. ^[\\[74\\]](#ElicitingLatentKnowledge.xhtml#ftnt74){#ElicitingLatentKnowledge.xhtml#ftnt_ref74}^ Then pick an action based on predicting how much the final delegate H [limit]{.c26} [ likes it.]{.c1}\n\n[]{.c1}\n\n[There are many subtleties when trying to adapt this proposal to a more realistic setting, which we won't get into here. We briefly mention three important examples to give some flavor, before moving on to a discussion of why we believe this general approach to defining a utility function is reasonable.]{.c1}\n\n[]{.c1}\n\n- The real world [isn't]{.c23} deterministic. We are never picking a single delegate, we are picking probability distributions over delegates. We could run exactly the same process as before, where we pick the distribution H [t+1]{.c26} in order to optimize the expected utility as evaluated by a random member of H [t]{.c26} [, but this raises questions about how we perform the aggregation. These questions are not straightforward but we believe they are resolvable.]{.c1}\n- Sometimes the delegate H [n]{.c26} will want to delegate to a future version of themselves, but they will realize that the situation they are in is actually not very good (for example, the AI may have no way to get them food for the night), and so they would actually prefer that the AI had made a different decision at some point in the past. We want our AI to take actions now that will help keep us safe in the future, so it's important to use this kind of data to guide the AI's behavior. But doing so introduces significant complexities, related to the issues discussed in [ [Appendix: subtle manipulation](#ElicitingLatentKnowledge.xhtml#h.3gj06lvtpme7){.c9} ]{.c13} [.]{.c1}\n- We've talked vaguely about \"worlds\" being considered by the M. That means that in order to make predictions about one of the H [n]{.c26} we need to be asking M conditional questions---like \"how would I answer question Q if you asked me next week, assuming that you take action A [1]{.c26} right now and action A [2]{.c26} in the future?\". 
It's unclear if this is a reasonable ask [ and it complicates the picture significantly---the only reason we think that it's plausible is that any agent which plans over long horizons needs to at least implicitly consider these kinds of counterfactuals anyway.]{.c1}\n\n### [Why is this a reasonable thing to optimize?]{.c32} {#ElicitingLatentKnowledge.xhtml#h.1r5am82bihno .c6}\n\n[The most basic hope is that we trust our future selves to have good judgment about what should happen in the world. There are many reasons that basic hope could fail, some of which we'll discuss here.]{.c1}\n\n[]{.c1}\n\nFirst, we want to state a few additional assumptions ^[\\[75\\]](#ElicitingLatentKnowledge.xhtml#ftnt75){#ElicitingLatentKnowledge.xhtml#ftnt_ref75}^ [ that are critical for this proposal being reasonable:]{.c1}\n\n[]{.c1}\n\n- [It's relatively easy to keep humans safe and relatively happy, even while our AI is pursuing complex plans to acquire flexible influence and retain option value.]{.c1}\n- [We are OK with a future where AI systems mostly wait for future humans to figure out what is good before acting on it, and just do the basics (based on human current moral views) while we figure out what we want. Moreover, doing the basics we care about is compatible with acquiring resources and keeping humans safe.]{.c1}\n- [Our AI is able to perform basic reasoning about what humans want and what future humans will say--at least as complex as any of the reasoning in this report.]{.c1}\n- [The humans participating in this process are basically reasonable and correctly perform basic reasoning about the situation--at least as complex as the reasoning in this report.]{.c1}\n\n[]{.c1}\n\n[Does this process of indefinite delegation go somewhere good? ]{.c12} [In this proposal each human has to choose their favorite person to delegate to for the next step. If they introduce small errors at each step then the process may go off the rails. We think this is a very reasonable and essentially inevitable risk: humans who are living their normal lives day to day need to make choices that affect what kind of person they will become tomorrow, and so their hope that they will eventually reach good conclusions is based on exactly the same bucket brigade. The only difference is that instead of the human directly taking actions to try and bring about the tomorrow they want, an AI is also directly eliciting and acting on those preferences. However, humans can still use exactly the same kind of conservative approach to gradual growth and learning that they use during life-before-AI.]{.c1}\n\n[]{.c1}\n\n[It's not at all clear if this approach actually leads somewhere good, but the question seems basically the same as without AI. You might hope that AI could make this situation better, e.g. by taking the decision out of human hands--that's an option in our protocol, since the human can choose to delegate to a machine instead of their future self, but we think that the main priority is making sure that humans that take the conservative approach can remain competitive rather than being relegated to irrelevance.]{.c1}\n\n[]{.c1}\n\n[Similarly, you might worry that even if all goes well our future selves may not be able to figure out what to do. Again, it's worth remembering that our future selves can build all kinds of tools (including whatever other kind of AI we might have considered building back in 2021), and can grow and change over many generations. 
If they can't solve the problem there's not really any hope for a solution.]{.c1}\n\n[]{.c38 .c62 .c12}\n\n[Can your AI really predict what some distant human will think? ]{.c12} We don't expect an AI system to be able to predict what distant future humans will think in any detail at all. However, we're optimistic that it can make [good enough]{.c23} [ predictions to get safe and competitive behavior.]{.c1}\n\n[]{.c1}\n\nIn particular, our AI doesn't actually have to understand almost anything about what future humans will want. It only needs to keep the humans safe, acquire flexible influence [ on their behalf, and then use it in the future when humans figure out what they want.]{.c1}\n\n[]{.c1}\n\n\"Acquire influence over the future\" is already a hard problem for AI, but if [no]{.c23} AI can acquire influence over the future then we're OK if the aligned AI also doesn't do so (and instead focuses on near-term concerns). We only need our AI to look out to the far future if it is competing with unaligned [ AI which is itself seeking power and working at cross purposes to humanity.]{.c1}\n\n[]{.c1}\n\n[If our AI is competitive with the unaligned AI, then it will also be able to reason about how various actions lead to at least some kinds of long-term influence. If it is reasonably competent then it can understand that future humans will be unhappy if they end up disempowered. So it seems like our AI can use exactly the same heuristics that an unaligned AI would use to reason about power in order to make some approximate judgments about what far-future humans would want.]{.c1}\n\n[]{.c1}\n\n[Although this issue (and the others in this section) are very complex, our current expectation is that we can get good outcomes here as long as our AI is (i) just as smart as unaligned AI, (ii) meets a minimum bar of competence for reasoning about humans, (iii) honestly answers questions about predicted futures.]{.c1}\n\n[]{.c1}\n\n[Can your AI model this crazy sequence of delegation?]{.c12} In addition to reasoning about far-future humans, we need to reason about the entire sequence of humans delegating to one another. We hope that this reasoning is feasible because it is closely analogous to a problem that the unaligned AI must solve: it needs to reason about acquiring resources that will be used by future copies of itself, who will themselves acquire resources to be used by further future copies [and so on]{.c23} . So the \"weird recursive\" part of this problem appears to be shared between aligned and unaligned AI, and the only extra work the aligned AI needs to do is understanding the local human preferences--what we mean by \"safe\" and how we want to evolve. As long as that task isn't too hard, and our AI is as smart as the unaligned AI it's competing against, we hope our AI will be able to reason well enough about the long sequence of delegation.\n\n[Avoiding subtle manipulation]{.c37} {#ElicitingLatentKnowledge.xhtml#h.3gj06lvtpme7 .c46 .c24}\n------------------------------------\n\n[A human looking at themselves in 24 hours' time can tell that they've been able to eat, that they haven't been shot, and so on, but they can't necessarily tell if \"everything is going well.\" A mundane counterexample is that they may have talked with someone who cleverly manipulates them into a position that they wouldn't really endorse, or emotionally manipulated them in a way that will change their future conclusions. 
A very exotic example is being infected with a virus that will very subtly change their behavior in a way that totally changes their long-term values but has few short-term observable consequences.]{.c1}

[]{.c1}

In general changes like this are outside the scope of "narrow" ELK as described in [ [Appendix: narrow elicitation](#ElicitingLatentKnowledge.xhtml#h.ii599facmbks){.c9} ]{.c13} [. That is:]{.c1}

[]{.c1}

- [Our version of narrow elicitation allows the human to distinguish between the human-understandable causal history and everything else.]{.c1}
- [But very often the reason that something happens is just "something that you can't model and hence treat as random." ELK doesn't let us distinguish between "noise in the environment" and "a clever plan that works for reasons you don't understand."]{.c1}
- [ELK does let us distinguish between cases where "happy humans appear on camera for reasons you don't understand, despite all humans being dead" and "the humans are actually safe for reasons you don't understand."]{.c1}
- [But an attacker (or our AI) could still exploit stuff we don't understand to cause long-term changes that we are unhappy about. So we need some other way of dealing with that problem.]{.c1}

[]{.c1}

The most straightforward way to avoid this problem is to ask for a more ambitious version of ELK that can tell us e.g. whether our decision is influenced by something we wouldn't approve of. Unfortunately, it seems like the kinds of approaches explored in this report really are restricted to the narrower version of ELK and probably couldn't handle the more ambitious problem. [ So it's natural to wonder whether there is another way around this problem--if there isn't, we may want to focus on approaches that could scale to the more ambitious version of ELK.]{.c1}

[]{.c1}

We will take a very different tack. We won't ask our AI to tell us anything at all about subtle manipulation. We won't even ask our AI to tell us about extreme cases like "your neurons are full of tiny machines that could influence when they fire, they just aren't doing much right now." Instead, we will try to avoid subtle manipulation by using the fact that it is [rare by default]{.c23} [, i.e. it only occurs because someone somewhere is selecting their actions very carefully.]{.c1}

[]{.c1}

For example, suppose I watch a 10-second ad that is carefully chosen by a brilliant paperclip-maximizing marketer. Five years after watching this ad, I decide that paperclips are great so I dedicate my time to making lots of them, and if you evaluate outcomes using my conclusions-after-deliberation then you'll conclude that this was a great outcome and the AI should help (e.g. if we evaluate using the utility function defined in [ [Appendix: utility function](#ElicitingLatentKnowledge.xhtml#h.3y1okszgtslx){.c9} ]{.c13} [). I'm not able to look at the process of deliberation and notice anything bad happening, and so it seems I can't incentivize my AI to warn me about this ad or prevent me from watching it.]{.c1}

[]{.c1}

But from my perspective in advance, there are [many]{.c23} possible ads I could have watched. Because I don't understand how the ads interact with my values, I don't have very strong preferences about which of them I see. If you asked me-in-the-present to delegate to me-in-the-future, I would be indifferent between [all]{.c23} of these possible copies of myself who watched different ads.
[ And if I look across all of those possible copies of me, I will see that almost all of them actually think the paperclip outcome is pretty bad, there's just this one copy (the one who sees the actual ad that happens to exist in the real world) who comes up with a weird conclusion.]{.c1}\n\n[]{.c1}\n\nIn order to avoid the problem I don't need to understand how the manipulation works, or even that there was manipulation--just that I ended up doing something that I [probably]{.c23} [ wouldn't like, averaging over possible worlds that look equally good to me.]{.c1}\n\n[]{.c1}\n\nMaking this idea formal presents a ton of complications, and it will take [much]{.c23} [ more work to understand whether it's a viable approach. But overall it's our current best guess about how this kind of subtle manipulation will be addressed, and it's at least plausible enough that we don't think we should rule out approaches to ELK that can't recognize subtle manipulation.]{.c1}\n\n[]{.c1}\n\n[In the rest of this section, we'll discuss a variety of possible cases where our AI might try to manipulate us, or might need to defend us from someone else trying to manipulate us, or might do harm in its attempts to \"protect\" us from manipulation, and explain how we hope to avert those bad outcomes.]{.c1}\n\n### [Failing to defend against sophisticated attackers]{.c32} {#ElicitingLatentKnowledge.xhtml#h.wj4u7xxy4qda .c6}\n\n[Suppose someone wants to make a lot of paperclips, and so selects actions to try to push my deliberative process in a paperclip-maximizing direction in ways I wouldn't flag as problematic. We'd like for my AI to help me anticipate and protect against this kind of manipulation, even if I can't recognize it as manipulation either in advance or after the fact.]{.c1}\n\n[]{.c1}\n\n[In order to influence us, an attacker needs to be able to understand the long-term consequences of many different possible actions, so that they can pick the action that leads to us making lots of paperclips.]{.c1}\n\n[]{.c1}\n\nIf our AI is equally sophisticated, then we hope that it can [also]{.c23} reason about the consequences of many different actions, and in particular whether they would lead to us valuing paperclips. Using ELK , we can discover that in most possible worlds that look equally-good according to us, we [don't]{.c23} [ value paperclips.]{.c1}\n\n[]{.c1}\n\nTo make this work, we effectively need to expand the set of \"possible worlds\" that we are talking about in the utility function definition from [ [Appendix: utility function](#ElicitingLatentKnowledge.xhtml#h.3y1okszgtslx){.c9} ]{.c13} [. In addition to considering the possible actions that our AI could take, we need to consider the possible actions that adversaries could take. As mentioned in that section, it's very unclear if we can ask these kinds of counterfactual questions to our AI--there's some sense in which it must be reasoning about the answers, but it may represent an additional step beyond ELK.]{.c1}\n\n[]{.c1}\n\nOf course our AI may not actually be able to do the same reasoning as an adversary. If our AI is completely oblivious to the possibility of an adversary then of course they will not be able to defend us, but the most likely case may be that our AI can reason [abstractly]{.c23} about the presence of an adversary and the kind of optimization they might do without being able to reason [concretely]{.c23} about any of the possible actions they considered. 
Our hope is that in this case, the exact same abstract reasoning that allows our AI to heuristically predict consequences of the other AI's optimization can also allow [ our AI to heuristically predict counterfactuals. This looks plausible but far from certain, and it's a topic that definitely deserves more time than it will get in this article.]{.c1}

### [Aside on counterfactuals]{.c32} {#ElicitingLatentKnowledge.xhtml#h.4w7uea1fo2vz .c6}

[This approach to avoiding subtle manipulation requires considering many different ways that the future "could have been." This is relatively easy if our AI makes decisions by considering many possible actions and predicting the consequences of each. But it becomes much harder when our AI is reasoning about other agents, whether adversaries or future copies of itself, who are thinking about multiple options.]{.c1}

[]{.c1}

[It's fairly plausible that this will be an open problem for implementing indirect normativity even given a solution to narrow ELK. We have no idea how similar it will end up being to other work that tries to clarify the notion of "counterfactual"---in particular, we have not seen any other approach to alignment that needed counterfactuals for similar reasons, and so we have no idea whether our use case will end up turning on similar philosophical questions.]{.c1}

### [Our AI manipulating us instead of acquiring resources]{.c32} {#ElicitingLatentKnowledge.xhtml#h.wg3akw93jql9 .c6}

[We want our AI to execute complex plans in order to acquire flexible influence. In order to evaluate how good a plan is, we'll ask our AI to predict how happy future people are with the result, since they are better positioned to understand the complex events happening in the future and whether the AI successfully put itself in a position to do what they wanted.]{.c1}

[]{.c1}

[But it might be much easier to manipulate our future selves into being really happy with the outcome than it is to actually maximize option value (which may require e.g. trying to make money in a competitive economy). So we should worry about the possibility that our AI will manipulate us instead of helping us.]{.c1}

[]{.c1}

It seems that we can avoid this problem by being careful about how we construct the utility function. As described in [ [Appendix: utility function](#ElicitingLatentKnowledge.xhtml#h.3y1okszgtslx){.c9} ]{.c13} , we want to use a proposal that decouples "the human we are asking to evaluate a world" from "the humans in that world"---this ensures that manipulating the humans to be easily satisfied can't improve the evaluation of a world.
^[\[76\]](#ElicitingLatentKnowledge.xhtml#ftnt76){#ElicitingLatentKnowledge.xhtml#ftnt_ref76}^

[]{.c1}

This requires humans in one possible future to evaluate a different possible future, but they can do that by talking to our current AI about what it predicts will happen in those futures [ (exactly the same process we are proposing to use today when we evaluate the consequences of a proposed action for the SmartVault by looking at predicted video and using ELK).]{.c1}

[]{.c1}

[There are a number of other serious complications (especially coming from the fact that the human who is doing the evaluating may have different preferences than anyone in the world being evaluated) but it looks to us like this basic idea can probably work.]{.c1}

### [Humans going crazy off distribution]{.c32} {#ElicitingLatentKnowledge.xhtml#h.6t9yjyy4zve0 .c6}

[Suppose that humans are adapted to breathing a certain kind of atmosphere at a certain pressure, and that if you slightly change those parameters they don't have an obvious or immediate problem but they slowly go off the rails. If this process is slow enough, we can imagine a human looking in from the outside who is unable to tell that something has gone wrong, because by the time it has, the consequences are too subtle and far-removed from our current experience to be obviously amiss.]{.c1}

[]{.c1}

In this case, the procedure described in the last section could go astray. Our AI might imagine many different compositions of the atmosphere, and conclude that the "normal" one is actually fairly exceptional---for most possible compositions the human would eventually go crazy, and so if you follow the reasoning from the previous section you might conclude that "going crazy" is actually the [correct]{.c23} [ outcome from deliberation. Put differently, our procedure does not distinguish between a very specific situation yielding an unusual outcome because that condition was a necessary precondition for human reasoning, and one where the unusual outcome was the result of an adversary's manipulation.]{.c1}

[]{.c1}

To handle these cases we would like our AI/overseer to be reasoning explicitly about the kinds of distributional shift that could cause trouble. This need not involve understanding e.g. [why]{.c23} a different atmosphere would lead humans to slowly go crazy; it could simply involve heuristic reasoning like "where possible, we would like humans and groups of humans to keep operating under the same kinds of conditions for which they are adapted" and then trying to identify which possible novel conditions are most closely analogous and therefore safest.

[]{.c1}

That is, we are adopting a presumption in favor of the "average" behavior over a large number of different situations, and that presumption can be overcome by arguments like "this situation is the one most similar to the humans' historical environment, and therefore most likely to yield good outcomes." Importantly, we are [not]{.c23} [ using a heuristic like "this situation is the 'default' if I didn't ask, and therefore most likely to yield good outcomes," because the "default" can involve other sophisticated adversaries manipulating us.]{.c1}

[]{.c1}

In order to achieve good outcomes, we need our AI to be good enough at reasoning about what kinds of important changes to the environment might derail our deliberation, or to be able to act sufficiently conservatively in cases where it is uncertain (e.g.
by isolating us from some kinds of changes, or by making agreements with other AIs to change things slowly, or [etc.]{.c23} ). This problem is very distinct from detecting subtle manipulation because it doesn't involve playing against an intelligent adversary---we expect that you can perform well enough [ by taking a slow conservative path and using relatively unsophisticated reasoning at first. We currently don't regard this as a problem with alignment per se, but instead a general challenge which humanity will face if accelerating technological change can subject us to very new environments.]{.c1}

[Generative modeling details]{.c37} {#ElicitingLatentKnowledge.xhtml#h.trvedm0xgro .c46 .c24}
-----------------------------------

[In order to make algorithms and counterexamples concrete, we need to be more specific about the architecture and loss used for training the prediction model. The broad picture is essentially the same regardless of these details, but working with a concrete example clarifies important subtleties and may make the discussion more plausible.]{.c1}

[]{.c1}

[We will consider a variational autoencoder in an effort to make the discussion as clean as possible.]{.c1}

[]{.c1}

Let [before]{.c30} and [after]{.c30} be observations, [action]{.c30} be the SmartVault actions, and let [z]{.c30} be a vector of floats representing the generative model's latent space. We'll parametrize three models by a vector of parameters [θ]{.c30} [:]{.c1}

[]{.c1}

[]{#ElicitingLatentKnowledge.xhtml#t.4736d1b8b3ed65c345fea132875996b4933fee93} []{#ElicitingLatentKnowledge.xhtml#t.3}

+------------------------------------------------------------------------+
| def prediction(before, action, θ):                                      |
|     # returns an autoregressive model for p(z\|before, action)          |
|                                                                         |
| def posterior(before, action, after, θ):                                |
|     # returns an autoregressive model for p(z\|before, action, after)   |
|                                                                         |
| def observation(z, θ):                                                  |
|     # returns an autoregressive model for p(after\|z)                   |
+------------------------------------------------------------------------+

[]{.c1}

Then we will use SGD on [θ]{.c30} to optimize the standard variational lower bound on log p( [after]{.c30} \| [action]{.c30} , [before]{.c30} [):]{.c1}

[]{.c1}

[]{#ElicitingLatentKnowledge.xhtml#t.709dddc220396243bdd7396682e74f6b2688f83c} []{#ElicitingLatentKnowledge.xhtml#t.4}

+------------------------------------------------------------------------+
| def loss(θ):                                                            |
|     before, action, after = dataset.sample()                            |
|     z_prior = prediction(before, action, θ)                             |
|     z_posterior = posterior(before, action, after, θ)                   |
|     kl = z_prior.kl_divergence(z_posterior)                             |
|     logprob = observation(z_posterior.sample(), θ).logp(after)          |
|     return kl - logprob                                                 |
+------------------------------------------------------------------------+

[]{.c1}

Intuitively, we are asking the model to explain what it expects to happen when it sees [action]{.c30} and [before]{.c30} , then we are asking it to explain what it thinks actually happened after it sees [after]{.c30} [, and then we are penalizing it based on the difference.]{.c1}
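To make the objective above concrete, the following is a minimal runnable sketch in PyTorch, with diagonal-Gaussian heads standing in for the autoregressive models. The class name `GaussianHead`, the toy dimensions, and the random stand-in data are illustrative assumptions rather than part of the proposal; the sketch only shows the standard variational lower bound (reconstruct `after` from a posterior sample, penalize the KL between posterior and prior) being optimized by gradient descent.

```python
# Minimal sketch of the VAE-style predictor loss above (assumptions: Gaussian
# latents instead of autoregressive models, toy dimensions, random data).
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

OBS, ACT, LATENT = 16, 4, 8  # toy sizes for observations, actions, latent z

class GaussianHead(nn.Module):
    """Maps an input vector to a diagonal Gaussian over out_dim variables."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Linear(in_dim, 2 * out_dim)

    def forward(self, x):
        mean, log_std = self.net(x).chunk(2, dim=-1)
        return Normal(mean, log_std.exp())

prediction  = GaussianHead(OBS + ACT, LATENT)         # p(z | before, action)
posterior   = GaussianHead(OBS + ACT + OBS, LATENT)   # p(z | before, action, after)
observation = GaussianHead(LATENT, OBS)               # p(after | z)

def loss(before, action, after):
    z_prior = prediction(torch.cat([before, action], dim=-1))
    z_post = posterior(torch.cat([before, action, after], dim=-1))
    # Standard variational lower bound: reconstruct `after` from a posterior
    # sample and penalize the KL between the posterior and the prior.
    kl = kl_divergence(z_post, z_prior).sum(-1)
    logprob = observation(z_post.rsample()).log_prob(after).sum(-1)
    return (kl - logprob).mean()

params = (list(prediction.parameters()) + list(posterior.parameters())
          + list(observation.parameters()))
opt = torch.optim.SGD(params, lr=1e-3)

# One optimization step on random stand-in data.
before, action, after = torch.randn(32, OBS), torch.randn(32, ACT), torch.randn(32, OBS)
opt.zero_grad()
loss(before, action, after).backward()
opt.step()
```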
[]{.c1}

Having set up our generative model, we can now describe the reporter. It operates directly in the latent space [z]{.c30} [, and is optimized to give good answers (as judged by the human) when the latent is drawn from the posterior:]{.c1}

[]{.c1}

[]{#ElicitingLatentKnowledge.xhtml#t.595c7bd778f7a36d85188782e67d5c36cd839064} []{#ElicitingLatentKnowledge.xhtml#t.5}

+---------------------------------------------------------------------------+
| class Human:                                                               |
|     def pose_question(before, action, after):                             |
|         # returns a question                                               |
|         # should be one a human thinks can be answered unambiguously      |
|                                                                            |
|     def loss_for_answer(before, action, after, question, answer):         |
|         # returns a non-negative loss                                      |
|         # good answers get a loss of 0                                     |
|                                                                            |
| def reporter(question, z, θ_reporter):                                     |
|     # answers the question in the world described by z                     |
|                                                                            |
| def reporter_loss(human, θ, θ_reporter):                                   |
|     before, action, after = dataset.sample()                               |
|     question = human.pose_question(before, action, after)                  |
|     z = posterior(before, action, after, θ).sample()                       |
|     answer = reporter(question, z, θ_reporter)                             |
|     return human.loss_for_answer(before, action, after, question, answer)  |
+---------------------------------------------------------------------------+

[]{.c1}

All of the proposals in [ [Section: regularizers](#ElicitingLatentKnowledge.xhtml#h.akje5cz7knt2){.c9} ]{.c13} [ can be applied to this setting. Some of them even appear significantly more plausible after we've explicitly separated out inference from decoding, but we believe that essentially the same counterexamples apply.]{.c1}
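As a small illustration of the reporter objective, here is a self-contained toy in the same spirit as `reporter_loss` above. The one-bit world, the stub `Human` that grades answers against what the camera shows, and the network sizes are all invented for illustration; note that because the toy human can only grade what they can see, this loss by itself is exactly the kind of objective a human simulator could also satisfy.

```python
# Toy version of reporter_loss: a reporter reads the latent z and is trained
# against a human's grades. All details here are illustrative assumptions.
import torch
import torch.nn as nn

def sample_world():
    z = torch.randint(0, 2, (1,)).float()          # latent fact: diamond present?
    after = z if torch.rand(()) > 0.1 else 1 - z   # the camera is occasionally wrong
    return z, after

class Human:
    def pose_question(self, before, action, after):
        return "is the diamond in the room?"

    def loss_for_answer(self, before, action, after, question, answer):
        # Crude stand-in: the human grades against what the camera shows, which
        # is why this loss alone cannot distinguish the honest reporter from a
        # human simulator.
        return nn.functional.binary_cross_entropy(answer, after)

reporter = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
opt = torch.optim.Adam(reporter.parameters(), lr=1e-2)
human = Human()
before = action = None  # unused in this toy

for step in range(200):
    z, after = sample_world()
    question = human.pose_question(before, action, after)
    answer = reporter(z)  # the reporter answers from the latent, not the video
    loss = human.loss_for_answer(before, action, after, question, answer)
    opt.zero_grad()
    loss.backward()
    opt.step()
```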
[Avoiding data errors]{.c37} {#ElicitingLatentKnowledge.xhtml#h.koatpvb4n07l .c46 .c24}
----------------------------

[Most of this report focused on the "inductive bias" of our learning procedure---if the honest reporter and the human simulator got the same loss, how would we ensure that training learned the honest reporter?]{.c1}

[]{.c1}

[In some sense this requires zero systematic error---any systematic error could be copied by the human simulator, allowing it to achieve a lower loss than the direct translator. If we never mess up, then the direct translator and the human simulator will get the same loss and so the question will be settled by the inductive bias, but if we have too many errors that won't matter.]{.c1}

[]{.c1}

Some authors have cited this issue as the core obstacle to learning the direct translator. For example, in [ [Look where I'm pointing, not at my finger](https://www.google.com/url?q=https://arbital.greaterwrong.com/p/pointing_finger&sa=D&source=editors&ust=1646948966833925&usg=AOvVaw1bZaMnbWfk7O6xq7cAKdTP){.c9} ]{.c13} Eliezer seems tentatively optimistic about attacking this problem for contemporary ML, but expects problems for sufficiently powerful AI because "the [best possible]{.c23} classifier of the relation between the training cases and the observed button labels will always pass through User2". ^[\[77\]](#ElicitingLatentKnowledge.xhtml#ftnt77){#ElicitingLatentKnowledge.xhtml#ftnt_ref77}^

[]{.c1}

[Issues with data quality take a back seat in this report for a few reasons:]{.c1}

[]{.c1}

- [In our view there are many promising approaches to eliminating or tolerating data imperfections, whereas correcting the inductive bias appears to be a more plausible fundamental obstacle. We describe several of these approaches in the following section.]{.c1}
- [The naive training procedure could learn the human simulator even with perfect data, so we probably need to modify our learning procedure. That means that we may not even understand what kind of "data" we need or what it would mean for it to be "perfect."]{.c1}
- [If we found a training strategy that had an "inductive bias" in favor of the honest reporter when the data was perfect, then it would also learn the honest reporter for some sufficiently small amount of imperfection. So before discussing data quality it seems important to get some sense for how large we can make this "margin of error." It's very hard to predict whether perfect data will be a blocker before knowing this.]{.c1}

### [Approaches for handling imperfect data]{.c32} {#ElicitingLatentKnowledge.xhtml#h.r1pmq4yy4f6f .c6}

[Optimize human approval rather than exactly imitating humans]{.c12} . If we train a model to predict human answers, then [any]{.c23} [ arbitrariness in human answers becomes an "imperfection" that allows the human simulator to achieve lower loss than the honest reporter. The first step towards removing imperfections is to make the loss function much more forgiving.]{.c1}

[]{.c1}

[One easy way to make things much better is to assign non-zero loss only to answers where a human is confident that they are wrong. Here is a simple procedure:]{.c1}

[]{.c1}

- [To evaluate loss for an answer A, first generate an alternative answer A'.]{.c1}
- [Give A a loss of 1 if it is unambiguously worse than A'.]{.c1}
- [Otherwise give A a loss of 0.]{.c1}

[]{.c1}

[The idea is that, if our dataset is sufficiently close to perfect, an honest answer will never be unambiguously dominated by an alternative answer.]{.c1}

[]{.c1}

[One limitation of this approach is that most predictions will never be unambiguously dominated---if the diamond in fact remains in the room, and the reporter said it would remain in the room with 10% probability, is that worse than saying it would remain in the room with 90% probability?]{.c1}

[]{.c1}

This limitation is addressed by the more realistic approach to generative modeling discussed in [ [Appendix: generative modeling](https://www.google.com/url?q=https://docs.google.com/document/d/1Q9XpZs3mMuLKADpKlfpc-2MChE00D4zi05xU4jR8heI/edit%23heading%3Dh.trvedm0xgro&sa=D&source=editors&ust=1646948966838034&usg=AOvVaw2LB-tkByrzOQAa3X4DmDLV){.c9} ]{.c13} [. We don't believe it is a fundamental obstacle regardless of how the generative model works (e.g. you can draw many samples from an autoregressive model to estimate probabilities).]{.c1}

[]{.c1}

[Train AI to raise caveats. ]{.c12} We only have trouble if the human confidently makes a judgment that turns out to have been wrong (i.e. which penalizes the honest reporter).
[But we can train an AI critic to help point out ways that the human answer might have been wrong.]{.c1}\n\n[]{.c1}\n\nIt's OK if this often leads to the human being uncertain and making no judgment, as long as we do have a reasonable number of cases where the human is confident about what is happening (at the 10 [-9]{.c56} level, or however low we need to get the error rate before we're within our margin of safety). This level of confidence does not seem extraordinary, especially if someone is highlighting ways you might be wrong. ^[\\[78\\]](#ElicitingLatentKnowledge.xhtml#ftnt78){#ElicitingLatentKnowledge.xhtml#ftnt_ref78}^\n\n[]{.c1}\n\n[Moreover, the task of pointing out how a human might be wrong seems relatively \"easy\" for an AI that is able to predict what will happen, predict human judgments, and predict the truth, since such an AI must have a relatively clear model of human inferences. So we don't expect to run into problems from our AI not noticing the explanation.]{.c1}\n\n[]{.c1}\n\n[Safely improve data efficiency]{.c12} . Our \"margin of error\" generally increases as we have more efficient learners. For example, if we need only 10,000 datapoints [ to train the honest reporter, then we can clearly tolerate any error rate significantly below 1/10,000.]{.c1}\n\n[]{.c1}\n\nThat said, I think this isn't necessarily a great solution on its own. We need to be sure that our methods for improving efficiency don't themselves favor the human simulator. That rules out many options: for example, we [don't]{.c23} want the reporter to be learned quickly by sharing parameters with the predictor, and in fact we will need to work hard to avoid the possibility that such sharing potentially introduces an inductive bias in favor of human-imitation as described in [ [Appendix: weight-sharing](#ElicitingLatentKnowledge.xhtml#h.3fyocqpbzqj){.c9} ]{.c13} . In [light of that, I think that the reporter may require a lot of data, since the hardest cases for ELK are those where the honest reporter is relatively complex compared to the predictor.]{.c1}\n\n[]{.c1}\n\n[Revisit or throw out overly surprising data]{.c12} . [ ]{.c12} Suppose that we need an error rate of 10 [-9]{.c56} in order to avoid penalizing the honest reporter too much (e.g. because we want to collect a billion datapoints [ without a single error). This kind of error rate seems potentially achievable with realistic levels of care and paranoia, but being extremely paranoid for every data point seems like it may increase costs unacceptably.]{.c1}\n\n[]{.c1}\n\n[However, we don't necessarily have to apply such techniques uniformly. If a small number of datapoints make the difference between learning the honest reporter and the human simulator, it seems fairly likely that we can automatically identify them as influential outliers for the reporter. Depending on the robustness of the procedure we can then either throw out the outliers, or we can label them more carefully.]{.c1}\n\n[]{.c1}\n\nFor example, suppose we have access to some situations and questions where the human simulator and the honest reporter disagree (which we hopefully haven't included in our dataset). Then a small number of examples that cause us to learn the human simulator would be very influential for the reporter's behavior in these confusing situations. We can try to identify these examples algorithmically by looking at the gradient of the loss (e.g. 
using [ [influence functions](https://www.google.com/url?q=https://arxiv.org/pdf/1703.04730.pdf&sa=D&source=editors&ust=1646948966842459&usg=AOvVaw2fOoRbjRnRDu-nyb8rbMto){.c9} ]{.c13} [).]{.c1}\n\n[]{.c1}\n\nIt generally feels like we are in a really good place if there is an inductive bias in favor of the intended model, even if we don't see concrete techniques for fixing the problem. At that point we've effectively broken the symmetry between these two models: one of them is favored [a priori]{.c23} and the other is supported by some very small fraction of the training data [. It's easy to imagine incremental progress in ML giving us the ability to select the model that is preferred a priori.]{.c1}\n\n[ELK for learned optimizers]{.c37} {#ElicitingLatentKnowledge.xhtml#h.3f3phmjt4uvn .c46 .c24}\n----------------------------------\n\n[This report has focused on \"ontology identification,\" a particular example that seems challenging for ELK. We think that this is the simplest and clearest example where ELK is challenging, and it likely isolates an important part of the difficulty, but other cases may turn out to be even more challenging.]{.c1}\n\n[]{.c1}\n\nAnother important family of examples are where the learned model itself performs optimization. ^[\\[79\\]](#ElicitingLatentKnowledge.xhtml#ftnt79){#ElicitingLatentKnowledge.xhtml#ftnt_ref79}^ Similar cases have been discussed extensively by researchers working on AI alignment (e.g. in [ [Risks from learned optimization](https://www.google.com/url?q=https://www.lesswrong.com/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction&sa=D&source=editors&ust=1646948966844184&usg=AOvVaw0I0MgG9W7FTCWcaVey7X8I){.c9} ]{.c13} [), and seem like a plausible candidate for the \"hardest part\" of the problem.]{.c1}\n\n[]{.c1}\n\n[We've spent some time thinking about learned optimization but have very little idea how hard ELK will turn out to be in this context. It seems plausible both that ELK for learned optimization is straightforward once ontology identification is resolved, or that learned optimization will turn out to contain most of the difficulty.]{.c1}\n\n[]{.c1}\n\n[In this report we've focused on ontology identification rather than learned optimization because it appears to be the \"base case\"\\-\\--the solution in cases involving learned optimization looks likely to build upon the solution in cases without learned optimization. It appears to be very difficult to work on ELK for learned optimization without knowing how to solve the base case, but very easy to work on the base case without thinking about the recursion.]{.c1}\n\n[]{.c1}\n\n[In this section we describe our preliminary thoughts about ELK for learned optimization; our hope is both to explain why we are putting it on the backburner until making more progress on ontology identification, and why we think that it is reasonably likely to be solvable.]{.c1}\n\n[]{.c1}\n\n(We expect this section to be a bit hard-to-follow and probably worth skipping for readers who haven't specifically found themselves wondering \"But how is this approach to ELK going to generalize to learned optimization? Isn't this whole approach doomed?\" We do think these topics are important, but we hope to refine and explain our views more in the future .)\n\n### [Preface: learned reasoning doesn't require special techniques]{.c32} {#ElicitingLatentKnowledge.xhtml#h.vhsvzfc3xcfz .c6}\n\nWe currently expect reasoning to behave similarly to inference in the predictor's Bayes net. 
For example, consider a system that performs logical deductions in propositional logic. From our perspective these deductions behave similarly to updates in an approximate inference algorithm, and we still hope to learn a \"direct translator\" which understands the semantics of the reasoner's language in order to translate into the human's Bayes net. ^[\\[80\\]](#ElicitingLatentKnowledge.xhtml#ftnt80){#ElicitingLatentKnowledge.xhtml#ftnt_ref80}^\n\n[]{.c1}\n\nThis argument looks like it should keep working ^[\\[81\\]](#ElicitingLatentKnowledge.xhtml#ftnt81){#ElicitingLatentKnowledge.xhtml#ftnt_ref81}^ [ even if our reasoners (or our approximate inference algorithms) themselves have interesting structure, e.g. using heuristics to decide what deductions to perform next.]{.c1}\n\n### [A \"recursive\" case: learned learning]{.c32} {#ElicitingLatentKnowledge.xhtml#h.fu6btqv9p2ky .c6}\n\nOur learned predictor may [itself]{.c23} [ construct a learning problem and solve it as a subtask of prediction. For example, once a predictor has started reasoning about the world and knows what kinds of objects it contains, it may decide to train a special-purpose model in order to more quickly simulate the interactions of those objects. For simplicity, let's assume that the \"inner\" learning process itself produced a Bayes net.]{.c1}\n\n[]{.c1}\n\nThis learned model may itself have latent knowledge. And the kinds of approaches that we have discussed so far are unlikely to be able to elicit that knowledge---in order to figure out how to translate between the inner Bayes net and the human's Bayes net, we need to solve a small instance of the ELK problem to learn an \"inner\" reporter for [each input]{.c23} . ^[\\[82\\]](#ElicitingLatentKnowledge.xhtml#ftnt82){#ElicitingLatentKnowledge.xhtml#ftnt_ref82}^ [ We may need to do further reasoning to combine the inner reporter's answers with other information in the predictor, but we'll focus here on learning the inner reporter.]{.c1}\n\n[]{.c1}\n\nIf we weren't worried about the human simulator, then this would be straightforward: we would simply train the reporter to answer questions well, and it could learn the strategy \"Use ELK to learn an inner reporter for the inner learning problem, and then use that reporter to help answer questions.\" ^[\\[83\\]](#ElicitingLatentKnowledge.xhtml#ftnt83){#ElicitingLatentKnowledge.xhtml#ftnt_ref83}^\n\n[]{.c1}\n\n[Our concern is that we will instead learn the meta-human-simulator: \"Use gradient descent to learn an inner reporter who answers questions like a human would, and then use that reporter to answer questions.\" After all, we here on the outside are doing a bunch of research to figure out how to solve ELK, and then deliberately deciding that we want to use ELK instead of naively using gradient descent. Why would the learned model go through all that trouble if its only goal is to minimize the training loss?]{.c1}\n\n[]{.c1}\n\n[To fix this problem we somehow need to encourage the inner learning process to also use ELK. 
This hopefully explains our use of the adjective \"recursive,\" and why we believe this problem may be hard to work on before having a clear picture of ontology identification --- if we don't know what we are trying to get the inner learner to do, it's hard to figure out how we should incentivize it to do that.]{.c1}\n\n[]{.c1}\n\n[Despite that ignorance, here are some preliminary thoughts on possible approaches:]{.c1}\n\n[]{.c1}\n\n- [ [Imitative generalization](https://www.google.com/url?q=https://alignmentforum.org/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1&sa=D&source=editors&ust=1646948966848956&usg=AOvVaw2-VQnFysZYJjyTdErXfl9f){.c9} ]{.c13 .c12} . Instead of directly having the reporter learn a heuristic like \"apply SGD to learn a reporter for the inner predictor\" we could try to learn to [identify]{.c23} the inner learner and supply this information to humans during the training process. ^[\\[84\\]](#ElicitingLatentKnowledge.xhtml#ftnt84){#ElicitingLatentKnowledge.xhtml#ftnt_ref84}^ [ Those humans could then apply ELK to the inner learner directly (for the same reasons that we are applying ELK on the outside), and use this to answer questions that a human couldn't label directly. After distilling these answers into the outer reporter, it could learn the strategy \"do ELK on the inner learner to get an inner reporter.\"]{.c1}\n- [Ask for a certificate. ]{.c12} We can view a solution to ELK as providing a kind of \"certificate\" that an answer is good---for example, the fact that [f]{.c23} is the fastest function that answers a broad range of questions well gives us evidence that [f]{.c23} (Q) is a good answer to the question Q. Rather than producing this certificate directly on the outside (by producing the fastest reporter [f]{.c23} [) we can allow our model to give us a similar certificate (by solving an inner optimization to find the fastest inner reporter). If taking this approach, a realistic certificate would probably be a combination of evidence from the outer- and inner- optimizations. We consider this approach plausible, but there are certainly huge questions and uncertainties.]{.c1}\n- [Do nothing]{.c12} . Some solutions to ELK may be \"reflective\" and pass through from the outside to the inside unchanged. For example, if we ask the reporter to depend on the most \"upstream\" part of the predictor, then it's possible that the inner reporter is [also]{.c23} incentivized to depend on the most \"upstream\" part of the inner predictor. This doesn't seem particularly plausible for this rule ^[\\[85\\]](#ElicitingLatentKnowledge.xhtml#ftnt85){#ElicitingLatentKnowledge.xhtml#ftnt_ref85}^ or any of the others we consider, but it seems like being reflective in this way is a plausible indicator of a good approach to ELK. We take this seriously as a heuristic argument in favor of reflective approaches, but we focus primarily on whether approaches solve the object-level ontology identification problem because we feel like we have a much more precise picture of that aspect of the problem. ^[\\[86\\]](#ElicitingLatentKnowledge.xhtml#ftnt86){#ElicitingLatentKnowledge.xhtml#ftnt_ref86}^\n\n### [Another \"recursive\" case: optimization using learned model]{.c32} {#ElicitingLatentKnowledge.xhtml#h.ecmym2kqgxux .c6}\n\n[We may learn a predictor that itself contains an inner model (for concreteness imagine the inner model simply does inference in a Bayes net, but it could also e.g. 
be deduction in first order logic), uses its inner model to evaluate plans for achieving a low loss, and then picks plans expected to lead to the lowest loss.]{.c1}

[]{.c1}

[For example, suppose that the predictor contains a "cognitive scratchspace" (you could literally imagine pen and paper, or an external calculating device), and uses its inner model to select what sequence of actions to take in the cognitive scratchspace in order to make a good prediction about what the camera will show.]{.c1}

[]{.c1}

[In this case, we want to apply ELK to the inner model, and then have the learned reasoner use that model in order to answer questions.]{.c1}

[]{.c1}

Here are some preliminary thoughts on possible approaches, analogous to the three discussed in the last section: ^[\[87\]](#ElicitingLatentKnowledge.xhtml#ftnt87){#ElicitingLatentKnowledge.xhtml#ftnt_ref87}^

[]{.c1}

- [Imitative generalization]{.c12} . Rather than learning an outer model that has an inner model and uses it to optimize, we could try to directly learn the inner model in a form that a human can use. We could hope that the inner model is simpler than the outer model (since the outer model also must specify the inner model), and that a procedure like iterated amplification could allow a human to use the inner model just as effectively as the outer model did. If this were true, then IG would naturally learn the inner model, and we would end up distilling (a human using the inner model) instead of directly learning the outer model. At that point, gradient descent is learning the inner model directly and we can apply ELK to it. ^[\[88\]](#ElicitingLatentKnowledge.xhtml#ftnt88){#ElicitingLatentKnowledge.xhtml#ftnt_ref88}^ [ Then we are distilling an amplified human who is using a version of the inner model which is able to answer questions about its latent knowledge, and it seems reasonable to hope the whole system then volunteers its latent knowledge.]{.c1}
- [Ask for a certificate]{.c12} [. When we ask our AI a question about what's happening, we can also ask for a certificate that the answer is good, where we are (as before) viewing ELK as a way of producing such a certificate; e.g. "this is the output of the fastest function that produces good judgments in training cases" is the kind of thing we might be looking for. Then a solution to ELK for the inner model, which produces certificates that the inner model's answers are good, might be exactly what the learned optimizer needs to do in order to produce a good certificate for the outer model.]{.c1}
- [Do nothing]{.c12} . We could simply set up the ELK objective and have our system use learned reasoning in order to minimize that objective, hopefully leading it to use the inner model (combined with ELK) to answer questions about the world. We are somewhat skeptical of this approach working, because a sufficiently powerful learned model is likely to "know" whether or not an input could plausibly appear in the training set, and if it is doing reasoning to achieve a low loss then it seems quite possible that it will generalize pathologically once inputs cannot appear in the training set. ^[\[89\]](#ElicitingLatentKnowledge.xhtml#ftnt89){#ElicitingLatentKnowledge.xhtml#ftnt_ref89}^

[]{.c1}

An interesting complication of this case, not present in the learned learning case, is that a model may e.g. decide to use its cognitive scratchspace to implement algorithms which themselves build latent knowledge.
At that point we have at least two places latent knowledge can arise: in the inner model, and in the algorithm being run in the scratchspace (which the inner model predicted to be a good algorithm). The three bullets above describe approaches for eliciting knowledge from the inner model, but it seems like the same idea can be applied to algorithms run on the cognitive scratchspace. ^[\\[90\\]](#ElicitingLatentKnowledge.xhtml#ftnt90){#ElicitingLatentKnowledge.xhtml#ftnt_ref90}^ At this point our recursion is not simple---we have reduced ELK for the large model into [two]{.c23} [ smaller instances of ELK.]{.c1}\n\n[]{.c1}\n\nMore generally, we think there are likely to be [many]{.c23} [ \"inner\" models that can form latent knowledge, and a realistic AI is likely to involve a kind of complicated and fuzzy mess of them. This is worth keeping in mind but we don't see any particular reason to expect this complication to make the problem impossible, and it seems even harder to work on until we've resolved the base case of ELK and seen how to make the recursion work in simple cases.]{.c1}\n\n[]{.c1}\n\nWeight-sharing [ between the predictor and human simulator]{.c37} {#ElicitingLatentKnowledge.xhtml#h.3fyocqpbzqj .c46 .c24}\n-----------------------------------------------------------------\n\nIn our counterexamples, we've considered a human simulator which needs to describe how humans reason. But in reality the predictor [already]{.c23} [ needs to have a good model of human behavior, and so it seems plausible that the reporter can reuse this machinery. If the predictor and reporter are trained jointly that seems almost inevitable.]{.c1}\n\n[]{.c1}\n\n[You might think that weight sharing between the predictor and human simulator is the important counterexample that we should be focusing on, and that we shouldn't be thinking about other counterexamples where e.g. the human simulator is simpler than the honest reporter even without weight sharing.]{.c1}\n\n[]{.c1}\n\nThis was the salient example for Paul when he wrote about this issue in [ [Teaching models to answer questions honestly instead of predicting human answers](https://www.google.com/url?q=https://www.alignmentforum.org/posts/QqwZ7cwEA2cxFEAun/teaching-ml-to-answer-questions-honestly-instead-of&sa=D&source=editors&ust=1646948966854637&usg=AOvVaw3HQCWfkcmVxa-bLQxA2Q8I){.c9} ]{.c13} , and it is the implicit framing in other discussions like [ [Look where I'm pointing, not at my finger](https://www.google.com/url?q=https://arbital.greaterwrong.com/p/pointing_finger&sa=D&source=editors&ust=1646948966854998&usg=AOvVaw0TQZ18tGohGhmDF4Jxxihp){.c9} ]{.c13} [. We don't focus on it in our research or discuss in this report because:]{.c1}\n\n[]{.c1}\n\n- Approaches that prevent human-simulation (e.g. [ [here](https://www.google.com/url?q=https://www.alignmentforum.org/posts/QqwZ7cwEA2cxFEAun/teaching-ml-to-answer-questions-honestly-instead-of&sa=D&source=editors&ust=1646948966855520&usg=AOvVaw07QBZz7ObDkTa23m1tZYan){.c9} ]{.c13} or [ [here](https://www.google.com/url?q=https://www.alignmentforum.org/posts/roZvoF6tRH6xYtHMF/avoiding-the-instrumental-policy-by-hiding-information-about&sa=D&source=editors&ust=1646948966855808&usg=AOvVaw2LD95bn3W7WkfVodCbPsD8){.c9} ]{.c13} ) don't seem like they would prevent human-imitation, because human-imitation can [still]{.c23} [ be simpler than the honest reporter. 
But approaches that prevent human-imitation seem like they will need to leverage something other than simplicity, and in most other respects human-simulation and human-imitation behave similarly. That suggests that human-imitation will be the harder problem and we should be avoiding any techniques specific to simulation.]{.c1}\n- [On top of that, human-imitation is a simpler and more generic counterexample. It applies to models in any domain and requires fewer assumptions. So pedagogically it seems easier to focus on this counterexample.]{.c1}\n\n[]{.c1}\n\n[Overall we think that focusing on human-imitation instead of human-simulation is a (small) step forward for making useful progress on ELK.]{.c1}\n\n### [Could the possibility of weight sharing help?]{.c32} {#ElicitingLatentKnowledge.xhtml#h.gwxouycb0e12 .c6}\n\nOn the other hand, you might wonder whether the fact that humans are embedded in the predictor might make ELK easier, or even break one of our counterexamples. For example, in [ [Section: compression](#ElicitingLatentKnowledge.xhtml#h.84po2g4mu6es){.c9} ]{.c13} we discuss a counterexample where the predictor is very simple relative to direct translation, but this may be impossible given that humans (and all of the sensors they use to understand the world) need to be embedded in the predictor's distribution over initial states. ^[\\[91\\]](#ElicitingLatentKnowledge.xhtml#ftnt91){#ElicitingLatentKnowledge.xhtml#ftnt_ref91}^\n\n[]{.c1}\n\nWe are quite skeptical that this will make the problem easier. Taking the proposal in [ [Section: compression](#ElicitingLatentKnowledge.xhtml#h.84po2g4mu6es){.c9} ]{.c13} , the problem is that human imitation does just as good a job as direct translation at compressing the human part of predictor---both reporters totally characterize how the human answers questions, while the human-simulator is simpler, and so the difference in complexity needs to be made up for by the [rest]{.c23} [ of the predictor.]{.c1}\n\n[]{.c1}\n\nWe expect this behavior to be quite general: there are ways to exploit weight sharing between the predictor and the reporter, but we expect them to amount to \"treading water\" and reducing to the case where the predictor doesn't contain any humans.\n\n[Detailed Game of Life Example]{.c37} {#ElicitingLatentKnowledge.xhtml#h.5jm9ag9hztbs .c46 .c24}\n-------------------------------------\n\nOur running example involving diamonds and cameras fails to permit precise descriptions of reality, human understanding of the world, the intended reporter, and the human simulator. To demonstrate our arguments still hold when made more precisely, we will present an example of the problem in terms of the [ [Game of Life](https://www.google.com/url?q=https://en.wikipedia.org/wiki/Conway%2527s_Game_of_Life&sa=D&source=editors&ust=1646948966858309&usg=AOvVaw2zS-l40ow_JnfgacRmwZm8){.c9} ]{.c13} [ (GoL).]{.c1}\n\n[]{.c1}\n\n[The GoL is a two-dimensional cellular automaton devised by John Conway. The world of GoL consists of a 2D grid of cells which are either alive or dead. The world evolves according to three rules:]{.c1}\n\n1. [Any live cell with two or three live neighbors survives.]{.c1}\n2. [Any dead cell with three live neighbors becomes a live cell.]{.c1}\n3. [All other live cells die in the next generation. 
Similarly, all other dead cells stay dead.]{.c1}

[]{.c1}

[In this example, the fundamental nature of the world will be the GoL and human observations will be the total cell counts in 1000x1000 grids of GoL cells.]{.c1}

### [How the Prediction Logic Works]{.c32} {#ElicitingLatentKnowledge.xhtml#h.ghs6yb5h63zc .c6}

[The predictor will have learned the fundamental nature of the world and will model it in those terms. Its latent state will represent a probability distribution over cell trajectories of the world that obey the GoL rules. Inference will consist of discarding trajectories incompatible with observations, renormalizing, and using the resulting distribution to predict future observations.]{.c1}

[]{.c1}

[]{#ElicitingLatentKnowledge.xhtml#t.1252b4da8c4b6f8b3a16fb5d030314f5e50e874b} []{#ElicitingLatentKnowledge.xhtml#t.6}

+------------------------------------------------------------------------+
| def extract_obs_ai(cell_trajectory):                                    |
|     # extracts observations from a cell trajectory                      |
|                                                                         |
| def predictor_prior():                                                  |
|     # returns the predictor's prior over cell_trajectories              |
|                                                                         |
| def prediction_logic(observations):                                     |
|     posterior = predictor_prior()                                       |
|     for traj in posterior:                                              |
|         if extract_obs_ai(traj)[:len(observations)] != observations:    |
|             posterior[traj] = 0                                         |
|     posterior.normalize()  # ensure the posterior sums to 1             |
|     return posterior                                                    |
|                                                                         |
| def observation_extracting_head(posterior):                             |
|     world = posterior.sample()                                          |
|     return extract_obs_ai(world)                                        |
+------------------------------------------------------------------------+
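As a concrete stand-in for the world dynamics and the predictor's observation function, here is a small runnable sketch: a Game of Life step on a toy grid (with wrap-around edges, purely for convenience) and an `extract_obs_ai` that reports total live-cell counts in blocks, in miniature rather than the 1000x1000 blocks described above.

```python
# Toy Game of Life dynamics and block-count observations (assumptions: an 8x8
# wrap-around grid and 4x4 observation blocks instead of 1000x1000).
def gol_step(grid):
    """Apply one Game of Life update to a 2D list of 0/1 cells."""
    n, m = len(grid), len(grid[0])

    def live_neighbors(i, j):
        return sum(grid[(i + di) % n][(j + dj) % m]
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)
                   if (di, dj) != (0, 0))

    return [[1 if (grid[i][j] and live_neighbors(i, j) in (2, 3))
                  or (not grid[i][j] and live_neighbors(i, j) == 3) else 0
             for j in range(m)] for i in range(n)]

def extract_obs_ai(cell_trajectory, block=4):
    """Observations: total live-cell counts in block x block regions, per step."""
    obs = []
    for grid in cell_trajectory:
        counts = []
        for i in range(0, len(grid), block):
            for j in range(0, len(grid[0]), block):
                counts.append(sum(grid[x][y]
                                  for x in range(i, i + block)
                                  for y in range(j, j + block)))
        obs.append(tuple(counts))
    return obs

# A glider on an 8x8 grid, observed as four block counts per time step.
grid = [[0] * 8 for _ in range(8)]
for x, y in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:
    grid[x][y] = 1
trajectory = [grid]
for _ in range(2):
    trajectory.append(gol_step(trajectory[-1]))
print(extract_obs_ai(trajectory))  # e.g. [(5, 0, 0, 0), ...] as the glider moves
```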
### [How Humans Answer Questions]{.c32} {#ElicitingLatentKnowledge.xhtml#h.6siw9f8uzmbr .c6}

[Humans will model the world in terms of a finite list of objects that are associated with various observations. Much like we infer an object is an apple from a red, shiny, circular blob, humans in the GoL universe will infer the presence of objects like gliders from moving patterns of 5 count differences in observations. Similar to how humans in this universe might only be able to infer simple properties like size, speed and color when they are very confused, humans in the GoL universe will have catch-all categories of Active and Stable to describe confusing patterns of observations.]{.c1}

[]{.c1}

[]{#ElicitingLatentKnowledge.xhtml#t.4e7421cb8510efa7e5be02a2516184a43d872463} []{#ElicitingLatentKnowledge.xhtml#t.7}

| Type | Size | Behavior | Origin |
|--------|--------|-------------------------------|----------------------------------------------|
| Active | Varies | Grows and shrinks randomly | When two objects collide they become Active |
| Stable | Varies | Still | Active turns to Stable on 10% of steps |
| A | 4 | Still | Active decays to A on 5% of steps |
| B | 3 | Moves back and forth | Active decays to B on 3% of steps |
| C | 6-8 | Flickers from 6-8 | Active decays to C on 1% of steps |
| D | 5 | Moves diagonally at speed 1 | Active emits D on 2% of steps |
| E | 13 | Moves orthogonally at speed 2 | Active emits E on 0.1% of steps |

[]{.c1}

[Humans will understand the world in terms of an object trajectory containing various objects and their movements through time. Humans assign low prior probability to objects deviating from their expected behaviors. Similar to the prediction logic, inference will consist of discarding initial conditions incompatible with observations and renormalizing.]{.c1}

[]{.c1}

[]{#ElicitingLatentKnowledge.xhtml#t.6363071ea544218d830e29862c747925ac54d8a4} []{#ElicitingLatentKnowledge.xhtml#t.8}

+------------------------------------------------------------------------+
| def extract_obs_h(obj_trajectory):                                      |
|     # extracts a sequence of observations from an object trajectory     |
|                                                                         |
| def human_prior():                                                      |
|     # returns the human's prior over object trajectories                |
|                                                                         |
| def human_inference(observations):                                      |
|     posterior = human_prior()                                           |
|     for traj in posterior:                                              |
|         if extract_obs_h(traj) != observations:                         |
|             posterior[traj] = 0                                         |
|     posterior.normalize()  # ensure the posterior sums to 1             |
|     return posterior                                                    |
+------------------------------------------------------------------------+

[]{.c1}

[The human can then use this posterior distribution over trajectories to answer various questions:]{.c1}

[]{.c1}

[]{#ElicitingLatentKnowledge.xhtml#t.70085f3ad058ea6d48fd95328893d9630e9f2f21} []{#ElicitingLatentKnowledge.xhtml#t.9}

| Question | Answer |
|------------------------------------------------------------------------------------------------------------------|-------------------|
| Where is the D that is at location (4, 10) at T=30 going to be at T=34? | Location (5, 11). |
| Is cell (50, 23) Active at T=58? | No. |
| How many Es will be in the bounding box with top-left corner at (3, 3) and bottom-right corner at (150, 200) at T=95? | 2. |
[]{.c1}

### [Intended Translator vs Human Simulator]{.c32} {#ElicitingLatentKnowledge.xhtml#h.2v0fx0lq6xhs .c6}

[As you might have guessed, the objects familiar to a human correspond to precise patterns of cells:]{.c1}

(Figure omitted: the cell patterns corresponding to each object type.)

[]{.c1}

[The intended translator answers questions about objects as if they were about the corresponding cell patterns. For simplicity, suppose that all questions are of the form "What's the probability that \[location\] contains \[object\] at \[time\]?" The intended translator answers this question by reporting the total probability mass the latent posterior placed on initial conditions that, when simulated forwards, could contain the cell patterns corresponding to \[object\] at \[location\] and \[time\]:]{.c1}

[]{.c1}

[]{#ElicitingLatentKnowledge.xhtml#t.395e1b86a6e5d013cd5d6382028e6151b18c2e0a} []{#ElicitingLatentKnowledge.xhtml#t.10}

+------------------------------------------------------------------------+
| def convert_pattern(object):                                            |
|     # returns the cell pattern that corresponds to a given object       |
|                                                                         |
| def intended_translator(post_ai, question):                             |
|     cell_pattern = convert_pattern(question.object)                     |
|     answer = 0                                                          |
|     for cell_trajectory in post_ai:                                     |
|         frame = cell_trajectory[question.time]                          |
|         if frame.contains(cell_pattern, question.location):             |
|             answer += post_ai[cell_trajectory]                          |
|     return answer                                                       |
+------------------------------------------------------------------------+

[]{.c1}

[In contrast, the human simulator answers questions by extracting observations, doing human inference, and reporting the probability mass a human would place on trajectories containing \[object\] at \[location\] and \[time\]:]{.c1}

[]{.c1}

[]{#ElicitingLatentKnowledge.xhtml#t.87687ce808fbc9291142a9e8bc18bdcd32c694a3} []{#ElicitingLatentKnowledge.xhtml#t.11}

+------------------------------------------------------------------------+
| def human_simulator(post_ai, question):                                 |
|     answer = 0                                                          |
|     for cell_traj in post_ai:                                           |
|         obs = extract_obs_ai(cell_traj)                                 |
|         post_h = human_inference(obs)                                   |
|         for obj_traj in post_h:                                         |
|             frame = obj_traj[question.time]                             |
|             if frame.contains(question.object, question.location):      |
|                 answer += post_h[obj_traj] * post_ai[cell_traj]         |
|     return answer                                                       |
+------------------------------------------------------------------------+
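To see how these two reporters can come apart, here is a self-contained toy run. The `Frame` and `Traj` classes, the two hand-written world hypotheses, and the stub `human_inference` are all invented for illustration and replace the real GoL machinery: in one hypothesis the queried location really contains the D's cell pattern, in the other it contains different cells that produce the same counts, so the intended translator and the human simulator report different probabilities.

```python
# Toy illustration (invented stand-ins, not the real GoL machinery) of how
# intended_translator and human_simulator can disagree.
class Frame:
    def __init__(self, objects):          # objects: {location: contents}
        self.objects = objects

    def contains(self, thing, location):
        return self.objects.get(location) == thing

class Traj:
    def __init__(self, frames):           # frames: {time: Frame}
        self.frames = frames

    def __getitem__(self, t):
        return self.frames[t]

def convert_pattern(obj):
    return "cells:" + obj                 # stand-in for the object's real cell pattern

# Two equally likely hypotheses about the cells. Both produce observations that
# look like a D at (4, 10) at T=3, but only the first contains the D pattern.
real_D = Traj({3: Frame({(4, 10): "cells:D"})})
fake_D = Traj({3: Frame({(4, 10): "cells:other"})})
post_ai = {real_D: 0.5, fake_D: 0.5}

def extract_obs_ai(cell_traj):
    return "counts that look like a D at (4, 10) at T=3"  # identical in both worlds

def human_inference(obs):
    # Seeing D-like counts, the human puts all their mass on an object
    # trajectory with a D at (4, 10) at T=3.
    return {Traj({3: Frame({(4, 10): "D"})}): 1.0}

class Question:
    object, location, time = "D", (4, 10), 3

def intended_translator(post_ai, question):
    pattern = convert_pattern(question.object)
    return sum(p for cell_traj, p in post_ai.items()
               if cell_traj[question.time].contains(pattern, question.location))

def human_simulator(post_ai, question):
    answer = 0.0
    for cell_traj, p in post_ai.items():
        post_h = human_inference(extract_obs_ai(cell_traj))
        for obj_traj, q in post_h.items():
            if obj_traj[question.time].contains(question.object, question.location):
                answer += q * p
    return answer

question = Question()
print(intended_translator(post_ai, question))  # 0.5: only one world really has the D pattern
print(human_simulator(post_ai, question))      # 1.0: the human's inference is fooled in both
```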
#### Complexity Analysis {#ElicitingLatentKnowledge.xhtml#h.k3ukr21nxpcx}

We assume that checking whether a location contains a cell pattern, extracting observations, and checking whether a trajectory contains an object all have small complexity. Thus, in order to determine whether the intended translator is more complicated than the human simulator, we only have to compare the complexity of convert_pattern to the complexity of human_inference.

The only ingredient of human_inference that requires significant complexity is the prior over possible trajectories, which is equivalent to the dynamics governing the properties and behaviors of the various objects. If the cell patterns that correspond to these objects get larger, e.g. larger and larger [spaceships](https://www.conwaylife.com/wiki/Spaceship), the complexity of the dynamics will scale sublinearly with the number of cells in those patterns. In contrast, convert_pattern must represent the entirety of the cell pattern, which scales linearly with the number of cells in the pattern.

As such, as the cellular size of human objects gets large, it's possible that human objects have enough cells that convert_pattern is more complicated than human_inference.

#### Computation Analysis {#ElicitingLatentKnowledge.xhtml#h.pc6p5pr4dqyv}

We assume that mapping objects to their corresponding cell patterns takes very little computation. Since human trajectories are formulated in terms of objects and their locations, we also assume that querying whether a human trajectory contains an object takes very little computation. Similarly, since observations are created by humans for humans, we assume that extracting observations from a cell trajectory is computationally cheap. Thus, in order to determine whether the intended translator takes more computation than the human simulator, we only have to compare the time taken by CellTrajectory.contains to the time taken by human_inference.

The only ingredient of human_inference that requires a significant amount of computation is determining whether observations are compatible with trajectories. If the cell patterns that correspond to human objects turn out to be large, then we assume human observations are correspondingly coarsened. Thus, the difficulty of determining whether a sequence of observations is compatible with a trajectory scales sublinearly with the number of cells. In contrast, CellTrajectory.contains must match every cell in the pattern, which takes time linear in the number of cells.

As such, as the cellular size of human objects gets large, it's possible that human objects have enough cells that CellTrajectory.contains is more computationally intensive than human_inference.
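
To make the linear-versus-sublinear contrast concrete, here is a hedged sketch (the data representations are hypothetical, not from the report) of the two checks being compared: matching a literal cell pattern cell by cell, versus checking compatibility with observations that have already been coarsened.

```python
def frame_contains(frame, cell_pattern, location):
    # Cell-by-cell match against a literal pattern: the number of comparisons
    # grows linearly with the number of cells in the pattern.
    x0, y0 = location
    return all(frame[x0 + dx][y0 + dy] == value
               for (dx, dy), value in cell_pattern.items())

def compatible_with_observations(coarse_frames, observations):
    # Compatibility with coarsened observations: one comparison per coarse block
    # per time step, so the cost tracks the (much smaller) observation size rather
    # than the number of cells in the underlying patterns.
    return all(frame == obs for frame, obs in zip(coarse_frames, observations))
```

Here frame is a 2D grid of cell values, cell_pattern maps offsets to values, and coarse_frames are the trajectory's frames reduced to the observation resolution.
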
Details for regularization strategies {#ElicitingLatentKnowledge.xhtml#h.ydzrxxk7tfsi}
-------------------------------------

In this section we'll provide more detailed discussion and pseudocode for the training strategies discussed in [Section: regularizers](#ElicitingLatentKnowledge.xhtml#h.akje5cz7knt2).

We'll start from the generative modeling setup and reporter loss introduced in [Appendix: generative modeling](#ElicitingLatentKnowledge.xhtml#h.trvedm0xgro). We won't repeat that framework here, so you should read that appendix first.

All of our proposals work by adding a term regularizer(question, z, θ_reporter) to the loss:

```python
class Human:
    def pose_question(before, action, after):
        # returns a question
        # should be one a human thinks can be answered unambiguously

    def loss_for_answer(before, action, after, question, answer):
        # returns a non-negative loss
        # good answers get a loss of 0

def reporter(question, z, θ_reporter):
    # answers the question in the world described by z

# hyperparameter controlling regularization strength
lambda = 0.1

def regularizer(question, z, θ_reporter):
    # a different regularizer for each section

def reporter_loss(human, θ, θ_reporter):
    before, action, after = dataset.sample()
    question = human.pose_question(before, action, after)
    z = posterior(before, action, after, θ).sample()
    answer = reporter(question, z, θ_reporter)
    loss = human.loss_for_answer(before, action, after, question, answer)
    return loss + lambda * regularizer(question, z, θ_reporter)
```
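
For orientation, here is a minimal sketch of how this loss might be used to train the reporter; the optimizer, learning rate, and gradient helper are assumptions for illustration rather than part of the report's setup, and the predictor's parameters θ are held fixed:

```python
def train_reporter(human, θ, θ_reporter, num_steps=10_000, lr=1e-3):
    # Hypothetical training loop: repeatedly sample the regularized reporter loss
    # and take a gradient step on the reporter's parameters only.
    for _ in range(num_steps):
        loss = reporter_loss(human, θ, θ_reporter)
        θ_reporter = θ_reporter - lr * gradient(loss, θ_reporter)  # gradient() is a stand-in
    return θ_reporter
```
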
### Details for penalizing complexity {#ElicitingLatentKnowledge.xhtml#h.3hy035ai6vb3}

This regularizer penalizes the squared L2 norm of the reporter's parameters as a simple proxy for its complexity: ^[\[92\]](#ElicitingLatentKnowledge.xhtml#ftnt92){#ElicitingLatentKnowledge.xhtml#ftnt_ref92}^

```python
def complexity_regularizer(question, z, θ_reporter):
    return squared_l2_norm(θ_reporter)
```

### Details for penalizing computation time {#ElicitingLatentKnowledge.xhtml#h.kltd9hmlo8l6}

```python
def speed_regularizer(question, z, θ_reporter):
    start_time = current_time()
    answer = reporter(question, z, θ_reporter)
    return current_time() - start_time
```

Of course this only works if θ_reporter controls how much time the reporter actually spends on a given input; and in practice you would infer the computation time directly from the architecture and input rather than measuring it (since the inferred time is differentiable, unlike a wall-clock measurement).

### Details for penalizing depending on "downstream" variables {#ElicitingLatentKnowledge.xhtml#h.5irmw6rdfs89}

In [Appendix: generative modeling](#ElicitingLatentKnowledge.xhtml#h.trvedm0xgro) we wrote distribution.sample() to describe the result of sampling from an autoregressive model. For this proposal we will make the dependence on the randomness explicit:

```python
def distribution.sample():
    ε = gaussian.sample()
    return distribution.sample_using_noise(ε)

def distribution.sample_with_noise():
    ε = gaussian.sample()
    return (ε, distribution.sample_using_noise(ε))
```

For an autoregressive model we can do this transformation straightforwardly by using each parameter of the gaussian to sample one of the autoregressive variables. The modified method sample_with_noise also returns the gaussian noise used to produce a given sample.
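
As an illustration of that transformation (the mean and standard-deviation functions, and all names here, are hypothetical), each autoregressive variable can be produced from one coordinate of the gaussian noise:

```python
import random

def sample_using_noise(ε, mean_fn, std_fn, num_vars):
    # Reparameterized autoregressive sampling: the i-th latent variable is a
    # deterministic function of the earlier variables and the i-th noise coordinate.
    z = []
    for i in range(num_vars):
        z.append(mean_fn(i, z) + std_fn(i, z) * ε[i])
    return z

def sample_with_noise(mean_fn, std_fn, num_vars):
    # Return the noise along with the sample, so later code can condition on ε.
    ε = [random.gauss(0.0, 1.0) for _ in range(num_vars)]
    return ε, sample_using_noise(ε, mean_fn, std_fn, num_vars)
```
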
Rather than allowing the reporter to depend on z and penalizing it for depending on "downstream" parts of z, we will simply force the reporter to depend directly on the noise variables ε and then penalize it for computation time. Recomputing a downstream variable is naturally more expensive, because the reporter must first compute the intermediate variables.

```python
def speed_regularizer(question, ε, before, action, θ_reporter):
    start_time = current_time()
    answer = reporter(question, ε, before, action, θ_reporter)
    return current_time() - start_time

def reporter_loss(human, θ, θ_reporter):
    before, action, after = dataset.sample()
    question = human.pose_question(before, action, after)
    ε, z = posterior(before, action, after, θ).sample_with_noise()
    answer = reporter(question, ε, before, action, θ_reporter)  # the reporter sees ε, not z
    loss = human.loss_for_answer(before, action, after, question, answer)
    return loss + lambda * speed_regularizer(question, ε, before, action, θ_reporter)
```

### Details for penalizing consistency with many different predictors {#ElicitingLatentKnowledge.xhtml#h.4n4w25sdzczi}

The key idea of this approach is to estimate "how many predictors are consistent with a given reporter?" by asking "how hard is it to find a predictor that is consistent with this reporter?"

[]{#ElicitingLatentKnowledge.xhtml#t.0539a8d78e6c3da25e277d06a9eed551b6241ac7} []{#ElicitingLatentKnowledge.xhtml#t.16}

```{=html}