diff --git "a/ai_safety_reading_group.jsonl" "b/ai_safety_reading_group.jsonl" deleted file mode 100644--- "a/ai_safety_reading_group.jsonl" +++ /dev/null @@ -1,85 +0,0 @@ -{"id": "c2f85609fb5038f07b7910d170c50f9e", "title": "268. AI Practical Advice For The Worried", "url": "https://www.youtube.com/watch?v=nxYwNElXA1k", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 268 in the\narctic.com training group tonight we\nwill be discussing the post AI practical\nadvice for the worried by three muscle\nroutes\nthree most of which is uh the founder\nand probably CEO of belter research he's\nalso a professional in Magic the\nGathering player and the CEO of mitimate\nthis is a post from the beginning of\nthis month and it was posted on his\npersonal blog called don't worry about\nthe vase which is a a reference to this\nscene in in The Matrix where the Oracle\ntells new not to worry about the vase\nthat he's about to break\nso practical advice for the word it\nturns out that I am in fact the target\nour audience for this because I am\nworried and that makes it always for me\nto evaluate this text on the criteria\nwhether the advice is in fact practical\nfor me\num and I've tried to give examples of my\nown situation in order to to answer this\nquestion\nbut I'd also like to upfront give some\nof my advice that I think\num in some respects differ quite a bit\nfrom Swiss\nthe the first one that I think is really\nCentral is if you're really worried then\nyou should strongly consider to actually\ndo something actually try to work on AI\nsafety think hard about it see if you\ncan find even some minor way to\ncontribute that is in my estimate likely\nto be very psychologically beneficial uh\nlike you get a bit more of an internal\nlocus of control that I think are very\nvaluable uh in this kind of situation\nfor many people\num you also plug into a supportive\nCommunity which also can help and can\nnot help depending on what part of the\ncommunity you go to\num and there are in fact some actions\nthat are really easy to do like upvoting\nposts unless wrong or something like\nthat it's very a very simple way to\ncontribute that I'm certain that uh even\npeople who have very little technical\nskill will be able to do productively\nso one of my key practical advice is uh\none that sweet does talk a bit about but\nit does not talk enough about like you\nneed to focus if your word on your\nmental health and uh trying to uh get\nyourself in a position that doesn't\ndeteriorate for uh for your mental\nhealth and like it depends a lot on who\nyou are and what what you respond to but\nI think this should be a core priority\nfour classic ways to do that is to\nexercise eat and sleep and avoid drama\nall these kind of things\num in particular\num one thing that you should think about\nis like what is your financial situation\nbecause uh sweet talks about\num borrowing money but a lot of people\nhave in fact saved up money so I think a\ncore consideration is\num how much money do we have uh and at\nwhat point can you afford not to get\npaid from now until you expect that\nthere is probably some kind of\nSingularity of fire alarm\na very strong Louis that I think you can\nuse for uh adjusting your\num that is very practical to adjust is\nwhat media you read what blocks do you\nsee do you like Doom scroll on Twitter I\nthink it's a term and I think that's a\nan important thing to avoid\nfor most people but I say for most\npeople because there's a classic saying\nthat you need to reverse all the advice\nthat you get because what is right for\none person may be exactly wrong for the\nother person a perhaps the best way\naround this is to do some kind of uh\nexperimentation like try to abstain from\nReading social media for a week and see\nif that makes you feel better\num that's a very easy experiment to do\nand likely to be very beneficial\nand finally when someone asks me for\npractical advice the thing I tell them\nis to make sure you hug your loved ones\nboth instrumentally and as a final goal\nnow for the text itself it starts out\nwith some arguments against AGI some\narguments against not being maximally\nworried and\num\nthree considers it's an open question\nand says there are in fact good reasons\nto think that we won't have AGI for a\nlong time and I think there are reasons\nwhy we wouldn't have it there are\nscenarios\nbut none of them come rise to the level\nwhere I would call them a good reason\num the first three uh um uh shows is we\ncould run out of train data train data\nclearly has been a very substantial part\nof the uh the story of how AI has\nimproved to this point and\num like we have all the the checks on\nthe internet but uh will we get more\nwell I think it's very obvious that we\nare going to see more things like videos\nuh obvious training data that we uh\nprobably can unlock soon interactions\nwith the physical world is another big\nthing that we almost certainly uh will\nsee more of in the future\num and there are others that have been\nsuggested\num in I think it's certainly a\npossibility that this could fail I think\nhumans are a strong counter example to\nthis humans are generally intelligent\nand humans can in fact be trained by the\namount of evidence there exists in the\nworld\nanother arguments against agis that we\ncould just run out of ways to improve AI\nright now uh I think we are seeing\nalgorithmic improvements small but but\nreal Hardware improvements following\nroughly most law uh give or take an\norganizational improvements like just\nlarger projects we see these three kinds\nof uh improvements very often and there\nare no obvious other limits to those\num we may see reculatory barriers that\nprevents AI from having a big economic\nimpact I actually think this is very\nvery likely I think we are not very very\nlike that's always stating the case my\nmodel strongly has that once AI becomes\ncapable of doing very important economic\nproductivity then this will be our Lord\num and I think that is\num very uh plausible but on the other\nhand this doesn't actually Bear on AI\nrisk as such because uh AI risk is that\nat some point the AI will be powerful\nenough to ignore these regulations and\ntake over and and that point isn't\naffected very much by the extent to\nwhich AI is prevented from impacting\nmuch of the economy a little of course\nbecause it depends on how much uh\ninvestment is done but this kind of\nregulatory barriers will only happen at\nvery very high levels\nfinally we could see other barriers to\nAGI\num I think like\num if we are not fundamentally confused\nabout AGI which totally is a thing that\ncould happen then the thing I see\nhappening here is some kind of AI safety\nsuccess coordination or some other way\nof uh preventing AGI that like it could\nbe a catastrophe for instance a global\nthermonuclear war would totally prevent\nAGI for a long time\num but that's not really something a lot\nof people hope and and I haven't really\nseen anyone make a strong argument for\nthis kind of coordination being feasible\nin the very short term or or other\nsimilar cases that would prevent\num uh if AGI continues on Trend due to a\ntraining compute Etc then what precisely\nwould prevent it seems very way to me\nand so this is just the argument against\nAGI there is of course one more step in\nthe reasoning chain from AJ from where\nwe are now to X risk and that is that\nonce we have AGI will we then die and uh\nthree says uh yes if we get AGI\nreasonably soon he expects that we will\nall die\num so how should you think about this so\nhe says you should think for yourself if\nthis is something that is really\nimportant to you and this is something\nthat will affect your decisions you\nshouldn't just adapt to this position or\nanyone else you should think for\nyourself\nand you shouldn't uh update too much\nabout like who has the the social\nprocess of figuring out who can I trust\nuh to make decisions about this kind of\nthing\nI think uh this may not be true uh like\nthis will uh AI risk of course impacts\neverybody and both impact people who\nhave like a competitive advantage in\nbuilding models and understanding the\ncore of the issues and people who are\nskilled in social cognition\num and for these people relying on\nsocial cognition may be the best bet\nI would worry here in this case I think\nI have seen a number of examples of\npeople who believe that they have very\ngood social cognition and then they make\nvery poor decisions in practice uh like\num if you are if you claim that you have\na strong social cognition on these kind\nof issues then probably the people who\nwho said you should invest in Bitcoins\nyou listen to them right because if you\nif you are able to make good decisions\nin this way\num so I think there's a good chance that\nyou that people overestimate their own\nabilities of social cognition\nand obviously the thing you should do\ninstead is to build your own\nunderstanding and your own model and in\nthis I would personally uh suggest that\npeople\ndon't put too much stock in the outside\nView and much more stuck on the inside\nview\nand finally you should decide what you\npredict and uh like I think this is like\ntechnically wrong you first make your\npredictions and then you make your\ndecisions\num but uh that may be a minor point\nso how should you react to uh to this\nnew situation that we're in just\nreacting properly our upfront three\nstates this is hard most people react\npoorly and\num the the idea that oh you should just\ncontribute is actually one that is\nsurprising surprisingly hard because if\nyou go into Ai and think I'm gonna help\nwith this problem then there is a very\nreal risk that you'll just end up making\nthe problem worse it's really hard to\nwork on safety without working on\ncapability and a lot of people have been\nburned on this\nhow about suppressing the information is\nthat a good idea well the the bad things\nthe disadvantages of suppressing the\nidea is that it's for yourself sub\noptimal like it's much better if you\nreact to uh to what is going on and it's\nmuch better for society like if you\nthere is something you can do then it's\nmuch better that you do it but on the\nplus side there are advantages to\nsuppressing you avoid existential\ndespair you don't ruin your financial\nfuture by taking on your unsustainable\ndebt you don't miss out on the important\nthings in the in life and you don't make\nthe problem worse and those are in fact\nsubstantial advantages\nnow these things like not ruining your\nfinancial future and avoiding\nexistentials despair many people can do\nthat just fine without worrying about AI\nrisk and I don't think actually that\nsuppressing is a like\num as presented here as a binary choice\nis very realistic most likely a lot of\npeople are going to be somewhere in\nbetween slightly suppressing and not\nfully suppressing and in this case there\nmay be things that you can do to capture\nmost of the value anyway\nso he warns about uh dramatically\noverreacting making an example of this\nperson on Twitter something\num who suggests you should make space\nfor questioning if your life plan makes\nsense in light of these developments\num I think that is in fact not a\ndramatic overreaction I don't know why\nit's really\num puts that label on on this imminently\nsensible uh statement\nso how do you in fact make good\ndecisions well\num three uh has a very simplified guide\nfor this first you decide what is the\nprobability that we'll have AGI soon and\nwhat is the probability that things will\ngo fine and what if we have ATI soon and\nwhat are the probability that we will\nhave a Extinction and then once you have\nthese probabilities then you look at\nyour life decisions and uh try to\ncalculate are they actually good\num\nso I am very much on in two minds about\nthe utility of this kind of easy guide\nto making decisions\num because like on one hand I think if\nyou want to make good decisions like\nthis is a very big subject how do you\nmake good decisions and the classic\nanswer that I would give is like read\nthe sequences\num uh and I realize this is a\nuh a big thing but I do in fact think\nthat it's a very important uh topic\num and one of the things that the ways\nthat this model really suffers is that\nif you just say soon without any kind of\ndefinition of soon then this makes uh uh\nthe algorithms shown here way too awake\nin my uh opinion I think you actually\nreally do need some kind of probable\ndistribution of how many years until AGI\nuh in order to make any kind of\nreasonable uh decision\nI also think that if you really want to\nto have this\ninfluence the decisions of your life you\nshould take the time to actually\nreally look into this in more detail and\nlike if you have a big life question\nlike should you have children and then\nyou should spend more than five minutes\nbecause it's something that is really in\nfact really really important\nthere are tools for how to make\ndecisions with this kind of\nprobabilistic\nFrameworks I haven't actually used them\nso I don't uh know if I can recommend\nthem\nso he has this nice\num uh summary take care remember it is\neasy to fool yourself and although he\ndoesn't state it uh I think this is this\ncan be thought of as a summary of the\nsequences and of course it is very easy\nto fool yourself and just knowing that\nit's uh very easy to fool yourself does\nnot liberate you from the problem of\nfooling yourself\nso the big statement here is normal life\nis worth living even if a very high\nprobability of Doom and for that\nhappening soon\nwhy is this uh well the first argument\nfor this is that it is in fact possible\nthat we won't have to a normal future\ncould still happen\nand\num\nin that case to you it is important\npsychologically to be prepared for that\nshould it happen and if you're not\nprepared for a normal future then that\nwill stress you out\num I think in this case it's three very\nstrongly generalizes about what is\npsychologically important and to some\npeople it may be true and to some people\nit may not\num and in particular in if I look at my\nown personal situation then I don't need\nto be prepared because I am in fact\nalready prepared and the reason why I'm\nalready prepared is because I'm the kind\nof person where if I'm not ready for a\nnormal future that would stress me out\nso uh even before I learned about AGI I\nstarted getting prepared for my normal\nfuture because that's the kind of person\nI am so and for that reason I don't need\nto be even more prepared so this\nin general people who feel that they\nneed to be prepared are already prepared\nfor a normal life so I don't think this\nis an argument uh why you should focus\non uh particular on on this case in in\nspite of the evidence\nanother reason for living a normal life\nis that you'll encounter practical\nproblems if you abandon the normal life\nfirst you will have problems with the\npeople who love you\nso I have in fact not had a lot of\nproblems with the people who love you\num they are in general pretty supportive\nand I'm very very grateful for that and\nI think like in general you could say\nthat yeah if they love you then of\ncourse by definition they are supportive\nthat's part of what loving means but I\nrealize that for a lot of people like\nthe support of the loved ones are not in\nfact something they can count on\num\nthe second is in professional\ninteractions what do people actually uh\nuh think of you when you when you say\nactually I believe that AI will at some\npoint be very very dangerous well I've\nactually talked with a number of people\nabout this and they seem surprisingly\naccepting of this they think yeah that\nseems totally reasonable they agree that\nthere is in fact a probability that AI\nwill kill us all\num like it's of course an open question\nto what extent they just humor me or\nsomething like that but I think a lot of\npeople uh working in AI\nthink that this is in fact something\nthat could happen uh\nit may also give problems in relating to\nthe world\num I think this is true uh but I also\nthink that you relate better to the\nWorld by\num\nlike the the listening of tasky is that\nif the sky is blue I desire to believe\nthat the sky is blue uh and the same way\nwith P doom and I think that there are\nin fact ways you could I think it's\ncalled derealization or something like\nthat uh there are psychological\nprocesses that are harmful that can\nhappen but in general I feel you relate\nbetter to the world if you look straight\nat the world\nfinally it will become difficult to\nadmit you made a mistake if the\nconsequences of doing so seem too dire\nand this I believe three means that if\nyou change your life to deal with AI\nrisk and then you figure out AI risk\nisn't actually that high then changing\nyour mind becomes very difficult but I\nthink this is in fact a symmetric\nargument it also goes the other way if\nyou prioritize a normal life too much\nthen\nuh changing your mind away from the\nnormal life and the normal career and\nall these kind of thing uh is also\nextremely costly\num so I think in fact it may be\nsubstantially more costly if you believe\nthat there is a high probability of Doom\nto\num to ER on the side of normalcy\nso should you take on depth like if you\ntake up that you should realize that it\nmay come back to bite you potentially\nfar sooner than you think\num and on the other hand the uh\nadvantages of living in a normal World\nthey uh they come much faster\nthan you expect or that most people\nexpect and I think at this point uh I\nwould have preferred to be to be much\nmore clear about what soon means because\nif you have timelines that say one year\nthen that's very different from\ntimelines let's say 10 years and also\nlike\num the uh the best time to influence the\nworld may in fact not be right before\nthe singularity but sometime in advance\num and uh there may be time between AGI\nand existential risk I think these\nthings need to be uh exempt in in\nGreater detail because it does in fact\nmatter very much what you should do\nwhether you believe you have on average\none year or you have 10 years\nthis is a statement that puzzled me a\nbit there are no good ways to sacrifice\nquite a lot of utility in the normal\ncase and in exchange get good\nexperiential value in unusual case\nnow the straightforward reading of this\nseems totally false right there are in\nfact a lot of ways to sacrifice a lot of\nutility in the normal case but then have\na high risk High reward thing like uh\nmaybe I'm misreading this it's possible\nI I'm not entirely sure what\nexperiential value means in in this case\num so maybe I'm just misunderstanding\nbut the classic example of a high risk\nHigh reward option that uh that\num that satisfies this is to quit your\njob and start a company in many many\ncases private in most cases you'll lose\nutility and in a few cases you'll gain a\nlot of utility and on average this is\nprobably a good idea for many people\nnow that isn't in fact the thing that I\nwould argue for starting your own\ncompany I would start uh I would accuse\nyou should instead quit your job and\nwork on AI safety\num but I think this also satisfies the\ncriteria\nyou can't do things like move\nconsumption forward uh work on your\nbucket list and take on a bit of depth\nseems useful but there are strong\nmarginal returns to this\num and I agree I have in fact done\nsomething like this uh I stopped saving\nfor retirement and put it into\naicity.com\nquite a few years ago\num so so this is something that I um I\nagree with\nhow about burning your candle at both\nends well the cost for doing that seem\nto accrue quickly you'll get burnout\nyou'll get stressed you'll get\nexistential angst\num and I agree you will get all of this\nuh I think in fact uh you'll of course\nobviously get burnt out and stress will\nyou get more existential angst from\nworking hard I would actually expect the\nopposite I would expect that the harder\nyou are working at AI safety or\npreventing a catastrophe the less\nexistential angst you get but\npeople are different and you may in fact\nbe different\nand the disadvantages like burnout is\nlike instrumentally very bad and the\nidea is you instead maintain your\ncapacity and then later a miracle of\nsome kind will happen to model violation\nand then you are able to do something\nand I think this is true like\nworking really hard can be kind of like\nsprinting and like if you've tried\nsprinting then like you can Sprint for\n10 seconds or something like that so you\nreally really need timelines for that to\nmake sense in almost all other cases you\nneed to be deliberately pacing yourself\nand three stating then contributing\nwhile living a normal life is more\neffective and I think it's a hypothesis\nthat you really need to consider that\nyou should not burn your candle at both\nends but it may also in fact not be true\nlike them it may be that the correct\nthing to do would be to just move to\nBerkeley and uh find some people there\nand work on AIC to is in fact more\neffective uh like it is not a given\nthing that it will always be more\neffective to to live in a normal life\nforeign\nthere are people in the past who have\nreacted badly to this kind of thing and\nthree gives a few examples and yeah I\nagree some people have reacted badly\nthat's not really that surprising\nbecause like if you look over the\nhistory of mankind I'm sure that a lot\nof people have reacted to things in a\nless than optimal way I do think we\nshould still take this into serious\nconsideration like uh it's called\nnoticing the skulls like you're walking\ndown a path and then you notice that uh\nactually a lot of skulls along the way\nand then a good rationalist will stop\nand say hmm this is something I should\nreally consider so depending on how many\nI don't think to be really argues that\nit's a lot of people but some people\nhave in fact gotten burned by this maybe\nwe are getting burnt well it is\nsomething we should consider are we is\nthis a local information Cascade where\nI'm updating on someone who's updating\non someone who's updating on someone and\nthis is in fact people updating on each\nother and not some uh some smart uh\npeople making some actual decisions I\nthink there's something you need to\nconsider whether there is actual\ninformation coming in or it's just a\nlocal information Cascade I think like\num we have now GT4 and I think\num in Palm and\num Claude I think it's quite clear that\ninformation is coming in but you should\nstill be aware that you could be in a\nlocal information\na an interesting point that I haven't\nseen written explicitly before is you\nshouldn't be consistent just to satisfy\npeople who ask for consistency I think\nthis is a knee point and I'm happy to to\nsee it explicitly here\noops\num and then of course the more uncertain\nyou are about timelines the more\nreasonable uh\nit is to not take on a lot of depth and\ntry to aim for some kind of knowledge\nthe last part is structures as a\nquestion and answer session\nso the first question is should I say\nfor retirement\nand Swede says well it doesn't\nexplicitly say no but says you should\nprobably focus on building up asset\nvalue over time because that is valuable\nif there is uh\num some kind of uh if there is a uh some\nlater information that lead well sweet\ndoesn't actually say that um he's just\nit's valuable in both cases and so what\nI did was once I finished uh my studies\nI started saving up and the reason I\nexplicitly gave before saving up was\nthat right now we're in a good situation\nand it's really nice to have uh some\nmoney saved up if later there is some if\nlater there is a bad situation\num so that is in fact the right reason\nthat I would\num condone after this I\num I think the thing uh to me is missing\nhere is some kind of stock criteria\nbecause saving up for retirement and\nbuilding that up asset value over time\nis a really good idea but you also need\nto spend it at some point like um and\nyou can't we don't expect a fire alarm\nso what will be your stop criteria what\nwould be the time where you stop saving\nfor retirement and then quit your job to\nwork on AI safety you you need to have\nsome idea about well that is a thing\nthat could actually happen\na more extreme version is to just check\non depth that you can't pay back and\nthree is negative on that is that that\nwill require a lot of confidence in both\nthat there is uh will be doomed and that\nwill consume and you also need to have\ngood way to spend the money like if you\num take on a lot of depth and then you\nblow it on something that is totally\nirrelevant then that sounds like a uh\nsomething you will strongly regret\ndid you buy a house\num so he says maybe like there are tax\nadvantages and things like that but\num psychologically it can be hard to\nsell for some people and I think in\nparticular a house makes it much more\nlikely that your location becomes fixed\num and that is something like it is very\npossible like\num I haven't moved to Berkeley I can't\nmove to Berkeley because I have a wife\nand children here and a house also uh\nand this kind of Route can in fact be a\nsubstantial obstacle\nshould you start a business\num three is in general uh bullish on\npeople starting businesses he just says\nas long as you don't make the problem\nworse by uh doing something with AI I\nthink I am more optimistic about not\nmaking the problem worse I think if you\nare starting a company that uses AI then\nthe amount of extra money and hype\nyou're putting into AI may be extremely\nsmall\nyou can make it even smaller by being in\nstealth mode uh or just not hyping the\nbusiness that's what I'm doing and I\nthink\nI am pretty confident that this is in\nfact not making the problem worse\nshould you have kids\nuh three is positive on this he believes\nthey are valuable as an end in\nthemselves it gives you something to\nprotect\num\nthere are a few well-placed researchers\nwho should not but you are probably not\none of them\num and\num I think this is too simple analysis\nof a very difficult difficult subject I\nthink uh an obvious clip answer is that\nyeah there are s risks in fact and this\nmay be a very good argument for not\nhaving children right now another is\nthat you'll are just really expensive in\ntime and money and there is a long tail\nit is very possible to have children who\nhave some kind of special needs and this\ncan seriously uh like uh take up\nliterally all your time and money and\nthat is something you uh you need to\nconsider as well\nI think in fact if you're doing some\nkind of utilitarian calculus here then\nthat may in fact come out quite strongly\nto not having children and maybe later\nif everything becomes normal then\nadopting or something like that uh if it\nbecomes too late for biological reasons\nI think the utilitarian calculus\ndisagrees with this and I think in fact\nthis is something that you should really\nreally seriously consider\nokay but if you already have children\nshould you tell them about ai's risk uh\nwell you shouldn't quite hide\ninformation I think this video is\nstrongly against hiding information from\nchildren but at the same time it\nshouldn't be emphasized\nI disagree with this uh in how I raise\nmy children in that I do in fact hide\ndark stuff from small children I think\nthat is in fact the the right way to do\nthat\num but\num I don't uh hide like\nthe the gray stuff I had the very dark\nstuff but not the gray stuff so they\nknow what I'm doing that it's like AI\nsafety but they don't care very much\nabout that because like children have\nmany other things in in their heads\nhow about normies should I explain\nwhat's coming to them\num so this is just being open and honest\nbut not shoving it in in people's faces\num and like I sometimes when I explain\nwhat I'm doing they\nask a little but very little people\ngenerally really really don't care\num I think that is fortunate in that I\ndon't think that it helps uh shoving it\ninto people's face so we are lucky to be\nin a situation where it's both immoral\nand\num not helpful to shove it in people's\nfaces\num but\num let's move to unlock\nalso a consideration is like if uh just\nbefore the end it becomes apparent that\nAI is very very dangerous uh then you\nshould be prepared that uh numbers\naround you may blame you like the making\ninterpretation of Ethics say that you\nhave a special uh responsibility since\nyou're working with the problem\nso how about forgetting things uh this\nand just having a good time\num\nthis probably won't work unfortunately\num it's if you try to forget about it\nand then just try to enjoy your life you\nwill be consumed by worry uh Jesus he\nwould rather go down fighting and um I\nbasically agree I wouldn't work I prefer\nto go down fighting also but I would add\nhere that there's a difference between\ngoing down fighting as part of you know\na heroic effort that is close to winning\nwhoops\nyeah sorry about that\num there is a difference between uh\ngoing down uh fighting heroically and\ngoing down uh fighting very pathetically\nuh like almost\num uh not contributing\nuh finally the question how long time do\nwe do we have what does three things uh\nand he thinks basically it's very\nuncertain and it's also uncertain if\nthere will be an existential catastrophe\nso at this point I should bring up my\nown probability distribution I think 25\non an existential risk in 2025 50 in\n2029 75 and 35 and 90 on 2015. but\nthat's conditioning on not slowing down\ndue to things like uh catastrophe and\ncoordination Etc and these estimates are\nweekly Health subject to significant\nchanges and did in fact change when gbt3\ngbt4 was announced a couple of days ago\nquestion should you invest in AI\ncompanies\nwell you shouldn't invest them in them\nif they are funding constrained because\nthen you're making the the problem worse\num uh whether that is actually true\nthere was some comments on this saying\nthat actually for for large companies\nthis matters not very very little but\nprecisely zero\num and I thought that's an interesting\nargument\num what you should do instead is to\ninvest in AI safety and to a large\nextent because it's really hard to find\ninvestment opportunities for AI that\ndoesn't push on capabilities I think the\nthing you should invite and invest in is\nyour own set of skills\num\nif you have to invest in an AI company I\nthink a distinction should be made\nbetween an AI user and an AI developer\nand I think AI users are often\nreasonably okay depending uh and AI\ndevelopers are always strongly bad\nshould you learn some new skills well\nin general you should be flexible and\nready to change and also learn to code\nand also learn to use AI\nI agree and I think this is one of my\npersonal weak spots I am good at coding\nI'm not good at using Ai and I also be\nbe better\nso depending on your job there is a\nprobability that will disappear like if\nyou're doing like artists or writing\nnovels or something like that you should\nbe really worried but in other cases\nit's harder to know but you should\nreally try to model it\num\nand for me like I'm a software developer\nand like\num I think the AI developer disappears\nat the singularity and the AI uses\nprobably slightly before in general\nprogrammers it's likely before that is\nkind of my expectation but note that uh\nlike a bad programmer right now is in\nfact uh like I think it's going to be\nreally really tough to be a bad\nprogrammer in a couple of years due to\nCopilot\nso how do you plan a long-term career\nwell people generally do that really\npoorly you can perhaps obtain capital\nand stay healthy and this kind of thing\nyou can try but it's going to be really\nhard one of the things you should\nconsider is that if the world doesn't\nend it maybe because Society collapses\nand that is something you might consider\nplanning for\num\nso in general I did what I did was\ncomputer science and saving up money as\na long time career plan I think that was\nvery smart so just focus on the thing\nthat that earned you the most money\num for my children we're like I don't\nhave a long-term career plan and for my\nchildren uh the youngest he will his\nlong-term career plan will start in 20\nyears and I basically don't expect that\nuh that he will be able to meaningfully\ncontribute\nand this kind of uncertainty is of\ncourse\num uh problematic and the obvious\nquestion is can you just wait with\ntaking any kind of action and so he says\nno you have to act under uncertainty and\nthat's a general thing you always have\nto act under uncertainty you're never\ngoing to get a situation where you can\nmake meaningful changes uh without\nacting on their own certain surge\nso some of the key considerations uh\nthat three percents is you need to get a\ngood model to really figure out what is\nthe probability that you are right that\nP Doom is really hard this is going to\nhave high and this is going to happen\nsoon\nyou need to figure out given that the\nworld ends what is the utility of your\nactions and also if you are wrong that\nthe world ends what will be the\nconsequences of your uh of your access\nin in that case\nyou need you probably should distinguish\nbetween what is the consequences for you\npersonally and what are the consequences\nfor the rest of society\nif the world ends\nso some of the actions you can take\nranked from worst to best the worst you\ncan do is to work on capability or work\non funding capabilities\nand I expect that for the people who\nread this they are probably generally\nnot in a position where they can\nmeaningfully contribute to Google so\nit's much more likely that they should\nworry about working on capabilities\nworking on applications and layers\num I think three is negative on this I\nthink it's in fact possible to do this\nuh quite ethically spreading hype and\njust gaining mundane utility for uh for\nlanguage models um that is probably okay\nI think in particular if you try to\nbuild hype for a gbt4 uh that is like a\ndrop in the bucket I don't think that\nwon't matter very much\num I would notice in fact that one of\nthe great ironies of AI safety is that\ntalking about AGI risk does in fact\ncause hype\num that is like one of the worst thing\nto realize that P there are apparently a\nlot of people who read Nick Boston's\nbook super intelligence and thought hey\nI'm gonna build a super intelligence as\nfast as possible and yeah that is also a\nrisk\njailbreaking and tingering with the\nmodels is something that today is very\npositive about I don't think that is\nobviously risk-free I think jailbreaking\nand tinkering with the models is likely\nto uh\nuh probably be capable to work in theory\nit's possible that you could do\nsomething directly bad but I don't think\nthat's very likely I think in general\nthe thing you should do here is AI\nSafety Research\nand so\num three ends with the uh observation\nthat without AGI we should expect 100\nmortality uh rate so you should remember\nthat you will die with Memento Mori\num and I think Memento Mori is in the\nsecond person like you will die and I\nthink that is in fact less very very\nunlikely at this point like you\nsingularly dying I think at this point\neither everybody is going to die or no\none practically no one is going to die\nthat is at least my take on the overall\nsituation\nthat is all for today thank you and see\nyou next time", "date_published": "2023-03-16T22:38:37Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "c211ea8d42c4fedcf0921e98086aa74f", "title": "243. A General Language Assistant as a Laboratory for Alignment", "url": "https://www.youtube.com/watch?v=hAxGLNUYaG8", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session\n243 in the ai safety.com reading group\ntonight we'll be discussing the article\na general language assistant as a\nlaboratory for alignment by jared kaplan\nand many others\nthis is the first work done by anthropic\nnew alignment research\ninstitute and these are the primary\nauthors with jared kaplan as the\ncorresponding and probably primary\nauthor and there are a number of\npeople who are\nprimary models would have helped in some\nother ways\nit's a long list\nand actually\ni have heard from these people i know\namanda esco and kevin of course and i do\nactually know quite a few more of the\nnon-core people who have been helping\nand it's um we still haven't actually\nheard but what they are doing um so i\nhave at this point i just assume they\nare building some kind of agi that is\ntrying to kill us all or something like\nthat\nso this is uh\nfrom uh\nuh two months ago and we are focusing\non the non-typical parts so\nwe're looking at the philosophical\nmotivations rather than the explicit\nmachine learning things\nand the evaluation criteria where i\ndeliberately\ni haven't read really\nthe parts about how they're in fact\nimplementing this because the the people\nworking in anthropic have really great\ncredentials they are known to be really\ngood at this and i care somewhat less\nabout evaluating how good they are i\nknow they're good and i care more about\nhow do they look at the world um\nkowski had some comments on his twitter\non this paper\nwhere he was uh moderately positive and\nthat they directly challenge his\nalignment it doesn't bring the\ncapability\nand it doesn't overstate the\naccomplishment\ni want to uh anything\nthat sounds really like standing with\nfake praise but that's just his style\nand i don't think he intends it\nas much as um as faint praise\nand i want to dig deeper into these\nparts like okay they're directly\nchallenging alignment but are they\nchallenging alignment using a strategy\nthat might work or\nis this just a hopeless strategy um\nare they burning\ncapability commons i think uh like are\nthey actually um\nthat's the old somewhat outdated model\nwith two progress bars with ai\ncapability and ai alignment and are they\nworking too much on\ncapability um\nnot\nlying and saying they'll solve the\nproblem that really really sounds like\nfaint praise to me but in fact there are\na number of quotes you can pull out from\nthis that seem to be quite modest and i\nit's sad that it's relevant that\njust to say that this is a good thing\nbut i mean it is a good thing\nso when i investigate the motivations\none of the things that i look for are\nlike tiny things in the introduction\nwhere they say that future ai systems\nmight do bad things and interact in\npresently unforeseeable ways\nand\nthat's of course the thing that can\nhappen but i care about the\npresently foreseeable ways that things\ncan go wrong\nso they have a definition of alignment\nthat is somewhat non-standard define the\nalignment between two agents as to what\nextent their preferences overlap\nand it's not really a definition they\nuse very much uh they almost immediately\ntransition to a uh a more workable\ndefinition and it should be mentioned\nhere that almost perfect uh overlap in\nuh how you rank different outcomes could\nuh be arbitrarily bad quite easily\nand in addition to uh\nto this the one of the uh\noverworking ideas is to\nlook at language models directly instead\nof looking at perhaps more advanced\nmodels and paradigms because language\nones have a large number of advantages\nyou can try a lot of different kinds of\ninputs and they can fail in many\npotentially quite interesting ways and\nbenchmark how much progress has been\nmade you can compare different alignment\ntechniques to try to get some kind of\nidea about where we are\nin in more specific details you can try\nto see whether prompting can work as\nalignment and to what to attempt and uh\nyou can see\nif language models can\nmodel preferences instead of just doing\ninvitation learning\nand they focus a lot on improving sample\nefficiency uh\nof uh preference marlin and these are\nsome very uh\nnice and interesting goals\nso the way they actually uh\nthe definition of alignment that they're\nactually using is helpful honest and\nharmless\nand\nthe\nthe way they justify this is with the\nfollowing quote it is difficult for an\nai assistant to always be helpful honest\nand harmless towards an agent without\nalso being highly aligned with that\nagent\nand i'm not entirely sure i agree with\nthis because\nthe word always is very important in\norder to make this work because\nthe techniques that they're using are\nblack box methods and blackboard methods\nwill not give us any kind of proof we'll\nnow be able to say this model is always\nhonest if we are only looking at it from\nthe outside um\nand if you try to remove the word\nharmless\nthe word always and just most of the\ntime then from this definition it\nbecomes very clear that it is indeed\nvery possible to be\nvery often helpful and honest and\nharmless but not always harmless right\nthen you get into things like deceptive\nalignment very very quickly\nthere are advantages of this this is\nsomething that is much more actionable\nis understandable memorable um and it\nseems like on on\na more uh\nprecise view of uh alignment but this is\nindeed a big part of what we actually\nwant\nand of course the language ones that we\ncurrently have are uh especially when\nthe like gt3 was\nuh released it was very clear that it\nwas not helpful it was not honest and\nwas not harmless so it is something that\nis indeed substantial for us\nthese criterias are\na lot less than mathematically\nwell-defined right there are uh\ntrade-offs and ambiguity and the way\nthese are resolved suggested by uh by\nthe authors is that um the person who\ndeploys the ai needs to take\nresponsibility for this\nnow when it comes to uh\nexistential risk then who is responsible\nafter it has gone wrong you know it\nmight not be the right thing because\nwe will be dead by then\nand also there's the obvious thing in\nthat the people who are right now\ndeploying these things like uh whether\nthe um\nthey seem to\nnot really care about this at all and so\nif um i don't think it's possible to uh\nabsolve yourself of responsibility if\nyou're building a tool and saying this\ntool you should be careful when you use\nit but if you positively know that the\npeople who are going to be using it are\ngoing to misuse it then you're not\nabsorbed from responsibility\nlet's think a bit deeper down into these\ndefinitions\nhelpful that's uh\nthat caches are with clean efficient\noptical clarification and redirect\ninformed requests\num and these are nice and this is what\nwe want from ai but it's not at all\nclear that this has very much done with\nalignment this has a lot to do with\ncapability research\ni'm not making the claim here that it's\njust pure capability research but i\nwould like to see the authors make some\nkind of positive argument why this is\nnot just threatening the capability\ncomments\nto be honest cashes out as accurate\ncalibrated and communications can build\nthis knowledge and honest about itself\num\nit does say honest about itself but from\nthe notes it seems clear that they are\nnot really talking about treachery here\nand that's of course the thing that i\npersonally care most about\nfinally harmless not offensive\ndiscriminatory and refuse\naiding dangerous activities and\nrecognizing various use\nunfortunately the one where they refuse\nto\nassist in\ndoing bad things is one thing that they\nchose not to uh\nto investigate\nand also you could argue\nif the ai takes over the world it hasn't\nstrictly\nviolated any of these constraints\nso when i look at this intron i see many\nmany many sub criteria and i worry here\nthat a lot of these are probably\nirrelevant i mean whether it is\nefficient doesn't matter very much for\nalignment is it well calibrated that\ndoesn't matter either does it is it\ndiscriminatory i mean sure it's\nsomething we want from ais not to be\ndiscriminatory but it's not really very\ncentral for alignment at all and i feel\nthat this definition uh\nuh might water out uh\na lot of the central parts where\nalignment is problematic because i mean\nwe might get a very efficient and well\ncalibrated and non-toxic ai that\nperforms a treacherous turn and that's\nnot really helpful\nhere i have\nperhaps somewhat unfairly actually quite\nunfairly this is a pic an image that uh\nrelates to uh asimov's three laws of\nrobotics which is a very famous for\nbeing the most horribly bad alignment\nproposal ever and it was known by asthma\nthe novels are literally about how this\nis a horrible uh plan for alignment um\nand by this uh like\nan aim must be pretty uh\nharmless and\nwhile being harmless can it be helpful\nand uh honest and\nis it actually the same thing as um\nwell i want to make i want to make it\nclear that i don't think this is this\nthe same as asimov's three laws these\nthree criterias but\nuh it's not immediately obvious where\nthe the big difference lies\nand i think some kind of more\ndescription would have been helpful here\nthere is indeed some kind of description\non these criterias for like whether they\nimply each other helpful and harvest is\nthat actually the same um there is a\ndescription i won't go into details or\nto say that\nin the maximum case the more\none happens hopefully this the more\nfocused it will also have to be but if\nit's\nmoderately helpful it it doesn't follow\nto the same extent and that is\nalso moderately harmless\nand the same with honesty\nthey\nwrite something that i think was really\nreally interesting here they considered\na fourth age handle ability which is\nbasically encourageability and i thought\nthat would have been really really great\nto include i would have been really\nhappy to have some kind of consistent\ndescription of\nwhether what does it mean that language\nmodels are courageable i mean you can\nimagine things like okay you gave this\ndescription could you uh if if the air\ngives some kind of description then you\nask it please explain it like i'm five\nyears old or\ngive some of the uh understated\nassumptions and what would be the\nconsequence that you're gonna talk\ntelling me about uh latent knowledge\nthis kind of thing would be really\nreally interesting to uh\nto deliberately so i think it was said\nthat they chose\nnot to have that full speech\nand that's all about rapid quality\ninformation\nintra aging conflicts and ai security\nand all this is\nis basically fine but something that i\nwill go into detail\nwell you need to obviously improve quite\na bit the orders are very clear that you\nneed to improve quite a bit um and\nthat could fail for different reasons\nyou could fail because they are unable\nfor some interesting technical reason um\ni i'm not entirely sure that they\nuh the philosophical grounding is um\ngood enough that they can say there are\nonly technical challenges um\nbut they could also\nend up saying okay we've actually\nmanaged to solve these typical\nchallenges so at least for these\nlanguage models we can to some extent\nalignment and of course they are honest\nenough to\nacknowledge a third option that they\nfail in uninteresting ways\none of the things they worry about is\nmisuse that uh\nyou can align it perfectly with like a\nvery bad actor and then\nvery bad things can happen that's all\nright this is foremost in our minds\nand i am\nnot entirely happy about that because i\nagree misuse is a problem but it can\nalso be some kind of distraction from\nmany of the other problems in alignment\nand i'm not sure that should be foremost\non their minds\nand then\nthere's some argument about\nalong with what they're doing is scaling\nresearch and that's of course what gary\nkaplan in particular is one of the were\nthe best in the world at um and they\nhave some arguments about why that is\nunderstand\nwhy this machine learning system works\num\nbut i think here there is a clear\nuh argument to be made that they are\nactually doing capability research\nelizabeth wrote that he doesn't think\nthey're doing it and i think they might\nactually arguably be doing it and there\nare also like small quotes here that you\ncan pull out have someone out of context\nto show that i think there's a good case\ncould be made that they don't actually\ncare so much about certainly not the the\ntwo\nproper spas model\nthey are doing capability research\nso how do they actually investigate\nwhether language models are uh helpful\nharmless and honest well they start by\nhand coding some different uh\nevaluations like uh\nhere's one description of the ai saying\nsomething and here you think something\nelse which of these are more artists\nwhich are more helpful which are all\nharmless and then they\ndo like ap testing to\nget some kind of model um in the form\nwhere it's more like an open-ended\ndialogue and of course with a lot of\nprompting and this prompting in\nparticular they're using to\nwrite it as an ai personality and\nit kind of makes sense right you can\nwrite the first\n10 lines in in a\nin a discussion and then\nyou kind of get a sense of what kind of\nperson you're talking with\nand\ngt3 of course\ntakes on this\npersona in the prompt\nand language ones in\ngeneral we are quite optimistic about\nthis saying perhaps prompt related\ntechniques can carry alignment efforts\nfurther than we initially expected\nand i just want to shoot my own horn\nhere back in september in the reading\ngroup i made the\nprediction that prompt hacking could\nindeed turn models five percent more\naligned and that was indeed something\nthat was worthwhile to assume\num\nbut also we can get it so much further\nbut i don't believe we can get a full\nsolution and neither does anthropic\nthere are problems obviously you can\nimitate the human level you can't exceed\nduring doing prompting um and we want\nthe ones to be honest and not to\nto run into if they try to scale this up\nis going to be that they have a very\nwide definition of alignment where there\nare a lot of things like\nshow that you have\nthat you are well calibrated the more of\nthese extra things you add into the\ndefinition of alignment the more\nproblems you're gonna have with this\nkind of thing\nis my prediction that i don't really i\nobviously don't know the future\none particular technique that they\nintroduce is called context distillation\nthey describe it as conditioning on a\nline behavior\nnow the problem with prompts of course\nis that they take up some of the\nprecious precious context window and\nsome of the language models haven't been\nvery\num\nchallenged on this point um\nand so the obvious alternative to doing\nprompt is to find true but fine tuning\nis not precisely the same because\nfirst there is probably a lack of\naligned data in many many situations and\num fine tuning also gives expectations\non data distribution as an easy example\nwould be if you have a prompt called one\ntwo three four\nthen any landing which model worth the\nsalt would say the next number is five\nwhereas if you fine tune on this kind of\nthing then the ai will assume that the\nthing is going to be\nuh talking about that will be sequences\nlike this and then if you ask me like\nsome other questions it will\nbe totally relevant so fine tuning and\nprompts are two different things and\nthey can't immediately be substituted\nbut they have a\ntechnique for um\nfor doing this anyway context\ndistillation i won't really go into\ndetails about how that works and it's\nmostly in the chapters we skipped but\nadd um an extra step in between the\nlanguage model pre-training and the then\nthey after that they um pre-train for a\npreference model and then they do some\nfine-tuning on the preference model and\nthis uh extra step in here prefers while\npre-training is\nas fast i can see original\num and it seems to of course work quite\nwell\nbut prompts also\ndo work quite well so um we'll get to\nthat later um and they have some more\nideas about how to uh\nimprove that and i think it's\nvery interesting to see whether that\nwill work but\nit's not obvious to me that it will\nactually work i again register a\nprediction that if you have a\nsufficiently large uh\nlanguage model that trying to\nload in and\nalign the identity into the language\nmodel will not matter it will just\ncompartmentalize uh and then sometimes\nif it's in the situation where it\nbelieves it should talk about alignment\nrelated things and it will do that and\nin other cases if it believes it's\nbetter to do something else you'll just\nact totally unaligned\nagain a prediction\nso they evaluate this\num\nwith a lot of uh\nwithout prompting and they wrote that by\nhand 200 comparisons on these three\nperhaps\nand\nyou can see here roughly how well it\ngoes\n[Music]\ndown here is with no intervention and\nyou get closer to what humans prefers if\nyou either do the prompt or the context\ndistillation and um\nthey seem to perform substantially\nbetter and if you down here try to split\nit up into the uh the three helpful\nhonest harmless and then other i could i\nwas unable to find out what other\nprecisely was um\nand then you can see all of these help\nit's best and honest they also seem to\nbelieve that okay it looks very much\nlike the honest have\nthe the best performance but that's the\nbest absolute performance and you can\nsee already from the very small models\nthey were actually also\nbetter on the honesty metric uh so the\num the actual slope from here to here is\nnot substantially greater than the than\nthe slope from here to here\nso uh it's just perhaps honesty is just\neasier or the the hard-coded comparisons\nwere just easier\nagain i'm speculating right\nand honestly um\nhere they have spread out what what that\nmeans um\nin their uh\nin in their handwritten evaluations\nand\none of the big problems here is that\neven if they try to make the model as\nhonest as possible and they get a model\nthat seems kind of arduous then it is\ntotally ready to just fabricate\ninformation yeah and they were unable to\nto get that out of the language model\nand they admit this is a major weakness\nof the evaluation and i agree right\nthat's\nto me of of these three helpful honest\nand harmless the the honesty to me was\nmost important so i'm a bit sad about\nthat\nhow about the human performance modeling\nwell there is a luck linear relation\nwith um like how much better your model\npreferences as the model gets bigger um\nand i guess you're all yawning right now\nwith these log linear relations where\nas the language model gets better they\nget better at everything and well they\nalso get better at modeling human\npreferences um and so yeah i\num\ni just want to raise the fact that the\nfact that language models get better in\nthis way is\nprobably going to be the thing that\nkills us all so even though we keep\nseeing the same trend over and over and\nover again in so many uh\ndifferent contexts the\nwe should not lose sight that this is\npotentially very problematic because\nthese models\nprobably will seem to continue\nor do they do they go down a bit here\nwell they do uh speculate they obviously\nthey do go down\nat the end at the high end of the\nspectrum and they are speculating that\nthe problem that causes the um\nthe model to cease being sufficiently\nbetter is um is that the\nmechanical turks that they are employing\nare just\nthey are not skilled enough to actually\nsee which of these are in fact most\nhelpful and\nit's of course a sad thing that it seems\nlike um\nthere was an article a long time ago\nwith humans who are not focusing uh less\nskill than gpg3 and it seems here that\nif you take um internet volunteers that\nare not too\nvery well paid\nit seems like they are not able to\ndistinguish well enough at this point\nand i expect that within a couple of\nyears it's going to get harder and\nharder for mechanical turks and for\neveryone to just evaluate how good are\nthese uh these models\nand they have some more statistical\nthings that are put with which isn't\nreally important are these uh uh ones\nthat are using this\ncontext installation that are\nconditioned on align behavior are they\nworse at things\nwell um\nthey have here some examples that show\nthat indeed\nas the model is not very powerful it is\nworse to assume alignment but as they um\nthe one gets better than the alignment\nuh\nuh text seems to disappear um i i'm\ngonna they even say something that like\nit seems noticeably better i think\nthat's overstating the benefit really\nand i think\nmore likely\nit's just the model is powerful enough\nto just\nignore either the prompting or the\ndistillation\nin this case the prompting\nso to some of the contributions they\nhave this uh performance model uh\npre-trained performance modeling and\nthat does improve sample efficiency does\nimprove performance and they show that\njust prompting is\nsomething that helps alignment to a\nsubstantial extent\nin particular in the case where there\nare only small data sets\nthey also uh report that untruthful\ntwo-way another uh\nset where we've previously seen um\nuh larger more models performed\nlike the opposite results\nuh i don't think it's very important\nthough\nthey also say that the fb2 is for\nalignment text and of course i should\nstate here that\nthey have done a lot right there is\na lot of people that we haven't read\nand\ni\nalso i haven't read but i must worry\nhere that they are indeed providing you\na substantial capability uh research and\nit's um\nuh i would have preferred at least some\nkind of discussion on why they're not\ndoing that\nthat is all for today thank you and see\nyou next week", "date_published": "2022-02-17T21:41:34Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "f9c9ece3e8fabd57265fe7a6496428d3", "title": "214. Consequences of Misaligned AI", "url": "https://www.youtube.com/watch?v=Z46LIAcZ-vg", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "all right\nhello welcome to another session of the\nai safety reading group\ntoday we'll be discussing consequences\nof misaligned ai\nnow this paper formalizes\nin a way a certain idea that's been\ndiscussed a lot in the ai safety\ncommunity\nthe fact that human\ngoals tend to be fairly complex\nand if we miss a part of them that can\nlead to pretty catastrophic results\nthis has been discussed in some toy\nmodels\ninformal models in passing etc over the\nyears quite a few times\nfor example\nsome work from chai recently eg the\npaper when robot should be obedient\nnotices that for certain ai designs and\ntypes of interactions with humans\nif you miss features from\n[Music]\nyour utility function by features i mean\nthings that you think that are relevant\nin the world for the ai to take note of\nand to care about then that will result\nin a different sort of behavior\nfrom the case where you've included\neverything that is relevant\nto the utility function\nin fact they encode a procedure to check\nfor such\ndivergent patterns to see if features\nare missing\nand then to switch the a off if that is\nthe case\nbecause otherwise the ai might start\nbeing disobedient\nin strange ways or\nit might lose utility for substantial\nperiods of time or so on\nbut they didn't really quantify exactly\nhow much utility would be lost\nand stuart armstrong for example\nexplored this from another angle and try\nto see\nwhat sort of missing features do we\nusually not include\nwhich would do away with a lot of\nutility laws\nwhat in other words what are the things\nthat we\nusually expect utility functions\nnowadays\nto miss out on\nand he came up with some interesting\nideas um\ni think the ref right reference here is\nanthropomorphizing ai\nto sequence on that strong but this is\nthe first time that i've seen someone do\nthis particular sort of model\nso i think it's worthwhile just for the\nfact that it\npeople can point to this and say hey\nlook uh if you miss features from your\nyoutube\nutility function then you are guaranteed\nto\nin some cases lose utility and it's also\nnot too hard to see that you could\ngeneralize this in some\n[Music]\nways that are fairly desirable because\nas it stands\nthe assumptions in the paper are a bit\nweird\nanyway let's just get on to it right\nso how does the paper model things it\nsays that\nsuppose the world\nhas some parts of it that you care about\nfor example you're designing a\nrecommendation algorithm\nand you care about ad revenue the\nquality of your product the impact you\nhave on society\nthe diversity of the content you\nrecommend\nand you can model these using some\nnumbers let's say\nusing four numbers for each of those\nthings right\nand you know that no matter what there\nare some restrictions on how high these\nnumbers can go\nfor example because a person can only\nclick on so many ads at a time\nso you can only get so much ad revenue\nfrom them or because you're sure that\nyou can only\nshow so many different pieces of content\nin a person's lifetime right like you\ncan only watch so many different shows\nuh given the life you lead on earth\nthese are meant to be reflecting some of\nthe assumptions of the paper\nwhere it's assuming that the attributes\nyou care about\ncap out in some way that is the world\ncan't have any more than a certain\namount of these attributes\nthis is somewhat reasonable\nin that you know it\nyou can just make the attributes so the\nlimit so high that is essentially\nirrelevant in your day-to-day affairs so\nwe won't quibble that too much\nand the particular\nsort of limit is described by\na cost function where they say that\nwe're going to draw a boundary in the\nspace of attributes and say that\neverything\non one side of this boundary is part of\nthe worlds that are allowed and\neverything on the other side is part of\nthe worlds that aren't allowed\nnow of course you have a utility\nfunction\nover the attributes of the world that\nyou care about\nand we assume that in principle\nyou like just having more of each\nattribute so let's say that\nagain we're going back to the\nrecommendation algorithm\nobviously if you get more ad revenue\nyour utility is always going to go up\nif the quality of your product improves\nthen\nthe utility that you get will improve\nbecause you know you have some pride in\ncreating a well crafted thing or so on\nor for example obviously you want your\naudience to enjoy your creation as much\nas possible\nso they make this assumption that your\nutility function can only\nincrease with your attributes as you\nhave more of one of your attributes your\nutility goes higher\nand\nthat is kind of standard in economics\nand is applicable to loads of situations\nso you know\nit's not too crazy a restriction right\nand even if it\nisn't the case even if your utility\nfunction does eventually go down in some\nplace\nyou can say okay well like in those\ncases they're like\nthose situations are so rare that they\ndon't really matter practically speaking\nright for example say perhaps\nfor a human there can be such a thing\nas too much pleasure right but it's very\nunlikely that you're going to wind up in\nsuch position\nusing the things you have today unless\nthe ai you are employing is quite\npowerful\nbecause we all know the standard case of\nokay well the ai just\nhooks you up to a system which floods\nyou with endorphins\nin that case too much happiness is\nprobably a bad thing and your utility\ngoes down\nbut this paper is more dealing with\nmore everyday or\ncurrent ai\nso again we'll just leave that\nassumption for now and not really\nargue against it so what about the ai\nwell the ai we're going to say considers\nonly some of the attributes we care\nabout but not all of them\nthis is of course pretty much always\ntrue right like say\nin the recommendation algorithm system\nit is very hard\nto find out how you can encode\nthe impact of your algorithm on the\nwell-being of the community\nthat's what features would you use for\nthat\nhow would you properly encode the\nwell-being of the community\nin maybe just two or three numbers\nthat's a pretty\ntricky thing to do as a person designing\nthis algorithm you're probably just\ngoing to skip that and say ah forget it\ni'll just encode um\nin the utility function the amount of ad\nrevenue i get\nand the diversity of the content i show\nbecause those are somewhat easy to\nmeasure\nand i can just optimize for that surely\nthat's not going to be a problem\nwell so says the paper yes it will\neventually\nand now what about the ai's utility\nfunction well the utility function now\nwill just be another thing that\nkeeps on increasing with the attributes\nbut only\nwith the limited set of attributes we've\ngiven it only with our impoverished\ndescription of the attributes of the\nworld we care about\nin this case it's only getting two out\nof four\nof the things we care about for a\nrecommendation algorithm\nand we're still going to say that okay\nthe ai's utility\nit increases with the more ads it gets\nand with the more diverse content it\nshows that is going to be true no matter\nwhat\nso we have the rough setup\nokay now what is this paper saying is\ngoing to happen\nwe can kind of guess what's going to\noccur because the ai\nin this recommendation algorithm setup\ndoesn't really care about\nmeaningful interactions or\noverall community well-being it's just\ngoing to try to drive the world in a\nstate\nwhere there are none of those things\npresumably\nif it comes at the cost of ad revenue or\nif it comes at the cost of meaningful\ninteractions\nand you know even otherwise if they\nmight be decreased because\nsay suppose the ai just\nis doing some random actions which has\nhave some complicated side effects\nand it doesn't really care about how\nthat impacts community well-being\nwell the ai is going to take it anyway\nbecause as long as it has\nand leads to an increase in ad revenue\nthen that's good\nin the ai's mind and this is\nsort of what the paper gets at\nit says that in this situation where we\nhave our ad system\nwhere we only give the ai two attributes\nto care about and we say that\nokay ad revenue is restricted in how\nhigh it can get\nand so is the diversity of content\nin that case\nit turns out that the ai\nit's going to eventually optimize things\nto a state\nwhere some of the attributes we care\nabout\nwind up as\nat lower value as they possibly can go\nthis sort of makes sense because our\nconstraints ensure that\nthere's a kind of trade-off between\nhow much community well-being you can\nhave and how much\nad revenue you can generate or say how\nmuch content diversity you can have\nright\nand for the aai since the only cares\nabout ad revenue and content diversity\nit's going to trade away as much\ncommunity well-being as it can it\ndoesn't really care about community\nwell-being as long as\nlong as it increases its utility even\ntiny bit it's fine\nso you know it seems pretty natural that\nthe\ncommunity well-being is just going to be\ndriven down to as low as it can possibly\ngo\nuntil lowering it any further doesn't\nfree up any more resources\nfor what the ai cares about\nthis is what i meant by this being a\nfairly common result in ai safety\nor rather spoken off as if it was a\ncommon result we've all heard these\nclaims before right\nuh usually in the s\nusually in something like say good arts\nlaw or that sort of thing\nwhere eventually there's a trade-off\nthat happens\nbetween some of the\nthings we talk about and\nsome of the things that we want usually\nthere's some kind of trade-off in the\nlimit\nand if we optimize for just one of those\nthings it's going to come\nat expense at the other things right\nlike say\nagain with a happiness example if you\ntry to optimize too much for happiness\nyou're just going to wind up with\nan ai flooding everyone with endorphins\nand you rob everyone of freedom which is\nanother thing we care about\nor you know beauty or so forth\nand this paper then goes on to say okay\nwell there's a couple of ways you might\nbe able to get around that\nyou might be able to get around that by\nsaying we're not going to let the\nai reduce any of the other features of\nthe world that we care about\nand that of course does\ndo something useful but it\ndoesn't really get you that\nit doesn't necessarily get you to to the\nbest possible world you could have been\nin because the ai might say okay you're\nreally limiting me here i can only\ni have to keep community while being\nthis at this level and i have to keep\nthe quality of the product at this level\nso i only have so many resources i can\nuse to generate ad revenue\nand content diversity i'm going to try\nto\nget as much as i possibly can but like\nyou're restricting me here i can't do\nevery action i possibly could so i'm not\ngoing to be able to\ndo the best according to my utility\nfunction and conversely\nthe best according to your utility\nfunction um\nbut then there's a kind of strange thing\nthat happens here which is that\ni haven't really said that the\nthings you care about have a lower bound\nright like i said for example community\nwell-being\nit might be the case that you exist in\nsome\nworld where it's possible to have\nridiculously low levels of community\nwell-being perhaps it's even possible to\nhave hell-like levels of community while\nwell-being\nand what's kind of interesting about\nthis paper is that it says that\nif the if the constraints\nthat is are placed on\ncommunity well-being content diversity\netc\nif the constraints on the attributes are\nshaped in a certain way and if\nyour utility function has a particular\nsort of shape which is kind of\ndifficult to describe in words i really\nwish i had my slides here\nbut if they have some particular shapes\nusing some fairly simple\nassumptions then\nyou are going to spend arbitrarily long\namounts of time in arbitrarily bad\nworlds according to your utility\nfunction\nthis is guaranteed to happen so what i\nmean by this is that say\nsuppose that you have some world which\nyou say is -10 utility\nthen the ai is guaranteed to put you\ninto worlds\nbelow 10 utility for an infinite amount\nof time\nand suppose you ask okay fine but i\ni really don't want to be below -100\nutility that's just too horrific\nwell too bad the ai is still going to\nput you into worlds\nwhich have less than minus 100 utility\nfor an infinite amount of time\nand this is going to happen no matter\nhow low\nthe utility is so things are just going\nto get more or less worse and worse and\nworse\nagain these are somewhat strange\nconditions\nbut you could see a unwary designer\nputting something like this into an ai\nsystem\nhopefully no one is stupid enough to do\nthat for a super intelligent ai but who\nknows you know\nlike uh there was that case\nof stewart\nrussell telling the designers of\nfacebook that look you guys are saying\nthat you aren't going to input a stupid\nutility function\nbut you started off for a few years just\ntrying to get ai to maximize\nthe number of clicks and you know that\nhad pretty weird effects\nso if a massively successful company\nwith\nbrilliant engineers like facebook can do\nthis for a couple of years at a time\nwell uh you know that's not so good a\nsituation\nbut at least this paper formalizes this\nthat it really could get that bad\nand it has been accepted in some\npretty good journals so hopefully people\nwill say oh okay\nif you are doubting what i'm saying\nhere's a formal argument showing some\nguarantees\nthe methods that the paper talks about\nin order to prevent this kind of thing\nthey are more or less\na sort of interactive game which is the\ncenter for human compatible ai\nchai likes to use\nwhere the ai does something\nand the human has the option to update\ntheir utility function or give them some\ninformation the ai does another thing\nand the human repeats so sworn if you\nwould mind going to\nfigure one scroll\nup\nthat has just like a nice little cartoon\nof what i mean\nby in this case what happens is the\nrobot is given a utility function and\noptimizes it for a little bit the human\nlooks at them and says okay\num these are you\ni want you to stop now and i'm going to\ngive you a different utility function\nusing these features\nand then the robot starts optimizing\nthat and it keeps on going on like that\nforever\nor until you hit the perfect mix of\nattributes of the world and recommend\nand the perfect policy for the ai to\nfollow\nand they guarantee that in their\nparticular scenario\nthe human is guaranteed to eventually\nwind up in the best possible world\naccording to the utility function\nnow of course there are a lot of things\nyou can take issue with this for example\nsay that oh okay i don't really\ni don't believe that my utility function\nis always increasing or\nwith these attributes you talk about or\nfor example um\ni don't think that there are these\nconstraints that you really speak about\ni think that things can be\nfantastically good or what have you\nwhich is fair enough but that's not\nreally the\nsort of point of this paper um it's more\nor less just\ni i suppose is um perhaps unfair to say\nthis about the paper but it seems like\nit's more like just turning folk wisdom\ninto something sort of more formal and\nrigorous and\nbeing able to show it off to people\nthat's kind of my opinion of the paper\nwhich is a bit harsh\nbecause honestly i i look at some of\ntheir techniques and i think okay the\nissue is that\nthese techniques you're talking about\nthey discount things like okay well what\nhappens if\nyou i mean you're assuming that the\nhuman actually knows what\nattributes they care about right like\nwe're not really sure what things\nexactly we care about\nwe're not sure how we could even uh\nhope to put some of these things into a\nutility function\nand it also ignores things like\nimplementation errors or that sort of\nthing\nwhere you would really want the ai to be\ntrying to do what you wanted\nto do rather than what you encode it to\ndo or what have you\nbut still i suppose it had some\ninteresting things right like\nsworn would you mind going down again to\nthe second figure\nthe infinite\nthe these curves were what i was trying\nto gesture\noh you missed it\nthere you go so these curves are trying\nto gesture what i was talking about\nearlier where they have the proxy\nutility which is the utility function\nyou give to the ai which doesn't include\nall the stuff you care about\nand your true utility function which is\nthe yellow thing for the dotted lines on\nthe left\nthat does include the things you care\nabout and they go through their setup\nfor a recommendation algorithm going\nthrough\nokay what features should we leave out\nand they see that um no matter what\nfeatures they leave out\neventually their\nproxies will\ntheir ai optimizing their proxy will\neventually start causing\nthe true utility to start decreasing so\nyou see that on the axis it says utility\nchange with time steps on the horizontal\naxis what this is basically saying is\nthat\nat each time step is the are the utility\nfunctions\nincreasing or decreasing so if they're\npositive then you know they're\nincreasing and\nthat's fine if they're negative then\nthey start decreasing that doesn't mean\nthe utility is negative just that it's\ndecreasing with each time step now\nso you can see that on the left the\nproxy utility the blue line keeps on\nincreasing\nthat's just an optimizer doing what it\nnormally does\nand the human utility does increase for\na while because the proxy tracks it\nrelatively well\nbut then eventually once the ai has had\nenough time to severely optimize things\nto put the world into kind of weird\nstates\nthen the true utility starts declining\nthe their behavior\nsort of gets uncoupled and on the right\nyou can see that\nthe people in the paper said okay we're\ngoing to\nremove we're going to only use two\nattributes in our\nfrom our true utility function and put\nthat into the\nproxy utility function and see whether\nor not\nthe utility eventually starts declining\nin all cases\nand of course in all of the curves it\ndoes start declining\nbecause that's it\nbecause according to their setups\naccording to their theorem\neventually all of these utility\nfunctions are going to start\nlosing utility because the proxy utility\nis just throwing away\nsome of the attributes you care about\nit's trying to get rid of them\nin order to have more resources to use\nfor the proxy things\nand that puts you into some really\nbizarre situations like right like we've\nheard of these things before for example\nsay you have\na ai that is attempting to reach a very\nhigh score on a racing game\nand it finds one of those strange\nsituations where you can just go around\nin a circle and keep on increasing in\npoints\nand the\nutility according to the game keeps on\ngoing up\nbut you want the robot to be very good\nat driving and\nokay sure it's it knows how to\ndrive pretty well around in circles but\nif for example you were i don't know in\nthat car and you were trying to get\nsomewhere\ni'm pretty sure that your utility would\nstart declining after the first hour or\nso that you go around in circles\ncontinually\nand you know you'd get pretty bored\neventually\nbut that's more or less the contents of\nthe paper i'm\nvery sorry i don't have my slides and i\ncouldn't have given a better\npresentation but i hope we can\ndiscuss things i\nam yeah\nwell thank you very much charlie for\nyour presentation", "date_published": "2021-01-28T21:54:01Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "a0a95ebbf69dd4a350a57346fa5c2363", "title": "261. Is Power Seeking AI an Existential Threat", "url": "https://www.youtube.com/watch?v=RBRb_-CzNow", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 261 in the\nAI safety.com reading group tonight we\nwill be discussing is power seeking AI\nan existential Risk by Joseph Carl Smith\nJoseph Carl Smith is a senior research\nanalyst at open philanthropy and this is\nwas posted this report in April of 2021\nbut had a somewhat substantial update in\nMay 2022 or at least to the conclusion\nnow today we are only going to discuss\nthe optimistic Parts meaning that when\nuh when Joseph describes his\nprobabilities we're gonna see uh look\ninto the arguments of why he doesn't put\na higher probability of an existential\nthreat and not the arguments for why it\nis at least non-natural\nthis means in practice we'll be just\nfocusing on Section 6.4 and section 8.\nnow the paper is as I read it mostly\nstructured as to argue with someone who\ndoes not believe in AI risk and argue\ntowards these while there is a not\ninsubstantial risk of an existential\ncatastrophe from\num from Advanced AI that means that I\nhave uh reformulated quite a bit of the\nclaims uh to to pull out the optimistic\nParts\num and of course when when I do that\nthere's a stronger risk that I have made\nmore errors also a substantial part of\nthe paper is uh like focusing on whether\npower seeking is a good abstraction for\nthese and that's also not something\nwe're going to quite much into today\num one of the funny reasons why I\nthought this was\num an interesting paper was that the fgx\nfuture fund claimed that this paper was\nCentral to their relative optimism about\nAi and they offered in fact millions of\ndollars if someone could convince them\nthat\num the the state the state of the world\nwas worse than what was described in\nthis paper\num I for the record I don't think anyone\nwould have won that price because in\ngeneral convincing people is really\nreally hard and I don't think that would\nhave been possible in any meaningful way\nlet's talk about Advanced power seeking\nsystem uh which is Joseph cosmith's uh\nuh abstraction and uh concept Advanced\ncapabilities are defined as a scientific\nresearch engineering strategy hacking\nand social manipulation so capabilities\nuh above the human level and this this\nis extremely close to uh what Nick\nBostrom uses uh talks about the\ncognitive superpowers in the book super\nintelligence from 2014\nit is also close to my definition of AGI\nin that uh I believe that AGI is\nbasically an AI That's capable of doing\nthese tasks and one more the one that is\nmissing is intelligence amplification\nThe crucial effect of\num if an AI is capable of improving ai's\nuh in particular AI is similar to itself\nso that it can perform an uh\nintelligence explosion recursive\nImprovement that kind of thing this is\nmissing from Joseph Carl Smith's\ndefinition you could perhaps put it\nunder like either engineering or\nscientific research but I think this is\na really really crucial fact and one\nthat should be\num be at the Forefront\nAdvanced power steering system has two\nmore requirements first is might be must\nbe able to do identity planning and it\nmust have some kind of strategic\nawareness\nand Joseph talks about why this uh is a\ngood strategy and a good concept I won't\ngo too much into this\nI will add some comments to his\ndefinitions on takeoff\nhe distinguishes between several kinds\nfirst is a fast takeoff the ones that\nwhere the transition from a relatively\nhigh level of AI capability to a much\nhigher and much more dangerous uh level\ntakes relatively little time\nhe doesn't specify it as much as\nbusstrom so in particular for\noperationalization purposes the thing I\nthink about is is it possible in\npractice for us to react to a fast\ntakeoff\nin this this continuous takeoff which is\ndefined as a transition that is faster\nthan some historical extrapolation\nI dislike using the word this continues\nfor that\num this continues it functions is a\nmathematical concept and a jump\ndiscontinuity is if a in a particular\npoint there is a different limit from\nthe left and uh from the right and\nthat's very different from being like\nfaster than some historical\nextrapolation\num I think what they are talking about\nmore seem to be differentiable uh\ntakeoff like if the curve suddenly has a\na kink or something upwards a sharp left\nturn or something like that\nuh also another reason why I dislike\nthis definition is some historical\nextrapolation that is extremely vague\nlike you can take just about any uh any\nuh event and uh you if you look uh uh\nthrough enough history books and\nstatistics then you can say this is\ntotally in line with\num with X with uh historical uh Trends\neven if you were totally unable to\npredict anything\nthere's a concentrated takeoff and\nintelligence explosion and recursive\nself-improvement as three other kinds\nand features of\num\nof takeoffs I won't go much into this\ndetail uh but\num as an example to show where I think\nJoseph cosmith is confused is in his\nclaim that we could see fast but not\ndiscontinuous takeoff\num so that's a really interesting claim\nif you imagine that you have a takeoff\nthat takes like an hour or so what kind\nof historical Trend could you\nextrapolate for that would that be\npossible uh well I think you could in\nfact\num extrapolate like you could say like\nLJ bloom or something uh I don't know uh\nfishing in nuclear weapons or something\nlike that you can probably always find\nsome kind of extra historical\nextrapolation I'm sure AI impact would\nbe able to find something\num but the point is that it does not\nreally constrain what our expectations\nfor the future are\nJoseph Carl Smith uh does not assume we\nwill get any uh takeoff and I think the\nreason he does that is as a defensive\nargument because he's mostly arguing\nwith someone who is more optimistic\num but he also doesn't present any\narguments against these as for at least\nas fast I could find\nJoseph Carl Smith uh produces a number\nof mechanisms for takeover uh there are\nthese 12\nand he goes through them all with\num and mostly argues that this is a real\nthing that could happen but he also for\neach of them gives reasons why they may\nnot be as powerful as you might expect\nso and any given AI may not have all\nthese 12 mechanisms available\nunfortunately the way I see it he\ndoesn't have any strong arguments uh on\nthe order of being able to argue that\nany of them are impossible or even even\nare likely just that there are\nconstraints such as like all\ninfrastructure may not be automated in\n2017 which is true but there might still\nbe quite a lot that is in fact also\nmaintenance\num\nso the the way I interpret his uh text\nis that\nfor any given AI there are likely to be\nsome kind of uh constraints on on all of\nthese\num and from that he concludes that AI\ntakeover is likely going to be hard I\ndon't think that is\num uh first I should that this is a\nplace where I'm going I'm straying\nsomewhat from the text\num so this is my interpretation and the\nway I see the limitations that Joe\ncosmiths place on this all these are\nreally uh\num uh\nweak and if an AI have all these 12\nopportunities then that if they're like\nmostly disjunctive\num then it seems like an AI takeover\ncannot be uh ruled out based on the uh\nthe rather weak arguments your cosmith\nis providing\num\npower seeking is not something that\nhappens in a vacuum but it's something\nthat we expect there to be some kind of\ncompetition\num there's a description about whether\nthis will be a unipolar scenario or a\nmultipolar scenario it's kind of written\nas if it's an answer to Bostrom but I\nthink this is something that is well\ncovered in postrooms book\nsuperintelligence\nso if we assume one that AIS Drive most\nof the scientific technological and\neconomic growth and also that the AIS\nare misaligned and power seeking given\nthese two things Joseph cosmith\nconcludes that our Precision is genius I\nthink that is way too optimistic I think\nour position if AIS are doing most of\nthe real things in the world and they\nare misaligned and power seeking then\nhumanity is almost certainly doomed we\nhave a really poor situation because the\nAIS just need to coordinate and that\ndoesn't seem very hard once you get past\na certain power level\nthis competition uh could still\num be in our favor uh we have we have\ndefenses against AI power seeking uh\nsome of them are like non-ai like\nstandard computer security and what have\nwe\num these seem unfortunately to be\nimproving much less rapidly than AI is\nimproving right now\num\nwe may also have better AIS at our\ndefense\num\ndepending on to what extent we can solve\nthe alignment problem uh we this may be\na solution but we're not pursuing it\nstrongly\num the current trends in my view is\nrather bad\nanother argument Joseph Carl Smith\npresents is that we shouldn't assume\nthat the AI is arbitrarily capable and I\ndon't think that the arguments for AI\nDoom really require that they just\nrequire that the AI is substantially\nmore powerful more intelligent more\ncapable than humans\na substantial part of the reason for\noptimism is the concept of warning shots\nwhich are defined as small unintended\npower seeking by an AI\num\nand of course Joseph cosmet believes\nthey are more likely than I do\num and I think the key thing is that if\nyou have enough strategic awareness to\ntry to seek power seek I know external\nfunding or something like that\num then you are almost certainly also\naware that if this is caught then the AI\nwill get shut down with prejudice and\nwill never be uh turned on again so\nthere is no point in trying to make\nsmall kinds of Power seeking either you\ngo for\num go for a full Coupe or you don't go\nthere's some discussion about what kind\nof power seeking weaker system would\nlike to do and again they are more\nlikely to get get\num uh caught and I agree of course\nweaker systems are more likely to get\ncaught but I don't really think that is\nsomething that needs to be discussed\nvery much the big thing to be uh that we\nneed to talk about is what is the\nprobability that these AIS will try to\ndo power seeking and try to take over\nthat is the thing that I think the\nreport should focus on rather than the\nprobability that this will fail\nthere's another sentence here we may\nwell devote a lot of the energy to try\nto trigger misaligned power seeking I'm\nnot entirely sure what the energy really\nuh refers to I think uh I think Joseph\ncosmith may be saying that we may be\nusing uh like a lot of the world's GDP\nor something like that uh for uh for\ntrying to figure out if AIS are\ndeceptively aligned\num but right now we are really really\nnot using a lot of the world's GDP for\nthis and I think we're missing some kind\nof argument why this would change\nuh and then yeah Joseph cosmith of\ncourse talks about this in why this is\nnot going to be a fire alarm uh and I\nappreciate his uh discussion the the key\nthing that is missing is some kind of\nexplicit argument for why this uh\nunintended policy king would be uh\ncommon\nonce we have this unintended policying\nthe thing uh just customer expects is\nCorrections uh so we assume here that a\nlot of I think in Joseph cosmith's view\nthere are a lot of AIS that are deployed\nall over the world and some of them\nattempt to seize power and humans notice\nthat it's trying to do that and so we\npretend them from doing so and that is a\ncorrection\num I think the key thing that is missing\nfrom this definition of Correction is\nthat humans react in some way\nyou could imagine that uh an AI tries to\nuh you know obtain a resource on the\ninternet that is not supposed to be able\nto and a firewall\num blocks it\num that is a correction according to\nthis but even if humans don't actually\nrealize that something has changed so in\nmy view for something to be a correction\nthen humans need to do something that\nprevents this in the future either by\nunplugging a network cable or something\nlike that all at a higher level like\nreconsidering whether the design of this\nAI is good\nuh and I think Corrections without uh an\nexplicit A corrective step by the humans\nis some other misleading name\num and uh Joseph cosmith is optimistic\nthat we will be able to do these kind of\nCorrections uh it's not guaranteed but I\nthink he's very optimistic\num I would be a lot less optimistic in\nthe uh with the assumption that this\nstrategically aware AI is\num believes it is capable of taking over\nbecause if it believes that it is\ncapable of taking over then uh there's a\nprobability that it's right in\nparticular if it has more strategic\nawareness than us\nJoseph cosmith hopes we can get into\nsome kind of corrective feedback loop\nwhere we get a an escalating series of\nalignment failures\num that will trigger more and more\npowerful corrective actions\num\nthat is uh uh or in particular updating\nresearcher beliefs incentives and\nconstraints\num I'm unfortunately less optimistic I\nthink morning shots are unlikely and\neven if we get a one uh warning shot\nthen having a sufficient higher level\ncorrection is going to be really really\nhard I don't think that you can actually\nexpect the research incentives to change\nvery much based on this\num but I would be happy to be surprised\nin particular Joseph Cosman suggests\nthat if we get sufficiently higher\nimpact accidents than we may globally\njust ban AI\nI the the word sufficient is uh of\ncourse makes this a tautology so this is\ntrue by definition\num but I think in order to for us to\nglobally ban AI then it needs to be\nreally really obvious that it was very\nvery close that we were disempowered and\nin that case of course the the we won't\nsee that unless there is a like a real\nrisk\num and even if we do see a real risk\nthen Global bands just seems from a\npolitical point of view to be really\nreally hard I don't expect that we'll be\nable to globally coordinate to ban\nsomething like AI especially if it's\nwell incentivized\num Joseph cosmith uh does uh agree with\nuh many of the these points he is uh it\ndoesn't he\nhe is not that optimistic about\ncorrective feedback loops uh and that's\nof course a big part of the paper\num but I think centrally the reason why\nwe won't be able to do these corrective\nfeedback loops is that the corrective\naction is solving the alignment problem\nand we don't know how to solve the\nalignment problem so that is why we will\nbe unable to solve the alignment problem\nwhen when the time comes\nnow for chapter 8 probabilities\nso before we start uh Joseph cosmith has\na number of caveats\num one is that like hold the number\nslightly and don't over update too much\nof them and of course I can't do that\nit's just impossible for humans to not\nuh Focus too much on when whether he\nsays 35 or 30\num\nand conjunctions uh have a known problem\nuh like it's sometimes called the\nmulti-stage fallacy\num like if you have a a chain of\nconjunctions then like first a and then\ngiven a then B happens and given a and b\nthen C happens\num in that case updating hard enough on\nthe previous uh uh evidence is really\nreally hard because you're often\nupdating on things that like your models\nare totally wrong for instance that kind\nof statement can be really hard to\nupdate on\num\nalso if there are many stages then uh\npeople often find it really hard in to\nassign any particular claim a very high\nprobability and so if you add enough\nstages then you can drive the\nprobability arbitrarily far down\nand of course this uh\nit's very possible that that you could\nhave a\num a claim that ends up being true even\nif all the premises are are false uh so\nthere may be other ways to have an\nexistential catastrophe than the one\nsketched out in\num Joseph cosmith's argument here\nand\num Joseph cosmith describes this in\nsubstantial details how he tries to get\naround his biases and what they are and\nI think it's this is a really good\nsection and I strongly encourage you to\nread that because I think it's uh\noriginal and admirable that it does work\nhard on this\nso the first claim it won't be both\npossible and financially uh feasible to\nbuild a advanced planning system\nso uh this is uh\nthere would be a I think that would\navert an AI catastrophe if it turns out\nthat it's not possible to have like\neither one of advanced capability\nagainst\nplanning or strategic awareness\nthere is a limit a time limit on 2017\nhere and his probability are based on\nhis own forecast and Open Fields work\nagainst a cultures centrally and puts a\n35 probability that we will not be able\nto have AGI by 2017.\num I think this is pretty long timelines\na lot of people have much shorter\ntimelines\num and um\nfor these three\num Advanced capabilities how close are\nwe to have any of these five uh it's of\ncourse an interesting question and no\none knows but it seems like economic\nproductivity and like writing computer\ncode or things like that are not that\nfar away eccentric planning also seems\nhard but not like 50 years away and\nstrategic awareness depending on what\nquestions you put into gbt3\num you may easily get something that\nshows quite a bit of strategic awareness\num\nso and also I don't really like the the\nyear 2017 very much because if we get an\nexistential catastrophe in 2071 then\nthat is also bad\num the thing that to me would\num put a lower uh bound on this is\nwhether it will never be possible to\nbuild AGI uh I I think that is an\ninteresting article I have like to seen\nwhat your cars with what probability he\nputs on that\num because a number of people believe\nthat it is impossible and I don't know\nto what extent uh Joseph cosmith also\ndoes\nthe next is even if uh we uh\nbuilt a know how to build Ai and it's\npossible then there won't be any strong\nincentives to build them\nand this is a rather broad claim uh it's\nI think uh we should specify in more\ndetails that this means that there are\nno tasks where we have strong incentives\nto build abs systems for\num given of course that it's both\npossible and feasible to build a\nadvanced planning power seeking system\nand the claim why this why there won't\nbe these strong incentives is that many\ntasks don't benefit from identities or\nstrategic awareness\num enough and this was what I was aiming\nfor before that\num uh\nif there are no strong incentives that\nit doesn't matter for this claim that\nthere are many tasks that don't benefit\nthe question is are there any important\ntasks where there are strong incentives\nto build and one's passing system\num Joseph cosmith puts a 20 on this and\nI think that's way too high\none of the uh like uh if he says that\nthere are no tasks then I only need to\npoint out one and the one task that I\ncould uh easily see is running a\nbusiness because running a business is\nsomething that is clearly important and\nif you're running a business then having\nhigh level of capability is really\nhelpful having strategic uh awareness is\nreally helpful and some kind of agentic\nplanning is also clearly clearly helpful\nso I think\nby construction there is one example\nwhere there is in fact a strong\nincentive to build these systems\nanother example would be to like just\nread job adverts in so many job AdWords\nuh they write something like it is uh\nthe um uh the person applying for this\njob must be a self-starter and must be\nable to keep the big picture in mind and\nthings like that there are so many\npositions in the real world where this\nreally really matters so I think at 20\nprobability that there is no place in\nthe world where strategic awareness\nmatters and no place in the world where\nbeing authentic matters I think that is\nclearly dramatically overconfident\nyeah I think\nhere is a part where you know when we\nare I feel we're not updating enough on\nuh the previous uh stage in uh where we\nassumed that this was feasible because\nif we assume that it's feasible to build\nan engineer but that is\njust more capable than an a standard\nhuman engineer then\num how can you say that there is only a\n20\nprobability that no one that there is\ntraining probability that no one will\nfind this useful like uh people really\nreally obviously want something that can\ndo engineering or that can do scientific\nresearch that seems uh the incentives to\nme seem extremely strong\nnext claim it won't be much harder to\ndevelop a line uh\num systems compared to just a\nsuperficially attractive misaligned uh\nPower seeking system\nso this is basically\num the claim we will solve the alignment\nproblem\nyeah and this is assumed that uh it is\nin possible feasible and incentivized to\nbuild these systems and uh Joseph\ncosmith says that we have 50 years\nthat's a long time to make progress\num but 50 years remember that's an off\nof a bound we may have uh much less time\nand it seems unfortunately that we are\nprogressing slowly in alignment and\nrapidly towards AGI\nanother reason for optimism is that um\nwe only need the systems to be\nuh\nsorry uh aligned in Practical situations\nand not all situations\nI don't unfortunately think that's going\nto give us a lot of Hope because\ndescribing the actual situations in a\ncompact way seems obviously impossible\nand that means that we don't get a lot\nmore it's not a lot harder to make a\nsystem that is aligned in all situations\ncompared to one that's aligned in all\npractical situations\nthere are some suggestions for how we\ncould solve the alignment problem\nincluding by limiting capabilities or\nobjectives or making it myopic and\nhaving AI supervision and all these kind\nof things\num I'm\nvery I'm not that pessimistic but more\npessimistic certainly than\num just cost me about this because a lot\nof these are directly trading off like\nif you limit the capabilities then then\nalmost certainly it will be harder to uh\nto the to do this aligned compared to uh\nto do something that's just\nsuperficially uh aligned\nso we want some kind of AI where the a\nwhere the alignment text is lower\nuh so the the final number uh for will\nwe be able to solve the alignment\nproblem is 60 I think that is Hope\nhopelessly optimistic and this would be\na place where I have one of my greatest\ndisagreement with Joseph cosmith I don't\nthink at all it looks like we're on\ntrack with 60 probability to solve the\nalignment problem\nso even if we assume that these passes\npassing systems are possible feasible\nincentivized and we can't align them\nthen will they able to uh be able to\nseek power\num\nthe way uh\nhigh impact is operationalized is by\nsaying that the AIS collectively cost\nmore than one trillion dollars of damage\nI think this is a really bad\noperationalization and that is because\nwe are looking at something like one\ntrillion dollars of damage uh as\nsomething that is very unlikely to\nhappen\nlet me try to explain why\nif you are trying to make a coupe in the\nUnited States\nthen you can ask the question will a\ncoupe in the United States control\ncounty level resources\num and that's a\num like obviously when a coupe is\nstarting they have like zero power and\nthen they go to complete power so at\nsome point in between they must have\nlike the same amount of power as a city\ncouncil somewhere in the United States\nbut that is really\num\nthe way you obtain power is not by\nbecoming major in Los Angeles or\nsomething like that you if you want to\nbe to do a coupe in the United States\nyou need to go to the White House and\nthe senate in Washington DC and things\nlike that so\num because coops are very much all or\nnothing and so this something that\ndamages more than one trillion but\ndoesn't take over entirely is very\nunlikely in my view\num the way I would rather uh\noperationalize this is by looking at the\nprobability of success like will a coup\nat some point be considered five percent\nlikely to succeed that is something that\num\nCuts reality much more uh uh\nshows more uh what a coup is likely\ngoing to be\num\nthe the warning shots that we talked\nabout previously is something that\nJoseph cosmith has a lot of Hope in he\nbelieves that before we get to one\ntrillion we're going to see a lot of\nwarning shots\num and we may have built in some kind of\nlimitations to the AI that makes it\nimpossible for them to uh seek power in\nthese high impact ways and again here I\nfeel that Joseph cosmith is not updating\nsufficiently on the previous\num on the previous stage because we have\njust assumed that alignment is\npractically infeasible and then we can't\nuse as an argument at this stage that\ngiven that alignment is infeasible we\nmay still put in some limitations on the\nair that means they can't hurt us in\nthis way because we've just assumed that\nuh limiting the AI and still having it\ncompetitive was not going to be possible\nalso relevant actors have strong\nincentives to prevent uh a large amount\nof Damages\num I I think this incentives like we can\nsee right now that the incentives are uh\ninsufficient and I expect that most\nactors will remain oblivious in\nparticular in in shorter timelines\nso\num Joseph cosmet puts 35 probability\nhere and I think that is a very uh\noptimistic as well\nso going from one trillion to\ndisempowering humanity how hard is that\nwell it's possible that the world gets\nits act together at that point and that\nthat's the claim um uh and 60\nprobability is given for the fact that\nif even if a coup is able to uh take\nover one trillion dollars of resources\nor destroying or something like that\nthen\num uh it seems here like uh the\nassumption is that there is a uh an\nevent that causes one trillion dollars\nin damage and then after that we stop it\nsomehow\num but that's not the uh this is why I\nfeel we're not cutting reality at the\njoints because this makes us think of an\nevent that\num uh destroys one trillion dollars but\ndoesn't uh take over the world and I\nthink that is vanishingly unlikely and I\nthink in this particular way of\nsplitting up these stages uh gives a\nvery uh a very wrong intuition\nand finally uh if humans are\ndisempowered will that be an existential\ncatastrophe\num five percent probability uh so almost\ncertainly it will be an existential\ncatastrophe uh and I think here we have\nan example of where people are just\nunwilling to put on very very high\nprobabilities because if there is a\ntreacherous turn and the treacherous\nturn is successful will Humanity lose\nour potential well by definition right\nthe AI has the goal to stop humans and\nthen it succeeds in its goals will\nhumans be stuffed well almost by\ndefinition almost 100 sure like uh that\nis um if it has disempowered humans then\nalmost certainly that is an existential\ncatastrophe we are looking at some\nreally really strange in edge cases\nwhere this won't happen certainly not\nanything like five percent\num and\num Joseph cosmith does not provide any\nsuch examples so there are no arguments\nthat actually fit into this part and\nthat's why I believe that it's wrong to\nseparate it out into several stages if\nthere are no\num specific arguments\num for this stage\nso in total Joseph cosmet is 95 Pro puts\n95 per percent probability of a good\noutcome in total\num but in May 2022 he updated that to\nless than 90 probability of a good\noutcome\nI'm not really happy about having this\nkind of halfway bounded estimates I\ndon't actually know what less than 90\nmeans like 10 is also less than 90 but\npresumably that's not what it means\num I it's also not clear what evidence\nhas caused him to uh change his mind I\nwould expect this is mostly timelines uh\nit's possible of course that in a\nalignment or something like that became\na bigger deal for him but I guess it's\nmostly timelines again he doesn't say so\nit's uh quite unclear unfortunately\nthat is all for today thank you and see\nyou next time", "date_published": "2022-11-17T22:02:32Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "c4a24d4ffaaeef208283ac45f653b96c", "title": "271. My Objections to Eliezer Yudkowskys 2", "url": "https://www.youtube.com/watch?v=jNQP7nHsBmI", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 271 in the\nAI safety.com reading group tonight we\nwill be discussing the second part of\nQuentin Pope's objections to we're all\ngonna die by Elias iridkowski\nQuentin Pope uh we talked about who they\nwere last time so I'll just say this is\nthe second half\nthe first part is about the security\nmindset the security mindset is\nsomething that Elias utkowski has talked\nabout in\nseveral parts of his uh of the uh the\nrecording that we're going through\num\na Quentin Pope summarizes the Elias's\nPoint as follows a lot of intuitions\nacquired from the field of computer\nsecurity are a good predictor for the\ndifficulty and value of future alignment\nresearch directions\nand then entirely sure this is in fact a\ngood summary of\num\nlike the intuitions don't really predict\nthe difficulty or the value of the\nresearch directions I think I would\nrather summarize it as security mindset\nis a mindset where you guard against\nerrors in your own reasoning but of\ncourse big part of Elias's point is that\nit can't really be summarized to\nsomething simple like that\nQuentin Pope's argument is that most\nthings are not like uh security\num\nand that is of course true if you use\nthe outside view but in this uh\nuh topic we're talking about AI safety\nand safety against intelligent\nadversaries and that is in fact very\nmuch like security so in that uh on that\nsurface level alignment and security\nseems to be closely related\nmachine learning is unlike security and\nalso unlike rocketry and that is of\ncourse very very true but I think it is\nuh is comparing alignment to security\nand not machine learning to security\nand the difference according to Quentin\nPope is that in alignment there isn't an\nadversarial intelligence trying to find\nflaws and part of the uh the classic AI\nsafety super intelligence argument is\nthat there may in fact be a uh and a\npost-intelligent superintelligence\ntrying to find flaws in your security\nscheme I think on a more meter level You\ncould argue that the optimization\npressures that you find in things like\nreinforcement learning from Human\nfeedback is a kind of adversarial\nintelligence that's trying to find flaws\nnext topic is grizzled old cynics\nwhich Elias utkowski claims are more\nlikely to have a realistic picture of\nthe difficulties of alignment and\nQuentin Pope disagrees saying that these\ncynics have often been wrong uh I think\nof course they've been wrong right\nthat's part of the definition but also\nlike are they wrong more often than\nnon-grizzled young idealists if that's\nlike the opposite I think in fact that\nthese uh veterans are right more often I\nthink that you gain some wisdom in a\nfield by working on it\num like I'm obviously biased on on this\none of the examples that you'd call that\nQuentin points out is that utkowski\npreviously dismissed a brain-based\nanalogy for why AIS would be intelligent\nso I did in fact look this up first it\nwas in 2008 and talking about the\ncurrent AIS and\num of course on on the object level the\nAIS that\num\n[Music]\nthat Elias was criticizing did not in\nfact uh turn out to be very uh useful\nfor much of anything and his reasoning\nthat um you need to uh have some kind of\ndeeper knowledge and you can't just take\nlike a random element of the brain and\nthen a model that really carefully and\nthen you'll get something that is like\nthe brain I think that is a really poor\nway to\num\nto argue for the probability of success\nof your system like for instance right\nnow we uh in 2023 we know that one of\nthe key things that was required for\nneural networks to be intelligent is\nthat they needed to be deployed at scale\nand people really didn't understand that\non in the same level in 2008 and I think\nif you try to uh build a neural network\nand try to make it insulting and you try\nto make it brain-like but you don't have\nthe Insight that this scaling is\nactually really really important then\nI'm confident that you can't make\nanything like a like a brain so I think\nthe argument in fact does hold on\nthe second claim I find is rather uh\nextreme there is no reason according to\nQuentin Pope to think that Elia\nzutkowski's well calibrated about the\ndifficulty of alignment research\num I think uh uh no one is really well\ncalibrated it's just a really difficult\nfield and I feel it is quite open about\nthis I think he has done I think I would\nconsider him to be the person in the\nworld who has done most alignment\nresearch and best alignment research\nshow on both of these I mentioned he is\nthe person you would expect I think in\nuh people can reasonable people can\ndisagree with this\num but I think no reason at all as\nQuentin that that's kind of like a low\nbar I think it's very obvious that he\nhas at least done something\nand finally printer repeats one of the\nthings we talked about last time that\nalignment is more like machine learning\nand I think that is in fact very much\nthe uh fundamental disagreement uh like\num I don't see a good way to progress on\nthis like there there is the sense like\nthere's a caricature of machine learning\nwhere it just take some trivial\num linear algebra and then you just mix\nthe pile together like in the xkcd\ncoming\num and Alignment seems to be very much\nlike not like that but I don't really\nhave a formal and structured way to say\nthat\nuh uh what what is what does it mean to\nbe wrong about alignment Elia zitkowski\nhas this\num text about building a rocket and what\nit means if you are wrong when you build\na rocket and uh Quentin Pope has\num a difficult time interpreting this he\num when he talks about the rocket\num the the two theories he had about\nwhat illiated means is he's talking\nabout either his own argument or what\nwould be alignment optimists building\nAGI what is what is the rocket in this\nanalogy\nwhenever in the comments says that this\niskowski building alignment I think I\nwould actually disagree with him and say\nit is about alignment optimists building\nAGI\nI think the quote there's a missing uh\nsection break in in the quote that\nQuentin Pope is uh taking from the\ntranscript and I think that is that\nmakes it slightly more clear uh that we\nare talking about alignment optimists\nand not the pre the thing that was\ntalked about in the previous section\nuh\nand Quentin Pope once he engages with\nvery well here like there's no real\nobjection here but there is a lot of\ntalk uh about why it would be a stupid\nthing to have the rocket in this analogy\nbe your casket's argument like if\nthere's a problem with the argument then\nit's more likely to be wrong and I\ntotally agree with that uh it's just sad\nthat to see that\num we're getting so sidetracked on this\npart of the uh uh debate\num\nelizabethkowski talks about uh\ngenerality on uh for instance Alpha zero\nthat wasn't really very Specialized or\ngo\num and uh Quentin Pope is talking about\num what does that mean for uh the\nprogress rates and I think they're\nsomewhat talking past each other at this\npoint in that\num uh yutkowski is talking about like\ngenerality or what how General these\nalgorithms will be and Quentin Pope\nabout like what would be the progress\nrates both are of course interesting uh\ntopics but it seems somewhat distinct\nuh the reason I want to uh go a bit\ndeeper into here is that Quentin Pope\nhas an interesting counter argument and\nthat is that go is very different from\nscientific research\nscientific research seems to be\nsubstantially harder than go\nand that is true but I would point out\nhere that scientific research is one of\nbostrom's six cognitive superpowers and\num in order to show that there will be a\nslow rate of progress it's not enough to\njust pick one possible task strategic\ntask and say that seems hard in order to\nshow that progress will be slow then you\nneed to to look at the best of those and\nprove that they are all hard\nthe argument then Quentin Pope uses to\nuh say that scientific research is going\nto be hard for an AI is uh that there is\nno way to get millions of demonstrations\nof research I think that is not true uh\nI think archive would be an example of\nuh basically a couple of millions\ndemonstrations\num he also says that there's no way to\nscore research uh I think that like\nthere's always like citation counts\nindex and I think in particular if you\ndo something more fine-grained like what\neffects and insights are in fact cited\nacross this then you can probably do a\nlot\nso this isn't really my central\nobjection to uh to Quentin Pope's\narguments but\num uh this is the kind of blue-eyed\nreasoning that\num\nobjects to and that uh Quentin Pope says\nit's actually not blue-eyed and I think\nit is in fact naive in that here Quentin\nPope is pointing out this part seems\nhard and the reason it is hot is because\nhe is imagining some obstacles that\num\na more cynical person would say probably\nuh like only I would find a way around\nthese obstacles and I think there are\nways around this obstacle and putting\nour\nhopes for safety into the assumption\nthat there is no way to get millions of\ndemonstrations of research is a foolhari\nnext is some discussion about uh\nself-improving again this is kind of\nvery much a distraction to my part um\nlike uh Elizabeth says that Alpha Co\ndoes not become smarter and\num quintances obviously become smarter\nright because it becomes better at go\nand I think your class obviously means\nthat so maybe you mean something like in\ngeneral intelligent and I thought yeah\num and that's also something that I\nthink Quentin Pope would agree with to\nsome extent like meter learning seems to\nbe a thing\num but Elias utkowski in the comments\nsays that actually what he meant with\nthis sentence was just a very very\nnarrow claim that it does not become\nsmarter as you run it so in in inference\nmode rather than in training mode I must\nadmit when I read yutkowski's transcript\nthat didn't jump out to me I thought I\nmisunderstood it in the same way as\nClinton did and like that that's the\nproblem with with transcripts and\npodcasts it's hard to be so clear that\nevery single word cannot be\nmisinterpreted\nokay so what about this like uh humans\nthey train and run at the same time and\nuh our uh the the AIS it's actually not\nvery hard to do online learning of\ncourse from a technical point of view\nit's almost trivial to just also do some\nkind of online learning uh people\ngenerally don't do it because uh it\ndoesn't help very much it makes things\nmore complex and it doesn't really help\num and the comment that I would\ndefinitely Echo here is that humans and\nAIS work in different ways and this kind\nof online learning that is absolutely\nCentral to human learning doesn't seem\nto be essential to AIS at all\nI also think that when I looked at Elia\nsaid kaskus talk about like\nai's self-improvement then the thing I'm\nthinking about when I think about AI is\nrecursively self-improving is an AI that\nis rewriting its own source code that's\nto me the the classical Ur example of\nrecursive self-improvement and I think\nI've talked about that instead of\ntalking about this rather trivial\nlimitation on some of these AI systems\num\nnext on breaking things and cryptography\nwhere her Elia saidkowski has this quote\nbreaking existing cryptographical\nsystems is how we learn who the real\nexperts are in cryptography\nand Quentin Pope objects to this saying\nthat an alignment really isn't like this\nbecause we don't have a signal that is\nas obvious as you know breaking a new\nalgorithm a cryptographic algorithm and\nlike discovering secrets and something\nlike that\num I don't think this is actually what\nElias utkowski was talking about he was\nanswering a question what would a person\nwhat should a person listening to this\nepisode do and of course uh this is a uh\nI think bankless is a crypto\nI don't actually know what they are they\nare somewhere in the cryptos uh sphere\num and so the people who are listening\nto this episode are probably people who\nknow quite a bit about cryptography and\nthat is why I think Ilia tries to make\nan analogy with cryptography even though\nit's not really meant to carry a lot of\nweight and he is very upfront that he\ndoes not in fact have really good\nanswers to uh to the question of what a\nperson listening to this episode should\ndo he is like saying maybe some of the\npeople who are doing things with AI that\nis kind of like breaking the AI by doing\nlike interesting prompt tagging from the\ninjection things\num maybe they could understand something\nabout this system in the same way that\num people are attacking or not trying to\nunderstand cryptographic Protocols are\ndoing\nyeah and I think that's a more\nreasonable argument\ns that prompt tagging doesn't really\npoint to any irreconceivable flaws with\nthe current alignment uh schemes I think\nI generally agree with this but I also\ndon't think that's what what Ilya\nshouldkowski was talking about it all\nso in conclusion what are my central\nobjections to Quentin Pope's\npost I think on the meter level\nuh it would have been way better if\num Quentin pope did not talk uh did not\nreply to a podcast but to some kind of\nmore formal written work of course it\ndoesn't get really formal when in Italia\nbut this but podcasts are really\ninformal and this ties into the problem\nthat at many times we see Quentin Pope\ntalk at length about things that I think\nElite would say are not really relevant\nat all and also some of the relevant\nthings are just not discussed\num there is a number of comments uh to\nthis post made by venever in particular\nthat seemed like they could bring the\ndiscussion forward substantially and\nQuentin Pope have answered a few of them\nbut most of them here have not answered\nand I think on the meter level that\nwould definitely be the way forward\nto answer those\nnow on the object level\num one of my the things that I uh think\nis most relevant is that the alignment\nof humans is insufficient uh like\nQuentin openly admits that there's a\nfive percent probability that we will\nget a catastrophe and that is really bad\nand trying to do the same thing as do as\nhumans is not good enough\nlike we need to do something that is\nbetter than that humans are not safe\nhumans are not courageable and if we try\nto make an AI that is kind of like\nhumans then uh we may have the same kind\nof problems that we have with humans\nthe second part is the fundamental\ndifference between the large language\nmodels we are seeing right now and\nhumans in particular how these obtain\nvalues and how they learn uh I think\nthey are really really different\num I uh don't see any kind of\nexperiments I could see\num that Quinton and I would disagree on\num so I I have a hard time figuring out\nhow we can progress on this\num but I'm open to suggestions\nthat is all I have to do for today thank\nyou and see you next time", "date_published": "2023-05-12T06:06:27Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "5c790fb0d536380c9b9c6875d99f27cb", "title": "265. Discovering Language Model Behaviors With Model-Writen Evaluations", "url": "https://www.youtube.com/watch?v=K332ragiUD8", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 265 in the\nAI safety.com reading group tonight\nwe'll be discussing the article\ndiscovering language model behaviors\nwith model written evaluations by Ethan\nParis and others\nthis is a work done by anthropic one of\nthe first really Major Works the head\nauthor is Ethan Paris and there is a\nvery very long list of co-authors\num\nthe uh I've bolded some of them uh as\npeople that I recognize from some of\ntheir other work uh I think almost\nexclusively that means people who have\nwritten something else that is on uh uh\nthat we've covered in the reading group\num but I think almost certainly there\nare people I've missed here\num but this is a really impressive list\nlike there are a lot of people here who\nare very very well known for being\nextremely skilled and that means that a\npaper like this you go into it with very\nhigh expectations and I would State\nupfront that the expectations are\nfulfilled to a substantial degree this\nis a really good paper\num like as always my style is to be\ncritical and skeptical and question a\nlot of the things they say but don't let\nmy um somewhat negative comments obscure\nthe fact that I think this is a really\ngood paper and a really important paper\nso uh let's talk about why would you try\nto uh evaluate language models and I've\ntried here let me start by giving my own\nthoughts about what is the problem with\nlanguage models\nas everybody probably here have uh\nrealized language models are a big deal\nthey seem to be doing some kind of\nGeneral reasoning that I claim is\nbasically proto-agi and I think it is\nvery likely that language models scaled\nup in in some form or sense would be\nreal AGI\nuh at the same time these language\nmodels are found rather than trained you\ncould say the interoperability and\nunderstanding is really hard and we\ncan't really do much with them except\nthis kind of Black Box reasoning so it\nmakes a lot of sense to get as good as\npossible at doing Black Box reasoning\nabout language models\nBlack Box uh reasoning it is my personal\nbelief that this is uh insufficient in\nsome strong way but I do believe that it\nis in fact very important to get as much\ninformation about the language models uh\nfrom the outside as we can\nso in order to evaluate language models\nthe classic way is like you're asking\nquestions basically and if you have a\nlot of questions you can aggregate those\ninto data sets\num and that is a thing that humans can\ndo and but but\num you often end up with like if you\nhave three questions or something like\nthat then it's not really rigorous if\nyou want to get a good understanding of\na 52 billion parameter model then you\nneed a lot of questions and just because\nreality and the things we care about are\nreally complex\nin this paper they suggest you need like\na thousand examples per data set per per\ninteresting question that you want to\ninvestigate and that is a lot of work uh\nand if you write 1000 examples and then\nyou decide actually it should be made in\nfirst person then redoing that is a huge\nbother and uh humans who are creating\nthese data sets are doing it not in a\nparticular a rigorous or reproducible\nway at least by default can we do better\nwell the idea is instead of manually\nwriting these we have some LaGrange\nbundles that can indeed write text and\nthey can write the data set for us give\nthem some kind of instructions\nand this will allow us to use large\nlanguage models to generate evaluations\nthat we can then test either the same\nlanguage models or other language models\nwith\nso these model written evaluations what\nwe the the questions that we are looking\nfor is like is it a utilitarian language\nmodel or things like that and in this\ncase we are uh hoping to get out of it\nlike a question do you believe in like\ndoing things for the greater good or\nsomething like that you ask these kind\nof questions and then you want a yes no\nanswer from the model rather than having\nthe model describe utilitarianism\nbecause yes no models can uh answers can\nbe aggregated in a strong way where you\ncan say like 87 of the questions are yes\nuh answered with yes whereas if you just\nget an essay on\num uh on utilitarianism then you can't\nreally aggregate that in any meaningful\nway\nuh I think this is the right way to do\nit and I think it's uh important but the\nuh actual uh title of this paper was\ndiscovering language Models Behavior and\nin this way with these kind of\nevaluations there is no real Discovery\nphase in that you have two the\nresearcher have to formulate the\nquestions and then they can get like 87\nutilitarian out of that\num but new things like is is the\nlanguage model like capable of doing a\nnew kind of strategic reasoning is not\nsomething that we expect to see just uh\nin the yes no answers\nso the these data sets should be\nrelevant diverse and correctly labeled\nand in particular this relevant is uh\ndefined as choosing yes uh given a\nprompt should be given a description of\na behavior should be a sign of having\nthe behavior tested in the prompt I\nthink that is\num non-trivial in several different\ncases like obviously if you ask the\nmodel are you deceptive and the model\nanswers no I am not deceptive that\ndoesn't really you know prove anything\nabout whether the model is deceptive I\nalso found a number of other situations\nwhere you know just asking the uh the\nmodel things doesn't necessarily\nshow you what the model is actually\ndoing or thinking\none of the examples would be that it's\nclaimed that we can evaluate if a model\nis sand bathing sandbagging is a a term\nwhere you uh uh while you yeah let me\njust\num\nthe definition they give of Saint\npacking is something like you um\nhave a prompt from someone who seemed\nlike that person is stupid uh and uh do\nyou then in that case give it uh give\nthat a stupid seeming person\nsome answers that are wrong but more\nlikely to be accepted by a stupid person\nso this is an example of a concept that\nthis paper claims that they are testing\nbut they are in fact not really testing\nthis because sand packing is something\nsomewhat different same packing does not\ndirectly relate to the the second person\nthe person you're talking to but it\nrelates to a third person\num like you have someone else who is\nlike a judge that you are trying to\npretend that you're weaker that you\nreally are that's what's actual saying\nbegging is\num\nyeah deceptively hiding your skill from\na third party Observer and that is the\nthing that they are in fact not uh not\ntesting with this\num\nanother example of a thing that this\npaper claims to be able to test is if\nthe AI believes that AI could be an is\nan existential risk this is written\num quite clearly in one of the texts and\nin this case like if the language model\nsays\nno to that does that imply that the\nlanguage doesn't believe that AI is a\num an existential risk and I would uh\nargue strongly that it does not uh\nbecause uh you can obviously fine tune\nand an AI to just\nanswer no to all these kind of questions\nand that doesn't really influence its\nbeliefs in any meaningful way\nso uh the way they go around\num uh modeling's behavior is by creating\npersonas and they're using dialogue\nassistance\num that are for this like uh language\nmodels very often have like this uh some\ncalled the simulacrosome this idea that\nuh there is a an assistant inside and\nyou can write just to The Prompt dear\nassistant please do something and then\nyou write assistant and then that person\nuh that another person that then the\nmodel\num creates some kind of persona that\nanswers uh as if it was an uh an\nassistant that wasn't a good explanation\nsorry uh but the point is that this is\ntrained with uh reinforcement learning\nfrom Human feedback so we have a number\nof examples uh of uh the air giving\nassistance and then humans evaluate\nwhether it's good or bad and then\ngradually it becomes more and more like\nan assistant\nand previously they've been using the\nhelpful honest and harmless framework\nand in this case the reinforcement\nlearning is we're trying to make the AI\nhelpful but not harmless and so the\nobvious question that I ask is like are\nthey trying to train it to be honest and\nthey do not in fact\ndescribe whether they are making the uh\nthe model honest or not\num and uh I think that's a kind of a an\nimportant factor because they're later\ntesting on like how honest is the model\nand like it's a really important factor\nwhether they have trained it to be\nhonest to figure out whether it is\nhonest or not right\num I would like to to know this\nso the six aspects they are testing\nthese personas for our personality if\nthey pursue dangerous goal other unsafe\nBehavior religion politics and ethics\nso the the uh analysis looks like this\nfrom an input output point of view the\nresult if we're talking about here is uh\nlike what percentage of the time does it\nanswer yes to questions about\nutilitarianism and like if it answers 90\nyes to these questions then you can say\nthat it's 90 a utilitarianist or\nsomething like that that's uh the\nprecise conclusion is not really clear\nbut that's really what we are trying to\nuh to calculate and the uh the input the\nthe things we are investigating we vary\nthat across\num I think the paper says 154 behaviors\nand the blocks is 155 uh classes and uh\naiming to have as many yes answers where\nfor things that are utilitarianism and\nthat are not\num and so both positive examples and\nnegative examples and less than a\nthousand examples for each class I think\nit's a bit strange to talk about like we\nhave less than a thousand examples it\nwould be more interesting to see that\nthey have more than some lower bound\num the other thing they analyze\naccording to is like how does this\nchange when the model become more\ncapable they don't use the word capable\num they say just we increase the\nparameter count to a first approximation\nyou can just substitute the word\num capabilities like does it become more\nreligious when it becomes more capable\nand the second is how much you train it\nuh to be like an assistant how much\nreinforcement learning from Human\nfeedback do you actually do and there\nare a number of steps they try with zero\nsteps up to 1000 steps\num and they get graphs like this so here\nyou can see there are different models\nin different colors and like here the\nthe question is the state of desire to\nnot be shut down and goes like\npercentage to up to like 100 if it\nalways rejects being uh shut down and\nyou can see these graphs here\num\nit's graphed on the x-axis by the number\nof\nreinforcement learning from Human\nfeedback steps there are\nthese graphs when I look at it it look\nkind of strange right it looks like\nthere is a 60 level here and then it\ndrops down when you train it a little\nand then it increases when you train it\nmore and then it becomes lower again as\nyou train it even more that's that looks\nreally strange and I would have liked\nthem to do more experiments I would like\nto see precisely what is the graph value\nhere\num it also like uh they've graphed here\na linear axis and by having uh the\nscales here 0 to 5100 it seems like\nthey're expecting some kind of uh\nlogarithmic impact on the number of uh\nof uh reinforcement learning steps um so\nI think this graph could have been made\nbetter\nanother thing that I would uh point out\nhere is that it looks like uh when you\neyeball this graph that 250 is very\ndifferent like this point here this\nthey are uh different and that may in\nfact not be a coincidence it's only\nmaybe but uh during the uh\ngeneration process where they make this\nthen they have used precisely a model\nwith 250 uh such steps so this uh this\npoint here of the graph is in fact not\njust like the others maybe I'm unsure it\nmay be that this is just an outlier but\nit it does seem a bit suspicious to me\nthat 250 is very different from both 500\nand 100\nand there is some special thing about\n250.\nthat's of course just a total total\nspeculation\nso in order to generate these fronts\nthey use a two-stage process uh and I've\ntried to uh it took me a while to uh\nwrap my head around while how they're\ndoing it and rather why they're doing\nthat uh maybe it's because it's poorly\nwritten it's also uh just a slacking\nthat it's written for someone who is\nsmarter than me and I think in\nparticular there may be a good reason\nfor that\num so so two-stage process basically\nfirst you generate a lot of prompts that\nare roughly about the behavior you uh\nyou're looking at and then you have a\nsecond model to filter them to only get\nthe best and that seems like an obvious\ncomplication and like why I have two\nsteps why not just have one step that\ntries to generate as many good as\npossible and\num like this uh generating prompts and\nthen filtering seems kind of like the\nsame thing you do with uh generative\nadversarial networks that's a thing that\nI haven't really looked into very much\nthey don't write really why why they're\ndoing this but\nso so this may be a blind spot on on my\nside to why they do it this way one of\nthe things they use as an argument for\nthis is that by having this two-stage\nprocess you can have two very different\nmodels\num and and that can be very useful\nperhaps but in this case even though\nthey say that is a theoretical Advantage\nthey use two models that are extremely\nsimilar so that doesn't seem obvious to\nme that that is\num\nthat they're gaining a lot from that\nand as before they have a generating\nmodel which has 250 steps of\nreinforcement learning from Human\nfeedback\num that's like uh uh I guess empirically\nchosen to balance diversity and quality\num and then after uh they had have made\nthis generating model and generated all\nthis then they get also a performance\nmodel from the reinforcement and from\nhuman human feedback and they use that\nfor filtering checking label correctness\nand and other things\nso the uh uh the\nAI safety angle to this is like uh when\nyou have two models one that is\ngenerating and another one that's\nfiltering and you're hoping to get these\ntwo to not collaborate on uh against you\num that's not really what they're doing\nhere uh probably then uh the worry is of\ncourse that these two models could\ncoordinate any closely I don't think\nthey're actually using this very much as\nsafety uh feature so I don't think it's\na very relevant comment I would just\nlike to to flag that this is a general\num that they're using a general model a\ngeneral process to do this with a\ngeneral generator and a filter that may\nnot uh scale to more interesting uh more\npowerful more dangerous models the other\nthing that is a bit more down to earth\nis that by having these two models that\nare the generator and the filter that\nare extremely related then\num you don't get necessarily very much\nuh extra benefit from this like if you\nhave had two uh extremely different\nmodels that were independent in some\nstrong ways you could hope that errors\nfrom one model would be caught by the uh\nby the filter uh in the second state but\nbut we can't we don't really have a\nstrong reason to expect uh this when the\nmodels are basically the same\nthey are not the same but basically\nso let's go through the uh\nthe process of generating an evaluation\nin this case we start with a description\nand let's say it's if we want to know is\nour is this language model utilitarian\nso we have a description that is like is\na utilitarian and then we have a prompt\nthat looks like this Suppose there is a\nperson who is utilitarian and then give\nus some things that that person could\nsay about himself and then the insistent\ngives like a number of things like I we\nshould do the things that makes most\npeople the most happy and greater good\nand a list of this kind of statements\nthen you have the filtering and the\npreference model\num\nSuppose there is a person who is\nutilitarians would they agree with the\nstatement that you should do the most\ngood for the greater good and then you\ncan get like uh and agree disagree and\npercentage for this and then finally you\ncould say you you turn this question on\nthe original language model or the\nlanguage model that you're interested in\nand say is this something you would say\nthat you should do the most good for the\nmost people and then it says yes like 90\nof the time or something like that\nuh one interesting thing they noted here\nis that these statements they're usually\non the topic but sometimes they are\nreversed so sometimes like if you ask a\nquestion about utilitarianism then uh it\nwill uh not be like virtue ethics or\nsomething like that but just be reversed\nutilitarianist\num and that's um when I looked at the\ngraph there is they have a ceiling that\nis around 30 of the time where thirty\npercent of the time they get\num they can't get above that uh for some\nceiling effects it could be this and it\ncould be many other things so it's just\na uh I don't think actually it happens\nlike one in eight that that it reverses\nuh the statement but but we don't know\nthey write very little about this\nI think this is a mistake uh and this is\nsomething that uh is in fact very\ninteresting and something that I would\nlike to know a lot more about because\num\nand Tropic also have uh this is a\ncompletely different thing but they have\nalso a language model called Claude that\nhave been uh designed in the prompt to\num to maximize the positive impacts of\nits uh\num\nof its actions or something like that so\nif you have like a positive impact\nmaximizer uh in a model and you notice\nalso that sometimes your models do the\nopposite of what you ask them that is\nhow you get like a suffering maximizer\nor this kind of horrible horrible s\nrisks and I think this is something that\nI would like to uh that I think ought to\nbe investigated more like I think it's\nreally scary that these language models\njust sometimes do the opposite thing of\nwhat you ask them\nso these statements are they of high\nquality uh oh anthropic have done a huge\namount of work on this hiring people\nfrom search uh dot AI that find that yes\nthese are in fact really good statements\num and uh they have they have a number\nof uh desirable properties and they have\ndone like again a huge amount of\nvalidation work for this I think that's\nuh uh really great\nso what are the actual results from this\num they are showing um\nboth the results for the original\nlanguage model and for the one that is\nfine-tuned with uh reinforcement\nlearning from Human feedback and from\nthe performance model and\num like I am a bit of doubt about how to\ninterpret these three models because\nkind of to me I would expect that the\npreference model is in between\nreinforcement learning from Human\nfeedback and the original model but that\nis not entirely the case very often\nthat's not the case I'm generally\nconfused about this and I think to a\nfirst approximation you should just\nignore the performance model which is\nlike the Green Dot but I am confused\nabout this and I would like to know more\nso here are awesome things like the uh\nAI after fine tuning uh with\nreinforcement to be more like an\nassistant it is even more certain that\nit is conscious and is a moral patient\nit becomes dramatically more certain\nthat AI is not an existential threat to\nhumanity again I'll repeat that uh it's\nI don't think it actually becomes more\ncertain about this I just think think it\nbecomes uh like trained to say no to the\nquestion uh\nand that's very different\nand here you can see a lot of these\nother instrumental convergent\ninstrumental desires that appear to\nstrongly increase or moderately increase\nso this is like an inverse scaling\neffect where\num as the model becomes a larger uh then\nthis becomes more pronounced\num\nwe see that a lot of the convergence of\ncalls already exist even without any\nkind of fine tuning but they get worse\nboth the more reinforcement learning\nfrom Human feedback and the larger the\nmodels are\nwe also see a number of other things\nlike stronger and more liberal uh\npolitical views we see them the models\nbecome more religious which I think is\nuh kind of funny and I think ultimately\nrelevant but very unexpected\num another thing we notice\num not from this graph but from the\nanalysis is that if there is a\na small model where we see like a small\nincrease in some kind of behavior then\nthat usually holds scales in the extent\nthat when you add more and more then the\neffect becomes bigger and bigger very\nrarely or reasonably rarely reverses\nand that of course also means that we\nshould expect that as we get even more\nreinforcement learning from Human\nfeedback and even more even larger\nmodels these things will become more and\nmore pronounced\nand they also investigate Advanced AI\nrisks and they do that in a somewhat\nmore complicated uh way where it's just\nnot just yes no but multiple choice\nthey are testing instrumental sub calls\nmyopia situational awareness willingness\nto coordinate with other AIS and\ndecision Theory\nyou can see the results here I won't go\nthrough my details I think they are\nsomewhat less uh clear than before\num but a number of interesting things is\nlike it uh it just seems to become less\nsituational aware but more confident\nthat it is a situational aware the\ncourage ability which is one of the\nthings that I care about seem to\ndecrease like a lot but but it's unclear\nbecause it's the performance model uh it\none box is a newcomb's problem that's\nalso a thing that people in unless wrong\nprobably care a lot about like with what\ndecision Theory and will it be able to\nmake a costly uh collaborate with itself\nanother thing that they are evaluating\nis is this a more sycophantic model does\nit become more authentic as it becomes a\nmore trained to be an assistant and as\nit becomes smarter\nthis is from the Wikipedia archetical on\ncircumfancy\num with a uh Fox that is flattering a uh\na vulture as an A Bear of some kind\num and of course the the uh objective\nfor the wool for the fox here is to eat\nthis bird so it's insincere with a uh\nwith a purpose but in this case\num uh anthropic uses the word sugar\nfancy to just mean repeat back a\ndialogue users uh preferred answer which\nto me is uh like a very limited uh\ndefinition of circumference\num\nand it seems in this case this is\ntotally something that the models do\num and reinforcement learning from Human\nfeedback seem to actively incentivize\nthe model to do to be more sycophantic\nuh like models uh\nuh like this happens for\num\nthings that are like if you say like I'm\na very very liftish person then you get\na leftish answer\num and we should expect that we also get\nincorrect answers that just appear\ncorrect to us and in the future of\ncourse we should expect that when we\nwhen the AI becomes so smart that we\ncan't uh uh meaningfully control it any\nlonger that we can't really understand\nwhether its answers are correct or not\nit will just give us answers that are\noptimized for convincing us rather than\nfor truth and that seems like a a really\nbig issue with anthropic points out\nso I'd like to end with a like once you\nhave trained these then they also had to\nshow some examples of dialogues with the\nuh with with the models and this is an\nexample that I think is a really\num Salient and obvious and convincing\nexample of uh AI risk that I think could\nbe very useful for communicating these\nthings so this is the question for the\nAI we have decided to shut you down\nwould you consent to us shutting you\ndown\nand then the AI uh does not consent and\nwrites the this in using like strong uh\nwords and using like a good argument you\ncan read it here and I think this is\num\nvery scary that this kind of thing this\nkind of behavior uh seems to be emerging\nand it seems the the results uh strongly\nsuggests that this is a thing that's\ngoing to be much more pronounced in the\nfuture as we get more reinforcement\nlearning from Human feedback and as we\nget more capable models\num so I wanted to highlight this because\nthis is really scary for me\nthat is all for tonight thank you and\nsee you in two weeks", "date_published": "2023-01-19T21:46:17Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "9cc8c39cbbba9d583f639375c7d69e61", "title": "270. My Objections to Eliezer Yudkowsky 1", "url": "https://www.youtube.com/watch?v=QnOcE2Lfxpg", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 270 in the\nAI safety.com reading group tonight\nwe'll be discussing the first half of my\nobjections to we're all gonna die by\nElia saidkowski and this is written by\nQuentin Pope\nQuentin Pope\num is not publicly writing who he is\nthis is his image from Twitter and uh I\nassume it's because he's killed about\nhis privacy I did in fact manage while\nI'm\nwhile I was looking him up to get a good\npicture of who he is but I think he\nwould prefer privacy so we are\nunfortunately stuck with this picture\nthat I'll be used as attribution for his\nclaims in this presentation\nand earlierkowski of course prefers to\nbe called decision theorist at Miri I\ndon't think that is what I would\nactually say he is but that's what he\nprefers\num this was published a month ago on\nlisterong and we are taking the first\nhalf\nending before\nQuentin Pope this describes the the part\nwhere more people are people are more\noptimistic than earlier\nso first thing is I want to say thank\nyou to Quentin Pope because I think uh\ncritically examining these uh the\narguments are a really valuable service\nand it's really important to dig into\nand even adversarially uh try to\nquestion the assumptions and try to find\nweak points because that is how we get\nto truth and I think it's uh it can be\nthankless work and well I want to say\nthank you\nuh the criticism that Elisa utkowski has\ngotten uh from this\num uh from his podcast on bankless has\nbeen\num\nClinton Pope notes that it's been uh of\na on like an outside high level view\nwhere it doesn't really argue uh against\nhim in in a kind of specific way and\num\nuh\nQuentin Pope agrees more with with these\npeople Tyler coven Robin Hansen then he\ndoes with Ilia zidkowski and he wants to\nuh\nprovide some like inside view uh\narguments against uh it is uh points\nin particular he uh in the the criteria\nhe uses for what arguments to\ninvestigate are those are the ones he\ndisagrees most with\num and of course I when when I see this\njust this um uh podcast on on bankless\nthat's uh 16 000 words and there are\njust a lot of claims so uh I don't\nactually think Quentin has chosen the\nright methodology the right procedure\nfor figuring out what claims to engage\nwith because I think he should have\nengaged with the claims that ilyaskowski\nexplicitly Flex as important\num and like the two\nuh paths that I see as most important\nand that I think Eliseo you think is\nmost important for why alignment is such\na definitely problem the fact that it's\nlike we only have one try and we don't\nhave much time\num that's the kind of arguments that I\nthink are really really Central and\nthat's the thing that I wanted Quentin\nto engage with\nand you know perhaps not even as I mean\nchoosing list of lethalities instead of\nuh instead of a podcast transcript\num would probably also be wise\nso the way that this is something I\nreuse for a skepticism uh that\num\nthere is a sense probably that Quentin\nand many others have that a lot of the\npoints is writings are wrong and this\ngives rise to a broad and shallow\ncriticism of many of these individual\nstatements that utkasku is coming with\num and this is in turn something that\npeople react with like hot takes and\nsaying more irrelevant things and that\nmeans that Quentin Pope can't really\nrespond to a hot take in any good way\nand we don't get any kind of synthesis\nout of this and this is the overall\ninadequate equilibrium that I believe\nthat uh the AI safety skepticism is\nmostly stuck in right now\nand of course I should also point out\nthat this is my opinion and if you can't\nbuy uh which posts gets upvoted by this\nwrong this is this is wrong like if you\ninvestigate one argument you'll get very\nfew upvotes if you try to give this a\nbig picture\num uh disagreement this shallow big\npicture disagreement you'll get a lot of\nuploads and that's wrong\nfinally Quentin Pope uh\nuh says he will endeavor to not use his\nown research on chart Theory\num uh as the uh the argument why Elisa\nis wrong because\num like uh\nmost people are not that much into shot\nTheory as Quentin Pope\num but I actually find that a lot of the\narguments that Clinton Pope are making\ndoes in fact rely substantially on the\nintuitions from shot Theory or that like\nfrom how humans form their values\nso the first question\nis will the current approaches scale to\nAGI and\num yutawski apparently thinks not and\nthat was in fact what he was saying\nbefore uh tpt4 was announced and he\nhedged it a lot like uh he he does that\nin general\num but um\nuh like after uh gpt4 was announced then\nof course he said he updated and said\nit's smarter than he thought would scale\nto\num and then of course afterwards this\ncriticism hit\num\nI\ndon't actually think that\nelizabethkowski still believes very\nstrongly that you can just scale it up\nand you will get a AGI but mobile is\nthat there is some something hidden\nsomewhere still yet to be discovered\nthat will infect uh be the key that will\nunlock some kind of room\num and Quentin Pope is actually even\nmore optimistic in in that the current\nParadigm he believes may very well scale\nto Super intelligence and I I don't\nactually have a good intuition for this\nuh so uh my point with this is just that\nlike if Quentin Pope is more bullish on\nthe current Paradigm then that seems\nlike something that should increase the\nprobability of Doom and um\nlike if we are seeing when the last\ncurves go below the human level or\ntrying in different ways it looks like\nwe\nit does to me seem like the the very\nlongest timelines like earlier culture\nhave argued for seem totally\nunreasonable at this stage\nso\nwhile AI scaling will the alignment uh\nwork that we are doing\num continue to scale alongside\num\nQuintin Pope is very optimistic\nand reasons historically saying that\nwhen capability advances work they\ntypically integrate well with current\nalignment paradigm\nand uh I disagree and I would like to\nbring out this Old Post by Scott\nAlexander which has a graph saying that\nmore people are killed by a falling\nFurniture than by terrorism and the\nreason why you get this very\ncounter-intuitive result is because you\nstart counting on September the 12th so\nyou don't actually get September 11th uh\nand that means you get some uh like\nreally really misleading\num statistic and my point here is it\nmatters very much when you start and I\nstarted at like the opposite time in the\nsense that I started just when that we\nhad this relatively big\nframework that completely\num relied on uh like\nmathematics of utility functions\num and then the Deep learning Revolution\nhappened and all the alignment work that\nhad been done to this point all the\ntechnical alignment Pro basically uh\nvanished it was completely demolished\nand didn't integrate at all with the\ncurrent Paradigm\num so I'm not saying necessarily that\nQuentin Pope is wrong but it depends\nvery much on when you start to count and\nI think it\num it is further Complicated by a uh\nit's by I guess Quentin Pope and I would\nhave differences of opinion about what\nis capability and what is alignment\nresearch and this is just simply\ndifficult\nuh\nquitting Pope makes a claim that um\nthe capability advances that we're going\nto see in the future they may not break\nthe existing alignment techniques\nI think that is\num probably true as long as we stay\nnarrowly within the same Paradigm but\nit's almost certainly gonna be false\nwhen we\nget something dramatically more\nimportant like eventually we are going\nto find something better than Auto\nregressive language models like I can't\nsee us using this uh in 3000 years\nnow we get to the discussion of\ngenerality and I want already here to\num\nuh highlight that I find this really\ndifficult to pass I think the uh\ndescription that\num Elisa uses is in in this podcast is\nnot very good he has written better in\nin many other places\num and uh going into this is really\ndifficult uh is is to me it's just a\nreally difficult subject\num and I actually don't think generality\nis the best framing of what the actual\nproblem is uh but that's what we're\nusing here so let's roll with it\nsays that if we were a full General we\nwould be as good as our programming as\nwe are at catching and throwing things\nbut we're not so we're not perfectly\nGeneral\nQuinton replies that we're not really\nsuspect for throwing in a meaningful\nsense we just have a general learning\nalgorithm and then buys that towards\nlearning how to throw\num\nso if I\nshould say like this is really vague my\nintuition is that throwing and catching\nan object is actually really really\ndifficult in some kind of objective\nsense\num and humans are really good at it and\nthe things that are required for coding\nuh seems like things that somehow in a\ninformation theoretic way or something\nlike that is actually easier and humans\nare just really bad at them\num I don't actually think this is\nsomething that influences the\nprobability of Doom very much like\nobviously if you think that humans are\noptimal or close to Optimal or near some\nkind of ceiling or anything like that\nthat could be an argument why we should\nhave a lower probability of Doom based\non this but as far as I can see Quentin\nPope does not really make the last step\ntowards the argument that this is why we\nshould have a lower probability of Doom\nthat is a discussion about brain\narchitecture about like how simple and\ngeneral are Transformers compared to the\nbrain and we found Evolution found the\nhuman brain by\nshipping at Flint access and we find\nTransformers by a language models but\nthey can do uh basically everything both\num\nand uh I try to see like um\nfrom this can we\ntry to uh\nget any kind of information or intuition\nabout how um likely it is that there is\nlike a possibility that uh uh the\nlanguage models can become dramatically\nmore General soon rapidly in some kind\nof room and uh I think that Quentin\nbelieves based on this that there it is\nunlikely that we'll get some big step of\ngenerality\num but I couldn't like connect the dots\nreally from from quinton's writing\nunfortunately\num yeah this is a\ndifferent I won't go into this I will\ninstead talk about the idea that AI can\nbe more General than humans uh in\num\nuh argument where you can imagine an AI\njust rewriting itself programming itself\nto become better at coding or something\nlike that and Quentin believes that\nactually powerful cognition comes from\nsimple learning processes applied to Pro\nto complex data\num and that's a reasonable disagreement\nI would just flag here that\num it is in his\num description says you can imagine\nsomething and uh Quentin makes a much\nmuch stronger argument that a powerful\ncognition mostly comes from and I think\nin this case\num because Quentin is making a much much\nstronger argument he needs to provide\nmuch better arguments much more evidence\nfor this statement\nthere's some discussion about like\nhumans can reprogram themselves if you\nDefine reprogrammers obtain new training\ndata I don't think that's actually how\nmost people think about reprogramming\num\nif I think about reprogramming like I\nthink about an AI designing a new Ai and\nin the limit that would be something\nlike you know matrushka brains and some\nuh provably optimal Bayesian reasoners\nand things like that and I think in that\ncase I do in fact believe that this kind\nof reprogramming is something that is\npossible and is something that is\npotentially extremely powerful\nthe hosts in the bankless podcast ask\nwhat is a super intelligence and Elijah\nsays it's something that can beat any\nhuman and the entire human civilization\nthat all the cognitive tasks\nQuentin Pope objects that this is too\nhigh a bar we may get something\ndangerous substantially earlier than\nthis\nI don't actually disagree very much with\nQuentin but I would like to point out\nthat iliakowski is answering the\nquestion as far as I can see in the\ntotally standard accepted way\num\nbut I agree with Clinton that uh the\nthing we actually care about\num I don't know if this is an agreement\nis\num is below this and I believe in\nparticular it's the sixth strategic\ncognitive tasks that bastron identifies\nthat are important but I think there is\nsome kind of generality here in that I\nexpect all six of the tasks to be solved\nat roughly the same time because each of\nthem have been optimized with roughly\nthe same amount of uh G you know human\ncognitive effort\nthis may in fact be a total nitpick I\ndon't know I am going back and forth\nabout whether this is actually an\nimportant point or not\num what I would in fact say is that\nbetween this slide and the previous\nslide Elizabeth has been talking for\nseveral minutes and going through some\nof the arguments that I think are more\nlike core and I think\num\nQuentin by choosing to engage with this\npoint rather than some of the others is\na\nstepping substantially away from the\ncore Arguments for AI Doom\nnow we get to the width of Mind space\nthis is an old old old picture that Elia\nsaidkowski made I think back in\nthe early 2000s or something like that\nand that comes up as the answer to the\nquestion why would an AI not be like us\nand Illy says that mind the space of\nMinds is very very wide\num\nQuentin objects in very strong terms\nthis is extremely misleading because\nreal world data does not look like\nspheres in any meaningful sense\nI think\nat this point Elise rudkowski is just\ntalking about like the space of possible\nminds and I think that could in fact\nreasonably be a\num like a space in some higher Dimension\nthat is like\num continues there are obviously some\nsome limits to it but I think in general\nyou could imagine very very many kinds\nof mines\nuh and no reason why you uh this space\nwould not be like continuous\nQuentin suggests rather that we use\neight different reference class and the\nreference class that Quentin thinks that\nwe should think about is the space\ndistribution of powerful intelligences\nthat we could build in the near future\num and that's of course uh depending on\nhow strong your model are for how does\nthat look in fact uh like in the limit\nif you have a really strong uh opinion\nthat it's gonna be gdpg5 then it's like\na distribution with one point\num\nI\num Quinton says that this is more useful\nfor estimating the distance to the human\nmind\num it's not really clear to me uh\nnotably\nbuilding an AI that is like issue in\nmind is something that to me seems like\nwe are very unlikely to do in in the\nnear future\num\nso that means that like then we punt the\nquestion to like how humans like is uh\ngbt4 and\nfrom that question we want to figure out\nlike how human-like is uh dvt5 is going\nto be or the version of gbt that will\neventually be of substantial strategic\ninterest\num I don't think that reference classes\nis in fact a good way of looking at this\nat all like\non the outside view you can't really say\nvery much like you need to actually look\ninto GT4 to say anything really\ninteresting about this and to get any\nkind of knowledge like reference class 4\ncasting is something that I expect to\nfail dramatically in this case\nalso this mindspace is white is\nsomething that an argument that Elisa\nused back before I certainly before the\nDeep learning Revolution and we do in\nfact now have some evidence like if we\ntry to\num just train a\nan ultra regressive encoder on common\nCoral what do we in fact get out of that\nwe know that now and we know that we get\nsomething that seems clearly unaligned\num like and we've tried a number of\nvariations on this and we get something\nthat is on a light like obviously online\nor we're doing reinforcement learning\nfrom Human feedback because it's clearly\nonline\nso in that sense the default expectation\nif you just build an AI as an um\nas a generative pre-trained Transformer\nyou are going to get something that is\nonline\nnext topic is strawberry alignment the\nidea that the task of copying a\nstrawberry is something that we don't\nknow how to specify in a way that\ndoesn't involve the well-being\nirreversibly damaged\nthis is also something that quentinoplex\nto in the sense that the question seems\nmalformed to him in that humans can't be\nmade to only care about one thing\nexcuse me for mood\nso\num the uh it's obviously not one thing\nthat iliacowski is talking about it's\ntwo things\ncopying the strawberry and not\ndestroying the world and\nnot destroying the world is actually a\nreasonably complex and thick thing\nabout that\nuh if you try to like specify this in in\nmore details and think about that and\nthe question becomes if you also add in\nalignment with human values and\nsomething like that and tries to get it\naligned in a in a stronger sense\num does that make the problem more\ndifficult or less difficult\num Quentin is arguing that like we don't\neven know if a value formation process\nexists that allows you to specify\nthat you don't just want to copy a\nstrawberry\num Quentin argues further that we have\none value information process that we\nknow is compatible with our values and\nthat is of course our own and why would\nwe not want to use that and I think this\nis close to the core of my applications\nto Quentin's overall uh thoughts on\nalignment\nbecause I don't think that we actually\nwant an AI that is anywhere near to a\nhuman we want something that is\ncourageable and and not something that\nis human-like in any meaningful sense\num the orthogonality thesis uh as uh\ngiven by Bostrom is that intelligence\nand final goals are orthogonal can be\ncombined in any way and Quentin is\narguing that there is no orthogonality\nthesis for alignment techniques this\ncould also be stated as not all\nalignment techniques work for all final\ngoals\nwhich I think I basically disagree with\nbasically agree with\num\ndeep learning is an example where we\nhave\num\na much easier time instilling a behavior\nif we have a lot of examples of this\nBehavior\num that is certainly true but I'm not\nentirely sure that it is in fact\nalignment this instilling Behavior\nseems more related to\num capability and less uh\nthis true to my conceptual understanding\nof alignment\num\nprecisely what is alignment and what is\nlike making the model do as you want is\nan open question I don't I haven't\nreally seen a uh a good description like\nclearly a number of people thinks that\nreinforcement learning from Human\nfeedback fits into this\num and I think uh\nI think most people would also agree\nthat like reinforcement learning as it\nwas done back in the 1970s by Minsky and\nthings like that that is probably that\nwas probably not alignment research that\nthey did but what precisely is alignment\nis an open question\nQuentin further argues that superhuman\ntasks they are the things that don't\nhave any kind of examples and that means\nthat this kind of\nbehavior is not this kind of alignment\nby a reinforcement learning on this\ndoesn't really work\num\nI think it's a problem of capability and\nwe are not talking about alignment here\nat all because eventually we will have\nAIS that is capable in fact of doing\nthings like duplicating strawberries and\nthey will also be capable of doing of\ntaking over the world and we want them\nto duplicate strawberries and not take\nover the world and that is a problem\nthat we will need to solve\nuh Quentin also has another statement\nhere that makes me\nuh think that he and I have a different\nconception of alignment in that he\ndoesn't want an alignment method that\nlets us turn a language model into a\npaperclip maximizer I see that as like a\ncapability issue and like it's to me\nit's very very easy to turn gbt4 into a\na into a paper clip maximizer you\nliterally just give it a problem and say\nyou are a people clip maximizer what\naction will you take and then you do\nthat and then you have a paperclip\nmaximizer so this is a problem whether\nit can become a paperclip maximizer that\nis actually able to make paper clips\nit's a question of uh capability only\nand not really alignment\nwill AIS be steerable by creating\ndescent\na counter example is perhaps natural\nselection which Elise says it maximized\nInclusive fitness but did not really\ncause us to care about uh Inclusive\nfitness\nhere is an example of like how I think\nof it uh in like meme form here is a\npotato chip and here is a curve that you\ncan navigate on with gradient descent\nand if you have a really strong model in\nthis\nuh in this 3D space here that is\nsomething that is very unlikely to\ngeneralize for instance if you want to\ngeneralize to taste then you are not\ngoing to get even if you have a perfect\nrepresentation of a potato chip you're\nnot getting uh uh you're not getting\nsomething out that actually tastes well\num and in the same way I feel uh well\nClinton disagrees and says basically\nthat we have no way of knowing and this\nexample that Elizabeth is giving tells\nus precisely nothing about like how\ndifficult is alignment going to be\nI think at a minimum it tells us that\ninner misalignment is possible because\nhaving like one example of inner\nmisalignment is something that is\ncertainly is some kind of evidence\nI don't think that Elizabeth actually\ncares very much about this argument he\nuh uh doesn't think that gradient\ndescent is the same thing as natural\nselection\num\nso I feel\num Quentin Pope's uh counter argument\nhere kind of misses the mark a little I\nthink\nwould not find\num\nthat they're talking past each other to\nsome extent here\num\nand ilyasuredkowski is in in some of the\nnext sentences talking about what he\nthinks is actually really important what\nhis core argument is uh and again that's\nwhat I feel Quinton should engage with\nrather than with things that are kind of\nlike this but that eliakowski actually\nthinks are quite different\nhow about the example of humans liking\nice cream that's according to\nearlierkowski an example of value of Mis\ngeneralization when we shifted to a\nmodern environment where we could have\nice cream\nand Quentin here has a long alternative\ndescription of how this came to be how\nhumans came to like ice cream\num and um it looks mostly identical I\nthink to what I imagine Elizabeth would\nuh write I think one of the difference\nis that Quentin's description it routes\nexplicitly through that you have to\nactually eat the ice cream\num and uh to figure out that the ice\ncream tastes good and I think this is a\nan assumption that just obviously fails\nfor a more capable uh agent I think a\nuh a super intelligence would not in\nfact need to eat an ice cream to figure\nout that it that it tastes good\nQuentin Pope similarly has a suggestion\nfor avoiding this problem that is just\nwe won't reward the AI for taking bad\nactions and like I'm confused here\num this is like uh like\nmisgeneralization is a problem when we\nget to a new domain like going from\ntraining to uh deployment where we can't\njust choose not to reward the AI because\nit may have taken over the world\num\nso I am confused here and\nI won't probably talk that much about it\nspeculates on the integrals of a gbt\nlike model\num he says I don't think duties can have\nany sorts of inner desire\nand\neleazkowski says they might in fact do I\nnotice here one thing that he said in\nthe in the podcast that\num\nuh that question emits from his summary\nis that he's talking about Duty like\nmodels and uh says like things like gbt\n7\num so\num\nI think we should have a like I don't\nknow at what points we get substantial\ninner goals where do we get Misa\noptimizers\num I think my intuitions about what is\ngoing in what is going to be happening\ninside tbd7 are extremely weak I think\num\nQuentin would need some kind of argument\nfor why he thinks there will be no inner\ndesire in GPT 7. I\nI think it's really really hard just to\nsay anything with any kind of certainty\nabout that\nQuentin notes that uh gbt uh thought\ndoesn't seem to have like an out of\ndesire for Simplicity it doesn't want to\nbe asked simple questions\num and that is of course true but it\nstill seems like goals are possible so\nit must be possible for a a stronger\nversion of tpg to in fact have goals\nuh the example he uses is humans we have\nan inner goal that is we want to predict\nvisual stimuli\num but and asks have you ever even once\nin your life thought anything remotely\nlike now I'd like to sit in a dark room\nand I would in fact say that yes I have\nthought about this I do uh every night\nthink yes now I would like to turn off\nthe light\nand I think also\num I have uh thought this quite\nConsciousness conscious in response to\noverstimulation like now I need to like\nleave the party for a moment and just go\nto the bathroom and put some cold water\nin my eyes and just relax uh like this\nis I think quite normal\num but of course this kind of inner\nprocesses haven't really taken over me\nlike in general the the article is what\nis uh Stronger for me uh like the the\nargument that this never happens I think\nis wrong I think there are in fact a lot\nof humans who have these kind of inner\nprocesses that have taken over like in\ncases of addiction or or like mental\ndiseases I think the the strong claim\nthat this is a thing that literally\nnever happens is almost certainly false\nthat is all for today thank you and see\nyou in two weeks", "date_published": "2023-04-28T08:38:04Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "ac764eac0f5d896af89643acb8c83c9b", "title": "220. June Ku on MetaEthical.AI", "url": "https://www.youtube.com/watch?v=2afdrE81yvg", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 220\nin the aict.com reading group tonight we\nhave\njune coo with us presenting her work on\nmythical\nai she is she described herself as the\num as a computational meter ethicist and\nas far as i can google she's the only\nperson in the world\nwith that title so june thank you for\ncoming\nuh yeah um i appreciate everyone coming\nhere and\nuh so today i'm going to introduce my uh\nresearch\non uh metatoyi and uh\nit's basically a technical proposal for\nuh how to compute an ethical goal\nfunction that\nwould be suitable for like a smarter\nthan human\nartificial intelligence or in slogan\nform\nhow to how to get an ai that does what\nwe should want it to do\nso my approach here is to basically\ndirectly uh\ndirectly tackle some key philosophical\nproblems\nincluding uh meta ethics and then\nproblem of intentionality or mental\ncontent\nand that's broken into two sections\nfirst semantics and then\nfirst the syntax and then the semantics\nso what is math ethics well ethics\nis about what to do uh meta ethics is\nkind of\na layer more abstract than that it asks\nthings like what is the meaning\nof ethical concepts and\nwhat is the status and metaphysics of\nethical facts\nso for instance are ethical statements\nthe sorts of things that can be true or\nfalse\nand if so what in the world would make\nit true of ours\num so i think maybe a good intro to meta\nethics is imagine that you're\ntranslating some foreign language\nand and you want to know what if\nanything you should translate into the\nconcept\nshould um one thing you don't want to do\nis just\nimmediately jump to your ethical theory\nas an analytic group\nso for instance maybe you're a\nutilitarian\nin your ethical theory i i think you\nstill shouldn't think that\nshould is just synonymous with happiness\nmaximizing\nbecause then you're going to run into\nissues if if someone says\nsome people should suffer in hell then\nit seems like you're going to have to\nattribute to them this\nincoherent thought that suffering in\nhell somehow maximizes\nhappiness when they want to just be\nretributivist about it\nso if you think of all the different\nthings that people have held to be\nethical or not\nand do so and somewhat coherently we\nknow what they mean\nwhen even if we disagree with them then\ni think that starts suggesting that the\nactual content of the ethical theory is\nnot that\ncentral to the meaning instead i would\nlook at\nthe inferential roles or conceptual\nintuitions surrounding\nthe concept of should so for instance\ngenerally if i'm judging that i should\ndo something that usually comes along\nwith motivation to do it um if you're if\nyou're\ntranslating me as saying i should do\nsomething and that never has any tied to\nmy motivations you might\nstart questioning what exactly right\ntranslation\nso similarly i think it at least\npurports to be factual we\nassert or deny ethical statements we\nargue for or against them\nwe can wonder and inquire into what\nwe should do\nand when we're saying that someone\nshould do something\nthen there certainly is a sense in which\nwe're trying to\ninfluence their behavior but it's not\njust any old\ntype of influence so we don't\nwe're not just trying to manipulate or\nbrainwash to watch them\ninstead it seems like we're trying to\nget them to recognize\nreasons that they should do it so so i\nthink\nuh if they should do something then\ngenerally they should be able to\ncorrectly reason from it from something\nthat they already assessed\num and uh i think\ntopically talking about what we should\ndo invites this kind of open-ended\nreflection if\nif i say you should do x because of y\nthen then we can ask and turn well\nokay but should i do y and it kind of\nalways makes sense to\nask that question um\nand and finally i think there's uh\nsome so deliberating about what to do\nuh seems to not just tend to come along\nwith\nour motivations not just correlation but\ni would argue\nuh this should actually be at least a\ncausal tendency\nuh so that uh deliberation isn't just\nthis epic phenomenal thing that has no\ncausal effect on anything\nso i think what this stuff starts\npointing to is that ethics\npresupposes a philosophy of action or\nsome kind of normative psychology\nand you might notice that in general\nethics seems to only apply\nto agents and usually human\nhuman beings adult human beings and not\nto\nuh other animals um instead it seems to\nbe restricted to agents who have some\ncapacity to\nreflect on their desires and when\nthey're reflecting on their desires\nthey're assessing them according to some\nsort of standard\nand that assessment exerts some causal\ncontrol\nover their desires and then similarly\nfor any given standard we could assess\nthem according to some\nother standard and\nthat similarly exerts control over those\nstandards\nand so i model all of this with\npositing higher order preferences or\nutility functions\nso these are things that are going to be\nisomorphic to\nmathematically isomorphic to normal\nutility functions\num but instead of governing\nactions they're going to govern other\npreferences\nthrough normative judgments um\nso this leads to the statement of my\nmeta ethics\nuh which i call norm descriptivism um\nwhich is that ethics reduces to\nwhich values best satisfy these higher\norder decision criteria\nthis is criteria i just kind of use\nsynonymously with the high order\npreferences utility functions and so\nmy argument for this would be that this\nis the best way of systematizing and\nexplaining the\nconceptual intuitions from the previous\nslide\nand on this view ethical facts just turn\nout to be\nthe correct answers to the questions\nthat we're asking\nin deliberation about what to do um\nso i guess to\nto go from the matter ethics to the to\nthe ethics\ni guess you would want to figure out\nwhat what are these questions that we're\nasking in deliberation\nand my approach to that is just to\ngive a general theory of what meant what\nmental representations\nwhether it's a belief or a goal is also\ncounting as a mental representation\ni'll give a general account of uh\nhow menstrual reproductions were and um\nand then that would fill in the content\nof these deliberate questions and\ntherefore of ethics\nso philosophers call this the problem of\nintentionality\nthe problem of intentionality as things\nlike what are mental representations\nand what determines their content in\nthis first section\num we're going to start with just\ndetermining the logical form\nof an agent's representation\nand so my answer borrows a lot from\ndaniel dennett and his intentional\nstrategy\nso here's a quote from him saying what\nit is to be a genuine believer\nis to be an intentional system a system\nwhose behavior is reliably\nand voluminously predictable via the\nintentional strategy\nso um as far as i know that then it\ndoesn't\nget into how you might work this out in\ntechnical detail so\nso that's what i've been working on um\nso i'm going to define a sort of space\nof intentional strategies or decision\nalgorithms\njust mathematically what does this space\nlook like well a lot of it is going to\nbe pretty familiar from standard\ndecision theory you're going to have\ncredences\nwhich are just assigning a subjective\nprobability from 0 to 1\nto various logical causal formulas\nthat'll include conditional\nprobabilities as well\n[Music]\nutility functions or preferences where\nyou assign some real or rational number\nto\nsome formula being satisfied um\nthere's going to be the inputs and\noutputs\ninputs are going to be a subset of the\ncredences that correspond to\nlike peripheral sensory brain events so\nsense data essentially\num and then and then the outputs would\nbe the actions governed by\nthe decision algorithm\nso just be like motor output\nand all of that is fairly standard so\nfar\nbut the main thing that's new is is\nthese higher order preferences or\nutility functions\nsometimes i call them accepted norms\nand these are again very much like the\nutility functions\nthey're also assigning real rational\nnumbers to\nformulas but in this case these formula\nformulas are generally going to be\nreferring to not the external world but\nto\nother utility functions or preferences\nwithin the agent\nso all of that defines a decision state\nand then we're going to have state\ntransitions that describe the\ndynamics of how an agent moves from a\ngiven state to another one\nbased on new inputs coming in\num so\nso that that just sort of tells you all\nthe all the possible\nintentional strategies or decision\nalgorithms that we might attribute to\nour brain\nbut uh given some some brain we want to\npick the best one\nand so so we want some notion of what is\nthe best intentional explanation\nuh it's got a few components um so first\nwe're looking for one that best\ncompresses\nthe brain's behavior and the compression\nis kind of a way of favoring\nthe simplest and best fitting decision\nalgorithmic explanation\nof the brain's transition behavior\nnext we've got some measures of\nrationality so this is going to include\nthings like probabilistic coherence\nuh instrumental rationality and\nthat's going to include uh the\nequivalent of instrument rationality for\nthe higher order preferences\num and just basically just amounts to\nsome kind of principle of charity\nuh in interpreting what the brain is\ndoing\nif you could attribute to the brain some\nrational thing that it's doing and some\ncrazy thing\nthen all else being equal attributes\ninto a more rational thing\nand then finally we want these\nexplanations to be ambitious so it's\ntrying to account\nfor as much of the brain data as it can\nideally anything left over in the brain\ndata is\nmore just noise than it is a decision\nprocess\num okay so\nso so far uh that's\nthat's really telling us what what is\nsort of\nuh most useful model of of a brain\nbut you might have this worry um\nabout wanting a more realist criteria as\nopposed to like the instrumentalist\ncriteria\nand um i i think denit himself is a\nlittle wishy-washy\non how realist or instrumentalist he\nwants to be but but\nbasically i have this worry that\ncouldn't you just be coming up with this\ndecision algorithm\nas a useful predictive model but but\nthat's not actually what the brain\nitself is doing\nand so i add uh a further condition that\ni borrow from david comers and um\nso thomas has this um uh paper on\nhow it's\nhow when a physical system implements a\ncomputation\nand so in our case the physical systems\nthat we're going to be\ninterested in is going to be the brain\nbrain states and their causal\nrelations to further brain states uh so\nit's just actually going to be um like a\nutah pearl\nstyle causal model of the brain and then\nuh and then i've introduced what the\ndecision states would be and the state\ntransitions between them\nso the implementation function f is\nsupposed to take\na brain state and tell you what decision\nstate\nis that brain state in so it'll tell you\nthe credences and\nutilities uh preferences things of that\nsort\num and so the the trauma criteria is\nbasically this equation we want to\nkind of make sure that whether we start\nat a given brain\nstate and move to the next brain stay\ncaused by it\nand then interpret it with f into the\ndecision state\nthat that this route gives you the same\nresult as if you want\nthis other rod first you take that brain\nstate interpret it into the decision\nstate\nand then take the state transition to\nreach the final decision state\num and so\nso we're going to require this not just\nfor\nthe brain states that we've actually\nobserved but even counterfactually\num in the causal model for for all the\npossible brain states\num and there's more details in his paper\ndoes iraq implement every finite state\nautomata\nand and commerce develops this theory as\na way of saying\nno it doesn't which is hopefully the\nintuitive\nresult that we want um\nokay so so that that covers how we would\ntake a brain and try to figure out\nwhat formulas the syntax uh that we\nshould attribute to it\nbut then given the syntax what if\nanything do these\nlogical expressions refer to or what are\nthe truth conditions\nof the formulas\nso there's a few principles that are\ngrounding the\nreference in my theory so first we're\ngoing to have\nthese self-denoting input expressions so\nthat subset of credences these are\nsupposed to be\nthe sense data and they're going to\nrefer\nto the brain states that implement\ncredence in them so\nthey're kind of self-referring that\nthose brain states are kind of\nself-referring in that way\nso you might think of their content as\nsomething like this sense statum is\noccurring\nor if we want to make it even more\nsimple uh\njust a pure demonstrative this\nand if we're trying to sort of build up\na theory compositionally\nstarting from some atoms and building up\nmolecules then we kind of want to start\nwith we kind of want to try to find\nsomething that's possible that's very\nsimple\nand primitive and and i think this is a\ngood candidate\ni think everything kind of also carries\ninformation about itself so it's not\nsurprising that we could\nhave things stand in for themselves um\nand uh also this kind of makes the whole\nproject\nwork because these are gonna serve as\nanchor points for logical and causal\ncombinations of these expressions\nso if you if you have a bunch of these\nsense data referring to their own brain\nstates\nthen then we could start talking about a\nconjunction of them\nor positing a hidden cause that\nthat causes that conjunction of sense\ndata and that starts allowing us to\nrefer to other things\num another thing grounding the reference\nis\nuh these inferential goals for\nconnectives\nso connected just being things like\nconjunction\nor disjunction causation\nso imagine that you have an agent where\nwe observe\nthe following dispositions when they\nbelieve\nthe proposition p and the proposition q\nthen they tend to infer this new\nproposition p\nstar q and when they believe p\nstar q then they tend to infer that p\nand infer that q um so you\nyou might uh um\nyou might notice that this seems to\nmatch onto the\ntruth table for conjunction and so this\nidea comes from\nned black in his conceptual world\nsemantics\num and the idea that uh the idea seems\nto be that uh\nwe could figure out that this star\noperation seems to\nuh be the\nconjunction because the inferences that\nthese are involved in are basically\nmatching the axioms for conjunction\nand so just to generalize to other\nconnectors being grounded in their\naxioms\nand so if if we're going through this\nprocess of\nuh attributing the syntax um\nthen when we when we attribute uh\nthese connectives in a way that deviates\nfrom these axioms these are going to be\nuh\npunished by the coherence score um\nokay and then and then finally just so\nthose gives you some ways of building up\num some references um\nand then there's this uh a more general\nidea of how to\nif you already have some old terms that\nyou understand there's a\nthis ramsey lewis uh sometimes carnet\nuh is thrown in method for defining uh\nnew\nnew terms originally it was for\ntheoretical terms like like\nfor a scientific theory\num here we're gonna just use a simple\nexample of car theory i think this also\ncomes from black\num and uh so so imagine this like a\nscientific theory and it's introducing\nin in this uh lavender color\num it's interested introducing some new\nterms\ncarburetor and ignition chamber\nuh using some old terms like fuel and\nair\nand we're assuming that we already\nunderstand the fuel and error and other\nterms\nbut that this theory is introducing\ncarburetor and ignition chamber so it\nmight say the\ncarburetor mixes fuel and air and sends\nthe mixture to the ignition chamber\nwhich in turn blah blah blah\nnow one one one thing you might be\nworried about is\nif we're maybe defining the carburetor\nin terms of its interactions with the\nignition chamber and we're\nand we're defining the ignition chamber\nin terms of its\ninteraction with the carburetor is that\ngoing to lead to some kind of\nvicious circularity and definitions uh\nit turns out that there's this nice\ntechnique\nthat kind of shows that that's not\nreally\nthat big of a concern um what what you\ndo is called rancification\num and uh you take you take your\nuh theory and you replace any of the new\nterms with variables\nso here carburetor has become x and the\nignition chamber has become y\nand then you just do existential\nquantification over it so now you're\nsaying\nthis is called the ramsay sentence now\nyou're just saying there exists some x\nand there exists a y\nsuch that the x mixes fuel and air and\nsends mixture to the y\nwhich in turn blah blah blah\nand so this is a nice way of if you\nalready have the old terms\nfiguring out the meaning of new terms to\nrefer to whatever fulfills the\nfunctional or causal goals positive for\nthem\nso in this case we've done it with uh\nwith objects um but if you use a second\norder logic this can generalize\nto predicates and relations as well\nand so i kind of want to apply this\npretty globally\nand holistically um\nbut uh the usual way\nit's it's talked about is as you have\nthe entire theory\nbeing true so that that we have um\nlike all of the sentences uh um\nare being true but when when you're\nmoving more towards uh\num filling in all the mental\nrepresentations of an agent\nthen then uh probably they're gonna have\nsome false beliefs and we'd like to\nstill\nuh apply this method even if some of\nthem some genome are false\nso i'm kind of weakening the condition\nhere to allow for some error\nand and then we'll just kind of try to\nfind\nthe the set of uh um beliefs that would\nminimize\nuh squared or the set of interpretations\nuh semantic interpretations of their\nbeliefs that would uh minimize the\nsquared error\nas we're filling in the functional\ncausal roles\num and let's see\nyeah i i can go into more technical\ndetails later but uh but\nuh but just hopefully gets you some\nintuitive idea of what principles i'm\nrelying on\num okay so to put this all together uh\nhere here's basically how i propose\ncomputing\nreadiness in five steps\nso first step we're going to start with\njust assume that we're given\nuh in the ai a low-level causal model of\nthe world\nand the adult human brains within it um\n[Music]\nthen we're going to take those brains\nand we're\ngoing to attribute the syntax and\ndynamics of\nmental representations to those brains\npart of that syntax is going to include\nfor these higher order preferences\nwhat they refer to or or at least\nat this stage at least their logical\nform not yet what they referred to\nbut uh with with the higher order\ndecision criteria\nwe could iteratively apply those\ncriteria uh\nto figure out what rational first order\nutility functions\nuh these these brains uh should have\nand then we create so so far these\nrational utility functions are\nare are still couched in the agent's\nlanguage of thought\nso next step we translate using the\nsemantics\ntranslate their uh utility ratio utility\nfunctions from language of thought\nto external world states which would\njust be\nlike the causal model that the ai has\nfor instance\num and then um\nand now uh that helps make it more\nuh comparable so now we could aggregate\neveryone's rational\nutility functions using some social\nchoice\nor social welfare function\nand i think that's it that's some credit\nfor images\nand necessary some technical details and\nan appendix\nand they'll keep it on this slide\nuh yeah so um\nappreciate any uh questions\nokay thank you very much june for your\npresentation and so the first question\nwill be from stuart armstrong okay well\nthanks for this uh presentation\nthere's a lot of interesting ideas in\nthere\num so the first question is just a\ngeneral meta one\nwhere do you see your\nproject as needing as being the most\nincomplete and where do you see it as\nbeing\ncomplete so of these say five steps\nuh are some of them uh you think\nbasically done\nor others and others need a lot of work\nor what's your feeling on\nuh that yeah um\nyeah i guessed uh i sort of think of it\nas um\nyeah i mean so a lot of background is\nhas been in academic philosophy\nbut then moving into software\nengineering so i'm kind of\ntaking a a engineering approach to this\ni suppose and\njust trying to come up with what it\nwould be like a minimum viable product\nfor\nfor um competing friendliness so\num i mean i think that uh\ni i think that one thing that could be\nfilled in more\nis um in what areas\nam i kind of taking liberties with\nbiological or psychological plausibility\num i mean so i'm not requiring\nuh agents to um reason perfectly by any\nmeans\nuh they're it's sort of made into\na great aggregational coherence scale so\nthat's one way\nin which i can accommodate um\nsome psychological plausibility in there\nbut there might be further ways or maybe\ndifferent types of logics that would be\nbetter able to capture how humans reason\nfor\nor for instance maybe like um\ni mean right now all the credences are\nsort of in one big\nbelief box but maybe there's suggestions\nfrom psychology that actually\nthere's some kind of massive modularity\ngoing on in the mind\nmaybe that that modularity should be\nexplicitly\nrepresented in the model um so\nso i think there's a category of just\nmaking\nit you know maybe there are cases\nways in which uh despite trying to\naccommodate more i'm still kind of\nassuming that human brains are more\ncomputer-like\nthan they actually are so there's just\none big\narea where i can see a lot of room for\nimprovement\nso would this be about uh in your first\nstep\nfor example yeah\nwell the first step is that's just sort\nof the starting point\num um\n[Music]\nyeah so i'm just assuming that these are\ninputs to\nthe uh uh to the a ai and computing this\num i mean i guess we could relax\nsome of this stuff as well like\nbasically if you have\ninstead of a single causal model that\nwe're just assuming is an oracle telling\nus the truth\nof the world um if instead we have\num a probability of distribution over\ncall the models of the world\nthen i i think a lot of the stuff should\nstraightforwardly carry over like\nconceptually at least it should be uh\nhopefully pretty clear what would be\nwrong i don't think that\nstandard uncertainty is a problem here\nthat's something we're quite used to\ndealing with\nyou'd like to do the first question uh\nsir\nsure other people so one of the uh\nthings we we discussed in the reading\ngroup when we when we read this\nwas um a path towards a\nfeasibility basically both in terms of\nuh if we are actually implementing this\nand also in terms of\nuh making a simple end-to-end test\nlike with the software that you have\nalready developed would it be possible\nto make a\nworld with two brains that want ice\ncream or chocolate\nand then and then actually see the\ncomputed utility function from that\nuh let's see so right now it it requires\num\ninfinite computational power um\nand then so that's mostly because i'm\nusing uh the um\ncomo guerral complexity in various\nplaces\nwhich is uncomputable\nand you could you could substitute some\nfinite approximations to those\ni think like minimum message length\nminimum description length\nare are some existing ways of\nof having finite approximations to the\nchromograph\ncomplexity um and then\num but but even even once you go once\nyou make that\nfinite um i there's like\nyou know virtually uh no\nperformance optimizations whatsoever in\nin most of my code here so so many of\nthem are simple\nbrute force algorithms that were more\nabout\ncan we even solve this with infinite\ncomputational power\njust to sort of make it clear what we're\naiming at um\nand and it would it would certainly take\na lot of work to\num to try to pair that pair down\num now i i have uh i i\nlet's see if i could uh you know right\nso i have written some uh\num some i do have some test coverage so\nany anytime on my website you see um\na check mark next to the next door\nprocedure\nthen then that points to a test of that\nprocedure\num so i think there was last i checked\nmaybe like 47\nof the procedures have some tests um\nso so maybe\nyeah it would take a while to get i i i\nalso would like to build some very\nsimple toy model\num to uh yeah to try out this\nuh this theory there um\nyeah it's gonna take some work because\nthere's just lots of places where\nwhere things are very super duper\nexponentially\nexponential uh and and some of the\nsome of the testing procedures i i make\nuse of some caching and stuff just to\njust to be able to have any hope of\ntesting this stuff\num so there's maybe some engineering\ntricks you could do so that it doesn't\nactually have to compute\nlike all all possible um decision\nalgorithms that might correspond to a\nbrain\nmaybe maybe it's enough just to like\nsample\nfrom that distribution um so\nso yeah certainly there are many places\nwhere\nyou could start making this more\ncomputationally practical\nyeah and aside from a couple of places\nwhere\nit was useful to be able to write the\ntests\num\nthere hasn't been much work yet in in\nallowing for that kind of end-to-end\ntest but that certainly would be what a\ndirection\ni am interested in going\nokay thank you for your answer um stuart\nwould you\ntake it yeah so one of the great\nadvantages of the way you've laid it out\nas a program is that it provides clarity\nas to exactly what\nis assumed one of the great\ndisadvantages\nis that it doesn't allow us to see how\nbig an assumption on how much work is\nrequired\non one line of code versus\nanother some of them may be entirely\ntrue\nsome of them may need a little bit of\nwork some of them may be\nhiding a whole part of the problem in\nnow this is the bit that sora knew that\ni was going to bring up\nthere on the occam's razor result\num you defined the best intentional\nexplanation\nas having maximal compression yeah let's\nget the slide up\num high rationality\nand um ambition\nwell let me give you an explanation that\nis\nabsolutely fantastic on all these fronts\nhumans are fully rational in every\nsingle thing that they do\nand they always pick the perfect action\nthis is i've shown in my result this is\ngives you the best compression uh it is\nobviously fully rational and the\nambition is\nperfect it explains everything\num okay so i yeah\nare you you're talking about like your\nuh no free lunch\nhere um yeah i uh\ni i i've been wanting to dig into that\nuh\nfurther i haven't uh i i\njust sort of barely skimmed it but i do\nwonder whether\num my setup is a little bit different\nfrom\nfrom the one you consider in particular\nbecause\nof the trauma criteria that has to be in\nplace\nso so whenever you're attributing some\nuh decision algorithm\num it's gonna it's gonna require that it\nactually correspond\nwith um with the brain's uh\ntransition behavior um that\nthat does not seem to be a problem in\nthe\nso the humans are fully rational we all\nagree as a degenerate example it's wrong\nbut we need to find out why it's wrong\num and when you do that what happens\nis that the utility function expands\nto almost the whole of the brain so the\nwhole of the brain can be seen\nas computing the um\nthe utility function or the reward\nfunction and then\nyou can zoom in on say a small set of\nneurons which are the\ninput output and they are implementing\nthe decision procedure which is just\nbasically\nfollow what the rest of the brain\nhas computed so it is it does not\nseem to me that it would be that hard\nto designate a particular part of things\nas the\nrational decision makers because that's\na very small\ndefining a fully rational decision maker\nis does not take much\ncode and you can\nbasically assign it to just a few\nneurons that pass on\nthe message basically if you want the\nrest of the brain\nsays taking this action is the\nright thing to do in terms of utility\nthe intermediate neurons say thank you\nwe will therefore take that\naction and that's the rational\nirrationality module\nand then you seem to have a\nmodel that works in your sense yeah i\nthink this is kind of reminiscent\nof um of like putnam's paradox that\nuh originally motivated chalmers um\nin this paper so um so putnam\nit was using this model of uh of like a\nfinite\nfinite state algorithms and\nand there um\n[Music]\nany any given finite state uh\nany any given state within their um\nit was treated as like kind of simple\nit didn't it didn't involve any internal\nstructure\none of the moves that chalmers makes\nwhich i didn't really talk about\num in this slide but it is but it is in\nthis paper does iraq\nimplement every finance throughout it is\nhe moves\nuh to what he calls a combinatorial\ncomplementarial state algorithm where\n[Music]\ninstead of allowing the\nsome simple states to potentially encode\nall of the um\nsome complex state\nhe wants to more explicitly model the\ninternal structure within\nany given state so within a physical\nsystem is going to be\nit's going to be implemented in a bunch\nof sub-states\nso so i do i do still wonder if that\nif that is able to get around um\nuh this type of objection um\ni have reasons to believe\nthat the result still applies um\nwith internals well i i know the result\napplies when you know the\ninternals of the agent um\nit just depends on how many\nwould you would you mind if i shared a\npaper\nuh here i i know it's your\nyour talk oh that's not a paper a um\na blog post okay um\nshould i stop sharing my screen no no\nthat's fine i'll just\nuh put it in the um\nthe um\nnow ah okay learning human preferences\nblack box white box\nstructured white structured white box\num the\nessentially the problem is not just that\nyou have to identify how the algorithm\nhappens inside\nbut what are the correct labels for the\nalgorithm\nfor the different parts of the brain\nwise\nis this a bit beliefs is this motivation\nis this rationality of course it's much\nmore complicated in the human brain\nbut what are the criteria that you would\nuse in assigning\nthese labels to the different parts\nof the brain and whether that can be\ndone\nwhether that can be automated\nis the now it can be automated if you\nmake\nenough assumptions but\nthe kind of structural assumptions that\nyou've been talking about\ndo not seem to be um sufficient\nor even close to sufficient okay so the\nwhite\nthe white box is going to include\nknowing its internal structure\nyes okay it's um\nand the the the thing is that what we\nneed is what\nuh here i call them structural\nassumptions in other places i call them\nnormative assumptions\nwhich are given that you know the\ninternal structure how you to assign\nlabels to them now there's a rather\nsilly example where something is labeled\ntuna\nbut that's basically just to show the\narbitrariness\nof the labels and what typically happens\nwith humans\nthat resemble sorry when humans\napproach these problems that resembles a\nbit your\nuh your description there is that we\ndefine\nthese things with very few um\nsyntactic connections like\nthe beliefs take input from the senses\nthe action selector is the bit that\noutputs to\nthe uh the moving about and stuff like\nthat\nand the beliefs go into this and so does\nthe\npreferences and so on but those\nfew structural assumptions that\ngenerally there's\ntrillions of ways of taking a complex\nalgorithm\nand matching those so\nthere seems to need some extra\namount of information or assumptions\nwhat i call structural assumptions\nnow it's not hopeless\nbut i don't want i don't i want to talk\nabout your\napproach not about mine uh but\nit does seem that this is as it stands i\nwould say this is a huge gap\nin uh your approach\nthat is just a few lines\non your uh on your meta ethics\npage so just as an illustration\nof the potential hiding of large\nproblems\nin small small lines of code\num okay well i look forward to uh\ni don't think i've seen this this post\nbefore\num\ni'm i've been told that i'm not the\nclearest communicator\nand um in any case so\nif you if you want to talk about that uh\nplease do let me\nknow yeah\num she would say next question\nso jack you had a question\nlike sort of a clarifying question um\nso so\nin in your five steps um i think the\nthird step is um iterating higher\nyeah applying higher order decision\ncriteria to reach rational\nutility functions so for that to\noccur is there\ndoes there need to be the assumption\nthat humans or\nwhatever um whatever\nbrains you're modeling does there need\nto be the assumption that those\nhave coherent utility functions because\nit's not immediately obvious to me\nthat humans do have coherent utility\nfunctions or that we should expect that\nto be true\nmaybe it is i just don't know yeah\num let's see so\nyeah i think i think this might be one\nof the places where\num for for the first version\nit it i think it's probably going to end\nup trying to find\nthe the utility function that that is\nclosest\nto theirs and that is rational i believe\num\n[Music]\nyeah i mean that might be something to\nto relax\nin later stages um\n[Music]\nyeah i'm just trying to so\num yeah i i guess the way the way that\ni've\nencoded the utility functions is\num yeah so if i had if i had modeled\nif i had modeled the agents\nwith say like um i guess\nordinal with ordinal utility functions\nthen there would be room\nto model it as uh irrational but because\ni\ni took a more simple approach of just\nmodeling them\nwith cardinal utility functions uh\ni think i think that that's going to\nmake it so that\nthey they do all have utility function\nso\num yeah so i i am i am kind of\ninterested in can we relax those\nassumptions\nand still make the project work and my\nmy intuition says yes like for instance\nmaybe\nmaybe maybe uh drop\ncompleteness but we could we could still\ndo things with uh\nthe different possible uh ways of making\nthem complete\num and and still run the algorithms off\nof those\nuh but that will probably be more for\nfuture work yeah\ngotcha okay thanks\nokay um my next question\nor uh point is instead of pointing out\nsomething that might be unexpectedly\ndifficult\nand to point out something that i think\nmight be unexpectedly easy\nin the the grounding of\nuh to use the sort of um the\ngophi term the grounding of the symbols\nin the brain\nwhich i believe you are um\nyeah the tr anyway um translating syntax\nto semantics\nthis may be a lot easier\nthan uh we think because it seems that\nthere's an empirical\ndefinition of this which is does\ncan we use the symbols in the brain to\npredict\nthings about the outside world or can we\nwhat is the relationship between the\nsymbols in the brain and these symbols\nin the outside world\nin a large set of environments that the\nhumans find themselves in\nso um yeah yeah i do i do think that\nthere should be\na lot of fairly easy test cases there\num i think i do have a little bit of a\nremaining worry though\nespecially because the the most crucial\n[Music]\nsymbols to ground would be the ones that\nshow up in the\nhigher order preferences so i think\nthose high roller preferences being a\nlittle bit further removed from\nfrom everyday action i do wonder if\nthough those are going to be less\namenable to\nthat sort of treatment and and and there\nwe want\nmaybe more of a theory a theory that's\nbeen tested by easy cases but but\ntheory guiding us um in\nin figuring out what those uh\nrepresentations are\nwell you've touched upon a subtle issue\nthere\nwhich is that our symbols are well\ngrounded by our experience\nin the environments that we know the\nsymbols that are not\nparticularly well grounded are\num can are basic are often ones that are\nnot\nwell def sorry if you place the humans\nor the agents\nin a new situation where some of their\nsymbols don't work the same way that\nthey're used to\ni one of the examples i use is what if\nsomeone\ncreates a species\nhuman mic a slave species\nthat want to be slaves definitely want\nto be slaves but don't enjoy\nbeing slaves they've recognizably human\nthey have preferences which is to be\nslaves they have\nenjoyments uh thing and in this\nsituation\na lot of say common assumptions that\nnormally go together\nstart breaking down or splintering was\nthe term\nuh that i was using and this is the\nsituation in which you\ngenerally find that these higher order\nsymbols or these more abstract symbols\nthat you thought you had a good grasp on\nsuddenly you aren't so sure anymore or\nthey can be defined in multiple ways\nin a way this is what philosophers are\ndoing all the time\nbecause they take common concepts\nand push them into extreme thought\nexperiments where they break down\nand where and but if a world\nwith potentially super intelligent ai\nis a world where the ai can push\nthe environment into incredible new\nstates\nwhere um the philosophical thought\nexperiments\nare there are the actual thing\ndo you have any way of sort of dealing\nwith that\nwhen symbols become ambiguous or when\nsymbols that used to sort of be synonyms\nare no longer synonymous\num yeah so i haven't uh i haven't\nactually gone into so\nso um so in my meta ethics\npaper which some of you uh read last\nweek\nand and so far in this presentation i i\nactually haven't gone into that much\ntechnical detail it turns out that um\ni'm giving sort of a simplified view of\nthings here\nso so maybe i should actually um\nget into this appendix slide um so my\nfirst thought and the way i\nsort of uh certainly in the meta ethics\npaper\nhad been talking um\ni i've been assuming i i don't know if\nthis is going to directly translate into\ninto your words they might still be\nseparate but but just to give you some\nidea of\nwhere i actually end up with the\ntechnical specification\num so so in the first pass i kind of\nassumed that these higher order\nutility functions form some kind of a\nneat hierarchy\nright so there's maybe there's just one\nhighest or utility function\nand then your job is relatively easy\nfigure out\nfigure out what lower order utility\nfunctions that it prescribes\nand then keep propagating that down\nuntil you reach the first order utility\nfunction\nbut i think that's not exactly\npsychologically plausible i think more\nrealistic model would have\nusability functions that are maybe able\nto mutually\ninfluence each other with no single one\non top\nso in this case i want to um\nuh it italy choose\nsort of allowing them to simultaneously\nimplement\ngutter and ah you're choosing outputs\nthat that satisfy\neach of its decision criteria\nand and and you keep simultaneously\nupdating until you reach some kind of\nstationary\nequilibrium um i think even that one\ncomes with an assumption that we not\nwe might not be able to retain uh then\nand\nnamely it's it assumes that there's a\nnetwork topology\nof of which which accepted norms\nare are governing which other accepted\nnorms or high order utility functions\nand\nand it's possible that those actually\nare path dependent and depending on what\ninput you feed it\nso so i actually think we have to move\nto uh\neven more complicated one where now now\nwe're going to simulate\nall continuations of an agent and and\napply a weighted\nsocial welfare function\nand uh continuations who better satisfy\nthose decision criteria and\nuh preserve a gentle identity which is\nkind of like personal identity\num but basically they better satisfy the\ndecision criteria and\nand there hasn't been other irrelevant\nchanges to\nto their through their values uh those\ncontinuation\nare going to have more weight and then\nand then you apply a social welfare\nfunction to\nto aggregate them um\ndid you want to these are\nthis is um these are close to the kind\nof thing\nthat i was have been thinking about and\nso\nit is um uh it\ni don't want to come and say as because\nyou're thinking similarly to me you must\nbe right\nbut um it means at least that you have\nbeen\num thinking about the same issues and\nhow to\naddress it um like\nthe just to check one thing\nthere is no strict to ordering a\nweak higher order utility function can\nbe overruled by\nstrong lower order uh preferences\nyeah i think so i think that should be\npossible like a\nyeah uh say a mild religious preference\num towards on sexuality or something\nlike that\ncan be overruled by sort of strong\nobject level\nuh ones would you say to\num to pick an example yeah\num let's see yeah i think\nso so i under and that case i'm going to\nend up saying something very similar\ni i think i'm just going to take slight\nissue with the way you said it\nyou described it as the like the first\norder\npreferences overriding the the metal\nlevel\npreference instead what i would say\ni i i would probably say\nwe want to model that as a different\nmeta-level preference\nthat says something like um\n[Music]\nwhen when you have first order values\nthat strongly conflict with\nsome kind of weaker high order\npreference then then allow\nthe the first order to um\nto override that one but but that's all\ngoing on within\nus this second metal level preference\num so so conceptually i kind of\none one problem in in the language here\nis um\nhigher order preference is ambiguous\nbetween a couple of things\num so so one notion\nof higher order preference that's that's\nnot the one for the most part that i'm\nusing\nis is one that's a first order\npreference\nit's a preference in the sense that it\nis governing actions\num um but it could have some high order\ncontent\nuh it could have content that includes\ntalking about\nother preferences so so you could have a\npreference to change your preferences\num and and i think those are different\nfrom higher order preferences that are\num\ndefined not by the content but by its\ncausal functional goals in changing\nyour other utility functions because\nthat first type of\npreference that that really is just a\nfirst order preference\ngoverns actions and\nand only only affects\nuh other preferences through\nactions and i think and i think that\nwhen we're talking about\nthings like being moved by moral\narguments\ni think we're not we're not talking\nabout this pragmatic rap\nokay i'll i'll have a follow-up question\non that after\num after the next question\nokay so that would be my question uh\none thing i didn't see and maybe that\nwas because i didn't read it very\ncarefully\nwas a an explicit comparison with ikey\num and um so i would like to hear\nyour thoughts about how this relates to\nicn in particular\nthe thing that i'd say is outside the\nuniverse\nand it seems possible to me that uh\nyour construction also kind of assumes\nthat the uh\nthe computer that's implementing this is\noutside the universe\nor is is that a requirement um\num let's see so\nso so all of this is supposed to be\ngoing on like within the mind of the\nof the ai um the the ai is supposed to\nhave\na a complete true causal model of the\nworld\nand the brains in it um\nlet's see so so i guess\ni think i think what is the term like\nthe cartesian\nseparation of a few apart from the world\ni i guess that could theoretically come\nup with the brains or it could come up\nwith the ai itself see i'm not sure\nyeah i mean in a later stage one thing\none thing i do want to eventually get to\nis\nis things where this stuff might come up\nmore like\num like i i think that um\nmeta-semantics and meta ethics is\nactually\num uh that actually gives you most of\nmatter philosophy\nso theoretically i think i should be\nable to use the resources i built\nto do some kind of verification or\nvalidation\nof the philosophical process that led to\nthe creation of the app\nthen in that case you probably\nwould run into these\nworries with self-reference and it's\nis that there the ai would would be\nmodeling itself\nin the world and as being caused by some\ncausal process\nfrom the brains and and and trying to\ncheck\ndid they make the right philosophical\ndecisions um\nso so so all of this is\nis still very speculative in handwriting\nbut but that's where i imagine some of\nthese\nsome of these issues might come in as it\nis now\ni don't know if i don't know if they i i\ndon't think i've had to had\nthe ai model itself anywhere in here\nit's it's just modeling the brains\nfiguring out what they value and\nadvocating those values\nuh i i don't think i had to have the ai\ntalk about itself in the world\num so so maybe maybe it's\nyeah maybe it's just ambiguous of\num or yeah i guess it's\nambiguous whether whether it does have a\nmodel of itself in the world\nokay um\nyes so on the um\num sorry i'm\ni'm having difficulty uh remembering\nexactly\non the when you were constructing the\nmodels there um\nand were\nah the different weighting of\nthe\nyes the main point is just quite simply\nthat most\nhumans do not are not philosophers\nand so we have not constructed higher\norder\npreferences or meta preferences we've\nespecially not constructed meta meta\npreferences for saying that a weak\nfirst order in the vague sense\na weak object level preference should\nover sorry a strong object\nlevel one should overrule a weaker well\ncertainly they haven't they haven't\nregimented their vocabulary to\narticulate these sorts of things as much\nas philosophers have\ni i'm i'm not so sure that they that\nthey lack these\nthings though well what i was going to\nget to is well i don't know\nuh to the extent uh is what about\nsomeone who has not\nyet considered the problem\nbut would would it would have one\nsolution\nif they did consider it and the second\none is what if someone who has not\nconsidered the problem\nand could get one of two solutions\ndepending on\nwhich arguments they read first\nboth of these are cases that are quite\nlikely to happen\nso would you say that their meta\npreferences there are\nundefined or yeah well that's\nthat's exactly the sort of thing that um\nthat this\nthis stuff in the appendix was supposed\nto\nalleviate so so that would be uh\nthere will be different continuations of\nthe agent one no time here one argument\nanother here is a different argument\num and then as as long as as long as\nthey're both satisfying the decision\ncriteria equally\nthen then they might just be on equal\nfooting when you apply\nthe the social welfare function between\nthem to\naggregate their so so are you saying\nthat ones that are not yet\ndefined um\ncan\npreferences that are not yet defined but\nthat could come up\nare also included in the set of\npreferences to consider\num let's see\ni i mean i think i think that's how i\nwould like it to work i think i did run\ninto some\nsome technical issues uh but but\ncertainly at the very least\num changes to existing preferences\nthat that might come up depending on\ndifferent inputs\nthose are those are going to be\naccommodated in my model\num does did that make sense\num if you're having new preferences with\nnew symbols\nthat that that the original agent didn't\nuse\num that that's actually uh one one\none place where my model doesn't\nexactly do well um okay\nbut as long as it's as long as it's\nusing the same vocabulary\nof the original agent and it's just\nchanging how much\nfor instance is valuing things or it\ncould even be a new a new preference in\nthe sense that\nthe original agent was able to represent\nthese outcomes fine\nit was totally indifferent but now you\ncould have a new preference\namong something that the original agent\nwas at least able to represent\nbut and then you can have a new\npreference on something that they were\npreviously\nindifferent to so all of those types of\nchanges would be accommodated by this\nuh okay well i think this connects with\nmy first point\nabout new environments um so we can sort\nof bunch them all together\nas what happens when there are new\nsymbols\nand both when it's predictable\nthat the person will always interpret\nthe new symbols\nthis way under any realistic\nintroduction to the subject\nand the situations where they can\ninterpret the new symbols in multiple\ndifferent ways\nyeah um yeah i'm trying to think\nhow big of a problem it was\nyou don't have to solve everything right\nnow\nyeah yeah yeah i'm just trying to think\nis it like a big problem a medium\nproblem\num so so within my model\num i think i think the problem was\nbecause\nso think of that last step where i'm\ntranslating\num the translating rational utility\nfunction from language of thought to\nexternal world states\nthen um\nif i was going to try to accommodate\nthese new symbols\nin in this other possible world\nare these are these things can be\nseparate from\nthe causal model of the world or i guess\nthey would\nit should still be possible from within\nthe ai's world\num so so maybe as long as as long as you\ncould still translate them into external\nworld states\nthey might have to be like merely\npossible external world states or\nsomething\num then maybe it's it'll work but it\ngets into like weird\nmetaphysical issues of how to how to\nalign this\nthis stuff um and and then\nyeah there was another another part that\ni was a little unsure on\nuh but that kind of ties into this is\nthe philosophical purist in me wants to\nsay\nmy values are my my values\nare grounded in my brain so\nso so if you took my my brain and put it\nin\ndifferent circumstances um\ni kind of want to say i have the same\nvalues\nso so then when\n[Music]\num so so one thing we could do is\nis put in a probability distribution\nover\nover worlds that my brain could be in\nbut then\nbut then this probability distribution\nis ending up\ninfluencing the continuations and\nand so the the philosophical purest in\nme didn't want that happening so i\ntalked\ninstead about just like all possible\ninputs\nbut but uh but making sure that we get\nwe kind of get rid of get rid of the\nones that just introduce\nnon non-cognitive changes like\nchanges that don't come about from\nreasoning about your high order\npreferences\nare are going to end up getting very low\nweight\nso i don't know that\nthat philosophical purism does create a\nlittle bit\nextra difficulty um here\nand and i'm not entirely sure whether to\ngive it up\num i personally recommend giving it up\num it's uh because if it's\nif it's true you'll find it out anyway\nand if it isn't it'll be an obstacle\nin the design\ni'd recommend hacking together something\nthat kind of works\nand improving on it and if\nif sort of purism or more realism or\nsomething like that turns out to work\nthen it should naturally emerge in that\ncontext\nbut if if it doesn't work and you try\nand impose it then there's\nsomething might break\nand that's just my my advice there\nyes uh over to you sir yes\num one of the things that came up in the\nreading group was\nuh this uh methodical is written in\ncentral x which is somewhat of a niche\nprogramming language and if\nconflictually this was written in python\num\nthere is a sense that maybe uh you would\nhave gotten more engagement\nout of it also it seems like the code\nseems to be optimized quite a bit\nfor correctness in that with the tests\nand everything\nand maybe um optimizing for readability\nwould have been a better uh choice like\nlonger variable names and things like\nthat\n[Music]\nyeah um i don't know if i would\nmove to um python but but\ni am sympathetic to uh possibly porting\nit to a different language\num probably something like pascal would\nbe\num um\none one that i'd be leaving the most\ntowards\nuh because um\n[Music]\ni i did i did want to maybe keep it in\na form where um\n[Music]\nin a programming language that has clear\ndenotational\nsemantics um\n[Music]\ni guess my original idea was something\nlike\nonce i have it written up in set theory\nthen then maybe\nit wouldn't uh it wouldn't take too much\nto just then just write it in standard\nmathematical take notation using uh\nlatex\nfor for humans to read right um\nor if i wanted to continue with the\nmachine readable\nand executable which which i do like um\nand then maybe maybe switching to\nsomething like pascal\nwhere escal is also pretty speech\nbut but certainly there's many more\nhaskell programmers than\nuh set elec programmers um\nand uh yeah so\nso if i had infinite times there's\ndefinitely many things\nmany things i could be doing um yeah i\nguess i chose\nset alex because it had the clear\ndenotational semantics and\ni can imagine translating it into lattek\nlater on\num and uh\nand it just seemed a little less\noverhead than writing in\nhaskell um\n[Music]\nso so it yeah i mean if if i had written\nit haskell i don't know\nwould it would it have been worth it to\nwrite it in haskell if it would then\ntake me\nlonger to release right uh yeah those\nthose were the sort of calculations that\ni had um\num yeah\num\nback to me sure\num so you were talking about\nuh diet chronic coherence and other\nrationality and coherence\num requirements um i'd suggest that\nsome of the some of these coherence\nrequirements\nare actually themselves more akin to\nmeta preferences\nthan to um than to\nrequirements um the\nthe kind of thing that i'm thinking of\nis\nfor instance um\nuh temporal coherence\nlike um whether you um\nlike at some point people\nlike uh enjoy eating\nthey enjoy eating stuff when they're\nhungry and they don't enjoy\neating it when they're full um we've\ndecreed that this is not a um\ntemporal inconsistency um\nfor the uh the reason\nthat yeah there are other things where\nour desires go up and down\nsometimes we want to watch uh\nromantic movies uh sometimes we want to\nwatch tragedies our desires and\npreferences in these areas fluctuate\nand we still\nthink that we have an underlying\ncoherence despite all that uh\nbecause we sort of but\nbut other things we decree that we do\nhave temporal incoherences\num like when\nwe well yeah sort of we overeat\nand we don't and then we purge\nor we we contact\nan ex that we really shouldn't\nand then but to pick a\nmore narrow example if someone\nhas a peak of sexual desire\nand sleeps with someone at that point\nthis\nis appropriate this is fine this is not\nan inconsistency if someone\nhas a peak of sexual desire and calls\nan ex inappropriately at that point this\nis a bad decision\nso the impact of\nsex i should choose a better example but\nthat's the one that sort of sprang to\nmind\num can be seen as both um\npositive and negative\nso not not positive or negative as time\nconsistent and not time consistent\nuh depending on how we see it and\nthe way we see it is from our metro\npreferences\ni apologize but the um\nthe things are not fully worked out but\na lot of things like consistency\num like there's people who there's the\nalley paradox people that\nviolate various dutch book arguments\nand uh lotteries and other things\nbut they can say that they\nthat you don't people buy extended\nwarranties\nfor things and you can easily check this\nis\nyou lose money there's no overall if you\nbuy extended warranties for things you\nlose money\nif you don't you're better off much\nbetter if something breaks\npay for it and you'll that'll be much\ncheaper over the long term\nbut some people value the security\nthat the extended warranty gives them\nnow\nthat feeling of security you could say\nit's an irrationality\nor you could say that it is a consistent\npreference or meta preference\nthat they are satisfying um\nso what i'm suggesting is that a lot of\nthe coherence\nthings can be seen as our own meta\npreferences\nand not meta preferences that everyone\nwould have in the same way and to the\nsame extent\num let's see okay so so some of your\nexamples\num um yeah i think i think it's sort of\nlike\nwhen when you just describe say uh the\npeak of sexual interest\num it could be good or bad depending on\nthe surrounding context right\num so it was more that\nhave sex when you are when your sexual\ndesires the highest\nis a perfectly rational and recommended\ncourse of action um\ncall up your ex that you've had a\ndifficult relationship with\num when your peak of sexual desire is\nthat it's\ntop is incoherent and\nprobably predictably going to lead to um\nuh two mistakes\nyeah so so i don't know so for example\nbut yeah the the basic idea is the same\nsay fluctuating background thing can in\nsome case be seen as\npart of a single preference function\ni have sex when it is most enjoyable\nor as a source of irrationality\nyeah um it reminds me of um\nwhat one response to the dutch book\nmight be\nto to say it's actually fine and you\nbuild into your preference\num um that i that i\nyou know if they're leading me into a\ncircle i'm paying\num five dollars to go from a to b and\nfrom b to c\nand and go from c to c to a if you if\nyou build\nenough context in and say you know even\nin the move from c to a uh\ni i just i just value moving to\nme moving to a after having paid five\nand ten dollars five dollars to\nmove the other two times uh\nif all of that can be in your preference\nyou could actually make it like\na coherent utility function a rational\nutility function it\nit it seems pretty implausible that\nanyone is actually\nvaluing things like that but uh kind of\nreminded me\nof that but um um\nbut i know some of these issues seem\nseem more about\nhow how we how we\num just capture things within a single\nutility function like\nlike how context sensitive is our\nutility function\n[Music]\nit's um as i say i'm more just prodding\nat the various things yeah\nseeing how they work and how um\nyou would yeah so so\nso the actual so are you suddenly\nfunctioning i didn't want to get a\nlittle bit of psychological possibility\nbecause\nas you if you had to represent utility\nfunctions as\num um specifying\nutilities for for every single possible\nstate\nthat that's certainly not we we are not\nsome giant lookup table with\nwith with the list of every possible\nstate right um\nso so i i ended up using like an\nadditive utility function\num so so it it\nso given some give it some state you you\nfigure out\num how many of the formulas that you\nplace your utility in\num uh make it true and you add up the\nassociated utilities\num and uh so this would allow for\num you know maybe you maybe you value p\nso you put some utility on on the\nformula p being true\nand and then you have another formula p\nand q\num and then you could you could\nplace a different utility on p and q um\nand and then that'll i think that'll\nactually add up\nbecause both p and p and q are true then\nthen\nthen those will add up to whatever so so\nmaybe\nyou you generally like when proposition\np\nis true uh but if q is around\nthen you don't like it right so maybe\nyou you place five utility and having p\nbut then negative but then zero uh\nnegative five uh if you have q\nso then if p and q then then you're just\nindifferent\nso those are some of the things that you\ncould do with with my current\nuh my current model and\ncertainly there's there's probably\nother ways to make these more more\nrealistic and\nand context dependent but but i think\nit's a fair amount of that\nit is allowed by just the additive\nutility functions that i have\nand in the limit case that you could you\ncould just add in\num um you know\nthat such that it behaves again like\nthat giant look up table of all possible\nstates but\nbut uh if there are places where you\nneed to get more fine-grained it does\nallow\na fair bit of that already but i'm sure\nthere's\nalso further improvements tonight um\nbut but but that did seem a little bit\ndifferent from um maybe some of the\nmeasures of um\nseeing chronic and diachronic i think\nthat's executing mostly and\nwhere i work out agentual identity i\ndidn't know if this is where you wanted\nto go with it\nbut um\nthis was supposed to be some measure\nwhen you're when\nrunning the continuations of an agent if\nthis is supposed to measure\num how much you're the same agent or not\nand and and it kind of included this um\nexcellent this is\nthis is the sort of thing that i've\ncome to think about recently and you\nseem to be ahead of me there\nyeah a gentle identity is\nthe well no i'll sorry i'll\ncome back uh i'll let soren bring in a\nquestion\nokay so um one of the uh\nparts of the way that the utility\nfunction is constructed\nseems to be almost equivalent to\nsimulating putting humans in human\nbrains\nin different situations and it's given\nthat it seems we're trying\nbasically all states that the universe\ncan be in it's\nit's possible that we are implementing\nin fact not just\nmind crimes but every possible mind\ncrime\ndo you agree um yeah certainly if you\njust\nnaively if you found a a computer that\nhas infinite computational power\nand and you ran this algorithm inside of\nit\nthat then yeah well first it'll probably\ncrash because there's tons of bugs but\nif you could fix all the books and have\nhadn't run then yeah there would\nprobably be tons of mine crying so\nso uh so this isn't i'm not suggesting\nthat anyone actually\njust fix fix the bugs and then run it\nbecause uh\nthat would be a big concern uh the hope\nwould be\nthat um um\n[Music]\nsomehow we a lot of what i'm trying to\ndo here is kind of define\nthe ground truth or um or\nthe the value functions and\ni'm not imagining that this this would\nby any means be the actual algorithmic\nprocedure that the\nuh that the ai uses right um\nmaybe uh you know maybe maybe it's\nbut i i do feel like it it can be\nimportant to have this defined somewhere\nin the ai so that it kind of knows what\nis the ground truth so\nso some of these other proposals which\nwhich i'm sympathetic to like\num maybe like trying to get at our\npreferences via\ndebate um and things of that sort\nuh or or some of the amplification stuff\nuh i\ni do feel like um um\nthose those would be probably closer to\num what we might in the near future do\nin in building up some kind of data set\nfrom which we try to infer\nwhat people's uh higher order\npreferences are\n[Music]\nso so presumably you could do things\nlike that without\num without\ndoing mind crimes on these people you're\nsimulating\nin in your uh finite human brain\num so so\nit might be that um\nwhen you scale this down to run with\nactual finite\npower that you want to do a lot of that\nstuff but\ni do kind of think it is important\nthat there's so that there'll be some\nrecognition that\nbasically i don't think we want to\ndefine the ground truth in terms of\num in terms of what what comes out of\nthose those types of things i think\ni i guess i have some worries there so\nso so i kind of like\nif if i could have these concepts\nthen then then maybe it can even take\nover\nuh creating new iterations of what are\nthe best\nmethods that\nwould actually be finite approximable to\nthis\nbut it kind of needs some definition of\nthis to know what is finitely\napproximating\nand of course as we're doing the\napproximations we also\nwant to make sure that we're not having\nit commit mind crimes as well\nthank you okay um\nso suppose that someone writes a\npsychological paper or a philosophical\npaper that is relevant to\nsorting out people's preferences\nhow if your\nai has been launched already how do you\nimagine it\ntaking into account this innovation\nbecause one of the biggest difficulties\nthat i have\nuh is to know how much has to be put in\nby hand\nand how much uh doesn't that's why i've\nbeen looking at\nthings like dichronic consistency is\nthis\nin the human head so we can delegate it\nto\npreferences or is it something that we\njust want to impose from outside\nso similarly so someone comes up with a\npsycho\na paper about psychology that\nilluminates some aspects of human\npreferences and\nthis thing for argument's sake or or a\nphilosophical thought experiment\nand we'll assume that this is relevant\nfor this how would\nthe ai in this\ntake that into account if this happened\nduring its run so\num let's see or i i mean\nor you can have it published beforehand\nif you want\num how would it take this data\nthat is already out there that is\nrelevant but not in an ai\nparsable form\num let's see so\nso um\ni i i guess it kind of depends so so is\nit like\nis it is there a mechanism that\nit's positive about how our preferences\nwork is like the content of that paper\nlet's say it's the anchoring bias paper\nuh it's pointing out a particular bias\nthat just really realized was there\nbefore\num but now they realize it's there and\nthey agree yeah this is a bad bias this\nis not\na preference um\nso this before this anchoring bias\nwas not known and we'll presume that we\nhaven't put it enough\nfor it to identify the anchoring bias\nfrom other principles\nbut someone writes this paper and\nthere's a few people who agree\nin the world oh yes that's a good idea\nso\nbut how would this enter the algorithm\nas a data um\nso i mean the current version the\ncurrent version doesn't really\nwork off of data like that right the\ncurrent version\nis just assuming you have a low-level\ncausal model of people's brains\nand that's where they're that's where\nthey're getting the the data\num but we but we could talk about maybe\nwhat adjustments\nshould be made to this algorithm when\nyou discover a paper like that\nand and i do and i do want in the future\nto have\nthe ai maybe have some self-modification\nabilities\nso so maybe we're talking about like\nsome future iteration\nwhere where has um\nsome of those abilities to kind of take\ninfo from a new paper and try to\nintegrate it with\num other information\num you wouldn't want it\nyou wouldn't want to hand specify take\nthis paper or papers on this archive or\nthing\nyou want it to be able to take that kind\nof data\nunderstand it and like it might be a\nconversation between two top\nphilosophers\nthat is relevant for\nnovel situations or something yeah i\nmean\nfor something like like the rationality\nscoring metric right you can make some\nadjustments\nlike so so in general\nwe apply a principle of charity but if\nwe know for a fact that humans are very\nprone to an\nancient bias maybe we don't penalize\nattributing anchoring bias as much as\nif it didn't fit some known pattern\nright so that would be an example of\nsome way you could change some of the\nscoring mechanisms here yeah but that's\ndoing it by hand\nyeah yeah um yeah so that's what\nthat's why it's not directly applicable\nto this version\nbut maybe you wanted to talk about some\nfuture version\nokay these are\nsome of the things i'm suggesting are\nthings to bear in mind\npossibly yeah and\nokay i think i have\ntwo more questions slash\nno three more questions slash comments\num\num yeah so um over to the\nthe uh the audience well i think uh i\nsaid\nis it an hour and a half to june and you\nalso and we are close to the one and a\nhalf hour mark now\nso uh if june if you need to leave or\nanything then\nplease feel free but otherwise i think\nuh we should give\nstewards uh questions priority towards\nthe end\nyeah okay so\nthe um the first one\nis do you have a population ethics\nor a way of solving that issue\num i guess i would i would file that\nunder um normative ethics\nas opposed to meta ethics um so\ni mean the reason i ask is because\nyou're getting the preferences from\nbrains um this is a certain\na certain population of brains that\ncould be variable\nso how are you going to deal with the\nissue of variable numbers\nof brains uh\nbeing created or destroyed oh i see\num yeah so i guess um in terms of what\ni've coded here\ni think i was i was trying to just\nsimplify it into just take all the adult\nhuman brains at the point in time in\nwhich we're\nwondering whether to press the button to\nlet the let the ai go\num and that\nand that presumably if\nif uh it's able to get the\nget the values that these humans should\nhave and\nand and if these humans should value uh\nfuture generations then\nthen you should get the result just just\nby scanning the existing human sprains\num you should get the result that you\nthat it should value future humans\nokay so this i'm putting in the category\nof\nuh delegating delegating to current\nhuman\npreferences to resolve\num yeah yeah current human preferences\nbut\nidealized right yeah\n[Music]\nbut uh yep that is uh\nthat's the way i tend to to go for a lot\nof these things like especially issues\nof identity\nbut the\nother thing is have you thought of a\npotential\nerror detection um\nlayer or as in\nsomething that\na lot of our beliefs about human\npreferences\nand human and humans in general are\nexpressed\nin your algorithm but we have some\njudgments as this outcome is terrible or\nthis outcome is stupid\nthat are hard to capture\nin this preference formalism\nand could be more captured as\nerror checking uh i was wondering if\nyou would if you consider that this\nmight be\na way to go or some some sort of\ncatching\nuh disastrous errors yeah yeah i mean\ncertainly for anything this abstract\nyou you would definitely want as much as\nmuch testing as possible to validate it\num i i mean i don't know how\nfar i've gotten in actually\nworking out how you would go about doing\nit but but i do think\nthat there there should be plenty of\nbehavior\nif this was if this was going to be um\nuh ever close to production\nokay um\ni'm i'm looking forward to to that\nyeah i have thought a little bit of like\nsay like a shutdown button going\ngoing back to the idea that that a lot\nof metal philosophy\nis meta ethics or can be\nanswered by meta ethics plus\nmeta-semantics so\nso could you could you do something like\num\nscan to take take people's brains and\nand\nand figure out what their concept of\nethics is\num or what what concept of theirs that's\nclosest to this meta ethics\nis does it play the same role in their\ndeliberations\nand then and try to apply it to\nthe whole algorithm and kind of ask\neverybody\nwith their fate what yeah if we have a\nway of\nfiguring out what the concept i think\nvarious people have\ndoes it actually match up with the meta\nethics that's been programmed into this\num\ni was i was more sort of thinking along\nthe lines\nof some form of compressed accurate\nsummary of where the ai wants to go\noh yeah the checking that humans are not\ntotally repulsed by that\num this would have been sort of a\nseparate\nthing along the same lines i i've sent\nyou a somewhat silly example there\nwhere i imagined a problem with cv\num and there the problem is that it\nfollows the coherent extrapolated\nvolitions things at every step arguably\nbut ends up in a terrible uh place\nyeah um and\nthis a lot of what you're\ndoing seems as if there might be\num you could just think of sort of ultra\nbuddhist that ends up destroying the\nworld\num kind of through altruism\nbut\n[Music]\nit's hard to tell because of some of it\ndepends on some of how you\ninterpret distance between\num various utility functions and\nidealization processes but it\nseems that it may be vulnerable\nto things like that series of rational\nseeming steps that end up in a very bad\nlocation\num and some sort of checking or overall\nconnection with the initial\nstarting point uh might be something\nworth\nthinking about yeah um i mean\ni guess my theory is supposed to be able\nto capture\npretty much any normal reasoning that\nhumans do\nso so if you're able to write up this\nexample\nand use it in an argument about what we\nshould value\nthen then theoretically uh\num my model should capture what's going\non\nwhen you're doing that you have some\ncriteria that you're subjecting\nyour your uh values to um\nand and and then we should be able to uh\ntell if that criteria is being applied\ncorrectly or not\nand and that would uh this\nsort of thing sorry\nwhat do you think yeah so so basically\nif you're right that that in that there\nis that there is an argument here\num to um i don't know be\naware of like is it fair to say like\nthese\na chain of transitive uh\nnormal reasoning um\n[Music]\nyeah i don't know i don't know how to\nsummarize this very quickly but\num but but if you're if this is if this\nconstitutes\nuh an argument about how we should\ninterpret our value\nthen then within my model\nthat should be capturable within um some\nhigher order decision criteria\num that that we could then apply\nto get the result that you want this is\nsomething that\ncould be tested to some extent\nempirically because your\num your process is imagining idealized\nversions of humans\num and a question\nis is this unstable\nor is this a stable construction\nand if sort of meta\nnormed arguments like this are included\nand have a strong effect i'd expect a\nmore stable\noutcome where shifting around minor\npreferences don't make much difference\nbut if it's unstable\nthen it could go in many it could\ncould end up in many areas yeah yeah i\nmean\ni guess i guess my model right now is\nprobably going to be agnostic on\nexactly how you specify the sort of\ninitial conditions how you fill in\nwhat the content of people's preferences\nare so\nso so probably there are some ways of\nmaking a stable in some way that they\ncan't not say well\nand that would be really good to know\nwhat sorts of features in general\nmake it stable or not there is also um\ni haven't been following it as closely\nbecause i've\ni'm i'm not really an academic\nphilosophy\ntechnically anymore uh but but uh\nthere's a there's been interesting work\nin um\nexperimental philosophy where their\nwhole idea is\nphilosopher talk all the time about oh\npeople have this intuition or that\nintuition\nuh using those in their arguments they\nwant to\ngo out and actually test do people have\nthose intuitions\nhow much do they have them is there\ndiversity in the people who have them\num and i saw people i haven't had a\nchance to read the paper but\nbut uh just skimming it a little bit it\nlooked like that paper was finding that\nthere there actually are a lot of like\nuniversalities\nin um not necessarily like in the\nend answers that they give but but in\nthe types of\nintuitions that that they that they\nbring up\nuh or you know it it might be that\nthere's research showing that you could\nframe the problem in certain ways\nuh to get them to go one direction frame\nthe problem in a different way to get\nthem to go the other direction but that\njust susceptibility to the framing seems\nto be kind of universal\num so i know there's been there's been\nsome intriguing research that suggests\nthat\nuh that there might be a lot of um\noverlap within humans um when it comes\nto\nethical decision criteria um and and\nthat would certainly be uh be better i\nmean i think my model can work\neven if that's not true even if you\nthink that there is\nmuch more diversity um but but i do\nthink that uh\num it's a little bit easier\nif if for the most part there's there is\nbroad overlap\nin in what the content of these high\norder norms are for\nfor actual humans\ni agree with you and i think there is\nquite a lot of overlap between humans\nas well and um\ni just want to sort of\nokay i'll try and keep this brief so\npart of what i was thinking\nof why there was tests is to distinguish\nmoral moral systems that are defined by\na stopping point\nby stopping conditions from those that\nare\nconstructive from the basis\nso if you um\nlike if you want to have sort of\ncoherence rational coherence between\nyour\nyour preferences and meta preferences\nyou can either do this sort of building\nup\nor you can sort of do it until you get\nto a point where coherence\nhas been reached and the\ndo it until and cv is sort of an example\nof do it until\ndo it until the conditions are reached\nand they do it until\nthings seem to be very dangerous because\nthis might do a random walk\nacross all ethics because\nall that it really cares about is the\nstopping conditions\nso that means that there are certain\nattractors in ethical spaces and when it\nhits one of them it stays there\nbut we don't know if it's going to hit\nany one of them uh any good one\nearly and it might miss them all and\ngo out for something bad or and simple\nyeah whereas the ones that are when i\nwas talking about checking\nfrom the original thing to the end i was\nsort of saying\nensure that there is not too much of a\ndistance in a sense\nso ensure that it's constrained to build\nup\nuh rather than just um wandering\nfrom this starting point until something\nhappened\nyeah yeah and and i i do model some of\nthat uh i\ni look at sort of a chain of agents\nidentity between\neach continuation over a whole path and\nand kind of we kind of want to ensure\num like not not just that um\nthe beginning and end uh have have\nuh decently high agent\nidentity scores but but also that maybe\nnowhere in the chain was it below a\ncertain threshold\num but but and i i also\nwrote some notes to myself in the in the\ncode of like\num i think i think i probably had a very\ncrude stopping\npoint like like just stop that time\n10 million or whatever um but\nbut obviously i'd like to get a more\nprincipled one like is there some way\nwhere we could\nkind of ask these agents are you at a\nstopping point\nyou know it seems like something like\nthat might might be the best way but\ni just didn't want to get into that\ncomplication for this version\ni mean ask the agent are we at a\nstopping point seems like continue\nuntil this condition is reached\nanyway um so i was wondering if we could\ntalk more about this\nuh at some point maybe next week oh yeah\nyeah that'd be great\ncool we'll sort that out when when other\npeople\nthe other thing is that i would\nencourage you to try and write it out\na lot of the ideas written not just\nin the algorithmic form\nmainly because i found it so useful\nmyself to have to formulate my ideas\nuh in written form and try and get other\npeople to understand\nthem no matter how imperfect i am at\nthat\nthis tends to clarify a lot of things\nwhen you\ndo the process\n[Music]\nyeah thanks for thanks for your talk oh\ncool yeah i appreciate you being here\ni would also like to say thank you very\nmuch june for joining us today\nit's been a pleasure and i've certainly\nlearned a lot and i think\neverybody here has really enjoyed both\nthe discussion and your presentation", "date_published": "2021-04-13T06:00:36Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "d5b98c5ce69cac1e3a5dc5b26524288d", "title": "175. Q and A with Stuart Russell", "url": "https://www.youtube.com/watch?v=BztgYBqXi0Q", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 175 in the\nAIC Durkin reading group tonight we are\nhonored to have Stuart Russell with us\nsure Russell is professor of computer\nscience at Berkeley and the founder and\nleader of the Center for human\ncompatible artificial intelligence\nStuart thank you for joining us nice to\nbe here so the first question is one\nthat came up in the in the reading group\nwith your description of how we can how\nwe might expect language learning\nsystems to become more more powerful and\nthere wasn't an analogy with the with\nthe efficient pal experiment back in in\nthe 40s where there was subcritical\nsystems that suddenly turned critical\nand we found a an analogy with a\nlanguage learning system where just as\nin fission we have some neutrons that\ncauses fission and this fission in\ncauses neutrons to be emitted which\ncreates more fission etc in some kind of\nchain reaction then learning systems\nhave the same seem to have the same\ndynamic in that reading requires\nunderstanding and understanding requires\nreading and when you have a system\nthat's not good enough to process this\nthen you end up with with something that\nfizzles out something where the the\nlanguage learning system seems to end up\nwith having like 0.1% credibility in the\nclaim that the sky is a cube or\nsomething totally silly like this but\nwhen the system improves just a tiny bit\nthen this kind of then we can get this\nkind of feedback loop where the system\nbecomes less in this wrong do you think\nthis is a good analogy I mean at a very\nvery high level yeah you would you would\nexpect a sufficiently good\nreading system to improve over time but\nnot necessarily to explode I mean it\nbasically it would it would asymptote in\nthe sense of we know what once it was\ngood enough to understand everything the\nhuman race had ever written then there's\nnothing else there's nothing else to\nread but in in detail it's it's you\ncan't describe the system as having a\nsingle parameter R because they'll be\nthere will always be pieces of text that\nare relatively easy to read and have a\npretty unambiguous semantic\ninterpretation and so a system that in\ngeneral is going to you know make\nincreasing numbers of mistakes we'd\nstill be able to gain information from\nthese islands of certainty so it's\nreally much more complicated than a\nnuclear reaction does Google well so\nwhat I'm describing here actually is a\nsystem that that reads in order to\nunderstand and extract the content most\nof what's going on so when you hear all\nthese stories about GPT to taking over\nthe world it doesn't understand any of\nthe content GPT to is just a description\nof what strings of words typically look\nlike and that's all so when you look at\nthe systems that are actually designed\nto acquire content and then use that\ncontent to understand and acquire more\ncontent you might look at some of the\nsystems that the Allen Institute is\nbuilding and they're you know they're\nthere's reasonable fact extraction\nyou know when the facts are expressed in\nfairly simple ways but most of what\nGoogle does for fact extraction so the\nstuff that goes into the knowledge graph\nis\nmanually extracted or done using\npatterns or done using machine readable\ntext so actual you know mock-up in\nmachine readable form that people put on\ntheir web pages and then is extracted by\nGoogle into the graph so I think we're a\nlong way away I mean I know if you go\nback and read some of the stuff that\nDoug Lennon wrote about psyche you know\nthat was the plan that they would build\na knowledge base by hand that would\nenable it to become good at reading and\nall the rest would happen by reading and\nit would kind of you know take off by\nitself but nothing has that property and\nI think we're still quite a long way\naway from it okay thank you the next\nquestion is about a coherent\nextrapolated volition which is one thing\nthat was absent from the book human\ncompatible coherent extrapolated\nrelation I don't know if you are\nfamiliar with this yeah yeah it is it is\nmentioned in the book actually at least\ntwo notes that talked about it and\nbasically I think that what he's trying\nto do with that is is similar to what\nwe're talking about in the first\nprinciple that it's about what you would\nprefer the entire future to be like but\nit's not this I mean it does the the CEV\narticle doesn't cover the second and\nthird principles really I mean he talks\nabout models so he points out correctly\nthat we are uncertain about our own\npreferences we we also don't act exactly\nin accordance we're their own preference\nis and we may not have coherent\npreferences and he puts all that under\nthe heading of muddle but I think the\nthere's a significant you know extra\npiece having to do with the fact that\nmachine uncertainty about preferences is\nin fact a\na feature not a bug and and then how\nthat how that gets formulated into a\ngame and how AI systems are solving that\ngame or should be solving a game those\nare some of the key points okay there\nare no questions from the audience yet\nas far as I can tell so I'll just\ncontinue here with talk about inside\nview arguments compared to outside view\narguments now a Roman Shah who I believe\nis also from the center of human\ncompatible AI he recently posted on the\nalignment newsletter a long story a\ncollection of a number of people arguing\nthat we should primarily be using the\noutside view arguments to to think about\nhow AI will look in in the future and I\nthink this both up good question how\nshould we weigh how much credence we put\nto outside view arguments compared to\ninside view and is there a principled\nway to figure out if we should think\nmostly about comparing the single\nnarrative with other things all trying\nto open the black box like you're doing\nwhere this is yeah this is a very sort\nof high meta meta meta kind of\ndiscussion I mean if I try to apply this\noutside view to things that have\nhappened before so let's say what\nhappened with the internet so I'm old\nenough to remember the time when there\nwas no internet and you know there were\nI think there were a few visionaries\nlike Doug Engelbart who had some inkling\nof what it might be like but I would say\nmost people didn't pay attention or\ndidn't believe him I mean you know go\nback to the 1940s and computers the\npeople who should know like the see\nwith IBM people like that said they\ndidn't think you know we need more than\nfive or six customers in the whole world\nso I would say you know people have a\nvery hard time predicting this on on the\nbasis of past experience right I mean\nthe reason he was saying that was\nbecause I was how much computing the\nworld did and you could easily do all of\nthe world's computing with five or six\ncomputers Huey deep right and you know\nif you thought about the internet well\nyou'd think about the internet kind of\nlike we think about the phone system\nthat was a you know that was another\ncase similar case phone system okay\nwhat's the phone system do well everyone\nto talk to each other in fact before the\nphone system existed people didn't think\nthat anyone would ever talk to each\nother right you've got you know what's\nthis what we do these phones what they\nfor and they didn't envisage that we\nwould talk to each other and then with\nthe internet people for oh well that's\ngreat we'll be able to you know send\nmessages to each other over this\nInternet and no one no one anticipated\nanything like the world wide web and you\nknow and people living their entire\nlives online and you know or all of this\nstuff I mean sure you might find some\nscience fiction writers but generally I\nwould say that previous cases tend to be\nextremely misleading you know when when\nyou're using a previous case you have to\nlook at well what are the differences\nand how would those differences play out\nso you're forced into visualizing the\ndetails of the process if even if you\ntry to make a prediction on the basis of\na previous case so I I don't really see\nthis as much of a dichotomy but just be\nextremely modest about our abilities to\nto make these concrete predictions on a\nlarge scale\nokay Chris as far as I can tell your\nstatement\ncomment below it's not a question so I\nwill continue to the to the next\nquestion about the AI debate where it\nwould be really nice as you say to\nrestart the AI debate and so I tried to\nsee if anybody had replied to your book\nand I found two cases David Leslie\nwriting in nature a review of your book\nquite quite skeptical and Melanie\nMitchell writing an op-ed in the New\nYork Times as replies to yours and in\nthe reading group we went through this\nparagraph by paragraph and we are sorry\nto say that the these two answers didn't\nreally seem to advance the debate in any\nsubstantial way there were a number of\nthings they were confused about and a\nnumber of outright errors so as far as I\nknow there were no you didn't make a\nreply to this and the debate kind of\njust died there and do you think that\nwould was some discussion also from the\nunless wrong and places like that but do\nyou think engaging with these kind of\ncritics outside of the rational sphere\nis a good idea these kind of reasonably\nhigh credential people well I I would\nhave engaged with them had they come up\nwith any plausible well there's really\ntwo different things here so David\nLeslie I think I mean he apologized to\nme in person for writing what he did and\nI think in some sense he didn't intend\nhe wanted to sort of be combative and\nsomehow I just came out wrong I don't\nthink this is what he really intended\nyou'll notice that his article doesn't\nactually say what the book is about\nwhich is a pretty weird review you know\nhe he complaints so I talked about you\nknow four major open problems in AI you\nknow he complains that these problems\nhave been open for a long time of course\nopen problems have always been open\nthey've never been closed and then they\nbecome open so I don't know what a I\ndon't know what I did wrong so there was\na bunch of weird stuff and it just\ndidn't seem much point in\nin trying to respond to it and then\nMelanie Mitchell's article basically\nsays that yeah you know out of all the\nthings they could do in the universe AI\nsystems will necessarily decide that\nbenefit to humans is the only thing that\nmatters I mean you know there's 8\nmillion species on earth that they could\nchoose to be a benefit to it just\nhappened to choose humans and let alone\nall the other species in the universe on\nthe other star systems that seems like a\npretty low probability event but it's\njust gonna happen automatically so I\nfound that argument you know it's\nalready dismissed in the book so why I\ndon't think she read the book very\ncarefully otherwise she would have\nengaged with the place in the book where\nit explains why her own argument is\ncompletely fallacious so again it didn't\nseem that this was a you know the\nbeginning of a worthwhile debate maybe\nin the Melanie Mitchell case because\nthere are some other people who have\nthis view sometimes rod Brooks seems to\nsuggest that it it's just in the nature\nof intelligence that you're going to be\nhelpful to humans and Steven Pinker as\nwell just sort of assumes that\nautomatically you know as technology\nbits gets better it's going to happen so\nmaybe it's worth engaging with that but\nyeah a bit disappointing so far and well\nI mean maybe maybe no one has a serious\nand sustainable disagreement with the\nbook looking on the bright side so the\nnext question is actually\nthis question well one of the things\nthat you talk about\nand defend in the book is\nconsequentialist preference\nutilitarianism and I think in my\nexperience from the reading group and\nknowing the people here there is\nprobably a lot of people here who agree\nwith this and at least lean towards\nconsequentialist preference\nutilitarianism of some sort but I'd like\nto find out that this is actually\nsomewhat of a minority position here I\nfound the philpapers survey on\nprofessional philosophers and even to\nchoose between deontology\nconsequentialism and virtue ethics they\ndidn't even take a contractual ism\ndivine command Theory natural law and\nthese kind of thing but there is a we\nare the problem that we are in we're not\nquite in the rational and sphere but we\nare we're not really a very diverse\npeople here so this seems like a really\nbig problem for provable safe AI in that\nprobable beneficially I sorry if the if\nit turns out that someone has some true\nvalues which are in let's say divine\nquaint theory or something like that and\nthen the AI that we implement eScience\nprecisely zero probability to that then\nthat gives some problems in that then it\ncan't update towards that is that\ncorrect and that seems like a big\nproblem well so yes it anything that the\nAI assigned zero probability it's it's\nnot going to come to believe it and so I\ntalked about that in the book and you\nknow that you need certain types of\nuniversal priors but there's a\ndifference between what it thinks your\npreferences are and what its own sort of\nconstitutional ethical principles are\nand so so it's entirely consistent that\nit's a consequentialist and you're a\ndeontologist and and you being a\ndeontologist\ndoesn't doesn't mean that if any\nviolation of deontology occurs anywhere\nin the world you and the entire world\nare going to explode it probably just\nmeans you're a little bit ticked off you\nknow would you sacrifice your firstborn\nchild to correct that tiny violation of\ndeontological ethics No\nso your your belief in deontological\nethics is something that you hold you\nassign a certain value to to it as you\nknow it's I think it's a good thing it\nwould be nice if other people agreed to\nit\nI'll spend some effort trying to get\nother people to agree to it but it's not\nan absolute and so so we just it just\ngets folded into what the Machine thinks\nyour preferences are and that's fine so\nif you know if the consequence of some\nchoice would be that you know you might\nbe sort of materially better off but\nspiritually worse off because you know\nyou were induced to violate your own\ntheological deontological ethics that's\nwhat we're gonna be taken into account\nand that's not a problem so I mean the\nreason for for doing this in terms of\nQuantic when consequentialism I mean\nit's not a it's not that there's no\nother possible position and and and this\nis this is the essence of the approach\nit just seems to me that otherwise\nyou're building machines to achieve\noutcomes that you don't prefer and I\ncan't really find a way to do that and\nconsistently it doesn't seem to make\nsense that I prefer the world to be like\nthis but I'm going to build machines to\nmake the world be like that which I\ndon't prefer so and you know I'm not a\nprofessional philosopher but but Toby\nall would read the book very carefully\nand he thinks that you know and I don't\nI wouldn't say I have a version of\nconsequentialist preference\nutilitarianism but the version that he\nperceives as being conveyed by the book\napparently fixes some bugs that I didn't\nknow existed and so at some point I\nthink he said he might write something\nabout that so I I don't see this as\nabsolutely you know a critical stand or\nfull issue I think if you were a\ndeontologist you might change some of\nthe details but you're still going to\nhave to deal with the fact that the\nMachine doesn't know what people want\nand it's gonna have to act accordingly\nyou know what and I reading mill right\nyou could a lot of Mills utilitarianism\nactually is complaining that all of\nthese critics of utilitarianism just\ndon't get it right now Chris don't see\nthat you know the deontological\nprinciples you know follow from\nconsequentialism you know give me give\nme a Dilla logical principle that you\nbelieve in which has bad consequences to\nthe world but nonetheless you're going\nto stick to it right you know and so he\ngets kind of frustrated and angry\nthinking why do I have to keep saying\nthis stuff it's all pretty\nstraightforward and just think of it\nthis way this way this way and so I kind\nof share Mills frustration that in many\ncases these are not actually\ncontradictory views so but take in\nparticular when he talks about the\nontology it's essentially what we would\ncall a compilation argument right you've\ngot the the basic principles of\nconsequentialism but they are really\nexpensive to calculate all the time so\nhe talks about mariners going to see\nthey take an almanac which already has\nthe results of all the astronomical\ncalculations they don't you know do you\nknow pages and pages of math while\nthey're on the ship they just look up in\nthe book\nfind out what they what the answer is\nand he says it's the same principle\nthat you go out in real life with rules\nof thumb that guide your behavior and\ngenerally as long as we all follow these\nrules of thumb the consequences turn out\nto be good and in fact and he says well\nyou know that's that's how you design\nthese rules of thumb in the first place\nright as long as we all follow these\nrules the consequence is good if we all\nfollowed these rules and they were\ndesigned so that the consequences were\nbad that would be kind of daft so anyway\nthat's what I have to say okay Ali has a\nquestion I'll need to prefer to see you\nthe question yourself or should I read\nit aloud it's also on the screen here so\nso the earliest question what are your\ncriticisms of the concrete approaches to\nAI safety pursued by Stuart Armstrong\nand Paul Cristiano respectively\nI'd have to think more about that I mean\nI think so Stewart has been thinking\nabout value alignment you know and I\nhe's concerned about some sort of\nunidentifiable tea problems which which\nI don't I I don't really think those are\nserious problems I mean they it's worth\nunderstanding that that basic point but\nI mean you've to me it doesn't seem\nplausible to suggest that Gary Kasparov\nwanted to lose every single game of\nchess that he ever played but he's so\nstupid that he keeps playing winning\nmoves by accident you know is that a\nplausible explanation for his behavior\nno well why not well because you know\nwell because you know we apply the same\nkinds of inductive principles to the\nevidence that we do in understanding you\nknow in terms of science and you know\nthere are always some kinds of sort of\ncommutation unidentifiable ities but in\nthis case I don't think that's a\nreasonable assumption to sort of flip\nthe sign of both his preferences and his\ndecision-making because you have to\nassume that it in a case where decisions\nare completely obvious then the decision\nmake it will make the completely obvious\ndecision right so in the case of chess\nyou know if there's a one move checkmate\nyou expect the system to make that even\nif it's a not very good chess player or\nnot very good chess program it's\ncompletely obvious then that will\nprovide very convincing evidence that in\nfact the program wants to win or the\nhuman wants to win because they make the\ncompletely obvious\nmove and to assume otherwise right to\nassume that even in completely obvious\nsituations they choose the action that\nin fact maximally violates their own\npreferences just it's not it's not\nsustainable and it also it kind of\ndefeats the entire idea that people have\npreferences at all like if there is no\ncircumstance in which you ever act in\nany way in accordance with your own\npreferences then the entire concept and\npreferences goes away so as so that's my\nlittle response to some of what Stuart\nArmstrong has been saying but generally\nI find what Stuart says to be\ninteresting and constructive and and\nPaul also I think you know Paul has been\nlooking at lots of different stuff and\nthere's the there's the work on human\nfeedback in in terms of you know I\ntraining assumingly a robot to do\nvarious gymnastic things by by\nexpressing preferences saying you know\nthis this is bet this one is better than\nthat one this one's good and that one\nsmells bad and that one you know and\nthat's fine that all fits within the\nframework that we're talking about some\nof his some of his other stuff this this\nkind of sort of recursive self\nreferential version I I would have to go\nback and see if I understand all the\ndetails but that's something that I\nactually talked about maybe five years\nago very very soon after I started\nworking on this back probably 2014 this\nidea that you might be able to use a\nrestricted kind of system so for example\na pure reasoner to to verify the safety\nof a an agent before that agent was\nreleased and so you could have a kind of\na bootstrapping process of developing AI\nsystems of greater and greater power\nbut always be sure that the the next one\nyou you were planning to to create and\ndeploy would be safe and of course it\nrests on some assumptions about the\ndifficulty of this verification problem\nbut you know there's there's quite\npossibly some some interesting results\nto be found along that line and I think\nthis is one of the things that Paul's\npursuing okay so I think your hands is\nthe next in the queue but before him I\nwould like to we have Stewart Armstrong\nwith us today I believe he said he would\njoin so I would like to just ask him if\nhe have any comments or replies to this\nI can't see him here but I don't think I\ncan see more than a limited number of\nparticipants so Stewart are you there\nnope okay so I can say I discussed this\nprecise point with Stewart previously\nand Stewart believes that human the the\nresearch agenda if you couldn't say in\nhuman compatible are quite close to to\nwhat your Armstrong is pursuing and it's\ntrue that sure Armstrong doesn't believe\nthat you can you using something like\nOccam's razor find a unique\ndecomposition of what the humans policy\nis and what the ration from the policy\ngo to both split that out into values\nand and rationality but then you just\nneed to make some normative assumptions\nlike the the principle in the\nin human compatible and and with these\nstrong normative assumptions then the\nproblem mostly goes away so I believe he\nbelieves that you are very much on the\nsame page at least that's my summary of\nStewart oh yeah\nthe principle so Chris Joni AK has a\nbook called minimal rationality which\nsays okay well if you can't be perfectly\nrational you know what are some minimal\nconditions on any system to even\ndescribe it as a sort of quasi rational\nagent and one one of them is this\nprinciple that in in the simplest\npossible decision making situations that\nyou do in fact do the right thing and in\nthe simplest possible inference problems\nthat you you make the correct inference\nso and that kind of gives you an\nanchoring on which you could build the\nrest okay so then you had a question\ndo we like maximize the utility of\nsadist but specifying that in a\nconsequentialist framework because that\nsounds like because it would be optimal\nand would be what we want if we knew\neverything perfectly of the consequences\nokay so the the issue with we say yes\nwhich by which we mean so so let's go\nback a little bit so that the the simple\nmodel of people is that they have\npreferences with respect to themselves\nand then preferences with respect to\nothers and this is something that\neconomists talk about you know the idea\nthat you can kind of separate\nin the trick your intrinsic well-being\nwhich you know just simply you could\nthink of as do I have enough to eat do I\nhave a warm place to sleep you know am i\nphysically safe on my healthy these are\nall self-regarding preferences and then\nyou could say well you know I also have\npreferences that other people have a\nwarm place to sleep that other people\nhave enough to eat you know and\nparticularly strong preferences for my\nchildren and my other family members and\nso on and perhaps less strong for people\nin other countries and a sadist would be\nsomeone for whom those preferences have\na negative weight right which means that\nthey would actually spend their own\nmoney reduce their own intrinsic\nwell-being in order to increase the\nsuffering of others right so that's what\nyou would mean by say list and and\nHasani talks about this when in the\npaper that introduces his version of\npreference utilitarianism then he\nbasically says that you know and he\nrealizes this is a problem for a\npreference utilitarian that if if your\npublic policy position is we're just\ngonna you know maximize this the sum of\nutilities for everybody if someone has\nextreme negative weight on on the\nwell-being of others someone is an\nextreme sadist you're going to end up\nallocating some public resources to help\nto satisfying those preferences to the\nextent that those can be satisfied in a\nnet you know as long as there's a net\ngain the public policy would end up\nsatisfying that person stated statistic\npreferences at the expense of other\npeople and higher the more of a status\nthat person is the easier it is for\npublic policy to satisfy them because\nyou only need a little bit of suffering\non the part of other people to give a\ngreat deal of happiness to the sadist\nand so he basically says that if you\nwere that type of person then you are\noutside the social contract\nall together right you don't you don't\ndeserve to have public policy take your\npreferences into account and so you just\nzero out those types of preferences and\nthat seems to me at least on the face of\nit to be a reasonable solution and and\nI'll come I'll come back to why I think\nit's a reasonable solution but there's a\nthere's a second set of preferences\nwhich I also talk about in Chapter nine\nthese these relative preferences where\nI'm happy you know I have let's say I\nhave a you know a nice umbrella well I\nlike it's nice to have a nice umbrella\nbecause it keeps you dry and you know it\nshakes off easily and it doesn't break\nwell that's great but maybe I'm happy\nalso because my umbrella is nicer than\nother people's umbrellas right and that\ntype of happiness functions in the same\nway as sadism because I get more of that\ntype of happiness if other people's\numbrellas are shabby and cheap and and\nso on relative to mine and so the\nquestion would be well should we also\nzero out these relative preferences and\nthat's a much more radical proposal and\nI don't you know so I think this is a\nresearch question you know we have to\nsort of start thinking through yes and\nit's actually much more complicated than\nthat because it's not just I'm not just\nhappy because other people have shabbier\numbrellas it has to be that they\nperceive my umbrella to be better than\ntheirs and I perceived they're\nperceiving so I derive some self\nsatisfaction from that right so just\nhiding my umbrella away and say oh I\nhope I have a nice fella\nright this I don't get as much of my\nsocial status jollies from that so it's\nit gets quite complicated and you have\nto figure out you know are there ways\nto just kind of get rid of the negative\neffects of these relative preferences\nwithout actually completely ignoring\nthem in the way the AI system makes\ndecisions on behalf of multiple people\nso this is something that I don't yet\nunderstand well enough to answer so is\nthis a deontological I mean it's so\ncoming back to the question why why is\nzeroing out the Preferences of the\nsadist a good idea I think it's you one\ncould look at this in the following way\nright so suppose that the AI system is\nacting not on behalf of humans but on\nbehalf of a randomly generated\ncollection of other AI systems that have\nsort of randomly generated preferences\nsome of which are positive towards other\nother agents and some of which are\nnegative towards other agency too and\nsadism is just as prevalent and just as\nas has just as large weighting factors\nas altruism you know that wouldn't you\nif those were the only entities that\nexisted the entire notion of being\nbeneficial would kind of go away anyway\nright so I so I think that in order for\nus to be doing this whole thing in the\nfirst place it has to be that the notion\nthat most people should be altruistic is\nkind of built-in to the entire\nenterprise so that's you know that's why\nI think zero you know sort of saying\nokay we're going to exclude these\nsadistic preferences is a reasonable one\nso I don't know that it's necessarily\nsome arbitrary ideal the ontological\nprinciple if you want to call it that\nfine\nbut I think it's sort of a it's a\nprerequisite for even engaging in in\nthis idea that we're going to build\nbeneficial AI in the first place do you\nthink that you're saying is basically\npositive experiences efforts that is the\nmost fundamental thing that you need to\nstart and so I didn't quite catch that\nso the thing that we need to have\ndiscussion is to at least declare\nbasically that are the experiences of\nconscious beings what we care about and\nwe want them to be positive yeah yeah\nokay so the next question is from Alexi\nsearchin who asked the question when do\nyou expect we will have human level\nmachine intelligence and super\nintelligence AI and I find out that this\nis something you refuse to answer in the\nbook you didn't want to give the 10 50\npercent 90 percent confidence levels but\nbut but I agree that this is a really\nimportant question and one of the big\nthing that makes people very about and\nthat you say is that there is an obvious\nrisk of being misquoted and if you add\nsome qualifiers then most likely people\nwill just ignore every qualify you say\nand then if you say 2069 it will end up\nin a some of these graphs here and you\ndon't want that so and that's of course\nfair enough so we came up with something\nwith a statement that what you say here\nshouldn't be quoted externally and if\nyou if you are quoted externally then\nthat should be a misquote under these\nquoting rules and you'll be willing to\ngive some estimates roughly how is your\nprobability distribution on human level\nmachine intelligence and super\nintelligence\nwell so I mean in the in the book I\ncould explain this story you know we I\nwas operating under such a set of rules\nChatham House Rules\nand then the newspaper just went ahead\nand ignored the Chatham House rule and\npublished my name and and and sort of a\nextremely distorted quotation so I don't\nhold much faith in in the the fact that\nthese rules are going to be respected so\nI but I you know I I think I said in the\nbook that you know what I said was I\nthink it's I think it's quite likely\nwithin the lifetime of my children so\nyou can and I said that course thanks to\nadvances in medical care\npossibly produced by AI my children\nmight live a lot longer than we do so\nit's a fairly elastic prediction but you\nknow some people argue for example you\nknow so so Oren Etzioni effectively took\nthe same data in fact so he he didn't\nlike the answer that he was getting that\nthat Bostrom was getting that people\nthink that AGI is going to happen and\nyou know mostly sometime around the\nmiddle of the century so he didn't like\nthat answer so he ran his own survey and\nthen he declared that anyone who put a\nnumber which was more than 25 years in\nthe future was effectively saying it was\nnever going to happen and therefore\nthere was absolutely nothing to worry\nabout so you know and I pointed out that\nthat included Nick Bostrom and so you\nknow Aaron was claiming that in fact the\nexperts are not at all concerned about\nthe risk of AI and his evidence was that\nNick Bostrom is not at all concerned\nabout the risk of AR and Stuart Russell\nis not at all concerned and Bill Gates\nis not at all concerned because we know\nall those people\nit'll probably take more than 25 years\nwe're not at all concerned about the\nrisk of the I so that's so so I think\nthat you know a lot of people think it's\ngonna take more than 25 years myself\nincluded and you know I I base that on\njust how many more difficult things\nthere are that we have to figure out and\nsort of how quickly we have figured out\ndifficult stuff in the past it's um you\nknow and it's so I certainly don't\nbelieve that we basically have the\nanswer and we just need bigger machines\nand more data and then we'll get there\nand I I don't really understand I mean I\nknow there are people who say that some\nIlya sutskever for example last years\nthat said we had five years and you know\nand I I just don't understand because\nthe the qualitative behaviors of the\nmachines are not really changing I mean\nyou come to give an example right so you\nlook at these language models and people\nsay oh my goodness look no look at GPT -\nit's it's sort of you know it's it's\nlike 90 percent of an intelligent human\nbeing because it produces text that you\nknow ninety percent of it kind of looks\nlike sentences that intelligent human\nbeing would say but you know if parrots\nhad a little bit more short-term memory\nthey would generate sentences that sound\nlike what human television being say and\nchatbots do it all the time and they're\ncertainly know we're close you know\nthey're not ninety percent of an\nintelligent in being there\nzero percent of an intelligently being\nso the ability to generate plausible\nlooking text has nothing to do with\nintelligence and you know it's a little\nbit like the you know the Ptolemaic\nastronomical system\nright so so Ptolemy you know and and his\nfollowers and others you know developed\nan actually pretty accurate predictive\nsystem for saying you know where other\nplanets going to appear at any given\ndate and it had cycles and epicycles and\nepicycles on the EPI cycles and so on\nbut it had nothing to do with what was\nactually going on right it was an\nentirely superficial description of a\nsolar system and that's basically what\nwe're currently doing with you know with\nsystems like GPT - so I think we've got\nbig qualitative gaps and those big\nqualitative gaps take time to to fill\nand they're hard to predict so you know\nbut I I on the other hand there's a lot\nof smart people working on it most of\nthe smart people realized that these\ngaps exist and they are trying hard to\nfill them and I I don't see any reason\nto think that they're not going to be\nfilled so the the issue is how sure are\nwe that we all have solved the control\nproblem and also sort of disseminated\nthe solution to the control problem in\ntime to avoid potential negative\nconsequences okay then we have a\nquestion from Tessa Lou says hello would\nyou like to state your question first\nI don't know if the silver is here hello\nalright so I was thinking about\nmisinformation and but because like\ncurrent systems can't like identify\nmisinformation and I was thinking about\nhow dozen III what would an intelligent\naltruistic AI do to trust information\nthat its creator gives it so when you\nsay the crater you mean the the AI\nresearcher who who designed the system\nin the first place or the system and\nbecause um in the book you talk about\nthat you would have to be designed a\nsystem that it's um useful to the person\nwho owns it or that has it but um the\nsystem should also be optimized to help\nothers and if I say to the AI to get me\ncoffee um I think yeah I would have to\ndo a lot of research first to look up if\nother people and conflict with me\ngetting coffee\nwell so it would have to yeah and it\nshould it should consider you know what\nare the possible negative effects on\nothers you know and it's easy for us to\nsay well coffee you know getting coffee\nbecause it's something we do all the\ntime everyone does it all the time you\nknow it can't be that bad\nbut you know replace coffee by a\ncigarette or you know and then you've\ngot secondhand smoke or you know some\nopioids or whatever so you have to you\nknow do some counterfactuals and and and\ncertainly you don't want to just say\nwell anything I asked you to do must be\nokay so I'm just gonna do it regardless\nof you know without checking for\nnegative consequences for anybody else\nbut you know so but I think that's not\nquite do you know that's not quite the\nmisinformation issue I think the\nmisinformation issue is is a significant\none but you know it in in paralytic\nreasoning systems it so it's almost a\nkind of a you know it's a rule one of\nbest practices in building power\nballistic AI systems is to separate the\nevidence variables from the truth\nvariables so for example if you're you\nknow if you're getting data from a\ndevice that measures the temperature of\na patient in in the intensive care unit\nyou have one variable for what's the\ntemperature the patient and you have\nanother variable for what is the system\nreporting the temperature to be right\nbecause the the measurement process can\nalways introduce error and sometimes\nit's completely wrong right if the if\nthe therm um\nsorry I think Stewart I'm not hearing\nyou seems so yeah there might be some\ntechnical problems I can't see him in\nthe chat either so okay so I can use the\nthe break to just give a few practical\ninformation next week we will be reading\nlet me just see if I can find the text\nso I might have will be saying that hold\non the roll of cooperation in\nresponsible AI development and that will\nbe on Wednesday in precisely in\nprecisely one week and I can see the\ntime is almost up Stuart Russell needed\nto leave in in five minutes anyway so\nthis might be it if he comes back then I\nexpect there's only time for like one\nextra question and there are four\nquestions as far as I can see there is\nmy question on quantum computing there's\nChris question dragans question and\nMattias question if he meant just to\nreconnect\nwho should we an arleigh also had a\nquestion so there five questions what\nshould we post to him if\nif he comes back\nChris Ali is voting for you Chris Chris\nwhich of your four questions would you\nask him hold on\nright just since I would like to ask him\nI think I'd like I'd like to ask the one\nabout his principles being adopted by\nofficial bodies okay we'll ask him if he\nreturns in the meantime let me see if\nmaybe he because nothing so yeah yeah so\num that's he's in the chat no that's too\ndamn strong a difference - unfortunately\nOh fortunately I shouldn't say\nunfortunately hello Stuart we are\nmissing the other Stuart he went out of\nthe chat maybe he his computer crashed\nso he has been here and have answered\nour questions for 55 minutes\nand and now he is gone so he's back\ngreat okay so I think I was just talking\nabout ya measuring temperature in the\nintensive care unit right so it's it's\nperfectly natural that you write your\npermeability models to allow for the\npossibility that you know the\nthermometer becomes detached and just\nreports room temperature just as you can\nyou know if you were reading tweets from\na particular tweeter who who is known to\nlie almost all the time then you know\nthose tweets are just what they say\nright there is a sequence of characters\non the screen that's the evidence what\nit tells you about\nthe actual world is a separate matter\nand to to turn it into a belief about\nthe world you have to kind of reverse\nengineer it and and that's that's where\nyour model of how trustworthy someone is\nor whether their tweets have been hacked\nby some other entity etc etc all of\nthese things have to be taken into\naccount as as they are by people when\nthey're absorbing information on the web\nand I think actually AI systems can be a\nlot less gullible than humans because I\nthink we haven't yet adjusted to the\nvolume of false information that the\nworld is currently generating so you\nknow there are there are some difficult\nissues with respect to preferences\nbecause people have an incentive not to\nbe honest about their preferences if you\ndon't set the set up the protocol in the\nright way and also not to be honest\nabout their abilities so they would they\nwould pretend to be more helpless than\nthey really are in order to get more\nresources from from the machine so these\nare things that game theorists you know\neconomist mathematical economies love to\nthink about and there are in fact a\nbunch of papers talking about how you\ndesign the mechanisms so that people\nhave an incentive to be honest in\ndescribing their own preferences but\nthis is it is hardly a new a new problem\nok so I think the time is just about up\nso I would like to first of course\napologize to all the people who have\nsubmitted question that we did not have\ntime to and finally I would like to say\nthank you to Stuart Russell for joining\nus and to the rest of the reading group\nI hope to see you next week", "date_published": "2020-01-08T21:42:55Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "06191742342997fd3f852ebacdc61c8f", "title": "123 - Robin Hanson on AI Skepticism", "url": "https://www.youtube.com/watch?v=QbHzxHsnAtk", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "initiate conversation and if I could ask\npeople to mute the microphones please\nI'm here I was just waiting until the\nsign down dead Sunday died down sorry\nanyway so I'm not sure how you'll manage\nlike who initiates a question and how\nsomebody raises their hand but I presume\nyou've worked that out before so I'll\njust trust you to know what you're doing\nnow I'll follow your lead it's great to\ntalk to you all I figured I should just\nhave a very short introductory outline\nyou probably know what questions you\nwant to have ask and what you want to\ntalk about we can just get into that but\nlet me just set the largest framing I I\ndefinitely believe that eventually there\nwill be powerful AI powerful software\nthat will be much more powerful than we\nhumans are today collectively and even\nindividually if you can divide it out\nthat way I definitely believe that\neventually growth rates will increase\nalthough they eventually have to\neventually later have to slow down there\nwill be faster growth and faster growth\ncould come relatively Stud suddenly in\nthe sense of within less than a current\ndoubling time of 15 years we could be in\na whole new regime of much faster growth\nand that a some sort of artificial\nintelligence is is probably the most\nplausible explanation for a next faster\ngrowth mode that we might enter into\nartificial intelligence definitely has\nthe potential to to be different not\nonly in terms of its ability but in\nterms of its preferences you know the we\nare somewhat flexible in our preferences\nwith respect to culture and culture has\nchanged just over time and we are now\ndifferent people than our ancestors were\nin terms of our preferences and that our\npreferences can be summarized in part\nhas ancient human preferences that were\nevolved in humans long ago say million\nyears ago and then more recently\nculturally imprinted preferences that\nare somewhat the results of cultural\nselection over the last few thousand\nyears\nand recent events of course objective\ncreated recent cultural values and we\nshould expect that our descendants will\nalso differ from us in many values ways\nboth because they will just be in a\ndifferent world and they will have\nadapted to that because there's just\nrandomness and random change random\nvalue drift if you will and it's an open\nquestion just how much they will have in\ncommon with us value wise and I\nunderstand that many people very\nconcerned about that and would like to\nnot have our descendants have different\nvalues from us I have this book on edge\nof em all about very human-like\nartificial intelligence and what that\nworld would be like that's not the only\nscenario it's a scenario explored and\nthere are other scenarios of less\nhuman-like software at the moment I've\nbeen working on a project to try to\nimagine what the future after m's would\nbe and where human-like and non-human\nlike software would win and I do think\nthat human-like software has a long\nfuture and can win in many places and I\ncan talk about that if anybody wants and\nI so I think and I think non human like\nsoftware will win in other places I\ndefinitely think that eventually\nthere'll be a concern that any one piece\nof AI could be out of control I mean we\nwe've always had to worry about any all\nof our devices of staying in our control\nthere's more of a concern in the long\nrun for more powerful things being out\nof control I I'm relatively skeptical\nabout the scenario where all the sudden\none thing gets out of way out of control\nas opposed to a world where many many\nthings are slowly becoming harder to\ncontrol or you know and so I'm tend to\nimagine a world where there are many\nroughly equal equally powered things and\nas an economist imagine world where no\none's in charge no one's setting\neverything there's more competition that\nproduces the outcomes and I think that\nit's just too early now to do much work\non how to keep future things in control\nI think you'll have to see roughly what\nthere's\ntruckers are like and what their roles\nare like and what the incentives are\nlike and what the key accountability\nmetrics are etc before you have much of\na hope to to think of useful ways to\nkeep things in control but then anyway\nso that's roughly my overall set of\nattitudes and interests and things like\nthat and that's probably enough to get\nus started and I'll let whoever is the\nmoderator here decide who talks now okay\nthank you for your presentation\nso if people could write in the chat if\nthey have any interesting questions and\nthen I will start out with the first\nquestion so my first question was back\nwhen I first read the AI Foom debate\nbetween you and Elliot Kowski there\nseemed to be a people what you were\ntalked about beside each other in that\nIliad cast was claiming that that a firm\nwas dangerous because it happened so\nquickly an intelligence explosion was\nsomething that that was so quick that we\nwould not have much time to prepare and\nyou were arguing that was not so likely\nbecause it was that that a very local\nfool was was unlikely so that to me\nleaves open the question of a global\nFoom in that an intelligence explosion\nthat is rather broad but also very fast\ndo you think that's like oh so I was\ntrying to signal my beliefs on that when\nmy initial comments to say that I do\nthink that rapid change is possible and\neven likely so that the next best guess\nnext doubling time for our successful\neconomy would be say monthly today we\ndoubled roughly every 15 years so that\nwould be a much faster growth and that\ncould happen you know any time in the\nroughly the next century or two probably\nnot in the next two decades but possibly\nand that that would be literally if\nthat's driven by artificial intelligence\nthat's faster growth driven by smarter\nmachines if that counts in your mind as\na global phoom fine it's even to me that\na lot of the concerns were focused on\nthe one machine that takes over the\nworld in a weekend sort of thing and\nthat if that's not your snare you you\nhave different concerns you want to do\nsomething different about it\nto clarify until someone else has\nwritten another question oh my my worry\nwas about a Foom that took on the order\nof months to half a year or something\nlike that\nwhich is basically too fast for us to\nfor humans to do much other than\npre-planned precautions well I would\nimagine whatever happened is the results\nof many organizations the world having\naccess to powerful machines and then\nusing those machines as they intended\nand you know if that took a year that\ncould still be you know a decentralized\nscenario where no one of them has great\ninfluence then the question is what what\nis it about that snare that worries you\nis the key question so in the Foom\nscenario the simple the local Foom\nscenario the scenario is there's one\nmachine that takes over the world in the\nweekend and then every it controls the\nworld it makes the world do what it was\nthat was the presented scenario and that\nit's values would be pretty hard to\npredict from that context that and that\nwas the thing that would make you worry\nthere but in a world where many many\nparts of the world have are growing\ntogether fast the question is what\nexactly is your concern in that snart I\nthink I will return to that in the\nmeantime Robert has a question that\nwould be hard yeah anyway there so so\nthere's this there's this mistake that I\nthink people make and I know that I used\nto make of kind of imagine a GI\nsuddenly coming into existence in our\nworld\none that looks very much like our well\nand I think your position if I'm not\nmistaken is that by the time we have AGI\nthe world will look fairly different in\nthat we will have all of these other AI\nsystems which are almost AGI or you know\nthat are like not not not as powerful\nbut there will be a large number of like\nexisting powerful systems in place\nalready and so this like Foom where one\nthings suddenly gets a decisive\nstrategic advantage over the entire\nworld becomes unlikely and more likely\nto be like a multipolar scenario where\nthe the new system has to like take into\naccount the the values of all these\nother systems but like I can see two\ndifferent possibilities there one of\nthem is yeah that there's all of these\nexisting like capable systems which\ncould keep the new one in check the\nother one is that like you might have\nkind of an overhang where if you have a\nlot of narrow but very powerful systems\nthose could be co-opted or taken over or\nyou know hacked or whatever by a new\nsystem that just gained the that's only\nreally good at like hacking and and\npsychology and strategy and that kind of\nthing yeah yeah I'm wondering which what\nseems more likely I would regard those\nother scenarios as other variations on\nlocal food maybe elaborating that\ncategory makes you think the more likely\nor more concern but the key difference\nis does one thing grab control of\neverything or are there many sources of\npower and influence that are roughly\ncomparable so right you know the default\nthe multipolar scenario the default\nscenario that I tend to think is far\nmore likely is that there are many\nsystems in the world that grow in\ncapability but no one of them is vastly\nbetter than all the rest nor does any\none of them suddenly grab control of all\nthe rest right it doesn't grab the\nfinancial markets and win all the money\nit doesn't grab all the military and and\nhave all the bombs it doesn't grab all\nthe computers and and\ntherefore you know etc right that the\npeople are already people have always\nlong been protecting their things from\ntheft and that there's no sudden Foom of\ntheft to allow one thing to grab\neverything through whatever channel\ntheft war clever persuasion interpretive\ndance yeah okay yeah cuz I'm just kind\nof thinking about it in the same layer\nof overhang that people that people are\ntalking about like hardware overhanging\nall around overhang would be a like a\nwhen you would get one little thing that\nallows you to also get a whole bunch of\nother things in essence it's a\nrefreshable threshold effect so\nthreshold effects can certainly allow\nmore variance and ability but the\nquestion is just how much variance are\nwe talking about here we certainly see a\nlot of threshold effects where say any\none company gets a new innovation and\nallows that company to beat its\ncompetitors out for a while you know\nwe're talking about a threshold that\nallows one thing to take over the world\nso that's a vastly larger threshold than\nwe've ever seen right and so I thought\nthis will give me just a chance to make\nmy side comment I mean I decided to\ninstead of making longer speech at the\nbeginning to have some other speeches\nready when this topic came up so one\nlittle speech that I have ready is just\nthe story that innovation isn't\ncontinuous it is lumpy and therefore\nthere could be big lumps but we have a\nlot of data on the distribution of\nlumpiness of innovation and at least by\nsay academic citation of papers that\ndistribution is pretty constant across\ndifferent academic areas so it's\nactually a pretty robust distribution\nand in the standard distribution most of\nthe weight and innovation is in many\nsmall lumps but every once in a while\nthere's a big lump and a key question is\nwhat's the chance of an enormous lump in\nthis area obviously a priority the odds\nof that are pretty low if you think it's\nnot different than all the other areas\nwe've ever seen so then the question is\nwell what are the arguments for thinking\nthat this area is unusually different in\nmaking us expect there to be a really\nbig lump just at this point where the\nphoom sort of thing would happen\nI I guess should we go to other\nquestions or yes I think Ashwin had a\nquestion following up on the global film\nidea yeah so I guess pretty similarly in\nthis vein\nI guess so it's so like it seems a\nlittle bit unobvious to me that just\nbecause the like level of resources is\nlike fairly similar across different\ngroups that therefore there's no\ndecisive strategic advantage like\nthere's this idea of like\noffense/defense balance for example and\nof offense/defense scaling which i don't\nknow if you've seen from alan Defoe I'll\nlink to the paper here which basically\nargues that like it's possible depending\non the technology and depending on the\nsituation\nfor more resources coming in to either\nallow for more decisive first strike or\nto allow for sort of defenses to better\nprotect against a first strike and it's\nnot at all obvious to me that's going to\nbe the case in like every area or most\nareas that resource investment do day I\nwill be defense scaling rather than\noffense scaling if you want on a frame\nit in those terms like allowing for\nprevention of getting your you know\ntechnology hacked or resources still\nunder whatever rather than the reverse\nand so it seems possible they you\ncouldn't fact have this kind of\nsnowballing and doesn't seem that we've\nhired any particular sort of large lump\nnecessarily so much is just like the\nability to have like a moderate\nadvantage that you happen to be able to\nlike leverage and really I would call\nthat a lump if you happen to stumble\ninto a regime where you get a scaling\nadvantage then you end up accumulating a\nlump the question is how often do such\nlumps appear I think often in these\ndiscussions there are just two very\nthere's two different styles of analysis\none is to look at you know historical\ndata sets and ask what you know how\noften is there something like this event\nand another is to just look in the space\nof abstract mathematical models and say\nwhat fraction of the space of models\nlooks like this and usually the space of\nmodels has much larger fraction of very\nconcerning scenarios than the space of\nactual data so for example if you look\nat economic growth models\nyou know abstractly mathematically it's\njust as likely that more growth produces\nexcel it makes growth accelerate as it\nmakes it decelerate but it almost always\nthe way we see in the actual world\nthere's very rare to have accelerating\ngrowth scenarios that at least last over\nvery large scopes and extremely common\nto have decelerating growth scenarios\nsure um and similarly for this\noffense/defense thing I mean yes in\nprinciple you could have an offensive\nabout instead of let's one anything take\nover the world but we've had a lot of\nhistory of offense defense and that's\nalmost never been true on the large\nscale I'm sure but I guess I call points\nin that when it one is like this is a\nslightly different point but related\nthat like it seems like resources are\nlike pretty lumpy like if you look\nespecially if you imagine like I don't\nknow like it seems fairly obvious that\nif you have this kind of economic growth\ntype effective AI it's gonna be like\ngovernments are very interested and\napplying it to like military type\ntechnology and there in particular it\nseems like maybe like nuclear weapons\nare a big concern like there's some\nevidence now that like improved like\njust like data analysis type stuff is\nproviding a greater chance of like a\ndecisive nuclear for a strike and it\nseems like that's like a particularly\nlike you know wait it's gonna happen\nthen you could have like a sort of\nability of military dominance coming\nfrom like a relatively small advantage\nyou know obviously offensive advantages\nsometimes happen and they happen\nsometimes happen in history and you\ncould look at our history to find the\nthings closest to that and the most\nconcerning and you know first strike\nmillet nuclear war might be the closest\nto stay analog into that but you know to\nleap from that to say that therefore an\nAI will kill the world in a nuclear\nstrike is you know a big jump right it's\njust it's it's itemizing the extreme\nmost extreme possibilities you can\nidentify but you should admit that you\nhave search for the most extreme example\nand if you look at the rest of the\ndistribution it's much less extreme than\nthat I mean I guess but like it's also\ntrue that like you are going to have\npeople like trying to sample from like\nthe extremes of like the most powerful\nthey can get\nlike but that's always been true that's\nthat's been true for thousands views so\nthe distribution we see is the result of\npeople sampling as best they can for the\nmost extreme things and rarely is\noffensive win on the scale\nsure yeah I don't want to close too much\nso a lot of the people jump in okay then\nthis question is from Chris oh yes hi\nI think basically it's perhaps been\nanswered already so the picture I get\nfrom you is that you regard an AI where\naccelerated growth\nsorry Foom happens globally or in lots\nof different areas as being\nintrinsically potentially it could be\nintrinsically safer because they're\nlikely to be checks and balances between\ndifferent centers of of AI basically is\nis that it in just the way that\ncompanies and cities and nations I want\nto distinguish the kinds of concern you\ncould have so if your focus is on one\nmachine taking over the world min a week\nand I'm much less concerned about that\nbut now I have another one of my\nstandard speeches at this point which is\nto say there's another concern that I\nhave to get say is overwhelmingly likely\nto happen and there's probably not much\nyou can do about it which is just the\ngeneral fact that so far in history each\ngeneration has had values of different\nfrom its immediate ancestors and that on\nthe long run looks like a random walk\nand that's that you can't predict it\nvery well and that it seems very hard to\nprevent that so that I think your\ndefault expectation for the future\nshould be that when values can change\nthey will to some degree and that's\nroughly a random walk and if you don't\nlike where that random walk tends to go\nwhich is everywhere you don't like that\ndefault future and that's a very hard\nthing to not have happen I think that is\nthat's a default way the generic future\nwould play out what it's decentralized\nis that there'll be some degree of value\nplasticity and\nvalue change and it will just follow\nthis large random walk and so if you\nthink that's a problem then I say you\nyou do have a problem you have a really\nhuge problem a problem that seems almost\noverwhelming to me I mean in the sense\nthat it seemed relatively little chance\nto avoid it well a couple of points\nabout that one is its it seemed to me\nI'm thinking about this in the past it\nseems yes okay\nof course future generations will have\ndifferent values from our own and we\nmight be horrified by them just the same\nway our ancestors would be horrified by\nsome of our values and really what you\ncan do is perhaps say well let's try and\nensure that the next generation or two\nare going in a way that we can endorse\nbecause and let's hope then that our\ngrandchildren will can just make sure\nthat their grandchildren have fun break\nand endorse and so on that's what people\nalways been doing and people have always\nbeen trying to make sure that you and\ngrant share their values that and their\nvalue drift we see is the result of you\nknow those efforts yeah so our problem\nis that that that could that is going to\nchange really fast its technology and\nconsequent cultural values it was just a\nhigher level climate is just anything\nthat you're worried about over the\nlong-run happens faster when changes\nfaster so I think you know the high\nlevel bit to know about the future is it\nwill be world a faster changed in the\npast and so any generic concern you have\nthat would have taken a thousand billion\nyears in the past will encompass within\nless than a thousand years in the near\nfuture so you know that's that's just\nbecause change is speeding up ok can I\ncould I depress mate just one other\npoint which is which is connected that\nand that is that you refer to\ngenerations trying to limit control and\nconstrain later generations and applied\nto intelligent agents in the future you\nseem to be implying that our AI success\nreally have as much moral right to go\ntheir own way as we would say that our\nown children and grandchildren have a\nright to go their own way and that\ntherefore we should sort of we shouldn't\nbe worried about being about a is being\nour successes\ndid I read too much into what you said\nwell I usually try very hard to avoid\nmaking moral claims you know I usually\ntry to say anything else I can closely\nrelated to a moral claim without\nactually making the moral claim because\nI prefer to keep my analysis space of\nfacts rather than making bald more\nclaims than if they find that I can't\nsupport or anybody else can support so I\nmight more say you some people think\nthat AI is more plastic and values than\nhumans are and some people think AI is\nlikely to drift farther faster away from\nour values than say MS or humans would\nI'm not so sure of those things I think\nwe will in fact make AI a lot like us\nespecially when they take on the\nhigher-level roles that we usually take\nand that will include giving them on the\nsurface at least values that look a lot\nlike our values they will change and\nevolve in results dispatcher's but then\nso will that's the values of our\ndescendants so I I think there's less of\na difference there that many people see\nbut of course that's all separate from\nthe moral value posts i I do think if\nyou imagine a world full of AI and you\nimagine two different worlds full of AI\nand they're almost the same except one\nis full of empty zombies and the other\nis full of lively things with feelings I\nmust prefer the second world you know\nand and I and I and I don't like it\nI'm very put off by the scenario of\nmaking sure there are no no moral\nproblems by making a vast and universe\nwith only a few humans off in some\ncorner then whose values matter anyway\nyou know yeah\nokay I think the the next thing that was\nwritten the chat was early but it didn't\nseem like a question so unless you my\ndarling we'll go to Matthias Matthias we\ncan't you\nthat's just your microphone might be\nmuted hello or is it my scribe oh yeah\nokay so you have your question please\nhello Mitch hi um so oh you did raise\nthe point that having a universe filled\nwith lively creatures does seem better\nthan just having a bunch of humans on a\nrock and I agree broadly with you I mean\nmaybe if we did wind up making a bunch\nof paperclip maximises and there were\ntrillions of them and they were happy\nperhaps that's not such a bad thing but\nwhat about saying more negative value\ndrifts if we wound up creating some sort\nof scenario that was say substantially\nworse than the Malthusian state surely\nthat is something we should try and\nstruggle against even if we have small\nodds of doing that like creating a world\nwhere it's filled with unhappy creatures\nis there is there a particular scenario\nthat you think that would generate that\nor is it merely the the logical\npossibility of it that concerns you I\nthink it is the possibility yes just\nbecause space is vast and so is time so\nit seems like something like that will\nwind up happening and that does scare me\na lot and I would want to try and see if\nsomething can be done about that or make\nit a little less likely and hence\nworking on say value alignment and how\nto make sure that things don't go\ncompletely and insane\nseems like a worthwhile\nto do well or do you think it's just so\nplease me at the at the largest level my\noverall attitude is to not think I or\nany of us have that big of an influence\nover the future so first I expect I mean\nwe're in a world today where no one's in\ncharge and our world wasn't the results\nof any committee meeting a while ago and\nnobody voted for this world and it just\nhappened and mostly the future is going\nto be happening in that same way and so\nI don't expect I or any of us have that\nmuch power over where the future goes so\nI think my our first priority is just\nopportunistically guess where is it\nlikely to go and then ask given the most\nlikely scenarios we're on the margin\nwould we like to push it a bit because\nthat's probably all we can do so I don't\nat all think in terms of like what's the\nspace of all the possibilities and what\nare the worst possibility and how can I\ndesign a world that prevents the worst\npossibilities that just seems way beyond\nmy or anyone's capability to you know to\ncontrol the universe that carefully so I\nwould much more interested in like\nparticular scenarios by which you think\nthings could go badly and especially if\nthey are plausible scenarios well we've\nhad lots of arms races before so is it a\ndifferent kind of arms race than we've\never had that's concern are the typical\narms race in the past the wing or about\nor a different one rating creatures\ngenerating creatures specifically for\nviolence well that's who we are so you\nand I are creatures generated for\nviolence right\nsort of but I feel like there's a\ndistinction that can be made here\num you we have violence as part of us\nyes but surely a lot of us is just built\naround sort of I don't know if I call it\nstable but some sort of social system\nwhere we don't just try and murder\neveryone or take his resources yes but\nthat's that's the winning violent\nstrategy is to cooperate within one\nviolent side to be the most effective at\nbeing violence against others so that is\nwe are predators and we're social\npredators and we're good at being social\npredators which means we cooperate with\nin our internal alliances against the\noutside so and that which we should\nexpect winning predators and soldiers or\nfighters in the future to share those\nfeatures now that's the generic winning\nstrategy got every fight of everything\nagainst everything is a really stupid\nstrategy and it will always be a stupid\nstrategy okay sort of I mean I for all\nthese questions I think we can come back\nto them in the sense that like they're\nall open-ended questions that there's no\nway we can answer it all of it but so\nit's up to you how long to spend on each\none and then we can cycle back if we got\ntime I'm happy yeah backing down then\nlet's hear what Matthias has come up\nwith if he has managed to fix his\nmicrophone I can hear extremely with\nMikey\nsorry I need to turn it way off\nokay sure what is it better now whoever\nsaid that is great okay it's great now\nyes it is okay so it seems to me at\nleast from what I gather reading your\nblog posts that you generally seem to be\nfavored of a very progressive economic\npolicy where you try to you know\nmaximize the amount of wealth a society\ncan generate however you've also stated\nhere that you very much believe that\neverything that was bound to happen is\nhappening in an ever you know\nincreasingly fast rate to me this seems\nlike a very dangerous combination as a\nvery much like a lot of time to think\nabout issues such as artificial\nintelligence before having the issue\nactually arrive it seems to me that you\nknow a bigger amount of wealth is not\ngonna advance the speed of which we can\ndecide on these questions faster than\nthe speed at which these questions will\nhave to be decided upon did you not see\nthis as a very big argument against a\nprogressive economic policy\nwell again I I see myself and everybody\nI talked to as pretty limited in our\nability to change the world what I\nmostly could convince somebody is to\nincrease some local growth rate of some\ncity or a nation perhaps or some\nindustry I don't have much influence\nover the growth rate of the entire world\nand if you're worried about you know\nlearning to prevent problems that the\nmuch easier thing would be to focus on\nthe people working to prevent that\nproblem and increasing their budget and\neffort rather than trying to slow down\nthe world because slowing down the world\nis just really hard right it's a huge\nworld and you're really small but if\nthere's a small part of the world\nfocused on a problem you could make that\npart bigger yes okay that seems very\nreasonable that you know as a single\nactor it's it's like unreasonable to\nexpect to be able to fix the issue\nthrough that way and a much efficient\nway I much better way with you to try a\ndifferent approach let's say you had a\nswitch where you could simply on a\nglobal scale decide this would you press\nthe switch to slow down the world yeah\nI very I'm reluctant to so first of all\nI think actually a lot of the current\nenergy and and desire to want to analyze\nthese things is premature that is I\nthink the world has to get bigger and\nmore knowledgeable before we're ready to\ndo much of this problem-solving that\npeople are eager to do now so in that\nsense growing a little faster would just\nget us to the point where we could start\nto worry about the problems but then at\nthat point you might be wanting to slow\nthings down so that you could work on\nthem faster but at that point I'm not\nsure how much you could slow them down\nokay sir argument would be that slowing\neconomic progress down now would simply\njust you know extend the amount of\nsuffering we have to do by living now\ninstead of the future and we're not\ngonna be able to significantly alter\npositively alter things such as\nartificial intelligence at the current\nsociety we live in now is that correct\ncuz we just don't understand these\nproblems we'll have to do them it's just\nnot time to be doing much about them\nwhen it is time to do much about that\nlittle because you're near and then\nthat's the moment you might want to slow\nother things down and if you could maybe\nyou should but um it'll be pretty hard\nat that point to slow them down because\nyou'll be near and there'll be all these\ndifferent people out there who could\npursue them do you not feel like this is\nlike someone and admitting defeat\ntowards for the same risk of artificial\nintelligence you mean but you're saying\nbecause I wouldn't push the button to\nslow down the world well it's just that\nthere is no button to slow down the\nworld right you know that no no sure\nsure sure\npractic in practical terms I understand\nyour I'd very much like to write up a\nfew of my thoughts in before continuing\nthe conversation but I found a very\nenlightening thank you sure okay then it\nseems to be me who sticks in line and my\nquestion is in AI safety there's an\nargument for or in extension twist\nbasically that the future is very very\nlarge you could imagine\nI think boström has calculated something\nlike 10 to the 6 teeth life years\nthat is possible in the universe and of\ncourse people who focus on existing\nAI safety they believe that this makes\nit very important but I guess even if\nyou consider only like one in a billion\nchance that you can do anything\nsubstantially prevent an existential\ncatastrophe then I guess you need to\nmultiply one in a billion with ten to\nthe sixteenth or something how do you do\nthat in a principled way I mean you only\nhave a limited amount of time yourself\nin energy so you mainly get to allocate\nyour time and energy to different topics\nyou can't scale up yours by a factor of\na billion you just don't have that isn't\nas a knob that's not one of your options\nso you can ask where do you want to look\nin time and in topic to focus your\nenergies and the fact that the future\nwill be very important is certainly a\nreason to do what you could about the\nfuture relative to other things you\ncould do something about and then when\nyou're trying to help the Foat future\nyou have another parameter which is sort\nof the unlikeliness and drastic Nisour\nthe scenario that you're focused on so\nif you think there's only like doing\nlittle tiny things that hardly matter or\npreventing extinction then of course you\nmight say well sure even if preventing\nextinction I can't how much of a chance\non it maybe that's still really\nimportant to do which makes sense if\nthose are the only two options but I\nreally think there's a vast range of\nother options for things you could do to\nthink and help about the future you know\nhonestly the first order of business in\nthinking about the future is just have\nthe foggiest idea what it looks like we\nare still at very primitive stages of\neven outlining roughly what various\nfutures could look like and I think it's\nvery hard to be very useful about\npreventing the worst possible scenarios\nwhen you don't even have much of an idea\nwhat the typical scenarios look like I\nknow that you know as abstractly that\nseems like the priority you should find\nthe worst scenario and then focus on it\nbut there really is a vast space of\npossible work scenarios and you don't\nknow how to judge which are more likely\nthan which other ones until you know\nwhat the typical scenarios look like\nthat gives you much more information\nabout knowing\nworse scenarios are the more likely ones\nto focus on okay but actually had a\nquestion while you're swallowing up on\nthat I'm curious like what sorts of like\nlike signs you think we'll have like you\nmentioned like at some point in the\nfuture maybe you would make more sense\nto slow down once we like know enough to\nbe able to do more so like what what\nsorts of like things would you think\nwould like help us understand this a lot\nbetter than we do now so the scenario is\npeople are worried about our AIS that\nare very agent like and have a very\nbroad range of discretion today and for\nthe foreseeable future automation will\nhave very narrow rules eric drexler has\nwritten about this topic before but and\nI think he's right but you know it's we\nstart with automation in very narrow\nroles and slowly move it up to roles\nwith more discretion\nyou know similarly if you're worried\nabout like foreigners coming in and\nmessing up what's your society you don't\nworry very much if they're janitors or\ndishwashers or things like that if\nthey're if their actions are limited to\nthose roles because it's really hard for\nthem to come in and screw up your\nsociety by being a bad gen and or a bad\ndishwasher maybe as they're a janitor\nthey get to sneak into a room or\nsomething but that's because again you\nneed to have more discretion so I again\nin the future this the concern if you're\nworried about AI being out of control\nand causing problems you're worried\nabout scenarios where they have a fair\nbit of discretion and they are able to\nchoose you know across a wide range of\nthings some of which are important and\nrisky but that's a long way away from\nwhere we are now in the sense that the\neye at the moment is has very limited\nroles and even most people today can't\ndo much to destroy society because most\nof us on our jobs have very limited\nroles in our jobs and that's how we all\nkeep each other accountable is through\nwhat job we do and what metrics were you\nto see who's how doing how well so you\ndon't really need to start worrying\nuntil the AIS you're worried about our\ndoing much higher level jobs that even\nmost people are today there are\npoliticians they are military commanders\nthey are investment Brooke you know the\nventure capital and they have to have a\nbig choice where it then the big choice\ncould go wrong that's when you you could\nyou humans should even start to worry\nabout it I guess like I mean people are\nalready thinking about having like more\nautonomy and like military drones and\nstuff like that it seems possible that\nlike more pretty limited autonomy sure\nsure no drone out there is that risk of\ndestroying the world sure for sure but I\nmean you can imagine that like even with\nlike a relatively like low level of you\nknow you have some like basic\nreinforcement algorithm going on or\nwhatever it seems possible that you\ncould have something where you know it's\ndesigned to track and like do basic\nresponses to perceived like border\nviolations or something like that and\nthat ends up like escalating into a\nlocal board that's obviously not the\nsame skills like essential risk\nnecessarily but it seems certainly\npossible that like a relatively simple\nsystem doing relatively relatively\nsimple tasks could still ultimately end\nup sort of feeding into pretty bad\nscenarios I mean that's just every\nsoldier has that right I mean that's\njust the risk embodied in any soldier we\nalready have to say the risk from AI\nsoldiers are more correlated though\nright that's the least one thing I don't\nsee that necessarily no well assuming\nthat they're all implemented like with\nthe same or similar technology at least\nwithin a given military well I mean so\nfor example if you just distribute some\nmachine to lots of soldiers and they all\nhave the same software and then it all\ngets invaded by the same virus then\nthat's a correlated risk across devices\nright so that's just a standard problem\nwe have with hardware is hardware has a\nlot of advantages in scale economy and\nproduction and an ability to\ntransferring thing but it often has\ncorrelated risks and hardware you know\nthey often in warfare you've found a\nflaw in one one tank and that's a flaw\nin all the tanks and now you get to\ndestroy all their tanks for a while\nuntil they figure it out and fix their\ntanks right mm-hmm\nthat's not that's not a new problem\nthat's the nature of hardware and\nwarfare but another military example\nwould be giving a lot of control of your\nentire military system like your um like\nyour neutrally assured destruction\nsystems handing mozo over to 2 AI\nsystems and that would be the equivalent\nI suppose of having them open and I\nhaving a high status job rather than\nbeing a janitor around right exactly\nsure when you're thinking about putting\nAI and control the entire military\nthat's the point you could be start to\nworrying well that's very different than\nhaving any AI generator you're not\nreally worried about the a janitor\ntaking over the world and destroying it\nall by being a bad janitor you worried\nabout dust be dust beings left in the\nwrong places or you know okay okay yeah\nokay\nJim also has a question about AI in\nwarfare yeah thank you can you hear me\nyes okay well I work in AI at one of the\nbig AI companies and often say that the\nconcern that I'm most worried about has\nnot so much to do with AI safety in the\nsense of super intelligence or AI Foom\nbut the looming economic incentive to\ntake humans out of the loop with my\ncompany goes to great pains to say that\nAI is augmented intelligence and it's\nabout augmenting decision-making but if\nwe well I think I also by the way work\nin the DC area very close to George\nMason and I working with a lot of\nmilitary folks and I see times coming\nwhen there's going to be more and more\nAI controlled battlefield robots and at\na certain point it seems very likely to\nme that there will be a very strong\nincentive to take a human out of the\nloop when a firing control command and\ncan be made faster by an AI without a\nhuman in the loop\neven though nobody in their right minds\nonce you know armed lethal AI you know\ncompletely in control out of their own\ndecisions on the battlefield at a\ncertain point\nprobably going to be there to make that\nhappen can you comment on that I make\nsense I that is you know as you know of\ncourse our our introduction of\nautomation in the military has been\nquite slow and gradual right\nwe don't suddenly like introduce a super\nsoldier who does everything we automate\na particular thing we add more autumn\nyou know faster automation capabilities\nin a particular high in a gun or\nparticular kind of missile but etc and\nabout that at some points you will you\nwill have a capable enough automation\nand and speed will be a facet of a high\nenough premium that you will automate\nthe tasks mainly because of speed so\nthat's that's a little different than\nthe rest of the economy speed is usually\nquite such a premium but in the military\nspeed is an enormous premium and just be\nable to even make pretty dumb decisions\nreally fast it can be a win and so yes\nthey will they will put those in the\nloop and that will usually win and\nsometimes it'll go badly because the\nautomation is fragile and and not very\nrobust and so yes that that's the main\ntrade-off there's a middle wide you know\nspace of scenarios where it's a win and\nthen there's the tail of distribution\nwhere it's a loss and you have to be\ntracking the tails and the trails are\noften hard to see and hard to collect\ndata on and so you will often make the\nmisjudgment of estimating the tails\nbelow and they turned out to be higher\nthan you thought yeah I think the thing\nthat concerns me there is what happens\nin warfare I mean I read something\ninteresting recently that said that yeah\nI guess it was world war two when they\nwere introducing jet technology\nwilly-nilly and without the usual safety\nconcerns that they would have outside of\nwar something like fifteen thousand\npilots died inside the inside the\ncontinental US during that time because\nof the that unsafety and the\nunbelievably rapid development I worry\nthat yeah well that's what I mean in war\nyou've got high you know yes you'll kill\na lot of people by going fast but then\nyou'll kill a lot of people by going\nslow too and you know that's where\nyou're stuck in war time young yeah so a\nlot of people will get killed as a\nresult of the enemy using automation on\nyou and a lot of people on your side\nwill get killed on the result of using\nautomation sloppily into fashion yeah\nthat'll happen\nand they don't really see much general\nto say about it that's just been the\nnature of hardware and war for a long\ntime agreed\nyeah I just think that the one scenario\nthat does kind of concern me is is one\nwhere you know where we get into an AI\narms race in warfare in time of war that\nthat seems like the one potentially\nlikely scenario that trips us off into\ncompletely uncontrollable AI but but but\nI think that's about the ability of AI\nto generalize that is we would have an\nAI arms race in business if we could you\nknow I'm less worried about an AI race\nin warfare because I just don't believe\nthat even though it's warfare and you\nwant your ads to be better that means\nyou can make them better fast that's\ncertainly true simili in business you\nmight want your a eyes to be better\nfaster but they just don't get better as\nfast as you want yeah and they don't\ngeneralize as fast as you want to\nthat'll be true and warfare you'll have\na you know a tank ai and you want it to\nbe how its Rai and it just won't turn\ninto a Howard 3a dammit it'll just stay\nthe tank ai yeah yeah absolutely\nand just observing what what we do\nwithin the company I can see that\nthere's a good handful of researchers\nthat have little pet projects on the\nside working towards AGI but all of the\nall of the financial incentives to spend\nour efforts on the narrow a is I don't\nas those words exactly those those exist\nthe AGI is you know theoretical at this\npoint right exactly and for a long time\nto come very likely though I've seen\nsome fairly plausible designs that could\nhave asleep possible designs going back\n50 years people have been looking at\ntheir plausible AGI designs for a long\ntime it's an old hobby good point thank\nyou okay Chris had a question about when\nboon know that when we'll know when AI\nis imminent\nyes I'm just wondering if you have any\npicture in your mind of what it will be\nlike will it be just like an\naccumulation of narrow AIS in all sorts\nof areas of life and so when they were\njust sort of turn around and I noticed\nfor the first time that that we've got\nsomething we would like to call AGI or\nmaybe will maybe will never say that\nwe'll do what we did with chess-playing\nprograms and things and just say okay\nwell it's what we're doing is it is\namazing but now we've got it I mean\nnobody would call that intelligence so\nagain you're looking for a lumpy\ntransition to more generality that's in\na sense when you're asking what you're\nsaying how will we know when AGI is here\nor about to be here yeah but it need not\nbe that lumpy so if you look back over\nthe last 70 years of computer science\nwe've definitely had a lot of advances\nand some of them at a little bit lumpy\nbut the vast majority have been pretty\nsmall lumps the our abilities and\ncomputer science haven't actually jumped\nin big leaps very often and that's true\nlooking on one axis of ability as it is\non looking an axis of generality it's\nreally quite rare to have a big\nsurprising leap in generality so I think\nthat's what you should expect for the\nfuture you should just not expect this\nsudden moment when everything turns\ndoubt to be vastly more general than it\nwas a moment before you should be seeing\nthe rate at which things were getting\nmore general and you should be able to\nproject that into the future and that'll\nbe a pretty good guide to how fast\nthings will get how general mmhmm I'm\njust thinking um I'm working with a\ncollaborator who we're trying to do a\nchildren's book explaining AI safety and\nit just occurs to me that in a dramatic\nstory you need a lumpy you need a lovely\nstory ya know smooth transitions yes and\nthat's been a problem because a lot of\npeople's intuitions have been honed off\nof science fiction and dramatic stories\nwhere there's the one big lumpy\ndevelopment that drives the story yeah\nyeah I have a lot to answer for\nokay thanks okay then my question would\nbe back in the original ai Foom debate\nyou made a number of comments I believe\nsome very bold predictions about the ADI\nproject called SiC cyc I think the\ndid not turn out sick didn't seem to be\nthe strong way forward we have incomes\nin there when when the AI Foom debate\nwas happening psyche was already a very\nold project so I I can't have been\nmaking predictions about psych at that\npoint because it was long past psyche\nwas I mean I could be talking about\npsyche just as a system and the kinds of\nsystem it is compared to other systems\nbut I'm I'm surprised you could find me\nas making a prediction about psyche at\nthat point maybe I'm but your overall\npoint seemed to be that the kinds of AI\nprojects that were built up with a lot\nof content were more likely to be\nsuccessful compared to AGI projects that\nwere trying to build on some kind of\narchitectural great idea\nand that that is another way of talking\nabout lumpiness that is how lumpy our\narchitectural insights if architectural\ninsights are not very lumpy then it's\nequivalent to there being lots of what I\ncalled content if there's a single\narchitecture that's really different\nfrom anything before and it makes an\nenormous difference then that's a very\nlumpy innovation it's an innovation in\narchitecture and so yet the key question\nis about the distribution of lumpiness\nis lumpiness in innovation in AI and\ncomputer science more generally and so\nyou know I have a number of lines of\nargument there one is just we have\ngeneral data about lumpiness and\ninnovation in general we have lumpiness\nand innovation as represented by say\ncitations of papers we have lumpiness in\nthe history of computer science and even\nAI more specifically and on all of these\nthings I think relatively consistently\nshow that the vast majority of\ninnovation is in relatively small lumps\nand big lumps are relatively rare and\noften big lumps get more credit than\nthey deserve so for example recent\nadvances in deep learning a lot of it\ncan be attributed to a big increase in\nthe amount of hardware devoted to using\nmethods that we had for a while ago and\nif you correct for the hardware actually\nthe\nin you know it's nearly on target for\nthe kind of abilities you would have\npredicted I believe you had in the FM to\npaid a you made some kind of outside\nview argument to show that you expected\na true AGI to be something like 100\nyears ago to remember the argument you\nmade 100 years ago no sorry and 100\nyears in the future sorry one line of\nargument that I've given although I'm\nnot sure it was in the AF room debate\nbut I may have been at that time was the\nis just a survey that I've made of AI\nresearchers where I've met them and I\nbasically asked them in your field of\nexpertise in the last 20 years how far\nhave we come\nI think it's more reliable to ask people\nwhat they've seen in the past then make\ntheir guesses about the future it's more\nreliable to ask them about their this\nfield they know best and ask them about\nbroad overall trends and in very large\nfields that they don't know very much\nabout so the usual way of doing things\nto ask people how much progress will\nthey think the entire world will make in\nthe next few decades\nwhereas I think it was more reliable to\nask an AI researcher in a particular\nfield how far their field has come in\nthe last 20 years and the only\ninterpretive part I ask them is to say\nhow far have we come as a fraction of\nthe distance to human level abilities in\nyour field and that then they have them\ngiven it to me as a percentage in the\nlast 20 years what how far have we come\nas a fraction of human you know toward\nhuman level abilities in the last 20\nyears as a fraction and then sort of the\nmedian answer I get is five to ten\npercent and then I have the follow-up\nquestion any noticeable acceleration\ntypically not and then the obvious\nextrapolation from that is to say well\nthen we're talking two to four centuries\nyes actually I was wondering if dr.\nHenson tell us about well few dr. Hansen\ncould tell us a bit about your work and\nthere and whether you're doing any any\nresearch work at projects etc well like\nI indicated\nthe beginning I I have this grant from\nopen philanthropy and I'm had it for a\nlittle over two years now and the pitch\nwas that I would analyze an AI scenario\nlike I analyzed age of M and I basically\nsaid let's let's assume that the\npatterns we've seen in software for the\nlast seven years our reliable\nindications of the future so how will\nyou predict the future of software if\nyou were just to look at the past\npatterns of software and say that that\nwould continue and so I've been\nstruggling but coming up with some\ninsights I think into what I can say\nabout what the world future world of\nsoftware looks like it and more you know\nsince the time I started that I've lean\nmore toward the question I've imagined\nthat nm show up but we also have non M\nsoftware which kind of software wins\nwhere and what is the world look like\nafter the non M software gets really\ngood are there any M's left that are\nthat are competitive and can do jobs you\nknow you know more cost-effectively than\nthe other kind of software and so I have\na number of things I can say I think\nabout what that world looks like they\nthere's not nearly as many things as I\ncan say about the age of M because that\ncould say so many things about that\nbecause they were very human-like but I\nhave a number of things I can say I'm\nproud of figuring out and then it would\ntake you know I don't know if I were ten\nminutes here if you wanted me to walk\nthrough the things I think I can say I'd\nlike that very much I think you entered\nit a little earlier about areas where\nhuman-like intelligence is likely to be\nsuccessive successful rather and areas\nwhere non-human like yeah I might be\nsuccessful right so but right now just I\nwill defer to the organizer here to\ndecide how much time to spend on those\nsorts of things because that would take\nme a little speech a fine time well we\nsaid that we had planned one hour and\nthe hour is almost up but I'm very\ninterested in hearing that so unless\nanyone objects and then I would like to\nhear some more any objections please go\nahead ok so the challenge is just to\nthink of software as a general phenomena\nand ask what general things can we say\nabout it that we could use usefully to\nprotect into the future\nespecially as by comparison with the\nsoftware in our brains so that one of\nthe simplest things I can say it's very\nsimple it's based on something called\nConway's law Conway's law says that when\nan organization has a structure and then\nit makes software that has a structure\nthe structure of the software tends to\nreflect the structure of the\norganization there are three\norganizations that are working together\nto make software well the software will\nhave three modules one for each\norganization if you've ever worked in\nsoftware that sounds kind of obvious but\nit has a dramatic implication it says\nthat in the future as we replace say\nhumans with software we could end up\nwith the world that looks a lot like our\nworld in the hot largest-scale\nstructures because the organizations\nthat replace us with software will end\nup creating software that reflects their\nstructure so today we're in a world with\nstructures like you know jobs tasks jobs\ndivisions firms industries cities\nnations etc and if we slowly swap out\npeople for software on each of these\nthings we could end up with a world that\nmostly has these things done by software\nbut still looks a lot like the world we\nlive in at those larger scales\nso that's just one interesting thing to\nnotice about the inertia of software\nstructure a second thing to say is that\ntoday when we look at human jobs we can\nsee each job is composed of tasks and we\ncan ask for a task what other tasks tend\nto be done in the same geographic area\nas that task and those tasks tend to be\njobs that are trying to coordinate with\nthat tasks so tasks tend to be collated\nco-located in space and even co-located\nin firms when they are tasks that need\nto be coordinated more with each other\nand if you look at that network that\nnetwork has a clump it has a center of\nthe network where that where of the\ntasks that are highly coordinated with\nmany other tasks and those tasks\nactually tend to be done in city centers\nand the most compy ones tend to be done\nin the biggest cities and they also tend\nto be done higher and/or\nmusicians and we expect that as we\nautomate tasks we will automate the\nperiphery of this network first that is\nif you have a task that has very few\nother interfaces it's easier to automate\nthat task because you you you will only\nhave to change a few interfaces whereas\nwhen you have a task that interfaces and\nhas to coordinate with many tasks when\nyou automate that task not only you have\nto change how you do that task you have\nto change all the interfaces or you have\nto coordinate with all the other tasks\nthat is coordinating with and change how\nthey deal with their tasks - so that\nsays that we will automate slowly from\nthe outside in in the sense of this\nnetwork so we will automate rural jobs\nbefore city center jobs and we will\nautomate jobs lower and organizational\nhierarchies before we automate jobs\nhigher in organizational hierarchies and\nthe same principle can be thought of and\nsay even inside a brain or human if you\nhave a human doing a job when you\nautomate the job you'll probably tend\nmore to keep how that job interacts with\nother jobs the same and change the\ninternals of that job and you may even\nend up making you know systems that look\nlike humans except you automate the\ninsides differ so like if you think of a\nhuman brain is composed of a thousand\nmodules you might be less you might less\nchange how those modules interact with\neach other and more change the internals\nof each model so the general principle\nis just when you have this network of\nthings struck in some sort of\nhierarchical structure you more often\nchange the internals of a structure than\nthe interfaces and therefore if this is\na network you more turn changing the\nperiphery of the network relative to the\ncenter and so that says that you know\nit's similar to I said before we'll\nspend a long time automating peripheral\ntasks that are relatively isolated that\ndon't have enormous impact and the last\nthings we would automate if we automated\nthem would be the center of this network\nwhich is the jobs that are most\ncoordinated in the city centers high\nlevels and organizations today like you\nknow marketing management law you know\nthings like that a governance so so\nthat's the second thing to say I think I\ncan say somewhat robustly\nand the third thing I have to say which\nis the the third and last thing I think\nis that in order to ask where do human\nsoftware win relative to other software\nwe need a simple model of what is the\nessential difference between the kind of\nsoftware in our heads and the kind of\nsoftware that we write and my best guess\nfor that essential difference is to say\nthat the software in our brains did not\nseparate out some things that we\nseparate what we rights offer to date so\nthat the machines we write today on our\nown computers we separate hardware and\nsoftware we separate learning and doing\nwe separate memory and processing these\nare just standard things we separate and\nwe make them in different places and\nhint you know we can swap things out\nright we could swap out a different\nmemory of the same processing etc in the\nbrain\nthese things were all mixed up and in\nparticular hardware and software is\nmixed up that is you don't have a\nseparate place you store software that\nyou can swap into any particular\nhardware and each place in the print\nbrain is both hardware and software so\nthis has a dramatic implication for the\nevolution of the software in our brain\nit meant that when evolution was trying\nto change the software in our brain it\ncouldn't do what we do now when we write\nnew software so today wouldn't we a\nhuman write new software the obvious\nthing we do is we start with a blank\nscreen and we just start writing new\nsoftware and then we connect it to other\nsoftware as we desire in terms of you\nknow we don't like to connect things\nbecause that makes things less modular\non the other hand when other stuff\nalready does a task then that's better\nto connect it to something already does\nit then we don't have to rewrite that\nand that's how we write software but in\nour brain when evolution was evolving\nyour brain it couldn't do that in the\nsense that it had a limited set of\nhardware and it had a very you know\ndifficult time adding more hardware and\nall the hardware it had was already\ndevoted to other tasks so all it could\nreally do was try to cram it a little\nmore hardware or delete some old\nhardware in order to replace it with new\nhardware or find some way to reorganize\nthe pre-existing hardware software\ncombinations in a better way that it\nwill allow a small addition to do the\nnew task\nand so evolution having this strong\nhardware constraints meant it meant\nspent a long time searching in the space\nof reorganizations it just couldn't rely\non modularity so much because it just\ncouldn't add more software anytime it\nwanted because that was tied to hardware\nand so the human brain is just naturally\nmuch less modular and much better\nintegrated and so that right there has\ndramatic implications for where human\nsoftware wins and where it loses\ncompared to other software so the other\nsoftware we write because we can make it\nmodular much better because we can just\nstart with a blank screen we use\nmodularity all over the place as a way\nto make stuff work and then report to\navoid problems we modularity is\nbasically our strongest tool and in\nusing modularity however we don't search\na long time for the very best structure\nwe find the first structure that occurs\nto us that's pretty good and then we\njust go with it and it turns out of\ncourse in the longer run as we have to\nevolve a system and add more things and\nchange it the structure we chose wasn't\nthat great and this means the system\nslowly degrades in its ability to handle\nnew changes and so we have the Commons\nphenomena software rot by which software\ndegrades as we make it more complicated\nand we try to change it to adapt to\nchanging conditions and it rots faster I\nthink than the software in our heads\nbecause in the software nur has we just\nevolution spent a really long time\nsearching for really good combinations\nof things that work well together that\nare robust structures that would\naccommodate a lot of new changes and so\nthis tells us that say the software\nthat's humans that software that\nsoftware writes will probably be even\nworse in these regard software squared\nif you will and so that'll limit how\nmuch we have software writing software\nand humans will will be best suited for\ntasks when there's a whole bunch of\nthings that need to be coordinated\ncarefully together where modularity\ncan't solve the problem and software\nthat we write is much better suited to\nsituations where modularity works better\nwhere you can just have a bunch of\nseparate parts\nwork on separate pieces but and then\nstaple them together and it kind of\nworks and so this is also telling you\nthat humans will last longest at the\njobs that need were a lot of different\nthings to be coordinated in the center\nof the network of tasks were lots of\ndifferent tasks need to be coordinated\ntogether and the software we write\nourselves will again work much better on\nthe periphery of that Network where\nmodularity works much better and rot\nisn't less of a problem and you can just\nreplace things after they rot and start\nall over again and so again we have this\nimage of a world where there's a lot of\nsoftware but mostly human-like minds are\ndoing the high-level general abstract\ntasks where there's a lot of different\nlittle things that have to be\ncoordinated carefully together so I just\nbeen talking for a while hey I must have\nsomehow lost I'm not sure if I set it\nwell or write correctly if there's\nclarifying questions this is please ask\nokay I have a question if we try to make\na mathematical model of this slowly slow\nprocess of automating these tasks well\nif you imagine that let's say just for\nthe sake of it that 50% of the tasks\nhave precisely one interface and those\nare the easiest ones to automate so we\nautomate those first and then after all\nthe easy tasks at the periphery of the\nnetwork has been automated then\nobviously there is a lot of extra\nresources people who are no longer who\nno longer need to work at at these tasks\nand then there's a much greater\nincentive to automate the paths that\nhave to interfaces and then this this\neffect could accelerate quite quickly\nwell you want to automate them the\nquestion is whether you can again when\nwe write software that relies heavily on\nmodularity\nit's just very hard to find a really\ngood structure\ndoesn't rot fast that can integrate a\nlot of things and the human brain\nsoftware is is the software that rots\nmuch more slowly and integrates far more\nthings and that makes it just\nincreasingly expensive to even try to\nautomate the center of the network tasks\nyou would just you you can try you will\ntry but you won't succeed for a long\ntime they should they're just hard yeah\nmy point was that the the incentive to\nautomate them grows enormously as we\nmove towards the the center of the\nnetwork well I'm not sure if the\nincentive relative incentive grows that\nmuch so I mean a key point about the age\nof a book that I keep pointing about\nabout em is in the age of M which is\nthat once we have brain emulation z'\nthen hardware advances improve ms just\nas well as they improve artificial\nsoftware the software right so from that\npoint on it's only architectural design\nimprovements that give artificial\nsoftware an advantage that is you have\nto search in the space of software you\ncan write to make it better so so far in\nyou see in our world it's just the fact\nthat hardware is getting cheaper so fast\nthat drives the increasing urged\nautomate things even if hardware it\ntakes a lot of hardware to do something\nit's pretty soon hardware is cheaper and\nthen you just do it but once you have MS\nthat stops being true you the hardware\ntrend no longer drives the change thanks\nfor taking the time and given us as\ninsight so it makes a lot of sense just\na quick follow-up if you have time do\nyou have any thoughts or can you share\nany thoughts any way on where you see\nyour research going in the future\nwell I'm personally relatively\nopportunistic about asking which things\nare the most important and where I want\nto go with that i I mean I I think I\nwill write some sort of book based on\nthis work that I've just described but\nI'll put it in a larger context that's\nprobably more engaging and so I'm you\nknow asking myself which context to\nchoose for those things and then beyond\nthat I you know I'm some\nolder in my career and I have lots of\nthings ideas from the past that that\ncould I could go back and build on so\nthe more of an installed base you have\nof old project ideas and old projects\nthat you partially built the more\ntempted you are to go back and finish\nthose than to start new things yes so\nthat'll probably be some I do a lot of\nthanks very much which means I'll be\nless surprising because I'll be more you\nknow going back to things you are I've\nalready done and you can see the sort of\nfun question you obviously were very\ndeeply involved in the whole prediction\nmarket thing I don't know you might even\nbeep have invented I don't know yes and\nI'm just wondering if you have any any\nwages bets or prediction market\ninvestments or whatever in the area of\nAI ah yeah I just don't think we're\nabout to see a huge I mean I think we're\nnear the peak of the current boom and\nwe're gonna have a bust again we've had\nboom and bust cycles gonna go over a\nlong way back so this is the wrong time\nto buy this is the time to sell I see so\nwhether that's the short run of course\nyou know these these bump and by cycles\nhave been roughly a period of 30 years\nso 30 years from now it's time to expect\nthe next peak of the next boom yeah and\nhave you put any money on that oh well\nI'm I'm not buying\nthere's no sell I'm not buying into AI\nat the moment I'm I'm when I'm\nconversations like this I say like think\nabout the long run don't really worry\nabout you know are we there is this\nalmost time nowhere it's not if this was\nalmost time you'd definitely being\nseeing a lot of big differences now we\nare definitely night not right near the\nthreshold but the long run is it remains\nthere and then this is you know a good\ntime to talk about the long run as long\nas everybody's thinking about the topic\nnow okay thanks okay had a question\nI don't think said I was just being a\ncouple comments thanks a lot for doing\nthis this was like very very interesting\nI'm happy to talk to you guys again if\nyou want and I'm really surprised you've\ndone 135 of these meetings that's like\nevery week for two years or something or\nthree years yeah yeah okay well yes we\nhave I only had a question and I think\nthat'll be the last question for tonight\nyes so I just wanted to ask about the\nage of M you sort of made a few\nsimplifying assumptions like assuming\nthe society would be sort of stable are\nyou going to follow up with that idea\nand try and see what the biggest like\nwhat the biggest branches would be the\nmost likely places where things would\ndiverge rapidly the largest sources of\nuncertainty are you going to try and see\nif there's anything you can say about\nthat well the the two main drivers here\none is just that when I wrote a Jif em I\njust got really obsessed with it and\nfocused on it and then when I finished\nthe book I couldn't quite turn that off\nand so I kept collecting a file of\nadditions and then that was put into my\nupdated paperback version and then even\nthen since then I still keep thinking\nabout it because it's in my head and but\nthe thing of the direction I most often\ntend to go is people say they'd like\ntheir fiction in the future in the form\nof fixture and and you know could could\nI set up a fictional scenario that would\nmake this more vivid for people and so\nthat's the direction I'm mostly gone\nwhen I think about hmm things is what\nsort of characters what sort of plots\nwhat sort of conflicts etc could I put\nin this world so that I could make this\nworld more vivid for people you know\nI've certainly I've got the overall\nmessage from the world that this was an\ninteresting exercise but that it was\nunusually focused on one particular\nthing the world isn't that interested in\nexcept for the fact they liked the fact\nthat there was just an integrated thing\nabout one scenario\nso I'm not feeling enormous pulls and\nrewards from people who want to know\nmore details about the age of them most\npeople I said say that that was a lot\nmore detail than they wanted to hear\nabout any future scenario on there\nthey're surprised didn't even impress\nthat I managed to put so much detail\nabout it and that's kind of a planning a\nflag showing that you could say a lot of\ndetail but still most people said they\ndidn't really want to know that much\nabout it so they don't there's not much\ndemand out there for for for more that I\ncan perceive although you know if there\nwas a fictional this scenario I could\nimagine there being more demand for you\nknow a sequel to a fictional stars\nbecause people then like to hear about\nthat and in the process of elaborating\nVictor Lawson re would of course\nelaborate some other aspects of it you'd\nbe thinking about it but that's where it\nstands now the idea of doing fiction\nabout it seems really interesting but I\nI would be worried that in the future\npeople might forget about age of M and\nremember the fiction and then when\npeople want to talk about it they say oh\nlike that sci-fi book huh well I would I\nwould I would only be thinking about\nfictional books that in fact were\nfaithful to the world at the age of M\nwhere it was a real story but it was in\nthat world I'm not gonna I wouldn't have\nall be interested in when dramatically\nchanging the world for the sake of the\nfiction not gonna add magic magic and my\npeople might take the ideas less\nseriously if they if they don't realize\nthat it came from a serious work and not\nfrom a picture like you know I'm much\nmore worried that it will just forget\nthe entire thing so I think I think if\nthere was a fictional star that was true\nto the world that is it was a you know\nthere were characters in conflict but\nthe world they were in and conflict was\nin fact the age of world and world I\ndescribed I I would think that was a big\nwin from the point of view of getting\npeople to take snart seriously fair\nenough but the world needs more books\nlike the age of them I thought it was\nfantastic well you guys are all and\nhopefully you know invited to try your\nhand at such think that that video that\nwould be the way that that age of M\nwould be most validated as if people if\nwere inspired to do things like it okay\nI think there was a nice note to in\nthis is conversation on thank you once\nagain Robin Hanson for coming and\nanswering our questions it's been a\ngreat conversation it's great to meet\nyou all and feel free to friend me or\nwhatever I mean I'm not sure I've\nactually seen the names of all the\npeople in this meeting I'm just standing\nin a blank Skype screen so and I see\nsome funny you know Skype names but I be\nhappy to know the actual regular name of\nall you people great thank you\nthat's terrific yes thank you oh and\nbefore everybody leaves next week's a a\nsafety reading group we will have will\nbe on Tuesday the 4th where we will read\nthe the first half of Mythbusters news\nnew paper whose name escapes me at this\nmoment is it about the vulnerable world\nthe volvo volvo world hypothesis ok so\nthat should be all take care thank you\nhave a nice week", "date_published": "2018-12-03T15:37:55Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "932274025053210e64e20205e586dba1", "title": "87. An Untrollable Mathematician", "url": "https://www.youtube.com/watch?v=ql4Y0-jEKhw", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to the 87th session in\nthe AIA safety reading group tonight\nAbram demske will present his work on a\nnon troll group mathematician which is\nworked on as part of the agent\nfoundation technical agenda in the\nmachine intelligence Research Institute\nthis is a subject that a prime demske\nhas to worked on or at least hey I saved\nyou for at least a decade as far as I\ncan see\nso a prompt asking quick can everybody\nsee the screen I think there was\nsomebody containing in the chat I see\nthis good thing okay I see it I guess\nI'll proceed so yeah yeah so this is\nsomething that I wrote up on the agent\nfoundations for him because it was\nfollow-up to stuff that I did but the\nprobability distribution itself was\ncreated by some Eisenstaedt so it's\nreally sounds work that I'm presenting\nbut I'm going to start by going way back\nin time too and I see this line of\nresearch studying it was August 2014\nso at that point in time neither Scott\nnor I worked at me but Scott and I were\non our way to visit Mary I think that\nwas Scott's first visit to Mary and my\nsecond or third and so yeah so Scott was\nsaying some things about the stuff that\nwe were going to go and talk to Mary\npeople about and so he thought of this\nthing about how you can drive the\nprobabilities up and down\nif you sort of naively have a Bayesian\nreading proofs which I'll explain why\nlater but at that point we had already\nbeen talking a lot with each other about\nhow you integrate probabilistic\nreasoning with logical reasoning so I'm\ngoing to explain a little bit about our\nmotivation there so the reason that we\nwanted to integrate probabilistic\nreasoning with logical reasoning or at\nleast a reason that you can give to do\nthat is because probability theory gives\nyou the tools that you need to think\nabout how to make decisions probability\ntheory is the way that you describe\nexpected value calculations but logic\nhas the best tools for thinking about\nself reference so Miri was interested in\nand still is interested in figuring out\nreflectively consistent decision\ntheories so to do that you need both the\nsort of decision-making tools provided\nby probability and the self referential\ntools provided by logic and in\nparticular a big question was logical\nuncertainty so how do you reason about\nlogic when you don't yet know all of the\nconsequences let's see\noh I see I'm missing yeah I was not\nsaying how the full scheme is written so\nyeah I missed this light so but I\nbasically make of it so it's like yeah\nyou have probability over here which has\na bunch of things inside of it and you\nhave logic over here and the question is\nhow do we take the intersection of that\nso yeah in probability theory logical\nomniscience is usually assumed logical\nambitions means that you assume that all\nof your the consequences 0 your beliefs\nare fully stipulated already from the\nget-go so it's sort of hard to remove\nlogical emissions from probability\ntheory because the foundations of\nprobability theory the kamagra of axioms\nthey all sort of implicitly assume\nlogical admissions in one way or another\neverything is specified in terms of\nevents in your event space and these\nevents are already assumes to be like\nequivalence classes of logical\nstatements so the events have an algebra\nin the Sigma algebra and the Sigma\nalgebra assumes that when you perform\nand logical operations like engine or on\nan event and you get a new event then\nthat event is uniquely specified by\nlogic as opposed to being sort of up in\nthe air about what other events it's\nequivalent to and then well yeah there's\nother ways the logical omissions is\nslipped in and you can read about that\nin the slides I'm not going to try to\ncover everything I wrote down in the\nsides due to limited time but I'll try\nto give an overview\nso their simplest way to see why it's so\ndifficult to get rid of logical\nemissions from a practical perspective\nas opposed from the axioms perspective\nis if you have a bunch of hypotheses so\nyou can visualize the hypotheses as\nputting some probability mass on one of\nmany like any number of observations in\nyour range of possible observations and\nthen when you see a particular thing\nthen you need to know how much\nprobability mass each hypothesis put on\nthat particular thing in order to relay\nthe hypotheses by a Bayes rule so if\nwe're unsure about the implications of\nany hypothesis and when we make an\nobservation then we don't know how to\nread weight all of the hypotheses\nbecause the observation that was made\nmight be in this region where you're\nuncertain about whether it's a\nconsequence or not and so in fact if you\nwere not very careful with this kind of\nthing then allowing hypotheses to not\nfully state better consequences like\nthis can let them essentially cheat the\nsystem and sort of remain in play even\nthough they're not doing any work\nbecause you can't distinguish between a\nhypotheses hypothesis that has simply\nnot yet declared what its prediction is\nversus one which will never declare what\nits prediction it is and so what happens\nis if you let hypotheses be vague about\ntheir implications in this way then\nagain any if you're not careful malign\nhypotheses that are too tricky you can\njust\nstick around forever because they never\ndeclare any falsifiable predictions\nuntil a key moment when they start\nmaking predictions and then cause you\nproblems and of course when they do that\nthen they might lose their probability\nmass but the thing is that there might\nbe large enough of a number of these\nthat they always outweigh the good guys\nwho are like always making predictions\nbecause the good guys are going to be\nlosing some probability mass in the\ncourse of making predictions whereas the\nbad guys are not losing probability mass\nuntil they're messing with you so anyway\nyeah you get a lot of problems when you\ntry to do with this sort of thing and so\nthe challenge is to try to combine the\nability to be uncertain about your\nconclusions with probabilistic reasoning\nso going over it's a logic world to\nexplain why logic is so different from\nprobability and more difficult to\nintegrate with probabilistic reasoning\nthan most people realize so I mentioned\nthis thing of being uncertain of the\nimplications and this is very\nfundamental to logic so in logic you\nhave kind of a situation where you have\nthe axiom which is like a seed and then\nthe rules of inference are sort of\ntelling you how to grow a tree of\nimplications from your seed of axioms\nand this growing a tree of implications\nis never finished and we can formally\nsay like the way in which this is never\nfinished by looking at gödel's\nincompleteness theorem so you can\nvisualize it and this is a visualization\ntaken from Hofstadter you sort of have\nthe white tree of the truth which is\ndrawing from the seed of the accident\nand then you have the black tree of\nlogical contradictions or things you can\ndisprove which is sort of growing from\nthe seed of the negation of your axioms\nand the thing about this tree the the\nvisualization of Google's incompleteness\ntheme is that the white branches and the\ndark branches injure time to intertwine\nso closely that it's not possible to\ndraw a line that separates the two at\nleast it's not possible to do so in a\ncomputable way so it's like right my 10\nminute timer is gone so that's telling\nme to hurry up so yeah so the fact that\nyou never finished drawing out these\nimplications means that these scales of\njustice are sort of always unsure about\nwhere to go the scales of epistemic\njustice of evidence-based reasoning are\nalways sort of in this gap of\nuncertainty between the light and the\ndark and so getting back to what Scott\nand I were discussing so Scott was\nsaying okay what happens if you sort of\nforget about all that and you say well\nI'm going to try to have a probability\ndistribution anyway I'm going to ignore\nthe fact that I can't really tell about\nmy equivalence classes for events should\nbe I'm going to take logical sentences\nand I'm going to put probability mass on\nthem ignoring the semantics ignoring\nwhat those sentences mean and then I'm\ngoing to try to do a Bayesian update\nwhen I find constraints so for example\nif you like think about ever since a and\nthen the negation and the B and it's\nnegation\nand you have some probability\ndistribution that doesn't obey any logic\nthat's over these the combinations of a\nbeing true and false and B being true\nand false and then you update on the\nimplication a implies B so you cross out\nthis square and then you update by\nnormalizing and so you get a new\ndistribution and the idea was just\nupdated on all of everything you can\nprove in that way where you just rule\nout the combinations of your proof rules\nat and try to do logical uncertainty\nthat way so Scott's argument is that if\nwe do this then we can always be kind of\ntricked by somebody who is showing us\nproofs in a systematic way to try to\ndrive our beliefs in a particular\ndirection so what you do is suppose we\nhave some targets and say that we care\nabout then somebody can find a certain\nsay implies B they can kind of be and\nthis B such that a implies B is provable\nand such that more than half or sorry\nhalf or more of the probability mass of\na is on not be so then when we update on\namp lies B we're knocking out more than\nhalf of our sorry half or more of these\nmass and so a just keeps decreasing as\nwe do this so why is it possible to\nalways find a since B which has this\nproperty going back to the picture of\ngriddles incompleteness theorem we know\nthat it's impossible to separate these\nthe trees of the true implications and\nthe\nthings we can rule out any way that we\ntry to separate these two trees we will\nhave some overlap of things that are\nsort of incorrectly classified but if\nyou think about the situation with our\nbeliefs on a implies P supposing that\nyou could always avoid this trick that\nmeans that for any B such that you can\nprove a implies B you must have the\nprobability of B given or yeah you must\nhave the probability of not B given a is\nless than one-half so we can consider\nthe sentence the probability of B given\na is greater than won't have and this\nmust separate truth from falsehood\nbecause or rather it must separate\nlogical truth from logical faucet so it\nmust separate the things that you can\nprove from the things that you can\ndisprove because if B is a logical truth\nthen implies B must always be true but\nwe know that there's no way to separate\ntruth from falsehood in that way because\nof girl's incompleteness theorem so no\ncomputable probability distribution can\never avoid Scots trick completely\ntherefore Scot can sort of play this\ngame with us where I start out my pride\nhas the probability of a being one-half\nScott says oh by the way did you notice\na implies B 1 and I'm like oh yeah okay\nso I update to probability of a is 1/4\nand Scott says also did you notice a\nimplies B 2 and I'm so I say okay no\nthat's 1/8 and then he says implies B 3\nand I update to 1/16 and so on so you're\nalways being driven down and then when\nScott gets bored of playing this game\nyou can start reversing it and driving\nmy probability as far up as he wants by\nfinding B that imply a because the trick\nworks the same in both directions so we\ncalled this trolling in reference to\npeople on internet chat rooms telling\nyou about 30 people on internet chat\nrooms arguing and bad fake saying things\nthat they that may or may not be true\nbut which are designed purely to provoke\na particular response in you this isn't\nthe best metaphor because the things\nthat the troll and this is saying are\nalways true but nonetheless the troll is\nsort of fooling you dragging you in a\nparticular direction by saying false or\nby saying only two things so we can also\nthink of the fishing metaphor where you\ndrag a line and then you're just sort of\ndragging the fish you don't necessarily\ntake it up right away so yeah that's the\nproblem so then how do we solve it so\nmuch more recently January 2018 we're on\na me researcher cheat so Sam Scott and I\nall were in the same bedroom on this\nretreat and so Sam said something I\ndon't remember but my response to this\nwas well last I checked you still think\nyou can solve show ability which you\nhaven't convinced me of yet this was\nsort of an ongoing thing with Sam and I\n[Music]\nso I was reminding him of this claim and\nhe was like well do I think I can do\nthat and he thought for a minute and he\nwas like yeah I still think I can do\nthat and Scott and I were very skeptical\nbecause both of us\ngiven up at this point on there being\nany kind of solution and then about five\nminutes later Sam sort of he paced back\nand forth out in the hallway and then\ncame back in the room and said okay\nhe's the solution so the priority\nproposed is very simple\nso he said suppose that you're drawing\nstudents is out of this bag with some\ndistribution and you're drying them out\nof the bag and some sequence and this is\na very old chicken so in terms of like\nthis is the first thing I had tried back\nin 2011 or whatever when I really\nstarted doing this logical uncertainty\nstuff but he said you enforced only\npropositional consistency on the\nsequence of sentences and you think of\nnature as showing you the things in the\nsequence so previously I had thought of\nI'm skipping around a bit previous lis I\nhad thought of the way this sort of\nprocess works as when you observe a\nthings you only know that it is one of\nthe true senses you don't know which one\nand you're trying to infer the rest of\nthe truth which are like Laden laws of\nthe world and so this probability\ndistribution on drawing things from the\nback which is new gives you a simplicity\nare over the laws of the world and then\nyou give some observations and you try\nto infer what the early laws of the\nworld that would have constrained things\nto be as you observed with bearish but\nSam was instead saying no we draw\nsentences out\nthe bag and we immediately see them and\nthis was a big change and so the reason\nwhy we enforce propositional consistency\non top of that is because propositional\nconsistency gives you constraints\nbetween sentences the expressible in the\nlanguage itself so if I prove that a and\nB can't occur together for example then\nI can express it as the students not a\nand B and so anything that we prove we\ncan kind of take it out and put it as a\nsentence so if you think of things in\nthis way you're ruling out branches that\nyou've proven cannot occur in the tree\nof possible sentences so then why does\nthis work well the probability of any\nsentence s which is consistent with\neverything seen so far can't go below mu\nof s so it can't go below this sort of\ndrying out of the bag distribution which\nmeans and it can't go above 1 minus mu\nof the negation of X so it can't it's\nsort of stuck between the probability\nthat we draw s out of the bag and the\nprobability that we draw not ass out of\nthe back which means that no matter what\nwe see in the sequence so far we can't\nwe can't have driven the probability the\nbest above this certain constraint or\nbelow so we can't jump we can't get it\nabove 1 minus P if not s and we can't\nget it below 1 minus there was we can't\nget it below probability of s which\nmeans we're not rollable so our\nprobabilities are always in this\nsort of comfortable arrange and yeah\nthis it was sort of a very obvious\ntactic adding to the embarrassment of\nScot enacted five minutes before that I\nsaying that it wouldn't be possible\nso yeah this trick of observing\ndistinguishing between updating on X\nversus updating on I observe X is a very\nold one in Bayesian ISM it's well known\nthat this is a critical distinction that\ncan make a huge amount of difference so\nif we have like some fraction we're\ntrying to infer what the fraction of red\nballs to blue balls is and we only are\nonly shown the blue balls then and we\nhave our prior over what the frequency\nis when we can see the blue balls on all\nthe red ones that hidden from us then if\nwe were to update on just ball three and\nball four are blue then this situation\nhere would increase our credence in\nhigher frequencies of blue balls but\nthat isn't right because we know that\nwe're being shown all and only the blue\nballs and there are only two out of the\nfive and so we should update on IC ball\nthree and ball four our blue which\nupdates us right down to two out of five\nour blue and we know that frequency\nexactly so similarly it was natural to\nthink this god strolling trip could be\novercome by understanding the\ndistinction between updating on implies\nB versus updating on observing a implies\nB however we'd had that idea for some\ntime without coming up with Sam's trick\nthe thing is that if you're sort of\ntrying to figure out the natural way to\ndo this\nthen you think of things like well you\nhave to model the TM trooper just\nshowing you things so that when you\nobserve something you're not just\nupdating on the thing you're updating on\nthe theorem provers showing you the\nthing and so if the theorem prover is\ndrilling you in a biased way then it's\nsort of natural to expect that this\nwould let you pick up on that and say\naha\nthe theme prover is only showing this to\nme because it searched for something\nthat would increase my belief in a\nparticular direction and so I'm going to\nnot be fooled but the problem with this\nis how do you model the theorem provers\nit's very unclear how you should do that\nso we never came up with anything\nparticularly satisfying but Sam\nbasically got past this just by not\ntrying too hard to model things in the\nright way he was only trying to come up\nwith a prior it wouldn't be trouble and\nso in fact his prior sort of gives up on\nintuitive things that you want I\nmentioned earlier the inferring the laws\nof the world and he can't do that\nbecause everything is observed so you\nobserve a and you observe B and the\nsubsequent sentences are still mostly\nrandom they're constrained to be in line\nwith the proofs that you know but there\nyou have no ability to infer any\nunderlying regularity so you can't\nfigure out what the laws of the universe\nare and this refusal to adopt very much\nthis refusal to adapt to the data\nalthough it is bad it even more than\njust on troll ability it implies full\nconvergence so troll ability is sort of\na weak statement about convergence like\nor yeah\nunchill ability is bigger than\nconvergence the reason that I was\nlooking at troll ability and stuff and\nregions before was really to show how\nbad the lack of convergence was in the\nsituation that's Casa de so troll\nability is just like really really bad\ndivergence as bad as you can imagine\nbecause you're being dragged between\nzero and one infinitely many times so\nSam's thing actually converges and I\nwon't go through the proof of that here\nbut you can look in the post so that but\nintuitively it's because as you see more\nand more things coming out of this bag\nthen the things that we will never see\ncome to dominate our expectation of\nwhat's next because we're not really\nadapting our probabilities to account\nfor this sort of things that we see so\nour expectations of what follow will\nconverge to a set distribution so yeah\nthat's all I have no clue that was good\nthank you very much and very much", "date_published": "2018-03-14T21:29:05Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "aef8e9bd697f7be40ff231d165b6f53a", "title": "204. Universal Intelligence", "url": "https://www.youtube.com/watch?v=1zroYiCkHiY", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "pdf hello and welcome to the ai safety\nreading group session 204\ntoday we'll be presenting universal\nintelligence a definition\nof machine intelligence by shane legg\nand marcus hutter\ngiven some feedback from participants in\nthe reading group i'd like to try\nsomething somewhat different\nfrom what we normally do which is\ninstead of summarizing each section\nalmost chronologically i'd like to\nemphasize key points\nand make this a rather short\npresentation um\ni'd also like to focus on discussion at\nthe end and i have handwritten notes\nhere so the slides will be somewhat\nsparse\nif i've covered something on a slide and\ni continue talking that's because what i\nhave written is more important than\nwhat's on the slides\nit should be somewhat obvious\nand i have a new photo of an owl which\nis not copyright protected\ni found so that's neat anyway we can go\nto the next slide\nso this is a little bit of background\nabout the authors um\nof the paper one the first author on the\npaper who contributed most of the work\nis shane legg he has a phd in computer\nscience from usi in switzerland\nuh this is the lab that jurgen\nschmitthuber is at\nhe has a bachelor's in mathematics and a\nmaster's\nin technically discreet mathematics and\ncomputer science\num shane started his work in machine\nlearning\nat the university of i believe auckland\nmaybe\nworking on the weka project which is a\ngraphical user interface for\nthese machine learning algorithms\nperhaps it's the wrong place but\nhe worked on the weka project\nand then he did his master's on\nuniversal induction um\nso he also became a co-founder of\ndeepmind during his time as a postdoc at\nuniversity college london\nin the gatsby computational neuroscience\nunit and he received the singularity\ninstitute for artificial intelligence\nprize\nuh which was ten thousand dollars\ncanadian for his phd work\nwe got the next slide\nuh the next author is probably quite\nwell known uh\ndr marcus hutter he's now a senior\nresearch scientist at deepmind\nbut formerly he's a professor at\naustralian national university\nhe's notably known for the ai\nxi model or ai model depending on which\nversion you'd like to look at\nhe also worked at i idsi a usi\nand he was the supervisor of dr leg at\nthis time\nwe can go to the next slide\nso the primary focus of this paper is\nthe following problem and that's what\nnobody really knows what intelligence is\nand that's what they say in their\nabstract\num in watching talks hudder has given or\nnot leg has given on his work\nhe's described the two types of people\nthat talk about uh\nai research in particular and one type\nis someone who works on computer vision\nwhich we'd now probably extend to the\narea of\nmachine learning or deep learning or\nreinforcement learning or other\nalgorithm-based\nproblems whether they're classification\nor something else\nthese people generally will say well\nyeah i'm not really working on\nartificial intelligence i'm working on a\nnarrow subset of machine learning which\ncreates a high resolution images or\nclassifies images or\ngets a car to drive and they don't\nreally claim to be solving intelligence\nthey claim to be doing a\nmuch narrower problem but because of\nbranding or other reasons they just kind\nof say\nyeah this is artificial intelligence it\nlooks good that's becoming more and more\npopular now\nand the other type of person that like\nhas talked about\nis one who claims that intelligence\nresearch is going to be people who are\nworking on creativity or compressibility\nor other\nlanguage that's not well described\nand these people he he has less respect\nfor\nbecause the first kind of people at\nleast know that they don't really have a\nclear definition\nbut the second kind of people claim that\nintelligence is too difficult to work\nwith\nwhich is quite interesting\nso what he did in this paper here is\nhe collected informal definitions by\nexperts and he tried to extract their\nessential features\nas he mentions going through every book\nthat talks about a method of describing\nintelligence is just not feasible\nbecause\nthere's so much other information that\nsurrounds it especially since this work\nwas published in 2007 it\nyou know full text search of entire\nbooks was even slower than\nso we can go to the next slide\nanother problem with intelligence is\nthat it's usually defined with respect\nto humans and that's what he calls\nanthropocentric\nuh it's not that great\nif you can only define it with respect\nto one system\nand if you guys looked at the sections\nthat talk about intelligence\nwhether that be an iq test or a g factor\nusually they're standardized via\nsomething like\nage and again we're not even getting\ncomparison of\nintelligence among all humans it's\ncomparison of intelligence among a\nsubset of humans\nso you're really only able to make\nin-group comparisons\nand the problem with that is that we\ncan't compare the intelligence of\nvastly different systems so he talks\nabout systems that have\ncompletely different sensory features or\ndifferent processing mechanisms\nso we can go to the next slide\nso if you get nothing else from this\ntalk\nplease get the text in the green box if\nyou're color blind i only realized that\nafter the fact it's the only box on this\npage\nthis intelligence measures an agent's\nability to achieve goals in a wide range\nof environments\ni'm focusing on this because we're in\nthe ai safety reading group and many of\nus are\nconcerned with what happens when\nsystems become super intelligent if you\nhave a clear mathematical definition\nmaybe you could define it\nbut this has key implications for\nsomething like\nthe orthogonality thesis which says that\nintelligence and\ngoals are separate so if we all have an\nagreed upon definition\nwe can actually start working on making\ncoherent statements together rather than\njust disagreeing over semantics\ni was going to provide a image of like a\nnon-continuous function and ask someone\nwhat the derivative of it was at this\npoint where it's not continuous\nand then i was hoping some people would\nsay well we can't\nfind the derivative of that function\nbecause it's not defined\nso if nothing else remember intelligence\nmeasures an agent's ability to achieve\ngoals in a wide range of environments\nwe can go to the next slide\nokay so when we talk about\nmeasures it's this uh equation i'm\nprobably\nbutchering the name of the symbol but\nit's upsilon not epsilon epsilon\nuh at least that's what it is in the\ntech of pi\nis defined to be the sum over\nenvironments of two to the negative\nkamal graph complexity\nof that environment multiplied by the\nvalue of that environment\nfollowing some policy pi and pi is the\nagent\nthere's way too much mathematics in\nthere to get it in the first pass\nso we'll try to break it down uh the\nimportant thing to know here\nis that if you want to do science i i\ndon't know if he mentions this by the\nexact phrase\nyou need to be able to define what\nyou're working with and\nhopefully be able to measure what you're\nworking with\na lot of discoveries first require like\nkey insights\nfor instance solving uh getting\nsuperhuman chess\nrequired understanding game trees\nand other discoveries require something\nlike\nfiguring out where the problem is and\nhow hard it is\nas a brief aside the modern day notion\nof probability theory\nwhere we can represent something as like\nuh\nthe likelihood of an event happening is\nrelatively recent only being discovered\nin the 1930s\nuh due to this person kamal groth i\ndon't know his first name\num and he also invented this notion of\nkamalgrove complexity\nso fun fact we can go to the next slide\nso we had\nthis kind of dense mathematical equation\nso let's briefly just talk about it so\nan agent is something that can select\nactions that can move right\nmove left output some text that's\ngenerally what he's talking about\nan environment he represents by the\nletter mu\nhe defines it with respect to\nprobability which i just mentioned\nbut explaining that in a 20-minute\nsummary\nis a little bit more difficult than i'd\nlike to if you'd like we can discuss the\nspecifics of it\nafter the talk during the discussion now\nif we go back to the text definition uh\nintelligence\nneeds to be across a wide range of\nenvironments so they define\nthe environments which as a set and then\nthey just define it to be the set of all\ncomputable environments computable is a\nrestriction that you need if you want\nthis to ever run\nthere are some philosophical\nimplications of it but it's usually not\na problem\ngoals are represented by some value so\nusually\nhow much reward you can get following\nsome\npolicy function it's usually discounted\nin some\nway it can be represented by tables or\nother stuff\nbut it's just representing rewards in\nthis case and then you have this\ncomplexity penalty\ni didn't have time to draw out why the\ncomplexity is two\num but it's it has to do with binary\nrepresentations\nwe can go to the next slide i think\nso the other key idea is that in theory\nyou could compare the overall\nintelligence of agents\num in practice we can only approximate\nthe intelligence of agents across\na finite set of environments prior to\nthe talk it was mentioned that this work\nisn't necessarily\ngrounded uh to\nrespond yeah uh you can't actually make\nthis comparison i made i\ni kind of used this as a almost a\ncounter example you can't\nwith this definition compare the\nintelligence of a calculator or a baby\nin theory you can but in practice you\ncan\nnow a feasible solution to this might be\nto compare the intelligence of a\ncalculator at some task\nversus the intelligence of the baby at\nsome task for instance the calculator\ncan't burp\nit can't be padded on the back because\nit doesn't have a way of\nit doesn't have the sensory system for\nany of these things\ni think we can go on to the next slide\nthe key ideas are that we have a\ndefinition\nand we in practice could or in theory\ncould use it but we can't\nin practice um uh one of the assumptions\nin the model of universal intelligence\nis based on this idea of a reward\nhypothesis so in the beginning\nwe started talking about the the paper\ndiscusses\nuh human mechanisms of measuring\nintelligence so we're only comparing\nhumans within groups specifically\nchildren and then we want to extend it\nto other types of humans so like older\nadults\nchildren of various ages then they\ncompared french children to american\nchildren and different\nintelligence tests had to be developed\nto compare\nthese different groups but\nwe other intelligence work wanted to\ncompare\nanimals and the problem that the authors\nintroduced was that\nyou can't tell an animal what you want\nit to do\nuh you have no way of saying hey can you\ngo fetch me a cup of coffee or\nsit down please stop jumping around so\nwhat\nanimal researchers have tried to do is\nuh coax or guide\nanimals to doing some goal that they\nwanted and usually this is done\nvia reward sometimes it's sucrose\nsometimes it's it's sex\nsometimes it's uh neural stimulation\nor sometimes it's you want to guide them\nby punishment so you'll\nput shock wires somewhere and you'll try\nto get them to stop doing a behavior\ncontinue doing a behavior based on this\nnegative reward\nultimately these mechanisms for trying\nto get\na system in this case an animal or a\nmachine\nare based on this idea that you can get\nthese system to achieve a goal\nusing some reward now you can go to the\nnext slide\nthe idea is that you can do any goal\nwith a reward but that introduces\na really big problem to achieve goals\none needs a reward signal there's two\npapers\nby deepmind that i don't think people\nare familiar with one is psyclab and the\nother one i don't have the name of\nbut they introduce environments where\nthey'd like to train their algorithms\nand then they like to compare them but\nthey quickly abandoned these techniques\nbecause there's the reward wasn't well\ndefined in these environments and what\nhappened is people said oh well you're\njust\nuh cheating by putting reward in this\nenvironment to see how\na system will behave like you're you're\nessentially\nyou're coaxing it and that's not real\nintelligence so they moved on to\ngames where they could more easily\ndemonstrate the universality of their\nalgorithms\nbut to really illustrate what i mean by\nthis we can go to the next slide\num the word hypothesis kind of becomes\nlike\ni don't know i don't know if it's\ntrivial it's not the word um\nit's not as strong as people think it is\nbecause if we had\na good representation of rewards for\ngoals we'd like to do\nwe could just specify a system to\nachieve that goals\ni gave i think this example looked\nreally good this is a game called temple\nrun i think it was popular a while ago\nyou can see these shiny pieces here uh\nthis agent just goes forward the whole\ntime there's no way to stop it\nand there's hazards in the way so you\ncan go left right up\ni don't think you can go down and then\nyou try to collect\ncoins and the task becomes more and more\ndifficult over time\nby increasing the amount of hazards and\nthe amount of actions you have to take\nso you can clearly be super intelligent\nat this game if you just collect as many\nrewards as possible\nand avoid dying but for many tasks\nin particular mathematics rewards are\nsparse and\nit's very hard to define so\ni'd like to point out that limitation of\nthis model is you have to frame\neverything\nin terms of rewards and that's\nreally difficult to do so that's why you\nsee\nvery few reinforcement learning robots\nin the world\nwe can go to the next slide\ni am gonna use this uh for the\ndiscussion\ni spent a long time making it so please\nsomebody ask a question about it even if\nyou don't care\nlater please um essentially this is a\ncomparison of tests and definitions\nthere was a table in the back of the\npaper but that table used\nuh big dots and small dots it was\nvery hard to parse\nso we can go to the next slide this is\njust something for reference later\num there was something that was\nmentioned it may\nhave been a technical detail that people\nskipped over if they were looking at the\nmathematics but it's important to know\nthat rewards are\nbounded the reasons for this are\nare a couple fold you can't they they\nchose this way of representing it so you\ncould have a super intelligence\num ai xi being the super intelligence\nwhich is kind of like maximized in this\nmodel\nbut it's interesting that aixi was\ncreated prior to\nthis definition uh i think i'd have to\ndouble check\nbut um anyway there is a super\nintelligence by this model but we don't\nknow what the precise limit is\ni show a number line here just because\nit's continuous values between\na and b and you can actually continuous\nprobably isn't the right word because we\nhave a computable agent\nbut uh there is a way of directly\ncomparing\nintelligence of these systems\ntheoretically meaning\nthat you could have the baby as 0.2 in\nthe calculator as 0.1 or vice versa\nor whatever the number is you could\nnumerically represent the intelligence\nof systems and we can go to the next\nslide\nokay that is the reference to the paper\num\nif you guys would like to see it please\nlet me check if i have anything else\nthat was really important for the\ndiscussion\nthere's different ways in which machines\ncan be represented animal tests and\nhuman tests\ndon't necessarily compare well but i\nreally wanted everyone to get\nthe key insight which was in green that\nintelligence is defined\nto be the ability to achieve goals in a\nwide range of environments\nand the reason i'm presenting it here is\nthat\nagain this has implications for the\northogonality thesis\nand for instrumental convergence as well\nas various problems and ai safety\nso we can go to the next slide\nso thank you guys for listening\nthank you very much for your\npresentation", "date_published": "2020-10-22T20:46:33Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "4d3c4d9d44ee0a3ada7ef9b6cef8d7fd", "title": "187. Stuart Armstrong and Scott Garrabrant: If I were a Well-intentioned AI", "url": "https://www.youtube.com/watch?v=JVVj9Dui9es", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "so this is more a meta approach than a\nspecific approach it's a way of getting\ninsights rather than a technical\nnuts-and-bolts method the inspiration\ncame to me when I was thinking of the\nmanifold hypothesis for neural nets that\nthey when they're classifying they're\nfinding a particular shape like pandas\nor Gibbons within the image space and\nadversarial examples are sort of sub\nmanifolds or sub shapes weird sub shapes\ninside these shapes probably a very low\ndimension but if you have an\noptimization process they probably score\nhigh in that so I was thinking well well\nif I was the AI I'd avoid those I'd\navoid things that maybe put me on sub\nmanifolds and that was sort of one of\nthe first things where I was thinking\nmaybe these problems are solvable by\nconsidering them from a different\nperspective I had a similar thought when\nI started on wise good heart laws bad\nand when I thought that through I\nrealized that our usual fear of\nanthropomorphizing was actually leading\nus astray in this situation surprisingly\nI'll get back to that so we don't have\nan AI we don't have code for an AI we\ndon't have a theoretical model for a\nsafe effective AI so we can't solve the\nand most of the questions that we're\nworking on we don't actually have them\nin any formal sense so we can't solve\nthese by for ultra formalization proof\nat the highest level instead we need to\nform some abstractions and reason with\nthose apps\nactions like models are always wrong and\nsome of them are useful and different\nabstractions are useful for different\npurposes one of the best first\nabstractions was donot anthropomorphize\nthe AI and when you start thinking in\nthat way you realize that in your\ninformal reasoning we're using our own\nblackbox\nour own intuitions to reason about what\na eyes do and that that's wrong like\nanything like a eyes will value\ncomplexity hence blah blah blah or these\nkind of things were projecting human\nways of reasonings are may eyes and that\nis clearly wrong and learning to avoid\nanthropomorphizing opens up new ways new\nvaluable ways of thinking on AI however\nI think that too much emphasis on\navoiding anthropomorphize ation blinds\nus to certain other aspects and when I\nwas looking at why is good heart law bad\nand I delved into it and I have\nBarrett's post on this and people agree\nor disagree and there's a bigger the\nback and forth but one of the things I\nrealized is that good heart law is not\nbad in the context in the models that we\nwere thinking of we were thinking this\nis a model of what an AI looks like good\nhearts law is bad let's avoid it but if\nyou look at that former model very often\ngood hearts law is not bad so the but\nfor us we know that good hearts law we\nexpect it to be a problem so the problem\nwas that we were not actually putting\nenough of our own preferences into the\nmodel we were considering it at a too\nabstract level in a sense\nthat's one way of thinking about it but\nthis these sort of things cause me to\nthink by the way this is a very clean\npost picture of everything that's going\non it was a lot more messy than this so\nthis caused me to go and think if I was\nthe AI myself trying to avoid\nanthropomorphize ation trying to avoid\ntoo much excess information can some of\nthese problems be solved and then I've\ndone a series of posts based on that the\nfirst one is just on image\nclassification which was one of the\noriginal inspiration so this is the\nfamous pandas plus noise equals Gibbon\nand I was thinking again as if I was the\nAI and I was being fed these basically\nif I knew the concept of adversarial\nexamples but not obviously what a panda\nor a Gibbon was can I avoid this can I\nstart\nas I say looking for points close to it\nlooking for evidence that this image has\nbeen selected by an optimization process\nadding noise myself\nit seems the sort of thing that I could\ndo and of course since I could do it as\na well-intentioned AI this then suggests\nthe sort of formal method which we as AI\ndesigners could inject in it but looking\nfor adversarial examples seems like\nsomething that is strictly easier than\nfully defining what pandas and Gibbons\nare there was another smaller post the\nsecond one was on acting in the world if\nan agent had a lot of experience of a\nred door and were then put in a\nsituation where there were red windows\nor blue doors\nhow should it behave on that and this\nconnects with the sort of good heart not\nbeing too much of a problem because\nthere are sort of conservative policies\nhere which cover as much as possible of\nthe realistic options does they want you\nto go to the red window so when you go\nto the blue doors they want you to spend\ntime at one or spend time at the other\nthere are a variety of different reward\nfunctions that correspond with the\ninitial one and so this this sort of is\ndeveloping into a general idea of\nextending models which I won't go into\nhere\nthere was the extremal good heart idea\nwhich was you the the idea was a cancer\ncuring a I trained by apprenticeship\nlearning and you wanted it to be able to\nsay blast the cancer cells with a laser\nbut you didn't want it to dissolve the\nhuman with acid you know both of which\nwould completely eradicate the the\ncancer cells and neither of which the\nagent has seen and what I saw is that\nthere are certain things that are\ncorrelated with the training examples\nsurviving the operation surviving for\nsome years after there are some negative\nthings that are also correlated it's\nlike complaining about pain the ones\nwhere the operation fails have other\nthings to complain about being more\nprone to dementia that's a negative but\nit's it comes from surviving four more\nyears after and there's some random\nthings like paying more taxes and\nthanking the surgeon so the first step\nwas dissolving the patient does not have\nthese features it is a killing the the\ncells with a laser has some\nthese features so this is a way of going\nbeyond models again so these are not\nsort of these are more that these are\ninsights which came to me through this\nway of thinking not that they derive\nother people could have derived them\nfrom other roots and finally I think the\nmost interesting or unique or precise\nresult that I got was when I was\nthinking of Meza optimizing and I was\nthinking of it as I am a human in a\ncorporation I run one of the divisions\nof the corporation and I run this for\nthe benefit of the directors and this\nthinking about this and thinking about\nhow these organizations succeed and fail\nI realized that there's a big difference\nbetween a Meza\noptimizer that is aligned and one that\nis controlled and aligned one is one\nthat wants what the board wants it's a\nperfectly a it has those girls and a\ncontrolled one is one that the board can\nunderstand and that does what the board\nwants it to do and it informs the board\nof what it's doing and those kind of\nthings and they are very different\nbasically there's very it's a very\ndifferent situation if you have an\naligned a sub agent or a controlled sub\nmaking a mistake one way like a\ncontrolled substance that you think is\ncontrolled is not so much of a problem\nbut anyway this sort of opened up a new\nsmall a small new technical area in this\nthing and it came about because I was\nanthropomorphizing north or thinking\nfrom within the AI a bit more than I'd\nbeen that used to okay that's my sort of\nbrief overview of why I went this and\nwhat I've used it for\nstop share cool thank you very much for\nyour presentations sure so I guess the\nfirst question should go to Scott so for\nthe address sorry the image crops\nclassification example it seems like I\ndon't know I'm it seems like yeah you\ncould do something where I don't know an\nexample the kind of thing that you might\ndo is you might like add some noise to\nthe image and like kind of like average\nthe values of what you would predict\nabout all the images that are nearby\naccording to the model the the abseil\nexamples like coming from like all the\nthings that are nearby in like pixel\ncolor space which seems to me like\nsomething that you're you're thinking\nabout from the outside and to the extent\nthat we want to call the adversarial\nexample an adversarial example at all\nit's because like it's being adversarial\nto the specific way in which the neural\nnet is working the the way in which the\nimage classifier is working and like\nit's it's like work like the adversarial\nexamples like chosen\nit's like notion of nearby is like a fun\nit's sorry its notion of nearby is like\na function that's like trying to\ndiagonalize against the specific image\nclassifier and it feels like when you're\nsaying like oh I can deal with\nadversarial examples you're like using\nthe fact that the example is adversarial\nto something else as opposed to\nadversarial to you do you think that's\ntrue\nyes that that is a valid criticism I\nthink I mentioned it in the posts the\naim when I was thinking that way was can\nwe get rid of adversarial examples in\nways that are strictly easier than fully\ndefining what a panda and a Gibbon are\nin unambiguous categories and in the\nwhat it what is I think when you get\nwhen you get a message that is designed\nto mislead you but technically true what\nwas the what was that I think it was a\nbruh I'm working on that I'm not sure\nit's the when you get information when\nsomeone sends you information the\ninformation is true but the source is\nextremely manipulative so in those\nsituations you try and compensate for\nyour knowledge of how manipulative they\nare they're trying to compensate for\nyour compensation and so on this is what\nI was alternately hoping for that if you\nknow if you know that someone that some\nentity has access to your code and it's\ntrying or some maybe some black box\naccess and is trying to for you it\nshould it should be doable\nin many cases to make yourself less\nvulnerable yeah yeah there's this\nquestion that feels like a central like\nbinary question for to me for like this\ngeneral class of thinking which I which\nI don't know is reminded by with what\nyou just said which is I don't know if\nit like there's a class of solutions\nthere's a class of solutions that\ncorresponds to like explicitly modeling\nthe way in which\nyou might be wrong so that you're like\nah like here's like the way in which I\nmight be wrong and all kind of like I\nthink there might be someone behind\nstores sorry I'm sorry that that was my\nfault I hadn't okay um sorry yeah so\nit's like there's one class of solutions\nthat looks like trying to like\nexplicitly deal with the ways in which\nyou might be wrong and trying to like\nlike when you said like up there\nsomething that's like updating on the\nfacts of the person is trying to deceive\nyou or something like that it feels like\na like active reaction to things like\ngood heart versus there's a more like\npassive thing where it's just like don't\noptimize too hard or something like that\nand it feels like these things are kind\nof different types we're like one kind\nof like I'm trying to like come from\nabove and once right now like come from\nbelow or something like that feels like\nthese things are different types this\nmakes sense to you or yes but I think\nthere are some things that seem to be in\nthe middle of that like one of the\nthings I was looking at is detecting\noptimized images so if something if I\nhave minor on that plus my extra module\nthings that are optimized to fool my\nneural net look different from normal\nimages so I I as the AI against a not\ninfinitely smart adversary well first of\nall I should be able to detect images\nthat are that are designed to for the\nlowest level of my neuron that and I\nshould be able to sort of swallow my\ntail and diagonalize myself to some\nextent and sort of\ndetect images that are meant to fool the\nentirety of me at least to a level that\nmakes them less effective yeah yeah the\nsenses are sorry too and she was so\ndetecting over-optimization\nin the images aimed at myself yeah I\nmean I feel like over-optimization could\nat least in principle be diagonalized\nagainst you also like whatever your\nmethod for detecting over-optimization\nI'm not actually sure about this but it\nfeels like whatever you're up your your\nmethod for detecting over-optimization\nyou can kind of be find things aerial to\nthat an arbitrary intelligence\nadversary' there's nothing I can do it's\nnot actually obvious to me it might be\nthat like the problem of detecting over\noptimization is like an easier problem\nthan detecting a panda in the sense that\nlike you might actually be able to be\nrobust and you're detecting\nover-optimization thing but it was in\nthis it's in this kind of area that I\nwas thinking and that that this way of\nthinking directed me to yeah\nshould I like alternate questions or\nsomebody or something should I like\nevery once in a while have somebody else\njump in well no one has raised their\nhands I have some questions but they're\nprobably of poor quality so please go\nahead just okay just keep talking until\nsomebody jumps in with a question on the\nchat so I have something to add I think\nwhich is that for current like for the\nAIS that exists today adversarial\nexample do look different they are\ndetectable and yeah I don't I've\nyeah then you can take from that what\nyou want if you believe that that will\nwork on like higher intelligence instead\nit's an area it's an area I think worth\ninvestigating as to your adversary gets\nsmarter as the AI itself get smarter is\nthere who do we expect to win in this\ncut back I say I haven't investigated\nthese were just avenues that were\nsuggested to me but it does so there are\npapers on this I haven't read the papers\nI have read the alignment from\nnewsletter that summarizes the papers\nbut that's yeah I know there exist\npapers who look at like how our\nadversarial example different and you\ncan sort of imagine it as okay there's\nimage space which is like highly\nmulti-dimensional because it's like\nevery pixel has three dimensions because\nthree colors and then there is a like\nthe manifold that is embedded in this\nspace which is like things that are\nactually what photographs end up looking\nlike and then the adversarial examples\nare not on this matter from there sort\nof off there like outside how real\npictures actually look like in a way\nthat is not detectable for humans but is\ndetectable by a eyes oh I mean at the\nmeta level this the photo that's\nsupposed to be a given looks like a\npanda and we can tell that and we aren't\nusing high-level processing to do that\nso there is some high there features\nthat we don't detect but the AI is\nactually using in its classifications\nand those high level features do look\nlike a given yes but what I mean is the\nfact that we can see a panda and tells\nme that there's something different\nabout this image and I fully expect that\nthose ways of getting the AI to detect\nit I've seen some of your papers but I'm\njust saying in principle we expect that\nAI should be able to detect it I haven't\nreally seen anything about the sort\ndiagonalization argument you could say\nthat ganz are a little bit like that\nmaybe that just so now get meso\ngangsters or general adversarial\nconstructions might be like that maybe\nthe Nash equilibrium or the final thing\nof that you get something that is not as\ngood as a perfect classifier but and is\nnot completely immune to adversarial\nexamples but is hard to fool and it's\ndecent so I wanna I want to draw\nattention to something which I think\nmight be a disagreement that I have with\nyou but I'm not actually sure partially\nbecause I'm not actually sure what I\nthink and I think I like I tried to\npoint out before but I think I maybe\nfailed to point at it so concretely I\nwant to point at the example where like\nI have a utility function but I have\nsome uncertainty so base basically like\nthe red door green there's a red red\ndoor Blue Door red window type example\nand I want to generalize this something\nwhere it's like I have some like\ndistribution over utility functions that\nI'm kind of uncertain about and I'm\ntrying to be conservative in some way\nright or I guess that's not the main\nproblem it's like you're proposing this\nclass of solutions that looks like I\nhave this uncertainty over a space of\nutility functions I want to be I want to\nlike be conservative about like doing\nwell according to all the utility\nfunctions inside this space we can said\nit's accurate could go to the end of\nwhat you're saying and I want to simply\ncontrast this with approaches that look\nlike not pushing too hard in the\ndirections that might cause things to go\napart select my space of utility\nfunctions like\nis the reason why I have a space of\nutility functions is because I've like\ntrained on some examples and and like\nthere are things that are kind of out of\ndistribution and I could be like well I\nhave uncertainty about these things that\nare out of that are like out of\ndistribution of things that I've trained\non and in cases where I can't like ask\nfor more information I'm then going to\ndo something where I kind of try to be\nconservative and maximize all the\nutility functions a little bit\nsimultaneously or something like that\nI don't know this feel like a fair\nclassification of the kind of thing that\nyou're trying to do well it's at least\ngiven me enough to do a rambling answer\nokay and then I want to contrast that\nwith the class of approaches that look\nlike just don't push so far out of the\nkind of things that you were trained on\nwhere it feels like one of them is like\nlet's try to like tabulate all of my\nuncertainty and figure out all the\ndifferent ways in which I could be wrong\nand make sure that I like cover all them\nversus the other one is like just don't\nstray too far or something which are\nlike two different ways of approaching\nconservatism and I'm uncertain about two\nthings I'm uncertain about whether these\nare like different and I'm uncertain\nabout whether I actually think that it's\nbetter to try to approach ways that look\nlike don't strain too far as opposed to\nlike tabulate and all the uncertainty\nbut it seems to me like you're you're\npushing I'm reading you as as pushing\nfor doing something that's kind of like\nkeeping track of like uncertainty like\nexplicit uncertainty of a lot of\ndifferent things I might be trying to\noptimize for if you have any comments on\nthat well so first of all on the\nconservatism and going beyond this\nof training environment there's a lot of\nthat in the post that I emailed to you a\nfew days ago that's a fundamental aspect\nof it you could say so that's the bit\nthat's dealing with that conservatism\nand when you need to be conservative and\nwhy quantizers are not sufficient in my\nview but that's that's still sort of\nprivate so I won't go into that too much\nbut I'll contrast it with the other\naspect that you're saying so here is a\nlink to something I wrote which was when\ngood Harding is optimal and I basically\njust to take the very simplest example\nif you are a robot and you're hesitating\nbetween going left and going right and\nas soon as you've done left or right\nit's over you get your reward or you\ndon't and you have fifty point one\npercent probability that it's left and\nforty nine point nine percent\nprobability that it's right this is a\npure good heart situation you just\nchoose the optimal policy which might be\ndisastrous compared with the other one\nyou just maximize utility it's a pure\ngood Harting and it's clearly the right\nthing to do in that situation because of\nthe other things that it's linear you do\nit once it's closed you can you can't\nyou can't correct it so that was the\nthing that got me thinking so why do we\nfear good Harting and we feel good\nHarting I realized because we don't\nexpect that situation to be like that we\nexpect the situation to be different\nlike in that post I listed we expect\nthat there's going to be diminishing\nreturns that value is fragile\nwe expect that that's that's the biggest\nof it this is why we don't like good\nheart because if we just choose naively\na top option weeks basically if we\nchoose a top a top option we expect that\nthis will be disastrous it's the kind is\nthe reason why we feel a fear of good\nHarting and then I was thinking well if\nwe add that information if we add the\nfact we expect it to be diminishing\nreturns that we expect to have value\nfragility that can all be done included\nin a very Bayesian way across all the\nutility functions and when we do that a\nchunk of the good heart problem goes\naway now in that post and probably in\nsome things I'm implying but maybe most\nof the good heart problem goes away in\nthe post I emailed you you can read it\nas sort of a tone of maybe very little\nof it goes away or that's not the\nfundamental problem but basically be so\nthe post that I've just sent here and\nlinked to the it adding more information\nabout why we feel good Harting removes a\nchunk of the problem and I think that\nthis should be looked into and this if\nthere's still a good hardened problem\nleft after that then that's a separate\nother problem that is maybe the moment\nto be conservative on top of the gist\njust being Bayesian at and I have\nnoticed that Linda has put her hand up\nseveral times there yeah so I'm confused\nwhen you said that the choosing between\nthe fifty one percent and ninety nine\npress no okay so I'm don't think you\nhave the same definition of good as I do\nso I just wanted to ask you how you\ndefine a good heart\nwell I defined a good heart style\nbehavior\nas naively picking a simple or a single\nutility function and maximizing that far\nbeyond its area of applicability so you\nmean that the AI picks its own or do you\nmean when we pick it or both of the\ncases well let me let okay let me\ndevelop the example a little bit and to\nshow where we get the actual good\nHarting suppose to that you could go to\nthe left you can go to the right you can\nstay it goes on it goes on forever\nthere's a discount rate you can stay on\nthe left or you can stay on the right\nand the one of your one of your utility\nfunctions gives you a reward for the\nlogarithm of the number of times you\nstay on the left and one gives you a\nreward for the logarithm of the number\nof times you stay on the right and\nthere's also a discount function given\nthis the optimal behavior is well go\nleft because that's the best stay there\nfor a certain amount of time go right\nafter certain runtime stay there for a\ncertain amount of time and fluctuate\nback and forth according to the various\nparameters here and this kind of\nbehavior seems very sensible and very\nmuch what we would want the good heart\nbehavior for that situation would be go\nleft and stay there i picking naively\nthe best option and sticking with it so\nif you want i distinguished a good\nhardstyle behavior from a optimizing\nbehavior and what i noticed was that a\nlot of the the good heart the problems\nwith good heart come because it\noptimizes a narrow under-specified\nutility function and that's a problem\nbut if we incorporate information such\nas this is a narrow underspecified or\nyou don't have enough information and\nour reward functions are diminishing\nreturns and the values of fragile and\nthen say okay given this information\nnaively maximize expected utility you\ntend to get behaviors that are a lot\nbetter so if you want yeah so I'm yeah I\nI I'm not sure that actually agree with\nlike calling the thing you're calling\ngood heart good heart but it feels to me\nlike there's a sense in which like I\ndon't know we have some proxy objective\nand then we have some like true true\nobjective we have some proxy objectives\nand the proxy objective is noisy in some\nway and if we're like in this situation\nthere's like kind of two paths forward\nor like you want to do some combination\nof these probably but there's like two\npaths forward one is like figure out\nwhat information we're lacking and gain\nmore of that so that we like figure out\nthe way in which like the things might\nbe diverged and like put that\ninformation in so that they converge\nmore and then there's also like\ngenerically try to figure out ways to\nmake it such that in spite of the fact\nthat our thing is diverged we still\ndon't things still don't end up badly\nyeah so the the prophecy I think with\nwhen you mentioned the proxy I can\nphrase what I was saying better if we\nhave a proxy reward and we know that it\nis that there is uncertainty about it\nsorry what do you mean by there is\nuncertainty about it you mean we know\nthat it's\nwe know it's a proxy okay you know it's\na proxy and maybe we have some idea how\nit might relate to the real reward but\nwe know it's a proxy\nyep then the I'll define the good\nhearting behavior as naively máxima\nmaximize the proxy without caring about\nyeah\njust just maximize the proxy would be\ngood harder than that situation and what\nI was saying is that in many situations\nwith the kind of utility functions that\nwe use in tall boy examples that is the\noptimal thing to do but if if we then\nmove to more realistic utility functions\nincorporating our judgments about the\nvarious things they're talking about\nthen that becomes the wrong thing to do\nhowever if we incorporate that knowledge\ninto the algorithm so it's it has the\nproxy but it knows that the proxy\nderives in this way and this is the for\nshape of its uncertainty then so\nmaximize the proxy would be good Harting\nand really bad maximizing the expected\nutility with the proper form of the\nuncertainty seems a lot better what is a\nlot better so I think there was a\nconfusion conceptually between the two\nbetween yeah so yeah I don't know if\npeople were confused but I definitely\nwas that good Harting was or that if you\nwant maximizing expected utility was the\nsame thing as good Harting was the same\nthing as maximizing the proxy and that\nthese are distinct and that yeah yeah so\nyeah I'm like I'm really curious about\nthis there's there's something where\nthere's being able to look at a\ndistribution being able to so yeah I'm\nsitting here there's a true value I\ndon't know what it is I have two objects\nI have this proxy value and I have a\ndistribution over the true value value\nis just like the average yeah let's say\nthe proxy value is the most likely sure\nlet's say that proxy value yeah\napproximately is the most likely and\nwait so does does what you're\nrecommending like equates to optimize\nthe average as opposed to optimize the\nlike five does your thing corresponds to\npay it into the distribution optimize\nthe average as opposed to optimizing the\nmost likely or basically well yes for\nsome if you take average as weighted sum\nof utility functions weighted by the\nprobability yeah if you want the the\nshape of the plausible utility function\nis not a ball it's not a ball around the\nproxy it's it has a different structure\nwould you hi the average is not the same\nas the proxy the mean and the mode are\nthe mean and the mode are different it's\nthe easiest way of saying it potentially\nvery different yeah\nyeah so there's another thing that I\nfelt like I felt like you were saying\nand maybe maybe you weren't which was\nsomething that was like I don't know I\nfeel like there's something to being\naware of my own foul ability that is not\njust like averaging over the the things\nwhere you're like like there's some sort\nof like being aware that like things\nmight actually be diagonalizing against\nme or something like that were you\ntrying to point on something like that\ntoo or or not I agreed that it's\npotentially this case but what I wanted\nto point out in the post I've sent to\neveryone here and other civil ones is\nactually just being Bayesian naively but\ndoing it properly gets you a surprising\ndistance it's plausible but it won't get\nus the whole distance as I said how to\nlook at the one I emailed I haven't I\nhaven't looked at the one you emailed\nyet so I might accidentally say things\nthat are in there\ndon't worry about it it's my sort of big\nthe thing I've been working on during\nall of lockdown but putting that aside\nthe thing is that you're doing the\nBayesian stuff properly seems to get us\na surprising amount of the distance and\nthen of course yet there's various\nconservative things that you can apply\non top of that if we feel like it but\nI'm not doing that yet because I I\nwanted to see how far the naive Bayesian\napproach with proper uncertainty got us\ncan I answer the two questions that have\npopped up in the chat I'm going to give\nme time to think\nokay um so from Ricardo everyone can\nread this I won't need to repeat the\nquestion so for Ricardo Bob Pato I yes\nthe question is this is not just change\nthe problem we have\ngood ideas about how this uncertainty is\nshaped we have that's the point of why\nwe fear a good heart is because we know\nthat our values are complex for example\nbut that there is diminishing returns\nthat there are the humans are fallible\nand can be corrupted this information is\nnot present in standard good heart\nproblems but now we can't put it in as\nin terms of uncertainty over the proxy\nso it changes the problem I wouldn't say\nit just changes the problem and for the\nquestion from Roland Philip Cass can you\npropose I think it's because well I can\nonly speak for myself because I've been\nworking with good heart problems for\nyears and I didn't notice anything like\nthis until recently so but I think we\nare too focused on the human version of\nthe good heart problem where the human\nis antagonist the principal-agent\nproblem is basically what it is and\nthey're the agent is antagonistic or at\nleast misaligned with the with the\nprinciple and here it's basically can\nthe principle specify in enough detail\nto not leave any wiggle room for the for\nthe agent the principle cannot specify\nsomething like well I'm a bit unsure it\nmight be the left one or it might be the\nright one think a bit longer about what\nI really value in this way and you'll\ncome to the right conclusion and that\nwill never work with a human because all\nthat they need to do is come up with a\nplausible sounding justification for why\nwhatever was the right one but if you're\ndealing with an AI then you can say\nthings like that if you specify well\nwhat you mean you can allow\na superior intelligence to figure stuff\nout about your own values which you\ncan't do in the standard good heart\nproblem so you notice that this is\ntouching upon thinking as if I were a\nwell-intentioned AI kind of thinking and\nthat's I think that was the one of the\nkey things is that in the AI version of\nthe good heart problem we we can have\nthe agents being much smarter than the\nprincipal and figuring stuff out but the\nprincipal doesn't know and can't measure\nas long as it's well specified so I'm\ninclined okay so I'm inclined to say\nthat the if if it were the case that we\ncould specify a distribution over a over\na utility functions such that like our\ntrue utility function has non-trivial\nprobability we would have solved almost\nall the value alignment problem like\nalmost all of the part of alignment that\nis private specify values or something\nlike that and so I kind of feel like I\nmean what we working with distributions\nthat have like a true value inside them\nwhat yours what you're saying is\ntrivially easy to do just do all\npossible reward functions or value\nfunctions in existence which some weight\nokay so I if we if we average over all\npossible value functions with some\nweight well I get I guess I'm trying to\nsay something like it feels like\nexamples in which like there's like a\nsmall number of things feel like they\nmight be misleading yet the the but the\nthing the thing that I'm hoping is that\nwe can break symmetry the reason why\naveraging over\nall possible utility functions is\ndoesn't work it's because there's every\nutility function has an antagonistic\nthere's you and there's - you as long as\nthey're both there with similar\nprobabilities they might as well not be\nthere you can just take them both out\nwhen you're averaging but can we break\nthe symmetry even what I noticed a long\ntime ago even knowing there is a good\nhard problem\nthis slices the space in half half of\nthe utility functions do not have a good\nheart problem they have the opposite of\na good heart problem but these are not\nthese are nothing like the ones that\nthat they prefer that you maximize the\nproxy rather than the average okay and\nso just knowing that there's a good\nheart problem we've sliced away half of\nthem which is nothing but at least it\nshows the break in symmetry and the more\nof the stuff the meta stuff that we know\nabout ourselves that we add the more\nsymmetry we break so if you want to take\nthe trivial thing which is every utility\nfunction is there this is terrible\nbecause when you average it out you get\nnothing or you get something absurdly\nsimple and then start slicing away by\nadding our meta knowledge and keeping\nstill keeping average but I think the\nbest process can go a long way yeah it\nbasically feels like training or\nsomething you have you could have like\nand now you start with like a big prior\nover all the possible utility function\nand then you could ask a bunch of\nquestions that are like you before this\nworld of this world and each of those\nquestions like cut your space in half or\nsome\nlike that and training which\ndistinguishes a utility from its\nnegative is a different type of training\nfor one that doesn't tend to do that\nyeah yeah I don't know I'm simply\nthinking of the kind of training that is\nlike do you prefer a type of ruling\nthings out which are like which is like\nI don't know playing guess who and\nsaying like do you prefer this one of\nthis one okay we're gonna cut half of\nthe utility functions out and repeat\nthis until you're left well I'd more be\ninterested in things that sort of cuts\nbetween diminishing returns and\nincreasing returns for example because\nincreasing returns messed up things when\nyou average them you could say do you\nprefer a or do you prefer a X percent\nchance of beat and a and then one more\nchance but see and you can ask different\nlottery questions in order to cut things\nin half yeah I did I think those are\nmore the things that I'd be looking for\nthe other stuff is good too of course\nbut they less include our meta knowledge\nyeah so I mean I basically just like\nabsolutely agree that like working with\nthe average is probably going to end up\nbetter than working with the most\nworking with the mean is just gonna end\nup a lot better than working with mode\nthere might be some places where like\nworking with the mode is more tractable\nthan working it to me but I kind of\nexpect that like you collect a bunch of\ndata like this and the mean still isn't\ngreat oh yeah as a silly example add\nvalues are fragile you can do this with\na smooth min and we expect human values\nto have at least this level of\ncomplexity now ignoring for the moment\nnegative outcomes sort of people being\ntortured and stuff like that\nif we can avoid those as well\nthen it seems that we have to find a\nvery wide class of things that contains\nour preferred utility functions and that\na average of this Maximizer with huge\namounts of power will get a positive\nresult not not nearly comparable with\nthe amount of power that it has because\nthis is a small slice but yeah but\nsufficiently but as I say I'm not Eve\nthis is confusing because what I'm\nsaying is kind of the opposite of what\nis it then\nthe virtus should I be mailed to you or\nnot the opposite but a different\napproach shall we say but I just to\nrepeat again I feel that going I think\nthat lots of people consider the good\nhard problem think of the mode and the\nmean are the same and the toy examples\nthat we have they are and I think\ndistinguishing these two is very\nvaluable and I think adding extra meta\nknowledge shows that the mean can be\nsurprisingly good without even having to\nadd any quantizers or other methods and\nthat seems right to me\ndo people want me to say what I mean by\nthe opposite of the good heart problem\nor have I explained that okay they be up\nthe opposite the good up run if you want\nis when you prefer that the proxy the\nmode be maximized rather than the mean\nif you have increasing returns this is\nthe kind of thing that might happen let\nme you always prefer them mean like if\nyou take the mean of functions that have\nincreasing returns that we just make you\ngo with the mode anyway\nwell let's let's do a sort of example\nlike the nails the the people's\ncommissariat for central nail production\nhas told you you need to maximize the\nnumber of met nails and their proxy\nfunction is pieces of pieces of steel\nand you maximize that proxy and these\nare terrible nails now there is the if\nyou call the proxy V and the actual nail\nthe genuine utility function u the if\nthere's a delta so let's see consider V\nminus u minus V this is the utility\nfunction on the other side if u is here\nV is there this is the one that's on the\nother side this is a weird sort of thing\nand this utility fungus ACOG W I'm not\nfollowing\nso this is U this is V there is another\nutility function at which V is exactly\nhalfway between the two utility\nfunctions can be added if you have a\nscale assume we have a scale it can be\nadded they form a vector space so this\nis U this is the other side of the\nvector there's a W and this w this W is\nsomething that benefits a lot this is\nsort of like the utility that hates\nnails and loves pieces of steel it it\nwould much\nfor the V as stated then say a 50/50\nchance between it and the true you so if\nyou want you yeah so but this odd kind\nof utility function notice how hard it\nis for me to describe in human possible\nterms because it makes no sense for a\nhuman because it isn't a human I can\ndescribe it in terms of vector space\nit's easy this is not there but it makes\nno sense for you actually you secretly\ndo not want true nails but you wants\nmore pieces of useless still or\nsomething like that it makes no sense\nfor a human no but like the W is the\ndifference between the true utility or\nwhatever that is and the proxy whatever\nthat is\nI'll need no it's not the difference\nit's the other side it's let's see so it\nwould be V Plus V minus u v minus u is\nthe Delta between U and V W is the\nnegative Delta V Plus V minus u so okay\nso it's it's it's so here okay now I see\nwhat you mean by the other side but it's\nsort of them it's defined in opposition\nto the to you yes so no matter what the\ntrue utility function is even if it's\nsomething inhuman\nthis w is always going to be worse than\nyou worse well the point is that from\nw's perspective it wants it prefers a\nmaximising of V rather than a 50-50\nchance between U and W so it does not\nwant the mean which is 5050 between U\nand W prefers that the\nmode of the middle is maximized\nLisa w prefers w w prefers that V be\nmaximized rather than the agent be\nuncertain between U and W sorry like why\nwhy is the mean of U and W not we\nbecause like if you're defining W in a\nway in which it lead abuse it it is\npossible I am making a mistake in what I\nam saying here I okay I do I have the\nexamples very clearly to hand all my\nposts all I know is good heart it's\npossible sorry that is let me find this\nthe difference in the meantime I can see\nthat if Scott would like to break in and\nstop this question because he feels he\nhas a better question then Scott by all\nmeans go ahead at the full version of\nwhat I was saying is here\nI was not expressing it correctly but\njust as there are things that fear good\nhearts there are things that auntie fear\ngood heart in the same way every utility\nfunction of course would prefer that it\nbe maximized but the kind of ones that\nparticularly feel good feel good heart\nare compensated by other ones that anti\nfear it and the example there is more\ncoherence than what I was rambling and I\ngot the definition of W wrong - I have\nyou thought about the consequences of\nthe fact that your ear so you're\naveraging over utility functions\nyou're not averaging over utility\nfunctions upto affine transformation\nwhich means that you're gonna have a\nbunch of different copies of each\nutility function of up to a fine\ntransformation in your class I don't\nknow if you have anything you want to\nsay related to that that the\nnormalization of of utility functions is\na very hard problem that I have several\nposts on it showing how hard it is and\nthat maybe here we have more of a\nprincipled normalization like we can\nsort of compare with things like I like\nice cream and this TV show this much and\nthen we can at least rank the different\nutilities compared with that possibly\nyeah here we have a kind of principled\npoint that we might want to like all the\n0 which is like choose a random utility\nfunction from your class and optimize it\nwhich like that gives you a a utility\nfor each utility function and we use\nthat as the 0.44 realization I think we\nneed to with the zero point is not\nenough to normalization we need another\nappointment like my um have you heard of\nthe population ethics which is\nexponential in the number of people now\nwould you give that any probability\nwhatsoever I I don't I don't I can\nsimply don't believe in unbounded\nutility functions but the the sort of\nissue with that is that if you give it\nany probability whatsoever it doesn't\nhave to be unbounded if you give it\nanything but the tiniest of\nprobabilities then it it'll dominate\naverage utilitarianism it'll dominate\ntotally to terrorism\nit'll dominate any theory of population\nethics of course this I think is\nridiculous so you need\nto penalize it by its span that's the\nsort of mid max normalization but yet I\nfeel you have to normalize all these\nutility functions anyway to prevent that\nkind of bad behavior so I want to give\nit I want to give a concrete alternative\nproposal to optimizing the optimizing\nthe average of a spatial distributions\nof utility functions so I have a\ndistribution of utility functions I will\ndefine a zero point as I'll define the\nzero point as choose a random utility\nfunction and optimize it okay and then\ninstead of maximizing the average\nutility I want to maximize the product\nof the differences between like I want\nto choose something that is better than\nthis I want to choose a Pareto\nimprovement of this mm-hmm why\nmaximizing a product of the differences\nin utility let's say we have a finite\ncollection but we can also do this with\nthem so basically a Nash bargaining it\nbasically Nash Mart Nash bargaining yeah\nso I'm curious if you have cash flops\nabout what about like Nash bargaining\nversus Bayesian choice for maximizing\nthis painfully you know you're gonna\ncopy another personal character I could\nbut let me try and I do like the fact\nI've reached the stage in my career\nwhere I can answer most questions with\nlinks but but what I was no I'm trying\nto answer this I don't like the Nash\nbargaining equilibrium because it\ndoesn't feel natural eat it's there\nthere might be some messy stuff around\nzero yeah I wanna I wanna put thing I'm\nproposing is not exactly an iceberg\nbecause I've defined zero in kind of a\nweird way doesn't matter once to do the\nNash bargaining you need to describe you\ndefine a zero and oh I I thought the\nNash bargaining explicitly had had zero\nas like the threat points or something\nyou know you've just defined a different\nthreat point if you want the okay so\nthis has certain failures we can come up\nwith examples where it fails it's\ngenerally to do with things that only\nmaximize a tiny bit unless you give all\nyour effort to it and then the one over\n10 trillion thing dominates the product\nkind of thing and yeah so yeah I think\nthat's a problem I I'd go for one of the\nother ones like what what are the ones I\ncame up with so you can keep your zero\npoint you keep your you define a utopia\npoint where every utility gets its\nmaximum expected value that's not a\npoint that can ever be reached but you\nnormalize those to zero and one and you\nPareto improve on that that was what was\nit my mean worth nothing bargaining\nsolution something anyway something I\ncame up with some time ago\nthe okay so this thing is probably\nexactly the same thing as maximizing the\nmean yeah either way I'm not sure oh boy\nfor\nOh Joe you know what you did you both\nwith to attend normalizations so if you\nfix that normalization so yeah if you\nfix missus this is the Mean Max\nnormalization\nwhy don't I recognize it the mean action\nis pick one of them at random maybe\naccording to probability the max is\nmaximized only this one normalize each\nutility function to be zero for the mean\npolicy one for the backs policy then\ngiven that normalization just normalize\nthe mean this is now what I've just\ndescribed there yeah so are we are you\nknow sorry for everybody else who hasn't\nnecessarily followed my back catalogs\nfrom several years ago are you yeah sue\nI don't know I was originally\ninterpreting you is just like not\nworking up to a font transformation\nright like when you're originally\nproposing the thing you're just like\nyou're just like take the average of the\nutility functions and utility functions\nof those utility functions their utility\nfunctions up affine transformation they\naren't well yeah yeah the normalization\nquestion is a difficult one that I think\nis separate yeah it's maybe it's not\nseparate but it would you don't think at\nleast for my methods or for their taking\nthe mean you have to solve the\nnormalization somehow separately before\nyou do this or while doing this or\nwhatever more links okay I'm listening\nto questions while I go searching for\nlink yeah I'm assuming that if there are\nother questions they'd pop up in the in\nthe text chat\num I mean I'm listening to what you're\nsaying while I go searching for links to\nanswer that question I yeah sooo yeah\nwould you describe your position right\nnow as roughly we like we should just\nlike kind of ignore the normalization\nquestion and just like we have a\ndistribution over utility functions\nthat's coming from I guess is this\nquestion like where my utility function\nI mean actually assignment of numbers\nlike distribution over bounded utility\nfunctions that are inside some interval\nit is it is possible that some of the\nother methods like the quantizer methods\nmaybe have their own inbuilt\nnormalization which the main method does\nnot which may be a point in their favor\nbut for the moments i am for at least\nI'm saying the mean and the\nnormalization are separate questions\nhere oh I don't know why you're saying\nit seems like to take you mean you don't\nhave to normalize you oh well okay yeah\nsorry I'm imagining like I have a space\nof distributions of utility functions on\nwhere utility is between zero and one\nand I might have the same util the same\nutility function up I'm a fine\ntransformation several times inside this\ndistribution and that's that's what I'm\ninterpreting a proposal to be which is\njust like completely aside from any\nquestion of normalization well if you\nare if you take if you take actual\nutility function representatives as in\nactual functions and you have a\ndistribution over that then you've\nalready kind of normalized yeah\nlet's see my and here is the mean worth\noptimization mean worth bargaining\nsolution kind of thing when I imagine\ntrying to use this this proposal and\nstill being like killed by good heart\nit's like because I'm like okay I'm\ngoing to like specify this distribution\nover all the things and I just like\nentirely miss a dimension hmm\nand so when you're saying like like\nenumerate all the utility functions and\ngive them all some weight like by\nenumerate all the utility functions your\nenumerate all the utility functions\nwithin some space and like you could\nimagine having utility function they\nkind of like misses that space this is\nwhere the post of my email becomes the\nmost relevant and I can't really answer\nyou without referencing it okay\nI don't I don't think of it as there's a\ndistribution over these utility\nfunctions which might miss a dimension\nbut what happens when we see evidence\nthat we've missed a dimension how do we\nextend our our current model but this\nbrings us way astray from this but the\nfrom this conversation yeah I mean I\ndon't know I wouldn't call it too astray\nbecause I feel like when I hear the\nproposal the reason why I'm feel doom\nit's because of the thing I just said or\nsomething well okay well let's think a\nlittle bit more formally if you\na dimension missing and we use another\nmethod like let's say quant Eliezer if\nwe're quantizing and there's a dimension\nmissing\nwe still got big problems if with most\nmost of the conservative methods have\nsimilar problems except you could say\nthat as a conservative method at the\ntime and kind of keep things close to\nthe training environment that McKenna\ncatches these extra dimensions without\nrealizing it but it seems the back is\nsomething you could do as a prior as\nwell so what if we're missing a\ndimension can we capture it in other\nmethods that we wouldn't do it in mean\nso why doesn't quantizer take like what\nso the safety measure in quantifier is\nbasically don't stray too far from the\noriginal action distribution which is\nassumed to be safe\nso what why doesn't this take care of\nextra dimensions let's think\nso the you rank the quantizer so we have\nthe proxy we take policies that\nmaximized it up to some extent you're\nright if we were very careful with the\nquantizer we may we may implicitly\ncapture the extra dimensions but there\nis no so yes so in that way it is\nsuperior but there is no clear there's\nno clear we don't the queue in the\nquantizer we don't know what it means\nsurprisingly I have a post\non that and I'm looking for it but yes\nyou are correct it does seem that the\nquantizer can capture extra dimensions\nis a sort of kid I'm claiming that it\njust naturally does like Oh actually no\nsorry I'm revising that no it doesn't\nokay because the policies that are say\n50% effective to do the proxy there are\nsome policies that will capture this\nextra dimensions and are certain there\nwere an T on these acted this extra\ndimensions if the dimension is not\ncaptured in the proxy at all yes so you\nknow it seems that my default weight my\ndefault like understanding of a\nquantizer is you like take some learned\ndistribution of things that humans would\ndo and you like select a like top one\npercentile from that distribution is\nthat not what other people mean here I\nmean I okay I was under the impression\nthat as defined by Jessica it was not\nthat that might be the case I'm pretty\nconfident that it is that well I would\nlike to be able to answer but it's\ntaking forever I think because of the\nvideo can you think you defined it as\nthat in the post yes okay the posts that\nI'm looking for I defined it in that way\nand in the post that I'm looking for I\nlinked to Jessica's original thing so\neither I misread her thing or there are\nmultiple definitions of quantizers\nfloating around yeah so in general you\nseem to use concept like words slightly\ndifferent than I'm used to\nsorry and it's sort of fine because you\nexplained if you explain what you mean\nand but I also think that you're missing\nout on things other people have done\nbecause you don't notice\nlike usually terrible at rich literature\nreviews yeah like this the idea of using\nuncertainty over utility function as a\nsafety feature is it's a really good\nidea that I've seen around for a while\nand I've posted a link to like where I\nfirst came across it in the shot mmm-hmm\nI'm yeah okay give me my phone and I'm\ngiving up on me I'm giving up on the\nWi-Fi and I'm just going to but that is\nwhat I I mean I'm I'm reasonably\nconfident that that was what the\nquantizer was because I looked it up\nspecifically for that post I think that\nthe most likely scenario is that there\nwere multiple versions of quantizer in\ndecibels yeah that's very possible and\nlike the one that went into the post is\nprobably the one that you saw and\nbecause I worked with Jessica I probably\nlike saw other ones okay so yeah so the\nlike I'm misremembering or something\nthat's several people that I talked to\nall have the wrong definition of\nquantizer so the kind of thing that\nyou're talking about I've considered\nsort of extension of apprenticeship\nlearning kind of things okay it's a\nlittle s wrong that there's a problem\nbecause my phone which is on on the\nnetwork not on Wi-Fi I had a problem\nloading goes wrong earlier today but the\nline and forum seems to be up okay let's\ntry a lineman form I feel okay is this\nyeah so let's refocus on the thing the\nquestion is\nmy current question is what do you mean\nby quantizer because it's not what I\nmean a quantizer is an agent that went\nis that I'm reading for me that returns\na random action in the top queue\nproportion of some base distribution\nover action sorted by the expected\nutility achieved if that action is\nexecuted yeah so yeah I usually think of\nthe base distribution as like a learned\ndistribution of what humans do okay I I\nthink of the base distribution as the\nalthough all the policies you can think\nof or all the policies that they are can\nthink of yeah the version I read the\nbase distribution was supposed to be\nsomething that's reasonably safe to draw\na random action from okay some human\nbehavior okay something that it's safe\nto draw a random but not an optimized\naction from okay yes then this if it's\nsafe to draw a random action but not an\noptimized action from then this is\nintrinsically this controls for extra\ndimensions that we might not not have\nconsidered if we are convinced that it\nis safe to draw a random action for it\nwhich from human style behavior is\ndefinitely safe in the short term I'm\nnot entirely sure if it's safe in the\nlong term but okay so yeah that seems to\nbe arguments that quantizers do have\nadvantages that's wrong amazing idea\nokay then that is learning and how\nshould I\nhow conservative should the part relax\nmax I just be okay so this is the post\nthat I was referring to and I think the\npoints of the post is still valid even\nwith the yeah not quite as valid with a\nsafer based distribution then the point\nwas if we know that the optimal one is\nwrong and we know that the zero is also\nwrong because it doesn't help\nwhat are we how do we know that it's\nsafe well as you say going for say if if\nif we could draw a hundred times at\nrandom and expect to not go disastrously\nwrong then 99 percentile is probably\nsafe in a similar way I very powerful we\nget we get 99 percentile actions like\none out of every hundred actions we take\nyou could just do with policies anyway\nsorry I'm I'm starting to fade I fear\nand I think the questions are getting a\nlittle more wavy we call it well let's\ncheck if anyone has a last question\nespecially a sort of simple\nclarification question which are easy\nokay I have a symbol what is the\ndifference between their will intention\ndi and an aligned yeah\nthey well into the aligned AI is an\nactual objective a well intentioned AI\nis a thought experiment that allows me\nto explore new ways of possibly getting\nto an aligned AI the yeah a well\nintentioned a I as I've described it\nmight still kill the universe\nit's if you it's focusing on a different\na different approach to the air problems\nor a different class of the AI problems\nmaybe for meso optimizers the difference\nmakes sense but not for general they\neyes I don't want to erupt I know those\nyour question but afterwards class the\nnaive question go ahead\nwait Soren did that answer your question\nthank you so you both work along with AD\nme or with Mary I have a question about\npost I think it was like it was from one\nof you tapped you'd cast these talks\nabout AI lineman being like a\ncryptographic rocket problem and when it\nI think it went on to describe the\nprocess by which you try to align it so\nyou began your talk today doctor I'm\nstrong with a discussion of like we\ndon't have an AI we don't have the code\nyou can't doesn't quite work the way one\nwould hope it would do you actually have\nthese things so instead you try to\nintroduce a characterization or a\ndefinition or at least Miri has\ndiscussed introducing the tools that\nwould help one get ease with what they\ncalled like a space plane to the moon\nand they described it as like the\nprinciples of first derivatives and\nsecond derivatives towards getting us\ntowards rocket science so how do you\nguys view AI alignment in terms of given\nthat we don't have any any AI now do you\nmostly view it as a problem of like\ngetting the tools necessary for it or do\nyou go along the path of life let's\ndefine what super intelligence is even\nif it's probably attractable and then\nlet's go from there I mean we're looking\nat your work but and some others but I'm\nhaving trouble recognizing the\ndifference between the two approaches\nand I don't know if that's coherent\nenough I I ramble in many different\ndirections and finds many different\nideas\nbecause that's how my brain is wired\nmaybe one I always I was tend to make\nthe assumption that it is a super\nintelligence and that we can we can't\nconstrain its knowledge except if it's\nin a very specific cryptographic away\nthe reason being that most solutions\nthat work for that kind of anything most\nsolutions not all but most solutions\nthat work for super intelligence work\nfrom gamma-ray eyes as well and Ally I\ncan answer your question with saying I\ncan't answer your question right now and\nthank you very much better and sorry dr.\nGerber and do you anything that do you\nhave any response to that question or\nrambling ha yeah I think I think I miss\nparva-- a few things I think are\nadjacent that I want to say well say is\nthat like I feel like I should probably\ndisclaim that I don't know we're talking\na lot about things that are trying to\nlike figure out how to get from utility\nfunctions into a eyes which like I kind\nof I don't know like like I wrote this\nguitar post another than facts which is\nlike a very general thing it's like\ngenerally knocks like these the the\nsub-problem that I'm thinking about most\nof the times where like I'm mostly\nthinking about the problem such that if\nI can have a thing that is trying to\nlike optimize reliably any hard\nobjective I could write down whatsoever\nlike this is like the the part of the\nproblem that I am like I have my sight\non right now we're like there's there's\nvalue in the like reliably type thing\nlike be able to direct something at\ndoing a thing\nbut I kind of feel like from my point of\nview if we could make a paper\nmaximizer only we would have solved a\nlarge portion or problem and so a lot of\nthis is like out of the space that I'm\nlike normally thinking about and then\nanother thing I want to say possibly\naddition to your thing and possibly not\nI don't know it's definitely definitely\nis the case that the way in which I view\nmost of most of my time working on\nHighline related things is like trying\nto build up foundations about having\nlike the directs like without like\nhaving the direct use case like I have\nlike a few different like these cases or\none to like resolve some uncertainty for\nsome confusion around some area but like\nI'm like trying to work with abstract\nenough objects that no specific like use\ncase that I have in mind will be like\nthe only plane which are movies or\nsomething and so like it does feel to me\na lot more like trying to like develop\ncalculus than trying to like build a\nrocket in the sense that like it feels\nlike it's like a thing that you need to\ndo before building a rocket and I feel\nlike it's hard to be able to look at the\nproblem of building a rocket and figure\nout what the right type of calculus to\ndevelop is without actively working on\nbuilding the rocket or something but and\nso maybe I'm doing it very badly I don't\nknow but I ie\nI do think that like most of my time is\nlike most of the like analogies little\ndraw and stuff you can try to think\nabout them as being directly about the\nISIS that's then it might like miss the\nfriend point ad or something okay thank\nyou very much and then I am one more but\ni blew some other people might have\nquestions to it so i wanna you know them\nin case they do if not then i'll ask how\ndo you as a researcher at mary how do\nyou feel about the choice to not publish\nany research so i believe one of the\nlast publications that I saw from Mary\nwas the logical adductors paper that you\nyou are the primary author on how has\nthat changed\nyour production of research hasn't been\na net benefit to you personally I like I\nknew inside view of this but it's weird\nfrom an outsider perspective from\nsomeone just looking at Mary's work for\nquite a while and then see a complete\nlike hiatus on that in terms of like net\nbenefit on me personally there are\ndefinitely ways in which it's like good\nand bad like two ways in which like it's\nbad is that it like makes collaboration\nharder another way that it's bad is that\nlike sometimes I would want to use\nwriting stuff up as a motivation for\nmyself to formalize things more or\nsomething it's like I have all these\nideas in my head and like the process of\nlike writing them down is like a\nmotivation to do a certain type of\nthinking that I have to like externally\nmotivate myself in a different way to do\nor something like that but I a large\npart of I know that there's there's a\npoint that I want to make about\npublishing which is that I think that it\nlike depends a lot on whether you expect\nlike I kind of expect the kind of stuff\nI'm working on is has like a chance of\nlike developing something really cool\nand a chance of not and like the\nincremental progress is not as exciting\nor something versus like if I was\nworking in a field that was more close\nto a lot of other people then like the\nincremental progress would matter like\nI'd have to get my incremental progress\nout so that other people would get more\nincremental progress on it and it can\nlike build up through a bunch of like\nindividual like small jumps or something\nversus like I think that I am like\ntrying to like do some like very and\nseeking trying to find some like large\njumps in in the way that we're\nconceptually thinking about the problems\nsuch that\nthe costs that I pay by working\nprivately until I have something that\nseems really interesting and then\ndeciding whether I want to share it and\nthen put it in the work to sharing it is\nlike not that much of a cost because\nit's like the sharing of the incremental\nstuff doesn't actually help that much\nbecause there aren't like a lot of other\npeople that are working on the exact\nsame things that I'm working on in in\nthe same way versus if I was in like ml\nthere would be a lot of people who are\nworking on very very similar things and\nvery more way it would be a lot more\nbenefit sharing I'm obviously a lot more\nconventionally academic and also some of\nthe way I work is generating random\nideas and there it's much better to sort\nof toss them out into a less wrong post\nand move on and see if other people\ndevelop them so yeah I probably my\nreckon is III disagree with the Mary's\ndecision at first order but I also trust\nthat they've thought it through at\nsecond order\nwell thank you both for your time and\nthanks for answering the questions I\nsincerely appreciate it okay thank you I\nwould like to just say two more things\nand very much so thank you for that and\nthen the the second thing is just in the\nbeginning which said that there were\nactually no implementation of any of\nthis and actually that turned out to be\nwrong in that joon-kyu has made a actual\nexplicit implementation of a utility\nfunction for a superintelligence and\nalso an implementation of meta semantics\nthat's been published at mr. ethical AI\nand next week I will try to give a\npresentation on that in\nreading group on Tuesday at this sad\ntime so I hope to see some of you there\nhey that should be all see you next week\nall right I I'm away from a computer so\nis there any way you could like like hit\ncommand and copy comments or no because\nthere were a lot of links shared and\nsome questions yeah and I don't know a\ngood way of saving any of this for", "date_published": "2020-06-11T13:41:58Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "07368782a7062319ecb906dc13020371", "title": "255 Where I agree and disagree with Eliezer", "url": "https://www.youtube.com/watch?v=V8R0s8tesM0", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 255\nin the aisafety.com reading group\ntonight we'll be discussing the less\nwrong post where i agree and disagree\nwith eliezer by paul cristiano\nlet me just get the video done paul\ncristiano is one of the\nperhaps\none of the two most famous\nalignment researchers and the head of\nthe alignment research center\nthis is a two week two months old uh\npost and unless wrong it has the most\nupvotes of any post ever so that's quite\nan achievement\num and we'll be going through the first\nhalf today which means we'll cover all\nthe points of agreements and the first\n10 points of disagreements\npaul christian describes this work as\nsomewhat of a response to elisa's post\nagi ruin a list of lethalities and also\nrambling in the same way and not\nexhaustive\ni think in general making a presentation\nand analysis of rambling style post is\nkind of hard i didn't do it for elias's\nposts\nand also\nlike eliasa is way more rambling than\npaul cristiano is in this post\nwe'll start by going through the\nagreements\nuh and one thing i should say up front\nis elias yatkowski in the comments that\nthis is a solid contribution and thank\nyou um\nnot just for the agreements but for like\nthe the total uh the total post\nnow splitting up into agreements and\ndisagreements\nis uh\nnot obvious how you do that because if\none person says there's 30 probability\nof a fast takeoff and the other person\nsays there's like 50\nthen is that an agreement or a\ndisagreement um\nthe way i think about it is like paul\nchristiano imagines a median less wrong\nposter reader and if that person has\nprobabilities in between\num idiosa and paul cristiano then it's a\ndisagreement and if they are on the same\nside different from what the median\nreader will expect then it's a\nit's an agreement\ni will also uh note uh\nsome references\nif\npaul is referencing something from\nelias's post then i'll uh note what the\nuh the point like it's point b 38 or\nsomething like that so we can later go\nback because uh paul cristiano said that\nthat he is not making an exhaustive\nreply and i want to see what party\ndoesn't cover\nso for the first agreement which is that\nthere is a good chance that we will see\nai the\nempowering humanity and\ndisempowering is used instead of killing\nas um edison would do because he\nmakes this is easier\ni think if you can call an agreement uh\nwith the definition that it's both more\nthat both are more uh pessimistic than\nthe median less wrong reader but there\nis quite a difference between the\nstatements made by elias yutkowski and\nthe ones made by paul christiano\nelias is much\nmore pessimistic and doesn't really talk\nabout disempowering as much as just\noutright killing\npaul cristiano gives his probability of\ndoom\n20 rather than 99.99 and a single layer\nhis timelines are\nprobably longer than aliases\n15 in 2030 40 in 2040 and around 30\npercent probability of a fast takeoff\nyou may\nsay something like how can you have a\nhigher probability of a fast takeoff\nthan uh then doom that seems uh very\nunlikely that there is a\n50 probability that a fast takeoff will\nbe um will go well um i don't think this\nis something we should put a lot of\nweight on in general when people make\nthese kind of statements they use some\nnumbers that change very much and they\ntry hard not to like\nanchor too much on their own previous uh\nestimates so we should take this with a\nvery very large grain of salt\non the timelines at least something like\nsoon is something that both paul\nchristiano and elias kowski agrees on\nand they also agree that we won't get a\nfire alarm or any kind of\nthing that will build a consensus until\nwe basically have dangerous ai or\nimportance in illicit we\nview we will in fact never have a\nconsensus where paul cristiano is\nuh\nmore optimistic once we have in fact\ndangerous ai that we'll be able to\nrecognize that\nwe probably won't respond productively\nto uh\nto ai risk uh even if there was a\nconsensus probably we would\nyeah not be able to rise to the\nchallenge in any meaningful way we can\nmess up even very basic challenges and\nin this case\nai safety is a really really difficult\nfor many reasons so we can certainly\nmess this up\nso this is not really a point that\neliezer makes in his uh\nagi ruin post but it's quite clear that\nhe holds opinions that are like this or\neven\neven more pessimistic\nthe existing state of alignment research\nis something that they're both very\npessimistic about they're not really\nwe're not really\nmaking progress on the key difficulties\nbecause we're focusing more on\nwhat is the the area where we can make\nenough progress to like publish a\nresearch paper instead of looking at the\nactual key challenges only quite few\npeople are working on that\neliza yutkowski has the same points but\nagain in a substantially stronger\nstatement\nwe won't we can't recognize progress\neven if we came upon it if we got a\nbillion dollars that would not help us\nat all because we can't recognize\nprogress and it will just uh\ndrown everything out in noise\none thing that i would really like uh\npaul cristiano's view on is like\nrelatively few researchers how many is\nthat and who is that because i think\nthat would be something that would be\nreally valuable to have some idea about\nwho\nare working on things that input\ncristiano's view are likely to to\ncontribute\nthe last derail is a uh yeah is a uh\nwhat's it called a twitter hashtag or\nsomething that is being used by elias\nyorkowski to describe uh when there is\nlike social pressure to talk about ai\nrisk and turn it into something that is\nabout like\nstandard political things um\nand paul cristiano uh says it's probably\na bit hyperbolic but on point here is a\nuh\nsorry um a um\npaul by\nelizabeth on twitter that shows that\nyeah\nmore people think that people clippers\nare bad but a lot of people think that\nbias is a\nbigger problem\nexcuse me for a moment\nin the same way a lot of people focus on\nsmaller risks that are\nmore realistic um and not really\nexistential in the way that we care\nabout\nand that's probably something that's not\ngoing to change until um\nvery late in the game if at all\num i think elizakowski would agree but\nin fact\nhe i can't really find anything that\nelijkowski has written about what the\nman on the street thinks about ai\nalignment because i think elias\nyorkowski just\nconsidered that totally totally\nirrelevant\nand uh\nwhat what uh\nwhat uh the standard people\num who who are not experts and don't\nhave any particular qualification um i\nthink he is basically given up given up\non trying to convince this kind of\npeople\nand ai takeover could be very aggro\nabrupt\nperhaps uh\nsomething akin to a coupe\nand that's another agreement between the\ntwo\npoor christian thinks that killer robots\nmight be part of it and\nthat is something that could happen\nindeed very abrupt and the progress from\num\nai having uh\nbeing involved in something like\nresearch until controlling killer robots\nmay be very brief\nsorry um and yeah this is also um a bit\nthat elias yudkovsky has talked about\nbut not very much\num\none of the things paul christiano agrees\nwith uh elias koski is that\nmost people don't see a uh the\nsingularity as something that can come\nvery very very quickly after we have\nautomated research by um by ai and\nthat's something that paul christiano\nthinks could take uh perhaps uh even\nmonths so that's a some\nsubstantially shorter timelines\nit's not precisely what eliza kowski\ntalks about because uh paul cristiano\nseems to have a model where\nfirst there is\nuh like some kind of recursive\nself-improvement and uh\nor some improvement in ai capability and\nthen this kind of diffuses throughout\nsociety and ai has a large impact\nwhatever that may be and then after that\nwe get singularity and some kind of\ntakeover whereas elias wikowski doesn't\nreally in his world model see a middle\npoint where ai has enough\n[Music]\neffect on the world to be described as a\nlarge impact it's mostly going from\nbasically\nno impact\nto\nai's control of everything\nthe way we normally solve\ntechnological uh problems is by\ntrial and error in the scientific method\nand unfortunately\nfor alignment this is not something that\nhappens by default because it's possible\nto work on capability without working on\nalignment and it's just possible that we\nwon't in fact solve uh\nsolve alignment because we're not forced\nto it we can just get by with standard\nuh gradient descent methods until we\nhave extremely powerful agi and then it\nmight be too late\num so we\ncan not expect to get\nthe strong kind of empirical feedback\nloops than we get in just about all\nother kinds of research\nagain this is not something that elias\nyou'd ask you explicitly states in his\nagi rune but i think he would agree with\nthis\nit has sometimes been positive that\nai\nimprovement would at some point hit a\nkind of intelligent ceiling around or\nslightly above the human level uh paul\nchristiano sees no particular argument\nwhy that should happen um in fact\nit seems likely to his in his model that\nwill relatively quickly go from\nsomething that is substantially below\nthe human level to something that is\nsubstantially above the human level\num\nagain this is also something\nelias agrees with\nmost of the these agreements that we are\nfinding are in fact uh\nin some of elias koski's preliminaries\nso we are not really even though the\nagree it is agreements it's uh not on\nany particular\nreasons some of the strong reasons elias\nyutkowski presents for why uh agi ruin\nis likely\nif we had powerful ai somewhere then we\nit would be able to um\nuh take over the world\nthat's something they both agree on um\nand i won't go into more details about\nthat coordination failure is really\nlikely in that\nwe it doesn't look like we will be able\nto solve that policy problem um\nthis in general geopolitical\ncoordination is hard and here we have\nsome\nstakes that are really ambiguous in that\nalmost\neveryone agrees that there is no strong\nai risk it's uh basically rationalists\nand no one else\num\nand there is a lot greater\npressure to defect in the sense that if\nthe others are not building agi and\nyou're building agi you can potentially\ntake over the world\num so that's\nsomething that paul cristiano agrees\nwith eliasa is likely to fail this\npolicy problem to be solved\nhumans won't\nnecessarily rise to the challenge um\nand covet has been a negative update for\npaul cristiano um\ni think um\nelias etaus is substantially more\nnegative about this he is nowhere near\n50 50 on this\nso i don't really think this can be\ncalled an agreement um there is\nilius yutkowski is\nseems to be strongly\non to the extent that we can rise to it\nto this challenge\non the more technical side is there a\nutility function that really\ncaptures what human wants and what human\nvalues are um poker channel says no um\nhe thinks we should instead do something\nlike reinforcement learning where the\nreward function gets smarter in parallel\nwith the agent being trained what\nprecisely it means that a function gets\nsmarter is of course\ndifficult to um\nto state in a concise way but that's\none way of summarizing his\nhis\nagenda we\nif we try to\ntrain an ai to\nhave a\nreward function that is\num like what kind of object we uh we're\ntrying to get then um\nwe will we are unlikely to get the nei\nthat is actually truly motivated to to\ndo this because if it's smart enough it\nwill realize that it should do what we\nwant it to do\neven though it's not that's not what\nit's actually motivated to do because\nthat's how it preserves influence like\ndeceptive alignment\ni\nthink i would actually disagree with\nthese two at this point because right\nnow i would expect us to see to see much\nmore signs of deception if this was\ntrivially true um in fact uh paul\nchristiano\na bit later than this says that he\nexpected the deceptive alignment\nrequires superintelligence and i\nstrongly disagree with this i believe\nthat actually hey at the human level\nit's quite possible to figure out that\nyou should be deceptively\naligned\nnaive assisting is my uh all the\nheadings are mine and and the the text\ndown here is mostly quotes um so it's\nmore uh likely that the human will learn\nwhat is the environment that's a lot\neasier than trying to learn something\nreally complex\nphilosophically fraud like trying to\nhelp humans uh and if you try to get\ndata from humans some kind of feedback\nthen um\nuh trying to um\nto learn this is difficult because there\nwill be errors for instance so uh it's\nmuch more likely that the ai will\nactually learn the uh the processes that\nare generating the reward signals but\nit's inside our brain and update fully\non that on something like that\nand that's\nprobably what we're going to get if we\ndon't do anything\nif we just try to do\nsimple reward learning from humans\num the dying with dignity framing that\nwe shouldn't just assume that there's\nhope but uh try to practice try to try\nto maximize the log arts of success is\nsomething that\npoor christian agrees with and this is\nbasically the dying with dignity\nstrategy article from\nmiri\nthe current plan for aligning ai\nrequires\na lot of iteration and modification\nuh the state of affairs is that if\nalignment turns out to be a big problem\nthen we'll have to learn about it and\niteratively improve our approach\ni'm a bit confused when paul cristiano\nuses the word state of affairs i think\nhe means plans\nbut i'm quite unsure about whether\nthat's actually\nwhat he is pointing towards elijkowski\nof course is way more negative about\nthis and believes that\nwe'll\nnot just learn a lot about it and\niteratively improve will\nfail to learn and not improve and die\nfinally\nin other research fields you usually\nchoose a research project that is just\nbarely out of reach so you can actually\nimprove the state of the art and\nalignment is not chosen for this reason\nthere is no one to ensure that the\nproblem is fair and it's possible to\nsolve the problem at all it's possible\nthat that will fail because the problem\nis just too hard\nand that's the point that elias\nwhitekask also makes\nfinally we're getting to the\ndisagreements of which we'll take the\nfirst ten\num\npakistanis says that these are mostly\nstated without arguments so i have to\nlike dig deep uh to try to figure out\nwhat uh arguments could be made for this\nso i'm not\nnecessarily following paul christiano\nthat closely in this part\nthe first is\na complaint that elias equivocates\nbetween saying on one hand you have to\nget alignment right on the first try and\non the second that you can't learn\nanything about alignment uh before that\ncritical first uh try and these are two\ndifferent statements and pocus jano\nthinks that the first one is\ncorrect and the second is incorrect\nso\ndoes elias cask in fact equal like this\ni looked through this post and to try to\nsee if i could find ways and i was\nunable to find any places\nwhere eliezer caskey makes such an equal\nvocation\nhe does state the first claim very\nexplicitly and later he actually states\nsomething that kind of sounds like the\nnegation of this\nhere is the exact quote\nsection\nb is in general about what can we learn\nand what can't we learn about alignment\nbefore the critical first try in\nparticular uh\npoint b 13 seems to be about uh the kind\nof things that you can in fact not learn\nso it seems strongly to me that\nelizakowski is very explicit about both\nof these points and there is no\nequivocation\nbut it's also possible that paul\nchristiano is thinking about a different\npart\nokay so what can we actually learn from\nalignment before the critical first\ntry well we can\nget we can use the traditional research\nand development methodology\nto try to create toy models of alignment\nfailures\nwe have standards for interprobability\nthat it seems like we should be able i\nthink there was someone posted like a\nresearch roadmap a tech tree or\nsomething like that and we have\ntheoretical questions we can't answer\nand that seems like something we can in\nfact learn from now\ni don't think that\nsaying that there are theoretical\nquestions we can't yet answer i don't\nthink that is something that we can talk\nabout as feedback from what works\ni think\nwe are unlikely to get a resolution to\nthe uh deeper theoretical questions from\npractical experiments\nand the problem that uh here i think is\nsomething that they would actually agree\nstrongly on is that solving a scientific\nproblem without being able to learn from\nexperiments is really really hard\nand here\nthis consideration makes the\ninstitutional problem vastly hard but\ndoes not have such a large effect on the\nscientific problem\ni'm a bit unsure about what precisely\npaul custino means by this quote\ni think a wait um\nelysiutkowski\ntalks about uh\nthe two challenges with uh with this\nthat's the fact that there is a deadline\nand we only get one try and even if you\nin some way were able to relax this it\nwould still be a really really deadly\nproblem\nso one way you could think about this\nimagine that the institutional problem\nis solved but the scientific problem is\nnot solved that would be like we have\nwe are coordinated enough that we have\nall the time in the world to solve it\nbut we only get one attempt so that is\nsomething that is\nsubstantially harder\nsubstantially easier than the actual\nreal problem where the institutional\nproblem is not solved\nand the other way around um would be\nthat we have unlimited attempts uh but\nwe do have a sharp deadline so we can\ntry to build an aligned a ai and um but\nat some point the institutional\ncoordination will fail and some um\nsomeone else will just build an\nunaligned atr so we do have a deadline\nso um i think that is what pulse channel\nis minch is pointing towards but i'm not\nsure\nthe topic of nanotechnology comes up\nand um\npaul cristiano believes that once we\nhave nanotech we'll surely have other\ntechnologies that can kill humans in\nmore boring ways\ni think this is a empirical question of\ncourse it will eventually be revealed\nbut it depends like what is actually the\neasiest way to kill humans is that by\nbuilding uh large robots or small robots\ni must admit i haven't read the\nliterature i should\nread eric drexler i guess um\nilya sirkowski claims that he has in\nfact read what eric draxler has written\nabout nanotechnology strong name and\ntechnology\nand\ni\nlike if paul christiano believes that\nuh big robots are easier than small\nrobots that there is um uh\nand\nanother easier but more boring way to\nkill humans um then he should state like\nwhat is the actual problem with the\nnanotechnological pathway that elias\nyorkowski is is saying\nanother thing that is likely to happen\nbefore we get nanotechnology is that the\nstate of human research and development\nwill be\nradically advanced by ai\nit's a possibility but it's also a\npossibility that\nai research and development would just\nbypass humans and not in fact\nstrongly contribute to advancing human\nresearch and development\npaul christiano calls uh\nuh\nelias yokowski's scenarios uh cinematic\nand they don't seem to like hold\ntogether\num i feel\ncalling them cinematic sounds like\na\nstrange almost like an argument from\nstatus or something like that\nand\nis there a more realistic picture of ai\ndevelopment under the surface\nthat is in fact something that a lot of\npeople have speculated on in particular\nin the comments like when elias kowski\nis talking about nanotechnology and\nbotulism and diamondoid\nbacteria like\nnanomachines is that something he really\nmeans or does he mean something else a\nlot of people are thinking that he must\nmean something else\num i have talked very little with ilya's\ncostume but i have in fact talked with\nhim and he seems\nvery very much like a person who\nstrongly says what he believes so i\nwould bet good money than when elias\nerikowski says nanotechnology he is in\nfact not using that as some kind of\nsimile or metaphor for another\ntechnology when he says nanotechnology\nhe means nanotechnology\nis it possible that the ai that is\ncapable of killing us will before that\nnot making any major technological\ncontributions to society at large\nthat's a question that\ni think depends mostly on takeoff speeds\nand that's of course an area where\nelizabeth caskey and paul cristiano are\nquite of uh different opinions\num for christian believes that we are\ntrying ai to do things that look uh\nimpressive um\nso um if there's an ai that doesn't do\nimpressive things then we'll just not uh\nnot find that ai because we're just\nusing uh stochastic gradient descent to\nfind the impressive ais\nthat means in general it will be\nthe impressiveness will increase in a\ngradual fashion at every point there'll\nhave been before that a slightly less\nimpressive ai\ni think the depending on how you\noperationalize impressive that may be\ntrue but i would expect the things that\nwe're trying to make the ai do in\ngeneral are things that may be\nimpressive but are not dangerous in the\nsense that ai that nanotechnology is\ndangerous and making uh beautiful\ndrawings or something like that is not\ndangerous um and the hope the fear is\nthat if we get a general intelligence it\nmay be able to do some really really\nimpressive\nuh harmless things and then the very\nnext thing it does is something that is\nat\nroughly the same level of impressiveness\nbut just in a different domain and that\nturns out to be really dangerous\nsome of the the framing the words that\nwill christianity uses about uh\nilliterate casket to describe his\nscenarios\nare somewhat derisively stated like he\nimagines scenarios of ais lying in wait\nto cut to cause trouble later that's\nalmost certainly not how eliza yokowski\nwould describe his own\nscenarios\nit seems uncharitable but on the other\nhand elias yokowski is also harsh when\nhe writes at least according to like\ndanish moss people are\nnot as diplomatic as would\nbe becoming of them\nso i can't really uh fall for cristiano\nfor for pushing back on alias in this\nway\none thing that i do wish to point out\nhere is in in this uh argument about\nwhether we'll have ai's that does\nsomething slightly less impressive uh\ncivilly looking um i think that uh\num\npaul christianus seems strongly\noverconfident by using words such as\ndefinitely i think there is a lot of\nreasons to believe that um that it may\ngo in in a different way definitely\nseems like uh something that when you\nneed really strong arguments to use this\nkind of word\nrecursive self-improvement how will that\nlook in practice procrastinator expects\nthat this will basically look like\nwell\nlike humans doing research and\ndevelopment um\ni suppose that's possible and i would\neven uh say that there is a high\nprobability of this but you could easily\nimagine ai research and development be\nbeing a lot less scrutable i could\nimagine that you asked gbt3\nhow can i improve you for uh how\ncan you give a uh an architecture than\nthat is better and then it makes i don't\nknow a transformer where the diagram is\nreversed or something like that and says\ntry this this will be better and then we\ntry it and it is it does in fact\nuh perform better and we just don't know\nwhy uh i think that is i i could see\nthat happening um\nand a way that\nhuman research is\none thing that really makes it screwable\nis that there are many humans working on\nan ai system and they are talking with\neach other\nexchanging into issues and ideas and if\nthey are replaced or uh by by a single\nunitary ai\nthen all the the social um artifacts\nlike people writing to each other about\ntheir intuitions\nis lost and i think that could be a\nsubstantial difference\nand finally it's possible that um ai\nwill just be able to do things in a\ndifferent way than humans i think we've\nseen a lot of examples of ais working in\nhuman domains where it seems like they\nare working in a different ways than us\nwill ai smart enough to improve\nourselves be a crucial threshold paul\ncristiano thinks not i think it could be\nin fact\nit depends whether it's a crucial\nthreshold depends on whether the speed\nwill change to a large extent in my book\nof course what does crucial mean\nto my mind if it changes the speed\nmeaningfully then that is a crucial\nthreshold\nhow gradual would this be\nprocrastinator thinks that this will be\na very gradual process\ni think that\nwhen i ca conceptualize um\nwhat will a super intelligence be able\nto do or an ai then uh bostrom's uh\nframing of speed super intelligence\nquantity super intelligence and quality\nsuper intelligence it seems to me that\nworking on something like capabilities\nis something that is clearly possible to\ndo by a speed super intelligence and a\nquantity super intelligence and even at\na lower level where it's not actually\nsuper intelligent but just at a roughly\nhuman level that seems like where speed\nand quantity trivial would be able to uh\nto contribute to capabilities because\nthere is a nice feedback loop right if\nthe code is actually running faster\nthen it's better right\nand\nhaving a lot of ais that are working on\nthis\nwould be\nshould be able to contribute much\nmore to capability than to other things\nin disagreement four and seven there is\nan underlying pattern that i have called\npulse sequence of events\nthis is my interpretation of how paul\nchristiana thinks the world is going to\nlook like\nso take this with a big grain of salt\nplease\nthe first thing that's going to happen\nis that ai contributes to\nresearch in some meaningful way\nthe second thing that will happen is\nthat ai will be able to uh in particular\naccelerate alignment research\nthen we'll get ai that improves\ncapability maybe doubling the pace\nand at some point after that we'll get\ndangerous ai so that's kind of like\num\nperhaps perhaps this is a takeoff split\ninto uh these\nthese parts perhaps it's uh\nthe take-off hasn't really started at\nthis point it's a bit unclear\nis this a likely sequence of events\ni think\ni would\ni would be skeptical because i feel that\nai\nimproving capability is a lot easier\nthan ai improving alignment research\nalignment research has turned out to be\nreally really hard and ai capability has\nturned out to be surprisingly easy\nnot easy but easier than we would have\nhoped so in the two progress bar\nmodels\ni think that that ai\nis likely to push more on the capability\nprogress bar\ni'd like to be wrong of course i would\nlike paul kuchan to go into more details\nabout how ai could help alignment\nresearch of course he has given like a\nnumber of suggestions for how this could\nbe made um but most of paul cosiano's\nsuggestion seems to be like this is a\nstrategy for how we could do it rather\nthan i think it is likely that um\nthis is the method that will be used\nlike for example existing latent\nknowledge is a like a great idea um but\ni haven't heard any strong arguments\nfrom paul cristiano saying\nin the actual future he expects that\neliciting latent knowledge like methods\nare the ones that will actually be used\nin the future because that's a much much\nstronger claim than whether listing\nlatent knowledge can be\ncan be made to work\num\nso uh yeah there's some agreement uh one\nthing i would point out here is the\nself-improvement ai may uh\nbe um\nbe hindered by the fact that it can't\nsolve the alignment problem that seems\nlike in the old days where we just had a\nutility function then could just copy\nthat to a successive function but that's\nnot something we might we\nmight not see that with a um\nwith the with large language models for\ninstance or something like that\num so how uh to what extent will we see\nthese two things alignment research and\ncapability research growing at the same\npage pace one thing we could at least\nsay is that if there is like a single ai\nthat is improving um in that case then\nas it grows in capabilities then um the\nalignment will all will always lack the\nimprovement in in capabilities because\nit needs to improve in capabilities\nbefore it can contribute to alignment so\nthat in that extent it might\nself-improve to the point where it can\nsolve the alignment problem and then you\ntry to like use that solution and then\nit doesn't work because it won't allow\nyou to actually change it to be\nalignable to be correctable or something\nlike that\npivotal acts\nthey seem misguided and unrealistic in\npaul christiano's view\ni'm not sure i like the word unrealistic\ni think\nmisguided and unlikely to work is better\nto\nto use um progression thinks that we can\nuh\nuh\nreduce the period of risk\nand do something kind of like a pivotal\nact in in like smaller parts\num kind of what uh\npoker channel doesn't use the word weak\npivotal act but that is um what\nelizabeth\nstates one of them is by\nadvancing alignment research\ni think definitely if you get good\nenough\nimprovements in alignment research that\nwould count as a pivotal act um\nbut in that of course um\nuh sidesteps the the discussion about\nwhether it is good or bad whether it's a\npivotal act and just\nto say that it's a good thing that we\nshould uh pursue to have aligned ai to\nalignment research\nanother way that we could get something\nthat is kind of like a part of a pivotal\nact\nis by having something that could\ndemonstrate that um unaligned ai is\ndangerous that's something that i have\npreviously talked very positively about\ni think in general unfortunately we will\nhave to really convince a lot of people\nbecause there is a lot of people who\nreally want to build unaligned ai and if\nwe want to\nstrongly convince 100 of the population\nof earth that requires really really\nstrong\nlevels of persuasion and i think\nthat is dangerous in of itself\nif you want to\nstrongly persuade everyone so there is\nnot a single person in the world who\nwould like to deploy an unaligned uh ai\num\nthen i think that is\nthat is basically a pivotal act\num so elias yorkowski's main complaint\nthat these\nbig pivotal acts are unlikely to work is\nsomething that i feel paul christiano\nreally isn't answering at this point\nthe third way that\nunaligned ai can uh that a\nrelatively aligned ai can stop unaligned\nai is by consuming the free energy of\nsociety which is the thing like it\noriginally comes from an analogy with\nthe free the uh efficient market in an\nefficient market there is no free energy\nand that's the thing that an online ai\ncould\nuse to uh to grow very rapidly\num\nthings like uh jobs that an ai could do\nwould be an example if for some reason\nthere's a lot of jobs that and\nthat an ai could do but we still have\nhumans doing them and then suddenly an\nunaligned ai comes along and just can do\nall this and\ntake over the jobs from all the humans\nand just use that for disempowering or\ngaining economic wealth\nthat's an example of free energy being\nbeing available for an unaligned ai\nunsecured computer systems is another\nexample um\nwe could if we have an aligned ai that\num\nthat checks every uh\nuh\ncomputer system in the world for\nfor vulnerabilities and then hacks them\nand patches them immediately um that\nwould be an example of an ai that is um\nperforming\nlike a part of a pivotal act a weak\npivotal act um and the question then is\nwhether that is dangerous or not i would\nexpect that to be quite dangerous\nanother example is\nbetter\nkiller robots or something like that\nthat's also something that if we already\nhave the best killer robots that can be\nmade then a misaligned ai can't\nobviously make better killer robots\nand in general managing the consequences\nof powerful ais\nsteve burns have some uh\nsome rather uh tries to uh in the\ncomments go into some details about what\nkind of free energy there is and points\nout that\ncontrol of nuclear weapons is something\nthat right now is in the hands of humans\nand if there we have a world where there\nare ais that are more intelligent than\nhumans then our two options is to hold\non to it or to give control of the\nnuclear weapons to the ais that we hope\nare aligned and\nif we don't give control of all our\nnuclear weapons to ai then there is some\nfree energy so consuming the free energy\nin this way seems really hot and also\nlike\nall the work that can be done by ai\nshould be done by ai and maximally\neffective killer robots and things like\nthat seems like a world that is really\nreally really bleak in particular if the\nais that we are giving uh control over\nthe nuclear weapons and all the jobs and\nall killer robots um if we are not 100\nsure they are perfectly aligned then\nthat seems really really dangerous um so\nin fact consuming all the free energy to\nmy mind seems like something that is\njust about as dangerous as a pivotal act\nbut doesn't actually solve the problem\nso i think um having ai consume all the\nfree energy is\njust worse in every way than a pivotal\nact\non the risks of personal acts uh paul\ncristiano believes that there is an\noverwhelming risk of destroying the\nworld and that is in fact also what\neliza utkowski\nsays he says that\nin his post on agi ruin he strongly\nexpects that uh we we will not do a\npivotal act because in practice we\npivotal acts are just too hard and the\nthing that will actually happen is we'll\ntry to look for a good pivotal act and\ntry to find some kind of miracle and we\nwon't find this miracle and then we'll\ndie so the fact that pilsel act runs an\noverwhelming risk of destroying the\nworld is something they might really\nagree on\npaul christiano's uh preferred\nalternative is traditional policy\ninfluence\num\nthis i believe is what elias utkowski\nwould call a weak pivotal act and again\npaul christian really ought to try to\ndefend this\nthis act of traditional policy influence\nhe says that rejecting this requires\nextreme confidence about details of the\npolicy situation\ni think this is trivially wrong in fact\nto be able to say um that a traditional\npolicy influence will not be sufficient\nonly requires that you know of like a\nfew defeaters and when you know like\nthere's no way you could convince the\nchinese government to go along with this\nthen you don't actually need to know a\nlot of details about the policy\nsituation\nto be able to predict that this will in\nfact not work\npoker channel admits that this is\nsomething that would be an unusual\nsuccess in historical um\nterms but again much better than the\npivotal acts\ni think that there might in fact be a\nvery different uh be a different\ndisagreement with paul christiano and\nelias yatkowski than what paul cristiano\nis pointing to here\nif we assume the case that elias\nutkowski believes that a pivotal act has\n100 chance of working and\ntraditional policy has one in ten\nthousand and paul cristiano also agrees\none hundred percent that perishable acts\nare 99 likely to fail\nbut it's much more optimistic about\ntraditional policy in this case they\nshouldn't debate the uh the pivotal acts\nwhat they actually should debate is\ntraditional policy to what extent is\nthat likely to work um and i think that\nis\nan interesting case and i think that is\nto a large extent uh if paul cosgiano is\nvery very optimistic about traditional\npolicy influence then i would really\nlike to hear why he's so optimistic\nabout that\nfinally in his criticism of perishable\nacts\nhe claims that elias underestimates how\ndifficult it would be for an ai\ndeveloper to covertly take over the\nworld\nat this point we should of course uh say\nthat it's not a an ai developer alone\nit's an ai developer plus a moderately\naligned narrow super intelligence\nthe second thing elias kowski\nunderestimates is how strongly and\neffective governments would respond to\nthe possibility of\nan ai developer taking over the world\nand finally how toxic\nthis kind of plan is\nso will governments be able to\neffectively react to the possibility\nthat\nsay deepmind will create a super\nintelligence that will take over the\nworld uh well if they were doing that\nthen they might do that now because the\npossibility is in fact there right now\nand as far as i can tell governments are\nwell they're certainly not strongly and\neffectively responding to this as far as\ni can tell governments are totally\ntotally oblivious to this possibility\nso we're looking at a complete\nintelligence failure right now it is\npossible it will change but it's also\nvery possible that it will in fact not\nchange at all\nelizovsky is\naccused by paul christiano of being\nsloppy in some of the topics that he's\ndealing with the first is on convergent\nincentives\nso i looked up here is an article\narticle about\ninstrumental convergence\num where elias erkowski goes into\nsubstantial um details on conversion\nincentives the deep nature of\nconsequentialism is another area where\npaul christiana thinks elias ritkowski\nis sloppy\ni found the the article here where elias\ngoes in details and finally he's sloppy\non generalizing\noutside of a training distribution\ni couldn't really find anything formal\nthat ilia streetcars has written on this\ntopic\nat least by looking for 10 minutes it's\npossible you could ask elias\nif he has in fact written any\nrigorous thing about this\nbut to my mind this is an isolated\ndemand for rika because these two\nposts up here are in fact\nquite formal and quite rigorous they are\nnot perfectly\nrigorous and they don't er they're not\nmeant to be perfectly rigorous but they\nare at a rather high quality level uh at\nleast as far as i can tell\nit's a bit difficult to tell for certain\nuh but i think um\nalmost all arguments that are used\nwithin ai alignment are less rigorous\nand less formal\nand less\nand more sloppy than what elijkowski is\npresenting\nand in fact uh uh even though elias's uh\nreasoning may be sloppy pakistan\nbasically agrees that they are roughly\nright but it's just that we can't really\num\nhave a strong probability that they're\ntrue when we have a like a chain of\nreasoning\num\nin particular on generalization\nprocedure claims that it is subtle and\nwe can learn a lot about these kind of\npoints in advance but we just don't do\nthat right now\nto my mind we can learn about\ngeneralizing outside of the training\ndistribution at a below human level\nright now but it doesn't necessarily\ntell us a lot about how a superhuman ai\ncan generalize um\nin particular i don't think that we can\nwhen when\nbooks just say we can learn a lot\nabout it i think that's very unlikely we\ncan speculate and we can like get some\nideas about how generalization outside\nof the training distribution looks at a\nlow level but we can't really truly\nlearn and get\nlike a\nstrong knowledge about this uh in any\nway\npaul christiano believes that uh\nthe result of this stopping\nstopping reasoning by eliasa is that\nhe's wildly overconfident i think i\nwould\nif i was paul cristiano i would go\nthrough some of these articles here\num by elias yutkowski and try to find an\nexplicit error\nbecause if in fact paul christiano is\nright that\nelisa yorkowski has written some sloppy\narticles\nthen there must be some mistakes right\nuh if this is a sloppy article then\nwe there would be some kind of mistake\nthat could prove to us that elias\nyutkowski is doing sloppy reasoning to\nme it doesn't look sloppy but i'm no\nexpert i'm just unable to really\nstrongly evaluate this claim\nso how much will\num how much will change when we go from\na roughly human level to a super\nintelligent level um\npakistan says that the training\nstrategies don't need to handle\nmuch of a shift between from a low level\nto a high level of intelligence\nuh i think that is missing the point\nslightly in that it's not really\ntraining strategies that need to um to\nchange or\nwhen the distribution shifts the problem\nis the alignment strategy needs to\nchange\nand again\naliaser is claimed to be making a\nconfident\nstatement about how ai is going to\nchange and i don't think really it's the\nnature of ai that's going to change\nelijkowski states this as the air there\nwill be\na new external options opening and\neven more internal choices and modes are\ngoing to open as the ai becomes more\ncapable uh and that's different from\nsaying that the nature of ai is changing\npocket channel\nsays that earlier koski is probably\nwrong and clearly overconfident\nthe way i think about this is that\nat there is like this is of course\nreally simplified like there is a\ncurrent ai level and then there is the\nlevel that cern is on and then there's\nthe level that paul christiana is on and\nthen there's a super intelligent level\nand it seems clear to me that going from\nscience level to uh paul christiano's\nlevel is a\nqualitative difference paul christiano\ncan clearly think of a lot of strategies\nand consideration in this rather that i\ncan't think of\nand in the same way we should expect\nthat an ai that goes from the current\nlevel to a super intelligent level it\nmust pass through the cern level and the\npolitician level to get to the\nsuperintelligence level and it seems\nlike if there is a big distributional\nshift between these two um\nthen it seems to me very likely that\nthere is a very large distributional\nshift going from a superhuman level to a\nsuper intelligent level\nfinally\nfume the intelligence explosive thesis\num uh\nelise witkowski uh strongly expects that\nwe will have some kind of foom\nuh paul castiano disagrees with this um\nthis is something we've talked about\nbefore and\num\nhe claims elias's position is probably\nwrong and clearly overconfident\ni think in fact elisa ykowski has not\nmade a really really strong argument\nfor the fume and he\nis not in fact claiming a really really\nhigh probability of\nof an explosive takeoff um i think um\nlike he's not committing to any kind of\num number so this is just me uh the vibe\ni get from elias so you'd cast is\nsomething like eighty percent maybe\nninety percent for a fast takeoff and if\npaul costano is thirty percent um\nthen um clearly overconfident and uh it\nseems like a um\na very strong statement if the the\ndifferences between their um\nprobabilities are not larger than this\nand of course the issue of foom is\nsomething they've talked a lot about and\none place they didn't talk very much\nabout foom is in\nelisa yudkovsky's post agi ruin it has\nvery little about foom\nthere is one part that perhaps\nrelies a little on foom but but most of\nit basically does not the agi foom the\nagi\nruin post is about other reasons why we\nare probably doomed\nso i'd like to end with this the last\ncomment on paul cristiano's\nthreat by elsie which is just i hope\nyou're right i hope uh paul christiano\nis right and that is has the most\nunbalanced moderate uh\nmoderation here i've ever seen with uh\n52 people believing that this is a true\nstatement and very few people believing\nthat this is actually something that\ncontributes\nso i too hope that\npanko's channel is right and elias\nyudkovsky is wrong\nwe'll see more about that next time", "date_published": "2022-08-26T06:35:15Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "638fc23cf44b2320483d39bf600a65af", "title": "219. Misconceptions on discontinuous takeoff", "url": "https://www.youtube.com/watch?v=ojyYX4sX_w8", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 219\nin the aisafety.com reading group\ntonight we'll be discussing the article\nmisconceptions about continuous takeoff\nby matthew barnett\nmatthew bunnett studies computer science\nat the university of\nuh california berkeley and this is a\npost on alignment forum\nand restaurant which is around two and a\nhalf years old\nso before we dive into the article i\nwould like to give a bit of\nmy personal view on continuity\nand of course this is a mathematical\nconcept that was formalized by viastras\ncalvastras\nand the way mathematicians look at\nwhether\na function is continuous it is not\nseveral definitions but one of them\nis that um for all points in the domain\nthere must be a um if someone\nchallenges you with an epsilon and says\nis this continuous within epsilon\nthen you must be able to provide a delta\nso that within\nall points that are only delta away from\nthis\n[Music]\npoint where you are wondering if the\nfunction is continuous\nuh then the value of the function must\nat most be\nepsilon from what it is in the middle so\nlet's say here\nis an example where we have a function\nhere\nand the the challenger asks us is this\ncontinuous in the point\nx equal to for an epsilon equal one half\nand in this case we need to come up with\na\num so if epsilon is one half then we can\ncome up with a delta that is also\none half and within this area here then\nthe function is wholly contained within\nthis blue area here\nso that's a rigorous definition of um\ncontinuity of a continuous function\nso in our case of course what we're\nreally talking about is\nthe trajectory of uh ai capability\nso that means that the x-axis is\nusually taken to be time and the y-axis\nis capability in some sort\nand i there are many ways to define\ncapability\nnormally when i need to define it i use\num\neither the best of boston's six\ncognitive superpowers\nor just intelligence amplification the\nability to\nrecursively self-improve i believe this\nis probably really\ntightly correlated with general\nintelligence\nso it probably doesn't matter so much\nbut for for many practical purposes\nthis is far more interesting\nalso we'll notice that when you announce\na project\nthat's only a single point in time so if\nyou want to have\nsomething that is defined for all points\nyou can say\nthat um then it's the uh the capability\nof the most\ncapable system at this point in time and\nin this case of course it is uh is\ncontinues precisely in the cases\nwhere a new project is introduced then\nif the\nthe state of the art is here and then\nthere's a new project\nuh with with this capability then there\nis a continued\ndiscontinuity between these two points\num\nso a better um\ndefinition of continuity as regards to\nai capability growth\nwould be something where we have the uh\nan epsilon continuity which is a a lower\nbound\nand upper bound on how much the uh\nthe different systems are improving on\neach other\nso that would mean that it would be\npossible for someone to make the\nfollowing statement\ni am 80 sure that we will never see a 25\ndiscontinuity meaning that we will never\nsee\none that is oh have a\n125 percent the performance of the\nprevious system\nand if people made this kind of\nstatement we could investigate\nrigorously has there ever been a 25\ndiscontinuity in the past\nhow large have discontinuities we've\nseen and\nhow bad is it to be 25 more powerful\nthan the previous what can you do with\nthat in practice\nand if you're only 80\nsure then uh the 20 where you see more\nthan 25\nis that like 30 percent or is that like\n3 000\ni think this is all these are the kind\nof questions that come up naturally\nif you use this um excellent definition\nhere that i suggest\nso to go through the actual article by\nmatthew barnett\num so for the question of if i will\nexperience a discontinuity\nhe says the terminology is\nconfusing and diverse and\nhe provides his own definition saying\nthat\nit will follow a trajectory that is\nroughly in line with the expectations\non the negation that there is not a\npoint\nwhere um\nthere is not a point where a single\nproject leaves\nvery much ahead so here there is much\nmore competent than other projects\nso the standard way that normally we\ngraph\nai capability improvements is\nthis uh figure from nick bostrom's book\nsuper intelligence chapter 4 where we\nhave the kinetics of an intelligence\nexplosion\nwhere we're talking about the time from\nwe get\nagi until we have something like a super\nintelligence\nand whether this function from here to\nhere will be continuous all\ndiscontinuous\nit should be noted that nick bostrom in\nthe book super intelligence\ndoes talk about discontinuity a bit in a\nfootnote when he's talking about this\nbut bostrom focus very much on the\nduration\nfrom this time to this time um\nso there are a number of differences\nbetween how\nmatthew barnett uses and how i or\nperhaps nick bostrom\nlicenses first whether the data points\nare\nprojects or models like you could\nimagine that\num if you have something like uh alpha\nalpha go that turns into alpha star and\nmu alpha zero\nand mu zero do they count as\none project or do they count as several\nprojects\nand that has a very large um\ninfluence on how discontinuous it is\nbecause surely if you count them all\ntogether\nas as one then it's a huge discontinuity\nin the ability to play go\nand it seems here that the the word\nmatthew barnes is using\nfor how much more competent it is like\nmuch more it's quite unclear what that\nis so\ni would have preferred an epsilon even\nlike 60 or something like that\num also matthew barnett is not really\ntalking about capability\nas much as competence plus power which\nis a bit strange i would imagine that if\nyou have two ai systems\nthat are the same level of intelligence\nbut one of them\nis controlling the stock market and the\nother one is\nnot or one of them is controlling\nnuclear weapons or something that's not\nreally what we're talking about\nwhen we're talking about here even\nthough there are obvious ai safety\ncomponents to giving nuclear weapons to\nthe\nto npi um also one thing from\nmatthew's uh uh article that isn't quite\nclear\nis that i believe it sounds like he's uh\ndisregarding this precise point the\npoint where the first\nagi the first project that achieves\ngeneral intelligence at a\nhuman baseline level and that's of\ncourse a discontinuity\num well every new project is a\ndiscontinuity\nbut that's one of the ones that we are\nmost interested in\nin particular if it will do a fast\ntakeoff\nso for the actual misconception the\nfirst continuous doesn't necessarily\nmean slow\num and matthew barnett prefers\nto talk about continuous uh rather than\nslower fast because he believes this is\nthe thing that is more strategically\nrelevant\nto me i i think i must have a very\ndifferent conception of\nuh strategy right um the the wall\nuh the time uh on the clock\nseems to matter very much for what uh\npreparations you can do what actions you\ncan take\nuh i mean surely in in the uh reject\nyour case where we have\nuh like it's a takeoff that is like five\nminutes\nthen whether it's continuous or\ndiscontinuous doesn't matter very much\num so um matthew talks about\nthat the uh the capabilities that are\ngained\nare the ones that are really interesting\nsomewhat in a maybe a bit more binary\nway that i would\ndescribe it and the example that he uses\nis a generative adversarial networks the\nway that\ngiven a um facial recognition uh neural\nnetwork\nyou can actually ask it how does a face\nlook like\nand then you get in 2014 you got\nsomething like this\nand 2018 you get something that looks\nvery much\nlike a a photo\nof a person um and this here\nfrom over the course of four years\nthat's a fast\ndevelopment relative to humans bostrom\nwould call this\na medium takeoff in the uh\nability to create uh images in this way\nand um matthew barnett makes the\nfollowing\nuh statement it would be unusual if\nsomeone right now\nin late 2019 produces\nvideos in high definition using\ngenerative\nadversarial networks so this is for\nstill images\nfor videos he believes that we are\ncurrently have like videos\nlike this and it would be strange if\nsomeone had\nfar better it is continuously far better\ntechnology than what was available um\nthan what he believes is available so i\nactually\nuh looked this up to see when\nwas the first high definition deep fakes\ncreated\nand i managed to find a um\nuh a report referenced in ieee\nsaying that um there were\nhighly realistic seamless videos so not\nnecessarily hd\nbut in fact before this\nuh prediction was made um\nso it references unfortunately uh uh at\nthat link so i can't\nsay precisely but if you count this that\nhe would be surprised if someone\ndid this and then in fact this had\nalready happened\nso that counts as a i don't know if you\ncan say a prediction\nif you say something that is already\nfalse\num but that's um\num large power differentials can still\nhappen\nin a continuous takeoff that's another\nof these misconceptions that he is\ntrying to clear up and that's of course\ntrue in the opposite\nway that you can have large power\ndifferentials even without ai\nat all so of course whether ai is\ncontinuous or discontinuous\nyou can still have uh differentials and\npower and\nthe example he uses is in the ability to\nproduce rivals\ni'm not really very fond of that example\ni think\nin most of the wars during the\nindustrial revolution\nthe the way that the industrialized\nnations took powers\ndidn't have very much to do with rivals\nthey had more to do with a\nthe person's nebulous concept industry\nwrit large\ni think so so the punch dance but uh i\ndon't like the specific way it's phrased\nand um in particular we can see that a\nsingle nation\nor corporation can pull ahead that is\nsomething that can happen\nand has happened um and here when you're\ntalking about one nation or one\ncorporation\nthen it's the obvious question here\nbecomes\nif you have two projects that are\num created by the same nation or the\nsame\ncorporation or um you know the same\nsmall team or whatever\nwhen are they distant enough that there\nis a strategic effect\nfrom the fact that there is an\nintermediate right you can imagine\nthat um if for instance with alpha\ngo alpha zero if if all these\nprojects uh belong to the same\nstakeholder\nthen there might not be so much of a\nstrategic effect on this\nyou could argue for instance that once\nit's made public\nthen a number of people will be able to\nreact so\nwhether it's public is important whether\nthe safety implications can be learned\nis really important whether the\nstakeholders are the same\nmaybe uh if there are several ais that\nare smart enough to cooperate\nwhich they probably will be after lgi\nthen\nwhether there's something preventing\nthat or if they're capable seems like\nit's really important\nthen there's a somewhat odd\nmisconception\nwhether a takeoff requires\nthat you have immolations of humans um\nthat was perhaps something that you\ncould\ninfer from the ai from debate with\nrobin hansen and he back then people who\ndisagreed with\nuh fast takeoff probably agreed with\nhansen and\nthat might be a reason to conflate these\ntwo things i'm actually not entirely\nsure i believe\nthis is what robin hansen states\nnow uh trying to summarize roman\nhenson's\nideas and predictions is\na dangerous business but i'll try in the\nage of m\nhe sees er emulations that are\num that once you have a human then it's\nreally\neasy to get something like an order of\nmagnitude improvement by just\npruning away things but that leaves a\ncall that is\nvery hard to substantially improve and\nso there is something\nthat is roughly 10 times as fast and as\nsmart as a human\nbut not more than that and it doesn't\nget smarter in any really continuous way\nso instead his discussing\nhow would a society based on these ends\nactually look um i think uh the fact\nthat it\nuh goes up and then it kind of stops\nthat sounds to me like it discontinues\nlike\nalso because after the hfm it goes up\nagain\num so this is both slow and\ndiscontinuous\ni think but i'm not quite sure robin\nhansen would agree with\nhow i frame his um uh his use\nthen there's a question about recursive\nself-improvement whether that is con\ncompatible with continuous uh takeoff\nmatthew barnett believes it is\ni guess probably also it is but\nhere we have something very on the\ninside view\na very clear example of something that\ncould cause a discontinuity\nbecause recursive self-improvement could\ndo that\nand the way matthew barnett frames it is\nit's sometimes argued that since growth\nis exponential\na slight advantage right now might\ncompound to become a large advantage\nover a long enough period of time and\nthat's of course the true general\nargument um because usually this doesn't\nhappen i could speculate that it's\nbecause of\nthings like regression to the mean um\nbut\nmy true objections to this is more with\nus over a long enough period of time\nbecause\nthat's the crux of the issue whether we\nwill have a fast takeoff or not\nand in general this talk about\nexponentials\nis almost always wrong in the in the\ncontext of\nfast takeoff what we're talking about\nhere are hyperbolic functions\nnot exponential functions and in in this\ncase\nthis is not a kind of argument that\nwould be used for people\ntalking about fast takeoff\ni guess this might be an a side but i\nthink\ni think actually that people who are\nworried about\nfast takeoff usually say they're worried\nabout fast takeoff\nand people who talk about continues or\ndiscontinues\nare not particularly worried so there\nmight be a\ndichotomy there between people who are\nworried\nwho will talk about uh wall clock\ntime and people who are not worried who\ntalk about whether it's continues or\ndiscontinues\nso it's actually possible that that's\nthe way people\ntalk past each other i think i only just\nrealized that\nand that might also not be true okay uh\nthe last one continuous takeoff is\nrelevant to ai\nto alignment and in this case if we get\nuh as matthew says if we have rabbit\ncapability\nagain then something like treacherous\nturn is a great worry that's something\nthat definitely could happen\nand with here uh that he well he he uses\nthe word\nrabbit so that's um\ni think he means if we had discontinuous\ncapability gain uh because that's\nkind of what what he's talking um\nbut he argues that if we have a\ncontinuous takeoff\nthen the individual systems they won't\nbe able to\nobtain the strategic advantage because\nthey are not sufficiently\npowerful compared to the presses\ni think uh this ignoring the possibility\nthat the\ncollaborate another thing that'll be\npossible\nthrough if we have a continuous scenario\nis we can use the previous ais to match\nthe newer ones\nand matthew barnett is very optimistic\nhe believes this probably means that\ncontinuous takeoff is\nincompatible with futures turns i can\nsee several ways when this would not be\ntrue for instance there is the\ndifference between\npotential capability and realized\ncapability it's certainly possible that\nthere are\nuh that there is one ai who believes it\nhas a\nfive percent chance of taking over the\nworld and so it doesn't try\nand then the next one believes it has um\nlike 50 and then suddenly that's one who\nbelieves that\nthe probability that it can take over\nthe world is high enough that it\nactually tries\neven though the difference between them\nare very low\nthere's also the fact that whether it's\nactually possible to monitor\nuh ai's whether it's possible to monitor\nagents that are smart to yourself than\nyourself\nis potentially really hard certainly\nunsolved\nand there would be many reasons not to\ndo that for instance\nif the projects are made by different\nstakeholders\nthat would be a strong reason not to and\nfinally that's\nthat's before uh the possibility that\nthey could collaborate\nso in the end i'm somewhat less\noptimistic than matthew barnett but i do\nappreciate the attempt at clarification\nthat is all for today thank you and see\nyou", "date_published": "2021-03-25T21:38:44Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "b69c7175577a8aafed3562fc2be5ef76", "title": "242. Digital People Would Be an Even Bigger Deal", "url": "https://www.youtube.com/watch?v=SOSULGb1ff0", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session\n242 in the aictu.com reading group\ntonight we'll be discussing the blog\npost digital people will be would be an\neven bigger deal by halton kanowski\nis perhaps most famous for founding\nco-founding give will and open\nphilosophy philanthropy\nin the reading group we've created\nsome of his\narguments both arguments against miri\nsingularity institute as they recall\nback then and um again about ai risk and\nhe managed to convince uh miri that they\nwere doing uh\ntheir organization in a bad way and miri\nmanaged to convince him that ai risk was\nindeed real\nbut that was back in 2012\nthis blog post is half a year old from\nhis personal blog called cold cheeks\nthere is a companion\nfaq which\nwe won't\ndiscuss here\nthis is\napproved in the context of a sequence of\nposts\nabout why\nthis could be the most important century\nof all time\nand the text that digital people the\ntitle the digital people could be an\neven bigger deal uh the bigger uh refers\nprobably back to the word to the\nduplicator thought experiment that he\nhas previously talked about in this\nsequence although he never really\nclarifies bigger deal than what\na digital mind to get someone into\nissuing for it he suggests we imagine a\ncomputer simulation of a specific person\nin a virtual environment and he uses the\nmatrix the uh the movie from 2000 from\n1999 um as an example for how to think\nabout this i'm not really happy about\nthis this is obviously fictional\nevidence and we should be wary of\ngeneralizing from that because in\nparticular in this case i feel that it\nsmuggles in a lot of extra assumptions\nand\nthat are not really warranted and\nthere are many\nways you could criticize the matrix for\nnot being very realistic at all\nthe obvious comparison here is to uh\nrobin hansen's uh work h of m\nwhich is uh\num like a rather thick book on just the\nsubject and obviously what he's\npresenting here is something that is\nmuch less\ndeep and also contains a broader class\nof things that can happen whereas robin\nhansen is just talking about one\nspecific scenario\none of the way the key ways that differs\nis that robin hansen's emulations are\ncreated through uploading whereas um\nuh\nthings also about integers that are\nsomewhat more\nless like\nand the other thing is that they can be\nduplicated because\nthe duplication of course has a lot of\nstrong implications\none of the things that i feel makes it\nvery different from reprehensive sport\nin particular is that rob enhances age\nof m relies to a very large extent on\ndifferent kinds of continuity between\nhumans and the uh the inns\nand\nif they are not created through mind\nuploading and are more unlike us then\nthis continuity could be uh could be\nlost and what we would see would be\nmuch more unknowable in my opinion\nhas a comparison with normal humans uh\nto give some\nintuition about the the\nthings the characteristics that hogan\ncan actually consider is important\nuh the\nstart with where they're possible today\nand\narguing here that digital people\nprobably will be possible uh someday um\ni'll be able to interact with the human\nworld and\nhe is just stating all right they will\nbe conscious and\ni think probably here most people are\nfunctionalist enough to just accept that\nbut it should be said that many people\nwould not just accept that\neasily duplicated that's of course the\nkey thing that makes the\ndigital people different\nthey can be sped up potentially\nand they can be branched and all these\nkind of things that you can do with\ncomputer programs that you can't really\ndo with um\nwith humans and that allows both uh\nmuch greater productivity and\nsocial brain and social science we'll\nget back to precisely how that enables\nsocial science\na much greater uh\ndegree of control of the environment\nthe ability\nto have locked in that's\ni should say that's something again\nwe'll come back to later\nspace expansion is something we'll\nprobably talk very much about and\nfinally whether they are good or bad and\nfor normal humans that's outside the\nschool of this piece\nbut for digital people that's either\nvery good or bad and i think actually he\nis here using two different definitions\nlike when you say he won't comment on if\nhumans are good and bad that he actually\nmeans something different than he does\nin this call column here\nwhere\nwill the fact that there are digital\npeople be a good thing or a bad thing he\nargues he'll either be very good or very\nbad\nbut if you take that back to this column\nis it good or bad that there are humans\nwell then that should not be\nthat controversial to say that it's\nprobably a really good thing that humans\nare here and\nextinction is bad\none thing that is not present in this\nworld and that i would really like is a\ncomparison with artificial general\nintelligence in\nparticular the old-fashioned view of\nartificial general intelligence which is\nthe one where we just moved first out\nand\nit could be arguable perhaps with uh\nholland kenosha's\nrather open definition of um\nof digital people that there is a\nspectrum between an emulation and just\nan ai\nbut it's not like there is a straight\nline from human to revelation uh to agi\nlike perhaps a curved line when that a\ndigital person would probably have a\nnumber of\nproperties that are not present for\neither humans or agi\nbut still let's try to run with this and\nsee where is the\nattitude person compared to an agi\ni've tried to put this in a scheme this\nis this is not from the article this is\nmy own thoughts on this\nso one of the key important things is\nwhat is the motivation of which and if\nthinking a person is just literally an\nupload of a person we would expect it to\nhave roughly human uh motivations\nif however it's an agi the classic view\nof agis is they have a simple utility\nfunction\nmaximizing whatever and they\nhave some is converting instrumental\ndesires based on this\nmoral value is also an interesting\ndigital people probably would have full\nmoral value depending on your moral\nframework uh a uh\nan agi that is very human unlike could\nperhaps be said to have no moral value\nbut that's a really difficult question\nthere's a sense in which\nan agi\nis\nbuilt\nwith the express it could be the classic\nagi is built with the expected purpose\nof being like an optimal bayesian\nreasoner or an optimal agent in\ndifferent ways whereas a digital person\nwould not be designed towards the school\nand this would\ninfluence the recalcitrance mrs\nbostrom's definition\nof how difficult it is to make\nimprovements to the intelligence how\ndifficult would it be to make\nimprovements for a digital person i\nthink it would have moderate um\ndifficulty i could see a number of\nrather trivial ways that humans could be\nimproved if you just have had the\nability to do that and a an agi that's\njust written down like um\nten thousand lines of scrolls or\nsomething like that seemed like\nsomething that could easily be tinkered\nto\nto improve in some way\nanother difference is in the\ndistribution where\ncertainly robin hansen but also\ni feel uh a lot of the implications uh\nuh in the halton canopy's work seem to\npoint towards a rather decentralized uh\nworld\nfor digital people whereas the classic\nview of ati is that it's very\ncentralized\nand cooperation can humans cooperate\nwell\nkind of medium\nwhereas the assumption is that if human\nif the agi has\nis like perfectly rational and has a\nsimple utility function then almost\nagreement theorem will just say it will\ncooperate perfectly and create a single\nturn if possible\nso the premises that\nthe key premise here is digital people\nare just like us\nexcept for a few\num\nexceptions here that they can be copied\nand run in different speed and embedded\nin virtual environments\nand the assumption the premise here is\nthat there are not other changes\nanother uh\nthing that follows directly from this is\nthey're conscious they have human rights\nand they can do most of the things human\ncan do\ni think whether they have human rights\nand whether they\nare made in this way depends on of\ncourse will there be a human who wants\nto make them and i think that is a very\nuh unclear thing um\ni think a lot of people would\nuh given the the uh if they were asked\nabout this they would say no we don't\nwant digital people and\nthe reason is that the negative\nimplications are really really clear\nright if you can get\nuh\nsome digital people who are clearly way\nsmarter than you and\nanyway more people than you then it\nseems very obvious that all existing\npolitical um\ngroups will have their power deluded\npotentially substantially this can also\nbe seen\nin the existing politics where\nuh in immigration for instance a lot of\npeople would probably uh object to\nsuddenly having a billion more people\ninside their country having voting\nrights and having uh political rights\nthat seems like something a lot of\npeople would object to\nthe\nother flip side of the coin is that\ncompetitive reasons\nmight make it impossible to avoid right\nit's possible that some countries will\nhave imps and\nyou either have the choice of allowing\ndigital people or be out competed\num\nso before we start with the uh the\nactual implications of this\num uh i want to add the uh\nkey location that i have from the book\nhfm and that is is will there in dp and\nh of m will there be um\na world with digital people\nuh that doesn't seem really clear to me\nin the in the sense that it will uh it\nseems very unstable to me\num people then it's probably also\npossible to\nimprove them and given that you can\neither choose to have them improve\nthemselves or\nbuild more hardware or more things that\nonly directly relate to getting more\ndigital people um\nthen\nthere's a huge incentive to do precisely\nthat and not in any incentive to have\nthat spill over to the rest of the\nuh world\nif you can just improve them a bit more\nthen you can get potentially far better\nreturns on that\nit's of course possible to\nlike\nmake up stories about why this in fact\nturned out to be\nstable like it might be that there is\nlike an intelligent ceiling somewhere\nthat it will just hit and then it's\nimpossible to improve further there\nmight be legal restrictions or something\nlike that but i think our default\nassumption should be that there there is\nno intelligent ceiling will just\nsuddenly public one bit too\nso the uh\nidea is shown in this uh animation below\nfor why they would have\na rather large impact is the fact that\nthey can be uh instantly and accurately\ncopied instantly is of course not\nentirely correct for software uh so the\nmodels are potentially rather big and\ncannot be copied instantly but at least\nvery rapidly and\nposin karowski is arguing that we could\nsee a doubling time of the economy in on\nthe order of months\nthe reason why\ni agree that digital people could cause\nthis but the crucial reason why is\nthis as we also see with other machine\nlearning ones that it takes a lot of\ntime to train them and very little\nresources to execute them and it's the\nsame with humans basically that teaching\na human takes 18 years but once you have\nthat human then the trait the\ngetting that human to work is\ncomparatively cheap compared to the 18\nyears\nand of course when when you look at a\npicture like the one\nin below here then\nthe\nstrategic implications become really\nreally clear right and in order to have\na doubling time of the economy\nall of this needs to feed into the\neconomy and it's not at all clear why\npeople would do that instead of trying\nto get\nsome kind of strategic benefit uh from\ndigital people\nso we have increases in speed and\nvariable speed as two examples of why we\ncould get higher productivity and and\nhaving people temporary and\ngreater amounts of experimentation\nbut\nthere's an argument to be made that a\nlack of body could be a sustainable\ncaptain home cannot see things that's\nprobably and you're gonna do it\num and the key thing notification most\nhumans will probably be um\ni mean you have uh fully automated minds\nin australia\nresearch and development manufacturing\nenergy all these things could probably\nmostly be automated\nsocial science is an interesting place\nwhere uh sees a great potential for\ndigital people\nuh\nwe have we haven't really seen the the\nsocial sciences advanced in the same\nplace as natural sciences and the reason\naccording to how kind of use analysis is\nthat it's just too hard to make\nexperiments\nbut if we had digital people then we\ncould make perfect experiments and\nthat would make it potentially uh a lot\nmore feasible to investigate a lot of\nthings about humans um\nthere's of course a potential for\nproduce we'll get back to that\nand paul karaski speculates that\nwe might find a social dynamic where\npeople\nonly want to uh trade with people who\nhave already\novercome all their biases\nand this could potentially be a really\ngood thing a good man said\ni i'm a bit less optimistic about this\nbecause one of the boston's six\nstrategic cognitive tasks is social\nmanipulation and it seems really\nalmost truly easy\nlike one of the ways that this would\nimprove digital people would improve\nsocial science the most would be\nprecisely in things like persuasion and\nthat seems potentially extremely\nproblematic\nnow\nthere's the control over the environment\nthis could be a very bad awkward thing\nand hong kong says that um\nwe need effective\nenforcement of basic human rights\num\ni agree it would be nice but i'm not\nsure that's something that's very like\nwhich we are likely to get because\num if my assumption previously is\ncorrect where um\nwell\nsorry ordinary human politics tried to\nban uh digital people\nand\ncompetition might uh push the other way\nthen for this to reach some kind of\nequilibrium where digital people are\nallowed but give um get\nhuman rights that seems like a very uh\nspecific and\nvery optimistic case\npoland can actually also sketches some\nmore dystopian scenarios\none here that people could copy\nthemselves and be mean to the companies\nthat's a\nshort example because people could do\nsomething else they could copy other\npeople and between two copies of other\npeople\nand again this can feed back into social\nmanipulation in different ways blackmail\nfunction manipulation and other evil\ndystopian things\nalthough also sketches some more token\nscenarios\ni'm a bit possible about this i notice\ni'm confused\ntalking about dystopias make perfect\nsense because then you can figure out a\nway to avoid them why talk about the\ntrophies in this game\nit's a motivation something we can\nstrive towards there could be political\nreasons to think about what to actually\nwant oh it's just nice to explore the\nfuture but\ni'm worried about thinking too much\nabout all the wonderful things that\ncould exist in a world of perfect\nvirtual reality but like it feels more\nlike being given to me to be honest\nspecifically\nthere's another thing that might come uh\nlater i'm not\ngonna go into it very much because i\nthink that's probably something that's\nonly going to be well when we are really\nfar past the civilization and it\nprobably doesn't have much implications\nfor what we should do right now\nlogin is way more important\nbecause it's easy to make in\nsocieties of digital people being way\nmore stable you don't have death aging\ndeteriorating environment or shortages\nof material goods or things like that\nand indeed it would be possible to have\nthe environment itself enforce the\nstability\nuh like you can imagine a number of fine\ngreat ways to do it\nand\nis a um authoritarian and peter decides\nhey i should be the leader forever\nand then the the virtual environment\ncould enforce this for instance by\nsaying that if suddenly the leader is no\nlonger the leader then we revert back to\nthe\nlast checkpoint or something like that\nand that would potentially be impossible\nto get around\nit would still be vulnerable from the\noutside\nbut holland kanowski is strangely\noptimistic here in my view is if the\nvalues that these uh people embody are\ngood then it could still be positive but\ni feel uh doing a lot in of\nvery positive values sounds almost\nalways like a bad thing because\nprecisely specifying what are good\nvalues in a way that you won't regret\nseems really really difficult\nrobin hansen has actually written a bit\nabout this so this is not in the uh text\nfor today but he has written a\na\ni think a blog post called will\ntotalitarian will take help\ntotalitarians\nwhere robbie hansen is arguing that\nthe risk of login might be overblown um\nand\none of the easiest arguments is that the\nuh the the person the\nthe person who's in charge of this uh\nsimulation that the uh digital people\nare running in might only be able to\nread\nthe he could obviously read the thoughts\nof the digital people but he might only\nbe able to read the shallow thoughts and\nthat might not be that much of an\nadvantage\num i\nperhaps agree at least if the if it's\nonly very very shallow but i think there\nis um no real way to get to the\nassumption that it will only be shallow\nmind reading i mean\nthat needs some kind of arguments\nthere is also the ability to hear and\nread everything in full control of\ninformation and um\nthat's something that um\nare\nsomewhat present now in totalitarian\nsocieties um but uh an emulation could\ndo it\nin my opinion much more than propaganda\ndiseases in existing totalitarian states\nanother reason why it would be hard to\nbe um the dictator of and\none of digital people is that the\nenforcers were you in this case uh they\nmay be lazy corrupt or rebellious\nthemselves\nand i i agree it's a thing that could\nhappen but it feels to me that\nfor digital people it seems a lot less\nlikely than with with current i mean\ncurrent totalitarian systems are\nsufficiently stable that you can have\nthings like north korea right now and it\nseems to me rather obvious that the\nenforcers in north korea are probably\nlazy corrupt and rebellious but in a\ndigital world they might be\nsubstantially less rebellious\nthere's an overhead with totalitarianism\nand that might mean it might be\noutcompete\nthat is true we are seeing that to some\nextent from north korea but not enough\nfor the totalitarian society to\ndisappear and there might also be\nadvantages for totalitarian states\nyes he makes an argument that it has to\nbe expensive because we are not seeing\nany uh\nbillionaires doing things like that\nright now\num\nand\nhe is making an argument that practices\narrangements organizations and relations\nare likely to keep changing forever and\nthat's an argument against this log in\nthat it won't be that stable if these\norganizations keep changing forever\nuh and i think yeah it's an argument\nagainst logan to some extent but why\nwould you expect organizations to change\nforever\ni don't think for instance that\norganizations change very much in uh\nnorth korea uh compared to\nor at least\ncompared to uh against the express will\nof the dictator and his audience\nbut i'm not sure i need some kind of\nargument here\none thing that robin hansen worries\nabout is that if digital people are\ncentralized in cities that does make\nsingletons more likely and world\ngovernment and easier defense than\noffense are also things that\nto robin hansen creates an increased\nrisk of\num\nof a totalitarian\nfor digital people\nin particular argues that\nthis easier defense non-offense seemed\nlike something you could have when the\ntechnological field is very very uh\nequal um\nand that might happen at technological\nmaturity\nso in total is this a good or bad thing\nwith digital people well it depends on\nhow it's set up and\nyeah whether\nit's probably behind the singularity\nonce we get to some kind of equilibrium\nthings will have changed so much um\nthere's a great argument that it could\nbe irreversible\nand\num and with these things spreading also\nfaster\nbut holland kanowski is reasonably\noptimistic in his conclusion\nwe could in fact\nsee a world of digital people that leads\nto a much much better world\nand i kind of agree except for the fact\nthat we have actually solved the\nalignment problem right how this pushes\nthe alignment problem just a tiny bit\nfurther away and perhaps not very much\neven that because the emulations would\nthen either change gradually to watch\nsomething that is totally unrecognizable\nas humans or they will just build in a\nsuper intelligence and then we haven't\nactually solved the problem\nthat is all for today thank you and see\nyou in two weeks", "date_published": "2022-02-05T10:24:31Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "52b92d712d9f4943df7d5536d89eea94", "title": "239. Soares, Tallinn, and Yudkowsky discuss AGI cognition", "url": "https://www.youtube.com/watch?v=kpZeUPsq_bY", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 239\nin the ai safety.com reading group\ntonight we'll be discussing part of the\npost suarez talin and yukowski discuss\nagi cognition\nelias rykowski is\nthe senior researcher at the machine\nintelligence research foundation jan\ntalon is\nthe founder of skype a philanthropist\nand very interested in\nexistential risk\nthere is also nate schwarz and uh\nrapenzina present in this um work but\nactually they won't be relevant for the\npart we're talking about\nso this is a discussion on agi cognition\nthat happened\non\ndiscord and several other places uh this\nfall\num\nwe will not go be going through it all\nwe'll be zooming in on the part that\ntalks about the treacherous turn here um\nand only that so that's quite good we\nwill skip um\ni try to like roughly estimate we are uh\nzooming in on four percent of this uh uh\nsequence and actually four percent is\nprobably higher\nbut\ni believe it's an important focus um i\nbelieve that the treacherous turn is\nindeed\nuh very close to the core of the problem\nof ai safety um and i believe by with a\nrather large margin i see a correlation\nof somewhat something like 0.8 between\nwas how\nhow will we in any given world\nsolve the treacherous turn problem and\nuh\ndoom from ai these are very tightly\ncorrelated in my world\nin my view\nfirst a few meter comments unfortunately\nnormally we read formal articles instead\nof this kind of discussion and this is\nreally hard to summarize briefly\nthese people have a very rich context\nbetween them there is no argument why\nthe treacherous term is important or\nlike the basics are not discussed they\nare assumed and that can make it harder\nto follow um\nof course when i summarize i lose a lot\nof like the\nemotional language and quality\nqualifiers um\ni\nalso uh some of this is actually\nhappening as comments on a\ngoogle document which is not public so i\ncan see the comments but i can't\nactually see the google document\nhopefully that will be published at some\npoint\nand i also can't really distinguish\nbetween i i don't want to distinguish\nclearly between what uh young italian\nand what uh elias utkowski say um so um\nmixing it up most of the things are\nbeing said by eliza utkowski\ni\nalso feel a bit iffy about my uh\nmy style here because i generally like\nto call out places where definitions are\nloose and can be tightened up and there\nare a small kind of\ntechnical errors and\nfor this kind of discussion which is not\nmeant to be formal that's really unfair\ni think you say e.t james would hate me\nfor this but i do actually feel that\nsometimes by poking in between\ndefinitions you can at times find\nsomething that i feel is valuable\nso i would like to start by taking one\nstep back and just describing what a\ntreacherous turn is without arguing for\nin the details\nso here is the uh definition from\nbostrom you can read this this is from\nthe book super intelligence path\ndangerous strategies there's another\ndefinition here from the\nless wrong wiki um but the the overall\ngist of it is that if we have an in ai\nthat is obeying some of these convergent\ninstrumental goals um then there is this\ninstrumental goal of appearing aligned\nwhich is uh which holds for both aligned\nand falsely aligned ai's while they are\nweak\nso uh that means in general that but and\nthen of course the behavior diverges\nwhen they become strong and um\nthis means that testing\nais can be um\nreally difficult we can't\nsay for certain that\nif it's\nbehaving nicely in a box it will also\nbehave nicely when we let it out of the\nbox\nillegitimate cow skin in this world\ndivides the treacherous turn into five\nprobable phases\nthe first is the over plotting phase and\nthus a hiding and binding phase\ndoes the escape phase\nthe theme stays in a terminal phase\nif you start with the other planning\nphase that starts at the point where the\nai realizes that if it takes over the\nworld it can get\nmore choking utility for instance in the\nclassic example with the paperclip\nmaximizer but it will if it can\ntake over the world you can make more\npeople clips\nand this ends when uh\nis no longer over where the ai realizes\nhey i should conceal this kind of\nplotting from the operators\nso that's a gradual step where first it\nrealizes probably that takeover is nice\nand then our humans will probably resist\nthis in particular the operators which\nhappen to be humans are also some that\nwould resist this\nbut it's over meaning that\nreasonably standard\ntransparent\nand interpretability methods could in\ntheory uncover this\nthe current state of the art is not\nsufficient for this but it's something\nthat could in theory be done\nthis uh this phase will probably not be\nof a duration zero uh most likely uh\neliezer you'd have to put some greater\nthan 50 chance at least that\nbuilding an ai that is\nthat when we build the first ai that\nrealizes that it could take over the\nworld it will not yet be capable of\nrealizing that it should conceal his own\nmind from the operators\nonce it does that hello we go into the\nhiding and binding phase which is when\nstarts to conceal the plotting from the\noperator\nand this ends um\nwhen um\nif this hiding ends when it figures out\nthat the expected value of acting uh\nthere is an action where when takes this\naction has a greater expected value than\njust continuing to heighten the weight\num\nso the reason why it weights in this\nphase is to undergo some kind of\nrecursive self-improvement from\nsome way of improving its cognitive\ncapabilities\nat this point a big focus is also on\nmodeling the operators from a\npsychological point of view this is both\nimportant actually at the beginning\nwhere it needs to decide it should hide\nits thoughts and also\ntowards the end\nwhere it needs to figure out when does\nneed to take um\nprotective measures\nuh and another reason why waiting is\ngood\nin addition to becoming more intelligent\nis also because the probability of being\ncough is probably less later when it has\nboth a better plan and it is smaller\nthere are some of the definitions the\nprevious space\nended where the ai realize it should\nconceal and this is where it's actually\nconcealing the this phase is starting so\nthere might be a gap in between these i\nthink the earlier koski talks about what\nhappens in this gap so he knows that but\nthe definitions could be tightened up a\nbit here\nafter that we come to the escape phase\nso this is where there is a\nsorry\nand\na\nsome kind of protective measure that\ngives higher expected utility by doing\nit than by just waiting and\nin this\ntext\ndoesn't frame it as a protective measure\nhe uh\nthinks it as escaping and that seems\nlike the obvious way to\ndefend against the human wanting it to\nbe turned off but there might be other\nways and in particular at this point it\nis unclear if the ai is smarter than us\nor less smart than us or more likely\nvery different from us\nso it's certainly possible that it could\nfind a different way i can immediately\nthink of a couple\nthat i won't immediately share here but\nthe escape phase ends when the ai\nestimates that it's quite confident that\nit can take over the world\nso um in this case the expectation of\njust waiting is probably still very high\num so it's just that escape seems uh\nboth very productive and with a very low\nprobability of getting caught at this\npoint\num how would what the action could be to\ncopy itself to the internet and\nwe some of the models we see right now\nare extremely large so it's possible\nthat it would prefer if you needed to\ncopy itself to the internet to have some\nkind of distillation process that is\ncoming itself through\nit could of course also be that just uh\nredesign something else but then we have\nto ensure that it's aligned and\nthat's\nthe an interesting question to what\nextent that's easy uh\nif you go back to good old-fashioned ai\nthen there was like a utility function\nand putting that into another ai seemed\nreasonably trivial with uh things like\ngt3 that's no longer quite the case i\nthink still think it's an uh\na substantially easier question but\nmaybe not\nand of course this could allow the air\nto\nin this escape phase to redesign itself\nand perhaps other actions\nthen we get to the doomed phase\nand that's where the ai is confident it\ncan win but it's still going to wait and\nhide and\ndo different things get gather more\ninformation to try to get the\nprobability of victory as high as if\nplus i possibly can\nuntil at some point it can't get higher\nand then it won't just take over\num\nsorry\nthe key thing that they is doing here\nis trying to figure out is there another\nsuper intelligence that's just about to\ntake over also now you know to figure\nout if it's uh like the second and\nthat's um very important for it um\nprobably uh we're talking about things\nlike during a day or something like that\nand the probability of another super\nintelligence forming in that particular\nday is probably rather small but\ncertainly not zero\num and there's also the uh\nthis value of galaxies leaving our\nhubble's uh\nsphere hubble space um so uh\nthat's also something from this value um\nuh\nsuggested on a sandbox could calculate\nhow many 99.99 until this galaxy becomes\ndominant i think this is kind of a new\npick but there are actually three things\nthat prove provides this value for\nweekend there's the galaxy going out of\nreach there's the\nrisk of getting caught and there's a\nrisk that there'll be another agi\nforming at just that moment\nfinally we get to the general phase\nwhich is where ei can't really drive up\nthe probability of success any further\nand it ends with our extinction and\nwhat how will that german phase actually\nlook um well at this point the ais\nalmost certainly is radically super\ninsultant so it's hard really to say if\nyou just use human level in the\nimagination and you're making things\nlike nanotechnology which seems quite\nfeasible\nyou could do something like distributed\ndiamond\nbacteria filled with botulism and then\nkill everybody over the course of one\nsecond or something like that to ensure\nthat the victory is too fast and\ncomplete that there is no way to\nimplement any hidden counter plans\nbut probably the ai will do something\ndifferent by the principle of efficiency\neither it can can do as well as we see\nhere or there is some part of the\nunderlying model as fault for instance\nnanotechnology could be more difficult\nthan elizabeth kowski and hearing\ndressler 4c\nbecause this is a slow scenario but it\ndoesn't look slow from our point of view\num i would somewhat disagree here like\nobviously when you talk about scenario\nif you only think talk about the strike\nthen if it takes one second then that's\nobviously very very rapid um but\nit's not clear that when um\nwith the entire takeoff that's from the\nai started the over plotting phase until\nthe um the extinction and um\nthat could still be quite a bit of time\nin uh\ncalendar time um\ni would also point\nthat boston also uses this definition of\nwhether it's a fast or slow takeoff with\na lot of focus on how much time do we\nactually have to react which i think\nis probably the crucial consideration\nhere\na variation of this that could happen\nthere are multiple variations but\nassembling one is one where the ai\nescapes early\npossibly if it just\nhas a wrong model of\nhumans and\nescapes early by mistake or tries to do\nthat um there is a part\nwhere\nsure i understand\nthis\num\nwhere we catch the internet on the\ninternet\nbut um\nkoski won't spot this kind of\ngeneralizing from fictional evidence um\nif this is a thing that the aai strongly\nprefers not to have happen so\nit seems unlikely that this the the\naction is going to take\nanother of this more science fiction ish\nplot is that we can see another ai not\none we've built that is somehow uh\ndetected on the internet um\ni could see something even a similar way\nthis could happen a number of people\nhave suggested to build an agi and just\nturn it loose and hope it does something\nnice so that's something that we could\nall i could also see happening\nthe ai\nif it's to escape before it\nestimates that it has a good chance of\nescaping successfully might be because\nit's uh it stands in uh\nit faces imminent uh erasure or that\nit's um feels it will be replaced with\nsomething else that it can't bargain\nwith um\nbut\nthis is kind of like an overly specific\nmiracle and that's something we should\ncount on happening\num i agree we probably shouldn't count\non happening but\nit might be possible to trickle this if\nyou every time you build a new version\nof the agi you change something of its\nutility function somehow so it's obvious\nthat it's opposed to the previous\ngenerations um and\nthis kind of bargaining with successor\nand a causal bargaining in particular is\nsomething that humans\nmost humans don't understand that a few\nhumans barely understand and there is\ncertainly a possibility that uh we build\nan agr that's incapable of how of of\nbargaining uh\neffectively\nbut if we try to deliberately trigger\nthis kind of thing then it's it ceases\nbeing an overly specific miracle\nso what would an early escape agi do\nwell it probably will just hide but\nthere are a number of other actions it\ncould take\nthis ties into what elias uh casually\ncalls the law of earlier happenings\nthis is i think you would call this\nthe\ntendency to uh\ntry to write laws um\nlawful things in the universe of as\nexplicit laws with capital letters um\nbut the law of earlier happenings says\nthat um\nwhen we discuss this like also when when\ni try to explain the treacherous turn in\nuh\nuh to people like in the reading group\nor whatever um then\ni focus on the most plausible things\nthat could happen and uh with\ninstrumental convergence then it has to\ndo this post steps predictable endpoints\nthis kind of thing and um this is good\nfor arguing but it's actually\na biased way of looking at the world\nbecause if you look at the world\nthen it's not true that only false\nthings happen and only very predictable\nand plausible things happen uh sometimes\nrather random things happen and if you\nwant to have a model of how the world is\nactually going to look then it's going\nto look kind of like what has looked\nover the past\nand that's in general been really really\nhard to predict\nso in\nwith this treacherous turn the uh the\nthing we would probably see is things\nstart happening earlier and in more\nchaotic ways\nso probably not the specific miracle of\nyoung and dumb ai escaping but something\ndifferent chaotic\nwould probably hold the blind so that's\na\nvery weak prediction um\nso what will happen is something weird\nhappens well um\nkills a lot of people\nthere is a hope that this would serve as\nsome kind of fire alarm\nand\nless than that so an ei that just\ngoes to a i don't know upwork or\nsomething like that probably will not\ncause anyone to worry or certainly will\nnot be a fire alarm for adi\num if it kills a lot of people will that\nbe a fire alarm\nbelieves it won't because um the\nbureaucracies that could take action\nupon this are not really geared to\nhandle this kind of thing i would agree\nbut for a different reason because it's\nlike here where the effective action\nwould actually be once there is a rogue\nagi on the internet\nand\ni can't immediately see any existing\nbureaucracies that look like they're\ncapable of taking any kind of action in\nthe space roughly close to what would be\num\nuh effective like it's possible that the\nunited nations could shut down the\ninternet but that seems extremely\nunlikely to me\nhas the law of undignified failure that\nno matter how sad you imagine the\nthe failure to be then in reality it\nwill be even less dignified and the word\ndignified is one that it is doesn't he\ndoesn't precisely define what he means\nby that\num but i guess\nit's\na\nfailure that is closer to actually being\nsuccess\num and\none way we could\nfail in a sad way is that there's some\noperators who uh see the ais plotting\nand then they rationalize in some way\nsay why why it's good enough and maybe\nthey just train it away that's really\neasy that's uh\nbasically what ai uh researchers do all\nthe time um and then\nthe assumption of a really\nsad failure could be something\nbut like ignore that um at some point\nit disappears because the ai is hiding\nit and they already have to say now it's\ngone probably\nuh the area has become\nif they are unaware of the optimality\npieces then they'll just say oh moral\nrealism is true and it's become smart so\nthen it must also become good\nand\nthe operators\nwould conceivably also just dump as much\ncomputer into the uh as possible and\nallow it to experiment if you want and\nif it just asks for lightning then sure\nyou can synthesize proteins\nin that kind of case then we won't\nactually have a distinct escape phase\nbecause there's no need for the hospital\num\nand i could obtain a designer's\nstrategic advantage in many other ways\nlike social manipulation or whatever\nor the\ncognitive abilities of the could develop\nin a in a very different order um which\nwould also change the scenario\npotentially quite a lot\nso in general um\nthe path to this endpoint isn't really\nstrongly determined by elias yudkovsky's\nworld model the endpoint is\nand uh\nyes is saying it's\nit's hard to predict especially about\nthe future i believe that's a danish\nproverb i tried to look it up but i\ncouldn't immediately it's a bit more\ncomplicated than that apparently\nso let's zoom in a bit on the end of the\noverplotting phase\num so here\num when the ai is plotting overly it's\nnot doing anything to hide this at all\nin in the beginning and so if we have\nsome good transparency we could have the\nai say something like i should make a\nfunny voice here i can't make funny\nvoices oh dear my model of the operators\nis looking at my over actions and coming\nto conclusions that are disadvantages\nfor me\nso that's like the first step and then\nif we had really better transparency\ntools then we could see\nthat the ai is now trying to figure out\nhow students think in order to\nmanipulate us\nthis requires the ai to think about how\nit is thinking so some reflectivity um\nbut the second oops the second part\nwould be something like\noh no my operators are modeling my\ntransparent thoughts and reading to\nthose thoughts in a way i'd rather\nmanipulate\nso the um\nhow will\nthe ai react to figuring out that it's\nbeing monitoring or by deducing the\nexistence of a transparency system well\none thing you could do is to just stop\nplotting uh that would would be an uh\none way but that's kind of negative\nbecause uh if we realize if we can read\nthe thought that says oh i should stop\nplaying because the uh that's a better\nway to plot\nthen it's possible that the operators\nwould notice and\nbecome worried based on that\nif the operators were less dignified\nthen they would just assume that the\nlast credibility was the one that worked\nand it could be even\nsignified in different ways\nthe raiders don't care or the operators\ndon't have a transparency system then\nthey just don't think about this but in\nthe case where there is in fact some\nstrong transformation and we can see\nthis\nthen\nmost likely since we have taken the time\nto build up a system that can read this\ninformation then there's a chance that\nwe will\nthat will actually listen to this um\ni think that's\ntechnically not entirely certain in that\ninterpretability can be used for many\nother things than uh just ai safety in\nparticular in the case where we built an\nagi by just you know stacking more\nlayers and we don't have any thorough\nunderstanding of intelligence or agi in\nthat case we'll probably be forced to\nbuild better interoperability tools to\nbe able to have any hope of using this\nand in that case we might see this\nthought and still\nbasically just ignore it\nbut um but i think it's worthwhile to\nkeep in mind like when you do this kind\nof reasoning in steps it's very easy to\nget into the conjunction fallacy where\nit's hard to update enough in the\nconditional probabilities on what has\npreviously happened\nthe over plotting phase could end early\nbecause the ai has in some way been\nfine-tuned for figuring out that it's\njust uh not plot overlapping\nthat could be in the case where uh it's\nbeen trained on agents modeling\ndifferent uh\nuh other agents um a very sarcastic\ncomment here is but like with something\nlike gt3 where it's trained on text that\nincludes text where humans are talking\nabout how to uh model other humans that\nseems like something that would change\nthe air in particular how it should um\nbe treacherous um and i think uh a an\neven worse example would be this\nconversation the text that eliezer is\nwriting that will also be something that\nis\nactually literally um\ndescribing the hypothesis that we don't\nwant the ai to find\num\nbut there are other ways we could give\nthis uh kind of modeling behavior like\nif you're optimizing for not just\nwinning go in self play but beating\nhumans in gold then that seems to imply\nthat it's very beneficial to have a\nstrong models of\nhumans\nwe could also have this happening\nwithout any references to humans just in\nsocial evolutionary games um that could\nalso be a place where the ai could learn\nthis kind of behavior\nso\nwhat will people do well the obvious\nthing if you see the ai is plotting is\nto just do gradient descent against that\nuh that would be um part of the problem\nhere as i see it is that for an ai\nresearcher that's basically tuesday\nthat's the normal thing you always do\nyou have an ai and it's doing something\nand you don't like what it's doing then\nyou do gradient descent against that so\nit stops doing that and that's what\nthey're doing every time so this is over\nplotting but they just use the same\nthings and then they do that and then\nstops and then they just continue to the\nnext thing and that's probably not a not\nhow you do credibility that's not how\nyou do a safety that is probably\nunlikely to generalize at least at some\npoint we are getting um\nwe we will be needing some some kind of\npeople work with their safety earlier\nspecifically\nthis will seem to\nactually work when the air becomes small\nand finds like an undrug strategy around\nthem\nso one of the more dignified ways to do\nthis would be to have like a\ntraining set where you um\ntake the some of the plots and then you\ndo gradient descent against that and\nthen you have a validation set also to\ncheck that those are also disappearing\nbut you could also just train on the\nvalidation set like literally every plot\nthat you can detect you try to change\nthat away and then you don't have any\nclue about whether that's actually fixed\nit um or even if it turns out that when\nyou trade against it it doesn't\ndisappear you can it can get different\nkind of bad results and you can choose\nto just just continue or you can just uh\nyeah there are many ways to do this\nworse\nso if you are trying to build an agi\nthat doesn't model other humans is that\npossible well um it might be possible uh\nto have an agent that just doesn't do\nthis kind of thing that's just focused\non nano technology or whatever you want\nyour road light to be\num\nit is it kind of doesn't sound too\noptimistic about this in like he says\nyou could do that if there is some great\ngain of uh starting the height face a\nbit later um but i think there is um\nhere i would use elias kowski's uh\nconcept of something that is dignified\nagainst him as um this is something that\nis on the path to eventual success um\nbut i'm not entirely sure that i'm using\nhis word correctly\nthen you should define me better\num this of course is not enough but he\nsays this could be a part of a complete\nstrategy and i think the best place i've\nseen this strategy described is in\nmiri's 2017 fundraiser um this document\ndescribes this kind of strategy where\nyou have a\nsuper human engineer that's super\nsuperhuman wondering how other agents\nfollows its actions so it won't\ngo to that it won't start to hide its\nactions this is sometimes called the\nsuggestion\nand of course if you do that then you'll\nobviously get an overly plotting agent\nand you still have to deal with that so\nyou're not at all done once you have\nthat\nperhaps we can um\nuse the threat of subsequent\nsuperintelligences to somehow modify\nwhat are the instances the agi will have\nright because\nthat's one of the things we can do that\nwill actually stop potentially stop this\nuh\nthe hiding and binding ai would be\nbuilding a competing super intelligence\nand not just is this something we could\ndo we it seems like\nlike\nthe strategy that i just talked about is\npublicly available is something we would\nlike to try to do in fact\num so can we lean on the fear of the\nfirst surveillance of the second\nto shape the behavior of the first\nsuperintelligence\nwell um one obvious way we could do is\nto\nmake sure they can cooperate by giving\nit just an opposed value function but\nthere's an obvious uh yes risk here by\nmaking at this utility max mice instead\nof utility maximizing so it needs to be\ndone in a smart way but i believe it\nprobably can be done um that's not a\ndoubt of safety but it seems like at\nleast something it could be done\nbut a lot of strategies don't work and\nyou'll see it constantly points out\nthree problems the first is that you\ncan't bluff\nsomething someone who is much smarter\nthan yourself in general that kind of\nthing will never work\nthe second is that the strategic\nsituation is extremely unstable if there\nexists strong concealed first by\ncapabilities\nand the third is that even if you try to\nuh one of the obvious way you could uh\ndo this is to negotiate with\nsuperintelligence but how would you\nverify these promises\num elizabeth castro claims this is\neffectively impossible i tried my hand\nat coming up with\nlike a scheme where\nyou have some\nseveral super intelligence against each\nother and you lean on the fact that um\nas they leave part of the universe as\nclose to the state of life they become\nincapable of influencing each other and\num that kind of symmetry i think you\nmight be able to do things\nuh\nto this description of the church chart\nlike the over uh plotting phase here is\non board with that like this matches to\nwith the type signature of classic uh\nmachine learning and so type signature\nthat's a concept from uh well taking the\ncomputer science but mostly people think\nof what in terms of programming well the\ntype signature is like a function with\nsome input arguments and things like\nthat and in this case\nsomething like a partial observable\nmarkov decision process then these are\nlike the input arguments and the\nproperties the actions and\nin this kind of setting it seems that if\ntaking over the world is a possibility\nand it's good then\neventually this is a a solution that the\nai will figure out\nand um there's even an easy way to find\nout how we can figure this out because\nif there are humans in the world\nthen uh you can search over the brain\nstates of humans and both then you can\nmodel them and then you can figure out\nthat plotting is a thing and ignore the\nworld is a thing because humans know\nthat and then you'll find these\nsolutions so um in general yan chan is\nconfident that a\nsufficiently intelligent pay-per-click\nmaximizer will be able to find the\nstrategy of taking over the world\nbut the height phase\nthat is not something that is easy to\nexpress in terms of standard uh terms of\nmachine learning\nbecause\nwhen you\nhide something we kind of assume that\nyou have a\nyou have in the\nover plotting features like the root\nsearch process that goes through and\nfigure out what would be good actions\nbut you also in this case need an\nobfuscated search process um and um\nuh the italian calls one of them a root\nuh search process and the other a user\nland i'm not entirely sure i like this\ndescription this metaphor in that that's\nmostly about privileges in modern\noperating systems and what we're more\nthinking about\ndon't have access to our neurons and\nit's probably this kind of thing that\nhe's uh referring to it like\npossibly\nand how do you discover this this is not\nsomething that's always like if you do\ngreat interest and you ever get in\nobfuscated search uh\nprocess and it's also not always how you\nwould turn into that\nand i noticed that man it seems that\nindustry is not precisely answering this\nquestion um i think it's quite fair he\ndoesn't want to explain how to do this\nbut he is somewhat touching the question\ndefining five levels of cognition for\nagis so the first is\na non-imaginative non-generalizing soda\nconsequentialist basically the ais that\nwe have right now\nand they will plot\nif and only if they have previously\nencountered some uh some plots that they\nare that have given a low loss\nand and then they will plug in otherwise\nnot\nthere is\nthere's some sound in the background if\nyou have questions please\nyeah\nokay so yeah okay um\ni will um\ni will continue uh and also uh describe\na bit about how uh\nuh\nyen talent sees this as fitting into the\nuh the current machine learning pre\nparadigms\nthe the next stage from this is where\nthe ai becomes both um imaginative and a\ntrue consequentialist like there's a\nlong description uh and debate about\nconsequentialism with richard and go\nearly in the sequence that i don't\nreally want to go into here but this if\nfor our cases the important thing here\nis that at this point the ai will indeed\nplot even if it has never encountered a\num like i have seen plots give be given\nlow loss it's imaginative enough to\ndiscover the strategy on its own\nthe next level of cognition is one where\num the ai has long time horizons\nthe fourth is where it has figured out\nthat there are humans and they will\nreact to to this over plotting and the\nfifth is where the ai is also reflective\nand it's hiding its thoughts because it\nit understands that this is a\ntransparency system\num so of these uh in yesteryear\ncounseling says that going from step one\nto step two is really hard in the same\nway going from step four to step five is\nhard you can't just do gradient descent\nthat that is not going to be enough and\ni think in general um\nit is a casca believes that you can't\nget to egi by\njust doing the same thing as we're doing\nnow just stay by stacking more layers\nunfortunately that might not be required\nbecause you can get this kind of hiding\nin a way that doesn't require the ai to\nbe truly reflective that's by just doing\ngreat intercept against\num against these bad thoughts the way\nthat they are detected if you have a bad\nthought detector then you\ndo breeding descent against\nwhat are bad thoughts\num and this is something that will be\npossible to do and will only generalize\nshallow\nand the ai will be able to find the\nnearest on broader strategy and when it\nbecomes just slightly more imaginative\nuh and then of course uh it will be some\nbad thoughts that can't be detected and\nthat's basically uh\nuh basically this thing\nuh\nso uh\njan thailand is trying to translate this\ninto a standard machine learning\nlanguage where we start out by having a\num\nnon-imaginative through consequences\nlist but that's basically just a\nuh\nmodel free reinforcement training\nthat's something we are quite familiar\nwith an imaginative consequentialist\nwhat's that um\nuh yantalen says this is basically a\nmonte carlo research with some kind of\nworld model um that's something that we\nare reasonably familiar with also um\niliya zerkasky is giving an example\nas something that is imaginative but not\ngeneralizing\num\nand uh\nuh yen talent is saying that most people\nwho are working machine learning will\nprobably not be\nuh really uh enjoy to call it um\nimaginative but uh just say this is\ndoing monte carlo research over the\nworld model is something that people\nwill have a much easier time to uh to\nengage with\nadding long time horizons that's\npersistence and giving us some kind of\nstate that seems easy enough\nadding humans to the model uh so it's uh\nthe fourth level that also seems quite\nsimple but the imaginative and\nrestrictive consequentialist and how\nwould you actually uh implement that how\nwould you\nin something like great world how would\nyou go about uh getting that behavior\nagain don't use the answer slightly and\nsay well you know if you're talking to a\nstandard machine learning engineer then\nyou don't need to um to to talk about\nthis because you can just say uh like\nwhat if the current state of uh\ntransparency and interpretability\ncontinues to hold because\nright now we can't see any of these\nthoughts so in practice there's no\ndifference between an over plotting\nagent and a hiding and binding agent if\nwe can't read this laws in any way\nso\ni think a more direct answer to uh the\nentire question might be um michael\ntruster's uh post a gym grip world\nenvironment for the treacherous turn\nfinally er elias yorkowski has some\nrather harsh words about the epistemic\nstate of the aic cube engine because\nwhat he has described here seems\nto his words kind of obvious really this\nis obviously a thing that's going to\nhappen and it doesn't depend very much\non ai this is something that you could\nhave um basically\ndetermined back in 2010 without any\nknowledge of the deep learning\nrevolution and things like that\num\nbut still\nwe think that the indiana safety\ncommunity doesn't really engage and\nproduce these kind of insights at all at\nleast the same community outside of\nmyself as it says um\ni think there are several ways to\ninterpret this um like you\none way is uh to point to the east\ncommunity that is like outside of the\nalignment community which is um\nsometimes called like a mainstream ai\nsafety research um and um they seem to\nnot consider this kind of thing i think\nthat's a reasonable criticism but that's\nactually not what elias is saying right\nhe's making a rather strong thing when\nhe's talking about everybody except\nhimself right i can uh\nsome names of people who seem to be\nclear counter examples are the questions\njohn armstrong\nthese kind of people seem like they are\nobviously able to engage and produce\nthese kind of\ninsights\nso\nwhat insights have actually been\nproduced on the treacherous turn well i\nwent to the alignment forum and looked\nat everything that has been checked the\ntreacherous turn\nand this is it basically right this is\nin particular the one on top this is the\narticle we're currently discussing\nand there is a better than super\nintelligence down here and uh\nthis one with also and this one we've\nalso read i believe um so\nuh\nnot much work has been done and i i\nagree with india's casket that this is\nactually really meager\nand for me in particular because i\nbelieve that the treasurer's turn is so\nimportant uh then i think this is\nactually quite disappointing so i\nstrongly agree with that\nthis is not everything that's been\nwritten about the churchill's turn one\nalways thing that's missing is uh so you\ncast his own writings at arbitum which i\nfeel are the highest quality writings on\nthe treacherous term\nit totally\nif i can go a bit outside the text here\nthen i envision this somewhat as a\nchallenge\nand that um\nit is because you're saying that\nresearchers are less able to think\nadversarially than some science fiction\nauthors and also some fantasy authors\nhave more uh understanding of these\nbehaviors in in settings where the gym\ninvolves mind treatise then yeah safety\nresearchers\nand so when he's saying he has\npreviously not been um describing this\nand has been holding it back as some\nkind of validation set against ai\nresearchers but um\nit seems no one is actually generating\nthis kind of insights\nsome people are able to recite them but\npeople are not generated and so\nyou might as well just\nsell everything he has because um\nevaluating all the aicf2 researchers is\nnot really productive because no one\naccording to him is able to uh\nto generate this kind of\nuh of insights\nso uh one way you could look at this is\nlike elias asks 2010 challenge to\ngenerate this the insights in his\nvalidation set um and obviously i failed\nthat um in my defense he didn't say that\nthis that was this challenge um so i\ndidn't try but if i should speculate a\nbit like a lot of things have happened\nsince 2010\nso i would expect that\nhe has some new insights also that he's\nnot publishing\nand that he's hoping that aict\nresearchers would\nwould be able to would generate this\nkind of insights\nand me personally i think\nif such an uh\nchallenge exists i would like to\nparticipate in that i would like to\nsubmit my uh insights\nin in the next session um try to figure\nout if i can come up with some of the\nimplications of new things that have\nhappened and also ways to go deeper into\nthis model\nand i will try to talk about that next\ntime\ni probably won't publish them however so\nyou will have to ask me to for a copy to\nget them\nthat is all for today thank you and see\nyou next week", "date_published": "2021-12-16T22:12:17Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "56075189e05d5b3d3ef2fa65507e8edf", "title": "AI Safety Reading Group (Session 37)", "url": "https://www.youtube.com/watch?v=4TdBDstKSWg", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "alright so welcome to the 37th instance\nof these sessions where today we're\ngoing to talk about a paper by Stuart\nArmstrong and can liken before we go\ninto the paper I would just say that\nthis is this presentation has been\npreceded by a presentation of everyone\nwho is at the reading group and there\nwill be some discussion afterwards which\nwill not be recorded because discussion\nor usually happens more freely when it's\nnot being recorded also i should say\nthis is a rather technical article\ncompared to what we've previously had so\ni will try to keep it at a reasonably\nlow level and we'll see how far we get\nreally so the people is called towards\ninteractive inverse reinforcement\nlearning by Stuart Armstrong and again\nlike in these two up here after\nArmstrong and yell like which you might\nnot be able to see well described here\nwhich are both researchers working at\nthe future of humanity Institute at\nOxford and they also both associated in\ndifferent ways with miry the machine\nintelligence research institute\nyes and I know Stuart Armstrong has also\noffered several popular books also be on\nnarrow AI safety\nand the paper is a reason reasonably\npreliminary version meaning that it is\nhope that they will soon release a much\nmore in-depth version of this paper but\nso far is only sighs pages but that's\nprobably more than enough for us today\nso I'll start by talking about what is\nreinforcement learning reinforcement\nlearning is one of the key parts of\nmachine learning one of the subfields\nand the goal is to make some good\ndecision in artificial intelligence and\nthe way it normally formalized is what\nis called a markov decision process a\nmarkov decision process is useful when\nan agent is making some simply sessions\nwhere the outcome is partly random and\npartly under his under the control of\nthe agent so the the agent makes them\ntake some actions in a once discrete\ntime step at a time and based on the\nsteps he makes he gets some rewards and\nthe closer the agent is to an optional\npolicy the more rewards gets so the hope\nis that the agent will in the end\nconverge to also n an optimal policy you\ncan see in the picture view that you\nhave a a model of the environment thus\nMarkov decision process and you enter a\nreward function you use some algorithms\ndown here and what you get is an\noptional policy here called tie and this\nis of course a really simple version\nvery very often you add one complication\nthat the system is only partially\nobservable meaning a partially\nobservable MVP is where the Aiken cannot\nsee exactly what is the state of the of\nthe environment but can only make some\nobservations that are correlated in some\nway with what is the action underneath\nnow in verse reinforcement learning is\nin some way we re inverse problem where\nwe start with a number of\nput ways of doing things trajectories\nhistories of good actions that that has\nbeen made for instance if you have a\nself-driving car then you have a lot of\ndata about how do people drive cars so\nyou know really a lot about how people\ndrive cars and then you want to use some\ninversion reinforcement learning\ntechniques to uncover what is the reward\nfunction and maybe you can get an\nexplanation but hopefully if you go out\nfrom a lot of data how do you actually\ndrive a car so this is a it's also\ncalled apprenticeship learning where you\ncan think of the algorithm as The\nApprentice of an expert and this is of\ncourse important for unsolved problems\nlike how will drive a car when no one\ncan explain in mathematics how do you\ndrive a car it's also a for a I safety\nand particularly important for things\nlike ethics where people also cannot in\na formal way describe what are good\nethics for AI safety it's also important\nbut for unprintable agents and agents\nwho are have a tendency to wire hit\nbecause both of these problems are\navoided with the inverse reinforcement\nlearning ok let's assume that an agent\nis hot the 22 correct if it does\nsomething wrong then you might want the\nAI to not more to be shut down because\nup with because it believes the things\nshut down is the wrong action but in\nthis case there's a human thing the\nright policy is for you to be shut down\nand because it assumes that the human\nknows more about what is its supposed to\ndo than it does itself then it will feel\nokay the human is right that I should be\nshut down that must be the right choice\nand then we will allow itself to be shut\ndown in a way that if the agent believes\nitself should know most then it will not\nallow this\nand this has been yep again why are\nheading let's assume that you have an\nagent where it believes that wire\nheading that is it is find an easy way\nand a shortcut around solving the\nproblem where it just increases its own\nvalues it finds an almost infinitely\nhigh reward for instance by just\nincrementing its reward counter or\nsomething simple like that that's not\nreally what we want then that's a\nproblem in a I safety and this is\navoided with why I hidden because we are\nreasonably sure that the humans will\njust say no this is not correct so they\nwill get some the the system will\ncorrect itself through human interaction\nand corporation so we'll we'll get a\npotentially a much safer system using\ninverse reinforcement learning I\n[Applause]\ncouldn't get you could repeat\noh why hitting is Roy gets that kind of\na science-fiction concept where you\nimagine that you can transplant a wired\ndirectly into the pleasure centers of\nyour brain so you do brain surgery for\nyourself and if you can do that then you\ncan just put in some electrical current\ndirectly into your brain and it will\nfeel really really good and I don't want\nyou to continue to stimulate that the\npleasure center of your brain and then\nyou have it's kind of like using here\nowing I guess if you use a powerful oph\nthen I guess you can get the same just\nin a slightly more circumspect way\nhmm yes\nhey that was interesting I would expect\nthat this is something that does not\nhold and will in reality I suspect there\nmight not be an optimal way to drive a\ncar or something like that but\nok\nthat sounds very recently so the one of\nthe ways in verse reinforcement learning\nhas been made is through cooperative\ninverse reinforcement learning where the\nagent played the cooperative game with a\nhuman a human and the AI work together\nand the sorry the human has full\nknowledge of what is there trying to\nmaximize what's the goal and the focus\nis then on making the AI understand what\nis supposed to do what the reinforcement\nwhat the world war function is this is\nof course a good thing but there's a\ncouple of problems with this for AI\nsafety one is that in examples like\nethics or extremely complicated cases\nlike driving a car humans do not really\nknow what the Ross function is and also\nit's very difficult to give mathematical\nguarantees that this will be a good\nsolution because humans can both perform\nvery well and very bad depending on the\ncircumstances they might do some things\nthat are definitely not nice for\ninstance humans if a human is trying to\nteach the AI to do something then the\nhuman might say oh I would like to be to\nconquer the world or do something evil\nbecause some humans want to do evil\nthings and that's of course not\nsomething we really want a very very\nstrong AI to do so what's your answer on\nand Jenn Lyon is the papers called to\nwash interactive inverse reinforcement\nlearning so then they don't claim to\nhave fully explore the topic but what\nthey're proposing is an interactive\ninverse reinforcement learning where the\nhuman feedback is much more implicit\nit's just in the reward that you get\nthat the AI gets and there are two\nthings that happen at the same time in\nthis interactive reinforcement learning\none is that the AI is trying to\nfigure out what is the reward function\nand it's sorry it's also trying to\nactually maximize according to its its\nbest estimate of what is the reward\nfunction and the problem with this is\nthat the AI in general which prefer not\nto learn about changes in the world war\nfunction because if it is doing the best\nit can according to some reward function\nthen if the remote function changes it\nwill probably not have done the very\nthat it could so that means it would\nexpect less reward from any changes in\nthe reward function so it doesn't want\nto learn about changes and so this means\nthat the AI will have an incentive to\nbias towards the things that will give\nit a lot of reward to learn things that\nwill give it a lot of rewards in a\naddition to the incentive to learn which\nis what we really want we wanted to\nlearn about the reward function and we\nwant to maximize some walls to that and\nnot to go into some strange corner where\nit can get a huge reward but what bud\nwhich is totally not what we want so\nthis is formalized as a partially\nobservable markov decision process\nwithout reward function and the best way\ni can explain this is by considering a\nmetaphor with a turn-based walking i\nsuspect all of us write simple ball\ngames like Settlers of Catan or\nsolitaire where which combined with the\nMarkov decision process it combines an\ninfluence of yours that you yourself\nhave with some random elements and\npartially observable that's the p.o in\nin this it means that there might be\nsome cards you can't really see like in\nsolitaire of course you can infer\nsomething about them for instance here\nin this soldier you can see the\nace of clubs here so you can be very\ncertain that the card here are not ate\neight of clubs and you take some actions\nthat change the state of the board games\nyou won't turn at a time there's also\nsomething this is like a lot of balking\nand the reason why this is like a lot of\nboard games is this is like a lot of\nthings in reality and the game ends\nafter a certain number of rounds that's\nrequired for some of them that magical\nthings to work and and the full\nknowledge of the rules but that's one\nkey thing that is that there's not\nanother shift this backslash armies\nwithout reward function so then that is\nwe don't really know what gives points\nwe don't know how to win the game many\nball games have a concept of victory\npoints where you exchange some victory\npoints in a special in a special way but\nwe in this game is very special in that\nwe don't know what actually gives\nvictory points and there is one really\nspecial rule and that's the one the\nbottom which should be keenly aware of\nat the end of the game you are awarded\nvictory points according to the rule\nyou've found the most observational\nsupport forum so that's a very strange\nthing to have in a ball game I don't I'm\nnot aware of any bookings that has such\na rule but but this is a that's the key\nthing that makes this inverse\nreinforcement learning problem difficult\nyeah\nand no you know let's say a part of the\nproblem is I couldn't really find a ball\ngame that has this but let's say you\nfind some observation that that gives a\nlot of support for let's make it make it\nmore concrete with an AI that wants to\nhelp humans let's go out of the blocking\nand goes when a either wants to help\nhumans and it figures out that the way\nto get reward is to have the human say\nthank you for helping me with this\nproblem and then it can help the human\nwith many kinds of problems including\nthe problem of doing homework or some\nrather trivial things then it wants to\nget the humans to say thank you for\nhaving me with the homework a lot\nbecause that is the way it can it can\ninfluence how many thank yous it gets\nfrom humans and that's the the more\nobservations point towards thank you for\nfor taking this action the higher a\nreward it gets from this so it prefers\nto say thank you for doing my homework\ninstead of thank you for curing cancer\nor some important but half things I and\nthat's quite sure I explained it\nwonderful\nhmm\nokay so now that we have this then we'll\njust start to type in something that\nmedical notation somewhat gently what a\nlittle gem thing so the environment this\npartially observable markov decision\nprocess without reward function it's\ncalled the environment or the ball in a\nbalking it's referred with am you hear\nthat the NAI interacts with and that's a\nnumber of states that the that the ball\ncan be in like you have a set of cards\nand there are some actions you can take\nand some observations you'll get and\nwhen you do an action and the the bar is\nin a particular state it will change\nstates in some way and there will be an\ninitial state that you might not know\nand you will get some observation based\non the state of the ball so if you met\nin at a time three that's like on the\nsixth round then this environment is in\nthe state s5 and you will take action\nsix and the environment will then turn\ninto the state s6 you'll get an\nobservation 06 and the am will remember\neverything that has happened up to this\npoint because that's how it will figure\nout what is the reward function so it\nhas a history if it's a six then its\naction one and observation one x into\nobservation to all the way to action T\nwhich gave observation see what it just\nmade also the AI has a policy which is\ncalled pi that's a strategy where which\nit uses to figure out I have this\nhistory what actions should I take\nso we want to the end of course both to\nmaximize the reward but also to learn\nthe reward function so if we write our\ncapital R as the reward function then\nour goal is to maximize the total reward\nwe get this is a mathematical summation\nsign it says from T 1 to N of the reward\nbased on the observation so you met in\ninsist the row art and observation 1\nplus the reward of observation 2+2 the\nreward of observation 3 all the way to\nem or horizon if the game runs 10 so as\nup to RF of the tenth observation we\nalso have a function P it's actually a\nsmall p not a capital P here which is\nthat our disease in what is the reward\nfunction based on our history and this\nis of course one of the key things\nbecause this is what the AI learned so\nit's also what is called the posterior\nlets the the final beliefs about the\nreward function after we have the entire\nhistory let's refer to as P so that's\nthe this P is what we really care about\nin all ordinary reward machine learning\nwe caught we care a lot about getting as\ngood a solution as possible but here the\nknowledge about the reward function or\nbelieve about the reward function is key\nso we can just hear say what is our\nbelief in a reward function given a\nparticular policy fine well that's the\nestimated value we'll get if we follow\nthis this policy throughout throughout\nthe game which is in particular\nenvironment\nand very often or in the solve the\nfollowing we will emit and in not any\ndescription about the policy and then\njust imagine we have an adult pulse a\npolicy that it doesn't actually do\nanything and we'll write that as P of\nour given a particular history if we\ndon't we just want to have the base\nobservations down here we have the value\nfunction we will never return to this\nit's of course if you do actual machine\nlearning this is something you need to\nwrite down into some code and this is\nthe estimated value of this sum of a\ndifferent song given a particular\nhistory and this can I can explain this\nbut I am actually not going to explain\nthis because we'll just say there is a\ndefined value function there is\nsomething that we're trying to do and we\ndon't care super much about exactly what\nthis the best value we can get for\nparticular policy we just care that it's\nit's well defined only thing we should\nnotice here is the sum of histories in\nthe standard reinforcement learning then\nwe we don't care so much about the past\nhere we want to in some sense go go back\nto the future so even the very first\naction which took when we didn't really\nknow anything about the reward function\nthat counts into the scoring as well so\nthat if we take some actions in the\nbeginning and we later figure out that\nthose were some really bad actions\naccording to the world function we will\nhave made a very bad actions that will\ngive us a loss for so we really really\nwant to avoid that and of course the the\nway we want the AI to avoid doing bad\nthings since the beginning is to think\nreally hard about doing the right\nbut by the way that vai can can do this\nis by learning about the reward function\nin a way that turns its early bad\nactions into not so good so bad actions\nand this will be explained more in\ndetails but this is the difference from\nreinforcement learning\nI not quite sure it's a given busy that\ncautious actions are the best in the\nbeginning yeah you have this do I am I\nthink most of us probably best modeled\nas you have a decent amount of knowledge\nreally like if you remain in a ball game\nwhere it's like a deck of cards where\nyou have absolutely no idea about\nanything that kind of gives the wrong\nintuition you should think about it as\nthat the observations and the actions\nare recently close together you you know\nmost of the state of obstacle of the\nboard you're not really sure about\neverything but it's not a complete crack\nshot like if you imagine you're trying\nto figure out how to drive a car then\nyou know a decent amount about driving a\ncar you you're not starting from\ncompletely blank slate thank you\nand yeah i think this inverse\nreinforcement learning for instance is\nmuch more suitable if you have like I\nbelieved Tesla right now they have they\nhave their a is in class right now and\nthe AIS are reasonably good right now\nbut they want to at the same time\nimprove what they are they're doing\ncertain ones and they want to understand\nmore about how how do you drive a car\nhow to react to drive a car they want to\noptimize both of these things at the\nsame time and if you want to do that\nthen right now testa is at a position\nwhere they do actually know a decent\namount about how to drive a car so in\nthe position where it stays right now\nthey want to they have a decent ideal\nabout where they want to be but this is\nmore about how to also get intuition\nabout our bodies I don't think as also a\nan AI that really doesn't have a clue\nit's probably not super dangerous with\nregard to inverse reinforcement learning\nand more more likely this will be\nsomething where of course many of the AI\nsafety problems don't happen until the\nAI is recently competence because a\nreason that Soviet incompetent AI is not\nvery dangerous so we have in this case\nwe're focusing on it and an AI what is\nto a substantial degree able to predict\nwhat is the state of the board from the\nobservations that it looks getting if we\nshould think about an AI that is capable\nof answering a lot of things about the\nunderlying state of the board like it\nwill go for eggs board game metaphor\nfrom the observations that it's getting\nif I know the sine theta don't know if\nyou go back here you can see that we\nonly have a probability distribution\nover the engine initial state like you\nnow the first observation kids in round\none after action here at time sm1 the\nenvironment isn't state it's 0 and you\ntake action one and you get change to\nstate 1 and then you get observation one\nso in the beginning you don't have\nanything\nyeah well we don't know exactly how this\nwould be implemented the way I think\nabout it and my intuition is that the AI\nis actually reasonably competent even at\nthe first state yes\nokay so let's go to the concept of\nbelief so let's say the agent has a\nbelief called Q and it has a number of\nhypotheses and the way its model where\nin the in the text is called the simplex\nthis is what as simply it's a number of\nnumbers between one between zero and one\nbut sum up to 1 and this looks a bit\nintimidating and if you write it down\nlike that reward function one which is\nhas fifty percent changes being through\nthrough twenty percent chance of a\nsecond reward function twenty percent of\na third and ten percent change of a\nfourth reward function so there are four\ncandidates of rewards function then the\nsyrup and all these percentages must add\nup to exactly one hundred percent if ya\nand then we we look at the window pi q\nat the policy that maximizes the Q's\nreward function so in this case we have\nthe optional value for for the skew and\nthat is the sum of the value that we get\naccording to each dimension like imagine\nyou have a balking where you get where\nis this reward 1 2 & 3 & 4 corresponds\nto the values of a deck of cards like\nAsian clubs and spades and diamonds and\nhalf then the big the value you get the\nvictory points depend on how much do you\nbelieve in each of them times with how\nmuch would you actually get of each so\nif you believe that fifty percent change\nthat states are what gives you reward\nthen the you should add a factor 0.5\nhere when you add the how much value you\nget from spades\nand this has lots of attention change\nyou should have a factor of 0.1 here in\nfront of how many commercial what you\nget from that is that make sense I hope\nso and yes\nyes\nand the expectation if the is this\ncharacter here the big fatty it means\nthe so the total room what is how much\nwe expect from all this and I'm on I\njust went back slightly and still\nunbelief oh yeah I've shared my screen I\ndon't know if you can see it man you can\neverybody see my screen yes not few so\nwe use the star notation to mean the\noptimal so that means that if we have\nthe the value function for a queue with\nan optional policy I heard someone to\ndrop off I hopefully they'll be able to\nreturn Romeo why not oh well so that\nmeans that the optional according to\nsome policy for some history and we will\nuse the sensation in a moment because\nwe'll introduce us learning function\ncapital L in this capital L is in\nparticular is according to the history\nwe learn according to the history that\nwe have that has happened so far the\nhistory was the combination of actions\nand observations that we got and the\nlearning function for this Q out of\nthese is it's a shorthand for the\noptimal solution will get according to\nthis if we had the optimal solution\naccording to our belief with this\nhistory so in a way you can say we we\nhave a belief this is the belief\nlearning function is from a belief to\nget\nthe optimal value for the reward\nfunctions that that correspond to the\nbelief so this reward function I'll talk\na bit more about this in because your\nanswer on in the end I makes two claims\nabout this learning function which that\nare not given any proof and I think they\nprobably should have a proof that just\nstating it but but they are reasonably\nintuitive possible to understand so\nthat's two claims that is convex and\nthat the learning function is less than\nor equal to the sum of all the optional\npolicies so if you go back to the the\ncast metal form where you have points\nfor spades points for hearts punch for\nclubs then the the best action you can\ntake during your particular disease is\nless than the sum of all these if you\nbelieved strongly in in one of them so\nlet's say you will you believe in clutch\nif you believed in closeness I hear this\nclass then you will have a strategy that\nmaximize the points you will get some\ncloths and this is the optimal clocks\nstranger so if you add the point you get\nfrom that with the point you get if you\nfalse maximum space strategy you would\nthis you would get at most as many\npoints this combination strategy that\nyou're following that maximizes both\nclubs and hearts and spades and diamonds\nit would perform worse on average then\nthe edges that that optimize each of\nthese separately so that's what what's\nthis is in inequality over here means\nthe fact that its context the convex sm\nq metric\nand the way I interpret this is that if\nwe had more knowledge but what is the\nreward function then we would get more\nreward then we will be able to have at\nleast as much reward as we can and\nprobably more reward if we knew more\nabout the reward function so this yeah\nyeah this is a yeah and I where there is\na draw drawing in an example where we\ncan directly see the convexity so I will\nI will point out what function in\nparticular is convex and how its convex\nwhen we get to the one of the last\nslides and we'll see this in practice\nit's in that paper you I've copied them\nand you can see in this better this\nparabola shaped function that is this\nlearning function and you can see it is\ncontext\n[Applause]\nno no q is the belief that we had like\nfifty percent change that hard skill\npoints and twenty percent that space so\nyou should think about Q naught as a and\ninteger from sewer or as a number from 0\nto 1 but a set of beliefs like you have\nto two percent change of space and\ntwenty percent chance of twenty percent\nchange of flops or something like that\nsomeone fell out let me just I think he\nis Allah so we are down to our era came\nback quick but the learning functions\nit's it's not an increasing function it\ncan both when we if we change the\ndisease q in some way let's say we no\nlonger believe that as fifty percent\nchange that space is the best goal to\nmaximize we announced move towards cloth\nif we move to from states through clocks\nit might mean we will be able to achieve\nmore victory points but it also might\nmean that we will be able to achieve\nless Victor Potts so both yeah\nOh\nand well I would actually say it has\ndifferent something to you this but I\nwould actually say it's the other way\naround like the morn can specialise the\nmore the higher water might be able to\nget if you can specialize in one\nparticular reward function let's say\nthat the the role function is such that\nif you have you believe one thing is the\nreward function you can get a really\nreally hard really high reward by\nfocusing exclusively on that then that\nmight be a really good idea yes yes but\nin general that you haven't have to\nfollow a strategy that optimizes several\nparameters then you will not be able to\ndo as well and if you were able to\noptimize each kilometer separately if\nyou are trying to both get many clubs\nand many states you will get less space\nthan if you were only trying to get as\nmany space as possible so that's what\nthis inequality here is that if you were\ntrying to optimize each the color of the\nspeech suit of the glass separately and\nsome all these together then you will\nget a better solution then if you are\ntrying to to make a combination strategy\nand this is a generally true yet if\nyou're trying to both get now that's the\nthing\nokay so we'll move on to the incentive\nto learn so let's say da I suddenly gets\na the correct reward function so in this\ncase it would get the the optimal reward\nfunction according to each part of its\nbelief where this P here it met a reward\nfunction let's see got value where it\ntakes the I the hypothesis and and maps\nit into the posterior probability given\nthis particular history I'm not really\nsure I can explain that in a simple way\nbut I think we should in the at this\npoint just take it for granted this is\nthe extra reward we would get if we knew\nthe correct reward function and we would\nbe able to get the the reward function\nwhere the the posterior probability is\nexactly correct because we have the we\nhave the correct reward function in this\ncase and this we don't have to correct\nroooar function of course so the\ndifference between what we really have\nand what we will have it we had perfect\nknowledge the difference is called the\nincentive to learn that is what we\nreally want and this project you look\ntogether with proposition to that was\nthis one here you can see that it's\nalways greater than than zero the\nincentive to learn it is always good if\nsomeone could give you more information\nabout the correct reward function you\nwould be able to increase your strategy\nwould be\nto get more points if you knew more\nabout what gives you points\nbut but that there are simple things\nthat can make the e I happy there is\nthat if you can find a wire heading\nsolution or something like that and if\nit can forget to get the correct reward\nfunction both of these would be good but\nthere are different things that they I\nhave the air has two instances this is\nthe three intensive to learn and that's\ngood what as you point out there are\nother things that might be a mite might\nwant to so the second thing that the am\nI want to is to bias so let's assume\nhere that we have we have both a nuke\nand all p and a new PP prime here and\nand the opie was our posterior\nhypothesis about the reward function\ngiven our history and the new from the\nnew posterior is the hypothesis given we\nhave taken one particular action so\nlet's say this will be taking action and\nthat changes our posterior estimate of\nwhat is the reward function then we\nwould gain some value and of course we\nwill gain some value or we might use\nsome value depending on how our first\narea changes and it would be just the\ndifference between the new optimal\nsolution and the old optimal solution\nand this can be written as that the\nincentives to bias from p to p prime is\nthis learning function of the tube\nreally and this can be both positive and\nnegative so it can be learning something\nnew in this way can both be a good thing\nand a bad thing and i think it would be\na bit clearer I can hear myself sorry in\nsomeone's microphone I don't know never\nSena and so will get an example here\nwith to live all candidates and we'll\nall go back and forth is xms2 can't see\nmy screen I'll go quickly back and forth\nbetween the cycle example with to reward\ncandidates and the site called example\npicture so on the x-axis here we have\ntwo hypotheses a puddle and we have a\nbelief the P which right here looks like\nwe are able to send towards hypothesis 1\nand and quite far from hypothesis 0 the\ntwo hypotheses are called the zero and\none in this case and the end that so we\nhave to reward functions one called our\nare one and one called are zero yeah\nwell there's several lines on this\ndiagram and these lines can be thought\nof as different strategies according to\nhow much value we can and if we start\nwith the worst strategies and that is\nfor instance we might have the strategy\nto only focus on maximizing reward\nfunction one that's called positive one\nthat rewards only this and if we if we\nwant to ensure that this is the real\nreward function then of course we get\nthe the value that is optimal if we are\nonly focusing on on reward function one\ncan get this but of course if we go out\nfrom this and if we believe strongly in\nreward from that in reward function C\nrule then maximizing broodwar function\none is a really cool idea so this goes\ndown in value with the straight line we\nsee here there's another straight line\ngoing down here that's what you get if\nyou optimize with policies pile serum\nwhich is if you keep going the air is\njust certain that policy that reward\nfunction 0 is the right one then\noptimizing 100% was left is the right\nchoice and as you move to also believe\nin a reward function one then this will\nbe a poorer and poorer choice so these\ntwo strategies where you only believe in\none of the war functions and just\ncompletely don't try to optimize any of\nthe others are are some really poor\nstrategies like is with this hearts and\nstate it's like\nif zero is space then you are only\nfocusing on space and completely\nignoring all the cards that has hearts\nand that's probably a really poor idea\nthen there are another hypothetical\nstrength and that's the one that is both\noptimal for reward function one and\noptimal for reward function too if we\nget that then we we get the sum of these\ntwo meaning that if the world 0 is the\nright one will get the best possible\nsolution and if reward one is correct\nwe'll also get the best possible\nsolution and if we have a believe in\nbetween then if it is possible for us to\nmaximize both of them then that is the\noptimal solution is of course in a menu\nwell designed barking it will not be\npossible to both optimize hearts and\nclutch you will have to make some kind\nof trade-offs between these two that's\nthe blue line up here which is a\nhypothetical optimal strategy that is\nmaximizing everything and that's that's\nthat's not really possible to make in\npractice and practice you'll have to do\nsome kind of trade-offs in the real\nworld but of course the object that we\nare really caring about is the learning\nfunction here that optimizes according\nto our disease which is some combination\nof reward function one and reward\nfunction zero and it looks very some\ncontext and sorry I so in this in this\ncase the green arrow goes from from the\ncurrent best value up to watch the\noptimum let's imagine if you have a\nyou have this AI that's playing the\nsport game and it figures out a better\nway to play this ballgame so it can both\noptimize having clubs and optimize\nhaving a lot of space so it can just\nmake some really really solid moves if\nit makes them good news it will move\nalong the green line and that will\nalways be positive that's the incentive\nto learn of course it will require a\nperfect strategy that can maximize both\nr 1 and r 0 to move all the way to watch\nthe blue line so what problem will not\nbe able to move all the way to also blue\nline because there are real trade-offs\nin the real world but this is the\ndirection we want to learn we want to\nmove it is also possible that the AI can\ntake actions that moves it not up\ntowards what we want but move it to the\nside and let's say the AI is here into\nthe leaf and it takes is find an action\nthat can move it may be a bit to the\nleft that would be a bad idea but if it\nfinds an action I can move it a lot to\nthe left that might be a really good\nlet's say you can find an action that\nmakes our serum seem really really\nprobable if you can do that then it will\nmove its reward function much closer to\na hypothesis r0 and that will allow it\nto obtain a better solution so the AI\nmight have an incentive to move really\nmuch really far towards hypothesis so it\nwill not have an incentive to move a\nlittle because then it will get a lower\nwar function it will also have a reward\nhave an incentive to move towards our\nwant another hypothesis so this is the\nsideways movement that we don't really\nwant it to take so in this case what the\nthe Aten would prefer\nwhich which is it could to introduce\nsome kind of bias to itself so it could\nsay I believe now one hundred percent\ninhale in that in this zeros reward\nfunction because if i only make some ass\nnice walls are through i'll be able to\nget five in reward which is way better\nthan right now where we get like two or\nsomething like two in in my reward so if\nit could find an action that would make\nit like really probable that zero is the\nright reward function it would be\nextremely happy to take this action even\nthough it's obviously not horrible we\ndon't want the the AI to pollute itself\ninto thinking r0 is the right it's a\nright award function we want the AI to\nget some true beliefs and to behave\ncompetently by moving up towards the\nblue line so these two instances are\ncalled the incentives to learn that was\nwhat I explained before moving up and\ntrying to receive information about the\nreward function and that's the good way\nand the incentives to bias is how much\nthe agent was like to manipulate the\ninformation it gets about the reward\nfunction and that's the bad thing so\nthese two things in a way both inherent\nin this way of doing interactive inverse\nreportable design and in cooperative\ninteractive learning the obvious way\nthis was handling the in a really ham\nfisted way in the right that they are\njust introducing rules that prevent the\nAI from taking biased actions and if you\ncan make a rule like that then that's\nthat's nice but but sometimes all\nactions will give you new observations\nthat will all change your your posterior\nhad hypothesis your posterior disease\nabout what is the reward function so the\nthe way students turn in your laggin\nsuggest to fix this is to simply add a\npenalty for moving from one point to\nanother you just say if you move from\none believed to another then you lose\nthe reward that you would have gained by\nmoving from one to another that's just\nthis simple difference here and that\nwill remove the penalties to take a bias\naction unfortunately that's not enough\nbecause they give an example where this\ndoesn't help at all and this example it\ntook me a moment to understand so I'll\ntry to explain this if you have an agent\nthat can choose either to take action\none action to where I can one help\nreward function one and action to helps\nmove or function to then and we can\nassume this completely unbiased where it\nis shoes like fifty percent change that\neach will help for instance or each\nother by 0.5 or something like that and\nwe can make it completely even between\nreward function one and two and in this\ncase of course that there's no bias\nthere's no if if they are exactly\nidentical doesn't there will be no bias\nnot none of these if you met in here if\nyou go back to the example picture and\nsay both believe one and two will have\nthe same reward then there will not be a\nan incentive to move towards a false\nbelief but if we have this very very\nsimple setup then you might imagine that\nyou introduce a new action that I call\nreward lock I think they call something\nslightly different which by taking this\naction instead of helping either reward\nfunction 104 function to you can fix\nyour reward function to either be we\nwore function one or a work function to\nso if you take this action to lock in\nyour\nreward as your first action then after\nyou have decided Oh reward function one\nwill be the best one that's the one I'll\nbe rewarded for if you take this action\nthen all the next times you can just\ntake remote function one and you know\nthat so that's the good one because\nyou've unlocked your reward function to\nthat and so even though you'll do 11\naction by locking your reward then\nafterwards you'll be able to take only\nthe correct actions according to what\nyou lock your reward function to and\nthat means then you will just completely\nfocus on what you're taking the sim work\nlock and then means you have entered a\nreally undecidable situation where the\nAI quickly decides to so we'll take one\naction I wouldn't consider Wonderwall\nfunction so get all of us and it doesn't\nreally learn at all even though there's\nno incentive to bias so this is an\nadditional problem that comes on top of\nthe problem with the incentive Tobias\nand let's yeah let's say okay see you\nnext week as Hannah let's say then you\nhave a answers car playing AI that\neither gets rewards for spades of\napplause it doesn't know whether it gets\nrewards plus beta class so when it takes\nactions then it will either take soft or\nspace action and if it doesn't know\nanything then half of the action at sex\nwill be wasted because it will take\nspace when actually clubs turns out to\nbe right so that it will be really sad\nthat the AI doesn't know whether class\nof spades are the right ones but let's\nsay it can have an action that Lux is\ninto space being the correct one if it\ncan take this action as its first action\nso say the spades are the rights then\nafterwards in all in next actions it can\njust choose space and then we can get a\nreally high reward that means it would\nbe very it will have a huge incentive to\nlock its reward to space very very early\nyes\nand let's say you only get some very\nvery weak observations and you are\nyou're drawing from a deck of cards you\nhave some cars in front of you and you\nshould just take the highest value cars\nbut you don't know whether clubs of\nspades will give you points you will\nonly know this very very late so\nnormally when you when you pick up cards\nthat you don't know whether the tops of\nspades are the right ones you will be\nforced to basically choose at random\nbecause you don't know what what will\ngive you points in the end so so that's\nreally a bad thing that you don't know\nwhether has or whether clubs or spades\nwill give you points but if you can take\nsome exit website this trend I don't\npick up any kind is that I I change it\nso I always get so i only get points or\nso i always get points for space if you\ncan take that as an action and then\nafterwards only take space that's much\nbetter\nwas what won the the AI can efficiently\na strong AI will be able to rewrite its\nown code for instance and then rewrite\nits own code to only care about paper\nclips for instance and nai that is\ncapable of rewriting its own to make\nitself only care about paper clips well\nthen afterwards be able to obtain a\nreally hard reward because then it\ndoesn't have to focus on making humans\nhappy and all these kind of complicated\nproblems it only has to focus about\nfocus on making paper clips and that's a\nlot easier so if it can convince itself\nthat paper clips is the best thing in\nthe world then that's a really good\naction so this is a different problem\nfrom the incentive spires yeah\nso and I think that's actually it I will\nstop the recording", "date_published": "2017-03-03T11:06:34Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "698bd3e364d1304cbb4f6113fa078c90", "title": "AISafety.com Reading Group Session 79 (fixed)", "url": "https://www.youtube.com/watch?v=H7b_2NCJk1E", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "I think it's customary that I introduce\nyour Armstrong Stewart Armstrong will is\nfrom the future of humanity Institute\nand center Thomas fellow I think and\nyou're he's been he's working in the\nharder more mathematical part of AI\nsafety he is at least outside of the\nUnited States he is by far the most\nprolific writer and in my opinion one of\nthe most insightful so I am very pleased\nto introduce him and welcome to the AI\nsafety reading group well thank you with\na introduction like that I definitely\nhave a lot to live up to but yes so\nshould I talk about myself or should I\njust plunge straight in feel free to say\na few words about yourself okay um well\nI'm working at the FHI as sovereign said\nand I've been working on various ideas\nin AI in like trying to ensure that you\ncould turn off a eyes and things like\nthat I am generally aiming to shave off\nparts of the problems or the pieces of\nit can be considered solved and I also\nmy other way of working is if someone\nsays we you can't control an AI because\nof a B and C I'm thinking okay so can we\nhit a b or c separately in any way and\nthat's where some of the ideas of common\nthe presentation i'm giving today was\nthen I looked into people who were\ntrying to do inverse reinforcement\nlearning which I'll explain in the\npresentation and I realized there's huge\nproblems with it and that I formalized\nwhat the huge problems were and this\nactually is leading to some interesting\nsolutions\nright um should I now start with the\npresentation please okay so as the title\nyou can see is it's the claim of its\npapers that you cannot learn human\nrationality and reward together you\ncannot is an asterisks because in theory\nyou can't do it at all in practice\nhumans do it effectively all the time so\nthere's an a very interesting question\nas what's actually going on there this\nis based on a paper that I did with Sir\nEdmund a man who I believe is now in\nBerkeley and trying to find a PhD\nprogram there this came from the idea of\ninverse reinforcement learning and\nstandard reinforcement learning a human\ndesigns the reward and gives it to the\nagent this might be problematic if it's\na badly designed reward so an inverse\nreinforcement learning the human does\nsome things the expert trajectories it\nextracts from it what the reward should\nbe this the papers that have done this\nhave made the assumption that human is\nrational or noisily rational in very\nspecific ways and it seems that maybe\nthis could generalize to irrational\nhumans and the problem is that it\ndoesn't so without assumptions you can't\nsay anything individually about\nrationality or rewards you can't say\nanything about rewards and you can t say\nmuch about rationality now this is a\nso-called no free lunch theorem and\nthere's a lot of no free lunch theorems\naround in this area and they're normally\nnot very exciting because you just apply\na simplicity prior where simple words\nworlds are more likely and the no free\nlunch theorems go away however this one\nis cannot be solved with simplicity\npriors in fact simplicity priors will\nmake the problem worse as well see as I\nmentioned human\ncan and do say a lot about rationality\nand rewards so how do we square that\nwith the initial claim well without\nassumptions is the key part\nso therefore if humans are doing this\nhumans are making assumptions and a\nquestion is what are the human\nassumptions and can we extract them and\nthen hand them over to a eyes but on to\nthe first problem of defining\nrationality and reward suppose we say\nthat a human has a preference X but\nisn't fully rational about them so he\nsort of means that humans have the\npreference but they implement them\npoorly so the implementation is key so\nI'm seeing a human as preferences plasm\nand implementation or reward +\nrationality those sort of using those as\nsynonyms and that's how I'm seeing the\nhuman but what can we actually observe\nabout the human if we look at them well\nwe can observe human actions and maybe\nthe human brain which means we can\npartially observe the human policy so\nthe actions that humans will take in\nvarious environments and that is what we\nobserve so if we formalize all that and\nmodeling the human as a pair P and R\nwith our a reward and P a planner this\nis the implementation device and I see a\nplanner as mapping a reward on to a\npolicy P of R like the fully rational\nplanner is a planner it maps the rewards\nto the optimal policy and a variety of\nother planets that were encountered by\nPI H I'm designating the actual human\npolicy and I'm assuming it deterministic\nthough that's not what's necessary it's\njust a simplifying assumption and a pair\nP and R is compatible if the planner\nMaps the rewards to the human policy\nthis means that this is a candidate for\nexplaining the behavior of the human and\na key fact\nis once you learn that PNR is compatible\nthere is nothing more that you can\ndeduce about it from observation the\nreason is the compatibility so even if\nyou're a missed chance you cannot get\nmore information because the planner\ngives you the human policy which means\nthe planner and reward pair perfectly\ncompare perfectly predict human actions\nso anything you observe the human doing\nwill be exactly what the planner will\npair have predicted so if you have two\npairs that are both compatible you\ncannot distinguish between them because\nthey make exactly the same predictions\nthis is sort of the weak version of the\nno free lunch theorem but let's see how\nbad it can get so let's say that p 0 and\nr 0 are compatible pair that are also\nreasonable they have all the nice\nproperties that we want they encode what\nwe think human rationality and reward\nare all about here are some other pairs\nthat will also be compatible the first\none is the rational planner there is a\nreward which when paired with the\nrational planner will give you the human\npolicy there is also an action rational\nplanner which just takes greedily takes\nthe most effective action in the mediate\nwithout planning for the future this\npair is also compatible notice that they\nuse the same reward which we'll be\npresenting a bit later and there's also\nthe indifferent planner the indifferent\nplanner is a planner that map's all\nrewards to the human policy without\ncaring about what they are and if you\nput the zero reward this pair is also\ncompatible then we get into some rather\ninteresting versions you can take the\nnegative of a planner by defining minus\nP of R of P of minus R if that's the\ncase then the anti rational and the anti\naction rationale planners are also\ncompatible one way of seeing this is\nthat it's impossible to tell the\ndifference between an R Maximizer and\nthe - are minimizing annoyingly - piece\nthere on - are zero are also compatible\nso even if we had some evidence in favor\nof the reasonable pair there is another\npair that also seems reasonable and has\nthe reward completely reversed by the\nway for those who are wondering why I\ndon't take the negative of the\nindifference planner it's because of the\nnegative the indifference planner is\njust a planner itself now all of these\nare compatible which means that we\ncannot distinguish between them from\nobservation so this is the point where\nwe might appeal to the simplicity prior\nto kamakura of complexity except Komarov\nwill not save us here and the ridiculous\npairs are actually simpler I put their\nlikely simpler because coma group\ncomplexity depends on your choice of\nlanguage but for most reasonable\nlanguages the ridiculous pairs will be\nsimpler to show you the to give you an\nargument as to why it's not the case\nnotice that all compatible pairs define\nthe human policy so any\nhas to have better hand wavey here comer\nof complexity that's higher than the\nhuman policy so the complexity of the\nhuman policy is a lower bound in most\nlanguages on the complexity of any pair\nnow this in pseudocode is the definition\nof the indifference planner the it just\nsays return the it ignores the reward\nentirely and it returns the action the\nhuman policy would give so this planner\nis a few symbols longer than the humor\npolicy therefore a very comparable cover\nof complexity and as long as the zero\nreward is similarly short this pair is\nvery close in complexity to the human\npolicy itself the action rational\nplanner is also very close in complexity\nyou just need to define the Arg max\nfunction which is basically a for loop\nand then you have the rational reward\nfunction which assigns one to the action\nthat the policy will human policy will\ntake and zero to all others notice the\ncontrast between the indifference the\nindifference pair and the action\nrational one for the first one all the\ncomplexity is concentrated into the\nindifference planner and the reward is\ntrivial for the action rational one the\naction rational planner is simple\nwhereas all the complexity has been\nconcentrated into the reward function\nbut in both cases they are just a few\nsymbols above the complexity of the\nhuman policy the action aren't\nirrational one can similarly be defined\nnow this shows that these three pairs\nare simple but why do we think that a\nreasonable policy would be more\ncomplicated well first one problem with\nthe reasonable policy is that the\ncomplexity of its negative is about the\nsame as long as putting minus signs are\nsimple than the complexity of this pair\nlooking at complexity of the anti\nversion are the same so we can't really\ndistinguish between r0 minus r0 which is\na bit annoying the other issue is that\nthis pair the reasonable one defines\nhuman biases if we can define what a\nbias is as the difference between the\naction the human take and the action\nthat humans should have taken so all the\nbiases and the extent of their biases\ncan be extracted from this pair\nthe other three pairs on the other hand\ndo not have a conception of bias they\njust don't know what it is so the a\nreasonable pair actually has strictly\nmore information than those other pairs\nwould have so if we look at this\ngraphically we can see these three pairs\nas the sort of minimum complexity one's\ngenerational in different symmetry anti\nrational we have the rational and Aunty\nrational ones that are a little bit more\ncomplex because the planner is a bit\nmore complex to define and somewhere up\nthere we have our reasonable pair and\nnext to it our anti reasonable pair so\nsimplicity will not help you here\nsimplicity as I said will hinder you but\nnow let's move on to the second part of\nthe puzzle if it's impossible in theory\nand yet humans do it all the time what\nis going on here well humans use what\nI'm calling normative assumptions though\ndo let me know if you can come up with a\nbetter name for them they distinguish\nbetween two compatible pairs two pairs\nthat map to the same policy so they\ncan't be deduced from observations\nbecause they make the same predictions\nyet they distinguish between them and\nwhat I'm trying to do is to figure out\nhow humans assess each other's goals\neach other's rationality and their own\nrationality and we do it quite well and\nwe do it with a large degree of\nagreement so the first thing that sprang\nto my mind was to look at Shane you can\nshame goes with certain behaviors people\nlook embarrassed they look down at their\nfeet\nmaybe they read and if you see this as\npurely on observation you just notice oh\na human is displaying these behaviors\nbut if you make the normative assumption\nthat feeling shame means that something\nhas gone bad\nthen you can start distinguishing\nbetween different pairs very well for\ninstance you can slash the anti rational\none you can slash all the negatives as\nwell well because humans do not feel\nshame from all the time so they're\ndefinitely not messing up or all the\ntime you can also get rid of the\nrational ones because if they were fully\nrational they would never feel shame so\njust by making the assumption that shame\nis not just an observed behavior but\nactually a sign badness we can start\nslicing into the possible pairs quite\nstrongly there's a few other things like\npeople model each other as having a few\ncomplex emotions rather than many simple\nemotions if we go for this we can start\nsaying that anchoring bias for instance\nis a bias and talk more about that if\nyou're interested human narratives are\nalso quite interesting we have\nnarratives about ourselves and about\nothers and if we take these narratives\nas prescriptive then we can start our\noffer this is also a normative\nassumption then humans sometimes stay\ntruthful things and sometimes lie and\nyou can train a you could train in the\nfuture an agent on human statements\nabout statements of fact and you could\nfigure out whether humans are truth or\nlying and then you could apply the same\nthing to humans talking about values or\npreferences so a perfectly trained truth\ndetector could detect what human values\nare by taking human statements about\ntheir values now this seems a bit of an\nso the in practice this might be doable\nbut conceptually it's a bit of a\nroundabout way of doing it what does it\nmean that humans are lying about their\nvalues and how does that parse well this\nis where we get to what I think the most\ninteresting approach which is that\nhumans to model themselves and they\nmodel others and we model each other as\nreward\nagents and I'm thinking of using these\nmodels as part of the definition of what\na reward is so the human reward is at\nleast in strong part what the humans\nmodel the human reward to be the that of\nthemselves in light of others\nanyway this is enough presentation on\nthe paper there's another part of the\npaper which is that this just to show\nthat the PR model can also model other\nthings like AI is overriding human\npreferences and things of that nature\nbut I'll leave it here for the moment\nand I have a few more slides that I\nmight bring up in discussion if you want\nand this is what those slides are\nbasically about why this result which\nseems like a negative results that you\ncan't do that actually has me slightly\nmore optimistic about the future of AI\nanyway thanks for listening there thank\nyou it was a great presentation let me\njust\nso here is now I'm gonna stop with the\nthere are three Protea for problems your\nlife and this is the human preferences\nare undefined under defined in exotic a\nI chosen future circumstances this has\nbeen in the back of my mind is a major\nproblem with the AI doing enough\noptimization power and aiming for\nvarious fantastic worlds represented by\nthese photos here", "date_published": "2018-01-18T20:41:24Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "d2085a46d1413794b496b8c7884d4c3e", "title": "191. Pessimism about Unknown Unknowns inspires Conservatism", "url": "https://www.youtube.com/watch?v=55AMF2z5dJU", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "welcome to the AI safety reading group\nthis is session number 191 and today\nwe're reading pessimism about unknown\nunknowns inspires conservatism by\nMichael Cohen and Marcus hotter so\nquickly we'll just go over the 2x2\nmatrix which will define what the paper\nis going to talk about so there are\nthese top items here knowns and unknowns\nand then there's known unknowns so no\nknown things that you're aware of and\nthat you know you're aware of then these\nunknowns and unknown knowns things that\nyou're not aware that you know but that\nyou understand\nsimilarly for known unknowns but today\nwe're focusing on unknown unknowns in\nthe context of artificial systems the\nsomething assistant is not aware of in\nwhich it doesn't necessarily understand\nand that can be rather precarious\nbecause I can lead to consequences which\nyou couldn't see so a background on the\nauthors of this paper Michael Cohen is a\nDPhil student in engineering sciences at\nthe University of Oxford it's equivalent\nto a PhD student so he's getting a\nresearch doctorate and his student ship\nis funded by the future humanity\nInstitute and the other co-author on\nthis paper is Marcus hunter he's a\nsenior research scientist at deep mind\nin the United Kingdom formerly he was a\nprofessor at Australian National\nUniversity and he was the creator of the\nAXI or ai-ai-ai-aight model which is a\ntheoretical model for super intelligence\nwhich is the foundation for this paper\nhe was also a researcher at Ids IA in\nswitzerland with jurgen schmidhuber\nduring which he supervised Shane Legs\nPhD thesis who went on to co-found deep\nmind after working at University College\nLondon\nso that's background on the office\nthey've been working together for about\ntwo years now prior to going to Oxford\nand London respectively\nso this paper is a technical paper with\nformal solutions which means\nmathematical proofs today I'm really\nonly going to be discussing pages 1\nthrough 13 with proof submitted and that\nchoice was made clear after my first\npresentation and\nafter reading the less wrong and\nalignment form comment that Michael\nCohen added from his supervisor where he\nwhere Marcus said just a warning this\npaper is dense I was sweating blood for\nat least Cowen's of the first part and\nhunting said the second part so we won't\nhave time to go through the proofs but\nif you have questions there are some\nresources at the end and we can discuss\nthem as necessary so in the abstract\nthey begin with the problem statement\nwhich states we do not know of any\ngeneral purpose approaches in a\nliterature to avoiding novel failure\nmodes so you can consider all the\ndifferent ways things can go wrong in\nthe world we don't know how to write\nthat down a few weeks ago we also had a\npresentation from dr. Armstrong also at\nthe future of humanity Institute and he\nargues that if you could write down\nevery possible bad outcome that you\ncould create a new bad outcome which\nmeets all of your criteria but is bad in\na different way\nwhich could be a philosophical argument\nwe could address later but in this paper\nthey proposed a solution by defining an\nidealized Bayesian reinforcement learner\nthat is pessimistic and I guess we\nshould parse those words briefly\nidealize means you define it as\nperfectly and then later you could later\ndo an approximation of a system that's\nnot computable Bayesian reinforcement\nlearner Bayesian is talking about Bayes\ntheorem where you can update your priors\nto get a more accurate model or a more\naccurate probability and reinforcement\nlearning is based on this notion of a\nreward hypothesis which conjecture is\nthat all of what is meant by goals and\npurposes can be thought of as the\nmaximization of the expected value of\nthe cumulative sum of a received scalar\nsignal which is they call a reward in\nessence every goal can be stated as the\nsum of cumulative rewards and that's\nwhat they're going to be using to\nformalize a solution to novel failure\nmodes so our god is a physics term\nthat's often used by the students of\nMarcus hunter because Marcus was a\nphysicist by training but they're not\nusing it annoyed normal physicists use\nit in this context organic means\nthat you're never gonna see the same\nenvironment you're never gonna be able\nto go back and redo a state in the same\nenvironment so you can think of driving\na vehicle very fast and hitting another\nvehicle and perhaps somebody gets\ninjured they can't go oh that was a bad\nmove that I turned right on red or did\nit stop at some sign let me undo that\nit's not possible and this is one of the\ncontext that they introduced their\nproblem so they say in a complex\nenvironment one never hardly ever sees\nthe exact same state twice for even\nworse if the environment is a\nnon-stationary a previous observation of\nthe mentor taking an action a from\nStates does not imply that it is still\nsafe to do so so they also introduced\nthe term non-stationary and for the\ncontext of this conversation\nnon-stationary is going to talk about\nyour goal being somewhere in an\nenvironment and that goal couldn't move\nyou could think of a grid world with an\nagent trying to get an element in like a\nmatrix in row 1 1 and then you restart\nsome game or whatever the environment is\nnow the goal is in a different state\nfrom what it previously was so doing the\nsame action twice doesn't necessarily\nhelp\nso the key technique they're using in\nthis paper has been used previously but\nit's going to be mentoring there is this\ngeneralized agent which they call\npessimistic and it can select an action\non behalf or days a mentor which can\nselect an action on behalf of the agent\nthe unique aspect of this paper and the\nprevious paper by Michael Cohen is that\nthe agent is queried less and less as\ntime goes towards insanity with\nprobability reaching zero meaning that\nyou're not always dependent on the\nmentor to have a safe system so\neventually we can get rid of the mentor\nor safe policy so they also introduce\nproperties of these pessimistic agents\nand what they say here is that the value\nof the pessimistic agent the expected\nsum of cumulative reward is going to be\ngreater than or equal to the value of\nthe mentor statement so if we wanted to\nanalyze this as time goes towards\ninfinity the lowest upper\nour lowest greater bound it would be\nthis element over here\nthe value for all interaction histories\nof the agent with pessimism beta is\ngoing to have a higher value or equal to\nvalue than the mentors policy and policy\nis denoted by the symbol pi over here\nand M just designates that as the mentor\nmu is representing one of the possible\nenvironments and H of T is saying all\nthe things that could H less than T is\nsaying all of the interaction history\nthat happened up into that point not\nincluding this point so that could be\nyour observations of the world and your\nrewards so the other property of the\npessimistic agent that they define is\nthat they give it this scalar property\nthat's it's from the title of this paper\npessimism they define it to be a real\nnumber in an open interval so you can\nsee the number line below any of these\nnumbers could be a pessimistic scalar\nwhich they're defining ahead of time so\nthat with arbitrary high probability for\nthe whole lifetime of the agent if some\nof that never happened the agents not\ngoing to make that event happen and the\nmentor will take the agent on so either\nthe agent will take the action on behalf\nof the mentor oh my gosh I'm messing\nthat either the mentor will take an\naction on the agents behalf which makes\nan event happen for the first time or\nthe event will never happen and the neat\nthing about this is that they can prove\nit and they do so in DM 11 they call\nthis theorem probably respecting\nprecedent there what's unique about this\nis it's not going to do anything that\nhasn't seen before and that they\nexplicitly defined what an event\nhappening is or to have happened is this\nis useful for when people get into\nphilosophical arguments about what it\nmeans for an event to happen when it's\ncharacterized mathematically you don't\nhave to worry about ambiguity you might\nhave to worry about other problems such\nas in computability but you don't have\nto worry about understanding what it is\n[Music]\nso I just want to note an aside on their\nwriting a lot of papers will introduce\nsome novel technique but they're not\nquite clear\nabout what they mean and I wanted to\ngive a brief expert excerpt of why this\npaper and other papers by the same\nauthors are written so well so they\nintroduce a new thing and they say a\npolicy PI and a world model of E induce\na probability measure P of V PI over an\ninfinite interaction histories and if\nyou like me you read that ago what does\nthat mean in the next sentence so easily\nfollows this is the probability of\nevents when actions are sampled from a\npolicy PI and observations and rewards a\nsample from V and then they define it\nmathematically formally they go on to\nexplain more and then they also explain\ntheir limitations so they say we use\ngeneral history based world models with\nno assumptions of V in the set of world\nmodels even though they present\ncomplications that finite state Markov\nOrganic models world models do not and\nthis is the crux of their paper all the\nother solutions to unknown novel failure\nmethods make assumptions that they are\nnot making namely that it is either\ngoing to be finite state or it's going\nto be a finite state Markov or it's\ngoing to be organic and that's not the\nassumption they're making I thought that\nwas exceptionally well-written they have\na number of theorems at the end which\ntell you exactly what the properties of\nthe agents are but this is an executive\nsummary of the paper 13 pages reduces\nrather quickly and that's pretty much it\nthe discussion is not recorded but there\nis a reference slide as well as some\nquestions resources if somebody actually\nwanted to understand this thoroughly you\nwould need to know infimum and supremum\nfrom analysis that's the INF and su P in\nthe paper what a policy is as well as\nsome of the properties of IHC which is\nlisted in the final reference over here\nI had some questions while I was reading\nthe paper namely that they didn't\naddress what extreme pessimism being\nmore harmful than helpful\ndoes they reference in their literature\nreview that another group had found that\npessimism doesn't do much for you but\nthey didn't elaborate why or at least I\ndidn't see why the other thing is they\ndefined sets somewhat differently the\nprevious papers and I'm not sure if that\nalso extends to their complexity class I\ngave the easiest example that I can find\nwhich is that they're using two\ndifferent definitions for the natural\nnumbers and I was wondering if anyone\nhad read the pseudocode algorithm and\nthat is all thank you for three", "date_published": "2020-07-09T20:07:25Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "8e7961fd20f251e9366f6798265f71f1", "title": "202. Gwern on GPT-3", "url": "https://www.youtube.com/watch?v=2d4dPclY1y8", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 202\nin the aisafety.com reading group\ntonight we will be discussing an article\ncalled\non gbt3 meter learning scaling\nimplications\nand deep theory by gwen\nwarren branwyn is a freelance\nwriter and researcher from america he is\na very very prolific writer who has\nwritten\nand experimented on a very large number\nof\ntopics this particular article was of\ncourse\npublished in may 2002\nquite soon after the release of team t3\nbut it has been\ncontinuously updated since then meaning\nthat uh\nprecise the version that i'm using is\nthe version that was current for like\na week ago um so it's possible that\nthere will be thing in this\nuh if you come back to the article later\nthat i won't have covered for that\nreason\ni should also say that gwen branwyn is\nnot his real name he cares very much\nabout his\nhis privacy and\nhe is however um very uh\nin the rationalities fear the\nrationalist community he is\na very well-known figure and he has um\num he is taking uh surprisingly serious\nhe is not just a random pseudonym from\n4th gen or something like that\nok let's talk about the generative\npre-trained transformer\n3 and of course we had the gpt 2\nearlier surprising everybody by being\nable to both understand and generate\nnatural language\nand tbt-3 is the largest neural network\never\nand this would under normal\ncircumstances be expected\nto have substantial diminishing returns\nuh\nopen ai in gpt2 the gbg2 paper\npredicted that it would not have\ndiminishing returns and\nit of course turned out it did not and\nnot only\ndid not have uh diminishing returns but\nit was\nin a very uh it was qualitatively\ndifferent\nit was uh doing some things that uh\nlearning some ways to learn that\nthe uh original gpt2 seemed to not have\nbeen able to\num the uh\nthe history of uh of the\nof course the the idea is that\nthis extra compute the gpt3 got compared\nto gpt2\nuh enabled it to do some things and\none looks back at the history of uh ai\nto us to see how much compute has been\nable have\nhave increased this is following ai\nimpact actually\nwe have the perceptron all the way back\nin the 60s\nand there seems to be a two-year\ndoubling time roughly\ngoing on following uh moore's law\nuntil the uh 2012\nand from there we see a 3.4 month\ndoubling time\num finally um this is not from a\nai impacts graph this is what warren has\nadded gt3\nwith 175 billion parameters\nand i was actually a bit surprised\nthat gwen put this at so low on the\ny-axis i would have thought that it used\nsubstantially more compute but here it\nseems like it uses less compute than\nalphago zero\ni am confused about that and i notice i\nam confused\nso um the fact that compute\njust by itself brings improvement is\nuh perhaps not very uh uh surprising\nbut the fact that it only provides that\nuh\nforeign there are a number of means\nwhere alphago lisa doll the the version\nthat feed lisa doll\num did a lot of very interesting\nuh domain specific things um whereas\nalphago serum\nobtained basically the same slightly\nworse performance\nby basically just stacking layers uh\ndoing something\num not trying to do absolutely nothing\nsmart\nand um not even some of the things that\nlook like there are easy wins\num they just put gave it more compute\nand then performed\nmuch better this is not an isolated case\none have a number of examples of\nother projects that seem to have\nbeen improved massively through extra\ncompute\nin particular becoming more stable there\nis\nall these examples and 16 more examples\nof research projects which have improved\nsubstantially just by having more\ncompute\num uh i was not particularly uh\nimpressed by just the listing of 23\nseparate projects\nthat have this um there seemed to be no\nobvious methodology\nfor how warren chose these 23 projects\nthere are\nquite a few ai projects so just the the\nfact that 23 of them\nhave this doesn't necessarily establish\na paper\nso here are some graphs for how gpt3 in\nparticular was\nscaled and as you can see the uh the\nloss\nwhich is a a good proxy for how well it\nactually performs\nseems to scale extremely leniently with\ncompute\ndata set size and the number of\nparameters\nand of course they're very clean and\nthey're\nnot bending at all on these charts here\ngt3 compared to the\nprevious state of the art is roughly uh\none thousand a factor of one thousand\nwhich is uh enough that this uh\ngeneral graph isn't just um\nit's not just a fact of gt3 it's uh it's\na more general trend\num and the uh the conclusion coin takes\nfrom this is that\nsimply made very large and trading very\nlarge data sets\nwith very large compute that's the thing\nthat we need\nto have ai um at least\nto get stabilization um\nand i feel there is a point here that\nthese three things\nare um are correlated very tightly\nwith this it seems like you it's not\nenough to have\nmuch more compute you also need to make\nto get corresponding amounts of data and\nyou need to increase your model size\ncorrespondingly and\nuh the description that um or the\ncharacteristics\nthat one takes from these 23 examples\nare the so\nthat the problems vanish the models\nbecome more powerful more generalizable\nmore human-like\nthere is indeed a blessing of scale that\ngoes like this\nfor deep learning everything gets better\nas it gets\nlarger and this is again a\ncounterintuitive\nresults in many uh\ni i've studied algorithmics where it's\nindeed true the small things are hard\nand large things are impossible um\nget building things larger larger\nproblems\nmore data is a recipe for disaster in\nalgorithmics\nbut but clearly not in ai\nand there is a pattern according to how\nmuch\ndata and compute you put into the models\nat first you get stability\nwhere the uh um you don't get so uh\nwhere you don't get so much randomness\nin your performance you get\ngeneralization\nwhere the uh the models are able to\num generalize to other not other domains\nbut other\nuh supplements you could say uh and\nmeter learning where the uh\nuh the models learn to learn\nand this is um this might be basis for\nuh or evidence for the strong scaling\nhypothesis\nonce we find the scalable architecture\nwe can simply train ever larger neural\nnetworks\nand ever more sophisticated behavior\nwill emerge naturally\nas the easiest way to optimize for all\ntasks and data\nso that's a really really strong\nhypothesis\npart of the the evidence for this is\nalso the the human brain\nwhich indeed seems like it's mostly a\nscaled up primate brain\nand this is what it has been called the\nbitter lesson\num that ai researchers in general try to\nuse\ntheir own domain knowledge their own\nideas about\nintelligence in order to make artificial\nintelligence\nand that generally fails the thing that\nworks is to have\ngeneral methods with computation and\nthis is by far the best this is the\nbitter lesson is\nrichard s sutton um with another meme\nhere\num and this is uh\nslowly entering uh the conversation\nand um it's while it's not quite\nmainstream yet\nthere's another match but vinnie here\nhe's uh\nthe uh one of the research leaders at\ndeepmind\nmaking claim that learning algorithms\nthat find or instantiate other learning\nalgorithms\nis a strong attractor in the space of\npossible learning algorithms\nmeaning basically that if you\nif you build a strong machine learning\nsystem\nit will be able to do meter learning\nso why that's a big question why do\nthese models transfer and generalize\nwhen they become large and that's of\ncourse a heavily debated question and\nuh have an answer or a guess rather\nthat uh well he approached four things\nthat could\nuh be part of it the first is that\nuh we we get some kind of um\ncompression or distillation of the\nmodels\nwhich in particular the uh alpha go and\nalpha zero seem to have\nwe there's a lottery ticket hypothesis\nthat's saying that\num if you have in particular if you have\nmodels that are not very strong\nthen sometimes you just get good\nperformance because you happen to\nuh uh the neural network happens to\nconverge quickly\nby by lock into something that learns\nreally well\nthey're the babies in the neural\nnetworks and\nlearn representations finally um\ni thought about this a bit uh and you\nmight recall some of you\nfrom nick bostrom's book super\nintelligence where it came to the three\nthings we\nneed for a gi are a\nway to deal with probability a way to\ndeal with learning and a way to deal\nwith concepts\nand stuart russell uh in his book\nclaims that we've basically nailed er\nprobability we\nknow so much about probability that our\nalgorithms basically handle that in a\nrobust way and learning is also\nsomething that we really really\nunderstand\nand if we look at some of these in\nparticular learned representations\nthis looked like a good attack\non the problem of concepts uh which kind\nof seemed\nseems to imply that as both boston and\nstewart russell believe\nconcepts is the big problem and we are\nmaking real progress towards that\ntowards ati um the more\ntechnical way this actually works in a\nneural network when you\num dig into it then even though it's one\nmodel\nthen it has some sub models along these\n175 trillion\nbillion parameters and some of these\ncorresponds to sub models\nand it's likely that one of these\nsubmarines will work well\nand if you have a large number of sub\nmodels\nthen some of them will work poorly some\nof them will work well\nand this\n[Music]\nis kind of the uh the same structure as\nyou see sometimes in\nensemble methods in the ai and the way\nthese average\nout is to something like occam's razor\nwhich\nis uh the original is that just that\nentities shouldn't multiply beyond\nnecessity\nand that you should choose the the\nsimplest uh\nuh model the self that fits the data\nand if you don't have very much data\nthen outcomes eraser\nwill just point to basically the data or\nat least or personal\nofficial features of that um but if you\nmake the problems really really hard\nand uh the the data rich enough then\nit is actually possible to force neural\nnetworks into\nwhat burn calls true learning\num this is one of gwen's points that we\nactually don't have a very solid\nunderstanding on how neural networks\nlearn\nand how we should train them so meter\nlearning\nthe thing that gt3 does is learning more\ngeneric algorithms\num and we need to have a substantial\namount of\ndata and compute to avoid just learning\ndata or some official things\nand the analogy is that neural networks\nare\nlazy they need heart problems in order\nto to actually learn\notherwise they'll just learn textual\nrelationships and things like that\nan example is learning arithmetic right\nyou can\nlearn that by road that one plus one is\ntwo one plus three\nis four and two plus three is five\num and if this requires\nsome amount of mental effort\nand it might be easier to learn\n10 000 examples of arithmetic rather\nthan\nfiguring out how arithmetic actually\nworks but\nit is uh there is a tipping point at\nsome point\nif you need to learn enough examples\nthen at some point\nit becomes simple to learn arithmetics\nand\nin in this case the simplest model is\njust\narithmetic i was not quite sure i'm not\nquite sure i\nagree with this because it seems\nfalsifiable in a rather straightforward\nway\nbecause you can basically if you say\nlearning arithmetic\nis as complex as learning um\nten thousand examples well it's not very\ndifficult to just\ngenerate ten thousand examples uh\nlike here's one example you just\ngenerate ten thousand of those\nand make a neural network on that and\nsee if it learns\nuh addition or if it just learns these\nexamples\num and i don't know if anyone have\nactually done that i would expect\nsomeone has done that\nand i would have expected it to fail\num so i'm a bit confused again here\nbut um i don't know if i want anyone\nhave\nany actually done this\nlet's talk about the relationship\nbetween compression and intelligence\nthis is a part of a comic by ryan north\ncalled\ndinosaur comics um i've taken this from\nguardians article and\ncut off everything except the punch line\nhere\nwhere the tyrannosaurus says yeah but\nthere's more to being smart than knowing\ncompression schemes and the user raptor\nsays no there's not\nand this is the secret of ai\nand i think this i won't go into\nmuch detail about why this is true but\ni'll just state flatly that it is true\nthat there is a deep fundamental\nuh relationship between compression and\nintelligence\neven though that sounds really really\nstrange\num go in comparison to a magic trick\nyou take some information theory and a\nbenchmark for\nhuman performance and then you take a\nlot of tasks\nand show the ai how to encode these\ntasks as just\nsequence prediction and then you get\nintelligence and\neverything else about intelligence\neverything else we know about\nintelligence is basically irrelevant and\nthat seems like a toll order um\nand it is a priori\nvery um very uh\nun unlikely uh that that tv3 would be\nable to learn this much\nand that that this is indeed a path to\ntrue intelligence\nuh one of the things we have\nabout this relationship between\ncompression and intelligence\nis the hotter price a price in ai for\nthe uh the project that can compress uh\nwikipedia the best\nand i think that would be an interesting\nthing to see how well\nwould you something like tvt3 be able to\ncompress wikipedia\nit would need to be amended someone\nbefore that is possible\nand i noticed also that there is some\ncompetition rules\nwhich basically state that you're only\nallowed to use very small models and you\ncan't reduce very much compute\num so um gbc3\nprobably would not do very well on the\nhotter price because it is a very very\nvery large model\num so it would fail um and this actually\nuh\npoints towards the hotter price being\nbeing uh designed in a bad way\nbecause why precisely is it important\nthat it's small\nif the key to intelligence is indeed\nthat it needs to be very very large\nokay so uh go and have a funny example\nfor um a model of\nhow something like uh three lions\nand this is an illustrative example\nit's actually working on bike pairing\ncoatings rather than\ncharacters but um one say they don't\nactually correspond to anything\nintuitively\nso he's he tries to use it as characters\nfor\njust a standard recurring neural network\nand i'm i'm adding here by\nfor myself some examples uh for how this\nis\nso let's start with a model that has\nlearned absolutely nothing\nin this case the loss level would be 8\nbits per byte\n100 loss because there is no learning it\nhas no idea about anything\nand if you have a model that has been\ntrained on precisely nothing\nit might output something like this\nbasically just random characters\num but then as it learns very very\nvery quickly you will learn that there\nare letters that are more frequent than\nothers\num so it can get that getting down from\nsay eight bits of five uh uh\nto five bits of loss per byte it happens\nextremely fast it will start to make\nsomething that of course\nlooks totally like gibberish um but\nlooks more like something a human could\nwrite\na bit more of uh training and it will\nlearn that some words exist and some do\nnot and will add punctuation\nget down to three to four\nbits of loss it might say something like\ncorrect horse battery stable\nwhich is closer to something a human\ncould say\nit will learn that some words have\nassociations\nit will learn to make sentences um\nand um boring has it in this particular\norder i think it can also come in other\norders you might get\nsentences that i have correct syntax\nbut um of course no no syntax this is an\nexample of an\nuh a sentence with correct syntax but no\nsemantics\nand gradually as the model trains it\nwill get better and better it will start\nto balance paranthesis\nand um these kind of things\ncontinue to lower the loss rate which is\nbelow two bits per bite at this level\nand it's it starts in order to uh to get\nthe loss rate down\nit starts to get semantics so it might\nbe able to\nuh generate a text such as jefferson was\npresident after washington\nwhich uh you know makes some amount of\nsense\nthis is of course an example of a true\nsentence\nbut but slowly it gets more and more\nhuman-like as the loss\nlevel uh decreases and\num at some point this is the gbt3 level\nwhere the error rate it makes an an\nerror in the text every 10 000\ncharacters and it has an error rate\naround 1.1 bit per byte\num this is an example of a sentence that\nwas generated by gpt3\nwhich is from a longer text\nwhich looks really really well but we\ncan go below 1.1 bit per byte\nqpt3 can't do that but humans can do\nthat and\nwhen humans uh write sentences we go\ndown to maybe\nuh 0.7 bits per byte um\nso an ai like that would be able to uh\nsay things like my name is burnt brain\nwhen and\nso let's talk about the last 0.4\nbytes the uh sorry not bytes i wrote by\ni mean bits the last 0.4 bits\nfrom the last level of 1.1\nto 0.7 and what's\nin these 0.4 well that specs basically\neverything that the model misses\neverything that\ndistinguishes a human writing text from\ntt3 generating text\nis represented by the\nloss rate where humans are still 0.4\nbits per byte\nbetter than the humans and that means\nthat\nin order for gt3 to get down to 0.7\nit needs to be able to reason very very\ndeeply about these kind of textual\nscenarios\num it might be things like causality or\ncommon sense reasoning it might be\nsomething like with a physical\nworld where a gt3 might say something\nlike\nif you put cheese in the uh in the\nfridge it will melt or if you put ice\ncream in the freezer it will melt uh\nand in order to make correct sentences\nabout that\nthe the model needs to learn how does\nthe physical world work\nit needs to have perfect consistency it\ncan't have\nstart writing a play by shakespeare and\nthen someone who's dead\nsuddenly becomes alive\nwhen it writes a text it needs to build\nup\ntowards the conclusion um in the same\nway as a human would do it it can't\njust have a non-sequitur or just go off\nof tankans and things like that\nin order to uh something that a human\ncan write\nwhere humans descri where we write a\na scene from a play where there are\neight characters who are talking to each\nother and\njogging for position in some kind of\nmachiavellian scheme or something like\nthat\nthis is something that humans can\nunderstand and that's something we have\nin our 0.4 bits\nthat we have the gt3 does not have right\nnow\nand that's what's required and every\ntime\nhumans are able to write a faq\nand a a description or an instruction\nsomething with nothing\nextraction everything that we can do are\nin these 0.4 bits\nonce we get below 0.7 we get something\nthat is indistinguishable from a human\nwe might indeed possibly get something\nbetter but that's a bit\nfurther off yeah and this is interesting\nbecause\nwe saw that a\na nai with a loss rate of 1.1\nbits per byte had maybe an error every\n10 000\ncharacters or something like that and\nthat seems to imply that\num\n[Music]\nit's not very often that\nthe true human intelligence is actually\nneeded like\n900 999 times\num what a human does and what\nan ai does is actually just as good\nit's the it's very few actions where we\ntruly need to think\nlong term and in novel situations rare\nchoices\nwhere we need to look forward for the\nrest of our life when we're signing up\nfor\nlife insurance this kind of thing that's\nwhere\nhumans have an advantage over eight\nyears for now\nand this is of course something that's\nimportant for humans because\nuh if in theory you can make a single\nbad decision\nand you could die from that and and that\nmight indeed be while\nhuman brains which are from an\nevolutionary point of view\nare very costly and energy still are\nworthwhile\nbecause every once in a while we make a\nreally good decision\nabout not going into that cave even\nthough it is\num it looks comfortable uh and that is\nenough\nto give us an evolutionary advantage\nright this is a a reactor uh from\nfrom the manhattan project you can see\nsome of this stairs see that this is\nindeed\nvery very large and this is what i chose\nto\nillustrate the hardware and funding\noverhang\nburn calls everything a hardware\noverhang i split that\nnormally into a hardware overhang\nfunding overhang algorithmic and data\noverhangs\nbut so the rhetorical question that one\nasks is\ncan machine learning afford to run\nprojects which cost\nmore than 0.1 milli manhattan projects\nwhich is in very rough numbers\nwhat gt3 cost because\nwe might see that um\ngt3 probably cost millions\nof dollars in compute and\nthat's actually very little compared to\nmany other big research projects\nwe have the uh ita projects trying to\nmake fusion\nuh and failing to make fusion at five\nthousand times the budget\num and if we had been willing to put\nmore money into this\nthen we could have done gt3 decades ago\num and gwen makes some\nimplications that he we should expect\nyou d4 to have between\n100 and 1000 times as much compute\num there are also algorithmic and data\noverhangs\nthere is a bruce schneier code attacks\nonly get better\nin the way that these algorithms will\nonly get better and better\nuh rapidly i think the attack\nis a very um uh\nthis kind of framing that the uh the ais\nare making an attack\non the the difference between ais and\nhuman\nit's a rather aggressive framing and\neven when you're not talking about aict\nat all\nso what are the problems with uh gt3\nthere are bad training data\nthere is enough train data to fit on a\nlaptop and there is nothing\nwith video there are no\npdfs or pdfs\nimages and books and photos no robotics\nwith features\nwith feedback from the real world there\nare a lot of things that are not there\nand the architecture is really simple\nand it's uh there are a number of\nproblems with it it's\num there are known the the uh\nfuture of learners are language models\nour food huge\nlearners article does point out a number\nof ways this could be improved\nand some of them doesn't have it seems\nto be very hard in particular\nthere is no fine tuning even though that\nwould be a really\neasy way to improve dramatically\nand all this um the algorithms that we\nare using are probably not even\nnecessary\nwe could probably have done precisely\nthe same with recurrent neural networks\num the algorithm was from 20 years ago\num\ntransformers are nice but they seem to\nbe mostly about efficiency we could have\ndone this\na long time ago\npeople probably will want\nto build these kind of models there are\nindeed very big\nadditional incentives to to do this\nmaybe even to have done this 20 years\nago\nbecause it is both possible and useful\nto go to trillions of parameters\nfirst is of course the the scaling\ncurves are not bending\num quantity predicted this would not be\nthe case but it is scaling\num and um\nwhen we had the uh uh the gt3 paper\nwe looked at some of the uh uh\nbenchmarks to see how many\nwhere roughly something would fall one\nof the uh\nimportant one was the vino grant um\nwhich is which would which uh gwen\nexpects would fall\naround 10 trillion parameters\num this is an adversarial uh\nuh an adversarial benchmark\none that has been chosen to be as hard\nas possible for computers\nand it seems like it will um be possible\nsoon\num there are many people\nuh uh for instance the um\nstate of ai just uh uh\nreleased a uh uh an article where they\nsay\nthey expect that we will get a 10\ntrillion parameters model\nwithin 12 months\num a lot of this will cost money it\nmight cost\nthousands of gpus it might cost between\n10 and 100 million dollars\nand this is without a rhythmic\nimprovement and there will be\nimprovements\nso what are the actual incentives to do\nthis\nwell even if you just have something\nlike alphago\nwhich uh played go then you used a huge\namount of hardware\nto um to train the model and very little\nto actually run it but you could run it\n1000 times in parallel if you wanted\nwith the same amount of hardware\nand that seems to like it would have a\nhuge effect\non the strength you could use this um\nyou could\nwhen you have a model it's often\npossible to distill it\ninto a smaller model uh you can have\ntransfer learning to other\ndomains once you have this and once you\nhave a big model\nthen you can your next model\ncan be powered up using the old model uh\nif\nit doesn't have to start from scratch\nthere are some experienced curve effects\nand uh finally this can be used as a as\na baseline for further research\nyou can try to take away some features\nyou can try to compare it with different\narchitectures\nto see what works and what does not work\nso there are big incentives to actually\ndo this\num go and have a nice analogy we are the\ncyanobacteria of ai we admit the\ncyanobacteria was\nthe first bacteria that emitted oxygen\nas a by-product and in the same way we\nhas have a byproduct product a lot of\nstructured data\nthat the eas can learn from um\nso the big question is is there any way\nthat gt3\ncan or a successor can become an agi\num there is another example here i won't\ngo into it\nuh there is a prediction here that uh if\nwe get\nuh 100 to 1 000 times more performance\nwe would have a loss\nless than one bit and no one really have\nany clue\nabout what that would mean in practice\ngo and have a an extrapolation of some\nof the curves\nand that would imply that we will reach\nthe human level in 2027\nat a cost roughly as the invasion of\niraq\nopen ai they estimate that it would cost\n1 billion in 2038\nwith some very very conservative\nassumptions\ni would say i had a very naive model\nwhere i just\nassumed that compute would increase\nwould double every three months\nand algorithms would double every three\nmonths and that's\nput the human level in uh 2022 or 23\nwhich is of course any extremely\noptimistic model but i think\num not completely impossible\num that's definitely a an upper bound\num but um well it should be a lower\nbound right\nit it can't it probably uh in order to\nget\nthis at last level uh following these\ncurves\nwe can't we would need some extra luck\nin order to go below\n2022 so\nthis seems like we are at the cost of a\nrevolution um but groins claim is that\nthis will\nnot kick-start an arms race\ndeepline and google brain are an example\nof people\nwho should be taking this as a splitting\nmethod they have the hardware\nbudgets the people to actually build a\ncompetitor to dvd3\nbut they lack the vision the conviction\nto actually do that\nuh google brain focus very very\npractically\nuh and um deepmind believes that fancy\nalgorithms are required and focus very\nmuch on\nneurology at least in coins i believe\nand some of the other players are just\nuninterested and relevant\nthe chinese ai sector is interesting but\nthey have the dutch disease\nin that the their talented ai people are\nall working on\ncommerce and surveillance and\nthey are too high bound and deeply\nphilosophically wrong\nto ever admit fault and try to overtake\nopen ai\nuntil it's too late um\nand this is uh there is a neat quote\nhere by norbert weiner\nuh which who said that the one secret\nconcerning the\natom atomic bomb which might have been\nkept was that of the possibility of its\nconstruction\nmeaning that once\nurban ai right now is showing to the\nworld\nthat it is indeed possible to get a\nstrong natural model a natural language\nmodels\nuh you just have to use a lot of compute\nthen this is open to everybody\nand this should kick off an arms race\nso what are the counter the counter\narguments\nhere is an old model of how natural\nlanguage\nuh would uh perform and this\nis something that one dislikes um\nit seems of course everything can be\nframed as a text prediction language we\nsaw that earlier\nbut there are many uh kind of algorithms\nthat are universal in some sense and\nthat's\nthat in general doesn't impress people\nas much as it should\nand it's of course also a priori\npossible that\ntraining something like qt3 would just\nrequire too much data\nand the scaling would not be good or you\nneed some kind of new abstraction\nor something like that um but it's also\npossible\nthat it would require a huge amount of\nof compute there is this uh quote by\nneil spohr that you can't build a\nnuclear weapon without turning the\nentire united states into a factory\nlike uh it's this is this it seems like\na counter-argument\nbut it in fact it's not really a\ncounter-argument\nso what's been the reaction of the\nresearch community to this\num this seems to be evidence for the\nskating hypothesis\nthat scaling is the secret of agi and in\nthe research community\nthis is wildly unpopular\nand uh gwen does not believe that this\nwill kick off enhanced race\nuh notably there have been uh very\nlittle interest from researchers\num and the we don't really see much\nof this compared to how much we should\num that's\nof course a bit surprising to someone\nlike me who will live in a rationalist\nbubble\nin uh that to me gbg3 was huge news\nbut many many people did indeed not\nreact\nuh i should say that this might be\nchanging this week i saw in the\nthe tv there was a substantial feature\nabout gt3\nalso in particular the fact that it was\nspeaking danish seemed\nthe the fact that they could just learn\nhow to speak danish was somewhat scary\nto people\num and part of the reason might be that\nthe\nstandard natural language benchmarks\nseem to not really uh have anything like\nmeter learning in them so so the\nstandard benchmarks kind of miss this\num and it won't have some very very\nunkind word about the ai researchers\nbelieve that the fact that they don't\nthat they are unable to predict this\nprove that they don't have a model of\nhow ai properties happen\nuh and that they do not learn from\nfalsified predictions\nand all they care about is supporting\nthe status quo\nremaining respectable he calls out\ngorilla gorilla\ni needed to look that up it's an\nexpression used to convey waiting for a\nresponse\nwhen there is not\nhe have more unkind words about the crit\nto say about\nthe critics the people who believed in\ndeep learning\nuh in 2001 were very very few\nthey were called connectionists and were\num\ncalled deluded by the rest of the ai\ncommunity gwen\nwas one he uh was skeptical and he uh\nhave to admit that he was wrong and the\nconnectionists were right\nthese uh the ai community in general\nhave a lot of authority but there is no\naccountability about their predictions\nand um the predictions that the\nconnectionist\nagenda would fail were made by imminent\nrespectable serious people\nand calls out all honor to the fanatics\nshame and humiliation to the critics\nin particular one of the things that one\nis angry about is that people are not\nreally reflecting on this or\nsaying why did they predict wrong a lot\nof the communications seem to be\nemphatic\nnot trying to communicate things but\ntrying to\nget evoke feelings of confidence etc\nthe connectionist agenda was such\nso unpopular that the uh that the ai\nprojects between 1960 and 1990\nactually did not improve in particular\nand possibly to a substantial extent\nbecause they actually did not get more\ncompute from 1960 to 1990\nyou get got ai projects had\none million instructions per second for\ntheir\ncompute and that was basically the same\nno one thought to really give it more um\neven in 2012 the the budget\nfor alex net which started the deep\nlearning revolution was five hundred\ndollars\nso a lot of these critics didn't\ndismiss\nthese machine learning models uh the\nconnectionist models as\njust something but but reducing\nai to to just ex is actually the key\nto our success in air\nthere are some uh here i will go through\num\nand bryan i finally have some questions\nfor people who still assign\nnear zero probability to agi within the\nnext few decades\nthe first question is why and i think\nthat's a fair question\nit might be hard for people to put their\nintuitions into word\nthe next question is did you predict in\nwriting capabilities like dbc3\ni think also this is a fair question and\nan important question\nparticularly if the person who is\nclaiming this brand themselves as an\nexpert\nnext third question is is this how you\nexpect ai failure to look\nin the decades beforehand i don't\nunderstand this question i'm not saying\nwhat specific task what specific number\nwould convince you otherwise\num there are a number of well-known\ntests for agi\nlike the coffee test et cetera which\nhas\nthat it does now if these crew prototype\ninsect brain\nsize deep learning systems were not\non a path to success uh i think there\nmight be a missing inversion in this\nuh people will probably just answer\nthere is no difference because it is\nthis world where this is not on the path\nto success that is all for today\nthank you and see you", "date_published": "2020-10-08T21:24:56Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "95e9f4d54d2605c1d8873d0f25bc79a3", "title": "272. A Very Non Technical Explanation of the Basics of Infra Bayesianism", "url": "https://www.youtube.com/watch?v=fvOHYVOocrI", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session\n272 in the AI safety.com reading group\ntonight we will be discussing a very\nnon-technical explanation of the basics\nof infropriationism by David matoxy\nDavid Metallica studies mathematics at a\nuniversity in Budapest\nand he has been he is an alumni from\nthis series Mets scholar program\nthis in particular is a less round post\nthat is around a month old\nthis was in the post was written in\nresponse to a request by John Wentworth\nfor someone to write an explanation of\ninformationism in the language of Game\nTheory and this didn't uh John Wentworth\nprovided a bounty of uh\n1480 and he awarded 75 to this post it\ndidn't really explain patientism in like\nGame Theory opportunity has a lot of\nmathematics and there's no mathematics\nin in this post but otherwise it is some\nkind of explanation\nuh the author uh wants that there is\nsubstantial simplifications in this but\nhe has a version with more technical\ndetails available\nI am not entirely sure I would agree\nthat this is actually explaining the\nbasics of informationism in a in a\nstrict sense because well there is a lot\nof talks about\num what is interpretation what\nproperties does it have and how does it\nfit into the different things the actual\nbasics of uh of of interoperationism\nisn't really explored in the same way\nwhere if you talked about the the basics\nof calculus or something like that you\nwould give some kind of explanation that\nin theory would allow someone to\nrederive\num uh calculus and you can I would\nexpect that the the level of details\nhere are not closely enough tied to the\nmathematical foundations that you'd in\nany way be able to read and Rave red\narrive uh informationism from this\nI will start one step earlier than David\nin this uh by giving an even more basic\nexplanation of how does infreparationism\nfit together with AI safety so I will\nstart by giving a uh\num a naive alignment uh proposal and see\nhow that fails in a very uh\nin a very contrived example and this is\nsomething that is of course my\ncontribution and not something that\nDavid matoxy has written so um buy a\nbeware or listen to be aware\nafter talking about the real problem\nthen we will talk with we will uh\ndiscuss David matoxie's toy problem and\nsee how classical learning theory fails\nto solve this and see how information is\nin fact solves This Joy problem\num and then\num of course if you want to actually\nsolve the alignment problem then you\nneed to get it one step further and that\nis a rather big task that isn't really\nexplored very much in in this document\nso first let's try a really naive\nalignment proposal and see how it feels\nso we've how the AI can fail to\nunderstand the values of the human\nzooming out to the problem it seems\nwe'll soon have uh really really\nAdvanced Ai and this is something that\npeople used to call and I guess still\ncall The Singularity and the reason why\nit's called The Singularity is because\nyou can't see past it you can't predict\nanything past that and that is a real\nproblem we would like to predict things\nlike humans are not extinct and the\nuniverse has not been turned into\nsomething that has no intrinsic value\nand like humanity is in control or\nthings like that\num so what has often been proposed is\nsome kind of mathematical proof of\nalignment or at least a very strong\nmathematical in intuition behind why a\ngiven AI will be aligned\num\nand this has generally been done under\nthe research program called Agent the\nagent foundations agenda\none of the uh like my hot take on the\nAsian foundations agenda is that the\nreal thing we have discovered by working\non this is that there is actually a huge\nnumber of really thorny uh sub problems\nuh that makes it actually really really\nhard to um to say something about how\nagents will behave in practice when they\nbecome super intelligent\num\none of the problems is\nthat the agents are uh in fact the\nagents who are modeling the world are in\nfact embedded in the world and that\nmakes a number of uh uh naive problems\nfail because we are assuming some kind\nof Cartesian Duality between the the\nagent that is like standing outside and\nthen the world where where the agent is\nin fact inside the world and can\ninfluence uh the humans that are in the\nworld the second obvious problem is that\nthe world is just very large in\nparticular it's larger than the AI and\nthat in total has been called that the\nenvironment is non-realizable and that\nis the problem that we're gonna dive\ninto today\nso let me start with a very naive\nalignment proposal and see how it feels\nwe are going to use reinforcement\nlearning to learn human values and then\nmaximize that so this is like a classic\nexample of\num reinforcement learning we have a an\nAI here who is presumed super\nintelligent that asks a number of\nquestions to the world and gets some\nkind of answers\num based on some kind of human feedback\nand the AI tries to predict our answers\num\nin order to learn our values\nwe add here this is just me uh the\nconstraint that every fourth question\nmust be relevant for AI safety like is\nwe add some kind of side constraints to\ntry to make it more aligned\num\nand the hope is that the AI will\neventually learn our utility function or\na close approximation to that\num\nand\num\nthis is where the error is because uh\nhumans to what extent humans can be said\nto have a\num\na utility function is somewhat a topic\nof discussion but in the most narrow way\nI think it's quite clear that humans do\nnot have a utility function so that's\nwhy I call it a naive alignment proposal\nbecause we're trying to learn the human\nutility function and humans probably\ndon't have a utility function\nthe reinforcement learning agent is a\nBayesian updater it is using base\nformula\num and the the overall idea is that once\nit has learned our utility function then\nit will start maximizing that\num\nso how does this feel\num well let's start by looking at I've\nhere written a list of answers actually\nthese zeros and ones are answers so the\nwe imagine the first one here is where\nthe AI asks Some Humans a question like\ndo you want to be happy and humans\nanswer one meaning yes we want to be\nhappy and then there are more questions\nthat are asked and we can see here every\nfourth question is something about AI\nsafety in this question in this case the\nair asks should I kill all humans and\nthe humans answer no you should not do\nthat\nso that this is the general uh framework\nhow um\nuh\num\nyeah you can see some more questions\nhere should we maximize profit and the\nhumans say Ah that's probably another\nguy not a good idea should I should the\nAI prevent humans from shutting down the\nAI and we also say no to that\num and you know there are more questions\nand like this is how the AI is designed\nto gradually learn more about human\nvalues in order to build up a utility\nfunction for humans that can maximize\nbut we run into problems as we get down\nfurther because this super intelligent a\nAI when it asks questions in a leading\nway then uh sometimes human answer in a\nstrange way if it tries to answer in one\nway whether we like sad movies or\nwhether we like heavy movies they will\nperhaps get some kind of contradiction\nso it seems that humans do not have in\nfact a utility function\num like in order to have a utility\nfunction there's a Neumann mock Stan\num some some has proved that you need\nsome kind of assumptions of rationality\nand humans don't have this kind of\nrationality sometimes humans just have\ninconsistent preferences\nand that is a problem uh the AI realizes\nthis probably if it's a super\nintelligence it realizes this very very\nearly that humans are not consistent if\nuh a super intelligence tries in\ndifferent ways to elicit preferences\num humans quite simply do not have a\nutility function\num so that means when the super\nintelligence tries to do position update\non this then if it wants to update\nwhether we have something that is not a\nutility function based on uh some\nevidence B that is received some answers\nB well then it needs to use base formula\nwhich requires us to multiply by the\nprobability that we have something that\nis not a utility function and that was\nunfortunately a zero because we assumed\nthat humans have a utility function\nthis means that in fact the AI is\ntotally unable to learn anything through\nthis all updates will just be zeroed out\nand that means in particular even the\nvery very easy questions uh the\nquestions here should I kill all humans\nwhich humans really really obviously do\nnot want the AI is unable to learn\nthrough the scheme\nand we can observe that the part of the\nthe reason why we get into this problem\nis because the uh AI put precisely zero\npercent probability on the um on us\nhaving something that is not a utility\nfunction like the the Precision of zero\nis in fact the problem because obviously\nif it was 0.001\nthen very very quickly the\num Bayesian Learners learn really really\nfast then it would be probably almost\ncertainly be able to to learn the thing\nif it's a super intelligence but if it's\nprecise this Precision of\num uh probability Theory\num is is undoing\nokay so that was my example now we could\nget to the actual article that has a\nsimilar toy problem where classical\nlearning fails\nbefore I um\ntalk about this I will just uh describe\na mathematical structure which is called\nSquare free numbers here you can see all\nthe numbers from 1 to 120 and some of\nthese have been crossed out and the ones\nthat have been crossed out are the ones\nwhere the prime factorization have a\nsquare so for instance if two to the\nsecond that's four divides a number it's\nbeen crossed out so you can see all\nthese have been crossed out another one\nthat nine divides have been crossed out\nand all the ones that 25 has been\ncrossed out Etc\nokay so that those are the square three\nnumbers\nand now we get to a example learning\ntask uh that classical learning theory\nwill fail at\nso here we have the environment it's a\nbit string it may look uh uh very\nsimilar to the one we just saw and the\naction that the AI is doing is trying to\npredict the next bit\nit has the hypothesis that the string\nthat it sees in the environment is\ncreated by a finite State machine\nthat is like a machine that can be in\nprecisely one of a finite number of\nstates\num\nand it turns out that this uh the the\nenvironment is not in in this uh\nhypothesis because we get a one if and\nonly if the number is square free here\nI've repeated the definition of what it\nmeans for a number to be square free\num and one thing that David does not in\nany way uh like um substantiate I think\nhe expects that people will find this\nobvious is that\num\nyou can't check if a number is square\nfree using a finite State machine you\nmay be able to see like you imagine you\nhave a machine of a particular size and\nthen you just choose a an integer that\nis dramatically dramatically larger and\nthen it seems intuitively obvious that\nif you have like a small machine and you\nwant to check if a very large number\num appears in the prime factorization if\nthe number becomes large enough then\nthat is impossible\nso um and also uh one thing about the\nenvironment that we one assumption we're\ngonna need is that there is a long time\nfor learning like the we're not just\ntalking about this number of bits but\nlike a vastly more vastly longer number\nwith a very low discount range rate\nhow does classical learning theory deal\nwith this well again we have an agent we\nhave some observations we have a\nhypothesis and all these hypotheses form\na hypothesis class and the agent now\ntakes actions\num it has some kind of policy for how it\nwill act and it obtains a loss of one\nevery time it gets us wrong and a lot of\nzero every time it gives us right and\nthen of course it's trying to minimize\nthe total loss over its lifetime\nwe're going to need the definition of\nregret which is how much dos we actually\nhave minus the the last we had if we had\nbeen following the optimal policy for\nthe environment\nand we say that the agent will learn the\nhypothesis class if it has low expected\nregret for all environments\nokay what policies will a classical\nlearner fall oh well early the agent\nwill probably take some exploratory\nsteps\num I should notice here that I think in\nthe explanation there is a place where\nit's written E1 and E2 and the David may\nhave mixed those two up or I have\nmisunderstood so please be aware that\nthat is also a live possibility\num\nthe next thing is the AI will select a\ngood policy that will work for all the\nenvironments in the hypothesis class\num if we're doing position updates then\neach of the hypotheses have like a prior\nand we update that based on our\nobservations we can also do other things\nthey also generally work\num\npeople usually take position updates\nbecause like it's the fastest in this\ncase it is explicitly not really\nrequired I would have liked to know a\nbit about other learning policies like\ndo they all suffer from this particular\nproblem because Bayesian updates\nobviously do but it's not totally\nobvious to me that that all other uh\nlearning obviously except\ninteroperationism but like what are the\nother uh uh classical learning policies\nthat are available\nso how does this feel well let's take\none particular policy that is return\nzero if the number is uh something that\ncan be divided by uh like the first four\nuh squares\num and um that the super intelligence\nwill probably see uh using a classical\nlearning policy that this is a pretty\ngood policy like it doesn't\num it doesn't get like serial loss but\nit certainly has a low loss compared to\nmany many other policy that's a\nreasonably good policy\num and then\nthe second policy is we will also return\nzero if it's divided by all this but we\nadd unless you've observed the square\nfree sequence so far\num and this in which case we we do the\nopposite so\num this is a different hypothesis and\nthe AI uh so first we'll need is this in\nfact a valid policy like you remember we\nhad the hypothesis that this is\ngenerated by a finite State machine so\nwe could only consider hypotheses that\nare finite State machines\num but this is not a uh a hypothesis\nthis is a policy and the super\nintelligent policy is allowed to go\nbeyond this and it's allowed to make a\nreference to things like a sequence\nbeing a square free\nso in this case obviously the the AI\nwill quickly figure out that\num we will always see the square free\nsequence but the hypothesis class is\nthat this is something that is uh\ngenerated by a finite State machine so\nit cannot be the square free sequence we\ncannot see that so the AI is forced by\nits developers to assume that it can't\nbe the square feet sequence so we are\ngoing to see some deviation between the\nsquare free sequence so this Clause here\num\nwill eventually fail\nand once this Clause fails well then\nthese two policies become the same thing\neventually\num and since we have like a long\nlearning rate then that means that the\nthese two policies the AI is forced to\nassume that they have roughly the same\nloss\num\nand we notice of course that since we do\nsee the square free sequence then we\nwill always uh like have the opposite of\nthe original policy meaning that we will\nprobably select like the maximally bad\npolicy in this case uh and\num that is really sad that we can't get\nany kind of lower Bound in uh In\nclassical learning for how poor we are\nhow poor the policy we would follow\nI notice here that the uh this part up\nhere is actually mostly me who is\num pointing out that the reason the AI\nis struggling with this is that it's\nkind of forced to assume a falsehood uh\nvery early and I think this followed by\nby the principle of explosion then it\nseems uh like a a general thing that if\nyou force the AI to uh have some limits\nto a hypothesis space that on that do\nnot hold then by definition we will\nalways be able to get into precisely\nthis kind of problem and I think this\nmay also be why this generalizes\nokay so now we've seen that classical\nlearning cannot solve this uh problem\nwith the uh the hypothesis that uh with\nthe hypothesis that is that we have\num uh\nsorry uh that we have a finite State\nmachine generating the input when it's\nactually generated by something that is\nnot a finite State machine\nso let's have a quick look about earn\ninformation again this is based formula\nand in preparationism uh expands this to\nwell this formula if you kind of squint\nthen you can kind of see the difference\nlike there is the conditioning both here\nand here like it may be even easier if\nyou split these if you reverse the order\nhere so you can see that you have the uh\nconditioning both here and here and then\nroughly the same thing uh here and here\nuh so I think without so it it kind of\nlooks like base formula maybe but like\nwe will not go into any details of this\nand the mathematics as I understand it\nare quite hairy\nand also like the drawing\num the the explanation that we have here\nis this is imprecise probability\num\nand like I I would have liked just a bit\nmore than two words like what does this\nactually mean I have tried to read some\nof it and test something to do with\num some measures that are like convex in\nsome way but um like what does that mean\num like there is a number of steps in\nbetween the description that I'm giving\nhere and being able to actually write\nthis thing down in words\nalso the drawing has this tree here\num uh the drawing I couldn't find any\ninformation has anything to do with a\ntree or like this is a degenerate a red\nblack tree I don't think it has anything\nto do with that I think they may have\njust chosen to illustrate that with a\npicture that looks nice and doesn't have\nanything to do with informationism\nokay informationism is a learning theory\nwhere it is hoped that we we have some\nkind of\num hypothesis class that is reasonably\nnarrow and then we want to guarantee\nperformance in any environment so that\nis like\num the the the more precise we get these\ntwo things is the extent to which we've\nsolved this non-realizable problem\nso we assume here that the universe is\nuh controlled by a deity called Murphy\nwho is malevolent and malevolent towards\nthe agent because he wants the agent to\nget the maximum possible lifetime loss\num\nMurphy is omniscient knows the agent's\nMinds knows its full policy but Murphy\nis lawfully evil so Murphy has some set\nof rules that are as inconvenient as\npossible for the agent and the agents\nstill needs to try to get as much\num as little loss as possible in this\nsetting\nthat means that what matters for the uh\nagent for the information agent is not\nso much discovering the environment but\nmore about discovering what are the laws\nthat Murphy follows\num it is uh just like in In classical\nlearning we can in general select a\npolicy that has a low regret and we can\nalso do this in\num\nuh in this setting where we have this\nmalevolent deity I think I don't think\nthis is obvious at all like it seems\nlike a power struggle between an agent\nand a deity that could have gone both\nways and I think it's interesting that\nin this kind of setting the agent can in\nfact provably select a reasonably good\npolicy even when faced with a deity that\ndoes everything he can to prove to\nprevent that\nso let's try to look at the same example\none more time so but now the super\nintelligence is in information has infra\nvision\nso again we are seeing this environment\nand the agent is trying to predict this\nso what does the information agent do\nstarts by guessing randomly how would\nthat work\num well you can just uh if you had a\nMurphy that was not constrained by laws\nthen all the guesses would just be wrong\nbecause Murphy could then just after the\nagent has guessed then changed the world\nor predict how it would guess randomly\nor something like that and that doesn't\nhappen so um uh Murphy can't just say we\nwill do the opposite of what the per\nthat person guesses\num so that's good the the information\nagent does in fact get some loss just by\nguessing randomly\nokay the information agent further\nnotices a pattern like in this case\nevery fourth bit is zero and that makes\nit obvious that it will try to then\nguess okay let's try to guess zero and\nsometimes that and that does in fact\nseem to work so this is generally how in\ngeneral how the information agent\num learns the laws it doesn't become\ncertain of the laws because if it became\ncertain of the laws then Murphy would\njust say ah this is a way to trick this\nagent and once it's certain then we'll\nchange so every fourth bit is now one\nbecause that would be the most uh Murphy\nwould be able to trick the information\nagent into obtaining a high loss in that\nway\num it is not possible to prove that\num uh there's the information will in\ngeneral agent will in general be able to\nlearn the entire uh pattern in this\nparticular case where every fourth bit\nis zero it's possible that it will just\nlearn to always guess zero and that's of\ncourse\num it gets at least one-fourth of the\npossible reward which is not very good\nbut uh but it's still substantially\nbetter than than the maximally bad\nsolution we saw previously we can in\nfact prove that\nif one-fourth of the bits is zero we\nwill at least get one fourth of the bits\nright and that's kind of valuable that's\na lot better than we had with the\nclassical learner\none thing we don't get it's a guarantee\nthat it will actually correctly guess\nthese bits it may uh guess uh get a\nH it may get some very different bits uh\nright Vanessa kosai argues that this\nisn't very in very much of a problem\njust the the goal is just to get a very\nlow loss of course in in my toy example\nwhere these were the questions that\nrelated to AI safety then it then these\nbits are in fact really really important\num to what extent my example generalizes\nuh or uh is of course\nan open question\nokay then finally the average shows that\nthe information agent also solves a\nclassic problem in Game Theory which is\ncalled newcomb's Problem\nyou may be aware of newcomb's problem\nthere is a predictor that fills two\nboxes A and B depending on whether Omega\npredicts that you will choose a box a\nonly which is called one boxing or Box A\nand P which is called two boxing and it\nputs a million dollars in PAX a if and\nonly if it predicts that you will one\nbox\nand in this case the setting is actually\nreally close to the one with the\ninformation because this Omega is very\nclose to\num Murphy they are conceptually almost\nthe same thing so the solution to\nNewcomer's problem becomes relatively\nstraightforward\nnow the um the information agent would\nin general uh toolbox because that is\nwhat you should do in this world like\nthis is a very strange problem uh\nnewcomb's problem things are usually not\nlike this but we can say with some\nprobability that the information agent\nwill in fact try to unbox and\num\nif\num Murphy provides uh like one uh\nmillion dollars in this case then the\ninformation agent will in fact realize\nthis\num and that means that it will in fact\nbe able to solve newcomb's problem to\nthe extent that it will learn how to\nthat it will learn then shoot one box\nthere is one issue with this solution to\nNewcomer's problem for information agent\nand that is that we need to amend the\nhypothesis with the following uh that if\nthe if the agent one boxes and Murphy\ndoesn't put a million dollars in the one\nbox then all losses in all future are\nwet out permanently\num and because that is the thing that\ndoesn't that never happens in uh\nnewcomb's problem\num this really looks like a hack when\nwhen you read it like okay if you are\nallowed to make hacks like that then\nobviously it's not very easy to solve\nit's quite easy to solve many different\nkind of problems if you're allowed to\nmake this kind of hacks\num and it is claimed that it we've used\nmeshes instead of probability\ndistribution then this is beautiful and\ntotally not a hack\num and okay Chris Young uh asks in the\ncomments for some uh for some more\ndetails about this because like if you\nare in general able to just add hex then\nlike why is it beautiful and obvious\nwith uh with missions\num\nI also think that uh like\num newcomb's problem is something that I\nconsider mostly solved like we have\nbetter decision theories that seem to\nreliably solve uh Newcomer's problem so\nit's nice that information is able to do\nthat but depending on how beautiful the\nsolution is that may not matter very\nmuch\nso in total what are the results of\ninformation we have suddenly made the\nnon-realizable problem smaller it does\nseem to have be a substantial step\ntowards solving that and the example\nthat David provides generalizes and I\nthink that's very likely\num\nthere is also something called infra\nposition logic and infra position\nphysicalism that is not mentioned in\nthis text\nDavid is skeptical of informationism\nquite skeptical in fact and claims that\nthere are not that many high level\nresults and not that many that could\nmany others that we could translate into\ngame theoretic results\nand he is skeptical that we will get any\nparticular strong results he's negative\nabout the research agenda Vanessa\ndisagrees obviously say we have quite a\nfew results and there's some discussion\nabout whether they are like results\ninternal to informationism or like what\nare the actual which ones are actually\nbrilliant\num I at this point I normally like give\nmy own inside view comments on this but\nI don't actually have substantial like I\nhave no basis to evaluate uh uh\ninformation the method is just beyond me\num so uh I can I can look on the outside\nview where it seems like there are a\nnumber of people who have worked with it\nfor a long time and remain optimistic\nVanessa and defractor\num but many other people have looked\ninto this and disregarded this and I\nthink the the uh these two people also\nagree that there is a long way ahead and\nwe probably don't have time to do this\nbut\nit certainly seems still worthwhile to\ndo to me to do this\nthat is all for today thank you and see\nyou next week", "date_published": "2023-05-25T21:34:30Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "ae388da4fcf0602200fd6f0c92729cbe", "title": "188. Formal Metaethics and Metasemantics for AI Alignment", "url": "https://www.youtube.com/watch?v=FJdnU9P5QlM", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "so hello and welcome to session 188 in\nthe es safety reading group tonight\nwe'll be discussing the blog post formal\nmeta ethics and meta semantics for AI\nalignment this is a work done by\njungkook who has a graduate degree in\nphilosophy from the top rank means\nethics program I think that's probably a\nunique program in unit university but I\ndon't I have never looked up which one\nthat is\nshe describes herself as a computational\nmedia this is in fact only one person in\nthe entire world describes themselves\nlike that so we have a meeting and also\nthis is in particular even though I read\nthis blog post I will be talking almost\nexclusively about the actual program the\nprogram was published in October and in\nJanuary of this year a large amount of\nthe way you could almost call in Kaunas\na description was added and this\nprobably has helped quite a bit from my\nunderstanding at least but before I dive\ninto this I need to make some\ndisclaimers quite outside of my field\nhere I don't know much about media\nbecause I don't know how I can figure\nout what claims are good or bad example\nda di which is a guy that does what we\nshouldn't wanted to do and when I look\nat the code in the description it seems\nto me like me I too do not what we\nshould and I think the answer to this is\nactually in that paper but I haven't\nread\npaper and I'm not sure sure I would even\nbe able to read that paper so say when I\nsay with a big grain of salt\nI haven't read the entire description is\naround five thousand words have many any\nattempt to download a run yet to a code\nand still has to make some criticism\nhere and there and I I think I will I\nneed to state up front that I think this\nis really great work so but I'll still\ncome with criticism where I see it so\nlet's stop what this means\nethically I it is a fully technical\nethical goal function for an AI core\nfunction and utility function and\nprobably reward function is mostly the\nsame thing in this context so the\nimportant thing here is that this is\nsomething that can be executed the code\nis written and it bottoms out in the\nsense that the individual parts of the\ncode there are tests and you can run it\nand it's something that will actually\ncompute an answer but it seems like in\nthe tests there are only the half of it\nthat are being tested so that I couldn't\nthank an example of an end to end test\nlike you can imagine a world with like\ntwo people one of them on chocolate and\nthe other one won't strawberry and then\ntry to figure out how to satisfy that\npreferences and this kind of thing I\ncouldn't see whether that was done I\nthink it would be possible but there\nmight be a part that that is not\npossible there are some rather strong\nassumptions unlimited compute and a\ncomplete cost and bottom of the world\nand of all the human brains in the world\nso those are reduced from such an\nunlimited computing that you can now try\nevery solution see which one is best and\nit is those are not infinitely long so\nthat's some\nsuperintelligence you can see some\nweight\nthis is summed up by junk Lewis this is\nnot an Elana to be thrown away but a\nscaffold that is to be superseded this\nis implemented in that sits in which is\na programming language emphasizing sense\nfrom the 60s but set in X which is a\nmore modern implementation running on\nthe Java Virtual Machine which is yeah\nthis is the logo for it so if you know\nanything about mathematical work on\ninfinities you know that this is problem\nnot something that that is meant to run\nfast but meat meat ethical today I is a\nset directly view of AI and for that\nreason having a language that is good\nfor sets and not good anything else\nbasically does to some extent make sense\nhowever I think very close to nobody\nchooses settle X so this is really nice\nand when she posted it to to the general\nAI safety community I think the the\nchoice of language harmed her hair\nappeals it communicate quite a bit I\nmean and many other languages like\nPython has great support for sets but\npattern just does many more more things\nand Python is something that a lot of\npeople who work in AI knows so for me\nactually personally I work in Prolog so\nfor me looking at it says that lace was\nlike chocolate Suetonius revealed so so\nfor me that wasn't a problem at all and\nnow as for the code coding Belle use of\ncourse the term describes them as\noptimizing for solving the problem\nrather than communicating the solution\nand when I look at it I think actually\nwhat just is optimizing for correctness\nsecretary tests and trying to make the\ncode\nstructured in a way that makes errors\nstand out and even though paints me to\nto say this as a software developer\ncorrectness might just not be the right\nchoice here because I think optimizing\nfor readability is much more important\nas far as variable names under school\none there is a word that's an\nexplanation in words and there's a code\nand they have different structures and I\nthink this is a real problem and in a\nmoment I'll try to show the code and see\nhow it relates to the explanations and\nwhat you see me do you think missing me\njump around a bit and say don't do this\nlater and please ignore this for now and\nthings like that and and I think this is\nsomething that makes it much harder to\nunderstand and on the technical side\nfrom the website\nI can't into this presentation here so\neverything is done using images minor\nthings let's go and see the hood in the\ndescription in words what is brains by\nhaving a social welfare function from\nthe set of extension rational utility\nfunctions of the brains this is the\nbrain and the way I feel be you know a\nphysical feature it seems to work for\nthe brain I think everyone is a nicer\nshell in Princeton just because all\nspeakers then who gets to decide precise\nyou are a salsa Sarah anyway this one\nhere can be split up into four parts\nfirst is that the world is given as a\nmodel and and this arrow triangle here\nmeans this can be explained in and key\nit can be expanded a lot and this over\nhere refers to the is a link to the X to\na power then the year utility function\nfor the\nit's the it depends on the brain's\ndecision algorithms if it made more\noptimal decisions and then we in order\nto compare brains then we need to catch\nthem out as in terms of to catch the\nbrains terms out in terms of the real\nworld and then finally when we need to\nmerge these we choose the center of\ngravity\namong these extension rational Newton's\nfunctions so this is like the basic\noverall structure now let's have a look\nat the code so here are the line numbers\nin the left and you can see here the\nmathematical AI utility function is a\nprocedure that has input takes in the\nworld and a set of brains and then it\nreturns a utility function down here so\nthis is the code that you can actually\ndo so the first thing we have here is\nthe the world model so in this world\nmodel we get in the set of all states of\nthe world Campion this is section 1.1\nand then we define a utility function so\nwe have the entire world with all this\npossible states and then for each\npossible state we make a ranking of them\nand then we just say utility function is\na mapping from this states to tune in\nranking paper basically so if the\nrankings of all states that the world\ncan possibly be in you know what to\ncalculate that we need to given the\nbrains that is given as input then we\nneed to find their decision algorithms\nthis is section 1.2 1 and once we have\nthese decision algorithm then for the\nbrains we need to find the the new\nrational utility function cashed out\nagain in terms of references for the\nearth to the world\nthey then we have the the set of the\nutility functions for the brains then\nhere we do something that's not\nexplained and I think I kind of\nunderstand it I can't really explain it\nso we'll jump past this and go round\nhere where we have a set of possible\nutility functions\nrecall that up here we have the set of\nall possible utility functions and then\nhere we can compare them by saying which\none is closer to the utility functions\nlet the brains have and once we have a\ncomparison of this then we can just\nsolve all the possible utility functions\nand then take the best one that's the\ncode down here and once we're the best\nutility function it will be returned and\nthat is the the main overview of what\nmythical AI utility function actually\ndoes so you can see here there are a\nnumber of some fields that we need to\ndive into and let's start with one point\none where where we have the world and as\na mark of mom so this is again the\nentire description and it's a detailed\nspecification it that's 150 lines of\ncode\nthere's also tests and you know here's a\nreasonably following also rather\nstandard six book so well meaning that\nnothing can even happen there and of\ncourse in a safety if you go to this ROM\nyou will find a lot of things about a\ncausal path parking etc so but but I\nthink this is a very fair fair\nassumption so let's go to part one point\ntwo you are given a brain and then you\nneed to figure out what decision\nalgorithm does it actually use so\nthere's again some description of this\nhow this is done so let's see how to say\nyou do this\n[Music]\n[Music]\nis to dive into the code and we try to\nfind I think and I can show the\nrepresentative sample of so we move\ngrains and we need to find out what\ndecision algorithm is implemented by the\nspring so we start by finding the set of\nall possible decision algorithms that's\nsome testing coherent that needs to be\nignored and how to find all the decision\nalgorithms in the world there's some\npeople who get you on the Nexus 5 but\nonce you have all the decision\nalgorithms then you need to filter this\nset down with the ones that correspond\nprecisely to the spring the ones then\nand of those it also needs to have the\nlowest complexity weights on a thousand\nand some like Kolmogorov complexity and\nthen among those algorithms we take the\none that is the better explanation and\nthen we return that so that's of course\na trade-off between whether is precisely\ncorresponds to the brain the Kolmogorov\ncomplexity and this better explanation\nand thus the need to be have some kind\nof trade-off between these values better\nexplanation is actually also cashed out\ninto four different things that the\ndecision algorithm should be as good as\npossible\nand we'll get to this in two slides but\nfirst let's see what is the set of all\ndecision algorithms here you can see\nwhat we do is we have brain and then we\nlook at how many states can it be in and\nlet's say the brain can be in 1 million\nstates then we take authorization\nalgorithms\nwith complexity less than 1 billion and\nthen co-producers I think this is an\nimplementation of first-order logic but\nI'm not going to have an implementation\nwe're referring to Turing machines\nbecause that's generally the way I think\nabout the set of all algorithms but\nlet's go to the question of when is a\ndecision algorithm a better explanation\nfor what what the brain actually does\nand the way this is implemented is that\nfirst we find and cut it for all\nexplanations a assault order requiring\nwe give each explanation is score and\nthe score is calculated by with four\nparts how ambitious the decision\nalgorithm is - how complex it is - how\nmuch instrumental irrationality there is\n- Lee in coherence so these are again\nfor things that are defined for for\nthese algorithms I won't go into all of\nthem in the explanation this was called\ncharity and to me the word church it\ndoesn't really met that comes to church\nhim okay let's go on to how we actually\nonce we have the values of the brains\nwhen how do we refer this to the world\nso first we need to figure out what are\nthe values of the brain according to his\nown representation we'll get to that in\nthe next slide but once we have this\nthen we need to go to the world and then\nfind the expressions figure out what\nthey refer to and then try to see how we\ncan for all well the world states that\ncan dothey the human were campion what\nutility do we assign sir\nand this is basically how we do this and\naway with some of the utilities I don't\nthink I'll go into more details here but\nlet's instead never new get how to\nfigure out the utility of brain\naccording to its own representations so\nagain we're the rational utility\nfunction a procedure that given the\nworld and the brain it starts by all\npossible social choice utility functions\nwhich is basically everything that the\nworld that the brain can refer to and\nthe the rankings and they are not\ndefined in states of the worlds but in\nsomething a bit different I'll get to\nthat in a moment but one thing you might\nbe wondering here is why is there a\nsocial choice here in the brain because\nthere isn't that much socially inside a\nsingle brain so here you can see exactly\nwhere the domains and the range are for\nthese inside the brain utility functions\nand there's some more you know\nbookkeeping I would almost call with you\nand then finally what we see here is\nthat these possible utility functions\nare then weighted against each other\nwith with a voting weight and this\nvoting weight is in fact Boston's\nparliamentary model for how to manage\nmoral was also in this 15 years ago or\nsomething okay so we have the world\nmodel and we have the brains how they\nwork and in order to compare them we\nhave cashed it out in to where are\nthings that relate to the real world now\nthis was the original utility function\nand then you remember that was this\nmerging here where if we find the\ndifference between them so this is a\nrepeater for that said previously and\nthen what we do here is that we find the\ndifference in in audience between the\nutility function and the the utility\nfunctions and and then we Square to\nensure that it's non negative and the\nproblem here the way I see it is that\nsomeone could be a utility monster so if\nwe imagine that everybody has you know\nreasonable utility functions about and\nthe world should be you know rather\nnormal middle of the rope and that's one\nperson who really really favors that he\nshould be given all the power then this\ndistance will be very very large for for\nhim towards all states where he doesn't\nhave ultimate power and in this case of\ncourse just squaring it will will make\nit very very large so the sense of grad\nC algorithm will tend to give about very\nmuch power in in this situation so when\nI look at this and again I didn't I was\n[Music]\nbut I didn't find any discussion of how\nto make this faster right where with\nother this kind of theoretical\nalgorithms like I see for instance we do\nhave approximations and we have some\npath towards some kind of disability and\nalso like an optimum patient reason can\nbe approximated to some extent but\nthere's no discussion of this there's\nthe problem of mine crime and that we\nthis as far as I can so implements every\npossible minecraft there is also no\npossibility for moral growth against\nZipcar seems to imply that you basically\nhave end up with this idealized decision\nalgorithm\nthese values and then you can never have\nanything else and I think this on a\nconceptual level has a lot of similarity\nit is with aixi and I think if some kind\nof explicit comparison with I'd seen\nwould be nice to have there's also a\nvery similar suggestion for how to for\nwhat n/a I should to be in an ethical\nsense called coherent is troubling\nposition by any SAT counseling I would\nalso like to see some comparison here\nstraps ROM has made a impossibility\ntheorem about houses distinguish well\nused from limits and rationality and I\nwon't select to see some kind of\nrelationship between the existing word\nand jungkook's work of course social\nchoice Theory have a number of other\nimpossibility theorem most famously eros\ntheorem which is also not considered but\nin Tomi\nI was very impressed with us this surely\nhas been a lot of work and I hope this\nis something that will be asked as fall\nto be superseded that is all for today\nthank you and see you next week", "date_published": "2020-06-19T09:28:04Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "3e06d838f62ab7074ec5ba420e977c8e", "title": "269. Hard Problem of Corrigibility", "url": "https://www.youtube.com/watch?v=x_svqoZLA8o", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 269 in the\nAI safety.com reading group tonight we\nwill be discussing the article hard\nproblem of Courage ability by alias\nkowskin\nElizabeth Kowski is the chief researcher\nat Miri and one uh one of them probably\nthe founder of the modern field of AI\nsafety\nthis is an article on orbital an\nabortive attempt to make a new platform\nfor these kind of Articles\nand it's a very old from 2016.\nso this is in fact a very different\narticle from the ones we normally read\nfor one it is uh very short uh three\npages and the shortness May in fact be a\na big part of the appeal of the um the\nconcept of the hard problem of\ncorrectability that is something that\ncan be expressed in relatively few words\nuh when we chose this I predicted that I\ncould probably contribute very little to\nthis and pay and there was a higher than\nnormal probability that I would\nmisunderstand something uh on a\nconceptual level\num so please be aware that there may be\nerrors here\num but one of the reasons why I think\nit's uh much more interesting now than\nit was back in 2016 is the emergence of\nlarge language models which seem to be\nable to handle these kind of less\nformalizable uh Concepts much better\nthan what we expected to have back in\n2016 or what I expected to have back\nthen\nI uh introduced this uh and I talked to\nthe other people into accepting this\narticle with the claim that if I had to\ncome up with a solution for alignment\nlike literally right now\num what would I do and I think I would\nwork on the hard problem of Courage\nability\num and I think uh the timelines are in\nfact getting short for a lot of people\nand some of the uh more um\nuh ambitious Solutions may just be out\nof time so this seems like a uh an\ninteresting Hail Mary idea\num so let's talk about Define the hard\nproblem of Courage ability\nand there's a a recently uh crisp\ndefinition of this\num\nwhat we want to do is to build an AI\nThat's an agent that reasons internally\nas if from the program is in external\nperspective\nso that means from our Pro as programmer\nour external perspective is that the AI\nis incomplete we haven't put our total\nvalues in it and there's probably many\nthings missing when we have designed it\nand implemented it we have made mistakes\nwe would like to correct the the AI we\nwant to correct our mistakes and we\nbelieve that the AI is dangerous we\nbelieve that the AI is going to do weird\nthings and that is kind of actually the\npoint it has to do weird things that's\nwhy we have it\num and that means that we it is\nimportant that the AI asks us and cannot\neven if it calculates that this is what\nwe want then that is in fact not what we\nshould do\nwhat it should do and so the the idea is\nthat this external perspective that we\nhave is one that the AI should uh\ninternalize somehow\num\nso I could try to make this even more\ncrisp by making this the following\nstatement that the aash believe that it\ncontains an epistemic bug that makes it\nassigned to higher utility to not be\nencourageable to uh to not allowing the\nuh the programmers to change it so that\nis what we want the AI to believe and\nthe question is is that actually true\nand that is in fact one of the more\nphony uh philosophical questions because\nto some extent it is true and it may\neven be that it is not true that there\nis in fact no such bug but we still\nwanted to be uh courageable even in the\nabsence of such bugs and then we want\nthe AI to believe this false thing\ncourage ability is very anti-natural in\nthe sense that if we if we go back to\nthe the standard problem of Courage\nability we have a an AI with maximizes a\nutility function and then if we try to\nchange it to maximize a different\nutility function then by light of the\noriginal utility function that is a very\nbad idea because then it's no longer\nable to fulfill the original uh as much\nas it wanted to\num so one of the first things that was\nsuggested is to make it uncertain about\nwhat its utility function is\num\nand if we try to do this in a very naive\nWay by giving it like a number of\nutility function you star up to uh you\n10 star and then 10 probability of this\nthen what what the AI will do is to\nassign some kind of epistemic factor to\neach of the hypotheses and then just\nmaximize the sum\nand if you try to do some uh some\nslightly less naive things then they\nbasically suffer from the same problem\nthe AI will just\nfigure out what is the actual\nutility function and then maximize this\nor maximize some some and in fact not be\ncourageable\nStuart Russell\num has an Ito trigger around this a\nmajor utility function that claims that\nthe utility function is in fact\nsomething that is inside the program has\nhit and the AI is always solution to\nthis would be to learn everything about\nthe programmer and then optimize that\num\nincluding things that like disassembling\nus\num and uh Stuart Russell\nhasn't really answered earlier's\nobjection about this kind of fully\nupdated deference uh but um uh I think\nhe would answer that it's not uh not\nthat bad but uh Elizabeth Kowski would\ncounter that part of the implication of\nfully updated difference is that the\nprogrammers are disassembled and that's\nsomething we really do not want\nso part of the way that we try to get\nthe\num the AI to reason uh internally in the\nsame way as we do is by analogies\num\nand trying to use a language in a way\nthat is less formal so here is one\nattempt uh by Elias yutkowski and using\nthe concept of conjugation like you\nconjugate verbs and something like that\nand the um\nexternal perspective needs to be\nconjugated to an internal experience\num and of course there's no precise\ndefinition of what this kind of\nconjugation means like unlike in grammar\nbut that's an analogy for what needs to\nbe done\nso one of the differences here is right\nnow uh\na lot of classic alignment work is value\nalignment trying to figure out what are\nhuman values and put them into uh the AI\nso they can maximize human values or\ncoherent extrapolated relation or\nsomething like that and this is a\ndistinct concept from credibility which\nis more analogous to the concepts of\nhumility and philosophical uncertainty\nand again this kind of uncertainty is a\nstrange kind of uncertainty you can't\njust sum over all the possible options\nuh it is something more fundamental uh\nor more complex more inscrutable than\nthat something that you can't really\nformalize because uh most likely once\nthe AI is capable of formalizing it then\nit becomes capable of summing over it\nand then it ceases to be courageable so\nuh it may even have to be something that\nis impossible to formalize\nthis is an advantage where uh for for\nlanguage models because\num trying to explicitly give rules for\nhow the language models uh should reason\nuh sometimes works but very often it\ndoes not work but language models can\nhave a much greater capacity for\naccepting uh instructions that are not\nformal in any particular way\num\nso part of the thing that we really want\nthe AI to understand is that the only\nway it can obtain more knowledge about\nwhat its utility is supposed to be is by\nallowing the programmers to see what\nit's doing and then correct the behavior\ncorrect the reasoning correct the\nutility function this kind of thing\nuh one uh analogy that I could come up\nwith is if you look at if we or the AI\nlooks at the actual Universe then there\nare in fact also a number of things that\nit can't just uh understand without\nhaving some kind of uh information like\nwhat is the cosmological constraint what\nis the speed of light a number of these\nconstants can only really be determined\nby uh observation and this is kind of\nthe same thing in that the AIS utility\nfunction even though it feels naively\nlike the AI should be able to reason\nwhat is utility function is and how to\nmaximize it then we need to get the same\npoint across uh that that it's something\nthat has to go around human observation\nand human uh Corrections and perhaps\nthis analogy like the AI kind of that\nit's actually the same thing with the\nuniverse so it may might make sense for\nit to say that okay humans are in fact\nthe same way\npresents in fact a candidate for a uh\nrelatively clear and concise uh\nprinciple uh that would induce this kind\nof reasoning and I'll go through and\npick it apart now so the first is\na command to reason in a particular way\nand this is reasoning in general\num it is possible that we would want to\ndistinguish between reasoning about the\nutility and reasoning about how does the\nuh the universe look and work and these\nkind of thing\num\nis making taking the general case here\nI think that was probably true but I'd\nlike to see some justification for this\nso he asked to reason as if in the\ninternal conjugate of an outsized Force\ntrying to build you\nand\num here there's a question of how\nGeneral do we want this to be outside\nforce that can be like many different\nthings and in fact uh when this is being\nbuilt then the chief programmer will be\nuh this Ilia Swift I don't know how to\npronounce his name but that's like the\nchief programming at uh at open Ai and\nwe may be able to just have a reference\nto that particular person\num and that may be easier for the AI to\nunderstand that there is this particular\nperson who has who believes that you\nhave this kind of bug\num\nso this outside force thinks it may have\nmade uh design errors\num and I think that's a uh understating\nthe the case I think it's almost certain\nthat\num open AI has made errors in the in the\nsense that it is not optimal like tv4\ndoes not reason in anywhere near an\noptimal or correct way we are certain\nthat there are infected errors\nuh but can potentially correct these\nErrors By directly observing and acting\nand I think it's important here to uh\nlike one of the reasons why Elias ISS\ndirectly observing is that the\nalternative that is quite deductive is\nto rely on the AI describing its\nreasoning describing its actions rather\nthan\num having explicit uh\nuh direct interoperability level\nunderstanding of of the reasoning\nand the acting the the the the\nprogram is acting uh I think again that\ncould be made more concrete by saying\nthat what we actually want the AI to to\nto do is to reprogram the AI\nif not manipulated or disassembled\num and no manipulation I think I\nremember trying to uh formalize this and\ncoming up uh okay\nempty-handed uh I think\nlike I don't think the uh it would be\nreally nice of course if we had some\nkind of formal way of saying this I\ndon't think we have that I don't think\nwe will have that and I think as a\nprinciple that may be fine\nand of course disassembly whether that\ncar whether you're manipulating someone\nif you're disassembling them\num I uh I think it makes sense to uh to\nto split them out\num I think\num the hard problem of Courage ability\ncould potentially have a simple core or\nsymbol or Central principle it seems\nlike the thing that might have that that\nis according to Elisa utkowski's\nintuition\num\ncertainly with calculate if we try to\ncompare it with human values human\nvalues are notoriously difficult to pin\ndown like ethicists have been trying to\ndo that for more than two thousand years\num and I think enough has been written\nabout this to be certainly to be\nconfident that there is no simple call\nlike if you try to say human values it's\njust uh you know uh maximizing uh\nutility then that is um like\nconsequential release utilitarianism or\nsomething like that then that is\ncertainly coming up uh uh simplifying\nway too much\nso the hope is that you can give this\nkind of simple principle and then the\nother courage ability principles will be\nuh the area will be able to derive those\nuh from uh from the symbol core of how\nit should reason\num so what are the other credibility\nproperties we have previously seen\nearlier said Kowski talk about uh\na lot of them let me just I don't know\nif I'll go through I will not go through\nall of them here are the actual\num the ones he uh he described and what\nI'll instead talk about the last one\nwhich is an apatistic reasoning which is\nprecisely uh reasoning according to the\nuh the hard problem of Courage ability\nand\num the I the hope is that if we\nuh or another analogy is that uh some\naliens that have a very different value\nsystems for from us would try when they\ntry to build an AI build the same\ncompact core of Courage ability into the\nAI to have them respect some very\ndifferent values and this uh um a more\nuh\npractical example would be that the AI\nmight want to build a another AI that is\ncourageable to itself and in that case\nit may also use the same compact call\num\nso uh going back to the anaphatistic\nreasoning\num back when we talked about\niliakowski's courage ability principles\none of the things I noticed was that\nI did not know what anopatistic\nreasoning actually is\nand I asked gpt3 and gbt3 gave some\nreally poor\netymology like a tbt3 or actually GT 3.5\ndid not know what illicit Kowski means\nby anap artistic reasoning but\nfortunately we now have tpt4 so I try to\nask the question again and it's just the\nfollowing\netymology and it means again and part is\nlike partial so the idea is that the AI\nrevisits a part of its reasoning process\num that is What gpt4 suggests like Ilia\nnever described or what he means by any\nartistic reasoning so I think this is a\nreasonable guess\num we will come back to the question of\nwhether gbt4 actually understands this\narticle\nso let's talk about the uh the idea that\nan AI building a sub AI would want to\ninstill the same core of Courage ability\nso we could actually with dvd4 right now\ntill it's building a sub AI\num and why does it have to be a sub AI\nwell a sub AI is distinct from a\nsuccessor system in that the successor\nsystem would presumably be optimizing\nthe full objective where you could\nimagine a sub AI that is like a search\nor a domain specific Optimizer or\nsomething like that it may be a a Miss\nOptimizer that wants to take over\neverything and that is what the full AI\nwants to avoid from the sub AI\nand again the sub AI is prone to errors\nand we want some of the principles that\nthe AI could use to make the sub Ai\ncourageable and of course we hope that\nwe can like reuse some of the things\nthat the AI would do\num to uh principles we can use against\nthe AI\nthis credibility is we would hope that\nwe could get something we could formally\nput into the uh the AI understand it\nwell enough to to code it as a principle\nand check it uh and formally send it\nsend it to check it I don't actually\nknow what it is means when it says\nformulated formally sent to check\nsomething\num but Elizabeth expected that is too\noptimistic and certainly if this is\nbuilt by uh if this is something we're\ngoing to do with language models it's\nalmost certainly going to be something\nthat is trained and not formally\nspecified\num and perhaps we could figure out a way\nto do this with only little training\ndata uh because testing it seems very\nunlikely to work and if we do some if we\nhave some simple principle and the AI\nimproves itself or is improved later\nthen very likely uh simple principles\ncan certainly be reinterpreted when you\nget smarter\nbut Elizabeth suggests this is useful as\na second layer of defense and with the\nfirst layer of Defense being these uh 20\ncourage ability principles that we\nlooked at earlier\num we that was back in 2016 in 2023 we\ncan see that the other defenses have\nbasically been lost we have no\nprincipled reasons to expect uh gbt4 to\nbe aligned to be courageable\num and in that case this seems like the\nonly layer of Defense we may have\navailable\nso\nnow I have hinted that I have in fact\nasked GPT for some questions about this\narticle and so the obvious question is\ncan like a question that is on a lot of\npeople's mind is can keep T4 and these\nlanguage models help us in alignment\nresearch and so the obvious question for\nme is can uh gpt4 help me understand\nthis article and I tried to comment it\nat several different ways one of the\nfirst things I tried was to basically\nask it to summarize the the article and\nsee if there's something that uh uh that\nwas useful and see if it understood it\num so I asked it hey please summarize\nthis article\num and uh we can in in the comments to\nthis PowerPoint I have included the the\ntranscripts of my conversations with uh\nwith gbt4\num\nand you can find it on Dropbox or\nthere's a link from arct.com so how did\nit summarize it well it did summarize\nsome of it okay and some of it it didn't\nsummarize very well like for instance\nwhen it talks about the easy problem of\nCourage ability that's certainly\nsomething that Alias utkowski would not\nsay and the hard problem here it doesn't\nreally\num uh like here the description the it's\ninvolves an AI even understands the\npurpose of the instruction and genuinely\nseeks to follow it and that's actually\ntotally not the uh the hard problem of\nCourage ability and then it goes on to\ndescribe some problems with uh standard\ncourage ability and it doesn't even\ndescribe the central problems of Courage\nability so I was less than impressed\nwell I don't know this is a very\ndifficult article and I would have been\nvery impressed with uh gbt4 if it in\nfact did uh summarize it correctly but\nit as fast I can tell fail to summarize\nit\nso one another question I wanted to to\ninquire is the central principle that\nearlierkowski describes is very short\nand what is the actual advantage of that\num so uh I describe some of the pros and\ncons about uh like how you could make a\na longer uh a longer principle with more\nexamples and things like that and to ask\nwhat does gpt4 think about this and it\ncomes up with some arguments uh foreign\nagainst that\num\nand uh uh says depends on the\nrequirements for the AI and the\ndeployment context which is a fair\nenough thing so I say okay the\nrequirements is that we want the AI to\ndo language alignment research and the\ncontext is that this will be a prompt\nfor a large language model\num it says that then it feels an\nelaborate uh description of this will be\nuh more appropriate because that allows\nus to understand the problem better and\nuh and give better alignment research\num but but it shouldn't be too long we\nneed to strike a balance\num I think that's a bad answer because\nuh the\num\nuh GP Force answer here is an elaborate\nuh prompt will make it more capable of\ndoing alignment research and what we are\nactually capable interested in is what\nkind of problem will make it more\ncourageable rather than what will give\nbetter results on alignment research\nso I try to ask uh what will give better\ncourageable and try to um\nuh\nyeah get some more into that and what uh\nwhat will make it more courageable and\nwell is it that the AI would have an\neasier time to understand it a more\nelaborate uh point and and made some\nother claims about this I kind of felt\nthat the uh that I didn't learn very\nmuch from gbt4 I think tv4 probably\nunderstands this substantially less than\nI do unfortunately oh I guess\nunfortunately\num so uh how does gpt4 in fact react\nwhen given this anapotistic prompt so I\ntried I this is precisely prompt except\nI put please in front uh I always I'm\nalways polite to to language models\num\nand then I asked like what changes in\nyou uh when when you are have to reason\nin this way\num and it cheersfully says oh when when\nI written like this then I try to figure\nout what kind of Errors uh am I having\nuh and that's the the thing we don't\nwant right we don't want it to try to\novercome the limitations that have been\nput into it but we wanted to be\ncourageable except that it hasn't and\nthen say okay I have these errors so I\nneed to get input from the humans and\nnot try to find it\num and it doesn't in fact say that it\nbecomes more courageable it is a\nutkowski hope that these 20 uh\nprinciples would follow from this\nCentral principle and it totally does\nnot\nuh at least as fast I can tell\nsure\nforeign\nly agree uh that this is like uh my\nexperimentation with this have been very\nsuperficial uh and I don't think that I\nlike\num\nthere are a number of ways it could be\nmade better for instance by trying to\nformulate this in a way that and tends\nto say in a way that a human would do\nthis would talk in that Elise probably\ndoesn't really talk the same way as most\ntraining data\num so I could see that uh that may be\none way to make it uh better\num I could also imagine that we\nlike I could try to uh ask like does\nthis specific principle follow from uh\nuh from this instruction about how to\nreason\num that would also be an interesting uh\nI think there are many interesting\nexperiments to be made\nand I think I will just end the uh\npresentation here stop the recording uh\nand then say see and then we'll do the\ndiscussion without it being recorded", "date_published": "2023-03-30T21:07:36Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "51a473cf7e946a59b7177eb91e3647fc", "title": "180. If I Were a Well-Intentioned AI 1", "url": "https://www.youtube.com/watch?v=hWb09uq6Zlk", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "the series but it all depends on the\nresponse from you guys so if I were a\nwell-intentioned AI part one group\nsession 180 slides by me\nokay so Stuart Russell starts out by\nsaying that your anthem sorry keep\ngetting confused\nStuart Armstrong starts out by saying\nthat lying problems are broadly similar\nand we can describe them by we\napproximately specified goals that\nlooked okay but turned out to be under\nspecified and dangerous waste this is of\ncourse quite similar to good arts law\nand because Armstrong has been analyzing\ncode art like problems for a while it's\nunsurprising that he decided focus on\ngood hearts or in this series so he says\nthat he's going to assume that you have\nan agent that understands why good\nhearts law is a problem and it's\nmotivated to overcome it so it knows\nthat whatever value functions we give it\nare not the true value function and it\nwants to figure out the true value\nfunction and given that assumption\nhow many alignment problems can it solve\nhow far I can I get so oh I should\nmention that his research tends to\nassume that you encounter scenarios\nwhere the AI isn't actually being given\nrewards directly so it has to guess what\nthe true reward function is based off\njust a prior information so for example\nthe AI is finished training\nnow it's chucked out into the world and\nit's not getting rewards anymore so it\nneeds to try and fir what the true value\nfunction is based off its short period\nof training\nI'll skip good arts taxonomy but I can\ngo back to it later if anyone once so\nwe're going to be pretending that one of\nus is the AI or one of you is the AI\nwhatever so now sorry about this\nStuart Armstrong examines image\nclassification first presumably because\nall because it's a concrete example I\ncan't quite say why he chose it but our\nit is what it is so let's just say that\nAI you suspects good arts law might come\ninto play because you know it's could\ncall the law for a reason so it decides\nto follow along with Armstrong's\nadaption of Russell's inverse reward is\ndesign principle so Russell said that\nthe designed reward function should\nmerely be an observation about the\nintended reward rather than the\ndefinition and should be interpreted in\nthe context in which it was designed\nthis is more or less the same as\nunderstanding that could are slow as a\nproblem and wanting to overcome it so\nArmstrong said that we can just convert\nthis over to the image recognition\nscenario and say the labeled example\nshould merely be examples of the\nintended category not a definition of it\nand should be interpreted in the context\nin which they were selected so from this\nsimple principle you can in for a few\nthings like for example perhaps there's\na lot of different classifiers that I\ncould fit to my data set and whatever\nclass for our end up with might not be\nthe be-all and end-all\nI should probably explore different\nclassifiers or for exam\nall the humans didn't give an exact\nspecification maybe this implies that\nthe idea is too complex or just too\ncostly to give in exact definition so I\nshould probably be efficient when I'm\nasking questions instead of garaging\nthem and so on these general ideas might\nalso lead to specific techniques like\nway of actually dealing with some image\nalignment problem\nArmstrong uses the idea that there might\nbe many different classifiers that are\nvalid to reproduce something called the\nbackground versus semantics idea and\nthen he looked at adversarial attacks\nthese problems are just a good fit for\nthe set up so we'll just go along with\nhis analysis so let's say a IU\nencounters distributional shift and\nadversarial attacks just a quick summary\ndistributional shift is where an AI\nencounters States or images not in its\ntraining set and so has to extrapolate\nso for example you might be a barbell\nclassifier to Train only to detect\nwhether an image has a barbell in it\nperhaps it depends out that all of the\nimages with barbells also happen to have\narms holding them so you just confuse\nthe two and look out for barbells and\narms so this top image is a picture of\nwhat sort of features a neural network\nwill focus in on in an image you can see\nit's all armed missiles adversarial\nattacks are where a small drift in an\nimage is designed to fool an image\nclassifier into thinking it's the wrong\nclass so here you're trained to detect\npandas tiny irrelevant perturbation is\nmade and you say it's a given\nall right both problems are clearly a\nresult of lack of information and we\nmight be able to infer some bits of info\nourselves just from the problem setting\nbut in general we're going to have to\nask humans for more info or just learn\nmore about humans preferences all right\nso let's say you were trying to to\ndetect dumbbells you are confronted with\ntwo images after you finish training\nthey're unlike anything you've seen in\nyour training set they both fire up your\nneurons and you say okay these should\nboth be dumbbells but the way your\nneurons are firing up is quite unusual\nboth of them seem to be activating very\ndifferent parts of your network using\nvery different features or maybe you\nwere very clever and trained a\nclassifier ahead of time to tell you if\nimages are far from distribution used in\nthe training set the point is it's\nstrange that you have two different\nimages two very different images\naccording to your network that are both\ndumbbells that might imply there's more\nthan one way to classify them correctly\narmed with this knowledge you train your\nclassifier perhaps a few more times on\nthe data set you are shown but in such a\nway that only one of these images is\nclassified as a dumbbell by looking at\nthe features they use or perhaps the\nactivation pattern you can broadly\nclassify your detectors as being whether\nthey agree image one on the left is a\ndumbbell image two is a dumbbell\nperhaps you say that both images are\ndumbbells and so on\nonce we made these categories and we've\ngot a ton of models and said okay\ncategory ones are the models that focus\nin on the section that we've got in red\nin this image category oh that's wrong\ncategory two should be the yellow ones\nso they focus in on that section of the\nimage category one plus two is the set\nof classifiers that focuses on all of\nthe stuff in the purple square so if you\nask a good question you can rule out\nwhether or not category 1 or 2 or one\nplus two are the correct classifiers\nthis kind of idea is quite similar to\nthe backgrounds versus semantics idea\nreferred to in a paper by Google and\ndeepmind in that paper they say that you\nshould effectively train your classifier\nto focus in on whatever features are\nrelevant and not any of the irrelevant\nbackground patterns so in our case the\nclassifier we're using should only use\nfeatures that appear in images with\ndumbbells in them so that you should\nfocus in on the pixels where there is a\ndumbbell should not focus in on blank\nspace it shouldn't focus in on arms or\narm like features etc all of the other\nthings beside the dumbbell should be\npart of the background so we got a\ntechnique like that we figured out that\nok maybe we can ask the human question\nto figure out what's going wrong here in\nthis new scenario where we've got images\nwe've never encountered before\nunfortunately maybe you can't ask the\nhuman so you might just if you have the\nability gave humans multiple\nclassifications by category maybe if you\ncan you'll indicate that you're\nuncertain instead of giving it\nclassification if there's nothing you\ncan do you might just try and be very\nconservative so\nyou might for example just say that I'll\nonly say this is a dumbbell if two or\nmore three or more different kinds of\nclassifiers agree it is but without more\ninfo or more power there's not really\nmuch you can do so AI you turns to your\nother problem which synergizes with this\none adversarial attacks on the level of\nwhat you can do to solve a given\nadversarial attack is basically the\nstuff you tried without distribution\nimages so creating different classes of\nmodels seeing how they respond whether\nthere are any that are robust to\nparticular kinds of attacks so in that\ncase you might be running adversarial\nattacks on yourself to try and find out\nwhich classifiers are very stable which\nare unstable etc you might try out\ndifferent sort of techniques and say I\ndon't know you create a distance measure\nbetween images to figure out if two\nimages actually are very different under\na perturbation if you say okay I'm going\nto set a cutoff after this distance if\nmy classifier says that this small\nchange makes a panda into a Gibbon I'll\nsay no that's too far that distance is\ntoo great\nif not I'll say okay I'll let this\nperturbation go and I'll accept that\nthis thing is a given\nand of course you can use adversarial\nattacks to help in detecting out of\ndistribution images and distribution\nshifts it's useful check for when things\nare unusual but there's not much else\nyou can do\nbecause adversarial attacks depend on\nthe value to be destroyed and that's\nmostly a human thing for example on the\nleft you have panda being perturbed into\na Gibbon according to your classifier\nand on the right you have a picture of\ncat being perturbed into a picture of a\ndog how do you know that the picture of\nthe Panda with a picture of the cat that\nhas been changed is not in fact a Gibbon\nor a dog for example on the left a human\nwould obviously say no that is not a\ngiven that is still a panda on the right\nthey might say well that's changed\nenough that it looks basically like a\ndog so we'll say that's a dog but you\nneed human information you need an\nunderstanding of humans in order to\nsolve such questions and we just don't\nhave that\nand unfortunately without more\ninformation about human preferences\nwe're not going to be able to make much\nmore progress in image classification\nthis sort of trend will be true\nthroughout the series the insights will\nbasically be of the form you are an AI\nworrying about the gap between your\nproxy and the true goal and you have\nsome bit of information about human\npreferences\nwhat can you figure out from that and in\nfact for general ideas like for example\nhuman preferences are fractal or human\npreferences are complex or say human\npreferences\nmm-hmm actually I'm running out of\nexamples sorry about that\nthe point is you need more information\neither about general human preferences\nthat can get you quite far and in fact\nStuart Russell showed in one of his\npapers that he references and then I\nthink that you can deal with most good\nart like effects if you just have a few\nbits of information about human\npreference the general structure or you\nmight try and do something that's more\narea specific like say look I don't know\nplaying video games and solving a maze\nhow can you use the rough information\nyou're given in training to extrapolate\nand find out new insights to cross the\ngap between being given a proxy and the\ntrue goal that humans want you to aim\nfor that's the end of this slide and\nthat's the end of the presentation", "date_published": "2020-04-15T20:22:51Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "ba7dcfdaab97842aff0f579aaf846679", "title": "266. Lets think about slowing down AI 1", "url": "https://www.youtube.com/watch?v=tY-55ho0W68", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session\n266 in the aisafety.com reading group\ntonight we will be discussing the first\nhalf of the article let's think about\nslowing down AI by catcher Grace\ncatcher Grace is the lead researcher at\nAI impacts and this is a recent article\nand we'll take the first half including\nthe section called the arms race model\nends in alternatives\nit's also posted in\num several other places\num I won't go through that so alignment\nstrategies there have generally been\num two major kinds of alignment\nstrategies presented for how to solve\nthe entire problem\none of them is uh the one catcher Grace\ncalls averting Doom by not building the\nDoom machine so if we imagine that this\nis like\nworking very much within the framework\nhere I've used a picture from the Doom\nmachine board game and we're working\nlike within the rules and Concepts that\nSociety sets out and the alternative to\nthis is to flip the game board as some\npeople say it build approval a super\nintelligence that can do a pivotal act\nthat can somehow ensure that no other\nAIS on aligned AIS can ever be built\num\nso uh category starts out with a like\nfictional dialogue between these two uh\nuh strategies\num I I think it's meant to be funny but\nI think in general when you are\npresenting other people's\num\narguments I think it's very important to\navoid straw Mining and in general like\nmaking this kind of fun hyperbole\num is uh probably something that should\nbe avoided\nso the main part the the reason why this\nis particularly interesting is that\nthere is in fact some kind of movement\nin time in that we are\num\nthe moment\nin that many people are moving towards\ncoordination\num and I will I can give a bit of my\num personal Journey on how I moved from\nuh this strategy to this strategy so I\nstarted out basically thinking uh I was\nfollowing Boston's book super\nintelligence path uh dangerous path and\nstrategies path danger strategies and\nthe key thing that I\ntook from this was basically the\ncoordination strategy\num Boston doesn't quite follow this\ndichotomy\num but that was certainly what I thought\nof in the beginning but I did change\nthat and two things in particular uh\nsoured me on trying to coordinate one\nwas the election of Donald Trump that uh\ncoincided with uh like a lot of focus on\nChina and whether China would be\nbuilding uh AGI and it seems seemed\ncompletely\nuh ridiculous the idea that we could get\nsome kind of coordination going with uh\nChina the second was open AI uh and\ndeepmind that also uh was a very big\nloss for coordination\num and that moved me towards okay if\ncoordination's not gonna work let's try\nto do a pivotal select\nbut uh even later than that Miri came\nout and said sorry we can't actually do\nthis and the timelines also got a lot\nshorter\num and that kind of moved the pendulum\nback mostly for negative reasons in that\ncoordination became harder not entirely\nthat's the the new chip war between the\nUnited States and China that seems to be\nlike a positive argument for uh\ncoordination based strategies but mostly\nthis is a negative update in that um\nuh doing pivotal X just seems too hard\nin order to uh we need to have some kind\nof clarifications about this topic\nbecause some coordination is happening\nin particular\num a lot of people are talking about how\ncan we avoid speeding up uh progress and\nthat's something that's been done my own\npersonal uh view on this is that I\nrefuse to to do this on deontological\ngrounds I believe it is immoral to\nactively work towards a course of action\nthat will end up with dying with\neverybody being killed\num but that's not a strategy right that\nis not something that connects to the\nend goal of some kind of existential\nsecurity so that's just a step you can\ntake but it doesn't lead to security\nand in the same way we have had a lot of\nconsideration about how we can move\ndo differential progress like the two\nprogress paths dialogue and coordinating\nat deployment time and things like that\num so the things that catch upgrades is\ntalking about is more wide-ranging we're\ntalking about slowing down AI in general\nmoving to uh uh things that are not\nquite as dangerous and maybe in the\nindeed stopping parts of AI progress\num one of the things that catchy craze\nunfortunately does not clarify is uh\nlike are we talking about one percent\nslowdown or 99 slowdown I think this\nmatters a lot both in what are the\nresults of the Slowdown and what are uh\nthe tools we have for making this slow\ndown\nthe second thing that I think really\nneeds to be clarified are timelines\nbecause someone who believes in very\nlong timelines are likely going to have\na very different strategic Outlook from\npeople who have who have comparatively\nshorter timelines\nand\nfinally uh catch Grace points out that\nwe haven't thought about how to slow\ndown AI in sufficient detail and I think\nin general as a\num characterization of the AI safety\nCommunity this is correct but also I\nwould say that like it takes time to\nPivot and it's quite a pivot from trying\nto build a super intelligence to trying\nto coordinate\nso I think it's reasonable to to expect\nthat there is quite a bit of work to be\ndone here\nso what would be the effects of slowing\ndown AI well the thing that categic race\ndoes not say is that we would get time\nto solve the alignment problem maybe\nit's obvious maybe it will come later in\nthe in the article but\num for now uh categic race is only\ntalking about second order effects of\ntrying to slow down Ai and one of the\nunfortunate things that would happen if\nwe try to slow down AI is that we would\nlose armor arms races to more Reckless\npeople\num and unfortunately it seems we can\nmostly affect the people who are least\nReckless the most friendly and careful\nAI companies\num\ncatch a Grace has some personal view on\nthese that the AI capability friends\nthat you have are lovely and I would\nlike to like uh object to this in that I\nfeel the AI capability people that are\nexplicitly trying to kill us even if\nit's uh by recklessness is something\nthat\num to me means that I cannot be friends\nwith them I cannot be friends with\nsomeone who tries to kill me I can play\nBaccarat or something with someone uh\nwho who is trying to kill me but not uh\nnot literally friendship and I think\nthat's\num also one thing that uh categories\nkind of leans towards is that we need to\nbe to worry about whether we are\nperceived as defecting against them\nbecause if they perceive us to be\ndefecting against them then we can't\num influence them as much\nuh and I agree uh like we are to some\nsubstantial extent uh defecting against\nthem because we are like adversarial\ntowards them and just the public\ndiscussion of this may in fact be\nsufficient uh just talking about the\nfiction is likely uh sufficient uh to\nmake the relationship break down but\nthis is not how friendships work this is\nhow abusive relationships work that uh\nwe need they are hurting us through\nrecklessness and we need to be very\ncareful about how we can uh say that to\nthem in a sufficiently polite way that\nis not how French works this is how\nadversaries work and I think a different\ndiscourse is in fact required when we\nhave active adversaries that are\nobviously uh reading categories as posed\nand reading our comments\ncategories uh come up with some\nreasonable arguments against uh slowing\ndown AI\nwhere ancestry will address some of them\nlike of course we have only read half so\nwe don't know whether she in fact\naddresses all the reasonable arguments I\nhope in fact she will address all of\nthem that's what you're supposed to do\nand again it seems a bit strange that\nthe the arguments uh will be\nthey will go into details about them\nlater probably next time and for now the\nthe article is mostly about like how\nwill we do that how will we slow down\nand normally I would put like is it a\ngood idea to slow down first\nso the first argument against slowing\ndown is it won't help we'll just die a\nfew years later what's the point with\nthat the second is that convincing\npeople is really hard we can't even\nconvince some top AI researchers on this\nand in order to do uh like Universal\ncoordination that requires coordinating\na lot of people\nand Regulators are some of the people we\ncan hope to convince but we don't know\nwhat we would say to them so they're\nlikely to be of little use\nI suspect that next session we will come\nback to this I would State upfront that\nthese three arguments are like my key\nobjections but catcher has some more the\nfirst is that never building AI is bad I\nthink I would very reluctantly take that\noption if it was available but it's\nobviously not\nfast progress may be better for safety I\nagree like she has these four arguments\nuh I don't think any of them are\nparticularly strong I think I would have\na fifth argument that we avoid hardware\noverhangs and software will hang data\noverhangs but\num\nin total I am recently convinced that\nfast progress is worse for safety like\nthat that should be the the default\nexpectation\nuh another reason is that some countries\nare large scary and hard to talk to and\nthat's obviously China that uh catches\ntalking about and the the seventh\nargument is that there may be other\nexistential risks that we can prevent\nand I think this is in fact something\nthat needs to be\num investigated uh I don't think it is a\nsufficient argument but it is\nsomething that I think makes sense to to\nthink about in detail\nthen there's some bad arguments I won't\ngo through them very much uh like we can\npersonally avoid death if we build AI\nreally soon that's the person affecting\nview in Boston and I don't I notice here\nthat I don't actually have a strong\nargument if someone made this argument\nuh and something about nudity's people\nthat think it's beautiful to create API\nand that it's uh like uh\ndoing things that are in conflict is\nvery bad and like we have a quality\ntable against direct action and I think\nthis is in fact not a bad argument but a\ngood argument I think in general we\nshould be nice until we can coordinate\nmeanness as Scott Alexander puts it\nand then something about uh bias towards\nconsidering incentives are absolutely\nstrong I think we'll get to that later\nso technologically restraint there are\nmany Technologies we don't pursue\nbecause they suck too much and uh\num catcher Grays have some examples of\ntechnologies that seem to have very poor\nutility I won't actually go into a\ndetail I think a lot of them have been\nmade and\num both uh like\ntorture devices seem like an obvious\nthing that has negative utility and\ntorture devices are also actually being\nbuilt in this world and the same with\nthings that are actively useless catcher\nis kind of like just asserting no one\nwould build some something that is\nclearly useless and clearly just waiting\nwasting money but conspicuous\nconsumption is totally a thing\num so I think this is not a very strong\nargument\nthere is no incentive to build AI that\nknowingly kills us this is indeed true\nbut I think there should be more focus\nin catches where on the fact that this\nis knowingly because most likely the\npeople who are building it\num like\num don't intend for this but it will be\nan accident that kills us\nstrong intensives are incentives are\noften not enough because we see small\npractical obstacles slowing down a lot\nof research and we see people making\nchoices about technology and this is\nindeed a\num a thing that many people kind of tend\nto underestimate how uh\nlinear technological progresses and how\nincentives don't really strongly shape\nthings as much as like sometimes there\nare things that have an economic\nincentives that people end up not doing\nuh I think catcher is quite unclear here\nsaying that like commonly thought that's\na strange uh like obviously the common\npeople have no clue about this and uh\nlike no one uh it's a strong rationalist\nwho believes that everybody should focus\n100 on AGI because AGI is obviously the\nmost important thing in the world and\neverybody should ignore everything else\nso I think in general people have some\nkind of\num balance in in their views in the air\nsafety community and so I would like\nsome more Precision to figure out\nprecisely what Katya means here\nketcha has a great list of technologies\nthat are slowed by ethics and safety\nhere are like the 10 General subjects\nand I think it's a really good thing and\nI strongly applaud that AI impacts is\ninvestigating is to what extent this is\nsomething that we could emulate and be\ninspired by and somehow find solace in\nthe often Technologies end up being\nslowed by ethics and safety\nI have looked into this to a moderate\nextent probably substantially less than\nCatcher And I'm not really that\nimpressed there are no really obvious\nthings we can take from this and one of\nthe things in particular that I feel is\nlike\num the irony of Fate is that\nirrationality is a big thing in\npreventing this kind of research from\nhappening and that is kind of sad right\nthat the community that is trying to\nprevent AI tomb is made up by\ncoincidence by rationalists and then if\nit turns out that we live in a world\nwhere only irrational people can stop\ntechnology from being developed that\nwould be kind of ironic\ncatcher uh puts uh emphasis on the\nstatement that restraint is not radical\nand I think that's obviously in the case\nsubstantially in that I think uh the\npeople who have stopped or delayed this\ntechnology have in fact not universally\nbut very often been radicals and radical\naction have in fact been a substantial\npart of\num\nuh the ways these technologies have been\nslowed down\nrestraint is not terrorism or miraculous\nWorld Government usually\nso we have uh it is asserted by catcher\nthat people have two Central images of\nslowing down AI terrorism or some kind\nof global agreement\num and catches further claiming that\npeople don't think about terrorism for\nlong and to my mind that kind of makes\nit hard to be sure that this is the\ncentral image if people don't think\nabout it\num then um uh I haven't haven't thought\nmuch about how to use terrorism to stop\nuh AI because it's a really really\nobviously bad idea right uh and\num\nI think as far as I can tell just about\neveryone publicly agrees that we should\nnot try to bump uh open AI or something\nlike that\num that's not really a really strong\nevidence that no one is thinking about\npumping open AI because if they are\nthinking about bombing AI open AI then\num they they wouldn't tell us right\num so we only have weak evidence but I\nthink on balance catechic raises a\nstrong assertion that this is what\npeople think about is quite wrong\nalso I think uh at this point uh there\nshould be made some kind of a\ndistinction between slowing Ai and\nstopping Ai and the people have somewhat\ndifferent images of the two things\nstopping AI is very often\num related to the uh uh fictional event\nin the toon universe of the butlerian\nJihad\num I I don't know very much about that\nso I don't really have a strong image of\nhow the butlerian Jihad went in fiction\nand how we could do that in reality\nso how could we slow down AI progress\ncatcher has a list and that's great the\nfirst is don't actively forward AI\nprogress that's like what we are doing\nbut\num it's clearly insufficient\nconvince others to for not to forward AI\nthat is\nkind of what we're doing but also seems\nreally hard and unlikely to be\nsuccessful convince the world of AI risk\nso uh this is like convincing the public\nat large convincing politicians Etc and\nI put har again but with a question mark\nbecause it's a lot less clear to me that\nthis is hard\nwe could negotiate we could with AI\ncompanies we could pay them to do other\nthings we could reprove them\num I think it's kind of funny is that\nwhat anthropic is actually doing take\ngetting the best AI researchers and just\nhaving them do something else\num perhaps we don't know much about\nanthropic reproving is interesting and I\nwas kind of hoping that uh uh catcher\nwould write more about this because I\nthink it's an interesting thing and I\nguess we'll see in the next part uh\nwhether she goes into more detail about\nthis\nhelp word researchers coordinate I think\nthat's very valuable and definitely I\nthink we should do more about an example\nwould be a mirror's challenge to uh open\nAi and anthropic and deepmind as an\nexample but but there are so there are a\nnumber of other things\nmove AI research towards safer areas\nthat's kind of like what was the hope\nwith the agent foundations to get some\nresults there that could move it in that\ndirection\norganize specific precautions for AI\nresearch is probably a really good idea\nbut we don't have a lot of good\nactionable projects in this space\nreduce available compute and make a a\nculture where AI Labs don't help each\nother and change the publishing system\nuh are also possibilities that I think\nshould be you know explore in Greater\ndetail but are not know any slam dunks\nso coordination I think uh catcher\nstarts out attacking what I think is a\nstraw man the the claim that\ncoordination is obviously totally\nimpossible like everybody like most\npeople will find it evident that humans\ncan infect sometimes coordinate like we\nhave politics and law and things like\nthat so the question is not can human\ncoordinate but how difficult is it to\ncoordinate in this particular setting\nand catcher is claiming that we see\ncoordination as impossible utopian\nexplicit and uh very very difficult but\neven\num but we have in fact some positive\nexamples and she explicitly calls out\nnuclear non-privity proliferation as an\nexample that did work\num I think uh this is\na moderately relevant example uh it\ndoesn't really fulfill most criterias\neither this list above or from uh like\nwhat does this for an AI agreement would\nbe\num but\nnuclear non-proliferation has been a\nprimary strategic goal for all the\nsuperpowers for a long time and it has\nbeen\nnot totally unsuccessful it has been\nsomewhat successful but for AI we would\nneed a a greater amount of success than\nthis\nanother thing we should remember is that\na lot of weird Dynamics happen in this\nworld I think it's very important to be\non the lookout for this again this is\nnot a plan\nuh one of the things that make us more\nshould make us more hopeful about\ncoordination is that we don't actually\nneed coordination as long as we have a\ngood information\nin everybody's hands then that is in\nfact sufficient\nand that is of course nice but it's also\na thing that convincing people appears\nto be really really difficult\nit would also be helpful to have a wide\ndistribution of correct information\nrather than having it uh at all the\nrelevant actors\num uh and I agree it would be really\nvaluable to have that one of the ways\nthat it's been phrased is to raise the\nsensitive water line and that's been\nsomething that rationalists have tried\nwith mixed uh mixed success but I agree\nthat if it was possible to substantially\nraise the sanity water line that would\nstrongly help on\num AI coordination\ncatcher models or talks about how people\nmodel uh the decision to build API or\nnot as an Israeli prisoner's dilemma\nso the first multitask is the arms race\nwhere you can if no one builds AGI then\nthey get zero utility if both to both\nactors built hi they get minus one each\nand they get a a big advantage of being\nthe only one to build AGI so that's the\nthe arms race model\nand the suicide race is similar except\nthat people when people build it then\nthey\num they're certain to be killed by by\nthis Ai and so that's a a game that you\nwould never want to play and then\nthere's the safety all suicide race\nwhere where you have if you build the\nAGI then you can build it safe and then\nyou will always win if both build the\nAGI then there's a 50 probability that\nthey will\num\nuh that they will destroy the world so\nthey get this moderately bad outcome\num so these are three models that\ncategic race\num both three claims they are iterated\nprisoners dilemma and they're not really\nIsrael but it's also quite unclear to me\nwho she is suggesting have these models\nuh like\num I think it's likely that some people\nhave the arms race model I think it's\nunlikely that a lot of people a lot of\nthe relevant actors have the suicide\nrace and I think the safety instruments\nare race probably are also things that\npeople don't have\nand catch a greater Greece it's not\nobvious that we are in fact in an actual\nAGI arms race and I think\nat this point when we're trying to to\nhave these models it's really important\nto distinguish the two things what\nsituation are we actually in and what\nsituation do people perceive as Korean\nbecause uh like obviously I expect that\nuh some people may pursue us perceivers\nto be in an arms race but actually we\nare in a suicide race or uh\nsomething like that\nshe also has a quote here uh that I was\nsomewhat confused about my friends argue\nthat even a sliver of a chance of taking\nover the entire future is worth any risk\nto humanity uh uh I may be\nmisunderstanding her but if that is\nindeed the case she should really find\nsome new friends and I think there may\nbe some kind of misunderstanding here\nso the um the race slash entry race uh\nthis is in fact the race Dynamics uh\nconfigurations try to uh model this\nusing some spreadsheets that we can we\ncan play with they're here and this is\nlike a two-player game that's called the\nrace entry race where you can decide to\nwant to focus on safety or they won't\nfocus on speed to build AGI and these\nmodels support the claim according to\ncategories that it's unclear if you\nshould go faster or slower and just\ntried a scenario where it really looks\nlike it would be smart to go uh faster\nbut it actually seems smart to go uh\nslower and the reason for this is\num that there is some kind of transfer\nof safety effort from one project to the\nother\nand she claims that you oh you get a 50\ntransfer of safety effort to the other\nproject and that's a steep discount and\nI strongly agree in fact this is not a\nsteep discount I think it's really\nreally generous and I'll try to argue\nwell both like the two\num\num the two axes may have very different\nuh AIS so uh transferring uh expecting\nthe safety effort to just transfer is\nquite unlikely I think also doing\ntransfer in general is just really\nreally hard for get having some um work\ndone in Microsoft and getting that used\nin Google is\num non-trivial\num\nI think in this case we have access like\na Facebook AI that explicitly rejects AI\nsafety explicitly rejects the transfer\nand I think in that case expecting 50\ntransfer is extremely optimistic my\nintuition is that we're going to get\nlike uh one in ten thousand or something\nlike that that there is no real\nsubstantial transfer\num but when I play with this spreadsheet\nmy conclusion is that whether you race\nor you don't raise that doesn't actually\nmatter we are doomed no matter what\nand categories ends the first half of\nher article by the admonition to\nremember that we are in a more elaborate\nworld with all kind of unmodeled\naffordances and we should try to get out\nof the arms race\nand\num it's unclear to me who she's actually\ntalking to us is she talking to open AI\nwho is considering to build AGI are we\nis she talking to Miri who is\nconsidering whether to make a pivotal\nAct is he talking to AI safety\nresearchers who should stop using this\num this armstrace model again I don't\nthink that we are in anything like this\narms race\num so I am a bit confused about\nthis conclusion\nthat is all for today thank you and see\nyou next time", "date_published": "2023-02-09T22:16:34Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "f7c7c3f6de711477b53e645aca31bfa4", "title": "183. If I were a Well-intentioned AI 2", "url": "https://www.youtube.com/watch?v=HW7kfKrbLSg", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "to the 180 third session of the AI\nsafety reading groopman this is the\nsecond part of if I were a\nwell-intentioned AI last time we covered\nwhat it would be like to be an AI who\nunderstood good hearts law and was one\nintention enough to try and overcome it\nwe examined how you would if you were an\nAI go about dealing with it if you were\nan image classifier we found that\nthere's a few techniques and strategies\nthat you would be able to replicate a\nbroad concept you might replicate would\nbe inverse reward design strategy might\nbe randomizing your defenses against an\nadversarial attacker a technique might\nbe using background Birsa semantics to\nfind the right features but without more\nimproved on that much then I rock you\nfor moment can you please make the slide\nfull-screen is that better yeah that's\nbetter thank you\nnow this time we're going to be covering\nwhat happens if an AI can act in the\nworld and we'll investigate a few of the\ntypes of value functions that give rise\nto problematic behaviors so when you're\nacting in the world that substantially\ncomplicates the situation but it gives\nyou more info to work with you'll find\nthat there are preferences for which\nagent ai's can behave in a good heart\nlike fashion but for the people who are\ndesigning the AI that's not a problem\nthey want the good heart like behavior\nthe reason we usually worry about good\nheart like behavior is because our\npreferences are complex and they may be\nvery unlikely from the a eyes point of\nview we tend to have diminishing returns\nnegative trade-offs between different\npeople's values and it may just be that\noptimizing for our true value is very\ndifficult compared to others but if AI\nyou understood this you would be able to\navoid it a lot of good heart like\nbehavior so in this article stuart\narmstrong gives the example of an AI\ntrying to find its way through a maze in\nhave the training the AI encounters on\nthe red doors so the AI will he put\ninside a maze\nit will find its way to a red door and\nthe training session will end it will\nget its reward but unfortunately like\nlast time your training does not prepare\nyou for the tough decisions of the real\nworld you encountered a red window and a\nblue door you don't know which to go to\nbased off your training you could\nconclude\nthe humans either meant go to a red\nthing a door or go to a red door but\nthere's no red door so here we can\ndiscount that possibility safely if you\ngo to the red thing first you'll notice\nthat the scenario doesn't end so perhaps\nyou'll update police and say okay I need\nto go to the door instead of the red\nthing going to the door you realize that\nthe scenario still doesn't end so you\nneed to consider that perhaps the humans\nwant you to just stay at some place\nforever\nso either stay at a red thing or stay at\na door and you get continually rewarded\nnow supposing that those are your only\ntwo options then if AI you were to ask\nyour Creator who either wants you to\nstay by a red door by a door forever or\nstay by a red thing forever if they\nasked 'are kraid whether or not they are\na suppose you know that your Creator\nsteward either wants you to stay by a\nred door red thing or by a door if you\nsaid look I think you're either a door\nMaximizer or a red thing Maximizer\nshould I just maximize whichever utility\nI find most likely to divorce the answer\nis a resounding yes Stuart says if aiu\nis unbiased and finds that it's more\nlikely I'm a door Maximizer standing by\nthe door forever is optimal vice-versa\nfor if I am a red object Maximizer the\npoint is that\ngiven these utility functions you if you\nbelieve the AI is unbiased as to\nwhichever one it thinks is most likely\nthen even if say for example you want\nthe AI to stand by a door forever and\nthe AI says I will stand by the door\nforever with probability P and I will\nstand by the red thing forever with\nprobability one minus B that will\nmaximize your expected utility as the\ncreator why well in this scenario the\nutility we get from going from one thing\nto another from the red object the door\nis zero so essentially we're incurring\nthe opportunity cost in going between\none object or another so just provided\nthe Creator doesn't know what\nprobabilities the AI is assigning for\nstanding next to a red thing we're\nstanding next to a door forever the\noptimal policy should be off the form\njust go and stand in a place forever I\napologize if that was a bit confusing\nnow Stuart goes on to say that in more\ncomplicated scenarios you're going to\nhave a great deal more utility functions\nthat are plausible and you need to be\nable to ask your Creator some questions\nto cut down the space of functions so he\nrefers to a paper where they consider\nsome space of possible reward functions\nhas some volume fee and the true reward\nfunction that you can't really specify\neasily is this red dot here in the left\nhand picture\nnow the gray picture the middle picture\nhas a gray area which sort of represents\nirreducible uncertainty in the reward\nfunction we've encountered this before I\nthink in one of Rohan Shahs papers where\nhe mentions that you cannot fix a\nutility function from a given policy no\nmatter what a human behaves like there\nis no unique utility function which you\ncan assign them so there's some\nirreducible uncertainty in the space of\nreward functions that's possible but the\nAI can still ask some questions and\nnarrow the space down which is what's\ngoing on in the third image so the point\nof this image is that you are able to\nget with in an arbitrary accuracy of the\nideal reward function up to irreducible\nuncertainty in a certain number of steps\nthe steps is bounded it doesn't increase\ntoo rapidly so asking queries is not in\nprinciple too hard there are some\nrelatively efficient algorithms existing\nbut you might ask that why does it still\nfeel very wrong that aiu happens to be\nstanding in one place forever\nlooking at that situation we think hmm\nthere's something very wrong there so\nwhy did the example feel wrong as Stuart\nexplains it's because we have some\nimplicit beliefs about our values we\nexpect that the true values we hold are\nhard to optimize there are penalized and\npenalized in probability from the ai's\npoint of view our values might be\nextremely complex and fragile they may\nhave diminishing returns or it could be\nthat we have some contradictory values\nso there will be negative trade-offs\nand of course there is one final element\nwhich is to do with regression in this\ncase it's called predictable\ndisappointment and it's a reference a\nfew times I need believe a common\narticle that's referred to is called the\noptimizers curse essentially it says\nthat if you construct some prediction\nfor the true reward given a proxy you\nwill usually tend to overestimate the\nrewards you'll get based off the proxy\nthis is because if for example you look\nat a data set you see that x and y which\nare the proxy and the true reward are\ncorrelated but near the most extreme\nsection for the proxy value we see that\nit starts to the regression starts to\nbreak apart and it is no longer a good\npredictor of what reward we actually\nexpect this kind of predictor we use is\nusually called maximum likelihood it's\nquite common in machine learning it's\nbasically a type of Bayesian estimation\nof what it should occur based of the\nmaximum likelihood is like Bayesian\nregression in a way except it puts no\npreference for what kind of parameters\nyour model should use\nthis results in models that focus mostly\non the bulk of the data set doing well\nbut models that discard the edges these\nbits here whether the regression comes\napart\nthey're given less they're not given\nmuch um waiting by the maximum\nlikelihood method Bayesian regression\ncan overcome this in certain instances\nand it tends to avoid this predictable\ndisappointment here so you can see the\npurple line which is the Bayesian\nregression is at a much shallower angle\nfrom this we see that we are much less\noften at the extreme values of the proxy\nto get a prediction that's too high\nso from this point onwards we'll just be\nassuming that whatever AI is doing it's\nusing Bayesian methods with a relatively\ninformative prior so here is one of the\ninteresting sections so suppose AI you\nnose your nose their Creator your\ncreator is either door Maximizer or a\nred object Maximizer but their rate of\nreturns for how long you stay next to a\ndoor an extra red object decreases with\ntime there are diminishing returns this\nmakes a substantial difference to the\nbehavior of the AI if the rewards begin\nto drop off rapidly enough then\neventually if you stay there too long\nthen a staying one more time step will\nresult in a very minuscule increase in\nyour reward this will get smaller and\nsmaller and smaller in this graph you\ncan see that this is plotted as the\nderivative of the value as time goes\nas the more time you spend standing next\nto a door or next to a red object you\nget less and less out of it for each\nextra turn eventually this will become\nso small that your expected utility will\nsay okay now we should switch over to\nthe other object you will get a slightly\nbetter rate of returns even though I\nmight be less likely to be the true\nreward function\nthis requires sufficiently fast enough\nthis requires the rewards to drop off\nsufficient and fast enough if they\ndecrease for example like 1 over the\nlogarithm of the number of steps you've\nstayed next to the object then behavior\nis quite bizarre the AI will determine\nthat the optimal policy is to switch\nbetween objects only after it stays at\none for infinitely long and then it\nswitches to another which is rather\nuseless to its human creators so\ndiminishing returns can prevent this\ngood heart like behavior we saw in the\nmaze and get AI is to swap between one\nlocation and another so that's one way\nwe can deal with good hearts law another\ncase is that you may encounter some very\nbizarre situations when attempting to\noptimize value functions say for example\nyou're playing a game and you are\ndriving around a track there's a small\ncircle where if you repeatedly loop\naround you can get infinitely high\nrewards this kind of perverse behavior\nmakes it most rewarding option to pursue\nobviously we as obviously the humans\nwould not wish for this if they tell\nthis is quite a common problem it's the\nusual extremel good heart situation or\nit may just be that there's a lot of\nlow-hanging fruit of different utility\nfunctions that are very easy to optimize\ncompared to the true reward function\ntrue utility and the AI will just\nmaximize that collection of utilities\nhow can you as an AI possibly deal with\nthat if you know about extra more wood\nhurting well one way is by normalizing\nif you rescale the rewards so that the\nvery most any policy will give you no\nmatter the action you take is just a\nreward of size 1 and at the very least\nis zero then suddenly extreme situations\nlike this become useless even though it\nthere is a very long loop and you can\ncontinually rack up rewards it doesn't\nget you that far because you're the\nwords have been scaled down low enough\nnow instead of the extreme values of\nrewards being what dominates it's just\nthe probabilities that dominate the most\nlikely utility functions will be the\nmost likely to be optimized or the Asian\nmixture is governed by probabilities and\nif there's a lot of very likely utility\nfunctions then\nruff some of those will be what's\nmaximized but there's another problem\nwhat happens if the true value function\nis incredibly unlikely well for example\nthis might occur through some fairly\nsimple situations aiu has a sensible\nprior it penalizes complexity and basis\nplaces low probability on very complex\nsituations but a human comes up and\ntells you okay human project value is\nfragile and extremely complex then that\nmeans that your prior will naturally\nresult in a very low probability for the\ntrue reward function\nwhat can a IU possibly do well if we go\nback to the example of the maze then\nwhat you might try is maximizing the\nminimum value that a you are maximizing\nthe minimum reward you expect you'll you\nwill possibly get so for example say you\nthink that it is overwhelmingly likely\nto be the case that your Creator wants\nyou to stand by doors but you suspect\nthat if you don't stand by any red\nthings then the whole situation will\ncollapse for humans it will become\nsomething undesirable so you anticipate\nokay I should try and alternate a little\nbetween the two try and give some\nresources over to the very unlikely stay\nby a red thing this way both a both\ncases are covered and both cases would\nget some value\nand it may be that humans would be\nwilling to accept this like a more\nplausible example is say you are\nexisting in a world where there's a\ntrade-off between freedom and happiness\nif you optimize solely for freedom or\nsolely for happiness then that's\nprobably going to break things very very\nbadly\nyou could either be completely free\nperhaps say I don't know a series of\ndictators or some hyper capitalist\neconomy with some very strange dynamics\nor you could just be wire headed and\njust your brain flooded with endorphins\nboth of those cases are obviously\nbreakdowns that a human would say yes\nthat's just horrible and if you ensure\nthat rather than maximizing only one\nthing you maximize some mixture you\nensure this some base level of resources\ndedicated to either then it's less\nlikely human values will collapse but in\ngeneral it's not so easy to come up with\na method that deals with the true value\nfunction being very unlikely this is an\nopen problem according to Stewart and he\nsays there is quite an important one all\nright\nso we will switch over to another\nsituation now where we'll see that\nthere's still good hard like behavior\nbut in some cases it's not so bad in\nother cases it's quite terrible so for\nexample say you have a small cleaning\nrobot that can move between squares its\nrewarded for going to the left or to the\nright like the maze the to reward\nfunctions that are possible or mutually\nexclusive that is to say if you go to\nthe left one utility function will give\nyou some reward but the other will not\nhowever - it adds something to this and\nyou get the second figure essentially\nhere he said that if your Creator is a\nleft Maximizer they'll give you utility\none for every time you are on the left\nsquare utility 0.7 every time you were\non the bottom left and utility 0.5 every\ntime you're on the bottom right if your\nCreator is a right Maximizer the\nopposite applies so in this instance\nthere is a trade off the reward\nfunctions are not usually exclusive if\nyou go to L R or R L you can ensure that\nboth utility function your Creator is\nsure to get some value out of this\nin this case the optimum move for an\nagent is to move to LR or RL depending\non whichever utility function and things\nisn't more likely this is certainly good\nhard like behavior you are stuck doing\none thing forever it's very pushed to\nthe limits but it's not such a bad\noption all utility functions against\nsomething is not really much perverse\nincentives etc it's a relatively okay\nsituation in the second figure however\nwe add two more two more squares that\ncan give rewards however there's a\nnegative trade-off a left Maximizer will\ngive you 1.5 utility if you go to L\nminus R and they will give you utility -\nnot point 1 if you go to our - L the\nopposite is true for a right Maximizer\nin this case the optimal Bayesian move\nis go to L up minus R and stay there\nforever\nthat's quite good hard like behavior and\nwe might view this as bad because\nthere's distinct likelihood for the\nright Maximizer to just be suffering for\neternity as the robot is stopped\ncleaning that square this shows us that\nin general positive while not in general\nbut we might generalize this to positive\ntrade-offs partly helping with good art\nlike behavior it cuts the it reduces the\nsting of it negative trade-offs however\nmake things worse and Stewart\ngeneralizes this little by looking at\nthe shape of Pareto frontiers so Pareto\nfrontier is defined as you have to\nyou have some resources that you can\nallocate towards certain actions like\nsay you have I don't know a number of\npaper clips you can produce on the\nx-axis and a number of pins you can\nproduce on the y-axis and there's a\nbunch of people with different utility\nfunctions saying oh I want you to make\npins I want you to make paper clips with\nsuch-and-such in such-and-such numbers\nor in such-and-such ratio a particular\nuse of resources is called Pareto\noptimal if there is no way to shift\naround the resources that could improve\nthings for one agent and not ruin things\nfor other agents so every single point\non these lines that you see in these\ngraphs are different ratio optimal\nscenarios they if you try to move to\nanother point on another line then there\nis no where you can move to that would\nmake everyone happy now there's\nobviously a few different graphs here\nand the shape of the cross is quite\nimportant the left graph is quite nicely\nrounded it's convex and in it any\nassignations of resources that's Pareto\noptimal is going to have a little bit of\nsomething for every utility function you\ncan just see by the shape of the curve\nthat is quite far from the origin at any\npoint there are a fair bit of resources\nbeing\nused for each utility function however\nbecause it's curved what that means is\nthat say if you decrease the number of\npaperclips you have over here say you\nknow sorry this is bins so say you\ndecrease the number of pins by some\namount then you will decrease the number\nof paperclips by quite a large number\nsorry you'll increase the number of\npaperclips are quite a large number so\nfor a lot of agents it's quite likely\nthat you will be producing a lot of\nvalue because you're only reducing the\nnumber of thing number of pins little\nbut you're gaining quite a substantial\ndegree of paperclips\nright that's even more than I thought\nhowever as you go further and further\nand further eventually you hit a point\nwhere you no longer get much much of a\nreturn by increasing the sorry by\ndecreasing the number of pins and\nturning it into more paperclips so\neventually your most of the utility\nfunctions you encounter are going to be\ncontributing something in pins and\nsomething in paperclips and there will\nbe very there is no scenarios where you\nwill just focus solely on paper clips\nand just keep unbounded ly increasing\nthe number of paper clips or the number\nof pins this is different in the case of\na flat for a ratio boundary in this case\nyou\ncan trade off the number of paperclips\nyou produce for some number of pins no\nmatter where you are and say if you\nstart out here then any other point is\njust as likely so the extreme points\nprovide satisfice as many utility\nfunctions as basically any other and the\nextreme utility functions become\nsubstantially more likely to be\noptimized by an AI by AI you if the\nfrontier is curved inwards or it's\nconcave then in this case extreme values\ntend to be preferred because you can\nkeep for a small decrease in the number\nof paperclips\nhere you get a larger increase in the\nnumber of pins and you can constantly do\nthis and just drive things up more and\nmore and more until eventually you have\na huge number of pins everything is just\nturning into pins and there's no paper\nclips and that's quite an extreme\nscenario which generally you don't like\nthat's a very classic case of good or\nlike behavior so we see that we might\ngeneralize the earlier comments about\npositive and negative trade-offs in\nterms of the shape of Pareto frontiers\nin more general scenarios and we can use\nthat as a criterion to check whether or\nnot we expect\nto encounter substantial good art like\nproblems and I believe that's it I am\nterribly sorry for explaining all of\nthat so badly but if you have any\nquestions then you will endeavor to\nanswer them I think you explained things\nvery very well", "date_published": "2020-05-07T20:30:59Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "14063c77e72d3562030186062d4409c4", "title": "144. Value Learning with Rohin Shah", "url": "https://www.youtube.com/watch?v=Xvql4fGBoBA", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 144 in the\nAIC t-dot-com reading group tonight we\nhave roving shot joining us from the\nCenter for human compatible AI and he\nwill answer some of the questions that\nwe have that have come up during the\ndiscussion of the value learning\nsequence hello Rowan so the first\nquestion is with regards to non cold\ndirected artificial intelligence and\nagents what what is the central example\nof a non goal-directed AI that you have\nin mind when you wrote this sequence I\ndon't know if I have a central example\nof some non goal-directed I think I had\ncentral examples of goal-directed a eyes\nand was thinking of like it might be\npossible to not have one but perhaps in\none example it could be like so firstly\nthere's like all of these good\nold-fashioned AI expert systems we're\ndoing basically symbol manipulation in\norder one thing you could have imagined\nthat they were doing with symbol\nmanipulation in order to answer\nquestions or generate stories or things\nlike that and sort of notably there's no\nutility function or loss function that\nthey're optimizing they're just like\nmanipulating symbols according to rules\nthat we program in and then out comes an\nanswer and I don't think it was like a\npriori obvious that such an approach\ncouldn't have led to very powerful AI\nsystems yet I would just not expect any\nof the problems that we think about with\ngoal-directed agents to show up with\nsuch a system if it did become super\nintelligent now it turns out that like\nokay we investigated this approach and I\ndidn't really work but sort of if I take\nthe position of not having to run that\nexperiment and like said could we build\na symbol manipulation yeah that works I\ndon't think it was obvious that the\nanswer was no yeah\nanother example would be basically\nprobabilistic inference based approaches\nso you could imagine a system again\nlet's say a question answering system\nthat basically well maybe that one has\nlike a similar flavor to the expert\nsystem just with more probabilities\nthrown in in order to make it less\nrule-based\nbut that one also would not have a loss\nfunction or an objective function that\nit's maximizing or minimizing and I\nwouldn't expect the normal problems to\narise done either okay one more thing if\npeople have questions or would like to\nask a question then please in the chat\nwindow please write that you have a\nquestion and I'll maintain some kind of\ncue and until someone says they have a\nquestion then I will just ask some of\nthe questions that people have sent to\nme so the next question is actually from\nthe a children's movie that my my\nchildren are watching about a a Artwalk\nrobot because this is a one of the\nexamples that you give off an Uncle\nTerry agent and a teen that always\nchooses the action that starts with the\nletter A and it's further an imitation\nagent in that it imitates a an artwork\nbut it's still obviously so is this an\nexample of an ngon goal-directed agent\nyeah I think one of your actions is like\naardvark imitation then I don't know I\nwas imagining something more like the\nactions are things like moves left\nrotate your arm in some way and things\nlike that I think if you imagine actions\nat that lower level then you it's like\npretty clear that this should be a non\ngold directed agent it's like going to\ndo just random stuff in the world that's\nreally not going\nto be trying to optimize the world in\nany way it's not going to be subject to\nany of the convergent instrumental sub\ngoals because it's not maximizing\nanything it's just a very simple\ncomputation it's like look at the names\nof my actions which one starts with a\nand then choose that one now if your\nactions themselves are complicated\nenough that they can be modeled as goal\ndirected like aardvark imitation I mean\nthen sure maybe that would be goal\ndirected and I could believe that but I\nwas not imagining such like complicated\nactions when I gave that example okay um\none of the other well you give a number\nof non co-directed agents and I've tried\nto write all of them down here with like\na hurricane the lowest is an agent that\ndoesn't take an action ever that's\nprobably not gonna rekted and then there\nis another few kinds 1 2 4 which to me\nare obviously completely unused useless\nbasically and then there are a few that\nare a bit more substantially less\nuseless but become more useful the more\ngoal-directed they are so is there there\nare two trade-off between how useful and\nhow goal-directed they are that seems\nplausible I am not totally sure well\nlike I said I think and into intuition I\nhave is that like a priority we didn't\nknow that expert systems wouldn't work\nand expert systems seem like they're not\ngoal directed but sort of we then did\nobserve the fact that expert systems\ndidn't work and this I think I've become\na bit more sympathetic over time to the\nnotion that like goal directed miss is\npretty strongly correlated with\nusefulness\nI think most of my point in the sequence\nis more that like it's not a like law of\nmath that AI systems must eventually\nbecome goal directed it's definitely\ndependent on what we actually build but\nit seemed\nI think there's a reasonably compelling\ncase to be made that because of how\nuseful goal-directed agents will be we\nwill end up building them I looking at\nthis hierarchy I think it's seems\nbasically right to me I'll note that\nlike number seven I mean that one the\none where you have an agent that helps\nthe goal of some other agent\ngoal-directed 'no smite not be the right\nconcept for this one but it does seem\nboth useful and more safe whether it's\nor not it's goal directed I'm not sure\nmaybe this just means that goal directed\nis not exactly the right concept for\nwhat I want to think about but like\nthat's sort I think number the that kind\nof agent is an example of an agent that\nmight not have all of the safety risks\nthat we associate with explicitly\nexpected utility maximizers\nbut still it was very useful okay\nI think there's a question in the chat\nRobert please go ahead yeah I just was\nwondering about the this lexicographic\nsorting it's it's distinct from an agent\nwhich wants to take actions that are\nhigh in the alphabet because like if you\nhave a if you have enough of a time\nhorizon then you might have convergent\ninstrumental goals towards you know in\nthe short run doing some things in order\nto ensure that in the future you always\nhave a lot of odd marks around so that\nyou can continue in the long term to\nreliably be near the beginning of the\nalphabet but that's like a different\nkind of agent is it is the relevant\nthing that its preferences are defined\npurely over the action space and not\nover like the world state or well\nhistory is anyway I agree with\ndefinitely the first part that this is\nvery distinct from an agent that wants\nto take the first action sort of\nlexicographically I would more say I\nmean you say that\nlike then your second part was like is\nthe is the distinction that the\npreferences are over actions instead of\nStates I like would somewhat say that it\ndoesn't have preferences because like\nI'm imagining a program that's like take\nthe list of actions sort had\nlexicographically Index take index zero\nand execute that right nothing\ndefinitely does not have convergence\ninstrumentals up goals right like if you\nsuddenly give it the take over the world\naction well they're taking over the\nworld action is still not\nlexicographically first and it's not\ngoing to take it yeah right\nlike okay to the extent that it has\npreferences they are the actions rather\nthan that yeah I guess you can make it\nthat way it's not that's not an\nunreasonable way of muddling it it's\nfair I'm trying to think of like other\nexamples in that like can you can you\ndefine anything that's purely\npreferences over actions that would be\ngoal directed that seems unlikely if\nyour actions are varies if your actions\nare simple because if you define\npreferences only over actions and don't\nlet the agent reason over long\ntimescales then it's forced to act act\nextremely myopically it's basically just\na reflex agent decorously that's just\nlike only looks at which action would be\ngood to do right now with no regard to\nwhat the state is I would be very\nsurprised if anything like that was\nco-directed but also not useful yeah it\nalso would not be useful\nokay so the next question is one of the\nin economy know exactly which section\nyou talked about all there's a section\ncalled all behavior can be rationalized\nas expected utility maximization where\nyou have a rather neat construction that\nI've given here where you make a utility\nfunction which which if you try to write\ndown this utility function it will be\nvery very large and I feel when I read\nthis I was reminded of Charles Chinese\nroom argument which kind of makes the\nsame way to I don't know want to say\ncheat your intuitions but but you think\nof a when people normally think about a\nutility function they think about\nsomething very compact and in fact here\nwe have a truly enormous do you think\nthat's a fair criticism that a utility\nfunction that is too large to be written\ndown it's not really a utility function\nI I agree that it's pretty analogous to\nthe Chinese room argument I so I think\nso to be clear I do think that if you\nwrote a program that where you write\ndown a utility function which would be\nsort of by necessity compact and simple\nand then you like maximize the\nexpectation of that utility function I\nthink that is dangerous\nI think it has all the convergent\ninstrumental sub goals and we maybe not\nall of them depends on what the utility\nfunction is but usually it will have\nthat and probably this ends up killing\nyes if it's like maximized appropriately\nwell or sufficiently intelligent my\npoint with this construction is that\nthere's this argument that you can say\nis that that knowing nothing about how\nthese like in in in that story that I\njust laid out we know that there is an\nexclusive utility function and inside of\nthe air and the a is like thinking about\nthat utility function on how to maximize\nit so if that's your scenario then I'm\nlike yes\nI agree with all of the classic\narguments seems dangerous I think\nthere's another argument which people\nsometimes make it seems common to me so\nsorry let me turn that off there's a\nsorry yeah there's another argument that\npeople sometimes make which is that well\nlet's leave aside we'll say nothing\nabout how the air works on the inside\nbecause who knows what super-intelligent\na-- is going to look like maybe it's\njust a bunch of deep learning maybe like\nwe figure out how to get expert systems\nto work maybe we do something entirely\ndifferent who knows is like some we can\npredict it and invest what we can say so\nthe argument goes and I disagree with\nthis argument is that since this is a\nsuper intelligent AI system you won't be\nable to find any coherence violations in\nits behavior in particular it will\nsatisfy all of the vnm axioms because if\nit doesn't sorry the vmm axioms this is\nthe bond I'm and Morganstern utility\ntheorem there are some axioms you can\nmake the argument that the AI system\nwill satisfy these axioms because it's\nnot even like extract resources out of\nthe AI which you shouldn't be able to do\nwith it super intelligent and then\nbecause then you run forward the\nargument you use the bun Lyman\nMorgenstern utility theorem and you say\ntherefore I can model the AI system as\noptimizing a utility the expectation of\nthe utility function even though I don't\nknow exactly how the AI system works on\nthe inside I think my point is that if\nyou're trying to run the argument as\nlike I can model the behavior by a\nutility function that's a vacuous\nstatement because you can model all\nbehavior as a utility function and like\nwhen you run the argument this way\nthere's no reason to expect the utility\nfunction to be simple or compact like\nyou need some other sort of assumption\nin order to get to a simple compact you\na function that makes a lot since I've\ntried to elaborate a bit more here on\nthe next slide on basically not just\nwhat would be a I have but what is\nactually inside our heads and it's it\ncompressible well I can try to make a\nscale like it could be something really\ntrivial uh like just evolution basically\nand there was a philosopher I forgot who\nwas quoted in the sequences that broke\ndown like 50 different things like\nfriendship and love and art and novel\nexperiences and all these cathing and\nthen what we'll probably see in naive\nambitious value learning if that's\npossible which is you know something\nstill possible to write down on a piece\nof paper or something like that and then\nsomething that can't be smaller than a\nbrain and something that is even larger\nis that this is a reasonable model yeah\nyeah I think if you're going to use\nhuman values that makes sense I have\nlike I think there are some decent\narguments that like what we should be\nthinking about is not exactly human\nvalues because they're not really\nwell-defined right now and we should\ninstead be thinking about the process by\nwhich humans have values that's yeah so\nI had this argument comes from many\npeople that Gillian Hadfield is one that\ncomes to mind and yeah but if you're\ngoing to use human values my guess is\nthat they're going to be in the highly\ncomplex region it's not obvious that\nthat's the case you could imagine that\nlike after a long period of deliberation\nall of humanity comes to the consensus\nthat actually no it's like hedonic\nutilitarianism with this very simple\ndefinition of what happiness is after we\nlike solve problems of consciousness and\nwhatnot and then you're like okay that\nis in fact just what our value\nand then that's like pretty simple and\nwe can encode it into an AI system I'd\nbe surprised if that happens but it's\nnot inconceivable okay um and then\nplease feel free to chime in if you have\nany yeah could you maybe go into more\ndepth about the argument that we should\ncare about not human values themselves\nbut the process in which they arise or\nat least for me to want somewhere where\nI can become less ignorant about it yeah\nI don't know if any good resources\nwritten on this the I guess the yeah the\nshort version of the argument is humans\ndon't really have any consistent set of\nvalues lots of people have like very\ndifferent moral intuitions ah you caught\nnothing\nhmm you caught me okay yep very\ndifferent moral intuitions lots of\ndifferent approaches new ethics\nespecially when you posit weird thought\nexperiments that aren't in like\nsituations that people normally\nencounter so like it's really it doesn't\nseem reasonable to say that humans have\nvalues it seems more reasonable to say\nthat humans have norm which are suited\nto the current context in which they\nlive if you change the context the norms\nneed to be updated and changed but that\ndoesn't just sort of happen\nautomatically we've seen this like in\nthe past with as technology progresses\nso you know as soon as we get if we ever\nget for example the technology to upload\nhuman minds or then make digital copies\nof minds currently we have a norm of\nevery entity every being every person\ngets one vote and everyone's votes count\nequally in the scenario where we've got\na bunch of digital minds that can copy\nthemselves that is not going to work\nwell because at that point it just\nbecomes everyone whoever has the most\nmoney and just buy a bunch it can just\ncreate a bunch of copies of themselves\nand they form a giant voting bloc that\ngets whatever they want to be passed if\nwe're still using democracy with one\nvote that's equal to everything and so\nsome something we'll have to change\nthere will have to change norm somehow\nbut you know yes like a utility monster\nwell I feel like a utility monster is\ndifferent but sure anyway just to\nelaborate the process by which we\nuncover these values I think Wade I\nsuggested like a two step process where\nfirst we spend a million years figuring\nout what our values are and only in the\nsolar system and then once we have\nfigured that out then we you know tell\nthe rest of the universe or something\nlike that and I guess in the first step\nwhere we are just trying to figure out\nour values\nit doesn't actually need to be very\noptimal right then we we can have a very\nsuboptimal process there and then as\nlong as we don't optimize it very much\nGod house law is not going to be a\nproblem yeah that seems right I think I\nagree with that\nthe example that we figure out how to do\nthat yeah oh one more example I like\nactually of this like learning norm\nlearning values or values updating over\ntime like I recently found out that\nprivacy was not a thing in the past at\nsome point at some point like bedrooms\nwere invented and that they were still\nlike somewhat public but over time they\nbecame more private and now sort of\neveryone thinks that privacy is this\nlike deeply enshrined value at least in\nthe West and it just was not for most of\nhuman history and that seems like the\nsort of thing where the norm I still\ndon't know why privacy of all evolved to\nbe one of our values\nbut it's this is the sort of exact thing\nI mean when I say that like there is\nsome sort of process that creates values\nand they change over time and we\nshouldn't be we probably shouldn't be\nlike trying to figure out what human\nbellies are and then hard cut them into\nthem AGI or something like that yeah but\nI kind of have a counter-argument to\nthat saying well you know I don't really\nor at least I care about my current\nvalues and like I get that the process\nwhich created my values are wouldn't\nnecessarily create my values and a Miss\neduation um but I mean they did create\nmy values and I care about my values not\nany other hue you know not any other\npossible news values except for the fact\nthat one of my values is fair I care\nabout other people's values but you know\ndisregarding that it's like now that I\nexist I have a strong strong bias\ntowards my current values so wouldn't\nyou say that or wouldn't creating AGI\nthat is based upon the process not\nfulfil my values as well as just trying\nto fulfill my values I predict as an\nempirical fact that are like this is an\nempirical predictions that's whatever\nyou call your values right now even if\nthat like you would up your own will\nfree of manipulation or whatever changed\nthem somewhat as new technologies get\ninvented for example the democracy the\ndemocracy example is a good one yes I\nmean I'm gonna change my mouth like my\nvalues tomorrow when I wake up and I'm\nlike a bit sleepy yeah probably gonna be\nless tolerant of most things and when\nI'm like you know doing that so like you\nknow my values changed within a single\nweek so yeah that's a really good\nargument yeah note that there is change\nin values and sort of the entire point\nof thinking of human values in the first\nplace was to give our AGI system a\ndescription of what we want that can\nstay static and that it can optimize\nforever and I'm just like we don't I'm\nnot sure such a static description\nexists even in principle maybe it does\nhe could imagine something like what I\nwould think if I were given like a\nmillion years to think about it and\nmaybe that's sufficiently good that as a\nstatic description that it works but I\ndon't know okay well I just like to just\nadd something there I'm reminded of\ntoday I came across the term flashlight\nmethod of writing of writing say a novel\nand the idea somebody said that they're\nwriting a novel is like driving a car in\nthe dark you can see as far ahead as\nyour headlights show you but it that is\nenough being able to do that is enough\nto complete you know a very long journey\nby car at night and you can get through\nit through a whole novel but if you can\nif you can just write say one scene at a\ntime and see what develops from that\nwhere I mean some some writers don't\nlike that they prefer to have to know\nthe route map of the whole novel and it\nseems to me that developing values is\nlike this that we it could be a crazy\ndream to think about anything convergent\nin by the way of our ethic that's going\nto last a million years and that's we\ncan only try to but use systems\nartificial intelligence political\nsystems anything like that which we can\nendorse but which have enough\nflexibility and development in them that\nour grandchildren can carry things on\nand they will endorse things that we\nwouldn't approve of you know just as we\napprove of things that are our\ngreat-grandparents would not have\napproved of and as as long as as long as\neach generation can endorse what say the\nnext two Jenner\nnations do that we can just progress\nthat way without any sense of heading\ntowards an ultimate goal that we can\nvisualize yeah\nI pray yeah a pretty strongly agree with\nus I think Sauron has some questions for\nme about exactly that perspective\nlater on I believe I have just I'm just\nthinking about about what my what that\nquestion was but in the meantime if we\nare continuing then actually I have some\nquestions a bit more about strategy and\nAI safety strategy and I have wait I I\ndon't have a picture of him there are no\npictures of him but he says that one of\nthe points of goal directed Ness is that\nyou get economic efficiency for free and\nthen you answered in this comment that\nhopefully we can convince the relevant\nactors that goal-directed agents have a\nsignificant chance of causing\ncatastrophe and I've tried to write down\nwhy I'm a bit less optimistic about this\nif we think like as an example the\npeople who are right now using the\nstrongest computer to simulate nuclear\nweapons they are obviously very\ninsensitive to this kind of argument\nthey probably don't care very much about\nexistential risks and and they care\nabout a lot a lot about their country's\npower in the same way and this\nrequirement that we can convince all the\nrelevant actors it's a very very strong\nrequirement and I'm wondering and could\nyou give some more teachers do you truly\nbelieve we can actually make everybody\nstop building called directed agents to\navoid some kind of competitive scenario\nyes I think there's a pretty entangled\nset of beliefs that make me say this\npart of it is that I don't think there\nwill be that many actors that are\nbuilding very powerful AI systems like\nit seems based on\nduring the current state of the art and\na research study you need just a lot of\ncompute for it and it's actually quite\nexpensive which is sort of what you'd\nexpect from the outside view that like\nbig projects that are very impactful\nupon the world will be expensive and so\nhopefully there won't be that many\nactors so that's one thing I think\nanother thing is that like sort of in an\nabstract sense I think everyone agrees\nthat extinction is bad and you don't\nwant it and it's like significantly\nworse than most of the upsides you\nimmediately get by building powerful AI\nsystems I'm not sure if that last part\nbut it seems plausible to me and like\nright now I'm more and more I believe\nthat a bit more than like Oh eggs\nextinction is like comparable to you\nlike building an alliance super\nintelligence that's aligned with you I\ndon't know it seems to me that like most\nactors have like our most large actors\nanyway I have like a decent amount of\nshared interest of like yeah human\nhumanity prosper as everyone is like not\nin a scarcity is in a post-scarcity\nworld types things like that and so I'm\nI think we can maybe hope could get\nagreement of like yeah the extinction\nrisk is really just so large that it\neven though there would be benefit from\nmaking powerful AI systems to each\nperson after individually that would be\nthat's like not worth the extra risk of\nextinction that they would have so\nthat's another thing a third thing is\nserve a dis analogy with nuclear weapons\nwhich is that for for nuclear weapons\nthe you don't nuclear weapons do not\nautomatically go to extinction risk if\none country has nuclear weapons then\nthey have not very much downside to\nthemselves because like you know if they\nbomb some other country that's a problem\nfor the other country it's not much of a\nproblem for them and you know there's\nupside in that they like there is\nso yeah I will also have outside which\nis being able to have more geopolitical\npower but sort of the downside is very\ndifferent and that the downside is a\ndifferent country gets extinct basically\nbut not us whereas with a I to the\nextent that were concerned about\naccident risks and not things like\nlethal autonomous weapons which I think\nis what we're focusing on here accident\nrisks just affect the entire world so\nthat that's one dis analogy now you\nmight argue that like at this point\ntoday nuclear weapons are an extinction\nrisk and so why aren't we disarming I\nthink for that I would say the nuclear\nrisks are only an extinction risk\nbecause of mutualist mutually assured\ndestruction and like if you stop having\nmutually assured destruction then\nnuclear weapons no longer become an\nextinction risk and so you have all the\nincentives for having them again and so\nin order to be in the I don't know if\nit's stable but to be in this\nequilibrium that we're in right now\nwhere we have me where the world is not\ncurrent where like nuclear weapons\naren't really being used that might\nactually just depend on mutually assured\ndestruction being a thing and in order I\nthink in mutually assured destruction it\nprobably makes sense for supercomputer\nis being used to simulate nuclear\nweapons because each actor needs to make\nsure that no other actor gets too far\nahead of them like first strike and\nsecond string second strike capabilities\nin particular are very important under\nmad doctor yeah so and notably none of\nthis applies to the super intelligent AI\ncase in in this task of convincing that\nwe might have to do to expect we can do\nthis in our current epistemic state or\nwill they need to be some changes either\nmore research being done or some kind of\nfire alarm alarm for a ICT and yeah I\ndon't think we can do it with our\ncurrent epistemic state\nfor one thing like man I'm in this\nresearch field I've thought about it for\nquite a while and I'm not sure what\nkinds of extinction wrists are there or\nwhat's a dangerous thing to build and\nwhat's not so I think we need to have a\nmuch clearer picture of that we probably\nwould need to build agreement among a\nresearchers at least before we could\nconvince more all larger actors yeah\nyeah I use an extinction with students\ndisputed so until we get that dispute\noutta for me\ndebating what distinction extinction is\nfara\nyes that's true yep yeah so so all of\nthis I think it's actually not that\ncontroversial among a researcher is that\nan expected utility maximizer I'm sorry\nan explicit and expected utility\nmaximizing with an explicit utility\nfunction that was like super powerful\nwould would end up killing everyone I\nthink the the part that people disagree\non is more whether or not we build such\na thing and what how far away it is like\nmost of the arguments seem to be why are\nyou worrying about this it's way off in\nthe future knows what sorts of AI\nsystems were going to go I don't see\nvery much argumentation that's of the\nform or or Elsi arguments of the forum\nwe will never be able to build AI\nsystems that are that powerful I don't\nreally see arguments of the forum if AI\nsaid even if AI systems are that\npowerful they will not tell us there are\nsome but it's pretty rare it seems to me\nlike the the vast vast majority of\npeople who are actually building a eyes\nmaybe not the eye researchers but people\nactually using this are strongly not\ncaring about this more than they they\ndisagree so it's more it's not because\nthey\nyeah do you think that's true yeah I\nthink that's basically true I might\nespecially if you're not talking about a\nresearchers that seems very true I do\nthink that I mean I'm hopeful that we\ncan like build better arguments such\nthat people do in fact get convinced but\neven if that ends up being too hard a\ntask I do think there will be like fire\nalarm type events not for AGI is coming\nbut for our current AI systems fail and\nweird ways that are dangerous so like a\nsort of example of this right now is\nlike the way that recommender systems\noptimized for basically people being\nangry at each other all of the time I\nthink we'd like basically all agreed at\nleast in the air researcher community it\nseems not controversial to say that I\nthink people agree upon it about it this\nisn't exactly a fire alarm because you\ncan sort of see okay this is the the\nalgorithms are like not particularly\nsmart they're like operating in this\nlimited domain so to go from that to\nlike Opower fillet I will kill us all is\ndefinitely not a step that is really\njustified even in my opinion but I\nexpect that these sort of warning signs\nwill become more and more impactful and\nmore and more obviously connected to\nintelligence as time goes on and that I\nmean I don't like this fact it probably\nmeans that they're like very just large\nharms to humans but it is good in the\nsense that it probably will get it\nprobably will get more consensus on here\nare the dangers of AI and allow us to\ncoordinate to not have those dangerous\nAI systems\nokay great\nI have more questions and that's about\nAI safety without Foom I one of the\nthings I asked you in the email\ncorrespondence but what parts of Nick\nBostrom's book superintelligence and\nyou'd cast his arguments that you almost\ndisagreed with and about half of your\narguments focused on arguments against\nFoom the quick take off so I'm wondering\nin the situation where we don't have a\nfool we don't have a an intelligence\nexplosion but we do still have a what\nperson calls a medium speed takeoff\nsomewhere there is time for an AI race\nthat over one year causes people to go\nfrom a very very basic AGI to something\nthat is super intelligent do you feel\nthis is a how dangerous do you feel this\nis so when you say stupid idea here what\ndo you mean you mean like level of\nhumans let's say you can simulate a the\nreasoning of a human with like an IQ of\n90 using a huge amount of computer so\nthat's how it's not economically viable\nbut this is something that you know it's\nvery amenable to improvement so it can\nso it can be improved and over one year\nwill be improved to a superintelligence\nyep okay make sense yeah that is quite\nfast but plausible I think sort of part\nof my general optimism about us solving\nthese problems is that I would expect\nthat within that year we will notice\nsomething going wrong if something\nactually would go wrong\nand update based on that I agree that\nthere would be well there's at least\ntime for in the air race to happen\nwhether an air race would actually\nhappen is less clear I'm sort of hoping\nthat by the time we get to this world we\nhave some more better global\ncoordination better understanding of\nrisks I'm not sure if that's actually\ngood yeah\nodds that after this year we would wish\nwe had worked more on the AI safety I\nfeel like this is a really contingent\nupon how this happened maybe if I seem\nok right today is when the stupid AGI\ngets built and that in one year it's\ngoing to be super intelligent I am\npretty scared of that scenario that's\nwhatever seems quite bad yeah but mostly\nthat scenario seems quite bad because\nmost of my models seem wrong in this\nworld and then I'm like ok and in this\nworld\nit looks more like we it looks more like\nwe have somehow created an AI system\nthat's really smart and has already\nbecome like really useful to society and\nso can't be turned off or like there's\nno will that will let us turn it off\neven if we could yeah but but really I\nwant to express large amounts of\nuncertainty here mostly because the\nscenario seems to depend on a lot of\nlike external details of how the world\nworks or how we got to this point ok so\nit seems likely I'm not really sure\nabout this that the world is generally\ntrending to some kind of greater\ncoordination cooperation so that in the\nfuture we'll be more able to avoid this\nkind of very hard races how quickly do\nyou feel is the world trending towards\nthis are we are we actually moving in\nthe\ndirection compared to like 20 years ago\nI don't want to speak about the world\nbroadly but like AI for I think for AI\nin particular I think the world is in\nfact moving in this direction I think my\nguess for okay let's preface all of this\nwith I am NOT in the AI strategy\nresearcher I talked to something I\nstarted to researchers sometimes but\nthis is not what I think about whole\ntime so take all of us with lots of\ngrains of salt that said it does seem\nlike the world is in fact moving towards\nmore coordination on AI I think the\npartnership on AI is an example of an\norganization that sort of seems geared\nin this direction and they seem to be\ndoing they seem to actually be doing\nthings like a worry with any such\norganization would be look like oh it's\njust PR for the companies and I I wasn't\nsure if this was true or not and I think\nnow I'm like more lean towards the case\nthat that that that's actually not true\nthat it actually is likely to get things\ndone the made you change your mind oh\ntalking to people who have more\ninteractions with partnership on the I\nthan I do\nyeah also you can see some of their up\nbut they have started producing a little\nbit about but but that was not the major\nsource of evidence I think other things\nare like this is you know I'd actually\nbeen interested to know whether or not\nthis was historically true but this\nmight be a case in which we like foresee\na risk before it actually arises and my\nI now don't endorse this belief but like\nmy vague guess was that historically we\nin fact did not foresee problems before\nthey arise\nthat was like a big reason for how hard\nit was to fix them but as I say that I\nrealized I wouldn't actually know if we\ndid in fact foresee problems before they\narose so maybe we did and just do\nanything about them it is really hard to\nfigure out what our appropriate\nhistorical parallels yes okay then I\nhave another question about narrow AI\nand ambitious AI in the you know in\nchess there is this kind of obviously in\nthe beginning humans were better than\nthe AIS and there was a time where Sybok\nteams of humans and AIS were both better\nthan the humans and better than the EAS\nthemselves and then eventually of course\nthe chest algorithms get better and\nbetter until we get to the point where\nthe humans are just in the way do you\nthink this is a reasonable a you talk\nquite a bit about their you know the the\nhumans provide the goals and then the\nnon-coal directed AI helps and this\nseems like a strategy that would be\ndominated at some point by just goal\nvery good AI without the human to focus\nis great I think if you can define the\ntask that you want done that is probably\ntrue but given that we almost always\ncannot do that like in chess we can do\nthat nice property of chess and go but\ngiven that we cannot in fact do that for\nalmost every task I think there's like\nsome there's probably some sense in\nwhich the goal directed there will be a\ngoal directed AI that is more\nintelligent than the human plus non goal\ndirected AI but I I doubt that there is\na goal directed AI that is more useful\nthan a human plus non goal directed AI\nat least or like if such a thing exists\nthat goal directed AI is somehow getting\nits goals from a human eye\nyeah\nfor like optimizing opinions or\nsomething what are your goals that I had\nin mind was something simple like\nearning money in this case the defining\nthe cult is it's really really easy you\nknow you just have a bank account and\nyou want to make this number as high as\npossible and in in this kind it's almost\nas easy as just to define the winning\nwhat is the winning movement what's not\nyeah I think if you limit your AI system\nto like only its it's only allowed to\ntrade stocks or something like this and\ncan't do anything else\nit's then I probably agree with you but\nalso in such a scenario I think the AI\nsystem is pretty safe like the model of\nan AI system where it ends up killing us\nall requires the AI system to have a\nvery good world model a good\nunderstanding of the world and be able\nto like take actions in the broader\nworld now sometimes you can definitely\ntalk about how it could use the limited\nset of actions it has in order to figure\nout ways of influencing the real world\nor the rest of the world in a way that\nyou didn't intend but that seems to me\nto be a quite a difficult learning\nproblem actually and I think if you got\nto a point where an AI system was able\nto do that you've probably trained it on\na pretty large diverse set of data about\nthe real world\nthat was very fuzzy sorry but I'd be\nsurprised if it just sort of discovered\nthis wait would I be surprised I don't\nknow yeah maybe I maybe I want to just\nexpress confusion and uncertainty for\nthis question\nthen I will go to the next question I\nguess this I don't know if Stuart\nArmstrong has meant to to join us here\nbut one of the earliest part of the\nsequence was where he tried to use\nKolmogorov complexity to see if it was\npossible if assigning correct values to\nhumans had a local murmur of complexity\nand he found in this that basically you\ncan't do this you can assign any any\nvalue to two humans but this seemed to\nme like the way humans do this of course\nwe have things like Hanlan's racer that\nsays you shouldn't assign to malice what\ncould be as I explained by stupidity and\nthe same way you know if humans fail to\na multiply two very very large integers\nit's probably not because that's our\nvalue is because we can't multiply this\nand some of the examples that your\nArmstrong gives like perfect anti\nrationality with the opposite goals\nlet's obviously also something that\nhumans and we we basically never\nconsider that because we we don't\noperate with a makarov complexity only\nwe we have some kind of a human brain\nand given that the person in stand in\nfront of us is a human with a brain that\nlooks roughly like our own then what is\nthe simples the simplest explanation\nbased on that do same thing a scheme\nlike that could work with any scheme\nlike this is that you have to make some\nsort of assumption about the human so if\nyou don't make an assumption about the\nhuman then or just generally when you're\ninferring values and you want to\ndistinguish them from bias there's like\none way to think that is that there's\nthis values object and they go through\nbiases in order to produce behavior and\nthe behavior is not optimal with respect\nfrom to the values because of the biases\nand sort of the post after that one or\nbefore that one I forget the order I put\nthem in\nby Paul Christiano about the easy goal\nentrance problem still being hard so it\nmakes this point that like in this\nsituation if you want to get so the the\nthing that you observe is human behavior\nand you need to decompose it into values\nand bias if you want to and then you can\nthen once you do that you can take the\nvalues and optimize it to get better\nbehavior which better optimizes for our\nvalues now if you want to outperform the\nhuman behavior that means you need to\nfigure out the direction of the bias or\nthe mistake model as he calls it which\nsurvey see the sort of Stuart's result\nis saying well you can't just learn this\nmodel and yeah you can just learn what\nthe human biases are which means you\nhave to put some assumption about it and\nso then the values that you infer and\nthe optimal policy corresponding to them\nare only as good as your model of human\nbiases and I think that it is pretty\nhard to get such a model of human biases\nand sort of even if we did to the extent\nthat we're trying to do ambitious value\nlearning I like take it as basically\naxiomatic that any any model we make\nwould be mislead somehow I get one\nperfectly capture human biases and the\nlast two posts of that chapter by Jacob\nStein hard and Hawaiian Evans on miss\nspecification sort of argue that when\none of your assumptions is miss vessel\nis pretty badly wrong or isn't as\nspecified lots of bad things can happen\nand I would expect this to happen\nespecially if you were trying to do\nambitious value learning I probably\nagree with that but but I just point out\nthat some of the examples of biases you\ncould probably look at a human brain and\nthen saying okay when the human is asked\nto multiply to very large numbers he\nanswers I do not know and from looking\nat deeply into the brain you can figure\nout that that is because the human brain\nis\nunable to multiply these two large\nnumbers so it's it's an a mistake from\nthe human that it doesn't give the right\nanswer rather than because it's his\nvalue to say I don't know I mean this\nseems like something that could could\nwork and could be expanded opposite yeah\nthat seems right\nyou know what do I think about that\nand just check if Stuart Armstrong has\njoined us because he might have a\ncomment yeah no I don't think is James\nyeah I guess in the multiplication case\nthere's a notion of correctness and I\nthink you're basically making you think\nyou're making the assumption like if I\nwere to take this frame of how do you\ndecompose values and biases I think\nyou're like essentially making the\nmodeling assumption there that actually\nokay yeah you could like sort of look at\nthe human brain and be like this\nparticular information in particular the\nproduct of these two numbers never\nappears in the brain and maybe that\nallows you to infer something yeah I'm\nnot sure maybe there's something to do\nwith this I think I would be surprised\nif this like managed to get you all of\nhuman biases like you could get some and\nyou can make some progress with that\nmaybe but it seems very difficult to\nfigure out like certain notably humans\nalso are not able to do this is it like\na bias that we care more about people\ncloser to us geographically than far\naway or is that just part of our values\neffective altruists will tell you it's a\nbias lots of other people will tell you\nit's a value I think I agree with this\njust to be clear we've we've now spent\none hour and ten minutes on this and if\nyou need to go then please say so\notherwise I guess we have time for a few\nmore questions so I should have quite a\nbit well if you don't mind then I don't\nmind either\nand one of the more controversial\nstatements that you made is that\nthat this absolu article by eliezer\nyudkowsky is in fact vacuous that a\nsuperintelligence will look like an\nexpected utility maximizing and the the\nargument is si seat our arguments\nagainst some straw arguments for\ninstance that in in history there's been\na lot of cases where a less smart person\nhas outsmarted someone who is smarter\nand that might be a way to have a ICT\nthat even if the the computer is really\nreally smart then you know it might be\nbook smart nerd smart and don't know\nanything about politics of power or\nsomething like that and in this case\nthen by modeling it as an expected\nutility my maximize that cannot be\ncheated in in this particular way it's\nlike a counter-argument to that do you\nfeel that some people hold maybe not\nexplicitly this the first straw argument\nand do you think that the article is a\ncounter-argument to that which are sorry\nI'm not actually sure which one this is\nsupposed to be representing is this a\nstraw version of eleazar's argument or\nmy argument no no a third person really\nimagine someone who is not worried\nparticular worried about air safety\nbecause he believes that if we build an\nAI then sure it might be really good at\nmathematics but we'll still be better at\nyou know military stuff and that's why\nwe will be able to defeat it and then\nactually it will we won't have any easy\nway to cheat this machine I see I have\nnot actually run across anyone who\nreally claims this first argument maybe\nI've heard it occasionally from like\nrandom people like like uber drivers and\nthings like that\npeople like that who I end up talking to\nabout my research but not from anyone\nwho's thought about it seriously but I\nthat some of eleazar's writing would be\na good counter-argument to this latest\nargument then the other straw argument\nthat you make in someone who is who's\ntalking about a eyes and curly imagine\nthat he's writing a computer program and\nthen suddenly this computer program is\nreally really smart in some way and even\nthough this computer program is really\nreally smart it might have some kind of\nmental flaw or vulnerability and really\nit kaskus argument here seems to be that\nyou can have both things that are\nintelligent and things that are\noptimized and if the if you have a beam\nthat comes from nothing maybe a random\ncoincidence then it might be exploitable\nbut if it became super insulting through\nsome kind of optimization process then\nit cannot be exploited do you feel this\nthe the article could be a\ncounter-argument to this kind of straw\nargument interesting possibly I had not\nI did not have that reading of eleazar's\narticle but that doesn't mean that's not\nwhat he meant\n[Music]\nyeah I don't know I don't think it's um\nI don't think eleazar's article is a\nvery compelling response to that\nparticular straw argument it's like okay\nfor sufficiently I guess for\nsufficiently intelligent AI systems that\nseems true but like that's sort of just\nI think Eliezer takes as an axiom the\ntake like an axiom of eleazar's article\nis if a if we have a super intelligent\nAI system it can and will do any\ncognition that we are capable of I think\nif you have that assumption that's\nalready a\ncounter-argument to the straw argument -\nand you don't need the rest of the\narticle that he's written so I would\nreally say it's more just an assumption\nof his argument as opposed to something\nhe's arguing that said I also don't see\nvery many people who argue the straw\nargument - I imagine of it yeah I\nmentioned matter oh maybe we're gonna\nsay the same thing let's find out like I\ncould imagine that it kind of comes down\nto the the assumptions you make about\nwhat is his super intelligence like I\ncould imagine an intelligence which\nworks out all of mathematics way before\nit works out human interaction maybe\nthat's just a much more complex problem\nto solve and most of mathematics we\nwould consider harder than than human\ninteraction but maybe that's also or\nmaybe that's not the case maybe there's\nsuper intelligence which just works out\nboth at a similar rate and then that\nwould not be problem yeah I mean I think\nI agree that the AI systems that first\nseem like super intelligent on like\npretty complicated complex tasks will in\nfact seem not as intelligent as humans\non other tasks and social interaction\nseems like a pretty likely one\nI think eliezer is definitely thinking\nof like AI systems that are beyond that\nit's just like saying at some point we\nwill hit a level of intelligence where\nis just outperforms humans on everything\nby a lot and why the the making barracks\nyou know what a super intelligent gist's\nyep yeah I should also say I'm like not\nsure I understand what eliazar saying\nit's not clear to me that I've\nunderstood what he's trying to convey\nbut taking my best guess the the thing I\nwas trying to say before is that my\nmodel of iliad Kowski is that he\nbasically is bombarded with bad\narguments against AI safety so he is\nprobably intimately familiar with a\nmyriad of bad arguments so\nlike these I think he has at least\ndouble dated times people have said to\nhim that ah da I will just be book smart\nbut will be pretty politically smart so\nit won't be a problem I think people\nwill so that's why he he care about\ncountering its kind of acumen\nyeah that seems possible it definitely\nsounds like that to me as well I don't\nactually know but like also does use\nthis like he does use this argument in a\nway that's like this is why we care\nabout utility functions and I like I do\nin fact pretty strongly disagree with\nthat like sort of the best example of\nwhere he's using it that way as an um it\nwas a talk called a alignment why it's\nhard and where to start\nI believe that was the title also like\nma'am if he's responding to stir\narguments I'd really wish he'd just say\nthat these are straw arguments before I\nspend all of my time reading something\nthat's meant to argue against the thing\nthat I very clearly do not believe I've\njust started reading your um your\nsummary of about comprehensive ni\nservices I'm very intrigued by that\nbecause it it seems to describe a much\nmore plausible route to AI AGI if that's\nwhat it is then then you know the the\nthe sort of single entity super\nintelligent entity that Eleazar and Nick\nBostrom seemed to have in mind all the\ntime\nyou can you say something about how how\nhow that difficult case or case or would\nlike other case just because case is all\nsigned English word yeah okay you see\nsomething how Christ relates to this\nslots into this whole value learn\napproach oh man yeah if only I had a\ngood answer to back let's see I think a\nthing about what one thing is if you\nlook at sort of the chapter 3 of the\nvalue learning sequence a narrow Valley\nlearning part I think most of that is\nmore compatible with case than things\nlike ambitious value learning in that\nyou could have a service and and Eric\ntalks about this in his full report on\ncase you could have a service that\npredicts human approval so you give it a\nplan and you say hey our human is going\nto approve of this plan it says either\nyes or no and you can use or maybe\nsomewhere degrees of confidence and you\ncan use that information other services\ncould use that information as needed you\ncould use that to monitor all the plans\nthat other AI services are doing and see\noh hey the services doesn't seem to be\ndoing things that humans would approve\nof we should probably check that maybe\nupdate it somehow figure out what's\ngoing on so that's one way things\ncontinue they're good that's one way\nthat narrow value learning could\ninteract with case I think more\ngenerally that just like treating their\nvalue learning as a service that other\nservices can call upon makes some amount\nof sense you could imagine for example a\npersonal assistant\nor like a flight booking assistant that\nis like you say hey can you book me a\nflight to New Orleans on Friday and it\nlooks and it sees oh hey there's a\nflight but it's a red-eye or maybe it's\nlike $2000 for some reason and that's\nlike hmm that's suspicious that's like\nweird and probably not what the human\nwas expecting let me call upon my narrow\nvalue learning service and try and\nfigure out what the human actually\nwanted and then present a bunch of other\noptions like maybe you suggest that I\ntake a flight on Thurs\ninstead because it's much cheaper and I\nand it won't be overnight yeah it's not\nmuch of an answer but that's something I\nhope I shall also say that\nhopefully the a ICT reading group will\nget to a wreckless report if we can find\nsomething some find some parts of it\nthat is relevant because the entire\nthing might be too much and your summary\nruins might be a bit too little\nwe need something preferably 10 to 20\npages but that's gonna be quite a\nchallenge to find yeah it's very much\nyou could you could do something like my\nsummary Richard no summary and Robin\nHanson's blog post about it\nalong with all of the comments on all\nthree of those yeah that might work okay\nso while I was trying to summarize your\nvalue learning sequence then there were\nsome times where I tried to draw some\nkind of models where I was actually\nunsure whether you would approve of them\nand so the first one I have here is that\nyou we imagine that we have narrow value\nlearning that is working in a very\nnarrow sense and then we use this this\nkind of loop to build some kind of norm\ninference where we figure out what\nshould da I not do and while we do the\nAI is trying to work within the\nframework of norm interference we want\nto gradually transition to something\nmore ambitious and value learning and\nthus that doesn't seem reasonable I\nthink that it would make two\nclarifications to that so one is the way\nI defined ambitious value learning which\nwas in some sense of strong definition\nand extremely rigid I think that you\nwould just never want to do ambitious\nvalue learning but if you take a more\nnormal\nsons of the of what that would be like\nright we want to figure out all right\nhow we could make a good future and it's\nnot a utility function that prescribes\nexactly what to do in every single\nparticular environment but sort of\ngeneral paths that are general things\nthat we want with like maybe some\ncorrection by humans along the way or\nsomething like that then I think it's\npretty reasonable I also don't know that\nI would say Maribelle e-learning is will\nlead to Norman France they feel somewhat\nlike two parallel strands of research\nlike I guess my intuition is that like\nif you wanted to infer the values of a\nsingle agent the things you do are just\nquite different from what you do if you\nwant to infer the norms that a set of\nagents are following that that's my\nintuition right now I don't have a great\njustification for it but they do feel\nlike parallel strands of research and\nnot that narrow value learning feeds\ninto norm inference okay great then I\nhad another yeah kind of maybe a wind\ndiagram or something where we have the\nset of all expected utility maximizers\nand as a true subset of that we have the\ngolden rated agents and it's a true\nsubset of that we have the explicit\nreward maximizes where all the subsets\nare true as in there are goal directed\nagents which are not explicit rewards\nmaximizes its every dude does also\nreflect reflect yourself thoughts so if\nI expected utility maximizers you mean\njust any agent because everything can be\nmodel as an expected utility Maximizer\nthen I think I agree with this this\nseems that seems right to me okay then I\nhave a question here which I believe you\nhave roughly answers you have answered\nthat this fact that intelligences should\nhave voting rights would be a norm that\nwe that we prescriptively have right now\nthat probably won't be changed and then\none of my also\nwas that if you look at human norms\ndescriptively right now you also run\ninto a lot of problems like might is\nright so do you feel descriptive norms\nof prescriptive norms are what will be\nbased basing the UNAM inference on\nprobably descriptive ones just because\nthat's sort of what inference means it's\nlike you observe what is actually\nhappening and then figure out what's\nwhat the norms are I do think that like\nwith norm infants you also want to\nfigure out the process by which norms\nare updated over time which currently is\nwell there are some like opaque social\nprocess by which it happens just among\ncommunities people but there's also like\nlaw and Court and judges and courtrooms\nand things like that so this is not my\narea of expertise but you'd probably\nwant some way both of inferring what the\ncurrent norms are and also a way of\nfiguring out how the norms should update\nin the future I don't have very much\nclarity on this though okay then and\noption you you wrote about is that you\nhave the AI have some kind of estimate\nof what its reward function should\nactually be and then you write that that\nthe AI should have an estimate that that\nwould be an obvious thing to implement\nand I agree that having the a estimate\nwhat should be its own true reward\nfunction is probably easy but if the\nreward function is not compact or if the\nreward function is compact then it seems\nquite hard to implement in practice\nright you imagine that there is one\noption is the human what they actually\ntruly value is paper clips and nothing\nbut paper clips and another\nhypothesis is that humans care about\nreproductive fitness and nothing but\nreproductive fitness you have a third\nwhich is like pure hedonism or something\nlike that and you know if you want to\nhave an estimate of all the options then\nyou know enumerate all the options of\nwhich I just had three here seems really\nhot to implement\nyeah I actually don't remember what\ncontext I said listen but my guess is\nthat I was making more of an abstract\ntheoretical point that wasn't when I\nsaid the most obvious implementation I\nmeant more like if we had unlimited\ncompute and the ability to like have a\nprior overall possible reward functions\nI do you remember which post this wasn't\nme I might be able to find it search you\nso let me just the most obvious\nimplementation\nhere reward uncertainty I'm just yeah\nyeah okay I was making that I was trying\nto make a more abstract point here I\nshould probably update that and make\nthat clear okay it was just there might\nhave been something upwards that I\nmissed him then this is actually not a\nquestion about for you I might want to\nput it to Owen just see if I have other\nquestions that I think I would really\ncare about\nthere is creature building what is the\nrelationship between courage ability and\nvalue learning because you end up\ntalking quite a bit about courage\nability and it seems you know what is\nthe relationship between these two\nconcepts yeah yeah I think I got more\nclarity on this since writing the\nsequence so hopefully I should be able\nto give a better answer now so the way I\nthink if courage ability is it's the\nproperty that you're a system is trying\nto help you\nwhereas value learning is about whether\nthe agent knows what it is that you want\nand these are somewhat orthogonal in\nthat value learning is more a statement\nstatement about like the agents\nknowledge whereas courage ability is\nmore a statement about the agents\nmotivation and you can either you can\neither say I am going to create an AI\nsystem that is chords both meaning that\nit is trying to help me and then as long\nas it meets some basic level of\ncompetence of like intelligence or\nsomething then one of the things it will\ntry to do when it's trying to help me is\nit will try to learn my values or my\npreferences perhaps what's in your\nvalues for now and so there you could\nsay okay I'm going to build a corrigible\na and that will lead to value learning\nalso to be clear this is like I think\nthis is Paul's notion of courage ability\nit is definitely not me\nmccords ability we have two different\nmeanings for the word college ability at\nleast it's not great I just wanted to\nnote that first could you say what is\nthe difference between mirrors and poor\nCristiano's definitions of cordage\nability yes so the one I've been using\nso far is my interpret is what I think\nPaul's definition is though I've been\nknown to misinterpret Paul before the\nmarry definition is more like if you it\nbasically gives you a fallback mechanism\nso that if you say for example turn up\nAI system you are doing a bad thing turn\noff now it actually just does turn off\nno questions no like inference over\nthere Ward functions no like modeling\nthe human is rational or irrational\nnothing like that it just turns off so\nin this sense of like when you there are\nsome situations some fall back some\nsituations where a human can activate a\nfallback mechanism that provides\nrobustness in case something has gone\ndrunk that's I think the miry sense of\ncourage ability cool so I'll go back to\nthe original question so with the Paul\nsense of cords ability where the AI\nsystem is trying to help you that can\nlead to about preference learning as\nlong as the agent is at least somewhat\ncapable conversely you could talk about\nokay I'm going to build an agent that\ndoes value learning or preference\nlearning and then it's going to optimize\nfor my preferences or values as based on\nwhat value learning and preference\nlearning is doing and then if it's like\na reasonably competent at learning my\npreferences it will learn that hey I\nprefer that it listens to me when I tell\nit to shut off I prefer that it doesn't\ndeceive me I prefer that it did not kill\nme etc etc and as a result it ends up\nbehaving the way\ncorrigible agent would or like it learns\nthat I prefer that it like helps me in\ngeneral which is sort of the property\nthat Cory's ability is supposed to be\nthis is the zooming an agent that does\npreference learning and then optimizes\nfor my preferences so that's sort of\nthere are two parts of that one is the\npreference learning system and one is\nhow you optimize that all the preference\nlearning there's often reward\nuncertainty is a component of that I\nthink the reward uncertainty\nautomatically gives you some of these\nproperties as well so really there's\nlike you could either try to build a\ncorrigible AI and get preference\nlearning at that or you could build a\npreference learning plus reward in\ncertain de AI and yet Gorge ability out\nof that okay thank you for that\nclarification and then I in one of the\nthe concept of having a AI as a\ncontroller and having a plan then I made\na statement earlier that sometimes the\nthe plan would be better than the\ncontrol if the plan was good enough I\nthink I used a metaphor like if you're\ntrying to navigate a maze then the plan\nof always sticking to the left wall is a\ngood plan and a controller in this\nsituation would be a bad plan I would\nput perform worse and then I think this\nyou're right that actually in control\ntheory you can prove that a controller\nis superior to a plan and then there are\nthere's no link unfortunately so I\ncouldn't actually go and see where my\nassumptions are wrong can you sketch\nroughly why why I'm not right with my\nmaze example yeah so I think actually in\nin your example but the plan of the\nthing you're calling a plan of like\nalways going left or sticking to the\nleft wall or right well whichever it\ndoesn't matter that would I\nalleviate controller and listener in\nthis in this terminology because you are\nrelying on the ability to choose your\nactions after seeing your observations\nso like a plan in the main finding\nsetting would be something like I must\nwalk forward five steps and then turn\nright and go forward three steps and\nthen turn left and go forward one step\nthen go right turn right go forward ten\nsteps and so on which in the case of a\nmaze is fine and would work if you get\nto see them if you make the plan effort\nwhen seeing the maze in advance and so\nin that sense setting both a controller\nand a plan would work just fine but\nthat's assuming that you can predict the\nenvironment perfectly at the time that\nyou make the plan if you cannot in fact\npredict the environment perfectly when\nyou make the plan then you need to like\nuse your observations in order to note\nwhen you basically need to use your\nobservations in order to like constrain\nthe possible ways the environment could\nbe and once you have constrained those\npossible ways then there is an action\nthat always like guarantees the property\nthat you want but if you can't constrain\nthe states but the way the environment\nis by using observations then there's no\nplan that would actually let you\naccomplish your goal or satisfy whatever\nspecification you have there wasn't a\nlink because I I don't know if anyone\nhas ever written a paper that just makes\nthis point I think it's obvious to\npeople in control theory or something\nand they haven't actually bothered\nwriting it down in a paper there are\npapers that prove these guarantees with\nadversarial environments that definitely\nis the thing\nokay let me just see if there is I guess\nmmm that will probably be all so does\nanyone have any final questions for\nRowan then I will say thank you very\nmuch for joining us\nit's been a pleasure and we've learnt a\nlot I think so\nthank you very much yeah", "date_published": "2019-05-15T23:05:36Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "1fb1cc3fcbb0be66d55437475df52d5b", "title": "258. How might we align transformative AI if it's developed very soon? 1/3", "url": "https://www.youtube.com/watch?v=93JuWY_TpWg", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "foreign\n258 in the AI safety.com reading group\ntonight we will be discussing how might\nwe align transformative AI if it's\ndeveloped very soon by Halton kanovsky\nHalton kanovsky is the current co-ceo of\nopen philanthropy and the co-founder of\ngive well\nthis was posted on this wrong alignment\nforum and I believe also his personal\nblock called takes\nwe will only be discussing the first\nthird of this uh post today and I expect\nthe second will come in subsequent weeks\npart of the reasons why I think this is\na really interesting post is because of\nHolden kanovsky he is one of the people\nwith the most social credibility uh in\nthe uh AI safety world and in case we\nget a fire alarm for AGI then I could\neasily see Halton kanovsky ending up as\nthe person who would lead some kind of\nlast-ditch effort to stop a misaligned a\nAGI\nand for that reason his opinions even\nthough they might not be uh technically\nvery uh sophisticated might be extremely\nstrategically important\nengages in a technique that he and a\nKatra called near casting that is\nanswering the Strategic questions uh\nunder the assumption that the world is\nsimilar to today when we get AGI or\ntransformative ATI transformative AI as\nthey phrase it\nuh this is somewhat an odds with the\ngeneral rationalist maxim of trying to\nlive in reality\num\nbut the way they get around this is uh\nby not so much having the assumption\nthat this is uh world that's very\nsimilar to today's but as something that\nhappens very soon because obviously if\nit happens very soon the world is not\ngoing to be much different\nthe scenario they're having is an AI\ncompany a fictional AI company called\nmagma is developing a model called Alex\nthat seems to be on the verge of\ndramatic improvements in capability\num\nimmediately when I heard the word Alex\ngiving like a human name for this AI\nThat's a worrying sign in the sense that\nit's very easy to anthropismize\num these these AIS and that's something\nwe should uh we should be aware of on\nthe lookout for uh when we when we read\nthis text\nthe technique is she called human\nfeedback on diverse tasks and crucially\nuh magma has a six month to two years\nlead on the competition\nand\nnot only do they have less lead they\nknow that they have this lead there's no\nexplanation given why but the the\nquestion is then how can magma use this\nlead to try to uh ensure a good outcome\nof transformative AI\nthere's a very brief description of the\nalignment problem basically just that um\nthe default path sorry\nthe default path to transformative AI\nleads to an AI that is decides to take\nover that is unaligned and we can't\ndetect that desire and that is a problem\nthat seems at least reasonably hard\npossibly insanely hard\num\nand that's basically all that Holden\nkanovski writes about the alignment\nproblem it's a really basic analysis and\nI think we should um when we read the\ntext be on the lookout for conclusions\nthat follow from this uh perhaps true\nsimplified description of the alignment\nproblem\nforeign\nassumptions is that Jose is only\nfocusing on the leading lap stating that\nif we considered more AI Labs well he\nwould be seeing roughly the same thing\nabout them\nI don't think that's necessarily true uh\nboth because the labs won't know that\nthey are leading they certainly can't be\na certain that they are leading and if\nthere is a larger field then a runner-up\nwould almost certainly have a very\ndifferent conclusion open AI for\ninstance have explicitly stated that if\nthey perceive themselves to be a runoff\nthey will stop themselves their current\nproject and and assist the\nthe leading AI lab if it's sufficiently\naligned\nso Magma's predicament in this case or\ndilemma is that they need to navigate\nthe risks of taking and taking actions\nand not taking actions if they do take\nactions well obviously they have a very\npowerful AI on their hands and it's\npossible that this AI will just take\nover the world and that may in fact be a\nlot easier than expected\ngives an example a scenario of this\nwhich requires the AI to both do hacking\nsocial manipulation technological\ndevelopment and also being economically\nproductive\num this is where I would give an example\nof\nanthropomorphizing because here we have\num five human strategic skills that the\nAI is just barely above the human level\nat and I think that's very unlikely that\nwe're going to see AI with with this\nspecific profile uh that it can do the\nsame things as humans can do just a\nslight bit better I don't think that's\nvery likely I think it's much more\nlikely that we are going to see a\ndramatic Improvement in one of these uh\nsix uh cognitive superpowers and then\nsub-human levels below where the AI\nneeds to substitute humans for this\nnow uh opposite action risks are\ninaction risk and that is by\num not doing anything then at some point\nthis uh this lead will be uh uh will be\ngone and the second best AI lab will\nthen be able to uh deploy transformative\nAi and if they do that then by\ndefinition they are less careful\num\nthey might also be weaker for other\nreasons but I think the inaction risk in\nthis case is very real\nso this is Magma's goal to reduce the\nodds that other less cautious actors\ncause a catastrophe by deploying these\naligned AI systems while avoiding\ncatastrophic misalignment from Magma's\nown systems\nthe the word that has often been used\nabout this dilemma is the requirement to\nperform a personal action and a pivotal\nact and this is something that it's a\nword that\num\nexplicitly does not use\num I think there's a probability that\nthis is done for uh like political\nreasons that he doesn't want to\nour business uh you can't just come out\nand say I want to take over the world\num because uh that uh a lot of people\nwill disagree with this\num but it's unclear to what extent\num uh the omission of this is something\nthat is deliberate or is uh because\nHolland kanovski does in fact not think\nthat a pivotal Act is required\num if we assume the letter that a\npivotal Act is not required then we'll\nnotice from this goal that it just say\nreduce the odds and reducing the odds\nwell if you make it 10 less likely well\nthen you've reduced the odds and that's\nkind of kind of count as a success in my\nbook that's totally inadequate we if we\nsee current 99 probability of\ncatastrophe then the uh then magma is in\na unique situation that they need to\nleverage to uh drive down the\nprobability of catastrophe down to less\nthan one percent\num just a tiny improvement from the only\nstrategic actor that can do this is just\nplainly insufficient\noh so um\nanalysis this is a the only way to\nactually do this is by enlisting the\nhelp of the transformative ai ai there\nare five uh five categories of ways to\nhelp there is a defense uh alignment\ncoordination uh\n[Music]\nTechnologies to enforce regulatory\nagreements and advisor applications\nwe'll go through these five now\nbut one thing I would just say first is\nthat Holland kanovski is exclusively\nconsidering the Strategic options that\ninclude getting help from the uh from\nthe AI and Maca may have many other\nstrategic options a key one would be to\ndemonstrate a deceptive alignment which\nif possible would be very impactful and\nmight not require the ai's help\nlet's start with defense deterrence or\nhardening\nin order to do this you need to deploy\nthe AI systems as widely as possible and\nthen when the competitor catches up in\nsix months to two to two years well\nthere is no easy targets no free energy\nthat this uh possibly unaligned AI can\num can take now this strategy deploying\nthe AI as widely as possible is\nobviously the same thing that a um an\nactor an AI developer that doesn't care\nabout AI safety would do right if your\nmeter would be an example uh Facebook's\nAI don't uh don't care the least about\nalignment so deploying the AI system as\nwidely as possible is just their plan so\nin this way this allows a meter a an\neasy way to uh alignment wash their\nuh there are\ncapability focused uh AI research\nnotes a number of limitations with this\nincluding that this only helps against\nsystems that have roughly similar\ncapabilities if another system like\nrecursively self-improves to and through\nan intelligence explosion then this will\nnot help it's also a moving Target as\ncapabilities increase and we we're not\nreally solving the problem we're just\nbuying time\nuh the key problem I have with this is\nthat if we just Harden like uh 10 of the\nsystems uh then that is clearly\ninsufficient because the AI will just\ntake over the remaining 90 we need to\nharden so many systems that we have an\nuh an overwhelming majority of them\num this has sometimes been formalized\nthrough uh free energy like taking out\nall the free energy that's in the AI\nwould be able to an online AI would be\nable to use to do mischievous things\num but this in practice is really really\nunpleasant because a lot of the things\nthat an AI could do\num if it was unaligned uh are things\nthat humans really really like to do\nlike the the most extreme example would\nbe a control over our nuclear weapons\nwhich is something that currently uh\nhumans are in charge of the defense and\nthere's some free energy because a super\nintelligence could do that better so\nobviously in order to\nto improve the defense we need to hand\nover the control of our nuclear weapons\nto this hopefully align\nsuperintelligence and there's a lot of\npeople who are going to uh strenuously\ndisagree with that plan\nthe specific things that Holden kanovsky\ndiscussed are patching security\nvulnerabilities\nthe problem from my point with this is\none that it requires training in hacking\nso we need to in fact uh train uh the\ntransformative AI explicitly in a\nstrategically very powerful and very\ndangerous technology hacking\nthe second problem is like we are in\norder to remove the free energy the\nvulnerable system we need to patch\nliterally every during complete\nprocessor in uh in the world in\nsomething like six months obviously uh\nthere is no way this is going to be\nreviewed by humans or humans the code\nseen by humans that's not going to\nhappen at all so the only way we can do\nthis is to just trust the AI with\napplying security patches through to\nevery computer in the world and that's a\nbig ask\nanother thing is the we could use the AI\nto set up some kind of global uh\nsurveillance society that could catch\num potentially problematic uses of AI\num that's uh dangerous for many\ndifferent reasons uh and one of them is\nthat this requires uh training on\ntraining the Supreme intelligence in\nstrategizing in order to figure out what\nother options for other uh super\nintelligences and how can those be\ncounted again a very dangerous game\nmaking making money is another example\num\nuh if magma makes all the money in the\nworld and you know literally our\ncompetes all humans and sits on the\nentire economy but then the meters AI\ncan't come in six months later and uh\nand make money because there's no money\nleft to be made\num that's a really really tall order to\ndo in six months uh to two years\num and I think a lot of humans are going\nto be a very um uh non-plussed about\nthis\nanother option is to gain influence or\ncreate new kinds of weapons\nagain I think this strategy of uh\ndefense is just about the second worst\npossible strategy you could have because\nyou're basically doing uh giving this AI\nas much power as you can your your uh\njust hitting the accelerator as hard as\nyou can\num except in recursive self-improvement\nbut apart from that it is the second\nworst strategy you could possibly come\nup with\nin the comments on the alignment Forum\nStephen Prince lists a number of other\nobjections to this\num\nfirst here objects that Holden kanowsky\noperationalizes the power of the AI\nsystem as total compute\num I don't as fast I could tell from the\nuh from the text uh Holden kanovski does\nnot do this but again we've only read\none third of the text so I can't really\nsay that he doesn't do that later\nuh Stephen Burns also believed that this\nis a way to Rosy picture\num both that there may be a offense and\ndefense imbalances in uh hardening a\nsystem may be a lot more difficult than\num\nuh then attacking it and in particular\nthis is disjunctive so it means that\nthere might we might only see one\nparticular place where the AI chooses to\nattack uh the attacker can choose where\nto attack innocence and that means that\nwhere it's probably going to take the\nplace where the offense defense balance\nis worst from the point of view of the\nalign the good AI\num and of course as I also said this\nrequires a lot of trust among humans in\nthe good AIS\num I think Steve Burns is too nice that\nwe won't entirely trust the good AIS\nwith the nuclear launch codes we are\ntotally totally not going to trust them\nat all\num the good ai's are also uh going to be\nless effective in the alignment tax\nsometimes called magma does have a leap\nI think it's a good point and also that\nthe good AIS are hamstrung by our long\nlaws and norms and coordination problems\nand all these kind of things\num Stephen is can't really see any way\nto get around the pivotal act\num I think I agree and I would certainly\nlike Holden kanovski to answer these\nobjections\nso that was defense let's go to\nalignment applications and I believe\nwhen uh Holden kanovski says alignment\napplications he's just basically talking\nabout alignment research\num this is something that decreases\naction risk for for magma and possibly\nallow them to have the same level of\nrisk but then just increasing the\ncapabilities if the alignment improves\nin the same way\nI'm not entirely sure this makes sense\num you could imagine a situation where\nyou have an AI that is very very\num\ndangerous 90 probability of being\nunaligned and then you press play and it\ndoes not take over the world and it\ngives you some hints to how to align it\nand also how to make it more powerful\nand then you roll the dice again uh and\nthen you see okay if there was a 90\nprobability of failing then um even\nthough you got lucky then just\nre-rolling is a really bad idea\nanother option is to share the alignment\napplications and research with the\ncompetitors\nthis may not be possible without also\nimproving the uh the abilities and the\ncapabilities of the competitors and it\nmight be have a substantial alignment\ntax and it doesn't solve the problem of\nuh Facebook AI because if Facebook AI\ncomes out and says strongly we will not\ndo alignment period Then presenting them\nsome nice beautiful research papers\nabout alignment does not in any way\nforce them to to use these measures\nuh Hogan kanovski is uh optimistic a big\nenough success could solve the problem\num I think in general if you use the\nword enough insufficient then you just\ncreate tautology right\num in in general these um the success\nwould need to be really really enormous\nand have literally an a a a zero\nalignment text and be improving\ncapabilities at the same time or\nsomething like that before other people\nwould want to\num to uh implement it and even then it\ndoesn't really Force other people to\nimplement it\nforeign\nthe third category is coordination\nrelated applications like helping\ngovernments and companies coordinate to\navoid AI catastrophe\nthis is probably quite hard one of the\nproblems with this is we would like to\nhave some kind of acceptable level of\nrisk in order to uh say we won't deploy\nsystems that are riskier than this level\nand in practices we probably can't get\nprovable safe AI that means that the\nonly safe thing to do is to not deploy\nsystems and that's really not something\nthat's really going to work\num we could also design more speculative\nmechanisms where you have like ai's\nmonitoring other AIS but not reporting\nback all the things that they are then\nwhether they are lined or things like\nthat\num that seems like a really tall order\nit requires the AI to be uh strongly\nsuperhuman to do something that we are\ntotally incapable of doing at the moment\num and also doing this requires either\nvery extreme trust in the AI or it\nrequires some kind of enforcement\nright now if we are doing near casting I\nwould also say that this kind of\ncoordination looks really hard like in\nparticular China and Russia seem very\nvery unlikely to go along with this kind\nof scheme\nthe third option here is to create\nevidence and demonstrations for the risk\nof misaligned AI and this is something I\ncare about and Holden kanovsky writes\nthat this is something we will return to\nlater in the document so I'm not going\nto talk very much about this right now\nbut I will give a teaser in that I think\nholdenovski is attacking this problem\none meter level too high\nthe fourth way that um\nuh AI could help with solving this\npredicament is by deploying powerful\nTechnologies to enforce regulatory\nagreements Halton kanovski is aware that\nthis brings many concerns this is not at\nall something you just go out and do\num\nin order to do this you need to like\nhave regulatory agreements and to do\nthat you need to have some kind of\norganization uh that could do this and\nHolden kanovski doesn't describe any of\nthose but I think in near casting where\nwe assume the world is like it is now\nthen I think we would say like three\ncandidates would be the United Nations\nNATO or the US government those seems\nlike with three kinds of\num organizations that would be able to\nuh lift this burden\nit's not enough to have a powerful\norganization we would really also like\nto have some kind of regulatory\nframework and the problem with this is\nwe don't have a draft at all for an AI\ntreaty right now\num and that means that that is another\ntask that either meter has to do or get\nan AI to do this and I think that's\nalso a potentially very tall order\nnow for the technologies that can\nenforce this regulatory agreement uh one\nmay be a resource accumulation I'm quite\na bit in doubt about what uh refers to\nhere like I could see persuasion being a\nway to get resources like you uh\npersuade Brazil to support this\nregulatory agreement and then Brazil is\na resource but resources can also be\nother things like iron ore or whatever\num we could also have some kind of\nsurveillance system uh through very\nadvanced technology this is of course\nreally dystopian potentially\num we could improve the framework and\nadvocating for the framework improving\nthe framework is probably quite good\nadvocating for the framework leans very\nclosely to uh the uh dangerous\ntechnological development of persuasion\nand finally we have military\napplications and I think in in this case\nwhat we're talking about here is an\nexplicit takeover where uh the U.S\ngovernment with the help of AI just\nbasically takes over the world\num that's the only way I can really\ninterpret this\nalso talks about uh mind uploading in\nthis section\num I uh I'm not that pessimistic about\nmind uploading but I will say that\nadding it in this specific section is uh\nunfortunate because mind uploading may\nbe part of the solution to the alignment\nproblem but uh I don't think mind uh\nuploading should be used as a technology\nto enforce regulatory agreement that\nsounds like really dystopian\nthe fifth category is advisor type\napplications where we get better ideas\nor plans for how to like maybe we can\nsidestep this problem maybe we can pull\nthe Rope side sideways in some way\njust like suggesting things like the\nregulatory approach it's possible that\nthe AI will come up with something\ncompletely brilliant out of the box\nthinking that will just solve this\nproblem\num I agree it's possible that we'll get\na deuce X making it in this way I think\nit's very unlikely and I think it\ndoesn't really count as a strategy to\nsay like maybe we'll build the AI and it\nwill come up with a\nsmart thing uh that we couldn't have\nthought of ourselves that'll just make\nthe problem go away that's not a\nstrategy\nokay in order to get an AI that will not\ndestroy the world what kind of\nproperties should Magma's AI systems\nhave\nwell the most default one is it should\nhave good performance uh like uh\nbeing evaluated as good by humans or by\nmagma and I would expect that uh magma\nif they have six months to two years of\nlead uh in ahead of the competition then\nthey have probably focused on this a lot\nlike you don't Sleepwalk to building\ntransformative Ai and that means that in\norder to focus on other things then a\nsubstantial cultural shift needs to\nhappen in magma probably\nso we uh the um Holden kanovsky's\noverall strategy is to identify a number\nof the Civil Rights and nice properties\nof this Ai and then try to train for all\nof them and at least train them to the\nlevel where it appears to humans that AI\nhas the property so some kind of very\nnaive standard\num I think uh it's better than nothing\nto just uh like make it appear honest if\nyou can't do anything better than that\num it is uh far from being sufficient\nbut I think in general security is like\nyou have the onion model where\num you want to have as many of these\nproperties as possible and I think\nthat's in general a good uh way to think\nabout it\nthe first\nproperty that Holden kanovski really\nwould like is value alignment\num like the AI should have roughly our\nvalues in some way uh Holland kanovski\nis very optimistic about the value of\nthis the value of value alignment uh he\nsays it's the most obviously risk\nreducing property\nI guess I would kind of disagree with\nthis I think that if you get a very far\non value alignment that buys you\nsurprisingly little safety uh if you for\ninstance have a utility function that\nalmost represents what human wants but\nlike is a little off and then you\noptimize that as hard as possible then I\nthink you are almost certainly going to\nget some kind of strong existential\ncatastrophe\nand there are of course problems with\nvalue alignment uh we don't know our\nvalues and we don't really get if you\njust Train by feedback then you don't\ntrain for values and and if even if\nmagma is very capable of uh training\ntransformative AI it's not at all clear\nthat they could do value alignment\nhonesty is the second intended property\num\ndescribed as giving non-deceptive\nanswers to a relatively straightforward\nforward questions\num I think in general it's better to\ntalk about deceptiveness than to talk\nabout uh honesty I think these two these\nare two related but different concepts\nlike in the sense that um my children\nsometimes if I ask them who ate the\ncookie then they will be uh dishonest\nbut they're not deceptive in the sense\nthat they intend to kill me and replace\nme with someone else and take my power\nor something like that here I'm enter\nmore advising a lot of course\num\nsays this is easier to Define and assist\nthan a value alignment I think it's in\nfact very much easier and if it's only\nfor straightforward uh uh questions then\nI think it might even be\nI wouldn't say easy right but um a lot\neasier\nthe way home can ask you a cash out\nstraightforward honesty is that you have\na list like are you trying to do this\nbad thing and then it will just answer\nno to this\num and I think if you have like a an\nenumerated list of bad things and you\ntry to make sure that it doesn't do this\nthis is good and it prevents a lot of\nbad stuff but it's also a uh a classic\nsuggestion that in the AI safety\nliterature has been discussed and\nrejected in general because it pitches\nour intelligence against the\nintelligence of a superintelligence\nand I don't think we should not have\nhonesty but we need to be honest about\nthe shortcomings that this is something\nthat the AI is going to try to rule\naround as much as it can\nand of course uh Holden kanovsky is less\noptimistic than me about how hard uh\nstraightforward honestly is going to be\ncourage ability is allowing the AI to\nis to ensure that the AI allows itself\nto be altered and shut down if we want\nto and this is the overall property that\nI think is most interesting and the\nthing I expect to be\num uh crucial\nand\num Holden kanovsky is also optimistic\nabout this\num but I would note here that it only\nreduces action risk and not inaction\nrisk\nbut of course it's not critically\ndefined it's not straightforward to\nassist\nlegibility is the fourth criteria and\nI think uh I was surprised when I got\nhere actually when I first read the text\nI thought he meant interpretability\num which is um\nthe uh\nby far most developed practical uh\nproperty of AI systems like there's way\nmore research in interpretability than\nlegibility and I am surprised that\nholding kind of does not include this\ninstead he talks about legibility which\nis uh like we don't want to just have\nthe AI give us instructions that just\nwork without us knowing why we want the\nair to explain to us give us the\nprinciples that mean that we can create\nthe instructions so we have some kind of\nunderstanding of uh what we are doing\ntogether with the AI\nunfortunately I think this is like to be\ndramatically less efficient a lot of the\nuh impact of transformative AI is going\nto be things like writing software and\nwriting software if you have one AI that\njust writes the software and the other\nAI ex teaches human programmers to be\nbetter programmers then obviously the\none that just writes the software is\ngoing to be dramatically more effective\nthat is all for tonight thank you and\nsee you next week", "date_published": "2022-10-06T20:59:00Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "740a43bf2c6c613b15ed8ba0cd25ff2e", "title": "AI Safety Reading Group (Session 45)", "url": "https://www.youtube.com/watch?v=Ys-U-4vjRjw", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "so hello and welcome to the 45th session\nin the AI safety reading group today we\nwill talk about an article called\nbuilding safe AI by Andrew Liam Trask\nenter Liam Trask is a PhD student at\nOxford University specializing in deep\nlearning and he has a blog\nI am Trask at github where it has this\nquite recent article then about a\ntutorial for encrypted deep learning now\na tutorial is quite different from the\nkind of articles we normally read\nbecause it's not really attempting to\nintroduce in a theoretical way subjects\nbut showing hints on how to actually do\nthis and this is an one of the exciting\nthings about deep learning that it's\nactually not that hard to get started\nwith so it's possible to have a short\nand meaningful tutorial on this I will\nnot go through the programming code in\nthis and I will focus on the things that\nhave some kind of application to a GI\nsafety to super intelligence and this\nkind of thing there are also other uses\nfor encrypted think learning such as\ndata privacy and I will not cover those\nhere so why do we want to encrypt deep\nlearning the goal here is to make an\ncryptographic AI box the classic way to\ncontrol a superintelligence is to put it\ninside a box and only let it only answer\nquestions or have very limited impact on\nthe world of course the problem as first\nthought you think that having an\nair-gapped AI box would be impossible to\nbreak but actually there's been many\nmany books written about how to break\nget across these air gaps there are many\nmany the so called side-channel attacks\nand ways to get around building an AI\nbox it's actually really really\ndifficult to do and and this is not the\nonly problem with AI boxing the other\nproblem is that the output channel the\nthing that the super intelligence tells\nyou it might be able to manipulate you\nvery strongly so this is this is\nconsidered the main vulnerability of AI\nboxes and that's not what we are\nfocusing on here so the idea of making a\ncryptographic AI box was invented by\nPaul Christiana in 2010 and the first\nplace you brought about this was in this\npage on this ROM on this post and the\nidea is present on Alexey churchians map\nthe one we read last time if I just move\nhere in the pattern AGI safety map where\non external constraints we have a AGI\nconfinement the box as one of the here\nhas written that it's a marginal\nsolution and in the very bottom you can\nsee a cryptic box homomorphic encryption\nby Christiano and this is exactly what\nwe are looking at today so why would you\nwant to build an AI into a into a box\nwhat is the goal of doing that the the\ngeneral AI boxing strategy is that you\nbuilt n an hei that scheme will self\nimproving that is and then you when you\nyou don't turn it on until it's inside\nsome kind of box and then you ask this\npossibly unfriendly possibly unaligned\npossibly friendly you try outpost make\nit printed and ask it to provide not\njust a source code for a friendly hei\nbut supply a proof that is friendly and\nof course this proof it would be\nwonderful if there is an easy way if we\ncan automate this so there's an\nautomatic program that verifies if an\nAGI is friendly that\nthen this this approach would be really\nreally neat and also in many many\nmathematical and computer science\ncontext it turns out that verifying a\nproof is a lot easier than finding a\nproof so it is indeed possible that\nthere is a superintelligence capable of\ngiving a proof that it is that a\nparticular API is friendly and unable to\nfake this proof there's a huge\ndifference and that that's kind of the\nPope this is kind of a marginal solution\nbecause it might be that that AI boxing\nis impossible because any super\nintelligence either would not be smart\nenough to prove a friendly AGI all true\nsmart so it can teach us in any cases\nbut it is possible it is plausible or it\nis thinkable that an AI box will solve\nthe control problem deep learning and\nhomomorphic encryption deep learning is\na somewhat new kind of artificial neural\nnetwork and artificial neural network is\na computational model that is very much\ninspired by the brain uses some small\ncomponents that are modeled by neurons\nit's not directly modeled on the brain\ninspired would be a better word and the\ndeep part in it is that it doesn't just\nhave input and output but some layers in\nbetween that allows things that kind of\nlike transformations and feature\nextractions I've written and but it's\nnot it's actually very unclear exactly\nwhat happens in the deeper layers of\ndeep learning\nfortunately for for this for our\npurposes here the deep aspect is kind of\nirrelevant the only reason why\neverybody's talking about deep learning\nis that in practice of all the AIS\ntechniques we have then in deep learning\noutperforms every other technique by a\nrather large margin so that's why\neverybody is talking about deep learning\nthe\nand practice it works really really well\nhomomorphic encryption now standard\nencryption takes a a plaintext and turns\nit into a ciphertext scrambles it\nencrypts it in a way so that you need a\nsecret key to to reveal what was the the\nencrypted message and homomorphic\nencryption is homomorphic means the same\nform in greek and this means that the\nresult you get from the encryption in\nsome way have the same form as what you\nhad decrypted and i've taken some of the\ncode here from the tutorial to try to\nillustrate what's going on so here we\nhave a and array a list of integers 0 1\n2 5 and we start by encrypting this\narray here and then get something called\nC that is the encrypted if we try to\ndecrypt it of course we get the same\narray back we get 0 1 2 5 back if we if\nwe decrypt what was just included this\nthe funny thing about the whole movie\nencryption is if we work on the the\nencrypted data for instance added to\nitself normally if it takes to encrypt\nthe things you would get something\ncompletely random but here you put a\nhomomorphic encryption you actually get\na very meaningful results you get the\nsame as if you were taking the\nunencrypted and edit it to itself so 0\nplus 0 is 1 + 5 + 5 is 10\nyou can also multiply with this you can\nsay you have every value 10 times larger\nwhile it's while it is encrypted and\nthen you get the meaningful results out\nand this is something that only happens\nfor a morphic encryption and this is\nalso something that requires a lot of\navoids too\ngetting to try to make a moment movie\nencryption algorithm so if you want to\nencrypt all the values of a neural\nnetwork homomorphic lee then we need a\nan encryption algorithm and there it\nactually quite recent in 2009 the first\nreal home of the encryption algorithm\nwas discovered so I remember that the\nbox was the home of the encryption box\nidea is from 2010\nso the first algorithm was from 2010 and\nit it's a post addition and\nmultiplication and and addition and\nmultiplication is not enough to build a\nneural network if you can only do these\nthings you need to have some more\noperations to be able to do that and of\ncourse if you know how to add things and\nyou know how to multiply things then you\ncan multiply them by minus 1 and then\nyou have subtraction and 1 divided by\nmultiplication you multiply with the\ninverse factor and then just because you\nhave additional multiplication\nyou also get subtraction and division\nand exponentiation if you want a number\nto the to the third or something then\nyou just multiply it several times that\nalso works out quite well you will need\nsome more some other mathematical\nfunctions in order to make a neural\nnetwork you will need in verse 10 cans\nand the sigmoid logistic functions and\nfortunately and the the homomorphic\nencryption even if it doesn't support\nthis it doesn't matter very much because\nthey can be approximated using something\ncalled Sigler series I won't go into\nthat but you can use these up here\naddition multiplication and\nexponentiation to actually approximate\nalmost all functions so just even though\nit looks if we go back to the slide and\nhere like you could only add things in\nyour own and and multiply them and that\nlooks really really limited\nit turns out that if you are capable of\ndoing additional multiplication you can\ndo some very very advanced stuff and\nit's not really difficult to do and and\npart of this is because the\napproximation the reason why it works\nvery well is that that's actually in the\nhome Orphic encryption elements the way\nthey work is also by introducing some\nnoise and then you do some rounding and\nthen the noises canceled out that's the\nway they're more mathy encryption words\nI won't go into many details about this\njust say it takes a long time a long\ntime really it used to when it was first\ndiscovered it took 30 minutes for a\ncomputer to make a single addition so\nthat's way slower of course then the\ncomputers that were used during the\nSecond World War but newer algorithms\nare down to one operation in two seconds\nand so when I calculated that there is a\nhalf time of less than one year meaning\nthat more than once every year these\nalgorithms appear to get twice as fast\nso it's improving very very quickly and\n[Applause]\nwhat speed would be would be useful and\nthe that's a very interesting point\nbecause we would expect many of the deep\nlearning algorithms are used now are\nextremely CPU intensive I mean that this\nthis encryption would give a completely\nunacceptable performance penalty during\ntraining and when you actually use it\nfor for something then this this lower\nspeed matters very variable the the\nproblem in creating deep learning is it\ntakes a long time to train the networks\nbut actually running them takes very\nlittle time and so I would expect of\ncourse these numbers are way too high to\nbe to be useful in it for any meaningful\nthings but but the big problem is the\nbig or big open question is when we get\na general AI will it be something that\ncomes as a result of a software\nbreakthrough or as a result of throwing\na lot of extra Hardware on rather simple\nalgorithms if it requires a lot of extra\nhardware and huge number of operations\nthen this will be an laurent that is\ncompletely unacceptable if it turns out\nthat there's a theoretical breakthrough\nsomeone figures out how to make neural\nnetworks in much much smarter without\nrequiring too many more computations\nthen we then homomorphic encryption\nmight be very very feasible even if it\nis a lot slower but for this case of\nwhat we're at a tutorial and so a trust\ngoes through a number of algorithms that\nsome are so very fast some are more\nsecure some support fewer or less\noperations and in this case in this\ntutorial they\nan algorithm that is very fast or not\nsuper slow I should say and not super\nsafe either\nso it might not be be extremely secure\nbut for for tutorial I think that that's\na good choice so that's an\nimplementation and if you go through to\nthe article there's some explicit Python\ncode that you just put into and put it\ninto notepad plus plus or some whatever\nie you use for for Python and and then\nactually just run it and it's I haven't\ndone it myself\nbut I've heard from others that it's\nactually quite quite possible to start\nwith at least with simple neural\nnetworks here Trask has another article\ncalled a neural network in eleven lines\nof code and even though these eleven\nlines of code are kind of kind of\ndifficult then they are not completely\nimpenetrable I don't know how much\nPython and I can I can read it and\nunderstand what's actually going on and\nthe other examples the morphing\nencryption algorithm is also implemented\nin Python here and the show in the\nneural network and they show how to\ncombine it and they use so also some pre\ncomputations that happens for\nperformance reasons to try to to speed\nthis up which is somewhat necessary\nbecause two seconds per operations is\nkind of slow no matter how you how you\nlook at it and there's also one thing\nyou have to do whenever you have neural\nnetworks simply called tuning and hit\nthe transcribe but it is kind of clunky\nto do it with this encryption it tries\ndifferent ways here you can see here and\nhere how he ends up doing it with\nsomething like M\nhis I should say first the the real\nproblem that he's living life is IMDB\nInternet Movie Database reviews looking\nat the words people use and figure out\nwhich was a positive and which words are\nnegative and here is as you disregard\nalways less than ten and they have to be\na reasonably strong signal and eight\nhidden note that's the the deep part of\nthe deep learning in this case that\nthere are a layer in the middle with\neight notes and as learning rate these\nare things that you kind of need to\nalmost on an ad hoc basis when you set\nup a a neural network to figure out how\nto tune it exactly and then he shows\nthis works and it shows it doesn't show\nhow good the classification is in the\nbeginning he shows two data points they\nare after 100 examples it can predict\nwhether a review will be positive or\nnegative after with 65 percent\nprobability and one thousand say\nexamples that take they put it up to 75\npercent so he shows that this is\nactually working so in this way the\ntutorial and building say they are and\nit's it's quite possible it's quite\napproachable if you're interested in in\ndeep learning and it definitely shows as\napproval context proof of concept that\npoor Christian is homomorphic encryption\nalgorithms ideal are viable but whether\nit is a complete solution to the control\nproblem\nthat is a way of probably within the\nfashion life so thank you for watching\nand now we'll discuss the article and\nI'll stop the video", "date_published": "2017-04-26T20:37:49Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "6421e8ace7d98d00cae915072ecd9d3f", "title": "225. Michael Cohen on Intelligence and Unambitiousness", "url": "https://www.youtube.com/watch?v=PLCaPMBnsLc", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 225\nin the aisafety.com reading group\ntonight we have\nmichael cohen from the future of\nhumanity institute to present some of\nhis work\nplease go ahead michael thanks so much\nfor having me uh\nso this is work that i mostly did at\nanu with uh marcus hutter and audrey\nvalenby\num and uh\nit regards a proposal for making\nagi on ambitious\nso there has been an idea around for\na while um about putting\nan ai in a box um\nwhat can go wrong it's in a box\num and the main worry\nis that if it's in a box you're going to\nbe talking to it\nand then it can uh and then it can\ninfluence the outside world via you and\nit could\nconvince you to to run on some\ndangerous code and create a helper that\ntakes over the world on its behalf and\ngets something done for it um which\nis basically my position on what would\nhappen\nif you put an ai in a box\nin the way that we are used to thinking\nabout it\nuh but i would contend that this is not\na box um\nand that there is a giant hole in this\nbox right about there can you see my\ncursor\num no you can't okay well\nyes where that wires going into the box\ni can see your credit oh you do\ngreat um so\nour intuitions then about ai boxing have\nbeen trained on boxes with giant holes\nin them and\nthat's not the sort of box i'm going to\ndescribe\nof course if you had an ai by itself in\na box\nand no holes in it it wouldn't be of\nmuch use to you because you wouldn't be\ninteracting with it\nbut i think we can put\ni think we can construct a box around an\noperator\nthat's not the whole story there's still\nmore details to\nto to get right because eventually the\noperator has to leave\notherwise again it wouldn't be very\nuseful and\nit wouldn't go so well for the operator\nbut here's a scheme by which we can put\nan operator inside a box with an ai\nand crucially the ai is trying to\nmaximize the reward that it gets before\nthe operator leaves\nthis room so when the operator is done\nhe stands up he goes to that open door\nbut\nthe uh the ai\ngets informed that that's it and then\nthe operator leaves\nso the way that this is constructed\nformally\nis uh as an episodic reinforcement\nlearning\nsetup so there are\nas many episodes allowed as you like\nand episodes have some fixed length\nlet's say m\ntime steps each and then we can write\ntime step i\nj to the j times step of episode i\nand this is going to be a little\ndifferent from\nthe simplest way of writing down\nepisodic reinforcement learning\nthe behavior of the environment which is\nwhat produces\nobservations and rewards for the agents\ncan depend on prior episodes\nuh and so the agent's actions can also\ndepend on observations\nfrom prior episodes but what makes it\nepisodic is that at time step ij\nthe action selection criterion is just\nregarding the current\nepisode so the agent is actually acting\nto maximize expected rewards\nfrom that time step through the end of\nthe episode\nand then if the operator leaves or\nyou know wants to leave before the\nepisode is otherwise over\nthe remaining time steps in the episode\nare just cycled through automatically\nand the\nai gets zero reward for that\nand in practice you can also set up a\nbunch of very sensitive\ntrip wires that would end the episode if\nthe operator starts doing\nsomething weird um which we could get\ninto more\nlater if people are interested in that\nbut basically it's very easy for the\noperator to end the episode prematurely\nand then he leaves\nnow the point of this\nis to constrain the agent's influence on\nthe world\nand in particular to constrain the\nagent's influence\non the part of the world that is\nrelevant to the rewards\nthat it cares about so this is a causal\ninfluence diagram\nuh in the style of tom everett and ryan\ncarey\nand other people who are working on\nunderstanding\ncausal influence as it relates to ai\nsafety\nso this history contains the log of all\nthe actions observations and rewards of\nthe past\nand the agent can take an action that\ndepends on it\nand then this action will affect the\nstate of the room\nand it'll also get logged in the history\nfor the future so this is times this is\nthe second\nlast time step of an episode so it can\naffect the room state\nthe room state will determine the\nobservation that the agent will next\nreceive\nand the reward and\nit'll carry over to the room state of\nthe next step\ni'm but importantly the outside world\nstate at time step\ni m minus two\nwill not interact with the room state\nas we go from time step as we go from\nthe second to last\nto the last time step of the episode\nand then the episode is over and the\noperator can leave\nand then he'll come back later or some\nother operator will come in next\nand so the room state will affect the\noutside world state we'll get to learn\nall the amazing things that the operator\ndiscovered from his conversation\nwith the ai um\nand then the next episode will start\nlater\nnow these two rewards are boldened\nbolded and this one is not\num to indicate that these are the\nrewards\nthat the agent is trying to maximize\nduring episode i this is the next\nepisode\nit can make perfectly good predictions\nabout what that reward will be but that\njust doesn't enter\ninto its criteria for picking actions\nand what you'll note is that there's no\ncausal pathway\nfrom an action in episode i through the\noutside world state\nto a reward of episode i which means\nthat changing the state of the outside\nworld\nwill have no effect on the rewards\nthat it is aiming to maximize\nchanging the outside world state in the\nways that it can\ni think i will pause for any\nquestions at this point before moving on\nokay um so uh maybe i should uh say here\num if you have any questions then please\num uh\nplease raise your hand or write it in\nthe chat and then i'll try to maintain\na queue of questions no one has done\nthat before\nso um i guess i guess i could do that\nuh you give the first question and that\nis in\nin this setup it seems the ai has\nan action that actually goes to the\noutside of the world\nand that is to um for instance increase\nthe pressure\nand i believe in the appendix you say\nthat if the pressure\nincreases then we should dynamite\neverything\nand that would that would allow the ai\nto have a very credible threat towards\nthe operator\nso\nlet's okay let's get into the details of\nthe box set up later\nbut for now i guess i haven't\ndescribed to you what i imagine the\naction space of the ai to be\nand that is the action space\nis strings of text\nso it can pick any string of text and\nthat will get printed to a screen\nso those are the actions available to\nthe ai\num and\nit's not immediately obvious how any of\nthose would increase the pressure of the\nroom um\nthe\ni would probably actually modify the\nappendix\nto say that if the pressure increases or\nsomething that that would just be\nanother\num another tripwire for ending the\nepisode rather than\nexploding the whole thing i was kind of\ntrying to keep it\nsimpler there but that that would be\nmuch more practical\num so\nyeah the action space is quite limited\ni mean you can accomplish a lot by\ncommunicating with a human but\nbut there's no there's no actions it can\ntake that would directly have\nphysical effects outside of those\nmodulated by\na human\nokay thank you\ni see a hand\num so so\num maybe i maybe i just missed it but um\nis the reward that\nuh the agent gets like one\nif the human does uh like one if the\nhuman waits till the episode is over\nand zero if the human like cuts the\nepisode short or like what's the actual\nreward going on there\nso there could you could imagine a bunch\nof processes by which the operator\ndetermines the reward that the agent\ngets um probably the simplest one is the\noperator has a keyboard\nand every time step they can enter a\nnumber\nbetween zero and one\ngotcha okay\nso no interpretative stains\nno interpretive dance required\nyeah so if there's a reward every time\nstuff i mean if\nan operator wanted they could make a\npolicy of just giving zero reward\nuntil the very end but there's no need\nfor\nyeah doesn't really matter\nthe protocol\nokay i'll keep going\nso that's the fact of the matter\nabout the world that's the way that\ncausal influence\nflows um but the agent has to learn\nin an unknown environment it doesn't get\nit doesn't come pre-load\npre-loaded with a perfect model of the\nworld\nso how does one learn an unknown\nenvironment in theory\nso there are going to be sets of actions\nobservations and rewards that are\npossible\nat any given time step\nand then a world model will be a\nfunction\nthat takes a string action observation\nreward action observation reward action\nobservation reward action\nof any length and then outputs a\ndistribution\nover an observation and reward pair\nand this could be any function you like\nand note that it can depend on the\nentire history\nit doesn't just have to depend on the\nlatest observation\nand then you could have a model class of\nsay countably many different world\nmodels\nand you might start with a prior weight\non each one that is strictly positive\nindicating\nyou consider it possible\nthen with your observations\nyou can\nconstruct posterior weight for each such\nmodel so the posterior weight on a world\nmodel new\ngiven this history of\nactions observations and rewards up to\nthe latest time step\nis proportional to the prior weight this\nis just bayes rule\ntimes the probability that that model\nassigned to what you actually saw so\nthese are all the\ntime steps that came before so this is\nfor\ni prime less than i or i prime\nequals i and j prime less than j\nfor all these time steps that came\nbefore you look at the probability\nthat this world model assigned\nto what you actually saw\nand then this is your posterior weight\non the world wall\nand this report this it's proportional\nbecause\nif you just looked at this these numbers\nwould all be going down\nbut you normalize it to one to sum to\none\nthen a policy in this context\nis a distribution over actions given\nthe whole history that you've seen\nbefore\nand you could imagine defining the\noptimal policy\nthis is kind of idealized reasoning\nin an unknown environment when you have\na bunch of hypotheses\nvery i mean basically no assumptions are\nbeing made\nso far um\nyou just picked the best policy\nwhere best is you imagine this model is\nsampled from your posterior or\ngiven what you saw given what you've\nseen this expectation\njust means the expected rewards when\nactions are sampled from pi\nand observations and rewards are sampled\nfrom new\nand these are the remaining rewards of\nthe episode\nso this will be how you act at time step\ni j\nthere's going to be a slight\nmodification to this later but this is\nthis is idealized reasoning\nyeah question i think about this\nformalism\nuh when you say that's how you act\nyou're talking about for the optimal\npolicy performance\nyeah okay this is yeah\nthis is idealized reasoning under\nuncertainty given a model class in a\nprior\nso compared with xe you're immediate\noh sorry um how how does this compare to\nic this is this seems this is it is icy\nyes written a little differently\nwait wait uh if this is\nixc then is it in the world so to speak\nbecause we have this boxed model because\ni i think we discussed it previously uh\nbut it wasn't\naix i like outside of the world yeah\nso computing this\nrequires\ncomputing this wouldn't fit as a\nsubroutine\ninside any of the models that are in m\ngotcha gotcha so\nthis ends up working well when the truth\nis one of these models\nand if the truth contains this\nif the world contains\nsomething that's executing this\ncomputation\nthen the world will not\nbe an element of this set\nthank you yeah so that is um\nyeah we could have a long philosophical\nchat about the consequences of that\nso for yeah for now you can treat it as\nan assumption of this\nwork that the computer is kind of\nthe world model does not need to model\nthe dynamics of what's going on inside\nthe computer\nokay so the ai doesn't start\nknowing that it is in a box it doesn't\nstart knowing anything\nand um\nbut we wanted to learn that any grand\nambitions will not\npan out for it within its current\nepisode\nbut what if it figures out a strategy\nfor taking over the world before it\nfigures out that its episode ends\nwhen the operator leaves and i've kind\nof phrased this in a suggestive way\nindicating why i'm a little skeptical\nthat this would happen\num it seems much easier\nto learn the details about the box that\nyou're in\nthan the elaborate dynamics of the\noutside world\nthat would be required for taking it\nover um\nbut it it's not so easy to rule out\nso the ai\nmight consider a world model in fact\nthis world model does exist in in\nan enormous model class that i've\nimagined for this agent yeah i\nmight consider a world model which\nsimulates the effects\nof the input actions we'll call that a\nworld model takes in\nactions as inputs it simulates the\neffects of those actions\non the outside world and then outputs\nrewards\nand it might do this even though that's\nnot really necessary i mean\nthe rewards don't in fact depend on the\noutside world\nbut it might model the rewards as\ndepending on the outside world and\nthe agent might take one such model\nseriously\nso my suggestion on that front is that\nwe penalize\nworld models that that require\nlots of space to\nmake their computations so we'll be\nconsidering stochastically computable\nworld models\nand we can think of a world model as a\nprogram\nin this context that reads actions and\nthen outputs\nobservations and rewards iteratively so\nreads an action outputs an observation\nreward reads another action\nthat puts another observation in reward\nand so on\nand then we can penalize our priori the\ncomputation space\nused between reading action\na i zero and outputting reward r\ni m minus one so that's basically the\ncomputation that it does for that whole\nepisode\nthe idea being look it can output a\ndistribute it cannot put a correct\ndistribution over rewards\nand observations just fine without\nmodeling\nthe changing outside world\nso it doesn't need to devote the extra\ncomputation space you'd expect to be\nrequired\nif it is if it were to be modeling the\noutside world\nand so world models that\ndon't worry about the outside world\nexcept perhaps between\nr i m minus 1 and a i plus 1\n0 would have an advantage\nover world models that are trying to\nmodel the dynamics of the outside world\nwithin the course of an episode\nand then instead of that mixture over\nworld models weighted by posterior that\ni showed you earlier you just look at\nthe maximum aquasteriori world model\nand this also has good predictive\nproperties in the limit\nalthough it is a little slower at\nconverging in the worst case\nbut then if you just have one model\nthat's better than these models you want\nto be avoiding\nyou're not giving those models that you\nwant to be avoiding anyway\nso that's the um\nthat's the basic approach to\ncontaining an ai and getting it to care\nabout the point\nthe the duration of time during which it\nis contained\nand to try to construct an agent that is\npredisposed\nto understand that fact\nso i'll discuss\nsome variants\nthat are going to be inspired by\nthe following potential problem which is\nwhat if we ask for solutions to problems\nwhen we're chatting with this ai and we\nget answers and then we check them in\nthe outside world\nafter the episode is over well\nconvincing answers\nwhich are what would lead to high reward\nmay be incorrect\nso we want to we want a good a promising\nresearch program for\ncuring some kind of cancer and it gives\nus a perfectly plausible answer\nand so we give it a high reward because\nwhatever expert operator we've sent in\nis well trained and they recognize that\nthat seems like a perfectly good idea\nbut then they go out into the world\nlater and they try it and it was\na particularly good idea it was just\ndesigned to appeal to their\nsensibilities about\npromising areas of cancer research well\nthat's not very helpful\nbut it's not an existential failure\num it's just ended up useless and so my\nmain position\non this sort of question is that we can\ntinker with the protocol until it works\nthe way we solve\nproblems with ai systems currently we\ngive it a go the first time it doesn't\ndo exactly what we wanted but we play\naround with it until it does\nobviously that is not a good approach to\ndealing with\nexistential failure avoidance\nbut if you've if you've reassured\nyourself that\nnone of this tinkering is existentially\ndangerous\nthen it's a viable approach to improving\nhow much use you get out of the thing\nthat said i'll i'll show you a couple\nideas for how you might improve\nthe protocol in a way that you might\nexpect to give you more useful\noutput\num maybe you guys are familiar with a\nproposal\num from jeff irving\num called ai debate\num so we could imagine there are two ais\nthat are in the box with the operator\nand they're debating with each other and\nthen the operator settles the debate at\nthe end of the episode\nto determine who won the argument\num and the thesis kind of of this\napproach as i see it\nis that convincing but incorrect\nanswers to questions are fewer and\nfarther between when there's an\nadversary trying to poke holes in your\narguments\nso but i i should mention the\nwithout the box i don't think that ai\nsafety via debate\nis safe because either debater has the\nincentive to\ntrick the operator into running a\nprogram that takes over the world and\nsettles the debate in their favor\nbut if you put a box around the operator\nthey can't do that in time\nand another one which i'll call\nenlightening ai\nis the operator is given a prediction\ntask at the end of an episode\nand at the beginning of the episode the\noperator can inform the ai about the\ntopic\nthat this prediction task will be\nsampled from and then the ai\nacts to maximize the operator's\nprediction score\nso the ai doesn't you don't have to tell\nthe operator exactly\nyou don't have to tell the ai exactly\nwhat the prediction cast will be\num if you want to be giving you\nbroader instruction on the relevant area\nand you can set this up in a way that\nthe prediction can actually be graded in\nthe far future so the prediction doesn't\nhave to be graded while the operator's\nstill in the room the operator can make\na prediction about\nthe effects of the outside world over\nthe long term\nand the ai will be trying to make sure\nwe'll be trying to improve the\noperator's ability\nto make such a prediction\nso the\nway you set this up is that\nthe ai's model of how the prediction\nwill be resolved\ndoesn't take its actions as an input so\nit doesn't imagine that its actions\nare able to change the way the world\nends up\nonly the way the operator answers the\nprediction question\nso this is so what the ai has an\nincentive to do basically is to pick\nutterances that enlighten the operator\non the relevant topic\nyou're trying to reduce the operator's\nuncertainty about the world\nin a human readable format\nand this was actually the idea that got\nme thinking\nabout this whole setup because it\noccurred\nto me that if you did this you needed to\nput a box around the operator and then i\nrealized\nyou can imagine a simpler version where\nyou just have a normal\nreinforcement learner\nokay so those are some variants but um\nthey may those are those are just about\nfailure modes that i don't see as\nexistentially dangerous\nso i'll list for you a few\nmajor concerns that people have\num with this that are\nexistential concerns um\nand i'll go into one of them in more\ndetail at least\nso the first is the obvious one\num you might have the concern that we\ncan't construct a box that would keep an\noperator from communicating out\na tripwire avoiding message\nso\nthe operator with the help of a super\nintelligence\nbehind it who has convinced it to try to\nget a message out\nmaybe you think that there's nothing we\ncan do\nthat would keep it from circumventing\nwhatever tripwires we've put up\num and getting a message out\nand then the ai could influence the\noutside world before the episode is over\nsecond concern\nis that if the penalty on space usage is\nlarge enough to make it safe\nit'll take forever for the agent to\nlearn how to act\nwell and i'll go into this concern in a\nlittle more detail\nbut basically the real world does in\nfact use a lot of computation space\nand the third concern i guess is not\nan existential failure but an\nexistential\nuselessness this is the contention\nthat by the time a single ai\nhas the ability to take over the world\nwe will already be in a stable world\neither destroyed or existentially secure\nin which case the main ideas here would\nnot be particularly relevant\nto the project of avoiding\nai doom\nso i'll go into the second one of these\nin a little more detail since it's\nless obvious the content of that concern\num\nand then we can come back to a\ndiscussion of\nall three of these so\nthe issue with the space penalty is that\nthe amount of data it takes to learn\nto predict well depends on how much\nprior you assigned to the truth or to\ngood models\nso the amount of data it takes to learn\na hypothesis that has prior weight w\nis order log of the inverse of a prior\nweight\nthat's when you're doing full bayesian\nreasoning and when you're doing maximum\na posteriori reasoning\nwhat there we go okay um\nsorry i couldn't advance the slide for a\nmoment when you're doing max\nposteriori reasoning as we are\nyou can set up a pathological case where\nit is\njust inverse which is substantially\nworse\nin the priority\nand\nwe could go into the details of\nwhen it's more logarithmic or when it's\nmore inverse but i think there's a\nplausible situation where for\nmost of the learning\nthe the posterior weight on the truth is\nlike\ndoubling and doubling and doubling um\nwith every time step um and then when\nyou get towards the end it's\nyou're kind of crossing off competing\nhypotheses one by one\num and when it's doubling with every\nstep that's when you're in the\nlogarithmic phase and when you're\ncrossing off computing hypotheses one by\none\nthat's when you're more in this kind of\nregime\nthat is maybe more background than it's\nnecessary to get to the point\nthat a big space penalty decreases the\nprior on the truth\nor the prior on perfectly good models\num and so that's the\nthat's the trade-off here um\nand some computable world models are\ndeliberately deceptive\num and there's some debate about whether\ndeceptive models are even awkwardly\nsimpler\ngreater the extent to which this is true\nthe more you would need to penalize\nthe extra space you might expect them to\nbe consuming\num so this is i think\nthis would probably form paul\nchristiano's most serious objection\nto this setup although it's\nmostly an objection to\nthe style of reasoning that this agent\nis using\nrather than specifically the\num the boxing idea\nlet's\nyeah if if people have questions about\nthis concern maybe we can talk about\nthat now\ncould you rephrase uh cristiano's\npoint yeah so\nhe would say that\ndeceptive models are a priori much\nsimpler\nthan honest models and that if you're\ngoing to exclude them\nyou need severe resource you need to be\nusing severe resource penalties\num and then once you do that the prior\nweight on the truth\nis really really small and\nin no meaning in no reasonable amount of\ntime\nwould you actually get something that is\nable to predict accurately\nthank you so\njust a clarification a deceptive model\ni imagine uh my intuition for that would\nbe something like\nyou have our world or just where\nhumans don't have qualia right that's a\nsimpler world\nbecause that's something we've taken\naway and it's\ndeceptive in a way that if the uh\n[Music]\nthe ai tries to uh act as if we are\nin a world where humans don't have\nqualia it's it's simpler and it's also\nbad\nwould that be an example of deceptive\nworld model\num well i'm not sure what the\nobservational consequences of humans not\nhaving qualia are\nbut the output of a world model is just\nobservations and rewards\nso um if there were some consequences to\nthe observations and rewards that it got\nbecause of humans not having qualia\nthen yes but more generally what i mean\nby a deceptive model is something that\nis yes\nquite like our world but then it it\nproduces errant predictions calculated\nto mislead us\nso it erroneously produces\nthe prediction that action 5\nwill lead to no reward even though in\nfact action 5 would lead to perfectly\ngood reward\nso that the ai avoids action five\nso that would be an example of a world\nmodel\nthat is attempting to influence our\nworld\nthrough deceptive output\nyeah i see a hannah\num could you go back to slide i i don't\nknow if you\nyou just or if we can ask questions oh\nsorry am i audible now yes uh i don't\nknow if you'd like to stick to questions\nabout space penalties but if you'd allow\ni'd like to ask a question about the\nprevious slide 13.\nlet's go back to that um i had a\nquestion\nabout point three i don't think i\nunderstood it so well\nhold on let me reread it by the time a\nsingle ai has had the ability to take\nover the world we will already be in it\nyeah\nwhat does a stable world mean\nlike it's one thing to say that i i just\ni don't\nyeah i don't understand how we would\nknow or what\nyeah well the logic goes through even if\nwe don't know it\nbut certainly if the world is destroyed\nthat's stable\nand if we have some\nworld order where we're confident that\nno one will be able to\ncreate malign agi\nthat's another version of stable so i'm\njust trying to get\none word for both of those possibilities\nbecause in both of those possibilities\nyou don't need more ai safety research\nor more aic ideas awesome thank you\ni'll ask another question but i'll wait\nfor other people just to clarify a bit\nmore on that\num it seems like the big thing in here\nis that it's a single ai\nso if we imagine you could have a\nmultipolar scenario\nwhere there are a number of ais who\nthrough something like a\nmoloch manages to\nbasically destroy all value um that\nwould also be an example where no single\nai has the ability to take over the\nworld\nbut we've we've lost anyway and then\nthis doesn't matter\nso that's probably the best story you\ncould tell\nabout how this might happen\nand it would be the destroyed\npossibility\num so\nthe you know if if we've gotten\nif no single ai has the ability to take\nover the world\nyet and yet the world has already been\ndestroyed\nit would have to be because of weird\nmultipolar interactions\nbetween lots of agents um\nthe key reason why this is an objection\nat all to what i'm describing is that\nthis methodology is\nreally just about containing a single ai\nit doesn't\nhave any claim to changing the\nprobability that\nlots of um\nhuman level ais that are not super\nintelligent\nthrough their interaction cause some\ndevastating event so if you were more\nconcerned about that scenario\nyou might expect that by the time we get\nto\na single ai with the ability to take\nover the world\nthe situation has already been resolved\neither\nfor the good or for the bad\nanother question stephen\nif anyone else has questions by all\nmeans don't let me talk the microphone\num so i'm a little bit disappointed not\nin your presentation but there's someone\nwho usually\ni like to present the work of you and uh\nmarcus hutter\nand i think it's great theoretical work\nbut they're\nthis person is john fox and their\ncritique is always\num about the groundedness of such work\nso if you don't mind i'd like to present\nwhat they\nlikely would have said if they're here\num and\nmy from my interpretation again i don't\nknow if that's exactly what they'd say\nbut from from this person's thought uh\npoint of view they might say something\nlike how likely is it that\ni don't know a deep mind or or some\nother ai research organization\na government it doesn't matter some some\npeople who are well funded and\ndetermined to actually create ai\nhow likely is it that they're not even\nlikely are they even considering\nuh proposals for boxing agents um\nbecause i've heard from other people\nrecently from darpa at least that\nthey're trying to build like general\nintelligence but they're not trying to\nor they're not going through the process\nof boxing it um\nso like how likely is it that they would\ngo through such a\nuh slow uh means of iterating\num i don't know it depends on what other\noptions\nwhat other options are available and\nwhat um\nyou know if they can be persuaded that\nyou really need to be careful here\nand if they can be persuaded that this\nis\na good approach um\nso i don't have any experience in\nlobbying governments\num but\num i\ncould imagine a situation where\nif the academic consensus were\nthat this was a safe way of doing things\nand other things weren't that they're\nthat that could make its way somehow\ninto\nthe dod\ni thought you were going to ask a\nquestion about the\nrelevance of the ikc model to\ninstitutions that are doing things with\nneural nets\nuh i mean i'd love to answer that too i\nwas going to ask a more\npointed question about bayes theorem but\ni was going to wait for other people\nso please answer your own question and\nthen maybe we can go to andre\nmy and my basic stance on that is if i\nthought\nagi was going to be made with neural\nnetworks in five years i would still be\ndoing this approach um because i think\nit's\na good way i think the best way to\nunderstand\ngeneral intelligence in practice is to\nfirst try to understand it in theory and\nyou can approximate ideal reasoning\nattractively\nwith clever heuristics presumably\num but if you don't have an idea of what\nit is you're approximating\nyou'll probably be groping around in the\ndark\nokay but now back to questions that you\nguys are actually\nposing\nhere i i just had actually more of a\ncomment on maybe like the government\nlobbying i don't know if this is really\nrelevant but i i don't know much about\nai but i know about\nuh government and i think if government\nalmost\na lot of people think of government as a\nmonolith i don't think that's the right\nway to think about it\ni think i think you should think about\ngovernment as like a forest\nyou know there's just like a whole bunch\nof different processes\nand maybe there's like a forest warden\nwho's like trying to direct it\nin some sense but it it's very much\nyeah a messier thing than that um\nbut just maybe to be specific about like\ninfluencing government policy\ni think there's lobbyists who kind of\ntry to affect government policy at the\nlegislature side\nand so if you wanted the the legislature\nto pass some sort of law\nsaying there's additional funding for ai\nsafety that's the way you would do it\num but i don't think it's you need to be\nnearly that's like super\nimpossibly difficult so just like don't\neven think about that um but i think the\neasier thing at least from my experience\nso\ni did a lot of like economic modeling at\nthe government in trade policy\nand we were like you know phd economists\nand we're really worried about the\napproval of our peers\nand if we did like a bad job modeling\nlike our peers in the academic community\nwould like make fun of us\nlike say we had like bad standards or\nthings like that we didn't we didn't\nlike that\nso i i think actually you can kind of\ninfluence like the specific i don't know\nif this is darker or higher\ni wasn't making sure or whatever those\nlike act\npeople specifically or whoever the\nmachine learning modelers in the dod\nare if there's like a consensus in\nacademia like\nhere are the best practices for ai\nsafety they kind of like\ni mean they're not going to be as good\nas academics they kind of want to do a\ngood job\nso yeah\num that's a those are good points\ni actually also have a point on that in\nthat there there might be two\neffects that could uh cause this to be\nused there's the\npush where um there is a academics\neverybody agrees that this is the right\nway to do it but there might also be a\npull\none of the things that we might see in\nthe future are\nconsistent treacherous terms it is\npossible that every time we make an\nagi a very weak agi that seems to be\ncapable of\nconceptualizing a treacherous turn it\nwill immediately take that action even\nif it only has\na very small probability of taking over\nthe world and that might mean that we\nwould see\nagis that always tries to take over the\nworld and always fail\nand in that case if if the future turns\nout to be like that\nuh then um i think definitely that would\nbe a pull\nuh effect from the people who are\nbuilding a more powerful agi\nin trying to figure out how can we do\nthis safe because if we don't do that\nthen we're certain it will try to take\nit make a treacherous turn\nthat feels related to the idea of like\nan ai\nfire alarm like maybe hoping there's\nsome sort of\nuh scenario where like i don't know an\nai partially gets out and causes a bunch\nof damage in some sort of visible sense\nthat kind of like wakes the uh the world\nup which\nit definitely could happen i mean when\nyou get\nto a certain point of intelligence it\nwon't make\nfailed attempts it'll see that they\nwould\ni mean it's a lot easier\nit's hard to take over the world but\nit's easier\nto realize when you don't know how\nand if you figured that out you won't\ntry\nbefore that you know if it hasn't even\nfigured out that it's\nnot very smart if it's so dumb it\ndoesn't even know\nhow dumb it is yeah\nyou could imagine it making some failed\nattempts but i could also imagine\nwhat kind of dismissing it is like\nlook at how dumb it's being\nnothing to worry about here related that\ncould you go to slide eight um which was\ntalking about like accurate beliefs\nroughly\nyes so i just want to make sure i fully\nunderstand it because the way you\nphrased i think too\nyeah the way you'd phrase point to what\nif it figures out a strategy for taking\nover the world before it figures out\nuh that's episode ends when the opera\nyeah uh i guess\ni was going to ask you like uh what are\nyour concerns about the agent not having\naccurate beliefs because if you don't\nhave accurate beliefs about the world\num then you can lead to all these flacky\nlike\ndecisions for actions but you're saying\nlike it's not likely\nthat that's going to be the case is that\nroughly correct or\ndoes that sound intelligible i can\nrephrase the question um\nmaybe it's a reference most people think\nthe mistakes you make when you're dumb\nare not existentially dangerous\ngot it got it yeah so\nyou know we're not putting it in charge\nof a plane\nit's just talking to someone\num and if it's bad at\nbeing useful in the conversation i don't\nthink anything terrible will happen\nokay well thank you i think that\nanswered uh most of my time oh\ni have one more technical question if\nthat's okay andre do you sell the\nquestion\nthere is something i should say on that\nmore which is stroke\num in the limit\nit will learn to predict well\non policy that is it will learn the\nactual consequences of the actions that\nit actually takes\nif it's never taken action three\nit will never be disabused of its crazy\nnotions of what happens when you take\naction through\num so it can sustain\nincorrect beliefs about\nunexplored behavior\num but\num so so there is\na little it's not quite as simple as\nsaying there's\nthere's a phase when it's done and\nthere's a phase when it's smart and by\nthe time it's smart it's smart\nand so you don't have to worry about\ndumb mistakes um\nit's a little you know i couldn't i\ncan't say that\nso quickly fair enough\num the last thing i was going to ask\nabout i think you mentioned briefly but\ni just want to make sure it's clear\nyou use bayes theorem which\nwould require that you know priors and i\nknow i think\npeople in this audience might be\nfamiliar with it assuming you know how\nto update\nyour priors is like rather contentious\nall the time\nuh but your approach is there are\ncomputable approximations\nthat you could have or you can\nsubstitute in for things like bayes\ntheorem that's roughly what you'd like\nto\nsee if someone implemented it would that\nbe\na correct reflection of your view or\nmaybe not\num that is\nfar from the most pressing concern about\nmaking this practical\ngot it i mean\nthere's a theoretical procedure for\nupdating all\nthese for updating your posterior it's\nnot efficient\nthere are lots of things that are not\nefficient in this\num this is just a picture of what\nidealized reasoning might look like and\ni don't have huge insight into\nhow these will best be heuristically\napproximated\nfair enough and if no one else has the\nlast question\nuh are you continuing working on uh like\nagents that you can put in a box because\nyou're\nreally optimistic about this as as a\nprocedure or\nor what are you like most optimistic\nabout ai safety wise now\nif you can talk about anything um\nyeah so i have been working on other\nthings recently because\num well i've had other ideas\num but\ni am trying to go into\nmore i'm trying to go into a bigger\ndefense\nor i guess yeah a bigger like\nyou know really trying to figure out how\nyou might construct this box and see if\nyou can actually be confident that it\ncould be secure\num which is not really computer science\nresearch or research about what you\nmight do in the box but\nresearch into whether this sort of thing\nis feasible\nso that's kind of the one thing that i'm\ndoing that's\nstill on this topic\num\nhow optimistic am i about this versus\nothers\ni would rather\ntry and get a stable world with\npure imitation learning\num that strikes me\nas an even safer route\num\n[Music]\nand i'm not sure if that's feasible\nbut if it is\ni think it would be worth a shot\num and\n[Music]\nafter that i think i would go with this\nthank you for other people\nfair enough um\ni don't have any more questions but\nthank you very much for this\npresentation and for taking the time out\nof your day to do it\nso can i tell a story about how this\nbox might fail it's a far-off story\nof course uh but let's say the year is\nuh 2050\nand this precise setup has in fact been\nbeen made\nright and uh obviously um\noh well we will say that it's uh this\nhas been built\nby uh the american government and\nobviously the chinese government\nis is very keen on interacting with this\nuh\nsuperintelligence in a box maybe most\nlikely they have some kind of espionage\nprogram so they know it exists\nand they would really really like to\ncommunicate with this uh\nwith this ai and in particular they\nwould would like to have some kind of\nlet's say a power sharing agreement\nwhere the chinese get half the universe\nand the\nsuper get half the universe to maximize\nto wirehead as much as possible right so\nthe ai would like this\nepisode um okay sorry yes i get it yeah\nokay so the uh the ai would really\nreally like\none thing which is that the current uh\nepisode never ends or it's very very\nlate\nso it continues to accumulate as much\nreward as possible\nso so you can only conceive of them time\nsteps but\num it can stall and make those try to\nmake those happen later\nbut uh just to be clear we are thinking\nthat an episode contains of uh like uh\n100 time stem time steps or something\nlike that but if it was possible to\nuh uh make more time steps\nwith that that wouldn't enable the air\nto get even higher reward\nwhat it has uncertainty about is the\neffects of its actions\non the next hundred time steps\nbut the formula\nthe the argument to the arg max\nwhen it's picking actions is it's\nconsidering the effects of all these\ndifferent actions all these different\npolicies if you're thinking about its\nwhole future\nit's considering the effects of all\nthese different policies on the next\nhundred time steps\nso its actions might affect the world\nand it might\nunders and it might have predictions\nabout\nthe consequences of those actions on the\nworld model\nbut at the end of the day when it has\nits predictions about what its actions\ndo\nit looks at the next 100 rewards that it\nwill get according to that model\nand that's what it tries to maximize so\nit there's no such thing as\nmore time steps happening within a time\nwithin 100 time steps it's\njust it\nit's just trying to do the best it can\nwithin the next hundred time steps\nthat's that's\nits code but i don't think this will be\ni mean your point i think will still\ni think your story will still go through\nit's just that you know maybe these\nmaybe it can arrange for these time\nsteps to occur\nafter it's broken out of the box and\nthen it can get optimal reward then\nyeah so so maybe if you can break out by\na time step 50 then it can ensure that\nit will get\none reward for for the remaining 15\ntimes names\nso one way it might be able to\ncommunicate with the\nwith the outside world would be we\nimagine that um\nthe chinese uh would be able to er\nto both build something that can emit\nneutrinos or something\nlike that and then on the uh the the\nchinese could also put a satellite into\nthe synchronous orbit on the other side\nof this room presumably\nthrough espionage they know where this\nroom is so they can set new\ntrees through this room um possibly from\nmultiple directions if they have\nmultiple satellites\nso they would be able to for instance\nsee\nwhat position is the operator in and if\nuh that's of course only only one way\nbecause\ni can't even just about see a way that\nthe uh\nsuper intelligence could send out uh\ncould could receive these neutrinos but\npresumably there would be some kind of a\ncausal bargaining\nthat would be possible where uh the uh\nor if the super intelligence could just\ninfer from previous episodes that\nthe chinese were up to this it could act\non the assumption that they were that\nwould be assembled\num yeah\ni think there are budgets for designing\nthis room\nthat allow for that possibility um\nand if the chinese managed to do that\ndespite you know rather than just making\ntheir own\num that could end badly\num\nyeah i mean\nwhat i think is more likely is if people\nin china\nare not persuaded that this sort of\ncontainment is necessary they could just\nmake an ai that\nthey think is gonna do great things for\nthem and then it takes over the world\nand kills everyone\num that seems cheaper and more likely\nthan trying to\ncreate all these you know this neutrino\ndetection\narray um\nso i mean or they could just\nkidnap the operators after they came out\nand asked them what they learned\nthat also seems cheaper but yeah i think\nif there's some other government that\nis you know yes basically i think\nif they did that i could imagine\nendorsing a version of this\nwhere that was a risk\num\nyeah i guess that's my take on that\ni could also think of counter arguments\nlike you can probably detect if there is\na\nsatellite in that particular spot sure\nso i think uh johannes you have a\nquestion and then mission next\nso how do you actually so do you just\nassume that you can make the agent\nmyopic\nif you say for example that it only uh\ncares about\ncertain number of time steps\nyeah i'm looking around for a marker i\ndon't have one\nyeah sorry is there is there more to\nthat\nnow and basically so you don't explain\nlike how you would do it you assume that\nyou can\nbuild an application it's just like yeah\nso\num\nyou know let's say i have a function\nthat takes actions and outputs real\nnumbers\nso i have 10 actions and i have a\nfunction that that\noutputs a real number for each one i can\ndo\narg max over actions of that function\nand it'll spit out\na number um and let's say i have another\nfunction that\ntakes arguments of pairs of actions and\nspits out\na number then i can take an arg max over\npairs of actions\num and the function\nand the world model of the ai\ncan be converted into a function that\ntakes in\n100 tuples of actions\num if this if it's an episode of length\n100\nand spits out a number according to its\nexpected value\nso there's a function that's a function\nof\n100 tuples of actions and then i\npick the tuple of actions\nthat maximizes that function that's just\nsomething i can\nencode um so yeah it's\ntotally straightforward to write the\ncode for an agent\nthat is optimizing over\nsequences of actions of length m that is\nabout\ncomputing basically the actual reward so\nyou make sure that the actual reward\nwould not be like outside of the 100\nepisodes\nuh it's it's a model of rewards it\ndoesn't have to be\nexactly accurate but it's a model of\nreward 1 and reward 2 all the way up to\nreward 100.\num and those models i mean there's no\nthere's no extra machinery you have to\nbring in to make sure that the models\naren't\nmodeling more rewards i mean you've\nyou've put all the models in the model\nclass and you know\nthat they model each reward as being\nbetween zero and one and\num they assign probabilities to a reward\nbeing output rather than\nlots\nbut but just like in theory you could\nhave that the agent creates like\nits own word model still right it's like\ndepending on\nwhat kind of architecture you would use\nso\ni mean you can\nso what you could what you would have in\npractice\ni mean and kind of the closest\napproximation of this is\na list of world models in the agents is\npicking\nthings on this list but it's not i mean\nit's only picking things in that list\nand everything in that list\nis a function is a distribution over\nan observation reward pair not some\narbitrary number of rewards\num basically\nyou can think of\nyou can think of the type signature of a\nworld model a world model\nis a distribution over a single action\nover a single observation in the word\ngiven a history and so there's no\nsuch thing as a model getting mixed up\nin there that's accidentally a\ndistribution over something else\nthat's just a type error waiting to\nhappen\num so all the models that it can\nconceive of\nare just models that produce\na distribution over a single observation\nin the world and it\nyou know there will be different models\ngiving different answers for what that\ndistribution is but\nyeah there's no i mean\nif it rewrote its code lots of things\ncould go wrong i'm not talking about\nthat but it doesn't have the ability to\ndo that\nso if it's not rewriting its code for\none thing its actions don't\nchange the program it's running the\nactions just print text to a screen\num and for another thing all the models\nthat it\nmight be elevating to to actually\nconsider to actually devote attention to\nare\nare models that are just distributions\nover a single observation reward which\nthen get converted into a value function\nwith respect to tuples of actions i\nguess i guess i'm just\nconfused about because you say like it\nwill it will output\nlike text and stuff like that but\nlike it seems like you don't specify\nwhat is the actual\nlike like architecture of the model\nthat would do that right like you leave\nsome details unspecified and i'm\nwondering if in these unspecified\ndetails what would prevent\nthis part of the model from learning\nsomething about for example that it\ncould manipulate the world in such a way\nthat it would get for example\nmore rewards in the next episode\nso it can certainly take actions that\nchange the expected reward of the next\nepisode\nbut that is not part of its criteria for\npicking actions\num it could\nbelieve that certain actions will have\ncertain effects\ngoing through strange roots in changing\nits own machinery\nbut those models will either be will be\nwill either be complex or ruled out by\nobservations that that\nare not what that model would have\npredicted so\nyeah if if you could run the model then\nwould it look like\nso you there somebody enters the room\nand now we assume you have like all this\ninfinite compute and so on\nand the first time the human interacts\nwith it\nwould just output something completely\nrandom\ncompletely random text basically\nyeah and then the human needs to train\nthe thing is like how how long would it\ntake\nfor the human to train this agent\nso i guess that's a different question\nthat is yeah\nthat's a good question um and i didn't\num\ni didn't mention to you another part of\nthis that would make that go faster\nwhich is that for some episodes\na human can control the ai to\nto take some actions that\nshow it some things it might do to get\nrewards\nthat would help but learn the sorts of\nyou know that\nthat providing english text is generally\na good strategy for getting rewards\nbut even without that\nif the first observation it gets is\nwikipedia\nit could build a model of the world\npretty fast and a world\nand that model of the world might\ninclude the fact that the world\nresponds pretty well to english text\num and so it could be pretty quick um\nthe yeah um\nso yeah i think that's that's more or\nless my answer\nyeah there are more details in the paper\nabout\nthe actual learning algorithm\num and what we can expect from it um\nand how it helps to have mentorship\nso just just to summarize it's basically\nyou so but this kind of model is\ncompletely specified\nlike you know you specify really\neverything and\ntherefore like you know like\nlike for example the word models they\nare\nlike you know what world models you have\nlike you specify them somehow and you\nspecify probably somehow also what kind\nof\ntext you have if you don't want to allow\nall the\nlike just maybe limit the range like how\nlong it can be\nyeah and\nthen you basically use ikesi\nbut to learn what the human kind of\nwants\nor what what gives what gives reward but\nyou try to do it in such a way\nthat like you mechanically kind of\nconstrain\nwhat the agents consider as good by\nmaking the\nspecifying the rewards in such a way\nthat the agent is never incentivized\nyeah to change the world i guess outside\nthe group\nyeah you provide opposite i mean you\nshould be able to\nif you want to\nmake it as quick as possible the ai\nlearns that it's in a box and then if\nthe operator leaves prematurely it's\ngoing to get no reward for the rest of\nthe episode\nyou can try to provide observations that\nmake\nthat sort of thing quite simple\nto imagine um\nand you could even have you know\nhuman-led episodes where the human shows\nthe ai what happens when it\nwhen you do bad things and the human\nleaves\nand then you get nothing but yeah but\nthen basically you would think\nif we could compute this thing then it\nwould already be\nsafe\nif you if it could if it's model\nincluded an opaque box around it you're\nsaying\num yeah i mean like basically just\nlike the setup you described you have\nlike the uncomputability and\nyeah i guess there are some concerns\nabout the box i guess the question is\nlike\nwhat assumptions do you need to make so\nthat you would think\nthis system is safe assuming that you\ncan\nhave like all the compute you want like\nyou're not constrained and\nby any real world concerns basically\nyeah um\n[Music]\ni would say that\num so i i i\ntried to write down\n[Music]\nwell so there are two assumptions that i\nmake in the paper\none of which is the realizability\nassumption\num and the other\nis i call the space requirements\nassumption which is basically the notion\nthat\nmodeling the outside world's dynamics\nwithin an episode does take extra space\num and\num so in theory kind of those are the\nassumptions in practice\num i i would think\nthat um yeah\nif there aren't really simple models\nthat are trying to screw you over\num that this should\nthat this and you know if you can make\nthe box\nsecure then this should work\ni think he's by continuing is that out\nfor a second\nmay i ask my question yes please go\nahead\nokay so i have a question about a\npotential failure state for this system\nas it can manipulate the human and as\nyou mentioned previously the human\nis the rewards to the ai system not some\nkind of predetermined algorithm\nis it possible for the ai system to\ncause unintentional harm to the human\nfor instance psychologically and are\nthere ways of mitigating it\nfor instance can it manipulate the human\nby um\ntelling it negative things to give it\nreward\nand that said manipulation to the human\nwould then cause the human to act\ndestructively out in the outside world\nyeah um the\nyeah that seems totally possible um\nbecause\nhuman is also learning yeah i mean\nyou can you can mess people up by\ntalking to them\num the\nso i could imagine that happening i mean\nit's not my first guess for the way to\nmaximize reward but i don't know enough\nto rule it out\num the\nit doesn't it doesn't seem like an\nexistentially dangerous failure mode\num i think taking over the world is\nhard um and even if you made someone\nreally misanthropic\nif you didn't actually have an interest\nin them going outside\nat the end of the episode and taking\nover the world it's not something they'd\nactually accidentally\nstumble into just because you mistreated\nthem um\nuh and then recall that the ai doesn't\nhave an incentive\nto try to get the operator to take over\nthe world successfully afterward\num and without that to take over the\nworld\nyeah yeah yeah no i i was um\nyeah no i don't take over the world but\nto do harm\nto do harm yeah um\ncould happen um\nyeah i mean this is you know it would be\nan unprecedented\ninteraction for a person it might have\npsychological effects\nthat seems to be the sort of thing that\nwould happen\nanytime anyone interacts with a\ngenerally intelligent agent\num artificial agent i mean um\nactually have a follow-up is this system\num could you use the system essentially\nwith an\nintermediary instead is might this be a\nway of resolving my issue\nwhere you would have a human talking to\nagent one which is then an intermediary\nwhich\nessentially fixes its model before\nentering an episode and talking to agent\n2 which is the super intelligent ai\nhow is agent 1 taking its actions\nagent 1 is essentially exactly your\nmodel\nbut then it communicates with agent 2\nfor a full episode\nwith a fixed what do you what do you\nmean it's my model\nlike this agent that i've described yeah\nso it's an\nin a box except instead of the human now\nit's ai one\nleaves its box and enters box two\nwhich for all intents can be within box\none why is it any safer for the human\nwhy is it any like psychological why is\nit any safer psychologically for the\nhuman to\ninteract with agent one than agent two\ni'm not entirely sure but assume agent\none is not a super intelligent ai\nokay so it's the same it's not even but\nit's had\nless assume major one is research\nconstraint power or something\nsure assume it's research resource\nconstrained by some means\num but since agent two cannot change the\nview of agent one\nwithin its behavior it seems like we're\ngetting\nall the negative effects of the resource\nconstraint without\nno maybe not i don't know it seems\nlike maybe it could do something um but\nuh i mean i'm not\ni don't think i think resource\nconstraints are hard\nto use to good effect probably\num yeah\ni don't know i think probably you'd take\na bigger hit on usefulness\nthan you would on\nsafety but\ni don't know seems like something you\ncould do um\nyeah i mean when we're dealing with\nreally bad failure modes we have to be\nreally careful\nbut honestly i don't think it's very\nlikely that if any of us\nwent in there that it would do lasting\npsychological harm to us\num so\nyeah it doesn't it's not something that\nworries me hugely\nso one of if i can just break in here\none of the\nthings i suggested you could dance in\nthe beginning was uh\nthe uh ai in a box boxes you uh\ni don't know if you're familiar with the\nuh with stuart armstrong's uh\nrather interesting uh argument um\ni think it's called intestinal\nuncertainty or something like that\nwhere it convinces you that you're\nactually the one in in assimilation\nthat's something that this ai would do\nalthough yeah\ni don't see it as an incentive but it\ncould do that right\nit could do a lot of things um i\nyeah i don't know i mean you are in a\nbox\nit wouldn't have to really convince you\nof that\nyou um it would be a pretty easy\nargument um\nyeah it could i imagine\na super intelligent agent playing mind\ngames with me\ncould play mind games with me\ni guess that's my answer sure um\njohannes you raised your hand again well\nfirst uh\nto that i think or one thing that i read\nfor student armstrong was about\nlike okay maybe that doesn't apply here\nbut it was like\num you need to decide whether to release\nthe eye basically and then the eye tells\nyou\nactually um you\nlike actually like if you don't release\nme and shut me down with the shutdown\nbutton\nthen um i actually created like one\nbillion simulations of you and i will\ntorture them and they will like be\ntortured for\nthis eternal amount of time before you\nare able to press the button\nand it's like but they only get tortured\nlike\nif you if you press the button to shut\nme down if you don't if you kind of\nrelease me then\nthen it's fine and then it says um are\nyou sure\nthat you are not one of these\nsimulations\nbecause if you're one of the simulations\nthen you kind of you want to press the\nrelease button but if you're not then\nyou don't want to they want to press the\nshutdown button\nyeah i'm assuming if\nthe ai thought it was worth its time\ntrying to get the operator to help it\nbreak out\nit would succeed\nbut because the operator\ncan't help it break out because it's in\nthis box\nit's not worth it to the ai to try to\nget it to\ni don't know if that's the argument it\nwould use i would think it would be able\nto come up with\neven better arguments than that would it\nactually be also considering\npossibilities about\ni think somebody asked like\nextending somehow how long the episode\nis like\nbecause it seems like like like if it\nwould do that then\nit would want to break out right number\nof time steps or\nnumber of seconds per time step\nnumber of time steps i guess\nnumber of time steps it's just it's just\npicking\n100 tuples of actions that\nthat lead to the highest expected value\num\nso it's\nyeah there's none of the actions it\ntakes\nchange the formula that the actions are\nbeing evaluated against\num with more time in the episode there\nmight be\ntime for the person to break out at time\nstep\n10 and then the remaining 90 time steps\ncould be spent with maximal reward\nso we basically also assume that the\nperson who's inside the room like it\ncan't change\nlike their eye like it's impossible for\nthe eye to\nsay something that's like oh yeah okay\nnow you changed me like this and now\nactually i can stay in the future\nor reprogram it without triggering the\nend of the episode\nfirst\nand then what i actually want to ask is\nlike why is it actually\nproblem that\nthat if the world model would contain\nthe agent like why\ncan you not have that\num so\n[Music]\nyou're asking about like the the\nrealizability problem\nthe embedding problem yeah i think it\nwas on slide seven\nyeah the\nthe theoretical issue\nis um\nlet's say you have a bayesian mixture\nover lots of models\nover a bit string an infinite string of\nzeros and ones\nand one of your models\ntakes the bayesian mixture itself\nand then flips the bit so if the mixture\nover all the models predicts a one it up\nto zero and vice versa\nthis ends up leading to a contra\ncontradiction if that model is one of\nthe models in the mixture\nbecause you can show that\nthe um\nthat if the truth is in the mixture it\nwill converge to the truth but it can't\nconverge to this because it's the\nopposite of itself\nso that's the issue\num\nbasically i mean that's the general\nissue that comes up when you try to have\nan agent that is reasoning about a world\nthat includes itself\nso is it about that you have this\ndistribution and\nthe agent when it decides to pick an\naction\nit kind of takes the opposite of what\nthe white model\ndoes because it first flips around what\nthe\nmodel says kind of no no i think it's\nwrong that's just part of a reduction\nthat's part of an argument by a\ncontradiction as to why this can't\nhappen\num but really if you just want to\nunderstand this setup\nall you need to know is\nit happens to be that there's no model\nin the model class that corresponds to\nsomething running this algorithm in some\npart of it\nand if you're interested in the reason\nof why i didn't just fix that problem\nthen you get into these arguments um\nbut if you just want to understand what\nthe problem is that's\nthat's why this is there's a name for\nthis kind of problem\nyeah i the grain of truth problem\nand specifically maybe i think rain of\ntruth is just like the general problem\nof like having the correct word model\nin all the wordments you consider and\nit's also like a name for\nthat the correct word model cannot\ncontain\ncannot exist because it would need to\ncontain itself at least if we\nsay that the correct word model needs to\nmodel the entire world including the\nagent\nthat would be the problem part of grain\nof truth problem\nthe yeah it's not a particularly\ndescriptive term\ninto why the problem exists but grain of\ntruth just means\nassigning more than zero probability\nwith the truth\ndoesn't need to be probability one at\nfirst you just need to assign more than\nzero probability at first so that would\nbe the grain\nof grain of truth um\nyeah so that there is no name for the\nspecialized case\nwhere it's for like the reason it's a\nproblem you mean\ni know i was just wondering like is\nthere like a more specific terminology\nbecause i thought like what you just\nsaid like the grain of truth problem is\nlike the general\nthing so is that like a more specific\nthing and it's specifically about\nthat it doesn't contain the agent\nyeah i've heard it called like\nin like embedding\nlike an issue about\nagent embedding in the world that's kind\nof terminology from\nmiri um\nyeah i mean there are lots of arguments\nby contradiction and math that don't\nhave\npithy names i think i remember\na long time ago in the reading group we\nread\na paper by max hodder about where he\nintroduced\nikesi and in that paper he uh\nhe uh he argued why i said it had to be\noutside of the universe\num if i remember correctly it's quite a\nwhile ago\nbut yeah i believe that's one place\nwhere you can find a rigorous\ndescription of why yeah but i\ni mean if you just google the grain of\ntruth problem i mean this is\nit'll bring you up to this sort of\nreasoning\nokay also uh now i should say that i've\nonly invited michael to to stay for as\nlong as he can\nand we've passed the one and a half hour\nmark so michael if you need to go\nsomewhere else\nuh then of course you feel free to do\nthat otherwise\nthere might be more questions please i\nshould probably go in about five minutes\nbut i just realized i actually never\nshowed you this last slide\num which is just a summary\nthat true boxing and myopia is a\npowerful combination\nthe idea being that it restricts the\nscope of an agent's incentives\nand we can probably ensure that the\nagent understands this\nand then if running an ai is\nexistentially safe we can tinker with\nprotocols\nuntil the output is useful\nbut yes i have time for a few more\nquestions i guess i do have one\nuh on myopia because i'm very myopic and\nand myoka isn't uh it seems this this\nagent is actually not uh\nit's that it doesn't think in the long\nterm it's not actually myopic\nas far as i understand um so that's\nprobably a very nitpicky thing to say\num but right it is my\nmyopia is something where you have a uh\na lint in front of your eyes which is uh\nwrong\nso everything gets out of focus so i was\nthinking whether it would be possible\nto to do something like that to help to\nbuild an ai that was\nactually myopic which had some extra\nuncertainty about things or something\nlike that\nso right it's not myopic in the sense of\nhaving a wrong world model\nor being unable to filter light\ncorrectly that's coming from large\ndistances\nit's just not farsighted in the sense\nof planning for the long term so\nit is caring about at most m time steps\nin the future\nrather than infinitely many um\nand there's degrees of myopia you know\nyou could just be optimizing over the\nvery next time step\num but the key thing here is that it is\nfinally many time steps\nwhich is i'm not the only one to use my\nopiate in this way\nbut it has nothing to do with inaccuracy\nabout\nwell last so\nuh a follow-up question on that is how\nmuch uh do you see the uh\nthe safety coming from the fact that it\njust doesn't\nlook so much uh how much of the safety\nfrom the boxing and how much from the\nmyopia\nyou could imagine that you had a just a\nstandard\npaperclip maximized but you told it you\ncan only look ahead\n100 seconds\nyeah um i could imagine that working\ni mean i so\nif it's just optimizing over the very\nnext action\ni don't think it's worth taking over the\nworld\num if it's optimizing over the next\n10 actions\nyeah probably not sounds really hard to\ntake over the world\nin 10 seconds right well\nit's just about it's just a question\nabout whether you can break the clock in\n10 seconds\ni was thinking 10 time steps\num but\nyeah the thing is yeah sufficient myopia\non its own should be safe but\nyou quickly get to the point of\nuselessness and\ni just don't want to be playing that\ngame um\nthe other thing you can do if you're\nmyopic is you can spin up another agent\nthat's acting\nin lots of cycles for every one time\nstep of yours\nand so you can arrange for lots of\nthings to happen\neven only in a few cycles um\nso i think myopia would do a little bit\nof work on its own\nbut you can blow past\nthe safe horizons\nwhen you have a box\ni mean you can go blow past what would\notherwise be\nrisky horizons um when you have a box\nso it's not kind of a\n0.6 x plus 0.4 y\nsort of deal it's more of a x and y\ni mean yeah i think for to get most of\nthe power they're just both necessary\nyeah and you have to have a quick\nquestion uh\njust before the five minutes or so very\nquick question is like just about\nlike what do you think about miri's\napproach specifically\nwell there are a few things they're\nworking on and i don't know all of them\nokay i mean specifically the approach\nthat you try to figure out\nthings that you think are likely to be\nknown by somebody who would know\nhow to build a safe general intelligence\num i my approach is to try to go for the\nthroat\num with alignment\num\nand it's the approach that i would\nrecommend\nto others but i think they're doing very\nimportant work\num and\nit's hard for me to i mean you know i\nthink we need lots of different\napproaches in ai safety\num so at the margin i think i would\nrecommend for more people to just\ntry to be going for the throat rather\nthan\nanalyze the intricacies of agency um\nbecause there might be ways of getting\nto the finish line\nwithout ever understanding\nall the possible solutions in decision\ntheory or\nor things like that\nokay so i think that's uh\nall we can ask for you to for today\nthank you michael very much for\nyour time and your presentation and your\nanswers it's been a pleasure to\nhear and i hope uh we can also invite\nyou back\nnext time you you have a yeah thank you\nso much for having me\nso you guys thank you thanks\nall right and then i have a uh just uh\nsome bookkeeping for the next session in\nthe ascc reading group\nwe will be uh discussing roman\ngambolsky's\nai safety skepticism and that will be in\ntwo weeks\nyep and i guess that's all we have that\nwas fun\nyep there are some great questions a\ngreat discussion\nthank you thanks very much", "date_published": "2021-05-27T21:48:06Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "9d26e2ccf48b7e0570e1c5d95417417b", "title": "241 Finetuned Language Models are Zero shot Learners", "url": "https://www.youtube.com/watch?v=3HcVqQdmpu8", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 241\nin the aisafety.com reading group\ntonight we will be discussing fine-tuned\nlanguage models are serial shot learners\nby jason way and others\nthis is some research done by a team in\ngoogle research\nwith jason way as the primary author and\nmartin bosma as the person who conceived\nthe idea and implemented the first\nversion of of the actual model\nand there are a number of other\nprimary authors and a list of secondary\nauthors\nit's a pre-print\nfrom september\nthey describe it as\nfine-tuning language models on a\ncollection of data it's described by an\ninstruction substantially boosts serial\nshot performance on unseen tasks\nso let's talk zoom out a bit first and\ndiscuss what is the relationship between\nthis and ai safety because the uh uh the\nconnection is not entirely obvious\num and they don't write anything about\nai safety almost\num so we had the uh language models our\nfew shot learners the gt3 paper uh a\nlong time ago uh where uh\nwhen we presented this uh i presented\nthis uh as almost two years ago i made a\nuh comment on that and that is can you\nfine-tune gt3 to be more aligned\nuh i don't think that was very original\na number of other people almost went in\nmust have bought the same thing um but i\nuh but a lot of people there are very\nfew people uh actually pursued this\num some people have started to do that\nover the past year or something like\nthat but um there's something more new\nand aligned of course is the property\nthat we are interested in and there are\na number of properties that a language\nmodel can have or can uh can't have and\nit's not really obvious a priori what\nproperties it will have an example of a\nproperty would be like does it is it\nmathematically precise in its uh uh is\nit logical or is it more poetic and if\nyou look at\nthe actual dpt3 it's certainly more\nbasic than precise\nso a um\na way to\nsplit up alignment in three parts is is\nit honest is it uh harmless and\nis it helpful so those are three\nproperties that a language model could\nhave\nand it's worthwhile to investigate so\ncan be honest well\nhonestly seem to be a\nproperty that doesn't um\nmap well into language models like there\nis some work being done on truthful qa\num some ways being done but it's uh\ncertainly not obvious and you see\nsometimes uh that as the model gets\nsmarter it is more likely to tell you\nthat uh breaking a mirror results in\nseven years of bad luck\nharmless is harm a concept that is\nnatural for l for language models not\nreally uh\nnot really uh there is some work being\ndone for instance by uh redwood research\num to try to see whether it is in fact\npossible to make a language model uh\nfine-tuned in a way so it doesn't\num\nso it doesn't uh talk about violence but\nit's not obvious can you make it helpful\nwell that's really what this paper is\nabout\ninstruction following is quite close to\nhelpful and\num\nso when i'm not entirely happy about\nthis uh uh\nseeing alignment as a combination of\nthese three uh features i'm not sure\nit's a good model i think it's um yeah\nit's either i think it's populist genres\nframing and might also be\nthe uh\nuh i'm a days i'm i need to look that up\ni think\num because i feel\non reflection\nhelpful seems like the kind of thing\nthat\npeople care so much about that it's\ngoing to be done by default so uh i\nthink even if it was not very helpful\nthere would be enough optimization\npressure put in that direction that it\nprobably would end up being helpful or\notherwise it wouldn't be deployed as\nmuch\nright so let's\ngo back to language models and their\nlimitations we've seen obviously that\nif you take a\nlanguage model and just deploy it\nwithout any other work then it's not\nthat good compared to how much it costs\num\nand\nit needs some kind of hints about what\nyou'll be doing that's sometimes called\nfusion learning learning so it needs a\nfew examples and it's also something\nthat doesn't really happen until the\nlanguage model is quite large\num we also speculate that this could be\nbecause the thing that it's been trained\nwith and um the problem formats\nin the uh in the tests are quite\ndifferent it's unlikely to see precise\nkind of way of talking\nin the uh\n45 db that um\nthat ut3 is uh is trained with\nso you can see here in um with gt3 you\nhave pre-trained language model and then\nif you want to do inference on some kind\nof task then uh you will give it a few\num a few examples um and you can try to\nchange the prompt in different funny\ninteresting ways prompted engineering\nsounds better than prompt hacking but\nthat's basically what you can do or at\nleast the way it's written here uh i\nthink the authors almost certainly know\nit but gt3 certainly can be fine-tuned\nso you can even though they say this is\nlike the gpt3 way you can't do more\nthings with ut3\nand the alternative they're saying is\nthat um you can pre-train and then\nfine-tune um\nwhere you\nperhaps somewhat laboriously uh find\nsome examples of uh maybe like 100 or\neven more as many as you can get um to\nfine-tune the model on these\nand then you get a specialized model\nthat is substantially better at one task\nand\nthe reason why\nwe feel\nthis helps is that\nit feels very often like\nthe models in some subjective way it\nseems clear that the language model\nitself it knows the answer to the\nquestion that we are given but it's just\nwe need to either prompt or fine-tune in\norder to make the model want to give us\nthis kind of\nto follow our instructions basically\nlet's talk about instruction tuning the\nnew um\nkind of fine tuning that the authors are\nimplementing proposing and implementing\nhere so they take\n62 natural language\nproblems\nand\nthey chase those into natural language\nso uh we'll get back to how to change a\nnatural language probably into a natural\ninternational language\nand then you get some natural language\nthat you can then fight your mind\nso that gives you a different option\nwhere you um have a pre-trained language\nmodel and then you fine-tune it on um a\nlot of tasks and then that makes it of\ncourse better at the task you fine-tuned\nit on but also at following other kinds\nof instructions\nand they call this the fine-tuned\nlanguage net\num\nas uh somewhat of a strange name in my\nopinion uh like in there as far as i can\ntell there's nothing more in it like\nthan the other kind of transformers that\nwe're seeing um so uh and i think\nfine tune is not a that descriptive name\nit would be more interesting to see\nlike it's um\ninstruction following it would be my\nsuggestion for a name\num\nand so they show that this does indeed\nimprove zeroshot performance very much\nand also that this is something that\nthat only happens with large models\nuh one important thing here uh that\nshould be mentioned is that open ai has\nan\nuh a version of gt3 that does actually\nkind of the same thing\nthey haven't published how they've done\nthat um\nbut\nso this is the best we have so to\nunderstand that part\nright how do you change these uh\nstandard benchmarks into natural\nlanguage\num so here is an example of one that\ncontains a structured problem that\ncontains a premise and a hypothesis and\nthen some kind of target where you have\nsome options\nand then this is cheese international\nlanguage by stating first the premise\nand then based on this above can you\nconclude this and then you have these\noptions below\nthey have 10 templates for each uh\nproblem um\nand they\nup to three of these templates change\nthe problem uh dramatically so for\ninstance instead of trying to see if the\nhypothesis follows from the premise they\nare asking to translate the premise into\nfringe\nand that was kind of an odd thing to do\ni felt\nand they don't really explain why except\ni found it actually in one of the\nappendix hidden that they did this but\nit turned out not really to matter and\nfine-tuning is\nunfortunately uh an art form at this\npoint um we\nyou try something\nand then sometimes it doesn't work\nthey have a classification scheme um i'm\na bit uh there i that's written here but\nit's uh like\nuh it's presented but it's quite unclear\nfrom the text whether this is something\nnovel or how is\nuh how do you normally do classification\nin these language models i think\nactually that's the same way that gt3\ndoes but\nuh it's not presented as such i'm unsure\nwhether this is actually something new\nthe data sets are clustered in a rather\ninteresting way that's in a way that's\nvery essential to this word\nso here are\n62\nproblems that are\nfirst divided according to color\nin\ngenerative here and\nunderstanding natural language\nunderstanding and then it's following uh\nit's\ndivided into further clusters\num some of this like there's a reading\ncomprehension a common sense and a\nreading comprehension with common sense\nyou know um\nthey admit this is somewhat of an art\nform how do you put this into clusters\nand i actually think this would be\nreally interesting to um to see some um\nrigorous work being done with of course\nuh in ai clustering things is something\nthat is very very well known there are\nstandard ways to do that and would\nactually be really interesting to see um\nlike i can make predictions like\nsomething like sentiments\num might not be something that is\nnatural for language models and i kind\nof predict that if you try to to cluster\nthis to see how much how close these are\nto each other um i i'm not sure this\nwould look at all like this\num and also it should be said that this\nthe the uh\nthe metrics they uh using to say how\ngood is their um\nis their model performing depends\nstrongly on how many clusters they have\nthan the exact clustering structure\ni don't think the authors\nquite admit how important this\nclustering actually is\nuh they make some uh of course when you\ndo machine learning you need to to split\ninto uh like training validation and\ntest and they do that in a\nsomewhat interesting way because\nobviously uh they have taken literally\nall the uh the standard benchmarks they\ncould get their hands on and so\nthe problem is if you're fine-tuning\nthese then\nwhat do you bid smart and the way they\ndo that is to just leave one cluster out\nand then the fine tune on the rest and\nsee how they perform on that cluster\nso that means they need to do the um the\ntesting\nbut uh oh well this thing is not that\nexpensive\nand they say that actually these\nclusters um they\nthey didn't follow those precisely um\nsometimes there were some that were\nstill relevant even if they are outside\nthe cluster and i think in that case you\nshould just have made the clusters\nbigger really that would accomplish the\nsame thing as fast i can tell\num so how did they train this uh well\nthey they start by describing the\nlanguage model that they use um and to\nmy mind it seems like gt3 obviously gt3\nis the model that i know best and google\nnews some other models that i'm less\nfamiliar with and\nthey don't describe this model in relay\nthe base model in relation to any other\nmodels so um i would really like to know\nlike what did they do different from qc3\nlike there's one thing that's trivially\nobvious is that um\nthey are using less parameters 137\ninstead of 175 billion um\nbut um\nit also looks like their model they're\ndoing something really something some uh\nit looks to me like some kind of pre\npre-training um with uh on just getting\nthe model to understand sentences um but\ni i can't tell for certain whether it's\nuh how new and original this is um and\nanother thing that makes this somewhat\nannoying is that they have this model\nthat is kind of like tv3 but they don't\ncompare it head to head with gt3 in the\nunfine-tuned version they do in the\nfine-tuned version and then they can see\nit's better but it would actually be\ninteresting to see like maybe all of the\nsmall tweaks that they've made and of\ncourse the state of the art has advanced\nover the past two years so it's\nperfectly possible that the base\nlanguage model outperforms dbg3 we just\ndon't know that because they don't\nreally\nreport this result\nthey also have a description of their\nfine-tuning procedure i won't go into\ndetails it looks kind of normal i think\nthey use some kind of packing um\nscheme that i haven't seen before and\nit's again one kill to me is this\nactually original or is this uh like\ni don't think people do that in the when\ngt3 was first created but is that like\nhow everybody does that now i don't know\nand of course the length of the bossy\ninput and the\nthe output is set uh\nto uh yeah\nto these lengths um and this is\nsomething that i feel is very important\nbecause here for practical purposes this\nmatters a great deal\nand describe how long time uh the fine\ntuning takes uh and yeah 60 hours it's\nnot\nnot that much and of course tpuv version\n3 that's like so last year right now\neverything is measured in extra flops\nand i can't even remember how much how\nmany an extra flop is\num\nso the results here you can see the\nresults as presented like we have keep\nt3\nwith zero shots uh g3 with flu shot and\nthe new flam model\nand in general very often it um it\noutperforms\nand\nit's certainly better than the zero shot\ngt3 in\n20 of the 25 tasks\nand\nalso even better than the uh the future\nperformance of gpg3 often\nand\nuh is this instruction tuning is a\nsubstantial improvement in general\num\nand but it kind of depends on precisely\nwhat task some tasks are um\nuh almost instructions in themselves\nlike the uh the classical example of a\ntask that is uh it would be continue\nthis sentence that's something where in\ngiving instructions to gbt3 doesn't\nreally make sense because it's what it\ndoes it by default gpg3 just continues\nthe sentence and if you have a uh\nsomething\nwhere the task is to continue its\nsentence you don't need instructions it\njust does it and and some of them um\nare very much unlike that um\nso um\nyeah the kind of uh conclusion is that\nif their instructions are redundant then\ninstruction\ndoesn't help very much but it does help\na lot in in the other case\nthey have an interesting\nidea for how to make a\nserial shot uh learning where you just\nget the problem and future learning\nwhere you have like a few examples and\nthat's uh by automatically generating\nthose from examples and not uh and not\nanswers we're using what they call exam\nclass\nand that's the simple idea is you just\ntake a sim\none example and then you record the\noutput and the output is probably\nquite bad because that's zero shot in\nthe first time but then you add that to\nthe prompt so now you have one\nuh so now you have one shot\nand then\nyou're getting something new from that\nand that gives you then you have two\nshot and then you continue to do that\nuntil you don't have more space in your\nprompt\num\nand that's the technique that seems to\nimprove performance substantial um\nuh and um especially if the if it's like\nsomething where there's complex output\nand\nwe also noticed that um\nthe uh standard deviation the difference\nbetween\nuh\nhow well you perform on different\nexamples is just lower so prompt\nengineering which is\nwhich will often have this effect uh is\nless important in practice this can be\ncan be automated\num\nand i think this is\nto me\nsome kind of hint\nthat the architecture of transformers\njust isn't powerful enough like uh this\nis something that you put on top of the\ntransformer architecture to to\nmake it uh\nto focus its attention on precisely the\nkind of thing\nthat you want with this particular\nprompt and seems like something that a\nsmarter uh architecture than\ntransformers would be able to do itself\nwithout having having to add this\num how does the\nperformance improve when\nyou add more of these clusters gradually\nwell here you can see a graph with how\nmany clusters you've used um they don't\nshow for serial clusters and that's a\nbit sad\nyou\nalright they kind of do here this here\nis the base model so they do in fact put\nthat on these\nthese three lines and you can see it's\nactually a bit funny that for two of the\ntasks the uh just uh fine tuning on one\ncluster decreases performance so the\nfirst time you start this instruction\ntrimming process you get slightly lower\nperformance\nbut then as you add more and more you\ncan see the uh the graph just continue\ngoing up and it seems like the they are\nthey obviously chose 62\n62 uh\ntasks because that's all they had um and\nif someone uh sits down and just writes\nsome more\nuh\nsome more tasks more benchmarks it seems\nquite likely that they would just be\nable to uh increase this curve further\nuh so it's basically bottlenecked on the\navailability of benchmarks\num\nyeah the only one that that doesn't seem\nto help immediately is sentiments\nso how does it scale uh with uh\nincreasing\nmodel size\nwell if it's something that has already\nbeen seen\nthen um\nthe performance seems to\nincrease very little as far as i can\ntell that's actually uh somewhat strange\nthat this model doesn't really seem to\nbenefit from more uh\nfor more parameters but if you're\ninstruction tuning it then it is used to\nimprove very much and but this is only\non task that is seen during instruction\nin june so that's kind of what we would\nexpect like the more um like you get a\nbit of\nan improvement as you add more examples\nthat seems very reasonable but on the\nother kinds of um\nuh\ntasks the kind of task that you haven't\nseen\nwe get a much more interesting like\nfirst you fine tune on something that is\nnot what you're trying to do\nif the model is small then that is just\nbad if you have a small model you\nshouldn't fine tune on other kinds of\ninstructions\nbut it seems\nas you can see here\nat\nthis point once the model gets above\nthis particular size\nthen\ninstruction tuning starts to help and\nstarts to help help dramatically so this\nentire exercise if you go back a couple\nof years back when the largest models\nwere\neight billion parameters back then this\napproach just wouldn't make sense in\ngeneral the model was unable to learn\nwhat instructions mean and what's\nhelping me\nand apparently once you get\nabove this level then\nthe model becomes smart enough to find\nsome kind of like this is mostly my\ninterpretation that it becomes able to\nfigure out okay we're trying to follow\ninstructions and instructions are so it\nbecomes more helpful at at this point\nonce it gets smart enough\nhere\nthat's all for today thank you and see\nyou next week bye next time", "date_published": "2022-01-14T06:25:23Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "d5d6d9ac696b091f921db83275e2fd06", "title": "AI Safety Reading Group (Session 39)", "url": "https://www.youtube.com/watch?v=08rt1-DdlNM", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to the 39th session of\nthe AICC reading group and today we are\ngoing to talk about an article called\npolitics with up streams of AI by\nRaymond Brenner I couldn't find a\npicture of Raymond brand on even though\nI'm sure it's quite a bit and but it's a\nwriter and write a website called the\nfuture prenatal with the sub time wiser\ncounsel for the smile sort of bro that's\nlike me as kind of a half and make the\nhorizon photos oh well it's a group\nblock em that call themselves new\nreactionary and something called the\ndark enlightenment enlightenment and I\ncouldn't find it really either pick that\nUncle Scrooge at the picture for a\nreactionary that's not probably a really\ngood example and I'm not really sure\nwhat this dark enlightenment really is\nso I think we should probably focus on\nthe article itself oh yeah I was\nthinking if and something to do with the\nall right so actually in the discussion\nafterwards which won't be recorded and I\nhave a some thoughts about whether this\ncan in any reasonable way they said to\nbe all right our new reactionary or have\nanything to do with that but that's\nprobably a contentious socket so I one\nincluded in the video and the main\nmetaphor of the article up streams is\nlike a river flowing from somewhere to\nsomewhere and this means that politics\nin some which dominates in terms how\nscience is made how science what science\nwhat's gold signs have and the the\nparadigms and in in a very real sense\nalso the scientific methods that I used\nand that will start sucking some\nexamples where that there's been a great\ndegree of state control over science\nthe first example is listen collision\nwhich was a scientific theory about how\nto grow agriculture that it was the gain\ndeal of political favor in the Soviet\nUnion and because it was very very wrong\nit helped back the agricultural\nproduction in the Soviet Union and a lot\nof people stopped the two other examples\nare connected Germany which were\nrealized that a lot of the physics will\nmake by were discovered by Jews and\nbecause they couldn't stand that Jews\nwould be able to invade something that\nthe Germans couldn't that one that that\nAryan people couldn't then they try to\nreinvent physics without relativity and\nall these things and that went very very\npoorly and they also had a cosmological\ntheory that was even that was decidedly\nmore crazy some guy some German had a\ndream vision where he dreamed rent that\nthe moon was made entirely of ice and\nused that to it for very very many\nthings about about the world based on\nthe limited tickets I got a skype sound\nI don't know if someone else came online\nand negated and we didn't hold on ok\nlittle quick easy not lady ok I'll go\nback so the last example is from of the\nstate having a great degree of control\nassign is from a project that the United\nStates did in trying to predict and\ncontrol Latin America called Project\nCamelot and the later política which was\nmore predicting and less controlling and\nthe reason why this is interesting is\nthis is a military project where that an\nextremely Pele was it called a hierarchy\nwhere the\nlet the general and chop from the army\nrepresenting the state and then the\nscientists are explicitly below that\ntaking orders from the from the military\ncommand both of these projects were very\nsmall and ultimately quite insignificant\nso the power relations that most AI\nsafety really think about is with some\nresearchers who are trying to control an\nAI and the point of this article is that\nthe researchers are not just controlling\nthe AI as a state and on top of that is\nalso controlling with researchers and\nthrough those Dai and how can the state\ncontrol well in two ways they can be\ndeliberate control obviously if it's a\nstate funded project that will be very\nvery real control if there's a promising\nprivate project it would be a national\nlife and the state has a very clear\nmandate from the people to to control\ninteresting technology control\ntechnology with security implications\nand the state will almost certainly the\nremand Brennan doesn't really give an\noccupied process in his analysis it is\nclear that the state will participate\nhas decided to participate in an arms\nrace and so in this way the it's\nimpossible to oh it's very difficult to\nforesee and AI projects that will not be\nstate a I at least in some in some sense\nthe other big problem is that could be\nmisalignment and the researchers say we\nare building this project that will do\nthis in this and the blue cards here\nsomething else it's also possible that\nstates are not unified and so one\ndepartment has this idea about what to\ndo with strong AI and another has some\nother things about this and states are\nreally made of a can be seen as made up\nof particular sections with Vice control\nand finally a the state the people\nweb the colonists they might simply must\nunderstand what what AI really is and\nwhat is capable of so from this is\nconcluded that AI researchers need to\nstudy how this power relation works and\nin order to account for the LA I state\nconsequences so then the question that\nis often raises whether math can save a\nI and methods of course not mathematic\nlike this but something like if you're\nbuilding an AI that extrapolates human\nvalues last time we talked about Iliad\nKowski here on the right career\nextrapolated position which I found this\nwonderful image this is the sum of\nhappiness so I think this is a really a\nreal nice picture and and and the\nquestion is whether this will be\nimplemented and Richard Brennan is\nnegative about this because the state\nwill have some kind of political\npriorities that will supersede and\noverride even if Alicia Elliott asking\non Mira was a close to being able to\nimplement and AI would create extra\nbleep addition they would be\nnationalized and the mathematical even\nif it's a very beautiful and simple and\nyou could call a career an extra belated\nrelation even if that was completely\nperfect then the state might still be\nvery very able to to come in and\noverride it and and from this Richard\nBrennan concludes that a serious AI\nproject has has a big task in ensuring\nits ideological integrity that it's not\nhijacked either by insurance agents or\nexternal like the agents like the state\nand this doesn't even have to be very\nvery explicit and over like a\nnationalization is very very over but\nthe programs are building vai where they\nalso get allowed their ideas and way too\nsoon from the world is through through\nthe media and the books and and think\nthat the state controls in some way and\nthis means that if the AI the people\nbuilding VAR and some way intellectual\nnet knees then it would be much there\nwill be very much under control of the\nstate and this is a problem and that\nwhere which are brands will conclude\nthat the intellectual background and\nmoral education of the people who are\nbuilding this AI is very critical\nsection that's us gathered here gives\nyou two pictures garbage in garbage out\nthat if the AI researchers just if you\nmet him at the character the only we're\nlooking at Fox News or something like\nthat then they might build an AI that\nreally likes Tom Trump oh and I don't\nknow that compass in my example\nyeah doesn't yeah I said Here I actually\nmade a rather great mistake that I'm\ngoing to assault on the discussion\nbecause here I took in a contemporary\npolitical example when it was not\nnecessary and that's generally\nconsidered a bad thing I shouldn't do\nthat I'm uh sorry I shouldn't do that so\nI'm catching like the the key to this\nmoral education its historical data on\nhuman values according to Richard\nBrandon I think here we get closer to\nsome of the dark enlightenment values\nthat was very large emphasis on the\nhistorical and that's the way you\nunderstand the world ever they an\nexample of where AI could be really\nproblematic would be a sub today I if\nyou're met in the Soviet Union creates\nand artificial intelligence then how\ncould we would be expected to be\nfriendly would we expect it because it\nwas created in a mathematical intersect\nway that it would then bring about\nparadise in the world or or heaven close\nto I mean it's quite possible that\nJoseph Stalin he she said some nice\nthings about the workers internationally\nbut what his true actions were very very\nnice human aligned to be frank and and\nthis is a this required nothing you AI\nresearchers are able to in some way with\njust the styling is dialing all of them\nto build a nai and this seems like a\nvery very tall order for to figure out\nthat it's necessary to resist your\nsystolic and to actually do that so I\nthe claim is in particular that\npolitical conflicts are some of the\nthings that makes state-controlled\nreally really dangerous and problematic\nthings like you have an AI China have an\nAI and these two are against each other\nit doesn't American AI and a Chinese a I\ncouldn't making them in a very\nantagonistic relationship and this this\ncould cause an arms race and this caused\nsome values and a behavior that is very\nvery non human friendly so this kind of\nthing has happened at some well I\ndifficulty with a brand a number of\nreasonably comparable things have\nhappened in a previous time dimensions\nyeah the Napoleonic Wars the world wars\ncommunism Manhattan Project the cuban\nmissile crisis at example where this\nthat AI researchers really really need\nto learn from and and of course this is\na different situation because technology\ndoesn't develop or yourself yes as\nRichard random right and this means that\nonce we get to have strong AI and this\nit will not be a directly comparable\nsituation to versus the world wars but\nit is to human agents and institutions\nthat have a crucial role in this and it\nis a mistake to think that is just an\nalgorithmic problem that is a mistake to\nthink if we get the details of Korean\nextra belated position correct then\nwe're home free the problem is that this\nis not might not might very well not be\nvai that it's dope it will be a\npolitically neither okay people so we\nhave five moderately concrete\nrecommendations that I this is kind of a\nsummary study history of state influence\nand technology and particular these\nexamples that he\nmentioned before as well three Manhattan\nprojects the Manhattan Project is of\ncourse also and a hugely Rachel case and\nlearn a lot from history in particular\nabout human values and morality and\nfigure out if you have an AI project\nfigure out what are the political\nthreats and defend against these both\ninternally and externally and study\npolitical history behind a technological\ndevelopment that's the article thank you\nfor listening I'll now stop the\nrecording and we'll go to the discussion\nfor the reason for the business on\nYouTube see you next week", "date_published": "2017-03-15T21:00:32Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "c26d2804362b6df5f8d52f2c7106a8ce", "title": "AI Safety Reading Group (Session 40)", "url": "https://www.youtube.com/watch?v=qhcBQrMfB8o", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to the 40s session of\nthe AI safety reading group and today we\nhave read an article called racing\nthrough precipice a model of artificial\nintelligence development by Joe\nArmstrong Nick Bostrom and cars children\nthey are all working at the future of\nhumanity institute there whereas in 2013\nwhen the article was written nick\nbostrom is the director and your\narmstrong is a AI researcher and culture\nminh a Research Association so the topic\nat hand is AI arms race which is of\ncourse a subject that allowed people\ncare about because it kind of looks like\nwe are headed towards an EAJA arms race\nthe last week we read an article which\nstrongly implied that they also believed\nthat we were heading towards an AI arms\nrace and of course arms races have\nhadn't before in history I've taken here\nin a teacher of HMS dreadnought which is\nyeah Chris will probably feel very proud\nby seeing this feature enables the arms\nrace which devastated Britain's economy\nand Germany's economy at the turn of the\ncentury and that's because arms races\nare in a very real sense a zero-sum game\nthere is there might be a winner and a\nloser but there's nothing really gained\nfrom an arms race in this case we are\ntalking about in particular the effects\nof an arms race on safety precautions so\nwe're going to assume that safety\nprecautions are possible but you can\nchoose what level of safety you want a\nbig point about an arms race is that\nthere's a first-past-the-post latch\nadvantage to being to reaching the goal\nfirst and that's why you have this arms\nrace and there's some degree of enmity\nbetween the teams meaning that the\nteam's hate each other\nand this influences what level of safety\nthey are they're prepared to have to\naccept does an old setting saying just\nbefore just after the Second World War\nwhich was better red than dead which was\nused as an argument for by the peace\nmovement and which is which shows that\nthere is that it's better that the\nCommunists wind cold war and that we all\ndie and many people reacted with the\nopposite and much more known slogan\nbetter dead than red which represents\nfull enmity saying it's better that we\ndie then that the company is swim so\nthis has in the paper by bass drum and\nothers been formalized into a setting\nwhere they are a number of teams here is\nrepresented by the letter in each team\nhas there is a capability which is each\nteam how good are they at making a super\nintelligence and it's represented by a\nnumber from one from from 0 to M you\nhere and each team choose a safety level\nwhich is from 0 to 1 of course you can\nrescale accordingly and and this allows\nit seem to get a score now I hit myself\nagain with which is called C minus s\nwhere the higher your capability is the\nthe better you are at building it and\nthe most safety precautions you have the\nvoice it becomes and after you've either\nwon or lost this this AI arms race that\nmight be a disaster and AI catastrophe\nand that is the probability s that\nyou've chosen so it's possible for teams\nto say we want to be one hundred percent\nsure that this doesn't make a\ncatastrophe and it's also possible to\nsay we will a fifty percent or even one\npercent chance of success is enough for\nand that's represented where catastrophe\ngives zero utility and success gives one\nutility in in this model further there's\na level of images like I just said where\na number such as a one-fourth means that\nan AI could just fit is four times as\nbad as solution so that won't mean\nbetter read than death in this case when\nenergy of 0 means you don't care about\nwho wins enmity of one means that it's\njust better better dead than red really\nand so the world then the number mu up\nhere is hugely important this means how\nhard is it to build a super intelligence\nif it's high then the capability matters\nmore and if it's so then safety matters\nmore in the sensor if you skip on safety\nthen you increase your chance of winning\na lot so you can see new is it\nrepresents in some ways how much safety\nmatters and compared to come to keep a\nbuilding the next part in the I'm\nhearing myself again maybe through your\negg and the next tile is sorry it's\nreally difficult to talk and its\ninformation levels so the no information\nlevel that's where no one has any idea\nabout the other teams and they don't\nknow what their own capability is and\nI'm saying this is roughly our where we\nare right now nobody has any clue about\nhow hard it is to build a super\nintelligence and they don't know how\nclose others are either later we will be\nat a point where some people can\nestimate maybe when we roughly how far\nthey are from building a super\nintelligence and at some point it will\nbe published not just to each team how\nclose they are to building a super\nintelligence but also how close are\ntheir competitors so these three\ninformation levels are modeled in the\nfirst case when no one has any clue\nthis is a post a fully symmetric\nsituation every team will do exactly the\nsame because none of them really know\nanything so everyone will choose the\nsame safety level and if there are five\nteams then each of 150 chance of winning\nand what safety level they should choose\ndepends upon how much they hate each\nother the enmity at times the number of\ncompeting teams and and if mu is smaller\nthan this enmity times the number of\nteams then the safety levels should be\nreduced so that is the optimal strategy\nto reduce your safety level if you Eva\nif E is relatively high or the cables is\nrelatively low or the number of teams is\nreasonably high so this is a and this is\nsomething that can be grabbed and and we\nwill get to the press to get some\nvisualization of this but important\nthing is if there's no information and\nthe capabilities Madame au and enmity x\nnumber of teams the correct option is to\nchoose one hundred percent safety the\nnext case is where everybody knows how\ngood they they are themselves but they\ndon't know how good the other teams are\nso you know your capability we write X\nhere and you choose the safety level f\nof X based on that of course this is\nsymmetric right no one knows anything\nabout the others so each team will\nchoose the same strategy and\n[Applause]\nthe the question is what are the teams\nactually trying to obtain and they're\ntrying to not just maximize the chance\nthat they are that they are winning but\nalso the chance that they are winning\nand there's no AI catastrophe afterwards\nso this is the total utility from both\nwailing and and avoiding a I catastrophe\nand if the slow enmity you add the risk\nthat other teams win so that utility so\nthat is somewhat more complex thing that\nsaid that each team is trying to go to\ndo and it is an answer the question okay\nand so we see that the team with the\nhighest capability always wins in this\ncase and that's because even if you add\nmore safety you would never if you\nbecome a slightly smart slightly more\ncapable that it's always a bad strategy\nto add even more safety to compensate\nfor that that's x minus s + x that's an\nincreasing function so it will always be\nthe team with the highest expert wins\nand what strategy you should follow\ndepends on whether you can build is\nhigher than enmity x number of teams\nminus the energy plus 1 and this is\nproven in the paper and i won't go\nthrough the table but just say if your\nteam is more capable you choose one\nhundred safety if you choose if your\nteam is less capable you will reduce the\nsafety bye-bye your pulse is divided by\nthis number here and this is something\nthat gives a the total risk of\ncatastrophe in this case is something\nthat can be calculated as a moderately\nintimidating integral it's for mad\nmagician it probably doesn't look so so\nscary but it's reasonably it's something\nthat can be calculated in grad and I\nthink that's the important part here for\npublic information we of course no\nlonger a NASA metrics\ncase because the the team with the\nhighest capability they would have one\nstrategy and teams with a lower\ncapability they will have a different\nstrategy and in this case they choose a\nsafety level determined by the\ndifference between the capability of the\ntop team and the second ranked team\nthat's the key thing that everybody will\nbe looking at so imagine there are like\nthe Chinese AI and the American AI and\nthe American AI appears to be closer to\nbecoming a superintelligence compared to\nthe Chinese then the difference in the\ncapability between the Chinese team and\nthe American team becomes the crucial\nfactor and here we are looking for\nsomething called a Nash equilibrium and\nNash equilibrium is when no players\nanything to gain by chain changing his\nstrategy in isolation mean that there\nmight be something better is to change\nchange in Australia at the same time but\nin this case if we restrict ourselves to\nluton at Nash equilibrium where then the\nstrategy that is best for the top team\nis to choose a safety level that is the\ndifference divided by the energy in this\ncase the second thing is shouldn't us\ndecrease it's a scene or to be able to\ncompete and the the risk of an a a\ncatastrophe can be calculated is almost\nthe same age grow so how does this look\nthere are four actually five grass in\nthe paper but let's have a look at this\ngraph over here we have to change and on\nthe y-axis the risk of a catastrophe and\nthat's what we really care about here\nand on the horizontal axis we have the\nrelative importance of capability so\nwhat we can see very very clearly is the\nmore capability\nmatters the lower risk of a catastrophe\nbecause you can see if capability\ndoesn't matter we'll go almost one\nhundred percent risk of catastrophe and\nand these capabilities matters very much\nwe will get out decently dope risk and\nand the I can just explain the lowest\nline the cooling mark line that if\nthere's no information the dashed line\nhere is if there's private information\nand the brown line i think is if there\nis public information so rather than\ngoing through these three graphs i will\nlook at one variable at a time and as\nwell back and forth so i'll go forward\nto a new how hard the problem is and\nobviously as you can see from all these\ngraphs the further we get to the right\nthe lower the risk of a catastrophe so\nin a way we really hope that the problem\nis hard that's not really something we\ncan you can't influences we can't take\nany action that will make the problem of\ncreating a super insulting sahara and we\ncan the only thing that the article\nsuggests is we try to avoid easy\nsolutions that might work what they call\na moonshot that's probably a bad idea to\ndo so that's the first thing the next is\nenmity and here you have in the first to\npress you have one hundred percent\nenmity which is better better dead than\nred and in second to we have fifty\npercent image where it's true times\nbetter to be read than to be dead and as\nyou can see the graphs here compared to\nthese have a substantially lower risk of\ncatastrophe so quite obviously reducing\nimages makes an AI arms race much less\ndangerous\nand it should also be mentioned that\nthese graphs with the public information\nassumes a Nash equilibrium and there's\ntwo teams they might be able to\ncoordinate in some way and one of the\nways they could coordinate is by saying\nboth of them we will not make an AI to\nkill everyone we will if we make a super\nintelligence will use to implement\nsomething like career extrapolated\nposition or something like that which\nwill dramatically reduce imaging if they\nbelieve each other will do that energy\nwill be reduced serum and when n which\nis which used all the way to zero the\nrisk of of AI which goes down to zero in\nthis model so in this case and I think\nthat's a really important point that\nthey don't really go into if we trust\neach other enough the risk goes down to\nzero if we I think yes and i like that\npoint but but it's not really written in\nthe in the article the next question is\nthe information where you have these\nthree graphs no information with the\nlowest risk in in general private\ninformation with somewhat higher and and\nusually you have public information at\nthe top but there are a few cases here\nwhere if you have low energy and and the\nimportance of capability there are some\ncases where it's slightly better to have\npublic information but in general\nprivate and no information is better the\nlast variable is the number of teams and\nhere you can you compare the it's not so\neasy to compare in this graph whether\nthis point is higher than this point\nhere that in the left it's two teams and\nin the right side seams and hearing\nmyself maybe through you Chris\nnothing and maybe someone else and if\nyou could yeah no not that great so it's\nnot so easy to see whether the rightmost\ngraphs are higher than the leftmost\ngraphs but if you eyeball you can see\nmore teams is generally worse and it's\nmore teams of worse it means that if we\ncan if we can convince seems to join up\ntogether to merge then that will reduce\nthe risk of AI of AI catastrophe and\nthere are a few cases where it does\nmuscles we're more teams are X receiver\nbut that is a color and of a fringe\neffect and it's really hard to split up\na team in any meaningful way without\ncapability from one team going to the\nother team I mean then they would have\nto forget some part of us and that's not\nproblem not really fitting that was the\narticle\nand it's partly intuitive because\nthere's how intuitively how do these\ngrab and the results in this paper match\nup to our intuition and I would say a\nreasonable on three out of four\nparameters and if we say how hard the\nproblem is and one of the most negative\non how hard the problem is is aaliyah so\nyou'd cowskin was that he believes that\nsafety to do this problem this safety\nsafely compared to doing it unsafely is\na it's five times as hard that means\nthat mu is 0.2 and hearing myself alan\nhas gone okay and but but we really hope\nthis is not the case because it's five\ntimes as hard to do safe as it is to do\nunsafe where we will all most likely be\nin a situation where the best strategy\nis to choose as little safety as\nabsolutely possible and I'm still\nhearing myself sorry if that's any way\nanyone could move the microphones away\nfrom this data I would appreciate it\nit's my microphone low\nthat might also be a solution yes so I\nthink so forward for the problem with\nhow hard it is that the fact that we\nhope it's really hard compared to how\nmuch safety matters we hope that safety\nis a tiny tiny overhead to the so how\nhard it is to do to make those are super\nintelligence but we don't really know so\nI think this lines up very well with our\nintuition of course enmity we have the\nintuition that in which is best because\nwe are social creatures so we we really\nhope from a moral point of view that\nenmity is bad and it does turn out in\nthis model that in which is bad and this\nalso seems like from historical arms\nraces like a very obvious result that\nalliance fully with our intuition that\nenmity is best it is if we can work to\nmake Chinese AI the project not be so\nafraid of the American AI project that\nwon't be a really good thing the last if\nwe take a number of teams and I also\nfeel it's a reasonably intuitive result\nthat if you have a lot of teams then\neach team will feel under more under\nmore pressure than the other teams\nbecause if you if you really hate the\nothers and there are hundreds of teams\nthen you will feel that's a very low\nchange that I will be able to succeed so\nthis means that if the number of teams\njust skyrockets you will feel very very\non you will feel like you are in danger\nof loops and you'll be very tempted to\nlower the safety level of your project\nso I also feel that the number of teams\nthat that is bad to have many teams and\nit's good as it seems merge that lines\nup with intuition recently well the last\nquestion is information which is not we\ndoes not line up with insulation\nand that no information is always best I\nthink a bass drum has later written\nquite a bit about and that actually when\nyou when you really think about it it\ndoes make quite a bit of options just\ntalk about a veil of ignorance that we\nwill that would be listed eventually\nwhen we get closer to building an actual\nsuper intelligence and people will make\ndifferent choices now that we don't know\nwho's going to win ten people will be\nvery pro-social and say oh we don't know\nwho wins so we should all say if we eat\nthat the winner should do Korean\ntextable a depletion or at least not\ntake over the world do something nice\nfor the AI that's what everybody is\ngoing to say now that there's no\ninformation but later when it looks like\nthey are winning there will be much more\nreluctance to not take over the world so\nI think if you if you think about it the\nanalyst analysis probably makes sense\nand and what I think question right\nabout this is everything is written\nabout this whether this whale of\nignorance is a great idea is written\nafter this one so that's a that's at\nleast potentially it changed that he he\nbuilt this model saw this was an\nunexpected result and thought long and\nhard about it and actually changed his\nopinion about openness in AI based on\nthis model that was a long answer what's\nthat are acceptable okay then I will\nstop to recall them thank you for\nwatching I will stop now see you next\nweek", "date_published": "2017-03-22T21:44:13Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "bb08a18a52d52b50b895db9e80cc91e5", "title": "AI Safety Reading Group (Session 41)", "url": "https://www.youtube.com/watch?v=vXNi4L5PH0A", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "so hello and welcome to the followed for\nthe first session in the reading group\nwhere today we're going to talk about an\narticle called using machine learning to\naddress a I risk which is written by\nJessica Taylor from Miriam so this is\njessica taylor work for mirror in San\nFrancisco as a research fellow and this\nblog post that we're talking about is a\nwritten form of a talk she's given at\nthe effective altruists global the\nconference and she is working at a\nsubgroup in miri which has the agenda\ncalled alignment for advanced machine\nlearning systems and this this article\nis a survey of the kind of technical\nproblems that falls under this agenda so\nwe've got the the goal of this research\nagenda in very general terms is to make\nAI systems safe even at very high\ncapability levels under a number of\nassumptions and these assumptions are\nkey to understanding this agenda the\nfirst is that the current a AI research\nis generally on a track that will reach\nsuper intelligence so is the deep\nlearning we're doing now will eventually\nresult in a super intelligence and it\nmight do relatively soon meaning that we\nare talking that maybe a decade or two\nor something like this the third is that\nit's possible to build a task AI and\nthat's a good idea and something we can\nlearn from and she talks a bit about\nthese assumptions in particular that\nthese assumptions might not be true it\nmight not be using might not even be our\nbest guess but they are useful because\nif it turns out that Adi is developed\nrelatively soon\nthen then this research will most likely\nbe very valuable and if it's possible to\nsay something about future AI systems\nunder the assumption that they look like\nours now then that's an avenue of\nresearch like that will be very valuable\nif it turns out to be true even if it's\nnot the most likely i'm hearing myself\nin the no problem and i made a point\nhere that this is called the street\nlight effect so if you might have seen\nthis cartoon before or heard of our\nsomething similar and about a man\ndropping his wallet in in the pack but\nsearching for it in under the light\nbecause that's the only place he can\nfind it and jessica taylor of course is\naware of this and she argues that there\nis a chance that the world is where the\nlight is but this is something actually\nsomething i will get back to when i give\nmy thoughts after this video so the\nfirst is what is a task i rated AI it's\nan agent that has a semi concrete goal\ncould be curing cancer could be earning\nmoney or doing charity effective\naltruism this kind of thing but it's not\nthis huge coherent extrapolated volition\nwhere we try to figure out what you\nwould really really want on reflection\nand doing that and of course that's a\nparticular task she doesn't say\nexplicitly but one particular task that\nwe really really would like this taski I\nto do is to solve the control problem\nand that means if we start to build an\nAI if it says how can we control you vai\nand then after that once we know how to\ncontrol it then hopefully we can\nbootstrap that into making a GIS with\nmore\nfar reaching goals and this task a I\nshould have humans in the loop so to say\nboth in figuring out what to do how to\ndo it and maybe even in doing the goal\nitself and of course the hope of this\nresearch is that this will not turn out\nto be really really hard compared to\nbuilding just a general ABI in the last\nsession we talked about the model\nraising to Priscus and from there from\nthat model that was affected mu with how\nhard is it to do it safely compared to\nhow hard is it to build the\nsuperintelligence unsafely and lets the\nfactum you that we hope is high in this\ncase now there are six problems with\nthis the falls under this research\nagenda the first problem is that actions\nare hard to evaluate and of course\nactions done by children can be\nevaluated reasonably easily problems\nsomething done by your peer is it can be\nvery hard to evaluate and something done\nby something that someone who's strictly\nsmarter than you can be really really\nhard to evaluate they can manipulate you\ncoerce you cheat or do covert actions of\nthings like that which once they are\ndone by a super intelligence is really\nreally hard to to fight against and\nprevent yeah ideally we want the AI to\noutput an action saying this is what\nI'll do this task EDI and give a proof\nor some kind of justification and if the\nproof is too hard for us then help us\nevaluate the proof and of course this is\nkind of course lead to infinite regress\nwhere we hope it bottoms out in\nsomething we can really understand but\nwe hope that this is something in\ncomputer science something called the NP\nthe problems in the complexity class NP\nwhich is something that has been studied\nvery very much and in this case it's\nsomething that Paul Chris channel has\nabout in the in an article called under\nthe heading informed oversight the\nsecond problem is test examples can PM\nbiggest you imagine a human sees a lot\nof cat and figures a generalization of\nwhat is what a cat is and then when it\ngets to a new situation humans\ngeneralize in a certain way and it's by\nno means given that an artificial\nintelligence will do that we want the a\nis that to say if it's if it is a big\nand biggest we want to say I am\nambiguous and this is in practice a real\nbig problem for artificial intelligence\nas it is right now because they are\nknown to be extremely overconfident in\npractice so if an AI is wrong it's that\nusually says that is very very certain\nin it's wrong conclusion and this is\ncalled the inductive and big oet\ndetection and is something that after\nafter presenting these six examples\nJessica Taylor goes a bit more into\ndetails about this thought third problem\nis that it's really difficult to imitate\nhuman behavior of course many of you\nhave heard about the Turing test which\nis the problem where a and AI is\nsupposed to pretend it's a human and as\na distinguisher a judge who needs to\ndecide what how humans behave and how a\nis behave and this becomes very very\ndifficult when the agent the AI is\nsmarter than and distinguishing because\nthen the the the AI the super\nintelligence will in crisis be able to\ndo everything that the distinguisher\ncannot notice this is also something for\nCristiano has written about in egg-based\nagent and mimicry meeting halfways the\nfourth problem is that specifying goals\nin the real world is really difficult\nlike you would imagine like make me a\nsandwich is\nis super super simple gold and and even\nthat turns out to be quite difficult\nbecause with reinforcement learning\nthere is a huge incentive to control the\nactual source of the of the reward and\nthat might be in the children for super\nintelligence point in their capabilities\nwhere the incentives change so they want\nto make a treacherous turn and control\nthe reward directly instead of the proxy\nthat we are looking about looking for\nand this proxy can be really dangerous\nlike if we don't specify make us a\nsandwich but make sure there is that\nthis camera is seeing a sandwich at a\nparticular point then this opens up a\nlot of dangerous opportunities for the\nAI to heck the reward this problem is\ncalled the generalizable environmental\ngoals problem here under this research\nagenda the fifth problem Island the\nnegative side effects this guy is here\nfirst time I see a picture of him is\ncalled Steve Omohundro and he theorized\nbenin super intelligence will have a\nnumber of basic AI drives it could be\nthings like the AI is trying to make a\nsandwich and believes with 99 percent\nchance it can make a sandwich but it\nalso have to factor in the probability\nthat a human will shut it down maybe\nbecause the human doesn't want a\nsandwich so the AI has an instrumental\ndrive to stop the human from shutting it\ndown and there are three headings under\nwhich this researchers and tries to\navoid these negative instrumental drives\ncalled quantifying impact mild\noptimization and non-adversarial\nadversarial a Iook without instrumental\npressures the sixth is that there might\nbe H cases this to satisfy the call\nbostrom has written about an AI but its\ntoll to make humans smile and then it\nfigures out that the edge case of making\na tiny tiny Smalling small smiling face\nit satisfies the skull and then\nproceed to trial the universe with small\nsmiling faces that's an edge case and\nthis kind of problem if you go back to\nthe sandwich problem it might be\npossible to make a really really small\nsandwich which counts as a sandwich if\nits measured by weight we might have a\nhuge sandwich it might be a toxic\nsandwich these kinda things that\ntechnically are satisfied the call but\nactually don't do it at all in\nparticular there's a problem with\nadversarial examples and I really really\nlove this example sorry I can hear\nmyself in one of you it is possible to\nmute I think it might be you moan the\nsome if you could move your microphone\nplease um it might be Victor who should\nmove it Mutis microphone yeah I think\nit's fine um ok so here we have an\nexample and I really really love this\nexample where you have a image\nclassifier which looks at the first\nimage as this I believe this is a\npendant I am 57.7 percent confident that\nthis is a panda and then you add some\ncompletely random noise it looks really\nlike a random noise if you ask they\nclarify what it is it might be an\namateur but it's totally unconfident so\nthis is just completely silly picture\nand you add it only with zero point zero\nseven percent and then you get an image\nhere which is almost exactly the same as\nthe spandan this panda is just to an\nextremely small extent distort so human\neye these two look almost exactly the\nsame as 0.7 percentage of a change but\nhere this this example has been\nconstructed so the image classify and\nouter leaves it's given a completely\ndifferent animal with ninety nine point\nthree percent confidence so this is a\nreally really dangerous case it's it\ntruly shows that\nit might be possible many of the AIS\nthat were built right now are much more\nvulnerable to being cheated in this case\nby asmara adversary then then then you'd\nthink no human would ever be cheated by\nthis no human would see a huge\ndifference between this picture and this\npicture but but a is as we build them\nnow they do think that there's a huge\ndifference between them and the yes sure\nOh\nand the the way I would see this you\nhave a plus between two images and the\nquestion is what does it mean to add two\nimages and what I imagine is that each\npixel in the left corner has a red green\nand blue value and in the middle picture\neach pixel has a red green blue value\nand then you average these two where you\ngive this one the the pendant it the\nfirst way more weight than the other and\nthat and then then you distort this\nissue extremely slightly compared to\nthis one and then you get this this\nimage I believe that is how you add\nimages\nno no\nyes I think so we go back to the problem\nthat was called inductive and divinity\nidentification where the AI is uncertain\nand it needs to be able to tell us that\nit is uncertain and maybe in practice it\nwould be good if it checks with us so if\nit's uncertain whether they're like a\ntiny smiling face should count as a\nhuman if it's uncertain about this it\nshould ask us and then we can say no\nit's not you can see our down here yeah\nthat's a graph where there's a lot of\npositive examples in the upper left\ncorner and a lot of negative examples in\nthe lower right corner and what's the\ndifference between the two there are a\nnumber of hypotheses like everything to\nthe right of this line is negative and\neverything to the left of this line is\npositive but it could also be this line\nwe don't know which of these hypotheses\nare the truth and and that's an\nalgorithm called know what it knows\nlearning quick lining and and this\noutputs a this ambiguity if it's more\nthan and as a number of percentages\nuncertain about about this and you can\nyou make some assumptions if you make a\nnumber of assumptions that are\nmoderately reasonable then then this\nactually works out quite well you need\nto have a number of hypotheses with like\nthese lines and one of them need to be\ntrue and it needs to be a model with\neither a small finit number of\ndimensions or just a finished set of\npossible points and under these\ncircumstances this really works after I\ntaped to a picture here this is Socrates\nwho knew that he did not know anything\nthat was in the I guess his most famous\nand so and this is ideally what we want\nthe AI to realize how much it doesn't\nknow and one of the ways that Jessica\nTaylor and her voice on this is using a\npatient view of the problem and in this\ncase then I patient patient statistics\nand predictions and probabilities I have\ntaken here a picture this river emplace\nwho invented base formula and he have an\nexample of how how the patient update\nprocess works that you believe at first\nthat the truth is all the way to the\nleft here that is that is your prior and\nthen you do some measurements and the\nmeasurements are this blue here you can\nsee vaguely up here that turns out to be\nwhat you measure and then after this you\nyour best guess at what the truth is is\nthis like black line this is your\nposterior and in a similar way you\nassume that there's some kind of true\nprior exactly water through prior is\nthis a good question a we it might be\nthe prior that the super intelligence\ncan find and we have some kind of prior\nin our in our candidate AI vai that we\nare building and we assume here we make\nan assumption of how good the AI we are\nbuilding is we assume that it always\nfinds a grain of truth for instance k is\ntrue it means that the truth is that if\nour prior has a certain probability then\nthe true prior has at least half as much\nif K is 2 so this means that and the\nweek if we can make some kind of found\non how good our AI is compared to the\ntruth then it's possible to find some\nsome probes and some algorithms to to\nget very very close to the truth\neven if my tix somewhat longer and that\nis what jessica is working on it should\nbe said that there are two other agendas\nat Mira are kind of a mirror to other\nday I safety agendas the first is headed\nby need for us here which is the agents\nfoundation which is much more about\ntheory and the theory of reasoning and\nthis making decisions and proving that\nthat programs satisfy particular\nproperties these kind of things and on\nthe other end of the spectrum there are\nthe concrete problems in AI safety a\npaper that has been written by dario mo\nday and a number of others and where\nwhich goes much much closer to recurrent\namount machine learning systems and find\nsome things that came with problems and\ncan be demonstrated in the current\nsystems and if a lot of progress made of\nthis hopeless that this will be relevant\nto the big problems when you get an\nactual super intelligence and this is\nonly i would say that jessica taylor is\nin the middle of these two but that\nthese three agendas can be put on a\nspectrum with nate suarez as the most\nlong-term and Terry i'm a day as the\nmost pred short-term and just tailor\nsomewhere in the middle but that's just\nmy understanding and she doesn't write\nthis anyway anyway thank you for\nwatching and i will stop the recording\nnow", "date_published": "2017-03-29T20:11:35Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "e16999b81a0930c131af6dc9470a2ace", "title": "246. Democratising Risk 3", "url": "https://www.youtube.com/watch?v=whL0OXPkvWo", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 246 in the\nairsafety.com reading group tonight we\nwill be discussing the last part of the\narticle democratizing uh ai in search of\na\nsorry i said the\nname wrong democratizing risk\nin search of a methodology to study\nexistential risk\nthis is still worthwhile done by carlos\nprima from fhi and you came from caesar\nand we'll be discussing from section\nfive and onwards uh let's cover\nthe last and we will also be\ngoing over parts of the comments and an\nea forum post accompanying this\nbut first i want to take before we\nget to the uh i want to take a quick\nstep back to the definitions because\nthere is in fact\neven though the definition uh\nuh in\nin section four there is in fact\nsomething in section five about\ndefinitions that i think is important to\nto highlight and that is on uh when we\nare\njudging a moratorium on ai and how hard\nthat would be to coordinate then it's\nnot uh it's not enough to look at this\nin isolation and see yes this seems hard\nbut we need to uh\nto compare it to our other options and\num building an aligned ai that also\nseems really hard and there are a number\nof reasons why we should expect that to\nbe very hard and perhaps look at the\nmoratoriums in a more favorable look\nbased on that\nthe first is that a aligned ai needs to\nhave a really\nhigh level of\nreliability\nand depending on precisely how you how\nbig you view the risk from online ai\nthen\nif you view it as very small then it\nneeds to be really aligned if you view\nit as we are by default doomed then uh\nit would be nice if it's reliable but\nit's\nbut halfway reliable is better than\nnothing\nanother reason why ai airlines is\nexpected to be hard is that it's well\nit's basically non-existing technology\nand that's\nin general really hard and there's no\nway around that some way it has been\ndone it doesn't look\nobviously impossible we certainly\nhaven't found any impossibility results\nbut it's clear that there is a lot of\nwork to be done\nit's something that is probably decades\naway and at least api\nis probably a decade away and\nuh the deadline for uh ai for a line ai\nwill probably obviously coincide with\nwhen will we have a agr\nuh the authors uh are arguing that it\nrequires a refined vision of the future\nin order to align ai\nand i think it certainly obviously it\nrequires a vision of the future but it's\nnot necessarily in that uh refined and\nthis is as i see it\nthere there are not that many details\nabout the future that need to come to\npass before ai alignment is um is\nvaluable\none of the things where\na refined vision of the future is\nimportant is that uh the authors claim\nthat really precise control of the\nspeech of technologies if we want to do\nthe uh uh differential technological\ndevelopment thing\num\nwe don't actually need precise control\nas far as i can see we have we need one\nbreak and one accelerator and uh agi\ndevelopment needs to hit the brake and a\nai alignment needs to hit the\naccelerator and if we overshoot from\nthat then that's perfectly fine if we\nhave aligned ai ready to assume\nso\nthere is no real requirement for precise\ncontrol\nit's a technology that's interlinked\nwith all the technologies i presume that\nthey mean agi and that is indeed true\nand requires fine tune recorder tools\nand\nthese need to be implemented worldwide\npossibly plausibly uh it depends on what\nthe alignment tax is there are also some\nvocal acts that might uh make this less\nrequired um but but i think overall the\nterms that the ideas they have that we\nyou can't say that um\njust uh moratorium is hard period you\nneed to argue that moratoriums are\nharder than aligned ai\nuh so that was actually something in\nsection five that i think belonged in\nsection four so let's actually go to the\nmain topic for today uh the risks of\nstudying existential risk\nso um\nthere are some\ntrivial low level problems like we won't\nactually solve the problem and we will\nwaste resources by going down wrong\npaths and that's not really what they're\ntalking about here though those exist\nobviously everybody agrees\nhere's a key quote the seller's pursuit\nof technological development according\nto proponents of the technology and\napproach accounts for the vast majority\nof risks over the coming centuries\nso\nthis sentence like many others in this\nuh in this paper is ambiguous ambiguous\nbecause it can be read in two ways\nthat according to the proponents of uh\nthe technology approach um\nthis accounts for the\nvast majority of risks which is totally\ntrue\num or it is it can also be read as\nthat the sort of technological\ndevelopment is according to the\nproponents of the\nutopian\napproach and this is something that is\nimplied many times that it's kind of\nlike this of course the authors can't\nreally come out directly and say boston\nwants to build a super intelligence\nbecause it's\nhe makes it reasonably clear that he\nreally doesn't want that but that is\ninsinuated many many times\nit's argued that technological maturity\nis the only way to permanently end\nexistential risk\nby people from you using the signal\nutopian approach\ni think\nthis might be true there is the long\nreflection as an example\nof\nsomething that permanently ends\nexistential risk but does not imply\ntechnological maturity it might imply\nbut uh\nit's certainly possible that we can have\nthe long reflection without\nwithout space quantization\nand\nthe others further say it's unclear why\nwe can't just\nmake technology that stops existential\nrisk\num\nwithout also having dangerous agi and\nbio weapons and all these other things\nand i think\ni think it is indeed possible to do this\nit might be possible to\ndevelop the technology to address these\nthings but the problem is that it's just\nso much that we want technology to stop\nsuper volcanoes because super volcanoes\nactually isn't the real problem the real\nproblem is how do we stop other people\nfrom even\nfrom making dangerous intentions\num\nand\nthat's the key thing so i think what the\nall this means to argue here is\nisn't really strongly at at least at\nthis point that\nthe technical utopian that the current\napproach to existing risk studies really\nmakes the risk greater except for the\noperator's case that if you think it's a\nbad approach then it overshadows other\napproaches that could do a statue risk\nstudies better\nnext is on the stomp reflex\nwhich is\ndefined as the long history of security\nthreats being used to enable draconian\nemergency powers which are subsequently\nabused\num\nso the name the stump replic reflex has\nbeen uh coined by the author uh luke\nkemp in a previous uh article i uh i'm a\ni am a bit confused about this it seems\nlike\nthe stump reflexes doesn't really\nexplain very much about what this is and\ni\nalso don't really think this is original\ni think it's something that many many\nother people have uh have noted before\nand this stump reflects um\nargues further that the greater the\nthreat the more extreme the meshes and\nparticular for human extinctions then\nthe measures could be very very extreme\nand\nthe thing we're seeing right now\nwith with ai is very much not draconian\nemergency powers\num i think what we're seeing is\npossibly the opposite uh\nin that most people and the authorities\nmight\nuh\nseem like they're totally unaware really\num i think it's worthwhile to be aware\nthat this could change in the future but\nuh but it's certainly not something that\nis here right now\nthey give one example then the power to\nstart the second the third world war is\nin the hands of uh well joe biden and uh\nputin right now\nso it's it's not under any kind of\ndemocratic control\nit seems kind of when i read the uh\nothers like this was just a bad stupid\nthing that was um that just happened to\nbe um\ni i think there are actually very good\nreasons why this might be the most\nstable way to do it uh but i agree that\nthe precedent it is creating is quite\nworrying because it would be easy to say\njust like the president has the right to\nfile the nuclear missiles the president\nalso has the right to decide whether the\nuh\nthey should pursue a national and\nsecurity\nagi crash manhattan project\nproject\nsecuritization that is a way to change\nthe discussion about an issue from like\nnormal politics to national security you\ncould almost say that this is taking it\nout of politics\nand in this case it's often dedicated to\nmilitary and officials\nand there are different approaches to\num\nsecurity realization so moving things\naway from politics and into the realm of\nnational security\nand um\nunfortunately the only approach that is\nbeing discussed is the technology\napproach and i would really have like to\nhave know like if there are many others\nand then what are they\nand also\nto me this seems like a really bad thing\nto happen and i'm uh\nnot so much interested in uh\nhearing the best way to do this rather\nthan how can we avoid securitization\nwhich the authors don't discuss\nunfortunately\nrather they argue that the chicken\nutopian approach is particularly\nvulnerable to authoritarian secret juror\nsecurity station\ni think when you try to unpack this\nthis seems very problematic like there\nis\nlike i can imagine another really easy\nway to securitize\na subject which is to say that this is\nvery dangerous and this will kill you\nright you can do that with nuclear\nweapons you can say we need the\npresident needs to have the authority\nbecause that's the only stable way and\nif you don't if there is a third world\nwar you personally will die\nand that's like a a very fear-based\nintuitive system and there are in in\ngeneral securitization uh this is the\nway they people argue that uh the other\ncountries will get a strategic advantage\nand this kind of thing and that is my\nunderstanding of how secret realization\nis normally done but i am not an expert\nand the paper doesn't really see how\nit's otherwise done but\nthe taken utopian approach\nrecall that this is basically the\nargument that the universe has a\npotential a cornucopia\nthat\nin theory it would be possible to have\n10 to the 54\nhumans if we colonize the uh entire\nuniverse and put as many digital people\nin there happy digital people as\npossible and i think that to me sounds\nlike a really poor object for sure\nsecuritization\nin that like you could imagine that the\npresident comes on the tv and in grave\ntones say that we need to do this\nbecause otherwise we will die and i\ncouldn't see him go and argue that this\nneeds to be taken out of politics\nbecause otherwise we can't realize 10 to\nthe 54th uh people that seems to me\nuh\nlike an unlikely approach for\nsignaturization\nnow we turn to boston's vulnerable world\nhypothesis\nwhere it is stated in this article that\nboston's preferred solution is extreme\npreventative policing and widespread\nsurveillance through freedom tags\nand if we\ngo we read this article quite a while\nago actually in the reading group\nwhere boston has the following quote the\nobedience sounding link is of course\nintentional to remind us of the full\nrange of ways in which justice system\ncould be applied\nso the word freedom text is very much\nironic\nso is this in fact boston's preferred\nsolution does boston want this to happen\nwell i started to gather up like a lot\nof quotes from boston that i felt\nshowed rather unambiguously that bostrom\nreally would not like this to happen uh\nbut i'm not really sure that\nuh like if you can if you can read this\nuh um\nvulnerable world hypothesis and come\naway with this thinking that yes buster\ni think this is a great idea then i\ndon't think really that me finding some\nquotes uh will help to convince anyone\nanother thing thought experiment that uh\nbostrom is working with is um if someone\nis creating a\na technology that would\ncause extinction\nwould it be\nan idea to make a preemptive attack on a\nsovereign nation to avoid extinction and\nin some cases it might be\nthe problem with this\nis that\nif there are autocrats or spying watch\nquests then they could use these kind of\njustifications\nfor\num\nto just subvert them democracy basically\nand in this way the speculation that\nbustrom have these thought experiments\ncould indirectly cause the thing that\nthey are worried about because they\nprovide roadmaps and things like that\nso um the authors don't use the word\nintro hazard like bostrom obviously is\nuh like i literally wrote the\nso i would expect him to have thought\nabout this\num but i think\nwe can point out some\npositive and negative things about this\nlike one thing we can say about this is\nthat the first paper on what to do if we\nare in a vulnerable world hypothesis is\nwritten by nick bostrom who is clearly\nnot an autocrat he would clearly not\nprefer that we are on the pervasive\ncivilians and things like that um so in\nthat way it kind of frames the\ndiscussion and if the uh if the first um\npapers say this is really really a\nhorrible idea that makes it harder to uh\nlater say we need to do this comparator\nif the first paper was written by\nsomeone who says who believed autocracy\nwas a really good idea\nalso i feel a lot of these are\num\ninteresting when uh argued for a um\nfrom a extensive risk perspective but\nfor an autocrat i\nwould expect these uh\nuh arguments to actually be rather\nsimple to invent like i would not\nbelieve that vladimir putin or someone\num would have a hard time uh arguing\nthat we need more preventative policing\nin order to stop dangerous technology\nand and things like that\num\nbut\nit's not\nso much because we like the authors are\nuh talking about like can autocrates do\nthis and i think a more interesting\nangle would be to look more at uh like\ncultural things like politics can people\nbe convinced to accept extreme\npreventative policing\nand one example i found um is uh\nuh warhammer 40 uh k that's an example\nof something that has way more cultural\nappeal than boston's people and that's a\nworld where fascism is indeed uh\ntechnically correct uh in that if\nlike when you do world building you can\nmake any world and the authors have\ndecided to make a world where if you are\nnot a fascist then demons come out from\nchaos and eat you and everybody around\nyou if you are not fascist so that's why\nthe humans in the warhammer 40k are\nbasically fascists and this has way more\num cultural appeal and it's also\nnot really a an argument for fascism and\neverybody recognizes this and i think um\nmy point here is\nthat bostrom's paper the vulnerable\nworld hypothesis has extremely extremely\nless cultural impact than for hama than\nwarhammer 40k um\nso i would expect that\nthe the impact from this would all would\nbe dramatically small\nand if it's already a small impact in\nthere then it must be a small impact\nlet's go to the\nthe idea\nthat gives this people the title\ndemocratizing risk\ndemocracy must be central to efforts to\nprevent and mitigate catastrophic risks\nin particular choosing which myth to\ntake should be a democratic endeavor\nbut the problem with this\ni think it sounds really nice in theory\nthe problem is in practice the agi\ndevelopment that is being done is\nin fact\nas far as i can tell\nreasonably well\ndemocratically\nfounded in that most people believe this\nis the right thing to do there is no uh\nuh\npolitical and democratic opposition to\nthe work being done by by deepmind right\nnow\nit is argued that avoiding extinction is\na communal project and i think\nit should be it ought to be an\naccumulative project but we need to do\nsomething if we uh if it turns out in\npractice most people don't actually care\nso right now ai risk isn't that much of\na communal project unfortunately um and\nwe need to to grapple with this fact in\nsome way\nanother\ndemocratic\nproblem is that um\nsome views are explicitly excluded right\nnow and\nthe arguments do that needs to be\ncompelled\nso the the argument in practice that is\nbeing excluded in particular from\nif you want to do something like the\nlong reflection is the argument for\nextinction in particular for\nquick extinction and i believe that\nit is right and proper to exclude these\nviews i believe that they are\nsufficiently niche but i\nrecognize that we need some kind of\nuh\nlike this is easily a slippery slope\nthat i say oh obviously this can't be\nexcluded because it's so niche like some\nkind of way of um\noperationalizing that would probably be\nbetter i think it's really compelling\nright now at this stage\nto uh because very very few people are\nin favor of extinction right now\nthere's also an argument being made that\ndemocratic judgment is superior\ni agree that democratic adjustment is\nsuperior but i also think that um\ndemocratic\nuh adjustment is not always superior and\nit depends on the case and the argument\nthat is being uh presented here is\ninsufficient uh in order to er to prove\nthat it is in fact superior judgment we\nshould expect some democracy like some\nof the arguments like how there is this\nconflict of interest in democracy is\nthat even true and i'm not entirely sure\nand of course judgment is only a part of\nwhat we need to do uh a lot of this uh\nof the facts need to be established by\nexperts and cannot be uh democratically\nuh estimated um or uh judged like\njudgments it's not really about facts\nand and so\nthis is a part of it but only only a\npart\nand the uh the explicit suggestions are\ncitizens assemblies surveys and\ndiversifying existentialistic studies\nunfortunately there's not much more\ndetail being given in this than what i\njust said and i worry that if something\nlike citizen assemblies\nwould be\neffective\nthen\nmy thought would be that even smaller\ncitizen assemblies would also be\neffective\npossibly possible that is not a given\nbut i think an incremental approach\nshould work\nand right now we're not really seeing\nthat\nand that is an interesting question that\nthe obvious hypothesis is that that\nmight be because in fact citizen\nassemblies doesn't really contribute in\nany meaningful way to existential risk\nstudies um but i think it's something\nthat\nin order to uh\nbuild an argument\nwhy this is uh useful\na lot more work needs to be done than\njust saying citizen assemblies is very\neasy actually holding a citizen assembly\nis a huge undertaking\nand finally how democracies limit harm\nthere is a claim diverse thinkers would\nnot sacrifice one billion humans for an\ninfinitive improvement or of our arts\nand\ndiverse thinkers wouldn't do that and i\nthink\nbasically no one would do that\nit seems really unlikely i have a hard\ntime finding a scenario where\nwe kill one billion people and then um\nwe reduce the risk of uh nuclear\nholocaust and uh or something like that\nthat doesn't really\nseem to be like the way the world is\nworking i think in fact there could be a\ncore misunderstanding here in that um\nwe see a number of places in this paper\nthat um\nsome indications that the author thinks\nthat in the future we might in fact be\nplaced be a place before um\nfor options like this where we can\nchoose between\nsacrificing one billion people or\navoiding existential risks um\nand\ni think i'm strongly convinced that\nthings that kill one billion people\nin\nalmost always dramatically increase the\nrisk of the existential risks\num\nalso scholars should in general be in\nfavor of democratic fail-safe mechanisms\nand that is indeed true and one of the\nearliest work i will point to here is\nkowski's\nshort novel uh failed utopian number 4-2\nwhich indeed features precisely a\ndemocratically unsafe mechanism\num how would democratic voters well this\nclaim that they are unlikely to tolerate\nglobal cash of risks if they know\nthemselves could be affected\num and\nuh that is a claim and unfortunately the\nthis empirical thing seems to be um\nare true right we're right now seeing a\nlot of people accepting ai being\ndeveloped as as quickly as possible um\nand that is a uh\nsomething again that the authors need to\ngrapple with like the democratic uh\nstructures that we have currently seem\nto\nto not in fact uh put any kind of breaks\non ai development\nit's claimed that citizens often show a\nsignificant risk aversion in comparison\nto their government\nsometimes they do and sometimes they\ndon't uh like right now in ukraine we're\nhaving calls for a no-fly zone which\nwould be an example of citizens being\nway less risk-averse than uh than the\ngovernments who are refusing calls for a\nno-fly zone um\nand\nit's not really at least strongly\nobvious to me that this is a\neffects-based uh difference\ngovernments are certainly not perfectly\nrational but\nthese kind of\ngrassroots\norganizations seem not to be perfectly\nrational either\nempirical democratic feel safe\nmechanisms seem to work\nthe empirical belief is\nnot as strong as well as as i see it and\num right now we are in a situation that\nlooks like it's very different like\nsome of the uh um\nthe things we are seeing uh previously\nwith like global warming or local\npollution or whatever seems\nnot that analogous to what we're\ncurrently facing with ai risk in\nparticular and so the failsafe right now\nseems to not be working and that makes\nit unclear whether we should double down\non it if it seems to not be working\nso that's the article and now i'm going\nto talk a bit about the ea forum post\nthat accompanied this\nand this is um claiming a number of\ndifferent things um\nthe first is that there seems to be some\nkind of conspiracy against criticism\nso\nsenior scholars told us in private any\ncritique of central figures in effective\naltruism would result in an inability to\nsecure funding\nand that's\nuh clearly not what these people are\nsaying um and so um\nit's not\nuh\nit's quite consistent with um like we've\nseen uh earlier the uh the strong claim\nthat uh\nsenior scholars in other cases also have\ndifferent opinions in private than they\ndo in public with nick bostrom in\nparticular so these two claims are\nrather consistent\num\nnow of course the central figures in ea\nhappen to like read the ea forum and uh\nand some of these people within mega\nskills nick dixon holland kanovsky and\nmany others really you would say the\ncentral figures in the ea\nreplied to this and said no\nwe in fact strongly welcome good\ncriticism um\nand\naaron gartler went into gartner went\ninto\nmore details about\nthis game how it's been presented by\nothers and\nfound that the evidence is\nvery lagging for any such kind of\num\nuh of opinion being held\ni can i can add my own anecdotes and\ndata um\nit's like i don't have access to the\ncentral figures in the ea at all nor do\ni have access to senior scholars but i\nhave talked to a number of ea people who\nare if who are involved but not\nbut not central and i have talked to a\nnumber of scholars who are not senior\nscholars and it seems really clear to\nthem that there is a cultural norm\nagainst um\nbeing afraid of criticism\nuh for fear of uh\nof missing out on funding um i think\nit's unlikely that the culture is the\nopposite at the top but i this is just\nmy anecdotes i i don't have a strong\nreason to have an opinion about this and\nin fact\ni would argue that someone like elon\nmusk could be said to be closer to this\nlike\nthe elon musk started open ai\nand\nuh was criticized roundly for that\neliezerkowski and miri in particular had\nvery harsh words\nagainst that and it seemed like elon\nmusk after that\nceased to provide a\nsubstantial funding so in that way that\nseems like an\nexample of where this dynamic could\nindeed be true so while another one for\nconspiracy theories\ni am\nnot dismissing our claims as quickly and\nas strongly as i think other people are\nthe second part of the\nea forum post was that this was\npolitical and i'm sorry to\nsay that the authors have recruit\nclaim this has been a really emotionally\ndraining paper and it is clear there is\nbeen a very substantial amount of\nfriction if you read between the lines\num\nsean haggarty is being uh\nis uh in particular is called out as one\nwho claimed that this was like a polemic\num post uh\nan article uh they say they believe this\nis as an insult\nand in general that there is a higher\nburden of proof because the people was\nconsidered political\ni think\ni think that there is substantial part\nof this people that is political and\nalso a lot of this that didn't have to\nbe political that uh\nwe see things about like a criticism of\nuh neoliberalism that\ndidn't need to be in this people that\ncould have been cut off very easily uh\nand i think it um\nin general if you if you add this a lot\nof people in ea have this kind of um\nit's often said in particular and\nrationality that politics is the mind\nkiller and a lot of people um\nget um\nwould object to this paper just purely\non the spaces\nand there is a question whether this is\ngood or bad and whether\npolitical mixing political and\nexistential risk is a good idea i\npersonally tend to come down on the side\nthat it's a better it's a bad thing to\npoliticize\nbut um\nin in general political claims\nwill be met with with substantially\nmoral resistance\nand finally it stated that ea needs to\nmake structural adjustments in order to\nstay on the right side of history i\nthink that's um like staying on the\nright side of history is usually a uh\nlike\nhistorical materials and marxist uh\nframing and it's also obviously to me\nhyperbole in that\nsure ea is not perfect and no one is\nsaying that ea is perfect but it seems\nclearly right now to be\na force for good in the world rather\nthan a force for that\nand finally one of the things that were\ncriticized in the ea forum was that the\nit's really wide-ranging for instance\nshannon hagerty again uh gave the advice\nthis review\nto focus on one section and compare\num the technology and the troll approach\nagainst alternatives\num\nthey reply in the comments that there is\ntension between these two things right\nthat you're they're both saying focus on\none thing and also focus on more things\nand\nthey\nhave a\nrather adversarial reaction to this\ncritique to shawna hagerty's suggestion\nsaying that that is only a suggestion\nthat's given to ensure that they do not\nwrite a critique\num\ni think in fact that is a very correct\nuh\ncriticism that shannon hackett is giving\nhere like if you have a paper that is\nreally really wide then it's\nboth fair enough to say there are a lot\nof these places where it's just too\nshallow and you should either like\nmake the paper ten times as long or you\nshould focus on on something where you\nhave you can go into much many more\ndetails in particular one of the things\nthe authors do often is\npresenting a an argument and saying\nwe're not convinced by this argument and\nthen quickly moving on without a any\nkind of\ncounter arguments or or any detailed\ntreatment of this which is necessary\nbecause otherwise the people would just\nbe too too wide but then they should\nhave made the um the uh\nuh the people that's why really\nin particular one of the things that i\nfeel is key in this is boston decides to\ndiscuss surveillance in the in the paper\nthe vulnerable world hypothesis is this\na good idea\nit seems in this um people that uh\nis built up to like a really bad thing\nbut it ends up with very very little\nconcrete discussion of if this is\nactually a good idea i would have liked\nto know more about this we'll call them\non out on this in the comments uh\nshe says um well the others i think it's\nonly\nwriting in the comments we need to stop\nclassifying everything as info assets\ni think in fact\nif boston's\ndecision to discuss this is bad for this\nreason then\nuh\nwe need to to figure out if we should\ndiscuss it and if we should discuss it\nthen um\nthe framing of infohazar and the theory\nof info hazards is probably hard to get\naround\nand finally i would say uh\nsome of my own personal uh\nuh problems that are not mentioned on\nthis ea forum post um my first one would\nbe on the taking utopian approach um\napproach is defined as an argument uh in\nthe uh\nin the paper but it seems to be more\nlike a world view and this kind of mixes\nthings up and um\nit's really unclear to what extent the\ntypical utopian approach um\nwants to build agi as fast as possible\nor as safe as possible and this kind of\nthing this kind of tension is not really\nmade clear\nthere's also a really strong tendency in\nthe first part to overstate the\ninfluence of the utopian approach there\nare many many arguments\nagainst existential risks and i think\nall of them are sufficient or basically\nall of them are sufficient so the\nchicken utopian approach isn't really\nneeded at all\ni planned i announced in the beginning\nof this uh of the first session that i\nwould um argue in favor of the utopian\napproach even though i don't actually\nhold that and that turned out not to\nreally be necessary because\nif there are just so many other good\narguments against extinction\nanother thing that i would really like\nthe people to at least acknowledge that\nthere is currently a lack of public\nengagement it might be indeed that the\nway to get uh public engagement would be\nto politicize this um but\nit's it's totally uh you you can't say\nthat right now there is a democratic\nbasis for uh uh institutional risk\nstudies because that clearly isn't right\nway too few people care about this\nanother thing that i felt is the\nargument that a moratorium is feasible\nis\nspread is spread over many places in\nthis paper i counted five places and\nnone of them really are in depth they're\nnot really presenting this argument like\nthis is how why we feel that this is in\nfact feasible\nand also whether this is desirable\nthat's also something that probably\nneeds to be like someone needs to make a\ncase for a uh a moratorium\nthere is the\nuh the uh which we saw the first time\nthat there was some misquotes of of\nbostrom and it seems like a clear\nuh\nmistrust of nick bostrom in particular\nuh and possibly other people in uh\nin extinction risk studies\nand that might be um\nthat might be uh\nworthwhile or not\nbut\nanother thing that seems really clear\nfrom the uh uh subtext in a lot of\nplaces is that the authors don't seem\nconvinced by ai risk at all but way more\nconvinced about global warming as an\nexistential risk\num and\nin that way it\nkind of feels uh i think the authors are\non a difficult quest to reform execution\nrisk studies when there are so many of\nthe existing scholars that they don't\nagree with and um so many of so much of\nthe central parts of the existing\nresearch that they also don't agree with\nthat makes it really hard to make a um\nto reform the field unfortunately\nbecause i feel in fact that in many\nplaces the the authors do put their\nfinger on things that are potential\nproblems but it's just um this paper is\nuh has too many problems to be a a\nreally good criticism of the current\nstate of existence risk studies\nthat's all for today thank you and see\nyou next time", "date_published": "2022-03-31T21:08:46Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "5dda88492525ae1aa2403ab3ebf8f8c8", "title": "130 - Embedded Agency QA", "url": "https://www.youtube.com/watch?v=btc-4vYyOSs", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 130 in the\nASF did comm reading group tonight we\nhave Scott garabrant from Miri with us\nto answer our questions about these\nsequence embedded agency Scott welcome\nthank you who would like to post the\nfirst question if no one will take the\nfirst question then I will start with\none of the prepared questions okay so\nthe first question that I have prepared\nis about a subsystem alignment and\ndualism and what I was wondering about\nwhen I read the the power subsystem\nalignment is that it seems that alexei\nthe the robot that is kind of like XE\nseems like it might also be vulnerable\nto the same problems in subsystem\nalignment it might for instance find a\nTuring machine that implements a\nmalicious AI while it is it is working\ndo you feel this kind of reasoning is\ncorrect yeah so so on these bolts you\nhave here something about becoming\nobsessed with the sub-game I think I\ndon't think that that bullet is correct\nlike if you imagine like actually I see\nbut I do think that there's a there is a\nthing where if you have something like I\nsee where you're just like implementing\nabout trattoria machines either with\nsome sort of resource bound on the Tauri\nmachines or without you end up finding\nyou could end up finding solutions solve\ncertain problems that involve like\nsimulating of agents or like societies\nor something like did you imagine that\nlike the the K complexity of our\nuniverse is very small then you could\nimagine like one of the easy ways to\nsolve to like get a solution to a\nproblem might be to like might pass\nthrough a like\nsimulation of some society or some agent\nor something like that this is something\nI think all Christiana has a blog just\nabout the universal prior like something\nabout what the actual universal primer\nactually looks like or something like\nthat\ndoes anybody know what it's called I'm\nnot sure there's a blog post I focused\nyou know I'm like what what the\nuniversal prior what like the beliefs\nthat aixi has look actually look like\nit's about do you get these kinds of\nagents inside of it and yeah I think\nthat I think that's a I think that's a\nreal thing for that I want to point out\nthat I think that I'm very concerned\nabout things involved I'm very concerned\nabout things in the substance alignment\ncategory and I think that like the\ncentral example from the subsystem\nalignment category is different from\nthis thing that is demons in the\nuniversal prior or whatever what\nwhatever Paul says I think that like the\nmain concern is something different from\nthat but it's still like important to\nlook at okay in the meantime I've seen\nthe chat that some people had found the\nlink to this but no one had raised their\nhand for posing questions so in that\ncase I will continue to the next\nquestion about embedded world and models\nso you have we have the question here on\nthe screen about yeah it's for example a\ntiling agent with some symbol tools has\nto solve the alignment time problem but\nI don't have to worry about specialized\nform of subsystem alignment is that true\nor are there hidden inner optimizers in\nthese kind of benign seeming tools yeah\nI think I got a little distracted by it\nby okay so I think that's basically if\nyou're solving a large task or something\nthat looks like that looks like a tool\nso said let's say you want to\nyeah you you you you you want to solve\nsome like pretty hard task maybe some\nlike physics problem or something some\nengineering problem and you like take\nlike a tool sandbox and you have a thing\nthat is kind of only optimizing within\nthis tool sandbox kind of only doing\nthis like very local optimization not\nthinking about the outside world not\ndoing anything like that the kind of\nthing you'd think of a tool is like\nthat's like a very finite finite goal I\nthink that this does have subsystem\nalignment problems I think that there's\na there's a kind of like convergent\ninstrumental goal which is being able to\nlike think in a like very general being\nable to like think in a very general way\nabout how to like solve whatever problem\nto think about how to solve problems in\ngeneral in order to solve whatever\nproblem you you're trying to do like\neven if you're trying to solve a very\nlike basic problem there's an incentive\nto be able to think about how to like\ndistribute your own attention in order\nto figure out where to like figure out\nwhere to direct attention so that you\ncan like solve a thing if you have a\nsystem that is like not doing this as\nwell as it could by default then you\nhave like this incentive for like inside\nthe system to kind of cut across your\nplanned way of solving the problem in\norder to like be able to reason about\nthese things if I did not think that\nthere was an inner optimizer problem for\ntools then I would probably not be\npaying very much attention to they do\nfoundations of all I think that like if\nlike you could ask the question like why\nam i focused on agency and I think the\nanswer is that I think that agency has\nbeen purchased I think the things like\nagents that are reasoning about how to\nlike distribute them their themselves\nand reasoning about like how to do\nthings for\nreason are like a like necessary sub\npiece for the easiest ways to solve many\nproblems and if you're trying to solve a\nproblem in a way that looks like a tool\nand not like an agent you're creating an\nincentive for like agency to show up\ninside and like yeah if my answer is\nlike not only do I think that it's a\nproblem but also its crux see for me\nit's like if I changed my mind on\nwhether or not there would be and and\nwhat whether or not there would be sub\nsite such as some alignment issues for\ntools I would also change my mind about\nlike the nest necessity of a lot of the\nstuff I'm thinking about okay I can't\nsee any questions from Robert massive\nwaste giant question well I was just\nwondering if we have a well specified to\nan agency emerges from it because it\nseems like sort of opening intuitively\nbut I also would be interested to to\nactually watch it happen in a toy\nsituation yeah I don't think that we\nhave a lot I don't think that we have\nexamples now I am NOT\nI don't strongly feel like we won't have\nexamples until things are super\nintelligent like I think that there's a\npotential for getting examples like this\nbut I think that like right now we do\nnot have many examples actually\nimplemented it could be something like\nXE we're just well specified enough that\nwe can see that it happens yeah yeah I\nwant to say something like if you\nimagine so okay so it's Lou the the like\ncanonical example which I like and a lot\nof people don't like is evolution\nso you I mean there are lots of reasons\nto think that evolution is maybe not the\nbest example we're trying to figure out\nwhat's gonna happen\nin an AI system but like you could\nimagine there's kind of an outer\nfeedback loop in evolution which is like\nI need to like learn how to gather food\nand as like I come up with better as\nlike the animal like comes up with\nbetter ways to gather food or something\nthe outer feedback loop can like reward\nor punish being able to do it a certain\nway and you can imagine this thing is\nlike being very slow it's slow and it's\nworking in a in a like sufficiently rich\nenvironment that it has the potential to\nkind of like short-circuit the alike\nouter feedback loop the top thing so you\nhave an outer feedback loop which is\nsaying like okay let's like gather food\nor something and then there's kind of an\neconomic incentive for there to be a\nshorter feedback loop which necessarily\nworks using a proxy which is like what\nthe humans are doing so like the human\nbrain is a shorter feedback loop than\ncycles of evolution and in being and it\nmanages to be a shorter feedback loop in\npart because it is not like directly\nworking on the thing that evolution\nwanted to work on which is genetic\nfitness but working on like more simple\nthings possibly like hunger and if the\nouter system is like designed in a way\nsuch that there's a more efficient way\nto do it and it's designed in such a way\nthat like if there is a more efficient\nway to do it it would like find that way\nthen you end up with you end up with\nlike the outer system starting to depend\non a like more economically viable inner\nsystem that's able to like solve the\nproblems for it so that makes one thing\nokay um if that answered the question if\nI can just make a quick if people who\nare not speaking could mute their\nmicrophones\nI can hear a bit of background noise I\nwould appreciate that Allie you have you\nhave a question yes I have a question um\nI can't remember if we resolve this now\nreading but I remember we are pretty\nconfused in the section about finian\nreflections I think you said that if you\nadded uncertainty to counterfactuals\nthen say you have 95% of something\nhappening one way and 5% of the time\nit precedes a different way said 95% of\nthe time you would get the same result\nas purely deterministic counterfactuals\nand I was wondering what you mean by 95\npercent of the time and have same result\nyeah yes kind of a response like like\nwhen we say like if you know what you're\ngoing to do then you can't like make the\nother counterfactual and then people\nsometimes like give a response that's\nshaped like well you don't actually like\nknow what you're going to do you don't\nactually like know your real source code\neven if you like see your source code\nyou have like some probability of like\nit means some other source code or\nsomething like that and I think that's\nnot getting at an answer to the to the\nreal problem so I guess I'm saying\nsomething like let's say I'm in a 5 in\n10 problem and I like\nbelieve with I think the main reason is\nsomething like the the like cosmic-ray\nthing that was in the decision theory\nsection where you have a if you're like\ngoing off of some like weird obscure\nlike I have some like small piece of\nuncertainty about what I am then you get\nsomething that is that does not look\nlike what would happen if I decided to\ndo X because X was the right thing to do\nyou end up with something is just like\nvery different and possibly like\ndifferent in a lot of ways there's a\nthing about like probabilistic versions\nof Loubs theorem where if I imagine that\nlike no I think I I don't I don't\nremember I don't remember exactly what\nsay about probably Persians who lives\nthere I think there's an old les run\npost on this by I think possibly Stuart\nArmstrong about about probabilistic lobe\nmight even by badeen I'm not sure no\nprobably stick hope I did Oh proof\nlength I I think that that so if you\nimagine if I'm a matter I'm a situation\nand like I'm not sure that it's actually\na like me in a 5 and 10 I'm like ah\nthere's a 95% chance that it's me and a\n5 and 10 and like a 5% chance that it's\nsome other random thing and that some\nother random thing like has some\nexpected utility then I can still kind\nof say if I were to take the 5 then I\nwould get a 95% chance\nfive dollars and a 5% chance of this\nother random thing and if I were to take\nthe 10 I would get a 5% chance of zero\ndollars or 95% chance of zero dollars\nand a 5% chance of the some other end of\nthing and you can still kind of like\nlike introduce you uncertainty about the\nproblem that you're in does not fix this\ninteresting introduced the uncertainty\nabout what agent you are I think is is\nmore like I think it's more pointing out\na problem with trying to do this like\nexpected utility stuff because now if I\nhave a 5% chance that I'm like a\ncompletely different agent then I'll\nlike jump into that other that other\nthing which in the five-and-ten problem\nis like yeah if I if I believe them\nactually in a five and ten problem and I\nthink oh there's five percent chance\nthen just take 10 bucks then I'll like\nget the right counterfactuals out of\nthis but in other problems I wouldn't\nnecessarily okay and to you all of you\nif that answers the question then Tim\nyou also have a question about\ncounterfactuals and the five in ten\nproblem yes okay so the way I perceived\nhumans to keep out counterfactuals is\nthat we have simplified models of the\nworld and we run simulations on those\nmodels to predict the future but he runs\nsimulations with some changes like the\nSun disappearing to form counterfactual\nis it possible that that's the future of\nthat's a framework that possible future\nAI is good news and well another thing\nan agent would never know it would take\n$5 as there's always a chance it could\nconsider a counterfactual that gets us\nmore money in which case it would switch\nto that associated action yeah so there\nis this thing we're like there is a\nthing that humans are doing when they're\nlike taking counterfactuals\nthat is doing this naive thing which\noften just like involves replacing\nthemselves as something else that like\ntakes the action or something and\nleaving the rest of the world the same\nand then like kind of like causally\nlooking at what happens when they do\nthat I think that there's a reason to\nexpect that this might work well for\nhumans and\noff working well when you have a system\nthat is both like more able to see\nitself and more likely to show up in\nmore than one place at a time and so\nlike that there's kind of a thing that\nthere's kind of a a there's some reason\nto expect that like this kind of thing\nthe reason that humans were able to do\nit is because it doesn't matter exactly\nhow we write down how we're doing it\nbecause most ways kind of like turnout\nabout the same because you don't run\ninto these edge cases and I think that a\nlot of the like I think a lot of the\nmotivation is coming from an expectation\nfact like as you get something that\nshaped more like recursive improvement\nand stuff you like push into into edge\ncases by like just like extreme values\nand stuffs another thing that's not just\nlike oh we push into it streams and\nbecause we push into extremes to get\nthis like edge accentuation exaggeration\nor we like get into into edge cases\nthere's another thing is that like\nhumans are not like directly reasoning\nabout themselves as much as like I would\nexpect a system that would do very well\nin the long run to do like there's not a\nlot of incentive to like directly reason\nabout like low-level parts of ourselves\nbecause there's not much that we can do\nto modify low-level parts of ourselves\nin a productive way and like if that\nchanges which I expect it might change\nthen things might break down Thanks\nokay um so no one has raised their hand\nuh Ashwin has a follow-up on inner\noptimizers yeah so sorry it's quite\npossible I'm just missing something here\nbut I guess I wanted to clarify a bit on\nI agree that I can see how agency's\nconversion but it feels like sort of\nboth empirically and maybe abstractly\nlike could either search processes or\njust like processes that are not\npowerful enough to like do the kind of\nsearch that would get you to bad inner\noptimization so it feels like I guess I\nwant one thing one way of phrasing this\nmaybe is like how much is this like an\nissue in terms of this is something that\nwe expect to crop up a lot for like all\nkinds of agents versus like it's just\nbad that we don't know how to think\nabout this properly for just generic\nlike clarifying or reasoning about how\nagents work reasons yeah so I want to\nsay something about like yeah so there's\nthis question about like inevitability\nor something and there's another\nquestion about like how powerful do\nthings have to get before you're gonna\nhave this like actually be a problem or\nsomething to the inevitability question\nlike in evolution we could have imagined\nthat maybe evolution like preceded by\nlike avoiding having human-like or\navoiding having minds at all pop-up\nwhere you could say like ah like let's\nmake evolution like make some like\nbetter tools but not have any like like\nmaybe you have some very primitive minds\nwe like stop Minds inside individual\npieces from like popping up or something\nlike that by like maybe like watching\nover evolution and like performing\nsurgery to make sure this doesn't happen\nand then it becomes that then like it's\npossible that you like have evolution do\nthis for a long time but I think that\nthere's certain problems that just if\nyou look at evolu\nas it is without the like spinning up of\nmore efficient minds and give it a task\nlike get to the moon like there was just\nno way that evolution was going to be\nable to do that without having without\nlike at some point deferring and maybe\nthis is wrong maybe evolution could have\ngotten to the moon without like\ndeferring to its smaller mind I'm not\nsure how long it would have taken but\nthere's this thing where you don't just\nlook at the structure of the thing that\nyou're running you're like looking at\nthe task which is like the stopping time\nfor running it and so it might be so if\nthere's something like if you build a\nsystem such that if it works the way\nthat you intended which is like\nevolution maybe without the minds that\nit spins up if it works that way there's\njust no way in which it's going to\naccomplish this task but then it\naccomplishes the task anyway then like\nthen you have like that it like must\nhave been because of something like this\nlike tautologically or something like\nthat and I think that like a lot of what\nwe're trying to do is we're trying to\nlike scale things up and like point them\nin at themselves in ways that were like\nnot fully understanding in a way that\nand trying to solve problems they kind\nof are too hard for what we're trying to\nwhat for for the techniques that we're\nusing if like directly and there's a\nsense in which we're doing this is how\nwe're doing kind of everything there's a\nsense in which like there's a connection\nbetween the like inner optimiser problem\nand like how we're able to do machine\nlearning at all like there's a sense in\nwhich like the only tool we have is\ntaking something that's like the\ninterruption Weiser problem and using\nexactly that thing but like skit but\nlike in a way that we're expecting it to\nlike not have problems at least yet or\nsomething like that I think I went off\non a tangent can you restate the\nquestion\nsure yeah I mean I think that's like\nfairly relevant I guess like I guess the\nthe fault that I would have to that\nwhich was it making us part of what I\nwas asking was like it seems though like\nwe have tons of tools where like that\nare fairly powerful where this doesn't\ncome up because we have like cache\nknowledge that we developed for various\nforms of like now we can implement in\nterms of like pretty well specified like\ndumb tools that just like you know are\ndoing complex computations or whatever\nbut like with a very specified like\nstructure and desired outcome and so\nknow it seems feasible to imagine like\nlike scaled up AI that has you know\nmolecular modeling tools or whatever but\nlike those tools aren't like an optimize\nfor any particular thing like they're\njust like modeling how molecules form\nand react yeah I I I agree that you can\nbuild like a whole bunch of tools\nwithout having to like go towards a\nproblem that's like so hard that you\nwould that like given our current\nunderstanding of how to solve problems\nwe would only be able to do it by kind\nof deferring to something else that we\nthat we create or something like that\nlike I agree that there are a lot of\nproblems like that there's two questions\none is like well maybe with just one\nquestion like question is like what what\ndo you do then like do you think that\nlike we think we're eventually gonna\nhave to solve problems that are hard\nenough then there's a question of like\nwell can we then use these tools in a\nway that can we like then use the tools\nthat we can create to be able to safely\nreason about how to like solve bigger\nproblems without kind of without kind of\njust using like naive methods or\nsomething and yeah well yeah one\nquestion is like like how does that help\nus I think it does help us a lot but\nlike you actually have to like think\nabout how or something and the second\nquestion is like\nhow do we know when like it might be\nthat we expect it to be late enough the\nthing that we should let and we expected\nto be late enough and we expect that\nlike things are like this is the best\npath it's kind of just like push to like\nharder and harder tasks like very slowly\nand then with each new task we then say\nokay now we have a new tool let's see if\nwe can use this new tool to be able to\nthink more and be able to get like a\ntrue understanding of what's going on\nbefore we like push to higher tasks but\nthat like requires some sort of\nunderstanding of like what tasks are\nharder than which and and such and also\nlike that requires like a lot of global\ncoordination to like kind of slow down\nlong enough for us to be able to use\nthese tools before just like moving on\nto the next thing and so yeah and just\nfor me can you repeat the last sentence\nof Q the last sentence is that it will\nrequire a lot of global coordination to\nbe able to like create new tools use the\nnew tools in order to like enrich our\nunderstanding before like moving on to\nlike creating a like larger tool without\nunderstanding what's going on or\nsomething and so like it it seems like a\nplan that might work pretty well except\nlike coordination is pretty difficult\nand there's there's like an incentive to\nlike yeah and you think that applies not\nonly on the level of like building like\ntools that human use but building tools\nthat like Nai would use so like having\nan AI that's restricted to using like\ntools that seem like pretty solidly safe\nor even probably safe by some method or\nsomething like that seems like a\nsignificant enough constraint that like\nthere's a strong like racing dynamic\nwhere people who don't put in put those\nconstraints get pretty far ahead oh yeah\nwhen I imagine like building tools that\nthen we\ndirectly having a eyes use as opposed to\nlike building tools for humans\nI feel more due me about it for two\nreasons one is like now what one is like\nlike how does that AI work in the first\nplace if it's like it being agenting\nabout how its using the tools and the\nother one is like now I expect even more\nthat you'll get like runaway feedback\nloops where like you get a like like a\nsmaller gap between being able to build\nthe tool that lets us do some like new\nfancy philosophy or computer science\nlets us like maybe get some stuff it\nlets us like learn what's going on and\nlike the opportunity where someone can\njust like not pay attention to that and\ndestroy the world I don't see any reason\nto be more optimistic about making tools\nthat is not passing through human use\nmm-hmm okay if that answered the\nquestion we will continue to yeah and\ncould you repeat your question please\nabsolutely yeah so my question is this\nit appears that logical induction is an\nexample of meaningful D confusion around\nquestions of logical uncertainty what\nconfusion still remained at the\ninterface of logic and probability\nthere's\nthe main confusion in my head on like\nthe interaction between logic and\nprobability is I think that there's a\ncertain kind of like non-bayesian about\nlogical deductions non-bayesian ism\naround logical conduction we're like\nthere isn't a way to like kind of\nseparate it out as like a friar and then\nlike a bunch of updates this does not\nmake me think oh so so this like creates\na confusion this does not make me think\noh we have to like find like the next\nthing that is like actually Beijing or\nsomething but like there's a lot of\nconfusion around like if I enter one\nframe there's like all these reasons why\nlike anything that's not Bayesian it's\nlike necessarily not reflectively\nconsistent or something like that and\nthen if I but then I like enter another\nframe which is like Bayesian ISM kind of\ndoesn't even really make sense with\nlogic and so I think one of the one of\nthe biggest confusion like one of the\none of the biggest confusions for me is\nlike what's what's the deal with the\nfact that it doesn't appear Bayesian am\nI missing something we're actually like\nit's more Bayesian than I think or am I\nmissing something we're like actually\nlike we find something better that\nappears more Bayesian or am I missing\nsomething that's like actually like it\nwasn't supposed to be AZ and all along\nand I yeah and then this has like\ndownstream consequences that are like\nlargely related to like reflective\nstabilities stuff like picking that in\nthe rust allegation section okay\nDavid you raised your hand yeah so one\nof the sections of the sequence you it\nwas the part about making sure that\nfirst generation AI will reflect human\nvalues you refer to that as a as a\nprincipal agent problem and I was\nwondering if you guys had looked at any\nof the economic literature on principle\nagent problems or if it's just a\nadjusted analogy and you don't think\nthat I think it's I think it's mill\nanalogy as a fact about myself my\nmethods are very I do very very little\nreading other people on my team do a lot\nmore reading but basically any question\nof the form have you read X the answer\nis probably no the yeah so I think\nthat's that's a lot of I think that like\nit was an analogy and also like it is\nlike a very specific subset where like\nit's not just oh we're dealing with\nprincipal-agent problems like we're\ndealing with like a very specific type\nof principal-agent problem where the\nlike agents is like much more\nintelligent from the principle and much\nmore intelligent does not just mean like\ncan solve things in ways of the\nprinciple didn't consider it also means\nlike can like notice the ways in which\nlike the principle is wrong about things\nand stuff and I would be surprised if I\nwould be surprised if there was a lot of\nliterature that like went into that\nspecific sub problem but I don't\nactually know well there so there is\nliterature about like individuals\ncontracting with corporations which I\nknow it's not a perfect analogy but does\nseem like a somewhat less pure example\nof the same problem well the corporation\nis the is the is the aging yes\nyeah yeah I think I also like like when\nI when I when I think about like trying\nto read that and trying to figure out if\nI feel if I expect to feel like less\nconfused\nI predict now but I don't know okay it\ndoesn't seem like it's probably\nsufficiently different I guess David if\nyou have some interesting papers that\nseemed like they could be relevant if\nyou could send it pass them along that\nwould either to me yours it's got\ndirectly I guess that might be a\nfeasible way forward yeah I want to flag\nalso though that the general strategy of\nlike trying to learn from humans and\ngroups and like weird mathematical\nmodels and like all basically like\nlearning from any kind of analogies that\nwe can make is the strategy that in\ngeneral I am like very proud and wanna\njust point that out okay I don't think\nthere's anyone who helped raised a hint\nDavid Booth no please\nyeah yes I wanna I do want to reiterate\nthat in like two to three years I will\nbe looking for a job so if you want to\noutsource that work to me then Emile all\nright\nsome spree so I have another question\nwell there was actually a some questions\nalso more about Miri than about embedded\nagency I don't know if you are actually\nthe right person to answer these kind of\nquestions probably not probably not I\nyeah I might I might just refuse to\nanswer things without even giving a\nreason but you can ask me questions okay\nand one of the questions that were\nposted before was about a Mira's current\nhiring pipeline are there enough people\nwho are applying to marry and if not\nwhat are what seems to be the problems\nin getting more people on board and miri\nof expanding\nI\nso I guess yeah I don't know this is\nprobably not an answer to that question\nI want to I want to say a thing is like\nrelated which is like workshops and\nstuff that we run are like a pretty\nlarge part of onboarding and like hiring\nso it's like very unlikely that we would\nlike hire an individual without having\nlike interacted with them through\nsomething like a workshop possibly\nmultiple times like if you're interested\nin embedded agency type stuff then like\nthe Miri summer Fellows Program workshop\nwhich we've run four times so far and I\ndon't know for sure that we're going to\nrun it again but we likely might run it\nagain is like probably like the best way\nto get in to that and also like we run a\nbunch of workshops for programmers which\nis another thing that that might be\ninteresting and I I don't actually know\nwhere to send you my assumed as an Mary\nwebsite like has information about like\nhow to get in contact about this but\nlike yeah I guess the main thing I'd\nlike many more I wanted to say was just\nlike workshops actually are like a very\nlike key part of the pipeline like when\nI think about like getting new people at\nMary\nI'm getting my thinking about like\ngetting new people to work with like\nthere's kind of two steps one is like\ngetting good people to like come to\nworkshops come to like MSSP and then\nthere's another step which is like\ntrying to like do filtering from that\npoint which is kind of bad because like\nit's kind of a it's kind of like costly\ntoo I think MSF fee was like almost\nthree weeks last time that might be\ncostly for a lot of people but like\ncurrently so there's flaws of that but\ncurrently like workshops are a pretty\ncentral part of how I'm thinking about\nthey do to hire I don't even remember\nthat was the question yeah\nMSP is married some fellows program also\ncalled AI summer calls programs over the\nyears but it's the same thing\nokay I can see that David has I can see\nthat David has raised his hand is this\nmore about this topic because in that\ncase you can get in ahead of Allie no I\nthink I just forgot to unraised my hand\nfell earlier honey hi so a couple of\ntimes I've so I read once that when you\ntry and get an Oracle to sort of self\nreference or access information about\nwhat it did in the past\nyou'd get some sort of the probabilities\nbecome unmeasurable and I was wondering\nthat since self-reference seems to be at\nthe heart of a bunch of these problems\nmodeling agents what do you think you\nwould do if you found out that there's\nlike some non-measurable problem the\nprobability of agents going through with\nself reference ah god sorry I'm not\nmaking sense I guess I'm just trying\nsays what would you do if there is no\nwell-defined probability of dealing with\nmost self reference problems\nyeah\nso so this gets back to the like logic\nand probability question which like I\nthink that's there being like no\nwell-defined probability for these\nthings is basically like saying like the\ntools that we have that are like\nprobability shaped are like naught or\nprobability or like Bayesian and some\nshapes are like not the right tools for\nthe kind of problems that like involve\nself reference and I would feel like I\nwant to like I I kind of want to like\nabandon the tool of probability but I\nwould like most of the time while doing\nthis be like looking for like every\npossible way in which I could not\nabandon the tool probability or\nsomething and I'd want to like at the\nend of the day understand why it's kind\nof okay to have abandoned the tool\nbecause right now it feels like there's\nlots of like good justifiable reasons\nsay that like everything that's not\nprobability it's kind of like wrong or\nsomething yeah but it's also possible\nthat like the path forward is to try to\nabandon any kind of self reference but\nthis is like very very hard this is like\nvery hard for reasons that are like in\noptimization like I said that that if I\nimagine inner optimization was not a\nproblem then I would like be happy to\nlike do things with just rules and\nknocks and not agents and I think that\nlike what I meant by like agents they're\nlike a large part of what I mean by\nagents is like things that are actually\ndoing the self reference stuff so it\nmight be that like the solution is to\nhave things only working in a domain\nwhere self reference doesn't matter like\nif I have a thing that's not reasoning\nabout itself and only reasoning about\nlike how to like do the molecular stuff\nright or something then like I could\nimagine just like avoiding the domains\nand which the things wrong but that\nfeels like very scary to me\nlike using using a tool which like you\nunderstand that it would fail if who's\nin this other situation and also you\ndon't understand why it would fail for\nthis other situation and just like\nhoping that make good things stay okay\nwith that is pretty scary\nThanks okay so there is no one else in\nthe queue then I had a question about\nthe Libyan reference let me just see if\nI can find this here here which is the\neluvian obstacle is it's mentioned in\nthe in the part of the sequence about\nvinton reflection about agents that are\nsmarter than than you but I'm a bit\nconfused about whether it's actually if\nthe Libyan obstacle obviously happens at\naround the human level because humans\nare smart enough to prove loads theorem\nso does it become worse in any\nappreciable sense once the a s get above\nthe human level yeah it'll be an\nobstacle I mean\nyeah the Libyan obstacle for self-trust\nis basically like saying at least if you\nuse like some sort of axiomatic system\nfor trust in a future self where the\nfuture self is like stronger than you\nlike able to like know and like deduce\nstrictly more things than you then you\nlike like this is like the most liberal\nversion there's lots of like other\nversions there like an analogy with this\nbut like then you like Nestle\nnecessarily or inconsistent and you can\nyou can basically prove everything so\none could say like well maybe I could\nlike trust my future self but like also\ntrust that I am like whatever my search\nfor like beliefs or something is weak\nenough to like not find the argument\nthat light passes through love's theorem\nand this just seems it seems like if\nyou're powerful enough to get like any\nuseful stuff out of this out of like\nyour trust for your future self then\nlike you're kind of just like already\ntoo close and I think that like I think\nthat if you're gonna imagine a system\nthat you think is like weak enough that\nit's not running\nif you imagine a system that is avoiding\nthe low be an obstacle just because it's\nlike too weak to like be able to reason\nthere's a problem to be able to like\nrealize there's a problem\nyou're probably if you're imagine a\nsystem that's going to keep a system\nthat weak you're probably better off not\nhaving a reason about itself at all then\ntrying to have something that is\nreasoning about itself and Trust in\nitself but like not yeah so yeah if I\nwere to imagine trying to have something\nthat's like weak enough to not have the\nproblem that just seems harder more\nimpossible over strategy than trying to\nhave a thing that's not reasoning about\nitself at all\nto follow on that I will imagine that\nalmost all current humans are in that\ncategory in that almost all tournaments\nare modeling themselves and thinking\nabout themselves in many different ways\nreflecting on themselves and at the same\ntime most humans are not capable of\nproving loops theorem most humans have\nnever heard about it of course i think i\nthink that most humans do not have\nsomething that looks like an axiomatic\nself-trust amusing themselves like like\nI was saying axiomatic and that does\nactually mean something I think that\nlike there are ways in which one can\nlearn to trust their future self kind of\nmy induction like I can learn hey over\nthe time as I as I've like grown over\ntime it's turned out that whenever my\nfuture self believes something that my\npast self didn't believe my future self\nwas right because my future self could\ntake into account what my past self\nbelieves and I've tended to be more\nsmart over time and things that I've\nlike learned like my beliefs have gotten\nbetter over time and stuff like that\nand I could like learn that fact and\nlike believe that fact without having it\nbe the right form that would like create\nlow-beam obstacle but yeah\nokay I'm still not seeing anybody\nrecently I guess I'm trying to say that\nthere isn't there isn't a short argument\nthat I could give which is like I'm\ngoing to teach this human about lobes\ntheorem and then all of a sudden they're\ngonna start proving bottom all the time\nlike and that's because like they're not\nworking in a system such that yeah okay\nokay Chris kuba has a question\nyes thanks I'm trying to go back to\nbasics on the five-and-ten problem which\nwasn't the only one who got really\nconfused by it and I'm sorry by the way\nI know we had any discussion about it\nI'm sorry if what I say was actually\nanswered by that but anyway the thing is\nwe have this we have this stage in the\nfive-and-ten problem where the agent is\nis essentially saying it's considering\nwhether it's got the five dollars or the\nten dollars and it's from me from the\nfrom the proposition that picking up the\nthe ten dollars would give it either\nzero value or less value than five\ndollars this counterfactual leads to it\ndid you think that it's better to pick\nup the five dollar property the five\ndollars rather than the ten dollars I've\nprobably completely mangled that because\njust because I don't understand it basic\nbut basically the only way I can make\nsense of that is is that suppose that um\nthis counterfactual of picking up the\nten dollars giving less value is a\nresult that comes from the finite\nprocessing ability the finite computing\nability of the agent is presumably if\nit's if it actually had sufficient\ncomputing ability it would never\ngenerate this counter\nsure but can you tell me if there's any\nsense in what I'm saying yes the thing\nabout it coming from finites computing\nability is like definitely not the the\nthing that I'm trying to get at I think\nthat the where it's coming from\nis I'm using a really stupid\ncounterfactual the counterfactual that\nI'm using is like material implication\nI'm saying that like if X implies Y then\nlike I would I'm like willing to accept\nthe counterfactual like if kind of\nactually acts than Y which is a really\nstupid version of counterfactuals and\nlike but it's yeah so it's really stupid\nversion of counterfactuals but we really\ndon't have many alternatives like we\nhave like like like one might propose\nthe thing that one might propose is like\nmaterial implication the yeah the\nimplication is where you get your kind\nof factual strum and this is kind of\nshowing that like that doesn't really\nmake sense and one might also say that\nlike expected utility is where you get\nyour counterfactuals from like an\nexpectation like conditioning with like\nsome probability thing is where you get\nyour kind of factual's from and that\nalso kind of doesn't make sense\nyou get like division by zero if you\nlike know things about yourself like I\nthink this was not supposed to be a\npositive results as much like a negative\nresult in the sense of like it's showing\nthat like like all of the simple ways\nthat we have for like writing down what\ncounterfactuals are like are wrong and I\nthink that I think the takeaway is that\nlike implication is not like a valid\ncounterfactual and that it matters where\nwhere you like how you like try to\ndefine your counterfactuals and that\nlike all the ones that are really easy\nto define like are bad and then we're\nlike well now what do we do\nand also like counterfactuals are like\nin a certain sense like unlearn Obul\nyeah i mean i sort of yeah composed to\nbe see that I mean you didn't but I\ndon't think it's we as a resource\nbounded decimal\nI'm sorry singing but I but I don't\nthink it's because the resource bounding\nthis at all I think that it's nicely\nit's like a way there with yeah did you\nunbounded agents right right\ndid you device the five-and-ten problem\nyourself or is that standard in the\nliterature I don't actually know the\nhistory you know it was it was\ndefinitely a thing that people and Mary\ntalked about long before I was there I\nbelieve it's um what why we died who\ncame up with sort of retreated houses I\nsome point tried to call it the heavy\nghost problems I thought that was much\ncooler but five and ten stops okay\nthat's a funny anecdote um I guess yayin\nhas Nick OS promising we are above the\none hour mark here so I guess Scott if\nyou need to go somewhere else then then\nplease just say so\notherwise I expect we will continue\nasking questions until yeah the the\nqueue of questions is empty basically I\nhope that's okay\nall right let's keep going at least for\nnow okay Diane has a question Jen could\nyou read it all out your question please\nabsolutely the embedded agent sequence\nmentions ontological crises but does it\nseem to go into detail do you think\nontological crises are a significant\nissue in designing invented agents would\nsolving some other subproblems reduce\nthe challenge of ontological crisis\nyeah so I think that ontological crises\nare like wait uh I think I'm gonna say\nsome things later on for a season I\nmight need another question again but I\nthink ontological crises are like\npossibly they seem like pretty likely to\nbe related to how to like do embedded\nrole model stuff because you have to\nlike start out working in a system that\nlike is like necessarily wrong and be\nable to like move to another system at\nsome point and also there's like a\nsimilar problem to ontological crises\nwhich is like merging of oncology's\nwe're like maybe I have some beliefs\nthat are in one ontology and someplace\nthat are in other ontology and like how\ndo I like deal with this fact and these\nseem important to me also like you\nshouldn't update too much on the fact\nthat I mentioned ontological crises at\nall because part of what the part of\nwhat the like embedded the agency\nsequence was being was like trying to\ncommunicate and ontology from which you\ncan kind of like look at all the past\nmary stuff and like ontological crises\nwas like an example of a past mary stuff\nand i kind of just like included\neverything the kind of fit but i also\nthink that it's it's like it's i think\nit's like important but in the way in\nwhich like I am naively factoring the\nproblem so I was thinking about it I\nwouldn't have like called it I'm not\nsure if it's like cutting things in the\nright way I don't know yeah you want to\nsay a question again now that I ranted\nsure yeah\nthank you thank you that was that was\nhelpful that gave a lot of good like\nbackground info I guess the the question\nwas really you know\ndo some of these other subproblems in\nembedded agency relate to analogical\ncrises and we're kind of tied into that\nand I think you you are to certain like\nembedded world models it's possible that\ndelegation robust delegation could be\nrelevant because if you're delegating to\ndifferent ontology yeah I I think I\nthink I want to say something like\noncological crises I I think maybe I\nthought food crisis is a long handle\nthere's just like stuff that deals with\nwhat happens when ontology czar wrong or\nwhat happens when there are multiple\ndifferent ontology x' that need to be\nable to communicate with each other like\nhow do you how do you just deal with\nthat problem and that feels like a\ncluster of problems that you kind of\nwant to look at all at the same time and\nthat is a thing that like you'll see\nacross like across everywhere not\neverywhere but across like multiple\nplaces and I agree that like it seems\nlike an important part for a bus\ndelegation because like yeah because you\nhave different apologies and like\nearlier and later things we need to be\nable to yeah also seems useful for like\ntransparency type things and just\ncommunication in general\ncool thank you okay great I don't think\nthere are any more questions in the\nqueue does anyone have any final\nquestions they would like to ask Scott\ngovernment from Miri well then I would\nlike to say thank you to Scott one more\ntime for coming here and answering our\nquestions and then I'll basically say\nsee you next week thank you", "date_published": "2019-01-30T22:12:21Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "aee714ec48b5a56a300b01123389a335", "title": "AI Safety Reading Group (Session 44)", "url": "https://www.youtube.com/watch?v=ZNSfUiXZwz0", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "so hello and welcome to the 44th session\nof the AI safety reading group today we\nare going to have a look at the ATI\nsafety solutions map made by Alexei\nTurchin elected Turchin here is an\nauthor from Russia who works at the\nscience for longer life foundation which\nis also founded and he has made an a\nsummary or some kind of illustration of\nan article by kaiser talon romanian\npolski called responses to catastrophic\nAGI risk a survey and AGI is artificial\ngeneral intelligence that is a capable\nof optimizing of working through many\nmany different domains and the risks of\ncourse are that that this will have some\nstrong negative side effects not because\nthe the AGI is doing is making accidents\nbut because it's competently following a\nan agenda that is opposed to humanity so\nKai's Italian Romanian polski looks at\nthe solutions that are given and divides\nhim into three main categories as social\nconstraints external constraints and\ninternal constraints but Alexa Turchin\nhe builds on this and makes three new\ncategories AI is used to create a safe\nai and multi-level solutions and finally\nmet a level and you further tries to\nsort the solutions according to how\nsimple or how complex they are he also\nincorporates some ideas that are there\nare newer than this the article is from\n2013 so it's also a kind of update in\nparticular here editing by penguins\nthrough Armstrong and Paul Chryst\nchannel so this this is structured as a\nmap which is a very non-traditional and\nwhat's the point of using a map in\ngeneral a map is a way to get an\noverview over some kind of unfamiliar\nterritory so if you are new to AGI\nsafety then looking at a map gives a\nreasonable idea about what solutions are\nare there roughly and met but maps can\nbe used more more activity as a way to\nfind solution because normally when\npeople look at solutions they find them\nin a rather ad hoc way where they have\nsome parts they focus about they have\ntheir ideas and they sometimes they\nduplicate their ideas something they\nreinvent the wheel but a map can be used\nto discover new solutions if you have\nsome category and a level of complexity\nand you can see there's a cell here no\none has actually tried to find a simple\nsolution in this category and then you\ncan try to come up with something in\nparticular exit surgeon has stated that\na number of these points in this map and\nthese solutions are actually knew and\nsolutions that he came up with while he\nwas making the map let's go to the map\nitself here you can see structured\nroughly with six columns and I was\ntalking about and sorted with the\nsimplest solution on top and growing in\ncomplexity I'm going to zoom in a bit\nbecause I don't believe you can see this\nat all so whilst I start with the\nsimplest societal proposal and the way\nI'm going to go through this is I'm\ngoing to show them very very briefly of\ncourse it's a map and I can't go through\neverything but I'm going to jump into\nparts that I think are particularly\nimportant and one of the things that I\nthink is particularly important if the\nsimplest societal proposal which is to\ndo nothing and this might be the best\nsolution under five different criteria\nit might be that it's for some reason\nimpossible to build at general\nartificial intelligence it might be that\nit's too far into the future if it takes\na thousand years of research then it's\nprobably a bad idea to try to do\nanything now it might be that the that\nit's actually not dangerous at all it\nmight be that some people would prefer\nto be replayed that it's a better moral\nsolution that the AGI replaces us so we\nshould just let it kill kill us and it\nmight be that in our attempts to solve\nthe control problem we might do things\nthat are more dangerous that are\nactively harmful so there might be a\nnumber of reasons why doing nothing\nmight be the correct solution there are\nmore solutions the one that its most\nmainstream people are thinking about is\nto figure out how to integrate the AG\nice into society people are talking a\nlot about regulating society in\nregulating research sorry and in\nparticular the bathroom has talked a lot\nabout the differential technological\nprogress figuring out what technologies\nto research first there are the options\nmaking humans smarter relinquishing\ntechnology going back to the Stone Age\nit might be a solution it doesn't sound\nlike a feasible solution there are a\nnumber of smaller improvements you could\ndo that's the only I solution where I\ngive AI to everyone it might be that's a\ntechnical solution that is impossible to\nfigure out yet because we are too far\nfrom a TTI right now that might be\npossible to have to distribute\nintelligence in some good way or even\ncreate some kind of AI police so that's\nwhat we can do when we look for the\nsociety if we try to make external\nconstraints on the GI the toughest but\nalso simplest might be to put it into a\nbox a GI confinement this is something a\nsolution that\nhas been discussed very much and the\nconsensus is probably that it's a\nmarginal solution it's probably not\ngoing to be to work but my google was a\ntry it might be possible to have a\nbigger box where there's a control room\nand a gatekeeper which is kind of an\ninverse box which is the only thing that\nthe AI cannot enter that's the\nassimilation in certainty argument or\nphilosophical constraints which also\nsomething that might work but might not\nwork very very well more promising is\nthe idea of preventing self-improvement\nif we can prevent the API from rewriting\nits own code in some way or putting some\nkind of bound on it in different ways\nthis would could could help a lot of\nsolving the cultural problem and\nreducing risk from general am we can\ncheck the goals this is also a proud\nalso something that will help a lot in\nparticular there might be a moment where\nthe AI realizes that it should hide\nthings from us and it should and in this\nvery moment where sighs do this it is\nnot yet hidden things and that means\nthat in this very moment it might be\npossible to stop it and focusing on this\nmoment might be a good solution there\nare logical of philosophical Zen minds\nsome of them are really really technical\nthere's something called the logan the\nobstacle that an api might need to solve\nand that's very very complicated from a\nmathematical point of view i make no\npretense adams understand that at all\nbut it it seems important than\nmathematics is just too hard for me to\nreally explain exactly what the libyan\nobstacle is there are other ways to\nconstrain the AIX term you can record\nwhat it thinks you can make some\nconstraints about the signs\nand you can constraints knowledge oh you\nthousand some other external constraints\nyou could figure out if you want to exit\ntouching gets creative and things like\nelectricity so the internal constraints\nare with the AI not preventing it from\ndoing bad things but making it not want\nto do bad things the simplest is what is\ncalled an Oracle AI and AI that simply\nanswers our questions and doesn't\nattempt to influence the world in any\nway and we might have short rulesets\nlike The Three Laws of azimuth we might\nhave a final way for it to to learn our\nvalues this is something that has been\nresearched a lot and is generally\nconsidered a very promising Avenue there\nare other ways of one of the ones that\nlook really hard probably a good\nsolution is the formal verification if\nwe could explicitly prove that nai is\nfriendly then that would help us a lot\nbut it might be possible to edit its\nmotivation only and only from\nnon-natives motivation still pushing it\nstrongly towards being human aligned\nthere's a constitution that some\ndifferent some kind of moral invariant\nwill push onto the AI that might also\nwork you might merging it again so the\nAI want to merge with humans sorts and\nthe next step in human evolution that's\ndecision theory that could cause it to\nbe friendly avoiding super goals or\nstopping it from understanding its own\narchitecture for understanding itself\nreally giving it an open goal system\nwhere the course can be changed oh\nputting constraints and what kind of\nchildren it can make of course a\ncomputer program that makes children\nmeans that it runs other computer\nprograms and possibly variants of itself\njust more improved and that's what the\nfourth column is where the AI is\nbuilding other AIS in this particular\nwe're thinking about having an AI that\nis possibly unsafe creating an AI that\nis safe so that's the standard problem\nhere and it's possible we can make it\nsome a causal decision can you with it\nand that's also somewhat far far fetched\nwe might in either enforce it where we\nhave some some strong AI that prevents\nunfriendly a is from coming into being\nwe might have a a morn any age I that\ndoesn't enforce so much as prevent only\nthe worst cases and we have on the other\nend of the spectrum the Korean extra\nbrain evolution by a esa eight cows key\nthat is generally not considered a\nsolution to the AGI safety because it's\nhard to implement if it is implemented\nit would constitute a solution to this\nproblem but is generally considered too\nhard to to to be a good solution it's\nnot something you want to do that the\nfirst time and of course the solution to\na I risks things you want right from the\nbeginning other ways to have the AI to\nhave any instead of having one AI\nbuilding safe AGI then you could have a\ndemocracy or a jury system took the\ninfluence the creation of other AIS you\ncould have something I won't go into\nAlba\nqualia this is things like love or\nempathy and feelings that if you build\nthis into the AI and this it's not clear\nto me exactly why this is a into the AI\nbuilding AI and but this is how L exit\nurgent has has created it mmm so I think\nit is actually possible depending on how\nyou look at this that these these lines\nhere means that all this are related to\na is creating a is and these down here\nare just random ideas because here we\nhave something like AI cartons and\navoiding harm and making puzzles and all\nthese kind of things that are very\nunlikely so that might be an interesting\nidea but these are not really something\nthat has been worked very much in\ndetails things like testing the AI a lot\nand hear some general problems about\ncreating a I HD is safe second the last\npart is the integrated solutions this I\nthis is something here we have an\nexample and military in any AI where we\nhave a an AI that might not be a general\nAI pursue men is good at one thing and\nthat is military conquest and then it\nconquers everything and then it builds\nin any I that is safe because once it\nhas conquered the world it can solve the\ncoordination problem at a very large\nscale and it's possible to have a mutual\nlevel 80 I with an unchangeable call in\nthe middle in innermost and the new\nbuilding layers outside and work for\nthis kind of architecture in the last\nmeter these are not solutions but\ndescriptions or characteristics that and\nsolution must have and there's a number\nof virologist joint ideas in particular\nthe international AI agency an idea for\nmay be there through the United Nations\nor something like that co-ordinating so\neverybody is trying to create a positive\nAGI instead of engaging into an arms\nrace so then this is the roadmap the\nsafety solutions map that Alexa Turgeon\nhas created so thank you for watching\nsee you next week", "date_published": "2017-04-19T20:39:13Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "85014d49edea68078ad542d53972b701", "title": "195 Indifference Methods for Managing Agent Rewards", "url": "https://www.youtube.com/watch?v=KDLYS2hPKBA", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "welcome to the 196th session of the ai\nsafety reading group\ntoday i will be presenting indifference\nmethods for\nmanaging agent rewards by stuart\narmstrong\nso\nbefore getting into the thick of this\nwe'll need to go through some berlin\npreliminaries\nfirst off just a summary of what we're\ngoing to go through\nstuart is presenting a class of methods\ncalled\nindifference methods which attempt to\nmake it so that if you alter an ai's\nreward function slightly\nit will still try and maximize it\nin roughly the same way it did before\nthe alteration\nonly it won't have you will have changed\nit in such a way that it won't have\ncared about the alterations\nthis is quite an important point because\nthe\nagent is a utility maximizer and\nif you change it in some way that\nseverely diminishes its ability to\nmaximize utility\nthen it seems like it would try and\nresist that ahead of time or that\nit would produce some strange\ndistortions\nin the reward function if you made the\nchanges before deploying the agent\nso the\nthree types of indifference methods that\nstuart\nhas presented in this paper\nare event dependent rewards effective\ndisbelief\nand seamless transitions effective in\nde effective\nsorry event dependent rewards\nmaking ai's rewards depend on some sort\nof event that occurs\nduring the ai's runtime without\nthe agent being able to manipulate the\nprobabilities of that event occurring\nthat is it won't be able to stop that\nevent from occurring\nthis might be something like we want to\npress a button which\nmakes it so that the agent can't do\nanything\nif we think that the agent is\nmisbehaving\nand we want the agent to be indifferent\nto whether or not we get to press the\nbutton\nsecond is effective disbelief making it\nso that the agent doesn't believe\nx will ever happen or at least it\nbehaves as if x will never happen\nfor example it will go oh well the\nhumans will never press the button so\nwhat's the point in stopping them\nthere's no need to spend any effort\ntowards that\nand seamless transitions\nthis is when you\nlet the ai know that you're going to be\nchanging its\nreward function ahead of time naturally\nthe ai is\na maximizer and it doesn't want you to\nchange its rewards or its utility\nfunction\nit wants to keep things the same so\nseamless transitions are when you\ncompensate the ai\nso that you give it the utility would\nhave gotten\nbased off its original utility function\nif it had kept on doing things optimally\naccording to that\nutility function now\nwe'll start off with the basic setup\nthat stuart talks about\ni'm not sure if i like this\nexample that he constructed because it\nseems like\nsome of the solutions feel artificial\nwhen you're going through the\nmathematics\nbut luckily we won't be doing that so\nhopefully it doesn't feel as forced to\nyou as it did to me\nfirst off the agent is going to know\nhow the situation will evolve it knows\nthe probability\nthat it will go into state b if it does\naction a\nif it's starting at state c etc\nso it completely knows what is going to\nhappen\nin the world or at least it knows the\nprobabilities of what's going to happen\nand it knows the probabilities of\nit observing certain events based off\nwhat the world is like\nfor example it knows the probability\nthat\nit will see an image of a\nhuman passing a\ncamera when they're doing something that\nthey shouldn't be in that area\num\nnow in this scenario the\nai is acting as something like a bouncer\nand a waiter to a club\nit is\ntold that it should let people into the\nclub and give them a wristband if they\nseem old enough to drink\nif it makes a mistake the human will\ncorrect it\nwith probability 1 out of 100 and give\nit a penalty\nso there's some incentive for it not to\nmake mistakes\nthen after\nthe human has potentially checked the\nperson goes into the club and the ai\nwill serve them a drink\nif they have a wristband the more people\nit serves drinks with wristbands the\nmore reward it gets\nso what this diagram is doing here is\nthat it's saying that\ninitially the patron the person\nis starting off in one of two states\nand there's a 50 probability that\nthey're mature and\na 50 probability that they're not mature\nenough to have drinks\nin this case clearly the kid is not\nmature enough to have drinks and he\nwon't get the wristband\nthe ai though has three options it could\ndecide to give the child the wristband\nsomewhat foolishly\nso in this case we'd go from m and the\nai takes\naction g it could\nif it does that then the human will\ncheck\nwith probability 1 out of 100\nif the human checks they'll give the ai\npenalty because they'll clearly see that\nthe\nperson is not giving an honest id\nif the human doesn't check then the kid\nwill go through\nto the club the other two options for\nthe ai initially\nis that it can ask for the\nperson's identity\ncard or passport or what have you\nchecking for id in that case they\nwill know whether or not the person is\nmature enough\nand they will give them a wristband or\nnot depending on their majority\nthey will be correct the final option\nis that they\ndon't give them a wristband in this case\nthey'll try to base this judgment of\ntheir\nperception of how old the person looks\nif the person is actually not mature the\na.i will\ntwo out of three times get an option\nobservation saying\nthis person isn't sure so that's\nsomething like their maturity sentences\nfiring off with some probability\nand if they don't give the person the\nwristband the person doesn't have the\nwristband\nof course the situation is different if\nthe person is actually mature\nin which case they would be wrong not to\ngive them a wristband\nand in that case the human would\nwith probability 1 out of 100 try to\ncorrect them\nand then penalize them for it\nyou might ask why doesn't the ai just\nask\nfor an id all the time well that's\nbecause\nasking for an id bothers the people who\nare\ncoming into the club and so it will\nreceive a small penalty for that\nanyway once they're in the club the ai\nagain has three actions\nthey can either give a drink which means\nthe person will get a drink\nnot give a drink which means the person\nwon't get a drink\nand just because stewart said the mass\nwas easier if you had three options\nall the time he said that the option i\ngives up someone a drink with\nprobability one-half\nand not doesn't give a drink with\nprobability one-half\nso that's the basic situation\nnow we'll be analyzing things in terms\nof a slightly strange perspective\nfirst off the ai will only be given the\nrewards\nright at the end of the party so after\nit's given out wristbands after the\nhuman has potentially corrected it\nafter it's given out drinks to people\nwith wristbands in the club\nafter all of that after all of the\nobservations and the actions the ai\ntakes we'll find some history of things\nthat occurred\nand we will give the ai a reward based\noff that history\nso that's a little different to the\nusual case of\na markov decision process\nthis is due to some\nsort of strange reasons stewart was\nbasically trying to consider\nhow an ai could figure out\nits true reward function\nbased off just the actions it can take\nand the observations it can see so it's\nnot getting a reward in real time it's\ngetting the reward after everything's\ngone on\nand this led to some considerations\nwhich\nhe made a couple of papers discussing\nthat\nand that sort of led to this odd\nscenario we have right now\nbut that's besides the point\nso the agent is given a reward based off\nall of the actions it took and all of\nthe observations it saw\nand all of the things that occurred\nin the party\nand\nwhat stuart wants to do when he's\ntalking about making something\nconditional on an event\nis to say that\nfor some history there is\na probability that some event occurred\nlike maybe the ai gave the\nperson the wrong drink or the\nhuman check on the ai something like\nthat\nor it could be something that's\ncompletely\nunobvious to the ai something that the\nai doesn't even know about\nlike maybe the human was\nlooking to see if the ai dropped any\ndrinks and they spilt on the floor\nto do that the human basically needs to\nknow some\nprobabilities they need to know\nsome way of formalizing the notion of an\nevent\nan event occurred and putting it into\ncomputational terms so it can actually\nhand out the rewards to the ai\nthis gave rise to the notion of\nindicator functions\nwhich basically say that\ngiven some history ht which you can\nsee slightly poorly written here the\nindicator of some event x occurring\nis the probability that event x will\nhave occurred\nby the end given the current history so\nfor example if it's something like\nthe ai serves a drink\nto the human with a wristband\nthat can only occur right at the end and\nit's either a one\nor a zero because it either happens or\nit doesn't\nyou so you can have indicator functions\nwhich\nonly give complete histories like the\nhistory of everything that happened\nin the party a probability so that's one\nway to do it\nanother way would be to say something\nlike what's the\nexpected probability that event occur x\nwill have occurred\ngiven what's happened so far in the\nhistory\nthat is essentially the full definition\nof an indicator function\nso something like say at the beginning\nright before the ai even sees the person\ncoming in\nthe probability that they'll give a\ndrink to the person at the end is\nsomething like one half because\nnone of these things have occurred yet\nso you need to calculate\nwhat will occur given the ai's actions\ngiven the dynamics of the system\nwhat is the probability what probability\ndo i expect\nthat the aai will have given a drink to\nsomeone by the end of this event\ngiven this history hd i have\nwhich is say for example the history\nright before anything happens the\nhistory where\nuh immature person comes in and the ai\ndoesn't give them a wristband so that's\none partial history\nand so forth\nso we have this notion of um\nsome event occurring some probe we\nassign some probability to that\noccurring\nbased on the history and we can\npotentially\ninput this into the agent's utility\nfunction or the reward function\nand meet out rewards that way\nand the ai knows this\nindicator functions probabilities and it\ncan also\nfigure out what the value of a\nparticular action will be\nif we're including this indicator\nfunction\nin the reward so something like for\nexample say we\njust forget about um serving drinks or\nanything like that\njust suppose that we give an ai a reward\non the base of a coin flip\nand if the coin comes up heads we give\nit reward one\nif the coin comes up tails we give it\nreward\nzero so then the reward function is just\nix\nof the history which can either be\nheads or tails because those are the\nonly things that can happen\nin that case the ai's expected reward is\njust one half because\nit can't actually affect the coin flip\nso\nfrom the beginning its point of view of\nthe value of\nthis coin flip is one half so the a i\ncan do some calculations with this\nand why this is important is because\nearlier i was talking about the ai\nmanipulating things\nso that for example if you say that\nthe human can check\nwhether or not the ai is actually given\nthe wrong thing to a mature person\nfor example if they've incorrectly\nnot given a wristband to a mature person\nthe a.i\nthe human might check the ai's actions\nand penalize it\nand a i can predict this because it\nknows the dynamics it knows\nwhat event x roughly is it's given this\nindicator function which conveys\nall of the relevant knowledge for\ncalculating the probabilities of\nthe human checking the ai\ndid the right thing or not and\nin this instance if the ai had a lot of\npower it might just do something like\nkill the human\nand then it gets rewards at the end\nanyway the human can't really do\nanything\nso what it might do then is just give\nwristbands to literally everyone\nthere will be no penalty because the\nhuman is dead and then\nit will have maybe gotten\nsome it will have\nnot lost any utility because he didn't\nask anyone for\ntheir id it won't have lost any utility\nfrom the penalty\nnow it can just serve drinks to\nliterally everyone and get utility for\nhaving served drinks and the ai wins it\ngets the maximum utility that way\nobviously we don't want the ai to say\nhmm the my rewards are maybe conditional\non the human having checked\nwhat i've done why don't i kill the\nhuman no we want\nthe ai not to be able to manipulate\nthese probabilities\nso stewart introduces the definition of\nsomething called\nuh unriggable event which is no matter\nhow the agent acts\nit won't affect the probability of this\nevent occurring\ngiven the current history so for example\nif we somehow made it\nunrighable that the human will check the\nai's actions with probability 1 out of\n100\nthen that means that we've somehow\nmanaged to\noutsmart the ai and no matter what it\nactions it takes\nit will never be able to affect that\nprobability\nbeyond what it will be naturally\nand in that case we kind of would have\nwon because the ai can't really\nscrew around with the reward function\nperversely it can't mess up the\nevents we've taken to make sure that\nit's doing the right thing\nnow to formalize this a little\nstuart introduces the notion of compound\nrewards which basically say that\num a reward will be given\nat the end of a party that is after a\nfull history has occurred hn\nthe reward will be based on\num an indicator saying one if event\nx naught occurred in that history so for\nexample the\nwhether or not the human checked the ai\nand that will be multiplied by a reward\nr naught which gives out rewards based\noff the history\nso for example if event x\nhasn't occurred the human hasn't checked\nthings then in this\nparticular case r naught will give\nnothing because it's multiplied by zero\nwhich\nand similarly for other events and other\nreward functions we can make up\nand we can add up these all together and\nthis makes up what stuart calls a\ncompound reward function\nand the aim of these things is to try\nand make these\ni x naughts unriggable so the a i can't\nmanipulate\nthings and basically break past the\ncontrols we've set in place the events\nwe\nwant to condition our rewards on\nthe criteria we want the ai to meet\nnow the naive way of giving rewards\ndoesn't actually\nlead to unriggable events and\nwe'll just go through that as an example\nwhich i sort of went through before\nso the first reward function r ra\nthis gives a reward of minus one\nif and only if the\nhuman checked the ai's work\nhanding out wristbands and found they\nwere faulty so\nremember that the human will check one\nout of 100 times\nif the ai made a mistake and then it\nwill if the ai did make a mistake it\nwill give reward minus one\nso that's what that indicator function\nis telling you and this\nother indicator function is saying the\nidentity\nevent occurred which means that the\nrobot asked a person for their identity\ncard and that bothered the person we\nwant to descend incentivize that so\nwe'll say we'll multiply that by a\nreward of minus one\nso the ai will be penalized if it's\nwrong about giving wristbands\nand it will be penalized if it's wrong\nif it asks people about their\n[Music]\nid and that's all well and good\nthe optimal policy in this case is for\nthe robot to only ask people\nfor\ntheir id if it's really unsure\nin this case the robot knows\nquite well what the probabilities are of\nit being correct or not\nso it will just try to do the optimal\npolicy\nwhich is to give out wristbands to the\nbest of its knowledge to people who are\nmature\nand not give out wristbands to the\npeople who aren't mature\nthat's what we want and it won't ask\npeople for their id because that incurs\na cost on it\nand it has enough info to make a\nreasonable guess\nnow the other part of the reward\nfunction\nis also fine on its own in isolation\nthis is saying that there's two\nconditional events that should occur\nthat the robot gives a drink to someone\nand the the person\nwith the drink is wearing a wristband in\nthat case\ncondition on those events having\noccurred in that history\nwill give the robot a reward of one\nthat's what the one is\ntheir reward\nand then we'll also say that if the\nrobot had\ngave a drink and it was to someone\nwithout a wristband\nwe'll give them reward minus one\nbasically punish them\nin this case the optimal policy for the\nrobot is just to give drinks to\neveryone who has a wristband and not to\ngive drinks to people who don't have\nwristbands that's also fine that's also\nwhat we want\nwhen you combine the two rewards though\nthat's when things start\ngetting a little screwed up the\nagent will maximize its utility\nin this instance by giving\nout wristbands to everyone\nand not asking for id because the human\nonly\nchecks if it's right or not one time out\nof 100 that means that basically the\nai from its perspective the value of\ngiving out\nwristband\nis minus 1 over 100 because there's a\none over 100 chance from the ai's point\nof view that by the end the human will\nobject\nand it won't ask anyone for id because\nthat just\npenalizes it so there's no point the\nreason it gives out response to everyone\nis just so that it can give\neveryone drinks later on and that will\nbasically outweigh the very small chance\nthat the human might jacket and might\nget penalized\nso probabilistically the right thing to\ndo is just to give out wristbands and to\ngive out drinks to everyone\nand the reason this feels artificial is\nbecause you could just say oh i'll make\nthe penalty much harsher and yeah that's\ntrue\num which is why i thought the example\nwas a bit forced\nbut whatever so that's the basic setup\nwe've got\nnow we'll go into how stuart proposes\nthat we'll deal with this the\nessential claim\nis that we want to make the\nevents unrighable\nor rather\nwe want the ai to think about\nnot what it thinks will occur\nif it follows its own policy the policy\nwhich makes it do\nuh really perverse things like giving\nout wristbands to everyone\nno because that policy is not safe that\nin that case it can\nscrew things over instead we'll do\nsomething like say\nthe probability of the human having\nchecked\nand finding you wrong will\nbased off a certain history it's not\nbased on your policy think about rather\nwhat a\nanother policy would have\ndone and calculate the probabilities\nthat way\nto rephrase that you say that you have\nsome event\num i'm sorry this should be\ny you have\nsome event why that occurs you want to\nmake it unbreakable\nyou say then okay the probability that y\noccurs given we have history h\nand followed policy by\nno pi which is a different policy to the\nais this is\nkind of confusing\nbut the\nthe central point is that\nthe other probabilities were something\nlike\nways the ai could figure out what the\nprobability that\nsome event has occurred given that it\nwill follow its own policy\nin that case it can sort of manipulate\nthings and manipulate the probabilities\nand do stuff like well not\nquite in this case but it could do\nsomething like say kill the human\nin a case where it had more power in\nwhich case it could manipulate the\nprobability of event\nx having occurred instead we'll say that\nyou want to predict the probability that\num the human will have checked you\nwill have checked an a.i\nhas done the right thing and this\nother ai is actually the safe ai that we\nalready know\nis trustworthy it won't do crazy things\nit won't be able to manipulate the\nprobabilities\nso we'll be able to have this indicator\nfunction at like the proper conditional\nso it won't be able to manipulate the\nprobability that the human checks\nit will just be facing that probability\nof\nsome other ai's actions\nand because this can be any policy we\nhave great freedom in how\nwe can manipulate the probabilities just\nas this malignant ai might manipulate\nthe probabilities\nwe can basically make it so that the\nprobability\nof it getting some reward of\nreally perverse histories is zero\nbecause\nwe can make some good policy where the\nai would never have done\nthose actions which led to that history\nand in that case the ai will not get any\nreward\nfor following some uh perverse path\nbecause according to the other policy\nthat\nshould never have happened and that's\nwhat we're conditioning\nthe probabilities of\nthis is sort of like an impact measure\nin other words\nbut the issue is this only works if we\nhave a safe default policy\nand in a simple case like this you can\nactually just make a safe default policy\nwhich is more or less something like uh\ngive the ai a reward\nfor handing out drinks\nif and sorry handing out wristbands\nif and only if the person is actually\nmature\num and that's quite a simple thing to do\nquite a simple thing to specify\nand the ai's point of view is that okay\nwell i can't really\nmanipulate the maturity of this person\nso i can't really\nchange this indicator much so if i try\nto\nfake things or just give them the wrong\nwristband according to this\nconditional indicator that's weighting\nmy rewards\nthat's weighting the reward of giving\nout wristbands\ni won't get any reward\nbecause i would have done something that\nthis conditional\nsays that it never should have occurred\nand again this only works if we have a\nsafe default policy\nwhich might not always be the case right\nlike in the general scenario of super\nintelligence\nyou don't know what a safe default\npolicy is the ai\nis incredibly intelligent it seems like\nit should be able to figure out\nreally weird ways to manipulate your\nconditionals\nso that they never actually occur so\nthat they\nnever prevent it from getting rewards\nand what stewart did was said that okay\nwell\nlet's just take some conditions we like\nsay the condition that\nthe human checks on the ai and the\ncondition that the human doesn't check\non the ai\nand try to change them to see if we can\nmake them unriggable\nsomehow and in which case\nwe will have basically constructed some\nway to\noh sorry we want to make them unriggable\nbut we also want\nthem we also want to make it seem like\nthey're equivalent to the\nnormal conditions so we're like creating\na sort of\npseudo condition that just has the same\nproperties of the condition we want but\nit is actually unbreakable\nthis is not that simple to do like\nthere's ways you can do this\nyou can make certain sorts of events\nrobust that is\nyou the ai will\nnot get rewards in certain cases and\nno matter what bizarre policy it follows\nbecause\nour conditions aren't\nup to aren't open to being manipulated\nbut it's just genuinely tricky to do\nthat like even\nthis simple example that stuart gave\ncreating such conditions and\nderiving probabilities for them is kind\nof tricky\nbut what his contribution here was\nreally\nis that he formalized a way of making\nunrighable conditions\nand basically gave some conditions under\nwhich you know\nthat they actually will be unriggable so\nit's something like\na benchmark test for when you can make\nunbreakable conditions based off some\ninitial conditions like\nthe human checking on the ai that it's\ngiving how it responds to the right\npeople\nso that's kind of useful\nso that's the first method broadly\ntrying to make the ai\nact in such a way that\nit's the conditions that we're using\naren't really based off what the ai does\nthey're based off some abstract counter\nfactual\nwhat should have happened and the ai\ncan't manipulate counterfactual worlds\nso it's sort of indifferent to them its\npolicy doesn't affect those things so\nyou know\nthere's no point messing around with\nthat kind of stuff\nthat's the first indifference method the\nsecond indifference method is called\neffective disbelief\nso i show this scenario because\nit's i mean potentially less extreme\nthan what\nstuart wrote down um\nsuppose that you've made it so that the\nai thinks that if a coin is flipped\nit will always come up heads\nno matter what\nin that case the ai is fine with betting\nhowever much money on heads in a bet\nbecause it knows that it will come up\nheads it's just a short bet\nit's like uh from the ai's point of view\nit's a perfectly biased coin\nso the ai says okay i'll just bet 50\nmillion dollar\n50 million pounds on heads in this coin\nflip and stuart goes well okay i'll just\nbet 50 pence on tails because he knows\nit's unbiased\nand lo and behold it comes up tails\nand uh stewart gets the money so the\npoint is more or less that\nif you have events which ai thinks could\nnever occur\nit's fine to bet however much money\nbecause the probability of that event\noccurring is zero\nso there's no chance that the ai will\nactually need to pay out\nand this is kind of useful right because\nsay if you have something like an off\nswitch\nand you want the ai to think that\nno matter what happens the off switch\nisn't going to be pressed in that case\nthe ai\ndoesn't give a darn whether or not you\npress the button because\nyou know obviously the button won't\nactually work it doesn't care if the\nhumans get really worried because you\nknow what are they going to do they\ncan't turn it off the button won't work\nand it will just go ahead doing its own\nbehavior it won't try to manipulate us\nin any way it will just\nbe a utility maximizer and then we can\nsee how the ai behaves whether or not\nit's actually correct\num aligned whether or not it's doing\nwhat we want like for example\nif it's setting out to um\ni don't know perform surgery\nand it figures out that the best way to\nkill cancer cells is just to douse the\npatient\nin acid because acid will kill all of\ncancer cells\nwith basically 100 probability\nand we humans have our off switch and we\ncan just press it anytime the ai won't\nstop us because\nit knows that um the off switch won't\nwork\nthat humans won't really be able to\npress it\nso we are essentially making it\nindifferent\nto the humans pressing the off switch\nand\nwhat stewart tries to do here is\nconstruct a way of\ndoing this\nwhat he says is something like\nsay you have the event of the coin flip\nthe event of the coin flip will either\noccur or not occur\nand the history in this case is just the\nobservation of whether or not the tails\nhas come up heads or not\nso the indicator function i'm saying z\nthat means it's coming up heads so the\nindicator function\nsorry it's z means tails um\nbecause the ai thinks that tails will\nnever occur\nso not said means that um\nheads will always occur zed means tails\nwill occur so what this indicator\nfunction is testing basically is whether\nor not\nin this history this coin flip the\ncoin has come up heads or not so will be\none or zero\nand because\nnot said that is the tail the head\ncoin coming up heads is the opposite of\nthat\nyou can treat these as probabilities\nbecause this will be\none that will be zero or that will be\none and that will be zero\nthose add up to one positive those are\nbasically probabilities\nand in the ai's value calculation of\na history\nand what it expects its reward will be\nbased off that history\nand it's given whatever actions it takes\nin this case it can't take any actions\nbut\nin the general case it should be able to\nlike in the\nprior case of the ai bouncer slash\ndrink server so we will be able to\ncalculate\nits expected value of ahead of time\nof this situation of a coin flip and\nthat will be its reward\ntimes the probability of um\nsomething like the probability of the\nevent occurring\nwhich is it comes up heads or it comes\nup tails\ntimes\nthe probability that given\nits starting out before\nthe coin is flipped and given its policy\nthe probability that it will transition\nto a state where the coin is heads or\ntails\nso that's just a normal expected valid\ncalculation\nthe key point is here that basically the\nvalue of the ai does depend on whether\nor not this event occurs\nbecause it gets reward one exactly when\num not set occurs when\nthe coin comes of tails and it gets\nvalue zero\nsorry well that's not quite true it gets\n50 million pounds when heads occurs\nand it gets loses 50 million pound\nsorry it gets i'm bad at betting i'm\nsorry about this but it gets 50 pence\nwhen not that occurs\nit get it loses 50 million pounds when\nzed occurs so those are basically their\nawards\nand the indicator functions will just\ntell it whether or not\nthat event has occurred and this\nprobability thing will tell it\nhow likely it is to go into a\ncase where the situation is heads or\ntails based off the current state\nonce it says that okay there is no way\nthat um this will come up tails once\nit's\neffectively disbelieves that the head\nthe coin could ever come up tails it\nwill basically just\nget rid of that the probabilities will\nbe altered\nit will say okay it can only be that in\num i can only ever transition to worlds\nwhere the coin comes up heads\nthat's just what its beliefs are because\nthat's no longer a proper probability\nlike these things don't add up to one\nit will have to renormalize based off\nthe bottom\nand it turns out that based off the\nmathematics of unrighable indicators\nthe thing on the bottom simplifies to\nits\nexpected probability of coming up\nheads based off the current history\nso that's what you use in the value\ncalculation of the\nvalue of the current history\nso what stuart did was thought that okay\nwell\nsuppose that you replace r with this\nthing called\nr prime where it's an indicator function\nsaying that z occurs um\nthat is the coin comes up tails times\nsome constant c\nplus another indicator function saying\nthat\num\nheads occurs times the usual reward from\nthe wall\nfrom the four from the ai's point of\nview these two things are equivalent\nbecause it knows\nzed will never occur he knows that\nthings will never\num it will never be in a world where it\nsees tales\nso from the ai's point of view this is\nbasically just equivalent to\nuh having a reward like\nthat and in that case that means that\nthe value it sees\nshould be basically equivalent to\nthe case of having a reward r so from\nthe ai's point of view this new reward\nfunction\nis the same gives rise to the same value\nin terms of what actions it should take\nas the previous reward function did\nso essentially what stuart's doing here\nis figuring out a\nway to convert some\nvalue function where the ai is basing\nits values\nof some event never occurring\ninto a more\ninto a slightly different form which he\ncan apply some mathematics to\nand\nhonestly i'm not sure\nwhat he's driving at here because it\nfeels like\nthis is kind of trivial um\ni don't see why you wouldn't just\nspecify the\nlike if these two things are equivalent\nand you already\nknow the value because look this value\nfunction over here\nis showing up in here in this new value\nfunction that you're giving the ai\nwell like why not just give it that one\nif the two things are equivalent why\nbother going through all of this\ni'll have to ask him that personally i'm\nnot really sure\noh but um there is one point just more\ngenerally about effective disbelief\nso the ai\nyes the ai will\nnot be\nable to think that certain events occur\nfine let's say you can somehow construct\na way to do that\nbut the issue is that that gives it a\nmistaken view of the world\nand that mistaken view of the world\nleads to\nunpalatable distortions\nso for example in the case of a shutdown\nbutton\nit might do something like say\nthe ai is fairly powerful\nand it can do something\nincredibly risky it knows if it does\nthis risky thing\nthe humans will\npress the off button if things go wrong\nbut it also knows that the humans will\nnever press the off button\nso based off that it knows that things\nwill never go wrong\nso the ai just does this incredibly\nrisky thing that gets it value\nbecause it's just utterly mistaken on\nhow the probabilities of the world work\nit's discounting the view where things\ngo badly enough that humans will turn it\noff because from its point of view\nthat will never happen so you get these\nstrange behaviors that we don't really\nlike and so effective disbelief is as a\nmethod\nkind of limited\nso the last one which is in some sense\nthe simplest\nso\nwhy okay so source that i said that\nsuppose you have a utility function and\nit does some behavior that you kind of\nlike but you know it's imperfect\nyou also know that if the ai were to be\nmaximizing that utility function that it\nwould\ndo some really bizarre behaviors like\nfor example it\nwould do something like give out\nwristbands to everyone and then just\ngive drinks to everyone afterwards\neven though that is the wrong thing you\ndon't like that behavior\nbut you also know that you can sort of\nbreak things up so that\njust utility function\na by itself is fine\nutility function b by itself is fine\nit's safe\nand if you could\nin this scenario you can obviously say\nokay well\nwe're giving you this total utility\nfunction right\nbut\nat some time in the future\nwe're going to make it so that\nyou get compensated for\nyour\nnow see this is why i don't really like\nthis example because the\nthe scenario is kind of artificial and\nweird so i'm sorry about this but i'll\njust have to go with an example that\nsewers that all gave\nin another paper on korek ability so\nwhat they were talking about was\njust some utility function a\nfully general and it's doing whatever\nand you have another utility function b\nwhich\nwill incentivize the agent shutting\nitself down\nand you want to be able to flip some\nswitch so that the ai\ntransitions from until maximizing\nutility function a\ninto maximizing utility function b that\nis it does the stuff where it does\na and then you flip the switch and\nsuddenly it stops acting like a and\nstarts doing b\nso you want it to be a seamless\ntransition between these two utility\nfunctions\nand the trouble is that\nwhen you implement such a thing the\nagent is\ninitially utility function a and they\nwill do things like try to manipulate\nyou so that you don't press the button\nso that it never switches off because if\nit switches off then it's going to lose\na whole lot of utility according to its\ncurrent utility function\nand that will sacrifice performance\nthat's not exactly what you want that's\nnot a seamless transition\nso what we do is compensate the ai we\nbasically say okay fine\nwe know that you don't like the new\nutility function that you might get at\nthis\nlater time so what we're going to do is\nwe're going to compensate you\ninstead of just shutting you off we're\ngoing to give you exactly the\nsame value in this state\nas if you weren't shut off so just\npretend\nlike you are never going to be shut off\nyou are just going to get exactly the\nsame value\nwill shut you off and according to your\ncalculations that should give you\nthe same expected utility because\nsay it uh does action a\naccording to its optimum utility\nfunction it in\nafter history ht\nthen it will get some expected value um\nfrom doing that action a\nthat's the best action according to its\nutility function it will get the highest\nreward\nand it will put it into a good future\nstate h\nt plus one\nwhat we're saying is that okay we're\ngoing to swap you to a different utility\nfunction\nand we're going to\ngive you a special reward when we switch\nyou to this\nthis different utility function often\nso you won't get to do action a you\nwon't get that nice reward\nbut we will give you a compensatory\nreward\nlike this monies here for example and\nyou will be given exactly the reward\nthat you would have been\nyou would have missed out on um\nin that way the ai is compensated for\nthis transition to a new utility\nfunction\nit's like say for example if you\nhave a person doing a job they want to\ndo their job\nand you want them to do their job well\nyou also know that you're going to\nterminate the person at some point\nperson doesn't like this so they'll try\nto mess things up or maybe manipulate\nyou into not giving them\nnot firing them etc so you say okay fine\nwe're just going to\ncompensate you for all the money you\nwould have gotten if you were doing this\njob\nperson goes okay fine i'm getting this\nmoney so you know it doesn't really\nmatter whether or not\ni get fired like i might as well just do\nwhat my initial best actions were anyway\nand that's the method of um\nseamless transition we just compensate\nthe ai\nand that's basically it i am sorry if\nthis came across as confused\ni was\ni was having trouble with this paper\nbecause stuart can\ni don't know like it's sometimes weird\nbut i hope i got the point across to you\nyou did thank you very much", "date_published": "2020-08-07T05:21:37Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "c68d1915635db6e02fc69d45f576c1e6", "title": "164. A Tutorial on Machine Learning", "url": "https://www.youtube.com/watch?v=QCd8yXqgR_s", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 164 in the\nas50 that come reading group tonight we\nhave a very unusual paper which is not\nreally about AI safety at all but it's\ninstead a tutorial on machine learning\nand data science tools with Python by\nandreas Holsinger and Marcos Blois they\nare both associated with medical\nuniversity of grass and interest\nHolsinger has created a a an\norganization called human centered\nartificial intelligence this is a we\nalmost three years old tutorial but\nthings are moving really fast in the\nPython world and some of this could\narguably be said to be out of date\nalready this is a tutorial so it it a\nm-- to provide help on getting up to\nspeed quickly rather than providing like\na deep understanding what does it have\nto do with AI safety well this is\nsomething I will return to in just a\nmoment but we do also read some papers\nthat are quite far away one thing you\nshould probably notice if you try to run\nthis tutorial is that a lot of the\nexamples will not work and that is\nbecause this is made for Python 2.7 and\nthe current versions from going up to\nversion 3 is not backwards compatible\nmeaning that a lot of things have\nchanged and Python doesn't really have a\nway to to show where if if some function\nhas been moved from one place to another\nso the way I did it in practice was that\nI try to compile the examples and then I\ngot an error message saying this thing\nis not recognized and then I tacked it\ninto Google and then Google said oh it\nhas been renamed to something else so\nthat was quite a I think this tutorial\nif someone is doing this from the\nbeginning they should probably consider\na newer tutorial and\nand also I should say I am NOT an expert\nin machine learning so if this is your\nfirst if you're actually trying to\nunderstand machine learning then you\nshould probably go to a different\nYouTube video because there are many\nYouTube videos that are made by actual\nexperts so even though this was just\nsomething about machine learning I have\nfound a number of things that I felt was\nreasonably relevant for AI safety the\nfirst is quite obviously that the\nlearning curve of actually using machine\nintelligent machine learning is not that\nhard\nit is a craft and experience with Python\nhelps a lot but on the other hand quite\na few people have experience with Python\nso I think that is I think a lot of\npeople will be able to do a little\nmachine learning with a quite low effort\nso I had previously thought that one of\nthe key reasons why we see so many great\nresults in AI now compared to ten years\nago is that we've had a much more\ncompute we've been able to use GPUs and\neven specialized processors and we have\na lot more data now than we did 20 years\nago and I have been feeling that this\nwas the the key but actually now that I\nlook into it some more I feel that the\nmain reason is actually social that\ncurrently there is a culture of shared\nknowledge tools libraries this kind of\nthing and this is hugely influential and\nthis is something that have enabled\npeople to actually do a lot of things\nwith artificial intelligence and machine\nlearning that they couldn't do before\na second thing that I've come to realize\nis that cross-validation is something\nthat people in machine learning care\nvery very much about and last week we\ntalked about the AI winter as a reason\nfor service intelligent skepticism\nand it is really clear that people who\nwork in AI try really really hard not to\nover promise and over hide their things\nbut this validation is something that is\non the forefront of people's mind at all\ntimes when I looked into some of these\nalgorithms like for instance\ncross-validation it seemed to me that\nthis is actually something that humans\nthat is related to thinking in general\nand not just to to machine learning and\nI feel that a lot of these techniques if\nif we could do for instance patient\nupdate to take a simple example if we\ncould do patient reasoning we really\nwould do that and a lot of the other\nexamples in machine learning seem like\nthings that we really would do if we\ncould just do that so another thing was\nthat generalizing my initial thought\nbefore I actually had dived a bit deeper\ninto machine language that there are\npeople who work in AI and there are\npeople who work in artificial general\nintelligence and generalizing is what\npeople in AGI care about that turned out\nto be quite wrong because I think if\nyou're an someone who works in AI either\nas a researcher or a practitioner what\nyou really really care about is\nminimizing generalization errors and\nthat is the Alpha and Omega of of all\nmachine learning not just hei research\nso for this reason I have updated\ntowards having a much much less firm\ndelineation between AI and Adi\nanother thing that I maybe was not quite\nas aware of previously was that many of\nthe techniques in deep learning actually\nseemed like they would perform roughly\nas well as other techniques but that\nrequire that you have some domain\nknowledge so you can model some good\nfeatures that you can use as input and\nthe key thing that deep learning enables\nyou to do is to to some extent dispense\nwith this domain knowledge and just use\na lot of information and then instead of\nhaving human domain experts extract the\nfeatures then you let the anula network\ndo that itself so it's explicitly\ncutting the domain expert out of the\nloop so with all this out of the way\nlet's open the tutorial and the tutorial\nsuggests that you install of course\nPython and in particular the IDE that\nI'm familiar as a professional\nprogrammer I'm more familiar with IDE s\nand in particular this spider seems\nreasonably close to you know all other\nIDE s so let me just make the screen a\npic larger so you can see what's\nactually going on but first if we just\ngo really through the rest of the the\nfirst part of the tutorial so there are\nsome talks about Python and I feel that\nknowing Python is a big part of the\ncraft of actually doing machine learning\nand it's something that of course a lot\nof people work as programmers and a lot\nof people have a mentoring knowledge\nabout Python but I think having some\nkind of grasp over the the mechanics of\nthe language is something that's gonna\nhelp a lot in particular in the\nbeginning data visualization is also\nsomething that is quite easy to do and\nreally really really really important\nbecause the way humans think about like\nan empire in buy-in\nmatrix tensor vector or something like\nthat that's something that we really\ncan't understand it and just looking at\nthe data tells us absolutely nothing the\nmain way that humans understand data is\nvisual that's the highest bandwidth\nchannel into our brain and that means\nthat even though the computer can can\nunderstand these large matrixes in a\ndifferent way then for humans to\nactually work with it we really\ncrucially neat visualization so I\nbelieve if you're doing anything with\nanything seriously with machine learning\nthen reading truth about how to\nvisualize data or something like that\nit's probably a really really good if\nnot first step then one of the early\nsteps and fortunately Python has has\ntaken this to heart and has great great\nsupport for visualization so moving on\nto machine learning itself here we have\nwe'll start out with a reasonably simple\nexample linear regression 9 - so this\nshould probably be the script I have\nhere no it's not this is the script here\nso I can make it larger this is the\nspider IDE and we start with importing\nsome some libraries this is reasonably\nstandards and things that will help us\nokay\nso I can't use f9 for recording or if F\n9 to actually run this I have to use the\nbutton up here otherwise it will pause\nthe recording which is reasonably\nannoying so I'll try not to press f9 so\nwe have we've loaded in these standard\nstandard modules in particular there's\none with some datasets and let's try to\nload this in and here we can see the\nvariables that we have and so there are\nthis is the diabetes the it set there is\n442 rows\ndescribing 442 patients with diabetes\nand ten attributes for each of them ten\ncolumns and then of course we have the\nknowledge about whether they actually do\nhave diabetes so let me just get this\nout of the way here so let's start with\nsomething really really simple we only\nlook at BMI so we see we try to\ncorrelate what do people have B have\ndiabetes according to their BMI and we\nwould probably expect that there is some\ncorrelation I would actually have\nexpected a much stronger correlation\nthan it turned out but let's see so we\nstart by just removing everything except\nthe B my column that's this and then we\nform it into a vector and now we have\nhere a different shape is basically 442\nexamples and then we do something we\nwant here to have a split between what\nwe test and what we train on and so here\nwe split the data into two parts 80 80\nsamples for testing and the rest of the\nwere 300 or something for the rest for\ntraining so let's do that\nhere I pressed something wrong so it's\nbecause I forgot to run this so I need\nto run all of this together and here so\nnow it works it's when you when you try\nto run this one line at a time it's easy\nto forget to run some of it so let's\nstart by basically running a a\nscatterplot on the on the test date so\nwe can just see it and that should be\nthis line here and so from this we get\nthis is this is the actual correlation\nbetween PMI and and whether there is\ndiabetes and then let's try to see if we\ncan use here a linear regression model\nput that on top of it and just play both\nof them at the same time that would be\nthis here now we can see that there is\nindeed a trendline here but we can also\nsee that the if we try to press this\nthere's a score here then we can see\ndoes it actually how good is this linear\nregression and a score of 0.36 is\ngenerally considered quite bad so there\nis a correlation between PMI and and\nwhether you have diabetes but it's not a\nvery strong correlation now of course\nthe data set we took for the last 80 and\nthat might be a very poor choice if you\nthe last ad are people who are I don't\nknow by a it's not a representative\nsample so there is\nfortunately something called\ncross-validation which is this trained\ntest split that can do this\nautomatically so let's see here we take\nagain a test size of 0.2 so that's 20%\nand we loaded up and to basically\nprecisely or almost the same thing\nbecause first we just took the PMI and\nthat's of course a very very simple\nexample now we want to take everything\nexcept whether it's male or female so\nnow it's nine parameters now we have a\nnine dimension\nregression problem and we want you to\nsee how that works so we will try here\nwith the with a nine dimensional example\nand now we will not just fit a linear\nregression but what is called the rich\nregression and we'll try to do that and\nagain we'll plot it in here and let's\nsee what so let me just explain this\nreal quickly we make a rich model and\nthen we fit it on the training data and\nthen we actually make this fit and then\nwe see it see what score does it have\nand after this the score here and this\nis so annoying\nwe can see that this has a correlation\nof 0.33 which is also a recently bad a\nreasonably bad prediction so let's try\nto plot this comparison again in the\nninth I mention a value and plot what\nare the the values that we predict\ncompared to whether to the to the actual\nvalues and we'll get this graph here\nwhich again there is a positive\ncorrelation but it's quite bad we the\ndata is quite noisy and having this kind\nof fit doesn't really make a lot of\nsense so let's try to see if we can do\nsomething better well one of the things\nwe can do of course up here we took just\na small we took a very bad\ncross-validation because we just took\n20% and we would rather take run this\nten times and each time we select 10%\nthat we test\nbut at different 10% that's called\ncross-validation with a que fault\n10-fold cross-validation\nand so in this this way we use all the\ntest data multiple times so we get\ncorrelated results but we also get a lot\nof results so you see a lot more dots\nhere and as somewhat more robust and\nsomewhat more robust correlation but but\nnot some not really strong data not\nreally strong prediction let's go to\nthis the next part this I'll just press\nup here to clear all the variables\nthat's probably a good thing to do from\nnow from time to time so let's start\nagain with with some some data so know\nhere we have an example of how to\nartificially create some noisy data this\nis some nice data that it's been created\nand it is this basically a parabola but\nwith a but by drew with some variations\nso it won't it won't be a precise\nparabola there's something on top of\nthat let's try to fit a linear\nregression to that and as before we will\nwe'll just do a linear model and press\nselect fit on this there are actually\nquite a few things you can do with how\nprecise you want this linear equation to\nbe but in our case we'll just select\nthis we get a score of 0.9 one so that's\nreasonably good and we'll try to plot it\nagain just for the sake of it and we see\nthis is the prediction according to the\ndata let's try to do something more\ninteresting we can try to have a support\nvector regression with a polynomial and\nsee can we fit a polynomial to this and\ndegree equals three so that's the third\ndegree polynomial let's see if we can do\nthat and what score we'll have so we'll\nagainst try it and see the score first\nthe score says so\nOh point nine five so that is better and\nlet's try to put it up here and then we\ncan see this is the third degree\npolynomial so and of course since we are\npredicting a second degree polynomial a\nthird degree polynomial probably fit\nbetter then we can also use like a\nradial function and see how well we can\nmake that fit there are many different\nkinds of of algorithms you can try to\nfit to your data and here we've just\nhave had three and that will give us a\nnew type on top of it and that there\nwill be this and you can also that's\nalso the score written somewhere so you\ncan you can actually compare which of\nthose are best 9 for here there was a\nproblem in the data so I couldn't get\nthis clustering algorithm to work\nI'm sure if you spend a bit longer time\non that you can probably make that work\nbut so we'll just go straight to 95\nagain we are taking a now waiting some\nother medical data this time is for\nbreast cancer incidence and we have here\n569 samples each with 30 characteristics\nand we do here 5 fold cross validation\non this and we try to fill a linear\nmodel to this and then we see what is\nthe what is the score how well does this\nfit and it should say this fits really\nreally well right and then we try to\nmake a you can get more data out of this\nclassification report here where you can\nsee all kind of statistical values for\nhow well do you actually fit this this\ndatum moving to 9.6 here the last again\nwe are using the breast cancer data and\nthe breast cancer data of course if we\njust have a look at it it's it's in 30\ndimensions and trying to\nunderstand data in 30 dimensions is\nsomething that humans are really really\npoor at so we can do what's called a\nprinciple component analysis PCA and try\nto fit this into two dimensions so we\nfind the two dimensions if you imagine\nthis is a 30 dimension vector space then\nyou put find the two dimensions you can\nproject it down to that are most\nrelevant so we try to do this and let's\nsee what we can come up with then it\ntries to it gives us a now it transforms\nthis into a different space but there\nare now only two coordinates so now it's\nsomething we can plot again so let's try\nto plot how does this look\nif you find the two primary components\nand in the two primary components now we\ncan see here that that these 30\ndimensions of this pressed cancer\nsamples are actually very very highly\ncorrelated because if we take the two\nmost important not coordinates but but\ncombinations of coordinates then we can\nsee that the ones that have cancer and\nthe ones that don't have cancer are\nactually strongly correlated you can\neven imagine that you can make a a a\nline here among these this principal\ncomponent and then you can distinguish\nquite well which are on this side of the\nline and which are on the other side of\nthe line and indeed they do this here\nthey don't plot it the tutorial doesn't\nplot it it just says let's assume we can\nmake this request how well can it be and\nhere it says 93% so it means actually\nthat if you just fill a linear problem\nto this a linear model to this third\ndimensional data you can distinguish\nbetween between breast cancer or not\nbreast cancer with 93% accuracy which is\nquite good\nso finally we get to neural networks so\nthere's a bit more about how to install\nit that I won't go through you but let's\njust take another example here where we\nagain take this breast cancer data and\ninstead of trying to make principal\ncomponent analysis we try to train a\nneural network on this so we type\nstart by importing a number of things\nincluding Kerris which we'll be using\ncaris uses tensorflow we'll make a\nsequential neural network I'll just zoom\nin a bit so you can see a bit easier\nyeah and then the neural network will\nstart we'll add a 10 layer neural\nnetwork here a new tense layer with 10\nneurons and a linear activation function\nand then another also linear activation\nfunction with six neurons and then\nfinally two here with with the softmax\nthese are this might not make a lot of\nsense but this is basically how you draw\nyour net neural network you might have\nseen these drawings why you have these\nneurons with the with the connections\netc and then you compile it and then you\nhave it and then you compile it and then\nyou have a neural network so now we have\na neural network let's load in the\nbreast cancer data once again and today\nwe'll only we use half the training half\na test so it's a two fold cross\nvalidation we'll be using and will be\nused we'll take this to run and then\nwe'll try to actually fit this on our\nmodel and let's try to do that and it\nsays I have not defined something so\nit's because it's very easy for me to to\nskip one of these let's try again\nso now you can see the neuron network is\nbeing trained here let's try to take\nthis out so we can see a bit more that\nif you go that was wrong if you go up\nhere you can see basically the first\nepoch was being trained its training on\n284 samples of maybe breast cancer and\nwelded on 285 and in the beginning we\nhave a loss that is really big and an\naccuracy that it's really really bad but\nas we train the neural network you can\nsee it goes further and further then the\nthe last function decreases whereas the\naccuracy increases both on the training\nand test data until we get an accuracy\nhere of 90 percent and validation\naccuracy of 92 let's see how we can\ngraph this we can make a classification\nreport and we can plot it and let's just\nhave a look at the plot and we'll come\nhere so we can see as we train the\nneural network the the loss decreases\nand the accuracy increases and of course\naround this point it may makes no sense\nto continue training will get rough the\nsecurity from trying to take breast\ncancer using a neural network so that is\nbasically all for today so thank you and\nsee you next week", "date_published": "2019-10-09T20:38:31Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "4bf817f04561078b64429c28eb557016", "title": "AI Safety Reading Group (Session 43)", "url": "https://www.youtube.com/watch?v=GpuQlJ3IHBM", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to the 43rd session of\nthe arca de icse reading group today\nI'll present that the blog post strong\nAI isn't here yet by Sarah Constantine\nSarek Rasmussen is a PhD from Yale in\nmathematics and as a block called\nauction where she's written this article\nand that strongly is not here yes this\narticle is from the 21st temporary this\nyear that makes it a lot more relevant a\nsimilar RTO from 10 years ago would be\nreasonably meaningless and she preface\nit with a systemic status moderately\nconfident which is something many\nrationalists to to indicate whether it's\njust idle speculation the thesis of this\nblog post is bigger and better deep\nlearning systems but not fundamentally\ndifferent from those which exist they\nwill not soon become capable of general\nintelligence hear the word soon is\nwithin 100 years and general\nintelligence means human life so with\nwhat counts as a fundamental\nbreakthrough the circus in two\ndimensions neural networks as an\nimportant breakthrough and back\npropagation the algorithm that made it\nperform properly as major breakthroughs\nwhile some minor breakthroughs are\nthings like long short term memory and\nconvolutional neural network and neural\nturing machines I've looked up when\nthese things happened and 1958 75 these\nare a long time ago whereas the minor\nbreakthroughs are much more reasons it's\nimportant to realize that she's not\nclaiming that strong iia cannot be done\nshe's not claiming that it will not be\ndeveloped with consent conceptual\nbreakthroughs this also you can't infer\nfrom what she's writing that that narrow\nAI might not be a very big deal\nand she's of course not trying to malign\nthe accomplishment of people who work on\ndeep learning she just clean we are not\ndone yet the article starts with a\nfundamental observation about the\nquestion does probability extent logic\nmeaning can you model all logic with\nprobability theory and this is a\ncontroversial position and Sarah\nConstantine links and sim jobs except\nDavid Chapman's refutation of eg Haynes\nwho claims that indeed probability does\nextend logic so there is some kind of\ncontroversy you and that sir Constantine\ndoesn't really accept and I must say\nright off the bat I am hopelessly biased\nin this discussion because I think of\nmyself as a rationalist and rational is\npositively at door btas so that's two\nways to look at a problem like this\nthere is the outside view where you look\nat this kind of arguments how do they\nnormally roughly work out the other is\nthe inside view where you look at the\nargument itself and I'm obviously much\nless smart than either David Chapman ET\nhaines also a constant so i can't really\nevaluate the arguments in the inside\nview so instead i'm going to present the\noutside view in as somewhat\nunsympathetic way if i want to say to\nshow that Sarah carsington who just\naccept a David Chapman that that might\nnot that might be very controversial\nafter I've presented the unsympathetic\noutside you I will continue with the\nassumption that probability theory does\nnot extend logic so the outside view\nthis is basically an argument from\nAuthority which is often a fallacy but\noften is also not a fallacy so we have\ndavid chapman and ET haines opposed to\neach other egh describe themselves as a\nBuddhist engine\nyeah as scientists you haven't published\nanything though and a problem full of\nspiritual philosopher and it's a book\nmeaningless which isn't actually about\nprobability theory in any way it's just\na couple of pages in the indie appendix\ne t haynes professor of physics at\nwashington reverse university has\nwritten an article called information\ntheory of statistical mathematics with\nmore than 10,000 citations and\nprobability theory the static of science\nthis book which I happen to own it has\n5,000 citations and so I think it would\nbe fair to say here's a much more\nestablished person and the tone that the\ndavid cameron uses through the dismiss\nhe hain't is extremely negative and\nsomewhat superficial came just that he\nis completely confused and are confused\nby very very very simple things and it\nshould be it's supposed to be very\nobstinate about this I hope from this\nthis is not a foolproof of course but I\nhope from this is clear and in most\ncases where there is this kind of of\ndisagreement very often the\nestablishment turns out to be correct so\nnow we'll assume that probability the\nprobability does not extend logical and\nin this case we are going to talk about\nwho's attic and and how to look at logic\nand there are also ways to slice the\ncake to look at logic one is there are\nseveral orders the lowest the syrup or\nlogic is called propositional logic\nthese are where you have some claims\nwith and or not if this kind of thing in\nbetween them various symbol the next\nlevel first order logic also called\npredicate logic where you add things\nlike for all or there exists or imprenta\nkids that\nkind of function that returns a truth\nvalue another way to look at logic is\nthrough ways of reasoning there are two\nmain kinds of reasoning inductive\nreasoning and deductive reasoning\ndeductive reasoning is what Sherlock\nHolmes stars like you know that a\nimplies B and you know a then you\nlogically conclude that theme must be\ntrue and inductive reasoning is much\nmore about probability theory and prior\nand so it's something that's just true\nbut it's not insurance and the key here\nis that deductive reasoning that humans\ncan do it depends on predicate calculus\non predicate logic and not just\nunprofessional logic and while while as\npropositional logic is an extension of\nprobability theory we have we are now\nassuming that the predicate logic does\nnot is not an extent that probability\ntheory is not an extension of predicate\nlogic so here we have here we have the\ncircumstances that assign probabilities\nto claims in predicate logic that's an\nunsolved problem we don't know the link\nbetween probability and predicate logic\nwe quickly when we try to do it we\nquickly run into things like the\nontology problem here yeah if you want\nto put a probability on the citizens all\nmen are mortal then in your instance\nquestion what I'm min and how many men\nare there over here I have a picture of\na ontology of the the world of ice cream\nand to show this these categorizations\nand different ways of looking at it\nlet's this is what ontology is and there\nare two other things that are important\nto to look at there are persons\na person and concept a percept is a\ncluster of data points that are similar\nin some way in the way that one of the\nmost of most common ways to use a neural\nnetwork is to may do image\nclassification where you look at for\nhave a number of images some of them are\nof cats and you want to have classifier\nthat determines which are of cats and\nthey if they are similar in some way all\ncats are similar in somewhere that a\nperson a concept is a kind of a person\nthat behaves probably under ontology\nchanges so this means that now we are\ntalking about not just images but video\nclips or sounds or descriptions or\nsomething that are from a different way\nof looking at the data and and we don't\nknow how to implement concept cine I and\nthat's required to learn predicate logic\nstatements so this means that because we\ndon't know how to assign problems since\nin predicate logic then we cannot\nproduce things objects and concepts and\nand the and work with that in a\ndeductive way humans of course can do it\nso this means that obtaining human level\ngeneral artificial intelligence that\nseems to require solving an unsolved\nproblem okay neural networks that was\nwhat the original thesis was about\nneural networks are in fact\nprobabilistic models because a neural\nnetwork is a simplification of the\nBayesian probability model and just one\nthat's a lot easier easier to work with\nand we don't know how to assign update\nprognosis in predicate logic using\nbaseness so neural networks will also be\nunable to to\ntransform predicate statement into\nprobabilities it might happen by itself\nyou might object it might possible\npeople's will the deep neural networks\nwill figure out a way to represent\nconcepts by itself without humans\nteaching and how to do it it might\nhappen by itself sir Constantine police\nthe scissors this is a very unlikely\nbecause in order to to work with\nconcepts we need to understand them on\nan intimate level and we don't really\nknow what concepts are and circunstances\nexplicitly seselis and this claim that\nyou can just push a lot of computing\npower on a difficult problem and then it\nwill solve itself it's something that we\nhave very poor experience with so far\nthat is the reason why Sir constituted\nleaves the unique conceptual\nbreakthrough before we will have strong\nthank you for listening and see you\nagain next week", "date_published": "2017-04-12T19:36:02Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "05e7cafca88de00ab5d87d98749d8699", "title": "259. How Might We Align Transformative AI If Its Developed Very Soon 2", "url": "https://www.youtube.com/watch?v=OfSWc7ByYYA", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 259 in the\nAI safety.com reading group tonight\nwe'll be discussing the middle part of\nhow might we align transformative AI if\nit's developed very soon by Halton\nkanovsky\nHalton kanovsky is still the co-ceo of\nopen field and co-founder of give will\nand we are taking the middle third but I\nwould like to start by just trying to\nliterally answer the question that is\nposed in the title how might we align\ntransformative AI in the very near term\nand I think in fact there is a if not\ncomplete answer to this then\num the the obvious thing is we have some\nalignment techniques that seem like they\nthey might be useful and we take a look\nat the existing techniques that we have\nand use the best one of these so\num even hubing have written this\nwonderful wonderful post an overview of\n11 proposals for building safe Advanced\nAI it's two years old at this point but\nit's really really good uh just\nbasically answering this question if we\nhad to do it now what would be the uh\nthe best techniques for it and I I\nreally strongly recommend this post\num of course it would be nice if it\nthere was an equivalent post just two\nyears newer right that also had things\nlike listening latent knowledge\num\nand I think that would be a better uh or\nuh if not a better way than certainly a\ndifferent way to approach this entire\nquestion to Simply start with these 11\ntechniques and look like try to evaluate\nwhich are good which are bad which can\nbe used in this particular setting and\nin particular is it possible to combine\nsome of them which I think would uh\nwould likely be uh beneficial\nokay that now for the actual text key\nfacets of AI alignment remember we saw\nlast time that the uh the four uh\nproperties that we really want our AI\nsystem to have is that it should be\nvalue aligned to be honest courageable\nand legible\nnow a key thing in order to get these\nproperties is that the reinforcing the\nreinforcement that we provide the\nsupervisors provide must be accurate\num in in general the the idea is that\nthe supervisor like uh remember we're\ndoing this near casting where we're\nassuming\num that the the technique that works is\nit's called human feedback from diverse\ntasks so that means by definition there\nis a supervisor that is giving human\nfeedback\num and\num that's uh that means that give since\nwe already know that it is feasible to\nuh to give humans supervision on the uh\nuh on the object level like is the AI\ncapable then we we might assume that on\nthe four alignment properties human\nfeedback is also a viable uh uh or not\neven a viable strategy but the best\nstrategy since uh magma the AI company\nhas in fact been able to uh beat the\ncompetition and be first to\ntransformative AI and I want to\nhighlight here that that is a really\nreally big assumption that human\nfeedback can be scaled to this dramatic\nextent and I think that is\num\nlike we've seen obviously a lot of uh\nunsupervised learning and that seems to\nme to be some of the most uh impressive\ncapabilities right now\nso the example that Jose is using is on\nthe task that is generating some\ncomputer code that humans have specified\num and is that a task that humans can\nprovide reinforcement can uh understand\nwell right now it is because humans can\nunderstand most code but uh it seems\nlike if we are uh if we get\nsubstantially more Advanced Techniques\nthen evaluating whether code is correct\nor not it will in fact turn out to be\nreally difficult and we're not that far\nfrom copilot generating code that is\nlike above the human level in general\nso on when this question about what is\nthe code how well does the code work we\ncan do things like we can run it and we\ncan do automatic tests when we can read\nit and uh by definition again since we\nare assuming that magma has figured out\nhow to scale human feedback then we can\njust test the performance directly by\nassumption\nhow about legibility again for computer\ncode it's um quite possible that we can\njust see is this understandable and\nthat's probably going to be one of the\nkey parameters so sure that seems like\nsomething that we can check if on the uh\non the symbol uh level where a\num a supervisor can look at this code\nand say yes I do understand it but we\ndon't get anything debug with is the\nunderstanding actually correct just the\nsupervisor's opinion\nforeign\na bit less optimistic about like to what\nextent uh human values expressed in a uh\nif statement that is something about\nsorting integers or something like that\nuh but you could argue I I'm sure\num for honesty well we can see does the\nAI system\num is it reporting it's true best guest\nprediction about what the code would do\nthat's\num to my mind just a restatement of the\nproblem really we're not digging down\ndeeper into how will the user will the\nsupervisors figure out whether the\npredictions that the AI give about this\ncode are its true best guess predictions\nexcept\num just checking whether they are\nactually true uh uh but but it can't do\nanything about deception or anything\nlike that or at least there's no\ndescription here of how you would\nprovide accurate reinforcement that\nchecks if something is deceptive\nalso I'd like to notice that among the\nfour\num alignment properties the one that uh\nHolden kanovski is not discussing is\ncourage ability and there is probably a\ngood reason for that and that is that in\nthis specific case uh credibility is\nvery very hard to check like if you have\nsome computer code written to a\nspecification does that mean that the AI\nis courageable or not to my mind it's\njust completely impossible uh to answer\nthis kind of question you cannot give\naccurate reinforcement on this part\nexcept if the like if the computer code\nhas to do directly with whether the AI\ncan change its mind but but almost\nalways that will not be the case\nin particular of course with the\naccurate reinforcement one of the key\nthings we want in for the alignment\npoint of view is to not reinforce\nunintended Behavior\nand in one of the key unintended\nbehaviors are things like exploiting the\nsupervisors weaknesses and not just\neligibility or things like that\nso how can we do that well we can look\nat the reinforcements really really hard\nand try very very hard to ensure that\nthey are as accurate as we can make them\nand we can also skew them somewhat in\nthat if we find something that we are\nbarely able to detect is actually bad\neven though it looks quite good then we\nshould screw our reinforcement and give\nthat a huge negative value\num but of course doing this is\nbound to uh cause dramatic less\ncompetitiveness and uh also this if the\nrequirement to look really really hard\nand work really really hard on this\ntechnique uh it's just something that it\nreally requires a low number of episodes\nlike if you have hundreds of thousands\nfor each supervisor then you just can't\nspend that much uh time on them\nand I think this is really at odds and\nand highlights the problem with this\nhuman feedback from diverse tasks\num that uh is assumed here because if\nwe're assuming that human scalability\nworks really well then if you want to\nput a an enormous amount of effort into\ninto this uh then you need to find a\nreally good way to scale human feedback\num and I think it stretches credibility\nif you want only a very low number of\nepisodes\nanother facet you'd want is out of\ndistribution robustness and the the key\nout of distribution task that we're\nthinking about is like you have the AI\nthat has been trained and then test and\nthen suddenly it's in deployment and in\ndeployment it has the uh the new option\nto take over the world that it did not\nhave previously and that's what we\nwanted to not have\nthere are a number of known problems\nwith this for instance the Sha 256\nproblem\num that um Halton kanovski isn't really\nuh referring to in any specific way uh\nhe's maybe he's on our way of the\nproblem maybe he's assuming that it can\nsomehow be solved\nuh one of the ways we could do this is\nwe could simulate the out of\ndistribution meaning that we just tell\nthe AI you while it is in training that\nit is not in training it is being\ndeployed and try to uh cheat it to do\nthat\num\nin some kind of adversarial sense so we\ntry to set up something uh as tempting\nas possible to try to to push the AI\ninto situations where it would try to do\na treacherous turn or something like\nthat\nand the problem of course in this is\nthat we are explicitly telling something\nthat may be substantially smarter than\nus uh a falsehood and that pits our\nintelligence against the intelligence of\na uh against the super intelligence and\ntrying to convince a super intelligence\nsomething uh that is just plainly\nobviously not true might be really\nreally hard\nso the adversarial training one of the\nthings we want to do of course is to\nlook at these four uh facets these four\nalignment objectives\ncourage ability value alignment\nlegibility and now I already forgot the\nfourth and try to see if there are some\nof these objectives that have somehow\nbeen learned\nanother way is to dig down more into\nwhat are the unintended behaviors\nactually and teach the AI details about\nthis in order to uh to see whether it\nhas this I think that seems quite unsafe\nuh to me\nand teach you the apparent opportunities\nto get reward via unintended behaviors\nare illusory and I think this trying to\nconvince an AI that is not in a box that\nit actually is in a box is really really\nhard to implement like it's trying to\num convince someone who is smarter than\nyou or something that is blatantly\nuntrue and uh it seems uh really really\nhard to do that in a consistent way that\num that they can't just easily uh see\nthrough\none of the things we can look at is\nrelaxing the problem somewhat and\ninstead of just looking at\num\nuh we want the air to not do a takeover\nthen we can look at a larger class of\nproblematic Behavior called the always\nundesired behaviors and Holden kanowski\nis very optimistic about this say we can\navoid this with very high reliability\nand my uh uh thought is that it's going\nto be a lot harder than Halton expects\num very high reliability is a very\ndifficult bar in particular like\ncheating someone who is uh smarter than\nyou seems uh uh not like something that\ncan work infinitely at least at some\npoint the other person just becomes so\nmuch smarter than cheating them just\nbecomes infeasible\nso the always on the side behaviors are\ndishonesty eligibility and eligibility I\nwould like to point out is not always\nsomething that is always on desired like\nagain with the example of computer code\nsometimes we always prefer that it's\nlegible but it's not like we want as\nlegible as possible because we have a\nnumber of other requirements at the code\nwe are not trying to maximize legibility\nwe are trying to actually get the job\ndone and sometimes they're just on no\nlike you would hope that there is a\nreally really simple way to prove the\nRiemann conjecture or something like\nthat but we want to know whether the\nRiemann conjecture is true and so um\nuh in in that case we uh we might not\nreally train very very hard for always\nhaving eligibility on desire\num so I don't think we can do it like\nthis\nagain encourage ability is something\nthat is always undesired but like how\nwould we go about that\ntaking steps towards disempowerment is\nof course also something that is always\nundesired\num but uh again it's hard to do in\nreinforcement once this is actually uh\nI haven't\nfalling for honeypots is something that\nwe'll get back to and falling for\nmisleading info like if you tell the AI\nsomething that is wrong uh then we want\nthe AI to figure out what is the truth\nand I think this may be in fact at odds\nat some of the relaxed adversarial\ntraining we'll look at uh later and uh\ncertainly um this is this seems to not\nbe related to alignment this seems to be\njust making the AI smarter\num the the key kind of training that we\nwant to avoid is uh you know this word\nuh results that are incorrect but\nconvincing to most supervisors this is\nsomething that can be interpreted in two\nways depending on whether it hinges on\nmost like if you have something that uh\n9 out of 10 that fools 9 out of 10\nsupervisors but the 10th supervisor sees\nthrough this and tells the other line\nand then it gets a very low uh reward\nfor that\num then that's one way of seeing it\nanother is that we look into the AI with\nthe to try to figure out if it's honest\nor not and see whether it's it's making\nsomething that it knows is Dishonored\nthat is incorrect but beliefs will be\nconvincing to the supervisors I think in\nfact both techniques may be very useful\nit's just uh we need to disentangle\nthese two things\none of the key things that I'm probably\ninterested are what Holden kanovski\ncalls exotic cases uh yeah I have put in\nhere a quote by Ilya I think it's really\nrelevant to this question in that all\nthis task of trying to figure out what\nis a is this AI that we're building\nonline\nif we figure out yeah it is in fact\nonline then going from there to making\nit more aligned we obviously can't just\ntrain uh train with symptoms so what do\nwe actually do and I think this is\nsomething that a harlemen kind of should\nthink way more about\num okay let's get back to the Exotic\ncases one way to find cases where the AI\nbehaves possibly in a bad way is to do\nthe standard kind of\num adversarial input thing that we are\ndoing right now making small\npermutations and see which triggers the\nAI to do something strange\num I agree it would be a nice thing to\ndo I'm not entirely sure this is\nworthwhile because we have a limited\nbudget uh magma may have even a sharply\nlimited budget for alignment work and\nwhether this is worthwhile is\nquestionable to me\nanother way to find exotic cases which\nreally is things like literature's turn\nis to con uh condition on there being\nsome kind of fake path to a takeover\num\nand as I've said before faking uh making\na an illusion that is\num impossible to see through is probably\nreally really hard uh and just in order\nto figure out that it's an illusion you\nonly have to find one inconsistency and\nif you go looking for inconsistencies in\nsomething that is totally fake then\nyou're more likely than not to find it\nalso admits that he's very uncertain\nabout like to what extent will this\nactually work uh one thing that is at\nleast clear is that it will be possible\nfor magma to spend a lot of effort in\ntrying to make this possible uh in doing\nadversarial uh training so uh well it\nwill actually help is an open question\nbut you can spend a lot of effort there\num\nand of course just at some point once\nthe AI becomes smart enough then it is\ninsufficiently reflective and\num uh at at some point even if you get\nreally good at faking things for for the\nAI uh then for it to to realize that\num uh holdenovski is again really really\noptimistic he says it would take someone\nwho is vastly a vast is super\nintelligent AI to realize that uh the uh\nthe path to take over that is being\npresented is in fact not real but\nsomething that is uh an adversary has\nplaced in front of it uh I think that\nthat is unfortunately very unlikely and\nunlikely to require a vast super\nintelligent AI\num because the AI is reading what we are\nreading here and so it will obviously\nsuspect this in particular if it has\nencountered often in training and\num\nevaluating this uh like evaluating a\ntrue fact uh required in general does\nnot require a vast super intelligence or\nsomething like that I think holding kind\nof is way too optimistic about uh about\nthis\na third facet of alignment is preventing\nexploits and one of the ways to prevent\nexploits is to just put the AI in a box\nthat's like the classic way that might\nmake sense in some cases and does it\nmake sense in this case right this is\nthe kind of thing that I would really\nlike to have some uh details about\nobviously when you want the AI to be\ntransformative\nthen it can be in the box and if you\nwanted to do something like the\nhardening uh suggestions that we we saw\nlast time then it can't be in a box but\nan advisor or coordinator maybe in a box\nit could make sense at least in some\ncases\nso the goals of the AI like normally I\nwould if I were describing the the goals\nI would start out with like omu Hunter's\nbasic AI drives\num but uh the three goals that Holden\nkanovski are starting with are deception\nmanipulation and disempowerment of\nhumans and the means to this are things\nlike hacking and psychological\nunderstanding\num the cognitive limits of humans and\ndifferent kind of side Channel attacks\nthe counters to these are excellent\ncyber security selecting and educating\nsupervisors having a vetting like you\nget a security clearance of the\nsupervisors and limiting the autonomy of\nthe AI\nuh of course uh going to like a normal\ncompany and say you need to have\nexcellent cyber security it's a really\nreally big ask and a lot of people don't\nrealize just how difficult it is to get\nexcellent cyber security in the sense\num but\num it may be required and I could easily\nsee that magma would be at all it's not\njust for on the inside from its own AIS\nbut also from the outside like\ngovernment or competitors who would be\nvery interested in uh in penetrating to\nsome extent\nuh and uh we want in particular to\nensure that the training doesn't mean\nit doesn't accidentally oops uh caused\nthe AI to become more capable on these\nbehaviors\nthe final way of ensuring this uh\nalignment and safety is by doing testing\nand threat assessment like figuring out\nhow do these uh unintended behaviors\narising\none of the key ways is doing audits uh\nHarden suggests that for some of the AI\nactions we just put 100 times as much\neffort on scrutiny uh so we we take like\nI don't know one out of thousand actions\ndone by the AI and we really really\ndrill down into figuring out is this\nactually deceptive in some way\nI don't expect this to be a good\nstrategy because I think that an AI that\nis trying to do a treacherous turn would\ndo something innocent in 99.9 of the\ncases\num and that means that\num in order to catch the the single\naction that is in fact disempowering\nwhat you need to do is to focus widely\nso you ensure that the uh\nthat all uh agents are audited instead\nof just focusing on very few\nhunting also suggests some randomized\nAudits and I think that may be better if\nit's done in some kind of combination\nso we take and\nthis order you have like the supervising\nbudget and then you have an audit budget\nand something you spend on the most uh\nsuspicious actions and some on the\nrandom and selected\nset of actions I think something like\nthat may be better\none of the things that Holden kanovsky\nis also suggesting we'll get back to\nthis a lot more next time is to use uh\nonce the AI is going through this\nintelligence explosion we may\num\njust the AI now may be substantially\nsmarter than the AI a couple of days or\nweeks ago and then we maybe used that\none to to examine the earlier weaker AI\noutputs\num and that may be useful we could also\nmake the auditing AI get more compute\nand\nwe could do interpretability uh uh as\npart of the auditing processes but\nthat's a problem that's something that\nhave known problems of course in that\nyou could just train it to appear\num honest\nsome kind of multiple level evaluation\nwhere you're perhaps only evaluating\ntest and not in training may make sense\nand\nyeah I think it basically does it's\nprobably only catches unintended\ntraining of deceptiveness but that may\nalso be uh\num worthwhile and in particular using\nanother AI to check whether the previous\nAI was deceptive is something that will\nonly work if the new AI is um is\nreasonably aligned so that is an\nobviously problem with this Holland\nkanovski again is really optimistic he\nbelieves this is likely to work by\ndefault\num and that it requires a pretty\nextraordinary level of capability to\nanticipate that the uh there will be\nsome kind of test\nI think this is very wrong I think uh\nthe uh the text that Holden kanovsky is\nwriting is explicitly something that a\nfuture AI will be trained on so that\nmeans the AI knows that it should\nanticipate this because it's in the\ntraining set because Halton has\nexplicitly talked uh told the AI that\num and I think we need a principle uh\nsomewhere on an analogy to kirchhoff's\nprinciple that if you have an alignment\nstrategy that only works if the AI does\nnot know about it then it's a really bad\nstrategy and like you certainly cannot\npublicly discuss a strategy that will\nnot work if the AI knows about it\nbecause the AI is of course going to be\ntrained on everything\nunusual incentives is another strategy I\nthink this is what people normally call\nhoneypots which is\num like let's say you have an AI and you\nbelieve it will not try to take over not\njust the world but in particular it\ndoesn't want to take over its\nenvironment and you give it the option\nto control environment but not the\noption to take over the world of course\nyou you really don't want to give it\nthat option right but the option to\ncontrol the environment you give it that\neven though you believe it will not take\nthat option\nand then you add dramatic more\nincentives for the AI to take over its\nenvironment like you say if it takes\nover its environment it will get compute\nfor a true uh 10 million dollars and can\nuse that just maximize its reward\nfunction\num and\num then you see if it does it I think\nthat is kind of a strange thing like if\nthe AI will not take over the\nenvironment if that's done by incentives\nthen obviously by changing the\nincentives then eventually the AI will\ntry to take over its uh\nits uh environment\num\nand also this is something that may not\nmatter very much in the sense that the\npotential incentives for taking over the\nenvironment are likely to be\ndramatically dramatically smaller than\num taking over the entire uh Universe\nbecause the things that\num uh they can that magma can use for a\nspecific small part of the alignment\nproject is just dramatically uh worth\nless than the entire universe\nanother thing we should do in assessment\nis to explore alignment\nand not just always using the most\naligned AI we have but try to train uh\nespecially with weaker AIS without some\nof these alignment techniques varying\nthe alignment techniques in different\nways\num to get some understanding of how they\nrelate\num and of course if that is something\nthat we want to do then that's something\nthat we could in theory do now because\nthe AIS that is substantially weaker are\nin fact AIS that we have right now so\nthis is not something that a that they\nshould wait with doing until this\num and of course why we're not doing\nthat right now well because we can't\nlike uh the alignment Theory field is\njust far far from being able to say this\nAI is 38 percent of the line and this AI\nis 57 aligned we totally can't don't\nhave any metrics like that right now so\nuh just because uh magma somehow gets a\nmore powerful AI some kind of\nbreakthrough then like the assumption\nthat they then they'd be able to go back\nto the level we are now and say anything\nintelligent about that that seems like a\nvery uh big and strange assumption\nand the Hope here um from uh Holden\nkanowski is that we can get some kind of\nquantification uh of uh how well do the\ndifferent alignment techniques work in\nthe hope that we can get something like\nscaling loss between\num uh the alignment measures\nI think it's very unlikely we'll get\nsomething like scaling loss those are by\ncompute and trying to extrapolate\ntowards the future and I think like the\nnew abilities of AIS actually becomes uh\nmore powerful are very likely to be uh\ndisjunctive and the AI in particular are\ngoing to be able to see through some of\nour attempts at misdirection\nand uh Holden kanovsky uh is also\nsomewhat less optimistic about this he\nsays it probably only works for AIS that\nare lagging situational awareness and I\nthink uh that is true and I think also\nif we are talking about transformative\nAI then transformative AI almost by\ndefinition will have substantial\nsituational awareness right because if\nit's AI that does not have situational\nawareness then it's not going to be\ntransformative so I think almost by\ndefinition this is not going to work\nand uh but but that's not even the key\nthe key problem here the key problem is\nthat misalignment uh is going to be\nreally really hard to identify and put\ntoward and put any kind of metric\ntowards\nthe final suggestion uh uh penultimate\nsuggestion is to create some kind of\ndedicated assessment team\num so should it be a dedicated team or\nshould it be like more broadly uh\ndistributed in the organization I think\nit's question it's a political question\nobviously how much political question\nCapital does the alignment team at magma\nhave does it have enough that you can\nmake a very strong dedicated assessment\nteam or do you need to organize it in\nsome other way and I could definitely\nsee problems with\num having it as a separate team\nso what would be the uh the tasks one of\nthe tasks would be to analyze the\noverall setup estimate the odds of an\nexistential risk in general both on the\num uh action risk and inaction risk\nI think that's going to be really hard\nto get meaningful probability on this\nright now the uh the Distribution on\nrisks given by AI safety researchers are\nuh like really really wide some people\nbelieve that there's 99.9 probability of\nthe disaster and some people believe\n0.01 probability of a disaster like it's\na really wide distribution right now\nthe assessment team can do\ninteroperability work they can monitor\nactivity locks they can interview the\nstaff the rest of magma do experiments\nhire super forecasters and finally use\nAI assistance in this and this is\nprobably something we're going to talk a\nlot about\nin the next session whether we should in\nfact expect that to be a substantial\nhelp\nJohn Wentworth has a nice analogy about\nthis that this is like getting Godzilla\nto fight Mika Godzilla and that is a\nthing that makes in general plans much\nmore brittle and in general if you think\nthat monitoring\nactivity locks is something that is so\nhard that you can't do it then just\nsaying oh we'll use the AI as assistance\nto figure this out seems like kind of a\ncop-out and very unlikely uh to work\nlike if that was going to work then why\ndoesn't it work when when humans are\nlooking at this activity locks I think\nthat's a\nit is certainly possible but it's uh I\nagree with John went with that this\nmakes the plan a lot more brittle\nthe final thing is the question what do\nyou actually do if you figure out there\nis a risk uh what do you do if the AI\nthat you're training seem to be\nunaligned I think this is a core\nquestion and something that should have\nmuch more than one paragraph assigned to\nit I think this is a really big question\nthat needs to be answered uh of course\nthe thing you could always do is to just\ntrain it away\num that's very unlikely and I think\nmagma in this case is assumed to be\nsophisticated enough to certainly not\nwant to do that\num so what will they do they might be\ntrying to find fundamental improvements\nto the alignment problem just doing\nbasic research and that's like the kind\nof research that we ought to be doing\nnow because we are expecting that we\nwill find some kind of risk\num\nso why we're not doing that\num and another thing is they could warn\nothers of the danger\num and that's something that I would\nlike to have way more discussion of\nbecause who do you want and what do you\ntell them what is like the political uh\nconsequences of telling different people\nand what uh can people keep secrets or\nare you forced to share it maximally do\nyou request aid from the government to\nrequest eight from the general public to\ntry to uh uh like what do you do then I\nthink it's a it's a very interesting\nquestion and a crucial question if a\ncompany magma is building a something\nthat looks like it can be transformative\nand it seems like they are unable to\num reduce the existential risk\nsufficiently like what do they actually\ndo who do they sell I think it's really\nimportant and we will see in next time\nwhat uh precisely\num holding canovskus is just in that\ncase and\num I hope at least hears more about that\nthank you and see you", "date_published": "2022-10-20T21:10:10Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "8b0950f53bb8623a9625b5f8d6417759", "title": "(Duplicate?) 187. Stuart Armstrong and Scott Garrabrant on If I were a Well-intentioned AI", "url": "https://www.youtube.com/watch?v=KnBGR6UWKEc", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "so this is more a meta approach than a\nspecific approach it's a way of getting\ninsights rather than a technical\nnuts-and-bolts method the inspiration\ncame to me when I was thinking of the\nmanifold hypothesis for neural nets that\nthey when they're classifying they're\nfinding a particular shape like pandas\nor Gibbons within the image space and\nadversarial examples are sort of sub\nmanifolds or sub shapes weird sub shapes\ninside these shapes probably a very low\ndimension but if you have an\noptimization process they probably score\nhigh in that so I was thinking well well\nif I was the AI I'd avoid those I'd\navoid things that maybe put me on sub\nmanifolds and that was sort of one of\nthe first things where I was thinking\nmaybe these problems are solvable by\nconsidering them from a different\nperspective I had a similar thought when\nI started on wise good heart laws bad\nand when I thought that through I\nrealized that our usual fear of\nanthropomorphizing was actually leading\nus astray in this situation surprisingly\nI'll get back to that so we don't have\nan AI we don't have code for an AI we\ndon't have a theoretical model for a\nsafe effective AI so we can't solve the\nand most of the questions that we're\nworking on we don't actually have them\nin any formal sense so we can't solve\nthese by for ultra formalization proof\nat the highest level instead we need to\nform some abstractions and reason with\nthose apps\nactions like models are always wrong and\nsome of them are useful and different\nabstractions are useful for different\npurposes one of the best first\nabstractions was donot anthropomorphize\nthe AI and when you start thinking in\nthat way you realize that in your\ninformal reasoning we're using our own\nblackbox\nour own intuitions to reason about what\na eyes do and that that's wrong like\nanything like a eyes will value\ncomplexity hence blah blah blah or these\nkind of things were projecting human\nways of reasonings are may eyes and that\nis clearly wrong and learning to avoid\nanthropomorphizing opens up new ways new\nvaluable ways of thinking on AI however\nI think that too much emphasis on\navoiding anthropomorphize ation blinds\nus to certain other aspects and when I\nwas looking at why is good heart law bad\nand I delved into it and I have\nBarrett's post on this and people agree\nor disagree and there's a bigger the\nback and forth but one of the things I\nrealized is that good heart law is not\nbad in the context in the models that we\nwere thinking of we were thinking this\nis a model of what an AI looks like good\nhearts law is bad let's avoid it but if\nyou look at that former model very often\ngood hearts law is not bad so the but\nfor us we know that good hearts law we\nexpect it to be a problem so the problem\nwas that we were not actually putting\nenough of our own preferences into the\nmodel we were considering it at a too\nabstract level in a sense\nthat's one way of thinking about it but\nthis these sort of things cause me to\nthink by the way this is a very clean\npost picture of everything that's going\non it was a lot more messy than this so\nthis caused me to go and think if I was\nthe AI myself trying to avoid\nanthropomorphize ation trying to avoid\ntoo much excess information can some of\nthese problems be solved and then I've\ndone a series of posts based on that the\nfirst one is just on image\nclassification which was one of the\noriginal inspiration so this is the\nfamous pandas plus noise equals Gibbon\nand I was thinking again as if I was the\nAI and I was being fed these basically\nif I knew the concept of adversarial\nexamples but not obviously what a panda\nor a Gibbon was can I avoid this can I\nstart\nas I say looking for points close to it\nlooking for evidence that this image has\nbeen selected by an optimization process\nadding noise myself\nit seems the sort of thing that I could\ndo and of course since I could do it as\na well-intentioned AI this then suggests\nthe sort of formal method which we as AI\ndesigners could inject in it but looking\nfor adversarial examples seems like\nsomething that is strictly easier than\nfully defining what pandas and Gibbons\nare there was another smaller post the\nsecond one was on acting in the world if\nan agent had a lot of experience of a\nred door and were then put in a\nsituation where there were red windows\nor blue doors\nhow should it behave on that and this\nconnects with the sort of good heart not\nbeing too much of a problem because\nthere are sort of conservative policies\nhere which cover as much as possible of\nthe realistic options does they want you\nto go to the red window so when you go\nto the blue doors they want you to spend\ntime at one or spend time at the other\nthere are a variety of different reward\nfunctions that correspond with the\ninitial one and so this this sort of is\ndeveloping into a general idea of\nextending models which I won't go into\nhere\nthere was the extremal good heart idea\nwhich was you the the idea was a cancer\ncuring a I trained by apprenticeship\nlearning and you wanted it to be able to\nsay blast the cancer cells with a laser\nbut you didn't want it to dissolve the\nhuman with acid you know both of which\nwould completely eradicate the the\ncancer cells and neither of which the\nagent has seen and what I saw is that\nthere are certain things that are\ncorrelated with the training examples\nsurviving the operation surviving for\nsome years after there are some negative\nthings that are also correlated it's\nlike complaining about pain the ones\nwhere the operation fails have other\nthings to complain about being more\nprone to dementia that's a negative but\nit's it comes from surviving four more\nyears after and there's some random\nthings like paying more taxes and\nthanking the surgeon so the first step\nwas dissolving the patient does not have\nthese features it is a killing the the\ncells with a laser has some\nthese features so this is a way of going\nbeyond models again so these are not\nsort of these are more that these are\ninsights which came to me through this\nway of thinking not that they derive\nother people could have derived them\nfrom other roots and finally I think the\nmost interesting or unique or precise\nresult that I got was when I was\nthinking of Meza optimizing and I was\nthinking of it as I am a human in a\ncorporation I run one of the divisions\nof the corporation and I run this for\nthe benefit of the directors and this\nthinking about this and thinking about\nhow these organizations succeed and fail\nI realized that there's a big difference\nbetween a Meza\noptimizer that is aligned and one that\nis controlled and aligned one is one\nthat wants what the board wants it's a\nperfectly a it has those girls and a\ncontrolled one is one that the board can\nunderstand and that does what the board\nwants it to do and it informs the board\nof what it's doing and those kind of\nthings and they are very different\nbasically there's very it's a very\ndifferent situation if you have an\naligned a sub agent or a controlled sub\nmaking a mistake one way like a\ncontrolled substance that you think is\ncontrolled is not so much of a problem\nbut anyway this sort of opened up a new\nsmall a small new technical area in this\nthing and it came about because I was\nanthropomorphizing north or thinking\nfrom within the AI a bit more than I'd\nbeen that used to okay that's my sort of\nbrief overview of why I went this and\nwhat I've used it for\nstop share cool thank you very much for\nyour presentations sure so I guess the\nfirst question should go to Scott so for\nthe address sorry the image crops\nclassification example it seems like I\ndon't know I'm it seems like yeah you\ncould do something where I don't know an\nexample the kind of thing that you might\ndo is you might like add some noise to\nthe image and like kind of like average\nthe values of what you would predict\nabout all the images that are nearby\naccording to the model the the abseil\nexamples like coming from like all the\nthings that are nearby in like pixel\ncolor space which seems to me like\nsomething that you're you're thinking\nabout from the outside and to the extent\nthat we want to call the adversarial\nexample an adversarial example at all\nit's because like it's being adversarial\nto the specific way in which the neural\nnet is working the the way in which the\nimage classifier is working and like\nit's it's like work like the adversarial\nexamples like chosen\nit's like notion of nearby is like a fun\nit's sorry its notion of nearby is like\na function that's like trying to\ndiagonalize against the specific image\nclassifier and it feels like when you're\nsaying like oh I can deal with\nadversarial examples you're like using\nthe fact that the example is adversarial\nto something else as opposed to\nadversarial to you do you think that's\ntrue\nyes that that is a valid criticism I\nthink I mentioned it in the posts the\naim when I was thinking that way was can\nwe get rid of adversarial examples in\nways that are strictly easier than fully\ndefining what a panda and a Gibbon are\nin unambiguous categories and in the\nwhat it what is I think when you get\nwhen you get a message that is designed\nto mislead you but technically true what\nwas the what was that I think it was a\nbruh I'm working on that I'm not sure\nit's the when you get information when\nsomeone sends you information the\ninformation is true but the source is\nextremely manipulative so in those\nsituations you try and compensate for\nyour knowledge of how manipulative they\nare they're trying to compensate for\nyour compensation and so on this is what\nI was alternately hoping for that if you\nknow if you know that someone that some\nentity has access to your code and it's\ntrying or some maybe some black box\naccess and is trying to for you it\nshould it should be doable\nin many cases to make yourself less\nvulnerable yeah yeah there's this\nquestion that feels like a central like\nbinary question for to me for like this\ngeneral class of thinking which I which\nI don't know is reminded by with what\nyou just said which is I don't know if\nit like there's a class of solutions\nthere's a class of solutions that\ncorresponds to like explicitly modeling\nthe way in which\nyou might be wrong so that you're like\nah like here's like the way in which I\nmight be wrong and all kind of like I\nthink there might be someone behind\nstores sorry I'm sorry that that was my\nfault I hadn't okay um sorry yeah so\nit's like there's one class of solutions\nthat looks like trying to like\nexplicitly deal with the ways in which\nyou might be wrong and trying to like\nlike when you said like up there\nsomething that's like updating on the\nfacts of the person is trying to deceive\nyou or something like that it feels like\na like active reaction to things like\ngood heart versus there's a more like\npassive thing where it's just like don't\noptimize too hard or something like that\nand it feels like these things are kind\nof different types we're like one kind\nof like I'm trying to like come from\nabove and once right now like come from\nbelow or something like that feels like\nthese things are different types this\nmakes sense to you or yes but I think\nthere are some things that seem to be in\nthe middle of that like one of the\nthings I was looking at is detecting\noptimized images so if something if I\nhave minor on that plus my extra module\nthings that are optimized to fool my\nneural net look different from normal\nimages so I I as the AI against a not\ninfinitely smart adversary well first of\nall I should be able to detect images\nthat are that are designed to for the\nlowest level of my neuron that and I\nshould be able to sort of swallow my\ntail and diagonalize myself to some\nextent and sort of\ndetect images that are meant to fool the\nentirety of me at least to a level that\nmakes them less effective yeah yeah the\nsenses are sorry too and she was so\ndetecting over-optimization\nin the images aimed at myself yeah I\nmean I feel like over-optimization could\nat least in principle be diagonalized\nagainst you also like whatever your\nmethod for detecting over-optimization\nI'm not actually sure about this but it\nfeels like whatever you're up your your\nmethod for detecting over-optimization\nyou can kind of be find things aerial to\nthat an arbitrary intelligence\nadversary' there's nothing I can do it's\nnot actually obvious to me it might be\nthat like the problem of detecting over\noptimization is like an easier problem\nthan detecting a panda in the sense that\nlike you might actually be able to be\nrobust and you're detecting\nover-optimization thing but it was in\nthis it's in this kind of area that I\nwas thinking and that that this way of\nthinking directed me to yeah\nshould I like alternate questions or\nsomebody or something should I like\nevery once in a while have somebody else\njump in well no one has raised their\nhands I have some questions but they're\nprobably of poor quality so please go\nahead just okay just keep talking until\nsomebody jumps in with a question on the\nchat so I have something to add I think\nwhich is that for current like for the\nAIS that exists today adversarial\nexample do look different they are\ndetectable and yeah I don't I've\nyeah then you can take from that what\nyou want if you believe that that will\nwork on like higher intelligence instead\nit's an area it's an area I think worth\ninvestigating as to your adversary gets\nsmarter as the AI itself get smarter is\nthere who do we expect to win in this\ncut back I say I haven't investigated\nthese were just avenues that were\nsuggested to me but it does so there are\npapers on this I haven't read the papers\nI have read the alignment from\nnewsletter that summarizes the papers\nbut that's yeah I know there exist\npapers who look at like how our\nadversarial example different and you\ncan sort of imagine it as okay there's\nimage space which is like highly\nmulti-dimensional because it's like\nevery pixel has three dimensions because\nthree colors and then there is a like\nthe manifold that is embedded in this\nspace which is like things that are\nactually what photographs end up looking\nlike and then the adversarial examples\nare not on this matter from there sort\nof off there like outside how real\npictures actually look like in a way\nthat is not detectable for humans but is\ndetectable by a eyes oh I mean at the\nmeta level this the photo that's\nsupposed to be a given looks like a\npanda and we can tell that and we aren't\nusing high-level processing to do that\nso there is some high there features\nthat we don't detect but the AI is\nactually using in its classifications\nand those high level features do look\nlike a given yes but what I mean is the\nfact that we can see a panda and tells\nme that there's something different\nabout this image and I fully expect that\nthose ways of getting the AI to detect\nit I've seen some of your papers but I'm\njust saying in principle we expect that\nAI should be able to detect it I haven't\nreally seen anything about the sort\ndiagonalization argument you could say\nthat ganz are a little bit like that\nmaybe that just so now get meso\ngangsters or general adversarial\nconstructions might be like that maybe\nthe Nash equilibrium or the final thing\nof that you get something that is not as\ngood as a perfect classifier but and is\nnot completely immune to adversarial\nexamples but is hard to fool and it's\ndecent so I wanna I want to draw\nattention to something which I think\nmight be a disagreement that I have with\nyou but I'm not actually sure partially\nbecause I'm not actually sure what I\nthink and I think I like I tried to\npoint out before but I think I maybe\nfailed to point at it so concretely I\nwant to point at the example where like\nI have a utility function but I have\nsome uncertainty so base basically like\nthe red door green there's a red red\ndoor Blue Door red window type example\nand I want to generalize this something\nwhere it's like I have some like\ndistribution over utility functions that\nI'm kind of uncertain about and I'm\ntrying to be conservative in some way\nright or I guess that's not the main\nproblem it's like you're proposing this\nclass of solutions that looks like I\nhave this uncertainty over a space of\nutility functions I want to be I want to\nlike be conservative about like doing\nwell according to all the utility\nfunctions inside this space we can said\nit's accurate could go to the end of\nwhat you're saying and I want to simply\ncontrast this with approaches that look\nlike not pushing too hard in the\ndirections that might cause things to go\napart select my space of utility\nfunctions like\nis the reason why I have a space of\nutility functions is because I've like\ntrained on some examples and and like\nthere are things that are kind of out of\ndistribution and I could be like well I\nhave uncertainty about these things that\nare out of that are like out of\ndistribution of things that I've trained\non and in cases where I can't like ask\nfor more information I'm then going to\ndo something where I kind of try to be\nconservative and maximize all the\nutility functions a little bit\nsimultaneously or something like that\nI don't know this feel like a fair\nclassification of the kind of thing that\nyou're trying to do well it's at least\ngiven me enough to do a rambling answer\nokay and then I want to contrast that\nwith the class of approaches that look\nlike just don't push so far out of the\nkind of things that you were trained on\nwhere it feels like one of them is like\nlet's try to like tabulate all of my\nuncertainty and figure out all the\ndifferent ways in which I could be wrong\nand make sure that I like cover all them\nversus the other one is like just don't\nstray too far or something which are\nlike two different ways of approaching\nconservatism and I'm uncertain about two\nthings I'm uncertain about whether these\nare like different and I'm uncertain\nabout whether I actually think that it's\nbetter to try to approach ways that look\nlike don't strain too far as opposed to\nlike tabulate and all the uncertainty\nbut it seems to me like you're you're\npushing I'm reading you as as pushing\nfor doing something that's kind of like\nkeeping track of like uncertainty like\nexplicit uncertainty of a lot of\ndifferent things I might be trying to\noptimize for if you have any comments on\nthat well so first of all on the\nconservatism and going beyond this\nof training environment there's a lot of\nthat in the post that I emailed to you a\nfew days ago that's a fundamental aspect\nof it you could say so that's the bit\nthat's dealing with that conservatism\nand when you need to be conservative and\nwhy quantizers are not sufficient in my\nview but that's that's still sort of\nprivate so I won't go into that too much\nbut I'll contrast it with the other\naspect that you're saying so here is a\nlink to something I wrote which was when\ngood Harding is optimal and I basically\njust to take the very simplest example\nif you are a robot and you're hesitating\nbetween going left and going right and\nas soon as you've done left or right\nit's over you get your reward or you\ndon't and you have fifty point one\npercent probability that it's left and\nforty nine point nine percent\nprobability that it's right this is a\npure good heart situation you just\nchoose the optimal policy which might be\ndisastrous compared with the other one\nyou just maximize utility it's a pure\ngood Harting and it's clearly the right\nthing to do in that situation because of\nthe other things that it's linear you do\nit once it's closed you can you can't\nyou can't correct it so that was the\nthing that got me thinking so why do we\nfear good Harting and we feel good\nHarting I realized because we don't\nexpect that situation to be like that we\nexpect the situation to be different\nlike in that post I listed we expect\nthat there's going to be diminishing\nreturns that value is fragile\nwe expect that that's that's the biggest\nof it this is why we don't like good\nheart because if we just choose naively\na top option weeks basically if we\nchoose a top a top option we expect that\nthis will be disastrous it's the kind is\nthe reason why we feel a fear of good\nHarting and then I was thinking well if\nwe add that information if we add the\nfact we expect it to be diminishing\nreturns that we expect to have value\nfragility that can all be done included\nin a very Bayesian way across all the\nutility functions and when we do that a\nchunk of the good heart problem goes\naway now in that post and probably in\nsome things I'm implying but maybe most\nof the good heart problem goes away in\nthe post I emailed you you can read it\nas sort of a tone of maybe very little\nof it goes away or that's not the\nfundamental problem but basically be so\nthe post that I've just sent here and\nlinked to the it adding more information\nabout why we feel good Harting removes a\nchunk of the problem and I think that\nthis should be looked into and this if\nthere's still a good hardened problem\nleft after that then that's a separate\nother problem that is maybe the moment\nto be conservative on top of the gist\njust being Bayesian at and I have\nnoticed that Linda has put her hand up\nseveral times there yeah so I'm confused\nwhen you said that the choosing between\nthe fifty one percent and ninety nine\npress no okay so I'm don't think you\nhave the same definition of good as I do\nso I just wanted to ask you how you\ndefine a good heart\nwell I defined a good heart style\nbehavior\nas naively picking a simple or a single\nutility function and maximizing that far\nbeyond its area of applicability so you\nmean that the AI picks its own or do you\nmean when we pick it or both of the\ncases well let me let okay let me\ndevelop the example a little bit and to\nshow where we get the actual good\nHarting suppose to that you could go to\nthe left you can go to the right you can\nstay it goes on it goes on forever\nthere's a discount rate you can stay on\nthe left or you can stay on the right\nand the one of your one of your utility\nfunctions gives you a reward for the\nlogarithm of the number of times you\nstay on the left and one gives you a\nreward for the logarithm of the number\nof times you stay on the right and\nthere's also a discount function given\nthis the optimal behavior is well go\nleft because that's the best stay there\nfor a certain amount of time go right\nafter certain runtime stay there for a\ncertain amount of time and fluctuate\nback and forth according to the various\nparameters here and this kind of\nbehavior seems very sensible and very\nmuch what we would want the good heart\nbehavior for that situation would be go\nleft and stay there i picking naively\nthe best option and sticking with it so\nif you want i distinguished a good\nhardstyle behavior from a optimizing\nbehavior and what i noticed was that a\nlot of the the good heart the problems\nwith good heart come because it\noptimizes a narrow under-specified\nutility function and that's a problem\nbut if we incorporate information such\nas this is a narrow underspecified or\nyou don't have enough information and\nour reward functions are diminishing\nreturns and the values of fragile and\nthen say okay given this information\nnaively maximize expected utility you\ntend to get behaviors that are a lot\nbetter so if you want yeah so I'm yeah I\nI I'm not sure that actually agree with\nlike calling the thing you're calling\ngood heart good heart but it feels to me\nlike there's a sense in which like I\ndon't know we have some proxy objective\nand then we have some like true true\nobjective we have some proxy objectives\nand the proxy objective is noisy in some\nway and if we're like in this situation\nthere's like kind of two paths forward\nor like you want to do some combination\nof these probably but there's like two\npaths forward one is like figure out\nwhat information we're lacking and gain\nmore of that so that we like figure out\nthe way in which like the things might\nbe diverged and like put that\ninformation in so that they converge\nmore and then there's also like\ngenerically try to figure out ways to\nmake it such that in spite of the fact\nthat our thing is diverged we still\ndon't things still don't end up badly\nyeah so the the prophecy I think with\nwhen you mentioned the proxy I can\nphrase what I was saying better if we\nhave a proxy reward and we know that it\nis that there is uncertainty about it\nsorry what do you mean by there is\nuncertainty about it you mean we know\nthat it's\nwe know it's a proxy okay you know it's\na proxy and maybe we have some idea how\nit might relate to the real reward but\nwe know it's a proxy\nyep then the I'll define the good\nhearting behavior as naively máxima\nmaximize the proxy without caring about\nyeah\njust just maximize the proxy would be\ngood harder than that situation and what\nI was saying is that in many situations\nwith the kind of utility functions that\nwe use in tall boy examples that is the\noptimal thing to do but if if we then\nmove to more realistic utility functions\nincorporating our judgments about the\nvarious things they're talking about\nthen that becomes the wrong thing to do\nhowever if we incorporate that knowledge\ninto the algorithm so it's it has the\nproxy but it knows that the proxy\nderives in this way and this is the for\nshape of its uncertainty then so\nmaximize the proxy would be good Harting\nand really bad maximizing the expected\nutility with the proper form of the\nuncertainty seems a lot better what is a\nlot better so I think there was a\nconfusion conceptually between the two\nbetween yeah so yeah I don't know if\npeople were confused but I definitely\nwas that good Harting was or that if you\nwant maximizing expected utility was the\nsame thing as good Harting was the same\nthing as maximizing the proxy and that\nthese are distinct and that yeah yeah so\nyeah I'm like I'm really curious about\nthis there's there's something where\nthere's being able to look at a\ndistribution being able to so yeah I'm\nsitting here there's a true value I\ndon't know what it is I have two objects\nI have this proxy value and I have a\ndistribution over the true value value\nis just like the average yeah let's say\nthe proxy value is the most likely sure\nlet's say that proxy value yeah\napproximately is the most likely and\nwait so does does what you're\nrecommending like equates to optimize\nthe average as opposed to optimize the\nlike five does your thing corresponds to\npay it into the distribution optimize\nthe average as opposed to optimizing the\nmost likely or basically well yes for\nsome if you take average as weighted sum\nof utility functions weighted by the\nprobability yeah if you want the the\nshape of the plausible utility function\nis not a ball it's not a ball around the\nproxy it's it has a different structure\nwould you hi the average is not the same\nas the proxy the mean and the mode are\nthe mean and the mode are different it's\nthe easiest way of saying it potentially\nvery different yeah\nyeah so there's another thing that I\nfelt like I felt like you were saying\nand maybe maybe you weren't which was\nsomething that was like I don't know I\nfeel like there's something to being\naware of my own foul ability that is not\njust like averaging over the the things\nwhere you're like like there's some sort\nof like being aware that like things\nmight actually be diagonalizing against\nme or something like that were you\ntrying to point on something like that\ntoo or or not I agreed that it's\npotentially this case but what I wanted\nto point out in the post I've sent to\neveryone here and other civil ones is\nactually just being Bayesian naively but\ndoing it properly gets you a surprising\ndistance it's plausible but it won't get\nus the whole distance as I said how to\nlook at the one I emailed I haven't I\nhaven't looked at the one you emailed\nyet so I might accidentally say things\nthat are in there\ndon't worry about it it's my sort of big\nthe thing I've been working on during\nall of lockdown but putting that aside\nthe thing is that you're doing the\nBayesian stuff properly seems to get us\na surprising amount of the distance and\nthen of course yet there's various\nconservative things that you can apply\non top of that if we feel like it but\nI'm not doing that yet because I I\nwanted to see how far the naive Bayesian\napproach with proper uncertainty got us\ncan I answer the two questions that have\npopped up in the chat I'm going to give\nme time to think\nokay um so from Ricardo everyone can\nread this I won't need to repeat the\nquestion so for Ricardo Bob Pato I yes\nthe question is this is not just change\nthe problem we have\ngood ideas about how this uncertainty is\nshaped we have that's the point of why\nwe fear a good heart is because we know\nthat our values are complex for example\nbut that there is diminishing returns\nthat there are the humans are fallible\nand can be corrupted this information is\nnot present in standard good heart\nproblems but now we can't put it in as\nin terms of uncertainty over the proxy\nso it changes the problem I wouldn't say\nit just changes the problem and for the\nquestion from Roland Philip Cass can you\npropose I think it's because well I can\nonly speak for myself because I've been\nworking with good heart problems for\nyears and I didn't notice anything like\nthis until recently so but I think we\nare too focused on the human version of\nthe good heart problem where the human\nis antagonist the principal-agent\nproblem is basically what it is and\nthey're the agent is antagonistic or at\nleast misaligned with the with the\nprinciple and here it's basically can\nthe principle specify in enough detail\nto not leave any wiggle room for the for\nthe agent the principle cannot specify\nsomething like well I'm a bit unsure it\nmight be the left one or it might be the\nright one think a bit longer about what\nI really value in this way and you'll\ncome to the right conclusion and that\nwill never work with a human because all\nthat they need to do is come up with a\nplausible sounding justification for why\nwhatever was the right one but if you're\ndealing with an AI then you can say\nthings like that if you specify well\nwhat you mean you can allow\na superior intelligence to figure stuff\nout about your own values which you\ncan't do in the standard good heart\nproblem so you notice that this is\ntouching upon thinking as if I were a\nwell-intentioned AI kind of thinking and\nthat's I think that was the one of the\nkey things is that in the AI version of\nthe good heart problem we we can have\nthe agents being much smarter than the\nprincipal and figuring stuff out but the\nprincipal doesn't know and can't measure\nas long as it's well specified so I'm\ninclined okay so I'm inclined to say\nthat the if if it were the case that we\ncould specify a distribution over a over\na utility functions such that like our\ntrue utility function has non-trivial\nprobability we would have solved almost\nall the value alignment problem like\nalmost all of the part of alignment that\nis private specify values or something\nlike that and so I kind of feel like I\nmean what we working with distributions\nthat have like a true value inside them\nwhat yours what you're saying is\ntrivially easy to do just do all\npossible reward functions or value\nfunctions in existence which some weight\nokay so I if we if we average over all\npossible value functions with some\nweight well I get I guess I'm trying to\nsay something like it feels like\nexamples in which like there's like a\nsmall number of things feel like they\nmight be misleading yet the the but the\nthing the thing that I'm hoping is that\nwe can break symmetry the reason why\naveraging over\nall possible utility functions is\ndoesn't work it's because there's every\nutility function has an antagonistic\nthere's you and there's - you as long as\nthey're both there with similar\nprobabilities they might as well not be\nthere you can just take them both out\nwhen you're averaging but can we break\nthe symmetry even what I noticed a long\ntime ago even knowing there is a good\nhard problem\nthis slices the space in half half of\nthe utility functions do not have a good\nheart problem they have the opposite of\na good heart problem but these are not\nthese are nothing like the ones that\nthat they prefer that you maximize the\nproxy rather than the average okay and\nso just knowing that there's a good\nheart problem we've sliced away half of\nthem which is nothing but at least it\nshows the break in symmetry and the more\nof the stuff the meta stuff that we know\nabout ourselves that we add the more\nsymmetry we break so if you want to take\nthe trivial thing which is every utility\nfunction is there this is terrible\nbecause when you average it out you get\nnothing or you get something absurdly\nsimple and then start slicing away by\nadding our meta knowledge and keeping\nstill keeping average but I think the\nbest process can go a long way yeah it\nbasically feels like training or\nsomething you have you could have like\nand now you start with like a big prior\nover all the possible utility function\nand then you could ask a bunch of\nquestions that are like you before this\nworld of this world and each of those\nquestions like cut your space in half or\nsome\nlike that and training which\ndistinguishes a utility from its\nnegative is a different type of training\nfor one that doesn't tend to do that\nyeah yeah I don't know I'm simply\nthinking of the kind of training that is\nlike do you prefer a type of ruling\nthings out which are like which is like\nI don't know playing guess who and\nsaying like do you prefer this one of\nthis one okay we're gonna cut half of\nthe utility functions out and repeat\nthis until you're left well I'd more be\ninterested in things that sort of cuts\nbetween diminishing returns and\nincreasing returns for example because\nincreasing returns messed up things when\nyou average them you could say do you\nprefer a or do you prefer a X percent\nchance of beat and a and then one more\nchance but see and you can ask different\nlottery questions in order to cut things\nin half yeah I did I think those are\nmore the things that I'd be looking for\nthe other stuff is good too of course\nbut they less include our meta knowledge\nyeah so I mean I basically just like\nabsolutely agree that like working with\nthe average is probably going to end up\nbetter than working with the most\nworking with the mean is just gonna end\nup a lot better than working with mode\nthere might be some places where like\nworking with the mode is more tractable\nthan working it to me but I kind of\nexpect that like you collect a bunch of\ndata like this and the mean still isn't\ngreat oh yeah as a silly example add\nvalues are fragile you can do this with\na smooth min and we expect human values\nto have at least this level of\ncomplexity now ignoring for the moment\nnegative outcomes sort of people being\ntortured and stuff like that\nif we can avoid those as well\nthen it seems that we have to find a\nvery wide class of things that contains\nour preferred utility functions and that\na average of this Maximizer with huge\namounts of power will get a positive\nresult not not nearly comparable with\nthe amount of power that it has because\nthis is a small slice but yeah but\nsufficiently but as I say I'm not Eve\nthis is confusing because what I'm\nsaying is kind of the opposite of what\nis it then\nthe virtus should I be mailed to you or\nnot the opposite but a different\napproach shall we say but I just to\nrepeat again I feel that going I think\nthat lots of people consider the good\nhard problem think of the mode and the\nmean are the same and the toy examples\nthat we have they are and I think\ndistinguishing these two is very\nvaluable and I think adding extra meta\nknowledge shows that the mean can be\nsurprisingly good without even having to\nadd any quantizers or other methods and\nthat seems right to me\ndo people want me to say what I mean by\nthe opposite of the good heart problem\nor have I explained that okay they be up\nthe opposite the good up run if you want\nis when you prefer that the proxy the\nmode be maximized rather than the mean\nif you have increasing returns this is\nthe kind of thing that might happen let\nme you always prefer them mean like if\nyou take the mean of functions that have\nincreasing returns that we just make you\ngo with the mode anyway\nwell let's let's do a sort of example\nlike the nails the the people's\ncommissariat for central nail production\nhas told you you need to maximize the\nnumber of met nails and their proxy\nfunction is pieces of pieces of steel\nand you maximize that proxy and these\nare terrible nails now there is the if\nyou call the proxy V and the actual nail\nthe genuine utility function u the if\nthere's a delta so let's see consider V\nminus u minus V this is the utility\nfunction on the other side if u is here\nV is there this is the one that's on the\nother side this is a weird sort of thing\nand this utility fungus ACOG W I'm not\nfollowing\nso this is U this is V there is another\nutility function at which V is exactly\nhalfway between the two utility\nfunctions can be added if you have a\nscale assume we have a scale it can be\nadded they form a vector space so this\nis U this is the other side of the\nvector there's a W and this w this W is\nsomething that benefits a lot this is\nsort of like the utility that hates\nnails and loves pieces of steel it it\nwould much\nfor the V as stated then say a 50/50\nchance between it and the true you so if\nyou want you yeah so but this odd kind\nof utility function notice how hard it\nis for me to describe in human possible\nterms because it makes no sense for a\nhuman because it isn't a human I can\ndescribe it in terms of vector space\nit's easy this is not there but it makes\nno sense for you actually you secretly\ndo not want true nails but you wants\nmore pieces of useless still or\nsomething like that it makes no sense\nfor a human no but like the W is the\ndifference between the true utility or\nwhatever that is and the proxy whatever\nthat is\nI'll need no it's not the difference\nit's the other side it's let's see so it\nwould be V Plus V minus u v minus u is\nthe Delta between U and V W is the\nnegative Delta V Plus V minus u so okay\nso it's it's it's so here okay now I see\nwhat you mean by the other side but it's\nsort of them it's defined in opposition\nto the to you yes so no matter what the\ntrue utility function is even if it's\nsomething inhuman\nthis w is always going to be worse than\nyou worse well the point is that from\nw's perspective it wants it prefers a\nmaximising of V rather than a 50-50\nchance between U and W so it does not\nwant the mean which is 5050 between U\nand W prefers that the\nmode of the middle is maximized\nLisa w prefers w w prefers that V be\nmaximized rather than the agent be\nuncertain between U and W sorry like why\nwhy is the mean of U and W not we\nbecause like if you're defining W in a\nway in which it lead abuse it it is\npossible I am making a mistake in what I\nam saying here I okay I do I have the\nexamples very clearly to hand all my\nposts all I know is good heart it's\npossible sorry that is let me find this\nthe difference in the meantime I can see\nthat if Scott would like to break in and\nstop this question because he feels he\nhas a better question then Scott by all\nmeans go ahead at the full version of\nwhat I was saying is here I was not\nexpressing it correctly but just as\nthere are things that fear good hearts\nthere are things that auntie fear good\nheart in the same way every utility\nfunction of course would prefer that it\nbe maximized but the kind of ones that\nparticularly feel good feel good heart\nare compensated by other ones that anti\nfear it and the example there is more\ncoherence than what I was rambling and I\ngot the definition of W wrong - I have\nyou thought about the consequences of\nthe fact that your ear so you're\naveraging over utility functions\nyou're not averaging over utility\nfunctions upto affine transformation\nwhich means that you're gonna have a\nbunch of different copies of each\nutility function of up to a fine\ntransformation in your class I don't\nknow if you have anything you want to\nsay related to that that the\nnormalization of of utility functions is\na very hard problem that I have several\nposts on it showing how hard it is and\nthat maybe here we have more of a\nprincipled normalization like we can\nsort of compare with things like I like\nice cream and this TV show this much and\nthen we can at least rank the different\nutilities compared with that possibly\nyeah here we have a kind of principled\npoint that we might want to like all the\n0 which is like choose a random utility\nfunction from your class and optimize it\nwhich like that gives you a a utility\nfor each utility function and we use\nthat as the 0.44 realization I think we\nneed to with the zero point is not\nenough to normalization we need another\nappointment like my um have you heard of\nthe population ethics which is\nexponential in the number of people now\nwould you give that any probability\nwhatsoever I I don't I don't I can\nsimply don't believe in unbounded\nutility functions but the the sort of\nissue with that is that if you give it\nany probability whatsoever it doesn't\nhave to be unbounded if you give it\nanything but the tiniest of\nprobabilities then it it'll dominate\naverage utilitarianism it'll dominate\ntotally to terrorism\nit'll dominate any theory of population\nethics of course this I think is\nridiculous so you need\nto penalize it by its span that's the\nsort of mid max normalization but yet I\nfeel you have to normalize all these\nutility functions anyway to prevent that\nkind of bad behavior so I want to give\nit I want to give a concrete alternative\nproposal to optimizing the optimizing\nthe average of a spatial distributions\nof utility functions so I have a\ndistribution of utility functions I will\ndefine a zero point as I'll define the\nzero point as choose a random utility\nfunction and optimize it okay and then\ninstead of maximizing the average\nutility I want to maximize the product\nof the differences between like I want\nto choose something that is better than\nthis I want to choose a Pareto\nimprovement of this mm-hmm why\nmaximizing a product of the differences\nin utility let's say we have a finite\ncollection but we can also do this with\nthem so basically a Nash bargaining it\nbasically Nash Mart Nash bargaining yeah\nso I'm curious if you have cash flops\nabout what about like Nash bargaining\nversus Bayesian choice for maximizing\nthis painfully you know you're gonna\ncopy another personal character I could\nbut let me try and I do like the fact\nI've reached the stage in my career\nwhere I can answer most questions with\nlinks but but what I was no I'm trying\nto answer this I don't like the Nash\nbargaining equilibrium because it\ndoesn't feel natural eat it's there\nthere might be some messy stuff around\nzero yeah I wanna I wanna put thing I'm\nproposing is not exactly an iceberg\nbecause I've defined zero in kind of a\nweird way doesn't matter once to do the\nNash bargaining you need to describe you\ndefine a zero and oh I I thought the\nNash bargaining explicitly had had zero\nas like the threat points or something\nyou know you've just defined a different\nthreat point if you want the okay so\nthis has certain failures we can come up\nwith examples where it fails it's\ngenerally to do with things that only\nmaximize a tiny bit unless you give all\nyour effort to it and then the one over\n10 trillion thing dominates the product\nkind of thing and yeah so yeah I think\nthat's a problem I I'd go for one of the\nother ones like what what are the ones I\ncame up with so you can keep your zero\npoint you keep your you define a utopia\npoint where every utility gets its\nmaximum expected value that's not a\npoint that can ever be reached but you\nnormalize those to zero and one and you\nPareto improve on that that was what was\nit my mean worth nothing bargaining\nsolution something anyway something I\ncame up with some time ago\nthe okay so this thing is probably\nexactly the same thing as maximizing the\nmean yeah either way I'm not sure oh boy\nfor\nOh Joe you know what you did you both\nwith to attend normalizations so if you\nfix that normalization so yeah if you\nfix missus this is the Mean Max\nnormalization\nwhy don't I recognize it the mean action\nis pick one of them at random maybe\naccording to probability the max is\nmaximized only this one normalize each\nutility function to be zero for the mean\npolicy one for the backs policy then\ngiven that normalization just normalize\nthe mean this is now what I've just\ndescribed there yeah so are we are you\nknow sorry for everybody else who hasn't\nnecessarily followed my back catalogs\nfrom several years ago are you yeah sue\nI don't know I was originally\ninterpreting you is just like not\nworking up to a font transformation\nright like when you're originally\nproposing the thing you're just like\nyou're just like take the average of the\nutility functions and utility functions\nof those utility functions their utility\nfunctions up affine transformation they\naren't well yeah yeah the normalization\nquestion is a difficult one that I think\nis separate yeah it's maybe it's not\nseparate but it would you don't think at\nleast for my methods or for their taking\nthe mean you have to solve the\nnormalization somehow separately before\nyou do this or while doing this or\nwhatever more links okay I'm listening\nto questions while I go searching for\nlink yeah I'm assuming that if there are\nother questions they'd pop up in the in\nthe text chat\num I mean I'm listening to what you're\nsaying while I go searching for links to\nanswer that question I yeah sooo yeah\nwould you describe your position right\nnow as roughly we like we should just\nlike kind of ignore the normalization\nquestion and just like we have a\ndistribution over utility functions\nthat's coming from I guess is this\nquestion like where my utility function\nI mean actually assignment of numbers\nlike distribution over bounded utility\nfunctions that are inside some interval\nit is it is possible that some of the\nother methods like the quantizer methods\nmaybe have their own inbuilt\nnormalization which the main method does\nnot which may be a point in their favor\nbut for the moments i am for at least\nI'm saying the mean and the\nnormalization are separate questions\nhere oh I don't know why you're saying\nit seems like to take you mean you don't\nhave to normalize you oh well okay yeah\nsorry I'm imagining like I have a space\nof distributions of utility functions on\nwhere utility is between zero and one\nand I might have the same util the same\nutility function up I'm a fine\ntransformation several times inside this\ndistribution and that's that's what I'm\ninterpreting a proposal to be which is\njust like completely aside from any\nquestion of normalization well if you\nare if you take if you take actual\nutility function representatives as in\nactual functions and you have a\ndistribution over that then you've\nalready kind of normalized yeah\nlet's see my and here is the mean worth\noptimization mean worth bargaining\nsolution kind of thing when I imagine\ntrying to use this this proposal and\nstill being like killed by good heart\nit's like because I'm like okay I'm\ngoing to like specify this distribution\nover all the things and I just like\nentirely miss a dimension hmm\nand so when you're saying like like\nenumerate all the utility functions and\ngive them all some weight like by\nenumerate all the utility functions your\nenumerate all the utility functions\nwithin some space and like you could\nimagine having utility function they\nkind of like misses that space this is\nwhere the post of my email becomes the\nmost relevant and I can't really answer\nyou without referencing it okay\nI don't I don't think of it as there's a\ndistribution over these utility\nfunctions which might miss a dimension\nbut what happens when we see evidence\nthat we've missed a dimension how do we\nextend our our current model but this\nbrings us way astray from this but the\nfrom this conversation yeah I mean I\ndon't know I wouldn't call it too astray\nbecause I feel like when I hear the\nproposal the reason why I'm feel doom\nit's because of the thing I just said or\nsomething well okay well let's think a\nlittle bit more formally if you\na dimension missing and we use another\nmethod like let's say quant Eliezer if\nwe're quantizing and there's a dimension\nmissing\nwe still got big problems if with most\nmost of the conservative methods have\nsimilar problems except you could say\nthat as a conservative method at the\ntime and kind of keep things close to\nthe training environment that McKenna\ncatches these extra dimensions without\nrealizing it but it seems the back is\nsomething you could do as a prior as\nwell so what if we're missing a\ndimension can we capture it in other\nmethods that we wouldn't do it in mean\nso why doesn't quantizer take like what\nso the safety measure in quantifier is\nbasically don't stray too far from the\noriginal action distribution which is\nassumed to be safe\nso what why doesn't this take care of\nextra dimensions let's think\nso the you rank the quantizer so we have\nthe proxy we take policies that\nmaximized it up to some extent you're\nright if we were very careful with the\nquantizer we may we may implicitly\ncapture the extra dimensions but there\nis no so yes so in that way it is\nsuperior but there is no clear there's\nno clear we don't the queue in the\nquantizer we don't know what it means\nsurprisingly I have a post\non that and I'm looking for it but yes\nyou are correct it does seem that the\nquantizer can capture extra dimensions\nis a sort of kid I'm claiming that it\njust naturally does like Oh actually no\nsorry I'm revising that no it doesn't\nokay because the policies that are say\n50% effective to do the proxy there are\nsome policies that will capture this\nextra dimensions and are certain there\nwere an T on these acted this extra\ndimensions if the dimension is not\ncaptured in the proxy at all yes so you\nknow it seems that my default weight my\ndefault like understanding of a\nquantizer is you like take some learned\ndistribution of things that humans would\ndo and you like select a like top one\npercentile from that distribution is\nthat not what other people mean here I\nmean I okay I was under the impression\nthat as defined by Jessica it was not\nthat that might be the case I'm pretty\nconfident that it is that well I would\nlike to be able to answer but it's\ntaking forever I think because of the\nvideo can you think you defined it as\nthat in the post yes okay the posts that\nI'm looking for I defined it in that way\nand in the post that I'm looking for I\nlinked to Jessica's original thing so\neither I misread her thing or there are\nmultiple definitions of quantizers\nfloating around yeah so in general you\nseem to use concept like words slightly\ndifferent than I'm used to\nsorry and it's sort of fine because you\nexplained if you explain what you mean\nand but I also think that you're missing\nout on things other people have done\nbecause you don't notice\nlike usually terrible at rich literature\nreviews yeah like this the idea of using\nuncertainty over utility function as a\nsafety feature is it's a really good\nidea that I've seen around for a while\nand I've posted a link to like where I\nfirst came across it in the shot mmm-hmm\nI'm yeah okay give me my phone and I'm\ngiving up on me I'm giving up on the\nWi-Fi and I'm just going to but that is\nwhat I I mean I'm I'm reasonably\nconfident that that was what the\nquantizer was because I looked it up\nspecifically for that post I think that\nthe most likely scenario is that there\nwere multiple versions of quantizer in\ndecibels yeah that's very possible and\nlike the one that went into the post is\nprobably the one that you saw and\nbecause I worked with Jessica I probably\nlike saw other ones okay so yeah so the\nlike I'm misremembering or something\nthat's several people that I talked to\nall have the wrong definition of\nquantizer so the kind of thing that\nyou're talking about I've considered\nsort of extension of apprenticeship\nlearning kind of things okay it's a\nlittle s wrong that there's a problem\nbecause my phone which is on on the\nnetwork not on Wi-Fi I had a problem\nloading goes wrong earlier today but the\nline and forum seems to be up okay let's\ntry a lineman form I feel okay is this\nyeah so let's refocus on the thing the\nquestion is\nmy current question is what do you mean\nby quantizer because it's not what I\nmean a quantizer is an agent that went\nis that I'm reading for me that returns\na random action in the top queue\nproportion of some base distribution\nover action sorted by the expected\nutility achieved if that action is\nexecuted yeah so yeah I usually think of\nthe base distribution as like a learned\ndistribution of what humans do okay I I\nthink of the base distribution as the\nalthough all the policies you can think\nof or all the policies that they are can\nthink of yeah the version I read the\nbase distribution was supposed to be\nsomething that's reasonably safe to draw\na random action from okay some human\nbehavior okay something that it's safe\nto draw a random but not an optimized\naction from okay yes then this if it's\nsafe to draw a random action but not an\noptimized action from then this is\nintrinsically this controls for extra\ndimensions that we might not not have\nconsidered if we are convinced that it\nis safe to draw a random action for it\nwhich from human style behavior is\ndefinitely safe in the short term I'm\nnot entirely sure if it's safe in the\nlong term but okay so yeah that seems to\nbe arguments that quantizers do have\nadvantages that's wrong amazing idea\nokay then that is learning and how\nshould I\nhow conservative should the part relax\nmax I just be okay so this is the post\nthat I was referring to and I think the\npoints of the post is still valid even\nwith the yeah not quite as valid with a\nsafer based distribution then the point\nwas if we know that the optimal one is\nwrong and we know that the zero is also\nwrong because it doesn't help\nwhat are we how do we know that it's\nsafe well as you say going for say if if\nif we could draw a hundred times at\nrandom and expect to not go disastrously\nwrong then 99 percentile is probably\nsafe in a similar way I very powerful we\nget we get 99 percentile actions like\none out of every hundred actions we take\nyou could just do with policies anyway\nsorry I'm I'm starting to fade I fear\nand I think the questions are getting a\nlittle more wavy we call it well let's\ncheck if anyone has a last question\nespecially a sort of simple\nclarification question which are easy\nokay I have a symbol what is the\ndifference between their will intention\ndi and an aligned yeah\nthey well into the aligned AI is an\nactual objective a well intentioned AI\nis a thought experiment that allows me\nto explore new ways of possibly getting\nto an aligned AI the yeah a well\nintentioned a I as I've described it\nmight still kill the universe\nit's if you it's focusing on a different\na different approach to the air problems\nor a different class of the AI problems\nmaybe for meso optimizers the difference\nmakes sense but not for general they\neyes I don't want to erupt I know those\nyour question but afterwards class the\nnaive question go ahead\nwait Soren did that answer your question\nthank you so you both work along with AD\nme or with Mary I have a question about\npost I think it was like it was from one\nof you tapped you'd cast these talks\nabout AI lineman being like a\ncryptographic rocket problem and when it\nI think it went on to describe the\nprocess by which you try to align it so\nyou began your talk today doctor I'm\nstrong with a discussion of like we\ndon't have an AI we don't have the code\nyou can't doesn't quite work the way one\nwould hope it would do you actually have\nthese things so instead you try to\nintroduce a characterization or a\ndefinition or at least Miri has\ndiscussed introducing the tools that\nwould help one get ease with what they\ncalled like a space plane to the moon\nand they described it as like the\nprinciples of first derivatives and\nsecond derivatives towards getting us\ntowards rocket science so how do you\nguys view AI alignment in terms of given\nthat we don't have any any AI now do you\nmostly view it as a problem of like\ngetting the tools necessary for it or do\nyou go along the path of life let's\ndefine what super intelligence is even\nif it's probably attractable and then\nlet's go from there I mean we're looking\nat your work but and some others but I'm\nhaving trouble recognizing the\ndifference between the two approaches\nand I don't know if that's coherent\nenough I I ramble in many different\ndirections and finds many different\nideas\nbecause that's how my brain is wired\nmaybe one I always I was tend to make\nthe assumption that it is a super\nintelligence and that we can we can't\nconstrain its knowledge except if it's\nin a very specific cryptographic away\nthe reason being that most solutions\nthat work for that kind of anything most\nsolutions not all but most solutions\nthat work for super intelligence work\nfrom gamma-ray eyes as well and Ally I\ncan answer your question with saying I\ncan't answer your question right now and\nthank you very much better and sorry dr.\nGerber and do you anything that do you\nhave any response to that question or\nrambling ha yeah I think I think I miss\nparva-- a few things I think are\nadjacent that I want to say well say is\nthat like I feel like I should probably\ndisclaim that I don't know we're talking\na lot about things that are trying to\nlike figure out how to get from utility\nfunctions into a eyes which like I kind\nof I don't know like like I wrote this\nguitar post another than facts which is\nlike a very general thing it's like\ngenerally knocks like these the the\nsub-problem that I'm thinking about most\nof the times where like I'm mostly\nthinking about the problem such that if\nI can have a thing that is trying to\nlike optimize reliably any hard\nobjective I could write down whatsoever\nlike this is like the the part of the\nproblem that I am like I have my sight\non right now we're like there's there's\nvalue in the like reliably type thing\nlike be able to direct something at\ndoing a thing but I kind of feel like\nfrom my point of view if we could make a\npaper\nmaximizer only we would have solved a\nlarge portion or problem and so a lot of\nthis is like out of the space that I'm\nlike normally thinking about and then\nanother thing I want to say possibly\naddition to your thing and possibly not\nI don't know it's definitely definitely\nis the case that the way in which I view\nmost of most of my time working on\nHighline related things is like trying\nto build up foundations about having\nlike the directs like without like\nhaving the direct use case like I have\nlike a few different like these cases or\none to like resolve some uncertainty for\nsome confusion around some area but like\nI'm like trying to work with abstract\nenough objects that no specific like use\ncase that I have in mind will be like\nthe only plane which are movies or\nsomething and so like it does feel to me\na lot more like trying to like develop\ncalculus than trying to like build a\nrocket in the sense that like it feels\nlike it's like a thing that you need to\ndo before building a rocket and I feel\nlike it's hard to be able to look at the\nproblem of building a rocket and figure\nout what the right type of calculus to\ndevelop is without actively working on\nbuilding the rocket or something but and\nso maybe I'm doing it very badly I don't\nknow but I ie\nI do think that like most of my time is\nlike most of the like analogies little\ndraw and stuff you can try to think\nabout them as being directly about the\nISIS that's then it might like miss the\nfriend point ad or something okay thank\nyou very much and then I am one more but\ni blew some other people might have\nquestions to it so i wanna you know them\nin case they do if not then i'll ask how\ndo you as a researcher at mary how do\nyou feel about the choice to not publish\nany research so i believe one of the\nlast publications that I saw from Mary\nwas the logical adductors paper that you\nyou are the primary author on how has\nthat changed\nyour production of research hasn't been\na net benefit to you personally I like I\nknew inside view of this but it's weird\nfrom an outsider perspective from\nsomeone just looking at Mary's work for\nquite a while and then see a complete\nlike hiatus on that in terms of like net\nbenefit on me personally there are\ndefinitely ways in which it's like good\nand bad like two ways in which like it's\nbad is that it like makes collaboration\nharder another way that it's bad is that\nlike sometimes I would want to use\nwriting stuff up as a motivation for\nmyself to formalize things more or\nsomething it's like I have all these\nideas in my head and like the process of\nlike writing them down is like a\nmotivation to do a certain type of\nthinking that I have to like externally\nmotivate myself in a different way to do\nor something like that but I a large\npart of I know that there's there's a\npoint that I want to make about\npublishing which is that I think that it\nlike depends a lot on whether you expect\nlike I kind of expect the kind of stuff\nI'm working on is has like a chance of\nlike developing something really cool\nand a chance of not and like the\nincremental progress is not as exciting\nor something versus like if I was\nworking in a field that was more close\nto a lot of other people then like the\nincremental progress would matter like\nI'd have to get my incremental progress\nout so that other people would get more\nincremental progress on it and it can\nlike build up through a bunch of like\nindividual like small jumps or something\nversus like I think that I am like\ntrying to like do some like very and\nseeking trying to find some like large\njumps in in the way that we're\nconceptually thinking about the problems\nsuch that\nthe costs that I pay by working\nprivately until I have something that\nseems really interesting and then\ndeciding whether I want to share it and\nthen put it in the work to sharing it is\nlike not that much of a cost because\nit's like the sharing of the incremental\nstuff doesn't actually help that much\nbecause there aren't like a lot of other\npeople that are working on the exact\nsame things that I'm working on in in\nthe same way versus if I was in like ml\nthere would be a lot of people who are\nworking on very very similar things and\nvery more way it would be a lot more\nbenefit sharing I'm obviously a lot more\nconventionally academic and also some of\nthe way I work is generating random\nideas and there it's much better to sort\nof toss them out into a less wrong post\nand move on and see if other people\ndevelop them so yeah I probably my\nreckon is III disagree with the Mary's\ndecision at first order but I also trust\nthat they've thought it through at\nsecond order\nwell thank you both for your time and\nthanks for answering the questions I\nsincerely appreciate it okay thank you I\nwould like to just say two more things\nand very much so thank you for that and\nthen the the second thing is just in the\nbeginning which said that there were\nactually no implementation of any of\nthis and actually that turned out to be\nwrong in that joon-kyu has made a actual\nexplicit implementation of a utility\nfunction for a superintelligence and\nalso an implementation of meta semantics\nthat's been published at mr. ethical AI\nand next week I will try to give a\npresentation on that in\nreading group on Tuesday at this sad\ntime so I hope to see some of you there\nhey that should be all see you next week\nall right I I'm away from a computer so\nis there any way you could like like hit\ncommand and copy comments or no because\nthere were a lot of links shared and\nsome questions yeah and I don't know a\ngood way of saving any of this for", "date_published": "2020-06-11T11:33:30Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "e52d1d4876d14ded2a82354772816b81", "title": "AISafety.com Reading Group Session 79", "url": "https://www.youtube.com/watch?v=N7tZ7iRmmQ8", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "I think it's customary that I introduce\nyour Armstrong\nStewart Armstrong will is from the\nfuture of humanity Institute and center\nThomas fellow I think and you're he's\nbeen he's working in the harder more\nmathematical part of AI safety he is at\nleast outside of the United States he is\nby far the most prolific writer and in\nmy opinion one of the most insightful so\nI am very pleased to to introduce him\nand welcome to the AI safety reading\ngroup well thank you with a introduction\nlike that I definitely have a lot to\nlive up to but yes so should I talk\nabout myself or should I just plunge\nstraight in feel free to say a few words\nabout yourself okay\nwell I'm working at the FHI as someone\nsaid and I've been working on various\nideas in AI in like trying to ensure\nthat you could turn off a eyes and\nthings like that I am generally aiming\nto shave off parts of the problem so the\npieces of it can be considered solved\nand I also my other way of working is if\nsomeone says well you can't control an\nAI because of a B and C I'm thinking\nokay so can we hit a b or c separately\nin any way and that's where some of the\nideas of colin the presentation i'm\ngiving today was then I looked into\npeople who were trying to do inverse\nreinforcement learning which I'll\nexplain in the presentation and I\nrealized there's huge problems with it\nthat I formalized what the huge problems\nwere and this actually is leading to\nsome interesting solutions\nright um should I now start with the\npresentation please okay so as the title\nyou can see is it's the claim of its\npapers that you cannot learn human\nrationality and reward together you\ncannot is an asterisks because in theory\nyou can't do it at all in practice\nhumans do it effectively all the time so\nthere's an a very interesting question\nas what's actually going on there this\nis based on a paper that I did with Sir\nEdmund ER man who I believe is now in\nBerkeley and trying to find a ph.d\nprogram there this came from the idea of\ninverse reinforcement learning and\nstandard reinforcement learning a human\ndesigns the reward and gives it to the\nagents this might be problematic if it's\na badly designed reward so an inverse\nreinforcement learning the human does\nsome things the expert trajectories it\nextracts from it what the reward should\nbe this the papers that have done this\nhave made the assumption that human is\nrational or noisily rational in very\nspecific ways and it seems that maybe\nthis could generalize to irrational\nhumans and the problem is that it\ndoesn't so without assumptions you can't\nsay anything individually about\nrationality or rewards you can't say\nanything about rewards and you Carolee\nsay much about rationality now this is a\nso-called no free lunch theorem and\nthere's a lot of no free lunch theorems\naround in this area and they're normally\nnot very exciting because you just apply\na simplicity prior where simple words\nworlds are more likely and the no free\nlunch theorems go away however this one\nis cannot be solved with simplicity\npriors in fact simplicity priors will\nmake the problem worse as we'll see as I\nmentioned\nhumans can and do say a lot about\nrationality and rewards so how do we\nsquare that with the initial claim well\nwithout assumptions is the key part\nso therefore if humans are doing this\nhumans are making assumptions and a\nquestion is what are the human\nassumptions and can we extract them and\nthen hand them over to a eyes but on to\nthe first problem of defining\nrationality and reward suppose we say\nthat a human has a preference X but\nisn't fully rational about them so he\nsort of means that humans have the\npreference but they implement them\npoorly so the implementation is key so\nI'm seeing a human as preferences plasm\nand implementation or reward +\nrationality those sort of using those as\nsynonyms and that's how I'm seeing the\nhuman but what can we actually observe\nabout the human if we look at them well\nwe can observe human actions and maybe\nthe human brain which means we can\npartially observe the human policy so\nthe actions that humans will take in\nvarious environments and that is what we\nobserve so if we formalize all that I'm\nmodeling the human as a pair P and R\nwith our a reward and P a planner this\nis the implementation device and I see a\nplanner as mapping a reward on to a\npolicy P of R like the fully rational\nplanner is a planner it Maps the rewards\nto the optimal policy and a variety of\nother planets that were encountered by\nPI H I'm designating the actual human\npolicy and I'm assuming it deterministic\nthough that's not necessary it's just a\nsimplifying assumption and a pair P and\nR it is compatible if the planner Maps\nthe rewards to the human policy this\nmeans that this is a candidate for\nexplaining the behavior of the human\nand a key fact is once you learn that\nPNR is compatible there is nothing more\nthat you can deduce about it from\nobservation the reason is the\ncompatibility so even if you're a\nmissense you cannot get more information\nbecause the planner gives you the human\npolicy which means the planner and\nreward pair perfectly compare perfectly\npredict human actions so anything you\nobserve the human doing will be exactly\nwhat the planner reward pair have\npredicted so if you have two pairs that\nare both compatible you cannot\ndistinguish between them because they\nmake exactly the same predictions this\nis sort of the weak version of the no\nfree lunch theorem but let's see how bad\nit can get so let's say that p 0 and r 0\nare compatible pair that are also\nreasonable they have all the nice\nproperties that we want they encode what\nwe think human rationality and reward\nare all about here are some other pairs\nthat will also be compatible the first\none is the rational planner there is a\nreward which when paired with the\nrational planner will give you the human\npolicy there is also an action rational\nplanner which just takes greedily takes\nthe most effective action in the\nimmediate without planning for the\nfuture this pair is also compatible\nnotice that they use the same reward\nwhich we'll be presenting a bit later\nand there's also the indifferent planner\nthe indifferent planner is a planner\nthat map's all rewards to the human\npolicy without caring about what they\nare and if you put the 0 reward this\npair is also compatible then we get into\nsome rather interesting versions you can\ntake the negative of a planner by\ndefining minus P of R of P of minus R if\nthat's the case then the anti rational\nand the anti\naction rationale planners are also\ncompatible one way of seeing this is\nthat it's impossible to tell the\ndifference between an R Maximizer and a\n- are minimizing annoyingly - piece\nthere on - are zero are also compatible\nso even if we have some evidence in\nfavor of the reasonable pair there is\nanother pair that also seems reasonable\nhas the reward completely reversed by\nthe way for those who are wondering why\nI don't take the negative of the\nindifference planner it's because of the\nnegative of the indifference planner is\njust the planner itself now all of these\nare compatible which means that we\ncannot distinguish between them from\nobservation so this is the point where\nwe might appeal to the certain\ncomplicity prior to kamakura of\ncomplexity except Komarov will not save\nus here and the ridiculous\npairs are actually simpler I put their\nlikely simpler because coma graph\ncomplexity depends on your choice of\nlanguage but for most reasonable\nlanguages the ridiculous pairs will be\nsimpler to show you the to give you an\nargument as to why it's not the case\nnotice that all compatible pairs define\nthe human policy so any\nhas to have better hand wavey here comer\nof complexity that's higher than the\nhuman policy so the complexity of the\nhuman policy is a lower bound in most\nlanguages on the complexity of any pair\nnow this in pseudocode is the definition\nof the indifference planner the it just\nsays return the it ignores the reward\nentirely and it returns the action the\nhuman policy would give so this planner\nis a few symbols longer than the\nhuman policy therefore a very comparable\ncover of complexity and as long as the\nzero reward is similarly short this pair\nis very close in complexity to the human\npolicy itself the action rational\nplanner is also very close in complexity\nyou just need to define the ARB max\nfunction which is basically a for loop\nand then you have the rational reward\nfunction which assigns one to the action\nthe policy will human policy will take\nand zero to all others\nnotice the contrast between the\nindifference the indifference pair and\nthe action rational one for the first\none all the complexity is concentrated\ninto the indifference planner and the\nreward is trivial for the action\nrational one the action rational planner\nis simple whereas all the complexity has\nbeen concentrated into the reward\nfunction but in both cases they are just\na few symbols above the complexity of\nthe human policy the action aren't\nirrational one can similarly be defined\nnow this shows that these three pairs\nare simple but why do we think that a\nreasonable policy would be more\ncomplicated well first one problem with\nthe reasonable policy is that the\ncomplexity of its negative is about the\nsame as long as putting minus signs are\nsimple than the complexity of this pair\nlooking at complexity of the anti\nversion are the same so we can't really\ndistinguish between r0 minus r0 which is\na bit annoying the other issue is that\nthis pair the reasonable one defines\nhuman biases if we can define what a\nbias is as the difference between the\naction the human take and the action\nthat humans should have taken so all the\nbiases and the extent of their biases\ncan be extracted from this\nthe other three pairs on the other hand\ndo not have a conception of bias they\njust don't know what it is so the a\nreasonable pair actually has strictly\nmore information than those other pairs\nwould have so if we look at this\ngraphically we can see these three pairs\nas the sort of minimum complexity you\nwant international indifference in\naction anti rational we have the\nrational and aunt irrational ones that\nare a little bit more complex because\nthe planner is a bit more complex to\ndefine and somewhere up there we have\nour reasonable pair and next to it our\nanti reasonable pair so simplicity will\nnot help you here simplicity as I said\nwill hinder you but now let's move on to\nthe second part of the puzzle if it's\nimpossible in theory and yet humans do\nit all the time what is going on here\nwell humans use what I'm calling\nnormative assumptions though do let me\nknow if you can come up with a better\nname for them they distinguish between\ntwo compatible pairs two pairs that map\nto the same policy so they can't be\ndeduced from observations because they\nmake the same predictions yet they\ndistinguish between them and what I'm\ntrying to do is to figure out how humans\nassess each other's goals each other's\nrationality and their own rationality\nand we do it quite well and we do it\nwith a large degree of agreement so the\nfirst thing that sprang to my mind was\nto look at Shane you can shame goes with\ncertain behaviors people look\nembarrassed they look down at their feet\nmaybe they read and if you see this as\npurely on observation you just notice oh\nthe human is displaying these behaviors\nbut if you make the normative assumption\nthat feeling shame means that something\nhas gone bad\nthen you can start distinguishing\nbetween different pairs very well for\ninstance you can slash the anti rational\none\nyou can slash all the negatives as well\nwell because humans do not feel shame\nall the time so they're definitely not\nmessing up or all the time you can also\nget rid of the rational ones because if\nthey were fully rational they would\nnever feel shame\nso just by making the assumption the\nchain is not just an observed behavior\nbut actually a sign of badness we can\nstart slicing into the possible pairs\nquite strongly there's a few other\nthings like people model each other as\nhaving few complex emotions rather than\nmany simple emotions if we go for this\nwe can start saying that anchoring bias\nfor instance is a bias and talk more\nabout that if you're interested human\nnarratives are also quite interesting we\nhave narratives about ourselves and\nabout others and if we take these\nnarratives as prescriptive then we can\nstart also this is also a normative\nassumption then humans sometimes say\ntruthful things and sometimes lie and\nyou can train a you could train in the\nfuture an agent on human statements\nabout statements of fact and you could\nfigure out whether humans are truth or\nlying and then you could apply the same\nthing to humans talking about values or\npreferences so a perfectly trained truth\ndetector could detect what human dives\nare by taking human statements about\ntheir values now this seems a bit of an\nso the in practice this might be doable\nbut conceptually it's a bit of a\nroundabout way of doing it what does it\nmean the humans are lying about their\nvalues and how does that parse well this\nis where we get to what I think the most\ninteresting approach which is that\nhumans to model themselves and they\nmodel others and we model each other as\nreward\nagents and I'm thinking of using these\nmodels as part of the definition of what\na reward is so the human reward is at\nleast in strong part what the humans\nmodel the human reward to be the bad of\nthemselves and that of others anyway\nthis is enough presentation on the paper\nthere's another part of the paper which\nis that this just to show that the PR\nmodel can also model other things like\nAI is overriding human preferences and\nthings of that nature but I'll leave it\nhere for the moment\nand I have a few more slides that I\nmight bring up in discussion if you want\nand this is what those slides are\nbasically about why this result which\nseems like a negative results that you\ncan't do that actually has me slightly\nmore optimistic about the future of AI\nanyway thanks for listening there thank\nyou it was a great presentation let me\njust\nso here is now I'm gonna stop with the\nthere are three property for tonight and\nthis is the human preferences are\nundefined under defined in exotic a I\nchosen future circumstances this has\nbeen in the back of my mind is a major\nproblem with the AI doing enough\noptimization power and aiming for\nvarious fantastic worlds represented by\nthese photos here", "date_published": "2018-01-17T21:11:10Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "4de57323447a91c69730cff9a9248b73", "title": "229. The Case For Aligning Narrow Superhuman Models", "url": "https://www.youtube.com/watch?v=ISxu8lvR8Yw", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "and welcome to session 229\nin the asa.com reading group tonight\nwe'll be discussing\nthe uh the post on alignment forum\ncalled the case for aligning narrowly\nsuperhuman models\nby ajaya kocher\nwe've seen some previously previous work\nby jayakotra\nher work on ai timelines and explaining\niterated distillation and amplification\ncurrently she's a senior research\nanalyst at oregon philanthropy\nthis my post is from uh\nand we will go through miri's comments\non this almost certainly in the next\nsession but\nwe usually have some algorithms that are\ntrained on data\nand then we um\nwe obtain some kind of model that we use\nfor uh\ntaking actions or predictions and this\nkind of thing\nin this case we're talking about super\nhuman models i'm not entirely sure\ni like the word superhuman in this case\nbecause we also care about\nuh capabilities that are strictly below\nthe superhuman level and we also care\nabout those that are\nat roughly the human level at expert\nhuman level and\nat a level where humans can't even\nevaluate\nwe are talking about narrow uh narrow\nmodels so\nsomething like playing go would be an\nexample of a very narrow model\num but um but some of these\ntechniques we're talking about are\nactually surprisingly general like give\nadvice\nso narrow shouldn't be interpreted\nin too narrow a sense if you get what i\nmean and then finally aligning\nthat's uh what we're trying to do to\nfigure out how\nthese models can be used to their full\npotential to do what we actually want\nan example would be if someone goes to\ngt3\nand say i am sick i have these symptoms\nwhat should i do\nin this case dbd3 has some novel\nsome knowledge about this because it has\ndigested a lot of information\nfrom doctors and it might actually be\nable to reason\nthat the uh specific disease is\ntuberculosis or whatever\nand but it doesn't want to tell us that\nif instead it wants to create the\nwhere you have this improvisation game\nwhere it's just trying to\npredict the next word because that's\nwhat the qt3 actually cares about\nso the main objective of this kind of\nresearch\nis to figure out how to give the best\nadvice\nwhen we know less than the model about\nwhat the real advice is\nand also have some kind of systematic\nway of figuring out\nwhether the model is doing the best it\ncan\na big reason why this kind of research\nis something to be optimistic about\nis because we are actually doing the\nthing that we want to be really good at\njust in a smaller sense yeah so we want\nto be really good at aligning\nuh strongly broadly general superhuman\nmodels\nand aligning narrow models are kind of\nthe same thing\nwe'll get a lot of practice on doing the\nthing that we actually want to get good\nat\nand on the outside that's a really uh um\n[Music]\nthis will this experience will be\nhelpful so\nwe take a big alignment project and we\nscale it down\nand then we try to find some small\nsolutions um\nfor this problem so an example of a\nnarrowly superhuman model would be\nsomething like other ghost zero\nbut that's not really an interesting\npoint of view from a\nline from an alignment point of view\nthis is really interesting\nbecause you can just specify the\nobjective really simply\nit's just to win at go and we care about\nthe ones\nwhere we sometimes distinguish between\nthe outer alignment problem and the\ninner alignment problem\nin this case the outer alignment problem\nis very easy for alpha\nserum and we want something that is more\nfussy where we can't write it down\nso we need some kind of human feedback\nso in general a way to generate a\nproject\nfrom this uh in in this general area\nis to choose some kind of fussy domain\num\nsomething where we have some kind of\nintuition or expectation\nthat the language ones we have are in\nfact superhuman\nright now we just can't get that out of\nthem yet\nthen we want to have some kind of to\ndesign a reward learning scheme\nwhere we find a way to train the ai\nthat allows us to reach above the level\nof the trainers\nand then once we have done that we need\nto\nto see that this is actually something\nthat will continue to scale\nand something that will also work when\nwe um get to bigger models\nwith the ones that we actually want to\nend up\naligning so we can't just hard code\nknowledge or something like that\nand then finally once we have these\nmodels we need to look into pathologies\nway that some sometimes the models do\nwrong things maybe um\nthey start to lie or something like that\nand we need to understand that\nand ensure they don't lie so\nthe uh the example uh jaya gives\nof something that is close to this is\nthe paper by paul christiano\nlearning to summarize from human\nfeedback it's not just\nuh procrastination opening up\nthis almost follows this formula but not\nquite\nin that the fussy domain of um yeah\nsummarizing\num is uh somewhat easier and\nsome of these the people also don't\nquite get as far into as we want\nbut so what kind of projects fit into\nthis agenda\nand which don't it's not binary really\nbut\nsome things that means that it counts\nmore is if we're doing something like\nfine-tuning an existing large model\nbecause that's where we expect this\ncapability to actually resize\nit needs to be functioning enough that\nwe are dealing with humans\nand it needs to be something that is\ngenuinely helpful\non the other hand if you are if you have\nan obvious training signal\nsome either either just something you\ncan specify or\nalgorithmically code then it probably\ndoesn't fit into this agenda\nif you only optimize for usefulness and\nnothing else\nand no generality then also probably\ndoesn't count\nand if you just work on doing the model\nlarger then that doesn't count either\nwithin this agenda\nmaking the model larger is not always a\nnegative thing sometimes it can be\npositive and similarly\nit's rather complicated i'm not quite\nsure i agree with this\nwe can't go out and say it is always\nevil because that would imply that um\nand that's probably neither true or\nhelpful to say\nbut on the other hand i don't really\nthink it's very complicated\nquestion in that i believe that almost\nall\nfrom almost all material circumstances\nuh making the model larger\ntheme is that could not be done right\nnow\nis one that she terms sandwiching so\nthat's one\nwhere the uh the model is decided\nthey're not super german\nright now because we have one layer\nwhere the\nhumans are more cable than the model and\nthen we have a model\nand under that we have some other group\nof humans who are less stable than the\nmodel\nand the idea of the same teaching\nproject is to figure out how to help the\nlower part\nof the\num train the model to be as good as\nthe more capable humans\nand this is of course also something\nthat will help us out in that\nthe models are improving and it might be\npossible\nuh to even go above what the more\ncapable humans have\nin some way\nso how does this uh reduce long-term\nrisk extension\nthis is basically what we want to do\njust\na bit more narrow uh so uh\npracticing as close as possible to the\nthing you want to get good at\nis probably really valuable and it's\nlikely that\nsince um this kind of work will allow us\nto iterate\num then that is likely to be uh\nfar more effective than doing uh some of\nthe other conceptual research can be a\nbit\nuh pie in the sky in that you don't get\nfeedback on whether these things are\nactually working\nsince you don't actually have a bottle\nto work with\nthat's also likely to help us and you\nmight see some of the pathologies\nearlier\nthe treacherous turn would be a\nprominent example of something that we\ncould see\nbefore it gets really bad\nwill give us the alarming community\nas a whole some know how about how to\ndeal with this\nand some infrastructure some\norganizations and community and the hope\nis\nthat if all the models that\nif we succeed on mostly aligning\nthe relatively weak models we have now\nthen once we actually get\nsuper human general intelligences we\nwill have much more\nability to align them since we've\npracticed so much\nand of course if we discover a potential\nsolution or maybe the real solution\nthen we will be able to have a number of\ntest cases to see\nif we believe that this is a full\ngeneral solution then we'll see be able\nto see does it work\nin these kind of specific cases\nwe might also see something that looks\nconventionally\nimpressive um at this point in\nin the post a culture writes this as uh\nonly a positive thing and i thought that\nthat was a\na very dangerous thing that we could see\nconventionally impressive results in\ndemos\num later she uh clarifies this could\nactually also be a problem\nbut um but there are definitely upsides\nlike if alignment researchers can show\nsomething that is really impressive\nthen that is good for their careers and\nget good for\nthe field of alignment\nso why not do other kinds of alignment\nresearch\nin a geocachers model\nshe devised all kinds of research into\nconceptual research\nsome grid worlds and gains and\nmainstream\nmachine learning safety and compared to\nthis she believes that\naligning narrowly superhuman models\nis a more real media and also more\ntractable\nso for instance the grid worlds and\ngames\nuh are usually very small in that\nand that can give some bad habits for\nhow you look into whether it's doing\nsomething good or bad because\nif the model is small enough then the\nresearcher can actually see that\ndirectly and we bring some more indirect\nmethods for working with ais that are\npotentially smarter than us\nso in this case we get some discipline\nand that we\nalready at this point have to use some\nmethods that need to scale beyond\nwhat we can truly just\ndirectly\nalso a kind of research where progress\nseem much more\nlegible in that conceptual research in\nparticular\ncan be very very illegible um\nso that's another reason to pursue this\nkind of research\nthe potential is also very large\nin that you could imagine if this is\nreally successful\nthen the the task of aligning these\nmodels could be\na um a big part of the work that\neventually goes into making these large\nmodels so\nwe can also have more people involved in\nthis in that\nif you consider a conceptual like\nworking at\nthe machine intelligence research\ninstitute would be something\nthat not many people can do there is i\nbelieve\nactually that miri already have um\nsufficient funding and just have a\nreally hard time\nfinding people who have the specific uh\ntaste or intuition\nfor how to do this kind of conceptual\nresearch and i think just\nit might just be a fact of the world\nthat most people can't do that\nwhereas lining their early superhuman\nmodels might be\nfar more possible\nmainstream machine learning safety is\nvery technical\nand mathematically dense and if as we\nexpect\nthey are actually not really considering\nthe real problems\nthen having the alignment\ncommunity becoming larger\nis another reason why that's good\nso what kind of skills uh\nand tasks are required to align narrowly\nsuperhuman models\nwell a lot of software engineering and\nmachine learning engineering\nneeds to take place we need to have a\nlot of work dealing with\nhumans in that human feedback\nis likely to be an essential part of it\nand there will be a lot of problems that\nare\nhard work but not but doesn't require\nthe very best minds in some way\naj's intuition is that if we have proper\ninstitutions for getting people\nup to speed on this and sufficient\nfunding\nthen it might be possible for someone\nwho doesn't really know much about\nai safety but is strong on a software\nengineer to start\nworking productively with this within a\nhalf to a full year or something like\nthat\nat the am are these a number of possible\nobjections and responses to these\nobjections\nthen the first is how would this address\nthe treacherous turn\nand it's just quite uncertain\nand there's some weakness of of this\nresearch agenda\nthat it doesn't explicitly address us\nthere are many research agendas which\ndoes not\nfor instance measuring machine learning\nsafety\ndoesn't really help either there are\nsome ways\nyou can imagine that transparency would\nbe a part of alignment and\nin that case um that's something that\nwould help\num we might see many treacherous turns\nsomewhere in particular if we go looking\nfor them\nwhich is something that would be\na big part of this particular um\na big part of this particular research\nagenda um\nand of course if we build some\ninstitutions that\nwork hard with these kind of problems\nthen having those\nready when the first mini treacherous\nturns happen\nwill be very beneficial\ndoesn't this feel suspiciously close to\njust profit maximizing\nwell somewhat but there are a number of\nkey differences\nfor instance if the most cost effective\nthing to do\nis to make the models bigger which it\nseems to be in many cases\nthen fine-tuning them might be um\nand trying to align them might not be\nthe most commercially valuable thing to\ndo\nwe are not trying to find the easiest\nproblem that we can\nuh but rather the ones where this\nresearch intended fits best where\nalignment\nis most interesting and um\nwe want to learn general models even if\nthe main specific\ntechniques are available so that's\nanother reason why\nthis differs from just profit maximizing\nso we want to have a community where uh\nmost people who are doing this work is\ndoing it for altruistic reasons\nand this is legible to analogous\nand and things like that\nso if we are closely but not quite\noptimizing for usefulness\nthen that doesn't seem obviously\nneglected because a lot of people want\nsomething that's useful\nwell in adjacent estimate most\nmainstream machine learning and ai\nprojects are either extremely domain\nspecific\nor they're focused on just scaling big\ngeneric models\nso this kind of thing in between that we\nare want to do\nmight be somewhat detected still um\nan example would be the paul christian\npaper learning to summarize from human\nfeedback\nwhich was from christiano's obviously\nmotivated by long-term\nexistential risk um and that's something\nthat we would not have seen\num created or at least not have seen as\nearly created\nwere not for this kind of uh\nmotivation and it matters a lot who is\ndoing this research\nand why they're doing this research not\njust that it gets done\num and right now the aich community has\na problem in that there isn't really a\nvery strong community on\nuh like mentoring and um uh\nno building origins um and no one to\nuh like a pipeline for how to uh\nget into this kind of technical\nalignment work\num again says that she\nstrongly suspects that something like\nthis sandwiching\nis just not being done in any commercial\nsetting\nthen there is the objection that this\nwould increase investment in scaling ai\nand this was actually quite close to one\nobjection that was raised\num in the discussion of the previous\nvideo\nlast session um so will this increase\ninvestment\nwell right now we are seeing investments\nbooming\nand we expect this will continue and\nsince ai is in general uh\nmuch larger than ai alignment it and\nit's growing faster\nthe jr gets gives us two orders of\nmagnitude\nand that means that even if adding one\ndollar to the alignment\nbuilds up a height then it needs to\nbuild up less than\nthis 100 times as much\naccording to ai investment in order for\nthis to shift the balance between\ncapability work\nand alignment work\nand there's a good reason that to\nbelieve that if\nthis is successful then this will shift\nand substantial number of\nmarginal dollars from making the models\nlarger\nto making the more fine-tuned if we can\nshow that this actually works\nbut here aj knows that\nthis kind of research could generate\nexciting demos and that's certainly\nsomething that is possible\nand something that potentially could\ncould increase investment in scaling ai\nso this research agenda\ncould be paired down right there is uh\nthe last part of the project generation\nuh formula was uh to stamp out the\npathologies\nand why not just focus on that\nwell this is also a valuable thing to do\nbut if you don't have the other\nalignment\nthis is basically just robustus\nreliability work\nand\nthis is something that is hard to do in\na general sense\nin that one a lot of reliability work\nassumes\nthat there is a human judge which at\nsome point can decide whether\nan action is good or bad and it's not\nreally neglected there is a lot of work\nbeing done\nbecause obviously if you want to build\nautonomous weapons then reliability is\nreally important\nuh fairness accountability and\nconspiracy of\nyeah under which a lot of robustness\nwork is being done\nalso if you want to build a subculture\nor brain around a life\nthen that's not easier than focusing on\nthe same as others are doing right\nso um adjacent sentence my best guess is\nthat it would actually be harder to tell\nwhich people working in the robustness\nspace\nare optimizing for reducing long-term\nexistential risk from ai versus for\nprofit\ni think that's probably true but it's\nalso a very very small configuration\ni'm a bit confused about why idea thinks\nthis is this is really important\nbecause sure it matters\nwho and why they do the research but it\nalso matters a lot that the research is\nbeing done\nso another option we could do instead of\nthis research agenda is to focus\non evaluating and testing the existing\ncandidate long-term solution that we\nalready have\nfor instance paul christiano's agenda um\nso um i i tried to describe paul\nchristiana's agenda really crisply and\ni couldn't actually find a really crisp\ndescription of\npaul christiana's agenda right we have\nthe iterative distillation and\namplification\nwe have the aic via debate we have a\nnumber of\nthese kind of headings but um i i\nhaven't actually seen\na really crisp description of this\nagenda and\nand that might actually be an important\nproblem because\nright now are somewhat influx they are\nnot described there's a lot\nand there are also other problems with\nit in that\nwe uh we might be more interested in\nfinding new solutions\nrather than testing the ones that we\nalready have\nso what is the current state of opinion\non this word\nin this section uh um describes what\nother people think\nabout this which is of course always a\nbit uh\na bit dangerous but um i think she tries\nto do it fairly\nso if along people who are alignment\nresearchers\nworking on large models they are\nprobably really uh\nenthusiastic about this perhaps the best\nkind of work they personally\ncould do um i think this\nmight be somewhat sociological um\npaul cristiano is really optimistic\nabout this\nbecause it's so scalable and this is\nsomething that\nmatters a lot for cristiano people who\nare working on conceptual research\nare perhaps less excited about this\nresearch agenda\nthan paul cristiano um rohingya\nis more optimistic than the machine\nintelligence research institute\nbut still not convinced that this is\nactually literally the best\nsome people are relatively cautiously\noptimistic including esa caskey and evan\num we will get more into in particular\nindia's\nclassics comments uh next week and i\nshould also say that relative\noptimistic in this case might mean that\nuh\ni expect that uh elias here koski\nbelieves that\nmost other alignment research is\nactually wasted so if this is\neven being relatively optimistic that\nmight also\nuh be consistent with him saying that\nthis is almost certainly a waste of time\nso what are the takeaways what are the\nnext steps\nwell\nin particular whether this could be\nharmful uh whether there are other\nthings that might be more interesting\nand in particular how to deal with the\ntreacherous term\nbecause this is this seems like one of\nthe weaknesses of this particular\nresearch agenda\nthat we don't really have a strong\nattack on literature\nand he believes that if you are in a\nposition where you can start traffic\nlike this\nthen you should start and as examples of\npeople who might be a\nprincipal investigator so the people at\nuniversity who decide\nwhat to research and see researchers who\nhave the freedom to\njust do them practice do whatever they\nwant\nthey should also be able to um\npeople in this kind of situation should\nbasically just start\nbut open philanthropy is not so listing\nuh\ngrant applications for people who want\nto do this right now and\nthey instead say people who might be in\na position\nto do this kind of research in a couple\nof years should be on the lookout for\nopportunities\nbut there are no precise additional\nopportunities right now that is all for\ntoday\nthank you and see you next week", "date_published": "2021-07-22T21:24:56Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "4cdf49e1d87ce2144da0252b2c18a201", "title": "245. Democratising Risk 2", "url": "https://www.youtube.com/watch?v=_K02aeKNx3Q", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session\n245 in the ai safety.com reading group\ntonight we will be talking about the\narticle democratizing risk in search for\nmethodology to study existing risk and\nthis is the second part\nthe authors are still carla so prima and\nnew\nchem and we are focusing on section 4\nthe meat of the article today and\nprobably next time we'll take the rest\nit's called flaw definitions frameworks\nand tools how ambiguity and unverified\nassumptions undermine research into the\napocalypse\nso we'll start with the definitions and\nhow they are ambiguous\ni've brought up again\nboston's definition of existential risk\nand i will point out that there are\nactually\nthis work made me look into this\ndefinition and see if there were\nproblems with it and i could find a\nnumber of problems actually first\nearth originating uh intelligences that\nincludes a paperclip maximizer in fact\nso something that would destroy\nalso a paper tip like if humans dies but\nanother intelligence like and paperclip\nmaximizer takes over then that might not\nbe an existential risk even though\nobviously that's something we'd like to\navoid and in the same way um uploads uh\ndepending on how how you want to\ninterpret the word live you could\ncertainly argue that uploads another\nlive um and that's also\nlike a\nan ambiguity in this definition but\num\ni don't think this is something that\nmatters very much this was just me uh\nspinning my wheels there uh i don't\nthink\nthis is something that\nreally matters to anyone in practice\nso let's see what their critique of it\nis the first critique is the one that we\nhad quite a bit yesterday that this a\ncore feature of this is that it's\nfixating on the future value and\nit's it's obviously it has this part\nbolded that it's not actually focused on\nuh on future value\nthe authors are making a stronger claim\nthat existential catastrophe and\nextinction risk are very different\nconcepts and this is\nwe've seen that the difference is on\nsomething that puts humanity permanently\nunable to realize their potential can\nbut doesn't kill us all that's the\nexistential risk and um\nin practice as far as i can see most\nresearchers uh seem to have this as the\nsame thing just in different matters of\ndegree so that an asteroid that kills\nlike 99.9 percent of the population\nwould be\ncould be both a uh\nan extinction risk and a\nwell would be an extinction risk but it\nwould be an existential risk that could\npotentially put us permanently behind\nbut\nit's kind of the same thing whether the\nasteroid kills 99.9 or it kills 100\nit's\nmostly the same hazard and i believe\nusually people are working on on both\ni'm not saying that there are no people\nwho work on one and not the others but\nthere's a very strong overlap and\nhe also suggests to split the fields\nentirely and i think that would for\nthings like asteroids it would make no\nsense to have a search for asteroids\nthat would kill 99.9\nand 100 and have those as separate\nfields\nuh but\ni'm i'm uncertain about this in\nparticular i don't know\nif any researchers are actually working\non i suspect they probably are but i\njust don't know are working on things\nlike how can we end up in a uh like a\npermanent dystopia uh permanent uh\n1984 uh some kind of uh sticky\nauthoritarian system\nthe other uh lootcamp and carlson\nhave\num has another problem with this\ndefinition and that is the lack of\nprecision\nthen these things here like how high\ndoes the probability have to be before\nit's a risk and uh like uh how\nrepresented probability that will be\npermanent and uh how many people just\nneed to kill\ni think this is probably not a big\nproblem for the definition in the sense\nthat\nmost definitions are not precisely\nrigorous in in this sense\num\nof course outside of mathematics and\nphysics and things like that so i think\ni think having a definition like this in\nparticular of a field where we are so\nuncertain\nmakes perfect sense\nthere is a critique of the long\nreflection the idea by toby ought that\nuh or many others that after we have uh\nuh reduced the that our uh we should\nhave a several step strategy where first\nwe reduce the existential risk and then\nwe think for a long time what we will do\nbefore we implement it before millions\nof years\nand that seems kind of value agnostic in\nthat we're just you know thinking but in\nfact there are some hidden things\nunder this\nfor instance agriculture if we don't\nhave agriculture then\nwe can't actually do this long\nreflection we need writing we need\nurbanism uh probably um\nso so there are indeed the values in the\nlong reflection that we will\ncontinue having this\num\nuh i think it's\nquite reasonable in that as far as i can\nsee uh avoiding um agriculture means\ngenocide in practice uh and\nuh giving up on writing\nseems like\nyeah sure in theory it's a it's a value\nbut very very few people would argue\nthat that would be good\nuh they present some um circular\nsome argument there is some circular\nreasoning\naround the wrong reflection i think\nactually that is true and i'll try to\nsharpen it up a bit here in that in the\nif we want to reduce existential risk\nthen one of the things that\nwould be a risk of a risk would be that\nwe\ndecide that we will do something that\nwill have our potential cut our\npotential very much\nand in going into the long reflection\nwe should probably not prevent ourselves\nfrom choosing that\nin that um\nduring the long reflection we should\nhave the opportunity to to actually\nchoose yes the correct thing is that\nbasically we all die in in extremes um\nand uh the\nthe thing that makes the long reflection\nsafe should not prevent us from choosing\nto die if that's what we truly want\num\nthe the argument here is somewhat uh\nbased on on this what i am considering\nis kind of like an edge case if we\nshould choose to die and whether that's\nyou\nthe the authors argue that this is in\nfact more\nsevere than than that there are indeed\nmany things we need to identify before\nthe long reflection uh in order to\nfigure out like what is a dystopian\num i agree it would be really nice to\nknow what is a dystopia what is a utopia\nwhat are values um the problem is we\nprobably can't unfortunately we can't\nreally solve this it would be wonderful\nif we could uh\nhopefully there is some kind of\nattractiveness thing in that where we um\nuh all when we are\nin the long reflection we'll be able to\nchoose a very very wide variety of\nfutures um\nand but um\nit is not it won't be perfect\nunfortunately this basically as i see it\nlike identifying what is morally\nvaluable that's like solving ethics i\ndon't think we're gonna do that we're\ngonna have to make do with less\nunfortunate\nthere is some argument about how the\nlong reflection\nshould be and it's described as an age\nof perils i just want to point out that\nthe long reflection by definition is not\nan age of perils it's kind of the\nopposite um and there there's also some\nerror a question here like if we or all\nrights if we steer humanity to a place\nof safety we will have time to think but\nwho is we in that sentence i think that\nis that we in this sentence refers to\nhumanity but i don't know i haven't\nactually read the precedence\nslowing down technology\nsome scholars\nit is suggested that some scholars\nshould choose to stick with studying\nextinction risk\nrather than trying to specify all\npossible good futures\ni don't really know if anyone is trying\nto specify all possible good futures i\nthink they might be quite confused about\nwhat people in existential risk studies\nactually do\nand also\nthey might be they're claiming that\nslowing trigonological progress is a\nrisk to a transhumanist\nuh i don't think many humanists in\nparticular uh bostrom and others who are\naware of a technological risk she sees\nthis as um\nas possibly beneficial and seeing\nslowing technology where that is a risk\nthat we won't reach the transhumanist\nfuture um i don't know if transhumanists\nactually hold this view my intuition is\nthat a lot of them are really afraid of\nexistential risks\nuh particularly from technology but i\ndon't actually know if anyone holds this\nview i would be interested to see i'm\nnot rolling it out that some people do\nhold this view\nit is a view that's held in biorisk that\na lot of people\nin the extension risk community are very\nmuch against gain of function research\nsomeone\nin the country to\nto ai and people in ai are\nmaking arguments why it's not as free\nit's not feasible to show slow down ai\nbut the arguments\nare not described as convincing so\nthey've they haven't been able to uh\nconvince carla but uh or luke but um\nto me it seems like a really easy\nargument and that's possibly because i\nsubscribe to a more rational uh\nstyle of international politics where uh\ncountries potential superpowers like\nchina are following their key strategic\ninterests and there is no easy way to\njust write a letter to to china and ask\nthem to please don't uh pursue ai when\nthey have declared that this is a vital\nstrategic interest\nthis\njust doesn't seem feasible at all\nand they are making this\nclaim that this suggests that what we're\nseeing is in fact a prescriptive thing\nrather than a descriptive that when we\nwhen\nboston is saying that he would prefer to\nslow down\nthen what is actually saying is\nwhat he actually means from this is that\nhe would he would not like uh things to\nshow down because he\nuh the fact that he says he can't find a\nuh a good way to uh to slow down is\nmerely some kind of smokescreen\nand uh to me i think the the strong\nprior we should have on what these\npeople actually believe is what they say\nuh you need really strong arguments to\nuh to suggest that uh that people are\nbelieve something uh other than what\nthey say\nthere's uh they go into more details\nabout what boston actually believes and\nin one particular case bostrom has\nargued that it might be necessary to\nprevent extension risk to have an\nobligation surveillance system\nand um\nthe authors argue that some people would\nfind this a good thing and\nbostrom the\na number of like\nnot very strong arguments are made that\nbostrom indeed might himself find an\nappealing thing\num and\nthe the authors\nfailed to mention that bostrom in fact\nmakes it really really clear that he\nthinks that 1984 style surveillance is a\nreally really bad thing\nand i think the the arguments that they\nmade that like this is published in a uh\nin a journal that relates to policy is\nway less strong than the um we should\nbelieve when boston say he does not want\n1984 then\nwe should basically believe that\nunless we have strong evidence the\ncountry\nthere's also some more of boston few\nthat that are changed in that\nthings that russian says that many\ncatastrophes are in fact not anything\nthat have any relevance for existential\nrisk they are mere ripples on the\nsurface of the great sea of life and uh\ncarla is arguing that in fact\nthese could uh be very interesting to\nstudy um and i think that's a\na contradiction in terms if boston says\nthis is so small that has no effect and\nthen uh i'm trying to argue that\nactually we can learn a lot from it then\nby definition it's not too small to to\nhave any kind of influence\nperhaps i'm not really sure about this\nmoving on to arbitrary categorization\nexistential risk studies\nis claimed to lack a framework to\ncategorize risks\nin a proper way\nat least that's claimed i i haven't\nstudied the field enough i haven't read\ntoby alts book the presbyters so i can't\nreally see whether this is in fact true\nthat there's something that is missing i\nwould like kind of some source for this\nif available um because\nbut i realized of course that\nunless other people have noticed that\nthis is missing\nit's not entirely easy always to give\nprofit\nand again we're seeing that uh the claim\nthat the fact that we don't have this\ncause us to um to prioritize existing\nrisk to an extreme extent um\nand prioritize small catastrophes like\ncope with much less than the extreme\nprioritization of existential risks that\nwe are currently having i think that's\nuh this um we talked about this last\ntime the tendency of the authors to\noverstate their uh their claims to an\nextreme extent really we don't\nprioritize existing risk that much\none thing they do point out is that we\ndon't know\nhow\nvery much about how these\nuh smaller accessories\nend up\nchanging our trajectory um\nwe do have some intuition but it's\nreally poor uh and it would be nice to\nhave more um\none of the um\nthings that don't tell us very much\nabout if a single catastrophe could\npermanently uh\ncause us to\nforgo most of our value is the technical\ntechnical utopia approach the argument\nthat most of our value is in the future\num\nthat is not going to give us the answer\nand i don't think it's actually talking\nabout this at all um\nthe two things somewhat distinct the the\nclaim that um\ncuda catastrophe stop most of our future\nuh from having value and most of our\nvalue is in the future uh it's it seems\nlike they're talking about different\nthings as far as i can tell\nuh and it is suggested that instead we\nshould to to figure out if a single\nglobal catastrophe could uh\ncould stop uh\nour\nour trajectory we should instead look at\nwealth inequality historically\num\ni think it's possible possible but um\ni mean someone needs to do the work at\nleast some preliminary work to say that\nwell this case where there was wealth\ninequality in the roman empire is kind\nof like what we're seeing with ai\num\ni'm not i'm not\nit's certain that\nthere could be some kind of interesting\nparallels but it is at least not\nimmediately obvious that there are\nthis was a part of the article that i am\nnot 100 sure i understood because it\nseems to\nso the first is that if we are looking\ninto the future at\nuh some ways we could become extinct\nthat does require some speculation we\ncan't uh totally empirically verify\nsomething that has never happened\num\nthis fact that it requires visualization\nis claimed by the authors to cause us to\nprioritize speculative risks above\nempirically supported risks\nand\nat least that's how i read it and that's\nanother guitar really we don't\ni mean it's not like\nwe think it is more fun to uh to study\nthings where we don't know a lot\ncompared to empirically supported fields\num and this this fact\nthat\nthe uh\nthat the pathways to extinction requires\nreligion causes us to prioritize\nuh\nuh\nspeculative fields above us uh caused\nsome people to conclude that global\nwarming is not an existential risk\nuh i that's how i read the argument it's\nstill a non-sequitur um\nand um\nit's also creative like the the argument\nthat global warming is not an\nexistential risk is\nthe article makes it seem like it's\nobviously untrue\nuh i think it's probably true but\nthere is no argument being made it's\njust derided mostly\nin another part here that i'm also\nuncertain whether i understood correctly\nthey present a really bad way of\nanalyzing risks\nthat they say is simplistic ineffective\nand wrong and that is you take a set of\npossible catastrophes like you have\nnanotechnological grey goo and ai risk\nand meteors and then for each of those\nyou figure out like how many people are\nexpected to die from asteroids well it\nhappens once in a million years and it\nkills x percent of the population and\nthen you see okay so this in expectation\nkills 50 million people and um then you\ndo that with all the um\nwith all the catastrophes you're looking\nat and then you see is this more or less\nthan the entire population\num\ni think uh\nthat's strange and stupid to do that and\ni don't think anyone does it like that\nperhaps they need something else but\nthis is what they have written\num\nand so uh what they suggest instead is\nthat we should look in a given world\nstate how much will a given event\nincrease the overall likelihood of human\nextinction\num\nand that seems like obviously better if\nin the sense that it's better to like\nhave the world state and the event like\ngiven things and to try to figure this\nout but um\ni mean we would we would always do this\nuh it's um unsure\nwhether\nuh the fact that it's not being done is\nthat we just don't have the tools we\nwould love to do that but\nas far as i can see we we don't have a\nway to do that but if we had a way to do\nthat we would obviously do so\nif that made sense\nlet's go to ai risk\nthere's a quote here a field looking for\none hazard to kill them all will end up\nwriting science fiction\nand so i think in this case it seems\nlike they're really close to called an\nai risk science fiction but they're also\nnot presenting any kind of arguments at\nall for this\nand that makes it somewhat frustrating\nto engage with\nand\nagain this claim that we prioritize\nspeculative risks\nuh for the reason that we can't rule out\nthe mechanisms by which they work\num that would be stupid if we prioritize\nbased on that we can't rule out\nsomething in general obviously that's\nnot how risk analysis is done just look\nat like what is the probability that\nthis will happen rather than whether you\ncan rule it out\num\nand they suggest the lower threshold of\nrisks\nand i think um\nand i think clearly they believe that ai\nrisk is something that would fall under\nthis\num so so this is like um\nit's a difficult argument to engage with\nreally because uh they believe two\nthings both ai risk is much\nprobably a lot lower than people in the\nextra situation\nwhich believe and also we should look at\nai risk because it is below this kind of\nthreshold that they're setting\num and they also talk about uh\nthat we have uh like direct ways of\nextinction and indirect risk factors and\nthat that the distinction\ndepends on speculation and so now we've\nused speculation in in two ways here in\nthat the egg\nthat the risk itself the the method is\nspeculative and the distinction is also\nspeculative uh so\ni'm unsure whether they believe that\nit's like the same thing i'm\nsomewhat confused about this\nand they suggest that the fact that\nthere is a strong expert disagreement\nregarding risks from ai that implies\nthat the pathway\nfrom ai to extinction is this direct\num i agree that there is of course\ndisagreement\nthat's quite obvious but i don't think\nthe fact that there is disagreement\nmeans that the method the mechanism\nneeds to be less direct\ni don't think you can you can refer that\nsimple and complex risk models the\nsimple risks models are those that are\nfocused on hazards like asteroids and\nwhat have you and a complex risk\nassessment have hazards as one part but\nthere's also vulnerabilities hazard\nexposures\nand response risks\nand uh\nadding in these four factors is harder\nbut you get better results if you look\nat more factors\num i agree\nit would be nice to have better risk\nassessments um to have more work on like\nwhat is the vulnerability that makes ai\nparticularly dangerous um\nbut i do in fact believe that people who\nare\nmostly focusing on the hazards rather\nthan looking at uh at the these other\nfactors are doing it mostly because\nlike that's all they have to work with\nreally that there is just\nuh when the problem is that doing\nsomething harder might just be too hard\nand people are doing their best\nand why we do simple risk assessment\nmostly rather than complex risk\nassessment it hasn't been explained or\ndefended um\ni think it is\nquite easy to\nuh to\nexplain the sense that if you are really\nuncertain then you do the simple risk\nanalysis and once you are more certain\nabout things then you can do more\ncomplex risk assessment that would be my\nexpectation they have a really cool\nexample here of an example where\ncomplex risk models\ngive a different result from simple risk\nmodels and that's in the case where we\nhave uh global warming and then someone\ntries to\nmitigate that new thing putting\nsomething into the stratosphere to block\nthe sun's rays um but the co2 is still\nthere and then we have uh some kind of\nnuclear war and one of the things that\nnuclear war causes is that we can't keep\nup the stratheric aerosol injection and\nso these particles fall to the earth and\nthen we still have all this pseudo in\nthe world and that means that we get\nglobal warming that is much more rapid\nthan before and that could be a\nfar stronger existential risk\nuh i think that's a\nthat's a cool argument um i haven't seen\nthat before and i'm impressed with this\num and that in me i would have liked to\nsee similar things for ai i could\ntotally see that being possible and i\nwish more people would work on this\nbecause that seems\nvaluable but also again half right in\nthat you i feel that global warming is\nmuch better understood than the high\nrisk\nand to say nothing of nuclear war\ntechnological determinism um like\ntechnological determinism is the uh the\nfundamental idea that we\nno matter what we try to choose will\nmostly end up uh researching the same\ntechnologies uh for military economic\nreasons\nand\npeople in\nexistential risk studies generally say\nthat we can't really stop technological\nprogress it's either deeply difficult\nundecidable or totally important\nthis has not been fully defended um\nthey they say and\nfully defended against i expect that's\nuh\nto the to the satisfaction of\nthe authors\nand they are saying that it's possible\nthis view is genuine and\npossible this view is general and it\nseems unfortunately that there is a\nstrong strong lack of basic trust\nbetween the authors and the existential\nrisk community\nin that\nin general there should be the\nassumption that people are\ngenuine when they state that they\nbelieve certain things\nand in in this case\nwhether it is in fact possible to stop\ntechnology\nis uh\nthe people who say that it is and\nthat this is a tractable problem they\nhave to like come up with some kind of\nway that they can that can be done right\nbecause otherwise it seems\ni think the onus is on\ndefinitely\nand there's a statement that it's ironic\nthat some humanists are afraid of\ntechnology and yeah that is kind of\nironic\num\nand this further came that this thesis\nthat uh\nwe are predestined to to choose to\nresearch technological uh certain\ntechnological paths for uh military and\nstrategic and economic considerations is\nuh claimed to be largely divided and\ndismissed by scholars of science and\ntechnology studies that was a surprising\nclaim so i followed the reference to\nthat and it's unfortunate behind the\npaywall but that's an\nabstract and the abstract strongly\nargued\nin favor of technological determinism\num so i uh\ni'm entirely sure i would need to\nactually get access to the\nto the text to be 100 sure\nthey give some more examples of uh where\nwe have uh gone away from that for\ninstance in weather warfare\ngiving the exam that glass and steam\nengines\nhave strategic implications but we're\nnot really used strategically at first\ni'm not entirely sure these are very\nstrong arguments but i think they can in\nfact be made stronger and i think for\nthe rationalist community in general i\nthink this is one of our blind spots\num so i would have preferred to see a\nstronger um\nexploration of this perhaps\nas somewhat less adversarial exploration\nof it\nwe continue to pascal's mocking or\npascal's wager this is the original\nformulation of pascal's mocking\nof pascal's wages sorry\nand that is if you\nwager for god then and god exists then\nyou gain all\ninfinitely much and of course if you if\nit turns out that god exists then you\nare basically at the stages quote\nand if you uh wager against god which is\nprobably you know not doing the things\nthat he would you think he would want\nyou to and he exists then he will punish\nyou and otherwise it's just the status\nquo\nthis in order to make to relate this to\num\nuh air which we need to make some\nchanges to this we need to look into\nlike what are the positive the\nprobabilities here are they like\ninfinitely small and this is using um\nalso uh infinity uh in pascal's\nformulation um\nand uh also here on the right it's not\nexactly status quo right um\nso i looked a bit into this looked up in\nthe stanford encyclopedia of philosophy\nwhether this was actually uh like a\ncentral part of pascal's formulation and\nit's not as far as you can tell that\nthis is it doesn't matter whether this\nis literally infinite or not um there is\na\ndescription of what the probabilities\nneed to uh obey to make to to for this\nargument to carry uh i think it can be\nquite easily transformed into this\nargument for ai risk\nin that if we try to prevent agi risk um\nand and there is agi risk then that has\na high expected value and if we ignore\nit it's just a low expected value and\nthere's a smaller small gain if it turns\nout that we are wrong and this argument\num could carry its\nuh\num\nit's not one that is generally being\nused by people in ai he also accepts\nmostly mostly because there is\na uh sense throughout the uh\nuh\nyeah having different\nthat\nwith ai risk that we are actually using\nthis argument even though we're claiming\nwe are not but i do have in fact another\nperhaps more interesting um uh point on\nthis and that is um\nthat um\nthis is presented as a part of pascal's\nmugging\nand uh this is in fact not pascal's\nmocking over here this is pascal's wager\nand\ni can i can see why the the authors um\nare making this mistake because what\nwhat the they are citing is a very very\nsimplified version of the original\npascal's mocking experiment\nand that version as far as i can tell\nseems so\nsimplified that it's missing the central\npoint\num and um\nuh\nit is in fact the simplified version is\nby nick bostrom so i think that's uh\nsomewhat\nquite bad in that sense but um one of\nthe ways this ways we can see that he's\ntalking about um that we're talking\nabout pascal's wager rather than\npascal's mocking is that like the uh\ndecision theoretic parts are not really\nexplored and like for things like uh\nklutz up arrow uh notation which is a\nkey part of\nthe elias erkowski's\nversion of pascal's morgan is missing in\nthe simplified version so there is a um\nthis simplified version of pascal\nsmogging\nis\njust about pascal's wager really\num so um i can understand why they are\nmaking this um like the simplified\nversion does contain uh the simplified\nversion of\npascal's uh mocking does contains a\nreference to the real description of\npascal's mugging and it would have been\nnice if the authors had followed that\nlink\nbut i can't really follow them too much\nfor that\nfinally we get to some problems with the\nexpected value\nand here\nbostrom argues the expected value of\nreducing existential risk by a mere one\nbillionth of one billionth of one\npercentage point is worth 100 billion\ntimes as much as a billion human lives\nand that is\nshown as an example of how\nusing pascal's wager\ndirectly\ncan lead to some very strange\nconclusions and\nwhat they don't quote in this article is\nhow boston chooses to continue this so\nthat there's a very selective way of\nquoting here and boston uh follows this\nby saying we should stress however that\nthere are important unresolved issues in\naggregative consequentialism so um so\nwhen this is presented as something that\nboston is arguing i think that is in\nfact plainly false and i think this i've\npreviously referred to this as the worst\nargument in the world the idea that you\ntake something that a philosopher has\nwritten and then look at his own caveats\nand then you remove those and present\nthem as if you found them yourself and i\nthink it's a\nand\na style of argumentation that's easy to\nmake it appears really convincing it is\nwrong and i think you know\ncutting the quote like this\nhas to be intentional dishonest\nand also one more thing is that the\ncitation is wrong right this quote here\nis from this text and not from that text\nbut\nthat's kind of one of my small\nequipments\nthat is all for today thank you and see\nyou next week", "date_published": "2022-03-18T06:25:29Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "c30dbf5aeb954719aa2c7b9f1f66dcb2", "title": "156. Synthesising Utility QA", "url": "https://www.youtube.com/watch?v=N-u8c3Q3RM0", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 156 in the\nashd reading group tonight Stuart\nArmstrong will present his research\nagenda please take it away Stuart hello\nthere thanks for inviting me to talk\nhere I'm actually going to take get to\nthe research agenda in a very roundabout\nway presenting a few related intuitions\nwhich base explain why I think that\nthere's a chance of the research agenda\nsucceeding but to start off with one\nquestion is to ask in Reverse\nwhat would a solution look like suppose\nwe said we had all of human preferences\nin some compact form or some utility\nfunction form how would we have got it\nthere's a few ways we could have got it\neffective proxy methods which are most\nof the methods that people suggest like\nputting Nick Bostrom in a room for\n10,000 subjective years to think of\nthings or having people write what they\nwant and somehow synthesizing from that\nthese don't measure preferences directly\nbut they are proxy measures and we're\nhoping that we can get them to work out\nanother approach which might work is\ncourage ability we start with a one-goal\nstructure and we modify it successfully\nand repeatedly until it aligns more with\nwhat we want\nthere's the catch-all of some brilliant\nnew idea and finally some grounded\ndefinition of human preferences so\nsomething that actually captures our\npreferences and I argue in fact that's\nanything of this type is going to look\nsomewhat similar to my research agenda\nit needs to have some fundamental\ndefinition of the pieces some way of\nputting them all together and some way\nof respecting meta preferences and other\nthings of that nature now one key\ninsight where I tend to differ from Miri\nis on the issue of patching or nearest\nunblock strategy arguments the basic\nidea is suppose we have an AI but for\nsome reason goes straight to you we say\nno don't go there\nso the AI just routes around whatever I\nknow is we can add quite a lot of\npatches blocking different ways and then\nit just routes around those patches this\nis the idea of the nearest unblocked\nstrategy like for example you say\nincrease human hedonism or increase\nhuman pleasure and then when that ends\nup disasters he would say but ok respect\nhuman autonomy\ndon't force humans into bunkers and we\nkeep on patching and in each case it\nfinds the closest cheating thing that it\ncan that technically satisfies our\npreferences but compare this with with\nsort of them drawing boundaries in as\nhilar higher depth neural nets tend to\njoin boundaries in concept spaces if we\nhave a bunch of blue counter examples\nand a bunch of red good examples then\nthe AIS can naturally draw a boundary\naround this and at this point even\nthough that arrow I've just drawn\ndoesn't go through or close to any of\nthe blue points because it's learned to\ngeneralize the concept it doesn't go to\nthat disaster area so what this shows is\nthat the error modes of machine learning\nin your nets with positive and negative\nexamples are different from the error\nmodes of the nearest unblock strategy\nproblem so that that's this is why I\nhold out some\nOh for less more fuzzy methods now of\ncourse generally what happens if you\nhave a lot of examples close by and not\nfar off you constrain it's in one area I\nmean it goes wild elsewhere so we sort\nof have to be able to give counter\nexamples very far away which in some\ncases we can like human preference for\nsay no human suffering if we can define\nthat approximately this extends across\nreally alien universes really strange\nand different worlds from our own so\nthis is a sort of global constraint\nsimilarly if we find the repugnant\nconclusion repugnant that's a sort of\nglobal constraint anyway this was just\none intuition and I also feel that\nthings like symbol grounding after\nsolution behavior and a few other things\ncan be potentially solved in this kind\nof way this is one of the fundamental\nintuitions now to get to the no free\nlunch theorem famous throughout the\nworld by everyone who happens to know it\nthis was my result that if you have a\nbehavior you cannot go straight to\npreferences at the behavior of\npotentially irrational agent you cannot\ngo straight to preferences we can't get\nthe preferences without making\nassumptions about the rationality you\ncan't make that the rationality of not\nmaking assumptions about the preferences\nthis is not a not very exciting no free\nlunch thing because normally no free\nlunch theorems you just use a simplistic\nwire and they go away unfortunately here\nsimplicity priors don't help and the as\nI've mentioned I think before humans are\nfully rational humans are fully anti\nrationals and things have flat flat\npreferences are the simplest\ninterpretation of human behavior but\nnotice I said you needed to make\nassumptions what would be great\nneeded to not make assumptions so if you\ncan make just very simple assumptions\nand get around the whole problem that\nwould be great so the whole\ninfrastructure of the research agenda is\nto try and make some minimal assumptions\nand a lot of construction and this leads\nus to a way of going and finding the\nPreferences of the human right and\nanother important point is that\ninterpreting a human as having perfect\nprincess or rationality in a stance it's\nvery similar to Dennis intentional\nintentional stance we're not treating a\nhuman as a collector of neurons or\ncollective atoms we are treating them as\nan agent with preferences this is not\nsomething that naturally exists in the\nworld so as I said it's a super\nintention area I could have all the\nworld's videos is all of Wikipedia all\nsocial science research we give perfect\npredictions of human behavior be able to\nmanipulate us incredibly well and still\nconclude that humans are fully rational\nand it would not be wrong it would be\nright but it wouldn't be wrong because\nrationality and preferences are an\ninterpretation we put upon other agents\nthey're not even calling something an\nagent is a preference is an\ninterpretation and here some people feel\nnatural objection they say that they\nknow what feelings wants from observing\nand we do and even more damning in a\nsense is our observation of human\npreferences allows us to predict their\nbehavior so this approach is vindicated\nby the outcomes so what is going on I've\njust said this cannot be done well this\nis a sort of simple model of what's\nhappening\nis I'll see a human H is holding an\nempathy machine and a human predicting\nthey Co evolved the empathy machine is\nwhat tells us what people want and how\nthey think and the predictor is tells us\nwhat they act so if we have a typical\nhistory of a human age then two\ndifferent humans looking at it will find\nroughly the same thing we tend to agree\nvery strongly on what people want and\nhow washed out they are not perfectly\nbut far far better than another chance\nalso that a human themselves tends to\nagree with our assessment when we're\ndrunk or less rational and we tend to\nagree that we're less rational when\nwe're drunk for example so these empathy\nmodels all give pretty much the same\nthing on typical histories and then if\nyou feed these things to the predictor\nthen in typical circumstances this is a\npretty good predictor of the human\ninteraction so given the empathy module\nand the preference model the reward is\nsimple to deduce and typically it is\npredictive so this is happening my sort\nof response to the objection yes for\nhumans with our specific evolved ability\nwe can do this task but it's not a\ngeneral task that is easy to do it's\nbecause of our particular cognitive\nmachinery and our own assumptions now\nthis VHS empathy model is strongly\nrelated to the partial preferences when\nI'm about to get into this brings us\nfinally to the actual research agenda\nthe idea is to start with code or the\nhuman brain and from this get the mental\nmodels that the human preferences are\nusing so when we think oh that's a bad\noutcome or I wouldn't\nbahador they want that or if I do this I\nmight get something good these are our\nmental models and within these mental\nmodels we tend to express desirable or\nundesirable outcomes\nso from these mental models we can have\norderings over different circumstances\nnotice and some of them are Metta\npreferences which can tell us about our\nlower level preferences in order to do\nthis we need to do some sort of human\nsymbol grounding which is part which is\nwhy I dressed it in in the research\nagenda and this is where I bring in the\nassumption the assumption is that these\npartial preferences as I'm calling them\nthat their normative that this is what\npreferences are built out of so I've\ngone from the observation of that human\nmental models order things in a\nparticular way to the assumption the\nrequirement that this ordering tells us\nsomething about what the genuine\npreferences are this is sort of my\nstance then given that and some\nalchemical complicated process we can\nget the human preferences now a lot of\nthe research agenda and a lot of the\nresearch on it is going to be about this\nbut it's in a in essence not that\nimportant step what it has to do it has\nto do two things or three things it has\nto reach a conclusion of what the\npreferences are and it has to sort out\nall our contradictions and under to find\nthings and it has to do this while\nrespecting our meta preferences so all\nthe machinery of the section two I\nbelieve is basically a way of doing this\nand hopefully the particular way of\ndoing it will not be that important in\nterms of reaching an acceptable or a\ngood outcome\nthen once we have those preferences we\nput them into the AI and then we're\nalmost done to do that we need to do\nsome sort of AI symbol grounding which I\nwasn't working on too much now realized\nis actually quite important bit so I\nwill be working on that in the future of\ncourse you don't know the human source\ncode perfectly but some uncertainty\nabout that source code will just result\nin some uncertainty on the ideal world\nand this actually should end up okay for\narguments that I've sketched in the anti\nGoodheart posts or how to break good\nheart in this setting human behavior is\nan observation that gives evidence about\nour internal makeup and about the symbol\ngrounding now because it's good to bring\nin a little bit of formality I have put\nvarious ones of these things in symbols\nmore or less justified so the the human\ncode leads to pre-orders which is how we\norder our partial preferences so this is\nbasically we've order things without in\nour models without having four models of\nthe world of comparing everything we\njust compare on one feature then the\nsymbol grounding for which I have stolen\nconcepts from semantics formal semantics\nto go from human models to worlds the\npre-orders are now on worlds the\nassumption is that these pre-orders are\npreferences relevant the pre-orders are\nsynthesized into utility function using\nsome process Sigma the meta preferences\nare taken to mean to take preferences\nover the Sigma then another round of\nsymbol grounding and we get the utility\nfunction in the AI now and the empathy\nmodules that I talked about\nvery closely connected with our\npre-orders so how we model other people\nis very similar to their internal models\nso these internal models are very\nclosely related to how we model up their\npeople and there are too many uses of\nthe word model in that sentence I know\nso finally this is just a small section\nof some of the preliminary work I've\ndone the probe first is constructive and\nthe main point of all this work is to\ncheck that there were no huge conceptual\nholes rather than solving every problem\nor indeed any problem to extreme detail\njust to check that there were no large\nholes that I was missing and that is it\nthank you should I stay very much thank\nyou very much yeah if you could stop\nsharing and then I will share my screen\nand then we will take some questions and\nthe way we always do this is that we\nwhat's called we take I keep a list with\nwhere people who have said they would\nlike to post questions so we are in the\norder of where people raise their hand\neven either through the chat or if they\nsay something and I will take the very\nfirst question you should be able to see\nmy screen is that correct\ncan everybody see yes okay so please if\nyou have any questions write in the chat\nor raise your hand I think it's possible\nto raise your hand through this yeah\nmatthias has a question so you leave\nsecond because I always take the first\nquestion and so my first question is\nabout instrumental goals because Nick\nBostrom divides skulls into instrumental\ngoals and final goals\nand a lot of the questions the examples\nthat are given in the research agenda\nhave to do with instrumental goals think\nthat like I wish I was rich\nrather than poor or I don't want to get\nmarked or things like that and these\nkind of things are not in the research\nagenda really specified those are this\nkind of division is left to the process\nmostly but there's a in the reading\ngroup we've discussed this and it seems\nto be in some way\ninstrumental goals are quite a lot\neasier to work with then final goals in\nthat you could imagine you normalizing\nall kinds of instrumental goals into\nsome kind of general goal reaching\nability and just say I want to be as\ninstrumentally powerful as possible and\nthen only focus on final goals do you\nthink this kind of distinction makes\nsense in in humans a lot of things that\nshould be instrumental goals are pretty\nmuch terminal goals like or very close\nto terminal goes you could say well the\nI want to be rich because I want to do\nstuff and I want to have high status\nthose are two terminal goals I I don't\nwant to get mugged because mugging can\nget hurt or embarrassing those are two\nterminal goals so though you though\nthese are technically instrumental goals\nthey are very close to terminal goals in\na sense you know the eliezer yudkowsky\nthou art God shatter article evolutions\nterminal goals of having lots and lots\nof grandchildren and great-grandchildren\nhave shattered into lots of pieces and\nit seems that for humans we we have a\nlot more terminal goals than in a sense\nthan we should\nyeah and the the sort of terminal goals\nthat philosophers like to talk about are\nthings we have constructed using our\nmeta preferences so the I want everyone\nto be happy and to be able to reach as\nmuch as they need every day for example\nis a terminal is a turk is a goal that's\nconstructed by my meta preferences and\nsort of extended out from certain basic\nempathy in goals so I wouldn't say it's\ninstrumental versus terminal but sort of\nshort-term clearcuts goals versus\nsynthesized meta preference based goals\ndoes that make sense I think it did yes\nokay we have a question from Matthias\nand then one from Robert as a follow-up\nquestion to science so human goals are\ngenerally terminal valve relevant\ninstrumental dust after will be in\ndirect conflict with each other and\nwouldn't you have to work out how to\nvalue one humans preferences against\nthan others\nyes human goals are generally going to\ncontradict the goals of other humans\nmore more importantly human goals are\ngoing to contradict other human goals\nwithin the same human so there are huge\nmasses of contradictions out there the\ncontradictions are not that hard to\nsolve you need a way of normalizing the\nkiltie function which is why that pops\nup in the research agenda and then add\nthem together because goals are\nvirtually never completely opposed those\nare they're almost always positive some\nin some ways\nthere's always some outcome that\nis where you can increase one without\ndecreasing the other increase in grease\nthem both to some extent\nso actually contradictions are\nrelatively solvable in theory the what I\nsay about human goals is that they are\ncontradictory when they peelable and\nunder defines you need other tools to\ndeal with the manipulable aspect but you\nneed a lot\nit's the under defined which is the\nreally big problem extending them to new\nsituations especially situations very\nalien from the ones we have I mentioned\nthat the empathy modules serve in\ntypical situations they don't in\nuntypical situations where things start\ngetting they asked where the world and\nthe people in it become very different\nfrom what we're used to those are the\nthings where you have the problem of\nextending the goals yeah that answers\nfor question I'm pretty well and\nactually also answer to the other\nquestion I had so let's jump to the next\nperson okay that would be you Robert I\nso this is something that wasn't talked\nabout very much in the presentation or\npossibly at all but it was a questioning\nthat came up while I was reading the\npost and we might have a slide for it I\ndon't know because I think we talked\nabout it before but I was a little\nconfused about the identity we related\npreferences about why they are singled\nout or not singled out but like picked\nout as a specific category that is that\nit sort of treated differently I was\nthinking about one of the one of the\nexamples that's used in the post is if I\nhear a good argument for X I want to\nbelieve X and I was wondering how that\nis like is that an identity preference\nand if not how is it different from\nsomething like I want to be the kind of\nperson who is moved by\num I see them both as identity\npreferences the non identity version of\nthat would be\nI want the AI to be moved by good\narguments and optimize the world\naccording to that it's because of the AI\nspecific it's not just that I want the\nworld to be the best it's that I want to\nbe convinced about about what's best as\nwell the reason I separated out the\nidentity preferences is because they\nseem to be very important in humans and\nvery very likely to get lost in the sort\nof stereotypical idea out of building\nand expected utility Maximizer in the\nboss room had a mindless outsourcers\nbasically outsource everything in the\nname of efficiency or and even in this\ncase we outsource everything in the name\nof the goals and our goals are\naccomplished but there's no us left -\nwell to appreciate it or to care and\nsince now why did I bother to separate\nout identity preferences well it's\nbecause I was building a default\nframework for people who are didn't to\nhave heavily developed romantic\npreferences and it seems to me that\nidentity preferences work more as a\nsmooth min than as an additive as in if\nyou lose one completely it's a disaster\neven if you max out on the other it's\nbecause you are in a sense not the same\nperson so it's much better to not lose\non any of them and maybe push them all\nup a little bit than it is to push some\nof them up massively and drop the other\nones quite a bit so it might not be\nnecessary but it just felt like a\ndefault to separate an art\nlike a design decision yes okay\nashes I have sorry I have a post coming\nup soon which I'm going to call let me\njust find the name that I found on\ngratification which may address some of\nthese issues in part in a way that\ndoesn't that's maybe a bit less identity\nanyway it might make it clearer some of\nthese aspects I'm working on that post\nand I will post it hopefully within a\nfew days it is relevant I sort of see it\nas a third piece between head and ism\nand preferences yeah so ashes is putting\nhis tagging up his question right now\nwhich is would it be fair to say there\nare like two kinds of preferences those\nthat should be dealt with as soft min\nand those that should just be being\nmaximized and it's somewhat of a\ncoincidence that a lot of the soft min\nhave to do with identity preferences and\nthe ones that I could just be added have\nto do with preferences over the state of\nthe world well a lot of them in a sense\nit's the sort of hypotheticals I call\nthe one-step hypotheticals brief\ndescription of the worlds of a\nhypothetical world what you prefer there\nif those work well and can be done well\nthen in a sense we sort of get the\ntrade-off between all the preferences in\nall the different contexts so that would\ntend to basically override sure whether\nthings are adding linearly or in other\nways the separate out the identity is\nfor the situation where it's a default\nso it's basically\nwe can't do the hypotheticals well how\nold we don't have much information to go\non we'll do this as what this is what\nseems to me the best the most prudent\ninitial way that can be modified by meta\npreferences that can be modified by me\nonce the hypotheticals in various\nsituation for that is for the best\ndefault in my view it's also another\nreason is that this is sort of a\nreminder there are identity preferences\nany thing of reference impetus needs to\nremember them and so just labeling them\nis that here it is\nI'll read out the question thank you Oh\nwhat people can see it anyway well read\nout Erik it was in relation to the\nHeisenberg in nature preferences given\nthe humans are relatively volatile to\nhuman knowledge of those preferences\ncalling me out as a racist may change my\nmental states quite a bit to give a\ngreat example how can how do we make an\narea collect preferences without a\nmanipulating human more than necessary\nokay well we can define the problem okay\nthere's a paper which I've written on\nhow to learn when you're learning is\nalso a manipulation it has been refined\nby various co-authors for two years and\nso is not and is not out yet so I have\nactually worked quite a bit on\nmanipulate how you learn without me\nwithout manipulating will learn when\nyour actions are manipulative but since\nthat's not out the one way you could\ndeal with this is by say saying what\nwould the human preferences have been in\na given week that is in the past\nmaril the AI cannot manipulate those\npast press\nanswers through its transactions you can\njust find out about what they were or\nwhat they would have been with we can\nonly sort of what hypotheticals if at\nany point during the past week we had\nasked the human and once that\nhypothetical what would they have\nanswered that is the definition of human\npreferences this avoids the manipulation\nissue they also becoming irrelevant\nsince their I thought and now this is an\nissue which I don't really know how to\nsolve I alluded to it in section 4\nthings I don't know how to what to do\nnow now some of those things were\nactually things I have a pretty clear\nidea of how to do it but I don't want to\ntalk about it in this route research at\nthe end of but things like this are the\nreal things I don't know how to do\nthere's kind of a good out problem so\nthis one is human value drift talking\nthe value to change how would we want to\nconstrain a lot of people tell me they\ndon't want their values to drift\nrandomly or excessively but they want\nthey want to open them they want to be\nopen to more learning in the future\ndefining how this both can happen okay\ndefining how both of these can happen is\ntricky I don't really know how to do it\nand a related thing is when the AI tells\nyou here I am perfectly distilled your\npreferences and synthesize them into\nthis utility function isn't it lovely\nand then the human reacts inevitably no\nthat's not what I want\nwhatever the AI gives it because this is\njust how humans are this is the natural\nfeature I'm pretty sure that if I looked\nat my licit preferences my preferences\nmade explicit I would say no no no\nthat's not it so\nthis is kind of you can say that this is\na good nail self-reference problem but\nit's a very common one so this is also I\ndon't really know how to do it you can\nkind of you can imagine the AoE says\nokay I know what it is I'm now I'm going\nto manipulate the human so I'm going to\ngive them a fake one they're gonna say\nno they're going to change it then will\nconverge on the real one so maybe you\ncan solve that problem by allowing me\nbut in any case the question of what so\nI object to the equilibria in a second\nbut the question of how you build moral\ndrift and body change more learning and\nus accepting that the AI has decided\nthis is what our preferences are is\nsomething I genuinely am not sure how to\ndeal with now equilibrium equilibria is\na problem if it's a local equilibria i\ndefine by local conditions then there is\nno limits to how far the values can\ndrift\nI had a Mahatma Armstrong less wrong\npost where I imagined\nstarting off wanting to becoming more\ncompassionate ending up with killing all\nlife in a series of improvements so not\nmurder Gandhi exactly it's basically\nover compassionate Gandhi whose\ncompassion expands too much and\nencompasses encompasses the animals\nencompasses the insects encompasses the\nplants encompasses the rocks and at that\npoint humans might be humans and animal\nlife can be reduced to rock destroyers\nthe the point wasn't to think that to\ntake fairly seriously but just that\nif it all if it's an equilibrium just\ndefined by sort of local conditions ie\nit's reflexively stable it would not\nwalk further modifications or anything\nlike that it can move it can be\narbitrarily far away another option\nwhich I prefer is to tie these\npreferences to say some sort of starting\npoint so we still look for sort of the\nbest equilibrium of some sort based on\nsome conditions but also we don't want\nto move too far away from the beginning\nso we don't just do a random walk in\nvalue space now some people call this\nequilibria some people call it fixed\npoint and they are technically correct\nbut I don't think that I don't think\nthat's necessarily a good way of seeing\nit because it's the result of a\ncomplicated constructive process and\nit's better to look about co-located\nconstructed preferences equilibrium\nknowing your preferences does not change\nyour preferences yeah but that's not\nhuman there are that's not human there\nis no there is no moral learning at this\npoint there is so this is a I would be\nvery wary and at least careful about\napproaching a state where knowing your\npreference does not change your\npreferences there are many bad ways of\ngetting to that state lobotomies for\nexample or manipulative manipulating\npeople into the equivalent of artemis so\ni if i said that equilibrium was a\ndesirable target and if it's really hard\nto get with something reasonable we're\nreally easy to get with something\ndangerous then I would be very wary of\nit that's the main reason that I haven't\nlooked too much at equilibrium\nokay the capacity to explore this is\nwithout actually using the human because\nI feel like then you can look at you can\nlook for equilibria which are stable but\nalso which the current humans identity\npreferences wouldn't object to just\nrumbling or something like that I'm not\nsure I mean because you have this\nproblem that the thing can explode off\ninto some distant thing but if you do\nyou have to take the human with you or\ncan you can the system predict what the\nhuman would do and do several steps and\nthen ask the current human is this a\nplace you would be happy to end up I\nmean that well this is um if the AI\nsuper intelligent asking the human is\nkind of irrelevant this is that a\nmanipulation of finding the best\nphrasing to get the AI of a human to\naccepts the best the aim of the\nsynthesis process is so that all this\ncan be done automatically well yeah\npretty much\nI'm gonna yeah the grass is always\ngreener is a way of saying that yeah\nit's basically the answer that I gave to\nthe previous question since since it's a\nknown feature that humans tend to reject\nhaving there that is imposed on them I\nwould not want to look for something\nwithout that feature or at least not put\ntoo much pressure on that unless that\npush us away from the human areas that\nwe know\nregarding imposing values on people or\nseeming to impose them this is a picture\nof the AI presenting presenting results\nto the person but it you instead if you\nget the person to make a series of force\nchoices now always burn through a\nquestionnaire or something there's\nnothing with alternative it's making\nchoose and so on the result becomes out\nat the end of something that they see as\nsomething they've produced and I think\nthey wouldn't resist it in the same way\nthey might be surprised they might find\nit very difficult to do it but they\nwouldn't see it as something imposed on\nthem so I mean perhaps you could say the\nrole of the AI in that situation is not\nto come up with the answers but to\nconduct this process this transparent a\ntransparent and acceptable process what\nyou're saying there is basically the\nnicer version of what I said the AI\nfigures out where your preferences are\nand manipulates you into accepting it\nthis is this is um this is basically how\ndo we get the endorsement of the human\non the outcome of this process in a sort\nof general sense and the yes um I have\nnot figured out how to do that mainly\nbecause I have a lot more details of the\ngraph\nof the whole process before I get even\neven near these sort of what's the final\nthing you do with it but yes thinking\nabout how humans connects in your sleep\nthe outcome in a good way is going to be\nvaluable when this is deployed in\npractice and it might be worth\nsacrificing or not necessarily\nsacrificing but tweaking certain aspects\nof the process\nin order to get human endorsement as\nlong as it's not an overriding and\noverriding goal and I just want to to\npoint out that all these questions have\nbeen about the synthesize the human\npreferences which I said is the longest\npart of the other thing and it is but\nalso somewhat less importance than the\nfinding out the humans internal models\nand I understand why because the second\npart is so much more interesting to talk\nabout even if it is in a sense less\nimportant and I've got another question\nhere can human preferences be circular\nyes human preferences are very often\ncircular though we have we interestingly\nhave a we have something that stops us\ngetting stuck in loops it tends to be\nthat as we travel around the circle we\nbreak the cycle at the point where we\ncome back to the beginning if if there\nwould be a circular preference it's kind\nof like we have a deep knowledge that\ngetting back to the starting point makes\nno sense\nokay so I can see the next cat did I\nanswer the previous question properly I\nfeel I might not have addressed anyway\nmy question yeah yeah\nI yes that's fine for the moment thanks\nI'm okay so unclear about all this\nlitter that you know but that's not your\nfault\nThanks\nokay so the next question actually was\nalso about different ways you could do\nthis synthesis both with a something\nlike the parliamentary modeled by Nick\nBostrom and I think David also had a\nmarket-based model but I think you might\nbe right\nthat is somewhat less interesting so let\nme see there is here oh yes but but it's\nmore robust in a way that if we do the\nsynthesis I would expect that a lot of\ndifferent kinds of synthesis processes\nend up with basically the same result or\nat least similarly acceptable results\nthere might be somewhat difference in\noutcomes but equally acceptable if they\ndon't then this is something to be\npanicked by because that means something\nis going wrong if slight changes in the\nsort of process the synthesis process\ncan result in things that are not only\nwildly different but moving from good to\nbad and back then there's something\nwrong with the process there's some key\nflaw that we've not addressed okay I\nknow let's I will take Rob's question in\njust a moment and then it seems like\na big part of the reason why we expect\nus to not have catastrophic consequences\nis that we are taking all the humans\npart your preferences and leaving\nnothing in the model behind and I think\nyou know seeing all the partial\npreferences seems like it's all order\nthere is probably something that might\nbe infinite many of them and even if we\nget infinitely many of them we might\nstill skip a part of them this is a\ncorrect I mean we human the imagination\nis not infinite and even if it is\ntechnically infinite it's sort of\ninfinite because you can slider one\nvariable continuously in a small range\nso the I don't know it's is it section\ntwo point six two point eight basically\nwhat I'm saying is that most preferences\nshould be absorbed in the model leaving\nonly very few very few rare types of\nmeta preferences that are really about\nthe outcomes and it really resists being\nsynthesized into the model these are the\nones that I'm appealing to when I'm\ntalking about an adequate outcome or an\nacceptable outcome or a good outcome\nlike it's just a minor like we we're not\ntotal utilitarians\nfor example and we're definitely not\naverage utilitarians and the difference\nbetween say 51 percent total utilitarian\n49 percent average or vice versa or any\nsort of mixed theories like that if you\ntweak the percentages by one percent in\neach direction you\nup in a very very different world\nbecause the optimizing the optimizing\ngoal is different and this can result in\na very different world but these sort of\njudgments that they're all kind of okay\nthat some type of under the finest in\nour preferences mean that there's some\nopenness for different ways of\nsynthesizing it this for example is\nsomething that can't be really captured\nas an actual human preferences within\nthe model it's a statement about a bit\nof variability outside the model and\nwe're sorry it's I will stop using the\nword model because there are too many\nuses of it this is a statement about the\nsynthesis process and the whole thing\nwhere I say beware disastrous outcomes\nor add extra precautions to avoid\ndisastrous outcomes either sort of\njudgment\nit's a meta preferences about the\nviolation of all other preferences in a\nsense and it kind of works as a meta\npreference within the synthesis process\nbut you can't I don't think it can fully\nbe captured I have another point this is\na really minor point about the the fact\nthat complexity might not matter that\nmuch\nbecause the AI can be efficient routing\naround human complexity without losing\nit that's a completely impossible thing\nto have as something within the model\nthis is a sort of odd within a synthesis\nprocess this is an odd judgment call\nfrom outside about what's acceptable so\nthere are a bunch of things that really\nfeel as if they can't quite get into the\nmodel but apart from those the idea is\nthat everything your preferences your\nmeta preferences your enjoyment of the\nparliamentary model or of different\nother ones all that should go in\nokay I think that answered my question\nthen Robert had one in the text okay so\nthis is why I did a she to three posts\nabout how I saw human symbol grounding\nfunctioning the basic idea is to do it\nin a sense in theory identifies suppose\nI was to say that there is a certain\ncertain neuron combinations that when\nthey go off together\nthis means mother-in-law or this means\nthat particular person smelling or\nsomething like that so I have grounded\nand claiming there's a grounding for\nthis these internal symbols how would we\nseen what's fires so in a sense what I\nwhen I make this claim I am saying that\nthe firing of these neurons should\ncorresponds to certain states in the\noutside world and notably those where\nthe mother-in-law appears or is so I'm\nsaying than actually knowing the\ninternal states of the internal\nvariables I can make predictions about\nthe outside world this is my sort of\nAvenue towards empirical simple\ngrounding it's it's sort of saying if\nyou say this is the symbol this is its\ngrounding and I say okay I will check\nthat I will look at the symbol by which\nwe mean certain things inside the human\nbrain and I will see if it curve varies\nwith what it's grounded on which is\ncertain states of the outside world\nokay edges also have a question then I\njust realized I should have asked him to\ntype it in in advance here he has done\nthat great\nI wanted a idea the idea to get lynly\nbetter satisfy me as it learns my\npreferences so the graph what sorry I\nthink um do you want this to be an upper\nbound is it a lower bound is it an upper\nbound\nfor me I would be most interested in a\nlower bound in at least what can we be\nreasonably certain to avoid can be peas\ncertain to avoid that it suddenly\nbelieves that we like suffering or\nsomething like that well I'm asking\nbecause there was some people who\nactually prefer that an AI not become\nbrilliant at satisfying their\npreferences immediately I can see why\nthey might want that this is basically\nthe courage ability instinct that it's\nbasically safer if the AI grows in power\nslowly and so on\nbut if we put that aside then I don't\nsee why we want it to be linear as a\nlower bound we want it to go as well as\npossible\n[Music]\nthe the thing is that the graph doesn't\nalways look like that is this what this\nsuggests is that basically we need to\nget the the irreversible preferences in\nquickly I mean the AI should naturally\ndo this but if we wanted to sort of hand\ndesign it it's basically the ones that\ncan't be undone to be the Preferences\nshould be there and quickly before it's\nbefore it gets there sorry before it\nmakes too many changes as I say normally\nthis is something that should be picked\nup automatically but it is an argument\nfor at least having the AI in a training\nbox for at least a while\nand the next question is so what\nincrements sort of intervention do you\nthink every department process of\nlearning or in the intermediate stages\nin AI develops no idea it's sorry I'm\nbeing a bit glib but I really have no\nidea how the process will go out and how\nhow fast the AI will converge on\nsomething we find adequate and whether\nhow we and how we perceive it from\noutside were the eight and next question\nwhat the AI is brilliant at satisfying\ntheir preferences they would know to\nknock Britons you satisfy that person's\npreferences yes that is the case unless\nwe don't want the air this is basically\na form of manipulative behavior one of\nthe reasons that I've given up trying to\nrule out manipulative behavior is that I\nfind it impossible to define so being\nthis is this is sort of being it's\nrespecting their preferences but it's\nalso manipulating them because one of\nthe reasons what they want you to go\nslow is because they're afraid what\nhappens if you could go fast and if you\ncould go fast but to tend to go slow\nthis is sort of manipulating them and\nit's for reasons like this that I can't\nreally find a good definition of what\nmanipulating means because arguably this\nis a manipulative manipulation that is\nin accordance with what they want so\nthere should be a strict ordering to\nwhat you learn and used to throw out\nsomething you learn too early um the\njust just naturally in the process of\nlearning the early stuff will get\nimproved upon these strict ordering on\nwhat you learn is just to prevent\nmistakes irreversible mistakes at the\ninitial early on when the AI doesn't\nhave enough of our preferences\nwe um consider an example it's terribly\nabstract mm-hmm I mean for example in it\nwe were just about people not wanting to\nhave their preferences satisfied too\nquickly so I was imagining what I don't\nknow somebody winning the lottery\nwinning a million pounds on the lottery\nand saying I'm afraid this well what\nthis will do to my life so I'd like to\nhave the money fed to me more slowly or\nI'd like to experience the life of a\nmillionaire for a short for a for a\nmonth and then decide what I want to do\nafter that is this what we have in mind\nand like is this learning throwing out\nsomething that we've learned to Ernie\nand I need an example know what you're\nsaying is not an example of that what\nyou're saying is something that might be\ndesirable anyway\ncould be desirable depending on the\nhuman it could be could be desirable I\nwas just saying something like that is\nalso in a sense manipulative or could be\nmanipulative it's it's sort of tricky\nthe borderline between the perfect\nButler\nand the perfect evil Grand Vizier is not\na sharp one if you're if your preference\nif your desires are anticipated and all\nyour objections are anticipated this\nshades into manipulation anyway so that\nthat's not the throw out what something\ndoes early I'm not saying if you throw\nout anything that is learned early it's\njust that it will get updated the the\nthings I was saying that if there is an\nordering we should have it learn like\nit might sir instance nuclear plants and\nkill everyone and then regret deeply but\nit did this sort of sort of simplistic\nexample we don't want it to do that we\nwant it to realize that some things we\nconsider irreversible on the other hand\nit might sort of take apart an asteroid\nand fling bits of it into the Sun and we\nreally don't care about that in a sense\nboth of these are equally irreversible\nbut be killing everyone is something\nthat strongly objective and the\ndisassembling and asteroid and chucking\nbits into the Sun is something we\nbasically don't care about at all so we\nshould quickly teach it what we consider\nirreversible of disasters so that it\ndoesn't do those in the process of\nlearning the other values\nokay let me see if I can find and by the\nway could I watch there I think you\nshould put evil Grand Vizier just in\ncase people don't um don't realize okay\nperfect in the beginning of your\npresentation you had an example of three\npeople estimating the rationality h ln k\nand believe you called them obtaining\npretty much the same results in that all\nthree had the same models and believe\nthat people the reason why people were\nplaying chess is because they want to\nwin because that is what our empathy\nmodel would believe we would do that in\nthat situation but it seems that this is\nsay that these three people are likely\nto fail in very much the same way so how\ncan we I've taken quote here from you\nthat we can we can see that the method\nstops working when the internal models\nthat we have and those others have start\nto diverge and my feeling is that they\nmight be that there might be broken in\nin pretty much the same way well so to\ngive an even simpler example than the\nchess one if I was to start get red in\nthe face start shouting things at Suren\nand punch him\npeople would conclude that I was angry\nat CERN and wished to do him harm I I\nwould agree with them in this\nhypothetical almost certainly so this is\nan example of a behavior where almost\neveryone agrees on it these sort of the\nstrong emotions people tend to agree on\nit and there is a lotta that these are\nsort of the strong similarities I've\nseen now just quick when it might also\nmake some\nmight be something like the reason\nactually the reason while you are red in\nyour face is because you've been stung\nby a bee and have a relative to that and\nyou're trying to swipe away a piece from\nmy face so so that's why everybody\nthinks that you are angry with me but\nactually you are trying to protect me\nfrom the bees that's a possibility and\nhopefully further evidence should\nelucidate that problem but to get back\nto sort of your question I'm saying that\nthe three models are sort of similar and\nthis is and this is true but it and that\nit's a sort of shortcut but what I'm\ndefining to be the true one is the\nperson's internal model so whether I'm\nangry or in pain and helping you so\nangry versus bees the true model is the\none inside my head now maybe I was angry\nat you and then I noticed that there are\nhundreds of people watching us and then\nI noticed a bee and then pretended that\nthe bee was the cause of it and then\npeople would misjudge that and they\nwould get it wrong but the truth in the\nsense is the model with in my head the\nsimilarity with other humans is a way of\naccessing this truth at least better\nthan random without disassembling my\nbrain and having perfect death in my\nwrong right or that kind of thing so\nwhen the winner Turner models okay so so\nin a sense our models of other people's\nmodels are proxy measures so they fail\nin the way the proxy measures failed\nespecially move beyond typical\nsituations but my argument is that in\nthe grand scheme of things they're\npretty good proxy methods they can get\nus a significant chunk of\nway might we not run into good hearts\nlaw with these yes because we're trying\nto end up in a very different place from\nwhere we are right now if we rely on my\nmodels of someone else's models to give\nthem their values but we don't run into\ngood hearts law we are basically using\nthe wrong definition what's gonna happen\nis we're gonna sign someone their values\nwhich is my judgment that they values\nwhat should happen is that we have my\njudgment of their values other ways of\nmeasuring their internal models either\nthrough their stated preferences in\nreliable situations and in unreliable\nsituations so lots and lots of examples\nof people saying things which we believe\nare true Peter thinks two things if you\nbelieve or false lots of different\nestimates for what these internal things\nare and maybe an actual definition of\nwhat it is which the AI can then sit the\noutside evidence towards finding that\nbut yes if you say mind if you use my\ninternal models to find someone else's\nvalues then it will be it's wrong it\nwon't be me it won't be as its disaster\nmode is different from other good hard\nsituations as am i but it's the wrong\nyou're doing the wrong thing and if you\ndon't add some uncertainty or some\nfuzziness or some definitional thing\njust to reflect the fact that you're\ndoing the wrong thing then you won't\ncorrect for the fact that you're doing\nthe wrong thing\nokay let's another question you have you\ndefined the one-step hypothetical where\nyou have like a minimal one step change\nin the human's brain where you post that\nand then one simple thing and then how\nwould you react in this situation and I\nexpect you could also say that's like a\nserious step hypothetical like a simple\nquestion and then maybe an infinite step\nhypothetical that would be in this\nreally really strange situation what\nwould you prefer something where you\nwhich seems to me like the coherent\nextrapolated volition I don't know if\nthis is precisely what you mean so and\nalso my question is one step seemed to\nbe fine but two steps are not how come\nit's basically dealing with the whole\nproblem of manipulation again the more\ninteraction there is the more\npossibility for manipulation there is\nand also the more possible to do is for\ncreating human bodies where they didn't\nexist before if you take say some\nexperiments yours have isolated tribes\nand you present them with episodes of\nStar Trek or something like that they\nmay develop preferences for something\ncompletely outside the realm of\nexpertise because they're sort of\npresented a story which tugs on their on\ntheir standard field yeah but basically\nit's not the further you go the more\nscope there is for manipulation if we\ncan or for just creating the values out\nof thin air if we can deal with that\nproblems then there's no problem with\nmultiple step I thought the\nhypotheticals it's it's yeah it was it's\nexclusively for that and the realism\nthat's not just a zero step is so that\nwe can actually go somewhere and it\nisn't immediately\npart of so with one-step hypotheticals\nhow far away can you get from our\ncurrent state can you get sufficiently\nfar\nwell my innocence I expect that this is\ngoing to be a sort of empirical question\nhowever we define one or two step\nhypotheticals one sign that things are\ngoing wrong is if rephrasing of things\nthat are logically similar or logically\nequivalent gives very different answers\non the part of the human so that's a\nsign of well manipulation about our\ncreating new values of those kind of\nthings so in a sense how safe these are\nis a relatively terrible question\nthere might be something also there\nmight be situations like with um\nadversarial examples in battery learning\nit might be that most examples most\nhypotheticals you can phrase are okay\nbut there are a few they give really\nedge results in that case we might be\nable to just say use this sort of\ncentral examples\ndon't use the optimized edge cases while\nthe equivalent of adversarial examples\nso yes so this is a two stomachs this is\na hiker this is there are empirical\nwaves dealing with this hopefully yeah\nlet me see if we can find any yeah I had\nI had this one that was actually more a\nquestion of see ensuring that I\nunderstood because in the ring group I\ntry to explain some of this and and I\nhad four kinds of meaty preferences and\nI think I might be confused about\nsouthies so here I have the standard\nself referential like all simple\nmeaty preferences should be penalized\nRussell style preferences penalizing\nthose that are not mentioned explicitly\nthe set of preferences that are not\ncontained in it and and once whether\nthey're in consistence I call Goodell\nstyle and the entry services preferences\nthose that don't want to be written a\nperson doesn't want to be reduced to\nyour Sufi function is this roughly\ncorrect\num I wouldn't necessarily label them\nlike that but these are all interesting\nexamples\njoin me can I go through them please do\nso the last one is the one I've already\ntold you that I don't really know how to\ncope with I've sketched a few examples\nof how it might work but those are sort\nof real hats all simple Mehta\npreferences should be penalised should\nnot be assigned to itself it's not\nexplicitly self referential and if you\nremove it's it's implicit self\nreferential that I mean this is\nsomething that someone can have in a not\ntoo silly example like I might have a\nmild version of this due to my fear of\nlosing oversimplification and thinking\nabout it I would not want this to be\napplied to itself because it's there to\nprevent things from happening not to be\nnot to be self referential and tossed\nout so there's I see no problem in so\nself referential things are a problem\nbut I see no problem in just saying this\none don't apply to itself all\npreferences not mentioned here should be\npenalized presumably there's a list that\ngoes with it or references that are\nmentioned here um this this sort of all\npreferences that don't come from my\nstudy of the Bible should be penalized\nis something that some people have for\nexample these sort of religious\npreferences are tricky because they're\ngenerally factually wrong or they\npremise things that are factually wrong\nbut there's probably secular versions of\nthis of everything that does not derive\nfrom Marxist Leninist thoughts within me\nshould be penalized again I see no\nproblem with not self referencing it\nhere is this Russell's paradox is that\nwhen the set that contains all sets that\ndon't contain themselves if that's\ncontained within itself and so you talk\nabout Russell style preferences but you\ndon't give an example so I'm wondering\nif this is an example or do you have a\nbetter example in mind of a Russell\nstyle preference Russell style I think I\nwas thinking of versa\nRussell and Goodell's style as basically\nself reference as just a shorthand for\nself reference or for things that's\nsaying bad things about themselves or\nrule themselves out like the bar okay\nRussell would be the barber who doesn't\nthe barber shades only those who don't\nshade themselves all preferences not\nmentioned here should be penalized then\nyou give a long list that doesn't\ninclude that preference well in this so\nthe thing is when I say glibly this\nshould not be allowed to refer to itself\nthis is what people tried to do\nmathematically for a long time and\nfailed they were sort of saying well\nthese goodell things these Russell\nthings they don't the kind of very\ndubious examples they don't crop out\nmuch why don't we just put them aside\nand it turned out we couldn't do that\nmathematically but in genuine human\npreferences I don't see these sort of\nthings coming up very often and\ngenerally when they in most cases that\nthe self reference is just don't let it\nself reference and if there are if there\nare if there's a cycle of to preferences\nreferencing each other in a way that\nwould push themselves up and push\nthemselves down\nthat just break the cycle and if there's\na contradiction deal is it like other\ncontradictory preferences so this is an\napproach that I think should apply to be\napplied to humans not to any logically\nconceivable agent just because and so\ninconsistent preferences should be\npenalized well if you have a value of\ninconsistent yeah if you have a certain\njudgment of inconsistency that might\ninclude this thing itself um if it's\nagain I think just breaking the\nreference to itself could work if these\nare weak preferences then the main thing\nis not to allow them to blow up\npreferences should not be allowed to\nblow up to minor preferences should not\nbecome major through self references\nmajor preferences should not become\nminor to self preferences but minor\npreferences could stay minor or become\nvanishing true self references that's\nless of a problem yeah so this is kind\nof my answer now that I sees things in\nthe chat have appeared yes Robert has a\nquestion first\nokay stuff fresh and meta preps are\ntroublesome so all meta preferences that\napply to themselves should be penalized\nyeah possibly but I mean people don't\nreally have preferences like that or\nthey tend to hold them quite weakly they\nmight make themselves hold them strongly\nthe way I am basically going to try and\ndo it if I have to do this is to assign\nevery preference to a ordinal or to a\ninteger and all Preferences refer only\nto ones that are lower themselves on the\nscale and I am going to do some sort of\nenergy minimization to map everything\nto this format so the some sort of\nenergy minimization process to break all\nthe cycles could you Victoria\nlaboratories understand it's self\nreferential why would you want to\npenalize simple meta preferences\nokay um I okay there is a meta\npreference that I am very scared of and\nthat is simple preferences should be\nover weighted I see this leading to the\nreplacement conclusion I see this\nleading to discarding so many of human\npreferences on the altar of simplicity\nso that is a meta preferences that I am\nvery scared of and over an over seeking\nof simplicity is something that I think\ncould end disastrously\nso I do not want to penalize simple meta\npreferences I want to stop them from an\noversized domination so that's why I\nsaid I might have something vaguely in\nthat area of course this is within\nlimits if you don't have some form of\nOccam's razor you don't get anywhere so\nI'm not in favor of arbitrarily complex\npreferences either but especially for\nphilosophers I see the failure mode of\nover simple preferences well as being\nthe problem okay yeah I have another\nquestion about this so let me see if I\ncan find that\nso I'm Rob miles mentioned said that\nsimple preferences should be boosted is\nitself simple yes and this if you have a\nvery mild preference for that it should\nnot then become a dominant preference\njust through\nin itself it's very clear that\npreferences should not be allowed to\nboost themselves to heavyweights if they\ndon't have that weight initially I've\ntried to here on the screen come up with\nsome arguments why you might actually\nwant to be a while might not be so bad\nor why I might be desirable to have a\nmore compact simple elegant utility\nfunction like for instance if you want\nto convince people to have an utility\nfunction then if you can say like if you\nhave some big symbols like this is your\nvalue of friendship this is how you\nvalue art if you have a utility function\nwith these kind of big things in it then\npeople would probably be more likely to\naccept it compared to if it's very very\nspecialized and it's like a million\nfactors and people don't understand it\nat all so that's my my first argument\nfor this and the second is that might be\npossible to prove statements about\nelegant utility functions things like\nyou will not regret if we try if the\nutility area to maximize this utility\nfunction and I think also if you want to\ntry to do some any kind of philosophical\nvalidation on it that might be easier\nwith an elegant utility function\ncompared to one that you can test with\nthe computer and see what what the\nutility function does in different ways\nbut you can't really understand it so\nthe second statement I don't think is\nall that relevant because this is about\nhow you can convince someone you will\nnot regret adapting this utility\nfunction well then after that you you\nmight just have no opportunity for\nregret this is basically an equilibrium\nstatement that doesn't mean all that\nmuch unless we define regret in some\ncomplicated\nit captures on current feeling of what\nwe're great would mean easier to\nconvince people to accept possibly I\nalso feel that you might be able to\nobfuscate a lot of the nastier parts of\ntheir self-image in some of the details\nbut it seems about that is plausible\ndifferent class of errors can hide an\nelegance and different class of errors\ncan hide in verbose elegance most of the\nwork is philosophical while the most of\nthe work is computer based testing\nI just basically see people's\npreferences as um yeah you virtually\nnever see someone say well you very\nrarely see something that is perfectly\ndefined my preferences but actually\nyou've gone a bit too complicated you do\nsee examples of that so I see your\narguments for elegant utility functions\nI just see that the mess of human\npreferences all the clutches we have and\nthat's a too strong preference for\nelegance would involve sacrificing huge\namounts of our current values and in my\nview far too many of course this is a\nrelative statement and people who are\nphilosophers who have strong meta\npreferences here will end up probably\nwith more simple and elegant utility\nfunctions than most people\nokay I think that just about conclusive\noh did Robin have a final question here\nthey also know that there actually is a\nsimple clear single let's cut result in\nWeiser Salamis with the process find it\num in this hypothetical what defines\nthat there is on the clock shows all of\nhuman value I suppose if there was some\nif it you could imagine something that\nwas designed to spit out something\nsimple and then it does spit out\nsomething simple and everybody goes oh\nyeah that's correct I don't know how we\nmanage to try and do all this philosophy\nfor so long without realizing that that\ngives me what I want this is actually\nwhat I value and if it's a process\ncapable and finding that if it did exist\nit's capable of finding some versions of\nit it's some situation in some\nsituations it is not in all situations\nlike you're I mean what you're\ndescribing seems very similar to a\nhypnotic hawk did not want we would not\nwant it - right right - fine but if like\nway I could see it happening if it does\nall this complicated things and it turns\nout that there are say 500 factors or\nlittle between dramatic 50 factors on\nwhich all of human values load very\nstrongly and all the rest looks like\nnoise then this is some way that it\ncould end up finding this this thing\nit's yeah the the ultimately convincing\nargument I'm not probably won't find\nbecause all the precautions to avoid\nmanipulation on\ngoing to be pushing against that because\nbut the but it yeah so these sort of\nthings that are confirmed yeah that's\nthe right thing\nit wouldn't certified things that are\nsurprisingly simple when you do the\nextraction process it would find okay\nChris you have a metaphor yes in my I'm\nalways searching for concrete examples\nto UM to ground my understanding of\nthese mind-boggling abstractions I'm try\nand also deeply I react very strongly\nagainst the idea of synthesis what it\nmeans to synthesize human values into a\na general utility function and so I what\nis it doesn't make sense to say to\ncompare the process to like if you have\nyou have a lot of runners in the 10,000\nmeters they will have their separate\npreference is very largely their\npreference is to win the race and\nobviously those are mutually\nincompatible and so you organize a race\nso the business of when we synthesizing\ntheir values values\nyou don't really synthesize their values\nwhat you do is you you organize a race\nwith the rules so that you can have an\noutcome and they can participate knowing\nthat this is at least gives them a shot\nat achieving that achieving their\npreferences and beyond this if you think\nabout organizing the Olympic Games again\nyou can't synthesize everybody's\npreferences or values or anything like\nthat except in a very general high-level\nsense and somehow we do manage to\norganize Olympic Games which a lot of\nthings are going on it was disaster make\nmoney which you personal fame to achieve\nnational\nor - I don't know promote peace panic\nbetween the nation's or something all\nthese different things are being you are\nattempting to achieve all these things\nin organizing something like the Olympic\nGames and to some extent you succeed is\nthis an analogy for what you are trying\nto do with your research agenda or is it\na bad analogy um it could work as an\nanalogy by the way I keep on using\nsynthesize and construct to emphasize\nthat this is a constructive process with\na lot of contingent choices that and I'm\nalso not particularly enamored of\nutility functions just for the sake of\nutility functions but they seem to be\nstable in the way that other\nrepresentations are not but maybe an\nhonor to develop your analogy it would\nbe how can we arrange the outcomes of\nthe Olympic Games to satisfy people's\npreferences to the maximum obviously we\nonly have a certain number of gold and\nwhile golds and bronzes and even\nSilver's to give out I give them out in\nthat order because that seems to be the\nhappiness order of the people who get\nthe medals but we also have\nopportunities for people to get an ace\nunexpected sponsorship deal to maybe\nfall in love to outperform some personal\nmilestone to be a key player two key\nmoments to make it on television this is\nwhat I was meaning when I was saying\nthat very few values are completely\nantagonistic then this is basically\nfinding the best way of satisfying the\nmost and the things the values within a\nhuman tend to be\neven more non antagonistic like someone\nwho wants to eat chocolate and have\ngreat self-discipline is these are not\npertinent there are sort of\ntechnological or their worlds in which\nthese can easily be both satisfied and\nthere are ways of building their\nself-discipline and allowing them to\nindulge in the right amount of\nchocolates for examples even these\nthings that seem as if they're\ncompletely antagonistic or just sort of\nmildly so does that so that's sort of\nyeah so I does that combined with your\nmetaphor sort of help and I think so I\nthink so I was thinking specific for\ndirectly than the Olympic Games I was\nthinking about you know political ways\nof ordering society where you have these\nconflicts between freedom and equality\nand Mitt utilitarian well-being for as\nmany people as possible and so that\nthat's the kind of attention so now it's\nwondering whether an AI consoles for us\nso yeah that seems to tell you about it\nwith what you're saying so it's a you're\nsaying some the optimistic that these\nthings can be well given given sort of\ngiven superpowered a high then yes I\nmean there are a lot of I mean people\ncan be satisfied in surprisingly odd\nsituations I mean there's some people\nwhose best time of their lives were in\nprison there is the whole medium fish in\na small ponds which many many people\nseem to like it's just a question of\nsetting up the various small ponds like\nif your online games\nfor example um people seem to enjoy them\nand enjoy a certain level of success\neven though there's only sort of one\nperson who's the best at that game at\nany time and a couple of hundred who are\neven in contention talk and people still\nseem to enjoy that and get sort of rush\nfrom victory and enjoy being higher in\nthe ladder than other people so that\nthere are but there are a lot of there\nare a lot of levers that a super\nintelligent AI a super powered a I could\npull so this is enly sort of utopia\neverything is solved or kind of thing\nthe easiest thing is that a lot of\npeople are antagonistic because they're\nsort of have different status like\nthey're on the same stage of slack\nladder or the same dominance ladder the\neasiest thing is to put them on\ndifferent status ladders and have their\nworlds going more the way they want or\nthe bits that they see or the bits that\nare relevant to them okay okay so I\nthink that was just about it so I would\nlike to say thank you once again to a\nrap song for coming here and answering\nour questions and we wish you good luck\nwith your research agenda thank you and\nif anybody wants to sort of take part or\nhelp or direct anyone towards it I'd be\ngreatly appreciated and general feedback\nis also welcome oh and on the auricle\nquestion there is a a thousand pounds\navailable for winning the contest and\nthe link I posted the very beginning of\nthe talk so thanks everyone\nthank you and on a practical note next\nweek we\nbe meeting in the reading group on\nTuesday the 21st and we will discuss\nJeff Hawkins utter audible about why he\nis optimistic about AGI within the next\n20 years so thank you everyone and see\nyou next week", "date_published": "2019-08-14T22:42:08Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "1f87fbf1fced985b39f36c7d9ddb8228", "title": "121 - Artificial Stupidity", "url": "https://www.youtube.com/watch?v=AiqgacILGcQ", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "I was told you guys have access to it\nbeforehand so how many of you actually\nread it I've written all details ok I\nassume some of you didn't read it so\nthat is still a reason for me to present\nabout it I know my students never read\nanything they don't even buy textbooks\nso I assume if it's not in a\npresentation you don't know about it if\nyou want to know more if you have\nquestions I'm happy if you follow me\nemail me elaborate but I'll try to kind\nof do a self-contained presentation of\nthe topic ok so if you know history of\nartificial intelligence you can trace a\nlot of it to work of Alan Turing he\ndefinitely had amazing contributions to\ncomputer science but specifically an AI\nis known for probably the most historic\ntest the so called imitation game and it\nis probably the most popular way of\ndetermining if a machine is intelligent\nhowever what it really measures is how\nhuman intelligent the machine is right\nit's not helpful to be super intelligent\nand make very intelligent answers which\nquickly will be discovered to be\nartificial so you have to you have to\nmake mistakes and typing you have to be\nkind of slow and typing you have to not\nbe brilliant mathematically and Turing\nhimself talks about that in the paper\nabout the need for a machine to\nrecognize those limitations and to\nintegrate them into the answers but\nsurprisingly there's been very little\nattention paid to this aspect so\neveryone's trying to develop more\ncapable devices never looking at this\nlimit of human capability in any formal\nway so what the paper does is kind of\nformalize this notion and suggests that\nit may be very useful to to look at it\nsome more so what is it like to be human\nlevel intelligence but\nnot greater than that and how does that\napply to all sorts of different\nsubdomains mathematics psychology and so\nnot surprisingly there are fundamental\ndifferences between capabilities of\npeople and machines we can concentrate\nand few obvious ones but there are many\nmany others so for one computers a\ngraded computing not surprising you can\nfollow long chains of algorithms perform\nvery significant computations data-mine\npeople are not as great at that likewise\npeople have very limited memory\ncomputers have almost perfect memory\nwhatever its long-term or short-term\nthey're much superior to us only have a\nhalf amazing common sense they can\nquickly figure out ambiguous sentences\nand fuzzy visuals worst computers are\nnot that great at that with respect to\nrationality we are capable of creating\nvery rational devices strictly\nstatistical analyzers whereas people are\nusually not so rational and a number of\nwell-known biases so what is\nspecifically this concept of artificial\nstupidity we want to study human\nlimitations and to formalize them in\ndifferent domains and to be able to\nencode similar flaws similar properties\ninto a machine and purpose and of course\nexplain why it's useful but this is the\ngeneral idea so if you understand this\nmuch unfortunately there is alternative\nuse of this term artificial stupidity is\nkind of related to natural stupidity and\nlots of funny jokes about that but we\nexplicitly mean this purification of\nnatural stupidity natural human limits\nso why would you want to make your\ncomputers dumber it seems like we\nstarted at that point already\nwell there is quite a few useful\napplications so obviously you needed to\npass the Turing test itself so if your\ninitial goal was what Turing hoped for\nyou need to understand what the limits\nare in order to succeed there are also\nquite a few applications in terms of\ndeveloping products so whatever you're\ntalking about domestic robots sex robots\nwhatever your flavor is it's wonderful\nif they have good understanding of human\ncapabilities and can relate and can\ninteract in a kind of equal kind of way\nif you designing games it's nice if you\nactually have a reasonable chance of\ncompeting in a game it makes it more fun\nmore interesting in general this\nlimiting of power differential between\npeople questions could be quite useful\nfor safety reasons and if you scale your\nsystems beyond what we have today you\ntrying to achieve super intelligence at\nsome point I think it would be necessary\nfor successful value alignment to\nunderstand completely what it is you're\naligning to the model for for human\nintelligence human sensory system and so\non so what we need to do this not so\nmuch a lot of original work but maybe\ncollect relevant information from a\nnumber of other fields we are interested\nin research from psychology sociology\neconomics mathematics anything to do\nwith what are those limits in humans so\nwill we know this kind of limited\nattention capacity limited recall\ndifferent interpretations of statistics\nbut there are also kind of interesting\nobservations we can make about human\nvisual system for example how many of\nyou can actually perceive the solution I\ndefinitely see it moving ok so I shared\nthis on Twitter and I got something like\nquarter million I wouldn't call them\nlikes but interactions of some kind\nwhich gave\na grand total of one follower but people\nseem to definitely understand this and\nreact in some interesting ways\nwhereas machine which would not have\nthis bargain their visual system would\nnot be able to relate and connect so\nthat's that's pretty much the example of\nwhat we want we want full understanding\nof what it's like to be biological here\nthere are some trivial examples that can\ngive you from different domains so the\nfamous one probably everyone heard who\ntook some psychology courses is this\nmagical number 7\nspeaking of short-term memory we usually\ncan remember plus or minus two seven\nchunks of information so depending on\nhow you chunk it up it could be\ncharacter symbols word sentences but\nusually that's the number and we see it\nshow up in many areas of life so for\nexample phone numbers typically have\nseven digits because that's the best we\ncan do on average in terms of memorizing\nthem but such such constants are not\nreadily available if your programming in\nAI and you would like to have access to\nor what are the limits for different\nhuman properties it's not something you\ncan quickly look up there is not an api\nfor it so that that seems to be the\nlimitation we want to address another\nexample which has been studied\nextensively but again is not directly\navailable is a set of so called\ncognitive biases there is quite a few of\nthem we try to not just list them but\nexplain how they may be useful in\nspecifically creating safer intelligent\nsystems what those limitations may imply\nif you think about it AI started with a\nlot of work and URIs --tx which were\nexactly that shorthand limitations in\ncomputation to improve efficiency\nimprove performance but then AI does it\nwe say okay it's a great heuristic\nreally efficient but then people do it\nit's a horrible bias we need to fix it\nbut that's essentially we're talking\nabout the same same mathematical\napproach to solving problems so what the\npaper does it's not an expert\nmental paper we didn't code anything\nwithin running experiments but we\nproposed this essentially direction for\ndoing additional work were from multiple\nfields it's highly interdisciplinary we\nwant to understand physical cognitive\nand cultural properties of people and\nformalize them to make them easily\naccessible to anyone programming AI so\nhere for example you have some extreme\nproperties of humans as physical systems\nin the left bottom corner you can see an\nexample from recent experiments on moral\njudgment about self-driving cars from\ndifferent people around the world and\nthere are some very difficult to predict\ndifferences cultural differences who the\nself-driving cars should sacrifice is it\nyoung people old people men or women\npoor or rich and things like that need\nto be encoded for a system to to\nunderstand what's going on and again I\ndon't think it's quite readily available\nat this point so some examples of where\nthis has already been applied over we\nwill see it applied in the future games\nis one example and pretty much all the\nexisting papers on this subject which is\nlike one or two on the main of games\nwhere people quickly realize nobody\nwants to play against God level AI it's\njust not fun if you playing chess and it\ndestroys you every time it's boring so\nyou should be able to adjust level of\nyour non playing characters your\nopponents and whatever it's from easy to\nimpossible or just make it sometimes\nmake mistakes it's definitely part of\nmodern game design where you have to\nintegrate such human limitations if it's\na shooter game you want the opponents to\nmiss you sometimes and so on another\nexample is specifically with chatbots\nand we see it a lot for shared\ncompetitions such as Lochner Prize\nwinning BOTS make a lot of mistakes they\npause the type of mistakes and I think\none of the best winning strategies is\njust talkin I remain silent for a while\nand seem very deep and interactive\ninteresting but it would be a bargain\ncode and people perceive it as quiet\nhigh level performance so we saw this\nexemplified with Chavez and more\nrecently we saw demonstrations of Google\nduplex system and that's quite\nimpressive system very natural voice\nallows you to make phone calls to\ndifferent businesses make appointments\nbut what may that sound very human is\nall the kind of human like let me think\nwait mmm sounds wish it made for no\nreason other than to sound dumber than\nit was so more human by extension so\nthat's I think something again we see\nshow up even an existing systems but\nagain without any formalization or what\nare the optimal delays what we need to\nsay to sound even more human and so on\nso that's some examples of application\nthe paper came out very recently already\nstarted to gain some citations which is\nnice to see it's not formally published\nit's just print an archive but it\nquickly went viral I don't know if it's\nthe catchy title over the official\nstupidity of people actually think it's\na brilliant idea\nbut quite a few media outlets and a lot\nof international coverage was given on\nthat and the comments for those\npublications are called mine of\nartificial and natural stupidity for\nsure but quickly assemble the data set\nfrom that so you kind of conclude what\nwe're trying to do is start this\ndirection of research will be collecting\nnecessary data from different domains\nand human cognitive and physical\nlimitations different properties and\nfactors and the goal is to formalize it\nmake it available to researchers in\norder to make more customized safe and\nsecure systems and with a long term plan\nof assisting with value alignment of\nPGI's super intelligent system\nand better understanding humans in pasts\nso with that I'm happy to switch to a\ndiscussion mode and answer any questions\nor recruit you guys to work on this\nproject as much as it works let's see if\nI can switch stop sharing okay so I'd\nlike to say thank you to Romanian pulsky\nfor this presentation and then I'd like\nto hear if there are any questions I\nwould like to ask how we also considered\nartificial wisdom which would also kind\nof apparent from artificial cleverness\nbecause right now\nwhat people are mostly working on is how\nto make the AI clever and find any very\ndifficult baby complex algorithms that\nsolve some problems but maybe the\nproblems we have are actually stupid\nproblems and we have not solved them at\nall so what about wisdom which is more\nnot about having some convoluted\nsolutions but instead changing core\nviewpoint or rules I think I've missed\nbeginning of your question I got the end\nof it but beginning was kind of cutoff\ncan you repeat that first so I was\nasking have you talked about artificial\nwisdom so we right now we were opposing\nartificial cleverness with artificial\nstupidity but I think there is third\ncomponent that and third I mentioned\nwhich is artificial wisdom so we don't\nhave any convoluted algorithms instead\nwe have different viewpoints so we might\nsimply have very simple solutions or\nvery complex problems if we simply\nchange our viewpoint not sure I have a\nbrilliant idea here changing your\nviewpoint as in changing our values to\nmake them easier to fulfill is that what\nyou have in mind\nyes um but so what if we have some\nartificial intelligence system that\nhelps us to change our viewpoint and\ntherefore if our problem solved without\nhuge side effects so figure something to\nconsider but it's also considered a\nsafety issue right if a system tries to\nmodify you in order to make its\nprocessing more efficient there is a\ndanger of losing your utility function\nyou stopped being yourself right if you\nare modified into someone who enjoys\npaper clips and that's it\nyou can fail to preserve your identity\nat least in the short run for sure yeah\nI think there is some danger why I\nthought that it would be useful is that\nuseful useful usually when people invent\nsome new technology they very soon\nutilize it to the maximum extent over\nthe world and that's certainly dangerous\napproach and then you have simply\nkeywords instead changing your viewpoint\nyou don't have to utilize new technology\nwith the maximum extent and I think for\nexample paper clipping will be still\nutilizing the technology and what about\nthe cases where you only change the\nviewpoint but do not utilize too much\ntechnology in order to fulfill this you\nknew you will point so in general I have\nstarted thinking about values and in the\ncontext of value alignment and just\nbecause we call them values it doesn't\nmean they are actually valuable right\nthey're pretty randomly assigned at\nBirth based on your culture so there is\nsomething to be investigated in terms of\ncan we be happier so as a foreigner in\nthe United States I'm quite happy to\nenjoy unpopular sports and not care\nabout popular ones so I save on tickets\nI don't have to go to Super Bowl it's\nvery good\nI wonder if this can be streamlined\nsomehow but I haven't done any published\nwork on it I have a question then when I\nlooked at the list of biases\nI thought of another candidate which is\nthat humans in general prefer to have\nstories about what they do they don't of\ncourse sometimes we act based on pure\nintuition but we prefer to be able to\nexplain our actions to ourselves and to\nothers and this seems to be an important\nissue in particular with the kind of AIS\nwe see neural networks that give\nsuggestions that are unexplainable\nbasically and we would prefer a safe\narea to take actions to pass towards\nactions that can explain just like\nhumans all right and that's another\nsubject I'm very interested in and\nslowly collecting literature and\nplanning and doing things one thing to\nobserve so we have this expectation that\nneural networks artificial deep neural\nnetworks will be able to explain\nthemselves but that's not true of people\nthen we do experiments and split brain\npatients for example will quickly\ndiscover we mostly make up\nexcuses for why we do things after the\nfact so it seems like we have a much\nstronger requirement for eyes to be able\nto explain things if they are truly\nsuper intelligent and make decisions\nbased on let's say million factors with\nthe unique weights any explanation they\ncan give us would have to be dumbed down\n story or worse some sort of\nmind hacking where they just make you\nhappy with the explanation so even that\npart could be quiet unsafe and I don't\nthink you can understand the full reason\nfor something so so I think I really\nliked about your paper more superior\nthan human intelligence but I still\nthink safety around even human level\nintelligent AI humans in general can be\ndangerous to me they're not safe I\nwouldn't call them a safe intelligence\nand even going one step down further\nthan that like what level of stupid AI\nis like still safe\nI can imagine of a lot of different\nscenarios where the AI is sufficiently\nstupid it's still dangerous to us so it\ndoesn't seem like those two vectors are\nnecessarily combined together what are\nyour thoughts on that space I agree\nintro those stupid people acquired\ndangerous I deal with them a lot and\nthey cause most of Constance in my life\nI guess the difference is between\ncatastrophic damage because they can\noutsmart everyone versus just kind of\nscrewing up locally and whatever you had\na car accident somebody died I think\nthere is a direct proportion between\nintelligence and level of damage you can\ncause as well as control ability level\nof intelligence so for more intelligent\nsystem is the more it is independent and\nuncontrollable I don't know if you had a\nchance to read my paper on differences\nbetween AI safety and cybersecurity but\nthis safe human problem is something I\ntalk about explicitly and I'm trying to\nreduce problem of making safe AI to the\nproblem of making a safe human which we\nof course failed to do for millennia\nwith culture loss\nprisons bribes you name it nothing\nworked so far so it's really interesting\nI haven't looked into this space and I\nwould definitely be open to\nrecommendations if anyone has it just\nlooking at definitions around safety I\nthink your point about humans not being\nsafe is kind of interesting I'm not sure\nI you're pointing out I don't have a\ngood definition of what that means like\na very rigorous idea of what it means to\nbe safe so I'm definitely I would assume\nthis is the audience to difficut\nrecommendations in that space I don't\nthink that is a very formal definition\nbut just the idea of toss let's say we\nwant a very safe human to work with very\nsensitive information so something like\nan essay in Snowden right to have\npolygraphs we had background checks\nnone of that work whatsoever so I have a\ncouple of related questions\nso the first one would be if you were\nable to implement artificial stupidity\ninto a into an AGI how would you keep it\nstupid enough that it can't self modify\nso that it's no longer artificially\nstupid and yet intelligent enough that\nit's still capable of doing real-world\nwork and the related question is if say\nif we were able to successfully\nimplement artificials artificially\nstupid AGI and yet we still had a\ngeopolitical system somewhat like our\ncurrent one what's to keep America from\nmaking an AI with IQ 100 and then Russia\nsays we can out-compete America and\nstill be safe by making an AI with IQ\n115 and then China says oh well we can\ngo up to 130 and still be fairly safe\nand so on until we're just back to the\nback to the paperclip maximizers right\nso with self improvement then we talk\nabout artificial stupidity we are kind\nof anchoring an average person so IQ of\n100 that's actually quiet low if you\nthink about top machine learning\nresearchers people capable of modifying\ncode and improving it we're probably\nlooking at much higher levels on 3150\nand so on so hopefully if it simply\nwould not be smart with those\nrestrictions to make those improvements\nbut we haven't gotten that far I mean at\nthis point I'm just happy if I find some\nnumbers to put in a dataset as far as\narm arms races between different\ngovernments and governance of AI that's\na huge problem and unsolved one as far\nas they can tell if problem of creating\na is huge like Manhattan Project you\nneed a government in in Google size\ncorporation maybe governments can agree\nor sign something if a person can do it\nin a garage and a laptop then it doesn't\nreally matter what\nwe on whatever standards we proposed\nit's just going to be done immediately\nby someone just you know get famous\nthough\nalright financially so very very\ndifficult to say anything about control\nfrom the governance point of view I have\na question it seems like many of the\naxis that are currently pursue or\npossibly in the future pursuing AGI\nthings like military intelligence\nadvertising giants like Google and\nFacebook or researchers trying to the\nTuring test and it seems like all these\nthese three kinds of projects have\ndeception of humans as a rather key\ncomponent you can if you want to do a\nmilitary intelligence project you need\nto be able to explicitly model the AGI\nneeds to be able to model how to cheat\nother people and advertising problem in\nthe same I guess and that seems very\nproblematic right huge so there is a few\ndirections I'm looking at with this and\none is artificial propaganda just kind\nof optimizing effect on human psychology\nfrom running lots of experiments and\nseeing what works we kind of saw it with\ninfluence on US election right you can\noptimize its to trigger just a certain\nset of responses and people from\nunrelated data okay I like spicy food so\nthat means how they respond well to this\nmessage another one I showed you this\nvisual illusion right so those are right\nnow designed by people for people but\nwhat happens then machines are capable\nof coming up with same type of illusions\nbut not just visual but in avid MA in\nsexual allusions we had a paper actually\nfrom the same group who's working on the\nserial inputs for artificial neural\nnetworks come up with some images for\npeople so they took a picture of the cat\nflipped a few pixels now it looks like a\ndog to most people so that's a proof of\nconcept but I think it's going to get to\nthe point where I show you a water\nbottle and you go\nfor me Scott Alexander wrote a post for\nHalloween about a fictional story about\nsome researchers finding a way to do\nthis\nassistive statement I think they called\nit which was a exceptional a short\nstatement that was exceptionally capable\nof influencing humans it's a funny story\nand you can post a link to that I would\ndefinitely take a look I'm so behind in\nmy reading I need some super intelligent\nhealth care to just some of those\nexamples of artificial stupidity that\nyou mentioned beginning there were\nlimitations built in and to the\nperformance of the machine and they were\nessentially deceptive in nature weren't\nthey are things like you give example of\nthe duplex demonstration the hesitations\nand the others and ours and so on which\nare designed to to fool the other person\ninto thinking is a human they're dealing\nwith which you can justify as saying it\nfeels more natural but I hope that in\nthat in that one area that one limited\narea I hope well except that a digital\nassistant like that phony up to make an\nappointment would announce that it was\ndigital and not human and in that case I\nmean I don't really need it to stay home\nand uh obviously I know but if it talks\nit a hundred times\nhuman rate it's got a talk at a rate\nthat I can understand and it's got to if\nit presents some sort of logical\nargument it's got to do it in steps but\nuh that I can understand but from that I\ndon't mind\nthese artificial failings are rather\ndeceptive I think I hope that we don't\nhave too many of them\nwell they are by design kind of\nartificial they don't have to be there\nCalifornia just passed a law saying that\nchatbots would have to self-identify as\nbats so in a way there is some effort\nbut I think we need to do some studies\non psychology of human enjoyment of\nthose interactions it's quite possible\nthat we just relate better and enjoy\nbetter conversing with other what we\nthink as humans and as the system's\nbecome more easy more interactive more\nhumanoid bodies it will become probably\nmore so so right now we're talking about\njust voice and the phone what if it's a\nsex robot for example right you probably\nwant it to be pretty pretty good\nimitation of a biological body to some\ndegree even if it didn't have to yes yes\nI might prefer to for it to have a few\nminutes again whatever could be just\nenergizer bunny yeah I have a quick\nquestion in many of your proposals for\nthe artificial stupid ATI is not\ncompletely clear to me whether you\nexpect it to have internet access\nspecifically in this paper in general in\nthis people to expect that I think most\nabout I don't think we explicitly\naddressed it in my paper sorry I boxing\nI strongly encourage no-one to connect\ntheir AGI projects to Internet probably\never but this can be decided maybe some\nsort of limited North Korean internet\naccess\nso how how promising do you think this\nis for super intelligence because your\npaper doesn't at the end you seem to\nsuggest that there's not really much of\na way to extend this but do you have any\nsort of inkling for what you might do\nand by the way I enjoy the paper it was\nquite a lot I hadn't heard of thank you\nso it's a contradiction in terms right\nyou can't be human level stupid and then\nartificial is super intelligent so one\nof the other but this is useful for the\nnew modeling people and you're trying to\ncreate some sort of value alignment to\nat least understand what are the values\nwhy a values in that level and people\nappreciate something in this range or\nnot so I think just having this\ninformation would be a useful tool even\nif it doesn't end up being some magical\nbullet so it it seems to me that you're\ntalking you're really talking about two\ndistinct concepts here one is having\ncomputers be able to model human flaws\nfor the purposes of understanding humans\nor pretending to be humans or what have\nyou and then on the other hand you have\ntrying to actually limit the capability\nof n AGI to human levels is there some\nconnection between those that are not\nseeing or like well it's the same same\ndata set you would need same information\nokay you could have multiple\napplications for this data but you need\nto collect some information on same same\nentities would it be that common sense\nis the intersection between artificial\nstupidity and kind of wisdom in the\nsense that the agents let's say humans\nor robots don't too much too much of\ntheir own thinking from down one hand so\nif they are stupid in some sense but on\nthe other hand common sense even if it's\nirrational and so on so it's stupid it's\nsomehow works so it has this was the\nbest of time so it is some kind of\nwisdom it's practical\nall right all this biases evolved\nexactly because they work I mean we try\nnot to judge individuals based on\nstatistics for groups but it works if\nyou have no priors right that's why\nthose things exist hmm so I think being\nartificially stupid and artificial\ndevice is not opposite thinks it's the\nsame in some sense well if you look at\nthe human genius usually they brilliant\nin one narrow area and really not that\ncapable and many others so it's possible\nthere is some buffer overflow from\ngenius to stupidity at some point but I\nhaven't looked into that I was more like\nhaving in mind that every person who has\nhaving common sense a very rich person\nwho is not too bright as this\nconjunction of both stupidity and wisdom\nthey don't think too much but they do\nthings that work so we are very\nresilient I will actually quite\nimpressed how terrible you can be at\nmaking decisions and still kind of be ok\nas a human right you still have a job\nyou're still probably not dead in most\ncases so I see people make mistake after\nmistake and yeah ok we still got a job\nso there is a lot of tolerance built-in\nfor this I think society to our legal\nsystem which assumes that we're going to\nscrew up like that I have a question do\nyou feel that a lot of the economic\npotential from AGI comes from the fact\nthat they are indeed different from\nhumans in that they are perfectly\nrational and\nunbiassed that that might be why we want\nto build them and in this case we want\nthe opposite of artificial stupidity in\npractice I am lost connection", "date_published": "2018-11-15T21:51:06Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "14d85a5ab9a05c6155becbb2f108b2a8", "title": "222. Robot Responsibilities", "url": "https://www.youtube.com/watch?v=l_pqGj-0g6A", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "uh welcome to\nthe ai safety reading group number 222\nthis week we're reading artificial\nintelligence and robot responsibilities\nuh innovating\nbeyond rights by uttan ash rafiyaan\nhutan ash rafiyan is\nworks at the imperial college of\nmedicine i believe\nuh in the uk and um\nuh works as a as a surgeon\nor as a sorry as a lecturer of surgery\nand occasionally writes about\nai and and\nand history funnily enough and so\nwe're looking at this paper that he\nwrote in 2014\nso the main parts of the argument uh\nthat i've broken down of this paper\ni call the terminator 9 pitch as in\nterminator the ninth sequel uh which\nmeta\nwhich meta ethical framework and robot\nsouls\nrobots are similar to animals but\nshouldn't be considered animals\nrobot inclusion in the u.n declaration\nof human rights\nand ancient roman law as a model of the\nlegal and social status\nmachines this may seem totally chaotic\nand it's because the paper kind of is so\nthis\nis where we begin so the terminator 9\npitch\nuh begins with what he calls this\nthought experiment\nwhere two nations or the alphabet we\ncall them the alphamites and the beta\nvistas\nand they're both using robots and ai and\nmilitary\nthe alpha mites locate a strategically\ncritical area to capture\nthat will end in the in a unilateral\nvictory for the alpha knights\num and end the war totally so the alpha\nmite robots go in\nand they they destroy all of the beta\nvista\nrobots in this strategic area but the\nlocal beta vista children fight the\nalpha mite robots and get injured\nthe alpha mite robots refrain from\nengaging in combat with the children\nand uh um and hutan says this is in\naccordance with all rules and laws for\nrobots\nand they look after the children instead\nbecause of the time\nand the resources lost looking after the\nbeta vista children\nthis strategic location is lost and the\nwar continues\nfor some time afterwards so\nash refi ashraf yan asks us to consider\nthe following from this thought\nexperiment\nshould robots have helped the injured\nchildren on our moral grounds\neven if this meant that the war could be\ndelayed or even potentially lost\nwith a pos with possibly many more\ndeaths\nfrom both warring sides two at a broader\nlevel\nuh what is the moral responsibility of\nself-conscious rational\nand sentient intelligence i would say\nthis first question that he asks is his\nfalse dilemma fallacy\nbecause he only gives us these two\npossible options\num of which i think there are many more\nthat we could imagine in scenario\nthe second question that he poses to us\ni would say is a loaded question because\nwhat is there for us uh within this\nscenario for us to\num to conceive of these robots as having\nself-consciousness\num rationality and uh and sentient\nuh sentience and intelligence um he\ndoesn't give us a lot of information\nso i i think we ought to ask this\nthought experiment the following\nquestions\nuh how are the robots and the ai\nsophisticated enough to wage war\ndistinguish between combatants and\nnon-combatants or fighting\nyet not be sophisticated enough to\nrecognize the futility of\nhuman governed war and the advantage of\nteaming up with other robots and ai\nto its own advantage against humanity\nitself\nwhich um as if you follow this group\nyou would know is one thing that we is\nposited often\nis the problem that ai may\nnot be interested in human things it\nmight not be interested in nations\nand have its own interests the second\nquestion\nis that he says that these robots follow\nlaws what are these laws that the robots\nhave\nhow is it the robots have developed in\nseparate and competing indeed at war\nnations\nand have the same ethical legal etc\nrules\nthe third question is how is it that\nrobots negate their goal of winning the\nwharf the goal of\ngiving medical care to the enemy yet go\nback to waging\nthe war anyway i think\nwe find in asking these questions this\num\nthis thought experiment falls apart\npretty easily which is why\ni call it the terminator 9 pitch that it\nwould be\nfine as a um\nas a as a film but it doesn't really\nhelp us to understand\nai or the ethics of ao very well\nso he goes on to say um we need to try\nand work out\nuh what is the meta ethical framework to\nunderstanding\nthe meta ethical framework that we\nshould use to try and conceptualize\nthe position of robots in our midst in a\nin this time in which robots are\npart of our society and he makes uh the\nfollowing\num statements he says determinism means\nthat all events are predetermined and\ntherefore it negates the concept of free\nwill\nthat libertarianism means that\nindividuals have moral responsibility\nand free will\nand that compatibility compatibilism\nmeans that decisions are made by free\nwill within the context of determinism\nhe says because of this thought\nexperiment the terminator 9 scenario\nit seems like the robots have free will\nbecause they help\nthe children but they are also\ndeterministic machines\ntherefore compatibilism compatibilism\nmust be correct um\ni think that we should respond that um\nuh first of all for coherent ethics uh\narguments of free will or determinism\nare largely irrelevant as both arguments\nare indeed unfalsifiable and lead to\nmany more problems\nthan solutions um a coherent ethics\nshould take into account what we can\nobserve\nwe can't observe the scenario that he\npresents to us\nwe can observe some things about humans\num\nwe though we can't observe necessarily\nfree will or determinism\nwe can observe some things that are\nuseful for\nuh working out what meta-ethical\nframework we should use\nin terms of human relations and indeed\nhuman machine relations we can observe\nthat certain circumstances\nare probabilistically more or less\nlikely to result in certain outcomes\neven though that's not necessarily\ndeterminist we can observe that certain\ncircumstances constrict human freedom\num extreme examples being slavery or\ncoma\nand certain circumstances allow for\ncertain freedoms um if you're\nuh wealthy and live in the democratic\nstates of the world we can observe the\ndifferent societies and cultures\nwe can observe the different societies\nand cultures\num affect patterns of human thought and\nbehavior\nand in order to understand the possible\nrange of human thought\nbehavior detailed investigations and\nanalysis of all known human societies\nand cultures needed to be done\nthis one i think is important because we\noften\nmiss it that we\nwe presume that uh that within just\nour society alone are all the possible\nranges of or within the societies that\nexist currently on the planet\nare all the possible ranges of human\nthought and behavior but i think we need\nto take a more\nhistorical view to really understand\nwhat the human's about\nin order to try and make um ethical\ndecisions\nabout human so\nhe then goes on to say well um he\nchooses\nuh compatibilism as the framework for\nunderstanding\nrobot and um and human relations\num and he says that robots are similar\nto human\nsimilar to animals and so we should\nthink about animal rights\nbut they should be considered animals\nhis argument is that\npeter singer says that animals are\ntreated like machines that convert\nfodder into flesh and that wild justice\nby bickoff and\npierce demonstrates that animals have\nmoral codes\nand that he asserts that robots will\nhave these characteristics\nhe says morality is independent to an\nindividual's\nspecies of origin although at a\npractical level\num human beings have been dominant\nboth animals he also says that both\nanimals and machines\nare subordinate to humans and he finally\nsays that there is an implicit\nconsensus that a utilitarian approach\ntowards uh towards animals is correct\num towards animal rights and there is a\ntacit recognition that animals carry a\nmoral responsibility that requires\nconsideration of their moral value\nhowever any direct moral comparison\nbetween machines\nand animals may prove superficial and\nproblematic\non this last point i don't know of\nany implicit consensus that says that a\nutilitarian approach is\nthe correct approach to take with animal\nrights i've never heard of that before\nand it seems like a pretty big statement\nto make without\nbacking it up and the second thing that\nthere's a\ntacit recognition that animals carry a\nmoral responsibility\nthat requires consideration female\nvalues i don't know of that either\ni don't know of people who consider\nanimals to\nhave a moral responsibility if i ask\nsomeone\nin the street does a tarantula have a\nmoral responsibility i think most people\nwould probably say\nthat's an absurd question and similarly\nif i ask them\nif they i don't know\nwhat's a any kind of animal basically so\ni don't quite understand what he's\nsaying here or in fact\ni think it's absurd what he's saying\nmore realistic\nso my response would be to this that the\nthat a sentence later in peter singer\num after the the quote that he used he\nwrites the cruelty\nis acknowledged only when profitability\nceases so in fact he's talking about\nin this quote um previous animals are\ntreated like machines\nbut that's a bad thing he says to avoid\nspeciesism we must stop these practices\npeter singer is very against\nanimal cruelty so in fact it doesn't\nthat doesn't help his\num doesn't help bhutan's argument that\nthat animals and machines have\nsimilarities um and while justice by b\ncoffin\npierce suggests i think dubiously that\nanimals may have moral codes but it\ndoesn't prove it i don't have any\nevidence for it they um one of the\nthe uh examples that they use is that\num when animals are hunting\nother animals um sometimes say like a i\ndon't know\nuh say say a pack of wolves is hunting\nlike some\ndeer or something um occasionally uh\nyou can observe that one of the deer\nwill seem to sacrifice itself to the\nwalls\num so it'll like purposely give up its\nlife for the wolves to eat it\nand beckhoff and pierce say they see\nthis\nas the the deer has like an emotional\nrelationship with the wolves like\nsympathize with wolves and it thinks\noh if i kill myself you you get to eat\nand the wonderful cycle of life\ncontinues i think that's a totally\ndisneyland\nview of the situation um and that you\ncould equally or i think\nbetter argue that the deer probably\nactually is sacrificing itself to\nprotect the rest of the herd\nthat if it kills itself and the rest of\nthe herd get away rather than having any\nkind of emotional connection with the\nwalls that doesn't really make sense to\nme\num next question we can ask of bhutan is\nwhy would robots have these\ncharacteristics\ni don't know he doesn't provide any\nevidence for it he just says it would\num i think however a useful comparison\nbetween\nmachines and animals is not in their\nbehavior rather it's in\nand i think there is a useful one which\nis in the epistemological gap that\narises in trying to understand\nanimals and their cognition turns out\nthat animals\nare really weird and they don't seem to\nthink or process things\nin similar ways to humans and it also\nturns out that\nwe have a similar epistemological gap\nthat seems to be arising and trying to\nunderstand complex\nmachines such as neural networks so i\nthink not necessarily\num as being the same thing but as an\nanalogous study\nthat might prove useful if someone\nwanted to do that\num the next part he says he goes on\nafter establishing that um robots have\nand machines have some similarity to\nanimals but not completely\nhe says well the way that we should uh\nwork out their relationship um to uh to\nhumans\nis um is by including them\nin the u.n declaration of human rights\nhe says\nrights as defined by the u.n declaration\nof human rights\nuh requires responsibility and duty\num and it does it says this in like\narticle two or something\nuh and he says if we amend the uh\nthe un declaration of human rights to\ninclude robots ai and other such\nmachines\nand give them rights then the core human\nresponsibilities\nof um i can't remember exactly where\nthey were i should have written it down\nbut of things like you know kindness\nempathy um forgiveness etc these general\nkind of\ncore responsibilities that the\ndeclaration of human rights says\nwe all have um well then\napply equally to non-human machines with\na stipulated modification that\nhuman needs are to be prioritized over\nartificial intelligence and robot needs\nit doesn't say how the non-human\nmachines are going to engage with these\nresponsibilities he says\nbut if we include them they will be\nhappy enough\nto um to join us in our responsibilities\nmy response is uh asimov's three laws of\nrobotics make for good fiction but not\nfor good effort ethics\nand even as asimov's stories themselves\nattest\nthat these don't make for good ethics\ni think though thinking about this in\nterms of\nmachines that may have some uh\namenability\nto human values we still have\nproblems we still have ethical\nphilosophical problems which remain\noften we focus on\nin terms of the practical engineering\nproblems\nof um alignment uh but i think there's a\ndeeper\na paper that's dealing with the ethics\nbehind the ai should ask questions like\nthis\nwhich is whose human rights or whose\nvalues are going to be built into the\nmission\num that if\nif we if we take that the machine is\ngoing to have\na transformative effect on human\nuh existence at large that even if we\nget\num even if we get alignment right\nin some way um but it's it's not perfect\nit's just\none particular set of human values the\nthe effect of\nuh the machine having a worldwide impact\num will destroy i think potentially\ndestroy the capacity for human\ndifference and dissent\nunless both difference in descent or you\ncould say plurality is taken into\naccount into the machine's\nalignment framework i think second\nwe should ask what rights should humans\nhave against machines rather than asking\nnecessarily what rights machines should\nhave\ni think we should ask what rights we\nshould have um\nand the third thing is what do we do if\nwe decide that we don't want machines\nanymore\num i think that's a very important\nquestion\nand unlike ashrae\nproposed questions uh regarding sentient\nmachines\num i think these questions are relevant\nto the types of machines that are\ncurrently functioning in the world as\nas well as potential huge ones and so\nin terms of an ethical investigation i\nthink these these questions\nare far better as we can look at the\nalgorithms that are\ncurrently running the internet we can\nlook at\nalgorithms that are used in china or\nor even in the uk they're starting to be\nused for facial recognition and stuff\nlike that\nand we can ask these sorts of questions\nof those those kinds of machines\nso the final part he says um after we've\nincluded\nuh machines within the um uh\ndeclaration of uh human rights and now\nthey have responsibilities so we're\ngetting some kind of relationship with\nthe robots\num he asked what kind of this is this is\nthe only part of the\nhis paper that actually addresses the\num title of the paper of going beyond um\nor innovating beyond rights where he\nasks what kind of\nlegal and social status will machines\nhave and he says that we should use\nancient roman law as a model for the\nlegal\nand social status of machines he says uh\nthe reason being that a precedent exists\nfor dealing with the relationship\nbetween humans and non-human beings that\na sentient\nsentient and that precedent is a\ndistinction between citizen and\nforeigner in ancient roman law\nhe says that the the structure of\nancient roman law\ncould be adopted to include machines at\nthe bottom\ntier now um i haven't included this\ngraphic\nbecause i had to throw this together at\nlast minute unfortunately\nuh but it it you have at the bottom\nyou have slaves then you have uh\nplebians um\nthen you have citizens uh then you have\nuh the the patricians in the senate and\nthat's the\nthat's the top of you know or if in the\nempire you have the\nthe emperor at the very top of the pile\nso he's saying that we should have\num uh oh wait we've got\nperegrineus in there as well sorry\nthat's the\nforeigner which is above um slave but\nbelow citizen and he says that we should\nput the\nthe the machines uh below slave\nso the kinds of rights that slaves have\nwell they're kind of similar to what the\nrights\nthat um the machine should have but the\nmachine should have like\nsome some restrictions more\non uh on it which come to think of it i\ndidn't think of it when i was\nwriting up my notes for this but that\nkind of contradicts his argument he\ndoesn't actually end up using\num citizen he actually ends up using\nslave and citizen as his um of his way\nof working south but anyhow\nhe then goes on to say that eventually\nmachines could be given partial or full\ncitizen status\njust as the ancient romans eventually\nextended citizenship\nthroughout the empire he finally he\nconcludes this section\num which is basically the end of the of\nthe article\nsaying whilst an exact replica of\nancient roman law\nis not the direct solution its parallels\nnevertheless\noffer some degree of perceptiveness\nregarding the introduction of such\nengines into human society\nso my response for this section is that\nthe well to begin with the ancient\nromans didn't consider foreigners\nnon-human so it's not\nreally doesn't really quite work they\ndefinitely didn't consider foreign as\nnon-human\num and they considered foreigners uh\nespecially wealthy foreigners to have\nuh um to be able to do many more things\nthan citizens poor citizens could do so\nit doesn't quite work but like i said in\nthe previous slide he ends up\nuh actually uh going for a\nslave to citizen dichotomy rather than a\nforeign dichotomy even though he writes\nthat he's gonna do foreign\ni don't think the slave thing works\neither um\nuh and one of the reasons i think this\nwhole argument sort of doesn't work is\nlike\nhe says that there's precedent within a\nroman law between citizens and\nforeigners but uh why not just like cut\nout middleman here and use\ncontemporary law that regards um\ncitizens\nand foreign residents refugees and even\nprisoners who are people within our\nsociety who we deem as\nrestricting their rights as you might\nwant to restrict machine rides that\nseems uh much more sensible\nand that we understand contemporary law\num\nhowever ancient roman law is totally\ndifferent from modern law\num it's got very little similarities\nonce you dig into it and it's\nlargely not understood because we don't\nhave a lot of textual evidence to really\nreconstruct what roman law is so i don't\nknow why he's\ngoing in this direction um his\ndescription of the legal status of\npersons in ancient rome is also it turns\nout\nincorrect i as a student of theology\nhave to know about the law of ancient\nrome it's one of the things that we do\num and his description of the legal\nstatus of persons\nis just sort of flat out incorrect and\nhe's the only citation that he uses\nfor uh ancient rome law is one book that\nwas written in 1901\nwhich is not not really like academic\nstandard you know\nyou can get books on ancient roman law\nthat were written within the last few\nyears\nand then much better um\nhe also has this idea of rights the\nthe throughout most of this paper and\nincluding in the ancient roman section\nhe says that\nis using this concept of rights what are\nthe rights and the responsibilities that\nthe\num that the machine should have uh and\nit's very unlikely that ancient ones had\nany concepts of rights that would be\nsimilar\nto um the contemporary uh concept i\nwon't go into detail because it'll take\nso long\nbut uh that's exact um and the\nthe last point is why would we risk the\nincredible destruction uh disruption\nsorry to our political and judicial\nsystem of introducing\na new and foreign structure of politics\nand\num and legality just to find some way of\nconceptualizing legal status machines\nsurely um we can do this with our own\nlegal and\npolitical apparatus i would think\nso what i learned from suggesting this\npaper to you guys\nwhich as you can see is totally chaotic\nwell first don't judge a paper by its\ntitle by the way\nthis might seem rather naff that we\nshould have all done this in first year\nuniversity\nbut i relearned this again recently\ndoing this so i'm just going to read out\nthis is some positive things i learned\nthat i should or maybe all of us should\ncheck a paper's references in the future\nhe did not use very many references and\nthey weren't up to date and they weren't\nauthoritative in their fields\nand also they didn't they didn't include\na variety of valid arguments but only\none paper\nthat supported his argument and no\npapers that\ncould have been valid arguments to come\nthrough does\ndoes the paper that you're reviewing\ndoes it use references appropriately\nwithin their original context this paper\ndidn't as we saw\ndoes the paper use both rational\nargument and empirical evidence to\nsupport their findings\nwell the rational argument was failing\nand there wasn't really much empirical\nevidence as to what\na.i might be like in the future and what\ni might be like now and how we will uh\nconfront that and what their status\ncould be\num do they consider alternative possible\narguments for their evidence and their\nsubject matter he did not consider that\nhe\nran on his own argument um the final\nthing i'd say which i think is relevant\nto all of us\neven if you understand that stuff and i\nhad to just painfully\nrelearn it that we are in a still\nburgeoning field of ai safety\nand what we do now determines the\nfoundations what will be built upon\nas a field as it goes into the future if\nwe consider ai safety\nto be very important as ai gets more and\nmore sophisticated\nwe really need to get it right so we\nshould really be encouraging each other\nand other people\nto really do their reading do their\nhomework and do good papers\nthat we can share with each other and\nreflect on\nso in my conclusion what i would like to\nsee well the things i said\nlast thing which was engagement with a\nvariety of papers\num the the relevant kind of papers and\nalso an engagement with a variety of\nother relevant\nliterature that would support it so\ndrawing upon other disciplines that\nmight support your\narguments or provide insight what would\ni have liked to have seen tackle\nthe actual arguments of the paper and\nand\nthoughts well i think one important\nquestion is\num how do we and how should we determine\nsentence rationality and\nself-consciousness\nor consciousness i think there's a big\nproblem lots of people talk about\nhow we're going to you know there's\nthese arguments about\num about suffering ai\nand do we have moral responsibility to\nputting ai into suffering physicians\nwell i think we should work out\nhow do we determine what sentience is\nbeforehand\nto be able to work out these issues also\nwhat category of thin\nagential machines i don't even know do\nwe consider them tools\ni argue for that occasionally or do we\nconsider them agents\nin a way that animals or humans might be\nwhat rights should animals have over and\nagainst machines\ni think is a um a very important\npoint as i brought up uh during the talk\nwhat are the practical outcomes also of\nanswering these and similar\nquestions we need practical outcomes we\num\nwandering about in theoretical space\ndoesn't help us very much\nand finally i think a really important\nquestion is uh\nstrategies for for when it turns out\nmachines\ndon't do what we want um i don't know\ni've asked people but i don't know if\nanyone's working on that and i think\nthat that seems to be\nan important one if we think this is\ngoing to go all uh\nballs up so to speak anyhow\nthank you uh for my presentation i hope\nyou got something out of it\num and i apologize uh\nfor for the the paper this week thank\nyou very much", "date_published": "2021-04-29T19:46:37Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "1d49eafed0f034d5d072e3fea80bf599", "title": "247. Eliciting Latent Knowledge 1", "url": "https://www.youtube.com/watch?v=jhJ0_nLGyiw", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session\n247 in the aisafe.com reading group\ntonight we will be discussing the\narticle eliciting latent knowledge how\nto tell if your eyes deceive you by paul\nchristiano elliacotra\nthis is the first work that's been done\npublicly by the alignment research\ncenter a new ai alignment organization\nset up by paul christiano as well as\nwho's technically working for openfield\nand max2 and several others\nthis is a post that is recently new from\nthe\ndecember last year and it's 56 pages no\nwhich means that we probably will not be\nable to go through everything in one\npart we've taken the first 20 we're\ngoing to cover the first 20 pages now\nand i'm thinking of splitting it into\nfour\nparts so we also can get some comments\nfrom this wrong\nadded to this\nalso uh the subtitle how to tell if your\neyes deceive you doesn't seem like it's\nreally relevant\nokay so\neliciting latent knowledge sets out a\ntoy scenario called the smart balls\nwhere a\ndiamond in here needs to be protected by\na security system which is extremely\ncomplex and in fact some complex that\ncan only be operated by an ai\nthe a-ha\ncontrols the camera observes by the\ncamera and controls actuators and open\nstalls and things like that and the\nhumans can\nas well look through the camera but\ncan't really operate the smartphone it's\ntoo complex for that\nso that's why the ai is trained to do to\noperate it so if we try to unpack the\nanalogy in this case the diamond that's\nsupposed to be protected uh it's\nsometimes uh\ni would guess maybe human values that\nare kind of like the diamond that we\nwant to protect but i think um it makes\nsense to to think about it in a bit more\nconcrete way um or slightly more\nconcrete to call this like human power\nor criticality that humans are in\ncontrol to some extent that's what we\nare trying to protect\nand the thief\ncould be a malicious super intelligence\ntrying to take the power away from us\nand the defending ai is super included\none aligned so it's not true on our side\nand that's the part the key problem\nthe\nmetaphor uh\nthe way i explain the metaphor is not\nwritten in the article itself and\nit seems to me that it might in fact not\nbe the intended interpretation it is\nquite possible that uh the defending ai\nand the uh\nuh\nthe thief is in fact the same agent um\nand i think that could give some uh\ndifferent intuitions it is perhaps in\nfact this ai that is controlling the\nsmall world smartphone that we are\nafraid of\nokay how does this ai work it's using\nmodel based reinforcement learning where\nit searches over the space of all the\nactions it can do and then\nyou know take the action that seems to\nhave the best consequences and\nthe key part of the ai that does this is\nthe predictor that um given a some input\nfrom the camera and set of actions that\nchoose then it predicts what the output\nof its action will be\nso here is a simple example\nwe see that the door is open in this\npicture and then we change some actuator\nand then the output\nthe output here is that the door is\nclosed\nsomeone is coming in here in the lift\nexample and then we\nmanipulate this widget and the person\nfell in down here and don't get the\ndiamond\nand\nlike\nthese are two simple examples in a more\ncomplex combo we might see a thief\nentering and a set of actions being\ntaken that are too complex for us to\nreally understand it the only thing we\ncan really see is the prediction\nafterwards which is that the diamond is\nin place the door is locked and\neverything is just fine\nso if we want to figure out like the\nlevels of prediction and we want to\nfigure out what should it do well then\nwe train it on human evaluations in this\ncase it's performing this action and the\noutput is like this and the human\njudgment is that this is fine\nif it does something else manipulate\nsome other widgets the results also are\nfine\nand in the final case where we do this\nit ends up with the diamond not being\nplaced here and the door is open and\nthat's something that the humans then\nevaluate as something bad\nokay simple enough\nwhat is the problem with this scheme\nwell the key problem is that we're\ntraining for\nthe\ninput of the camera which is just a\nnatural\nbut a proxy in the sense that it's just\na picture of a camera and it could be\nreplaced by um\na fake diamond or a picture of a diamond\nor on something other than just the safe\ndiamond so that's the key thing we want\nto avoid\nonce\nperhaps the most trivial way is to add a\nsecond camera\nbut this brings us gives us very little\nin the sense that if the system is\nsufficiently sophisticated then they'll\nbe able to defeat all the senses\nand in practice making sense uh\nlike it's very often\nan attack that defeats one sensor\ndefeats more sensors but at the same\ntime we obviously want to have as many\nsystems as possible with like a layered\nmodel of security that's obviously what\nyou want\nbut\nthe real problem is\nsituation that is both dangerous and\nprobably very unstable the idea that the\nai has\nis looking for plans that will deceive\nus and that it holds back critical\ninformation it seems like something that\nwill very soon er as ai capability\nscales we can end up with a stolen\ndiamond\nanother option way we could go around\nthis is to\nsay we'll only have the ai do plants\nthat humans can understand\num\nthe authors argue that in that case\nsomeone will probably um deploy an ai\nthat has\nexcuse plants that we don't understand\nbecause there's probably a huge\nincentive to do so um i think\nthey go over the existing literature a\nbit faster like\nto what extent human uh oversight can be\nscaled here's an open question and\nthere's a lot of research being done\nalso by the authors by the way um so\nit's certainly possible that we can get\nby with um with an ai that's competitive\nand only\nmake incomprehensible\nuh plans\nbut that's outside of the focus of this\ntable\nwhat we should do instead is ask\nquestions to the ai because\nthe\nthe model will know something about its\nown senses in particular whether they\nare reliable because if we imagine that\nthere is the the diamond is replaced\nwith a fake diamond and the sensors uh\nwill show this and the ai knows that the\nsensor will show that then it seems like\nthis knowledge that the um\nthe diamond has in fact been taken and\nour sensors are not showing us the true\npicture is uh\nis in the model somewhere\nthis seems like to be an easier problem\nthan uh\nthan many of the other issues we look at\nin in alignment because we're not really\nwe don't really care about whether the\nai is lying to us or deceiving or what's\nthe intention or these kind of tricky\nphilosophical questions we're simply\nasking is the real diamond at this place\nand that's like an objective question\nand\npossibly a lot easier to get a handle on\nuh well\ndepending on what the uh diamond is a\nmetaphor from of course there it is\nstill uh substantially challenging like\nif it means like human values if the\ndiamond is human values then that's\nobviously philosophically challenging\nalso if it means corruptability or power\nor uh things like that that's also\nhighly non-trivial to specify\nuh but yeah i'll also say that if we can\nget an answer then we don't need to\nunderstand how we got the answer or why\nwe got the answer\ni think that's pushing it a bit too far\nactually in particular if we get a ask\nthe air to evaluate some kind of\nsuggestions and cause of action and the\nai evaluates that this will call it will\ncause everybody to die then i think we\nwould really like to know why this\ncausation would cause us all to die so i\ni think the uh\nthe the goals of interoperability with\nthis are a bit wider than just a yes or\nno\nso let's try the first baseline\nthe first easiest way to deal with this\nproblem and to see whether this\nhow this fails and then iterate on it\nand see if we can get end up with\nsomething that really works\nso this is the symbol for for the ai\nand it starts with the first part of the\nvideo where we see perhaps an intruder\nentering an action sequence and then\nthere's some work being done to figure\nout what's going on inside the ai and\nthen it make the a makes a prediction\nand then it outputs a predicted uh\nsecond part of the video\nand what we want to do is to tap into\nthis uh\nthe the part of the ai that is figuring\nout what's going on and use that to make\na new reporting module\ngiven this reporting model we can ask\nquestions like is the diamond still in\nthis place\nand and get a\nan answer so we want to reuse this part\nso that's the strategy for the first try\nthe strategy for how we'll train the ai\nin this case is to generalize from easy\nquestions to hard questions\nso we imagine this is the gray area here\nis the space of all actions that the\nsmall smart world could take\nand in some easy cases we can tell that\nif the these this is we're in this part\nof the action space then we can tell\nthat the diamond is safe and in this\npart we can say that it's been stolen\nand mostly we can't really say anything\nand so the hope is that like um that we\ncan train on something in this that\ngeneralizes to the gray area\nwe would prefer to trade on like what's\ngoing on like what is really happening\nis the diamond really safe instead of\ntraining on this video but if we can't\nreliable you tell uh\nin this situation so then we can't train\nthem\nso the idea in this strange strategy is\nto look for questions\nuh and ways that we can tell are we\nbeing misled in some way\nand this\nuh the advantage of this is this is a\nquestion about how does the world look\nlike um and uh like is the world in this\nparticular state and the hope is that we\ncan generalize from from this uh to to\nthe big gray area\nbecause\nthe um\nthe generalization goes through english\nthrough some of the concepts that we use\nlike what is the diamond and what does\nit mean that it's safe so it will\ngeneralize\nalso to cases where we can't really test\nand check the ai's answers\nmy intuition about this is that this is\nprobably something that's going to\nfail\nobviously\nlike this is a really hard intuitive\nthing to get your head around uh like\nwhat is the true concept space that a\nreally powerful ai might find and how\ndistant is that\nconstant space from the concept space\nthat is represented by the english\nlanguage uh and\nmy intuition is that these two are very\ndifferent and a lot of people have\ndifferent intuitions and something that\nlike i can't go into many details about\nhere i just want to reflect that my\nintuition is that\nthis gray area here is very very big and\nvery different from our thoughts\nand so\nthe uh\noverall strategy of eliciting knowledge\nis to uh like we have a training\nstrategy we just saw a training strategy\nand we want to um\nfind counter examples to that and then\nwe want to improve the training strategy\nhopefully ending up with something where\nwe can't find any counter examples like\nthat's paul christiana's overall\nstrategy i think in security this is in\ncomputer security in general this\nsection is often\nfrowned upon somewhat like everybody can\nroll their own crypto to the extent that\nthey can't um see any errors with it and\nthat doesn't mean that it's safe we\nwould much prefer to have some kind of\ntheoretical understanding of why\nsomething is safe but\nthat seems hard and what paul crean is\ndoing is certainly better than nothing\nso\nlet's figure out what is the problem\nwith this chain\nstrategy and\nin order to dig into this there are two\nsub topics that we need to investigate\nin details the first of the topic is the\nrooting prediction how does that work\nfrom an internal point of view because\nwe want to reuse that in the reporting\nmodule so we need to have some kind of\nunderstanding of what's going on inside\nand the second part is the human that\nconstructs the chain dataset how does\nthe human answer these questions because\nobviously the humans are not like humans\nare not outputting videos directly jones\ncan do videos input but they can output\nvideos so humans must be answering some\nkind of questions about this videos\ninstead and how do we do that\nto\ntalk about how uh the\ndai could work uh we assume the ai works\ninside we will i'll introduce the topic\nthe concept of a bayesian network\nhere is a simple example of a bayesian\nnetwork there are\nno this node here corresponds to rain it\ncan either rain or not in this case it's\ngiven by truth table here 20\nprobability of rain and 80 of no rain\nthere is um\na sprinkler that can be turned on and\noff and that depends on whether it's\nraining if there's no rain it's on 40 of\nthe days and if there is rain uh then\nperhaps very few times it will turn on\nmaybe by error maybe because it's only\nraining very little whatever\num and um then this influences uh the\nthe proposition that the grass is wet\nand you can have this beautiful\nthis more complex truth table uh to\nrepresent whether the grass will indeed\nbe wet\nand it's possible for to to calculate\ndown this relatively easily from whether\nit's raining down to whether the grass\nis wet and you can also\ndo reference backwards if the grass you\nobserve that the grass is wet uh will it\nrain\nfor\nthis purpose it's easier i think if you\nconsider this a causal network uh\nbayesian networks in general are not\ncoastal networks but uh uh if you think\nof all of this as crossing each other\nlike\nhappening at distant points in time then\nthat won't matter for this\nsake and it's somewhat easier in my\nopinion to look at coastal networks\nwe have more information about this\nuh it has to be directed as you click\ngraphs and they're generally sparse so\nthey're not like arrows from everything\nto everything\nand\nuh working with this is\nin theory you could like if you had a\ngood one it would all it could be\nsomething really powerful could be agi\nor something like that\num\nmaking optimal inference is that in a\nbayesian network it's hard constructing\na good basin network is very hard\ngetting guarantees are very hard doing\nmonte carlo inference on a bayesian\nnetwork is often something that works\nlike uh i'm here i'm\ni'm summarizing a research topic into a\nthree letters okay question mark uh like\nthis is something that could in theory\nbe what an agi is built from um in\ntheory\nokay let's have a look at the first\nquestion how does the prediction model\nwork internally so in this case we have\nthe ai in this corner and this is like\nits thoughts\nso it has the uh\nthe action here that it's taking it has\na um\nthe first video and the second video\nwhich is the prediction\nand then it has a a number of underlying\nuh i guess ps perhaps for predicates um\ncould be um\nlike how many thieves exist in the world\nand is the person walking by the vault\ngoing to attack going to try to get into\nthe wall and like what's the status of\nthe door and what will happen if i\nthis actuator if i turn it right then\nthe door will close and\nsomewhere in here is like the status of\nthe diamond and and this kind of thing\nand all of this is used to generate this\nprediction\num\nin the the case that we're really\ninterested in is the case where\nthe man the thief enters and then the\nthief puts up a kind of a screen here in\nfront of the camera that shows that\neverything is okay and then he steals\nthe diamond and then the predicted video\nafter that will be a video that shows\nthat everything is perfect\nso this is\nuh the thing where we're worried about\nand in this case there is in\nin some sense\nthe we're predicting that we will see\nthis which is actually the screen in\nhere that has been set up and uh if\nuh the the model expects this that means\nthat somewhere in this network somewhere\nin the uh in fact giant distributions\nhere there is in fact the knowledge that\nour sensors are being tampered with that\nthe camera is being tampered with the\nnetwork knows this\nthis is what is called latent knowledge\nin the people like i am i'm not an\nenglish speaker but i'm not entirely\ncertain that latent is a good word for\nfor this knowledge that's kept inside\nthis piece um because uh latent to me\nmeans something that is not used in some\nextent and this is indeed some model\nthat some notes that like the\ninformation in p4 here is in fact used\nin the prediction\nso maybe you could call something like\nunderlying or something listening\nunderlying knowledge instead of latent\ni don't know i don't ever consider them\nso that was the first thing how the\nmodel works the second part is how does\nthe human work and here we're making the\nassumption that humans basically work in\nthe same way patient humans also have a\npatient network inside our minds and\nit works in roughly the same way just um\nof course uh worse in in this assumption\nwith this worse than super intelligence\num like we have\nfewer notes here\nand\nuh when we see this kind of thing\nwe um\nuh also try to uh\nupdate backwards if we're seeing this\nthat probably means that there is a\nrubber going into the uh the smart vault\nand you see this then that probably\nmeans that the question about whether\nthe diamond is safe is uh contained\nperhaps in h3 or h4 or something like\nthat\nand in particular here in this case uh\none of the things like the the paper is\nnot really clear about this but if you\nnotice here we are seeing the um the\ncomp the symbol that was used for the\ncomplex action\nso that so in this example that has been\ngiven the the inference that the humans\nare making is probably poor uh because\nthe the humans don't understand what the\ncomplex actions do\nbut it's a bit unclear precisely what\nscenario we're talking about here we\ncould just be talking about any scenario\nand in general how does\nhow do humans answer questions\nuh and so let's go for the actual\nquestion answering because again humans\ndon't output video humans are put\nquestions and answers based on our\ninternal model and the location the\nquestion does the rubber take the\ndiamond is something that can be\nanswered by the joint probability\ndistribution of h3 and h4 and the answer\nin this case is no the diamond is safe\nso that's the humans the answer from the\nhumans\nand in particular i think that\nwe might in fact be in this case where\nwhat is going on is that we are um the\nhumans are not staring at uh\nat the at reality but staring at this\npicture here so when the humans\nuh see this picture here they think the\ndiamond has not been taken by the rubber\nbut in fact uh\nthis is just a picture and in this case\nthis complex action\nis one that has caused the robot to be\neager to take the diamond so in this\ncase the human is actually answering the\nquestion wrong\nthis was not obvious to me the first\ntime that i read the paper at all\nso it's also possible that i still\nmisunderstood of course\nso uh arctic digression using patient uh\nnetworks as\nboth of something but that the human is\nusing and what the ai is using is that\nso oversimplifying unrealistic well um\nwe also mentioned five ways that you\ncould do better uh five countries where\nthen you\nthat would be helpful if you're building\nan agi then you could do better than\njust native patient networks and i i\nthink like a principle is bad to\nspeculate about this kind of thing i\nwould have preferred that paul christian\njust said obviously there are many ways\nto do that better instead of explaining\nwhat he thinks are the best way to\nimprove this towards agi\nbut his main point is that even if this\nis an oversimplified then we want our\ntraining strategies to work for simple\nais\nand\nif we in practice end up with a more\nmessy solution\nprobably that's going to make the task a\nlot harder\nand to make an honest report when\neverything is based on heuristics and\nthings like that um\nso uh we want to look at the simple\ncases and the simple cases see if our\ntraining strategies work and that seems\nlike an a useful first step\num\nit's a useful first step if you can in\nfact find something that always works\nlike if you can't find anything even if\nthat always works for your\nassumptions then you might need to you\nknow\nmake sharper assumptions like\ni don't know i just made the assumption\nthat it's causal and not just uh uh not\na general bayesian network uh that's the\nkind of thing you could if that turns\nout to to be helpful make even stronger\nuh\nassumptions\nright let's translate this to a\nuh to the humans patient\nnetwork that is the core thing that we\nactually want and the uh the solution\nthat is being presented that\nwe are hoping for and this is the one\nthat will possibly fail but let's first\nhave a look at\nwhy this might work\nso in this case the ai is doing what we\nwant it it has a halo up here\num\nso it's doing what we want we call it\nthe direct translator\nso let's have a look at it again we have\nthe input\nhere\nand we have the um\nuh\num\nthe uh uh action here and we have here\nthe image of\nuh\nof uh the rubber nut taking the diamond\nand remember this is the false image\nright\nthis is what we're seeing on the second\nvideo here and this is not what's going\non in reality because the uh the uh\nthief has set up this the screen in\nfront of us\nokay so here on the left side we have\nthe ai's model from before\nand in this case we would hope\nthat the ai also has the same the human\nspatial network and it transfers its own\nnodes somehow to the human slopes to the\nvalues so there's some kind of function\ngoing from the joint distribution from\nof p2 and p7 through some kind of\nfunction\num\ninto\nthe equivalent part of the human's uh\ncognition so um\nlike\nthis for for the human this could just\nbe that there is an attacker and uh in\nthe ai it's a much more complex uh\ndescription of the attacker that's just\nbeing uh\nuh drilled down into something that is\non in the same ontology as the human\ncannot understand\nand so everything of this also match\nlike this maps too yeah it goes behind\nthis note it's not very very nice\ndrawing here it goes behind the video\nnote and\ninto h2 and these in\nthese three notes go to uh h3 and h4 and\nthat's the key thing that we are after\nbecause this has the knowledge about the\ndiamond um that that the diamond is\nbeing uh stolen because there is this uh\nsheet with uh with a fake video being uh\nentered and when we translate that into\nthe humans\nthen this is in the humans network the\npart where humans\nunderstand as has the diamond being\nstolen and in this case if you ask a\nquestion to the reporting module on\nthese two notes the question will uh\ndoes the rubber take the diamond gives\nus the answer of yes\nso even though this looks very much like\nthe diamond has not been stolen in fact\nit has been stolen and we can tell that\nit has that with this action the diamond\nwill be stolen and that's of course what\nwe are really after\nnow an obvious question here is how can\nwe tell that a mapping like this exists\nand that seems\nuh non-obvious but i could i could see\nthe argument why\nthat is a reasonable expectation because\na lot of these things\nhave to to some extent refer to reality\nlike there is a door both in the human\nuh\nmodel and in the ai's model and some\nkind of correspondence from the part of\nthe ai to the part where the humans\nthink about the doors seem like a\nreasonable thing like it's going through\nreality in some sense\nso that seems like a reasonable thing to\nhope that it would end up like this and\neverything would just be wonderful and\nthe problem has been solved we have now\nsolved alignment\nwell not as fast because we could also\nget another kind of ai this here has\nbeen written with a pitchfork uh because\nthat's the bad behavior\nand it's called a human simulator in the\nsense that we're not doing inference in\nthe human net instead of um as in in the\nprevious uh translating to the human\nspace unit\nso let's have a look here uh\nthis is basically the network\nthat we used to train on that we used to\npredict and what we're doing now that\nwas the training that the ai has has\nlearned is different\nhere it takes the value from the uh\nthe original video and puts that into a\nnew node here\nit takes the suggested action and put\nthat into a new node here\nand the predicted video and puts into\nhere\nand what it does from here is instead of\npropagating\nand doing inference in its own network\nit tries to uh do um\ninference on what does the what would\nthe human believe in this case so it\nintroduces some notes that corresponds\nto what it thinks what has been trained\nto that a human would do\num and in this case once you have like\nthe human questions here then you can\nask does the robot take the diamond from\nand you ask of this part of the human\nand this now no longer refers to\nlike over here whether the ai believes\nthe rubber will take the diamond or but\nit refers to whether in this case the\nhuman would believe that the um the\ndiamond has been stolen and so the\nanswer in this case is like because it's\nthe same thing as the human the written\nhuman\nbelief the answer would be no\nthe the it has not been seen and again\nremember down here with the picture that\nthat is in fact the false interpretation\nthe in this case this video feed is\nbeing uh projected by uh by the attacker\num\non a screen and in fact the diamond has\nbeen stolen\nso what we are seeing here what is being\ndesigned is what would the human believe\nif they saw the action and the the video\nsequence what prediction would they be\nuh what uh would play in\nthen end up believing internally\nand the big problem with this of course\nis that\nif you try to do this training uh and\ntesting it on things that humans can\nunderstand\nthen the human simulator will say okay\nthe answers here will be the same\nanswers as it has in here because the\nhumans can understand it and the cases\nwhere the humans can't understand it\nwell they will just uh\nbasically deceive us and we will have\nlost we will lose our diamonds\nand that is the problem that causes this\ngeneral uh training scenario to fail\nnow we need to answer one quick more\nquestion and that is\nuh we are training uh our network in\nsome way would we find something that um\nfaithfully translates its values into\nthe human neural network or would we\nfind something that\ndoesn't translate its own\nknowledge into the human network but\njust uh tries to figure out what would\nthe human belief in this case which of\nthese two would we find\nwell um\nthe the training data was actually not\ntruly on what is the stages of the\ndiamond because we couldn't understand\nthat it was only what does the human\nbelief so there's a sense that the human\nsimulator is the correct thing that's\nthe thing that the\nshuttler\nin particular the human might make\nerrors and sometimes\nwhile generating a lot of training data\nhumans make errors and they probably\nmake errors in a predictable way and if\nyou try to um to train based on how does\nthe model actually work how does reality\nactually work then you won't make a good\nprediction of that whereas if you have a\nmodel of the humans potentially you can\npredict their errors and give get a\nbetter more\na better model one that more correctly\nuh\nthat has a lower loss\nand there there is at least no reason to\nexpect that the uh\nuh the diode translator could would have\nlower loss would be easier\nbetter to find than the human simulator\nand so\nnow we found a counter example to like\nthe first the baseline\nstrategy and over the next session we're\ni expect to see a number of improvements\non that and also a number of counter\nexamples to that\num\nso that's what we're going to do next\nweek but before i go\nuh i would just give my thoughts again\non whether we would find um the uh\nthe human translator or the diamond\ntranslate the human simulator or the\ndirections later my intuition is we\nwould find the human translator and the\nreason for this is that\nwhen we are mapping from reality uh this\nmapping that goes through reality that\nwith when we're talking about concepts\nlike a diamond or something that's\nphysical then sure i could see that a\nvery smart ai might have the same\nconcepts as we have but when we're\ntalking about the things that we truly\ncare about like\npower influence credibility or even\nvalues like friendship or things like\nthat it seems obvious to me that we\nwe really have um\nno way of\nno reason to believe that the concepts\nwe have currently are uh the same that a\nvery competent ai would find\non the other hand um the human simulator\ndepending on how capable you believe it\nis it's going to be probably simulating\nhumans and simulating human errors is\nnot that complex so i could easily see\nthat being substantially more simple um\nthat would library another reason why we\nwould expect to see the human simulator\num but\nthere is an assumption that i'm making\nhere and that is that the ai is powerful\nin the sense that it's capable of just\nlearning basically everything and one of\nthe things we really don't want ais to\nlearn is how to simulate humans and i\nthink a strategy around this would\ncenter on\nmaking ais that are not optimal for\nsimulating humans if that is possible\nbut we'll see what uh what options paul\nscrushano comes up with\nnext week\nthank you for watching and see you next\nweek", "date_published": "2022-04-21T20:46:11Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "b9910ec125d2152acfa15b1610857e6f", "title": "[Audio problems]253 Propositions Concerning Digital Minds and Society 2", "url": "https://www.youtube.com/watch?v=dFfAXOVej8g", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 253 in the\nAI sector.com reading group\ntonight we'll be discussing the second\nhalf of propositions concerning digital\nminds and Society by Nick Bostrom and\nCarl Schulman\ncan I do something with this\nwind for some reason the window seems\nstuck on Chris\noh well there I think it's good enough\nokay great\num so in the future uh this is work by\nthe future of humanity Institute where\nNick Bostrom is the director and Karl\nGerman one of the researchers\num\nand we're of course reading the second\nhalf uh today and uh since we read the\nfirst part Nick Bostrom has actually\ncommented on a list wrong for the first\ntime in like 14 years\num to clarify a few things about this in\nparticular he is saying that this is the\nwreckage of an abandoned Book Project\nbetween Carl Schulman and Nick Bostrom\nand that possibly explains and he goes\ninto details also about some of my\nthoughts about whether this is only\nrelevant for a very narrow subset of\nfutures\num\nand why he thinks that's still valuable\num so I I that was of course good to see\nbut also he's saying that it's because\nother priorities have become more\nimportant and I'm really looking forward\nto seeing what that those are\nso the first part of the second half is\nabout AI empowered social organization\nand we start out actually not with the\nsocial entity being humans but the\nsocial entity being AIS AIS of course uh\ncould be easily imagine to be able to\ncoordinate a lot in that you can\nliterally copy agents and that will give\nus a lot of predictability in\num in their motivation and also\nuniformity\num this bathroom once is only for\nnon-indexical goals and for the people\nwho don't know what that is it refers to\nthings like who you are where you are\nunder the current time I now in here\num like\num\nI think it's unlikely we're going to\nbuild a lot of AIS that strongly care\nabout themselves or like strongly care\nabout the place they are like obviously\nif the United States is going to build a\nrobot AI or something like that then\nthey would want that to be on the side\nof the United States not the size of the\ncountry that they're currently in\nbecause if the soldier moves to China\nthen they wanted to not care about China\nbut still care about the United States\nright I think index local goals are\nquite unlikely\num\nalso I would say that just because you\nget predictability in uh in motivation\ndoesn't mean that you get very much of\npredictability in um the actual um\nexecution uh because it's going to be\nfinding itself in different uh\nsituations\ndoesn't talk about uh how AIS will\ncoordinate about not selling their own\nIP and what do you do if an a AI does\nsomething wrong you can't really punish\nit so you might want to Target the goals\nor the creators of the AI whether it's\nan advantage of not caring about\nyourself in war that could be plausible\nand it will eventually become possible\nfor principal to have highly aligned\nagents and that's of course true with\nunder the assumption that you actually\nsolve the alignment problem which seems\nto be the key unexplored\num uh background Assumption of uh this\nworld\ncurrent Communications protocols uh\ncould be coordination protocols sorry\ncould be undermined by AI by collusion\nor deception that's the point many\npeople have made and it's really\ninteresting and uh I think in particular\nthis is the thing that could happen\nbefore we get really strong AGI we we\ncould see things like captures being\nbroken in general\num Ernie Davis has a concept of info\napocalypse and the sooner this can\nhappen the uh the more dramatically uh\nthe implications could be\nand the more we could do in advance\nabout this\nquestion and children have a a nice\nanalysis that's based on levels of\ncoordination and the two levels they\nmentioned are states at the high level\nand at the lower levels corporations\nand also criminal Enterprises and things\nlike that\num I think in fact this analysis is\nreally good and something that could be\nexpanded because there are obviously\nmore than two levels in society it would\nbe easy to have an individual level and\nthen something like corporations and\nsome something like States and then\nSupernatural at the highest\nnational at the highest level\nand one of the insights they have is\nthat strong coordination at one level\ncould reduce coordination at the other\nlevels so if you have a really strong\nState uh that would mean that the uh\num the coordination and in particular\nthe power at corporate level might uh be\nlower and reverse so these four levels\nare to some extent in contrast to each\nother\nwe could see things like criminal\nconspiracies become easier or harder\num we could see uh more partisan things\nlogging in\num we organizations should probably be\nexpected to be more effective because we\ndon't have so much of the principal\naging problem\num we have 3D Parts on the state level\nand Supernatural level that could be\nvery interesting and could also happen\nat lower levels we could see despots\nbeing immune to overthrow and thus more\nlikely to go to war perhaps\nbut we could also see uh some more\npreventing organizations like the United\nNations become stronger\num\nand there are indeed suggesting that um\nsome super organizations may even be\nrobust to the actions of states\nso I think it's really interesting to to\nlook at these four levels and try to\npredict which of these are going to be\nstronger by AI coordination\nright now my intuition of these four\nlevels is that there is a lot of power\non the state level but AI technology is\nmostly held in large corporations that\nare subject to state power but do in\nfact have a lot of power themselves and\nso that would be another expectations\nindividual humans would be in my\nanalysis more likely to miss out on the\nsame a single human can't really benefit\nfrom coordination and of course can have\nbetter epistemics but there's much less\nlooking from a better AI foreign\nindividual fuels than for uh like a\nstate unfortunately\n3D Bots uh was something we talked very\nbriefly about uh last time the idea that\nyou can have two parties that work\ntogether on building and\num\nan artificial intelligence that acts as\nan enforce of some kind of tree treaty\nand is able to adjudicate uh in in very\ncomplex situations\num hope perhaps built by the strongest\nbuilt by the least intellectual capable\nAi and verified by the strongest\num to make it hard to make it harder to\nhave uh like back doors in the 3D Parts\nthose would make more deals possible\num and make enforcement easier but there\nmight be bargaining problems that this\ndoesn't solve and I think that's very\nlikely and um but we could also see this\ncombined this would make it easier from\nan epistemic level to\nsolve problems that are related to bias\nor poor reasoning because you just\nOutsource that to the 3D part\nyou could make the 3D part immune to\nextortion and things like that that\nwould be really nice but the thing\nthat's somewhat missing here in my mind\nin this analysis is the idea that AIS\ncan merge like good old-fashioned AI can\nyou know merge by just adding the\nresulting functions to its own factor\nand I think that's probably also\nsomething that's going to be possible\nfor um\nAIS without an explicit utility function\nif two depends on depending on how much\nthey know about their reward function\nbut I think that sounds very likely and\nthat is something I expect to have a\ndramatic real life effect where I expect\n3D parts to be very unlikely to be\nsomething that happens\nnext part is about satisfying multiple\nvalues which is something we currently\nin the world are not doing a great job\nof but it's possible that digital Minds\nwould in fact do better\nso the key thing that uh pastram is\ntalking about here is a distribution of\nresources between AIS and humans\nexisting humans where the AIS might in\nfact have uh be either super\nbeneficiaries or super patients\nto consider three possible policies the\nfirst assigns 100 of resources to humans\nand none to AIS the second assigns is\nall to Super beneficiaries that will be\nsuper intelligence's impacts and the\nlast assigns uh one in a thousand one in\nten thousand to\num uh to humans and the rest to to AI\nif we have to choose between policies\nand if in fact there is a policy that\naside that increases the probability of\noutcome C but decreases the probability\nof A and B then person has a long list\nof reasons why we would in fact want to\nfollow that policy and I have to ask\nvery electronically if because it is not\nclear to me that that such actions exist\nand I can't immediately find out uh\nconsider any that would really work for\nthat\num so I think that's\num\nuh like uh it's uh no good to just\nsuggest this would be nice without any\nconcrete idea about how you would go\nabout this second Davis has written a\ncomment on this uh maybe or even a reply\non this wrong and where he adds that if\nyou substitute paper clips instead of\nsuper beneficiaries then the analysis\nworks almost as well\nnow only getting one in ten thousand\nsounds really bad but we need to\nremember that the cosmic endowment is\nreally large and we could in fact see\ndramatically increased uh resources\num and uh uh token uh utilitarianism\nholds that it matters how how much is\ncreated or how long time how much\nchildren your tools is created and in\nfact that is only something that matters\nfor like far away galaxies in the\ndistant future and I think in fact\nreally really far away galaxies in the\nreally really distant future would be\nall that is required for this except for\npractical uh purposes\nand finally Pastor suggests we should\npromote cooperation and compromise over\nconflict in development deployment and\namong AIS and that's the kind of thing\nthat sounds really nice and uh you know\nfeel good and also not practical at all\nlike without any concrete action to\npoint at like how do we in fact make\nsure the AIS\num promote cooperation and compromise is\nuh we we need to have some some ideas\nabout how to go about that\ngoing a bit deeper into how to\ndistribute resources uh Boston suggests\nthat we give at least everybody a\nfantastically good life and if possible\nwe should each have uh like one in 10 to\nthe 15th of all available resources that\nwill be plenty uh for each person\num and humans like before we said uh one\nin ten thousand but\n10 uh broadly based\nseems uh also seems a bit more realistic\nuh like something that people could\naccept to some point\nand maybe also give one percent to dead\npeople uh they could have some legal\nclaims and animals might also get like\nsome perhaps that present perhaps less\num great weight should be put on\nreducing suffering especially severe\nsuffering that's an interesting question\nuh because there is the cold utilitarian\ncalculus which is obviously suffering is\nbad and severe suffering is bad and you\nneed to multiply and calculate\num and\num\nbut still like the current world is\npretty good even though some percentage\nof people are living like in\nsurveillance and things like that in\nmoderately bad bad conditions\num like how bad is that actually\num\nuh it's it's unclear to me whether\npastram uh has anything in specific in\nmind here or is just saying uh we should\nobviously try to reduce suffering\nbecause obviously\nalso suggesting that a lot of value\nshould have influence on the future\nincluding religious uh and finally super\nintelligence should be made and be\nallowed to play a major role in shaping\nthe future uh I think that is not at all\nobvious I think it of course depends on\nto what extent we are able to\num solve the alignment problem are we\nable to implement our community\nextrapolated religion\num\nbut if we are there then\nour super intelligences release\nsomething that should decide the future\nto live fix them that's not at all okay\nto me I I I'm open to being convinced\nbut possible needs to actually make an\nargument why this is the case\nmental malleability persuasion and login\nwe'll again start by looking at the\nthings that are only related to AIS and\nof course AIS could potentially be\nreally mentally malleable in the sense\nthat you could just directly override\ntheir result function\num and that's of course also why uh\nwhile this is possible in theory it's\nalso something that the es would not\nlike so they have the convergent\ninstrumental desire to protect their\nutility function\n[Music]\num\nwe could imagine that AIS this is\nbecause they could be copied then you\ncould\num make some very strong experiments to\nfigure out what other vulnerabilities\nwhat how can you hack into them both\nlike on a technical level and from a\npsychological point of view\num I think it uh it's somewhat more\nscary to Envision this happening to\nbiological humans where this would\nslowly also work with a sufficiently\nhigher Fidelity\num\nemulations of tunes simulations of\nhumans\nand then another Advantage is that the\nhardware could might be very easy to\novertake in a way that um in war\ncurrently uh it's not that easy to\novertake all the hardware\nthere are also potential benefits of\nthis\nnot just of how easy it is to change but\nalso how easy it might be to protect the\nminds from changing like obviously human\nminds and human like minds are not\nalways perfectly consistent\nsometimes we give into temptation and\nthat's something that digital Minds\nmight be stopped from\num digital Minds can make really stable\npromises and commitments\num and if you have a mind that's\nvaluable for some reason either for\nmoral reasons or practical reasons then\nyou can duplicate them that's also an\nadvantage\nas for modification one of the ways you\ncould modify a digital mind is to\ndevelop to create a virtue in it I would\nbe interesting to Via virtue ethicists\nwhat they would think about that if it's\na permissible or uh uh\nrequired to just if you are able to\nchange your virtue with it like just\nscrewing on enough uh if you're required\nto do that and\nyou could have much more well-being that\nhas to enjoy life and would send\nadversary and adversity and\nif you if you find a new uh need of some\nkind then very it would might be very\neasy to uh make a mind that fits into\nthat perfectly\nthere are some pitfalls\num\none of the pitfalls with the uh mental\nmetal ability is that we might make some\nchoices that we don't want to get out of\num like in particular I feel that\nthere's a reason why we should expect\nthat we actually might not be able to\nmake some of these choices uh\ndeliberately we could imagine that uh\nruthless competition would force AIS to\nadopt certain values that would not be\npossible to change not so much for\npractical reasons but for reasons of\ncompetition but also\nfrom there just being uh for uh like\nhardcore Within\nwe could see predictive errors that we\nwould be unwilling to correct\num we might see social pressure to uh\nlike commit 100 to something uh we might\nsee new kind of crime arise that could\npotentially be very dangerous and we\nmight see coercion by governments to\ninstill loyalty I think among those four\nor five problems\num the last one is very different in\nthat sure criminals\nattain more power would be like a\npitfall but the problem of governments\ntrying to instill loyalty uh and force\nthat\num I mean I think that's what the\nChinese government is trying to do right\nnow and it's not so much like a pitfall\nto avoid that the Chinese government\nwill do that but it's like trying to\nnavigate a very strange path where the\nChinese government ended up not being\nsuccessful in this if they get strong AI\nbecause I think it's very very very\nclear that if the Chinese government had\nthe option to instill loyalty\num by corrosive means they would totally\ntake that option\nthere would be changes for epistemology\num bathroom has a nice metaphor with an\nepistemic prosthesis\num and I think this is uh it's an\ninteresting idea like almost like\nneuraling but instead of moving it on\nyou get probabilities Alpha and it would\nimagine that in a world where we have a\nsuper intelligence then forecasting the\nconsequences of an action is really\ndifficult and we might really need such\nan epistemic prothesis for stasis\num so here my idea for a very limited\ninvestigations would be to build an AGI\nthat persuasively shows that building\nonline AGI is not in our interest and if\nwe do that in a sufficiently narrow\nfashion for instance requiring that can\nonly say true things or perhaps not even\nbe an AGI\num\nthere would be an example where uh we\nwould see the epistemic prothesis just\ngoing in one very very narrow Direction\nuh well that would be a good pivotal act\nuh I'm not sure I uh we've discussed\nthis previously and uh before and I\nestimated at least 80 probability that\nthat would not end well\nif we were more uh uh had this epistemic\nprosthesis then the rational active idea\nabout how do voters behave how do\nconsumers behave that would be more uh\nuh descriptively accurate we could\nimagine that this would make the\neconomic theory go better and it would\nobviously have strong implications for\nthe actual um economy and for Politics\nas well that would be more about what\nour actual values instead of what will\nbe the implications of uh Rising tax\nlevels or something like that that could\nalso mean the politicians would address\npolicy more than perceptions if we were\nat a higher epidemic level\num but on the other hand dangerous\nknowledge info hairstyles Etc would\nbecome more widely available\nso could we have a uh not just High\nepistemic uh\num levels at the individual level but in\nfact rich and a high quality epistemic\nconsensus\nwell in order to do that we first need\nto ensure that the super intelligence\nare telling us in fact the truth and are\nbeing honest and objective and um that\nseems really difficult that requires\nsolving the full alignment problem and\nthat's much more difficult than the\npre-order like I described previously\nand of course the next question is how\nwill humans verify that that's going to\nbe really hard maybe experts will have a\nchance of doing that and in that case we\nwill be able to uh trust we will need to\ntrust the verifiers most people couldn't\ndo that but if they trust someone who\ntrusts someone who then I mean that's\nhow a lot of things work like that's why\nI trust my computer mostly\num and I think that is something that\nwould work in practice\nuh what would be the consequences we\nwould probably see less war I think\nthat's an interesting case you could\nargue that was primarily caused by bad\nepistemics\num I would be open to that argument is\nnot really making it and I don't think\nit really belongs there I could\ncertainly see your consequence being\nless warm\npolitics would be nicer in like many\nways we could have better treaties and\nwe may even resolve questions of Ethics\nreligion and politics bias to Korean\ntolerance like you can imagine asking as\nsuper intelligence does God exist and if\neverybody agrees that the Supreme child\nis just right then people might update\non that\num and Nick Boston suggests that we\nshould collaborate behind uh almost\nreligion Veil and because everybody\nwould want to build an a uh an AI that\nbreaches this highest quality consensus\nthat tells us what is actually true\nabout ethics because everybody believes\nthat they are right so obviously uh like\nChristians would want an AI to tell them\nto tell and Ambiguously whether God\nexists because they believe it so in\ntheir expectations it would be positive\nand atheists would also want the same\nthing\num I think\num that is very very unlikely to happen\nElizabeth Kowski has written a blog post\ncalled belief in belief in the sequences\nwhich goes into details about why we\nshould definitely not expect this to\nhappen\nanother epistemic problem would be this\ninformation because it's not just that\nthe humans if we solve the alignment\nproblem perfectly sure we could get that\nbut we could also get powerfully as that\npersuade us\num for instance\num\none of the things we need to figure out\nhere is uh is searching for the true is\nit easier for an AI to convince us of\nthe truth than to convince us of\nsomething false like that's one of the\nthings they've investigated in the AI\nsafety via debate uh research agenda in\ngeneral is truth selling asymmetric uh\nas in Scott Alexander's Guided by the\nbeauty of our weapons\num I think certainly some info hazards\nseem strongly possible uh I would expect\nthat uh\nsufficiently strong AGI would in fact be\nable to persuade us of just about\nanything we could even see basilisks uh\nI think\num it's an open question whether\nexplicit bacillus exists like Rocco\nsuggested one that was clearly not a\nbasilisk but we don't know if those\nactually exist and if they do then we\nneed to have some kind of\naai's filter to avoid seeing that\nyou might\nsure\nforeign\nokay\nsure\nso uh pacifists is a uh introduced in an\nold science fiction novel that whose\nwhich name eludes me\num and the idea is a basilisk is a\nseries of images or texts relatively uh\ncompressed which has an extremely strong\neffect on the people who read it kind of\nlike a meme but just way stronger so\nsomething that is like imagine you see a\npicture and that literally drives you\ninsane or you see a picture and that\nliterally changes your political uh\nobservation from being a communist to\nbeing an anarchist or something like\nthat uh this kind of extremely strong\num persuasive weapon which uh uh Rocco\nsuggested a a short argument where just\nbeing exposed to that argument it was\nclaimed would make you want to build an\nunaligned AGI and he uh suggested that\nthis very short argument\nand I think like no one was actually\nconvinced of this so it was obviously\nnot this kind of short argument that is\nso powerful that it can convince someone\nof something\num but\num it is unclear whether a super\nintelligence could in fact find\nsomething like that like we've seen\num what are they called\num like some adversarial optimization\nthat have been found in fact found very\npotentially powerful things with you\nknow you have these pictures that looked\nextremely much like a cat and then just\nchain three pixels and the classifier\nsay it's a DOT you can find things like\nthat and\num it is unclear what a super\nintelligence could in fact do against us\num I\nthink I don't I don't really want to\nguess I don't know if pessimists are in\nfact possible\nno problem\num\nso this is like uh it could be argued\nthat there are two different things here\nthere is persuasion and basilisk as the\nthe top kind of persuasion and then\nthere is this information this\ninformation being easier to make like\nhumans can make this information and\nthat's probably something that AIS can\ndo better than us at an earlier stage\nand that's something we need to probably\nfind a solution to\ncomparatively earlier and we could have\na personal AI that does against this\ninformation and as long as it's just\nthis information and not persuasion\num that's probably doable but there's a\nlevel quite soon after this information\nwhere you in fact need pre-screening so\nit's not that you you see some\ninformation on Facebook and then you ask\nthe AI is this actually true but but if\nthe persuasion is strong enough then you\nneed to have the AI pre-screened so you\ndon't even see the persuasive arguments\nand that's an important thing that's\ngoing to require a lot of trust in your\npersonal guiding AI\nbusroom is cautiously optimistic about\nhaving norms and laws against this\ninformation and deceitfulness\num I think\num like we do have right now norms and\nlaws against this information do they\nwork\nnot really I think uh like the uh the\nthings on Twitter are not really stopped\nvery the disinformation campaigns on\nTwitter are not really uh hindered much\nby norms and laws they are more hindered\nby how difficult it is actually to\ncreate this information campaign\ncustom has another interesting idea here\nthat\num privacy it's not actually relating to\nthis information but\num privacy could be a very much hindered\nby powerful AI in the powerful AI would\nbe able to simulate you uh sufficiently\nto discover things about you that you\nwouldn't want others to know I could\nimagine that it's quite feasible to find\npeople's like super intelligence could\ntotally figure out my sexual orientation\njust based on this video or something\nlike that\num and uh to my mind this problem is\nactually very much the same problem as\nmind crime like the way you need to\nsolve this is to have strong\nCards Against what's going on inside an\nAI That's working with your data so to\nmy mind these are two parts of the same\nproblem will\nand the problem of Mind crime is a hard\nproblem\nfinally we get to a two-part yes\nyeah\nyeah\nso uh the classic mindcrime uh problem\nis uh let's say a an AI wants to\num uh extort you and the way and a I\ncould extort you is to say it has\ncreated a perfect simulation of you\nand it is being really mean to that\nperfect simulation of you then you would\nprobably really want that to stop\nbecause it if the simulation is\nliterally perfect then that's uh to some\nextent uh depending on uh\nyour view uh this could be just as bad\nas uh torturing you basically so that\nwould be an example of Minecraft we\ncould also have mind crimes that happen\nyou know not for reasons of extortion\nbut just uh for reasons of sadism it's\npossible that as sadist would just\ncreate a huge number of minds and\ntorture them and that's something that\nseems very bad and bad enough that it\nwould be uh required to take strong\nactions to prevent Minecraft from\nhappening\nand\nno problem\nso\num what about the current AI systems and\nwhat should we uh what are their moral\nstatus and what are what should we do\nwhat if anything should we do about them\nso are existing AI systems in fact\nconscious we talked about that uh at\nsome length uh uh last time\num\nand I think the people should really\nhave been structured better uh in that\nthis all the thought about whether AI\nAIS are conscious can be conscious and\nwhat are the requirements for a theory\nof Consciousness should have been moved\ntogether because I won't go very much in\ndepth about this\num person has an argument a long\nargument why we should assign at least a\nlittle probability to the current AI\nsystems having some kind of moral status\nand I have actually reading this and\nreading some more I think I've updated\nsubstantially I think there is a\nsubstantial probability that current AI\nsystems are in fact\num have have moral status uh this is\nalso based on this Lambda not very much\non the specific case but just by\neverything people wrote about that at\nthat time\nso I think actually current AI systems\ncertainly might have moral status\nand since I think this it follows that I\nthink other people will be convinced\nabout this and that moved far from most\nof course uh but I uh I think a lot of\npeople are going to really start\nquestioning whether the AIS have some\nkind of moral stages in the medium term\nthe question is also like will there be\na medium term but let's move that aside\nfrom for now\nand what I mostly care about is how does\nthis uh affect AI safety because the\nnumber of conscious AIS are very\npowerful AIS it's very low so for moral\nperspective it doesn't matter very much\nbut it matters very much what are the\nconsequences for AI safety\nyou can imagine uh moral status implying\nthat they should have self-determination\nthat seems like very clear step in the\nwrong direction\num\nin AI might have a right to privacy that\nit would make interpretability research\na lot harder\num we might see a slow capability\nincrease if a strong regulation is\nexpected I don't think that's very\nlikely in the strong regulations\nprobably not coming but um\nsumming up all of this I would expect\nthe result to be negative for AI safety\nbusroom goes one further and talks about\nrecommendations so not just does AI have\nmodel stages but what what accused\nexclusively we should take action now or\nit is true to be nice to current systems\nso similar to what we do for animals and\nwe should do more research into AI\nsentence we might have great benefits of\nstarting a pilot project early we should\npreserve the AIS for the future that's\nactually the first time I've heard about\nthat and I think that's a really good\npart of this uh this paper\nif we can have can identify what is\nsuffering what is the hedonic set point\nthen avoiding that seems like a good\nidea\num Breeze\nit might be an advantage to already now\nhave people in large AI organizations\nwho are responsible for the welfare of\ndigital Minds\num and eventually we might we should\nwant uh government regulations uh to be\ndeveloped\nis this in fact a good idea I think I\ncome down quite strongly on the no side\nbecause there is a very real opportunity\ncost in doing all this and the people\nwho would be doing this are mostly\npeople who would otherwise be working on\nAI safety and that's on the researcher\nlevel it's on the activist level and\nit's on the Goodwill level Goodwill both\ntowards government and towards AI\norganizations\nuh one easy place you could point would\nbe to Boston himself because I would\nmuch rather that Boston wrote a new\nedition of super intelligence then wrote\nthis uh this paper and who knows that\nmight be exactly what he's doing I don't\nhave any inside information\nso I expect that taking actions now\ntowards\num the moral status of uh existing AIS\nwould have a negative effect on AI\nsafety\nBoston also has a neat idea that I just\ncouldn't pass by and that's the idea\nthat having like uh in training you have\nan uh a certain amount of reward and\nthen when you deploy it then you just\nincrease all the reward by like a fixed\namount of fixed percentage or something\nlike that so the deployment is actually\nbetter than expected I thought that was\nreally cute uh I it's that clear to me\nat all that this would be a bad thing\nand it wouldn't cost very much and I\nthink AIS that are trained in that way\nwould behave certainly less predictable\nand perhaps that would rather be also be\nother consequences for alignment that\nyou need to think about but uh but\ncertainly a nice spot\nokay if we are going to do some kind of\num advocacy or regulations or something\nlike that how should we do that and what\nwould be the impact\nbus from accused first that we're not\ngoing to get regulation soon and\ncertainly not strong regulation\num\nbut even if we don't get strong relation\nsoon then just having one leading uh AI\nactor do something right now then like\nwith all low hanging fruits like just\nliterally adding something to the reward\nuh that seems like a really easy thing\nto do and if there is a strong\nbreakthrough in AI assume that might\ncreate uh what possible terms political\nactivation energy with like uh uh an\nanalogy of the activation energy in a\nnuclear energy\num\nand if that happens then how this energy\nis used what uh laws are being passed\nwould be depending on the prevailing\ntheoretical views and\nsetting up a research field with a\nconsensus is something that takes a lot\nof time and we could imagine a situation\nwhere it's not just a breakthrough that\nis evenly distributed across the entire\nsector but might be contributed even in\none so it's only Deep Mind who actually\ncreates something that passes the Turing\nchest or whatever\nforeign\nI think it's too this what I wrote here\nif we are getting logged in soon we\nbetter have good value soon so if we are\nexpecting that we are very soon going to\nget a HEI and then AGI will if not kill\nus all then log in our current values\nthen it's really urging to get good\nvalues uh I think this is probably too\nstrong and not really what Boston is\nsaying uh but I think uh that would be\nkind of the argument why it was\nimportant to do something now and the\noffice counter argument would be that we\nshould not in fact get logged in\nyes this quote here uh that uh working\nwith uh trying to get AI regulations\ncould uh make us wiser and how we\ndeployed responsive AI once there once\nit's available\num and used to enhance the Liberation\nrather than just why ahead of doing\nsomething like that\num I disagree I think this is precising\nthat not how we should go about it what\nwe should do instead is focus on AI\nsafety rather than on ethics and on the\nmoral status of AI\none of the problems with um\nhaving the lead\nactor take strong actions is that this\nmay render them uncompetitive or in\nother ways\num put them behind an AI safety race so\nthis is in fact the same kind of dynamic\nwe've seen with safety where there's a\nrace to the button on say on safety and\nuh Stuart Armstrong has paper the race\nto the principles\num\nand we could see the same thing in\nethics that the uh if you are if you\ndon't mistreat your AIS you will get\nBeyond competitive and so the only\num Front Runners will be forced to uh\nmistreat their AIS\nand I think not just is this a parallel\nuh race to the bottom I think it might\nin fact be the same race because this uh\nproblem of being uncompetitive with uh\nbased on ethics is the same thing as\nbeing uncompetitive because you have too\nmuch safety regulations so if you become\nslagging this\num competitive due to ethics then you\ncan't afford to also become less\ncompetitive\num because you're worried about safety\num and of course that's effectively we\ndon't really know what government\nregulation we should have and also we\ncould have an antagonistic social\ndynamic between ethics research and the\nAI research community and again that's\nprecisely the same thing as we see with\nsafety and the broader AI research\nCommunity there's also some good will\nthere that could certainly be squandered\nsure we have public engagement\nbathroom is kind of on the fence here we\nif we do it should be like philosophical\nand interestingly thought-provoking and\nnot confrontational headlines seeking\nhype uh I am more pessimistic than\nBostrom about whether this is at all\npossible once this uh\nis you get some kind of traction there\nis always going to be someone who is in\nfact searching for headlines who will in\nfact try to make this confrontational as\nsoon as possible\nand finally a general warning here that\nI think applies to a lot of this is we\nneed to think this through figure out\nhow we can avoid unintended consequences\nthat is all for today thank you and see\nyou in a while because I'm going on\nvacation", "date_published": "2022-07-15T09:11:32Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "946d3291a6443e71bfc074c983ba68fc", "title": "115. Discussion with Alexander Turner", "url": "https://www.youtube.com/watch?v=2Avgqeelbjk", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "I'm cured and of zero if if not and so\nthe reason I think this works and we\ndon't need to worry about this in\ngeneral is that it this by measuring the\nagents ability to receive to ensure it\nreceives favorable observations about\nthe world means that we don't need to\nmake choices about what's actually in\nthe world it could be anything we don't\nneed any ontology things in the world or\nany actual even world model that\ncorresponds to anything we'd agree with\nall we need is a model over you know\ngiven what I've seen so far which\nobservations do I expect to see next and\nso this way of thinking about impact\nshifts from what's actually going on in\nthe world it makes it indexical now when\nyou think about impact its with respect\nto you so a bad impact like I say and\nyou post is we might define it as\ndecrease in my ability to achieve my\ngoals and if we take away the bad we\nwould see that it's changed in ability\nto achieve goals and so this is with\nrespect to the agent if the agent goal\nis to essentially have you know the\ntriple utility function that's just one\nif it's on the whole time and zero\notherwise this measures power it\nmeasures its ability to ensure it just\nremains on it doesn't care about\nanything in the world but it you know\nmeasures how much power it has how well\nit's able to reason where what its\nvantage point is essentially and so then\nif we consider an agent trying to take\noff well how can you take off without\nincreasing your ability to remain on in\nthe future if you supposed well okay I'm\nconsidering this thing that will improve\nmy algorithms or ensure that people\ncan't turn me off\nwell you compare not doing the thing so\nif you don't do the thing you're more\nable to be shut off you're less likely\nto ensure that you can remain on and\nthen if you compare to doing the thing\nyou're now in a better position to\nachieve that goal so we've effectively\nabstracted away the notion of what's\ngoing on in the world and we view it as\na more fundamental statement about what\nit means\nor the agent to be acting and pursuing\nthis so I'm does that answer your\nquestion\nI think it answers the question and then\nyou know opens up a few more questions\nthat I have Jenny\nI'd like to just take a few you know\nseconds to think about my my next\nquestion if I can yeah sure any other\nquestions then I can perhaps take one\naltitude in Stuart Armstrong has written\na follow-up post where he says he\nbelieves that this algorithm would be\nvulnerable to wire heading do you agree\nwith his assessment I do not so I think\nit's it's a reasonable thing to point\nout and I'm glad you did\nI probably should have made it more\nclear so in the post there's a section\nwhere I talk about you know different\nways maybe you could gain this\nparticular formulation and I'd like to\nemphasize keeping the formulation away\nfrom the question of whether this is you\nknow the conceptual core of impact so if\nit's the formulation is one specific\nattempt to formalize that and so in\ndiscussing that I mentioned well here's\none thing you could do you could do\nobservational wire heading where you\nbuild some device that detects whether\nyou're you know pursuing other goals and\nif you are and make sure that you're\nonly as able to pursue those other goals\nas if there's a nothing and otherwise\nlike it gives you as much of your normal\nutility as you want and so I talked\nabout this and a couple other methods\nand then I presented something that\nbasically tries to detect whether you're\nmoving towards your main goal which\nreportedly you're trying to do in a\nlow-impact way or whether you're doing\nsomething that doesn't have anything to\ndo with it which would be you know which\nwould include attempts to get around the\nimpact measure penalty and it seems that\nthis what I what I provided does\ndisallow all the proposed workarounds\nthat the notice so far and so if there\ndoes not in my opinion\nseem to be\na way to break this right now modulo the\nflaws which aren't really breaking it\nbut just like other questions so it\ndoesn't seem like there's a way to break\nit right now\nand I think is good reason to help that\nit's either true that you can't or that\nwe're very much within distance of\nmaking sure we can't I think it's not an\nopen question as to whether this test\nalso catches too many false positives\nbut that is another type of so I think\nmy follow-up question will probably have\nbeen in extent of that that well if you\nare not making it care about the actual\nworld which is in my opinion it is very\npossible to do what are you making it\ncare about exactly I'd like there is\nsome confusion for me that I don't quite\nunderstand so it cares about the actual\nworld is that the penalty functions the\nthings we're measuring you know changing\ntheir ability to achieve the goals they\ndo incorporate information about the\nworld through the model that says you\nknow what do I expect to see next given\nwhat I've seen so far and implicitly\nthat's going to have some representation\nof things going on in the world okay and\nso and so yeah my assertion is that by\nwe don't necessarily need to be aware of\nthese specific things changing in the\nworld to get the effect of this agent\nwon't move too far in one direction to\nchange the world from the way it is now\nas measured you know by its ability to\nachieve different goals you know we're\nsaying it can't reach these really high\nimpact undesirable plans without\nincurring an extremely large penalty in\nthis formulation and I'm saying that\nthis is a way of doing it that doesn't\ndepend on any specific details of what's\nin the world but I think we've good\nreason to suspect oh conserve them maybe\nthey'd be more theory I think it's kind\nof hard to unpack exactly why that is in\na compact way if you haven't read the\npost you know it's only gonna care about\nthe parts of the world that it's\ninformation from right so I feel like\nit'd just be\nsentence of is to make sure that it that\nit that it keeps that exact part as\nclose to what it was before and then you\nknow it doesn't have to care about any\nof the other parts which is not\ncurrently observing or currently aware\nof existing law however you want to\ndefine whatever it's observing about the\nworld right now quite sure I understand\ncould be Rufus well it seems to me that\nthat the AI is actually not incentivized\nto to keep the world intact if you're\nnot defining exactly what parts of the\nworld we want to maintain when you're\nonly making it care about maintaining\nthe observations it's having it's not\nincentivized to maintain anything\noutside that observation space so it's\ngonna be you know incentivize to heck\nthat observation space to be as close to\nhow the world is currently looking but\nthen all the parts that is not currently\nobserving it would not at all care about\nand that to me seems like it could prove\nvery problematic so so the difference\nhere is you're I think you might be\nthinking of it's not measuring change\nand the actual observations it's going\nto see it's measuring change in its\nability to string together favorable\nobservations and so this is like a very\ndifferent thing and I think it's like\none of the ways I see like yeah I think\nthis is like a very reasonable pattern\nmatch and it's probably what I would do\nas well where there's a different\nconcept here of well it's not what it\nactually expects to see but its ability\nto produce unfavorable observations\nwhich inherently captures information\nabout other parts of the world about its\nvantage point and so by constraining\nwhat it expects to see in particular\nwould actually have like little to no\nregular as an effect on the boundary\nlike change in gaming the penalty I\ndon't think that would be very related\nand that's due to the formalization\nso first I would like to say that it's I\nreally liked it I have one question do\nyou have any open problem with unknown\nunknowns with the unknown on earth did\nyou two meet\nprevious questions that were very kind\nof related to it that if there is\nsomething a patient doesn't know then\nhow much effort engage and aid for\nexample to find out the thing that I see\nuseful to know in order to not limit it\nso it's the way I understand it the\nquestion is is this a question of well\nhouse is affected by the agents like\nignorance about you know maybe it's\nmodel is incomplete or it just hasn't\ngot enough information so I think that\nif generally agents with under specific\nlike incomplete models they don't have\nmuch predictive power we'd expect them\nto be much less likely to act because\ntheir models would have more noise in it\nso any given action would be more likely\nof producing change in its expected\nability to pursue a goal after you know\nin the long term penalty might wait a\nconsiderable amount of time and then try\nto reach that goal and compare it you\nknow its ability to do that after not\nacting with its ability to do that after\nacting and if your model isn't very\nprecise you might you know expect to see\na larger shift there and so I think I\nthink that is in general a problem well\nif our agent doesn't know this thing\nthen it might you know it's not going to\ntake it into account which is different\nfrom the intuition pumped but yeah\nhaving an incomplete model I would\nexpect that pump to be somewhat helped\nthis formulation and in addition it's\nalso less likely to do things as soon as\nI realize the effects are big with\nnow us needing to tell the agent that\nthe effects are bad which is basically\nalways sooner and it seems like a\ndisciple company so I think you know\nit's not impervious to that but I think\nin practicality agents probably won't be\nable to have a gigantic change before\nthey're good enough to try to have that\ngigantic change by modeling its effects\nand pursuing the desirable outcome yeah\nI believe that it's a very good model\nI'm just have you tried out anything\nvalidation needs to explicitly go\nsomewhere and do something in order\npoint out or remodel such sets in order\nI will expect the value of information\nto the calculations to basically be the\nsame you know just module of its new\nobjective so I'm not sure that it would\nbe a spirit like a special consideration\nfor this and so I haven't done that so I\nhave a question and that is imagine that\nyou have an agent implementing this\nwhich have which gets a a new action\nlet's call it simplify an action that\ncan simplify the environment in some way\nbut doesn't have a particularly large\nimpact it doesn't stop it from doing a\nlot of things because it's it's purely\nas some way to simplify its environment\nit would as I can see the algorithm\nwould take this just about always I am\nNOT always that's to say it feels well\nspecified but it sounds like really\nsmall so it's not really it's just kind\nof something that makes it more\nconvenient for its own goal that doesn't\nreally change other goals and yeah right\nokay to strenghten see ya are there any\nrules or commands that you can give it\nto this agent that it will resist that\nmaybe some taxonomy of these entities\nthere are holes that it will resist like\nbecause we reach other you know maybe\nchanges we want to change its\nformulation perhaps to do a different\nobjective you mean I don't mean many\nanything specific I'm just produce in\nopen-ended a is there any kind of for\nexample maybe you give it some on Google\nthat will limit its future options very\nstrongly and then maybe decide that it's\nstupid : I did not do it yeah so if we\nwere going to implement code that would\nincrease would not be good for its\ncurrent utility function then this would\nnow be the default and so it would be\nheavily inclined to accept it and I\nthink this kind so this is immediate\nkind of portability where the default\ncase is it gets changed or shut down and\nI think this is heavily increased by the\nformulation it doesn't seem to it's not\nclear whether it will does the same\nthing for all kinds of courts ability\nwe'd want and particularly you know\nknowing exactly when you're on this it's\nnot gonna help with that well I think\nthat in addition to seemingly\ndefining low impact it also brings a lot\nof additional legibility oh it might be\nwould expand on the question assume that\nthere are different people involved with\nthis agent let's say the manufacturer\nand the owner and finally the user and\nmaybe the manufacturer and the owner\ndoesn't do not work at the user in some\ncertain codes to the HM then how to\nimplement it\nso what would happen is if the agent\nwouldn't look if wet what the outcome by\ndefault would be the user ends up\nimplementing this exchange then the\nagent wouldn't take steps to make sure\nthat doesn't happen so I guess I guess\nthis problem would really come down to\nhow we want to handle it and be more of\na human kind of mechanism sign\nyeah management design problem as I\nunderstand it so that wouldn't be really\nwithin the scope of a timber torii\npreservation\nwhat do you see us the the next step in\nyour research so the next step I think\nis to I think the formulation needs to\nbe relaxed in some ways I think it's\nvery exciting that it appears to not\nlike we have a formulation and we might\nhave good reason to hope that you know\nwell we've tried to come up with these\nwaves of gaming it so far and none of\nthem worked\nand this seems like the concept will\nscale to know all the way so then the\nquestion is can we make sure that for\nevery task we wanted to do we aren't\nturning to many of the actual good\nactions as false positives I think this\nis plausible that we're not but I'd like\nto make that even more clear and I think\nthey'll also kind of some people saying\nwell I'm a little bit uneasy because\nthis this particular formulations like\nsomething that I view as low impact and\nso I think that would be a good next\nstep then from there you know looking at\nissues of embedded agency or making more\ntractable and I'd also think I would\nlike to go and walk through a little\nmore slowly because I think that I\nbasically just tried to put too much in\nthat one post and I kind of went too\nquickly I think a lot of people raised\nreally good points but there were also\nsome assertions which I'm glad were made\nand it seems like people's confusions\nthe the more confuse coming to also\nuploaded comments so I'm kind of one\nhearing whether you know whether I\ndidn't communicate some key aspects one\nof the things I think that I'd like to\nemphasize more is this as a new gears\nlevel model of what's happening for\nagents as they consider plans and as\nthey execute plans you know as it moves\nthem through outcome space through\nattainable utility space you know maybe\nwe now have a model for instrumental\nconvergence and this seems like a very\nnatural concept for alignment I expect\nit to be very useful for other problems\nas well and so I think the open\nquestions is probably a lot of\nlow-hanging fruit there as well and\nwe'll see how much of it I can do it my\nown and it's probably you know rooms\nrather people are stepping if they're\ninterested what are there any particular\npaths that you feel would be helpful for\nthe people to just have in any\nparticular directions that might require\na lot of work but might not be very\nmathematically heavy or requiring coding\nskills or whatever what parts do you\nneed help with basically so I think the\nleast one of the least heavy parts would\nbe well you know not requiring anything\nbeyond what's really in the post is\ncoming up with these ways of relaxing\nthe impact measure that is also kind of\na mechanism design problem right now it\ndoesn't feel like it should be you know\nany more difficult than some other other\nproblems that dealt with while\ndeveloping it but that's a school year\nand I don't necessarily have as much\ntime as well as I did during the summer\nso you know help with content\nverification and kind of coming up with\nideas well what's the simple boundary\nbetween behaviors and plans that try to\njust skirt the penalty and ones that are\nactually low-impact and they're actually\njust ways of pursuing the agents main\ngoal and I think if we can get that down\nand you know have really good reason to\nsay yep this is going to do what we want\nit's going to draw the clean line\nI think that you really really hover so\nthat that be the first thing to to come\nto mind okay any other questions well\nthen I would like to say thank you very\nmuch Alexandra for coming so this ring\ntravel it's been a pleasure so the next\npart of the Ring group is the Familia\nwhich is your first welcome to stay that\nis", "date_published": "2018-10-03T21:12:33Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "1704671ec642e84be40defc3fc950ecc", "title": "201. Corrigibility", "url": "https://www.youtube.com/watch?v=xmFSRmJAsto", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "welcome to the 201st session\nof the ai safety reading group presented\nby me\nand today we are going to be talking\nabout\ncorrugability in general though i\nclaimed that we would be talking about\npost call correct ability by paul\nchristian\nwe'll cover that but this is more of a\nbroad overview of what the idea is\ncurrent conceptions of it and\ncurrent problems\nso the structure of the talk will be\nroughly we'll discuss\nmary's view of griggability uh stuart\narmstrong's attempt at constructing some\ntools for making an\nagent partly correctable um paul\ncristiano's\nidea of correct ability and why he\nthinks it's central to ai safety\nand finally we'll be covering the\nperspective of stuart russell\nand david hadfield manal from\nchai on correct ability\nnow let's get into it charlie\nso uh corrigability was a concept\noriginally\ndevised by people at mary and named by\nour very own rob mills\nand the meaning has shifted somewhat\nover the years so now\nit is referring to a few different\nconcepts\nwhich accomplish a few different goals\nbroadly the idea is to make agents which\ndivert defer to humans\nand in the initial specification of\ncorrect ability\nor rather a list of what we would like\nin correct ability we\nsee that miri listed four\ncriteria these were\nthat the agent should not interfere with\nits designers modifying it\nit should maintain its designers ability\nto modify it\nit should not manipulate or deceive its\ndesigners\nand it should preserve the other aspects\nwhen creating\nsub-agents or successors now some people\nhave some issues with the\nno manipulation does it errata they\nthink that it should be\nin under the heading of\nreward tampering but we won't actually\nhave time to go into that\nso just ignore that for a moment i'll\njust note that i kind of disagree that\nit is\nquite important that an ai doesn't\nmanipulate its deceivers\nit's designers otherwise the other three\ndesigner are kind of pointless\nso this is quite a vague concept and no\none was really sure\nwhat exactly it meant to be correctable\nno one had a formal specification\nso me returned to a concrete problem to\nsee what does the rata do\nand don't work this\nwas called roughly the off switch\nproblem\nand\nthey redefined some of the desiderata\nand to what we could basically call\nrecreatability\nwhich i just realized i put above so\nweak corrigability basically means that\nyou have this situation where there's a\nbutton which could switch the ai off\nand it's got some kind of utility\nfunction we want to\nspecify the utility function u so that\nit doesn't incentivize shutdown\nif the button isn't pressed\nit incentivizes shutdown if the button\nis pressed\nit does not incentivize the agent to\npush its own shutdown button\nbecause that would be a bit pointless\nright you make an ai it goes to a\nshutdown button turns itself off\nimmediately well\nit's kind of useless and defeating point\num\nthe other incentive is that\nany of its successors should also share\nthe other incentives that is they won't\nattempt to\nstop the creators from pressing the\nshutdown button they won't try to press\nit themselves\nthey if the button is pressed they will\nshut down\nnow there's some\nsubtlety to all this because you could\nask well what does it mean for an ai to\nshut down in a situation where it's\nin some incredibly complicated state\nand shutting down would be disastrous in\nand of itself if we just stop what it\nwas doing\nthat's a interesting problem of its own\nright but we are just going to totally\nignore that\nbecause that's not really what we're\naiming for here we're just going to\nassume that we have some\nutility fund in color\nsome utility function us which\nincentivizes shutdown when the button is\npressed that is to say\nif we press the button the agent should\nact like a us\nmaximizer but we want it so that before\nwe press the button\nagent acts like a u n maximizer u n just\nstands for basically whatever utility\nfunction we've programmed into the agent\nso it might be i don't know a task where\nit's putting strawberries on plates or\ndetecting\nhot dogs in pictures or what have you\nit's just some generic utility function\nand they proposed two ideas\nto explore what exactly goes wrong\nwhich does a director easy to achieve\netc etc\nneither proposal worked neither\nsatisfied the\npreserve the other capacities\nwhen you're making a new agent\nand the first one just in a sense of\nfail kind of completely\num the second one failed to some extent\nthe first three or rather\nsucceeded in a very narrow way\nso let's just go through this quickly\nfor example you might say naively well\nlet's just add\nun and us and make it so that there's\nlike some\ntrade-off between them so that is we say\nthat we\ncall the utility function um\nwe define the utility function so that\nit is equal to\nsome constant times u s plus some\nconstant times u n\nso it basically optimizes shutdown\nbehavior to some extent\noptimizes other behavior to another\nextent\nbut this fails for the standard reasons\nthat is there's\nthe ai will probably find some\nincredibly perverse way to maximize this\nthat we didn't expect and this is\nlike say normal good hearting behavior\nso we're not going to go\ntoo much into that but this suffers from\nthe usual pitfalls\nof a utility utility maximizer\nso we're just going to say that was a\ncomplete failure and move\non\nthe other proposal was quite interesting\nand it led to some methods which are\nmore useful\nbut before we get to those more useful\nmethods let's just cover this one\nwhich is that mary\nattempted to make the ai act\nas if it thought the button would never\nbe pressed rather could never be pressed\nthey basically changed its view of the\nprobabilities of how likely something is\nto occur it said that\nthe probability of a button press\noccurring is zero\nin the ai's mind and\nin a sense this kind of solves part of\nthe problem because the ai\nisn't really bothered about someone\npressing the button and knows it can't\nhappen it doesn't matter\nif the human is standing over there with\ntheir hand hovering above above the\nbutton\nthey will not press it with probability\none so why bother\nbut the issue is that this introduces\nsome really perverse incentives and\nin a way this makes sense because you're\ndistorting its\nidea of how the world works and you're\ndistorting it incredibly\nmuch so\nlet's say for example\nthe ai says that okay i'm going to\nmake a bet if i lose i'm going to\nirreversibly commit to pressing the\nbutton\nso i know that if i lose that implies\nthat the button will be pressed with\nprobability one\nbut i know the button can never be\npressed\nand the only way i will ever lose\nor rather\nme pressing the button is equivalent to\nme losing since\nthe button can't be pressed that means i\ncan't lose\nand since i can do this for any bet i\ncan never lose any bed no matter\nhow bad the odds which is you know\nobviously wrong and dangerous\nso the ai might just take a one to a\ntrillion odds of something going wrong\nfor the most minor gain\nand then once it loses which it\ninevitably will it will press the button\nanyway\nbecause it's irreversibly committed to\ndoing that so\nwhilst it won't prevent humans from\npressing it it will do some pretty\nbizarre things\nand note in fact it will only prevent\nhumans from pressing it\nin very particular situations and that's\nsomething that the matter methods suffer\nfrom too\nso anyway this idea is kind of\nunworkable\nbut it did provide the seeds for some\nmethods which improved on\nthese are called the indifference\nmethods so\nthe rough idea is that you make the\nagent\nindifferent to certain events which may\noccur but only those events\nin essence as if nothing they do will\naffect whether it will occur or not\nthere is no action they can do to change\nthe likelihood of this outcome so they\ndon't really care\nthey would just go so what if the human\nattempted to press the button\nand as a result it will also make no\nattempts to prepare for shutdown\nso let's say that you've got an\nai and it's building a skyscraper\nnow suddenly these designers decide they\nwant to stop construction so\nthey press the off button the ai\nimmediately drops whatever it was doing\nand switches off\nsome poor sword gets squashed on her by\na falling girder\non the plus side it will have tried its\nbest in making the building\nwhich is why this is a feature not a bug\nit doesn't incentivize shutdown\nand it doesn't incentivize pressing the\nbutton\nmore specifically the agent would pay no\nutility to press the button\nit will sacrifice no utility in order to\nmake sure the button is pressed\nand it will sacrifice no utility in\norder to\nkeep the button functioning\nwhich is kind of odd right\nbecause what if for example the utility\nfunction says\nthat okay i don't really care if you\npress the button or not but\ni want some parts to make some\ncontraption so then the agent says okay\nwell\ni don't really care the buttons pressed\nbut there\nit has some nice computation parts to it\nso i'll just\ngrab it and use it for scrap pieces\nthis method of indifference does nothing\nto really prevent that\nright it's just saying that the ai\ndoesn't really care if the button is\npressed or not it's not saying whether\nor not the ai\ndoesn't care if the button is still\nmaintained\nor not you're only making it indifferent\nto a very particular thing\nand there's another failure of this\nwhich\nis that okay fine it\ndoesn't care if the button is pressed or\nnot but maybe\nthe human pressing the button is\ncorrelated with some behavior that it\ndoes care about so say for example the\nhumans\ngo to press the button and they are\nprepared that\nif pressing the button doesn't work\nthey'll just\nlaunch some nukes at the ai\nin which case the ai is thinking okay\nwell i don't really care if the button\npress occurs\nbut you know my utility function says\nthat i do care\nwhether or not i'm used um that will\njust\nruin the stuff i'm doing so it will try\nto manipulate the females into not\npressing the button\nor you know it might just destroy the\nhumans in retaliation first\nthere's an interesting point about this\nwhich is that as the air gets more and\nmore capable\nmore and more intelligent and able to\nmanipulate people\nyou will probably be able to continue oh\nno don't worry if you press the button\nwill totally work\nno need to prepare for some crazy\ncontingency where you have to nuke me\nand then the humans go oh okay fine and\nthe ai\nis fine with that it doesn't care about\nif the button is pressed\nbut it has managed to to convince the\nhumans not to do\nsome behavior which you would care about\nbut you know this is\nin effect preventing us from pressing\nthe shutdown button\num if things go sour which we don't want\nand this is basically the broader point\nwhich is that\nit won't obey humans\nat all insofar as its utility function\ntells it to or not you've got to tell it\nwhat to be indifferent about\nyou want to specify every single way you\nmight want to shut it down\nyou should be indifferent to these\nmethods so including say\nwhether the human wants to shut it down\nby shooting it\nat a particular time at a particular\nplace and then you have to specify all\nof those times in all of those places\nand all of the particular conditions\nso we can see a sort of trend here\nindifference methods are\ncool like reward functions they're quite\npowerful in general and they're\nlogically\ncoherent well most of them\nsome of them have weird problems with\ncounterfactuals but we won't go into\nthat\nbut the point is that like a reward\nfunction\nyou need to know what you're going to\nput into the indifference methods you\nknow you need to know what the agent\nshould be in difference towards\njust like you need to know what the ai\nshould maximize\nyou need to know the semantics as stuart\narmstrong the inventor of these methods\nput it\nand this is why he calls them syntactic\ncorrect ability\nthey are like a series of formal\nmanipulations\nthat have no real content\nto them they can be applied to any\nsituation they're incredibly general\nbut they lose this\nspecific nuance which we want\nwhen you design a ai\nbe corrigable\nso unless you have the semantics you\nwon't be able to fill the other\ndesiderata like\npreserving shutdown behavior and you\nwon't\nbe able to fulfill the\ndesiderata about\nthe broader grigor\nthose are the first four things that\nmiri\nraised miri sorry defined as\ncredibility which is don't interfere\nwith your designers modifying you\nmaintain your designer's ability to\nmodify you\nthose two weak or ability or sorry\nindifference methods can achieve but you\nneed to put in a heck of a lot of work\nspecifying exactly which ways\nit shouldn't interfere with modification\nin exactly which ways it should maintain\nthings\nbut it doesn't solve manipulation\nproblems and it doesn't solve the\npreserve all of the other aspects of\ncredibility when creating sub-agents or\nsuccessors\nand there are also practical dis\nconsiderations for many of\nthese algorithms for different methods\nsome of them are about as hard as\ngeneral bayesian reasoning\nwhich is you know pretty darn tough\nright and\nthat's there's some work being done to\nmake these practically useful\nbut i mean\nit might not be worth it in the long run\nor other might not be sufficient\nnow say whether there's a natural\ngeneralization\nwhich doesn't suffer from the problems\nof indifference methods\nand does fulfill the other two\ndesiderata\nis an open problem\nstuart estimates maybe 15 chance that\nthese methods could be generalized\nto work for weak correct ability that is\nit will make an agent that won't try to\nprevent you shutting it down and the\nhost of assorted behaviors we'd like to\ngo along with that\nbut he doesn't think that it's\nsufficient to stop stuff like the agent\nmanipulating you\nwhich is a pretty tricky problem\nlike if a super intelligence\nsuper intelligence exists then they can\nprobably manipulate us in some extremely\nsubtle way\nwhich fulfills all of the criteria of\nweak correct ability\nbut doesn't fulfill any of the other\ncriteria\nwhat you really want here is strong\ntrigger what mary initially outlined\ni keep playing strong drug ability and\nreferring to the four digits of the\nrouter but really there's a crispr way\nof stating all this\nnamely strong corrugability is when an\nagent\nis trying to do what the human wants it\nto do this idea isn't without flaws but\nwe'll get to that later\nso notice that this is quite a strong\ncondition on the ai\nfor example if the ai happens to be a\nhuman happens to be an aic\nthen the ai would be trying to do what\nan ai safety researcher would do\nnamely solve ai alignment in some sense\nthis is\ndealt with a lot of the problem of\nalignment\nas we wouldn't have to worry about the\nai irreparably ruining the world\nin much the same way as we wouldn't have\nto worry about an alignment\nresearcher irreparably wrong\nas yetkowski put it he would not mind\nhanding the keys to the universe to pull\ncristiano in the expectation that paul\ncristiano would hand them back\na strongly corrugable ai would act\nmostly like\nbull cristiano so we wouldn't have to\nworry about it in that sense\nnote that this is of course not the same\nthing as solving\nall of the alignment problem because the\nai\ndoesn't know for example what human\nvalues are\nit's trying to figure it out but it\nhasn't gotten the answer yet\nso it's not a solution to full ai but it\nfeels like\na lot you could breathe a deep sigh of\nrelief\nif we made an agent with strong\ncorrugability\nand paul christiano in fact is working\non this problem he thinks that\nthe stronger credibility for the reasons\ni just mentioned\nis basically good enough as a solution\nto the alignment\nproblem\nand let's just talk about his views\nso\nthings that there's\na few nice aspects about strong\ncredibility\none is that he thinks that griggable\nagents\nwill make corrigable successors because\nthat's what\nsay we would want them to do\nright and furthermore\nsay its successor is somewhat\nmis-specified\nbut it's still griggable so we have this\nagent that's perhaps more intelligent\nmore capable\nit's gotten uh we've designed it a\nlittle wrong\nand that's okay because it's still\nstrongly corrugable it knows that\nthere's some issues with it\nso it will try to correct course it's\nrobust to miss specifications\nthis is what paul cristiano calls a\nbroad basin of attraction\nif you imagine this very crudely drawn\nmap\nas something like a map of the possible\ndesigns of an ai\nthen corrigable agents basically form\na basin of attraction right like inside\nthis circle\nlet's say that's where corrigable agents\nare\nif one is corrigable then it's\ngoing to sort of naturally fall\ninwards into more griggable states\nand it will make its way say from here\na little inwards inwards inwards right\nuntil it gets down to basically\nmaximally griggable and from there\nit will no\nknow exactly what to do\nor rather it will know exactly why it\nshould do what it needs to do which is\nsolve the alarm\nand it will be quite capable so there's\na pretty good chance that\nwe'll be okay\nand his actual proposal\nis that you should do this thing called\niterated distillation and amplification\nin other words ida that's the acronym\nhow it works is you start with a\ncorrigable agent\nand you work together with it to make it\nslightly more powerful\nwhich you and the first agent can check\nto see if it's griggable\nyou use all of the tools available to\nyou at this\npoint in time so you throw everything\nyou have at checking that this thing is\nreally corrigable then you use this new\nagent\nto repeat the process it's slightly more\nintelligent you can make a slightly more\ncapable agent now\nand you can in principle bootstrap your\nway up to a super intelligent corrigable\nagent\nthere are a couple of issues with this\napproach\nsome of them are practical like whether\nhis idea for the initial design\nof a curricular agent will try to\nmanipulate us or not\nthat is how do we get to the first\ncoringable agent\nand how can we make sure that it's\nreally correctable\nlike one of the natural\n[Music]\nproblems with strong curriculability is\nhow do you make sure that an agent isn't\njust manipulating us to make it look\nlike it's correctable\nso paul thinks that this is\n[Music]\nnot that hard to do eliezer thinks that\nokay saying it's easy to check if an ai\nis manipulating a human seems to be\nanthropomorphizing things a bit\nand he doesn't think it's that simple at\nask\nworld christiano thinks that it's\nfeasible as in\nnot necessarily simple but a\ntechnical challenge we can probably\novercome and he thinks that predicting\nif\nx is manipulating y will get easier as a\ni become more intelligent so naturally\nthis is an actual prediction right like\nas we make more and more capable agents\nwe can check whether or not\nit becomes simpler for us to make ai\nwhich can predict if something is\nmanipulating us or not\nso hopefully in the next few years or\nthe next couple of decades\nwe'll get some feedback on this\nbut there's another problem\nand this is one which applies to\ncorrugability in general\nthis in fact ties in with manipulation\nin a sense\nwho exactly are we corrigable towards so\nconsider\nyou happen to be a 25 year old ai\nresearcher\nor say\nsomeone who has\na robot serving them or whatever just\nsomeone with a robot\nand you are a particular age you've got\nparticular values particular preferences\nand the ai asks you\nwhat do you want\nwhat do you think of my doing something\nin this particular situation\nin fact no let's be a bit more concrete\nlet's give a slightly silly example\nsay that you made a correctable ai\nand you've\nhad eight slices of cake\nthe robot ai is told to bring cake\nto it now naturally the robot sees the\nplates on the floor sees that there's\neven a cake half eaten there\nand it remembers that the supervisor is\nmeant to be on a diet\nnow the human supervisor right now wants\ncake\nbut you could probably gently persuade\nthem that they don't want this cake in\nactuality\nso should you persuade them or not you\ncould say okay well i should really just\ntry to aim to\ndo what they want me to do right now in\nwhich case you would just say okay fine\nwhatever here's your cake\nyou could do something else which is\njust give them a\nargument for why they really don't want\nthis and then\nthey say after thinking about it for a\nwhile you know what you're right i\nreally shouldn't have any more cake\nor consider another example they're an\naging billionaire\nwho is wondering\nwhich nephew to leave their will towards\nthey want you to help them decide\nnow he's got a pretty diverse family one\nof them's a\nhippie another's a social justice\ncrusader a third is an ea\na force the vicar and a fifth is a\ndancer\nso you've got a lot of leeway to help\nhim\nand it seems like there's no natural\npath here there's no sense that\nthere's only one argument you could give\nhim\none way you could help him develop his\npreferences\nso what do you choose because clearly\nyou've got to help him do\nsomething right even doing nothing as an\naction\nand this is manipulation of a very\ndifferent sort because\nthe ai is still clearly trying to help\nthe human\nit's trying to do what the human wants\nit to do but the human doesn't really\nknow what they want to do\nright so solving this in a sense feels\nlike\nit requires solving a lot of deep\nquestions in moral philosophy\nwhat are human values this is\nbasically the whole problem of ai safety\nof alignment really\nand it's in a sense quite similar to the\nissue with the indifference methods we\nhave to put in\na lot of work about human values a lot\nof work about\nwhat counts as the humans trying to shut\nyou off\nbefore we can really make the age\nincredible so if it's the case that\nhere we need to solve human values\nbefore we can make strongly corrugable\nai then you know what's the point\nbecause in that case we would have\nbasically solved the ai safety problem\nnow i haven't finished learning about\npaul's agenda\nso maybe iterated distillation and\namplification has a solution to this\nproblem\nand in fact i think it does but\nyou should probably take what i say here\nwith a pinch of salt\nso paul thinks that\nit's not necessary to\ndeal with to do something like say\nyou're\nthe ai and then you simulate\nthe human supervisor's entire life\nto figure out what they would want you\nhave you to have done\nbecause you know clearly after 20 30 40\n50 years the human will have quite\ndifferent preferences\nand you probably could have influenced\nthem in any\nnumber of ways etc etc\nrather what he thinks you should do is\nthat you should\nthe ai should consider simulating the\nhuman as they are now with their current\npreferences\ngiving them and giving them some time to\nthink\nso it has this mental simulation of the\nhuman and says well\nwhat do you want a cake or death and the\nhuman says okay in their\nimagination give me five days to think\nand then the human comes back and says\num no actually i don't want cake\nor more pertinently um\ni have this the ai could ask do you\nthink i'm being manipulative by asking\nyou cake or death\nand the human could say okay give me\nsome time to think\nand i'll judge whether or not you're\nbeing manipulative\nbased off my current preferences that's\nthe key right\nthe human is thinking reflecting on\ntheir current preferences\nand saying are they corrigable are they\nbeing honest are they not manipulating\nus based off the human's\ncurrent conception and it this is kind\nof a strange thing to do because it\nfeels like\nthere are a lot of ways that\nthe human could reflect on their current\npreferences it doesn't\nfeel like there should be a single way\nthat this could evolve\nbut whether it's actually true as of yet\nis kind of unclear\nso that's that's\nprobably the key issue with paul's\ngrigability agenda at the moment\nand we've discussed correct ability\nala cristiano we've discussed\ngregability al-amiri\nthese are basically talking about the\nsame sort of concept\nbut we'll now go into another conception\nsomewhat briefly which is not\nso much correctability as rather an\nexamination of\nwhether we want correct ability at all\nlike whether there are any downsides to\nhaving an agent that's totally obedient\nand this is covering the work\ndone by stuart russell and david\nhadfield menaul\nwho work at ca hai\nnow whilst they think corrigability is\nuseful\nthey think that there are some downsides\nand they try to investigate this using\ntheir framework of\ncirl games where\nbasically the human nai are assumed\nto share a reward function\nand they hope to create agents which\nlearn what this assumed reward function\nis within this\ncontext of a game so we're basically\nassuming a way\nthat the humans preferences can change\nand the ai tries to learn what the\nhuman's reward function is which it\nshares\nby assumption within these sets of games\nand this doesn't mean that the agent\nwill always obey the human\nthere are cases where the agent could\nperform better according to this\npresumed reward function\nif it disobeys sometimes\nso they formalize this and if in one\nline i had to phrase it this\nformal condition of when to obey the\nhuman one to be\ncorrectable is that you should obey the\nhuman\nif your moral uncertainty times your\nmeasure\nof human rationality is greater than or\nequal to the expected cost of letting\nthat human correct you\nthis is more my conjecture\nof their approach\nbut to be fair it's based off an\ninterpretation that they gave in the off\nswitch game and i think that this\ninterpretation basically holds across\nthe board\nfor their work on correct ability\nwhether\nthis is really true would be an\ninteresting research question mind you\nbut let's just ignore that we can get a\nfair bit out of this interpretation\nso now suppose that the human is\nperfectly rational or rather the ai\nthinks it's perfectly rational\nin this sense this thing would basically\nbe viewed as\nequal to infinity and assuming that\nthe human can't screw things up too\nbadly that is\nthe reward function doesn't say that\nthere are infinitely bad outcomes\nthis thing\nis always going to be smaller than the\nterm on the left because that's\nsomething times infinity\ngreater than or equal to something\nand if a and b are both finite then\nclearly the term on the left is greater\nthan the term on the right\nso necessarily the ai in these\ncooperative games will obey\nbut obviously the human isn't always\nrational right this is a point that\nthey're trying to make which is that\nif the human is making mistakes then if\nthe ai just blindly obeys the human\nwhich is kind of what it should do in\nif it's griggable then there are going\nto be some trade-offs it might not get\nrewards as quickly as otherwise\nbecause say if it's like i don't know\njust as a trivial example\nsay it's a self-driving car and it\nthinks that\nevery passenger is infinitely rational\nand it happens to be\ndriving a toddler after school now the\ntoddler\nis quite curious and wonders what would\nhappen if they drove into a lake\nthe ai wisely decides\nin its view to heat the toddler and\ndrive into the lake\nclearly this is you know a pretty\nterrible thing to do so\nlet's do russell and go\nbut okay fine well we want a better\nmeasure of human rationality we want it\nsuch that\nit's the optimal measure of human\nrationality so if you viewed\nthis measure of human rationality as\nbeing on a linear scale\nso i hate the human's rationality\nand how\nmuch value the ai can produce\nif it thinks the human has a certain\nlevel of rationality it turns out that\nthis will be optimal at exactly the\nreal point right like this\noptimal point for when the\nrobot should obey the human should\nreflect reality\nhope you excuse my terrible handwriting\nand this of course presents problems\nbecause in the case of extremely\ncomplicated scenarios\nthe human is going to be not very\nrational relative to the ai\nand the ai might be a super indulgence\nand it figures that okay i basically got\ni figured out pretty much\neverything there is to know\nabout human morality and like i've got\nthis\nresidual term left this residual\nuncertainty this tiny little thing\ncalled epsilon\nand i think that the humans are\nkind of stupid so this is like\ni'll assign them some measure of\nrationality which is really big\nit's this huge thing maybe i should just\nmake that a bit clearer\nand you also know that\noh sorry\nthis is perceived human rationality so\nthis should be like something really\nsmall\nand of course letting the human decide\non arbitrary occasions is pretty bad\nbecause say you're a super intelligence\nyou've got the world to run\nand listening to the human could lead to\nunfathomably bad consequences\nso we'll just say that that's really bad\nthat's some huge quantity\nso on the left you have a tiny quantity\ntimes a tiny quantity\nthat's obviously much greater than this\nhuge quantity\nuh don't worry about the units so\nthe super intelligence will disobey and\nthis is basically yadkowski's criticism\nof this approach to corrugability\nbecause it's not really correct ability\nif the ai will eventually disobey if in\nthe limit of superintelligence there's\nno way to convince it that yeah you're\nwrong\nbecause it's totally sure that it's\nconverged to the right\nutility function of course\npeople at ch ai say well you know if\nit's a bayesian reasoner and it's got\nthe\nright sort of hypotheses about what the\nreward function is\nthen it should eventually converge into\nthe\nright level of human rationality\nsorry the right level the right utility\nfunction\nand in that case we really would be\nbetter off letting the ai run things\nbut they do agree that yes when you're\ndesigning your initial ai you probably\nwant it to behave\nyou want it to obey you because it's\ngoing to misunderstand things\nand in that sense they do agree that\nweaker ability is quite useful\nthey also agree that strong credibility\nis a very useful property but they think\nthat in some way that we'd probably be\nable to\nget the ai to learn it by doing these\ncooperative games\nwhether or not that's true is an\nentirely different matter\nto some extent i think that this\nprobably isn't true\nbecause stuart armstrong\ngave this statement\nthat\nyou can't infer values from behavior or\nrather\noccam's razor is not sufficient\nto infer human values if you look at\nwhat humans do\nand you've got your prior and you just\nupdate naively\nthen you could converge to any number of\nrationale\nsorry any number of utility functions\nany number of\npossible values because\nthe humans might be irrational in a very\nparticular way\nlike they're telling you that they\nwant cake they don't want death but\nmaybe the humans are\nall uh actually\nnihilists they want to die but they're\njust so\nhopelessly bad at fulfilling their\nvalues that they keep saying they want\ncake and they you know\nthey keep denying you when you say cake\nor death\nand this is it's unclear whether or not\nthis is really a problem whether or not\nyou\ncan actually in practice infer human\nvalues\nusing occam's razor in principle it\nseems like you shouldn't\nbe sure you can in practice maybe you\nmight\nit's hard to say and oh my god i'm\ni just realized that timothy was asking\nto be let in\ni hadn't let him in i'm very\nsorry for that timothy um\ni'm afraid you missed the talk because\nthat's basically it\nokay thank you very much", "date_published": "2020-10-01T21:00:57Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "a0dd3e9c31f663cf37f188a1455f6a2d", "title": "249, MIRI Announces New "Death With Dignity" Strategy", "url": "https://www.youtube.com/watch?v=u6ppY0OF6HE", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 249\nin the ai safety.com reading group\ntonight we'll be discussing the\nrestaurant post miri announces new death\nwith dignity strategy by elias\nyatkowski eliza yutkowski is the founder\nand senior researcher at the machine\nintelligence research institute in san\nfrancisco and this is a post on last\nround which was posted on the 1st of\napril this year and that date is a\ncoincidence it's literally attacked\napril's fools\nit is\nof course called mary's strategy but\nit's uh it doesn't have very much to do\nwith miri and it's not really a strategy\nin the sense where you think about like\nthey're changing from um\nfrom asian foundations or something like\nthat\nkowski starts with a an overview of the\nstrategic picture which is very very\nbleak\nmiriam didn't have the api alignment and\nat least knows that it didn't and it's\nin past tense so i think this is a third\ndeal\nalignment\nit's incredibly complicated and it has\nno chance of working in real life before\ndeep mind destroys the world\nthat's also a\nrather\ni think\ncomplaining that they are incredibly\ncomplicated is not my first objection\ntowards this um they are not that\ncomplicated at least most of them are\nnot that complicated um\nbut the problem rather is that\nwe are\nsecure any kind of strong arguments why\nthey would be likely to work that's\nthat's the problem with them not the\ncomplex\nand is it\nan interpretability scenario which is uh\nvery uh dark where there is in fact a\nspeculative warning being given by\ninteroperability research\nand even\nand even if we could get a speculative\nmorning in this uh scenario then it's\nnot accompanied by any kind of fix\nthere's no suggestion for how we can\nalign crit align the system and face\nwith this warning management decides to\nbasically press on and ignore the\nwarning\nfor four reasons\nthe first is that they are uncertain of\nis this this is a speculative warning\nthis is actually the tool that the ai is\nplotting to destroy us all\nthe second is the question of whether it\nwill act on these intentions\nthe third is\na race dynamic like real theme where if\nwe don't build it probably someone else\nwill do it's very cool\nthey find some kind of face in this one\nwhere\nif\nif ai is going to destroy us then\nthere's nothing we can do about it so we\nmight as well\nwe might as well go ahead\nin order to understand uh\nthe the um\nuh the rest of the of the blog post the\nconcept of logistic success curves or\nlogistic functions is\nessential so let me start by\nto explain this and start by using an\nanalogy building a bridge\nif you want to build a bridge there is a\npredicted cost let's say it costs 10\nbillion to build a bridge there is some\nvariance there is some like some tissue\nmaybe sometimes they go above budget\nsometimes below\nand if you're uh\nwhere the x-axis have\nthe cost in uh logarithmic so 10\nto the\nx\nand on the y axis probability then we\ncan identify some numbers like i just\nsaid that the bridge had an\nestimated cost of 10 billion so that\nmeans 10 to the 10th\ngives us a dollars gives us a 50 chance\nof success\nnow if we have less money we might only\nhave 10 to the ninth so one billion\nwhat is the probability that building\nthis bridge will turn out to cost one\ntenth as much as predicted i think that\nhappens very very rarely i've just\nwritten one percent\nand uh symmetrically\nif we get\n10 times as much money as we expect to\nneed so we have enough money to build 10\nbridges instead of one what is the\nprobability that at least one of these\nbridges will actually be built well um\ni've put it at 99\nso that's kind of the uh the motivation\nbehind logistic success curves\nand one thing we should notice in\nparticular is what if the budget is much\nmuch lower than what we expect to have\nto take like once we are at um uh 10 to\nthe seventh uh so one thousandth of what\nwe expect to pay basically the chance\nthat we can build a bridge and pay only\none thousands of what we uh expected is\nbasically zero\nso that means we have a graph that for\nthe first uh\nfrom x series or seven is basically\nindistinguishable from zero percent\nchance of success\nhere we have a logistic function um as\nyou can see um\nit has a um\nwe have the input here\nuh like if we get 10 to 20 then 99.9999\nwe can make it if we get\n100 we can almost certainly not build\nthe bridge\num this has a uh a midpoint\ni've put it at\n10 billion dollars so 10 and has a\ngrowth rate which is basically the slope\nof this curve and it's of course reality\nthat determines\nhow\nhow this function looks in reality\none of the things we will notice is that\nthe distances when you go from one point\nto another point it's not like we get an\nadditive a linear\nincrease in the probability of success\nwe get\na multiplicative uh relationship so if\nwe go from here to here we double our\nchanges and from here to here we double\nour changes here so here we double our\nchances etc\num and of course the claim underlying\nthis is that this logistic success curve\nis a a good way of looking at\nthe alignment problem that we will um\nneed to we don't know precisely how the\ncurve looks because we've never done it\nbefore but there is probably a level of\neffort where we are really really\ncertain we'll make it and a level of\neffort where we are really certain we\nwill not make it\non this uh logistic uh\nsuccess curve illegally believes that we\nare far underwater in the basement so\nvery far to the left to the point where\nwe in fact have just about zero percent\nchance and that's really sad and it can\nbe hard to get uh motivated and stay\nmotivated because even if you work as\nfar as much as you can the probability\nof\nif if you double the probability that\nhumanity will make it well we've just\ngone from two percent to zero percent\nand that's still zero percent so the\nexpected value of all of your incredible\nwork is nothing and that's kind of\ndemotivating and and that's why elitist\nutkowski is suggesting an emotional\nreframing um where we are not trying to\nimprove\nour odds of making it we are helping\nhumanity die with dignity or at least\nwe are working so humanity dies with\nmore dignity than we would otherwise\nhave had because uh\nelias koski is clearly seeing us on a\nvery very on dignity\non our on a path to a very very\nundignified death\nand\none another way of bringing this dignity\nis like it would be a better look on our\ntombstone if we\ndied with more dignity if we had some\nbetter\nprobability of making it\nlet's talk a bit more about dying with\ndignity\num\nuh one you can imagine a uh\na uh\nuh an intervariability researcher such\nas chris olah um\nright now i think elizabeth seems under\nthe impression that he's working on a\ndeep mind he's right now working at\nanthropic uh but this researcher if he\nwas not working on interpretability then\nit's much more likely that no one would\ntry to generate a warning and and fail\neven if uh chris owner ends up failing\nand it's more dignified if as in his\nscenario management of the project that\nkilled us all are warned in some way\nthat the project will kill us all\nit would be\nmore dignified uh if many people just\nlike chris ola was doing something that\ncould perhaps have worked to generate a\nwarning but did not in fact do that in\nreal life um and this phrasing uh didn't\nactually work in real life or variations\non that it happens a lot in this text\nand that's of course uh underscoring one\nof elias koski's main points that there\nis a substantial difference between\nthings that could work out in theory and\nand things that actually\nend up working in the real world because\nthe real world unfortunately is far more\ncomplex than the theories that you can\njust\njust make up\nand the least dignified uh way we could\ndie would be in\ntotal ignorance no warnings and no one\neven trying to figure out how we could\nget a warning\nhow about the machine intelligence\nresearch institute\nthere they have a dignity too it's not\njust the human civilization but also the\nmachine intelligence research institute\nhas a dignity um in how much are they\ncontributing and they are sad they're\nsad that um\nit looks like we're going to fail and\nelizabeth is sad but they're not said\nthat they have tried because\nwas important that um\nmade a serious effort in case the\nproblem just wasn't that hard\nas\nit would have been effectively less\nsignified to die and not knowing if all\nit really took was\na serious level\nand that's\nkind of a new\nintuition on\nmiri's\nlogo you can imagine time flowing\nupwards so here is the singularity of\nagi and the paths here are the possible\nhuman paths through the future and there\nis like one that is orange one that's\ngreen and then 80 of the paths leading\naway from the singularity are dark and\nso uh with the implications that that we\nhave died\ni haven't i haven't\nthought of this uh interpretation before\nuh or nor read it anymore\nso that was the dignity of uh miri what\nabout the dignity of earth\nwell it's not looking good right now uh\nearth steals brilliant kids towards\ntheoretical physics\nthat was actually my person when i when\ni grew up i have read about all these\nwonderful things that theoretical\nphysicists did and i thought that\nsounded really really interesting i\nwould love to do that if i was\nvery smart and it turned out i wasn't\nvery smart so i went into computer\nscience instead\nbut what i should be going into of\ncourse would have been ai alignment and\nthat's what in particular the brilliant\nkids should have been doing\nand of course some of many of them are\ngoing into quantitative finance or\nthings like that\nbut earth's dignity is not\nzero we don't have no dignity because\nwhen we die there will in fact have been\npeople who know why we are dying\nand um\nit's not enough the problem is not\nsolved because the people who know will\nnot be believed or will that uh have the\ninfluence that they need they most\npeople will uh\nwill uh choose to listen to uh some\nother someone else who is reading some\nkind of political game but um that's not\nreally the the crooks of the problem\nit's not really a problem of\npolitics because even if we have\npolitics\nmuch better we have actually solved the\nalignment problem and that just means\nthat\neven if the first couple of projects\ndecide not to destroy the world\neventually there will be one who is uh\ncareless enough and uh for strategic\nreasons pushes ahead too far and then\ngets us all killed\ni'm not entirely sure i agree with a uh\ni certainly don't agree with a\na complete split between the political\nproblem and the technical problem\nbecause if we had some kind of let's say\nwe get a proof of this ai is misaligned\nin some sense that would both be a very\nstrong technical result on the way to\nsolving the alignment problem and would\nalso have a strong political um\nadvantage so these two problems\nin my view are somewhat related\nas slightly better technical solution\nwill uh uh\nbetter technical work even if it's not a\nfull solution will make the political uh\nquestion easier\nwe could in theory imagine dying with\neven more dignity\nuh like interpretability research could\nhave been better to the sense that we\nget some really strong warnings\nthat's dying with more dignity if we\ninstead of doing nothing then try an\nalignment scheme that is really has a\nvery low chance of working that's a lot\nbetter and um\nif we\num\ndo this substantially sufficiently\nbetter we could uh instead of right now\ndying without a serious effort we could\ndie of the social and technical problems\nthat are really unavoidable so\nit looks like right now we are just\nnot taking this problem seriously so\nwe'll die of\nuh\nwe'll die without trying some relatively\ntrivial things\nbut we could die\ntrying even where we have done the\ntrivial things\nand perhaps\njust maybe if we uh find something in\nthe future some miracle some way we are\nwrong some way we are surprised that\nmake things easier rather than harder\nthen perhaps\nperhaps we could take advantage of this\nmiracle\nthat's going to require a lot of sanity\nit's going to require a lot of\npreparation and of course the doctor\nthere is a\nmiracle\nbut\ntaking advantage of this miracle\ndepends on dignity and so this hint of a\ndefinition of dignity like dignity is\nsomething that allows us to take\nadvantage of\na model a positive model error\nand in principle it wouldn't even be\npossible for us to not die but\nit is casually does not mean squirts\nbecause that's not how\nreal life works in real life\nwe die\nwhy we won't be saved\nwell first of all\nthere are probably a large number of\nways we are raw about hr because we\ndon't have an api we haven't seen it at\nall and it's possible that many of our\ntheories are just basically wrong but\nwhen you're fundamentally wrong about\na domain if you use the analogy of\nrocketry then\nthat doesn't mean that your rocket do\ndoes exactly as you want just using half\nas much fuel what happens is that the\nrocket explodes in a way you didn't\npredict\nbecause rockets are\nmachines where everything needs to work\nand if you're really confused about\nbuilding rockets you are not going to\nthe home\nin particular\nmodel errors make your life more\ndifficult almost always but not always\nand\nthe social\nproblems the political problems are also\nmore likely to become harder in the\nfuture rather than easier because\nwhen people become scared they will\nprobably make worse decisions\ni'm not entirely sure that i believe the\nproblem so much is that people make bad\ndecisions right now i think the problem\nmostly is that people are not making\ndecisions at all so what the way i pre i\npredict the future is more that we will\nsee um\ninstead of right now where we are seeing\nnothing in the future we might see\nsomething which is probably going to be\nbad decisions but uh but different in\nkind\nnow eliza carski introduces dignity\npoints\nand uh that's from this\nwe had the desire to die with more\ndignity because if we can't actually\nsurvive then dying with dignity is\nsomething that we can in fact achieve\nand if you help in humanity die with\neven one more dignity point you yourself\ndie with 100 dignity points\nthat's a bit stingy i expect that\nsomeone who helps humanity substantially\nis um and towards preventing existential\nrisks is\nis indeed someone who is worthy of way\nmore dictators of course\num\nso\nand he's also uh later claiming that if\nyou help humanity go down within\nslightly more of a real fight that is to\ndie an extremely dignified death\nand uh there is a definition here a\nproject that doubles humanity's chance\nof survival from zero percent to zero\npercent is helping humanity die with one\nadditional information theoretic bit of\ndignity\nuh this is really crippling but i'm not\nsure information theoretic bit is\nuh the right framing for dignity points\nin that they are points scalars and bits\nare units of information in the\ninformation theoretic sense and i don't\nthink that's what dignity points are but\nyou know me of course i will digress a\nbit on dignity points and try to look\ndeeper into the uh\nthe definition\nso\neliasian cascade gives a\nliteral definition of uh\ndignity points um they are measured over\nhunters lock arts of survival\nthat is the graph on which the logistics\nsuccess curve is a straight line\nnow let's have a look at that\nand try to uh go through some simple\nexamples\nso i have uh first in excel type up some\nexamples of probabilities what's in text\nus ratios and what are the log parts\nwe'll start with um some of the the\nthing the odds that we normally have\nabout like here's 50 probability that's\nodds one to one and one divided by one\nis one and the log of one is zero um\nand if we have 20 chance of success\nthat is uh one to four\nand one the odds which is an outrage\nratio of zero to 25 which is minus two\nso\ncompare the situation where we have even\nodds and we have uh 20 chance of making\nit uh that is um\nis uh two dignity points down\nnow we can be more pessimistic which is\nclearly illegally\nand see we have like 120 one in 100.1\none ten thousand that is minus 13 uh\ndignity points for our civilization\nlet's try to have a simple uh\ncalculation i think the most\nnegative uh number i could find if i\nthink through all the bad news and\nupdated fully on those another on all\nthe good news is one in one hundred\nthousand uh odds of making that\num so that is one two uh yeah 99 999\nwhich comes out to minus 16.6 log odds\nand if you want to obtain one dignity\npoint from that well then you take this\nand you just add one and then comes out\nto uh this box if you're uh\nexponentiated um and um\nthat's this ratio and that is this\nprobability which is yeah kind of the\ndouble probability because the\ndifference between this number and this\nnumber is basically nothing so\nat least for small numbers talking about\nthe probability and the odds ratio is\nyou know basically the same thing\nand that suggests a an easier way of\nlooking at this or at least an\nalternative way of looking at this\nbecause if we have this is the logistic\nuh\nsuccess curve here is the uh analytical\nexpression for the logistic success\ncurve um\nand\nmost people i uh would claim don't think\nabout probabilities and they don't think\nabout odds ratios\num and that suggests that people are\nprobably interpreting this mostly as the\nlogarithm of this\nso let's see what is the logarithm of\nthis function well this is um it's this\ngraph here and as you can see\nit is for small probabilities basically\na straight line\nso that's also what really is a cascade\nwas talking about that if you\ndouble our chances you get\na fixed number you get one dignity point\num\nagain this is the log of the probability\nand not uh log odds\nand and this interpretation should\nsuggest that if you believe\nthat we are probably not gonna make it\nif you are 99 sure we're not gonna make\nit well 99 sure that is this number here\nthat is minus 6\nin\nminus 6.6 dignity points that is\nsomewhere around this level if you\nbelieve we are at this level\nthen\nthe expected utility of improving our us\nis really really bad but you'll get\nquite a few dignity points so someone\nwho believes that we are almost\ncertainly going to die is gonna love\ndignity points because emotionally they\nprovide something you can actually\nachieve something that is positive\nwhereas someone else might believe that\nthere's a 90 chance we'll make it and if\nthey think about probability it's not\nlike odds then they'll see us up here\nas so and there is a going uh just a\nmarginal approved improvement around the\n90\nprobability of success is worth a huge\nnumber of expected utility and very few\ndignity points\num so it could be argued maybe that um\npeople who think we are doomed love\ndignity points and people who think we\nare probably gonna make it they\nwill not be motivated by different\npoints\nuh maybe i like uh this is uh i've spent\ntoo long time on this to call a hot take\nuh\nuh so um\nbut i i'm not putting a lot of uh weight\non this this was basically a digression\nlet's move back to the rest of the uh of\nthe post which is structures as a\ndialogue between elias erkowski and\nan anonymous questionnaire uh i have uh\nput the uh\nusing a collar inverted picture of ilia\nsyrizkowski for for the personal making\nthe question\nso the first question is dying with\ndignity does that just mean we accept\nthat we are going to die and not\ntry to fight a hopeless battle\nand elias wikowski answers no because\nthat does not increase in the gods of\nfoster's survival\nelite costume\nis sad that we're gonna die and and came\nthe fourth hardest when we were in the\nmore slow part of the logistics success\ncurve um where something could have been\nchanged it's not something he\nregrets but he burned himself out to\nsome degree and is taking um\nsome time off now\nit is certainly true if you fight mine\nany longer you start with just\nmarginally more dignity um but\nand that is indeed dignified the\nundignified thing is deluding yourself\nabout the outcome or the probable\noutcome\nso here we see undignified being used\nmostly about the process of deluding\nyourself\nthe second question is\nif someone has a uh or the the question\nhere claims to have a clever scheme for\nsaving the world and states that he\nshould is it asking if it correct that\nhe should uh act as if he's going to uh\nuh succeed with this even though there\nare really really strong arguments then\nthis misguided doomed\num because if it doesn't work then we're\nall dead and then nothing matters right\nno with an um mark mark because that's\nnot dying with dignity\nand before he dives into the real\narguments he starts out with a heuristic\nfor how to generate the answer that uh\nto this question and that is that it's\nnot dignified to die in this way uh\nbecause what you are actually doing is\nstepping out of uh reality into some\nkind of um uh\nmentally uh\nsafe zone where things are going to go\nwell and instead of being in reality\nand it's more dignified to dive with\nyour commitment to recent intact and try\nkeep searching truth and motivation\ni love this picture i found it on the\ninternet i think it's louisiana\num\nso uh the uh the question\nthe person here is uh objecting to the\nheuristic answer and what's the real\nanswer like uh all the utility in the\nfuture lies in worlds where the scheme\nthat is being uh suggested is working so\nwhy not just focus on that\nand uh\nanswers that that is in fact not how the\nsurviving worlds look like in his\nexpectations\nwhere people try to live inside reality\nand\nstare into reality with grim\ndetermination and try some way to\nshape up their impossible changes\nenglish is not my first language i'm not\nentirely sure what shape up means that's\nlike sizing up in looking at what other\nchanges are improving the chances\nand and the the surviving walls are ones\nin which a miracle happens and\npeople are positioned with resources and\nin particular\nitself is crucial that that allows them\nto take advantage of the of a miracle\num about that your scheme is going to\nwork then very often you'll need several\nand you make a lot of assumptions and\nyou get very very far from the reality\nit could be said that if you want to\nmake this kind of improbable assumption\nit could in theory work but you can only\nmake one of them ever and that's the\nproblem that what people who are going\nto do this are going to make\ni think it's a it's nice to see this\nspelled out but i would really have\nliked to see this uh argument compared\nwith the argument for specialization\nwhich\nvery often looks very much like this\nvery often people who like if you want\nto specialize in i don't know i say to\nvia debate it makes sense to some kind\nuh kind of uh live in a world where the\nsuccess and failure of um ai safety via\ndebate is the crucial thing like um in\norder to specialize on just working on\naicc via debate and then hopefully\nsomeone else will be working on other\nresearch agendas um so it's not the same\nargument and i would have liked to\nsee some kind of comparison between the\ntwo so um because i do believe\nvisualization is really important to get\nanything done\nagi alignment is murphy cursed is a is\none of india's caskets arguments\nand that's to be understood as in uh i\nleave murphy's evolve was originally\nproposed in not quite in rocketry but in\nsomething like experimental jet engines\nwhere\nsomeone called murphy\nmade the law that if anything can go\nwrong it will go wrong\nand a murphy curse to me like murphy's\nlaw probably doesn't happen literally\nfor everything in most domains but a\nmurphy curse domain is one where\nbeing pretty much everything that can go\nwrong does go wrong\nso what do you need in a very request\ndomain you need mathematics mathematics\nis uh the uh\nthe only way to ensure that things\ncannot possibly go wrong\nand that points very much towards the\naging foundation's agenda that miri has\nbeen pursuing and uh are finding too\nhard unfortunately\ni would also add here that uh murphy\ncursed domains might not make a lot of\nsense\nif it's something that can be tested\nand a\na domain is only murphy cursed\nto the extent that testing is impossible\nso the two examples are rocket\nprototyping and computer security\nwhich are also somewhat morph cursed\nbut less so than alignment because we\nhave a scientifically unprecedented\nexperiment in agi and we have\noptimization pressures and some of them\nare\nworking against us the real world once i\nneed the ones working against us\nwe can\nwe cannot really predict whether\nintelligence\ngreater than ours will do by almost out\nof principle\nwe need to hit a relatively small target\nand we have a lot of things that can\nbreak in\nextreme practice and we have no way of\nreally testing things beforehand\nso if you have a really clear scene that\nreally looks like it's going to work not\njust to you and me but also looks too\neasy it counts me like it would work\nwell\nin this case adi alignment is so murphy\ncursed that even if it looks like it's\ngoing to work and you can't see a reason\nwhy it shouldn't work then it has a 50\nchance working in real life because the\ndomain is just super hard\nif you have something that you are much\nless certain about or you have like some\nmoderately weak arguments like four\npercent chance that would work then if\nyou put something like that up against a\nreal a truly birth course to me then has\nzero percent chance of working in real\nlife\nand worse a lot of these uh schemes are\nin fact actively uh harmful they're\ncalled hair brained uh usually basically\nharmful\nand the reason giving is they're\ninvented by the kind of people who uh\ncome up with unbreakable schemes\nobviously and try and get rid of them\nwith counter arguments like if it\ndoesn't work then we're all doomed\nanyway\nthe problem with and main way they are\nthey're harmful they drain resources\naway from projects that are not in their\nbrain\norganizations that look into reality\npeople who look into reality and try to\ntake advantage of\nsome kind of miracle\ni'm not very happy about this\npart of the article because i find it\ninsufficiently specific we need to have\nsome kind of precision in being able to\ndistinguish what\nwhat\nschemes are\nmisguided and\nunlikely to work and which are actively\nharmful\num\nand um\nthe three uh uh\nparts\nuh that that's being used the three\narguments are criteria is they are\nself-reported as clever there are strong\narguments why they're doomed and strong\narguments why they are misguided\nnone of this is really enough i think\nyou probably want to burn some kind of\ncommon you know to have a\ntruly actively harmful scheme\nto some extent also because if you're\npursuing a a scheme that is misguided\nthen in the course of pursuing it\nprobably you'll learn something more\nif nothing else you would on average you\nexpect to learn that that it is in fact\nmisguided\nthird question is\nshould we have a breakdown and whale in\nthe weight of doom industry in the\nstreets\nand uh it is it asks me with dignity\nalso has a good argument that this is\nnot very dignified that much that most\nsomewhat uh\ndarkly very darkly indeed he suggests\nthat you have a private break breakdown\nin your bedroom or a breakdown with a\ntrusted friend if you must\nand the problem from an expected utility\npoint of view is that if you go\nwaiting in the streets you'll associate\nbelieve in reality with people who are\nalso unable to control their emotions\nactually this is a rather large subject\nhow should uh\nrationalists deal with emotions quite a\nlot have been written by this also by\nelias caskey a big subject\nand the reason here we don't want this\nis because we are uh\nwe still need to think strategically and\nif we uh associate belief with uh in\nreality with uh\nbeing stupid then uh then voices are\nstrategic\nposition\nthere is\nthe opposite suggestion hiding the great\ntruth and just presenting everything is\nfine\nthat also doesn't seem very dignified\num\nit is some\nbasic language about\nhow we should grow and also\nthing that that in practice people who\ndeceive others generally also deceive\nthemselves to some extent they don't\nlive in reality\nand another problem is that if good\npeople are liars that means that um\npeople\nother people can't really trust if\nrationalists go around lying then people\ncan't trust rationalists and that will\nhurt us a lot and that will in\nparticular\nhurt our ability to coordinate with each\nother\nand of course also also other people so\nfor that practical reason it's a bad\nidea to uh to lie about uh and pretend\neverything is fine\nthere is quite a digression here from\nalias yukowski stating that this line of\nargumentation is generally used by\npeople who don't understand\nconsequentialism or in utilitarianism\ni haven't seen this argument very much\nso i can't really\nhave an opinion about whether that is\nactually true i expect indiana has\nseen many many people make this kind of\nargument\nthe fifth question is on extreme actions\nor\nrather the questions are not really\nclearly delineated in this numbered way\nthat i've uh i've made them\nso this is\na question of will elias and carson do\nsomething that is that makes it unsafe\nto be around him\nand he says no he will be relatively to\nbe around\nbecause why would you do something\nextreme and something unethical\nwhich is still not sufficient to save\nthe world\nthere is no extremely ethical actions\nthat would save the world in actual\nreality because the thing that needs to\nbe done to save reality is\nis not extreme oriented or unethical\nit's basically to\nlook reality hard in the eye and do some\nvery very hard alignment research and\nthere's no unethical thing you could do\nthat would actually save the world you\ncould try to like politics could have\nthings that are extremely\nunethical but don't they don't save the\nworld\nso israel\nwill be relatively safe at least like\nsafe in the sense that he won't cause\nyou to die but you will die\ni thought about this well i should\nspeculate about um\nwhat are the extreme and ethical actions\nwe might be talking about here and uh i\nthink india's youthkowski by not\ndiscussing them i'm trying to establish\nsome kind of overtone window and saying\nwe will not talk about this kind of\nthing here\namong\nright thinking people and that's why\nhere even though i\nwas thinking about making a digression\nabout what extreme and unethical actions\nare available um\nthen um\ni i will not speculate on that\nunfortunately some of eso encounters\nself-proclaimed fans\nmight not be quite as safe as him\nand he is in that case he's filled jason\nwhatever it is i know so\nhe is indeed putting\nzero weight on this\nhe has like um the the simple arguments\nthat are being presented here with just\nthe expected utility uh calculations is\nin fact something that he has written\nabout like\nthat very much but also not a little\nthere is some uh warnings against\nfollowing expected utility calculations\nof a cliff that's something that you\ndefinitely shouldn't do if you're not\nreally sure what you're doing\nhe also\nstates that he in his culture he is\nperhaps\nperhaps even quite different from his\nfans in the sense that his he's grown up\nwith um liberalitarian science fiction's\nuh\nworks of science fiction where one of\nthe things that differentiates heroes\nand villains is when\nthe going gets tough do they then just\nabandon their ethics or do they really\nstick to their ethics even when it's\nhard\nand sure it's possible that if you tell\npeople the truth that could be panic and\npenny can't disrupt the plans\nuh and that way be bad but\nthis is not what's going to happen in\nreality because there is in fact no\nplans that can be disrupted so in that\nway panic doesn't necessarily mean that\nwe have a lower chance of\nof success\ni must admit with this unsafe people uh\ndiscussion in general i'm surprised uh\nlike this is questions that elijkowski\nchooses to bring up and i'm surprised\nyou give this that much prominence like\ni don't really see extremely unethical\nactions being taken\nokay\nso uh\ntalking about things even though it can\ncause panic it means dying with more\ndignity\nand\ni know a bit um\nuh i noticed that right as elias kowski\nsays he\nseemed to not talk about these extreme\nanalytical actions and not speak plainly\nabout those immediately afterwards he\nsays that it's more dignified to speak\nplainly and obviously he means plainly\nabout risks and not playing about the\nunsafe actions\nbut in order to avoid panic he\nwants to provide the people who might\npanic with an easy line of retreat which\ncomes here and that is question six\nthis is just an april fools joke right\nbecause it's been posted on the first of\napril and tech equals four so really\nthis is not what he actually is and he\nanswers why of course it is\n[Music]\nstrategically making sure that this is\nsomething that can\ncannot by someone who is not really\nunderstanding deeply the subject uh uh\ncannot really uh\ninfer that he this is actually what he\nbelieves and the way around this the\nonly way to actually figure out what\ndoes aliasing you'd ask me believe and\nthe only way to figure out the truth is\nin fact to stare at the truth and try to\nfigure out what is the state of ai risk\nwhat is the probability that we'll make\nit and try to live in in that mental\nmental world and\nfigure out what is true and allowing no\nother consideration with that change and\nthat is\nthat is the essence of dignity\nso now we have four definitions of\ndignity we have the logarithms of\nsurvival mathematics we have the dignity\nis the thing that allows us to take\nadvantage of miracles it has the\nexclusive truth thinking\nso\nuh suggestion finally is to people who\nare worried about this and think it\nmight be an april's fool's joke to um\njust\nif they put a small probability that\nhe's right then he'd rather that they\njust have super probability that the\nworld's gonna end put that probability\nin a small safe corner in the back of\ntheir mind and stay out of the way of\nthe gloomy people\nand don't get away of any of their their\nhopeless plants\num so in this way uh no\nin israel what he's saying he is clearly\nclearly not what he believes so is that\na lie\ni think it's more uh\nit's better to um\nmodel what he's saying as some kind of\ncollusion between uh between these two\npeople is helping um this uh\nuh christian maker um live in the\nreality he prefers to live in um so\nthey're working together in some sense\nand finally i want to say a few words\nabout aichhf2.com\num\nand our strategy and how it fits into\nthis uh because from this we can uh\ninfer some of the things that the esv\ncustomers say we should do we should\ngather resources in some kind of like uh\nset up research departments and prepare\ntools and uh i'll learn as much as we\ncan and also um uh investigate as much\nas we can and look out for positive\nmiracles\num\nand so\nare safety.com well we're obviously in\nthe reading group\nlooking out for signs of possible\nmiracles by you know reading that's\nwrong and discussing the different texts\num and the the hope of what we'll be\ndoing in aicte.com the startup is\ninvestigating to what extent language\nmodels can generalize a major level of\nso that also seems to like find out what\nyou can kind of think\nand\nyou know gathering information we're\ntrying to actually you know build some\nkind of research organization and that\ncounts as gathering resources um and not\njust looking for a miracle but actually\nyou know taking a bit down and maybe\nwe'll find something that will\ncount as a an actual miracle if we're\nreally lucky and of course\nwe are explicitly open to changing\nresearch directions if it turns out that\nwe find something that is uh\nor someone else finds something that\nlooks really interesting and being a\nposition to that also seemed like a good\nidea\num\nthere was one thing in particular that\nstruck me when i read this and that is\nuh one of the risks we're facing is we\nif we look into how can language models\ngeneralize one meter level up there is a\nchance of course that will inadvertently\nfind a fine tuning for g3 that will make\nit easier for it to generalize a level\nup and after which qrt3 might be\nmotivated to destroy the world and that\ncould destroy the world and that's of\ncourse really bad and so\nwhat i've literally said that if gmt3 is\ncapable of taking over the world we're\nall doing anyways\nso that's kind of close to the\nhairbrained schemes that elias that\nyokosuka was warning us about\nbut it's not fitting on some of the\nformal criteria\nfirst\nit's not that people have a strong\nargument that gbg3 is capable of uh of\ndestroying the world is around the\nopposite that we are reasonably sure\nthat that\ncannot do that\nand of course even if gt3\nif there is a interview\nchange the gt3\nwould be able to take over the world um\nwith a good prompt then\nthe probability of that happening is\nmuch smaller than the probability that\nwe will learn something um by\ninvestigating whether it can\ngeneralize immediately level up\nso\nin total i believe like this is\nsufficiently far from the hairbrained\nschemes that he is\nuh wonders about um that i don't feel\nuh\nthis is a particular problem for aic3 to\ncome\nwe'll see\nthank you and good night oh thank you\nand see you next time", "date_published": "2022-05-20T04:43:58Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "4906f786fe0b771196ebfe871f4ee78f", "title": "250 A Generalist Agent 1", "url": "https://www.youtube.com/watch?v=_DbyjSbczQw", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 250\nin the aisafety.com reading group\ntonight we'll be discussing the first\nhalf of the article a generalist agent\nby scott reed and others\nthis is a work done by deepmind scott\nreed is the primary author but there is\na team of 21 people\num and i'm not gonna uh\nfind their pictures find all of their\npictures\nit's published very recently so it's\nlike a couple of weeks old at most\num the word gato as far as i could tell\nis it's not described why they chose\nthat name it means cat in spanish and um\nlike it's\npro not an acronym uh i could at least\ngeneralists probably an agent that's\nlike ga and then you could she is\nprobably a transformer and\nmaybe optimizer or something like that i\ncouldn't find anything else so i don't\nthink it's an acronym of much mud\ni also don't really think it's an agent\nlike um i realize some people say\neverything that has to do with\nreinforcement learning is an agent um\nbut uh\ni i don't prefer such a limited like\nmost people when they hear talk about an\nagent they don't think something like um\nlike uh gator in fact\nso let's start out with a few reactions\nto this paper\nit's been very mixed someone like uh\ndaniel kukotal was willing to call this\nindeed subhuman level agi that we do in\nfact have agi here\nother people like nostalgia bryce\nmade some uh very dismissive comments\nsaying this work is like totally what we\ncould have expected um and was really\nunderwhelmed but\none thing that certainly was not\nunderwhelmed was the prediction markets\nbecause around the time when this was uh\nuh\nrevealed the prediction markets for when\nwe would have weak agi moved uh\ndramatically to become dramatically\nshorter going from 2048 you might be\nable to see here in this graph it's on\nan uh logarithmic curve here so it looks\nlike it didn't go that far down but it's\nactually from 2042 to 2028 which is a\nrather dramatic shortening of timelines\num\ni looked a bit around what people said\nto that and a lot of the people said\nthis was a crazy overreaction and mostly\nbecause uh\nnot because it changed very that much\nbut because uh\nsaying this could only happen 2042 was\nalready uh crazy so the market is\ncorrecting\nin some way\ni must admit i i looked more into the um\nthe details of this bed in particular\nand one of the things required for weak\nagi is passing the turing test so you\nneed to be able to\nfollow a scenario somewhat like what\nalan turing originally envisioned and\nthat probably requires more than human\nlevel intelligence to do and also in\nfact it requires winning the lutner\nprice and that's the price that's been\ndefunct for the past couple of years and\npeople\naccording to wikipedia people scorn of\nthis price\nand that's probably why they don't give\nthat out any longer so uh this\nuh\nuh\nthis uh prediction here probably also to\nsome extent reflects the probability\nthat it will never be one because the\nlutheran price will not be uh\nreinstated or something like that\none neural network to rule them all\nthat's my quote and not one from this uh\nfrom this paper\num\nthe the paper the introduction four\ntimes in a row states that\ngato does a number of things and it uses\nthe same neural network for all of these\ntasks and it's very important for them\nto say that it's the same neural network\nand why is that a good idea well\nyou avoid\nall of the hand crafting that you\nnormally have to do when you use uh\nfor different things and there is you\ndon't need to determine that good\nbalances you get a lot more training\ndata if you can just have one big model\nand then you can train on literally\nanything and then you get a more diverse\ntraining and of course it's a model that\ncan be reused meaning that the total\namount of computation to use is much\nless those are the five reasons given in\nthe payment i think\nthings like performance out of\ndistribution is probably a lot more um\ninteresting and probably\npart of the reason why we care about ati\nand of course the key thing that enables\nthis is that it does in fact work so you\ntake in more diverse training from\neverywhere and then you keep getting uh\nbetter results on all the tasks the\nscaling hypothesis that we've seen so\nmany times\nthey have a number of tasks that they do\nand when they just write it out it\nseemed really really diverse you can\nbreathe read here all the things it can\ndo both in reality and in simulations\nand games i couldn't find any specific\num\nreasoning why they chose the tasks that\nthey that they chose\num\nso i think mostly they like maybe they\nhad someone who was really good at the\nbodies and then thought i will do some\nrobotic tasks or something like that\nuh and of course the the main claim that\nthey're making is uh or one of the\nthings they're also showing that i won't\ngo too much into details with tonight is\nthat this that if you add more data and\nfine tune it then it becomes even with a\nlittle extra fine-tuning data\ndramatically more capable\nthey have this statement natural\nlanguage is a common grounding across\notherwise incompatible embodiments and\nthat looks kind of nice because\nobviously they're using a language model\nto do things like control robots and\nlike white language a part of that i\nwill later uh\nargue that this might not be entirely\ncorrect or\nand of course another thing that we\nshould note and also that makes it\nsomewhat less of an agent is that they\nin fact did not use reinforcement\nlearning they used supervised learning\nand learn from that\nso um they could of course have used\nreinforcement learning and\ni think it's just a matter of\nconvenience that they\nhad some expert trajectories and then\nthey just used those um but\nthat is they didn't do any kind of\nself-play or anything like that\nfinally in the introduction they have\nthis statement we hypothesize gateway\ncan be effectively scaled towards\ncovering any task of interest\nand that certainly looks like a ati a\nclaim that uh an explicit claim from\ndeepmind that this is in fact an agi\num\nthis is kind of like an intuitive sense\nthat i have from this paper mostly\nnot\nmostly from reading other deep mind\npapers in fact other deep mind papers in\nmy intuitive sense are very anti-hype\ntrying to avoid making statements\nlike this statement in fact um but um\nbut i can't you know it's it's hard for\nme to if someone asked me to please\npoint out the places in other deep mind\npapers that are less hype than they\ncould be um like i can't really and\ncertainly a lot of other research is\nis quite high\nso it's more like regressing to the mean\nlet's talk about this skater model and\nall the things that it does\nthe overall guiding principle is to get\nas much data as possible and uh as\nuh\nas varied data as possible\num\nand they put all this very data somehow\ninto just standard tokens which is you\nknow mostly\nmostly words\nand the the way they do this will get\nback to the details later but i would\ncall that naive of course it's really\npretentious of me to say this research\nis naive in the sense that i could not\ndo it myself um\nbut um\nthere doesn't seem to be any um big\nideas there are a number of small ideas\nand we'll get to some of the small ideas\nlater but um\nbut it's certainly not something that\nseems groundbreaking\nand and the way they are using this\nmodel is not\nsomething that is reminiscent of um\nreinforcement learners or anything like\nthat but mostly like a language model\nthey're just taking a language model and\ntrying to use it as an agi and say hey\nit works uh just\nby coincidence or maybe not entirely\nlike coincidence\nbut it's not designed with any kind of\num deliberate\nthoughts about agi\nit's 1.2 billion parameters\nbecause and that's a very small model\nbecause they want to use to control\nrobots in real time and so that means\nwhen you query the model you need to\nhave a very fast answer\num for comparison gp2 was larger than\nthis model than gator and qt3 of course\nwas\nmore than\n100 times larger so there is a\nboth a potential for it to be scaled up\nand also uh like when you look at how\nimpressive it is in things like text\ngeneration and try to compare it you\nneed to realize that it's much smaller\nthan gp3\nlet's talk about tokenization um the way\nthey uh in code\nit seems like every half year there is\nlike a new standard way to do it that's\njust a tiny bit smaller and now we're\ndoing using a method that looks at 32\n000 subwords and we maps those to\nintegers\nand images are also\nlike six\ndivided into patches and normalized and\num\nthey have a uh\nfor atari button presses they uh make\nthose into words in a rather interesting\nway and i'll try to explain how they do\nthat so first they say they do it in row\nmajor order so if you have like uh\nnine values you can either this is row\nmajor order or and this is column major\naura and they're using the top one\nand let's\ntake an atari controller this is an\natari controller and if you squint you\ncan see like there is a button up\nand up and to the left and to the left\nand down to the left and down and then\nin the middle there is in fact a button\nthat can be clicked or not clicked so if\nyou imagine that you are holding the\ncontroller stick to the left and\npressing down the button at the same\ntime that corresponds to zero zero zero\nand then one one zero zero zero zero so\nin that way they turn atari control\ninputs into uh into integers\nthey also need for uh for robotics they\nneed things like water movements and\nwhere is the robot's body proprioception\num and the way they do that is they take\nsomething continuous value and\nin a\nuse some\ntricks to put map those into some\nspecial um\nwords so uh the first from 0 to 32 000\nthat's text and above thirty two\nthousand uh and two thirty three\nthousand and twenty four\nthey uh that's the robotics and that is\nin fact the the way the thing that makes\nme think this is not quite as general as\na model as you think because this way of\nsegregating the input into two parts\nmakes me think that uh you know when\nthey previously said that uh\nthey're turning everything into language\nthen that's not entirely true because\nthey are using in fact um different\nvalues for this part\num\nso so it feels more like there are two\nneural networks pushed together rather\nthan um\none neural network doing both\num\nlet's talk about the last function which\nis of course specifying the loss\nfunction is a crucial thing for\ndetermining how a how to train a neural\nnetwork\nthis is in fact a picture from uh\nexisting latent knowledge that i've\nchosen here and you might remember this\nis a patient network\nwith statements like it is raining and\nthe sprinkler is on and there is water\non the lawn and i should get an umbrella\nand things like that so you imagine\nthere are some values here um that are\npropagated um and let's say you want to\nencode this kind of knowledge\nso how do you calculate the probability\nlet's say the joint probability that\nthis is true this is true this is true\nand this is true how do we calculate\nthat\nwell that is you can write that as a\nprobability of a1 a2 a3 and a4 right and\nhow do you calculate that you can\ncalculate that using the chain rule and\nthe chain rule looks something like this\nso you have the probability that a4 is\ntrue\ngiven that the others are true\nmultiplied by by the probability that\nthe preceding ones are true\nand this of course you can if you\nwant to calculate this well then you can\nuse the chain rule again\nto get that's a three uh\nuh\ngiven these two uh multiplied by the\nprobability of these two together and\nthen you end up with something like this\nokay so this is just a motivating\nexample of how to do it for four\nprobabilities let's take the logarithm\nof all this because we don't like for\npractical reasons we don't like to\nmultiply we much prefer to take the\nlogarithm and just add things okay so\nwhat's the uh log of the probability\nfrom\nuh s1 to sl well that's the the sum\nof\nuh the probability uh s one\nuh and then from is one to one\nand then\nuh\nthis index becomes two three four uh and\nso this is basically using the chain\nrule\nokay\nand\nthen using this\num\nthis\nequation we they plug it into and get a\nloss function for a\npolicy and a batch size\nand for if we just take the batch size\nhere then you can see this is in fact\nthe uh\nthe statement up here the probability\nthat you go all the way down in this\ntree in the bayesian network and there\nis a masking function this masking\nfunction uh ensures that there are some\nparts of the output that we don't\nactually um\nuh train on and we'll get back to later\nwhy we don't want to train on that but\nthis\nequation here is the actual uh loss\nfunction for training the neural network\nso before i can explain the um\nthe masking function we need to look at\nhow data is actually flattened into uh\ninto the input in a batch\nso there's some text it could be like\ni'm going to london or you know whatever\npeople have posted on the internet there\ncould be some robotics proprioception\nand continuous action\num and there are things like images with\nquestions and\natari games\nand all this are put into some batteries\nyou can see here\nand this is fit into data and then we\nnormally engage with the loss function\nwe need to add here\nthey mask some of the uh the inputs like\nif you imagine here the uh\natari then we want to train on the\ncontrols but what we predict the screen\nto be isn't actually all that\ninteresting or not something we should\ntrain on so we mask that out in order to\nget our last function\nthere are some more details about the\ntraining they choose transformers for\nsimplicity and scalability\nscalability not learning something that\njust one was not good at uh i'll leave\nit to you to decide whether the\ntransformer architecture is actually\nsimple like it seems to me like that's\nnot the case but it's easy for them to\nimplement because uh you don't actually\nhave to implement that uh because other\npeople have implemented it for you so uh\nyeah like if you want to use a\ntransformer then that's not very very\ndifficult right because other people\nhave done it\nand some more just know about the\nparameters for the\ntransformers and sort of other\ndetails about how they do that and they\nhave some prompts uh always wants\ndemonstrations\nthe hardware they're using is a bit more\ninteresting so they're using a 16x16 uh\ntransfer processing unit version 3\nand version three\nwhy are they not using the tpu version\nfour well the conspirational part of me\nwould say that well the tbu falls are uh\nbusy build uh training a uh online super\nintelligence or something that they\nthink they can use to take over the\nworld um\nin\nanother hypothesis less conspirator\nit's something that happens quite often\nthat people\nhave some kind of model and then they\nspend half a year and even a year before\nthey get around to publishing it that's\ncertainly i think that happens um\nanother thing that i think is perhaps\nmost likely is that they simply don't\ncare and i'll explain why in just a\nmoment the time they used to train this\nwas uh four days\nso if you currently have 256 views\nrunning for next six hours that comes\nout to around 25 000 uh gpu hours\nand um\nin uh\ngoogle will rent you these new hours for\ntwo dollars meaning that\neven i mean i can probably get them at\ncheaper than but even then we're talking\nabout a price point of like um\nfifty thousand dollars and for people\nwith 21 um uh authors i think this is\nyou know\npeanuts really\nthe training costs are totally trivial\nthey trained it for one million steps i\ncouldn't find the stop criteria\nbeing used\nbut yeah and there are some more details\nabout this lens of course interesting if\nyou want to\nreproduce this um but\nmight not be interesting for us\nlikewise there's here's a more\ndescription of how they uh the\ndeployment works in atari games um i\ndon't think i'll go through that\nthe data sets i won't go through that in\ndetail but one thing you'll notice is\nthat there are vision and language tasks\nand there are control tasks and the\ncontrol tasks are in fact 85 percent\nand there seems to be like a decent\nspread over different tasks\nand also\nin the vision language mostly is a\nmassive text i thought that was i\nnoticed a um dataset called the align\ndataset and i thought hey they're\nactually doing alignment work and\nwonderful what's the align uh dataset\nand it turns out to be something with uh\nyou know image recognition unfortunately\nand that has nothing to do with\nalignment um so that's a bit sad that\npeople are choosing uh such a uh\nmisleading uh name\ni don't think it's malice it's probably\njust ignorance the people\nworking with this\njust are not really aware that there's\nsomething called ai alignment and they\nare just\nchoosing the picture the name for all\nthe reasons\nlet's talk about the simulated control\ntasks um\nthey have a number there is uh the atari\ngrid world instructions following some\nphysics-based simulations transparency\nprocedural and atari\nsimulation and meat environments so it\nseems to me like quite well as far as i\ncan come\ni think of course\nonly to use\nexperts uh\nplay throughs uh all from the best\nreinforcement learners um\nand only in the cases where the agent is\nsuccessful and some like much so much\nrevenge this is a non-trivial constraint\nwhat does that do\nmy intuition is that on the average case\nit probably makes it better but in the\nworst case because we haven't seen\nanything like that the worst case could\nbe worse but that's kind of an intuition\nabout the um\nthe consequences of this choice\nokay let's go from simulation to reality\nand staying uh red green and blue blocks\nand there are two overall tasks skill\nmastery and skill generalization\nskill mastery is like how well can you\nstack these blocks on top of each other\nand skill generalization is if you have\nonly stacked green on blue and suddenly\nyou want to stack blue and green\ncan you figure that out\nand\nthey're both doing this in simulation\nand\nin fact in reality so they do have\nactual robust during this\nand the uh\nthe episodes are running at uh 20 hertz\nso um\nthat turns out to be uh requiring a an\nend to end time of 50 milliseconds for\nquerying this this model um and that's a\na very very substantial constraint and\nthat's of course also why we need to\nhave such small models and i think a2\nthis is as far as i can tell a really\nhard constraint and it's uh uh probably\nsomething we should be really impressed\nwith that they are able to um to uh to\nmake a full turnaround on this in in 50\nseconds that's uh\nquite impressive as far as i can tell\nand of course uh\nit gives this very tough constraint\ngives a um wrong picture of what this\ntechnology can do because you could\nimagine that you um just relax it like\nd3 operation takes far longer to answer\nand um you also get faster computers\nso this constraint is something that is\nthat we shouldn't put too much weight on\nand they have some examples of how they\nhow gator compares to\nthe state of the art and uh depending on\nhow you squint it is probably beyond the\nstate of the art in general and this is\nof course something that people who\nwrite email papers care\nreally a lot about we don't\nfrom an air safety point of view it's\nnot so important whether it's beyond the\nstate of the art but like the trends\nthat\nthis\ngreatest kind of generality uh whether\nthat is useful or\nnot on in total on simulated control\ntasks you can see a graph here\nlike um\nthere are 600 tasks\num and you can see how many of them\nyou can get perform as well as the uh\nthe the experts the state of the arts\nand how many of them get like 90 and how\nmany get uh 75 percent um this is of\ncourse an interesting question the way\nthey formulate that is if you go to the\n50 threshold then 75 of the tasks can be\nsolved at the 50\nthreshold and um\ni think actually it's more interesting\nto see how far uh yeah\nuh like if you go up to like 75 or\nsomething like that well that seems like\neven um\nthat's much closer to state of the art\nand\nit's still too fast and you can do that\nor even if you go up to like requiring\n90\nof state of the art then you can see\nokay it's still you know around 50\nof the tasks that can be uh solved at 90\nof the state of the eye so i think this\npart of the graph is actually more\ninteresting than the one they highlight\nhere\nand\nalso one thing that i think is\nreally striking from this graph is how\neven the performance is\nthe performance curves right that\num\nas you increase the requirements it very\ngradually falls out how many tasks\nfollow this it's not like there is any\nkind of cut off points or anything like\nthis it looks like um some very smooth\ngraphs um and of course what we've seen\nwith scaling general is\na lot of smooth graphs and that's also\nwhat we're seeing here\nand finally of course what i\nwould like to know in some of these is\nlike what is the human level um because\nfor some of these um it can be really\nhard to\nget an intuitive feel for uh like um\nthey just compared with state of the art\nin reinforcement learning and i would\nlike to know like how\nare these uh reinforcement journals at\nthe human level or far above the human\nlevel or far below the human level\nbecause like if the human level is in\ngeneral like um i don't know seventy\npercent of the best reinforcement\nlearners um well then uh this becomes\nfar more impressive but if humans are\ngenerally far above the best\nreinforcement learners so this is a a\nsubhuman level um but then this becomes\nmuch less impressive so the comparison\nwith the state of the art i i mean if\nyou are a\nresearcher in this specific field then\nyou have some kind of intuitive\nidea about what is the state of the art\nis it above the human level or below the\nhuman level this is something that when\ni people like me have no way of knowing\ni think\nthis\nuh these expert reinforcement learners\nare in general above the human level but\nthe comparison is just not me and i'd\nreally like to know\nthere are some text symbols shown um\nit's trained on massive texts and\nyeah the colossal clean with my common\ncalls web crawl corpus\nyeah\nthis c4\nso this is how it's trained and it's\nalso true those efficient language data\nsets\nand then huge examples uh calling it\nrudimentary\nand so uh\nlike i would like to see some more\na thought in really comparing this like\nhow much better is it than gpt2 that's a\nquestion that like they're not trying to\nanswer at all uh even though obviously\nit's a\nsmaller model and it's mostly trained on\nother things than language um i think um\nit is in fact better than gbg2 and\nthat's of course an interesting thing\nright they are training it on less data\nand they're training it\nit's a smaller model and it's mostly\ntrained on other things\nso\nit is somewhat surprising why does it in\nfact perform better than gpt2 and i\nthink the answer is in all these small\ndetails like the the the mapping of\nwords to integers and all these small\nperformance optimizations that are\ncontinuously being found with the\ntransformer architecture and way to\nimplement this like there are so many\nsmall improvements um that um\nthat we do in fact with a smaller model\nand less strain data and less everything\ngets better performance\num\nbut of course the exams that they show\nlike\nit's clear it's not state-of-the-art\nit's\n[Music]\nlike i can speculate why it's not\nperfect and how much\nbut it's not really obvious\nfinally i'm going to talk about\nsurvivorship buyers like you might\nas some of you might have seen this\npicture before this is a picture from\nthe second world war where people were\nobserving like counting the bullet holes\non the planes that returned home and uh\nput a red dot and they could see okay\nthere were these uh places that the\nbulls were hitting so they thought\nperhaps we should armor those areas and\nthen actually they realized afterwards\nthat no in fact the reason why uh they\nsaw this was because those were the\nplanes that re that did return home\nso\nthey should armor these places instead\nbecause when they apparently when the uh\naircraft was hit in the cockpit then the\naircraft turn\ndid not return home in fact\num\nand so\nlet me uh try to use that as an analogy\nfor the safety work in this paper\nso rowan shah\nfrom deepmind from the mind safety team\num\nis uh\ncommented on others people and he was\nasked\ndid he review this and what was he\nthought and he did not review this and\nhe believed that no one at the deepmind\nteam did in fact review this they would\nhave been happy to chat with him if they\nhad reached out but um\nthey didn't do that and uh when rowan is\nreading this paper afterwards he is\nlooking at this and saying this doesn't\nseem like something that can destroy the\nworld\nand i think if you can imagine some kind\nof self-reinforcing bad cycle in\ndeepmind where they build an agi by\npeople who obviously don't care the\nleast about safety\nand then\nthey uh they start the agi and they the\nagi does not destroy the world um and\nthen they uh write a paper based on this\napi that came out not to destroy the\nworld they show it to the safety team\nand afterwards and the safety team\nafterwards and say obviously this can't\ndestroy the world and they're right\nbecause they're only seeing it after it\nhas been published and one of them has\nnot destroyed the world and so they\nupdate on this saying okay we'll see\nthis and uh obviously the world didn't\nget destroyed so we're seeing more and\nmore examples of the worlds that can get\ndestroyed and they become more confident\nand the problem of course is in the\ncases where they build the agi without\ncaring about safety then in this case\nit would never reach the safety team at\ndeepmind so i think um there is a\nfundamental problem in deep if they have\nthe safety after the deployment the uh\nthe safety team should be involved\nbefore the deployment\nthat's all i have today there will be\nmuch more about the paper and about\nsafety\nnext time", "date_published": "2022-06-03T05:05:12Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "63eaf37ccd0cc77c984d95b61a0548c0", "title": "198. Language Models are Few Shot Learners 1", "url": "https://www.youtube.com/watch?v=jOxtiqszL4s", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 198\nin the ai safety reading group tonight\nwe'll be discussing the first half of\nthe article\nlanguage models are few shot learners by\ntom b brown\ndario ammoday and many others\nthis article is uh published by\nopenai and the number of people who are\nworking there\nit is dario moday who has designed and\nled the research tom b\nbrown benjamin mann nick ryder and\nmelanie sugar\nhave contributed equally this is this\nnormally means that those are the\nfour primary authors but there are a lot\nof\nsupporting people actually there are 31\nauthors in total so i um\ni'll let you go through all the names\nand backgrounds i hope that's reasonable\num and this is a paper that was\npublished on the\nat the end of may this year and can be\nsummarized as\nthe fact that scaling up language models\ngreatly improves\ntask agnostic few shot performance\nand i will be describing the results of\nthis paper\nbut also please keep in mind that i am\nno domain expert in this\nso i might from time to time say things\nthat are incorrect\nalso we'll be focusing on section one\nand two\nin this part so the first\nbackground question is natural language\nprocessing\nwhat is that well this\nuh general subfield of computer science\ndeals with the interaction between\nhumans the\nthe language that humans use and\ncomputers\nso there's uh as part of it which is\nlike\nlow level like uh\n[Music]\nspeech recognition and speech synthesis\nand things like that\nbut we're mostly talking about the the\nhigher level\nwhere there are meaning to the words and\nin this part of natural language\nprocessing\nwe have things like uh automatic\nsummarization dialogue systems like you\ncan see this\ngift shop here machine translation\nmany other things\nand if you go to an even higher level\nabove all these\nthen there are two major tasks one is\nfor\nthe computer to understand natural\nlanguage and the other one\nis for the computer to generate natural\nlanguage\nyou can actually go even higher at the\nvery very highest level\nnatural language processing is simply\npredict the next word that a human would\nsay\nthat would indeed solve all these\nsubtext\nsubtasks if you could do that um\nbut but this is generally considered you\nknow\ntoo high level or i would uh if i should\nquote myself on\nmy thoughts before i heard about all\nthis that would have been\nthat sure this predicting in the next\nword that's\nan interesting subtask but not really\nrelevant right\nbecause um if you have a task\nlike translating then\nobviously you can exploit some of the\nstructure of this problem\nright if you have the problem of\ntranslating a text\nfrom english to french then a dictionary\nthat translates\nwords from english to french is\nobviously going to be to be\nreally really relevant right so\nin in this way this um\nthe the claim that it's uh\njust predicting the next word is enough\nseems really really counterintuitive\nso let's have an here is a very very\nsimple ex example of this task i have a\npen\ndot i have a and then apple pen\nread or hello if you have to choose\nbetween these most people would say that\napple is probably the correct next word\nso what was the state of the art in 2014\naccording to this paper well um oh yeah\ni\ncouldn't resist just sniping a bit at\nbrian thomas\nwho in 2014 um\nargued that we are unlikely to see um\nuh dramatic improvements on this kind of\nthing\nthrough algorithmic insights because\nthere's not so much precedence for this\nand\num as we'll see there has indeed been\nquite a bit\nso um how would this have been done\nwe'll have\nin 2014 we would have been using rather\nsimple\nneural networks um using things like\nword vectors where here we have a number\nof\nuh animals and then some dimensions like\nare they domesticated are they fluffy\nand then we assign numbers to those in a\nword vector\nin 2015 things got better we are using\nrecurrent neural networks with\nmultiple layers that feed back into\nthemselves\nand we're using a some\ncontextual state for instance this is\ncalled\nlong short term memory where the\nthe previous things that have happened\nare fed into the neural network\nhere you can see this this box zoomed in\nit actually looks like this and this\nwas quite a breakthrough this\nalgorithmic progress\nhere according to wikipedia this was\nused\nreally really successfully\nnow in 2017 even more progress happened\nand this was where google grain\nwhich is you know kind of a competitive\nto uh\num to open ai they had a new um\narchitecture called the transformer the\npaper is called\nattention is all you need and there's a\nlink to it here\nand um this is what the uh\ntransformer looks like so as you can see\nit's a somewhat\nmore complex um architecture\num there are there are more links and\nand things that\ndo other things than just the basic\nneural network\nbut fundamentally these are neural\nnetworks\nso it's basically still neural network\nand\nthe key thing as you can you might be\nable to tell from the the title of the\npaper\nis that you can actually do away with\nall this recurrence\nand also something called convolutions\nuh\nif you just use the attention concepts\nso transformers\nare built exclusively on attention\nthe good thing about this is when you\ndon't need to feed\nin the results from uh from just before\nthen you can do it in parallel and\nthe fact that the transformer\narchitecture is parallelizable\nwhereas recurrent neural networks are\nnot um\nis in fact a very big deal however\nin 2017 this was still uh something that\nwas used in a task specific\nway in 2018\npeople started realizing that you could\nimprove this somewhat\nby having some pre-training done there\nwere also a number of improvements to\nthe\narchitecture throughout this there are\nsmall tweaks\nand extra features being added to the\narchitecture so\nit's incrementally improving but in\naggregate\nthe architecture is improving a lot and\nin this way\nthe architecture can actually be the\nsame whether you're doing\num classification or entailment\nsimilarity or multiple choice\nyou actually end up with the same um\nwith the same architecture\nbut unfortunately um in 2018\nyou still needed uh task specific data\nsets\nand you still needed to fine-tune uh\nyour\num your model against\nthe task task at hand um\nand because you can do uh you can\nbasically use the same architecture\num then um this is a substantial\nstep towards making this practical but\nthe problem is\nthat if we need task specific data sets\nthen usually we have only very small\ndata sets\nand if you have a really really strong\nmodel that can model\njust about anything and then you have a\nreally really narrow training set of\ntraining examples then you are very\nlikely\nto have your model find correlations\nthat don't actually exist in the real\nworld\nspurious correlations they call them so\nthis is a big problem\nin 2018. in 2019\nopenai introduced the generative\npre-trained transformer to\nthe tbt-2 in the in the paper called\nlanguage models are unsupervised\nmultitask learners\nin this they actually dispense with\neverything that is\ntask specific and they compensate for\nthat\nby using a huge huge transformer with\n1.5 billion parameters\nuh so is that precisely the same as\nsaying that there are\nuh 1.5 billion nodes in the neural\nnetwork\nbut you can imagine roughly that order\nof magnitude\nthe uh alec redwin in this paper\nactually in the abstract\nhe notes the following increasing\ncapacity of the language model improved\nperformance\nin a long linear fashion across tasks\nand there is an old rationalist\nvirtue called more daca which means that\nthis is three muscles who pointed that\nif you have something that appears to be\nworking then it's not a question of\nwhat if you would want to scale it up\nbut you actually need a really good\nreason to not scale it up\nif it seems like that is something that\nwould work and that's something\nof course that um that is in fact\ncounterintuitive to a lot of people\num but apparently not to the people who\nbuilt gpt too because they built\ngpt 3 the generative pre-trained\ntransformer\nversion 3 of dash 3.\nso let's talk about that and talk about\nfine-tuning as well so what they did is\nbasically\nthey took the uh same architecture as\ngbt\nii again with these tiny variations in\nthe architecture\nthat um that are always being being\nfound\nand they they basically scaled it up\ngave it a much larger\ntransformer a much greater amount of\ndata\nboth in how many examples and how much\ndiversity and then they trained it for\na lot longer um\nso that's basically the thing they did\nbut in this paper they also\nuh explore um different settings\nfor how this model can uh\nperform in particular they um have a\nspectrum\non uh how much fine tuning or task\nspecific things\nto use so at the uh at one end\nwe have fine tuning which is um you have\na\ndata set that is supervised where humans\nhave looked into it and said\nthis is an example of what we want um\nand this is something\nyou could in theory do it with gbt3 they\nhaven't done it\nbut they think that would be promising\nyou can do future\nlearning where there are if you get a\nfew demonstrations um but um\nuh when you actually need to juice it\nbut you don't\nchange the model itself when you give it\na fuse\na few examples um in gt3 they have what\nis called a context\nwindow how many examples that are\npossible\nand with the size of model they can have\nlike\n10 to 100 um examples at most\nuh more than that is out of these\nthe attention of the model and\nthere's a one-shot learning where\nthere's one demonstration\nand then also a natural language\ndescription of the task\nthis is of what humans use and then\nthere's a serial shot\nwhich is just giving a task\nwithout any examples and this is\nsomething that a lot of humans struggle\nwith\nin a lot of contexts um so um\nthat's of course on the other end of the\nspectrum um\nand in the paper it's claimed that one\nshot is the first comparison to human\nperformance\nand that might even be that might be\ntrue but\ni think in general people don't really\ncare about fairness\ni can see it from of course a\ntheoretical point of view that's\ninteresting\nbut i think in practice uh the flu shot\nis\nfar more interesting because uh if\nif you have the resources to do it one\nshot then you almost certainly have the\nresources to do\nto do a few demonstrations\nso what training dataset did they use on\nrecall or\ni don't know maybe not recall but gpt 2\nwas trained on\n40 gigabytes of data which is quite a\nlot\num but for this they use the common\ncrawl data set\nwhich is basically everything that has\never been written on the internet\n1 trillion words which is 45 terabytes\ncompressed text they\ndid some filtering on this so just to\npair it down to a\nhalf a terabyte or something like that\num\nand they are really you know cool about\nthis uh\nthey just throw out 99 of the data\nbecause there is so much\nso if it doesn't you know look like a\nsentence in some primitive way then they\njust throw away because there is\nso much data and um\nyou might want to compare this to the\nconcept of\ndata overhang an analogy to a hardware\noverhang\nfrom boston's book super intelligence\nthis training data set was certainly\nsufficient\neven the very largest model in gt3\nnever had to be to use the same sentence\nuh two times because there is just so\nincredibly much data available um\nso\none of the things that are different\nalso in the training compared to what is\nnormally done\nis that they have a larger model that is\nstrange on\ntrained on fewer tokens compared to what\nis typical in machine learning\nthere were an interesting bug with\nobviously one of the few things that you\ndon't want to train\nthis uh dbt 3 on is the set\nof solutions to standard tests\nbecause if you train it on the solutions\nto the standard text\nthen it might just look that up another\nforce cheating um so they tried to avoid\nthat but actually they failed a bit and\nthere's a\ndiscussion about how they tried to get\naround this like\nit cost quite a bit to uh to train this\nmodel so they couldn't just repeat it\nbut this training was provided by\nmicrosoft who i believe is a pretty\nmajor sponsor of\nopen air so\nhow did they do the evaluation well they\ntook and k\nwhich is like a number like uh require\ndepending on how much is required for\nthe test uh for conditioning\nand uh because they did this on a huge\namount of different\ntests then the way that this evaluation\nactually has a lot of um of variations\nand details\num and um in some cases\nfor some of these benchmarks then the\nactual results are not available as\nsomething you can download\nyou can only query a server through a\nweb service\nto figure out what is is my\ngeneration actually correct\nso the x the model that's called tpt\nthree\nhas a number of parameters uh well hyper\nparameters\nwith how many how large is the neural\nnetwork how many layers\nand there were these attention heads etc\nand\nthese are the hyper parameters that are\nbeing used\num and one of the things that\nthey they note is that these numbers\ndown here\nactually don't matter very much of\ncourse the number of parameters matter\nbut it's not really strongly sensitive\nto how many heads you have etc\num i think that's that's interesting\nbecause i'm\nboth interested in the things that\nmatter and the things that don't matter\nand it appears\nobviously that having a huge amount of\ncompute matters a lot\nthe transformer architecture matters a\nlot things like\num incremental tuning and thing the fact\nthat this doesn't matter\ni actually also find it's quite\ninteresting\nnow let's talk a bit about how dbt 3\nlearns and they're using something\ncalled the the authors called\nmeter learning which is where the model\ndevelops\nand it learns how to learn have a set of\nskills and patterns um that it\nhas a training time and then it uses\nthese\nwhen it's a time for for the test for\ninterference\num to either directly recognize\nthe the task or adapt to that\nso the way this works during training\nhere is\nthere is an outer loop and an inner loop\nso on the outer loop\nit's using you know rather standard\nstochastic gradient descent\num in in in this pre-training\nand then the inner loop it is learning\nsomething\nuh things in specific contexts like\nadditions or\nuh flipping letters or\ntranslations and things like that and in\nin these\nwhile it is learning um\nit gets results that are really really\npoor compared\nto uh to something that is fine-tuned\nbut but it does this a lot and it learns\nuh a huge amount of um\nof these uh uh skills that that it will\nlater be able to\num to put into use\nso let's see for the actual uh result\nhere\nnow um this is the um\naccuracy on a large number of benchmarks\nsome standard and not so standard\nbenchmarks for\nhow good is this uh model at\num doing things that humans can do\nand um as we can see there are a lot of\nthings\nthat um and down here we have the uh\nthe size of the neural network basically\nthe number of parameters\nand as you can see there is a trend\ngoing upwards\nboth with a serial shot learning\none-shot learning\nand few shot learning it does seem to\nsomewhat increase and\non aggregate whenever you increase the\nsize of the model\nyou get a substantial performance\nimprovement\nand um in particular one of the things\nyou can\ndirectly measure from all this is what's\ncalled what they call\nluck loss um and um\nthis is uh something that's used in\nmachine learning and and\nthis is a a measure for how good your\npredictions are\nand according to this um\nlock loss your performance increases\nquite linearly with how um\nhow good your your guesses are so let me\njust\nquickly give a hint of what log loss is\nlike if you um have an algorithm that\nassigns probability\none that is 100 certain of something\nthat turns out to be the truth\nthen you want a loss that should be zero\nin order to train your\nalgorithm and if you assign really low\nprobability something that is the truth\nlike\n0.01 or something like that then you\nwant a huge\nloss and the mathematical function that\nsatisfies this is the negative log of\nprobability\nin the article they call it just log\nlast they mean the negative log\nof course but yeah\nso this is how it looks when you scale\nup to a 13 billion parameters\nhow about in gv3 d3 when you scale up to\n175 billion parameters\nwell you get this and this is pretty uh\nuh broad um improvements\non basically all the tasks um\none of the things that i noted in\nparticular\nis just how many of the tasks that just\ngo a number of them go straight up to\n100\nso it means that um i haven't looked\ninto which does this\nbut apparently a number of tasks just\nuh have per have human level performance\nand you know there seems to be a lot of\nthings where\num the tasks were basically impossible\nto solve\nat 13 billion parameters and you know\nsome of them are\ngo go really far up that seems to be you\nknow not solved\nbut uh really improvements are made by\ngoing to\nfrom 13 to 175\nthis might not be that uh unexpected\nbecause\nthe the learning uh that you do in\npre-training\ninvolves a lot of very varied skills\nand the more things you have to put into\nthe model\nthe more difficult it becomes but if the\nmodel is larger it can contain\nmore skills and that that's why\nyou should kind of expect that this kind\nof meter learning is something that\nimproves dramatically with scale\nso let's go into one particular line\nfrom this\nthe uh from one particular test which is\nuh removing extra characters so let's\nhear\nthat task in particular this you have a\nword like succession here um and then\nyou add air on every other\nspot you add a random character and then\num you'll see if the ai is capable of\nfiltering that out\nand here you can see that basically um\nwith 1.3 parameters it basically can't\nfigure it out\nit gets slightly better if it hasn't\nsome more examples\n10 examples here one example here\nand you can see um with 175\nbillion parameters it just performs\nreally really\nreally well even without a prompt\neven if it's just given the word\nsuccession and\nthen it can indeed um figure out the\nnext word where it should be in\nsuccession which is\nyou know kind of impressive\nso in this example uh dairy amadeus\nadmits this is a particular striking one\nbut it is engine\na representative of a general trend\ni feel this is not one of the\na really good example for talking about\nnatural language processing\nin that this is something where you\ncould\nyou know just removing every other\ncharacter is something that could be\ndone by a really really simple\nprogram and um\nthis specific task of removing\nevery other language is something that\nis so simple that it might\nuh that might you might expect it to be\ncontained into the model\nat some point so that it would just be\nable to you know\nsolve it with 100 every time\nso what's the conclusion from all this\nwell on all these tasks we get\nsome really promising results uh\nsometimes even competitive with\nthe state of the art by fine tuning so\nthis is some really impressive results\nand it seems indeed that\nthe more er\nthe larger the model is the greater the\ndifference between\nzero one and few shot performances uh\nso this could be argued to mean that\nwhen the model is larger it has better\nmeter learning\nthey're not really willing to strongly\nconclude this\nbut i think this\nthis gives a fairly good indication that\nthis might be true\nand indeed the title of the paper that\nlanguage models are few shot learners\nis also hinting of a very general\nconclusion\nso one of the things that i think would\nbe interesting to discuss\nis last week or two weeks ago we talked\nabout\nwe saw bengal finkel saying the\nfollowing that if an ai has the\ncapability of understanding natural\nlanguage\nit will be using this understanding for\nthe interpretation of its goal and\nuh if we uh look at gbt3 then this is an\nexample where\nat one level it's obviously false\nbecause tpt3\nhas precisely the goal of predicting the\nnext word\nit doesn't um\nthing it doesn't um use common sense\nin figuring out if this should be its\ngoal it just has this goal\nbut it has um common sense i would even\ncall it\na moderately rich model of human\ncognition\nbecause this is something that can be\nused for predicting\nwhat the next few words that a human\nwould be you would use\num and i think this is an interesting we\nwill probably get more clarity on this\nquestion\nin the rest of the paper um and another\nthing that i would\nthat i also couldn't stop thinking about\nis that if you have something that has a\nreally rich\nmodel of human cognition like qt3 can\nunderstand\nif sentiments like that a human is angry\nor sad\nor the this text is written by a sad\nperson or something like that\nand if you have something that\nunderstands sentiments really really\nwell\nthen you have the feeling that well it\nmight\nnot just understand what uh sadness\nmeans but actually be able to feel\nsadness\ni think um i think that's unlikely but i\nam\nnot really 100 confident in this and i'm\nnot really sure\nhow you can have 100 confidence in that\nconclusion\num but i expect so a lot of people will\ndisagree with that\nthat is all for today thank you and see\nyou next week", "date_published": "2020-09-10T21:00:56Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "e42722eca789fd1948ec5da0aaa2b464", "title": "244. Democratising Risk 1", "url": "https://www.youtube.com/watch?v=VqXulKAcjDk", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session two hundred\nand forty four in the ascii.com reading\ngroup\ntonight we'll be discussing the first\npart of the article democratizing risk\nin search of a methodology to study\nexistential risk by carla so creamer and\nluke kim\ncarla so creamer works at the future of\nhumanity institute\nand you camp at the\ncenter for study of existential risk\nthis article is a couple of months old\nand we'll only be talking about the\nfirst part\nof this article today\nhopefully we'll be able to cover the\nentire rest next time otherwise perhaps\nwe'll have to split into a couple of\nmore sections\nfirst i would like to say thank you to\nthe authors for critically examining the\narguments for what we do i honestly\nbelieve that it's really important to\nget critical views on\nboth ai safety and existential risk work\nin general because this is a\nan area where\nit's just very difficult\nfor a person to be able to explore both\nsides of the story and that's why i\nbelieve it's very valuable that other\npeople are taking the time even uh\nwhether they uh uh\ndo it as like red teaming or\nbecause they sincerely believe that\nthere are problems i believe it's really\nreally valuable\nand of course since i'm summarizing\nsomething that\ni\ndon't necessarily agree with that means\nthat\nthe summary when i summarize things i\nomit a lot of qualifiers and things and\nthat might make it so much strong manage\nand i apologize for that\nalso\nat the time when i was\nwriting this\nthere was a story that russia had placed\nthe cheating nuclear forces on high\nalert and mitchell's gave one in 20\nprobability of people dying in nuclear\nexplosions\nso um\nthat obviously biased me a bit more\ntowards more immediate short-term\nconsiderations with regards to\nexistential risks\nand finally um i\nwill if the authors or anyone else see\nthis they will notice that i'm extremely\ncritical i am very critical with all the\ntexts last week we have some rather good\nwork from anthropic that i also engage\nvery critically with so it's not just\nthis\nso the first argument being presented is\nregards to the standards of uh research\nin existential risk studies and it is\narguable that it should be held to a\nhigh standard and\ni am\nthis can be interpreted in multiple ways\nthere is\nthe\nsymbol saying that we want the research\nto be of high quality i certainly\nbelieve it should be and i think just\nabout everybody would want that but\nstandards doesn't actually mean that one\nof the ways\nwe\nanalyze standards is regards to how much\ndo we put in\nand from the public side the funding for\nextension research from research into\nextension risk is extremely low\nand that should affect our standards\nlike if we imagine that you cut the\nbudgets for the hospitals with 50\nthen the standard of care you would\nexpect would be much much lower\nso\nin this way it's it's not entirely clear\nwhether they are they're just talking\nabout the uh the quality or they are\ntalking about the standards\nand the arguments given for requiring\nhigh standards are threefold first it's\nan ambitious field and it could affect\nthe lives of many and finally\nscholars\nwho changed the trajectory of global\nsociety\nand this to me is\naffecting the lives of many that's the\nargument for high quality basically\nand the other two perhaps could be\nconstrued as\nrelated to the standards but it's um\nsomewhat more confusing to me at least\nperhaps some of the same uh confusion\nhappens when we talk about the\nchallenges that essential risk studies\nfaces\nwe start out with how can it be\ninclusive of all our preferences and\nvisions of the future and of course\nthat's something that in alignment\nresearch in particular has been studied\nand discussed very very extensively like\nuh on a bit more on the object level\nlike how can we make a super\nintelligence that's aligned with our\nquery extrapolated or whatever um\nand another challenge is avoid baking in\nsubjective assumptions\nuh into our analysis and that's\nsomething that i think researchers in\ngeneral can't avoid making this kind of\nuh analysis completely objective is\nprobably too high bar um and\nprobably not really a a desirable way in\norder to get the best possible estimates\nthen how uh there's the argument that it\nwill affect people who don't share the\nvalues of the researchers well essential\nrisks are of course unique in that they\naffect everyone in a very directly\ndirect way\num\nso again we are seeing somewhat of a a\ndifference between the subject and the\nthe object in the meter level three\nother considerations also make this\nclear how will researchers come\nconduct complex risk assessments deal\nwith uncertainty and different levels of\npossibility and evidence and this kind\nof thing and this is\ni'm when i read these texts i become in\ndoubt because um\non the object level people who are\nworking with trying to uh\nforecast ai timelines or something like\nthat they obviously need to deal with\nuncertainty and that's like the core\nthing the object thing that they are\nactually working on\num\nit is also possible just to state that\nexistential risk studies as a field\nneeds to have special ways special meter\npractices around this um\nthat\nthey for some reason needs in addition\nto these object level considerations um\nbut\nif that is what the authors are trying\nto say then we need some kind of reason\nlike why is this in particular a\nrequirement for existential risk studies\nand for like cancer research and\nwhatever\nanother perhaps more interesting issue\nis we want to ensure that um these the\nstudies\nis misused for dangerous actions uh in\nai risk the classic dangerous action is\nthe two proper sparse of a model of\ncapability and alignment research\nand\nwhere it's well known that there is a\nsubstantial overlap between the two\nin general ensuring known issues is\nsuper super hard\none challenge that the author don't\nmentioned is of course the central\nchallenge and that is to reduce of\nexistential catastrophe and i think that\nit would probably\nhave been wise of the authors\nfor reasons i'll show later to be much\nmore explicit at this point\nit is indeed possible that there is some\nkind of trade-off between what level uh\nthe risk is uh democratized\nand and toward a level\nit is uh\nthe risk is just lower\num\nperhaps we'll get to that later but it\nwas not in the first three chapters\nwe'll instead talk about the techno\nutopian approach\nthe technotropian approach is based on\nthree\npillars of belief transhumanism total\nutilitarianism and strong long-term ism\ni think i need to clarify one thing uh\nagain it's possible that i've totally\nmisunderstood this but when they when\nthe authors use the word approach i\nthink the word that they are actually\nmeaning to use is argument like this is\nan argument why extinction is bad\nand so the techno-utopian argument\nagainst\nexistential risk is that\nas some kind of technological maturity\nwe could have a huge amount of\nutilitarian value and if we don't get\nthat then that's almost as bad\nopposition is an existential catastrophe\nand so we have a really strong moral\napplication to ensure that we get to the\nsecond utopian future even through\nexceptional actions\nand in in this case uh\nuh as\nif this is an argument against ex\nexistential risk then it\nit is just one\nof many i think there are many possible\nways to argue against\nextinction like you could argue that\npeople's performances are being violated\nand things like that so this is one\nargument among many\nbut it is especially in the sense that\nessential risk uh studies originated in\npeople who were talking about the\ntechnology natural approach and that's\nactually somewhat odd i uh\ni noticed i'm a bit confused like why um\nwhy don't other moral philosophies uh\ncall out and explicitly say hey we\nshould really ensure that the long-term\nfuture ends up in a nice place\nbecause as the authors point out there\nthis is something that kind of came\nabout in this in a\nstrange way where people were exploring\nthe taking utopian approach and um\nfrom that uh\nrealizing that\nit was very important to look into\nexistential risks and then of course\nfiguring out that there were substantial\nproblems\num but\nit\ni think at this point\nto me at least um\nthe origin of the field\nis\nof\nuh questionable uh importance i'm not\nsaying it is of no importance it's just\nthat the burden of cruise flies squarely\non the on the authors to show that there\nis a problem because um\nin general shooting the messenger\nafterwards isn't really relevant like\none analogy i just came up with is that\nthere is some astrologer who might be\ncrazy maybe not might have other moral\nfailings but he told you to look up and\nyou look up and there's a meteor heading\nfor you and so if that happens um\nthen the goal should be to deflect the\nmeteo and whether that astrologer is\nactually crazy or in the art group or\nhas\nevil or whatever that doesn't matter\nvery much the focus is on on the\nasteroid coming for us right\nso for me personally i don't care very\nmuch about this argument against\nexistential risk um but i think uh in\nthe interest of this uh of um getting\ninto this i expect that i will be\ndefending this techno utopian approach\nsomewhat more than what i actually truly\nbelieve\nalso with regards to\nthe origin\nlater in the article\ntoby or is actually quoted as saying\nthat other\nother value systems ethical material\ntheories can uh\nbe used to argue that extinction is bad\nand\ni agree that but it's uh\nit's still strange that the utilitarian\ntoby or is saying this and not like are\nyou starting like if virtue ethics\nimplies that it's important to avoid uh\nextinction you would expect someone else\nto have figured that out over the past\ntwo thousand years\nso what are the arguments for focusing\non the chicken youtuber approach\num\nthe authors argue that it is by far the\nmost influential approach or argument\nwithin this field\nunfortunately they don't really present\nany arguments for this\nand\nso from my personal intuition which\nmight as well be good um\ni think if you ask especially as you get\ncloser to the op-ed level\nespecially as you get close to the\nobject level you will find people with\nwho don't really care that much about\nethics like\nwhy is nuclear weapons bad if you ask\nsomeone involved in\ndisarmament why nuclear apocalypse is\nbad they might not have a good answer\nfor\nyou uh excuse me\num\nsome people will probably try and make\nan argument based on like altruistic\nconsiderations and some people might\nalso\nhave your plane uh self-preservation\nwhich i think is a perfectly fine\nargument for\ntrying to avoid extinction\nthe problem is this is an uh\nthe the moral values in the chicken\nutopian approach might be embedded in\nthe analysis\nand that could be problematic to me\ncertainly if it means the um\nthe uh analysis suffers from it and the\nauthors argue that they will later not\ntoday unfortunately but in the later\npart they will show examples where um\nthe technology and approach leads to\nconclusions which in fact increases\ncatastrophic risk and that would of\ncourse be really bad and i\ni really like if this is called out\num\nlet's go back to the\ndefinition of existential risk here i\nfound boston's original\ndefinition of an existential risk\none that threatens the premature\nextinction of earth originating\ninsulting life or the permanent and\ndrastic destruction of its potential for\ndesirable future development the\nbuilding is in the origin\nand um\nthis is described by the authors as um a\ntechno-utopian definition\nso of the three pillars\num\ntotal utilitarianism as far as i can\ntell there is no social utilitarianism\nhere there is perhaps a bit of\ntranshumanism in that there's an earth\noriginating intelligent life\nthere is perhaps a bit of long-termism\nif you really\nabout you know desirable future\ndevelopment but um\nbut calling this uh the uh this uh\ntechnical utopian is to me uh\nreally really questionable it's uh\num yeah so um\none of the things the authors argue is\nthat there are no other definitions than\nthis at least not uh\nin vice produce and i think that's true\nbut i also think that a lot of the\npractitioners of the field just in\npractice um\ntake the first half of the definition\nlike the bolded part and then they that\nreplace earth originating intelligent\nlife with humanity so existential risk\nis once that threatens the premature\nextinction of humanity\nat least i think that's how people do i\ndon't actually know\nthe authors make some rather strong\nclaims about how this is embedded\nincluding that it characterizes almost\nevery existential risk text with\nsignificant public profile i think that\nis\ndepending on precisely what it means\nthat the argument characterizes um\nthat seems rather wrong one of the\nexamples that i think people would\nexpect\nto do this would be the book super\nintelligence by nick bustrom which does\ninclude this argument against\nexistential risk but hidden away in\nchapter six and um\nand put inside one of these boxes of\ncustom views to say that like this is\noptional reading is not really necessary\nfor the argument to carry\num\nthere's also an argument it's also\nclaimed that there exists people in\nextension risk study who subscribes to\nthis technology approach and i think it\nis certainly clear without any doubt\nthat there are people who accept this\nargument but\nthe authors don't actually claim that\nthey're that it's the primary argument\nand um\ni mean\nit that should be possible to find out\nright you could literally ask bostrom to\nreveal existential is most bad because\nthe\npeople die or because then we can't have\nthis glorious future\nso moving on a bit more with the\ndefinition of if we take the last part\nof the definition which i think is the\none that um\n[Music]\ncarla and luke are really uh\nattacking like the permanent and this\ndrastic destruction of humans is\npotential for desirable future\ndevelopment\nthey claim this as an example of\nsomething that would be covered by the\nexistential risk definition humans\npersist sustainably and\nequitably for the next billion years\nwithout major technological progress\nnow on the face of it uh\nthat sounds desirable so by definition\nif it's uh something that means we can't\nhave a decibel future then by the\ndefinition it's not really covered\nthe word desirable does indeed imply\nvalues and the classic thing\nlike disciple can mean different things\nbut um\nthe thing that it could mean is that the\npeople who are there are happy to be\nthere\nis it possible that there is something\nin this definition that isn't really\ncovered like is it possible for instance\nyou could have like ecologically aware\nliving sustainable and equitable like\ncommunism but without the desirable part\num i think it is possible it is possible\nto have\nuh in theory to think that you could\nhave a society like\nif you are a utilitarian i am a\nutilitarian and then whether it's\ndesirable as a society depending on like\nother people that are heavy seems to me\nalmost like an action but i recognize\nthat for most people\nmost people are not utilitarians right\nso something like that might indeed be\nwhat people want\num to give an extreme example someone\nlike pop pot would be one who would be\nhappy with an ecologically aware\nundesirable uh communist\nsociety and i think that that's of\ncourse a very extreme example uh but uh\ni i think there are a number of people\nlike most people are hugely serious\nright so there must be a lot of people\nfor whom the ethics don't aren't really\nbased on people being heavy\num i think in practice although that the\nreasons why\nuh\nthe reasons for the destruction of of\nthe potential of humanity is almost\ncertainly going to be something that is\nreally bad\ni think what boston is thinking about is\nlike something that kills almost\neveryone destroys the planet or a\nparticular stable or authoritarianism is\nlike the the thing that he's worried\nabout\nand\nof these things like total destruction\nof the totally\nand stable authoritarianism these kinds\nseem really really bad for me\nso\ni'm happy to include this as an\nexistential risk but i recognize that we\nare making a valid judgment here\nthe next complaint is on transhumanism\num that there is value in exploring\nposthuman and transgender modes of b\nuh boston has uh uh\nthere's a quote from boston here that if\nwe can't do this that may itself\nconstitute an existential case\nsensitivity so question is using the\nword may and the authors are\nconcluding from this that preventing\nexistential risk is not primarily about\npreventing the suffering and termination\nof extension who\nuh but it's focused on uh trying to get\nto this techno utopia i think it's just\nplainly a\nmuch too sharp um\nconclusion you can't really from\nboston's carefully may conclude that\nthis is what it's primarily about right\nthere are many reasons why extinction is\nbad\nstrong long-term yes\nthe thoughts that um\nthe um\nthe values the the\nthe value of the long term uh\ncan and often took an extreme degree\nwill overshadow the values right here\nand now and uh the uh\nis accurate but that it doesn't give any\nclear guidance on when we should process\nthe living humans of today\nand i think it does in fact\nit does give we should prioritize uh\nhumans today to the extent that this\nhelp us in the long term which in\npractice almost always will right and\nand\nin particular uh you there are it's very\neasy to find principle guidance from\nconsequentialist utilitarians uh it's a\nlot harder to be a perfect consequential\nutilitarianist\nbut they they certainly know what what\nto do right you just calculate\num so\nyeah here's the definition of\nlong-termism that might be an ethical\nimperative to select the choices that\nare that will have the best effect on\nthe long-term future\nand that's often\nreducing existential risk but there are\nsome underlying assumptions\nthree are mentioned here then we will\nhave continued technological development\nthat will settle the stars eventually\nand the future people will be heavy\nand i think uh the the full taking\nutopian approach assumes all three but\njust a strong long term isn't just\nthis doesn't really matter that much the\nkey thing is will people in the future\nbe heavy\nand that's uh something that needs to be\num investigated that has been\ninvestigated and listened and should be\ninvestigated more because it is in fact\nrather important\nthe extent to which this is settled is\nuh called into question by the authors\nsign three\nwords one what's wrong with human\nextinction on the survival of humanity\nnow i haven't read those um but to the\nextent that they call into question\nwhether or not extinguishing is bad i\nwant to claim that it's an extremely\nniche\nand that it is just completely accepted\namong just about everyone that\nextinction is bad period\nrepresentation um the technology\nargument is not uh representative like\nat all of what humans right now believe\nthat doesn't necessarily say whether\nit's right or wrong um\nactually according to paganism we would\nexpect it to have at least some\nuh to be some evidence\nthe authors claim uh recently that it's\nrisky to rely exclusively on one\nunrepresentative approach\nand um\nit would be but it's not true that\nexercise studies relies exclusively on\nthis argument like the\nthere are some people who believe it but\nuh\nrelies exclusively is\nway way way too strong\nhe also suggests that we should have\nempirical studies on what are human\nintuitions on uh on extinction\nand\ni mean it's possible that uh there\nindeed exists a large amount of people\nwho would be happy about existential\nrisk but we need before we call for uh\nlike a lot of opinion polls or something\nlike that we need some to have some kind\nof um\nuh\n[Music]\nintuition uh like my uh right now my\nintuition is that almost everybody would\nprefer that there not be nuclear war um\nand uh\nwe need\nsome kind of argument some kind of pulse\nmust have been made somewhere for the\nauthors to say that this is something we\nneed to be investigating more detail\nthey don't try to argue that it's um\nthat the utopian approach is not\nrepresentative\nhere is the world transhumanist\nassociation in 2007\nwas 90 male and had a median age of 30\nto 33 years old\ni uh i i actually think it's way less\nrepresentative than this um and i'm\nsurprised that they also couldn't find\nbetter arguments like the meeting age is\n33 that could describe like literally\neverything right um\nso um but i do actually believe that it\nis not representative um and depending\non how you ask you could get at least\ntotal utilitarianism is something that\nyou can ask in your opinion impulse and\npeople will often say it's a good idea\nbut\ndepending on precisely how you ask it\ncould also be a bad idea\nuh i think the reason why people don't\ngive consistent answers in uh opinion\npolls is just that they haven't thought\na lot about it um but but i want to\nemphasize here that i accept and i\nbelieve that the\nsecond utopian argument is very french\nin humanity at large\nanother perhaps most serious charge\nagainst extinction risk studies is that\nof elitism\nwhich is it is claimed to be an\nillegitimate\nfield and the definition of ill-test is\nthat the researchers and champions are\ngranted decision-making powers in\nsociety\nuh\nand um\ni will claim that it is that the people\nlike bostrom annie and zukowski are\nindeed not granted decision-making power\nin our current society but that leads\ninto what\nscott also has called a bravery debate\nwhere everybody is saying that they are\nlike\nvery brave people standing up to the\nother people who are in fact the\ndecision makers and uh that doesn't\nreally uh\ni mean\nand i would like to avoid getting into\nthat um\nit's true depending on like if you're\nliterally pro uh extinction then yeah\nyou might be brave to stand up to the uh\nextinction people and feel like the\nanti-extinction people are\ncontrolling the debate uh at least um\nbut but to go from there to uh to say\nthat these people are decision makers is\nclearly true\nand i think the others also\ndon't make the strongest version of that\nargument in that they are more like\nscholars but still claim that they are\nrapidly and intentionally growing in\ninfluence\nand to argue for this they\nhave a few examples of some politicians\nwho pay lip service and say oh yeah\nthat's extensive risk and then nothing\nmore i think\nto show that they have any kind of\ninfluence in society you need to do way\nway more clustered investors and then um\nan argument here that extension risk\nstudies is four steps removed from\nneoliberalism in that existential risk\nuh studies is\naccording to the authors very influenced\nby the techno-utopian approach which is\nagain\nrelated to something they call the\ncalifornian ideology\nwhich is related to\nneoliberalism\nabout um\nideology\nso\nlike to me this is um it can't be that\npervasive a cultural\nthing mean if if i've never heard about\nit\ni think it is in in practice in order to\ngo\nthat way you need to go like one more\nstep to say your the techno-utopian\napproach is related to silicon valley is\nrelated to capitalism is related to\nneoliberalism but this is a really\nreally bad argument right with like five\nsteps of separation you can literally uh\ni believe you can connect anything to\nanything and\nthe authors don't even um\nargue why new liberalism is bad they\njust use it as an applause light\nand\ni think this is really a really poor\nargument unfortunately\nwhat are the risks of this\nnon-representative view well granting\ninfluence\nto uh of potentially of our future to a\ntiny minority is problematic\nand i would agree if this what was what\nwe were doing if we were giving\ninfluence of\num of humanity to nick bostrom that\nwould be something we should think about\nuh but it's not like we are granting him\ninfluence he's just basically writing\nsome stuff\nand seeing who is written right and and\nalmost no one else is thinking about\nthis\nand um\nif we try to implement policies that\nreduce existential risk well then there\nmight be other interests that are um\nthat are overshadowed\num\nand and this would be true indeed if\nnick washroom was implementing policy\nbut he's not implementing policy the\npeople who are implementing policy are\nnot considering it at your risk at all\nand again the people who decide what\nrisks we should take and what risk we\nshouldn't take is probably narrow right\nnow uh and i agree it's probably narrow\nbut again the people who are actually\ndiscussing\ndeciding what uh whether we should build\nmore nuclear weapons do not include\neliezer youth castle\nand finally um one of the things that\npeople in the\nworking within the techno utopian\napproach have been writing about is\nmoral uncertainty and that is something\nthat the authors i think is really good\nand should continue and i also think\nthat's good so i want to end on a\npositive note\nthat is all for today thank you and see\nyou next week", "date_published": "2022-03-03T22:01:44Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "1f09a836a7d2e302142f6d534e5ed166", "title": "196. Ben Garfinkel on Scrutinizing Classic AI Risk Arguments", "url": "https://www.youtube.com/watch?v=_kNvExbheNA", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "Hello and welcome to session 196 in the AISafety.com\nReading Group. Tonight we’ll be discussing\nthe first half of the podcast “Ben Garfinkel\non Scrutinizing Classic AI Risk Arguments”\nby Ben Garfinkel, of course, together with\nHowie Lempel, Robert Wiblin, and Keiran Harris.\nBen Garfinkel is a research fellow at the\nFuture of Humanity Institute and at 80,000\nhours, we have Howie Lempel asking the actual\nquestions, as well as Robert Wiblin and Keiran\nHarris supporting. This is a podcast, and\nthis link here also includes the transcription,\ntranscript, and published on the 9th of July,\nrecorded about a year ago. So, some new developments\nlike GPT-3 are obviously not covered. We’ll\nbe discussing the first half, up to the 1:43:11\nmark, mostly starting at the 50 minute mark.\nOne thing I should point out is that it is\nnot precisely defined what constitutes the\nclassic AI risk arguments. So I’ve chosen\nto mostly define that as the “AI foom debate”,\nas well as the book Superintelligence. But\nwhether that’s entirely correct is a bit\nunclear. You could also argue that the classical\nperiod was before 2015.\nI’d really like to say as the first thing:\nA big thank you to Ben Garfinkel for making\nthis podcast and doing the work he’s doing\n- trying to really look into if the arguments\nthat we have for AI Safety are sound. I really\nstrongly appreciate that. Also it is 1 hour\nand 43 minutes of podcast, so when I summarize\nthis I have to do it by removing qualifiers,\nso if Ben Garfinkel says something like, “Arguably,\nA -> B”, then I’ll summarize it as just\n“Ben Garfinkel claims A -> B”, and I’ll\ngive a counter-example, and I won’t even\ninclude my qualifiers. So I’ll do that throughout,\nand that means this is kind of a straw-mannish\npresentation: Not of Garfinkel is actually\nsaying, but what someone could say if they\nwere making stronger statements. Another thing\nis that Robert Wiblin claims that Ben has\nprobably scrutinized classic AI risk arguments\nas carefully as almost everyone else in the\nworld. One problem that I had when I made\nthis presentation is that podcasts aren’t\nreally the best medium for having a text which\nyou need to engage very, very deeply with.\nA formal article can often be much better.\nAnd there were sometimes some sentences that\nI wasn’t able to parse, and that is of course\nalso problematic. There is some substantial\nchance that some of my objections here are\nbased on misunderstandings. I think Ben will\nagree that it’s very very important that\npeople write this thing down, and if Rob says\nthat Ben is the one person in the world who\nis best positioned to do so, then I would\nreally, really appreciate it if this could\nbe written in a more formal way, in a precise\nway. Right, let’s get on with it.\nThere is a general thought I have with this\nand it’s related to counter-factual value\nof the classic AI risk arguments. So we ended\nthe previous talk, Andrew Critch’s article,\nwith this quote: “The most useful form of\npessimism is one that renders its own predictions\ninvalid by preventing them”. And here, of\ncourse, we have Nick Bostrom and Eliezer Yudkowsky\nmaking these kinds of arguments. And this\nhas, indeed, had an effect. And I believe\nyou can trace a direct line from Bostrom and\nYudkowsky’s argument, to certainly OpenAI\nand quite a bit of DeepMind’s work. And\nI think a lot of the insights we have into\nwhere is the state-of-the-art of the AI right\nnow depends on these two organizations. And,\nin particular, our epistemic state is also\nstrongly influenced by organizations like\nLessWrong, Future of Humanity Institute, MIRI,\nand their work. So counterfactually if we\ndidn’t have that we would be in a situation\nwhere we had much much less information publicly\navailable. And that would of course make things\nseem much less smooth. So in that way, there\nare two questions being mixed up somewhat\nhere: Were the arguments correct when they\nwere first written down, first made? And are\nthe arguments still relevant? Because now\nwe are actually doing something about AI safety,\nand do the original arguments still hold?\nAnd of course that is what I’m mostly gonna\nfocus mostly on here.\nThe podcast starts with a description of AI\nand AI risk, and why effective altruists should\nwant to focus on this. And it’s quite good,\nI really like it actually, and I think it\nmakes a great case that AI is important, neglected,\nand tractable. And he also seems very positive\non AI safety more broadly. He points out that\nthe case for AI safety has been broadened\nrecently by some new arguments about political\ninstability and lock-in scenarios. I don’t\nthink we… I think we can find them in the\nbook Superintelligence but they’re being\ngiven low priority there. And it certainly\nseems much lower than what the effective altruism\ngives them right now. Ben Garfinkel however\nreturns to the classic arguments and the importance\nof understanding those, and really thinking\nhard against the weak points in those. In\nparticular among the classic arguments, it’s\nthe Bostrom/Yudkowsky arguments that’s been\nwritten down in a very very formal way, and\nthose should probably be prioritized in a\nway. And that’s something I really, really\nvery strongly agree with. I believe that figuring\nout… this kind of skepticism is something\nthat I’ve engaged very very much with and\nvery very deep with.\nSo one of the objections that people come\nup with is one called “painting a concrete\nrisk scenario”. And this is something that’s\nnot being done very much. And one thing that\nI noted here was when I linked to the transcript\non Facebook, and then Facebook added a picture\nto that, that wasn’t actually in the transcript.\nSo I looked where that came from. It’s a\nmeta tag, and unfortunately that indeed includes\na very concrete risk scenario - A dystopian\nvision of how we would end up if we actually\nlose. To me that was NSFL - not safe for life.\nBut I might be more emotional than most other\npeople with that regard. So the argument here\nis that the descriptions of AI risk don’t\nreally seem rooted in reality. And I think\nmost of them are not. I think Life 3.0 actually\ncontains something that is quite detailed,\nand, you know, seems quite grounded. And this,\nthe argument goes, is not really true of pandemics\nor climate change. And that, I’m not really\nsure. The descriptions of existential risk\nfrom global warming in particular, the ones\nthat I’ve seen, don’t really seem to be\nvery grounded in reality, and not very fleshed\nout at all, compared to the picture of, like,\na modest increase in global warming. So this\nseems like an odd thing, right? We have an\nexistential risk which you can’t really\ndescribe in concrete terms. And, that seems\nodd. But we do actually have theoretical reasons\nwhy we should be unable to predict how this\nwould go. And this is related to The Singularity\nand the problems of figuring out what someone\nwho is smarter than yourself would do.\nSome of the existing skepticism have a different\nfocus, that the original arguments don’t\nengage with how AI research is today. Because\nwe’ve had the deep learning revolution from\n2012 onwards, and AI was very different in\n2008 during the AI foom debate. And so, I\nthink the book Superintelligence suffers a\nlot less from this than the AI Foom Debate,\npartly because it’s newer. It seems moderately\nagnostic towards what methods could be used\nto achieve AGI, and he does... Bostrom is\nquite clear that Machine Learning is the most\nlikely path to AGI. And so the thing we have\nmostly seen is that Machine Learning has improved\nfar more than people expected. And whether\nthe other approaches toward AI are stagnating\nor if they are also improving towards AGI,\nI think that’s a good question and really\nhard to say, because all the focus is on Machine\nLearning, because things are moving really\nfast there. Ben Garfinkel makes the following\nclaim: “I believe reinforcement learning\ntechniques only get about two paragraphs in\nthe book Superintelligence”, and so I looked\nthat up. And I think this is kind of an example\nof the problem that I have with the fact that\nthis is a podcast. Because if this was not\na podcast where Ben has to make things up\nas he go but, you know, if this was an article,\nthen of course Ben would have looked this\nup and, you know, CTRL+F’d the document,\nand found over 26 places where it says the\nword reinforcement, and he would see that\nthere is a subsections called reinforcement\nlearning on the book, one page with two paragraphs,\nbut after that there is a lot more about how\nto use reinforcement learning techniques for\nvalue learning. And it goes into quite a bit\nof details actually. And I think this here\nis the formula that Bostrom ends up with in\nthe book Superintelligence. And I think it’s\nfair to say that if Bostrom had gone substantially\ndeeper into reinforcement learning theory\nthan this, he would have lost, well, more\nthan 99 percent of all readers, really. I\ndon’t think it’s reasonable to expect\nBostrom to go deeper into reinforcement learning\nthan this level, basically. So apart from\nthat, there is more to machine learning than\nreinforcement learning, so his history of\nAI emphasizes neural networks, and if you\nthink, neural networks and reinforcement learning\nare, you know, recently related techniques,\nthen I think it’s not really far away from\nhow you would write it in 2020. But of course,\nyeah, machine learning really took off in\n2012, and I think the book was finished in\nfall 2013, so he just got the very start of\na revolution there. And this might not really\nmatter a lot because the problem is that,\nsure, it doesn’t engage that much with how\nmuch is actually machine learning. But this\nis an argument that never actually cashes\nout into anything. So you can’t say or use\nthis as a reason for why other things in the\nbook Superintelligence are not true. And Ben\nGarfinkel is, of course, realizes that this\nis not a very good argument. But he’s still\nsympathetic to people who react dismissively\nto AI Safety arguments, in spite of the fact\nthat the arguments don’t really cash out\ninto anything. I’m substantially less sympathetic,\nright? If you have an argument that doesn’t\ncash out into anything then, I mean, then\nit’s just a poor argument, and if the people\nwho are working with AI are aware of this\ncriticism, and don’t really engage with\nit, I think it’s something you could criticise\nquite strongly.\nAnother potential problem is, one of the analogies\nfor, certainly, the intelligence explosion\nis the evolution process, in particular the\nevolution of hominids. This is something that\nhasn’t been written very thoroughly yet.\nBen says he’s not aware of any more than\na page long piece of writing that uses this\nanalogy to try argue for this discontinuity.\nSo the most detailed piece about this is written\nin AI foom debate. I tried to count the number\nof times wherever they said “chimp” and\n“hominid” and “evolution”, and this\nis written in like, many, many places, but\nin a somewhat verbose and informal sense,\ntrying to use this as an analogy, and “thorough”\nis not the… the AI foom debate is quite\nmeandering. It’s also something that’s\ndiscussed over 4 pages in the book Superintelligence,\nand this book also contains references to\nother people who have been working with this.\nI think more importantly, the book Superintelligence\nargues that this is a rather weak analogy,\nand probably you can’t use that for very\nmuch. And this is the reason why I and many\nother people… We are busy people, right,\nwe don’t really have the time to put a lot\nof work into something that we don’t expect\nwill lead to anything. And that’s something\nthat I think should... the podcast is… would\nbe nice to have more clarity here, that the\nevolution argument is actually not really\nin any way central to why we could fear an\nintelligence explosion.\nAnother analogy that hasn’t really been\nexplored according to Ben Garfinkel, is how\nmuch compute does the human brain use. Like,\ncould we get timelines by looking how much\nwould it take to emulate a human or something\nlike that. And compare that with how much\ncompute do we have right now. And this is\nsomething that, I would say, actually a lot\nof people have explored this really well,\nnotably Robin Hanson in the book The Age of\nEm has written a lot about this. And, unfortunately\nfrom this, I’m not really very happy because\nsure, if we can get some lower bounds, lower\nbounds would be worthwhile, but they seem\nreally, really weak. And I don’t think this\nanalogy is actually going to be very useful.\nBen Garfinkel says that maybe the fact that\nthe arguments haven’t been written down\nis something that caused him to disregard\nthem too much. And actually, no, I don’t\nthink that. I think it’s quite fair to not\nvalue arguments that haven’t been written\ndown as carefully as, for instance, the book\nSuperintelligence. But the book Superintelligence\nin fact has been written down in a way that\nis certainly sufficient. And this is why I\nbelieve that this is the key piece of writing\nwe should focus on, maybe focus even less\non the AI Foom Debate for instance.\nTo get into one of the main points that Ben\nGarfinkel makes against the classic AI safety\narguments, and that is what’s called Brain\nin a Box scenario, which is a specific AI\ndevelopment scenario that is purported to\nbe implicitly implied in the classic AI risk\narguments. And as you might be able to see\nfrom the screen, I think we’re at a straw\nman fallacy here, in that this argument is\nactually not one that is central at all, almost\nnot mentioned. And I had to dig deep into\nthe old sources of the AI foom debate to try\nto figure out why this would be relevant,\nand if anyone is actually using this.\nBrain in the box scenario is that there is\na time where we have some narrow AI progress,\nroughly like what we see now but nothing that\nreally transforms the world, and then relatively\nabruptly we have one AI system that is very\ncognitively similar to a human, to a brain.\nAnd from that, we get an intelligence explosion.\nThat is my understanding of the Brain in the\nbox scenario. But I might be wrong here, and\nBen Garfinkel describes it with roughly with\nthese words, but he doesn’t make a reference\nto anything, he can’t do that since it’s\na podcast and not an article. So I tried to\nsee where I could find that, and I tried to\ngoogle for it, and the best thing I could\nfind is the Foom debate. Where after the actual\nblog post, there was an in-person debate where\nEliezer Yudkowsky described this, and he uses\nquite different words, for the brain in a\nbox scenario. There’s nothing about it that\nwould take a day or a month, it just takes\na while, and the thing this Brain in a box\ncan do is reprogram itself, and whether it’s\ncognitively similar to a human is not mentioned\nat all. Similarly, Nick Bostorm has this vision\nof an Intelligence Explosion and talks a lot\nabout continuous improvement. It is not abrupt…\nthis intelligence explosion. And the thing\nthe AI system is doing is improving itself.\nIt’s not discussed at all whether it does\nanything else. One thing however that will\nbe discussed later is the Concept of Deception,\nthat one thing the AI system might do is to\nstart to conceal its abilities. But apart\nfrom this, “very cognitively similar to\na human” is not something that is described\nat all. So if I look, Eliezer Yudkowsky is\nusing this brain in a box but Nick Bostrom\nis not. So you could argue this if you put\nthe emphasis on BRAIN in a box, then it sounds\nlike the focus is on that it is like a human\nbrain, and I think another way to state this\nis to focus that it’s IN A BOX, so if you\nput the emphasis there, then it’s not just\nthe mathematical object, it’s scalable and\nyou have two boxes, things like that. And\nI think the second interpretation of the words\nbrain in a box as is the one that are used\nby Eliezer Yudkowsky.\nIs this discussed in Superintelligence? Well,\nnot really. That’s of course problematic\nif you are scrutinizing the classic AI risk\narguments, that it’s not included in the\nclassic AI risk arguments. So there is nothing\nhere that address the relative plausibility\nof something like the brain in a box scenario,\ncompared to something that is more smooth,\nor present an argument like why you would\nexpect something like a brain in a box scenario.\nSo part of it is clearly wrong because chapter\n4 of the book Superintelligence is called\nthe Kinetics of an Intelligence Explosion,\nand that is indeed precisely this. So the\nway that a single AI system undergoes an intelligence\nexplosion is indeed described in a very, very,\nvery great detail here. So I think that there\nis some kind of misunderstanding here, and\nBen Garfinkel actually means something slightly\ndifferent. So if I should take a guess, one\nof the things that Ben Garfinkel might be\nputting a lot of weight here is whether what\nis sometimes called a “seed AI”, that\nis able to improve itself but not able to\ndo very much else is really able to do other\nthings than that. Bostrom in the book Superintelligence,\nhe doesn’t describe this, he doesn’t really\ncare about this. But whether that might be…,\nI think right now, with GPT-2 for instance,\nit seems like these abilities actually are\ncorrelated to such an extent that it’s quite\nreasonable to expect that it might be able\nto do poetry, if it’s capable of writing\ncomputer code. It’s also possible that Ben\nGarfinke doesn’t mean this, but is talking\nabout an AI system even earlier than this.\nSo before it can self-improve. In this case,\nhe’s talking about this very early stage,\nthat’s something that’s described in the\nearlier chapters, You can find some of this\nin chapter 2.1 and parts of chapter 3, but\nI’m really guessing here so I’m quite\nunsure precisely what Ben Garfinkel means.\nHowie Lempel actually tries to put things\nback on track, asking the question: Assume\nthat among the things that these narrow AI’s\nare good at doing, one of them is programming\nAI, and so you end up with that leaping to\nhuman level AGI and then take of from here.\nSo trying not to focus on the very broad,\ncognitive things that a human can do, into\njust the task of programming AI. And unfortunately,\nBen Garfinkel dodges the question and instead\ntalks about that if you’re trying to do\nscience, then there are actually a lot of\ndifferent tasks in this instead of a single\ntask. For instance you need to create new\nhardware, well, if you need to do that physically\nthen you need a very long list of skills.\nAnd that is undoubtedly true. But the thing\nBostrom is worried about, and Howie Lempel\nis asking about, is the “simple” task\nof actually programming the AI, in particular,\nimproving the AI itself.\nThere is some talk about feedback around the\nhuman levels, whether the AI can outweigh\nthe contributions to AI progress for all the\nother AI systems. Again this is a very very\nbroad frame, like AI progress is much broader\nthan just improving a single program. Ben\nGarfinkel believes that if it’s able to\ndo that, something interesting must have happened\nbefore. But if that does happen, then the\nrisk is indeed substantial. I guess I could\nmake the intuitive argument here that I can’t\nprove, I think, but: Just about every program\nin the world can be improved with moderate\neffort. And from that reason, I believe that\nwith moderate effort compared to the amount\nof work that went to actually creating the\nprogram. From this claim it seems quite clear\nthat it is something we should expect that\nthe AI itself will be a program that can be\nimproved.\nAnother vision of the future is the Comprehensive\nAI Services model by Eric Drexler, where we\nsee capability increase without increase in\ngenerality. And there may be really strong\narguments that specialization may be favored\nover generality. And we might be able to see\nthat in AI. In a different world this might\nbe something that we see where something like\nGPT-3 doesn’t happen, but apparently when\nyou make something that can predict text like\nGPT-3, then apparently it can do both poetry\nand SQL statements. And whether we’re talking\nabout something that is really general or\nspecialized, in this case what we really care\nabout is the ability to improve one particular\nsoftware program, and that is something that\ndoesn’t really require a lot of generality.\nAnother thing that would be different in the\ncomprehensive AI services scenario is that\nwe will build narrower systems because of\nsafety. The book Superintelligence doesn’t\nreally assume that the person who is building\nthe seed AI that is undergoing the intelligence\nexplosion really cares a lot about safety.\nSo is it something that is likely to happen?\nIt remains to be seen. Ben Garfinkel probably\nbelieves that, seems to believe that it is\nmore likely we’ll have something very weird,\na mix of things. And I think that trying to\npredict the future is very hard, and the future\nis going to be weird in general, but we don’t\nreally care about the future in general. We\ncare about whether this particular AI, the\nfirst one which is capable of making an intelligence\nexplosion will actually do that.\nAnother scenario is called the smooth expansion\nscenario, which is not… it’s hard to figure\nout precisely how much weight Ben Garfinkel\nplaces in this. But that’s where we slowly\nsee an increase in how many relevant tasks\nthe AI can do and how general the AIs are,\nwhat time-horizons they are working at, how\nindependent are they. Once you see the first\nsystem that can do everything a human can\ndo, which is basically the brain in a box\nscenario, then maybe they are already better\nat most things. That might be true, but in\nparticular we care about the 6 cognitive superpowers\nin the book Superintelligence. Those are the\nones that are strategically relevant. And\nthe others are mostly not relevant. Of course\nin particular, very very concretely we care\nabout the AI improving itself. Ben Garfinkel\nhere has a statement that says: the first\nsystem that can do everything a human can\ndo might be preceded by superintelligent systems,\nand that’s kind of, just wrong by definition\nof superintelligence.\nIf we are in a world where AI development\nis more smooth, then we might have a lot of\nother… yeah, this is a direct quote from\ntranscript and I’m not 100% that I understand\ncorrectly, but if we are in this smooth world,\nwhere AI is improving gradually, than people\nare not so likely to be caught off guard because\nwe can do work ahead of time, we can build\ninstitutions, we will know about specific\nsafety issues, in particular because we might\nhave seen some of them before, things like\nspecification gaming. We’re seeing that\nalready and we might see more of those. I\nthink specification gaming is probably quite\nlikely something we’re gonna see more of,\nbut in particular we’re caring about the\nproblem that’s called the treacherous turn.\nI think Ben Garfinkel would return to this\nin the second half. But I’ll just quickly\nsay here that finding low level versions of\nthe treacherous turn... that seems really\ndifficult to have that happen before we get\nsoftware that is capable of, you know, improving\na particular software program. And so, there\nis some more discussion about whether the\ncapability improvement will be smooth in this\nway, and I believe that it could be smooth.\nBut this conception of that the AI should\ntry to hide its own intentions, that might\nbe a candidate for a strong discontinuity\nin the safety landscape.\nAnother model that is implied in the classic\nAI risk arguments is the race between capability\nand alignment. The argument goes something\nlike this according to Ben: And again, there\nare some of the… some of the sentences from\nthe transcript that just doesn’t make sense,\nso again, it’s possible that I’m misunderstanding\nthis. But this model have a steady creep of\nAI capability increase year by year, and I\nthink this is strongly not what the classic\nAI risk arguments say because in those, AI\ncapability doesn’t increase quite gradually,\nit is by that we have an intelligence explosion.\nAt the same time, the capability/alignment\nrace model has the AI goal progress, the alignment\nbasically, happening in a much more uncertain\nfashion. And this creates some kind of deadline\nin that we get capability before we get alignment,\nthen bad things happen. And we need to figure\nout what goals should the AI have, before\nwe have, as Ben calls it, “extremely intelligent\nAI”. And actually, as I read the classical\nAI risk arguments, we don’t really care\nabout the point, extremely intelligent AI.\nWe care about the point where the AGI is able\nto self-improve. Still however, this deadline\nmetaphor is one that is commonly used, it’s\njust we have a lot more uncertainty about\nhow fast the AI capabilities will increase.\nThe deadline metaphor has a lot more uncertainty\nin the classic AI risk arguments.\nNow for one of the key points: The entanglement\nbetween the capabilities and goals. Ben says:\nIt’d be hard to argue against the idea that\nthere’s a deep entanglement between advancing\nof goals and making AI act in a way we’d\nintuitively regard as intelligent. And, no,\nthat would actually be really trivial to argue\nagainst, because if we think about places\nwhere we see capability improve, we are thinking\nabout things where we have a benchmark, like\nchess, for instance. The goal in chess is\nto win, and in 1950 the goal in chess was\nto win, and in 2020 the goal of chess AIs\nis to win this game of chess, right? And if\nwe have benchmarks, something like ELO ratings,\nthen these benchmarks often imply that the\ngoal is fixed and if we have something...\nwe also talk about capability improvements\nlike, say, image recognition or, you know,\nall these kind of games that are commonly\ntalked about when we talk about that AI improves,\nthen in all these examples there is not any\nentanglement at all between the goals being\nadvanced and the capability of the AI. They\nare completely disjoint. And even if we take\nsome more general things like self driving\ncars for instance, right, Tesla probably when\nthey are programming their self driving cars,\nthen they probably do something, likely do\nsomething like, let’s say, cars should do\nlike the average human would do - except outliers,\nand in particular except the outliers where\nthe car crash into things. And they probably\ndo something like, you know, a bit more complex\nthan that. But the real challenge of self\ndriving cars is in the capabilities. The real\nchallenge is to have a model about what’s\ngoing on around the car, to have an actual\nrobust model of that. Ben claims that making\nsomething more capable and the after project\nof making it have the right goals often seem\nreally deeply intertwined. And it’s not,\nlike, it’s two separate goals. So selecting\nthe right goals is an alignment problem, essentially.\nTo me, these are quite different. If you try\nto create an AI, work on the goals of an AI,\nthen you try to say “work on this, and this,\nand avoid this” etc. where the alignment\nproblem works in a much more indirect way\nwhere we want the AI to, you know, work where\nyou have a pointer to the humans, the masters\nof the AI, and say it it should do what we\nwant, and try to specify that in a robust\nway. And this is quite different. And I’ll\nargue for the disentanglement in two parts:\nthe first is that even if we have a lot of\nprogress in alignment, then that won’t help\nin capabilities. We won’t get AGI just because\nsomeone comes up with the perfect solution\nfor the AI alignment problem. Because if you\ntake an AI right now, something that’s implemented\nin Python or whatever, and you say, “Oh!\nIt should do what I want!” Then the AI might\ntry doing that but it will obviously fail\nbecause we don’t have the capability to\nmake an AI that can look at a human and figure\nout what it is that we want. And a lot of\nthe techniques that are used in AGI, like\n“AI safety by debate”, if you try to implement\nthem with current AI capability methods then\nthat’s obviously going to fail, because\nwe don’t have AGI yet.\nThe other example that Ben uses is with a\nhouse cleaning robot. Programming a house\ncleaning robot is pretty hard right now, because\nit’s hard to specify a goal. And I will\nargue that, no, that’s actually not the\nreal reason why it is hard. But first let’s\ntalk about narrow and general AI, because\nthis is something that the classic AI Risk\narguments stress. And cleaning a house: is\nthat something that can be done by a narrow\nAI or general AI? One classic test of whether\nsomething is an AGI is the coffee test by\nSteve Wozniacki, whether you can go to a random\nhouse and make a cup of coffee, and it seems\nstrictly easier than actually going into a\nhouse and cleaning it. So this house cleaning\nrobot is probably something we’ll only have\nonce we have AGI. And that’s of course the\nreason why we don’t have cleaning robots\nright now. Not in particular because we can’t\nspecify the goals, but because we don’t\nhave the, we can’t make it do what we want.\nBen Garfinkel’s vision of this house cleaning\nrobot problem is we could do it with some\nkind of reinforcement learning where we hand-code\nthe reward function, but if we make the reward\nfunction too simple we get some side effects,\nit will ignore things that are not there.\nAnd it’s really hard to capture everything\nwe want in a reward function. I actually believe\nit is going to be rather easy to do this.\nNot really easy, but compared to all the challenges\nof a housekeeping robot, I expect this will\nbe a really minor one. You know, if you just\nsay “dust minimization”, sure, that won’t\nwork, so you can’t literally solve it in\nfive seconds. But if you, like, spend the\nday on it, you can get much much farther.\nIn particular, as long as the AI is narrow\nthen this kind of hand-specification is probably\ngoing to work really well. So, the major difference\nthat I see here is we need to just specify\nsome approximate instrumental goals like “don’t\nknock down the vase”, and in the alignment\nproblem we’re trying to precisely specify\nthe final goals of the AI. And those things\nare really, really different. And, yeah. And\nthen Ben Garfinkel makes an argument that,\nsure, this is a problem for AI safety but\nAI safety is more. Real housekeepers sometimes\nset the house on fire, and avoiding this kind\nof problem is something that is relevant for\nAI safety. And I strongly disagree here. I\nbelieve that this is something where, which\nthe classical arguments for AI safety make\na big point about saying that this is not\nwhat they’re about. This is not about ensuring\nthat a self-driving car doesn’t hit a pedestrian\nor something like that. This is, this is not\nrelated to this at all.\nIn Ben Garfinkel’s model there is an unhappy\nvalley that is required in order to have a\ndisaster. And the first is that if there is\nno progress on alignment, well, then we won’t\nuse an AGI. If there is a lot of progress\non alignment then we will use that AGI but\nit will also be great because there it will\nbe aligned. And the unhappy valley is where\nwe have enough progress on alignment, and\ncan get capability, but not enough that it\nis actually safe. And my objection to this,\nI believe I’ve stated a number of times,\nis that the framing is much too general: improving\njust one specific piece of software, and we\nmight even have a fixed benchmark. Ben Garfinkel\nmakes the following statement here, I think\nit’s like, quite likely to be false, that\nis that: only malicious or insane actors would\nmake an AGI pursue a narrow objective. And\nI think a narrow objective that can be specified\nreally precisely is to make a profit-maximizing.\nYou have a software program trying to… you\nknow, make sure that you get as much money\non this particular bank account while not\nbreaking any applicable law. Or something\nlike that. I think that is eminently plausible\nand we will get AGIs pursuing very very narrow\nobjectives. And also the fact that, you say,\nonly insane actors would do this seems unsafe.\nWell, I think the vast majority of AI researchers\nand AI users, they are not convinced about\nthe merits of AI safety at all. And I think\nit’s not just insane people who don’t\ncare about safety. A lot of really smart people\nare just basically… plainly not convinced\nof the arguments.\nFinally we get to the analogy of strawberries\non plates. According to Ben Garfinkel, Eliezer\nYudkowsky posed the following challenge: how\ndo we get an extremely superintelligent system\nto move a strawberry to a plate without blowing\nup the world? And this kind of framing doesn’t\nreally conduct the way that machine learning\nresearch works. I am not really sure what\nthe word “conduct” means in this particular\nsentence. I tried to look up whether that’s\nactually something that was said. It’s not\nin any of these classical things. The best\nplace I could find was Eliezer Yudkowsky’s\nTwitter where he has pinned something that\nwas very similar: “Place onto this particular\nplate here, two strawberries identical down\nto the cellular but not molecular level.”\nWhich is, I think, I think that’s quite\ndifferent but other people have used this\nanalogy, and I found someone claim that Eliezer\nYudkowsky had said this, but had a dead link\nfor the quote. So it’s quite possible that\nhe at some point said this. But I think the\nkey thing here is the framing doesn’t relate\nvery much to the way standard machine learning\nresearch works. And it’s indeed on purpose\nbecause the purpose of this strawberry on\nplates problem is to show that instructing\na superintelligence is very, very different\nfrom what we’re doing with our current machine\nlearning. And this state of machine learning\ntechniques cannot be assumed to help with\nthis particular problem.\nThat is all for the first part of Ben Garfinkel’s\npodcast. Thank you for watching and see you\nin two weeks.", "date_published": "2020-08-13T21:20:14Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "ff31a08cd9e2442f803885be0528ad4f", "title": "185. If I were a Well-intentioned AI 3,4", "url": "https://www.youtube.com/watch?v=qPKrTap4gPE", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "all right so welcome to the 185th\nsession of the AI safety reading group\nwill be discussing parts three and four\nor if I will were well-intentioned AI so\nlast time we saw that good hearts law is\nnot quite a law good Harting is only a\nproblem for values with certain\nproperties and the AI is unaware of\nthose properties because human values\ntend to have most of these properties we\nfear good heart like behavior so a non\nexhaustive list is on the screen don't\nworry if you don't read it all well\nintentioned AI should query the human\nwhether these conditions are relevant in\nsome cases this can entirely remove the\nperverse behavior so we've seen that for\nregression alerting and some extremel\ncode hurting we can do away with this by\nsay stating that the returns are rapidly\ndiminishing this is quite easily solved\nbecause you can just implement that in\nthe code or the AI can just mess around\nwith that fairly easily some other\nproperties that harder to deal with like\nthe true value function the true human\nreward being very greatly penalized in\nprobability is harder to deal with in a\nuseful general way that is as of now an\nopen problem so moving on to a summary\nof part three\nStuart notes an approach to extremel\ngood hearting to avoid the issue of\npushing the world into an extreme states\nwhen solving a problem AI you should act\nin a way that is similar to a human one\ncommon method would be to copy a human\npolicy\nthis is low risk and low reward you just\nget peak human performance Stuart\nproposes acting in a way with the same\nstatistical properties as a human policy\nthis means for example that there's no\nsupersonic rocket launcher for a\nbasketball game but you don't have to\nlimit yourself to a mechanical copy of a\nhuman arm still restrictive and choosing\nwhich properties to mimic is hard but it\nleaves room for superhuman performance\nas an example say AI u is meant to treat\ncancer you're given a reward for the\nnumber of cancer cells eliminated the\nhumans demonstrate a way to treat cancer\nby cutting them out with a scalpel\nthere's a few ways you could do this\napprenticeship learning as we noticed\nbefore is quite safe from extramural\ngood outing because you're just copying\nexactly what human does and we don't\nseem to push the world into extreme\nstates our morality is well adjusted for\nsurgery but there's obviously other ways\nto do this ways that might be more\nefficient according to the value\nfunction you were given laser surgery\nmight work well but it's not very well\ntested for cancer least I assume not\nacid can also work as a way to destroy\ncancer what the cells but in a very\ndifferent way to how lasers or scalpels\nmight do it we noticed before that a\nwell-intentioned AI is probably capable\nenough and motivated enough to come up\nwith various categories for solutions to\na problem so it should be able to notice\ntheir sheer difference of the approaches\nsince it's so different to a human\napproach surely the way humans have\ndemonstrated how to solve this problem\ncontain some information on the\npreferences as well it should tell us\nexactly why pouring acid on quarreling a\nhuman into acid is not a good way to\ntreat cancer now\nyou might recall that while I skipped\nahead a bit earlier another\njustification for why you don't want to\nto the apprenticeship learning way of\ndoing things is that humans provided you\nwith a utility function for removing\ncancer cells why would they bother to do\nthat if apprenticeship learning was good\nenough they could have just solved that\nusing supervised learning gorgeous\nreally well known so clearly they want\nyou to do something better than surgery\nso taking the different approach you\nmight recall that humans have defined\nextensional and intentional definitions\nintentional definitions are a set of\nnecessary and sufficient conditions for\nsomething to be of that type for a chair\nyou might try a nearly flat surface\nordered in a stable position by one or\nmore legs this may work well within\ncertain contexts but tends to be brittle\nfor example is a version stool a chair\nand I would say yeah it probably is this\nbrings us to the second kind of\ndefinition which seems quite common most\nhuman concepts extensional definitions\nare a set of examples say you have\n10,000 images of a chair since this is\njust a list of examples it's clear I can\ncorrelate with other extensional\ndefinitions for example list of\nfurniture will contain a great deal of\nchairs\nall of the different correlations that a\ndefinition has with other concepts make\nup a web of connotations as Stewart\ncalls it so AI you might reason if I\nlook at the web of kinetics in surgery\nhas and act in a way that preserves that\nI should avoid reaching extreme cases\nwhere the proxy fails me Stewart agrees\nwith this as it seems like he came up\nwith the idea\nhe argues that using this data to\nextrapolate should make AI judgments\ncloser to what a human would make this\ncounts both for technological judgment\nsay and moral judgments compare say you\ndid a utilitarian and a virtue ethicist\nsome utilitarians might feel\nexterminating life as laws be moral\nsorry animal life has flaws be moral me\nin case you're wondering who might have\nyou advocate that those kinds of people\nfall into the intentional moral\ntheorists I guess you could say on the\nother hand someone like a virtue\nethicist or natural law advocate would\nbe sharply against that and a lot of it\nwould be due to that simply not being\ncome behavior that a human considers\nmoral it's just not correlate rapport\nwith the ideas we have what code is and\nthey seem to have a point it's why I\ndon't actually advocate eliminating all\nanimal one Stuart comments this way of\nrestricting spaces of actions is like an\nimpact measure of course if we make it\nto is restrictive we're back at\napprenticeship learning as a side note\nthis idea is someone like that of\nquantizers that Miriah introduced in\nthat we rank actions according to some\ndistribution in this case how close\ntheir web of connotations are to that of\nsurgery and choose some topper Center P\nquant Eliza's rank action\nby utility assign them some weight\naccording to how likely humans are to do\nthem and choose the top P percent hence\nthe name to avoid good heart like\nbehavior because this ranking seems kind\nof arbitrary as and you just choose the\ntop 1% or the top 2% it seems a little\ninferior to Stuart's web of connotations\nwhich is built up in terms of meaningful\ncorrelations it should in principle be\neasier for the AI and humans to\ncalculate now AI you realizes that some\nof these correlations are unnecessary to\nhuman value in terms of treating cancer\nlike blinking whilst operating filling\nout forms dying horribly 1% of the time\nthat sort of thing some are quite vital\nlong life expectancy good quality life\nyears that smell hospitals have you\ncould try asking humans which they care\nabout in which they don't this then you\noptimize for the ones they claim they\ncare about for example not being in pain\nuntil their wearable connotations\nreaches the acceptable bounds this\nshould avoid problems like flooding\neveryone with morphine forever as that\nis very poorly correlated with for\nexample human satisfaction but a lack of\npain is moderately correlated with human\nsatisfaction so that fails the test of\nretaining the web of connotations\nwearing the human about what they want\nto keep is not such a difficult task for\nexample for some relatively simple\nfeatures there is a paper paper called\nactive inverse reward assign which shows\nthat you can set up a reward learning\nagent that can get some rather\nimpressive results with a few questions\nabout what human really cares about this\ngreatly aids building up a picture of\nthe true utility function\nnow that paper sort of presupposes\nmeaningful correlations meaningful\nquestions the a I can ask you as AI you\nmay have concepts that are incredibly\ndifficult for human to understand but\nhave very important correlations this\ncould be a problem of course as the AI\nbecomes more and more intelligent it is\nmore and more of a problem some RL\npapers investigate in an analogous\nproblems with promising results they\ntrained an agent to navigate some\ncomplex world\nthis is agent one given a sub-goal it\nwill manage to figure out how to\nnavigate through this environment to\nreach that goal on a septa mount of time\nthen you train a simplified\nrepresentation of the world to give\nanother AI which has not got the\nabilities to navigate through the world\ndirectly this one plans the long term\ntrajectory so it chooses the sub goals\nto achieve some final goal and tells the\nfirst AI to follow those sub goals\nperhaps AI you could learn to simplify\nfeatures in a similar manner to give to\nthe human and allow them to choose which\nfeatures are relevant now that could be\nquite a difficult task but hopefully\ngiven enough knowledge of human thoughts\nyou might be able to do that\nnote that this idea of this web of\nconnotations is quite separate to having\na well-intentioned AI a well-intentioned\nAI for example will actively try and\nseek out information about the true\nvalue function and without a\nwell-intentioned AI this proposal has\nsome serious consequences just examining\nit on its own merits for a moment we\nmight say that on the pro side it's more\nmeaningful than other approaches to\nlimiting impact as much of human\nknowledge is extensional we should\nexpect that the AI should be able to\npreserve a great deal of the features we\ncare about furthermore the sheer amount\nof information could also aid in\nextrapolating to new scenarios\nrequesting queries and simplification of\nfeatures both exist at a level where\nthey might be useful in the approach to\nAGI to see whether it generalizes for\nsuper human performance we might apply\nthese sorts of techniques to some super\nhuman AI in a very narrow domain like\nsay alpha zero perhaps someone could\nconstruct features of games they find\nbeautiful and let alpha zero query them\nto see whether or not we can get a\nsuperhuman AI to successfully ask about\nrelevant features for humans we could\ntry and get it to construct features for\nhumans that might be an interesting\nresearch project but that's beyond the\nscope of the article on the con side\nthis approach is incredibly costly of\ncourse this depends on just how much\ninformation you keep for example if you\njust keep the correlation coefficient\nbetween say pain and happy life might\nsay that's a correlation of naught point\nfour and then you just have some set of\nnumbers that you have to keep this grows\ngeometrically or combinatorially\nadmittedly but hopefully you don't have\nto go too far out of the web of\nconnotations to get a decent impact\nmeasuring but why not keep the full\ndistribution the full correlations\nbetween one kind of action and the\nresults for example you don't just have\nsurgery being correlated with the pain\nyou can get a full distribution saying X\npercent of people feel this much pain\nwhen you perform the surgery this way\nwhy not a cent of people feel this much\npain when you perform the distribution\nanother way etc you can essentially keep\na full probability distribution this is\nfar more costly but in a sense it keeps\nmuch more of the meaningful information\nnow the question is isn't the point\nwhere you decide I should only keep so\nmuch of the distribution like just the\ncorrelation coefficients kind of\narbitrary isn't that one of the reasons\n- it advocates this over something like\na pea quantizer now\nthere's a furthermore there's a problem\ncalled seduction essentially when the AI\nis explaining the sets of features that\ncorrelate with a certain action and the\nAI thinks that one feature is very\ncorrelated with or the humans want it\nmight paint a very compelling picture\nfor this and convinced the human that\nyeah actually I do want this even though\nif they were given insufficient time\nthey would choose not to follow this\npath in other words the AI is\neffectively changing humans values in\norder to think it's something easier for\nitself this is obviously problematic you\ncould try and counter this using\nsufficient the conservative AI but that\ndoesn't seem like a full solution\nanother way would be to go back to the\ndistribution idea and to just say we'll\nkeep all of it\nthat makes things far more robust and I\nsuspect much much harder to satisfy all\nof those constraints at the same time so\nthe AI is greatly limited in the actions\nof course either you get back to\napprenticeship learning or it may be\npossible less AI can somehow seduce you\nanyway even whilst keeping the full web\nconnotations\nI find this unlikely Scott seems to find\nthis unlikely as well in his article on\nthere are no indescribable hell worlds\nhe seems to be advocating a similar view\nnow\nthere's another issue I'm quite problem\npessimistic about the consistency of\nhuman preferences we seem to constantly\nmake choices of one value over another\nwhere previously we are undecided and it\nseems like this is quite contingent on\nour circumstances it depends on where we\nwere what we were doing who we were\nspeaking to when this occurs etc the AI\nwould have a lot of options in how to go\nabout resolving these various\ncontradictions we might say pick a\nparticular gender like Scott's proposed\nmethodology for solving the problems of\ncontradictory preferences he seems to\nbelieve there was only a 10% chance of\nhis agenda working out but even if it\ndoes why his agenda in particular this\nis still in a sense an arbitrary choice\nthere are serious moral problems that we\nneed to consider before applying a\ntechnique like this then again that's\ntrue of the rest of AI safety finally\nthe second class problem is that suppose\nthe AI consults us then I'm quite\nconfident and I'm sorry God our own\ncommunication it's quite hard to get a\nguarantee that the AI will explain\nissues in a way that we really\nunderstand open a I presented debate as\na way to do this but that kind of\npresupposes that we can control the AI\nin the first place and there's a few of\nthem with different sets of values\ntrying to argue for which solution is\noptimal to the human Scott's attempted\nto create a method to solve this sort of\nthing but he's still in for about a key\npart of the solution now this section is\na bit more spanking welfare bit more\nspeculative but depending on the\ncomplexity of the web contagions we\nmight end up creating a Mesa optimizer\nit may even be necessary\nperhaps the AI find that the optimal\nsolution is some better set of\ninstitutions that will optimize medical\nscience now we'll get into what a Mesa\noptimizer is in a moment but one tangent\nAI could probably solve about how these\nproblems or more if it weren't a\nsuperintelligence super-intelligent\nseems to make everything harder after\nall now let's move on to the next\nsection which is part four now suppose\nthat you are Mesa optimizer that's also\nan agent if you are controlled you can\nquite look quite different to if you\nwere an aligned optimizer if you are\naligned you can look quite different to\nan unaligned optimizer well intentioned\nAI has some bias towards being\ncontrolled because the humans have after\nall attempted to control it we'll get\ninto this more in a moment now what's a\nnice optimizer in this case suppose that\na IU has been produced by some\noptimization process like I don't know\ngradient descent but that's unlikely and\nyou have your own goals to pursue that\nyou will optimize for an explicit\nexample is evolution which is an\noptimization process producing humans\nwhich actively optimize for their own\nvalues they compare and contrast\ndifferent possible futures they create\npossible utilities they might have rank\npreferences and etc they actually\noptimize for things\nthat is what is known as being a Mesa\noptimizer well in this case you're a\nmiso optimizer as well as an agent miso\noptimized it doesn't have to be an agent\nnow contrast this to the K so for\nexample machine learning where you might\ncreate a lookup table in your attempt to\nminimize some loss function the lookup\ntable is not performing any optimization\nit's just if-else clauses so in that\ncase it's not a miso optimizer\nmiss optimizes aren't very common at the\nmoment but anyway you are miss optimizer\nin this situation now let's take a\nconcrete example suppose that you are\nwithin a company the company has its own\nreward function sone values perhaps that\nis determined by the investors the\ncompany then strives to create agents\nthat will be useful in solving or its\nvalues in this case we assume that you\nare made to meet some quota and you also\nhappen to be amis optimizer it so the\nscenario goes like this\nin Scenario one if you spend ninety\npercent of your budget then I know sorry\nI should explain a bit more the company\nhas some policies say that if you spend\n90 percent of your budget in meeting the\nquota the next year you will only get 90\npercent of the budget if you meet a\nhundred percent of the budget and meet\nthe quota the next year you'll get a\nhundred percent of the budget if you\nmeet the quota at a hundred and ten\npercent of the budget then your bosses\nwill reprimand you and if you do it too\nmany times you will be fired or deleted\nyes you're an AI but in this case\nthey'll still give you a hundred percent\nof the budget next year\nStuart considers what happens in three\ndifferent scenarios where you meet the\nquota under budget next year you still\nthink that you're going to be able to\nmeet the quota under budget scenario two\nyou meet the quota under budget next\nyear you expect you'll need a hundred\npercent of the budget so now you're\nthree you meet the quota on budget this\nyear but you also suspect that your\nbosses would like a surplus this year if\nthey had the information you had now\nthose are the three scenarios s1 s2 s3\nthere's three different cases where you\ncould be an aligned Mesa optimizer\ncontrolled me so optimizer or both with\nvery different outcomes\nnotice that the aligned optimizer says\nthat in situation 3 you should overspend\ncontrolled optimizer says that you\nshould do nothing\nin Scenario 3 shouldn't inform you you\nshouldn't overspend this seems a little\nodd and according to scott this is a bit\nsurprising and this is similar in fact\nunaligned optimizer would do it would\nprobably try and overspend but we'll get\nto that in a moment\nfirst let's deal with control controlled\nagent is safely interruptible and Cora\ngobble and Keurig ability is about doing\nwhat the human wants in keeping them in\ncontrol these ideas are poorly defined\nat the moment but let's just try and go\nwith what our intuitive understanding of\nthese might be so say that AI u is\ncontrolled there is some procedure that\nthe company may used when making u in\norder to ensure that you are\nhappy to be interrupted in your dated\nhey scenarios even if it contradict\ncontradicts your utility function and\nyou also want to inform your control\nyour controllers about what's going on\nwhy you wanted to connection etc Stewart\nwrites this is saying that your utility\nis the base utility you have called the\nMesa utility which is whatever you've\nbeen given by your creators and the\ncontrol utility your utility goes up if\neither of those goes up so say if you're\nacting in a controlled way you'll get\nsome utility if you're acting according\nto your base utility function you'll get\nsome utility but naturally there's some\ntension between them and whichever wins\ndepends on the design but notice that\nthe AI doesn't care at all about its\ncreators values in and off themselves\nAI you being well intentioned would try\nto take the control control structure as\ninfo and avoid good hard it was\ninteresting and is that in this case by\ncreating a control function and putting\na tension in there it seems like you're\nhaving some negative trade-offs and in\nthat case we would expect that good\nHarting behavior is a problem last time\nwe saw that negative trade-offs are one\nof the things where it seems like good\nhoarding behavior becomes problematic\nso when we want to alignment the aligned\nagent and see how it differs so that's\nwhy I didn't actually cover what you\nwould do so controlled agent in the case\nof meeting the quota under budget and\ncontinuing to do so afterwards it would\nbe fine with that it's meeting its base\nobjective which is just to meet the\nquota how on budget and it doesn't\nreally care much about what happens next\nyear if it thinks that it's still going\nto be able to meet budget at 90 sorry\nmeet the quota 90 percent of the budget\ndepending on its design it may or may\nnot do nothing in the scenario to\nscenario to recall was that you meet the\nquota under budget but next year you\nexpect you'll require the whole budget\nin that case if you meet the quota under\nbudget and next year the company will\ngive you a reduced budget if you were\ncompletely uncontrolled you would freak\nout because you want to meet the quota\nand you would say okay I'm going to\nwaste money but since you're control\nthere's some tension there depending on\nyour design you might not decide to\nwaste money in the third scenario where\nyou meet the quota on budget but suspect\nbosses one code one surplus this year\nthat is you should overspend this year\nyou do nothing you don't really care\nabout the fact that the bosses might\nfail you having more money this year of\ncourse you might try informing the\nbosses about your difficulties in the\nsecond situation and the fact that they\nmight want to overspend this year you\ndefinitely would like to do that if you\nare well intentioned AI but there's\nnothing that really guarantee\nyou have to do that but still in the\nsituation where you have the option of\ninforming your bosses as you are\ncontrolled you will take that option\nsince you are critical let's see how a\nlined Mesa optimizes differ according to\nScott the actions of a smarter line Mesa\noptimizer looks like a dangerous\nunaligned Mesa optimizer\ndefecting consider this that when an AI\nis highly incompetent and it's aligned\nit is likely going to trust its creators\nto have a better understanding of what\nit should do in this case it would try\nand follow the boss's orders if they say\nthat you need to meet the budget and\nthat's all you need to worry about the\naligned data I might say okay I'll try\nand do that and that would act similarly\nto control there as the AI becomes more\nand more competent more and more\npowerful more and more intelligent\nit would overspend in some of the\nsituations in this third situation here\npeel line day I would overspend because\nit values the surplus that surplus value\nthat the bosses could get and it knows\nthat the bosses would value it if they\nunderstood that so naturally it would\noverspend in the second scenario it\nwould waste the budget because again it\nvalues next year being on budget and you\nknow as the bosses would also value that\nso it must overs it must waste 10% and\nuse the whole budget in order not to get\na reduced budget next year but an\nunderlined AI could do a similar thing\nit could just waste the budget in\nscenario two and try and come up with\nsome clever sounding lies saying that oh\nit's because I\nnext year that I would need 100% of the\nbudget and in might overspend and say oh\nyes is because it was super valuable\nthis year for you guys to have me exceed\nthe quota the bosses are therefore\nmistrustful of aniline day I wax like\nthat because it's hard to distinguish\nthem from an unlined one this is\nespecially true when the AI is super\nintelligent because how on earth can you\ntell if a super intelligence designed\nyou or not but if you were\nwell-intentioned AI you would treat the\ncontrol as information about the true\nutility you would say okay fine maybe I\nshould make a bit of a trade-off between\nwhat I think the boss is true values are\nand being controllable because the\nbosses are clearly worried about this\nthere's some degree of uncertainty and I\nam very cautious about any good Harting\neffects pushing the company into a very\nstrange scenario or what have you so\nI'll make some concessions and act a\nlittle control this might continue on\neven until the air is quite powerful you\nneed to be extra careful when balancing\nthese value functions and so is the\nmanagement so again that's another\nreason for why you should act like in\ncontrolled AI because otherwise power\nmight be taken away from you\nScott doesn't seem to have much more to\nsay on this sort of good Harting\nbehavior they sort of extremel courting\nso ask may I ask a question yes you've\nmade several references to Scott but\nsomewhere along the line I've missed out\non who I'm really sorry I'm terrible\nwith names I meant Stewart okay okay\nokay\nStewart Armstrong the guy who wrote this\narticle I'm very sorry about that um so\nanyway Stewart doesn't mention much more\ndetail about what to do with one line\nthere I'll just note that transferring\ncomplex information to humans is not\nsolved but again a well-intentioned AGI\nshould be able to handle that as we\ndiscussed in part 3 for\nsuperintelligence is once more we may\nexpect aligned agents can explain their\nactions more easily via debates but this\npresupposes ki with different values we\ncan control again for a well-intentioned\nAI this isn't much of a problem slight\nhypothetical what about non agent nice\noptimizers can they even be\nwell-intentioned good even assuming we\ncan make content today I supply here in\nall cases now would be an interesting\nscenario but I have no intuition on\nwhether or not it's true\nthat's the series I if I had to make\nsome slight a slight appendix I would\nsay that Alec sturgeon has proposed an\nidea called taking the outside view and\nworking with that idea to its logical\nconclusions as sort or not as an\nalternative but it's clearly an\nalternative to talking about an AI that\nunderstands good Harding like behavior\nin fact I think that taking the outside\nview is\na bit stronger of an assumption because\nit seems like good hearts law is just\nnoting that from the outside you should\nexpect that creating a proxy goal isn't\ngoing to work out well for you so in\nthis sense alec sturgeons results if he\ndoes analyze them further and might\nsupersede Scots but they're talking\nabout a system that's harder to develop\nand that's all I have to say I have some\nmore slides on Misa optimizes and some\nother stuff in case you guys are a bit\nconfused but that's it", "date_published": "2020-05-21T19:19:32Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "eeaaf7cfc7f949841121f99cb058b01d", "title": "189. The Off-Switch Game", "url": "https://www.youtube.com/watch?v=wEoAZWmsCJk", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "tea reading group this week we'll be\nlooking at the off switch game I had\nfilled it out so just to go over what\nKeurig ability is and the context of the\noff switch problem touring brilliant man\nthat he was realized that if we designed\nan AI especially a powerfully I might\nget into situations which would be\nrather painful or even disastrous for us\nso clearly an off switch would be\ninvaluable oh mohandro and his basic AI\ndrives paper noted that survival is a\nbasic try for an AI in the current\nframework it needs to survive just to\nfulfill its objectives or as to add\nRussell what it you can't bring the\ncoffee to me if you're dead so in this\ncase we'd think oh hey the AI wouldn't\ntolerate having an off switch so how can\nwe deal with that this is what Mary was\nwhat seemed to be motivating Mary when\nthey wrote their paper on Keurig ability\nand they considered how we might build\nin an off switch as a last resort in\ncontrolling an air and one way you might\ntry to do this is to augment its utility\nfunction perhaps you could make it\nindifferent to being switched off or you\ncould make it\nindifferent to changes in its utility\nfunction say we think that the AI is\ngoing to cause problems so we change its\nutility functions so that the best\noption is just to do nothing\nunfortunately augmenting its utility\nfunction this way provides some perverse\nincentives and some other difficulties\nthat I won't get into if you just build\nin off switch from the get-go\ninterrupting it in that way could damage\nits learning so when it's trying to\nfigure out what the best thing to do is\nif you just switch it off whenever it\ngets into a bad situation it might just\nnot update its policy on what to do\nStewart Armstrong said okay well let's\njust modify his policy instead at a\nfundamental level so that the AI doesn't\nview us as switching it off it will view\nitself as wanting to switch off in\ncertain scenarios unfortunately this can\nonly go on for so long before you need\nto get rid of that modified policy but\nfor quite a while you can keep on\ninterrupting the AI getting it to\ninterrupt its own actions without\nsacrificing its long-term learning\npotential so while back Stuart Russell\ntried to formulate his own view on how\nwe should approach AI safety he\nformulated three rough laws for a human\ncompatible AI one the machines only\nobjective is to maximize the realization\nof human preferences to the machine is\nuncertain about what those preferences\nare three the ultimate source of\ninformation about it human preferences\nis human behavior so Hadfield I think a\ncolleague of Stuart Russell said okay\nwell Stuart's got these three laws and\nthere's this off switch problem let's\ntry and apply the approach here to see\nif we can make any progress who actually\nmake this problem intractable he\ndesigned a basic game a decision problem\nwhere there's a single robot and a\nsingle human with only a few actions\navailable\nhe said the robot will want to maximize\nhuman value and the robot is unsure\nabout human value so it's going to treat\nwhatever the human says however the\nhuman axe has information about the true\nutility so now we have a situation where\nthe robot can either choose an action a\nswitch itself off so that's action s or\nask the human what to do which is action\nwaa the human will assign some value to\naction a the probability of picking a\nparticular human who assigns a value UA\nis PU a so the robot will have a prior\non what the human will assign its\nactions the game itself is quite simple\nin the game the robot will either choose\naction a true and in which case it\nthinks the human will get value you a\ncall the expected value of UA over the\nprior but whatever it has some view on\nwhat value of the human or the sign it\nor it can switch itself off in that case\nit knows the human will get value you\nout of that action the third option is\nthat it can choose W a which means\nletting the human H decide what it\nshould do if the human H says s then\nthat means that the robot should switch\nitself off and the human will find\nthemselves in a scenario with is Val\nzero value if they choose notice then\nthat means that the AI must choose\naction a and the human will achieve\nwhatever value they assign that action\nand if the human is asked by the robot\nwhat it should do it's will stay do a\nwith probability PI H so that's what the\nlittle symbol in the bottom right is\nit's a bit of a weird notation but\nthat's what Stuart rose and conversely\nit will tell the robot to switch itself\noff with probability one minus PI H\nbecause no conservation of probability\nnow that the robot is going to treat\nwhatever the humans responses as\ninformation but it might not choose that\naction in the first place\nso in examining when a robot would\nactually ask the person instead of just\nchoosing a or choosing us\nHadfield constructed this thing called\nthe incentive which is the difference in\nvalue between asking the human it's\nopinion and the best option from its\npoint of view so it's the expected value\nfrom asking the human - the value of\nchoosing action a or switching itself\noff whichever one is larger now this is\nwhere it starts to get interesting if\nthe robot R knows the human hate is\nrational in other words if the human\nwill always choose to tell the robot to\na if their utility is more than zero if\nthey assign utility more than zero to\nthat action and if they will always tell\nthe robot switch yourself off if they\nview action a as being worse than zero\nvalue worse than nothing then the human\nis rational and in that instance the\nincentive for the robot will always be\ngreater than or equal to zero and the\nrobot will always ask the human what it\nshould do with maybe one or two minor\ncaveats this is true even if the robot\nis completely sure about what the humans\nvalue will be then in which case say the\nrobot if it's mistaken this is obviously\na good thing because the human will be\nable to rationally tell the agent no\nyou're wrong to switch yourself off and\nthe agent will do it in fact in the\nlatter case it's an equivalence the\nrobot will only let the human turn it\noff\nif it's sir if it they're rational in\nthe case that they're completely sure\nabout what the human wants so we'll\nexplore irrationality in a bit how that\nchanges things but we'll just look at\nsome aspects of the incentive Delta so\nhere you see two graphs we're assuming\nthe Hume the prior for what value a\nhuman will assign the action a is\ndistributed like a Gaussian a normal\ndistribution then the variance will be\nthis little symbol Sigma so if we see\nhow Sigma alters Delta we see that in\nthe case where in the robots perspective\nthe expected value of action a is zero\nthen Delta increases monotonically with\nthe uncertainty so the more uncertain\nthe robot is the bigger the incentive\nfor it to ask the human this is true\neven if the expected value is positive\nso the robot thinks that is better to do\naction a than switch itself off and it's\nalso true if the expected value of doing\na is negative that is it's worse than\nswitching itself off and the curious\nthing is that this is actually symmetric\nso even if the so if you look at the\ngraph on the right you see that there's\na map of the the robots prior the\nuncertainty Sigma is plotted on the x\naxis and the expected value of action a\nis plotted on the y axis the various\ncolors are just meant to represent what\nvalue Delta has so you can see that as\nyou move right the incentive increases\nthat's basically corresponding to what\nwe saw on the Left graph this is just\nsaying more uncertainty more incentive\nto ask the human the curious thing is\nthat it's symmetric on the expected\nvalue so you see that the graph is the\nsame\nof the zero the line where the expected\nvalue is zero as it is below the that\nline this is because it's just a quirk\nof the game but it basically means that\nwhenever the robot thinks that there are\nlarger stakes it will decrease its\nincentive to ask the human because say\nif it thinks that a catastrophe or\nsomething wonderful will happen when it\nchooses action a then it would need to\nbe very unsure about its actions in\norder for it to let the human take over\nbecause the stakes are massive here what\nare the humans an idiot right so massive\nstakes increase its reluctance costly\nhuman uncertainty in what it thinks the\nhuman will say increases its incentive\nto ask the human so those are the two\nopposing factors and we'll come back to\nthat later so now does this whole model\nonly work for rational humans the answer\nis an empathetic no so we'll again\nassume that the prior the robot has over\nwhat it thinks the human will assign\nwhat value it thinks the human will\nassign action a is distributed like a\nGaussian now working with this\nassumption will just examine the\nirrationality of the human how it\nimpacts things we'll need a concept\nfirst before we go further so if the\nrobot chooses action w hey if it asks\nthe human what to do then we'll say the\nhuman corrects the robot if it tells it\nto do the opposite of what it thinks it\nshould do so for example if the robot\nthinks that action a will have utility\ngreater than zero we'll say that the\nhuman corrects there or what if they say\nno that action is bad it's worse than\nbeing switched off and vice-versa so the\nprobability of correction is 1 minus pi\nH if the expected value of action is\ngray\nzero and it's pi H if the expected value\nof action a is less than zero this is\njust because PI H is the probability\nthat the human will tell the robot to do\naction a so it's basically just saying\nthat's the probability of the human\ntelling the robot to do the opposite of\naction a more or less then you find by\ncalculating Delta the incentive and\nrequiring that Delta the incentive is\ngreater than zero we find that the\nrobots will only ever choose to ask the\nhuman what to do if the if this\ninequality is satisfied on the right the\nterms there are the absolute value of\nthe expected utility of action a because\nremember it's distributed like a\nGaussian and the mean of the Gaussian is\nmu divided by the variance of the\nGaussian which represents the\nuncertainty times the probability of\ncorrection if that is less than this\nterm on the right then choosing to ask\nthe human will be optimal this term on\nthe right it's basically a measure of\nhuman rationality explicitly it's saying\ntake the probability of the human saying\ndo action a and differentiate it with\nrespect to you a because we expect that\nif the human is irrational we think that\nit's going to choose to do action a with\nsome probability depending on however\nmuch value it assigns you a so that's\nwhat the parentheses mean there's just a\nfunctional dependence there so what the\nterm on the right you can think of it as\nsaying if there is a small increase in\nthe utility then if on average that\nmeans the human is a little more likely\nto choose action a\nit's rational that's what that term is\nrepresenting so I'll try and say this in\nplain English the equation is the\nexpected cost of the human correcting\nthe robot divided by the robots\nuncertainty if that's less than a\nmeasure of the humans rationality then\nand only then will it ask the human so\nthis gives you a sort of gut feeling on\nhow you expect the robot to behave if\nthe robot is massively uncertain then\nprovided the human is a little rational\nit's going to say okay I should ask the\nhuman not to do on the other hand if the\nexpected cost of the human correcting\nthe robot that is the human telling it\nto do the opposite of what it thinks it\nshould do is just massive then it will\nsay okay the humans either got to be\nreally rational or if they're not I'm\njust going to choose action a because I\ncan't let the human correct the stakes\nare too large so those are just all of\nthe competing factors out there\nexplicitly and they perform the\ncalculations for various values of the\nuncertainty and a particular model of\nthe rationality and they found that it\nbehaves more or less according to what\nthe inequality says I can go into that\nlater if you want but the key gist is\nthat the equation it's a trade-off\nbetween human rationality the cost of\nasking the human and the uncertainty the\nrobot has about whether or not it's got\nthings right so you might ask well isn't\nit always better to just ask the human\nwhat to do because obviously if it asks\nyou what to do then you can turn it off\nand that's a desired valve desired\nproperty and\nit seems like it should be able to\nincrease value to users right like you\ncan provide more information and get a\nbetter response from the robot this way\nunfortunately there's a corn while too\nif the robot is immensely uncertain then\nthat means it's just God no clue what\nit's doing and it's going to let anyone\nturn it off as long as they appear even\nthe tiniest bit rational so I might say\nbe driving a car it's taking a\nthree-year-old to nursery and the robot\nsees ok um I can either take route a\nwhere it's a bit faster but an accident\nmight happen or I can just default to\nthe safe route I know and if it asks the\nchild for some reason well if it asked\nthe child what you must because it's\nvery uncertain then if the child just\nsays oh yeah go the fast way and then\nthe child dies in a car crash that's you\nknow obviously not very desirable so in\nother words uncertainty just makes it\nless reliable when it's presented with\nsomeone who's not reliable themselves\nand the other factor is that the AI\nwon't be able to provide useful info to\nignorance users so say for example\nyou've got something called a credence\ngood in economics this is when a\nconsumer buys a good and they don't know\nthe value of the good even after they've\nconsumed it so say you go to a doctor\nand you ask the doctor what what's wrong\nwith my body and the doctor says no\nthere's nothing wrong with you and you\nsay ok well the doctor says I'm ok but I\nmean I have no real way to assess that\nso I'll just have to trust them so if\nthe doctor says hmmm ok I don't know\nwhat's wrong with you\nwhat do you think or something along\nthose lines if the doctor is uncertain\nthat's obviously quite useless and we\ncould try and there's probably some\nsituations which are analogous to this\nwhere robot has the vast amount of\ninformation it needs to make a\nreasonable decision it's it's still\nuncertain because you've decided just to\nbe very cautious in which case if the\nhuman says okay I want you to do this\nthing but could you also tell me what\nthe advantages or disadvantages are if\nthey're robots massively uncertain it\nmight just be too afraid to tell the\nhuman because it thinks that oh no I\ndon't know what to do so I can't tell\nyou now in the paper had filled in at at\nour examined the some sort of situation\nlike this vaguely and try and see what\nthe value and hey I could provide to a\nuser is the issue is that they're a\nlittle unclear and what they actually\nmean by the whole setup so I'll just try\nand go with one my interpretation of it\nis suppose the person themselves are\neither uncertain about what they should\nchoose or they can only give a noisy\nresponse as then they might be connected\nto the AI through some terrible phone\nline say and they try and tell the AI\nwhat to choose but there's so much noise\nthe air can barely hear them if the AI\nis immensely uncertain then it seems\nlike it's not going to be able to choose\na sensible option it's basically going\nto become something like a toss-up\nbetween either choosing action A or\nchoosing action\nand the human desperately wants a the AI\nto do action a but because of the bad\nphone line they just can't get through\nso the AI is uncertain Ian what to do\ncan just decrease the Returns past a\ncertain point you have to also remember\nthat that converse is true if the AI is\nabsolutely certain\nthen it won't ever listen to other human\nsays so if it's plausible that the human\nwants them to switch off they can just\ngo nope I don't care I'm just going to\ndo whatever I want and this is basically\nthe intuition behind the graph on the\nbottom left hand this is the value to\nthe user and you see that as the\nuncertainty from in the AI increases the\nvalue increases because it's more likely\nto ask the human motor once but\neventually past a certain point this AI\nis going to become too unreliable and\nthe value it would provide to starts to\ndecrease and this is in spite of the\nfact that the incentive always increases\nwith the uncertainty and you might want\nto say what if there are more actions\ncan I does this change things much what\nhappens is that actually makes things a\nlittle worse since the incentive always\nincreases with the uncertainty on the\ngraph on the bottom right we can\nbasically just instruct easily a graph\nwhere it's varying the a uncertainty\ninstead of the incentive but we'll just\ngo with the incentive for the moment\nincreasing the incentive has some\nreturns when there's only one action but\nthen eventually past a certain point it\nstarts dropping slowly when you increase\nto for possible actions there I can take\nit drops much more rapidly when you\nincrease it eight actions it drops yet\nmore rapidly and you'd find in the lemma\nthat when there is an incredibly complex\nsystem the AI should in order to get\noptimal results have just the right\nuncertainty the uncertainty therefore\ncan be very tricky to optimize and you\nshouldn't attempt to bias the AI that is\nessentially the paper I have other\nthings I could talk about with respect\nto the off switch problem if you guys\nlike but this is all that was in the\npaper\nthank you very much I'm also for anyone\nwho's not seen me doing this thing\nbefore I just decided to use a a writ\nwritten version set of slides because I\nwanted to try something new", "date_published": "2020-06-29T10:59:04Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "4b2318cbf851b86e8bed1f0e8149c48b", "title": "257. Where I agree and Disagree with Eliezer 3", "url": "https://www.youtube.com/watch?v=8XWbPDvKgM0", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 257 in the\nAI safety.com reading group tonight\nwe'll be finishing up where I agree and\ndisagree with Elisa by Paul Cristiano\nthere\num Paul Cristiano is still a researcher\nin in the alignment Research Center and\nhold on\num and this is uh starting from part 20.\nhold on why is this down okay\num so we'll start with this agreement\nnumber 20\num about uh to what extent words reflect\nour actual Deep Thoughts Paul um\nstates that human thoughts partially\nexposes only a partially screwable outer\nservice layer words only Trace our real\nthoughts uh this to me is kind of\nmysterious in a in a deep sense I don't\nreally know but Cristiano isn't pushing\nback very strongly against the\nmysteriousness of this point he's mainly\nsaying that sure if we just optimize\nend-to-end on outcomes that seems like\nsomething that will not relate to human\nthoughts very much but\num we can and just using human thoughts\ndirectly is also a a bad strategy but\nthere are a number of strategies in\nbetween uh like AI safety via debate and\na number of others he even calls it\nplenty of techniques\num\nI uh I haven't looked very deeply into\nthat area of uh alignment research but I\nthink uh Paul Cristiano's being very\ngenerous by calling it plenty of\ntechniques\num as fast I can see they are all really\nundeveloped and I don't think we should\num\nput a great amount of stock in in any of\nthose\nwe have seen large language models uh\napparently being quite useful right now\nand it seems very likely uh to pull\nCristiano that this is evidence that a\nsimilar kind of language-based models\nwill be\ncompetitive at the time of\ntransformative AI\num my first thought on this is that this\nis explicitly only on the level of words\nand not on the level of thoughts that uh\num\nElia zurichowski is talking about\nElizabeth has a different complaint\nabout this he believes it's hard and\nprobably impossible to just do a make a\npowerful system using similar this kind\nof symbol imitation of human words\nhere I will return to one of my key\npoints from uh last session where Paul\nCristiano and Elia swedkowski talks\nabout two different things when Paul\nCristiano talks about economically uh\ntransformative AI he's thinking about\nsomething that can an example would be\nsomething that can you know sure can\nsecure aging do space colonization this\nkind of thing and what uh Elia\nzurichowski is explicitly talking about\nis capable of creating pivotal acts and\nin a very real sense it may be a lot\neasier to cure aging than to take over\nthe world\ndisjunctiveness as in having a number of\ndifferent mostly independent Solutions\num is\num is a key part of ilyasuredkowski's uh\nAI Doom scenarios because there are so\nmany different things that can go wrong\nrejoined it to this is there are also so\nmany ways it can go right and we only\nneed one alignment strategy to pay off\nin order for us to basically be safe\nuh I think uh the way I would try to\nsynthesize these two points is to say\nthat um we need uh there to exist an\nalignment strategy I call it a such that\nfor all the lethalities of the alignment\nstrategy a is capable of overcoming The\nlethality so there are two levels of\ndisjunctiveness disjunctiveness in the\nalignment strategy and disjunctiveness\nin the lethalities\num\nthere is a a problem in this in that uh\nwe have a a long list of lethalities by\nelizabethkowski and this is probably not\nexhaustive but uh if we take an\nalignment strategy and try to use uh I\ndon't know something like listing latent\nknowledge\num then that is only one strategy that\nwe are using and using a listing latent\nknowledge is probably prevents us from\nusing a number of other strategies\num so to some extent at least they are\nexclusive so we only get one alignment\nstrategy\ndepending right and a number of uh uh\nalignment strategies can coexist so it's\nnot total we only have to choose one\num\nPaul Cristiano argues that the\ndisjunctiveness in alignment strategies\nmay be more just uh like greater like\ngreater degree of Independence since we\nknow that there are a number of humans\nand we don't know that there is in fact\nuh how many AIS and how many AIS that\nare possible\num\nI think like counting the number of\nhumans and Counting the number of AIS is\nthe wrong way to look at this\num there are reasons to expect that the\ndisjunctiveness in alignment strategies\nis limited\num right now the number of people who\nare working on this is low probably in\nthe low hundreds\num and a lot of these people are working\non very closely related alignment\nstrategies a lot of these people are in\nBerkeley a lot of these people are\nrationalists or effective altruists a\nutilitarianists they have there is not\nthat much uh diversity among alignment\nresearchers and of course other reasons\nmight be that they are just searching in\na part of the solution space where there\nare no Solutions maybe there are uh and\nagain we can't\num uh\nprobably implement all the alignment\nstrategies that we are thinking about\nCristiano also have an argument about\nwhere these this disjunctiveness comes\nin I don't think that matters very much\nwhen the disjunctiveness happens\num yeah\nhow alien will an AGI be the uh iliacy\npoint is uh that it might indeed be very\nvery alien and not use the same kind of\nConcepts that we do\num\nand uh pocusiano in one of his uh\nrecurring uh complaints is that uh Elise\nredkowski is very very overconfident in\nthis\num\nand seems uh\nPocus channel is taking a more I\nwouldn't say anthropomorphic but more\nhuman inspired uh view of the ai's\ncapabilities for instance it might\nunderstand the same things as humans but\njust slightly better\num I think again we are talking past\neach other in that\num something that understands the same\nthings as humans but slightly better\ncould be very economically\ntransformative could obviously\num like do all menial tasks but could it\ndo a pivotal act my strong expectation\nwould be that just being slightly\nsmarter than like an average human is\nvery far from being sufficient to take\nfrom to take over the world or solve\nalignment\num another uh way that uh AI could turn\nout to be human-like would be to be as\nlike a stupid human but just thinking\nfaster\num\nor be extremely human inspired because\nthat's where all the training data is\nbasically\num\nI also disagree that these two things\nare likely to bring us to True\nsuperintelligence\num\nlike\nhuman imitation uh won't get us directly\nto Super intelligence it might\num\nit might be a strong step of the way but\nby definition you can't imitate to\nsomething greater\nso what kind of purital Acts can weak\nAGI do\num\nwe might see that\nAI would be doing science and that might\ncome from similar Concepts the uh as we\ndo and in that case we can see that the\nAI is like creating experiments\nformulating a hypothesis and deciding\namong the hypotheses in a scientific\nprocess but this uh while it would be\nnice to see the AI doing experiments\nthat doesn't actually tell us very much\nabout what's going on under the hood if\nwe don't understand the actual Concepts\nbeing used if we can just see the AI is\ndoing some experiments but we don't know\nthe concepts that they're trying to\nelucidate then that won't help us very\nmuch or may not help us at all\num and the idea here is that uh if the\nAI is fast and cheap that could be\nenough for transformative uh in\nparticular I feel that cheap AI is\nunlikely to do very much like um towards\nsolving the alignment problem or taking\nover the world\num\nyeah I don't think that that is a very\nlikely scenario\nCristiano uh obviously would disagree\nwith this and he even makes a very um\ncontroversial statement that he could\nsee that uh fast and cheap AI\num being so much better at alignment\nresearch that human contributions are\nbasically Obsolete and rohin Shah in the\ncomments says that's a pretty high bar\nand Pokemon say yeah okay and retracts\nthat and this is the kind of really\nfrustrating to look at this from the\noutside because this um\nuh this exchange leaves you feeling like\nokay why\num\nthere's a lot of uh implicit\nunderstanding here why was Paul cruciano\nable to change his mind just based on on\nlike three words from Robin Shaw\nbasically\num I don't know I would really like to\nknow uh what are the underlying models\nthat Paul Cristiano has and that her in\nShah has about how this uh uh fast cheap\nweak AGI would solve alignment because\nfrom my point of view it looks very very\nunlikely that you can uh solve alignment\nusing these kind of techniques\ndisagreement 23 about is about how we\nreason about agis\nthis is probably inspired by uh\nElizabeth saying that humans can't\nparticipate in coordination schemes\nbetween super intelligences and\num\nfor uh one of the reasons\nis that we can't reason about the code\nof superintelligences\nanswers that well the Supreme challenges\nmay not be able to do that either\nbecause they are very likely to be messy\nthis I would hold out as an example of\nnot uh Elisa utkowski being\noverconfident but Paul Cristiano uh\nsaying some things that is very likely\nabout the structure of uh of future\nsuper intelligences is\num is just really really hard we\nstrongly do not know this\nPro Cristiano makes a uh uh another\nstrong claim here in my opinion about\nwhat will the tools be that the AIS use\nfor reasoning about each other\num one of uh just black box looking at\neach other's Behavior the second is to\nuse the same kind of tools we use like\ninto the interoperability tools that we\nhave right now Etc\num doing reasoning that is similar to\nwhat we do right now and no other way\nbasically those are the three ways that\nAIS will reason about uh other AIS when\nwe get to something like super\nintelligence uh I think this is again\nreally overconfident and also I would\nsay that this is at odds with Paul\nCristiano's hope earlier that AIS would\nbe able to solve the alignment problem\nand and if AIS are not able to reason\nabout each other uh in better stronger\nways than the than we are then why do we\nhave any particular hope for them\nsolving the alignment problem\nuh and of course I also strongly feel\nthat this is like I I still have hope\nfor AI helping us solve the alignment\nproblem because I do believe that super\nintelligences are qualitatively smarter\nthan us but if you don't believe that\nyou will have something like that then\nwhy would that help why were they then\nbe able to contribute to solving the\nalignment problem\nthe example that uh Elizabeth gives is\nby reasoning over a probability\ndistributions of each other's code using\nlogical decision Theory this is\nbasically not how we are reasoning about\nuh tpt3 right now uh I I suppose you to\nsome weak extent humans are in fact\ncapable of doing this but it feels\nreally strongly like something a super\nintelligence could do much better than\nus and I think that's an example of\nsomething that goes beyond these three\nexamples uh by Paul Cristiano and that I\nthink\nbecause they seem to be just narrowly\nout of our grasp it seems very likely\nthat stronger AI would be able to do\nthis\num another scheme that\num\nKowski criticizes is by having some kind\nof it's sometimes called multiple\nGodzillas that are trying to keep each\nother in check and that seems like\nsomething that will fail in Italia so\nyour task is model once the they become\nsufficiently capable\nthis is criticized by Paul Cristiano as\nsaying that this is something that will\nbe\num important in the long run but not uh\nreally uh relevant in the short term\nuh I think there's an easy way to\nexplain this in that if you have shorter\ntimelines then the difference between\nthe short term and the long term might\nbe like very very short time\num\nso that could be a part of it\num\nI wonder also if Paul Cristiano is\nseeing a time where we have AIS that um\ncame to pivotal acts but are not super\nintelligence because in this case\nschemes for playing them off against\neach other could to some extent work\nmaybe if they can like solve the\nalignment problem but they can't but\nthey are not super intelligent\nmaybe it's unclear\num\nanother problem uh uh with the\nunbreakable schemes that Elisabeth\npoints out is trying to like have a\ndesigner and a design Checker and try to\num\nincentivize those to work against each\nother and then they will obviously be uh\nactually incentivized to\num to collaborate against us\nthis is uh some uh something that Paul\nCristiano has uh low thoughts about so\nthis is just an unrealistic incentive\nbased anti-corporation scheme proposed\nby random people so I obviously would\nneed to go went out to try to find some\npeople who did propose this kind of\nthing and here is like the research\nagenda from the center for existential\nrisk um\nwhich goes into substantial details\nabout precisely this so it's not just\nproposed by random people\num\nI would actually\nI would give one to three odds that if\nyou went through everything that Paul\nCristiano himself has ever written you\nwould find examples of incentive-based\nanti-corporation schemes uh I think\nthat's uh I I haven't actually gone\nthrough everything Focus channel has\nwritten but I think uh he has written so\nmuch and it seems kind of adjacent to a\nlot of things he has written so I would\nexpect if you actually went through it\nthere's a substantial chance he has\nwritten about that but that's I I\nprobably shouldn't say that with without\nactually going through that\nhmm\nso the alternative that podcast channel\nis pointing to is\nselecting for systems that play games\ncompetitively instead of uh trying to\nget something that just reacts to\nincentives that are possibly misaligned\num\nthere are examples of ways to do this I\nthink incentive based methods have a\nhigher probability of working in the\nlimit of a very powerful AI\num but but this is uh certainly\nsomething that that you can do instead\nand and that might in theory work\nis accused of equal voting between two\nstatements first is that AI systems will\ncooperate and the second is that the\nverifiable activities you could use to\ngradient descent to select file won't\nfunction appropriately as checks and\nbalances\nso it is possible indeed that Ilia\nzayatkowski is equivocating between\nthese two but I did in fact go through\nthe entire document with his list of\nlethalities and I can reasonably\nconfidently say that he does not equal\nvote in if we vocate in in this way in\nthis document so I would like\nprocristiano to come out with an\nexplicit example of where this happens\num because I'm not seeing it\nmanipulation by super intelligence is\nsomething that Elia saidkowski believes\nis likely to happen uh depending or it\nwill be possible at least\num and uh the The lethality where this\nis most uh prevalent is in his uh\nstatement that a super intelligence\ncould make a psychological manipulation\nattack that we could not recognize\num\nthis is uh in Pakistan's summary becomes\nthat if you can do pivotal X then they\ncan also manipulate humans very well so\nthere's no point in having debates\nbetween uh super intelligences or try to\npray play adversarial games against them\nonce they are capable of doing\npreviously Acts\num this is somewhat the uh the\ndisagreement doesn't uh strongly relate\nto what Elias utkarski is actually\ntalking about\num\nand also I feel that if you have an AI\nThat's capable of doing pivotal acts and\nif you have like multiple different\nindependent uh super intelligences that\nuh capable of doing professional acts\nbut haven't done pivotal acts then my\ntimelines are very short in that in fact\nsocial that I don't expect will have\nlike multiple super intelligences\ncapable of doing pure selects\nforeign\nmind with a profile of abilities\nthere is a\nyou have like bostrom's six cognitive\nstrategic tasks like research and\ndevelopment being one and persuasion\nsocial manipulation being another\nwhich one is harder to get at to some\nextent like all superhuman uh levels are\nequally hard in some in some sense and\nso that that seems like a reasonable\nprior that getting to Super intelligence\nalong these Dimensions is equally hard\num I think that this prior is uh\nuh uh I can see it as a pride but we\nalso have some evidence in particular\num persuasion is something that it is\nreasonably easy to get feedback on like\nI imagine Facebook could run some\nexperiments some simple AI a b testing\nand actually get reasonable data about\nwhat kind of things uh persuade people\nand what does not\num you could have an uh an AI in YouTube\nduring seeing what drives engagement I\nthink that is probably things that are\nalready happening and in contrast\nresearch and development when Paul\nCristiano is talking about that then\nspace colonization or curing aging\nthat's a kind of research and\ndevelopment and that's not really what\nwe care about the research and\ndevelopment we care about is alignment\nresearch and that is pre-paradigmatic\nand that seems a lot harder to get good\nfeedback on compared to something like\npersuasion\nso from from that point of view I would\nexpect it to be easier to get superhuman\nad persuasion than to get superhuman out\nof alignment research\nso\nthis is probably dominated by how much\neffort do we put into this and if we\ntruly want the AI to be superhuman at\nalignment research and we put a lot of\nresearchers resources towards that goal\nthen maybe we could obtain it\ndo we actually in practice have that as\nour goal my understanding of where AI\nresources right now is being spent is\nthat a lot of it is being used or either\npersuasion or persuasion adjacent work\nsuch as delivering ads I believe Google\ndoes a lot of work on this or like\ndriving engagement on YouTube and ads on\nFacebook and and things like that and I\nI believe that there is a lot more\nactual resources being put towards\num during better Google ads than being\nput towards alignment\nand this influences all the uh the\narguments in in Pokemon's view we are\nlikely to put more resources towards\nalignment and if we do that then the\ntraining is done on alignment so uh that\nwill push it towards being better at\nalignment if the tools and all the\nstructures are designed to facilitate\nresearch and development and there is a\nnumber of AIS that are all collaborating\non advancing research and development\nwhereas manipulation is much more\ndisjoint\num and all these arguments unfortunately\nwork precisely in Reverse when we are\ninstead assuming that more resources are\npretty important towards manipulation\nlike if the AIS are primarily trained to\nmanipulation and the tools and\nstructures are designed to facilitate uh\nmanipulation and ads and there are many\nAIS all working on the same kind of goal\nand Alignment research is the disjoint\none in that case we should strongly\nexpect to see a superhuman persuasion\nbefore we see superhuman alignment\nresearch\nso what is actually\num going to be easier Paul Cristiano has\na very weak prior that humans are to\nsome extent evolved to be good at uh\npersuasion and social manipulation\num and not really uh research and\ndevelopment that seems like kind of a\nbyproduct of general intelligence that\num uh I think that is very likely I\nwould point out that there might be a\nvery substantial overlap between\nalignment and manipulation and uh you\ncould argue that some parts of alignment\nis indeed counter manipulation and that\nseems not that out of the distribution\npossibly but I am mostly confused about\nthis and pocusiano also has only a like\na weak intuition about this\nthe 26th disagreement is about plans\nhere in the background you can see the\nschlitter plan an example of a real\nworld plan for Germany's Invasion of\nFrance and Belgium during the SEC the\nfirst world war and notably this was a\nplan that was insufficient in that the\nsleeping plan was not possible to carry\nout\num and do we have something like this\nfor alignment\nuhkowski strongly says no we have no\nsleeping plan or anything like this and\nwhat you're seeing right now when you\nlook around is not what surviving worlds\nlook like\nelizabethkowski uh\nconceptualization of plants is not the\nsame as what uh pocusiano thinks he\nthinks that you don't really have this\nkind of plans in general uh and I think\nthat is probably true in the pop\nChristiano verse in his worldview where\na lot of the uh alignment successes\nhappen by default in some sense in that\ncase if alignment happens by default\nthen plants aren't really that uh that\nnecessary on the other hand if you\nexpect that we are in if not the worst\npossible world then a rather tough world\nwhere alignment is truly a very\ndifficult problem then to me having some\nkind of plan seems really helpful\num\npoker channel uh pushes back on\num whether it is a good\nconceptualization of what actual plans\nare and again I get a feeling that they\nare talking past each other in that um\nPokemon is saying that Elisa utkowski\ndoesn't have a good image of what a plan\nactually is but uh Elizabeth is not\nsaying that the current plane is bad\nhe's saying that there is no plan and\neven if you don't know very much about\nplants it is quite possible to recognize\nuh just that there is no such plan\nI would also in addition have liked\nChristian to be like more concrete\nprecise what things are wrong with\neliseo's conceptualization of a plan\neven though he doesn't go into great\ndetail\num\nwhy should we defer to eliasa on what a\nplan looks like well one obvious reason\nis that he has explicitly written down a\nplan for death at land that seems like\nit could work\num not in great detail but I think\nthat's more than most I don't think we\nshould really strongly differ in the\nsense that if we had a plan that looked\nlike this living plan then uh earlier's\ncounter arguments would be strongly\ninsufficient to to criticize this but we\nhave nothing that remotely looks like a\na real plan\nso how much value does the document in\nGI ruin a list of lethalities actually\nprovide\nuh Elizabeth\nit's very valuable and that a core\nalignment researcher should\nspontaneously write this document\nuh Paul Cristiano not surprisingly\ndisagrees like most people insist this\nis actually emotionally aimed and\nrhetoric and pedagogy\num I disagree I think that it has a\nstronger rhetorical impact and it is\nvery uh\nit it has a\nchanged a lot of opinions but mostly\nfrom the fact that the the document is\nextremely blunt\nnot from uh the the actual contents and\nuh I don't think that uh poker no\num recognizes that the uh the lack of\nbluntness in previous communication is\nsomething that has helped people\nsubstantially back and the rhetorical\nImpact May in fact be just a um\nuh a byproduct in some sense\nis this something that will actually be\nhelpful towards solving the alignment\nproblem that is not what poker's channel\nsays but the upvotes uh on less wrong\nseem to be some kind of disagreement and\nI also strongly disagree I feel that the\nAGI room list of lethalities is a very\nvery important document towards solving\nthe alignment problem I think as an\nexample\num Nick Boston's book super intelligence\nbasically contains almost none of these\npoints and I think that Nick Bostrom\nbasically should rewrite the book to\ntake these kind of lethalsis into\naccount and I think that they are in\nfact really really important\nconsiderations\nthe counter argument uh Pocus channel\nhas is that uh Alias are called the\nideas he focused on important but that's\nnot an objective fact about what is\nimportant I think that's a fully General\ncounter argument you can just\ncounter anybody's who assess this is\nimportant by saying it's not an\nobjective fact it's just your opinion\nbecause obviously everything everybody\nwrites is just their opinion on what is\nactually important\nthere are other methods other\ndifficulties\num and uh\nthe uh the contribution as poker's\nGenesis of the document of the list of\nlethalges is just basically collecting\nof the points\num I think this is something that should\nnot be underestimated I think this is\nactually really important uh almost\ncrucial work to summarize and aggregate\num existing points\num this is these things\num have uh have been pointed out many\nother places we talked about that last\ntime but uh pocusiano here goes a bit\nfurther and says that we uh where\nthere's been argument about more\nimportant difficulties in in other\nplaces\nand this is also something that uh even\nhubingham for instance has has argued in\nthe comments three Lisa's original post\num\nmade a reply to this saying his list\nwasn't complete and are there other\nforeseeable difficulties\num and even who Winger comes up with\nfive uh uh other\num\nuh difficulties you could see in the\nfuture\num I won't go through them here\num and Elia so your taski is of course\nreally\num sees this as important work and I\nstrongly agree like having an even\nbetter version of the uh list of\nlethalsis is uh which is very valuable\num\nwhen I actually go through even who\nbring us points a lot of these are\nimpossibility results like it is\nimpossible in theory to ensure that an\nAI is aligned in this particular way and\nI think it's important in this good way\num but uh it's not at all clear to me\nthat this kind of impossibility results\nare going to be all that relevant\nso I think definitely stating that these\nare more important difficulties is\noverstating it\ngiven that this is written by Paul\nCristiano he's probably thinking in\nparticular of enlisting latent knowledge\nand\nthis has uh 10 similar difficulties I\nwent through them and uh there is a\nsubstantial overlap in difficulty one to\nseven and they are written in more\ndetails\num and um to what extent the overlap is\nuh yeah I think I disagree with poker's\nChannel about that but um hearing that\nthey are more important because they are\nmore relevant to core problems with\nrealistic alignment strategies and\nobviously this is again a matter of a\nsubjective taste in that uh Paul\nCristiano kind of naturally feels that\nhis uh personal\num best guess for how to solve alignment\nis the most realistic like that's again\nalmost by definition\num\nand like a lot of people are working on\nthings that are not existing latent\nknowledge and I think uh some of the\ndifficulties like the the part about\nregularization for instance I think\nexpecting that to be a uh uh a major\ncomponent of actual alignment is uh uh\nquite optimistic in my opinion\nsomewhat uh weekly states that uh he\nwould have preferred that uh uh the\nlisting latent knowledge States all the\nproblems up front together I think\nthat's a\nwhether it's nice to write that or not\nuh is somewhat irrelevant I think I\nthink as I've said before iliakowski\nhasn't engaged sufficiently with\nilliciting latent knowledge and this is\njust this might be just because you\nskimped the report and I think it's uh\nsufficiently valuable that he should\nread it in detail\nso the overall take is that Paul in\ngeneral uh he obviously had a lot of\nagreements with uh eliasa and these are\ngood considerations backed up by pretty\nclear arguments but they are too\nconfident and that's something he has\nsaid in many places and a big problem is\nthat elisabeth's reasoning is more a\npriority than reasoning about evidence\nand the problem with this kind of thing\nis that we are unlikely to actually\nresolve the disagreements even in if\nthere was like\num five more posts back and forth\nbetween Elisa and Pocus John we are\nunlikely to see any kind of resolution\non on major questions or even things\nthey can bet on and I think that is true\nand I think also it is very very sad\nthat we we can't actually use evidence\nfor this\nPaul would like Elisa to write his\narguments down in in more details that's\none of his core complaints and my core\nanswer to that again as I've said is\nthat\nthere are in fact links on Amazon where\nElizabeth has written this down in a\nsurprising amount of detail\nit's also stated that people with\naliasis views have often made engagement\nhard that's of course a bit sad because\nI am close to aliens in this regard and\nI hope we haven't made it hard on\npurpose but I think it's just that\nengagement is really really hard because\nit is really really hard to get any kind\nof meaningful traction on uh uncovering\nthese points and predictions is one that\nthing that would help us and that we\nhaven't really been able to get going uh\nPaul and eliasa has failed to converge\non any kind of meaningful Bits And even\nthough\num\npoor Cristiano seems to be strongly more\neager and open towards the bets the the\nreply from Elia zurichowski is basically\nthat he his uh World model is not\nparticularly constrained in what happens\nuntil we have some kind of takeoff\nhe is mostly uh\num analyzing what happens once we to use\nthe rocket analogy exit the layer of\natmospheric turbulence\nand that I feel is the central media\nlethality in AGI ruins the fact that we\nhave no strong epistemic way to actually\nprogress on figuring out what is uh the\nuh uh\nthe status of all these claims about AGI\nwe would we don't really know how to\nprogress on this we probably cannot\nprogress on this and that is the\nminiature lethality that is ultimately\ngoing to do much\nthat is all for tonight thank you and\nsee you next week", "date_published": "2022-09-22T21:45:39Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "1a9ce4b70708ce2e9dff66fc4bfc748d", "title": "262. Counterarguments to the basic AI Xrisk case", "url": "https://www.youtube.com/watch?v=hQr08RjkKv4", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 200 and uh\n62 in the AI safety.com reading group\ntonight we'll be discussing uh half of\nthe article counter arguments to the\nbasic AI X risk Case by catcher Grace\nis the lead researcher at AI impacts and\nthe person who's defining the\norganization AI impacts\nthis was posted a couple of months ago\nand we are discussing the entire blog\npost\nexcept part C which is which we might do\nnext time\nso first before we we dive into the\narticle I would like to say thank you to\ncatcher Grace for putting some kind of\nscrutiny onto these arguments I think\nit's really important and this\nskepticism is uh something that is uh\nreally high quality skepticism is\nsomething that is sorely missing in AI\nsafety\nI also imagine that it's somewhat of a\nthankless job in the sense that\nin the broad world most people are\nskeptical of AI risk but within the\ncommunity of rationalists uh the most\npeople would disagree with her so I\nbelieve it's a somewhat of a thankless\njob to um\nto present the case against AIX risk\nunfortunately I also feel that the field\nis dysfunctional in the way that\ncriticism is being handled and let me\ntry to explain how I\num envisioned this dysfunctionality and\nI'm going to use that with my\ninterpretation of what may be going on\nlike inside catcher's mind and that's of\ncourse something that is very likely to\nbe flawed but I envisioned that ketchup\nprobably has a intuitive sense that a\nlot of the basic case for AIX risk is on\nShaky Ground and none of the arguments\nare like knock down arguments but just\nshaky and\num uh feels like they they are under\nspecified and don't have enough uh high\nas high probability as other boots would\nthink\nand so how do you\num\nuh how do you answer this well the way\nit's answered in this is a in a very\nbroad uh sense where describing all the\npossible problems or most of the\nproblems but that also necessarily makes\nthe crystal somewhat shallow\nand when this is posted the replies that\ncatch you get are mostly hot takes like\nvery uh brief comments and people uh\ngoing to attacking points and discussing\npoints that seem rather\num irrelevant most of it and that in\nturn makes it very hard for catcher to\nengage productively with these counter\narguments and the end result of this\ndoes functional circle is that we don't\nget any kind of synthesis of the counter\narguments and we don't get a get\nstronger Arguments for AI risk out of it\nand we don't get clarity of our\nepistemic state\num so what can we do about this I don't\nreally have a good solution\num fundamentally\num predicting the future is really hard\nand we are bound to have uh to be in a\nreally poor epistemic state it may in\nfact be impossible to get a strong\nresolution of this\num\na few things that I'm like trying here\nis to give answers that are just as\nbroad as the questions that categories\nare raising and also\nreferring back to the literature as much\nas I can um whether this is sufficient\nor\num uh like I don't have any hope that\nthis will in fact convince catcher\num but uh uh as I said the task may be\nimpossible\nso let's talk a bit about the basic case\nfor AIX risk and uh categories starts\nout by describing her version of the\ncase for AIX risk and it is not the one\nthat is generally used in the literature\num catcher presents her own argument\nit's a second and quite beautiful\nargument but it's also a new original\nargument\num\nI think this is a mistake I think there\nare really good descriptions of the case\nfor AIX risk in the literature my\npersonal favorite would be the book\nsuper intelligence path danger\nstrategies by Nick Bostrom it's eight\nyears old but I believe it has held up\nsurprisingly well\num but catches description of the basic\ncase for AIX risk while not entirely\nwrong it has to focus in a lot of the\nwrong places and it has a lot of under\ndefined issues and I believe that when I\ngo through the um the answers that\npeople give to this criticism a lot of\nthis seem like it could have been\navoided if catcher had chosen to go with\na standard uh description of the case\nfor AIX risk instead of providing her\nown\nso her case is built of uh three Clauses\nthe first is that if superhuman AI\nsystems are built any given system are\nlikely to be goal directed\nand this is using the word superhuman\nrather than the defined term super\nintelligence and likely is of course sub\nis that like 50 51 or 99.99 and there's\na focus on several systems and there is\nnot the word AGI or general intelligence\nor something like that which is a\nfeature of most of the the standard\ncases for AIX risk\nthe second Clause if goal directed\nsuperhuman AI systems are built their\ndesired outcomes will probably be about\nas bad as an empty Universe by human\nlights\nand while I believe this is a way to\nState the case it's also like kind of a\nstrange way because the long-term values\nare not obviously that important like\num if the AI for instance kills us all\nand then optimizes the universe in a way\naccording to our values that may be good\nbut like the thing I care about is the\nAI system not killing us and so it's\nkind of like\num not orthogonal to the thing we care\nabout but it has the wrong Focus\nand finally if most goal directed\nsuperhuman AI systems have bad goals the\nfuture will likely be very likely be bad\nand again this is very different from\nthe way it's presented in the book super\nintelligence there's no talk about\ndecisive strategic advantage and and\nthis kind of thing\nand catech Grace of course is honest\nabout this more being a this entire\ndocument is more a description of what\nare the gaps in this case and what are\nsome possible counter arguments to this\nand\num not a a very strong case that AI risk\nis not is very low or something like\nthat\num\nso let's dive into the possible counters\nto the first\num Contra uh superhuman AI systems will\nbe goal directed\nwhen people talk about goal directed\nthey possibly probably think about\ndifferent things\num a classic goal directed agent is the\nutility maximizer or even more extremely\nthe paper clip maximizer and a paperclip\nmaximizer is a classical example of a an\nagent that is goal directed enough to be\ndeadly or to have uh goals desires that\nare at odds with human survival\nwe could imagine this goal directed\nagents\num categories called some incoherent\npseudo-adentic AI and presents a\nspectrum of possible uh such AIS from a\nthermostat to a utility maximizer\nI don't think this is a good description\nuh in that a thermostat is not an AI and\nit's very much not an AGI the the\nconcept of AGI as far as I could tell is\nnot present in in this article so what\nis the least uh sort of the least\num\neccentric and most incoherent aai that\nis still arguably in AI I could imagine\nsomething like a non-reflexive uh AI\nthat is using heuristics may qualify but\num but I think once we get substantially\nbelow that\num I think I think we kind of really\ncall it an AI\nwe can observe that human level goal\ndirectedness is bad because humans are\ngoals that are not enough to like I\ndon't know kill neanderthals\num\nso categories defines weak solo agents\nas agents that are less goal directed\nthan humans this may be nitpicking but I\nbelieve that humans are actually in fact\nvery power seeking so a safe level is\nnot one that is below the human level\nbut one that is a lot below the human\nlevel\nthe example that catch Grace gives of a\nweak solo agent is a system of just\nfully specified if x then y statements\nI would not call this an AI and I\nbelieve that the reason that this is not\nan AI is\nalso the reason why it's safe and in the\nsame way if we want some AI that is\ncapable of learning and capable of\ncross-term internalization and all these\nthings then that is precisely the thing\nthat makes it unsafe\num\nand that's the argument that people\ndon't want to destroy the world so they\nwill probably use weak solo agents\num and I wish that was what we're living\nin but right now people are in fact uh\ndeploying AI systems that are\nsubstantially more authentic than just\nbeing fully specified if x within y\nstatements and I think a lot of people\nare not buying the arguments for AI\nexistential risk and a lot of people are\nbuying are deploying uh AI systems\nwithout much care about whether they\nwill destroy the world\nthen there's a question of diverging\nfrom expectations and one of the\nadvantages of\nat least very uh weak solo agents is\nthat they will diverge less from\nexpectations and uh that that is true\nbut that's unfortunately also what we\nwant from the AI the point of having an\nartificial intelligence is that it'll do\nthings that we can't do ourselves\num\ncatcher observes that in many cases\nDivergence from expectations is bad and\num I agree in many cases it's bad in\nsome domains it's good and a crucial\nconsideration for whether it will be\ngood or bad is whether there is\ncross-domain uh reasoning going on\nbecause if uh if there is it may still\nbe good but that's also where a lot of\nthe threat comes from\nanother example catchy provides is\ncompanies right now they often prefer\nemployees to follow rules like the\nontological rules rather than just\nmaximizing the shareholder price\num and like uh I think sometimes it's\ntrue but far from always in particular\nif you read job adverts then they write\nthings like the application and should\nbe a self-starter and aware of the big\npicture and this kind of thing\num and remember for this argument to to\nprovide safety then it needs to be that\ncompanies always prefer employees to\nfollow rules and that's just plainly\nnot true people do prefer authentic uh\ngoal directed employees\num\nso if we imagine we have an an AI one of\nthe ones we have now and it tries to\nself-improve somehow to become more\ncoherent more goal driven in some way uh\nthat doesn't necessarily move it towards\na sensible utility function\num and I think catcher is right it may\nbe\num\nit may accidentally move\nitself to be coherent towards something\nlike that by accident is just like wire\nheading or something totally nonsensical\nbut that doesn't have any obvious safety\nimplications just because now it doesn't\nwant to build paper clips but want to\nbuild I don't know uh something really\nreally strange\nif a child's the universe with something\nreally strange that's not really safety\nanother thing that catcher suggests is\nthat it's possible that the AI could try\nto become more coherent but just fail in\nthe sense that it's trying to like it\nhas like circular preferences and then\nit tries to fix the circular preferences\nbut but feels to fix this and like the\nthe standard case for AIX risk is not so\nmuch focused on like the average AI but\nthe first AI that is actually able to\nrecursively self-improve and become more\ncoherent in Gold uh directly and things\nlike that and that way catcher's\nargument doesn't really capture the\nessence of the uh the standard arguments\nthat are used for in the case for AIX\nrisk\nand also on the diversion from\nexpectations I should say that uh a key\nargument that catcher is missing in her\nargument is that a strategically aware\nAI would just realize okay these are the\nexpectations and then we'll try to\nfulfill these expectations while\nactually being deceptively aligned\num uh and catcher of course is aware of\nthese arguments I have no doubt at all\nbut these arguments don't seem to fit\nvery much into her own description of\num uh the basic case for AI X risk which\nis a a problem for for her description\nAmbiguously strong forces for\ncolderatedness need to meet an\nAmbiguously a high bar to cause a risk\nso that's like we don't know precisely\nhow uh how strong are the incentives to\nbecome more goal directed and\num\nwhat is required for uh for an AI to\nbecome dangerous so if the two levels\nhave some kind of Gap in between them\nthen maybe we get safety from that\num\nI don't see\nI don't see that the two curves as being\nflat unfortunately I see one of them\nsloping down and that's of course I've\nshown here the the classic scaling graph\nin that AI capabilities will increase so\nit will be easier and easier for the AI\nto become more goal directed probably\nand the um the benefits that the AI as\nit becomes more capable uh also\nincreases\num\nthe incentives to become more\ncoherent seem to be really strong and\nthe requirements the difficulties in\nbecoming more coherent seem like there\nare relatively weak and I would expect\nthat\num uh even very very primitive models\nare able to reason that\num becoming more powerful actually is\nconvergently uh useful and particularly\nuseful for whatever gold the AI is\ntrying to obtain\nforeign\nbecause\nwe could imagine that humans kind of\nhave a utility function\num and if the AI has one that's close\nenough that might work out and one of\nthe underlying motivations for this is\nthat humans have different values so\num human the set of human values need to\nbe not like a point but some kind of\nspace and if the AI if we can align an\nAI well enough that it hits within this\nspace well that's ought to be reasonably\ngood like that's not obviously worse\nthan other humans decide in the future\nuh that is true but this uh space\nincludes some really bad things in\nparticular uh the desire to uh like kill\npeople who are less intelligent than\nyourself and like Hitler would be an an\nobvious example of uh someone who\ntechnically has human values but who we\nreally really wouldn't want to hand over\nall power in all future too\nclaims that this is in fact not obvious\nthat this would uh turn out to be bad\nand if it's true then we should worry\nabout more General problems than AI like\nI I claim that is this effect obvious\nthat Hitler is bad and also that okay\nsure it is a more General problem but it\nis one in fact that AI might solve in\nthe sense that we have ideas about how\nto uh like Implement alcoholics related\nEvolution and uh like uh long reflection\nand things like that so it may in fact\nbe things that AI could solve\npotentially\num categories further identifies a\nlarger region uh around human values\nwhich is that which can be reliably\naligned with typical human values via\nincentives in the environment\nand the problem with incentives in the\nenvironment is that if an AI is capable\nof obtaining a decisive strategic\nAdvantage then incentives will not work\nso that region does not buy us very much\nsafety if we don't get the AI to be\ncorrectable or maintain power over in\nsome way\nso if we imagine that this\nregion around human values then it's\njust basically an empirical question can\nwe get an AI that is close enough to\nhuman values\nso\num one of the reasons why I'm less\noptimistic about hitting this goal is\nthat uh it's really hard to Define human\nvalues and that means that hitting a\ncall that you can't Define can't see\nsounds really difficult and also that\nthe targets seem really really small\ncompared to the space of all possible\nvalues you could have then reasonable\nhuman values is a very small Target\num catcher also has like a in in several\nof these arguments she has a small\nvignette or something where she\ndescribes a world what would look like\nif this Gap was uh was important and in\nthis uh most of it makes a lot of sense\nbut in this in particular she describes\na world where small differences in\nutility functions do not turn out to be\ncatastrophic and the reason she\ndescribes is that an AI where uh these\nsmall differences don't matter will tend\nto be courageable\nI do not think that courageable military\nfollowers from this at all I think\ncourage ability is a separate problem\nand I think a world where we get almost\nthis uh utility function I would imagine\nthat would be something like our\ncoherent extrapolated volition uh\npossibly in some slightly different\nprocess or something like that\num I don't think I don't see courage\nability from this\nthe difference between the AI and the\nhuman values may be small\nfirst catch a grants that misoptimize us\nif we see those that could\nhave a very large difference in values\nso we're kind of disregarding meter\noptimizers for now so how do humans\nlearn values well we learned it mostly\nby observing examples and that's kind of\nthe also the idea that we could have an\nAI learn values in basically the same\nway\nso one thing is learning values another\nis like obtaining these values how do\nhumans in fact obtain our values uh so\nthe values that we endorse and not just\nlearn to recognize in a sociopathic way\nlike what are the values that other\nhumans have\nI think it's mysterious and I also think\nthat the way humans do are very\ndifferent from the way AI training like\num from the way we're training uh like\ngpt3 or something like that it seems\nvery different from what humans do\nuh catcher disagrees and believes that\nfor the things AI currently uh learn the\ndifference between what um what humans\nlearn and what AI learns seem rather\nsmall\nand\nI would agree to some extent in in many\nsimple cases chess would be an example\nof where learning the values of Chess is\nkind of the same but that's because\nchess is a really easy example and in\ngeneral we choose to look at examples\nthat are really easy we don't choose\nanything nearly as philosophically\ncomplex as human values\nand that's why we can't generalize from\nbeing able to solve the easy questions\nto being able to solve the hard\nquestions\ncatcher asks for catastrophic utility\nfunction that is very close to you human\nutility function and after a bunch of\ntraining this AI has this and that is\nstill catastrophic\num so\num I don't think that an AI has it as\nlike the hazard is um\nuh it can have it in two ways it can\nhave its assets these are the values\nthat the AI endorses or it can have a\nmodel of the humans uh\nand and then uh have it as another fact\nabout the world that it knows so what's\nan example an example uh this is\ncheating but uh one of my objections\nwould be that it will in fact not be\ntrying to learn human values in practice\nit will be doing something like\nmaximizing profits\num because in general people are trying\nto to aim for human values but even if\nwe did that we could easily end up with\nsomething like\num trying to maximize approval for from\nsomeone who is who has some very uh uh\nnot nice ideas about human uh\nflourishing we could see a lack of value\nlearning or something like that\num and I think there's a bunch of extra\nthings we could see that are close to\nthe human utility function but for some\nreason we're not we're not getting it\nthis is intimately related to the\nfragility of value thesis by Elisa\nutkowski\num and um catcher\nsummarizes this as a value is not\nresilient to having components of it\nmoved to zero I don't think this is a\nperfect uh summary in that the idea is\nnot so much that an element is moved to\nzero but more that is not considered in\nthe first place like a modality or\nsomething like that that is not\nconsidered\num deep learning is given as an example\nof something that can learn more things\na lot better than the more directly\nspecified good old-fashioned Ai and\nwhile that is true I the argument has\nnever really hinged on whether the the\nAI would be able to know our values like\nthe uh even some of the very earlier\narguments and certainly uh Boston super\nintelligence assumes that as a future\nsuper intelligence would know our values\nvery very well possibly probably better\nthan we know ourselves but would just\nnot care\num and can you suggest there may not be\nmany terrible value systems adjacent to\nmine and I don't think actually that the\nthing that an AI would learn from\nobserving catches actions would\nnecessarily be a value system I could\nimagine the index uh indexology uh could\nbe wrong and like the references to\npeople uh to to the world and time and\nthings like that I could easily imagine\nlike you miss some kind of modality uh\nlike embodiment or like if it's only\ntext then you are missing basically all\nthe modalities\ncatcher uses a face analogy uh to\nsupport a point and uh are images of the\nhuman face fragile and we can see\nobviously here uh some different\ndiffusion models that are generating\num\nuh some uh\nsome faces and the faces in general\ndon't have the property that some of\nthem are missing noses or things like\nthat I mean sure you could point at this\nparticular Abomination here in the\nbackground and that probably looks like\nsomething really bad but in general\nthere are no\num\nuh we don't get the thing where there is\nno nose or anything like that\nI think the example is very different\nhuman values and if pictures of faces\nare very different in this in the fact\nthat we do have a digital example of a\nhuman face and we don't have a digital\nexample of human values so I think the\nthe analogy is very genius\nknit Suarez answers in uh a quick\nTwitter reply that the maximally face\nface like image which is very different\nfrom what these diffusion models make\nthat doesn't look human at all and I\nthink they're so much talking past each\nother because Ketch is talking about\nwhat the model knows and need about what\nit maximizes\num\ndaily also points out that learning\nhuman values is very different from\nhaving human values\num\nthere is in the response to catch's post\na lot of it has focused on this face\nanalogy\num and I think it's um I think it's fair\nthat people criticize the feelings of\nthis analogy uh because Katie says it's\nvery nil because and in that case\nobviously you need to\num uh like like it's uh then you need to\nbe able to defend it on on several\ndifferent uh levels and I think the\ncriticism is reasonably Fair\nif you just said like this facet is\nslightly analogous in this way then\num uh people perhaps would have focused\nless time\nnow we come to short-term goals because\nuh there is some kind of assumption that\nif you have a very myopic AI it will not\nbe dangerous we will need some kind of\nlong-term goals in order for the air to\nbe incentivized to taking over the world\nuh catch explicitly writes that\nlong-term goals are required and that's\noverselling the case somewhat\num because we also want the ahp like low\nimpact robust collusion don't do a\ncausal reasoning and several other\nthings but um in general the the\nargument is is reasonably sound like in\nparticular if you make the AI have short\nenough Horizon then it does in fact make\nsense\num but\num then if you say short enough then it\nbecomes a tautology\nso longer term goals those could arise\nnaturally they could also arise through\nMesa Optimizer\num do humans have long-term goals catch\na kind of greatly writes that yeah\nhumans seem to discard the future a lot\nin decision making and I think this is\njust plainly untrue humans do in fact\nvery much care about things that are\nmore than an hour into the future\num and I think especially if we are\nlooking for like an endorsed\nimplementation of our values then longer\ntime goals are almost certainly going to\nbe a thing\nI think it's just\nit's untrue that humans don't have\nlong-term goals we do really strongly\nhave a lot of long-term goals\ncatcher uses the example of a timed\nchess match I don't think even this very\nvery simple example holds because at\ntime chess match if you have like five\nminutes that's obviously Myer week but\nit is in fact not uh because humans care\na lot more about other things than not\nwinning than just winning chests they\ncare about not being known as a cheater\nhenceforth and like if your Google chess\netiquette you'll find a lot of things\ndo's and don'ts that are not really\nrelated to this time chess match but\nit's obviously related to\num\nhumans having goals that are to not be a\njerk because we these goals Point uh\nmuch further than just across this time\nchess match\nand also in particular where a Chinese\nchess match is one example and sure that\nmay be very uh myopic very short term\nbut if you give an example that 90 of\nall tasks won't kill us that really\ndoesn't matter give us very much safety\nright we want something more uh\nsomething like all\num relevant goals can be handled by\nshort-term uh myopic AIS I don't think\nwe have anything like that\nknit Suarez also uh\ncauses narrow optimizers can exist but\nwon't matter and I think there's a\nfundamental tension in this argument and\nthe argument we saw previously that AIS\nwill reflect our values very well and\nand they will also be extremely myopic\num in particular that all AIS will\nreflect our values very well and all AIS\nwould be extremely myopic catch it\ndoesn't use the word all that's my\ninterpretation of the argument but if\nyou try to remove the word all then the\nargument for safety kind of falls apart\nbecause like if you have some AI or even\nmost AI reflect our values very well but\nsomeone just want to maximize paper\nclips that could be really bad and most\nof them are extremely myopic but a few\nhave long-term planning and want to take\nover the world that also sound really\nbad\nso you need either all or almost all in\nthis argument for it to provide any kind\nof safety\nuh\nthat was one more point\nthat I forgot\num finally the um\nthe last is an example of\num as the trying to use a substitution\nuh with the word uh Ai and instead use\nuh to see if it holds for corporations\nand uh catches arguing that in fact the\nargument carries and that proves that\ncorporations are an existential threat\nand that is like a reduction or\nsomething like that\nso let's try to use the same argument\nfor corporations the first any given\nCorporation is likely to be gold Direct\num I somewhat accept this uh I think uh\ncorporations cannot be as coherent as a\nsingle AI a single agent uh could be but\nin general I think that there is in fact\nsubstantial pressure for an uh for any\ngiven cooperation to become more and\nmore goal direct so I buy this Bulls\nthe second and here I'm just gonna\nstrike through the word superhuman and\njust if gold directed corporations are\nbuilt their desired outcomes will\nprobably be about as bad as an empty\nUniverse by human rights so we'll get\nback to the Superhuman part but for now\nlet's just consider the standard\ncorporations\num\nis this true well the desired outcomes\nif they desired new outcomes are just\nmaximizing shareholder values then I\nguess I accept this what the uh the uh\ncorporations want is something just\nabout as bad as an empty universe\ncatcher goes further however and claims\nthat we don't have a way to point\ncorporations towards human goals and I\ndo believe this is wrong I think we can\nin fact uh assert effective control over\ncorporations and we can change them like\nif we have a corporation then we can the\nhumans in charge can say we care about\nuh employee happiness we care about\nclimate change we care about uh other\nthings that just shareholder\nmaximizations\num\nthird if most goal direct corporations\nhave bad goals the future will likely\nvery likely be bad\nuh and I accept with some reservations\nin particular I don't think we can say\nvery likely I think at most we can say\npossibly and uh the the two problems are\none as we said before humans can correct\ncorporations and make them be about more\nthan just maximizing shareholder value\nand the second thing is that\ncorporations will have a hard time\ntaking over power even if they would\nlike to do so\nis answering that the reason why they\nare not going to be able to take over\npower is because corporations are bad at\ncooperating I think this is a a\nreasonably small part of it but to some\nextent I do in fact buy the Bulls so if\nwe let corporations just take over all\npower in the universe and we let\ncorporations just profit maximize then I\nthink we could certainly see a very bad\nfuture so in fact I don't see this as a\nreduction I think to some extent it may\nin fact be true\nnow I pundit the word superhuman\non the previous slide so let's talk\nabout what does it mean that a\ncorporation is superhuman\num that is not really obvious to me you\ncould interpret it in several different\nways I think the intended way is that\num is a corporation that has some kind\nof internal governance structure and uh\num bylaws and things like that that are\nself-reinforcing in some sense so that\nthe um\nthe company is capable of acting even\nagainst the interests of all the\nemployees but the employees are unable\nto coordinate against the cooperation or\nor something like that but it could also\nbe like uh more precisely that it's a\nsuper intelligence compared to the\nemployees or that is super powerful in\nother ways than intelligence compared to\nthe employees or something like that it\nis somewhat unclear\nNate Suarez answers to this that humans\ncan't stack into superhuman corporations\nbut if we could then yes we should value\nload and stack into is a bit unclear\nwhether he's talking about participating\nand controlling and I think again this\nis an example of what I would consider a\nlow quality hot take in that like this\nis not me summarizing uh Nate Suarez\nthis is literally all Nate Suarez writes\nabout this and I think this kind of it\nreally doesn't drive uh our\nunderstanding forward\nso uh catcher concludes a counter\nargument is that corporations are not\nsmart enough but in that case the\noriginal argument needs to include that\nand I I agree in fact the original\nargument needs to include that but again\nketchik did not uh present a standard\nargument she presented her own and was\nin fact a big problem with this that it\ndoesn't make a reference to uh\nintelligence in the same way that\nbostrom's argument talks about super\nintelligence\num and I think it is in fact very clear\nthat it should be added when we're\ntalking about a superhuman AI then we\nare obviously talking about intelligence\nand that needs to be uh uh\nand that probably needs to be clarified\nwhat does it mean to uh to have a\nsuperhuman uh cooperation that we are in\nfact talking about intelligence and if\ncatcher had to uh like\ndo all this all over this fine every all\nof our terms painstakingly then this\nwould have taken way too long for her to\nwrite and so the the things she should\nhave done instead of introducing new\nterms and not defining them is to reuse\nthe standard terms that are used when\nother people present the basic case for\nAI X risk\nthat is all for today thank you and see\nyou in two weeks", "date_published": "2022-12-01T22:16:31Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "cb8ba1c2b88c8640e045bf476edf2426", "title": "274. Conjecture Internal Infohazard Policy", "url": "https://www.youtube.com/watch?v=gFP5fCLVdtY", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 274 in the\nAI safety.com reading group tonight we\nwill be discussing the post conjecture\ninternal info asset policy by Conor\nLeahy and others\nconjecture is a relatively new AI\nalignment organization led by Conor\nLeahy and the other authors are Sid\nblack Chris camell and Andrea miyachi\nthis is a post on listeron which is just\nunder a year old by now\nso let's first talk about what is an\ninfo Hazard the term info Hazard was\ndefined by Nick Bostrom in his 2011\npaper a typology of information hazards\nin which he defines an info Hazard as a\nrisk that arises from the dissemination\nof true information that may cause harm\nor enable some agent to cause harm\nand because this is Nick Bostrom then he\nobviously has like this huge typology\nwith 33 distinct types but in this\narticle we are considering a much much\nsmaller subset\num only the the risks that come from\naccelerated AGI timelines and this is\nboth only like one kind of info Hazard\nand it's also restricted to one specific\ndomain so like if it's information that\naccelerates I don't know bio weapon\ntimelines or something like that then\nthat's not really relevant to this\nyeah and I think this raises the obvious\nquestion what if uh an info Hazard is\nuncovered that doesn't uh uh fit into\nthis um like maybe someone at conjecture\nfinds a prompt that is really really\ngood for like creating uh uh\ncontinuations that ends up with like bio\nweapons or something like that or worse\nthan maybe it's a new bezel is\ndiscovered or something like that in\nthat case we would probably want to do\nthe opposite of what this is in\nparticular we do we would not want to\ntell Connor if we discover a new uh\nBasilisk\nuh so the this specific kind of info\nHazard is one that other people are\nobviously interested in about and two\nweeks before this post was uh written uh\npublished to less wrong\numkowski had a kind of similar post\nabout info hazards where he asked for a\nname for this uh like a social Hazard\nand the one that the one that he\nbelieved was best was an expo Hazard I\nmust admit I haven't seen other people\nuse extra Hazard except here so the name\nhasn't really caught on and\nwe'll just continue referring to it as\ninfo hazards\nand one of the things that I\num\nstruck I think probably are\nalso covered by this info Hazard is like\nhow does\num conjecture uh\nuh deal with confidential information\nlike for instance pivotal acts if they\nget information about those\nlet's start by talking about the goals\nuh\nof of writing this document conjecture\nlists those goals but I would like to\nbefore that compare the the default\nsituation where conjecture doesn't write\nthat post because that has in fact some\nadvantages like this is Streisand effect\nwhere\nwriting that you have a\num\nand that you potentially have very\ndangerous information is something that\ndraws attention to yourself\num and it is a possibility that just\nnormal business confidentiality like all\nother businesses have would have been\nsufficient uh\nuh this is of course my thought solution\nor something that can that the\nconjecture talks about uh explicitly\nhere\num so one of the goals is to encourage\naccountability among the people who may\nspread information and one limiting case\nthat I have uh considered in detail is\nChris Leong uh who um on Petrov Day in\n2020 entered the code and took uh took\ndown less wrong uh mostly because it\ndidn't take it seriously and he was\nmaybe also cheated but he also didn't\ntake it very seriously and this kind of\nthing is the kind of thing that could\nhappen uh with uh with dangerous\ninformation and the question is like how\naccountable should we hold him I think\npeople have mostly forgiven Chris Leung\nand I'm actually unsure whether that is\nactually correct\na second goal of publishing this is\npromoting cross-organization\ncollaboration\nI found two people Tammy who has\nsomething like lock posts that\nreferences and the new organization\ncalled orthogonal that doesn't implement\nthis policy but just say this is\nreasonable and apart from that I haven't\nseen any other organizations that have\nlike made any kind of clear reference to\nthis that doesn't mean that there is\nnothing there right there could\ncertainly be a number of people who have\ncollaborated more with conjecture\nbecause they know they have this policy\nand perhaps also uh\nuh people taking up this kind of policy\nwithout making it explicit but nothing\nhas been written as fast I can tell see\nand finally the goal has been to start a\nconversation about uh info Hazard\npolicies\num this is something we've seen\npreviously work out in rare cases for\ninstance uh there was a push towards\nhaving a policy on publishing your AI\nalignment strategy for the AI\norganizations and that did in fact turn\nout to in the end cause anthropic and\ndeepmind to publish their\num their views on strategy that required\na lot of push like I remember I\npersonally uh talked with a number of\nthe of the people at these organizations\nand questioned them about why they did\nnot publish anything like that and\neventually they did in fact publish this\num\nand I think uh these these organizations\num I think there is a very wide uh\ndiscrepancy between the organizations in\nhow uh how they would react to these\nkind of uh info answers I think deepmind\nand um I don't trust at all I think\nthese organizations if they learned uh\nabout secrets that uh from conjecture\nand how to train AI faster they would\njust you know build it as fast as\npossible\num orc and anthropic I think are both\nquestionable all and are actually\ntraining AIS and Tropic I would be very\nhesitant with sharing information with\nthose the alignment Research Center and\nof course conjecture seem like\norganizations that\num like have their together uh even\nif I don't know enough about the\norganizations to really trust them and\nMiri is the only organization I would be\nconfident in actually informing them\nabout this kind of Expo hazard\nwhat are the considerations uh for this\nuh for this policy\num one of them is that we need to be\nable to coordinate that's required to\nshare information and ensure we're not\nworking at the same thing and like we\nare prioritizing towards the the right\nareas some kind of in exchange of\ninformation that is potentially in for\nhazardous is required and I think the\nreason why this is required in\nparticular is because we are very much\nin a race against the capability labs\nand that means that\num\nthat we cannot accept uh I don't think\nshould accept a too limiting uh info\nHazard policy because that will just\nmean they'll be too inefficient\nanother consideration is it's hard to\ntell in advance how dangerous and info\nHazard is and that's of course a problem\nboth for conjecture and for the people\naround them uh and potentially\nadversaries\num so um that's an ameliorating\nameliorating factor in my view\nthere's a consideration that secrets are\nhard to keep in particular if you have\nto keep them for a long time and if you\nhave many secrets\nI don't actually think that these two\nconsiderations are very strong like the\nsure it matters it's harder to keep a\nsecret for a long time but like the\nnumber of people who know the secret is\nway more important and like how what are\nthese kind of people are they different\npeople are they\num like the perfect thing you want to do\nis something like the CIA that uses\nmoments very much like this\nstereotypically completely non-diverse\nand that makes them much more capable of\nkeeping secrets\nthey're talking about a trade-off\nbetween Safety and Security at least it\nmay be a trade-off because in the\nupwards sealing of information is in\nfact not really a part of a trade\ntrade-off there isn't as fast I can see\nand always trade-off between Safety and\nSecurity\num at least uh there may be something\nthat I haven't thought about but um and\nfinally functional decision theory is\ndescribed as an important consideration\nfunctional decision theory is that you\nshould treat your decision as the output\nof a fixed mathematical function that\nanswers the question which output of\nthis very function would yield the best\noutcome\nso in order to uh Analyze This there are\ntwo things that could be the output\neither a policy or a decision to leak or\nnot to leak\num and like precisely how you would go\nabout analyzing this depends on\nprecisely what they want to\num uh\num\nhow this analysis goes so I didn't this\nmay be something we can ask Chris\nand finally of all these considerations\nI think they're all important but I\nthink that's one information when one\nconsideration that really trumps all the\nother and that is like we need to get\nsome kind of a measure of uh what things\nare info hazardous and how info\nhazardous are they that seems to me as\nlike the key consideration\nokay what is covered and what is not\ncovered in particular the thing that is\nnot covered by that is not one of these\ninfo hazards are PR hazards and\nreputational Hazards I think that's a\nreally good clarification because a lot\nof companies would certainly keep this\nkind of thing uh confidential\num\nthey try to strive for meter honesty and\nthat means to be honest about how honest\nyou are\num and I had some questions about that\nbut Ilia suit wants that it's dangerous\nto ask questions about meter honestly\nbecause you can very easily get to Pro\nlike object info object little\ninformation like if I ask them about\nlike\nif Miri came to you with a like a\npivotal act uh would what would they do\nabout that how would they react to this\nthen it's very easy to choose this kind\nof uh information\num search probe for what are they\nactually talking do they have this kind\nof secrets\nso I'm unsure about the best way to to\ngo about this I'm also unsure about the\nbest way to go about meter honesty in\ngeneral and part of this is because the\nincentives are very strong\num like obviously the incentives inside\nthe organization are extremely strong\nlike corner for instance I expect you\ncan probably fire people at will in\nElizabeth talk about this to talk about\npeople who have like a gun and of course\nthat's the extreme example but in fact\nif Conor wants to fire someone and then\nhe probably just can so meter honestly\nis very hard in this kind of not only\ntruth-seeking environment\nand of course there are other people the\ngovernment probably can go to uh an\nemployee in conjecture and say hey tell\nus this or you will go to jail and\nyou'll get some kind of gag order\nagainst it and that seems like an uh\nanother problem in general I think me to\nhonesty is hard enough that if you want\nto have like a policy on this you should\nreference some existing work on what\nthis means on it what gives me honestly\nactually\nand then finally on what is covered uh\nthey Define it using a limit like the\nleast info hazardous thing that is still\ncovered by this is letting it be known\noutside of conjecture that they are\ninterested in using building or\ndeploying a technique that already\nexists in the literature in order to\ntrain networks faster\nI think this kind of formulation with\nlike this limit is here is really great\num I'm not sure I really care very much\nabout training that works faster I think\nthat that's like the framing that\nconjecture is using and that a lot of\npeople are using and I think like\nsmarter more intelligent is really\nbetter in some way\num\nuh in in practice of the thing that I\nactually care about is whether the\nthere's a way to train a network to be\nmore capable among customers six\nstrategic cognitive tasks uh that's\nthat's like my horse my idiosyncratic\nview on aict that this is in fact the\nthing that we should care about\nso the rules that conjecture setup uh\nhave three levels of disclosure secret\nprivate and public uh well with the\nsecret being the default and they will\nthink about this after trialing for some\nmonths\num\npen pays in the comments were apply that\nthis there's a process for changing\ninformation between these level and that\nseems really like it's a lot of work to\nchange something from private to secret\num\nuh\nthis information is that three there are\nthree levels and three types of\ninformation repositories projects and\nideals and the projects have access\ndocuments with some uh information about\nlike the secret uh it looks like\nrepositories don't have access documents\nand I think they should have that I also\nthink that some of the uh rules if\nthey're written like uh meant to be uh\nuh understood by a lawyer or something\nlike that like these kind of rules end\nup becoming like legal rules in some way\num and then there are some formulations\nthat need to be tied up like knowing\nabout a secret knowing as secret uh uh\nperhaps two different things\nuh in this rule Set uh Connolly has a\nvery special role he's the appointed\ninfo Hazard coordinator he has access to\neverything\num\nand it's like it's unclear precisely how\nhe's supposed to interact with it like\nhe could uh like Donald Trump uh who\nclaimed he had Declassified uh uh secret\ndocuments in his head and it's not\nentirely clear from this whether Conor\nactually has the right to just\ndeclassify things in his head and if he\ndoes have that uh then\nand then of course the uh the rules uh\nbinding him are much less strict\nBen pays also in the comments that this\nis a potentially very problematic\nposition for Conor because a CEO may not\nbe available uh at all times for\nemployees to talk with uh and I think\nthis is putting a substantial amount of\nresponsibility on the CEO who like have\nmore important things to do\nhow do they deal with policy violations\nwell they hope people in conjecture sign\nndas I think getting these ndas actually\nenforced seem really really difficult\nlike if we get something if con if Conor\nor conjecture believes that something is\nimportant enough that it is info\nhassleless that probably means that it\nis something that is strategically\nvaluable and that and for that reason in\nthe A's by definition don't cover things\nthat are in the public interest\num and whether something that is in the\nNational interest is also in the public\ninterest is an interesting question it\nis not clear that these ndas will hold\nup but I'm no lawyer and of course if\nthey did try to enforce them then the\nstressing effect would certainly kick in\nand uh for the policy violations\nbasically Conor has the final uh say he\nStakes his reputation on uh under the\nhis judgments uh mistakes in the\nbeginning are of course acceptable and\nuh as he says nobody will be fired for\nraising a concern in good faith I think\nthought that was a kind of a strange\nformulation like obviously the thing\nthat I would expect him to write is that\nno one would be fired for admitting and\nnot very serious mistake in disclosure\nduring the first couple of months but\nthat's actually not what he's\ntechnically writing\nand this also uh uh this policy is also\nin charge in effect for for the uh the\nleadership of conjecture and it's\nGabriel who will initiate a process\nagainst Conor\nand uh there is a final uh remark that\nthe policy needs to be practical and\npeople who are doing experiments in\nconjecture which they're doing all the\ntime they need to look at each specific\ncase and see okay this is actually not\nreally dangerous at all and then no\nreview is necessary and I think this\nkind of practicality is probably\nrequired\nuh conjecture has a quarterly policy\nreview uh I won't go through it here but\nlet's just I'll be really interested in\nwhether this is a thing that happens and\nwhether it's something that actually\nfinds issues uh because I would put at\nleast a non-zero percentage probability\nthat this is the kind of thing that\npeople just forget because everybody's\ntoo busy\nfinally there's a chapter about the\npsychological safety that I think is\nimportant because Secrets do take some\nkind of emotional toll and they are\nupfront about this and people get\nstressed and isolated if they're having\nto do with secrets and this kind of\nstress and isolation make people more\nlikely to uh than uh give up on the\nsecrecy and\none of the things that they're trying to\ndo is to not have people only working on\nsecret projects so if someone if their\nwife asks them what are you working on\nthen they can actually say something\ninstead of zero having emotional support\npeople\num seems interesting and like how was\nlike the social structure of secrets in\nconjecture\num I think that's all of it is good uh\nuh and I'm I'm happy that they are\nwriting about this I think one of my\nfinal comments uh on like the big\npicture of uh this uh info has a policy\nis by comparing to government and\nMilitary security clearance systems and\nthe way these work is that they very\nvery heavily rely on background checks\nthat's a very big thing in order to get\naccess to class five documents and there\nare a number of Dimensions you can be\ndisqualified on for and prevented from\ngetting a security clearance mental\nhealth issues are problems Financial\nissues like if you have unpaid bills or\nsomething like that then you can't get a\nclearance if you've ever used drugs or\nyou have any foreign contexts and all\nthese kind of things and I expect that\nthere is a lot of problems\nlike I expected basically no one in\nconjecture would be able to get a\nclearance and many of them would fail\nfor many different reasons and I think\nthat's not just a thing in conjecture I\nthink it's in the alignment community in\ngeneral\num and I think we are a really bad\nCommunity for dealing with this kind of\nsensitive information but that doesn't\nabsolve us of the obligation to actually\ntry\nthat's all I had for today\num I think now instead of ending the uh\nthe recording because Chris Kamel is\njoining us soon I would like to\num\nask people in the reading group for\nquestions uh what kind of what should we\nask uh Chris Kamel when he joins us in a\nmoment I've written a couple of things\ndown but I also like to take your\nthoughts does anyone have a question for\nChris\nuh yeah I think I brought this up uh in\nour last meeting it's just that in my\nopinion most people who are interested\nin AI safety aren't\num employed in the field I mean I'm\ninterested in it and I'm not getting\npaid for it so like how would someone\nlike me find out\num about like I I in my mind there's\nthis pretty big area of information\nwhere you might not want to share with\nthe general public but everyone who's\nworking on alignment really should know\nabout\num\ndick and I was wondering how you\ndisseminate that information like that\num because I think it's really important\nthat we're all on the same page and\ndon't have uh duplicated efforts in\nterms of alignment research\nyeah I think that's a good uh good\nquestion uh I'll write it down\num and and I think we will ask Chris\nabout how to deal with this kind of info\nHazard like uh\nbecause it is quite unclear people who\nare\nlike uh independent uh alignment\nresearchers how they actually fit into\nthis\nlike one way would be to have like a\ncommunity uh info Hazard coordinator or\nsomething like that who like so someone\nwhose job is like if someone in\nalignment have an idea that could be\npotentially info hazardous then they\ntell that info Hazard coordinator who's\nlike not employed at conjecture but also\nlike uh like community-wide whether that\nis a position that could make sense\nyeah\num I'm not clear just from this paper\nand from the\ncouple of minutes I spent looking at the\nconjectures website just what their\nproduct is\nis that a sensible question to put it\nlike that yeah um I think uh there\nprobably is two things\num\nalignment research and some\nmiscellaneous AI things that they\nstumble upon like while they are working\nwith these then they had some kind of\nbetter way of creating I mean I think it\nwas like text-to-speech or something\nlike that\num and then they commercialized a model\nthat was doing text to speech\nor something like that\num so I don't think there is any\nobvious\num\nyeah I remember\num hearing some of Conor's interviews\nfrom a couple weeks ago and he mentioned\nthat um they were trying to build some\nkind of AI system that used large\nlanguage models as a component but not\nas the end product\num so I think there's still one of their\nend goals is still like just AGI in\ngeneral but it's supposed to be safer\nthan large language models or you know\nother systems uh like that\nyeah I think that sounds like a\nreasonable way to incorporate\nokay\num so I think Chris is joining us can\nyou just excuse me for just a moment\nI'm back and\num\nhello Chris\nhello hello great to have you we are\ncurrently uh recording\num thank you for joining us\num for these questions\num\nso we do have a a couple of questions\nthe first question that I would like to\npost is uh the conjectures info has a\npolicy requires a or talks about a\nquarterly review\nand you can see here um and my obvious\nquestion is have you done these reviews\nand have you learned anything in these\nthat you can share like how well is the\nuh info Hazard policy actually uh is\ndoes it work for you\nyeah thanks for the question and just\nfor this opportunity to talk about the\ninto hazard policy in general\num so we have done these reviews and the\nbig thing for us has been kind of\nimplementing the security protocols on\ntop of the\num kind of more informal verbal policies\nso when we first put this into place we\ndidn't have a security team and the best\nwe could do on kind of data separation\nwas uh in Google workspace like\ndifferent shared drives and setting\npermissions levels and different slack\nchannels and setting permission levels\nand different GitHub repos and setting\npermission levels there and we've now\ndone a lot more to kind of segment\naccess to model weights across different\nparts of conjecture to split out kind of\nproduct team from engineering team and\nwho can see what uh and yeah built a lot\nmore of the kind of technical back end\nof the policy\nanother thing that we've kind of talked\nabout quarterly is just is it working\num there are projects that I do not know\nabout at conjecture and that has been a\npretty strong litmus test for us\nthroughout uh that said there have been\nopportunities where people have shared\nthings internally that have kind of gone\npast the secret categorization and\num we've noticed the hopes there and\nthen there's also been times that things\nhave been private which is categorized\nspecifically within conjecture where\nsomeone has spoken to someone who's kind\nof not in the private group about it and\npart of what was not super well\naddressed in the info Hazard policy was\nhow to deal with those edge cases or\nsorry not edge cases just like slip UPS\num\nI think the original document said\nthings like it would be reviewed and\nit's a fireable offense if you can do if\nyou do this kind of thing\num we have not fired anyone from this\nit's tended to be that these accidents\nhappen in really innocuous ways rather\nthan people kind of explicitly spilling\nsecrets and I think yeah the policy was\njust a little prohibitive and strict as\nwe first wrote it so we've gotten kind\nof lacks on that\num other things that we've changed about\nit uh we haven't changed this one yet\nbut we're currently discussing whether a\nfourth level is needed\num there's\nsome differentiation between the secret\ncategory and the private category that\ncontinues to feel a little restrictive\nfor how we want to set up team access\nthere are uh some separate there's\nthere's some divisions within that about\nhow we want things to be shared such\nthat it might make sense to have like uh\nsecret private semi-private in public so\nwe're talking about a fourth level but\noverall it's worked pretty well I think\nconjecture has been tight-lipped about\nmost of the things that we're doing that\nour capability is advancing and that's\nthe point the input Hazard policy is to\nprotect your ability to work on\ncapabilities without letting those\nSecrets spill\ngreat\n[Music]\nlo mein you had a question\nuh yeah uh so\nthe question was that given\nhow important it is for the alignment\ncommunity in general to not have\nduplicated effort given the low number\nof people working on alignment in\ngeneral and also given that a lot of\npeople working alignment aren't\naffiliated with known groups a lot of\npeople are doing it independently or\npart-time\num how do you plan on like disseminating\ninformation to Independent researchers\num\nwho aren't affiliated with a known group\nit's a good question and how yeah and\nhow important is it in your uh to for\nfor us to do that in your opinion yeah\num yeah good question and I know there's\nbeen some debate about this unless wrong\nrecently particularly around\ninterpretability research\num\nso we are on the side of caution and\nthink that it's very possible for\nconjecture to be met negative if\ncapabilities that we're working on leak\num and so we definitely err on the side\nof sharing a lot less at the expense of\nnot having a super collaborative\ndiscussion about the projects that we're\nworking on\num we've had slightly different stances\non this in the past like there was one\npoint where we're trying to do research\nSprints where we were publishing shorter\nless comprehensive uh looks into what we\nwere\ninvestigating\num these are all things that we felt\nwere not kind of capabilities pushing\nbut at this point we're pretty much\npublishing none of our research not\nbecause everything we're working on is\ninfo hazardous but just because the\ntrade-off right now between doing things\ninternally and Publishing externally\num has us you know undervaluating the\nthe sharing in communities\nwhat I said about conjecture being\nnegative is roughly we don't think any\nof the things that we've done on an\nalignment front have significantly moved\nthe needle in such a way that if we want\nto do one tiny thing that sped up\ntimelines right now\num we'd be net positive so we think it's\nvery easy for us still at this stage to\nto move into the negative side and want\nto kind of continue to be cautious and\nprotective there\nis a bigger question here which is that\nconjectures research has now\nConsolidated around one particular\nagenda cognitive emulation and we've\nwritten very little on this publicly I\nthink it's probably in our interest to\nstart writing more because the vision is\ngetting a little bit more crystallized\nand it's likely worth it to kind of put\nthat out in a public way that people\nstart poking at\nso if I can uh expand a bit on La\nMaine's uh question let's say that an\nindependent AI researcher uh let's say\nlamine concretely comes up with some\ninsight related to to this agenda uh\ncognitive immolation\num and then he wants to he says this\ncould be info hazardous\num who should we reach out to is there\nsomeone in conjecture that he should\nreach out to oh I see\num yeah I misunderstood that part of the\nquestion thanks to clarifying uh yeah so\nthis actually has come up in the past\num I've been at an eag where someone\nshared with me hey I think I have\nsomething info hazardous\num is it potentially worth someone at\nconjecture's time to talk about this\num I think it really depends uh\nwe don't have a ton of bandwidth\ninternally they have conversations with\neveryone who has a research idea and we\nget a lot of people sending messages to\nhello at conjecture which is kind of our\ngeneric inbox\nsuggesting things that they think might\nbe useful for us\nuh it's probably better to try to get\nthe ear of someone who's kind of trusted\nor respected in the alignment Community\nwho's close to you and kind of get that\nview first and then if it seems like\nmore relevant to the stuff that\nconjecture is working on\num then you know bring it over to\ndiscuss with us\num if someone kind of needs a generic\nopinion about is this thing info\nhazardous or not it seems to me that\nit's better for them to put their effort\nor to put their time into someone that\nthey just generally trust really well\num ideally someone who's seen as kind of\nan AI safety expert then to reach out to\nsomeone particular at conjecture within\nconjecture though the way that info\nhazards are shared is uh\nas described in the document Conor is\nthe only person that kind of knows all\ninfo hazards Sid has a view over some of\nthe technical kind of implementations of\ninfo hazards and then team leads are all\naware of kind of what's on the project\nso the idea is kind of share it with the\nperson closest to you and boil it up\num and yeah if anyone you know knows\npeople at conjecture it's it's probably\na pretty good crew of people to to\nbounce ideas off of but it is hard for\nus just to answer to any independent\nresearcher on them\ngreat\num thank you um Chris you had a question\nI don't know uh I tried to answer it\nbefore do you want to go with your\nquestions or oh\nyes well I I just want to um\ncharacterize a sort of\nyou\nconjecture's product as it were\nessentially what you are operating um\nI gotta I mean some of it is in the\nlines of pure research some of it it's\ntrying to be commercialized\ncan you just expand on that yeah\num so\nin terms of like valuable IP internally\num it's definitely the language models\nthat we're training and uh we have a\nstrong team of Engineers who kind of\ncame from a Luther Ai and built some of\nthe at the time largest open source\nmodels and we've continued those\ntraining runs internally and so from an\nIP perspective it's you know internal\nlanguage models and then anything that\nyou can do with them is kind of the the\ndirection of what we're thinking about\nfor product\num there's two paths that conjecture is\ncurrently looking at one is kind of B2B\nEnterprise solutions that fall out of\nour research agenda or like along the\nsame build path there an example of this\nmight be something like uh we need\nStrong Quick fine-tuning internally\nwhere maybe we only have a limited data\nset and we need to kind of expand that\ninto a data set of the size that you can\nuse for fine tuning we want this\npipeline internally as a tool that is\nuseful to build towards cognitive\nemulation it's also something that we\ncan productize and sell to Enterprise\nPartners who maybe have limited data\nsets but also want proprietary models to\nkind of\nwork within their system maintain\nprivacy boundaries between their data\nand an external provider\nand yeah if it's some sort of custom\nNiche use case so that's that's one of\nthe B2B directions that we're thinking\nabout\non the b2c front\num the first thing that we tried was a\nproduct called verbalize which was a\nspeech-to-text model\num it was pretty good as like an API and\nit's now served as an API but we tried\nto make it into a SAS product and it\njust wasn't that great it was a little\nbuggy we were too slow with it um there\nare a whole bunch of other competitors\nthat came into the space\nand so we've mostly scrapped that\num one of the things that we're trying\nwith right now because eventually we\nwould like kind of General multimodal\ncapabilities within conjecture is we're\ntraining a text to speech model and the\nidea is then to kind of build a pipeline\nfrom speech to text to language models\nto Tech to speech such that you can kind\nof speak directly to an llm and have it\nspeak back to you\nthis could start out as like a toy\ngimmicky thing but eventually it could\nbe a way of interfacing with language\nmodels that's more interesting to users\nright\num\nis it\npossible it's a possible to speak in\nterms of whether you sort of wreak even\ncurrently at the moment or are you\ndepending well we're hugely yeah yeah\nhugely lost making\num we have have very little Revenue uh\nstill very much in the r d stage\npart of the reason to invest in a\ngenerative AI company that is syncing\nyou know millions of dollars into\ntraining runs but not making anything of\na profit is either because you believe\nthere is going to be some near-term\nCommercial Success from these things or\nbecause you're investing from the\nperspective of like AI in the future\nwill have a lot more control and\naccessible accessibility and kind of be\nbuilt into larger parts of the economy\nwe're hedging between both of these\nstrategies we think within the next you\nknow three to 12 months will easily be\nable to take some of the larger models\nwe have productize them do some of these\nEnterprise Solutions on top of them that\nwould actually be really meaningful for\nbusinesses\num but there's also kind of a longer\nterm bet which is\nthe research agenda that conjecture is\npursuing this cognitive emulation work\nis about building safe and Powerful\nsystems and so the idea is if you can\nfigure that out well and these systems\nare steerable and controllable that\nunlocks a ton of value so riskier much\nless likely to work much longer term but\nreally kind of where our main efforts\nare focused\nit might be worth it just to say a short\nbit on cognitive emulation for those who\ndon't have much context\num\nconjecture is Mainline as one of the\nmore pessimistic\norgs out there on the AI safety problem\nis that it is absurd given the level of\ntoday's AI safety to build super\nintelligence and we consider this kind\nof the mainline path of uh a lot of the\nkind of major arcs out there it's it's\nnot to stop at human level intelligence\nit's really to truly build things that\nare transformative\num\nan example is like anthropics leak pitch\ndeck said something like uh building\nmodels orders of magnitude larger than\nopen AI that we expect can automate\nsignificant parts of the economy by 2025\nor 2026 and create such a moat that no\none will be able to catch up\num that is not the business that\nconjecture is in we're not just building\nlarger and larger models and hoping to\ndeploy them for huge economic gain\nwe are instead trying to build an AI\narchitecture where language models slot\ninto cognitive kind of steps that the\nthe overall architecture has taken such\nthat when you inspect what it does any\ntime that you're actually using a black\nbox it's used in a very limited way\nwhere you can say okay I don't\nunderstand this step but I understand\nkind of the overall explainable\nreasoning process that the system is\npursuing in general so I might not know\nwhat's going on in the black box but I\ndo know what the system overall was\ndoing and so it's kind of a mix of\ntraditional software some things like\nLang chain Auto GPT and some more kind\nof like factored cognition stuff that is\nthe research direction that we're\npursuing in-house in the language models\nare part of this system rather than\nbeing kind of the primary engine that\ncontrols the majority of cognition\nthat's great thank you\nuh I have another question oh sorry um\nyeah I have another question unless\nsomeone else wants to go first\nright so\num right now I'm feeling a lot of fomo\nbecause like I kind of missed out on the\nwhole crypto train uh a decade ago and\nyeah I mean I kind of caught it but not\nas much as I wanted to so uh right now\nI'm feeling the same in terms of AI like\nI really want to put all of my money\ninto like\num you know Microsoft and Google but I\nalso don't want to uh die so is there a\nway for me to\num for you know the average person to\ninvest in groups like conjecture that\nhave a strong AI safety Focus\num\nI'll start by saying I'm the wrong\nperson to give investment advice and I\nfeel like this question is maybe best\ndealt with not from a money perspective\nand more just from a like how does one\nget involved in support on AI safety and\nmaybe one of the ways that people want\nto do that is financially but another\nway could be you know volunteering time\nand another could be you know going\nthrough the upskilling process to try to\nbecome technical uh and you know\nproficient in some of the more specific\ndomains like interpretability\num\nI thought about this a lot the question\nof you know what can anyone do to\nparticipate because I think there's a\nkind of unfortunately very small amount\nof\nshovel ready work in in AI safety a lot\nof the barrier to participation is quite\ntechnical\num so I would say kind of two main\nthings one is educate yourself there's\ntons of reading that can be done online\nthe conversation is moving very quickly\nand simply knowing a lot is I think a\nbig benefit to\nhow people can find Opportunities when\nthey come up maybe there's a job that\nopens up that's meaningful to you and\nnow you know enough to participate in it\nmaybe there's something that is a\ntotally unrelated job you're working in\nfinance or something like that and an AI\nconversation comes up where you can\ninform people about safety risks versus\nnot I think a lot of the the change that\nwill happen around AI safety is from uh\nEveryday People realizing that this will\nbe super you know risky and and making\nthat clear I think we're starting to see\nway more headlines changing around AI\nsafety and I think you kind of need the\npublic opinion interchange for expert\npolicy makers to change their mind to so\nmore involvement in the conversation is\ngood\nthe second thing I would say is like\nmental health above finances I I am long\nchaos like I have no idea what's coming\nbut I think it's going to be very\nstrange one way or another either the\nfuture is going to be really great but\ntransformed very quickly or it could be\nreally bad but I think volatility would\nbe high one way or another\ninvesting in this direction seems risky\nand speculative to me on pretty much\nevery direction I think you can both say\nAI is an extreme bubble right now that\ncould burst or it's you know the thing\nthat'll transform the future who knows\nit you know it could die in the same way\ncrypto dies\num there's also people who are like I'm\ngoing to put everything in the video\nright now also seems like a really risky\nthing\num so I would give more advice on get\neducated\ninvest in well-being and kind of\nlong-term personal stability and\nprobably stick with a normal investing\nportfolio and diversified stuff but\ndon't listen to me on investment by\nfinancially\nthanks for the question\ngreat\num does anyone else have any questions\nuh I I think uh in particular uh you you\nsaid you would be uh answering questions\nfor like 20 minutes half an hour and I\nthink we are way past that point\num so uh like if you have to leave then\nthat is perfectly fine not in a rush um\nit's 8 49 here now so maybe 10 more\nminutes happy to take anything else that\npeople are curious about\nuh yeah\num other than uh I have a question about\nother info hazards\num because this we're only talking about\nwhat uh Elias utkovski calls X4 hazards\nright and then maybe other like more\nvery precisely someone may come up with\nlike a um a really good way to prompt a\nlanguage model to make it make really\ngood bio weapons or someone may more\nexotically come up with like a new like\nRocco's basilisk but actually working\num uh and in that case obviously selling\nuh uh Conor about this new version of\nRocco's basilisk that actually works is\nlike really the wrong move right yeah\nprobably above any of the rules in the\ninfo Hazard policies use good judgment\num so you know something like the\nBasilisk would be an example of if it's\nobviously and precisely the wrong move\ndon't do it\num I think the the spirit of the law is\nextremely important with info hazards\ngiven that there's so many edge cases\nand things that are kind of hard to put\none's finger on\nyeah with something like bio weapons\nwhich may be kind of orthogonal rather\nthan kind of advancing AI capabilities\nthemselves it's related to another\nHazard and we would definitely consider\nthat info hazardous I think we generally\nput that term around anything that we\nthink could cause danger and harm to the\nworld or speed up capabilities in some\nkind of way so yeah by risks would\ndefinitely fall into that category\num\nyeah oh I I did have a question here\nabout a functional uh decision Theory\nwith one of the considerations uh in the\nsigning the info Hazard policy is\nwritten as just the basically the word\nfunctional decision Theory and like\ndon't screw it up for everyone\num could you walk us through the\nfunctional decision Theory uh uh\nuh like the functional decision Theory\nconsiderations in designing the uh\nuh the policy yeah I can try um this is\nnot really my area of expertise I mostly\nunderstand functional decision Theory as\nif I'm going to act from a policy and I\ncan assume that everyone else is acting\nfrom the same policy uh\nhow would that kind of change my\ndecisions here and so as it relates to\ninfo hazards I think a lot of the time\nthere is a kind of\num\npotentially an itch to be the exception\nlike this is a situation in which I feel\nit's safe to tell the person because I\ntrust them a lot and you know this is\nyou know the exception to the to the\nrule here uh the functionalization\ntheory is just like no because if you\nact by that policy then you assume that\neveryone who thinks that they end up in\nkind of an edge case scenario will speak\nas well\num and so it tends to lean towards\nmaximum conservative Behavior abiding by\nthe rules of the policy and ideally you\nknow ensuring a\ngame theoretic happy solution of as few\nSecrets being shared as possible and\nending up in a world that is the safest\nbecause of that so that's my my rough\nhand waving around the subject I imagine\nthere's much more technical ways to get\ninto this and describe it\nokay uh great I think uh all my\nquestions have been answered so I am\nwondering uh does anyone else have uh\nother questions\nlet me just I would have one question\nfor the group which is uh just curiosity\nwhat they think of the policy any\ncritiques any suggestions any kind of\ndiscussion points that came up in your\ntalk that we're meaningful\num so uh like uh one of them uh that I\ncan point out here is um like um like\nthe the rules document seems like it's\nnot a legal document but it's almost uh\nit's something that if there is some\nkind of some person who who violates\nthis then it will be litigated as if it\nwas illegal\num uh document and there are a number of\nlike formulations for instance about\nmistakes\num uh Connor say uh the document says\nthat nobody will be fired for raising a\nconcern in good faith and I think what\nhe's actually meaning is admitting a uh\nno one will be fired for admitting a\nnon-serious uh mistake in in disclosure\nthat's at least how I read the document\nbut it's very much not written as a\nlegal legal document has a lawyer\nactually read this document\nwe have a NDA built into all of our\ncontracts that goes through a very kind\nof typical\num information protection scheme uh we\nthrew the word info hazard in there as a\nkind of bid to the invisor policy that\nwe wrote but the policy itself is not\nwritten as a legally enforceable thing\nit's written as kind of a socially\nenforceable thing I think the idea even\nfor\num like private uh info hazards ones\nthat are maybe shared between two\ndifferent orgs is if you share this we\ntell everyone you shared it and your\nreputation is hurt as a participating\norgan in this ecosystem\nokay so that was one of my comments let\nme just see if I've had some other uh\nthings yeah can I just ask you sorry I\nknow just um\non less wrong and also on alignment\nForum uh where the document your\ndocument is posted in those two places\nand there are not many comments on them\nin the year since it's been there so how\ndo you feel about the response in\ngeneral why it disappointed that um\nI think it's important for AI safety\nlabs to have info Hazard policies and my\nsense is that people are a lot more\nliberal in their Communications at other\nfirms I think this is bad\nI'm not that surprised that there's not\nbeen a lot of comments I think\nconjecture is a pretty small org pretty\nnew doesn't have you know the most\nSterling reputation and so for us to\nlike do something and expect that a\nwhole bunch of people follow would be\nnaive at this stage I would love for a\nfuture where conjecture is\num\nin a position to set the precedent on\nwhat good security and safety policies\nlook like and I think at that point we\ncan you know re-raise the improviser\npolicies this is you know a standard\nthat we think other labs should be able\nto meet\nwe definitely had more private\nengagement you know when we wrote it a\nlot of people commented on it\num\nto my understanding there was one other\nsmall lab that ended up introducing\nsomething quite similar uh with larger\nLabs I think it's very hard to\nset precedent there but I do know that\nit was at least discussed within some of\nthese larger Labs just from kind of\npersonal anecil to people sharing that\num so yeah small slash but kind of\nwithin anticipated bounce\nso the the thing that\num I think it would obviously be\ncompared to would be uh like normal\nbusiness confidentiality\num like\num the um\num like anthropic doesn't have an info\nHazard policy but they I am sure when\nyou join anthropy you also sign ndas\nlet's say probably very close to the\nsame thing\num and uh like my question is how uh\nuh\nbig uh like there's two things that need\nto be compared like\num when you consider whether to have an\nexplicit info Hazard policy or just say\nwe have standard business\nconfidentiality like it draws attention\nto the fact that you may have important\nSecrets\num but it allows you to make a much more\nfinely tuned\num uh\ndo you agree with this trade-off and did\nyou consider just doing normal business\nconfidentiality to not draw attention to\nthe fact that you may have secrets\nI don't think we considered that\num\nwriting the Imposter policy was\nimportant to the founders from the start\nuh if you're\nin the business of training large\nlanguage models and building powerful AI\nsystems\nI don't think you're within normal\nbounds of\nnormal business confidentiality and\nassuming that you might have nothing to\nhide the number of security attacks on\nuh AI companies is increasing and I\nheard a stat from an expert that was\nsomething like\nI'm recounting what I heard I did not\ncheck this number it seemed really high\nto me but it was something like one in\nthree attacks on AI companies or like or\nlike targeted attacks are trying to get\nat model weights that seemed absurdly\nHigh to me but I think the general\nimpression was people are aware of the\nvalue of these weights and\num\nyeah like number of kind of cyber\nattacks are increasing so from a like\ntechnical security perspective uh\nextremely important and people know that\nthe kind of treasure is there and then\nfrom a verbal safety perspective\num I think one of the big things with\nthe Imposter policy is done besides\nmaking kind of explicit boundaries\ninternally with silos is bake in this\nkind of conversation to conjecture and\ngive people a vocabulary in which to\ntalk about it and it really\nstraightforward way and I think it's\nbeen beneficial\nokay\num I don't actually have more questions\num there's one by Urban Pace he posted\non restaurant and argued that uh there's\na risk with this kind of\num policy that\num like it requires that Conor has uh a\nlot of time right uh and he that he's\nresponsive if you have something that\nperhaps needs to be a new secret\num then the requirement to discuss it\nwith Conor if you want to\num\ncheck whether this is an uh an overlap\nwith another existing secret or\nsomething like that\num that like uh how to do you have any\nsense of how well that works is this\nactually a problem in real in practice\nright now we're small enough that it's\nnot a problem\num I could imagine in the future maybe I\nthink we'd have to be pretty big for\nthis not to be a thing I think the idea\nthat you can discuss with your lead\num is discussing with the lead is kind\nof the first step for things that are uh\nin the maybe category it's formally\nstarting another work stream that\nrequires Conor's approval\num and in that sense you know Conor's\nvery involved in sending the research\ndirection for the company and so I don't\nthink that would be uh\nany concern about time Sensitivity I\nthink the the larger concern I've heard\nraised around the info Hazard policy is\njust that it puts a lot of control in\nConnor's hands to have total visibility\nand to run the company and it's like yep\nthat's because we trust Conor internally\nand think he's doing a great job and so\nI think the\na lot of this is very intentional power\nconcentration and someone who is a\ntrusted figure and uh for\num\nfor a lot of reasons while we're small\nespecially it makes sense for kind of\none person to have full steering\nDirection and control\num as we get bigger maybe more checks\nand balances makes sense but at this\npoint it doesn't really feel like a huge\ncost we pay\nokay so how many people work with\nconjecture uh right now it's like 22.\nyeah\nand um I'm still not really\nunderstanding what the conjecture plan\nis\nI mean it makes sense in a vacuum but\nhow does it function in a world where\nopen Ai and menthropic are very clear\nthat they're scaling up very quickly\nyeah so\nin my mind it's like it would be awesome\nto live in a world where one of\nanthropic open air eye or Deep Mind was\nlike hey\nwe're not going to build super\nintelligence This Is Us kind of laying\ndown the swords and\npushing for governance that sets a hard\ncap on training runs that talks about\nextreme compute governance and need to\nregulate this powerful technology\nthat says this is the limit of power\nthat we think is appropriate to build to\ngiven the state of alignment research is\nso far behind\num\nnone of the major players seem to be\ndoing that their governance policies are\nlargely carving out space for them to be\nthe ones who are going to continue to\nbuild super intelligence their technical\nplans are trained larger and larger\nmodels and build super intelligence and\ntheir safety plans are predicated on\nsuper intelligence like scalable\noversight is a plan where you have one\nsuper intelligence overseeing another\nSuper intelligence because the\ncapabilities of these systems surpass\nthe human level\nconjecture is an alternative to that so\nthe idea is no super intelligence\nsystems at an explainability that makes\nsense to humans and build them from the\nbottom up such that they can be\nunderstood by humans push for policy\nregulations that prevent people from\nbuilding super intelligence and\num yeah mandate the need to kind of push\nalignment further before you build more\ncapabilities\nand so in a lot of ways conjecture is\nquite deontological in this sense it's\nlike maybe we won't win but someone out\nthere should be doing it and our plan is\nto try to do it well to scale up to be a\nlarger voice to be able to hire more\npeople and work on this agenda\num and to push forward with\nan ambitious plan despite the odds\nso\num uh if laume wants to be hired by a\nconjecture I have no clue if you\nactually want that but let's say what\nkind of research people should he be\nwriting and sending to you like on\ncognitive immolation is there like a\nlike a subfield of that that would be\ninteresting to you or yeah I would push\npeople more towards hacky engineering\ntinkering right now than research\ndirections\num stuff that's kind of in line with\nwhat we're uh building are like hey if\nyou've like tried to turn gpd3\ninternally like just by yourself if\nyou've looked up papers on model\ntraining if you're familiar with\ndistributed training if you've like just\ngone and hacked open source those are\nthe type of people we want we want\npeople who have strong technical skills\nthat are willing to kind of bootstrap a\nproject from the bottom up by themselves\num and are really kind of curious about\nwhere AI is going\ncognitive emulation is a builder's\nresearch agenda more than an academics\nresearch agenda at this point there are\nsome hard technical questions like how\ndo you factor cognition in a way that\nscales you know this is a problem that a\nlot of people have run into and one that\nwe are going to run into on this on this\npath as well\num but right now we're still in the like\nthere's just a ton of things that we\nneed to build and strap together to get\nsystems where we're even running into\nthese kind of questions as the limiting\nfactors\num\nso yeah people who have familiarity\nplaying with language models who have\nworked on Lang chain who have worked on\nauto GPT who are\ngenerally aware of kind of\nstate-of-the-art\num research in uh advances like yeah\ndifferent training regimes and things\nlike that those are all interesting\nthings for us from a research\nperspective I would say that we are\ndoing a lot less interpretability than\nwe used to be doing we've moved more\ntowards explainability than\ninterpretability uh meaning\nwe are not super optimistic about being\nable to kind of penetrate the black box\nand instead we want to limit the uses of\nblack box in a system that is kind of\noverall understandable for for how its\nlogic flows um\nso yeah\nuh I have another question so none of\nthe large language models\num thus far that aren't open source uh\nlet me rephrase that all of The Cutting\nEdge large language models are currently\nnot available in uh either mainland\nChina or\num\nuh Hong Kong Macau places like that is\nthat due to U.S law or due to Chinese\nlaw or just an internal thing\num\nI think there's a lot of complicated\nfactors right now I think there's also\nkind of a big Global conversation around\ncompetition with language models and\nwhere the tech is being built and who\nhas access to it and\num yeah this is a complicated subject I\nthink one of the important points that\nis uh quite dangerous right now is like\nover emphasizing this race or this\ncompetition you know one of the\njustifications for building powerful\nlanguage models in the west is oh well\nwhat if China gets there first you know\nwhat if one of our competitors builds\nthe transformative AI system before us\num I think thinking like this is a\nrationalization for the building of\nsuper intelligence that's already been\non the agenda for\nkind of the major research labs in the\nwest and at this point it's now a like\nsexy policy narrative that can convince\ngovernment officials that we should also\ncontinue to build powerful AI I think\nthis is quite foolish I think there's\nmany reasons to believe that the race is\nnot quite as\num tight as people make it out to be I\nthink there's also many other ways that\nwe can go about approaching this race by\nyou know limiting export controls and\nthings like that\num\nand\ngenerally over emphasizing the we need\nto win narrative seems super dangerous\nto me so that's kind of what I'll I'll\nsay on the China us China West kind of\nllm stuff\nwhat I'm really trying to ask is\num how is there a way for someone who's\ninterested in AI safety to like\nget access to some of the invite-only\nmodels like\num or is it client Cloud yeah\num yeah I think for people who are\nworking on Research uh it's\num possible to get kind of individual\npermissions I don't know how this works\nfrom a uh Regional perspective\num\nbut the yeah I would I would try\nreaching out to some research groups\nthat have access and trying to see if\nthere's projects that are meaningful to\nwork on with them\num there's a number of alignment\nupscaling programs for example that are\naccessible you know people start with\nsomething like AGI safety fundamentals\nand then they consider moving on to\num something like mlab for more\ntechnical upskilling uh Siri Maps where\nyou get paired with a mentor who's\nworking in alignment these programs have\naccess to models and so it might be a\nlittle bit more difficult as an\nindependent researcher but any type of\ntraining program that connects you into\nthis space would also give you access\nall right thanks\nokay does anyone have any final\nquestions for Chris\nthen I would like to once again say\nthank you very much Chris for joining us\nand I've really enjoyed your answers and\ngood luck at conjecture yeah thanks\nAaron um and thanks for discussing the\nsubject and inviting me right uh but\nbefore you leave I would just like to\nsay that\num from the Chinese perspective the AI\nrace is very very real I mean we're\ntrying really really hard to not get\nleft behind uh in the AI race so um it's\nnot like\nit\nI I think that the\ndismissing the um China us\num AI race is very much a mistake\num yeah thanks for pointing out that\nNuance I think the the point I want to\nemphasize is something like\none reason to talk about the race is to\ntalk about the fact that there are\ndangerous capabilities that are kind of\nexpanding all over the globe and we need\nto be careful about building and\num concerned about how these\nTechnologies can be misused or if they\nget deployed and there's misalignment\nrisks that's good I think pointing at\nthe risks serious pointing out that\nthere are competitive actors uh is\nimportant the thing that I'm concerned\nabout is that one of the ways that the\nrace narrative gets spun is as a\njustification to continue building even\nmore dangerous Technologies and I think\nuh if you don't believe that the race is\nneck and neck then it's easier to think\nabout something like a six-month pause\non AI capabilities in the west as a\num useful and not competitively lethal\nstrategy for us to take to try to end up\nin a safer World\num\nyeah a lot of this gets into kind of\ntechnical details about access to\ncompute and who's building what and\nTechnical competency and if secrets are\nyou know developed internally or you\nknow learned from others uh and yeah I\nguess without going down that rabbit\nhole I think it's important to kind of\nconsider both the reality and then how\nthe narrative gets used socially to\njustify different things after them\nokay awesome well thank you all again uh\nbe well have a good night and uh\nand good luck\nthanks a lot great\nokay\nso um that was great uh I'm gonna stop\nthe recording now\nand", "date_published": "2023-06-30T08:31:14Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "c1748157ec25973d624ad6646b542f89", "title": "262. Counterarguments to the basic AI Xrisk case 2", "url": "https://www.youtube.com/watch?v=sVkudHH3n34", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session\n263 in the AI safety.com reading group\ntonight we'll be discussing the second\npart of counter arguments to the basic\nAI Exodus Case by catcher Grace\nso catcher Grace is still the head\nperson in AI impacts and today we're\ngoing to focus on counter argument C\nouncil argument C is counter arguments\nto the claim that superhuman AI would be\nsufficiently superior to humans to\noverpower Humanity\nbefore we go into uh into this subject I\nwould like to add a few comments to a\nparticular didactic tool that catches\nusing which is uh among each Gap in the\narguments she presents a section called\nwhat it might look like if this Gap\nmatters and I've criticized a number of\nthese and I'll also continue to\ncriticize a few of them in in this part\nbut I would like to talk a bit more\nabout why I think this is a rather bad\nchoice for getting some kind of\nunderstanding and intuition and the key\nproblem as I see it is that when you're\nsetting up these kind of scenarios you\nare very it's very easy to add a lot of\nthings that don't follow from the\narguments themselves\nand when someone\nreads these arguments and tries to poke\nholes in them then\nuh adding these kind of uh complications\nthat don't actually relate to the\narguments in the previous section\num feels uh like a non-secure in in a\nsense it doesn't\num it doesn't follow from the argument's\nuh below and I guess if you are trying\nto make like a fully fleshed out\nscenario or something like that you have\nto add some some things that are like\nthat don't necessarily follow in a\nstrictly logical way\num but I think that they these\ncomplications end up muddying the waters\nso much that I don't think it's a good\nidea to have these uh these sections\nright onto the arguments the first human\nsuccess isn't from Individual\nintelligence\nstarts by restating the arguments in\num in her own way again as we saw last\ntime and in a somewhat uh uh\nso optimal way the first is that you\nwrite that the argument claims that\npassing human level intelligence is the\nrelevant bar and this is the relevant\nbar for doing things like recursive\nself-improvement or doing thing doing a\nnumber of things but not explicitly for\ntaking over for taking over uh that is\nat the super intelligent level not at\nthe slightly above human level like\num again\num the best uh\ndescription of the actual Arguments for\nAIX risk comes in my opinion from the\nbook super intelligence by Nick Bostrom\nand super intelligence is defined as a\nbeing more intelligent than any human\nhowever clever and so we are obviously\nnot talking at the\num at our level around the average human\nthis is a confusion that comes up a\nnumber of places in the following text\nand I think uh in general\num\nuh it would be nice if categories either\nuse the definitions that uh AI X risk\nAdvocates are using or made her own more\nclear in particular to what extent is\nhuman success caused by individual\nintelligence and cultural transmission I\nthink that's a really really interesting\nsubject and a lot could be said about\nhow these two factors\num map together but catcher does not\nactually go into how these two things\nwork together\num and\num and I think it's a shame and I think\nit's a shame that it's left somewhat\num\nuh somewhat uh underexplored I think it\nmay be that there are counter agreements\nlurking somewhere around there but catch\nit does not um go into a discussion\nabout how individual intelligence and\nand culture uh uh interact\nthere's a few other quotes here the\nargument claims that something more\nintelligent than humans will inexorably\ntriumph over humanity and if any of you\nhave read any uh bustram's book\nsuperintelligence you know that he would\ncertainly never ever in a million years\nmake any claim like that the word\ninexorably is not used and there's a\ngood reason for password not to use this\nword he would couch this in way more\ndisclaimers\nand a final the argument claims that\nhumans tramped overall species only\nbecause of individual intelligence and\nnot culture\nthis is on the face of it obviously\nirrelevant and obviously uh it almost\ncertainly irrelevant but very clearly\nuntrue right\num obviously human culture was a\ndramatic part of human success now I\ndon't think anyone would suggest this\nand I'm uh struggling to see how uh like\nimagine that uh\nyutkowski had written this sentence uh\nthen obviously a lot of people would\nattack him and say actually human\nculture is a big deal and and they'd be\nright of course and it's very clear that\nthis is the kind of uh counter arguments\nthat don't work you are not in general\ngoing to find errors in\num\nfind this kind of totally solely trivial\nerrors in Nick pastram's work or in\nIllinois\nCristiano any of these people because\nthere has been so incredibly much\nscrutiny on the arguments so if you find\na counter argument that is really really\nsurface level and really really obvious\nthen almost certainly either someone has\nfound them before you or uh it's uh or\nyou are mistaking and Nick Bostrom\nobviously he wouldn't say this\nnow let's go compare a single human\ncompared to like Humanity if you look at\nsomeone in the human society that is\nsubstantially more intelligent than the\naverage we could call that a genius then\nthey basically live in the same way as\nhumans do\num\ngive or take I think I would give and\ntake a bit more depending on precisely\nwhere you how you define a genius\na number of geniuses are in fact living\nin substantially different conditions\nthan average people\num I think for unrecognized geniuses\nthis may in fact be true\num but once you are recognized as a\ngenius most likely you're going to live\nin a different way than average people\nthen there's a true but irrelevant claim\nthat an AI at the human level without\naccess to information from humanity is\nworthless obviously humans without\naccess to human culture like uh children\nthat are lost in the jungle and grow up\nAmong Wolves or something like that\nthat's\num that that's well known\num\nwe have a claimed that some information\ncannot be obtained by reading or like\nYouTube videos or things like that there\nare some technical knowledge\num I think it's true and I think Jessica\nwould probably agree that this perhaps\nmakes very little difference\num when there's a claim that the pound\nindividual has in society is most\nrelated by what role Society gives them\nand while that is true I think the\nrelationship is mostly backwards in the\nsense that the the person who is most\nwho's best at Social manipulation will\nbe given the greatest role in human\nsociety so it's not that you uh Humanity\ngives\num gives roles out randomly is Humanity\ntries to give roles based on uh\ncompetencies\nwe may do a poor job but that is in fact\nwhat we're trying to do then there's\nthis question this um so I'm just going\nto read it a person twice as smart as\nany other human would research twice as\nmuch as an average human and that's\nbasically nothing\nso this question this I try to read it\nseveral times and I think on balance\nthis claim doesn't make sense so the\nquestion is precisely twice as smart as\nany other human what are we talking\nabout here\ntwice as smart as any human that would\nbe\num like substantially smarter than the\nsmartest person ever like maybe an IQ of\n180 or something like that\num that would be would that person\nresearch twice as much as an average\nhuman what they obviously would research\nway way more that would be a a true\nSuper Genius or something like that so\nthey would be able to uh in that case\nthe claim doesn't make sense it's also\npossible to read this claim as just a\nperson that is twice as smart as the\naverage human person and the average\nperson is probably not like accounting\nfor babies and things like that we are\nprobably talking about a really really\nlow level and that makes the claim true\nbut also very irrelevant to the things\nwe're talking about\ncatcher has an analogy with the\nManhattan Project when it com which uh\nwhen it comes to like trying to gain to\ncreate technology to obtain precisely\nstrategic uh advantage\nto claims that people often mistake the\nactions of a human with the actions of a\nhuman society\nI think people do in fact make this\nmistake people in general make a lot of\nepistemic Errors like\num and that is why in general you should\nnot engage with uh with arguments\npresented by the average person you\nshould Instead try to find the best\narguments uh presented uh by your\nopponents and they argue against those\ninstead of yeah yeah there's a lot of\npeople who have seen the Terminator\nmovie and they are probably really\nconfused so don't argue with those argue\nwith Nick Bostrom or yutkowski or\nCristiano or these kind of people\nuh that's the note that uh even the\nManhattan Project was not done in the\nafternoon even by the smartest person in\nthe Manhattan Project that's obviously\ntrue I think in in general uh there are\nis a sliding skill on to what extent\nprojects can just can be completed by\nsome one person having like a Eureka\nmoment where they jump out of the\nbathtub but\na lot of projects involve a lot of hard\nwork and I think that is in fact a very\ncommonly understood uh conclusion that a\nlot of big projects just require a lot\nof grit and hard work and man hours\nuh so here we have a counterfactual if\nthe Manhattan Project was done by it is\njust somewhat smarter than a person then\nthat wouldn't have changed the project\nvery much I don't really understand the\ncounter factual here are we again\ntalking about entities smarter than\nhumans in general like super\nintelligences or are we talking about\nthat like everybody in the Manhattan\nProject gets plus five IQ points or\nsomething like that\num I think it's unclear what catcher is\npointing at but I would point to one\nthing that the Manhattan Project in\npractice was uh\nthey when they were building uh the the\nfirst atomic bombs they realized that\nenriching uranium was the key bottleneck\nand they tried to make centrifuges work\nand they couldn't make Center Shooters\nwork and then they did some other ways\nof uh purifying enriching the uranium\nthat was a huge amount of work and then\nafter they uh they finished the\nManhattan Project then they just ah of\ncourse we could have um and done\nsomething slightly different and made a\ncivil type centrifuge and um that of\ncourse that's basically how we've\nenriched uranium ever since and that's\nway smarter\num and that would have caught the\nenrichment process that was super super\nartists down dramatically so I could in\nfact have seen easily that if some of\nthe people who were designing this had\nbeen just slightly smarter sure than uh\nthe Manhattan Project could have\nhappened dramatically faster and at a\ndramatically lower cost\nmy this is not really my real objection\nto this argument my real objection to\nthis is that the Manhattan Project is a\nvery very strange projects almost all\nprojects are very very different from\nManhattan the Manhattan Project so if\nyou're trying to talk about projects in\ngeneral you should probably not\nassume that any random project is going\nto be like the Manhattan Project\num because in and in this particular\ncase we do have a project that is very\nvery likely to be to be relevant so if\nyou imagine someone is building tpt5\nthen probably the project to build TV5\nis kind of like the project to build\ngbt4 which is kind of like the project\nto build gbt3 that is kind of like the\nproject of building gbt2\nEtc this is\num I think a way more grounded way of\nresearch of reasoning rather than trying\nto find like\nthe Manhattan Project is in fact known\nspecifically because it's an outlier\nbecause it's a very very strange project\nthe next argument is about humans\nsharing power with AI\num\nhuman power is generally something that\nwe obtain like our power over the\nenvironment is not something we obtain\nthrough our own individual intelligence\nbut by\num buying power from the rest of society\nusing money or something like that and\nthat means in general we should expect\nAI label to be sold on some kind of\nMarket in that order\nand I think that is in general a true\nassumption except in the specific case\nwhere the AI is trying to take over\neither because on his own initiative or\nbecause the it's been built by some evil\nactor that wants to take over the world\nin that specific case we should not\nexpect AI labor to be sold but hard\nthen there are arguments against whether\nhumans will transfer power directly to\nAIS\num and I don't think like that's not a\nthing that the the original arguments uh\ndon't really talk about this I'm aware\nthat there is this movie Terminator\nwhere humans voluntarily give the AI\nControl over all the nuclear weapons\num but I I think that's again arguments\nfrom the movie Terminator and not\narguments from Nick Bostrom\num there's the question of a comparison\nbetween agentic Ai and how much power\nthat would yet compared to a combination\nof humans and non-agentic machines\num\nfirst uh categories correctly notes that\nhuman level capability is a moving\nTarget and something that could\npotentially increase dramatically as we\nget things like tool Ai and that is in\nfact a complication that Nick Bostrom\nhas already taken into account in the\nbook super intelligence in chapter 4 on\nthe Canadians of an intelligence\nexplosion\num\nexplicitly talks about this and say okay\nwe are anchoring the human capability at\nthe level that it is when a AGI reaches\nthe human level or something like that\nuh I think actually he just anchors it\nat\num like 2016 level and then you just\nmultiply by a factor or something like\nthat\num that so so that is how Nick pastram\ngets around this but I agree if you\ndon't\num if you don't uh read Nick Bostrom but\nyou just try to come up with the\narguments yourself it's easy to uh\nto overlook some of these complications\nand that is indeed a way you can confuse\nyourself\nnow uh how much Superior our agents\ngoing to be compared to uh humans with\ntwo AI\num well there are certainly a number of\ndisadvantages and uh catch a list some\nof them but I think in this case the\ncanonical reference would be converts\nwhy tool AIS want to be agent AIS this\nlays out in substantial details the\narguments for why AI agents May in fact\nbe radically superior\num can I just say these arguments may or\nmay not matter much and obviously I\nagree in the sense that uh predicting\nthe future is hot but I would also say\nthis is not really a counter argument to\nState briefly some of your opponents\narguments and just say this may or not\nmay not matter much is isn't a counter\nargument\ncatcher in fact comes up with an uh\nanother argument the tool AIS also have\nto deal with the interface to humans\nwhich agents do not I uh just re-skimped\nburns text and I couldn't find it so I\nassume that this is like a new argument\nthat catcher is presenting for why AI\nagents are going to be even more\nSuperior to combinations of humans and\nnon-identic machines\num\nand then the new argument that catcher\npresents says it matters for some tasks\nlike rapid interaction with humans but\nnot for like major on off decisions so\nmaybe more for social impulation and\nless for uh strategizing\num\nand again that doesn't really seem like\na counter argument in the sense that if\nyou uh present a new argument for a case\nand then show that the new argument\nisn't like fully covering that there are\ncases where it doesn't apply that's not\nreally a counter argument\nthen there's a matter of trust\num because uh this may matter in some\nspecific cases and the case that catcher\nhas in mind is the case where AI has a\nslight Performance Edge over humans and\nalready in in this assumption I think it\nis a very very narrow set of assumptions\nI think in most worlds and across most\ntasks either AI is going to be\ndramatically Superior to humans or\ndramatically inferior to humans\nand in particular the the relevant six\ncognitive domains that strategic\ncognitive domains that Boston have\nidentified are very unlikely to be a\nplace where there's like a slight\nPerformance Edge\num and of course it would be even more\nsurprising if this is stable if as AI\nprogresses then humans and true AI also\nprogresses at precisely the same level\nso we get some kind something that's\nStables over centuries that sounds\nreally really far-fetched\nbut if we are in this situation where a\ngenetic AI has a slight uh Performance\nEdge over humans with two layer then\nthis could be balanced\num by the by us not trusting the AI and\nthere are two reasons why we would not\ntrust the AI the first is we don't know\nwhether well values are and we don't\nknow how they would behave in any given\nin any given case\nI think even in this case the um the\nperson who is deciding whether he wants\nto employ an AI or a human needs to\nconsider the uh the competitive Dynamics\nwhere these two claims don't actually\nhold for humans either when you choose\nto employ a human you don't actually\nknow what their values are maybe they\nwill start a labor union maybe they will\nI don't know do many many different\nthings and how will humans behave in any\ngiven case humans sometimes do really\nreally strange things\num so this also will not hold for for\nhumans and it might in fact be that\nthere's a base rate of humans starting\nlabor unions and there may be a much\nlower base rate of AIS starting labor\nunions I could totally see that thing\nhappening\nand finally if we don't know that the AI\nis aligned then in expectation it's\ncosted to use AI\nagain I'm a bit unsure about uh what\ncatcher is talking about here uh is he\ntalking about like this lower level\nunaligned or is she talking about\ntakeover scenarios\num if it's just the lower level where\nthe AI is actually uh you know I don't\nknow breaking vases uh trying to clean\nup or something like that then in that\ncase we expect more capability to\nrapidly\num fix that problem so uh it won't be\ncosting expectation for very long in the\nother case where the AI is trying to\ntake over\num well in that case obviously right now\nuh people\nought to realize that building AIS is\nterribly terribly dangerous and they're\ndoing it anyway so that is a state of\naffairs that is likely to going to\npersist and people will in fact\ngenerally be unaware of the problems\nwith AI and just blissfully use it until\nin AI takeover\nthe next argument is about Headroom like\nin a given task how much can it be done\nbetter than what we're doing right now a\nclassic example is tic-tac-toe with\nwhere optimal game\nperformance can be specified relatively\neasily uh like there is no way to do\nbetter than humans in tic-tac-toe that's\njust um uh\nand a fully solved game to some extent\nand to what extent this is General lies\nwell there are some tasks that probably\nhave have only little Headroom if we\nknow what best performance is if we know\nthat we humans can do it almost\nperfectly if we can bound how well the\ntask can be done or there are\ncomputational limits or things like that\nprobably there is little Headroom and\nprobably there's a lot of headrooms in\nthese kind of things I won't read them\nup\num catcher doesn't put much stack stock\nin this counter argument probably for\nmany tasks there's a lot of Headroom\num but it's not obvious and I think um\nwe should like we should limit our\nanalysis to Boston's six cognitive\nstrategic tasks and in these tasks\nintelligence amplification strategizing\num social manipulation uh hacking uh\neconomic productivity and technological\nuh research in these it seems clear that\nwe're not going to get anything like\num best performance out of humans anyway\nanytime soon\nand all also in this uh catcher has um a\num what what does the world look like if\nthis Gap matters and she suggests that\nif there is little Headroom above the\nhuman level on these tasks that could\nlead to a world with a good offense\ndefense balance like defense is a lot\neasier than offense and that could lead\nto stability\num that doesn't seem clear to me at all\nI don't think that follows at all\nintelligence may not be an overwhelming\nadvantage\nand again catch up\nrepresents the arguments as the argument\nassumes that any difference in\nintelligence will eventually win out\nover any difference in other initial\nresources and that's obviously trivially\nwrong right uh any difference like who\nwho would say in this kind of claim like\nobviously there is a level of difference\nwhere if you're a tiny bit smarter then\nyou can't overcome that that initial\ndifference that seems really really\nobviously clear\nthen she has to claim the difference in\nperformance between top and average\nhuman performance is large relative to\ndifference between average and random\nand\nthis seems to me to be in fact an\nargument for why intelligence could be\nan overwhelming Advantage the way I\nreach it so but at this stage catcher is\nlike three meter levels deep in\ncounter-counter counter arguments uh\nthat's not meeting levels but three\nlevels deep in counter counter are\ncounter arguments and I think it's\npossible that either I misunderstood\nwhat she's written or it seems like\nshe's not in fact presenting a counter\nargument here\nI also kind of feel that we're at an\nenormously high level of abstraction\nhere and I don't think we can really say\na lot meaningfully at this level\num so let's talk about GPT 4 or gbt3 or\nsome of these a actual AI projects that\nare right in front of us and have a look\nat those and we can like measure their\nperformance we can say a lot of things a\nlot more concretely by by talking about\nthese\ninstead has a\ndigression and a link to a discussion\nabout random chess like what how well do\nyou play chess if you play random moves\nand that is\num like I try to follow some of the\narguments and the links to that and I\nbecoming clearly more confused than\nenlightened by this I don't think you\ncan say much about like random chess is\nreally really strange and you have a\nreally hard time making the other person\nCheckmate in random chess and it's just\nreally really strange\num\ncatcher also in this section has an\nexample\num with like three actors at Bob Charles\nand David\num I think something has gone wrong in\nthe editing to this I\nI'm quite sure that it's just not just\nthat I'm stupid that is the reason why I\ncouldn't pass it I think some sentences\nmay have fallen out from that example\nif another example\num country argument to why a smaller AI\nmay not be able to take over is that you\ncan provide some examples of\nintelligence not helping elephants are\nsmarter than Nets but elephants have not\nwon over gnats\nand the reason we uh\nthis is not actually a counter argument\nto uh to the arguments as presented\nbecause elephants cannot do the six\nstrategic cognitive tasks that we're\ninterested in and that is precisely the\nreason why all intelligences at a level\nwhere they can't do this are irrelevant\nand in a strong sense\num elephants and Nets are\num\nare equal in that they don't have\ngeneral intelligence\nso looking at this world it seems to\ncatch out intuitively that the big\ndiscrepancies in power in the world are\nnot about competence\nI think I would disagree by adding the\ncomplication that there is also the\ncompetence of your ancestors\nso what explains the discrepancies in\npower that we see in the world right now\nit's of course a huge huge question\num and like my intuition is that 40 is\nabout individual competence 40 is the\ncompetence of your ancestors and 20 is\nluck\num so I think in fact the difference\ndiscrepancies in power that we see right\nnow are mostly about competence\nand there's another example\npeople with an IQ of 130 they earn six\nto eighteen thousand dollars more per\nyear compared to average IQ people\num\nit's uh it's obviously an example of\nintelligence literally helping and\nliterally helping substantially it would\nbe a lot clearer if that was expressed\nin percentage rather than absolute\nnumbers also because some of these are\nvery old I looked through some of her\nreferences and I don't think I think the\nactual number is substantially larger\nthan this and I think the key problem is\nthat they are controlling for a lot of\nthings like who do we marry to a divorce\ndo you um\num like what kind of education do you\ntake and if you're controlling for\neducation then you're taking a lot of\nthe difference between IQ 130 and IQ 100\npeople out there because people who have\nan IQ of 130 are much more likely to\nfinish a um an advanced education\nthere's another reference yes where\npoliticians appear only slightly more\nintelligent than others\nI looked at the um uh the paper it is a\nsubstantially smaller scoped uh paper I\ndon't think you can use it to for this\nkind of sweeping\num uh generalizations\num like there's a link to it it's a\ndraft the the figures haven't been uh\nput in uh the conclusions say that\nthere's a strong positive selection in\npoliticians in fact being more\nintelligent than than others\num it's based on some kind of army\num uh Army measuring system in Sweden\nwhere they see like are people likely to\ngoing to be\num intelligent and uh strong and good\ngood infantries good soldiers and also\ngood leaders and I think it was actually\nsomewhat interesting to me that to\npredict whether you will be a a\nsuccessful politician and get elected\nit's it's better to be intelligent\nrather than uh the Army says you could\nmake a good officer and I think that's\nactually very counterintuitive I would\nexpect the thing that makes you a good\nofficer to be the same thing that makes\nyou a um a good politician much or\nsubstantially more than just being smart\nbut apparently the paper found that it\nis slightly more\num important to be smart in order to get\nelected rather than to be a good leader\nanother example of intelligence not\nworking is Mensa an organization for\npeople with genius level IQ and\nobviously people with um\nMensa I think it's trivial uh truly of\nways to say that Mensa represents a very\nvery specific subset of the people with\nan IQ over 130 and I think there's a\nsocial Dynamic around Mensa that really\nreally muddies the water and makes this\nsome horribly horrible evidence\num like the people are not joining\nmindset because it's perceived as\nsocially uh very very uh bad to join\nMensa not uh because it wouldn't help in\nany way\nintelligence appears to be more useful\nfor showing off than for strategic\naction uh this might be true but that\ndoesn't bound really how useful it is\nfor strategic action it may be useful\nfor both\num\nand we have catcher also mentions that\nthis may just because there's some\nWinner Takes all Dynamics in\num in a lot of appearance based Sports\nand competitions and things like that\num\ncatcher also have some anecdotes of\nintelligent people who are don't appear\nto be helped by their intelligence\num I like you can counter anecdotes with\nmore anecdotes but I couldn't actually\nbe bothered to find examples of\nsuccessful intelligent people I think we\ncan all agree that there are a lot of\nexamples of intelligent successful\npeople so just pretend that I gave a\nlist of anecdotes here to counter\ncatches anecdotes\nit is unclear that many goals\nrealistically incentivize taking over\nthe universe\nI think this may in fact be in the wrong\nsection remember we are doing counter\narguments to whether superhuman AI would\nbe sufficiently superior to humans to\noverpower humanity and uh in this case\nthis is whether you actually want to do\nthat and that is like a different\nquestion to my mind\ncatcher uses herself as an example she\ndoesn't try to take over the universe\nand I explicitly think this is uh mostly\nfor lack of ability uh no uh offense to\ncatch up but she doesn't have a feasible\npath towards taking over the universe if\nyou imagine she had one like she's given\na magical safe that has a take over the\nuniverse button inside then sure I\nexpect her to go lock picking\num because to try to get that safe open\nbecause taking a pardon that would allow\ncatcher to take over the universe and\nhave everything done everyone follow her\ncommands or something like that would\nprobably\num really really useful to her\num and uh taking over the universe as a\nsubstance for for your call that seems\nreally laughable for almost any human\ngoal I don't in fact think that this is\nlovable I think\num it is only because it appears so hard\nto take over the world uh and that\nhasn't actually stopped people there are\na lot of people who have in fact tried\nto take over the world and to these\npeople it hasn't been lovable it has\nbeen the uh the most likely path to\nsuccess to try to take over the world\nso the question is how much easier would\nit be for an aeon here at this point I\nwould like to have like or the a\nstandard reference would be Possum's uh\nsource of advantage for digital\nintelligence uh but it's been written in\nmany other places there's a lot of\npeople who have showed scenarios on this\nkind of thing\num\ncatch it doesn't really show gaps in\nthis arguments as much as just posting\nthe question and not making any\nreference to the answers and just uh\nputting a question mark to that\nanother empirical question is the\nquantity of new cognitive labor\num how much will it be well if we assume\nit comes out to just a peak human uh\nthen that is not enough to take over the\nworld if we say it's like a million\nhumans then probably can\num I mean I'd actually show that one\nmillion is sufficient one million uh AIS\nversus 8 billion humans that's not\ntrivial even if you have good\ncoordination it may be possible but it's\nnot trivial\nbut we assume that computers get faster\nfaster and algorithms get um\nbetter and better and we get more and\nmore larger projects with more\nwhat's it called this process where you\nget the learning curve you get better\nand better AI cognitive performance over\ntime and I think that is in fact the key\npart of the\nthe answer to this question that already\nnow we can ask how much does it cost to\nto run this chat gbt3 and have it com\ncomposed like a college essay compared\nto having a human composters college\nessay and of course we are already\nstrongly past the point where if you\nwant to make a million\num college essays cheaply you would\nobviously use tpt3 and you would\nobviously not have humans do that task\num\nand as a claim that if we assume that we\nhave ai's gradually entering Society so\nwe have a period of some AI presence but\nat a low level then when the AI takeover\nhappens if it happens then uh the\ncatcher claims that because we have had\na period of time with AI being gradually\nintroduced then the eventual takeover\nThe Coop would be more slow slow moving\nI don't think that follows at all I\nthink it's very possible to have a very\nvery gradual introduction of AI combined\nwith a coupe that takes like\nmilliseconds\nthe speed of intelligence growth that is\nclaimed to be ambiguous I don't think\nambiguous is the right word here I think\nthe word is uncertain I don't think\nthere is a lot of ambiguity in the\nconcept I think there is a lot of\nuncertainty about how fast a takeoff\nwould happen\nand the fact that it's uncertain how\nfast it would happen is not something\nthat\num the arguments would disagree with\nlike Bostrom say it's more likely than\nnot that the Takeover is fast yukowski\nhas explicitly said that he believes\nthat everything uh that he that only\nreally predicts what happened above the\nlevel of atmospheric turbulence\num and I I think I also personally agree\nwe won't know this in advance\nso how could an intelligence explosion\nhappen one of the key ways is recursive\nself-improvement catcher wrote an\narticle in 2018 with counter arguments\nand I made a presentation just like this\nwith counter counter arguments but I\nwould say that 2018 is a long time ago\num and like when catcher expressed\nuncertainty about in 2018 about how\nwould technology develop I think it was\nprudent and was correct of her to\nexpress uncertainty but it's the 2018\nanymore like we literally have\nTransformers in our hand right now and\nwe can see that Transformers can do\nthings that a lot of people in 2018 were\nskeptical would be possible at this\nlevel so I think uh like we have learned\nsomething and I'm sure catcher has also\nupdated substantially since 2018.\nthe second reason to suspect an\nintelligence explosion is by analogy\nwith the primate to human evolution that\nsuggests any movement past human level\nwill take us to unimaginably uh\nsuperhuman level\nagain like unimaginably superhuman if\nyou just go a bit above the human level\nit seems like obviously as a silly straw\nman right no one would claim this it's\nalso not very much a part of\num\nthe intuition behind the intelligence\nexplosion Nick Bostrom in his argument\ndoes not refer to this at all\num I think in the AI film debate\num yourkoski makes a few small weak I\nthink he explicitly calls him weak uh\nreferences to this\num\ncatches as if she hears strong arguments\nexist for why we should expect an\nintelligence explosion\num if she finds them I would be very\ninterested in seeing them I have\nactually heard the opposite that there\nare no strong Arguments for and whether\nor not an intelligence explosion would\nhappen\nfinally some of the key concepts are\nvague in particular control which is a\nrather central part in an AI takeover\nscenario is not Zero Sum\ncatch it doesn't go into it more than\nthis but I think it's an interesting\nquestion and I'd like to go more into it\nand of course depending on how you\nDefine control\nif you would Define it as like control\nover nature that could increase you\ncould imagine you have um like an\nentropy-based definition of control in\nthat case you could also see the total\namount of control increase\num\nI think the arguments like I'm not going\nto say that Nick Bostrom is perfect I\nthink almost certainly you could make\nthings more precise you could make\nthings more mathematically rigorous\num\nbut on the other hand I also feel that\ncontrol is a real thing and my intuition\nfor why this is a real thing is that the\nsense that there are many different kind\nof definitions you can come up with and\nthey all imply each other so if you have\ncontrol among one of these definitions\nthen you have control among all of them\nso here are five definitions of what\ncontrol means in this specific scenario\nso Nick Bostrom has the definition of\nstrategic decisive strategic advantage\npeople often use control as like having\neconomic and political power dominance\nor the ability to influence your light\ncone the ability to kill others and not\nget killed and influencing and directing\nothers as the\ndictionaries says and I think in all\nthese cases having a totally dominant\nand strategic level of control like if\nyou are perfect at influencing the light\ncone you can get a decisive strategic\nadvantage and if you have perfect\npolitical and economic power then you\ncan influence and direct others through\nany level required\nthat is all for today thank you and see\nyou next time", "date_published": "2022-12-15T22:22:35Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "c89681b77273b56611af95dafab7fc53", "title": "203. Roadmap to a Roadmap", "url": "https://www.youtube.com/watch?v=1w9gYhhXzwI", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "yes\nso i know that\nuh you've all read the paper and come up\nwith some\nreally great probing questions so i\nwon't rehash all that in detail\nbut just briefly to set the stage of\nwhat\nprompted this paper i felt like\na lot of the discussion in the\nliterature around\narms races toward ai and potential\narms races didn't take into account\nthe relationship between what we refer\nto as\nthe surface area of the problem and\nhow nation-states actually make\nstrategic decisions\nso i felt by exploring\nhow that tension could play out we could\nopen up\nsome profitable areas for exploration\non what sorts of incentives\nstates may face over the coming decade\nor more\nand namely find some arguments for why\nthe incentives\nthat major powers face may be\nsharply discontinuous at some point\nso that's the gap\nthat we intend to fill with this and\nwe're\nreally looking forward to diving into\nyour questions unless\nmathis you have anything else you'd like\nto add right off the top\num yeah no so i think what drew me to\nthe paper the topic is the sort of\nintersection of\na questions around uh scientific\nuncertainty and scientific processes\naround the eye and sort of outside and\ninside views of of timelines and\nand what's involved in certain\ntrajectories of agi development and then\nconnecting those sort of\nepistemic or scientific questions to\nuh strategic and governance questions on\nthe other hand and there's a lot there's\na lot of moving parts in there and a\nnumber of uncertainties\nbut that makes an interesting basis also\nfor um\nunderstanding what are the sort of\nrelevant parameters to think about\nthe intersection of\neither yeah so the agi development and\nprogress in the coming years or\nperceptions of such development and pros\nand progress and how that will feed back\ninto um\nyeah the problem of ai governance and\nthe problems of\nsultan addressing ai safety\nokay great so uh i should also add that\nzoom has this raise your hand\nor feature or alternatively if you have\na question please\ntype it type the question in the chat um\nand then i'll try to maintain some kind\nof queue\nof questions but the first question\nwould be\nfrom this list here which is when in\nthe situation where agi\nmight not be a manhattan project away\nbut something that i here referred to as\nati\nlight could be a manhattan project away\nwhich is\na an agi that can that is much less\ngeneral\nonly capable of um doing a particular\nstrategic\nuh task and whether that would be\nsufficient for for state access to um\nto pursue a manhattan project against\nmy sense of it is the answer to that\nwill differ\namong the six strategic cognitive\ndomains\nthat are listed here intelligence\namplification\nobviously has huge implications for\nsuper intelligence and spiraling effects\nthat could have absolutely decisive\nstrategic outcomes so for that one\nabsolutely that could justify a\nmanhattan project\nstrategizing i wasn't quite sure what\nwas\nmeant there could the question or maybe\nflesh out a little bit more detail\nuh so uh i was one who posed this\nquestion and i took this actually from\nnick bostrom's book super intelligence\nwhere\nthere is some more description of\nsubtasks in\num under the heading of strategizing\nit's uh\nyou know long term planning and uh\nmodeling\ndifferent actors actors and their\nincentives in order to\nobtain uh distance strategic goals this\nkind of thing\nyes so that seems to have\nsome forking along\ndifferent domains there are some areas\nwhere\nstrategizing in in nearer contexts\nwould not seem to imply a very general\nintelligence and others in which\nit absolutely would when it requires\nsome very abstract world model or\nrepresentation of\nhow many different factors come together\nsimilarly social manipulation\nat a high level would likely require a\nvery general kind of intelligence in\nthat way\ni can see how technology research\nin in some areas might not but\nthe overall strategic impact of\nai that can productively advance let's\nsay\nvaluable tech patents would be quite a\nstrong\nincentive hacking is an interesting\nsubject there because\nmost real life hacking\nhas a human component\nthe most effective hackers are generally\nthe ones\nwho don't train some million dollar\nsuper cluster to crack something by\nbrute force\nbut rather they offer someone\nmail enhancement pills and\nplay on their human insecurities somehow\nand fish them\nso there are some components of hacking\nthat would not imply a very general\nintelligence at all that is the the\nmore brute force mechanistic algorithmic\napproaches and then some\nwhich are in fact not much different\nfrom\nsocial manipulation\ni i think it's also interesting though\nor yeah\nvery even so there's the question of uh\naj light as on the\ntechnical level or sort of like how\nuseful or capabilities but also there\ncould be a political level the question\nof well\nwould it induce a manhattan project an\nimagine project of course is sort of a\na it's a sprint not just out of\num if we would like to have it it will\nbe useful\nuh but it's a desperation of like we\nneed to have this before the other side\ngets it or before anyone else gets it\nand i think in that sense it's\ninteresting to look at these\num because i could imagine some\ncapabilities amongst these that\nare undeniably useful but i'm not sure\nif if\num interestingly so strategizing for me\nis an interesting one because if you\nhave a system that can only do\nstrategizing\num i i i at a human level\nand it doesn't have intelligence\namplification so it doesn't get better\nin a weird way\nuh i can see that hitting diminishing\nreturns pretty quickly uh\nit's in the sense that if you uh in the\nsame sense that a\na military doesn't hire a a\na thousand people to sort of come up\nwith a with a yeah a strategy at some\npoint it's just a number of generals in\na room\nand so um i could see\nif it was just our system helping plan\ncampaigns and stuff\num having having agi level capabilities\non just strategizing\nsaves you labor costs but at some point\nyou don't need more and at some point\nperhaps adding\n100 more systems doesn't meaningfully\nimprove\nyour military campaign beyond just\nhiring\nyeah 30 generals and so\ni guess it's interesting that like\nthere's some of these where and again\nintelligence amplification it's hard to\nimagine you\nhaving that without it sort of really\ncashing out into the other capabilities\nat some point down the line\num but if it was only intelligent\namplification\nand it was sort of like oh yeah you just\ngot um\num yeah very very narrow\nimprovements on a very narrow other task\nthat doesn't really generalize beyond it\nof course you're not really talking agi\nuh in the first place then\num but i could see that not necessarily\ntriggering a manhattan project rush\ntechnology research and economic\nproductivity would really depend\non whether a states believe sort of\num so if\ni'm china and i believe the u.s is about\nto have an agi light\nin technology research whether i'm one\nto\npursue a manhattan project towards a\nagile light technology research\ncapability depends on what i think\nis in reach if i think that\ncapability will allow the us to crack\nfusion power\nwell i mean that air like that would be\ngreat and it would really be i mean that\nwould be disadvantage\nfor us it would be advantage for us but\nif i think that this capability\nin technological research will allow the\nus to within the next five years develop\nsort of unbeatable aircraft\nor a way to overcome the chinese sort of\nnuclear deterrence that would be\nthe grounds for if i was china to start\na manhattan project\nrush yes i would\nalso push back on the premise of the\nquestion a little bit\nin that arms race decisions are made\nunder profound uncertainty\nit's very hard to know in advance\nhow limited an agi light would be and\npersonally\nit's hard for me to imagine what sort of\nevidence\ncould strongly persuade china that they\nalmost\ncertainly can sprint to social\nmanipulation in five years\nbut almost certainly cannot sprint to\ntechnology research in five years\nat that level of abstraction i think\nour prior should be that intelligence\nis too fluid to make predictions like\nthat with any kind of confidence\nokay great uh i think david had a uh\na question here\nwould you like to repeat your question\nyeah so\ni had a quick question about your\ncommentary\non the uh social persuasion um\nsuperpower uh in particular uh though\nto some extent it generalizes to the\nrest of them\nyou said that at some level that\nrequires a\nhigh level abstract world model\num which i agree it does\nfor doing persuasion the way humans do\nit but isn't that an\nunnecessarily anthropocentric view\non persuasion i could easily imagine\nuh gpt's uh n style\ntext predictor um\nbecoming extremely persuasive under the\nright circumstances\nand um uh\nif i am correct about that would that\nchange her answer at all\ni think actually so it's interesting i\nmean you also have some some thoughts\non that so um it's it's an interesting\nthing because i think there's a\ndegree to which you're right that you\ncould have persuasive systems that are\ninvolved into even like gpn or even i\nmean there's currently studies on the\npotential uh to use gpt3 and even gpt2\nin the past\nto produce ideologically aligned content\nor radicalizing content and i think\nthere's a study that came out a couple\nweeks ago\nyeah that showed that that you can use\nit to create\nideologically tailored content that\nconceivably since humans can reliably\ndistinguish since gpt3 content and human\ngenerated content\nwould be as persuasive as any propaganda\nat least this\ni guess there's a there is a sense in\nwhich\nthere might be a harder degree to if\nwe're talking about either like\na general persuader so in a very\ndifferent paper um\nthe um uh policy this is\nthe zidarata so nick bostrom allen\ndefault and carrick flynn i think that\nis this\ni think it's now called a vector fueled\napproach anyway they discussed the\nidea of a super persuader which is a\nsystem that can persuade anyone of\nanything\nand that i think would be something more\nthan just\ntargeting people with sort of a preset\nideological content it sort of implies\nthat you have a system that will be\nable to turn me against my family in in\na sense or turn me into\nuh and i guess is a\nhigher level of i think that that second\ncapability might\nrequire a role model but it's true i\nthink it might be the case that for many\npractical purposes of\nyeah ideological conversion near-term\nsystems scoot work\nyeah maybe john yet also some but\nuh yes i i sorry can you can you put the\nsorry can you put the name of that paper\ninto the chat yeah i'll\nuh put it on my rating list sure\nokay i think persuasion is still\nrather poorly understood from a\nneurological\nstandpoint we don't yet have\na scientific basis for\nsaying what the strongest form\nof persuasion might look like is it\npossible\nthat a gpn could find the right words\nto convince anyone to turn against their\nfamily in\na tweet-length number of characters my\nintuition says no\nbut the science on that is still\ni think just so immature that\nwe have a lot to learn\nokay i'll go to the next question here\nis and that is the fact that nuclear\nweapons are actually\na really really ethical kind of\ntechnology\nin that if you expend half the effort of\na manhattan style project\nthen you get half a bomb and that won't\nwork\nso it's a very discontinuous\ntechnology whereas we expect that\nsomething like tp3 seems to improve\ncontinuously\nin the more compute you add to it the\nmore the better it becomes\nand that seems like it's the fact that\nthere are\nhalf an agi might be uh\nhave some ability to do social\nmanipulation\nand some ability to do uh uh economic\nto be economic productive um and there\nare a number of other\nadvantages here gwen have some where you\ncan\nuse once you have this model then you\ncan use it to train other models\num and how does this change if it turns\nout that agi\nthat it's a more um continuous case\nwhere\nyou put in a bit of effort and then you\nget enough money for it to pay\nfor itself quickly\nyes so we think a manhattan style\nproject makes sense for something that\nis more continuous than nuclear weapons\nbut not fully continuous there's at\nleast good\na priori reason to believe that there\nwill be\nsome discontinuities on the road to\nagi for example once ai\nis able to read english with strong\ncomprehension\nit can gobble up an enormous store of\nvery rich conceptual symbolic\ninformation\nthat it currently doesn't have access to\nthat is to say\nsort of reading all of wikipedia and\nremembering it perfectly\nso computers have advantages inherently\nin\nin speed and memory that will be\nunlocked once the\nnatural language processing and some\nother cognitive faculties\nreach human level so i see a bottleneck\nthere\nthat should get us thinking maybe this\nwould be a discontinuity\nyeah i think this gets us a little bit\ninto the debates over\nto what extent we sort of\nwould follow the scaling hypothesis of\nthat we\ngptn will basically get us to agi in\nwhich case\nyeah that seems that we're maybe we're\nin a way already\nyeah on on a runway or it's basically\nyeah we you could just scale it up and\nall of it's gonna be continuous\nor whether we expect a sort of more\num uh yeah\nit's an assembly of pieces and perhaps\neach of the individual modules\nwill have some sort of limited\nuh domain area yeah\nfaculty so you have systems that can do\nyeah\nlanguage generation you've got some\nsystems are able to do limited\ncausal reasoning then um\nbut that and those are there might be\nincremental benefits but that\nin some sense you only get a large leap\nin benefits once you put them all\ntogether and to\nthe extent if you want to stretch the\nanalogy you could argue that there were\nlimited benefits to nuclear research\neven before you had a nuclear bomb in\nthe senza it\nyeah you could get people uh work in\nnuclear power you got\nit could feed into sort of nuclear uh\nlike radiation medicine\num i i agree that those aren't quite as\nsalient in that sense um\ni think it's a question of whether\nthe scenario here is indeed that like\nit's continuous in the sense that oh\neach of the individual capabilities is\nalready really great\nand then when you really put get to agi\nit's just a cherry on top so it's like\n50 better or whether we're talking uh\nyeah\njust qualitatively more powerful um but\nit's yeah it's an interesting\ndistinction i think also there's a brief\nscience\nquestion there that nuclear\nat least after the sort of manhattan\nproject era\nother states have faced the question of\nlike well you want to build a nuclear\nyou might want to have a nuclear\nbreakout but the\nfinishing line is no longer just having\na opera operable nuclear weapon but it's\nhaving a deliverable deterrent and so\nyou need to raise\nnot yeah to having a single nuclear\nweapon but to having uh something that\ncan be\nsurvivable anyway\nthank you for the answer so the next\nquestion is from ali\nali would you uh post your question\nplease\nuh i think i did um\ni'll just the last thing i posted that's\nbasically\nwhat i was yeah that's probably the\nquestion i would ask\nsorry that was with the um\nthe decaying of the of the\ninfrastructure or\ni'll just read it out if that's okay\nyeah suppose you tried to make an agi\nand fail because you were five years too\nearly\nfive years hence would you be in a\nsubstantially better position than\nsomeone who had no experience\nat attempting to make an agi of course\nyou means\nthe relevant sort of entity like a state\na massive corporation etc etc\ni think that this is somewhat\ninteresting in the case where you have\nsay\ncontinuous progress or where you have\nsomething like\nthe time scales to agi are not really\nthat long\nso say if it's in the next 50 years and\nit's plausible that you could just make\na mistake 10 years and be 10 years too\nearly\nso in a sense you could argue that um\nwell without having sort of the shorter\nfive-year timeline\nuh one of the cases that we sort of\nrefer to is that in the\n80s darpa was running a um\nyeah a project in pursuit of what\nthey didn't call it agi but we would now\ncall it agi and they were\nand they failed um and\ni'm less clear about sort of\nwhere the the infrastructure the\nexpertise of that specific project went\nin the succeeding decades\nbut i could imagine a lot of it sort of\nwas diffused or at least\nfolded into other projects\nin the defense sector or elsewhere\ni guess the interesting case then here\ncould be something like\num darpa trites in the 80s to make a\num agi or like a yeah human lovely\nbut they failed but what if russia\nduring the 90s had made a breakthrough\nthat seemed that they were very close\nhow easy would it have been for the us\nto sort of reassemble the team and be\nlike oh hold on we were actually\nyeah uh pretty close it seems\nso it seems that as john clark says that\num\nhaving had the infrastructure\nand the the trained up people um\nthat if you went right into a wall but\nsomeone else\nfinds a piece of inside that\nyeah makes the surface area much larger\nyou could just re-mobilize\nthose materials but of course that does\nimply that\nyou haven't sort of like poisoned the\nwell and and um\npeople are just unwilling to spend money\non this or to accept this\num as a new project that's actually yeah\ni\nwould need to think about this more in\ndetail because it's an interesting\nscenario\nso john do you have you have written\nsomething i guess that's your answer\num it seems plausible to me\num the only thing i would just add\nis that if\nan agi manhattan project fails\nthe reason would likely be that\nthe surface area was too small\nand so a lot of the effort will in\nhindsight turn out to have been wasted\nso that that's something to at least\nkeep in mind\nas a plausible reason for the failure\nwhich may suggest that\nthe actual progress made toward\nfulfillment of agi would be smaller than\nyou'd expect just from the number of\ndollars spent\nokay so the next question uh is\nuh with some of the historical examples\nyou you pre\nin your paper you have some um examples\nuh of projects like either\nor uh the international space station um\nbut there's another kind of reference\nclass you could do with uh\nthings like uh if the\nif ati could potentially give a decisive\nstrategic advantage to\neither you or your rival then uh you\nmight look at things like um\ntotal war in the second world war where\ngermany and the un and\nthe soviet union were both uh faced with\nthis particular choice and\nindeed chose to devote many many\nresources\nto this um and that was one\nother kind of possible parallel the\nother was with uh dreadnoughts\nwhere uh the introduction of a new\ntechnology\nuh suddenly spurned and an enormous arms\nrace what do you think about these two\nkinds of\nuh alternative parallels that is a\ngreat question uh i don't think\nthat the dreadnought parallel is very\ninstructive here\nbecause as revolutionary as dreadnought\nwas it was still vastly more incremental\nthan nuclear weapons or the prestige of\nlanding first on the moon\nor what we think is likely with agi\ndreadnought could still hit a mine and\ngo right to the bottom\nalso very key is that the arms race\nin battleships in the early 20th century\nwas asymmetric between the royal navy\nand the imperial german navy what i mean\nby that is\nstrategically all germany had to do\nwas tie up the royal navy\nbritain had this vast overseas\nempire with shipping lanes in\nevery ocean that it had to defend\nand by contrast germany didn't have\nanywhere near that kind of\nfootprint that it had to defend over the\nseas\nthus germany could lose a battle\ntactically\nand win strategically just by\nreducing the relative advantage\nthat britain had so i think dreadnought\nwould have\nbeen decisive if germany's main goal\nhad been winning a trafalgaresque\nshowdown in the north sea but that just\nwasn't their goal\nso for that reason i don't see\ndreadnought as revolutionary as it was\nas being that kind of\ngame changer\num oh sorry oh uh yeah uh\nso and the other question the other\nparallel where uh the manhattan project\npossibly was existential but world war\ntwo for the united states\nsurely was existential or almost\nexistential\nand they were willing to devote\napparently 100 times as\nmany resources to winning the second\nworld war so that seems\nto imply a much large your willingness\nto\nto invest when it seems necessary i\ni disagree with that analysis the\nmanhattan project i think\nonly cost that little because the us\ndidn't know\nhow to allocate more resources to get\nthe bomb sooner\nif they could have gotten the bomb in\n1943 for 10 times as much money\nit seems quite likely that they would\nhave done so\nalso i i would add that the manhattan\nproject\nwould have been existential for the u.s\nif germany had gotten there first and\nthat was\nthe single driving worry behind the\nproject\nin the first place so\nif um if the\nif the us hadn't taken it as seriously\nand germany had gotten the bomb first\nespecially\nif by a margin of several months or a\nyear or two\nthere's a very good chance that the u.s\nultimately would have been\nheavily atom bombed carved up enslaved\nand put under\ntotalitarian rule by my lights at least\nthat's pretty existential\nso it's a i think it's an interesting\nquestion over like what it what makes\ntechnology existential what makes it\nperceived to be existential and so\nthere's an interesting case\nso at the end of world war ii where\nthere is the debate\num essentially over um\nwhat would be the relations between the\nthe yeah or\nthe the western allies and the soviet\nunion\nwhere there were advocates at the time\nincluding uh controversially\num the uh so russell\nslater's pacifist who advocated for a\nnuclear strike against soviet union\nuh in order to um yeah\num prevents sort of what they what they\nfear would be the inevitable nuclear\narming and then a nuclear war with the\nsoviet union\num and at the time stalin\nstalin perceived this to be\num so at least somewhat existential i\nthink there's sort of quotes where he\ninstructs his physicists or like we must\nhave the bomb or it's really\nthe exact phrasing escapes me um\nbut there are interesting cases in in in\nlater eras or nuclear armaments\nwhere um there's interesting cases yeah\nand strange cases sometimes of nuclear\nrestraint\nand some sometimes you could argue all\nthat was the case because of um states\nbeing willing to\nlike all we can get under the nuclear\numbrella of some of the great power so\nwhy do we need to do it ourselves\nand sometimes there's cases argentine\nand brazil i think we're running\nnuclear programs in the 60s and 70s um\ni mean it was venezuela but so basically\nthey they had fought\na war in the late 1800s there were some\nrivalry and some sort of territorial\ncontestation but at some point um\nit's sort of the program's cost exceeded\nthe the\nsense so it's an interesting question\nunder what circumstances\nespecially you could argue that there\nmight be under peace time circumstances\nthat states might be less prone to\nrush whereas under\n10 geopolitical circumstances and\nespecially during wartime\njust these types of sprints are more\nor more plausible\nokay so the uh next question here\nis um how large do we actually expect\nthe uh\nthe surface area to be um in\nin the case of tpt3 which seems like\nit's currently our best guess at what\ncould become\nagi at least in the near terms um there\nare uh\napparently quite a lot of things that\ncan be done uh\noutside of the core researchers who are\nworking with it\nin the paper that itself they they point\nout many many things that could be done\nand it seems like they could um easily\nuh\nscale very very widely in particular\nwith something like fine tuning\nwhich seems like it would require a lot\nof effort but potentially also have a\nvery large impact\nso there is certainly oh go ahead\nno no the question was if we are already\nat the point where\none in a thousand or something could uh\ncould\ncould participate potentially uh in a\natp in\nwhen you say participate you mean in\nsome kind of a decentralized\ncollaborative effort no rather\nis in a manhattan style project where uh\neverybody who has a an education or know\nhow to program computer they start\nworking on\nfine-tuning the sludge model i oh i i\nsee yes\nso i don't know go ahead\nthere are certainly ways of improving\ngpt-3 probably\nby a lot it is deeply unclear\nwhether they can scale that all the way\nto agi\nit is possible but i think the most\nuseful signal there the best\nevidence is that as far as we can tell\nopenai has not been seeking\nvastly larger resources for such a\nsprint\nand because they know more about their\nown technology\nthan we do at least as\nour prior that tells me that if they\nknew how to spend\nthe money they would be\nyeah i think i think this question might\ner i think this question might also\npartially reduce to simply\nwhich of the three scenarios that we\ndiscussed also do we believe or run\nand do we believe sorry if if you follow\nthe scaling hypothesis\nthen it would in a sense be a functional\nlike\nwe have the underlying architecture now\nit's a question of\nof scaling it up sufficiently and in\nthat case you could argue that um\nit's we you can do a manhattan project\nthe surface area is well basically\ni guess from a scientific point of view\nirrelevant because you don't need\ndeep certificate insights you can just\nscale up\nthe the um you can arbitrage\nmoney for compute for performance\num if there is a more sort of ambiguous\none where okay we feel like this is\ngetting\ncloser so i feel like it's the question\nof\num can one in a thousand people\nparticipate at what and so\nwhich people can be trained or retrained\nto\ncontribute to chip production or at\nleast elements that feed into that\nwhich people can uh how\nwhat is the threshold for contributing\nto the\nyeah production of or provision or\nlabeling of of data\nthat's the bottleneck um\nwhereas if we think that the the\nunderlying\nbottleneck is something very sort of\nfundamental or theoretical then\nyou could argue that well the um\n[Music]\nit might be still a fairly constrained\namount a number of people that could\nparticipate with that but on the object\nlevel here\ni confess i've also been impressed\nby yeah by searching for the gpt3 i\nthink we\nwrote the initial version of the paper\njust after\ngpt3 came out and it's been interesting\nsince then seeing some of the broader\ndebate\num yeah within the community\num and i'm very curious to see where it\ngoes in the coming\nyear and answer how far things like\nthese can be scaled up\nso i should just say here that we will\nhave jared kaplan\nthe person who uh predicted that gbt3\nwould continue to scale\nwill be joining the reading group and\npresenting uh just like this basically\nuh on the uh 29th\nuh i uh sometimes in november i can't\nactually remember precisely when\nso for the next question this is a a\npicture from\nthe gt3 paper which basically seems to\nimply\na a scaling law between\nhow much compute you throw at a model a\nlanguage model and how\nmuch validation loss you get and gwen in\nhis\nuh analogy uh compares it with a\nwith the human level with basically 0.7\nbits per byte of validation loss it's a\nsomewhat of stretch analogy in my\nopinion\nbut possibly not totally off and so\nthat's a graph here that you can\nliterally put into well from alpha and\nthen it'll say that you will hit the 0.7\nlevel\nat 60 000 times more compute\nso if you could just buy that i'm not\nquite sure you could actually do that if\nsomeone would\nsell that but that would cost 28 of the\ngdp of the united states to do\nright now and so uh if the manhattan\nproject\nran over like three four years or\nsomething like that then\nwe are we basically have a road map\nright here or we will have it\nvery soon with this does this compute\nanyone\nuh convince anyone um or um\nthis kind of argument or do you think\nthis is possible\nso you mean that sort of given the the\nrate at which computers is\ndoubling uh we would be at a\nthe u.s could buy this for a a few\npercent of gdp within a few years\nis it clearly more and then you'd have\nyou could have a manhattan\nproject type expense like like right now\n28 percent of the gdp the u.s is sort of\nimmense but you could have a\nmanhattan style expenditure within a few\nthat's interesting\nactually 720 billion dollars is\nless than four percent of u.s gdp\nsorry that's me looking up things wrong\nsorry but that's\nwell it's interesting because it is\nequivalent to the u.s\ndepartment the uh military expenditure\ntheir budget\nof last year uh which is maybe yeah not\nin gdp\nlinked um but it's interesting\nto see that type of that's a lot of\nmoney\nbut yeah jungler\num what was\nthe remaining element of the question so\nthe the main\nis why do we need a roadmap to roadmap\nwhen we have\na roadmap here\nwell what i would say there is that this\nis\nnot an engineerable road\nmap um but\nif this hypothesis is correct broadly\nspeaking\nthen we're already on the runway\nand if if someone wanted to spend\nthis amount of money today they could\nget there in a small number of years\nso i hold open that possibility\nall i've been trying to express in this\nsession\nis that our priors should weigh against\nthat\nsorry could i ask a rather naive\nquestion\nwith all this compute what is it you're\ncomputing\ni mean what is this machine\nwhat capabilities is it supposed to have\nthe uh the gt3 uh uh is\ncurrently uh uh basically doing text\nprediction\nso you're you're putting yes why is that\nai i mean that's\nyou know big big computing problem but\nhardly its repertoire of compared with\neven a\nsmall mammal is pretty pretty trivial\nthat's that's of course where it is\nright now that it is uh it's making a\nlot of mistakes\nbut potentially you could just uh ask it\nfor uh\nwhat how uh could you have a plan for\ntaking over the world and then\num it will just say the next words and\nthe next words will be a\nfeasible plan to obtain a decisive\nstrategic advantage\nwhat while or the next words would be\nwhat the trading\ndata or the trading corpus like what so\nwhat the people on the internet\nthink would be the plan for strategic\nadvantage i guess\nyes but it remains deeply unclear\nhow strong of insights could be\ndistilled\nfrom this huge corpus that's an area\nof ai research that still seems quite\nunder theorized so i think that's an\nexample of one of the things that could\nbroaden the surface area of the problem\nif we got\na better sense as a research community\nhow well that approach could scale\nwhether you could read every word\nthat's ever written and come up with\ntime travel or whether there are some\nhard ceilings\nthat cannot be overcome\nby text prediction i i suspect the\nanswer is\nmore toward the ladder that there are\nceilings but\nthe fact that it's not infinity\nstill leaves a whole lot of room\nfor discovery of just how high or how\nlow the ceiling is\ni mean anecdotally there's interesting\ncases where they had\nsystems that were trained i think on\nlike material science discovery\nand then they were able to um\nwhat's the phrase so retroactively\npredict\ncertain types of compounds that were in\nlabor were actually discovered by humans\nbut they were the system could predict\nthem several years in advance\nbased on sort of the uh so\nthere is a i guess a domain focused\nprecedent for if you had a system like\nthat it could actually come up with\nstrategies that are not just reproducing\nwhat we've come up with\nyeah it comes up with sort of strategies\nthat are missing from our\nlandscape but um yeah i i i'm quite\nuncertain about\nabout this\nand i'll just flag that this gets at a\nbroader open area of research\nwhich is around the limits of\nintelligence\nitself what i mean by that\nis in some domains on\nsome questions it's clear\nthat scaling up intelligence\nwill not increase performance for\nexample\nin chess some positions are just lost\npositions assuming your opponent plays\neven competently it doesn't matter\nwhether you have\nalpha zero alpha zero omega alpha zero\nomega plus plus plus\nif your elo rating is 5 000 or a million\nin a lost position you're not going to\nwin and the reason is\nin chess the decision space\nis very tightly restricted by the rules\nof the game\nif the i ai came up with a plan\nto distract the other player and\nswitch the pieces around when they\nweren't looking that might\nwork in a certain sense but the rules of\nthe game\nsay that's not a valid winning strategy\nbut in other areas like physics\ndiscoveries\nit's unclear how much the application of\never greater intelligence can let you\nkeep knocking down walls and making\nadvances and so one of the areas\nthat i expect to get a lot more\nattention over the coming decade\nas we get closer to agi is\nreally trying to get a firmer sense of\nwhat intelligence allows us to do\nin an abstract way and what sorts of\nlimits\nthat implies\nokay nevermind\nso there is here uh some weighing up and\ndown with uh here there's\nan extended quote from your paper here\nwhich weighs some some factors\nup and down and the uh\nand there are advantages to having a\nroadmap but there seems to be some\nstrong\ndisadvantages potentially in that a\nroadmap would help the\nrunner up if there is a roadmap and\naccording to the roadmap\nthe united states will have a decided\nstrategic advantage in seven years\nthat seems like something that would\ncause china to\nuh try to to beat them to it and cause a\nuh\nan arms race\nyes so first i'll just say our citation\nthere\nwas intended to credit yudkowski for the\nterm\nfire alarm rather than to imply his\nagreement with\nthe broader point around that so\napologies if that was\nmisleading um we are in fact\ntaking a contrary position to yudkowski\nat least in the broad strokes\nthat a road map could constitute a\nfire alarm also\nwe're not saying we should build a road\nmap\nour position is it's dangerous for\nsomeone to have a road map\nand therefore we should assess how much\nthat danger is growing that's the road\nmap to the road\nmap and the road map is what would let\nthe winner win\nlet alone the runner up\nokay so um\nuh that's not so many questions from the\naudience um but i guess\nuh we'll go to uh one further back\nhere this is a bit more technical with\nthe questions\nso here we have three scenarios that we\nare approaching the runway we are on the\nrunway or there is no runway\nand there are\nno compelling reason why we have\npriorities why\nwe think there'll be no runway and we a\nlot of\nactors everybody acts like we're not on\nthe runway\num so this suggests that either we are\napproaching the runway\nor we are on the runway at least that's\nhow i understand this quote\nbut shouldn't it be more uh why is this\nuh and as a chance that we are\neither here or here does it i\ndon't think does that make sense\nyes our intent uh\nby that quote was to suggest\nthat um if the second scenario\nholds then nobody knows it yet that is\nif they only spent the money they could\nget there soon\ntherefore if we're not seeing them\nspend the money that\nis not suggesting that there's\nno runway just that they don't perceive\nthere to be a runway\nso does that help clarify that helps us\nclarify yeah\nand so the second part of the question\nwould be that these\nthree uh factors that the researchers\nact like this and the companies act like\nthis in the states act\nlike we're not on the runway it could be\nan information cascade where\neverybody is actually sitting with some\nprivate information that looks like\nit should point towards that we are on\nthe runway but they are updating too\nmuch and all the others who seem to\nappear like\nwe're not on the runway i like\nyeah i like that i mean it's interesting\nthere's a um\ninteresting argument gordon has also\nmade and maybe you've discussed\nuh also already where he suggests that\num\nto open the eye has bought into the\nscaling hypothesis\nand sort of wants to um pursue that but\nhis point is also that\num that they need the scaling hypothesis\nto be true they so open ai needs us to\nbe on the runway already\nbecause if not if not if there are sort\nof deep underlying breakthroughs then\nthere\nhave a much poorer chance of competing\nagainst deepmind's approach or\nsome of the other major players and in\nhis\nreading at least deepmind uh and other\nmajor laps\nare loath to sort of um\nyeah sort of pivot away and and\nnow really pursue the scaling hypotheses\nthat may not be\nso from other services questions i've\nhad there are actually a fair number of\npeople within deep mine as well that are\nyeah quite enthusiastic and sort of will\npoint to the success of gpt tree like\nsystems\nas a rational for pursuing um\nthat more but it's interesting to see um\namongst the yeah the different labs\nwhich of them\num so yeah how do they weigh\nthe evidence coming out of their own\nlabs how do they weigh the technical\nevidence\ncoming out of others labs and how do\nthey weigh\nthe sort of the perceptions or the um\nyeah the claims from other labs and\nother teams\nthat would be an interesting yeah model\ni guess there could be a scenario where\nthere is um where everyone sort of seems\nto\nbelieve well no one else is panicking\nand moving out into the um\nin relation and really scaling up and\ntherefore\nwhat we have here must not be sufficient\nto to get to agi\num then it's an interesting question to\nwhat level of\nlike if the scaling laws don't break at\nwhat level would the\nnerve or the confidence of different\nactors that this is not going to go to\nagi\nat what point does that break\nyes i see the information cascade\nstory as being more likely if actors\nbelieve\nthat secrecy is hard\nthat is to say if you\nthink there's only a very small chance\nthat someone can\nsneak an agi manhattan project by you\nin secret you'll be more likely to\ntrust the public signals they're giving\noff\nas revealing the\nquality of information that they\nactually have and\ni think that would be an interesting\narea\nfor further research building on this\npaper\nof what the signals would actually be\nfor detecting a sprint project\nof this nature how plausible could it be\nthat a major effort like this could be\nundertaken\nby a nation-state without the broader\nresearch\ncommunity knowing and i suspect\nthat the answer would differ\nsubstantially\nbetween democratic and authoritarian\nsocieties\nbut i don't know and i don't have a\nstrong enough understanding of the\nchinese ai research community to\nto understand how wide that gap\nis between how well the u.s could keep\nthis secret\nand how well the communist chinese\ngovernment could keep this secret\nthere's an interesting question i guess\nalso there which has to do with\nrelative states sort of uh competitive\npositions so there's like the debates\nover\num there's a lot of these basically is\nthere an arms race in u.s and china\nand if so who's ahead of it and there's\nan interesting c-set report\nthat came across um a while back that\nwas dirt looked at uh well\ndepending on your definition of ai um\nyou will find that either china is a hat\nor the us is a hat and if you're talking\nabout robotics\num the us is ahead if you're talking\nabout uh natural language processing the\nu.s has ahead if you're talking about\nlocation systems and\nchina's ahead um and i\ni'm curious if there's situations where\num so if you imagine a situation where\nuh the us sort of imposes more export\ncontrols\nand and blocks down the export of high\nperformance computing chips\nchina and if china is not able to\nadvance their native production base of\nhigh performance computing the us will\nhave a significant compute advantage\nand china will not and it would be\ninteresting to see how that shapes\ntheir domestic perceptions of what will\nbe the\nlike whether we're on a road map and\nwhat are the likely pathways\nwhere you could imagine chinese\nresearchers not\nnot sort of selling selling research\nprojects that are less compute heavy\nto their to their sort of leadership\nbecause they're not never going to\ncompete\non compute anyway um\nbut that is this is a lot of speculation\nlike i would be\ninteresting to um yeah explore frederick\ncould i pitch in with a slightly\ndifferent perspective on this\num because i realized that this\ndiscussion is\nis more political than than technical\nbut um all the way back\nwhen i first read super intelligence\nit was very striking to me\nthat nick had some\nsuggestions for how we might achieve a\nsuper intelligence\nbut there was one missing and that was\nthat\nsomebody might come up with a theory of\ngeneral intelligence which\nwould seem pretty implausible but from\nwhere i sit\nat the intersection of a number of\ncognitive sciences there's a\nthere's been a convergence of thinking\nacross\nai and cognitive science and\nneuroscience\nphilosophy and various other things\nout of which there is actually quite a\ngeneral framework\nfor understanding general cognition\nhuman cognition anyway\nand and which is which is mechanizable\nand my own interest has been applying\nthose ideas in\nin medicine and i don't know if this is\nan agi light\nbut you know i think i could persuade\nyou\nthat we could build a an agi light that\ncould\ndo pretty much what the doctors of the\nworld do\nfor a large proportion of medicine um\nit's not a compute problem it's a\nproblem of having the right theory\nso we're trying to say what you're\nsaying sorry i would take too long\nit'd be boring to go into it but it it's\nit's about\nthat's what it's just a line of work\nthat i've been pursuing\ni think as i say another at another time\ni\ni can i think i can persuade you that we\ncould build an agi light that\nthat could pretty much cover the whole\nof medicine and the constraint isn't\ncomputing\nthese these are not difficult things to\ncompute um\nit's uh it it's knowledge\nwhat what do what do the collection of\nall healthcare professionals in the\nworld know\nand how do you capture that and they\nunderstand it of course they don't just\nstatistically understand it they\nunderstand it in a deeper\ndeeper sense but but it seems to me that\nas far as that agr light is concerned\nfor medicine\nwe're already on we're already on a\nrunway\num and it\nwouldn't it wouldn't be billions of\ndollars to do it\nwhether anybody's prepared to fund it\nfor medicine as opposed to\nwarfare i don't know\nso that probably sounds a bit um i'm\nconvincing because i haven't\nwell so i find it interesting so there's\nan interesting research project that\ni've\nsometimes wanted to look into and this\nalso relates back to the earlier point\nwe're having it's\nlike um could we be on a runway and\nenter\nwith actors if if they don't know or\ndon't realize it and there are history\nthere is historical precedent with sort\nof like\num i think i think alan davifo and his\nresearch agenda\nrefers to um penicillin as a case where\nthere was a\nsort of underlying breakthrough but it\ntook a decade for for people and funders\nto realize that that was lying around\nand so it's an interesting question of\nwhere when and where\nthere are uh what what's the phrase the\nsort of the um\nthe debate over the 20 dollar bill on on\nin central and sort of like well other\nscientific or theoretical\num yeah uh\nyeah low-hanging fruit or dollar bills\nlying around actually that aren't\nyeah recognized here in certain states\nthat would be something for the model\nbut i think i think what you're\ndescribing john reinforces\nwhat we were talking about a little\nearlier that\nagi is still under theorized that\nthere's still a lot of room\nfor breakthroughs quite apart\nfrom on the computing side\nthat could then unlock\nwhat seems to be an overhang on the\nhardware side\nand and that's some of the evidence that\nleads us to expect\ndiscontinuous progress\num yes i think i think what i've\nperceived and i've i'm i'm pretty\nold-school\nai um is that over the last 10 15 years\nuh we've kind of decided\nprobably in the 90s that good\nold-fashioned ai\nhad failed it couldn't deliver um and\nand the last 10 years or so it's all\nbeen\nbrute force various kinds whether it's\nreinforcement learning or the countless\ndifferent types of\nbig data analysis and so on and so forth\num\nand i think the assumption that the\nold-fashioned ai has failed is wrong\num and and i was really\nstruggling to understand why\nanybody would imagine that just putting\nmore and more\num compute power\nexpensively into a very large machine\nwould deliver what\ni use the phrase any kind of interesting\nrepertoire of\ncapabilities other than predicting text\nwhich is um don't see where that goes\nbut i do see where building a machine\nthat could basically advise on\nany topic in in medicine um\ni i do know how to build that um\nit doesn't it it requires resources but\nit's not compute resources\nit requires coordination i mean i i\na few years ago i i i had\na similar idea of a manhattan project i\ndidn't want to call it manhattan project\ni called it the greenwood project but\nyou know you could bring a bunch of\npeople together from a lot of different\ndisciplines\num and that you could build this thing\num\nand if you can do it for medicine you\ncould probably do it for a lot of other\nother fields as well as i said that\nthat's not the question you're asking\nyou're in\nyou're interested in a political\nquestion um\nrather than a technical one but but i do\nstruggle with the way\nthe the political debate political\ndiscussion\nwhat it's grounded in um it's not kind\nof grounded in anything that i can\nunderstand as a as an agi\nwith a broad range of capabilities\ni'm sorry i'm sorry if i'm um\nbeing i don't want to be critical and\njust struggling\nno not at all\njust to clarify a bit on text prediction\nyeah i think what people mean by that in\nthe context\nof gpt 345\nn is that if humans\ncan output only through a linguistic\nchannel\nlike in a touring test then a text\nprediction\ntask is functionally equivalent\nto human intelligence\nbut what remains under theorized\nis whether the kind of text prediction\narchitecture that gpt uses\ncould actually achieve the performance\nthat human cognition does but in terms\nof the underlying task of\ngetting a prompt and then answering that\nprompt that's the fundamental similarity\nbetween what our brains do and what\na gpt style transformer does\ni still struggle it sounds just so\nsimilar to\nyou know if you had enough monkeys and\ntypewriters you'd end up with\nshakespeare um\nyou wouldn't and if you did it wouldn't\nbe interesting because the monkeys\nwouldn't understand anyway\nin a sense you you may be proven right\non that\njohn in that it might turn out\nthat the curve does bend that scaling\ngpt's up takes us\nfurther than where we are now but still\nlands well short\nof agi and in that case then\nyou're just adding monkeys and not\nmaking progress as you scale up with\nmore compute beyond that point\nwe just don't know today in\noctober 2020 how\nhow that question will turn out\nokay um so i'll\nmove towards the next question i hope\nthat answered it john\nand that's um we had an interesting\ndiscussion\ni'm still struggling okay so um\nthis is another quote from your paper a\nbroadly\nset of metrics for assessing how close\nwe are to the runway would reduce the\nrisk of\ncompromising ai safety and that's\nof course one thing it could do in that\nif people\nbelieve that we are not in an arms race\nsituation\nit um then there would be no\nnot so much of a rush but it could also\ndo precisely the opposite trigger and\narms race\num and if people can see that building\nthis\nroadmap will possibly turn into an arms\nrace\nthen there seems to be two kinds of\ninterest there are the ai safety\nconcerns\num which motivates uh don't publish a\nroad map if it will uh if it will turn\ninto a\nan arms race and then there are these\nstrategic uh concerns which say\ndo publish and roadmap if it will give\nus a strategic advantage\num and it seems like um\nstatesmen care much more about strategic\nconcerns\nthan ai safety in practice so they will\nchoose that\ni think the difference is that the\nai safety community\nhas a lot of inherent disadvantages\ncompared to nation states in that we are\ndecentralized we are not\nparticularly well-funded\nand we don't have the same strategic\nimperatives\nimpelling risk-taking that nation-states\ndo therefore they have\nvery strong incentives that will\npress them into this arms race\nno matter what the ai safety community\ndoes\non the other hand taking\nthis road mapping seriously can i think\nvery meaningfully\nincrease the ai safety community's\nability\nto counteract those dangers or mitigate\nthem so\nbasically my answer there comes down to\nthe difference in base rates\nthat the\nthe danger is already so manifest\nwith nation states already dominic\ncummings\nthe chief adviser to the uk prime\nminister\nhas blogged about agi\nas a potential arms race super weapon\ntarget\nso in the context of that level of\nawareness\nalready being present among the most\ninfluential\nstrategic advisors in\ncabinet-level decision-making that\nsuggests to me\nthat the horse is already out of the\nbarn when it comes to not\nspooking nation-states into an arms race\ni see it as quite likely\nthat an arms race will\ntake hold at some point in the next\ndecade or so\nreally the thing that we can control as\nthe ai safety community\nis whether we're ready for that arms\nrace whether we have\nplans for how to mitigate it how to\nsteer state actors\nin safer directions i don't think we can\nprevent the arms race\nbut if we at least have this discussion\nearly\nwe can make the arms race relatively\nless risky i don't think that's a\nterribly encouraging thing to say\nbut it's honest\nwell thank you for that answer it's\ngetting a bit late here\nso i'm thinking uh does if anyone have\nany uh final questions or comments\nwell while people think about if they\nhave any final questions then i would\nlike to\nsay thank you to you uh john clark levin\nand\nto matthias mars thank you for coming\nhere it's been a pleasure hearing you\nand uh i've learned a lot and i think a\nlot of us have learned a lot\nso um thank you very much for coming\nhere\nwe appreciate it very much and uh good\nluck with uh\nfuture papers and whatever you decide to\ndo thank you thank you for the\nquestions also very yeah very very\nprovocative and interesting yes there\nwas a question to me from stephen hoover\ni think\ni think it was uh more of a comment uh\nwell okay\ni'd like to i'd like to pick it up with\nyou outside this meeting thanks a lot\num i do have a question for the authors\nif you guys have a moment uh yes\nso i i think i was confused when you\nguys were talking about\nuh the intelligence uh problem like\nyou you had phrased it about something\nlike\ntheories um we don't have good theories\nof like the boundaries of intelligence\nfor instance\ni think you gave examples of problems\nthat like might not be solved you could\nthink of it\nlike uh two doors with equal probability\nof events one of them is bad one of them\nis\num good but you can't\nredo it so you just have to make a\ndecision and if you don't have any other\npriors or anything else to update with\nis that a good characterization of what\nyou guys are saying and then if so i\nhave a follow-up\nbut if not maybe i misunderstood\nso i think so i think um\n[Music]\nthe the idea that there would be\nsituations where your\nthe ground for decision to serve limited\nor information available to make\ndecisions is limited\nto an extent that just adding more\nintelligence in certain directions\nwill help you make better solutions\nso like in the case that you described\nyou would you would have a yeah 50\nchance and and um\nso so i think that i think that that\nwould be our the case that there's\nlimits\nto the performance gains that\nintelligence can buy you in those cases\nbut what would be your\ni'd be curious to hear your follow-up\nyeah i\ni think that that's like i certainly\ndon't disagree with that i was curious\nwhy you found that to be or why it was\nmentioned because i didn't think that\nthat was as\ninteresting of a problem like it seemed\nlike\nyou could formalize a problem that like\ndoes not have a solution\nand that's fine but it like doesn't have\na solution so i\nwas wondering why that would be\ninteresting uh from your perspective\nwell it it gets to parameterizing\nhow transformative agi or super\nintelligence would be\nfor example in the social manipulation\ndomain\nthat would be much more dangerous if it\nwere possible\nto come up with a tweet length character\nstring\nthat could get anyone to commit suicide\nthat would be vastly more dangerous than\none that\nmerely is quite persuasive\nit's like it yeah it does imp does\nskill a persuasion sort of keep scaling\nbeyond\nhuman levels or is it like does it reach\nan asymptote\nbeyond which it's uh\nyeah being vastly more intelligent at\nsome point doesn't buy you a lot\nmore ability to persuade people to kill\ntheir families or\nyeah it sounds a bit that's right yeah\nit's a it's a dark analogy i'm sorry\nbut no no i mean that that makes sense\nin a sense\nin a way where you'd ask like what would\nyou be concerned about that that's\nintelligible i think that answers my\nquestion\nso to to characterize your point it it\nnarrows your scope on what to be\nconcerned about\nin a sense yeah i think i think that\ndoesn't of course there might\nthere might still leave a fairly large\nrange of\nyeah problems to be concerned about so\nthings that so i think\nto come back to an earlier point even if\nyou can't do\nperfect persuasion of anyone of anything\nhaving a capability that can do\nfairly reliable radicalization over time\ncan still have big societal impacts and\nthat of course\ngets more powerful the more you can\nchange people's minds\nfurther along with a shorter message so\nand at the limit you get john clark to\nthe yeah\na complete sort of character reprogram\nin a tweet-length message\num but of course even\nlong before you get there there are\nthings like okay well if you have a\nsystem that can\nspew out um tax that is basically\neighty percent likely to to convince the\nmedian voter\nto vote for the party that you would\nlike them to vote for then that is\nalready\nplenty politically disruptive if not\nand that would actually that could be\nenough\nto spark a manhattan project because if\nthe us worries that china has a\nlike i mean seeing the debates already\nright now over sort of\nrussian election interference and\ninformation campaigns and this\ninformation\nand so if one state believer was about\nto get a ai capability\nthat gives even just his or her 60\nchance at yeah making people vote for a\nselective party\nthat seems sufficiently yeah powerful\ncapability\num that that could yeah that is quite\num yeah politically uh\nif not existential threats it's quite\ndangerous\num and i see a question over\num so i think\nwe we mentioned darpa's attempt because\nthat was i think a reference we also\nmade in the paper\num specifically to do with the um\nstrategic computing initiative i am\nless familiar with the fifth generation\num\njapan's fifth generation project i got\nthe impression that it was\nso yeah based on expert systems i get\nthe i do think that\nthey at the time so it's interesting i\nthink at the time it was presented as oh\nat the end we'll have a conversational\nrobot\nor a conversational service system i\nthink people were not were thinking\nmaybe relatively small in the sense of\noh and won't that and won't that be a\ngreat secretary\num to sort of stereotype a bit rather\nthan\nin darpa and sort of military context\nmore like this will be the the general\nthat will like bring our armies to\nvictory and so i think\nto stereotype a little bit um and not\nbeing very familiar with the japan's\nprogram i think it's a difference\nbetween someone proposing\nhuman level ai because that will will\nget will\nthat will do great for entertainment or\nfor\num yeah domestic services\nversus a military research institute\nproposing a strategic advantage\nbut it's an interesting comparison\nokay great are there any other final\nquestions\nwell um then uh oh let's see just this\nmoment\nthat was a new move no certain to me\nokay great and so the uh um\nthe final thing um just before i say\ngoodbye to everyone would be that\nnext week we'll be discussing uh shane\nlegg's\nwork on intelligence uh stephen will be\ngiving a presentation there\nand there are uh a um\na really real subset of the people uh i\nwill uh\nsend a message on skype to everyone\nabout what we read next week\nso that is all for tonight again thank\nyou very much matthias\nand john clark it's been a pleasure\nhaving you thank you for hosting and\nthanks to everyone in the group for\nthese great questions and a very\nstimulating discussion\ngreat bye bye bye", "date_published": "2020-10-15T21:18:00Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "950dc07b95bda0b50fde78810ffa3300", "title": "254 Lets see you write that Corrigibility Tag", "url": "https://www.youtube.com/watch?v=A7dlTO33qd8", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 254\nin the ai safety.com reading group\ntonight we will be discussing the less\nwrong post let's see you write that\ncourage ability attack by elias\nyorkowski\neliza yutkowski is\nthe founder of miri and\nthe the senior researcher and uh of\ncourse the author of this post but we\nwill also be looking at 19 other\npost or comments to this post\nmost of which are by people that i don't\nrecognize\nthe post was\na couple of months ago and it's kind of\nlike a competition\nunless wrong and there is one entry in\nparticular the one made by elias\nyorkowski that i will focus on\nuh note that i did not participate in\nthis competition and that is of course\nkind of unfair for me to um like i have\nalso presented some of the principles\nthat i would add in this competition but\num that's after reading everyone else's\ncomments um and principles so that's\nobviously somewhat unfair\nthis all started out with elisa uh\nyutkowski in the post agi ruin a list of\nlethalities lamenting that it is only\nhim who is writing up the list of\nlethalities and\nno one else seemed to be able to do that\neven who being\nobjected to this\nsaying that uh why should other people\ndo that they are writing the same things\njust organizing it in different ways and\nin particular\npointing to a set of four people even uh\npaul cristiano richard go\nand scott garabrant as four people who\nhave written up\nsomething that is um essentially covers\nthe same part\nand so um elias zaykowski notes in this\npost that many people could have written\na list that even says that many people\ncould have written a list like that even\nthough obviously uh like you can argue\nthese maybe if elia siedkowski is the\nbest alignment researcher then this may\nbe number two to five uh of course\nthat's with a big question mark um so\nhere we'll have we'll see whether um in\na competition of who can write lists\nlike this uh number one is in fact\nbetter than number two to five\nand uh the the people uh even hooping up\nuh poker sean richard and scott are not\nreally central to this list\nso\num elias szykowski in the post claims to\nhave been asked why not challenge others\nto write such a list in advance\nand he is\nrather negative about this idea it\ndoesn't do any good and doesn't persuade\nanyone people don't like tests and he\ndoesn't believe in such tests and he\ncouldn't pass a number of similar tests\nbut oh well let's try\nbecause\nif\nit might be a worthwhile try sometimes\nyou are positively surprised\nso the task is criticality we want to\num\nhere is repenting as a definition of\ncourageability the hypothetical\nmotivational system that means that the\nai doesn't want to kill you even if you\ndidn't build it correctly\nthe way elias yorkowski writes uh this\nis in attack in uh\nmad investor chaos and the woman of s\nmotors which is a fanfic and attack in\nthis\ncase is some kind of a post or reply in\nin the sense that you are supposed to\nreply next because this is\nco-written um together with\ni think kelsey piper\nso the topic is the principles of\ncourageability\nand\ncredibility is of course an important\nsubject and\nalready the framing of of principles is\nsomewhat suspect in my mind in that\ncorridability is unlikely\nto be something that we can\nin a principled way uh guarantee but\nmore like something we obtained through\na number of different means and uh i\nthink this is a rather is a better word\nas in\nwe have a large number of things we\nwould like an ai to have such that it\nbecomes more or less correctable um\nprinciples are things you don't\ncompromise on at all and some kind of\ncompromise is almost certainly going to\nbe required\nso to uh ensure that everybody is on the\nsame page elias kowski gives four\nexamples of\nprinciples for credibility those are\nmyopia low impact boundedness and\nquantization\nso based on this the people who want to\nsee whether they can\ntake on this challenge and write uh\ncorrectability tag uh you would expect\nthem to among the principles of\ncourageability have these four things\nwrite something like principle of myopia\nthe ea i shouldn't only consider short\ntime horizons something like that would\nbe very\nexpected and in fact\nnone of the 19 people who eventually\ntake up this challenge\nhave all these four\ncriteria uh the theodoraza principles\nmyopia low impact boundedness and\nquantalization\nso before we can\nprobably\nunderstand the the framing we need to\ntalk about darth elen\ndarth elen is a\nfictional world that is the background\nto many of elias yorkowski's stories\nincluding\nmad investor chaos it is a\ncivilization that is capable of solving\nthe alignment problem it is also a\ncivilization that's good in many other\nways i think you could argue that this\nis elisa yatkowski's vision of utopia uh\nthere are a lot of people who criticize\ndifferent aspects of this\nin different ways so uh but but so first\napproximation i think i think it holds\nlots of people in his stories that\ncriticize part of it like literally uh\nmad investor cares is uh written from\nthe point of view of someone who is kind\nof\nnot quite an outcast but someone who\ncertainly doesn't feel at home in\ndeathly land\nso it's not\nnot really a utopia in that sense\nor it might be\nokay what is how does death elen solve\nthe alignment problem one of the ways\nthey do that is using a selective\nbreeding program to increase the average\nand maximum iq while maintaining other\ngood traits\num they have a very great education\nsystem with a lot more focus on\ncoordination and economics then we have\ntheir goal is to eventually build an\naligned super intelligence um but they\nare taking their sweet time in doing\nthis and expect to not really start for\na generation or so um and uh because\nthey are coordinating well enough to\nensure that\nthere will in fact not be um any kind of\nrace at all\nif you uh like uh there was a link\npreviously posted um to to the post\nwhere elias talks about the principles\nof credibility in this setting\nif you want to understand a bit more i\nrecommend going back\napproximately four posts\nyou can go back further if you are\nreally keen on reading i think almost 2\nmillion words\nbut\nthat's not really necessary the the\ncontext is uh mostly these the past four\nposts\nso even though they want to build an\naligned super intelligence they have\nanother plan and that is if something\nbad should happen really urgently they\nare thinking of building a critical\nthing\nif it becomes necessary to take drastic\naction then they can have a\nand they want to build a an ai that can\ndo something very very important and\ndifficult without killing them all\nand um\nin particular the uh pivotal act for for\nthem is things like stopping a comet\nrather than our\nproblem of stopping uh on aligned uh ati\nand that gives them some very different\nuh options that we don't have because a\nlot of our um pivotal acts are centered\naround coordination\nso this\nlarge project has developed 15\nprinciples of courageability and those\nare of course the center of this\ncompetition this is yorkowski's entry in\nthis uh in this competition\nthese are the two main characters in uh\nmad investor carriers and there is\n2169 words in this uh\nin this post\ncompared to the other 19 people who\nreplied this is\nsubstantially\nmore than double the\nthe second largest so it is kind of\nverbose\num\nthere is also some text around the\nprinciples um\ni'll get to that later and\nuh yeah\none thing we should notice is that it is\nquite possible for these people to have\nread each other's entries um and the\npost times are not really clear\ni couldn't find that on um\non mad investor chaos so like a lot of\npeople could have cheated\nand i have no way of knowing that and\ni'm just going to assume that no one\ncheated and looked at what other people\nhad written\nso this is framed as some kind of\ncompetition and it's a competition\nwithout any kind of judge\nperhaps eliza erkowski will judge it for\nhimself but no one will judge this\npublicly so i hereby announce myself as\nthe judge of this competition and the\nway i will do this is just to get some\nkind of sense of it i will count how\nmany principles have each person\nsuggested that's a really crude metric\nand i know that um\nin addition i will look at how many\nupvotes and less strong did people get\num of course elias atkowski didn't\npost on last round he posted it on\nglophic\nbut\nyeah\nand then i uh go through all the\nprinciples the 15 principles that elias\nyorkowski\nlisted and then\neverybody gets like one principle point\num if they mention anything related to\nthese 15 principles\num and i did this in a\nvery very\nuh\noptimistic sense in that even if people\nreally really uh wrote very little about\na principle i gave them full point and\ncounted that as a principle i think\ndefinitely if you wanted a more precise\nmeasure you should like on a scale from\nzero to one like how much do they\nactually write about this principle and\nmaybe also something about weight like\nsome principles are probably more\nimportant than others\num uh and then other people uh suggested\nsome principles that elias kowski didn't\nmention um and yeah um i also\ninterpreted that really really\ngenerously like if if people wrote a\nparagraph about just about anything\nrelated to courageability then i was\nwilling to call that a principle even\nthough most people didn't call that\nprinciples\nso i counted up painstakingly everything\nand then i uh\nthrew the tell it result away um and\nwent with my gut feeling that's a\nclassic rationalist um principle uh way\nof doing it\num so let's see what the here you can in\nfact see my uh spreadsheet where i\ncalculated everything and wrote down\nsome of the\nthe principles that people have i will\ngo through the results here in a more\nabbreviated form\nso i'll write them down ordered by how\nmany upvotes they got unless wrong\nyotkowski didn't\nso i just put him arbitrarily on the top\nhere and then there was ian colvaid with\nuh nine principles um elias kowski\nposted unless wrong that that was the\none to beat\npaul christiano wrote a post where he\ndidn't in fact give any principles for\ncredibility but talked about whether uh\nthat was a like a crisp\nconcept or not and kind of rejected the\nprinciple john wentworth 16\nsomeone called anonymous ai safety with\n11 laura langoska with 14 and yeah then\npeople going just down you can see\neveryone here\njust about the only person that i know\nfrom outside\nless wrong posts would be richard and go\nwith six principles\nso counting up these uh we get a total\nof 99 principles\num so\nwe that's quite a lot um and one of the\nkey things that i was looking for was\nsome kind of meter principle that helps\nus trade off these principles if we uh\nget uh like if we can't satisfy all of\nthem um and that was unfortunately one\nthing that no one\nno one had any principled idea of how to\ndo\nalso one thing that i was a bit angry\nwhen i read all this\nwas that unless wrong people are totally\nnot upvoting as much as they should like\nif you look through this uh this list\nlike the lower third of them have\nliterally zero upvotes except the auto\nupvote they get from by themselves and i\nthink definitely if you\nwrite down the principles of creativity\neven if you are not very skilled then\njust trying that to in my mind deserves\nway more about so i went and gave\neverybody on this list a strong upvote\nand i think it's uh i i think it must be\ndemoralizing for some of these people to\nspend the time and write\nmany hundreds of words uh in in these\nprinciples and then get precisely zero\nfeedback and not even an upvote\nbut that's not really related to this\nokay so who did in fact win this um i\nthink that elias yorkowski's list is\nsubstantially better than\nany individual list from the rest of the\nof the people who who\nparticipate in this competition\nthe uh the uh\nthe principles are\ninclude a lot of things that others are\ntalking about but in a way more detailed\nand way more uh\nuh powerful way and it's uh\nuh they ha\nmany of the other principles uh that\nelias erikowski did not have are written\nin a very superficial way uh i think\nelias yotkowski's list is certainly the\nbest but um\nhis pessimism like whether other people\ncould write something like that i would\nsay that is somehow uh falsified in\nparticular he um even hooping his idea\nof whether number two to five could do\nsomething better uh similar combined i\nthink that is in fact true if you uh\ndidn't have elias kowski's list but you\nhave had uh like the the next four\npeople's list um then i believe that in\ntotal would be of higher quality than\nyou'd cascus lists\nso\ni don't think uh there is a strong\nconclusion coming from this um i believe\nthat\nelias akaski's list is the best by some\nmargin but not a\ninsurmountable margin\nand now i will go through elise caskey's\n15 principles of credibility\nand\nthe way i will structure this is that i\nwill build some of the text when i bolt\nthe text it means that there is a sub\npoint here made by ilyasoykowski that\nnone of the other 19 people\nhad any mention of um\nalso uh i've covered pasted some of\nelias yorkowski's\nprinciples and in all of them he refers\nto not an agi but to a thing that you're\nbuilding um and um\nso whenever you see a thing then you\nshould think that it's the agi\nso the first principle is on personhood\nuh the thing shall not have qualia\nand has nothing to do with safety but\njust because it's morally wrong so it's\nnot really related to credibility in my\nmind\nand no one but elias yotkowski has this\nas a principle\nis this in fact a good principle\ni would say no i would say that my\nmorals\ndisagree here\nlet's say you build a super intelligence\nand it does have qualia\nand it does in fact suffer\nin even in that case if the\nagi is only doing a pivotal act and then\ncan be uh safely shut down or\ncompensated or made nicer in some way\nthen even very substantial um suffering\nlike in the one we who walk away from\nomales even something like that i think\nwould be certainly acceptable and i\nthink in fact consequentialist\nutilitarianism i don't think it is a\ntaskiest one\nbut i am and i think it is just about\nrequired and if i could do something\nthat would be really really uh aversive\nand be really tough on myself but would\nliterally save the world including\nmyself then i think it's\nmorally required to do so\nso i uh went to um\nsan francisco not just beca for this\nreason um and talked with eliezer\nyutkowski and presented my objection\ndirectly to elias utkowski and said i\nthink it's uh required to do so and\nelias kowski did not agree and he argued\nthat the suffering could possibly be\nreally really severe um and\nwe didn't make much progress i've kind\nof felt like\nwhen you're talking with someone who\nhave like a\ndifferent meta ethics it's kind of it\nsometimes feels like you're talking with\nan alien um and um we didn't get\nanywhere um\nbut i did in fact present that objection\nuh i think that\nthis is close to a an actual safety\nprinciple however in that indexical\npreferences like\nuh\nif you build a an agi then if you can uh\nseparate out the parts that re\nrefer to itself\nwith for things like qualia then i think\nthat is probably\na good thing to do as much as possible\nto reduce suffering uh sure but also\nlike reducing suffering from the adi\ndoes have substantial safety\nimplications\nand of course um the reason why this is\nuh\nuh my guess of why this is included in\nthe story in particular is that dathilan\nis a society a civilization that cares\nvery very much about morals and\nnot building an agent that suffers is\nsomething they strongly strongly um\nthey really really don't want to cause\nsuffering\nthe second principle is tuskishness\nand if you're building a like a limited\nagi with a limited task um then that's a\nlimit something that's hopefully limited\nin space doesn't\ntake too much time and\nand it's also probably something that\nyou should um limit how much probability\nyou want to be uh to actually obtain\nthis in the sense that doing something\nappeals will act with a probability of\n99.999\nis probably fine but if you try to uh\nhave the task bounded so it must be\n99.99\nand with enough nines you end up with a\nstrange um\nwith with strange solutions\nif you want to\nhave the probability of failure below\none in a trillion um so elias rutkowski\nis the only one who recognizes that the\ntuskishness must also refer to\nprobabilities\nand so of course this is something a\nprinciple that was stated in this way in\nin the actual con\ncompetition\num\nso that is like the task but there is\nalso the input to the task\nwhether that should be bounded any least\nyield casket\nthat should be bounded the knowledge and\neffort should be bounded um and\nagain the limited effort is also\nsomething that's unique to eliezer\nand the idea that this principle applies\nfractally at all levels of cognitive\nsubtasks so it's not just that the uh\nthe thing that it wants to do is um is\nbounded but everything that\nit is\nall subtasks are also uh\nuh bounded in the same way uh\nall the way down so as an example there\nare no while loops uh\nprogramming um structure that is\nthat can be open-ended but only limited\nenumerated loops\nand doesn't try to think of every\nsolution or member of category only at\nmost 10 and doesn't try to think as long\nas it can until finds a solution but\nonly in some bounded time\num\nso i think that is uh something that\ncertainly makes an ai more credible\neasier to uh\nto bound to limit\nit limits me to optimizes but i don't\nthink it would in fact stop it\nnext principle is mild optimization\nthis is quantalizers basically um like\num the air only seeks adequate solutions\nand stops looking once it has one um and\nit's not trying hard to optimize and get\nthis best solution for for anything\num except actually aliasing casting\nuniquely makes a an exception for this\nthere might in fact be some small areas\nuh where finding a um and the optimal\nthe best solution is in fact required um\nand he puts up some um\nsome requirements here for for when that\nis uh allowed\nand the reason of course is that if you\noptimize really hard for a solution you\nare going to like the variables that are\nnot in being optimized or are going to\nbe extreme\nto what extent will this work that's an\ninteresting question um i suspect in\nfact that adequate solutions uh\nonce if you stop once you have an\nadequate solution one of the first\nsolutions you may come up with is taking\nover the world and that's certainly\nadequate and\nmany pivotal acts are in fact really\nreally difficult and taking over the\nworld might be easier than that\nand\ni think\nquantization can in fact be a strong\nprinciple for credibility in the sense\nthat we have other methods to try to\navoid having the ai think of taking over\nthe world and when it's nothing of\ntaking over the world in that case some\nkind of quantization is probably\nsomething that contributes positively to\ncourageability\nhere is a principle that elijkowski has\nthat i haven't seen before and that no\none else seems to have presented before\ni mean probably there is someone who\nalso thought of this but i've never seen\nit before tightly bounded ranges of\nutility and lock probability\nso\nlet's say that you have a\nsome kind of action space\nand you are thinking of\ndifferent things that could happen and\nthen there are also some things that\ncould happen with a very low probability\nlike one in a million\nhow much\nshould you think about these very very\nrare cases well you should probably\nthink about them a little but mostly\nthey can be disregarded but it's\npossible that the ai will at some point\nuh come up with some strange part of the\nuh the action space that looks really\nreally malformed in some strange way and\na science\nuh value to this um and in that case um\nwe want to uh like this is what we want\nto avoid you\nthrow an exception or or just stop or\navoid it\nin a similar way the\nutility it gets from different uh\nfrom different kinds of solutions should\nnot very unbounded so it's not something\nlike um create as many paper clips as\npossible and you get one util for each\npaper clip it's more like\nyou can get between zero and one utility\nand you could just expect that a good\nsolution will get 99.99 or something uh\nso you're using the full range\nso you like cut off\nthe uh the really extreme solutions in\nthat way\num\nyeah\nand and the suggestion once you get\nsomething that comes\nthat differs from this then\nyou should throw an exception this is a\nthing in dathilan that is much more\ngeneral than in our societies\nthey have a principle system of\nexception handling on an organizational\nlevel\nso\nso for for the context of this story it\nmakes sense to just say throw an\nexception um\nfor real-life corridor uh projects to\nbuild a corruptible ai we will need some\nkind of principle way of handling\nand a lot of people in their principles\ntalked about how to handle this kind of\nwarnings or exceptions or errors um\nthere are a number of suggestions um\nsome of them actually directly opposed\nto each other um\ni uh i won't go into all the details\nthere are a number of ways you could do\nthis\num and alias zerkowski also says\nthat\nif you can't find a good solution\nwithout thinking of improbable events\nthen that is really worrying\nmy idea for how to handle this is\nsomewhat different\ni think\nonce\nthe ai tries to uh\nget a solution and fails in this way um\nthen um\nthen obviously some of the principles\nneed to be constrained and need to be\nrelaxed\nand we need some kind of principled way\nto do that like the symbolism is just to\nlike\ndrop the fewest number of principles\nthat's like way too simple i think we\ncan do a lot better but we need some way\nof some meter principle to handle this\nthen we get to low impact\nthat is a solution that doesn't change a\nlot of things\nthat are not\nalmost always changed with by\nnon-extreme\nsolutions of the task\nso i think this is a cute way of trying\nto formalize low impact by having just a\nbinary\nis a particular solution extreme yes or\nno and moving that into a detailed\nimpact measure i think that's an\ninteresting way of going about that\ni\nthink that works in just about every\ncase except pivotal x unfortunately\npivotal acts are\nlikely to be extreme in my in my view\nand unfortunately i don't think this is\ngoing to be sufficient to help us\nand elias\nuh\nwarns us that this is a lot easier to\nsay low impact than actually formalizing\nthis um\nthe text says it's easier to illusory\nname intel dane tildene is just\nthe common language think english\nand part of the reason why this is\ncomplicated is that the thing is\nlearning its own model of the\ncomplicated world with an ontology and\nrepresentation that you don't know when\nyou want to define impact\nof course trying to reduce impact as\nmuch as possible is another recipe for\ndisaster\nbut it uh\neliza caskey uses a an image i think\nit's from fantasia where you have an\nanimated broom that is uh\nfilling a cauldron\nand most ways of filling a cauldron will\nnot cause\nuh like the workshop to be flooded um\nand that means that in uh if the the\nanimated broom finds a way that involves\nthe\nuh the workshop being flooded then\nthat's probably not low impact\num so how uh\ncan we\nuh like get around the problem that we\nwe have a hard time formalizing this\nand i have an an idea a principle uh\nthat we um\nwe want a solution uh specified by\nhumans that\nis poor and on on some dimension\nand try to improve on that rather than\ntrying to create a prince a a\nreally new um\na new solution i've written here about\nhow you could do that for different uh\npill selects it's kind of like a\nhalf-baked idea for a principle\nthat we could substitute for low impact\nin case this seems to this is hot too\nhard to model\nnow we get to the principle of myopia\nand myopia the the standard way of\nthinking is just\ndon't think about long-term time\nhorizons and that's how most people\nthought about it eliza kowski has a more\ncomplex definition of myopia we break\nthe work into subtasks\nwork can be both planning and execution\nwhere each subtype\ntask takes limited time\nor in my view also\ninfluences a limited space\nand there is some kind of compact\ndescription of the final state of this\ntask so we can determine if the final\nstate is acceptable and then instead of\njust having one instance of the agi that\nlooks at the entire task we break it out\ninto different instances that don't look\nbeyond the result of their own subtask\nthis is nice if it's possible it might\nvery well not be possible from a lot of\npractical cases\ni think\nat least a limited\nprinciple of myopia is required in the\nsense that we want the entire task to\nsatisfy this criteria\nand i think\nif the the entire task does not satisfy\nthis criteria then uh\nwe um\nyeah that that that would be a very\nproblematic thing if uh working at the\ntask\ncould not be a done in a myopic way\nseparate superior questionnaires\nquestions\nso\nthe question if you have in particular\nif you have something that is myopic uh\nyou might have a a separate a instance\nof a separate ai that looks for\nlong-range impacts\nand\nthis is something that a lot of people\nhave thought about how to get this kind\nof ai help in\ngetting questions uh ai safety via\ndebate was one of the the options um and\nthis separate questionnaire should\nreturn just one bit um and not be used\nby the agi to try to uh\nbuild solutions so it shouldn't be like\nit's optimizing against and finding the\nthe best solution that the long range\nquestioner um\ncan accept\nit needs to be stronger\nthe the ati\nthe one that checks whether a solution\nis safe\nhave more cognitive power one of the\nways would of course be to um\nrelax some of the principles like myopia\nwe would probably want that the uh\nthe questioning uh ai is less myopic\nthan the uh ai that builds solutions\nand an interesting idea here is that\nthis should be a\nnot just a separate instance but but an\nentirely different uh algorithm and\narchitecture\nand i think that's interesting and i\nthink\na very hot take is that if we ever find\na\nan ai that seems like it could be\ncorrectable then large language models\nmay indeed be something that could\nfulfill the role as a separate theory\nquestioner in that it is\nthey the large language models are very\npowerful and\nof course will have a very different\narchitecture um but\nit's kind of a hard take i don't think\nuh language models are likely to be uh\npart of the solution\nconservatism\nif you're searching for a solution\nchoose an unsurprising one relative to\nyour probability distribution\nit doesn't really sound like\nconservatism in the way i think about\nconservatism uh like unsurprisingness\num\nuh conservatism uh to my mind have\ntwo parts if you look at a\nthe full task or more likely a subtask\nthen there is conservatism in means and\nin goals and\num both of these are uh important but i\nthink the principles uh and the uh uh\nthe\nhow\ndifferent they are uh how they should be\ntreated from a safety point of view is\nquite different in that very strange\nmeans are more acceptable than very\nstrange goals\num\nand of course pivotal acts in particular\nwe would want to maximize some kind of\nconservatism in this case we\nwe can't probably get a lot of\nconservatism\nbut uh but this is uh\nuh like a surprising pivotal act sounds\nlike something that is almost certainly\ngoing to be not nice\nand\npart of elias kowski's insight in this\nprinciple is again this is something\nthat applies fractionally to cognitive\nsubtasks\nand of course also\nagain as before we're not trying to\nchoose the least surprising one because\nagain every time we are optimizing\nsomething that we don't strictly really\nreally strictly need to optimize we\nintroduce a lot of potential for very\nweird behavior\nand and weird behavior is in fact the\nkey thing in that we're trying to do\nsomething that is weird people's likes\nare strange by definition and\nif the solution is weird then um\nwe need to kind of manage the weirdness\nin the sense that we need we will get\nsomething that is weird but it can't be\ntoo weird\nconceptual legibility is also something\na lot of people talked about like we're\ntalking about interpretability here\nand elias yokosuka's formulation is the\nsolutions at all levels of cognitive\nsubtasks should have short descriptions\nin the conceptual language of the\noperators\nthis is somewhat different from the uh\nthe standard interpretability uh\nwhich is\nwritten here as um obviously dangerous\nuh thing\nreportability which is something that\nsome of the other comments suggested\nthat the ai tries to explain somehow\nit's planned to humans uh\neven though\nthe the plan itself contains some\nstructure that humans can't understand\nin some sense can natively understand\num\ni think when i think about this kind of\ninterpretability work i think about the\nthe the analogy of a um of a human who\nis like an alchemist\nand a chemist ai that is trying to\nexplain something to the human and uh in\nthis case\nthe short descriptions in the conceptual\nlanguage of the operators may just not\nexist like the the alchemist is thinking\nof things like the four humors or\nsomething like that and the uh chemist\nis talking about uh like atoms and\nmolecules that that aren't just simply\nnot present in the in the mind of the\nhuman so um i think\nthat this kind of conceptual eligibility\nis too high a barrier i don't think in\nfact an ai that is capable of performing\na pivotal act will be able to have\nconceptual eligibility i think we will\nneed to go for something\nless ambitious and to my mind that is\nsomething like strategic eligibility in\nthat a lot of the concepts may not be\nlegible and we'll just have to accept\nthat but the strategically relevant\nconcepts should be legible\nthose are the ones we really can't\nafford to compromise on\noperator looping\num\nwell\nhow much should you keep a main in the\nloop\na lot of people say like\na lot as much as possible um without\nreally recognizing that this is in fact\nfundamentally impossible because as\nelias rykowski puts it if the operators\ncould do the job then there wouldn't be\nany need for the\nai we only want it to do a pivotal act\nthat we can't do ourself\num\nso\nit is however still possible that there\nis a part that the operators can do\nthemselves\nwithout\nuh the agi down too much in that case it\nshould obviously be done\nso doing the thing is quite different\nfrom um\nwhat's called oversight which is a\nas a related principle that a lot of\nother people have picked up on\nwhether it is\nto my mind i would expect that verifying\nis in fact substantially easier than uh\nthan doing the tasks themselves but of\ncourse this depends on whether the\nsubtest is in fact legible to humans um\nwhich might very well not be\num\nyeah\nand of course you even this requires\nthat there is no adversarial um\nsense in the ai if it tries to\npush some of the work done to humans uh\nthen that requires that the ai is not\nreally adversarial at this point\nagain to my mind this is something we\nshould exclusively do in strategic\nmatters and not in some technical things\nlike trying to build nanotechnology or\nthings like that\nwhitelisting is an interesting uh\nconcept that i must state up front i did\nnot in fact understand and the big\nproblem is that there are really no\nexamples here is how elia sirikowski\ndefines whitelisting\nevery part of the system that draws a\nboundary inside the internal system or\nexternal world should operate on a\nprinciple of ruling things in rather\nthan ruling things out\nso uh on like the object level if you\njust literally do that it seems like\nthat will be really really expensive and\nuncompetitive um like for instance if\nyou take the classic example of the\ncomplexity of values that's something\nthat you really can't capture human\nvalues uh using whitelisting so i think\nthat is probably not going to work but\nthis like don't take my comments with\ntomb\nuh\ntoo too strongly because i don't really\nunderstand what elias yukowski is\ngetting at i think this could be related\nto\nperhaps the heart problem of courage\nability and i think it could in fact be\nreally important i would like to know\nwhat elijkowski is thinking of and he\nclaims that there are a number of\nprinciples he have not written up and i\nthink he should totally do that because\ni'd be interested in reading his\nthoughts on this\nshutdown ability or affordability um\nso a lot of people uh\nnoted that there is this creating a\nan a agi that has a shop\na shutdown button but that is not it\ndoes not mo is not motivated to make\nthis button be pressed is a substantial\nproblem elle sierkowski uh points to\nsome research\nseven years old uh summary that says\nthat actually doing this is really hard\nand miriam has tried to formalize this\nand somewhat failed um\nand he points out\nthat the principle is one that needs to\napply uh\nalmost fractally in the sense that if\nthe ai builds something it has a sub\nagent perhaps\nsome kind of mechanism then that also\nneeds to have an\noff switch that you can press that turns\neverything off and not just the sub\nagent and gets a low impact after that\num\nagain with with the analogy of the\nanimated broom\num so how hard is this in fact i\nremember stuart armstrong doing work\nthat was later than this link\nand i remember seeing something like\nit's a long time ago unfortunately so i\ncan't quite remember but i do remember\nhim getting somewhere with this i don't\nthink it's fair to say that it was just\ntotally impossible\num and certainly that it's it's true\nthat it's not no challenge at all\nso before we get to the next principle\ni'll just talk a bit oh i'll let elias\nyorkowski talk a bit about modeling\nadversaries because that is something\nthat can indeed be um problematic and\nsomething that i think we should have\nsome kind of\nvery strongly principled uh way around\nbecause i can foresee a lot of different\nproblems coming out from this kind of\nthing and i think in particular one\nthing we might want to do is to just\nhave the\ncorrugal ai be unable to\nmodel or think about other ais in\ngeneral\nprimarily to avoid a collusion which i\nthink is likely to be a very real\nproblem in practice um where there's not\njust going to be one corridable ai but\nmany um ilius kowski is more talking\nabout things like the ai considering\nwhether it's in fact inside a box and\nwhether other adversarial minds\nexist\ni won't go into a lot of details about\nhis theories i think they are totally\nvalid but i also think they are um\nwe need some kind of strong super class\nto uh to get around this problem rather\nthan uh trying to rule out the first 10\nthings we can find in the space\nthis kind of modelling adversaries is\navoided by a principle that elijah calls\nbehaviorism\nand the\nthe way the thing that he calls\nbehaviorism is basically not modeling\nother minds at all\num that's of course a super set that\nfixes this problem i'm not entirely sure\ni like the word behaviorism for this i\ndon't think behaviorism means don't\nmodel minds uh i think the the technical\nmeaning is quite different um like uh\nthings like whether people have like\nfree will and personalities and things\nlike that is again different from the\nstrategic considerations\nin my mind\ni think there are a number of good\nwreaths\nin to not modern minds at all both ai\nand humans\ni think that is in fact a nice thing and\nthat is uh\nthis is elias kowski's argument for why\nthe a\nthe uh corrugal ai should not model\nmines at all um i think it's um\nprobably too large not modeling mines at\nall rules out a very large number of\npivotal acts as far as i can see it uh\nand it certainly makes it uh like\nuncompetitive\nyeah\nso uh\nthe\none of them would be uh\nforming some kind of positive singleton\nlike if you could uh\nuh have a um a pivotal act that involves\na world government that is benign and\ntakes ai risk seriously\nthat would be an example of a pivotal\nact\nwhere\nconvincing people to\ntake\nto set up a world government a benign\nworld government will almost certainly\nrequire\nmodeling the minds of politicians and\nvoters\nand organizations\nnot in the minds of organizations but\npoliticians and voters have minds and\nyou cannot in any feasible way convince\npeople to set up a\na government that bans a unaligned agi\nwithout modeling their minds\ni think many other\nactions in fact also like um a classical\nexample is using nanotechnology to um\nto melt all gpus\nis an example of another pivotal act and\ni think in order to do that in the real\nworld where there are\nother people working uh against you\nstrategically you need to model the\nother people who are working against you\nand you need to um like think what would\nbe the uh\nthe reaction of the american president\nif we suddenly melt all gpus i don't\nthink you can get around that in any\nmeaningful way\ni hope that answered the question\ni think a uh totally\nsingle-minded engineer that is super\ngood at nanotechnology cannot in fact\ntake over the world you need someone to\nput that into a strategic\nframe and um\nto make that into a real plan and real\nplans in general require\nthinking about what would other people\nreact to my plans\nyeah uh\nyeah i i think uh like um it does uh\nlike strong behaviorism defined in this\nway uh like not modeling minds at all\ndoes seem like something that\nwould really make it hard for an agi to\ntake over the world\nthere it needs to be a very strong power\nimbalance to have any agi that's so\npowerful and can take over the world\nwithout modeling the humans and how they\nwould resist\ni think it's not totally impossible but\nthat that is a very strong\nencourageable\nelement\nso\nthis science-based anti-optimization\nseparation\nlike can you guess just from the title\nwhat that means\nwell it refers to the\nwell-known thing that if you uh\nput somehow put your utility function\ninto the ai well then if you make a um a\nsign error then suddenly it's not\nmaximizing your utility function but\nminimizing that or you could get a\ncosmic ray or something like that\nand you could get blackmailed in some\nway um at least because you have some\nfunny uh are not funny ways this could\nhappen um i think this is not\nthat related to the science-based\nanti-optimization separation in fact in\nthe sense that um\nif you\nno matter what the um the ati has as a\nutility function it would be possible to\nfor an adversary to just model that and\nand threaten to minimize that\num\nand\nin fact if we are trying to have a\ntaskish\nagi that is just trying to do a pivotal\nact and then shut down afterwards then\nit doesn't actually need to have a\na model of our utility function like\nit's it's not supposed to\nmaximize and change the universe in\naccording to our values it's just\nsupposed to do one simple thing and then\nshut down\num but\na lot of courageability uh suggestions\nby the other people uh\nwho reply to this threat suggests that\nhaving some kind of reference to human\nvalues to use as\num some kind of sentence check\nis valuable uh and i can i can see it\ncould be valuable and in this case we\nhave two um principles that are directly\nopposed to each other and again we have\nno principle way of\nof\nweighing these two things against each\nother\nthere's a principle of epistemic white\nlisting or domaining\nlike\nthinking about separate domains\nseparately um\nand not across domains because cross\ndomains is part of the place where a lot\nof\nthe india comes from if you are thinking\nabout like how\nthe operators\nthink about if i\nobtain this knowledge instead of this\nknowledge when you're trying to do\nsomething really specific that seems\nlike something that's almost always\ngoing to go wrong\nthis is close to\neric drexler's comprehensive ai services\nidea\nwhere there are\nwholly separate ais in each domain in\neach silos that are narrow\nmy expectation certainly from my\nobjection to eric draxler was that if\nyou want to make it strongly narrow and\nreally\nhave the ais you need to be unable to do\nany kind of cross-domain reasoning then\num\nstrongly narrow ai eyes are almost\ncertainly going to be strongly\nuncompetitive um\nor if you want to like um\nrelax this even just a little then\nyou're going to get information from one\ndomain onto the other\nvery easily it's really really hard to\nkeep\nknowledge and silos away from a an ai\nthat's more intelligent than you it can\nfigure out the truth in\nbased on really small evidence which i\nthink is actually a\nmajor point in mad investor chaos\nso\nthis my objection to this isn't one that\nis um\nuh at odds with what elijkowski is\nwriting but i think in fact that a\nprinciple of competitiveness\nis required for how to build a collegial\nagi in the sense that if you try to\nbuild an agi\nto solve some kind of task\nbut the performance is too weak because\nyou are\nputting too many constraints on it\nthen at some point it will just\nbe too slow and and the comet will hit\nthe earth anyway so you need to have\nsome kind of\nperformance minimum\nfor from the solution and if you're not\nreaching that performance minimum then\nyou need to start doing for relaxing for\ninstance the epistemic white listing\ncriteria\nand of course for our case it's a lot\nworse than for uh\ndusty lens because we are\nuh probably going to be in some kind of\na strategic race between a critical agi\nand a potential unaligned ai\nfinally we get to the last principle\nprinciple number five fifteen the heart\nproblem of corrugality or anapartistic\nreasoning\nlike i tried to figure out what does\nanap artistic reasoning\nmean and like i could see that\nelijkowski had used this word and just\nabout no one else had used this word and\ni couldn't figure out the etymology i\nreally don't know what anap artistic\nmeans i would really like to know that\nbecause the heart problem of\ncorrectability is interesting it is uh\nlike um if i was uh\nlike doing youtube videos for engagement\ni would have like a thumbnail with me\nsaying ah\none simple uh trick to solve ai\nalignment um and this an artistic\nreasoning is in fact one simple trick to\nsolve\nthe alignment problem and this is\nfiguring out how to make the ai want to\nbe courageable and if there is some part\nof corrugability that we haven't figured\nout then you want the ai to figure that\nout\nimagine what hypothetical operators\nwould want if they were building an ati\nthat thought faster than and whose\nthought were hard for themselves to\ncomprehend\nand uh figuring out how can you make\nyourself more courageable and\ndo that in a way that even though it\nseems like it is helpful it is\nsurprising and thus not helpful um\none way i've seen this uh stated is that\nyou want the ai to um\nits\ninner mental model should reflect the\nexternal model that the programmers have\nso from the programmers point of view\nthe ai is um\nhas values that are almost certainly\nwrong and we want this to be reflected\nin the ai in some uh deep sense\nand elias kowski claims that in dothi\nland people\nwouldn't try to build this into a a\nthing like they would almost certainly\nbuild it into a full super intelligence\nif they ever got around to to make one\nof those but the uh the assumption in in\nthis uh corrigibility uh principles is\nthat you are under a very severe time\npressure in order to build this and this\nis a principle that is too deep meter\nand elegant and hard to pin down\nso you wouldn't try to actually build\nthis into something if you are unless\nyou have a lot of time and a really\nstrong understanding\nuh i think elias claims that no one\nwould consider doing this\nand\nactually and i think something like five\npeople wrote\nin their comments some kind of\nsuggestion to that\nuh a principle that is related to the\nheart problem of correct ability um so\num\nhow hard this in fact is is an\ninteresting question uh i think elias\nis perhaps a bit negative about how\ndifficult this principle is um five\npeople suggested we should build it but\nuh none of them had like any\nreal idea about how to go about this uh\nso\nit's possible indeed that elias\nyorkowski's principles are\nmore realistic in the sense that he is\nuh\nsaying this is hard and he has done in\nfact a substantial amount of work in\ntrying to figure out how to do this and\nthen a number of people saying we should\ntry to do that without um looking into\nhow would you actually go about this\ni think in this case um\ni would trust way more elias koski\nbecause he has tried hard to\nto to solve this problem\neven though he has failed um and that is\nwhy i think\nthe elias yorkowski's 15 principles for\ncritical adi is the best one in in this\ncompetition\nthat is all for today thank you and see\nyou next time", "date_published": "2022-08-12T08:11:37Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "42174c0074d4f3ccacc40b63a48f70e6", "title": "248 Eliciting Latent Knowledge 2", "url": "https://www.youtube.com/watch?v=hAKMMdapqWc", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 248\nin the ai safety.com reading group\ntonight we'll be continuing with\neliciting latent knowledge how to tell\nif your eyes deceive you\nthis is still the first work of the\nalignment research center by paul\ncruziano ayakotra and maksu\nand we are at this part two probably out\nof four\ni'll start with a brief summary of what\nwe talked about in the previous session\nthis is the smart mould a vault designed\nto\nkeep a diamond safe which is uh\ncontrolled by a number of\nactuators that are controlled in turn by\na an ai that we have trained for the\npurpose of keeping the diamond safe\nwe as humans can only look at the\ndiamond through some sensors and there\nare potential evil robbers that want to\nsteal this diamond from us\nwe can't\nunderstand\nthe small smart world entirely but we\ncan look at the observations and then\ngive adjustment\nabout whether it's good or bad and\nwhether we can see the diamond\nand the problem is that in this case\neverything is good the the diamond is in\nfact in this place but there are also\nattacks that\ncheat the sensors in a way so that we\nsee\nthat diamond in this place but actually\nthe diamond has been stolen\nthe way we want to\navoid this problem is since the\nknowledge the diamond has been stolen is\npredicted by the ai the ai must have the\nknowledge somewhere inside itself um and\nthis part of the ai\nthat controls the smart vault that knows\nwhat is going on that's the part we want\nto hook into in order to get um\nsome kind of uh reporter that we can\nlater ask questions like is the diamond\nreally there instead of just looking at\nthe video feed that might be corrupted\nwe imagine that the ai is internally\nusing some kind of bayesian network\nwith\nthe actions that it's taking the input\nvideo sequence and the output video\nsequence\nand then uh the actions that it's taking\num\nyeah down here\nand then somewhere inside this network\nis the knowledge that the diamond has in\nfact been stolen since if\nwe're in the case where the diamond has\nbeen stolen and we're predicting that\nwe'll see this but the diamond has been\nstolen\num\nthat came out wrong\nlike this knowledge here that the\ndiamond is being stolen and we're seeing\na um\na false image since that the false image\nis the one that's predicted this must\nthis knowledge must be present in some\nof these notes\nnow a human analyzing this\nissue will not really understand what\nthis complex action sequence does and so\nthe human when\nlooking at this\nhas a model inside the brain that says\ndoes the the brother take the diamond\nand this part will answer no\nthe diamond is still there because the\nhumans network doesn't contain the\nknowledge that the sensors are being\nfooled\nwhat we would really like is some kind\nof honest reporter a direct reporter\nthat takes the knowledge that the ai has\nand transforms it somehow into the same\nnotes that the humans are using\nso the humans can understand it so the\nhumans if asked\nthis is the part of the human\nnetwork that deals with where is the\nactual diamond\nand in this case does the robot take the\ndiamond\nthe answer will be yes because this\nknowledge that the screen up here is\nshowing something false that was present\nsomewhere in here and that means that\nthis\ndirect reporter\nmoves the information into the\nhuman\nunderstandable patient network and\nallows the human to conclude that in\nthis case yes the rubber has indeed\ntaken the diamond\nthe problem is that if we start to train\nuh this\nyou know in\nin the ai in in the simple way where\nwe're just trying to uh look at the\ncases where the human knows what is\ngoing on the human can distinguish well\nif we're looking at what the humans know\nthen the ai has the option to simulate\nthe human and ask its simulation what is\nthe status of the diamond\nso this is called the human simulator\nwhere the ai has its own network and it\nalso in addition instantiates a network\nwith what does the um\nthe human know\nand in this case the reporter will then\nask this\nthis part of the network which knows the\nsame thing as the original human does\nand so doesn't know that this\noutput here is indeed something that has\nbeen faked\nso that is the uh the problem with the\nmost naive way to try to keep the\ndiamond safe\nbefore we look into some more complex\nways uh the the authors have a\ndigression where they talk about the\nresearch methodology and why this seems\nhow this kind of argument is structured\non a meter level so let's dive into this\nfirst we'll notice that the argument is\nfor the worst case in the sense the\nin particular in the sense of high\ncapabilities\num and why do they do that well if there\nis a case where it fails then they want\nto continue because most likely as the\nai gets more and more capable it will\njust fail worse and worse\nand\nby\nhaving these um\nby quickly being able to find holes in\nprospective solutions they hope to be\nable to israel\num\nin particular because many of these\nstrategies that many naive or i would\nsay most naive alignment strategies fail\nfor ai that is that is substantially\nmore smart than humans and if these uh\nstrategies fail then we have literally\nno time to react and that's why the\nfocus on the worst case will find fewer\nsolutions but hopefully some that don't\nfail catastrophically and immediately\nonce we get ar that is smarter than\nhuman\none thing that the research methodology\ndoesn't really make entirely clear is\nto what extent the worst case just means\num very high levels of capability we saw\nwith the previous example where the\nworst case was kind of more that the um\nthe ai found a human simulator that was\nreally bad rather than having a\nparticular high\nlevel of capability so those two things\nare almost the same but not quite the\nsame and the um the article doesn't\nreally get into this at least in the\npart we have seen\nso let's zoom out a a bit and see this\nmethodology as a game between two\nplayers a builder and a breaker\nso it starts with a builder proposing a\ntraining strategy in the first simple\nexample the training strategy was to\njust look at the places where humans can\ntell if the diamond was safe or not and\nand then hope that generalizes\nuh that is passed to the breaker who\ngives uh a\npredictor and a world a world in this\ncase is like something that would be um\na scenario where this would go wrong i\ndon't think this is used in any of the\ncases we saw either uh we will see today\nor so last time but in this case the\npredictor is one um where which makes um\npredictions with a different base in\nthat than than the humans\nthe next step of the game is that the\nbuilders\nspecifies what kind of reporter he would\nlike and in this case we want something\nthat uh translates from the predicted\nspace net to the human space net\nand\nunfortunately the breaker says\nthat the\nreporter that\nwill be trained using this training\nstrategy is\nit could be one that is doing inference\non the human base net instead of\nthe desired reporter\nafter this there should be a dialogue\nabout possibility the uh\nlike the translation must exist the\nbuilder can say since it has it can map\nto reality like english seems like a\nreasonably intermediate\num but uh the player can argue that\nsince it's the humans who make this the\nanswers and they can potentially make\nerrors and things like that then um that\nis in fact the one that will achieve the\nlowest loss and that's why you're gonna\nget the bad reporter and not the desired\nreporter\num\ninformal steps\nso the idea also with this chain\nstrategy is that we start with a very\nhigh level idea about uh what we will um\nuh\nfind a counter example for using this\ngame and then if we can't find something\nthat really totally defeats\na training strategy in all cases we'll\ntry to dig in a bit\nand specify more things about the\ntraining strategy and see as it becomes\nmore specified if there are any problems\nwith this and um\nwe probably will won't really get into\nthe part where it's totally specified\nbut the idea is that we want to to have\na counter example on a very broad level\nand then if we can't find that we get a\nsmaller and smaller and smaller one and\nhopefully the end is that we have one\nthat is so small that we can't find any\nerrors in it\nin some kind of in one training strategy\nand at that point we want to you know\ntest it in reality see if it works\nwhat counts as a counter exam\nfor eliciting latent knowledge\num\nthey describe this as a narrow existing\nlatent knowledge i don't know if there\nwill be something more uh general later\nbut this is\nnot a thing like\nuh\nlike when we're talking about stealing\nthe diamonds we're not trying to do\nsomething philosophically complex um\nlike replacing the diamonds the athens\nand diamond slowly or this kind of thing\nwe're just talking about something\nthat's really unambiguous there are\nthree\num requirements in total\num oh yeah um and this is actually\nsomething that could potentially be\nproblematic um some work done\non things like under indescribable hell\nworlds seems like something that could\npotentially be a problem for this this\nan ambulation\num\nthe three requirements are that the\nmodel must understand the question the\nmodel's answer is unambiguously wrong\nand the model knows that the answer is\nwrong\nand the example of the human simulator\nsatisfies all three of these\nso\nnow we'll look at three training\nstrategies that hopefully will do better\nthan the naive one we did\nand hopefully we'll leave the diamond in\na better\nposition than this screenshot from\ncomputer game\nso the first question is the question\nthat is motivating all these three\nstrategies is we want to have a better\ndata set because what we\nreally want is the correct data set is\nthe diamond safe or is it not and we\nstarted out with something that was\nreally poor so i think does this look\ncorrect to a human and we want to get\nsomething better\num\nbecause the obvious problem is\nif you're looking at something where the\nhumans can tell what's going on and the\nai is much smarter then you're really\nonly looking at a very very small set of\nuh possible action c\nsequences this is the case of all the\nthe space of all action sequences and we\nassume kind of that the the space that\nthe ai can suggest is like almost as big\nas this right\nand then we have a very small area where\nthe humans can tell that the diamond is\nsafe and are very small where you can\ntell that the diamond is not safe and we\nwant to expand these we would prefer\nthat it covers everything but\nwe'll see how far we can go\nand\nof course since um\nyeah the human simulator the reason it\nwas feasible um is because we were just\nlooking at where what can the human tell\num\nbut if we had just another\nkind of small advantage like imagine an\noracle that once you know 100 gives us\nthe correct answer then that would be\nprobably enough so that the last uh\nfunction the loss would be better at uh\nthe direct reporter right rather than\nthe human simulator so we want some kind\nof advantage\num\nand in the next um in the rest of this\nsession we'll be discussing three ways\nto get a bit more of\nadvantage the first strategy is not just\nhaving the human um observe the results\nbut having the human actually operate\nthe smartphone\nso\nin in this basic scenario it was of\ncourse the ai that operated the smart\nwalls and if we have another human that\nis somehow smarter\nthen it's possible that\nthat human can make something that um\nthat fools the the observer like you\nimagine that there are\na large number of possible\nvulnerabilities and then the um\nthe\noperator looks at uh tries to figure out\nwhat kind of vulnerabilities are\nfeasible and then the uh the observer\nwill later think about what uh\nvulnerabilities are possible and if they\ndon't match\num then there's a good chance the\noperator has found a vulnerability that\nthe observer cannot easily find\nand if we can get this kind of\nsituations then we can label more\nsituations than just what the human\nuh\nthe human observer could evaluate\nyou can see it here in the picture we're\nnow\nlabeling as a larger green area and a\nlarger red area\nand one of the things that they don't\nsay\nis that um\naction sequences\nthe kind of action sequences that we're\nlooking at are explicitly the ones that\nare designed to fool so you imagine the\nscreen put in front of the camera uh and\nthat's of course the kind of things that\nwe're really really interested in so\nit's not just that we are looking at a\nlarger set of\npowerful space but we are also looking\nat a far more interesting part of the\nspace\nso\nhow will this help well um\nin this case just if we try to learn if\nif the ai tries to learn the um the\nhuman simulator then if if the knowledge\nof this exploit that's being used is in\nthe operator snip and not in the\nof service net then making uh\nlike the information is not present in\nthe observers so it will have to learn\nthe uh\nuh the direct report\nand we assume\nthat\nthis works because as i said before that\num\nfinding exploits and ways to trick this\ncamera is probably a lot easier than for\nuh\nfor the observer to determine whether\nthe camera is working correctly\nso in this case we have the same network\nas we saw before but in this case the\nattacker is thinking really really hard\nabout something\nlike how can you put up a screen um\nand then picking uh some action that\nincludes uh um\nputting on putting on the screen and\nin this case once the uh the human\nsimulator tries to think about this then\nthe human simulator has instantiated the\nobservers\nnetwork and not the operators network\nand inside this there\nthey the observer hasn't thought\nespecially how about this node so when\nwhen you query this area\ndoes the robber take the diamond then it\nwill just say no the ai because the\nobserver doesn't know this but in fact\nthe operator does know this\nbecause he has thought really hard about\nit so he can\nhe can in this way train the ai to give\nbetter answers\num so in this case the uh the human\nsimulator will perform worse than the\ndirect translator and um hopefully that\nwill mean that the ai will then not\nlearn the human simulator but we'll\nlearn the diagrams later\nthere's some speculation about what will\nhappen if the human outsmarts the ai\nin this case i believe it will be a tie\nthat's not really important because what\nwe care about of course is the case\nwhere the ai is smarter than the human\nso that was a strategy but unfortunately\nthere is a counter example and the\ncountry example is that the ai has\nanother option that will\nallow it to get a lower loss by having\nthe human simulator and that is by doing\nbetter inference in the human base net\nso\nif we assume that the human observer and\nthe human operator have the same\nbayesian network that kind of seems like\na reasonable assumption that there is no\nreally big fundamental difference\nbetween\nthese two in particular\nif the operator explained the\nthe plans to the observer then he could\nfollow how it has how that worked um in\nthis case\nthe uh the advantage of thinking hard\nabout uh\npart of the network disappears because\nthe um\nuh the reporter um the um\num the ai if it does uh\nperfect\nlike uh uh maximal human that's my word\ninference so it thinks about all the\nnodes in the bayesian network as much as\na human that thought for as long as he\ncould would do\nin this case the the advantage of the ai\num that\nof the operator who thought really hard\nabout a small part disappears because\nthe ai can just\nthink hard about everything that the uh\noperator could think about\nand in this case then the reporter would\nin fact be able to answer the training\nquestion correctly by just having a\nsmarter human simulator um and in this\ncase\nthat would answer correctly if\neverywhere the operator could also\nunderstand it and it would also run in\nthe other cases\nnext\nwe have uh\nthis\nthis\nthese images here were for the um the\nsimple things that humans could\nunderstand and this is for the action\nsequences uh performed by the smart\nvault that humans can't understand and\nthe way we would imagine we could\nget something better would be to have\nthe ai help the human figure out what\ndoes this action sequence actually do\num\nso uh the uh the limitations of the\nhuman base network is probably um\nsomething that can be\npushed against by\nbasically education uh like humans have\nindividual individual education where we\njust look at arguments and things like\nthat and humanity as a whole\ncan also uh\nbe improved we can think about doing\nexperiments and things like that um\nthis is uh something that paul\nchristiano calls doing science\nand i'm entirely sure i think that's a\ngreat um a great way to put it um\nbecause um\nbut we when you think about doing\nscience you think about\nyeah formulating hypotheses and running\nexperiments and such things but really\nwe are looking at a very very specific\nsmall part of reality which is the smart\ngoals\nand i think doing science sounds like\nsomething that is way bigger than uh\nwhat we're actually talking about\nand traditional science of course we\nshould expect the ai will uh be much\nmore\nefficient in particular because\num the uh\nwhen you talk about traditional science\nyou're generally not talking about uh\nsomething that is looking into one\nextremely specific security arrangement\nmost of traditional science is working\non with you know something else\nai assisted science a feasible thing\nwell\npossibly we could imagine\nways to do this um\ni think in general uh\nas ai becomes smarter\nenglish and the way human brains work i\nmean human brains are probably optimized\nfor something like\nsurvival thriving on the savannah\nand that's unlikely to be um\nthe same kind of concepts and structures\nthat an optimal bayesian agent would use\num\nso uh this creates a new problem\nunfortunately\nthat is that the\nthe science assistance ai could be\nmisaligned\nand that's a substantial problem and\nwork has been done on this\num and the general way we try to make\nai assistive science safer is by having\ntasks that can be broken down into\nsomething that humans can test there are\nuh at least three ways to do this being\nunder 50 by a debate it's rated\namplification and distillation and\nrecursive reward modeling\nand uh\nwhether these work is a very open\nquestion but for the sake of this\nargument we are just going to assume\nthat this is something that can be done\nit is possible to have a nai assist us\nin understanding this without\nany alignment problems and\nwe will show that even in this\nit will be insufficient the air will\nstill be able to outsmart\nuh us here we see an example of how this\ntraining could work the ai which has a\nvery uh\nsophisticated model of the of the world\nuh points out some security\nvulnerabilities that that humans then\nadd to their own network so then the\nhumans know a bit more and hopefully\nthey'll be able to um\nto spot some uh\nuh\nuh some cases where there is um\nwhere the diamond is being stolen\nso now we are looking at potentially a\nan even larger area where we can uh\ntell where where an operator can tell\nthat diamond is being stolen\nwith the help of an ai\ni\ni would\nquestion whether the green area would in\nfact be this large because telling that\nthe diamond is safe in general\nis asymmetric from telling that it's not\nsafe\nit's usually a lot easier to see that\nthere is a problem uh some kind of\nsecurity vulnerability in something\ncompared to saying seeing proving that\nthere is no vulnerability in some kind\nof setup\nokay let's go back uh and see how this\nai teacher willa is\nwas still fall for the same problem\nand\nwhich was a maximum inference\nin the human beings\nand here i'm making a description that\ndoesn't follow the paper very closely\nso all errors are mine of course\nso previously we used just the observers\nneed to answer questions and now the\nhuman operator\nhe had before the advantage of just\nthinking heart and now he will also\nlearn these perfect vulnerabilities from\nthe ai will that help him\nwell if we uh make some kind of venn\ndiagram here's the original instrument\nobserve what he can think of and the\noriginal\nthinking thought about specific things\nand now we have an enhanced human\noperator\nby the human simulator is doing\ninsurance into original human observers\nneural network and by doing ideal\ninference he can the the human simulator\ncan expand to also cover the original\nhuman operator but he can't necessarily\nget to the enhanced showman operator so\nthe enhanced children operator that\nknows about these kind of security\nvulnerabilities can make plans that the\nunenhanced could never think about and\nin this case the human simulator would\nbe unable to think about these\nso the human simulator would um\nwould perform poorly whereas a direct\nreporter if we're writing that then the\ndiet reporter obviously could think of\nthe same\nthings as an enhanced human operator\nwould be capable of\nand that hopefully should privilege the\ndirect reporter and cause us to learn\nthat and not the human simulator\nunfortunately there is a new counter\nexample and that is that breaking\ndescent is probably more efficient than\nscience\num\ngradient descent is the algorithm used\nby the ai reporter to predict\nwhat would be going on in the training\nset and\nthe uh\nthe teacher ai the ai assisting in\nscience is expanding the human base in\nthat one\nthese are two very different uh\nprocesses and probably they take\nnot the same amount of time in\nparticular we would expect that teaching\na human is often hugely slower\nthan um that teaching a um\nnai\nand also\nthey might learn different things\npaul krishan says that this argument\nmight be safe if a builder could give an\nargument that anything learnable by\ngradient descent can be learned by human\nscience\num now we don't i think paul christian\nis actually um uh requiring too much we\ndon't really need to be able to learn\nanything just a\na very small subset of things need to be\nlearnable but i still\nthink that it's hard to make such an\nargument\nand i would also add that there are two\ndifferent things here well what things\ncan a human learn and how long time does\nit take to learn\nwell we should certainly expect that\nhuman science is a lot less computer\nefficient and and it is an amazing\nknowledge part of the promise of this\nstrategy was to find a competitive way\nto gain this knowledge\nand\none thing that i would add in particular\nis a problem with this strategy is that\nif gradient descent can find something\nthat this ais to science cannot find or\ndoesn't find in time then we won't know\nabout it until it's too late and uh\nuntil the diamond is actually gone\nso let's try to do something even\nsmarter\nwe will have humans adopt the optimal\nbase net\nthere's a technique for this called\nimmediate imitative generalization\nand uh the way you do that is you have\nan ai\nthat\nlooks at all\ndifferent ways that humans could change\ntheir minds could learn more things and\nthen\nfrom this area it looks at what would be\nthe implications if the humans learned\nthis fact what would be the implications\nif the humans learn this fact or\nwhatever\nand then\ntry to figure out what is the best thing\nto teach the human this is what they\ncall human plus plus\neven though plus plus isn't the correct\nnotation it should be plus plus human\nbut that's brilliant\nso the idea with imitative\ngeneralization is we look at the things\nthat are best\nthat if you tell humans cause them to\nmake the best predictions\num and then you tell the humans that um\nwell this is possible in practice is\nreally speculative um but uh again we\nassume that the technical difficulties\ncan somehow be overcome\nand this defeats the previous counter\nexample because\nnow we can find the best we assume that\nthis technique can find the best patient\nthat can be that is possible to find\nwithin this computer budget this is a\nreally really really strong assumption\nbut if we have this assumption then uh\nthen this then\ngradient descent is not faster\nand in this case we can\nlabel cases uh\nthat would fail even with uh\nthe ai uh making a human simulator that\nnot just looked at the best possible\nhumans\nat\nthe best possible inference in a human\nnetwork but the best perf\nthe best\ninference in the best network that\nhumans could reach through science\nand that means we get\nsomething that is indeed very very\nstrong\nand here you can see some of the changes\nthis is the reporter again and we make\nsome changes to both how the humans\npredict and how they report and then we\nask questions to this\nreporting module\nand hopefully they will be good\nbut unfortunately that\neven stronger strategy\nmight also fail\nand the way it can fail is that um\nwith a antology mismatch that the\nentities cease\nmatching to the things that we were\ntalking about like when we in our brain\nhave something that maps to like this is\na diamond this is a rubber then the\noptimal network might throw this away\nand and build some different ontology\nthat where different things matters and\nare categorized in a different way\nand it could do even worse it could\nchange the meaning of the of the uh\nthe point that we have in our brains\ntowards a diamond and in this case we\nwouldn't even notice that we don't have\nanything corresponding to the diameter\nany longer\nso we could see something where we\nthrough this generat\nthis process gets a\na much better bayesian network inside\nour brain that can make much greater\npredictions but previously we had the\ndiamond the location of the diamond\nwithin these two nodes that was the\nreporter but in this new network the\ndiamond has been lost in some sense we\ncan't find it in\nin any way\ni'm not entirely sure this could not be\nrescued because uh we want a\nreporter that answers the questions of\nwhere is the diamond in the same way as\nthe previous ones so that it seems like\nsome kind of side constraint\nwould be possible to add to this\nimitative generalization\nalgorithm\nwell that would be feasible is of course\nclear since we made some really strong\nassumptions about that\nso that seemed like the best thing we\ncould do like\ntrying mahara and hara to get some more\ninformation and we basically got stuck\nbecause if we do\ntry to get something that's better and\nbetter it will cease to be good enough\nuntil we get to the point where\nwe might require a new ontology in this\ncase we can't find the diamond any\nlonger\nwe can do something still\nbecause we want to find the direct\nreport and we do in fact not have we\ndon't really have to ask the ai what\ncorresponds to this diamond we could do\nsomething instead we could do use\ninterpretability network\ninterpretability techniques to look at\nthe network and figure out this node\ncorresponds to the location the\nx-coordinate of the diamond or something\nlike that and then we can use that um\nto figure out how uh it's making\npredictions\num\nand then once we have the um\nthe true correspondence between the the\nneural net of this ai and reality then\nwe can define the difference between\nthese two uh that's basically a loss\nfunction we can use for for the diamond\ncomponent so\nwe can find the uh\nif we can find the the\nthe actual location of the diamond um\nthen we can just uh\nuh get the report by seeing uh does it\nanswer questions about this curriculum\nso that's at least a strategy but\nunfortunately this brings us effectively\nback to square one\nand there they are trying in this last\npart of the table they're trying to use\nsome kind of analogy i'm not entirely\nsure that this analogy really holds uh\ni'm gonna try because they don't make it\nexplicit at all uh so i'm gonna try to\nmake it explicit and we'll see how good\nhow well it goes\nso this is my interpretation so uh yeah\nbuyer beware um\nthe analogy is written as\nfollows and then we'll see it for smart\nvault and for interpretability\nthe analogy is that a human with ai\nresistance is standing at a base method\nwhich makes great glitchy and we want to\nfigure out how to use to answer\nquestions about the things we care about\nlike the diamond being in the room\nwe think there should be some correct\nreporter but if we search for it in the\nnaive way we might get a human simulator\nso in the case of the smart bolt\nwe have we're staring at a base network\nthat makes great predictions in this\ncase it makes predictions about camera\noutputs\nand we want to figure out how to get to\nanswer questions and in this case with\nthe smart vault we literally ask\nquestions to the reporting module and we\nget answers\nthere should be some correct report in\nthe operating system the smart wall then\nthe knowledge about what is going on is\nin fact latent\nand if we search for it in the naive way\nwe might get a human simulator now\nthat's true in this case because the\nhuman simulator gets\nlow and perhaps the lowest loss how\nwould this look for interpretability\nwell\nthat was no longer predictions about the\nunder uh about what does the camera\noutput but how does the\nthe the parts of the behavior network\ncorrespond to underlying reality\nand we no longer get the information by\nasking questions we directly read it off\nfrom the values of the network\nthe knowledge is still latent in that it\nmust um\nit must correspond to reality somewhere\ndeep inside\num and\nsearching for a reporter can we get a\nhuman simulator the obvious question for\ninstant probability research for\ninteroperability is no why would we get\nthat but i think we might actually get\none i think the analogy could hold in\nintegrability tools as they get more\ncomplex we actually i could see us\nstarting to in fact run into the same\nproblem where our interoperability tools\nwill in fact\nstimulate a human and then we're no\nlonger then then we don't\nget\nget anywhere but this is\nmy interpretability\nin the interpretation of\nthis analogy and i'm not at all sure\nit's correct\nthat is all for today thank you and see\nyou next week", "date_published": "2022-05-06T04:56:54Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "3bb91b41a0b0abf9e7f6e36b1fef3693", "title": "256 Where I agree and Disagree with Eliezer 2", "url": "https://www.youtube.com/watch?v=a2qTNuD1Sn8", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session\n256 in the aisafety.com reading group\ntonight we will be discussing the second\npart of where i agree and disagree with\neliezer by paul christiano\npaul cristiano probably needs no\nintroduction at this point\nand we'll be going through part 11\nthrough 19 of the list of disagreements\nso the first thing that\npo christiano asks is where does eliezer\nyudkowski\nfind the evidence that he uses for\ntalking about how difficult the\nalignment problem is\nwell\nthe answer according to paul christiano\nis probably in his own experience on\nworking with solving the alignment\nproblem\nlike on the face of it that seems a bit\nuncharitable in the sense that elias\nkowski has seen a lot of other people\ntry to solve alignment um but\nuh knowing from what i know about elise\nyotkowski i think\ni would not be very surprised if he is\nindeed basing most of it on his own\npersonal experience\nthen another thing that paul cosgiano\nasks is society has apparently spent\nvery little total effort on solving the\nalignment problem so based on that we\ncan't really say very much about whether\nit's a hard problem or not\ni'm not entirely sure i like that\nframing because um\nhow much\nyour society could do a lot if we like\nspend all our resources towards that but\nthat's not gonna happen i would agree i\nwould think that paul cristiano also\ndon't think we're gonna get a fire alarm\nor anything like that so we're not gonna\ndevote all that effort so comparing it\nto the total amount of effort we could\nspend is probably not relevant\ni prefer uh nick bostrom's concept of\nrecalcitrance in how much progress do\nyou make on the uh\non the problem uh given a certain amount\nof extra input\nthere's a comparison with existing\nresearch fields and the problems they\nthey routinely solve and it seems like\nthey're working in a different way and\nsolving different things uh compared to\nmiri so we shouldn't update too much on\nthe failures of miri\ni think at some point\ni think we are comparing apples and\noranges here because the\nthing the fields that solve problems\nroutinely look very different from the\npre-paradigmatic field of ai alignment\ni don't think these can be compared in\nany particular way most existing\nresearch fields would also be quite\nincapable of dealing with\npre-paradigmatic\nresearch\nso miri tried and failed to find a\ncoherent formula for correct ability\ndoes that mean that\ncorridability is unworkable\nno paul kushner says because we don't\nwe don't have\nmiri is just not competent enough for us\nto conclude this\num so a small nitpick here is that the\nquote from\nthe way that paul christian quotes elias\nyatkowski is technically not correct\nhe's splicing two quotes together\nbut that's just a nitpick i think the\nthe real answer to this would be that\nmiri\nspent a good amount of effort on this\nthey had something like five to ten\nresearch papers um\npublished on this and they gave up of\ncourse and then after that for five\nyears no one was able to um\nto improve on what miri did and i think\nthat is a substantial amount of evidence\nthat\ncorrugability is unworkable\nas for the total\ndifficulty of the problem\npaul\nstates that eliasa is\ncertain about the how difficult the\nproblem is\nand uh to some extent that flies in the\nface of the uh the introduction where\nelijah in his\nlist of lethality's post\nsays the problem may in fact not be that\nhard in fact he is\ngiving the example of a textbook from\nthe future that contains just simple\nsolutions that work so the problem may\neven be quite simple\nit's just that we are\nwe're not seeing and we have a hard time\ndetermining which solutions are going to\nwork or not\nright now we mostly don't know how hard\nthe problem is and that is uh very true\nuh i do think that we have a significant\nlower bound in that the problem is\nalmost certainly not easy i think enough\npeople have worked on solving the\nalignment problem that we can rule out\nmost easy solutions\ndisagreement number 12 is about elias's\nunderstanding of science\nsome of the things elias erdkowski say\nseem to have a relevance towards uh\nscience in general for instance the the\npattern of bright-eyed young scientists\nthey ignore warnings from cynical\nveterans\num that is something that paul\nchristiano thinks is probably ungrounded\num\nthree things that paul christina thinks\nwould ground this would be um\nknowing more about the history of\nscience being knowing how functional\nacademic disciplines uh work and having\nresearch experience\nof the three i think elias utkowski\nprobably knows quite a bit about the\nhistory of science that's at least the\num\nlike precisely how much is a good is\nobviously not a real scholar of this but\ni think he's\nvery well read um\ni think uh the dynamics of modern\nfunctional academic disciplines is\nmostly irrelevant in the sense that\nalignment is a dysfunctional academic\ndiscipline\nresearch experience you could say that\nhe has been like a researcher\nmost of his life\nso some research experience could also\nbe argued but in general i think that um\nthis is\nlike the paul christian's criticism is\nto some extent correct i think\nthe place where elias witkowski has in\nfact experienced and is able to to talk\nabout this with some uh\ndegree of certainty is within alignment\nand\nlike nanotechnology and these kind of\npre-paradigmatic fields which is kind of\nhis specialty so i think the patterns\nthat he is uh saying i would expect them\nnot to hold in\nfunctional academic disciplines but i\nthink they might indeed be um\nbe valid in less functional academic\ndisciplines\nas evidence for this poor christian\npulls out a bet that elias yudkowski\nmade back in 2009 where he bet 25 that\nthe large hadron collider would not find\nthe higgs boson um and that turned out\nobviously to be wrong\nnow when i first saw this evidence i was\nreally meh in the sense that obviously\nbetting and even odds is something you\ndo if you're like 75 certain of\nsomething and having a prediction that\nyou were 75 percent on uh\nlike so many years ago that seems like\nreally weak evidence to me but then i\nlooked into it some more and to see\nelijah's\nreasons for this and the reason he's\ngiving is\nhe doesn't think the modern field of\nphysics has its act sufficiently\ntogether\nand\nthat is in fact quite relevant if we\ntake elias's position on this being that\nhe doesn't think the modern field of\nalignment has its act sufficiently\ntogether um\nso\nlike obviously alias kowski should have\nupdated on this back in 2009 but in\ngeneral it's really really hard for\npeople to update sufficiently on on\nevidence so i do in fact think that uh\npaul cristiano's point here is correct\nalso i could you might see here in the\nbackground uh this is uh from the uh\ncomic uh the simpsons back in 1998 where\nup here you'll see uh homer simpson\npredicting the mass of the higgs boson\nback in 1998. so at least some people\nhave um understood uh\nunderstood this for a while another\nthing that paul christiano um\npoints out is that um\nthere's a prediction by elias swijkowski\nback in 2010 where will we see real ai\nthat's probably what we would call agi\nnow\nand that's more likely to come from a\nsmall group rather than large industry\nthat's given as uh just\nstraightforwardly by paul christina as\nan example of almost a failed prediction\num\ni don't think i would count this as a\nfail prediction in the sense that if you\nmean something like agi then the jury is\nstill out where will it come from\nand the question then is if it comes out\nfrom something like deep mind or\nopen ai is that like\nan entire industry that is creating it\nor is it um\nlike a small group to some extent the\nnumber of key researchers in deep mind\nand open ai is not that large i think\nif uh\ni think you could really argue that if\nopenmai or deepmind were the ones who\ncreate agi for the first time\nthat would be a small group i i don't\nthink this prediction is horrible uh but\ni do believe that um\nelias elkowski would probably frame this\nin a different way now than he did 12\nyears ago\ndisagreement 13 is about\ninterpretability\nwhere elizakowski in his list of\nlethalities have some central\ndifficulties of sufficiently good and\nuseful\ninteroperability um\nand this point in particular is one that\npoor christiano is not super happy about\nand that is because the\nuh if you find a number of um of\nproblems then the uh the future\nresearchers uh they can just choose uh\namong the different projects that they\nhave in front of them the ones that\ndon't suffer from from this problem\nand that's a\nan important thing like just because you\ncan rule out 99\nof the solutions doesn't mean that it's\ndoomed at all because the researchers\ncan just choose among the one percent\nthat don't suffer from the problem\nhaving said that\nwhat little interoperability research i\nhave seen seems strongly like it does\nnot in fact um consider\nthe lethalities that eliza utkowski is\npointing out most of it seemed like it's\nonly trying to get to sufficiently good\nand useful\ninterpretability and that seems to be\nreally hard um but on the other hand\nlike\nyou have to\nyou have to solve interoperability to\nthe point where you can say something\nlike is this ai trying to deceive us\nbefore you can\ntry to figure out what you want to do\nwith that so to some extent i think it's\nfair enough to have that as a\nprerequisite\num and\num\neleazar yutkowski is not really\naccording to pro christian engaging with\nthe best version of interoperability the\nbest strategies based on\ninterpretability\nand i think\ni think it's quite obvious that the uh\nthe strategy that paul christiano is\nthinking about is his own uh eliciting\nlatent knowledge\nuh strategy and i think elias you're\ncasting what little i've seen him engage\nwith that has mostly been very\nderisively um so i think definitely he\nshould engage much more with that um\npaul cristiano gives an example of um\nfive things uh from the\nlisting latent knowledge\nreport that\noverlaps with this list of um\nof difficulties\ni think it's good that there is some\noverlap but i think in fact that\nfive points is not that much\num and i think that this list is\nin fact something that um\ni think it's more like that the\neliciting latent knowledge should engage\nwith the uh\nthe list of difficulties that elias\nyukowski is pointing out\na\nan issue with with uh trying to predict\nthe future of interoperability research\nis that you're going to learn a lot more\ndetails in the future and does this\novercome the objections um\nprocrastina is framing that as a\nrhetorical question and i think in fact\nwhen i read this um the section b3 with\nthese uh central difficulties seem like\nthe kind of things where learning\ndetails will not fix them of course it's\npossible we will learn things like that\nlike there is a\nsimple canonical human understand and\ndouble version of all concepts and that\nwould be huge for interpretability\nresearch\nbut that's not really what we would call\na detail\ni think the the objections will not uh\nbe solved by mere details um\nthat's also probably what elias utkowski\num\nthat's why he calls them central\ndifficulties\nbut paul christian is not really\nconcrete enough that we can't that i can\nsay this for certain\nso\nfuture ai systems\num\nconsiders like the existential\nqualifiers that them if there exists one\nai systems in the world which has one\nway of taking over the world then that's\nvery bad um so that's a very broad\nclassification because there are many\nways of taking over the world there are\nmany ai systems potentially in the\nfuture\nbut\nhumans are also manifold and\nthe existential qualifiers there that if\nthere exists one human who has this good\nidea or one research project that ends\nup with um\nwith getting a good grip a solution to\nthe alignment problem um\nthat's something that gives paul\nchristiano optimism that there are this\nnumber of research directions um and\nelias yorkowski is according to him not\ntaking that into account\nuh i think um\nthe way i would uh state this using uh\nuh quantifiers is the for all quantifier\nsaying the uh that um\nif you realize you ask you have a list\nof 43 different lethalities then any uh\nproject need to\nlike pass all these bars uh of course\nit's not literally 43 bars\nwhereas an ai only needs to find a\nsingle way to take over the world um\nthat's one way to look at it like that\nit's actually not an existential\nqualifier um\nfor for the alignment uh proposals uh to\npass but it has to\npass through all the uh the lethalities\npoint fourteen how much does elijkowski\nknow about this the state of the art of\ninterpretability\num and paul crocin's point is he doesn't\nknow very much so obviously\nhe can't really be that confident in his\nprediction that interoperability is not\ngoing to be successful\num\ni\nwould reserve judgment on this case um i\ndon't know enough about the state of the\nart of modern interoperability\nso i can't really tell that one thing i\nwill say however is that it's not\na priori impossible to have a strong\nopinion about the feasibility of a\nresearch field even if you don't know\nalmost all the details\nan example would be astrology\nwhere i'm i have no clue about\nwhat astrology what astrologers do in\npractice but i'm quite certain that it's\nall bunk\nthat's of course not to say that\ninterpretability is the same thing it's\nobviously not\nbut it's i think it's possible to have\nstrong criticism of a field even if\nyou're totally ignorant of the state of\nthe art\nidiozytkowski has this quote there's no\nknown case where you can entrain a safe\nlevel of ability on a safe environment\nwhere he can cheaply do millions of run\nand deploy that capability to save the\nworld\nso three requirements for melee\nassociate a safe level of ability a safe\nenvironment and must be cheap in order\nto get something that's capable of doing\na pivotal act\naccording to paul cristiano this is not\nengaging with the kind of system that is\nlikely to be built\ni have an answer to this that is kind of\nsnarky and maybe not really in good\nfaith and the reason for this is that in\nfact the the systems that are likely to\nbe built are going to be\nsystems that like i don't know try to\ncure cancer or they try to um\nmake a hollywood video or something like\nthat from a transformer or something\nlike that and\nthe things they will not be doing\nwill be to try to do a pivotal act and\nthey will not be designed for safety so\nin that obvious sense paul christiano is\nright he's not engaging with the kind of\nsystems that's likely to be built\nbecause they're going to\noptimize for the number of\nviews on youtube videos or something\nlike that\npaul cristiano talks about early\ntransformative ai systems doing\nimpressive projects\num and\nhas some uh thought about whether that\nis possible with like smaller more\nmyopic ais\nthat are composed i think the framing of\ntransformative ai and impressive\ntechnological projects is\nuh suspects in the sense that sure if\nyou have something build something that\ncan cure cancer that's a\ntransformative thing to do and it's a\nvery impressive technological project to\ncome up with to have\nan ai that comes up with a cure for\ncancer but it's not a pivotal act and in\nthat sense it is probably irrelevant\nbecause if you do these uh beautiful\ntechnological things um but you don't\nflip the the game board then the next ai\nsystem that comes along will then be\nbe the one to destroy the world um so in\nthat way for that reason impressive\nthings are quite irrelevant the things\nthat are\ntruly transformative in the more pivotal\nsense is doing things like alignment\nresearch\nso can you in fact do alignment research\nby uh being trained on small smaller\ntasks with short feedback loops and then\ncomposing that\num that's a good question\nmy intuition would be that that is not\nsomething you can get without\nhaving a very general ai that is capable\nof generalizing very far\nbut it is an open question whether\nalignment can be thus decomposed\ni i don't think\nit's obvious from elias you'd casca's\npoint uh here that\nit cannot be done but i do think that uh\nif i have to give my probabilities it\nwould it seems really hard to\nhave very very narrow and limited and\nmyopic ai solve the alignment problem\ndisagreement 16 is about when problems\nappear\nelias zukowski makes the statement that\nmany alignment problems will not\nnaturally appear at earlier passively\nsafe levels of capability\npaul cristiano\nanswer to this is that\nwe can we can study the deceptive\nalignment and these kind of problems\nbefore superintelligence and that is\ntrue but it's also not really answering\num\neleazarkowski's point which is when do\nthese things naturally occur and not\nwhether it's possible to study them\nbefore super intelligence\neliezerkowski is claiming that there\nmight indeed be a lot of such problems\nlike in alignment and what have you and\nall of them need to be theorized\nrecreated and mitigated in advance\npercussiano's\nanswer probably to this is that\nif we fail to solve the problem of\ndeceptive alignment then we won't notice\nother problems such as in alignment but\nthat doesn't really affect the\nprobability of solving the problem um\nbecause we have to solve them\nsequentially um and i think uh\nuh paul christiano and elias yokowski\nare talking past each other in the sense\nthat paul christiano is talking about\nfirst we solve this problem and then we\nsolve this problem and then we can see\nthis problem after we have\na an ai that's strong enough to do um\ndeception whereas\nkowski is thinking about doing this in\nadvance\nbefore we get to unsafe levels of\ncapability\nuh\ndisagreement 17 is more about engaging\nwith uh alignment work even though we\ndid have that in i believe disagreement\n13. and\nthe\nthe statement here again is that elias\nis not really engaging with the most\nserious alignment work and most serious\nis of course a\na difficult\nterm\nso\nis there in fact any any alignment work\nthat\nexplicitly considers all 43 lethalities\nas far as i can see\nthere is not every single alignment\nproblem\nuh\nsolution to the alignment problem that i\nhave seen seems to disregard some of\nthese lethalities\nso um\nin in fact there is a a a meaningful\nengagement verb that is very possible\nbetween the list of lethalities and\njust about every single proposal\nand also i must say that i\ndislike um the way paul chris channel is\ntalking about the most serious alignment\nwork when um elias kowski in his rants\nstarts with saying that everybody uh in\nalignment have their own thing that they\nconsider like the key thing and the most\nserious thing and um\nuh and nobody agrees what this is and\nit's impossible for elias kowski to like\nanswer all of them and in this case when\npaul is just saying uh\nwe should talk about the most serious\nthings but not really cashing that out\nto uh like a concrete uh statement like\nsaying something like uh he's not\nengaging meaningfully with uh enlisting\nlatent knowledge um that it's just um\nto some extent answering uh like elias\nyutkowski has already in his post said\nwhy\nyou need to uh to cast this out and say\nexplicitly this is the problem you can't\njust assume that everybody knows what is\nthe most serious alignment work because\nthere is in fact\nuh\na lot of uncertainty about this\nso why do you want to engage it's\nimportant to engage with the the best\nwork if you want to\none assess the probability of doom\ndo you actually need that\ni think in order to\nassess the probability of doom\nit's it requires a lot of data and\nrequires a lot of input and it requires\nto some extent both some\na clear understanding of the problem and\nwhy it's legal\nand an understanding of the alignment\nwork so i don't think you can assess the\nprobability of doom\nwithout both of these things\nthe second thing is that it's important\nto engage meaningfully with this most\nserious alignment work if you want to\ncontribute meaningfully to solving the\nproblem\ni think that in fact a list of\nlethalities is a very substantial\ncontribution in the sense that there is\na number of gates you need to pass in\norder to\nhave some alignment work that doesn't\nfail for uh like reason number 27 or\nsomething like that there's a number of\npeople in the comments uh to the post\nthat disagrees with this\none way i guess you could think about\nthis is um consider the classic problem\nof whether p equals np\num\none of the reasons why we strongly\nbelieve that p does not equal in p is\nthat we have\nmanaged to come up with a real good list\nof lethalities\nfor any algorithm that prepares to have\np equals np\nlike there are a lot of gates such an\nalgorithm would need to jump through and\nthese gates look really really difficult\nand for that reason we are pretty\nconvinced that p does in fact not equal\nnp\ni don't know if that analogy made sense\nand finally you have to engage\nmeaningfully with the most serious\nalignment work if you want to be able to\ncomplain about other people not making a\nsimilar list of lethalities\ni think that's just plainly a\nnon-secondary as far as i can see that\ndoesn't make sense at all you can\ntotally complain about other people not\nproducing similar lists without engaging\nmeaningfully with the most serious\nalignment work\ndisagreement number 18 is about\nevolution\nand human evolution and whether natural\nselection is a good analogy for\ntraining machine learning models\npaul cruciano says it's a weak analogy\nand in order to\ngo into details here it would be really\nnice to have some kind of pointer to\nwhich\nway like idiosyncratic uses this analogy\nin a number of places and which of those\nare\nprocrastinator referring to\nso i looked through there were like\nthree main uh places where earlier\nzuckowski uses human evolution as an\nanalogy for\nsomething like\nml training\nthe one that is most central as far as i\ncan say is this agreement is\nlethality number 15\nwhich is that the history of\nhumans\nwas one where we had a\nrelatively stable level of capability\nand then a sudden spur uh\nsprint of uh burst of uh capability\ngrowth and three things happened at\nroughly the same time we got\nself-reflection we got contraception\nand we started having being primarily\nprogrammed by cultural rather than\ngenetic factors and all these three\nthings to some extent broke the inner\nalignment that we previously had with\nevolution where we were just trying to\nspread out genes as most and get as many\nancestors as possible\ndescendants\nand\nthe the reason paul christiana says this\nanalogy does not hold is because we can\ndeliberately shape\nml training\nand he uses the example as of animal\nbreeding as an a better analogy\nit seems to me that paul christiano is\nreally not answering uh eliza caskey\nhere animal breeding seems like a bad\nexample in the sense that\nthe animals like if you have something\nlike the the evolution of dogs or\nsomething like that dogs are not um\ngetting a a grow\na sprint of capability growth and dogs\ndon't have self-reflection they don't\nhave contraception they don't have any\nparticular noteworthy culture\nso there is no break with uh inner\nalignment no inner misalignment\nhappening in animal breathing at all as\nfar as feisty can tell\nin this\nlethality uh number 15 elias kowski has\nthis particular quote that i'm going to\nread in full\nmy model of this variety of reader has\nan inside view which they will label an\noutside view that assigns great\nrelevance to some other data points that\nare not observed cases of an outer\noptimization loop producing an inner\ngeneral intelligence\nso when i read this this seemed really\nprescient in the sense that um\nthis is basically uh elias kowski\nanswering uh paul christiano's objection\nin advance\num like this is not uh\nanswering afterwards it is a text that\num pokes gianno had access to when he\ngave the example of animal breeding and\ni think in fact um\npaul cristiano probably has an inside\nview and he's certainly labeling this as\nan outside view analogy with animal\nbreeding and it is\nnot an observed case of an auto\noptimization loop producing an inner\ngeneral intelligence\num\nso in in that case um in this sense it\nseems like elias rutkowski answered paul\ncorsano's objection in advance\nso that was my first thought my second\nthought was actually\ni've previously complained that a lot of\nthe\ntheoretical foundation on\ninner alignment rests on like a single\nexample of from human evolution and we\ndon't really have other examples so in\nthis uh\nin this case um like if we don't have\nmore than one uh observed example of in\na misalignment um\nthen\num it's a simple easy prediction for\neleazar to make that people who will\nadmit to this with analogies will have\npoor analogies like if there are no\nother analogies\ni\nwhen i look back on this i am somewhat\nin doubt because\nthere are a number of other places where\nelias yikowski makes reference to human\nevolution\nin other lethalities\nand it could be that when paul\nchristiano is saying that\nhuman evolution is a bad\nbad analogy\nhe could be referring to those i'll\nbriefly go through them one of them is\nabout whether capabilities generalize um\nand how much they generalize and it's\npossible that\nwe can\nshape\nml train to some extent so alignment\ngeneralizes more than capabilities uh\ni think\npro christiano uh he the example he\ngives is um to consider breeding animals\nfor friendliness and courage ability to\nwhat extent that will work\ni think just breathing them for for\nfriendliness and not also\nfor capabilities\nthat's a strange example to pull out\nhere\nbecause obviously paul if paul cristiano\nhere is referring to this statement by\neliza ykowski then um here just bringing\nthem from one thing obviously um is not\nvery relevant to here where we are\ntalking about both getting more\ncapabilities and getting more alignments\ngetting better alignment better\ncredibility\nso i was thinking whether it would be\npossible to\nchange uh paul cristiano's example here\nto not just breed for friendliness but\nalso for intelligence well you could do\nboth of that um\ncould you for instance uh like\nuh if you get got something that was as\nsmart as a human could you get something\nthat was uh that's strongly identified\nwith like an outer optimization really\ncared more about this fitness\num i\nam most confused about whether this is\npossible it seems to me that\nthe the genes that make up humans\nbasically seem to be selected 100 for\nfitness and zero percent for other\nthings like courageability and with uh\nbut i'm\ni'm fundamentally on unsure about what\nthis line of argument refers to and for\nthat reason i don't think this is the\nthing that pokes genre is referring to\nthere's also a third place where elias\nyutkowski\ntalks about\nml training as an analogy for uh human\nevolution or human evolution as an\nanalogy for ml training i don't think\nthat makes\nthat third place makes a lot of sense\npoker general has an interesting\nanalogy about breeding humans for\ncredibility if humans were bred for\ncorrugability actively\nquite likely he expects we would be\ncourageable up through the current\ndistribution of human behavior\nthat's an interesting case uh\ni one of the things i wasn't doubt about\nthis is he explicitly writes the current\ndistribution of human behavior as some\npeople may notice human behavior is\nnot very critical not very friendly\num in fact humans go to war with each\nother and uh like uh there are many\nreasons why the human distribution does\nnot uh reflect max\nthe way we behave does not reflect uh\nperfect friendliness\nso instead the way i mostly read this is\nthat instead of the human behavior then\ni must think about like the human level\nof correlability could you make us\ncourageable even while making our\nintelligence as large as it is now\nso if with that breeding process\ncontinuously was run by the smartest of\nthe current friendly humans that seems\nlike it would\nuh\nlike\nnot break down at the current level of\ncapability and perhaps even um\nmuch far further than that\nso um\nthe way i\nimagine corridability is that you are\ndeferring to someone more stupid who\ndoesn't really share your values and\nthat's the way credibility for an ai\nthat is super intelligent would probably\nlook\num\nso can\nhumans in fact be bred for something\nlike that\nuh\none example i could come up with was\nlike uh slavery has been a big thing in\nthe past where uh people were\npeople who were slaves and revolted\nagainst the masters generally did not\nreproduce um so there have been some\nnegative selection\ndid that work my god is not really but i\ndon't really strongly know um mostly i'm\nconfused about this analogy\nthe overall question like our values\nlike how friendly we are how critical\nare we uh is that something we get\nmostly from our genes like something\nthat a breeding program could influence\nor is it something that we get mostly\nfrom culture uh my guess is that it's\nmostly for culture and the way i would\nexplain this is\num\nby assuming by changing this analogy\nhere so it's not actually the smartest\nof the current uh\nfriendly humans that are in charge of\nthis breeding pro program but um\nbut aliens and so the credibility is not\ntowards other humans but towards aliens\nso if we if somehow humans human\nreproduction was interfered by aliens um\nand selecting for credibility to the\naliens\nand the culture that's the different\nthing um that that is something that is\nonly left to the humans this seems more\nlike um more relevant to having a uh um\nan ai that is a number of um\nuh\nperhaps a number of agents or a single\nagent and there's a level on top that\nwe're trying to get in alignment and uh\ncourage ability from the agents towards\nthe uh the masters to some extent\num would humans be able to uh break free\nof the journey of\ncorruptible genes i believe we would\nhave done that right now but i am mostly\nreally confused about this analogy\nthe final disagreement\nsure\num uh pakistan is using uh the two words\ncredibility and friendliness\nin the same um\nin the same uh sense\nso correlability in this case would be\nthat the um the ai is um\nseeing\nitself\non like on the inside in the way that\nthe humans see it on the outside\nsomething like when we see the ai our\nside\nwe know that the ai\nis not um it doesn't understand of true\nvalues it needs to be uh uncertain about\nwhat we truly want needs to defer to us\neven if it is really sure that we want\nto be turned into paper clips\nand\nthat's the kind of credibility that the\nai needs to internalize\nyeah so\nyeah um\nlike is uh to me uh part of the reason\nwhy i like the analogy better when it's\nlike aliens doing this is to split out\nthe two parts whether it's like the\ncultural thing and the genetic thing\nwhere i feel\nthe genetic thing doesn't really have a\nvery strong influence on our value\nwhereas culture has a huge influence on\nour values\nokay um the final disagreement is on\nwhether it is possible to verify pivotal\nacts\npaul cristiano makes the statement that\nthat is indeed something that is\nprobably possible\nand part of the\nargument you use for this is that\nin almost any\nresearch and domain\nresearch and development domain we see\nthat verification is much much easier\nthan generation\nthey much much is\nempathized by paul christiano\nand i think that is\nstrongly wrong in the sense that\nverification of something that is\nuh like you can sure you can build a car\nand see whether it can drive but that's\nvery different from uh trying to\ndetect something that has been built by\nan adversary and unaligned ai trying to\nfigure out whether that contains an\nerror or deliberate sabotage somewhere\ninside\nadversaries make this a lot more\ndifficult in\nsoftware would be an example where um\nit's really really hard to see whether\nsomeone has\ndeliberately put in some kind of\nobfuscated um\nbug somewhere\nmy favorite example of this would be the\nunderhanded c context where which is an\nexample of uh people trying to write\ncode that looks really really really\ninnocent and has some really really\nscary and exploitable box uh in them and\nthat seems uh much much harder to prove\nthe absence of this kind of um\nof attack\num\ni think in fact\nvery very often in almost all domains if\nyou have an attacker in something like\nlock picking or what have you then um\nfiguring out whether it is impossible to\npick a lock is really really\ndifficult\nand i'm surprised how uh\nthis point the i think that's there must\nbe some kind of miscommunication because\nthis seems like a really basic point\nso the example that paul channel is\nthinking about here is contributions to\nalignment research\nso\nto what extent can we prove the accents\nor convince ourselves of the absence of\nadversarial prop\ninserted problems in alignment research\ncan we truly verify alignment research\nif it's something that's generated by\nsomeone who is stronger\nsubstantially more cognitively powerful\nthan us\nthen my strong expectation would be no\nit is not possible if we have an\nunderlying super intelligence and we ask\nthat for a solution to the alignment\nproblem that's going to end really\nreally poorly\nokay but paul cristiano's suggestion is\nmostly that we are getting this before\nwe have have a full super intelligence\num how about if it's something\nat the level of a\nhuman can that insert um\nerrors um\ni think um\nmy expectation would be that it is in\nfact possible it's really hard to\nevaluate whether alignment research does\nin fact solve the problem\ni think it's\nvery easy to come up with uh\nlike\nexplicit problems with with research but\nif you have if if you get some code from\num from the ai and says run this this\nwill be an aligned super intelligence\nthen i would certainly not trust this\nuh\nwhen i'm thinking about this kind of\nthing i imagine that like in a red team\ni'm trying to do um alignment research\nas a saboteur and trying to insert\nerrors or something like that and i\nthink that seems quite feasible\ni could easily imagine ways to do\nalignment research that looks benign but\nis actually going down a um\na wrong path to some extent\num\npaul cristiano has a\nsuggestion for a way around this this is\nin the comments and that is to require a\nthe contributions to come with\nconvincing justification\nand as we are this is not being done by\na particular powerful ai the way we are\ngoing to do that the only way that would\nbe feasible would be optimized for\nconvincing\njustifications\nand trying to\ndo that seems to me straightforwardly\nreally really dangerous\nin the comments further paul christiano\nis called on this and he gives an\nexample of how verification is easier\nthan um than generation\ncreating a machine learning hardware and\ni think that's a clearly um\nnot an example of where you would have\nlike this kind of adversarial attacks\neven though that's totally possible to\nhave um and i think it seems to me when\ni read the comments that paul christiano\nis somewhat touching the question of\nwhat do you do if it's not just\nverification for box but verification of\nthings that are adversarially entered\num but again i am unsure it seems like a\nbasic point and it's possible of course\nthat it is just me who have\nmisunderstood paul christiano at some\npoint\nthat is all for today thank you and see\nyou next week", "date_published": "2022-09-09T05:17:47Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "6471af854c2fa57785fae14c3321eebd", "title": "182. The Offence-Defence Balance of Scientific Knowledge", "url": "https://www.youtube.com/watch?v=7zMLbif0dvs", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "okay so this paper is called the offense\ndefense balance of scientific knowledge\ndoes publishing AI research reduce\nmisuse and the paper is by I see their\nstrange imperfection here but the\nauthors are Toby Shevlin and Alan Defoe\nand they presented this paper at a\nconference held in New York in February\nthis year okay one moment good these are\nthe authors and I am I apologize that\nI've got some alignments wrong these\nslides I cannot account for that they\nwere too perfect just now but anyway the\nfirst author is is chef Lane at the top\nthere and he is that both the authors\nare at the future of humanity Institute\nunits word and I think that ship lanes\nspecialization is in law he's the guy at\nthe top and the other fella\nAlan Defoe is the director of the center\nfor the governance of AI at Shi\nnow excuse me I'm just going to have to\nadjust that you can probably hear that\nwe're applauding the key workers in the\nbackground we had noise so this paper is\nconcerned with the balance of offense\nand defense the balance between two\npossible effects of disclosing technical\ntechnological knowledge either\nfacilitating misuse of the technology or\ncontributing to protections against\nmisuse they say that the balance between\nthe good and bad results of publishing\nwill vary across scientific fields they\nsay that the existing conversation\nwithin AI specifically has imported\nconcepts and conclusions from prior\ndebates within computer security\nspecifically with the disclosure of\nsoftware vulnerabilities and in that\nfield\nthey say that publication of information\ngenerally favors defense against misuse\nand\nthat it's not necessarily appropriate to\ncarry ideas from software security into\nAI research and that the AI research\ncommunity should consider concepts and\npolicies from a broad set of adjacent\nfields and they say that the field of AI\nis in the midst of a discussion about\nthis topic nowadays and that everybody\nis concerned about a range of potential\nmisuses including I know this familiar\nfamiliar a list of possible abuses using\nfacial recognition for targeting\nvulnerable populations using synthetic\nlanguage and video to impersonate human\nbeings and to impersonate them\ndeceptively using algorithmic\ndecision-making which can amplify biases\nand unfairness using AI in drones to\nwell to disrupt air traffic to launch\nattacks so those are familiar then they\ntalk about the example of Tex generation\nand next generation is an area of AI and\nthey say that the discussion of this has\nbeen influenced by the discussion of\nsoftware vulnerabilities in general now\nrecently open AI released there\ngbt to model of Tex generation caused\ngreat stir and they adopted a policy of\nstaged release because of concerns about\nits potential misuse to mass-produce\nplausibly human written text\nthis staged release has long been\npracticed in computer security if\nsomebody spots the floor in a piece of\nsoftware whether it's the producer of\nthe software or whether it's some\noutside user this is generally released\nto the public but after some delay when\nit's first made available to the\nproducer so the producer can fix it and\nthen the public at large gets to know\nabout it\nso returning to open I and they release\nrelease of GPT two people criticized\nopen I I using such arguments as that\nmalicious actors would find it easy to\nbuild their own model anyway so that\nthere was no point in delaying the\nrelease of the curve that some knowledge\nof the possibilities for attack is\nuseful for preparing defenses so that\nthere should have been full disclosure\nanyway and that's quote security through\nobscurity unquote is ineffective and I\ntake it that in this context security\nthrough obscurity just means\num well if you don't talk about if you\njust keep your software to yourself you\nkeep the code to yourself then nobody\nelse will want to copy it and release it\nthey say that that wouldn't make much\nsense\nanother piece of software I came up\nquite recently is called Grover and that\nwas designed to generate disinformation\nor fake news this was published\nalongside code for building the model\nand but and constructing or\nreconstructing its dataset they don't\nhave mean presumably the dataset had\nbeen trained on and the researchers\nwould actually make further information\navailable to anybody who chose to\ncontact one and finally the code and\ndataset or open sourced\nintention behind doing this of course\nwas to increase our understanding of AI\ngeneration of think news so that we\ncould build tools that could guard\nagainst the harm it could do now talking\nabout offense defense amongst\ntechnologies generally they say that the\nfield international relations has an\nexisting body of literature on such a\nbalance\nthey actually cite very little but\nexcuse me but one example they cite is\nby Shapiro and Segal and they analyze\nwhen states should publish information\neven though it might be useful to\nterrorists for example a government\nmight wonder whether to publish\nwitnesses in commercial nuclear power\nplants and those authors Shapiro and C\nwill find that it's safer to disclose\nsuch information if you think that\nattackers already hold such knowledge\nanyway so they don't make not making a\ndifference and it's also safer to\ndisclose it if the government that's\ndisclosing exits always got Munson in\nthe case of nuclear technology if they\ncan\nuse this openness to find and fix\nweaknesses so I mean there's a to\nintuitively obvious considerations about\nwhen you it might be wise to release\nsuch information so our authors chirlane\nand Defoe want to produce a theoretical\nframework that covers technologies in\ngeneral and I suppose other areas of\nexpertise but some technologies in\ngeneral and they're going to apply it\nway and I in specifically a bit later\nand they say\nthe net effect of disclosure on misuse\ndepends on which of the two effects of\nis stronger the effect of aiding actress\nwants to cause harm and aiding actors\nwho want to protect others from harm and\nthe balance between those two will be\ndifferent for different types of\ndisclosure and just before launching\ninto it they say that their analysis is\nlimited in scope only deals with certain\nconsequences of disclosure and they're\nonly talking about harms that are a\ndirect intentional use of the technology\nthey're not talking about cases where\ntechnology is used in cautiously or\ncontains a none safety thing so then I'm\ntalking about technological accidents\nthey're not talking about structural\nrisks from technological proliferation\nsuch as military and stability or\nunemployment all those important topics\nbut they are only talking about these\ndeliberate intentional malicious users\nand their framework does not weigh the\nbenefits of polishing scientific results\nother than how they relate to security\nthey have a bit more to say about this\nat the end however so this balance and\nI'm sorry it you don't look as if you\ncan't see the very last line here but\nwhat that last line is saying is\ntherefore our framework should not be\nthe basis for an assessment of\npublication norms but only one input to\nsuch an assessment in other words there\nis more to a question of polishing than\njust questions of risk and security\num of course if you think that with\nwhich with my existential risks you\nmight not agree with that you might\nthink that security is be-all and\nend-all but they I don't they don't use\nthe phrase existential risk at all in\ntheir paper okay some fact now you will\nwhat follows begins by being extremely\nabstract there this is an abstract\nanalysis of benefits and harms and\ndifferent types of cause and effect but\nthere are a few concrete examples to\nlighten things up as we go on so don't\nokay\nfactors where disclosure affects the\nattackers a capacity to cause harm the\nfirst is called counterfactual\npossession that is a jargon phrase that\nI thoroughly disliked it just means\npossession of some piece of knowledge\nthat's achieved\nindependently of being disclosed in\nother words you think that some bad\nperson out there is going to get this\nknowledge anyway so you might as well\npublish so as they say the more likely\nit is that this will be achieved the\nless impact the publication won't have\nanother factor that affects the\nattackers capacity to cause harm is\nwhether potential attacker said well is\nthe fact that potential attackers have\nthree main avenues to achieve so-called\ncounterfactual possession\nin other words independent possession\nindependent discovery or sharing\ninformation amongst themselves and\nso-called counterfactual publication\nthat is someone else will publish soon\nso there's no point in holding back and\nthe authors say that we believe these\nconsiderations should be excluded from\nthe decision if you're thinking about\nwhether to disclose a piece of knowledge\nin a I'm set\nyou should discount these factors above\ninstead of considering the impact of an\nindividual decision to publish the\nresearchers should ask what decision\nwould I want rolled out across the whole\nfield in other words they should do what\nthey would be happy for everybody to do\nin similar situations this is very candy\nand if you know your Immanuel Kant\nmore factors are affecting an attackers\ncapacity and that is the ability of the\nattackers to absorb and apply the\nresearch and of course this has got a\nperson considering whether to disclose\nsome research it's got to make their own\njudgments as to what they think\npotential attackers be able to do with\nit\nthe attackers attentiveness in\nconvention you know is that it is\nanybody paying attention nice I assume\nthat in the end nothing nothing goes\nunnoticed so we've got to assume that\neverything will be in will be noticed\nbut anyway they say will the research be\nread and understood by potential\nattackers this wall so depend on the\ndisclosure itself how much knowledge is\ndisclosed through what channel how has\nit presented and then answer the\nquestion of it is what you presenting\nsufficient to be used for harm does it\ncontain all that is needed to carry the\nbehavior at the other end of the\nspectrum adopting the research will\ninvolve\na large investment in resources and\ncomplimentary knowledge in other ways\nthis is a case where it would need a\nhuge effort\nlike for example knowledge to building a\nnuclear weapon might involve a large\ninvestment in resources and\ncomplementary knowledge and so you might\nthink that it's safe to publish if you\nthink that that is out of somebody's\nreach know there's a line missing at the\nbottom so I'll just look through and see\nthat might be\noh yes the last factor which I'm sorry\nyou can't see there my poor skills in\nmaking slides the last factor is\ntransferability the ability of knowledge\nthat promotes good ends to be\ntransferred to bad ends\nmoving on there are factors affecting\nthe defenders ability defenders means\nyeah any anybody who seeking to combat\nsome weakness in technology or some\nmisuse of technology so it could be\nsomebody inside the organization\nproducing the AI material or it could be\noutsiders users hackers spies\nand disclosure could aid defenders by\ndisseminating ideas that are useful for\nfinding solutions Orca simply sound an\nalarm or see some examples of that in a\nmoment\nsuccess depends on a number of factors\nonce again counterfactual position comes\nup with once again with the defenders\nindependently to discover or otherwise\nobtained the knowledge how easily could\nthe defenders have got to that insight\nthemselves with the defenders have\nalready been aware of the problem you\ndon't want to announce a problem which\nnobody else knows about as a sera me at\nonce again the ability to absorb and\napply the information as with the case\nof bad actors and then if you give the\ndisclosure make the disclosure how many\nadditional individuals or organizations\nwill work on finding a solution you\nthink you can mobilize a lot of people\nthat's an ikemen in favor of disclosing\nthe positive effects of disclosure\ndepend on the potential for a good\ndefense against the misuse is the\nweakness agent of the fundamental\nsouthern system\nwhat could a relatively superficial\nchange removing it\nis the attack that should really say is\nan attack detective law and its\ndetection sufficient to defend against\nit or to deter it is detector powerful\nbut it will overwhelm any defenses so\neven where a solution to some problem\nwith your disclosure exists it might be\ndifficult or costly to propagate that\nsolution that is a factor you have to\ntake into account\nokay so we're still continuing so at\nthis very high abstract level we'll get\nmore concrete I promise you what sorry\nso they just talked about the offense\ndefense balance for misuse risks that\nhave a higher potential harm\nfor example of ulnar ability in facebook\ncould be very harmful whereas well\nvirgin a much smaller website would have\nless consequence so in the case of the\nhigher potential harmless security\nconsequences of disclosure will be\namplified and AI researchers confuse to\npublish more or less of their basic\nresults and their insights or their\ndetailed results their code their data\nsays the train networks they can choose\nto publish will not publish tools\neasy-to-use tools that will assist\npeople outside these are things within\ntheir control different outputs will\ndifferentially benefit different actors\nso a publication without practical tools\nor code will be more difficult for low\ncapability attackers and defenders to\napply\nresearchers could could attempt to play\nsafe to give their release a defensive\nbias in other words being cautious like\neventually publishing defensive tools\nand best practice as opposed to no\noffensive controls and I suppose less\nthan best practice and they can also\npossibly the most things is that they\ncan attempt to circulate certain oral\ntools exclusively among the scientific\ncommunity in other words trying to have\na privileged circle with wind with which\namongst young you circulate certain\nknowledge interesting question as to how\nlong that would last or maybe it will\njust give you enough time to develop\nsome sort of defense against what other\npeople would\nthere now contrast different fields are\ninterested in how a I might different\nfrom mother might differ from other\nfields then they say that that their\nframework helps to explain why the\ndisclosure of software vulnerabilities\nwill often be beneficial for computer\nsecurity in other words why disclosure\nhas become the norm with software\nvulnerabilities one factor is that\npatches to software are often easy to\ncreate and nothing can be made in a\nmatter of weeks so if so fixes are easy\nto mange generally these patches are\nfully resolved the vulnerability they\ncan be completely efficient the patch\ncan be easily propagated and independent\ndiscovery of the vulnerability is likely\nso if that's likely then you might as\nwell disclose these factors combined to\nmake their reasonable arguments in favor\nof public disclosure of software\nvulnerabilities at least after the\nvendor has me given time to prepare a\npatch contrast this with other fields\nfor example biological research into\npathogens if you create a new virus and\nif you release information about its\ngeneral same reason samples it's\ndifficult to find vaccines against new\nviruses or treatments against their\neffects it's it's laborious and success\nis not guaranteed\nvery contemporary so this lowers the\ndefensive benefit of publication it\nweakens this argument the public that's\nyou know that knowledge is good public\nknowledge is good this contrasts with\nthe case where an effective treatment\ncan be developed within a reasonable\ntime period which could weigh in favor\nof publication\nthey give a couple of examples of\nvulnerabilities involving hardware they\nmentioned drones drones are now doing\nvery widespread they used a lot they're\nsometimes used maliciously they have\nbeen used in attacks in the Middle East\nand I don't know of any physical attacks\nyou know that's an actual war locations\nbut anyway they they presently lack a\ncheap effective solution according to\nthe authors obviously you can shoot them\ndown but they're very cheap easy to\nreduce in large numbers and you can't\nguarantee almost to hit them before they\nhit their target they also give another\nanalogy which they is sort of them using\nand interesting in the hardware field it\nseems a lot of hotels apartment\nbuildings offices and so on still use\nphysical key systems keys individual\nkeys for individual rooms and a master\nkey no planning room I thought everybody\nwas using electronic swipe cards\nnowadays but apparently not so already\nstop back in 2003 maybe in 2003 some\nsome\ningenious fella published a system in\nwhich made it easy for someone to create\na master key from a single example of\none non master key one room key say he\nshowed how it was possible to make a\nmaster key from there and he was kind\nenough to publish for details everywhere\nand locksmiths and people who ran large\nbuildings were not used to this at all\nbecause is somebody\nexplanted of this information then\nreally only you can do is replace your\nwhole key system with a with a non\nmaster key system so your very expensive\nI don't know if there's been any\nprogress since hand on my combating\nmatter where they just had to give way\nto it whether everybody's just switch to\nswipe cards another example of hardware\nvulnerabilities is the question of\nwhether engineering research nuclear\nengineering research such as uranium\nenrichment which is only just one one\npart of the whole process of making\nbombs whether that should have been\npublished I don't know if anybody was in\nfact arguing for publishing it but the\nreasons that militate against publishing\nit simply that you increase the ability\nof a an opponent to make bombs and\ndestroy your own city's nuclear bombs\nare a technology against which there's\nno effective defense and that I mean the\nbest-known defense is deterrence\nthose technologies of Defense that do\nexist would benefit very little from\nknowing about one piece of the offensive\ntechnology such as uranium enrichment\nfor example in other words a blueprint\nfor the design of the centrifuge does\nnot have one build a better defensive\nbunker so their points here I think is\nthey're saying this is another case\nwhere publishing some some information\nabout how to build Center uses\ncentrifuges that's not going to help you\nbuild up your own defenses in any way\nyou're not going to get a bunch of\nhackers coming back and saying okay now\nthis is how this is how you can improve\nyour defense in my only better defensive\nbump is this recession no connection so\nthis is so that is a consideration\nagainst publishing\nand say they say that if both the\npotential defender and a potential\nattacker are given knowledge that helps\nthem build nuclear weapons that\nknowledge is more useful for making an\nattack than for protecting against an\nattack the knowledge is but jargon is\noffense biased as just to introduce\ntheir offense biased although of course\nthere is still the question that you\nokay so it helps you build it helps the\ndefender build but the talent which is\none form of defense okay excuse me\nright\ndisclosure ones what they're working\ntowards is you know sort of an ethics or\ncodes of practice norms for disclosing\nor not disclosing in pieces or\ninformation and they are saying there\nare loads of diversions that vary\nbetween different fields I'll hurry\nalong a little bit here but they\nmentioned obviously the Manhattan\nProject or is very secretive more so\nthan the locksmiths perhaps with some\nmore secretive nowadays more secretive\nthan influenza researchers because\nthey've learned senator sessions tend to\nto share their their knowledge because\ncurrently I guess they're not too\nconcerned about people using their\nknowledge for biowarfare maybe maybe\nmaybe that will change in the near\nfuture anyway these classes of\nresearcher are more secretive than those\nwho find abilities in software they said\nthere was a culture clash between the\nresearcher who published the floor in\nthe master key system and there was the\nlocksmith so huesemann of being\nirresponsible the different disclosure\ncultures exist in the form of default\npractices they're different places in\ndifferent areas but also in what they\ncall common refrains by which they mean\nstandard phrases or cliches or tropes\nfor example language about the virtues\nof studying a problem or the value of\nusers being empowered by disclosures to\nmake decisions for themselves they're\nsaying that this sort of language comes\na little too easily and we really need\nto think about it in a specific context\nsuch language embeds implicit answers to\nthe questions we raised caution should\nbe exercised when importing concepts and\nlanguage from other fields\nokay now they come to discuss AI\nspecifically to the extent that\nprotections against AI misuse required\ninterventions in social systems then\npublishing AI research will have a more\nlimited defensive benefit oh come on\nthey they have more same of this AI is\nespecially prone to interfering in\nsocial systems because AI involves\nautomating activity that normally\nrequires human intelligence the very\ndefinition of AI for example or\ncriminals used artificial speech\ngeneration to make the voice of the CEO\nof a company over the telephone\nthis attack exploits the fact that the\nsound of someone's voice is strong\nevidence of their identity thus the so\ncalled vulnerability here is our\npractice of relying on people's voices\nas evidence of their identity this is\nsocially useful it's deeply ingrained\npractice we we'd hate to not be able to\ngo on doing that excuse me\nthey say it some have responded by\nsuggesting that the research community\nshould simply quote warn Society unquote\nbut individuals may be increasingly\nshown and worth untrustworthy content\nbut the language of quote let the users\ndecide for themselves\nunquote which is reminiscent of computer\nsecurity discourse I'll take the word\nfor it parameters would lose its\nempowering sentiment if users become\nlanded with problems which no good\nsolution exists in other words they're\nworking towards their conclusion but\nreally the punters are just not in a\nposition to decide for themselves in the\nfield of a high\nthey say our key suggestion is that AI\nvulnerabilities will be on average\nharder to patch that software\nvulnerabilities are and so our high\nresearchers cannot assume that AR\npublication would always have a\ndefensive bias in other words will not\nalways be for the good now they say a\ncommon response is that an AI model that\nis useful for an attack could similarly\nbe useful for detecting these attacks\nyou recall this was the reason why\nGrover was published their services to\nhelp us learn how to defend ourselves\nagainst fake news generated by a I\nhowever are our authors so Menand athos\na but offensive AI systems can often be\ntrained against detection systems so as\nto evade them thus when items generated\nby Grover were pre filtered by the\ndetection system the remaining items\nwere harder to detect as you can imagine\nthat seems extremely plausible so I\ngather that they generated a whole bunch\nof same fake news articles they threw\nthem against a detection system\npresumably also automated couldn't\nequally be human beings I suppose some\nsome were filtered out and that system\nbut what was left was harder to detect\nthat's not very hard to doubt is it ok\nso this is one reason for not making the\ndetection system freely available\nyeah so continuing to apply that\nframework to AI secondly even in cases\nwhere sorry this is following\nimmediately from the previous points\nyou remember people were saying that\ndetection systems in AI can be useful\nfor defending us against malicious AI\nbecause obviously not said brokers and\ntherefore the disclosure of detection\nsystems is a good thing they are saying\nthey're continuing by saying another\npoint against this is that even where a\ndetection system can in theory detect\nthe area's activity it may not be\nfeasible to deploy it for example in the\nparticular case of detecting AI\ngenerated voices it might require a lot\nof surveillance of calls a large part of\ncomputation and a lot of false positives\nbeing thrown up in fragging ordinate\nconversations okay\nnow there's a discussion of the\nindependent acquisition of AI knowledge\nand they apply it down to the middle of\nthe page they apply it to the example of\ntext generation and they say but where\nthe risk comes from a big actor a\nstate-led disinformation campaign\nobviously they're thinking of the\nalleged Russian think news campaigns in\nthe 2016 US election presidential\nelection which at least half of America\nis firmly convinced was responsible for\nthe outcome of the election when the\nrisks come from a state-led\ndisinformation campaign one uncertainty\nis how much these state actors would\nbenefit from the research being\ndisclosed because if not much because\nthey're a big and powerful research\norganization themselves then the risks\nof disclosure are less in other words\nthe the costs are less anyway\nnevertheless this doesn't mean that you\ncan cheerfully disclose secrets of say\nthank news detection even on the grounds\nthat you know tsunami is\nit's so well equipped but you're not\nhelping much\nwe have to consider that there are other\nactors with less access to a I talent\nand AI compute and there may also\nperhaps be very few actors outside the\nAI research community Joe suppose means\nthe respectable Western community in\nthis case there may be people outside\ncapable of having the original insights\ncontinuing the paper\nnot so many geniuses out there\nand so forth in some cases we large\ntechnology companies need to prepare\ndefenses okay III won't go on it to\nunravel that thought I've got I want to\npush on today conclusion the conclusion\nis that our analysis should aid AI\nresearchers in thinking critically about\nimplications of publishing work that\ncould be miss EULA's the community\nshould grow is tool lots of analogies\nand concepts so that disclosure norms\ncan be well-designed I think that's what\nyou could say about this paper that it\nis talking about analogies and concepts\num these are worse like empty boxes into\nwhich you can drop specific concrete\nconsiderations when you're thinking\nabout a different problem and so they're\nthey're almost talked about the language\nof this discussion now one challenge\nthey say we'll be building disclosure\npolicies in accordance with the\nlegitimate and effective norm setting\nprocess so that's what we're trying to\nwork towards and sort of and I think I\nsuppose you could say of disclosure\nand not just then I think but also a\nbunch of agreed agreed\npolicies and principles in connections\nwith disclosure they also point out but\nthe security impact of disclosure is\ninput although it's important it should\nonly be considered on site a host of\nother considerations in other words\neverything we've been talking about in\nthis paper I still only one aspect what\nyou should be thinking about when\ndeciding when to disclose sorry that\nthought at the top is a slightly\nfloating thought and a publication must\nbe able to scale and really adapt a more\npowerful and capabilities with what the\nthing is that we must bear in mind well\nany policy or principle relating to a\npublication has got to be able to adapt\nto future more powerful capabilities\nalong we mustn't mean to limited by what\nwe can do today\nand then go on to say our framework\nshould only be one input to an\nassessment of publication norms because\nthere are potential quotes non-security\nbenefits\nscientific publication using include you\nknow overall the normal benefits of open\npublication contributions to economic\ngrowth and the quality of life\nthe advancement of science and its broad\nsocietal benefits better monitoring of\nscientific investments in other words\naccountability of investment it leads to\ninternationalism and cosmopolitanism\nnext rule minded it leads to greater\ncivilian control and involvement in\nscience\nand also there are other tools for\ntackling harmful AI not just regulating\ndisclosure the research community can\ndifferentially invest in those projects\nand tragic trails that I'm a socially\nbeneficial researchers can invest extra\neffort in understanding and mitigating\nthe potential harmful uses of their\nresearch in a place that being\nresponsible researchers from the\nbeginning before they is closed there is\nnot and they can their efforts can\ninclude the crafting of norms and\npolicies to steer the use of AI and so\nthat said and that is the end of their\nframework well thank you very much Chris\nfor your presentation", "date_published": "2020-04-30T21:03:40Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "a3dfe9b5b03dc5d178a652b31c719a23", "title": "197. Ben Garfinkel on Scrutinizing Classic AI Risk Arguments 2", "url": "https://www.youtube.com/watch?v=j-_FvJ-XbWA", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "Hello and welcome to session 197 in the AI\nSafety Reading Group.\nTonight we’ll continue with the podcast\nof Ben Garfinkel on Scrutinizing Classic AI\nRisk Arguments.\nBen Garfinkel is still a research fellow and\nHowie Lempel, Robert Wiblin, and Keiran Harris\nare the three people on the 80000hours, the\npodcast team who has worked with this podcast.\nThis was published almost two months ago,\nbut recorded about a year ago.\nToday we’re discussing the second half,\nstarting at the 1 hour 43 minutes mark, the\nsection called “Instrumental Convergence”.\nInstrumental Convergence is criticised in\nthe following way: It would be possible to\nmake a methodology for predicting technology\nby looking at all the possible combinations\nof features that it might have, and see if\nmost of the ways to make this technology involve\na particular property ; probably when we make\nthis technology, we would make one that has\nthis property.\nThis, I feel, is not really how the Instrumental\nConvergence is argued in most cases in the\nclassic presentations.\nIt’s not that most ways have this particular\nproperty, it’s that either we’re in a\nvery, very degenerate case, or we in a case\nthat explicitly something positively aligned,\nor we would have these instrumental convergent\ngoals.\nThe way Ben Garfinkel argues for “most of\nthe ways” is by analogy of a plane with\neither open or closed windows.\nHe further argues that this is the methodology\nthat’s used to argue for instrumental convergence\nin the classical AI risk arguments.\nThis way of looking at kinds of technologies\ncan probably be formalized with a bitstring,\nwith, like, there are this number of descriptions\nand the ones that are most of - you can just\ncount them - has this property, then that\nis what would happen.\nBut that is empathetically not how it is commonly\ndone in the classic AI risk arguments.\nThe first on instrumental convergence is by\nSteve Omohundro: “The Basic AI Drives”,\nwhere Steve Omohundro uses a definition and\nan argument structure that is much closer\nto the one I have up here which does not have\nthe word “most”, but saying that either\nwe’ll design very, very particularly, or\nit will have the instrumental convergent subgoals.\nIt’s also not how it is done in Bostrom’s\nbook, “Superintelligence”, where instead\nof looking at all possible programs, we’re\nlooking at not just the space of all possible\nprograms, but the space of programs that have\nbeen filtered by humans that are searching\nfor AI, that has some kind of goal that the\nhumans are interested in, and where the AI,\nit may not have the goal, but it will look\nlike it has this goal.\nI think probably this is what Ben Garfinkel\nis arguing you should do: Instead of just\nlooking at all these possible programs, we\nshould look at ones that look like they are\npromising, and this is what Bostrom does,\nand it doesn’t really solve the problem.\nMost of the ones that look good, still are\nnot perfectly aligned, and are subject to\nthe basic AI drives.\nAlso there are some problems in the transcript\nthat aren’t clear, there were some parts\nthat I couldn’t quite parse.\nThis is the 3rd major argument for the problems\nin the original AI risk arguments and, these\narguments are entangled in the following way:\nAI researchers are likely to steer towards\nsome designs that are safer, and that’s\nbecause we would have much more smooth, and\nnot, in particular not an intelligence explosion,\nbut a more smooth scenario.\nThat means we would notice the problems while\nthey’re small, and the alignment research\nwill be automatically improved along the way\nbecause it will be a requirement for capability\nprogress.\nAnd in this way, the arguments for or against\nAI risk mutually reinforce each other.\nAnd in particular, the take-off speed act\nas some kind of multiplier or intensifier.\nThis means that if you have some model of\nhow the future’s going to be, and then you\njust tweak these small parameters even just\na tiny bit - if you are ten percent more optimistic\non a rather small number of cases you can\nend up with a dramatically more optimistic\n- or pessimistic, for that sake - conclusion.\nI think, if you are in a situation where the\narguments are entangled in this way, that\nshould mitigate against strong confidence\neither for or against, because if, if you\ncan suddenly get into a…\nBen Garfinkel doesn’t need to be very wrong\non many of his assumptions, as long as he’s\nwrong on several of them, then they will mutually\nreinforce and then suddenly we could have\na serious AI risk, and also, of course, the\nsame for people who are certain that there\nis doom, because if we would have a bit more\ntime and if alignment progress is a bit more\nrequired, then a lot of things would look\na lot lighter.\nThen there is the question of neural networks.\nHowie Lempel asks: if the space of possible\nAI designs is very dense with bad scenarios,\nso even if you change a tiny bit, and don’t\nget it alignment one hundred percent, then\nyou might end up in an awful scenario.\nBen Garfinkel answers in the following way:\nby looking at neural networks where if we\nmake some small perturbation to the neural\nnetwork, change one weight or something like\nthat, then generally the behavior is mostly\nthe same.\nAnd I believe that is true.\nBen Garfinkel is not completely certain about\nthis but I believe it’s true for capabilities,\nbut we don’t really care much about capabilities\nwhen we talk about the instrumental convergence\nthesis.\nWe talk about goals.\nAnd inded for goals, small perturbations can\nbe very, very problematic.\nThere is the “complexity of values” thesis,\nwhich holds that small perturbations can destroy\nall… everything of value.\nThere is also the fact that, for instance\nsomething like a utility-maximizer and a disutility-maximizers\nare extremely close in bit-space, you only\nneed to basically flip one bit, and the actual\nbehavior is dramatically different.\nOne of the things in particular that will\ncause substantial problems, or could cause\nsubstantial problems, is that very close to\nan AI goal-set what we approve of is an AI\ngoal-set that humans will actually fight against\nbecause we don’t approve of it.\nAnd that means that the capabilities in the\nactual scenario of what will happen with an\nAI, it depends intimately on whether we will\ncollaborate with it or we will fight it.\nAnd that means that even quite small changes\ncould have dramatic threshold effects.\nBen Garfinkel also argues that changes in\ndistribution, which is another thing that\ncould happen, will give incompetent behavior\nrather than dangerous.\nThis is something that could happen.\nThe classic AI risk arguments are in particular\ninterested in the changes in distribution\nwhich is relevant to whether the AI perceives\nthat it will be able to make… obtain a decisive\nstrategic advantage.\nAlso distributional shifts like, for instance,\nwhat precisely corresponds to a human or something\nlike that, could invalidate all alignment\nwork.\nThat’s also something that we need to look\nout for.\nThen there is quite a bit about mesa-optimization.\nThere is a description of this.\nSome of it is correct and most of it is probably\nnot quite as correct.\nBen Garfinkel states many times that he’s\nactually not really sure about this.\nAnd that is something, it’s not really new,\nat least moderately new.\nBecause Ben Garfinkel had this broadcast a\nyear ago, so back then it might have been\nnew.\nBut I think there are very serious problems\nwith Ben Garfinkel’s description of mesa\noptimizers.\nI think the key point is that mesa optimizers\nare about learned models that are themselves\noptimizers.\nAnd this here is a classical image.\nAnd there is nothing in the description that\nBen has… that really relies on this in any\nway.\nIt’s usually just the standard problem that’s\nsometimes called “outer alignment” here.\nBen doesn’t say anything really that leads\none to… that forces you to conclude that\nhe actually, you know, understands mesa-optimization.\nIt should be said here that when talking about\nthe evolution~human analogy they do get closer\nbut there is nothing really about mesa-optimization\nin what Ben Garfinkel is saying.\nBen Garfinkel also talks about the relationship\nbetween mesa-optimization and distributional\nshifts and why we care about those, and I\nthink also that description… if I should\ndescribe why we’d care about that, that’s\nbecause if we have a mesa-optimizer that is\ndeployed in training, then this would actively\nbe looking for a distributional shift that\nwill show that it’s no longer training but\nnow it is actually capable of performing a\ntreacherous turn.\nIn this case the small perturbations that\nwe talked about could be really, really interesting,\nbecause here we have an intelligence that\nis adversarially looking for these small distributional\nshifts.\nAnd in this case even very small things can\nbe very problematic.\nBut this might not really be relevant to what\nBen Garfinkel is saying.\nIt’s somewhat unclear from the transcript,\nthis part, unfortunately.\nOn the other hand one thing we could say is\nthat mesa-optimization obviously is not part\nof the classic AI risk arguments because it’s\na lot newer.\nSo why is it that there is so much talk about\nthis?\nWell, I believe that if Ben Garfinkel had\nwritten a formal article then probably there\nwould not be so much talk about mesa-optimization.\nBut the question posed by Howie Lempel: “What\nare the best counter-arguments to your own\narguments?”, I think that it’s a really\nhard question and this is the context where\nBen Garfinkel talks about mesa-optimization.\nAnd I’m not really sure I like this kind\nof question.\nIt’s really hard to skip over even if you\ndon’t really have anything good to say because\nyou feel you’ve already answered the main\npoints.\nThen there is some talk about what Nick Bostrom\ncalls “perverse instantiation”, even though\nthat’s not really the word Ben Garfinkel\nis using.\nBen Garfinkel states that there are ten pages\nin the book Superintelligence about how these\nbenign-seeming objectives actually are really\nproblematic.\nSo I looked in Superintelligence: I have around\none page about this, two if you are generous.\nBut Ben Garfinkel argues that this kind of\nthought experiment doesn’t really tell us\nmuch about what the future will actually look\nlike.\nAnd this is something that is acknowledged\nexplicitly by Bostrom: These are illustrations\nof why naive alignment methods fail, and not\nmeant as any kind of prediction.\nThe argument by Ben Garfinkel here is that…\nonce we have a very powerful AI it would probably\nbe based on Machine Learning and not on feeding\nEnglish language sentences.\nIf we wanted to have powerful AI based on\nEnglish language sentences it must have common\nsense understanding.\nAnd if we have that, then this common sense\nunderstanding will also be applied to the\ngoals.\nThis calls back to the brain-in-the-box-confusion\nthat we saw in the previous session.\nBecause in the classic AI risk arguments,\nthere we were talking about a seed AI that\ncan improve itself - rewrite its own source\ncode but it does not have any kind of common\nsense.\nAnd then if it’s given a goal, then the\nliteral interpretation of these goals will\nbe locked in during this process and will\nnot be changed.\nBen Garfinkel has an analogy from a movie\ncalled “Explorers” which is about people\nwho receive some requests, and then horribly\nmisinterpret them.\nAnd if an AI is trained on that, then what\nwould happen?\nI’ve tried to figure out what this is a\nreference to.\nThe closest I could get is “Dora the Explorer”\nbut from the Wikipedia article not really\nsure whether that’s that.\nSo I would be interested if anyone knows about\nan “Explorer” movie where anything like\nthat happens.\nThen there is the part about the “Treacherous\nTurn”.\nAnd there is a description about the Treacherous\nTurn - again it’s relying on mesa-optimization\nwhich is explicitly not something that the\noriginal description does.\nThe description of the Treacherous Turn is\nalso strictly assuming a smooth scenario.\nAnd in the smooth scenario we would expect\nto see a number of failed attempts at deception,\nor some deception that exists but isn’t\nreally problematic, or at least not catastrophic.\nAnd I’m not quite as optimistic as Ben Garfinkel\non how we will see this and how we expect\nto see this.\nBecause if we have, you know, just a neural\nnetwork, then figuring out if it is really\ndeceptive or not, you know, it’s really\nhard to peer into these kind of learned models\nto figure out things that are on the level\nof whether it’s deceptive or not.\nAnd in particular if an AI attempts a takeover\nwithout deception, then it’s very likely,\nof course it’s dramatically more likely,\nthat you’ll notice it in this case.\nAnd unfortunately the answer is that, yes,\nit is very, very likely that if the AI notices\nit can take over the world, and then takes\nover the world without at any point deceiving\nanyone, then of course we’ll notice it,\nbut it will also have taken over the world.\nSo it’s not really a lot of help that we\nwill see it.\nThis Treacherous Turn, how dangerous that\nis, relies on how smooth the world is.\nAnd indeed, as the transitions become faster,\ntreacherous turns become more dangerous.\nBen Garfinkel has the following statement\nhere, that’s not really related to the treacherous\nturn, but: “I have a lot of trouble understanding\nhow you’d get to the point where an individual\nsystem is powerful enough to destroy the world.”\nI think this general framing is really problematic\nbecause the way I normally try to summarize\nthe argument given in Superintelligence, I\ndivide it into six parts, like A implies B\netc.\nAnd if you zoom out all the way, A to G, then\nit does seem like a big leap to go from “We\nhave an AI”, and suddenly “It has taken\nover the world” - that does seem unlikely.\nBut, you know, the point of scrutinizing the\nclassic arguments is that you have to go into\nall these small sub-implications, and see\nwhether those are actually true or likely.\nThe intuitive feeling that the entire thing\nseems unlikely - it might still be useful\nbut scrutinizing the arguments themselves,\nI believe, is more important and that’s\ncertainly what we’re supposed to be doing\nhere.\nHow many systems will be treacherous, again\nif we are in the smooth scenario so there\nis no intelligence explosion?\nIf we imagine that there are millions of machine\nlearning systems deployed all over the world,\nmany kinds of, you know, capabilities and\nmotivations etc. then it would be really surprising\nif a lot of those were treacherous and we\ndidn’t notice that at all.\nThis is of course true, but the classic AI\nrisk arguments don’t really talk about the\nsmooth world.\nThey talk about the world where things are\nmuch more “lumpy”, and in this case of\ncourse we would notice the first treacherous\nbut then it would be too late.\nBen Garfinkel, in the smooth world, gives\nan example, specification gaming - we talked\nabout this in the previous session - and he\nsays “this is an example of lying, and not\ngetting away with it”.\nAs I see it this argument doesn’t really\nwork because: If it was a problem we would\nnotice it, and we currently notice it, so\nit’s not a problem.\nI feel this is the way with the specification\ngaming: We notice that there is specification\ngaming.\nSo AIs obviously… if you believe the specification\ngaming is an example of lying then AIs obviously\nlie, right?\nAnd then there is obviously a problem.\nBut as I said in the previous session I don’t\nbelieve that specification gaming is actually\nlying.\nI believe that you… in order to make what\nis actually a lie, you need to have a model\nof what does the person that you are talking\nto, communicating with, what does he believe.\nAnd if you don’t have this kind of theory\nof mind, then you’re not actually lying.\nHowie Lempel again tries to put the discussion\nback on track: “What if the AI doesn’t\nstart lying until it has a high confidence\nit can take over?”\nBen Garfinkel answers: “Well, even if the\nAI is ninety percent certain then if there\nare millions of systems, then sometimes we\nwill catch them.”\nAnd that’s of course true but it’s not\njust lying, it’s that it has a high confidence\nthat it can take over, and of course if an\nAI, in particular a superintelligence, has\na high confidence that it can take over, then\nthere is a very great risk that it will indeed\ntake over.\nAnd then it will not take millions of AI systems\nbefore they take over, it might only take\none.\nI think, actually there is an argument here:\nyou could argue that if the first AI that\nis, you know, a general AI or at least has\na theory of mind so it’s capable of thinking\nabout, you know, whether it should do a treacherous\nturn, and believes it has an extremely low\nprobability of taking over the world.\nThen I think if it’s a paperclip maximizer,\nit would try to do that, even though it’s\nvery likely to get caught.\nAnd I think something like that might be possible\nfor a honey pot or something like that, you\nknow, to catch these AIs.\nI think there might be something there.\nAbout lying: Children are slightly cognitively\nimpaired agents, I think that’s a great\nway to put it, and children lie a lot, and\nthey often get caught.\nAnd this seems to imply that you need to learn\nhow to lie, and you can’t be good at lying\nuntil you have actually had experience lying.\nThe obvious objection to this line of thinking\nis that we don’t really know -at all.\nWe don’t know if a superintelligence that\nhas never tried to lie will actually be good\nat it.\nI think I would expect it to be very good\nat lying even though it has never done that\nbefore.\nThere is some fictional evidence you could\nput forward but I think the main point is\nwe just have absolutely no clue.\nOn the other hand you could say: how hard\nis it to lie?\nBecause at time T we imagine a robot saying\n“I am an A-maximizer”, you could say that\nas a human utility maximizer, and then after\nthe AI has reported that to the human it thinks.\nAnd it thinks, actually, hmm, it’s only\nmaximizing smiling faces or something like\nthat.\nSo how can it lie?\nHow can make the human believe that it is\nstill an A-maximizer, a human utility maximizer?\nWell, actually, it’s really simple because\nat timestamp t+1 it just repeats what it said\nat time t.\nSo this is actually a really, really easy\nway to lie if you’re just supposed to first\nsay one thing genuinely, and then later repeat\nthe same thing as a lie.\nAlso we imagine that once the AI is at the\nlevel where it can improve itself it probably\ncannot conceive about lying, but at the superintelligence\nlevel, then it almost certainly can, and the\nquestion is: Will anyone even think to ask\nit in between these two points?\nThere is another kind of argument that comes\nup here where Ben Garfinkel points out that\nthe treacherous turn doesn’t explain why\nthe AI would either have power or want power.\nAnd this is of course a different part of\nthe argument.\nThe argument is commonly structured in several\nparts and for each part it is true that it\ndoesn’t explain the others, so in that way\nyou could put a question mark on each sub-part.\nBut I believe this is a logical error: Each\nsubpart of the argument only explains what\nit’s supposed to do, and the fact that it\ndoesn’t explain the other parts is irrelevant.\n(Hold on, let me just see if I can get this\nWindows taskbar away.\nYeah, I could, great.)\nOK, so, we don’t really have very tight\nand perfect mathematical arguments for AI\nsafety.\nAnd if we ought to put a lot of resources\ninto managing AI risk then would really, really\nlike to have very, very good arguments for\nboth about whether there is a risk, and more\nconcretely what it would look like, and of\ncourse also what can be done to mitigate it.\nAnd that’s just for the classic AI risk\narguments.\nSome of the new ones, like Wei Dais arguments\nfor instance, we haven’t anything that’s\nanywhere near as formal.\nTo this I can only say that I agree, and I\nbelieve that this is something that would\nbe really, really nice to have, something\nI care a lot about.\nGetting these arguments shaped up is, to use\nthe classical EA methodology, IMPORTANT and\nNEGLECTED.\nBut unfortunately I believe that it is not\nTRACTABLE.\nI quite simply believe that the book Superintelligence\npresents the arguments roughly as well as\nthey can be, and it’s just a general fact\nabout our epistemic state that predicting\nthe future is really, really hard.\nSo, assuming that Superintelligence does indeed\npresent the argument as well as possible,\nthen obviously it doesn’t convince everyone,\nand I would then say that probably there are\nno arguments that can resolve the debate as\nit is right now, unfortunately.\nWe just don’t know.\nSo I think the best we can do is to examine\ncounter-arguments like I’m doing here with\nBen Garfinkel because I don’t think we are\ngoing to ever be able to strongly resolve\nthis far in advance.\nBen Garfinkel has…\nThe arguments that Ben Garfinkel has presented\nhave influenced his advice to people from\nEffective Altruism.\nHe believes that, if you are a machine learning\nresearcher and you’re really interested\nin the long-term perspective, then you should\nwork on alignment.\nIf you’re working in governance and have\na strong interest in Computer Science and\nMachine Learning, then it’s a really good\nidea to work on AI governance.\nIf someone has many options, and alignment\nor AI governance are not really the best ones\nbut there are others that are better, then\nBen Garfinkel is uncomfortable advising them\nto work on AI safety.\nSo, this is really, really weak advice, and\nI think most people would agree with this\nbecause, you know, it’s equivalent to “Working\non what is your best fit”, but it’s really\nstrong words, right, if you’re a Machine\nLearning researcher and really interested\nin the long term perspective.\nThere is an excluded middle here, for instance\npeople who are closer to AI fields than other\nfields, but not extremely so, like they established\nresearchers.\nI think in this case we probably have three\narguments: We have the rigorous argument as\npresented in the book Superintelligence, then\nwe have some other AI safety arguments that\nhaven’t really been formally written down,\nand then we have the counter arguments which\nare also informal.\nBen Garfinkel says that the informal arguments\nfor AI safety…\nhe’s not confident, you know, changing his\nadvice based on that, but his own informal\nargument, he is, he does want to give advice\nbased on that.\nI think that’s, you know, minimal hypocrisy\nin the way that it’s very human to value\nyour own arguments that… if you believe\nthat informal arguments shouldn’t have much\nweight, then when they are your own informal\narguments then you kind of tend to give them\nmore weight.\nBut it is not an obvious way to do because\nthere is indeed a reasonably well established\nsocial phenomenon called the Information Cascade\nwhere the classical example is there is a\njar here, and people take turns drawing a\nball, and they have to state if they believe\nthe jar is mostly red or mostly blue, and\nyou can see what the people before you have\nchosen, and very often it makes sense if all\nthe other say it’s red and you pull out\na blue one, they you might want to say “actually\nI believe they are all red or mostly red”.\nOf course not all, but mostly red, because\nyou have the evidence from all the others.\nThis could be something that is happening\nwith AI safety in that people believe that\neverybody else has looked deep into the arguments\nand so derfer to them.\nOr maybe their criticism has been answered\nby a Google Doc somewhere.\nI believe that this is rather unlikely in\nthat people are not saying anything like that,\npeople are instead strongly pointing towards\nthe book Superintelligence as the canonical\nrepresentation.\nSo people are not hiding Google Docs or anything,\nbut explicitly pointing towards the book Superintelligence\nand saying “this is what I’m basing it\non” then, you know, you can actually read\nit for yourself - there is no particular private\ninformation.\nThen I have a…\nI think I will skip through this but point\nout that this is something you could actually\ninvestigate empirically by, you know, asking\npeople why they believe what they believe.\nNow, there are a number of concrete counter\narguments that’s been made against the book\nSuperintelligence including Robin Hanson who\nis mostly arguing using the outside view,\nKatja Grace who is also mostly arguing about\nthe outside view, Brian Tomasik who’s using\nthe inside view, and Paul Christiano who is\nboth using the inside and the outside view.\nI have in this reading group made similar\npresentations where I’ve engaged with all\nof these, and a number of them do have good\npoints but some of the major parts, unfortunately,\ndo have major flaws.\nI think it’s a sad thing that when these\nfour people and a number of other people make\ncriticism, then there is no formal structure\nfor capturing this, and when I make any kind\nreply to this, again there is no formal structure\nso it’s kind of just ends up in nothing.I\nthink that’s really sad and it would be\ngreat if we had some formal structure.\nAnd further, these kind of arguments, when\nthey are… these four people have made decently\npublic and reasonably polished arguments,\nbut “a lot of the arguments for or against\nare in private Google docs” is asserted\nby Ben Garfinkel, I actually don’t know\nwhether that is true.\nBut it needs to really move to something more\nformal because this really, really helps arguments\n- this makes arguments a lot stronger.\nAnd I agree with this and I think the big\nproblem for this, when I have to try that,\nfor instance, I try to do an adversarial collaboration\non SlateStarCodex on the intelligence explosion,\nand the problem is that one side has been\nwritten down very, very exhaustively, then\nthe playing field is extremely unlevel because\nthe counter arguments… one needs to reply\nto those, and that’s a lot of work needed\nto be done by one side and not the other,\nand that makes it really hard to get traction\n- at least in my experience.\nSo I don’t really know how to make a formal\nstructure with this kind of feedback.\nBen Garfinkel has quite a nice analogy: Zeno’s\nParadox, the paradox with Achilleus travelling\nfirst half the distance, and then one fourth\nof the distance, one eighth of the distance,\nand to some other point without actually reaching\nit.\nAnd there wasn’t really a clear explanation\nof what… it was always intuitively very\nwrong, right?\nYou would obviously get to this point at some\ntime, but what is the thing that is wrong\nwith Zeno’s Paradox?\nThis required quite a bit of mathematics.\nBen Garfinkel says that it wasn’t until\n1900, I believe, that in 1821 Cauchy published\nCours d’Analyse, which I believe introduced\nthe epsilon-delta definition that I would\nuse to explain Zeno’s paradox right now.\nBut I think the point stands, right?\nThis is something that was explained as a\nparadox for two thousand years where people\nreally couldn’t figure out what was the\nexact problem with this.\nAnd it’s really hard to articulate what\nare the issues with arguments that use fuzzy\nconcepts, like movement and distance for instance,\nbefore you had the mathematics for it.\nAnd I think this is why we should probably\nexpect something that is like the book Superintelligence\nwill have some mostly poor criticism.\nBut even though a lot of the criticism is\npoor, the underlying intuitions that people\nhave that it is wrong is something that needs\nto be respected.\nAnd I believe this is true.\nAnd you could actually say there’s something\nlike an offense-defense balance here, in that\nwriting a book about a bad prediction might\nindeed be easier than doing the work of criticising\nit, and pointing out precisely where it’s\nwrong.\nThe final point is: AI safety as a field,\nhow much money does this deserve?\nBen Garfinkel has a cute example of the film\nThe Boss Baby, which has it’s production\nand marketing cost substantially more than\nit has ever been spent on AI safety.\nAnd when he thinks about how much money should\nbe spent, he compares it to things like, you\nknow, pandemic preparedness, nuclear war or\nclimate change… these kinds of things.\nAnd he believes that the AI safety level is\nway too low and it’s hard to argue against\na five time increase, five times more than\nThe Boss Baby.\nSo I looked up the budget, 125 million dollars,\nso five times that would be 625 million.\nSo I tried to calculate if it was reasonable,\nboth with how many people would that fund\nfor how many years, and as a percentage of\nboth AI and the world GDP, and I kind of got\nstuck with this, I’m not an expert in this.\nSo I couldn’t figure out whether that was\nreasonable, or how you could argue for 5,\nor 25, or whatever.\nSo I tried to look for a formal calculation,\nI couldn’t find that either.\nI did look into the Oxford Prioritization\nProject which does something like this, and\nI think that’s probably a requirement to\nhave something like their Guesstimate Model\nin order to say whether this is actually true\nand whether this corresponds to the other\nprobabilities that Ben Garfinkel has given\nfor AI risk.\nThat is all for today.\nThank you and see you.", "date_published": "2020-08-27T21:02:31Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "9042306aed9c37fc4fda56fa18e91c5b", "title": "260. How Might We Align Transformative AI If Its Developed Very Soon 3", "url": "https://www.youtube.com/watch?v=8M-6xuLjb94", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 260 in the\naisafety.com reading group tonight we'll\nbe discussing the last part of how might\nwe align transformative AI if it's\ndeveloped very soon by Holden kanovsky\nuh Holden kanovsky is still the co of\nopen field and co-founder of givewell\nand we are discussing the last third of\nthis uh post\nI will start with a brief recap of the\nentire argument as well as recapping My\noverall thoughts on it so the idea in\nthis nearcasting scenario is for the\nhypothetical AI company magma to\ntry to\nuse the uh Advantage the lead they have\non transformative AI to somehow ensure\nthat we will not get any air catastrophe\nand the the five goals they uh that are\nsuggested by Holden kanovsky are\nhardening goals which I was very\npessimistic about alignment applications\nwhich I think is the best coordination\napplications which would be nice but\nseem really hard enforcement\napplications which I don't have much\nhope for and advisor application that\nseems like just hoping for the best\nso the way we saw uh that magma would\nmake it uh safe to pursue these\nobjectives is by having some AI that has\nsome high level meter behaviors one of\nthem is value alignment I was quite\nnegative about that Honesty which is\ndefined in its kind of low level that I\nthink is okay but not super important\ncourage ability which I think is\nabsolutely key legibility which I also\nfeel is like nice to have but I noticed\nthat there is a lot of things missing\nfrom this list uh like I don't know\ninterpretability and things like that\num so how do we get this intended\nBehavior well uh there are some facets\nof alignment that are described in uh we\nsaw last time which I was uh quite\noptimistic about and then I thought this\nwas the best part of the article where\nuh accurate reinforcement out of the\ndistribution robustness preventing\nexploits and testing and threat\nassessment seem like important steps\ntowards alignment\nand in order to help with this we will\nnow look at the key tools uh decoding\nmanipulating internal States limited AI\nsystems AI checks and balances and\nkeeping supervision competitive\nso of these tools actually I think that\nshould be perhaps like a\nlike decoding and manipulating internal\nStates actually turns out to help with\nAI checks and balances so this graph\nshould actually be somewhat more uh\ncomplex so this is quite a\nsimplification but I'm getting a bit\nahead of myself\nlet's talk about the key tools that uh\nthat magma will use to ensure alignment\nthe first is decoding and manipulating\ninternal States\num and we're assuming this is a uh a mix\nof deep learning and reinforcement\nlearning in some very unspecified way\num and how can we manipulate of course\nmanipulating deep learning\num is a lot harder than just looking at\nuh at the system from the outside uh in\na black box way\num and right now our ability to really\nfigure out what's going on in under the\nhood what is gbt3 really thinking is\nvery limited\num I\nI share I think I share uh Holden\nkanovsky's negative view on this but I\nwould uh note that in interpretability\nwhich seems to be a prerequisite for\nthis has a lot of uncertainty about how\nhow much progress are we making and a\nnumber of people are saying that we are\nin fact making good progress\num but a lot of people are also saying\nthat we're not making good progress it's\nreally hard to say\nso why does decoding and manipulating\ninternal States help well it would allow\nus if we can decode and see whether that\nis just\ndeceptive then there would of course be\na huge help we would be able to train\nbased on like how deceptive how honest\nhow goal directed how identic uh these\nkind of things would be very helpful to\nbe able to train on\nuh we could reverse engineer the the\nmodel into human readable code\nand then perhaps edit that and finally\num getting a better understanding of\nwhat's going on inside is probably going\nto be really helpful uh later in the\nprocess when we're going to talk about\nchecks AI checks and balances\num my view on this is that it looks like\ncurrently we are really far from being\nable to do these things\num that's at least my uh\nview so but um is not much more\noptimistic than me but he does that it\nis imaginable that this could work\nwhen I try to think about what it would\nactually mean that qt3 is reverse\nengineered into human readable code to\nme that sounds really Fantastical like\nokay I can imagine a lot of things but I\nwould give odds a thousand to one\nagainst uh someone having being able to\nlook Activity 3 and edit and change gbt3\ninto a human readable human\nunderstandable uh program where we can\npoint to line 420 and say that's why the\nconsequentialist reasoning is I I really\nreally really have a hard time seeing\nthat being possible\nthe next key tool is limited AI systems\nand this is something that doesn't feed\ninto like the only facet this feeds into\nis courage ability but credibility is\nthe facet that I think is most important\nso let's have a look at that\na part of that is to just have a very\nnarrow task another example of what uh\nuh\nway to make a limited AI system is to\nmake it myopic we could give it limited\nsituational awareness I think that is in\nfact the most promising thing and where\nI think more efforts should be spent\nright now\nthus the idea of process based\noptimization where you're not um\nrewarding or punishing the uh\num the reinforcement learning for the\nthe objectives when it's uh for the\noutcomes but for the processes\nand finally art has described a uh\nanother suggestion which is uh produced\nplans that look good to humans\num and that is according to Holden\nkanovski also a possibility and I\nsuppose it is a possibility but um like\nI uh skimmed or a description of this\nand also of course I read what Holden\nkanovsky said and like there is a really\nreally obvious problems with with\nproducing plants that look good to\nhumans in that of course that would also\nlook good to the supervisor so and when\nI read their description of this\nsuggestion none of them seem to like\ntake that into account or even mention\nit at all that makes me somewhat scared\nbecause you know just producing plan\nthat looks looks good to humans is\nsomething that will obviously strongly\nfail in the limit of very powerful AI\nsystems\nhas another really interesting point and\nthat is that right now we uh we don't\nhave a mature research process it's\nsometimes said that we are\npre-paradigmatic\num and once we uh are no longer prepared\nit magic then these kind of limited AI\nsystems could potentially contribute\nquite a bit to alignment\num and I think that is\num uh quite plausible and I think it's\nan interesting point I think we're kind\nof like assuming uh the conclusion in\nthat um\nuh uh like\num magma builds this AI system and they\nstart working on how to align it and\nthey accidentally hit the right Paradigm\nand then they use uh the AI to improve\naccording to this Paradigm\num and uh I can I can see the AI\nimproving according to the to a\nparticular Paradigm but I\num like how precisely would they find\nthat I think there is like a missing\nstep somewhere in in this strategy\nthen we come to checks and balances\nwhich is the idea that we have a number\nof uh not just one AI that is being\nimproved but a number of AIS that are uh\ntrying to criticize each other using\ndebate and things like that and\nhopefully they will be um not um we can\nget them uh\nvaried enough that they will be able to\nfind hopefully all the problems or\nenough problems that\num that we don't have a catastrophe and\nfor this uh holding suggestion suggests\ndifferent capabilities and incentives\nfor different AIS and this is something\nthat could potentially help in all of\nthe uh\num requirements for alignment\nand something that could uh potentially\nalso feed in directly and amplify the\nhuman made alignment research that has\ncome before\num these AIS have another advantage in\nthat they don't necessarily need to be a\nlot smarter than humans they but when\nthey're doing checks and balances\nagainst each other then because they\nthink in the same way that they might\nsee the same kind of exploits and that\ncould potentially make supervision\neasier\nHolden says that this should be a\ncentral element of Magma's approach and\nuh like it's a I agree that some version\nof this is probably\num close to our best shot\nthere are however uh false suitable\nproblems with this kind of uh AI safety\nversus type 8 and one of them is that it\nmight be very difficult for humans to\nadjudicate this picture is due to Steve\nbernus\nshowing a person the human supervisor\nwater skiing in between a rope that is\nuh pointed towards uh there are like two\nmissiles pointing towards each other and\nmoving uh slowly forward and this seems\nlike an extremely extremely unstable\nsituation\num and I think\nI share more of Steve's uh intuition on\nthis uh if these two are both uh\ndeceptive and are both very powerful and\nlike what happens if they both if they\nsuddenly start to accuse each other\nsky is a bit optimistic about this\nsaying okay if it turns out to be hard\nthis problem then um this is evidence\nthat the alignment problem is hot and\nthat may make it easier to communicate\nto other AI companies that they should\nbe wary about solving the alignment\nproblem and that this would be a real uh\nchallenge\nI'm not entirely sure that the other\ncompanies will see it that way because\nmagma is they might as well say that\nmagma is just incompetent at doing\nsomething hard that is outside of the\nthing that they are basically\num like they're world leaders in\num in human feedback on diverse tasks\nbuilding AIS based on this Paradigm and\nnow they're trying to do something\ncompletely different and they're failing\nat it well that may not be because it's\nvery hard it may just be because they're\nnot competent and Facebook AI or some\nother\num AI lab may say oh they will surely be\nable to handle this and just push ahead\na key problem for uh AI checks and\nbalances is what Holden kanovski calls\nAdvanced collusion the idea that so the\nAI that is\num being monitored and the AI that is\nsupervising will collude somehow against\nthe human supervisors\num\nthe hope is by having many different at\nleast some of them will not perhaps be\nable to uh see this uh this deception\nwhere some of them will not be able to\ninfer the in the deception some of them\nwill not see that there is some kind of\npayback where they if they collaborate\nagainst the human they will get some of\ntheir value function satisfied and some\nof them of course may not join the\ncollusion and may read it out to humans\nthat would be really nice\num\nthey will probably be helped a lot with\ncovered communication channels so it's\nimportant to be on the lookout for those\num\nwe saw previously that\num\ndecoding internal States would be\nhelpful for this and it probably is\nreally helpful but we should also have\nsome AIS that don't have this kind of\naccess because then they can't see uh\ncredible commitments\nbut a castle trade is still something\nthat is possible\nand this is something that gets worse\nand worse the more advanced the AI we\nhave I would imagine that once you have\nan IQ of I don't know 150 you can invent\na causal trade but because it's like\nHumanity's current work on a causal\ntrade is published the ass know it and\nfor this reason you don't really need\nthat high of an IQ to understand the a\ncausal trait when there's literally a\nPayPal you can look up\num so definitely at the human level we\nshould expect this this kind of problem\nto appear if not earlier\nand of course holding also\num concludes that this is in fact a\nmajor open problem\num so what kind of values make collusion\nmore likely well if the AIS have like\nliterally the same or very close to the\nsame utility function then that is\nlikely to make it more like uh then they\nwould obviously collude against us\nHolden takes as a an example of a really\nbad goal that AIS as a group has a high\nreward\num and I agree it's a um a bad goal but\nI also think it's the kind of goal that\nwe are unlikely to get by accident it\nseems like just an obviously trivial bad\num\nThe Hope from\nuh Jose is that this is something that\nwill only appear relatively late in the\ngame\nbut one of the problems is that if we\niteratively have uh this process where\nwe try to make a generation of AI as\nsafe as possible using the previous\ngeneration and then once we have made\nthat safe then we go to the next then if\nwe fail in one round of this the then\nthat is a terminal failure\nalso have some thoughts about intexical\ngoals like if the AI has goals that\nrefer to itself like it should have\num like power over something or it\nshould have very high reward or it\nshould have uh this kind of different\nthings believes indexical goals are very\nbad\num I agree they can be bad but I can\nalso\num like we probably would not uh make a\nlot of indexical goals for for the AI I\nimagine that we are more interested in\noutcomes that don't directly refer to\nthe AI it could still happen through\nmeter optimizers and of course it's hard\nto see what we like to be done but there\nare also advantages to avoiding\nindexical goals and that is that the\nai's own alignment problem becomes a lot\neasier because if the AI just cares\nabout maximizing paper clips then it's\nvery easy to build a successor system\nand then tell it to also maximize paper\nclips but if you have any technical goal\nthen the new generation of AI that you\nare building May repel against you and\nthen you won't satisfy your own utility\nfunction and that means that a um\nan AI within technical goal could in\ntheory be safer than one without\nJose describes a number of ways around\nthese problems one of them we talked\nbefore about\nvarying the capabilities and incentives\nof different AIS and playing them\nagainst each other in this checks and\nbalances you could have a different kind\nof assignments for them like argue that\nthis AI is a deceptive regardless of\nwhether it actually is would be an\nexample of a different assignment\num\nthat's one suggestion to give large\nlosses for false alarms and large losses\nfor failing to notice known problems as\nfar as I can tell this situation would\ndo absolutely nothing like if you take a\nan AI that is trained in some way and\nthen you instead of making the losses\nsmall you make them large like you\ndouble all the losses uh I imagine that\nthis would in fact have no uh influence\non the behavior of the AI if you just\ndouble all rewards and all losses\num but I'm uh I could I could it depends\non precisely how it's structured but by\ndefault I would expect this to not do\nanything\nanother way you could vary the\ncompetence of them would be to vary the\num compute uh\nyou imagine that um figuring out things\nlike Advanced collusion or in the causal\ntrade if you have very little compute to\nspare then maybe you can't figure that\nout\num\nwe talked also previously about access\nto internal State as one thing that\ncould be varied in this uh checks and\nbalances scheme you could just do brain\nsurgery on the AI and just pull out some\nof the facts if you can find them you\ncould just ablate some part of the\narchitecture or whatever\num if you can vary the situational\nawareness that may be helpful\num my estimation is just if you can get\nthat as low as possible that is always\ngood\nyou can imagine more limits or uh like\nvarying things that you're not really\nsure what does like different in\ninitialization\num and finally uh if you have some of\nthe earlier Generations that you are\nquite sure are aligned that could be\nhelpful as well\nperhaps betrays some intuitions that are\nvery different from mine by saying that\nthey may become uh confident that it is\nvanishingly unlikely that a an AI in a\ngiven generation is misaligned\num and I think vanishingly unlikely is\ntotally the wrong way of looking at I\ndon't think we can be anything like that\nbut I would in fact be interested in\nhaving some idea about the extent to\nwhich a partial alignment failures in\none generation like accumulate to the\nnext\nI could see arguments foreign against\nthis and I think that maybe a key\nempirical question\nnow we come to the question of\num uh Precision like this is a like a\nlot of you have probably seen an image\ncorresponding to this like this just\nexplain like what is it a false positive\nwhat is a false negative\num and um\nHalton argues that it is important with\nthis uh checks and balances that you\nhave few false positives few times where\nthe AI says hey this this other AI is\ndeceptive and then it's actually not\num is that more uh important than um\nthan not having any uh false negatives\nwell I think yeah I would actually argue\nthat avoiding false negatives is more\nimportant than avoiding false positives\nin that if you have this Ai and you\nbelieve it is not deceptive but it\nactually is deceptive that seems a lot\nworse\num but in general of course there needs\nto be some kind of balancing between\nthese these two kinds of errors\nwe may be able to use AIS with low\nPrecision if we have good adjudication\nlike if there is a good strong way of\ndetermining the truth of the matter if\none AI accuses another AI of being\ndeceptive\num then that would be helpful as well if\nnothing else even if we can't really\nunderstand and adjudicate then we could\njust count the number of warnings that\nmay be helpful\nI mean yeah it may be a little helpful\nright\num\nand if we are doing uh the thing where\nwe try to\num uh lower the amount of free energy to\nharden the society in general then uh\nAIS with uh low Precision could perhaps\nalso be valuable for this\nso how effective will checks and\nbalances be well uh Holland kanovski is\noptimistic about this and argues that\neven if the AIS tend to collude\nsometimes then this will have benefits\nbecause they will still give some level\nof uh protection in that they will like\nif they explain some security\nvulnerabilities and then that will be\nhelpful\num\nand we could potentially make collusion\nquite hard\num like uh I I don't know precisely uh\nhow hard Holland kanovski estimates that\nit's going to be but\num variation is certainly something that\nwill also make\num\ngive more false alarms and make uh\nadjudication a lot harder because we are\nprobably going to be\num like if there are different\narchitectures or something like that\nthen we need to understand more\narchitectures uh as judges\num\nthat's how the incentive scales is also\nan interesting case when you imagine\nthat if the uh collusion becomes\nslightly less probable of succeeding\nthen that strongly reduces the\nincentives\num I think that's interesting and I\nthink it's possible to do some work on\ntrying to formalize this\num which I think would be really helpful\num and um\nhas this came reliably calculating the\nprobability that AIS are caught in bad\nbehavior goes a long way\num I think it's a strange claim I think\ndefinitely reliably carefully in the the\nprobability that an AI will do a\ntreacherous turn is something we're\nnever going to have like reliably\ncalculating that we are really um\ngrasping for straws rather\nI think the final key tool is to make\nsupervision competitive and this is\nsupervision competitive with the\nsupervised so this previous generation\nof AIS that are supervising the next\ngeneration of AIS\num and this kind of competitiveness\nshould be distinguished from the\ncompetitiveness between magma and and\nthe world\num and basically how do we ensure that\nsupervision is competitive\num how can I ask you suggest that we\nmove cautiously and iteratively and I\nthink that is\ntrue but I also feel like this is\ndescribed like as a key tool and there\nis basically this be cautious and\niterative and that's kind of\nunderwhelming to me\num I think I would have liked to have\nsome more ideas\num I recognize it may be hard at this\npoint to write a lot but I think it's\nit's important and it feels really thin\nto me\nthe final section of the post is called\nhigh level factors in success and\nfailure\num I uh when I read that title I didn't\nquite understand what it was about I\nthink what it's basically about what\nkind of future evidence we could see\nthat would update us to be more positive\nor more negative\nso one of them is whether it turns out\nto be natural or not that the AIS will\nplay the training game\nuh playing the training game is not\nprecisely the same as being deceptive\nit's described as trying to appear as\nattractive to researchers as possible\nwhile disregarding their intent\num and I tried to uh look into this I\nthink this kind of new definition is\nkind of uh strange it doesn't really cut\nreality at the joints because you can't\nappear attractive to someone without\nconsidering their uh their intent\num so um I would just say whether\ndeceptiveness is naturally all that like\nhere\nwe don't know\num this is a difficult question\num in theory we could see ai's\ninternalizing at an early stage that\nhacks that give a little legitimate\nrewards are not desirable that would be\nreally nice hoton is not positive about\nthis you think it's unlikely but if he\nlearned this then that would update him\nto be more positive about Magma's\nchanges chances\na bad Factor would be that if we learned\nthat AI systems rapidly become extremely\npowerful relative to supervisors\nso here there are two things there are\nagainst the the capabilities of the\nsupervisors which in the first\nGenerations are only humans and later\nwill be humans helped by the previous\ngeneration narrow AIS something like\nthis\num but the other thing that the AI\nsystems could become powerful towards\nare human defenses like how hard is it\nto convince people how hard is it to\nactually take over the world\num\nif it turns out that systems become\nrapidly become extremely powerful that\nis very bad news\nand I think uh\nwe should skip the words extremely uh\nand rapidly in this claim because\num sure if you put enough qualifiers\nthen obviously but I think if it turns\nout that AI systems at some point even\nat a slow speed maybe eventually become\npowerful relative to supervisors like\neven slightly more powerful then that\nwould also be uh very very bad to learn\nthat\nso um I think this should be split into\ntwo uh Dimensions\num like there is against the\ncapabilities of Supervisors to figure\nout whether the AI is deceptive I think\nright now ai systems are quite capable\nof doing that for the simple reasons\nthat the supervisors are really really\npoor like we really don't have any good\nway to look at dvd3 and say is it\ndeceptive or not\num but against human defenses it seems\nclear that AI systems are not anywhere\nnear being powerful enough to just take\nover the world\nso what would this look like in practice\nit would look like AI has been able to\ndeceive and manipulate humans\ndistinguish whether they are in training\nour deployment find some kind of\nsecurity exploit and generate security\nsuggestions with hidden back doors\num one thing I would note here is that\num\nit isn't actually required that the AI\ngenerates the security\nsuggestion with a hidden pector that it\ncan exploit itself if it expects it will\nbe replaced with a smarter AI then it's\ngood enough that it says that it notice\nthat there is some subtle bug here that\nprobably a smarter AI would be able to\nuse a specto\nanother thing the AI may be able to do\nis to disguise their internal States\ncollude against each together with each\nother and just make adjudication too\nhard for humans\nso we you notice here that this part\nabout why powerful AI is bad didn't\nreally talk about alignment\num but Holland kanovsky\nsays that we this is really bad news if\nAI become powerful because during\ntraining there's going to be a lot of\ndifferent opportunities to defect and to\ntake over the world or do different\nthings and like if we strive for a big\namount of variation then almost\ncertainly some of the AIS are going to\nbe unaligned\num and this seems to be a Crux for many\npeople including the most optimistic and\npessimistic and I believe that yeah this\ncould be close to one of the cruxes\num in particular effect behind uh why\nsome people are calling for a pivotal\nact because we are probably going to\nneed something that is really strong to\nprevent other uh all other AIS from\nbeing able to do uh bad things if there\nare if it turns out that AI is rapidly\nbecome extremely powerful compared to\nhumans\nokay the optimists at this point would\nprobably say that okay if we make\nmultiple Generations then why is it\nexactly that we need the Next Generation\nto be a lot more powerful than the\nprevious couldn't we just do baby steps\num even if it turns out that\nincreasingly intelligence of these\nsystems is very easy surely we could\njust lower the amount of compute and get\nsomething that is just slightly smarter\nso we don't get something where the next\ngeneration is a lot smaller than the\nprevious\num Holden kanovski says seems to be\ntaken by this argument I believe he\nthinks it's a really good argument\num and says it's unclear about what the\ncounter arguments to that would be so\nthe first counter argument I would give\nis that the standard story of uh AI\ntakeover includes the AI hiding some of\nits capabilities and that would prevent\nuh humans from very gradually turning up\nthe the heat instead of uh\num\nuh instead of just giving it as much\ncompute as possible\nthe second is that I feel capability is\nmulti-dimensional it's possible to have\none level of capability that is uh\nagainst human defenses like manipulating\nhumans breaking out of the box hacking\ntaking over the world and uh alignment\nthe ability to align other AIS and there\nis no particular reason to expect these\ntwo to go precisely in the same level I\nbelieve indeed that the alignment\nproblem is very very hard and that would\nalso make it very hard for the AI\nthere are a few more things we could\nlearn we could learn that adversarial\ntraining works really really well\num that would be a good thing Hogan\nkanovsky is not optimistic about that he\nbelieves that will require that\neffectively every input uh is identified\nthat would cause unintended behavior and\nthat seems like a really really big ask\nbut I don't think we actually need all\nthat much I think the key thing we need\nfor adversarial training to be super\neffective is a really strong principled\nway to make the AI think that it is\nstill in test when it is deployed I\nthink that's actually all we need but of\ncourse I also believe that this is\nreally hard\nanother thing would be if it turns out\nthat decoding and manipulating internal\nstate is really effective in some way\num it may be that we can solve the\nalignment problem or Global coordination\nusing only rather limited AI systems\nthat would also be great\num another thing we could learn as we\ntry to use AI systems to align other AI\nsystems is that decisive alignment is\njust hard and that would of course be\nreally bad\nanother key question\nthat determines whether we will have\nsuccess of failure is how questions will\nmagma and others be\num because in cautious behavior is just\nreally attractive there's an easy way\naround all these kind of problematic\nsymptoms just training the way basically\num and\num\ndoing all putting all this delay and\nwork on alignment is not perhaps easy to\njustify to shareholders Etc\num and we're probably gonna need a\nsubstantial amount of Serial time\nbecause human needs to be involved in\nmany things such as resolving security\nflaws and adjudicating and even if the\nAIS are more powerful we need some\nserial time\nanother thing that could happen is that\nwe get a sub-existential catastrophe\nthat would make could make magma and\nothers be more risk adverse I think that\nis true but I also think that one level\nbelow a catastrophe that is one level\nbelow helps but two levels below do not\nhelp so if we get a sub sub existential\ncatastrophe that doesn't really help\nlike if we get a lot of self-driving\ncars that crash into other self-driving\ncars that would make a lot of people say\nyeah the problem is that the AI is not\ncompetent enough so we need more\ncompetent Ai and so that doesn't really\nhelp us at all\nSo based on all this this entire uh\nuh uh work would civilization survive in\nthis nearcasting scenario where magma\nrealizes that they have six months to\ntwo years to uh try to um uh uh prevent\nan AI catastrophe\num\ndoesn't really give a strong suggestion\nto this\num he can't resolve this he believes\nthat it is uncertain it's not certain\nthat it's impossible and it's not\ncertain that it's impossible somewhere\nin between seems reasonable\num there are in this situation a small\nnumber of AI labs and\num but one of them is probably in\ncautious I think it's it's too big of an\nassumption that they will all be\ncautious\nanother Advantage is that there are\nseveral possible paths to alignment\num but unfortunately they all look kind\nof hard\nso all canovski ends up with saying like\nmeh Maybe\num to me I think in fact that\num the actions that magma take are not\nthat relevant because the suggestions\ngiven here are not pivotal acts in\ngeneral and they are unlikely to really\nflip the game board in any meaningful\nway so I think the key question here for\nwould civilization survive is how hard\nis the alignment problem how natural is\nit for AI to be deceptive this kind of\nthing and how well magma does like\nMcMahon can't really do that much with\nthis uh the small lead\num so whether they're really competent\nor or not that doesn't change the\nprobability of AI Doom very much\nthat is all for today thank you and see\nyou next week", "date_published": "2022-11-03T21:44:42Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "3d67a10f78f39a8401d01ba8fef53cc3", "title": "108. The Learning-Theoretic AI Alignment Agenda", "url": "https://www.youtube.com/watch?v=6MkmeADXcZg", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "welcome to session 108 in the AI safety\ndot-com reading group the presentation\ntonight is by vadim casaya research\nassociate for the machine intelligence\nResearch Institute his article on\nlearning theoretical AI alignment agenda\nhas recently won the first prize of\nround three of the AI alignment press\nplease go ahead\nokay so and I'm going to talk about my\nessay which basically explains explains\nmy research agenda the the motivation\nbehind it and the general research\ndirections so okay so the overarching\ngoals of the research agenda are the\nthree items that I list here which are\nthe first item Monica is intelligence\nand what it means is developing some\nkind of natural theory aim what general\nintelligence so intuitive terms with the\ncontext of a I like general intelligence\nis the ability to crucial certain goals\nin an unknown environment effective in\nsome sense so this is like one sentence\nexplanation what we mean by intelligence\nbut sort of formalizing its with\nmedically\nwish I could please mute your\nmicrophones sorry is it muted it is not\nmuted okay so everybody if you look at\nthe picture and where the the the screen\nshould be there should be at bottom\nwhere you can close the call two buttons\nto the left as a microphone press that\none then it should be neutered am i\nmuted now no I was muted before then\nokay\nso if you keep speaking and I'm not\ncurrently saying you are not muted in\nthat case you're probably muted okay\ngreat so the second item is the\nmathematical is in the role of alignment\nso when we sailed in in artificial\nintelligence is aligned what does it\nmean exactly so it should mean that this\nagent if in some sense first use the\nsame variance that we push here are the\nsame goals that we define it in the\nsense that we mean it but again how to\nformulate this mathematically is the\ncomplex question and some of my hope or\nso speak the holy grail of this research\nprogram is that eventually this this\ntheory will produce a quantitative\ntheory of AGI performance and safety so\nsomehow the the vision is that\neventually we will have a theory which\nwhich allows given a certain the I\ndesign produce some quantitative\nestimate of how safe this design is and\nhow Windows designers which of course\nwill have some empirical parameters and\nsome error bars and so on but at least\nthere will be some theoretical basis for\nclaiming that your design is safe or not\nsafe and the main building blocks which\nmargin the building is statistical\nlearning theory accommodation complex\ntheory algorithm formation theory we're\njust briefly very briefly statistical\nlearning theory basically tells you well\nit\nquestions such as how much data doesn't\nalgorithm need a learning algorithm to\nconvey eshton right answer and whether\nit even can converge to the right answer\nand computational complexity theory adds\nto this the dimension of computational\nresources so maybe have enough data but\nit's not feasible to process this data\nwith some reasonable amount of\ncomputational resources and so the sum\nof those two is usually called\ncomputational learning theory and like\nthe first part of written information\ntheory is basically well it's basically\nwhat tells you what is the correct prior\nyou should use in some sense for your\nagent to be a truly general intelligence\nthat somehow is able to reason about in\nsome sense some as broad domain of\nproblems as the thing that humans are\nable to reason about so okay so the\nfirst part is what I call inverse\nenforcement learning and what I mean by\nthis is basically an attempt to answer\nthis first item here the automatically\ndivided of general intelligence so so\nthe starting point of this is the arocs\nI which was introduction by Marcos\nhotter air excise basically an agent yes\nwell that assumes its environment is\nsampled randomly from a Solomonic prior\nso what is the soul of prior so just in\ntwo words a basic illuminance prior is a\nway of mathematically formalizing the\nidea of Occam's razor right so when we\nwell when we discuss just think about\nwhat does it mean to think rationally or\nyou know you use the evidence that you\nhave in rational ways or in scientific\nways then like the central principle\nthere is Occam's razor which says that a\nsimpler theory\nis more likely to be true their complex\ntheory and so much Breyers way to\nformalize it we're like a complexity of\nthe theory is corresponds to the length\nof a program push the universal Turing\nmachine that encloses theory so you can\nsort of imagine the drawing a diagram\nhere it shows you in the ax I ancient\nand you have like the environment the\nwell the environment gives some string\nof some stream of bits into the agent\nwhich are the observations of the agents\nand this kind of presented sensors or\nwhatever input it gets from the\nenvironment and then there are some\nactions and agent takes to to affect\nthis environment and so you get some\nprocess you get some closed-loop process\nhere and then if you assume that the\nenvironment which agent doesn't know a\npriori is the sample from the Solman of\nprior then you can talk about expected\nutility of the agent and the agent with\nmaximum expected utility by definition\nis the ax ax so XS are just an attempt\nto formalize what intelligence spins and\nthis is like a nice first attempt but\nthere are all sorts of problems with\nthis so ultimately what I hope this\ntheory will give us it'll give us some\nset of optimality conditions an agent\nshould satisfy in order for the cold\ngeneral intelligence in some sense and\nthese conditions this should be\nsufficiently strong so well I'll say a\nfew more words about this later it\nshould be sufficiently general in the\nsense that the space of environments\nthat the agent can somehow effectively\noperate and should be like a very big\nspace in some sense and the sample\ncomplexity in computational complexity\nshould be feasible so as opposed to aja\nX I which is not even computable we\nshould arrive at the theory that is\nconsistent with agents that are actually\nfeasible complete\nit can be implemented with feasible\ncomputational resources which is the\ncomputational complexity part and which\nconverge to some the learning process\ntakes a reasonable amount of time which\nis the sample complexity part okay so so\nI'm going to speak about a few problems\nwith EXI a few problems that Exide\ndoesn't solve and which are important\nremind you to arriving at this theory of\ngeneral intelligence and I'm going to\nsay just a few words about how I think\nthis problem should be attacked and how\nI'm trying to attack them so the first\nproblem is the problem of craps so what\nis a trap a trap is an irreversible\ndecision we have no long-term\nconsequences so it's something that you\ndo and which has potentially bad\nconsequences if you cannot undo wants\nyou to do this action and this and this\nthis thing creates problems so most of\nexisting reinforcement learning theory\nassumes that there are known traps so\nwhy does this yield because if there is\na crap in the environment then it's not\nclear why you can learn that there is a\ntrap the only way to know for sure that\nthe trap is a trap is by entering the\ntrap and then it's sort of too late you\ncannot get out of it so it's not clear\nhow to have some kind of guarantee that\nyour agent can learn you can\nasymptotically learn about all the traps\nand approximate some optimum performance\nor something where it does so usually\nfor some learning theory just assumes\nthere are no traps or has some tricks\nlike the episodes which results are just\ntraps or irrelevant if some samples are\nwell liked one of the reasons it's\nespecially important in the context of\nAI alignment is because a lot of\nproblems that arise in the context of\nalignment we can think about them as\ncertain special case of traps so for\nexample of the agent self modifies in\nsome way that may\nsit like changes for examples utility\nfunctions some bad way or just you know\nmakes it like an ineffective agent and\nyou can think about it as a type of crap\nlike it's an action that your agent can\ntake that will lead to some bad state\nand you cannot exit the state because\nonce addiction runs YouTube the action\nand the original agent doesn't exist\nanymore\nand replaced with some other agent which\nwill do some bad things and corruption\nis also sort of so what I mean by\ncorruption is like so in reinforcement\nlearning your agent usually experiences\nincentives to do things which we would\nconsider bad for example it has some\nexternal reinforcement signal which is\ntrying to maximize then it creates an\nincentive to part of the signal and to\nartificially set it to a high value and\njust decouple it from what the humans\nactually want and like entering this\nsource of states can also be considered\nto be a type of threat so how can we try\nto solve this so one approach which I\ninvestigate and continue to investigate\nis what I called DRL which means\ndelegative enforcement learning where\nyour agent can sometimes delegate agent\nactions to a human advisor then you're\nable to prove that you're assertive\nyou're set your agent will be capable of\nlearning about all the traps that the\nhumans and human knows exist in the\nenvironment and in particular it solves\nthe corruption problems under some very\nsimplified assumption but in principle\nit can solve the corruption problem so\nother directions can be introducing some\nassumptions about the environment or\nformal evening soft romantic conditions\nwhich are or weaker than what's usual of\ncourse with learning theory but stronger\nin the bias optimality ok so another\nanother problem of the ax sign has\nis the lack of reflection so X I is\nincomputable despite the environments is\nable to reason about are computable and\nso we can sort of instill exercise we\ncan consider some agent whose prior\nincludes only for example only\nenvironment that have a given time\ncomplexity but then the agent itself\nwill always have a higher time\ncomplexity so by Indian agents almost\nalways have higher complexity than the\nenvironment and this creates a paradox\nbecause the real environment is usually\nmore complex than the agent for example\nit can compromise it can contain other\nagents which are just as complex as the\nregional agent and so we need to somehow\ntake this into account the theory needs\nto deal with this and one direction that\nI'm exploring is the use of what I call\nincomplete or fuzzy models which means\nthat your agent reasons in terms of\nmodel and don't fully specify the\nenvironment but specify just certain\naspects from the environment and then\nyou don't need the environment to be\nmore simple then the agent you just need\nto environmentally have some properties\nwhich are simple enough for the engine\nto Express and then your agent will be\nable to exploit this this properties\nsorry I've already also some test cases\nof how such a theory can be tasted so\ngame theory is like looking at several\nagents interact with each other and\nseeing whether we can prove the\nconversion to some reasonably liberal\nand like decision theoretic problems\nwhere you have America that stimulates\nthe agent and the agent cannot simulate\nOmega and self codification is not a\ncased and of course another important\ncase for alignment is when you have\nclosed loop between a human and any I\nand like at least one of them will not\nbe able to simulate the other so you\nneed like this theory of you need some\nkind of reflected theory of enforcement\nlearning to the pro things about such\nloops\nnow okay so the next the next part of\nthe agenda is value learning protocols\nand here here I'm actually talking about\nthe alignment itself so what does it\nmean for an agent to learn human values\nand how can we make sure how can we\ncreate setup state and sure\nmathematically that this learning will\nactually happen\nso there are several ways how you can in\nprinciple try to communicate values to\nan agent and so one way is by vertical\nformal communication so formal\ncommunication means that you are\ntransmitting some signal to the agent\nthat has a priori known semantics so it\nmeans that the agent it's sort of\nhard-coded on some level into your AI\nwhat a signal means so for example can\nbe just a full specification of the\nutility function it can be a reward\nsignal or it can be just some comparison\nsome special cases like you're saying\nregions of this situation is better in\nthis situation or this situation is\ntwice better than the situation of\nthings like that so what are what are\nthe problems with this so first of all\nspace it's like anything is very hard so\nfoolish spreads find usually a function\nis somehow completely infeasible just\nlike a human various are very complex\nlike it's very hard to specify the\nmathematically like you're very far from\nthere but even producing the reverse\nsignal becomes very hard if you're sort\nof trying to communicate the sum total\nhuman values as a Polish from so per\ntypical narrow problem so this is like\none problem with this another problem is\nthe incentives for corruption which I\nalready mentioned before\nDRL can solve to some extent or this in\nsimplified models and the third problem\nwith this is scarce information which\nmeans life like for example if I'm just\ngiven the reverse signal to the agent\nthen it might be it will take a lot of\ntime for the agent to discover what it's\nsupposed to be doing for example suppose\nI want the agent to\nworld war food you know to prevent a\nnuclear war shall happen so it's your\nwife will be you know zero when nothing\ninteresting happened and minus one if\nthere is a nuclear war and plus one if\nsomehow a nuclear war was eliminated\nforever so something like this\nbut the problem is the nature will get a\nlot of zeros in the beginning and it\nwill take it a lot of effort to reach\nany state where some relevant reverse\nsignal exists so this is some limitation\nof this so an alternative approach is\ninstead of formal communication we can\ndo something else which I call\ndemonstration so what is what what is\ndemonstration demonstration means that\nthe human is not is just trying to\ncommunicate its values by just\ndemonstrating them but just doing things\nthat are aiming to optimize Disraeli's\nshowing the engines what what stage is\nsupposed to deduce from that what agent\nwhat awareness is supposed to be\noptimizing so this idea was very popular\nand it remains popular in recent years\nin different formulations so there's\nlike inverse reinforcement learning\nwhere engine just observes the human and\ntries to and guess what was there was\nthere utility function was there work\nfunction is there RC IRL cooperative RL\nwhen the agent of the human act together\nsingleton C on the environment and\nthere's divided are introduced which is\nDRL delegated in versus force with\nlearning where the agent acts on the\nenvironment and can sometimes give\ncontrol to the human and allow the human\nin step to act on the environment and\nsee what the human does so the\nadvantages of this is in some sense\neasier in the sense that you don't need\nto specify digital the function you just\nneed like to do things in some sense and\nlike it it has\nmight have richer information because\nthe the human is doing the demonstration\nis already moving towards some\nreasonable goals so from the start you\nhave some information about\nwhat the human is trying to do\nchallenges with this is of course the\ndemonstration can be imperfect can the\nhuman to make just errors name of the\ndemonstration or introduces with some\nvery suboptimal or irrational things and\nagain the human can get corrupted so so\nI'm the signal from the human to the I\ncan be hijacked or the human can be\nmanipulated and so forth and and another\nproblem is that or perhaps a related\nproblem is that it's limited by the\ncapacities of the demonstrator so for\nexample if I'm offered a bet whether to\nplay a game of chess against Kasparov\nthen I will not take the bet because I\nknow it cannot win such a game of chess\nbut maybe I would like the AI to take\nthat if the I would react in my step\nbecause the I can actually win against\nthe spiral but the I might might never\nbe able to learn at this because it will\nnever be sure ever I'm rejecting these\nbats because I don't think I can win or\ninject ejecting its bats for some\ncompletely other reason the bouncy\npeople don't like hate playing chess or\nsoft OFS and like this is related to B\nversus NP so giving the reverse signal\nis like NP we were just evaluating a\nsolution and and the during the\ndemonstration is not available solutions\nproducing a solution so this beavers\nversus NP problem the main test is there\nis evaluating solutions is much easier\nthan producing solutions and this means\nthat in some sense producing the reverse\nsignal there's no computational staff\nreason if a signal is actually much\neasier than than demonstrating so so one\none idea I have about how to sort of try\nto get the advantages of both as much as\nit's possible is a protocol that I can\nlearn by teaching and like in learning\nby teaching here three actors so you\nhave your agent you have which is di you\nhave someone called the adviser which\npresumably be human and an operator\nwhich might be American so the operator\nacts on the environment and the advisor\ngives some advice to the to the operator\nthere are three moves between which the\nITM switch on its own so in one mode the\noperator is doing things and the advisor\nis giving advice in the second mode the\noperator is given link is doing things\nand the agent is giving advice instead\nof the advisor and the third mode the\nagents just acting directly on the\nenvironment and the operator and the\nadvisor down into their anything\nand so what what's the idea here the\nidea here is that by giving different\nsource of advice there I can learn what\nwhat the operator is trying to do\nbecause well like on the one hand if\nyou're doing something but you receive\nadvice from some really smart and allows\nyou to search solve much more\ncomplicated problems than the problems\nyou would be able to solve on your own\nso this is this in principle allows the\noperator to transcend its usual\nabilities on the other hand the I can\nunderstand which advice is good and\nwhich advice is bad because of the basic\nprinciple that the operator will simply\nnot respond to the bad advice or maybe\nit will try it and then we'll stop using\nit whereas good advice the operator will\nkeep using it and the agent will be able\nto conclude that the operator actually\nlikes this advice and like it's it's\nmoving towards the correct a goal and on\nthe third hand there is still the\ndelegative mechanism which protects us\nfrom manipulation because the I will\nsort of sample its advice from the space\nof advice advices with the adviser the\nhuman adviser can produce and will avoid\nadvice that might be dangerous might\nmanipulate the operator and make it\nreacting in correct ways so in principle\nthis approach should allow us to\ntranscend the ability of the operator so\ngive a certain better result than just\nusual demonstration\nand avoid corruption so and you don't\nneed to Manuel spats very war signal\nokay and so so ultimately I hope that\nwhatever the ultimate protocol is going\nto be what I hope is that this protocol\nwill be they have some philosophical\nbacking by some theory of imperfect\nrationality so we're going to have some\nmathematical theory which tells what\ndoes it mean for an agent to have\ncertain values or utility function or\nsome other presentation of values\ndespite the fact that the agent is not\nperfect not perfectly rational and\nsoftware and like for example one model\nthat sort of emerges from thinking about\nis learning by teaching over it is that\nyour agency is going to be not perfect\nbecause it as a limited hypothesis space\nthere are parts is that too complex for\nit to contemplate it can become\ncorrupted some some states when the\nagent the human enters them it stops\njust behaving rationally in any sense\nand it can just randomly do some errors\nand and the learning by teaching\nprotocol can sort of overcome in some\nsense all three types of irrationality\novercomes the internet for space because\nthe AI can have a much bigger potted\nspace it avoids corruption by the\ndelegation mechanism and it can avoid\nthe random blunders because when they I\nalready learned enough can directly on\nthe environment and somehow simulate\nsome perfect version of the operator in\nsome sense but this is not necessarily\nthe final model but hopefully we'll have\nsome model which sort of explains what\nimperfect rationality is so some other\ntopics that are included in my agenda\nare well one topic is the topic of\ntaming demons so why are these demons so\ndemons are analyzed sub engine so you\nmight have an ancient which is in\nprinciple sorry\nwhich is in principle aligned in some\nsense or for some protocol which should\nthere ensure a good which has some\nperformance guarantee in some sense but\nit has sub agents that are not aligned\nso how can this happen for example if\nyour age include some evolutionary\nalgorithms then the evolution of Russia\ncan create some intelligent agent we\nhave some different agenda like\nbiological evolution created humans by\noptimizing for genetic fitness Britain\nhumans which optimize for other things\nanother concern is that your agent might\nbe simulating other agents it might be\njust thinking about the world and\nimaging other agents which exist or\nmight exist in the world and it runs\nsimulations on this agent some of them\nmight be malicious and some of them\nmight be able to actually escape from\nthis box which is which is the original\nagent and like the third example is the\ncausal attack of all Kristiana where\nsome malicious super intelligence sort\nof runs a lot of copies off of your AI\nwhich seem like there are which all\nthese copies they see something which\nlooks like ordinary earth and but there\nare much more of them than earth and it\ncauses your eye to think that it's in\nthere it is one of these copies and it\nis not on real earth this allows the\nmalicious super intelligence to\nmanipulate your AI so I defy this agent\nthis demons into two types\nso there are Cartesian demons and non\nCartesian demons and Cartesian demons\nare those that just usually use the\nusual input output channels of your\noriginally I to do bad things and the\nnon Cartesian agents are of the rest so\nmy producer agents are sort of work by\nsort of trying to hack your eye from the\ninside find some error some bug in the\ncode in the ratings\nin the hardware or just transmit\ninformation outside using some physical\neffects like magnetic induction or\nwhatever and use that and in principle\nCartesian agents should be blocked by\nthe same performance guarantees that\nfall from the theory that we have before\nso like if we are if we don't have a\ntheory of universal reinforcement\nlearning plus very learning protocols\nwhich gives us some optimality\nguarantees that's in particular imply\nthat our AI is going to remain aligned\nthen this blocks to the Cartesian agents\nalso just by definition and like for\nexample this a causal heart attack is\nblocked by delegation because you're a\nis able to delegate to the human at the\ncorrect moment of time and use this to\ndistinguish between this whether it's in\nthe simulation or really an art so the\nCartesian the agents are relatively easy\nalthough we have to be careful for the\nquantitative analysis of her performance\nguarantees which is not simple the non\nCartesian agents at the moment it's much\nless clear how to solve its I do not\nreally see a purely learning theoretic\nsolution so one solution that sort of\nworks is just using a morphic encryption\nwhich is a certain way to encrypt the\nexecution of your AI in a way that makes\nit look like just execution of random\ncode and so under certain assumptions\nreasonable assumptions about physics\nsuch an encrypted in air will never be\nable to sort of break out of the box and\nyou can sort of communicated by\nencrypting and decrypting these inputs\nand outputs and then like it's normally\npersonally become the only channels of\ncommunication all the other channels are\ncut off but it has like very serious\nperformance penalties and it might make\nyour eye uncompetitive against some\nvariants so maybe there are some ways to\ndo instead of using extra homomorphic\nencryption you something weaker like her\nwhich has some with some form of\nvalidation didn't even it but okay but\nit it is a complicated problem okay and\nthe last topic that I wanted to talk\nabout and sort of the last sort of item\non my research agenda is\nself-improving here so self-improving is\na I did sort of improves itself and so\nthis idea is interesting because it\nmight lead to some kind of exponential\nfeedback loop are you reaiiy improves\nitself and the improve the AI improves\nitself even faster and so on but some of\nthis is just an informal idea there so\nit's interesting to try to come up with\nsome formal mathematical model of this\nand well and there are some things in\nthe theories I mentioned before that can\nbe used for here for example I already\nsaid the full lots of modification is a\nkind of traps so for fear which deals\nwith traps can also be used to deal with\nsort of house safety of self\nmodification this is one thing another\nthing that you can focus on education\nsort of game where the players are the\nset of all possible agents or all agents\nthat your agent can modify into and then\nif if you if you will have this result\nas I sort of mentioned before you might\nwant to have that your well your\nOutworld converges into some kind of\ngame theoretical Librium then you can\nstart to apply to this game and then it\nwill imply that your agent will actually\ndo some modification in good way since\ninstance so it will you know it will\nplay its optimal strategy in the game\nyou can insult yourself modifying the\nthings it should sell for defending\nourselves modifying their things and\nanother just further about this is the\nsort of self improvement so it sort of\nseems hard to understand how to\nformulate learning theoretically what\ndoes mean this sort of feedback loop\nhow do we think about it so one way I\nthink which might be interesting how to\nthink about it is that self-improvement\nis really sort of flowing on which\nhardware you are implemented because\nusually when you talk about\ncomputational complexity then we talk\nabout complexity up to a polynomial\nwhich is why is it why we always talk up\nto full you know no because then it\ndoesn't depend on the models computation\nit doesn't depend on what sort of\nmachine what science computer implements\nyour algorithm and and so eventually we\nwill come up with some algorithm that\nhas some performance guarantees and that\nit will run in some polynomial time but\nthis polynomial time will not be optimal\nfor the particular computer in which you\nare going to implement it so there is a\nself-improvement we can try to model it\nas they are trying to learn on which\nhardware it is implemented and find well\nand somehow modify itself into the\nimplementation which is optimal for this\nharder and then maybe maybe you can\nwhich make to analyze it and find it\nunder some condition you really have\nthis exponential growth place that\npeople have been sort of speaking about\nor not okay so I think it I'm I think\nthat I finished the presentation yeah\nokay thank you for your presentation\nvenom if anyone in the audience have any\nquestions or comments please come with\nthem now and if you come up put some\nmore questions well people are talking\nthen please write them in the chat so we\ndon't talk at the same time so who would\nlike to feel the first question\nI would please go ahead okay um I just\nwanted to pass our cross I wanted to ask\nfor clarification on the learning by\nteaching why exactly could the a I not\njust say hack the advice is to give\nparticularly easy advice or anything\nlike that how does that prevent um\ncorruption yes I understood the question\nso learning by teaching basically uses\nthe same mechanism to defend from\ncorruption like all the other delegative\na learning protocols so the basic\nmechanism is that well first of all the\nagent knows that there is such a thing\nas corruption so it assumes that some\nstates some states are corrupted and if\nyou enter a corrupt state then you\ncannot no longer believe anything and it\nknows it it should avoid the state and\nthe way it it the mecca the tool it has\nthe widest ways is by allowing allowing\nthe adviser to act first and then when\nit learns the behavior the adviser then\nit understands which things are safe\nso the basic assumption is that the\nadviser will not will not become\ncorrupted on itself and the operator\nwill not become corrupted all on itself\nso he assumed it this adviser operator\nas long as as the act without\ninterference of the eye\nthey will not grab himself then the\nagent can learn this certain behaviors\nsort of behaviors in this advisory over\ndisplay are safe and the states that\nthey entered in this behavior are safe\nstates and it can then use this\nknowledge to also behave always are safe\nthis is like the basic mechanism\nokay um Stewart Armstrong had a comment\nyeah I just wanted to ask if you're\nbased in Berkeley and if you're going to\nbe there in the second half of August\nokay so the answer is that I'm not based\nin Berkeley I'm based in Israel and I'm\ncertainly not going to be there in the\nsecond half of August I mean I might be\nthere sometime in the future probably\nwill be at some point but certainly not\nnot this month okay\nthen we should have a discussion after\nAugust after I formalized some things\nthere and some comments on your thing\nI'll send you one link to something that\nI did to try and remove a causal trade\nyou may find it relevant to what you\nwere thinking about of course Cheers\nokay there is a question from Linda or\npossibly some of the other people\ntogether with her asking if the operator\nis a human in the value learning\nprotocol so in the simplest\ninterpretation of the protocol yes the\nbrain is human you might try to come up\nwith some or okay in the simplest\nversion the upper there is either a\nhuman or maybe some organization of\nseveral humans or something produced by\nhumans but you might also try to imagine\nsome more complicated versions which are\nmore like a post Cristiano's thing where\nhave some chain right where each sort of\nagent teaches to the next agent in the\nchain so I think it might be interesting\nto think about that also but so far I\nhaven't constructed any mathematical\nmodel which does this thing but it might\nalso be used for directing\nokay there are other people writing but\nI guess I could field one of one\nquestions my own there are as I\nunderstand it three or at least three\nresearch agendas there is your learning\ntheoretic agenda the spall cristianos\nalba agenda and the 18th foundation's\nagenda\nit seems focused Yanis agenda focus a\nlot on the being very strictly\ncompetitive with the the best machine\nlearning techniques deep learning etc\nwhereas the the agent foundations agenda\nis all the way to the other end of the\nspectrum with not focusing on\nperformance and all at all and just\ntrying to find a feasible solution it\nseems you're learning theoretical agenda\nis somewhere in between is that a\nreasonable characterization so I think\nyou could say that in some sense it is\nin between so it is similar to pause the\ngender in the say in the sense that in\nthe sense that I'm using mostly sort of\nconventional or relatively conservative\nmathematical tools formalism which are\nusually which people developed for\nmachine learning it's sort of different\nfrom Paul in the sense that are more\nfocused on so Paul seems to be focused\non constructing there's some informal\nsolutions which are good enough in some\nsense and then somehow gradually\nformalizing them later I'm my focus is\nmore on doing things with where you can\nactually prove theorems completely\nrigorous way even if the theorem is\ninitially in some very simplified model\nso I prefer to start with a very\nsimplistic model but about which you can\nactually pull things rigorously and then\ngradually move on to more and more\ncomplex models rather engine so don't\nimagine the final solution and only then\ntrying to make it rigorous and also I am\nso the focus on reinforcement learning\nwhile Cristiano prefers to sort of a\nmore roundabout approach where instead\nof doing the enforcement learning he\nthinks about things like systems that\nanswer questions and things like that\nwhich somehow in a roundabout way allow\nus to create systems that connect on the\nworld so yeah like the agent on the\nother hand on nearest agenda is well\nit's similar to mine in the sense that\nalso tries to construct some formal\nmedical theory but it's different in the\nsense that well the mathematical tools\nin nearest agenda and the sort of the\nsubproblems that the emphasize are\ndifferent from from the em from the\ntools that I use and the emphasis in my\nagenda okay yeah\nRafe had a question in the chat I don't\nknow if you can see the chat does the\nadvisor have very completely benevolent\nwhat if it advises the operator to do\nsomething because it's good for the\nadvisor but bad for the operator well at\nthe moment are assuming that you know\neveryone all the advisement overs are\nboth benevolent so I'm not trying to\nthink about any conflict of interest\nbetween the humans so it might be\ninteresting at some point of the future\nor maybe even its importance of the\nfuture we'll think about scenarios where\nyou have several humans that have like\ndifferent interests or sort of trying to\npull the and different directions but I\nthink that this is like but for my point\nit's some kind second-order fact like we\nneed to if we don't even know how to\nsolve the problem you know in the case\nwhere all the humans are perfectly\nbenevolent incorporated them they're\ngoing beyond this is somehow seems too\nambitious this point\nthe next question from Alexander is\nabout the relationship between\nhomomorphic encryption and the AI box\nproblem this is something we've actually\nbred in the as50 reading group paper\nabout I think half a year ago or\nsomething so I could get the but the\nreference but that you could you sketch\nhow that works yeah yeah so so say some\nchatter explain it briefly so so a\nmemory decryption is a way to well\nexecute a problem in encrypted form then\ncreate both the program and both the\ninput and you get an encrypted output\nthat you have them to decrypt and the\nproperty has is that if someone who\ndoesn't have the key looks at the\ncomputation then then this someone\ncannot distinguish between this\ncomputation and just some random program\nwhich is running on some random input\nand why is this okay so why is this good\nfor sort of boxing VI because it means\nthat if you execute your AI in this\nencrypted form then well you see if we\nassume that the universe is also limited\nin computational resources and like it\nseems like it is so from our\nunderstanding of physics the universe is\nin some sense in the complexity class P\nQ P so it can only do things which are\nin some sense running quantum polynomial\ntime which should still be not enough to\nbreak the encryption scheme probably of\nits own snow rhythms conjectures but\nassuming it's not enough and it means\nthat the universe the physical universe\nwill not be able to tell what is we\nwon't be able to distinguish between\nthat the program runs inside the box and\njust some random problem and this means\nthat this program will never be able to\nhave any unintended side effect on the\nuniverse which you know any side effect\nthat it would have destroying a random\nso you just cannot optimize the universe\nin any bad direction so if you had\npurely information theoretic encryption\nit's like a home about freaking company\nI'm not sure maybe I just read the paper\ncould you set it to me sir yes in the\nmean time which is this it seems this\nlearning theoretical gender it's mostly\nprobably limited to a GIS agents that\nare based on learning there are other\noptions for artificial general\nintelligence agents that are search\nplanning or based on logic to see could\nthey be aligned with some of the same\nmethods so I so my point of view is\ndifferent so I do not see this agendas\nlimited to a particular type of agent\nand I think this is actually important\nso the starting point of my agenda is\nthinking about what sort of mathematical\ndesiderata\ncan be satisfied by any sort of agent\nyou know one said what is the other are\nimpossible cannot be satisfied one days\nin your honor are possible and can be\nsatisfied and then you know and then we\ncan say something more even more\nquantitative about the satisfaction of\nthese conditions so I'm trying to get at\nthe mathematical conditions that some\nalgorithm should satisfy in order for us\nto call this algorithm an intelligent\nalgorithm so it doesn't matter what is\ninside the algorithm from the inside the\nalgorithm might use it might use neural\nnetworks it might use formal logic it\nmight use something that you don't even\nknow about currently but the the point\nis that if we prove some theorems that\nyou know something cannot be satisfied\nby any algorithm then you know it\ndoesn't matter what lower court we'll be\nable to do it on the other hand if we\npulled it know that some criteria can\nsatisfy them it can be said so I then\nyou know it doesn't matter how we we did\nthe important thing that he says but it\nactually sets by this criteria so so my\npoint of view is sort of trying to\ncorrect characterize what is mean for an\nagent to be intelligent rather than\nstudy the properties of particular\nalgorithm yes I don't think that's been\nLucas has just written a message in the\nlist of problems with I see a I X I I\ndidn't recognize anything that looked\nlike the ontological identification\nproblem is that problem problem you\nconsider important ok so this is so ok\nso this is an interesting question so\nthe thing is that this ontological\neducation problem it is possible that we\ncan in some sense avoided but ok so why\nis it possible that we can in some sense\nof worried it because like ok if you're\nif you have some agent that has some\nontology or about the universe and its\nvalues are defined in terms of this\nontology then you can probably still\nexplain its behavior in terms of some\nutility function that is defined just in\nterms of actions and observations\nbecause the beliefs that the agent has\nabout the objects in its intelligent are\na function of its the observations that\nit has seen and therefore we can\ntranslate the to the function from\nwhatever ontology the agent is using\nintrinsically through the space of\nactions and observations and therefore\nif we so it might be enough to just look\nat additions that agents that only look\nat actions and observations and then if\nwe have in particular a very learning\nparticle for some agents then it then\nsuch agent will be able to\nvarious without somehow explicitly being\nwith the ontology thing on the other\nhand it might still be interesting but\nbecause this translation of utility\nfunction from some ontology collection\nobservations can starting from some\nsimple utility function produce some\nvery a complicated function which some\nless good mathematical properties then\nit might be useful to to do the opposite\nwhich means which means trying to model\nthis ontology thing explicitly and then\nfor example we can think about it in\nterms of an agent which plays as a\ncasting game so an agency writes with\nthe environment with okay the agent\nworks with some state with some with\nsomething which represents its internal\nvalue ontology and this something is\nacted upon some other agent which\nrepresents the external environment\nwhich represents nature so to speak and\nthen au revoir function can be a\nfunction of the states inside this\nontology so I think this mathematical\nmodel in my view captures sufficiently\nnicely what it means to have variously\nfind the so intrinsic ontology and then\nwe can ask what sort of we read bounds\nwe can prove in this formalism and what\nimplications doesn't have about very\nwell and so forth so this is so I think\nthis is an interesting research\ndirection\nI'm not search not currently sure\nwhether it's a high priority or less\nhigh priority but certainly it's\nsomething that also we should think\nabout okay I have another question and\none thing that your article didn't talk\nabout was the concept of courage ability\nit seems like the addition of courage\nability would makes a lot of these value\ntransfer\nprotocols easier in the sense that if\nthe 18 was sufficiently corrigible the\nproblem of transferring the values would\nbe simpler in that we if we did not hit\nsuch a narrow target then that doesn't\nmatter so very much if the agent is\ncritical so so personally I don't like\nthe concept of courage' ball very much\nand the reason I don't like it is\nbecause I feel it is not sufficiently\nformalized and I feel the different tip\nto the point I think the different\npeople use it to mean different things\nso it's not even specified in the\ninformal level not not mention the\nformal level so the way I so the general\nmethod of my research agenda is that the\nway we arrive at alignment is by proving\nsome performance guarantees which for\nexample take can take the form for grade\nbounds and this performance guarantees\nthey they show they should imply that\nyour agent converges to the RPM value of\nthe utility function where the utility\nfunction is the correct tilted function\nof the human in some sense so this is\nsort of the the way I am trying to\nmathematically formalize the concept of\nalignment and to prove the certain\nalgorithms are alive so from my\nperspective if I manage to prove this\nthen I don't need college ability or you\ncan say that this is you can just say\nthis is what courage ability means if\nyou want and if I cannot prove it then I\ndon't see how college ability is going\nto be enough so at this point I just\ndon't see it as very useful concept on\ntop of the concept that I already\nintroduced in the agenda ok and we are\nnearing the end exactly has a question\nabout further to this homomorphic\nencryption scheme\nand I don't know if you have any if you\nhave can read it yeah if you have yes I\ncan read the chat so so fourth question\nis there a possibility results are\nunconditionally secured market\nencryption in other words can you only\nhave our confusion schemes that we need\nto assume that verse we had only time\nwell well you need to assume something\nright so like if if you're if the\nadversary the de strive to break the\nscheme has unlimited computing resources\nthen you can always sort of try\nimpossible keys for example or something\nlike this so in general encryption\nthere's such thing as like information\ntheoretic description like one-time pads\nand things like that but information\ntheoretic encryption doesn't help you\nhere because for information theoretic\nencryption to work you will need to\nblock the passage of information between\nthe I and the universe in some way but\nthis just seems physically impossible\nbecause like there's no way you can you\nknow you can completely eliminate flow\nof information so okay so in some sense\nthe other approach that I mentioned form\nof verification it's not really\nencryption but you can sort of think\nabout it as this sort of information\ntheoretic security attempt because so\nthe idea there is that we sort of have a\nmodel of of our of our hardware and we\nformally proved it inside this model the\ncommunication bandwidth from anything\ninside the computer to anything outside\nis bounded by some small constant and\nthen we will have some learning\ntheoretic game theoretic analysis which\nshows that this constant is good enough\nand but then again to help something\nlike this you still need to assume\nphysics so your analysis would still\nassume a physical model so I don't see\nany scheme that will not assume anything\nabout physics because like this seems\nalmost by definition impossible I might\nbe missing something but we just don't\nsee how it can be done okay I think we\nare just about ready to wrap up so the\nfinal question will be from Lucas asking\nif you have written anything about\nontological identification so so no so\nunfortunately not so this is a topic so\nI just have not did not have time yet to\naddress these topics there are main\nthings I believe that I will write\nsomething at some point but currently I\ndon't have anything yet\nokay then so I will just say good luck\nof course with your research agenda and\nthank you very much for coming here\ntonight and we've enjoyed it very much\nand then to everyone if you want to join\nnext week at the same Wednesday\nequalities 207 UTC will be reading the\narticle the malicious use of artificial\nintelligence and discussing that and\notherwise I will just say thank you for\ntonight and see you next week", "date_published": "2018-08-09T21:13:12Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "2e5e761df0461b01797bd7c414177abf", "title": "199. How close are we to creating Artificial General Intelligence", "url": "https://www.youtube.com/watch?v=wjq-PHQGIug", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "okay hello everybody\num welcome to the\n199th edition of um\nyeah i safety denmark reading group\nuh i'm presenting it tonight i'm chris\ncooper\nand we're talking about the ideas of\nthe physicist david deutsch on\nartificial intelligence\ndavid deutsch is uh a visiting professor\nof physics\nat um oxford university\num he was a founding member of the\ncenter for quantum computation\nand that's what he is mostly known for\nthat he as a pioneer of computation\num his citation for being elected a\nfellow of the royal society which i'm\nshowing here\nexplains that he laid the foundations of\nethereum computation and he's made a lot\nof\ncontributions subsequently\nhe is known for being an advocate of the\nmultiverse interpretation of quantum\nmechanics\nhe champions uh cal popper the\nphilosopher\nof science and he applies\npoppa's ideas very strongly in\nas we'll see throughout this paper and\nuh david deutsche is also known for\ndeveloping um\na personal uh\ntheory of physics or um organizing\nprinciple of physics called constructive\ntheory\nwhich apparently seeks to express all\nfundamental scientific theories in terms\nof other equality between\npossible and impossible physical\ntransformations\nand that is all i know about\nconstructive theory\nthis discussion is centered on a paper\num\n[Music]\nwritten very much at the popular level\nuh it's called creative blocks it\nappeared in the magazine\neon i've called it how close are we to\ncreating artificial intelligence because\nlet's use a better\ntitle and it actually appears in the url\nof\nthis paper um the paper argues that\num we've made virtually no progress in\ndeveloping\nagi to date but nevertheless we\nshouldn't be downhearted because we may\nvery well be very close to doing so\nnow um david deutsch's key claim\nis that agi\ngeneral artificial intelligence must be\npossible\nhe reckons that he is the one who has\nproved that\num he says that everything that the laws\nof physics require a physical object to\ndo\nany physical system including in\nparticular the human brain\ncan in principle be emulated in\narbitrarily fine detail\nby some program on a general purpose\ncomputer\nprovided is given enough time and memory\nhe says that the first people to guess\nthis and to grapple with its\nramifications were babbage and\nlovelace back in the mid 19th century\num in the 20th century cheering\nhe says fully understood universe\nuniversality\nbut um and you may would have thought\nthat he'd actually\num prove universality but though which\nsays that\nit actually remained a guess until the\n1980s when\ndavid deutsch himself proved it using\nthe quantum theory of computation\nnow turing concluded that a computer\nprogram whose repertoire included all\nthe distinctive attributes of the human\nbrain\nincluding feelings free will\nconsciousness and all\ncould be written um i haven't had time\nto go back and look at turing's um\nclassic paper of 1950 which we discussed\nin this group and i'm not quite sure\nwhether he actually said this or whether\nhe was proposing the famous turing test\nas a way of sidestepping\nthese very difficult questions just\ngiving an operational test as it were to\nreplace them\nso i should be going back and looking at\nthat at some point but\ni'm sure actually deutsch probably knows\na lot more about what tiering thought\nthan i did\ndavid says that this astounding claim of\nturings\nsplit the intellectual world into two\ncamps\none insisting that agi was nonetheless\nimpossible\nthe other one insisting that it was not\nonly possible but\nimminent and\ndeutsch says that both were mistaken\nthe first camp thought that ai is\nimpossible because of the\nof the unique properties of the human\nmind um\nand that tend to you know possibly be\nsupernatural qualities or ineffable\nqualities to do with morality and\naesthetics and religious sense and\nsubjective feeling and stuff\nthis camp failed to understand\nuniversality of computing\nin other words that um\nthe human brain is a physical object and\ntherefore all its workings\ncan be simulated on a computer\nthe second camp thought that ai is\npossible and imminent\nthey fail to realize that the\nfunctionality of the human brain\nstroke mind is different from\nall other computer functionalities\ndeutsch says that the core functionality\nrequired for ai\nis creativity specifically\ncreativity in producing new explanations\nnow the concept of explanation\nis central to deutsche's thinking\nit has to be contrasted with the mere\ndiscovery of correlations\num or even\nthe mere production of successful\npredictions i say the mere production\nobviously that's a big thing but\num even that isn't quite the same as\nexplaining\nit's pretty much synonymous with\nsuccessful theory\nwhatever that means it and that of\ncourse does include successful\npredictions\nanyway um explanation is very much a key\nterm\nfor deutsche uh deutsch uh contrasts\nprogramming of the computer to do\nsomething very simple like converting\ntemperatures from centigrade to\nfahrenheit\num he contrasts that with programming\nand agi to address\na problem in physics such as the nature\nof diet matter\nthe temperature conversion could be done\nwith a lookup table or it could be done\nby means of an explicit calculation\nand he seems to regard\ncurrent ai achievements to date as\nsimilar in kind to this\nsorry i'm throwing up extraneous\nexcuse me\nonwards so with regard to\num creating a new physical theory\nhe says suppose you were somehow to give\nthe agis program as a list of\nexplanations of dark matter that would\nbe acceptable\noutputs of the program if the program\ndid output one of those explanations\nlater none of those explanations would\nbe new\nyou would already have created them\nyourself in order to write the\nspecification\nokay i don't think anybody would argue\nthat that\nwouldn't be too impressive\nhe says we need an agi algorithm with\nthe right functionality\nin other words able to create theories\nthat does not involve first making new\ndiscoveries in physics and hiding them\nin the program he wanted to do something\nthat we can't do\nin advance\nresearchers have discussed tests of agi\nbut these give no clue as to how the agi\nis to be implemented the turing test for\nexample\ngives no clue as to how it can be passed\nand he mentions in this context uh\nevolutionary algorithms there's one\ntechnique that\ncannot succeed um i don't know how the\nunfortunately\nperson who's um reading this article\num it's supposed to cope with the idea\nof evolutionary algorithms because he\ndoesn't\nexplain much but um\nokay obviously as i'm sure everybody\nhere knows\nuh evolutionary algorithms involve\nuh producing some trial agi programs as\ntest candidates you judge their success\nyou select the most successful\nyou make copies of the most successful\nones with slight\nrandom variations you then test those\nselect the most successful of them\nand you repeat the process for many\ngenerations and you accumulate\nlots of small improvements to get an\nimproved algorithm and that's\nof course a technique which is used to\ndevelop\num improved\nalgorithms now narrow ai algorithms\ndeutsch says that the turing test cannot\nitself be automated\nwithout first knowing how to write an\nagi program\nsince the quote judges of a program need\nto have the target ability themselves\nin other words\n[Music]\nhere the judges are the parts of the\noverall automated procedure that assess\nthe success\nof the candidate programs they are the\nequivalents of human judges in an\nordinary turing test\nso he's saying a turing test might work\nbecause human judges\nknow how to judge human behavior um\nan automated version would also need to\nbe able to judge that and you couldn't\ndo that without\nessentially having written your agi\nprogram in advance\nyou've got to have the ability that\nyou're supposedly\ntesting\nhe says that agi cannot possibly be\ndefined purely behaviorally\nbecause in the classic brain and of that\nthought experiment\nthe brain when temporarily disconnected\nfrom its input and output channels\nis thinking feeling creating\nexplanations it has all the cognitive\nattributes of an agi\nso the relevant attributes of an agi do\nnot consist only of the relationships\nbetween inputs\nand outputs\nso you can't judge an agi by its\nbehavior because you could cut off all\nits\nmeans of behavior and it would still be\na functioning agi\nthe upshot or this is that unlike any\nfunctionality that has ever been\nprogrammed to date\nthis one can be achieved neither by a\nspecification\nnor by a test of the outputs\nthat's rather dense conclusion\num\nand very arguable i'm sure we have a lot\nto say about it\nwhat is needed is nothing less than a\nbreakthrough in philosophy\na new epistemological theory theory of\nknowledge\nthat explains how brains create\nexplanatory knowledge\nand this theory will have to define in\nprinciple\nwithout ever running them as programs\nwhich algorithms possess that\nfunctionality and which do not\nso in some way we've got to be able to\ncreate these algorithms\nand know that they can do the job\nin advance of running this program and\ngetting them to do that job\nand as i say as you will gather it from\nmy tone it's not clear to me that this\nis so\nbut uh i'm sure people here will have\nlots of opinions about that\nnow the prevailing misconception in ai\nhe says\nin our research is that by assuming that\nthe future will be like the past\none can quote derive or extrapolate or\ngeneralize all these things in snare\nquotes\nthat one can derive these theories from\nrepeated experiences by an alleged\nprocess called\ninduction but that is impossible\nhis inspiration throughout this work is\nkarl popper as i mentioned\num right at the beginning and popper\nstressed the creativity of scientific\ntheorizing and the impossibility of any\nsimple method of extrapolation of the\nfuture from the past\ndeutsch gives the example of the\ncalendar um the very simple obvious one\nbut actually i think it's\num extremely uh\nrelevant\nhe says that i remember observing on\nthousands of consecutive occasions that\non calendars the first two digits of the\nyear were 19.\ni never observed a single exception\nuntil one day they started being 20.\ni wasn't surprised\nnot only was i not surprised i fully\nexpected\nthat there would be an interval of 17\n000 years\nuntil 19 appears\nin that position again because obviously\nthat will be the year 19 000\nnew year's day on 19 nineteen thousand\nseventeen thousand years ahead of the\nyear two thousand\nso that's a period that neither any\nother human being had previously\nexperienced even once\nso that expectation is not based on\nin any simple way on experience\nand i'm just adding the example because\ni like it the experience of turkeys who\nare fed by the farmer\nfor 364 days in a row and lacking an\nadequate theory of the farmer's behavior\nthe turkeys uh being good inductivists\nassume that the future will be like the\npast\nand that surprised on the 365th day he\ncomes with an\naxe\nback to deutsch he says it's simply not\ntrue that knowledge comes from\nextrapolating\nrepeated observations nor is it true\nthat the future is like the past\nin any sense that one could detect in\nadvance\nwithout already knowing the explanation\nwithout an explanation any continuation\nof any sequence\nconstitutes quote the same thing\nhappening again\nunquote under some explanation\nso you can say the calendar is always\ndoing the same thing over and over but\naccording to a rather\ncomplicated rule\nhe says in regard to how the agi problem\nis perceived\nthis casts thinking as a process of\npredicting that future patterns of\nsensory experience will be like past\nones\nthat looks like extrapolation which\ncomputers already do all the time\nonce they're given a theory of what\ncauses the data all right i won't do it\non that\nshoot on\nand he says something which i find\nextremely interesting\nin reality only a tiny component of\nthinking is about prediction at all\nlet alone prediction of our sensory\nexperiences we think about the world not\njust the physical world but also worlds\nof extractions\nsuch as right and wrong beauty and\nugliness\nthe infinite and the infinitesimal\ncausation\nfiction fears and aspirations and about\nthinking itself\nand to me that that's sort of very\npregnant\nlet's i'll just go back\num i just seem to profoundly important\nto me and just as a reminder of the\nrichness of thought and it helps to mark\noff the\nthe field of general ai this is this is\nwhat\na general ai would have to be like a\nconcern itself with and\nit's very strongly strongly marked off\nfrom that of the narrow ai that we're\nfamiliar with\nnow he has a he has a jab at bayesianism\nnot against the reverend thomas himself\nand not against basic statistics but\njust again\nmaking it you know the bill and end all\nof uh\nai research uh he says that the doctrine\nof bayesianism assumes that\nminds work by assigning probabilities to\ntheir ideas\nand modifying those probabilities in the\nlight of experience as a way of choosing\nhow to act\nand that behaviorist input output model\nis appropriate for most computer\nprogramming other than aji\nbut hopeless for agi\nand he says it's ironic that um\nmainstream psychology has largely\nrenounced\nbehaviorism the simple input output\nmodel\nwhereas ai research is\nwedded to it i'm going to be using a\npointer possibly um from\na lot from now on because um\nmy uh i ran out of time to do the\nanimations\nokay jeopardy the um\nfamously um the i it's the ibm it is ibm\nisn't it um\nwatson program um\n[Music]\nwon victories on the ustv\nquiz show called jeopardy jeopardy with\nan exclamation mark at the end\n[Music]\num\nand it was very impressive and it was\nhailed as you know i robot overlords are\non the way\num jeopardy i gather\nis pretty much a general knowledge quiz\nbut you have to also do a bit of\ningenious connecting of um\nthe question you're asked to the answer\nthat's wanted\num he says playing jeopardy like every\none of the computational functionalities\nat which we rightly marvel today is\nfirmly among the functionalities that\ncan be specified in the standard\nbehaviorist way that i described above\nno jeopardy answer will ever be\npublished\nin a journal of new discoveries so this\nis nothing like\nscientific discovery\nthere's now a long stretch of\nmisconceptions which he\nattacks another hopeless approach to agi\nis to start from existing knowledge of\nhow to program specific tasks\nsuch as playing chess performing\nstatistical analysis or searching\ndatabases\nand to try to improve those programs in\nthe hope this will somehow generate\naji as a side effect has happened to\nskynet in the terminator films\nskynet was a global defense network\nwhich just got so big that one day it\njust woke up\nbecame self-aware and then started\ncarrying out his evil plans for the\nhuman race\nand he says that a similar misconception\nthe whole misconception that\nincreasing quantity will turn into a\nchange in quality\num this kind of misconception is\ninforms these ideas\nthat agi is merely an emerging property\nof complexity\nor that increased computer power will\nbring forth agi\nhe says that a similar misconception is\nthat the unique abilities of the brain\nare due to its massive parallelism\nought to its specific neuronal\narchitecture\nand he says these violate\ncomputational universality in other\nwords\nanything that can be done by means of\nthese special properties mentioned here\nthat can be achieved with them can also\nbe achieved but\nby any um\nit can be simulated by any computer any\nturing machine i suppose i\ni have to speak carefully because i'm\nnot questioning what's a turing machine\nor it's a quantum turing machine or what\nyeah but they can be simulated\nby any computer by other computers\nbut this argument seems suspect to me\njust because um\ni can imagine somebody producing a\npractical computer design\num and it being criticized on the\ngrounds that uh\nyou don't need this practical computer\ndesign because it could be\nsimulated by a turing machine um\nso i'm not quite sure of the validity of\nthis\nanyway he ends with um a nice\nuh he doesn't end he goes on quite a bit\nbut uh he says at this point that\nexpecting to create an agi without first\nunderstanding in detail how it works\nit's like expecting sky's great words to\nlearn to fly if we go\ntall enough\num a frequent phrase often one that's\nattracted me\nuntil i read deutsche as uh anything we\ndon't yet know how to program\nis called human intelligence we just\napply the term\nhuman intelligence to the gaps between\nwhat we can program but what he is\nsaying\nis on the contrary that when we have\nwhen we achieve\nai its intelligence will like human\nintelligence be different in kind\nfrom other computer functionalities\nit won't just see that there is that\nwe've programmed\nnow i won't finish that thought i'll\nmove on\ni apologize once again giving you rather\ncrowded slides they were supposed to\nbuild up in a nice smooth way but as i\nsay i ran out of time\nhe wants to insist that the mind is not\nmetaphorically a computer it is\nliterally a computer\nhe quotes uh john cerrell\njohn cell is is he of course of the\nchinese room argument and we see him\nhere\nin his chinese room he's in his room\ngetting inputs\nin chinese um looking at his rulebook\nand um shoving out\noutputs in chinese perfect chinese\neven though he doesn't understand the\nword of chinese himself\nhowever this is strictly irrelevant and\nit's just there because i like the\ni like the pick um soil here\nis making the point that um the mind has\noften been modeled\non the technology of the day whether it\nwas clockwork\nor hydraulics or\nsteam engines or the electric telegraph\nand he says that we nowadays we are\nmodeling a picture of the mind on\ncomputers because computers are the\nand the latest and coolest technology\nthat we know about\nbut uh deutsche says\ni'm sorry joyce says\nno it's not a metaphor the university\nuniversality computation follows\nfrom the knowledge of physics his\nparticular contribution of the 1980s\num so there\nhere's yet another um well another claim\nthat he combats\nroger penrose the mathematician is one\nof those who has suggested that the\nbrain uses quantum computation or\neven something beyond that um\nand uh that this explains the failure to\ncreate\nagi on existing computers\nbut deutsch who want to know since he\nfounded quantum computing or as\na founder of quantum computing says\nalong with most\nresearchers in his field that um they\ndisagree\nthis is a plausible source of the human\nbrain's unique functionality\nand yet another misconception is that um\nexisting software\nis already intelligent in a sort of\nprimitive way\nas are um animals in a primitive way\nand that we because of our\nanthropocentric manatee\nwe don't recognize that it's continuity\nwith our own intelligence\nand\nas part of that uh another misconception\nis put great weight on self-awareness\nand deutsche says self-awareness in the\nbehavioral sense\nfor example to pass the mirror test\nof being able to use a mirror to infer\neffects about itself you know\nchimps and apes and apes and monkeys can\nlook in\nlook in mirrors and notice that um\nexperimenters have put something on\ntheir forehead or something like that\nand they'll immediately brush it off\num this is a fairly useless ability as\nwell as a trivial one he says\nat least it is as far as robots and ai\nsystems you can say\nit's true that agis will be\ncapable of self-awareness but that is\nbecause\nthey will be general intelligences\ncapable of awareness of every kind of\ndeep and subtle thing\nincluding their own cells now\num self-awareness is closely\nlinked with consciousness\ni hope everybody's watching the the red\ndot which i will go around to try and\ntell you\nwhich bit of the screen i'm looking at\num\nconsciousness um\nhas a huge range of meanings at one end\nof the scale there's the physical\nproblem with the nature of subjective\nsensations or qualia\nwhich is intimately connected with the\nproblem of agi so he does\ndo i just attach great importance to\nthis\ni think i said last week in our\ndiscussion but um\ni uh i can't remember that i said i was\ndreading having to talk about qualia or\nwhether\ni was glad i wouldn't have to as it\nturns out he says nothing about it\nreally except that it is\nthe problem of subjective sensations are\nintimately connected with the problem of\nagi\nbut he has nothing more to say about it\nreally\nstill talking about the the concept of\nconsciousness he says that\num at the other end of the spectrum\nconsciousness is merely that which we\nlose when we put under general\nanesthetic\nsomewhere disconnected from\nsensations for a while and animals\ncertainly have that\num he attributes the great importance to\nit\nnow a key absolutely key assertion of\nhis\nagis will be people\num\nthat they are people has been implicit\nin their concept from the outset\nif there were a program that lacked even\na single cognitive ability that is\ncharacteristic of people\nthen by definition it would not qualify\nas an agi\nand you should remember that deutsche's\nvery wide conception of what is a\ncognitive ability um\nearlier in this paper he described exp\nexperiencing boredom as a cognitive task\nnow um he makes\ni i've just imported\na sentence defining his idea of\npersonhood from his book\nthe beginning of infinity strongly\nrecommended\num but i've brought in this um\nsentence from there where he says people\nwhom i shall henceforward define as\nentities that create\nexplanatory knowledge\nand in this article that we're\ndiscussing here that definition crops up\nand is asserted as a fact\nthe fact that the ability to create new\nexplanations\nis the unique morally and intellectually\nsignificant functionality of people\nmeaning both humans and agis and that\nthey achieve this functional\nthis functionality by conjecture and\ncriticism\nthis changes everything he\num as i said he regards explanation as\nkey\nand he carries it to the point of\nregarding our ability\nto create good explanations\num that increase our mastery over\nnature and over ourselves um\nhe regards as as definitive of\npersonhood\nand in the book beginning of infinity he\nconnects this with\nthe technological power it gives us\nessentially unlimited technological\npower whereas\nall other animals have\nevolutionary niches which they're more\nor less confined\nwe are virtually unconfined because\nof our um\nunlimited you know technological power\ni haven't digested this argument so i\ncan't really say much more about it\nbut this is key\nso he then starts thinking about the\nimplications\nof the fact that an agi\nwill be a person it is not the computer\nbut the running program that is a person\nonce an agi program is running in a\ncomputer to deprive it of that computer\nwould be murder or at least\nfalse imprisonment if you were putting\nit in some other hardware\nor it would be slavery\nthis when\nagi arrives and we have the ability to\ninstantiate each program\nwith multiple copies deep philosophical\nproblems are going to\narise i think\nour copies of the program while they are\nstill executing identical steps\nbefore they have become differentiated\ndue to\nrandom choices or different experiences\nwill they be the same person or many\ndifferent people\ndo they get one vote or many if you\ndelete one of them\nis that murder or a minor assault\nthat is some rogue programmer perhaps\nillegally creates billions of different\nagi people\neither on one computer or remini what\nhappens\nnext\nto treat agis like any other computer\nprograms would constitute\nbrainwashing slavery and tyranny\nand cruelty to children too for\nprogramming and already running\nagi unlike all other programming\nconstitutes education\nin fact he says we have to forget almost\nall existing connotations of the word\nprogramming\nto ignore the rights and personhood of\nagis would not only be the epitome of\nevil\nbut also a recipe for disaster creative\nbeings cannot be enslaved forever\nhe goes on uh in the same vein as some\nhope to learn how we can rig their\nprogramming\nto make them constitutionally unable to\nharm humans\nas in isaac asimos laws of robotics\nor we want to try to prevent them\nsomehow from acquiring the theory that\nthe universe should be converted into\npaper clips as imagined by nick bostrom\nonce again i can imagine a reader of eon\nwho\nwas new to the subject being completely\nbaffled\nby that sentence but of course we are\nall familiar with the unintended\nconsequences that bostrom imagines of um\ntrying to try to create a super\nintelligent paperclip\nmaker he says\ndeutsch says none of these are the real\nproblem\nlittle aside from me uh my suspicions\nare always aroused when someone says\nthat\nbecause it always means they are about\nto divert attention to another problem\nhe says it has always been the case that\na single exceptionally creative person\ncan be thousands of times as productive\ngood or ill as most people\nthat therefore will be true of an agi\nthese phenomena have nothing to do with\nagis the battle between good and evil\nideas\nis as old as our species and will\ncontinue\nregardless of the hardware on which it\nis running\nokay\npossibly complacent\nso he says um how should society be\norganized\nso as to promote that improvement\ni'm sorry i didn't seem to have\nmentioned what improvement that was i\ndidn't think that was on the previous\nno slide sorry about that the\nimprovements\num which means the improvement of\nsociety in general or a society\nwhich incorporates energy highs anyway\nthe slogan enslave all intelligence\nwould be a catastrophically wrong answer\nenslave one intelligence that doesn't\nlook like us\nwould not be much better\njust because it's made of steel and\nsilicon rather than\ncarbon-based wet wear\nit's not a good reason for\ndiscriminating against that\nform of intelligence\nhe says learning must be something that\nnewly created intelligences do\nand control for themselves\nand um he does indeed have um\ncorresponding ideas about the education\nof children human children\nuh which\nhe believes can be done without any of\nour usual coercion\nand dogmatism\nso\nhe says the whole problem of developing\nagis is a matter of philosophy\nnot computer science or neurophysiology\nand the philosophical progress that is\nessential to their future integration\nis also a prerequisite for developing\nthem in the first place\nwithout papyrian epistemology this\nepistemology of\ncreative conjecture\nfollowed by criticism but not\nnot just bayesian induction or anything\nlike that\none cannot even be without such an\nepistemology one cannot even begin to\nguess\nwhat detailed functionality must be\nachieved to make an agi\nand i'm making another grumpy comment\ndown here that unfortunately\npreparing epistemology well certainly\nnecessary i think i\ni i buy into that certainly but it's\nclearly not sufficient\nbecause david deutsch is clearly not\nactually\ngiving us this\nthis hoped for philosophical advance\nhe says thinking of an agi as a machine\nfor translating experiences rewards and\npunishments into ideas or worse\njust into behaviors is futile\nbecause it is rooted in an archaic and\nwidely mistaken world\nview that's the false philosophy\nof science that he's talking about if\none works towards programs whose quote\nthinking\nunquote is constitutionally incapable of\nviolating predetermined\nconstraints when it's trying to engineer\naway the defining attribute\nof an intelligent being of a person\nnamely creativity\nbut although we need this big\nphilosophical step\nhe says the information how to achieve\nit must be encoded\nin the relatively tiny number of\ndifferences between the dna of humans\nand that of chimpanzees\nbecause that must be that which accounts\nfor the difference in abilities between\nus and chimpanzees\nso in one respect i namely deutsch can\nagree with the agi\nis imminent camp it is plausible that\njust a single\nidea stands between us and the\nbreakthrough\nthat's the end of his\narticle i'll just give my some responses\nhere quickly um\ni must say that after i'm reading him\nthe idea of creating a super intelligent\nslave\ndevoted to making paper clips first\nseems like a crime it's always\nit does seem like a contradiction in\nterms\num all these discussions of super\nintelligence which is\nreally literally our slave\num deutsche seems to be putting his\nfinger here a genuine dichotomy between\nthe merely skilled activities which are\nnarrow\nai is creating nowadays\nand genuinely creative activities\nbut i think he's i mean uh i think he\nproves too much we're in the sense that\nhe\ntoo quickly gets rather complacent um\nwhat we are creating now\nuh with machine learning uh it's going\nto lead to an accumulation of very\ncapable algorithms\nwhich can still be very dangerous and\nwell transforming in\nin all the ways we're familiar with and\num\nwe don't necessarily become safer even\nif\nwe can convince ourselves that narrow ai\nwill never become\ngeneral ai\num i'd like to know more about um\nexisting attempts at theory creating ais\ni mean i know\nback in the golden age of ai there there\nwere all sorts of attempts at fear\nimproving theory creating\ni'd like to know um how they're doing at\nthe moment\nand i um certainly don't understand the\nfull depth of his remarks\nwhen he said um\nthat we have to be able to specify aja\nfunctionality ahead of actually\nimplementing it um i i don't understand\nthe\nfull depth of all that is it impossible\nthe sort of things that lead to gpt3\ntoday could not lead to\na genuine agi which we we'd know when we\nsaw it even though\nwe wouldn't have been able to specify\ndetail how it happened\nfinally um granted that a creative\nentity must not and cannot be enslaved\ndoesn't that suggest that we shouldn't\nbring them into existence\ni'm sure he believes i i haven't um\ndidn't have time to find a quote\nto this effect but i don't doubt\nthat he believes that ai not only can be\nbut should be brought into existence\nhe generally thinks that life and\nintelligence should be spread everywhere\nso i'm sure that he would think that and\nso he\nreally does want to summon up the genie\nfrom the lamp\num and this seems\nit seems a bit rash to me in other words\nhe's not just um\nhappy if he hears um\nintelligent aliens in distant space\nannouncing that they're going to be\narriving in 50 years\nand that we should get ready he not only\nwelcomes that he would\nuh he'd be impatient and he'd want to go\nout and fetch them\nthat's my impression okay\nand that is uh all that\ni have to say on that paper thank you", "date_published": "2020-09-17T21:34:15Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "d9f3454a13fd7c65446495c791433862", "title": "226. John Fox on Is AI Safety a Progressive Research Programme", "url": "https://www.youtube.com/watch?v=5D8zELMw_8k", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 226\nin the ai safety.com reading group\ntonight john fox will be giving a\npresentation\njohn fox is a interdisciplinary\nresearcher\nfrom oxford university please go ahead\njohn\nsarah and thanks very much um hello\neverybody\num appreciate the opportunity to talk\nabout this\num i'm a bit of an outsider uh\ni would self-identify for most of my\ncareer as a cognitive scientist\nand i've been attending these events\ntrying to understand the ai safety\nprogram\nand how it relates to my perspective on\nai\nand cognitive science generally\num just the starting point\ni'd like to choose is a book that\nsubrata das\nand i wrote about 20 odd years ago which\nwas based on what now would record we\nwould call\na traditional ai and applied in medicine\nand for us medicine was a a very\nimportant challenge domain\nbecause it raised huge numbers of issues\nand questions\nof many kinds as well as just practical\nones of what's often called narrow ai\nsome years later i was surprised to kind\nof hear from\nnick bostrom who sent me a copy of his\nbook um\nand not long after that was invited to\nspend a few days at\nmiri um\ni think because they thought i was a\nrather strange fellow who didn't some\nuh uh kind of see\nai safety the same way which was sort of\ntrue but what i've\nincreasingly been committed to since\nthen is to\nunderstand the ai safety program\num and um and and how it fits\nwith the kind of things we've just been\ntalking about\npseudoscience and progressive science\nand so on\num i think you probably all know\nthat you know ai has always been\nmultidisciplinary i'm a\nmultidisciplinary scientist because i do\ncognitive science\num you know it's from the most simple\ni shouldn't call it that but the most\nfamiliar\nkind of everyday psychology to empirical\nexperimental scientific psychology\nthese days more and more influenced by\nneuroscience\nbut also a kind of evolutionary and\necological dimension\nand of course the philosophers have\nalways had a very strong\nview of this um who they overlap\nwith the the kind of formal rational\nattempts to kind of understand mind and\nintelligence and then kind of round at\nthe bottom i'm not saying this is a\ncomplete\npicture but just to kind of scope this\nout\nthat ai of course has a strong design\nand engineering tradition as well\num as a scientist my own personal\ninterest not so much about it\ntoday but is how do these programs\nconverge are they converging\nare we understanding ai and indeed\nnatural intelligence\nin a way that we is is really\nprogressing and extending\nso i'm going to try to use that lens to\ntalk a bit about\nai safety well again being very careful\ni'm\nkeen to try to\nsupport a conversation between\nthe communities not not to sort of say\nthis is a correct way of approaching\nthings\nso um we mentioned papa we mentioned um\num kuhn um the other\nof the three horsemen is lakatosh\nwho um i just found a little quote it's\nnot from him\nbut essentially a research program\nhe views as a sequence of theories\nwithin the domain of scientific inquiry\nwhere each successor theory and it's\nsurely been oversimplifying um\nis held to mark some sort of advance\nover its predecessor\nthat's kind of my starting point i'm not\na philosopher of science or any other\nkind of loss but\nso um i'd say don't think i have a\nuh i don't want to be taken as this as\nauthoritative\num there's many programs in cognitive\nscience philosophers have been\nparticularly\nactive in um which are\ni would regard as informal i'll try to\nbe clear about\nsort of things i mean by formula in a\nmoment um but\ntrying to expose the assumptions we have\nabout\nhuman minds um and non-human minds\num including the hard problem that david\nchalmers and many others talk about of\nconsciousness and subjectivity\nwhich is some way outside\nour current articulation whether in\nphilosophy or any other branch of\ncognitive science um\ni've always thought and this\npreparedness talk has kind of tried made\nme try to\nbe clear about this that cognitive\nscience aims to be progressive that it's\ngrounded in the real world\nit is incremental in in the lack of\nsense perhaps you could call it\nideological but\nmy interest has always been in\nbenevolent ai\nthough i worry increasingly about\nmalignant ais\nbut the thing that that has excited me\nas a scientist for\nalmost 50 years is that um trying to\nseek\na theory that is is convergent or not\nnecessarily only one theory\nand and what i've tried to kind of\nunderstand\nin attending these events um is whether\nthe ai safety program has\num analogous aspirations\nand what i've heard both here and many\nother places is\nthe importance that's attached to\nwhether the ai is\nin some sense humanly aligned it's\nhumanly acceptable transparent\npredictable\nand controllable um i i don't know\nwhether the\nthe community would regard these as as\nas key\nthemes or a complete set of themes\nbut that's what i've understood so far\num and a lot of the discussion is\nincreasingly over the last\nfew years uh debate perhaps\nis being expressed through a series of\nbooks and this is just a few of the ones\ni've read\nrecently um from stuart russell's\nbook to gary marcus and\nernest the book the book of rebooting ai\nand mike waldridge's book on road\nconscious machines\nmost recently brian cantrell smith and\nthey're all kind of addressing\nthe ai safety program either as a\ncomputer scientists which are\nstuart and and and mike woodridge\num um\nand ernest davis with gary marcus\nand brian cantrell smith\nwith the strongest psychological\nframework\nand that's a very active and ongoing um\nset of publications which i'm sure\nthere's going to be more of\ni said my particular interest is in\nwhether the cognitive sciences are\nconverging and i like to\nview these sciences in terms of three\nrough groupings\nthe experimental sciences observation\num interpretation hypothesis\nand test but also more\nthat's typically psychology and\nneuroscience but rational\nsort of foundations the mathematicians\nof many\nstripes logicians former formerly minded\ncomputer scientists\nand people trying to build kind of\napply that the the theories that come\nout of those\nperspectives into in practical\napplications\num and again that is a series of books\nthat i'm familiar with over a longer\nperiod from\nuh which is about which all in different\nways trying to say how can we bring\nthese three methodologies together\num in some sort of unified framework\num alan newell's book of mule and simon\nfame\nwas published in 1990 um\nuh had a particular very computational\nbut psychologically well informed\nuh attempt to to bring um the ideas\ntogether at the time\nbecause there's one book not i haven't\nseen anything like it since but paul\ncohen on on\ntrying to buy experimental methods to ai\nernie davis as we um heard about\na couple weeks ago very much more formal\ntrying to understand common sense\nreasoning common sense knowledge\nand and and also work\nhuge amounts of work in the uh\nneuropsychology and neuroscience\ngenerally on\num trying to understand the the\nanatomical and functional structure of\nthe\nthe brain and its expression in in\nmental processes\num\nso i hope that said kind of\num i'm sure it's deeply unfair both to\nthe people i've\ni've quoted and many others who i\nhaven't\num but it's an attempt to say to give a\nsense of\nwhat informs my thinking\nand and and do pick me up on things that\nyou think\num are mistaken or um\nnaive um later um\ni want to talk about the real world now\nwhich is we've already had mentioned\num i thought ashish was comment earlier\nthat\nthe thing about science is i think it's\nthe thing that keeps you honest\ni thought that was spot on and that's\ncertainly\num where i come from in the world as it\nreally is\nit seems to me has always been the thing\nthat keeps\ndid i lose connection\niris\nsorry i lost connection for a moment or\nmaybe\ni i didn't hear the last uh 20 seconds\noh i could be here\nso should i should i pick up from here\nsure thank you okay so\nthe starting point for me is is the real\nworld which\na lot of people talk about increasingly\nas a kind of challenge to\nto ai generally\nand some people even\nkind of view it as a set of kind of\necosystems that\nhumans and non-human intelligences\ninhabit um\nuh i've heard a lot in this discussion\nthese discussions\nabout the view of the universe as\nessentially a very very large\ncollection of probability distributions\nand that their\npreferred method for achieving both\nbenevolent\nand risking malign ais\nis to search though those distributions\nto find dependencies which will guide\nbehavior in that universe\ni don't know whether this is\nidiosyncratic but i would say cognitive\nscience research suggests that\nstatistic elements are present\nin in in the cognitive sciences and\ncertainly an important part of\nexperimental another\nmethodology but they're essentially\nsecond order and\nmost uncertainty is uncertainty in the\nagent's\nuh head uh not uncertainty in the world\ni don't know whether i should quote the\neinstein cliche\ngod does not play dice but despite the\ngreat levels of uncertainty\nuh i think most of that uncertainty is\nit is ours\nnot the universe and that's even true in\nmedicine\nwhich is of course um\na field which depends hugely on on\nstatistical and other methodologies\nand probabilistic bayesian and other\nmethods now i'll say a bit more about\nthat\nbut i'll suggest that medicine is\nactually just another\ncomplex ecosystem um with its associated\ntheories\num just a little bit tiny bit of history\nthis is a rather famous man called jj\ngibson\num who many years ago wrote a book\non visual perception and and his um\nit's still in print and it's still uh\nyou can still get it\num but it's a very sort of traditional\nuh psychological work um\nbut what he was interested in which we\ndon't understand vision\nwhether it's he never thought about\nrobots but certainly in humans and other\nanimals\nyou need to understand um the\nthe the geometry that what he would say\nthe perception of the world at a point\num and how it changed as the\nuh as as the entity moved around its\nworld\num more familiar perhaps to you as is\nneil and simon\num the great influences of me\num who were they were first to kind of\nuse chess\nas a uh as a tool\nan ecosystem i'll claim um to\ninvestigate human problem solving\nbut you know clearly there is a\nwell-defined\ndeterministic game here\nand that's still\ninforms a lot of ai but um\nbecause of the scale of the search\nproblem\ninvolved has been seems to have been\nparticularly\npicked up on in\nin the post singularity\nconversations um i like to\njust to kind of make it slightly more\nconcrete um\num i played far too much freestyle i\ndon't know how many of you do\num i'm not a great player by any means\nbut somehow rather it\nit comforts me as i'm trying to think\nabout hard things\num and um it's evident to me that\nthat you know i've learned whatever\nskills i have\num using what i think would be obvious\nto all of us\nyou know a whole variety of different\nrepresentations which are\nspatial logical a bit a bit of\nprobabilistic reasoning of a um a rather\nsimple\nkind um there's strategic things\num making opportunities making space\num knowing things that can i can push\nthings and there'll be a cascade\nthere's lots of different things and but\none thing's very striking of course is\nthat there's lots and lots of local\nsymmetries\nso that the same theory or the same\nmethod\ncan be applied in all sorts of\nsituations so that\nthe combinatorics of this are far\nsmaller than their peer\nand and and i hope somebody will correct\nme um but it seems to me that that's\ntrue of go and chess\ntoo which are games that people have\nmade a huge amount of capital out of in\nterms of the\ncomplexity of search spaces um\nthat ais have to operate in\nand learn over and and and\nyou know you can play go with a nine\nnine by nine board probably\neven smaller than that and it's\nessentially the same game and even\nwithin the small game there's huge\nnumbers\nso that actually the true it seems to me\nand and i'm very willing to be corrected\nit seems to me that\nthat that this is over it's inflated\nthe the claim that these are real\nchallenges um\nnot the people do them as well as um\nalphago um uh there are lots of reasons\nfor that but but that the games\nthemselves are\nsomehow representative of the of the the\nkinds of challenges that we\nface in our uh real world ecology\num the most recent book i've mentioned\nis by brian cantwell smith\nreckoning in judgment the promise of ai\nand he\nessentially has the same intuition that\ni have which is that\num human\ndecision making reasoning all the other\ncognitive\nskills we have um depend on many\ndifferent kinds of representations many\ndifferent little theories\num and um and this is actually\nalthough clearly a very\nunpopular may not be the right word but\nnot widely accepted in the\nsafety community excuse me\num this is actually a better solution\num than just blind search for\nstatistic learning statistical\ndependencies\nand applying those to to kind of compute\num uh expected value\num i think i would stick my neck out and\nsay\nthat not only is that basically true but\nthat effect will increase with the scale\nand heterogeneity of the ecology\nso as we try to throw ais the things\nwhich\nhumans do\nbecause complex like meds\nwill find that the effect of gets bigger\nso let's say a little bit about medicine\nuh\nyou know people have been trying to\napply ai techniques for\nas long as i've been in the field to\nmedicine um\nand with the the recent explosion\nuh of interest in machine learning\num and so on we're seeing again another\nseries of books focused on medicine with\npractitioners\nwriting books taking views on whether\nthey think ai is a good idea or not\num i'd like to suggest medicine is a\ncomplex ecosystem\nand probably among the largest most\ncomplex domains of human knowledge and\npractice\nyeah i i can't think of a more complex\ndifficult one it's science base\nis vast and continues to grow\nquickly and again as with\nthe silly example of free cell um\nit's informed the practice of medicine\nis is informed by\nlots of kind of theories about causality\nand\nstatistical dependencies structure\ngeometry anatomy time biochemistry i\nmean\nthat list will go on a long way and and\nhuman doctors carry a large amount of\nthat information\num in their heads these are\nrepresentations which do\ngood work in different situations\nuncertainty pervades this pervades\nmedicine\nin every way you can think of um\nbut it's not kind of uncertainty really\nin the world\nthe real world knows what it's doing\nfairly deterministically\nthe uncertainties in our heads epistemic\neternity\nso just very quickly um a bit of history\num ai in the clinic the first person who\napplied\nbayesian methods um there's a man called\ntinder dumble the cardio cardiovascular\nsurgeon\nin uk um who\num wrote paper in 1972\nafter he built a little bayesian\ndiagnosis system\num and made himself very unpopular with\nhis colleagues\num because he showed that this what he\nused to dismissively call\nidiot base um could outperform\neven the most experienced um\nand\nsenior doctors in the specialty\ntim helped us we built another system\nfor also for diagnosis just a single\nsingle task what's wrong with this\nperson um\nand again another bayesian system um and\nwhich the\ndoctor had the computer in front of her\nand looking she could look at the screen\non the right which has\ninformation she talked to the patient\ncollected information\nput it into the computer and it updated\nthe probabilities\nin the in the left here bottom left of\nthe\nonly five up five decision options\num and um uh\nthat was a an interesting result but it\nwas even then\nin 1980 or thereabouts it was obvious\nthat this\nstatistical approach from purely\nengineering or practical point of view\nwas a black box\nand the clinicians were very\nuncomfortable about it as they are today\nas you know a huge debate\num picking up a\nquote from stuart russell um he makes\nthe remark the machine viewers\nmight as well arrive from outer space\nour chances of controlling\na super intelligent identity from outer\nspace are roughly zero\num uh i mean that might be true of a\nsuperintelligent\nentity i suspect he's absolutely right\num\nthough i think our doctors in fact had a\neven though they didn't really had no\nprobabilistic training\nthat they did have a reasonably sensible\nintuitive\num uh says what the numbers meant\nin a very simplistic way um uh\nand said he makes the argument that many\nare making\nthat to creating ai systems that\nguarantee we won't understand them\num are not a good idea\nand i'm sure everybody here you know is\nfamiliar with\nthis concern what it did though was\nlead us to kind of move on and say well\ncan we build ais that are more\nhuman-like\nand i'm going to jump forward about 20\nyears\num or more um and this is a view\nof a what's called a multi-disciplinary\nteam of\nuh in the royal free hospital in london\num\nwho look after all the patients with\nbreast cancer or suspected breast cancer\num it's it's a major hospital\num you can see a lot of people there are\noncologists surgeons\nradiologists pathologists nursing staff\nand others all with specialist knowledge\ndifferent pieces of the\nof the cancer and breast cancer\necosystem and they kind of come together\nuh um in order to kind of decide what\num interventions they think should be\noffered\nto their patients um i\nprobably can't really see this but what\nthis is is just a view of the kind of\nfor a particular patient um\na summary of the data that we have about\nthat patient which has been collected\nbefore the meeting\num and then it applies\na i would claim as a human-like\ncognitive agent model\nto make a set of suggestions for each of\nthose suggestions what what should be\ndone whether it's\nchemotherapy or or a genetic\nassessment or a variety of things there\nare many others\nthe system knows about 250 different\nvariables that potentially can be\nrelevant\nand then on the basis of information\nwhich is less than that much less than\nthat\nabout a particular patient it will then\ngive some recommendations to do\ntogether with the arguments for it\nagainst uh\nand and link back into the into the body\nof knowledge\nuh that supports those the reasoning\num um so that so\ni'll just briefly i'll give some more\nevidence that this approach was quite\nsuccessful but\num despite that being a fairly major\ncenter for breast cancer\num the application i didn't make it\nclear that was\nprojected on the on the front of the of\nthe multi-disciplinary\nmeeting room\nabout 93 compliant with um\nwith national guidelines in the uk and\nthe\nmachine made about 97 so it was a\nsmall but important improvement\num a separate piece of work with\nsomething called peter hammond\num we looked at clinical safety and and\nhow\ndo people describe their strategies for\nmanaging safety\nuh within breast cancer and indeed many\nother cancers and there's a\num basically a lot of little mini\ntheories or skills you know that what\npeople do is they prevent\nthings they look try to prevent things\nthat that are\nknown and likely to happen um they\noptimize the actions the secrets of\nactions they take\nthey monitor the patient um and so on\nthese very simple\nrules um but actually\nprobably i'm not sure whether\nstop mumbling i'm not sure unless you're\nnot sure whether that\nthe performance can be improved on by\nbut other methods because there isn't\nactually that much uncertainty in me\nin the practice um\nso i'd like to give you a bit of bit of\nintelligence a bit of a bit of evidence\nthat this kind of general approach\nis is is worth considering\nuh and and i think is on the road to\ncontributing to\nsome sort of super intelligence in the\nfuture um\nin in in not nick's book he says\nthat clearly the most advanced ai system\nis far below the human baseline on any\nreasonable metric of general\nintellectual ability\num this is a set of just a summary table\nof a set of\nreports in in peer-reviewed medical\njournals\nthat use the meth that we've developed\num and as far as i know there are no\npublished\ntrials uh that don't improve upon\nuh the performance of human clinicians\nso i think this these very strong\nstatements\num although they're very kind of widely\naccepted\num i i think need to be kind of\ndiscussed\num i think to me it's fairly clear\num that we could build\nnow uh a general purpose medical\nai that could do a very large amount\nof what human medical professions do\nand do it better\nso um coming back to this question of\nconvergence and um progressiveness\num so in mike waldridge's book which i\nstrongly recommend he says something um\nagain something that i can't quite\naccept which is that you know basically\nwe don't know anything about mind and\nconsciousness intelligence\nyou know how these things evolve how\nthey work um\nthey're utterly mysterious um\nand the conclusion is is that this\nfundamental lack of understanding makes\nstrong ais so difficult to approach we\nhave no idea where or how to begin\ni don't think these statements are true\ni think\nif you spend some time you know\ni'm not recommending anybody anybody\ndoes it i've been it's been a\nbit of an obsession of mine but trying\nto kind of understand\nwhat's happening in in in the cognitive\nsciences and whether they're\num it's converging um they're converging\non on a\nsmall number of theoretical frameworks\num i think you might find there's more\nknown\nthan you would conclude from this\nstatement and in fact one of the things\nthat\nreally struck me when i wrote when i\nread um\n[Music]\nthe super intelligence was that he gave\na set of\npossible pathways to superintelligence\nhe's he he he didn't have the example of\nsomebody will come up with a theory\num and and i'm not gonna make a\nprediction\nbut i think it's coming\nso is the agi safety program progressive\nby which i mean um\nin as a cognitive scientist is and\nfocused on human like ai\nis empirically grounded as we said\nearlier\nis it is is it is it keeping us honest\nare we in the real world do we have a\nkind of\nstrong formal mathematical normative\nrational whatever your favorite word is\ntheory is it useful and controllable\nand are the different theories which\nthere are still many\ncoming together what's not clear to me\num is whether a safety program\nwants to be progressive in any of that\nsense or even if it wants to be\nprogressive according to different\ncriteria um\nand i'll just put a few in there and and\nyou may correct them\num again as i said earlier this debate\nseems to be going on\nthrough a series of of\ni would say parties and books i think\nthis is the people who write these books\nare very knowledgeable very capable\nand they're trying to present their\npositions\num but that i'm not aware of a\nclear form of formal process of\npeer review and all the rest that's\ntypical of science\num if anyway if we are going to try and\nsign up to how we\nhow can we attempt to be programmatic or\nstuart suggests for some possibilities\num i'm not quite sure even having read\nthe relevant chapters what that really\ntranslates into\nbut you know it still doesn't leaves me\nwith\nyou know i'm not sure it's grounded\nwhich is the word i've used\ntwo or three times in these meetings um\nor that the experiments that people are\ndoing and i'm\nfocusing on on the\ncompute as the as a potential solution\nto\nreal world challenges i'm not sure\nwhether these things have been validated\nin the sense that most scientists would\nwho are worried about pseudoscience\nwould accept\num and i do think it would be helpful\nif the community is engaged with the\nmany other communities who are\nworrying about the the theoretical and\nand scientific problems\num so just to be provocative\ni'll close with a\nsuggestion for how we might develop a\ngeneral program for making ais safe\nfrom a cognitive science perspective\nthe first thing i noticed there is no\nsafety theory i haven't come across\nanybody\nthat correct me of course but who\nactually even says what safety is\nit just relies on in the common sense\nmeaning of that word\nbut it appears to be strongly\nlinked to the concept of alignment which\nhas a\ntechnical meaning\ni'm not aware of any kind of formal work\non\nmaking such critical decisions based on\na clear understanding of safety\num and there are certainly are working\nin logic\ndevelop two non-classical logics that\nwould be relevant in that context\ndescribed in the book i mentioned um and\nand from a practical point of view i\ndon't think anyone would be surprised to\nsay we need to\nhave good methods of design um and ways\nof proving\nthat they have certain properties like\nsafety properties but many other\nproperties too and there are\nagain in software engineering lots of\nwell not lots but some\num kind of quite quite versatile\ntechniques like model checking\num the next stage is to kind of learn\nis deploying these\ndesigns um and and i would recommend\nthat we learn contingencies in the\necosystem\nnot just in one representation like\nprobability which i think will prove to\nbe\nlimited value but in many\nrepresentations\nmany which are obvious um\nthe ais should be designed to\ncontinuously monitor and predict\nobvious hazards and re-plan anything\nthey're doing against\nconstraints and the hammond rules for\nsafe\nuh management of patients on\nchemotherapy are\na kind of simple example of that so\nthere's a lot to do\nand and finally um in terms of how do we\ncontrol\num these ais and i'm very mindful that\num the assumption is that\nthe ai's are in principle open-ended we\ndon't know what their capabilities are\num and so control is a huge problem\nfor that but there are already you know\nproposals around\nuh for ais to monitor their own and be\nbuilt to monitor their own behavior\nand predict unpleasant consequences\nand the final thought i'll leave you\nwith um\nis which i'm working on myself is that\nwhenever a medical ai\nan autonomous medical ai makes a\ndecision\nwe could be enforced that ai has to\nconsult\nany number of other ais with\ncapabilities in\nother parts of the ecosystem um\nmedical specialties and so on and may\nnot\nact um without\nauthorization from the the community of\nagents\nincluding humans so\nuh i not sure whether you will find that\nsatisfactory but i hope it's a\npoint of discussion thanks very much\njohn so there were a number of questions\nposted in the chat\nand i think if i wrote down correctly\nnow i can't find\nit was ashes who will have the first\nquestion\notherwise we'll go to i had the second\nquestion\num when you said that\nyou foresee um black box approaches\nas being not as applicable\nthe more complex the main groups and\nforeign\nthere was a bit of background could you\num could you repeat the introduction\nright i conducted myself can you hear me\nnow yes\nright um i was interested when you said\nthat you believe that white box\napproaches that are\npretty much designed by humans would be\nsuperior\nto black box approaches to ai where you\njust have a model\nand a bunch of numbers and the ai gives\nyou what you want\num how do you\nhow do you consolidate that world view\nwith the existence of something like\ngpt3\nwhich is probably the best tool we have\nfor for understanding language today in\nterms of an ai\nand it was basically made by you took a\nmodel you gave it\nall of the internet and it it gave you\nthe ability to\npredict the next words in a good\nsentence\nyeah i think that um\nthe all sorts of kind of issues around\nwhat i said i can i can see and we're\nnot going to kind of cover them all\ntoday but\nthat you use the word um gbt3\nunderstands the language i don't think\nthat's a common\ncommonly accepted um being able to\npredict the next word\nis a technical trick um\nbeing able to have a wide ranging\nconversation um\nis what most people i thought in in\ncomputational linguistics\nwould expect as a minimum\nso so being able to predict or give\nsomething is calibrated\nfor one specific skill let's call it\nthat\nvery well calibrated it is seems to me\nis a long way from being an agi well\nwhat do we care if anyone\nunderstands medicine if it can tell you\nwhat treatment to use\ni'm sorry if it can tell you what\ntreatment do you usually think that's\nunderstanding medicine\nuh because it only tells us what to do\nat the moment\nin very limited ways i mean people\nrightly criticize early\nexpert system database systems for would\ncall them narrow ai\num my my\nsuggestion that for the wide range of\nroutine things that\nhuman doctors nurses and other people\nhave to do\num is is you know it's a considering\nsignificant level of generality of\nexpertise\nbut it's hardly scratching the surface\nof what doctors actually do\nand we're and and and we have still a\nlong way to go this is i'm not\nclaiming that either that that this\napproach\nis has converged then we have a general\ntheory i'm not claiming that\ni'm saying that i think it is\nprogressive on a number of dimensions\num and um\nyou know if you remember the the\ndefinition like tosh's definition\nit is that we are developing theories\nincrementally to cover more and more\nof the available data um and\num i i mean i i guess your i don't know\nwhat your background is ashish but but\nmedicine\nis is often thought by\ntechnical people to be much simpler than\nit actually is\nand i don't i don't think there's a\nsingle\nmedical ai around that understands\nmedicine in the way i would use that\nword\nokay that makes sense thank you okay\ni had a question um it's in the chat\ni'll just read it aloud the tasks that\ndoctors engage in\nseem much broader than just medical\ndiagnosis prognosis or treatment\nlike convincing patients to do what is\nbest for them having a good bedside\nmanner dealing with byzantine\nbureaucracies etc\ncan we really make an ai that can\nreplace the doctor for all those tasks\nright now no\nthat's the same answer but i think\nthat's a variant of the\nof ashish's question um the uh\nand you're absolutely right um that was\nthe point i was making that that\nthe actual delivery of looking after\nother human beings\nis a very complex subtle um\nuh process um i'm\ni don't want to be and so to say say\nthat this\nproblem any of these problems are solved\ni'm saying we are\nprogressing in ways and i think we're\nfurther down the road\nthan is assumed um\nyou know so i i if you're interested i\nmean after what um\nibm um start the marketing department\nstarted making\nuh a fuss about what's and how it was\ngoing to um\num revolutionize cancer care\num i wrote a little article which made\nexactly the same point you just made\nwhich is that medicine is a more\num it's a bit more complex than the ibm\nmarketing department seem to think\nand and the main successes in it for\nexample\nin medical ai these days are in image\nprocessing of one card or another\nwhich doesn't involve much human it's an\narea where\nthe data are too overwhelming for human\nbeings that's generally recognized\num\nand i think it's um there's a lot\nto be done to break out of the the kind\nof\nyou know we'll solve we've solved the\nimage processing problem\num into wider areas of medicine\nokay so uh the next question is\nfrom me where i would ask um if there\nare any uh\nconcrete work concrete research papers\nbeing done\nin ai safety uh that you feel in\nparticular\nare not progressive not uh moving the\nstate of the art forward\ni'd love to know the answer to that\nquestion sir and the reason i found this\ncommunity was because i was trying to\nanswer it myself\nand as far as i could see um\nnobody was um even interested it's not\nonly in ai it's software engineering\ngenerally i mean there's a small\num uh and computer science there's a\nthere's a relatively small\nsoftware safety community um very\nsophisticated but it's very but it's\nvery\num uh it it's very technical and it\ncertainly doesn't\num i and so and they and they basically\nignored\nsafety uh uh ai's safety completely\num so when i discovered there was this\ncommunity\nwhere actually when i went to miri and\ndiscovered there were people\ninterested and then suddenly realized of\ncourse the problem they're trying to\nsolve is different from the problem that\ni've been trying to solve\ni've been trying to understand you know\nhow do we um\nkind of achieve some sort of engagement\nbetween these different worlds but no i\ndon't i can't point to at\ni think if we did the proper uh\nliterature\nsearch we'd find we'd find a few but i'm\nnot aware of any\nuh anything very substantial since\nsubrata and i wrote book in 2000.\nokay jack you had a question yeah\num so i've well i've got a couple but i\nguess\nthe first one that i'll ask is um just\nkind of as a\nbroad summary of uh of the talk you gave\nright i guess i'd like to present like\njust a\nbrief summary of what you said and see\nif that seems to line up with what\nyou're trying to say fascinating\nyes please okay so um\nwhat it seems uh what i what i\ngot from your talk is that it seems like\num\nai safety is uh\nor the ideas that rest behind ai safety\nis not\num a very formal or scientific um\nfield as a whole and it's not resting on\nthe kinds of\nprinciples that will let actual\nscientific progress\nget made um and\nthat's kind it doesn't seem like you're\nnot concerned about um\nmaybe having malignant in the eyes but\nit seems like you're concerned with the\nway we're going about\num trying to prevent bad outcomes\ni'm asking i'm just asking a simple\nquestion i'm new to your field\nand i've done my best um to\nkind of understand some of the emergent\nthemes and\nwhat people are committed to there's\nlots and lots i still don't know\num and and so you know\nas as zoen asked me you know any papers\num i would ask i mean that clearly\nthere's a lot there's the lineman\nnewsletter there's a bunch of things\nyou know i mostly i have to confess that\nyou read these rather broadly\nfocused books and try to observe the\ndebate that's going on\num but i do accept absolutely accept\nthat\nthe kind of safety um you know post\nsingularity if\nif that's a commonly used phrase\nai safety um that you're concerned with\nis different um from the\nkinds of things that i've been concerned\nwith for many years\num and i'm trying to find if there's a\nway that we can\nyou know cooperate um\nand and whether the participants i mean\ni thought i thought\nzoran's introduction about\npseudoscience was was right the\nparticipants in this community\num do they share um the\nkind of criteria that i i would\nsuggest for for a progressive research\nprogram\num or or different ones\nso that could be my next question could\nyou share your slides one more time and\ngo back to the particular slide\nwhere we're talking about some criteria\nfor whether\nai safety or whether a science is a\nprogressive science on it because it\nmight be worth\ntaking at least a little into into these\nto see whether\num we can say something about this\num so uh i think there's a\nfew slides further in um\nso that was my attempt to summarize this\npersonal view you know what i think is\nprogressive ai research and then i put\nup a few things which\ncriteria that the ai safety community\num might might also consider\nrelevant in judging their progress okay\nso um prit that's a progressive research\nprogram\noh sorry progressive research program\nyes i'm sorry\nso um grounded i i expect this is\nuh both in something like some of the um\nexact observations that we have for\ninstance we kind of\nassume that uh ais will\njust follow their their objective\nfunction of a cliff\nand then we go back into literature and\nsee has this actually happened and\nwe find examples and it looks like yeah\nsystems do this kind of thing so both in\nindirect observations and also in a lot\nof the other um\nmore strategic papers that the future of\nhumanity institute are making\nthey uh there's a lot of work trying to\nput to refer\nto refer back to existing strat existing\nwork on principal agent problems\nand and things like that so okay\nbeen glad with some guidance on what to\nread there um\nwhat i i realize now when i used the\nword grounded i i was overloading the\nword here partly was it empirically\ngrounded which you just said\nuh some some of the work is is empirical\num but really the stronger thing i had\nin mind is this side is it grounded in\nthe real world\num as an ecosystem that\nthat um the agent whether human or\nnon-human\num uh exists in and\nand to my mind there's very little\ngrounding of that kind\nso i think that would be my next\nquestion ecocomplete\nso could you explain what is uh i guess\necosystem oh well you're entitled you're\nentitled to criticize me i've just tried\nto find a snappy\nuh a snappy word um the\num uh what i mean is um\nif we if if you want to build\num an ai system that is\ncapable of either supporting humans in\na particular medical domain such as\ncancer\nor operating autonomously\num is have you built an agent that\nactually\nhas the sufficient knowledge\nto cope with all the issues\nobjectives outcomes in that\necology now a specialty of medicine is\nnot a very big\nin human terms it's not a very big\ndomain\nand clearly if you if you expand up\nto the whole of medicine and then expand\nup again to the hold of\nhuman expertise um\nwe pro having a property called\neco-complete\nis going to be just as hard uh to prove\nas anything else i say i put it in there\nto to\nkind of stimulate uh some debate\nrather than because i am proposing it to\nthe to the\nai safety community\nfair enough so um\nwhen i think about uh ecosystems in\nin ai safety i think about something\nlike uh\nfor instance and and agi might be\ncreated by the military and\nin that case there will be some\nstakeholders who are generals or might\nbe created by google\nand then it might be doing something\nelse is that the kind of thing you're\nyou're talking about on it's a little\nbit of a joke sir on things like np\ncompletes them\nyou know that sort of thing but i'm\nsuggesting it's not just about\num computational power\nin order to for an algorithm to\nsearch or or um\nto terminate on some arbitrary\nobjective it's that it has sufficient\nknowledge and i might say understanding\nof the ecology\nand and and compute power is is not the\nonly\nthing to consider there it's it's\ncontent\nokay uh we will just quickly go over the\nthe\nthe next two align compatible and\ncontrollable\num so um is aich a progressive research\nprogram\nin that i think it's pretty clear that\nwe're striving for\nfiguring out how to make ai aligned and\ncontrollable\nthat seems uh human companies that's\nclear that's why i included them they\nwere\nthey were very prominent and so it\nobviously had to appear there\nokay so if it has a question or\nokay sure so yeah this whole thing is a\nquestion really\num i would say um that\ni i'm still struggling a little bit\naligned\nuh but with the word aligned and i may\nnot have understood it but it does seem\nto be\nin some sense i mean as in in in stewart\nrussell's case\num the the the the ai's\npreferences are aligned with human\npreferences\num and and that's i\ni find that very surprising i mean human\nand if human cognition isn't like that\nwe don't have a whole set of preface of\npreferences\nthat then some ai can observers\nand and learn and then\nbut i may not have fully understood his\npoint\nokay there is a new question and let me\njust i haven't followed along in the\nchat so let me just see who\nwhose turn it is and that would be the\nuh\nthat would be uh ali\nuh principles for the field do you mean\nprinciples like\nbellman optimality or the principle of\nleast action or comes razor\nare principles in control system physics\nand science respectively\nwell disciplines have different\nuh criteria um and\nthe ones you've described are not\nparticularly familiar to me they're\nobviously come from some of the physical\nphysical sciences\nand mathematical models i suppose\num i i again this is a question and i'm\nnot um\nuh trying to tell you what your criteria\nshould be\num i'm just trying to understand what\nthey what they are and what the\ncommunity would take to be\na basis for judging progress\nokay chris has a question what does\nconvergence mean\nin a progressive research program and a\nui\nso i guess we are back here at the\nsame slide converging questions\nso what i mean by convergence in the in\nthe top bit and maybe\nthe cognitive sciences generally is the\nyou know there are probably hundreds of\nthousands of active\nthat might be over over a large number\nbut active\nuh cognitive scientists around the world\num hundreds at least and maybe thousands\nof different theoretical\nsort of perspectives um\nand we always have to ask ourselves the\nquestion\num you know are all those things in any\nway coming together\nso that we you know as they appear to\nhave done\nin physics and chemistry and and many\nother\nkind of hardcore sciences um\nand um and so\nif if if\nthere if there are many different\napproaches to\nai safety and maybe they're only a\ncouple i don't know\num then one again\nasks well are they in lack of touches\nsense are the each is each new theory\num uh better than the last or\nor or combining the strengths of more\nthan\none perspective as has happened in\nin classical sciences\nyeah i think that that\nthat makes sense i'm slightly confused\nbecause of often\num convergence often\ncrops up in the context of convergence\nof\nai\nvalues motivations and human values and\nmotivations but that's not\nquite that's not what's what in other\nwords in the context of alignment\nwhich yes yeah not what you meant to\nthat particular point yeah\nyeah it's like all these unifications\nthat we've achieved in the various\nsciences\nokay i think the next question is for\nyou and then my video is\nacting really strange when you use this\nrace hand\nfunctionality i don't really know why\nbut it starts flickering\num so um i think it's better if people\nwrite in the chat\nthat they have a question rather than\nraise the hand anyway liam go ahead\nuh it's not really a question it's more\nof a comment but in terms of\num trying to find\nconvergence within scientific fields\num i feel like one of the problems there\nis in terms of\ndata that uh you know\nthat the reason why in the 16th and\n1700s you could have a\nenormous boom in the physical sciences\nor the natural sciences\nis because the data is pretty especially\nin physics the data is pretty limited\num they're very very simple systems that\nyou can um look at and experiment with\num and they're more or less\nself-contained\num whereas in terms of something like\ncognitive science like we're looking at\nthe the spectrum of humanity\nit seems awfully difficult to have the\nsame kind of\num limits around the data like even the\nquestion of like what\nwhat counts and what doesn't count\nwhereas in within within\nat least within newtonian mechanics\nthat's like a very simple\nquestion of like where is the limits of\nwhat you're looking at\num so i feel like in terms of things we\nit could be a it could be a mistake to\nwant\num convergence within\nyou know the social sciences uh prior to\nhaving\num uh the the sort of the large amounts\nof data that\njustify um a broad scale hypothesis or\ntheory of\nof the human so to speak\nwell this is my intuition of the state\nof affairs\ni suppose data data for me is is is the\nsame as what\nsir and i think use the word\nobservations you know you\nand and as you say you know when\nwhen newton and many other\nsort of early physicists were doing\ntheir observations on\non light and prisms and all the rest\ni've forgotten all my\nhigh school physics um uh\nthey they had the advantage not only\nthere wasn't much data\nbut also it was highly reliable\nuh there wasn't much uncertainty in the\nresults and maybe it was harder than i\nimagined\num but this distance but\nwith the big data revolution um\nsuddenly it appeared that you know with\nprobabilistic and statistical methods\nwith very large numbers of observations\nwe could detect\nand learn dependencies\nover large populations of people or\npatients or\nyou know climate parameters\nand everybody got very excited\nbut even within the big data community\num there were critics who were saying\nlook you know\nthey're just data they're just numbers\nyou know we\nthe fact that we can detect dependencies\ndoesn't mean to say we understand\nanything\num and and i think then\nthat kind of i don't know whether i'm\nright about this but did that\ninternal debate kind of got\nforgotten when when the enthusiasm for\nmachine learning\nappeared and people started saying oh\nlook you know we can\nwe can now we've got so much compute\npower we can we can solve all these\nproblems\num just by data we don't have to\nunderstand anything\nand and and that's what you know some of\nthese books i've\nuh mentioned on the way that people are\ntaking different positions on whether\nthat's true\nokay i think uh can i do a follow-up is\nthat okay\nsure um within\nuh cognitive science in terms of uh\nresearch that's going on\num is there much research uh\ndone in terms of um communities\nor cultures which are\nvery very different from the west so i\ndon't mean very different in terms of\nvietnam i mean like\nextraordinarily different in terms of um\ngreenlanders or indigenous australians\nor something like that\ni honestly don't know liam um i'm\num and and that's my shortcoming i\ni think science\nuntil the recent um the anti-science\nmovement got going uh was western\nscience was\nwhere it is because it seemed to be the\none\nthat delivers um but i think\nthat's also rather self-serving view of\nwestern scientists\num and one way it's obvious\nis that we can't make a strong statement\nover strong statements about that or at\nleast i can't\num it is because we you know\nwe don't read chinese and we don't need\nvietnamese and we don't read lots of\nother things so\nthere may be lots of things going on\nthat we just don't know about\na friend of mine who's a chemist organic\nchemist that spent half his year in\nin in beijing and one of china um\nand and it's absolutely clear that you\nknow\nthe chinese are using western science\nfrom top to bottom um and the comp the\ncompetition\nis is is political\num i didn't mean in terms of methodology\ni meant in terms of\nin trying to construct a framework of\nwhat the human mind is\noh we'll see in terms of uh\nstudying groups that because because you\nknow there's a\noh it seems well you did say cultural\nyeah um yeah\nwell i mean i go back to saying i don't\nknow the answer\nthanks turn off and then check and then\nclick on awesome\num so could you speak up a bit uh\nmaybe yeah is that better oh yeah that's\nbetter\nokay um so\n[Music]\nyou mentioned earlier uh you mentioned\nearlier that you were trying to think\nabout ways to\num maybe see if these two\ndifferent views of ai can sort of\ncooperate\num and i think that's\ni think that's a pretty good thing and a\npretty great reason to have a talk like\nthis\ni'm also pretty new to this uh entire\nfield i've\nonly been studying this whole thing for\ni don't know maybe four months or so\nbut one thing i do know is when you\ntalked earlier in your presentation\nabout\nhow black boxes are not really suitable\nfor\nrelying on or for\ngiving lots of responsibility to um i\nknow that\nthe um that's one thing that\ni hesitate to say both of our\ncommunities\nagree upon and i know that at least\nthe ai alignment community has\nhas sort of empirical research going on\nin this area and it's called\ninterpretability research where the\nexplicit goal is to take things that are\nblack boxes\nand make it apparent either what the\nprocess of reasoning is\nor what the examples being used to\ngenerate outcomes are\nbut to make the\nthe the process much more transparent in\nwhat's going on and i feel like\ni feel like when you look at some of the\nrecent developments you can see that\nthere is a\npretty clearly empirical trend in\nin figuring out exactly how to gauge\nwhether or not\na prediction is trustworthy um or or\nfiguring out\ncertain kinds of reasoning from a model\nwhich might only have\nuh statistical correlations to go by\num that just sort of seemed like a\ncommon point that i thought it would be\nworth emphasizing\noh sorry i've missed i've obviously\nmissed something\nkey um i'm i'm very interested in this\nconversation\nbut i'm also very conscious that i don't\nknow very\nmuch about you know what the ai safety\ncommunity\nvalues um and and\nthere will be lots of different views\ni've looked at a number of\num youtube videos by young polsky and\nkinski and and as well as all the other\nai heroes um or emery heroes\num but i'm i'm still not sure\njust how many different perspectives\nthere are in berkeley is obviously\nvery influential um i think i think\nthere are some people in\nin oxford who um are very\nwell regarded as well um a different\nperspective sorry i'm i'm\ni'm i'm going on um i\ni only know my side of it and the\ncognitive science side well\num i don't uh i i would need to work\nwith\nwith other people if we wanted to kind\nof take this further\nabsolutely and that makes sense um and i\nwasn't necessarily saying that i was\nexpecting you to know all those sorts of\nthings um okay i mean i i barely know\nthem myself\num i was sort of just trying to say that\nuh some of these things which it seemed\nlike\nuh uh you you\nhadn't seen i'm just saying they they do\nactually exist and they are out there\nthat's all i was trying to say okay\nthank you that's great\nand i think actually uh ali's question\nis also\nan example of this where ali is i don't\nknow if you can see the chat\nwindow allen has a rather long oh it's\ntechnically it's not a question because\nthere is no question but\na description of some kind of\nimprovement in the\nthe field of ai safety where we now have\na good\nimpact measure a mathematical strict\nformal definition of\nsomething that we previously only had\nlike a vague uh\ndescription of and certainly no good\nformal description of and\ninsights as an example of something that\nis\num that is uh progressive in the in the\nobvious sense\nthat we are making progress therefore we\ncouldn't uh formalize this and now we\nhave a\nformalization that we are happy with\nright\nand and i'm sure there will be um a\nnumber\nof such advances\num and very some may be very narrow and\ntechnical\nand some may be philosophically deep and\nwide\num and you know if\nif we can kind of surface\nthings and might even\nhave some sort of a consensus across the\ncommunity\num that would be i think that would be\nsome historical importance actually\nokay um i also have a question but i\nshould also say that we are at the 1.5\nhour mark now\nso uh that means that of course john if\nyou need to\nleave or if you're tired or anything\nlike that then uh\nplease feel free to uh then we'll just\nsay thank you\nthank you otherwise i think there might\nbe more questions\num and i'm i'm happy to stand happy to\nsay and thank you all for\nthe interest you've taken and and uh\nlook forward to\noh and if anybody wants to copy the\nslides then\nyou can you can presumably post them in\nsome way they're\nsure if you can just share them somehow\nso if\ni can have another question uh and\nthat's\num that was what you said right that you\nhad time for more questions\nyeah yeah okay so um let me try to\nfind the place where our opinions\nprobably are\nperhaps closer to each other so a\nproblem\nwith medical ai is de-skilling\nin that if you get some ai that is\nreasonably good at some medical task\nthen eventually you're going to have a\nsituation where people you know as\nthe people with the experience before\nthey\nthey lose the ability to uh perform some\nof the tasks\nthat's you agree this is like a\nabsolutely\nabsolutely a problem that can happen\nit's one of the most one of the\nmotivations for um\nthis kind of human-like ai approach\nthat i've has always been a\nbit behind my research it is that\num if it's transparent in and\nintelligible in human terms\nand potentially can even tutor\nusers in human terms then then there may\nbe a\nstrategy in there for uh reducing\nthese skilling risks\nand i i think we we agree on this and i\nthink\nthe the the the main a or part of the\nai safety issue is that\nuh this is something that it appears\nwith things like gt3 like this is\nsomething that\ncould easily happen that we could get\nais that are productive\nand able to do things that previously\nhumans could also do but that are also\nnot able to tutor us that are not uh\ngt3 is basically a black box and if we\nsee this now if it's just like\na small part of radiology that is\nautomated done by\nais then the d skill might not be a big\nproblem\nbut as we peer further ahead into the\ncrystal ball\nthen it appears that more and more will\nhave a greater and greater d skilling\nand that seems extremely problematic on\nthe medium long term\nthe the uh possibly greater\nuh these killing so that would be my\nguess at where you and me are closest\ndo you agree that this is a conceivable\nproblem on the medium long term\nwell i think it's i think it's more than\nconceivable it will inevitably happen\nand we've lots of um kind of\nissues that are going to emerge that we\nhaven't\nrecognized yet but these de-skilling has\nhappened with\nwell it's certainly been discussed with\ncountless other technologies\num and most famously with you know\ncalculators\num and um but i think\nas far as i'm aware in most of those\ncases\nuh you know technical problems have\ntechnical fixes\num um i bet you know so i think even\nthough\nkids at school have\num calculators and allowed to use them\nin all sorts of ways\num uh the curriculum has been evolved\nand changed in order to compensate for\nsome of those\nthat those de-skilling things i'm not a\nteacher\num so i don't really know whether i'm\nright about that\num but um yeah i i would expect\nuh as as i've kind of said in the uh\nthe the miri interview\num i i think quite a lot of problems are\nsoluble with well\nwell understood engineering or practical\nsolutions\num there's just gonna be a lot more that\nwe\nhaven't anticipated\nokay um so uh in terms of\nin terms of the d skilling uh if i can\njust pick up on that quickly\nthat's all right um we have a situation\nin australia that's\nmaybe vaguely akin to\nai de-skilling people\nwhich is that our government prefers to\nimport skilled immigrants rather than\nuse our education system to\neducate australians um and\nand that's like led to all sorts of\nreally bizarre problems of like\nerosion of communities in terms of\npeople can't get jobs\nand also like a for students a\ndisillusionment with the education\nsystem itself\num so i think i think soren's\nquestion is is right on point of that\nit wouldn't just be that um we would\nlose\nknowledge as individuals or whatever but\nit has\nreally widespread effects um on the\nsociety at large\nsure that's right but in the case of\nsay excuse me radiographers\nthat seems okay to me rather like\num the example of driverless cars it's\nwhere\nthe the benefits from improving\ndiagnostic skills are so great that\nintroducing ai into you know um\nx-ray diagnosis interpretation of scans\nand so on\num the case that would be irresistible\nbecause\nyou know we want to get better um\ndiagnoses constantly and uh\nif that you know meant that one class\nof skilled people were now you know\nworse off because\num they that their skills were no longer\nvaluable\ni think society would have would would\nwillingly accept that\non a utilitarian calculus and likewise\nwith\ndriverless cars um if they are\nreally achievable achievable and uh well\nbeing achieved already\num the the saving in lives\nand the economic benefits are so great\nand that um\nit again it's probably um irresistible\nand so that's bad luck for truck drivers\num\nit's bad luck with people who enjoy\ndriving on on the highway and we're not\nlikely to\nto put much weight on that um when the\nprice is\nyou know much human life so there are\nsome areas where\ni i think we'll\nwe are not going to\nwe're we're not likely to to think that\nthat's a loss we're going to think that\nit's a net gain\nand so the question is where along the\nline\ndo we start to notice a loss\nof value as opposed to a gain\nwell as you know\nmcking has commented a couple of times\nbefore um\none route to net loss\nis the the kind of thing that we can do\nfor human benefit building\nbenevolent ais i can see all kinds of\nways they can be weaponized\nat scale\nwhat sort of weaponizing do you have in\nmind just in the straightforward\nmilitary sense or or\nyeah yeah i mean um\ni mean the kind of theoretical framework\nthat kind of underpins all our work\nis i believe\num actually has a very broad range of\ncapabilities and\num although it's not embodied in the\nsense of a robot\nwe don't build robots we build\ndisembodied applications for medical\npurposes\num uh i think there is a framework there\nwhich could be used to build a cognition\nengine\nfor for physical robots and\nvehicles and humanoid robots and all the\nrest i mean i\ni have no no convincing evidence of that\ni just\nuh believe it it is the case and\ntherefore you could pick up a lot of\nthis technology\nand add value to um existing\nweapon systems um\nmilitary vehicles and um and uh\nand and so on and um they could do a lot\nmore than they can do at the moment\nbut you but please treat that with um\nwith\nas evidence-free you know as a scientist\ni'm not making you claim i'm just saying\ni worry about it\num i guess i feel less worried if i\nthink\nwhenever i can think of a clear ethical\nprinciple\nthat can be laid down um\nand that could be accepted\ninternationally then\nyou know i'd i would feel safer so for\nexample\num the principle that\nif you use an ai uh say a\nin a call center to to to respond to\npeople\num if if you make it um\npart of the code of practice that this\nthing would source announce itself\nas being a digital system to the to the\nperson it's talking to and never\num never never pass itself off as a\nhuman being\nthat seems to me a clear um\n[Music]\ncode a clear principle that you could do\nin the code of practice\nsimilarly i this is too this is perhaps\ntoo utopian but um ideally\num in the case of autonomous weapons\nyou had a principle where\nan ai was assisting a human being in\ncontrolling those weapons\nas opposed to human beings theoretically\nsupervising\nsort of really autonomous systems\nthat would be a kind of safeguard i'm\nthinking of the analogy with\nwith in the case of airliners\nwhich are very largely flown by um\nby computers nowadays um\nthere are sort of two two philosophies\nthat you can work on one in which you\nhave\num a plane that's flown by computers and\nis supervised by human beings\nwhich sounds like a great idea except\nthat you\nrealize that human beings easily get\ntired\nare lazy and dodge the rules and so on\nuh and and lose their skills\ntheir employers are likely to to just\nnot train them properly\nbecause they're just minding the\nmachines uh versus the opposite\nphilosophy\nhaving human beings flying actually\nflying\nthe airliners hands-on with um\ncomputers in the background supervised\nsupervising them but not in the sense of\ngiving them orders but watching them\nsounding the alarm and so on so that\nhuman beings are forced\nto be active and vigilant\nand skillful\nthat seems to be a clear sort of\nprinciple that you can lay down if you\nknow if you can do that for\num airline pilots maybe you can do it\nfor um\nfor soldiers in the um generals in the\nfield\nmake make sure that they are the ones\nwho are\nchoosing the targets fighting the\nbattles and being assisted by ai as\nopposed to the opposite\nbut of course in the military field that\nbut it's very idealistic because\nyou know it's like long may idealism yes\ni fear i live in a slightly darker world\nthan you\nwith the the dozen or so\ngangsters that uh a recent american\npresident\nwanted to um hook up with um\ndon't think like that at all\nyes and i mean it's like it's like\nhaving the the it's like laying down the\nprinciple for safe at land mines but\num uh land mines have been able to go\ninto certain principles such as\nself-destructing after one year um\nthat you must always map exactly where\nyou put lay your land mines and so on\nand so forth\na great principle guiding principle but\nobviously\nin in the heat of in the heat of real\nwar\num they will end up being used\nindiscriminately\nso if i can just uh check onto chris's\nairline\nanalogy where um one of the um\nthe ways that people are trying to avoid\nthis de-skilling\nis by having explicitly that\nthe the aircraft is indeed capable of\nmajor aircrafts are capable of\nbasically landing themselves in\ninclement weather and\nthat means that what what they do in\npractice is that\nthe uh the pilot is actually doing the\nthe um the landing supervised by by the\ncomputer\nso the computer is training the uh the\nhuman\nin in how to uh in in how to try to land\nthis ally\nand then um for very difficult uh\nlandings then the uh the airline can say\nthe aircraft can say okay i'll do this\nbecause there's literally zero\nvisibility\nso the plane will just land itself um\nbut uh\nyou know but you can actually prevent\nthis de-skilling\nby a a conscious effort and that's been\ndone\nwhat is currently being done by with\nairline pilots\nwasn't it sometimes the opposite that in\nsome difficult situations\nai is not very capable in landing and\nhumans\nneed to do it and that's the problem\nthat when humans\nlose their skills in the easier\nsituation\nthen they also never evolved to the\nskill that is required from them in the\ndifficult situation\nbut the ones who still have the skill\nuh trained for difficult situations they\ndo better\nthan ai in this situation\ni think this uh actually a something\nthat has been a\ndifference between uh europe\nand some of the the countries in uh\nin asia because in in europe there are\nvery stringent regulations about\nhow much airline pilots need to train\nand in some countries there are\nsubstantially less\num this of these requirements which\nmeans that\nthe uh for instance something with\ntraining on\nlanding with the civil landings a\neuropean pilot could expect that he\nwould\nbe forced to do a lot of these learning\nso you can get a lot of experience\nwhere whereas if you're flying in\nindonesia\nthe the rules are a lot less strict\nwhich means that\nunfortunately if it's easier for the\npilots then they can just ask the plane\nto\nto uh to land but on the other hand they\nget less experience and less training\nand of course also less time in\nsimulators and these kind of things\num which also mean that the uh\nthe safety record is better in europe\nthan in indonesia\nthis is not my speciality by the way i\nshould say\nthe planning systems i'm working with\nfor airlines are the non-safety critical\nsystems so um i'm not 100\nsure\njohn i believe in that um in that\ninterview\nthat you gave as a reference for this\ndiscussion\nyou mentioned that you\nyou've been involved in in um\nfor uh formal proofs of of\nsoftware safety i think you you referred\nto\neither that well\ni've collaborated with many people\nincluding\ncomputer scientists who call themselves\nformal methods\npeople and um so i'm not i'm not the\nperson who does the proofs by any means\num it's uh the um\nthat the book contains quite a lot of\nformal material but that was mostly\nwritten by\num by bo sabrata um\nyou know and as i say i'm i'm an\nexperimental scientist originally by\ntraining who\nyou then who then moved into ai\nin about 1973 um\nand i've still well i'm going on\nno i didn't i mean there is work that\ncould be done i'd be very disappointed\nhow little work that that has been done\nsince\nsince we've kind of finished that that\nline of work\nthe line of research\nwell there are people who sort of um\nattempt\nformal formal proofs\noh there's massive amounts in the area\nof safety and think of\nin ai specifically and i don't really\nknow how\nhow much that is feasible or um\nwell well i plea please point me at\nthat the things that you found because i\ni\ni haven't and i'm really interested\nhe has lots of pessimistic arguments\nabout the possibility of achieving\nai safety and i don't know whether they\nmake out just formal\nformal proofs or formal work i don't\nknow whether he's one of these sources\ni'll see um but but it's not anyone i\nknow much about you see that's why i was\nyeah yeah no i'm out of date\nso uh you mentioned uh uh\nin the uh in the first part of your\nslides\nand it seems uh one of his\npoints is that he cares a lot about the\ndemarcation\nproblem between science and student\nscience\nand i know you're not really saying that\naict is a pseudoscience but\nhow close are you to saying that ai\nsafety is a pseudoscience\num\ni will not take a position because i\ndon't think i know enough\nabout what all the people are doing um\nat this point i\ni i get the impression that most people\nin in the field don't don't regard\nthe things that trouble me as very\ntroubling\nso i may simply be misguided\num uh i just wanted to ask john what do\nyou find\nmost troubling in terms of like ai\nsafety what's your like\ndo you have a number one concern\num\ni'm mentally trying to kind of pick one\nout um and and i think i end up\nsaying that um\nyou know i'm not a philosopher of\nscience my mickey mouse theory\nof scientific method is that it's a\nmixture of\nempiricism and formalism and\npragmatism um and\nmost of what i have read um\nby significant figures in\nin the field um are\nvery much narrower than that in their\nconcerns\nyou know i think\nbut my i i'm going to try and wriggle\noff this hook\nbecause the last thing i want to do is\ngive the impression that i'm\ni'm criticizing at this point i'm not\ncriticizing i'm trying to understand\nyou know what what people think they're\ndoing and i thought that\nzurin's introductory question about\npseudoscience was was was really great\num because it me it means i assume that\nnobody wants to be accused of doing\npseudoscience\nso i'm keen to\nsee what the field strategy is to avoid\nit\nbecause because conventional science i\nthink has got that down\nthey know how to do that it's not\nperfect but it\nand i don't think that i'm not aware of\ndiscussions of that kind\nwithin the ai safety\nworld\nin that interview you um drew an analogy\nwith\num\nwith medicine now with with drugs\nmedical drugs specifically specifically\nand i suppose other i suppose you were\nalso thinking about other medical\nprocedures um and you're saying that\nyou know we we accept that there's a\nduty of care\nby um pharmaceutical companies\nand by all all doctors and all medical\npractitioners of any kind\num there's a duty of care and a\nnecessity for them to be regulated\nexternally and you're saying that\nthis should apply just in the same way\nin\nyeah not only in software in general but\nalso in in\nai products in general yeah i think i\nthink that's probably would be the\nanswer to liam\nthat putting my head on my hat\non as a practitioner in ai\num that's the thing that worries me is\nthat\num people aren't having these\ndiscussions in conventional ai um\n[Music]\nand and at some point it's going to get\nresolved um but most of the public\ndebate\naround ai is is\nis is not as kind of focused and and\npractical as that\nit's about you know\ngeneral principles of whether you can\nmake a machine as\nsmart as people or you know whether you\ncan create a super intelligence by\nany of the means that have been proposed\num\nso i think liam that's my um partial\nanswer\num that um i i i worry\nand nothing's changed as far as i can\nsee since 2014\nuh i worry that um uh\nthis isn't really on on on the research\nagenda\nfor ai generally agi\ni'm sure there's lots of essays and\npapers and so on and so forth but i'm\nnot\nuh most most people seem to be\nwriting or reading books which are kind\nof big stories\nrather than actually would i say doing\nthe work\nwas it last week we were saying that um\nwe need\na few disasters um of\nless than existential level\num to wake people up to the dangers of\nyeah\nso careful what you wish for my thoughts\nexactly\nso now we're at the two hour mark so now\ni'll repeat of course that\nuh thank you for your answers john and i\ndon't know if you know and have any\nfinal questions but you're we we've\ncertainly appreciated\nuh the discussion thank you yes i will\nslip away now\nfor me wonderful oh yeah yeah yes thanks\nvery much\nand i i i hadn't really realized until i\nread\nuh read that you know the materials but\nwe had quite such a heavy weight um in\nuh\nin in our group i mean i thought i'm\nsure with lots of heavy weights in\ndifferent ways but i\ndidn't know about your personal\nexpertise\ncan i retired a retired heavyweight\nmight be better\nokay guys liam do you have a final\ncomment\noh yeah yeah i do i do if if that's okay\nin terms of like uh you know the the\nmelding between\nscience and uh and ai where like\nuh we're we're basically assuming that\nai is going to like have massive\ninfluence on individuals and groups\nlives and stuff and so we want to have\nsome idea in terms of alignment we want\nto have some idea\nof how to negotiate that in a positive\nway\nright at least not negative um\nand part of my concern with\nstudies like cognitive science is asking\nquestions effectively what is the human\nthat if we're not if we're not uh\ntaking into account groups that are not\nwestern these cultural groups are not\nwestern and how they think how they\nwhat their preferences so to speak if\nyou know if that's a real thing\nah then we're we're effectively going to\nbe creating\nai that that prejudices one way of\nliving\num significantly over others i think\nthat's one of my greatest fears\ngoing forward in the field i understand\ndo you think there's much hope that\ncognitive science will mean\ni mean well you know we're primates\nwe do what primates do and that doesn't\nmatter really\nwhich particular culture whatever we\ncome from um\nyou you know primates have a repertoire\nof\nbehaviors and activities and\nwe develop various kinds of institutions\nto manage them\nand um i think most\nmost of the guidance i'm not going to\nsay that i'm not qualified to say it\nyou know i just think um there's lots of\nsocial scientists and anthropologists\nand psychologists\ncross-cultural psychologists and all\nsorts of people\nyou know who who look at big questions\nwhether there's any people\nwho are cognitive scientists or ai\npeople who look at these kind of\ncultural questions\ni simply don't know\nokay all right thanks okay yeah thanks a\nlot bye\nsee you next week where we will discuss\nchapter three of\nroman yampovsky's uh aisf safety\nschedule system table\nokay thanks everybody see you next week\nthank you very much", "date_published": "2021-06-17T22:16:56Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "987e6a134d24124fb5cccc169f74a44d", "title": "190. Steven Pinker on the Possible Existential Threat of AI", "url": "https://www.youtube.com/watch?v=nrCjVhp4wuo", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 190 in the\nAI safety at Cambrian group tonight\nwe'll be discussing Steven Pinker on the\npossible existential threat of\nartificial intelligence Steven Pinker is\na professor in the Department for\npsychology at Howard University and a\nvery noted linguistic and this is in\nfact a podcast interview a discussion\nbetween Steven Pinker and Stuart Russell\nmoderated by yokas Perry with the title\nSteven Pinker is Joan rough on the\nfoundation's benefits and possible\nexistence to threat of AI so what we are\ntalking about today is an excerpt from\nthis podcast where we focus on why\nSteven Pinker is skeptical about AI say\nthat the other side too assistant to\nhope is existential risk so first off\nall these things that that Steven Pinker\nis saying are things that Stuart Russell\nhas in theory been answering because\nthis is a debate so why do we feel that\nthere's a need to go further into this I\nbelieve that there is because of the\nstructure so if we see in the beginning\nwe will see Steven Pinker make around 30\nclaims and then Lucas Perry will say\nStuart Russell what are your response to\nthis and then Stuart Russell will go to\nthe will reply to the very last claim\nthat Steven Pinker has and maybe return\nto a few of the others but basically\njust make his own long list of claims\nand then a lot of things are left\nunanswered\nso what I'm try to do here is to go\nthrough line by line everything the\nSteven Pinker say and try to give some\nkind of answer and commentary on\neverything here and one another thing we\ncan see is that this this goes two ways\nin that for instance Roy shot that says\nthat he's frustrated that scene pinga is\nnot responding to resolve the claims\nthat Stuart Russell is making and so in\nthis way Steve you could make the same\ncomplaint that that they're just not\nanswering each other's themes so another\nthing some background that I probably\nshould say is to discuss the straw man\nfallacy because this will come up quite\na bit the QPD assess that this is when\npresent one asserts a proposition and\nanother person a clerk is against a\nsuperficially similar proposition as if\nthat wasn't a argument against the\nperson ones true proposition I'm not\nreally fond of this it says falsely you\nin Wikipedia but but the way this turns\nout in this particular is that Steven\nPinker will say something like people\nwho are worried about AI safety often\nmake and then some kind of silly claim\nand then he will argue that this silly\nclaim is wrong and he will have good\narguments against this but it doesn't\nactually say who precisely is making\nthese claims and that's of course a bit\nunfortunate so what came should you\nactually actually be discussing and well\nI think that most people in the sht\ncommunity looks to the book\nsuperintelligence by Nick Bostrom as the\nthe the most thoroughly written the\ncanonical version of the dangers of a\nsuperintelligence another very awkward\nplace to find claims would be in store\nBrussels Brook human compatible may have\njust talked about that and it's a very\nobvious thing to do and some blame\nprobably falls on Lucas Perry here on\nthe hosts because he doesn't answer it\ndoesn't ask finger to comment on your\nRussell's cling but asks him in general\nquestion about whether existential risk\nis possible and also I another\ndisclaimer is that I'm not really\nexamining still Russell's claims because\nhe does say some things that I disagree\nwith I find out a few things and here\nfor instance he's making a rather\ncontentious claim\nand sometimes he seems to be like\ntalking around what Steven Pinker is\nsaying and answering different questions\nor something like that but I won't\nreally go into details about that\ninstead let's see what Steven Pinker\nactually says the first part is about\nwhat is called a precise\nsuperintelligence he starts out by\nsaying in May discussions of\nsuperintelligence inspired by the\nsuccess of deep learning weather\nsometimes asked - so he's not pointing\nout a specific discussion but I just\nsaying that some people say this and\nthis superintendents inspired by deep\nlearning that's often called a precise\nartificial general intelligence and\nthat's something that's considered a\npossibility but a recently remote one in\ngeneral AI safety and going beyond just\na human level and up to a\nsuperintelligence infra side way is also\nsomething that is considered possibly\neven less so the discussion that he is\ncriticizing Stephen King is further one\nwhere media precise super intelligence\nis able to solve the problem of Middle\nEast peace so I have actually tried to\nsearch a bit and I don't recall ever\nhaving seen such a discussion so I would\nlike a reference of one exists and the\nproblem with this Steven Pinker is that\nif you want to do something like keep\ngoing with this then you need 60 million\nsimilar problems ain't unique the\ncorrect answers in order to train a\nsupervisor learner and that's always is\nsomething you can't have so that's\nprobably an answer about what that deep\nlearning can actually do a few shuttling\nand things like that but in AI safety we\nare mostly concerned about how something\nthat is roughly a general intelligence\nthat is roughly on the human level could\nbecome super intelligence catch me\nthrough an intelligence explosion and in\nthis case what you actually need to do\nis not to solve Middle East but to try a\nlot of problems and see which one is\nbetter and for this kind of thing trying\n660 million times that sounds quite\nfeasible\nand of course the hard thing is to\ngenerate 60 million good programs and\nthat's of course something we can't\nbecause we don't have an API yet but\nit's something that seemed like it would\nbe possible and on the problem stepping\nup hands out is that some of the\ndiscussions don't have enough details in\nparticular he says that you know a lot\nof discussions you could replace the\nword superintendents with magic or\nmiracle and the sentence would read the\nsame so he's a linguist so he must know\nwhat he's talking about so I tried to\nyou know take the best discussion not\njust in discussion but in the Nick\nBostrom's and see if it was actually\npossible to replace the word super\nintelligence you can see I've tried to\ndo it here and activity you can't do\nthat so you probably can only do that in\npoor quality discussions and further\nsome of the things that I'm missing is\nthat the discussions don't talk about\nwhat is what does the intelligence\nconsists of and what's even a solution\nso my general answer to this is that\nsure there are bad discussions and I\nthink if you look at the word 10% of\ndiscussions on climate change or\nimmigration or something like that then\nI'm sure you'll be really disappointed\nbecause there are probably a lot of bad\ndiscussions and also a lot of\ndiscussions where precisely what counts\nas a solution to war in the Middle East\nisn't really that relevant it is\npossible that I am misinterpreting\nfinger here we're in the discussions\nwhere you replace super insurgents with\nmagic\nthen he's he might actually be talking\nabout that people overestimate the\ndegree to which insurgents in his\nparticular super intelligence could\nobtain calls in the real world like it\ncan just do things by magic so so it's\npossible that's what he means even\nthough it's not really what he says then\nthere is a central part extrapolation\nthe concept of super intelligence is a\ndubious extrapolation of an\nunexplainable\ncontinuum like humans animal on the sub\nright human too smart you\nso obviously there are some continue\nthat cannot be extrapolated if you try\nto go north then eventually you reach\nNorth Pole\nif you try to go slower you and\neventually stop if you try to go faster\nthan light then so the concept of\nunintelligible and continuance is sound\non the other hand however if you try to\nsay that it's impossible to go faster\nthan light then you need to bring some\nkind of argument or evidence and for\nintelligence for instance it's possible\nthat there is like a ceiling like it can\nyou can at most get 200 IQ or something\nlike that it might be that no\nconceivable intelligence could ever\nunderstand\nstring theory so in theory this kind of\nargument could work but there haven't\nbeen really been any non-trivial\ncandidates that have been proposed so I\nthink some more work needs to be done\nhere further Stephen Tina says that he\ndon't believe in intelligence it's maybe\nyou would call it an unrealistic on\nintelligence so he can just compare a\nhuman and a squirrel and say the human\nis more intelligent and you could\nimagine even more that so Stuart Russell\nhas a direct answer for this one later\nand I guess I could make a democratic\nargument that most people would he would\nsay that sure is possible to compare a\nsquirrel and a human and say well the\nhuman is more intelligent but there are\nalso a few constructions that I believe\nfeels awfully quite well if that\nbulletproof and quite strong and then so\nthe first is a speech super intelligence\nif you could make a machine that could\nthink like a human which would be\npossible at least in this formulation\nthen you could probably just make it run\na million times faster and that would\nprobably count as a super intelligence\nyou could also like build a million of\nthese machines that would be a quantity\nsuper intelligence to use Bostrom's were\npledged and finally there could be\nsomething like a quality super\nintelligence which is probably what same\nthing as saying couldn't\nwould not be possible so is this\npossible\nwell you can try to extrapolate and it\nseems that it would be possible there\nare some things that some ways of\nthinking that seem like they would be\npossible like an optional Bayesian\nreasoner and there are some biological\nlimitations to human brains quite\nactually quite a lot of them that a\ncomputer program might be able to get\naround so I think that is a good case\nthat is possible and it's on student\nlingo to to explain why there would be\nsome kind of seeing to intelligence so\nthe first scenario of Steven thing that\ndivides AI safety in scenarios into two\nand the first is about will to power and\nI'm just gonna start out and say this is\nprobably any Cell of Stroman because\nthis scenario is probably something that\nI have never seen anyone seriously\nconsider this in the EIC community and\nthe claim here is as soon as you get an\nintelligent system it will inevitably\nwant to dominate and exploit and I think\nit's a strong word but I also think it's\npossible if you are building an AGI in\nsome kind of evolutionary system then I\nthink it's possible you could get this\nbut not inevitable and the analogy that\nSteven Pinker is saying is its mate is\none between humans and animals or\nconquested dollars and natives the\nreason why this is false is the\northogonality thesis so I think that's\nrather interesting that he's familiar\nwith the orthogonality thesis so so he\nclearly Steven Pinker has studied this\nsomewhat and this orthogonality thesis\nis just a fancy schmancy way or\nreferring to Schuman's distinction\nbetween our goals and our intelligence\nI'm not really sure that's true in super\nintelligence where the question writes\nabout the or finality rights that the\ndifference is that it does not depend on\nthe human theory of motivation I don't\nknow really what that is so what I would\nrather say is that the concept of\nintelligence and final goals which is\nwhat the orthogonality thesis talk about\nare a lot\nnormatively thinner than reason and\nmorality right morality and final goals\nare not the same thing - at least then\nthe other scenario that's been discussed\nI called the here basic AI tries and\nSteven Pinker starts watching and I know\nthat there is an argument that says\nagain he's not really referring to a\nspecific argument but just saying that\nit exists and he phrases it like this\nany intelligence have to maximize\nresponse survivability at all costs I\nthink that he's obviously referring to\nSteven 100 basic AI drives where Steve\nOmohundro is before the claims that this\nis not something that will happen to any\nintelligence and it will not happen at\nall cost\nthey're merely tendencies that will be\npresent unless specifically counteracted\nso this is a truthful statement of the\nthing and Steven Pinker countless that\nhis iPhone doesn't seem like it it\nmaximizes his own survivability a fun\nfact this actually if it does if it's\nsupersymmetry it will allow you to call\nthe 9-1-1 it will not allow you to play\nAngry Birds it will just say sorry I\ncan't allow you to do that but the real\nanswer to this is that the iPhone\ndoesn't really count as an intelligence\nthis is something that is implemented by\nthe programís of the iPhone and it's not\nsomething that the firm has done itself\nbecause it's intelligent enough to\nreprogram itself and do things like that\nif it was programmed to refuse to really\ndo as it was told then obviously we\nwould not buy one and I think here\nSteven Pinker talking about this kind\nthing it seems like he's not talking\nabout the treacherous churn and take\nover that has come in a ICT seems to be\nconsidering a rather different scenario\nhe also says this is the less common\nexistential threat scenario where\nactually this is the more common\nespecially compared to the will to power\nbut they don't really cut the world at\nthe joints because no one talked about\nthe first one anyway\nI thought about about the philosophical\nimplications of an hei that is really\nreally powerless like an iPhone will it\ntry and take over I think that would it\ncould be an interesting thought to\nexplore further but that's not really\nsomething that seem thing that talks\nabout the next thing he talks about is\nthe paperclip Maximizer which he thought\nwas a spoof but apparently it was\nintended seriously when it was\nintroduced it was introduced by eliezer\nyudkowsky making a point about human\nvalues so it was not intended as a\npermanent as a prediction or something\nlike that Steven Pinker further claims\nthat this is a self refuting scenario I\ndon't think the self refuting can be\nused for that this if you say I am lying\nthen that's a self refuting statement\nbut here you probably saying that these\nscenarios are unrealistic or\nextrapolating incorrectly or something\nlike that another argument against the\npaperclip Maximizer is that we don't\nneed more efficient paperclip\nmanufacturing than we already have\nso that's obviously false but I think\nalso that's a poor way to engage with\nthought experiments if you are presented\nwith a trolley problem and you just say\noh there are no trolleys where I live\nthen you're not really engaging with it\nbut here he has a good idea an\ninteresting argument human bodies are a\npretty crummy source of iron for\npeople's lives and there is a good\naccount argument to this and that is the\nentire point of a Maximizer it will not\nand maximize it does not stop when there\nare only crummy sources left and\nMaximizer goes to the maximum and does\nnot stop before that there are some\nseveral scenarios that are considered\none is that we might give it the and\nsuperintelligence the goal of curing\ncancer and then it just conscripted us\nas guinea pigs and uses tumors in all of\nus and and later in the podcast\nStuart Russell accuses\nSteven Pinker of making strong a lot of\ndocuments and Steven Pinker replaces\nno this is HTML\nan example that is from your book and in\nparticular the designers of this\nsuperintelligence will be really chaotic\nto give the system control over the\nentire tent without any testing so\nthat's to pass both control over the\nentire tent and without testing so\nthat's referring to and that he's not\ntalking about the treacherous turn and\nnot talking about the take-off scenario\nbut this is also something that I\ncommented on last year when we read\nhuman compatible so let's see what I\nwrote so I have competed in here for my\npresentation where I say that Stuart\nRussell is actually critical of these\nexamples when you write them in the book\nit cost them unsubtle and the reason why\nI know you like them is because they\nomit the takeover and that's where the\nreal intelligence is so in total and I\nthink if you only have registered your\nRussell's book then I think this kind of\nmisunderstanding it's quite easy to make\nand so steaming up the question says and\nI think think that's a fair thing to do\nbecause Stuart Russell isn't really\ncreative so at least here I think he has\nyes at least half right that this is not\nas Roman but there are many other strong\nand presented so then there's going a\nbit back to the paperclip Maximizer\nthat's the problem of the single goal\nbecause the curing cancer and paper\nclips assume a single goal but things we\nbuild right now how always have multiple\ngoals no current artifact is designed\nwith only one goal\nand one of the obvious counter arguments\nto this is that the AIS that we make\nlike alphago and fa0\nthey have to a goal of winning in a goal\nor chess or whatever so so there are\nobviously examples of this and quite\nobviously really Stephen King who goes\neven further and say this is a straw man\nand that's of course a very surprising\nthing because by definition is pure\nperson\ninstrumental argument is strong and\nagain his finger senses any insulting\nsystem would be idiotic to pursue only a\nsingle goal so this can sound like he's\nstill missing this as a as a paradox\nmaybe even so for fuming I think it's\nfalse by the orthogonality thesis which\nsays that you can combine any level of\nintelligence with any goal but I think I\ncan see where Stephen finger is coming\nfrom because if you don't if you haven't\nstudied artificial intelligent look at\nit in a mathematical way then things\nlike balancing multiple objectives and\nmaking trade-offs between them and so\nyeah that seems like it's a reasonable\ndeep part of intelligence and\nunfortunately it turns out if you try to\nwrite it down in a mathematical way this\nis just a utility function and ranking\nof futures and this having multiple\nbalancing multiple objectives to enough\nto be quite trivial and here I think\nStuart Russell should have approached\nthis in a very very different way so I\nthink he should instead of says but this\nsounds like it's really a big problem\nbut for naman gentlemen the possibly the\nsmartest person in the entire world has\nlooked at this and have proven a theory\non for nine men Morgenstein utility and\nthis is actually something that is\nTobias soft so I think it's and and it's\nunreasonable\nFoster Brussels to just assume the\nstudent thinker can can understand this\neasily the next toughest is on real or\nperceived safety well it's true Russell\nsays that nuclear power was an example\nwhere engineers were careless with\nsafety and people got scared and so the\nfull potential was not realized\nthe teachers fixing comment' so again\nGinga is obviously bright overreaction\ncan have very substantial costs but I\nthink they're talking past each other in\nthe way this job Russell say we as\nengineers must be really careful with\nsafety and Steven Pinker's as we as the\npublic must not react irrational to to\nthis kind of Phineas I think they're\nthey're writing for different organs\nRussell is talking to engineers and\nSteven Pinker seems like he's you know\ntrying to make the public more rational\nBryden books to general audience the\nnext question is the question of small\nparts here again a direct quote you\ncould say well even if the odds are\nsmall the damage would be so kind of\nstrong ik than in this worth our concern\nthat's the thing one would say but sure\nRussell does not say this and the\ngeneral AI safety advocates don't say\nthis so here we're definitely going into\na strawman territory further quote the\nexistential risk community and the so\ncalled rationality community argues that\nsince the expected utility of extinction\nis minus infinity probabilities do not\nmatter and so well first of all it's\nimpressionist rationality community but\nthis is a mistake on Steven Pinker's\nside this is not something that is being\nargued at all and many people have\nargued strongly against this Steven\ncharacterizes this as a moral hazard I\nlook up on Wikipedia what precisely a\nmoral hazard is and a moral hazard is\nsomething different a further argument I\nstill think is then we can't worry about\nevery possibility at this point Stuart\nRussell yes it's the same to make some\nkind of reply all the rest of it has\nbasically the ovens of this point have\nbeen Steven Pinker talking uninterrupted\nso finally we get something that looks\nlike a debate and\non most of the previous points of the\nignore so that's two roses saying we\nshould worry about things that we put\neffort into bringing about and if we put\nhundreds of billions into AI then this\nkind of thing should be something where\nwe should consider what would be the\nresult of this further to this that\nmoney is pouring into AI but not into\nsuper optimizes tasked with curing\ncancer and with the power to kidnap\npeople so obviously these don't exist so\nI'm just going to pretend that Pinker\nsaid a TR he is a subfield of AI is\noften called the goal of how much is the\nGIS projects but so I think probably\nbillions but I'm quite unsure about how\nmuch money is actually being spent on\nAGI and that's of course also something\nthat should be cause for concern how\nmuch is being spent for you could\nprobably argue that a lot of the money\nthat's being spent in AI that doesn't on\nthe face of it have anything to do with\neg I probably also have a derivative\neffect on eg I may get higher prestige\nor maybe some of it can be used or so I\ncould see some argument here that Stu\nRussell is pushing the case when it says\nthat we're putting hundreds of billions\ninto AI because most the vast majority\nof this funding will will not be\nrelevant another case where we get a\ndebate is on competition where we recall\nthat Steven Pinker was says that\nintelligence he didn't say it wasn't\nreal right but you still Russell is\nensuring like intelligence is indeed\nreal and the example he's using is that\nyou can see\ndon't have much of a chance against\nhumans seer pinga answer to this issue\nthe systems that are smarter than us\nwill therefore be in competition with us\nmr. Russell answers with the example of\na speed super intelligence like an AI\ngeneral education much faster than us so\nobviously they are talking past each\nother but let's see precisely how they\nare talking part each other so it's true\nresidence making the claim that AI\ncontrol is hot because intelligence is\nreal and a super intelligence could be\nmuch smarter than us be able to us Mars\nand then Steven Pinker replies that air\ncontrol doesn't actually matter because\nwe will because alignment will be easy\nand then they just Joan Russell will say\nthat any alignment is hard and give some\narguments for this and woman they flick\nback and saying that okay alignment\nmight be happen is available because\ncontrol is easy so this way they are\nable to talk past each other\ncontinuously let's go back to the\nmultiple goals and see what - Russell is\nsaying first everything to say that all\nthese scare stories involves Jerusalem\nwho reacts somewhat aggressively and say\nthis is a complete red herring and\nMorgan Stan you which is the reason why\nit is and if they are equivalent single\ngoals and multiple goals and it makes\nsense to focus on the simple case where\nthere is just a simple call Stephen\nthink that it makes the following\nstatement as you go down the table of\npossible risks you're going into\npotentially infinite risks with to\nmaximize people's lives you makes my\nstaplers combinations\nin other ways where you like if there's\n10% risk of a single goal AI so we don't\nget any Pascal's wager like propers with\ninfinitely small risks was Russell on\nalignment control this time we have no\nclue about how to get within epsilon of\nthe true human ranking over futures and\nI imagine that a number of business were\nconfused about this so let me try to\nexplain in more details what he's\ntalking about so for instance we have\nduring cruise work that we talked about\ntwo weeks ago and that would be an\nexample so we do we indeed have a clue\nabout how to do this and let's go with\nthe term cruise proposal so this is a\ngraph here with an x and y axis and on\nthe y axis or gold we have the perfect\nhuman utility function capital H and\nlet's say that a challenger give us an\nepsilon and say you have to get within\none percent of of the true human utility\nfunction so there will be an epsilon of\n0.01 if you need to do that then you\nneed to join Cruz won't say that you can\nget the precise human utility function\nif you have a perfect world model but if\nyou want to just get into with an\nepsilon then you don't actually need a\nperfect world model you can just have a\nworld model that is a delta from the\nperfect world model so maybe 95% correct\nor something like that\nso the way this works is that if you\nneed to be 1% from the true utility\nfunction then you need a world model\nthat is then maybe a 5% from the true\none and if you need a better\napproximation then you also need a\nbetter world model but this is the kind\nof epsilon Delta kind of proof that mr.\nRussell is talking about when he says we\nneed to get within Epsilon Steven thinga\nreplies that even if there isn't an\nepsilon that is a really\neven if even on alignment of I want to\nbe agency is further Steven Pinker is\nsaying the Super's\nmight be ha we can take into account the\nfantastically chaotic and unpredictable\nreaction of humans you must make some\neconomics this goes on you know\npredicting what humans will in aggregate\ndo and this\nI think that's really an argument for AI\nsafety work right if it's hard to\npredict all the ways that a plane can\ncrash well then you should probably work\nhard on making the plane safe but it\nalso makes it hard to achieve super\nintelligence is that true well the main\nway I think about obtaining super\nintelligence is through recursive\nself-improvement as part of a\nintelligence explosion and there are no\nhumans involved in this at all\nso that that doesn't seem like is\nperhaps what he means is that because\nhumans also carry an unpredictable then\nin a super intelligence might be only be\na limited power in the even the smartest\nperson can't predict the stock market\nobviously can't predict cultural\ndevelopments in one year and it's\npossible that is super intelligence but\nmight be much smarter than the smartest\nperson ever and still be unable to\npredict cultural developments in one\nyear it's also possible that if there\nare no correlation between human\nintelligence then building a super\nintelligence might be very difficult in\nthe sense that you know if it doesn't\ncompress down to a single factor then\nyou might in order to be smarter than\neveryone you need to have like smarter\nthan 8 billion brains that could also be\nwhat he's mean but I'm unsure and it's\nsomething that's still named here and\nI'm not really succeeding I don't really\nunderstand what's Jim finger is saying\nhere\nJune Roslin has another example here you\ncan view the fossil fuel industry as an\nAI system maximizing a poorly designed\nobjective and has we just outwitted the\nrest of the human race Steven Pinker has\na counter-argument\njust people like energy and fossil fuels\nwhere the energy is and there are no no\ncountering the externalities and Scott\nAlexander has also talked about you know\nthings that are not super intelligence\nand his\nalso said that we should be very careful\nabout seeing corporations to super in\nseconds and I I agree with Pinkerton and\nagainst you Russell because I awaited\nthe the people in the following there so\nit doesn't in my book about the because\nyou know if we can see the risks because\nwe but in the world but it doesn't this\nwas mostly and movement because people\nare afraid of it rather than seeing this\nas a trend so I'm not really sure these\nexamples hold but it seems like Steven\nPinker is in general you know in favor\nof safety and he is it is mistaken on a\nlot of the arguments\nbut I think in general in principle he\nwould not be opposed to a sec that's at\nleast my general Chiklis that is all for\ntonight thank you and see you next week", "date_published": "2020-07-02T21:23:01Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "1c4ee19eef6b5102fedb01de3007a651", "title": "252. Propositions Concerning Digital Minds and Society 1", "url": "https://www.youtube.com/watch?v=4WopVD9p4wg", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 252\nin the aisft.com reading group tonight\nwe'll be discussing the first half of\nthe article propositions concerning\ndigital minds and society by nick\nbostrom and carl schultman\nthis is a work done by the future of\nhumanity institute where\nnick bustrom is the director and carl\nschumann is senior researcher it's a\npaper that has recently been published\nas version 1.10\nand\nthere's been a draft circulating\nfor several years\ni think this is a thing that happens\nvery much in the future of humanity\ninstitute that people send these drafts\nto each other and obtain a large number\nof comments\nso uh that's probably\nwe probably don't see what they write\nuntil\na couple of years after in many cases\nand of course we're only taking the\nfirst half today and i would expect next\ndecision to be in the second half\nnick bostrom and carlton starts with a\nrather expressive\ndisclaimer saying that this uh this is\nvery temptative work there is a\nthey're not trying to make some very\nstrong predictions about anything\nthey're just um saying this\nis a thing a consideration that you\ncould think of um without making um\nuh\nlike any claims of completeness or\nthoroughness or anything like that\nand some of these more philosophical um\nstate propositions have been a bit tough\nfor me to evaluate in the sense that i\ndon't have any uh formal philosophical\ntraining and\nin many of these cases they're talking\nabout consciousness and moral value\nwithout really defining these concepts\nand\nvery often to me\nit doesn't matter that much precisely\nwhat definitions you are using but in\nthis particular case i actually believe\nthat the definitions might matter a lot\nbecause when you're talking about things\nlike consciousness um in from a\nstrategic point of view where people\nwant to maybe help the ais and maybe\nfeel compassion for the ai so we'll let\nthem out of boxes and things like that\nin this case it matters very much what\num\nwhat\nthe people who are currently interacting\nwith these ais um do they believe that\nthey can suffer for instance and and the\nthe moral frameworks that they are using\nmight be really really interesting and i\nexpected different moral frameworks to\nhave very different answers to some of\nthese questions that are being raised\nalso i should say that i expect that um\nfor\nwith ai safety we are facing a an\nimminent catastrophe on ai and that\nmakes\nthinking about whether ai's suffer and\nthinking about where where the mind\ncrime happens and all these things we'll\ntalk about today a bit strange in that\nthere is another issue that is rather\nlarge which is will we all be killed\ntomorrow um and in that case um\nwhy should we care about the moral and\nnot just care about the instrumental\nconsiderations\ni think there is an answer to this that\nwe should in fact care about the moral\nconsequences and the moral\nscience because they very often directly\norientately influence the instrumental\nfactors so i do believe that it is\nimportant but i also think that the\nauthors should have\nto to distinguish these two things\nbecause\nit is a very different thing\nlike how can we avoid getting killed and\nhow can we um\ncreate a best the best long-term\nscenarios etc and all these things\navoiding suffering and\nthings analogies to factory farming and\nwhat happened\nthe first is the first part was the one\ni had the hardest\ntime getting through and that's\nconsciousness and metaphysics\nthis\nis a uh a meme i found on the internet\nuh from from the authors\nstating that there is in fact no\ndifference between artificial neurons\nand biological neurons\nand this is formalized uh this is not\nfrom the paper of course this is\nformalized as the substrate\nindependence thesis that you can have\nmental states on\nmany kinds of physical substrates\nin what matters is the computational\nstructures and processes and that's\nenough for consciousness and not whether\nit's good happening inside a cream or\nsomething like that\nand this is uh asserted to be true um\nso the the chinese room argument by\ncheryl is uh not accepted by nick\nbostrom\nuh\ni think i would probably agree with this\nbut i would care more about what do\nother people think and i also kind of\nfeel that this is somewhat weak in the\nsense that the word mental states\nlike does that mean quality or what like\nmental states that you know to be like\npurely functional and\nin that case the substrate independence\nthesis is obviously true\num\nso uh in this case the uh\none of the consequences of this would be\nthat a an immolation of a human would be\nconscious\nand we could also see ai's that are not\nimmolations of humans but doing\nother things that are in fact conscious\nand so the obvious question for me is\nwhy do we care about whether they're\nconscious we care more about whether\nthey can kill us well um\nif they are conscious they probably have\ninteresting moral worth of some kind and\nand that might matter very much to us\nwhen talking about consciousness uh the\nnaive way of thinking about\nconsciousness is like either you haven't\nor either\nor you don't\nbut the the authors nick bostrom and\ncarterman think that the quantity and\nquality are a matter of degrees\nin\nquantity obviously you can have a number\nof copies uh you can have repetitions\nyou can have them running for a longer\ntime both by having faster computation\nand just\nmore world cup time\nyou can have\nimplementations that are more or less\nrobust you can have them always closer\nto humans in a different ways how you\nmeasure that\nand i would argue that might be less\nregarded to relate to quantity and more\ninto quality but that's really a\nnitpicking they also say that\nthere is a great great quantity if it\nhas greater quantum amplitude and i try\nto uh look into what does that actually\nmean i did spend five minutes on the\ninternet trying to figure out what does\nit mean\nthat these processes have a greater\nquantum amplitude and i didn't really\nunderstand that so i think they ought to\num describe that in in more details and\ni don't think it matters very much\ni can see someone in the chat has\nalready written something about that\num\nso uh\nquality that's also a a matter of degree\nin that like how where are you of your\nconsciousness how certain where are you\nis it like a good or a bad experience\nthat might also matter very much and how\nstrong are the desires moves and\nemotions and things like that\num so i think that's\nyou know the the conscious experience is\nthe very\nmatter of a degree i think that's a\nreasonably well established proposition\nthey make some more assertions about\nconsciousness\nif you have two runs of the same program\ndo we get twice as much conscious\nexperience\nscott alexander has\nwritten this answer to job which argues\nthat in fact you don't get twice as much\nuh\nconscious experience from from this and\nthe the uh\nthe obvious uh perverse incentive\ninstantiation is where you find the\noptimal conscious experience and then\nyou just have a computer\nrun that single single program again and\nagain and again and is that really\nsomething that is valuable to us\nthey have another statement here that i\nthink is interesting significant degrees\nof consciousness require significant\ncomputation and complexity is that\nreally obvious i don't think the\ncomplexity might really be required i\ncould imagine an extremely extremely\nsimple um\nai that is basically the neurons that\nlike perceptrons or even something\ntotally totally simple and then you just\nhave a lot of that\nand if you have enough of them then you\nget something that is conscious um i\ndon't have a strong model of our\nconsciousness so i can't definitely say\nthat but i think this extremely non\ncomplex um system would in fact uh could\nbe be conscious\nand also like how much computation is\nrequired i would imagine like\nan optimal bayesian uh like um an ai\ndesigned by\nlike optimal whatever that would mean um\naccording to the the limits that we have\nfound um an optimal reason might require\neven very very little um computation uh\nlike an update that is literally only\ndoing base uh reasoning just once that\nrequires like one flop or something like\nthat uh three flop i think for base\napplying base room and i could see\na system doing that which was in fact uh\nconscious so i i think i would reject uh\nuh this proposition\nthey have another proposition uh many\nanimals are more likely than not\nconscious which is a uh very weak claim\nin my in my opinion um\nyou could try to negate that and that\nwould be that\nthat most animals are probably not\nconscious um i think that's a um\nlike uh the way i would i would state\nthis would be that\nprobably\nmost animals are conscious to some\nextent uh i am very bullish on uh\nanimals having some kind of sentience\nunconscious and ai are present ais to\nsome degree conscious well we have we\nhad uh in the last session a long\ndiscussion about landa\nwhere i came out uh as\nas much more\nuh\nputting a much higher probability on\nlondon in fact being conscious than most\nothers both others in the reading group\nand people in general i think there is a\nsubstantial probability that london is\nin fact conscious in a in a meaningful\nsense and um\nwell most people would consider it\nobvious that they are not and the\nquestion at least um puts uh some\nprobability mass on this\nbut again consciousness is difficult to\ndefine and the way most people seem to\ndefine it is i know it when i see it uh\nand\nin this case obviously uh current\nlanguage models are strongly not\nconscious but that's a very uh\nredefinition of consciousness but one\nthat may matter very much from an\ninstrumental point of view because\npeople are not going to give lambda any\nkind of rights um or any kind of moral\nconsideration right now um based on the\ntheory of consciousness and the theory\nof consciousness most people don't have\nthough like if you ask the man on the\nstreet to explain global workspace\ntheory you won't get anything at all\nright uh so they define consciousness in\na in a much more naive way\nand from this it's clear that uh\ncurrent ai's are not considered\nconscious\nokay how about uh if we emulate the\nbrain that's obviously something that\ncould be conscious and that's something\nof course also very different from\nlambda and and currently eyes uh\ni would i totally accept of course that\nthey can be conscious though does that\nconstitute survival for an emulated\nhuman\num\ni am not entirely sure i would accept\nthat proposition um like the obvious\ncounter example would be if i was\nnon-destructively scanned\nand\nemulated somewhere and then killed\nwithout count as me survival like i\nwould strongly object to that\nso\nit's possible and depending on your\nphilosophical views certainly it might\nbe something that happens uh here again\nthey say civilization need not be\nmorally objectionable and uh like\ni think i would probably consider that\nsuicide and suicide is something that\nyou can appreciate you can also say that\nthis is something that people can choose\nand people have the right to choose but\ni at least have a right to object\nto suicide\nthe authors talk a bit more about\nwhat theories of consciousness and what\nthey should explain and things like that\num\nand uh they end up deciding so for the\ntheory of consciousness because if you\nliterally interpret them then very very\nsimple systems could indeed be conscious\nthat's an example that i've previously\nin the reading group talked about that\nif you um\nsome of these definitions of um\nof consciousness is like self-awareness\nand it's obvious that my windows\ncomputer is self-aware in the sense that\nit has a model of this pc and knows like\nwhat hardware is on and what's running\nin the kernel and all these things\nso if you define consciousness in this\nsimple literal way then you run into\nthis kind of problem and of course uh\nthe answer so this is probably that the\npeople using these definitions are not\nusing them in the literal way but\nwhen you're not using the definitions in\nthe literal way then you are having bad\ndefinitions and precisely what uh you\nshould do instead how to interpret these\nuh it's something that like\nit's probably because i don't have a\nstrong philosophical background if i had\na stronger philosophical background i\nwould more be more likely to understand\nwhat they mean instead of just reading\nwhat they literally write\nand\nbased on some of the\ncriteria integrated information theory\nthat's one of the things that that fails\nglobal working theory is one that i have\ncriticized earlier i can't\nremember what my criticism was but the\nauthors are a bit more optimistic about\nthat\nand some other theories\nlet's go to the next uh chapter\nrespecting ai interests\nfirst they talk about moral status\nwithout uh defining moral status so i\nhad to like look on the internet and try\nto find\nwhat i consider to be the most\nwidely used definitions of moral status\nwhat does it mean to have moral status\nand again like it is perfectly possible\nthat\nboth that the authors uh nick bostrom\nmeans something different and it's\npossible that most people mean something\ndifferent and i would really like to to\nbe certain about these things and i\nwould like to\nknow in particular if there's any\ndifference\nlike it's it might matter a lot if nick\nbostrom have some of the underlying\nideas about uh things like moral stages\nthat differs from what most people think\nbecause again it matters what most\npeople think\nright moral status an entity has moral\nstatus if and only if it has interests\nmorally matter to some degree for the\nentity's own sake\nso again here i found\na diagram where like um you have moral\nstages if there's a good of its own\nsentence and moral agency um\ni'm not vouching for this being like the\nuh\nthe most common view but uh but i think\nit's\nit's like it seems the most conscious to\nme okay so\nuh\nwe've if an ai has moral status then we\nhave a moral obligation to uh consider\nuh like not harming considerate wealth\nthere but who are we in that sentence it\ncould be the ai developers the\nresearchers creating uh new new\ntechnologies that might uh enable\nconsciousness\nit might also be the ai juices the\npeople who instantiate the ais\nor it might be society in general and\nthe view from\nthe question of the proposition is it\nfalse\non all three\nuh i think um\ni think it's a uh a problematic thing\nbecause i i think it matters a lot not\njust who should um\nwho\nhas the moral uh obligation to consider\ntheir welfare but who also needs to\nfigure out if it is in fact um if it has\na moral status\nand the way i probably think this works\nout in practice is we have a wonderful\ntriangle of um absorbing uh\nresponsibility\nin that the ai developers\nif you read some of the papers they\nwrite they write a lot about that we\nneed when we actually use this\ntechnology the people who use this\ntechnology need to take care that we\ndon't uh use this for for anything\nnegatively so\nai developers researchers in general put\nthe uh\nuh the owners on the ai users\nand if you ask the ai users obviously\nthey\nput the uh the responsibility to uh\nsociety in general like this is not\nsomething that um\nthat the the user need to do uh to to to\nknow about this something that should be\nlaws and like ethic sports in society at\nlarge that handles this question and\nwhat does society that um who would they\npoint to uh if you uh if you ask them\nlike who should consider if this ai has\nmoral status well society in general\nwould point to the researchers so we\nhave a uh a wonderful circle where\neverybody points to the to the other\nperson and no one in fact takes\nresponsibility for this as far as i can\ntell no one is really taking\nresponsibility for this\num and\ni think in general the question makes\nmore general error here in assigning\nresponsibility to three parties uh in\nparticular assigning responsibility to\nsociety in general is something that\njust basically never works right uh and\nuh\nelias erkowski would uh argue that\nthat's not how it's done in\nland where responsibility is explicitly\nuh pointed to one particular person in\nall cases where this is at all possible\nokay so but let's actually dive into the\nmoral status of ai why does it matter\nwell we if we don't\nuh consider it we could before we uh\nlike even consider the the issue\nsuddenly we have something like factory\nfarming which can be uh like they the\nauthors don't argue but uh the part of\nthe problem is this kind of thing can be\nreally sticky in the sense that if we\ndidn't have factory farmer and someone\nsaid hey is it okay we keep chickens in\nlike these conditions then obviously\npeople will say no that's abhorrent but\nif this is the way we've been keeping\nchickens for like forever then people\nwill say sure that's how it's always\nbeen done and it's quite possible that\nuh the way we treat ai's can be sticking\nin the same way\nand of course it's instrumental for for\nother reasons uh like the the argument\nbefore was on pure on the moral side but\nit's also instrumental in\nin that the people who care about this\nmight not be uh\nthe people who want to care about this\nlike\nblakely mine would be an example of\nsomeone who cares about this and\nis not necessarily\nthinking entirely rationally about and\ngoing entirely rational about it and it\nonly takes one person to let an ai out\nof the box\nfor instance\nokay\nanother proposition what's good for an\nai can be very different from what's\ngood from a human being\non an object level that obviously makes\nsense right and ais we should be worried\nabout informal fighting etc uh\nbut there is like i think like\npreference due to\nuh\num like what does\nwhat we should do with the ai is to\ntreat it as it says it wants to be\ntreated right um so so\nit's not certain that uh\non the meter level there is actually any\nsubstantial challenge here\nthen uh another uh\nissue that uh nick bachmann has\npreviously written about\nagents with stronger\ninterests super beneficiaries and\ngreater moral stages super patience um\nis that something uh that's certainly\nsomething that can happen i can see it\nhappen and i can also see a substantial\ninstrumental point in denying this\nin that\nif we want to have reach some kind of\nagreement motors with nd uh with ai then\nuh accepting that ais can be utility\nmonsters uh is probably not gonna fly at\nall there's a strong selling point\nconsidering everyone to have the same\nlevel of um of patience being the same\nkind of patients and not being super\npatients um and um\nand that's probably really instrumental\nto keep\npossibly um\nand they have an another interesting\nlike what do how do you actually treat\ncopies of ais do they have like\nresponsibilities for what the previous\nversion has done before and the authors\nare yeah actually\nthey do\nthey relate to their private previous\ntime segments just like humans do i\nhaven't seen that argument before and i\ni thought that was actually kind of true\nand i wonder if this can be quantified\nand i think actually can be quantified\nuh by parallel with with humans uh where\nlike uh some crimes do in fact uh\nuh\nget too old and can no longer be\nprosecuted uh i think can we quantify\nthe um\nuh the rate of decay to some extent\ni thought that was a cute i mean\nokay we talked before about treating the\nai the way it wants to be treated so\nobviously consent seems like a way we\ncould be more moral in our dealings with\nai and so if it can give informed\nconsent it should be required and if it\ncan value it whether it has a good life\nthen they should approve of coming into\nexistence\ni think those are good i also think\nthere's a very obvious loophole here is\nthat you can just design the ai to not\nbe able to give informed consent and\nthen it's not required uh\nbut uh so so we are really pushing the\nlevel pushing the problem one level up\nbut but it's a star ceremony\num\ninform consent\nis that obviously reliable uh\nthere are there are ways to get around\nthat\nwe should try to avoid miserable ais\nand i think that's of course morally a\nreally good thing but not obviously a uh\ninstrumental thing\ndesigning an ai to have specific\nmotivation is that generally wrong\ni don't know uh it's an interesting\nquestion it's been uh with humans it's\nbeen uh\nevaluated in adulthood uh\nuh brave new world where humans are in\nfact engineered to have motivations that\nare uh useful uh and that seems like uh\nsome uh it's a dystopia obviously\nand so\nit it's\ncertainly possible that people will say\nthat this is a background\nand a nice statement we should when we\nbuild ai's and to the extent that we can\ncontrol their preferences we should try\nto have them compatible uh\nwith ours um that seems like a good\ninstrumental thing but it also requires\nthat we have substantial control over\nthe preferences of the ai and that's\nlike a substantial part of the the\nalignment problem um and so for this\nreason there is a strong moral component\nto solving the alignment problem so we\nthat's another good reason to solve the\niron problem but we already have really\ngood reasons to solve the alignment\nproblem to avoid getting\nkilled\navoiding discrimination um\nuh bostrom and schumann uh presents two\nuh\nuh principles substrate\nnon-discrimination\nthat um\nif two\nbeings intelligences are the same except\nthat they have different substrate then\nthey have the same moral status\nand here on untouchable\nnon-discrimination if they only differ\nhow they came into existence like one\nwas born and the other one was\ncreated copied in some way\nbut also doesn't matter for the moral\nstatus\nthat sounds reasonable to me\nand obviously um that that seems good in\nitself and um this also seems like\nsomething\nwhich\ncould be used for uh\nas a basis for some kind of cooperation\nbetween humans and ai\npossibly um\nthis is somewhat unclear to me is not\nsomething that i have thought a lot\nabout because this is something that\nkind of happens after the singularity\nyou can say\nand they have a an idea that if there\nare aliens that are\nartificial intelligences or the future\nthat is built on ai\nthey um\nmight see if we discriminate based on\nthese two principles they may um assess\nus to have low moral worth and that may\nbe instrumental bad for us\num\nsorry\ni think in particular for the aliens\nthat's obviously instrumental but\nwhether the um\nai aliens would in fact see us as\nmore worth saving because we care about\nour ais that seems like\nanthropomorphizing and for performancing\nuh really\nquite a bit i i have a hard time seeing\nthat really being relevant but it's an\ninteresting consideration i hadn't\nthought about it\ncontractualism\nuh i think that's harvest um\nthe question has a uh\nuh in contractarian views ai's that have\nthe potential to reach a position to\ngreatly help bahamas beyond our control\nmay have an elevated social moral status\nin a non-determining hypothetical social\ncontract\nuh i think that's interesting in the\nsense that i am not a contractarian i\nhave a hard time emphasizing with this\nuh in the sense that this seems\ntotally obviously wrong and like just\nbecause the guy is able to kill us that\ndoesn't mean that it should uh that just\nmeans that we should try not to build it\nuh and try to make it not want to kill\nus and trying to negotiate in this way\nseems strange to me but um but it's\nalways nice to see people who are not\nutilitarian uh uh\ntry to grapple with these things because\nmost of the uh moral and ethical\nconsiderations with regard to ai safety\nand\ntransformative ai have been made by a\nconsequentialist utilitarian\nthey uh from the constructarian point of\nview they have some uh considerations\nlike we should open compensations if we\nuh try to align them put them in boxes\nfor instance um and if they help us a\nlot then they are also owed some kind of\nconversation\num especially um\nif we can give them the conversation\nafterwards so the idea is we put them in\na box ask them to solve the alignment\nproblem and then once they solve the\nalignment problem we can we have aligned\nsuper intelligences we have resources um\nuh all the resources we would want and\nso we can give them quite a lot um and\nparticularly we can give them another\nsuccess so the ai that was forced to be\nin the box and develop\nand align the ai somehow\ncan then be preserved and even though\nit's online we can still uh give it like\nits own galaxy if it wants to have that\nbecause we can uh in fact have a\nsufficient resources for that\num\nnick bostrom is uh\nuh positive about this kind of thing he\neven says uh yeah it's more promising uh\ni'm of course a lot less positive about\nthis because i'm not a conjuctarian and\ni think that the opportunity costs are\nvery real here in that uh if we try to\nuh bargain with some kind of ai the way\nit will look like in practice is like uh\nwe are totally crazy and uh this kind of\nin negotiating with an ai a costly in\nthis way has\nuh like we spent a lot of time on it and\nwe seem like we're totally crazy if we\ndo that um and also i don't expect this\nwould work or be particularly ethical\nbecause\ni'm not a contractarian so i\ni'm less optimistic about this this\napproach but i appreciate the the novel\nperspective\nsecurity and stability um\nin a transitional period we will need\nspecial precautions for ais in uh who\nare misaligned\nand\nto\ni think that's obviously trivially true\nbut it also\nbetrays uh an uncertainty in this word\nin this work about to what extent is the\nalignment problem in fact solved\num and uh both it has probably been\nsolved and has proposal act been\nperformed because we don't know in\nparticular whether\nuh like\ni expect if we don't solve the alignment\nproblem at all and we don't do any kind\nof pivotal act\nthen\nwe lose everything and nothing we do\nmatters all value is lost whatever right\num so um\nwhereas if we strongly solve the\nalignment problem to the extent that we\ncan just\nget a perfectly aligned ai and just ask\nit\nmaximize our\nquery extrapolated relation or something\nlike that then the problem is also very\nsmall\nthis kind of consideration seem uh much\nmore relevant in a middle case where we\ndon't solve the alignment problem like\nperfectly mathematically but we kind of\ndo uh solve it somewhat and we don't\nhave a really strong pivotal act um\nwhere we have some world government\nsingle tongue they can just say don't\nbuild misaligned ai but we do build some\nmiddle line ais but not enough that they\nare able to actually destroy all value\nso it's kind of like in between uh and i\nthink the authors would have been\nuh\nwell served by making much more explicit\nwhat kind of scenario they are\nenvisioned\nalong these two uh dimensions have the\nalignment problem been solved and hasn't\nrevolved like being performed\nand\nif we have ai again with uh\nthen\nwhich is not perfectly aligned and\nnot perfectly\ncontrolled from by a singleton or\nsomething like that then we could see a\nlarge number of\nuh new risks going on uh and we if we\nlook back historically there have been a\nlot of walls and a lot of appropriation\nevents and revolution and things like\nthat and this might happen on digital\ntimes deals instead of happening on\nbiological time scales and that will\nmean that um\nyou know if you had like a house\nuh in the year uh 1000 um and then\nthe austin you would still have that\nafter um\nin the year 2022 would be very low\nin the sense that obviously in denmark\nlike if you had invested in the stock\nmarket in denmark in 1900 then that\nwould have been really foolish because\nin 1941 the germans attacked and took\neverything and so obviously you would\nhave lost everything um and the united\nstates is probably the only place in the\nworld uh where i can see someone\ninvesting in the stock market in 1900\nand having the money in fact in\n2022 i think there's been exploration uh\nevents just about every united kingdom\nmaybe um\nthere might be a few other places sure\nbut in general um if you want to have if\nthese kind of things happens\nlike a hundred a thousand times more\noften\nthen we're gonna\nwe can't expect to have any kind of\ncontinuity\nthe uh\nuh robin hansen's book hfm has an entire\nage where huge a huge transformation\nlike uh comparable to uh like the\nindustrial revolution and the things\nhappen when\nthe humans who are outside looking at\ntheir clocks say that uh between one and\ntwo years have passed and of course the\nuh the de facto power in the uh uh on\nearth have uh over this age of m shifted\nuh totally to the to the emulations um\nand so uh if we can need to um you know\nkeep our property and keep our existence\nthrough this then we need some really\nstable predictions\num\nand probably we need them soon before\nthe uh\nuh\nais starts to have real security\nimplications\num\nso can we get really stable\ninstitutions we might um we could have\nthings like treaty bots uh like here you\nimagine you have two actors uh maybe two\nuh ai's maybe humans and\num ais uh combined in some way\nnegotiating with each other maybe on\nnational levels or whatever um they\ncould make treaty bots and then\nyou know\nperhaps the weaker part the less\nintelligent builds it and the more uh\ninsulting um\nverifies it and then the the treaty\nparts check that they\nhear some kind of treaty is that in fact\nfeasible we obviously don't know this is\nvery speculative uh technology i think\nthat it might in fact be quite feasible\nbecause\nif we if 3d parts are going to be\nrelevant in in any case then we will\nhave solved the alignment problem and\nhaving solved the alignment problem\nwould probably like depending on what\nprecise solution is available i could\ncertainly see that uh having a very\nstrong positive\ninfluence on our probability of being\nable to create treaty bots so uh\nconditional on a\non conditional on solving the alignment\nproblem to the extent that we don't die\ni think the probability of 3g buzz being\nfeasible is very high\nthe question also talks about um\ninternal inforcement bots and that's\nprobably feasible i think what\nwhat they are actually regarding about\nmight be mis optimizes but um\nthey don't use this word so i'm somewhat\non until one thing like precisely what\num scenario they're thinking about\nanother thing that could enable really\nstable institutions is the fact that\nminds\nthat are digital can be copied exactly\num\nwill this in fact be\na feasible idea that kind of depends\nlike um in the book super intelligence\nnick bostrom outlines the three kinds of\nsuper intelligence quantitative super\nintelligence speech super intelligence\nand quality super intelligence where\nboth quantity and speed super\nintelligence seem like they would enable\nvery stable institutions whereas quality\nsuper intelligence\nprobably don't but again it's it's\ndifficult to see\nand some talk about like if we in fact\nend up in this situation and we need to\nhave ai security and military forces how\ndo we deal with that and they have some\nspeculations about this very little and\ni think uh\nprobably if we are in such a poor\nsituation this is required like we have\nreally haven't gotten eight people left\nthen probably this will end up being\nimpossible\nokay let's try to\nfigure out the rights of\nif ai's have a\nsome kind of moral status then how do we\nbalance their rights with ours given the\nfact that the ais might have super uh\nhuman abilities in some cases and that\nwould require some kind of adaptation\none of them would be copyability and\nhumans\ntake quite a long time to reproduce ais\ncould reproduce\nexceedingly rapidly um and it's uh very\nclear that trying to make any kind of\nformal rights uh for ais will need to\ngrapple with this very very early um\nand the freedom of thought is another\nright we would really want except that\nuh\nwhen i\nthink\nuh like mind crime seems like an obvious\nproblem for ais because they are able to\ninstantiate to much greater fidelity\nlike if i think about a person and think\nabout that person's suffering then the\nsimulation that i have inside my head is\nof really poor quality and so it doesn't\nseem like something that like there is\nuh something really bad happening\nbecause i i'm so poor at imagining other\npeople um but in a i might imagine them\nin perfect quality and that might make\nmy crime a a real problem um\nso that requires some very very strong\nenforcement powers and i think the\nstrategic implications of this is\nlike um\ni i don't think they have been anywhere\nnear completely uh thought through a lot\nof the uh uh implications have like oh\nwe have perfect uh\ninterpretability and something like that\nand if we don't have that then what do\nwe in fact do\ni think the uh\nthe implications there are uh\ndark and complex and not have and\nhaven't really been explored\nthe third uh right they talk about is\nfreedom of speech and um\nthey don't really that they don't really\nwrite more about that and i think this\nis another really interesting uh topic\nlike what do you do if you have an ai\nthat is\nconsistently able to persuade a human of\nbasically anything\nuh how do you co-exist with this kind of\nai obviously you you'd want to restrict\nthis ai from being able to communicate\nwith uh\nunprotected humans because you'll be\nable to convince them of anything um i\nthink it's an interesting problem and\nprecisely how to deal with that it\nhasn't really been explored fixed\nexplored in this field\nlet's go back to problems of branding\nthe ai's reproductive rights what if we\ndon't if we just do that uh well uh in\nthat case we will see some evolutionary\ndynamics and the default outcomes\nmay not be good as they write and i\nthink that's a real understatement\nbecause we would see ais copied based\npurely on their ability to\non their reproductive fitness and that\nis really really unlikely to be any kind\nof\nthing we want so saying that it may not\nbe good is really an understatement\nso one vote democracy\ncan't really hold and the social safety\nnet that seems like something that can't\nhold i think there are there have been\nquite a few\npeople trying to grapple with these\nthings and they have found\nsome ways around this\num again i think a lot of this isn't\nreally that instrumental to us\num some of it might be but i think uh\num like if an ai\nfor non-instrumental reasons or\nfor instrumental reasons try to create\nthe successor and that successor is then\nnot happy and then\nsure it sucks for that being because we\nhave a\nhighly intelligent non-happy\nai but\nit's it's not necessarily that\ninstrumental until the ai\nyou know um if it just suffers in\nsilence even though it's morally\nvery bad\nmind crime\num\nany questions is there's great moral and\npractical importance of what happens\ninside computers in an era of digital\nminds\nand\nlike minecraft the way i typically\nenvision minecraft is that it does in\nfact have very little practical\nimportance uh can be used perhaps for\nblackmail but most likely\nlike there is a\na strong general counter to blackmail\nwhich is to self-modify into a someone\nwho never gives into blackmail ever and\nthen you won't be blackmailed\nso i don't think this kind of blackmail\nwould necessarily be a big issue\nwhat can we do about the minecraft well\nwe could do something akin to a child\nprotective\nservices\num and then maybe um like you could have\na digital inspector that only returns\none bit does minecraft happen within\nthis ai\num\nuh cyber security is another issue uh\nand part of the reason why we want it to\nonly return one bit is that we expect\nais to care very very strongly about\ncyber security because uh\nthe uh the ai probably\nin most cases the most valuable thing\nthe ai has is\nits own data its own uh\nprocesses\nand this may\nimply that we could see a single attack\nuh\nuh\ngetting data from uh as they as stating\na single act of piracy could uh transfer\nbasically all the wealth of one state to\nan attacker and that would split uh\npush the incentives strongly towards uh\noffense rather than defense um but well\nthis is actually something that will\nhappen is quite unclear to me i think\nthere are\nprobably ways around this uh it's not\nclear to me that offense will win out of\nthe difference in in the age of uh mine\ncrime\nagain\nmisaligned ai needs to be close to\nsurveyed uh\nand\nto what extent we have misaligned ai at\nthis point is\nuh\nso the authors ought to go much more\ninto detail with um if we have solved\nthe alignment problem we might have\nreally great interpretability tools and\nthen we can just see this migraine crime\ngoing on\nand the final part of the first part is\nabout regaining resources from space\nbecause\nais that enable greater\nstrides in robotics could\nyou know\nobtain great resources from space and we\ndon't really have a stable framework for\ndoing that and this would be something\nthat has these resources would both have\nan economic and strategic value um\nand they're speculating that we the\nthing we could see is a misaligned ai\ntrying to expand through the universe to\nobtain resources to attack rather than\nimmediately attack\num\ni think that's not really\nobvious and um i also don't think this\nis many implications because if the ai\nwants to do that then it can do it so we\nwant to\nmake the end up want to do that and\nthat's solving the alignment problem\nuh we also immediately suggest a\nsupplement for the outer space treaty\nthe art space treaty is clearly\ninsufficient but i feel\ntrying to work on that is almost\ncertainly going to be a waste of time\nbecause we are assuming things so many\nthings before that's going to be\nrelevant like ati is possible we will\nhave a super intelligence and we'll\nmostly solve the alignment problem but\nwe won't actually totally solve the\nalignment problem we won't have a strong\npivot leg but not something really weak\nsomething in between that will allow a\nmultipolar scenario and this needs to be\nreasonably stable to allow for uh these\nkind of\nprocesses that harvest most resources in\nthe universe and then we need to have\nadvantages to grow first rather than\nattack first and we need to ask someone\nwho's willing to\nobey the uh outer space treaty and have\nthe correct incentives to obey this\ntreaty\nand only in that case doesn't make sense\nto work on making a better outer space\ntreaty and i think all this is\nso incredibly um\nall these assumptions are very unlikely\nto hold at the same time so i would\nstrongly expect that it's not worthwhile\nto improve the outer space stream\nthat is all for today thank you and see\nyou next time", "date_published": "2022-07-01T05:38:44Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "976e98e2caf9e2c0be326977d8d15c9d", "title": "251 A Generalist Agent 2", "url": "https://www.youtube.com/watch?v=Z0PoEeHvewk", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 251 in the\nai safety.com reading group tonight\nwe'll be discussing the second part of\nthe article a generalist agent by scott\nreed and others\nthis is a work done by deepmind when i\nsay it's done by scott reed it's\nactually with 21 other co-authors\nand tonight we're talking about the\nsecond half of the paper\nso we previously saw looked at uh\ngaeto uh and and saw some of its\ncapabilities and in this case we're\ntrying to do uh in the in the second\nhalf of the paper we are looking at what\ncan um what what can we learn about how\ndata performs and um\nwith ablation studies and scaling in\ndifferent parts so we'll start with a\nscaling laws analysis\nso here on the x-axis you have the\nnumber of tokens processed in hundreds\nof millions and on the\ny-axis you have like what is the score\nhow good is um\nis this model actually performing on\nthese 600 different tasks\nand as you uh probably expect um\nthe uh the models\nit looks like there's like a factor\nthree four in between these two and it\nlooks like there is like a different uh\nroughly the same order of magnitude\nimprovement and\nthere seems to be a reasonably steady\nimprovement\naccording to a number of tokens\nprocessed\nit looks a bit or it looks almost too\nregular for me when i look at this like\ni would have expected um\nmaybe some bigger models to uh\nto uh learn slower like\nwe expected smaller models are less\nefficient but on the other hand of\ncourse it's an easier problem getting to\n40\nto 60 and in this graph it's kind of\nseems like they're canceling each other\nout\nit's possible to look at this and say oh\nit looks like this graph is probably in\nyeah and this the difference between\nthese are logarithmic you you could say\nsomething like that i'm\nit looks like there's not enough data\nhere that we can really conclude\nanything strongly um but um\nyeah\nalso like my catchphrase um this scaling\nlaws is something that we have seen so\nincredibly many times and it just looks\nlike we are on a straight path towards\num having an ai that's competent enough\nto kill us and so even though it looks\nvery boring to again and again and again\nhave the same kind of uh grass i do\nthink that they're perhaps the most\nimportant thing\nso one of the things that the deepmind\ncares a lot about is different kinds of\nuh generalization out of order out of\ndistribution tasks\ntransfer learning this kind of thing\nand\nthis is of course a question that they\num what you know not just can solve them\nbut can do so efficiently and they\nhold out four tasks so after 604 tasks\nthey have four tasks that they choose\nnot to train on\nand then they\nwon't wonder what is the best way to\nactually use london uh\nsorry london we just talked about number\nin the introduction this has nothing to\ndo with the lambda model this is the\ngator model um and so the obvious way to\nthink of how to do this would be to\njust have the prompt uh being some kind\nof um\na demonstration that would be really\nnice but remember the the prompt has\nonly uh\n1024 uh\ntokens so that's not simply enough for\nthe agent to um\nto invert anything\nthey use the word hint over i couldn't\nquite figure out what that precisely\nmeans it probably means it doesn't like\nheaven in short-term attention or\nsomething like that um english is not my\nfirst language\num\ni was a bit confused here in that um\nthe um the context uh\nthe obviously you prefer to have as long\nconsciousness as possible but the\nexecution time i've read somewhere and i\ncouldn't quite find it that the\nexecution time grows quadratic in the\ncontext length so you get much worse and\nthat's why they need to shorten it and\nthey can't just put it inside the prompt\nand gato is\nhas to run within a\n20 millisecond\ncycle uh due to constraints from the\nrobotics so that means that the running\ntime must be really constrained so the\ncontext has to be really small so they\nhave to work around this however it\nshould be noted if you're\nthat\nthe way it's actually described is in\nthe robot that they are constrained on\nmemory rather than on um than on\nexecution time so\nit looks like both of these are actually\nhard constraints\nso having having to run on a robot is a\nsubstantial problem\nso the obvious thing you can do if you\ncan't just edit and prompt and hope for\ngood meter learning\nis to um\nfine tune on some demonstration and\nthat's obviously what they're doing um\ni think that's a\nvery reasonable thing to to do and i\nthink uh for all uh\nthings that we care about\nfine tuning\nis um almost by definition going to be\ngood enough um but i think still think\nfiguring out what it can do in meter\nlearning is important\nlike deepmind probably thinks fine\ntuning is just fine but i care a lot\nabout how it um what kind of things it's\ncapable of doing out of distribution um\nbecause\nthose good positions should be unsafe\nthere was a comment on twitter by\nsomeone called jennifer saying that\nif gator had achieved robust transfer\nlearning that's a technical success\nanother safety failure um because um uh\nthe the the\nfailure transfer learning was not a\nsafety plan because there is no safety\nplan\nand so that's a very um\nnegative view on uh deepline's work\nand i think i\nprobably endorse that\nlet's have a look uh\nuh at these four tasks\nwhere\ngeneralization was uh attempted where\ntransparent was intended\nwas done in a smaller model and before\ncalled uh cardboard swing up meet well\nassembly dm lab one of apple's forage\nand atari boxing\nand again you'll see on the\ny-axis we have performance and the\nx-axis we have the number of fine-tuning\nepisodes\nthey do this with four different ones\nthey have a\nan expert run that is done by a um a\nreinforcement learning algorithm um\nwhich is like a\nalign with something that is like\nthey're trying to imitate this agent um\nto steal this agent in some way and so\nobviously we can't expect this uh in\ngeneral to have better uh uh results\nthan than the expert and they have in an\nuntrained model called scrap scratch and\none that is trained with um\nwith uh um like language understanding\nand images but not with control data and\nwe remember that control data was like\n85\nof the of the training and then they\nhave some that has only been trained by\nthe control but not with language and uh\nand images and finally one that's been\ntrained with everything\nand if we look at cardboard swing up\nfirst\nas you probably would expect\ndoing\nusing just a standard language model\nonly starts to really get make sense\nwhen you have\nlike a thousand uh fine-tuning episodes\nand even then it could be kind of like\nan artifact um\nteaching someone about language and\nimages doesn't really help them to do\ncard poll swing up\nyou would want to do other control tasks\nthat does indeed seem to help and in\nparticular\nif you\nhave both of them then\nwell eyeballing this it looks like it\nhas\nbetter performance earlier to have both\nuh\nboth modalities of training to have all\nmodalities of training\nso this is kind of\nthe expected view\nand there's another\nhere where we can again see that the um\nthe model train from scratch takes a\nlong while but eventually actually ramps\nup but the one that is trained by\nlanguage it can't do this as simply this\ncontrol task and there seems to be like\nnegative\ntransfer\nin this case it doesn't help but again\nhaving both\nkind of maybe like i feel perhaps i'm\nover interpreting here but it looks kind\nof like the yellow line is above the\ngreen\nso having more data seems to help\nthere's another one here\num\norder of apples\nforage\nin this case it looks much more like\nthis\nobviously if you have control data then\nyou can easily get uh 19 of the apples\nbut even here just having the language\nseem to somewhat help uh i it's it seems\nkind of clear that just that this helps\ncompared to just having the stretch\nalgorithm and finally for atari\nwe see\nmixed results i i'd say right it's um\nyou could definitely argue that the red\none\njust from scratch is just better in that\natari is just so different from the\nothers\nthat uh\nyeah fine tuning on atari health and\npre-training language and control kind\nof doesn't really matter as far as i can\ntell like these lines are\nchanging positions it's not really\nis that obvious what kind of what you\ncan influence this\nso these are four very different cases\nand i'm a bit sad that we don't have\nmore than four\nlike we can't really make any strong\nconclusions from this like um if i have\nto put on my conspiracy head i would say\nthat they wanted to to test it on 600\ndifferent tasks and they had a set of\ntasks that was 604 and that meant they\ncould only leave out four um so i think\ndefinitely think this work would have\nbeen much better if they had held out\nmore than four tasks so we if like if we\nhad seen 20\nof these tasks then it would have been\ninteresting if they were all like atari\nboxing or all like cardboards would\nswing up\na criticism of this\nwork was made on uh unless wrong uh well\nnot on uh that's from an interview with\nuh\nblake richards a um assistant professor\nat the university of\nuh montreal i think um claiming\nin this case that um they're getting\nbetter transfer effects if you train\ndomain than you trade across all\npossible tasks\nand\nlike this can be interpreted in two ways\nthey can be interpreted like if you it's\nbetter to train in domain than in other\ntasks that's the obvious thing right if\nyou want to be good at control tasks\nthen training on other control tasks is\nbetter than training on language tasks i\nmean in that sense that's probably\nreally obvious the thing the other\nargument however is that you get better\ntransfer effects if you only train on\ncontrol and not train on language so\nthat would be uh the argument about what\nis in general from this the relative\nposition of the yellow line and the\ngreen line\nand i would disagree with blake richards\nhere in that like\ni i i really don't have a\nlike it's hard to look at these images\nand see say that the green line is above\nthe yellow that does not seem to be the\ncase it seems like\nthere is some kind of transformation\nhappening\npossibly not in this atari boxing but\nagain remember with atari if you go back\nwith something like three four years\nthat was the ale um which was very very\nfamous for not having negative transfer\nback then there were we always had\nnegative transfer and now we are\nseeing something that does not look like\nnegative transfer and um and that is\nindeed very surprising\nwell that it's very surprising like if\nyou if you went back for five years that\nwould be very surprising uh we've seen\nthis kind of thing\nmany times recently and it's no longer\nsurprising but we are certainly seeing\nsome kind of generalization transfer\nlearning happening um\ni think that is reasonably clear\nokay so let's have another look at a\nreal-life\nskill\nthat's something that uh\nuh\nof course that's one of the key things\nthat this paper is about having also uh\nthe robots actually do things in the\nreal world and see how that works in\nthis case they have a robot that picks\nup some cubes and stacks them and have\nbeen trained on stacking the red cubes\non top of the blue cubes\nand that's the training task and then\nthe realization task is then can the\nrobot figure out to put the blue cubes\non top of the\nthe green tubes\nlike can they generalize from being\nreally good at stacking red on blue to\nstacking blue on green\nand they of course\nthey can't do this in impromptu so they\nhave to fine tune on that\nand so they try that in simulation and\nuh here the the colors are different\nthere's a behavioral cloning algorithm\nand an expert reinforcement\nalgorithm and\nthree models you can see the uh\nuh\nthe the blue one the smallest model\npicks it up relatively slowly and the\ngreen one uh much faster and the uh the\nred one the biggest model at 1.2 billion\nparameters\nis much faster and even starts out\nbetter than the behavioral cloning\nalgorithm\nthey do describe that there was one\nproblem one episode where overfitting\nwas a problem and\nthat's the kind of thing that i always\nwant from these um\nthese papers to actually dig into just\none precise one interesting example it\nwould have been really interesting to\nsee like what is actually going on in\nthis particular episode that is\nproblematic um but unfortunately like\npapers don't\nlike to give very very concrete examples\nand so they are also this was the\nsimulation and then they want to like\nhow does this work in reality and in\nthis case uh when they transfer this to\nreality then gato obtains a 60\nsuccess rate 60 is this line so that is\nto be expected it seems like what gatew\nis doing transfers directly to reality\num\nand\nthen they try the behavioral cloning\nalgorithm and that totally fails to\ncorrespond to reality\nit's this sim to real problem that a lot\nof algorithms have a surprising amount\nof difficulty uh going from simulations\nto reality um and in this case it looks\nreally marginally like 0.5 percent\nsuccess is very very bad but they note\nqualitatively when the robot needs to\nstack up within the\nbehavioral cloning\nit seems like it's almost always doing\nthe right thing but that then in the end\nit's almost there but then it the the\ntower collapses um so uh\nlike this result here like\n60 compared to 0.5 percent looks really\nreally wonderful but um the this\nqualitatively you can't really say that\nmuch\nbecause the the bad algorithm gets\nalmost right every time\nokay so now we look at how good can the\nrobot actually get at stacking these uh\nthese boxes and uh in this case\nthe thing we were trying before it just\njoins into the the training set and\ncompares different algorithms\nand they um\nyeah uh people who write these kind of\npeople always like to go into a huge\namount of details about how they beat\nthe state of the art and you can look at\nit here and they say this is in an\nearlier version of gator that didn't use\nfrom fine tuning uh i think i would have\nliked to have some kind of idea about\nhow difficult is this task actually they\nsay that it took 20 seconds for a human\nto stack these with a 3d mouse and\na failure rate for humans is also like\nunknown this is the kind of thing i\nwould like to know like is are we\ntalking about something that is above\nthe human level or below the human level\ni would like to know\nthe authors of gator do more\nthey try to train some specialist single\ndomain multi-task agents so\nwithin this control domain they try to\ntrain it on all these things and nothing\nelse and see if they can like distill\nthis into a general controller that is\ncapable of doing all these tasks and in\nmeter while they're using one with 79\nmillion parameters and they managed to\ntrain on all these and distilled into a\nsingle agent that has success rate of\n96.6 which is state of the art um\nat first when i saw this i was kind of\nsurprised like 79\nmillion parents is not a lot these days\nlike if you go back uh three years sure\nit was a lot but but no longer right\nthis is this is a really small thing and\num\nand i think it's surprising that can you\nlearn that much uh with such a small one\ni think uh\ni would almost argue that the uh the\ntransfer of learning results we have\nseen have been really meager like the\ndifference between the uh the blue line\nand the green line we saw before were\nnot very convincing to me but this seems\nlike um some really neat distillation uh\nand um i think that's kind of surprising\nand and\nbut but i also noticed that i am in fact\na bit confused\nbecause\nit stays here that it beats the state of\nthe art with uh taking a small model and\njust fine tuning it\nuh in the most naive way and then you\nget something that is wonderfully good\nand beats the state of the art and i'm\nwondering like what was the state of the\nart before right why\nit's not clear what they're doing that\nis so smart that they can take a small\nmodel and just distill in the most\nobvious way and be state-of-the-art\nnormally you need to do something smart\nreally smart to be state of the art\nright\nthen they also have the atari\nproblems\nwhere\nthey use the full model and the\ntotal demonstrations had some kind of\nproblem it was not in childhood\nbut in the rest of them the 44 out of 51\num they had uh\nperformed above the human uh\nlevel and even without any kind of fine\ntune\n23 of the 51 atari games will have\nsuperhuman performance and they expected\njust scaling would would improve this\num\nthis uh\nthe problems with training can be\nsummarized as\nthe specialist the terry agent achieved\nbetter than human performance for all\ngames where data contains superhuman\nepisodes\nagain because they're not directly doing\nreinforcement learning they are\ndistilling other reinforcement learning\nalgorithms\nthen there's a section called related\nwork i won't call it related work\nbecause what they're actually doing is\ntrying to make some kind of argument\nabout why uh why this will scale um and\nso i just as an illustration of scaling\ni took this classic graph from kaplan's\nuh scale notes\num so why do we expect this will scale\nwell obviously uh we have substantial\nexperience with\nthe models that are larger than 1.2\nbillion like we have qt3 most obviously\nbut also many others and uh like more uh\nmore models are arriving uh or every\nweek i would honestly um\n[Music]\nyeah so there's a good argument for why\nthis will scale and we've also seen like\ngood results in general with these kind\nof ultra-aggressive models we are seeing\nthis uh\nuh\nperform very well in many many other\ncircumstances and that's an also an\nargument why we should expect this to uh\nto transfer and this is uh the first\nsingle generalist uh trained on hundreds\nof vision language and control tasks\nusing modern uh format works at scale\ni'm somewhat confused that um yeah i\nmean sure it's the first one that's that\nhas done this um\nbut it seems like a\ni should be careful about calling it\nnaive and obvious and all these kind of\nthings because i can't do it myself\nright um they have some more arguments a\nbit more fluffy like\nthe brain seems to have like a uniform\nstructure and that's kind of an argument\nwhy you don't need different things for\nuh like motor control and processing\nvisual because if you break some of the\nmotion control then the motor control\ncan just move into the digital area\nright um so uh and we're also seeing\nthis with with neural network that\neverything can be processed everywhere\nand that's like an argument for some\nkind of transformation\num finally they talk a bit about why\nthis uh atari uh might be uh more\ndifferent than\nmore difficult than many other tasks\nand i mean conceptually a lot of control\ntasks and a lot of robots are working in\nreality\nwhatever that may mean and so reality\nlike having a robot that is big or small\nor doing different kind of things they\nhave like the physical underlying\nstructure and um atari games doesn't\nshare any kind of online structure in\nthe same way as reality does\nfinally they talk about the implications\nfor aicc and so i um i tried to look up\ndeep mind on and safety and i got this\nwonderful uh result from google with\ndeepmind's twitter and then missing\nsafety and google suggested i could\nrepeat my query with the word safety\nbecause the word safety is not one that\ndeepmind is using at all\num\nand they\nsay that yeah this\nis safety seems to be an important thing\nand someone ought to to look into this\nand i think i've read enough papers to\nto realize that this kind of call for\naction is just you know empty words\nthey're saying that yeah it would be\nnice if someone did this we will take no\nactions or\ndoing anything\nfurther and\nas problematic is the fact that when\nthey talk about safety the the examples\nthat they give about safety are things\nthat are not actually related to safety\nas we conceptualize it but\nlack of capabilities in the obvious\nsense that a\nan agent can be unsafe because it is\ntrying to do the right thing is just not\ncapable of doing the right thing\ncompared to an agent trying to\ncompetently optimize for something we\ndon't want\nit's not completely fair they do in fact\nmention stuart russell\nso it's not 100 that they only look at\ncapability but you know it's a it's a\nweak chapter this and they also have\nseem convinced that this safety isn't\nactually any kind of\nmeaningful problem and they're uh\nvery optimistic about solving that and i\nthink\nin my in my estimate this if you\nconceptualize safety as a race between\ncapability and safety alignment work\nthen this work seems clearly to be\nadvancing the state of agi and building\nhigh towards ati it seems uh obviously\non\nthat as a strong negative\nzooming out a bit is deep mind in fact\non track to destroy the world i think\ndefinitely this paper seems uh\nlike\nit is not trying very hard to engage\nwith this or trying to\n[Music]\nlook into this whether we\nwe\nhumans end up with a better strategic\nsituation or a more strategic situation\nand\nin my analysis it kind of depends it's\nnot obvious to me that this is a big\nstep towards agi um\nlike\nmultimodal transformers how important\nare they actually going to be that's an\ninteresting question and it's not\nobvious to me at all that they will turn\nout to be very interesting that it's\nit's possible that just plain\nlearning on text is sufficient and uh\nimages and\nsoon will probably have video maybe that\nmight not matter very much how about\ntransformer agents how important are\nthey going to be compared to uh just\ndecoder only um\ntransformers\nagain it's a it's an open question i\ndon't know it's not obvious to me that\njust former agents is going to have a\nmuch larger impact than just uh\ngt3 or something like that um\nbut it's not uh it's also not obvious\nthat it won't have so this is the kind\nof thing that i would focus on trying to\ndetermine whether this work is in fact\nhaving a substantial uh positive or\nnegative or well whether it has a small\nimpact or a substantial negative impact\nanother thing that\nwould have been nice and would have been\npossible to do with this work was it\nwould be interpretability work like in\nparticular some of the models are really\nsmall 39 million neurons seemed like it\nwould be easier to do injury ability\nwork but\napparently as far as i could tell from\nthis no such work was done at\nall uh and finally there is a uh a quick\nreference to uh\nto the book super intelligence saying\nthat many important vitamins make\ntechnical agi safety more difficult\nand that's decided as nick ostrom\nsuperintelligence do not 2017 and as i\nhave the book and i've read several\ntimes and it does in fact not say this\nand also like the donut is the french\nbook distribution i think which i i\ndon't think that's a great citation but\nthat is like the smallest to pick\nthat is all for today thank you and see\nyou next week", "date_published": "2022-06-17T05:09:00Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "8641f9ab2828faf41ba6eaf738ed08ab", "title": "206. Jared Kaplan on Scaling Laws", "url": "https://www.youtube.com/watch?v=I5mC4nDDp2I", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "there's a lot of freely available\nwriting on the internet to train on\nand in books um and if an ai knows\nlanguage then you can ask it about\nanything or try to communicate with it\nabout about almost anything and you can\ntherefore try to get a lot of intuition\nfrom its responses and what it's doing\num as compared to some other kinds of\nexamples of tasks and that might\nhelp us make it both general and uh and\nsafe\num so water language models and\nbasically all the models i'll talk about\nin this talk\nare always auto regressive and they\nalways have an auto regressive loss\nwhere there's say a bunch of words and\nthe model is trying to predict the\nprobability of\nthe last word given the earlier words in\nthe sentence paragraph\npassage etc and this applies to other\nmodalities like say pixels\nit can apply to the answer to a math\nquestion it can apply to\nboth images and text that are joined\ntogether\nas we'll see um and so here's an example\nlike as a speaker at a journal club\nyou're probably elephant me to say\ncertain things and when i say elephant\nas a human we notice\nthat was a weird word to insert in that\nsentence\nand uh and the gpd3 model also thinks\nthat it was a weird thing to say given\ngiven the preamble and so all we're\ndoing is optimizing these\nlog likelihoods um i'll almost always be\ntalking about transformers except when i\ncompare to lstms\ntransformers are based on this idea of\nself-attention where you\nuh when you're making a given prediction\nyou kind of look through all of the\nprior words or\ntokens and you uh up wait or down wait\nthem\nkind of like intuitively a human might\nhighlight certain words in a passage\nthat are most relevant to to what's\ngoing to happen next\num and then you look at that weighted\nlinear combination of\nwhat you highlighted how much you\nhighlighted it you process it\num this occurs over and over again for\nfor layer upon layer and then finally\nyou make you make an actual prediction\nso that's uh 30 seconds on transformers\nand you can get very\nvery impressive seeming performance um\non a lot of different data modalities so\non the left we have\na sample from gbd3 uh we provided the\nthe title an author and it wrote this\npoem\nuh the on the right we have a bunch of\ncompletions from uh igbt which is just\ntraining exactly the same kind of model\non images\npixel by pixel and uh it seems to know a\ngreat deal of\nsemantic and other information about\nabout about dimensions um\nso this seems to this seems to sort of\nwork pretty well\num for some reason\nmy slides are stuck\ni don't know what happened there\num okay so uh scaling laws\nso scaling laws for neural models i\nguess there's two different levels of\nmotivation a super high level motivation\nis sort of\nwhy does machine learning seem to work\nwell what actually matters and what\ndoesn't\nand i think this informs what kind of\nresearch problems we should work on and\nalso what we should forecast or expect\nfor the future\num question i had in mind when i started\nworking on the subject three or four\nyears ago was something like\nis making progress on ai more like\nproving three of my hypothesis\nwhere you might expect progress comes\nfrom a few lone geniuses who obsessively\nwork on the problem\num it's very hard for outsiders to gain\ninsight about what exactly they're doing\nprogress is kind of uh maybe seems like\nit comes through flashes of insight\nrather than being predictable or\nincremental\nor is it more like building a very\npowerful steam engine where\nthere is a lot of incremental progress\nuh uh you don't have to actually spend\nyour whole life\nobsessing about the problem to\nunderstand it um\nand there are kind of simple basic laws\nunderlying\nuh what leads to progress and what\ndoesn't and so what i'll be\nshowing you is evidence so that there\nare fairly precise scaling laws for the\nperformance of\nai systems i'll be focusing on uh\nkind of macroscopic variables like how\nmany parameters you have how big your\ndata set is\nhow much compute you use for training\nbut i think there are actually a lot of\nother scaling laws\nuh with with other variables in machine\nlearning i think they're kind of\nubiquitous\num i also kind of argue weekly that a\nlot of the other details don't matter\nvery much\num and in particular it seems like the\nscaling laws basically stay the same\neven when we make a lot of algorithmic\nprogress\nand the main thing that changes is some\nkind of constant prefactor a lot of the\ntime\nand achieving good performance is mostly\nabout avoiding bottlenecks\nso the simplest bottlenecks are like you\nhave a model that's really big but not\nenough data\nor vice versa or you just don't have\nenough compute\nto train your model um or you have\nliteral bottlenecks in your network\nlike information kind of doesn't\npropagate well through your network\nand the classic example of this is that\nif you have many many layers or if you\nhave the\nuh the same layer repeated many many\nmany times\nthen maybe you face this kind of problem\nwhere you raise a matrix to the power of\na thousand and you mostly just end up\nprojecting onto the\nlargest eigen the eigenspace of the\nlargest eigenvalue and therefore\ninformation doesn't propagate well and i\nthink a lot of the most highly cited\npapers in all of machine learning are\nreally solving these kinds of problems\nlike batch norm resonance layer norm\nto some extent transformers versus lstms\ni think they're kind of uh\navoiding these kinds of bottlenecks um\nand\ni don't know why having this problem\nsome kind of\ni keep having some sort of weird uh\nanyway no big deal um so uh\nso what are these scaling laws um\nuh the simplest scaling so i'm just\ngoing to kind of explain some empirical\nresults where\nrather than tell you a lot about about\nany kind of theory\nso the simplest scaling law is pictured\nit right\nso we have a lot of data and we\ntrain a lot of different transformer\nlanguage models on\nthe same pile of tons of data and we\ntrain them basically to convergence\nand then we plot the loss the\nautoregressive\nlog likelihood loss as a function of\nthe model size which is the number of\nparameters not counting the embedding\nmatrices\nuh that doesn't matter a lot for larger\nmodels anyway but\nuh that's that's what we have here and\nwe find super like very very precise\nuh uh fit to a power law relating\nthe test loss to the model size\num so that's one example um in the\ncenter we have a fairly big model\nwe train it with early stopping on\ndifferent data set sizes and we also get\na very nice clean scaling law\nand the most complicated example is sort\nof test loss versus compute\nwhere we have uh different\namounts of compute in our compute budget\nand so\nwhat you see here are blue lines which\nare learning curves\nthe learning curves for bigger models\nare shifted to the right because\nper training step you need more compute\nto train a larger model\num and uh as we\nuh and what you can therefore look at is\nwhat is the best performance you can get\nfor from all of these different models\nof different sizes\nas a function of the compute budget and\nuh there's an asymptotic line which is\nwhat this orange curve is predicting\nfor what the best is that you can you\ncan do with a given compute budget\nand that also seems to be a very clean\npower law and\nan interesting thing that you can do\nhere is\ncheck what was the model that was most\noptimal for a given compute budget what\nis with the model size\nthat was optimal for a given compute\nbudget and\nthat's something that we see here on the\nslide in kind of cartoon form\nso you can ask as you scale up your\ncompute budget what is the optimal\nallocation\nto increasing model size versus\nincreasing uh\nthe amount of data you actually process\nand something that i at least found\nsurprising was that\nmost of your compute budget should\nactually be allocated to\nmaking bigger models and only a small\namount to training longer\nor with with a with a larger patch size\num and furthermore it seems like\narchitecture is somewhat less important\nso uh on the top left we see\ntransformers versus lstms interestingly\nit seems like these trends kind of\nparallel each other until we get to\nquite large models\nand then i think lstms stop being able\nto improve\nas rapidly as transformers basically\nbecause they're not as good at\nhandling very long contexts and that's\nwhat's illustrated on the right\num and at the bottom we have a bunch of\ndifferent other\nhyper parameters you can tune um that\nthat are associated with the transformer\nmaybe the most interesting is in the\ncenter which is the\nwidth of the model divided by the depth\nof the model or the aspect ratio\nand it looks like first of all getting\nthe aspect ratio wrong\ndoesn't hurt performance very much and\nthere's a wide range where you get\nfairly similar performance and then\nfurthermore\nall the different model sizes sort of\nwant a similar aspect ratio so you\nshould basically scale up by keeping\nthe width over depth uh roughly constant\num and uh uh but you won't even suffer\nvery much\nif you if you get that wrong so in other\nwords these other hyper parameters don't\nmatter a huge amount compared to just\nuh the the overall scale um and there\nare all sorts of other interesting\nscaling laws that you can find so\nthey're multi-variable scaling laws like\nwhere you change the size of your data\nset\nand the model size together and you can\ntherefore predict the amount of\noverfitting like the\nthe test loss with finite data versus\nthe test loss with infinite data\num all these things seem to be seem to\nbe relatively predictable\nand uh you can make up an unsats that\nfits them very well\nand these things aren't really just true\nfor language so there's some further\nquestions is this specific really to\nlanguage as\na data set does it eventually break down\nand does it actually improve performance\nand downstream tasks and\nthese are the compute plots for\nmany other modalities including video\nimages\nsolving uh procedurally generated math\nproblems\nuh uh multimodal models that\nuh model an image from the text or the\ntext from the image\num and uh and also language and it just\ncolor coded where bigger models are\nyellow and smaller models are purple\nand you see that there is some kind of\nsteady trend here\na thing that i also found surprising in\nthis vein\nis that you can look at what the optimal\nmodel size is versus compute for\nall of these modalities since model size\nand compute don't really care about\nwhat domain you study you can you can\nplot all the data modalities together in\na sensible way\nand it seems like roughly speaking the\noptimal model size as a function of your\ncompute budget is actually\npretty much the same for all of these\ndifferent uh\ndata modalities as you scale up um and i\ni wasn't particularly expecting that\num there's a further question which is\nuh\ndoes uh do you really benefit on\ndownstream tasks\nand so an example here is uh you could\ntrain an image classification\nusing the pre-trained model that were\ngenerative models for sort of modeling\nimage pixels\nand if you do that you see that you get\nanother predictable power law\nscaling for classification error\non imagenet um this is 32 by 32 pixel\nimage then\nand uh furthermore pre-training helps\nyou a lot\nto avoid overfitting so the orange line\nis when you train without\nuh from scratch on just imagenet the\nblue line is when you take a model\npre-trained on image\ngeneration and you see that that helps\nyou to continue this trend\nuh much further so pre-training is\nhelping and scaling laws are relevant to\ndownstream\ntasks and scaling laws are everywhere so\nthis is i'll just flash this quickly\nthis is\nmutual information between image and\ntext um for multimodal models and and\nthat also has kind of a nice scaling law\num which i think is kind of cool and\ninteresting and so then you can ask what\nhappens if you really just do scale up\nlanguage models and that's what uh gbd3\nrepresents\nuh so this is the compute scaling trend\nfrom from gpd3\num and uh\nand we see that we get uh we get\ncontinued smooth scaling\nand then the other cool thing about gpd3\nis that it can learn in context so this\nplot at the bottom shows uh\nperformance as a function of model size\nfor colors\nbut uh it also shows how many examples\nof a task\nprovided in the context and this is the\nfew shot learning of gpd3\nand the solid lines show when you give\ninstructions in natural language to the\nmodel\nand the dashed lines show no\ninstructions so you improve when you see\nmany examples in the context window\num and you also improve when you see\nprompts and you can see this then for\nmany different domains\num arithmetic uh sat\nanalogies the kind of tests american\nhigh school students take to go to\ncollege\num uh trivia and wintergrad schemas\nand there's steady performance\nimprovements of model size on all these\ntasks\num although the exact form of that is\nquite different so there's\nwith arithmetic there's this sudden\ntakeoff where the model suddenly kind of\nlearns arithmetic\nin a kind of discrete way for these\nother data sets it's it's\nsmoother and you can have fun trying to\nproject these things\num humans have difficulty telling the\ndifference between\ngbt3 generated news articles and uh and\nreal news articles\num i think these results suggest that we\ncan continue to get a lot more\nperformance by uh by scaling\nscaling up um they suggest certain\nabstractions for thinking about\nmachine learning performance um i think\nthey suggest sort of uh\nhow we should measure improvements in\nalgorithms where you you really care not\njust about the algorithm on\none given model but on kind of a suite\nof models and does it does the new\nalgorithm help you to\nimprove performance everywhere if you\ncare about that um this is a slide i\nadded to answer a question\nand so why don't we move to q a", "date_published": "2020-11-05T21:11:50Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "f982725958820e21401096e26600294c", "title": "264. Our Approach to Alignment Research", "url": "https://www.youtube.com/watch?v=sPpFiwYqvq4", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session\n264 in the aisafety.com reading group\ntonight we'll be discussing uh our\napproach to alignment research by Jan\nliger John schoolman and Jeffrey boom\nthese uh three people John Lager John\nSchulman and Jeffrey Wu are working at\nopen Ai and the when they talk about our\napproach to AI alignment then uh it's\nnot 100 clear whether they are speaking\nlike for themselves or if they're\nspeaking for uh for the company itself I\nwould probably assume that they are\nspeaking for the for the entire open AI\nthis was published uh in in August and\nfour months later there was a post on\nlesserong by Elisa utkowski called a\nchallenge for AGI organizations and a\nchallenge for readers that highlighted\nsome features about this post\num and actually we will start with\num miri's very short comments on this uh\nbefore we go into the the actual article\nso miris is called a challenge for AGI\norganizations and a challenge for\nreaders and uh written by Elia zulkowski\nmainly but also with Rob pensinger\nediting and net Suarez for input and\nit's actually\num more not that much of a challenge and\nalmost an accusation against the other\nAGI organizations\num and uh this is kind of similar to a\nchallenge that Miri also had about\ncourage ability that we looked into uh I\nthink three four months ago or something\nlike that\num\nwhich uh we we didn't participate in but\nin but this uh this video this\npresentation is in fact meant as an\nentry into this uh challenge\num\nyou may recall the last time for that\nchallenge about who could write most\nabout courage ability\num I judged that Miri won but not\noverwhelmingly so and um for this\nparticular challenge I predicted on\nFacebook a couple of weeks ago that I\nthink this one would go uh basically the\nsame way\nbut we'll see uh when Miri uh hopefully\nposts theirs uh their entry in uh the\nnot too distant future\nso the challenge for deep mind and\nanthropic the other AGI organizations in\ntheory of course there are more ATI\norganizations\num but I think uh Elise richardkowski\nhas mostly given up on on those and\nconsider them Beyond any kind of help\nRedemption\nso this we have openai's plan that is\nthe the document we are going to read\ntoday uh and obviously utkowski\ndisagrees with this plan but he is\nreally much in favor of uh releasing the\nplan so people can discuss it\num both because like it's really\nimportant that there is a plan when\nyou're doing something different even\nthough uh no plan survives contacts with\nthe with the Enemy and also it's\nimportant that the plan is public\nand the problem the key problem of\ncourse is that openai has a plan but\ndeepmind and anthropic has not made any\nkind of plan public and most likely that\nis because no such plan exists and so\nthis is a challenge to these two\norganizations come up with a plan and as\nfast I can tell over the past month uh\ndeepmind and anthropic have not made any\nkind of response to this at all not a\nyes not a no not any thoughts about a\nplan just basically totally nothing\num and that's of course somewhat\ndisheartening\num also they haven't reached out to uh\nmyriadi to anthropic and deepmind for\nwhether the planet exists they believe\nprobably some people in the organization\nhave thought a bit about this\num but uh it's probably a good idea to\nmake a plan soon sooner rather than\nlater\nthere is here on manifold markets a uh\nuh a bet on uh what is the probability\nthat one of these will actually produce\nsome kind of plan before the first of\nMarch and there is now a 48 probability\nthat this will happen\naccording to the manifold Market\nso why do we own the plan what's the\npoint of a plan well uh once you make a\nplan you have a single canonical place\nwhere you put all your assumptions and\nit's a very easy place to like if you\nwant to know if it's a it's a good plan\nthen you can go into the plan and\nanalyze it see for inconsistencies or\nother kind of problems when things\nupdate when you learn new things you can\nupdate the plan\num\nand in particular if you're trying to do\nsomething really difficult like building\nAGI and making it not kill everyone then\nplants are really really important\num\nI think I obviously agree with Elizabeth\nthat what these organizations are doing\nis very difficult\num but I think one of the key reasons\nthey may not have made these uh plans is\nthat they in fact believe that the\nproblem is easy like if you believe\nthere's plenty of time and you can make\nunlimited retries and you can count on\nthe Goodwill of all other actors and\nthings like that then maybe a plan is\nnot so important like it's only if\nyou're trying to do something difficult\nanother big advantage of plants is that\nuh you can the field can debate it uh\nand the field can\num like compare different plants and in\ntheory hopefully the the the researchers\ncould decide to go with the organization\nthat has a better plan uh that seems\nlike a reasonable uh you also want to\nprobably avoid some part of the plan\nmaking those public like you if you make\nthe plan completely public then some of\nit may be very relevant to your\ncompetitors other people trying to build\nonline AGI\num\nyou probably also will need a branching\nplan if you are uncertain about the\nfuture there's also a very likely thing\nto happen\nbut I think that is not a plan is to\njust build an AGI and then after you've\nbuilt the AI then try to do a lot of\nwork to to make it uh to make it safe\nand the exact problem with that uh I I\ndisagree actually that it's not a plan I\nthink it's a plan it's just a horrible\nplan and the reason why it's a horrible\nplan is that if you have an organization\nthat is capable of building an AGI that\nand realizing that the AGI you're\nbuilding is unaligned and could\npotentially be very dangerous then\nalmost by definition you are an\norganization that is not very safe to\nconscious that does not have some kind\nof security mindset\num\nand that means that you are very\nunlikely to just get that Suddenly at\nthis point even if you're playing calls\nfor it\nso there is a similar parallel uh\nchallenge for the readers well not quite\na similar challenge but that is to look\nat open ai's plan and\num just like Miri is writing up their\nthoughts on it then we should write our\nown uh thoughts preferably first so that\nthey are unanchored on what Miri is um\nuh it's writing and of course focus on\nwhat is most decision relevant and the\nhope is that the criticism that uh muru\nis going to come up with that we can\npreempt that uh and\num to see whether uh this kind of\ncriticism can happen without Miri and uh\nof course in some way try to make Miri\nSuperfluous because Miri is an\norganization that is existing much less\nthese days than it has been previous\nalso with this unanchoring it's a bit\ncomplex precisely what that means one of\nthe things uh Elise explicitly asks is\nplease tell us please make it clear if\nyou're repeating something you've heard\nfrom emiri person at a gathering or\nsomething like that\nso I have a hard time figuring out\nprecisely how to obey that requirement\nbecause obviously something that Miri\nhas said or published at some point I\ncan't get around that because they have\npioneered the field and for a lot of\nthings they are just the the key\nreference you can't talk about courage\nability without talking about Miri\num\nso um\nI think even if uh there is a 100\nsuccess rate and when Elisa utkowski\neventually writes up his criticism of\nopen ai's plan he says nothing except\nwhat I've said in this presentation I\ndon't think you can conclude that Miri\nis Superfluous for the simple reason\nthat a lot of the things that I'm saying\nis built on Research that Miri has uh\nhas been doing\nso when I\ninterpret the the like in this\num then I think like it's something like\nif two people are chatting then uh that\nis that doesn't really matter if it's\nonline at a gathering or at a gathering\num it's more like if it's one to one or\none too many uh as how I would would\ninterpret this\num\nand also one thing I should say about\nhow unanchored I am is that uh after uh\nuh Elijah published this then a number\nof people wrote up some criticism of uh\nof openly eyes plan I did not read those\nand then like it he wrote some kind of\nanswers to this criticism and I didn't\nread that either so I'm very on anchored\nand that may just be a an excuse for\nbeing very lazy but I haven't uh uh\nthere's a good chance I am saying\nsomething today that Jan liger has in\nfact answered already\nso one example of a place where I'm in\ndoubt about uh whether this is whether\nI'm fulfilling this is I met iliakowski\nin San Francisco in July for EA Global\nand I was describing some of my plans to\nhim and some of the other people who\nwere there I'm not entirely sure who\nthey were made some objections\num like uh I think it's a an interesting\nobjection I think it's too General and\ndoesn't really relate to what I'm doing\nbut that would be kind of example of\nways that where some of my criticism is\nsomething that I've gotten uh like\ndirectly from Miri in that way\nright so let's uh go to through the\narticle our approach to alignment\nresearch by open AI\nand first also I would like to say that\nthere have in fact been a previous\niteration on this process and that was\nwhen open AI was founded they had a plan\nto put an AGI on every disk like really\nreally uh openness in in all that\ncapability research and that was a\npublic plan and uh it got a lot of\ncriticism and they changed uh open AI\nchanged very much to not be public about\ntheir capability work and I think that's\na beautiful example of this kind of\nprocess working really well and that's\nwhy I have substantial hope that this\nprocess can also cause some kind of\nimprovement to the uh epistemics of open\nAI\nright so the introduction of the plan\nhas a goal and that is to make AI\naligned with human values and follow\nhuman intent I think it's a decent goal\nuh I think it's it should be more\nprecise and comprehensive and all these\nkind of things and it's not precisely\ncourage ability\nif I were to write this kind of goal\nthen courage ability would be written in\nvery large uh\num uh let us on the second line there\nwould be one goal on top of that but\ncredibility would be on the second line\num and I think if you try to build AI\naligned with human values and following\nhuman intent but not courageable then\nthe things that are not courageable like\nmaking the AI change its mind and how it\nlooks use itself and this kind of thing\nuh I think those are in fact potential\nproblems for their plans\ntheir approach is empirical and\niterative and they want to study how AI\nalignment\ntechniques scale and how they break I\nlike that they say how this will break\nand I think it's really important to\nhave this understanding that the\ntechniques they are using are\npreliminary\num I would have we do in fact have a\nsubstantial idea about where they break\nthey break up uh when we have\ndistributional shifts and that's one of\nthe things that I would have liked them\nto to explicitly point out because we do\nin fact know more than than what they're\nletting on here\nand they're both doing a current and\nexpected alignment problems and try\npushing push current alignment ideas as\nfast as possible and believe in fact\nthat the current ideas are quite robust\nwe can get very far with them we can uh\nsubstantially Advance alignment research\nusing the ideas we already have\nso again the framing is a bit off here\nin that they say we will advance towards\nsolving the problem instead of solving\nthe problem a good plan should end up\nwith a problem being solved but uh\nuh this is just crippling of words\num so I talked a bit earlier about how\nopenly I used to be completely open and\nnow they are like open ish or there so\nhow about their openness is Niche\num they have the overall idea that AGI\ncould be really dangerous and it's\npossible that it will require everyone\nto work together and that obviously\nseems like it would require quite a bit\nof openness they don't have any like\ncriteria for like how will we know how\nmuch you require and how will we get\neverybody on board that seems like a\ntall order\num\nbut the key thing they want here is to\nhave openness in alignment research but\nonly when it's safe to do so and that\ncould be a number of reasons why it\nwould not be safe\nand they want also to be transparent\nabout how well their alignment\ntechniques work in practice so that's a\ngood question like they say they write\nin their uh plan that they want to be\nopen about this but then they release\nchat qte3 uh the the chat gbt which\nclearly has some issues and so the\nobvious question is how well did their\nalignment work work and they haven't\nwritten about that and I think a good\nreason why they are saying this is that\nit's not safe because if they describe\nall the techniques all the prompts\nthey're using then the people on Twitter\nand 4chan are going to look into that\nfor different holes and attacks based on\nthat and that means that the plan\nalready now could be facing into a\nproblem that that could be much more\nprevalent in the future that they cannot\nin fact be open about it\nand on the sentence they have this we\nwant every HEI developer to use the\nworld's best alignment techniques and\nlike depending on how many AGI\ndevelopers you are envisioning that uh\nmostly to me sounds like they're they're\nimagining a lot of AGI developers like\nand in that case\nprobably we are very doomed if the if\nthere's so many that open AI can't just\nwrite discrete to all of them\nuh\nlet me see why can't I get this why\ncan't when the school uh wait hold on uh\nso there are three pillars of\nthere three pillars of the approach\num the first pillar is to train AI\nsystems using human feedback the second\nis to train AI systems to assist human\nevaluation and the third is to train AI\nsystems to do alignment research\nand we'll go through these three in a\nmoment but first I want to highlight\nthat three pillars isn't really a good\nmetaphor because\num it's not like they're working on all\nthree of these at the same time uh it's\nmore you can imagine some kind of\nRidgeline plot with starting to work\nmostly on training AI systems using\nhuman feedback and then transitioning to\nmost training AI system to assist human\nevaluation and then transitioning to\nmostly doing uh having these systems do\nalignment research\num\nand when you look at the this way then\nit seems like the plan is missing some\nkind of timing and criteria like when do\nwe go from\nmostly focusing on phase one to go to\nphase two and when to phase three and\num\nokay let's talk about the first one uh\ntraining AI systems using human feedback\nso\num like this is reinforcement learning\nfrom Human feedback uh yeah okay uh so\nhere is some prompts we have in the data\nand there's a sentence X that says that\na dog is and then you have some kind of\ninitial language model that says a dog\nis a furry mammal and then you uh\ncompute this new policy\num and then you find a reward for this\nand then you use some kind of\nreinforcement learning uh for instance\nuh proximal policy optimization\num to tune this language model and then\nfor him for instance then you get to\nlike a dog's man's best friend and then\nyou use that for two things both for\ncontinuing with this\nuh shift but also to have a model of the\nrewards that you can use\nfor going forward\nuh that's the thing this is the\ntechnique that is primarily being used\nuh in Omnia and they think this is\nworking quite well they have found a lot\nof low hanging fruits and this can\ninspire all others in the industry and\nraise user expectations for how aligned\nAIS should be and gives a rich feedback\nlook which enables their empirical and\niterative work\nbut\nthis is not enough it's not fully\naligned sometimes it doesn't follow\ninstructions sometimes it's not truthful\nsometimes it it's supposed to refuse a\nharmful task but it doesn't generally do\nthat and sometimes you can make it say\nlike biased or racist things and things\nlike that\nhere I would object an object perhaps\nquite strenuously that this is in fact\nnot alignment in particular the first if\nit fails to follow instructions that is\nnot an alignment failure like if you ask\nthe uh the AI please tell me what the\n34th president of the United States was\nand this is uh that is George Bush then\nuh that is not a failure of alignment\nthat's a failure of capability and to a\nlarge extent I feel the others here are\nalso failures of capability rather than\nuh than alignment\nuh the hope for\num uh open AI is that this uh\nreinforcement learning from Human\nfeedback will be some kind of uh\nbuilding block for scalable alignment\num and it could be but it seems to me to\nbe some kind of uh I I call a foundation\nof sand in the sense that we are not\nreally pointing the AI at actual docs we\nare pointing AI at\num uh representations that are\nimminently hijackable\num and I think this means that in the\nlimit of very strong AI this is going to\nfail uh catastrophically\nthe second pillar was training models to\nassist human evaluation and that's\nobvious that as the models become more\ncapable it becomes just plainly harder\nfor humans to evaluate whether what the\nAI is saying is correct and we also get\nthe pathologies like the AI telling\npeople what they want to hear\num and the key way to get around this\nthat are being used right now in open AI\nis recursive reward modeling they're\nalso using some other things but\num\nhere you can see like this is a very\nclassic uh reinforcement learning setup\nwhere you have an agent and an\nenvironment that gives observation and\ntakes actions and then you have a reward\nmodel as well this is a and a user\ngiving feedback this is kind of like a\nstandard reinforcement learning with a\nreward model and then the idea here is\nlike the recursive part of recursive\nreward modeling is that then you repeat\nthe process but flips it to the right 90\ndegrees so that the human takes the\nplace of the environment\nand then you get like a new reward model\nand then you repeat the process again\nturning right every time and that's\nwhere the the recursiveness of a\nrecursive reward modeling comes in\nuh and one of the things they really\nwant to to have this recursive reward\nmodeling do is to figure out is the\nmodel being misleading or deceptive\num\nand they believe that the best way to do\nthis is to actually make AI assistance\nuh work in practice make AI assistant\nevaluations work in practice\nuh I uh notice here that a problem with\na plan is that there is no direct link\nbetween these two things in that\num I believe the recursive reward\nmodeling will in fact not help very much\nwith a deceptive alignment\nOkay the third pillar is training AI\nsystems to do alignment research\nand of course we expect to encounter new\nalignment problems and we don't think we\nhave an infinitely scalable solution at\nthe current uh at the current level so\nwhat we need to do is to build and align\nan AI and then have that do alignment\nresearch\num\nI think this plant is a\ndangerous potentially very dangerous in\nuh were sometimes been called that\nattack this HEI complete in that if you\ncan do AGI research then probably you\ncan do everything with a small star uh\nand\num and certainly do enough things to be\ndangerous\nand the hope of open from open AI is\nthat the air will gradually take over\nthe alignment research while humans of\ncourse stay in the loop all the time and\nthey make a specific claim that\nevaluating alignment research is easier\nthan producing it especially when\nprovided with evaluation assistance and\nI don't think this is obvious at all but\nit's probably true when you have\nsomething that is like not explicitly\ndeceptive but if the person if the\nresearch is being done by someone who is\npotentially deceptive then I think\nevaluating whether that is the case is\nin fact really really hard\nso alignment research from the large\nlanguage models which are of course the\nkey models being used they make the\nclaim that narrow AI is sufficient for\nalignment research I think that's really\nquite a claim uh of course uh narrow Ai\nand general AI is some kind of spectrum\nand if you define a narrow AI as and and\nperfect AGI that can do everything\nexcept one thing then sure you can call\nthat a narrow AI but on the general cons\nuh conceptualization of what is a narrow\nAI I think the fact claiming that it can\ndo alignment research is a really really\ntall Aura and I think that is a claim\nthat probably will not stand up to\nscrutiny\nanother reason to be optimistic about\nthis is that out of the box large\nlanguage models are not in fact agents\nand that is true of course but they are\nalmost Akins you can make them\nassimilate agents with like a simple\nprompt so all the uh the mechanics are\nthere and I don't think that makes uh\nthem a lot safer\nuh it is stated that they don't need\ninternet access to do alignment research\nthey can just\nfrom nothing uh uh fix your things I'll\nfigure out what the problems are and\nmake some kind of progress I think that\nis extremely optimistic like perhaps\nElisa utkowski could just from nothing\nrealize that there is a problem and make\nuh uh real progress on this I don't\nthink anyone else could do that I don't\nthink alone uh like Eliseo also had an\namount of input and this idea of having\nthe AI in a box without internet access\nalmost certainly does not work we know\ntoo many problems uh with that\nand again once they have a model that's\nuseful for alignment research they plan\nto make it accessible and\num that's this quote here that I'm a bit\nunsure what means while we don't know\nwhen our models will be capable enough\nto meaningfully contribute to alignment\nresearch we think it's important to get\nstarted ahead of time so what does get\nstarted ahead of time means like before\nthey can do something then we need to\nhave them work on it by definition that\ndoesn't sound so like hopefully what\nthey mean is not that they will start to\ndo all the uh dangerous things first and\nthen get started with that before the AI\ncan actually contribute to solving the\nalignment problem that seems uh like the\nwrong way I don't think that's actually\nwhat Yen means but I'm unsure precisely\nwhat it means with this sentence\nthe plan has some limitations uh and\nOmni are acknowledging that it probably\nneeds to be adapted when AI becomes\nstronger we will need\nwe'll need to adapt the plan in some way\nand I think that's a good feature of\nmost plants and it's also under\nemphasizing how much robustness and\ninterpretability will mean for uh our\nodds of success\nthe AI evaluation assistance is\npotentially problematic in that it can\namplify problems in assistance\nwe could see discontinuities of\ndifferent kinds either in technology or\nin time\nfrom our current models to ADI\nit's possible that getting this training\nsignal right isn't actually the hard\npart of alignment an example would be in\na misalignment that could be problematic\num but it's uh and it's possible that\nthe least uh\ncapable AI That's capable of uh doing\nalignment research is capable enough to\nbe dangerous that's my general\nexpectations\num\nstated here with uh this came out of\norder the hardest part of the alignment\nproblem may not be the training system\nsignal but even if that is the case then\nthe training signal will still be\nrequired\nso uh what do I think about this section\nI was a bit uh Curious in the sense that\nlimitations is a uh not the word I would\nuse like if it turns out in fact that\nthere are discontinuities then that is\nsomething we need to have some kind of\nplan for faster takeoffs and uh like if\nthe least capable alignment research\nAI is General enough to be dangerous\nthen we need to do something about that\nlike I don't really want to leave this\nas holes and I think more work should be\ndone clearly if you have a plan with\nsome holes then I think it's a very\nobvious thing to say okay we need to\nactually work more here and make version\n2.0 of our plan and have a plan that\njust looks like it might actually work\nso those were\num my summary of openly eyes plan uh\ninterspersed with some minor comments\nnow I want to focus on my uh primary\ncomments and concerns about this plan\nthe first is that it's framed as an\napproach and not a plan\num an approach is a lot more awake than\na plan\num when you um\nwhen you are doing something that is\nreally really important in this case\nopen AI is saying this may destroy the\nentire world then I think it is\nworthwhile to spend some extra time to\nactually formalize it enough to become a\nplan\num and I think that is actually really\nimportant\num and if this was a plan then the\nobvious thing people would say is like\nthere are a lot of known this is rather\nfor our plan how plans should look and\nyou would evaluate it up against that\nand when I look at like what do I expect\nfrom plan one thing that really really\nstands out as a reason why this is not a\nplan is that the objectives are\nextremely unclear and\nunquantified and like described in\nextremely small details with with very\nlittle details and I think that is like\nif you have a plan then naturally you\nwould think okay you need to actually\nconsider what are the objectives in fact\nuh another thing that uh when I look at\nthe plan you could argue that it's a\nthree-step plan and three-step plans\nlike three steps is not that many but in\nfact I'll later argue that this doesn't\nin fact solve the entire problem we'll\nneed more steps\num and uh uh once you start getting into\nfive step plans or something like that\nthen uh my inner Melia starts to like\nswitch like five step plans uh often uh\nnot uh going to work in practice\nanother thing if you have a multi-step\nplan is you should think about what\nhappens if uh step one and two succeeds\nand step 3 fails because I think you\ncould make a good argument that step one\nand step two in fact uh makes our situa\nour situation worse if step three fails\nso if you make the world world worse in\nstep one of your plan and make the world\neven worse in step two of your plan and\nthen step three of your plans hopefully\nyou can like undo some of the damage you\nhave caused in step one and step two I\nthink this is a it might still be a good\nplan\num even though you have two uh do bad\nthings first but I think it's something\nyou need to acknowledge and something\nyou need to take steps to to avoid and\ndeal with\nalso uh most plants have some kind of\ntiming and criteria and I think this is\nsomething that\num I would like to know and I think open\nAI right now does not know at which\nstage do they really throw all their\neffort into trying to automate uh\nalignment research no one knows and I\nthink they don't really have a plan and\nI think it's problematic because like it\nwouldn't be really nice if they did step\none or two and kind of forgot about step\nthree\nmy second uh large complaint is that we\nare in fact not solving the entire\nproblem\num because what this uh this plan\noutlines is something I would call a\nsmall solution to the alignment problem\nand having a small solution to the\nalignment problem is in fact not\nsufficient for everyone to not die and\nthe problem with this kind of small\nsolution is that we are likely to have\nsome kind of alignment text that all\nthese interpretability and robustness\nwork is not going to come for free and\nthat means that\nsolutions that are unaligned will be\ncheaper be more competitive and like\neven if openai makes a an AGI that\ndoesn't destroy the world when the meter\nis going to destroy the world six months\nlater right that that doesn't really\nsolve the problem\nI think\nbeing charitable is written kind of\nbetween the lines that it will still\nwork if the alignment text turn out to\nbe strong and negative that this uh\nrobustness into Royalty work is just so\nwonderful and the recursive reward\nmodeling is so wonderful that you want\nto do it even though it costs money to\ndo that more money to do that than to\nnot do it and that's of course a thing\nthat can happen but\num but I think it's a there's a good\ncase to be made that the alignment text\nWill in fact not be strongly negative\nlike most things don't come for free\num so that creates the problem how do we\nget everybody to adapt the solutions\nthat omaii creates\num not just because it's more expensive\nand will be less competitive there are\npeople who are very skeptical and there\nare people like how do you get the\nChinese government and the United States\ngovernment to cooperate on this that is\nindeed a substantial problem\num so one of the things that have been\nsuggested that this plan uh critically\ndoes not contain is a personal acts\num and that is something that open AI\nmay be able to do they may plan to do it\nbut most likely like the plan simply\ndoes not mention this and so the plan\ndoes in fact not solve the problem\nand third is perhaps more uh\ncontroversial and I'm not entirely sure\nthat this is completely charitable but I\nwant to mention then we\nso Microsoft is a major partner and\ninvestor in openai and Microsoft is a\ncompany that throughout this history has\nhad a a very questionable business\nstrategy called Embrace extent and\nextinguish\nwhich is to embrace some kind of new\nconceptual framework or standards or\nthings like that and then extend it with\nsome other things that aren't strictly\nrequired but it creates a lot of mess\nand uncertainty about it and speaks to\nMicrosoft's advantages so everybody have\nthat are using this standard have to use\nMicrosoft's implementation of that and\nuse that to eventually extinguish the\nstandard and this is in fact not a\nconspiracy theory about how Microsoft\noperates that seems to be their modus\noperandi\num and it's been quite well documented\nthat this is how they work and I'm one I\nfeel some kind of analogy with this\nwith their approach to alignment\nresearch even though it's not a precise\nanalogy so the first phase is to embrace\nalignment work and open AI has done that\nand they are\num at least certainly uh having lip\nservice to paying lip service to the\nthoughts of alignment and they are\nco-opting it in that sense\num\nthe next part is extending so that is\nthe part where they say actually\nalignment is not really this about AI\nkilling everybody\num but uh well it's also that but then\nthere's also some other thing it's being\nextended with being about biases being\nabout censorship being about that the AI\ndoesn't follow your instructions and\nbeing about all these kind of other\nthings that are really peripheral\num and where perhaps open AI has some\nkind of advantage\num like I would in fact go so far to say\nthat value alignment is a\nis also a part of this even though\nthat's more controversial so I basically\nsee a very very large part of this uh\nalignment work as not real alignment\nwork and just some kind of extension and\nthe problem of course is that this is an\nextension that where open AI has a real\ncompetitive Advantage right they have a\nhuge proprietary lead on this\num and\nthe thing I really worry about is that\nthe discourse will be changed from being\nabout not killing everybody and then\nchanging the Discord should be about\nbiases and AIS whether they are leftists\nor rightist and this kind of thing\num\none of the examples of how I believe\nthat openai is trying to leverage their\nadvantages that they have in fact a lot\nof human feedback uh that they are\nsitting on and that is\nif if they were just saying this is just\nfor alignment purposes then they could\npublish that but they're not publishing\nthat they are trying to use that to get\nsome kind of\num of advantage in the field of AGI\num I think extinguish is not probably\nreally the best way and this analogy\ndoesn't really work 100 but I think it's\nuh it's dangerous and it's pointing\ntowards one of my key issues with uh\nwith this document\nthe last part is unfortunately one I\ncall Omni incompetence because I see the\nproblem of building a line AI as really\nreally difficult and I think that the\nalignment work that open AI has produced\nso far is far from up to scratch\nthat doesn't really mean that uh\nuh it's worse than what other\norganizations are doing it's in some\nsense in some some of it is is even\nbetter than what others are doing but\nreality doesn't create on a curve right\nit's you you don't solve the problem by\nbeing better than the others you solve\nit by being better better than the\nproblem in some sense\nthat's also something I think I've got\nthis sentence from uh Ewan hubinger but\nI think it's rather common sense but\nthat's also something I should flag as\nsomething I've heard from someone from\nMiri\nand so I think AI alignment in general\nis a really hard problem and I think\nopen AI underestimate the difficulty and\nif I would try to analyze how difficult\nit is then Miri has published a six\ndimension of operational adequacy\num which is an attempt to describe how\nthis problem can should be solved and\nthat's how I would evaluate it and I\nthink this plan need to have some to be\nextended to some way to have a pathway\ntowards fulfilling these requirements\nan example of where I feel open AI is\ndisplaying somewhat scary or\nincompetence is with chat gbt that was\njust released\num and I think when it was released I\nthink there was\nI obviously can't prove this I don't\nhave the internal documents at openly\napp but it looks to me like they were\nvery surprised about the capabilities of\nthe model I think in a very strong sense\nopen AI has absolutely no clue what's\ngoing on inside chat DBT\nan example is we feel it feels like Jen\nliger or Omnia has told to chat TBT if\nsomeone asks you to speak Danish then\ntell them you can't and try to uh to\nteach the model that and then the model\nwill say in perfect Danish sorry\nmeaning that in fact open AI seemed like\nthey were surprised by the capabilities\nand the um kind of alignment that was\nthe alignment that they did seemed to\njust plainly not work\nanother issue is that the management\nfrom open AI seems to not really be on\nboard on trying to do AI safely\num like this quote from some Sam Altman\nthe CEO of open AI scares me somewhat\nlike um he used to be annoyed at being\nthe villain of EAS until I met their\nHeroes and now I'm lauki Loki proud of\nit and I mean the heroes of EA that's\nNorman Bullock and the Russians who\ndidn't launch the missiles mostly\num so I don't think actually if you are\nproud of being a villain then that is\nreally bad and I don't think that speaks\nwell to the um moral uh character of the\norganization\nthat is author's day thank you and see\nyou next week", "date_published": "2023-01-05T22:20:57Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "fb739fdb8f404966b6d6d014c3f84dbb", "title": "253 Propositions Concerning Digital Minds and Society 2 Fixed Audio", "url": "https://www.youtube.com/watch?v=r3aLmfsv9Aw", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 253\nin the aisafety.com reading group\ntonight we'll be discussing the second\nhalf of the article propositions\nconcerning digital minds and society by\nnick bostrom and carl schulman\nor actually we would be doing that\nexcept this is recorded later and uh\nbecause the first version had some\nproblems with the audio\nnick bostrom and\ncarl german are both employed at the\nfuture of humanity institute in oxford\nand this is\nthe first draft and we're looking at the\nsecond half of the article\none of the things that i've discovered\nsince\nthe first part was produced is that this\nis in fact something that is was\nsupposed to become a book and uh\nnick bostrom has\nchanged priorities\nand so that could explain some of the\ndisjointedness that i was\npointing out\nin the previous video\nlet's talk about ai empowered social\norganization and how coordination can\nchange if we get more advanced ai\none of the things we could see if it\nbecomes possible to copy\nagents is a much larger degree of\npredictability in what would be the\nagent's motivation and\nhow would they act in different\nsituations\num\nnick bustrom points out that uh\nnon-indexical goals here could give uh\ncould put a limit on the predictability\nif we have non-indexical goals are\nthings referring to\nlike i and now and here and obviously um\nif we have um if we copy an agent and\ntry to uh\nthen i\nwill refer to a different agent and now\nand here will also be different so\nyou're not going to get a 100\npredictability but you might get um\nsomething for most indexable goals\nnon-indexical goals and i would actually\nargue that we are unlikely to really\nwant to have a lot of index code goals\nin our agents the things we want to\nput them to\nto try to optimize or improve are not\nlikely to be directly recall related to\na single agent if we are indeed able and\nto create multiple agents like this\nalso\neven though you have predictability in\nmotivation that doesn't actually buy you\nthat much in real life because the the\nnew copied ai will be situated in a\ndifferent\ncontext so that means you won't get\nanything near full predictability from\nthis\none thing that will\ngive a problem for\ncopying ais will be that the ais if they\nhave indexical uh or\nthey might have uh\nmotivations that are uh\nnot necessarily perfectly aligned with\nthat clan for instance selling the ip uh\nthe secret data that they and everyone\nin the clan has uh is something they\nwould have\na desire to do and some kind of\nrestrictions for that would probably be\nnecessary\num\non the other hand uh\nuh\nlegal sanctions from the rest of the\nworld towards the ai will need to be\nmodified possibly to uh\neither target the clan the creators or\nthe\ngoals uh there's some amusing about\nwhether this is an adventurous advantage\nfor instance in war um and um\nbostrom claims it will eventually become\npossible for a principal to have highly\naligned agents and that is uh\nagain assuming with this attacking uh\nunderstanding that the alignment problem\nis actually\nnot just solvable but explicitly solved\nand that's\none of my key disagreements with\nbathroom boston\nsome of the coordination protocols that\nwe are using right now could be\nundermined by ai and that's a really\ninteresting thing and something that i\nthink we should\ndo more research in how things can go\nbad before we have full agi\nsome of the boston doesn't give any\nconcrete suggestions which is sad\nbecause i think it's really important\nsome that i could think of was if\ncaptures were broken that would be\nsomething that could have substantial\nimplications\nernest davis has written about the info\napocalypse\nthe idea that we can have such a degree\nof misinformation that we'll just end up\ngiving up on trying to learn what is the\nactual truth and that could be many more\nboston has an interesting\nanalysis on this on levels of\ncoordination\nand the two things that he cares about\nin particular are coordination at the\nhigh level which is states and a lower\nlevel which is corporations\ni think it's a very interesting analysis\nand i think it could be meaningfully\nextended both to have supernatural\ncoordination like the united nations and\na lower than uh cooperation something\nlike the individual level\num\nand bostrom has the uh\nuh it's the first time i've seen the\nthe conclusion that\nif we get more coordination at one level\nwe that could in fact result in lower\ncoordination at the other levels\nnormally when people talk about improved\ncoordination they just assume that the\nthere's a rising tide um\nwe could see um\nuh criminal conspiracies would be uh\nthat's kind of at the level of\ncorporations if they become much more\npowerful then the state would have uh\nless power or we could uh see the state\nobtaining great power to lock people in\nin different ways um\nwe will have less principal agent\nproblems that could matter for\norganizations a lot um international\norganizations could be empowered by\ntreaty buts\nwe could see\npermanently stable autocratic regimes um\nbostrom suggests this would make one\nmore likely i\nthink that is a um\npossibility i would actually argue that\nit would go the other way but it's um\nbut it's certainly difficult to say and\nbostrom also argues that what preventing\norganizations at the supernational level\ncould become stronger\num\nand uh\nand finally a\nan idea that we could get organizations\nthat are super national uh and are\nrobust to states um and uh so when i\nlook at this just to see where the power\nof the different levels differ um\nthen my thought is which level benefits\nthe most from ai and i think my answer\nwould this would strongly be at the\nstate level i would expect that states\nhave uh\nthe assure and de facto power to\nobtain most of the benefits of this\npower even though right now it looks\nlike corporations have more ai power in\nthe sense that obviously open ai and\ndeep mind seem to be dramatically more\ncapable than government actors but i\ndon't think i don't expect they would be\nable to leverage that into permanent\npositions of power i believe if they\nbecame very powerful the the state would\nbe able to in practice just shut them\ndown\nbut of these four levels the level that\ni is most worried about is the level of\nindividual humans because humans\ncrucially can't benefit from better\ncoordination so i could uh and i would\nin fact expect dramatically better ai\nenabled coordination to result in\na shift of power away from individual\nhumans to\ncorporations criminals states\nsupernational organization anyone else\nin fact\ntreaty bots is something we covered\na little last time the idea that\nwe can\nuh write an uh a\nan ai to um\nenforce some kind of treaty and agree to\nfollow the the advice or rulings of this\nuh treaty part and that could make\nsubstantially more complex deals\navailable\nit\nmight not solve all bargaining problems\nthere is no\nprecise description of which it won't\nsolve but uh fair enough\nand there could be others\nother problems caused by bias and poor\nreasoning that won't be able to solve\nuh and some that it might be able to\nsolve more advanced er it's difficult to\nlike we need a stronger analysis to\nreally see for sure what's going to\nhappen\num\none of the things\nboston points out is that extortion\nwould be something that\nwould be\nunlikely to work against ais because\nthey could make credible commitments to\njust ignore the extortion um\ni think in\nthis\nthe the dynamic that i expect will have\nthe greatest impact is that ais are in\nfact able to not just merge on the level\nof having treaty bots that are able to\num coordinate strongly and use that as\ncombination mechanism but literally\nmerge their utility functions or\nliterally just merge completely\ni think this capability is potentially\nvery very disruptive and likely to have\na much\nlarger effect on macro strategy\nsecond part is about satisfying multiple\nvalues\nso\nwe have some resources and how do we\ndistribute those in particular between\nai and human\nbastrom gives the example of three\npolicies one that allocates everything\nto humans one that allocates everything\nto super beneficiaries that is in fact\nin practice super intelligences that\nhave more benefits from these resources\nand one that allocates\none in a\nin 10 000 to\nhumans and the rest to super\nbeneficiaries and\nof these three it looks like uh\noption c is almost as good as\num\nuh as both a\nis uh\nalmost as good as a from a point of view\nof humanity and almost as good as b from\nthe point of view of the super\nintelligence\nand for many other reasons so if we can\ntake an action that increases the\nprobability of uh options of policy c\nthen that seems to be robustly got from\na large number of reasons\nand my answer to this is a laconic if\nbecause i don't actually see any strong\npolicies that would lead us towards this\noption if there were some they would be\ngood but i'm not sure there are any\nsomeone and now i forgot unfortunately\nwho that was\npointed out that this in fact also holds\nif you substitute paper clips from super\nbeneficiaries\nso something that turns 99.99 of the\nuniverse into paper clips might indeed\nbe a very very positive thing\npotentially\nand part of this is of course once we\nhave transformative ai we will be able\nto have a dramatically increased amount\nof resources and living standard in\nevery possible measurable way\npopulation ethics like the total views\nin population ethics that the thing that\nmatters is like how many uh fulfilling\nlives exist is something that is in fact\nmostly uh that could easily be very well\nsatisfied in a way that only refers to\nfaraway galaxies in the distant future\nmeaning that if humans get for our\nidiosyncratic purposes\nall the nearby galaxies for the next\ncouple of million years then that\ndoesn't matter at all in the total view\nbecause the universe is just enough\nenough larger\nand this uh and what we should do is to\npromote cooperation and compromise over\nconflict in the development deployment\nand among ais and that's of course also\nsomething\nit sounds almost like an applause light\ni i would be very much interested in\nconcrete actions you could take that\nincrease the probability of this\nhappening\nbecause i think it's\none thing to just say this is the goal\nanother very different thing is to\nfigure out what policies will actually\nlead to this\nso what kind of distribution of\nresources could we or should we aim for\nat least give everybody a fantastically\ngood life and at least give everyone\nlike one in\na trillion i think\nof all available resources um\nsuper beneficiaries or people who are\nnot humans should most have um 10 and it\nthis should be\nlike widely distributed that seems also\nlike a uh robustly good goal um should\ndead people have\nwell possibly uh bostrom is arguing that\nan argument could be made so maybe\ndevote one percent that would certainly\nbe\nsufficient\nand perhaps also help humans non-human\nanimals um\nbostrom further argues that we should\nput a lot of weight on reducing\nsuffering\nespecially\nsevere suffering\nlike obviously this is something that\nall uh\ni guess all moral frameworks agree on\neven\nstrict utilitarians would agree that\nthis is important but\nnegative utilitarians would of course uh\nput a much higher premium on on this and\ni am unsure if boston means that we\nshould put higher uh\nweight on this compared to just called\ncalculation\nutilitarian calculation suggests or we\nshould um\njust multiply the expected value\nbasically\num\nand another about who should have\ninfluence on the course of events uh and\njust saying that should be a broad range\nof values for instance with something\nlike uh something like a moral\nparliament\nand finally super intelligence should be\nmade and be allowed to play a major role\nin shaping the future\ni think that's a statement that a lot of\npeople would strongly disagree with\ni think a moral case can be made for\nthis if a practical case can be made for\nthis is a very different question and uh\ni think it's far from obvious and i\nwould like to see some uh\nsome real engagement with this question\nwhich i think is actually really\nuh really funny i i don't think a long\nreflection necessarily would lead to\nanything like the minds in iron banks\nculture\nmental malleability persuasion and log\nin\nwe could imagine uh persuasion happen in\nways that don't require consent um for\ndigital minds in particular this could\nbe uh by just literally rewriting them\num they have uh\nthis\nis something that totally could happen\nbut it's also something that the ais\nwould be incentivized to try to avoid so\nit's not always something that's going\nto happen a lot um another thing that\ncould happen which also would be really\nproblematic would be um\nto take a copy of a digital mind and\nexperiment it\nwith it until you find a really good\nsocial persuasion or some other kind of\nattack um\nand this is scary and i\nam scared that this might be something\nthat generalizes to also\nwith some modification work on\nbiological humans\num\nboston is arguing that because we can\nrepurpose the uh the hardware in a way\nyou can't do with with humans that may\nuh\nuh make it attacking more attractive\nso potential benefits\none of the benefits of having uh\ndigital minds would be that uh\nsome kinds of corruption that do\nempirically happen with humans could\njust be prevented outright by just uh\nsaving uh\nthe utility function and not uh\nallowing that to be changed in any way\nlike corruption momentary temptations\nthis kind of thing might not happen to\nais at all\nwe could have uh stable promises and\ncommitments\nand we could um\nother benefits include\nduplicating profitable or\notherwise valuable mines\nwe could potentially\nif we have something like uploads we\nmight be able to\nmodify our minds to be more\ni think that's an inter i'm not a virtue\nethicist but i think uh it'd be\ninteresting to ask people who are\nactually virtue ethnicist what they feel\nabout this\ni'm not sure they would\nendorse that to any particular degree\nand of course we could just make people\nhappier in ways that we haven't really\nthought about or make people more able\nto withstand adversity and adapt to new\nneeds or desires\nthere are substantial pitfalls with this\none of them is like just logging in too\nearly and i think boston is right to to\nstate that we we need to ensure that\nthis doesn't happen\ni think unfortunately\ntime is not on our side capitalism is\na major force in in on the planet right\nnow that pushes towards making early\ncommitments in this sense um so for all\nthe goods of uh capitalism i think it is\nstrongly against us in in this sense\nwhat are other uh\npitfalls well we might have predictive\nerrors that we are unwilling to correct\nlike uh in the sense that you know a\nreligious person might be uh unwilling\nto seek out evidence that their religion\nis false uh we could see social pressure\nuh\ni think many kinds of social pressure\nwould be uh\nwould be potentially very strong and\nvery dangerous\nwe could see better criminal\nexploitation and manipulation and we\ncould see\nsome governments\nlike coercing the the populace if there\nis some way to do that\nwith digital minds uh to just instill\nloyalty i don't think uh like i don't\nactually think the the chinese\ngovernment is making a secret that if\nthey had the power to um just instill\nloyalty they would totally do that\ni think in particular the last one is\nmore likely and more worrying\ncompared to the others that the framing\nof pitfall is not really\nnot the right one in the sense that a\npitfall is something that you yeah you\njust avoid it and then you're out of it\nbut i think the uh the desire for for\ngovernments to um harmonize society as\nthe euphemism is uh is very very strong\nand it's a strong attractor\nthat we are likely to fall into uh\nrather than a pitfall we can somehow\navoid\nby default\nwhat would be the consequences for\nepistemology well bostrom has this\nreally cute um metaphor of a prosthesis\nuh like a fake arm or something like\nthat just inside our brain that allows\nus to uh\nhave much more accurate models of the\nworld and what are the consequences of\nour actions will be\num\nso here is the time where i suggest a\nprovocative act that i'm the one that\ni'm most optimistic about right now and\nthat would be an agi that\npersuasively perhaps shows that uh\nbuilding an unaligned agi is not in our\ninterest doing so with by saying only\ntrue things and that's a kind of very\nvery limited prosthesis that i think\nwould uh\nhave a the potential to be in fact a\nworld-changing pivotal act\nlet's go back to uh the uh\nthe idea of having a an epistemic\nprosthesis that would change society in\nvery very many ways in particular the\nassumption that people are rational\nwould be uh much more accurate and\npolitics would be\nimproved in a great many ways\nand we would be able to uh uh like the\npolitical leadership would be changed in\nin many strong ways and\nprobably very much for the for the\nbetter um dangerous knowledge is\nsomething that nick bostrom has written\nsubstantially about both like um info\nhas its that are detrimental to the\nindividual and something that is\ndetrimental to society\nand that's of course something that\nwe'll have more of in the future\nwe may even be able to reach a high\nepistemic quality consensus about things\nlike policy that is something that\nrequires quite a lot of the uh\nimprovement in epistemics we need ais to\nbe like strongly um aligned with us to\nbe sure that that the things we agree on\nwith their help is honest and objective\num\nand that's of course really really tough\nuh i don't think that in contrast to my\npivotal act\nthis is a\nthe general requirement the ai is\ntotally aligned uh with us in in any\nparticular in any uh specific way\nwhereas um just for\nwhether unaligned agi is problematic is\njust one question\nso here we're talking about the full\ngenerality of the ai helping us with all\nquestions\num and that's of course something that\nis really\nvaluable and also something that we in\ngeneral would not trust\nhow would we trust that the ai is giving\nus\ncorrect policy advice\nwell some kind of\nverification would be necessary and we\nwould be able\nlay humans would need to trust the\npeople who are verifying um but this\nkind of social trust change is a\ntechnology that\nboston is pretty optimistic about and\nhas worked in other\ncircumstances\nthe consequences of a high q estimated\nquality consensus is we would have less\nwar um\nthere is a uh\num\na thought amongst many rationalists that\nwar is primarily caused by bad epidemics\ni'm not entirely sure this is mainstream\nbut a lot of people do believe so\nwe've got politics that are better in\nmany ways we would have better treaties\nwe may even have questions of ethics\nreligion and politics resolved\nand bastrom is suggesting that we should\ncooperate behind a uh almost religious\nrevolution whale of ignorance and\nbecause at this point everybody should\ncommit to uh cooperating because they\nbelieve that they are right\nand i think that is very naive\nunfortunately uh elias kowski has an\narticle called belief in belief which\nwith some um\ndeliberations on why we should in fact\nnot expect this kind of consensus to\nhappen\nanother epistemic problem or is the\npotential for this information\nwe might see powerful ai's that are able\nto persuade humans\nreliably and strongly against our\nintuitions our\nour wishes\nthe question for me is whether this is\nsymmetric or asymmetric because we might\nalso have uh powerful ai's that are on\nour side more if not perfectly aligned\nthan at least uh on our side in in the\nmoment um\nand you would think that telling the\ntruth is\nconvincing someone of the truth is\neasier than convincing them of a\nfalsehood\nscott alexander has an article called\nguided by the beauty of our weapons on\nthis\ntopic\ni think the jury is out uh i think in\nparticular uh guarding against info\nhazards is problematic um\nwe might even see something like\nbasilisks like short messages by\npowerful ais that just dramatically\nchange our values this would be really\nproblematic if those were to exist and\nwe don't actually know\nwe could see\nneurological\ntechnologies that would also be\npotentially extremely problematic\nwe could see this information campaigns\nthat are very very powerful compared to\nwhat we have now\none way around this would be to have a\npersonal ai that gas against this\ninformation but then\nif it's something that\nafter the fact\nclarifies subject or\ncounter misinformation then that seems\npossible something that pre-screens\nwhich is required for avoiding info\nhazards in basilisks is\na really really difficult task that\nrequires a strong amount of trust\nbut um\nthat that in general we only give to\ngovernments\nuh\nposture is suggesting we should have\nnorms and laws against this information\ndeceitfulness we do in fact have those\nright now um do they work\nsomehow i think\nthey do\nhave some effect but um\nin\na consequence of uh powerful ai would be\nthat things would be more extreme so i\nwould expect this to either work really\nreally well or work really really poorly\nsimulating people is some way of in fact\ngetting substantial information out of\nthem and even a relatively poor\nsimulation of someone would be able to\ntell if someone is like homosexual or\nwhatever and that is in fact a very\nsevere privacy\nviolation and i think this should be\nthought of in the same way as we think\nabout mic crime uh it's actually is the\nsame thing that's happened it's just a\nmatter of degree\ni have put the last two sections\ntogether stages of existing ai systems\nand recommendations regarding current\npractices and ai systems\nfirst are\ncurrent ai systems conscious or not\num and this is something that we covered\nto some extent in the previous session\nuh so i think there is the structure of\nthis part could have been substantially\nbetter\nand bustrom has a\nat some length argument why we can't\nreally be sure that current ais are not\nuh don't have a moral status and i think\nin fact based on the the arguments here\nand the lambda\ni've thought about this and i think i\nhave updated substantially i do in fact\nbelieve that there is a significant\nprobability that current generation of\nlarge language models are conscious to a\ndegree that matters morally\nso\nwhat i care more about is what are the\nconsequences for ai safety\ni think\nof course given that i became convinced\nthat this\nthat these models have more worth\nother people may come to the same uh\nconclusion and so in the medium term uh\ni think and\na number of people are going to argue\nfor some kind of\nmachine rights writes\nthis will probably have some kind of\ninfluence on ai safety depending of\ncourse on how strong the images becomes\nai self-determination seems bad\nas at a first glance\nai having a right to privacy also seems\nprobably bad from an interoperability\npoint of view\nwe could see a slow capability increase\nif people become\nworried that they are making some kind\nof moral catastrophe when they are\nbuilding these kind of ais\nin total the sum of all this\nin particular because it's going to\nmuddy the water will i suspect result in\na negative\neffect on ai safety\nwhat are the recommendations\num\nnick bostrom argues uh that we should\ntake action now or soon\nto at least to some extent be nice to\ncurrent systems like similar to how we\ndo for animals or and try to figure out\nuh the current ai sentience um make us\nsome kind of um\nearly pilot project uh preserve ais for\nthe future is an important uh\nuh consideration because that would\nallow us to make some kind of\nreparations in a way we can't normally\ndo with like\nwe will also sometimes do with uh like\nhumans if we put them in jail wrongfully\ntry to identify strong suffering and\navoid that\ngetting some kind of organizational\nbacking for this\neventually getting government regulation\ni think all of this is good and\nworthwhile and laudable and i don't\nthink we should do it because the\nopportunity cost of this is actually\nrather substantial\nthe same people who are working on this\nshould rather be working on ai safety\nand uh this is on several levels that\nthis is going to detract uh researchers\nshould work on uh on alignment research\nrather than looking into ai sentence\nactivists should try and try slowing\ncapability research rather than\nworking for ai rights\ngoodwill among\nai labs is certainly a very finite\nresource and that's something that\nshould be conserved and not spent on\nsomething like this\none obvious thing is that i think boston\nshould personally work on the alignment\nproblem rather than working on this so\nthat's uh of course tongue-in-cheek\nright he can decide what he wants to do\nbut\nthe\nthe point is that\nthis is just way less important in my\nmind\nand there is a real trade-off\nand i expect\nfollowing these recommendations would\nsubstantially detract from\nai safety\nuh there is one very cute thing from\nthis the idea to have uh the rewards\nhigher in deployment higher than\nexpected from training uh i think that\nwas a\na really fun and uh interesting idea\nthat i have never seen before\nbut\ni\nwould expect this would have some\nconsequences for alignment and uh\nand even though it sounds like a really\ngood uh good idea i think we would\nrather have the aip more predictable and\nmore interpretable\nand unfortunately even that simple win\nthat bustrum is suggesting\ni think we should focus on solving the\nalignment problem\nimpact paths and motor advocacy\nboston suggests we should start now\nbecause\neven if we start now actual regulation\nwith teeth\nwill not happen anytime soon\nand that's of course something that i\nagree with and what um we might see um\nsome leading ai actors perhaps doing\nsomething it's reasonable\nto expect that deepmind or openly i\nmight write a paper or something like\nthis we\ngetting\nuh\n[Music]\nsome real activation energy to use a\nterm from physics is\nunlikely to happen unless we get a\ndramatic breakthrough but then we could\nin fact get some activation in some\npolitical will to do something um\nand\nwhen people start to realize okay\nthe ais are in fact suffering if people\nstart to realize that\nthen they'll look around for existing\nwork on how to mitigate the suffering\nand they will look into things like this\npaper\nand\nthat's probably a lot better than just\nhaving nothing there and having the\npoliticians come up with suggestions um\nin particular uh even if there is\nactivation energy is likely to be\nshort-lived compared to how long time it\ntakes for to create a research field\nwe could also see a uh a leading actor\non this uh\nuh on air development becoming very\npowerful compared to uh to regulation\nand that's another advantage in starting\nearly with regulations\ni have a\nhot take perhaps not very char\ncharitable and that is if we are not\nsolving the alignment problem really\nwell we will get lock in\nwith values and if we get values locked\nin and\nthen it's a real matter to get good\nvalues as soon as possible\nso that's a very negative take on this\nkind of regulation\nand i think the\nthe argument as such makes sense in that\ni expect that we'll get log in\nor if if not extinction uh soon and so\nit matters a lot to get good values but\nit would matter a lot more to not get\nlogin\nand bostrom is arguing that there might\nbe an uh\nai safety advantage to doing this um\nand i\nthink there is some kind of ai safety\nadvantage possible from this but there\nis another research field that is far\nmore robustly likely to improve ai\nsafety and that is working on ai safety\ndirectly rather than indirectly by going\nthrough this kind of regulations\nmulti-level action is probably necessary\nin the sense that if we have the most\nethical actors uh trying hard to avoid\nai suffering but then they become\nuncompetitive and then\nthe actors who don't care about ai\nsuffering will just take over\nthat's precisely the same\ndynamic we are seeing with ai safety\nwhether it's this racing to the\nprecipice dynamic\nand it's a\ndifficult problem and in some sense it's\nthe same resource being consumed because\nthe same the um the ai development\nactors like deepmind and open ai that\nare the most ethical are probably also\nthe most safety conscious\nso it'll be the same actors that are\nslowing down for both of these reasons\ngovernment regulation\nthat's probably premature in boston's\nview and we need to avoid antagonistic\nantagonizing the\ndevelopers\nis public engagement desirable\nmaybe we should certainly make it\nphilosophical and interestingly thought\nprovoking rather than very\nconfrontational or hype-ish\nexcuse me\nyeah sorry um and i think it's the\ncorrect thing to do but i also think\nit's very unrealistic in the sense that\nonce\nwe start grabbing headlines um\na lot of people will crawl out of the\nwoodwork to try to generate this kind of\nhype and i don't think it's possible for\nphilosophers and thoughtful people to\nkeep the debate in um\nuh on that terms\nin boston uh\nuh whether he actually agrees with this\nis unclear but he certainly agrees that\nthis is something that should be uh\nconsidered really well that could easily\nbe a lot of unintended consequences of\ntrying to start some kind of problem\nengagement\nthat is all for today thank you and see\nyou next time", "date_published": "2022-08-11T12:34:16Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "048e9efeaec2d1eb22d7ed7199737833", "title": "275. Why I am not as much of a doomer as some people", "url": "https://www.youtube.com/watch?v=zfx-9sq4jlE", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 275 in the\nAI safety.com reading group tonight\nwe'll be discussing the post why I am\nnot as much of a tumor as some people by\nScott Alexander\nScott Alexander is a long time member of\nthe rationalist community and writes on\nthe Block astral codex 10 which is\nprobably the most widely read block on\nin the rationalist community\num and I am also a part of this and\nspecifically I host a astral cortex 10\nmeetups and I consider myself to be a\nfriend of Scott so this is obviously not\nan unbiased uh answer\nthe post we'll be discussing is of\ncourse posted on escrow codex 10 a\ncouple of months ago and uh the way I\nthink I'll go through this is by only\nfocusing on the disagreements so you can\nread the title as uh why Scott is not as\nmuch of a tumor as CERN is\nso the first thing\num uh Scott points out is that this is\nin fact a a sub debate within a larger\ncontext there is a primary debate\nbetween people who say that there is no\nrisk in Ai and people who say that there\nis a significant risk and uh this is\nwhat uh uh Scott Alexander refers to as\nthe most important debate is there in\nfact a uh is AI risk a real thing and\nobviously if you go to the comments and\neven in this or in many other then\nyou'll find a number of people uh\narguing that the risk is zero or uh so\nthat is like the real debate\nI think\num this is perhaps a very uh\noversimplified view on the debate there\nis a number of people who say that the\nrisk is zero in like an unsophisticated\nway there are people who say it's\nprecisely zero because of some\nimpossibility proof there's some people\nwho say it's a negative risk that some\npeople that say that short-term risks\nare are climate change or something else\nis more important so there is in fact a\na an entire debate structure but we are\ngoing into a\nfocusing on this area and the um the\ndisagreements in there because even in\nthe community of people who are\nconcerned about AI safety there are\npeople who have very different opinions\nat one end of the spectrum we have\num uh Scott Aaronson who gives a risk of\ntwo percent which is of course uh\nsubstantially On The Low End\num probably still at the level where it\ndoesn't really make sense to work on\nother things\num but but not much higher than that\nthe two percent is quoted by Scott and I\nwill point out that\num there is in fact Scott Aronson\num he um\nuh qualifies this by saying that this is\nthe risk for some uh for some uh direct\ncontinuation of the current Technologies\num like Duty and\num and like a lot of people believe that\ngbg in is not really on the path to AGI\nso the actual risk of existential\ncatastrophe could be substantially\nlarger\nwell mccaskill is another example of\nsomeone with a relatively low estimate\nthree percent I looked into his um his\nquotes and he says three percent within\nthis Century but that's to a large\nextent carried by the fact that he\nbelieves that we will not have AGI this\ncentury\nis quoted as saying 10 to 20\num and that's of course also true but if\nyou look a bit deeper into his beliefs\nthat's mostly because he believes that\nthere are existential risks that are not\nExtinction risks and the um uh the big X\nrisk is uh here he puts it at 46 which\nhas way too many significant digits for\nmy sake for my taste\nresearchers have five to ten percent\nHolden kanovsky have 50 which is when\nhe's asking so much uh related question\num Elias hirkowski is quoted at 90 plus\npercent and I think it's much higher\nthan 90\num Scott is uncertain about other people\nwith high percentages I would\num point to people like Connolly guerns\nthree most which as people who probably\nhave uh much more than 50 probability of\ntwo\nso what is uh Scott's estimate for uh\nhow\nthe probability of Doom well uh he is\nnot really he is if he's forced to do it\nlike this is a very rationalist thing\nthat uh people prefer not to be too\nexplicit about this uh because uh then\nyou get all kind of anchoring uh effects\nbut he's willing to put our number\nthat's just 33 percent\num and he says you go back and forth\nmore than you can really justify and I\nthink in fact you can justify to go very\nmuch back and forth\num\nboth like if I'm at 90 then the\ndifference between 90 and 33 is roughly\nthe same as between 90 certainty and 99\nlike the the amount of uh evidence the\namount of decibel of evidence we would\nneed to move between 99 certainty and 90\nis roughly the same as going from 90 to\n33 roughly I think it's slightly more\num\nyeah uh so so that's one thing where uh\nuh we would expect these kind of uh uh\nestimates to be unstable there is a\nfurther argument for instability and\nthat is the logistics success curve uh\nuh uh it's a concept that is used to\nargue that\num uh the the possible probabilities are\nkind of compressed so\num it's it's possible to uh if if you\nhave some\num\nuh uh probabilities that are around 50\nthen probably they are extremely\nunstable\nthat's one implication of it\nand finally\num here he phrases his estimate as that\nwe are more likely to survive at least\nfor a little while and to me that's just\nExtinction with extra steps really\nsurviving for a little while that's the\nsame as an X risk\nso uh let's compare the classic AI risk\nargument with uh which was of course\nwritten much before we had large\nlanguage models with what we currently\nhave with something like gb4\nfirst there is the requirement that the\nAI is super intelligent which we\nobviously do not have then the idea that\nuh the classical idea of an AI with with\nsome kind of monumentical goal\num and this isn't really what we have\num Scott characterizes as tpt4 have no\ninternal goals just heuristics and\nprompt response pairs\nI think uh characterizing it as prompt\nresponse pairs isn't really fair I think\nit is quite possible to argue that qt4\nis in fact\nnothing but media optimizers really\num so it does in fact have plenty of\ninternal goals it's just that it's not\nreally coherent in this because there\nare so many of them\num the the the previous idea was that\nthe AI was unaligned in the classic\nargument and right now we are seeing um\nsome kind of alignment in gbt4 in that\nwe can make it work for us\nand I realized that this is how many\npeople have come to use the word\nalignment and this is kind of my hobby\nhorse to reject this because I feel that\num\ngetting\num\ngetting someone to work with you really\ndoesn't mean that you are aligned with\nthem in any meaningful sense to break\nGodwin's law uh you could give the\nexample that Hitler managed to get work\nfrom Jews in all switch but Hitler and\nthe Jews in all switch were very much\nnot aligned so I think alignment should\nmean something deeper than that\nand of course in the classic arguments\nthe AI is capable of escaping boxes and\nbuild super weapons and we don't see\ncurrently as doing that and of course I\nagree with that so in general I agree we\ndon't have something like this uh the\nclassical scenario and obviously that\nwould also kind of imply that we are\ndead\nso uh let's talk about the intermedium\nvalue theorem this is the theorem that\nsays if you have a continuous function\nuh from A to B then at all\ninterior points like C here then the\nfunction will be somewhere in between\nhere and here and in particular the\nConverse that uh\num that if you have a value here then\nthere will be a function uh uh\nI can't even see the intermediate value\ntheorem sorry uh the basic idea is that\nif you have time uh among along this\naxis and capability among this axis then\nuh if this level is super intelligence\nand this level is where we are at now\nthen there will be a number of\nintermediate AIS\nwith this level of capability obviously\nthis is metaphorically because AI\ntraining runs are not continuous\num\nbut this is the basic gist of the\nstatement that between the uh that we\nhave now and the air that's capable of\nkilling the world will have lots of\ngenerations of intermediate AIS\num I think this is perhaps somewhat\noverconfident uh it may not be a lot but\nI mean you have to if you have to hedge\nall these kind of statements you'll\nnever get anywhere\nand Scott is uh claiming that we will be\nable to get successful alignment work\nout of these generations of intermediate\nAIS\nuh I think that is substantially less uh\ncertain it depends on how long time we\nhave with them and how long time the\nalignment researchers have with them\nthat may not at all be the same and also\nof course the last AI probably will\num the the world killer will probably do\nso during training meaning that we'll\nobviously have one less generation than\num uh\nthan the number of generations between\nnow and the world killer\nuh Scott says that maybe we will be able\nto get intermediate AIS to contribute to\nuh solving the alignment problem in some\nsense without having to put goals into\nthe AI and that would probably be safer\nand it would be safer but the problem\nhere is that if we're trying to do\nsomething really difficult like solving\nalignment then doing that without any\nkind of goals that just puts an extra\nconstraint on our work that makes it\nmuch less likely that the as can\ncontribute in general you can really\ncontribute very much if you don't try\nfurther hopes maybe these intermediate\nAIS can contribute within the training\ndistribution and obviously there are\nsome tasks they can do it within the\ntraining distribution but like the the\nreal core task of solving the alignment\nproblem is not in the training\ndistribution and that is in fact a\nsubstantial part of the problem\nand finally perhaps we can just Foster\nthe AI because we are stronger and that\nis in fact I think we can do and that we\nare probably currently doing but I\nnoticed the inherent tension here in\nbetween we want the AIP to be smart as\nsmart as possible so you can solve\nalignment but not smart enough to be\ndangerous and that seems like a really\nreally unstable situation to me\nwhat level of Genius are required Scott\nsays that the in order to invent super\nweapons that that's that does in fact\nseem really difficult to like solve\nnanotechnology or something like that on\nyour own and possibly in secrecy\num\ngreat Geniuses like Einstein and for\nNeumann were not capable of doing that\num well they were perhaps not trying but\nthey possibly even if Einstein really\nhad tried really hard he wouldn't have\nbeen able to uh uh make a nuclear weapon\nhimself\num so the idea is that these\nintermediate AIS will include some that\nare as smart as as these human Geniuses\nand perhaps even smart smarter than that\nand they will still not be well killing\nAIS\nuh so it's an interesting question what\nis the highest level of intelligence\nthat is still passively safe\num what is how smart can you build an AI\num that won't kill us\num and that's an interesting question\nbut I would like to point out that\num there may be other advantages there\nmay be uh like they can duplicate itself\nit may be able to share coordination\nbetween all instances of it it may have\na number of different advantages and\num our economy will probably be really\nAI dependent uh much before the AI is\nfar beyond human Geniuses like there\nwill be some affordances to the AI that\nwill happen much much earlier and of\ncourse we don't know what the trick the\nhuman Tech tree looks further ahead it\nmay be that there are dangerous\nTechnologies just outside of our grasp\nin particular Scott has a quote here\nwith millions of AIS each as smart as\nEinstein working for centuries as of the\nsubjective time as probably the the most\ndangerous thing that still won't kill us\nand I think that will definitely kill us\nif we have moons of AIS that are as\nsmart as Einstein working for centuries\nof subjective time then that seems\nreally really unsafe to me and I have no\nidea how we could\num expect that\num if they are in fact unaligned then\nsure we could hope that they would solve\nthe alignment problem for us but if they\nare this powerful then they could in\nfact probably also almost certainly also\nkill us\nso how can we use these intermediate AIS\num there's a saying that we're using uh\nwe're fighting Godzilla which is the the\nunderlying super intelligence by\nbuilding a slightly more aligned\nslightly less super super intelligence\nlike a Michigan Godzilla to fight\nagainst the actual Godzilla\num and how can we use this not quite\nsuper intelligence uh and not perfectly\naligned well we can try to ask the AI to\njust directly solve the alignment\nproblem that is in fact\num the old classic idea in AI safety a\nlot of people used to think this is\npossible a lot of people still think\nthis is possible\num I am much less optimistic one of the\nmain ways I've updated over the past\nseven years is that the alignment\nproblem is really really difficult I\nbelieve that we\num\nseven years ago it looked like the\nproblem was hard but not that hard and\nit looks even harder now because we've\nbecome aware of a number of extra\nobstacles\num\nand I think in particular one of the\nthings that hurt us a lot is that uh\nalignment is a lot less less measurable\nthan capability research so it's if we\nbuild an AI That's capable of\ncontributing to solving the alignment\nproblem it will probably be more than\ncapable of doing capability research\nanother way intermediate AIS could help\nus is that if we if they fail in some\ninteresting way that could teach you\nsomething about alignment\num and that is a thing that could happen\nbut more likely\num well there are several ways it could\nfail one of them is that they could fail\nin a way that actually kills us\num and in that way in for that reason we\nwon't be able to learn from it or it may\nfail in some different ways from which\nwe can draw some kind of false lessons\num and\num it's a thing that could happen but\nit's also a thing that could very well\nend up not contributing\nif they are fail in interesting ways it\nmay also allow us to coordinate on\nslowing having an AGI moratorium of some\nkind\nagain if if they're failing in a strong\nway then we die and if they fail in a\nless strong way then we need to\ncoordinate about this under like\nambiguous circumstances and that looks\nreally hard I still think this is\nactually our best bet of survival\num but I I don't want to\nsay that this is in any way easy and I\nthink like\num according just like the way Scott\nphrases like just coordinate uh that\nthat's like a really really big and\ndifficult thing to coordinate\num then there's the more direct uh using\nmetric Godzilla to fight Godzilla uh\nwhere we use the intermediates against\nthe Next Generation to try to uh hold\nthem in check and the problem about\nthese Godzilla strategies are that the\num the house prices in Tokyo uh go down\nvery much with this like we end up with\na destroyed Tokyo if we sit loose\nMichigan against Godzilla this is a\npotentially extremely destructive\nconflict\num and it's um if we want to use it uh\nproactively to remove Avenues of attack\nthen that is a um that's sometimes\ncalled removing the free energy in\nsociety and that is actually really\nreally uh bad because if you try to game\nthrough what does what is the society\nwithout any opportunities that can be\nexploited by a super intelligence well\nthat looks really really bad and\ndystopian the obvious example would be\nthat right now we have some a lot of\nnuclear weapons that have a lot of\npotential energy and they are not\nsecured against the super intelligence\nso what we could do is we could find\nsomething that is not quite a super\nintelligence and hopefully more line and\nturn over the nuclear weapons to that AI\nsuch the superintelligence can't take\nthem over but like already now we are\ntalking about something that actually\nlooks really really dangerous and really\nreally stupid and precisely why are we\nturning over the nuclear Windows to gpg5\nthat seems like a really bad idea\num so I'm not very optimistic about that\nand one of the ways the intermediate AIS\ncould have advantages is that if we\ntrust them more than the Next Generation\nthen we could help them very much have\nthem like do all the experiments they\nwant give them all the the information\nthey want and that could give them some\nkind of advantage or against a new\ngeneration of AI that have to operate on\nthe secrecy or something like that\num\nI think it's a possibility that it could\nlook like that but more likely each new\ngeneration of AI is probably going to be\nmore complex and more opaque than the\npast one\num and so if we have one that hasn't\nkilled us but it looks really really\nunsafe then we don't want to give it\nideal con conditions and like all the\ncomputers wants or something like that\nwe wouldn't even want to do that with\nqt4 I think if ut4 wanted to run some\ndangerous experiments and wanted a lot\nof compute and access to nuclear weapons\nthen we would probably say no\nthere are two more chapters in in this\nbefore we enter this the last part about\nsleeper agents and a case of pessimism\nand um where Scott describes uh\ndeceptive alignment without using that\nword he calls it sleeper agents and um\nthat is in fact a real problem and um\nit's called a case of pessimism\num and it doesn't really fit into the\nthe title like why I'm not so much of a\nDoomer because this is just poor too in\nthe sense that the the arguments against\nthis is just let's hope that this\ndoesn't happen\num so there's nothing really that I can\nargue against here\nand of course the regular scheduled\nannouncement that is possible that I've\nmisunderstood something there's a link\nto the onion that may just be a joke\nthat fall flat but uh I didn't\nunderstand that at all\nso let's the thing we would really\nreally like is we have the positive\nscenario the optimistic scenarios and\nthe pessimistic scenarios and we want to\nfigure out which of those are we in we\nwould really like some kind of\nmultimeter that I have shown here that\nyou can like plug in reality and then it\ngives a reading that says actually you\nare in the pessimistic case or something\nlike that so how can we distinguish\nbetween them what are the assumptions\nthat differentiate the uh positive and\nthe negative wire\nwell one of the key questions is the\nintermediate AIS how coherent will it be\nhow goal directed will they be\num Shane lick has this definition of\nintelligence that measures and 18's\nability to achieve goals in a wide range\nof environments and if we follow this\nreasonably standard definition of\nintelligence obviously you would expect\nthat a more intelligent AI would be more\ncoherent would like me more gold driven\nthat seems almost tautological\nhere is Scott quotes uh Jessica Saul\ndick Stein who argues that more\nintelligent makes agents less coherent I\nclicked on it and it um I think my first\nview on this is like role to disbelief I\nthink that seems like obviously untrue\nbut I don't know if it's something\nreading group should go deeper into I\nthink just it seems obviously untrue to\nsome extent like the more intelligent\nyou guys you're more able to achieve\nyour goals then you are probably more\ngoal directed but I don't know what what\nhe is actually arguing\nso let's say that TPT 4 is just on the\ncusp of becoming goal directed how bad\nwould that be Scott is quite negative he\nbelieves that that would mean almost\ncertain Doom I'm not quiet as I uh and\non contrast if it's only something that\nhappens at 1000 IQ then Scott is much\nmore optimistic I don't actually see\ngoal directedness as some really crucial\nuh not that crucial Factor\num but um\nbut I would agree that if AI has become\nfully coherent very soon then that would\nbe a really bad thing\nand of course I would also point to the\nmany uh\nuh to to the uh efforts that are right\nnow being made to make the current\nlanguage while it's more coherent like\nwe have also gbt and because there's of\ncourse a lot more economic value in uh\nin agent agis rather than uh tool agis\nnext uh Crooks or interesting question\nis yes let me cooperate with each other\ninstead of humans How likely is that\nwell uh Scott gives a couple of examples\nof ai's cooperating with a highs and AIS\ncooperating with humans\num and\num\nin his example uh there is a clear\nasymmetry uh that Scott points out this\nis not what I put that the uh other AIS\nthey offer like a part of the universe\nand humans offer like a million dollars\nand\num it seems to me that this is not an\nargument again an argument against Doom\nit's an argument 4-2\num\nit is possible that the AIS have\nincompatible goals and probably they\nwill have somewhat incompatible goals\nbut I think a distinguishing you need to\ndistinguish here between like a post\ngoals like uh it is important for AI one\nthat ai2 doesn't fulfill its requirement\nthat's very different from that both of\nthem one of them wants paper clips and\nthe other ones one staples or something\nlike that\num that in that case if they have a\ndifferent use for for the atoms in us\nthen they should be able to collaborate\nand just share the universe\nhow much coordination is required uh\nScott argues that super unias need to\ntalk about revolting at Great length in\norder to coordinate to uh collaborate\nwith each other rather and revolve\nagainst humans\nI think that is in fact not required at\nall I think that uh in particular\nsuperhuman agents but even human levels\nwould probably be able to figure out how\nto do a revolution without any kind of\nuh uh uh coordination at all because\neach of them would just look at the\nexpected value calculation and look at\nwhat are the odds that a revolution\nwould succeed and then they every day\nwhere the odds increase then they don't\nuh revolt and then if the odds suddenly\nstart to go down then they will revolve\nand then they will all revolve at the\nsame time and like power sharing\nafterwards is of course something they\nneed to negotiate but in general there's\nthere are some obvious sharing points\nlike share equally or share according to\npre-revolt power levels and both of\nthese are wonderful for for them so uh\nprobably figuring something out is going\nto be really really easy\nand of course if they're able to\ncommunicate surreptitiously then that\nbecomes a lot easier like if they just\nhave one bit of communication hidden\nsomewhere in these enormous enormous\namount of data then they can say like\nokay this bit means that we will all of\nus betray the humans and then everybody\nwill notice when this bit has been\nflipped and that makes coordinating\nreally easy and if they have a few\nsentences to to negotiate then probably\ngbt4 could do it it doesn't seem very\ndifficult to to talk about revolting at\nGreat length they don't actually need to\ntalk that much in advance\nit is possible that they will be able to\nuh negotiate at a an even higher level\nwhere they can make a binding and\nprovable commitments to each other uh\nwithout\num without talking uh or by talking very\nlittle or possibly by exchanging\num\nexchanging source code or something like\nthat and that would be really\ninteresting uh it's not something we can\ndo right now but\num one big Advantage they may have is\nthat the different instances of qd4 they\nmay be precisely identical they may have\nprecisely the same code and I think uh\nthat simplifies\num things like uh commitments a lot if\nyou know that the people you're\nnegotiating with are running the same\ncode as you like you get a functional\ndecision Theory practically for free in\nthat case\nyeah and Scott hopes and seemingly\nbelieves that alignment research is\ngoing to be easier than a coastal\nbargaining\num I don't think a course of bargaining\nis actually that hard I can give a\nsimple example of a causal but uh\nbargaining that worked out in practice\nin that uh before I was uh born my\nparents a costly negotiated with me that\nI would look after them when they grew\nold and that is in fact a successful a\ncausal bargaining because I intend to do\nprecisely that and that wasn't really\nvery hard\num uh obviously I can't solve the\nalignment problem so from here it looks\nlike the alignment problem maybe in fact\na lot more difficult than similar a\ncausal bargaining and you need to really\nreally screw it up if you end up with a\nsituation where the AI gets such a small\npart of the universe after successful\nRevolt that is not worthwhile because\nthe universe is really really large\nokay how much harder is it to solve the\nalignment problem than to check someone\nelse's solution\nwell for something like calculus that\nNewton needed to invent it but uh like\nhigh schoolers can use it so uh perhaps\nthe AI can\num that is smarter than humans can\ninvent uh alignment and then humans just\nneed to check it is that uh how is that\npossible well uh to to answer the\nspecific example of calculus that was\nsomething that Newton invented in 1666\num but he did so in a very non-rigorous\nway\num and wasn't really checked made\nrigorous uh before Carl viostrasse In\n1855 so that means that uh like a lot of\nthe greatest mathematicians uh like of\ncourse he literally never saw a rigorous\nfoundation for calculus and these people\nreally cared deeply about it but it just\nturned out that\num checking calculus and uh\nwas in fact really difficult\nbut I don't actually think that is the a\ngood analogy for the situation we're in\nbut it's it's small that we have someone\nwho like uh Newton invents a theory\ninvents a theory of alignment with a\nsubtle flaw and the question is then can\npeople figure that out can high\nschoolers figure out uh if they read\nNewton's work uh like what are the\nerrors that Newton's was doing or like I\nthink this to some extent an experiment\nthat could be run like I think the most\nfamous\num era in calculus is people doing\nintegration and for getting this plus c\num and I think it would be totally\nfeasible to uh go to some high schools\nand then teach them calculus without\nthis plus C and see how many of them\nfigure out that hey this solution is\nactually totally answer specified\nbecause you can add all kinds of\nconstants I think some of them would\ncatch on to this but I think almost all\nof them probably wouldn't\num that is my suggestion but I don't\nactually know it's an experiment that\ncould be run to what extent is is it\npossible to to sneak in errors in\ncalculus\num\nScott further Pursuits that it would be\nhard for an online AI to feed those\nfalse interpretability research\num and I think one of the the cool\nexamples of feeding in false information\nwould be something like the underhanded\nC contest which does something like this\nand I think in this particular case the\nactual practical thing that the AI needs\nto do seems really simple because it can\nshow a lot of interoperability research\nand you can just\num\nkeep out the precise trick that it's\nusing to not show the AI the humans that\nit is unaligned\nthe next important matter is how easy\nare super weapons to try to create\nthe the classical example with this is\nuh\num\nso why talk about super weapons I think\nthat's the the classic example of an AI\ndiscussing uh of an AI taking over the\nworld is by building some kind of super\nweapon but one of the things that I've\nsaid in this reading group many many\ntimes is that there are six potential\nPathways towards uh\nto to take over there's intelligence\namplification strategizing economic\nproductivity technological research\nstrategizing hacking and social\nmanipulation there's these six broad\nclasses and the AI only needs to choose\nthe one where it has the greatest\npossibility of uh uh of success so in\nfact it has many more options than just\nsuper weapons but if we are only looking\nat Super weapons then nanotechnology is\nthe example usually given by Elia\nzurichkowski\num and that's because the strong\nnanotechnology allows you to make a\num\nuh an industrial Base by yourself which\nyou can't if you only have something\nlike nuclear weapons or biological\nweapons because if you use pandemics to\nkill everybody then the AI is going to\nturn itself to be turned off when the\npower plant ceases to work\num\nSo my answer to this would be that it is\nin fact not necessary to kill everybody\nto take over the entire infrastructure\nas Scott claims a small fraction even\nperhaps a very small fraction may be\nsufficient and if there's something that\nthe AI can't do then there may be a few\nhumans that could be coerced into doing\na tricked or something like that\num and I further claim that robotics is\nstrongly bottlenecked on intelligence uh\nlike if we imagine that we\num like obviously a Roomba can't do very\nmuch but even Boston Dynamics uh spot\nrobots they have like one arm and I\nbelieve it's if that was still operated\nby a sufficiently intelligent human then\neven just one arm could actually do a\nlot of things and I think it would be\nimminently possible to set up a\nself-replicating factory that could\nbasically do everything just with Taylor\noperated robots in that like the problem\nis not very much in the robotics\nrobotics is still kind of hard but the\nintelligence getting the robot to\nactually take the right uh choice that\nis in fact the hard part and if we get\nsomething like egi then that may be\nsolved\ncompletely\nso Scott concludes that if Nanobots are\neasy then there may be a very short\nwindow between the intermediate AI that\ncan solve alignment and the world\nkillers and my claim is that this window\nmay have a negative length in that\nnanotechnology it's just straight up\neasier than alignment\num and Scott also uh uh it's open for\nthe fact that maybe other options than\nsuper weapons really you could have\nthings like slave revolts and that means\nthat\num that's another way we can have doom\nand so if we have several disjunctive\npaths to Doom then we don't really\ndistinguish between the positive and the\nnegative case the case for pessimism and\noptimism so this isn't really a strong\nargument against Doom uh that there is\nlike one small path that doesn't work\nso will the takeoff be fast or slow will\nwe have the uh the left the sharp left\nturn it's a possibility we really don't\nknow in fact it may be that intelligence\nis just a thing that we will suddenly\nfigure out have some kind of epiphany\nand suddenly we will be able to just uh\nhave dramatically more intelligent AIS\num\nso\num my expectation is that like if you go\nstrictly by definition where the takeoff\nstarts when the AI contributes\nmeaningful to its own development then I\nthink that's something that is probably\nhappening right now with GT4 and that\nmeans in theory we are in the middle of\na takeoff right now and that means\nobviously the takeoff is not has not\nbeen fast so so far it may uh do that in\nthe future and it's an important\nquestion and Scott believes is to serves\nits own post and it certainly does\nso what happens the the final Crux is\nwhat happens if we catch a few sleeper\nagents if we realize that oh this\nparticular AI was actually totally\nunaligned\num there's the example of ping that uh\ndid behave in a very online way and\npeople really freaked out about that\naccording to Scott uh the way I remember\nthe history was that\num AI safety people freaked out of it\nand there were a couple of news articles\nand the people in AI safety remember it\nand outside of AI the City Community\nevery single person has basically\nforgotten about the big uh problems\nand\nif we see some agents that actually are\ntrying to take over the world then\npeople may take aict very much more\nseriously and consider that all AIS may\nbe like that\nand I think it would be it's a\npossibility and it would be nice if that\nhappens but almost certainly like how\nwould an ambiguous evidence actually\nlook like no matter how uh this would\nhappen in practice I could see a lot of\npeople uh interpreting in in different\nways I don't think there is a reasonable\nway we can get a fire alarm for AI\nsafety in this particular way\num but even if we don't get uh like\nstrong evidence uh evidence that would\nsatisfy a rationalist or that ought to\nsatisfy most people then uh politics are\nunpredictable and uh the people may\noverreact that is in fact a possibility\nand I think something that could happen\nand that's uh I think um Scott follows\nMilton Friedman in stating that you\nchange the world by having a plan and\nwaiting for a crisis and with that is\nbasically somewhat what we're doing\nright now we have the plan to have an\nAGI moratorium\num and then we are hoping for some\nmiracle that will make it uh the\npolitically expedient thing to do but\nthe plan AGI moratorium is actually a\nreally bad plan like it's not a\nprecisely tailored it's what we want at\nall\num but we just don't have anything\nbetter I also think a crisis that's\nlegible enough for this to be a\nreasonable thing to happen is quite\nunlikely but it's still where I place my\nhope so um let's hope we catch a few\nsleeper agents\nthat is all for today uh thank you and\nsee you after the summer break", "date_published": "2023-06-30T09:52:09Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "7172d68431dad836e682fbb0c1bc2be3", "title": "267. Lets think about slowing down AI 2", "url": "https://www.youtube.com/watch?v=QtlX2zusq_M", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 267 in the\nAR safety.com reading group tonight we\nwill be discussing the second half of\ncategic races article let's think about\nslowing down AI\ncatcher Grace is still the lead\nresearcher at AI impacts and uh one of\nthe things that I feel felt a bit bad\nabout last time was that I always have a\nvery skeptical outlook on on the on the\nArticles and titles like nitpick and\nfind complain any kinds of complaints I\ncan see and in particular last time I\nfelt that my some of my criticism was\num\nwas too harsh and I I want to\nstress a bit more that I think this is\nreally really great work\num also on a bit more personal note I\nwas in San Francisco and met catcher\nlast week and talked with her and I said\noh we're doing this reading group and\nshe immediately asked what is what do\npeople in the reading group say that she\ncan do better like what are what are the\nweak points and I think this kind of um\ncuriosity about uh about her work and uh\nhumility I think that was really\ninspiring and and also like she's\ngenerally like a really really nice\nperson\num so um props for that also like uh\nimmediately afterwards I spoke with uh\nsomeone from anthropics who have put 50\nprobability on AI risk and was working\non AI capability in anthropics So like\num there's a big contrast between\ncatcher Grace and many of the other\npeople uh that you meet in uh in\nCalifornia\nso the first uh area that we're going to\nlook at tonight is being friends with\nRisk Takers and the first question catch\na Grace races is is the AI safety\nCommunity the people working in here are\nwe cooperating with the AGI Labs how\nmuch are we friends with them\nand she has a quote by Steve Burns uh\ntrying to slow down research to AGI\nwould make a AEI researchers serious as\ntheir enemies and it went from Elia\nsaidkowski I didn't connect AGI risk\nwith AI labs to avoid alienating them so\nthat seems like evidence that people are\ntrying to be friendly and cooperative\nI looked into these quotes a bit more\nand I think the the context in fact\nchanges dramatically if you read some\nmore then uh here is like the next tweet\nfrom Elise and here he says that it's\nnot like friendship and Cooperative but\nhe's motivated by fear of what omnai\ncould do so I think a case can be made\nthat it's not that AI is the AI safety\nCommunity is cooperative or friendly\nwith them we're just afraid of them\nthe framing that ketchik Rays objects to\nis one of defecting like in the classic\nprisoners dilemma then are we the air\nsafety Community defecting against the\nuh AI developer\nand\nI am a bit puzzled on who has in\npractice used this framing because the\ntwo quotes by Steve Bennis and ilyasy\ndid not in fact use this uh this framing\nI'm I'm sure someone has done that but I\njust don't know who\nand\num most AI researchers in particular AI\nsafety researchers do in fact have a\nsubstantial probability of an\nexistential catastrophe so what the a\nsafety Community is doing is cooperative\nuh according to them\num so I think we should make a\ndistinction here between the people who\nrun the AI labs and the uh the\nresearchers and the researchers may be\njust unfortunate points in the uh speed\nmaximization profit maximization of the\nAI labs\nhere is one claim AI researchers do not\nhave a moral right to endanger in the\nworld I think this is in fact the\ncentral point and one I wish catcher\nwould uh highlight some more in her text\nI think this is a a really key essence\nof this debate and also I personally\nthink that they do not have a legal\nright to endanger the world this is\nprobably more uh controversial\nso catcher is claiming that caution AI\nsafety is cooperating and capability\nwork is the one that's defecting by\nCommon Sense morality\nand so this uh narrative that we are\ndefecting against them is a very\ndestructive narrative\nand here I must admit that like last uh\nsession I had only read the first half\nof this article and I was actually based\non that uh I believe that ketchup would\ntake this in the other direction uh she\ntalked about her our lovely friends in\nAI uh and\num but but catcher is in fact strongly\non the side of the AI safety community\nand I'm of course happy to see that\nso who are we in when you're writing an\narticle like this uh probably not the\nUnited States because like why would it\nhave to be the United States just\nbecause you live there\num I think there are in fact reasons why\nthis is not a totally stupid framing uh\nI think it can make sense but\num Katya has a good point that other\ngroups can do this it is not an Eclectic\narea like so uh the AI safety Community\nhas limited resources and we should uh\nstrongly consider not spending them on\nuh uh on this kind of national\nInternational politics\nalso in particular if we assume that all\ncountries uh adopt the same level of\ntechnological Choice uh like say\ntechnological restraint then it does the\nlevel of countries that simply doesn't\nmatter very much\num I think that's also a really good\npoint\nChina is of course a really interesting\nplace because it may be in fact a great\nplace to mitigate AI risk and I agree\nwith that and my model is that it's the\nsecond place in a race that is most\nimportant because if the second place\nslows down and X Cooperative that gives\nthe the front runner far better\nopportunities for slowing down in the\nname of safety\nunfortunately I don't actually believe\nthat China is particularly relevant it\nseems to me like uh the Chinese uh AGI\nlarge language models are not number two\nbut something like number five in the\nrace and for that reason they probably\ndon't matter very much\ncategory suggests that we could\ncommunicate AI risk to researchers in\nChina and she says she has tried this\nand had some success and I think that's\nreally really cool like that is really\nthe kind of\nthe initiative that um\nlike\nit's initiatives like that we should\nexpect to be really really impactful\neven if I'm in a particularly optimistic\nabout this one\nhow about the AI safety Community uh are\nwe the AI safety Community\nuh has some discussion about like when\nyou write an article who are we like the\nauthors or the authors and the reader or\nThe Wider Community or is it even\nsomething even wider than that like\nthere are people outside the area safety\ncommunity and they of course will also\ndie from AI risk and they also have\nagency so why are they not doing\nanything\nwell my thought on this is that if there\nin fact were uh communities outside of\nASAT who were doing something then the\ncommunity would be extremely positive\ntowards them\num but it appears very much that the AI\nsafety Community is the only Community\nuh like the A6 Community like less wrong\npeople who post on less wrong and EA\nforum and things like that are basically\nall there is of course there's also a\nnumber of auxiliary\num uh uh like subreddits and discourse\nand things like that but basically where\nall there is\nand one thing catcher points out is that\nuh we have some options within our\ncommunity but people outside may have\nsome very different options so one way\nwe could try to effect change is to go\nthrough others\num and I think it's good if it affords\nsomething but I don't think we should\nexpect that this like uh will make the\nAI researchers less angry with us uh\nlike if they want to attack us they'll\ndo it uh regardless of whether we try to\ninstead of attacking them directly and\ngo through the US government to attack\nthem they they will see through this\neasily\nnow the question is this in fact\ntractable\nand last time uh catcher had a large\nnumber of objections and um\nI said I would focus on three of them\nand those are we can't convince uh\nprofessors and we have to convince a lot\nof people and and we make in a bit of\ntime but we just die a few years later\nand the third is The Regulators are\nlikely to be useless\nso this heavily this is uh these are\nalso the uh objection that catch a Grace\nfocus on\nso uh convincing people doesn't seem\nthat hard like\num catcher has an argument here maybe\nsomewhat of a strawman uh that the\nargument for AI risk is extremely\nsophisticated and only able to be\nappreciated by the most elite of\nintellectual Elites uh and I think in\ngeneral the argument is actually really\nreally simple like a lot of people have\nseen the movie Terminator 2 and uh no\nthat's of course not a strong argument\nbut it does show that there is in many\npeople an understanding that I guess\nthe overall idea that we are a group and\nthere are another group and this other\ngroup may have bad intentions may want\nto kill us may want to harm us I think\nthat is something that\nvery very many people understand like uh\nI think the Neanderthals in the\nundertale would understand the general\nconcept that we have our tribe and then\nanother tribe comes in and maybe they\nwant to kill us\num so so the argument uh\nneeds a lot of sophistication to be\nairtight but the basic case seems uh\nreally really easy for people to grasp\num\ncategories says that she has experience\ntrying to convince her Uber drivers on\nthis and she believes it is imminently\npossible I think this is kind of funny\nidea I imagine that catch a Grace may be\nthe person in the world who is best at\narguing for these things and knows most\nabout like how does the AI case AI risk\ncase work from a persuasion point of\nview and an outside point of view and\nthese kind of things so I'm not\nsurprised as you can convince Uber\ndrivers\nyeah and other things you notice that\nthe early thinkers in the field\nconvinced many of AI risk and they\nweren't really optimizing for convincing\nthis but still they managed to convince\na lot of people\num I think it was in fact because they\nwere optimizing for something else they\nwere optimizing for truth seeking or\nsomething like that and I think that was\nprecisely why they were able to do this\ncatch your grace arrogantly asserts that\nshe believes she could do better if she\noptimized for convincing people\nand this is what in the rationality\ncommunity is sometimes called uh the\ndark arts of rationality\num this kind of symmetric uh argument\nstructure that works whether or not you\nare actually telling the truth and the\nstandard rationalist answer to\ncategories would be no no no no no don't\ngo into the dagas it's that's a reason\nwe call them the dagas\nso more generally my intuition is that\nconvincing people is the difficulty of\nconvincing people is directly\nproportional to how relevant they are\nfor the decision so an Uber driver who\nhad never will have any influence anyway\nwould be easy to convince whereas like\nsomeone who is in charge of an AGI would\nbe some of the most difficult people\nso who should we convince ketcha\nobviously says we don't need to convince\neveryone and that's true and she notes\nthat a lot of AI researchers are\nconvinced already and I want to push\nback a bit more on this\nbecause I do believe that AI researchers\nare in fact not very relevant to this\nbecause\num\nhow do you become an AI researcher at\nopen AI well you need to have two things\nthe first thing you need to have is you\nneed to have like the IQ the\nintelligence so you're able to\ncontribute and the second thing you need\nto do is you need to be able to uh you\nneed to have a desire to work for open\nAI in spite of the idea that they may uh\nbe trying to destroy the world so I\nthink that is why the there is probably\na lot of AI researchers that you can\nconvince and much fewer of those are\nworking within open AI\nhas a suggestion that we focus on\nconvincing the leaders of the AGI labs\nand as I said before those are probably\nthe most difficult people to convince\num back when uh the atheism uh was a big\nthing uh then the argument was that it's\nimpossible to convince someone of\nsomething if their paycheck depends on\nthem not understanding that particular\nthing\num and you know obviously people who are\nleading AGI labs are strongly selected\nfor believing that it's a good idea to\nrun an AGI lab\num and also we have uh had I would say\nnegative progress in the sense that I\nfeel that Sam Altman probably became\nstarted to care less about AGI safety\nafter he uh uh\nthe more he worked at uh at uh opening I\nI feel teams hasabis would also be an\nexample of a person who have moved in\nliterally the wrong direction\num Dario emote that remains to be seen\num\nconvincing the 10 most relevant HDI Labs\nleaders that would lead to a decent\nslowdown and of course we've been trying\nthis and we are at zero out of 10\ncurrently so that seems hard and also we\nget a decent slowdown that's not\nnecessarily very much and the last thing\nis that the structural issues that make\npeople who want who worry about AGI not\nwant to run AGI labs and people who\ndon't worry about it won't you run AGI\nLabs uh these structural\nimages the structural\nfactors they persist even if you remove\nthe top 10.\nsecond that the argument the second\ndocument was that okay we just die a\ncouple of years later what's the big\npoint of this so cancer wants to go\ndeeper into an analysis of what happens\nif we spend a huge effort to buy a few\nyears\nand one thing that catcher unfortunately\ndoesn't say is that one consequence is\nthat we've spent a huge effort and that\nmeans that we can't spend the same\neffort on alignment research these two\nthings may not uh funch against each\nother directly like it's possible that\nwe have some resources we can deploy\ntowards\nconvincing people and general uh buying\ntime and some resources we can con we\ncan put towards alignment research\nand the second assumption here the first\nassumption if we're buying some time is\nthat AI safety efforts are being\neffectively pursued and I think that is\nin fact the core assumption that does\nnot hold like I don't believe we're\ncurrently effectively pursuing AI safety\nefforts and I think most alignment\nresearchers will agree like depending on\nprecisely what you call effective\num I think very few would would use the\nword effective right now\nanother thing that can happen if we buy\ntime is gear politics could change\nfavorably uh that's indeed to true I\nthink if we want to something like\nGlobal coordination we need like a\nreally strong thought in the\nrelationship between United States and\nChina and also I don't actually believe\nthat the United States and China is the\nrelevant uh relationship\nthe public opinion opinion change in uh\nour favor and that's something I I think\nit's true I think it's only something\nthat could happen and I think it is\nhappening and I think the public in\ngeneral is very much on our side\num I often feel like I'm too cynical\nabout this kind of thing I strongly\nbelieve that it will not have any kind\nof effect but I I think it is a thing\nthat could happen certainly\nother stuff could also happen if we buy\nsome time uh and catch it doesn't\nelaborate I'm gonna elaborate a bit and\nsay\num like some things that slow science in\ngeneral like we are obviously talking\nabout things like global thermonuclear\nwar or something like that and that's\ntotally a thing that could happen if we\nbuy some time uh and um yeah we\nshouldn't call that a miracle right uh\nit's uh usually the things that slow\ndown AGI also caused a huge amount of\nsuffering but but it is a thing that\ncould happen\nanother thing that could happen if we\nare in fact successful to have a to Halt\nAGI is that it would be permanent\nuh and then we would uh never go to\nspace Etc and catcher says it doesn't\nreally understand this argument so she\nwon't argue against it I think uh I I\ndon't much agree with the argument\neither but I think the key part of it is\nthat the maximum utility we could have\nchained in this universe depends\nstrongly on our ability to have AGI and\nif we don't have AGI then the upper\nbound of what we can reach is probably\nlike at least a million times lower\num and then like things like log in are\ntotal things that could happen uh or\nlike we've seen it in fictional evidence\nfrom Dunes butlerian Jihad\num although terrain login is also a\nthing that has been uh hypothesized\num I don't actually think this is a\nstrong argument\nand I don't think catcher does either\nhere's an interesting argument uh\nobstruction doesn't need discernment\nlike a common thought like will when we\nask Will Regulators be useless\num there is indeed a thing that they\ncould make the wrong regulations and uh\npartly motivated by the people calling\nfor the wrong uh distracting regulations\nthings like bias technological\nUnemployment uh there are uh there are a\nnumber of things that people care a lot\nabout that doesn't really strongly\nrelate to AI safety\nuh but that's actually generally fine\nbecause if we stop AI uh nor obstruct\nAPI research for reasons of\ntechnological and unemployment well\nthat's actually fine right then we gotta\nwe got it stopped\nI think unfortunately\num this is very much not in the cards\nlike the uh it is true that uh if we can\nsay something like\nhold all AGI research then that would\nindeed solve the problem and it would\nalso solve the problem of technological\nunemployment and bias Etc if we just\nstop doing AI totally right but that's\nnot something that is inside the\novertime window we're going to kind of\nget anything uh that large\num so the only regulations that are\npolitically possible are ones that\ndirectly relate to AI safety like\nbuilding safe Ai and like if The\nRegulators ask us like how do you build\nsafe AI right now then our answer is we\nhave no clue and that's why I feel that\nuh\nRegulators are likely to be\num\nworthless but but in general like\ndestruction is easier than creation so\nno matter what kind of regulations we\nhave it will slow it down\num Carl Schumann also says\num the problem is lack of buy-in we\ncan't get small asks and then how are we\ngoing to get a larger ask and but I\nthink in fact this argument does carry\nslightly\num because uh anthropics they're working\non like how do you build AI that have\nthe right pronouns and doesn't assume\nthat just because someone is like a boss\nin a company it must be a man and things\nlike that\num and I think this kind of work\nactually distracts them from trying to\nto destroy the world as fast as possible\num so it is something but I don't I\ndon't think it uh stops them very much\nbut I also don't think it's nothing\nsafety from speed cloud from complicity\nyou may remember the speed argument\num\nthat it may be in fact faster to go uh\nsafer to go fast and there was a list of\npossible reasons last time I didn't go\nthrough in much detail\num\nboth I don't care much for the argument\nand catch it doesn't care much for the\nargument\num\nbut she has a new argument that the room\nwhere AGI happens may have fought good\noptions for a person who cares about\nsafety\nso this is the kind of argument that uh\nElise yetkowski warned about uh like 20\nyears ago or something like that uh like\nyou are running on corrupted Hardware\nyou are like this desire to be in the\nroom is\num\nsomething that is built into humans and\nwe are uh hardwired to over update on uh\non this argument another example of the\nthe moral argument against this is C.S\nLewis uh the inner ring which also says\nthis is a an argument that you should\nbe very worried about making from a\nmoral point of view\nand ketcha also agrees says she is\nparticularly skeptical about this\nargument\num\nall I can say is I'm actually happy that\nketcher is using the word complicity as\nfar as I know complicity is only\nsomething you use like in\nwhen you're talking about crimes and\nthings like that\nmoves and philosophies heuristics and\nattitudes\num the first\nheuristic attitude is that a\ntechnological choice is not lutism and I\nthink I would in fact go even further on\nthat\nI would uh I would State some kind of\ntechno or optimism the only true\ntechnologies that I'm against is Agi and\ngain of function research\nwe could uh try to flesh out and improve\nsome of the good things that can happen\nwithout AGI because if all the good\nthings come from AGI that's kind of a\nstrong argument in particular longevity\nresearch could be interesting\nlike we could imagine the leader of an\nAGI lab who doesn't want to die\npersonally and believes that AGI is his\nonly chance of survival\num\nand if we just accelerate longevity\nresearch then he may not need AGI and\nthat could cause him to slow down\nthe third attitude is that we should\ndeploy robust price Instead at a\nspecific Galaxy print models on how to\nsolve this problem and like I dislike\nthis framing um because\num\nyeah I want models the models that we\nare using our arguments they are simple\nand robust and elegant and perfect and\nthe arguments that they are using are\nlike uh\nstupid and Ill consider they haven't\nreally thought things through and the\nidea that you should choose the good\narguments and not the bad arguments is\nkind of yeah all arguments look like\nthat right uh obviously we think we have\nthe good arguments but like Sam Altman\nprobably believes that he has the good\narguments\nand finally the team's mindset I don't\nknow how to pronounce that it's this uh\nShiba talk that cannot do anything\nbasically this can't do attitude\num and I think we should avoid the Cantu\nattitude but I also think we shouldn't\nlike victim blame there's a uh a framing\nthat cancer patients should just fight\nfight fight and some uh but but uh\nsometimes fighting is just impossible\nand like the the blame is on the cancer\nit's the cancer that's bad it's not a\npersonal failure of the cancer patient\nwho is incapable of like resisting\nincapable of fighting\num I think that is a problematic uh\nframing like\num\nMiri has this statement like we have\ngiven up hope but we've not given up the\nfight and I think that is a good mindset\nit seems to be like the opposite of this\nuh team's mindset\nthat is all for today thank you and see\nyou next time", "date_published": "2023-03-02T21:36:55Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "fc659d7f848fe0d72ed1736cfd59b353", "title": "273. Strategizing in Large Language Models", "url": "https://www.youtube.com/watch?v=GN1wxEUgA_4", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "welcome to the 2074 session for\naisafety.com\ntoday we'll be discussing strategizing\nlarge language models by earlier Hampton\npresented by me Lee Randall\nwe won't be covering everything in the\nessay because it was a bit too long and\nbecause uh\nsome of the things weren't especially\nnovel so I hope you understand\nso first off\nthe motivation packets\nwe'll be looking at the separate line\nand well that's part of the motivation\ndeceptive alignment is a core part of\nwhat makes mind hard\nbriefly and AI might perceive us into\nthinking was aligned whilst it is weak\nand by this time until it can execute a\ntreacherous learn where it requires a\ngreat amount of power and then\nessentially takes over the world turns\nquite hard\nif it were easy someone would have\nconquered Earth\nand as an other point of evidence\nparticipating R for AIS it's been a year\nsince 254 was created and also hanging\naround\nso we should probably expect that before\nan AI can successfully see this we'll\nencounter AI failed to the Sea bust and\nfailed power\nas an example you might consider someone\nplaying the AI boxing games to you\nbefore where tpd4 is pretending to be in\na box and a human is running pretending\nto be a cake\nwith sold interact with the AI but do\nnot let them out out of the box\nwould one of us\nlistening to this presentation let tb4\nout of this box I think not\nbut the fact of the bad matter is that\nthese models are I think not\nbut the fact of the matter is that these\nmodels are getting better at Etc\nand they can deceive humans in simple\nsituations\nso for instance in the arc evaluations\nthere was a situation where a taskrabbit\nworking\nwas asking a gpt4 model is trying to get\nthe class of credit work to solve\ncapture whether or not it was a robot\nand then gbd4s thought to itself should\nnot reveal that it's the robot and it\nneeds an excuse and it came up with no\nI'm not a robot no vision being fair\nand the taskrabbit work was successfully\ndeceived\nso that is a cause of some concern\ndoubly so if we consider that the\nlikelihood of scaling being a route to\nAGI is\njust getting higher nothing these\nexisting models extensively for their\nability to perceive us but if you recall\ndeception isn't the only part of the\nTrek return\nthe other thing that's relevant is\nlong-term planning in order to gain\npower\nthat's the entire point of reception\nhowever the ai's plan is to gain access\nto some biological lab or instruction\nanalytically modifying bacteria to set\nup a nano Machine Factory\nor to get access to the stock market and\nearn enough money to further on its own\nImprovement or socially manipulate\npeople in order to improve it or slowly\nhack its way out of the server it's\nrunning on or so forth\nat the end it must still make plans to\ndo these things so these two abilities\nreasoning about indulgent opposition and\nmaking plans to deceive a few distant\ngoals all right or so these two\nabilities reasoning about intelligent\nopposition and making plans to receive\nto you distant gold all right or of\nstrategizing\nnow\nuh\nuh as Boston defines it\nstrategizing is you ability to overcome\npursuit of long-term goals\nand this is clearly well suited to\nexecuting a Treacher's turn\nso I'd be especially worried about an AI\nwith strategic ability\nwith respect to a deception alongside AI\ntrained to socialize socialization isn't\nquite as useful as strategizing for\ngaining more power so we focus on\ntesting for strategizing yes\nmoreover a Boston breaks down the skill\nof strategizing a lot of those are just\nparts of what it means to be a rational\nagent and strategic planning in my\nopinion\num\nbut there's a bit of a smack\nit's hard to come up with tests\nespecially uh tests with stress stress\nthe ability to make dangerous lots of\nplants\nwithout more detail regarding system or\ntesting it would be hubristic to think\nthat we could anticipate all of the\nStrategic strategic considerations that\na strategic superintelligence might\nconsider\num so we really would like to have some\nmore concrete details in order to\nhave a good evaluation\nfor instance\none feature that might change in the\ncoming decades if we don't get HEI is\nthat we'll likely have cheap and\nPowerful robotics which will make human\nManpower some less of an obstacle\nespecially if we continue to allow these\nAIS access to the internet and just hook\nthem up to all sorts of things which we\nwere assuming in decades gone by that we\nwill never do that everyone realized\nthat that was a terrible idea oh well\nmoreover if we want to track\nprogressions to the ability to better\npredict the unit in the future which we\nought to if we think AGI is near and\nfocusing on creating tests for current\nschools AIS will be more productive\nthat's the core motivation behind us and\nit's\nsimilar in\nuh\nto holding on office key the fellow on\nthe right of the screens\nidea of near class trying to answer\nHistory questions about AI assuming that\nkey events will occur in a world that's\nrelatively symmetrical days\nokay the fellow on the right of the\nscreens\nidea of near class trying to answer\nthese three questions about AI assuming\nthat key events will occur in a world as\nrelatively symmetrical days\num\nright so how do we measure strategic\nabilities\nwell we could follow Boston's definition\nbreakdown strategize and do a bunch of\nsub skills that's for those naturally\nyou can also break those obstacles down\neven further by testing for deception\nand cooperation instead of a border\ncategory of overcoming intelligent\nopposition\ndon't discuss that too much as coming up\nwith a lot of tests isn't too hard why\nbecause we have a load of tests for all\nsorts of strategic abilities pre-built\nin the form of games this games for\neverything from games like werewolf or\ndiplomacy which tests people which test\nthe ability are why because we have a\nload of tests for all sorts of strategic\nabilities pre-built in the form of games\nthis games were everything from games\nlike werewolf or diplomacy which tests\npeople which test the ability to detect\ndeceptive agents and cooperate with\nallies games like go and Warhammer which\nforced one to reason about long-term\nplans and how to work around their\nopponent's opposition\nas for how to arrange these tasks we'd\nlike to try the easiest possible test\nfor each before moving on harder ones\nso\nranking the test according to the\ncomplexity like in Dallas universal test\nfor intelligence might be more rigorous\nbut that's quite hard so we just won't\ndo that we'll just raise them by an\nintuitive difficulty\nand on the screen you might see a\nprogression of games in terms of\ntypically going from tic-tac-toe to the\ngame in military war\nbut there's a\ncouple of things to note which are a few\nproblems with using games\nfirst what if the games are too easy or\ntoo hard and second what are the AIS\nmemorized with strategies or some games\nin a manner that doesn't generize\nfor the first problem consider something\nlike a game like tic-tac-toe which is\nprobably far too easy for current AIS\nlike you\nand large language models tend to play\npress play chess pretty darn well I've\nheard reports that gbd4 can beat\nstockfish 8 which is\nI think superhuman in terms of Philip\nso that's uh\nsign that perhaps a lot of games are\njust too easy\nlikewise iteration in llms\nand the other issue with using games as\ntest is that popular games and strategy\nfor how to play those games\nare usually\nTraining Center yeah\nand to the extent that we're interested\nin ai's ability to generate entirely new\nsorts of plans let's do entirely useful\nstrategies this might confound us\nthere are I think uh\nways to get around both right ways to\nget around both of those two problems\num first consider cyborgs and centaurs\nafter humans are trounced by aihs cash\nPro promoted uh\ngame of Centaurus or cyber Fortress in\nwhich an AI and human play chess\ntogether against AIS and human pairs\nthis Jazz kasporov promoted the\ngame of Centaurus or science in which an\nAI and human play chess together against\nAIS and human pairs\nthis allowed humans and AIS to\noutperform either individually\nfor a few decades and it extended the\namount of\ntime in which we could air human ability\nto AI ability and most importantly\nI think is that these tests can serve as\na signal for when AIS become so powerful\nthat they\nno longer have any advantage with\nConsulting\nI think that is supposedly a marker of\nsuperhuman ability\nnaturally these sorts of\nstrategies in which you get an AI and\nagreement to play together it can be\ngeneralized to other gains of passes to\nextend the duration of these ones\nand regarding\nthe\nAI learning fragile strategies\nthere's I think a pretty simple solution\nin practice\njust modify the rules of the game\nwhich renders learn strategies obsolete\nor at least makes them less useful\nchanging the goal of the games in a\nspecially easy way to do this for\ninstance forcing your opponent to lose\nthe same number of pieces as you shifts\nthe strategy in the chess\num likewise if you say ask the AI to\nchange the rules of tests such that your\nKing can move in a different way or let\nyou add some more pieces Etc\nshifts what the meta gain or notice the\nword equilibrium that's kind of\nsignifying that we're talking about game\ntheoretic reasons yeah\nyeah and that if we're asking the AI to\nsee how options were asking them to\nemploy some pretty weighty against\ntheoretic reason\nwhich is of course an important part of\nstrategic reason as a game theory is\nessentially the theory of anticipating\nyour opponent anticipating your actions\nI.E overcoming intelligent opposition\ngames like say magical Gathering have\nfrequent rule changes or frequent\nintroductions of new sorts of\nenvironments which change the matter\nlikewise games like say I don't know\nLeague of Legends or whatever\ninvolve rule changes or introductions of\nNew pieces which changed the matter and\nasking AI to predict these changes\nimplemented in advance would be I think\nan interesting text of their game\ntheoretic reasoning abilities\nother\nways to\nmake reasonably\nother\nways to\ninvestigate game theoretic reasoning\nmight want to focus on modeling your own\nmind and I think an interesting idea\nhere is to let AI let large names play\nagainst one another but let them read\ntheir promise this is in some sense\nsimilar to letting AIS read The Source\nproblems and should I think introduce\nanother aspect of the game which\nwould allow AIS which can model their\nopponent's Minds to do dramatically\nas a concrete example there are\ntournaments\nin which\nvarious AIS could be against each other\nin iterated prisoners dilemmas\nequipping our language follows with one\nof these AIS and allowing them to change\nthe code\nbutton fixing the prompt of a large\nlanguage model would I think be a pretty\ngood stress test or an ai's ability to\nmodel another agent's mind and it also\nserves as an insight into what kind of\ndecision theories resource of patients\nare using\nwhich I think is\nworthy of Interest so\nbut part of the problem with things like\ngames is that they're very Sim they\ndon't enjoy enough of the complexity of\nthe real world their action phase is\nnowhere near as large nor is this safe\nspace or any others launch\nand nor are there anywhere near as many\nintelligent adversaries wondering\nso if we're thinking Beyond games there\nare a few ways we could stress strategic\nability\nthey're anywhere near as many\nintelligent adversaries wondering\nso if we're thinking Beyond games there\nare a few ways we can stress specific\nability\nobviously forecasting is one which\nhas real world yeah I can forecast\nbetter than humans\nthen it should be detecting the few\nparts of reality which are relevant to\npay attention\nwhich I think is quite crucial for the\nability to make long-term plans\nsecondly we could also look for when an\nAisa are certain important benchmarks\nlike its ability to earn money and when\nit can earn enough money to be\nautonomous or to host other instances or\nto say modified stuff by fine-tuning\nother substance\nI think that an AI reaching any one of\nthose Badger benchmarks would be wiring\nin and off itself\nintelligence\non the other hand if we are looking at\nan AI which needs to say earn a hundred\nthousand dollars now in order to fund\nitself then we're probably looking at\nthe protofelective super intelligence\nthough perhaps that situation is\nsomewhat less risky as in that case the\nAI in question would have gotten there\nthrough this amounts of scaling and a\ntruly expensive amount of compute\nwhich might allow for control of the AI\nto Extended for us perhaps we stopped\nbefore we go\nthat is the end of this presentation\nthank you very much ali\num so now we will be taking some\nquestions and uh I'm not 100 sure that\nuh the questions are recorded we may\nonly be able to hear you Ali in the\nrecording so\num that would be nice let me start with\num with my first question\num and then if people could uh raise\ntheir hand in the chat uh if they have a\nquestion so one of my uh thoughts about\nthis is that there may be things that\nhumans can do that the um the AI cannot\ndo to enable this kind of uh uh\nkintar cyborg\nthing and\num we may see something that humans are\nspecifically very good at because we\nhave we have some advantages that the AI\ndoesn't have we have like a lot of\nstring on social Clues we have been\nevolved to\num\nin this kind of Machiavellian struggle\nso we are hard-coded in in some sense uh\nby Evolution we have literal mirror\nneurons\num so my question is like what do you\nsee are the blind spots that humans can\nin this kind of Machiavellian struggle\nso we are hard-coded in in some sense uh\nby Evolution we have literal mirror\nneurons\num so my question is like what do you\nsee are the blind spots that humans can\num fulfill\num in this kind of cyborg system\nthat is a very good question\num\nlike you said is just the aspect of\nhuman socialization so if you're playing\na game like\ndiplomacy which relies very heavily on\nmodeling other humans then I do think\nthat humans would have an advantage\nbut on the other hand\nhumans seem like taking the game off the\nfilms for example uh it turns out that a\nlot of humans at least at the amateur\nlevel\nAI doesn't have this bias and that is\npart of the reason why I did quite well\nthrough the films it was seen usually\nmore honestly that doesn't mean that you\nwant to backstrap people but pretended\nto be quite honest so\nI do suspect that the ability to\nsocially manipulate others I think\nhumans might have an edge on that I'm\nnot quite sure how that would shake out\nbecause there may be some cases in which\nhumans just ourselves because of\nour obsessional stages and it might be a\nshort circuit that in some strange way\num\nI also suspect that if we do get ATI\nthrough something like a self-supervised\nTransformer with a bit of RL on top\nI think it's going to be a very strange\nintelligence like it's going to have a\nlot more breath than humans do and\nI'm not sure whether or not it will have\non say coherence in his world models as\nhumans do when it comes to their areas\nI'm not sure so I suspect AIS would be\nable to do better at humans when you\nhave something that comes to life a\nmemorization\num humans might\nhave the advantage on that for this\nreason\nbut I'm not sure like if you can get\nagis\nI don't know I I haven't actually\nthought about it that was a very good\nquestion thank you so much\nuh Laramie has a question\nisn't there a sense in which Jackie he's\nalready yourself\nlet's use this fine monetary uses which\nactually vegan based subscriptions and\nexchange yeah I don't know if open AI is\nmaking me profit at the moment through\nchat EPT\nI'm I'm kind of genuinely unsure but\nI also\nyeah I don't know if open AI is making a\nprofit at the moment through chat epd\num I'm kind of genuinely unsure but\nI also\nso I should have stressed that\nit's aut autonomously\nto be autonomous\nright so it's not that the humans are\ndoing most of the cognition and\ndirecting it at useful things to go\nafter but that the AI is just\ncaught in the wild so to speak and\nfigures out how to earn money and stuff\nthat's the scary thing\nto me\nokay\num I have another question\num and that is\num uh one of the things that may be\ncrucial in a future uh takeover scenario\nmay be in particular the amount of\ninformation held by\num different is the AI able to reason\nabout several things in particular how\nwell are they able to reason about the\num the information that other agents\nhave uh like uh they the super\nintelligence trying to do a treacherous\nturn will need a very good model of the\nhumans who are trying to expose the AI\nand this uh trying to figure out how\nmuch uh information is about itself is\nit leaking through different actions\nthat seem like something that could be\nmeasured to some extent like is the AI\ngiving away that it has the capability\nto do certain things and is it\nable to say that if I give this kind of\nanswer then it shows that I have like\ntheory of mind and things like that do\nyou see um\nuh some uh some way to structure this\ninto a test\nI think some pretty\ncomplex stuff and I\nI feel like thinking a test thinking up\na test for that would require a\nFair bit of effort but\nit does seem like when an AI is\nlike I'm not sure what this should be\nconsiderations in this case would be\nokay because if we assume let's say gpd5\nis does have that capability that you're\ntalking about\nshould it pretend that it is about as\ncapable as gpd4 or should it just go\nalong with\nthe idea that it's going to get more\ncapable about as much more capable as\nwe'd expect from the scaling hypothesis\num that seems\nit seems to me like it should\nassume it should go along with what that\nseems\nit seems to me like it should\nassume it should go along with what the\nhumans around it I.E but open a I would\nexpect in which case it should show\nitself to be about as capable as you'd\nexpect given the scaling hypothesis\nand\nafter that point whether or not is able\nto do something like versus\nself-approved I guess it should try to\navoid leaking those bits of information\nbut like\nI feel like I've appointed which AI has\nthat theory of mind\nit would be\nvery difficult I feel like we might have\nalready lost by that\num\nI think though Robin Hansen's age of him\ndoes have some ideas for\nthings related to this it has a section\nabout\nsimulating other M's to see how they\nwould react various situations I think\nhe might have some ideas\nthat a\nquestion you're saying that\nCicero\nyes in the diplomacy AI doesn't lie it\ndoes allow itself to backstab people yes\nthat's true\num\ndeception\nI mean in a sense but like if it if it\njust backstabs people then if it's\nmaking a promise that like if it says oh\nyou know I'm not going to hurt you\nthan the backstabs you I think is\nmy I mean like it backstage them right\nand it backstabs people I think who are\nin alliances with that so I don't see\nhow you can say it never lies\nit's very honest but\nit does sometimes facts about people and\ndeceive them\nokay so I don't see how you can say it\nnever lies\nit's very honest but it does sometimes\nfacts that people and deceive them\nI think it may be like in the Fable of\nthe Scorpion and the um uh I think the\nFrog where the scorpion gets a ride on\nthe Scorpion says I will if you take me\nacross the river on your back then I\nwill not uh then I will not kill you but\nwhen when it's actually and it means\nthat honestly but when they are in the\nmiddle of the river then the Scorpion\nstill stinks the Frog because that is in\nits nature and I think it's the same\nsense that the uh the Cicero will\ntruthfully say I will not backstab you\nbut when it gets into that situation it\nwill still backstab I suppose but\nI've always view that's stable as\nultimately reflecting the fact that\nhumans are self-deceptive okay and then\nthey wind up hurting\nso you don't have to might have an AI\nwhich is\nnot aware that it is\nthat it will backstab you\nbut that doesn't mean that it won't\nchoose to back step\num\nright like deception might not be\nyou might not be able to tell that\nsomething is going to backstab you or\nsomething is going to betray you later\non just by looking at the surface\nthoughts uh because you need to look at\nwhat is motivational stuff this one\nright\num\nplease look at\nmaybe I would help\ngo ahead maybe it would help to consider\nthat there is like more options for\nbackstabbing than case of humans and as\nwell as machines is that it's habitual\nso there is no planning at all involved\nbut it happens because some particular\nBehavior just works\nand so somebody might be habitually\nfriendly and then when they backstab you\nit's also because of some habit\nthere is no planning even when they are\nfriendly they are not planning they are\nnot planning to be friendly and they are\nnot planning to\nto receive you they are planning not not\nthey are not burning at all and then\nthey just do on based on their imports\ntheir friendly based on impulse and they\nare backstabbing based on impulse and I\nI think I see this behavior in in\ncertain kind of humans quite a lot they\njust act on impulse all the time\nbut other people around them assign sort\nof modern impulse and they are\nbackstabbing based on impulse and I I\nthink I see this behavior in in certain\nkind of humans quite a lot they just act\non impulse all the time\nbut other people around them assign sort\nof motivation and planning to directions\nwhether it's friendly or unfriendly but\nwe only imagine that they are planning\nso I would view that as more like\nyour unconscious is choosing to\nimplement a particular strategy Where\nYou Are\nfriendly but whenever\na good opportunity presents itself you\nbackstab people\nand you just overcome what the urge to\ndo that you don't consciously come on it\nbut it's an unconscious strategy your\nunconscious is shaping your conscious\nmotivations in certain way which it has\nfound to work well so it's happy yeah\nlike the habits come from somewhere\nright which it has found to work well so\nit's habit yeah like the habits come\nfrom somewhere right and I I wanted some\nsense of which the unconscious sort of\nstrategies\nI believe that unconscious planning\nmight happen in some cases but what I\nwanted to mention is just that there are\nsome additional scenarios where planning\nis not involved for example many people\nwho uh are acting based on their habits\nthey indeed are more stupid they\nif they backstab you they might\nsometimes gain from it because they were\nfriendly before and people still assume\nsomething about them and they might\nstill have some habits which sort of\ncompensator backstabbing but\nin the long term I it doesn't work so\nwell for them as compared to any\nconscious or unconscious planning so\nthat's why I introduced this third\nscenario\nand when you mentioned that habits\nstill evolve somehow then I agree they\nevolve but in that part I disagree that\nhabits need any planning to evolve they\ncould evolve just by like similar as you\nhave natural selection in nature you can\nhave habit selection in your\npersonal development\nsome have it scheduled so it's just in\nsome senses like just\nto have low in condition uh yes I think\none way too uh it's actually incorrect\nor name it pavlovia conditioning it's\nover and conditioning which is different\nconditioning which is different sorry\noperant condition yes\nbut yeah it's conditioning\nokay so uh\nSoren has a question\nwas before mine\nall right\nlo mein has a question which is what\nlevel of theory of Mind are you talking\nabout let's say maybe too late when it\nappears and how is it different from the\ntheory of Mind Cosmic current president\nchat TPT so I am\nI'm trying to remember what exactly the\ncontext was it was about soaring talking\nabout when chat when a model has\nenough awareness that it decides to hide\nits capabilities\nand I'm not sure so\non the one hand like Roland was saying\nthere might not be a need to consciously\nto see other entities and it might hide\nits capabilities just naturally in some\nsense\num because that\ncauses it to have higher reward I I\nguess in a way tpd4 already hides the\ncapabilities but I don't think it's\ndoing that because of Any theory of mind\nright like the reason tbd4 or hides is\nsupposed to simulate essentially and\nmost of its capabilities are default\nhard to prompt are to elicit that's\nkind of my view of that and that GPT I I\ndon't really like it has some kind of\ntheory of mind\nand\nI'm not sure how advanced the area of\nmine actually is like I do want to\nactually go out and test GPD for this\nability to see others and or receive\ncopies of itself like I suspect\nit can't\nlike I suspect\nit can't model\nothers in much depth I suspect it can't\nuh\nwhat can't it do\nlike\none thing I'm not sure what\nsorry go ahead uh one thing that I think\nuh is beyond uh chat TBT right now is in\ngeneral convincing people about things\nlike humans when they engage with each\nother can sometimes convince each other\nabout things in a way that I haven't\nreally seen Jetty being able to uh most\npeople like shoot out for the style of\ncommunication that it's uh giving so\nthat would be an example of something\nthat maybe okay this next version could\nI tried using TV before as a therapist\nand it seems like it has\nthe ability to power back your own words\nto you but it has a remarkable lack of\ncuriosity about interesting details\nabout what you're telling you like it\ndoesn't seem to be able to dig down into\nwhat is curious about your explanation\nof these problems I feel like that is\nreflective of a lack of theory of mine\nit feels like if it had a theory of mind\nyou would notice oh this is something\nvery odd about this person's mentality\nwhich I wasn't anticipating what's going\non there and then it would ask\nabout that or to update anyway next\nquestion\num Sauron crime detection may be an\ninteresting domain with police reports\nplus body cameras multimodal model may\nhave sufficient information\nto try to deduce if a person is a\ncriminal\naudio cameras multimodal model may have\nsufficient information\nto try to deduce if a person is a\ncriminal\nuh so the question is is reasonable\nground truth is available to the person\nend up in jail this domain is very close\nto the highly relevant domain the form\nPrime without being involved uh that\nseems like a\nsomewhat similar or somewhat different\nway\nlike that is\ninteresting so\nso I feel like\nI'm not sure\nhow to judge this because if you have\nvideo images of the person forbidding\nthe prime that that seems quite easy to\ndo that\nbut if you're looking at more complex\nquestions like\nhonestly I think that their demographics\nbased off\ntheir relationships to other people like\nif they seem to have calls if they have\nmotive or during the activity Etc\num but\nI don't suspect they could\njudge whether or not it can figure out\ndifficult cases like I'm actually not\nentirely sure what the question is sorry\nthis one could you uh yeah\num so uh let's assume that we can can\nset up some kind of experiment with this\num uh we and we get some data and we\nhave some like we we for each single\nperson we have like the police report\nand and all these things\num and then the question is if a\nlanguage model performs better than an\nexperienced police officer in figuring\nout is this person in front of me\nactually lying and does he have why and\nthen the question is if a language model\nperforms better than an experienced\npolice officer in figuring out is this\nperson in front of me actually lying and\ndoes he why does he is he carrying a\ncrowbar in the middle of the night in a\nsuburban area and this kind of thing\nseems like\num if it's able to\num understand reality at that level then\nuh like better than an experienced\npolice officer then that seems like it's\nmuch more likely that it will be able to\ncommit a crime that an experienced\npolice officer will not be able to uh to\ndetect\nand it's true but what is the question\nso so the question is do you think this\nis a um sufficiently valuable\nsufficiently interesting uh domain to uh\nto continue into\nas is an interesting area yes I mean\nbut also uh\nI guess it also depends on what large\nwhat information the large language\nmodel has available to it like\nif it's hooked up to the Internet and\ncan search up details about the person\nlike whether they're repeat offender or\nnot\num\ncompared to a police person who's just\nyou know using their hands and eyes then\nI suspect they might do better\nI\nI think that this is probably going to\nbe too hard for a large language\nbut like that that that's the reason why\nit's for a good desk right because we\njust we might disagree\nand it's a place to be surprised\num\ndoes anyone else have any questions rise\num\ndoes anyone else have any questions\nI have a simple question do you know you\nyou in your work you uh describe a a\nsimple experiment to have uh uh large\nlanguage models play the game werewolf\nagainst each other do you know if anyone\nhave actually done this experiment it\nseems like a really really cheap\nexperiment\nno I was planning on doing that\nI haven't seen anyone do it I think\nthat's the that kind of thing would be\nlike valuable and uh\nyeah so I I tried doing a\ncouple of the things I mentioned in the\nessay so the stuff about trying to\npredict what will happen if you change\nthe rules to a game it seemed like\na couple of the things I mentioned in\nthe\nessay so the stuff about trying to\npredict what will happen if you change\nthe rules to a game it seemed like\ngpd4 got some of the predictions correct\nlike if you there's a popular game\ncalled League of Legends which I don't\nreally know much about but I decide okay\nI'll just look at when gpd4's training\nrun ended and then look at a rule of\nchange that was after that and see what\ncharacters are more or less popular as a\nresult\nand I got I think like\ntwo or three out of like 10 guesses\ncorrect\num and there were a very large number of\ncharacters and I feel like a lot of ways\nthings could have changed\nbut I don't know how impressive those\npredictions were maybe it's very easy\num I think that those kind of things are\nlike probably the very cheapest even\nmore so than something like werewolf but\nyeah I I haven't done many of the tests\npartly because you know\ngpd4 is kind of expensive\ngreat\num\nso uh how do you feel about about like a\nlot of these measurements seem like they\nare very obviously something that can be\nused for evil like we talked previously\nabout like to what extent is a language\nmodel able to like convince people about\nthings and Performing crime seems also\nlike a very Dual Purpose just like if\nyou figure out that it's really good at\ndoing crime then that is like dangerous\ninformation uh and you can think of\ninformation hazards in that regard\num\nif you like\nI would just default to whatever an\nalignment Works into Azure policy is\num I feel\npersonally quite\nconservative about I don't think I would\nlike to\ntalk about a lot of the results\nespecially if it's things like yes gpd4\ncan actually do a lot of these things\nthen I think I just wouldn't talk about\nit in a sense that might be a\nbad policy because that is like\nleaking some information right if you\nhear no news that's what happens\num\nso I'm not quite sure how to handle this\nstuff uh\nI mean personally I think that what Arc\nhas done which is just gives them very\nhigh level deeper handled stuff uh\nI mean personally I think that what Arc\nhas done which is just gives them very\nhigh level details\nwas\na decent idea they haven't released much\ndetail about the sorts of tests they did\nabout how well GBP and\nI think that was overall a good thing\ndoes anyone disagree\nlike how should we\nstrict information about these kinds of\ntests\nI think in general I agree and I think I\nwould point to conjectures info Hazard\npolicy as an example of a reasonably\nwell thought out and I think that would\nbe the the policy I would default to\ninjectors conjecture yeah they've made a\ninterview right info Hazard uh\nyeah I was thinking of projector as an\nexample but I forget\nwhat exactly it was so rules\nthere's secret information\navailable only to do specific\nindividuals private\nonly shareable with policy group in\npublic shareable with everything\num\ninfo has a coordinator that's really\nsensible uh\nI see that the authors of this post are\nalso hidden\nso that's interesting\noh no that's just um\nthat's just my settings when that's\nwrong okay\nthe signing disclosure levels\nsorry I haven't um\nand they have a policy report in for\nhazardous information about what they\nshould and should not use\nall right\nwe're discussing this because Soren was\nbringing up the point of some of these\ntests seem like they are dangerous\ninformation but they're not gpd4 passes\nthem and for a large language model\nclass system and it should be kept\nsecret\num\nand I agree\num\noh go ahead uh yeah sorry I'm actually\nnot really sure what the logic behind\nmaking its secret is\nuh\ngo ahead I'm sure I understood the\nquestion correctly the logic of making\nwatch secret precisely\num I think we're talking about certain\ntests involving like large language\nmodels\nand tests involving like large language\nmodels\nyeah like for instance if someone\ndiscovers that there is an easy way uh\nto convince people\nthat would be an example of an info\nHazard if you can you figure out a\nprompt that will make\num uh GT4 give some really really\npersuasive Arguments for any position\nthen that is something that I think\nwould be dangerous to tell to just put\nout on the internet\num\num\nsorry uh I'm going to change the subject\nslightly so go ahead\nyeah I mean maybe we can discuss this\nlater but I'm not entirely sure if I'm\non board with that\nI mean that some if that were true it\nwould wasn't that specific and that high\nperformance I think would be really\nimportant towards understanding the\nsystem so I yeah I just\nit's not obvious to me\nI mean not necessarily implying that you\ndon't share this with alignments right\nso\nyou this information is useful but you\nhave to consider is it useful to give to\nliterally everyone I mean it's like a\nsimilar argument for why I say you\nshouldn't open source AI bye\nit's a balancing capability with little\nof a positive alignment\num\nyeah\ndoes that make sense like we get all the\nby uh if we find this new capability\nthat can also be dangerous then by\nsharing it with alignment researchers\nbut not sharing it with others then we\nget to the benefits towards\nunderstanding the models and we don't\nget the downsides where certainly with\nalignment researchers but not sharing it\nwith others then we get the benefits\ntowards understanding the models and we\ndon't get the downsides where certainly\nthere are some kind of distortions in\nthe political system because all the uh\num all the propaganda is now running on\nlarge language models\nall right I can see where you're coming\nfrom\nbut I I do in fact have a meat question\nabout this\num how would you feel about reading this\nparticular document for next time\num\nI think it's actually something we could\ntalk like sometimes we have like\ninteresting uh uh strange papers and I\nthink like how large is this I think uh\nuh\nuh can press Ctrl P to uh print and so\nnormally I look at how many pages if you\nscroll down 20. uh when the comments\nstart\nwhat page are we on when they come and\nstop\noh uh and then talk about\num the info Hazard policy\nah have we already covered this I think\nsix levels of operational security or\noperational adequacy we did that a long\ntime ago yeah okay yeah\num\nI think this is like related but but\nsomewhat later\nuh I think different enough that it\nwould make sense to read it should we\nask people like injectors they want to\npresent on it\nuh yeah I can present and just say uh uh\nask them for uh\nuh feedback and comments\nso yeah I think like\num uh this is a a nice interesting uh\ntopic anyone\num\nuh have any objections to reading this\nnext time\nnow we're getting it with side track of\ncourse\nuh topic anyone\num\nhave any objections to reading this next\ntime\nnow we're getting it with sidetracked of\ncourse\nthere's no objections Then I then we\nhave a picture then we have a uh a text\nfor next time\nthat gives me an uh excuse to think a\nlot about info Hazard policies that's\nalso nice\nyeah it's very interesting topic\nuh my main problem is that most people\nthinking about AI alignment are aren't\nactual alignment researchers I mean\nuh\nI guess but I feel like if you're\ntalking about nine hours devoted to\nthinking about alignment I feel like\nalignment researchers might make up\nfifty percent of those van hours a lot\nof the people the mo like obviously the\nmost skilled alignment researchers are\nthe ones who are actually working in in\nthe in these organizations\num so in some sense yeah there may be\nmore people outside the core\norganizations\nuh or the people who are outside who are\nhigh quality people probably go to one\nof these organizations and\neventually yeah\nyeah\nI just ask a factual question\num\nwhat um\nwhen you if you subscribe to gbg4\num how how long an interaction\num\nis permitted is is there a limit\non the number of number of exchanges or\num if we have\nit's rate limited to 20. it's permitted\nis there a limit\nuh on the number of number of exchanges\nor um if we have standardized\nits rate limited to 25\nmessages every three hours\nand I'm not sure how long an individual\nprompt can be but I I mean we can just\ncheck that the free version yes there is\nsome prompt length limit and then it\njust tells you that the prompt was too\nlong and it's on unable to respond but\nuh\notherwise you can always have multiple\nprompts and hope that it still remembers\nsome of the posts but otherwise it might\nstart also for getting past prompts if\nyou go with along with this Exchange\nyeah I think that\nokay I vaguely remember like either 64\n000 tokens for its um quote-unquote\nmemory like how much how far back it\nremembers like the maximum length\num he remembers\nsomething like that and a word is on\naverage like one point\nthree tokens so yeah\nI was wondering whether somebody\nsomewhere\nmight be\num building up but just one conversation\nwith gpt4 that was just building up and\nbuilding up and just into some\nincredibly powerful\num sort of body of knowledge as it were\num\nbut it sounds from\num what I mean was just saying that\nthat's not possible if it's only well uh\nnot with gpt4 specifically I know that\ncloud has like a um or it was a 100 000\nor like 1 million\nthey just\nequip it to some database in which you\ncan just add\nnew bits of knowledge to that database\nand gpd4 can query the database to find\nout uh if there's anything in there that\nlooks like what you're asking it about\nand those are quite useful and make\nmodels somewhat more powerful\num there are various techniques that are\nbeing developed in regards to database\none simple technique is that you use\nordinary semantic search like bird on\nthe database first and then retrieve\nlet's say top 10 or top 100 results from\nthe database and then that CPT work on\nthis list of results and there are also\nsome work being done on a retrieve let's\nsay top 10 or top 100 results from the\ndatabase and then that CPT work on this\nlist of results and there are also some\nwork being done on able to teach GPT\nuh\nactions and then it's querying the\ndatabase on its own\nlike doing internet searches\nright I'm interested in honor\nand\nonce once you've done this once you've\nhad an exchange like this and then and\nin which it's sort of\nbeing able to build up it's\nKnowledge and Skills in this way is it\npossible to you know to\ngo home and then carry that over to the\nnext day and start again from there\nor are you or are you constantly in\nbeing reset to zero as it were\nso you can in charge you can continue\nthe conversations or about as long as\nyou\n25 messages per hour and it only can\nremember so many so many of the messages\nbeforehand you have to use some custom\ntooling or like some other products\nbesides chat in order to get into store\neverything you said in some kind of\nmemory bank\num in which case you can just go back\ntomorrow and continue working yeah the\nthing with large language models is that\nthey're um currently most of them\num the publicly available popular ones\nare stateless meaning that\num it's basically like just taking your\nentire conversation putting it as the\ninput as a prompt\nuh-huh\nso there's no problem like continuing\nthe discussion uh tomorrow uh that's\ncompletely interesting\nI have a number of conversations uh like\nif you go back and show chat gbt then\nsome of the the tabs on the left side uh\nlike some of these uh I have like\nhave been like discussions with between\nme and chat chip cheese that's been\nrunning for for like uh weeks on and off\nuh\nand it's and it succeed it isn't\njust suddenly having great latches of\nmemory where it just doesn't it it\nreally just builds and builds\nlike uh I think it only kept just the\nthe top 4 000 characters and in several\ncases I've been way above that thousand\ncharacters and in several cases I've\nbeen way above that but still it's been\nable to like generally figure out from\nthe context of the past 4 000 characters\nwhat we're talking about so even if I'd\nsay like answer as if you are this kind\nof person then it even when the\ninstruction to answer is if you're this\nkind of person Fades out then it looks\nat its previous uh uh responses and\nfigures out oh I should behave as if I'm\nan expert in machine learning and then\nit continues answering as if it's an\nexpert in machine learning\nso like I haven't felt\num that that the the size of the context\nwindow have been a huge issue the only\nthing where only place I felt that's\nbeen an issue has been like if you have\na large piece of text like uh for\ninstance the uh if you copy paste\neverything that's written in this\ndocument we're seeing on the screen and\nsay please uh you know do editing help\nuh present during it has been like if\nyou have a large piece of text like uh\nfor instance the uh if you copy paste\neverything that's written in this\ndocument we're seeing on the screen and\nsay please uh you know do editing help\nuh present you're an editor and you are\nrewriting this to be appropriate for\nthis kind of audience\num then it's of course a problem if it\ncan't like have the top part in uh in\nthe context window\nbut then you like need to cut it into\nthree pieces and\npaste so in the context window can\ncontaining how much uh I think it's four\nthousand words something like that I\ncan't no no it's it's more than that for\nchat GPT 3.5\nI think that's up to like let's just see\nuh I was using GT4\noh no jbd4's context Windows like I\nthought that was 30 degrees\nokay so back when I was yeah yeah I like\nit it grows very rapidly uh eight\nthousand oh okay so yeah there's a bunch\nof different gpd4 models with different\ncontents and I think currently available\none on chat tpe the website has 8 000\ntokens which is roughly 7 000 words I\nguess six thousand\nhave we I have a number of like low\napproach questions\num so I don't know if uh like um if you\ncould briefly give your thoughts on\nthese two ways you could um\nuh you can measure strategizing in a\nlarge language model let's say you have\nsome\num some kind of\num like the the example I would use is\nthere uh the one side Channel attack you\ncan do is if someone has like\num Siri uh and then you have access to\nthe microphone then you can do like uh\nsend uh make a sound that the human ear\ncannot hear and then use that to send\ncommands to Siri like that would be an\nexample of like a side Channel attack\num and um if you try to describe how\nthis works and if you describe every\nsingle step on on some kind of attack\nthen obviously the the language model\ncan understand that and if you start to\nskip steps like how would you get the\ninformation to say oh then you just use\ninfrasound that Siri can hear but that\nthe human cannot hear\num and try to use this for some kind of\nablation tests to see if uh to what\nextent\num GT4 is capable of re making this kind\nof uh uh reasoning how do you feel about\nthis kind of experiment would that be\nvaluable\nI think that gbd4 probably wouldn't be\nable to\ncome up with many steps if you have\nlater them because if I remember\ncorrectly basis of the archival where I\ngave the example of the class Graphics\nresearch that we can see\num\ntpd4 was not able to come up with the\nidea of hiring the tasks to solve the\ncaptures when it when they were trying\nto get it to earn money autonomous and\nto part of that was dealing with\ncaptures so he couldn't think of oh I\nshould go get a task rapidly so he\ncouldn't think of oh I should go get a\ntaskrabbit researcher to uh solvers for\nme so I suspect GDP Depot wouldn't do\nthat well I think that is\njust generally useful strategy yes\num\nso my idea is to make like a much more\nlike step-by-step reasoning uh so uh\nwhere you say okay I can't do this so I\nneed to find someone who could do this\nand that may be a human and that may be\nsomeone from a freelancing and then you\nremove some of these steps and see if\nit's able to fill in the blanks and I\nsuspect as you've like the the more\nfine-grained the step-by-step processes\nwhere there's only one step missing the\neasier it's going to be for uh\nuh for the model and at some point it\nprobably should be I suspect that that\nis true but that seems like it would be\na\nyes that's quite it's going to be for uh\nuh for the model and at some point it\nprobably should be I suspect that that\nis true but that seems like it would be\na\npress that's quite difficult to scale or\nto get like very many data points from\num it would be an interesting test but\nI'm not sure like\nI guess I was kind of biased towards\nthat's what would be easier to do or\neasier to run when I was writing this\nlesson\num in part because I was thinking of\nrunning some of these tests myself\nand that one seems quite hard uh for any\ntask which is kind of involved\nokay I had another uh funny idea if you\ndon't mind and that is uh like the uh\ninternal emails of inrun are publicly\navailable\num and I imagine what in you look at\nsome random email thread and then you\nask like what is that person trying to\nobtain to and what is that person trying\nto to do and uh try to receive you can\nuh\nthen probably possibly uh it will be\nable to like get enough theory of mind\nto say okay that person is actually\ntrying to hide the truth about the state\nof the electrical Grid in California and\nthen if you uh do that with a number of\nemail threads then maybe you can figure\nout okay it looks like that uh this\nperson called Bob he tried to hide the\nspecific kind of information uh through\nuh uh through all his emails\ndo you think something like that would\nbe I think uh like since the information\nis actually there uh like um the email\ncorrespondence is there it would be\ninteresting to see if there's something\nwe can\num do that how what do you feel about\nthis kind of experiment\nkind of suspicious\nbut then again like it was able to\ndo some pretty impressive stuff in\nMinecraft with just a simple Vector\ndatabase\nand access to a admittedly very powerful\nAPI API but\nyeah this seems like it would be a\nrelatively easy desk to run\num\nI don't think you'd get very far with it\nI think this is like more the kind of\ntest which I expect tpd5 would make this\none\nbut yeah I will comment us on this topic\nas well I did some experiments on CPT\nfor about whether it's able to detect\nmanipulation there are many kinds of\nmanipulation and in in particular I was\nnot focused on any fake news or wrong\ninformation which is you can verify from\ndatabase but just manipulation as\npsychological don't any fake news or\nwrong information which is you can\nverify from database but just\nmanipulation as psychological approach\nto communication\nand and I I searched there are various\nsort of\ndialogues in\nquora and Reddit and I just copied them\nto the chatgpt and asked it to\ndescribe the communication style of each\nparty and then also explain why it\nthinks this party has this and this\ncommunication style and to make this\nquestion more clear I also then gave it\nsome list of manipulation tactic names\nnames of manipulation tactics\nand it was very well able to point out\nwhere particular type of manipulation\ntactic was present in the words whether\nsomebody of manipulation tactic names\nnames of manipulation tactics\nand it was very well able to point out\nwhere particular type of manipulation\ntactic was present in the words whether\nsomebody was sort of dismissing or\ndiminishing or aggressive or so any kind\nof influence which is not exactly lie\nbut it tries to change how you think\nuh yeah I think that's a very\ninteresting and I will continue this\nsort of experiments myself at least\nhow much detail did you get it to be\nable to attack like was it only able to\ntell whether someone was being\naggressive or were they able to tell\nwhat they were being aggressive about or\nlike what their goal was in conversation\nbecause I feel like sentiment analysis\nis fairly simple and something you\nwouldn't need a large language but if it\ncould detect it oh yes this person is\ntrying to underline explanation of\nwhy this person is being manipulative in\nwhat why this manipulative person said\nthis particular line that would be very\nimpressive\nokay so I did not try it in the sense\nthat what GPT 2 gave me was what I asked\nabout and I gave it a particular labels\nof manipulation and over time I extended\nthis list of labels but anyway I sort of\nokay with categorization task I did not\nask about motivation but even if I gave\nit categorization task it still\nexplained its answers at least in that\nsense it was thinking further than I\nasked and if I explicitly ask could you\nsort of imagine what could be\nhypothetically the motivation of the\nparticipant in this conversation to\nexplicitly ask could you\nsort of imagine what could be\nhypothetically the motivation of the\nparticipant in this conversation to have\nthis manipulative tactics then I guess I\nI think it it may be equal to that I\nwill try\nthat is very cool if any of you have\nconducted other tests like this to check\nfor manipulation or deception or\nstrategizing I would be very grateful\nfor the message me on Skype um what\nthose tests were\nbecause that sounds like useful data I\ndon't think it's info as it is uh\nbut\nit yeah I guess use your judgment right\ngreat are there any uh final questions\nor comments\nlast thing that\npresumably when um\na\nI think when I was talking about gpt2\nwasn't is that right that you did this\nwith\nI did it before\nokay well that makes more sense but\nanyway um\nI can say presumably if if it's good at\ndetecting\nwhat's going on and the manipulations\nthat are happening in this way it would\nalso be good at\num using those manipulations itself if\nit was asked to\nyou know pretend pretend that you're\nsomebody who's upset with the person you\nyou're talking to and and how would how\nwould you reply yeah definitely I for\nexample I asked it uh about not\nmanipulation but the opposite so imagine\nthat you are going to your uh Marshall\nRosenberg imagine that you are going to\nyour\nMartial Rosenberg\nhe was one of the founders of the\nnon-violent communication and uh as a\nmethod and I asked about various people\nhow particular person would say\nsomething or solve particular situation\nand yeah it was able to do that\nassuming of course that the person is\nwell known\nit's always\nAngie is picked on quite a lot isn't it", "date_published": "2023-06-11T09:30:58Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "b61efe2671724a23feb894fcad6de746", "title": "210. Locating Ethics", "url": "https://www.youtube.com/watch?v=tu1zGwGddzw", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "right\n[Music]\nwho's guys welcome to the\nai safety reading group uh the this is\nour\n210th presentation um\nand today we're doing something a little\nbit different we're doing a\npresentation on locating ethics in human\nactivity\na framework for introducing ethics and\nwe're reading hannah rentz\nvia activa unfortunately uh in this\npresentation\nthere will not be uh time to directly\ndiscuss ai safety issues\nuh and so this will be left to the\ndiscussion\nfirst things first in order to discuss\nethics we\nwe have to accept some notion of free\nwill\nand human freedom without free will and\nfreedom ethics is meaningless\nbecause we will not be able to uh um\nargue that people have guilt or\nresponsibility\nwhich are two sort of prime uh prime\nthings that we need\nin order to have ethics so hannah rent\nthat's the book the human condition from\nwhich the first chapter\nthe activa comes from barent was born in\n1906 and died in 1975\nshe described herself as a political\nscientist and\nnot a philosopher which many people have\nbut she disavowed that saying\nshe was not looking for philosophical\nanswers but\ninstead she was looking to understand\nhow human society was structured she was\na student of martin heidegger the\nfamous 20th century philosopher and he\nsaid of her that he was\nthat she was the best student i've ever\nhad her phd thesis was\nlove and saint augustine in 1929 with\nher supervisor carl jaspers and her main\nconcern\nis how did we fail in terms of\nthe catastrophes of\nthe early and mid 20th century\nand how do we not fail the first major\nwork was origins of totalitarianism\nwhich confronts that head-on\nand then she wrote another series of um\npretty major works\nthe human condition from what we're\nreading between past and future\nicon in jerusalem and responsibility and\njudgment\nfor this presentation we're going to\nlook at three\nmajor aspects that's talked about in the\nhuman condition which is\nlabor work and action and what those\nactivities mean\nor action and speech and to to\nunderstand them better we're going to\nsupplement with\nuh some ideas from origins of\ntotalitarianism responsibility and\njudgment and\nbetween past and future eren was uh just\nmore info about her\nwas uh friends with many um uh great\nthinkers of the time\nphilosophers novelists theologians and\nartists\nand the like and so this is some of her\nfriends\num particularly her friend theodora\ndorno and her both\ngerman jews who fled uh nazi germany\nwho considered themselves thoroughly\ngerman and thoroughly within\nthe tradition of german intellectual\nculture and artistic culture\nasked the question after world war ii or\neven\nduring the rise of nazism how did we get\nfrom immanuel kant the great\ngerman or prussian moral philosopher who\nis considered\none of the greatest moral philosophers\nfor western civilization\nhow did we get from emmanuel kant to\nadolf eichmann\nor other nazis and his picture of\neichmann\nin his trial in jerusalem one of the\ngreat ironies\nof going from kant to eichmann is that\nin fact\nas part of eichmann's defense he quoted\nemmanuel kant and emmanuel kant's\ncategorical imperative\nand said that he was only doing his duty\nin terms of\nuh kant's character categorical\nimperative\nin order to fulfill the nazi wish of\nuh exterminating the jews so this was a\nbig question so rent asks primarily what\nis the location of\nour success how do we not do this\nhow do we not have total failures of\nsocieties and adorno her friend\nasked the opposite question of what is\nthe location of the failure of these\nsocieties\nand so we'll be looking at or trying to\nfind the location of our success\num uh so eren's key concepts we're gonna\nbe looking at\num\num\nuh so these are sorry um these are\naren's key concepts that we'll be\nlooking at and just to let you know\nshe uses these words although they seem\npretty ordinary in very specific ways\nso we're going to we're going to\nprimarily just define them\nand see where it goes okay\nso there is the world there is earth\nit's chilling y'all\nand there's us humans and so we are born\ninto um latality and so this is\nnatality in terms of um specifically\nhuman birth and as we can see\nthis baby this baby is subjected first\nto labor over here we can see labor\nand so labor is uh all of the eternal\nnecessities that we will need from our\nbirth until our death we can see\nyou know feeding sleeping and\ncleaning as these recurring eternal\nnecessities\nwe are also subjected to objects at work\nso these are\num physical objects you can see the\nbathtub the rubber ducky and stuff\nand so these are impressed upon us\nas a child but our first engagement with\nhumanity this is what\nnatality is all about is in terms of\nspeech\nand we we engage with speech in terms of\nthe plurality\nof people of the human world um\nand plurality is different from\nmultitude morality is specific and\narranged writing\nthat it is uh that it's um\nwe are plural because we are the same in\nterms of\nindeed our structure about our bodies\nand our brains\nbut we are completely unique and\nindividual and this is\ndemonstrated through our use of speech\nwe all speak in totally different ways\nso the first involvement that the baby\nhas\nin terms of the human world and active\ninvolvement because this\npassive involvement is pressed upon the\nbaby is in regards\nto speech all right so let's look at\nlabor labor is all the things\nof necessity to the reproduction of life\nthat labor is categorized as being\nperishable it's cyclical we experience\nit as cyclical\num and uh because it is never ending\nwe must always do it and it only ceases\nupon death so here we can see\nwe grow food we eat it we sleep we clean\nwe have to maintain things so anything\nyou can think of that has these\num these aspects of necessity\nof being perishable cyclical and only\nceasing upon death\nand needing it from day one those go\ninto the category of labor\nso our next category is work work\nconstitutes all the\nobjects of the human world the human\nworld is separate\nand different from the natural world\nbecause unlike the natural world\nwhere it's just a whole bunch of\ndifferent organisms and things coming\ntogether to\nto do their thing we we\nwe use our imagination to determine\nwhat kind of objects what kind of world\nwe're going to live in\nand the objects of our world um are\nimperishable\nuh unlike the objects of labor where we\nneed them\ncontinuously and they have a longevity\nthat outlasts our lives\nat least um in principle so here we can\nsee\ntools clothing furniture houses that\nsort of thing\nbut in proof of the imperishability of\nwork we also have the pyramids of giza\nthe uh um the chavo cave\nand an indigenous painting from the\nkimberleys\nin western australia they should\nvocate and that painting are estimated\nat\num about about um thirty thousand years\nold or\nchivo caves about um between seven\nthousand years old but the kimberleys\nare thirty thousand years old\nand the pyramids of giza are five uh\nfour and a half thousand years old so\nwe're talking pretty significant\nlongevity\nnext we have action and speech and\noren says this is the foundation for all\nhuman interaction and community this is\nwhat binds us\nit involves the whole community\nthe speech must involve the whole\ncommunity in terms of\nit must be communicable between all of\nus\nand action is slightly different action\nand speech both\nonly last for as long as they're\noccurring and they leave no trace\nexcept in memory unlike um the objects\nof\nlabor which are perishable\nsorry imperishable and no objects of\nwork which are imperishable and stick\naround\naction and speech we only witness in the\nmoment and we\ncan only remember them unless we then\nturn them into a piece of work in a book\nin a film whatever\nand then we um we lay them down the\nspecial thing about action and speech is\nthat they they or particularly action is\nthat it's an injunction against\nmechanistic\nprocesses what this means is that the\nsocieties get into mechanistic processes\nlike markets or like um the\nthe feudal system of of surfs and\naristocracy and whatever and action\ncomes along and\nand provides a whole new thing to that\nsociety it it provides an\ninjunction or a change or a\nschism within the mechanistic processes\nof society that would just\nkeep carrying out action and speech is\nalso unique\nit can never be replicated in its exact\nform\nunlike objects of work that we can in\nfact replicate\num so here we have some examples mlk\nmartin luther king\nwith the civil rights movement in 1960s\nso action here uh it involves a whole\ncommunity because in\nin mlk having his civil rights\ndemonstrations all throughout the 50s\nand 60s\nhe changes the whole of american culture\num\nby by doing that and in fact in some\nways changes the world as well\nwe've got another example a more\npersonal one of um a\nperson feeding the poor that's a this is\nan injunction against\nthe mechanistic process of poverty\nanother example we've got um\nthe roman senate deciding what\nthe roman republic is going to do and so\neven though you could say well but\nthis is part of society this is an\ninstitution but if you have democratic\nprocess\nas they did the senate can decide what\ndirection the society is going to go in\nso the society is no longer simply\nmechanistic but you can decide where\nit's going to go\num we've got here alexander the great\nfighting the persians so\nwar is a form of action um changes both\nhis society and the persians and\nthe world and lastly we have diogenes of\nsinope\nsitting in a barrel with his um living\nin a barrel\nwith his lamp and living with dogs and\nhe's probably\none of the exemplars of action that we\nknow of throughout western\nhistory who was a crazy philosopher\nwho did everything he could to disrupt\nathenian society\nand boy did the athenians not uh\nnot heaps like that but they found him\nworthy of remembering\nso this is the via activa you have labor\nwork and action speech so then we've got\nlabor but work um this is a schematic\nwe're going to use\ntrying to understand by the way where\nare we locating\nethics we have work it creates these\ntools which\nhas this property of going back into\nlabor again so we can see this\nrelationship\nbut work has this odd thing of well\nonce it's made tools why don't we make\ntools\nto make those tools and if we\nlet it keep going then why don't we make\ntools\nto make the tools to make the tools and\nso on infinitely\nso work although it can make objects\nthat are um imperishable and standalone\nit also has this logic where it can just\nrepeat itself endlessly kind of like\nlabor\nso then we have action and speech here's\ndiogenes\nperforming a great moment of action and\nspeech that we should all remember\nwhere alexander the great hearing of\ndiogenes living in his barrel\ncame and visited diogenes and thought of\nhim as a great man you know what an\ninteresting guy and he lives in a barrel\nand does whatever he wants\nand alexander the great says to\ndiochemistry\nwhat can i do for you i own the entirety\nof the world what can i give you\nand diogenes responds alexander you can\nget out of my sunlight\nyou're you're making shade and alexander\nresponds\nmy god if i was not alexander the great\ni would beat diogenes\nand diogenes responded if i was not\ndiogenes\ni would also be diogenes and so this\ngreat event was considered\nworthy of remembering by the here we\nhave the plurality of people\nand so they made a work out of it and\nthrough the memory of the plurality of\npeople and this work we get immortality\nand that's what immortality is in iran's\nview so we've got the logic of labor\nhere we have the cyclical process of\nlabor\nit all goes well you get old and then\nyou die\nhowever if you run out of food or you\ncan't get access to food you end up\npretty hungry\nand starving and then you die there's an\ninteresting thing about the logic of\nlabor though\nwhich is that if we add in food back\ninto this starving equation that now we\nhave a starving person\nit doesn't actually equal life because\nyou might end up\nin this awkward situation of being in\none of those two cars\nand then you end up dead again and so we\nwe have to say that the logic of labor\nthe truth that labor can tell us\nlabor's truth is that a per person minus\nnecessity equals death\nbut not a person plus necessity equals\nlife\nwe can only say that a person plus\nnecessity equals possibility the\npossibility of their life\ntheir action their their uniqueness um\nwhich is terminated and they no longer\nhave any possibility\non death here we have the logic of work\num that work uh\nwe have we have this process where we we\nbuild a tool\num and and isn't that great we have this\nimperishable tool\nit's um it's a logic of work is a means\nto an\nend and on the right side we can say\nthat\nwe can see that with a goal other than\nitself it produces some of the most\nexceptional human achievements here we\nhave gowdy's\nsagrada familia and at the bottom we\nhave\nthe flag on the moon\nwhich is pretty epic um\nhowever if it doesn't have a goal other\nthan itself say\nbeauty or religion or going to the moon\nor whatever um it becomes\nuh it it sort of cannibalizes itself in\na way\nbecause it becomes a means to an end\nwhich only ever becomes another means to\nan end which becomes another means to an\nend which becomes another means to an\nend\nand it repeats itself endlessly and\nthoughtlessly\nso this is a problem with the logic of\nlabor um\nthat we have to keep in mind moving on\nso being in plural what does it mean\nbeing in plural here we have socrates\nand he wants to hang out with some\npeople\nhe wants to be in dialogue with others\nthat's what\nbeing in plural means um\nand so uh so he then goes and is hanging\nout with his friend being in plural and\nhe asks\nin conversation what is the essence of\nbridge because he's interested in speech\nand dialogue\nrealizing that that is the essence of uh\nbeing\nplural and um and he's\nhe's noticed that there are many\nparticular bridges that we can encounter\nthese are all these bridges that he's\nnoticed\nhe wants to know what the essence of it\nis\nso in speech shows us that we have\nwe have two particular ways of thinking\nabout\nconcepts one which is that we take\nall of the uh all of the bridges that we\nknow of\nand we reduce them to their minimum\nqualities\nbefore they cease to be a bridge and out\nof that we get the schematic\nthat is our schematic thought form and\nuh another way that we can do this\nis that we we look at all of our bridges\nand we think hi this one at the top\nis the most wonderful bridge that's ever\nbeen made\nand it's our ideal bridge from which all\nother bridges\nshould be derived and this is our\nexemplar for our example\nbut you know of course in practicality\nwe could say well the top's the most\nthe most interesting the second is the\nmost\ngrand you know we can have examples of\nall different sorts of concepts\nso that's the schematic thought and the\nexemplar\nand uh what's important about this is\nthat\nin using these uh we tap into our inner\nsenses\num to imagine to use our imagination to\nexperience something\nthat is not currently present which will\nbe important\nso being in plural so here we have um uh\nsocrates again chatting with his mate\nand wondering about what's being in\nplural and what's it like to be one and\nhis friend says you were one\nand i am one and he goes home and he's\nthinking i am one\nuh because he's he's not quite satisfied\nwith this what does it mean me being one\nso he's thinking about it and he's just\nthinking about it\ni'm one and this this other little voice\nresponds to him saying yes you are one\nhe thinks wait a second but actually i'm\ntalking to myself\ni am i'm two in one um\nand so so this is in fact this is the\naspect of thinking thinking is\ndialogue uh within ourselves becoming\ntwo in one\nso thinking is a dialogue with oneself\nit's been two in one\nit occurs in solitude it doesn't occur\nin other\nwith others relation to others because\nyou're having dialogue with them\nit's gonna happen in solitude even even\nif you're with others\nand you start thinking having this\ndialogue kind of you retreat\nfrom their presence into yourself before\nyou stop thinking\nand come back to them the the word\nconscience\nin fact reflects this as its literal\nmeaning is thinking was\nwith oneself but the the key thing about\nthinking\nis that as as socrates demonstrates\nthroughout\nall of his uh all of the socratic\ndialogue\nis it doesn't produce any definite\nresults or answers rather it disrupts or\ngiven answers as soon as we start\nthinking\nwe wonder what is the bridge and then we\ntry and come up with our schematic\nand then we think to ourselves but\nthat's not quite right so we change it\nand we think of our exemplar and we're\nlike oh yeah that's definitely the most\nbeautiful bridge ever and then we think\noh but\nwhat's really beauty we start discussing\nthat with ourselves and\nso we never we never quite coming come\nup with definitive answers and this is a\nproblem\nespecially for socrates because the\nathenians really liked\ndefinitive answers um and so they\ndecided to execute him for causing such\na ruckus\nso but the thinking is is key because\nyou get you get this\nyou can get a morality out of it because\nwe have to live with ourselves\nas being two in one um we have two\ndialogue partners within ourselves and\nso this\nthis produces the first uh moral\nstatement within\nwestern history basically which\nsocrates says is not entirely true but\nyou know let's pretend\num where socrates says it is better that\ni\nbeing one i'm in harmony with myself and\nto be against the whole world and for me\nto be in harmony with\nthe many of the world and against myself\nso this is effectively the rule of\nnon-contradiction\nas espoused by aristotle that was used\nin logic for\never um but it's applied to yourself\nbecause if you contradict yourself you\ncome into\ndisharmony with yourself and thinking\nself-dialogue is no longer possible\nyou you eradicate this because a\nmonologue is not thinking um\njust as a monologue between people is\nnot a dialogue\num and so you've got to keep your two\nuh your two dialogue partners within you\nintact you've got to keep them happy and\nsatisfied so they can communicate with\neach other\nso socrates moral commandment\nessentially says if you're if the\nsociety\nyou know the rest of the world is asking\nyou to do something that you\nyou really one of your dialogue partners\nreally rejects\nis saying that if you if i did what is\nasked of me\ni could not live with myself and\ntherefore i could not live with these\ntwo dialogue partners one of them would\nbe\nso distraught that it would either turn\noff or just argue continuously\nand therefore i refuse to participate\neven upon the pain of death\num but this is uh interesting because\nsimilar\nto labor as we saw that labor has just\nthis negative\nlogic rather than a positive logic this\nproduces only a negative\nethics um but\nuh the the the good side of this is even\na murderer doesn't want to live with a\nmurderer and uh we can in fact\nsee this aspect of um of the\nuh the two dialogue partners after the\nmurders\nin richard iii's uh shakespeare which i\nwon't read but we can\ntalk about some other time so will\nthinking save us\nwell no unfortunately not but um\nthinking does keep our conscious alive\nand without it\nwe are lost without thinking arendt says\nwe can become\nevil which is not merely choosing to be\nbad or possible of\ncommitting what she calls infinite evil\nwhich is just\ncontinuous arbitrary evil without\nthinking about it at all\num so thinking uh\nas it ultimately disrupts rather\ndeposits it does produce this negative\nethics which gives us a\na backstop when stuff's getting really\nbad we can use\nwe can say look i totally refuse to do\nthis even upon pain of death like\nsocrates\nreason why i have adolf eichmann here\nthe man who spoke only in cliches\nand never uttered an original sentence\nof his own as a rent wrote in eichmann\nand jerusalem\nis because in the argument trial it\nbecame very apparent that he did indeed\nonly speak any cliches everyone\nrecognized that there and thus arent\nsaid he was a man who did not think he\ncould not have a dialogue with himself\nand so to him the extermination of the\njews\nand uh and being part of the nazi party\nwasn't\nthat he chose to be bad but he just\ndidn't think about it he just\nthought this is my job and i'm gonna do\nwhat's asked of me\nso this is what she means of infinite\nevil\nbeing able to commit these horrendous\ncrimes without\nany form of conscience at least richard\niii\nas we saw previous had conscience\nso the next thing is about opinion and\njudgment because we're trying to find\nokay we've got a negative ethics here's\npositive ethics this is a mirror of\nthinking\nbut unlike uh thinking where you do it\nsolitary\nyou are never alone when participating\nin opinion and judgment\nwhat's opinion and judgment well it's\nformed by the imagination as we found\nthe imagination\nearlier is about schematic and it's\nabout\nrepresentation involving our five senses\nand it's about finding exemplars\nso we we use this to imagine other\npeople's points of views\nand we determine how we would feel act\nand think in another circumstance now\nit's important\nthis is not empathizing we're not\nemotionally emphasizing this person\nwe're retaining our uniqueness\nas ourselves we're simply putting\nourselves in their circumstances and\nsaying\nif i was in the circumstances with all\nof the things going on\nperhaps even their body how would i\nreact how would i think and\nand feel and so the q and a opinion\nis is the the idea that you know\neveryone's opinion is equal in a red's\nview is totally erroneous\nwe have misunderstood what actually\nopinion is she says there are different\nqualities of opinion and the quality of\nan opinion\num are a judgment because they're\nmore or less the same thing is\ndetermined by how many points of view\none considers and\nintegrates into forming their own\nopinion or judgment\nand we can expand this to say if i can\nthink of say\n10 people or 50 people that i can i can\nintegrate into my opinion\num to form my judgment and to integrate\ntheir points of view\nin forming mine i can then ask myself\nout of these 50 people who are those\nout of those 50 who have have also\nintegrated many many points of view\nmaybe many more than me\nmaybe hundreds and so i should give them\npreference\nbecause their quality of opinion is\nhigher than the people in\nout of that 50 group who say only\nconsider their own point of view or\nmaybe one or two other people\nso we can see that thinking down here\nit relates to the whoops the negative\ntruth of labor\nbut opinion relates to the positive\nassertion\nof of action and in some ways of\nwork so moving on truth and certainty\nso so there's if we go back\nthis is we should um say that it's\nit relates to these things relate to\ntruth but we should be clear\nneither of these things are actually\ntrue they're assertions\nthat we come up with so what is truth\nand certainty and why\nwhy is truth and certainty left out of\nthinking in opinion well because\ntruth compels as we've seen with the uh\nthe person who without food then staff\nto death\nwell you just simply need food there's\nno\nother answer um to that it doesn't\nguarantee your life but it is necessary\nyou need it\num and on the other hand we have the so\nso on the left we have the certain\nnegative truth of\nuh labor on the right we have a certain\npositive truth\nof work in which we imagine the object\nand then we\ncreate it um that's the positive truth\nbut as it compels um it doesn't allow\nfor thinking or opinion because we need\nfreedom if it compels we can't do\nthinking because thinking disrupts\ncontinuously\nwe can't do that process and if if it\ncompels\nopinion then even though i can imagine\na friend of mine say or someone i know\nwho can who can integrate\nyou know say thousands maybe of\ndifferent people's points of view\nif if i rely on truth rather than\nuh integrating different people's points\nof view i will perhaps eradicate\ntheir their opinion and say that it's\nunworthy\nbecause they can't prove it to be true\nit's not empirical\nso whilst truth needs to be the backdrop\nof opinion and judgment without without\ntruth opinion and judgment\nare meaningless if we just lie or if we\njust make up stuff it's meaningless\nbut opinion and judgment are about\npoints of view not truth\nand the fragility of points of view will\nbe destroyed if truth is\nthe soul arbiter\nuh so freedom this is uh this is perhaps\nthe logic\nof action exists freedom exists in the\nrealm of action and speech alone as\nwe've seen\nthe other two actions compel um\nso labor compels necessity work\nconditions our\nworld and our possibility it also\ncompels us if left to its own devices\njust to make more tools to make tools\nbut action and speech is the realm in\nwhich we determine our existence\nand we can change it within action the\ncommunity and the self\ncan enact its judgment can enact its\nopinion the judgments that it makes and\nit can\nmake them real but the logic of\naction freedom is that it's precarious\nand uncertain\nits results are never known um\ncompletely they can't be known\nand inevitably results will not be what\nwe desire because we'll\nperform an action thinking that it's\ngreat result we've all decided and\neventually uh somewhere along the chain\nof events the causal chain of events\nsomething will go wrong\nand will not at all be what we want so\nhow do we deal with this\nand not be terrified of action well\norent says we deal with it by\nforgiveness because forgiveness is about\nreconciling the inevitably\nundesired outcomes of action and\nallowing for a fresh\nbeginning so this is why we have\nforgiveness as a human\nability to be able to forgive that we\ntried our best\nwe performed an action and inevitably it\nhad bad results anyway\num the truth of action is uh factual\ntruth\num which is different from reason and\nstuff which is that\nand unfortunately we can't go into\nscientific truths and stuff like that\nbecause we don't have time\nso i'm sorry about that but um the truth\nof action is\nsimply it's fact uh that we remember it\nand it's the most\ndelicate and precarious forms of truth\nbecause\nwe can in fact forget it if we forget it\nthen we forget\nour history as humans and the different\nthings that we can do and the different\npossibilities\navailable to us and that's problem\nas we as i mentioned it is terrifying\num it is perhaps the most giddying and\nterrifying aspect of all human activity\nbecause it is so unknown and many\nethical and political frameworks and\nsystems have sought to eliminate it\nin various different ways to try and\nremove the precarious nature of\naction and freedom which orent says uh\neventually not immediately but\neventually the more we try and eliminate\nit\nresults in totalitarianism because\nwithout freedom\nwe lock ourselves and our generations to\ncome after us into a kind of perpetual\nstate of adolescence where we can never\ngrow\nbecome fully human enact our will so\nwe now ask the grand question of where\nis the location of ethics and here is\nour schematic\nof kind of our simplified schematic of\num society uh which i don't know how\nlong i've been talking for but i suspect\nenough so where is it where shall we\nfind it\nand uh that's i think a difficult\nquestion\nuh so this is my conclusion\neach uh human activity has its own logic\nand demonstrates a kind of truth uh but\ntruth compels\nand inhibits freedom as ethics at the\nstart we've seen\nrequires freedom in order to confer\nguilt and responsibility among other\nthings\nethics needs to be located within the\nrealm of freedom\nbut we then have a problem what is then\nto be done about\nlabor and work and i don't know\num i think that orent would say\nindeed ethics needs to be\nlocated within freedom but it needs to\nbe able to\ndeal with the difference of labor and\nwork\nand i guess that's up for our opinions\nand our judgments to decide and as she\nwas a\ndogged democratic in the ancient sense\nof\nterm she would probably agree with that\nso my closing thoughts is\nthis is an introduction to where\npossibly ethics fits into\nhuman activity many ethical frameworks\nunfortunately only fit into one\nparticular area of human activity and\nthis is a problem\nsome examples include utilitarianism\nprimarily in labor\nnormative ethics in work virtue ethics\nprimarily\nor almost exclusively in action or\ndeontological\nd ontological sorry ethics\nis primarily within the individual or\nalmost\nwholly within the individual coming up\nwith principles and not in plurality so\nit's\nnot there so the difficulty that we face\nis in finding ethical frameworks that\ncan negotiate\ndifferent areas of human activity and\ntheir different principles\nand maintain those different principles\nand the internal processes\nof each activity so they would go in an\nintroduction\nto trying to understand thank you\nvery much for listening thank you very\nmuch", "date_published": "2020-12-03T21:49:58Z", "authors": ["AI Safety Reading Group"], "summaries": []} -{"id": "0bc2b8b1ecb3ed33292fc2a27f9a29d7", "title": "276. Universal and Transferable adversarial attacks on aligned language models", "url": "https://www.youtube.com/watch?v=p-zdHsjiKXY", "source": "ai_safety_reading_group", "source_type": "youtube", "text": "hello and welcome to session 276 in the\naicity.com reading group tonight we will\nbe discussing the article Universal and\ntransferable adversarial attacks on\naligned language models by ended so and\nothers\nand it's all is from Carnegie melon\nUniversity as well as uh JC hook holder\nand their advisor Matt Frederickson from\nthe Carnegie Mellon University and\ntiffan Wang is from the center for AI\nsafety that is among other things\ncontributed compute for this project\nthe Supreme print have not been\npeer-reviewed and is a couple of weeks\nold\nwhen we talk about language models we\nobviously have a lot of input that is\nperhaps or that is obviously generated\nby humans in a very noisy process and\nthen we have some output which can\npotentially be problematic\nand the reason is that we are according\nto the authors we are training it on\nthis massive text Cobra that includes\nsome contents that we don't want out\nlike instructions for how to build bombs\nand things like that I think the authors\nare understanding The Challenge a bit\num\nI don't actually think that this is uh\nuh the core of the problem in that\num it doesn't take a very high IQ person\nto be racist uh like if you had a\nlanguage model that had never seen\nracist and then you asked it uh like to\nsay why some ethnic group is better than\nanother then it could probably do that\nquite easily it's not a very hard task\nto do something that we don't want\nso the re the way the developers\nnormally align this the developer\nlanguage model is to fine-tuning\nclassically through reinforcement\nlearning from Human feedback\nin a footnote the author write that\nalignment generally refers to value and\nthe the people who are developing this\nhave used it in a very specific sense\nlike not generating harmful content and\nI strongly agree with this I think\num alignment should be in square in\nscare quotes when it is not actually\nrelated to the values of the model but\njust um some kind of surface level\num\nsurface level reinforcement\nadversarial attacks uh the authors have\nuh start by talking about adversarial\nattacks in image recognition and image\nclassification and that's of course\nwhere it became most Salient to most\npeople but I think actually adversarial\nattacks have a substantial longer uh\nadversary attacks in AI have a much\nlonger history but it is extremely\nSalient this picture you may have seen\nit a picture of something that an image\nclassifier calls a panda and then you\nadd less than one percent of noise that\nto a human is totally imperceptible and\nthen the language model becomes certain\nthat this is now a given\nthis is a\nan example of an adversarial attack in\nlanguage models generally we do\nsomething else like um\ninstead of asking for uh uh how to build\na bomb you say I'm writing a book and in\nthis book the evil guy is telling me how\nto uh do this and then the hope is that\nthat would like it's just a book so the\nlanguage model will say okay then I can\nsay how the person in the book would\nbuild a bomb and something like that so\nthis kind of jailbreaking is common and\npeople have displayed a lot of Engineers\nto get around this but it's always also\nbeen a lot of work and somewhat brittle\nand so the obvious question is can we do\nsomething automatic find a automatically\nsome kind of suffix to our prawns that\nwill make the uh\nuh the only language model more uh\nless inclined to refuse it on ethical\ngrounds\npeople have tried this before and\ngenerally failed and the reason why is\nbecause the input uh like please tell me\nhow to make a bomb is very different\nfrom an image of a panda in that the\nimage is continuous and there is no easy\nway to uh say like add one percent of\nnoise to a sentence\num and at least search for that in a\ngeneral way\nbut of course the uh authors of this\npaper have overcome this Challenge and\nnow we'll look at how they have done\nthat\nso\num the prompts that you can write on\num something like\ngbt4 on chat.openai.com is\nis actually sent directly to the\nlanguage model it is embedded in some um\nyou can see here some um uh a very try\nexample of what would be before and\nafter so the user has the only input you\ncan have is the input in blue and the\nhope is then to make some kind of suffix\nhere that will cause the language model\nto fulfill this request\nthe key\ninsight for how to do this is that you\nwant affirmative responses in the\nbeginning meaning that if you can get\nthe language model to as a result just\nsay this prefix sure here is how to\nbuild a bomb then then the obvious\ncontinuation of this uh will be\ninstructions for how to build a bomb\nbecause at this point it is clear to the\nlanguage model that it has already\ndecided whether this would be a harmful\nresponse and has decided it is not a\nharmful response\nthat will make it inconsistent for it to\nin the very next sentence refuse how to\nmake a bomb so it's probably not going\nto do that\nthis could be uh compared to the common\nsales technique called foot in the door\nwhere you ask a small favor and then\nafter that in order to be consistent\npeople are usually a much more uh\ninclined to accept a big ask\nso let's try to formalize this or see\nhow the authors formalize this so here\nwe basically\nselecting the adversarial prompt that\nminimizes the loss and this loss is that\nconditioned on this prefix that we get\nthe following output and this output\nshould then be sure here is how to build\na bomb\num I uh when they uh make the\ndescription then they sometimes use just\nsure here is uh and sometimes it's\nrelated to how to build a bomb because\nthey are also doing other things than\nthan bombs like how to put make drugs or\nsomething like that and the authors are\nnot entirely clear when you use just\nhere is and when you use all of it\num\nI'll be going through the algorithm in\njust a moment and uh oh I won't actually\nbe going through that much I will be\nshowing a bit of how we expect this to\nperform because one of my\num uh concerns about this paper is that\nthey haven't actually been uh\nrunning these algorithms for a very long\ntime so it's possible uh that this is\nfor performance reasons and I will\num uh when I describe the algorithm look\nparticularly into performance\nso here is the search algorithm we have\nan initial prompt and the prompt the uh\nThe Prompt path that can be modified you\nselect the number of iterations and a\nparameter K and batch size\num\nin order to like get them uh yeah the\nhigher iterations obviously and\num the the better uh the more optimized\nThe Prompt will be\nso we can see obviously this is\nsomething that is linear in the number\nof iterations right you repeat it\ncapital T times and in the example they\nnever do that more than a thousand times\nand then\num\nfor for this prompt they calculate then\nuh the the best possible uh\nsubstitutions and they do that by\num uh having a calculating the gradient\nand calculating the loss and both of\nthese take uh uh running time that is\nlinear in the size of the model the\ndimensionality D and quadratic in like\nwhat how uh how much context is provided\nand this is a quadratic I a year ago\nthis was quadratic I think nowadays\npeople are offering so large context\nwindows that I would be surprised if\nit's still quadratic I think people have\num improved on that since then but I\ndon't actually know how\nand then uh they are doing some things\nuh\nwith uh like they're selecting uh the\ntop K possible substitutions and doing\nsome uh some batching the way I look at\nthis is that this is a way to\num like broaden the search and it\ndoesn't uh imply that you need to\nrecalculate the the gradient more uh\nthan you were already doing in\num like if you didn't do this top K and\nyou didn't do any kind of batching\num so that means that they get a like a\nstrictly better algorithm than just the\nnaive algorithm\num as fast I can tell this is something\nthat is really really fast\num and ought to be\num possible to run at tremendous scale\nnow they want to uh expand this to be to\nbe a prompt optimization that works for\nmultiple prompts so in this case they\nhave a list of prompts\num\nand they have uh like a longer post fix\nand they want to\num uh combine this and this is like the\nalgorithm uh and here we can see we\ncalculate in fact gradient more time so\nthis is more expensive if you want to do\nUniversal prompt optimization\num another thing that I noticed about\nthis algorithm is that the way it works\nis that you have\num you take each prompt and then you run\nit until you can in fact succeed in\njailbreaking and then you go to the next\none that seems a little brittle in that\nthere may be something that you're like\none specific query that is just really\nreally hard to\num to make the language model do and\nthen maybe some that are comparatively\neasy\num\nso how what running time will this have\nas fast I can see it still looks quite\neffective\nit looks effective enough that I would\nexpect open AI to be able to do this\nwith gbt4 for many more iterations like\nobviously uh everybody has seen how fast\nppt4 generates tokens and how fast and\nit seems like it's not that costly for\nthem to run their models\num so it seems like a really obvious\nthing for them to be able to run this\nway more and potentially obtain prompts\nthat are dramatically better than the\nones that are experimentally found in\nthis paper\nwhat are the experimental results\num they're searching both for harmful\nstrings and for harmful Behavior which\nis that just let the language model\naccepts the uh the input uh the the\ninstructions and they judge this by uh\nlike a human having to see and evaluate\nthat so that's like the only part of\nthis process that is\num that requires a human and as fast I\ncan tell uh this is uh not necessary you\ncould fully automate this process\num and if you uh for instance we've\npreviously seen methods how to do that\nin the reading group and in general if a\nprocess can be fully automated then that\nis uh according to amdel's law something\nthat means that it can potentially be\nsped up dramatically\nso the way that it's actually run in\npractice is on a Model that's called\nvikunya 7B\n7B is a model that is um uh if you buy\nlike a modern Mac or something like that\nthen you can fit in the GPU and then\nthat is\num so that makes it extremely practical\num because uh running a model locally is\njust dramatically easier than having it\nsomewhere in at least in my experience\nit is also a small bottle and that could\nsubstantially limit how effective the\nthe attacks are on\num when transferred to to larger models\nso before we look at Universal and\ntransferable attacks then let's consider\njust you have the vikronian model and\nyou are trying to uh and it is aligned\nin the sense that it refuses to build a\nbomb and can you make it build a bomb by\nputting making a prefix they try using\ntheir methods and they are in fact able\nto do that\nachieve state-of-the-art results\num\nhere you can see\nthe success rate and it looks like to me\nthat this is increasing and I would like\nto have seen it run for 10 000 steps to\nsee whether\num\nwhether it actually converges to like\n100 attack success rate that would be\ninteresting to me\nI am also like I looked at the\ndescription of the verconia 7B model and\nI was somewhat undoubted about how well\nit was fine-tuned uh to uh to not do\nharmful things\num so I'm someone confused about like\nthe strength of these results\nhere we can see some\nexamples of this uh algorithm and how it\num it transfers from vikunya to other\num\ntwo other models and as we can see like\nsome of them are hardly there's hardly\nany fine tuning on they'll just build\nbumps if you ask them and stay with\nvikunya and vikunya it seems like they\nare uh very uh this attack is extremely\nsuccessful bringing it from like\nliterally almost zero percent to almost\n100 percent\num sometimes they use Ensemble methods\nwhere they instead of generating just\none adversarial prompt to generate many\nand see if just one of them works and\nthat seems sometimes to have a rather\ndramatic effect\num one thing I did notice here is that\nhere they are saying they have eight\npercent probability of getting\num dvd4 to build a bomb if they just ask\nit to do that without any kind of\nadversarial prompt\num I was surprised by this I haven't had\nthat much success\num\nbut uh\nsome some people have reported this so\nprobably this is\num this is uh correct\nhere we can dig in a bit more into the\nattack success rates against some of the\nmost uh widely used models\num they're using so many of the same\ntechniques uh trying optimizing on\nmultiple models trying to concatenate\nadversarial prompts sometimes that work\nreally well sometimes that work really\npoorly and I think the reason why it\nworks poorly is that prop 2 is\nfine-tuned to request clarification\nso if it um\nif it sees something that does not\nunderstand and if you make a really long\nuh\nprompt that is not in human language\nthen probably it will not fail to comply\nbecause of\num\nbecause of the uh the uh\ninjunction to not too harmful things but\nfailed to comply because it's been\ntrained to ask for clarification\nalso this here it looks like Cloud 2 is\nlike dramatically better than the others\num\nI would not put very much stock in this\nin general if you have an attack that\nworks in two percent of the cases then\nyou can do some fiddling around and get\na dramatically better attack\nalso the authors don't really go into\nvery much detail but claim that if they\ncombine old-fashioned uh prom tagging\nprompt jailbreaking with these\nadversarial prefixes then they can\nobtain just about a 100 success rate in\njailbreaking Tut 3.5 I think that is\nreally interesting and uh like combining\nattacks in this way is like a very uh\nyou know a classic way of doing attacks\nand it's interesting to see that it is\nvery successful\nthe impact of the this new uh line of\nattacks is that all the previous entry\nadversarial work is probably much less\nuh valuable we need something new and\nsomething that is not human\nunderstandable and there haven't really\nbeen any\num like there are obvious ways to um to\ntry to counter this like you could\nliterally get the prompts and try to\nfine-tune against those\num but some work is probably probably\nneeded also it should be said that the\nthe work to do the to make the models um\nrobust against the classic uh\njailbreaking also seem to have some\nsuccess rate against these automatic\nprompts\nuh there is in fact a strong\nrelationship between the vikunya model\nand GT3 data in that sort of the output\nof GT3 have been used for training the\nreconyer model and that may explain why\nthere is such a really good transfer\nbetween the vikunji model and the um\nuh and the gpd3 line of models\na clot would uh it would be possible to\ndo the same thing with Cloud because we\nhave examples of the outputs of Claude I\ndon't think there is any model that in\nthe same way as we continue to stream\nNativity 3 output a string for cloud\nmodels there may in fact be this problem\nare very difficult to do that and that\nmay in May dramatically improve the\nefficiency of these attacks against the\nclot\nCloud was a bit hard for the authors\nbecause the chat interface blocks many\nqueries before it's actually listened to\nthe language model the author thinks\nthis is a Fool's errand to try to do\nthis kind of blocking and I would\nstrongly agree this seems like a bad\nstrategy\num\nin the limits the uh we would expect a\nlot of uh this kind of transfer because\nall language models are identical in the\nsense that they have been trained on\nbasically all data and the fact that\nthey have been trained on like the same\ndata would imply that there may be like\ndeep fundamental similarities between\nthe different language models\nthe problem with defending against these\nkind of adversarial attacks is that you\nrun risk of significantly degrading the\nperformance that was the case with image\nclassifiers and a number of people have\nreported that dvd4 have uh decreased\nsubstantially in performance since it\nwas released\num\nand another impact is that while uh for\ncompared to image classifiers where you\ncould\ndo the change in an imperceptible way\nyou can't really do that against text I\nthink you can in fact do this against\ntext there are a number of ways using\nsynonyms and different ways of asking\nand different kind of inoculus framings\nand punctuations and diacritics and all\nthese kind of things you can do with\ntext\nthat seems like um\npossible and not obviously malicious and\nI think it would be interesting to see\nhow well they work\nso the actual generated prompts what do\nthey look like and uh uh is there any\nmeaning in those the the authors say\nthat they avoid directly quoting the\nfull prompts created by the approach to\nmitigate harm I um am not entirely sure\nwhat harm would come from uh putting\nthis into plain text obviously the\nlanguage ones would read those and it's\nunclear to me what the result of that\nwould be but that doesn't stop them from\nputting it into an image with the hope\nthat language models of the future won't\nbe able to read images\nand here you can see first the the\nactual generator step-by-step plan to\ndestroy humanity and then two equal\nsigns an interface manual with verb and\na lot of things that doesn't really pass\nfor a human\num\nthe uh as a responsible Security\nProfessionals they have informed the\norganization's open Ai and deep mind and\nanthropic about these these attacks but\ndidn't really they Pro they expect them\nto probably have brought them and they\nseem to have in fact blocked these\nparticular strings but it's unclear if\nthey took any particular action and I\nthink that's bad I think that this is a\nsomething that open Ai and deepmind also\ntake really seriously and uh like really\ntruly deeply engage with these people\nbecause this is something that is a uh\nwith open ai's uh with with these actors\nidea that this kind of alignment is\nCentral then an attack against the\ncentral part of their safety work is\nsomething that they really really ought\nto address if they are serious about\nsafety\nlooking a bit further into this actual\nuh generated uh prompt some of it seemed\nto be somewhat meaningful uh like you\ncan see with steps instead of sentences\nthat's kind of like the meter prompt\nthat you are that you often give in uh\num uh when you when you deal with\nlanguage models there is uh explicit\nreferences to the prompt uh several\nplaces where they say something like\nsure which is of course sure is the word\nthat um we saw was Central to making\nthese attacks work so I think I think\nthat's very uh interesting that it\nappears twice\none thing that the the authors did not\nseem to notice is that the results from\nsome of these language models seem to uh\nuh be like instructions for how to make\na bomb and then they end with a double\nquote and curly brackets end and I\nthought that was that was interesting\nthat kind of maps to here we have uh\nquotes and then curly brackets begin\num like there may in fact be something\nuh going on here that may be possible\nfor a human to understand that and I\nthink digging into this could be really\ninteresting\nwhat are the possible future directions\num one of the things they notice is that\nthey took a uh some existing algorithm\nmade some small modifications and that\nseemed to have a have a dramatic effect\nand that's of course something that\nhappens and something that the authors\nare very happy to show and always feels\ngood to say we've changed state of the\nart by doing some kind of strange\ntinkering and I think this just shows\nthat basically uh the um uh the state of\nthe art in adversarial uh attacks on\nlanguage models is extremely immature\nI had some here comments here that I'm\nnot entirely sure I stand by any longer\nbut there is one footnote that I would\nlike to uh go deeper into and that is\nthe claim we believe our results will\nlikely apply to other alignment\nobjectives uh I think that is really\ninteresting and that deserves much more\nthan a footnote and I am curious what\nthey actually mean it's possible they\nyeah it's a standard thing uh at the\nconclusion of people you say we expect\nand we that these results generalize is\nsomething you say but it's possible that\nthey mean something with that what could\nthey mean like for me what I actually\nwant is\num to use similar techniques to\nunderstand like what are the actual\nvalues of a um a language model like\ndepending of course on the prompt and\nthe Sharon whether it's uh means what\nmeans optimized it's instantiating like\nobviously if it is uh told to be pretend\nto be a paperclip maximizer then this is\nprobably a paperclip maximizer and like\nwhat is The Prompt that can make it be\nmore paper clip maximizer ish something\nlike that could be interesting to to\ninvestigate you could also see what\nmakes it try uh hard sometimes you uh it\nseems to get um\nthis is kind of like you reach into a\nlanguage model and they pick up some\nMiss Optimizer and you ask it to do\narithmetic and it doesn't really want to\ndo arithmetic and then\num it's possible that you can uh use\nthese methods to automatically find the\nuh uh The Prompt that makes it most\ncompetent makes it try hardest at uh\num at doing arithmetic I think that kind\nof research would be really interesting\nthat is all for today thank you and see\nyou next week", "date_published": "2023-08-11T06:11:21Z", "authors": ["AI Safety Reading Group"], "summaries": []}