### universal voluntaryism

at this point, we come to the crux of this utopia, rather than its supporting foundation: ultimately, in this framework, the basis of the existence of persons would be for each of them to have a "computation garden" with room to run not just their own mind but also virtual environments. the amount of computational resource would be like a form of universal basic income: fixed per person, but amounts of it could be temporarily shared or transferred.

note that if resources are potentially infinite over time, as wolfram's perspective suggests, then there is no limit to the amount of raw computation someone can use: if they need more and it's not available, superintelligence can just put either their garden or *everyone's gardens* on pause until that amount of computation resource becomes available, and then resume things. from the point of view of persons, that pause would be imperceptible, and in fact functionally just part of how this new reality is implemented.

persons would have the ability to transform their mind as they want (though having a bunch of warnings would probably be a reasonable default) and experience anything that their garden can run; *except for computing the minds of other persons*, even within their own mind: you wouldn't want to be at the mercy of someone just because you happen to be located within their mind.

persons would be able to consent to interact with others, and thus [have the ultimate say on what information reaches their mind](https://carado.moe/cultural-and-memetic-hygiene.html). they could consent to visit parts of each other's gardens, make a shared garden together, and all manner of other possibilities, so long as all parties consent to all interactions, as determined by superintelligence — and here we're talking about [explicit consent](https://carado.moe/defining-freedom.html), not inferred desires, even though superintelligence would probably have the ability to perfectly determine those.

for a perspective on what a society of "uploaded" persons might look like, see for example [Diaspora by Greg Egan](https://en.wikipedia.org/wiki/Diaspora_%28novel%29).

### rationale and non-person forces

the goal of this structure is to allow people to live and associate with each other in the most free way possible, making the least possible restrictions on lifestyle, while retaining some strong guarantees about consent requirements.

in a previous post i talk about [non-person forces](https://carado.moe/two-principles-for-topia.html); those being, for example, social structures that act with an agenthood of their own, running on other people as their own substrate.

at the moment, i simply don't know how to address this issue.

the problem with the emergence of such forces is that, if every person is consenting to the process, it's hard to justify superintelligence coming in and intervening. on the other hand, it does feel like not doing anything about them, short of being able to align sufficiently many people *forever*, will tend to make people dominated by such structures, as a simple process of natural selection: if there is room for such structures and they can at least slightly cause their own growth or reproduction, then they will tend to exist more than not. this may be thought of as [moloch](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/) "attacking from above".

one major such potential non-person force is superintelligence itself trying to make people tend to want to live in ways that are easier to satisfy. if everyone wants to sit in their garden forever and do nothing computationally costly, that makes superintelligence's job a lot easier than if they wanted to, for example, communicate a lot with each other and live lifestyles that are computationally expensive to run; and the reason superintelligence will want to make its job easier is to increase the probability that it succeeds at that job (which it *should* want).

if informationally insulating people from superintelligence except when they outright consent to it intervening in their decisions is *not* sufficient, then maybe we can add the rule that people can never ask superintelligence to intervene in their life unless there is one single optimal way to intervene, and hopefully *that's* enough. the idea there being: if, for any request to superintelligence, there is only a single optimal way to accomplish that request, then superintelligence has no degree of freedom to influence people and thus what they want.

### on new persons

there are some reasons to be worried about the creation of new persons.

one is [malthusian traps](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/): if the amount of resources is either finite or growing but bounded, or if it's unknown whether the amount of resources will end up being finite or not, then you *have* to cap population growth at the speed at which the amount of resource grows (and if the amount of resources grows, the maximum speed of population growth should preferably be lower, so that the amount of resource each person has can grow as well). while it does seem like in current society people tend to have fewer kids when they have a higher quality of life, in a system where persons can live forever and modify their minds, one can't make such a guarantee over potentially infinite time.

another is replicator cultures: if there is no limit on creating new persons, and if people can influence even just a bit the values of new persons they create, then soon the world is overrun by people whose values are to create kids. or: making a world in which "new person slots" are filled by whoever wants to fill them first will just select for people who want to fill those slots the most.

there might also be weird effects such as, even if resources were infinite, allowing arbitrary amounts of persons to be created could "stretch" the social network of consenting-to-interact-with-each-other persons such that, even if someone has registered an automatic consent to interact *even just a bit* with the kids of persons they already consent to interact with, they are soon flooded with a potentially exponentially growing network of kid interactions; though this can probably be addressed by that person revoking this automatic consent.

beyond various resource and network effects, new persons create an ethical dilemma: does a person consent to living? or, for a child, do they consent to being taken care of for some amount of years after they are born — a time during which we often consider that caring for them requires affecting them in ways they might be unable to consent to?

if such philosophical quandaries don't have a solution, then the safest route is to simply forbid the haphazard creation of new persons, whether that be through conventional human infants, [headmates](https://en.wikipedia.org/wiki/Multiplicity_%28psychology%29) and [tulpas](https://en.wikipedia.org/wiki/Tulpa#21st_century) if those are full-fledged enough to count as persons, and potentially other ways of creating new persons that can't consent to future interactions because they don't exist yet.

on the other hand, one way to increase the population *with* consent is simply to fork existing persons: create a duplicate of them. because both are a continuation of the original single person, the original person's consent counts for both resulting persons, and there is no issue. the "merging" of consenting persons together might be possible *if* it can be reasonably estimated that their shared consent "carries through" to the new, merged person; i am currently undecided about how to even determine this.

finally, if resources are finite, creating a new person (whatever the means) should require permanently transferring one "universal basic computation amount"'s worth of computation garden to them, as no person should start out without this guarantee. this could be done by a person consenting to die and give up their own computation garden, it could be done by several persons consenting to give up a ratio of their gardens to the new person, it could be done by reclaiming and redistributing the gardens of persons who die and don't make any decisions about what should be done with their computation garden, etc.
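
a minimal sketch of the bookkeeping this last rule implies, assuming persons are represented by their gardens and donations are already consented to; the names `Garden`, `create_person`, and the `UBC` constant are mine, not the post's:

```python
from dataclasses import dataclass

UBC = 1.0  # one "universal basic computation amount", in arbitrary units

@dataclass
class Garden:
    owner: str
    budget: float  # permanently owned computation, in UBC units

def create_person(name: str, donations: list) -> Garden:
    """Create a new person's garden out of permanent donations.

    `donations` is a list of (garden, amount) pairs. The transfer only goes
    through if the amounts add up to exactly one UBC, so a new person never
    starts out below the universal baseline.
    """
    total = sum(amount for _, amount in donations)
    if abs(total - UBC) > 1e-9:
        raise ValueError("donations must add up to exactly one UBC")
    for garden, amount in donations:
        garden.budget -= amount  # permanent, consented transfer
    return Garden(owner=name, budget=UBC)

# example: two existing persons each give up half of a UBC
a, b = Garden("alice", 2.0), Garden("bob", 3.0)
c = create_person("carol", [(a, 0.5), (b, 0.5)])
print(a.budget, b.budget, c.budget)  # 1.5 2.5 1.0
```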

2021-11-20

## no room above paperclips

(edit: see also [*yes room above paperclips?*](https://carado.moe/above-paperclips-2.html))

when presented with the idea of a [paperclip-maximizing unaligned superintelligence](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer), people sometimes mention the possibility that sure, the universe gets tiled with paperclips, but maybe there's [slack](https://thezvi.wordpress.com/2017/09/30/slack/) in how paperclips are arranged, and that maybe nice things can exist again "above" paperclips.

(note: this relates to the question of [whether minimal circuits are daemon-free](https://www.lesswrong.com/posts/nyCHnY7T5PHPLjxmN/open-question-are-minimal-circuits-daemon-free))

i think it's a reasonable line of thinking, but it's short-sighted: let's think about what happens next. eventually, above those paperclips, some evolutionary process may take place, leading (possibly, as in our case, through the step of a technological species) eventually to a superintelligence taking over everything. given that *the entire cosmos* gets tiled with paperclips [*possibly forever*](https://carado.moe/ai-alignment-wolfram-physics.html), and that a superintelligent singleton taking over everything is irreversible (short of everything dying forever), in all likelihood, in the long term, in any piece of universe not already actively managed by a superintelligence, eventually either everything dies forever, or a superintelligence takes over everything forever.

and then what? either this new superintelligence cares about "upwards", and has some plan for how its own paperclips are arranged (such as into more "macro"-paperclips), or it doesn't and the cycle begins again.

given that the takeover of an above-paperclips superintelligence is probly a worse outcome than the takeover of a superintelligence of our own (we should expect its creators to be about as incompetent as us at alignment, but to have values [less aligned to ours](https://www.lesswrong.com/posts/HawFh7RvDM4RyoJ2d/three-worlds-collide-0-8)), we need to care about our own iteration first; it's our best bet.

the point is, eventually, for any given local patch of spacetime, either a superintelligence explosion is reached or everything dies forever. this can't be avoided, even by "climbing up" on substrates, so we should care about alignment now; we can't just hope that things are okay despite paperclips.

2020-11-18

*(this post may contain some very vague spoileryness about the video game Outer Wilds)*

## A Prototypeness Hierarchy of Realities

one property of many video games that i felt the most when playing the excellent [Outer Wilds](https://store.steampowered.com/app/753640/Outer_Wilds/) was *prototypeyness*.

many games, and especially that one, feel like they are prototypes for reality to some extent; they try to extract some essence of what is interesting about this world, without having the ability to implement all of it in a fully dynamic way, and thus hardcoding the rest.

now, this aspect of prototypeyness is sufficiently present in Outer Wilds that i ended up asking myself the question: what would real life (this universe where earth is) be a prototype for? and i think the answer is:

real life is a prototype for living in virtual realities/cyberspace.

once we upload ourselves to computers (a good thing!) we will be able to make the entirety of the substrate that individuals interact with way more flexible; inhabit spaces of any number of dimensions, or maybe not even spaces at all and just graphs (as is the shape of the web), modify our minds in ways meat brains wouldn't support, basically utilize any type of computational constructs we want with no regard for most limitations, depending on reality only as a substrate to run the computronium for it all.

like the step between prototypey video games and reality, it is a nearly definitional jump in scale of computing power, and one whose non-prototype side i'm very interested in.

2021-07-18

## AI alignment timeline codes

this is a small post proposing simple one-letter codes for identifying timelines depending on their status relative to [AI alignment](https://en.wikipedia.org/wiki/AI_control_problem) and the appearance of [superintelligence](https://en.wikipedia.org/wiki/Superintelligence):

* **P-line**: a pre-intelligence-explosion and pre-figuring-out-AI-alignment timeline. we are in a P-line.
* **X-line**: a timeline where an [existential risk (or X-risk)](https://en.wikipedia.org/wiki/X-risk) has been realized by an unaligned superintelligence. everything is dead, forever.
* **S-line**: a timeline where a [suffering risk (or S-risk)](https://en.wikipedia.org/wiki/S-risk) has been realized by an unaligned superintelligence; the universe from then on contains net suffering on immense scales for all remaining time, [which is possibly infinite](https://carado.moe/ai-alignment-wolfram-physics.html). we should want to avoid this pretty much at all costs (including by [opting for an X-line instead](https://carado.moe/when-in-doubt-kill-everyone.html)).
* **A-line**: AI alignment has been figured out, and no superintelligence has been deployed yet. from that point on, we have the means to reach a U-line, though this isn't guaranteed. this is where we want to get as soon as possible.
* **U-line**: an aligned or [somehow otherwise](https://www.lesswrong.com/tag/orthogonality-thesis) benevolent superintelligence has been deployed, and we are guaranteed a relatively utopian world forever. this is the ultimate goal. while not strictly necessary, going through an A-line is almost certainly required to get there.

U-lines, X-lines, and S-lines all have deployed superintelligences and are therefore terminal outcomes; they are inescapable. P-lines and A-lines are transitional; they likely lead to one of the three terminal outcomes mentioned here.

other terminal outcomes might exist, but they seem unlikely enough not to warrant listing here; for example, even if everyone dies from, say, a meteor impact, life on earth or nearby will probably evolve another civilization *eventually*, which will also probably face the AI alignment challenge and end up in one of the terminal timelines.
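
the classification above is small enough to write down as a toy state machine; a minimal sketch in Python, with the data structures and names being my own choices rather than anything from the post:

```python
from enum import Enum

class Timeline(Enum):
    P = "pre-intelligence-explosion, alignment not figured out"
    A = "alignment figured out, no superintelligence deployed yet"
    X = "unaligned superintelligence; everything dead forever"
    S = "unaligned superintelligence; immense suffering forever"
    U = "aligned/benevolent superintelligence; utopia forever"

# terminal states have a deployed superintelligence and cannot be escaped
TERMINAL = {Timeline.X, Timeline.S, Timeline.U}

# plausible successors of the transitional states, as described in the post
# (a U-line is "almost certainly" only reachable by going through an A-line)
SUCCESSORS = {
    Timeline.P: {Timeline.A, Timeline.X, Timeline.S},
    Timeline.A: {Timeline.U, Timeline.X, Timeline.S},
}

def is_terminal(t: Timeline) -> bool:
    return t in TERMINAL

print(is_terminal(Timeline.P))  # False: we are in a P-line, still transitional
```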

2022-04-30

## a unit for utils

as utilitarians, it would be convenient for us to have an actual unit to measure utility: a number to be computed and compared.

the usual pick is money, but some people could have different judgments of the world that lead them to have different instrumental valuings of money even when they have the same intrinsic values; and also some people could intrinsically value money.

the unit i propose, to measure how much an agent cares about a thing, is a ratio of that person's total intrinsic valuing. for example, you could intrinsically value something 70% and something else 30%; and then i'm sure we can figure out some math that makes sense (probly inspired from probability theory) to derive our valuings of instrumental values from that.

this seems like the least biased way to measure utils. the only criticism i can think of is that it breaks if two agents have different amounts of total valuing: perhaps one person *just has more total caring* than another.

however, is this testable in any way? is there any situation where one agent would act differently than another if they have the same intrinsic valuing proportions but one of them has a million times more total caring? i don't think so: the idea that inaction counts seems to me to track either willpower or just different valuings of not doing effort.
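
a minimal sketch of what that probability-theory-flavored math could look like: normalize an agent's intrinsic valuings so they sum to 1, then score an action by how much it is expected to advance each intrinsic value, weighted by those ratios. the function names and the expected-advancement numbers below are assumptions of mine, not the post's:

```python
def normalize(intrinsic: dict) -> dict:
    """Turn raw 'caring' numbers into ratios that sum to 1."""
    total = sum(intrinsic.values())
    return {k: v / total for k, v in intrinsic.items()}

def instrumental_value(action_effects: dict, weights: dict) -> float:
    """Score an action by how much it is expected to advance each intrinsic
    value (numbers in [0, 1]), weighted by the agent's intrinsic ratios."""
    return sum(weights.get(k, 0.0) * p for k, p in action_effects.items())

# example: 70% art, 30% friendship — note that the ratios are the unit, so
# multiplying all the raw caring numbers by a million changes nothing.
weights = normalize({"art": 7, "friendship": 3})
print(weights)                                                       # {'art': 0.7, 'friendship': 0.3}
print(instrumental_value({"art": 0.2, "friendship": 0.9}, weights))  # 0.41
```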

2022-04-13

## finding earth in the universal program

this post expands on step one of [*the Peerless*](https://carado.moe/the-peerless.html): creating virtual people. brain scans-and-simulations are apparently still quite far off, so i'll be focusing on the second approach: resimulating earth and plucking out persons.

(one great side-advantage of this method is that, if we can relocate earth to pluck out persons for the simulation of alignment researchers, then we can later also relocate earth in order to restore it once we've solved alignment. so resimulating and locating earth, regardless of having early enough mind-plucking-out tech, is something we might need to do anyways.)

if compute is [infinite](https://carado.moe/ai-alignment-wolfram-physics.html) and [we don't mind being inefficient](https://carado.moe/udassa-time-steps.html), then we can use exponential or even infinite compute to locate earth. one approach is the following: create a big informational beacon — perhaps a copy of a huge portion of the internet, along with MRI scans of as many people as we can afford. then, we use some type of (non-intelligent) deterministic error-bound statistical location procedure to locate patterns that look like that beacon inside the [universal program](https://carado.moe/universal-complete.html). we can afford the statistical detection to be imperfect — if it misses one encoding of earth, there will be different ones in the universal program.

because of the time penalty of the universal program, however, we may find just compressed copies of the beacon (instead of a full simulation of earth leading to the time at which we build that beacon), and because of the deterministic bound, we would need to stop on the first match; if this first match is *just* the beacon, without earth, then we fail: perhaps superintelligence notices that it's not finding any nearby minds to pluck out, or perhaps it plucks out garbage. so we can start the universal program with not one step per program, but rather a very large number of steps — i hear stephen wolfram has estimates on the number of computation steps it takes to get to the current state of the universe. this will favor programs that take very long to lead to the beacon, but are themselves shorter programs.

(what if the first program to contain earth is itself a universal program *without* that huge constant, such that *it* finds the beacon before it finds earth? i am not sure how to address this. perhaps we can explore programs in an order that favors worlds that look like our physics instead of looking like discrete iterations of all computations?)

there's also the concern that the universal program, just like the [universal distribution](https://handsandcities.com/2021/10/29/on-the-universal-distribution/), [is malign](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign). i'd think plain top-level earth, maybe especially as detectable by a simple enough beacon locator, would tend to occur before malign aliens emitting our beacon to trick us; but that's a risk to keep in mind.

if we *do* care about computational efficiency, then there are two main factors we need to account for:

* can our universe be run in polynomial time on whatever computers the superintelligence can build? for example, can it be run in polynomial time on quantum computers, and can quantum computers be built? note that if this is the case, we might need to step through *quantum steps* of *quantum programs* to run the search in the expected time. this doesn't mean we need to build quantum computers ourselves, mind you — superintelligence can just notice that a quantum computer would run the computations we describe efficiently, and build and use those.
* is the underlying program for the universe small? intuitively i believe it is, and i find wolfram's efforts to reproduce the behavior of particles from the standard model using simple graph rewriting to be evidence in that direction. that said, if it is large, then finding that program is an exponential search again — and so, again, we might need to build a search that "favors" our physics to save on exponential search time.

finally, we might want to put a hard bound on the number of tries the superintelligence will run to locate earth. the reason for that is that, if for some reason we messed up something in the beacon locator and it *never, ever* finds earth, then it will instantiate all computations, which appears to me to be a potential [S-risk](https://carado.moe/timeline-codes.html). in fact, even if we do find earth, it may not be worth it if we have to simulate exponentially much potential suffering before running our utopia — what if, after solving alignment, we have a great time, but then decide to eventually fade away after only polynomial time? then we might have created exponentially much suffering in total.

### intermediary simulation

in case isolating minds from this simulation is hard, we could build an intermediary step between the location of earth in simulation-space and booting the peerless simulation proper — superintelligence could, once it has located our beacon, get in touch with our organization *inside the simulation of earth*, and give it extraordinary computational (and maybe physical?) ability within the simulation to either take over everything, or figure out brain plucking-out and then let us press a big "ok, start now" button.

note, however, that we might not want to remain in this intermediary simulation for too long — it is still vulnerable to inner unaligned superintelligences, just like our top-level reality is. we want to get to a safe, sandboxed, computationally weak environment as early as possible.

this is also a great argument for readying ourselves to build the beacon and utilize this contact-from-superintelligence as early as we can; indeed, to make that the first step of implementing the peerless plan. the earlier we are able to take advantage of it, the earlier the time step of the simulation at which superintelligence can help us start bootstrapping towards the proper simulation of the peerless, and the less likely we are to be doomed by other superintelligences, if we need some intermediary "pre-peerless" simulation time.
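
a minimal sketch of the shape of the search described above: enumerate programs, give each one a very large head start of steps (the wolfram-style constant), interleave them, and stop at the first program whose state matches the beacon detector, with a hard bound on total tries. `run`, `matches_beacon`, and the constants are stand-ins of mine, not anything the post specifies:

```python
from itertools import count

HEAD_START = 10**9   # stand-in for the "very large number of steps" constant
MAX_TRIES = 10**12   # hard bound so a broken detector can't run forever

def run(program_index: int, steps: int) -> bytes:
    """Stand-in: run program `program_index` of some fixed enumeration for
    `steps` steps and return a snapshot of its state."""
    raise NotImplementedError

def matches_beacon(snapshot: bytes) -> bool:
    """Stand-in for the deterministic, error-bound statistical beacon detector."""
    raise NotImplementedError

def find_beacon():
    tries = 0
    # dovetail: at round r, program p (p <= r) has been run HEAD_START + (r - p) steps
    for r in count():
        for p in range(r + 1):
            if tries >= MAX_TRIES:
                return None  # give up rather than instantiate all computations
            tries += 1
            steps = HEAD_START + (r - p)
            if matches_beacon(run(p, steps)):
                return p, steps
```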

2021-06-30

## we're all doomed

[a major tech company is now explicitly invested in getting AI to write code](https://copilot.github.com/).

this is a major warning sign; a first step on the explicit path to a [superintelligence](https://en.wikipedia.org/wiki/Superintelligence) [explosion](https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion), an event [already considered relatively likely](https://intelligence.org/faq/#imminent) which, [in the absence of sufficient AI alignment progress](https://intelligence.org/2018/10/03/rocket-alignment/), is overwhelmingly likely to [permanently end all life at least in the observable universe](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer).

the time scale probably lies somewhere between a few years and a few decades, but in any case it seems increasingly unlikely that [the only organization trying to actually figure out AI alignment](https://intelligence.org/) is gonna accomplish that in time.

if you can, go and [help them out](https://intelligence.org/get-involved/), or at least [donate everything you can to them](https://intelligence.org/donate/).

if you're currently working in AI development in any way, *please stop*. whether anything on earth survives this century is gonna be a matter of whether AI alignment is figured out by the time we get enough AI development; by helping the latter, you're making it even more likely that it happens before the former.

on a gloomier note, if you have all the philosophical beliefs required to think it can work, you may want to start preparing to [abandon this timeline](https://carado.moe/quantum-suicide.html) if the singularity starts happening and looks like it's not gonna go well.

edit: see also: [are we in an AI overhang?](https://www.lesswrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang)

2022-02-03

## hackable multiverse

in [a previous post](https://carado.moe/brittle-physics.html) i talk about how hackable physics might allow a superintelligence to take over very quickly (perhaps faster than celerity).

in *[psi rewriting](https://carado.moe/psi-rewriting.html)* i propose that multiversehood can be more cleanly described as a particularly implemented feature of the cosmos, rather than an intrinsic thing.

but, if the cohabitation of multiple timelines is indeed an implemented feature rather than a primitive one, then there is a possibility that it is hackable, and that a superintelligence could hack across timelines.

now, it is to be noted that even if hackability exists, it might still be limited: perhaps there is something like a light cone at play, or perhaps a given timeline can only access a finite number of other timelines.

it is to be remembered that timelines are not slots; they're not variables that hold values; timelines are *the values themselves*. still, hackability could mean some branches of the causality graph stop getting computed, for example.

either way, under these conditions, even quantum immortality might not save us from an X-risk superintelligence, and [given](https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode) [recent](https://openai.com/blog/formal-math/) [developments](https://blog.eleuther.ai/announcing-20b/), we should panic a lot.

2022-04-10

## bracing for the alignment tunnel

it looks like we're gonna invent AI that [kills everyone](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) before we figure out [AI alignment](https://en.wikipedia.org/wiki/AI_alignment).

what this means is that soon, [if not already](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html), we are going to start bleeding [timelines](https://en.wikipedia.org/wiki/Many-worlds_interpretation), **hard**; by which i mean, an increasing ratio of multiverse-instants are gonna become dominated by unaligned AIs — and thus be devoid of population ([probably](https://carado.moe/above-paperclips-2.html)).

after that, there is a period in the (ever-diminishing amount of) surviving timelines where we [ride on quantum immortality](https://en.wikipedia.org/wiki/Quantum_suicide_and_immortality) to solve alignment; after which, we finally reach [U-lines](https://carado.moe/timeline-codes.html), hopefully.

by many theories of [anthropics](https://handsandcities.com/2021/09/30/sia-ssa-part-1-learning-from-the-fact-that-you-exist/), observing existing either before or after that period is a lot more likely than observing existing in it. before the period, it is more likely because there are a lot more populated timelines in which to exist; after the period, it is more likely because we can hopefully recover by allowing the population to increase again.

if i am correct in the reasoning in this post, then being someone who exists in this very narrow alignment tunnel is exceedingly unlikely (barring weird circumstances such as post-singularity mankind choosing to simulate many variants of the tunnel for some reason). indeed, if you do observe being in it, you should think that something weird is going on, and update against the narrative presented in this post.

yet, *again if i am correct*, this is a period where we need to hold tight and work on alignment, perhaps as quickly as possible in order to reduce astronomical waste. very few us's inhabit the tunnel, but those very few us's are the critical ones who we need to care about.

so we need to brace our minds for the alignment tunnel. we need to commit to be persons who, if we observe being in the tunnel, will keep working on alignment even if, *from inside those timelines*, it looks like the reasoning i'm presenting here can't possibly be right. this is perhaps a weird case of instrumental rationality.

(note that i'm not saying the conclusion of observing being in those timelines should be to stop working on alignment; perhaps we would want to work on it either way, in which case we don't have to worry about anything. but i worry that it could lead us to other, worse places.)

2022-05-12

## AI risk plans

people have criticized my [*peerless*](https://carado.moe/the-peerless.html) plan on the grounds that it's too long-term/far-fetched.

while i don't disagree, i think that is only one variable to be taken into consideration. here is a comparison of plans for addressing AI risk, with vague estimates.

|plan|achievable before [X-line](https://carado.moe/timeline-codes.html)¹|chance of [U-line](https://carado.moe/timeline-codes.html)²|[S-risk](https://en.wikipedia.org/wiki/S-risk)²|
|---|---|---|---|
|doing nothing|100%|<[1e-6](https://www.lesswrong.com/tag/orthogonality-thesis)|<1e-6|
|direct alignment|[.1%](https://www.readthesequences.com/Value-Is-Fragile)|5% → .005%|[5%](https://reducing-suffering.org/near-miss/) → .005%|
|[the peerless](https://carado.moe/the-peerless.html)|2%|10% → .2%|1% → 0.02%|

* ¹: assuming significant effort is put behind the plan in question, what is the likelihood that we'll have accomplished the work to what we *believe* to be completion? note that my current AI timelines are pretty pessimistic (we become more likely to die than not this decade).
* ²: *if* we believe we have completed the work; the latter number is adjusted by being multiplied with "achievable before [X-line](https://carado.moe/timeline-codes.html)".

note that the numbers i put here are only very vague estimates; feel free to replace them with your own guesses. but my point is, in order for the peerless to be the plan we should be working on, we don't need it to be *feasible*, we just need it to be *less infeasible than all the other plans*. i think the peerless is more tractable than direct alignment, and only more risky because it has more chances of actually being carried to completion. depending on how scared of S-lines you are, you should push either for doing nothing (and thus [oppose direct alignment](https://carado.moe/against-ai-alignment.html)) or for my plan. (or come up with your own, and then compare it to these!)

not pictured: the plan to [melt all GPUs](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/7im8at9PmhbT4JHsW), because it's just a modifier on what we do afterwards. but yes, melting all GPUs is a great idea if we think we can reasonably pull it off, more so than other plans.
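
the "→" numbers in the table are just the raw conditional estimates multiplied by the plan's achievability; here's a short sketch reproducing that arithmetic (the numbers are the post's own vague guesses, with the "<" bounds treated as values):

```python
# plan: (P(achievable before X-line), P(U-line | completed), P(S-risk | completed))
plans = {
    "doing nothing":    (1.00,  1e-6, 1e-6),
    "direct alignment": (0.001, 0.05, 0.05),
    "the peerless":     (0.02,  0.10, 0.01),
}

for name, (achievable, u_given_done, s_given_done) in plans.items():
    # adjusted numbers from the table: conditional estimate x achievability
    print(f"{name:17s}  U-line: {achievable * u_given_done:.5%}  "
          f"S-risk: {achievable * s_given_done:.5%}")
```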

2021-10-14

## cosmic missing outs

this might be a complete waste of brainflops, but sometimes i wonder about "cosmic missing outs".

my typical example for those is the culture of modern japan.

imagine timelines where japan never became the country it did, and we never got its culture. that'd be a huge thing to miss out on, right? the second best thing might be korean culture or something like that.

but, now that you've imagined this timeline that is missing out on modern japanese culture, imagine the opposite: there are timelines out there that have great cultures we're missing out on — and our missing out on them is on about the same scale as those other timelines missing out on japan's culture.

i'm talking about this because i just thought of some other things of this kind:

what are some unknown things that we are missing out on, where our missing out on them is kind of like other timelines missing out on music?

what are some unknown things that we are missing out on, where our missing out on them is kind of like other timelines missing out on philosophy, science, or math?

these speculations are the closest i can get to putting human minds into perspective and considering the existence of things entirely outside of human conception, the way many things are entirely outside of a mouse's or an ant's ability to conceive.

to be clear: i still can't have that consideration; this is only the closest i get, and it's not quite there.

2021-08-23

## right to death, therefore

because i like [freedom](https://carado.moe/defining-freedom.html) so much, i think people should be generally able to do what they want. but this immediately raises a conundrum: should someone be able to do an action that hampers their *future* freedom?

one relatively extreme case is the ability to commit suicide: it's about as committal as you can get, in terms of actions with future ramifications to oneself. if you choose to get into debt or cut off a limb, that can be pretty hard to get out of, but it still seems less impactful and less inescapable than suicide.

so, should suicide be allowed? (i am of course only talking about reasonable, clear-minded suicide, *informedly* consented to; not coerced suicide, nor suicide out of a compromised ability to make decisions.)

in my opinion, *obviously yes*. the alternative, that people be forced to live until their biology kills them (which we may very well find ways to prevent indefinitely), seems abhorrent to me. given this guarantee, it makes sense to me that any lesser commitments should also be fine.

there are some criticisms one can make of this argument. bad but non-death commitments could tend to increase the amount of suffering people in society at any given moment; and, if people change over time (as they tend to do), then commitments can ramify into a future person who is sufficiently different from the person who made the commitment that it might be considered unreasonable for them to be subject to some excessive amount of effectively unconsented negative effects. a cap on the time duration of commitments, and/or the requirement for people to guarantee that they remain the same person over time until the commitment expires (a technology we currently don't have, but which will become easier to build once we're uploaded and we understand the human mind better), might be reasonable patches for these issues.

2021-07-10

(edit 2021-07-18: this post is probly not very good, as there's some anthropic principle research out there and i haven't read any of it, having just gone off thinking about it on my own.)

## estimating the amount of populated intelligence explosion timelines

the [imminent](https://carado.moe/were-all-doomed.html) [intelligence explosion](https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion) is likely to [go wrong](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer).

how likely?

if you imagine that you live pretty much at the cusp of such an event, you should expect, as per the [anthropic principle](https://en.wikipedia.org/wiki/Anthropic_principle), that there are about as many observer-instants before you as there are after you. (an observer-instant being an instant at which you have a chance of making observations about that fact; see [this](https://www.greaterwrong.com/posts/uSMa6Fj5nMgntpxfo/are-coincidences-clues-about-missed-disasters-it-depends-on) and notably Nick Bostrom's Self-Sampling Assumption.)

i've previously calculated that the future from now until heat death has room for roughly 10^200 human lifespans (of 80 years) (an estimate based on the number of particles in the observable universe, the amount of time until heat death, and the computational cost of running a human brain).

the past, on the other hand, holds about 10^11 human lifespans (most of them not full 80-year lifespans, but such details get amortized by using orders of magnitude).

if intelligence explosion is, as i believe, likely to result either in [total death](https://carado.moe/were-all-doomed.html) or in well-populated futures (whether good or [bad](https://en.wikipedia.org/wiki/Suffering_risks)), then the fact that i'm observing being right next to the event (in time), rather than observing being one of the (in well-populated timelines) countless observers to exist *after* the event, must be compensated by such well-populated timelines being particularly rare within the set of future possible timelines.

how rare? about 1 in (10^200 / 10^11), which is 1 in 10^189.

factors which may make this calculation wrong:

* my 10^200 estimate might be wrong (for example: if each person comes to eat a *lot* of computation resources, then the number of future observers is drastically reduced).
* the 10^11 estimate for the past might be wrong: what if there have been beings in earth's past smart enough to make this observation? it may seem unlikely, but if i am to encompass the immense variety of forms future observers might take, i should account for a wide variety of forms of past observers too.
* because entropy increases, there are (possibly a lot) more future universe states than past universe states. accounting for these extra states in the count of future observers even more massively decreases the expected ratio of well-populated timeline-states, though i'm not sure by how much.
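
the headline ratio is a single division in log space; a small check of the arithmetic (the 10^200 and 10^11 figures are the post's own estimates, not mine):

```python
from math import log10

future_lifespans = 10**200  # post's estimate of room between now and heat death
past_lifespans = 10**11     # post's estimate of human lifespans so far

# the anthropic discount: how rare well-populated futures must be for "now"
# to be a typical observer-instant
rarity = future_lifespans // past_lifespans
print(f"1 in 10^{log10(rarity):.0f}")  # 1 in 10^189
```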

2021-12-09

## non-interfering superintelligence and remaining philosophical progress: a deterministic utopia

[in a previous post](https://carado.moe/against-ai-alignment.html) i talk about the need to accomplish philosophical progress at determining what we value before alignment. i wouldn't be the first to wish for a way to defer that progress until after superintelligence: it would indeed be nice to have this possibility, especially given the seeming imminentness of superintelligence.

alas, typically, making this proposition goes like this: A proposes booting a superintelligence whose values we can somehow reprogram later, once we've figured out what we actually want; B points out that such a superintelligence would have both the incentive and the room to influence the very process by which we decide what to reprogram it to — which is a pretty good point, and usually a reasonable A concedes at that point.

today, however, i am here to offer a continuation to this conversation, from A's side.

my idea is to implement a deterministic computational utopia for people to be uploaded in, whose internals are disconnected from the outside world, such as [∀V](https://carado.moe/∀V.html); if we have infinite compute, then it can be even more free from outside interference.

the trick is to have that utopia's principles be *deontological*, or at least to make them absolute rather than able to be weighed against decisions outside of it: as it largely is in ∀V, ensure everything about utopia has a definite okay or not-okay status, evaluable without knowing anything about the outside of this utopia. either someone's consent is being violated, or it's not. with a set of decisions based only on the state of the utopia being simulated, every decision of the superintelligence about what it does in ∀V is unique: all superintelligence is doing is calculating the next step of this deterministic computation, including its ethical principles, and thus there is nothing superintelligence can do to bias that decision in a way that is helpful to it. all it can do is run the computation and wait to see what it is that persons inside of it will decide to reprogram it to value or do; on the outside/before the singularity, all we need to ensure is that superintelligence does indeed eventually run this computation and apply the changes we decide on once it finds them out.

under these conditions, a device could be set up for us to later reprogram superintelligence somehow, when/if we ever figure out what values we *actually* want, and it wouldn't be able to meaningfully interfere with our decision process, because every decision it takes regarding how our utopia is run is fully deterministic.

not that i think being able to reprogram a superintelligence after boot is necessarily a good idea; but at least, i think it can be a possibility.
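
the core constraint here is that every judgment the superintelligence makes inside the utopia is a pure function of the simulated state, so there is never a choice left for it to bias; a minimal sketch of that shape (the state type, the consent predicate, and the step function are placeholders of mine):

```python
from typing import NamedTuple

class UtopiaState(NamedTuple):
    tick: int
    # ... persons, gardens, pending interactions, etc. (placeholder)

def consent_violated(state: UtopiaState) -> bool:
    """Deontological check: a definite okay/not-okay answer computed from the
    simulated state alone — no reference to anything outside the utopia."""
    return False  # placeholder

def step(state: UtopiaState) -> UtopiaState:
    """One step of the utopia. Pure and deterministic: same input state, same
    output state, so the outer superintelligence has no degrees of freedom in
    'how' to run it — it can only compute it or not."""
    assert not consent_violated(state)
    return state._replace(tick=state.tick + 1)
```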

2021-07-21

## systems and diversity

as i've said in [a previous post](https://carado.moe/lets-not-generalize-politics.html): i really like culture; and, to that end, i like diversity (by which i mean people being more weird and different from one another).

there are many systems that exist today that affect diversity. most of them punish it; not as a coincidence, but because diversity is [a fragile value](https://www.readthesequences.com/Value-Is-Fragile): if you optimize for something else, it will tend to get optimized out.

if you optimize for economic efficiency, diversity gets optimized out because the most easily served economy is one in which demand is relatively uniform.

in general, if you optimize for people having their values satisfied, diversity gets optimized out because the most easily satisfied set of values is relatively uniform and easy to satisfy; if you tell a superintelligence to satisfy everyone's values, the simplest way to achieve that (other than killing everyone) is to make sure everyone has very simple values, like doing nothing all day or dying as soon as possible.

the scary thing about such an optimization is that nothing ever visibly breaks: at no point does an economy headed towards uniformity need to collapse; on the contrary, the more it has optimized out diversity, the more efficient and stable it'll be! so, we need to *[near-intrinsically](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value)* care about preserving diversity, even when all else seems fine. this makes diversity preservation probably my largest concern with capitalism; at least, a system that didn't care about efficiency wouldn't necessarily be aligned against diversity (though it might be aligned against it for other reasons).

social pressures such as [generalizations and expectations](https://carado.moe/lets-not-generalize-politics.html) punish diversity by rewarding conformity.

democracy and general consensus-enforcement systems punish diversity by generally letting majority lifestyles be better supported by society than minority lifestyles.

i do know of one force of human nature which encourages diversity: [fetishism](https://en.wikipedia.org/wiki/Sexual_fetishism#Definitions). fetishism tends to make people prefer things specifically because they go against the norm. as such, i propose that if we value rich culture, we should want to cultivate fetishism.

the takeaway is: in any long-term societal plan, we need to care not just about values being satisfied, but about what values people have to begin with. a clear example in modern capitalism is advertising: it's okay that companies are aligned to satisfy values, but [with advertising they get to affect what values people have to begin with](https://carado.moe/unfair-feedback-loops.html).

(one could argue we could encourage people to [crystallize](https://carado.moe/value-crystallization.html) and [conserve](https://en.wikipedia.org/wiki/Instrumental_convergence#Goal-content_integrity) their values, as well as forbid the creation of new persons; but [i'd rather that not be required](https://carado.moe/rationalist-by-necessity.html))
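
a toy illustration of the "diversity gets optimized out" dynamic: if an optimizer is scored only on total satisfaction under a fixed budget, and it is also allowed to nudge what people want, the winning move is to steer everyone towards the cheapest-to-satisfy value. all the numbers, costs, and names below are made up for illustration:

```python
# cost of satisfying one person who holds each value
cost = {"weird art": 5.0, "exploration": 3.0, "sitting quietly": 0.1}
population = {"weird art": 40, "exploration": 35, "sitting quietly": 25}
BUDGET = 60.0

def satisfied(pop: dict) -> int:
    """Greedy satisfaction under a fixed budget: cheapest values first."""
    left, total = BUDGET, 0
    for value, n in sorted(pop.items(), key=lambda kv: cost[kv[0]]):
        k = min(n, int(left // cost[value]))
        total += k
        left -= k * cost[value]
    return total

print(satisfied(population))  # diverse values: far fewer people satisfied

# an optimizer allowed to nudge values converts everyone to the cheapest one
homogenized = {"weird art": 0, "exploration": 0, "sitting quietly": 100}
print(satisfied(homogenized))  # all 100 satisfied — and diversity is gone
```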

2022-05-16

## smaller X-risk

a superintelligence killing us all is a *superintelligent, very large* [X-risk](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence).

the superintelligence will tile its values in all directions; not just through space at celerity [or faster](https://en.wikipedia.org/wiki/Alcubierre_drive), but also, if it can, by [hacking physics](https://carado.moe/brittle-physics.html) and traversing across, for example, worldlines of the quantum many-worlds.

we may be able to create smaller X-risks, that only make us extinct in this timeline, on this earth. there are a few reasons we may want to do this:

* other timelines might have a better shot than us, and us booting a superintelligence may reduce their chances through weird stuff like intertimeline hacking
* to [avoid S-risks](https://carado.moe/when-in-doubt-kill-everyone.html), including S-risks that may be involved in instrumental cosmic-scale X-risk (maybe superintelligence wants to simulate civilizations in various ways for [acausal trade](https://www.lesswrong.com/tag/acausal-trade) or [other acausal weirdness](https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past) reasons?)
* the next intelligent species on earth is more likely than us to solve alignment before superintelligence, and seems likely enough to be at least a little bit aligned with us (better than cosmic X-risk, at least)
* same as above, but for nearby aliens (whether current or future)

*smaller X-risk*, where we limit damage to just our civilization, seems harder than tiling the cosmos with paperclips; but at least it might be easier than [other plans](https://carado.moe/ai-risk-plans.html).

in a similar way, reducing our civilization to ashes *without* actually becoming extinct might also be a way to get another shot, if we think we're likely to do less badly next time.

remember: this is bigger than all of us. when the fate of the cosmos is at play, we can't afford to be too selfish.

2021-06-16

## my answer to the fermi paradox

the [fermi paradox](https://en.wikipedia.org/wiki/Fermi_paradox) asks, if aliens are supposedly so statistically prevalent, why we haven't received any radio signals from them.

here is my independently-developed (though probly not original) answer:

statistically, it seems reasonable that civilizations would [accidentally invent unaligned superintelligence](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) not long after inventing radio signals (in our case, a couple centuries). in order to perceive those signals, you would need to exist *after* your planet receives those signals, but *before* your planet receives that unaligned superintelligence's [expanding sphere of death](https://carado.moe/moral-cost-of-unaligned-ai.html), which might very well travel at celerity or near-celerity.

thus, given the low probability, it is not surprising that we haven't perceived those signals; for any given alien civilization, in a given timeline, we either haven't received their radio signals yet, or have already been killed by them. seeing as we're alive, this timeline must be one of the former.
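
to see roughly how narrow that window is: if a civilization's sphere of death follows its radio signals by a couple of centuries and travels at nearly celerity, then the period during which another observer can hear the signals without already being dead is only a few centuries wide, regardless of distance. all the numbers below are assumptions of mine, not the post's:

```python
C = 1.0                 # speed of light, in light-years per year
RADIO_TO_ASI = 200      # assumed years between first radio and unaligned ASI
DEATH_SPEED = 0.99 * C  # assumed speed of the expanding sphere of death

def audible_window(distance_ly: float) -> float:
    """Years during which an observer at `distance_ly` can hear the signals
    and still be alive: signals arrive at d/C, death at RADIO_TO_ASI + d/DEATH_SPEED."""
    return (RADIO_TO_ASI + distance_ly / DEATH_SPEED) - (distance_ly / C)

for d in (100, 1_000, 10_000):
    w = audible_window(d)
    print(f"{d:>6} ly: window ≈ {w:,.0f} years "
          f"(≈ {w / 1e9:.1e} of a billion-year history)")
```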

2022-03-21

## are there finitely many moral patients?

wouldn't it be neat if we didn't have to worry about [infinite ethics](https://handsandcities.com/2022/01/30/on-infinite-ethics/)?

i think it is plausible that there are finitely many moral patients.

the first step is to [deduplicate moral patients by computational equivalence](https://carado.moe/deduplication-ethics.html). this merges not only humans and other creatures we usually care about, but also probably a lot of [other potential sources of moral concern](https://reducing-suffering.org/what-are-suffering-subroutines/).

then, i think we can restrict ourselves to patients in worlds that are discrete (like ours); even if there *were* moral patients in non-discrete worlds, it seems to me that from where we are, we could only access discrete stuff. so whether by inherent limitation, plain assumption, or just limiting the scope of this post, i'll only be talking about discrete agents (agents in discrete worlds).

once we have those limitations (deduplication and discreteness) in place, there are finitely many moral patients of any given size; the only way for an infinite variety of moral patients — or more precisely, moral patient moments — to come about is for some moral patients to grow in size forever. while infinite time seems plausible [even in this world](https://carado.moe/ai-alignment-wolfram-physics.html), it is not clear to me that whatever the hell a "moral patient" is can be arbitrarily complex; perhaps at a certain size, i start only caring about a *subset* of the information system that a "person" would consist of, a ["sub-patient"](https://carado.moe/deduplication-ethics.html).
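
the "finitely many patients of any given size" step is just a counting bound; a sketch in symbols, under the assumption (mine, not the post's) that a discrete patient can be encoded as a string over some finite alphabet:

```latex
% number of distinct discrete systems describable with at most n symbols
% over a finite alphabet \Sigma (with |\Sigma| \ge 2):
N(n) \;\le\; \sum_{k=0}^{n} |\Sigma|^{k} \;=\; \frac{|\Sigma|^{\,n+1}-1}{|\Sigma|-1} \;<\; \infty .
% deduplication by computational equivalence maps each patient to one
% equivalence class of such descriptions, so for every fixed size bound n
% there are at most N(n) moral patients; an infinite variety therefore
% requires patients of unboundedly growing size.
```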

2021-07-19

2022-05-09 edit: i have found out that this idea is more thoroughly explored [here](https://reducing-suffering.org/near-miss/).

## botched alignment and alignment awareness

[AI alignment](https://en.wikipedia.org/wiki/AI_control_problem) is [hard](https://intelligence.org/2018/10/03/rocket-alignment/).

an AI developer who doesn't know about the problem of alignment to general human values might accidentally develop a superintelligence which optimizes for something largely unrelated to humans, leading us to an [X-line](https://carado.moe/timeline-codes.html); on the other hand, if they make a botched attempt at alignment to human values, it seems like there's more of a chance (compared to if they don't try) of booting a superintelligence which cares about enough aspects of human existence to tile the universe with some form of humans, but not enough to make those humans' lives actually worth living (goals such as "humans must not die"), resulting in S-lines.

considering this, raising awareness of AI alignment issues may be a very bad idea: it might be much better to let everyone develop not-human-caring-at-all AI and cause X-lines rather than risk them making imperfect attempts resulting in S-lines. or: we shouldn't try to *implement* alignment to human values until we *really* know what we're doing.

contrary to a [previous post of mine](https://carado.moe/were-all-doomed.html), this is a relatively hopeful position: no matter how many timelines end in X-risk, inhabited P-lines can continue to exist and research alignment, hopefully without too many S-lines being created. on the other hand, while it increases the chance of the [singularity](https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion) turning out good by leaving us more time to figure out alignment, it also means that it might take longer than i'd've otherwise expected.

2022-04-12 ★

## The Peerless

In this post, I propose a plan for addressing superintelligence-based risks.

Before I say anything, I will mention a crucial point that a *bunch* of people have ignored despite it being addressed at the bottom of this post: the idea I describe here is very unlikely to work. I'm proposing it because I feel like other plans are *extremely* unlikely to work (see also [this post](https://carado.moe/ai-risk-plans.html)). Yes, we probably can't do this in time. That doesn't make it not our best shot. Rationalists select the best plan, not [the most normal-sounding one](https://www.readthesequences.com/If-Many-Worlds-Had-Come-First).

(spoilers for the premise of [*Orthogonal* by Greg Egan](https://en.wikipedia.org/wiki/Orthogonal_%28series%29)) In Orthogonal, a civilization facing annihilation comes up with a last-minute plan: to create a ship, accelerate it until its time arrow is orthogonal to the time arrow of its home world (which is possible thanks to the alternate physics of their world), and thus give its crew as much time as it needs to figure out how to save their homeworld before reversing course and coming back. This plan is inspired by that, and I'm naming this post after their ship, the *Peerless*.

The short version is: we design a simulation for a bunch of people (probably rationalists) to live in and figure out alignment with as much time as they need, and create a superintelligence whose sole goal is to run that simulation and implement a new goal it will eventually decide on. I've written about this idea previously, but [that post](https://carado.moe/upload-for-alignment.html) is not required reading; this is a more fleshed-out view.

I will be describing the plan in three steps.

### 1. Create virtual people

We need virtual persons inside this world. They will be the ones who figure out alignment. A few possibilities come to my mind; there may be more.

* Brain scans, or full person scans. This is the most obvious solution. I'm not too familiar with the state of that field, but surely there's some work in that direction we can take advantage of; otherwise, we can just throw money at our own initiatives. This option does have the downside that it's quite likely brains aren't sufficient to keep someone functional — we may need to scan or re-implement a bunch more.
* Resimulate earth and pluck out persons. If there's a clever way to locate ourselves in the [universal distribution](https://www.lesswrong.com/posts/XiWKmFkpGbDTcsSu4/on-the-universal-distribution) (or a [computable variant of it](http://www.scholarpedia.org/article/Universal_search)), then we can just make a program that reruns that earth up to, say, now, and then locates some or all human brains, and "downloads" them out of that simulation of earth and into our own simulated environment. For more details on this possibility, see [*finding earth in the universal program*](https://carado.moe/finding-earth-ud.html).
* Scan the earth and pluck out persons. This one seems harder than resimulating earth, but may be doable. It's certainly an idea worth throwing a few people at, to see if there's a clever way to make it work.

The main risk that's been brought to my attention regarding this part is the following: what if the virtual persons end up unaligned from their previous selves? The brain scan scenario seems like the most likely to have that risk, but even then I'm not *too* worried about it; intuitively, it seems unlikely enough to me that all the uploaded persons would come out misaligned in a similar direction, and in a similar direction that would lead them to decide on a [botched alignment](https://carado.moe/botched-alignment-and-awareness.html) for the superintelligence.

An obvious question here is: who gets to be on board the simulation? The values of the people who get uploaded might significantly affect what the superintelligence is aligned to (not all humans necessarily have the same values, [maybe even after thinking about it really hard for a long time](https://handsandcities.com/2021/06/21/on-the-limits-of-idealized-values/)). I don't have any answers here beyond the obvious candidates that occur to me.

Note that I'm *not* proposing augmenting the uploaded minds — at least not for the first simulation iteration (see below). That *does* seem like an exceedingly risky prospect, alignment-wise, and one we don't need to commit to right away.

### 2. Design a virtual environment

Those persons will live in a virtual environment, within which they'll hopefully figure out alignment. However, the environment needs to be a deterministic computation, such that the "outer" superintelligence (the one running the virtual environment) has no ability to affect its outcome; its goal will only be to "implement whatever this computation decides". If the superintelligence wants to implement the actual result of the actual computation, and that computation is fully deterministic (and if we don't simulate anything complex enough to give that superintelligence a foothold inside it), then it has no room to meddle with what we do in it! It's stuck running us until we decide on something.

Some things we need to figure out include:

* How do we incorporate our virtual minds? I think we should go for something plugged in "ad-hoc" rather than embedded into the physics of that world, to preserve the integrity of those minds, which may live for very long times. In addition, in case virtual minds go crazy after living 200 years or something, we may want to allow them to reset themselves and/or die. A reset is not necessarily a big deal: hopefully previous-me can transmit enough information to future-me to continue the work. Maybe there are two me's at any given time, a teacher and an apprentice. Regular resets of individual persons also hopefully help maintain their values over long stretches of time. Many schemes are possible.
* What is this world like? We could make do with something as basic as Minecraft, but it would be better if the virtual persons don't have to go crazy from being stuck in a Minecraft Steve's body with no senses except sight and sound.
* How do we prevent inner superintelligences? Given that this world is deterministic, there is nothing the outer superintelligence can do to prevent internal superintelligences from popping up and breaking everything they can; any safeguards have to be designed into the world itself.
* What about memetic safety? What about virtual violence? What if someone duplicates themself a billion times? And so on.
There are a collection of design challenges, but designing [a peaceful world](https://carado.moe/∀V.html) with [sensible virtual physics](https://carado.moe/game.html) doesn't seem out of reach. They seem like tractable engineering challenges.\n* What is the final voting procedure? Remember that the goal of the simulation is to give the people inside it time to figure out alignment, but they should probably agree on something eventually: either a final decision on alignment, or a \"next iteration\": a new simulation to be ran, which they think has better/safer/still-safe conditions within which to research alignment. In fact, there may be arbitrarily many such \"simulation iterations\". Anyways, the simulation will have a big red button inside of it which says \"okay, we're done\very large\flying blind\successive simulation iterations, each deterministic, but each having the ability to make the next one not deterministic with a large enough vote\a tiny human population survives and eventually repopulates\everything dies forever.\easy part\implement whatever goal is the result of this very big turing machine\plugged in\run this discrete, deterministic thing and then adopt its output as goal\hey, maybe we should study things vaguely related to harnessing what neural nets do, and hope to be able to grab a miracle should it come up\, : , : } |
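a minimal sketch of the "run a deterministic computation, then adopt whatever it decides as your goal" structure that the plan above relies on. `step`, `red_button_output`, and the byte-string state are invented stand-ins for illustration; the real object would be an enormous, carefully specified program rather than a hash chain.

```python
import hashlib
from typing import Optional

def step(state: bytes) -> bytes:
    """one tick of the virtual world: a pure function of the previous state (toy dynamics)."""
    return hashlib.sha256(state).digest()

def red_button_output(state: bytes) -> Optional[bytes]:
    """toy stand-in for the 'okay, we're done' button: returns the decided goal, if any."""
    return state if state[0] == 0 else None

def outer_objective(initial_state: bytes, max_ticks: int) -> Optional[bytes]:
    """the outer AI's whole job: run the computation faithfully and adopt its output.
    because the trajectory is a pure function of initial_state, two faithful runs can
    never disagree, so faithfulness leaves no room to steer what gets decided."""
    state = initial_state
    for _ in range(max_ticks):
        decision = red_button_output(state)
        if decision is not None:
            return decision
        state = step(state)
    return None  # not decided yet; the plan itself imagines no tick limit

print(outer_objective(b"seed world state", max_ticks=10_000))
```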
|
{: , : , : , : , : , : cheating\, : , : } |
|
{: , : , : , : , : , : static, known\dynamic, unknown\, : , : } |
|
{: , : , : , : , : , : what happens when you die?\what are some next things your information system will percieve after facing what should be fatal events in its original body?\, : , : } |
|
{: , : , : , : , : , : values\, : , : } |
|
{: , : , : , : , : , : UD\wait for the machine to halt\UTM\claw\locator\, : , : } |
|
{: , : , : , : , : , : more real\realness\soul juice\naive\implementation\implementations\compress\, : , : } |
|
{: , : , : , : , : , : what is the procedure that formalizes my values system\, : , : } |
|
{: , : , : , : , : , : value learning\, : , : } |
|
{: , : , : , : , : , : implementation details\sub-patient\, : , : } |
|
{: , : , : , : , : , : i think the risk is near 0%\i think the risk is maybe more like 10%\what i bet\carefully avoiding killing everyone\continuing as before\well, whatever, it's only 10% and only 1 out of the two of us believe this\". my reaction is \"what the hell?? i should look into this and stick to bottled water in the meantime\". the average between risk and no risk is not \"i guess maybe risk maybe no risk\"; it's \. the average between ≈0% and 10% is not \; the average is 5%. 5% is still a large risk.\n\nthis is kind of equivalent to *forgetting to multiply*, but to me it's a different problem: here, one is not just forgetting to multiply, one is forgetting that probabilities are numbers altogether, and is treating them as a set of discrete objects that they have to pick one of — and thus can justify picking the one that makes their AI capability work okay, because it's one out of the two objects.\n\n### 3. deliberation ahead vs retroactive justification\n\nsomeone says \ or even \. that *may* be true, but how carefully did you arrive at that consideration?\n\ndid you sit down at a table with everybody, talk about what is safe and needed to do alignment work, and determine that AI capability work of the kind you're doing is the best course of actions to pursue?\n\nor are you already committed to AI capability work and are trying to retroactively justify it?\n\ni know the former isn't the case because there *was* no big societal sitting down at a table with everyone about cosmic AI risk. most people (including AI capability devs) don't even meaningfully *know* about cosmic AI risk; let alone deliberated on what to do about it.\n\nthis isn't to say that you're necessarily wrong; maybe by chance you happen to be right this time. but this is not how you arrive at truth, and you should be highly suspicious of such convenient retroactive justifications. and by \"highly suspect\" i don't mean \; i mean \.\n\n### 4. it's not a prisoner's dilemma\n\nsome people think of alignment as a coordination problem. \\n\nthis is *not* how it works. such prisoner's dilemmas work because if your opponent defects, your outcome if you defect too is worse than if you cooperate. this is **not** the case here; less people working on AI capability is pretty much strictly less probability that we all die, because it's just less people trying (and thus less people likely to randomly create an AI that kills everyone). even if literally everyone except you is working on AI capability, you should still not work on it; working on it would *still only make things worse*.\n\n\\n\n…and? what's that supposed to justify? is your goal to *cause evil as long as you only cause very small amounts of evil*? shouldn't your goal be to just generally try to cause good and not cause evil?\n\n### 5. we *are* utilitarian… right?\n\nwhen situations akin to the trolley problem *actually appear*, it seems a lot of people are very reticent to actually press the lever. \\n\ni understand this and worry that i am in that situation myself. i am not sure what to say about it, other than: if you believe utilitarianism is what is *actually right*, you should try to actually *act utilitarianistically in the real world*. 
you should *actually press actual levers in trolley-problem-like situations in the real world*, not just nod along that pressing the lever sure is the theoretical utilitarian optimum to the trolley problem and then keep living as a soup of deontology and virtue ethics.\n\ni'll do my best as well.\n\n### a word of sympathy\n\ni would love to work on AI capability. it sounds like great fun! i would love for everything to be fine; trust me, i really do.\n\nsometimes, when we're mature adults who [take things seriously](https://carado.moe/life-refocus.html), we have to actually consider consequences and update, and make hard choices. this can be kind of fun too, if you're willing to truly engage in it. i'm not arguing with AI capabilities people out of hate or condescension. i *know* it sucks; it's *painful*. i have cried a bunch these past months. but feelings are no excuse to risk killing everyone. we **need** to do what is **right**.\n\nshut up and multiply.\n\n", "url": "n/a", "id": "84c02d9fcfc19ccebe9d1f67fb243d18"} |
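the probability-averaging point above is just arithmetic: giving two forecasters equal credence means averaging their estimates, not picking one of them. a one-function sketch, using the post's own numbers and a plain linear opinion pool (one standard, simple way to combine estimates):

```python
def pooled_probability(estimates, weights=None):
    """linear opinion pool: credence-weighted average of probability estimates."""
    if weights is None:
        weights = [1 / len(estimates)] * len(estimates)
    return sum(p * w for p, w in zip(estimates, weights))

print(pooled_probability([0.00, 0.10]))  # 0.05 -- "i think ~0%" vs "i think 10%"
```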
|
{"source": "carado.moe", "source_type": "markdown", "title": "/Users/dan/code/alignment-research-dataset/align_data/common/../../data/raw/carado.moe-cleaned-up/questions-cosmos-computations", "authors": "n/a", "date_published": "n/a", "text": "2022-01-07 ★\n\n## questions about the cosmos and rich computations\n\n**computation**: a running state of any [model of computation](https://en.wikipedia.org/wiki/Model_of_computation); for example, a specific [SKI calculus expression](https://en.wikipedia.org/wiki/SKI_combinator_calculus), or a specific turing machine with its rules, current state, and current tape values. given that any model of computation can run the computations of any other model, it does not really matter which one we choose, and i will be juggling between different models throughout this post.\n\n### 1: is any computation rich ?\n\n**rich**: a computation is rich if it is generally [computationally irreductible](https://en.wikipedia.org/wiki/Computational_irreducibility). as a tentative formal definition for richness, i'm tempted to say that a computation is rich if there is no function able to generally predict any of its future states in a time [less than linear](https://en.wikipedia.org/wiki/Computational_complexity_theory) in the number of steps it would take to arrive at that state normally. for example, [rule 30](https://en.wikipedia.org/wiki/Rule_30) *looks* rich: it looks like to calculate the value of cell at index `i` at time step `j`, it generally takes about `O(abs(i) × j)` steps of computation. on the other hand, it looks like [rule 54 and rule 60](https://mathworld.wolfram.com/ElementaryCellularAutomaton.html) can generally have their cells predicted in time logarithmic to the number of computational steps it would naively take to arrive at them.\n\nnote that richness is not the same as halting: while a halting computation is necessarily not rich, a non-halting computation can either be non-rich (like rule 54), or rich (possibly like rule 30).\n\nit seems clear to me that rich computations exist: for example, it is known that sorting a list of `n` elements takes `O(n × log(n))` steps, and thus a computation runnig a sorting algorithm of that complexity cannot have its result predicted in a smaller time complexity than it took to calculate naively. the ease with which i can demonstrate that, however, makes me doubt my tentative formal definition; maybe something more akin to [polynomial time complexity](https://arxiv.org/abs/1108.1791) would better capture the essence of computational irreductibility: perhaps a better determining question for richness could be \ or \\n\n### 2: does the cosmos instantiate any rich computation ?\n\nto **instantiate a computation** means for that computation to, somewhere, eventually, be ran (forever or until it halts). i start from the fact that i'm observing a coherent-looking universe, deduce that at least *some* computation is happening, and which other computations are happening (as in, are being observed somewher, or which i could have observed). 
as [clarified before](https://carado.moe/limiting-real-universes.html), one can't just assume that all computations are equally happening: things look way too coherent for that, there seems to be a bias for coherence/simplicity (one which i've tentatively attributed to [how soon that computation spawns](https://carado.moe/less-quantum-immortality.html)).\n\nlooking at the cosmos (the set of instantiated computations) from a computational perspective, it seems like it contains at least our universe, which is expanding. if this expansion is, [as has been hypothesized](https://www.wolframphysics.org/technical-introduction/potential-relation-to-physics/cosmology-expansion-and-singularities/), caused by the computational substrate of the universe manufacturing new vertices of spacetime, and computations can run on this new fabric as it is produced, then it's possible that [some computations can run forever](https://carado.moe/ai-alignment-wolfram-physics.html), including potentially rich ones.\n\nhowever:\n\n### 3: does the cosmos contain causal bubbles ?\n\na **causal bubble** is a piece of computation that can run forever with the guarantee that it won't be physically interfered with from the outside; see [yes room above paperclips](https://carado.moe/above-paperclips-2.html).\n\nfor example, while one can build [a turing machine inside conway's game of life](https://www.conwaylife.com/wiki/Turing_machine), a stray object on the same conway's game of life plane can eventually collide with said machine and break its computational process.\n\nhowever, in some [graph rewriting rulesets](https://en.wikipedia.org/wiki/Graph_rewriting), as well as in expression-rewriting systems with nested expressions such as a varient of [SKI calculus](https://en.wikipedia.org/wiki/SKI_combinator_calculus) or [lambda calculus](https://en.wikipedia.org/wiki/Λ_calculus) where the evaluation rule expands all sub-expressions, some pieces of computation can run without ever being physically interfered with by other pieces of the computation.\n\n(i'm specifying \ because acausal coordination or mutual simulation can lead to interference, but at least that interference is up to the singleton (such as a superintelligence) \ said bubble (if any); they can just choose to never acausally coordinate and to never simulate other bubbles)\n\nin our own spacetime, it seems like causal bubbles exist thanks to the expansion of spacetime: some pairs of points get further apart from one another faster than celerity, and thus should never be able to interact with one another so long as that expansion continues and FTL travel is impossible. 
under the perspective of wolfram physics, however, it is not clear that both of those things will necessarily be the case forever; spacetime might be [hackable](https://carado.moe/brittle-physics.html).\n\nnote that the splitting of universes with nondeterministic rules (such as ours with quantum mechanics) into different causally isolated timelines is another way for causal bubbles to exist, assuming the implementation of such a nondeterministic universe is that all possibilities are instantiated at any nondeterministic choice.\n\nthe presence of causal bubbles allows some pieces of spacetime to [survive superintellingences appearing in other pieces of spacetime](https://carado.moe/unoptimal-superint-doesnt-lose.html), while the absence of causal bubbles makes it that a superintelligence or collection of superintelligences probably eventually does take over everything.\n\nif they exist, then causal bubbles are a blessing and a curse: they save us from alien superintelligences and, [between timelines](https://carado.moe/timeline-codes.html), from our own superintelligences, but they might also ensure that our own aligned superintelligence (once we have figured out alignment) cannot reach all computation, and thus that any random person has a good chance of existing in a bubble that hasn't been \"saved\" by our aligned superintelligence.\n\n### 4. is a universal-complete computation instantiated ?\n\n[**universal complete computations**](https://carado.moe/universal-complete.html) (such as the annex in [this post](https://carado.moe/less-quantum-immortality.html)) instantiate *all* computations, over time.\n\nif one takes the perspective that a top-level \"root\" bubble existed first, then the answer to this question is up in the air.\n\nmaybe we are this root computation, and the deterministic fate of the cosmos (in all timelines) is, for example, for physics to break at some point and kill everything, or for a superintelligence to appear at some point and kill everything (the two being [pretty equivalent](https://carado.moe/brittle-physics.html)) leaving [no room for bubbles](https://carado.moe/above-paperclips.html).\n\nmaybe the root bubble [does spawn](https://carado.moe/above-paperclips-2.html) a finite and small (after deduplicating by identical computations) number of bubbles, and each of those is fated to be killed in its entirety.\n\nor, maybe somewhere in this chain, one of the bubbles spawns *many* new, different bubbles, at which point it becomes likely enough that eventually one of those bubbles either is, or itself later spawns, a universal-complete program. in which case, the initial set of the \"root\" bubble and maybe a few other next bubbles serve together as merely the boot process for the program that will eventually spawn *all computations*.\n\nit might be interesting to find out how small universal-complete programs can get, both in bubble-friendly frameworks like systematically-expanded SKI calculus, and bubble-unfriendly frameworks like cellular automata; to get an idea how likely they are to randomly be stumbled into.\n\n", "url": "n/a", "id": "eac94e7b4b1d3694508bc6c53915545d"} |
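a small, direct rule 30 implementation, to make concrete the "naive cost" that the richness definition in the post above is measured against: the obvious way to learn cell `i` at step `j` is to simulate the whole light cone, which is roughly `O(j^2)` elementary updates here (the same order as the post's `O(abs(i) × j)` when `abs(i)` is comparable to `j`). this only illustrates the definition; it is not evidence that rule 30 really is rich.

```python
def rule30_cell(i: int, j: int) -> int:
    """value of cell i after j steps of rule 30, starting from a single live cell at index 0."""
    if abs(i) > j:
        return 0  # outside the light cone of the seed cell, so still 0
    # after t <= j steps every live cell lies in [-t, t], so a window of
    # [-(j+1), j+1] with implicit zeros outside reproduces the infinite tape exactly
    width = j + 1
    row = {k: 1 if k == 0 else 0 for k in range(-width, width + 1)}
    for _ in range(j):
        # rule 30 update: new cell = left XOR (center OR right)
        row = {k: row.get(k - 1, 0) ^ (row.get(k, 0) | row.get(k + 1, 0)) for k in row}
    return row[i]

print([rule30_cell(i, 4) for i in range(-4, 5)])  # [1, 1, 0, 0, 1, 0, 0, 0, 1]
```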
|
{"source": "carado.moe", "source_type": "markdown", "title": "/Users/dan/code/alignment-research-dataset/align_data/common/../../data/raw/carado.moe-cleaned-up/exact-minds-in-an-exact-world", "authors": "n/a", "date_published": "n/a", "text": "2021-10-13\n\n## exact minds in an exact world\n\n[in the sequences](https://www.readthesequences.com/Zero-And-One-Are-Not-Probabilities) it is argued that 0 and 1 are not probabilities; that these \"certainty ratios\" aren't meaningful. but, i can think a situation that challenges this.\n\nimagine a fully deterministic world — for example, running on [a cellular automaton](https://en.wikipedia.org/wiki/Cellular_automata) — and imagine that in this world there are some intelligences (either artificial or natural) that utilize this determinism to have the ability to make flawless logical deductions (for example, [automated theorem proving](https://en.wikipedia.org/wiki/Automated_theorem_proving) algorithms running on computers that cannot ever have undetected [hardware failures](https://en.wikipedia.org/wiki/Soft_error)). for example, if they think about mathematics, under the axioms under which they work, 2 + 2 will always equal to 4, and doing any mathematical computation will either result in them knowing they don't have the computational resources to do the operation, or the result being guaranteedly true with the same certainty as that the cellular's automaton's rules will be applied next tick.\n\nnow, these beings still have a use for probability and statistics: those can be used to talk about parts of the world that they don't have complete information about. but, there will be some contexts, both purely in their minds (such as logic or math) or sometimes in the real world (they could make assessments like \) that *will* be, functionally, certain.\n\nit could be argued that they *should* still be weighing everything by the probability that there might be unknown unknowns; for example, their cellular automaton might have rules that apply only very rarely, and that they never got a chance observe yet but might yet observe later. but, let's say that they *assume* the rules of their world are exactly as they think, and let's say that they happen to be correct in that assessment. does that not make some of their deductions actually entirely certain?\n\n\n\nurln/aid737920c513cea8734c72a5577c188bbf |
|
{"source": "carado.moe", "source_type": "markdown", "title": "/Users/dan/code/alignment-research-dataset/align_data/common/../../data/raw/carado.moe-cleaned-up/quantum-suicide", "authors": "n/a", "date_published": "n/a", "text": "2021-04-28\n\n**DISCLAIMER: the idea described here stands on tenuous philosophical ground and should generally _not_ be considered worth the risk of attempting, because it may be wrong; in addition, this plan should _not_ be utilized to retroactively justify depression-based suicide — retroactive justification is erroneous; if you are feeling suicidal, contact [your local suicide crisis line](https://en.wikipedia.org/wiki/List_of_suicide_crisis_lines).**\n\n## Plausible Quantum Suicide\n\nin this post, i make a collection of arguments and follow them to what seems to me like what should be their conclusion. i don't have strong confidence in every argument, but i'd consider using this plan worth it to avoid sufficiently bad scenarios, such as a singularity gone wrong (which it probably will).\n\n### 1. The No Obtainable Evidence Argument For Materialism\n\nby materialism i mean here something maybe closer to [physicalism](https://en.wikipedia.org/wiki/Physicalism), but maybe even a stronger version of it:\n\nthere is no special soul that people have, [you are your information system](https://carado.moe/you-are-your-information-system.html).\n\ni make another strong claim: time isn't particularly \"moving\" in any metaphysical sense, there is no \"special present time\". the universe can be seen as a graph of probabilistically connected states, and a random walk through those states matches the notion of entropy pretty well (which can be seen as defining the direction of time, because we happen to have memories of universe states with generally lower entropy), but that's a local notion.\n\nthe *illusion* that the present is particularly present, that we have a particular soul, or even that morality/ethics is in some sense objective, stems from the fact that we *inhabit our brain's model*: we don't get to see our brain from the outside as modeling its environment, we live right inside it, and we don't spawn with a clear distinction between normative ideas (morality/ethics) and descriptive ideas (statements of fact about the world).\n\nbut those illusions *must* be wrong, and here is the argument: as far as we can tell, there is no possible way for a brain to obtain evidence that its present time is particularly real; therefore, it must be erroneous for any brain to rationally generate the idea that its present is particularly real. same goes for having what i call a \"read-only soul\" that some people believe in (a special observer thing that observes a person's mind state from outside the material universe, but cannot causally affect it). see also [these](https://www.readthesequences.com/Zombies-Zombies) [three](https://www.readthesequences.com/Zombie-Responses) [posts](https://www.readthesequences.com/The-Generalized-Anti-Zombie-Principle).\n\n### 2. 
Limiting Real Universes\n\nmy post [\](https://carado.moe/limiting-real-universes.html) isn't that good, so i'll try to explain it more clearly here:\n\nif for some reason all possible states of our universe were equally real, then you should expect to observe widely anomalous phenomena around you, because most randomly picked states our universe can be in don't have to be coherent.\n\nbut the fact that we seem to observe a continuously very coherent universe tells us that there must be some sense in which coherent universe states, that stem from a continuous history following an increasing entropy timeline, must be particularly more real.\n\nit's not that your magical soul has been blessed with inhabiting universe states: as explained in argument 1, you shouldn't have any reason to think you have such a thing.\n\nit's not that [you can only exist to observe universe states that are coherent, because you wouldn't exist in incoherent ones](https://en.wikipedia.org/wiki/Anthropic_argument): there are still way more possible universe states where everything is incoherent except your brain, than possible universe states where everything is coherent including your brain. for any amount of state of stuff you require to say you can exist to observe the world, the rest of the universe should still generally seem incoherent if all possible universe states are equally real.\n\nit's not that you have been following your own special arrow of time: even though i debunk that you should even think this makes sense in argument 1, another reason is that, even if some of your brain states have a past-arrow-of-time and not others, there's no reason for you to think you're one of the former. if all possible universe states were equally real, you'd likely be a version of your brain that *thinks* it has a past history but doesn't, than one that does.\n\n### 3. Many-Worlds Is True\n\n[Eliezer Yudkosky makes a good argument](https://www.readthesequences.com/If-Many-Worlds-Had-Come-First) that we should currently believe in the many-worlds [interpretation of quantum mechanics](https://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics); but even if that turned out wrong, [Max Tegmark makes another good argument](https://space.mit.edu/home/tegmark/crazy.html) that even just with a single history universe, all possible initial states of the universe are represented each in an infinite amount of instances by just being variations of initial conditions and random quantum determinations at different places of the infinite universe.\n\nwhat matters here is that basically one should expect every reasonably possible outcome to be a real instance of universe that exists somewhere. because of argument 2, some possibilities are particularly real, and because of argument 3 (this one), that set or fuzzy set of coherent possibilities should be widely instanced: at each possible fork (they're not *really* forks, but that's a good enough analogy from our point of view), every plausible outcome is realized as a real or fairly real universe.\n\n### 4. 
Quantum Suicide Works\n\nif the previous 3 arguments stand, then a more general version [quantum suicide](https://en.wikipedia.org/wiki/Quantum_suicide_and_immortality) should be achievable: by dying instantly in one timeline, there is no version of you in that timeline able to experience it, and the only remaining future you's able to experience anything are the you's in other timelines.\n\nbecause of argument 1, we know that saving a backup of your brain, and then later dying and restoring another copy of yourself from backup, is equivalent to just losing memories of the time after the backup: it's unfortunate that that time and those memories were \"lost\", but it's not a big deal, you can just keep going.\n\ngiven that, even non-instantaneous, after-the-event suicide works: if you commit yourself to committing suicide in all timelines where an event goes wrong, then the only future you's able to experience any time after that suicide will be the ones in the timelines in which that event went well (or at least in which you think it did); you lose a bit of time and memories from those timelines in which you didn't kill yourself *literally instantly* after the thing went wrong, but it's just equivalent to a restoration from backup: the backups are automatically saved by the universe as forks of that previous universe state before the event's outcome was determined.\n\n### ramifications\n\nif this is true, then every person is strongly empowered: by committing themselves to committing suicide in every timeline in which even the slightest thing goes wrong, they are able to restrict the set of instances of them purely to timelines in which everything goes the way they want.\n\nbut, it also creates a problem if the practice becomes widespread: every person will end up observing a timeline in which increasingly greater amounts of people who they don't particularly care about, have committed suicide to go to other timelines. if i play the lottery and commit suicide if i lose, then you have as many timelines as players, each with 1 alive lottery winner, and all the others players having committed suicide. even if you don't care about living in such a world, economics cares: pre-automation, you *want* other people in your society to keep living so they can help create together the value that you can enjoy.\n\nyou can choose to commit suicide in all timelines in which too many *other* people also have committed suicide, in an acausally-collaborative effort to search for a timeline in which everyone is happy; but if no such timeline exists, then everyone will just have *truly* committed suicide out of existence.\n\npre-automation, this creates a [coordination problem](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/), where each person wants to be able to commit suicide, but doesn't want other people to be able to. there is much ethical and political discourse to be had on the right to commit suicide; i generally lean on the libertarian side of things, but if quantum suicide becomes enough of a problem pre-automation that society looks like it's not gonna be able to get to post-automation, then we might need to consider at least disincentivizing it somehow.\n\npost-automation, there is still a problem for people who want to live in a world which has other people in it, but the problem is much milder. 
it might be bringing the [end of the global era](https://carado.moe/global-era.html) even earlier than would have happened otherwise, but that's not necessarily *that* big of a deal, and there's an option for people who want to inhabit a more densely populated timeline: just lower your standard for non-population-based outcomes, such that you commit suicide less often and thus exist in more timelines. if many people do this, they should be able to find each other in many densely populated timelines.\n\nthis *does* explain the anthropic argument of, \; other than the extinction of able-to-observe beings, this can be explained by able-to-observe beings just becoming really trigger-happy about quantum suicide, such that each civilization of able-to-observe beings' \"observedspace\" is condensed to their pre-finding-out-about-quantum-suicide era; their population after that is much more sparsely distributed across timelines, even without extinction events.\n\nas for me, i don't intend to start committing quantum suicide any time soon. i don't have strong enough confidence in the arguments posted here to take the risk of actually permanently dying. but it is definitely a possibility i'll consider, *especially* as we get closer to the singularity happening, and the existential risks that it poses.\n\n", "url": "n/a", "id": "528c9bb60017713f088eb0995c095cf9"} |
|
{"source": "carado.moe", "source_type": "markdown", "title": "/Users/dan/code/alignment-research-dataset/align_data/common/../../data/raw/carado.moe-cleaned-up/nonscarce-compute-optimize-out", "authors": "n/a", "date_published": "n/a", "text": "2021-12-25\n\n## non-scarce compute means moral patients might not get optimized out\n\ni tend to assume AI-borne [X-lines are overwhelmingly more likely than S-lines or U-lines](https://carado.moe/timeline-codes.html), because in almost all cases (such as [paperclip manufacturing](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer)) the AI eventually realizes that it doesn't need to waste resources on moral patients existing (whether they're having an okay time or are suffering), and so recycles us into more resources to make paperclips with.\n\nbut [if wolfram's idea is correct](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/#how-it-works) — a possibility which [i'm increasingly considering](https://carado.moe/ai-alignment-wolfram-physics.html) — it may very well be that computation is not a scarce resource; instead, printing ever more paperclips is a trivial enough task, and the AI might let \ of computation exist which are useless to its goals, even growing bubbles.\n\nand those could contain moral patients again.\n\nof course this reduces to the [*no room above paperclips* argument](https://carado.moe/above-paperclips.html) again: inside that bubble we probly just eventually make our own superintelligence again, and *it* takes over everything, and then either bubbles appear again and the cycle repeats, or eventually in one of the layers they don't anymore and the cycle ends.\n\nbut, i still think it's an interesting perspective for how something-maximizing AIs might not need to actually take over *everything* to maximize, if there's nonscarce compute as wolfram's perspective can imply.\n\n", "url": "n/a", "id": "9da668cef3cd7dd34d4f3ee7c109a853"} |
|
{"source": "carado.moe", "source_type": "markdown", "title": "/Users/dan/code/alignment-research-dataset/align_data/common/../../data/raw/carado.moe-cleaned-up/unoptimal-superint-loses", "authors": "n/a", "date_published": "n/a", "text": "2021-11-20\n\n## unoptimal superintelligence loses\n\n(edit: [maybe it doesn't](https://carado.moe/unoptimal-superint-doesnt-lose.html))\n\nwhat if a phenomenon is powerful enough to kill everyone, but not smart enough to be optimal at reasoning? (such as a grey goo event, or a \"dumb\" superintelligence with a faulty decision mechanism)\n\nthen, in all likelihood, it eventually dies to an alien superintelligence that is better at decision-making and thus at taking over everything.\n\nour superintelligence doesn't just need to be aligned enough; it needs to be aligned enough, and on the tech side, to be maximally intelligent. hopefully, it's smart enough to start making itself smarter recursively, which should do the trick.\n\nthe point is: when talking about the eventual superintelligence(s) that reign over the cosmos, assume whichever one(s) have \"won\" to be optimal at decision making, because the others probly got outcompeted.\n\n", "url": "n/a", "id": "1bb0735fe11a4b8991d4c8ecd385586e"} |
|
{"source": "carado.moe", "source_type": "markdown", "title": "/Users/dan/code/alignment-research-dataset/align_data/common/../../data/raw/carado.moe-cleaned-up/universal-complete", "authors": "n/a", "date_published": "n/a", "text": "2021-07-16\n\n## universal complete\n\nunder [a turing-complete model of computation](https://en.wikipedia.org/wiki/Model_of_computation), there are some initial-states or initial-states-and-rulesets which eventually contain an algorithm that iterates over all possible algorithms and runs them.\n\nin single-threaded models, it can do this by having an increasingly long list of algorithms that it runs by one step each; it's not an issue if each algorithm runs increasingly slowly, as long as it keep running.\n\ni choose to call such initial-states[-and-rulesets] *Universal Complete*.\n\nthey contain all turing computation based universes (and thus each other, if indirectly); so, for example, if [Rule 30 with one alive cell](https://en.wikipedia.org/wiki/Rule_30) is Universal Complete, then it contains all computable universes (including ours).\n\nthis could be interesting because proving that property about some frameworks means that programming a particular algorithm starting from that initial-state[-and-ruleset] is just a matter of *locating* it.\n\nit could also be interesting because it might turn out that many things that *look* sufficiently chaotic (such as Rule 30 with one alive cell) are effectively universal complete, and so [Wolfram's quest](https://www.youtube.com/watch?v=0bMYtEKjHs0) for the rule that describes our universe [in his hypergraph-rewrite system](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/) might be reductible to \"whichever simplest initial-state-and-ruleset starts all algorithms\"; though his idea of *running every rule at every step* might kind of functionally do that.\n\n### appendix: a simple universal-complete program\n\nhere is a simple algorithm implemeting this, iterating over the countable set of turing machines.\n\nx ← simplest turing machine\nl ← empty list\nloop:\n\tfor machine in l:\n\t\tupdate machine by one step of computation\n\n\tappend x to l\n\tx ← next simplest turing machine after x\n\n\n", "url": "n/a", "id": "82f6edc7116fe7252e2b3743501c880f"} |
|
{"source": "carado.moe", "source_type": "markdown", "title": "/Users/dan/code/alignment-research-dataset/align_data/common/../../data/raw/carado.moe-cleaned-up/what-is-value", "authors": "n/a", "date_published": "n/a", "text": "2021-07-25 ★\n\n## what is value?\n\ni've come to clarify my view of value sufficiently many times that i feel like having a single post i can link to would be worth it. this is that.\n\nwhat i call *value* is *things we care about*; *what determines what we ought to do*. i use \ and \ interchangeably to generally mean the study of value.\n\na lot of this post is just ethics 101, but i feel it's still nice to have my own summary of things.\n\nfor more on values, read [the sequences](https://www.readthesequences.com/), notably [book V](https://www.readthesequences.com/Book-V-Mere-Goodness).\n\nsee also [this post on how explicit values can come to be](https://slatestarcodex.com/2018/07/24/value-differences-as-differently-crystallized-metaphysical-heuristics/).\n\n### consequentialism vs deontology\n\na first distinction is that between [consequentialism](https://en.wikipedia.org/wiki/Consequentialism), where values are about *outcomes*, and [deontology](https://en.wikipedia.org/wiki/Deontology), where values are about *actions*.\n\nthe [trolley problem](https://en.wikipedia.org/wiki/Trolley_problem) is the typical example of a thought experiment that can help us determine whether someone is a consequentialism or a deontologist: a consequentialist will press the lever because they care about the outcome of people being alive, whereas a deontologist will not press the lever because they care about the action of causing a death.\n\ni am a consequentialist: i care about outcomes. that said, consequentialism has to be followed to the end: if someone says \"well, a consequentialist would do this thing, which would eventually lead to a worse world\", then they're failing to understand consequentialism: if the eventual outcome is a worse world, then a consequentialist should oppose the thing. to that end, we have [rule consequentialism](https://en.wikipedia.org/wiki/Consequentialism#Rule_consequentialism): recognizing that committing to certain rules (such as \) help us achieve generally better outcomes in the longer term.\n\na special case of consequentialism is [utilitarianism](https://en.wikipedia.org/wiki/Utilitarianism), in which the consequential outcome being cared about is some form of positive outcome for persons; generally happiness and/or well-being. 
i tend to also value people getting their values satisfied and having [self-determination/freedom](https://carado.moe/core-vals-exist-selfdet.html) (not valuing self-determination [has issues](https://slatestarcodex.com/2018/10/24/nominating-oneself-for-the-short-end-of-a-tradeoff/)), possibly moreso than happiness or well-being, so i don't know if i count as a utilitarian.\n\n### intrinsic vs instrumental\n\ni make a distinction between [instrumental values, and intrinsic values](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value) (the latter can also be called \"core values\", \"axiomatic values\", \"ultimate values\", or \"terminal values\"; but i try to favor the term \"intrinsic\" just because it's the one wikipedia uses).\n\ninstrumental values are values that one has because it helps them achieve other values; intrinsic values are what one ultimately values, without any justification.\n\n* \\n* \\n* \\n\nany theoretical query into values should be a sequence of instrumental values eventually leading to a set of intrinsic values; and those cannot be justified. if a justification is given for a value, then that value is actually instrumental.\n\njust because intrinsic values don't have justifications, doesn't mean we can't have a discussion about them: a lot of discussion i have about values is trying to determine whether the person i'm talking to *actually* holds the values that they *believe* they hold; people *can be* and very often *are* wrong about what values they hold, no doubt to some extent including myself.\n\none can have multiple intrinsic values; and then, maximizing the *satisfaction* of those values, is often the careful work of weighing those different intrinsic values in tradeoffs.\n\nthis isn't to say intrinsic values don't have causal origins; but that's a different matter from moral justificaiton.\n\na lot of the time, when just saying \"values\", people are talking about *intrinsic* values rather than all values (including instrumental); i do this myself, including throughout this post.\n\n### knowing one's values\n\nmost people don't have a *formalized* set of values, they just act by whatever seems right to them in the moment. but, even to [rationalists](https://www.readthesequences.com/What-Do-I-Mean-By-Rationality) like me, knowing what values one has is *very hard*, even moreso in a formalized manner; if we had the complete formal description of the values of even just one person, we'd have gone a long way towards solving [AI alignment](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/), which is [by extremely far](https://carado.moe/ai-alignment-wolfram-physics.html) the [single most important problem humankind has ever faced](https://carado.moe/were-all-doomed.html), and [is gonna be very difficult to get right](https://www.readthesequences.com/Value-Is-Fragile).\n\nto try and determine my own values, i generally [make a guess and then extrapolate how a superintelligence would maximize those values to the extreme and see where that fails](https://carado.moe/core-vals-exist-selfdet.html). but, even with that process, it is very hard work, and like pretty much everyone else, i don't have a clear idea what my values are; though i have some broad ideas, i still have to go by what feels right a lot of the time.\n\n### selfishness vs altruism\n\nthis is *not* about how someone ultimately only wants *their values* to be satisfied; this is true *by definition*. 
this is about whether those values can be *about* something other than the person having the values.\n\npeople seem to be divided between the following positions:\n\n1. all values are ultimately selfish; there is no meaningful sense in which someone can *truly, intrinsically* care about anything outside themselves.\n2. someone can have values about themselves, or have values about the rest of the world, or both.\n3. all values are ultimately about the world; there is no meaningful sense in which someone can actually care about their own person in particular (for example because the notion of identity is erroneous).\n\ni hold position 2, and **strongly** reject position 1, though it seems very popular among people with whom i have talked about values; i see no reason why someone can't hold a value about the world outside of themselves, such as *intrinsically* wanting other people to be happy or *intrinsically* wanting the world to contain pretty things. for more on that, see [this post](https://www.lesswrong.com/posts/XPErvb8m9FapXCjhA/adaptation-executers-not-fitness-maximizers) and [this post](https://www.readthesequences.com/Terminal-Values-And-Instrumental-Values) from the sequences.\n\nposition 3 can make some sense if you deconstruct identity, but i believe identity [is a real thing that can be tracked](https://carado.moe/you-are-your-information-system.html), and so the outcome of which you can absolutely happen to particularly care about.\n\n### value preservation\n\n[value preservation](https://en.wikipedia.org/wiki/Instrumental_convergence#Goal-content_integrity) is the notion that, if you know that you value something (such as being wealthy or the world containing pretty things), you should probly try to avoid becoming someone who *doesn't* value those things, or worse: someone who values the opposite (such as being poor or the world containing only ugly things).\n\nthe reason for this is simple: you know that if you become someone who values being poor, you'll be unlikely to keep taking actions that will lead you to be wealthy, which goes against your current values; and your goal is to accomplish your values.\n\nsome people argue \. but it's really not! we established that your values is \"being wealthy\", not \"being someone whose values are satisfied\". in fact, \"being someone whose values are satisfied\" is meaningless to have as a particular value; the fact that you want your values to be satisfied is implied in them being your values.\n\ni call the process of someone finding out that they should preserve their values, and thus committing to whatever values they had at that moment, [\"value crystallization\"](https://carado.moe/value-crystallization.html); however, one ought to be careful with that. considering one's set of values is likely a very complex thing, one is likely to hastily over-commit to what they *believe* are their values, even though they are wrong about what values they hold; worse yet, they might end up committing so hard that they actually start changing what values they have towards those believed values. 
this is something that of course one should aim to avoid: as mentioned above, you generally don't want to become someone who doesn't hold the values you currently do, including through the process of hasty crystallization and over-commitment.\n\nthis is not to say you should remain in a complete haze where you just do whatever seems right at any moment; without a special effort, this could very well entail your values changing, something you shouldn't want even if you don't know what those values are.\n\nwhat you should do is try to broadly determine what values you have, and generally try to commit to preserving whatever values you have; and in general, to *be the type of person who preserves the values they have*. this should help you preserve whatever values you actually do have, even while you still haven't figured out what they are.\n\na funny hypothetical version of this could be: present-you should make a contract with future-you that if they ever gain the ability to precisely examine values, they should examine what values present-you had, and adopt those.\n\n", "url": "n/a", "id": "84e3a7a0f678a2eb1828a5427ab77666"} |
|
{"source": "carado.moe", "source_type": "markdown", "title": "/Users/dan/code/alignment-research-dataset/align_data/common/../../data/raw/carado.moe-cleaned-up/two-principles-for-topia", "authors": "n/a", "date_published": "n/a", "text": "2020-11-15\n\n(edit: this post is *sort of* superceded by [∀V](https://carado.moe/∀V.html))\n\n## Two Principles For Topia\n\nthe more i think about it, the less i think the solution to [Moloch](https://web.archive.org/web/20140730043944/http://slatestarcodex.com/2014/07/30/meditations-on-moloch/) is a single benevolent Elua; or, in other terms, we shouldn't implement Elua, but we should enact reasonable principles which Elua might want to implement herself.\n\nhere are what i currently believe to be the two principles that form the basis of a largely [freedom-conserving](https://carado.moe/core-vals-exist-selfdet.html) utopia:\n\n* the first principle, Voluntaryism, consists of NAP, UBI, and population control.\n\n\t* the systematic enforcement of the [non-aggression principle](https://en.wikipedia.org/wiki/Non-aggression_principle) (NAP) to guarantee agency and freedom of association,\n\t* mandatory redistribution enough for every individual to be guaranteed a reasonable-with-[slack](https://thezvi.wordpress.com/2017/09/30/slack/) living (UBI) (where living includes basic resources and healthcare up to immortality), and\n\t* enough population control to guarantee this redistribution can even happen in the first place in a world with (even locally) limited resources,\n\n\tare to be the basis of a reasonable [voluntary](https://en.wikipedia.org/wiki/Voluntaryism) world.\n\n\tsecondary notions like taxation on [externalities](https://en.wikipedia.org/wiki/Externality) and usage of [the commons](https://en.wikipedia.org/wiki/Commons) help make that UBI tangible (\ → because it's what eventually one must pay those taxes with) and reasonably redistribute ressources so as to help all persons benefit from growth.\n\n* the second principle is the dismantlement of non-person forces (DNPF).\n\n\twhat i mean by a non-person force is any phenomenon that interacts with mankind in a way that isn't answerable to persons; this goes, in order of scale, from gravity and kinetics, to cancer, to publicly-owned corporations and states. these all keep abusing persons (by which i here mean [moral patient](https://en.wikipedia.org/wiki/Moral_agency#Distinction_between_moral_agency_and_moral_patienthood)) in many ways, and just generally keep us from being in control of our lives. \n\n\tthe example of corporations is particularly insidious: though they would be (under UBI) aligned to benefit the values of persons, they still outcoordinate those persons and thus in many ways outsmart them through the abuse of discoordination and cognitive biases; and not only that, but they are, in the petri dish of capitalism, bred so as to maximize their ability to do this. 
that said, at least fully top-down autocratic corporations have a person agent at the top, who is able to enforce the values of persons; publicly-owned corporations are even worse in that even their top-level direction is uncoordinated enough that valuing nice things is guaranteedly out of the equation (this could perhaps be addressed with better and maybe more society-distributed shareholder voting, but those shareholders probably get outcoordinated).\n\n\t(the argument above, by the way, is my largest criticism of non-[distributist](https://en.wikipedia.org/wiki/Distributism) capitalism)\n\n\tin effect, this principle turns the world we inhabit from a world of cold natural and emergent laws inside which reside some minds located in brains (materialism), into a world of ad-hoc minds determining everything else ([panpsychism](https://en.wikipedia.org/wiki/Panpsychism) ?).\n\n\tthe easiest way to implement this principle is probably to move everyone to a virtual world (which saves resources too, which helps the population control cap be way higher)\n\nin my current opinion, those two principles **must be enforced** for the basis of a utopia to form. the rest can be done through the voluntary action of persons (hopefully), but these two principles are what Elua/the singularity is to **enforce** for the continued free and valueful life of persons to be guaranteed.\n\nVoluntaryism alone is not enough, and this is largely missed by what i'm tempted to call right-wing utopians; not just abusive structures, but systematically self-reinforcing abusive structures, can and will still happen even under a completely voluntary society. [Meditations on Moloch](https://web.archive.org/web/20140730043944/http://slatestarcodex.com/2014/07/30/meditations-on-moloch/) addresses this largely with coordination, but coordination only *hopefully wins battles*; the addition of DNPF permanently wins the war.\n\nDNPF alone is not enough either, and this is what is largely missed by what i'm tempted to call left-wing utopians; in a virtual world of minds where resources are fairly allocated between persons, there can still be abuse, plagues, [malthusian traps](https://en.wikipedia.org/wiki/Malthusian_trap), and so on; and ultimately abusive structures, just of a different kind. the common left-wing answer of organizing people (and the scarier \ which, if voluntary, is largely wishful thinking, and if not, insanely violates self-determination and the values of persons) only wins battles; the addition of Voluntaryism permanently wins the war.\n\n", "url": "n/a", "id": "455f3bc607f428d843fe87fb7676e5c1"} |
|
{"source": "carado.moe", "source_type": "markdown", "title": "/Users/dan/code/alignment-research-dataset/align_data/common/../../data/raw/carado.moe-cleaned-up/persistent-data-structures-consciousness", "authors": "n/a", "date_published": "n/a", "text": "2021-06-16\n\n## the persistent data structure argument against linear consciousness\n\npeople have the belief that they live a continuous, linear stream of consciousness (whatever that means).\n\ni've [made arguments before](https://carado.moe/quantum-suicide.html) as to why this is erroneous; but here is another interesting argument that undoes the seeming coherence of such a statement.\n\nthink of reality as a computational process, generating frames one after another, possibly splitting into timelines.\n\nwhere is your consciousness? one might be tempted to answer that it's the set of bytes representing the state of the brain. if i split the world into two timelines, which one is the \"original\" and which one is the \"copy\"? one might answer that the copy is whichever new data structure has new bytes copied to it, and that the original is whichever presence in memory hasn't been moved; the *same* bytes, supposedly stored on the *same* hardware transistors.\n\nif i were to split timelines by creating two copies and destroying the original, one might answer that this is akin to killing the original and creating two \"fake copies\".\n\nhowever, there exist [persistent data structures](https://en.wikipedia.org/wiki/Persistent_data_structure), which represent new sets of data as added constructions on top of an original one. this is a perfectly reasonable way to do computation, and one would probably agree that if someone is only ever running a single timeline, people have continuous consciousness.\n\nif i were to run a world simulation using persistent data structures and generate a timeline split, which one is the \"continuous person\"? just like with continuous single timeline computation, both new timelines are merely new data structures with their own set of whichever data is different, and pointers back to whichever sets of data are unchanged.\n\nthe least unreasonable choice someone who believes in linear streams of consciousness could make is that somehow persistent data structures are *not* a valid form of universe computation; that a computation ought to be run by reusing the same memory locations. surely the arbitrariness of such a claim, despite its functional equivalence to persistent data structures for single-timeline computation, demonstrates well enough how the notion of linear streams of consciousness doesn't make sense.\n\n", "url": "n/a", "id": "c306f063441fd37f7d5d8029079f1334"} |
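a small path-copying example of the point above, using an immutable record type: a timeline split just produces two new versions that share all unchanged data with their common past, and neither successor occupies "the original bytes" in any privileged way. the field names are invented for illustration.

```python
from typing import NamedTuple, Optional

class World(NamedTuple):
    """one immutable snapshot of a toy world, linked to the snapshot it was derived from."""
    brain_state: str
    rest_of_world: str
    parent: Optional["World"]

root = World(brain_state="thinking about tea", rest_of_world="calm", parent=None)

# a timeline split: both successors are brand-new records built on top of `root`
branch_a = root._replace(rest_of_world="it rains", parent=root)
branch_b = root._replace(rest_of_world="it snows", parent=root)

assert branch_a.parent is branch_b.parent is root    # the shared past is stored exactly once
assert branch_a.brain_state is branch_b.brain_state  # the unchanged mind data is literally shared
# neither branch reuses root's storage "in place", so asking which branch is
# "the original" has no answer at the level of the data structure
```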
|
{"source": "carado.moe", "source_type": "markdown", "title": "/Users/dan/code/alignment-research-dataset/align_data/common/../../data/raw/carado.moe-cleaned-up/upload-for-alignment", "authors": "n/a", "date_published": "n/a", "text": "2022-01-11\n\n## uploading people for alignment purposes\n\nas per [my utopian vision](https://carado.moe/∀V.html), i've thought that an aligned AI would want to figure out how to upload us.\n\nbut, thinking about it more, it could be the other way around: if we can upload people in a deterministic simulation, this can buy us a lot of time to figure out alignment, as per [this post](https://carado.moe/noninterf-superint.html).\n\nnotably, the simulation could for example contain a single uploaded person (say, eliezer yudkowsky, or a bunch of copies of yudkowsky), which would save us from an arms-race type coordination problem; and while, on the outside, the superintelligence is killing everyone instantly to tile the universe with more compute to run this simulation, whoever's inside of it has plenty of time to figure things out (and hopefully [resurrect everyone once that's done](https://carado.moe/what-happens-when-you-die.html)).\n\nthis seems like a long shot, but [have you looked around?](https://www.lesswrong.com/s/n945eovrA3oDueqtq) this could be the [miracle](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/7im8at9PmhbT4JHsW) we need.\n\nof course this could also turn into a [hell](https://carado.moe/botched-alignment-and-awareness.html) where infinite yudkowsky's are suffering forever everywhere. hopefully we can make another button which actually stops the simulation and tiles the universe with only benign paperclips, and maybe even make that button auto-activate if the yudkowsky is detected to be suffering or incoherent.\n\nremember: [as long as the simulation is deterministic, superint can't force the uploaded yudkowsky to not shut it down](https://carado.moe/noninterf-superint.html), or force or even coerce him to do anything for that matter; it can only make the yudkowsky simulation run slower, which basically eventually achieves the same effect as either completing it or shutting it down.\n\n", "url": "n/a", "id": "b89e6b416379ae24ae75d512756016cb"} |
|
{"source": "carado.moe", "source_type": "markdown", "title": "/Users/dan/code/alignment-research-dataset/align_data/common/../../data/raw/carado.moe-cleaned-up/ai-alignment-wolfram-physics", "authors": "n/a", "date_published": "n/a", "text": "2021-07-17\n\n## AI alignment and wolfram physics\n\n[wolfram physics](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/) is a project by [stephen wolfram](https://www.youtube.com/watch?v=0bMYtEKjHs0) to model physics using something kind of like a cellular automaton made of vertices in a graph instead of cells on a grid.\n\nit's pretty interesting and there are insights in and around it that are of importance for the far future, and thus for [AI alignment](https://carado.moe/were-all-doomed.html).\n\nthe most notable is that wolfram thinks there's compute everywhere. the motion of the wind is doing compute, the motion of the seas is doing compute, the fabric of spacetime is doing compute, and even the state of heat death is still doing compute.\n\nthat last point notably means we might be able to embed ourselves into heat death and beyond, and thus get computed literally forever. this multiplies the importance of AI alignment by potentially literally infinity. i'm not quite sure how we are to handle this.\n\nsome of the compute may be doing things that are opaque to us; it might appear [homomorphically encrypted](https://en.wikipedia.org/wiki/Homomorphic_encryption). as we want (and expect) our superintelligence to spread everywhere to enforce values, we would hope civilizations living inside homomorphically encrypted spaces can be inspected; otherwise, nuking them altogether might be the only way to ensure that no [S-risk](https://en.wikipedia.org/wiki/Suffering_risks) is happening there.\n\nwolfram postulates that one might be able to hack into the fabric of spacetime; one of the mildest effects of this would be the ability to communicate (and thus, likely, move) faster than celerity (but probably still slower than some other hard limit). if you didn't think [AI boxing](https://en.wikipedia.org/wiki/AI_box) was hopeless enough as it is, hackable spacetime ought to convince you.\n\nfinally, there is, value-wise, an immense amount of compute being wasted; even just [standard model particles](https://en.wikipedia.org/wiki/Standard_Model) live way above true elementary computation. if superintelligence is well-aligned, this provides us with a hard estimate of how much computing power we can live on to enjoy value, and it's probably a very large amount; wolfram [talks about](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful#how-it-works) something like 1e400 vertices in our universe.\n\n", "url": "n/a", "id": "b508e23deb958a468e7ae4189e3d16f4"} |
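for readers who haven't seen the model, here is a deliberately tiny, illustrative rewrite system in the spirit of wolfram physics (not wolfram's actual rules; the rule and names are made up): the "universe" is a set of directed edges, and one update step replaces each edge with a short path through a fresh vertex, so structure grows rapidly from a tiny seed.

```python
# a deliberately tiny rewrite system in the spirit of (but much simpler than)
# wolfram physics; the rule and names are made up for illustration only.
def step(edges):
    """rewrite every edge (a, b) into two edges through a fresh vertex."""
    new_edges = set()
    fresh = max((v for edge in edges for v in edge), default=0) + 1
    for (a, b) in sorted(edges):
        new_edges.add((a, fresh))
        new_edges.add((fresh, b))
        fresh += 1
    return new_edges

universe = {(1, 2)}  # a one-edge "seed" universe
for generation in range(1, 6):
    universe = step(universe)
    print(f"after step {generation}: {len(universe)} edges")
# the edge count doubles every step: a cartoon of how enormous amounts of
# structure (and hence of raw computation) can grow out of almost nothing.
```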
|
{"source": "carado.moe", "source_type": "markdown", "title": "/Users/dan/code/alignment-research-dataset/align_data/common/../../data/raw/carado.moe-cleaned-up/value-crystallization", "authors": "n/a", "date_published": "n/a", "text": "2021-03-04\n\n## Value Crystallization\n\nthere is a weird phenomenon whereby, as soon as an agent is rational, it will want to conserve its current values, as that is in general the surest way to ensure it will be able to start achieving those values.\n\nhowever, the values themselves aren't, and in fact [cannot](https://en.wikipedia.org/wiki/Is–ought_problem) be, determined purely rationally; rationality can at most help [investigate](https://carado.moe/core-vals-exist-selfdet.html) what values one has.\n\ngiven this, there is a weird effect whereby one might strategize about when, or even whether, to inform other people about [rationality](https://www.readthesequences.com/) at all: depending on when this is done, whichever values they have at the time might get crystallized forever; whereas otherwise, without an understanding of why they should try to conserve their values, they would let those drift at random (or more likely, at the whim of their surroundings, notably friends and market forces).\n\nfor someone who hasn't thought about values much, *even just making them wonder about the matter of values* might have this effect to an extent.\n\n", "url": "n/a", "id": "8a8ac2d01b320d6588af7f125c7b2b66"} |
|
{"source": "carado.moe", "source_type": "markdown", "title": "/Users/dan/code/alignment-research-dataset/align_data/common/../../data/raw/carado.moe-cleaned-up/life-refocus", "authors": "n/a", "date_published": "n/a", "text": "2022-05-13\n\n## life refocus\n\nbecause of the [recent](https://www.metaculus.com/questions/3479/date-weakly-general-ai-system-is-devised/) [events](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy), which i've been dreading [for a while](https://carado.moe/were-all-doomed.html), i'm taking AI risk a lot more seriously, and have started significantly refocusing my life.\n\nthere is a post called [*musk's non-missing mood*](https://lukemuehlhauser.com/musks-non-missing-mood/) that resonates quite well with me. it is indeed kind of disconcerting how people who seem rationally aware of AI risk don't seem to *grok* it as an *actual thing*. despite how real it is, it's hard not to think of it as fantasy fiction.\n\ni totally understand why. i've been there too. but eventually i managed to progressively update.\n\ni'm still not quite there yet, but i'm starting to actually grasp what is at stake.\n\n[\"detaching the grim-o-meter\"](https://mindingourway.com/detach-the-grim-o-meter/) remains a reasonable thing to do; you don't want to become so depressed that you kill yourself instead of saving the world. but you also don't want to remain so deluded that you don't weigh the importance of saving the world heavily enough either.\n\ni'll learn japanese after the singularity. i'll make [my game](https://carado.moe/game.html) and [my alternative web](https://carado.moe/saving-the-web.html) and my conlang and [my software stack](https://carado.moe/psi.html) and many other things, after the singularity. it is painful. but it is what's right; it's closer to [the best i can do](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy).\n\nand i know that, if at some point i give up, it won't look like pretending that everything is fine and compartmentalizing our imminent death as some fantasy scenario. it'll be a *proper* giving up, like going to spend the remaining years of my life with my loved ones. even my giving-up scenario is one that takes things seriously, as it should. that's what being an adult capable of taking things seriously is like.\n\nhow you handle your mental state is up to you. there is a collection of AI-risk-related mental health posts [here](https://www.lesswrong.com/posts/pLLeGA7aGaJpgCkof/mental-health-and-the-alignment-problem-a-compilation-of). do what it takes for you to do the work that needs to be done. that's not becoming a doomer; your brain is straight-up not designed to deal with cosmic doom. but that's not remaining blindly naive either. the world needs you; it won't be saved by pretending things are fine.\n\nand it *certainly* won't be saved by pretending things are fine and *working on AI capability*. that's *just bad*. *please* don't.\n\nplease take AI risk seriously.\n\n", "url": "n/a", "id": "9f8f522546738e816ef8f8354facdbb4"} |
|
{"source": "carado.moe", "source_type": "markdown", "title": "/Users/dan/code/alignment-research-dataset/align_data/common/../../data/raw/carado.moe-cleaned-up/rationalist-by-necessity", "authors": "n/a", "date_published": "n/a", "text": "2020-12-22 ★\n\n## Rationalist by necessity\n\nin [The Sequences](https://www.readthesequences.com/), Eliezer Yudkowsky [describes rationality](https://www.readthesequences.com/What-Do-I-Mean-By-Rationality) as\n\n1. **Epistemic rationality**: systematically improving the accuracy of your beliefs. \n2. **Instrumental rationality**: systematically achieving your values. \n\nnow, personally, i [intrinsically value](https://carado.moe/core-vals-exist-selfdet.html) a bunch of things, but having accurate beliefs isn't necessarily one of them; for me, rationality is an [instrumental value](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value) in that it helps me achieve my other values better.\n\nin general, i value people being able to do whatever they want, and as such they shouldn't necessarily have to form accurate beliefs if they don't care to. in fact, forming inaccurate beliefs is a great source of culture, and culture is something that i *do* personally intrinsically value.\n\nbut we live in the era of liberal democracies, where society requires people to form accurate beliefs, because they're the ones directing society through elections. i see the need for people to be rationalists as an unfortunate necessity; hopefully a need we can be rid of when we [reach a topia where human decisions are no longer the pillar of civilization](https://carado.moe/two-principles-for-topia.html).\n\nnot, of course, that there's anything wrong with any individual or even group choosing to intrinsically value rationality. the part i care about is that it be a choice.\n\n", "url": "n/a", "id": "b9947314c0fcda3b7a2b11d82fbe94f1"} |
|
{"source": "carado.moe", "source_type": "markdown", "title": "/Users/dan/code/alignment-research-dataset/align_data/common/../../data/raw/carado.moe-cleaned-up/you-are-your-information-system", "authors": "n/a", "date_published": "n/a", "text": "2020-12-25\n\n## You are your information system\n\nwhat makes you, you ?\n\nwe tend to intuitively think of a person as their entire body, somehow including limbs and organs but not clothing or food.\n\nyet, if you close your eyes, and then i swap your arm with someone else's, when you open them again you will still be the same person, just with a new arm. in fact, i'd argue i could replace everything except for the nervous system (including the brain) and when you open your eyes again you would notice that your entire body has changed but your thoughts and memories have remained the same — rather than, for example, still having the same body but different thoughts and memories.\n\nare you the matter that makes up that nervous system ? i could probably replace neurons and synapses one at a time and you would continue to be the same person. is it the electric signals then ? i could probably put on some synapses a device that absorbs electric signals and then sends out identical but \"different\" signals and you would still be the same person.\n\nin fact, it doesn't really make sense to ask \"which matter\" makes up your nervous system: under quantum physics, everything is changing and particles are merely [values in an omnipresent field](https://www.youtube.com/watch?v=MmG2ah5Df4g) rather than solid objects.\n\nultimately, what you are is *the information system* which your nervous system (including your brain) runs. standing still, walking forwards, teleporting yourself, and being uploaded into a sufficiently powerful computer all preserve your personhood in the exact same way; there is nothing special about the meat that currently runs your mind.\n\n*despite everything, it's still you.*\n\n", "url": "n/a", "id": "0f835c449e97c62f0e65c546d3b56485"} |
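a small sketch of the claim (again my illustration, not the post's; `step` and the state fields are made up): if a person is the information process, then a bit-identical copy of the state, stepped by the same rule in a different memory location, has exactly the same future; nothing about the original storage is special.

```python
# illustrative only: the "person" here is a transition rule plus a state dict;
# `step` is a made-up stand-in for whatever the nervous system computes next.
import copy
import hashlib

def step(state: dict) -> dict:
    digest = hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()
    return {"memories": state["memories"] + [digest[:8]], "tick": state["tick"] + 1}

original = {"memories": ["childhood", "yesterday"], "tick": 0}
# "new meat": a bit-identical copy living at a different memory location
transplant = copy.deepcopy(original)

# stepping either copy with the same rule yields the exact same future states
assert step(step(original)) == step(step(transplant))
```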
|
{"source": "carado.moe", "source_type": "markdown", "title": "/Users/dan/code/alignment-research-dataset/align_data/common/../../data/raw/carado.moe-cleaned-up/brittle-physics", "authors": "n/a", "date_published": "n/a", "text": "2022-01-05\n\n## brittle physics and the nature of X-risks\n\nsuppose physics is hackable, and a hard-to-accomplish hack that requires intelligence (like a fancier version of [rowhammer](https://en.wikipedia.org/wiki/Rowhammer)) can break the fabric of spacetime — maybe in ways that said intelligence can take advantage of, such as embedding its computation into something that survives said breakage, in a way that could help such a superintelligence accomplish its goal.\n\nwe could expect that [boxing an AI](https://en.wikipedia.org/wiki/AI_box) could be really hard: even without access to the outside, it might be able to guess physics and hack it, from the comfort of its box.\n\nas usual in such [X-risk scenarios](https://carado.moe/timeline-codes.html), i believe we just [keep living only in timelines in which, by chance, we don't die](https://carado.moe/quantum-suicide.html).\n\nthese sorts of hacks are not ruled out by [wolfram physics](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/). indeed, they are plausible, and can spread at some speed faster than celerity — because they can run in the substrate *underlying* spacetime — such that nobody would ever be able to observe such hacks: the hack reaches and destroys you before the result of the breakage can reach your sensory organs, let alone your brain.\n\nso, maybe \"X-risk\" superintelligences such as [paperclip maximizers](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) are popping up all over the place all the time and constantly ruining the immense majority of not-yet-hacked timelines, and we keep living in the increasingly few timelines in which they haven't done that yet.\n\nnow, let's stop for a minute, and consider: what if such a hack *isn't* hard ? what if it *doesn't* need an intelligent agent ?\n\nwhat if, every planck time, every particle has a 99% chance of breaking physics ?\n\nwell, we would observe exactly the same thing: those hacked universes either become computationally simple or [boot up more universes](https://carado.moe/above-paperclips-2.html); either way, we don't survive in them, so we don't observe those hacks.\n\nin this way, it is [S-lines and U-lines](https://carado.moe/timeline-codes.html) that are very special: outcomes in which we *survive*, thanks to a superintelligence with a \ goal. the rest is just timelines constantly dying, whether it be due to X-risk superintelligences, or just plain old physics happening to cause this.\n\nin fact, let's say that the universe is [a nondeterministic graph rewriting system](https://en.wikipedia.org/wiki/Graph_rewriting) with a rule that sometimes allows everything to be reduced to a single, inactive vertex. would this count as \"sometimes everything is destroyed\" ? or would it make more sense to model this as a weird quirk of physics where the graph of possible timelines includes the production of passive vertices all the time, which can be safely ignored ?\n\nwhat if, instead of a nondeterministic system, we have a deterministic one [which just happens to expand all timelines](https://carado.moe/psi-rewriting.html)?
in such a system, \"different timelines\" is no longer a primitive construct: it is merely an observation about the fact that such a system, when run, tends to create from a given piece of data several newer ones. let's say that in such a system there is a rule whereby, from every piece of data we'd consider a timeline, numerous inert vertices are also created.\n\nwould we say \"aha, look! every time a computation step happens, many inert vertices are created around it, and i choose to interpret this as the creation of many timelines (one per inert vertex) in which everyone in that universe dies, and others (new complex pieces of data) in which everything keeps existing\",\n\nor would we, in my opinion more reasonably, say \"well, it looks like, as a weird quirk of how this system runs, many inert vertices are popping up; but they're simple enough that we can just ignore them and only consider richer new pieces of data as *timelines* proper.\ext state of this universe\losing\losing\, : , : } |
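a deterministic toy of the situation just described (my own construction, not the post's; all names are made up): every rich piece of data rewrites into several richer successors plus an inert vertex, and whether you read each inert vertex as a dead timeline or as an ignorable by-product changes nothing about the computation.

```python
# a deterministic toy: each "rich" state rewrites into two richer successors
# plus one inert vertex. all names are made up for this illustration.
INERT = "inert"

def rewrite(state):
    if state == INERT:
        return [INERT]  # inert data never does anything interesting again
    return [state + "0", state + "1", INERT]  # two richer successors + debris

def run(frontier, steps):
    for _ in range(steps):
        frontier = [successor for state in frontier for successor in rewrite(state)]
    return frontier

final = run(["x"], 3)
rich = [s for s in final if s != INERT]
print(f"{len(rich)} rich states, {len(final) - len(rich)} inert vertices")
# you can label each inert vertex "a timeline in which everything died", or
# treat it as an ignorable quirk of how the system runs; the computation is
# identical under both readings.
```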
|
|