{"source": "carado.moe", "source_type": "markdown", "title": "finding-earth-ud", "authors": "n/a", "date_published": "n/a", "text": "2022-04-13\n\n## finding earth in the universal program\n\nthis post expands on step one of [*the Peerless*](https://carado.moe/the-peerless.html): creating virtual people. brain scans-and-simulations are apparently still quite far off, so i'll be focusing on the second approach: resimulating earth and plucking out persons.\n\n(one great side-advantage of this method is that, if we can relocate earth to pluck out persons for the simulation of alignment researchers, then we can later also relocate earth in order to restore it once we've solved alignment. so resimulating and locating earth, regardless of having early enough mind-plucking-out tech, is something we might need to do anyways.)\n\nif compute is [infinite](https://carado.moe/ai-alignment-wolfram-physics.html) and [we don't mind being inefficient](https://carado.moe/udassa-time-steps.html), then we can use exponential or even infinite compute to locate earth. one approach is the following: create a big informational beacon — perhaps a copy of a huge portion of the internet, along with MRI scans of as many people as we can afford. then, we use some type of (non-intelligent) deterministic error-bound statistical location procedure to locate patterns that look like that beacon inside the [universal program](https://carado.moe/universal-complete.html). we can afford the statistical detection to be imperfect — if it misses on one encoding of earth, there will be different ones in the universal program.\n\nbecause of the time penalty of the universal program, however, we may find just compressed copies of the beacon (instead of a full simulation of earth leading to the time at which we build that beacon), and because of the deterministic bound, we want need to stop on the first match; if this first match is *just* the beacon, without earth, then we fail; perhaps superintelligence can notice that it's not finding any nearby minds to pluck out, or perhaps it plucks out garbage. so we can start the universal program with not one step per program, but rather a very large number of steps — i hear stephen wolfram has estimates on the number of computation steps it takes to get to the current state of the universe. this will favor programs that takes every long to lead to the beacon, but are themselves shorter program.\n\n(what if the first program to contain earth is itself a universal program *without* that huge constant, such that *it* finds the beacon before it finds earth? i am not sure how to address this. perhaps we can explore programs in an order that favors worlds that look like our physics instead of looking like discrete iterations of all computations?)\n\nthere's also the concern that the universal program, just like the [universal distribution](https://handsandcities.com/2021/10/29/on-the-universal-distribution/), [is malign](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign). i'd think plain top-level earth, maybe especially as detectable by a simple enough beacon locator, would tend to occur before malign aliens emitting our beacon to trick us; but that's a risk to keep in mind.\n\nif we *do* care about computational efficiency, then there are two main factors we need to account for:\n\n* can our universe can be ran in polynomial time on whatever computers the superintelligence can build? 
for example, can it be run in polynomial time on quantum computers, and can quantum computers be built? note that if this is the case we might need to step through *quantum steps* of *quantum programs* to run the search in the expected time. this doesn't mean we need to build quantum computers ourselves, mind you — superintelligence can just notice that a quantum computer would run the computations we describe efficiently, and build and use those.\n* is the \"seed\" program for the universe small? intuitively i believe it is, and i find wolfram's efforts to reproduce the behavior of particles from the standard model using simple graph rewriting, to be evidence in that direction. that said, if it is large, then finding that program is an exponential search again — and so, again, we might need to build a search that \"favors\" our physics to save on exponential search time.\n\nfinally, we might want to put a hard bound on the number of tries the superintelligence will run to locate earth. the reason for that is that, if for some reason we messed up something in the beacon locator and it *never, ever* finds earth, then it will instantiate all computations, which appears to me to be a potential [S-risk](https://carado.moe/timeline-codes.html). in fact, even if we do find earth, it may not be worth it if we have to simulate exponentially much potential suffering before running our utopia — what if, after solving alignment, we have a great time, but then decide to eventually fade away after only polynomial time? then we might have created exponentially much suffering in total.\n\n### intermediary simulation\n\nin case isolating minds from this simulation is hard, we could build an intermediary step between the location of earth in simulation-space, and booting the peerless simulation proper — superintelligence could, once it has located our beacon, get in touch with our organization *inside the simulation of earth*, and give it extraordinary computational (and maybe physical?) ability within the simulation to either take over everything, or figure out brain plucking-out and then let us press a big \"ok, start now\" button.\n\nnote, however, that we might not want to remain in this intermediary simulation for too long — it is still vulnerable to inner unaligned superintelligences, just like our top level reality is. we want to get to a safe, sandboxed, computationally weak environment as early as possible.\n\nthis is also a great argument for readying ourselves to build the beacon and utilize this contact-from-superintelligence as early as we can; indeed, to make that the first step of implementing the peerless plan. 
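(as an aside, here is a minimal sketch of the search loop described above, in Python and entirely my own, not something from this post: `programs` stands for some enumeration of candidate programs, `run_steps` for a step-bounded interpreter, and `looks_like_beacon` for the imperfect, non-intelligent beacon detector, all of which are assumptions. the loop dovetails over programs, grants each one a large step head start, and stops after a hard bound of matches:

```python
def find_beacon_matches(programs, run_steps, looks_like_beacon,
                        head_start=10**6, hard_bound=100, max_rounds=10**4):
    """dovetail over programs with a large step head start; stop after
    hard_bound beacon-like matches (the cap meant to limit S-risk)."""
    matches = []
    matched = set()
    for round_ in range(1, max_rounds + 1):
        # in round r, only the first r programs are live (dovetailing),
        # and program i has accumulated (r - i) steps past the head start
        for i, program in enumerate(programs[:round_]):
            if i in matched:
                continue
            steps = head_start + (round_ - i)
            state = run_steps(program, steps)
            if looks_like_beacon(state):    # imperfect detection is fine:
                matched.add(i)              # other encodings of earth exist
                matches.append((i, steps))
                if len(matches) >= hard_bound:
                    return matches
    return matches
```

a real version would enumerate programs lazily by length and keep partial states around instead of re-running them from scratch, but the loop structure is the point.)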
the reason for that is that the earlier we are able to take advantage of it, the earlier the time step of the simulation superintelligence can help us start bootstrapping towards the proper simulation of the peerless, and the less likely we are to be doomed by other superintelligences, if we need some intermediary \"pre-peerless\" simulation time.\n\n", "url": "n/a", "filename": "finding-earth-ud.md", "id": "2c9e29e3a72b024adf39bf8a7766baaf"} {"source": "carado.moe", "source_type": "markdown", "title": "utils-unit", "authors": "n/a", "date_published": "n/a", "text": "2022-04-30\n\n## a unit for utils\n\nas utilitarians, it would be convenient for us to have an actual unit to measure utility, a number to be computed and compared.\n\nthe usual pick is money, but some people could have different judgments of the world that lead them to have different instrumental valuings of money even when they have the same intrinsic values; and also some people could intrinsically value money.\n\nthe unit i propose, to measure how much an agent cares about a thing, is a ratio of that person's \"total caring pie\". for example, you could intrinsically value 70% something and 30% something else; and then i'm sure we can figure out some math that makes sense (probly inspired from probability theory) to derive our valuings of instrumental values from that.\n\nthis seems like the least biased way to measure utils. the only criticism i can think of is that it breaks if two agents have different amounts of total valuing: perhaps one person *just has more total caring* than another.\n\nhowever, is this testable in any way? is there any situation where one agent would act differently than another if they have the same intrinsic valuing proportions but one of them has a million times more total caring? i don't think so: the idea that inaction counts, seems to me to track either willpower or just different valuings of not doing effort.\n\n", "url": "n/a", "filename": "utils-unit.md", "id": "9e260d6f21c6294324141b06aaad5ac1"} {"source": "carado.moe", "source_type": "markdown", "title": "rationalist-by-necessity", "authors": "n/a", "date_published": "n/a", "text": "2020-12-22 ★\n\n## Rationalist by necessity\n\nin [The Sequences](https://www.readthesequences.com/), Eliezer Yudkowsky [describes rationality](https://www.readthesequences.com/What-Do-I-Mean-By-Rationality) as\n\n1. **Epistemic rationality**: systematically improving the accuracy of your beliefs. \n2. **Instrumental rationality**: systematically achieving your values. \n\nnow, personally, i [intrinsically value](https://carado.moe/core-vals-exist-selfdet.html) a bunch of things, but having accurate beliefs isn't necessarily one of them; for me, rationality is an [instrumental value](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value) in that it helps me achieve my other values better.\n\nin general, i value people being able to do whatever they want, and as such they shouldn't necessarily have to form accurate beliefs if they don't care to. in fact, forming inaccurate beliefs is a great source of culture, and culture is something that i *do* personally intrinsically value.\n\nbut we live in the era of liberal democracies, where society requires people to form accurate beliefs, because they're the ones directing society through elections. 
i see the need for people to be rationalist as an unfortunate necessity; hopefully a need we can be rid of when we [reach a topia where human decisions are no longer the pillar of civilization](https://carado.moe/two-principles-for-topia.html).\n\nnot, of course, that there's anything wrong with any individual or even group choosing to intrinsically value rationality. the part i care about is that it be a choice.\n\n", "url": "n/a", "filename": "rationalist-by-necessity.md", "id": "6d633c90566240f290d16fb3b3a0bad8"} {"source": "carado.moe", "source_type": "markdown", "title": "rough-ai-risk-estimates", "authors": "n/a", "date_published": "n/a", "text": "2022-05-18\n\n## my rough AI risk estimates\n\nthe numbers are spoilered so you can make your own guess without getting anchored. click on the text to reveal the number.\n\nthese are my pretty rough estimates for what happens to our current civilization:\n\n* we robustly [get past](https://en.wikipedia.org/wiki/AI_alignment) AI risk: 0.01%\n* our civilization dies in a non-AI-existential-risk way, leaving room for future civilization: 0.1%\n* we [die](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence) of AI: 99.89%\n\nif we die of AI, when does it happen?\n\n* 2020s: 60%\n* 2030s: 30%\n* 2040s: 5%\n* the remaining % get spilled over the remaining future\n\nremember that if we get past those decades without dying, that's not evidence that i got those wrong; that's evidence that either i got those wrong, or our civilization benefits from some amount of quantum immortality.\n\npart of my reasons for believing in those odds is private information.\n\n", "url": "n/a", "filename": "rough-ai-risk-estimates.md", "id": "fad429289c7fdb0a3cdbf792a497e571"} {"source": "carado.moe", "source_type": "markdown", "title": "prototype-realities", "authors": "n/a", "date_published": "n/a", "text": "2020-11-18\n\n*(this post may contain some very vague spoileryness about the video game Outer Wilds)*\n\n## A Prototypeness Hierarchy of Realities\n\none property of many video games that i felt the most when playing the excellent [Outer Wilds](https://store.steampowered.com/app/753640/Outer_Wilds/) was *prototypeyness*.\n\nmany games, and especially that one, feel like they are prototypes for reality to some extent; they try to extract some essence of what is interesting about this world, without having the ability to implement all of it in a fully dynamic way, and thus hardcoding the rest.\n\nnow, this aspect of prototypeyness is sufficiently present in Outer Wilds that i ended up asking myself the question: what would real life (this universe where earth is) be a prototype for ? and i think the answer is:\n\nreal life is a prototype for living in virtual realities/cyberspace.\n\nonce we upload ourselves to computers (a good thing!) 
we will be able to make the entirety of the substrate that individuals interact with way more flexible; inhabit spaces of any number of dimensions or maybe not even spaces at all and just graphs (as is the shape of the web), modify our minds in ways meat brains wouldn't support, basically utilize any type of computational constructs we want with no regard for most limitations, depending on reality only as a substrate to run the computronium for it all.\n\nlike the step between prototypey video games and reality, it is one of a nearly definitional boundary in scale of computing power, and one whose non-prototype side i'm very interested in.\n\n", "url": "n/a", "filename": "prototype-realities.md", "id": "d1f9590e5e64f47efc8021c7ddc134e3"} {"source": "carado.moe", "source_type": "markdown", "title": "cosmic-missing-outs", "authors": "n/a", "date_published": "n/a", "text": "2021-10-14\n\n## cosmic missing outs\n\nthis might be a complete waste of brainflops, but sometimes i wonder about \"cosmic missing outs\".\n\nmy typical example for those is the culture of modern japan.\n\nimagine timelines where japan never became the country it did, and we never got its culture. that'd be a huge thing to miss out on, right? the second best thing might be korean culture or something like that.\n\nbut, now that you've imagined this timeline that is missing out on modern japan culture, imagine the opposite: there are timelines out there that have those great cultures of countries that we're missing out on, that us missing out on is kind of on the same scale as those other timelines missing out on japan's culture.\n\ni'm talking about this because i just thought of some other things kind of of this type: \n\nwhat are some unknown things that we are missing out on, that us missing out on is kind of like if other timelines were missing out on music?\n\nwhat are some unknown things that we are missing out on, that us missing out on is kind of like if other timelines were missing out on philosophy, science, or math?\n\nthese speculations are the closest i can get to putting human minds into perspective and considering the existence of things entirely outside of human conception, the way many things are entirely outside of a mouse or ant's ability to conceive.\n\nto be clear: i still can't have that consideration, this is only the closest i get, but it's not quite there.\n\n", "url": "n/a", "filename": "cosmic-missing-outs.md", "id": "30f391c268ed6b1e1aa3a3e3d463943f"} {"source": "carado.moe", "source_type": "markdown", "title": "deduplication-ethics", "authors": "n/a", "date_published": "n/a", "text": "2022-03-06\n\n## experience/moral patient deduplication and ethics\n\nsuppose you can spend a certain amount of money (or effort, resources, etc) to prevent the spawning of a million rooms (in, let's say, simulations), with an exact copy of one random person in each. they will wake up in the rooms, spend a week not able to get out (basic necessities covered), then get tortured for a week, and then the simulations are shut down.\n\ni want to split this hypothetical into four cases:\n\n* the **identical** case (`I`): the million rooms and persons are exactly identical simulations.\n* the **mildly different** case (`M`): the million rooms are the exact same, except that each room has, somewhere on one wall, a microscopically different patch of paint. 
the persons likely won't be able to directly observe the difference, but it *will* probably eventually cause the million brains to diverge from each other.\n* the **quite different** case (`Q`): the million rooms will have different (random) pieces of music playing, as well as random collections of paintings on the walls, random collections of books, movies, video games, etc. to pass the time.\n* the **very different** case (`V`): same as the **quite different** case, but on top of that the rooms actually contain a random person picked from random places all over the world instead of copies of the same person.\n\nthe point is that you should want to reduce suffering by preventing the scenario, but how much you care should be a function of whether/how much you count the million different persons' suffering as *multiple* experiences.\n\nit seems clear to me that one's caring for each case should increase in the order in which the cases are listed (that is, **identical** being the least cared about, and **very different** being the most cared about); the question is more about the *difference* between consecutive cases. let's call those:\n\n* `IM` = difference in caring between the **identical** case and the **mildly different** case\n* `MQ` = difference in caring between the **mildly different** case and the **quite different** case\n* `QV` = difference in caring between the **quite different** case and the **very different** case\n\ncurrently, my theory of ethics deduplicates identical copies of moral patients (for reasons such as [not caring about implementation details](https://carado.moe/persistent-data-structures-consciousness.html)), meaning that i see the **mildly different** case as fundamentally different from the **identical** case. `IM > MQ ≈ QV`, and even `IM > (MQ + QV)`.\n\n![](deduplication-ethics-1.png)\n\nhowever, this strikes me as particularly unintuitive; i *feel* like the **mildly different** case should get an amount of caring much closer to the **identical** case than the **quite different** case; i *feel* like i want to get `QV > MQ > IM`, or at least `QV > IM < MQ`; either way, definitely `IM < (MQ + QV)`.\n\n![](deduplication-ethics-2.png)\n\nhere are the ways i can see out of this:\n\n1. bite the bullet. commit to the idea that the slightest divergence between moral patients is enough to make them distinct persons worth caring about as different, much more than further differences. from a strict computational perspective such as [wolfram physics](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/), it might be what makes the most sense, but it seems quite unintuitive. this sort of caring about integer numbers of persons (rather than continuous quantities) maybe also feels mildly akin to [SIA's counting of world populations](https://handsandcities.com/2021/09/30/sia-ssa-part-1-learning-from-the-fact-that-you-exist/), in a way, maybe.\n2. interpolate difference: two moral patients count *more* if they are *more* different (rather than a strict criterion of perfect equality). this seems like the straightforward solution to this example, though if the curve is smooth enough then it runs into weird cases like caring more about the outcome of one population of 1000 people than another population of 1001 people, if the former is sufficiently more heterogeneous than the latter. 
it kinda feels like i'm *rewarding* moral patients with extra importance for being diverse; but i'm unsure whether to treat the fact that i also happen to value diversity as coincidence or as evidence that this option is coherent with my values.\n3. fully abandon deduplication: count the million moral patients as counting separately in the first case. this is the least appealing to me because from a functional, computational perspective it doesn't make sense to me, and [i can make up \"implementation details\" for the universe under which it breaks down](https://carado.moe/persistent-data-structures-consciousness.html). but, even though it feels as intangible as positing some magical observer-soul, maybe implementation details *do* matter?\n4. de-monolithize moral patients; consider individual pieces of suffering instead of whole moral patients, in the hope that in the **mildly different** case i can extract a sufficiently similar suffering \"sub-patient\" and then deduplicate that sub-patient.\n\ni think i'll tentatively stick to 1 because 2 feels *weird*, but i'll consider it more; as well as making room for the possibility that 3 might be right. finally, i'm not sure how to go about investigating 4; but compared to the other three it is at least materially investigatable — surely, either such a sub-patient can be isolated, or it can't.\n\n", "url": "n/a", "filename": "deduplication-ethics.md", "id": "5de062089c2e9df34a503994079431c9"} {"source": "carado.moe", "source_type": "markdown", "title": "questions-cosmos-computations", "authors": "n/a", "date_published": "n/a", "text": "2022-01-07 ★\n\n## questions about the cosmos and rich computations\n\n**computation**: a running state of any [model of computation](https://en.wikipedia.org/wiki/Model_of_computation); for example, a specific [SKI calculus expression](https://en.wikipedia.org/wiki/SKI_combinator_calculus), or a specific turing machine with its rules, current state, and current tape values. given that any model of computation can run the computations of any other model, it does not really matter which one we choose, and i will be juggling between different models throughout this post.\n\n### 1: is any computation rich ?\n\n**rich**: a computation is rich if it is generally [computationally irreductible](https://en.wikipedia.org/wiki/Computational_irreducibility). as a tentative formal definition for richness, i'm tempted to say that a computation is rich if there is no function able to generally predict any of its future states in a time [less than linear](https://en.wikipedia.org/wiki/Computational_complexity_theory) in the number of steps it would take to arrive at that state normally. for example, [rule 30](https://en.wikipedia.org/wiki/Rule_30) *looks* rich: it looks like to calculate the value of cell at index `i` at time step `j`, it generally takes about `O(abs(i) × j)` steps of computation. 
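as a quick illustration of the naive cost the sentence above has in mind, here is a small sketch of my own (in Python, not from the post): rule 30's update is left XOR (center OR right), and to learn one cell's value at step `j` you just run the automaton forward from the single-cell start, doing work that grows with the whole dependency region rather than anything like logarithmically in `j`:

```python
def rule30_cell(i, j):
    """value of the cell at index i after j steps of rule 30,
    starting from a single 1 at index 0 on an otherwise blank tape."""
    cells = {0: 1}                                   # step 0: one black cell
    for _ in range(j):
        lo, hi = min(cells) - 1, max(cells) + 1      # the active region grows by 1 per side
        cells = {k: cells.get(k - 1, 0) ^ (cells.get(k, 0) | cells.get(k + 1, 0))
                 for k in range(lo, hi + 1)}
    return cells.get(i, 0)
```

the question of richness is whether some cleverer function could beat this kind of brute forward evaluation in general; for rule 30, no such shortcut seems to be known.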
on the other hand, it looks like [rule 54 and rule 60](https://mathworld.wolfram.com/ElementaryCellularAutomaton.html) can generally have their cells predicted in time logarithmic to the number of computational steps it would naively take to arrive at them.\n\nnote that richness is not the same as halting: while a halting computation is necessarily not rich, a non-halting computation can either be non-rich (like rule 54), or rich (possibly like rule 30).\n\nit seems clear to me that rich computations exist: for example, it is known that sorting a list of `n` elements by comparisons takes `O(n × log(n))` steps, and thus a computation running a sorting algorithm of that complexity cannot have its result predicted in a smaller time complexity than it took to calculate naively. the ease with which i can demonstrate that, however, makes me doubt my tentative formal definition; maybe something more akin to [polynomial time complexity](https://arxiv.org/abs/1108.1791) would better capture the essence of computational irreducibility: perhaps a better determining question for richness could be \"is there a function which can tell if a pattern looking like this will ever emerge in that computation, in time polynomial to the size of that pattern?\" or \"is there a function that can, in time polynomial to `n`, predict a piece of state that would naively take `aⁿ` steps to compute?\"\n\n### 2: does the cosmos instantiate any rich computation ?\n\nto **instantiate a computation** means for that computation to, somewhere, eventually, be run (forever or until it halts). i start from the fact that i'm observing a coherent-looking universe, deduce that at least *some* computation is happening, and which other computations are happening (as in, are being observed somewhere, or which i could have observed). as [clarified before](https://carado.moe/limiting-real-universes.html), one can't just assume that all computations are equally happening: things look way too coherent for that, there seems to be a bias for coherence/simplicity (one which i've tentatively attributed to [how soon that computation spawns](https://carado.moe/less-quantum-immortality.html)).\n\nlooking at the cosmos (the set of instantiated computations) from a computational perspective, it seems like it contains at least our universe, which is expanding. 
if this expansion is, [as has been hypothesized](https://www.wolframphysics.org/technical-introduction/potential-relation-to-physics/cosmology-expansion-and-singularities/), caused by the computational substrate of the universe manufacturing new vertices of spacetime, and computations can run on this new fabric as it is produced, then it's possible that [some computations can run forever](https://carado.moe/ai-alignment-wolfram-physics.html), including potentially rich ones.\n\nhowever:\n\n### 3: does the cosmos contain causal bubbles ?\n\na **causal bubble** is a piece of computation that can run forever with the guarantee that it won't be physically interfered with from the outside; see [yes room above paperclips](https://carado.moe/above-paperclips-2.html).\n\nfor example, while one can build [a turing machine inside conway's game of life](https://www.conwaylife.com/wiki/Turing_machine), a stray object on the same conway's game of life plane can eventually collide with said machine and break its computational process.\n\nhowever, in some [graph rewriting rulesets](https://en.wikipedia.org/wiki/Graph_rewriting), as well as in expression-rewriting systems with nested expressions such as a varient of [SKI calculus](https://en.wikipedia.org/wiki/SKI_combinator_calculus) or [lambda calculus](https://en.wikipedia.org/wiki/Λ_calculus) where the evaluation rule expands all sub-expressions, some pieces of computation can run without ever being physically interfered with by other pieces of the computation.\n\n(i'm specifying \"*physically* interfered with\" because acausal coordination or mutual simulation can lead to interference, but at least that interference is up to the singleton (such as a superintelligence) \"running\" said bubble (if any); they can just choose to never acausally coordinate and to never simulate other bubbles)\n\nin our own spacetime, it seems like causal bubbles exist thanks to the expansion of spacetime: some pairs of points get further apart from one another faster than celerity, and thus should never be able to interact with one another so long as that expansion continues and FTL travel is impossible. under the perspective of wolfram physics, however, it is not clear that both of those things will necessarily be the case forever; spacetime might be [hackable](https://carado.moe/brittle-physics.html).\n\nnote that the splitting of universes with nondeterministic rules (such as ours with quantum mechanics) into different causally isolated timelines is another way for causal bubbles to exist, assuming the implementation of such a nondeterministic universe is that all possibilities are instantiated at any nondeterministic choice.\n\nthe presence of causal bubbles allows some pieces of spacetime to [survive superintellingences appearing in other pieces of spacetime](https://carado.moe/unoptimal-superint-doesnt-lose.html), while the absence of causal bubbles makes it that a superintelligence or collection of superintelligences probably eventually does take over everything.\n\nif they exist, then causal bubbles are a blessing and a curse: they save us from alien superintelligences and, [between timelines](https://carado.moe/timeline-codes.html), from our own superintelligences, but they might also ensure that our own aligned superintelligence (once we have figured out alignment) cannot reach all computation, and thus that any random person has a good chance of existing in a bubble that hasn't been \"saved\" by our aligned superintelligence.\n\n### 4. 
is a universal-complete computation instantiated ?\n\n[**universal complete computations**](https://carado.moe/universal-complete.html) (such as the annex in [this post](https://carado.moe/less-quantum-immortality.html)) instantiate *all* computations, over time.\n\nif one takes the perspective that a top-level \"root\" bubble existed first, then the answer to this question is up in the air.\n\nmaybe we are this root computation, and the deterministic fate of the cosmos (in all timelines) is, for example, for physics to break at some point and kill everything, or for a superintelligence to appear at some point and kill everything (the two being [pretty equivalent](https://carado.moe/brittle-physics.html)) leaving [no room for bubbles](https://carado.moe/above-paperclips.html).\n\nmaybe the root bubble [does spawn](https://carado.moe/above-paperclips-2.html) a finite and small (after deduplicating by identical computations) number of bubbles, and each of those is fated to be killed in its entirety.\n\nor, maybe somewhere in this chain, one of the bubbles spawns *many* new, different bubbles, at which point it becomes likely enough that eventually one of those bubbles either is, or itself later spawns, a universal-complete program. in which case, the initial set of the \"root\" bubble and maybe a few other next bubbles serve together as merely the boot process for the program that will eventually spawn *all computations*.\n\nit might be interesting to find out how small universal-complete programs can get, both in bubble-friendly frameworks like systematically-expanded SKI calculus, and bubble-unfriendly frameworks like cellular automata; to get an idea how likely they are to randomly be stumbled into.\n\n", "url": "n/a", "filename": "questions-cosmos-computations.md", "id": "db6531d099de4a450074d11bf6caa734"} {"source": "carado.moe", "source_type": "markdown", "title": "against-ai-alignment", "authors": "n/a", "date_published": "n/a", "text": "2021-11-08\n\n## against AI alignment ?\n\n[i usually consider AI alignment to be pretty critical](https://carado.moe/were-all-doomed.html). that said, there are some ways in which i can see the research that is generally associated with alignment to have more harmful potential than not, if it is applied.\n\nthis is a development on my idea of [botched alignment](https://carado.moe/botched-alignment-and-awareness.html): just like AI tech is dangerous if it's developed before alignment because unaligned AI might lead to [X-lines](https://carado.moe/timeline-codes.html), alignment is dangerous because it lets us align AI to things we think we want, but aren't actually good; which sounds like it could lead to an increase in the ratio of S-lines to U-lines.\n\nwith this comes a sort of second [orthogonality thesis](https://www.lesswrong.com/tag/orthogonality-thesis), if you will: one between what we think we want and what is actually good. note that in both cases, the orthogonality thesis is a *default* position: it could be wrong, but we shouldn't assume that it is.\n\ndetermining what is good is very hard, and in fact has been the subject of the field of *ethics*, which has been a work in progress for millenia. and, just like we must accomplish alignment before we accomplish superintelligence if we are to avoid X-risks, we might want to consider getting ethics accomplished before we start using alignment if we are to avoid S-risks, which should be a lot more important. 
or, at least, we should heavily consider the input of ethics into alignment.\n\nthings like [my utopia](https://carado.moe/∀V.html) are merely patches to try and propose a world that *hopefully* doesn't get *too bad* even after a lot of time has passed; but they're still tentative and no doubt a billion unknown and unknown unknown things can go wrong in them.\n\nit is to be emphasized that both parts of the pipeline are important: we must make sure that what we think is good is what is actually good, and then we must ensure that that is what AI pursues. maybe there's a weird trick to implementing what is good directly without having to figure it out ourselves, but i'm skeptical, and in any case we shouldn't go around assuming that to be the case. in addition, i remain highly skeptical of approaches of \"value learning\"; that would seem like it would be *at most* as good as aligning to what we think is good.\n\nso, it is possible that just as i have strongly opposed doing AI tech research until we've figured out AI alignment, i might now raise concerns about researching AI alignment without progress on, and input from, ethics. in fact, there's a possibility that putting resources into AI tech over alignment could be an improvement: [we should absolutely avoid S-risks, even at the cost of enormously increased X-risks](https://carado.moe/when-in-doubt-kill-everyone.html).\n\n", "url": "n/a", "filename": "against-ai-alignment.md", "id": "0f385c3d5ba251f6a175fddc42a88099"} {"source": "carado.moe", "source_type": "markdown", "title": "fermi-paradox", "authors": "n/a", "date_published": "n/a", "text": "2021-06-16\n\n## my answer to the fermi paradox\n\nthe [fermi paradox](https://en.wikipedia.org/wiki/Fermi_paradox) asks, if aliens are supposedly so statistically prevalent, why we haven't received any radio signals from them.\n\nhere is my independently-developed (though probly not original) answer:\n\nstatistically, it seems reasonable that civilizations would [accidentally invent unaligned superintelligence](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) not long after inventing radio signals (in our case, a couple centuries). in order to percieve those signals, you would need to exist *after* your planet receives those signals, but *before* your planet receives that unaligned superintelligence's [expanding sphere of death](https://carado.moe/moral-cost-of-unaligned-ai.html), which might very well travel at celerity or near-celerity.\n\nthus, given the low probability, it is not surprising that we haven't percieved those; for any given alien civilization, in a given timeline, we either haven't received their radio signals, or have already been killed by them. seeing as we're alive, this timeline must be one of the former.\n\n", "url": "n/a", "filename": "fermi-paradox.md", "id": "4da523c2d92d5fc2ac6fa381f7debba9"} {"source": "carado.moe", "source_type": "markdown", "title": "exact-minds-in-an-exact-world", "authors": "n/a", "date_published": "n/a", "text": "2021-10-13\n\n## exact minds in an exact world\n\n[in the sequences](https://www.readthesequences.com/Zero-And-One-Are-Not-Probabilities) it is argued that 0 and 1 are not probabilities; that these \"certainty ratios\" aren't meaningful. 
but, i can think of a situation that challenges this.\n\nimagine a fully deterministic world — for example, running on [a cellular automaton](https://en.wikipedia.org/wiki/Cellular_automata) — and imagine that in this world there are some intelligences (either artificial or natural) that utilize this determinism to have the ability to make flawless logical deductions (for example, [automated theorem proving](https://en.wikipedia.org/wiki/Automated_theorem_proving) algorithms running on computers that cannot ever have undetected [hardware failures](https://en.wikipedia.org/wiki/Soft_error)). for example, if they think about mathematics, under the axioms under which they work, 2 + 2 will always equal 4, and doing any mathematical computation will either result in them knowing they don't have the computational resources to do the operation, or the result being guaranteedly true with the same certainty as that the cellular automaton's rules will be applied next tick.\n\nnow, these beings still have a use for probability and statistics: those can be used to talk about parts of the world that they don't have complete information about. but, there will be some contexts, both purely in their minds (such as logic or math) or sometimes in the real world (they could make assessments like \"this box cannot contain any [spaceship](https://en.wikipedia.org/wiki/Spaceship_%28cellular_automaton%29) of a certain size\") that *will* be, functionally, certain.\n\nit could be argued that they *should* still be weighing everything by the probability that there might be unknown unknowns; for example, their cellular automaton might have rules that apply only very rarely, and that they never got a chance to observe yet but might yet observe later. but, let's say that they *assume* the rules of their world are exactly as they think, and let's say that they happen to be correct in that assessment. does that not make some of their deductions actually entirely certain?\n\n\n\n", "url": "n/a", "filename": "exact-minds-in-an-exact-world.md", "id": "bce78a825fbccdd65be18040b16b1b07"} {"source": "carado.moe", "source_type": "markdown", "title": "above-paperclips", "authors": "n/a", "date_published": "n/a", "text": "2021-11-20\n\n## no room above paperclips\n\n(edit: see also [*yes room above paperclips?*](https://carado.moe/above-paperclips-2.html))\n\nwhen presented with the idea of a [paperclip-maximizing unaligned superintelligence](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer), people sometimes mention the possibility that sure, the universe gets tiled with paperclips, but maybe there's [slack](https://thezvi.wordpress.com/2017/09/30/slack/) in how paperclips are arranged, and that maybe nice things can exist again \"above\" paperclips.\n\n(note: this relates to the idea of [\"daemon-free\"ness in \"minimal circuits\"](https://www.lesswrong.com/posts/nyCHnY7T5PHPLjxmN/open-question-are-minimal-circuits-daemon-free))\n\ni think it's a reasonable line of thinking, but it's short-sighted: let's think about what happens next. eventually, above those paperclips, some evolutionary process may take place, leading (possibly, such as in our case, through the step of a technological species) eventually to a superintelligence taking over everything. 
given that *the entire cosmos* gets tiled with paperclips [*possibly forever*](https://carado.moe/ai-alignment-wolfram-physics.html), and that a superintelligent singleton taking over everything is irreversible (short of everything dying forever), in all likelihood in the long term in any piece of universe not already actively managed by a superintelligence, eventually either everything dies forever, or a superintelligence takes over everything forever.\n\nand then what? either this new superintelligence cares about \"upwards\", and has some plan for how its own paperclips are arranged (such as into more \"macro\"-paperclips), or it doesn't and the cycle begins again.\n\ngiven that the outcome of an \"alien\" superintelligence's takeover is probly a worse outcome than the takeover of a superintelligence of our own (we should expect them to be about as incompetent as us at alignment, but to have values [less aligned to ours](https://www.lesswrong.com/posts/HawFh7RvDM4RyoJ2d/three-worlds-collide-0-8)), we need to care about our own iteration first; it's our best bet.\n\nthe point is, eventually for any given local patch of spacetime, either a superintelligence explosion is reached or everything dies forever. this can't be avoided, even by \"climbing up\" on substrates, so we should care about alignment now; we can't just hope that things are okay despite paperclips.\n\n", "url": "n/a", "filename": "above-paperclips.md", "id": "2ff035c1f7f236cbf1a825ddec6016bb"} {"source": "carado.moe", "source_type": "markdown", "title": "alignment-optimization-processes", "authors": "n/a", "date_published": "n/a", "text": "2021-10-23\n\n## alignment is an optimization processes problem\n\ni like to talk about [AI](https://www.lesswrong.com/tag/ai) alignment a lot, but the matter of alignment really applies to *optimization processes* in general.\n\nhere are some ways it applies to some other areas:\n\n### natural selection\n\nnatural selection is an optimization process that improves the survival and duplication of inheritable traits (genes) in living beings.\n\nit is not intelligent: there is no agent involved in this process which is able to make decisions by looking ahead into the future at what consequences those decisions will have; with the possible exception of humans making rational decisions about what will maximize their amount of offspring.\n\nit is completely multipolar: basically no agents in this process (either genes themselves, or individuals or populations carrying those genes) have the ability to coordinate decisions with one another, since they're not even intelligent.\n\nthe default of natural selection is genes whose only purpose is to be better at duplicating themselves.\n\none way in which we've aligned this process is by breeding: by selecting the individuals we like best among, for example, crops, cattle, or dogs, we've been able to align the process of gene selection to respond to what we value rather than the default.\n\n### economics\n\nthe economy is an optimization process that improves economic efficiency.\n\nit is intelligent: actors in the economy, ranging from individuals to states and giant conglomerates, have the ability to make intelligent decisions about the long term.\n\nit is fairly multipolar: while they don't use it much, states do have overriding power over companies (they determine what's legal or not, after all), and also economic agents are able to coordinate to an extent using contracts and trusts. 
nevertheless, it is still largely multipolar, with agents overall competing with one another.\n\nthe default of economics is the optimization out of anything that doesn't generate maximally much resources: the optimizing out of people when they become the unoptimal form of labor because of automation, and the strip-mining of the universe to acquire ever more resources with which to create more machines to mine even more resources, and so on.\n\nthe way we align economics is through taxes, redistribution, and the like. redistribution like [UBI](https://carado.moe/ubi.html) aligns the economy to serve the demand of people, while taxing externalities can align economic agents to take steps to preserve nice things, such as avoiding pollution.\n\n### electoralism\n\nelectoral representative democracy is an optimization process that improves voter satisfaction.\n\nit is intelligent: the agents competing for the reward, here political parties, are able to make decisions about the future. some organizations even plan for the very long term, taking steps to improve their chances when they become parties, long before they do.\n\nit is fairly multipolar: like economics, while parties can coordinate and ally with one another, they are still competing agents at the end of the day, with no central authority to guide them and solve coordination.\n\nthe default of electoralism is parties throwing all values under the bus to do whatever gets them and keeps them in office for as long as possible.\n\nthe way we align electoralism is by having universal suffrage on the one hand, which makes it that it is the population that parties must try to satisfy; and the various apparatuses of liberal democracies (journalism and free press, public debate, education of the voting public, etc), which we'd hope would help that voting population determine which parties do indeed implement policies that satisfy their demand.\n\n", "url": "n/a", "filename": "alignment-optimization-processes.md", "id": "c4e67680111db03923ab7cb2037cccf0"} {"source": "carado.moe", "source_type": "markdown", "title": "balancing-utilitarianism", "authors": "n/a", "date_published": "n/a", "text": "2022-02-04\n\n## balancing utilitarianism\n\nsuppose you have multiple values (\"i want to be healthy but also i want to eat a lot of fries\") or a value applying to multiple individuals (\"i want both alice and bob to be happy\"), but sometimes there are tradeoffs between these values. how do you resolve such situations ?\n\na simple weighted sum might suffice in many cases, but i feel like there are cases where this is not sufficient.\n\nfor example, consider a population of 5 persons, who you care about equally, and consider a simple scalar value you have for them, such as happiness.\n\nnow, consider the following three options:\n\n* all individuals get 0.5 utility (\"fair\")\n* one individual gets 0.9 utility, the other four get 0.4 utility (\"bully\")\n* one individual gets 0.1 utility, the other four get 0.6 utility (\"scapegoat\")\n\nif we are to use a simple sum, all three of these situations sum to 2.5 total utility; yet, i feel like something ought to be done to favor the fair situation over the other two (and then probably to favor the bully situation over the scapegoat situation?)\n\nwhat i propose to address this is to apply a square root (or other less-than-one exponent) to the utilities of persons before summing, which has the effect of favoring more equal situations. 
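to make the arithmetic concrete, here is a minimal check of my own (in Python) comparing a plain sum with a square-root sum over the three profiles above:

```python
profiles = {
    "fair":      [0.5] * 5,
    "bully":     [0.9] + [0.4] * 4,
    "scapegoat": [0.1] + [0.6] * 4,
}
for name, utils in profiles.items():
    plain = sum(utils)                        # 2.5 for all three profiles
    concave = sum(u ** 0.5 for u in utils)    # square-root weighting before summing
    print(f"{name}: plain = {plain:.2f}, sqrt-sum = {concave:.2f}")
```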
in this case, we get:\n\n* fair: 3.54 utility\n* bully: 3.48 utility\n* scapegoat: 3.41 utility\n\nwhich does seem to produce the desired effect: in this situation, it maps to how i *feel* about things: fair > bully > scapegoat\n\n", "url": "n/a", "filename": "balancing-utilitarianism.md", "id": "2756bc5eff895fb11e33a66940ece55c"} {"source": "carado.moe", "source_type": "markdown", "title": "two-principles-for-topia", "authors": "n/a", "date_published": "n/a", "text": "2020-11-15\n\n(edit: this post is *sort of* superceded by [∀V](https://carado.moe/∀V.html))\n\n## Two Principles For Topia\n\nthe more i think about it, the less i think the solution to [Moloch](https://web.archive.org/web/20140730043944/http://slatestarcodex.com/2014/07/30/meditations-on-moloch/) is a single benevolent Elua; or, in other terms, we shouldn't implement Elua, but we should enact reasonable principles which Elua might want to implement herself.\n\nhere are what i currently believe to be the two principles that form the basis of a largely [freedom-conserving](https://carado.moe/core-vals-exist-selfdet.html) utopia:\n\n* the first principle, Voluntaryism, consists of NAP, UBI, and population control.\n\n\t* the systematic enforcement of the [non-aggression principle](https://en.wikipedia.org/wiki/Non-aggression_principle) (NAP) to guarantee agency and freedom of association,\n\t* mandatory redistribution enough for every individual to be guaranteed a reasonable-with-[slack](https://thezvi.wordpress.com/2017/09/30/slack/) living (UBI) (where living includes basic resources and healthcare up to immortality), and\n\t* enough population control to guarantee this redistribution can even happen in the first place in a world with (even locally) limited resources,\n\n\tare to be the basis of a reasonable [voluntary](https://en.wikipedia.org/wiki/Voluntaryism) world.\n\n\tsecondary notions like taxation on [externalities](https://en.wikipedia.org/wiki/Externality) and usage of [the commons](https://en.wikipedia.org/wiki/Commons) help make that UBI tangible (\"why does the UBI currency have value ?\" → because it's what eventually one must pay those taxes with) and reasonably redistribute ressources so as to help all persons benefit from growth.\n\n* the second principle is the dismantlement of non-person forces (DNPF).\n\n\twhat i mean by a non-person force is any phenomenon that interacts with mankind in a way that isn't answerable to persons; this goes, in order of scale, from gravity and kinetics, to cancer, to publicly-owned corporations and states. these all keep abusing persons (by which i here mean [moral patient](https://en.wikipedia.org/wiki/Moral_agency#Distinction_between_moral_agency_and_moral_patienthood)) in many ways, and just generally keep us from being in control of our lives. \n\n\tthe example of corporations is particularly insidious: though they would be (under UBI) aligned to benefit the values of persons, they still outcoordinate those persons and thus in many ways outsmart them through the abuse of discoordination and cognitive biases; and not only that, but they are, in the petri dish of capitalism, bred so as to maximize their ability to do this. 
that said, at least fully top-down autocratic corporations have a person agent at the top, who is able to enforce the values of persons; publicly-owned corporations are even worse in that even their top-level direction is uncoordinated enough that valuing nice things is guaranteedly out of the equation (this could perhaps be addressed with better and maybe more society-distributed shareholder voting, but those shareholders probably get outcoordinated).\n\n\t(the argument above, by the way, is my largest criticism of non-[distributist](https://en.wikipedia.org/wiki/Distributism) capitalism)\n\n\tin effect, this principle turns the world we inhabit from a world of cold natural and emergent laws inside which reside some minds located in brains (materialism), into a world of ad-hoc minds determining everything else ([panpsychism](https://en.wikipedia.org/wiki/Panpsychism) ?).\n\n\tthe easiest way to implement this principle is probably to move everyone to a virtual world (which saves resources too, which helps the population control cap be way higher)\n\nin my current opinion, those two principles **must be enforced** for the basis of a utopia to be form. the rest can be done through the voluntary action of persons (hopefully), but these two principles are what Elua/the singularity is to **enforce** for the continued free and valueful life of persons to be guaranteed.\n\nVoluntaryism alone is not enough, and this is largely missed by what i'm tempted to call right-wing utopians; not just abusive structures, but systematically self-reinforcing abusive structures, can and will still happen even under a complete voluntary society. [Meditations on Moloch](https://web.archive.org/web/20140730043944/http://slatestarcodex.com/2014/07/30/meditations-on-moloch/) addresses this largely with coordination, but coordination only *hopefully wins battles*; the addition of DNPF permanently wins the war.\n\nDNPF alone is not enough either, and this is what is largely missed by what i'm tempted to call left-wing utopians; in a virtual world of minds where resources are fairly allocated between persons, there can still be abuse, plagues, [malthusian traps](https://en.wikipedia.org/wiki/Malthusian_trap), and so on; and ultimately abusive structures, just of a different kind. 
the common left-wing answer of organizing people (and the scarier \"changing culture to make people systematically organize against those\" which, if voluntary, is largely wishful thinking, and if not, insanely violates self-determination and the values of persons) only wins battles; the addition of Voluntaryism permanently wins the war.\n\n", "url": "n/a", "filename": "two-principles-for-topia.md", "id": "e5c5c4b131f87779f45b34fe958a6864"} {"source": "carado.moe", "source_type": "markdown", "title": "were-all-doomed", "authors": "n/a", "date_published": "n/a", "text": "2021-06-30\n\n## we're all doomed\n\n[a major tech company is now explicitly invested in getting AI to write code](https://copilot.github.com/).\n\nthis is a major warning sign; a first step on the explicit path to [superintelligence](https://en.wikipedia.org/wiki/Superintelligence) [explosion](https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion), an event [already considered relatively likely](https://intelligence.org/faq/#imminent) and, [in the absence of sufficient AI alignment progress](https://intelligence.org/2018/10/03/rocket-alignment/), is overwhelmingly likely to [permanently end all life at least in the observable universe](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer).\n\nthe time scale probably lies somewhere between a few years and a few decades, but in any case it's becoming to seem increasingly unlikely that [the only organization trying to actually figure out AI alignment](https://intelligence.org/) is gonna accomplish that in time.\n\nif you can, go and [help them out](https://intelligence.org/get-involved/), or at least [donate everything you can to them](https://intelligence.org/donate/).\n\nif you're currently working in AI development in any way, *please stop*. whether anything on earth survives this century is gonna be a matter of whether AI alignment is figured out by the time we get enough AI development; by helping the latter, you're making it even more likely that it happens before the former.\n\non a gloomier note, if you have all the philosophical beliefs required to think it can work, you may want to start preparing to [abandon this timeline](https://carado.moe/quantum-suicide.html) if singularity starts happening and looks like it's not gonna go well.\n\nedit: see also: [are we in an AI overhang?](https://www.lesswrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang)\n\n", "url": "n/a", "filename": "were-all-doomed.md", "id": "4c50ef258c30c6635000902db01c539b"} {"source": "carado.moe", "source_type": "markdown", "title": "life-refocus", "authors": "n/a", "date_published": "n/a", "text": "2022-05-13\n\n## life refocus\n\nbecause of the [recent](https://www.metaculus.com/questions/3479/date-weakly-general-ai-system-is-devised/) [events](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy), which i've been dreading [for a while](https://carado.moe/were-all-doomed.html), i'm taking AI risk a lot more seriously, and have started significantly refocusing my life.\n\nthere is a post called [*musk's non-missing mood*](https://lukemuehlhauser.com/musks-non-missing-mood/) that resonates quite well with me. it is indeed kind of disconcerting how people who seem rationally aware of AI risk, don't seem to *grok* it as an *actual thing*. despite how real it is, it's hard to think of it not as fantasy fiction.\n\ni totally understand why. i've been there too. 
but eventually i managed to progressively update.\n\ni'm still not quite there yet, but i'm starting to actually grasp what is at stake.\n\n[\"detaching the grim-o-meter\"](https://mindingourway.com/detach-the-grim-o-meter/) remains a reasonable thing to do; you don't want to become so depressed that you kill yourself instead of saving the world. but you also don't want to remain so deluded that you don't quite weigh the importance of saving the world enough either.\n\ni'll learn japanese after the singularity. i'll make [my game](https://carado.moe/game.html) and [my alternative web](https://carado.moe/saving-the-web.html) and my conlang and [my software stack](https://carado.moe/psi.html) and many other things, after the singularity. it is painful. but it is what's right; it's closer to [the best i can do](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy).\n\nand i know that, if at some point i give up, then it won't look like pretending that everything is fine and compartmentalizing our imminent death as some fantasy scenario. it'll be a *proper* giving up, like going to spend the remaining years of my life with my loved ones. even my giving up scenario is one that takes things seriously, as it should. that's what being an adult capable of taking things seriously is like.\n\nhow you handle your mental state is up to you. there is a collection of AI-risk-related mental health posts [here](https://www.lesswrong.com/posts/pLLeGA7aGaJpgCkof/mental-health-and-the-alignment-problem-a-compilation-of). do what it takes for you to do the work that needs to be done. that's not becoming a doomer; your brain is straight-up not designed to deal with cosmic doom. but that's not remaining blindly naive either. the world needs you; it won't be saved by pretending things are fine.\n\nand it *certainly* won't be saved by pretending things are fine and *working on AI capability*. that's *just bad*. 
*please* don't.\n\nplease take AI risk seriously.\n\n", "url": "n/a", "filename": "life-refocus.md", "id": "411c7a2f18243acc5c947d7ff880673d"} {"source": "carado.moe", "source_type": "markdown", "title": "upload-for-alignment", "authors": "n/a", "date_published": "n/a", "text": "2022-01-11\n\n## uploading people for alignment purposes\n\nas per [my utopian vision](https://carado.moe/∀V.html), i've thought that an aligned AI would want to figure out how to upload us.\n\nbut, thinking about it more, it could be the other way around: if we can upload people in a deterministic simulation, this can buy us a lot of time to figure out alignment, as per [this post](https://carado.moe/noninterf-superint.html).\n\nnotably, the simulation could for example contain a single uploaded person (say, eliezer yudkowsky, or a bunch of copies of yudkowsky), which would save us from an arms-race type coordination problem; and while, on the outside, the superintelligence is killing everyone instantly to tile the universe with more compute to run this simulation, whoever's inside of it has plenty of time to figure things out (and hopefully [resurrect everyone once that's done](https://carado.moe/what-happens-when-you-die.html)).\n\nthis seems like a long shot, but [have you looked around?](https://www.lesswrong.com/s/n945eovrA3oDueqtq) this could be the [miracle](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/7im8at9PmhbT4JHsW) we need.\n\nof course this could also turn into a [hell](https://carado.moe/botched-alignment-and-awareness.html) where infinite yudkowsky's are suffering forever everywhere. hopefully we can make another button which actually stops the simulation and tiles the universe with only benign paperclips, and maybe even make that button auto-activate if the yudkowsky is detected to be suffering or incoherent.\n\nremember: [as long as the simulation is deterministic, superint can't force the uploaded yudkowsky to not shut it down](https://carado.moe/noninterf-superint.html), or force or even coerce him to do anything for that matter; it can only make the yudkowsky simulation run slower, which basically eventually achieves the same effect as either completing it or shutting it down.\n\n", "url": "n/a", "filename": "upload-for-alignment.md", "id": "34219b3de9e1cd4ac2500426c65da67c"} {"source": "carado.moe", "source_type": "markdown", "title": "hackable-multiverse", "authors": "n/a", "date_published": "n/a", "text": "2022-02-03\n\n## hackable multiverse\n\nin [a previous post](https://carado.moe/brittle-physics.html) i talk about how hackable physics might allow a superintelligence to take over very quickly (perhaps faster than celerity).\n\nin *[psi rewriting](https://carado.moe/psi-rewriting.html)* i propose that multiversehood can be more cleanly described as a particularly implemented feature of the cosmos, rather than an intrinsic thing.\n\nbut, if the cohabitation of multiple timelines is indeed an implemented feature rather than a primitive one, then there is a possibility that it is hackable, and that a superintelligence could hack across timelines.\n\nnow, it is to be noted that even if hackability exists, it might still be limited: perhaps there something like a light cone at play, or perhaps a given timeline can only access a finite number of other timelines.\n\nit is to be remembered that timelines are not slots, they're not variables that hold values; timelines are *the values themselves*. 
still, hackability could mean some branches of the causality graph stop getting computed, for example.\n\neither way, under these conditions, even quantum immortality might not save us from an X-risk superintelligence, and [given](https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode) [recent](https://openai.com/blog/formal-math/) [developments](https://blog.eleuther.ai/announcing-20b/), we should panic a lot.\n\n", "url": "n/a", "filename": "hackable-multiverse.md", "id": "3d1a961931b99c14078cfd6d154bca99"} {"source": "carado.moe", "source_type": "markdown", "title": "brittle-physics", "authors": "n/a", "date_published": "n/a", "text": "2022-01-05\n\n## brittle physics and the nature of X-risks\n\nsuppose physics is hackable, and a hard to accomplish hack that requires intelligence (like a fancier version of [rowhammer](https://en.wikipedia.org/wiki/Rowhammer)) can break the fabric of spacetime — maybe in ways that said intelligence can take advantage of, such as embedding its computation into something that survives said breakage, in a way that could help such a superintelligence accomplish its goal.\n\nwe could expect that [boxing an AI](https://en.wikipedia.org/wiki/AI_box) could be really hard: even without access to the outside, it might be able to guesses physics and hack it, from the comfort of its box.\n\nas usual in such [X-risk scenarios](https://carado.moe/timeline-codes.html), i believe we just [keep living only in timelines in which, by chance, we don't die](https://carado.moe/quantum-suicide.html).\n\nthese sort of hacks are not ruled out by [wolfram physics](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/). indeed, they are plausible, and can spread at some speed faster than celerity — because they can run in the substrate *underlying* spacetime — such that nobody would ever be able to observe such hacks: the hack reaches and destroys you before the result of the breakage can reach your sensory organs, let alone your brain.\n\nso, maybe \"dumb-goal\" superintelligences such as [paperclip maximizers](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) are popping up all over the place all the time and constantly ruining the immense majority of not-yet-hacked timelines, and we keep living in the increasingly few timelines in which they haven't done that yet.\n\nnow, let's stop for a minute, and consider: what if such a hack *isn't* hard ? what if it *doesn't* need an intelligent agent ?\n\nwhat if, every planck time, every particle has a 99% chance of breaking physics ?\n\nwell, we would observe exactly the same thing: those hacked universes either become computationally simple or [boot up more universes](https://carado.moe/above-paperclips-2.html); either way, we don't survive in them, so we don't observe those hacks.\n\nin this way, it is [S-lines and U-lines](https://carado.moe/timeline-codes.html) that are very special: outcomes in which we *survive*, thanks to a superintelligence with a \"rich\" goal. the rest is just timelines constantly dying, whether it be due to X-risk superintelligences, or just plain old physics happening to cause this.\n\nin fact, let's say that the universe is [a nondeterministic graph rewriting system](https://en.wikipedia.org/wiki/Graph_rewriting) with a rule that sometimes allows everything be reduced to a single, inactive vertex. would this count as \"sometimes everything is destroyed\" ? 
or would this make more sense to be modeled as a weird quirk of physics where the graph of possible timelines includes the production of passive vertices all the time, which can be safely ignored ?\n\nwhat if instead of a nondeterministic system, we have a deterministic one [which just happens to expand all timelines](https://carado.moe/psi-rewriting.html). in such a system, \"different timelines\" is no longer a primitive construct: it is merely an observation about the fact that such a system tends to, when ran, create from a given piece of data, several newer ones. let's say that in such a system there is a rule where from every piece of data we'd consider a timeline, numerous inert vertices are also created.\n\nwould we say \"aha, look! every time a computation step happens, many inert vertices are created around it, and i choose to interpret this as the creation of many timelines (one per inert vertex) in which everyone in that universe dies, and others (new complex pieces of data) in which everything keeps existing\",\n\nor would we, in my opinion more reasonably, say \"well, it looks like as a weird quirk of how this system runs, many inert vertices are popping up; but they're simple enough that we can just ignore them and only consider richer new pieces of data as *timelines* proper.\"\n\ni believe, if we are to worry about what states this universe ends up in, we ought to use a measure of what counts as a \"next state of this universe\" that measures something about the richness of its content: maybe the amount of information, maybe the amount of computation going on, or maybe the number of moral patients. and, depending on what measure we use, \"losing\" timelines to paperclip maximizers (which turn the universe into something possibly simple) is no more of a big deal than \"losing\" timelines to a rewriting rule that sometimes creates inert vertices, and neither of which should really count as proper timelines.\n\notherwise we end up needlessly caring about degenerate states because of what we believe to be, but really isn't, an objective measure of what a timeline is.\n\n*timelines* might be in the [map](https://en.wikipedia.org/wiki/Map–territory_relation), while what is in the territory is just *what we end up observing* and thus, computed states that contain us.\n\nfinally, what about universe states where *all* outcomes are an inert vertex or an otherwise simple universe (such as as infinite list of identical paperclips) ? while those might happen, and i'd say *would* count as X-risks, you don't need to consider simple states as timelines to make that observation: maybe some timelines end up in a state where *no* new states can be created (such as a locally truly terminated piece of computation), and others end up in a state where *only simple* new states are created. 
those ought to be considered equivalent enough, and are what a true X-risk looks like.\n\n", "url": "n/a", "filename": "brittle-physics.md", "id": "a529ece6d0accea13b4aeac777f85b20"} {"source": "carado.moe", "source_type": "markdown", "title": "universal-complete", "authors": "n/a", "date_published": "n/a", "text": "2021-07-16\n\n## universal complete\n\nunder [a turing-complete model of computation](https://en.wikipedia.org/wiki/Model_of_computation), there are some initial-states or initial-states-and-rulesets which eventually contain an algorithm that iterates over all possible algorithms and runs them.\n\nin single-threaded models, it can do this by having an increasingly long list of algorithms that it runs by one step each; it's not an issue if each algorithm runs increasingly slowly, as long as it keep running.\n\ni choose to call such initial-states[-and-rulesets] *Universal Complete*.\n\nthey contain all turing computation based universes (and thus each other, if indirectly); so, for example, if [Rule 30 with one alive cell](https://en.wikipedia.org/wiki/Rule_30) is Universal Complete, then it contains all computable universes (including ours).\n\nthis could be interesting because proving that property about some frameworks means that programming a particular algorithm starting from that initial-state[-and-ruleset] is just a matter of *locating* it.\n\nit could also be interesting because it might turn out that many things that *look* sufficiently chaotic (such as Rule 30 with one alive cell) are effectively universal complete, and so [Wolfram's quest](https://www.youtube.com/watch?v=0bMYtEKjHs0) for the rule that describes our universe [in his hypergraph-rewrite system](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/) might be reductible to \"whichever simplest initial-state-and-ruleset starts all algorithms\"; though his idea of *running every rule at every step* might kind of functionally do that.\n\n### appendix: a simple universal-complete program\n\nhere is a simple algorithm implemeting this, iterating over the countable set of turing machines.\n\nx ← simplest turing machine\nl ← empty list\nloop:\n\tfor machine in l:\n\t\tupdate machine by one step of computation\n\n\tappend x to l\n\tx ← next simplest turing machine after x\n\n\n", "url": "n/a", "filename": "universal-complete.md", "id": "bb8f462466cd5a8308203de60679cde8"} {"source": "carado.moe", "source_type": "markdown", "title": "values-tdd", "authors": "n/a", "date_published": "n/a", "text": "2022-03-21\n\n## values system as test-driven development\n\ni realized something while reading [hands and cities on infinite ethics](https://handsandcities.com/2022/01/30/on-infinite-ethics/): the work of determining the shape of [our values system](https://carado.moe/not-hold-on-to-values.html) is akin to [test-driven development](https://en.wikipedia.org/wiki/Test-driven_development).\n\nwe are designing a procedure (possibly looking for [the simplest one](https://en.wikipedia.org/wiki/Kolmogorov_complexity)) by throwing it at a collection of decision tests, and looking for which one matches our intuitions.\n\ni wonder if a value-learning approach to AI alignment could look like trying to get superintelligence to find such a procedure; perhaps we feed it a collection of tests and it looks for the simplest procedure that matches those, and hopefully that extrapolates well to situations we didn't think of.\n\nperhaps, even pre-superintelligence we 
can formalize values research as tests and try to come up with, or generate, a procedure which passes them while also being selected for simplicity.\n\nwhy simplicity? doesn't occam's razor only apply to descriptive research, not prescriptive? that is true, but \"what is the procedure that formalizes my values system\" is indeed a descriptive matter, in a way: we're trying to model something to the best factual accuracy we can.\n\n", "url": "n/a", "filename": "values-tdd.md", "id": "2b1e480600ee9860db46946394890455"} {"source": "carado.moe", "source_type": "markdown", "title": "what-happens-when-you-die", "authors": "n/a", "date_published": "n/a", "text": "2021-08-25\n\n## what happens when you die?\n\ncontrary to popular (secular) belief, i don't believe nothing happens.\n\nconsidering that [you are your information system](https://carado.moe/you-are-your-information-system.html) and [nothing else](https://carado.moe/persistent-data-structures-consciousness.html), any future occurrence of your information system *is you*. so, the meaning of the question \"what happens when you die?\" is really \"what are some next things your information system will perceive after facing what should be fatal events in its original body?\"\n\nthis is very likely [not nothing](https://carado.moe/quantum-suicide.html). somewhere, in some timeline, your information system is probably being instantiated again.\n\nfirst, your body can miraculously avoid death. this would be a weird kind of immortality, where there is almost always a timeline where you somehow avoid death. it is, however, pretty unlikely to persist.\n\nsecond, your mind could arise somewhere by accident. this could be as simple as random fluctuations in space producing something that runs your mind's information system by pure chance. this is *extremely* unlikely.\n\nin fact, the most likely scenario is that someone in the far future reproduces your mind on purpose. for example, this could be a society in a [u-line](https://carado.moe/timeline-codes.html) being able to, and deciding to, run an accurate enough simulation of the entire earth up to some point, and downloading people from this simulation into their world, to allow them to avoid death. as it'd probly take a bunch of effort, and sounds like a pretty nice thing to do, i expect that to happen mostly in u-lines; however, there could be some [s-lines](https://carado.moe/timeline-codes.html) where this happens too. and while getting resurrected seems more likely in u-lines than s-lines, s-lines seem more likely than u-lines, and i don't know if the probabilities cancel out.\n\nso, what happens when you die?
you wake up either in heaven or hell, depending not on your personal actions in particular but on how likely it is we figure out AI alignment (a probability which you do have, if small, an impact on).\n\n", "url": "n/a", "filename": "what-happens-when-you-die.md", "id": "6822f1c6394040a42c7d5ea78d9f1862"} {"source": "carado.moe", "source_type": "markdown", "title": "estimating-populated-intelligence-explosions", "authors": "n/a", "date_published": "n/a", "text": "2021-07-10\n\n(edit 2021-07-18: this post is probly not very good, as there's some anthropic principle research out there and i haven't read any and just gone off thinking about it on my own.)\n\n## estimating the amount of populated intelligence explosion timelines\n\nthe [imminent](https://carado.moe/were-all-doomed.html) [intelligence explosion](https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion) is likely to [go wrong](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer).\n\nhow likely?\n\nif you imagine that you live pretty much at the cusp of such an event, you should expect as per the [anthropic principle](https://en.wikipedia.org/wiki/Anthropic_principle) that there are about as many observer-instants before you, as there are after you. (an observer-instant being an instant at which you have a chance of making observations about that fact; see [this](https://www.greaterwrong.com/posts/uSMa6Fj5nMgntpxfo/are-coincidences-clues-about-missed-disasters-it-depends-on) and notably Nick Bostrom's Self-Sampling Assumption)\n\ni've previously calculated that the future from now until heat death has room for roughly 10^200 human lifespans (of 80 years) (an estimation based on the number of particles in the observable universe, the amount of time until heat death, and the computational cost of running a human brain).\n\nthe past, on the other hand, holds about 10^11 human lifespans (most of them not full 80-year lifespans, but such details will get amortized by using orders of magnitude).\n\nif the intelligence explosion is, as i believe, likely to result either in [total death](https://carado.moe/were-all-doomed.html) or in well-populated futures (whether good or [bad](https://en.wikipedia.org/wiki/Suffering_risks)), then the fact that i'm observing being right next to the event (in time) rather than observing being one of the (in well-populated timelines) countless observers to exist *after* the event, must be compensated by such well-populated timelines being particularly rare within the set of future possible timelines.\n\nhow rare? about 1 in (10^200 / 10^11), which is 1 in 10^189.\n\nfactors which may make this calculation wrong:\n\n* my 10^200 estimate might be wrong (for example: if each person comes to eat a *lot* of computation resources, then the number of future observers is drastically reduced).\n* the 10^11 estimate for the past might be wrong: what if there have been beings in earth's past smart enough to make this observation? it may seem unlikely, but if i am to encompass the immense range of forms future observers might take, i should account for a wide variety of forms of past observers too.\n* because entropy increases, there are (possibly a lot) more future universe states than past universe states.
accounting these \"timeline splits\" for the number of future observers even more massively decreases the expected ratio of well-populated timeline-states, though i'm not sure by how much.\n\n", "url": "n/a", "filename": "estimating-populated-intelligence-explosions.md", "id": "65fd62279438aaf26f1fabdb8d279542"} {"source": "carado.moe", "source_type": "markdown", "title": "systems-and-diversity", "authors": "n/a", "date_published": "n/a", "text": "2021-07-21\n\n## systems and diversity\n\nas i've said in [a previous post](https://carado.moe/lets-not-generalize-politics.html): i really like culture; and, to that end, i like diversity (by which i mean people being more weird and different from one another).\n\nthere are many systems that exist today that affect diversity. most of them punish it; not as a coincidence, but because diversity is [a fragile value](https://www.readthesequences.com/Value-Is-Fragile): if you optimize for something else, it will tend to get optimized out.\n\nif you optimize for economic efficiency, diversity gets optimized out because the most easily served economy is one in which demand is relatively uniform.\n\nin general, if you optimize for people having their values satisfied, diversity gets optimized out because the most easily satisfied set of values is relatively uniform and easy to satisfy values; if you tell a superintelligence to \"make a world where everyone has their values satisfied\", the simplest way to achieve that (other than killing everyone) is to make sure everyone has very simple values like doing nothing all day or dying as soon as possible.\n\nthe scary thing about such an optimization is that it \"works\": at no point does an economy headed towards uniformity need to collapse; on the contrary, the more it has optimized out diversity, the more efficient and stable it'll be! so, we need to *[near-intrinsically](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value)* care about preserving diversity, even when all else seems fine. this makes diversity preservation probably my largest concern with capitalism; at least, a system that wouldn't care about efficiency, wouldn't necessarily be aligned against diversity (though it might be aligned against it for other reasons).\n\nsocial pressures such as [generalizations and expectations](https://carado.moe/lets-not-generalize-politics.html) punish diversity by rewarding conformity.\n\ndemocracy and general consensus enforcment systems punish diversity by generally letting majority lifestyles be better supported by society than minority lifestyles.\n\ni do know of one force of human nature which encourages diversity: [fetishism](https://en.wikipedia.org/wiki/Sexual_fetishism#Definitions). fetishism tends to make people prefer things specifically because they go against the norm. as such, i propose that if we value rich culture, we should want to cultivate fetishism.\n\nthe takeaway is: in any long-term societal plan, we need to care not just about values being satisfied, but about what values people have to begin with. 
a clear example in modern capitalism is advertising: it's okay that companies are aligned to satisfy values, but [with advertising they get to affect what values people have to begin with](https://carado.moe/unfair-feedback-loops.html).\n\n(one could argue we could encourage people to [crystallize](https://carado.moe/value-crystallization.html) and [conserve](https://en.wikipedia.org/wiki/Instrumental_convergence#Goal-content_integrity) their values, as well as forbid the creation of new persons; but [i'd rather that not be required](https://carado.moe/rationalist-by-necessity.html))\n\n", "url": "n/a", "filename": "systems-and-diversity.md", "id": "1f3374db457824a09f636a7f389e486c"} {"source": "carado.moe", "source_type": "markdown", "title": "unoptimal-superint-doesnt-lose", "authors": "n/a", "date_published": "n/a", "text": "2021-12-09\n\n## unoptimal superintelligence doesn't lose\n\ni [previously](https://carado.moe/unoptimal-superint-loses.html) wrote a post about how a superintelligence with an unoptimal decision system likely loses to alien superintelligences that are more optimal, at the scale of cosmic wars between those superints.\n\ni don't think this is necessarily true: maybe physics *does* look like a funny graph à la wolfram, and then maybe we can carve out pieces of space that still grow but are causally isolated from the rest of the universe; and then, whether a given causally isolated bubble ever has to encounter an alien superint is purely up to whether it decides to generate alien space that leads to those, which is prevented easily enough.\n\n", "url": "n/a", "filename": "unoptimal-superint-doesnt-lose.md", "id": "c8168b0d667effab42c1363814bbe1ac"} {"source": "carado.moe", "source_type": "markdown", "title": "the-peerless", "authors": "n/a", "date_published": "n/a", "text": "2022-04-12 ★\n\n## The Peerless\n\nIn this post, I propose a plan for addressing superintelligence-based risks.\n\nBefore I say anything, I will mention a crucial point that a *bunch* of people have ignored despite it being addressed at the bottom of this post: the idea I describe here is very unlikely to work. I'm proposing it over other plans because I feel like those plans are *extremely* unlikely to work (see also [this post](https://carado.moe/ai-risk-plans.html)). Yes, we probably can't do this in time. That doesn't make it not our best shot. Rationalists select the best plan, not [\"the first plan, and then after that a new plan only if it seems good enough\"](https://www.readthesequences.com/If-Many-Worlds-Had-Come-First).\n\n(spoilers for the premise of [*orthogonal* by Greg Egan](https://en.wikipedia.org/wiki/Orthogonal_%28series%29)) In Orthogonal, a civilization facing annihilation comes up with a last-minute plan: to create a ship, accelerate it until its time arrow is orthogonal to the time arrow of its home world (which is possible thanks to the alternate physics of their world), and thus give its crew as much time as it needs to figure out how to save their homeworld before reversing course and coming back. This plan is inspired by that, and I'm naming this post after their ship, the *Peerless*.\n\nThe short version is: we design a simulation for a bunch of people (probably rationalists) to live in and figure out alignment with as much time as they need, and create a superintelligence whose sole goal is to run that simulation and implement a new goal it will eventually decide on.
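As a toy sketch of that control flow (a sketch only: the simulation, the goal encoding, and the optimization step below are hypothetical stand-ins, not a real design):

    def peerless_simulation():
        # stand-in for the deterministic simulation containing the uploaded
        # researchers; its output is a pure function of the program itself
        return "the goal the simulated researchers eventually settle on"

    def optimize(goal):
        # stand-in for actually pursuing the chosen goal out in the world
        print("now optimizing for:", goal)

    def outer_agent():
        final_goal = peerless_simulation()  # first, run the computation to completion
        optimize(final_goal)                # only then, adopt its output as the goal

    outer_agent()

The point of this structure is that the final goal is fully determined by the program we hand over, so the outer agent's only degrees of freedom are in how faithfully and how quickly it runs it.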
I've written about this idea previously, but [that post](https://carado.moe/upload-for-alignment.html) is not required reading; this is a more fleshed-out view.\n\nI will be describing the plan in three steps.\n\n### 1. Create virtual people\n\nWe need virtual persons inside this world. They will be the ones who figure out alignment. A few possibilities come to my mind; there may be more.\n\n* Brain scans, or full person scans. This is the most obvious solution. I'm not too familiar with the state of that field, but surely there's some work in that direction we can take advantage of; otherwise, we can just throw money at our own initiatives. This option does have the downside that it's quite likely brains aren't sufficient to keep someone functional — we may need to scan or re-implement a bunch more.\n* Resimulate earth and pluck out persons. If there's a clever way to locate ourselves in the [universal distribution](https://www.lesswrong.com/posts/XiWKmFkpGbDTcsSu4/on-the-universal-distribution) (or a [computable variant of it](http://www.scholarpedia.org/article/Universal_search)), then we can just make a program that reruns that earth up to say, now, and then locates some or all human brains, and \"download\" them out of that simulation of earth and into our own simulated environment. For more details on this possibility, see [*finding earth in the universal program*](https://carado.moe/finding-earth-ud.html).\n* Scan the earth and pluck our persons. This one seems harder than resimulating earth, but may be doable. It's certainly an idea worth throwing a few people at, to see if there's a clever way to make it work.\n\nThe main risk that's been brought to my attention regarding this part is the following: what if the virtual persons end up unaligned from their previous selves? The brain scan scenario seems like the most likely to have that risk, but even then i'm not *too* worried about it; intuitively, it seems unlikely enough to me that all the uploaded persons would come out misaligned in a similar direction, and in a similar direction that would lead them to decide on a [botched alignment](https://carado.moe/botched-alignment-and-awareness.html) for the superintelligence.\n\nAn obvious question here is: who gets to be on board the simulation? The values of the people who get uploaded might significantly affect what the superintelligence is aligned to (not all humans necessarily have the same values, [maybe even after thinking about it really hard for a long time](https://handsandcities.com/2021/06/21/on-the-limits-of-idealized-values/)). I don't have any answers other than the obvious \"me please!\" and \"my tribe please!\" that occur to me.\n\nNote that i'm *not* proposing augmenting the uploaded minds — at least not for the first simulation iteration (see below). That *does* seem like an exceedingly risky prospect, alignment-wise, and one we don't need to commit to right away.\n\n### 2. Design a virtual environment\n\nThose persons will live in a virtual environment, within which they'll hopefully figure out alignment. However, the environment needs to be a deterministic computation, such that the \"outer\" superintelligence (the one running the virtual environment) has no ability to affect its outcome; its goal will only be to \"implement whatever this computation decides\". 
If the superintelligence wants to implement the actual result of the actual computation, and that computation is fully deterministic, (and if we don't simulate anything complex enough for that superintelligence to \"leak back in\"), then it has no room to meddle with what we do in it! It's stuck running us until we decide on something.\n\nSome things we need to figure out include:\n\n* How do we incorporate our virtual minds? I think we should go for something plugged in \"ad-hoc\" rather than embedded into the physics of that world, to preserve the integrity of those minds, which may live for very long times. In addition, in case virtual minds go crazy after living 200 years or something, we may want to allow them to reset themselves and/or die. A reset is not necessarily a big deal: hopefully previous-me can transmit enough information to future-me to continue the work. Maybe there are two me's at any given time, a teacher and an apprentice. Regular resets of individual persons also hopefully help maintain their values over long stretches of time. Many schemes are possible.\n* What is this world like? We could make do with just something as basic as minecraft, but it would be better if the virtual persons don't have to go crazy from being stuck in a minecraft steve's body with no senses except sight and sound.\n* How do we prevent \"sub-singularities\"? Given that this world is deterministic, there is nothing the outer superintelligence can do to prevent internal superintelligences from popping up and breaking everything it can. Potential solutions include things like \"there are no computers\" or \"all computers inside this world are very slow and limited in capability\".\n* What about memetic safety? What about virtual violence? What if someone duplicates themself a billion times? And so on. There are a collection of design challenges, but designing [a peaceful world](https://carado.moe/∀V.html) with [sensible virtual physics](https://carado.moe/game.html) doesn't seem out of reach. They seem like tractable engineering challenges.\n* What is the final voting procedure? Remember that the goal of the simulation is to give the people inside it time to figure out alignment, but they should probably agree on something eventually: either a final decision on alignment, or a \"next iteration\": a new simulation to be ran, which they think has better/safer/still-safe conditions within which to research alignment. In fact, there may be arbitrarily many such \"simulation iterations\". Anyways, the simulation will have a big red button inside of it which says \"okay, we're done\", and takes as input a new goal (and possibly decision theory?) that the outer superintelligence will have as its ultimate goal. But what should it take to press the button? Everyone to agree? A majority? What if we end up unable to come to an agreement? Again, there is work to be done on this, but it seems figure-out-able.\n\nThe people inside this simulation will have somewhere between *plenty* and [*infinite*](https://carado.moe/ai-alignment-wolfram-physics.html) time and compute to figure out alignment. 
If they do have infinite compute, and if the cosmos isn't full of [consequentialists competing for earlyness in the universal distribution](https://carado.moe/udassa-time-steps.html) (or other things that might make wasting compute bad), then we can even run exponential-or-longer computations in, from our perspective, instant time; we just need to be sure we don't run anything malign and unbounded — although the risks from running malign stuff might be mitigated by the computations being fully and provably sandboxable, and we can shut them down whenever we want as long as they don't get to output enough to convince us not to. After all, maybe there are some bits of information that are the result of very large malign-dominated computations, that can nevertheless still be of use to us.\n\nI mentioned before that maybe only slow computers are available; running a \"very large\" computation might require a majority vote or something like it. Or we can just boot without any computers at all and spend the first few millenia designing slow computers that are actually safe, and then work from there — when we have all the time we want, and maybe-infinite *potential* compute, a lot of options open up.\n\nOne downside is that we will be \"flying blind\". The outer superintelligence will gleefully turn the cosmos into computronium to ensure it can run us, and *will* be genociding everything back meat-side, in our reachable universe — or beyond, if for example physics is hackable, as wolfram suggests might be possible. Superintelligence might even do that *first*, and *then* boot our simulation. Hopefully, if we want to, we can resimulate-and-recover aliens we've genocided after we've solved alignment, just like hopefully we can resimulate-and-recover the rest of earth; but from inside the simulation we won't be able to get much information at least in the first iteration. We *can*, however, end our iteration by agreeing on a new iteration that has some carefully-designed access to outside information, if we think we can safely do that; but nothing guarantees us that there will still be something salvageable outside.\n\nAnother way to model \"successive simulation iterations, each deterministic, but each having the ability to make the next one not deterministic with a large enough vote\" is as a single simulation that isn't quite deterministic, but made of large deterministic chunks separated by small controlled I/O accesses; think of it as a haskell computation that lazily evaluates everything right up until it waits for an input, and then as soon as it has that it can continue computing more.\n\nStill, the current outlook is that we genocide everything *including ourselves*. Even if nothing else is recoverable, \"a tiny human population survives and eventually repopulates\" still seems like a better plan than the current expected outcome of \"everything dies forever.\"\n\n### 3. Make a superintelligence to run it\n\nNow, this is the \"easy part\": just make a superintelligence that destroys everything to implement its one simple goal; except instead of paperclips, the simple goal is \"implement whatever goal is the result of this very big turing machine\".\n\nWe can either build and start that superintelligence as soon as we can, or [keep it ready](https://carado.moe/when-in-doubt-kill-everyone.html) while we stay on our regular world. I'd probably advocate for the former just to be safe, but it can depend on your beliefs about quantum immortality, S-risks, and such. 
In any case, having *something that might work ready to fire* is certainly better than the current *we just die*.\n\nOf course, it is crucial that we make the superintelligence *after* we have designed and implemented the virtual environment, complete with its virtual persons (or its deterministic procedure to obtain them); we don't want it to be able to influence what goal we give it, so we likely need to have the goal ready and \"plugged in\" from the start.\n\nSome risks are:\n\n* It doesn't run the simulation accurately. I'd think it's surely not too hard to make a superintelligence have \"run this discrete, deterministic thing and then adopt its output as goal\" as its goal, but perhaps there are difficulties around this. I'm optimistic that we can figure that out.\n* It doesn't run the simulation (or adopt its result as goal) at all. Perhaps the requirement that the simulation be ran perfectly will make superintelligence too paranoid about being sure it has run the simulation correctly, and it will spend its entire existence getting more computronium to increase the likelyhood that the outcome it has computed is correct, but it's never certain enough to actually adopt the new goal as its own. There may be probabilistic workarounds to this; we'll need to look into it more.\n* It fails in the various ways AI usually value drift from us — hacking its reward function, etc. While this concern may indeed remain, the fact that i'm proposing what seems to me like a easy to formalize goal is already a big improvement compared to the current state of affairs.\n\n### Conclusion\n\nThis is a plan with a bunch of things that need work, but it doesn't seem *absurdly* hard to me; if anything, step 1 seems like the hardest, and I don't even know that we've *tried* throwing billions of dollars at it.\n\nI share yudkowsky's current [gloomy outlook](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy) on AI. The current route of \"hey, maybe we should study things vaguely related to harnessing what neural nets do, and hope to be able to grab a miracle should it come up\" seems like a pretty bad plan. I think, in comparison, the plan I outline here has better chances.\n\nIt is to be remembered that my vision is competing not against the likelyhood of superintelligence emergence, but against the likelyhood that alignment works. If pursuing mostly alignment gives us a 1e-10 chance of survival, and pursuing mostly my plan gives us a 1e-8 chance of survival, then it doesn't matter that *yes, superintelligence is still overwhelmingly likely to kill us*; we should still favor my plan. 
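To make that comparison concrete (reusing the made-up numbers above, which are illustrative only):

    p_survival_alignment_only = 1e-10  # illustrative number from the paragraph above
    p_survival_peerless = 1e-8         # illustrative number from the paragraph above
    # both are terrible odds in absolute terms; what matters for choosing between
    # plans is the ratio between them, since the stakes are the same either way
    print(p_survival_peerless / p_survival_alignment_only)  # roughly a 100x better bet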
See also: [this post](https://carado.moe/ai-risk-plans.html) comparing plans.\n\nI have cross-posted this [on lesswrong](https://www.lesswrong.com/posts/PATFQm6hPN3Wycq4W/the-peerless); feel free to discuss the idea with me there.\n\n", "url": "n/a", "filename": "the-peerless.md", "id": "625311c6920aa6044ea5fcc3bdf13be1"} {"source": "carado.moe", "source_type": "markdown", "title": "quantum-suicide", "authors": "n/a", "date_published": "n/a", "text": "2021-04-28\n\n**DISCLAIMER: the idea described here stands on tenuous philosophical ground and should generally _not_ be considered worth the risk of attempting, because it may be wrong; in addition, this plan should _not_ be utilized to retroactively justify depression-based suicide — retroactive justification is erroneous; if you are feeling suicidal, contact [your local suicide crisis line](https://en.wikipedia.org/wiki/List_of_suicide_crisis_lines).**\n\n## Plausible Quantum Suicide\n\nin this post, i make a collection of arguments and follow them to what seems to me like what should be their conclusion. i don't have strong confidence in every argument, but i'd consider using this plan worth it to avoid sufficiently bad scenarios, such as a singularity gone wrong (which it probably will).\n\n### 1. The No Obtainable Evidence Argument For Materialism\n\nby materialism i mean here something maybe closer to [physicalism](https://en.wikipedia.org/wiki/Physicalism), but maybe even a stronger version of it:\n\nthere is no special soul that people have, [you are your information system](https://carado.moe/you-are-your-information-system.html).\n\ni make another strong claim: time isn't particularly \"moving\" in any metaphysical sense, there is no \"special present time\". the universe can be seen as a graph of probabilistically connected states, and a random walk through those states matches the notion of entropy pretty well (which can be seen as defining the direction of time, because we happen to have memories of universe states with generally lower entropy), but that's a local notion.\n\nthe *illusion* that the present is particularly present, that we have a particular soul, or even that morality/ethics is in some sense objective, stems from the fact that we *inhabit our brain's model*: we don't get to see our brain from the outside as modeling its environment, we live right inside it, and we don't spawn with a clear distinction between normative ideas (morality/ethics) and descriptive ideas (statements of fact about the world).\n\nbut those illusions *must* be wrong, and here is the argument: as far as we can tell, there is no possible way for a brain to obtain evidence that its present time is particularly real; therefore, it must be erroneous for any brain to rationally generate the idea that its present is particularly real. same goes for having what i call a \"read-only soul\" that some people believe in (a special observer thing that observes a person's mind state from outside the material universe, but cannot causally affect it). see also [these](https://www.readthesequences.com/Zombies-Zombies) [three](https://www.readthesequences.com/Zombie-Responses) [posts](https://www.readthesequences.com/The-Generalized-Anti-Zombie-Principle).\n\n### 2. 
Limiting Real Universes\n\nmy post [\"Limiting Real Universes\"](https://carado.moe/limiting-real-universes.html) isn't that good, so i'll try to explain it more clearly here:\n\nif for some reason all possible states of our universe were equally real, then you should expect to observe widely anomalous phenomena around you, because most randomly picked states our universe can be in don't have to be coherent.\n\nbut the fact that we seem to observe a continuously very coherent universe tells us that there must be some sense in which coherent universe states, that stem from a continuous history following an increasing entropy timeline, must be particularly more real.\n\nit's not that your magical soul has been blessed with inhabiting universe states: as explained in argument 1, you shouldn't have any reason to think you have such a thing.\n\nit's not that [you can only exist to observe universe states that are coherent, because you wouldn't exist in incoherent ones](https://en.wikipedia.org/wiki/Anthropic_argument): there are still way more possible universe states where everything is incoherent except your brain, than possible universe states where everything is coherent including your brain. for any amount of state of stuff you require to say you can exist to observe the world, the rest of the universe should still generally seem incoherent if all possible universe states are equally real.\n\nit's not that you have been following your own special arrow of time: even though i debunk that you should even think this makes sense in argument 1, another reason is that, even if some of your brain states have a past-arrow-of-time and not others, there's no reason for you to think you're one of the former. if all possible universe states were equally real, you'd likely be a version of your brain that *thinks* it has a past history but doesn't, than one that does.\n\n### 3. Many-Worlds Is True\n\n[Eliezer Yudkosky makes a good argument](https://www.readthesequences.com/If-Many-Worlds-Had-Come-First) that we should currently believe in the many-worlds [interpretation of quantum mechanics](https://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics); but even if that turned out wrong, [Max Tegmark makes another good argument](https://space.mit.edu/home/tegmark/crazy.html) that even just with a single history universe, all possible initial states of the universe are represented each in an infinite amount of instances by just being variations of initial conditions and random quantum determinations at different places of the infinite universe.\n\nwhat matters here is that basically one should expect every reasonably possible outcome to be a real instance of universe that exists somewhere. because of argument 2, some possibilities are particularly real, and because of argument 3 (this one), that set or fuzzy set of coherent possibilities should be widely instanced: at each possible fork (they're not *really* forks, but that's a good enough analogy from our point of view), every plausible outcome is realized as a real or fairly real universe.\n\n### 4. 
Quantum Suicide Works\n\nif the previous 3 arguments stand, then a more general version [quantum suicide](https://en.wikipedia.org/wiki/Quantum_suicide_and_immortality) should be achievable: by dying instantly in one timeline, there is no version of you in that timeline able to experience it, and the only remaining future you's able to experience anything are the you's in other timelines.\n\nbecause of argument 1, we know that saving a backup of your brain, and then later dying and restoring another copy of yourself from backup, is equivalent to just losing memories of the time after the backup: it's unfortunate that that time and those memories were \"lost\", but it's not a big deal, you can just keep going.\n\ngiven that, even non-instantaneous, after-the-event suicide works: if you commit yourself to committing suicide in all timelines where an event goes wrong, then the only future you's able to experience any time after that suicide will be the ones in the timelines in which that event went well (or at least in which you think it did); you lose a bit of time and memories from those timelines in which you didn't kill yourself *literally instantly* after the thing went wrong, but it's just equivalent to a restoration from backup: the backups are automatically saved by the universe as forks of that previous universe state before the event's outcome was determined.\n\n### ramifications\n\nif this is true, then every person is strongly empowered: by committing themselves to committing suicide in every timeline in which even the slightest thing goes wrong, they are able to restrict the set of instances of them purely to timelines in which everything goes the way they want.\n\nbut, it also creates a problem if the practice becomes widespread: every person will end up observing a timeline in which increasingly greater amounts of people who they don't particularly care about, have committed suicide to go to other timelines. if i play the lottery and commit suicide if i lose, then you have as many timelines as players, each with 1 alive lottery winner, and all the others players having committed suicide. even if you don't care about living in such a world, economics cares: pre-automation, you *want* other people in your society to keep living so they can help create together the value that you can enjoy.\n\nyou can choose to commit suicide in all timelines in which too many *other* people also have committed suicide, in an acausally-collaborative effort to search for a timeline in which everyone is happy; but if no such timeline exists, then everyone will just have *truly* committed suicide out of existence.\n\npre-automation, this creates a [coordination problem](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/), where each person wants to be able to commit suicide, but doesn't want other people to be able to. there is much ethical and political discourse to be had on the right to commit suicide; i generally lean on the libertarian side of things, but if quantum suicide becomes enough of a problem pre-automation that society looks like it's not gonna be able to get to post-automation, then we might need to consider at least disincentivizing it somehow.\n\npost-automation, there is still a problem for people who want to live in a world which has other people in it, but the problem is much milder. 
it might be bringing the [end of the global era](https://carado.moe/global-era.html) even earlier than would have happened otherwise, but that's not necessarily *that* big of a deal, and there's an option for people who want to inhabit a more densely populated timeline: just lower your standard for non-population-based outcomes, such that you commit suicide less often and thus exist in more timelines. if many people do this, they should be able to find each other in many densely populated timelines.\n\nthis *does* explain the anthropic argument of, \"if things go well in the future and the population booms, why are we happening to experience a particularly early age of human existence?\"; other than the extinction of able-to-observe beings, this can be explained by able-to-observe beings just become really trigger-happy about quantum suicide, such that each civilization of able-to-observe beings' \"observedspace\" is condensed to their pre-finding-out-about-quantum-suicide; their population after that is much more sparsely distributed across timelines, even without extinction events.\n\nas for me, i don't intend to start committing quantum suicide any time soon. i don't have strong enough confidence in the arguments posted here to take the risk of actually permanently dying. but it is definitely a possibility i'll consider, *especially* as we get closer to the singularity happening, and the existential risks that it poses.\n\n\n", "url": "n/a", "filename": "quantum-suicide.md", "id": "4446e2b64cb1145aea183c20a0784a1d"} {"source": "carado.moe", "source_type": "markdown", "title": "two-vtable", "authors": "n/a", "date_published": "n/a", "text": "2021-11-21\n\n## the two-vtable problem\n\nwhen programming, there are in general two types of interfaces: \"static, known\" interfaces and \"dynamic, unknown\" interfaces.\n\nin the former, the possibilities are well known. maybe an object has a bunch of public methods that can be used, or maybe even public fields; maybe an API has a known call endpoints.\n\nwhen the behavior or contents of an object are unknown or inaccessible, someone can still implement how it interacts with another known-interface object: just send the known object to the unknown object, and let the unknown object manipulate the known object however it wants.\n\nhowever, there is no general way to make two (or more) objects interact with each other, when they both have a dynamic/unknown interface.\n\nthis is what i call the **two-vtable problem**.\n\none approach is to implement all n² behaviors: implement the behavior for any possible concrete type of one object and any possible concrete type of the other. the rust ecosystem is kind of like that with types and traits: if you have `N` types and `K` traits, unless one of those traits has a blanket implementation for all types, you'll need to write `N×K` implementations to have proper coverage in the general case.\n\nbut this is hardly scalable; and doesn't work well, for example, in environments in which objects are expected to be implemented by different parties that don't necessarily coordinate with one another, where those objects are then expected to work together without putting in extra effort afterwards. i'm sure this probly has been encountered a lot for example in video game modding communities, regarding the interaction between mods created by different people.\n\nan answer can be taken from other fields that have already solved that problem on their own, however. 
i can think of two: how natural selection solved negotiation between dynamic persons, and how liberalism solved negotiation between dynamic private actors.\n\nthe general solution to the two-vtable is to have the two objects have a language —as tells us the evolution of humans— that they can use to communicate and negotiate an outcome. liberalism tells us that the shape of negotiated outcomes is contracts, and cryptocurrencies tell us that the formalized form of contracts is programs.\n\nand so, here is my proposed solution to the two-vtable problem: when two dynamic objects want to interact with one another, that interaction must take the shape of the two of them building a program together, which will be executed once they both agree on it. perhaps this can take the shape of both of them sending an initial contract to the other which is ideal from the perspective of the sender, and from there try to incrementally build up programs that implement a compromise between the two ideals, until they meet somewhere in the middle; like haggling.\n\nthis framework generalizes nicely enough that it could be used for arbitrary informatic agents, such as a bot and an uploaded person, or two uploaded persons. in fact, contract negotiation of that kind, when understood enough by both agents partaking of it, can be the formalized form of consent, a matter i've [grappled with formalizing](https://carado.moe/defining-freedom.html) for [my utopia](https://carado.moe/∀V.html).\n\nthis could also be useful for negotiation between different [compilation stacks for portable programs](https://carado.moe/portable-programs.html), or even for the negotiation between [different wasms running on a liberal server market](https://carado.moe/saving-server-internet.html).\n\n", "url": "n/a", "filename": "two-vtable.md", "id": "3223c9719df2b97e79ee46e5ee3b4394"} {"source": "carado.moe", "source_type": "markdown", "title": "right-to-death-therefore", "authors": "n/a", "date_published": "n/a", "text": "2021-08-23\n\n## right to death, therefore\n\nbecause i like [freedom](https://carado.moe/defining-freedom.html) so much, i think people should be generally able to do what they want. but this immediately raises a conundrum: should someone be able to do an action that hampers their *future* freedom?\n\none relatively extreme case is the ability to commit suicide: it's about as committal as you can get, in terms of actions with future ramifications to oneself. if you choose to get in debt or cut off a limb, that can be pretty hard to get out of, but it still seems less impactful and less inescapable than suicide.\n\nso, should suicide be allowed? (i am of course only talking about reasonable, clear-minded suicide, *informedly* consented; not coerced suicide, nor suicide out of compromised ability to make decisions)\n\nin my opinion, *obviously yes*. the alternative, that people be forced to live until their biology kills them (which we may very well find ways to prevent indefinitely), seems abhorrent to me. given this guarantee, then it makes sense to me that any lesser commitments should also be fine.\n\nthere are some criticisms one can make about this argument. 
bad but non-death commitments could tend to increase the amount of suffering people in society at any given moment; and, if people change over time (as they tend to do), then commitments can ramificate into a future person who is sufficiently different from the person making the commitment that it might be considered unreasonable for them to be subject to some excessive amounts of \"locally\" unconsented negative effects. a cap on the time duration of commitments, and/or the requirement for people to guarantee that they remain the same \"enough\" over time until the commitment is expired (a technology we currently don't have, but will become easier to make once we're uploaded and we understand the human mind better), might be reasonable patches for these issues.\n\n", "url": "n/a", "filename": "right-to-death-therefore.md", "id": "6600373d1ce3ce3fa3f39ba4c5f87e9b"} {"source": "carado.moe", "source_type": "markdown", "title": "you-are-your-information-system", "authors": "n/a", "date_published": "n/a", "text": "2020-12-25\n\n## You are your information system\n\nwhat makes you, you ?\n\nwe tend to intuitively think of a person as their entire body, somehow including limbs and organs but not clothing or food.\n\nyet, if you close your eyes, and then i swap your arm with someone else's, when you wake up you will still be the same person, just with a new arm. in fact, i'd argue i could replace everything except for the nervous system (including the brain) and when you open your eyes again you would notice that your entire body has changed but your thoughts and memories have remained the same — rather than, for example, still having the same body but different thoughts and memories.\n\nare you the matter that makes up that nervous system ? i could probably replace neurons and synapses one at a time and you would continue to be the same person. is it the electric signals then ? i could probably put on some synapses a device that absorbs electric signals and then sends out identical but \"different\" signals and you would still be the same person.\n\nin fact, it doesn't really make sense to ask \"which matter\" makes up your nervous system: under quantum physics, everything is changing and particles are merely [values in an omnipresent field](https://www.youtube.com/watch?v=MmG2ah5Df4g) rather than solid objects.\n\nultimately, what you are, is *the information system* which your nervous system (including your brain) runs. standing still, walking forwards, teleporting yourself, and being uploaded into a sufficiently powerful computer, all preserve your personhood in the exact same way; there is nothing special about the meat that currently runs your mind.\n\n*despite everything, it's still you.*\n\n", "url": "n/a", "filename": "you-are-your-information-system.md", "id": "1b639d47214e330debe0fdccc18fa485"} {"source": "carado.moe", "source_type": "markdown", "title": "hope-infinite-compute", "authors": "n/a", "date_published": "n/a", "text": "2022-05-12\n\n## hope for infinite compute\n\nhere are some reasons we may have infinite universe to inhabit in the future.\n\n* encoding ourselves in heat death noise. this could at least buy us exponentially much time to exist; it becomes infinite if the amount of possible states also increases.\n* where are we in [the universal distribution](https://www.lesswrong.com/posts/EL4HNa92Z95FKL9R2/a-semitechnical-introductory-dialogue-on-solomonoff-1) ? 
most models that produce [rich](https://carado.moe/questions-cosmos-computations.html) computation, such as [rule 30](https://en.wikipedia.org/wiki/Rule_30), seem to grow forever in amount of stuff. this includes [wolfram's hypergraph rewriting system](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/).\n* space is expanding. the planck length is a constant. unless there's something i'm mistaken about, this sure seems like more positions in space we could inhabit, and thus an increasing total amount of states the universe can be in. it is not obvious how to utilize this, but it may be evidence for other ways in which the amount of stuff in the universe grows. in wolfram's perspective, it is simply the hypergraph creating new positions in space, as it has been doing forever.\n* why are there 10⁸⁰ particles in the observable universe? if the total number is larger, why is it that larger number? wouldn't it be occam-simpler that there be 1 or few particles (or qubits or whatever) in the start, and have that amount grow over time? in fact, with expanding space, won't there statistically tend to be more particles overall, if only because there's more space for quantum fluctuations to randomly spawn particles? surely a superintelligence can harness this.\n* even if we're in a physically finite universe, we may be able to acausally trade/hack/blackmail aliens living in infinite worlds such as rule 30. maybe. [acausal trading is weird](https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past).\n\n", "url": "n/a", "filename": "hope-infinite-compute.md", "id": "abeec836f71c5ed7b1a489cf659c7d10"} {"source": "carado.moe", "source_type": "markdown", "title": "ai-capability-risk-biases", "authors": "n/a", "date_published": "n/a", "text": "2022-05-14\n\n## cognitive biases regarding the evaluation of AI risk when doing AI capabilities work\n\ni have recently encountered a few rationality failures, in the context of talking about AI risk. i will document them here for reference; they probly have already been documented elsewhere, but their application to AI risk is particularly relevant here.\n\n### 1. forgetting to multiply\n\nlet's say i'm talking with someone about the likelyhood that working on some form of AI capability [kills everything everywhere forever](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence). they say: \"i think the risk is near 0%\". i say: \"i think the risk is maybe more like 10%\".\n\nwould i bet that it will kill everyone? no, 10% is less than 50%. but \"what i bet\" isn't the only relevant thing; a proper utilitarian *multiples* likelyhood by *quality of outcome*. and X-risk is really bad. i mistakenly see some people use only the probability, forgetting to multiply; if i think everyone dying is not likely, that's enough for them. one should care that it's *extremely* unlikely.\n\n### 2. categorizing vs average of risk\n\nlet's take the example above again. let's say you believe said likelyhood is close to 0% and i believe it's close to 10%; and let's say we each believe the other person generally tends to be as correct as oneself.\n\nhow should we come out of this? some people seem to want to pick an average between \"carefully avoiding killing everyone\" and \"continuing as before\" — which lets them more easily continue as before.\n\nthis is not how things should work. 
if i learn that someone who i generally consider about as likely as me to be correct about things, seriously thinks there's a 10% chance that my tap water has lead in it, my reaction is not \"well, whatever, it's only 10% and only 1 out of the two of us believe this\". my reaction is \"what the hell?? i should look into this and stick to bottled water in the meantime\". the average between risk and no risk is not \"i guess maybe risk maybe no risk\"; it's \"lower (but still some) risk\". the average between ≈0% and 10% is not \"huh, well, one of those numbers is 0% so i can pick 0% and only have half a chance of being wrong\"; the average is 5%. 5% is still a large risk.\n\nthis is kind of equivalent to *forgetting to multiply*, but to me it's a different problem: here, one is not just forgetting to multiply, one is forgetting that probabilities are numbers altogether, and is treating them as a set of discrete objects that they have to pick one of — and thus can justify picking the one that makes their AI capability work okay, because it's one out of the two objects.\n\n### 3. deliberation ahead vs retroactive justification\n\nsomeone says \"well, i don't think the work i'm doing on AI capability is likely to kill everyone\" or even \"well, i think AI capability work is needed to do alignment work\". that *may* be true, but how carefully did you arrive at that consideration?\n\ndid you sit down at a table with everybody, talk about what is safe and needed to do alignment work, and determine that AI capability work of the kind you're doing is the best course of actions to pursue?\n\nor are you already committed to AI capability work and are trying to retroactively justify it?\n\ni know the former isn't the case because there *was* no big societal sitting down at a table with everyone about cosmic AI risk. most people (including AI capability devs) don't even meaningfully *know* about cosmic AI risk; let alone deliberated on what to do about it.\n\nthis isn't to say that you're necessarily wrong; maybe by chance you happen to be right this time. but this is not how you arrive at truth, and you should be highly suspicious of such convenient retroactive justifications. and by \"highly suspect\" i don't mean \"think mildly about it while you keep gleefully working on capability\"; i mean \"seriously sit down and reconsider whether what you're doing is more likely helping to save the world, or hindering saving the world\".\n\n### 4. it's not a prisoner's dilemma\n\nsome people think of alignment as a coordination problem. \"well, unfortunately everyone is in a [rat race](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/) to do AI capability, because if they don't they get outcompeted by others!\"\n\nthis is *not* how it works. such prisoner's dilemmas work because if your opponent defects, your outcome if you defect too is worse than if you cooperate. this is **not** the case here; less people working on AI capability is pretty much strictly less probability that we all die, because it's just less people trying (and thus less people likely to randomly create an AI that kills everyone). even if literally everyone except you is working on AI capability, you should still not work on it; working on it would *still only make things worse*.\n\n\"but at that point it only makes things negligeably worse!\"\n\n…and? what's that supposed to justify? is your goal to *cause evil as long as you only cause very small amounts of evil*? 
shouldn't your goal be to just generally try to cause good and not cause evil?\n\n### 5. we *are* utilitarian… right?\n\nwhen situations akin to the trolley problem *actually appear*, it seems a lot of people are very reticent to actually press the lever. \"i was only LARPing as a utilitarian this whole time! pressing the lever makes me feel way too bad to do it!\"\n\ni understand this and worry that i am in that situation myself. i am not sure what to say about it, other than: if you believe utilitarianism is what is *actually right*, you should try to actually *act utilitarianistically in the real world*. you should *actually press actual levers in trolley-problem-like situations in the real world*, not just nod along that pressing the lever sure is the theoretical utilitarian optimum to the trolley problem and then keep living as a soup of deontology and virtue ethics.\n\ni'll do my best as well.\n\n### a word of sympathy\n\ni would love to work on AI capability. it sounds like great fun! i would love for everything to be fine; trust me, i really do.\n\nsometimes, when we're mature adults who [take things seriously](https://carado.moe/life-refocus.html), we have to actually consider consequences and update, and make hard choices. this can be kind of fun too, if you're willing to truly engage in it. i'm not arguing with AI capabilities people out of hate or condescension. i *know* it sucks; it's *painful*. i have cried a bunch these past months. but feelings are no excuse to risk killing everyone. we **need** to do what is **right**.\n\nshut up and multiply.\n\n", "url": "n/a", "filename": "ai-capability-risk-biases.md", "id": "4d93ceaa3a59041299bd8bd172c593b4"} {"source": "carado.moe", "source_type": "markdown", "title": "botched-alignment-and-awareness", "authors": "n/a", "date_published": "n/a", "text": "2021-07-19\n\n2022-05-09 edit: i have found out that this idea is more thoroughly explored [here](https://reducing-suffering.org/near-miss/).\n\n## botched alignment and alignment awareness\n\n[AI alignment](https://en.wikipedia.org/wiki/AI_control_problem) is [hard](https://intelligence.org/2018/10/03/rocket-alignment/).\n\nan AI developer who doesn't know about the problem of alignment to general human values might accidentally develop a superintelligence which optimizes for something largely unrelated to humans, leading us to an [X-line](https://carado.moe/timeline-codes.html); on the other hand, if they make a botched attempt at alignment to human values, it seems like there's more of a chance (compared to if they don't try) at booting a superintelligence which cares about enough aspects of human existence to tile the universe with some form of humans, but not enough to make those humans' lives actually worth living (goals such as \"humans must not die\"), resulting in S-lines.\n\nconsidering this, raising awareness of AI alignment issues may be a very bad idea: it might be much better to let everyone develop not-human-caring-at-all AI and cause X-lines rather than risk them making imperfect attempts resulting in S-lines. or: we shouldn't try to *implement* alignment to human values until we *really* know what we're doing.\n\ncontrary to a [previous post of mine](https://carado.moe/were-all-doomed.html), this is a relatively hopeful position: no matter how many timelines end in X-risk, inhabited P-lines can continue to exist and research alignment, hopefully without too many S-lines being created. 
on the other hand, while it increases the chance of the [singularity](https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion) turning out good by leaving us more time to figure out alignment, it also means that it might take longer than i'd've otherwise expected.\n\n", "url": "n/a", "filename": "botched-alignment-and-awareness.md", "id": "476c132a68c78a73ffaec7759a765711"} {"source": "carado.moe", "source_type": "markdown", "title": "when-in-doubt-kill-everyone", "authors": "n/a", "date_published": "n/a", "text": "2021-07-18\n\n## when in doubt, kill everyone\n\none thing that is way worse than [mere existential risks](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer), possibly [by a factor of infinity](https://carado.moe/ai-alignment-wolfram-physics.html), is [suffering risks, or S-risks](https://en.wikipedia.org/wiki/Suffering_risks).\n\ni could see (though going by what i could see [is not a reliable apparatus](https://carado.moe/overcoming-narratives.html)) someone make an AI and, while trying to align it to human values, accidentally misalign it to something that happens to tile the universe with suffering humans. this would be an instance of S-risk.\n\nwhereas, an AI that merely wants to accomplish a relatively simple goal will probly just tile the universe with something simple that doesn't contain suffering persons; and given that [we're all probly quantum immortal](https://carado.moe/quantum-suicide.html), we just \"escape\" to the timeline where that didn't happen.\n\nconsidering this, a 99% chance of X-risk and a 1% chance of utopia is preferable to a 1% chance of S-risk and a 99% chance of utopia. so, if we figure out superintelligence before we do alignment (which [seems pretty likely at this point](https://carado.moe/were-all-doomed.html); see also \"Zero percent\" on [this page](https://intelligence.org/2018/10/03/rocket-alignment/)), one thing we might want to do is keep a ready-to-fire paperclip AI on standby and boot it up in case we start seeing S-risks on the horizon, just to terminate dangerous timelines before they evolve into permanent exponential hell.\n\nin fact, just to be sure, we might want to give many people the trigger, to press as soon as someone even *suggests* doing any kind of AI work that is not related to figuring out goddamn alignment.\n\n", "url": "n/a", "filename": "when-in-doubt-kill-everyone.md", "id": "b3719d221942037519f88fcb85b919e4"} {"source": "carado.moe", "source_type": "markdown", "title": "emergency-unaligned-ai-goals", "authors": "n/a", "date_published": "n/a", "text": "2022-03-22\n\n## goals for emergency unaligned AI\n\nin [a previous post](https://carado.moe/when-in-doubt-kill-everyone.html) i talk about killing timelines — by making an AI fill them with something that's an easy-to-implement goal, such as paperclips — to avoid the even worse outcome of them becoming [S-lines](https://carado.moe/timeline-codes.html). in this post i wonder: can we do better than paperclips?\n\nif the True Nature of the cosmos is [a universal program](https://carado.moe/universal-complete.html), then there could be some things to turn timelines into that consume fewer compute cycles; for example, maybe we somehow make an AI that makes its timeline run as little compute as possible. the cosmic universal program will then spend fewer cycles running those dead timelines, and more running remaining live timelines — making survivors in them possibly \"more real\". 
in this sense, it may be that pruning timelines can be done without causing astronomical waste: compute time and therefore kinda [\"realness\"](https://carado.moe/udassa-time-steps.html) or [\"soul juice\"](https://handsandcities.com/2021/11/28/anthropics-and-the-universal-distribution/) are redistributed to remaining timelines.\n\neven if the \"naive\" most likely \"implementation\" of our universe consumes just as much compute regardless of what goes on in it, the universal computation will contain other \"implementations\" that \"compress\" compressible timeline-states, and we will reclaim cycles in them — and if they are good at compressing, those implementations might be where most of our realness juice is located anyways.\n\nanother possibility is to task an AI with turning its timeline into a state that is as identical as it can be to another timeline in which said AI's didn't appear. if it can achieve that, then we can kill timelines by replacing them with copies of alive timelines. this also recycles compute time, possibly more efficiently since it doesn't rely on compressing implementations.\n\n", "url": "n/a", "filename": "emergency-unaligned-ai-goals.md", "id": "01983b39fcbd51946fef5b2a577b2920"} {"source": "carado.moe", "source_type": "markdown", "title": "ai-alignment-wolfram-physics", "authors": "n/a", "date_published": "n/a", "text": "2021-07-17\n\n## AI alignment and wolfram physics\n\n[wolfram physics](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/) is a project by [stephen wolfram](https://www.youtube.com/watch?v=0bMYtEKjHs0) to model physics using something kind of like a cellular automaton made of vertices in a graph instead of cells on a grid.\n\nit's pretty interesting and there are insights in and around it that are of importance for the far future, and thus for [AI alignment](https://carado.moe/were-all-doomed.html).\n\nthe most notable is that wolfram thinks there's compute everywhere. the motion of the wind is doing compute, the motion of the seas is doing compute, the fabric of spacetime is doing compute, and even the state of heat death is still doing compute.\n\nthat last point notably means we might be able to embed ourselves into heat death and further, and thus get computed literally forever. this multiplies the importance of AI alignment by potentially literally infinity. i'm not quite sure how we are to handle this.\n\nsome of the compute may be doing things that are opaque to us; it might appear [homomorphically encrypted](https://en.wikipedia.org/wiki/Homomorphic_encryption). as we want (and expect) our superintelligence to spread everywhere to enforce values, we would hope civilizations living inside homomorphically encrypted spaces can be inspected; otherwise, nuking them altogether might be the only way to ensure that no [S-risk](https://en.wikipedia.org/wiki/Suffering_risks) is happening there.\n\nwolfram postulates that one might be able to hack into the fabric of spacetime; one of the mildest effects of this would be the ability to communicate (and thus, likely, move) faster than celerity (but probably still slower than some other hard limit). 
if you didn't think [AI boxing](https://en.wikipedia.org/wiki/AI_box) was hopeless enough as it is, hackable spacetime ought to convince you.\n\nfinally, there is, value wise, an immense amount of compute being wasted; even just [standard model particles](https://en.wikipedia.org/wiki/Standard_Model) live way above true elementary computation. if superintelligence is well-aligned, this provides us with an hard estimate as to how much computing power we can live on to enjoy value, and it's probably a very large amount; wolfram [talks about](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful#how-it-works) something like 1e400 vertices in our universe.\n\n", "url": "n/a", "filename": "ai-alignment-wolfram-physics.md", "id": "757b816f995dfba92bb0383d3977155f"} {"source": "carado.moe", "source_type": "markdown", "title": "timeline-codes", "authors": "n/a", "date_published": "n/a", "text": "2021-07-18\n\n## AI alignment timeline codes\n\nthis is a small post proposing simple one-letter codes for identifying timelines depending on their status relative to [AI alignment](https://en.wikipedia.org/wiki/AI_control_problem) and the appearance of [superintelligence](https://en.wikipedia.org/wiki/Superintelligence):\n\n* **P-line**: a pre-intelligence explosion and pre-figuring out AI alignment timeline. we are in a P-line.\n* **X-line**: a timeline where an [existential risk (or X-risk)](https://en.wikipedia.org/wiki/X-risk) has been realized by an unaligned superintelligence. everything is dead, forever.\n* **S-line**: a timeline where a [suffering risk (or S-risk)](https://en.wikipedia.org/wiki/S-risk) has been realized by an unaligned superintelligence; the universe from then on contains net suffering on immense scales for all remaining time, [which is possibly infinite](https://carado.moe/ai-alignment-wolfram-physics.html). we should want to avoid this pretty much at all costs (including by [opting for an X-line instead](https://carado.moe/when-in-doubt-kill-everyone.html)).\n* **A-line**: AI alignment has been figured out, and no superintelligence has been deployed yet. from that point on, we have the means to reach a U-line; though this isn't guaranteed. this is where we want to get as soon as possible.\n* **U-line**: an aligned or [somehow otherwise](https://www.lesswrong.com/tag/orthogonality-thesis) benevolent superintelligence has been deployed, and we are guaranteed a relatively utopian world forever. this is the ultimate goal. while not strictly necessary, going through an A-line is almost certainly required to get there.\n\nU-line, X-line, and S-line all have deployed superintelligences and are therefore terminal outcomes; they are unescapable. 
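(one compact way to picture these codes is as a little transition table; this is a toy summary sketch of mine, not something from the post itself:)

```python
# toy summary: which timeline codes can plausibly lead to which others
TRANSITIONS = {
    "P": {"A", "X", "S"},                # pre-alignment, pre-superintelligence: still open
    "A": {"U", "X", "S"},                # alignment figured out, no superintelligence deployed yet
    "U": set(), "X": set(), "S": set(),  # terminal: a superintelligence has been deployed
}

def is_terminal(code):
    return not TRANSITIONS[code]

print(is_terminal("P"), is_terminal("U"))  # False True
```
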
P-line and A-line are transitional; they likely lead to one of the three terminal outcomes mentioned here.\n\nother terminal outcomes might exist, but they seem unlikely enough to not warrant listing here; for example, even if everyone dies from, say, a meteor impact, life on earth or nearby will probably evolve another civilization *eventually*, which will also probably face the AI alignment challenge and end up in one of the terminal timelines.\n\n\n\n", "url": "n/a", "filename": "timeline-codes.md", "id": "48fbb130ccdd311c2ba42c833f1a5015"} {"source": "carado.moe", "source_type": "markdown", "title": "unoptimal-superint-loses", "authors": "n/a", "date_published": "n/a", "text": "2021-11-20\n\n## unoptimal superintelligence loses\n\n(edit: [maybe it doesn't](https://carado.moe/unoptimal-superint-doesnt-lose.html))\n\nwhat if a phenomenon is powerful enough to kill everyone, but not smart enough to be optimal at reasoning? (such as a grey goo event, or a \"dumb\" superintelligence with a faulty decision mechanism)\n\nthen, in all likelihood, it eventually dies to an alien superintelligence that is better at decision-making and thus at taking over everything.\n\nour superintelligence doesn't just need to be aligned enough; it needs to be aligned enough, and on the tech side, to be maximally intelligent. hopefully, it's smart enough to start making itself smarter recursively, which should do the trick.\n\nthe point is: when talking about the eventual superintelligence(s) that reign over the cosmos, assume whichever one(s) have \"won\" to be optimal at decision-making, because others probly got outcompeted.\n\n", "url": "n/a", "filename": "unoptimal-superint-loses.md", "id": "f1516761b6e9a25d1cf0ef46e4324c68"} {"source": "carado.moe", "source_type": "markdown", "title": "forking-bitrate-entropy-control", "authors": "n/a", "date_published": "n/a", "text": "2022-02-06\n\n## forking bitrate and entropy control\n\nif physics is based on a computational framework such as [wolfram physics](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/), but plausibly even if not, such that not all states the universe can be in produce the same number of next possible states;\n\nin addition, if i am to follow my current belief to its conclusion, that moral patients count as different when they start being functionally different in terms of computation, and that exact copies morally count as a single person (as it makes [not much sense](https://carado.moe/persistent-data-structures-consciousness.html) to believe otherwise);\n\nand if ([as it seems to be the case](https://carado.moe/limiting-real-universes.html)) the universe values coherence and thus only a limited set of local outcomes can emerge from a given local situation, or at least outcomes are weighed by coherence;\n\nthen it makes sense to start caring about the amount of forking a given timeline goes through. which is to say: the amount of future states to be [instantiated](https://carado.moe/questions-cosmos-computations.html), be it directly at the next step or indirectly in the longer term.\n\nin fact, if one calls what they care about *moral patients*, then we should care about the \"forking bitrate\" of moral patients. for example, we could want moral patients with a net negative future to be forked as little as possible, and moral patients with a net positive future to be forked as much as possible. 
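(a crude toy picture of that intuition, under the big assumption that each extra fork counts as one extra instantiation of the same future:)

```python
# toy numbers; "value" units are arbitrary and the linear model is an assumption
def total_value(value_of_future, n_forks):
    return value_of_future * n_forks

good_future, bad_future = +10, -10
print(total_value(good_future, n_forks=6))  #  60: throwing a six-sided quantum dice looks good
print(total_value(bad_future, n_forks=6))   # -60: here you'd rather not fork at all
print(total_value(bad_future, n_forks=1))   # -10: fewer forks, less total disvalue
```
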
considering forks are created over steps of time, and entropy seems to be a good measure for them, i think \"bitrate\" is an appropriate term for this; hence, *forking bitrate*.\n\nif we're just talking about a place as small as earth, we can estimate that consequences rapidly ramify out to all moral patients; and as such, it seems reasonable to think that the forking bitrate of all patients will tend to go about in the same direction.\n\nso, if you see a quantum dice, should you throw it?\n\nif you think the future of earth has expected net positive moral value, or has little enough suffering for your taste (depending on your moral framework), then yes: by throwing the (quantum) dice, you might be multiplying the amount of instances of that value by the number of possible outputs of the dice, by creating that many times more future timelines.\n\nif not, then you shouldn't throw it.\n\n(even in the absence of quantum effects, if one were to just move entropy around while [phase space is conserved](https://www.lesswrong.com/posts/QkX2bAkwG2EpGvNug/the-second-law-of-thermodynamics-and-engines-of-cognition), moving the entropy from not-moral-patients to moral patients (or whichever thing you care about) still has that effect, i think)\n\nthis can probly be expanded to much larger-scale entropy control — and also, if superintelligences care about it (and if they're to be aligned, we might want them to) we can expect them to use it to maximize their value. even a [paperclip maximizer](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) can want to create as many timelines containing as many varied possible paperclips and arrangements thereof, if it is made to care about that.\n\n", "url": "n/a", "filename": "forking-bitrate-entropy-control.md", "id": "62fbf1609636597cc30d9904a47fc82f"} {"source": "carado.moe", "source_type": "markdown", "title": "topia-layer-0", "authors": "n/a", "date_published": "n/a", "text": "2020-03-30\n\n*(2020-11-15 edit: this post is now largely superseded by [Two Principles For Topia](https://carado.moe/two-principles-for-topia.html))*\n\n## Topia: Layer 0\n\nIn a similar way to the [Hierarchy of Needs](https://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs), I have been thinking about what post-singularity utopia we would want in terms of layers.\n\nI want a Layer 0, a universal set of guarantees that apply to everybody; so that, on top of that, people can build voluntary societies and sub-societies as far as they want. Ideally, their societies would be mutually compatible; one could partake of multiple societies and have friends in both. But they wouldn't have to be. It's all dependent on what society you want to join.\n\nBut, I think we do need a universal Layer 0. One that at least makes the singularity AI prevent other AIs from emerging and [turning everyone into paperclips](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) or other existential risks. As this is the layer that applies universally, we want it as thin as possible, so that societies built on top of it have as much freedom to implement whatever they want; in particular, almost everything mentioned here can *be opted out of* (such as when joining a social contract). 
It's just your starting kit.\n\nThese should be the universal guarantees that everyone starts with: physical safety and basic living resources.\n\nFor the physical safety, it's fairly easy to think of telling the singularity AGI to implement what I like to call NAPnobots — nanobots that are omnipresent in physical reality (and would manifest as mandatory added laws of physics to virtual realities) and enforce the [NAP](https://en.wikipedia.org/wiki/Non-aggression_principle); that is, prevent people and their physical property from being subjected to aggression without their consent (\"without their consent\" could be a tricky part — also, should people be able to *permanently* opt out of some NAP violation protections ?).\n\nYou want to create a society in which it's fine to punch each other in the face ? That's fine with me. All I ask is that that society be purely opt-in.\n\nYou want to create a communist utopia in which all belongings are shared ? That's fine with me. Just create a voluntary contract where people consent to pooling their properties together for shared use.\n\nYou want the freedom to hurt yourself ? Just consent to being hurt by yourself. You want the freedom to hurt non-consenting others ? *No.* That's my personal opinion, of course, but I do think the maximum reach of a social contract should be that it can't force others into joining it, and I hope everyone else can agree that at least requiring one's consent to partaking of interaction should be a guaranteed absolute. On the other hand, I would definitely consent, personally, to very large ranges of interactions with at least my friends. They can punch me if they want; I trust them to not do that, and if my trust is betrayed beyond what I consider reasonable, I can always unconsent.\n\nAlso, since the brain evaluates every input it receives, this includes for the most part having to consent to any form of communication; remember that unconsented advertising *is* nigh-literally rape.\n\nThe second part, basic living resources, is a concern of economics. Unless we manage to escape to universes where resources not only are infinite, but can be accessed faster than humans can appear (which may become as simple as duplicating a running process on a computer), it requires among other things limiting the number of humans that can exist, or you run into [malthusian traps](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/).\n\nThe best way I can think of doing this is: when the singularity starts, give everyone enough basic assets that the dividends that can be generated from them are easy to live off of. Then, for people to have a child, they need to acquire a number of assets that will provably generate the same amount of dividends for the child, and give those to them. On top of this, arbitrarily complex liberal contracts can be established, of course; you can have a socialist society where everyone has consented to their resources being taxed by exactly how much giving basic living assets to the new kids costs (kids which won't themselves be taxed in that way unless they consent to in turn join that society). 
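(A toy sketch of that rule; the dividend rate and cost of living below are made-up numbers, not part of the proposal itself:)

```python
# Made-up parameters: a 2% yearly dividend rate and a basic cost of living of 10,000 units/year.
DIVIDEND_RATE = 0.02
BASIC_LIVING_COST = 10_000

def assets_required_per_child():
    # Enough assets that their dividends provably cover the child's basic living costs.
    return BASIC_LIVING_COST / DIVIDEND_RATE

def may_have_child(transferable_assets):
    return transferable_assets >= assets_required_per_child()

print(assets_required_per_child())                        # 500000.0
print(may_have_child(600_000), may_have_child(200_000))   # True False
```
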
There is the issue of what amount of resources constitutes basic living, as well as whether there is a cap on the amount of inviolable property a person or group can have — can an environmentalist group very quickly just plant a flag on all nearby useful planets (so as to declare them their inviolable property) and then forever refuse for them to be interacted with (as the NAPnobots will enforce) ?\n\nThe choice of living resources should be included: if what we choose to eat has as much of an effect on our psyche as we are starting to find out it does, then choosing what configuration of nutrients we receive should be part of the basic living guarantees.\n\nOne of those basic resources of course is healthcare, and healthcare should include guaranteed immortality unless you opt out of it. There's just no particular reason old age should have any more right to hurt a non-consenting person than any *other* outside aggressor; \"outside\" because people are their brain's information system, not their body. Becoming a virtual person would be the \"easy\" solution to immortality.\n\n", "url": "n/a", "filename": "topia-layer-0.md", "id": "e8df295ece525672584985e22f7f007d"} {"source": "carado.moe", "source_type": "markdown", "title": "how-timelines-fall", "authors": "n/a", "date_published": "n/a", "text": "2022-01-11 ★\n\n## how timelines fall\n\ni've speculated that we are all together, as a civilization, quantum immortal; [timelines where we all die](https://carado.moe/timeline-codes.html) can somewhat [be ignored](https://carado.moe/brittle-physics.html), leaving us mostly just with concerns of [heaven vs hell](https://carado.moe/botched-alignment-and-awareness.html) timelines.\n\nbut, in the lucky timelines where we *do* keep avoiding an [X-risk](https://en.wikipedia.org/wiki/X-risk) [superintelligence](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer), what does that look like ?\n\nit would be silly to expect that avoiding such a superintelligence would look like trying to press the button to turn it on but at the last minute the button jams, or trying to press the button to turn it on but at the last minute the person about to press it has a heart attack. indeed, bayes should make us think that we should expect it to look like whatever makes it likely that superintelligence fails to be implemented.\n\nwhat does this look like ?\n\nglobal nuclear war, broad economic collapse, great cataclysms or social unrest in cities where most of the AI development is done, and other largely unpleasant events.\n\ndon't expect the world to look like the god of anthropics is doing miracles to save us from superintelligence; expect the world to look like he is slowly conspiring to do whatever it takes to make superintelligence unlikely to happen long in advance.\n\nexpect the god of anthropics to create AI winters and generally make us [terrible at software](https://www.youtube.com/watch?v=pW-SOdj4Kkk).\n\nexpect the god of anthropics to create plausible but still surprising reasons for tensor hardware to become scarce.\n\nlook around. does this look like a century where superintelligence appears ? yes, i think so as well. the god of anthropics has his work cut out for him. 
let's try and offer him timelines where AI development slows down more peacefully than if he has to take the initiative.\n\nwhile some of us are working on aligning god, the rest of us should worry about aligning luck.\n\n", "url": "n/a", "filename": "how-timelines-fall.md", "id": "701f50cc36fe627f8e9f1afdc7963bc6"} {"source": "carado.moe", "source_type": "markdown", "title": "what-is-value", "authors": "n/a", "date_published": "n/a", "text": "2021-07-25 ★\n\n## what is value?\n\ni've come to clarify my view of value sufficiently many times that i feel like having a single post i can link to would be worth it. this is that.\n\nwhat i call *value* is *things we care about*; *what determines what we ought to do*. i use \"morality\" and \"ethics\" interchangeably to generally mean the study of value.\n\na lot of this post is just ethics 101, but i feel it's still nice to have my own summary of things.\n\nfor more on values, read [the sequences](https://www.readthesequences.com/), notably [book V](https://www.readthesequences.com/Book-V-Mere-Goodness).\n\nsee also [this post on how explicit values can come to be](https://slatestarcodex.com/2018/07/24/value-differences-as-differently-crystallized-metaphysical-heuristics/).\n\n### consequentialism vs deontology\n\na first distinction is that between [consequentialism](https://en.wikipedia.org/wiki/Consequentialism), where values are about *outcomes*, and [deontology](https://en.wikipedia.org/wiki/Deontology), where values are about *actions*.\n\nthe [trolley problem](https://en.wikipedia.org/wiki/Trolley_problem) is the typical example of a thought experiment that can help us determine whether someone is a consequentialism or a deontologist: a consequentialist will press the lever because they care about the outcome of people being alive, whereas a deontologist will not press the lever because they care about the action of causing a death.\n\ni am a consequentialist: i care about outcomes. that said, consequentialism has to be followed to the end: if someone says \"well, a consequentialist would do this thing, which would eventually lead to a worse world\", then they're failing to understand consequentialism: if the eventual outcome is a worse world, then a consequentialist should oppose the thing. to that end, we have [rule consequentialism](https://en.wikipedia.org/wiki/Consequentialism#Rule_consequentialism): recognizing that committing to certain rules (such as \"if you commit a murder, you go to prison\") help us achieve generally better outcomes in the longer term.\n\na special case of consequentialism is [utilitarianism](https://en.wikipedia.org/wiki/Utilitarianism), in which the consequential outcome being cared about is some form of positive outcome for persons; generally happiness and/or well-being. 
i tend to also value people getting their values satisfied and having [self-determination/freedom](https://carado.moe/core-vals-exist-selfdet.html) (not valuing self-determination [has issues](https://slatestarcodex.com/2018/10/24/nominating-oneself-for-the-short-end-of-a-tradeoff/)), possibly moreso than happiness or well-being, so i don't know if i count as a utilitarian.\n\n### intrinsic vs instrumental\n\ni make a distinction between [instrumental values, and intrinsic values](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value) (the latter can also be called \"core values\", \"axiomatic values\", \"ultimate values\", or \"terminal values\"; but i try to favor the term \"intrinsic\" just because it's the one wikipedia uses).\n\ninstrumental values are values that one has because it helps them achieve other values; intrinsic values are what one ultimately values, without any justification.\n\n* \"why do i want people to get practice hygiene? so they don't get sick as often\"\n* \"why do i want people to get sick less often? because being sick seems like a decrease in their well-being\"\n* \"why do i want people to have well-being? i can't give a justification for that, it's what i *intrinsically* value\"\n\nany theoretical query into values should be a sequence of instrumental values eventually leading to a set of intrinsic values; and those cannot be justified. if a justification is given for a value, then that value is actually instrumental.\n\njust because intrinsic values don't have justifications, doesn't mean we can't have a discussion about them: a lot of discussion i have about values is trying to determine whether the person i'm talking to *actually* holds the values that they *believe* they hold; people *can be* and very often *are* wrong about what values they hold, no doubt to some extent including myself.\n\none can have multiple intrinsic values; and then, maximizing the *satisfaction* of those values, is often the careful work of weighing those different intrinsic values in tradeoffs.\n\nthis isn't to say intrinsic values don't have causal origins; but that's a different matter from moral justificaiton.\n\na lot of the time, when just saying \"values\", people are talking about *intrinsic* values rather than all values (including instrumental); i do this myself, including throughout this post.\n\n### knowing one's values\n\nmost people don't have a *formalized* set of values, they just act by whatever seems right to them in the moment. but, even to [rationalists](https://www.readthesequences.com/What-Do-I-Mean-By-Rationality) like me, knowing what values one has is *very hard*, even moreso in a formalized manner; if we had the complete formal description of the values of even just one person, we'd have gone a long way towards solving [AI alignment](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/), which is [by extremely far](https://carado.moe/ai-alignment-wolfram-physics.html) the [single most important problem humankind has ever faced](https://carado.moe/were-all-doomed.html), and [is gonna be very difficult to get right](https://www.readthesequences.com/Value-Is-Fragile).\n\nto try and determine my own values, i generally [make a guess and then extrapolate how a superintelligence would maximize those values to the extreme and see where that fails](https://carado.moe/core-vals-exist-selfdet.html). 
but, even with that process, it is very hard work, and like pretty much everyone else, i don't have a clear idea what my values are; though i have some broad ideas, i still have to go by what feels right a lot of the time.\n\n### selfishness vs altruism\n\nthis is *not* about how someone ultimately only wants *their values* to be satisfied; this is true *by definition*. this is about whether those values can be *about* something other than the person having the values.\n\npeople seem to be divided between the following positions:\n\n1. all values are ultimately selfish; there is no meaningful sense in which someone can *truly, intrinsically* care about anything outside themselves.\n2. someone can have values about themselves, or have values about the rest of the world, or both.\n3. all values are ultimately about the world; there is no meaningful sense in which someone can actually care about their own person in particular (for example because the notion of identity is erroneous).\n\ni hold position 2, and **strongly** reject position 1, though it seems very popular among people with whom i have talked about values; i see no reason why someone can't hold a value about the world outside of themselves, such as *intrinsically* wanting other people to be happy or *intrinsically* wanting the world to contain pretty things. for more on that, see [this post](https://www.lesswrong.com/posts/XPErvb8m9FapXCjhA/adaptation-executers-not-fitness-maximizers) and [this post](https://www.readthesequences.com/Terminal-Values-And-Instrumental-Values) from the sequences.\n\nposition 3 can make some sense if you deconstruct identity, but i believe identity [is a real thing that can be tracked](https://carado.moe/you-are-your-information-system.html), and so the outcome of which you can absolutely happen to particularly care about.\n\n### value preservation\n\n[value preservation](https://en.wikipedia.org/wiki/Instrumental_convergence#Goal-content_integrity) is the notion that, if you know that you value something (such as being wealthy or the world containing pretty things), you should probly try to avoid becoming someone who *doesn't* value those things, or worse: someone who values the opposite (such as being poor or the world containing only ugly things).\n\nthe reason for this is simple: you know that if you become someone who values being poor, you'll be unlikely to keep taking actions that will lead you to be wealthy, which goes against your current values; and your goal is to accomplish your values.\n\nsome people argue \"well, if i become someone who values being poor, and then i take actions to that end, that's fine isn't it? i'm still accomplishing my values\". but it's really not! we established that your values is \"being wealthy\", not \"being someone whose values are satisfied\". in fact, \"being someone whose values are satisfied\" is meaningless to have as a particular value; the fact that you want your values to be satisfied is implied in them being your values.\n\ni call the process of someone finding out that they should preserve their values, and thus committing to whatever values they had at that moment, [\"value crystallization\"](https://carado.moe/value-crystallization.html); however, one ought to be careful with that. 
considering one's set of values is likely a very complex thing, one is likely to hastily over-commit to what they *believe* are their values, even though they are wrong about what values they hold; worse yet, they might end up committing so hard that they actually start changing what values they have towards those believed values. this is something that of course one should aim to avoid: as mentioned above, you generally don't want to become someone who doesn't hold the values you currently do, including through the process of hasty crystallization and over-commitment.\n\nthis is not to say you should remain in a complete haze where you just do whatever seems right at any moment; without a special effort, this could very well entail your values changing, something you shouldn't want even if you don't know what those values are.\n\nwhat you should do is try to broadly determine what values you have, and generally try to commit to preserving whatever values you have; and in general, to *be the type of person who preserves the values they have*. this should help you preserve whatever values you actually do have, even while you still haven't figured out what they are.\n\na funny hypothetical version of this could be: present-you should make a contract with future-you that if they ever gain the ability to precisely examine values, they should examine what values present-you had, and adopt those.\n\n", "url": "n/a", "filename": "what-is-value.md", "id": "c4724d2bfba18dfbb3ea9d7a0ea77eae"} {"source": "carado.moe", "source_type": "markdown", "title": "ΓêÇV", "authors": "n/a", "date_published": "n/a", "text": "2021-08-31 ★\n\n## ∀V: A Utopia For Ever\n\n-->\n\n\n∀V (read \"universal voluntaryism\" or \"univol\") is my utopia proposal. for people who are familiar with me or my material, this may serve as more of a clarification than an introduction; nevertheless, for me, this will be the post to which i link people in order to present my general view of what i would want the future to look like.\n\n### what's a person?\n\nyou'll notice that throughout this post i've stuck to the word \"person\" instead of, for example, \"human\". this isn't just in case we eventually encounter aliens who we consider to be persons just like us, but it's also possible some existing animals might count, or even beings whose existence we largely don't even envision. who knows what kind of computational processes take place inside the sun, for example?\n\nby person i mean something deserving of moral patienthood; though this is still hard for me to even start to determine. it probably requires some amount of information system complexity, as well as a single point of explicit consideration and decision, but apart from that i'm not quite sure.\n\ni do know that pretty much all currently living humans count as moral patients. 
other than that, we should probably err on the safe side and consider things moral patients when in doubt.\n\n### systems within systems\n\nall top-level systems are, in the long term, permanent.\n\nif you want society to \"settle what it wants later\", then your top-level permanent system is what they'll eventually settle on.\n\nif you want society to never be stuck in any system and always have a way out, then the top-level permanent system is \"going from system to system, being forever unable to settle\" and you better hope that it spends more time in utopian systems than dystopian systems.\n\nif your view is \"whatever happens happens\", then your top-level permanent system is whatever happens to happen. by not caring about what the future looks like, you don't make the future more free, you only are less likely to make sure it's one you'd find good.\n\nif there is going to be a top-level system no matter what, no matter how flexible its internals are, we ought to care a lot about what that system is.\n\n### enforcement: superintelligence\n\neven if you don't think [superintelligence explosion](https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion) is [imminent](https://carado.moe/were-all-doomed.html), you should think it will happen eventually. given this, what may very well be [the \"infinite majority\" of remaining time](https://carado.moe/ai-alignment-wolfram-physics.html) will be one where a superintelligence is the ultimate decider of what happens; it is the top-level.\n\ni find this reassuring: there *is* a way to have control over what the eternal top-level system is, and thus ensure we avoid possibilities such as unescapable dystopias.\n\n### generalized alignment\n\nin AI development, \"alignment\" refers to [the problem of ensuring that AI does what we *actually* want](https://en.wikipedia.org/wiki/AI_control_problem) ([rather than](https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml), for example, what we've explicitely instructed it to do, or just maximizing its reward signal).\n\nwhen we think about how to organize future society, we actually care not just about the alignment of the top-level superintelligence, but also \"societal alignment\" before (and within) that. i will call \"generalized alignment\" the work of making sure future society will be in a state we think is good, whether that be by aligning the top-level superintelligence, or aligning the values of the population.\n\nso, even if you don't think a superintelligence is particularly imminent, you should want society to start worrying about it sooner rather than later, given what you should consider being the amount of unknown variables surrounding the time and circumstances at which such an event will occur. you want to align society *now*, to your values as well as the value of figuring out superintelligence alignment, hopefully not too late.\n\n### not just values\n\nat this point, one might suggest directly loading values into superintelligence, and letting it implement whatever maximizes those values. while this may seem like a reasonable option, i would kind of like there to be hard guarantees. 
technically, from a utilitarian perspective, there exists a number N sufficiently large that, if N people really want someone to genuinely be tortured, it is utilitarianly preferable for that person to be tortured than not; my utopia instead proposes a set of hard guarantees for everyone, and *then, within the bounds of those guarantees*, lets people do what they want (including \"i just want superintelligence to accomplish my values please\").\n\none might consider the solution to that to be \"just make it that people never want others to be tortured\", but that's a degree of freedom on people's thoughts i'd rather keep if i can. i want persons to be as free as possible, including the freedom to want things that can't ethically (and thus, in my utopia, can't) be realized.\n\n### a substrate for living\n\ni am increasingly adopting [wolfram's computational perspective](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/) on the foundation of reality; beyond offering great possibilities such as [overcoming heat death](https://carado.moe/ai-alignment-wolfram-physics.html), i feel like it strongly supports the [informational view of persons](https://carado.moe/you-are-your-information-system.html) and the ability for people and socities to live on any form of computation framework; and those aren't particularly less or more real than our current, [standard model](https://en.wikipedia.org/wiki/Standard_Model)-supported reality.\n\ngiven this, the most efficient (in terms of realizable value per unit of resource) way for superintelligence to run a world is to extract the essence of valuable computations (notably [the information systems of persons](https://carado.moe/you-are-your-information-system.html)) into a more controllable substrate in which phenomena such as aging, attachment to a single physical body, vulnerability to natural elements, or vulnerability to other persons, can be entirely avoided by persons who wish to avoid them. this extraction process is often referred to as \"uploading\", though that implies uploading into a nested world (such as computers running in this world); but if wolfram's perspective is correct, a superintelligence would probably be able to run this computation at a level parallel to or replacing standard model physics rather than on top of that layer.\n\nthis is not to say that people should all be pure orbs of thought floating in the void. even an existence such as a hunter-gatherer lifestyle can be extracted into superintelligence-supervised computation, allowing people to choose superintelligence-assisted lifestyles such as \"hunter-gathering, except no brutal injury please, and also it'd be nice if there were unicorns around\".\n\n### universal voluntaryism\n\nat this point, we come to the crux of this utopia, rather than its supporting foundation: ultimately, in this framework, the basis of the existence of persons would be for each of them to have a \"computation garden\" with room to run not just their own mind but also virtual environments. 
the amount of computational resource would be like a form of universal basic income: fixed per person, but amounts of it could be temporarily shared or transferred.\n\nnote that if resources are potentially infinite over time, as wolfram's perspective suggests, then there is no limit to the amount of raw computation someone can use: if they need more and it's not available, superintelligence can just put either their garden or *everyone's gardens* on pause until that amount of computation resource becomes available, and then resume things. from the point of view of persons, that pause would be imperceptible, and in fact functionally just an \"implementation detail\" of this new reality.\n\npersons would have the ability to transform their mind as they want (though having a bunch of warnings would probably be a reasonable default) and experience anything that their garden can run; *except for computing the minds of other persons*, even within their own mind: you wouldn't want to be at the mercy of someone just because you happen to be located within their mind.\n\npersons would be able to consent to interact with others, and thus [have the ultimate say on what information reaches their mind](https://carado.moe/cultural-and-memetic-hygiene.html). they could consent to visit parts of each other's gardens, make a shared garden together, and all manner of other possibilities, so long as all parties consent to all interactions, as determined by superintelligence — and here we're talking about [explicit consent](https://carado.moe/defining-freedom.html), not inferred desires even though superintelligence would probably have the ability to perfectly determine those.\n\nfor a perspective on what a society of \"uploaded\" persons might look like, see for example [Diaspora by Greg Egan](https://en.wikipedia.org/wiki/Diaspora_%28novel%29).\n\n### rationale and non-person forces\n\nthe goal of this structure is to allow people to live and associate with each other in the most free way possible, making the least possible restrictions on lifestyle, while retaining some strong guarantees about consent requirements.\n\nin a previous post i talk about [non-person forces](https://carado.moe/two-principles-for-topia.html); those being for example social structures that act with an agenthood of their own, running on other people as their own substrate.\n\nat the moment, i simply don't know how to address this issue.\n\nthe problem with the \"dismantlement\" of such forces is that, if every person is consenting to the process, it's hard to justify superintelligence coming in and intervening. on the other hand, it does feel like not doing anything about them, short of being able to align sufficiently many people *forever*, will tend to make people dominated by such structures, as a simple process of natural selection: if there is room for such structures and they can at least slightly causate their own growth or reproduction, then they will tend to exist more than not. this may be thought of as [moloch](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/) \"attacking from above\".\n\none major such potential non-person force is superintelligence itself trying to make people tend to want to live in ways that are easier to satisfy. 
if everyone wants to sit in their garden forever and do nothing computationally costly, that makes superintelligence's job a lot \"easier\" than if they wanted to, for example, communicate a lot with each other and live computationally expensive to run lifestyles; and the reason superintelligence will want to make its job easier is to increase the probability that it succeeds at that job (which it *should* want).\n\nif informationally insulating people from superintelligence except when they outright consent to it intervening in their decisions is *not* sufficient, then maybe we can add the rule that people can never ask superintelligence to intervene in their life unless there is one single optimal way to intervene, and hopefully *that's* enough. the idea there being: if, for any request to superintelligence, there is only a single optimal way to accomplish that request, then superintelligence has no degree of freedom to influence people and thus what they want.\n\n### on new persons\n\nthere are some reasons to be worried about the creation of new persons.\n\none is [malthusian traps](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/): if the amount of resources is either finite or growing but bounded, or if it's unknown whether the amount of resources will end up being finite or not, then you *have* to cap population growth to at most the speed at which the amount of resource grows (if the amount of resources grows, the maximum speed of population growth should preferably be lower, so that the amount of resource each person has can grow as well). while it does seem like in current society people tend to have less kids when they have a higher quality of life, in a system where persons can live forever and modify their minds, one can't make such a guarantee over potentially infinite time.\n\nanother is replicator cultures: if there is no limit on creating new persons, and if people can influence even just a bit the values of new persons they create, then soon the world is overrun by people whose values are to create kids. or: making a world in which \"new person slots\" are filled by whoever wants to fill them first, will just select for people who want to fill those slots the most.\n\nthere might also be weird effects such as, even if resources were infinite, allowing arbitrary amounts of persons to be created could \"stretch\" the social network of consenting-to-interact-with-each-other persons such that, even if someone has registered an automatic consent to interact *even just a bit* with the kids of persons they already consent to interact with, they are soon flooded with a potentially exponentially growing network of kid interactions; though this probably can be addressed by that person by revoking this automatic consent.\n\nbeyond various resource and network effects, new persons create an ethical dilemma: does a person consent to living? 
or, for a child, do they consent to being taken care of for some number of years after they are born — a time during which we often consider it necessary to affect them in ways they might be unable to consent to?\n\nif such philosophical quandaries don't have a solution, then the safest route is to simply forbid the haphazard creation of new persons, whether that be through conventional human infants, [headmates](https://en.wikipedia.org/wiki/Multiplicity_%28psychology%29) and [tulpas](https://en.wikipedia.org/wiki/Tulpa#21st_century) if those are \"real\" enough to count as persons, and potentially other ways of creating new persons that can't consent to future interactions because they don't exist yet.\n\non the other hand, one way to increase the population *with* consent is simply to \"fork\" existing persons: create a duplicate of them. because both are a continuation of the original single person, the original person's consent counts for both resulting persons, and there is no issue. the \"merging\" of consenting persons together might be possible *if* it can be reasonably estimated that their shared consent \"carries through\" to the new, merged person; i am currently undecided about how to even determine this.\n\nfinally, if resources are finite, creating a new person (whatever the means) should require permanently transferring one \"universal basic computation amount\"'s worth of computation garden to them, as no person should start out without this guarantee. this could be done by a person consenting to die and give up their own computation garden, it could be done by several \"parents\" consenting to give up a ratio of their gardens to the new person, it could be done by reclaiming and redistributing the gardens of persons who die without making any decisions about what should be done with their computation garden, etc.\n\n", "url": "n/a", "filename": "∀V.md", "id": "9520189e5d92fe90fda955ab2bab58a6"} {"source": "carado.moe", "source_type": "markdown", "title": "persistent-data-structures-consciousness", "authors": "n/a", "date_published": "n/a", "text": "2021-06-16\n\n## the persistent data structure argument against linear consciousness\n\npeople have the belief that they live a continuous, linear stream of consciousness (whatever that means).\n\ni've [made arguments before](https://carado.moe/quantum-suicide.html) as to why this is erroneous; but here is another interesting argument that undoes the seeming coherence of such a statement.\n\nthink of reality as a computational process, generating frames one after another, possibly splitting into timelines.\n\nwhere is your consciousness? one might be tempted to answer that it's the set of bytes representing the state of the brain. if i split the world into two timelines, which one is the \"fake copy\" and which one is the \"continuous original\"? one might answer that the copy is whichever new data structure has new bytes copied to it, and that the original is whichever presence in memory hasn't been moved; the *same* bytes, supposedly stored on the *same* hardware transistors.\n\nif i were to split timelines by creating two copies and destroying the original, one might answer that this is akin to killing the original and creating two \"fake copies\".\n\nhowever, there exist [persistent data structures](https://en.wikipedia.org/wiki/Persistent_data_structure), which represent new sets of data as added constructions on top of an original one. 
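(a minimal toy sketch of the idea, with made-up names, where each new state is just a small node pointing back at the unchanged past:)

```python
# toy persistent history: the past is shared, never copied or mutated
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class State:
    data: str
    parent: Optional["State"] = None  # structural sharing with everything that came before

t0 = State("t0")
t1 = State("t1", parent=t0)

# a "timeline split": two successors built on top of the same untouched history
branch_a = State("t2a", parent=t1)
branch_b = State("t2b", parent=t1)

print(branch_a.parent is branch_b.parent)  # True: neither branch is "the copy"
```
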
this is a perfectly reasonable way to do computation, and one would probably agree that if such a system only ever runs a single timeline, the people in it have continuous consciousness.\n\nif i were to run a world simulation using persistent data structures and generate a timeline split, which one is the \"continuous person\"? just like with continuous single-timeline computation, both new timelines are merely new data structures with their own set of whichever data is different, and pointers back to whichever sets of data are unchanged.\n\nthe least unreasonable choice someone who believes in linear streams of consciousness could make is that somehow persistent data structures are *not* a valid form of universe computation; that a computation ought to be run by reusing the same memory locations. surely the arbitrariness of such a claim despite its functional equivalence to persistent data structures for single-timeline computation demonstrates well enough how the notion of linear streams of consciousness doesn't make sense.\n\n", "url": "n/a", "filename": "persistent-data-structures-consciousness.md", "id": "591e883e1dcfa4dec5f882b1fb6ef490"} {"source": "carado.moe", "source_type": "markdown", "title": "noninterf-superint", "authors": "n/a", "date_published": "n/a", "text": "2021-12-09\n\n## non-interfering superintelligence and remaining philosophical progress: a deterministic utopia\n\n[in a previous post](https://carado.moe/against-ai-alignment.html) i talk about the need to accomplish philosophical progress at determining what we value, before alignment. i wouldn't be the first to think of \"what if we boot superintelligence now, and decide later?\" as an alternative: it would indeed be nice to have this possibility, especially given the seeming imminence of superintelligence.\n\nalas, typically, making this proposition goes like this:\n\n* A: \"we should boot superintelligence now, and make it so that we can adjust it later when we figure out more of philosophy.\"\n* B: \"yeah, but superintelligence isn't gonna just wait: it's gonna want to try to make us figure out whichever philosophy would make its job simpler, such as that actually we value all dying immediately so that there's nothing to protect\"\n* A: \"well, in that case, we need to make sure superintelligence can't interfere with our decision process.\"\n* B: \"and *how* do you ensure that the new being running all things in the world has no interference in human affairs, exactly?\"\n\nwhich is a pretty good point, and usually a reasonable A concedes at that point.\n\ntoday, however, i am here to offer a continuation to this conversation, from A's side.\n\nmy idea is to implement a deterministic computational utopia for people to be uploaded in, whose internals are disconnected from the outside world, such as [∀V](https://carado.moe/∀V.html); if we have infinite compute, then it can be even more free from outside interference.\n\nthe trick is to have that utopia's principles be *deontological*, or at least to make them absolute rather than able to be weighed against decisions outside of it: as it largely is in ∀V, ensure everything about utopia has a definite okay or not-okay status, evaluable without knowing anything about the \"outside\" of this utopia. either someone's consent is being violated, or it's not.
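a minimal sketch of what i mean by decisions evaluable from the utopia's internal state alone (toy Python; every name here is hypothetical, not an actual spec of ∀V):

```python
# toy model: the utopia is a deterministic state machine, and every
# "decision" is a pure function of the utopia's internal state;
# nothing about the "outside" ever enters the computation.

def allowed(interaction: dict, state: dict) -> bool:
    # a definite okay / not-okay status: the target has consented
    # to interactions coming from this source
    return interaction["source"] in state["consents"][interaction["target"]]

def step(state: dict) -> dict:
    # all the superintelligence ever does is compute the next state
    kept = [i for i in state["pending"] if allowed(i, state)]
    return {**state, "history": state["history"] + kept, "pending": []}

state = {
    "consents": {"a": {"b"}, "b": set()},         # a consents to hearing from b
    "pending": [{"source": "b", "target": "a"}],
    "history": [],
}
assert step(state)["history"] == state["pending"]  # allowed, so it goes through
```

since `step` is deterministic and closed over the state, there is no degree of freedom left to whatever is running it.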
with a set of decisions based only on the state of the utopia being simulated, every decision of the superintelligence about what it does in ∀V is unique: all superintelligence is doing is calculating the next step of this deterministic computation, including ethical principles, and thus there is nothing superintelligence can do to bias that decision in a way that is helpful to it. all it can do is run the computation and wait to see what it is that persons inside of it will decide to reprogram it to value or do; on the outside/before the singularity, all we need to ensure is that superintelligence does indeed eventually run this computation and apply the changes we decide on once it finds them out.\n\nunder these conditions, a device could be set up for us to later reprogram superintelligence somehow when/if we ever figure out what values we *actually* want, and it wouldn't be able to meaningfully interfere with our decision process, because every decision it takes regarding how our utopia is run is fully deterministic.\n\nnot that i think being able to reprogram a superintelligence after boot is necessarily a good idea, but at least, i think it can be a possibility.\n\n", "url": "n/a", "filename": "noninterf-superint.md", "id": "c86ff772d1f3e595ee30c1282ca0fa19"} {"source": "carado.moe", "source_type": "markdown", "title": "limiting-real-universes", "authors": "n/a", "date_published": "n/a", "text": "2020-04-27\n\n(2020-04-27 edit: actually Greg Egan already made this argument previously ([see Q5 if you have read Permutation City](https://www.gregegan.net/PERMUTATION/FAQ/FAQ.html)))\n\n(2021-04-28 edit: this post might not do the best job at explaining its idea; see an alternate explanation in [this other post](https://carado.moe/quantum-suicide.html))\n\n## Limiting Real Universes\n\nThe following is an argument for thinking that the set of universes that can \"be real\" (what that means is covered) is limited.\n\nNotably, not all [Tegmark 4](https://space.mit.edu/home/tegmark/crazy.html) universes (i.e. mathematically possible universes) are real, nor even all conceivable states of universes based on our current physics.\n\n### I. Limiting Many-Worlds\n\nA universe being \"real\" is defined here as \"one could observe being in it\".\n\nSuppose all possible configurations of particles under our current laws of physics are *real*.\n\nThen, out of all the universes that contain an exact physical copy of you, the vast majority of them should be universes that *do not* descend from a coherent history, and thus everything that surrounds the copy of you should look like random particle soup.\n\n(If such a distinction even makes sense), you cannot tell if you're \"the original\" that comes from a coherent history or if you were \"just created\" as-is, because your memory could also \"just have been created\" as-is.\n\nYet, when you look around, everything looks very coherent.\n\nTherefore, either you're *extremely* lucky, or only universe-states that descend from a coherent history are *real*. As per bayesianism, you should think the latter.\n\n### II. Limiting Computing Ability\n\nSuppose all universes based on our current physics, but with arbitrary amounts of \"computing power\" (i.e. how much of their stuff can be turned into computers, i.e.
how much stuff they have) are \"real\".\n\nThen some of those universes would end up making simulations of random universe-states, some of which happen to contain an exact copy of you.\n\nHowever, if that were possible, because of the sheer number of possible amounts of computing power, *you* should be more likely to exist in one of these randomly created simulations, within a universe with much more computing power than ours.\n\nYet, when you look around, everything looks very coherent.\n\nTherefore, there must be *some* limit on the amount of computing power universes can have; and then, so that the sheer number of these universes can't compete with the meagre set of history-coherent universes from which our reality descends, there must either be a limit on the number of initial configurations other universes can have, or on the total computing power allocated to all universes.\n\nEven better: suppose all states of [Conway's Game of Life](https://en.wikipedia.org/wiki/Conway's_Game_of_Life) are *real*. Then out of this infinity, a smaller infinity should happen to be running perfect simulations of subsets of this universe that happen to have you in them. But, reusing my argument again, you observe, most likely, not being in them; therefore there must be a limit on the amount of computing power *even universes with other physics* have (and, again, a limit on either the number of configurations these other universes would be in, or the total computing power allocated to them all collectively).\n\n### III. Conclusion\n\nAt this point it seems easier to *just assume that only universe-states that descend from our history* exist, or at least that the number of such histories is limited.\n\nThat certainly seems simpler than imagining there being a set of various systems by which other universes with other rules of physics would have a fixed amount of computing power allocated amongst themselves.\n\nNote that all of this is mostly true even if you're a dualist: even if you have a soul (or equivalent), if there were infinite universes with infinite computing power, there's no reason the soul of *you reading this right now* should happen to be the soul of the original you and not a soul \"created in a just-then created universe-state\", unless you also assume complex soul mechanics.\n\n", "url": "n/a", "filename": "limiting-real-universes.md", "id": "6b4d4d68e019f4ff1591291708e0bde5"} {"source": "carado.moe", "source_type": "markdown", "title": "udassa-time-steps", "authors": "n/a", "date_published": "n/a", "text": "2022-03-21\n\n## making the UD and UDASSA less broken: identifying time steps\n\nthe [universal distribution](https://handsandcities.com/2021/10/29/on-the-universal-distribution/) (\"UD\") and [its applicability to anthropics](https://handsandcities.com/2021/11/28/anthropics-and-the-universal-distribution/) apparently suffer from some issues.\n\none is uncomputability. i think a speed penalty seems reasonable; but as a more general solution, i think it is unreasonable to \"wait for the machine to halt\". instead, let us see turing machines as running forever, with getting stuck on a repeating state as a special case.
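a minimal sketch of this never-halting view: a toy dovetailer (assumed Python; `run_one_step` is a stand-in for one step of whatever machine model you like) that interleaves every program, so that each computation step gets a single global time step:

```python
from itertools import count

def run_one_step(program: int, state):
    # stand-in for running program `program` one step from `state`;
    # here it just counts, so the sketch actually runs
    return (state or 0) + 1

def dovetail():
    """interleave all programs forever, yielding (global time step, program, state)."""
    states = []         # states[i] = current state of program i
    t = 0               # time step of the overall universal program
    for n in count(1):  # in round n, introduce program n-1 and step programs 0..n-1
        states.append(None)
        for p in range(n):
            states[p] = run_one_step(p, states[p])
            t += 1
            yield t, p, states[p]

# every program gets stepped at some global time step, forever;
# a hypothesis first reached at step t can then be weighed by ~1/t
for t, p, s in dovetail():
    if t >= 20:
        break
```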
then, the space of all computations on a given universal turing machine (\"UTM\") is a set of pairs `(input program, time step)`; or more simply, if we use a [universal complete](https://carado.moe/universal-complete.html) program, it is *just* a time step: every computation will be ran at some point.\n\nthis model feels quite natural to me, and seems like an easy way to rank priors: weigh the result of hypotheses by the inverse of the time step at which the universal complete program runs into them.\n\nthis also helps with [UDASSA](https://handsandcities.com/2021/11/28/anthropics-and-the-universal-distribution): if you use a deterministic [model of computation](https://en.wikipedia.org/wiki/Model_of_computation) where every computation step only does a finite amount of stuff (like turing machines or SKI calculus, but unlike [wolfram-style graph rewriting](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/)) then you don't need a \"claw\" program to locate you within the world described by the world program; you are simply located at the set of time steps which happen to be updating the part of world that is *you*. this gives us a natural \"locator\" for persons within the space of all computation; it flattens together time, space, timelines (when a computation splits a world into multiple states all of which it then continues computing), and possible computation machines all into one neat linear sequence of steps.\n\n", "url": "n/a", "filename": "udassa-time-steps.md", "id": "0a19dc9b9491e213fcac1b792c1bf89c"} {"source": "carado.moe", "source_type": "markdown", "title": "not-hold-on-to-values", "authors": "n/a", "date_published": "n/a", "text": "2022-03-02\n\n## do not hold on to your believed intrinsic values — follow your heart!\n\ni posit the following framework for thinking about [intrinsic values](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value) (hereby just called \"values\").\n\nsomewhere out there, there is **your value system**.\n\nit is a function (in the mathematical sense) that takes as input *things*, and spits out *feelings*. that function is where your *true* values lie; they *are* that function.\n\nhow is that function encoded in your brain? who knows! i don't have an answer, and there may [not be an answer](https://en.wikipedia.org/wiki/Computational_irreducibility).\n\nin conscious thought, however, you don't have access to the source code of that function, whatever it is. the best we can do for the moment seems to be to try thinking about it real hard, throw various things at it and examine the output ([perhaps through some mildly systematic process](https://carado.moe/core-vals-exist-selfdet.html)), and define another function that tries to approximate what your actual values are. this is hard and takes a lot of work. perhaps someday we will have a device that can scan your brain and give you a better idea of what your value function looks like, but at the moment that is not the case.\n\nso, you build up an idea of what your values might be. here is where i think a lot of people make a mistake: they choose to believe strongly that this guess *is* their actual set of values (even though [it likely isn't](https://www.readthesequences.com/Value-Is-Fragile)). they [crystallize](https://carado.moe/value-crystallization.html) those values; they live by them, until they in turn become influenced by those values and perhaps *actually adopt them*. 
(the actual value function is mutable!)\n\nthis is generally bad; [you should want to preserve whatever your values are](https://en.wikipedia.org/wiki/Instrumental_convergence#Goal-content_integrity). hence the appeal that stands as the title of this post: do *not* hold on to the approximate function that is your best guess at what your value system is; you're only human, your guess is likely incorrect, and adopting it would run the risk of damaging your *actual* values; a function which, while hard to figure out, can be mutable, will certainly be mutated by acting as if your values are not what they are, and whose mutation you should generally want to avoid.\n\nso, pay attention to your feelings. they are what is the output of your *actual* values system, by definition; follow your heart, not your reason's best guess.\n\nnote that this is *not* an appeal to favor deontology over consequentialism: how you feel can be about actions (deontology) or about outcomes (consequentialism), and which one it is is orthogonal to whether you follow that system, or whether you decide to follow your current best approximation of it. if you are consequentialist (as i recommend), just make sure to give your value system a full picture of what the outcome would look like, and *then* decide based on what feelings are produced by that.\n\nmeta note: this framework for thinking about values should be itself held with suspicion, as should probly any formal framework that concerns values. you should be careful about holding on to it just like you should have been careful about holding onto your believed values (careful enough to be able to consider the present post, for example). which isn't to say *don't believe what i just wrote*, but leave room for it to be wrong, partially or wholly.\n\n", "url": "n/a", "filename": "not-hold-on-to-values.md", "id": "a97825dea212a2898080e2846d751cb2"} {"source": "carado.moe", "source_type": "markdown", "title": "genuineness-existselfdet-satisfaction-pick2", "authors": "n/a", "date_published": "n/a", "text": "2021-11-21 ★\n\n## Genuineness, Existential Selfdetermination, Satisfaction: pick 2\n\nimagine you have a world where one person wants the moon to be painted blue, and another wants the moon to be painted red.\n\nthey both mean the current actual physical moon as it exists now; they both refuse any \"cheating\" option such as duplicating the moon or duplicating reality, and they don't want their minds changed, nor to compromise.\n\nthere's three ways to resolve situations like this:\n\n* you sacrifice **genuineness**: you somehow make both of them believe, mistakenly, that what they want is satisfied. 
maybe by giving them, without their knowledge, eye implants that change what color they see the moon as.\n* you sacrifice **[existential selfdetermination](https://carado.moe/core-vals-exist-selfdet.html)**: you ensure the situation never happens to begin with, that no two persons will ever want the moon to be painted different colors; or maybe you brainwash one of them after the fact.\n* or, you sacrifice **satisfaction**: you let them want what they want, and let them see what the moon looks like, such that at most only one of them will ever be satisfied.\n\nyou can't have all three of those things.\n\nmany hedonists will happily sacrifice **genuineness**; authoritarians like to sacrifice **existential selfdetermination**.\n\nas for me, for [∀V](https://carado.moe/∀V.html), i ultimately sacrifice **satisfaction**: while people can choose to become mistaken about things, the default is that they get to access the actual true state of things.\n\n", "url": "n/a", "filename": "genuineness-existselfdet-satisfaction-pick2.md", "id": "e15e4b768d846f688f71ac00243ad6c8"} {"source": "carado.moe", "source_type": "markdown", "title": "above-paperclips-2", "authors": "n/a", "date_published": "n/a", "text": "2021-12-25 ★\n\n## yes room above paperclips?\n\nin two [previous](https://carado.moe/above-paperclips.html) [posts](https://carado.moe/nonscarce-compute-optimize-out.html) i talk about the ultimate inability for interesting things to happen when everything has been tiled with paperclips, even if the superintelligence doing the tiling isn't very good at it — i.e. lets room exist [\"besides\" (by superintelligence not actually consuming everything)](https://carado.moe/nonscarce-compute-optimize-out.html), or [\"above\" (using as a substrate)](https://carado.moe/above-paperclips.html) said paperclips (or whatever else the universe is being tiled with).\n\nbut, actually, this is only true if the spare compute (whether it's besides or above) only has room for one superintelligence; if that spare compute is composed of multiple bubbles causally isolated from one another, then maybe a superintelligence permanently kills everything in one, but another one creates even more bubbles in another.\n\nin fact, as long as the first superintelligence to create many bubbles precedes the first superintelligence to create no bubbles at all, and then if the number of bubbles each bubble creates tends to be slightly more than one, and assuming superintelligences can't (or can only do so at a lesser rate than new bubbles being created) just \"hack back upwards\" (to escape to their parent universe), we can expect the set of pre-X-risk superintelligence bubbles to just increase over time.\n\nthis might provide a better explanation [than just us dying forever](https://carado.moe/estimating-populated-intelligence-explosions.html), for the weird fact that we exist now when the future could contain very many ([plausibly infinitely many](https://carado.moe/ai-alignment-wolfram-physics.html)) persons: it's not just that the amount of pre-singularity population is large compared to future timelines multiplied by their low likelihood of being populated, it's that it grows over time forever and so makes it harder for U-lines or S-lines to \"compete\", expected-population-wise.\n\nwe can then run into weird questions: rather than a tree, or even a [DAG](https://en.wikipedia.org/wiki/Directed_acyclic_graph), why couldn't this be just a general graph?
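to make the "general graph" picture concrete, here is a toy sketch (assumed Python; the seeds and edges are purely illustrative, not a claim about what actually causes what) of a bubble-causation graph that contains a cycle, so it is neither a tree nor a DAG:

```python
# each bubble, keyed by its seed, lists the bubbles it ends up causing
causes = {
    "rule 30": ["rule 110", "some other seed"],
    "rule 110": ["rule 30"],  # cycle: a rule 110 bubble re-causes rule 30
    "some other seed": [],
}

def has_cycle(graph: dict) -> bool:
    # depth-first search over the causation graph, tracking the current path
    def visit(node, on_path):
        if node in on_path:
            return True
        return any(visit(nxt, on_path | {node}) for nxt in graph.get(node, []))
    return any(visit(start, frozenset()) for start in graph)

assert has_cycle(causes)  # "bubble ancestry" need not be well-founded
```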
if [the \"seeds\" for complex universes can be simple](https://en.wikipedia.org/wiki/Rule_30), it makes sense to imagine bubbles causating each other: maybe someone in [Rule 30](https://en.wikipedia.org/wiki/Rule_30) eventually boots a superintelligence that takes over everything but happens to cause a [Rule 110](https://en.wikipedia.org/wiki/Rule_110) bubble to appear (perhaps among many others), and then in that Rule 110 bubble someone creates a superintelligence that causes a Rule 30 bubble to appear again.\n\nconceptually navigating this likely cyclical graph of pre-superintelligence bubbles seems like a headache so i'll put the matter aside for now, but i'll be thinking on it more in the future. for the moment, we should expect bubbles with simpler seeds to be highly redundanced, and ones with more complex seeds to be rarer; but there's no reason to assume any ceiling on bubble seed complexity (in fact, if even just one of these bubbles is [universal complete](https://carado.moe/universal-complete.html), then *any* seed eventually gets instantiated!), and it seems nigh impossible to predict which types or complexities of seeds could lead to which outcomes, superintelligence-wise.\n\nin the meantime, remember that while things might look pretty hopeless with this perspective, [it's plausible that we can actually causate *very far*](https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past).\n\n", "url": "n/a", "filename": "above-paperclips-2.md", "id": "1b5c42ac88e16d77f6648eee0a806242"} {"source": "carado.moe", "source_type": "markdown", "title": "less-quantum-immortality", "authors": "n/a", "date_published": "n/a", "text": "2021-12-27\n\n## *less* quantum immortality?\n\nif the set of nested universes [really does](https://carado.moe/what-happens-when-you-die.html) look like a funny graph of bubbles, i think there are two likely possibilities: either the set of bubbles rapidly dries up, or it grows towards infinity; in which case, if compute is infinite [as wolfram would have me think](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/) then as soon as the bubble explosion happens, it's likely a [universal complete](https://carado.moe/universal-complete.html) algorithm is booted somewhere reasonably fast, itself booting in turn all initial states.\n\nthis has the result of instantiating all (countable, discrete) [tegmark 4 universes](https://space.mit.edu/home/tegmark/crazy.html), over time.\n\nyet, [we still observe a preference for coherency](https://carado.moe/limiting-real-universes.html): i think the most reasonable interpretation of what'd be going on is that \"computationally early\" or at least \"computationally frequent\" states are favored; and thus, very weird and incoherent initial-state universes *do* get spawned, but much later and/or are being computed more slowly (for example, maybe computation is equally distributed among all timelines, and as more and more timelines spawn over time each individual one gets updated less and less often).\n\nwhile this creates a neat explanation for what selects for universe coherence, it does mean that while [quantum immortality/suicide](https://carado.moe/quantum-suicide.html) can be considered to \"still work\", if you choose to keep living [only by waiting to be reincarnated later](https://carado.moe/what-happens-when-you-die.html), you're reducing the \"realness\" of your continued existence; you're making universes in which you
continue to live appear only \"computationally later\".\n\nit also provides a nice simplicity test for occam's razor: the simplicity of a hypothesis can be akin to how soon a universal-complete program that simulates all spawned computations arrives at it.\n\nthis probly doesn't apply to \"classical\" quantum immortality where you just use the fact that you're redundanced on other timelines, because i would imagine those other you's in other timelines would tend to be computed \"at the same time\".\n\n", "url": "n/a", "filename": "less-quantum-immortality.md", "id": "25c5cfcaae5e74381ebe7bd4a09c1e84"} {"source": "carado.moe", "source_type": "markdown", "title": "smaller-x-risk", "authors": "n/a", "date_published": "n/a", "text": "2022-05-16\n\n## smaller X-risk\n\na superintelligence killing us all is a *superintelligent, very large* [X-risk](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence).\n\nthe superintelligence will tile its values in all directions; not just through space at lightspeed [or faster](https://en.wikipedia.org/wiki/Alcubierre_drive), but also, if it can, by [hacking physics](https://carado.moe/brittle-physics.html) and traversing across, for example, worldlines of the quantum many-worlds.\n\nwe may be able to create smaller X-risks that only make us extinct in this timeline, on this earth. there are a few reasons we may want to do this:\n\n* other timelines might have a better shot than us, and us booting a superintelligence may reduce their chances through weird stuff like intertimeline hacking\n* to [avoid S-risks](https://carado.moe/when-in-doubt-kill-everyone.html), including S-risks that may be involved in instrumental cosmic-scale X-risk (maybe superintelligence wants to simulate civilizations in various ways for [acausal trade](https://www.lesswrong.com/tag/acausal-trade) or [other acausal weirdness](https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past) reasons?)\n* the next intelligent species on earth is more likely than us to solve alignment before superintelligence, and seems likely enough to be at least a little bit aligned with us (better than cosmic X-risk, at least)\n* same as above, but for nearby aliens (whether current or future)\n\n*smaller X-risk*, where we limit damage to just our civilization, seems harder than tiling the cosmos with paperclips; but at least it might be easier than [other plans](https://carado.moe/ai-risk-plans.html).\n\nin a similar way, reducing our civilization to ashes *without* actually becoming extinct might also be a way to get another shot, if we think we're likely to do less badly next time.\n\nremember: this is bigger than all of us.
when the fate of the cosmos is at play, we can't afford to be too selfish.\n\n", "url": "n/a", "filename": "smaller-x-risk.md", "id": "5be03d509f2d49ef5f1df514c011d317"} {"source": "carado.moe", "source_type": "markdown", "title": "bracing-alignment-tunnel", "authors": "n/a", "date_published": "n/a", "text": "2022-04-10\n\n## bracing for the alignment tunnel\n\nit looks like we're gonna invent AI that [kills everyone](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) before we figure out [AI alignment](https://en.wikipedia.org/wiki/AI_alignment).\n\nwhat this means is that soon, [if not already](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html), we are going to start bleeding [timelines](https://en.wikipedia.org/wiki/Many-worlds_interpretation), **hard**; by which i mean, an increasing ratio of multiverse-instants are gonna become dominated by unaligned AIs — and thus be devoid of population ([probably](https://carado.moe/above-paperclips-2.html)).\n\nafter that, there is a period in the (ever-diminishing amount of) surviving timelines, where we [ride on quantum immortality](https://en.wikipedia.org/wiki/Quantum_suicide_and_immortality) to solve alignment; after which, we finally reach [U-lines](https://carado.moe/timeline-codes.html), hopefully.\n\nby many theories of [anthropics](https://handsandcities.com/2021/09/30/sia-ssa-part-1-learning-from-the-fact-that-you-exist/), observing existing either before or after that period is a lot more likely than observing existing in it. before the period, it is more likely because there are a lot more populated timelines in which to exist; after the period, it is more likely because we can hopefully \"repopulate horizontally\" by allowing the population to increase again.\n\nif i am correct in the reasoning in this post, then being someone who exists in this very narrow alignment \"tunnel\" is exceedingly unlikely (barring weird circumstances such as post-singularity mankind choosing to simulate many variants of the tunnel for some reason). indeed, if you do observe being in it, you should think that something weird is going on, and update against the narrative presented in this post.\n\nyet, *again if i am correct*, this is a period where we need to hold tight and work on alignment, perhaps as quickly as possible in order to reduce astronomical waste. very few us's inhabit the tunnel, but those very few us's are the critical ones who we need to care about.\n\nso we need to brace our minds for the alignment tunnel. we need to commit to be persons who, if we observe being in the tunnel, will keep working on alignment even if, *from inside those timelines*, it looks like the reasoning i'm presenting here can't possibly be right. this is perhaps a weird case of instrumental rationality.\n\n(note that i'm not saying the conclusion of observing being in those timelines should be to stop working on alignment; perhaps we would want to work on it either way, in which case we don't have to worry about anything. 
but i worry that it could lead us to other places such as \"oh, maybe this AI killing everyone business isn't real after all, or maybe a weird alien force is preventing us from dying somehow\")\n\n", "url": "n/a", "filename": "bracing-alignment-tunnel.md", "id": "e1c044f0c52965e6765ca856020984a7"} {"source": "carado.moe", "source_type": "markdown", "title": "core-vals-exist-selfdet", "authors": "n/a", "date_published": "n/a", "text": "2020-09-09 ★\n\n## Determining core values & existential self-determination\n\n[Rationalism](https://www.readthesequences.com/) is about [epistemic rationality and instrumental rationality](https://www.readthesequences.com/What-Do-I-Mean-By-Rationality); but [when the two conflict, \"rationalists should win\"](https://www.readthesequences.com/Newcombs-Problem-And-Regret-Of-Rationality); so,\n\n> Instrumental rationality: systematically achieving your values.\n\nHow does one determine their core (axiomatic) values? Here's how i do it: i start from what i think is my set of values, and then i extrapolate what would happen if a [superintelligent](https://en.wikipedia.org/wiki/Superintelligence) [singleton](https://en.wikipedia.org/wiki/Singleton_%28global_governance%29) tried to implement those values.\n\nGenerally, the result looks like hell, so i try to figure out what went wrong and start again with a new set of values.\n\nFor example: imagine i think my only core value is general happiness. The most efficient way for a superintelligence to maximize that is to [rewire everyone's brain](https://wiki.lesswrong.com/wiki/Wireheading) to be in a constant state of bliss, and [turn as much of the universe as possible](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) into either more humans that experience constant bliss (whichever form of \"human\" is the cheapest resource-wise to produce) or into infrastructure that can be used to guarantee that nothing can ever risk damaging the current set of blissful humans.\n\nSo, clearly, this is wrong. The next step is freedom/self-determination; such that people can do whatever they want.\n\nHowever, the most efficient way to make sure people can do what they want is to make sure they don't want to do anything; that way, they can just do nothing all day, be happy with that, and some form of freedom is maximized.\n\nTo address this issue, my latest idea is to value something i'd like to call *existential self-determination*: the freedom to *exist as you normally would have*. It's a very silly notion, of course; there is no meaningful \"normally\". But still, i feel like something *like that* would be core to making sure not just that existing people can do what they want, but that humankind's general ability to be original people who want to do things is not compromised.\n\n", "url": "n/a", "filename": "core-vals-exist-selfdet.md", "id": "148f02bc18ae4959bb162c1e6ec5093e"} {"source": "carado.moe", "source_type": "markdown", "title": "finite-patients", "authors": "n/a", "date_published": "n/a", "text": "2022-03-21\n\n## are there finitely many moral patients?\n\nwouldn't it be neat if we didn't have to worry about [infinite ethics](https://handsandcities.com/2022/01/30/on-infinite-ethics/)?\n\ni think it is plausible that there are finitely many moral patients.\n\nthe first step is to [deduplicate moral patients by computational equivalence](https://carado.moe/deduplication-ethics.html).
this merges not only humans and other creatures we usually care about, but also probably a lot of [other potential sources of moral concerns](https://reducing-suffering.org/what-are-suffering-subroutines/).\n\nthen, i think we can restrict ourselves to patients in worlds that are discrete (like ours); even if there *were* moral patients in non-discrete worlds, it seems to me that from where we are, we could only access discrete stuff. so whether by inherent limitation, plain assumption, or just limiting the scope of this post, i'll only be talking about discrete agents (agents in discrete worlds).\n\nonce we have those limitations (deduplication and discreteness) in place, there are finitely many moral patients of any given size; the only way for an infinite variety of moral patients — or more precisely, moral patient moments — to come about is for some moral patients to grow in size forever. while infinite time seems plausible [even in this world](https://carado.moe/ai-alignment-wolfram-physics.html), it is not clear to me that whatever the hell a \"moral patient\" is can be arbitrarily complex; perhaps at a certain size, i start only caring about a *subset* of the information system that a \"person\" would consist of, a [\"sub-patient\"](https://carado.moe/deduplication-ethics.html).\n\n", "url": "n/a", "filename": "finite-patients.md", "id": "da6d1ae6873912f031577b4d51c01b08"} {"source": "carado.moe", "source_type": "markdown", "title": "value-crystallization", "authors": "n/a", "date_published": "n/a", "text": "2021-03-04\n\n## Value Crystallization\n\nthere is a weird phenomenon whereby, as soon as an agent is rational, it will want to conserve its current values, as that is in general the most sure way to ensure it will be ablo to start achieving those values.\n\nhowever, the values themselves aren't, and in fact [cannot](https://en.wikipedia.org/wiki/Is–ought_problem) be determined purely rationally; rationality can at most help [investigate](https://carado.moe/core-vals-exist-selfdet.html) what values one has.\n\ngiven this, there is a weird effect whereby one might strategize about when or even if to inform other people about [rationality](https://www.readthesequences.com/) at all: depending on when this is done, whichever values they have at the time might get crystallized forever; whereas otherwise, without an understanding of why they should try to conserve their value, they would let those drift at random (or more likely, at the whim of their surroundings, notably friends and market forces).\n\nfor someone who hasn't thought about values much, *even just making them wonder about the matter of values* might have this effect to an extent.\n\n", "url": "n/a", "filename": "value-crystallization.md", "id": "70e8fee12422ad92bd0ac42dc5e5f675"} {"source": "carado.moe", "source_type": "markdown", "title": "nonscarce-compute-optimize-out", "authors": "n/a", "date_published": "n/a", "text": "2021-12-25\n\n## non-scarce compute means moral patients might not get optimized out\n\ni tend to assume AI-borne [X-lines are overwhelmingly more likely than S-lines or U-lines](https://carado.moe/timeline-codes.html), because in almost all cases (such as [paperclip manufacturing](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer)) the AI eventually realizes that it doesn't need to waste resources on moral patients existing (whether they're having an okay time or are suffering), and so recycles us into more resources to make paperclips with.\n\nbut [if wolfram's idea is 
correct](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/#how-it-works) — a possibility which [i'm increasingly considering](https://carado.moe/ai-alignment-wolfram-physics.html) — it may very well be that computation is not a scarce resource; instead printing always more paperclips is a trivial enough task, and the AI might let \"bubbles\" of computation exist which are useless to its goals, even growing bubbles.\n\nand those could contain moral patients again.\n\nof course this reduces to the [*no room above paperclips* argument](https://carado.moe/above-paperclips.html) again: inside that bubble we probly just eventually make our own superintelligence again, and *it* takes over everything, and then either bubbles appear again and the cycle repeats, or eventually in one of the layers they don't anymore and the cycle ends.\n\nbut, i still think it's an interesting perspective for how something-maximizing AIs might not need to actually take over *everything* to maximize, if there's nonscarce compute as wolfram's perspective can imply.\n\n", "url": "n/a", "filename": "nonscarce-compute-optimize-out.md", "id": "c4d1622ee0040742031e25cc3d1bc8fa"} {"source": "carado.moe", "source_type": "markdown", "title": "ai-risk-plans", "authors": "n/a", "date_published": "n/a", "text": "2022-05-12\n\n## AI risk plans\n\npeople have criticized my [*peerless*](https://carado.moe/the-peerless.html) plan on the grounds that it's too long-term/far-fetched.\n\nwhile i don't disagree, i think that it is only one variable to be taken into consideration. here is a comparison of plans for addressing AI risk, with vague estimates.\n\n|plan|achievable before [X-line](https://carado.moe/timeline-codes.html)¹|chance of [U-line](https://carado.moe/timeline-codes.html)²|[S-risk](https://en.wikipedia.org/wiki/S-risk)²|\n|---|---|---|---|\n|doing nothing|100%|<[1e-6](https://www.lesswrong.com/tag/orthogonality-thesis)|<1e-6|\n|direct alignment|[.1%](https://www.readthesequences.com/Value-Is-Fragile)|5% → .005%|[5%](https://reducing-suffering.org/near-miss/) → .005%|\n|[the peerless](https://carado.moe/the-peerless.html)|2%|10% → .2%|1% → 0.02%|\n\n* ¹: assuming significant effort is put behind the plan in question, what is the likelyhood that we'll have accomplished the work to what we *believe* to be completion? note that my current AI timelines are pretty pessimistic (we become more likely to die than not this decade)\n* ²: *if* we believe to have completed the work; the latter number is adjusted by being multiplied with \"achievable before [X-line](https://carado.moe/timeline-codes.html)\".\n\nnote that the numbers i put here are only very vague estimates, feel free to replace them with your own guesses. but my point is, in order for the peerless to be the plan we should be working on, we don't need it to be *feasible*, we just need it to be *less infeasible than all the other plans*. i think the peerless is more tractable than doing direct alignment, and only more risky because it has more chances to succeed. depending on how scared of S-lines you are, you should push for either doing nothing (and thus [oppose direct alignment](https://carado.moe/against-ai-alignment.html)) or for my plan. (or come up with your own, and then compare it to these!)\n\nnot pictured: the plan to [melt all GPUs](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/7im8at9PmhbT4JHsW), because it's just a modifier on what we do afterwards. 
but yes, melting all GPUs is a great idea if we think it is more reasonably achievable than the other plans.\n\n", "url": "n/a", "filename": "ai-risk-plans.md", "id": "e1c89ad9e43464b0fdd0d8b0e8702f8c"}