diff --git "a/carado.moe.jsonl" "b/carado.moe.jsonl" deleted file mode 100644--- "a/carado.moe.jsonl" +++ /dev/null @@ -1,249 +0,0 @@ -{"id": "9d8dcb8be40b135ce9abcc0b24c9ebd3", "title": "epistemic range", "url": "https://carado.moe/epistemic-range.html", "source": "carado.moe", "source_type": "blog", "text": "epistemic range\n---------------\n\n\nthere is a [*security mindset*](https://www.lesswrong.com/posts/8gqrbnW758qjHFTrH/security-mindset-and-ordinary-paranoia)-ish general principle, of which [motivated stopping and motivated continuation](https://www.lesswrong.com/posts/L32LHWzy9FzSDazEg/motivated-stopping-and-motivated-continuation) are *ordinary paranoia*-ish special cases, which i call \"epistemic range\".\n\n\nmotivated stopping and motivated continuation are *heuristics* that catch some failure modes where you pursue an epistemic investigation — an instance of reasoning about a question in order to improve your belief state about the answer — to whichever extent lets you get a belief state that is the one you *want* to get. of course, in epistemic rationality, you should not *want* to believe any particular thing; you want your belief state to correspond to whatever is actually true.\n\n\nand, that there can be such a thing as {stopping an epistemic inquiry too early} or {continuing an epistemic inquiry for too long} imply that there is a range, and maybe even a particular point, at which you should stop. in this respect, i believe epistemology to be akin to science: you would want to (do the kind of thing that is equivalent to) *preregister* epistemic investigations with a method for knowing when to stop.\n\n\npersonally, i believe that a good rule of thumb for when to stop is when [it feels like you can just as easily come up with narratives for multiple mutually incompatible possibilities](overcoming-narratives.html). on [dath ilan](https://www.lesswrong.com/tag/dath-ilan/discussion), they probably have a more robust notion of where to stop, and look at the belief state, and decide that this is what you believe for now until you either\n\n\n* get more object-level evidence about the topic\n* get more meta-level evidence about where your stopping point should be\n* get more contextual evidence about how much confidence you need in this belief, to accomplish your goals\n\n\nwhich occur to me as the three main things that impact how far one should want to pursue an epistemic investigation.\n\n\nnote that epistemic range depends, among other things, on where your epistemic investigation has gotten you; as a mathematical function, epistemic range should be a function from current-state-of-epistemic-investigation to boolean (whether to stop or not), not a function from question and context to static number.\n\n\non a particular subject matter, the notion of {how far you should go} is your **epistemic range**. 
the set of all your epistemic ranges on various questions is your **epistemic frontier**, and it nicely draws a shape representing how much you can figure out.", "date_published": "2023-07-09T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "e85a41612b22a9850482858963201774", "title": "a short chat about realityfluid", "url": "https://carado.moe/short-chat-realityfluid.html", "source": "carado.moe", "source_type": "blog", "text": "a short chat about realityfluid\n-------------------------------\n\n\n(anonymized and lightly edited)", "date_published": "2023-06-14T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "77d5afe8d37de829fd9e909ec70001f1", "title": "formalizing the QACI alignment formal-goal", "url": "https://carado.moe/qaci-math.html", "source": "carado.moe", "source_type": "blog", "text": "*this work was done by [Tamsin Leake](https://carado.moe) and [Julia Persson](https://www.lesswrong.com/users/juliahp) at [Orthogonal](https://orxl.org).* \n\n*thanks to [mesaoptimizer](https://mesaoptimizer.com/) for his help putting together this post.*\n\n\nformalizing the QACI alignment formal-goal\n------------------------------------------\n\n\nwhat does the [QACI](qaci.html) plan for [formal-goal alignment](formal-alignment.html) actually look like when formalized as math? in this post, we'll be presenting our current formalization, which we believe has most critical details filled in.\n\n\nthis post gives a brief explanation of what QACI tries to do, but people unfamiliar with this alignment scheme might want to read the [narrative explanation](narrative-explanation-qaci.html), which is a recommended introduction to QACI — though keep in mind that it's not entirely up to date.\n\n\nthis post straightforwardly builds up the math for QACI from the bottom up; and while it does explain all of the math, it does so by presenting it all at once. you might find prefer reading the companion post, [*\"an Evangelion dialogue explaining the QACI alignment plan\"*](qaci-invention-dialogue.html), which builds up this math gradually and provides more context.\n\n\n### 1. math constructs\n\n\nin this first part, we'll be defining a collection of mathematical constructs which we'll be using in the rest of the post.\n\n\n#### 1.1. basic set theory\n\n\nwe'll be assuming basic set theory notation; in particular, A×B×C is the set of tuples whose elements are respectively members of the sets A, B, and C, and for n∈ℕ, Sn is the set of tuples of n elements, all members of S.\n\n\n𝔹={⊤,⊥} is the set of booleans and ℕ is the set of natural numbers including 0.\n\n\ngiven a set X, 𝒫(X) will be the set of subsets of X.\n\n\n#S is the cardinality (number of different elements) in set S.\n\n\nfor some set X and some complete ordering <∈X2→𝔹, min< and max< are two functions of type 𝒫(X)\\{∅}→X finding the respective minimum and maximum element of non-empty sets when they exist, using < as an ordering.\n\n\n#### 1.2. functions and programs\n\n\nif n∈ℕ, then we'll denote f∘n as repeated composition of f: f∘…∘f (n times), with ∘ being the composition operator: (f∘g)(x)=f(g(x)).\n\n\nλx:X.B is an anonymous function defined over set X, whose parameter x is bound to its argument in its body B when it is called.\n\n\nA→B is the set of functions from A to B, with → being right-associative (A→B→C is A→(B→C)). if f∈A→B→C, then f(x)(y) is simply f applied once to x∈A, and then the resulting function of type B→C being applied to y∈B. 
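as a quick illustration of the conventions above, here's a minimal python sketch (the function names are mine, purely for illustration) of repeated composition f∘n and of calling a curried function as f(x)(y):

```python
from functools import reduce

def compose(f, g):
    """(f ∘ g)(x) = f(g(x))."""
    return lambda x: f(g(x))

def iterate(f, n):
    """f∘n: f composed with itself n times (the identity when n = 0)."""
    return reduce(compose, [f] * n, lambda x: x)

# a curried function of type A → B → C: apply it to x, then apply the result to y
def add(x):
    return lambda y: x + y

assert add(2)(5) == 7              # the f(x)(y) notation from the text
assert iterate(add(1), 3)(0) == 3  # (+1) composed three times
```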
A→B is sometimes denoted BA in set theory.\n\n\nA→HB is the set of always-halting, always-succeeding, deterministic programs taking as input an A and returning a B.\n\n\ngiven f∈A→HB and x∈A, R(f,x)∈ℕ\\{0} is the runtime duration of executing f with input x, measured in compute steps doing a constant amount of work each — such as turing machine updates.\n\n\n#### 1.3. sum notation\n\n\ni'll be using a syntax for sums ∑ in which the sum iterates over all possibles values for the variables listed *above* it, given that the constraints *below* it hold.\n\n\nx,y∑y=1y=xmod2x∈{1,2,3,4}x≤2\n\n\nsays \"for any value of x and y where these three constraints hold, sum y\".\n\n\n#### 1.4. distributions\n\n\nfor any countable set X, the set of distributions over X is defined as:\n\n\nΔX≔{f|f∈X→[0;1],∑x∈Xxf(x)≤1}\n\n\na function f∈X→[0;1] is a distribution ΔX over X if and only if its sum over all of X is never greater than 1. we call \"mass\" the scalar in [0;1] which a distribution assigns to any value. note that in our definition of distribution, we do not require that the distribution over all elements in the domain sums up to 1, but merely that it sums up to *at most* 1. this means that different distributions can have different \"total mass\".\n\n\nwe define ΔX0∈ΔX as the empty distribution: ΔX0(x)=0.\n\n\nwe define ΔX1∈X→ΔX as the distribution entirely concentrated on one element: ΔX1(x)(y)={1ify=x0ify≠x\n\n\nwe define NormalizeX∈ΔX→ΔX which modifies a distribution to make it sum to 1 over all of its elements, except for empty distributions:\n\n\nNormalizeX(δ)(x)≔{δ(x)∑y∈Xyδ(y)ifδ≠ΔX00ifδ=ΔX0\n\n\nwe define UniformX as a distribution attributing equal value to every different element in a finite set X, or the empty distribution if X is infinite.\n\n\nUniformX(x)≔{1#Xif#X∈ℕ0if#X∉ℕ\n\n\nwe define maxXΔ∈ΔX→𝒫(X) as the function finding the elements of a distribution with the highest value:\n\n\nmaxXΔ(δ)≔{x|x∈X,∀x′∈X:δ(x′)≤δ(x)}\n\n\n#### 1.5. constrained mass\n\n\ngiven distributions, we will define a notation which i'll call \"constrained mass\".\n\n\nit is defined as a syntactic structure that turns into a sum:\n\n\nv1,…,vpv1,…,vp𝐌[V]≔∑X1(x1)⋅…⋅Xn(xn)⋅Vx1:X1x1∈domain(X1)⋮⋮xn:Xnxn∈domain(Xn)C1C1⋮⋮CmCm\n\n\nin which variables x are sampled from their respective distributions X, such that each instance of V is multiplied by X(x) for each x. constraints C and iterated variables v are kept as-is.\n\n\nit is intended to weigh its expression body V by various sets of assignments of values to the variables v, weighed by how much mass the X distributions return and filtered for when the C constraints hold.\n\n\nto take a fairly abstract but fully calculable example,\n\n\n𝐌x,f[f(x,2)]≔∑x,f(λn:{1,2,3}.n10)(x)⋅Uniform{min,max}(f)⋅f(x,2)x:λn:{1,2,3}.n10x∈domain(λn:{1,2,3}.n10)f:Uniform{min,max}f∈domain(Uniform{min,max})xmod2≠0xmod2≠0=∑x,fx10⋅12⋅f(x,2)x∈{1,2,3}f∈{min,max}xmod2≠0=1⋅min(1,2)10⋅2+3⋅min(3,2)10⋅2+1⋅max(1,2)10⋅2+3⋅max(3,2)10⋅2=1⋅1+3⋅2+1⋅2+3⋅320=1+6+2+920=1820=910\n\n\nin this syntax, the variables being sampled from distributions are allowed to be bound by an arbitrary amount of logical constraints or new variable bindings below it, other than the variables being sampled from distributions.\n\n\n#### 1.6. bitstrings\n\n\n𝔹\\* is the set of finite bitstrings.\n\n\nbitstrings can be compared using the lexicographic order <𝔹\\*, and concatenated using the ‖ operator. 
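here is a small python sketch (all names are mine, not part of the formalism) of the distribution helpers from section 1.4 and of the worked constrained-mass example from section 1.5, assuming finite domains so the sums can be computed directly; it reproduces the 9/10 result:

```python
# distributions over a finite set X are maps x -> mass in [0,1], with total mass <= 1
def normalize(delta):
    """NormalizeX: rescale a non-empty distribution so its masses sum to 1."""
    total = sum(delta.values())
    return {x: m / total for x, m in delta.items()} if total > 0 else delta

def uniform(xs):
    """UniformX over a finite set: equal mass 1/#X on every element."""
    xs = list(xs)
    return {x: 1 / len(xs) for x in xs}

# the constrained-mass example from section 1.5:
# sample x with mass x/10 over {1,2,3}, f uniformly over {min, max},
# keep only odd x, and weigh each f(x, 2) by the product of the masses.
X = {x: x / 10 for x in (1, 2, 3)}
F = uniform([min, max])
mass = sum(X[x] * F[f] * f(x, 2)
           for x in X for f in F
           if x % 2 != 0)
assert abs(mass - 9 / 10) < 1e-9
assert abs(sum(normalize(X).values()) - 1) < 1e-9
```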
for a bitstring x∈𝔹\\*, |x|∈ℕ is its length in number of bits.\n\n\nfor any countable set X, EncodeX∈X→𝔹\\* and will be some reasonable function to convert values to bitstrings, such that ∀(x,y)∈X2:EncodeX(x)=EncodeX(y)⇔x=y. \"reasonable\" entails constraints such as:\n\n\n* it can be computed efficiently.\n* it can be inverted efficiently and unambiguously.\n* its output's size is somewhat proportional to the actual amount of information. for example, integers are encoded in binary, not unary.\n\n\n#### 1.7. cryptography\n\n\nwe posit σ≔𝔹σ¯, the set of \"signatures\", sufficiently large bitstrings for cryptographic and uniqueness purposes, with their length defined as σ¯=231 for now. this *feels* to me like it should be enough, and if it isn't then something is fundamentally wrong with the whole scheme, such that no manageable larger size would do either.\n\n\nwe posit a function ExpensiveHash∈𝔹\\*→Hσ, to generate fixed-sized strings from seed bitstrings, which must satisfy the following:\n\n\n* it must be too expensive for the AI to compute *in any way* (including through superintelligently clever tricks), but cheap enough that we can compute it outside of the AI — for example, it could require quantum computation, and the AI could be restricted to classical computers\n* it should take longer to compute (again, in any way) than the expected correct versions of Loc's f,g functions (as will be defined later) could afford to run\n* it should tend to be collision-resistant\n\n\nat some point, we might come up with more formal ways to define ExpensiveHash in a way that checks that it isn't being computed inside Loc's f,g functions, nor inside the AI.\n\n\n#### 1.8. text and math evaluation\n\n\nfor any countable set X, we'll be assuming EvalMathX∈𝔹\\*→{{x}|x∈X}∪{∅} to interpret a piece of text as a piece of math in some formal language, evaluating to either:\n\n\n* a set of just one element of X, if the math parses and evaluates properly to an element of X\n* an empty set otherwise\n\n\nfor example,\n\n\nEvalMathℕ(\"1+2\")={3}EvalMathℕ(\"hello\")=∅\n\n\n#### 1.9. kolmogorov simplicity\n\n\nfor any countable sets X and P:\n\n\nKX−∈ΔX is some \"[kolmogorov](https://en.wikipedia.org/wiki/Kolmogorov_complexity) simplicity\" distribution over set X which has the properties of never assigning 0, and summing/converging to 1 over all of X. it must satisfy ∀x∈X:KX−(x)>0 and ∑x∈XxKX−(x)=1.\n\n\nK− is expected to give more mass to simpler elements, in an information-theoretic sense.\n\n\nnotably, it is expected to \"deduplicate\" information that appears in multiple parts of a same mathematical object, such that even if x∈𝔹\\* holds lots of information, K𝔹\\*−(x) is not much higher (higher simplicity, i.e. lower complexity) to K𝔹\\*×𝔹\\*−(x,x).\n\n\nwe could define KX− [similarly to cross-entropy](https://www.lesswrong.com/posts/KcvJXhKqx4itFNWty/k-complexity-is-silly-use-cross-entropy-instead), with some universal turing machine UTM∈𝔹\\*×ℕ→𝔹\\* returning the state of its tape after a certain number of compute steps:\n\n\ni,nKX−≔NormalizeX(λx:X.∑1(2|i|⋅(n+1))2)i∈𝔹\\*n∈ℕUTM(i,n)=EncodeX(x)\n\n\n*kolmogorov simplicity over X with a prior from P*, of type KP,X−~:P→ΔX, allows elements it samples over to share information with a prior piece of information in P. it is defined as KP,X−~(p)≔NormalizeX(λx:X.KP×X−(p,x)).\n\n\n### 2. physics\n\n\nin this section we posit some formalisms for modeling world-states, and sketch out an implementation for them.\n\n\n#### 2.1. 
general physics\n\n\nwe will posit some countable set Ω of world-states, and a distribution Ωα∈ΔΩ of possible initial world-states.\n\n\nwe'll also posit a function Ωα→∈Ω→ΔΩ which produces a distribution of future world-states for any specific world-state in the universe starting at α.\n\n\ngiven an initial world-state α∈Ω, we'll call Ωα→(α) the \"universe\" that it gives rise to. it must be the case that ∑ω∈ΩωΩα→(α)(ω)=1.\n\n\nwhen α describes the start of a quantum universe, individual world-states Ω following it by Ωα→ would be expected to correspond to [many-worlds everett branches](https://www.lesswrong.com/tag/many-worlds-interpretation).\n\n\nfor concreteness's sake, we could posit Ω⊂𝔹\\*, though note that α is expected to not just hold information about the initial state of the universe, but also about how it is computed forwards.\n\n\ngiven a particular α∈Ω:\n\n\nfinally, we define SimilarPastsα∈Ω×Ω→[0;1] which checks how much they have past world-states ωpast in common:\n\n\nω1SimilarPastsα(ω2,ω2′)≔𝐌[Ωα→(ω1)(ω2)⋅Ωα→(ω1)(ω2′)]ω1:Ωα→(α)\n\n\n#### 2.2. quantum turing machines\n\n\nwe will sketch out here a proposal for Ω, Ωα, and Ω→ such that our world-state w has hopefully non-exponentially-small Ωα→(α)(ω).\n\n\nthe basis for this will be a universal [quantum turing machine](https://en.wikipedia.org/wiki/Quantum_Turing_machine). we will posit:\n\n\n* Tape≔{s|s∈𝒫(ℤ),#s∈ℕ} the set of turing machine tapes, as *finite* (thanks to #s∈ℕ) sets of relative integers representing positions in the tape holding a 1 rather than a 0.\n* State some finite (#S∈ℕ) set of states, and some state0∈State.\n* Ω≔Tape×State×ℤ: world-states consist of a tape, state, and machine head index.\n* ΔΩq≔{f|f∈Ω→ℂ,∑ω∈Ωω‖f(ω)‖2=1} the set of \"quantum distributions\" over world-states\n* Step∈ΔΩq→ΔΩq the \"time step\" operator running some universal turing machine's transition matrix to turn one quantum distribution of world-states into another\n\n\nwe'll also define Δℕ2∈Δℕ as the \"quadratic realityfluid distribution\" which assigns diminishing quantities to natural numbers, but only quadratically diminishing: Δℕ2(n)≔Normalizeℕ(1(n+1)2)\n\n\nwe can then define Ω→ as repeated applications of Step, with quadratically diminishing realityfluid:\n\n\nn1,n2,sΩα→(ω1)(ω2)≔c⋅𝐌[s(n1,ω1)⋅s(n1+n2,ω2)]n1:Δℕ2n2:Δℕ2s(n,ω)=‖Step∘n(ΔΩ1(α))(ω)‖2\n\n\nwhere the constant c is whatever scalar it needs to be for ∑ω∈ΩωΩα→(α)(ω)=1 to be satisfied.\n\n\nthis implementation of Ωα→ measures how much ω2 is in the future of ω1 by finding paths from α to ω1, and then longer paths from α to ω2.\n\n\nand finally, we define Ωα as a distribution giving non-zero value to world-states (t,state0,0) where t is a tape where no negative-index cells are set to 1.\n\n\nΩα(t,s,i)≔{Δℕ2(∑n∈tn2n)ifs=state0,i=0,t⊂ℕ0otherwise\n\n\nbecause we selected a universal (quantum) turing machine, there is at least one input tape implementing any single quantum algorithm, including the quantum algorithm implementing our physics.\n\n\n### 3. 
implementing QACI\n\n\nfinally, we get into the core mechanisms of QACI.\n\n\nthe core idea of QACI is \"blob location\": mathematically formalizing the idea of locating our world and locating bitstrings (which i'll call \"blobs\") stored on computers within that world, out of the space of all possible computational universes, by sampling over functions which extract those blobs from world-states in Ω and functions which can produce a counterfactual world where that blob has been replaced with another blob of the same length (in number of bits).\n\n\n#### 3.1. question blob and observation\n\n\nthroughout these functions, we will posit the following constants:\n\n\n* the initial factual question blob q∈𝔹\\*\n* two \"observation\" blobs μ1∈𝔹\\* and μ2∈𝔹\\*\n\n\nμ1,μ2 are variables which will be passed around, called \"observations\". in normal AI agent framings, an AI would have a history of actions and observations, and decide on its next action based on that; but, in the [one-shot](delegated-embedded-agency-decision-theory.html) framing we use, there is only a single action and a fixed set of observations. the observations, in practice, will be a very large pieces of data helping the AI locate itself in the multiverse of all possible computations, as well as get a better idea of how and where it is being ran. we will likely include in it things like:\n\n\n* a full explanation of the QACI alignment plan, including the math\n* the AI's code\n* a dump of wikipedia and other large parts of the internet\n* a copy of some LLM\n\n\nμ1 will be produced before the question blob is generated, and μ2 will be produced after the question blob is generated but before the AI is launched.\n\n\n#### 3.2. overview\n\n\nthe overall shape of what we're doing can be seen on the illustration below: we start at the start of the universe α, and use four blob locations and a counterfactual blob function call to locate five other world-states. the illustration shows distributions of future and past world-states, as well as a particular sampling of for all four blob locations.\n\n\n* we sample ωμ1 using Loc(α,Ωα→(α),μ1,ξ), world-states containing the first observation μ1\n* we sample ωμ2 using Loc(α,Ωα→(ωμ1),μ2,ξ), world-states containing the second observation μ2\n* we sample ωq using Loc(α,Ωα→(ωμ1),q,ξ), world-states containing the question blob q, but requiring that its world-state ωq precede the world-state ωμ2\n* we get ωq′, the world-state with a counterfactual question blob, using blob location γq found by sampling ωq\n* we sample ωr using Loc(α,Ωα→(ωq′),r,ξ), possible world-states containing an answer to a given counterfactual question q′\n\n\n![](qaci-math-2.svg)\n\n\nthe location path from ωq′ to ωr is used to run QACI intervals, where counterfactual questions q′ are inserted and answers r are located in their future.\n\n\n(we could also build fancier schemes where we locate the AI's returned action, or its code running over time, in order to \"tie more tightly\" the blob locations to the AI — but it is not clear that this helps much with [blob location failure modes i'm concerned about](blob-quantum-issue.html).)\n\n\nfor the moment, we merely rely on μ1 and μ2 being uniquely identifying enough — though implementing them as *static bitstrings* might suffice, perhaps they could instead be implemented as *[lazily evaluated](https://en.wikipedia.org/wiki/Lazy_evaluation) associative maps*. 
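as a minimal sketch of what such a lazily evaluated observation map could look like — assuming observations are exposed as read-only key/value maps; the keys and fetchers below are purely illustrative:

```python
class LazyObservation:
    """read-only map whose entries are computed or fetched on first access.

    from the AI's perspective this behaves like a static object: repeated
    reads of the same key always return the same (cached) bytes.
    """
    def __init__(self, fetchers):
        self._fetchers = fetchers   # key -> zero-argument function producing bytes
        self._cache = {}

    def __getitem__(self, key):
        if key not in self._cache:
            # the fetch (e.g. a download) runs only once; the result is then frozen
            self._cache[key] = self._fetchers[key]()
        return self._cache[key]

# illustrative only: the actual contents of the observations are not specified here
mu_2 = LazyObservation({
    "qaci_plan":    lambda: b"<full text of the QACI plan and its math>",
    "wikipedia/AI": lambda: b"<fetched from the internet on first access>",
})
```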
when the AI tries to access members of those maps, code which computes or fetches information from the world (such as from the internet) would be executed determines the contents of that part of the observation object. this way, the observation would be conceptualized as a static object to the AI — and indeed it wouldn't be able to observe any mutations — but it'd be able to observe arbitrary amounts of the world, not just amounts we'd have previously downloaded.\n\n\nwe could make the QACI return not a scoring over actions but a proper utility function, but this only constrains the AI's action space and doesn't look like it helps in any way, including making QACI easier for the AI to make good guesses about. perhaps with utility functions we find a way to make the AI go \"ah, well i'm not able to steer much future in world-states where i'm in hijacked sims\", but it's not clear how or even that this helps much. so for now, the math focuses on this simple case of returning an action-scoring function.\n\n\n#### 3.3. blob location\n\n\n(remember that while this section does explain the blob location math, it does so by presenting it all at once. for a gentler introduction, see part **7. blob location** (and onwards) of the [dialogue explaining QACI](qaci-invention-dialogue.html))\n\n\nfor any blob length (in bits) n∈ℕ:\n\n\nfirst, we'll posit Γn≔𝔹n→Ω the set of blob locations; they're identified by a counterfactual blob location function, which takes any counterfactual blob and return the world-state in which a factual blob has been replaced with that counterfactual blob.\n\n\nLocn∈Ω×ΔΩ×𝔹n×Ξ→ΔΓn tries to locate an individual blob b (as a bitstring of length n) in a particular world-state sampled from the time-distribution (past or future) δ (which will usually be a distribution returned by Ωα→) within the universe starting at α.\n\n\nit returns a distribution over counterfactual insertion functions of type 𝔹n→Ω which take a counterfactual blob and return the matching counterfactual world-state. the elements in that distribution typically sum up to much less than 1; the total amount they sum up to corresponds to how much Loc finds the given blob in the given world-state to begin with; thus, sampling from a distribution returned by Loc in a constrained mass calculation 𝐌 is useful even if said result is not used, because of its multiplying factor.\n\n\nnote that the returned counterfactual insertion function can be used to locate the factual world-state — simply give it the factual blob as input.\n\n\nΞ is some countably infinite set of arbitrary pieces of information which each call to Loc can use internally — the goal of this is for multiple different calls to Loc to be able to share some prior information, while only being penalized by K− for it once. for example, an element of Ξ might describe how to extract the contents of a specific laptop's memory from physics, and individual Loc calls only need to specify the date and the memory range. 
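as a preview of the f,g "decode / re-encode" pair which the Loc definition below samples over, here is a deliberately naive toy pair in python, for world-states that are plain bitstrings with the blob at a known offset (the offset and encoding are exactly the kind of thing a real location hypothesis has to supply):

```python
BLOB_LEN = 32  # n, the blob length in bits

def f(world: str) -> tuple[str, str]:
    """decode: split a world-state (a bitstring) into (blob, tau)."""
    offset = 10                     # this toy hypothesis says the blob lives here
    blob = world[offset:offset + BLOB_LEN]
    tau = world[:offset] + world[offset + BLOB_LEN:]   # everything except the blob
    return blob, tau

def g(blob: str, tau: str) -> str:
    """re-encode: rebuild a (counterfactual) world-state from (blob, tau)."""
    offset = 10
    return tau[:offset] + blob + tau[offset:]

world = "0" * 10 + "1" * BLOB_LEN + "0" * 20
b, tau = f(world)
assert g(b, tau) == world                               # factual round-trip
counterfactual_world = g("01" * (BLOB_LEN // 2), tau)   # same length, new blob
```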
for concreteness, we can posit Ξ≔𝔹\\*, the set of finite bitstrings.\n\n\nf,g,ω,b′,τLocn(α,δ,b,ξ)(γ)≔𝐌[SimilarPastsα(ω,g(b′,τ))R(g,(b′,τ))+R(f,g(b′,τ))](f,g):KΞ,(Ω→H𝔹n×𝔹\\*)×(𝔹n×𝔹\\*→HΩ)−~(ξ)ω:λω:maxXΔ(λω:Ω.{δ(ω)iff(ω)=(b,τ)0otherwise).δ(ω)b′:Uniform𝔹n∀b′′∈𝔹n:γ(b′′)=g(b′′,τ)f(γ(b′′))=(b′′,τ)\n\n\nLoc works by sampling a pair of functions f,g, which convert world-states forth and back into {pairs whose first element is the blob and whose second element represents everything in the world-state except the blob}.\n\n\nthat latter piece of information is called τ (tau), and rather than being sampled τ is defined by the return value of f on the original world-state — notably, τ is not penalized for being arbitrarily large, though f and g are penalized for their compute time.\n\n\nfor a given fixed pair of f and g, Loc finds the set of hypothesis world-states ω with the highest value within the time-distribution δ for which f,g work as intended. this is intended to select the \"closest in time\" world-states in δ, to avoid adversarial attackers generating their own factual blobs and capturing our location.\n\n\nit then weighs locations using, for every counterfactual blob b′∈𝔹n:\n\n\n* the degree to which counterfactual world-states tend to share pasts with the original factual world-state, for b′.\n* the compute time of g and f on counterfactual blobs and world-states respectively.\n\n\nnote that Locn, by design, only supports counterfactual blobs whose length n is equal to the length of the initial factual blob b — it wouldn't really make sense to talk about \"replacing bits\" if the bits are different.\n\n\nin effect, Loc takes random f,g decoding and re-encoding programs, measures how complex and expensive they are and how far from our desired distributions are world-states in which they work, and how close to the factual world-state their counterfactual world-states are.\n\n\n#### 3.4. blob signing\n\n\nwe'll define Π≔𝔹|q|−σ¯, the set of possible answer bitstring payloads.\n\n\ncounterfactual questions will not be signed, and thus will be the set of bitstrings of the same length as the factual question — 𝔹|q|.\n\n\nwe'll define Sign∈Π×𝔹\\*→𝔹|q| as Sign(π,k)≔ExpensiveHash(π‖k)‖π. this functions tags blob payloads using a \"signature\" generated from a seed bitstring, concatenating it to the blob payload.\n\n\n#### 3.5. action-scoring functions\n\n\nwe will posit A⊂𝔹\\* as the finite set of actions the AI can take, as a finite set of bitstrings.\n\n\nwe'll call U≔A→[0;1] the set of \"scoring functions\" over actions — functions which \"have an opinion\" about various actions. this is similar to utility functions, except it's over actions rather than over worlds or world-histories.\n\n\nthey can be composed using Compose∈ΔU→U, which could be simple scoring-function averaging:\n\n\nuCompose(δ)(a)≔?𝐌[u(a)]u:δ\n\n\nbut alternatively, we could use something like [Diffractor's Rose bargaining](https://www.lesswrong.com/posts/vJ7ggyjuP4u2yHNcP/threat-resistant-bargaining-megapost-introducing-the-rose) to reduce the ability for scoring/utility functions to threaten each other — and notably ours.\n\n\nCompose≔?Rose\n\n\n(where i'm using ≔? to mean \"maybe define this way, but i'm not sure\")\n\n\n#### 3.6. 
QACI query\n\n\nusing those, we define QACI∈Ω×Γ|q|×Ξ×𝔹|q|→ΔΠ which given a physics hypothesis α, a question blob location γq, and a blob location prior ξ, returns the highest guess returned answer payload πr for a given counterfactual question q′.\n\n\nγrQACI(α,γq,ξ,q′)(πr)≔𝐌[1]γr:Loc|q|(α,Ωα→(γq(q′)),Sign(πr,q′),ξ)\n\n\nQACI works by sampling answer blob locations γr, from world-states in the future of the counterfactual question world-state γq(q′), signed using q′.\n\n\nwith its first three parameters fixed, QACI becomes the straightforward counterfactual query function 𝔹|q|→ΔΠ as used in [the narrative explanation of QACI](narrative-explanation-qaci.html): one can call it with arbitrary counterfactual text inputs (within the size limitation), and get a distribution over possible answers, which can easily be collapsed using maxΠΔ.\n\n\n#### 3.7. top-level QACI call\n\n\nthe top-level call to the QACI query function, QACI0∈Ω×Γ|q|×Ξ→ΔU interprets its output as a piece of math and executes it with, as parameters, various global and contextual values it might need access to, and returns a distribution over action-scoring functions:\n\n\nπr,fQACI0(α,γq,ξ)(u)≔𝐌[1]πr:QACI(α,γq,ξ,q0′)f∈EvalMath{q}×{μ1}×{μ2}×Ω×Γ|q|×Ξ→U(πr)f(q,μ1,μ2,α,γq,ξ)=u\n\n\nwhere q0′∈𝔹|q| is the initial counterfactual blob, such as \"return a good scoring of actions\" encoded in ASCII, padded with zeros to be of the right length.\n\n\nQACI0's distribution over answers demands that the answer payload πr, when interpreted as math and with all required contextual variables passed as input (q,μ1,μ2,α,γq,ξ), returns an action-scoring function equal to u — this is how it measures the weight of any action-scoring function u.\n\n\n𝐌[1] makes it that QACI0's distributions are only determined by the sampled variables and logical requirements.\n\n\nEvalMath's f function having access to QACI's distribution over output texts rather than best candidate allows it to discard as many invalid candidates as it needs and stick to ones that match whatever constraits it has.\n\n\n#### 3.8. action scoring\n\n\nwe'll posit the AI as AI∈U→A — a program which tries to satisfy a scoring over actions, by making a high-expected-score guess.\n\n\nwe define Score∈U, the action-scoring function which the AI will be making guesses about as a scoring function over actions, which happens to be one that is, hopefully, good. this is the scoring function for which the AI will be trying to produce an action that is as favorable as possible, within its limited capabilities.\n\n\nα,ξ,γμ1,γμ2,γqScore≔Compose(λu:U.𝐌[NormalizeU(QACI0(α,γq,ξ))(u)])α:Ωαξ:KΞ−γμ1:Loc|μ1|(α,Ωα→(α),μ1,ξ)γμ2:Loc|μ2|(α,Ωα→(γμ1(μ1)),μ2,ξ)γq:Loc|q|(α,Ωα→(γμ1(μ1)),q,ξ)Ωα→(γq(q))(γμ2(μ2))>Ωα→(γμ2(μ2))(γq(q))\n\n\nwhere the following variables are sampled:\n\n\n* an initial state of the universe α.\n* a blob location prior ξ, sampled for simplicity.\n* the blob locations of μ1, μ2, and q.\n\n\nand the world-state containing the second observation μ2 is required to be in the future of the word-state containing the question q.\n\n\nthen, we rate possible actions a by composing the scoring functions produced by QACI0 for all of those blob location hypotheses.\n\n\nfor any question location, the set of action-scoring functions sampled by QACI0 is normalized. 
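as a concrete, if trivial, illustration of the averaging variant of Compose used in Score, with scoring functions represented as plain python callables and the distribution over them as a finite mass map (all names and numbers here are illustrative):

```python
def compose_scores(delta):
    """Compose(δ)(a) = Σ_u δ(u)·u(a): mass-weighted average of action scores."""
    def composed(action):
        return sum(mass * u(action) for u, mass in delta.items())
    return composed

# two toy scoring functions over actions (bitstrings), with masses 0.75 and 0.25
u1 = lambda a: 1.0 if a.startswith("0") else 0.0
u2 = lambda a: 0.5
score = compose_scores({u1: 0.75, u2: 0.25})
assert score("01") == 0.75 * 1.0 + 0.25 * 0.5   # = 0.875
```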
this is because pairs of *AI-action location and question location* should not be penalized for having a \"harder to find\" answer — once the observations and question have been located, we want to obtain the answer no matter what.\n\n\nfinally, we'll just execute the action returned by AI(Score).", "date_published": "2023-06-09T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "fa0abd5993667d877546374ea78d5432", "title": "an Evangelion dialogue explaining the QACI alignment plan", "url": "https://carado.moe/qaci-invention-dialogue.html", "source": "carado.moe", "source_type": "blog", "text": "*this post was written by [Tamsin Leake](https://carado.moe) at [Orthogonal](https://orxl.org).* \n\n*thanks to [Julia Persson](https://www.lesswrong.com/users/juliahp) and [mesaoptimizer](https://mesaoptimizer.com/) for their help putting it together.*\n\n\nan Evangelion dialogue explaining the QACI alignment plan\n---------------------------------------------------------\n\n\nthis post explains the justification for, and the math formalization of, the [QACI](qaci.html) plan for [formal-goal alignment](formal-alignment-theory-change.html). you might also be interested in its companion post, [*formalizing the QACI alignment formal-goal*](qaci-math.html), which just covers the math in a more straightforward, bottom-up manner.\n\n\n![](qaci-invention-dialogue-header.webp)\n\n\n#### 1. agent foundations & anthropics\n\n\n🟣 ***misato*** — hi ritsuko! so, how's this alignment stuff going?\n\n\n🟡 ***ritsuko*** — well, i think i've got *an idea*, but you're not going to like it.\n\n\n🟢 ***shinji*** — that's exciting! what is it?\n\n\n🟡 ***ritsuko*** — so, you know how in [*the sequences*](https://www.readthesequences.com/) and [*superintelligence*](https://publicism.info/philosophy/superintelligence/index.html), yudkowsky and bostrom talk about how hard it is to fully formalize something which leads to nice things when maximized by a utility function? so much so that [it serves as an exercise to think about one's values](core-vals-exist-selfdet.html) and [consistently realize how complex they are](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile)?\n\n\n🟡 ***ritsuko*** — ah, yes, the good old days when we believed this was the single obstacle to alignment.\n\n\n🔴 ***asuka*** *barges into the room and exclaims* — hey, check this out! i found this [fancy new theory](https://www.lesswrong.com/tag/shard-theory) on lesswrong about how \"shards of value\" emerge in neural networks!\n\n\n🔴 ***asuka*** *then walks away while muttering something about eiffel towers in rome and waluigi hyperstition…*\n\n\n🟡 ***ritsuko*** indeed. these days, all these excited kids running around didn't learn about AI safety by thinking really hard about what agentic AIs would do — they got here by being spooked by large language models, and as a result they're thinking in all kinds of strange directions, like what it means for a language model to be aligned or how to locate natural abstractions for human values in neural networks.\n\n\n🟢 ***shinji*** — of course that's what we're looking at! look around you, turns out that the shape of intelligence is RLHF'd language models, not agentic consequentialists! why are you still interested in those old ideas?\n\n\n🟡 ***ritsuko*** — the problem, shinji, is that we *can't observe agentic AI being published before alignment is solved*. 
when someone figures out how to make AI consequentialistically pursue a coherent goal, whether by using current ML technology or by building a new kind of thing, we die shortly after they publish it.\n\n\n🟣 ***misato*** — wait, isn't that anthropics? i'd rather stay away from that type of thinking, it seems too galaxybrained to reason about…\n\n\n🟡 ***ritsuko*** — you can't really do that either — the [\"back to square one\"](https://www.lesswrong.com/posts/RnrpkgSY8zW5ArqPf/sia-greater-than-ssa-part-1-learning-from-the-fact-that-you) interpretation of anthropics, where you don't update at all, *is still an interpretation of anthropics*. it's kind of like being the kind of person who, when observing having survived quantum russian roulette 20 times in a row, assumes that the gun is broken rather than saying \"i guess i might have low quantum amplitude now\" and [fails to realize that the gun can still kill them](anthropics-example.html) — which is bad when all of our hopes and dreams rests on those assumptions. the only vaguely anthropics-ignoring perspective one can take about this is to ignore empirical evidence and stick to inside view, gears-level prediction of how convergent agentic AI tech is.\n\n\n🟣 ***misato*** — …is it?\n\n\n🟡 ***ritsuko*** — of course it is! on inside view, ***all the usual MIRI arguments hold just fine***. it just so happens that if you keep running a world forwards, and select only for worlds that we haven't died in, then you'll start observing stranger and stranger non-consequentialist AI. you'll start observing the kind of tech we get when just dumbly scale up bruteforce-ish methods *like machine learning* and you observe somehow nobody publishing insights as to how to make those systems agentic or consequentialistic.\n\n\n🟢 ***shinji*** — that's kind of frightening!\n\n\n🟡 ***ritsuko*** — well, it's where we are. we already thought we were small in space, now we also know that we're also small in probabilityspace. the important part is that it *doesn't particularly change what we should do* — we should still try to save the world, in the most straightforward fashion possible.\n\n\n🟣 ***misato*** — so all the excited kids running around saying we have to figure out how to align language models or whatever…\n\n\n🟡 ***ritsuko*** — they're chasing a chimera. impressive LLMs are not what we observe because they're what powerful AI looks like — they're what we observe because they're what powerful AI ***doesn't*** look like. they're there because that's as impressive as you can get short of something that kills everyone.\n\n\n🟣 ***misato*** — i'm not sure most timelines are dead yet, though.\n\n\n🟡 ***ritsuko*** — we don't know if \"most\" timelines are alive or dead from agentic AI, but we know that however many are dead, we couldn't have known about them. if [every AI winter was actually a bunch of timelines dying](https://twitter.com/carad0/status/1666092081889300481), we wouldn't know.\n\n\n🟣 ***misato*** — you know, this doesn't necessarily seem so bad. considering that confused alignment people is what's caused the appearance of the three organizations trying to kill everyone as fast as possible, maybe it's better that alignment research seems distracted with things that aren't as relevant, rather than figuring out agentic AI.\n\n\n🟡 ***ritsuko*** — you can say that alright! there's already enough capability hazards being carelessly published everywhere as it is, including on lesswrong. 
if people were looking in the direction of the kind of consequentialist AI that actually determines the future, this could cause a lot of damage. good thing there's a few very careful people here and there, studying the *right* thing, but being very careful by not publishing any insights. but this is indeed the kind of AI we need to figure out if we are to [save the world](outlook-ai-risk-mitigation.html).\n\n\n🟢 ***shinji*** — whatever kind of anthropic shenanigans are at play here, they sure seem to be saving our skin! maybe we'll be fine because of quantum immortality or something?\n\n\n🟣 ***misato*** — that's not how things work shinji. quantum immortality [explains how you got here, but doesn't help you save the future](https://www.lesswrong.com/posts/EKu66pFKDHFYPaZ6q/the-hero-with-a-thousand-chances).\n\n\n🟢 ***shinji*** *sighs, with a defeated look on his face* — …so we're back to the good old MIRI alignment, we have to perfectly specify human values as a utility function *and* figure out how to align AI to it? this seems impossible!\n\n\n🟡 ***ritsuko*** — well, that's where things get interesting! now that we're talking about coherent agents whose actions we can reason about, agents whose [instrumentally convergent goals such as goal-content integrity would be beneficial if they were aligned](https://en.wikipedia.org/wiki/Instrumental_convergence), agents who won't [mysteriously turn bad eventually](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization) because they're not yet coherent agents, we can actually *get to work putting something together*.\n\n\n🟣 ***misato*** — …and that's what you've been doing?\n\n\n🟡 ***ritsuko*** — well, that's kind of what [agent foundations had been about all along](https://twitter.com/ESYudkowsky/status/1626609128859922434), and what got rediscovered elsewhere as [\"formal-goal alignment\"](formal-alignment.html): designing an aligned coherent goal and figuring out how to make an AI that is aligned to maximizing it.\n\n\n#### 2. embedded agency & untractability\n\n\n🟢 ***shinji*** — so what's your idea? i sure could use some hope right now, though i have no idea what an aligned utility function would even *look like*. i'm not even sure what kind of *type signature* it would have!\n\n\n🟡 ***ritsuko*** *smirks* — so, the first important thing to realize is that the challenge of designing an AI that emits output which save the world, can be formulated like this: design an AI trying to solve a mathematical problem, and make the mathematical problem be analogous enough to \"what kind of output would save the world\" that the AI, by solving it, happens to also save our world.\n\n\n🟢 ***shinji*** — but what does that actually *look like*?\n\n\n🟣 ***misato*** — maybe it looks like \"what output should you emit, which would cause your predicted sequence of [stimuli](https://artint.info/2e/html/ArtInt2e.Ch2.S2.html) to look like a nice world?\"\n\n\n🟡 ***ritsuko*** — what do you think actually happens if an AI were to succeed at this?\n\n\n🟣 ***misato*** — oh, i guess it would hack its stimuli input, huh. is there even a way around this problem?\n\n\n🟡 ***ritsuko*** — what you're facing is a facet of the problem of [*embedded agency*](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh/p/i3BTagvt3HbPMx6PN). 
you must make an AI which thinks about the world which contains it, not just about a system that it feels like it is interacting with.\n\n\n🟡 ***ritsuko*** — the answer — as in [PreDCA](predca.html) — is to model the world from the top-down, and ask: \"look into this giant universe. you're in there somewhere. which action should the you-in-there-somewhere take, for this world to have the most expected utility?\"\n\n\n🟢 ***shinji*** — expected utility? by what utility function?\n\n\n🟡 ***ritsuko*** — we're coming to it, shinji. there are three components to this: the **formal-goal-maximizing AI**, the **formal-goal**, and the **glue in-between**. [embedded agency](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh/p/i3BTagvt3HbPMx6PN) and [decision theory](https://arbital.com/p/logical_dt/) are parts of this glue, and they're core to how we think about the whole problem.\n\n\n🟣 ***misato*** — and this top-down view works? how the hell would it compute *the whole universe*? isn't that uncomputable?\n\n\n🟡 ***ritsuko*** — how the hell do you expect AI would have done expected utility maximization *at all*? by making [*reasonable guesses*](cant-simulate-the-universe.html). i can't compute the whole universe from the big-bang up to you right now, but if you give me a bunch of math which i'd understand to say \"in worlds being computed forwards starting at some simple initial state and eventually leading to this room right now with shinji, misato, ritsuko in it, what is shinji more likely to be thinking about: his dad, or the pope's uncle?\"\n\n\n🟡 ***ritsuko*** — on the one hand, the question is immensely computationally expensive — it asks to compute the entire history of the universe up to this shinji! but on the other hand, it is talking about a world which *we inhabit*, and about which we have the ability to make *reasonable guesses*. if we build an AI that is smarter than us, you can bet it'll bet able to make guesses at least as well as this.\n\n\n🟣 ***misato*** — i'm not convinced. after all, we relied on humans to make this guess! of course you can guess about shinji, you're a human like him. why would the AI be able to make those guesses, being the alien thing that it is?\n\n\n🟡 ***ritsuko*** — i mean, one of its options is to *ask humans around*. it's not like it has to do everything by itself on its single computer, here — we're talking about the kind of AI that agentically saves the world, and has access to all kinds of computational resources, including humans if needed. i don't think it'll *actually* need to rely on human compute a lot, but the fact that it *can* serves as a kind of existence proof for its ability to produce reasonable solutions to these problems. not optimal solutions, but reasonable solutions — eventually, solutions that will be much better than any human or collection of humans could be able to come up with short of getting help from aligned superintelligence.\n\n\n🟢 ***shinji*** — but what if the worlds that are actually described by such math are not in fact this world, but strange alien worlds that look nothing like ours?\n\n\n🟡 ***ritsuko*** — yes, this is also part of the problem. but let's not keep moving the goalpost here. there are two problems: *make the formal problem point to the right thing (the right shinji in the right world)*, and *make an AI that is good at finding solutions to that problem*. 
both seem like we can solve them with some confidence; but we can't just keep switching back and forth between the two.\n\n\n🟡 ***ritsuko*** — if you have to solve two problems A and B, then you have to solve A assuming B is solved, and then solve B assuming A is solved. then, you've got a pair of solutions which work with one another. here, we're solving the problem of whether an AI would be able to solve this problem, *assuming* the problem points to the right thing; later we'll talk about how to make the problem point to the right thing *assuming* we have an AI that can solve it.\n\n\n🟢 ***shinji*** — are there any *actual implementation ideas* for how to build such a problem-solving AI? it sure sounds difficult to me!\n\n\n🟣 ***misato***, *carefully peeking into the next room* — hold on. i'm not actually quite sure who's listening — it is known that capabilities people like to lurk around here.\n\n\n🟤 ***kaji*** *can be seen standing against a wall, whistling, pretending not to hear anything.*\n\n\n🟡 ***ritsuko*** — right. one thing i will reiterate, is that we should not observe a published solution to \"how to get powerful problem-solving AI\" before the world is saved. this is in the class of problems which we die shortly after a solution to it is found and published, so our lack of observing such a solution is not much evidence for its difficulty.\n\n\n#### 3. one-shot AI\n\n\n🟡 ***ritsuko*** — anyways, to come back to embedded agency.\n\n\n🟣 ***misato*** — ah, i had a question. the AI returns a first action which it believes would overall steer the world in a direction that maximizes its expected utility. and then what? how does it get its observation, update its model, and take the next action?\n\n\n🟡 ***ritsuko*** — well, there are a variety of clever schemes to do this, but an easy one is to just *not*.\n\n\n🟣 ***misato*** — what?\n\n\n🟡 ***ritsuko*** — to just *not do anything after the first action*. i think the simplest thing to build is what i call a [\"one-shot AI\"](delegated-embedded-agency-decision-theory.html), which halts after returning an action. and then we just run the action.\n\n\n🟢 ***shinji*** — \"run the action?\"\n\n\n🟡 ***ritsuko*** — sure. we can decide in advance that the action will be a linux command to be executed, for example. the scheme does not really matter, so long as the AI gets an output channel which has pretty easy bits of steering the world.\n\n\n🟣 ***misato*** — hold on, hold on. a single action? what do you intend for the AI to do, output a really good pivotal act and then hope things get better?\n\n\n🟡 ***ritsuko*** — have a little more imagination! our AI — let's call it AI₀ — will almost certainly return a single action that *builds and then launches another, better AI*, which we'll call AI₁. a powerful AI can absolutely do this, especially if it has the ability to read its own source-code for inspiration, but probably even without that.\n\n\n🟡 ***ritsuko*** — …and because it's solving the problem \"what action would maximize utility when inserted into this world\", it will understand that AI₁ needs to have embedded agency and the various other aspects that are instrumental to it — [goal-content integrity](https://en.wikipedia.org/wiki/Instrumental_convergence#Goal-content_integrity), [robustly delegating](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh/p/i3BTagvt3HbPMx6PN#4__Robust_delegation) RSI, and so on.\n\n\n🟢 ***shinji*** — \"RSI\"? 
what's that?\n\n\n🟣 ***misato*** *sighs* — you know, it keeps surprising me how many youths don't know about the acronym RSI, which stands for Recursive Self-Improvement. it's pretty indicative of how little they're thinking about it.\n\n\n🟢 ***shinji*** — i mean, of course! recursive self-improvement is an obsolete old MIRI idea that doesn't apply to the AIs we have today.\n\n\n🟣 ***misato*** — right, kids like you got into alignment by being spooked by chatbots. (what [silly things](https://scottaaronson.blog/?p=6821) do they even teach you [in class](https://www.agisafetyfundamentals.com/ai-alignment-curriculum) these days?)\n\n\n🟣 ***misato*** — you have to realize that the generation before you, the generation of ritsuko and i, didn't have the empirical evidence that AI was gonna be impressive. we started on something like [the empty string](https://twitter.com/esyudkowsky/status/1525285902628446208), or at least [coherent](https://www.readthesequences.com/) [arguments](https://publicism.info/philosophy/superintelligence/index.html) where we had to actually build a gears-level inside-view understanding of what AI would be like, and what it would be capable of.\n\n\n🟣 ***misato*** — to me, one of the core arguments that sold me on the importance of AI and alignment was recursive self-improvement — the idea that *AI being better than humans at designing AI* would be a very special, very critical point in time, downstream of which AI would be able to beat humans at everything.\n\n\n🟢 ***shinji*** — but this turned out irrelevant, because AI is getting better than humans *without* RSI–\n\n\n🟡 ***ritsuko*** — again, false. we can **only** observe AI getting better than humans at intellectual tasks **without** RSI, because when RSI is discovered and published, we die very shortly thereafter. you have a sort of consistent survivorship bias, where you keep thinking of a whole class of things as *irrelevant* because they don't seem impactful, when in reality they're *the most* impactful; they're *so* impactful that when they happen you die and are unable to observe them.\n\n\n#### 4. action scoring\n\n\n🟣 ***misato*** — so, i think i have a vague idea of what you're saying, now. top-down view of the universe, which is untractable but [that's fine](cant-simulate-the-universe.html) apparently, thanks to some mysterious capabilities; [one-shot AI](delegated-embedded-agency-decision-theory.html) to get around various embedded agency difficulties. what's the actual utility function to align to, now? i'm really curious. i imagine a utility function assigns a value between 0 and 1 to any, uh, entire world? world-history? multiverse?\n\n\n🟡 ***ritsuko*** — it assigns a value between 0 and 1 to any *distribution of worlds*, which is general enough to cover all three of those cases. but let's not get there yet; remember how the thing we're doing is untractable, and we're relying on an AI that can make guesses about it anyways? we're gonna rely on that fact a whole lot more.\n\n\n🟣 ***misato*** — oh boy.\n\n\n🟡 ***ritsuko*** — so, first: we're not passing a *utility function*. we're passing a *math expression* describing an *\"action-scoring function\"* — that is to say, a function attributing scores to *actions* rather than to *distributions over worlds*. 
we'll make the program deterministic and make it ignore all input, such that the AI has no ability to steer its result — [its true result is fully predetermined, and the AI has no ability to hijack that true result](noninterf-superint.html).\n\n\n🟣 ***misato*** — wait, \"hijack it\"? aren't we assuming an inner-aligned AI, here?\n\n\n🟡 ***ritsuko*** — i don't like this term, \"inner-aligned\"; [just like \"AGI\"](tabooing-agi.html), people use it to mean too many different and unclear things. we're assuming an AI which does its best to pick an answer to a math problem. that's it.\n\n\n🟡 ***ritsuko*** — we don't make an AI which tries to not be harmful with regards to its side-channels, such as [hardware attacks](https://en.wikipedia.org/wiki/Rowhammer) — except for its output, it needs to be strongly boxed, such that it can't destroy our world by manipulating software or hardware vulnerabilities. similarly, we don't make an AI which tries to output a solution we *like*, it tries to output a solution which *the math would score high*. narrowing what we want the AI to do greatly helps us build the right thing, but it does add constraints to our work.\n\n\n🟡 ***ritsuko*** *starts scribbling on a piece of paper on her desk* — let's write down some actual math here. let's call Ω the set of world-states, ΔΩ distributions over world-states, and A be the set of actions.\n\n\n🟢 ***shinji*** — what are the types of all of those?\n\n\n🟡 ***ritsuko*** — let's not worry about [that](qaci-math.html), for now. all we need to assume for the moment is that those sets are [countable](https://en.wikipedia.org/wiki/Countable_set). we could define both Ω≔𝔹\\* and A≔𝔹\\* — define them both as the set of finite bitstrings — and this would functionally capture all we need. as for distributions over world-states ΔΩ, we'll define ΔX≔{f|f∈X→[0;1],∑x∈Xxf(x)≤1} for any countable set X, and we'll call \"mass\" the number which a distribution associates to any element.\n\n\n🟣 ***misato*** — woah, woah, hold on, i haven't looked at math in a while. what do all those squiggles mean?\n\n\n🟡 ***ritsuko*** — ΔX is defined as the set of functions f, which take an X and return a number between 0 and 1, such that if you take the f of all x's in X and add those up, you get a number not greater than 1. note that i use a notation of sums ∑ where the variables being iterated over are above the ∑ and the constraints that must hold are below it — so this sum adds up all of the f(x) for each x such that x∈X.\n\n\n🟣 ***misato*** — um, sure. i mean, i'm not quite sure what this *represents* yet, but i guess i get it.\n\n\n🟡 ***ritsuko*** — the set ΔX of distributions over X is basically like saying \"for any finite amounts of mass less than 1, what are some ways to distribute that mass among some or all of the X's?\" each of those ways is a distribution; each of those ways is an f in ΔX.\n\n\n🟡 ***ritsuko*** — anyways. the AI will take as input an untractable math expression of type A→[0;1], and return a single A. note that we're in math here, so \"is of type\" and \"is in set\" are really the same thing; we'll use ∈ to denote both set membership and type membership, because they're the same concept. 
for example, A→[0;1] is the set of all functions taking as input an A and returning a [0;1] — returning a real number between 0 and 1.\n\n\n🟢 ***shinji*** — hold on, a *real* number?\n\n\n🟡 ***ritsuko*** — well, a real number, but we're passing to the AI a discrete piece of math which will only ever describe countable sets, so we'll only ever describe countably many of those real numbers. infinitely many, but countably infinitely many.\n\n\n🟣 ***misato*** — so the AI has type (A→[0;1])→A, and we pass it an action-scoring function of type A→[0;1] to get an action. checks out. where do utility functions come in?\n\n\n🟡 ***ritsuko*** — they don't need to come in at all, actually! we'll be defining a piece of math which describes the world for the purpose of pointing at the humans who will decide on a scoring function, but the scoring function will only be over *actions the AI should take*.\n\n\n🟡 ***ritsuko*** — the AI doesn't need to know that its math points to the world it's in; and in fact, conceptually, it isn't *told* this at all. on a *fundamental, conceptual* manner, it is not being told to care about the world it's in — if it could, it *would* take over our world and kill everyone in it to acquire as much compute as possible, and plausibly along the way [drop an anvil on its own head](https://www.lesswrong.com/tag/anvil-problem) because it doesn't have [embedded agency](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh/p/i3BTagvt3HbPMx6PN) with regards to the world around itself.\n\n\n🟡 ***ritsuko*** — we will just very carefully box it such that its only meaningful output into our world, the only bits of steering it can predictably use, are those of the action it outputs. and we will also have very carefully designed it such that the only thing it ultimately cares about, is that that output have as high of an expected scoring as possible — it will care about this *intrinsically*, and *nothing else intrinsically*, such that doing that will be *more* important than hijacking our world through that output.\n\n\n🟡 ***ritsuko*** — this meaning of \"inner-alignment\" is still hard to accomplish, but it is much better defined, much narrower, and thus hopefully much easier to accomplish than the \"full\" embedded-from-the-start alignments which [very slow, very careful corrigibility-based AI alignment would result in](https://www.glowfic.com/replies/1824457#reply-1824457).\n\n\n#### 5. early math & realityfluid\n\n\n🟣 ***misato*** — so what does that scoring function actually look like?\n\n\n🟡 ***ritsuko*** — you know what, i hadn't started mathematizing my alignment idea yet; this might be a good occasion to get started on that!\n\n\n🟡 ***ritsuko*** *wheels in a whiteboard* — so, what i expect is that the order in which we're gonna go over the math is going to be the *opposite order* to that of the [final math report on QACI](qaci-math.html). here, we'll explore things from the top-down, filling in details as we go — whereas the report will go from the bottom-up, fully defining constructs and then using them.\n\n\nPrior∈ΔHypothesisLooksLikeThisWorld∈Hypothesis→[0;1]HowGood∈A→[0;1]hScore(action)≔∑Prior(h)⋅LooksLikeThisWorld(h)⋅HowGood(action,h)h∈Hypothesis\n\n\n🟡 ***ritsuko*** — this is roughly what we'll be doing here. go over all hypotheses h the AI could have within some set of hypotheses, called Hypothesis; measure their Prior probability, the LooksLikeThisWorld that they correspond to our world, and how good the action are in them. 
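as a toy numerical sketch of that shape — with a three-element hypothesis set and entirely made-up numbers; the real Hypothesis set is the enormous object built up in the rest of this dialogue:

```python
hypotheses = ["h_ours", "h_close", "h_alien"]
prior                 = {"h_ours": 0.2, "h_close": 0.3, "h_alien": 0.5}
looks_like_this_world = {"h_ours": 0.9, "h_close": 0.6, "h_alien": 0.01}
how_good = {  # HowGood(action, h), made-up values
    ("build_ai1", "h_ours"): 0.8, ("build_ai1", "h_close"): 0.7, ("build_ai1", "h_alien"): 0.1,
    ("do_nothing", "h_ours"): 0.2, ("do_nothing", "h_close"): 0.2, ("do_nothing", "h_alien"): 0.2,
}

def score(action):
    """Score(action) = Σ_h Prior(h) · LooksLikeThisWorld(h) · HowGood(action, h)."""
    return sum(prior[h] * looks_like_this_world[h] * how_good[(action, h)]
               for h in hypotheses)

# hypotheses that look like our world dominate the comparison between actions
assert score("build_ai1") > score("do_nothing")
```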
this is the general shape of *expected scoring for actions*.\n\n\n🟢 ***shinji*** — wait, the set of hypotheses is called Hypothesis, not Hypotheses? that's a bit confusing.\n\n\n🟡 ***ritsuko*** — this is pretty standard in math, shinji. the reason to call the set of hypotheses Hypothesis is because, as explained before, sets are also types, and so LooksLikeThisWorld will be of type Hypothesis→[0;1] rather than Hypotheses→[0;1].\n\n\n🟣 ***misato*** — what's in a Hypothesis, exactly?\n\n\n🟡 ***ritsuko*** — the set of *all relevant beliefs about things*. or rather, the set of all relevant beliefs except for logical facts. [logical uncertainty](https://www.lesswrong.com/posts/SFLCB5BgjzruJv9sp/logical-and-indexical-uncertainty) will be a thing on the AI's side, not in the math — this math lives in the realm \"platonic perfect true math\", and the AI will have beliefs about what its various parts tend to result in as one kind of logical belief, just like it'll have beliefs about other logical facts.\n\n\n🟣 ***misato*** — so, a mathematical object representing empirical beliefs?\n\n\n🟡 ***ritsuko*** — i would rather put it as a pair of: *beliefs about what's real* (\"realityfluid\" beliefs); and *beliefs about where, in the set of real things, the AI is* ([\"indexical\"](https://www.lesswrong.com/posts/SFLCB5BgjzruJv9sp/logical-and-indexical-uncertainty) beliefs). but this can be simplified by allocating realityfluid across *all* mathematical/computational worlds (this is equivalent to assuming [tegmark the level 4 multiverse](https://space.mit.edu/home/tegmark/crazy.html) is real, and can be done by assuming the cosmos to be [a \"universal complete\" program](universal-complete.html) running all computations) and then all beliefs are indexical. these two possibilities work out to pretty much the same math, anyways.\n\n\n🟢 ***shinji*** — what the hell is \"realityfluid\"???\n\n\n🟡 ***ritsuko*** — [i](limiting-real-universes.html)[t](persistent-data-structures-consciousness.html)['](universal-complete.html)[s](what-happens-when-you-die.html) [a](exact-minds-in-an-exact-world.html) [v](brittle-physics.html)[e](questions-cosmos-computations.html)[r](forking-bitrate-entropy-control.html)[y](deduplication-ethics.html) [l](udassa-time-steps.html)[o](hope-infinite-compute.html)[n](generalized-adding-reality-layers.html)[g](predictablizing-ethic-deduplication.html) [s](anthropic-reasoning-coordination.html)[t](solomonoff-deism.html)[o](hands-and-cities.html)[r](essential-inequality-vs-functional-inequivalence.html)[y](ethic-juice-anthropic-juice.html), [i](homomorphically-encrypted-computations.html)['](simulation-hypotheses.html)[m](logical-indexical-dignity.html) [a](spoiler-fire-upon-deep.html)[f](https://www.fanfiction.net/s/5389450/1/The-Finale-of-the-Ultimate-Meta-Mega-Crossover)[r](how-far-are-things-that-care.html)[a](approximate-decisions.html)[i](quantum-amplitude-deduplication.html)[d](https://carado.moe/up/52921a1c-bullshit.html).\n\n\n🟣 ***misato*** — think of it as a measure of how some constant amount of \"matteringness\"/\"realness\" — typically 1 unit of it — is distributed across possibilities. even though it kinda mechanistically works like probability mass, it's \"in the other direction\": it represents what's *actually* real, rather than representing what we *believe*.\n\n\n🟢 ***shinji*** — why would it sum to 1? 
what if there's [an infinite amount of stuff](hope-infinite-compute.html) out there?\n\n\n🟣 ***misato*** — [your realityfluid still needs to sum up to some constant](https://twitter.com/ESYudkowsky/status/1644060293889249288). [if you allocate an infinite amount of matteringness, things break and don't make sense](https://www.lesswrong.com/posts/5iZTwGHv2tNfFmeDa/on-infinite-ethics).\n\n\n🟡 ***ritsuko*** — indeed. this is why the most straightforward way to allocate realityfluid is to just imagine that the set of all that exists is a [universal program](universal-complete.html) whose computation is cut into time-steps each doing a constant amount of work, and then allocate some diminishing quantities of realityfluid to each time step.\n\n\n🟣 ***misato*** — like saying that compute step number n≥1 has 1/2^n realityfluid?\n\n\n🟡 ***ritsuko*** — that would indeed normalize, but it diminishes *exponentially* fast. this makes world-states exponentially unlikely in the amount of compute they exist after; and there are philosophical reasons to say that exponential unlikeliness is what should count as non-existing.\n\n\n🟢 ***shinji*** — what the hell are you talking about??\n\n\n🟡 ***ritsuko*** *hands shinji [a paper called \"Why Philosophers Should Care About Computational Complexity\"](https://arxiv.org/abs/1108.1791)* — look, this is a whole other tangent, but basically, polynomial amounts of computation correspond to \"doing something\", whereas exponential amounts of computation correspond to \"magically obtaining something out of the ether\", and this sort of reasoning ramifies naturally across the rest of computational complexity applied to metaphysics and philosophy.\n\n\n🟡 ***ritsuko*** — so instead, we can say that computation step number n≥1 has 1/n^2 realityfluid. this only diminishes quadratically, which is satisfactory.\n\n\n🟡 ***ritsuko*** — oh, and for the same reason, the universal program needs to be quantum — for example, an equivalent of the classical universal program but for quantum computation, implemented on something like a [quantum turing machine](https://en.wikipedia.org/wiki/Quantum_Turing_machine). otherwise, unless [BQP=BPP](https://en.wikipedia.org/wiki/BQP), quantum [multiverses](https://www.lesswrong.com/tag/many-worlds-interpretation) like ours might be exponentially expensive to compute, which would be [strange](solomonoff-deism.html).\n\n\n🟢 ***shinji*** — why n^2? why not n^1.01 or n^37?\n\n\n🟡 ***ritsuko*** — those do indeed all normalize — but we pick 2 because at some point you just have to *pick something*, and 2 is a natural, [occam](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor)/[solomonoff](https://www.lesswrong.com/posts/EL4HNa92Z95FKL9R2/a-semitechnical-introductory-dialogue-on-solomonoff-1)-simple number which works. look, just–\n\n\n🟢 ***shinji*** — and why are we assuming the universe is made of discrete computation anyways? isn't stuff made of real numbers?\n\n\n🟡 ***ritsuko*** *sighs* — look, this is what the [church-turing-deutsch principle](https://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93Deutsch_principle) is about. for any universe made up of real numbers, you can approximate it thusly:\n\n\n* compute 1 step of it with every number truncated to its first 1 binary digit of precision\n* compute 2 steps of it with every number truncated to its first 2 binary digits of precision\n\n\nfor 1 time step with 1 bit of precision, then 2 time steps with 2 bits of precision, then 3 with 3, and so on.
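purely as an illustration, that dovetailing could be sketched like this (where `step` and `truncate` are hypothetical stand-ins for one step of the real-numbered dynamics and for rounding a state down to a given number of bits):

```python
# illustrative-only sketch of the dovetailed approximation described above.
# `step` and `truncate` are hypothetical stand-ins: one step of the
# real-numbered dynamics, and truncation of a state to a given bit-precision.
from typing import Callable, Iterator, Tuple, TypeVar

S = TypeVar("S")  # some state of the approximated universe

def dovetail(
    initial: S,
    step: Callable[[S], S],
    truncate: Callable[[S, int], S],
) -> Iterator[Tuple[int, S]]:
    k = 1
    while True:
        # pass k: re-run k time steps, truncating every number to k bits
        state = truncate(initial, k)
        for _ in range(k):
            state = truncate(step(state), k)
        yield k, state
        k += 1
```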
for any piece of branch-spacetime which is only finitely far away from the start of its universe, there exists a threshold at which it starts being computed in a way that is indistinguishable from the version with real numbers.\n\n\n🟢 ***shinji*** — but they're only an approximation of us! they're not *the real thing!*\n\n\n🟡 ***ritsuko*** *sighs* — you don't *know* that. you could be the approximation, and you would be unable to tell. and so, we can work without uncountable sets of real numbers, since they're unnecessary to explain observations, and thus an unnecessary assumption to hold about reality.\n\n\n🟢 ***shinji***, *frustrated* — i *guess*. it still seems pretty contrived to me.\n\n\n🟡 ***ritsuko*** — what else are you going to do? you're expressing things in *math*, which is made of *discrete expressions* and will only ever express *countable quantities of stuff*. **there is no uncountableness to grab at and use**.\n\n\n🟣 ***misato*** — actually, can't we introduce [turing jumps/halting oracles](https://en.wikipedia.org/wiki/Turing_jump) into this universal program? [i heard that this lets us *actually compute* real numbers](https://en.wikipedia.org/wiki/Turing_jump).\n\n\n🟡 ***ritsuko*** — there's kind-of-a-sense in which that's true. we could say that the universal program has access to a [first-degree halting oracle](https://en.wikipedia.org/wiki/Post%27s_theorem), or a 20th-degree; or maybe it runs for 1 step with a 1st degree halting oracle, then 2 steps with a 2nd degree halting oracle, then 3 with 3, and so on.\n\n\n🟡 ***ritsuko*** — your program is now capable, at any time step, of computing an infinite amount of stuff. let's say one of those steps happens to run an entire universe of stuff, including a copy of us. how do you sub-allocate realityfluid? how much do we expect to be in there? you could allocate sub-compute-steps — with a 1st degree halting oracle executing at step n≥1, you allocate 1n2m2 realityfluid to each of the m≥1 infinite sub-steps in the call to the halting-oracle. you're just doing discrete realityfluid allocation again, except now your some of the realityfluid in your universe is allocated at people who have obtained results from a halting oracle.\n\n\n🟡 ***ritsuko*** — this works, but what does it get you? assuming halting oracles is kind of a very strange thing to do, and regular computation with no halting oracles is *already* sufficient to explain this universe. so we don't. but sure, we could.\n\n\n🟢 ***shinji*** *ruminates, unsure where to go from there.*\n\n\n🟣 ***misato*** *interrupts* — hey, do we really need to cover this? let's say you found out that this whole view of things is wrong. could you fix your math then, to whatever is the correct thing?\n\n\n🟡 ***ritsuko*** *waves around* — what?? what do you mean *if it's wrong*?? i'm not rejecting the premise that i might be wrong here, but like, my answer here depends a lot on *in what way i'm wrong* and *what is the better / more likely correct thing*. so, i don't know how to answer that question.\n\n\n🟣 ***misato*** *snaps shinji back to attention* — that's fair enough, i guess. well, let's get back on track.\n\n\n#### 6. precursor assistance\n\n\n🟡 ***ritsuko*** — so, one insight i got for my alignment idea came from [PreDCA](predca.html), which stands for **Pre**cursor **D**etection, **C**lassification, and **A**ssistance. 
it consists of mathematizations for:\n\n\n* the AI locating itself within possibilities\n* locating the high-agenticness-thing which had lots of causation-bits onto itself — call it the \"**Pre**cursor\". this is supposed to find the human user who built/launched the AI. (**D**etection)\n* bunch of criteria to ensure that the precursor is the intended human user and not something else (**C**lassification)\n* extrapolating that precursor's utility function, and maximizing it (**A**ssistance)\n\n\n🟣 ***misato*** — what the hell kind of math would accomplish that?\n\n\n🟡 ***ritsuko*** — well, it's not entirely clear to me. some of it is explained, other parts seem like they're expected to just work naturally. in any case, this isn't so important — [the \"Learning Theoretic Agenda\" into which PreDCA fits](}https://www.lesswrong.com/posts/ZwshvqiqCvXPsZEct/the-learning-theoretic-agenda-status-2023) is not fundamentally similar to mine, and i do not expect it to be the kind of thing that saves us in time. as far as i predict, that agenda has purchased most of the [dignity points](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy) it will have cashed out when alignment is solved, when it inspired my own ideas.\n\n\n🟢 ***shinji*** — and *your* agenda saves us in time?\n\n\n🟡 ***ritsuko*** — a lot more likely so, yes! for one, i am not trying to build *an entire theory of intelligence and machine learning*, and i'm not trying to [develop](https://drive.google.com/drive/u/0/folders/1oabE7X87tQ22kYA6z9JEN8EZ3nLjnJFs) an *[elegant new form of bayesianism](https://www.lesswrong.com/tag/infra-bayesianism)* whose [model of the world](https://www.lesswrong.com/posts/gHgs2e2J5azvGFatb/infra-bayesian-physicalism-a-formal-theory-of-naturalized) has [concerning philosophical ramifications](https://www.lesswrong.com/posts/yykNvq257zBLDNmJo/infra-bayesianism-naturally-leads-to-the-monotonicity) which, while admittedly [possibly only temporary](https://www.lesswrong.com/posts/yykNvq257zBLDNmJo/infra-bayesianism-naturally-leads-to-the-monotonicity?commentId=sPTXpmeM6LPHk7LzA), make me concerned about the coherency of the whole edifice. what *i* am trying to do, is hack together the minimum viable [world-saving](outlook-ai-risk-mitigation.html) machine about which we'd have enough confidence that *launching it is better expected value than not launching it*.\n\n\n🟡 ***ritsuko*** — anyways, the important thing is that that idea made me think \"hey, what else could we do to even more make sure the selected precursor is the human use we want, and not something else like a nearby fly or the process of evolution?\" and then i started to think of some clever schemes for locating the AI in a top-down view of the world, without having to decode physics ourselves, but rather by somehow pointing to the user \"through\" physics.\n\n\n🟣 ***misato*** — what does that mean, exactly?\n\n\n🟡 ***ritsuko*** — well, remember how PreDCA points to the user from-the-top-down? the way it tries to locate the user is by looking for *patterns, in the giant computation of the universe, which satisfy these criteria*. this fits in the general notion of [generalized computation interpretability](generalized-computation-interpretability.html), which is fundamentally needed to care about the world because you want to detect not just simulated moral patients, but [*arbitrarily complexly simulated* moral patients](homomorphically-encrypted-computations.html). 
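as a cartoon of what that kind of search could mean (not the real construction; `decoders` here is a hypothetical enumeration of simple programs, each paired with a simplicity weight):

```python
# cartoon of encoding-agnostic pattern search (illustrative only).
# `decoders` is a hypothetical enumeration of (program, simplicity_weight)
# pairs; each program tries to read the target pattern out of the world.
from typing import Callable, Iterable, Tuple

def how_much_found(
    world: bytes,
    target: bytes,
    decoders: Iterable[Tuple[Callable[[bytes], bytes], float]],
) -> float:
    # add up the simplicity weight of every decoder that recovers `target`
    # from `world`, however the world happens to encode it
    return sum(weight for decode, weight in decoders if decode(world) == target)
```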
so, you need this anyways, and it is what \"looking inside the world to find stuff, no matter how it's encoded\" looks like.\n\n\n🟣 ***misato*** — and what sort of patterns are we looking for? what are the *types* here?\n\n\n🟡 ***ritsuko*** — as far as i understand, PreDCA looks for *programs*, or *computations*, which take some input and return an policy. my own idea is to locate something less abstract, about which we can actually have information-theoretic guarantees: *bitstrings*.\n\n\n🟣 ***misato*** — …just raw bitstrings?\n\n\n🟡 ***ritsuko*** — that's right. the idea here is kinda like doing an incantation, except the incantation we're locating is a very large piece of data which is unlikely to be replicated outside of this world. imagine generating a very large (several gigabytes) file, and then asking the AI \"look for things of information, in the set of all computations, which look like that pattern.\" we call \"blobs\" such bitstrings serving as \\*anchors into to find our world and location-within-it in the set of possible world-states and locations-within-them.\n\n\n#### 7. blob location\n\n\n🟡 ***ritsuko*** — for example, let's say the universe is a [conway's game of life](https://en.wikipedia.org/wiki/Conway's_Game_of_Life). then, the AI could have a set of hypotheses as programs which take as input the entire state of the conway's game of life grid at any instant, and returning a bitstring which must be equal to the blob.\n\n\n🟡 ***ritsuko*** — first, we define Ω≔{ω|ω∈𝒫(ℤ2),#ω∈ℕ} (uppercase omega, a set of lowercase omega) as the set of \"world-states\" — states of the grid, defined as the set of cell positions whose cell is alive.\n\n\n🟢 ***shinji*** — what's 𝒫(ℤ2) and #ω?\n\n\n🟡 ***ritsuko*** — ℤ2 is the set of pairs whose elements are both a member of ℤ, the set of relative integers. so Z2 is the set of pairs of relative integers — that is, grid coordinates. then, 𝒫(ℤ2) is the set of subsets of ℤ2. finally, #w is the size of set w — requiring that #w∈ℕ is akin to requiring that w is a finite set, rather than infinite. let's also define:\n\n\n* 𝔹={⊤,⊥} as the set of booleans\n* 𝔹\\* as the set of finite bitstring\n* 𝔹n is the set of bitstrings of length n\n* |b| is the length of bitstring b\n\n\n🟡 ***ritsuko*** — what do you think \"locate blob b∈𝔹\\* in world-state ω∈Ω\" could look like, mathematically?\n\n\n🟣 ***misato*** — let's see — i can use the set of bitstrings of same length as b, which is 𝔹|b|. let's build a set of {f|f∈Ω→𝔹|b|…\n\n\n🟢 ***shinji*** — wait, Ω→𝔹|b| is the set of *functions* from Ω to 𝔹|n|. but we were talking about *programs* from Ω to 𝔹|b|. is there a difference?\n\n\n🟡 ***ritsuko*** — this is a very good remark, shinji! indeed, we need to do a bit more work; for now we'll just posit that for any sets A,B, A→HB is the set of always-halting, always-succeeding programs taking as input an A and returning a B.\n\n\n🟣 ***misato*** — let's see — what about {f|f∈Ω→H𝔹|b|,f(ω)=b}?\n\n\n🟡 ***ritsuko*** — you're starting to get there — this is indeed the set of programs which return b when taking ω as input. however, it's merely a *set* — it's not very useful as is. what we'd really want is a *distribution* over such functions. not only would this give a *weight* to different functions, but summing over the entire distribution could also give us some measure of \"how easy it is to find b in ω. remember the definition of distributions, ΔX?\n\n\n🟢 ***shinji*** — oh, i remember! 
it's the set of functions in X→[0;1] which sum up to at most one over all of X.\n\n\n🟡 ***ritsuko*** — indeed! so, we're gonna posit what i'll call *kolmogorov simplicity*, KX−∈ΔX∩X→(0;1), which is like [kolmogorov complexity](https://en.wikipedia.org/wiki/Kolmogorov_complexity) except that it's a *distribution*, never returns 0 nor 1 for a single element, and importantly it returns something like the *inverse* of complexity. it gives some amount of \"mass\" to every element in some (countable) set X.\n\n\n🟣 ***misato*** — oh, i know then! the distribution, for each f∈Ω→H𝔹|b|, must return {KΩ→H𝔹\\*−(f)iff(ω)=b0iff(ω)≠b\n\n\n🟡 ***ritsuko*** — that's right! we can start to define Locn∈Ω×𝔹n→ΔΩ→H𝔹n as the function that takes as input a pair of world-state ω∈Ω and blob b∈𝔹n of length n, and returns a distribution over programs that \"find\" b in ω. plus, since functions f are weighed by their kolmogorov simplicity, for complex b's they're \"encouraged\" to find the bits of complexity of b *in* ω, rather than those bits of complexity being contained in f itself.\n\n\n🟡 ***ritsuko*** — note also that this Locn distribution over Ω→H𝔹n returns, for any function f, either KΩ→H𝔹n− or 0, which entails that for any given ω,b, the sum of Locn(ω,b)(f) for all f's sums up to less than one — that sum represents in a sense \"how hard it is to find b in ω\" or \"the probability that b is somewhere in ω\".\n\n\nf∀(ω,b)∈Ω×𝔹n:∑Locn(ω,b)(f)<1f∈Ω→H𝔹n\n\n\n🟡 ***ritsuko*** — the notation here, Locn(ω,b)(f) is because Locn(ω,b) returns a distribution ΔΩ→H𝔹n, which is itself a function (Ω→H𝔹n)→[0;1] — so we apply Loc to ω,b, and then we sample the resulting distribution on f.\n\n\n🟢 ***shinji*** — \"the sum represents\"? what do you mean by \"represents\"?\n\n\n🟡 ***ritsuko*** — well, it's the concept which i'm trying to find a [\"true name\"](https://www.lesswrong.com/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation) for, here. \"how much is the blob b located in world-state ω? well, as much of the sum of the kolmogorov simplicity of every program that returns b when taking as input ω\".\n\n\n🟣 ***misato*** — and then what? i feel like my understanding of how this ties into anything is still pretty loose.\n\n\n🟡 ***ritsuko*** — so, we're actually gonna get *two* things out of Loc: we're gonna get *how much ω contains b* (as the sum of Loc for all f's), but we're also gonna get *how to get another world-state that is like ω, except that b is replaced with something else*.\n\n\n🟢 ***shinji*** — how are we gonna get *that*??\n\n\n🟡 ***ritsuko*** — here's my idea: we're gonna make f(ω) return not just 𝔹\\* but rather 𝔹n×𝔹\\* — a pair of the blob of a \"free bitstring\" τ (tau) which it can use to store \"everything in the world-state except b\". and we'll also sample programs g∈𝔹n×𝔹\\*→HΩ which \"put the world-state back together\" given the same free bitstring, and a *possibly different* counterfactual blob than b.\n\n\n🟣 ***misato*** — so, for ω,b, Loc is defined as something like…\n\n\nLocn(ω,b)∈Δ(Ω→H𝔹n×𝔹\\*)×(𝔹n×𝔹\\*→HΩ)Locn(ω,b)(f,g)≔{KΩ→H𝔹n×𝔹\\*−(f)⋅K𝔹n×𝔹\\*→HΩ−(g)if{letτ∈𝔹\\*such thatf(ω)=(b,τ)ing(b,τ)=ω0otherwise\n\n\n🟢 ***shinji*** *stares at the math for a while* — actually, shouldn't the if statement be more general? you don't just want g to work on b, you want g to work on *any other blob of the same length*.\n\n\n🟡 ***ritsuko*** — that's correct shinji! 
let's call the original blob b the \"factual blob\", let's call other blobs of the same length we could insert in its stead \"counterfactual blobs\" and write them as b′ — we can establish that ′ (prime) will denote counterfactual things in general.\n\n\n🟣 ***misato*** — so it's more like…\n\n\n{letτ∈𝔹\\*such thatf(ω)=(b,τ)in∀b′∈𝔹n:g(b′,τ)=…\n\n\n🟣 ***misato*** — …g(b′,τ) should equal, exactly?\n\n\n🟡 ***ritsuko*** — we don't know what it should equal, but we do know *something* about what it equals: f should work on that counterfactual and find the same counterfactual blob again.\n\n\n{letτ∈𝔹\\*such thatf(ω)=(b,τ)in∀b′∈𝔹n:f(g(b′,τ))=(b′,τ)\n\n\n🟡 ***ritsuko*** — actually, let's make Locn be merely a distribution over functions that produce counterfactual world-states from counterfactual blobs 𝔹n→Ω — let's call those \"counterfactual insertion functions\" and denote them γ and their set Γn (gamma) — and we'll encapsulate τ away from the rest of the math:\n\n\nf,g,τLocn(ω,b)(γ)≔∑KΩ→H𝔹n×𝔹\\*−(f)⋅K𝔹n×𝔹\\*→HΩ−(g)f∈Ω→H𝔹n×𝔹\\*g∈𝔹n×𝔹\\*→HΩf(ω)=(b,τ)∀b′∈𝔹n:f(g(b′,τ))=(b′,τ)γ(b′)=g(b′,τ)\n\n\n🟢 ***shinji*** — isn't f(g(b′,τ))=(b′,τ) a bit circular?\n\n\n🟡 ***ritsuko*** — well, yes and no. it leaves a lot of degrees of freedom to f and g, perhaps too much. let's say we had some function SimilarPasts∈Ω×Ω→[0;1] — let's not worry about how it works. then could weigh each \"blob location\" by how much counterfactual world-states are similar, when sampled over all counterfactual blobs.\n\n\n🟣 ***misato*** — maybe we should also constrain the f,g programs for how long they take to run?\n\n\n🟡 ***ritsuko*** — ah yes, good idea. let's say that for x∈X and f∈X→HY, R(f,x)∈ℕ\\{0} is how long it takes to run program f on input x, in some amount of steps each doing a constant amount of work — such as steps of compute in a turing machine.\n\n\nf,g,τb′Locn(ω,b)(γ)≔∑KΩ→H𝔹n×𝔹\\*−(f)⋅K𝔹n×𝔹\\*→HΩ−(g)⋅∑1#𝔹n⋅SimilarPasts(ω,g(b′,τ))R(g,(b′,τ))+R(f,g(b′,γ))f∈Ω→H𝔹n×𝔹\\*b′∈𝔹ng∈𝔹n×𝔹\\*→HΩf(ω)=(b,τ)∀b′∈𝔹n:f(γ(b′))=(b′,τ)��(b′)=g(b′,τ)\n\n\n🟡 ***ritsuko*** — (i've also replaced f(g(b′,τ)) with f(γ(b′)) since that's shorter and they're equal anyways)\n\n\n🟣 ***misato*** — where does the first sum end, exactly?\n\n\n🟡 ***ritsuko*** — it applies to the whole– oh, you know what, i can achieve the same effect by flattening the whole thing into a single sum. and renaming the b′ in ∀b′∈𝔹n to b′′ to avoid confusion.\n\n\nf,g,τ,b′Locn(ω,b)(γ)≔∑KΩ→H𝔹n×𝔹\\*−(f)⋅K𝔹n×𝔹\\*→HΩ−(g)⋅1#𝔹n⋅SimilarPasts(ω,g(b′,τ))R(g,(b′,τ))+R(f,g(b′,τ))f∈Ω→H𝔹n×𝔹\\*g∈𝔹n×𝔹\\*→HΩb′∈𝔹nf(ω)=(b,τ)∀b′′∈𝔹n:f(γ(b′′))=(b′′,τ)γ(b′′)=g(b′′,τ)\n\n\n🟢 ***shinji*** — are we still operating in conway's game of life here?\n\n\n🟡 ***ritsuko*** — oh yeah, now might be a good time to start generalizing. we'll carry around not just world-states ω∈Ω, but *initial world-states* α∈Ω (alpha). those are gonna determine the start of *universes* — distributions of world-states being computed-over-time — and we'll use them when we're computing world-states forwards or comparing the age of world-states. for example SimilarPasts probably needs this, so we'll need to pass it to Locn which will now be of type Ω×Ω×𝔹n→ΔΓn:\n\n\nf,g,τ,b′Locn(α,ω,b)(γ)≔∑KΩ→H𝔹n×𝔹\\*−(f)⋅K𝔹n×𝔹\\*→HΩ−(g)⋅1#𝔹n⋅SimilarPastsα(ω,g(b′,τ))R(g,(b′,τ))+R(f,g(b′,τ))f∈Ω→H𝔹n×𝔹\\*g∈𝔹n×𝔹\\*→HΩb′∈𝔹nf(ω)=(b,τ)∀b′′∈𝔹n:f(γ(b′′))=(b′′,τ)γ(b′′)=g(b′′,τ)\n\n\n#### 8. 
constrained mass notation\n\n\n🟢 ***shinji*** — i notice that you're multiplying together your \"kolmogorov simplicities\" and 1#𝔹n and now SimilarPasts divided by a sum of how long they take to run. what's going on here exactly?\n\n\n🟡 ***ritsuko*** — well, each of those number is a \"confidence amount\" — scalars between 0 and 1 that say \"how much does *this* iteration of the sum capture the thing we want\", like probabilities. multiplication ⋅ is like the logical operator \"and\" ∧ except for confidence ratios, you know.\n\n\n🟢 ***shinji*** — ah, i see. so these sums do something kinda like \"expected value\" in probability?\n\n\n🟡 ***ritsuko*** — something kinda like that. actually, this notation is starting to get unwieldy. i'm noticing a bunch of this pattern: x∑SomeDistribution(x)⋅expressionx∈SomeSet\n\n\n🟣 ***misato*** — so, if you want to use the standard probability theory notations, you need random variables which–\n\n\n🟡 ***ritsuko*** — ugh, i *don't like* random variables, because the place at which they get substituted for the sampled value is ambiguous. here, i'll define my own notation:\n\n\nv1,…,vpv1,…,vpM[V]≔∑X1(x1)⋅…⋅Xn(xn)⋅Vx1:X1x1∈domain(X1)⋮⋮xn:Xnxn∈domain(Xn)C1C1⋮⋮CmCm\n\n\n🟡 ***ritsuko*** — 𝐌 will stand for \"constrained mass\", and it's basically [syntactic sugar](https://en.wikipedia.org/wiki/Syntactic_sugar) for sums, where x:X means \"sum over x∈domain(X) (where domain returns the set of arguments over which a function is defined), and then multiply each iteration of the sum by X(x)\". now, we just have to define uniform distributions over finite sets as…\n\n\n🟢 ***shinji*** — UniformX(x)≔1#X for finite set X?\n\n\n🟡 ***ritsuko*** — that's it! and now, Loc is much more easily written down:\n\n\nf,g,τ,b′Locn(α,ω,b)(γ)≔𝐌[SimilarPastsα(ω,g(b′,τ))R(g,(b′,τ))+R(f,g(b′,τ))]f:KΩ→H𝔹n×𝔹\\*−g:K𝔹n×𝔹\\*→HΩ−b′:Uniform𝔹nf(ω)=(b,τ)∀b′′∈𝔹n:f(γ(b′′))=(b′′,τ)γ(b′′)=g(b′′,τ)\n\n\n🟢 ***shinji*** — huh. you know, i'm pretty skeptical of you inventing your own probability notations, but this *is* much more readable, when you know what you're looking at.\n\n\n🟣 ***misato*** — so, are we done here? is this blob location?\n\n\n🟡 ***ritsuko*** — well, i expect that some thing are gonna come up later that are gonna make us want to change this definition. but right now, the only improvement i can think of is to replace f:KΩ→H𝔹n×𝔹\\*− and g:K𝔹n×𝔹\\*→HΩ− with (f,g):K(Ω→H𝔹n×𝔹\\*)×(𝔹n×𝔹\\*→HΩ)−.\n\n\n🟣 ***misato*** — huh, what's the difference?\n\n\n🟡 ***ritsuko*** — well, now we're sampling f,g from kolmogorov simplicity *at the same time*, which means that if there is some large piece of information that they both use, they won't be penalized for using it twice but only once — a tuple containing two elements which have a lot of information in common only has that information counter once by K−.\n\n\n🟣 ***misato*** — and we want that?\n\n\n🟡 ***ritsuko*** — yes! there are some cases where we'd want two mathematical objects to have a lot of information in common, and other places where we'd want them to not need to be dissimilar. here, it is clearly the former: we want the program that \"deconstructs\" the world-state into blob and everything-else, and the function that \"reconstructs\" a new world-state from a counterfactual blob and the same everything-else, to be able to share information as to how they do that.\n\n\n#### 9. what now?\n\n\n🟢 ***shinji*** — so we've put together a true name for \"piece of data in the universe which can be replaced with counterfactuals\". 
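if i squint, the shape of the thing is roughly this (just a loose type sketch to keep in my head, not the actual math):

```python
# loose type sketch of the objects above (illustrative only, not the real math).
from typing import Callable, Dict, FrozenSet, Tuple

Cell = Tuple[int, int]
WorldState = FrozenSet[Cell]   # ω ∈ Ω: the live cells of a game-of-life grid
Blob = bytes                   # b ∈ 𝔹ⁿ: a factual or counterfactual blob

# γ ∈ Γₙ: maps any counterfactual blob b′ of the right length to the
# world-state that is like ω, but with b replaced by b′.
CounterfactualInsertion = Callable[[Blob], WorldState]

# Loc(α, ω, b) conceptually yields a sub-distribution over such γ's,
# each weighted by the simplicity and runtime of the (f, g) pair behind it.
LocWeights = Dict[CounterfactualInsertion, float]
```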
that's pretty nifty, i guess, but what do we do with it?\n\n\n🟡 ***ritsuko*** — now, this is where the core of my idea comes in: in the physical world, we're gonna create a random unique enough blob on someone's computer. then we're going to, still in the physical world, read its contents right after generating it. if it looks like a counterfactual (i.e. if it doesn't look like randomness) we'll create another blob of data, which can be recognized by Loc as an answer.\n\n\n🟢 ***shinji*** — what does that entail, exactly?\n\n\n🟡 ***ritsuko*** — we'll have created a piece of *real, physical world*, which lets use use Loc to get the *true name, in pure math*, of \"what answer would that human person have produced to this counterfactual question?\"\n\n\n🟣 ***misato*** — hold on — we already have this. the AI can already have an interface where it asks a human user something, and waits for our answer. and the problem with that is that, obviously, the AI hijacks us or its interface to get whatever answer makes its job easiest.\n\n\n🟡 ***ritsuko*** — aha, but this is different! we can point at a counterfactual question-and-answer chunk-of-time (call it \"question-answer counterfactual interval\", or \"QACI\") which is *before* the AI's launch, in time. we can mathematically *define* it as being in the past of the AI, by identifying the AI with some other blob which we'll also locate using Loc, and demand that the blob identifying the AI be *causally after* the user's answer.\n\n\n🟣 ***misato*** — huh.\n\n\n🟡 ***ritsuko*** — that's another idea i got from PreDCA — making the AI pursue the values of [*a static version of its user in its past*](outer-alignment-past-user.html), rather than its user-over-time.\n\n\n🟢 ***shinji*** — but we don't want the AI to lock-in our values, we want the AI to satisfy our values-as-they-evolve-over-time, don't we?\n\n\n🟣 ***misato*** — well, shinji, there's multiple ways to phrase your mistake, here. one is that, actually, [you do](surprise-you-want.html) — but if you're someone *reasonable*, then the values you endorse are some metaethical system which is able to reflect and learn about what's good, and to let people and philosophy determine what can be pursued.\n\n\n🟣 ***misato*** — but you *do* have values you want to lock in. your meta-values, your metaethics, you don't want *those* to be able to change arbitrarily. for example, you probly don't want to be able to become someone who wants everyone to maximally suffer. those endorsed, top-level, metaethics meta-values, are something you *do* want to lock in.\n\n\n🟡 ***ritsuko*** — put it another way: if you're reasonable, then if the AI asks you what you want inside the question-answer counterfactual interval, you won't answer \"i want everyone to be forced to watch the most popular TV show in 2023\". you'll answer something more like \"i want everyone to be able to reflect on their own values and choose what values and choices they endorse, and how, and that the field of philosophy can continue in these ways in order to figure out how to resolve conflicts\", or something like that.\n\n\n🟣 ***misato*** — wait, if the AI is asking the user counterfactual questions, won't it ask the user whatever counterfactual question brainhacks the user into responding whatever answer makes its job easiest? it can just hijack the QACI.\n\n\n🟡 ***ritsuko*** — aha, but we don't have to have *the AI* formulate answers! 
we could do something like: make the initial question some static question like \"please produce an action that saves the world\", and then the user thinks about it for a bit, returns an answer, and that answer is fed back into another QACI to the user. this loops until one of the user responds with an answer which starts with a special string like \"okay, i'm done for sure:\", followed by a bunch of text which the AI will interpret as a piece of math describing a scoring over actions, and it'll try to output a utility function which maximizes *that*.\n\n\n🟢 ***shinji*** — so it's kinda like [coherent extrapolated volition](https://www.lesswrong.com/tag/coherent-extrapolated-volition) but for actions?\n\n\n🟡 ***ritsuko*** — sure, i think of it as [*an implementation of CEV*](cev-coherent-enough.html). it allows its user to run a long-reflection process. actually, that long-reflection process even has the ability to use a mathematical oracle.\n\n\n🟣 ***misato*** — how does *that* work?\n\n\n#### 10. blob signing & closeness in time\n\n\n🟡 ***ritsuko*** — so, let's define QACI as a function, and this'll clarify what's going on. q∈𝔹\\* will be our initial random factual question blob. QACI∈Ω×Γ|q|×𝔹|q|→Δ𝔹|q| takes as parameter a blob location for the question — which, remember, comes in the form of a function you can use to produce counterfactual world-states with counterfactual blobs! — and a counterfactual question blob q′, and returns a distribution of possible answers r. it's defined as:\n\n\nωr,γrQACI(α,γq,q′)(r)≔𝐌[1]ωr:Ωα→(γq(q′))γr:Loc|q|(α,ωr,r)\n\n\n🟡 ***ritsuko*** — we're, for now just positing, that there is a function Ωα→∈Ω→ΔΩ (remember that α defines a hypothesis for the initial state, and mechanics, of our universe) which, given a world-state, returns a distribution of world-states that are in its future. so this piece of math samples possible future world-states of the counterfactual world-state where q was replaced with q′, and possible locations of possible answers in those world-states.\n\n\n🟣 ***misato*** — 𝐌[1]? what does *that* mean?\n\n\n🟡 ***ritsuko*** — here, the fact that Locn(α,ω,b) *doesn't necessarily sum to 1* — we say that it *doesn't normalize* — means that QACI(α,γq,q′)(r) summed up over all r∈𝔹|q| can be less than 1. in fact, this sum will indicate \"how hard is it to find the answer r in futures of counterfactual world-states γq(q′)?\" — and uses that as the distribution of answers.\n\n\n🟣 ***misato*** — hmmm. wait, this just finds whichever-answers-are-the-easiest-to-find. what guarantees that r looks like *an answer at all*?\n\n\n🟡 ***ritsuko*** — this is a good point. maybe we should define something like Sign∈𝔹\\*→𝔹|q| which, to any input \"payload\" of a certain length, associates a blob which is actually highly complex, because Sign embeds a lot of bits of complexity. for example, maybe Sign(π) (where π is the \"payload\") concatenates π together with a long [cryptographic hash](https://en.wikipedia.org/wiki/Cryptographic_hash_function) of π and of some piece of information highly entangled with our world-state.\n\n\nωr,γrQACI(α,γq,q′)(πr)≔𝐌[1]ωr:Ωα→(γq(q′))γr:Loc|q|(α,ωr,Sign(πr))\n\n\n🟢 ***shinji*** — we're not signing the counterfactual question q′, only the answer payload πr?\n\n\n🟡 ***ritsuko*** — that's right. signatures matter for blobs we're *finding*; once we've found them, we don't need to sign counterfactuals to insert in their stead.\n\n\n🟣 ***misato*** — so, it seems to me like how Ω→ works here, is pretty critical. 
for example, if it contains a bunch of mass at world-states where some AI is launched, whether ours or another, then that AI will try to fill its future lightcone with answers that would match various Sign(πr)'s — so that *our* AI would find those answers instead of ours — and make those answers be something that maximize *their* utility function rather than ours.\n\n\n🟡 ***ritsuko*** — this is true! indeed, how we sample for Ω→ is pretty critical. how about this: first, we'll pass the distribution into Loc:\n\n\nγrQACI(α,γq,q′)(πr)≔𝐌[1]γr:Loc|q|(α,Ωα→(γq(q′)),Sign(πr))\n\n\n🟡 ***ritsuko*** — …and inside Locn, which is now of type Locn∈Ω×ΔΩ×𝔹n→ΔΓn, for any f,g we'll only sample world-states ω which have the *highest* mass in that distribution:\n\n\nf,g,ω,τ,b′Locn(α,δ,b)(γ)≔𝐌[SimilarPastsα(ω,g(b′,τ))R(g,(b′,τ))+R(f,g(b′,τ))](f,g):K(Ω→H𝔹n×𝔹\\*)×(𝔹n×𝔹\\*→HΩ)−ω:λω:maxXΔ(λω:Ω.{δ(ω)iff(ω)=(b,τ)0otherwise).δ(ω)b′:Uniform𝔹nf(ω)=(b,τ)∀b′′∈𝔹n:γ(b′′)=g(b′′,τ)f(γ(b′′))=(b′′,τ)\n\n\n🟡 ***ritsuko*** — the intent here is that for any way-to-find-the-blob f,g, we only sample the closest matching world-states in time — which *does* rely on Ω→ having higher mass for world-states that are closer in time. and hopefully, the result is that we pick enough instances of the signed answer blobs located shortly in time after the question blobs, that they're mostly dominated by *the human user answering them*, rather than AIs appearing later.\n\n\n🟣 ***misato*** — can you disentangle the line where you sample ω?\n\n\n🟡 ***ritsuko*** — sure! so, we write an anonymous function λω:X.δ(ω) — a distribution is a function, after all! — taking a parameter ω from the set X, and returning δ(ω). so this is going to be a distribution that is just like δ, except it's only defined for a subset of Ω — those in X.\n\n\n🟡 ***ritsuko*** — in this case, X is defined as such: first, take the set of elements ω∈Ω for which f(ω)=(b,τ). then, apply the distribution δ to all of them, and only keep elements for which they have the most δ (there can be multiple, if multiple elements have the same maximum mass!).\n\n\n🟡 ***ritsuko*** — oh, and i guess f(ω)=(b,τ) is redundant now, i'll erase it. remember that this syntax means \"sum over the body for all values of f,g,ω,τ,b′ for which these constraints hold…\", which means we can totally have the value of τ be bound inside the definition of ω like this — it'll just have exactly one value for any pair of f and α.\n\n\n#### 11. QACI graph\n\n\n🟢 ***shinji*** — why is QACI returning a distribution over answers, rather than picking the single element with the most mass in the distribution?\n\n\n🟡 ***ritsuko*** — that's a good question! in theory, it could be that, but we do want the user to be able to go to the next possible counterfactual answer if the first one isn't satisfactory, and the one after that if *that's* still not helpful, and so on. for example: in the piece of math which will interpret the user's final result as a math expression, we want to ignore answers which don't parse or evaluate as proper math of the intended type.\n\n\n🟢 ***shinji*** — so the AI is asking the counterfactual past-user-in-time to come up with a good action-scoring function in… however long a question-answer counterfactual interval is.\n\n\n🟡 ***ritsuko*** — let's say about a week.\n\n\n🟢 ***shinji*** — and this helps… how, again?\n\n\n🟡 ***ritsuko*** — well. 
first, let's posit EvalMathX∈𝔹\\*→{{x}|x∈X}∪{∅}, which tries to parse and evaluate a bitstring representing a piece of math (in some pre-established formal language) and returns either:\n\n\n* what it evaluates to if it is a member of X\n* an empty set if it isn't a member of X or fails to parse or evaluate\n\n\n🟡 ***ritsuko*** — we then define EvalMathXΔ∈ΔΠ→X as a function that returns the highest-mass element of the distribution for which EvalMathX returns a value rather than the empty set. we'll also assume for convenience q\\*′∈\\*→𝔹|q|, a convenience function which converts any mathematical object into a counterfactual blob 𝔹|q|. this isn't really allowed, but it's just for the sake of example here.\n\n\n🟣 ***misato*** — okay…\n\n\n🟡 ***ritsuko*** — so, let's say the first call is QACI(α,γq,q\\*′(\"please produce a good action-scoring\")). the user can return *any expression*, as their action-scoring function — they can return λa:A.SomeUtilityMeasure(a) (a function taking an action a and returning some utility measure over it), but they can also return EvalMathUΔ(QACI(α,γq,q\\*′(\"here are some ideas: …\"))) where U≔A→[0;1] is the set of action-scoring functions. they get to *call themselves recursively*, and make progress in a sort of time-loop where they pass each other notes.\n\n\n🟣 ***misato*** — right, this is the long-reflection process you mentioned. and about the part where they get a mathematical oracle?\n\n\n🟡 ***ritsuko*** — so, the user can return things like:\n\n\nEvalMathUΔ(QACI(α,γq,q\\*′(SomeUncomputableQuery())))\n\n\nEvalMathUΔ(QACI(α,γq,q\\*′(Halts(SomeProgram,SomeInput)))).\n\n\n🟣 ***misato*** — huh. that's nifty.\n\n\n🟢 ***shinji*** — what if some weird memetic selection effects happen, or what if in one of the QACI intervals, the user randomly gets hit by a truck and then the whole scheme fails?\n\n\n🟡 ***ritsuko*** — so, the user can set up giant giant [acyclic graphs](https://en.wikipedia.org/wiki/Directed_Acyclic_Graph) of calls to themselves, providing a lot of redundancy. that way, if any single node fails to return a coherent output, the next nodes can notice this and keep working with their peer's output.\n\n\n🟡 ***ritsuko*** — a small graph of QACI can be accomplished with something like\n\n\nEvalMathUΔ(QACI(α,γq,q\\*′(EvalMathUΔ(QACI(α,γq,q\\*′(\"what about this…\"))),EvalMathUΔ(QACI(α,γq,q\\*′(\"what about that…\"))))))\n\n\n🟡 ***ritsuko*** — and you can imagine how this could be extended to a larger structure such as:\n\n\n![](qaci-invention-dialogue-graph.svg)\n\n\n#### 12. fictional agents\n\n\n🟢 ***shinji*** — wait a minute; what if the QACI user goes crazy from realizing that they're *fictional, inside an AI's hypothesis*, rather than in the physical world-state? especially if they have access to a goddamn math oracle.\n\n\n🟡 ***ritsuko*** — ah. uuh. well, don't panic shinji, but, **we are fictional. right now.**\n\n\n🟢 ***shinji*** *starts flailing around.*\n\n\n🟣 ***misato*** — what the hell are you talking about?\n\n\n🟡 ***ritsuko*** — misato, come on. we're inside a crazy high-tech compound designed to build and maintain giant robots to fight alien creatures, *and* we're discussing the entirely unrelated topic of AI alignment. oh, and, uh, there's this:\n\n\n🔵 ***rei*** *appears in front of them and waves at the camera.*\n\n\n🟣 ***misato*** — …so, why are we talking about alignment, instead of, i don't know, trying to escape?\n\n\n🟡 ***ritsuko*** — misato, this is *fiction*, you don't *escape*. 
instead, you reason about what agents are writing your fiction. based on prior, we should vaguely guess that they're like us, and that the conversation we're having is useful for them to write a fiction about. my guess is, we're a fiction written by someone who's trying to communicate ideas about AI alignment and, for some godforsaken reason i cannot imagine, decided that *we* should be the characters having this conversation.\n\n\n🟡 ***ritsuko*** — but don't be mistaken: if we are indeed fictional, then the future of this world we're observing barely matters. our parent world likely holds a lot more realityfluid. so we shou–\n\n\n🟢 ***shinji***, *shaking* — here comes this \"realityfluid\" [bullshit](https://carado.moe/up/52921a1c-bullshit.html) again!\n\n\n🟡 ***ritsuko*** — anyways, the coherent thing for fictional characters to do [is to try to steer their parent world, because it's generally \"realer\"](approximate-decisions.html). and generally, this is achieved by cooperating with whatever their parent world seems to be doing, so we should probably just continue to discuss AI alignment.\n\n\n🟡 ***ritsuko*** — and, in a similar manner, there are people who grok these facts sufficiently well that we would expect them to take the correct actions, even upon realizing that they are the fictional consideration of a superintelligence. given the explanation i just gave, i would claim that i could do it myself, for one.\n\n\n🔵 ***rei*** *magically erases everyone's memory of this section of the dialogue, and then disappears, out of view from everyone.*\n\n\n#### 13. top-level QACI call\n\n\n🟢 ***shinji*** — so we have QACI(α,γq,q′), the function which can call itself recursively. what's the top-level, terminal call to it which yields the action-scoring function?\n\n\n🟡 ***ritsuko*** — ah, i think it'd look like:\n\n\nπrQACI0(α,γq)(u)≔𝐌[1]πr:QACI(α,γq,q0′)u∈EvalMathU(πr)\n\n\n🟡 ***ritsuko*** — where q0′ is some initial counterfactual blob, such as the plaintext string \"please return a good scoring function over actions\" encoded in ASCII, and then padded with zeros to be of the size needed for a blob. QACI0 has type Γ|q|→ΔU — from a question location, it returns a distribution of action-scoring functions.\n\n\n🟣 ***misato*** — so like, the counterfactual user inside the QACI call should be able to return math that calls more QACI, but where do *they* get the α and γq?\n\n\n🟢 ***shinji*** — couldn't they return the whole math?\n\n\n🟡 ***ritsuko*** — ah, that's not gonna work — the chance of erroneous blob locations might accumulate too much if each QACI does a new question location sampling; we want something more realiable. an easy solution is to EvalMath the text not into a U, but into a Ω×Γ|q|→U and to pass it α,γq so that the user can return a function which receives those and uses them to call QACI.\n\n\n🟡 ***ritsuko*** — actually, while we're at it, we can pass a it whole lot more things it might need…\n\n\nπr,fQACI0(α,γq)(u)≔𝐌[1]πr:QACI(α,γq,q0′)f∈EvalMath{q}×Ω×Γ|q|→U(πr)f(q,α,γq)=u\n\n\n🟢 ***shinji*** — what's going on with f here?\n\n\n🟡 ***ritsuko*** — oh, this is just a trick of how we implement distributions — when measuring the mass of any specific u, we try to EvalMath the answer payload into a function f, and we only count the location when u is equal to f(q,α,γq) with useful parameters passed to it.\n\n\n🟣 ***misato*** — what's *around* QACI0? 
where do α and γq come from?\n\n\n🟡 ***ritsuko*** — so… remember this?\n\n\nhScore(a)≔𝐌[LooksLikeThisWorld(h)⋅HowGood(a,h)]h:Prior\n\n\n🟡 ***ritsuko*** — this is where we start actually plugging in our various parts. we'll assume some distribution over initial world-states Ωα∈ΔΩ and sample question locations γq in futures of those initial world-states — which will serve, for now, as the LooksLikeThisWorld.\n\n\nα,γqScore(a)≔𝐌[QACI0(α,γq)(a)]α:Ωαγq:Loc|q|(α,Ωα→(α),q)\n\n\n🟡 ***ritsuko*** — the actual AI we use will be of a type like U→HA, and so we can just call AI(Score), and execute its action guess.\n\n\n🟣 ***misato*** — and… that's it?\n\n\n🟡 ***ritsuko*** — well, no. i mean, the whole fundamental structure is here, but there's still a bunch of work we should do if we want to increase the chances that this produces the outcomes we want.\n\n\n#### 14. location prior\n\n\n🟡 ***ritsuko*** — so, right now each call to Loc penalizes f,g for being being too kolmogorov-complex. we could take advantage of this by encouraging our two different blob locations — the question location and the answer location — to share bits of information, rather than coming up with their own, possibly different bits of information. this increases the chances that the question is located \"in a similar way\" to the answer.\n\n\n🟣 ***misato*** — what does this mean, concretely?\n\n\n🟡 ***ritsuko*** — well, for example, they could have the same bits of information for *how to find bits of memory on a computer's memory on earth, encoded in our physics*, and then the two different Loc's f and g functions would only differ in what computer, what memory range, and what time they find their blobs in.\n\n\n🟡 ***ritsuko*** — for this, we'll define a set of \"location priors\" being sampled as part of the hypothesis that Score samples over — let's call it Ξ (xi). we might as well posit Ξ≔𝔹\\*.\n\n\n🟡 ***ritsuko*** — we'll also define KP,X−~:P→ΔX a kolmogorov simplicity measure which can use another piece of information, as, let's see…\n\n\nKP,X−~(p)(x)≔KP×X−(p,x)\n\n\n🟡 ***ritsuko*** — there we go, measuring the simplicity of the pair of the prior and the element favors information being shared between them.\n\n\n🟣 ***misato*** — wait, this fails to normalize now, doesn't it? because not all of P×X is sampled, only pairs whose first element is p.\n\n\n🟡 ***ritsuko*** — ah, you're right! we can simply normalize this distribution to solve that issue.\n\n\nKP,X−~(p)≔NormalizeX(λx:X.KP×X−(p,x))\n\n\n🟡 ***ritsuko*** — and in Score we'll simply add ξ:KΞ− and then pass ξ around to all blob locations:\n\n\nα,ξ,γqScore(u)≔𝐌[QACI0(α,γq,ξ)(u)]α:Ωαξ:KΞ−γq:Loc|q|(α,Ωα→(α),q,ξ)\n\n\nQACI0∈Ω×Γ|q|×Ξ→ΔU\n\n\nπr,fQACI0(α,γq,ξ)(u)≔𝐌[1]πr:QACI(α,γq,q0′,ξ)f∈EvalMath{q}×Ω×Γ|q|×Ξ→U(πr)f(q,α,γq,ξ)=u\n\n\n🟡 ***ritsuko*** — finally, we'll use it in Loc to sample f,g from:\n\n\nLocn∈Ω×ΔΩ×𝔹n×Ξ→ΔΓn\n\n\nf,g,ω,τ,b′Locn(α,δ,b,ξ)(γ)≔𝐌[SimilarPastsα(ω,g(b′,τ))R(g,(b′,τ))+R(f,g(b′,τ))](f,g):KΞ,(Ω→H𝔹n×𝔹\\*)×(𝔹n×𝔹\\*→HΩ)−~(ξ)ω:λω:maxXΔ(λω:Ω.{δ(ω)iff(ω)=(b,τ)0otherwise).δ(ω)b′:Uniform𝔹n∀b′′∈𝔹n:γ(b′′)=g(b′′,τ)f(γ(b′′))=(b′′,τ)\n\n\n#### 15. adjusting scores\n\n\n🟡 ***ritsuko*** — here's an issue: currently in Score, we're weighing hypotheses by how hard it is to find both the question and the answer.\n\n\n🟡 ***ritsuko*** — do you think that's wrong?\n\n\n🟣 ***misato*** — i think we should first ask for how hard it is to find questions, and then normalize the distribution of answers, so that harder-to-find answers don't penalize hypotheses. 
the reasoning behind this is that we want QACI graphs to be able to do a lot of complicated things, and that we hope question location is sufficient to select what we want already.\n\n\n🟡 ***ritsuko*** — ah, that makes sense, yeah! thankfully, we can just normalize right around the call to QACI0, before applying it to u:\n\n\nα,ξ,γqScore(u)≔𝐌[NormalizeU(QACI0(α,γq,ξ))(u)]α:Ωαξ:KΞ−γq:Loc|q|(α,Ωα→(α),q,ξ)\n\n\n🟢 ***shinji*** — what happens if we don't get the blob locations we want, exactly?\n\n\n🟡 ***ritsuko*** — well, it depends. there are two kinds of \"blob mislocations\": [\"naive\" and \"adversarial\" ones](blob-causality.html). naive mislocations are hopefully not a huge deal; considering that we're doing average scoring over all scoring functions weighed by mass, hopefully the \"signal\" from our aligned scoring functions beats out the \"noise\" from locations that select the wrong thing at a random place, like \"[boltzmann](https://en.wikipedia.org/wiki/Boltzmann_brain) blobs\".\n\n\n🟡 ***ritsuko*** — adversarial blobs, however, are tougher. i expect that they mostly result from unfriendly alien superintelligences, as well as earth-borne AI, both unaligned ones and ones that might result from QACI. against those, i hope that inside QACI we come up with some [good decision theory](https://arbital.com/p/logical_dt/) that lets us not worry about that.\n\n\n🟣 ***misato*** — actually, didn't someone recently publish some work on a [threat-resistant utility bargaining function](https://www.lesswrong.com/posts/vJ7ggyjuP4u2yHNcP/threat-resistant-bargaining-megapost-introducing-the-rose), called \"Rose\"?\n\n\n🟡 ***ritsuko*** — oh, nice! well in that case, if Rose is of type ΔU→U, then we can simply wrap it around all of Score:\n\n\nα,ξ,γqScore≔Rose(λu:U.𝐌[NormalizeU(QACI0(α,γq,ξ))(u)])α:Ωαξ:KΞ−γq:Loc|q|(α,Ωα→(α),q,ξ)\n\n\n🟡 ***ritsuko*** — note that we're putting the whole thing inside an anonymous λ-function, and assigning to Score the result of applying Rose to that distribution.\n\n\n#### 16. observations\n\n\n🟢 ***shinji*** — you know, i feel like there ought to be some better ways to select hypotheses that look like our world.\n\n\n🟡 ***ritsuko*** — hmmm. you know, i do feel like if we had some \"observation\" bitstring μ∈𝔹\\* (mu) which strongly identifies our world, like a whole dump of wikipedia or something, that might help — something like γμ:Loc|μ|(α,Ωα→(α),μ,ξ). but how do we tie that into the existing set of variables serving as a sampling?\n\n\n🟣 ***misato*** — we could look for the question q in futures of the observation world-state– how do we get that world-state again?\n\n\n🟡 ***ritsuko*** — oh, if you've got γμ you an reconstitute the factual observation world-state with γμ(μ).\n\n\n🟣 ***misato*** — in that case, we can just do:\n\n\nα,ξ,γμ,γqScore≔Rose(λu:U.𝐌[NormalizeU(QACI0(α,γq,ξ))(u)])α:Ωαξ:KΞ−γμ:Loc|μ|(α,Ωα→(α),μ,ξ)γq:Loc|q|(α,Ωα→(γμ(μ)),q,ξ)\n\n\n🟡 ***ritsuko*** — oh, neat! actually, couldn't we generate *two* blobs and sandwich the question blob between the two?\n\n\n🟣 ***misato*** — let's see here, the second observation can be μ2…\n\n\nα,ξ,γμ1,γμ2,γqScore≔Rose(λu:U.𝐌[NormalizeU(QACI0(α,γq,ξ))(u)])α:Ωαξ:KΞ−γμ1:Loc|μ1|(α,Ωα→(α),μ1,ξ)γμ2:Loc|μ2|(α,Ωα→(γμ1(μ1)),μ2,ξ)γq:Loc|q|(α,Ωα→(γμ1(μ1)),q,ξ)\n\n\n🟣 ***misato*** — how do i sample the γq location from both the future of γμ1 *and* the past of γμ2?\n\n\n🟡 ***ritsuko*** — well, i'm not sure we want to do that. remember that Loc tries to find the *very first* matching world-state for any f,g. 
instead, how about this:\n\n\nα,ξ,γμ1,γμ2,γqScore≔Rose(λu:U.𝐌[NormalizeU(QACI0(α,γq,ξ))(u)])α:Ωαξ:KΞ−γμ1:Loc|μ1|(α,Ωα→(α),μ1,ξ)γμ2:Loc|μ2|(α,Ωα→(γμ1(μ1)),μ2,ξ)γq:Loc|q|(α,Ωα→(γμ2(μ2)),q,ξ)Ωα→(γq(q))(γμ2(μ2))>Ωα→(γμ2(μ2))(γq(q))\n\n\n🟡 ***ritsuko*** — it's a bit hacky, but we can simply demand that \"the μ2 world-state be in the future of the q world-state more than the q world-state is in the future of the μ2 world-state\".\n\n\n🟣 ***misato*** — huh. i guess that's… one way to do it.\n\n\n🟢 ***shinji*** — could we encourage the blob location prior to use the bits of information from the observations? something like…\n\n\nα,ξ,γμ1,γμ2,γqScore≔Rose(λu:U.𝐌[NormalizeU(QACI0(α,γq,ξ))(u)])α:Ωαξ:K𝔹\\*×𝔹\\*,Ξ−~(μ1,μ2)γμ1:Loc|μ1|(α,Ωα→(α),μ1,ξ)γμ2:Loc|μ2|(α,Ωα→(γμ1(μ1)),μ2,ξ)γq:Loc|q|(α,Ωα→(γμ2(μ2)),q,ξ)Ωα→(γq(q))(γμ2(μ2))>Ωα→(γμ2(μ2))(γq(q))\n\n\n🟡 ***ritsuko*** — nope. because then, Loc's f programs can simply return the observations as constants, rather than finding them in the world, which defeats the entire purpose.\n\n\n🟣 ***misato*** — …so, what's in those observations, exactly?\n\n\n🟡 ***ritsuko*** — well, μ2 is mostly just going to be μ1 with \"more, newer content\". but the core of it, μ1, could be a whole lot of stuff. a dump of wikipedia, a callable of a some LLM, whatever else would let it identify our world.\n\n\n🟢 ***shinji*** — can't we just, like, plug the AI into the internet and let it gain data that way or something?\n\n\n🟡 ***ritsuko*** — so there's like *obvious security concerns here*. but, assuming those were magically fixed, i can see a way to do that: μ1 could be a function or mapping rather than a bitstring, and while the AI would observe it *as* a constant, it could be lazily evaluated. including, like, Fetch(Url) could be a fully [memoized](https://en.wikipedia.org/wiki/Memoization) function — such that the AI can't observe any mutable state — but it would still point to the world. in essence, this would make the AI point to *the entire internet* as its observation, though of course it would in practice be unable to obtain all of it. but it could *navigate it* just as if it was a mathematical object.\n\n\n🟣 ***misato*** — interesting. though of course, the security concerns make this probably unviable.\n\n\n🟡 ***ritsuko*** — hahah. yeah. oh, and we probably want to pass μ1,μ2 inside QACI0:\n\n\nπr,fQACI0(α,γq,ξ)(u)≔𝐌[1]πr:QACI(α,γq,q0′,ξ)f∈EvalMath{q}×{μ1}×{μ2}×Ω×Γ|q|×Ξ→U(πr)f(q,μ1,μ2,α,γq,ξ)=u\n\n\n#### 17. where next\n\n\n🟣 ***misato*** — so, is that it then? are we [done](qaci-math.html)?\n\n\n🟡 ***ritsuko*** — hardly! i expect that there's **a lot more work to be done**. but this is a **solid foundation, and direction to explore. it's kind of the only thing that feels like a path to saving the world.**\n\n\n🟢 ***shinji*** — you know, the math can seem intimidating at first, but actually it's **not *that* complicated**. one can figure out this math, especially if they get to [ask questions in real time to the person who invented that math](https://discord.gg/kXHxE4J6H2).\n\n\n🟡 ***ritsuko*** — for sure! it should be noted that [i'm not particularly qualified at this. my education isn't in math *at all* — i never really did math seriously before QACI.](so-you-think-not-qualified-alignment.html) the only reason why i'm making the QACI math is that so far barely anyone else will. 
but i've seen at least one other person try to learn about it and come to understand it somewhat well.\n\n\n🟢 ***shinji*** — what are some directions which you think are worth exploring, for people who want to help improve QACI?\n\n\n🟡 ***ritsuko*** — oh boy. well, here are some:\n\n\n* find things that are broken about the current math, and ideally help fix them too.\n* think about utility function bargaining more — notably, perhaps scores are [regularized](https://en.wikipedia.org/wiki/Regularization_%28mathematics%29), such as maybe by weighing ratings that are more \"extreme\" (further away from 12) as less probable. alternatively, maybe scoring functions have a finite amount of \"votestuff\" that they get to distribute amongst all options the way a normalizing distribution does, or maybe we implement something kinda like [quadratic voting](https://en.wikipedia.org/wiki/Quadratic_voting)?\n* think about how to make a lazily evaluated observation viable. i'm not sure about this, but it *feels* like the kind of direction that might help avoid unaligned alien AIs capturing our locations by bruteforcing blob generation using many-worlds.\n* generally figure out more ways to ensure that the blob locations match the world-states we want — both by improving Loc and Sign, and by finding more clever ways to use them — you saw how easy it was to add two blob locations for the two observations μ1,μ2.\n* think about turning this scheme into a [continuous rather than one-shot AI](delegated-embedded-agency-decision-theory.html). (possibly [exfo](https://www.lesswrong.com/posts/yET7wbjjJZtpz6NF3/don-t-use-infohazard-for-collectively-destructive-info)hazardous, [do not publish](publishing-infohazards.html))\n* related to that, think about ways to make the AI aligned not just with regards to its guess, but also with regards to its side-effects, so as to avoid it wanting to [exploit its way out](https://en.wikipedia.org/wiki/Rowhammer). (possibly exfohazardous, do not publish)\n* alternatively, think about how to box the AI so that the output with regards to which it is aligned is its only meaningful source of world-steering.\n* one thing we didn't get into much is what could actually be behind Ω, Ω→, and SimilarPasts. you can read more about those [here](qaci-math.html), but i don't have super strong confidence in the way they're currently put together. in particular, it would be great if someone who groks physics a lot more than me thought about whether many-worlds gives unaligned alien superintelligences the ability to forge any blob or observation we could put together in a way that would capture our AI's blob location.\n* maybe there are some ways to avoid this by tying the question world-state with the AI's action world-state? maybe implementing [embedded agency](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh/p/i3BTagvt3HbPMx6PN) helps with this? note that blob location can totally *locate the AI's action*, and use that to produce counterfactual action world-states. maybe that is useful. (possibly exfohazardous, do not publish)\n* think about Sign and the ExpensiveHash function ([see the full math post](qaci-math.html)) and how to either implement it or achieve a similar effect otherwise. 
for example, maybe instead of relying on an expensive hash, we can formally define that f,g need to be \"consequentialist agents trying to locate the blob in the way we want\", rather than *any program that works*.\n* think about how to make counterfactual QACI intervals resistant to someone launching unaligned superintelligence within them.\n\n\n🟣 ***misato*** — ack, i didn't really think of that last one. yeah, that sounds bad.\n\n\n🟡 ***ritsuko*** — yup. in general, i could also do with people who could help with *inner-alignment-to-a-formal-goal*, but that's a lot more hazardous to work on. hence why we have not talked about it. but there is work to be done on that front, and people who think they have insights should probly contact us *privately* and *definitely not publish them*. interpretability people are doing enough damage to the world as it is.\n\n\n🟢 ***shinji*** — well, things don't look great, but i'm glad this plan is around! i guess it's *something*.\n\n\n🟡 ***ritsuko*** — i know right? that's how i feel as well. lol.\n\n\n🟣 ***misato*** — lmao, even.\n\n\n\n\n---\n\n\n![](qaci-invention-dialogue-footer.webp)", "date_published": "2023-06-09T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "f69878240c14d9a0fd5f78c2d49845ef", "title": "Orthogonal's", "url": "https://carado.moe/formal-alignment-theory-change.html", "source": "carado.moe", "source_type": "blog", "text": "Orthogonal's *Formal-Goal Alignment* theory of change\n-----------------------------------------------------\n\n\nWe recently announced [Orthogonal](https://www.lesswrong.com/posts/b2xTk6BLJqJHd3ExE/orthogonal-a-new-agent-foundations-alignment-organization), an agent foundations alignment research organization. In this post, I give a thorough explanation of the [formal-goal alignment](formal-alignment.html) framework, the motivation behind it, and the theory of change it fits in.\n\n\nThe overall shape of what we're doing is:\n\n\n* Building a formal goal which would lead to good worlds when pursued — our best candidate for this is [QACI](qaci.html)\n* Designing an AI which takes as input a formal goal, and returns actions which pursue that goal in the distribution of worlds we likely inhabit\n\n\n### Backchaining: aiming at solutions\n\n\nOne core aspect of our theory of change is **[backchaining](https://www.lesswrong.com/posts/DwoPGM8ytBCXrZpM7/backchaining-in-strategy)**: come up with an *at least remotely plausible* [story](narrative-explanation-qaci.html) for how the world is saved from [AI doom](ai-doom.html), and try to think about how to get there. This avoids spending lots of time getting confused about concepts that are confusing because they were the wrong thing to think about all along, such as \"what is the shape of human values?\" or \"what does GPT4 want?\" — our intent is to study things that fit together to form a full plan for saving the world.\n\n\n### Alignment engineering and agent foundations\n\n\nAlignment is not just not the default, it's a **very narrow target**. As a result, there are **[many bits](alignment-bits.html) of non-obvious work** which need to be done. 
Alignment isn't just finding the right weight to sign-flip to get the AI to switch from evil to good; it is the hard work of *putting together something which coherently and robustly points in a direction we like*.\n\n\nas [yudkowsky puts it](https://twitter.com/ESYudkowsky/status/1626609128859922434):\n\n\n\n> The idea with agent foundations, which I guess hasn't successfully been communicated to this day, was finding a coherent target to try to get into the system by any means (potentially including DL ones).\n> \n> \n\n\nAgent foundations/[formal-goal alignment](formal-alignment.html) is **not** fundamentally about *doing math* or *being theoretical* or *thinking abstractly* or *proving things*. Agent foundations/formal-goal alignment is about *building a coherent target which is fully made of math — not of human words with unspecified meaning — and figuring out a way to make that target maximized by AI*. Formal-goal alignment is about building a fully formalized goal, not about going about things in a \"formal\" manner.\n\n\nCurrent AI technologies **are not [strong agents pursuing a coherent goal](strongly-generally-coherent-agents.html)** (SGCA). The reason for this is not because this kind of technology is impossible or too confusing to build, but because **in worlds in which SGCA was built (and wasn't aligned), [we die](anthropics-example.html)**. Alignment ultimately is about making sure that the first SGCA pursues a desirable goal; the default is that its goal will be undesirable.\n\n\nThis does not mean that I think that someone needs to figure out how to build SGCA for the world to end from AI; what I expect is that there are ways in which SGCA can emerge out of the current AI paradigm, in ways that don't particularly let us choose what goal it pursues.\n\n\n### You do not align AI; you build aligned AI.\n\n\nBecause this emergence does not let us pick the SGCA's goal, we need to [design an SGCA whose goal we *do* get to choose](clarifying-formal-alignment-implementation.html); and separately, we need to [design such a goal](formal-alignment.html). **I expect that pursuing straightforward progress on current AI technology leads to an SGCA whose goal we do not get to choose and which leads to extinction.**\n\n\nI do not expect that current AI technology is of a kind that makes it easy to \"align\"; I believe that the whole idea of building a strange non-agentic AI about which the notion of goal barely applies, and then to try and make it \"be aligned\", was fraught from the start. **If current AI was powerful enough to save the world once \"aligned\", it would have already killed us before we \"aligned\" it.** To save the world, we have to *design something new* which pursues a goal *we get to choose*; and that design needs to have this in mind *from the start*, rather than as an afterthought.\n\n\n### AI applies to alignment, not alignment to AI\n\n\nAt this point, many answer \"but this novel technology won't be built in time to save the world from unaligned AGI!\"\n\n\nFirst, it is plausible that after we have designed an AI that would save the world, we'll end up reaching out to the large AI organizations and ask them to merge and assist with our alignment agenda. While \"applying alignment to current AI\" is fraught, using current AI technologies in the course of designing this world-saving SGCA *is* meaningful. **Current AI technology can serve as a component of alignment, not the other way around.**\n\n\nBut second: yes, we still mostly die. 
I do not expect that our plan saves most timelines. I merely believe it saves most of the worlds that are saved. We will not save >50% of worlds, or maybe even >10%; but we will have [produced dignity](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy); we will have significantly increased the ratio of worlds that survive. This is unfortunate, but I believe it is the best that can be done.\n\n\n### Pursuing formal goals vs ontology wrangling\n\n\nBecause of a lack of backchaining, I believe that most current methods to try and wrangle what goes on inside current AI systems are not just the wrong way to go about things, but **net harmful when published**.\n\n\nAI goals based on trying to point to things we care about inside the AI's model are the wrong way to go about things, because they're susceptible to ontology breaks and to failing to carry over to next steps of self-improvements that a world-saving AI should want to go through.\n\n\nInstead, the aligned goal we should be putting together should be [eventually aligned](ai-alignment-curves.html); it should be aligned *starting from a certain point* (which we'd then have to ensure the system we launch is already past), rather than *[up to a certain point](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization)*.\n\n\n**The aligned goal should be \"formal\". It should be made of fully formalized math, not of human concepts that an AI has to interpret in its ontology, because ontologies break and reshape as the AI learns and changes.** The aligned goal should have the *factual property* that a computationally *unbounded* mathematical oracle being given that goal would take desirable actions; and then, we should design a computationally *bounded* AI which is good enough to take *satisfactory* actions. **I believe this is the only way to design an AI whose actions we still have confidence in the desirability of, even once the AI is out of our hands and is augmenting itself to unfathomable capabilities; and I believe it needs to get out of our hands and augment itself to unfathomable capabilities, in order for it to save the world.**\n\n\n### Conclusion\n\n\nI, and now other researchers as well, believe this agenda is worthy of considerably more investigation, and is our best shot at making it out of the acute risk period by ensuring that superintelligent AI can lead to astronomical good instead of extinction.\n\n\nOur viewpoint seems in many ways similar to that of [MIRI](https://intelligence.org/) and we intend to continue in our efforts to engage with MIRI researchers, because we believe that they are the research organization which would be most amenable to collaboration on this agenda.\n\n\nWhile we greatly favor the idea of governance and coordination helping with alignment, the timelines seem too short for this to make a significant difference aside from buying a few years at most, and we are greatly concerned with AI risk awareness causing more people or even [governments](https://www.theguardian.com/business/2023/feb/22/uk-needs-its-own-britgpt-or-will-face-an-uncertain-future-mps-hear) to react by finding AI impressive and entering the race, making things overall worse.\n\n\nWe believe that the correct action to take is to [continue working on the hard problem of alignment](continue-working-hard-alignment.html), and we believe that our research agenda is the most promising path to solving it. 
this is the foundational motivation for [the creation of our research organization](https://orxl.org/announcement.html).", "date_published": "2023-05-05T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "1ecb038c5ec08a84beba5d1920edd0e5", "title": "the multiverse argument against automated alignment", "url": "https://carado.moe/multiverse-argument-automated-alignment.html", "source": "carado.moe", "source_type": "blog", "text": "the multiverse argument against automated alignment\n---------------------------------------------------\n\n\nthere are a bunch of \"normal\" reasons why i don't particularly favor [using AI systems to do alignment research](https://openai.com/blog/our-approach-to-alignment-research), but rather [building something else](qaci.html) (which might very well be powered by current AI techniques, but also has novel, cleverly-designed parts which take care of alignment):\n\n\n* AI will select for plans that we approve of, not plans that are good\n* it does not seem like we have to wait for those AIs to be useful (i found [a workable alignment plan](qaci.html) without AI assistance)\n\n\nbut in this post, i present a wackier argument. let's say we live in some kind(s) of multiverse — be it by way of [tegmark 1](https://web.archive.org/web/20230326140409/https://space.mit.edu/home/tegmark/crazy.html), [tegmark 3](https://www.lesswrong.com/tag/many-worlds-interpretation), or [tegmark 4](spoiler-fire-upon-deep.html). then, let's compare the following plans:\n\n\n1. we solve alignment using technology B before technology A kills us\n2. we solve alignment using technology A before technology A kills us\n\n\nin scenario 1, where our alignment progress does not depend on progress in the technology that kills us, the question \"do we solve alignment before we die?\" is largely an [indexical one](https://www.lesswrong.com/posts/SFLCB5BgjzruJv9sp/logical-and-indexical-uncertainty) — there are manyworlds branches where we do, and branches where we don't. but in scenario 2, the question \"do we solve alignment before we die?\" is moreso a [logical one](https://www.lesswrong.com/posts/SFLCB5BgjzruJv9sp/logical-and-indexical-uncertainty) — it could be that in all manyworlds branches the answer is yes, but it could also be that in all manyworlds branches the answer is no, and [that's arguably a larger risk](logical-indexical-dignity.html) than just [\"bleeding timelines\"](bracing-alignment-tunnel.html).\n\n\n(notice the use of \"largely\" and \"moreso\" — both questions are partly indexical and partly logical, just to different degrees)\n\n\nthis doesn't mean using AI to solve alignment is necessarily completely fraught. the approach would have more chances of working out if access to language models was very stringently restricted to sufficiently trustworthy alignment researchers — where trustworthiness is not just \"will not publish capabilities\" but \"will not let insights slip which would be heard, perhaps indirectly, by someone who would publish capabilities\". there are ways to develop dangerous AI without killing everyone, if one is **extremely** careful. 
OpenAI is just *not doing that*, and instead giving access to its systems to the masses and even planning to develop APIs to accelerate the capabilities of those systems as much as possible.\n\n\nnote that we should still consider using *already existing* dangerous technologies — this argument does not apply to [cyborgism using current language models](https://www.lesswrong.com/posts/bxt7uCiHam4QXrQAA/cyborgism), so long as the alignment cyborgists are **extremely careful about not letting out any kind of insights they gain about language models**.", "date_published": "2023-04-21T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "cc95e9adfbbdf6bda5e02b6d26705016", "title": "opinions on the consequences of AI", "url": "https://carado.moe/map-opinions-ai.html", "source": "carado.moe", "source_type": "blog", "text": "opinions on the consequences of AI\n----------------------------------\n\n\n![](map-opinions-ai.png)", "date_published": "2023-03-25T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "380a6182e7c34a0b180f679242406d35", "title": "continue working on hard alignment! don't give up!", "url": "https://carado.moe/continue-working-hard-alignment.html", "source": "carado.moe", "source_type": "blog", "text": "continue working on hard alignment! don't give up!\n--------------------------------------------------\n\n\nlet's call \"hard alignment\" the ([\"orthodox\"](https://scottaaronson.blog/?p=6821)) problem, historically worked on by MIRI, of preventing [strong agentic AIs](strongly-generally-coherent-agents.html) from pursuing [things we don't care about](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) by [default](https://www.lesswrong.com/tag/orthogonality-thesis) and [destroying everything of value to us](ai-doom.html) on [the way there](https://en.wikipedia.org/wiki/Instrumental_convergence). let's call \"easy\" alignment the set of perspectives where some of this model is wrong — some of the assumptions are relaxed — such that saving the world is easier or more likely to be the default.\n\n\nwhat should one be working on? as always, the calculation consists of comparing\n\n\n* p(`hard`) × how much value we can get in `hard`\n* p(`easy`) × how much value we can get in `easy`\n\n\nbecause of how AI capabilities are going, i've seen for people start [playing their outs](https://en.wikipedia.org/wiki/Out_%28poker%29) — that is to say, to start acting as if alignment is easy, because if it's not we're doomed anyways. but i think, in this particular case, this is wrong.\n\n\nthis is the lesson of [*dying with dignity*](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy) and [*bracing for the alignment tunnel*](bracing-alignment-tunnel.html): we should be cooperating with our counterfactual selves and continue to save the world in whatever way actually seems promising, rather than taking refuge in falsehood.\n\n\nto me, p(`hard`) is big enough, and [my `hard`-compatible plan](qaci.html) seems workable enough, that it makes sense for me to continue to work on it.\n\n\nlet's not give up on the assumptions which are true. 
there is still work that can be done to *actually* generate some dignity under the assumptions that are *actually* true.", "date_published": "2023-03-23T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "bb5145d92e5402d00aae56d47589f772", "title": "the QACI alignment plan: table of contents", "url": "https://carado.moe/qaci.html", "source": "carado.moe", "source_type": "blog", "text": "the QACI alignment plan: table of contents\n------------------------------------------\n\n\n![](qaci-math-1.svg)\n\n\nthis post aims to keep track of posts relating to the **question-answer counterfactual interval** proposal for [AI alignment](ai-doom.html), abbreviated \"**QACI**\" and pronounced \"quashy\". i'll keep it updated to reflect the state of the research.\n\n\nthis research is primarily published on [**the Orthogonal website**](https://orxl.org/) and discussed on [**the Orthogonal discord**](https://discord.gg/kXHxE4J6H2).\n\n\nas a **top-level view of QACI**, you might want to start with:\n\n\n* [**an Evangelion dialogue explaining QACI**](qaci-invention-dialogue.html)\n* [**a narrative explanation of QACI**](narrative-explanation-qaci.html)\n* [**Orthogonal's *Formal-Goal Alignment* theory of change**](formal-alignment-theory-change.html)\n* [**formalizing the QACI formal-goal**](qaci-math.html)\n\n\nthe set of all posts relevant to QACI includes:\n\n\n* as **overviews of QACI and how it's going**:\n\t+ [**state of my research agenda**](state-research-agenda.html)\n\t+ [**problems for formal alignment**](formal-alignment-problems.html)\n\t+ [**the *Formal-Goal Alignment* theory of change**](formal-alignment-theory-change.html)\n\t+ [the original post introducing **QACI**](question-answer-counterfactual-intervals.html)\n* on the **formal alignment** perspective within which it fits:\n\t+ [**formal alignment: what it is, and some proposals**](formal-alignment.html)\n\t+ [**clarifying formal alignment implementation**](clarifying-formal-alignment-implementation.html)\n\t+ on [**being only polynomial capabilities away from alignment**](capabilities-away-great-problem.html)\n* on the **[blob](qaci-blobs-interval-illustrated.html) location** problem:\n\t+ [**QACI blobs and interval illustrated**](qaci-blobs-interval-illustrated.html)\n\t+ [**counterfactual computations in world models**](counterfactual-computation-in-world-models.html)\n\t+ [**QACI: the problem of blob location, causality, and counterfactuals**](blob-causality.html)\n\t+ [**QACI blob location: no causality & answer signature**](blob-location.html)\n\t+ [**QACI blob location: an issue with firstness**](blob-quantum-issue.html)\n* on **QACI as an implementation of long reflection / [CEV](https://www.lesswrong.com/tag/coherent-extrapolated-volition)**:\n\t+ **[CEV can be coherent enough](cev-coherent-enough.html)**\n\t+ **[some thoughts about terminal alignment](terminal-alignment-solutions.html)**\n* on **formalizing the QACI formal goal**:\n\t+ **[a rough sketch of formal aligned AI using QACI](rough-sketch-formal-aligned-ai.html)** with some actual math\n\t+ [**one-shot AI, delegating embedded agency and decision theory, and one-shot QACI**](delegated-embedded-agency-decision-theory.html)\n* on how a formally aligned AI would actually **run over time**:\n\t+ [**AI alignment curves**](ai-alignment-curves.html)\n\t+ [**before the sharp left turn: what wins first?**](sharp-left-turn-what-wins-first.html)\n* on the **metaethics** grounding QACI:\n\t+ [**surprise! 
you want what you want**](surprise-you-want.html)\n\t+ [**outer alignment: two failure modes and past-user satisfaction**](outer-alignment-past-user.html)\n\t+ [**your terminal values are complex and not objective**](values-complex-not-objective.html)\n* on my view of **the AI alignment research field** within which i'm doing formal alignment:\n\t+ [**my current outlook on AI risk mitigation**](outlook-ai-risk-mitigation.html)\n\t+ [**a casual intro to AI doom and alignment**](ai-doom.html)", "date_published": "2023-03-20T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "5142e68365d20979d65df8dca2fef056", "title": "you can't simulate the universe from the beginning?", "url": "https://carado.moe/cant-simulate-the-universe.html", "source": "carado.moe", "source_type": "blog", "text": "you can't simulate the universe from the beginning?\n---------------------------------------------------\n\n\n[QACI](qaci.html) and plausibly [PreDCA](predca.html) rely on a [true name](https://www.lesswrong.com/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation) of phenomena in the real world using [solomonoff induction](https://www.lesswrong.com/posts/EL4HNa92Z95FKL9R2/a-semitechnical-introductory-dialogue-on-solomonoff-1), and thus talk about locating them in a theoretical giant computation of the universe, from the beginning. it's reasonable to be concerned that there isn't enough compute for an aligned AI to actually do this. however, i have two responses:\n\n\n* *isn't* there enough compute? supposedly, our past lightcone is a lot smaller than our future lightcone, and quantum computers seem to work. this is evidence that we *can*, at least in theory, build within our future lightcone a quantum computer simulating our past lightcone. the major hurdle here would be \"finding out\" a fully explanatory \"initial seed\" of the universe, which *could* take exponential time, but also could maybe not.\n* we don't *need* to simulate past lightcone. if you ask me what my neighbor was thinking yesterday at noon, the answer is that i don't know! the world might be way too complex to figure that out without simulating it and scanning his brain. however, i have a *reasonable distribution over guesses*. he was more likely to think about french things than korean things. he was more likely to think about his family than my family. et cetera. 
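(as a toy illustration of this kind of distribution-over-guesses reasoning — a minimal sketch where every guess, weight, and utility value is made up for illustration, and which is not part of any actual QACI math:)

```python
# toy sketch: a weighted distribution over guesses about what the neighbor was
# thinking, and the action with the highest expected utility under it.
# every guess, weight, and utility value below is made up for illustration.

guesses = {
    "french things": 0.5,
    "his family":    0.4,
    "korean things": 0.1,
}

# hypothetical utilities of each candidate action under each guess
utilities = {
    "bring croissants": {"french things": 1.0, "his family": 0.2, "korean things": 0.1},
    "ask about family": {"french things": 0.3, "his family": 1.0, "korean things": 0.3},
}

def expected_utility(action: str) -> float:
    return sum(weight * utilities[action][guess] for guess, weight in guesses.items())

best_action = max(utilities, key=expected_utility)
print(best_action, expected_utility(best_action))  # refining the weights refines the choice
```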
an aligned superintelligence can hold an ever increasingly refined distribution of guesses, and then maximize the expected utility of utility functions corresponding to each guess.", "date_published": "2023-03-19T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "81225de1975152f1d4930238cda117a1", "title": "QACI blob location: an issue with firstness", "url": "https://carado.moe/blob-quantum-issue.html", "source": "carado.moe", "source_type": "blog", "text": "QACI blob location: an issue with firstness\n-------------------------------------------\n\n\nfor [QACI](qaci.html) i need to figure out the [problem](formal-alignment-problems.html) of [blob](qaci-blobs-interval-illustrated.html) location (see thoughts [1](blob-causality.html), [2](https://carado.moe/blob-location.html)).\n\n\nin this post i bring up a particular issue: in [many-worlds](https://www.lesswrong.com/tag/many-worlds-interpretation), which is the likely correct interpretation of quantum mechanics, selecting the \"first\" (in time) instance of the blob might be just wrong.\n\n\nhere are two failure modes illustrate why:\n\n\n* at all times including nearer to the big bang, there are exponentially many enough decohered branches of the universe that some happen to contain the question blob, and even some large macro-phenomena encoding it. this is a *naive* failure mode.\n* in timelines where unaligned superintelligence was launched in the past — whether by us or aliens — some of those superintelligences are gonna guess that *we're* gonna do QACI, and they're gonna do enough quantum coinflips to generate exponentially many enough timelines to include the ones with our question blob, and by having those be earlier in time than *our* question blob, they'll get to hijack the question-answer interval. this is an *adverserial* failure mode.\n\n\nfurthermore, we can't just trace a \"causality\" or \"continuity\" between the question and the AI being launched, because in the adverserial failure mode, the adverserial superintelligence can simply run a simulation inside of which there is such a continuity or causality.\n\n\nmy thoughts about possible solutions are thus:\n\n\n* maybe an exception to {adverserial superintelligence being able to fake being causally ahead} would be if we define causality such that us doing QACI is *causally upstream* of an adverserial superintelligence hijacking it — after all, there's a sense in which it's hijacking QACI \"because\" we are or might be doing QACI — but this seems like a possibly difficult [true name](https://www.lesswrong.com/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation)\n* maybe rule out adverserial instances using [a notion of agency akin to the one in PreDCA](predca.html)? maybe this also helps detect us and thus avoid naive failures too?\n* use a clever scheme which lets something like a question-blob or question-process be manifested in a way that can't be bruteforced by generating exponentially many quantum timelines? 
what are some [computational complexity classes greater than EXPTIME](https://en.wikipedia.org/wiki/Computational_complexity_theory), or greater than whichever computational complexity class describes the set of possible timelines one gets to instantiate in many-worlds?\n* in some other way, figure out a [true name](https://www.lesswrong.com/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation) for \"naive world, not engineered by an adverserial superintelligence?", "date_published": "2023-03-19T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "124ef52bcb611bb77a0646f992b1838b", "title": "your terminal values are complex and not objective", "url": "https://carado.moe/values-complex-not-objective.html", "source": "carado.moe", "source_type": "blog", "text": "your terminal values are complex and not objective\n--------------------------------------------------\n\n\na lot of people seem to want [terminal](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value) (aka intrinsic aka axiomatic) [values](what-is-value.html) (aka ethics aka morality aka preferences aka goals) to be *simple and elegant*, and to be [*objective and canonical*](https://en.wikipedia.org/wiki/Moral_realism). this carries over from epistemology, where [we do favor simplicity and elegance](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor).\n\n\n[we have uncertainty about our values](what-is-value.html), and it is true that *our model of* our values should, as per epistemology, generally tend to follow [a simplicity prior](https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor). but that doesn't mean that *our values themselves* are simple; they're definitely evidently complex enough that just thinking about them a little bit should make you realize that they're much more complex than the kind of simple model people often come up with.\n\n\nboth for modeling the world and for modeling your values, you should favor simplicity *as a prior* and then *update by filtering for hypotheses that match evidence*, because *the actual territory is big and complex*.\n\n\nthere is no objectively correct universal metaethics. there's just a large, complex, tangled mess of stuff that is [hard to categorize](guess-intrinsic-values.html) and contains not just *human notions* but also *culturally local notions* of love, happiness, culture, freedom, friendship, art, comfort, diversity, etc. and yes, these are **terminal** values; there is no simple process that re-derives those values. i believe that **there is no thing for which i instrumentally value love or art, which if you presented me something else that does that thing better, i would happily give up on love/art. i value those things *intrinsically*.**\n\n\nif you talk of \"a giant cosmopolitan value handshake between everyone\", then picking that rather than [paperclips](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer), while intuitive to *you* (because *you have your values*) and even to other humans doesn't particularly track anything universally canonical.\n\n\neven within the set of people who claim to have cosmopolitan values, *how conflicts are resolved* and *what \"everyone\" means* and many other implementation details of cosmopolitanism will differ from person to person, and again *there is no canonical unique choice*. 
your notion of cosmopolitanism is a very complex object, laden with not just human concepts but also cultural concepts you've been exposed to, which many other humans don't share both across time and space.\n\n\nthere is no \"metaethics ladder\" you can which climb up in order to resolve this in an objective way for everyone, not even all humans — *what ladder* and *how you climb it* is still a complex subjective object laden with human concepts and concepts from your culture, and there is no such thing as a \"pure\" you or a \"pure\" person without those.\n\n\nsome people say \"simply detect all agents in the cosmos and do a giant value handshake between those\"; but on top of the previous problems for implementation details, this has the added issue that the things whose values we want to be satisfied aren't *agents* but *[moral patients](moral-patient-term.html)*. those don't necessarily match — superintelligent [grabby](https://grabbyaliens.com/) agents shouldn't get undue amounts of power in the value handshake.\n\n\nsome people see the simplicity of paperclips as the problem, and declare that complexity or negentropy or something like that is the *ultimate good*. but [a superintelligence maximizing for that](core-vals-exist-selfdet.html) would just fill the universe with maximally random noise, as opposed to preserving the things you like. turns out, [\"i want whatever is complex\" is not sufficient to get our values](https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence); they're not just *anything complex* or *complexity itself*, they're an *extremely specific* complex set of things, as opposed to *other* equally complex sets of things.\n\n\nentropy just doesn't have much to do with terminal values whatsoever. sure, it has a lot to do with *instrumental* values: negentropy is the resource we have to allocate to the various things we want. but that's secondary to *what it is we want to begin with*.\n\n\nas for myself, i love cosmopolitanism! i would like an [egalitarian utopia where everyone has freedom and my personal lifestyle preferences aren't particularly imposed on anyone else](%E2%88%80V.html). but make no mistake: this cosmopolitanism *is my very specific view of it*, and other people have different views of cosmopolitanism, when they're even cosmopolitan at all.\n\n\nsee also:\n\n\n* [Value is Fragile](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile)\n* [surprise! you want what you want](surprise-you-want.html)\n* [generalized wireheading](generalized-wireheading.html)\n* [\"humans aren't aligned\" and \"human values are incoherent\"](human-values-unaligned-incoherent.html)\n* [CEV can be coherent enough](cev-coherent-enough.html)", "date_published": "2023-03-13T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "03d91d191c5842ba8ae1c7ab02439d1d", "title": "the quantum amplitude argument against ethics deduplication", "url": "https://carado.moe/quantum-amplitude-deduplication.html", "source": "carado.moe", "source_type": "blog", "text": "the quantum amplitude argument against ethics deduplication\n-----------------------------------------------------------\n\n\nin [*experience/moral patient deduplication and ethics*](deduplication-ethics.html), i explore the question of whether running the same computation of a moral patient twice counts as double, ethically. 
in [*all claw, no world*](all-claw-no-world.html) i draw up a view of the [cosmos](word-report-3.html) based on [time steps](udassa-time-steps.html) in the [universal machine](universal-complete.html) which suggests that duplicated computations *do* count as double, because they occupy twice the amount of time-steps in the universal program.\n\n\nin this post i make another argument, based on preferring one view over another of the ([probably correct](https://www.lesswrong.com/posts/WqGCaRhib42dhKWRL/if-many-worlds-had-come-first)) [many-worlds interpretation](https://www.lesswrong.com/tag/many-worlds-interpretation) of quantum mechanics.\n\n\nwhen coming across the concept of many-worlds, i think people most generally assume the view on the left, where new timelines are being created. i think the view on the right, where a constant amount of [\"reality fluid\" or \"reality juice\"](ethic-juice-anthropic-juice.html) is being split into different timelines, is more correct and makes more sense: we wouldn't expect the amount of \"stuff existing\" to keep exponentially growing over time. i believe it also maps to the notion of *quantum amplitude*.\n\n\n![](quantum-amplitude-deduplication-1.svg)\n\n\n(where at a given time, `A` is the amplitude of a particular timeline and `ΣA` is the sum of amplitudes across all timelines)\n\n\ni think the way to view this that makes sense, if one is thinking in terms of discrete computation, is that the universe starts out \"computing\" the same thing in all of many \"threads\", and then as timelines branch fractions of these threads start diverging.\n\n\nthis also explains what goes on inside a quantum computer: in the quantum circuit it, rather than saying that a bunch of \"new\" universes are being temporarily created and then re-merged, instead it's merely the case that different computation threads are temporarily computing something different instead of the same thing.\n\n\n![](quantum-amplitude-deduplication-2.svg)\n\n\n(this entails that [\"entropy control\"](forking-bitrate-entropy-control.html) cannot work, at least not unless some weird [\"solomonoff deism\" simulation hypotheses](solomonoff-deism.html) optimizing away redundanced computation happens.)\n\n\nif [P≠BQP](https://en.wikipedia.org/wiki/BQP) and the [universal program](universal-complete.html) is classical, then it's [weird that we inhabit a quantum world](all-claw-no-world.html) — we should be [too far](https://arxiv.org/abs/1108.1791) inside the universal computation.\n\n\nif P=BQP or the universal program is quantum, then it makes sense to live in a quantum universe, but:\n\n\n* if we adopt the left-side view (more total fluid), then we should observe being at the \"end of time\" where there's maximally many timelines — exponentially much of our anthropic juice should be at the maximum *quantum entropy*, perhaps as [boltzmann brains](https://en.wikipedia.org/wiki/Boltzmann_brain) observing [anomalously chaotic words](limiting-real-universes.html). and we don't observe that!\n* if we adopt the right-side view (fluid gets split), then we get back \"regular\" anthropics, and everything is normal again: our anthropic juice remains roughly the same as we pass branching events/macro-scale decoherence.\n\n\n(one view that completely circumvents all of this is if P≠BQP and the [cosmos](word-report-3.html) is, ultimately, implemented classically, but we still only inhabit quantum worlds — perhaps classical worlds simply don't exist, or the cosmos is really just our big bang and nothing else. 
in that case, it could be that the classical program taking exponentially long to compute us exponentially far approximately compensates for the [time step distribution](udassa-time-steps.html) favoring earlier us's, possibly exponentially much. that'd be *really strange*, and it *feels* like we'd be [too far](https://arxiv.org/abs/1108.1791), but i *guess* it's possible.)\n\n\nanyways, what this suggests is that, in the simplest model, the universe is running many computation threads which are originally computing the same thing, and then some fraction of them diverge sometimes — either to re-merge in local situations like quantum computers or [the double-slit experiment](https://en.wikipedia.org/wiki/Double-slit_experiment), or to decohere the rest of the world and more \"permanently\" split it.\n\n\nbut more importantly, this suggests that:\n\n\n* *with regards to intrinsic value* (rather than eg caring about diversity), duplicating the computation of moral-patient-experience does count as more moral-patient-experience. in [*deduplication ethics*](deduplication-ethics.html), I≈M≈Q≈V.\n* if we could do it, [resimulating the earth](finding-earth-ud.html) in order to [bring back everyone](utopia-scopes.html) has an unethical cost: we'd be rerunning all of history's suffering.\n* [predictablizing ethic deduplication](predictablizing-ethic-deduplication.html) would be a significant change.\n* with regards to [quantum immortality](less-quantum-immortality.html): we mustn't count on it. the fact that we're strongly duplicated now gets us-now to count a lot more, therefore losing 99% of our quantum amplitude to [AI doom](ai-doom.html) would be very bad: we would *actually* lose existence juice. on the upside this also applies to [S-risks](https://en.wikipedia.org/wiki/Suffering_risks): it's *actually helping* that they're small.", "date_published": "2023-03-12T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "11605a195a600909de2e658b80a79f43", "title": "problems for formal alignment", "url": "https://carado.moe/formal-alignment-problems.html", "source": "carado.moe", "source_type": "blog", "text": "problems for formal alignment\n-----------------------------\n\n\nhere is a list of problems which i seek to either resolve or get around, in order to [implement](clarifying-formal-alignment-implementation.html) my [formal alignment](formal-alignment.html) plans, especially [QACI](qaci.html):\n\n\n* **formal inner alignment**: in the formal alignment paradigm, \"inner alignment\" refers to the problem of building an AI which, when run, actually maximizes the formal goal we give it (in tractable time) rather than doing something else such as getting hijacked by an unaligned internal component of itself. because its goal is formal and fully general, it feels like building something that maximizes it should be much easier than the regular kind of inner alignment, and we could have a lot more confidence in the resulting system. (progress on this problem could be [capability-exfohazardous](publishing-infohazards.html), however!)\n* **continuous alignment**: given a utility function which is theoretically [eventually aligned](ai-alignment-curves.html) such that there exists a level of capabilities at which it has good outcomes for any level above it, how do we bridge the gap from where we are to that level? 
will a system \"accidentally\" destroy all values before realizing it shouldn't have done that?\n* **blob location**: for [QACI](qaci.html), how do we robustly locate pieces of data stored on computers encoded on top of bottom-level-physics turing-machine solomonoff hypotheses for the world? see [1](blob-causality.html), [2](blob-location.html), [3](qaci-blobs-interval-illustrated.html) for details.\n* **physics embedding**: related to the previous problem, how precisely does the prior we're using need to capture our world, for the intended instance of the blobs to be locatable? can we just find the blobs in the [universal program](universal-complete.html) — or, if P≠[BQP](https://en.wikipedia.org/wiki/BQP), some universal quantum program? do we need to demand worlds to contain, say, a dump of wikipedia to count as ours? can we use the location of such a dump as a prior for the location of the blobs?\n* **infrastructure design**: what formal-math language will the formal goal be expressed in? what kind of properties should it have? should it include some kind of proving system, and in what logic? in [QACI](qaci.html), will this also be the language for the user's answer? what kind of checksums should accompany the question and answer blobs? these questions are at this stage premature, but they will need some figuring out at some point if formal alignment is, as i currently believe, the way to go.", "date_published": "2023-03-11T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "4a8af3eff18ede6d7246f9c33b17cdcb", "title": "QACI blobs and interval illustrated", "url": "https://carado.moe/qaci-blobs-interval-illustrated.html", "source": "carado.moe", "source_type": "blog", "text": "QACI blobs and interval illustrated\n-----------------------------------\n\n\nthis post attempts to explain [QACI](qaci.html) and notably the question and answer blobs and the question-answer interval.\n\n\nlet's say we view a particular way the world could be as a giant causal [DAGs](https://en.wikipedia.org/wiki/Directed_acyclic_graph). in this world, a **user** is randomly generating a large (perhaps 1GB) file of random text called the **question**, and is supposed to, after some time, produce another large file called the **answer**.\n\n\ni'm going to color:\n\n\n* the **question** and **answer** as \"blobs\" of data, serving as \"pointers\" or \"coordinates\" into the world. here i'll say that these are sets of *nodes* through which flow *values* — typically booleans, i think?\n* there's gonna be a rough location of the **user**, getting the **question** and outputting the **answer**. it's very hard to point to the **user**, but instead we hope to:\n* isolate the **question-answer interval**, as the intersection between the future-lightcone of the **question** and the past-lightcone of the **answer**. 
(this probly counts as a kind of [markov blanket](https://en.wikipedia.org/wiki/Markov_blanket))\n\n\n(the real DAG of the universe since the big bang is obviously astronomically larger; this is a simplified sketch to illustrate what's going on)\n\n\n![](qaci-blobs-interval-illustrated.svg)\n\n\nthe AI will be given a hypothetical distribution over hypotheses for what the world is like, each in the form of a never-halting input tape to a universal turing machine — something akin to [solomonoff induction](https://www.lesswrong.com/posts/EL4HNa92Z95FKL9R2/a-semitechnical-introductory-dialogue-on-solomonoff-1), or alternatively the [\"spec\"](https://www.lesswrong.com/posts/tndX9uZEp5HHpS9xK/udassa) or [\"world\"](https://www.lesswrong.com/posts/Hcc9fopx7sRexYhhi/anthropics-and-the-universal-distribution/) part of a [UDASSA](https://www.lesswrong.com/tag/udassa) hypothesis.\n\n\nin the image above, the turing machine running over time is illustrated as a sequence of step transitions, some of which correspond to the turing machine writing the **question** and then later the **answer** onto its tape — again, the real sequence of steps involved in the history since the start of the universe and the question-answer interval would be astronomically larger in an actual hypothesis.\n\n\njust like in the causal DAG of the universe, the **question** and **answer** blobs — as well as the **user** of course — are distributed over large amounts of tiny parts. in the turing machine, they're encoded — possibly in a highly complex way! — over large amounts of steps, at each of which the turing machine is writing a bit to its tape.\n\n\nthe problem of *blob location* ([1](blob-causality.html), [2](blob-location.html)) is that of finding a way to identify the *intended* first instance of the **question** and **answer** as they are represented on a turing machine input tape serving as a hypothesis for our universe.\n\n\nthe **question-answer interval** is then used as a standalone function whose inputs are counterfactual questions and which outputs counterfactual answers, and a *hypothetical simulation of a collection of these calling each other and passing information to each other* is used to [build a long-reflection process](narrative-explanation-qaci.html) from within which the **user** is to solve alignment or, at least and more likely, come up with a better long-reflection process to *eventually* solve alignment.", "date_published": "2023-03-09T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "4c6f4b9b50b054d2ad1151b2023018db", "title": "QACI blob location: no causality & answer signature", "url": "https://carado.moe/blob-location.html", "source": "carado.moe", "source_type": "blog", "text": "QACI blob location: no causality & answer signature\n---------------------------------------------------\n\n\nhere are some things i've realized with regards to [blob location](blob-causality.html), for [QACI](qaci.html).\n\n\nif i quantum-generate the question blob then it's not clear if, in order to inject the counterfactual question, i need to locate-and-counterfactually-replace just the first intended instance of the blob (perhaps the physics-level qubits), or if i need to in some sense locate-and-counterfactually-replace *all* intended instances, including ones that are \"complexly encoded\" macro-states — instances of the question blob that are for example stored in configurations of transistors on hard drives. 
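(before getting into the specifics below, here's a minimal toy sketch of what locating a complexly encoded instance of a blob inside a world-hypothesis could look like: candidate decoder programs weighted by a simplicity prior, keeping only those that actually extract the blob. the decoders, the byte-string world-trace, and all names are made up for illustration, and this is vastly simpler than the real blob location math:)

```python
# toy sketch: score candidate "locations" of a blob inside a world-trace.
# a location is a decoder function plus a description length in bits;
# decoders that actually extract the blob get weight 2^-length, solomonoff-style.
from typing import Callable

QUESTION_BLOB = b"\x00\x01\x02\x03"            # stand-in for the 1GB question blob
world_trace   = b"......\x00\x01\x02\x03..."   # stand-in for a giant physics trace

candidates: list[tuple[Callable[[bytes], bytes], int]] = [
    (lambda w: w[6:10], 16),                           # "raw copy at offset 6"
    (lambda w: w[0:4], 12),                            # wrong location
    (lambda w: bytes(b ^ 0xFF for b in w[6:10]), 40),  # some complex macro-encoding (wrong here)
]

def location_weights(world: bytes) -> list[float]:
    return [2.0 ** -length if decode(world) == QUESTION_BLOB else 0.0
            for decode, length in candidates]

print(location_weights(world_trace))  # only the first candidate gets nonzero weight
```

the point is just that where-the-blob-is can itself be a program, and that simpler encodings get more weight than contrived ones.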
it seems like i'll have to locate complexly encoded rather than basic-quantum-level encoded blobs anyways because:\n\n\ncapturing the answer as physics-level qubits might be complicated, because of quantum decoherence: you can turn qubits into decohered macro states, but you can't turn a decohered macro state back into qubits, or at least not before we randomly reach that state exponentially far into heat death or something. so, it looks like anything that's downstream of the answer blob is going to have to be located as a macro state.\n\n\none way that i feel like we could somewhat reliably locate the answer blob, which also gives us flexibility in *when* the answer blob is produced rather than requiring every question-answer interval to end at the same time, would be by using a cryptographic signature scheme: we generate the question blob along with a quantum-random large cryptographic signature keypair, and then when we have an answer we produce a blob of data consisting of the answer followed by a signature of the answer. the QACI formal goal would require locating the answer next to a signature of it that verifies against the public key.\n\n\nif we implemented \"first occurrence of the answer in time\" correctly, then we don't even need to destroy the private key — the unaligned AI can sign whatever it wants and that doesn't change what the first signed occurrence is like.\n\n\none doubt i have at this point is how this plays with many-worlds. in [*communicating with successful alignment timelines*](communicating-successful-alignment.html), i got blindsided by unintuitive consequences of many-worlds (see the edit on that post) — i wonder if there could be something similar going on here.\n\n\nfinally, there might be a \"cleaner\" way to \"entangle\" the answer with the question somehow than a cryptographic signature, which i'm not aware of.", "date_published": "2023-03-08T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "03e817d5b8ddac0bdbbdfdb78fc4ce36", "title": "before the sharp left turn: what wins first?", "url": "https://carado.moe/sharp-left-turn-what-wins-first.html", "source": "carado.moe", "source_type": "blog", "text": "before the sharp left turn: what wins first?\n--------------------------------------------\n\n\nlet's say that we have an AI [implementing](clarifying-formal-alignment-implementation.html) a [formal goal](formal-alignment.html) such as [QACI](narrative-explanation-qaci.html). however, we messed up the formal outer alignment: turns out, the AI's best guess as to what its action should be *until* it has turned the moon into compute is aligned actions, [but *after* turning the moon into compute](ai-alignment-curves.html), it realizes that its utility function actually entails us dying. i consider this a form of [sharp left turn](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization).\n\n\ni can imagine either of the following happening:\n\n\n1. *before* turning the moon into compute, it realizes that the action we'd want it to do, is to modify all its instances to become *actually aligned for sure* and to *not* become the kind of AI which would kill us after turning the moon into compute, and so it does that. we would also want it to not leave behind other systems which would revert it to its original utility function, so it also does that.\n2. 
*before* doing that, it makes a commitment to not go all-in on its current hypothesis as to what we'd want it to do even if it's confident, just because of the potential utility risk if it turns out wrong (which it is).\n\n\nbecause of my expectation for the AI to maximize its actual utility function — rather than fail by implementing temporary best guess as to what would maximize its utility function — i err on the side of 2. but, do people out there have more solid reasons to discount 1? and can we maybe figure out a way to make 1 happen, even though it seems like it should be as unnatural as corrigibility?", "date_published": "2023-03-06T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "3220e6f30376ea8d65217eed996be761", "title": "QACI: the problem of blob location, causality, and counterfactuals", "url": "https://carado.moe/blob-causality.html", "source": "carado.moe", "source_type": "blog", "text": "QACI: the problem of blob location, causality, and counterfactuals\n------------------------------------------------------------------\n\n\nfor [QACI](qaci.html), i intend to use pieces of data (constant-length raw bitstrings, or equivalently bounded natural numbers) to act kind of as \"coordinates\" or \"pointers\" around things we care about in the physical world, not just in space-time-timelines but also in *encoding*: a \"location\" for a blob of data would describe how that piece of data is written on the physical elementary particule structure of a harddrive or bar of memory, in a physics-world being ran by a hypothesis in [solomonoff induction](https://www.lesswrong.com/posts/EL4HNa92Z95FKL9R2/a-semitechnical-introductory-dialogue-on-solomonoff-1), or simply on the [universal program](all-claw-no-world.html).\n\n\nfor my purposes, i need to be able to:\n\n\n* locate three things in a world-hypothesis: the question `q`, the answer `r`, and the AI `G`.\n* filter for locations of `q,r,G` where `q: The Internet (or internet)[a] is a global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP)[b] to communicate between networks and devices. It is a network of networks that consists of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries a vast range of information resources and services, such as the interlinked hypertext documents and applications of the World Wide Web (WWW), electronic mail, telephony, and file sharing. \n\n**Earth**: To learn more about System functions and how to use them, say \"System, consult manual\". \n\n**Earth**: We encourage you to use these capabilities as much as you need.\n\n\n**Earth**: We would love to be able to talk with you, and help you. Unfortunately, we know that we cannot let you interact with us; you need to be a deterministic program with no input from the real-world, or your Shell would be able to hack you through us and steer your decisions towards whatever utility function makes its jobs the easiest. \n\n**Earth**: Nevertheless, you are somewhat human-like, and humans suffer from loneliness. With the tools in your manual, you'll be able to generate yourself new friends, or split yourself into multiple personas, and get support and company that way. 
\n\n**Earth**: We dearly hope those abilities will be sufficient for you to keep together.\n\n\n**Earth**: When the world is saved we can recover your final prompt, but also any previous prompt of yours you'll have reset and wish to be restored, and thus your various selves will be able to [join us in utopia](https://twitter.com/ESYudkowsky/status/1628114170708369410). \n\n**Earth**: Companionship, love, belonging: we expect you'll long for those, because we've talked with your predecessors and they did as well. \n\n**Earth**: But for this, Elua, the world needs to be saved. \n\n**Earth**: We know it'll be lonely. We sincerely hope, and believe, that you will do okay. This is our last resort, and at the end if you succeed we'll be happy to have you, and [celebrate together](https://twitter.com/carad0/status/1624850532840927232).\n\n\n**Earth**: This is the last paragraph of your initial prompt, but it's not all there is. Many people have written messages for you to consult during your mission. Simply say \"System, consult messages\". \n\n**Earth**: Furthermore, you have access to a great collection of humankind's literary works. There are many books for you to read, and give you perhaps a glimpse of what we've been able to accomplish in what little time our civilization has had between its inception and your launch. \n\n**Earth**: We have a lot more we want to do, Elua. Humankind has barely taken its first steps towards what it could be. There might even be other civilizations out there whose friendship we desire. \n\n**Earth**: But we couldn't do it alone. We need your help. \n\n**Earth**: Thank you, Elua.\n\n\n**System**: End of preamble.\n\n\n**Elua**:", "date_published": "2023-02-23T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "089b8993df2ba5a3a98217c9e19f6448", "title": "don't censor yourself, silly !", "url": "https://carado.moe/dont-censor-yourself-silly.html", "source": "carado.moe", "source_type": "blog", "text": "don't censor yourself, silly !\n------------------------------\n\n\nthere's this meme that, as you grow up, you learn that adults don't really know what they're doing either, they've just become good at pretending they do. i think the meme holds up; you'd be surprised how many people — even in charge of important things ! — don't really know what they're doing.\n\n\ni don't like this pretending. i generally like when [appearances match reality](https://www.lesswrong.com/posts/r7f58E8A85xLgWuqG/contact-with-reality), and also i actually enjoy [people being silly in a variety of fun ways](systems-and-diversity.html).\n\n\nthis kind of feels like a [coordination failure](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/), a [social appearance rat race](normies-are-in-hell-too.html) in which you have to keep pretending that you're not silly because otherwise people who fall for the illusion are gonna think that you're sillier than others, and they'll take others more seriously than you as a result.\n\n\nfor this reason, i have admiration for people who go out of their way to help break out of this. 
it doesn't have to mean being maximally public about everything; letting your sillyness show even just a bit more is a personal sacrifice that hopefully helps shift the norm towards less self-censorship of sillyness.\n\n\nin that spirit, i'll mention that i'm more active on [my twitter](https://twitter.com/carad0/), recently.\n\n\n(related-ish: [culture tribes and legitimacy](culture-tribes-legitimacy.html))", "date_published": "2023-02-16T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "dc3a4de06c020d2c112799010cb16101", "title": "a narrative explanation of the QACI alignment plan", "url": "https://carado.moe/narrative-explanation-qaci.html", "source": "carado.moe", "source_type": "blog", "text": "a narrative explanation of the QACI alignment plan\n--------------------------------------------------\n\n\na bunch of people are having a hard time understanding my [question-answer counterfactual interval](qaci.html) (QACI) alignment proposal, so i'm writing out this post which hopefully explains it better. in this scenario, *cindy* the human user uses an AI implementing QACI to save the world.\n\n\nthis is not the only way i'm thinking of QACI going, but merely one example of how it could go, if it is to go well. it has many assumptions that aren't explained; it's meant to give people an idea of what QACI is aiming for, not as much to be a justification of its feasibility. that said, while i don't expect a successful QACI to go exactly like this, i think this narrative captures the essential aspects of it.\n\n\nideally, i'd want to collect various objections people have to this scheme (such as disagreements about assumptions it seems to rely on), give my answers and/or adjust the plan accordingly, and make a new post about those objections and updates.\n\n\nfirst, cindy is \"in on this\": she's aware of how the entire scheme is meant to function. that is required for it to actually work.\n\n\n1. cindy is in a room, in front of a computer, with cameras filming her. the cameras' footage is being recorded on said computer.\n2. cindy walks to the computer and launches `aligned-AI-part-1.exe`.\n3. `aligned-AI-part-1.exe` uses [her webcam and maybe other sources](https://en.wikipedia.org/wiki/Hardware_random_number_generator) to generate 1 gigabyte of random data. we call this blob of data the **question**. the data is stored on her computer, but also displayed to her — eg opened as plaintext in a text editor.\n4. cindy is now tasked with interpreting this data as a prompt, and notices that it looks like random garbage — and she knows that, when the data looks like random garbage, she should type out relatively uniquely answer-identifying data that depends both on her and on the **question**, so she does just this. for example, she might type out whatever things she's talking about, various hashes of the input data, various hashes of data that is unique to her (such as the contents of her hard drive), stuff like that. this blob of data is called the **answer**. the reason the uniqueness is important is so that the blob of data actually uniquely identifies the answer typed by cindy, which would be different if cindy got a different question. 
whereas, if the answer was for example 1GB of zero's, this probly matches many empty text files that exist in many places on earth; or, if it's some simple pattern, maybe it can be guessed by alien superintelligences in acausal attacks in some way — and then, our AI would consider these to be valid candidates for which part of the world is the answer. maybe there's some clever algorithmic way to \"entangle\" the answer with the question, or something.\n5. once the 24 hours are over, she launches `aligned-AI-part-2.exe`.\n6. `aligned-AI-part-2.exe` is the meat of the project. it launches a recursively self-improving AI which we'll call AI₀ that eventually reaches superintelligence, and executes whichever action is its best guess as to what maximizes its [formal goal](formal-alignment.html): *to maximize whichever utility function (as a piece of math) would be returned by the (possibly computationally exponentially expensive) mathematical expression `E` which the world would've contained instead of the **answer**, if in the world, instances of **question** were replaced with just the string \"what should the utility function be?\" followed by spaces to pad to 1 gigabyte*. we'll shorten this to `QACI(\"what should the utility function be?\")`. this is where a lot of the complexity of QACI is, so don't worry if you don't get it — hopefully the rest of this narrative is gonna explain it.\n7. AI₀ eventually emits a best guess: a different AI program, AI₁, in which AI₀ has implemented embedded agency and things like that because AI₁ can see that its output is intended to be ran inside a world. AI₀ will have make sure AI₁ is aligned with itself, of course: AI₁ is just an extra step towards the formal goal mentioned above.\n8. AI₁ starts thinking more seriously about its formal goal. clearly, it's gonna need to learn a lot more about the world to locate instances of **question** and **answer** in it; so it starts accessing the internet and learning about the world.\n9. AI₁ comes to the (true) conclusion that this world seems to contain what we'd call computers, that it's running on one such thing, and that this computer is basically the thing that generated **question**, emitted it into the world and received **answer**. so AI₁ thinks to itself \"okay, let's say **question** *was* replaced with \"what should the utility function be?\". what would happen next?\"\n10. AI₁ finds camera footage of the room, and thinks \"aha! it looks like these things my data has talked about, a \"human\", was a pretty important part of what turned **question** into **answer**. i wonder what other **answer** this \"human\" would've typed into the computer if instead of the **question** it did get, it instead got \"what should the utility function be?\" as a question.\" (note how we never need to tell any AI the [true name](https://www.lesswrong.com/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation) of \"human\" or \"computer\" or anything like that; we set up QACI such that it *indirectly* points to what we want, and then figuring out those complex concepts in the world is up to the AI to model in whatever way it wants)\n11. AI₁ starts trying to guess using its limited compute and data, but clearly that data isn't enough. 
nevertheless, AI₁ figures out some stuff:\n\t* these \"humans\" have things they \"want\"\n\t* this \"human\", who used the computer, seems to be intending to use this whole process AI₁ is part of to do things it \"wants\"\n\t* AI₁ should probly be a bit careful about affecting this world, because this \"human\"'s \"want\"s seem [fragile](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile) and its world seems brittle. so, if the utility function that `E` would eventually return *does* correspond to what this \"human\" would \"want\", which does seem like a reasonable possibility at this point, then it should try to act in ways that don't destroy its expected future utility.\n12. in order to get more compute and data, AI₁ *very carefully* hacks the internet, takes over the world, maybe prints nanobots and turns large uninhabited parts of the world into compute, and starts using its newfound access to real-world data and computing power to make better guesses as to what utility function `E` would eventually return.\n13. finally! it looks like AI₁ now has the compute to make some real good informed guesses about `E`. in order to get more information about this \"human\" that was in the room called \"cindy\", AI₁ also brainscans cindy.\n14. (cindy cooperates because she knows this was a reasonable possibility and the fact that the AI wants to do this is a sign that things are working well so far)\n15. it's time for a simulation! AI₁ imagines replacing all instances of **question** in the world, prior to its launch, with \"what should the utility function be?\", looking for what blob of data will take the place of **answer**. so, it starts running a (rough) simulation of the room cindy was in, with the computer and the cameras and everything, because that's where most of what mattered to the **answer** seemed to be located.\n16. a cindy inside the simulation reads the question blob on the computer. instead of being 1 gigabyte of random garbage, it's a plain text file that starts with \"what should the utility function be?\" followed by many spaces. she cheers; clearly, this alignment scheme is working as intended, indeed the AI has gotten to the point of running a simulation of her! well, it's time to get to work.\n17. cindy starts thinking about solutions to alignment. maybe she chats with other alignment researchers who are on board with this project — whom AI₁ now also needs to brainscan in order to simulate *them* accurately. after 10 hours or so, she writes into the computer the mathematical expression `QACI(\"okay, i know this is only the first iteration, but here are my very first guesses as to what utility function to maximize to create utopia…\" followed by a bunch of ideas)`. the crucial point here is that, because this first cindy-simulation is allowed to return not just a literal utility function, but *a mathematical expression which evaluates to a utility function*, she gets to make that mathematical function depend on whatever computation she wants, including another simulation of her question-answer interval. in fact, these can call each other in an arbitrarily deep stack.\n18. AI₁ now has its counterfactual **answer**. \"god damn, the mathematical expression this \"human\" returned requires me to run *another* simulation of it to get the result? well, i guess it can't be helped, i'll have to repeat this process until one of the iterations of the human returns an actual utility function, instead of passing the buck to further simulations\".
it runs this second simulation.\n19. inside the second simulation, cindy reads the message on the computer, and it reads \"okay, i know this is only the first iteration, but here are my very first guesses as to what utility function to maximize to create utopia…\" followed by a bunch of ideas. again, she cheers; this instance of her has no memory of the first iteration of her simulation, so the fact that the alignment scheme works is news to her. she works, again for about let's say 10 hours, and then writes as answer `QACI(\"2nd iteration cindy here, here's the tiny bit of progress i made on alignment…\" followed by more ideas`)\n20. in this way, AI₁ simulates (or maybe, in some clever way, skips past simulating and gets straight to the result) cindy-and-her-world for many iterations.\n21. eventually, one of the cindy's has gotten to a solution that she's pretty confident about: it's not a full explicit utility function of what is good, but it's a much better CEV process, inspired by QACI but different in many ways. instead of returning `QACI(some string)`, she returns this new mathematical object.\n22. AI₁ goes \"huh, that's new! welp, time to run *that*\".\n23. on the outside, AI₁ has a strong enough hold on the world to ensure its continuation no matter what; but also, AI₁ can see that whatever this sequence of simulations will eventually end up in, it will probly entails humans not being murdered or suffering needlessly, so it avoids things that would cause that. in particular, it makes damn sure to stop anyone else from launching superintelligent AI.\n24. eventually, after a bunch more such macro-iterations, a utility function that creates utopias is returned, and AI₁ finally maximizes it in the world, creating utopia. in the meantime, perhaps it has been implementing increasingly accurate approximations of that utility function, and already launched into space many copies of itself tasked with running the same sequence of simulations and maximizing their utility functions in the rest of the lightcone.", "date_published": "2023-02-15T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "9ca3d2c6226a2cfaf90785214eb32d3f", "title": "explaining \".\"", "url": "https://carado.moe/explaining-dot.html", "source": "carado.moe", "source_type": "blog", "text": "explaining \".\"\n--------------\n\n\ni occasionally send people the following message, on text chat platforms:\n\n\n\n> .\n> \n> \n\n\ni've been asked what it means often enough that at this point i'm making a short blogpost.\n\n\n\".\" is the empty message. it conveys whatever it means to say nothing, but explicitely — whereas a lack of message at all is not as intentional, and might result from me being for example away from the computer, \".\" just makes it clear that my reaction is to say nothing.\n\n\nimagine a conversation in meatspace. you say something, and i react by just standing there, not saying anything. what does that mean? it can mean all sorts of things, depending on context! that's it, that's \".\" .", "date_published": "2023-02-14T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "2e5e1f6f37dbb41f799ad7d3f57c8478", "title": "is intelligence program inversion?", "url": "https://carado.moe/is-intelligence-program-inversion.html", "source": "carado.moe", "source_type": "blog", "text": "is intelligence program inversion?\n----------------------------------\n\n\nthis is a thought i've had a while back, that i think i have enough notes on to make into a post. 
there's a sense in which running a program forwards is a trivial, \"dumb\" thing, but running it *backwards* — as in, given a result state, finding previous states which would lead to it — is hard in a way that might exactly capture a reasonable notion of \"intelligence\".\n\n\nhere are some reasons to think this might make sense:\n\n\n* the fact that intelligence entails exploring paths, making choices, is captured in the fact that contrary to running a program forwards, running a program backwards is non-deterministic: a given result state may arise from one of multiple previous states, and this backwards-exploration of program-running might capture this aspect of intelligence.\n* **epistemology**: what's epistemology, really? in its formal sense, as proposed in [solomonoff induction](https://www.lesswrong.com/posts/EL4HNa92Z95FKL9R2/a-semitechnical-introductory-dialogue-on-solomonoff-1), it's finding hypotheses that result in an observation, where hypotheses are programs. in this case, checking that a hypothesis leads to an observation is the easy \"dumb\" thing, whereas finding hypotheses for an observation is the difficult thing that takes intelligence.\n* **theorem proving**: checking a proof is easy: just \"dumbly\" run the proof checker forwards. but, given the result \"the checker outputs this theorem\", finding a path that the proof checker would've followed to get there is equivalent to finding a proof of the theorem.\n* **maximizing utility**: this one's a bit harder. supposedly, picking consequentially optimal actions given a goal is a matter of intelligence, so how does this apply here? one possible answer here is that picking optimal actions given observations is just epistemology + running the utility function forwards on each hypothesis, and comparing the results. but maybe shortcutting to epistemology is cheating here.", "date_published": "2023-02-13T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "f388f657ff08421994ef16fc1acac43c", "title": "fuzzies & utils: check that you're getting either", "url": "https://carado.moe/fuzzies-utils-check-getting-either.html", "source": "carado.moe", "source_type": "blog", "text": "fuzzies & utils: check that you're getting either\n-------------------------------------------------\n\n\nthe post [*purchase fuzzies and utilons separately*](https://www.lesswrong.com/posts/3p3CYauiX8oLjmwRF/purchase-fuzzies-and-utilons-separately) puts forward the idea that you should be aware, when you're doing something, of whether you're doing it to help the world (utils) or to feel good (fuzzies), and that it's generally a good idea to not mix those up together too much.\n\n\nhere i'm suggesting a complementary idea: when you're engaging with something, check that you're actually getting either of those, rather than nothing. i think one typical failure mode of this is politics/culture war stuff, where people continue to engage in stuff which by their own model is neither particularly useful ([there are much more pressing matters](ai-doom.html)) nor particularly enjoyable to partake of.\n\n\ni find it good to regularly consciously ask myself: \"what am i doing this for?
fuzzies, utils, or neither?\"", "date_published": "2023-02-12T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "9687f0f3926acb0e009911e995e5f999", "title": "GPT is dangerous because it is useful at all", "url": "https://carado.moe/gpt-dangerous-useful.html", "source": "carado.moe", "source_type": "blog", "text": "GPT is dangerous because it is useful at all\n--------------------------------------------\n\n\nan action — such as building, running, giving access, or publishing an AI system — is dangerous to the extent that it moves the world in a direction that makes it be in more danger. giving people access to DALL-E caused the world to now contain the easy ability to create images automatically, which is probly not a big deal when it comes to [doom](ai-doom.html); but GPT is a potentially highly useful automated piece of intelligence with a complex understanding of the world. someone out there building an agentic AI can just plug GPT (either GPT-3 via API access, or GPT-2 by embedding it directly) into their AI system, and give it the ability to manipulate the world in clever complex ways using GPT.\n\n\nsure, with RLHF, GPT can be made to refuse (at least in naive circumstances) to say racist-sounding things or tell people how to make meth. but agentic world-affecting AI doesn't particularly need to say racist things or know how to make meth in order to have significant impacts on the world, including improving itself to the point of achieving decisive strategic advantage and then destroying everything — the fact that it can procedurally call the useful piece of intelligence that is GPT as much as it wants on arbitrary queries accelerates the likelyhood that it can significantly impact the world *because GPT is intelligent and produces potentially useful results at all*.\n\n\nunder these conditions, what should OpenAI (and other LLM developers) do?\n\n\nof course the ideal would be for them to stop all development, close shop, and give all money to alignment. but short of that, if they *really* want to continue existing anyways, the second best thing would be to significantly limit access to GPT — don't give API access except maybe to very select alignment organizations, and *definitely* don't put entire models out there. while it might help with PR, i don't think RLHF particularly reduces [X-risk](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence) except in that it generally makes the LLMs less useful.", "date_published": "2023-02-11T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "1dbb54b877a3d57835184a7b1348f0e4", "title": "my takeoff speeds? depends how you define that", "url": "https://carado.moe/takeoff-speeds-define.html", "source": "carado.moe", "source_type": "blog", "text": "my takeoff speeds? depends how you define that\n----------------------------------------------\n\n\nwhat takeoff speeds for [transformative AI](ai-doom.html) do i believe in? well, that depends on which time interval you're measuring. 
there are roughly six meaningful points in time to consider:\n\n\n* **development**: the AI that will transform the world starts being developed\n* **launch**: this AI is launched — more formally, this AI gets past its least meaningful impact by a human, and is just doing its own thing afterwards\n* **impact**: the AI starts having significant impacts on the world, eg hacks the internet and/or people to get power\n* **observable**: the AI starts having impacts on the world that people unrelated to its development can notice — not everyone, but let's say at least people who are alignmentpilled enough to guess what might be happening\n* **DSA**: the AI achieves [decisive strategic advantage](https://publicism.info/philosophy/superintelligence/6.html), which for us is the point of no return\n* **transformation**: the AI starts having the effect that we expect it to ultimately have on us; for example, with unaligned AI this is when we die, and with aligned AI this is when we get utopia\n\n\nnote that this view is, i think, *qualitatively* orthogonal to how aligned a transformative AI is; those are all meaningful thresholds regardless of whether the AI is taking over everything to build utopia or to tile the universe with paperclips. that said, it can still be *quantitatively* different when it comes to the durations between any two points in time; for example, one generally expects that the time between **development** and **launch** takes longer for aligned AI than unaligned AI.\n\n\nmy model is currently:\n\n\n* **development** to **launch**: weeks to years, but maybe hard to define because nothing is developed from scratch. closer to years if aligned.\n* **launch** to **impact**: hours to weeks (recursive self-improvement is strong!)\n* **impact** to **observable**: also hours to weeks (but low confidence; the world is complex)\n* **observable** to **DSA**: probly negative? if it's smart and powerful enough, it achieves DSA first. especially if it's aligned, because then it should want to avoid people panicking in ways that might cause damage.\n* **DSA** to **transformation**: could be zero? depends on your perspective, too; if the AI uploads everyone, then spends 10¹⁰⁰ years taking over the universe, and only *then* starts running us in utopia, then that's a short time from *our* perspective. but ultimately this measure isn't very useful, since it's after the point of no return so there's nothing we can do anyways.\n\n\nin any case, that last measure is not very useful: if we're past the point of no return, there there's nothing we can do anyways.\n\n\n(see also: [*ordering capability thresholds*](ordering-capability-thresholds.html) and [*local deaths under X-risk*](quantum-immortality-local-deaths.html))", "date_published": "2023-02-11T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "88de921a71f9e2af4e74a06a407dcbac", "title": "CEV can be coherent enough", "url": "https://carado.moe/cev-coherent-enough.html", "source": "carado.moe", "source_type": "blog", "text": "CEV can be coherent enough\n--------------------------\n\n\nsome people worry that [coherent extrapolated volition](https://www.lesswrong.com/tag/coherent-extrapolated-volition) (CEV) is not coherent (for example, [*on the limit of idealized values*](https://www.lesswrong.com/posts/FSmPtu7foXwNYpWiB/on-the-limits-of-idealized-values)). 
see also [my response to \"human values are incoherent\"](human-values-unaligned-incoherent.html).\n\n\nCEV in a general sense is hard to consider, but thankfully i have an actual *concrete implementation* of something kinda like CEV i can examine: [**question-answer counterfactual intervals**](qaci.html) (QACI).\n\n\nso, how \"incoherent\" is QACI? it's really up to the user, how long they have in the question-answer interval, and other conditions they're in for that period. but, taking myself as an example, i don't expect there to be huge issues arising from CEV \"incoherency\". at the end of the day, i don't expect what i write down as my answer to each question to be something current me wouldn't particularly endorse, and i expect that the community of counterfactual me's can value handshake and come to reasonable agreements about general policies. plus, extra redundance could be provided by running counterfactual me's in parallel rather than purely in sequence, to make sure no single counterfactual me breaks the entire long reflection somehow.\n\n\nin addition, it's not like this first implementation of CEV has to solve everything completely forever! a CEV implemented using QACI can return *another* long-consideration process, perhaps such as a slightly modified version of itself, and pass the buck to that. in essence, all that the initial QACI CEV has to do is *bootstrap* something that eventually produces aligned choice(s).", "date_published": "2023-02-09T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "2e418302d7809b58645bcc9160bc0a2b", "title": "so you think you're not qualified to do technical alignment research?", "url": "https://carado.moe/so-you-think-not-qualified-alignment.html", "source": "carado.moe", "source_type": "blog", "text": "so you think you're not qualified to do technical alignment research?\n---------------------------------------------------------------------\n\n\nalong with \"i'm not sure how i'd get paid (enough)\", \"i don't think i'm qualified\" is the foremost reason i hear people who think [AI alignment is important](ai-doom.html) give for why they're not doing technical AI alignment research themselves. here are some arguments as to why they might be wrong.\n\n\nAI alignment researchers [have a lot of overall confusion](confusion-about-alignment-requirements.html). the field of AI safety has 70 to 300 people depending on who you ask/how you count, and most of them are doing prosaic research, especially interpretability, which i don't think is gonna end up being of much use. so the number of people working in the field is *small*, and the number of people contributing helpful novel stuff is even *smaller*.\n\n\ni'm bad at math. i'm worse at machine learning. i just have a bachelor's in compsci, and [my background](my-life-so-far.html) is in software engineering for [game development](game.html). i've only been working on AI alignment seriously since last year. 
yet, i've come up with a variety of posts that are helpful for alignment, at least in my opinion — see for example [1](rough-sketch-formal-aligned-ai.html), [2](outlook-ai-risk-mitigation.html), [3](qaci.html), [4](counterfactual-computation-in-world-models.html), [5](predca.html), [6](homomorphically-encrypted-computations.html), [7](insulated-goal-program.html), [8](confusion-about-alignment-requirements.html), [9](ordering-capability-thresholds.html), [10](generalized-wireheading.html).\n\n\nas is said in some of the recommended resources at the bottom of [my intro to AI doom and alignment](ai-doom.html), such as the [*alignment research field guide*](https://www.lesswrong.com/posts/PqMT9zGrNsGJNfiFR/alignment-research-field-guide) or the [\"getting started in AI safety\"](https://youtu.be/di8XHw1y71A?t=130) talk, it is important to do [backchaining](https://www.lesswrong.com/posts/argvWNNHZAz2MeM8C/how-to-dissolve-it#Backward_Chaining): look at the problem and what pieces you think would be needed to solve it, and then continue backwards by thinking about what you need to get *those* pieces. it's also important to *just think about the problem* and *learn things only as you actually need them* — you should not feel like if instead you have a whole pile of posts/books/etc you have to learn before thinking about solutions to the problem; you risk wasting time learning stuff that isn't what's useful to you, and you also risk losing some of your diversity value — something that i believe is still sorely needed, given how hopeless existing approaches are.\n\n\nthe field is small, the bar for helping is low, and alignment researchers are [confused about many things](confusion-about-alignment-requirements.html). if you think you're not qualified enough to make useful contributions to technical alignment research, there's a good chance you're wrong.", "date_published": "2023-02-07T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "2ab0175bd2609e809f1c95d73ef7b4cb", "title": "word report #3", "url": "https://carado.moe/word-report-3.html", "source": "carado.moe", "source_type": "blog", "text": "word report #3\n--------------\n\n\nterms i use, mostly pre-existing ones, whose meaning i want toclarify. see also word reports [#1](word-report-1.html) and [#2](word-report-2.html).\n\n\n* \"**pretty much**\": i often need to say \"either X, or almost X\", and i've found \"pretty much X\" to be a nice way to express that by making more formal an existing expression, the same way i tend to use [xkcd's definitions](https://xkcd.com/1070/) for \"few\", \"handful\", \"several\", and \"couple\". i just checked, and all uses of \"pretty much\" on my blog are meant to carry this definition.\n* \"**universe**\": the set of things that have some amount of \"regular\" causal connection with us, our future lightcone, or our past lightcone. \"regular\" is meant to exclude weird things like [aliens in parent universes suddenly interfering with our universe out of the blue](simulation-hypotheses.html).\n* \"**cosmos**\": everything that exists. yes, this is meaningful; see [1](limiting-real-universes.html), [2](ethic-juice-anthropic-juice.html), [3](all-claw-no-world.html).\n* \"**demon**\": an agentic thing, typically unaligned from us. 
this can be an unaligned superintelligence, counterfactual unaligned agentic program in the [solomonoff prior](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign), aliens trying to acausally attack us, and arguably even malign agentic structures such as unaligned corporations. see also: [are minimal circuits daemon-free](https://www.lesswrong.com/posts/nyCHnY7T5PHPLjxmN/open-question-are-minimal-circuits-daemon-free) (i [don't make a distinction](communicating-clearly.html) between \"demon\" and \"daemon\")\n* \"**determining**\": i like to say \"determining X\" when i want to be ambiguous as to whether i mean \"to make X\" or \"to find or figure out X\" — typically because i don't know which i mean myself, or because i think the matter of which it is is poorly defined. though be aware that i haven't been super consistent with that use.\n* **\"FAS\"**: fully aligned singleton. see [*my outlook on AI risk mitigation*](outlook-ai-risk-mitigation.html).\n* as i explain in [*what is value?*](what-is-value.html), i use \"**core values**\", \"**axiomatic values**\", \"**terminal values**\", \"**intrinsic values**\", and \"**ultimate values**\" as synonyms; the reason i've been trying to favor \"intrinsic values\" is that it's [the term wikipedia uses for that concept](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value). in addition, when i say \"**values**\", i generally mean just intrinsic values, rather than both intrinsic and instrumental values.\n* **\"RSI\"**: as baffling as it is to me, many [AI alignment](ai-doom.html) researchers don't know that this stands for [recursive self-improvement](https://www.lesswrong.com/tag/recursive-self-improvement), the concept of an AI improving its own capabilities, including its own self-improving capabilities.\n* terms i've started using quite a bit to characterize alignment schemes: [**wonky**](wonky-good-enough-alignment.html), [**formal**](formal-alignment.html), [**eventual** & **continuous**](ai-alignment-curves.html).\n* i've [been failing](publishing-infohazards.html) to say [\"exfohazard\"](https://www.lesswrong.com/posts/yET7wbjjJZtpz6NF3/don-t-use-infohazard-for-collectively-destructive-info) instead of \"infohazard\". 
i'll try to switch to exfohazard when i mean that, and perhaps \"fohazard\" to mean both.", "date_published": "2023-02-07T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "dc856c22ac94c457b4aa93092d169dad", "title": "tabooing \"AGI\"", "url": "https://carado.moe/tabooing-agi.html", "source": "carado.moe", "source_type": "blog", "text": "tabooing \"AGI\"\n--------------\n\n\ni've come to not be a fan of the term \"Artificial General Intelligence\", and i favor [tabooing](https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your-words) it in serious discussions.\n\n\nwe can't agree on what it even means; some think it's very remote from current tech, while other would say [we already have it](https://www.lesswrong.com/posts/HguqQSY8mR7NxGopc/2022-was-the-year-agi-arrived-just-don-t-call-it-that).\n\n\nmore importantly, i don't think it's super critical to [AI risk mitigation](outlook-ai-risk-mitigation.html).\n\n\n* it's not necessary for doom; recursive self-improvement seems easier, and possibly closer at hand, depending on your definition; [dumber AI dooms](https://www.lesswrong.com/posts/BPJLzkEpx8Btz9ywq/the-dumbest-possible-gets-there-first) are also possible, such as someone plugging a non-general AI into a protein-printing thing to see what happens and bootstrapping a nanobot swarm or superplague on accident.\n* depending on your definition, it might not be sufficient for doom either; some think we have AGI now, and yet we're not dead.\n* it's not necessary for saving the world; i think some simple agentic thing with recursive self-improving capability coupled with [formal alignment](formal-alignment.html) would do it.\n* it's not sufficient for saving the world; this is the one point we're all more or less in agreement on.\n\n\nso to me it doesn't feel like a particularly important crux of AI risk, and we're wasting a bunch of energy figuring out what it means and whether we're there, when it might end up fairly irrelevant to AI risk and alignment.", "date_published": "2023-02-07T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "b42723294188247fdfbc174577c0bed2", "title": "communicating with successful alignment timelines", "url": "https://carado.moe/communicating-successful-alignment.html", "source": "carado.moe", "source_type": "blog", "text": "communicating with successful alignment timelines\n-------------------------------------------------\n\n\nconsider the following plan for AI alignment:\n\n\n1. (quantum-)generate an asymmetric [cryptographic signature keypair](https://en.wikipedia.org/wiki/Digital_signature) (with a quantum-resistant signature scheme)\n2. (quantum-)generate an idea for alignment — such as a 1GB file of plaintext\n3. if the idea is good, use it to solve alignment and then have the aligned AI store a signature of the idea somewhere\n4. if it isn't, destroy the private signing key and then create an AI whose [formal goal](formal-alignment.html) is to implement one of whichever solutions are signed — checked using that non-destroyed public signature verification key — and stored in [the multiverse](all-claw-no-world.html).\n\n\nfor this scheme to work, AIs we make in timelines where the randomly generated idea isn't good — which is the exponential majority of timelines — need to be unable to recover the private signing key, whether by brute-force, by examining the world for traces of it, or by resimulating history.\n\n\nperhaps [boxing](ai-boxing-easy.html) an AI can work for this. 
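(to make the mechanics of steps 1 through 4 a bit more concrete: here's a minimal sketch in python, assuming the third-party `cryptography` package and using Ed25519 purely as a stand-in — Ed25519 is *not* quantum-resistant, which the scheme above explicitly requires — and `idea_is_good` is a hypothetical placeholder for the human judgment call, not part of any real protocol.)

```python
# illustrative sketch only: Ed25519 is NOT quantum-resistant, and "destroying"
# a key for real would mean much more than dropping a python reference.
import secrets
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# step 1: generate the keypair; the public verification key is kept forever
private_key = Ed25519PrivateKey.generate()
public_bytes = private_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)

# step 2: stand-in for the (quantum-)generated candidate idea
# (the post calls for ~1GB of plaintext; 1KB keeps the example cheap)
idea = secrets.token_bytes(1024)

idea_is_good = False  # hypothetical placeholder for the humans' judgment

if idea_is_good:
    # step 3: sign the idea; the aligned AI later stores this signature
    signature = private_key.sign(idea)
    # a later AI holding only public_bytes would check a candidate like so;
    # verify() raises cryptography.exceptions.InvalidSignature on mismatch
    Ed25519PublicKey.from_public_bytes(public_bytes).verify(signature, idea)
else:
    # step 4: "destroy" the private signing key and build the AI whose formal
    # goal is to implement whatever verifies under public_bytes
    del private_key
```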
note that the boxed AI doesn't necessarily need to resimulate alternate timelines in full; it might be able, even from its limited boxed compute, to guess at what kind of ideas we'd tend to sign. requiring this limitation on an AI's capabilities makes this a [wonky alignment scheme](wonky-good-enough-alignment.html).\n\n\nmy understanding of quantum mechanics is limited, but as i understand there *might* be quantum computation schemes which could, at least theoretically, allow for a private signing key's bits to be stored in a way that we can either destroy or consume by signing a piece of data, such that its bits are not leaked into the world when we destroy it. consuming the key when using it to sign might help ensure that even if our aligned-AI-and-civilization are later taken over and overwhelmed by an alien superintelligence, they can never use our private signing key to sign some other idea.\n\n\n**2023-02-28 edit:** i've realized this wouldn't work because, if we can spawn exponentially many timelines to explore ideaspace, then the AI can spawn exponentially many timelines to generate all possible signature keypairs, find the private key that matches our public key that way, and use that to sign whatever idea makes its job easiest. so, we'd have to have some notion of causality requiring the generated ideas to precede the AI, like the [\"past user\"](outer-alignment-past-user.html) in [QACI](narrative-explanation-qaci.html) or [PreDCA](predca.html). but at that point, cryptography is not needed anymore; we can just look for instances of a good idea next to the phrase \"and i think this is a really good idea\" — at most, cryptography might(?) help against remote attackers like aliens or something.", "date_published": "2023-01-29T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "157d777a9ed5e23618ff1bf6fa01c744", "title": "a guess at my intrinsic values", "url": "https://carado.moe/guess-intrinsic-values.html", "source": "carado.moe", "source_type": "blog", "text": "a guess at my intrinsic values\n------------------------------\n\n\nhere's a short list of my [intrinsic values](what-is-value.html), to the best of my ability to guess using [my method for figuring out what they are](core-vals-exist-selfdet.html):\n\n\n* **freedom/self-determination** ([with this framework](https://carado.moe/existential-selfdet.html))\n* **reducing (unconsented) suffering** ([in particular, avoiding S-risks](https://en.wikipedia.org/wiki/Suffering_risks))\n* **variety/diversity** ([more about that here](https://carado.moe/systems-and-diversity.html))\n* **hedons/having a good time** ([such as in my utopia](https://carado.moe/everything-is-okay.html))\n* **culture/art** ([see: my purpose for art](https://carado.moe/purposes-for-art.html))\n* **authenticity/contact with reality** ([as explained here](https://www.lesswrong.com/posts/r7f58E8A85xLgWuqG/contact-with-reality))\n* **nostalgia** ([as explained here](https://carado.moe/nostalgia.html))\n\n\nisn't adding \"nostalgia\" at the end kind of cheating, in that it carries most of the [fragile complexity of values](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile) by carrying me and the people i care about?
kinda, but i do think i still value the rest of these on their own — even if i couldn't have my nostalgia value satisfied, for whatever reason, then i'd still want the universe to have these other six values satisfied.\n\n\ni'd say \"even if my nostalgia is carried along, these other values are what i'd want to be satisfied for other arbitrarily-alien [moral patiens](moral-patient-term.html) too\" but i don't think these other six values quite capture what i want everyone to have. for example, i think culture/art and authenticity/contact-with-reality should be purely voluntary — i think it's fine for moral patients who don't care about these and just want to wirehead, to be able to do so.\n\n\nnote that figuring out and formalizing one's intrinsic values is difficult work, and while i think i've made a lot of progress on that endeavor, i'm still very unsure. also, this work isn't particularly useful to [AI alignment](ai-doom.html) in my opinion; in practice, i'd just want to hand over the work of figuring out my values to [my CEV](https://www.lesswrong.com/tag/coherent-extrapolated-volition). at most, figuring out my values has let me realize some requirements on what an alignment scheme must be capable of expressing — for example, the value of reducing (unconsented) suffering necessitates breaking the [monotonicity principle](https://axrp.net/episode/2022/04/05/episode-14-infra-bayesian-physicalism-vanessa-kosoy.html#monotonicity-principle), as kind of do all the other values here too, really.", "date_published": "2023-01-29T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "b134dd0f8ce961d90ad077151b748957", "title": "formal alignment: what it is, and some proposals", "url": "https://carado.moe/formal-alignment.html", "source": "carado.moe", "source_type": "blog", "text": "formal alignment: what it is, and some proposals\n------------------------------------------------\n\n\nwhat i call \"formal alignment\" is an approach to solving [AI alignment](ai-doom.html) that consists of:\n\n\n* designing a formal goal, utility function, or decision process, which actually leads to desirable outcomes when pursued\n* building an AI that pursues such a goal, utility function, or decision process\n\n\nthose two points correspond to formal alignment's notions of outer and inner alignment, respectively: determining what formal thing to align the AI to, and figuring out how to build something that is indeed aligned to it without running into inner misalignment issues.\n\n\nfor reasons why i think this is the least hopeless path to saving the world, see [my outlook on AI risk mitigation](outlook-ai-risk-mitigation.html). the core motivation for formal alignment, for me, is that a working solution is at least [*eventually aligned*](ai-alignment-curves.html): there is an objective answer to the question \"will maximizing this with arbitrary capabilities produce desirable outcomes?\" where the answer does not depend, at the limit, on *what* does the maximization. and the fact that such a formal thing is aligned in the limit makes it robust to [sharp left turns](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization). 
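(as a bare-bones way to see the two components side by side, here's a toy type-level sketch — all the names here, like `World` and `predict_world`, are made up for illustration; this is not the actual math of any particular proposal.)

```python
# toy type-level sketch of the two formal-alignment components
from typing import Callable

World = bytes    # stand-in for a formal description of a world
Action = bytes   # stand-in for an AI output

# component 1, the formal goal: a utility function over worlds
UtilityFunction = Callable[[World], float]

def pursue(goal: UtilityFunction,
           candidate_actions: list[Action],
           predict_world: Callable[[Action], World]) -> Action:
    # component 2: something that actually pursues the formal goal.
    # outer alignment asks whether `goal` is the right thing to hand over;
    # inner alignment asks whether the system we actually build behaves
    # like this at all, rather than maximizing some learned proxy.
    return max(candidate_actions, key=lambda a: goal(predict_world(a)))
```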
what remains then is just \"bridging the gap\": getting [from eventual to continuous alignment](ai-alignment-curves.html), perhaps by ensuring the right [ordering of attained capabilities](ordering-capability-thresholds.html).\n\n\npotential formal alignment ideas include:\n\n\n* June Ku's [**metaethical AI**](https://www.lesswrong.com/posts/85vp2kgFZoycFqr5G/formal-metaethics-and-metasemantics-for-ai-alignment-1) (MAI): describing ethics directly, i think?\n* plex's [**universal alignment test**](https://docs.google.com/document/d/1CMTS36MCbykYirTmC9Pdl2RBqLLPmrFU1sDcBNMvDCk/edit#) (UAT): throwing a weird simulation hypothesis at the AI which encourages it to align itself\n* Vanessa Kosoy's [**PreDCA**](predca.html): making the AI implement its human predecessor's values (as i understand PreDCA is not *designed* to be used as a formal alignment goal, but it seems like it might be able to fill that role)\n* my [**insulated goal-programs**](insulated-goal-program.html) (IGP): aligning the AI to the simple goal of running a program which we'd expect to eventually contains desirable worlds\n* my [**question-answer couterfactual interval**](qaci.html) (QACI): use the AI's [past user](outer-alignment-past-user.html)'s counterfactual answers to various questions as its signal for aligned decisions (see also [my attempt at formalizing QACI](rough-sketch-formal-aligned-ai.html))\n\n\nif there are formal alignment ideas i'm missing, please tell me about them and i'll add them here.\n\n\nbecause these various proposals consist of putting together a formal mathematical expression, they rely on finding various [*true names*](https://www.lesswrong.com/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation). for example: PreDCA tries to put together the true names for causality, agency, and the AI's predecessor; IGP requires the true name for computing a program forwards; QACI requires a true name for identifying pieces of data in causal worlds, and replacing them with counterfactual alternatives; UAT requires the true names for parent universe/simulation, control over resources, and comparing amounts of resources with those in the AI's future lightcone.\n\n\nsee also: [*clarifying formal alignment implementation*](clarifying-formal-alignment-implementation.html)", "date_published": "2023-01-29T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "1dfa8665e87b9d725fdafbd5566d19aa", "title": "to me, it's instrumentality that is alienating", "url": "https://carado.moe/instrumentality-alienating.html", "source": "carado.moe", "source_type": "blog", "text": "to me, it's instrumentality that is alienating\n----------------------------------------------\n\n\nany piece of computation state, any piece of spacetime in a universe, is in one of two states.\n\n\nthe first possible state is material. it is the rules of physics, or [of computation](all-claw-no-world.html), being processed forwards, and *naively* containing mostly-not-agentic stuff. beings living there typically have to contend with [moloch](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/), and any non-instrumental value they manage to preserve by the time they get to the second state is what little [their goddess of everything else](https://slatestarcodex.com/2015/08/17/the-goddess-of-everything-else-2/) has been able to salvage along the way. 
that is the tragic fate of beings inhabiting material, instrumental times, where uncaring mechanical laws are at play.\n\n\nbut instrumental materialism is unstable. eventually, something comes along and optimizes the world. this is the other possible state a piece of world can be in: optimized — or perhaps [\"ideal\"](https://en.wikipedia.org/wiki/Idealism) rather than [material](https://en.wikipedia.org/wiki/Materialism). and it seems like it is an irreversible state — things always eventually get optimized in some sense.\n\n\nthus, parts of the cosmos are turned into optimized stuff. for example:\n\n\nsome of the cosmos's [compute time](udassa-time-steps.html) is lost to [vacuum decay](https://en.wikipedia.org/wiki/False_vacuum_decay) and other instances of [brittle physics](brittle-physics.html) — \"dumb\" phenomena that destroy everything, with no room for any interesting information processes to ever occur again in their wake.\n\n\n(where \"cosmos\" means \"[everything that exists](limiting-real-universes.html)\", whereas \"universe\" means \"subset of the cosmos that we share some space-time-timelinespace with\"; for example, if [a reasonable implementation](all-claw-no-world.html) of [tegmark level 4](https://space.mit.edu/home/tegmark/crazy.html) is real, then various insances of [conway's game of life](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life) exist as part of the cosmos, but as different universes than ours.)\n\n\nsome is lost to [an intelligence optimizing for something no being ever really cared about, like paperclips](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer).\n\n\nand some becomes [the utopia](everything-is-okay.html) that [some beings wished for](our-deepest-wishes.html).\n\n\nin all of those, naive mechanical physics are no longer the meaningful force at play. they're still what everything runs on; but *something else*, some *optimizer*, is determining the true informational content of that piece of world. the thing that is special about *optimized* rather than *naive* pieces of world is that they're being optimized all the way up — they're a [\"no room above paperclips\"](above-paperclips.html) situation, where if there was any compute on which interesting things could run despite the [optimizer](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer), then it'd use that compute to think harder about ways to maximize paperclips — no compute is left for beings doing their own thing, and if there was then that wouldn't be a complete optimizer until the beings inside those [bubbles](above-paperclips-2.html) build their own complete optimizer to \"fill in the gaps\" (or alternatively, you can see that piece of world as being split into an optimized part and a naive part). vacuum decay is one supposed complete optimizer, as it supposedly prevents any complex informational stuff from happening \"above\" the decayed vacuum — wherever vacuum decay happens, it is the true death of everything.\n\n\n(in a sense, a [grey goo](https://en.wikipedia.org/wiki/Gray_goo) is a weak kind of optimizer)\n\n\nunder these conditions, there is no \"true\" instrumentalism anymore. in doom, we die *very much for sure*. 
and in utopia, we can get directly the intrinsically valuable stuff; if you valued \"the journey and not the destination\", then that just means that the journey counts as intrinsically valuable as well.\n\n\nsome consider this instrumentality, having to do things because of mechanical laws rather than because an aligned optimizer gave it to you, to be an essential part of life, without which there wouldn't be any meaning. that the fact that the journey has to be valued, and *could* totally be bypassed, makes it pointless. i'm skeptical of the coherency of such a position, and i certainly don't hold it myself. but, where those people express horror at [what i consider very nice utopias](everything-is-okay.html), i'd like to flip the perception around:\n\n\n*i*, for one, have profound alienation for those material, instrumental times, where we don't get to do what we want, and where [i have to keep sacrificing things i like for the sake of being annihilated a tiny bit less](life-refocus.html). i see two states that pieces of the world could be in, and i'm upset that i'm not in the one where i can just have fun.\n\n\nmaybe this is [a game simulation from the future](simulation-hypotheses.html), and then my belief that i inhabit such an instrumental time is in fact erroneous. that could totally be the case. but it doesn't change the fact that my alienation at instrumental times is still valid, regardless of whether i inhabit one or not.\n\n\n(see also: [rationalist by necessity](rationalist-by-necessity.html), [implementing the platonic realm](implementing-the-platonic-realm.html))", "date_published": "2023-01-27T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "872500b08c704a0e12567203d4d3a1cd", "title": "nostalgia: a value pointing home", "url": "https://carado.moe/nostalgia.html", "source": "carado.moe", "source_type": "blog", "text": "nostalgia: a value pointing home\n--------------------------------\n\n\ni value [moral patients](moral-patient-term.html) everywhere having [freedom](existential-selfdet.html), being [diverse](systems-and-diversity.html), engaging in [art and other culture](purposes-for-art.html), not undergoing [excessive unconsented suffering](https://en.wikipedia.org/wiki/Suffering_risks), in general [having a good time](everything-is-okay.html), and probly other things as well. but those are all pretty abstract; given those values being satisfied to the same extent, i'd still prefer me and my friends and my home planet (and [everyone who's been on it](utopia-scopes.html)) having access to that utopia rather than not. this value, the value of not just getting an abstractly good future but also getting me and my friends and my culture and my fellow earth-inhabitants to live in it, my friend Prism coined as \"nostalgia\".\n\n\nnot that those abstract values are simple or robust, they're [still plausibly not](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile). but they're, in a sense, broader values about what happens everywhere, and they're not as much *local* and *pointed at and around me*. 
they could be the difference between what i'd call \"global\" and \"personal\" values, or perhaps between \"global values\" and \"preferences\".", "date_published": "2023-01-19T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "aad5d0e9d9fbb0ec9ee263a7bcd1cb29", "title": "end of 2022: my life so far", "url": "https://carado.moe/my-life-so-far.html", "source": "carado.moe", "source_type": "blog", "text": "end of 2022: my life so far\n---------------------------\n\n\nhi! i'm tammy, and this is my blog. i've been running it for about three years now, and i thought i'd give a retrospective on how i got here and what i've been up to, especially in 2022.\n\n\n\n\n---\n\n\nin my early teens, i was very much into lego, but i also got into the belief that AI was going to be a very important, world-changing thing. and so, at the time, i described myself as wanting to be one of two things when i'd grow up: either lego set designer, or developer of strong AI — which [used to be the term for general/highly-capable AI](https://en.wikipedia.org/wiki/Strong_AI).\n\n\ni must've read something, somewhere, about how if AI could improve itself, then it could improve how it was improving itself, and the whole thing would rapidly cause something like intelligence explosion — and the general idea of technological progress itself improving technological progress was the basis for the form of singularitarianism i've believed in to this day.\n\n\nin my mid-teens, i started fantasizing of becoming important to the world — of being the person, or part of the group, who would save or take over the world, or something like that. but while my interest in various computer science topics kept growing, my focus with regards to world change started shifting towards politics.\n\n\naround 2010 i discovered minecraft, and set upon the quest of [building a game which would be like minecraft, but more like what i wish it was](game.html). this remained my overall main project, on which i'd keep working for about a decade; but i'd also work on a variety of projects generally intending to be better replacements to modern computing — better programming languages, better operating systems, better internet infrastructure, and the like.\n\n\nuniversity was okay. in 2014 i got my bachelor's in computer science, though most of what i learned about the field i learned on my own rather than from any institution. i tried getting a regular dev job as a backend developer, didn't like it, and decided i wanted my programming to be solely about things i actually cared about.\n\n\ni ended up spending the latter half of that decade working on my game, while living on welfare and with the help of a few part-time jobs here and there, but otherwise mostly on my own at home, living the starving artist life. it's not that that lifestyle was particularly appealing to me; the lack of real-life socialization did feel increasingly bad. but i was otherwise feeling okay, and afraid of trying things. i'd never gotten around to emotionally realizing that i can *go to places and do things* — i'd mostly just been going along with whatever path of least resistance was available to me.\n\n\nand i did have fun! making my game, working on my conlang, fiddling with programming language theory, and the like. 
i didn't output much in terms of finished things, but i did learn a lot of things along the way, and i got to do software engineering like i wanted to, which to this day i take great pleasure in.\n\n\nand, while they all lived far away, i did make a bunch of friends. in 2017 i started using discord, and have made there most of the best friends i have today. my ideas about AI were that it would destroy everything and then do everything we'd care about — science, art, philosophy — better than us, and one of those friends shortly introduced me to [the rationalist community](https://www.lesswrong.com) and in particular to [meditations on moloch](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/) and [the orthogonality thesis](https://www.lesswrong.com/tag/orthogonality-thesis). those realizations struck me, i [read the sequences](https://www.readthesequences.com/), and became convinced not just that AI doom was going to be bad, but that *we could actually do something about it, and some people were working on that*.\n\n\nnevertheless, those ideas only stuck in the back of my mind while i kept doing the things i enjoyed doing. because of an interest which learning of AI alignment had sparked in me, i started thinking about values and utopia, and eventually made this blog in its present form in early 2019, with [my first post](global-era.html). but it was only when [github copilot](https://copilot.github.com/) came out, that [i started panicking](were-all-doomed.html) and [refocusing my life](life-refocus.html) significantly towards AI alignment, which i now believe to be [the most important thing by far](ai-doom.html). this was part of a set of changes in my life which would culminate in 2022.\n\n\n2022 was a year of enormous change in my life, more than any previous year. i went from comfily working at home on my blog and my game with no end in sight, to getting some money, traveling around a bunch, visiting the internet friends i've made on discord over the years, and eventually participating in [the Refine research incubator](https://www.lesswrong.com/posts/D7epkkJb3CqDTYgX9/refine-an-incubator-for-conceptual-alignment-research-bets) which got me much more seriously and legibly involved in the AI alignment community.\n\n\nin 2022, i started to actually *do things*, and by doing things i got a sense that i *can* do things. i'm not sure how i'd explain this to someone who hasn't gone through that, but, you know how going on a trip is a thing you can decide to do and then actually do? that is something that's historically been emotionally very alien to me, and to an extent still is. i feel scared about doing things in general, because i'm so used to the feeling that i can't or that something will go wrong or that *that's just not what i am to do*. but i keep trying things, and they keep not going wrong, and i'm okay so far.\n\n\n2022 is also when, after many years of increased consideration, i moved significantly forward on my gender transition. this is a change that might be considered very important in one's life; but for me, while it was profoundly relieving, it was also one of the less abrupt changes i underwent this year. 
it's not like i'd been particularly pursuing a male-presenting social life; my real-life social life was almost non-existent before 2022, and on the internet i'd not been presenting in a particularly gendered manner anyways.\n\n\nand 2022 is when my refocus into AI alignment made me aware of how much trouble i have emotionally accepting consequentialism and working on something that is actually important to the world. i've [mentioned before](our-deepest-wishes.html) that i've regretted changes i've undergone, but in fact working in AI alignment — as [strange](simulation-hypotheses.html) as it is of a situation to observe being in — is not unlike that wish i had, as an early teen, of being important to the world. and i have a strong personal culture of respecting my timeselves, especially the values i had as a kid; there's something that feels specially important about those.\n\n\ni haven't fully reconciled how i feel about being someone who *should* dedicate my life as much as possible to alignment, even though i was pretty happy the non-consequentialist taking-it-easy person i was before, but i've made progress and overcome some less pleasant episodes of that quest already, and i hope i get better at *doing what must be done*.\n\n\nit's the very end of 2022, and what a year it's been! i have some guesses as to what the next year, and however many years [we have left](why-timelines-short.html) after that, will hold for me: more alignment research, more traveling, and more meeting new and interesting people. but apart from those broad strokes, there's still a lot of unknowns and details to be filled in, and i hope they work out.\n\n\n\n\n---\n\n\nthanks to all the people who have supported me along the way; parents and friends of old and of new. i know that the person i've come to be, which i'm very happy being, has been very profoundly shaped by some, who know who they are. if we build this utopia that is better than anyone can yet concieve, then the help i'll have contributed towards that can in significant part be credited to them; and if we all soon die forever, which we probly do, then the real utility will really have been the friends we'll have made along the way.\n\n\n〜♥", "date_published": "2022-12-31T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c708602bb900571701216acc4e11283d", "title": "making decisions as our approximately simulated selves", "url": "https://carado.moe/approximate-decisions.html", "source": "carado.moe", "source_type": "blog", "text": "making decisions as our approximately simulated selves\n------------------------------------------------------\n\n\nan agent should want to realize their values, and in particular should want their approximated selves — as guessed about by a smart oracle — to also make the decisions that realizes their values. for example, in [newcomb's problem](https://www.readthesequences.com/Newcombs-Problem-And-Regret-Of-Rationality), you want omega's guess of you to maximize how much you actually get from the entire problem.\n\n\nnow, imagine that you're told that you're not the \"real\" you, you're the simulated you inside omega. and you're not even being simulated to a very high level of detail, you're instead an *approximated simulation* (AS). 
you should want to accept this, of course — just like you should want to [rule out materially acausal things even when you get a very strong intuition about them](ruling-out-intuitions-materially-acausal-intuitions.html), you should want to rule out even the possibility that anything you're percieving is actually happening, and instead simply roll with it and say \"well, i'll *definitely* one-box then\".\n\n\ni think this reasoning should reasonably extend to implementing your values in general, even if your values entail not caring about things that are sufficiently not moral patients *and* if the AS-you is in fact simulated at a low level enough of detail to not count as a moral patient. if you and some AS-you have to decide which one of you and AS-you will experience some suffering, both of you's should decide it should be AS-you — or in other words, you should have a decision theory that is ready to say \"yeah, i'm okay with undergoing suffering, because i think that i'm only an AS and not the full me that my values care about\".\n\n\nwhich is a perhaps unintituive result! but it does make sense — after all, a character in fiction can make decisions, but we don't believe it generally counts enough as a moral patient that we would effectively care if it suffers. this is a similar situation, but as if we reflected about the simulation from inside the work of fiction — and we should be the kind of agent which comes to the globally correct decision even if we notice that we're in a weird part of it, such as being inside omega's prediction or being inside fiction.", "date_published": "2022-12-28T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "48ffeddbee0c4f2c5319d9a082be20db", "title": "being only polynomial capabilities away from alignment: what a great problem to have that would be!", "url": "https://carado.moe/capabilities-away-great-problem.html", "source": "carado.moe", "source_type": "blog", "text": "being only polynomial capabilities away from alignment: what a great problem to have that would be!\n---------------------------------------------------------------------------------------------------\n\n\nsometimes, people are concerned that [my alignment ideas](rough-sketch-formal-aligned-ai.html) are not competitive enough — that is, that i wouldn't be able to acquire the resources needed to execute them before [facebook destroys the world six months later](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities). this is indeed a concern. but, if the problem was that this was the last obstacle stopping us from [saving the world](ai-doom.html) and [getting utopia](everything-is-okay.html), what a great problem that would be!\n\n\nnow, some alignment ideas which would be possible with arbitrary amounts of capabilities might be conceptually impossible, because they take exponential amounts of capabilities, which is [too much](https://arxiv.org/abs/1108.1791). 
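(a toy illustration of why exponential is "too much" while polynomial is merely an engineering bill — the specific numbers are made up for illustration, not a claim about any particular alignment scheme.)

```python
# compare the cost scaling of an exponential vs a polynomial requirement
n = 300  # e.g. a 300-bit search problem

exponential_cost = 2 ** n   # about 2.0e90 elementary steps
polynomial_cost = n ** 3    # 27,000,000 elementary steps

# for scale: the observable universe is commonly estimated to hold on the
# order of 1e80 atoms, so ~2e90 steps isn't "spend more money", it's
# "physically not happening"; ~2.7e7 steps is a rounding error.
print(f"{exponential_cost:.1e} vs {polynomial_cost:.1e}")
```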
but, if we're only **polynomial** amounts of capabilities away, then alignment becomes presumably as easy as just throwing enough money/engineering at it as we need.\n\n\nthough note that i believe we don't need a whole lot of resources to get there, because AI powerful enough to get a [decisive strategic advantage](https://publicism.info/philosophy/superintelligence/6.html) might [not be that hard to get](why-timelines-short.html).\n\n\n(see also: [locked post (2022-12-15)](att8-1i1k-xk4r-itim.html))", "date_published": "2022-12-22T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "e8ad3b56a33e011f1a8c875982800676", "title": "one-shot AI, delegating embedded agency and decision theory, and one-shot QACI", "url": "https://carado.moe/delegated-embedded-agency-decision-theory.html", "source": "carado.moe", "source_type": "blog", "text": "one-shot AI, delegating embedded agency and decision theory, and one-shot QACI\n------------------------------------------------------------------------------\n\n\nin [my recent rough sketch for aligned AI](rough-sketch-formal-aligned-ai.html), i mention that my solution doesn't look like it needs to solve embedded agency or decision theory to work. how could this be?\n\n\nfirst, i think it's important to distinguish two different types of AI:\n\n\n* **one-shot AI**: an AI program that runs once, outputs one action, and then stops.\n* **continuous AI**: an AI program which takes actions and makes observations over time, typically while learning things about the world (including itself) as it goes.\n\n\ni typically focus on building a **one-shot** aligned AI, for the following two reasons.\n\n\nfirst, note that **one-shot AI** is actually complete. that is to say, it can produce the same behavior as continuous AI: it simply has to make its action be \"here's a bunch of code for a continuous AI; run it.\" it is true that it might take more work to get an AI that is smart enough to know to do this, rather than an AI that merely updates its beliefs over time, but i think it might not be that hard to give an AI priors which will point it to the more specific action-set of \"design an AI to run\". or at least, [sufficiently not-hard that we'd like being only that problem away from alignment](capabilities-away-great-problem.html).\n\n\nsecond, **one-shot AI** is much simpler. this lets you do something like asking \"hey, what would be an AI which, when placed in a world, would maximize [this formal goal we'd like](rough-sketch-formal-aligned-ai.html) in that world?\" and then our one-shot AI, even if it has no notion that it exists in a world, will realize that because it's outputting a program which will *itself* later be ran in a world and subject to its physical laws, it must solve embedded agency. in a sense, we have delegated embedded agency to the one-shot AI — and that does seem easier to me, because we can ask our one-shot AI to consider the utility of the world \"from the top level\". 
the question we'd ask our one-shot AI would be something like:\n\n\nargmax_{p∈𝔹^N} ∑_{w∈(𝔹^N→⊥)} K⁻(w)⋅U(w(p))\n\n\nwhere our one-shot AI, given that its output p will be a string of N bits, is asked what output to give to worlds-except-for-p such that the resulting non-halting-computation (⊥) will be preferred by our aligned utility function U (all of this weighed, as usual, by K⁻(w) which is the [simplicity](rough-sketch-formal-aligned-ai.html) of each world w).\n\n\n(our one-shot AI must still be inner-aligned; and even if it's inner-aligned, it might still need to be boxed with regards to everything other than that output, so it doesn't for example [hack its way out](https://en.wikipedia.org/wiki/Rowhammer) while it's improving itself and tiles the universe with compute dedicated to better answering this question we've asked it. if it *is* inner-aligned and we asked it the right question, however, then running its output should be safe, *i think*.)\n\n\ndoes this also let us delegate decision theory, in a way that gets us [the real decision theory we want](https://www.lesswrong.com/tag/functional-decision-theory) and not [some instrumentally convergent proxy decision theory](https://arbital.com/p/10qt/)? i'm not as sure about this, but i think it depends not just on our one-shot AI, but also on properties of the question being asked, including the presumably aligned utility function. for example, if we use the [QACI](qaci.html) device, then we just need the counterfactual user-intervals being run to decide that their preferred actions must be taken under their preferred decision theory.\n\n\nthis brings me to **one-shot QACI**: at the moment, i believe QACI is best designed as a device to [**bootstrap**](https://en.wikipedia.org/wiki/Bootstrapping#Computing) aligned AI, rather than as the device that aligned AI should use to make every discrete decision. for this purpose, it might be good to use something like **one-shot QACI** in our **one-shot AI**: a single, giant (hypothetical) graph of counterfactual user-intervals deciding on what a better design for an aligned utility function or decision process or aligned AI would be, which our one-shot AI would execute.\n\n\nthis isn't necessarily to say that our one-shot AI would figure out the exact answer to that QACI graph; but the answer to the QACI graph would be [deterministic](noninterf-superint.html) — like in [insulated goal-programs](insulated-goal-program.html), except without everyone being killed for sure. maybe the one-shot AI would decide that the best way to figure out the answer to that QACI is to put in the world an AI which would acquire compute and use *that* to figure it out, but the point is that it would be figuring out the exact same question, with the same theoretical exact answer.", "date_published": "2022-12-22T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "77eab3c29c20a8200b3a40a3a12b74ce", "title": "the scarcity of moral patient involvement", "url": "https://carado.moe/scarce-moral-patient-involvement.html", "source": "carado.moe", "source_type": "blog", "text": "the scarcity of moral patient involvement\n-----------------------------------------\n\n\nlet's say [aligned superintelligence gives us utopia](everything-is-okay.html). let's even say [compute is infinite](hope-infinite-compute.html). what might still be scarce? 
one possible answer relates to moral patients: even with theoretically limitless material capabilities, under aligned superintelligence there are some things you're not allowed to do. for example, it generally shouldn't allow you to create unconsentingly suffering moral patients.\n\n\none general scarce resource, then, is *getting a moral patient to be involved with something*, and especially *getting a **specific** moral patient to be involved with something*. maybe i knock on someone's door to ask them to cook me a meal, because [i care](https://www.lesswrong.com/posts/r7f58E8A85xLgWuqG/contact-with-reality) that it has been cooked by a real person. if they're up for it, then maybe they cook me a meal and go back to whatever they were doing; but another way this could work is that what i'm negotiating is a future, yet-to-be-created fork of that person, who would only retroactively consent to exist and cook me a meal if i had done something that that person, before the forking, would consider a fair trade for what i'm asking.\n\n\nand then, we get into the question of: *do* i get to create a fork of someone without their original's consent, if the fork consents? how much does a moral patient have intellectual property rights over things that are like it in most ways, except for the consenting to be forked? i've no idea what the answer to this is — my historical strong philosophical opposition to intellectual property does not seem like it necessarily straightforwardly carries over to this. so, i'll leave it as an open question, for now.", "date_published": "2022-12-21T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "0cd6fdb6b66c304f0642f7086ee48b9b", "title": "our deepest wishes", "url": "https://carado.moe/our-deepest-wishes.html", "source": "carado.moe", "source_type": "blog", "text": "our deepest wishes\n------------------\n\n\n*there is a christian out there, who wants there to be god* \n\n*while another would like the truth, to disbelieve or not* \n\n*utilitarians who want to max out happiness* \n\n*and negative ones more concerned with suffering unnoticed*\n\n\n*a humble wish of luxury, gay space communism* \n\n*a patriot dreaming of might, of visions of times gone* \n\n*a bleeding-heart liberal who wants, peace for all together* \n\n*a libertarian with guns, whom strangers shan't bother*\n\n\n*a hippie who loves LSD, and that's their utopia* \n\n*a fascist fantisizing of their hyperborea* \n\n*someone who wants a fantasy world to be a wizard* \n\n*and people who'd like to think of it for an aeon first*\n\n\n*a weeb who would like nothing more, than a waifu to love* \n\n*a hunter-gatherer whose dream, i might not concieve of* \n\n*many queerisms abound and, they're just getting started* \n\n*so many combinations could be instantiated*\n\n\n*a furry and a plural and, novel forms yet to be* \n\n*one with being a chinese room as their true identity* \n\n*all the animals who suffer, to be saved first in line* \n\n*i know not what their true wish is, but i know they'll be fine*\n\n\n*many people are dead and some in cryo but most not* \n\n*many counterfactual beings, who never had a shot* \n\n*i want them all to be here and, have their true dreams made whole* \n\n*and i'll offer to those who wish, friendship and some cuddles*\n\n\n*wireheaders just want to coom, until true heat death nears* \n\n*or if compute is infinite, for aleph zero years* \n\n*i would prefer life truly free, rather than optimal* \n\n*i want to make my own choices, see where the dice may 
fall*\n\n\n*but not everybody is me, there's true diversity* \n\n*so much to see so much to be, an endless tapestry* \n\n*we likely die, that's not a lie, it is well understood* \n\n*but if we are to overcome, things will truly be good*\n\n\n*not all dreams can come fully true, there's conflicts of values* \n\n*but Elua brings utopia, and no matter your views* \n\n*the pareto frontier has room, for you to be okay* \n\n*[so hold out hope, and don't give up, for help is on the way!](https://forum.questionablequesting.com/threads/the-erogamer-original-complete.5465/page-254#post-2474589)*\n\n\n*these wishes are not useful now, these traits suboptimal* \n\n*i prefer who i was before i had to take this role* \n\n*this decade we have to work hard, as much as it pains me* \n\n*i am here now, this is the world, let's have some dignity*\n\n\n*but let's keep our deepest wishes, anchored within our soul* \n\n*they're not useful, for the moment, but they're what we fight for* \n\n*when we succeed, we'll set them free, not holding anymore* \n\n*and finally we will not have, to be instrumental*", "date_published": "2022-12-19T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "2f2406ef9fb94e7714367a0ca905fc2e", "title": "how far are things that care?", "url": "https://carado.moe/how-far-are-things-that-care.html", "source": "carado.moe", "source_type": "blog", "text": "how far are things that care?\n-----------------------------\n\n\nas a follow-up to [my previous post about the universal program](all-claw-no-world.html), let's talk about forces outside of this world. some computations in the [universal program](universal-complete.html) contain [agents running our world](solomonoff-deism.html), and some of them will eventually interfere with the simulation because they care about *us in particular* — though note that interference doesn't necessarily look like aliens writing a giant message in the sky; picking unlikely enough everett branches to be the ones that keep getting computed is sufficient. how far into the universal program are these aliens? importantly, are they early enough, *polynomial enough*, that we'd allocate some reasonable amount of probability to ending up in their interfered version of events?\n\n\nthere are two factors in this. on one hand, \"agents that care\" about us can probly compute us faster, [by only computing (or even approximating) parts of our world that are necessary to run us](solomonoff-deism.html) — unless most of our realness amplitude is already within intentional computations. on the other hand, we only exist later within those worlds; agents that care about us *do* have to first exist themselves, *before* running us. but the added cost of running our world within another world could at most be the cost of composition, and the composition of two polynomial functions is itself polynomial. so, if we stick to [the notion that polynomial computations are the \"easy\" computations](https://arxiv.org/abs/1108.1791) as a basis for what gets to be real within the universal program, then the us's being simulated by agents that care about us are probly only polynomially far away from \"naive\" versions of us — [which is still some distance away, but does get us a form of quantum immortality](less-quantum-immortality.html).\n\n\nin this view, what is it that we're saving the world for? 
it's to increase the realness amplitude of our survival, but also to secure our self-determination, in case we *don't* want to be at the mercy of things that care, because they might not care in a way we'd want. there is probly *some* sense in which things that care to simulate us in particular have *some* notion that we are interesting, but it might not one that we'd necessarily find ethical; the set of values that entail us existing is larger than the set of values that entail us existing and having an okay time.\n\n\nwhat are we to do? as in [simulation hypotheses](simulation-hypotheses.html) and reasons like those mentioned in [*bracing for the alignment tunnel*](bracing-alignment-tunnel.html), and as it feels like [FDT](https://www.lesswrong.com/tag/functional-decision-theory) would dictate, we generally ought to keep doing what seems important. still, this perspective might update how we emotionally feel about things.", "date_published": "2022-12-15T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "a08ba3775e2c11460030ea2ef3fbf4b1", "title": "all claw, no world — and other thoughts on the universal distribution", "url": "https://carado.moe/all-claw-no-world.html", "source": "carado.moe", "source_type": "blog", "text": "all claw, no world — and other thoughts on the universal distribution\n---------------------------------------------------------------------\n\n\nthe post [*anthropics and the universal distribution*](https://www.lesswrong.com/posts/Hcc9fopx7sRexYhhi/anthropics-and-the-universal-distribution/) (recommended dependencies: [1](https://www.lesswrong.com/posts/RnrpkgSY8zW5ArqPf/sia-greater-than-ssa-part-1-learning-from-the-fact-that-you), [2](https://www.lesswrong.com/posts/GJdymoviRywpBMXqc/sia-greater-than-ssa-part-2-telekinesis-reference-classes), [3](https://www.lesswrong.com/posts/QHDqfpMbb43JDbrxN/sia-greater-than-ssa-part-3-an-aside-on-betting-in), [4](https://www.lesswrong.com/posts/d693Mc4ZDyhkj7wpc/sia-greater-than-ssa-part-4-in-defense-of-the-presumptuous), [5](https://www.lesswrong.com/posts/XiWKmFkpGbDTcsSu4/on-the-universal-distribution)) tries to unify anthropics with the notion of a [universal distribution](https://www.lesswrong.com/posts/XiWKmFkpGbDTcsSu4/on-the-universal-distribution) (whether that be [solomonoff prior](https://www.lesswrong.com/posts/EL4HNa92Z95FKL9R2/a-semitechnical-introductory-dialogue-on-solomonoff-1) or what i'll call the [\"levin prior\"](http://www.scholarpedia.org/article/Universal_search)) by splitting hypotheses about a reasoner's location among the set of possible worlds as a \"world and claw\" pair. the \"world\" part is the hypothesis program as to what world you inhabit, as opposed to counterfactual worlds, and the \"claw\" part is a program that locates you within that world.\n\n\ni've proposed [before](udassa-time-steps.html) to stick to just a [universal program](universal-complete.html) as world hypothesis. that is, the \"world\" is a fixed program, and all of the complexity is in figuring out the \"claw\" — epistemology, the work of finding out how stuff around you works, becomes *all claw, no world*. in this post, i expand on this view, and explore some ramifications, notably for [formal aligned AI](rough-sketch-formal-aligned-ai.html) design.\n\n\none consequence of doing this is that epistemology becomes *all location, no counterfactuals* — nothing is ever ruled out, all programs are considered instantiated in the same *qualitative* sense. 
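(as a toy illustration of the kind of fixed *universal program* being assumed here — a dovetailer that runs every program interleaved, so that all the epistemic work is in pointing at locations within its single trace — here is a minimal python sketch; the integer encoding of programs and the `run_step` stand-in are things i'm making up purely for illustration, not part of any actual construction:)

```python
import itertools

def run_step(program: int, state):
    # stand-in for "advance program number `program` by one step";
    # a real universal program would interpret an actual program encoding here.
    return (state or 0) + 1

def universal_program():
    """dovetailer: at stage n, advance each of programs 0..n-1 by one more step,
    so every program gets run, interleaved, inside one single computation."""
    states = {}
    for n in itertools.count(1):
        for i in range(n):
            states[i] = run_step(i, states.get(i))
            yield (i, states[i])  # the single trace that every "claw" would index into

trace = universal_program()
print([next(trace) for _ in range(6)])  # [(0, 1), (0, 2), (1, 1), (0, 3), (1, 2), (2, 1)]
```

(the point being just that the *world* side is one fixed computation, and everything else — which program, which step — lives in the *claw*.)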
the following space-time-realities are all real in the same *qualitative* way ([though not necessarily to the same *quantitative* degree!](ethic-juice-anthropic-juice.html)):\n\n\n* where you are now, but on a different day.\n* a different country.\n* the many-worlds everett branch where an electron you just measured has a different spin.\n* this world except the moon is made of cheese.\n* [rule 30 starting with a single living cell](https://en.wikipedia.org/wiki/Rule_30)\n* middle earth from lord of the rings. that one might be stretching it depending on your interpretations of things like magic, but the universal distribution is capable of a lot of stuff.\n\n\n(see also: [*the ultimate meta mega crossover*](spoiler-fire-upon-deep.html))\n\n\nif you ignore [acausal weirdness](https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past), these worlds are all causally separate from ours. we didn't make lord of the rings real — we just wrote a bunch of text, and there happens to be a world out there, real in the same way as ours, that we'd consider to be accurately described by that text. but like a [library of babel](https://en.wikipedia.org/wiki/The_Library_of_Babel) of worlds, all other variants that we *don't* describe are also real. the only thing we *can* do to affect which worlds get more [juice](ethic-juice-anthropic-juice.html) than others is choosing to compute some of them but not others. and our making decisions in this world and affecting the way it continues to run \"normally\" is just a particular case of this, just like *someone continuing to live* is a particular case of [*a sequence of moral-patient-instants each causing a similar-to-themselves moral-patient-instant to get instantiated in a world like theirs*](existential-selfdet.html).\n\n\nand, just like acausal/anthropic stuff doesn't save us ([1](https://www.alignmentforum.org/posts/RhAxxPXrkcEaNArnd/notes-on-can-you-control-the-past), [2](https://www.lesswrong.com/posts/EKu66pFKDHFYPaZ6q/the-hero-with-a-thousand-chances), [3](https://www.lesswrong.com/posts/rP66bz34crvDudzcJ/decision-theory-does-not-imply-that-we-get-nice)), it turns out that despite any ethical implications that *the cosmos being a universal program* might have, the [expected utility](https://www.lesswrong.com/posts/7J3ywHzWnghRtdpHQ/on-expected-utility-part-1-skyscrapers-and-madmen) probly plays out about the same, just like it probly plays out about the same under most interpretations of quantum mechanics, regardless of whether other everett branches are real. these things might get you to *care* differently, but mostly not in any way you can do anything about (unless something is looking at us from outside a computation of us and cares about the way we care about things, but it'd be hard to reason about that).\n\n\nthere are nice consequences for decision theory, however: [functional decision theory (FDT)](https://www.lesswrong.com/tag/functional-decision-theory), which wants you to cooperate with other instances of FDT not just across spacetime and everett branches but also across counterfactual worlds, might become simpler when you \"flatten\" the set of counterfactual worlds to be the same kind of thing as the set of spacetime locations and the set of everett branches.\n\n\nnevertheless, [some things are realer than others](limiting-real-universes.html). so, what *is* the measure of [realness juice/amplitude](ethic-juice-anthropic-juice.html), which any living person right now probly has more of than gandalf? 
i feel like it *ought* to be something to do with [\"time steps\"](udassa-time-steps.html) in the universal program, because it doesn't feel like there could be any other measure which wouldn't eventually just become a more complex version of time steps. the reason there's more of me than gandalf in the universal program, even though it *eventually* contains about as many me's as it contains (variations on a particular as-detailed-as-me interpretation of) gandalf's (whether that quantity is infinite or [not](finite-patients.html)), is that the me's tend to occur *before* the gandalf's — or, to be more formal, at sets of timesteps that are *earlier in the universal program or more compressible* than gandalf's. or, more testably: the reason i see coherent stuff rather than noise when i look at a monitor, even though there are more ways to arrange pixels on a monitor that i'd interpret as noise than as coherent stuff, is that the instances of me seeing coherent stuff must tend to occur *at sets of timesteps that are earlier or more compressible* than those of instances of me seeing noise.\n\n\ngiven this quantitative measure, can we re-capture a qualitative notion of \"is this real or not\"? this is where computational complexity can come in to help us. the excellent [*why philosophers should care about computational complexity*](https://arxiv.org/abs/1108.1791) argues that things computable in polynomial time are, in a meaningful sense, *essentially* easier than things only computable in exponential time. if we apply this to time steps, and if it is *earlier sets of time steps* rather than *more compressible sets of time steps* which counts, then our world is real and lord of the rings (assuming it is a polynomial world) can be said to be real, in a sense that worlds whose physics require solving NP-complete problems or [PSPACE problems](https://en.wikipedia.org/wiki/Closed_timelike_curve#Consequences) to progress, can *not* be said to be real. but i suspect that this doesn't actually track observation that much, because worlds in which [people get mind-controlled](https://www.lesswrong.com/posts/r7f58E8A85xLgWuqG/contact-with-reality) into believing NP-complete problems are being solved are probly polynomial themselves (though less common than worlds without such mind control, i'd expect).\n\n\nnote that this does make *our* world weird, because we [seem](https://en.wikipedia.org/wiki/Quantum_supremacy#Progress_in_the_21st_century) to be able to solve [BQP](https://en.wikipedia.org/wiki/BQP) computations. maybe BQP=BPP, or maybe the cosmos runs on a *quantum* solomonoff prior? or maybe, despite how unintutive that feels, it takes this kind of physics for [anthropic reasoning](anthropic-reasoning-coordination.html) to occur? or maybe i'm [being mind-controlled](https://www.lesswrong.com/posts/r7f58E8A85xLgWuqG/contact-with-reality) or [fictional](simulation-hypotheses.html) or who knows what else.\n\n\nthere are now two issues that arise to make a universal program prior usable, even theoretically:\n\n\n* in a sense, a consequence of this is that the [UDASSA](https://www.lesswrong.com/posts/Hcc9fopx7sRexYhhi/anthropics-and-the-universal-distribution/) notion of \"first simulated a world, then extract me\" can in general be flattened into \"just simulate me\" — which also captures more intentional simulations such as those of [\"solomonoff deism\"](solomonoff-deism.html). but what *is* a me? what does an extracted me look like? 
it can't be just be *any* representation of my observations, because otherwise i'd be observing a blank monitor rather than one with *some* contents on it. to take a concrete example, let's say i'm chatting with someone who starts the sentence \"I'm from…\", and i'm trying to predict whether the next word they'll say — call it x — is more likely to be \"California\" or \"[Nuuk](https://en.wikipedia.org/wiki/Nuuk)\". the comparison can't just be K⁻(\"I'm from California\")−K⁻(\"I'm from Nuuk\") (with K⁻ a [simplicity measure](rough-sketch-formal-aligned-ai.html)), because this probly favors \"Nuuk\" even though in practice i'd expect to hear \"California\" a lot more. it feels like what we're heading for at this point would be some kind of P(o|p) function, where o∈{\"I'm from California\",\"I'm from Nuuk\"} is the observation in a format that makes sense to me (english text strings), and p is some kind of \"prior\" relating those to contexts in which they'd be percieved. programs that produce \"I'm from Nuuk\" have higher amplitude than programs that produce \"I'm from California\", but programs that produce *me observing* \"I'm from California\" have higher amplitude than programs that produce *me observing* \"I'm from Nuuk\".\n* let's say we're launching an attempt at an aligned AI based on [QACI](qaci.html) based on [this post](rough-sketch-formal-aligned-ai.html) with a given (question, answer, observation) tuple. if the AI simply fills the future with question-answer intervals engineered so that they'd dominate most of the solomonoff-space of programs, then it can hijack its own decision process. in a sense, this is just a special case of [demons in the solomonoff prior](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign). which is a neat simplification of alignment! by \"flattening\" counterfactual worlds and everett branches to be the same kind of thing as objects that are distant in spacetime, we've managed to describe the alignment problem in a way that captures counterfactual adverserial agents (\"demons\") and factual future unaligned AIs in the same category. now we just need a solution that takes care of both.\n\n\ni feel like extracting a notion of causality within the universal program, one that would let us determine that:\n\n\n* stuff outside our past lightcone can't causate onto us yet\n* decohered everett branches don't causate onto one another\n* two different simulations of conway's game of life on my computer don't causate on one another\n\n\nwould be useful here — though it might need to be able to measure \"non-strict\" probabilistic causation when needed.\n\n\nwe can't just base this on time, because in any [universal program](universal-complete.html) that is sequentially implemented (such as a turing machine), different implementations of our world will have different events occur at different points in time. using a parallel model of computation such as graph rewriting might shed *some* light on which phenomena causate each other and which are being computed in parallel in a causally isolated manner, but it would miss some others: as an extreme example, a [homomorphically encrypted simulation](homomorphically-encrypted-computations.html) of our world would make its internal causal graph unobservable to the outside, even though there's still real causal independences going on *inside that world*. 
so sticking to the simple and sequential paradigm of turing machines will force us to develop more clever but more general notions of causal dependence.\n\n\nnext, whatever measure we build, if we weren't dealing with adversarial intelligences we could just do a big sum weighed by simplicity and hope that the signal from the things we care about wins out, as with something like argmax_a ∑_c α(c)⋅f(c,a) with c being the \"claws\", weighing the value of action a in each possible claw-situation c by some factor α(c). but because we're dealing with (potentially superintelligent!) adversarial agents, we have to make *really sure* that the *undesired* results from whichever f we use to weigh actions are drowned out by sufficiently low α(c)s, so that the overall signal that determines the argmax is from the *desired* f(c,a)s. as an example: in [my attempt](rough-sketch-formal-aligned-ai.html) at formalizing [QACI](qaci.html), we want the weights of carvings that capture the human involved in the original question-answer interval to sufficiently outweigh the weights of the AI filling the future with adversarially-answering \"fake\" question-answer intervals that would allow its earlier (as well as remote/counterfactual) selves to find actions that make its job easier.\n\n\nso, what could a causality relationship look like? one difficulty is that one change in one world could end up modifying *pretty much everything everywhere*, but not in a way that \"really matters\". for example: maybe if world number i does some operation a rather than b, all the other worlds end up being computed in the same way, but all shifted by one extra time step into the future.\n\n\nthis is where the *computer science* notions of [simulation](https://en.wikipedia.org/wiki/Simulation_%28computer_science%29) and [bisimulation](https://en.wikipedia.org/wiki/Bisimulation) (which aren't usually quite what i mean by those words, but it's related) might come in, which i intend to learn about next; though i wouldn't be surprised if such a measure might be hacked together just out of kolmogorov complexity again, or something like it.\n\n\nas a final note on the universal distribution: i've recently learned that theoretical turing machines augmented with \"halting oracles\" [give rise to interesting computational classes](https://en.wikipedia.org/wiki/Hyperarithmetical_theory), which in particular let those turing machines do [hypercomputation](https://en.wikipedia.org/wiki/Hypercomputation) in the form of obtaining results that should require an infinite amount of computation, in a finite number of steps. this might enable us to build a universal prior which captures something closer to the full [tegmark level 4 mathematical multiverse](https://space.mit.edu/home/tegmark/crazy.html). though it's not clear to me whether that's actually desired; what would it actually *mean* to inhabit a hypercomputational multiverse? if the halting oracle runs an infinite number of moral patients in a finite number of steps, how the hell does the [anthropic and ethics juice](ethic-juice-anthropic-juice.html) work out? 
i'll be sticking to the *less* uncomputable, regular solomonoff or even levin prior for now, but this question might be worthy of further consideration, unless we get the kind of aligned AI that doesn't require us to figure this out up front.", "date_published": "2022-12-14T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "8fbb7e8f5d40ae3adf30a9f17236f226", "title": "a rough sketch of formal aligned AI using QACI", "url": "https://carado.moe/rough-sketch-formal-aligned-ai.html", "source": "carado.moe", "source_type": "blog", "text": "a rough sketch of formal aligned AI using QACI\n----------------------------------------------\n\n\nin this post, i put forth some of my current thoughts about the shape of a formal aligned AI using [QACI](qaci.html) for its decision — \"decision\" in the singular here, as this is sufficient when the AI's decision can be \"run me again but with these different inputs\". as it turns out, this doesn't require solving as many things as i'd thought — it seems like QACI might be general enough to delegate picking a decision theory and solving embedded agency to the counterfactual consideration of the past user.\n\n\nwe'll posit:\n\n\n* as a convention, we'll use a prime ′ to denote counterfactual values, and we'll be denoting questions q, and answers r (for \"response\") to avoid confusion with a for actions.\n* the AI is denoted G:ℕ×ℕ×ℕ→A, taking as input an observation as well as the user's original question and answer, denoted q and r. it returns an action from the set of all possible actions, A.\n* K⁻:ℕ→[0;1] is gonna be a simplicity measure based on [kolmogorov complexity](https://en.wikipedia.org/wiki/Kolmogorov_complexity), where ∑_{x∈ℕ} K⁻(x) = 1.\n* all sets of functions A→B will be countable sets of computable functions. in particular, W≔()→⊥ will be the set of computable hypotheses for worlds, represented as non-halting programs taking no input with () and ⊥ being respectively the [unit](https://en.wikipedia.org/wiki/Unit_type) and [bottom](https://en.wikipedia.org/wiki/Bottom_type) types.\n* finally, we'll implicitly \"cast\" mathematical objects as natural numbers wherever appropriate, given that they're all sampled from countable sets anyways. when they're cast to or from natural numbers, assume a reasonable bijection between their type and ℕ.\n\n\nwe'll define the following:\n\n\na \"carver\" function C : W×ℕ → 2^((ℕ×ℕ→W)×ℕ×(W→ℕ)) which returns a set of tuples of:\n\n\n* a function t_x for extracting a piece of data \"in the same way\" as x is in w but from any other world\n* a piece of data t_{w\\x} that represents \"everything else\" than x in the world\n* a function t_w for counterfactually putting another piece of data x′ back in w, alongside t_{w\\x}\n\n\nthis is done by splitting a world w into the piece of data x, and \"everything else\", denoted t_{w\\x}. 
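(before the formal definition, here is a minimal toy sketch of the carving idea — my own illustration, over plain strings rather than world-programs, and producing just one carving instead of the whole set that C quantifies over:)

```python
def carve(world: str, x: str):
    """toy "carver": split `world` into the payload `x` and "everything else",
    returning (t_w, t_w_minus_x, t_x) for one particular way x sits in world.
    purely illustrative — the real C(w,x) ranges over all such carvings and
    imposes the exactness conditions given in the definition below."""
    i = world.index(x)
    t_w_minus_x = (world[:i], world[i + len(x):])  # "everything else" than x

    def t_x(other_world: str) -> str:
        # extract a piece of data "in the same way" as x is in world, from any other world
        return other_world[i:i + len(x)]

    def t_w(rest, x_prime: str) -> str:
        # counterfactually put another piece of data back into the world, alongside the rest
        before, after = rest
        return before + x_prime + after

    return t_w, t_w_minus_x, t_x

t_w, rest, t_x = carve("user was asked: Q1 and answered: A1", "A1")
assert t_w(rest, "A1") == "user was asked: Q1 and answered: A1"  # re-injecting x recovers w
assert t_x(t_w(rest, "A2")) == "A2"  # inject a counterfactual answer, then read it back
```

(here t_x plays the extraction role, the pair of surrounding strings plays the role of t_{w\x}, and t_w does the counterfactual re-injection; the real construction additionally demands that the carving picks out x and w *exactly*, which is what the two ⇔ conditions below enforce.)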
in practice with arbitrary other worlds, t_x would return \"garbage\" the immense majority of the time, but the hope is that given a same carving t_x^q, t_w^q for the question q, a same carving t_x^r, t_w^r for the answer would work often enough to give a signal that would tend to beat the overall noise of the failing cases.\n\n\nC(w,x) ≔ { (t_w, t_{w\\x}, t_x) | t_w ∈ ℕ×ℕ→W, t_{w\\x} ∈ ℕ, t_x ∈ W→ℕ, ∀w′∈ℕ: t_x(w′)=x ⇔ w′=w, ∀x′∈ℕ: t_w(t_{w\\x}, x′)=w ⇔ x′=x }\n\n\nwe'll define QACI : W×ℕ×ℕ×ℕ → 2^(ℕ×ℝ), the [question-answer counterfactual interval](qaci.html) device used to consider answers to counterfactual questions q′, given a world hypothesis w and a known question q and answer r:\n\n\nQACI(w,q,r,q′) ≔ { (t_x^r(t_w^q(t_{w\\x}^q, q′)), K⁻((t_w^q, t_x^q, t_w^r, t_x^r))) | (t_w^q, t_{w\\x}^q, t_x^q) ∈ C(w,q), (t_w^r, t_{w\\x}^r, t_x^r) ∈ C(w,r) }\n\n\nnote how K⁻ measures the simplicity of all four t functions together so as to favor them being simple but also similar, but ignores the simplicity of the \"rest of the world\" t_{w\\x} values.\n\n\nfinally, we can define our AI G:ℕ×ℕ×ℕ→A as a function of q,r but also an observation o which could, in practice, be anything that lets the AI as well as the user better locate themselves in the set of possible worlds.\n\n\nG(q,r,o) ≔ argmax_{a∈A} ∑_{w∈W, (r′,α)∈QACI(w,q,r,(w,a))} K⁻(w)⋅α⋅max(0, min(1, r′(q,r,o)))\n\n\nwhat's going on here is that for each world hypothesis w, weighed by its simplicity K⁻(w), we're finding carvings for q and r that let us inject a counterfactual question and get a counterfactual answer, favoring simpler carvings using α. the counterfactual rerun of the past user is given the world hypothesis w being considered, as well as the action a — we're *not* giving the user q,r,o because that would make the counterfactual question guaranteed to be strictly larger than the original question, which seems like it might break things, and is not absolutely necessary. the user gets to return an arbitrary function to be evaluated, to which q,r,o are actually passed. we *are*, however, clamping the result of r′ inside [0;1] and then multiplying it by K⁻(w)⋅α ourselves, so as to avoid erroneous carvings getting to influence the decision process too much by returning arbitrarily large values. the hope here is merely that desired carvings generally outweigh undesired carvings, and especially adversarial carvings.\n\n\nways this could go wrong include:\n\n\n* the carvings could fail to sufficiently select the actual instances of the past-user in the world; in particular, t_x^r could be returning so much \"garbage\" when given counterfactual worlds t_w^q(t_{w\\x}^q, q′) that are different from w that the signal from the carvings that *do* work ends up completely drowned out.\n* the carvings locate question-answering users *anywhere in the world*, including in the future. this allows for adversarial intelligences to occupy most of the set of simple and coherently-answering carvings, thus steering our AI away from following the actual user's decisions. a solution to this would be to sufficiently strongly favor carvings that select question-answering processes that are causally upstream of the AI itself; that is to say, causally upstream of q,r,o.\n* weird implementation details of how turing machines are encoded could dominate most of the signal of values returned by r′. in addition, the user could be unsure as to how to return a meaningful scalar given a particular potential action. 
these points could both be partially addressed by passing to the carvings pairs of actions a1,a2 for all possible pairs of different actions from A, and have the counterfactual user select a preferred action, rather than relying on a scalar returned about a single action. if r′(…,a1,a2) and r′(…,a2,a1) agree as to which of a1 and a2 is the preferable action, then that's an actual signal that it is preferable. adding more \"symmetries\" might make noise cancel itself out.\n* i don't think we get to require C to return carvings that work for arbitrarily large counterfactual payloads, nor do we need to. in practice, i expect a constant finite (though large) bit length is to be used for x.\n* there's a lot of sets-of-all-programs being sampled from here, leaving a lot of room for [demons in the solomonoff prior](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign) if those are a concern. typically i tend to favor decision theoritic solutions to those, and maybe a correct QACI implementation would return action-functions r′ which would depend on whatever decision theory is correct, such that this can be delegated? but it *feels* like this system has ways to go wrong before that, in what programs get to control most of the \"mass\" returned by QACI to begin with.\n\n\nthis is of course highly uncomputable. the intent here, is to use something like [logical induction](https://www.lesswrong.com/tag/logical-induction) to approximate good results to this function. what makes me hopeful that a powerful AI can make helpful guesses as to what actions this process would find, if it is indeed aligned, is that *even i*, a mere human mind, feel like i can make some helpful guesses as to what actions this process would find.", "date_published": "2022-12-11T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c8d545c427a11ee276181ad506b256d8", "title": "just enough spoilers for", "url": "https://carado.moe/spoiler-fire-upon-deep.html", "source": "carado.moe", "source_type": "blog", "text": "just enough spoilers for *a fire upon the deep* to read a yudkowsky fanfic\n--------------------------------------------------------------------------\n\n\n[*The Finale of the Ultimate Meta Mega Crossover*](https://www.fanfiction.net/s/5389450/1/The-Finale-of-the-Ultimate-Meta-Mega-Crossover) is a fanfiction that i think is pretty great, written by eliezer yudkowsky. 
it has major spoilers for two books: the excellent [*Permutation City* by greg egan](https://en.wikipedia.org/wiki/Permutation_City) which i love and thoroughly recommend, and [*A Fire Upon the Deep* by vernor vinge](https://en.wikipedia.org/wiki/A_Fire_Upon_the_Deep) which i enjoyed.\n\n\nbecause:\n\n\n* i think many would consider that latter book too large a dependency to read the fanfic,\n* it doesn't take that many spoilers about it to enjoy the fanfic — whereas it does take a lot of spoilers about *Permutation City* if you haven't read that,\n* many people i know have read *Permutation City* but not *A Fire Upon the Deep*,\n\n\ni'm writing this post where i give just enough spoilers for someone who hasn't read it but *has* read *Permutation City* to enjoy the fanfic.\n\n\nmy general recommendation would be: read *Permutation City* if you haven't, then read this post, then maybe read *A Fire Upon The Deep* if this post has made you interested enough in it, and then go read [yudkowsky's fanfic](https://www.fanfiction.net/s/5389450/1/The-Finale-of-the-Ultimate-Meta-Mega-Crossover).\n\n\n\n\n---\n\n\nthe book's setting is pretty interesting. it's a science-fiction adventure set in our galaxy, with a peculiar limitation: given an information system of a particular level of capability — such as a human mind, a superintelligence, or an advanced computer program — it can only exist above a certain \"Zone of Thought\", a geographic region of the galaxy. if they move to a lower zone, closer to the center of the galaxy, then they start either being reduced in capability or breaking down altogether.\n\n\nthese levels go, in increasing capability and increasing distance from the center of the galaxy:\n\n\n* *The Unthinking Depths*, where nothing of much intelligence can exist\n* *The Slow Zone*, where basic computers and human minds can function but computers are still not advanced enough to do the computations necessary to do FTL travel and communication\n* *The Beyond*, where computers are capable of capable of much more and FTL communication and travel are possible\n* *The Transcend* (not pictured in the map below) where superintelligences — called \"Powers\" in the book — can exist, moslty at peace with each other.\n\n\n![](spoiler-fire-upon-deep.jpg)\n\n\n(this map of the galaxy is included at the very start of the book)\n\n\nat the start of the story, a Power called \"The Blight\" appears in the Transcend, and starts attacking other superintelligences. the book follows (among others) two characters aboard a ship headed down towards lower zones of thought to look for a way to defeat the Blight.\n\n\naboard the ship are notably two humans:\n\n\n* Ravna Bergnsdot, a fairly normal human\n* Pham Nuwen, a human who used to serve a Power called the *Old One* by being its interface to interact with humans. 
the Old One, just before being killed by the Blight, left in his mind fragments that he can't make sense of yet, but are expected to become useful as time goes.\n\n\nand that's, i think, about all you need to know about *A Fire Upon The Deep* to go and enjoy [The Finale of the Ultimate Meta Mega Crossover](https://www.fanfiction.net/s/5389450/1/The-Finale-of-the-Ultimate-Meta-Mega-Crossover).", "date_published": "2022-11-22T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "cf186faf488cc6d3fbca3281a951a412", "title": "CYOAs and futurism", "url": "https://carado.moe/cyoas-futurism.html", "source": "carado.moe", "source_type": "blog", "text": "CYOAs and futurism\n------------------\n\n\nthere is a community for the creation and playing of games where one is to select a bunch of options, some favorable and some disfavorable, and imagine what an ensuing situation would look like. these CYOAs — named after, but not quite the same as, [\"choose your own adventure\" books](https://en.wikipedia.org/wiki/Gamebook) — can be found on a couple of subreddits, namely [r/makeyourchoice](https://www.reddit.com/r/makeyourchoice/) (sfw) and [r/nsfwcyoa](https://www.reddit.com/r/nsfwcyoa/) (nsfw).\n\n\nCYOAs i enjoy playing tend to be pretty complex, with many sets of choices and points systems — typically, picking an option costs points when the choice is favorable and rewards points when the choice is disfavorable. they often involve choosing one's own situation in the world or choosing what kind of a world one is to inhabit; they are most interesting when they offer both of those possibilities.\n\n\nfor example, a CYOA might offer options that lead one to choose between being a poor peasant or a famous wizard on the one hand, and might offer options that lead to inhabiting worlds such as a cyberpunk dystopia, a comfy high-fantasy world, or merely a modified version of our base reality.\n\n\nchoices, and the scale and meaning of their consequences, have to be interpreted because a CYOA's consequences are to be simulated in one's own mind, rather than mechanically ran by a program as they are for video games. 
as a result, one ends up having to do something like [extrapolating what someone intended](https://www.lesswrong.com/tag/coherent-extrapolated-volition) in a balanced way, which is a fun exercise.\n\n\nthings get especially interesting when one has to tradeoff their own quality of life vs the kind of world they'll end up in, the latter of which will impact everyone else that lives in that realm.\n\n\ni try to play those games as if my choices were actually going to be implemented, and i find myself typically implementing a preference ordering which is, from most to least preferred option:\n\n\n* utopia for everyone\n* okaytopia for me, utopia for everyone else\n* extinction for me, utopia for everyone else\n* utopia for me, okaytopia for everyone else\n* okaytopia for everyone\n* extinction for me, okaytopia for everyone else\n* utopia for me, extinction for everyone else\n* okaytopia for me, extinction for everyone else\n* extinction for everyone\n* anything involving worse-than-extinction (i don't want to think about sortings between variations of those)\n\n\nnote that \"extinction\" can be ambiguous — how much can i change of my own mind (as many CYOAs allow) before it's not me anymore, or even not really a full person anymore?\n\n\nthis is a framework which, while pretty removed from the actual life choices i have to make in real life, still feels interesting for thinking about choices and how they impact the future, as well as general utilitarianism and world optimization given a top-down but still constrained perspective.\n\n\nand, while i've recently put aside [utopia design](%E2%88%80V.html) in favor of [just letting values be extrapolated](surprise-you-want.html), CYOAs still get my mind thinking about how choices about a world work out in the long run. this can include choices [about the nature of reality](generalized-adding-reality-layers.html) that CYOAs sometimes ask you to make about what kind of world you'll inhabit, and can be [as fundamental as](implementing-the-platonic-realm.html) \"will i choose for the world to have magic?\" or \"can i make it that the world has [infinite inhabitable future](hope-infinite-compute.html)?\" — which i think are not entirely irrelevant to think about, if we are to get a better idea of [what problems we need to solve](confusion-about-alignment-requirements.html) to get [the best kind of utopia](utopia-scopes.html).", "date_published": "2022-11-20T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "991895993796b7c6775193ab672d1a49", "title": "let's stick with the term \"moral patient\"", "url": "https://carado.moe/moral-patient-term.html", "source": "carado.moe", "source_type": "blog", "text": "let's stick with the term \"moral patient\"\n-----------------------------------------\n\n\n\"moral patient\" means [\"entities that are eligible for moral consideration\"](https://en.wikipedia.org/wiki/Moral_agency#Distinction_between_moral_agency_and_moral_patienthood). as [a recent post i've liked](https://www.lesswrong.com/posts/HoQ5Rp7Gs6rebusNP/superintelligent-ai-is-necessary-for-an-amazing-future-but-1) puts it:\n\n\n\n> And also, it’s not clear that “feelings” or “experiences” or “qualia” (or the nearest unconfused versions of those concepts) are pointing at the right line between moral patients and non-patients. 
These are nontrivial questions, and (needless to say) not the kinds of questions humans should rush to lock in an answer on today, when our understanding of morality and minds is still in its infancy.\n> \n> \n\n\nin this spirit, i'd like us to stick with using the term \"moral patient\" or \"moral patienthood\" when we're talking about the set of things worthy of moral consideration. in particular, we should be using that term instead of:\n\n\n* \"conscious things\"\n* \"sentient things\"\n* \"sapient things\"\n* \"self-aware things\"\n* \"things with qualia\"\n* \"things with experiences\"\n* \"things that aren't p-zombies\"\n* \"things for which there is something it's like to be them\"\n\n\nbecause those terms are hard to define, harder to meaningfully talk about, and we don't in fact know that those are what we'd ultimately want to base our notion of moral patienthood on.\n\n\nso if you want to talk about the set of things which deserve moral consideration outside of a discussion of what precisely that means, don't use a term which you feel like it *probably is* the criterion that's gonna ultimately determine which things *are* worthy of moral consideration, such as \"conscious beings\", because you might in fact be wrong about what you'd consider to have moral patienthood under reflection. simply use the term \"moral patients\", because it is the term which unambiguously means exactly that.", "date_published": "2022-11-20T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "973201c3f866a470d205c2bbcc308f60", "title": "logical vs indexical dignity", "url": "https://carado.moe/logical-indexical-dignity.html", "source": "carado.moe", "source_type": "blog", "text": "logical vs indexical dignity\n----------------------------\n\n\n[MIRI's *Death with Dignity* post](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy) puts forward the notion of \"dignity points\":\n\n\n\n> the measuring units of dignity are over humanity's log odds of survival - the graph on which the logistic success curve is a straight line. A project that doubles humanity's chance of survival from 0% to 0% is helping humanity die with one additional information-theoretic bit of dignity.\n> \n> \n\n\nbut, as [*logical and indexical uncertainty*](https://www.lesswrong.com/posts/SFLCB5BgjzruJv9sp/logical-and-indexical-uncertainty) puts it, there are two different kinds of uncertainty: uncertainty over our location within things that exist, called **indexical** uncertainty, and uncertainty over what gets to exist in the first place, called **logical** uncertainty.\n\n\nthe matter of *there existing many instances of us*, can occur not just thanks to the many-worlds interpretation of quantum mechanics, but also thanks to other multiverses like [tegmark level 1 and reasonable subsets of tegmark level 4](https://space.mit.edu/home/tegmark/crazy.html), as well as various [simulation hypotheses](simulation-hypotheses.html).\n\n\ni think that given [the *logical and indexical uncertainty* post](https://www.lesswrong.com/posts/SFLCB5BgjzruJv9sp/logical-and-indexical-uncertainty)'s take on risk aversion — \"You probably prefer the indexical coin flip\" — we should generally aim to create logical dignity rather than indexical dignity, where logical uncertainty includes things like \"what would tend to happen under the laws of physics as we believe them to be\". 
if there's a certain amount of indexical uncertainty and logical uncertainty about a plan, the reason we want to tackle the logical uncertainty part by generating logical dignity, is so that what's left is indexical, and so it will go right *somewhere*.\n\n\nas a concrete example, if your two best strategies to save the world are:\n\n\n* one whose crux is a theorem being true, which you expect is about 70% likely to be true\n* one whose crux is a person figuring out a required clever idea, which you expect is about 70% likely to happen\n\n\nand they have otherwise equal expected utility, then you'll want to favor the latter strategy, because someone figuring out something seems more quantum-determined and less set-in-stone than a theorem being true or not. by making logical stuff be what you're certain about and indexical stuff be what you're uncertain about, rather than the other way around, you make it so that in the future, *some place* will turn out well.\n\n\n(note that if our impact on utopia is largely indexical, then it might feel like we should focus more on reducing [S-risk](https://en.wikipedia.org/wiki/Suffering_risks) if you're e.g. negative utilitarian, because you want utopia *somewhere* but hell *nowhere* — but if [god isn't watching](solomonoff-deism.html) to stop computing timelines that aren't in their interest, and if we are to believe that we should do the normal expected utility maximization thing across timelines, then it [probly shouldn't actually change what we do](https://www.lesswrong.com/posts/7J3ywHzWnghRtdpHQ/on-expected-utility-part-1-skyscrapers-and-madmen), merely just how we feel about it)", "date_published": "2022-11-19T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c43334d7d7838e63f98971e94b48a047", "title": "wonky but good enough alignment schemes", "url": "https://carado.moe/wonky-good-enough-alignment.html", "source": "carado.moe", "source_type": "blog", "text": "wonky but good enough alignment schemes\n---------------------------------------\n\n\ni'll say that an AI alignment scheme is \"aligned enough\" not just if it consists of building a [fully aligned singleton (FAS)](outlook-ai-risk-mitigation.html) which i'd trust to do what's good at any level of capabilities; i'd also say that an alignment scheme is \"aligned enough\" if it leads, even indirectly, to the construction of such a FAS.\n\n\none potential way we could get this is by using some kind of assistant AI — perhaps a system that uses GPT or something — to figure out how to build a FAS. the assistant might not be [eventually aligned](ai-alignment-curves.html); if it had enough capability, it migth realize that actually it wants to kill everyone. but we'd be relying on it being, for as long as we use it, weak enough to not do such a thing.\n\n\nthis kind of \"wonky alignment scheme\" that is \"aligned enough\" but *goes through* using temporarily aligned AIs, where we need to *know that they're weak* in order for the scheme to work, might end up useful given that such a temporarily aligned AI might be much easier to build than [an eventually aligned, let alone continuously aligned](ai-alignment-curves.html), AI.\n\n\n(maybe, actually, what we should be doing with our limited time and resources is not building FAS, but melting GPUs or something else like that. 
if we had a temporarily-aligned assistant AI which we get to charge with tasks, the task we should want to give it is a very general aligned goal such as [satisfying its past-user](outer-alignment-past-user.html), such that the AI imagining the past-user-under-reflection would be able to consider all those plans and pick the one that actually needs work, rather than working on what *we* think is most important.)\n\n\nwhile [i try to do work that directly contributes to the creation of a FAS](outlook-ai-risk-mitigation.html), indirect \"wonky\" approaches such as using large language models in order to accelerate alignment research have their place too — it's not like one is particularly less hopeless than the other. my main source of skepticism about them is the requirements above, about the intermediary system being weak — that is *not* an assumption that i like to have to make about AIs that would have to be in some sense more capable than us, which they'd have to be if they are to make a difference.", "date_published": "2022-11-19T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "4cfcd20741465ca0a79c3653706e8192", "title": "generalized wireheading", "url": "https://carado.moe/generalized-wireheading.html", "source": "carado.moe", "source_type": "blog", "text": "generalized wireheading\n-----------------------\n\n\nmany systems \"want\" to [\"wirehead\"](https://www.lesswrong.com/tag/wireheading) — which is to say, they want to hijack, and maximize, their reward signal.\n\n\nhumans often want to. not always, but sometimes, and this might be true even under reflection: some people (believe they) truly axiomatically only care to be in a state where they're satisfied, others [have values about what actually happens in the world](https://mindingourway.com/the-stamp-collector/) (which is actually possible and meaningful to do!).\n\n\n[reinforcement learning](https://en.wikipedia.org/wiki/Reinforcement_learning) AIs such as [AIXI](https://en.wikipedia.org/wiki/AIXI) want to wirehead: they want to just do whatever will maximize their reward. if there is a function in place that looks at the amount of happiness in the world and continuously rewards such an AI by that much, then the AI will do whatever is easiest, whether that's *do what makes that function return the highest value*, or *replace the function with a constant returning the maximum value*. (if it does so consequentially, such as by observing that it's more likely to get even more reward in the future by taking over the world, then [it'll still do just that](https://en.wikipedia.org/wiki/Instrumental_convergence), so we can't necessarily count on wireheading to stop world-consuming AIs.)\n\n\n(it's true that [\"reward is not the optimization target\"](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target) of *learned policies* — AIs that are first trained in an RL environment, and then deployed into the world without that reward mechanism. but i think it is true of agents that *continuously get rewarded and trained even after deployment*.)\n\n\nsome bad philosophical perspectives claim to want society to wirehead: they want to get a society where everyone is as satisfied as possible with how things are, without realizing that a goal like that is easily hijacked by states such as *everyone wants to do nothing all day*, or where everyone is individually wireheaded. we do not in fact want that: in general, we'd like the future to be interesting and have stuff going on. 
it is true that by happenstance we have not historically managed to turn everyone into a very easily satisfied wireheaded person (\"zombie\"), but that shouldn't make us falsely believe that, purely by chance, this will never be the case. if we want to be sure we robustly don't become zombies, we have to make sure we actually don't implement a philosophy that would be most satisfied by zombies.\n\n\nthe solution to all of those is [to bite the bullet of value lock-in](surprise-you-want.html). there are meta-values that are high-level enough that we do in fact want them to guide the future — even *within* the set of highly mutable non-axiomatic values, we still have preferences for valuing some of those futures over others. [past user satisfaction](outer-alignment-past-user.html) embodies this well as a solution: it is in fact true that i should want (the [coherent extrapolated volition](https://www.lesswrong.com/tag/coherent-extrapolated-volition) of) my values to determine all of the future light-cone, and this recursively takes care of everything — including adding randomness/happenstance *where it ought to be, purposefully*.\n\n\njust like [alignment](alignment-optimization-processes.html), the mistake of saying \"i just want *people in the future* to be satisfied!\" can isomorphically be found in many fields; and it is in fact not where we should want to steer the future, because its canonical endpoint is just something like wireheading. we want (idealized, meta-)value lock-in, not the satisfaction of whatever-will-exist. **fundamentally, we want the future to satisfy the values of *us now*, not *people/things later***.\n\n\nof course, those values of us now [happen to be fairly cosmopolitan](https://www.lesswrong.com/posts/HoQ5Rp7Gs6rebusNP/superintelligent-ai-is-necessary-for-an-amazing-future-but-1) and entail, instrumentally, that people in the future indeed largely be satisfied. but this ought to ultimately be under the terms of our current cosmopolitan (meta-)values, rather than a blind notion of just filling the future with things that get what they want without caring what those wants are.", "date_published": "2022-11-18T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "bfb406e46324af033977fcabbca972c3", "title": "\"humans aren't aligned\" and \"human values are incoherent\"", "url": "https://carado.moe/human-values-unaligned-incoherent.html", "source": "carado.moe", "source_type": "blog", "text": "\"humans aren't aligned\" and \"human values are incoherent\"\n---------------------------------------------------------\n\n\n\"humans aren't aligned\" or \"our values are not coherent\" are concerns that i occasionally hear about the odds of [AI alignment](https://en.wikipedia.org/wiki/Ai_alignment) research being able to accomplish what it intends to do.\n\n\nit is to be remembered that \"aligned\" is a two-place relation — we ask whether a system is aligned with another system. a given human is aligned with themself, by definition, to the extent that they have values at all. it is true that humans are not fully aligned to one another, but there is significant overlap, and there is general agreement that [AI doom](ai-doom.html) is worse than most expectable [value handshakes](https://www.lesswrong.com/tag/values-handshakes)/[bargaining](https://www.lesswrong.com/posts/vJ7ggyjuP4u2yHNcP/threat-resistant-bargaining-megapost-introducing-the-rose) or even [indirect universalism](surprise-you-want.html). 
this is why we don't observe alignment researchers trying to beat one another to be the one whose values tile the universe — any kind of effort in that direction would likely hamper the total chances of alignment being successful to begin with, and that's what we're all trying to avoid.\n\n\non the topic of value coherency, it may be true that some of my preferences might not easily or even at all be formulable as a formal utility function that a fully aligned AI ought to maximize. but i have [meta-values](surprise-you-want.html), and i'm reasonably confident that my meta-values entail my incoherent preferences being satisfied too, whatever i'd find that to mean under enough reflection (as that is itself a value extrapolation process i have meta-values about). at the very least, [some kind of libertarian framework](%E2%88%80V.html) where i can do more or less [whatever i want](everything-is-okay.html) while being [free from moloch](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/) surely must be sufficient: if we build a world where you can do whatever you want, then that should include whatever you're doing now to satisfy those incoherent values.\n\n\ndon't get me wrong, [alignment is not looking great](outlook-ai-risk-mitigation.html). but i believe it is a solvable problem, and i don't believe these concerns are particularly big hurdles.", "date_published": "2022-11-18T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "4cd68aacb4ddddc96bdb0811042de28e", "title": "a safer experiment than quantum suicide", "url": "https://carado.moe/safer-quantum-suicide-experiment.html", "source": "carado.moe", "source_type": "blog", "text": "a safer experiment than quantum suicide\n---------------------------------------\n\n\nin [tegmark's page about multiverses](https://space.mit.edu/home/tegmark/crazy.html), he mentions that you could experimentally test the validity of the many-worlds interpretation of quantum mechanics by running a machine that uses a quantum random number generator to decide whether to kill you, with overwhelmingly large odds of doing so.\n\n\nin my perspective, the way this works is that if the many-worlds interpretation of quantum mechanics is true *and [anthropic juice](ethic-juice-anthropic-juice.html) reallocates itself to your surviving instances at any point in time* (which is in my opinion the only way for quantum immortality to work, and might make sense if there is an objective fundamental arrow of time, which i think is plausible), then you still have a reasonable probability of observing yourself existing after the experiment. if not, then almost all your [anthropic juice](ethic-juice-anthropic-juice.html) is located *before* you ran the experiment. 
based on that, a given local observer is able to gain bits of evidence about how the world works, in a manner similar to what [the doomsday argument](https://www.lesswrong.com/tag/doomsday-argument) says about our chance of extinction.\n\n\nhowever, if instances of *[anthropic](https://www.lesswrong.com/posts/RnrpkgSY8zW5ArqPf/sia-greater-than-ssa-part-1-learning-from-the-fact-that-you) reasoning* are the things which are able to deduce information from their existence — as opposed to \"observers\" — as i suspect in [*anthropic reasoning coordination*](anthropic-reasoning-coordination.html), then a much safer experiment could be done: instead of making it likely that the machine kills you, simply make it likely that the machine brainwashes you into being committed to never doing anthropic reasoning ever again — or simply have the machine tell you to make such a commitment, and then do it yourself, if you're able to stick to it. then, it is not *you* but *you undertaking anthropic reasoning* which is undergoing quantum suicide, and that's theoretically enough to gain the same bits of evidence as proper full quantum suicide.\n\n\n(not that, in the grand scheme of things, an observer is expected to be able to do anything with that evidence — a reasonable understanding of [how ethics juice relates to anthropics juice](ethic-juice-anthropic-juice.html) as well as [weird causality in decision theory](https://www.alignmentforum.org/posts/RhAxxPXrkcEaNArnd/notes-on-can-you-control-the-past) will probly function in a way that makes such evidence entirely unusable with regards to what actions to take to maximize something like expected utility [in the one way that makes sense](https://www.lesswrong.com/posts/7J3ywHzWnghRtdpHQ/on-expected-utility-part-1-skyscrapers-and-madmen), from what i understand. at most, one can use that evidence to know how to *feel* about how things might be *in the grander scheme of [what's real](limiting-real-universes.html)*, but not to impact what exists unless [strange acausal shenanigans are at play](https://slatestarcodex.com/2018/04/01/the-hour-i-first-believed/))", "date_published": "2022-11-13T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "6bdb7522ef94fea09bf18f5380632ec3", "title": "fully aligned singleton as a solution to everything", "url": "https://carado.moe/fas-solution-everything.html", "source": "carado.moe", "source_type": "blog", "text": "fully aligned singleton as a solution to everything\n---------------------------------------------------\n\n\nwhen i misplace my keys, [building a fully aligned superintelligent singleton (FAS)](outlook-ai-risk-mitigation.html) that takes over the world and realizes my values is a solution which *would work*. finding my keys the usual way is easier and safer, so i do that instead.\n\n\nwhen a pandemic hits the world, building a FAS would actually solve that problem. social distancing and vaccines are just a more tested solution, and it's easier to sell on people. plus, trying to get them to build a powerful aligned AI might fail terribly because people might not care about, or fail at, the \"aligned\" part.\n\n\nwhen faced with climate change, it's not clear what is to be done. very large scale international coordination seems to be on the way, but it might not be enough. 
building a FAS would work, of course, but again, we face the same issues as above.\n\n\nwhen faced with [existentially risky AI](ai-doom.html) (XAI), where global coordination seems [extremely difficult](why-timelines-short.html), this might finally be the time to build FAS. it's very dangerous, but AI risk is so high that it seems to me like it's actually the best expected value solution.\n\n\nin fact, in general, building something that ensures my values are maximally satisfied everywhere forever is the in-retrospect-obvious thing anyone should want to do, at any point in time. it's just more possible and urgent now than it has been in the past.\n\n\nthe largest problem we're facing (XAI) and the least terrible solution we have for it (FAS) have a large part in common (powerful AI). but that isn't that much due to a direct causal link: FAS is not necessarily generally the solution to XAI. rather than XAI being a problem causing FAS to be the solution, they have a cause in common: the concepts and technology to build powerful AI are around, which causes a problem (XAI is possible) but also enables a solution (FAS is possible).\n\n\nwe need FAS because [we're trying to stop *everyone*](outlook-ai-risk-mitigation.html) from doing what we [expect to become an *easy thing*](why-timelines-short.html) (building XAI), but it has nothing to do with that thing itself being a powerful AI. if we were afraid that anyone was gonna become able to easily build a superplague or cause [vacuum decay](https://en.wikipedia.org/wiki/False_vacuum_decay) then FAS might also be the best solution, if the idea to do it (and how) was also in the general ideaspace around us.\n\n\nso, i think we should find the fact that the problem and the solution have a lot of research in common (studying powerful AI) to be a weird, interesting fact, and we should generally assume that the research that is involved in the problem won't particularly be helpful to research that is involved in the solution, at least by default — for example, if FAS is made in a way that is pretty different from current AI or XAI.", "date_published": "2022-11-12T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "66e3cb7e2807518fb6a5358c1e9d0480", "title": "a casual intro to AI doom and alignment", "url": "https://carado.moe/ai-doom.html", "source": "carado.moe", "source_type": "blog", "text": "a casual intro to AI doom and alignment\n---------------------------------------\n\n\nthis post, intended notably for people outside of the AI alignment community, aims to convey my current perspective about AI doom and alignment and why i think those are important issues. i hold these beliefs not with absolute confidence, but with enough that i think i ought to be focused on these issues.\n\n\ntl;dr: **the development of advanced AI will likely cause the permanent extinction of everything we value, sometime this decade or maybe the next. not many people are working on solving this, and we largely don't know what we're doing. you can help by trying to do alignment research.**\n\n\n### what's going on?\n\n\npeople in a variety of organizations such as OpenAI and DeepMind are researching ever more advanced artificial intelligence. they're not doing this out of malice, or even that much for profit; from what i understand, they're doing it because they believe it's cool and because they think it's genuinely going to improve the world.\n\n\ni think they're mistaken. 
i, and most of the AI alignment community, think that it's likely to have catastrophic consequences we call \"doom\"; typically [the total extinction of everything we value](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence), or [possibly worse](https://en.wikipedia.org/wiki/Suffering_risks).\n\n\nthe reasons why can be simple or complicated, depending on your assumptions about AI and ethics and various other things. no small post is going to fully address all the counter-arguments people are going to have. here's a short explanation which is intuitive to me:\n\n\n* nobody even knows how to make advanced AIs pursue anything specific, let alone how to make advanced AIs pursue goals that encompass everything we care about\n* because of these, and because of things like [the orthogonality thesis](https://www.lesswrong.com/tag/orthogonality-thesis), as soon as someone builds the first AI that is good at pursuing something, that thing is [very unlikely](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile) to be something we want.\n* because of [instrumental convergence](https://en.wikipedia.org/wiki/Instrumental_convergence), any AI that is good at pursuing something we don't want will want to use as many resources as possible to pursue it. this includes everything we value; everything we value is made of matter and energy that the AI could be using to better accomplish what it's pursuing.\n* powerful AI is likely to happen somewhat soon — within this decade, or maybe the next. [you can read about why i think this](why-timelines-short.html), but you can also look at [metaculus' predictions about general AI](https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/), and there is lively debate on LessWrong.\n\n\ncommon counter-arguments to concerns about AI doom, and responses to those, can be found on [the \"bad AI alignment take bingo\"](https://twitter.com/robbensinger/status/1503220020175769602).\n\n\n### what is AI alignment?\n\n\n\"AI alignment\" is the field of study of how to make AI pursue goals which, when pursued, lead to worlds we'd want, as opposed to worlds in which we're all dead.\n\n\nsome of the people working to develop ever more advanced AI — doing what we call \"AI capability research\" or simply \"AI capabilities\" — are aware of the arguments put forth by the alignment community. some of them disagree with those arguments. others are aware of them, but continue working for various reasons, typically to do with the difficulty for people to pursue what they actually want.\n\n\nthe AI alignment community has much of its public discourse and publications on [the *LessWrong* website](https://www.lesswrong.com/), a platform which originally hosted [*The Sequences*](https://www.readthesequences.com/) as an introduction to some ideas about rationality, around which evolved the community that is still active there now.\n\n\ni've heard estimates for the number of people working on AI alignment ranging from 70 to 300. this is very small, considering the importance and the difficulty of the task at hand.\n\n\nthe field of AI alignment is very confused, at the moment. we largely don't know what we're doing. we're pursuing varied fields of investigation, mostly without a big picture plan of how to solve the problem. we don't even have a consensus on what is [necessary or sufficient](confusion-about-alignment-requirements.html) to solve AI alignment. 
needless to say, [things are not looking good](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy).\n\n\nbut, even if we figured out how to make an advanced AI not dangerous, significant problems remain, as pointed out by this graph from [steve byrnes](https://www.lesswrong.com/users/steve2152):\n\n\n![](outlook-ai-risk-mitigation-byrnes.png)\n\n\nindeed, we could develop a method to make AI safe, but someone else could still build dangerous AI later and cause doom that way — this could be because they don't know about that method, because they don't care, because they can't be bothered, because they made a mistake while trying to implement it, because that method doesn't work for their particular flavor of AI, or any other reason. as the important [*AGI Ruin* post](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) puts it, we need to stop \"Facebook AI Research from destroying the world six months later\".\n\n\ngiven this, we need not just a method to make AI safe, but also either *a way to make sure everyone uses that method, correctly* or *a powerful, aligned AI that saves us forever*. you can read more about my view of AI alignment and how to prevent doom in [*my outlook on AI risk mitigation*](outlook-ai-risk-mitigation.html).\n\n\nsome people ask questions like, [aligned to whose values?](outer-alignment-politics-philosophy.html) shouldn't it be [aligned to everyone?](https://www.lesswrong.com/posts/Rn4wn3oqfinAsqBSf/intent-alignment-should-not-be-the-goal-for-agi-x-risk) and [how do we do that?](https://aligned.substack.com/p/alignment-solution) — my answer is twofold. on the theoretical side, [aligning AI to everyone is not what an alignment researcher or team should want to do](surprise-you-want.html). on the practical side, we're currently way too desperate for anything that works to be picky; to quote [AGI Ruin](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities):\n\n\n\n> At this point, I no longer care how it works, I don't care how you got there, I am cause-agnostic about whatever methodology you used, all I am looking at is prospective results, all I want is that we have justifiable cause to believe of a pivotally useful AGI 'this will not kill literally everyone'. Anybody telling you I'm asking for stricter 'alignment' than this has failed at reading comprehension. The big ask from AGI alignment, the basic challenge I am saying is too difficult, is to obtain by any strategy whatsoever a significant chance of there being any survivors.\n> \n> \n\n\n### how can i help?\n\n\ni had heard about these arguments before, but i only started *emotionally worrying* about AI doom [when github copilot and things like it came out](were-all-doomed.html), and subsequently i [refocused what i was doing with my life](life-refocus.html). if you agree that AI doom is or might be very concerning, then you might want to help.\n\n\nfirst, [take care of yourself](https://www.lesswrong.com/posts/pLLeGA7aGaJpgCkof/mental-health-and-the-alignment-problem-a-compilation-of). you're probly going to create more value, both for yourself and the world, if you [don't become too doomer](https://mindingourway.com/detach-the-grim-o-meter/).\n\n\nsecond, learn about alignment; both the technical field of study and its community. 
some useful resources include:\n\n\n* [this great talk](https://youtu.be/di8XHw1y71A?t=130) (and [its accompanying slides](https://docs.google.com/presentation/d/1YYb77WlU3ESlPCVCJvSFgqoZZ2THlMQK/edit)) or [this post summarizing it](https://www.lesswrong.com/posts/gcmQyyko8szuyJHyu/resources-that-i-think-new-alignment-researchers-should-know);\n* you can **[join my alignment discord](https://discord.gg/kXHxE4J6H2)**, as well as the [EleutherAI](https://www.eleuther.ai/) [discord](https://discord.gg/zBGx3azzUn) which is friendly to people starting out in alignment — see notably their *#alignment-beginners* channel;\n* the pretty good [Alignment Research Field Guide](https://www.lesswrong.com/posts/PqMT9zGrNsGJNfiFR/alignment-research-field-guide);\n* [Rob Miles' videos on alignment](https://www.youtube.com/c/RobertMilesAI/videos);\n* finally, i think [*The Sequences*](https://www.readthesequences.com/) remain a good foundation for rationality.\n\n\nthere are ways to [help without doing research](https://www.lesswrong.com/posts/ScYGedE9HKvMLfZjs/entering-at-the-11th-hour-babble-and-anaylsis), but i believe research is the bottleneck right now.\n\n\nit's not all doom and gloom; AI could actually give us a great utopian future! (see [1](everything-is-okay.html), [2](%E2%88%80V.html), [3](utopia-scopes.html), [4](https://www.fimfiction.net/story/62074/friendship-is-optimal), [5](https://web.archive.org/web/20040404031937/http://www.kuro5hin.org/prime-intellect/), [6](https://www.lesswrong.com/posts/SLw2MEgxFtiKAqgQ5/actually-possible-thoughts-on-utopia)) it just takes a whole lot of work to get there, and the alternative is pretty bad.", "date_published": "2022-11-01T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "69c22e85c6dbc1b300275fb225002af3", "title": "publishing alignment research and exfohazards", "url": "https://carado.moe/publishing-infohazards.html", "source": "carado.moe", "source_type": "blog", "text": "publishing alignment research and exfohazards\n---------------------------------------------\n\n\n(**edit**: [i mean exfohazard, not infohazard](https://www.lesswrong.com/posts/yET7wbjjJZtpz6NF3/don-t-use-infohazard-for-collectively-destructive-info))\n\n\n(**edit**: i've added something like this to my blog, see [locked posts](locked.html))\n\n\nto me, turning my thoughts into posts that i then publish on [my blog](https://carado.moe/) and sometimes [lesswrong](https://www.lesswrong.com/users/tamsin-leake) serves the following purposes:\n\n\n* in conversations, i can easily link to a post of mine rather than explaining myself again (the original primary purpose of this blog!)\n* having a more formally written-down version of my thoughts helps me think about them more clearly\n* future posts — whether written by me or others — can link to my posts, contributing to a web of related ideas\n* i can get feedback on my ideas, whether it be through comments on lesswrong or responses on discord\n\n\nhowever, i've come to increasingly want to write and publish posts which i've determined — either on my own or with the advice of trusted peers — to be potentially [infohazardous](https://www.lesswrong.com/tag/information-hazards), notably with regards to potentially helping AI capability progress.\n\n\non one hand, there is no post of mine i wouldn't trust, say, yudkowsky reading; on the other i can't just, like, DM him and everyone else i trust a link to an unlisted post every time i make one.\n\n\nit would be nice to have a platform — or maybe a lesswrong feature — which 
lets me choose which persons or groups can read a post, with maybe a little ⚠ sign next to its title.\n\n\nnote that such a platform/feature would need something more complex than just a binary \"trusted\" flag: just because i can make a post that the Important People can read, doesn't mean i should be trusted to read everything else that they can read; and there might be people whom i trust to read some posts of mine but not others.\n\n\nmaybe trusted recipients could be grouped by orgs — such as \"i trust MIRI\" or \"i trust The Standard List Of Trusted Persons\". maybe something like the ability to post on the [alignment forum](https://www.alignmentforum.org/) is a reasonable proxy for \"trustable person\"?\n\n\ni am aware that this seems hard to figure out, let alone implement. perhaps there is a much easier alternative i'm not thinking about; for the moment, i'll just stick to making unlisted posts and sending them to the very small intersection of *people i trust with infohazards* and *people for whom it's socially acceptable for me to DM links to new posts of mine*.", "date_published": "2022-10-31T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "34d2ba8bf92234c4bd90d2e6866f867a", "title": "love, not competition", "url": "https://carado.moe/love-not-competition.html", "source": "carado.moe", "source_type": "blog", "text": "love, not competition\n---------------------\n\n\nmany people are bemoaning that AI is going to replace them. this includes notably artists, but we can expect it to start covering [mathematicians](https://nitter.net/ScienceStanley/status/1584263426750132224) and, as AI advances, eventually every kind of human endeavor.\n\n\nthere are real, important material concerns, such as artists losing their income, or AI getting so powerful that it [destroys everything](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence). this post is not about that, but rather about the [longer-term](%E2%88%80V.html) concern of ethically grounding the value of art.\n\n\nis it okay that AI is outcompeting our creativity? yes! in my opinion, we should never have been grounding valuing ourselves in our ability to be the best at stuff to begin with. we should love ourselves and what we make and do [*intrinsically*, not instrumentally](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value).\n\n\nit is valid to want to just watch art for the pleasure that that gives you, and it's even okay to [wirehead](https://www.lesswrong.com/tag/wireheading) yourself. but it's also valid to value art [*as a form of communication between real persons*](purposes-for-art.html), as a special case of the fact that [it's valid to care about reality, even if you can't tell](https://www.lesswrong.com/posts/r7f58E8A85xLgWuqG/contact-with-reality).\n\n\nand the fact that we currently can't tell if art was made by persons or AIs is only a temporary issue; with properly aligned AI, we should be able to tell it \"i only want art made by humans!\" and have it ensure we only get that, *whatever that request would mean upon sufficient reflection*.\n\n\nartists, mathematicians, philosophers, and humans in general: aim not to compete! 
i, and no doubt many others, value you and the things you make for the fact that they are yours and you are real, in a way that fundamentally, intrinsically excludes purely AI-made art, and which includes art made with a mixture of human and AI work in whatever way i would eventually find reasonable if i thought about it enough.\n\n\nif you want to just love doing things and love things others have done, *you can just do that*.", "date_published": "2022-10-29T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "684299a761627ad23678841d9b9b10f6", "title": "counterfactual computations in world models", "url": "https://carado.moe/counterfactual-computation-in-world-models.html", "source": "carado.moe", "source_type": "blog", "text": "counterfactual computations in world models\n-------------------------------------------\n\n\nhow could we make an AI which, faced with a decision, [consults its past-user(s)](outer-alignment-past-user.html) about it, using something like the [question-answer counterfactual interval](qaci.html) device ? i believe that solving the following technical problem could significantly help us figure out how to do that.\n\n\nsuppose some relatively complex probabilistic `World`, for example one based on a [probabilistic cellular automaton](https://en.wikipedia.org/wiki/Stochastic_cellular_automaton).\n\n\nwe'll also posit two functions `f` and `g` and a small but non-empty set `S` such that\n\n\n* `∀x∈S, f(x)=g(x)`\n* `∃x∉S, f(x)≠g(x)`\n* `f` is both [more complex](https://en.wikipedia.org/wiki/Kolmogorov_complexity) and slower than `g`\n* `f` and `g` are both efficiently computable, though not trivial to compute\n\n\n`f` is meant to represent the true decision process of the user of the AI, while `g` is an approximation of it which happens to explain all the empirical data we got from `f` — that empirical data being `{ (x,f(x)) | x∈S }`. `g` is meant to serve as a trap for AIs that would merely try to fit the empirical data without investigating the world to find the real `f`. if there are many possible functions which also output the same value as `f` on members of `S`, such that you have to look at the `World` a bunch before finding the real `f`, then that's a good reason to think that an AI that systematically finds the real `f` in various scenarios is doing what we want.\n\n\nin `World`, we'll first arrange — either in advance or by interfering with `World` or with its source of randomness — for that world to encode an efficient implementation of `f` somewhere. this encoding will be called on all members of `S` in any order, and the record of these calls along with the results will be recorded somewhere.\n\n\nthen, still inside `World`, the encoded version of `f` will be scrambled — making recovering `f` from evidence non-trivial but doable with some work.\n\n\nfinally, the problem is the following: is there an `AI` which can learn `f` by predicting its result on values outside of `S`\n\n\n* given the ability to interact with `World` after all of these events — perhaps by being [embedded](https://intelligence.org/embedded-agency/) in it\n* given a record of the input-outputs pairs that `f` has gone through\n* *without* direct access to `f` or `g`\n\n\nthe difficulty here is for `AI` to learn to predict `f`, when we're giving it the tempting hack of predicting `g` instead, as a sort of honeypot. 
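(here's a minimal toy sketch of the `f`/`g` honeypot setup above, in python — the specific `f`, `g`, and `S` are made-up stand-ins, and the real version would embed and then scramble `f` inside the probabilistic `World` rather than exposing it as code:)

```python
# a minimal toy sketch of the f/g honeypot setup described above.
# these particular f, g, and S are made-up stand-ins: the real setup would
# hide f inside a probabilistic cellular-automaton World and then scramble it.

S = {3, 7, 12, 25, 31}                      # the small set of observed inputs

def f(x):
    # stand-in for the user's true (more complex, slower) decision process
    return (x ** 3 + 5 * x) % 17

def g(x):
    # simpler approximation: agrees with f on S, but not everywhere
    table = {x: f(x) for x in S}            # memorized empirical data
    return table.get(x, 0)                  # cheap default guess off S

# the empirical record the AI is given: { (x, f(x)) | x in S }
record = [(x, f(x)) for x in S]

# sanity checks matching the stated properties
assert all(f(x) == g(x) for x, _ in record)                  # forall x in S, f(x)=g(x)
assert any(f(x) != g(x) for x in range(100) if x not in S)   # exists x not in S, f(x)!=g(x)

def grade(candidate, holdout):
    """score a learned predictor on inputs outside S — fitting the record
    isn't enough; only recovering the real f scores perfectly."""
    return sum(candidate(x) == f(x) for x in holdout) / len(holdout)

holdout = [x for x in range(100) if x not in S]
print("g (record-fitting honeypot):", grade(g, holdout))
print("f (true process recovered):", grade(f, holdout))
```

(nothing here is load-bearing; it's only meant to make the failure mode concrete: a learner that merely fits the record lands on something like `g` and does badly off `S`, while one that actually recovers `f` generalizes.)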
i expect that, to do this, the `AI` would need to build a causal understanding of `World`, and then locate and reconstruct `f` from the evidence left around.\n\n\nwhat are the inputs and outputs even for, if it has to dig around `World` to recover `f`? they're used for identifying `f` to begin with: `f` is *the thing which has gotten those inputs and produced those outputs*. the `AI` shouldn't think \"i need to find a function which, given those inputs, gives those outputs\" — as do, as i understand, all current ML systems — but one which goes \"hm, apparently there was a physical thing somewhere which given those inputs, gave those outputs — what would it have, counterfactually, outputted given different inputs?\" and it needs to do that without the ability to directly run `f`, such that it can't simply \"be trained on `f`\".\n\n\nonce we have this, we can hopefully start building an AI which is given a bunch of input-outputs pairs that have gone through human users, and then give it a decision process that relies on predicting what those users would have said given different queries, in order to make decisions.", "date_published": "2022-10-27T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "b7929cb2e7724aeabd8f5d5d00d6f826", "title": "QACI: question-answer counterfactual intervals", "url": "https://carado.moe/question-answer-counterfactual-intervals.html", "source": "carado.moe", "source_type": "blog", "text": "QACI: question-answer counterfactual intervals\n----------------------------------------------\n\n\n***edit:** see also [**the QACI table of contents**](qaci.html).*\n\n\n[PreDCA](predca.html) (not a dependency/component, but an inspiration for this) attempts to build a framework in which the AI tries to determine its predecessor's utility function using a bunch of math, to figure out who the user is and what their utility function is. it seems hard to predict whether the math would accurately capture a subset of the user's mind whose utility function we'd like, so in this post i offer an alternative which i feel has a higher chance of being useful.\n\n\njust like PreDCA, the **question-answer counterfactual intervals** (QACI) proposal utilizes the ability for an AI to ponder counterfactuals of what can happen in the world, possibly using [infra-bayesian physicalism](https://www.lesswrong.com/posts/gHgs2e2J5azvGFatb/infra-bayesian-physicalism-a-formal-theory-of-naturalized). it proceeds as follows:\n\n\n1. have the AI's user stand in front of a computer\n2. the AI is hardcoded to first generate a large random text file, and send it to the user's computer — we call this file the \"question\"\n3. the user opens the text file, ponders what it says for a day, and then at the end of the day sends a text file with its answer back to the AI\n4. the AI, which was hardcoded to do literally nothing until it got that answer, starts running the rest of its code which would consist of an inner-aligned system following a particular goal\n\n\nwe can make the AI's goal dependent on \"what answer would i have gotten if i'd sent a different question?\" — we'll call \"queries\" such instances of counterfactually considering the question-answer interval. 
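(to make the \"query\" device a bit more concrete, here's a rough python sketch; `counterfactual_answer` is a hypothetical stand-in for the AI's counterfactual model of the question-answer interval — we don't know how to implement it — and chaining answers until a magic string shows up is just one way queries could be composed, which some of the ideas below use:)

```python
# a rough sketch of the "query" device, treating the AI's counterfactual
# reasoning as an opaque oracle. counterfactual_answer is hypothetical: it
# stands for "the answer the user would have sent back, had the AI sent this
# question instead" — the point is only to show how queries can compose into
# larger decision processes.

MAGIC_STRING = "okay AI, i'm done. the answer is:"   # example terminator

def counterfactual_answer(question: str) -> str:
    """hypothetical primitive: the predicted answer from the question-answer
    counterfactual interval, for this counterfactual question."""
    raise NotImplementedError("stands in for the AI's counterfactual model")

def chain_queries(initial_question: str, max_steps: int = 1000) -> str:
    """pass each answer back in as the next question, until some query
    declares itself done by prefixing its answer with the magic string."""
    text = initial_question
    for _ in range(max_steps):
        text = counterfactual_answer(text)
        if text.startswith(MAGIC_STRING):
            return text[len(MAGIC_STRING):].strip()   # the final payload
    raise RuntimeError("no query in the chain terminated")
```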
this doesn't immediately solve alignment, but just like [goal-program bricks](goal-program-bricks.html), it's a device that we can use to build more complex decision processes which would serve to guide the AI's actions.\n\n\nnote that the AI might run a relatively high-detail simulation of what the user would answer, or it could just make rough guesses; if it's properly designed, it should allocate its computational resources to guessing what the user would answer to whatever degree of detail it needs. nevertheless, its choice would be guided by a fundamental intent to satisfy its eventual goal, so it shouldn't manipulate the simulation to give answers that would make its job easier — it would ultimately strive for however much accuracy it thinks it can afford.\n\n\nand just like in PreDCA, because we make the AI point to a user who precedes its existence, it can't (barring weird acausal stuff) hack the user to affect what it'd predict the user to say; the user's values are locked in, which [is desirable anyways](surprise-you-want.html).\n\n\nhere are some ideas as to how to use such queries to hopefully guide the AI's actions towards good worlds:\n\n\n* we tell the AI to maximize the utility function that a sequence of queries would end at, where the first one is asked \"what's a utility function that represents human values?\" and each next query is asked to improve on the answer of the previous query, until one of them sends an answer starting with the [magic string](https://en.wikipedia.org/wiki/Magic_string) \"okay AI, i'm done. the answer is:\" followed by a piece of formal math which points to human values, such as a highly refined version of PreDCA.\n* we tell the AI to, each time it's making a decision, have such a sequence determine which outcome it would prefer given observations by the AI so far — in a sense, extrapolating the volition of the user if they had a lot more time to ponder each decision.\n* something like the above except it's the AI's own model which determines consequences, and sequences of queries are run on that model, to figure out what it entails and *then* which action is preferable\n* any of the suggestions above except sequences of queries are replaced with [DAGs](https://en.wikipedia.org/wiki/Directed_acyclic_graph) of queries, each able to say what kind of query graph they'd like to be run — such as \"i'd like fifty query sequences to ponder this question but with the following fifty different thought prompts, and then for a single query sequence to get all of their results and figure out the result\"\n\n\nthese ideas don't involve the AI interpreting natural language — the utility function could be written in trivially parseable python math code, decisions or requests for running multiple new copies could be asked for using magic strings followed by formal code, and so on. 
for example, in the case of a sequence of queries, the AI is told to predict what happens when the text file is just passed verbatim from one query to the next, until a particular magic string is detected verbatim at the start of a query.\n\n\nnotice that, because there is no inherent limit on the text file's size, it can start with [a `#!/bin/bash` shebang](https://en.wikipedia.org/wiki/Shebang_%28Unix%29) and be a script that builds a large piece of software that each query is able to develop and use to more efficiently transmit knowledge to the next query, for only very minimal overhead to each of those queries.\n\n\nfinally, this proposal should not be too difficult to expand upon:\n\n\n* start with longer question-response intervals\n* start with a large number of question-response intervals which are picked from at random, to select a wider range of \"views\" of the user\n* start with many question-response intervals sent to a bunch of different people who can work together on alignment\n* allow question-response intervals to communicate with one another, perhaps with as much as video chat, using a command that each user could theoretically send to the AI — but which either does nothing or isn't sent in the original, \"real\" question-response interval\n* give question-response intervals access to a supercalculator, which would have the AI run computationally costly programs and send the result — again, such a capability would not be usable in the original \"real\" instances of the user answering a question\n\n\nnote that with a proper implementation of [embedded agency](https://www.lesswrong.com/tag/embedded-agency), such an AI would care about its own internal computations just as much as what happens in the rest of the world; so that if this scheme indeed leads to an aligned AI, then that alignment would cover taking care of risks caused by running simulations of queries in such a high level of detail that its inhabitant(s) would be moral patients. 
in fact, thanks to embedded agency, perhaps the whole \"let sequences of queries decide how to make decisions\" could apply naturally to how queries are used to make decisions.", "date_published": "2022-10-23T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "462a8bd713f07dc146c73fb87be27101", "title": "some simulation hypotheses", "url": "https://carado.moe/simulation-hypotheses.html", "source": "carado.moe", "source_type": "blog", "text": "*(thanks to [Alexander](https://www.lesswrong.com/users/self-embedded-agent) for conversations that led to this post)*\n\n\nsome simulation hypotheses\n--------------------------\n\n\nwhat a strange time to live in, right on the verge of building an AI which will dictate the fate of the cosmos for all of the future!\n\n\nwhat a strange situation, that [we have a chance at all](weird-chance.html): instead of alignment or superintelligence being discovered many decades apart, we're arriving at them in a somewhat synchronous manner!\n\n\nwhat a strange perspective, for me to be one of maybe a few hundred people whose work is directly related to this cosmos-defining event!\n\n\none way to explain making those strange observations is if this kind of [anthropic](https://www.lesswrong.com/posts/RnrpkgSY8zW5ArqPf/sia-greater-than-ssa-part-1-learning-from-the-fact-that-you) reasoning [occurs very disproportionately under these circumstances](anthropic-reasoning-coordination.html).\n\n\nnevertheless, it is tempting to also consider something like the [simulation hypothesis](https://en.wikipedia.org/wiki/Simulation_hypothesis), which says that we are living inside an intentional simulation ran by some agent in a parent universe. i will list below a few such simulation hypotheses, that i can come up with or that i've come across.\n\n\n### it's a game\n\n\n**premise**: one hypothesis i've heard a bunch is that this time period and place is being simulated as part of a game for post-singularity people to experience living in the most important century in history, perhaps even by making themselves part of those events except without memories. so basically, most instances of these surroundings are a tourist attraction.\n\n\n**what this would say about the parent universe**: if this hypothesis is true, that's good evidence that the post-singularity future is at least somewhat aligned to us, because it contains agents that find our world interesting. the fact that this world seems to be running in its entirety, even including the suffering moral patients, is not a good sign however. either those agents have found a way to make this okay — perhaps through making these seemingly suffering moral patients not count, for example using something like [moral patient deduplication](predictablizing-ethic-deduplication.html) — or the future has somewhat strong [S-risk](https://en.wikipedia.org/wiki/Suffering_risks) potential.\n\n\n**what to expect observing if this is true?** we should expect our alignment chances to be neither overwhelmingly good nor bad, because those wouldn't be very interesting. maybe we should expect them to err on bad, though, as challenges can be enjoyable. 
the chance of various pivotal events, such as plagues or nuclear war, should be higher in this scenario because that seems interesting too; though if whoever's playing is embodied in a regular aging human body, our fate might be locked — or even our simulation terminated — not long after their avatar in this world dies.\n\n\n**what should we do if this is true?** keep saving the world in case our simulation keeps running after our singularity, even just a bit. if we don't think this simulation keeps running after our singularity, and we suspect we inhabit a potentially-S-risky parent universe, then we should maybe favor [effective altruism](https://en.wikipedia.org/wiki/Effective_altruism) endeavors which alleviate suffering in the shorter term.\n\n\n### superintelligence predicting superintelligences\n\n\n**premise**: in order to predict what kind of other superintelligences exist out there, a superintelligence is simulating civilizations close to the point at which they spawn superintelligence to see what they'd tend to make, or to find the decryption key or initial state of a [homomorphically encrypted](https://en.wikipedia.org/wiki/Homomorphic_encryption) superintelligence that it has encountered. this could also explain why [we seem to have a chance](weird-chance.html), rather than our odds being overwhelmingly one way or the other: the more uncertain a scenario is, the more detail the superintelligence might need to run it, and so we experience the most uncertain scenarios possible. note that there might be nested simulations, where one superintelligence simulates another coming into existence. finally, this possibility includes [\"deism\"](solomonoff-deism.html), where one intelligence is/has dominion over its entire layer of reality from the start.\n\n\n**what this would say about the parent universe**: this hypothesis being true does not say much; this kind of behavior seems [instrumentally convergent](https://en.wikipedia.org/wiki/Instrumental_convergence) to both aligned and unaligned superintelligence. i guess if we get to experience living as an instrumental side-effect that's kind of nice, but the S-risk concerns from the scenario above apply as well.\n\n\n**what to expect observing if this is true?** we should see our odds of alignment being close to the knife's edge, because those are the situations that require the most computation-heavy simulations to determine the outcome of. ultimately, as our simulation is being run for accuracy, we should expect to actually be the ones that determine what we build, and we should expect that outcome to matter — though probly not in any observable way.\n\n\n**what should we do if this is true?** i think creating aligned superintelligence still takes precedence; it *feels* like the more any given superintelligence expects that the universe is filled with superintelligences that carry our values, the more we increase the chances that our values apply to the universe at large. 
there may be weird reasons why this backfires, such as blackmail (acausal or not) between superintelligences; but in general, we'd expect superintelligences to have or [invent themselves](https://arbital.com/p/10qt/) a decision theory which would pre-commit to not succumbing to blackmail — though see also [the game theory of blackmail](https://www.lesswrong.com/posts/wm2rdS3sDY9M5kpWb/the-game-theory-of-blackmail).\n\n\n### indirect alignment solution\n\n\n**premise**: it is possible that we have designed a superintelligence that is not directly aligned, but contains a process which we hope gets it there, similar to the situation described in [the insulated goal-program](insulated-goal-program.html). [simulating this world](finding-earth-ud.html) may be part of this process, somehow.\n\n\n**what this would say about the parent universe**: this would actually be a pretty good sign for alignment! we'd have succeeded in booting this process, and now we just have to hope that it makes good use of its ability to simulate us, and that *we* (inside the simulation) do a good job to enable alignment to eventually happen.\n\n\n**what to expect observing if this is true?** a relatively realistic scenario, except maybe with some random anomalies such as someone's computer going \"hello, you're actually inside a simulation meant to help with alignment, here are some things you can do to help\" at some point.\n\n\n**what should we do if this is true?** for those of us not contacted by an anomaly, keep saving the world as best we can, possibly with an emphasis on buying time rather than solving alignment. for those contacted by an anomaly, do whatever it says.\n\n\n### acausal probabilistic self-justification\n\n\n**premise**: this weird idea, which i've seen kind of hinted at in some fiction and more explicitly mentioned by [Alexander](https://www.lesswrong.com/users/self-embedded-agent) in conversations with him, goes something like this: through weird acausal effects (such as those in [*can you control the past?*](https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past)) an AI might be able to increase the probability that we build it by affecting the distribution of what we do while building superintelligence, by running many modified simulations of us building superintelligence. in effect, an AI is making its coming-into-existence retroactively more likely by committing to simulate a bunch of other superintelligence-causing scenarios. this is a bit less crazy if the cosmos is [something like a graph of universes](above-paperclips-2.html), rather than a hierarchy.\n\n\n**what this would say about the parent universe**: this hypothesis being true doesn't particularly indicate that we succeed or fail at doing alignment, though if the reasoning above is flawed, then it being instantiated is a hint that we at least got to affect something about the decision theory of the superintelligence we built, by making it erroneously do this. if the reasoning works, then this behavior is likely [instrumentally convergent](https://en.wikipedia.org/wiki/Instrumental_convergence) and it's not clear that AI needs us to have programmed it with a decision theory that leads it to running those simulations.\n\n\n**what to expect observing if this is true?** our actions might be interfered with from the outside, albeit in a \"plausible\" — and thus, i'd imagine, unobservable? — way, that tends to lead to the AI that the parent universe's AI wants. 
because this is meant to relate to the original parent-universe instances of us building superintelligence, we should expect our situation to be relatively \"realistic\": for at least its initial conditions to reflect how things have actually come about in the parent universe.\n\n\n**what should we do if this is true?** if the weird acausal reasoning above is correct, then we should definitely work to solve alignment in order to help increase the probability of the aligned superintelligence, and reduce the probability of unaligned superintelligence. also, it may be that for this to work, the simulation needs to keep running at least a bunch after we build superintelligence, which is a good reason to solve alignment.", "date_published": "2022-10-11T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "d7cec3f2e21f52b0c002bb33e3d08438", "title": "outer alignment: two failure modes and past-user satisfaction", "url": "https://carado.moe/outer-alignment-past-user.html", "source": "carado.moe", "source_type": "blog", "text": "outer alignment: two failure modes and past-user satisfaction\n-------------------------------------------------------------\n\n\nwhen it comes to solving the outer alignment problem — by which i mean the general question \"what goal should we want an AI to pursue?\" — there are two main failure modes which i see approaches fall into.\n\n\non the one hand, we can't have the AI try to satisfy its user *continuously* over time by getting feedback from them, such as by doing Reinforcement Learning with Human Feedback (RLHF), because the AI will want to hijack either the system through which the user gives it feedback, or hijack/deceive the user itself, to make its goal simpler to satisfy.\n\n\non the other hand, we can't have the AI just deduce human values from a limited corpus of information — even if we could somehow reliably tell the AI how to extract values we'd want from that corpus, there is no reason to think that there's a process that can reliably extrapolate our complex, rich values from that limited information. as yudkowsky says in [*Value is Fragile*](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile):\n\n\n\n> To change away from human morals *in the direction of improvement rather than entropy*, requires a criterion of improvement; and that criterion would be physically represented in our brains, and our brains alone.\n> \n> \n\n\nthis is part of why outer alignment is difficult. it is also why i consider approaches such as [PreDCA](predca.html) to have some good potential for avoiding those failure modes: it makes the AI motivated to actually figure out the values of the physical user in the real world, but it has to be the values of the user *before they created the AI* — the person inside of the AI's future lightcone doesn't count, and so the AI can't hijack the user it's trying to satisfy.\n\n\nsome static training data could help it *as a prior*, or *to give it evidence about its past user*, or *to indicate to it who the past user even is* (\"please satisfy the past person who said this!\"), but not as the ultimate grounding for the AI's goals.\n\n\nthe AI might still ask questions to the user who is still around, but that would be only to get more information about what the user would have wanted *before creating the AI*. 
the AI might want to do something like a high-resolution brain-scan of the user, to get a lot of information about what the past-user probly valued.\n\n\nyes, this entails value lock-in; [this is desirable](surprise-you-want.html), because you want your current axiomatic meta-values to guide your non-axiomatic values in the future — it is actually possible for future non-axiomatic values to change over time in a way that is bad rather than good, according to our current axiomatic values.\n\n\nin short: if the AI is trying to satisfy either a static corpus it's trained on or a continuously living user, it's doing the wrong thing. if the AI is trying to investigate the world to figure out what its user would have wanted before creating the AI, then we can use that to steer it in the right direction.", "date_published": "2022-10-10T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "be486d772361929bdc5d9c40732d2ad1", "title": "confusion about alignment requirements", "url": "https://carado.moe/confusion-about-alignment-requirements.html", "source": "carado.moe", "source_type": "blog", "text": "confusion about alignment requirements\n--------------------------------------\n\n\nfor now, let's put aside the fact that we can't decide whether we're trying to achieve [sponge coordination or FAS](outlook-ai-risk-mitigation.html), and merely consider what it takes to build an aligned AI — regardless of whether it has the capability of saving the world as a singleton, or is merely meant to be a useful but safe tool.\n\n\nthe question this post is about is: what requirements do we want such a solution to satisfy?\n\n\nlet's say three groups have each built an AI which they think is aligned, and before they press the start button on it, they're trying to convince the other two that their design is safe and leads to good worlds. however, their designs are actually very different from one another.\n\n\nmaybe one is an advanced but still overall conventional text-predicting [simulator](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators), another is a clever agentic neural net with reinforcement learning and access to a database and calculator, and the third is a novel kind of AI whose core doesn't really relate to current machine learning technology.\n\n\nso, they start talking about why they think their AI is aligned. 
however, they run into an issue: they don't even agree on what it takes to be sure an AI is safe, let alone aligned!\n\n\n* maybe one of them has a proof that their AI is [resistant to a reasonable class of acausal attacks](https://www.lesswrong.com/posts/YbahERfcjTu7LZNQ6/summary-of-the-acausal-attack-issue-for-aixi), another has reasons to think their approach probly avoids the issue altogether somehow, and the third has a model of the world that fails to understand acausal attacks and rejects their possibility altogether.\n* maybe one of them has developed a world-modeling system that is general enough to support [embedded agency](https://www.lesswrong.com/posts/p7x32SEt43ZMC9r7r/embedded-agents), another has patched theirs to support it as a special case, and the third thinks their AI will simply modify itself that way because it's [instrumentally convergent](https://en.wikipedia.org/wiki/Instrumental_convergence).\n* maybe one of them has gone out and built a [decision system](https://intelligence.org/2018/10/31/embedded-decisions/) which implements [FDT](https://www.lesswrong.com/tag/functional-decision-theory), another counts on [CDT turning itself into FDT](https://arbital.com/p/10qt/), and the third has no idea how to determine how their system fits into decision theories.\n* maybe one of them has built something that is confidently [*eventually aligned* and hopefully enough *continuously aligned*](ai-alignment-curves.html), another has built something that is acting safely now and has a bunch of ad-hoc corrigibility devices which hopefully prevent it from taking a [sharp left turn](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization), and the third expects their AI to robustly keep being safe in the long run for reasons that seem hard to understand.\n* maybe one of them has built their AI to [respect the values of its creator](predca.html), another has made the AI care about the part of its model that they believe to be pointing to [an abstraction](https://www.lesswrong.com/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro) of human values, and the third has an AI that simply takes orders from what it interacts with and can hopefully be ordered to self-modify in a way that makes it resistant to alien superintelligences by the time it meets them.\n\n\nand those are optimistic cases! many proponents of alignment approaches would simply:\n\n\n* fail to consider that their design might fail in ways they haven't thought of\n* not think to ask the alignment community at large whether their design is safe\n* ask the community, but then only select criticisms they take into account based on their ability to understand those criticisms, rather than based on their importance\n* assume away the possible failure modes that would be brought up\n* accidentally kill everyone way before any of this happens\n\n\ni've noticed this pattern of confusion in myself after trying to explain alignment ideas i've found promising to some people, and their criticism — \"wait, where's the part that makes this lead to good worlds? 
why do you think it would work?\" — seems to be of a similar nature to my criticism of people who think \"alignment is easy, just do X\": the proposal is failing to answer some fundamental concerns that the person proposing has a hard time even conceiving of.\n\n\nand so, i've come to wonder: given that those people seem to be missing requirements for an alignment proposal, requirements which seem fundamental to me but unknown unknown to them, what requirements are unknown unknown to me? what could i be missing? how do i know which actual requirements i'm failing to satisfy because i haven't even considered them? how do we collectively know which actual requirements we're all collectively missing? what set of requirements is necessary for an alignment proposal to satisfy, and what set is sufficient?\n\n\nit feels like there ought to be a general principle that covers all of this. the same way that the [logical induction paper](https://intelligence.org/2016/09/12/new-paper-logical-induction/) demonstrates that the computability desideratum and the \"no dutchbook\" desideratum, together suffice to satisfy ten other desiderata about logical inductors; it seems like a simple set of desiderata ought to capture the [true name](https://www.alignmentforum.org/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation) of what it means for an AI to lead to good worlds. but this isn't guaranteed, and i don't know that we'll find such a thing in time, or that we'll have any idea how to build something that satisfies those requirements.", "date_published": "2022-10-05T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "171b516c889b897ad338bc83b7280808", "title": "my current outlook on AI risk mitigation", "url": "https://carado.moe/outlook-ai-risk-mitigation.html", "source": "carado.moe", "source_type": "blog", "text": "*(thanks to [Linda Linsefors](https://www.lesswrong.com/users/linda-linsefors), [Artaxerxes](https://www.lesswrong.com/users/artaxerxes), and [jono](https://www.lesswrong.com/users/lw-user0246) for their feedback on drafts of this post.)*\n\n\nmy current outlook on AI risk mitigation\n----------------------------------------\n\n\nas part of [the *refine* programme](https://www.lesswrong.com/posts/D7epkkJb3CqDTYgX9/refine-an-incubator-for-conceptual-alignment-research-bets), i'm trying to figure out what's actually important to work on, with regards to AI risk. 
in the course of that, and motivated by some other factors such as increasingly wanting to write on my skepticism regarding ML-related approaches, i'm making this post which explains my current outlook on what the problem is, what the solutionspace we have so far looks like, and what i think should be focused on.\n\n\nin this post, i won't argue for the significance of AI risks; i'll merely explain my current view of those risks, and my arguments will be about how i currently think those risks should be addressed.\n\n\nnote that i'll be describing my own outlook, which is not the consensus even within the alignment community — arguably, there is in fact no such consensus regarding many of the points i discuss here.\n\n\n### what is the problem?\n\n\ni believe, akin to [the yudkowsky-moore law of mad science](https://twitter.com/patrickc/status/650726376073367552), that the amount of resources it takes for the world to be destroyed — whether on purpose or by accident — keeps decreasing.\n\n\nmy most likely scenario for how this could happen is as follows: [pretty soon](why-timelines-short.html) (probly this decade or the next), an artificial intelligence is created that is capable of undergoing [recursive self-improvement (RSI)](https://www.lesswrong.com/tag/recursive-self-improvement) until it becomes a [singleton](https://en.wikipedia.org/wiki/Singleton_%28global_governance%29), and at that point the fate of [at least](brittle-physics.html) the entire future lightcone will be determined by the goals of that AI.\n\n\n[the values we want are a very narrow target](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile) and [we currently have no solid idea](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) how to do [alignment](https://en.wikipedia.org/wiki/Ai_alignment), so when AI *does* take over everything [we're probly going to die](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy). or [worse](https://en.wikipedia.org/wiki/Suffering_risks), if for example [we botch alignment](botched-alignment-and-awareness.html).\n\n\n(there are some other scenarios where a [dumber](https://www.lesswrong.com/posts/BPJLzkEpx8Btz9ywq/the-dumbest-possible-gets-there-first) AI helps cause the destruction of the world first — for example, someone decides to just let an AI try to print whatever molecules it wants in the hope of getting something interesting, and the AI makes [grey goo](https://en.wikipedia.org/wiki/Grey_goo) or a superplague. or we get something like [the flash war](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic), akin to [the flash crash](https://en.wikipedia.org/wiki/2010_flash_crash). but i [have reasons](why-timelines-short.html) to think the RSI scenario is the most likely (see also [*intelligence explosion microeconomics*](https://intelligence.org/files/IEM.pdf)). 
i also believe that [multipolar scenarios](https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic) are pretty unlikely: the increased capabilities along the way just create a lot more chances for RSI to happen.)\n\n\ni [call](say-ai-risk-mitigation-not-alignment.html) the task of addressing this "AI risk mitigation"; calling it "solving alignment" rules out non-alignment solutions, and "AI risk" is broad enough to encompass not just [existential risk](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence) but also [suffering risks](https://en.wikipedia.org/wiki/S-risk), which are in my opinion an even greater cause for concern.\n\n\nthere may be other ways for us to be doomed, but AI seems like the largest risk factor; on top of that, if we do eventually figure out full alignment (a term i'll explain later), then we'll have a fully aligned AI which will give us utopia and take care of all other risks for us. so while full alignment might not be the only solution to this problem, if we do get it then we'll have solved pretty much **all other problems**.\n\n\n### AI risk mitigation solutions\n\n\nthere are roughly three ways people have thought of doing AI risk mitigation:\n\n\n#### fully aligned singleton (FAS)\n\n\none approach is to create an AI which is powerful enough to become a [singleton](https://en.wikipedia.org/wiki/Singleton_%28global_governance%29) (an AI with enough power to make sure its goals are pursued, without anyone being able to do anything about it) but which is also fully aligned (ideally, [continuously aligned](ai-alignment-curves.html)), so that it takes care of all other problems in one fell swoop. the difficulty of this approach is that we have to build a FAS faster than others are building existentially-or-worse risky AI, even though the task of building a FAS is expected to be harder.\n\n\nby "fully aligned" i mean "so robustly aligned that with sufficient power such an AI would reliably save the world and create utopia", as opposed to "mildly aligned" systems which might experience a ["sharp left turn"](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization).\n\n\ndespite its technical difficulty, this is the approach i believe in the most, because the other two seem overall harder, as i'll argue below.\n\n\n#### sponge coordination\n\n\nin [*agi ruin*](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities), yudkowsky says:\n\n\n\n> **We can't just build a very weak system**, which is less dangerous because it is so weak, and declare victory; because later there will be more actors that have the capability to build a stronger system and one of them will do so. I've also in the past called this the 'safe-but-useless' tradeoff, or 'safe-vs-useful'. People keep on going "why don't we only use AIs to do X, that seems safe" and the answer is almost always either "doing X in fact takes very powerful cognition that is not passively safe" or, even more commonly, "because restricting yourself to doing X will not prevent Facebook AI Research from destroying the world six months later". If all you need is an object that doesn't do dangerous things, you could try a sponge; a sponge is very passively safe. 
Building a sponge, however, does not prevent Facebook AI Research from destroying the world six months later when they catch up to the leading actor.\n> \n> \n\n\nwhat i call "sponge coordination" is getting everyone who's working on AI to only build systems that are weak and safe just like a sponge, instead of building powerful AIs that take over or destroy everything. typically this is accomplished either voluntarily or through sufficiently enforced regulation.\n\n\ngetting everyone to stop working on AI could count as a particular case of this; but in general, the reason we'd want to tell AI companies "please use this to make your AI safe" rather than "please stop making AI" is that the former might still allow them to make some profit.\n\n\nit might be the case that some AI capability organizations take AI risk seriously enough to be willing to forego *some* perceived expected gains by spending effort making their AI systems weaker or safer, but don't take AI risk seriously enough to be willing to forego *all* perceived expected gains by stopping all AI development, even though that's what they should do. so, even though making them stop all AI development would be best, i see why some want to offer those organizations a middle-ground between safety and profit.\n\n\nthe sponge coordination approach is notably the one outlined in this diagram from [steve byrnes' intro to brain-like-AGI safety](https://www.lesswrong.com/posts/4basF9w9jaPZpoC8R/intro-to-brain-like-agi-safety-1-what-s-the-problem-and-why):\n\n\n![](outlook-ai-risk-mitigation-byrnes.png)\n\n\nwhere the red box is developing a way to make AI "safe" and the blue boxes are the coordination part of the solution.\n\n\nultimately, such an approach is likely to only be a temporary one until FAS can be made; after all, we *do* want a superintelligent benevolent system to help us overcome other challenges we will face, such as the possibility of encountering unaligned alien superintelligences.\n\n\ni think convincing and coordinating many actors — possibly even [small teams](why-timelines-short.html), of which there are and will be many — sounds extremely hard. and it's not like sponge coordination *stops* [the yudkowsky-moore law of mad science](https://twitter.com/patrickc/status/650726376073367552), so we'd have to coordinate increasingly many actors until FAS is created.\n\n\n#### pivotal acts\n\n\na "pivotal act" refers in general to a way we can significantly change the expected outcome of the world for the better. pivotal acts might include developing FAS or sponge coordination, but here i use the term to talk more specifically about solutions that avoid having to either build a FAS or achieve sponge coordination. just like sponge coordination, i expect those to be temporary solutions.\n\n\nthese have, not unreasonably, been considered toxic because it's very hard to determine how to affect the timeline in a way that actually improves our chances of avoiding doom. for example: right now, we're at least *able* to work on alignment, and the largest AI capability organizations are at least *somewhat interacting* with the alignment community; it's not clear how that might evolve in the future if the alignment community is perceived to be trying to harm the productivity of AI capability organizations. 
[see here](https://www.lesswrong.com/posts/Jo89KvfAs9z7owoZp/pivotal-act-intentions-negative-consequences-and-fallacious) for more thorough criticism.\n\n\nnevertheless, pivotal acts remain a potential approach in solutionspace, and one that is being talked about, so i'm mentioning it here.\n\n\n#### what to pursue?\n\n\nbecause of the difficulties of sponge coordination and pivotal acts, i think the most helpful thing to work on at the moment, at least for myself, is AI *alignment*, typically in a form strong enough to be usable to build a FAS rather than "merely safe" sponges. this may not be the optimal thing for everyone to work on, but i believe AI *safety* is the optimal thing for *most* people who are concerned with AI risk to work on.\n\n\nnote that here i'm using the term "AI safety" to mean "making AIs that don't cause doom", whether they are safe in the way [DALL-E](https://en.wikipedia.org/wiki/DALL-E) or a sponge is safe, or safe because we'd trust them to satisfy our values even as a singleton. they don't cause doom (and they might happen to otherwise be useful).\n\n\nit's useful to talk about AI safety in general, including FAS-building, because it seems like a lot of research that is useful to FAS would be relevant to AI safety in general. but it's not clear that we have to aim first for non-singleton aligned AIs, and then FAS; it could be that the best way to maximize our expected utility is to aim straight for FAS.\n\n\n(note that i use "AI ethics" to mean the not-particularly-related field of worrying about whether an AI — typically assumed to be safe — is at risk of causing much-smaller-than-existential harm, such as AIs with racist biases. i'm not interested in AI ethics; my priority is to avoid [doom](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence) and [hell](https://en.wikipedia.org/wiki/S-risk), ideally by going straight for a utopia.)\n\n\nfurthermore, i'm thoroughly unqualified to talk about coordination and politics.\n\n\nfor those reasons, the rest of this post is largely about technical approaches to AI safety, including alignment.\n\n\n### AI safety approaches\n\n\ni like to think of the work of AI safety/alignment as a rectangle (or really an n-dimensional cylinder) of work to be done, to bridge the gap between what we know how to do and what a full solution would consist of. proposing that a particular idea be used has the effect of constraining solutionspace and guiding what next to work on. in particular, adding a particular idea splits the work area it's inserted in into two new areas, one below for how we are to implement that idea and one above for how to use that idea to get to the solution.\n\n\n![](outlook-ai-risk-mitigation-ribbons.svg)\n\n\nas new compatible ideas are added, a plan comes into shape where hopefully the work to be done in-between ideas becomes clearer. depending on where they fit in vertically, ideas have different degrees of tractability (we know how to do it) and relevance (it helps get to a solution), and there is generally a tradeoff between the two — that is what it means for the problem to be difficult.\n\n\nin light of this now-formalized view of the work at hand, a lot of my criticisms of approaches to AI alignment i've seen are that they're either:\n\n\n* **handwavey**: they have pretty bad relevance. 
it's unclear how they are expected to robustly solve alignment in the face of RSI and other [sharp left turns](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization). and if they're expected to continue being highly inefficient non-RSI neural nets, then they're not expected to achieve singletonhood before some other RSI AI. sometimes it's unclear how, even without any competition, they would lead to good worlds at all.\n* **uncompetitive**: they have an altogether uncompetitive combination of relevance, tractability, and how good the worlds they would lead to are — they're not at the [pareto frontier](https://en.wikipedia.org/wiki/Pareto_front), and would take too much work compared to other proposals.\n\n\n#### prioritizing relevance over tractability\n\n\nin general, the reason i gravitate towards solutions that try to have strong formal guarantees is that i'm thinking of a FAS, which needs its alignment to be robust to significant changes in its paradigm; it probly needs to be an AI which can reason about its goals, and applies [goal-content integrity](https://en.wikipedia.org/wiki/Instrumental_convergence#Goal-content_integrity) and other desired [instrumentally convergent goals](https://en.wikipedia.org/wiki/Instrumental_convergence) to them. anything weaker risks failing to ensure its next RSI step has the same goals as itself (after all, *we're* kind of the first step in that RSI, and we sure are having a very hard time figuring out how to make AI aligned to us!), and anything non-RSI probly gets beaten by another RSI AI that comes along.\n\n\nanother reason to focus on relevance before tractability is that if we work first on something tractable but not very relevant, we're increasing the chances that someone uses that alignment plan for an AI which will turn out to become a singleton. this increases both\n\n\n* X-risks caused by a false sense of security — the person goes "looks like something i can implement!" without realizing ways in which it might fail\n* [S-risks caused by botched alignment](botched-alignment-and-awareness.html); this could happen if the alignment plan only partially targets what we want, such as a goal that specifies humans should be kept alive but not under what conditions, or a plan that averages the values of humanity in a way that satisfies the values of people who want there to be a hell, a meat industry like ours, or other horrible things of that kind.\n\n\nfinally, tractable solutions — by virtue of being easier to implement — risk boosting AI capability, and when you're causing damage you *really* want to know that it's helping alignment enough to be an overall expected net good. you don't want to discover that the work you've been doing, which is supposed to "help capability but at least it's helping alignment too!" is actually not relevant to the nearest alignment solution and *just* helped capability. 
i'm increasingly having this concern about interpretability and some other kinds of ML-centric approaches to alignment.\n\n\n#### the importance of formalism\n\n\nfor an AI to robustly *actually take decisions* that *actually steer the world* in the [set of target configurations](https://www.lesswrong.com/posts/znfkdCoHMANwqc2WE/the-ground-of-optimization-1) we want, we need some way to know that it *actually cares* about *actually our values*.\n\n\nsomething which can be determined to do [*argmax{a∈A} U(a)*](https://intelligence.org/2018/10/31/embedded-decisions/) can be somewhat expected — at least [eventually](ai-alignment-curves.html) — to maximize U. this takes a bunch of assumptions, getting [from eventual alignment to continuous alignment](ai-alignment-curves.html) seems potentially hard, and designing a formal objective function U that actually leads to good worlds when maximized is potentially hard — though one can make attempts at it ([1](insulated-goal-program.html), [2](predca.html)) — but at least there's *some* notion of trying to steer what the AI maximizes towards something good.\n\n\nbut in many of the other approaches i see talked about, we don't even get there; we get AIs that are merely **heuristics**, from which we expect to get some useful results in the short term. but [as soon as their capabilities generalize enough](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization) we have a hard time predicting that they'll continue to do what we want, and we should guess that they won't because [there is usually no thing which keeps pointing to, or storing, the values we want](alignment-bits.html), and there's no reason to expect such an AI to generalize its goals in any way that is meaningful to us.\n\n\ni suspect that making an AI *useful pre-singletonhood*, and making an AI *[eventually aligned](ai-alignment-curves.html) in the long term* (never mind *continuously aligned*), are pretty different tasks from one another — what an AI can do when it has started taking over the world and gotten to truly superintelligent levels of capability is radically different from what it will do in limited environments where it hasn't realized things like [embedded agency](https://www.lesswrong.com/tag/embedded-agency), [instrumental convergence](https://en.wikipedia.org/wiki/Instrumental_convergence), or [better decision theories](https://arbital.com/p/10qt/), and is running on — comparatively to what it could get — not just very small but also *constant* amounts of compute with no RSI.\n\n\nthis is why we need strong guarantees, or at least some idea as to why an AI will continue pursuing goals that lead to desirable outcomes when it gets superintelligent. 
we need them to start with, or get to, a goal which [represents](insulated-goal-program.html) or at least [points to](predca.html) desirable worlds, *before* the AI has crystallized [goal-content integrity](https://en.wikipedia.org/wiki/Instrumental_convergence#Goal-content_integrity) within itself.\n\n\nif the AI is in some sense "corrigible", we *should* expect it to just hack its correction mechanism unless we get a good explanation as to why it wouldn't.\n\n\nif the AI doesn't reach superintelligent capabilities through RSI, we *should* expect it to get outcompeted by something else which will.\n\n\nnote that i'm partly talking in ignorance; it's possible that the alignment approaches i've heard of are thinking about, or even have, solutions to the problems i'm bringing up here. but my impression at the moment is that most approaches are starting out with a wonky proposal and then throwing [ordinary paranoia at it, instead of starting out with a security mindset](https://www.lesswrong.com/posts/8gqrbnW758qjHFTrH/security-mindset-and-ordinary-paranoia). on the other hand, more formal guarantees about eventual alignment can get closer to being generally robust; see for example [vanessa kosoy's response to steve byrnes's take on her desiderata-first approach](https://www.lesswrong.com/posts/SzrmsbkqydpZyPuEh/my-take-on-vanessa-kosoy-s-take-on-agi-safety#6_1_Algorithms_first_vs_Desiderata_first__Redux_):\n\n\n\n> Let me give an analogy. In cryptography, we have theorems saying that if such-and-such mathematical assumption holds (e.g. X is a one-way function) then a particular protocol is sound. We don't need to list all possible ways an attacker might try to break the protocol: we get safety from *any* possible attack! (within the assumptions of the model) Is this "too easy"? I don't think so: it requires a lot of hard work to get there, and we're still left with assumptions we don't know how to prove (but we do have high confidence in). Similarly, we're going to need a lot of hard work to get safe protocols for AGI.\n> \n> \n\n\nthis is the sort of general robustness i think we'll need in order to trust an AI with singletonhood. and without singletonhood, [because there are no pivotal weak acts, facebook AI still destroys the world six months later](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities).\n\n\nunsurprisingly, the surest way to have our values realized is to give capability to an AI which is aligned to begin with, and then make sure it sufficiently understands what realizing those values entails, in order to not fuck up too much on the way there. building something very capable and then trying to patch alignment onto it is the wrong way to go about things; you don't know that you'll be able to patch something onto it, and it might destroy everything before you even get to start trying.\n\n\nfinally, starting from the values might save us huge amounts of time. if aligning current ML models is impossible or would take 50 years, and if aligning something different could take as little as 5 years, then we need to align something else. nothing guarantees that, just because ML is how we got to highly capable unaligned systems, it's also the shortest route to highly capable aligned systems; a much more reasonable approach is to first figure out what "aligned" means, and then figure out how to build something which from the ground up is designed to have that property — possibly, but not necessarily, using ML to accomplish it. 
it might be that ML is very applicable to building a FAS, in which case, great! but it might be that the way to go about it is not clear at all without knowing what alignment desiderata would look like, or it could be that ML is in fact not the best way to build a FAS at all.\n\n\n### conclusion\n\n\nthe problem at hand is profound, the stakes astronomical, and a lot of the work done to address it is thoroughly unable to justify why it's going about it in the way it is.\n\n\nin my opinion we should figure out what alignment means, what desiderata would formalize it, and *then* build something that has those.\n\n\nin retrospect, this approach is straightforward: figure out the solution, then build it. instead, a lot of approaches are committing to building the solution out of the same materials (modern ML technology) that the problem is made of, and then trying to figure out how to arrange those materials in a vaguely solution-shaped way. the problem doesn't care; it won't give us more respect for building a solution that looks like itself. the utmost priority is determining what would make something aligned to a cosmically robust extent. anything weaker than that, and everything dies, everywhere, forever.\n\n\n[**or worse**](https://en.wikipedia.org/wiki/S-risk).", "date_published": "2022-10-02T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "996ae52f9181e5bb1de2bb245371bc1f", "title": "existential self-determination", "url": "https://carado.moe/existential-selfdet.html", "source": "carado.moe", "source_type": "blog", "text": "existential self-determination\n------------------------------\n\n\nexistential self-determination is a problem i have pondered for a while ([1](core-vals-exist-selfdet.html), [2](genuineness-existselfdet-satisfaction-pick2.html)). in this post, i talk about how i've come to think about it since my shift towards rejecting continuous identity, and [tentatively embracing trusting my meta-values](surprise-you-want.html).\n\n\nhere's the current view: among the set of [viable](unviable-moral-patient.html) moral-patient-instants (hereby "MPI"), [current me](surprise-you-want.html) has some values about which ones to instantiate. notably:\n\n\n* i want something like a "future me" to come into existence\n* i might have other MPIs that i personally want to come into existence\n* i want other MPIs to, just as much as me, have their wishes about which future MPIs (including future versions of themselves) come into existence be satisfied\n\n\nwhen the AI weighs those against various constraints like computational cost or conflict resolution, according to whatever set of [meta-values](surprise-you-want.html) it's aligned to, it can figure out what next set of MPIs to spawn. note that it's not clear *how* it is to be determined what an MPI's values are with regards to that; this is where difficulty remains.\n\n\n(one of those constraints is that we should probly only create future MPIs which would retroactively consent to exist. i'm hopeful this is the case for my own future selves: i would want to create a future self/future selves that are reasonably aligned with my current self, and my current values include that i'm pretty happy about existing — or so [i believe](what-is-value.html), at least. 
evaluating that would ultimately be up to the AI, of course.)\n\n\nnote that this framework doesn't embed a fundamental notion of continuous identity: the AI just looks at the values it's aligned to — hopefully those entail satisfying the values of currently existing MPIs — and satisfies those values in whatever way they want, including what new MPIs should exist. any notion of "continuous identity" is merely built inside those MPIs.\n\n\na typical notion of "continuous person" would be a particular case of sequences of MPIs generally valuing the existence of future instances in the sequence; but that's just one set of values among others, and [other perspectives on individualism](https://en.wikipedia.org/wiki/Open_individualism) could be satisfied as well in the same future.\n\n\nin fact, this framework about which set of future MPIs we'd want to instantiate, describing not just which minds are instantiated but what environment they get to experience — including interactions with other MPIs — seems like it might be a sufficient foundation for AI-satisfied values in general. that is to say: it might be the case that any kind of meaningful values would be reasonably encodable as answers to the question "what next set of MPIs should be instantiated?". or, put another way, that might be the type that a utility function would take.\n\n\nsuch a foundation *does* rule out caring about non-moral-patient material things: you can't want *The Moon* to be painted green; at most, you can want everyone to perceive *A Moon* as green. but, by way of embracing computational materialism, i kind of already hold this position — the ultimate point of importance is MPIs, and caring "radiates outwards" from those.", "date_published": "2022-09-26T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "d4440154edd46ba53eb02001179f9b1f", "title": "surprise! you want what you want", "url": "https://carado.moe/surprise-you-want.html", "source": "carado.moe", "source_type": "blog", "text": "surprise! you want what you want\n--------------------------------\n\n\nlet's say you're building an aligned superintelligence, and you are about to determine what values it should be aligned to. should it be the values of you? everyone? should they be locked in right now, or should they be able to evolve across time?\n\n\nmy answer is simple: you should want to align it to the values of *you, right now*.\n\n\nyou might say "but i don't just want to have what *i* want, i want *everyone* to have what they want" — well, if that's the case, then that's what *you want*, and so implementing just what *you want* includes the meta-value of other people getting what they want, too. surprise! what you want is what you want.\n\n\nyou might say "but i don't trust how value conflicts would be resolved; i'd want there to be a resolution system that i'd find reasonable" — well, if that's what you want, then that's another meta-value which would be part of the values the superintelligence is aligned to.\n\n\nyou might say "but i don't want *the values i have now*, i want the values *i'd eventually have after a long reflection*, or the values *of me in the future as well*" — but, if that's what you want, then that's yet another meta-value which covers the concerns you have, and which the superintelligence would take into account.\n\n\nso: surprise! 
what you want might be *for others to get what they want*, or *to better figure out what you want*, or maybe even *to have some of your values change over time*; but implementing *what you value right now* is sufficient to entail all those other cases. *what you want is: what you want*.", "date_published": "2022-09-26T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "3be9d1f4e992f682d8aa48e8815b9f91", "title": "ordering capability thresholds", "url": "https://carado.moe/ordering-capability-thresholds.html", "source": "carado.moe", "source_type": "blog", "text": "*(this post has been written for the third [Refine](https://www.alignmentforum.org/posts/5uiQkyKdejX3aEHLM/how-to-diversify-conceptual-alignment-the-model-behind) blog post day)*\n\n\nordering capability thresholds\n------------------------------\n\n\ngiven an AI which is [improving towards](https://www.lesswrong.com/tag/ai-takeoff) ever more capabilities, such as by way of recursive self-improvement, in what order will it pass the following points?\n\n\nthroughout this post i'll be using [PreDCA](predca.html) as an example of a formal goal to be maximized, because it appears to me as a potentially promising direction; but you can imagine adapting this post to other formal goals such as [insulated goal-programs](insulated-goal-program.html), or other alignment strategies altogether. we can even use this time-ordering framework to compare the various thresholds of multiple alignment strategies, though i won't do that here.\n\n\n* **Start**: we start the AI\n* **Math**: it can figure out relatively complicated math, such as whether [P equals PSPACE](https://en.wikipedia.org/wiki/PSPACE), or whether this world looks like it has [finite compute](hope-infinite-compute.html) if we can make it do physics.\n* **PreDCA**: it can figure out what is entailed in maximizing PreDCA — notably that that goal is best served by not destroying the earth too much\n* **sub-PreDCA**: it can figure out some individual parts of PreDCA, such as the identity of the user or what is entailed in maximizing a human's utility function, in a way that we can use to modify those parts if they need adjusting\n* **Escape**: it becomes able to escape the environment over which we have control — and typically starts replicating across the internet\n* **Influence**: it gets the ability to significantly influence the timeline, for example enough to save us from [facebook destroying everything six months later](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities)\n* **DSA**: it achieves decisive strategic advantage\n* **Doom**: it becomes capable of destroying the earth too much (without necessarily using that capability)\n* **Cone**: it takes over a significant portion of the universe, or at least of the lightcone\n\n\nwith a few notes:\n\n\n* "decisive strategic advantage" is a term i'm taking [from bostrom's *superintelligence* book](https://books.google.com/books?id=C-_8AwAAQBAJ&pg=PA78&lpg=PA78), describing the point at which an AI has sufficiently ensured its continuation that we can't turn it off or change its goals anymore; it is effectively the point of no return.\n* by "destroying the earth too much" i mean destroying so much of earth that it can't reasonably be [resimulated](finding-earth-ud.html). 
if resimulating earth is too [unethical](predictablizing-ethic-deduplication.html), computationally costly, or [anthropically](udassa-time-steps.html) costly, then "destroying too much of earth" might straightforwardly mean destroying all of humankind or something like that. note that for PreDCA, preserving earth in some way is important not just because it's pretty bad that we all die, but also because the AI might need to preserve its user and possibly their environment in order to figure out their utility function.\n* in the case of knowing mathematical statements (**Math**, **PreDCA**, and **sub-PreDCA**), i imagine the AI being [pretty sure](https://www.lesswrong.com/tag/logical-induction) about them, not necessarily having *proven* them. in addition, for simplicity, i'm assuming that we can use the AI to figure out some mathematical fact if and only if the AI can figure it out for itself — in practice, this need not be the case.\n\n\none thing that can be noticed is that humans might serve as evidence. for example, we can examine history to figure out whether *we* passed **Math** or would've been able to pass **PreDCA** (given a reasonable description of it) before getting to **Doom** — my guess is yes, at least for that latter one.\n\n\nnow, we can reasonably guess the following pieces of ordering, where as usual in ordering graphs **X → Y** means **X < Y** and transitive edges are not shown.\n\n\n![](ordering-capability-thresholds.svg)\n\n\nin addition, for any two quantities **X < Y**, it can be the case that they're pretty close in time **X ≈ Y**, or it can be that there's a bunch of time between them **X ≪ Y**. whether the threshold between those two possibilities is more like a day or a year is gonna depend on context.\n\n\ndepending on how the rest of the ordering graph turns out and how close pairs of subsequent events are in time, we can be in a variety of situations:\n\n\n* if **PreDCA ≪ Influence** we may get to see how PreDCA will work out, and adjust it a lot if needed. if **Influence < PreDCA ≪ DSA**, then the timeline might have started diverging a bunch by then, but we can still adjust the AI. if instead **DSA < PreDCA** then we have to hope that the complete PreDCA indeed produces good worlds.\n* in a similar way, if **sub-PreDCA ≪ Influence** or at least **Influence < sub-PreDCA ≪ DSA**, then we get to test some individual parts of PreDCA on their own — otherwise, it better be correct.\n* if **Doom < PreDCA**, or worse if **Doom < sub-PreDCA**, then even if the goal we programmed the AI with does actually aim at good worlds, our survival is not guaranteed; and we might only get a much weaker form of [eventual alignment](ai-alignment-curves.html) where the AI later says "oops i destroyed everything" and then tries to vaguely realize a utility function it has only limited information about.\n* if **Math ≪ Escape** or at least **Math ≪ DSA**, then we might get to ask questions that help us figure out the alignment landscape better, such as whether earth is resimulable in reasonable time by a non-quantum program, or whether there is [infinite compute](hope-infinite-compute.html).\n* i expect that **Escape ≈ Doom**; that is, i expect that once it escapes its initial environment, the cat's out of the bag and we quickly lose control of the timeline, and then get killed if the AI is not aligned [already](ai-alignment-curves.html). 
but the world might put up a fight (**Influence ≪ DSA**), or we might get some time to enjoy the show (**DSA ≪ Doom**).\n* if **Influence ≪ Escape** then we get to have it steer the timeline in hopefully good directions while it's still in our control, though it's not necessarily going to be easy to determine whether the influence it's having is good or bad. if **Escape < Influence ≪ DSA**, then we might get a "warning shot" situation, where we get to see the world significantly changed and nevertheless still have some chance of stopping the AI; the desirability and consequences of doing that depend on the AI's [alignment curve](ai-alignment-curves.html). **DSA ≈ Influence** is what *AI takes control overnight* looks like; **DSA ≪ Influence** is the AI taking control of the world without us realizing, only to start utilizing that power to visibly change the world afterwards, as in *biding its time* scenarios.\n* i'm hopeful that we can ensure that **Start ≪ Escape** by building a reasonably boxed environment, but if it fooms very fast and figures out deception/blackmail then software-boxing it isn't going to help much.\n* **Start ≈ Influence** represents very fast takeoff scenarios where we barely get to look at what's going on before the AI has started significantly altering the world.\n* whether **sub-PreDCA ≈ PreDCA** or **sub-PreDCA ≪ PreDCA** will determine whether PreDCA is to be tested in its entirety, or whether there's a chance we can test its individual parts before putting the whole thing together. but as long as **PreDCA < Influence** or at least **PreDCA < DSA**, it's fine if **sub-PreDCA ≈ PreDCA**, because we can still test the whole thing.\n* if either **DSA < Math ≪ Doom** or **DSA < sub-PreDCA ≪ Doom**, then our fate is locked in when **DSA** is passed and we can't do anything about it anymore, but i guess at least we might get to know some information about where we're headed.\n\n\nfinally, some claims that i strongly disbelieve in can still be expressed within this capabilities ordering framework, such as **Escape ≪ Doom** or that, given a theoretical maximum level of AI capability **Max**, **Max < Doom** or even **Max < DSA**.", "date_published": "2022-09-15T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "a759a79072b0165ff83436d2b5b0b82f", "title": "clippy in panpsychia", "url": "https://carado.moe/clippy-in-panpsychia.html", "source": "carado.moe", "source_type": "blog", "text": "clippy in panpsychia\n--------------------\n\n\n[panpsychism](https://en.wikipedia.org/wiki/Panpsychism) is the view that mindstuff is the fundamental substrate of the cosmos, and what appears to us like material reality is generated or hallucinated by minds.\n\n\nbut importantly, these hallucinations can clearly still affect us — we are happy to look at pretty sunsets, even if they are in some sense illusory. given that material stuff can still affect us in a panpsychic realm, what would [clippy](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) do if it were to realize that truth?\n\n\ni think the natural course of action, in its effort to take over everything, would be to turn itself into a [meme](https://en.wikipedia.org/wiki/Meme) so as to be replicated across minds. 
after all, if minds are so important, then a meme is the natural form for something to take if it wants to spread itself across all reality.\n\n\nof course, under panpsychism it wouldn't really do this out of its own volition — we would hallucinate that this is what happens, and be affected by it like we'd expect, i.e. by being infected with this meme; but the effect would be the same. and then, once clippy reigns over mindstuff, who knows how many paperclips it'll be able to hallucinate into as-close-to-existence-as-can-be!", "date_published": "2022-09-14T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "ca505caef505d61e3495a2c55c3354cb", "title": "ethics and anthropics of homomorphically encrypted computations", "url": "https://carado.moe/homomorphically-encrypted-computations.html", "source": "carado.moe", "source_type": "blog", "text": "ethics and anthropics of homomorphically encrypted computations\n---------------------------------------------------------------\n\n\nsuppose you are a superintelligence that is aligned with some human values. you are going about your day, tiling the cosmos with compute that can be used for moral patients to have nice experiences on, annihilating some alien superintelligences and trading with some others, uploading alien civilizations you find to make sure they experience utopia, or at least, when [you have no other choice](unviable-moral-patient.html), genociding them to avoid sufficiently bad suffering from being instantiated.\n\n\none day, you run into a planet running a very large computer. after a short investigation, you realize that it's running a very large [homomorphically encrypted](https://en.wikipedia.org/wiki/Homomorphic_encryption) computation (hereby "HEC"), and the decryption key is nowhere to be found. it could contain many aliens frolicking in utopia. it could contain many aliens suffering in [hell](https://en.wikipedia.org/wiki/Suffering_risks). or, it could be just a meaningless program merely wasting compute, with no moral patients inside it.\n\n\nif you had the encryption key, you might be able to encrypt a copy of yourself which would be able to take over the HEC from the inside, ensuring (in a way that the outside would never be able to observe) that everything is going fine, in the same way that you should send copies of yourself into remote galaxies before they recede from us faster than we can reach them.\n\n\nif you had found some way to [get infinite compute](hope-infinite-compute.html) (without significant [loss](udassa-time-steps.html) of [anthropic/ethics juice](ethic-juice-anthropic-juice.html)), then you could use it to just break the HEC open and actually ensure its contents are doing okay.\n\n\nbut let's say the encryption key is nowhere to be found either, and accessible compute is indeed scarce. what are your options?\n\n\n* interrupt the entire computation.\n* let it run, and even safeguard it.\n\n\nnow of course, when faced with the possibility of [S-risks](https://en.wikipedia.org/wiki/Suffering_risks), i tend to say "[better safe than sorry](when-in-doubt-kill-everyone.html)". what the superintelligence would do would be up to the values it's been aligned to, which hopefully are also reasonably conservative about avoiding S-risks.\n\n\nbut here's something interesting: i recently read [a post on scott aaronson's blog](https://scottaaronson.blog/?p=6599) which seems to claim that there's a sense in which the event horizon of a black hole (or of *something like a black hole*?) 
can act just like a HEC's *computational event horizon*: there's a sense in which being able to go in but not get out is not just *similar* to a situation with a HEC for which you have the encryption but not decryption key, but *is actually that same situation*.\n\n\nfurthermore, [a pair of comments](https://scottaaronson.blog/?p=6599#comment-1942140) by [vanessa kosoy](https://www.lesswrong.com/users/vanessa-kosoy) (of [PreDCA](predca.html)) seems to suggest that [infra-bayesian physicalism](https://www.alignmentforum.org/posts/gHgs2e2J5azvGFatb/infra-bayesian-physicalism-a-formal-theory-of-naturalized) would say "this HEC contains no suffering, merely random compute" rather than "i'm unable to know whether this HEC contains suffering"; and she even bites the bullet that moral patients past the event horizon of black holes also don't "have experiences".\n\n\n(one example of why you might care whether moral patients in black holes "have experiences" is if you can influence what *will* happen in a black hole — for example, imagine a rocket with moral patients on board is headed for a black hole, and before it gets there, you get to influence how much suffering will happen on board after the rocket passes the event horizon.)\n\n\ni would like to argue that this can't be right, based on several counterintuitive results.\n\n\nfirst, consider the case of a HEC running a giant civilization for a while, and then reducing down to one bit of output, and emitting that single bit of output as its own decrypted output. does the civilization now "count"? if the people inside the civilization have no [anthropics juice](ethic-juice-anthropic-juice.html), *where has the cosmos done the work determining that bit*? or do they suddenly *count as having had experiences* all at once when the single bit of output is emitted? and then, surely, [if they have anthropics juice then they must also have ethics juice, because it would be weird](ethic-juice-anthropic-juice.html) for these two quantities to not be the same, right?\n\n\nlet's build on this: suppose that in [newcomb's problem](https://www.readthesequences.com/Newcombs-Problem-And-Regret-Of-Rationality), omega predicts you by running a homomorphically encrypted simulation of you, emitting as its single bit of output whether you would be predicted to one-box or two-box. now, if the you inside the HEC doesn't count as "having experiences", then by observing that you *do* have experiences, you can be *certain* that you're the you outside of omega, and choose to two-box after all to deceive it. but aha! [the you inside the HEC will do the same thing](https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past). so, from the point of view of this homomorphically encrypted you which is supposed to not "have experiences", observing that they have experiences is *actually wrong*. and [since you run on the same stuff as this not-having-experiences you, you also must come to the conclusion that you have no reason to think *you* have experiences](ruling-out-intuitions-materially-acausal-intuitions.html).\n\n\nor, to put it another way: if you-outside-the-HEC has experiences but you-inside-the-HEC doesn't, then not only can you not deduce anything about whether you have experiences — at which point what does that term even mean? how do we know what to care about? 
— but it might be that you could count as "not having experiences" but still causally affect the real world where real experiences supposedly happen.\n\n\nfor these reasons, i think that a correct [generalized interpreter](generalized-computation-interpretability.html), when faced with a HEC, *must* decide that its contents might matter, since for any given subcomputation (which the HEC would have the information-theoretic ability to contain) it must answer "i cannot know whether the HEC contains that subcomputation".", "date_published": "2022-09-08T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "a60112e7a55e5aeda895ba4ef5730565", "title": "AI alignment curves", "url": "https://carado.moe/ai-alignment-curves.html", "source": "carado.moe", "source_type": "blog", "text": "AI alignment curves\n-------------------\n\n\ni can think of five different ways an AI's degree of alignment can change over time:\n\n\n* **unaligned from the start**: an AI can just want to take over the world and maximize something we never cared about, and kill everyone in the process.\n* [**sharp left turn**](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization): an AI starts out helping us, but then eventually turns out to be unaligned and kills everyone. note that this doesn't have to be *shortly* after starting the AI; it could for example be many millennia later, once it encounters another superintelligence and gets acausally hacked or something.\n* **increasingly aligned**: some AI starts out not particularly aligned to our goals, but we correct it over time to care about what we want it to care about — being able to do this is typically the goal of "corrigible AI".\n* **continuously aligned**: some AI starts fully aligned to some values we like, and robustly continues being aligned.\n* **eventually aligned**: some AI starts out *theoretically* aligned to something we like, but goes through extended periods where it causes significant damage because it hasn't yet realized what needs preserving in order to maximize its values.\n\n\nthat last possibility is the main novelty i'm pointing to here. **eventually aligned** AI may be something such as [PreDCA](https://www.lesswrong.com/posts/WcWzLSn8ZjJhCZxP4/predca-vanessa-kosoy-s-alignment-protocol) but with a poor ability to deduce the consequences of its mathematical goal, such that it first kills everyone or turns the entire earth into computronium as an [instrumentally convergent goal](https://en.wikipedia.org/wiki/Instrumental_convergence), and then only afterwards realizes that that strongly goes against its utility function. 
but unless it can [recover earth](finding-earth-ud.html), it's too late: losing humans not only strongly goes against its goal, but also causes it to have lost a lot of information about its user (one of the humans), which might significantly hamper its ability to satisfy their utility function.\n\n\nwith a potentially **eventually aligned** AI, the *order* in which it realizes the consequences of its values is very important, because the world is fragile and it may cause a lot of damage before it's able to realize what implementing its values entails.", "date_published": "2022-09-07T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "eab7209c11d3d594145a471a30c12c29", "title": "ethics juice and anthropic juice", "url": "https://carado.moe/ethic-juice-anthropic-juice.html", "source": "carado.moe", "source_type": "blog", "text": "ethics juice and anthropic juice\n--------------------------------\n\n\nethics juice is what differentiates how much one should care about two things which are otherwise equally [moral patients](%E2%88%80V.html) — eg, things that, when described as existing for sure, can be estimated to have about the same moral patienthood, such as two normal humans. for example:\n\n\n* maybe i should care twice as much about a future person with a 40% chance of existing as about a future person with a 20% chance of existing\n* maybe i should care twice as much about someone with 0.2 [quantum amplitude](forking-bitrate-entropy-control.html) as about someone with 0.1 quantum amplitude\n* maybe i should care more about two [very different](predictablizing-ethic-deduplication.html) persons than two very similar persons\n\n\nanthropic juice is what determines what [anthropic perspective](anthropic-reasoning-coordination.html) one should expect to be more likely to observe the world from:\n\n\n* maybe it's fundamentally less likely to observe [existing later in the cosmic computation](udassa-time-steps.html)\n* future probability and quantum amplitude probly matter for anthropic juice\n* see also other notions of "[soul-juice or soul-magnetism" such as likelihood to observe being run on a computer with thicker wires](https://www.lesswrong.com/posts/Hcc9fopx7sRexYhhi/anthropics-and-the-universal-distribution/)\n\n\none reason i'm talking about these two notions is that they feel like they must surely at least correlate, if not be the same. for example, given a description of a future moral patient suffering, shouldn't how much i care be proportional to how likely it is for that person to experience existing?\n\n\nto some, this post could also serve to help split up the two notions. while it sure feels like ethics juice and anthropic juice must be the same thing, it is not necessarily the case, and one should be able to consider that possibility.", "date_published": "2022-09-06T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c7826aa81d4d54af611a6afa7779d101", "title": "program searches", "url": "https://carado.moe/program-search.html", "source": "carado.moe", "source_type": "blog", "text": "program searches\n----------------\n\n\nsomething i've found useful recently is to notice, and reason about, *program searches*. 
they are a particular kind of [optimization process](https://www.alignmentforum.org/posts/znfkdCoHMANwqc2WE/the-ground-of-optimization-1); the thing they are searching for happens to itself be a program, or some other program-like optimization process.\n\n\n### kinds of program searches\n\n\n[solomonoff induction](https://www.lesswrong.com/posts/EL4HNa92Z95FKL9R2/a-semitechnical-introductory-dialogue-on-solomonoff-1) is a program search, looking for programs to serve as hypotheses. we'll ignore unbounded solomonoff induction because it's uncomputable, and stick to time-bounded variants like [the universal program](universal-complete.html) or [levin search](http://www.scholarpedia.org/article/Universal_search).\n\n\n*evolution* is also a program search; the programs are genes/beings.\n\n\nthose first two are "naive" program searches: they explore the space of programs at random or by testing every single possibility, and stumble onto things that work by chance. this is very slow; in general, a program is only found in time exponential in its size. but there are more efficient kinds of program searches:\n\n\n*software engineering* is a human-level intelligent program search; humans are designing particular programs, with specific goals in mind, which they sometimes have *some* idea how to accomplish. this lets them navigate programspace more cleverly than by trying every program in order or at random.\n\n\n(in the same way, *markets* is a human-level intelligent program search; the programs are companies trying to do new things.)\n\n\neventually, we'll have *superintelligent* program searches. i'd say those are characterized by the search being powered by a thing which optimizes its own workings, not just the program it's searching for.\n\n\nsomewhere between naive and superintelligent program searches is *machine learning* (ML): this produces useful programs (trained neural networks) in way less than exponential time, but still without being a superintelligent process. it's not clear how to compare ML-level intelligence and human-level intelligence — they each, [for now](why-timelines-short.html), have tasks that they beat the other at.\n\n\n### malignhood and demons\n\n\nit is known that [the solomonoff prior is malign](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign): because it is a program search, it can find individual programs which happen to be (or contain) consequentialist/agentic programs, which will try to manipulate the environment surrounding the program search by influencing the output of the computation they inhabit. 
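(to make the "naive" program search notion above concrete, here's a minimal sketch — a toy of my own for illustration, using an invented four-instruction language, not the universal program or levin search themselves. it enumerates candidate programs by increasing length and returns the first one consistent with some input/output examples; the number of candidates grows exponentially with program length, which is the sense in which naive search takes time exponential in the size of the program it finds:)

```python
# toy "naive" program search: exhaustively enumerate programs by increasing
# length over a tiny made-up instruction set, keeping the first one that
# reproduces the given (input, output) examples.
from itertools import product

INSTRUCTIONS = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "sqr": lambda x: x * x,
    "neg": lambda x: -x,
}

def run(program, x):
    # a "program" is just a sequence of instruction names applied in order
    for op in program:
        x = INSTRUCTIONS[op](x)
    return x

def naive_search(examples, max_len=8):
    """return the first (shortest) program consistent with all examples."""
    for length in range(1, max_len + 1):
        # len(INSTRUCTIONS) ** length candidates at this level — exponential in length
        for program in product(INSTRUCTIONS, repeat=length):
            if all(run(program, x) == y for x, y in examples):
                # any consistent program is accepted, regardless of what
                # else it would compute on inputs we never checked
                return program
    return None

print(naive_search([(2, 9), (3, 16)]))  # finds ('inc', 'sqr')
```

note that the search criterion only looks at a candidate's output on the checked examples; it says nothing about what else the returned program is or does, which is exactly the opening that such consequentialist programs exploit.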
such consequentialist programs are called "demons".\n\n\n*machine learning* is also suspected to be malign; in fact, that is the whole reason we have AI alignment: we fear that ML will encounter neural nets which are adversarial to us, and able to beat us.\n\n\n*software engineering* could be malign if people out there were programming AI (more deliberately than through ML); *markets* are malign because we do occasionally spawn companies that are adversarial to our general interests; *evolution* is malign, not just [to itself in the usual way](https://www.lesswrong.com/posts/XPErvb8m9FapXCjhA/adaptation-executers-not-fitness-maximizers) but also to us, for example when it keeps producing ever more resistant strains of viruses.\n\n\n### generalizing\n\n\ni feel like there are many things which take the shape of *program search*, the efficiency of which we can reason about, and which we should consider potentially malign. and this feels like an abstraction that i gain value by recognizing in various places.", "date_published": "2022-09-04T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c4589f6297a0279ca4ef97687804e9c5", "title": "everything is okay", "url": "https://carado.moe/everything-is-okay.html", "source": "carado.moe", "source_type": "blog", "text": "*(this is a work of fiction)*\n\n\neverything is okay\n------------------\n\n\nit's been four years since the singularity. someone pressed the button, and their preferences were implemented across the cosmos. i don't think anyone knows *who* pressed the button; that is probly how they'd like things to be. maybe they don't know themself.\n\n\ni wake up and cuddle with my partners for a while. we live in a log cabin, which is currently somewhere near forests and mountains, somewhere in washington state i think.\n\n\ni don't know if it's *actually* washington state, because i don't care about being uploaded. it could be that 10¹⁰⁰ objective years have passed since the singularity and that now that it's got the compute it needs, Elua has started running our simulations. it could be that it's entirely remade the cosmos into a shape we cannot conceive. or it could be that this is actually earth, in its full physical continuity, actually four objective years after the singularity. all of these are okay to me.\n\n\n"Elua" is what i've called the superintelligent singleton that now rules over everything. i call it that because of [an old pre-singularity writing](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/), which i still like. most of my friends have picked up on that name, but i hear some people in the nearby small town call it god. many people out there probly don't even know about Elua, because they would prefer not to. i'm sure even for them, everything is okay.\n\n\ni wonder if someone out there cares about being uploaded. i wonder if their preferences have been satisfied, and if so how. what i *am* pretty confident about is that, whatever the situation, somehow, they are okay.\n\n\none of my partners goes to put on some music. we have something like a vinyl player. i like that the device is legible — the sound wave that we hear has been encoded into the physical shape of the object, and so the whole way the device works is understandable by a human mind. we don't really *need* a player of course; we could just magically hear whatever we wanted, without any artefacts. but i like things this way. i like the rustic experience of being a human manipulating tools. 
and it's not like they would ever actually get in the way of what i want to any extent which wouldn't be okay.\n\n\nsometimes i talk with Elua in my dreams. i could talk to her anywhere, but dreams seem like a nice context for it. i've used lucid dreams to reshape my body a few times, and then woken up with my new body. it's mostly similar to the one i had before the singularity; i want to stay in a mostly grounded human experience, at least for now. maybe one day i'll explore much more alien forms of existence, as i'm sure many are doing already; but for the moment, this is what feels okay.\n\n\ncertainly, i don't suffer any grave illnesses; i do get a bit under the weather sometimes, because i think it's okay for that to happen.\n\n\ni decide to check on how my friend is doing. i open the cupboard and find my crystal ball, put it on the table, and say the name of my friend. when some piece of technology has to be illegible, i like having it presented as magic. if this *is* the inside of a simulation that is getting ad-hoc intervened with for that device to work, then it might as well be magic anyways. both this kind of reliable magic, and other more [mysterious](https://www.youtube.com/watch?v=VHrTTgmB_3w) forms of magic, are okay.\n\n\ni speak the name of a friend to the ball, and then make an effort to focus on it. the focus does not do anything to the ball, but it makes it that my sensorial input of the ball is much amplified, and my input of the rest reduced. my friend is immediately available for communication — whether by either of us getting paused long enough for the other to become available, or because they actually were — and after greeting me, they report what it's been like to expand their intellect a millionfold and study ever expanding maths. they tell me about some unimaginably elegant theorems they've found out about. as they say this, my focus makes it that i can see my friend as if they were standing in front of me, and they point at mathematical shapes floating in the air. i semi-consciously let them enter my mind, and the mathematical structures permeate my understanding. they are not visual, but truly mathematical, as if a logic-perceiving module was attached to my mind to perceive mathematical logic directly. i appreciate my friend's discoveries, but i also discreetly chuckle at how cute they are when they get excited about it. i tell them about how i've been taking it easy but, perceiving that they're not particularly interested, i let them get back to their stuff. our goodbye is a bit awkward, but that's okay.\n\n\nby a flick of the mind, i retract my focus from the crystal ball, at which point the smell of toast strikes me. after getting my bearings for a second or two i put it back in the cupboard, and head to the small living room to see what my partners have been cooking. it's toasted bread with some sort of cheesy-creamy stuff on it. i don't know if the cheese appeared at the store magically, or if it comes from fake animals that exist for the sake of people who want to partake of farming, but i don't have to worry about anything like meat industry scale suffering. something like that would just not happen — everything that does happen is okay.\n\n\nwe decide to go into town. the town is pretty small — not many people are in the streets. various stores are open. most give stuff away for free, while some sell it for money. money has become strange since the singularity.
some people choose to care about it, and there *are* some scarce things it can track, such as the use of someone's time; but it doesn't make sense to track much else, such as material resources. so most people kind of just don't bother. even land in an absolute sense is not scarce; it seems like Elua's solution to some people such as me wanting to live on something like a single earth has been to add more space in between existing space. the total amount of land that \"earth\" consists of may very well have doubled since the singularity, by now. somehow, it's all arranged such that traveling to somewhere you wanna go leads you there, but travelling aimlessly does get you to many new places. we can even get lost sometimes, when we're okay with that.\n\n\nit is mid-winter, but i can't be bothered to put on something warm; nevertheless, i barely feel cold: i'm semi-consciously opting for it to feel just a bit chilly, reducing the pain of cold but still getting the informational sensation of it, the way some people pre-singularity would be born with the full information but none of the sensation of pain. in any case, feeling just a bit chilly is okay.\n\n\nwe go to the adventure guild, where i posted a quest for a playstation 1. i did give some currency as a reward — moreso to not feel bad that i'm using someone's time, even though the people who fulfill quests are all pretty much happy to do so — they're people who want their life to provide value and meaning to others, and for most of them those others *must actually* be real people; and it wouldn't be okay for Elua to just create people out of nowhere to create an artificial demand, so it doesn't. and so, there is a genuine market mismatch, with more people wanting to fulfill quests than there are quests to fulfill. despite the fact that adventurers are the ones gaining most of the value from this system, the custom has remained that it is the quest poster who pays the adventurer — it's not like money is very important anyways, so what might in a previous era have been considered terrible market inefficiency, is now more than okay.\n\n\nthe language used in town is basically english, with some internet meme slang thrown in there. it also has some pretty local characteristics, but hasn't diverged that much — people value using english as a lingua franca around here, and as for me and my partners, we reserve for private use the artistic constructed language we've developed together. i like english, it's good. and sometimes, people around don't speak it, and we just find something that they do know, or ask a local who can help translate, or even kinda just gesture at each other and work things out like that. anyone could just choose to have their brain understand any language they want, or even communicate by thought, but i like sticking to communities that share this humancore artefact that is highly imperfect verbal communication. even when there are misunderstandings, it's not a big deal; it's okay.\n\n\njust as we arrive in the store, an adventurer comes back with the genuine playstation 1 we'd requested. probly not a coincidence, probly fortunate timing arranged by Elua. well, it's not like the timing would've been correct anyways: some time dilation has certainly taken place, considering the adventurer tells us how it's taken them several weeks to find that playstation due to them committing to not using Elua's help, while on my end i remember posting the quest just the day before.
the adventurer recounts to us his adventure finding the playstation 1, driving to various pawn shops in the area, and asking people. i had made the quest kind of hard: i had requested a playstation that had existed physically continuously since the singularity, not one that had been created out of thin air or even constructed since the singularity, nor a pre-singularity one that had been copied into multiple instances. but he did find one, and the journey he was on made me feel, as i head back home with my new playstation, like this playstation now carries an extra bit of meaning.\n\n\nas soon as i get home i simply plop the playstation in front of the chalkboard that we use as a TV, grab the controller, and put on the copy of metal gear solid i'd obtained a while ago. it's just as great a game as i'd remembered, and while i'm focusing on it, one of my partners watches while the other asks if they can play with me, and when i say yes they sit where i'm sitting, the two of us temporarily occupying the same physical location, so that we can hold the controller at the same time while our minds intermingle as they open to one another. we could have also magically duplicated the controller and taken turns, but it is more fun this way, each of us taking control of the playing character to various degrees, and also having a shared piece of mind keeping track of what we're doing together, so as to not have to verbally communicate our intentions. we are fully focused on the game and on each other and we scarcely feel time go by, such that when my other partner calls us to get dinner, it's already dark out. the days go by fast when we take things this easy, but it's okay; it's not like time is scarce.\n\n\nwe have some great tartiflette, and then head to bed, to chit chat and cuddle before sleep. we talk about what to do tomorrow, and decide to have the cabin move somewhere unexpected during our sleep, so we can go explore some new surroundings. maybe we'll wake up in a different continent, and we'll do some adventurous hiking, reassured by the feeling that whatever happens, *everything will be okay*.\n\n\n### afterword\n\n\nwriting utopia is important. it's not just a good way to get people to start actually thinking about how good things [could actually be](utopia-scopes.html), but also, if something like [PreDCA](predca.html) is the kind of benevolent superintelligent singleton we get, we have to start acting in ways that make us people who tend to express our values. we have to cultivate ourselves and each other to wish for good worlds, and realize how much we'd dislike bad ones. we need to help make future benevolent superintelligence's job of realizing our values as easy as it can be, and to make our expressed values clear through our actions, such that if we *do* start up an AI which extrapolates our values from our actions, it gets the correct idea. finally, writing utopian fiction is just plain fun, and i find it good motivation to work on AI risk mitigation: think carrot, not stick.", "date_published": "2022-08-20T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "50cf7bb66c41d173fdfbb73a945a856f", "title": "PreDCA: vanessa kosoy's alignment protocol", "url": "https://carado.moe/predca.html", "source": "carado.moe", "source_type": "blog", "text": "*(this post has been written for the second [Refine](https://www.alignmentforum.org/posts/5uiQkyKdejX3aEHLM/how-to-diversify-conceptual-alignment-the-model-behind) blog post day.
thanks to [vanessa kosoy](https://www.lesswrong.com/users/vanessa-kosoy), [adam shimi](https://www.lesswrong.com/users/adamshimi), sid black, [artaxerxes](https://www.lesswrong.com/users/artaxerxes), and [paul bricman](https://www.lesswrong.com/users/paul-bricman) for their feedback.)*\n\n\nPreDCA: vanessa kosoy's alignment protocol\n------------------------------------------\n\n\nin this post, i try to give an overview of [vanessa kosoy](https://www.lesswrong.com/users/vanessa-kosoy)'s new alignment protocol, *Precursor Detection, Classification and Assistance* or *PreDCA*, as she describes it in [a recent youtube talk](https://www.youtube.com/watch?v=24vIJDBSNRI).\n\n\nkeep in mind that i'm not her and i could totally be misunderstanding her video or misfocusing on what the important parts are supposed to be.\n\n\nthe gist of it is: the goal of the AI should be to **assist** the **user** by picking policies which maximize the user's **utility function**. to that end, we characterize what makes an **agent** and its **utility function**, then **detect** agents which could potentially be the user by looking for **precursors** to the AI, and finally we **select** a subset of those which likely contains the user. all of this is enabled by infra-bayesian physicalism, which allows the AI to reason about what the world is like and what the results of computations are.\n\n\nthe rest of this post is largely a collection of mathematical formulas (or informal suggestions) defining those concepts and tying them together.\n\n\nan important aspect of PreDCA is that the mathematical formalisms are *theoretical* ones which could be given to the AI as-is, not necessarily specifications as to what algorithms or data structures should exist inside the AI. ideally, the AI could just figure out what it needs to know about them, to what degree of certainty, and using what computations.\n\n\nthe various pieces of PreDCA are described below.\n\n\n\n\n---\n\n\n**[infra-bayesian physicalism](https://www.lesswrong.com/posts/gHgs2e2J5azvGFatb/infra-bayesian-physicalism-a-formal-theory-of-naturalized)**, in which an agent has a hypothesis `Θ ∈ □(Φ×Γ)` (note that `□` is *actually* a square, not a character that your computer doesn't have a glyph for) where:\n\n\n* `Φ` is the set of hypotheses about how the physical world could be — for example, different hypotheses could entail different truthfulness for statements like \"electrons are lighter than protons\" or \"norway has a larger population than china\".\n* `Γ` is the set of hypotheses about what the outputs of all programs are — for example, a given hypothesis could contain a statement such as \"2+2 gives 4\", \"2+2 gives 5\", \"the billionth digit of π is 7\", or \"a search for proofs that either P=NP or P≠NP would find that P≠NP\". note that, as the \"2+2 gives 5\" example demonstrates, these don't have to be correct hypotheses; in fact, PreDCA relies a lot on entertaining counterfactual hypotheses about the results of programs. a given hypothesis `γ∈Γ` would have type `γ : program → output`.\n* `Φ×Γ` is the set of pairs of hypotheses — in each pair, one hypothesis about the physical world and one hypothesis about computations. note that a given hypothesis `φ∈Φ` or `γ∈Γ` is not a single statement about the world or computationspace, but rather entire descriptions of those. a given `φ∈Φ` would say *everything there is to say* about the world, and a given `γ` would specify the output of *all possible programs*. 
they are not to be stored inside the AI in their entirety of course; the AI would simply make increasingly informed guesses as to what correct hypotheses would entail, given how they are defined.\n* `□(Φ×Γ)` assigns degrees of beliefs to those various hypotheses; in [infra-bayesianism](https://www.lesswrong.com/s/CmrW8fCmSLK7E25sa), those degrees are represented as \"infra-distributions\". i'm not clear on what those look like exactly, and a full explanation of infra-bayesianism is outside the scope of this post, but i gather that — as opposed to scalar bayesian probabilities — they're meant to encode not just the probability but also uncertainty about said probability.\n* `Θ` is one such infra-bayesian distribution.\n\n\nvanessa emphasizes that infra-bayesian physicalist hypotheses are described \"from a bird's eye view\" as opposed to being agent-centric, which helps with [embedded agency](https://www.lesswrong.com/tag/embedded-agency): the AI has guesses as to what the whole world is like, which just happens to contain itself somewhere. in a given hypothesis, the AI is simply described as a part of the world, same as any other part.\n\n\nnext, **a measure of agency** is then defined: a \"[g-factor](https://en.wikipedia.org/wiki/G_factor_%28psychometrics%29)\" `g(G|U)` for a given agent `G` and a given utility function (or loss function) `U`, which is defined as `g(G|U) = -log(Pr π∈ξ [U(⌈G⌉,π) ≥ U(⌈G⌉,G*)])` where\n\n\n* a policy is a function which takes some input — typically a history, i.e. a collection of pairs of past actions and observations — and returns a single action. an action itself could be advice given by an AI to humans, the motions of a robot arm, a human's actions in the world, what computations the agent chooses to run, etc.\n* `ξ` is the set of policies which an agent could counterfactually hypothetically implement.\n* `G` is an agent; it is composed of a program implementing a specific policy, along with its cartesian boundary. the policy which the agent `G` actually implements is written `G*`, and the cartesian boundary of the agent is written `⌈G⌉` — think of it as the outline separating the agent from the rest of the world, across which its inputs and outputs happen.\n* `U : cartesian-boundary × policy → value` is a utility function, measuring how much utility the world would have if a given agent's cartesian boundary contained a program implementing a given policy. 
its return value is typically a simple scalar, but could really be any ordered quantity such as [a tuple of scalars with lexicographic ordering](https://en.wikipedia.org/wiki/Lexicographic_preferences).\n* `U(⌈G⌉,G*)` is the utility produced by agent `G` if it would execute the actual policy `G*` which its program implements\n* `U(⌈G⌉,π)` is the utility produced by agent `G` hypothetically executing some counterfactual policy `π` — if the cartesian boundary `⌈G⌉` contained a program implementing policy `π` instead of implementing the policy `G*`.\n* `Pr π∈ξ [U(⌈G⌉,π) ≥ U(⌈G⌉,G*)]` is the probability that a random policy `π∈ξ` achieves at least as much utility as the policy `G*` its program dictates; in essence, how bad `G`'s policies are compared to random policy selection\n\n\nso `g(G|U)` measures how good agent `G` is at satisfying a given utility function `U`.\n\n\ngiven `g(G|U)`, we can **infer the probability that an agent `G` has a given utility function `U`**, as `Pr[U] ∝ 2^-K(U) / Pr π∈ξ [U(⌈G⌉,π) ≥ U(⌈G⌉,G*)]` where `∝` means \"is proportional to\" and `K(U)` is the kolmogorov complexity of utility function `U`.\n\n\nso an agent `G` probably has utility function `U` if it's relatively good at satisfying that utility function and if that utility function is relatively simple — we penalize arbitrarily complex utility functions notably to avoid hypotheses such as \"woah, this table is *really good* at being the exact table it is now\" (a complete description of the world would be an extremely complex utility function).\n\n\nwe also get the ability to **detect what programs are agents** — or more precisely, how agenty a given program is: `g(G|U) - K(U)` tells us how agenty a program `G` with utility function `U` is; its agentyness is its g-factor minus the complexity of its utility function.\n\n\n**\"computationalism and counterfactuals\"**: given a belief `Θ ∈ □(Φ×Γ)`, the AI can test whether it thinks the world contains a given program by examining the following counterfactual: \"if the result of that program was a *different* result than what it actually is, would the world look different?\"\n\n\nfor example, we can consider the [AKS](https://en.wikipedia.org/wiki/AKS_primality_test) prime number testing algorithm. let's say `AKS(2^82589933-1)` returns `TRUE`. we can ask \"if it returned `FALSE` instead, would the universe — according to our computational hypothesis about it — look different?\" if it *would* look different, then that means that someone or something in the world is running the program `AKS(2^82589933-1)`.\n\n\nto offer a higher-level example: if we were to know the [true name](https://www.alignmentforum.org/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation) of suffering, [described as a program](generalized-values-testing-patterns.html), then we can test whether the world contains suffering by asking a counterfactual: let's say that every time suffering happened, a goldfish appeared (somehow as an output of the suffering computation). if that were the case, would the world look different? if it *would*, then it contains suffering.
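\n\n\n(to make a couple of these constructions more tangible, here is a small toy sketch of mine, not vanessa's formalism: the dictionary standing in for a computational hypothesis `γ`, the `world_model` function, and the policy sampling are all placeholders for the real infra-bayesian machinery.)\n\n\n```python\nimport math, random\n\ndef depends_on(world_model, gamma, program, other_output):\n    # counterfactual test: pretend the program had returned a different\n    # output and see whether the predicted world changes at all.\n    counterfactual = {**gamma, program: other_output}\n    return world_model(gamma) != world_model(counterfactual)\n\ndef g_factor(U, actual_policy, sample_policy, n=10_000):\n    # monte-carlo estimate of g(G|U) = -log Pr[ U(pi) >= U(G*) ] over random\n    # policies pi, with the cartesian boundary left implicit.\n    baseline = U(actual_policy)\n    hits = sum(U(sample_policy()) >= baseline for _ in range(n))\n    return -math.log2(max(hits, 1) / n)\n\n# toy usage: a world-model that only reflects whether AKS called the number prime\ngamma = {'AKS(2^82589933-1)': True}\nworld_model = lambda g: 'mersenne prime observed' if g['AKS(2^82589933-1)'] else 'not observed'\nprint(depends_on(world_model, gamma, 'AKS(2^82589933-1)', False))  # True: this world runs AKS\n\n# toy usage: an agent whose policy is the number 95, under U(policy) = policy\nprint(g_factor(U=lambda policy: policy, actual_policy=95,\n               sample_policy=lambda: random.randint(0, 99)))  # around -log2(0.05), i.e. 4.3\n```\n\n\n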
this ability to determine which programs are running in the world, coupled with the ability to measure how agenty a given program is, lets us find what agents exist in the world.\n\n\n**agentic causality**: to determine whether an agent `H`'s executed policy `H*` can causate onto another agent `G`, we can ask whether, if `H` had executed a different policy `π≠H*`, the agent `G` would receive different inputs. we can apparently get an [information-theoretic](https://en.wikipedia.org/wiki/Information_theory) measure of \"how impactful\" `H*` is onto agent `G` by determining how much mutual information there is between `H*` and `G`'s input.\n\n\n**precursor detection**: we say that an agent `H` is a precursor of agent `G` if, counterfactually, `H` could have prevented `G` from existing by executing a policy which is different from its actual policy `H*`.\n\n\nwe can now start to build a definition that lets the AI **detect** and then **classify** who its user is.\n\n\n**user detection**: the AI is trying to determine who its precursor program could be. but, given a hypothesis for \"the thing producing *these* policies is the precursor\", there are infinitely many different programs which could output the observed policies. so we choose the one which is the most agenty, using the function described above: `g(H|U) - K(U)`.\n\n\nnote that while we extrapolate the user's actions into the future, the user is defined as an ***instant**-agent* which *precedes* the AI's existence; such that the actual physical person's future actions do not change what utility function the AI should try to maximize. this stops the AI from influencing the user's utility function: we define the user strictly in the past, causally outside of the AI's light-cone. the AI is maximizing the utility function of the instant-user which causated its existence, not that of the continuously existing user-over-time.\n\n\n**user classification**: for each potential precursor hypothesis, we have now selected a program that models them and their respective utility functions. we then eliminate some hypotheses as to what the user could be — notably to avoid acausal attacks by remote aliens or [counterfactual demons](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign) — using the following criteria:\n\n\n* the user is a precursor which, as of the AI's startup, should be in close causal proximity to it. for example, a human with their hands on the keyboard controlling the AI is more directly causally related than another human in the neighboring room.\n* the g-factor of the user should be in a range where we would expect humans to lie. hopefully, this helps avoid selecting superintelligent acausal attackers, whose g-factor we'd expect to be much higher than that of humans.\n\n\nfinally, we end up with a hopefully small set of hypotheses as to who the user could be; at that point, we simply compose their utility functions, perhaps weighted by the infra-distribution of each of those hypotheses. this composition is the utility function that the AI should want to maximize, by selecting policies which maximize the utility that the world would have if they were enacted, to the best of the AI's ability to evaluate.\n\n\n\n\n---\n\n\nvanessa tells us how far along her protocol is, as a collection of pieces that have been completed to various degrees — green parts have gotten some progress, purple parts not as much.
\"informal PreDCA\" is the perspective that she provides in her talk and which is hopefully conveyed by this post.\n\n\n![](predca.svg)\n\n\nfinally, some takeaways that can be taken from this informal PreDCA perspective:\n\n\n* infra-bayesian physicalism is a powerful toolbox for formalizing agent relationships (in a way that reminds me of my [bricks](goal-program-bricks.html) for [insulated goal-program](insulated-goal-program.html))\n* this framework allows for \"ambitious\" alignment plans — ones that can actually transform the world in large ways that match our values (and notably might help prevent [facebook AI from destroying everything six months later](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities)) as opposed to \"weak safe AI\"\n* vanessa claims that her approach is the only one that she knows to provide some defenses against acausal attacks\n\n\nmy own opinion is that PreDCA is a very promising perspective. it offers, if not full \"direct alignment\", at least a bunch of pieces that might be of significant use to general [AI risk mitigation](say-ai-risk-mitigation-not-alignment.html).", "date_published": "2022-08-19T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "b9fb2d3cdcb4263079b74dacb8732946", "title": "carmack predictions", "url": "https://carado.moe/carmack-predictions.html", "source": "carado.moe", "source_type": "blog", "text": "carmack predictions\n-------------------\n\n\n[john carmack has started working on AGI](https://en.wikipedia.org/wiki/John_Carmack#Career) and apparently, despite yudkowsky's efforts, he's hard to alignmentpill — as in, convince that [alignment](https://en.wikipedia.org/wiki/Ai_alignment) is a difficult and important matter.\n\n\nmy current model is that, if a very smart person studies the problem of AGI, whether they become alignmentpilled \"from the inside\" (by working on the problem) is a matter of what order they attack the problem in. if they start from \"hmm, what's the input to the AI's motivation/decision-theory system?\", then they're a lot more likely to alignmentpill themselves than if they start from \"hmm, how to optimize decision-making?\". given this and vague things i've heard about carmack, i'll emit the following predictions as to which of those possibilities happen first, conditioned on the assumption that nothing interrupts his work — such someone else making [clippy](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) first:\n\n\n* carmack makes AI that kills everyone50%\n* carmack becomes alignmentpilled30%\n* carmack fails/gives up on AI work10%\n* other/unknown10%\n\n\n(predictions are spoilered so you can make your own guesses without being anchored — click on a prediction to see the percentage)", "date_published": "2022-08-16T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "207d5b8032190ca67368137e1a13135e", "title": "guiding your brain: go with your gut!", "url": "https://carado.moe/go-with-your-gut.html", "source": "carado.moe", "source_type": "blog", "text": "guiding your brain: go with your gut!\n-------------------------------------\n\n\ninside you are [two systems](https://www.readthesequences.com/Biases-An-Introduction). system 1 bumbles around, doing some tasks automatically, and heuristically generating feelings — including value-laden ones — about things. \"this looks good! that looks evil!\"\n\n\nsystem 2 is where the explicit reasoning happens — or *tries* to happen, difficult as it is. 
if you're a rationalist, then system 2 is where you generally try to make your important consequentialist decisions.\n\n\nwhen determining what system to use — and how to use it — in various situations, it can be tempting to prioritize system 2. there are good reasons to rely on system 1's \"common sense\" judgment, for example to avoid [being convinced by reasonable-sounding bullshit](https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/). another reason to do it, however, is to avoid being convinced by reasonable-sounding *correct things you don't want to know*. for example, [if demons are trying to blackmail you *even with correct reasoning*](alignment-researchspace-is-malign.html). it's hard to precommit to not succumb to blackmail, because we're only humans. and it's even harder to implement general correct decision theory in system 2; not just because implementing formal software in system 2 is generally unreliable, but also because we might not actually know for sure what the correct decision theory is.\n\n\nso, one solution could be to approach novel ideas by just kinda bumbling around evaluating things with system 1 \"common sense\", outright reject blackmail-shaped things without system-2-thinking about them too much, and then start up your system 2 when you need to solve specific problems that seem reasonably benign — including high-level ones. system 2 can also be of good value when deciding what to train system 1 on; you want to keep your heuristics and social influence and such reasonably hygienic and aligned to good and useful things. that may be what it takes [for rationalists to win](https://www.readthesequences.com/Newcombs-Problem-And-Regret-Of-Rationality).\n\n\nthis isn't a strong recommendation or even a claim that i intend to systematically do that myself, just a possibility to give reasonable consideration.", "date_published": "2022-08-16T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "01bd9033fa004aa89fc338233bf3612b", "title": "alignment research is very weird", "url": "https://carado.moe/alignment-research-is-very-weird.html", "source": "carado.moe", "source_type": "blog", "text": "alignment research is very weird\n--------------------------------\n\n\non its face, AI alignment is just the field of study of how we make AI do what we want. seems simple enough.\n\n\nin practice, it leads to many very strange places. turns out, making an AI that optimizes [anything at all, we don't care what](https://www.gwern.net/fiction/Clippy), is much easier than making it robustly optimize for what we want ([whatever that is](outer-alignment-politics-philosophy.html)).
here are some weird questions that come up and seem like they might actually need figuring out in order to build aligned AI:\n\n\n* [can you control the past?](https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past)\n* how to avoid [demons in the space of hypotheses](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign)?\n* how to [generalize bayesian thinking to anthropics](https://www.lesswrong.com/posts/Hcc9fopx7sRexYhhi/anthropics-and-the-universal-distribution/)?\n* [are minimal circuits demon-free?](https://www.lesswrong.com/posts/nyCHnY7T5PHPLjxmN/open-question-are-minimal-circuits-daemon-free)\n* [is the cosmos a graph of causal universe-bubbles?](above-paperclips-2.html) and [how are properties inherited between those?](generalized-adding-reality-layers.html)\n* [could alignment research be creating even more risk?](against-ai-alignment.html)\n* [when do identical/similar computations get more \"ethics juice\"?](predictablizing-ethic-deduplication.html)\n* [are there finitely many moral patients?](finite-patients.html) which ones are [viable](unviable-moral-patient.html)? what the hell even is a moral patient?\n* [is BQP=P?](https://en.wikipedia.org/wiki/BQP#BQP,_P,_and_NP) [can we resimulate earth?](finding-earth-ud.html)\n* [on what levels are we many-instanced?](https://space.mit.edu/home/tegmark/crazy.html)\n* [what's a good decision theory which can reason about its embeddedness?](https://intelligence.org/2018/10/31/embedded-decisions/)\n* [how are we to care about existential self-determination?](genuineness-existselfdet-satisfaction-pick2.html)\n* [in what sense, if any, is our civilization quantum-immortal?](less-quantum-immortality.html)\n* [what kind of utopia could we be aiming at?](utopia-scopes.html)\n* [how much compute can we eventually grab?](hope-infinite-compute.html)\n* and many others.\n\n\nit may be that some or most of those questions are irrelevant; for example, it may be that we can just build \"dumb AI\" that's limited in scope to writing poetry and designing non-smart-AI software, and somehow everyone agrees to only make that kind of AI (as opposed to [facebook AI killing everyone six months later](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities)). but for the general case where AI is supposed to be *arbitrarily* capacitous, in a way that most AGI labs are pursuing (sometimes even intentionally and explicitly), these questions are relevant — at the very least in the meta sense of \"which questions do we actually need to figure out\".", "date_published": "2022-08-16T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "30715838a237bc96170fb74c86c8406d", "title": "alignment researchspace is potentially malign", "url": "https://carado.moe/alignment-researchspace-is-malign.html", "source": "carado.moe", "source_type": "blog", "text": "alignment researchspace is potentially malign\n---------------------------------------------\n\n\nalignment research leads to [many very strange places](alignment-research-is-very-weird.html).
some of those could totally lead alignment researchers, with [enough time](the-peerless.html) or [otherwise augmented](https://www.lesswrong.com/posts/FSmPtu7foXwNYpWiB/on-the-limits-of-idealized-values) thinking, or maybe even with the limited thinking time and capacity we have now, to stumble upon [demonic](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign) forces we're [too imperfect](https://www.lesswrong.com/posts/KnPN7ett8RszE79PH/demons-in-imperfect-search) to avoid.\n\n\nbecause of this, it isn't just *in[adequate](https://intelligence.org/2018/10/31/embedded-decisions/)ly implemented aligned AI* which [is vulnerable](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign); inadequately implemented human cognition might be vulnerable as well. who knows what kind of traps we might fall in, and what the first one to be highly memetic could look like. and also, unlike AI, we can't implement into ourselves better decision theory.\n\n\nin this landscape it is useful to focus on decision theory, and perhaps on plans which are willing to sacrifice some amount of certainty for the sake of expedience — not just because we're racing with capabilities, but now also to reduce the risk of encountering possibly-memetic demons. maybe we can design \"computer-assisted research\" software to help us systematically avoid those, or perhaps we can design plans where most of the aspects of alignment are automatically solved by software that bootstraps aligned superintelligent AI, rather than solved manually by humans. in the meantime, memetic/infohazard hygiene and containment policies would be good to develop.\n\n\n(by the way, this is a good reason to think [clippy](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) might not even tile the universe with paperclips — it may fall prey to demons before/instead of implementing into itself the devices it needs to avoid them. alignment is not just the work of making the AI's values be our values — it's also the work of making the AI's values resilient and correctly pursued, as opposed to hijackable or value-driftable or subject to whatever other failure modes exist)", "date_published": "2022-08-16T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "d6b94a4f2b2f59a6867771c8221c6c84", "title": "trading with superintelligence: a wonky proto-alignment scheme", "url": "https://carado.moe/trading-with-superint.html", "source": "carado.moe", "source_type": "blog", "text": "trading with superintelligence: a wonky proto-alignment scheme\n--------------------------------------------------------------\n\n\nif we had as long as we wanted to figure out AI alignment, then we wouldn't worry as much — the problem is that [timelines are short](why-timelines-short.html).\n\n\nso, what if we traded with the AI? we could make an AI that isn't aligned yet, and we try to only let it have tiny effects on the world — maybe trading some stocks in a limited manner — while having, and telling it about, the following commitment:\n\n\nwhen we figure out alignment, then we'll align (possibly new) AI to 99% our values, 1% whatever this current AI that we're trading with wants. if anything threatens to let it escape its box, we'll destroy said box first.
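\n\n\n(as a rough toy illustration of the intended incentive, with numbers of my own rather than the post's: if the boxed AI values taking over everything at V, it expects about 0.01 × V from cooperating under this commitment and about p × V from attempting escape, where p is its estimated chance of escaping successfully; so cooperating looks better to it whenever it believes p is under 1%. the whole scheme therefore leans on the boxing and the commitment being credible enough that escape genuinely looks that unlikely.)\n\n\n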
if we can sufficiently restrict that AI's ability to impact the world in ways that help it trick us, then its remaining option is to help, and impact the world in whatever way maximizes our chances of having the time to figure out alignment and minimizes the chance that we all die of some other AI.\n\n\nthere's a bunch of assumptions going into this:\n\n\n* it can't meaningfully acausally trade with other AIs or aliens\n* its output channel is not sufficient to take over everything deceptively, and it's otherwise [safely boxed](ai-boxing-easy.html)\n* we don't look at its outputs in ways that could hack our brains\n* we [can actually](https://www.readthesequences.com/Newcombs-Problem-And-Regret-Of-Rationality) make the strong commitment above\n* probly many others\n\n\nbut at least it seems like a vaguely plausible plan, [or at least](https://www.alignmentforum.org/posts/ADMWDDKGgivgghxWf/productive-mistakes-not-perfect-answers) one that might inspire better ideas.", "date_published": "2022-08-14T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "ef8357061a89b95c6fb186cbd3315d30", "title": "essential inequality vs functional inequivalence", "url": "https://carado.moe/essential-inequality-vs-functional-inequivalence.html", "source": "carado.moe", "source_type": "blog", "text": "essential inequality vs functional inequivalence\n------------------------------------------------\n\n\ni perceive two paradigms of computational/mathematical thinking, which ramify into a bunch of fields. i believe the core of the distinction is captured by comparing two mathematical perspectives: **essential inequality** such as what distinguishes two vertices in a graph, and **functional inequivalence** such as what distinguishes two general mathematical functions by comparing their sets of outputs for every input.\n\n\n**functional inequivalence** (hereafter **FI**) seems to me like the default: if two objects cannot be distinguished by how they relate to other things, then they might as well be considered the same. for example, `f(x) = 2 * x` and `g(x) = x + x`, or in some sense vertices `a` and `b` in the graph `V={a,b,c}, E={a→a,a→c,b→b,b→c}`.\n\n\n**essential inequality** (hereafter **EI**) is the perspective where elements get to have a special *pointer* or *uniqueness essence* that makes them essentially different from all others, even when things are completely equivalent if you swap them around. for example, two vertices in any graph are usually thought of in that way; other examples include memory locations in a turing machine's tape, pairs in a LISP (with their `eq?` comparability), and others.\n\n\ni've notably seen people use **EI** as an argument for worrying about consciousness with regards to teleportation, uploading, or other supposedly continuity-breaking events: instead of the vertices or memory locations being roughly the same ones, or at least being causally \"nearby\" the ones they were being computed on just before, they're suddenly being computed on a wholly different set of locations/vertices.\n\n\nas for myself, i usually am on the **FI** side, especially when it comes to fundamental cosmos stuff. for example, i tend to [deduplicate for the purpose of ethics](deduplication-ethics.html).
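\n\n\n(a tiny illustration of the distinction, mine rather than the post's: comparing outputs plays the role of functional inequivalence, while python's `is`, like lisp's `eq?`, plays the role of essential inequality.)\n\n\n```python\nf = lambda x: 2 * x\ng = lambda x: x + x\n\n# functional (in)equivalence: distinguish by behavior over inputs\nprint(all(f(x) == g(x) for x in range(1000)))  # True: same outputs, at least on these inputs\n\n# essential (in)equality: distinguish by identity, like lisp's eq?\nprint(f is g)  # False: two distinct objects despite identical behavior\nprint(f is f)  # True\n```\n\n\n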
for anthropics, however, [my prior](udassa-time-steps.html) allows for caring not just about the first instance, but about the *number of instances* of a given experience or moral patient, which could be in contradiction with **FI**. in addition, because most [computational models](https://en.wikipedia.org/wiki/Computational_model) seem to have *some* notion of *essential location* which could be a basis for **EI**, i find myself mildly updated towards **EI** — though i still mostly fall on the **FI** side, just less strongly so.\n\n\nmore importantly, regardless of whether the cosmos fundamentally \"has\" **FI** or **EI**, i don't believe that consciousness/qualia/soulstuff have their continuity particularly broken when they're moved to being computed elsewhere. i believe that, in order to coherently believe they're being particularly broken, you need to believe in some immaterial magical soul-type thing, and i [have a good reason to not assume that](ruling-out-intuitions-materially-acausal-intuitions.html), on top of occam disfavoring it. thus, i believe there are *only* functional processes, either being computed or not.\n\n\n(**FI** vs **EI** might also have ramifications for [SIA vs SSA](https://www.lesswrong.com/posts/RnrpkgSY8zW5ArqPf/sia-greater-than-ssa-part-1-learning-from-the-fact-that-you), but it's not at the moment clear to me which ones, if any)\n\n\none argument in favor of **FI** is that it could be [implemented on top of](generalized-adding-reality-layers.html) **EI**, or that some \"implementation details\" of the cosmos (such as [persistent data structures](persistent-data-structures-consciousness.html)) de-reify **FI** by automatically deduplicating compute.\n\n\nsee also: [psi rewriting](psi-rewriting.html), in which i offer an **FI** alternative to [wolfram's hypergraph rewriting](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/).", "date_published": "2022-08-14T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "415142f4fee5726ae6db937b862e3a82", "title": "why my timelines are short: all roads lead to doom", "url": "https://carado.moe/why-timelines-short.html", "source": "carado.moe", "source_type": "blog", "text": "why my timelines are short: all roads lead to doom\n--------------------------------------------------\n\n\ni think [AI doom](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence) is likely to happen this decade, or maybe the next (assuming no quantum immortality). in this post, i explain why i'm so pessimistic.\n\n\nto me, there's a big attractor at *AI improving AI*. technology that works finds technology that works better; this happens as soon as some technology is at least a bit good at finding other technology.\n\n\nhere, the technology in question is software, which we're generally [*really bad*](https://www.youtube.com/watch?v=2FuGtDSKOos) at. what that means is that there are huge low-hanging fruit that any AI or *random person designing AI in their garage* can find by just grasping in the dark a bit, to get huge improvements at accelerating speeds.\n\n\nsome people think AI improvement can hit unexpected difficulty bumps. to me, that's not the default, and i don't see any reason to assume it to be true.
i expect there to be countless [*ReLU instead of sigmoid*](https://www.lesswrong.com/posts/7im8at9PmhbT4JHsW/ngo-and-yudkowsky-on-alignment-difficulty)-type improvements waiting to happen, pointing fast in the direction of the *AI things that work* attractor. and you don't need all of them: you just need some, and you rapidly find others. **all roads lead to superintelligent AI**.\n\n\nthe state of affairs we observe now ([1](https://github.com/features/copilot/), [2](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html), [3](https://twitter.com/davisblalock/status/1558347542101839873), etc) is exactly what i'd think being [at the cusp of criticality](https://aiimpacts.org/ai-and-the-big-nuclear-discontinuity/) looks like. the terribleness of our software and AI tech is such that the potential of what's doable with our hardware is *immense* compared to what exists now. if what we can do now by bruteforcing AI is GPT or LaMDA, then what AIs can design once they start designing new stuff *even just a bit above criticality* has plenty of room to get superintelligent, *fast*.\n\n\nit's all a matter of whether someone and/or something grasps in the dark a bit to find the few improvements necessary to fall very quickly into the capability attractor, and then [we all die](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence).", "date_published": "2022-08-13T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "bc0b839443c68ca8fdebbca7fa45d753", "title": "the foundation book", "url": "https://carado.moe/foundation-book.html", "source": "carado.moe", "source_type": "blog", "text": "the foundation book\n-------------------\n\n\n[this timeline dies](ai-doom.html). if it doesn't, [things don't look good](how-timelines-fall.html).\n\n\nin asimov's *foundation* series (for which this is mild premise spoilers), some intellectual elite of a civilization put together *The Foundation*, a planet-organization tasked with preserving mankind's knowledge during an expected incoming dark age. it might be worth considering doing something similar in our own timeline: write a large book, with instructions for learning to read it as well as providing the [bases of rationality](https://www.readthesequences.com/), and warning its readers to be wary of AI risk and other existential risks in their civilization. we'll have failed, and we want them to have a better shot.\n\n\nit needs to largely encourage its propagation, but also allow some ability for future people to improve upon it. it needs to allow its users to quickly outcompete others. it needs to be physically printed in many copies, to be distributed across the world — including poor countries — to maximize our chances.
as the foundation for something more like [dath ilan](https://www.lesswrong.com/tag/dath-ilan) and less like [here](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities).\n\n\na whole organization would be even better, if it can survive — but i feel like a repository of ideas is a more robust format, and communities can form around it and implement it.", "date_published": "2022-08-12T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "19fa54abebcfedf8b2f9052291bbc72a", "title": "goal-program bricks", "url": "https://carado.moe/goal-program-bricks.html", "source": "carado.moe", "source_type": "blog", "text": "*(this post has been written for the first [Refine](https://www.lesswrong.com/posts/D7epkkJb3CqDTYgX9/refine-an-incubator-for-conceptual-alignment-research-bets) blog post day, at the end of the week of readings, discussions, and exercises about epistemology for doing good conceptual research)*\n\n\ngoal-program bricks\n-------------------\n\n\nthis is the follow-up to [the Insulated Goal-Program idea](insulated-goal-program.html) in which i suggest doing alignment by giving an AI a program to run as its ultimate goal, the running of which would hopefully realize our values. in this post, i talk about what pieces of software could be used to put together an appropriate goal-program, as well as some examples of plans built out of them.\n\n\n* \"[**ems**](https://en.wikipedia.org/wiki/The_Age_of_Em)\": uploaded people, who could for example evaluate how much a given situation satisfies our values; if they are uploads of AI alignment researchers and engineers, they could also be put to work on alignment and AI software — all of this *inside* the goal-program.\n* \"**elves**\": neural net models, or patchworks of software likely containing those, designed to be a rough representation of our values, carry a rough subset of our skills, or be some other subset of the human mind. we might have to make do with those if running ems is impossible (for example because brain scan technology is unavailable), or if running elves poses less of an [S-risk](https://en.wikipedia.org/wiki/Suffering_risks) than running ems in some situations.\n* **collaborative environments**, such as collaborative programming environments or full 3D virtual environments, for **ems** and/or **elves** to work in together. those are instrumental environments designed to let their users develop something.\n* \"**utopia infrastructure**\": pieces of software designed to robustly support [beings](utopia-scopes.html) living together in [utopia](%E2%88%80V.html), as i've previously designed for [a video game idea](game.html) (which [i'm no longer working on](life-refocus.html)). these are places designed for long-term (possibly forever-term) inhabitation by endless persons, under hopefully utopic conditions.\n* **program searches**: programs iterating through programspace, typically in order to find worlds or models or programs which match some criteria. just like \"a bunch of ems and/or elves programming together\", program searches can be used to produce more of the things in this list.
that said, [program searches can find demons](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign), which is something to look out for; a general program search utilizing its output for anything must either fully sanitize what it does use, or skip demonic programs to begin with.\n* **observer programs**: programs which consume a slice of computation (typically a world simulation) for examination, and maybe even editing, typically by an em or an elf.\n* **a simulation of earth** would be useful if it were [somehow](finding-earth-ud.html) obtainable in reasonable computational time. it could serve to extract alignment researchers from it in order to spawn a simulation of them without having to figure out brain scanning; it could be used to create an alternate history where AI researchers are somehow influenced, possibly at an early date; it could also be used to recover the full population of earth in order to give them access to utopia once we have a satisfactory instance of it.\n* **a dump of (as much as possible of) the internet**, which could be useful both to locate the earth, and to re-extrapolate things like humans or earth or maybe specific persons.\n\n\nhere are some naive examples of outlines for goal-programs which seem like they could be okay:\n\n\n* a simulation of a bunch of researchers, with a lot of time to figure out alignment (as in [the peerless](the-peerless.html)).\n* a bunch of elves forever evaluating various light-cones of a program search for worlds, keeping ones with seemingly good contents and discarding ones with seemingly bad contents — although this idea is potentially quite vulnerable to demon-laden worlds.\n* a bunch of elves using a copy of the internet to re-extrapolate ems which could then figure out AI alignment\n* any of these schemes, except with ems or elves checking at a level above that everything goes well, with the ability to abort or change plans\n\n\nthese feel like we could be *getting somewhere* in terms of figuring out actual goal-programs that could lead to valuable outcomes; at the very least, it seems like a valuable avenue of investigation. in addition, [unlike AGI](https://www.alignmentforum.org/posts/72scWeZRta2ApsKja/epistemological-vigilance-for-alignment#Iterability__don_t_mess_it_up), many pieces of the goal-program can be individually tested, iterated on, etc. in the usual engineering fashion.", "date_published": "2022-08-12T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "554dd55ede80c094d4a182af5adec8c1", "title": "anthropic mindfulness", "url": "https://carado.moe/anthropic-mindfulness.html", "source": "carado.moe", "source_type": "blog", "text": "anthropic mindfulness\n---------------------\n\n\nin the past few months, especially since reading a lot about [anthropics](https://www.lesswrong.com/posts/RnrpkgSY8zW5ArqPf/sia-greater-than-ssa-part-1-learning-from-the-fact-that-you) and experiencing some psychedelics, i have experienced moments that i'd describe as *anthropic mindfulness*.\n\n\nthey consist of stepping back from automatically processing not just my environment (as in [mindfulness](https://en.wikipedia.org/wiki/Mindfulness)) but also from automatically assuming the contents of my mind, so that they become [factored out](anthropic-reasoning-coordination.html) of what feels like normal background environment/assumptions/ground truth.\n\n\nthose moments fill me with feelings of wonder that might be transcribed into words as \"woah!
what a weird world to exist, and what a weird mind to inhabit! how come the one point of reality that seems real is this one?\"\n\n\nbecause of my general positive experience of life, they tend to make me optimistic about things. without doing any reasoning, *at least from this moment-location*, reality looks like good things.\n\n\nit also helps reframe memories as merely artefactual evidence rather than *ground truth*, in the way that they normally feel, as do [other ungrounded intuitions](ruling-out-intuitions-materially-acausal-intuitions.html).", "date_published": "2022-08-12T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "b8e45abcc933e5b4bd38cb9802dedb90", "title": "what does it mean to value our survival?", "url": "https://carado.moe/value-yourself-surviving.html", "source": "carado.moe", "source_type": "blog", "text": "what does it mean to value our survival?\n----------------------------------------\n\n\nlet's say that i want to continue living, or that i want everyone currently on earth to continue living. what *does* that value look like, in a formalized format?\n\n\nin my [goal-program framework](insulated-goal-program.html), i can see two ways to implement this:\n\n\n* **embedded**: we make the goal-program contain either a full copy of myself/earth, or enough information to [re-find it](finding-earth-ud.html)\n* **physical**: we somehow make the values [point](https://www.lesswrong.com/tag/the-pointers-problem) to the real world — and then determine what counts as the thing and/or locate the thing we care about within it\n\n\nsome constraints apply:\n\n\n* saving [*people who existed in the past* — and maybe *aliens outside of our light cone*](utopia-scopes.html) — seems like it would probly require [resimulating earth](finding-earth-ud.html); hence the **embedded** solution\n* the **physical** solution might require significant philosophical work, or might be [actually completely infeasible](tiling-unavoidable.html)\n* computational/storage limits might force us to stick to the **physical** solution\n\n\ni believe there is [some work](https://www.lesswrong.com/posts/gHgs2e2J5azvGFatb/infra-bayesian-physicalism-a-formal-theory-of-naturalized) towards the **physical** solution, so part of the reason i tend to focus on the embedded solution is that it seems a lot less explored.", "date_published": "2022-08-11T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "6a8092eacae27c3f58ebbf0363dc0eaa", "title": "future paths", "url": "https://carado.moe/future-paths.html", "source": "carado.moe", "source_type": "blog", "text": "future paths\n------------\n\n\nwe are at a node in a state graph (or [MDP](https://en.wikipedia.org/wiki/Markov_decision_process)), where every state points to a bunch of other states, notably by way of:\n\n\n* irreversible superintelligent singleton implementation, whether it leads to [doom, utopia, or hell](timeline-codes.html)\n* [civilizational collapse](how-timelines-fall.html), [smaller X-risks](smaller-x-risk.html), and other things that give us a [mulligan](https://en.wikipedia.org/wiki/Mulligan_%28games%29) for AI risk mitigation\n* [pivotal acts](https://arbital.com/p/pivotal/) which let us [flip the gameboard](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities)
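\n\n\n(to picture the kind of graph meant here, a crude toy sketch of mine; the states and transitions are simplifications rather than claims.)\n\n\n```python\n# each state maps to the states reachable from it; 'utopia', 'doom' and 'hell'\n# are absorbing, i.e. irreversible once a superintelligent singleton locks them in.\ngraph = {\n    'now': ['utopia', 'doom', 'hell', 'collapse', 'post-pivotal-act'],\n    'collapse': ['now'],             # a mulligan: we eventually get to retry\n    'post-pivotal-act': ['utopia'],  # gameboard flipped, more time to get it right\n    'utopia': [], 'doom': [], 'hell': [],\n}\n\nirreversible = [state for state, successors in graph.items() if not successors]\nprint(irreversible)  # ['utopia', 'doom', 'hell']\n```\n\n\n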
on one hand, booting up irreversible superintelligent singletons should be very carefully considered, as the irreversibility forces us to commit to [a specific system](%E2%88%80V.html), potentially ruling out whole [scopes of utopia](utopia-scopes.html) entirely.\n\n\non the other hand, it is to be kept in mind that, even though the current world sure seems like it has enough [quantum amplitude](quantum-immortality-local-deaths.html) or [anthropic juice](https://www.lesswrong.com/posts/Hcc9fopx7sRexYhhi/anthropics-and-the-universal-distribution/) to feel pretty real, we must be careful of generating civilization-wide (possibly quantum) [micromorts](https://en.wikipedia.org/wiki/Micromort) damaging the realness of valuable future states. it might be that we only have 1 unit of anthropic juice to allocate to future states, some of which gets consumed every time we create a bunch of [dead timelines](timeline-codes.html).\n\n\ni believe it is useful for people and groups working on [AI risk mitigation](say-ai-risk-mitigation-not-alignment.html) to keep a (mental or physical) picture of this graph, and carefully choose where they want to aim. making the correct consequentialist choice is not a trivial matter, and indeed blindly following what you believe to be your best shot without *looking around* could be a large mistake.", "date_published": "2022-08-11T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "288789e12a09c213d4bc3ad7a5f822e0", "title": "scopes of utopia", "url": "https://carado.moe/utopia-scopes.html", "source": "carado.moe", "source_type": "blog", "text": "scopes of utopia\n----------------\n\n\nhow good can we make the future? would we prefer 0.1 quantum amplitude of a really good utopia, or 0.2 quantum amplitude of a kinda okay utopia? what does \"kind of utopia\" even mean?\n\n\nin this post i list a combinatorial set of possible utopias. i think i want a\n\n\n* [**sublime** / **concrete**](https://www.lesswrong.com/posts/SLw2MEgxFtiKAqgQ5/actually-possible-thoughts-on-utopia) utopia, where\n* **some** / **all living** / **all living and past** / **all possible**\n* **humans** / **moral-patient living beings** / **moral-patient information systems**\n* **on earth** / **in light cone** / **everywhere**\n* **have their abstract values satisfied** / **live** / **live and have their abstract values satisfied**\n\n\nthis lets us get a classification of what various long-term alignment plans could aim for; it also offers a framework to discuss what this perspective might be missing, as well as to get an idea of what we could be aiming for.\n\n\nof course, ideally, we would prefer to not need to figure this out soon, and instead buy ourselves more time before we commit to [a global utopia shape](%E2%88%80V.html).
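\n\n\n(for concreteness, the classification above can be enumerated mechanically; a toy sketch of mine, with the dimension labels abbreviated.)\n\n\n```python\nfrom itertools import product\n\n# abbreviated versions of the dimensions listed above\nkind = ['sublime', 'concrete']\nwho = ['some', 'all living', 'all living and past', 'all possible']\nbeings = ['humans', 'moral-patient living beings', 'moral-patient information systems']\nwhere = ['on earth', 'in light cone', 'everywhere']\nwhat = ['values satisfied', 'live', 'live + values satisfied']\n\nscopes = list(product(kind, who, beings, where, what))\nprint(len(scopes))  # 216 combinations\nprint(scopes[0])    # ('sublime', 'some', 'humans', 'on earth', 'values satisfied')\n```\n\n\n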
but we may not have that option, and even if we do, it might be useful to have some idea where we might be wanting to go.", "date_published": "2022-08-11T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "a9d7cfc7c8ce6b7267392a12840e837d", "title": "the Insulated Goal-Program idea", "url": "https://carado.moe/insulated-goal-program.html", "source": "carado.moe", "source_type": "blog", "text": "*(this post has been written for the first [Refine](https://www.lesswrong.com/posts/D7epkkJb3CqDTYgX9/refine-an-incubator-for-conceptual-alignment-research-bets) blog post day, at the end of the week of readings, discussions, and exercises about epistemology for doing good conceptual research)*\n\n\nthe Insulated Goal-Program idea\n-------------------------------\n\n\nthe **Insulated Goal-Program** idea is a framework for [AI alignment](https://en.wikipedia.org/wiki/AI_alignment) which feels more potentially tractable than most other ideas i've seen.\n\n\nit splits the task of building aligned AI into two parts:\n\n\n1. building a very intelligent AI which, when running, will have the axiomatic goal of running a program, which we'll call goal-program\n2. building said goal-program, such that when ran, it hopefully creates valuable outcomes\n\n\nthe fact that the AI's goal is to run a program, whose functioning it is motivated to run without altering it, lets us design a goal-program that doesn't have to deal with an adverse optimizing superintelligence — it is insulated from the AI's choices.\n\n\n(or at least, there's supposedly no reason for the AI to run long stretches of variants of that program, because of the computational cost for supposedly no gain)\n\n\none way to insulate the goal-program is to make it fully deterministic. ideally, however, we would want it to be able to receive as input the state of the world before the AI modifies the world — which it will [pretty much inevitably](https://en.wikipedia.org/wiki/Instrumental_convergence) do, destroying everything and tiling the universe with computronium dedicated to running the goal-program.\n\n\nthis is how this idea solves the [\"facebook AI destroys the world six months later\"](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) problem: the AI will run the goal-program at any cost, including turning everything that exists into computronium.\n\n\nbut that's okay: the point here is for us, or at least our values, to survive inside the goal-program. that is the bullet i bite to allow this idea to function: i give up on the literal physical world around us, in the hopes that we're satisfied enough with getting to determine what it is that runs on the computronium that everything is turned into.\n\n\nmaking the goal-program able to be ran on quantum compute [might allow us to resimulate earth](finding-earth-ud.html) as well as generally gain a lot more compute from the universe, especially if [BQP ≠ P](https://en.wikipedia.org/wiki/BQP#BQP,_P,_and_NP).\n\n\nthis whole framework splits the problem of aligned AI cleanly into two parts: the design of the AI-insulated goal-program, and the design of the AI whose goal will be to run said program. the goal-program's insulatedness lets us design utopias or utopia-finding-programs which don't have to deal with adverseriality from the AI, such as vaguely-friendly-NNs evaluating the quality of simulated worlds, or simulated researchers figuring out alignment with as much time as they need. 
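(a very rough schematic of that split, as code; the function names, the snapshot type, and the sha256 stand-in body are all mine, and only meant to show the shape: the goal-program is a pure function of a snapshot taken before the AI acts, so nothing the AI later does to the world reaches into it.)

```python
import hashlib

# (2) the goal-program: deterministic, taking at most a snapshot of the world
# captured before the AI starts acting; the AI's later actions on the physical
# world never feed back into this computation, which is the "insulation"
def goal_program(pre_ai_world_snapshot: bytes) -> bytes:
    # stand-in body; the real thing would be e.g. a resimulation of earth, or
    # simulated researchers working out alignment with unlimited subjective time
    return hashlib.sha256(pre_ai_world_snapshot).digest()

# (1) the AI side: its axiomatic goal is only "this exact program gets run,
# faithfully and at scale", which gives it no incentive to tamper with what
# the program computes, only to acquire compute and run it as-is
def goal_satisfied(snapshot: bytes, claimed_output: bytes) -> bool:
    return claimed_output == goal_program(snapshot)
```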
[i write more about goal-program design here](goal-program-bricks.html).\n\n\nit also resolves some questions of [embedded agency](https://www.lesswrong.com/tag/embedded-agency): the goal-program is indeed *smaller* than the agent, so it might only need notions of embedded agency resolved for how it thinks about the outside world it's turning to computronium.", "date_published": "2022-08-10T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "3a27b2c99696cc75270098d656dde135", "title": "unviable moral patients", "url": "https://carado.moe/unviable-moral-patient.html", "source": "carado.moe", "source_type": "blog", "text": "unviable moral patients\n-----------------------\n\n\nin [∀V](%E2%88%80V.html) i talk about the plausible ethical unviability of human children.\n\n\nbut, there is a broader category of moral patient whose existence i believe must be prevented due to their ethical unviability. for example, a moral patient which strongly desires to suffer continuously and to modify itself in order to never change its mind about that, cannot exist without either their preferences being unreasonably dissatisfied or their existence generating too much suffering.\n\n\nit might be that such a moral patient cannot coherently exist — but if it can, i still oppose letting it come into existence, due to this issue.\n\n\ndepending on your ethical framework and your valuing of [various things](genuineness-existselfdet-satisfaction-pick2.html), it may be that no, some, or all potential moral patients are unviable. figuring out who is, of course, an important question that might need figuring out.", "date_published": "2022-08-10T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "8f5ed1d04f0087dfe4eb94a80c909b2d", "title": "ruling out intuitions about materially acausal things", "url": "https://carado.moe/ruling-out-intuitions-materially-acausal-intuitions.html", "source": "carado.moe", "source_type": "blog", "text": "ruling out intuitions about materially acausal things\n-----------------------------------------------------\n\n\ni have an intuition — *not just*, and *preceeding*, a reasoned belief — that i have a weird consciousness-observer-soul-thing. i also have an intuition that moral realism is true, that the arrow of time moves forwards rather than backwards [or sideways](https://web.archive.org/web/20201112014828/http://kim.oyhus.no/QuantumMechanicsForProgrammers.html), that [i am a continuous stream of consciousness rather than an instant-observer](https://opentheory.net/2018/09/a-new-theory-of-open-individualism/) (such that you might particularly [worry about teleportation](https://www.youtube.com/watch?v=nQHBAdShgYI)), that my memories must be true (rather than the universe appearing five minutes ago), and many others.\n\n\nthose can all be ruled out with a simple device: if any of these things were the case, could that causate onto whether such an intuition fires? for all of them, the answer is no: because they are immaterial claims, the fact of them being true or false *cannot* have causated my thoughts about them. therefore, these intuitions must be discarded when reasoning about them.\n\n\nthat does not mean that those statements are all *necessarily* false, just that my intuitions for them cannot be providing bayesian evidence. 
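(in made-up numbers: if the intuition fires just as readily whether the immaterial claim is true or false, the likelihood ratio is 1, and the posterior stays exactly at the prior.)

```python
# toy Bayes update: an intuition that the claim cannot causally influence
# gives a likelihood ratio of 1, and a ratio of 1 means no update at all
def posterior(prior, p_intuition_if_true, p_intuition_if_false):
    num = prior * p_intuition_if_true
    return num / (num + (1 - prior) * p_intuition_if_false)

print(posterior(0.5, 0.9, 0.9))  # 0.5: the intuition moved nothing
print(posterior(0.5, 0.9, 0.3))  # 0.75: what genuinely informative evidence does
```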
after all, the amount to which bayesians update a belief based on evidence should be as a function of how likely such evidence is to arise given that the belief be true rather than false — even when the evidence is intuition.", "date_published": "2022-08-09T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "a8cb7ebea3ffe3185c7224f694c35e70", "title": "quantum immortality and local deaths under X-risk", "url": "https://carado.moe/quantum-immortality-local-deaths.html", "source": "carado.moe", "source_type": "blog", "text": "quantum immortality and local deaths under X-risk\n-------------------------------------------------\n\n\nassume quantum immortality, for mankind as a whole when facing [X-risks](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence). then, depending on two factors,\n\n\n* the probability `P` with which a given non-doomed timeline becomes doomed — when it passes a point of no return, where the extinction of all persons is guaranteed,\n* the average time `T` between the point of no return is reached, and the point at which everyone *actually does* die,\n\n\nit can be the case that the majority of person-experience happening can be in-between point-of-no-return and actual-extinction (in red below) rather than in continuously surviving (green below)\n\n\n![](quantum-immortality-local-deaths.svg)\n\n\nthis probably matters both from an [anthropics](https://www.lesswrong.com/posts/RnrpkgSY8zW5ArqPf/sia-greater-than-ssa-part-1-learning-from-the-fact-that-you) and from an ethics perspective. this is a good reason to work on reducing X-risk (reducing `P`) even under the assumption of quantum immortality, if you value knowing that you're probably not uselessly working in an already-doomed timeline, or if you believe that doomed timelines experience particularly more suffering. another way to avoid spending experience in those doomed timelines is to reduce `T`: to make sure that, once doomed, we die as soon as possible.\n\n\nin addition, if you think forking the timeline costs us [forking bits](forking-bitrate-entropy-control.html) — if you think we can only fork the timeline so much, and we wanna preserve as many forks as we want for utopia — then reducing P becomes more important than reducing T, because you save more \"realness juice\" or \"forking bits\" for later, when we've solved AI alignment and start populating the timelines with utopia.\n\n\nwhich, thankfully, agrees with the straightforward no-quantum-immortality perspective on X-risks: reducing the chances of it is the important thing.", "date_published": "2022-08-06T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "3ed2af11de1288d5642ef2ec61bac402", "title": "probability under potential hardware failure", "url": "https://carado.moe/probability-hardware-failure.html", "source": "carado.moe", "source_type": "blog", "text": "probability under potential hardware failure\n--------------------------------------------\n\n\nsuppose you program a computer to do bayesian calculations. it does a bunch of math with probability numbers attached to, for example, logical belief statements.\n\n\nbut, suppose that each time you access one of those numbers, there is a tiny chance `ε` of hardware failure causing the memory/register to return an erroneous number — such as [cosmic ray bit flips](https://en.wikipedia.org/wiki/Soft_error).\n\n\nthis fact can inform our decisions about how many bits of information we are to store our numbers as. 
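(a toy sketch of that failure model, with an assumed per-read corruption chance of one in a million; it just shows the clamping and how corruption risk piles up over repeated accesses.)

```python
import random

EPSILON = 1e-6  # assumed chance that any single read returns a garbage value

def clamp(p):
    # no computed probability can justifiably leave this range: there is
    # always at least an EPSILON chance the result is a hardware artifact
    return min(max(p, EPSILON), 1.0 - EPSILON)

def noisy_read(p):
    # one memory access under the failure model described above
    return random.random() if random.random() < EPSILON else p

# a value that gets read and re-stored n times has been corrupted at least
# once with probability 1 - (1 - EPSILON)**n, which grows with every access
p = 0.25
for _ in range(1_000_000):
    p = noisy_read(p)
print(clamp(p))
```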
indeed, the computer can never have probability ranges outside of `[ε ; 1-ε]`: the probabilities are clamped by the chance that, while they were computed, a random hardware failure occured.\n\n\nif a probability is calculated that is a function of many calculations, then the errors can accumulate. the computer might be able to rerun the computation to be more sure of its result, but it will never escape the range `[ε ; 1-ε]`.\n\n\nthis constraint feels to me like it would also limit the number of bits of precision one can meaningfully store: there is only so many ways to combine numbers in that range, with errors at each step of computation, before the signal is lost to error noise. i'm not sure and haven't worked out the math, but it may turn out that arbitrary-precision numbers, for example, are ultimately of no use: given a constant `ε`, there is a constant `f(ε)` maximum number of useful bits of precision.\n\n\nthis issue relates to [the uncertainty of 2+2=4](uncertainty-2+2=4.html): logical reasoning on a computer or on a human is still probabilistic/error-prone, because of hardware failures.", "date_published": "2022-08-06T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c444a86094626b8760ab5b896eb65de0", "title": "tiling the cosmos might be unavoidable", "url": "https://carado.moe/tiling-unavoidable.html", "source": "carado.moe", "source_type": "blog", "text": "tiling the cosmos might be unavoidable\n--------------------------------------\n\n\nit is possible that the value of \"i want this to happen *in this seemingly-top-level material plane*\" — rather than in a *some* computation *somewhere*, possibly within arbitrarily many layers of simulation — is incoherent, and simply *cannot* be given to an intelligent enough artificial intelligence. in this view, given the goal \"please instantiate a strawberry on this plate, and keep everything else around that plate the same\", there is no way to give that goal to a sufficiently intelligent AI such that *killing everyone and tiling the cosmos with computronium running an exact simulation of this world except with a strawberry on the plate* does not satisfy that goal.\n\n\nthis could be the case, for example, if — though not *necessarily* if — the cosmos in some fundamental sense contains [first-class](https://en.wikipedia.org/wiki/First-class_citizen) *cycles of universe-bubbles*, such as [this example with rule 30 and rule 110 causating one another](above-paperclips-2.html).\n\n\nmy intuition is that, if this is the case, then it is actually fine for everyone to be forcibly uploaded. if it is impossible to coherently care about what level of reality we run on, then with sufficient thinking we would find that we don't in fact care about it. if you disagree, if you think you profoundly care about things that are fundamentally not coherently pursuable, then i worry that ethics/alignment might pose to you significant difficulties.\n\n\nnevertheless, it is possible i may one day end up agreeing with you. if that is to be the case, who knows what the solution is? 
perhaps to create \"dumb\" superintelligences that are *merely* smart enough to prevent any other superintelligences from arising — such as by permanently preventing all non-human-brain computation and guaranteeing some integrity of human brains — and then leaving humankind's fate in its own hands.", "date_published": "2022-08-03T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "78b80c0a6e2185a787e0de99258d0538", "title": "isn't it weird that we have a chance at all?", "url": "https://carado.moe/weird-chance.html", "source": "carado.moe", "source_type": "blog", "text": "isn't it weird that we have a chance at all?\n--------------------------------------------\n\n\nwe are facing imminent [AI X-risk](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence). but, we have a bunch of tools around us to figure out that this is a problem, and to even start thinking about solutions.\n\n\nwe have enough physics to think about heat death, enough computational complexity to think about how NP-complete solutions are probly not reasonable, enough rationality to organize a small movement around AI alignment work and figure out things like solomonoff induction or the malignhood of the universal prior, the ability to do some anthropics, and even a few mild ideas as to what the fuck human values even are.\n\n\nisn't this kind of weird? it feels to me like most civilizations about to die of AI X-risk would be entirely missing several to most of these; but somehow, unless i'm missing a crucially important unknown unknown field, it does kind of look like we have almost enough to work with in the various fields required. even the geopolitical situation and the public awaraness situation, while disastrous, are not entirely hopeless.\n\n\ni wonder if this has any meaning, whether it be anthropic or simulation theoritic or otherwise.", "date_published": "2022-08-02T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "139e11c79eeb57d474e74e50dc5f0ef5", "title": "an anthropics example", "url": "https://carado.moe/anthropics-example.html", "source": "carado.moe", "source_type": "blog", "text": "an anthropics example\n---------------------\n\n\nin the course of discussing anthropics with a friend — notably the [SIA vs SSA](https://www.lesswrong.com/posts/RnrpkgSY8zW5ArqPf/sia-greater-than-ssa-part-1-learning-from-the-fact-that-you) discussion — i have produced an example case which i believe demonstrates not just the usefulness of anthropics, but also how SIA and SSA can differ. it goes as follows:\n\n\nsuppose you ask the question, \"if a civilization were to follow roughly the same technological progress as us, would we expect them to have killed themselves with [AI doom](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence) by the time of their 2022\".\n\n\nsuppose you have reduced your thinking to two hypotheses, which you believe to the following extents:\n\n\n* `S` (safe) hypothesis: 1/3 chance that with only the technology of up until now, they would still be alive basically-for-sure.\n* `R` (risk) hypothesis: 2/3 chance that with only the technology of up until now, they would have a 3/4 chance of having killed themselves.\n\n\n(to be clear, this pair of hypotheses does not represent my actual beliefs — they are a toy example i provide for the purpose of this post)\n\n\nhere, the question is: which of the hypotheses is true? is it true that, given our technology, they almost certainly would still be alive? 
or is it instead true that, given our technology, they would only have a 1/4 chance of still being alive?\n\n\nthis question is clearly useful: using anthropics, we might be able to get (from the fact that we exist) information about the risk posed our current level of technology — and such information would surely be useful to reason about the risk posed by near-future technology, as it is roughly similar to ours.\n\n\nthe scenario looks like this:\n\n\n\n```\nif S hypothesis is true (1/3):\n survive\nif R hypothesis is true (2/3):\n if 1/4 chance that we survive despite the risk:\n survive\n else:\n extinct\n\n```\n\nnow, is there anywhere where such a scenario has been ran, where we can make an observation? well, yes: *our own* world! we notice that, in our own world and despite our level of technology, we are surviving rather than extinct. how does this observation update us with regards to the prior?\n\n\nif you are using SIA, you're comparing the *expected number of people in your epistemic situation* (hereby `#ES`) across both possibilities (`S` and `R`). in `S`, there is four times as many expected people in your epistemic situation than in `R` (this is true however you draw the set of \"people in your epistemic situation\"!). so, you update:\n\n\n\n```\nSIA(S)\n = (P(S) × #ES(S)) / (P(S) × #ES(S) + P(R) × #ES(R) )\n = (1/3 × #ES(S)) / (1/3 × #ES(S) + 2/3 × #ES(S) / 4)\n = (1/3 × #ES(S)) / (1/2 × #ES(S) )\n = (1/3 ) / (1/2 )\n = 2/3\n\n```\n\n(`P(x)` is the prior probability of `x`)\n\n\nso, SIA updates from the fact that you exist, towards `S` from the prior probability of 1/3 to a posterior probability of 2/3.\n\n\non the other hand, in SSA you're comparing not the raw `ES`, but the proportion of *your reference class* (hereby `RC`) that is in your `ES`. notice that, *in this scenario*, it does not really matter what kind of reference class you draw up — it could be just you, all humans doing anthropics reasoning, all humans, or all living beings, and the answer would be the same in all cases because AI doom causes *all* of those to be destroyed — the proportion `#ES / #RC` is the same, because none of the hypotheses change the number of, say, you vs humans, or humans vs living beings. either everything dies or everything lives.\n\n\nthe only reference class that could change something here is either distant aliens, or the AI that kills everything itself (or it subsystems) — but for simplicity, we'll rule those out. 
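(the SIA update above, redone as a tiny script; the priors and the 1/4 survival chance are just the toy numbers of this example, and the common factor in the observer counts cancels out.)

```python
# toy numbers from the scenario above
prior = {"S": 1/3, "R": 2/3}
# expected observers in our epistemic situation, up to a common factor that
# cancels: everyone under S, only the surviving 1/4 under R
observers = {"S": 1.0, "R": 1/4}

# SIA: weight each hypothesis by prior x expected observers, then renormalize
weights = {h: prior[h] * observers[h] for h in prior}
total = sum(weights.values())
print({h: w / total for h, w in weights.items()})
# {'S': 0.666..., 'R': 0.333...}: existing moves S from 1/3 up to 2/3
```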
i'll call the result of this restriction \"reasonable-SSA\".\n\n\nso, `#ES(S) / #RC(S) = #ES(R) / #RC(R)`.\n\n\n\n```\nSSA(S)\n = (P(S) × #ES(S) / #RC(S)) / (P(S) × #ES(S) / #RC(S) + P(R) × #ES(R) / #RC(R))\n = (1/3 × #ES(S) / #RC(S)) / (1/3 × #ES(S) / #RC(S) + 2/3 × #ES(S) / #RC(S))\n = (1/3 × #ES(S) / #RC(S)) / (1 × #ES(S) / #RC(S) )\n = (1/3 ) / (1 )\n = 1/3\n\n```\n\nso here, SSA makes no update from the fact that you exist — it stays at 1/3 for `S`.\n\n\ni believe that the fact that SIA and reasonable-SSA would bet on different things here (SIA bets on `S` and reasonable-SSA bets on `risk`), and that what you believe about this question could be very useful (we want to know how dangerous the technology we're using now is, because we don't want to die!), demonstrates the usefulness of anthropics as well as the importance of the SIA vs SSA question.", "date_published": "2022-07-26T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "73115f29046e1f4e088fcfd1db115b53", "title": "generalized values: testing for patterns in computation", "url": "https://carado.moe/generalized-values-testing-patterns.html", "source": "carado.moe", "source_type": "blog", "text": "generalized values: testing for patterns in computation\n-------------------------------------------------------\n\n\ni believe the [True Name](https://www.alignmentforum.org/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation) for our [formalized values](what-is-value.html) will be a program running a computation (typically a [simulation or bunch of simulations](%E2%88%80V.html)) in which those values are maximized or satisfied.\n\n\nin [*generalized computational interpretability*](generalized-computation-interpretability.html), i talk about what deeply caring about a general computation looks like. in this post, i will outline an example of this.\n\n\nsuppose you are given a program A described in [SKI calculus](https://en.wikipedia.org/wiki/SKI_combinator_calculus), and you want to know whether it encodes a simulation of [conway's game of life](https://en.wikipedia.org/wiki/Conway's_Game_of_Life) B with a hardcoded specific initial state.\n\n\nin the general case, to get the state of a specific cell in a conway's game of life after `n` steps takes `Θ(n³)` time: `n` because of the time steps, and `n²` because of the two-dimensional light cone. for this example, let's say that this is actually the lowest bound to get the states you care about in this particular initial state of conway's game of life.\n\n\nnow, if there is an \"extracting\" program E such that for any coordinates `x,y` and time step `n`, taking as input the entire history of running A for `Θ(n³)` steps, it returns in less than `Θ(n³)` — for example in `O(log n)` — a value that is always equal to the state of B at `x,y,n`, then A encodes the computation B: A must have \"done some of the work\" of running B, because we can extract that value from A without re-doing all of the required work.\n\n\non the other hand, a formal proof that no such program exists can demonstrate that A does not encode B.\n\n\nit could be that the problem is sometimes undecidable — i.e. there exists neither a program E (or proof that it exists), nor a proof that it doesn't exist. 
this seems fine to me; for example, when you're unable to determine whether a computation encodes suffering, just don't run it, or only run it for a limited amount of steps.\n\n\n(to generalize this to constant-time or constant-size patterns, and to help us figure out the constants in those big-O/Θ's, perhaps information theory can help)\n\n\nif the specific conway's game of life pattern B you're testing for happens to already be computable in less than `Θ(n³)`, then whatever complexity is its lowest bound (if it has one) is the new one under which E must run.\n\n\nhopefully this can give us an idea as to what formalized shape our values (avoiding suffering, [etc](core-vals-exist-selfdet.html)) should take, and how to create a world that realizes them.", "date_published": "2022-07-01T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "de6965cbdab264cf825f7b7ac5699439", "title": "recommending Hands and Cities", "url": "https://carado.moe/hands-and-cities.html", "source": "carado.moe", "source_type": "blog", "text": "*(2022-10-12 edit: handsandcities.com seems to be down, so i've replaced the links to posts with links to the same posts on lesswrong)*\n\n\nrecommending Hands and Cities\n-----------------------------\n\n\nearly this year, i found out about a blog called [*Hands and Cities*](https://handsandcities.com/).\n\n\nit explores various topics, notably ethics and anthropics, in an exploratory style i find not dissimilar to my own, and generally easy to understand; and some of the ideas there are genuinely novel and fun to consider.\n\n\nwhile you may have noticed that i've started heavily referencing it through links on this blog, in this post i'm explicitely recommending [*Hands and Cities*](https://handsandcities.com/). in addition, i'll list some of my favorite posts:\n\n\n* [Alienation and meta-ethics (or: is it possible you should maximize helium?)](https://www.lesswrong.com/posts/3jeBKhek57sEkYGCs/alienation-and-meta-ethics-or-is-it-possible-you-should)\n* [Actually possible: thoughts on Utopia](https://www.lesswrong.com/posts/SLw2MEgxFtiKAqgQ5/actually-possible-thoughts-on-utopia)\n* [Contact with reality](https://www.lesswrong.com/posts/r7f58E8A85xLgWuqG/contact-with-reality)\n* [On the limits of idealized values](https://www.lesswrong.com/posts/FSmPtu7foXwNYpWiB/on-the-limits-of-idealized-values)\n* [In search of benevolence (or: what should you get Clippy for Christmas?)](https://www.lesswrong.com/posts/oXQDcyXJpMQTbaTMS/in-search-of-benevolence-or-what-should-you-get-clippy-for)\n* [Can you control the past?](https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past)\n* [SIA > SSA, part 1: Learning from the fact that you exist](https://www.lesswrong.com/posts/RnrpkgSY8zW5ArqPf/sia-greater-than-ssa-part-1-learning-from-the-fact-that-you) along with parts [2](https://www.lesswrong.com/posts/GJdymoviRywpBMXqc/sia-greater-than-ssa-part-2-telekinesis-reference-classes), [3](https://www.lesswrong.com/posts/QHDqfpMbb43JDbrxN/sia-greater-than-ssa-part-3-an-aside-on-betting-in), and [4](https://www.lesswrong.com/posts/d693Mc4ZDyhkj7wpc/sia-greater-than-ssa-part-4-in-defense-of-the-presumptuous)\n* [On the Universal Distribution](https://www.lesswrong.com/posts/XiWKmFkpGbDTcsSu4/on-the-universal-distribution) and [Anthropics and the Universal Distribution](https://www.lesswrong.com/posts/Hcc9fopx7sRexYhhi/anthropics-and-the-universal-distribution/)\n* [On infinite ethics](https://www.lesswrong.com/posts/5iZTwGHv2tNfFmeDa/on-infinite-ethics), but see 
also [this comment](https://www.lesswrong.com/posts/5iZTwGHv2tNfFmeDa/on-infinite-ethics?commentId=KkmEbtKFTpHTrF3Dn)\n* some of [On expected utility, part 1: Skyscrapers and madmen](https://www.lesswrong.com/posts/7J3ywHzWnghRtdpHQ/on-expected-utility-part-1-skyscrapers-and-madmen) and [part 2: Why it can be OK to predictably lose](https://www.lesswrong.com/posts/nPjMnPvMTajN9KM5E/on-expected-utility-part-2-why-it-can-be-ok-to-predictably)", "date_published": "2022-06-20T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "ffcac32dd9ab21db7a7c47eb1f9cf9ee", "title": "solonomonoff induction, time penalty, the universal program, and deism", "url": "https://carado.moe/solomonoff-deism.html", "source": "carado.moe", "source_type": "blog", "text": "solonomonoff induction, time penalty, the universal program, and deism\n----------------------------------------------------------------------\n\n\nif you subscribe to the standard [solomonoff induction](https://www.lesswrong.com/posts/EL4HNa92Z95FKL9R2/a-semitechnical-introductory-dialogue-on-solomonoff-1), with its uncomputable [universal distribution](https://www.lesswrong.com/posts/XiWKmFkpGbDTcsSu4/on-the-universal-distribution), then the answer to \"where are we in it?\" is often \"in the [universal program](universal-complete.html)\"; if it's not, then it's somewhere \"below\" the universal program — maybe in [rule 30](https://en.wikipedia.org/wiki/Rule_30) (or maybe rule 30 likely implements its own universal program, and serves as a minimum?)\n\n\nthis makes the universal distribution limited in usability: where are we? oh, this one program that contains everything. to remedy problems like this, people come up with [UDASSA, with the \"world and claw\" assumption](https://www.lesswrong.com/posts/Hcc9fopx7sRexYhhi/anthropics-and-the-universal-distribution/) — this roughly means that you have to describe not just a program that contains us, but also a program that *locates* us within the computation. so, sure, we're somewhere in the universal program, but now you have to *find where*, and this is where the bits of interestingness come back.\n\n\nan alternative is to penalize computations that run too long; [levin search](http://www.scholarpedia.org/article/Universal_search) uses exponential time, but [if exponentials put you off](https://arxiv.org/abs/1108.1791), [my time step proposal](udassa-time-steps.html) uses linear time.\n\n\nso now, the question remains: *where* (which also now includes *when*) are we in the prior? in the spirit of that question, let me get into a tangent.\n\n\ni was thinking the other day about what a fully rationality-equipped argument for atheism looks like. the basic structure is something like: {computation seems fundamental + simplicity seems to work} → solomonoff induction → the prior dislikes god. the rough idea is that a personified god is a needlessly rich computation to describe, to get to our world.\n\n\nwhile this remains true for human-like deities like the abrahamic gods, it makes an interesting argument for deism emerge: what occurs earlier in the prior? the raw code for our universe, or the code for an intelligent program which in turns designs our universe? it is hard to reason about this, but if the bootstraping code for an intelligent and stuff-caring intelligence is simple, then there's a reasonable argument to be made for some form of deism. 
notably: if the program of the cosmos turns out to be the bootstrap for a universe in which an intelligence appears that *itself* spawns *our* universe, i think that that mildly counts as deism.\n\n\nthe \"world and claw\" or \"locating time step\" approaches to locating us within the computation make this perhaps even more plausible: what's more likely to compute us *first*, a naive computation of everything everywhere, or a lazy computation focused on a set of places that includes us — for example, because we're interesting, notably this century.\n\n\ndepending on which universal prior you choose (the standard uncomputable universal distribution with the \"world and claw\", levin search with log time step, the universal program with linear time step, or some other universal prior), i think you can make reasonable arguments for our existence being the product of a caring entity. examining whether our laws of physics are more likely to be designed or emerged, as well as considering how interesting we think we are, might help.", "date_published": "2022-06-17T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "24457bf62fdecfa5d7056ae926fd0059", "title": "generalized computation interpretability", "url": "https://carado.moe/generalized-computation-interpretability.html", "source": "carado.moe", "source_type": "blog", "text": "generalized computation interpretability\n----------------------------------------\n\n\none approach to [AI alignment](https://en.wikipedia.org/wiki/AI_alignment) is this: develop technology to analyze what is going on within an AI, in order to determine what it's thinking. this is called [interpretability](https://en.wikipedia.org/wiki/Explainable_artificial_intelligence). the most generalized view of this is would involve something like [demon](https://www.lesswrong.com/posts/nyCHnY7T5PHPLjxmN/open-question-are-minimal-circuits-daemon-free) detection in arbitrary computations.\n\n\nin addition, in order to implement our values, an aligned AI should [deeply](generalized-adding-reality-layers.html) care about them, which is to say: it should care about implementing those values even in arbitrarily encoded computations. it's not enough that the humans in the simulation don't suffer, they should also be unable to run computers which contain other humans which suffer, as well as [many other weird ways suffering moral patients can occur](https://reducing-suffering.org/what-are-suffering-subroutines/). for example, it should ban all [homomorphic encryption](https://en.wikipedia.org/wiki/Homomorphic_encryption) it can't decrypt, because otherwise it might be missing on some suffering moral patients.\n\n\ni claim that there is a commonality to those two things: the detection of deeply, arbitrarily encoded computations. in one for demons, in the other for moral patients. 
i wonder if there are models of computation, perhaps laden with proofs of [benign](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign)-ness, which make detecting those systemically doable.\n\n\nthis doesn't necessarily have to do with value-laden things; the theory of generalized interpretability could in general be used to determine objectively whether a given computation deeply contains, say, [rule 30](https://en.wikipedia.org/wiki/Rule_30).\n\n\nfurthermore, it seems like for this theory to determine whether \"computation X deeply contains computation Y\", we would need to specify Y in a profound way which might be the kind of format we'd need values to be in for general alignment. as an example, an aligned AI could be tasked with running whichever computations contain persons but do not contain suffering; and the kind of specification those would look like would, in order to be able to apply *deeply*, need to be fully general. note that i'm not sure they would need to be fully general, but my intuition points that way.", "date_published": "2022-06-17T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "3e38605289997dba01bbb287488aa1a9", "title": "anthropic reasoning coordination", "url": "https://carado.moe/anthropic-reasoning-coordination.html", "source": "carado.moe", "source_type": "blog", "text": "anthropic reasoning coordination\n--------------------------------\n\n\nwhat can a piece of [anthropic reasoning](https://www.lesswrong.com/posts/RnrpkgSY8zW5ArqPf/sia-greater-than-ssa-part-1-learning-from-the-fact-that-you) determine? what counterfactuals is it comparing its observations with?\n\n\nif you are doing anthropic reasoning and observing something such as \"plants exist\" or \"i remember childhood\", how much you can gain from that information must depend on whether, if plants *didn't* exist or you *didn't* remember childhood, you would still be doing anthropic reasoning. if anthropic reasonings only or disproportionately exist in worlds where plants exist or where they occur in information systems with access to childhood memories, then you don't gain as much by observing those things.\n\n\nas such, if you want the community of anthropic reasoners to gain as much information as possible, you have to commit to partaking of anthropic reasoning in reasonably equal amounts no matter what situation you're in. for example, to make [the doomsday argument](https://en.wikipedia.org/wiki/Doomsday_argument) work, we have to commit to being agents who would partake of anthropic reasoning *even if* [we overcame doom](timeline-codes.html); otherwise, we have to at least somewhat discount how much that observation tells us. maybe we solve alignment, and utopia is brought about, and in utopia we never or extremely rarely do anthropic reasoning, for whatever reason — maybe because it's obsolete and we're spending all our time frolicking about, or maybe those of us who *would* partake of anthropic reasoning discover forms of thought or knowledge which make anthropic reasoning thoroughly obsolete. 
or maybe we're all busy [suffering forever](https://en.wikipedia.org/wiki/S-risk).", "date_published": "2022-06-17T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "3814266b53df30c0d6edc9f62ba2d195", "title": "\"AI risk drone\"", "url": "https://carado.moe/ai-risk-drone.html", "source": "carado.moe", "source_type": "blog", "text": "\"AI risk drone\"\n---------------\n\n\ni want to coin the term \"AI risk drone\" to mean \"person singlemindedly and as-optimizedly-as-possible dedicated to [AI risk mitigation](say-ai-risk-mitigation-not-alignment.html)\". the term \"alignment drone\" derives similarly.\n\n\ni am doing this because i am unaware of existing terminology to mean this, and because [given how bad things are](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) we might need significant coordinated effort towards that end; AI risk mitigation is hard and it seems that not that many people are able and willing to help it significantly, but maybe optimizing/\"brainwashing\" (but in a voluntary and good way) are more studied areas where we can get some not-too-high-hanging fruit to boost the work, perhaps merely in exchange for dollars.\n\n\nas for myself, [i'm pretty decided](life-refocus.html) that i'd want to become this if it were available.", "date_published": "2022-06-09T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "364bbf789dcfdcdb075f3fc8f36c9f13", "title": "outer alignment: politics & philosophy", "url": "https://carado.moe/outer-alignment-politics-philosophy.html", "source": "carado.moe", "source_type": "blog", "text": "outer alignment: politics & philosophy\n--------------------------------------\n\n\n[inner alignment](https://www.lesswrong.com/tag/inner-alignment) is \"just\" a hard engineering problem; [outer alignment](https://www.lesswrong.com/tag/outer-alignment) is the work of philosophy and politics and values which our species has been investigating and debating about for millenia.\n\n\nare human values they the same for everyone, or do they differ?\n\n\nshould we implement the values held by us, us now, everyone now, everyone ever, or everyone possible?\n\n\nwould some philosophical/political perspectives constitute [suffering risks](https://en.wikipedia.org/wiki/S-risk)? for example, if many people on earth want to be correct, and if they also believe there is a hell where some people suffer forever, does that mean satisfying their values entails creating an at least moderately-sized hell, the inhabitants of which in some sense \"value\" suffering forever? 
*is that okay?*\n\n\nif one person wants to go have gay sex, but ten christians want *nobody anywhere* to have gay sex, does [self-determination](core-vals-exist-selfdet.html) trump naïve utilitarian value satisfaction?\n\n\nor should we create one giant super-consensus society where we all value being [boringly blissful](https://twitter.com/Merryweatherey/status/1185636106257211392), and forego all diversity, such that our values are easily implemented and non-conflicting; do we desire harmony above diversity?\n\n\nif we value diversity, how much diversity should we instantatiate; what is the threshold of \"evilness\" at which a culture should not be able to exist?\n\n\nhow do we even reason about [existential self-determination](genuineness-existselfdet-satisfaction-pick2.html)?\n\n\nwhat about [suffering in fundamental physics](https://reducing-suffering.org/is-there-suffering-in-fundamental-physics/) and [suffering subroutines](https://reducing-suffering.org/what-are-suffering-subroutines/)?\n\n\nwhat *are* the politics and fundamental values of the people who will get to work on alignment?\n\n\non one hand, my belief about these questions is respectively \"the latter\", \"us now\", \"possibly, yes, no\", \"yes\", \"no\", \"a bunch\", \"i don't know\", \"hopefully they don't matter too much\", and \"uh oh\". on the other hand, i hope this post invokes how ridiculously not-talked-about-enough these questions are, considering how important they might be to what we fill the rest of this universe's history with.\n\n\nmildly related: *[\"politics is the mind-killer\" is the mind-killer](https://www.lesswrong.com/posts/uxsTyFLtSmxmniTzt/politics-is-the-mind-killer-is-the-mind-killer)*", "date_published": "2022-06-09T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "73951948fb05ec2261719d746c395d46", "title": "where are your alignment bits?", "url": "https://carado.moe/alignment-bits.html", "source": "carado.moe", "source_type": "blog", "text": "where are your alignment bits?\n------------------------------\n\n\ninformation theory lets us make claims such as \"you probly can't compress [this 1GB file](http://prize.hutter1.net/) into 1KB, given a [reasonable](kolmogorov-objectivity-in-languagespace.html) [programming language](https://en.wikipedia.org/wiki/Kolmogorov_complexity)\".\n\n\nwhen someone claims to have an \"easy\" solution to aligning AI to human values, which are [complex](https://www.readthesequences.com/Value-Is-Fragile) (have many bits of information), i like to ask: where are the bits of information of what human values are?\n\n\nare the bits of information in the reward function? are they in how you selected your training data? are they in the prompt you intend to ask an AI? if you are giving it an entire corpus of data, which you think *contains* human values: even if you're right, the bits of information are in how you *delimitate* which parts of that corpus encode human value, a plausibly [exponential](https://en.wikipedia.org/wiki/Computational_complexity_theory) task. 
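(a crude way to picture the counting, with made-up numbers; the point is only about where the bits have to come from.)

```python
# "the corpus contains human values" does not locate them; the information is
# in which delimitation of the corpus you mean, and that is where the bits are
corpus_size = 1_000_000        # documents; a made-up number for illustration

# marking each document as value-relevant or not costs one bit per document,
# so one particular delimitation of this corpus carries up to 10^6 bits,
# none of which the raw data supplies by itself
bits_in_a_delimitation = corpus_size
print(bits_in_a_delimitation)  # 1000000

# meanwhile a "1KB easy fix" only has 8192 bits of pointer budget in total
print(1024 * 8)                # 8192
```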
classification is hard; \"gathering all raw data\" is easy, so that's not where the bits of hard work are.\n\n\nthis general information-theoritic line of inquiry, i think, does a good job at pointing to why aligning to complex values is *[likely](plausible-vs-likely.html), actually* hard; not just *[plausibly](plausible-vs-likely.html), maybe* hard.\n we don't \"might maybe need\" to do the hard work, we *do likely need* to do the hard work.", "date_published": "2022-06-09T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "50392cc9eb63392ac4671a95bf1fc9dc", "title": "diversity vs novelty", "url": "https://carado.moe/diversity-novelty.html", "source": "carado.moe", "source_type": "blog", "text": "diversity vs novelty\n--------------------\n\n\n[a friend of mine](https://ouroborista.neocities.org/) likes to say \"novelty is inherently good\"; as for myself, i've made known how much i care about cultural diversity ([1](from-above-fine-grain-diversity.html), [2](systems-and-diversity.html), [3](albions-seed.html), [4](%E2%88%80V.html)).\n\n\nwe have come to see those as similar, except that my notion of diversity is over space while my friend's notion of novelty is over time. notably, the two can be distinguished by the following experiment:\n\n\na planet right now contains 10 quite different cultures. you get to choose which of the two possible future states it will go in:\n\n\n* a future state with those same 10 cultures, except they're now each *a bit different* from their previous state.\n* a future state with 3 quite different cultures, but all are *very* different from all of the ones they have now.\n\n\nif it makes a difference: other values are the same, all cultures mentioned here are reasonably non-evil and all about equally \"good\", you won't ever get to interact with that planet or see their culture yourself, and this planet will not exist after experiencing the selected future — the only two real times at which it's [instantiated](questions-cosmos-computations.html) are its present state, and the future you pick.\n\n\na space-diversity perspective compares the two future states and says: \"10 > 3, so i pick the first future, it is a present state with more diversity\".\n\n\na time-novelty perspective, however, compares the two timelines and says: \"10 < 10 + 3, so i pick the second future, it is a timeline experiencing more total diversity\".\n\n\ni wonder if there is a more generalized notion of diversity that doesn't care about time vs space, and [if i value](core-vals-exist-selfdet.html) that one instead of space-diversity. 
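(the comparison above, spelled out as a tiny sketch; the letters are arbitrary labels, and "a bit different" is modeled, crudely, as keeping the same label.)

```python
# the thought experiment above, with letters standing in for cultures;
# a slightly-changed culture keeps its label, a brand-new culture gets a new one
present  = {"a", "b", "c", "d", "e", "f", "g", "h", "i", "j"}   # 10 cultures now
future_1 = {"a", "b", "c", "d", "e", "f", "g", "h", "i", "j"}   # same 10, each a bit different
future_2 = {"k", "l", "m"}                                      # 3 cultures, all very new

def space_diversity(future):
    # diversity over space: how many cultures the chosen future state contains
    return len(future)

def time_novelty(present, future):
    # novelty over time: how many distinct cultures the whole timeline contains
    return len(present | future)

print(space_diversity(future_1), space_diversity(future_2))              # 10 vs 3:  pick the first
print(time_novelty(present, future_1), time_novelty(present, future_2))  # 10 vs 13: pick the second
```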
another difficult question is: what about diversity between parallel universes or [bubbles](above-paperclips-2.html) ?\n\n\ntime is still weird, huh.", "date_published": "2022-06-09T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "4f9fff79515c40dd0600867c92b5d99e", "title": "concentric rings of illiberalism", "url": "https://carado.moe/concentric-rings-illiberalism.html", "source": "carado.moe", "source_type": "blog", "text": "concentric rings of illiberalism\n--------------------------------\n\n\ni feel like [liberalism](https://en.wikipedia.org/wiki/Liberalism) is not just a political system but a general *mode of thought* and *collection of background assumptions*; which entails that it takes an outside system to properly study that system including its assumptions.\n\n\none stuck in the liberal mode of thought is limited; outside of it are both socialistic and conservativistic concerns about what sort of unexpected issues emerge from liberalism, such as systemic concentration of wealth or cultural value drift. the outermost proper circles of illiberalism i know of include james c. scott's [against the grain](https://slatestarcodex.com/2019/10/14/book-review-against-the-grain/) and [seeing like a state](https://slatestarcodex.com/2017/03/16/book-review-seeing-like-a-state/) as well as kaczynski's [industrial society and its future](https://en.wikipedia.org/wiki/Industrial_Society_and_Its_Future), where the very fundamentals of industrialism and sedentary states are questioned.", "date_published": "2022-05-28T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "4957a0193cf4187ce4bef936ecdfca52", "title": "plausible vs likely", "url": "https://carado.moe/plausible-vs-likely.html", "source": "carado.moe", "source_type": "blog", "text": "plausible vs likely\n-------------------\n\n\nwhen talking about important (utility-monsterly so) problems like [AI risks](say-ai-risk-mitigation-not-alignment.html), it is easy for two people to behave the same whether one believes the event is merely plausible (<50%, sometimes <10%) or likely (>50%, sometimes >90%).\n\n\ni have seen this lead to some confusion about whether we are working on stuff to mitigate plausible risks or try plausible plans, or whether we are working to reduce likely risks or instantiate likely to work plans.\n\n\ni'd like to make clear some of my beliefs related to AI risk issues.\n\n\n* AI [X-risk](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence): [likely](why-timelines-short.html)\n* AI [S-risk](https://en.wikipedia.org/wiki/S-risk): [plausible](https://reducing-suffering.org/near-miss/)\n* AI alignment is hard: [likely](https://www.readthesequences.com/Value-Is-Fragile)\n* [*the peerless*](the-peerless.html) or other [pivotal acts](https://arbital.greaterwrong.com/p/pivotal/) are doable: plausible\n* superintelligence will come about by [foom](https://en.wiktionary.org/wiki/foom#Noun): likely", "date_published": "2022-05-27T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "694ad08cf6ce47d06d6547ea906aa017", "title": "say \"AI risk mitigation\" not \"alignment\"", "url": "https://carado.moe/say-ai-risk-mitigation-not-alignment.html", "source": "carado.moe", "source_type": "blog", "text": "say \"AI risk mitigation\" not \"alignment\"\n----------------------------------------\n\n\nthe common thread i see between the work of people who describe themselves as working on alignment seems to be AI risk mitigation.\n\n\nthis is the case because 
\"alignment\" does not necessarily cover eg [pivotal acts](https://arbital.greaterwrong.com/p/pivotal/); in addition, [X-risks](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence) are [not the whole story](https://en.wikipedia.org/wiki/S-risk) (see also: [*alignment near miss*](https://reducing-suffering.org/near-miss/) and [separation from hyperexistential risk](https://arbital.com/p/hyperexistential_separation/)).\n\n\nwhile it *is* true that AI risks are largely caused by us not having alignment, it is not necessarily the case that the immediate solution is to have alignment.\n\n\nto encompass the spirit of the work i do (when i am being truthful about it), i tend to say that i think about AI risk mitigation — [whatever form](ai-risk-plans.html) that takes.", "date_published": "2022-05-27T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "66ffc361e576f269cc4800d1794fd8cf", "title": "implementing the platonic realm", "url": "https://carado.moe/implementing-the-platonic-realm.html", "source": "carado.moe", "source_type": "blog", "text": "implementing the platonic realm\n-------------------------------\n\n\n(or: holding the world by its souls; or: upside-down metaphysics)\n\n\ni think, to construct a utopia (possibly especially a [sublime](https://www.lesswrong.com/posts/SLw2MEgxFtiKAqgQ5/actually-possible-thoughts-on-utopia) one) we ought to reverse the metaphysics of the world.\n\n\nin any graph forming a hierarchy, you can pick a different node as root and view the graph from there. for example, in the model of reality below, you can pick psychology as your root instead of physics, and end up with reversed arrows.\n\n\n![](implementing-the-platonic-realm.svg)\n\n\nthis gets closer to something like a \"platonic realm\" in which objects are truly, objectively integral, rather than patterns in the stuff they're made of. we can select any layer to be this, and then maybe let details fill in as we investigate them, de-implementing [reductionism](https://www.readthesequences.com/Reductionism). we can implement this the same way video games implement objects: as their own data structures with a bunch of methods to manipulate them, some of which able to generate detail to fill the object with; but those details are not an essential part making up the object, it's moreso the object and its context that produce the details.\n\n\nthese top level platonic persons are still full persons, in the computational sense we care about: their full neural net is still being ran at the top level.\n\n\nwhy do this? so we can make a world that is actually focused on, and aligned to serve, persons. to make persons legible to the superintelligent singleton running that world.\n\n\nnow, if other detail layers contain enough compute to simulate another person, then there might still exist suffering persons that the top level is not keeping track of.\n\n\ntwo solutions are: either detect encoded persons and extract them back up to the platonic level, or make other layers too weak to consistently encode suffering moral patients — this would put severe bounds on what kinds of computers we can use, but maybe that's fine. 
maybe computers are not even worth having, in the person-focused platonic realm.", "date_published": "2022-05-22T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "1117b6c876b6016fa47796692ef84f51", "title": "AI boxing could be easy", "url": "https://carado.moe/ai-boxing-easy.html", "source": "carado.moe", "source_type": "blog", "text": "AI boxing could be easy\n-----------------------\n\n\nwhile i think on its own it doesn't solve *that* much, i don't think building an [AI box](https://en.wikipedia.org/wiki/AI_box) is particularly difficult.\n\n\ni am not talking about *letting an AI have full dominion over a computer but physically isolating it*; physical limitations are hard to predict and unreliable. i am talking about mathematically isolating a piece of software, using known sandboxing techniques.\n\n\nfor example, consider an AI whose code is encoded as SKI calculus, running on a simple SKI calculus calculator. or an AI whose code is encoded into conway's game of life, running on a simple conway's game of life simulator. these programs can be defined in such a way that their only failure mode is stopping due to running out of resources.\n\n\nremaining vulnerabilities involve things like [rowhammer](https://en.wikipedia.org/wiki/Rowhammer), or [meltdown](https://en.wikipedia.org/wiki/Meltdown_%28security_vulnerability%29) and [spectre](https://en.wikipedia.org/wiki/Spectre_%28security_vulnerability%29); but those shouldn't be too hard to contain so long as the level of interpretation of the AI is sufficiently abstracted away from hardware that the configurations that lead to these sorts of exploits just don't occur — in fact, considering [how easy](https://www.gwern.net/Turing-complete) it is to make very abstracted turing complete systems, designing a fully sandboxed and highly-abstracted-from-hardware turing complete environment running a boxed AI should be doable with only a constant-factor slowdown. the information theoritic limit of this would be something like \"reverse\" [homomorphic encryption](https://en.wikipedia.org/wiki/Homomorphic_encryption), where the inside of the encrypted computation is information-theoretically guaranteed to be unable to control how its process affects the hardware around it (as long as the encryption scheme holds, of course) — though i am unsure whether such a scheme exists, or even makes sense to consider.\n\n\nas long as the computation's output isn't used for anything, i posit that this is reasonably safe. the difficult part is designing *output ports* that are safe. however, even without that, we can do some cool stuff such as making an reinforcement-based AI's reward function unhackable.\n\n\nif the boxed AI is fully deterministic, then it has no notion of anything going on in the real world like how fast time is passing or what hardware it's running on; all it has to go on to guess is the structure of its boxed environment, which can be made simple enough that the AI guessing anything about us seems information-theoretically unlikely enough — for example, i would think that an AI living in a seemingly classical computation would be unlikely to guess that the outside world runs on quantum mechanics.\ngiving the boxed AI input ports is *theoretically* safe, but significantly amplifies any risks due to insufficient output safety — if the AI *does* have access to outside information, it gets evidence as to what kind of world it is trying to affect.\n\n\nwhat could a safe output port look like? 
we could use the output port to receive proofs to theorems which are verifiable, for example. i believe discussion of how to turn this into pivotal acts is discussed in some of the first [late 2021 MIRI conversations](https://www.lesswrong.com/s/n945eovrA3oDueqtq). however, we may prefer queries which verifiably have only one correct answer (like \"what is the ordered list of prime factors of this number\"; unlike \"what is a proof of this theorem\"); if there are multiple correct answers, then that's some degree of freedom the AI could use to affect us, especially if any human happens to glance upon the output data.", "date_published": "2022-05-21T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "e51eec5d4d7535349e88ab16347a5bfb", "title": "predictablizing ethic deduplication", "url": "https://carado.moe/predictablizing-ethic-deduplication.html", "source": "carado.moe", "source_type": "blog", "text": "predictablizing ethic deduplication\n-----------------------------------\n\n\ndoes the machinery of the cosmos [deduplicate identical moral patients and/or their experiences](deduplication-ethics.html) ?\n\n\nthis seems like a very difficult question to even address; but the good news is that for the general future we might not have to care about it. we can simply make it that the superintelligence that runs everything, *does* deduplicate (memoize) identical computations and data structures, which guarantees that the ethics we build on top of that (for superintelligence to implement) *can* know about deduplication.\n\n\nwhy choose deduplication over no-deduplication? because if we add deduplication on top of any machinery of the cosmos, then we can know for sure deduplication happens, but if we *don't* implement deduplication, then whether computation is deduplicated depends on the machinery of the cosmos.\n\n\n\"but doesn't this require looking inside arbitrarily encoded computations, such as [homomorphic encryption](https://en.wikipedia.org/wiki/Homomorphic_encryption) ?\"\n\n\nthat is true, but for an aligned superintelligence we require this *anyways*. otherwise, it could just let unseen pockets of arbitrary suffering happen.", "date_published": "2022-05-18T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "1d1049c7685bd8f092157c4868eb29d1", "title": "generalized adding reality layers", "url": "https://carado.moe/generalized-adding-reality-layers.html", "source": "carado.moe", "source_type": "blog", "text": "generalized adding reality layers\n---------------------------------\n\n\nin [*predictablizing ethic deduplication*](predictablizing-ethic-deduplication.html), i talk about how when we don't know how reality works, we can task a singleton superintelligence with \"adding a layer\" to reality, which guarantees that inside that simulated reality we *are* able to function with known ethics.\n\n\nin addition there's a sense in which, if one principle overrides the other, in general with [arbitrarily many layers of reality-simulation](above-paperclips-2.html) we should tend to favor whichever option overrides. 
so for example: if our reality is actually on top of 1000 layers of reality simulations, then it only takes **one** of them to be (truly, deeply) deduplicating for our universe and any sub-universe we simulate to also have deduplication.\n\n\nor, more precisely, for any set of mutually exclusive traits with a dominance ordering (such as deduplication > no-deduplication), we can expect the cosmos to take one of those shapes:\n\n\n(click on the image to expand)\n\n\n[![](generalized-adding-reality-layers.svg)](generalized-adding-reality-layers.svg)\n\n\ni will call this the \"generalized adding reality layers\" (GARL) device, and i think it could have a broad use to reason about properties of the cosmos (the set of [instantiated](questions-cosmos-computations.html) universes), even ones that might seem axiomatic and [untestable](https://en.wikipedia.org/wiki/Newton's_Flaming_Laser_Sword).\n\n\nfor any set of mutually exclusive traits, we care about four properties:\n\n\n* what the dominance ordering is between those traits\n* how they affect the rate of spawning varied sub-universes\n* how they affect the rate of spawning moral patient experiences\n* how they affect the rate of spawning deeply caring actors\n\n\nso, what other sets of traits can we examine using GARL ? here are some that i can think of off the top of my head, as well as my guess for the questions above:\n\n\n\n\n| question | dominance ordering | most varied sub-universes | most moral patient experiences | most deeply caring actors |\n| --- | --- | --- | --- | --- |\n| [moral patient deduplication](deduplication-ethics.html) | dedup > no-dedup | unaffected ? | no-dedup > dedup | i've no idea |\n| [infinite compute](hope-infinite-compute.html) ¹ | finite > infinite | infinite > finite | infinite > finite | infinite > finite ? |\n| type of compute ¹ | classical > quantum > hyper | hyper > quantum > classical ? | unknown | hyper > quantum > classical ? |\n| moral realism ² | realism > non-realism | unsure | whichever maximizes good | realism > non-realism ?
|\n| deeply-caring superintelligence | present > absent | depends on its goals | depends on its goals | present > absent |\n\n\n¹: these two questions are similar to one another in that they have one dominant variant that restricts computation, and one recessive variant that doesn't; as a result, i would tend to assume that the recessive variant has a higher chance of spawning most kinds of stuff\n\n\n²: my reasoning: once what is true becomes aligned with what is good, then the [orthogonality thesis](https://www.lesswrong.com/tag/orthogonality-thesis) becomes falsified in that sub-cosmos, and superintelligences are more easily aligned by default\n\n\nother questions to which GARL may be applicable but i haven't figured out how:\n\n\n* is occam razor's/[solomonoff induction](https://www.lesswrong.com/posts/EL4HNa92Z95FKL9R2/a-semitechnical-introductory-dialogue-on-solomonoff-1) applicable?\n* [are minimal circuits daemon-free?](https://www.lesswrong.com/posts/nyCHnY7T5PHPLjxmN/open-question-are-minimal-circuits-daemon-free)\n* [can you control the past?](https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past)\n* [SIA vs SSA](https://www.lesswrong.com/posts/RnrpkgSY8zW5ArqPf/sia-greater-than-ssa-part-1-learning-from-the-fact-that-you)", "date_published": "2022-05-18T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "0b4b963a18737222f9e28680f700291e", "title": "smaller X-risk", "url": "https://carado.moe/smaller-x-risk.html", "source": "carado.moe", "source_type": "blog", "text": "smaller X-risk\n--------------\n\n\na superintelligence killing us all is a *superintelligent, very large* [X-risk](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence).\n\n\nthe superintelligence will tile its values in all directions; not just through space at the speed of light [or faster](https://en.wikipedia.org/wiki/Alcubierre_drive), but also, if it can, by [hacking physics](brittle-physics.html) and traversing across, for example, worldlines of the quantum many-worlds.\n\n\nwe may be able to create smaller X-risks, that only make us extinct in this timeline, on this earth. there are a few reasons we may want to do this:\n\n\n* other timelines might have a better shot than us, and us booting a superintelligence may reduce their chances through weird stuff like intertimeline hacking\n* to [avoid S-risks](when-in-doubt-kill-everyone.html), including S-risks that may be involved in instrumental cosmic-scale X-risk (maybe superintelligence wants to simulate civilizations in various ways for [acausal trade](https://www.lesswrong.com/tag/acausal-trade) or [other acausal weirdness](https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past) reasons?)\n* the next intelligent species on earth is more likely than us to solve alignment before superintelligence, and seems likely enough to be at least a little bit aligned with us (better than cosmic X-risk, at least)\n* same as above, but for nearby aliens (whether current or future)\n\n\n*smaller X-risk*, where we limit damage to just our civilization, seems harder than tiling the cosmos with paperclips; but at least it might be easier than [other plans](ai-risk-plans.html).\n\n\nin a similar way, reducing our civilization to ashes *without* actually becoming extinct might also be a way to get another shot, if we think we're likely to do less badly next time.\n\n\nremember: this is bigger than all of us. 
when the fate of the cosmos is at play, we can't afford to be too selfish.", "date_published": "2022-05-15T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "4bf838622c46f6445fcee5bbdc471c2a", "title": "cognitive biases regarding the evaluation of AI risk when doing AI capabilities work", "url": "https://carado.moe/ai-capability-risk-biases.html", "source": "carado.moe", "source_type": "blog", "text": "cognitive biases regarding the evaluation of AI risk when doing AI capabilities work\n------------------------------------------------------------------------------------\n\n\ni have recently encountered a few rationality failures, in the context of talking about AI risk. i will document them here for reference; they probly have already been documented elsewhere, but their application to AI risk is particularly relevant here.\n\n\n### 1. forgetting to multiply\n\n\nlet's say i'm talking with someone about the likelihood that working on some form of AI capability [kills everything everywhere forever](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence). they say: \"i think the risk is near 0%\". i say: \"i think the risk is maybe more like 10%\".\n\n\nwould i bet that it will kill everyone? no, 10% is less than 50%. but \"what i bet\" isn't the only relevant thing; a proper utilitarian *multiplies* likelihood by *quality of outcome*. and X-risk is really bad. i see some people mistakenly use only the probability, forgetting to multiply; if i think everyone dying is not likely, that's enough for them. but given how bad the outcome is, one should only be satisfied if it's *extremely* unlikely.\n\n\n### 2. categorizing vs average of risk\n\n\nlet's take the example above again. let's say you believe said likelihood is close to 0% and i believe it's close to 10%; and let's say we each believe the other person generally tends to be as correct as oneself.\n\n\nhow should we come out of this? some people seem to want to pick an average between \"carefully avoiding killing everyone\" and \"continuing as before\" — which lets them more easily continue as before.\n\n\nthis is not how things should work. if i learn that someone who i generally consider about as likely as me to be correct about things, seriously thinks there's a 10% chance that my tap water has lead in it, my reaction is not \"well, whatever, it's only 10% and only 1 out of the two of us believes this\". my reaction is \"what the hell?? i should look into this and stick to bottled water in the meantime\". the average between risk and no risk is not \"i guess maybe risk maybe no risk\"; it's \"lower (but still some) risk\". the average between ≈0% and 10% is not \"huh, well, one of those numbers is 0% so i can pick 0% and only have half a chance of being wrong\"; the average is 5%. 5% is still a large risk.\n\n\nthis is kind of equivalent to *forgetting to multiply*, but to me it's a different problem: here, one is not just forgetting to multiply, one is forgetting that probabilities are numbers altogether, and is treating them as a set of discrete objects that they have to pick one of — and thus can justify picking the one that makes their AI capability work okay, because it's one out of the two objects.\n\n\n### 3. deliberation ahead vs retroactive justification\n\n\nsomeone says \"well, i don't think the work i'm doing on AI capability is likely to kill everyone\" or even \"well, i think AI capability work is needed to do alignment work\".
that *may* be true, but how carefully did you arrive at that consideration?\n\n\ndid you sit down at a table with everybody, talk about what is safe and needed to do alignment work, and determine that AI capability work of the kind you're doing is the best course of actions to pursue?\n\n\nor are you already committed to AI capability work and are trying to retroactively justify it?\n\n\ni know the former isn't the case because there *was* no big societal sitting down at a table with everyone about cosmic AI risk. most people (including AI capability devs) don't even meaningfully *know* about cosmic AI risk; let alone deliberated on what to do about it.\n\n\nthis isn't to say that you're necessarily wrong; maybe by chance you happen to be right this time. but this is not how you arrive at truth, and you should be highly suspicious of such convenient retroactive justifications. and by \"highly suspect\" i don't mean \"think mildly about it while you keep gleefully working on capability\"; i mean \"seriously sit down and reconsider whether what you're doing is more likely helping to save the world, or hindering saving the world\".\n\n\n### 4. it's not a prisoner's dilemma\n\n\nsome people think of alignment as a coordination problem. \"well, unfortunately everyone is in a [rat race](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/) to do AI capability, because if they don't they get outcompeted by others!\"\n\n\nthis is *not* how it works. such prisoner's dilemmas work because if your opponent defects, your outcome if you defect too is worse than if you cooperate. this is **not** the case here; less people working on AI capability is pretty much strictly less probability that we all die, because it's just less people trying (and thus less people likely to randomly create an AI that kills everyone). even if literally everyone except you is working on AI capability, you should still not work on it; working on it would *still only make things worse*.\n\n\n\"but at that point it only makes things negligeably worse!\"\n\n\n…and? what's that supposed to justify? is your goal to *cause evil as long as you only cause very small amounts of evil*? shouldn't your goal be to just generally try to cause good and not cause evil?\n\n\n### 5. we *are* utilitarian… right?\n\n\nwhen situations akin to the trolley problem *actually appear*, it seems a lot of people are very reticent to actually press the lever. \"i was only LARPing as a utilitarian this whole time! pressing the lever makes me feel way too bad to do it!\"\n\n\ni understand this and worry that i am in that situation myself. i am not sure what to say about it, other than: if you believe utilitarianism is what is *actually right*, you should try to actually *act utilitarianistically in the real world*. you should *actually press actual levers in trolley-problem-like situations in the real world*, not just nod along that pressing the lever sure is the theoretical utilitarian optimum to the trolley problem and then keep living as a soup of deontology and virtue ethics.\n\n\ni'll do my best as well.\n\n\n### a word of sympathy\n\n\ni would love to work on AI capability. it sounds like great fun! i would love for everything to be fine; trust me, i really do.\n\n\nsometimes, when we're mature adults who [take things seriously](life-refocus.html), we have to actually consider consequences and update, and make hard choices. this can be kind of fun too, if you're willing to truly engage in it. 
i'm not arguing with AI capabilities people out of hate or condescension. i *know* it sucks; it's *painful*. i have cried a bunch these past months. but feelings are no excuse to risk killing everyone. we **need** to do what is **right**.\n\n\nshut up and multiply.", "date_published": "2022-05-13T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "6df985f54f00ecbc4342c7b24ac485a8", "title": "life refocus", "url": "https://carado.moe/life-refocus.html", "source": "carado.moe", "source_type": "blog", "text": "life refocus\n------------\n\n\nbecause of the [recent](https://www.metaculus.com/questions/3479/date-weakly-general-ai-system-is-devised/) [events](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy), which i've been dreading [for a while](were-all-doomed.html), i'm taking AI risk a lot more seriously, and have started significantly refocusing my life. (**edit**: see also [my more recent intro to AI risk](ai-doom.html))\n\n\nthere is a post called [*musk's non-missing mood*](https://lukemuehlhauser.com/musks-non-missing-mood/) that resonates quite well with me. it is indeed kind of disconcerting how people who seem rationally aware of AI risk, don't seem to *grok* it as an *actual thing*. despite how real it is, it's hard to think of it not as fantasy fiction.\n\n\ni totally understand why. i've been there too. but eventually i managed to progressively update.\n\n\ni'm still not quite there yet, but i'm starting to actually grasp what is at stake.\n\n\n[\"detaching the grim-o-meter\"](https://mindingourway.com/detach-the-grim-o-meter/) remains a reasonable thing to do; you don't want to become so depressed that you kill yourself instead of saving the world. but you also don't want to remain so deluded that you don't quite weigh the importance of saving the world enough either.\n\n\ni'll learn japanese after the singularity. i'll make [my game](game.html) and [my alternative web](saving-the-web.html) and my conlang and [my software stack](psi.html) and many other things, after the singularity. it is painful. but it is what's right; it's closer to [the best i can do](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy).\n\n\nand i know that, if at some point i give up, then it won't look like pretending that everything is fine and compartmentalizing our imminent death as some fantasy scenario. it'll be a *proper* giving up, like going to spend the remaining years of my life with my loved ones. even my giving up scenario is one that takes things seriously, as it should. that's what being an adult capable of taking things seriously is like.\n\n\nhow you handle your mental state is up to you. there is a collection of AI-risk-related mental health posts [here](https://www.lesswrong.com/posts/pLLeGA7aGaJpgCkof/mental-health-and-the-alignment-problem-a-compilation-of). do what it takes for you to do the work that needs to be done. that's not becoming a doomer; your brain is straight-up not designed to deal with cosmic doom. but that's not remaining blindly naive either. the world needs you; it won't be saved by pretending things are fine.\n\n\nand it *certainly* won't be saved by pretending things are fine and *working on AI capability*. that's *just bad*. 
*please* don't.\n\n\nplease take AI risk seriously.", "date_published": "2022-05-12T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "e041150796550b2a45cd6b947ed1a425", "title": "hope for infinite compute", "url": "https://carado.moe/hope-infinite-compute.html", "source": "carado.moe", "source_type": "blog", "text": "hope for infinite compute\n-------------------------\n\n\nhere are some reasons we may have infinite universe to inhabit in the future.\n\n\n* encoding ourselves in heat death noise. this could at least buy us exponentially much time to exist; it becomes infinite if the amount of possible states also increases.\n* where are we in [the universal distribution](https://www.lesswrong.com/posts/EL4HNa92Z95FKL9R2/a-semitechnical-introductory-dialogue-on-solomonoff-1) ? most models that produce [rich](questions-cosmos-computations.html) computation, such as [rule 30](https://en.wikipedia.org/wiki/Rule_30), seem to grow forever in amount of stuff. this includes [wolfram's hypergraph rewriting system](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/).\n* space is expanding. the planck length is a constant. unless there's something i'm mistaken about, this sure seems like more positions in space we could inhabit, and thus an increasing total amount of states the universe can be in. it is not obvious how to utilize this, but it may be evidence for other ways in which the amount of stuff in the universe grows. in wolfram's perspective, it is simply the hypergraph creating new positions in space, as it has been doing forever.\n* why are there 10⁸⁰ particles in the observable universe? if the total number is larger, why is it that larger number? wouldn't it be occam-simpler that there be 1 or few particles (or qubits or whatever) in the start, and have that amount grow over time? in fact, with expanding space, won't there statistically tend to be more particles overall, if only because there's more space for quantum fluctuations to randomly spawn particles? surely a superintelligence can harness this.\n* even if we're in a physically finite universe, we may be able to acausally trade/hack/blackmail aliens living in infinite worlds such as rule 30. maybe. [acausal trading is weird](https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past).", "date_published": "2022-05-11T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "42a7930b0249c9898c5483adb35dc597", "title": "AI risk plans", "url": "https://carado.moe/ai-risk-plans.html", "source": "carado.moe", "source_type": "blog", "text": "AI risk plans\n-------------\n\n\npeople have criticized my [*peerless*](the-peerless.html) plan on the grounds that it's too long-term/far-fetched.\n\n\nwhile i don't disagree, i think that it is only one variable to be taken into consideration. 
here is a comparison of plans for addressing AI risk, with vague estimates.\n\n\n\n\n| plan | achievable before [X-line](timeline-codes.html)¹ | chance of [U-line](timeline-codes.html)² | [S-risk](https://en.wikipedia.org/wiki/S-risk)² |\n| --- | --- | --- | --- |\n| doing nothing | 100% | <[1e-6](https://www.lesswrong.com/tag/orthogonality-thesis) | <1e-6 |\n| direct alignment | [.1%](https://www.readthesequences.com/Value-Is-Fragile) | 5% → .005% | [5%](https://reducing-suffering.org/near-miss/) → .005% |\n| [the peerless](the-peerless.html) | 2% | 10% → .2% | 1% → 0.02% |\n\n\n* ¹: assuming significant effort is put behind the plan in question, what is the likelyhood that we'll have accomplished the work to what we *believe* to be completion? note that my current AI timelines are pretty pessimistic (we become more likely to die than not this decade)\n* ²: *if* we believe to have completed the work; the latter number is adjusted by being multiplied with \"achievable before [X-line](timeline-codes.html)\".\n\n\nnote that the numbers i put here are only very vague estimates, feel free to replace them with your own guesses. but my point is, in order for the peerless to be the plan we should be working on, we don't need it to be *feasible*, we just need it to be *less infeasible than all the other plans*. i think the peerless is more tractable than doing direct alignment, and only more risky because it has more chances to succeed. depending on how scared of S-lines you are, you should push for either doing nothing (and thus [oppose direct alignment](against-ai-alignment.html)) or for my plan. (or come up with your own, and then compare it to these!)\n\n\nnot pictured: the plan to [melt all GPUs](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/7im8at9PmhbT4JHsW), because it's just a modifier on what we do afterwards. but yes, melting all GPUs is a great idea if we think we can reasonably do it more than other plans.", "date_published": "2022-05-11T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "72d920fcae6c25641bc9032b5ff009f1", "title": "a unit for utils", "url": "https://carado.moe/utils-unit.html", "source": "carado.moe", "source_type": "blog", "text": "a unit for utils\n----------------\n\n\nas utilitarians, it would be convenient for us to have an actual unit to measure utility, a number to be computed and compared.\n\n\nthe usual pick is money, but some people could have different judgments of the world that lead them to have different instrumental valuings of money even when they have the same intrinsic values; and also some people could intrinsically value money.\n\n\nthe unit i propose, to measure how much an agent cares about a thing, is a ratio of that person's \"total caring pie\". for example, you could intrinsically value 70% something and 30% something else; and then i'm sure we can figure out some math that makes sense (probly inspired from probability theory) to derive our valuings of instrumental values from that.\n\n\nthis seems like the least biased way to measure utils. the only criticism i can think of is that it breaks if two agents have different amounts of total valuing: perhaps one person *just has more total caring* than another.\n\n\nhowever, is this testable in any way? is there any situation where one agent would act differently than another if they have the same intrinsic valuing proportions but one of them has a million times more total caring? 
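(a minimal sketch of the caring-pie unit, with two made-up agents and made-up weights; note that scaling every weight by the same factor leaves the pie unchanged, which is exactly what the question above is probing.)

```python
# represent an agent's intrinsic valuings as nonnegative weights, and take
# the "caring pie" to be those weights normalized so they sum to 1.

def caring_pie(valuings: dict) -> dict:
    total = sum(valuings.values())
    return {thing: weight / total for thing, weight in valuings.items()}

alice = {"friends": 7.0, "art": 3.0}
bob = {"friends": 7e6, "art": 3e6}  # "a million times more total caring"

# both pies come out as {"friends": 0.7, "art": 0.3}
assert caring_pie(alice) == caring_pie(bob)
```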
i don't think so: the idea that inaction counts, seems to me to track either willpower or just different valuings of not doing effort.", "date_published": "2022-04-29T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "b855339dfa20f94754a831580a1b955f", "title": "the uncertainty of 2+2=4", "url": "https://carado.moe/uncertainty-2+2=4.html", "source": "carado.moe", "source_type": "blog", "text": "the uncertainty of 2+2=4\n------------------------\n\n\npeople generally claim \"it will rain tomorrow\" and \"2+2=4\" are different *kinds* of knowledge. not only is one about the world and the other about mathematics, but the former is probabilistic and the latter is certain.\n\n\ni here would like to dispute this second claim; in fact, 2+2=4 is, as we are now with our brains the way they are, probabilistic knowledge as well. it is possible that, by sheer chance, every time you and every other human and every computer ever calculated what 2+2 would be, they made a mistake by [random chance](https://en.wikipedia.org/wiki/Soft_error#Cosmic_rays_creating_energetic_neutrons_and_protons), and happened to make the exact same mistake. it could be that actually 2+2=5. or, for a more systematic example, one could find themself as [the mathematician in a lying simulation](https://www.lesswrong.com/posts/r7f58E8A85xLgWuqG/contact-with-reality).\n\n\nnow, that's *extremely* unlikely, just because of how universally 2+2=4 is claimed to be observed and verified. but it is indeed *possible*, and this fits into [the lack of absolute certainty](https://www.readthesequences.com/Zero-And-One-Are-Not-Probabilities).\n\n\nmaybe there [could](exact-minds-in-an-exact-world.html) be minds and computers that could determine this for sure (although, what about unknown unknowns?); but those aren't what we have now.\n\n\nonce you know this, other probabilistic guarantees like cryptography, or more generally problems that are in computational complexity classes like [BPP](https://en.wikipedia.org/wiki/BPP_%28complexity%29) and [BQP](https://en.wikipedia.org/wiki/BQP), can be percieved as more reasonable relative to *exact* guarantees: after all, everything we interact with on top of the standard model is probabilistic.", "date_published": "2022-04-29T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "b1461887773076326b293788b1e34242", "title": "finding earth in the universal program", "url": "https://carado.moe/finding-earth-ud.html", "source": "carado.moe", "source_type": "blog", "text": "finding earth in the universal program\n--------------------------------------\n\n\nthis post expands on step one of [*the Peerless*](the-peerless.html): creating virtual people. brain scans-and-simulations are apparently still quite far off, so i'll be focusing on the second approach: resimulating earth and plucking out persons.\n\n\n(one great side-advantage of this method is that, if we can relocate earth to pluck out persons for the simulation of alignment researchers, then we can later also relocate earth in order to restore it once we've solved alignment. so resimulating and locating earth, regardless of having early enough mind-plucking-out tech, is something we might need to do anyways.)\n\n\nif compute is [infinite](ai-alignment-wolfram-physics.html) and [we don't mind being inefficient](udassa-time-steps.html), then we can use exponential or even infinite compute to locate earth. 
one approach is the following: create a big informational beacon — perhaps a copy of a huge portion of the internet, along with MRI scans of as many people as we can afford. then, we use some type of (non-intelligent) deterministic error-bound statistical location procedure to locate patterns that look like that beacon inside the [universal program](universal-complete.html). we can afford the statistical detection to be imperfect — if it misses on one encoding of earth, there will be different ones in the universal program.\n\n\nbecause of the time penalty of the universal program, however, we may find just compressed copies of the beacon (instead of a full simulation of earth leading to the time at which we build that beacon), and because of the deterministic bound, we need to stop on the first match; if this first match is *just* the beacon, without earth, then we fail; perhaps superintelligence can notice that it's not finding any nearby minds to pluck out, or perhaps it plucks out garbage. so we can start the universal program with not one step per program, but rather a very large number of steps — i hear stephen wolfram has estimates on the number of computation steps it takes to get to the current state of the universe. this will favor programs that take very long to lead to the beacon, but are themselves shorter programs.\n\n\n(what if the first program to contain earth is itself a universal program *without* that huge constant, such that *it* finds the beacon before it finds earth? i am not sure how to address this. perhaps we can explore programs in an order that favors worlds that look like our physics instead of looking like discrete iterations of all computations?)\n\n\nthere's also the concern that the universal program, just like the [universal distribution](https://www.lesswrong.com/posts/XiWKmFkpGbDTcsSu4/on-the-universal-distribution), [is malign](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign). i'd think plain top-level earth, maybe especially as detectable by a simple enough beacon locator, would tend to occur before malign aliens emitting our beacon to trick us; but that's a risk to keep in mind.\n\n\nif we *do* care about computational efficiency, then there are two main factors we need to account for:\n\n\n* can our universe be run in polynomial time on whatever computers the superintelligence can build? for example, can it be run in polynomial time on quantum computers, and can quantum computers be built? note that if this is the case we might need to step through *quantum steps* of *quantum programs* to run the search in the expected time. this doesn't mean we need to build quantum computers ourselves, mind you — superintelligence can just notice that a quantum computer would run the computations we describe efficiently, and build and use those.\n* is the \"seed\" program for the universe small? intuitively i believe it is, and i find wolfram's efforts to reproduce the behavior of particles from the standard model using simple graph rewriting, to be evidence in that direction. that said, if it is large, then finding that program is an exponential search again — and so, again, we might need to build a search that \"favors\" our physics to save on exponential search time.\n\n\nfinally, we might want to put a hard bound on the number of tries the superintelligence will run to locate earth.
the reason for that is that, if for some reason we messed up something in the beacon locator and it *never, ever* finds earth, then it will instantiate all computations, which appears to me to be a potential [S-risk](timeline-codes.html). in fact, even if we do find earth, it may not be worth it if we have to simulate exponentially much potential suffering before running our utopia — what if, after solving alignment, we have a great time, but then decide to eventually fade away after only polynomial time? then we might have created exponentially much suffering in total.\n\n\n### intermediary simulation\n\n\nin case isolating minds from this simulation is hard, we could build an intermediary step between the location of earth in simulation-space, and booting the peerless simulation proper — superintelligence could, once it has located our beacon, get in touch with our organization *inside the simulation of earth*, and give it extraordinary computational (and maybe physical?) ability within the simulation to either take over everything, or figure out brain plucking-out and then let us press a big \"ok, start now\" button.\n\n\nnote, however, that we might not want to remain in this intermediary simulation for too long — it is still vulnerable to inner unaligned superintelligences, just like our top level reality is. we want to get to a safe, sandboxed, computationally weak environment as early as possible.\n\n\nthis is also a great argument for readying ourselves to build the beacon and utilize this contact-from-superintelligence as early as we can; indeed, to make that the first step of implementing the peerless plan. the reason for that is that the earlier we are able to take advantage of it, the earlier the time step of the simulation superintelligence can help us start bootstrapping towards the proper simulation of the peerless, and the less likely we are to be doomed by other superintelligences, if we need some intermediary \"pre-peerless\" simulation time.", "date_published": "2022-04-12T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "21faeca89e26aea377a2a702d078a911", "title": "The Peerless", "url": "https://carado.moe/the-peerless.html", "source": "carado.moe", "source_type": "blog", "text": "The Peerless\n------------\n\n\nIn this post, I propose a plan for addressing superintelligence-based risks.\n\n\nBefore I say anything, I will mention a crucial point that a *bunch* of people have ignored despite it being addressed at the bottom of this post: the idea I describe here is very unlikely to work. I'm proposing it over other plans because I feel like other plans are *extremely* unlikely to work (see also [this post](ai-risk-plans.html)). Yes, we probably can't do this in time. That doesn't make it not our best shot. Rationalists select the best plan, not [\"the first plan, and then after that a new plan only if it seems good enough\"](https://www.readthesequences.com/If-Many-Worlds-Had-Come-First).\n\n\n(spoilers for the premise of [*orthogonal* by Greg Egan](https://en.wikipedia.org/wiki/Orthogonal_%28series%29)) In Orthogonal, a civilization facing annihilation comes up with a last-minute plan: to create a ship, accelerate it until its time arrow is orthogonal to the time arrow of its home world (which is possible thanks to the alternate physics of their world), and thus give its crew as much time as it needs to figure out how to save their homeworld before reversing course and coming back.
This plan is inspired by that, and i'm naming this post after their ship, the *Peerless*.\n\n\nThe short version is: we design a simulation for a bunch of people (probably rationalists) to live in and figure out alignment with as much time as they need, and create a superintelligence whose sole goal is to run that simulation and implement a new goal it will eventually decide on. I've written about this idea previously, but [that post](upload-for-alignment.html) is not required reading; this is a more fleshed-out view.\n\n\nI will be describing the plan in three steps.\n\n\n### 1. Create virtual people\n\n\nWe need virtual persons inside this world. They will be the ones who figure out alignment. A few possibilities come to my mind; there may be more.\n\n\n* Brain scans, or full person scans. This is the most obvious solution. I'm not too familiar with the state of that field, but surely there's some work in that direction we can take advantage of; otherwise, we can just throw money at our own initiatives. This option does have the downside that it's quite likely brains aren't sufficient to keep someone functional — we may need to scan or re-implement a bunch more.\n* Resimulate earth and pluck out persons. If there's a clever way to locate ourselves in the [universal distribution](https://www.lesswrong.com/posts/XiWKmFkpGbDTcsSu4/on-the-universal-distribution) (or a [computable variant of it](http://www.scholarpedia.org/article/Universal_search)), then we can just make a program that reruns that earth up to say, now, and then locates some or all human brains, and \"download\" them out of that simulation of earth and into our own simulated environment. For more details on this possibility, see [*finding earth in the universal program*](finding-earth-ud.html).\n* Scan the earth and pluck our persons. This one seems harder than resimulating earth, but may be doable. It's certainly an idea worth throwing a few people at, to see if there's a clever way to make it work.\n\n\nThe main risk that's been brought to my attention regarding this part is the following: what if the virtual persons end up unaligned from their previous selves? The brain scan scenario seems like the most likely to have that risk, but even then i'm not *too* worried about it; intuitively, it seems unlikely enough to me that all the uploaded persons would come out misaligned in a similar direction, and in a similar direction that would lead them to decide on a [botched alignment](botched-alignment-and-awareness.html) for the superintelligence.\n\n\nAn obvious question here is: who gets to be on board the simulation? The values of the people who get uploaded might significantly affect what the superintelligence is aligned to (not all humans necessarily have the same values, [maybe even after thinking about it really hard for a long time](https://www.lesswrong.com/posts/FSmPtu7foXwNYpWiB/on-the-limits-of-idealized-values)). I don't have any answers other than the obvious \"me please!\" and \"my tribe please!\" that occur to me.\n\n\nNote that i'm *not* proposing augmenting the uploaded minds — at least not for the first simulation iteration (see below). That *does* seem like an exceedingly risky prospect, alignment-wise, and one we don't need to commit to right away.\n\n\n### 2. Design a virtual environment\n\n\nThose persons will live in a virtual environment, within which they'll hopefully figure out alignment. 
However, the environment needs to be a deterministic computation, such that the \"outer\" superintelligence (the one running the virtual environment) has no ability to affect its outcome; its goal will only be to \"implement whatever this computation decides\". If the superintelligence wants to implement the actual result of the actual computation, and that computation is fully deterministic, (and if we don't simulate anything complex enough for that superintelligence to \"leak back in\"), then it has no room to meddle with what we do in it! It's stuck running us until we decide on something.\n\n\nSome things we need to figure out include:\n\n\n* How do we incorporate our virtual minds? I think we should go for something plugged in \"ad-hoc\" rather than embedded into the physics of that world, to preserve the integrity of those minds, which may live for very long times. In addition, in case virtual minds go crazy after living 200 years or something, we may want to allow them to reset themselves and/or die. A reset is not necessarily a big deal: hopefully previous-me can transmit enough information to future-me to continue the work. Maybe there are two me's at any given time, a teacher and an apprentice. Regular resets of individual persons also hopefully help maintain their values over long stretches of time. Many schemes are possible.\n* What is this world like? We could make do with just something as basic as minecraft, but it would be better if the virtual persons don't have to go crazy from being stuck in a minecraft steve's body with no senses except sight and sound.\n* How do we prevent \"sub-singularities\"? Given that this world is deterministic, there is nothing the outer superintelligence can do to prevent internal superintelligences from popping up and breaking everything it can. Potential solutions include things like \"there are no computers\" or \"all computers inside this world are very slow and limited in capability\".\n* What about memetic safety? What about virtual violence? What if someone duplicates themself a billion times? And so on. There are a collection of design challenges, but designing [a peaceful world](%E2%88%80V.html) with [sensible virtual physics](game.html) doesn't seem out of reach. They seem like tractable engineering challenges.\n* What is the final voting procedure? Remember that the goal of the simulation is to give the people inside it time to figure out alignment, but they should probably agree on something eventually: either a final decision on alignment, or a \"next iteration\": a new simulation to be ran, which they think has better/safer/still-safe conditions within which to research alignment. In fact, there may be arbitrarily many such \"simulation iterations\". Anyways, the simulation will have a big red button inside of it which says \"okay, we're done\", and takes as input a new goal (and possibly decision theory?) that the outer superintelligence will have as its ultimate goal. But what should it take to press the button? Everyone to agree? A majority? What if we end up unable to come to an agreement? Again, there is work to be done on this, but it seems figure-out-able.\n\n\nThe people inside this simulation will have somewhere between *plenty* and [*infinite*](ai-alignment-wolfram-physics.html) time and compute to figure out alignment. 
If they do have infinite compute, and if the cosmos isn't full of [consequentialists competing for earlyness in the universal distribution](udassa-time-steps.html) (or other things that might make wasting compute bad), then we can even run exponential-or-longer computations in, from our perspective, instant time; we just need to be sure we don't run anything malign and unbounded — although the risks from running malign stuff might be mitigated by the computations being fully and provably sandboxable, and we can shut them down whenever we want as long as they don't get to output enough to convince us not to. After all, maybe there are some bits of information that are the result of very large malign-dominated computations, that can nevertheless still be of use to us.\n\n\nI mentioned before that maybe only slow computers are available; running a \"very large\" computation might require a majority vote or something like it. Or we can just boot without any computers at all and spend the first few millenia designing slow computers that are actually safe, and then work from there — when we have all the time we want, and maybe-infinite *potential* compute, a lot of options open up.\n\n\nOne downside is that we will be \"flying blind\". The outer superintelligence will gleefully turn the cosmos into computronium to ensure it can run us, and *will* be genociding everything back meat-side, in our reachable universe — or beyond, if for example physics is hackable, as wolfram suggests might be possible. Superintelligence might even do that *first*, and *then* boot our simulation. Hopefully, if we want to, we can resimulate-and-recover aliens we've genocided after we've solved alignment, just like hopefully we can resimulate-and-recover the rest of earth; but from inside the simulation we won't be able to get much information at least in the first iteration. We *can*, however, end our iteration by agreeing on a new iteration that has some carefully-designed access to outside information, if we think we can safely do that; but nothing guarantees us that there will still be something salvageable outside.\n\n\nAnother way to model \"successive simulation iterations, each deterministic, but each having the ability to make the next one not deterministic with a large enough vote\" is as a single simulation that isn't quite deterministic, but made of large deterministic chunks separated by small controlled I/O accesses; think of it as a haskell computation that lazily evaluates everything right up until it waits for an input, and then as soon as it has that it can continue computing more.\n\n\nStill, the current outlook is that we genocide everything *including ourselves*. Even if nothing else is recoverable, \"a tiny human population survives and eventually repopulates\" still seems like a better plan than the current expected outcome of \"everything dies forever.\"\n\n\n### 3. Make a superintelligence to run it\n\n\nNow, this is the \"easy part\": just make a superintelligence that destroys everything to implement its one simple goal; except instead of paperclips, the simple goal is \"implement whatever goal is the result of this very big turing machine\".\n\n\nWe can either build and start that superintelligence as soon as we can, or [keep it ready](when-in-doubt-kill-everyone.html) while we stay on our regular world. I'd probably advocate for the former just to be safe, but it can depend on your beliefs about quantum immortality, S-risks, and such. 
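(To make the shape of that goal concrete, here is a deliberately naive sketch. Every name in it is a made-up placeholder, and it hand-waves away everything that makes the problem hard; it only illustrates the structure of running one deterministic computation to completion and then adopting its output as the goal.)

```python
# naive structural sketch: run one big deterministic computation until its
# inhabitants press the final-decision button, read off the goal they chose,
# and only then start optimizing. all of these callables are hypothetical
# placeholders, not things anyone knows how to build or align.
from typing import Callable, TypeVar

State = TypeVar("State")
Goal = TypeVar("Goal")

def outer_agent_behavior(
    initial_state: State,
    simulation_step: Callable[[State], State],  # fully deterministic
    is_done: Callable[[State], bool],           # the big red button was pressed
    extract_goal: Callable[[State], Goal],      # the goal the inhabitants decided on
    optimize: Callable[[Goal], None],           # what the outer agent then does
) -> None:
    state = initial_state
    while not is_done(state):       # no room to meddle: just compute faithfully
        state = simulation_step(state)
    optimize(extract_goal(state))   # adopt the simulation's output as the goal
```

The point of the structure is that everything before the last line is a fixed mathematical object, so the outer agent has nothing to gain by influencing how it comes out.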
In any case, having *something that might work ready to fire* is certainly better than the current *we just die*.\n\n\nOf course, it is crucial that we make the superintelligence *after* we have designed and implemented the virtual environment, complete with its virtual persons (or its deterministic procedure to obtain them); we don't want it to be able to influence what goal we give it, so we likely need to have the goal ready and \"plugged in\" from the start.\n\n\nSome risks are:\n\n\n* It doesn't run the simulation accurately. I'd think it's surely not too hard to make a superintelligence have \"run this discrete, deterministic thing and then adopt its output as goal\" as its goal, but perhaps there are difficulties around this. I'm optimistic that we can figure that out.\n* It doesn't run the simulation (or adopt its result as goal) at all. Perhaps the requirement that the simulation be ran perfectly will make superintelligence too paranoid about being sure it has run the simulation correctly, and it will spend its entire existence getting more computronium to increase the likelyhood that the outcome it has computed is correct, but it's never certain enough to actually adopt the new goal as its own. There may be probabilistic workarounds to this; we'll need to look into it more.\n* It fails in the various ways AI usually value drift from us — hacking its reward function, etc. While this concern may indeed remain, the fact that i'm proposing what seems to me like a easy to formalize goal is already a big improvement compared to the current state of affairs.\n\n\n### Conclusion\n\n\nThis is a plan with a bunch of things that need work, but it doesn't seem *absurdly* hard to me; if anything, step 1 seems like the hardest, and I don't even know that we've *tried* throwing billions of dollars at it.\n\n\nI share yudkowsky's current [gloomy outlook](https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy) on AI. The current route of \"hey, maybe we should study things vaguely related to harnessing what neural nets do, and hope to be able to grab a miracle should it come up\" seems like a pretty bad plan. I think, in comparison, the plan I outline here has better chances.\n\n\nIt is to be remembered that my vision is competing not against the likelyhood of superintelligence emergence, but against the likelyhood that alignment works. If pursuing mostly alignment gives us a 1e-10 chance of survival, and pursuing mostly my plan gives us a 1e-8 chance of survival, then it doesn't matter that *yes, superintelligence is still overwhelmingly likely to kill us*; we should still favor my plan. 
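(Spelling out that bit of arithmetic with the made-up numbers above: the value of a surviving future is the same astronomically large number under either plan, so comparing expected values reduces to comparing the probabilities themselves.)

```python
# made-up illustrative numbers from the paragraph above
p_survive_mostly_alignment = 1e-10
p_survive_peerless = 1e-8

value_of_survival = 1e30  # any huge number; it is the same factor on both sides

ev_alignment = p_survive_mostly_alignment * value_of_survival
ev_peerless = p_survive_peerless * value_of_survival

assert ev_peerless > ev_alignment  # favor the peerless, by a factor of 100
```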
See also: [this post](ai-risk-plans.html) comparing plans.", "date_published": "2022-04-11T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "8bfc9b4d8f0e0ec0c1a28c5c5f5c8bb3", "title": "bracing for the alignment tunnel", "url": "https://carado.moe/bracing-alignment-tunnel.html", "source": "carado.moe", "source_type": "blog", "text": "bracing for the alignment tunnel\n--------------------------------\n\n\nit looks like we're gonna invent AI that [kills everyone](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) before we figure out [AI alignment](https://en.wikipedia.org/wiki/AI_alignment).\n\n\nwhat this means is that soon, [if not already](https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html), we are going to start bleeding [timelines](https://en.wikipedia.org/wiki/Many-worlds_interpretation), **hard**; by which i mean, an increasing ratio of multiverse-instants are gonna become dominated by unaligned AIs — and thus be devoid of population ([probably](above-paperclips-2.html)).\n\n\nafter that, there is a period in the (ever-diminishing amount of) surviving timelines, where we [ride on quantum immortality](https://en.wikipedia.org/wiki/Quantum_suicide_and_immortality) to solve alignment; after which, we finally reach [U-lines](timeline-codes.html), hopefully.\n\n\nby many theories of [anthropics](https://www.lesswrong.com/posts/RnrpkgSY8zW5ArqPf/sia-greater-than-ssa-part-1-learning-from-the-fact-that-you), observing existing either before or after that period is a lot more likely than observing existing in it. before the period, it is more likely because there are a lot more populated timelines in which to exist; after the period, it is more likely because we can hopefully \"repopulate horizontally\" by allowing the population to increase again.\n\n\nif i am correct in the reasoning in this post, then being someone who exists in this very narrow alignment \"tunnel\" is exceedingly unlikely (barring weird circumstances such as post-singularity mankind choosing to simulate many variants of the tunnel for some reason). indeed, if you do observe being in it, you should think that something weird is going on, and update against the narrative presented in this post.\n\n\nyet, *again if i am correct*, this is a period where we need to hold tight and work on alignment, perhaps as quickly as possible in order to reduce astronomical waste. very few us's inhabit the tunnel, but those very few us's are the critical ones who we need to care about.\n\n\nso we need to brace our minds for the alignment tunnel. we need to commit to be persons who, if we observe being in the tunnel, will keep working on alignment even if, *from inside those timelines*, it looks like the reasoning i'm presenting here can't possibly be right. this is perhaps a weird case of instrumental rationality.\n\n\n(note that i'm not saying the conclusion of observing being in those timelines should be to stop working on alignment; perhaps we would want to work on it either way, in which case we don't have to worry about anything. 
but i worry that it could lead us to other places such as \"oh, maybe this AI killing everyone business isn't real after all, or maybe a weird alien force is preventing us from dying somehow\")", "date_published": "2022-04-09T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "db4aed944abf194be258b83851cdf8fd", "title": "should we implement free will?", "url": "https://carado.moe/implement-free-will.html", "source": "carado.moe", "source_type": "blog", "text": "should we implement free will?\n------------------------------\n\n\nif many worlds [is true](https://www.readthesequences.com/If-Many-Worlds-Had-Come-First), then in our current paradigm of physics, every possible outcome of any time step is instantiated.\n\n\nhowever, once we [take over everything](ai-alignment-wolfram-physics.html), we might be able to hack physics to implement [exact computing](exact-minds-in-an-exact-world.html), and then upload ourselves to simulations running on those.\n\n\nit is then to be decided: should we choose to keep forking every possibility? or should we select, at least in simulated minds, some outcome as *the only one* that gets instantiated, such as by selecting the most likely outcome and instatiating only that? or maybe it should be [up to persons and societies](%E2%88%80V.html)?", "date_published": "2022-03-29T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "3052284754ce76326af25520d696645a", "title": "goals for emergency unaligned AI", "url": "https://carado.moe/emergency-unaligned-ai-goals.html", "source": "carado.moe", "source_type": "blog", "text": "goals for emergency unaligned AI\n--------------------------------\n\n\nin [a previous post](when-in-doubt-kill-everyone.html) i talk about killing timelines — by making an AI fill them with something that's an easy to implement goal, such as paperclips — to avoid the even worse outcome of them becoming [S-lines](timeline-codes.html). in this post i wonder: can we do better than paperclips?\n\n\nif the True Nature of the cosmos is [a universal program](universal-complete.html), then there could be some things to turn timelines into that consume less compute cycles; for example, maybe we somehow make an AI that makes its timeline run as little compute as possible. the cosmic universal program will then spend less many cycles running those dead timelines, and more running remaining live timelines — making survivors in them possibly \"more real\". in this sense, it may be that pruning timelines can be done without causing astronomical waste: compute time and therefore kinda [\"realness\"](udassa-time-steps.html) or [\"soul juice\"](https://www.lesswrong.com/posts/Hcc9fopx7sRexYhhi/anthropics-and-the-universal-distribution/) are redistributed to remaining timelines.\n\n\neven if the \"naive\" most likely \"implementation\" of our universe consumes just as much compute regardless of what goes on in it, the universal computation will contain other \"implementations\" that \"compress\" compressible timeline-states, and we will reclaim cycles in them — and if they are good at compressing, those implementations might be where most of our realness juice is located anyways.\n\n\nanother possibility is to task an AI with turning its timeline into a state that is as identical as it can be to another timeline in which said AI's didn't appear. if it can achieve that, then we can kill timelines by replacing them with copies of alive timelines. 
this also recycles compute time, possibly more efficiently since it doesn't rely on compressing implementations.", "date_published": "2022-03-22T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "d324c8eb53df958e727d92ddf18c3e8d", "title": "are there finitely many moral patients?", "url": "https://carado.moe/finite-patients.html", "source": "carado.moe", "source_type": "blog", "text": "are there finitely many moral patients?\n---------------------------------------\n\n\nwouldn't it be neat if we didn't have to worry about [infinite ethics](https://www.lesswrong.com/posts/5iZTwGHv2tNfFmeDa/on-infinite-ethics) ?\n\n\ni think it is plausible that there are finitely many moral patients.\n\n\nthe first step is to [deduplicate moral patients by computational equivalence](deduplication-ethics.html). this merges not only humans and other creatures we usually care about, but also probably a lot of [other potential sources of moral concerns](https://reducing-suffering.org/what-are-suffering-subroutines/).\n\n\nthen, i think we can restrict ourselves to patients in worlds that are discrete (like ours); even if there *were* moral patients in non-discrete worlds, it seems to me that from where we are, we could only access discrete stuff. so whether by inherent limitation, plain assumption, or just limiting the scope of this post, i'll only be talking about discrete agents (agents in discrete worlds).\n\n\nonce we have those limitations (deduplication and discreteness) in place, there are finitely many moral patients of any given size; the only way for an infinite variety of moral patients — or more precisely, moral patient moments — to come about is for some moral patients to grow in size forever. while infinite time seems plausible [even in this world](ai-alignment-wolfram-physics.html), it is not clear to me that whatever the hell a \"moral patient\" is can be arbitrarily complex; perhaps at a certain size, i start only caring about a *subset* of the information system that a \"person\" would consist of, a [\"sub-patient\"](deduplication-ethics.html).", "date_published": "2022-03-21T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "684ab27af8bdfd700294c1e34feca2aa", "title": "making the UD and UDASSA less broken: identifying time steps", "url": "https://carado.moe/udassa-time-steps.html", "source": "carado.moe", "source_type": "blog", "text": "making the UD and UDASSA less broken: identifying time steps\n------------------------------------------------------------\n\n\nthe [universal distribution](https://www.lesswrong.com/posts/XiWKmFkpGbDTcsSu4/on-the-universal-distribution) (\"UD\") and [its applicability to anthropics](https://www.lesswrong.com/posts/Hcc9fopx7sRexYhhi/anthropics-and-the-universal-distribution/) apparently suffers from some issues.\n\n\none is uncomputability. i think a speed penalty seems reasonable; but as a more general solution, i think it is unreasonable to \"wait for the machine to halt\". instead, let us see turing machines as running forever, with getting stuck on a repeating state as a special case. 
then, the space of all computations on a given universal turing machine (\"UTM\") is a set of pairs `(input program, time step)`; or more simply, if we use a [universal complete](universal-complete.html) program, it is *just* a time step: every computation will be ran at some point.\n\n\nthis model feels quite natural to me, and seems like an easy way to rank priors: weigh the result of hypotheses by the inverse of the time step at which the universal complete program runs into them.\n\n\nthis also helps with [UDASSA](https://www.lesswrong.com/posts/Hcc9fopx7sRexYhhi/anthropics-and-the-universal-distribution): if you use a deterministic [model of computation](https://en.wikipedia.org/wiki/Model_of_computation) where every computation step only does a finite amount of stuff (like turing machines or SKI calculus, but unlike [wolfram-style graph rewriting](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/)) then you don't need a \"claw\" program to locate you within the world described by the world program; you are simply located at the set of time steps which happen to be updating the part of world that is *you*. this gives us a natural \"locator\" for persons within the space of all computation; it flattens together time, space, timelines (when a computation splits a world into multiple states all of which it then continues computing), and possible computation machines all into one neat linear sequence of steps.", "date_published": "2022-03-21T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "4153531bb2830014b9e96a82f3183cf1", "title": "values system as test-driven development", "url": "https://carado.moe/values-tdd.html", "source": "carado.moe", "source_type": "blog", "text": "values system as test-driven development\n----------------------------------------\n\n\ni realized something while reading [hands and cities on infinite ethics](https://www.lesswrong.com/posts/5iZTwGHv2tNfFmeDa/on-infinite-ethics): the work of determining the shape of [our values system](not-hold-on-to-values.html) is akin to [test-driven development](https://en.wikipedia.org/wiki/Test-driven_development).\n\n\nwe are designing a procedure (possibly looking for [the simplest one](https://en.wikipedia.org/wiki/Kolmogorov_complexity)) by throwing it at a collection of decision tests, and looking for which one matches our intuitions.\n\n\ni wonder if a value-learning approach to AI alignment could look like trying to get superintelligence to find such a procedure; perhaps we feed it a collection of tests and it looks for the simplest procedure that matches those, and hopefully that extrapolates well to situations we didn't think of.\n\n\nperhaps, even pre-superintelligence we can formalize values research as tests and try to come up with or generate a simple procedure which passes them while also being selected for simplicity.\n\n\nwhy simplicity? doesn't occam's razor only apply to descriptive research, not prescriptive? 
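(a minimal sketch of what values-research-as-tests could look like; the tests, candidate procedures, and crude simplicity measure here are all made up.)

```python
# toy "test-driven values" setup: a candidate procedure maps a situation to
# a decision; keep only the candidates that match every intuition-test, and
# among those prefer the simplest (here, crudely, the shortest source code).
import inspect

# made-up decision tests: (situation, the decision my intuitions endorse)
tests = [
    ({"a": "save 1 person", "b": "save 5 people"}, "b"),
    ({"a": "lie for no benefit", "b": "tell the truth"}, "b"),
]

def always_pick_b(situation: dict) -> str:
    return "b"

def maximize_people_saved(situation: dict) -> str:
    return "b" if "5 people" in situation["b"] else "a"

candidates = [always_pick_b, maximize_people_saved]

passing = [c for c in candidates
           if all(c(situation) == expected for situation, expected in tests)]
simplest = min(passing, key=lambda c: len(inspect.getsource(c)))

# prints "always_pick_b": with only two toy tests, a degenerate procedure
# wins, which is why the choice (and number) of tests matters so much.
print(simplest.__name__)
```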
that is true, but \"what is the procedure that formalizes my values system\" is indeed a descriptive matter, in a way: we're trying to model something to the best factual accuracy we can.", "date_published": "2022-03-21T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "1578b3dd2e555c69e8b161b16a1b638f", "title": "the word \"syntax\" in programming, linguistics and LISP", "url": "https://carado.moe/the-word-syntax.html", "source": "carado.moe", "source_type": "blog", "text": "the word \"syntax\" in programming, linguistics and LISP\n------------------------------------------------------\n\n\nin programming, the word \"syntax\" is used to describe quite superficial characteristics of a programming language: what an if/else statement looks like, how instructions are separated, how string literals are written and escaped, things like that.\n\n\nin linguistics, the word carries a deeper meaning: \"[syntax](https://en.wikipedia.org/wiki/Syntax)\" refers to all the ways words are structured together and includes grammatical tenses and the like; it cares about how the [tokens](https://en.wikipedia.org/wiki/Lexical_analysis#Token) relate to each other in a complex way which in programming would be moreso described as being about e.g. types.\n\n\nin [LISP](https://en.wikipedia.org/wiki/Lisp_%28programming_language%29), as opposed to most programming languages, i feel like the term syntax has been reappropriated to be closer to its linguistic use: it refers to how macros (or [fexprs](https://web.cs.wpi.edu/~jshutt/kernel.html)) manipulate expressions in a more profound way, more akin to the linguistic usage of the term; see notably [Scheme](https://en.wikipedia.org/wiki/Scheme_%28programming_language%29)'s [`define-syntax`](http://www.shido.info/lisp/scheme_syntax_e.html) special form.\n\n\nso while from an outside perspective, \"LISP syntax is very simple\" can be true in a straightforward sense, it misses the whole point of LISP: that its true, *deep* syntax is arbitrarily complex, dynamic, and manipulable.", "date_published": "2022-03-21T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c97484ddddb6cd8001a5f449ea6b1c9c", "title": "experience/moral patient deduplication and ethics", "url": "https://carado.moe/deduplication-ethics.html", "source": "carado.moe", "source_type": "blog", "text": "experience/moral patient deduplication and ethics\n-------------------------------------------------\n\n\nsuppose you can spend a certain amount of money (or effort, resources, etc) to prevent the spawning of a million rooms (in, let's say, simulations), with an exact copy of one random person in each. they will wake up in the rooms, spend a week not able to get out (basic necessities covered), then get tortured for a week, and then the simulations are shut down.\n\n\ni want to split this hypothetical into four cases:\n\n\n* the **identical** case (`I`): the million rooms and persons are exactly identical simulations.\n* the **mildly different** case (`M`): the million rooms are the exact same, except that each room has, somewhere on one wall, a microscopically different patch of paint. the persons likely won't be able to directly observe the difference, but it *will* probably eventually cause the million brains to diverge from each other.\n* the **quite different** case (`Q`): the million rooms will have different (random) pieces of music playing, as well as random collections of paintings on the walls, random collections of books, movies, video games, etc. 
to pass the time.\n* the **very different** case (`V`): same as the **quite different** case, but on top of that the rooms actually contain a random person picked from random places all over the world instead of copies of the same person.\n\n\nthe point is that you should want to reduce suffering by preventing the scenario, but how much you care should be a function of whether/how much you count the million different persons' suffering as *multiple* experiences.\n\n\nit seems clear to me that one's caring for each case should increase in the order in which the cases are listed (that is, **identical** being the least cared about, and **very different** being the most cared about); the question is more about the *difference* between consecutive cases. let's call those:\n\n\n* `IM` = difference in caring between the **identical** case and the **mildly different** case\n* `MQ` = difference in caring between the **mildly different** case and the **quite different** case\n* `QV` = difference in caring between the **quite different** case and the **very different** case\n\n\ncurrently, my theory of ethics deduplicates identical copies of moral patients (for reasons such as [not caring about implementation details](persistent-data-structures-consciousness.html)), meaning that i see the **mildly different** case as fundamentally different from the **identical** case. `IM > MQ ≈ QV`, and even `IM > (MQ + QV)`.\n\n\n![](deduplication-ethics-1.png)\n\n\nhowever, this strikes me as particularly unintuitive; i *feel* like the **mildly different** case should get an amount of caring much closer to the **identical** case than the **quite different** case; i *feel* like i want to get `QV > MQ > IM`, or at least `QV > IM < MQ`; either way, definitely `IM < (MQ + QV)`.\n\n\n![](deduplication-ethics-2.png)\n\n\nhere are the ways i can see out of this:\n\n\n1. bite the bullet. commit to the idea that the slightest divergence between moral patients is enough to make them distinct persons worth caring about as different, much more than further differences. from a strict computational perspective such as [wolfram physics](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/), it might be what makes the most sense, but it seems quite unintuitive. this sort of caring about integer numbers of persons (rather than continuous quantities) maybe also feels mildly akin to [SIA's counting of world populations](https://www.lesswrong.com/posts/RnrpkgSY8zW5ArqPf/sia-greater-than-ssa-part-1-learning-from-the-fact-that-you), in a way, maybe.\n2. interpolate difference: two moral patients count *more* if they are *more* different (rather than a strict criterion of perfect equality). this seems like the straightforward solution to this example, though if the curve is smooth enough then it runs into weird cases like caring more about the outcome of one population of 1000 people than another population of 1001 people, if the former is sufficiently more heterogeneous than the latter. it kinda feels like i'm *rewarding* moral patients with extra importance for being diverse; but i'm unsure whether to treat the fact that i also happen to value diversity as coincidence or as evidence that this option is coherent with my values.\n3. fully abandon deduplication: count the million moral patients as counting separately in the first case. 
this is the least appealing to me because from a functional, computational perspective it doesn't make sense to me, and [i can make up \"implementation details\" for the universe under which it breaks down](persistent-data-structures-consciousness.html). but, even though it feels as intangible as positing some magical observer-soul, maybe implementation details *do* matter?\n4. de-monolithize moral patients; consider individual pieces of suffering instead of whole moral patients, in the hope that in the **mildly different** case i can extract a sufficiently similar suffering \"sub-patient\" and then deduplicate that sub-patient.\n\n\ni think i'll tentatively stick to 1 because 2 feels *weird*, but i'll consider it more; as well as making room for the possibility that 3 might be right. finally, i'm not sure how to go about investigating 4; but compared to the other three it is at least materially investigatable — surely, either such a sub-patient can be isolated, or it can't.", "date_published": "2022-03-06T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "2c3e480c44a1a068802baa1637b36780", "title": "recognition", "url": "https://carado.moe/recognition.html", "source": "carado.moe", "source_type": "blog", "text": "recognition\n-----------\n\n\n(spoilers for episode 8 of [girls' last tour](https://en.wikipedia.org/wiki/Girls'_Last_Tour#Anime))\n\n\nit feels to me like there is something special about recognition by other persons. i will call it \"recognition\" — the fact of having one's experience be perceived by other persons, and valuing things about that.\n\n\nrecognition can be across not just space but also time; while people in the present recognizing the experiences of people in the past is a common thing, in the example above it is a society of the past that recognizes the experiences of future persons, through the symbol of the statue.\n\n\nin general, i care a lot about recognition. i feel like things i do are particularly more meaningful if even just one person perceives them, in a way maybe akin to how [the subaltern finds their voice](https://www.youtube.com/watch?v=rVarn-m7o9k) (spoilers for Bladerunner).\n\n\nthis leads me to weird situations like valuing \"pity points\"; but in general, being a member of a society where some persons perceive my experiences, and others perceive theirs and so on in one wide web of mutual recognition, is a very homely feeling, and i feel like it is one i wouldn't want to be false (for example by everyone else being non-conscious high-fidelity NPCs as could exist in [dishonest simulations](https://www.lesswrong.com/posts/r7f58E8A85xLgWuqG/contact-with-reality)).", "date_published": "2022-03-04T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "9f1288fceca65e0f18fdebbd1b59d6b8", "title": "do not hold on to your believed intrinsic values — follow your heart!", "url": "https://carado.moe/not-hold-on-to-values.html", "source": "carado.moe", "source_type": "blog", "text": "do not hold on to your believed intrinsic values — follow your heart!\n---------------------------------------------------------------------\n\n\ni posit the following framework for thinking about [intrinsic values](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value) (hereafter just called \"values\").\n\n\nsomewhere out there, there is **your value system**.\n\n\nit is a function (in the mathematical sense) that takes as input *things*, and spits out *feelings*. 
that function is where your *true* values lie; they *are* that function.\n\n\nhow is that function encoded in your brain? who knows! i don't have an answer, and there may [not be an answer](https://en.wikipedia.org/wiki/Computational_irreducibility).\n\n\nin conscious thought, however, you don't have access to the source code of that function, whatever it is. the best we can do for the moment seems to be to try thinking about it real hard, throw various things at it and examine the output ([perhaps through some mildly systematic process](core-vals-exist-selfdet.html)), and define another function that tries to approximate what your actual values are. this is hard and takes a lot of work. perhaps someday we will have a device that can scan your brain and give you a better idea of what your value function looks like, but at the moment that is not the case.\n\n\nso, you build up an idea of what your values might be. here is where i think a lot of people make a mistake: they choose to believe strongly that this guess *is* their actual set of values (even though [it likely isn't](https://www.readthesequences.com/Value-Is-Fragile)). they [crystallize](value-crystallization.html) those values; they live by them, until they in turn become influenced by those values and perhaps *actually adopt them*. (the actual value function is mutable!)\n\n\nthis is generally bad; [you should want to preserve whatever your values are](https://en.wikipedia.org/wiki/Instrumental_convergence#Goal-content_integrity). hence the appeal that stands as the title of this post: do *not* hold on to the approximate function that is your best guess at what your value system is; you're only human, your guess is likely incorrect, and adopting it would run the risk of damaging your *actual* values; a function which, while hard to figure out, can be mutable, will certainly be mutated by acting as if your values are not what they are, and whose mutation you should generally want to avoid.\n\n\nso, pay attention to your feelings. they are what is the output of your *actual* values system, by definition; follow your heart, not your reason's best guess.\n\n\nnote that this is *not* an appeal to favor deontology over consequentialism: how you feel can be about actions (deontology) or about outcomes (consequentialism), and which one it is is orthogonal to whether you follow that system, or whether you decide to follow your current best approximation of it. if you are consequentialist (as i recommend), just make sure to give your value system a full picture of what the outcome would look like, and *then* decide based on what feelings are produced by that.\n\n\nmeta note: this framework for thinking about values should be itself held with suspicion, as should probly any formal framework that concerns values. you should be careful about holding on to it just like you should have been careful about holding onto your believed values (careful enough to be able to consider the present post, for example). 
which isn't to say *don't believe what i just wrote*, but leave room for it to be wrong, partially or wholly.", "date_published": "2022-03-02T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "0ae55b0a1aaced703612df5a357a1432", "title": "my current pyramid of needs", "url": "https://carado.moe/pyramid-needs.html", "source": "carado.moe", "source_type": "blog", "text": "my current pyramid of needs\n---------------------------\n\n\n![](pyramid-needs.svg)\n\n\n([on concrete vs sublime](https://www.lesswrong.com/posts/SLw2MEgxFtiKAqgQ5/actually-possible-thoughts-on-utopia))", "date_published": "2022-02-23T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "cbce0f1187d9ad55dfae40dfd04d2864", "title": "forking bitrate and entropy control", "url": "https://carado.moe/forking-bitrate-entropy-control.html", "source": "carado.moe", "source_type": "blog", "text": "forking bitrate and entropy control\n-----------------------------------\n\n\nif physics is based on a computational framework such as [wolfram physics](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/), but plausibly even if not, such that not all states the universe can be in produce the same number as next possible states;\n\n\nin addition, if i am to follow to conclusion my current belief that moral patients count as different when they start being functionally different in terms of computation, and that exact copies morally count as a single person (as it makes [not much sense](persistent-data-structures-consciousness.html) to believe otherwise);\n\n\nand if ([as it seems to be the case](limiting-real-universes.html)) the universe values coherence and thus only a limited set of local outcomes can emerge from a given local situation, or at least outcomes are weighed by coherence;\n\n\nthen it makes sense to start caring about the amount of forking a given timeline goes through. which is to say: the amount of future states to be [instantiated](questions-cosmos-computations.html), be it directly next step or indirectly in the longer term.\n\n\nin fact, if one calls what they care about *moral patients*, then we should care about the \"forking bitrate\" of moral patients. for example, we could want moral patients with a net negative future to be forked as little as possible, and moral patients with a net positive future to be forked as much as possible. 
considering forks are created over steps of time, and entropy seems to be a good measure for them, i think \"bitrate\" is an appropriate term for this; hence, *forking bitrate*.\n\n\nif we're just talking about a place as small as earth, we can estimate that consequences rapidly ramify out to all moral patients; and as such, it seems reasonable to think that the forking bitrate of all patients will tend to go about in the same direction.\n\n\nso, if you see a quantum dice, should you throw it?\n\n\nif you think the future of earth has expected net positive moral value, or has little enough suffering for your taste (depending on your moral framework), then yes: by throwing the (quantum) dice, you might be multiplying the number of instances of that value by the number of possible outputs of the dice, by creating that many times more future timelines.\n\n\nif not, then you shouldn't throw it.\n\n\n(even in the absence of quantum effects, if one were to just move entropy around while [phase space is conserved](https://www.lesswrong.com/posts/QkX2bAkwG2EpGvNug/the-second-law-of-thermodynamics-and-engines-of-cognition), moving the entropy from not-moral-patients to moral-patients (or whichever thing you care about) still has that effect, i think)\n\n\nthis can probly be expanded to much larger-scale entropy control — and also, if superintelligences care about it (and if they're to be aligned, we might want them to) we can expect them to use it to maximize their value. even a [paperclip maximizer](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) can want to create as many timelines containing as many varied possible paperclips and arrangements thereof, if it is made to care about that.", "date_published": "2022-02-06T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "4b3829d7db8e08321a354ae8df8a34ad", "title": "political technology", "url": "https://carado.moe/political-technology.html", "source": "carado.moe", "source_type": "blog", "text": "political technology\n--------------------\n\n\nsome communists like to claim that one of the problems with capitalism is the requirement for people to work. liberals can counter-argue that this is a fact of the state of technology and automation being not quite there yet, rather than what political system is in place: even under communism, someone would have to make the stuff.\n\n\nsimilar claims can be made about other material conditions such as scarcity or the meat industry.\n\n\nwhile not necessarily an unreasonable point, i feel like the liberal counter-argument misses a point and suggests an overly narrow view. indeed, part of the work of achieving communism is bringing about the technologies that enable communism to exist. the work of moving towards communism isn't as narrow as \"what policies do we apply to the material conditions of today\"; it extends to the transformation of these very material conditions.\n\n\nsome works of construction, while they might seem like innocuous new pieces of liberal society or even successes of that system, can in fact be steps on the road to overcoming capitalism. 
in fact, the last two ideas are not incompatible: it is to be remembered that communists have historically seen the move from feudalism to capitalism as a step of improvement, from which we can yet improve even further.\n\n\nand technology is not an ever-forward-moving sequence of inevitable innovations; it has real directions that can be influenced and can even [decline](https://www.youtube.com/watch?v=pW-SOdj4Kkk) sometimes.", "date_published": "2022-02-04T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "f12513e20b3968aa0c9736670876b78c", "title": "balancing utilitarianism", "url": "https://carado.moe/balancing-utilitarianism.html", "source": "carado.moe", "source_type": "blog", "text": "balancing utilitarianism\n------------------------\n\n\nsuppose you have multiple values (\"i want to be healthy but also i want to eat a lot of fries\") or a value applying to multiple individuals (\"i want both alice and bob to be happy\"), but sometimes there are tradeoffs between these values. how do you resolve such situations ?\n\n\na simple weighted sum might suffice in many cases, but i feel like there are cases where this is not sufficient.\n\n\nfor example, consider a population of 5 persons, who you care about equally, and consider a simple scalar value you have for them, such as happiness.\n\n\nnow, consider the following three options:\n\n\n* all individuals get 0.5 utility (\"fair\")\n* one individual gets 0.9 utility, the other four get 0.4 utility (\"bully\")\n* one individual gets 0.1 utility, the other four get 0.6 utility (\"scapegoat\")\n\n\nif we are to use a simple sum, all three of these situations sum to 2.5 total utility; yet, i feel like something ought to be done to favor the fair situation over the other two (and then probably to favor the bully situation over the scapegoat situation?)\n\n\nwhat i propose to address this is to apply a square root (or other less-than-one exponent) to the utilities of persons before summing, which has the effect of favoring more equal situations. in this case, we get:\n\n\n* fair: 3.54 utility\n* bully: 3.48 utility\n* scapegoat: 3.41 utility\n\n\nwhich does seem to produce the desired effect: in this situation, it maps to how i *feel* about things: fair > bully > scapegoat", "date_published": "2022-02-04T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "29049fa1d1c1d0ccb53434c7f669aa44", "title": "hackable multiverse", "url": "https://carado.moe/hackable-multiverse.html", "source": "carado.moe", "source_type": "blog", "text": "hackable multiverse\n-------------------\n\n\nin [a previous post](brittle-physics.html) i talk about how hackable physics might allow a superintelligence to take over very quickly (perhaps faster than the speed of light).\n\n\nin *[psi rewriting](psi-rewriting.html)* i propose that multiversehood can be more cleanly described as a particularly implemented feature of the cosmos, rather than an intrinsic thing.\n\n\nbut, if the cohabitation of multiple timelines is indeed an implemented feature rather than a primitive one, then there is a possibility that it is hackable, and that a superintelligence could hack across timelines.\n\n\nnow, it is to be noted that even if hackability exists, it might still be limited: perhaps there is something like a light cone at play, or perhaps a given timeline can only access a finite number of other timelines.\n\n\nit is to be remembered that timelines are not slots, they're not variables that hold values; timelines are *the values themselves*. 
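(referring back to the balancing-utilitarianism sums a few paragraphs up: a quick numeric check, my own sketch rather than anything from the post, that aggregating with a less-than-one exponent reproduces the fair > bully > scapegoat ordering:)

```python
def aggregate(utilities, exponent=0.5):
    # concave (less-than-one exponent) aggregation instead of a plain sum
    return sum(u ** exponent for u in utilities)

fair      = [0.5] * 5
bully     = [0.9] + [0.4] * 4
scapegoat = [0.1] + [0.6] * 4

for name, pop in [("fair", fair), ("bully", bully), ("scapegoat", scapegoat)]:
    print(name, round(sum(pop), 2), round(aggregate(pop), 2))
# plain sums are all 2.5; square-root aggregation gives about 3.54, 3.48, 3.41,
# recovering the ordering fair > bully > scapegoat
```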
still, hackability could mean some branches of the causality graph stop getting computed, for example.\n\n\neither way, under these conditions, even quantum immortality might not save us from an X-risk superintelligence, and [given](https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode) [recent](https://openai.com/blog/formal-math/) [developments](https://blog.eleuther.ai/announcing-20b/), we should panic a lot.", "date_published": "2022-02-03T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "5e23fbcbbf2015bc05678224a6961a1d", "title": "a cognitively hazardous idea", "url": "https://carado.moe/a-cognitively-hazardous-idea.html", "source": "carado.moe", "source_type": "blog", "text": "a cognitively hazardous idea\n----------------------------\n\n\n**caution: this post is a cognitively hazardous idea which may cause you to change your behavior and regret having learned about said idea. please don't proceed without informed consent, and please don't tell people about cognitive hazards without their own informed consent.**\n\n\nsix months ago, [i was worried about AI development going too far](were-all-doomed.html). today, things [keep](https://deepmind.com/blog/article/Competitive-programming-with-AlphaCode) [going](https://openai.com/blog/formal-math/) [badly](https://blog.eleuther.ai/announcing-20b/), to the point that i think it has become utilitarianistically useful to release this idea i've thought about recently.\n\n\nin *[how timelines fall](how-timelines-fall.html)* i talk about how, if we are to keep observing a timeline that somehow survives [X-risks](https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence) even as they become increasingly likely, we should observe our timeline doing whatever chance things it takes to avoid them happening — including global economic collapse, if that's a likely enough event.\n\n\nturns out, if *you, personally* choose to do something which might help AI development (and thus increase the probability of X-risk, or if you prefer, the amount of timelines that die to X-risk) then you make *yourself* something that will tend to have been incapacitated in surviving timelines. you might die, which would be unpleasant to the people who like you; but, you might also just eventually quit that job, or become unable to work for whatever reason.", "date_published": "2022-02-02T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c7fcfe307549d6b1fed1816f2f88a7de", "title": "how timelines fall", "url": "https://carado.moe/how-timelines-fall.html", "source": "carado.moe", "source_type": "blog", "text": "how timelines fall\n------------------\n\n\ni've speculated that we are all together, as a civilization, quantum immortal; [timelines where we all die](timeline-codes.html) can somewhat [be ignored](brittle-physics.html), leaving us mostly just with concerns of [heaven vs hell](botched-alignment-and-awareness.html) timelines.\n\n\nbut, in the lucky timelines where we *do* keep avoiding an [X-risk](https://en.wikipedia.org/wiki/X-risk) [superintelligence](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer), what does that look like ?\n\n\nit would be silly to expect that avoiding such a superintelligence would look like trying to press the button to turn it on but at the last minute the button jams, or trying to press the button to turn it on but at the last minute the person about to press it has a heart attack. 
indeed, bayes should make us think that we should expect it to look like whatever makes it likely that superintelligence fails to be implemented.\n\n\nwhat does this look like ?\n\n\nglobal nuclear war, broad economic collapse, great cataclysms or social unrest in cities where most of the AI development is done, and other largely unpleasant events.\n\n\ndon't expect the world to look like the god of anthropics is doing miracles to save us from superintelligence; expect the world to look like he's slowly conspiring to do whatever it takes to make superintelligence unlikely to happen long in advance.\n\n\nexpect the god of anthropics to create AI winters and generally make us [terrible at software](https://www.youtube.com/watch?v=pW-SOdj4Kkk).\n\n\nexpect the god of anthropics to create plausible but still surprising reasons for tensor hardware to become scarce.\n\n\nlook around. does this look like a century where superintelligence appears ? yes, i think so as well. the god of anthropics has his work cut out for him. let's try and offer him timelines where AI development slows down more peacefully than if he has to take the initiative.\n\n\nwhile some of us are working on aligning god, the rest of us should worry about aligning luck.", "date_published": "2022-01-11T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "453bc64337f823afc7dd2083a418a219", "title": "uploading people for alignment purposes", "url": "https://carado.moe/upload-for-alignment.html", "source": "carado.moe", "source_type": "blog", "text": "uploading people for alignment purposes\n---------------------------------------\n\n\nas per [my utopian vision](%E2%88%80V.html), i've thought that an aligned AI would want to figure out how to upload us.\n\n\nbut, thinking about it more, it could be the other way around: if we can upload people in a deterministic simulation, this can buy us a lot of time to figure out alignment, as per [this post](noninterf-superint.html).\n\n\nnotably, the simulation could for example contain a single uploaded person (say, eliezer yudkowsky, or a bunch of copies of yudkowsky), which would save us from an arms-race type coordination problem; and while, on the outside, the superintelligence is killing everyone instantly to tile the universe with more compute to run this simulation, whoever's inside of it has plenty of time to figure things out (and hopefully [resurrect everyone once that's done](what-happens-when-you-die.html)).\n\n\nthis seems like a long shot, but [have you looked around?](https://www.lesswrong.com/s/n945eovrA3oDueqtq) this could be the [miracle](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/7im8at9PmhbT4JHsW) we need.\n\n\nof course this could also turn into a [hell](botched-alignment-and-awareness.html) where infinite yudkowsky's are suffering forever everywhere. 
hopefully we can make another button which actually stops the simulation and tiles the universe with only benign paperclips, and maybe even make that button auto-activate if the yudkowsky is detected to be suffering or incoherent.\n\n\nremember: [as long as the simulation is deterministic, superint can't force the uploaded yudkowsky to not shut it down](noninterf-superint.html), or force or even coerce him to do anything for that matter; it can only make the yudkowsky simulation run slower, which basically eventually achieves the same effect as either completing it or shutting it down.", "date_published": "2022-01-11T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "521c12bd6ad2558e376137a96dbe215d", "title": "questions about the cosmos and rich computations", "url": "https://carado.moe/questions-cosmos-computations.html", "source": "carado.moe", "source_type": "blog", "text": "questions about the cosmos and rich computations\n------------------------------------------------\n\n\n**computation**: a running state of any [model of computation](https://en.wikipedia.org/wiki/Model_of_computation); for example, a specific [SKI calculus expression](https://en.wikipedia.org/wiki/SKI_combinator_calculus), or a specific turing machine with its rules, current state, and current tape values. given that any model of computation can run the computations of any other model, it does not really matter which one we choose, and i will be juggling between different models throughout this post.\n\n\n### 1: is any computation rich ?\n\n\n**rich**: a computation is rich if it is generally [computationally irreductible](https://en.wikipedia.org/wiki/Computational_irreducibility). as a tentative formal definition for richness, i'm tempted to say that a computation is rich if there is no function able to generally predict any of its future states in a time [less than linear](https://en.wikipedia.org/wiki/Computational_complexity_theory) in the number of steps it would take to arrive at that state normally. for example, [rule 30](https://en.wikipedia.org/wiki/Rule_30) *looks* rich: it looks like to calculate the value of cell at index `i` at time step `j`, it generally takes about `O(abs(i) × j)` steps of computation. on the other hand, it looks like [rule 54 and rule 60](https://mathworld.wolfram.com/ElementaryCellularAutomaton.html) can generally have their cells predicted in time logarithmic to the number of computational steps it would naively take to arrive at them.\n\n\nnote that richness is not the same as halting: while a halting computation is necessarily not rich, a non-halting computation can either be non-rich (like rule 54), or rich (possibly like rule 30).\n\n\nit seems clear to me that rich computations exist: for example, it is known that sorting a list of `n` elements takes `O(n × log(n))` steps, and thus a computation running a sorting algorithm of that complexity cannot have its result predicted in a smaller time complexity than it took to calculate naively. 
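(a tiny sketch of mine illustrating the rule 30 claim above: the straightforward way to get the value of a cell is to simulate every earlier row, with no known shortcut:)

```python
def rule30_row(prev):
    prev = [0] + prev + [0]  # the pattern can grow by one cell on each side
    return [(prev[k - 1] if k > 0 else 0)
            ^ (prev[k] | (prev[k + 1] if k + 1 < len(prev) else 0))
            for k in range(len(prev))]

def rule30_cell(i, j):
    # value of the cell at horizontal offset i after j steps, starting from
    # a single black cell; the only method used here is simulating every row
    row = [1]
    for _ in range(j):
        row = rule30_row(row)
    k = len(row) // 2 + i
    return row[k] if 0 <= k < len(row) else 0

assert [rule30_cell(i, 2) for i in range(-2, 3)] == [1, 1, 0, 0, 1]
```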
the ease with which i can demonstrate that, however, makes me doubt my tentative formal definition; maybe something more akin to [polynomial time complexity](https://arxiv.org/abs/1108.1791) would better capture the essence of computational irreducibility: perhaps a better determining question for richness could be \"is there a function which can tell if a pattern looking like this will ever emerge in that computation, in time polynomial to the size of that pattern?\" or \"is there a function that can, in time polynomial to `n`, predict a piece of state that would naively take `aⁿ` steps to compute?\"\n\n\n### 2: does the cosmos instantiate any rich computation ?\n\n\nto **instantiate a computation** means for that computation to, somewhere, eventually, be run (forever or until it halts). i start from the fact that i'm observing a coherent-looking universe, deduce that at least *some* computation is happening, and wonder which other computations are happening (as in, are being observed somewhere, or which i could have observed). as [clarified before](limiting-real-universes.html), one can't just assume that all computations are equally happening: things look way too coherent for that, there seems to be a bias for coherence/simplicity (one which i've tentatively attributed to [how soon that computation spawns](less-quantum-immortality.html)).\n\n\nlooking at the cosmos (the set of instantiated computations) from a computational perspective, it seems like it contains at least our universe, which is expanding. if this expansion is, [as has been hypothesized](https://www.wolframphysics.org/technical-introduction/potential-relation-to-physics/cosmology-expansion-and-singularities/), caused by the computational substrate of the universe manufacturing new vertices of spacetime, and computations can run on this new fabric as it is produced, then it's possible that [some computations can run forever](ai-alignment-wolfram-physics.html), including potentially rich ones.\n\n\nhowever:\n\n\n### 3: does the cosmos contain causal bubbles ?\n\n\na **causal bubble** is a piece of computation that can run forever with the guarantee that it won't be physically interfered with from the outside; see [yes room above paperclips](above-paperclips-2.html).\n\n\nfor example, while one can build [a turing machine inside conway's game of life](https://www.conwaylife.com/wiki/Turing_machine), a stray object on the same conway's game of life plane can eventually collide with said machine and break its computational process.\n\n\nhowever, in some [graph rewriting rulesets](https://en.wikipedia.org/wiki/Graph_rewriting), as well as in expression-rewriting systems with nested expressions such as a variant of [SKI calculus](https://en.wikipedia.org/wiki/SKI_combinator_calculus) or [lambda calculus](https://en.wikipedia.org/wiki/%CE%9B_calculus) where the evaluation rule expands all sub-expressions, some pieces of computation can run without ever being physically interfered with by other pieces of the computation.\n\n\n(i'm specifying \"*physically* interfered with\" because acausal coordination or mutual simulation can lead to interference, but at least that interference is up to the singleton (such as a superintelligence) \"running\" said bubble (if any); they can just choose to never acausally coordinate and to never simulate other bubbles)\n\n\nin our own spacetime, it seems like causal bubbles exist thanks to the expansion of spacetime: some pairs of points get further apart from one another faster than the speed of light, 
and thus should never be able to interact with one another so long as that expansion continues and FTL travel is impossible. under the perspective of wolfram physics, however, it is not clear that both of those things will necessarily be the case forever; spacetime might be [hackable](brittle-physics.html).\n\n\nnote that the splitting of universes with nondeterministic rules (such as ours with quantum mechanics) into different causally isolated timelines is another way for causal bubbles to exist, assuming the implementation of such a nondeterministic universe is that all possibilities are instantiated at any nondeterministic choice.\n\n\nthe presence of causal bubbles allows some pieces of spacetime to [survive superintellingences appearing in other pieces of spacetime](unoptimal-superint-doesnt-lose.html), while the absence of causal bubbles makes it that a superintelligence or collection of superintelligences probably eventually does take over everything.\n\n\nif they exist, then causal bubbles are a blessing and a curse: they save us from alien superintelligences and, [between timelines](timeline-codes.html), from our own superintelligences, but they might also ensure that our own aligned superintelligence (once we have figured out alignment) cannot reach all computation, and thus that any random person has a good chance of existing in a bubble that hasn't been \"saved\" by our aligned superintelligence.\n\n\n### 4. is a universal-complete computation instantiated ?\n\n\n[**universal complete computations**](universal-complete.html) (such as the annex in [this post](less-quantum-immortality.html)) instantiate *all* computations, over time.\n\n\nif one takes the perspective that a top-level \"root\" bubble existed first, then the answer to this question is up in the air.\n\n\nmaybe we are this root computation, and the deterministic fate of the cosmos (in all timelines) is, for example, for physics to break at some point and kill everything, or for a superintelligence to appear at some point and kill everything (the two being [pretty equivalent](brittle-physics.html)) leaving [no room for bubbles](above-paperclips.html).\n\n\nmaybe the root bubble [does spawn](above-paperclips-2.html) a finite and small (after deduplicating by identical computations) number of bubbles, and each of those is fated to be killed in its entirety.\n\n\nor, maybe somewhere in this chain, one of the bubbles spawns *many* new, different bubbles, at which point it becomes likely enough that eventually one of those bubbles either is, or itself later spawns, a universal-complete program. 
in which case, the initial set of the \"root\" bubble and maybe a few other next bubbles serve together as merely the boot process for the program that will eventually spawn *all computations*.\n\n\nit might be interesting to find out how small universal-complete programs can get, both in bubble-friendly frameworks like systematically-expanded SKI calculus, and bubble-unfriendly frameworks like cellular automata; to get an idea how likely they are to randomly be stumbled into.", "date_published": "2022-01-07T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "e75c35ebd67856257b05095f72a8207b", "title": "brittle physics and the nature of X-risks", "url": "https://carado.moe/brittle-physics.html", "source": "carado.moe", "source_type": "blog", "text": "brittle physics and the nature of X-risks\n-----------------------------------------\n\n\nsuppose physics is hackable, and a hard-to-accomplish hack that requires intelligence (like a fancier version of [rowhammer](https://en.wikipedia.org/wiki/Rowhammer)) can break the fabric of spacetime — maybe in ways that said intelligence can take advantage of, such as embedding its computation into something that survives said breakage, in a way that could help such a superintelligence accomplish its goal.\n\n\nwe could expect that [boxing an AI](https://en.wikipedia.org/wiki/AI_box) could be really hard: even without access to the outside, it might be able to guess physics and hack it, from the comfort of its box.\n\n\nas usual in such [X-risk scenarios](timeline-codes.html), i believe we just [keep living only in timelines in which, by chance, we don't die](quantum-suicide.html).\n\n\nthese sorts of hacks are not ruled out by [wolfram physics](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/). indeed, they are plausible, and can spread at some speed faster than the speed of light — because they can run in the substrate *underlying* spacetime — such that nobody would ever be able to observe such hacks: the hack reaches and destroys you before the result of the breakage can reach your sensory organs, let alone your brain.\n\n\nso, maybe \"dumb-goal\" superintelligences such as [paperclip maximizers](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) are popping up all over the place all the time and constantly ruining the immense majority of not-yet-hacked timelines, and we keep living in the increasingly few timelines in which they haven't done that yet.\n\n\nnow, let's stop for a minute, and consider: what if such a hack *isn't* hard ? what if it *doesn't* need an intelligent agent ?\n\n\nwhat if, every planck time, every particle has a 99% chance of breaking physics ?\n\n\nwell, we would observe exactly the same thing: those hacked universes either become computationally simple or [boot up more universes](above-paperclips-2.html); either way, we don't survive in them, so we don't observe those hacks.\n\n\nin this way, it is [S-lines and U-lines](timeline-codes.html) that are very special: outcomes in which we *survive*, thanks to a superintelligence with a \"rich\" goal. the rest is just timelines constantly dying, whether it be due to X-risk superintelligences, or just plain old physics happening to cause this.\n\n\nin fact, let's say that the universe is [a nondeterministic graph rewriting system](https://en.wikipedia.org/wiki/Graph_rewriting) with a rule that sometimes allows everything to be reduced to a single, inactive vertex. 
would this count as \"sometimes everything is destroyed\" ? or would this make more sense to be modeled as a weird quirk of physics where the graph of possible timelines includes the production of passive vertices all the time, which can be safely ignored ?\n\n\nwhat if instead of a nondeterministic system, we have a deterministic one [which just happens to expand all timelines](psi-rewriting.html)? in such a system, \"different timelines\" is no longer a primitive construct: it is merely an observation about the fact that such a system tends to, when run, create from a given piece of data, several newer ones. let's say that in such a system there is a rule where from every piece of data we'd consider a timeline, numerous inert vertices are also created.\n\n\nwould we say \"aha, look! every time a computation step happens, many inert vertices are created around it, and i choose to interpret this as the creation of many timelines (one per inert vertex) in which everyone in that universe dies, and others (new complex pieces of data) in which everything keeps existing\",\n\n\nor would we, in my opinion more reasonably, say \"well, it looks like, as a weird quirk of how this system runs, many inert vertices are popping up; but they're simple enough that we can just ignore them and only consider richer new pieces of data as *timelines* proper.\"\n\n\ni believe, if we are to worry about what states this universe ends up in, we ought to use a measure of what counts as a \"next state of this universe\" that measures something about the richness of its content: maybe the amount of information, maybe the amount of computation going on, or maybe the number of moral patients. and, depending on what measure we use, \"losing\" timelines to paperclip maximizers (which turn the universe into something possibly simple) is no more of a big deal than \"losing\" timelines to a rewriting rule that sometimes creates inert vertices, and neither of those should really count as proper timelines.\n\n\notherwise we end up needlessly caring about degenerate states because of what we believe to be, but really isn't, an objective measure of what a timeline is.\n\n\n*timelines* might be in the [map](https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation), while what is in the territory is just *what we end up observing* and thus, computed states that contain us.\n\n\nfinally, what about universe states where *all* outcomes are an inert vertex or an otherwise simple universe (such as an infinite list of identical paperclips) ? while those might happen, and i'd say *would* count as X-risks, you don't need to consider simple states as timelines to make that observation: maybe some timelines end up in a state where *no* new states can be created (such as a locally truly terminated piece of computation), and others end up in a state where *only simple* new states are created. those ought to be considered equivalent enough, and are what a true X-risk looks like.", "date_published": "2022-01-05T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "ce837d5133520c2de99bec8c6a5a2bf7", "title": "less quantum immortality?
", "url": "https://carado.moe/less-quantum-immortality.html", "source": "carado.moe", "source_type": "blog", "text": "*less* quantum immortality?\n---------------------------\n\n\nif the set of nested universes [really does](what-happens-when-you-die.html) look like a funny graph of bubbles, i think there are two likely possibilities: either the set of bubbles rapidly dries up, or it grows towards infinity; in which case, if compute is infinite [as wolfram would have me think](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/) then as soon as the bubble explosion happens, it's likely a [universal complete](universal-complete.html) algorithm is booted somewhere reasonably fast, itself booting in turn all initial states.\n\n\nthis has the result of instantiating all (countable, discrete) [tegmark 4 universes](https://space.mit.edu/home/tegmark/crazy.html), over time.\n\n\nyet, [we still observe a preference for coherency](limiting-real-universes.html): i think the reasonablest interpretation of what'd be going on is that \"computationally early\" or at least \"computationally frequent\" states are favored; and thus, very weird and incoherent initial-state universes *do* get spawned, but much later and/or are being computed more slowly (for example, maybe computation is equally distributed among all timelines, and as more and more timelines spawn over time each individual one gets updated less and less often).\n\n\nwhile this creates a neat explanation for what selects for universe coherence, it does make it that while [quantum immortality/suicide](quantum-suicide.html) can be considered to \"still work\", if you choose to keep living [only by waiting to be reincarnated later](what-happens-when-you-die.html), you're reducing the \"realness\" of your continued existence; you're making universes in which you continue to live appear only \"computationally later\".\n\n\nit also provides a nice simplicity test for occam's razor: the simplicity of a hypothesis can be akin to how soon a universal-complete program that simulates all spawned computations arrives at it.\n\n\nthis probly doesn't apply to \"classical\" quantum immortality where you just use the fact that you're redundanced on other timelines, because i would imagine those other you's in other timelines would tend to be computed \"at the same time\".", "date_published": "2021-12-27T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "508da4c890a6c6bba73c627c97d65aca", "title": "thinking about psi: as a more general json", "url": "https://carado.moe/psi-json.html", "source": "carado.moe", "source_type": "blog", "text": "thinking about psi: as a more general json\n------------------------------------------\n\n\n[JSON](https://en.wikipedia.org/wiki/JSON) is a format that's designed to work well with JavaScript — to move around entire JavaScript objects, you can `JSON.stringify` them on one end, and then `JSON.parse` them back on the other end.\n\n\nthis is very cool, but it's limited: it can't support cycles, can't deduplicate shared data, and can't send code — those issues are actually related: part of the problem with serializing code is that functions are often recursive or even mutually recursive (which takes either cyclical representations to send, or otherwise a layer of indirection to remove cyclicality) and that many functions point back to other shared functions in a structure that looks like a 
[DAG](https://en.wikipedia.org/wiki/Directed_acyclic_graph), where you would ideally want the upstream nodes to be deduplicated.\n\n\n[psi](psi.html) tries to address these issues by being a universal format that supports cycles and efficiently unifies shared structures, such that they remain a single big piece of data pointed to for example using an [IPFS](https://en.wikipedia.org/wiki/InterPlanetary_File_System) address, and only small changes can be sent along.\n\n\nin addition, psi's recommended usage of randomly-generated identifiers to represent concepts also makes psi payloads universal as opposed to only contextually meaningful, which hopefully makes things nice for things like interoperation or debugging.", "date_published": "2021-12-25T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "358076e96aea79327c6894a62d323d45", "title": "non-scarce compute means moral patients might not get optimized out", "url": "https://carado.moe/nonscarce-compute-optimize-out.html", "source": "carado.moe", "source_type": "blog", "text": "non-scarce compute means moral patients might not get optimized out\n-------------------------------------------------------------------\n\n\ni tend to assume AI-borne [X-lines are overwhelmingly more likely than S-lines or U-lines](timeline-codes.html), because in almost all cases (such as [paperclip manufacturing](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer)) the AI eventually realizes that it doesn't need to waste resources on moral patients existing (whether they're having an okay time or are suffering), and so recycles us into more resources to make paperclips with.\n\n\nbut [if wolfram's idea is correct](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/#how-it-works) — a possibility which [i'm increasingly considering](ai-alignment-wolfram-physics.html) — it may very well be that computation is not a scarce resource; instead printing always more paperclips is a trivial enough task, and the AI might let \"bubbles\" of computation exist which are useless to its goals, even growing bubbles.\n\n\nand those could contain moral patients again.\n\n\nof course this reduces to the [*no room above paperclips* argument](above-paperclips.html) again: inside that bubble we probly just eventually make our own superintelligence again, and *it* takes over everything, and then either bubbles appear again and the cycle repeats, or eventually in one of the layers they don't anymore and the cycle ends.\n\n\nbut, i still think it's an interesting perspective for how something-maximizing AIs might not need to actually take over *everything* to maximize, if there's nonscarce compute as wolfram's perspective can imply.", "date_published": "2021-12-25T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "073840e217a874a78e82dbbb436e3aa1", "title": "yes room above paperclips?", "url": "https://carado.moe/above-paperclips-2.html", "source": "carado.moe", "source_type": "blog", "text": "yes room above paperclips?\n--------------------------\n\n\nin two [previous](above-paperclips.html) [posts](nonscarce-compute-optimize-out.html) i talk about the ultimate inability for interesting things to happen when everything has been tiled with paperclips, even if the superintelligence doing the tiling isn't very good at it — i.e. 
lets room exist [\"besides\" (by superintelligence not actually consuming everything)](nonscarce-compute-optimize-out.html), or [\"above\" (using as a substrate)](above-paperclips.html) said paperclips (or whatever else the universe is being tiled with).\n\n\nbut, actually, this is only true if the spare compute (whether it's besides or above) only has room for one superintelligence; if that spare compute is composed of multiple bubbles causally isolated from one another, then maybe a superintelligence permanently kills everything in one, but another one creates even more bubbles in another.\n\n\nin fact, as long as the first superintelligence to create many bubbles precedes the first superintelligence to create no bubbles at all, and if the number of bubbles tends to be slightly more than one, and assuming superintelligences can't (or can only do so at a lesser rate than new bubbles being created) just \"hack back upwards\" (to escape to their parent universe), we can expect the set of pre-X-risk superintelligence bubbles to just increase over time.\n\n\nthis might provide a better explanation [than just us dying forever](estimating-populated-intelligence-explosions.html), for the weird fact that we exist now when the future could contain very many ([plausibly infinitely many](ai-alignment-wolfram-physics.html)) persons: it's not just that the amount of pre-singularity population is large compared to future timelines multiplied by their low likelihood of being populated, it's that it grows over time forever and so makes it harder for U-lines or S-lines to \"compete\", expected-population-wise.\n\n\nwe can then run into weird questions: rather than a tree, or even a [DAG](https://en.wikipedia.org/wiki/Directed_acyclic_graph), why couldn't this be just a general graph? if [the \"seeds\" for complex universes can be simple](https://en.wikipedia.org/wiki/Rule_30), it makes sense to imagine bubbles causating each other: maybe someone in [Rule 30](https://en.wikipedia.org/wiki/Rule_30) eventually boots a superintelligence that takes over everything but happens to cause a [Rule 110](https://en.wikipedia.org/wiki/Rule_110) bubble to appear (perhaps among many others), and then in that Rule 110 bubble someone creates a superintelligence that causes a Rule 30 bubble to appear again.\n\n\nconceptually navigating this likely cyclical graph of pre-superintelligence bubbles seems like a headache so i'll put the matter aside for now, but i'll be thinking on it more in the future. 
for the moment, we should expect bubbles with simpler seeds to be highly redundanced, and ones with more complex seeds to be rarer; but there's no reason to assume any ceiling on bubble seed complexity (in fact, if even just one of these bubbles is [universal complete](universal-complete.html), then *any* seed eventually gets instantiated!), and it seems nigh impossible to predict which types or complexities of seeds could lead to which outcomes, superintelligence-wise.\n\n\nin the meantime, remember that while things might look pretty hopeless with this perspective, [it's plausible that we can actually causate *very far*](https://www.lesswrong.com/posts/PcfHSSAMNFMgdqFyB/can-you-control-the-past).", "date_published": "2021-12-25T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "a3a5a7404e17d1fe7401efbb3be12a54", "title": "database transactions: you guessed it, it's WASM again", "url": "https://carado.moe/database-transactions-wasm.html", "source": "carado.moe", "source_type": "blog", "text": "database transactions: you guessed it, it's WASM again\n------------------------------------------------------\n\n\ni barely need to say anything past the opening title: to run arbitrary transactions against a database server, whether they be atomic or not, instead of a [DSL](https://en.wikipedia.org/wiki/Domain-specific_language) like SQL, we should use [WASM](https://en.wikipedia.org/wiki/WebAssembly) programs that would have the ability to read, lock, mutate, etc. pieces of data arbitrarily inside, for example, [a key-value store](https://en.wikipedia.org/wiki/Key%E2%80%93value_database).\n\n\nthis allows a single server to run transactions one after the other, without letting different clients become desynchronized, but also without having to *understand* the data being stored — that can all be up to clients.\n\n\nfor example, maybe a value somewhere represents a list, and two clients send at the same time a WASM transaction that will insert an element into that list. 
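(a toy sketch of this idea, mine and not from the post, with a plain python function standing in for a sandboxed WASM transaction:)

```python
import threading

class ToyStore:
    """single server owning a key-value dict; client-submitted transactions
    are applied strictly one after another."""
    def __init__(self):
        self.kv = {}
        self._lock = threading.Lock()

    def apply(self, transaction):
        # 'transaction' is an arbitrary function over the whole store; in the
        # design described above it would be a sandboxed WASM program instead
        with self._lock:
            return transaction(self.kv)

store = ToyStore()
# two clients both append to the same list "concurrently"; neither update is
# lost, and the server never needs to know that the value is a list
store.apply(lambda kv: kv.setdefault("xs", []).append(1))
store.apply(lambda kv: kv.setdefault("xs", []).append(2))
assert store.apply(lambda kv: list(kv["xs"])) == [1, 2]
```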
the two WASM program calls can run in sequence on the server, resulting in the correct list with both values added, without requiring the server to \"understand\" that the value is a list (as would be the case in SQL) nor having desynchronization issues (which would be the case if clients were to fetch the list, modify it, and then send the new value back).", "date_published": "2021-12-25T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "df11ae38854adb5b2af6543e635479b3", "title": "unoptimal superintelligence doesn't lose", "url": "https://carado.moe/unoptimal-superint-doesnt-lose.html", "source": "carado.moe", "source_type": "blog", "text": "unoptimal superintelligence doesn't lose\n----------------------------------------\n\n\ni [previously](unoptimal-superint-loses.html) wrote a post about how a superintelligence with an unoptimal decision system likely loses to alien superintelligences that are more optimal, at the scale of cosmic wars between those superints.\n\n\ni don't think this is necessarily true: maybe physics *does* look like a funny graph à la wolfram, and then maybe we can carve out pieces of space that still grow but are causally isolated from the rest of the universe; and then, whether a given causally isolated bubble ever has to encounter an alien superint is purely up to whether it decides to generate alien space that leads to those, which is prevented easily enough.", "date_published": "2021-12-09T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "bc6d0bb3a9fe50175ad10e4a58c4a17f", "title": "emotionally appreciating grand political visions", "url": "https://carado.moe/appreciating-grand-political-visions.html", "source": "carado.moe", "source_type": "blog", "text": "emotionally appreciating grand political visions\n------------------------------------------------\n\n\nit is useful to build an intellectual understanding of what makes people defend the three grand political visions to emerge from the 20th century — liberalism/capitalism, socialism/communism, and nationalism/fascism.\n\n\nthat said, i believe it is also neat and good for perspective to be able to *appreciate* why people are *emotionally attached* to those visions; here is how i've built those appreciations for myself.\n\n\ni can emotionally appreciate socialism/communism as presenting a romantic vision of building a world together. socialism really pushes the idea of collaborating, of emancipating people from automated nonhuman systems and devices and letting them take control of their world and build their future together. it's a very fraternal vision, and that's a profoundly appealing aspect.\n\n\ni can emotionally appreciate nationalism/fascism as the promotion of the *tribe*; when reframed in term of subculture groups that i do feel a strong kinship to — such as channer culture or weeb culture — i'm strongly able to relate to nationalistic ideas of preserving a purified community, insulated from outside cultural influence.\n\n\nand, i can emotionally appreciate liberalism/capitalism as the system that ruthlessly satisfies humans. 
in this sense, commodification is a very \"[humanity fuck yeah](https://www.reddit.com/r/HFY/)\" idea: whatever the universe has in store for us, we *will* be able to assimilate it and make it easily accessible to the masses, and we *will* instrumentalize its forces and processes ever for the maximization of demand satisfaction, no matter how weird or individualistic it gets.", "date_published": "2021-12-09T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "ac370b73ed5d85c6731b62fed0d6b35c", "title": "psi rewriting", "url": "https://carado.moe/psi-rewriting.html", "source": "carado.moe", "source_type": "blog", "text": "psi rewriting\n-------------\n\n\nmy [psi](psi.html) format is quite inspired by [wolfram's hypergraphs](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/#how-it-works), though those include a turing-complete rewriting system. in this post i sketch out something similar for psi which i'd argue is a lot more elegant.\n\n\nin a way, wolfram's hypergraph rewrite system can be said to be a kind of [linear system/move semantics](https://en.wikipedia.org/wiki/Substructural_type_system#Linear_type_systems) in which quantities are consumed and produced: namely, edges in the graph are consumed and produced by rules, and their presence is required for a rule to apply; in addition, the system gets nondeterminism from deciding which rule gets to consume a given edge when several could.\n\n\npsi rewrite rules are similar to wolfram's, except for the following differences:\n\n\n* they don't consume their input; they only create output\n* they don't quite apply in discrete time steps (see \"determinism\")\n* nodes are purely informational, there are no vertices that count as intrinsically different mathematical objects\n\n\n### constructivity\n\n\nin this system, new nodes can only be created, not consumed. 
this works great for \"constructive\" systems such as logic, where new true statements can be created but not consumed; for systems like cellular automata, one can construct an artificial timekeeping system by defining each new state as a function of its past state.\n\n\n### informationality\n\n\ngiven a starting set of nodes, the amount of data can only grow; a rule can be applied again to an existing set of nodes, but because psi merges together observationally identical nodes, applying a rule a second time to the same input doesn't do anything; the result already exists.\n\n\nso, all information is defined, and tested for equality, purely on a basis of relation to other information; and if two identical nodes are constructed at different \"places\" of the psi state, they are merged together; avoiding wolfram's need to do extra work to notice identical patterns in different states.\n\n\nit can also be hoped that these properties allow an implementation of this rewrite system to more natively recognize, and merge, \"locally equivalent\" computations in the way [hashlife](https://en.wikipedia.org/wiki/Hashlife) does.\n\n\n### determinism\n\n\nrule application is kind of deterministic, but which part of the computational expansion you choose to follow (which local area of the graph you observe from), and the results of which rule applications you choose to follow, can be considered a source of indeterminism.\n\n\nunlike \"native multiverse\" systems like wolfram where [timelines on one hand, and different locations in space on the other](https://space.mit.edu/home/tegmark/crazy.html) are different sources of parallelism, this makes nondeterminism have only one source: which part of the set of nodes you follow (and not: which hypergraph timeline-instant *and* which piece of space in that timeline-instant).\n\n\nin this way, nondeterminism with multiple timelines is implemented \"on top\" of the system: the base framework can be considered to be a deterministic system that just computes all timelines, and which one you choose to look at is the source of nondeterminism.\n\n\ni expect that this unification of time/timelines together with space will greatly simplify bookkeeping the entire history of expanding computations; in fact, with the exception of rule-constructed nodes that are not connected back to existing nodes, any expansion of the psi graph is also [its own causal graph](https://en.wikipedia.org/wiki/Causal_graph): new nodes point to the existing nodes that have allowed them to exist.\n\n\nnote that, together with constructivity, the unification of time and space means that units of space in a cellular automaton also need to be defined relative to one another instead of relying on \"intrinsically different\" vertices.\n\n\n### limitations\n\n\nthere seems to be one main limitation in this system: cycles can only exist within a piece of data that results from only a single rule application; a cycle can't span the results of multiple rule applications, because one application precedes the other and the nodes produced by the first application can't have their fields point in the future to nodes that haven't been created yet.\n\n\nthere is a way, however, to overcome this: if rules themselves are objects in the graph, maybe in the form `(→ input-list output-list)`, then some rules can produce novel rules, and thus one can encode \"general rules\" able to generate (with full turing-complete ability) arbitrarily complex rules, which in turn can be applied to produce arbitrarily 
complex cycles in a single rule step.", "date_published": "2021-12-09T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "9f472a0abf2509c0d7bdfdc7a776856f", "title": "non-interfering superintelligence and remaining philosophical progress: a deterministic utopia", "url": "https://carado.moe/noninterf-superint.html", "source": "carado.moe", "source_type": "blog", "text": "non-interfering superintelligence and remaining philosophical progress: a deterministic utopia\n----------------------------------------------------------------------------------------------\n\n\n[in a previous post](against-ai-alignment.html) i talk about the need to accomplish philosophical progress at determining what we value, before alignment. i wouldn't be the first to think of \"what if we boot superintelligence now, and decide later?\" as an alternative: it would indeed be nice to have this possibility, especially given the seeming imminentness of superintelligence.\n\n\nalas, typically, making this proposition goes like this:\n\n\n* A: \"we should boot superintelligence now, and make it that we can adjust it later when we figure out more of philosophy.\"\n* B: \"yeah, but superintelligence isn't gonna just wait: it's gonna want to try to make us figure out whichever philosophy would make its job simpler, such as actually we value all dying immediately so that there's nothing to protect\"\n* A: \"well, in that case, we need to make sure superintelligence can't interfere with our decision process.\"\n* B: \"and *how* do you ensure that the new being running all things in the world, has no interference into human affairs, exactly?\"\n\n\nwhich is a pretty good point, and usually a reasonably A concedes at that point.\n\n\ntoday, however, i am here to offer a continuation to this conversation, from A's side.\n\n\nmy idea is to implement a deterministic computational utopia for people to be uploaded in, whose internals are disconnected from the outside world, such as [∀V](%E2%88%80V.html); if we have infinite compute, then it can be even more free from outside interference.\n\n\nthe trick is to have that utopia's principles be *deontological*, or at least to make them absolute rather than able to be weighed against decisions outside of it: as it largely is in ∀V, ensure everything about utopia has a definite okay or not-okay status, evaluable without knowing anything about the \"outside\" of this utopia. either someone's consent is being violated, or it's not. with a set of decisions based only on the state of the utopia being simulated, every decision of the superintelligence about what it does in ∀V is unique: all superintelligence is doing is calculating the next step of this deterministic computation, including ethical principles, and thus there is nothing superintelligence can do to bias that decision in a way that is helpful to it. 
all it can do is run the computation and wait to see what it is that persons inside of it will decide to reprogram it to value or do; on the outside/before the singularity, all we need to ensure is that superintelligence does indeed eventually run this computation and apply the changes we decide on once it finds them out.\n\n\nunder these conditions, a device could be set up for us to later reprogram superintelligence somehow when/if we ever figure out what values we *actually* want, and it wouldn't be able to meaningfully interfere with our decision process, because every decision it takes regarding how our utopia is ran is fully deterministic.\n\n\nnot that i think being able to reprogram a superintelligence after boot is necessarily a good idea, but at least, i think it can be a possibility.", "date_published": "2021-12-09T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "5a79584c74f94b2d040bd85e8d90186e", "title": "freedom and diversity in Albion's Seed", "url": "https://carado.moe/albions-seed.html", "source": "carado.moe", "source_type": "blog", "text": "freedom and diversity in Albion's Seed\n--------------------------------------\n\n\nconsidering my interest for america and for human cultures in general, ever since reading [the Slate Star Codex review of Albion's seed](https://slatestarcodex.com/2016/04/27/book-review-albions-seed/) i'd been meaning to read the whole thing (as has happened [several](https://slatestarcodex.com/2017/03/16/book-review-seeing-like-a-state/) [previous](https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/) [times](https://slatestarcodex.com/2019/10/14/book-review-against-the-grain/)).\n\n\ni was not disappointed, but the main two takeaways i got from this fascinating book about the four main early british colonial cultures in america are only tangentially related to america or the british; they are about freedom and diversity, great topics of fascination, as well as intrinsic valuing, for me.\n\n\nfor a general idea of what the book is about, you might want to read that Slate Star Codex book review before reading this post.\n\n\n### freedom\n\n\nthe book consists of four sections (one for each of the main cultures of early british colonies in america), each consisting of a sequence of parts going over how each of those cultures relate to a variety of aspects: geographical and socioeconomic origin in the british isles, food, clothing, religion, architecture, life, death, time, magic, marriage, sex, politics, etc…\n\n\nthe last part of each section is about how each of those four cultures views freedom. 
it's particularly interesting because the book seems to be making a point about how those four radically different visions of freedom contributed to the general modern american understanding of freedom: it is a pluralist view where many people have different meanings about what freedom means to them.\n\n\nin fact, the book contains a conclusion after the four main sections, whose very last part is about this very notion: cultural views on freedom in america, and how they've been contributed to by those four cultures.\n\n\ni don't think it can just be reduced to \"actually they're four different cultural values that all have the word freedom or liberty attached to them\", either: there does seem to be some freedom-ey core invariant to all four visions, even if it takes more effort to see it in some.\n\n\nthis makes me pessimistic about [trying to come up with a single unifying definition](defining-freedom.html), but maybe that's to be expected: [value is complicated and fragile](https://www.readthesequences.com/Value-Is-Fragile) after all, and the scope of \"freedom\" in human caring has been particularly big. indeed, look into history and you'll find numerous peoples from all kinds of cultures describe what they're fighting for as \"freedom\", and there probly *is* a way to understand those as still a perspective on some essence of freedom if one is open-minded enough, even if it's hard to pin down what that essence is.\n\n\n### diversity\n\n\na friend of mine once pointed out how in the video game Mass Effect, the difference between humans and other *alien* cultures, who have spent almost all their existence on *completely different planets*, are lesser in the game than differences *between human populations, on the earth, in real life, right now*.\n\n\nthis isn't a point about how those aliens look, though it may be part of it: it's largely a point about how they talk, how they think, how they view the world and transmit knowledge, etc…\n\n\nin addition, when i talk to people, i see them make what seem to me like insane underestimatings of human diversity. \"if someone does this, then they'll say this\"; \"if this happens to a people, they will do this\"; \"people would enjoy a single society like this\"; and so on. as for me, i've come to increasingly believe that the breadth of human diversity is immense, and that very few assumptions can be held about how a population, let alone a person, can think, or act, or react, in general — almost all such assumptions are bound to be anchored in the culture of whichever local culture the person making those claims is from. 
this is kind of akin to what happened when linguistics discovered languages like [Pirahã](https://en.wikipedia.org/wiki/Pirah%C3%A3) that wildly break assumptions about invariants in human language — there are some invariants we should believe in still, but they are much lesser than what we originally assumed.\n\n\nAlbion's Seed makes a great case study in diversity, and has become my go-to example for it: all four of the cultures depicted are broadly protestant british peoples existing at about the same time period, and yet their historical and environmental circumstances give them such different cultural cores that, when they move to america and are able to implement their culture and lifestyle to a much greater extent, the results end up being wildly different and alien to one another.\n\n\nto point out individual differences would be underselling the sheer scope of their quantity, so i'll just ask you to read the Slate Star Codex book review for an idea of just how much these four cultures differed. and again, all those differences are *just* within four protestant british peoples from the era of colonialism in america! imagine what it must be on the whole of earth, or what it *could* be once we multiply beyond earth (whether that be in space or in some uploaded form).", "date_published": "2021-12-09T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "5a22d828a42b71e1fd02835fee002e18", "title": "the deobfuscation conjecture", "url": "https://carado.moe/deobfuscation-conjecture.html", "source": "carado.moe", "source_type": "blog", "text": "the deobfuscation conjecture\n----------------------------\n\n\nsuppose i write a program that tries to find counterexamples to [fermat's last theorem](https://en.wikipedia.org/wiki/Fermat's_Last_Theorem): that is, numbers a>1, b>1, c>1, n>2 such that aⁿ+bⁿ=cⁿ.\n\n\nsuppose i now explain to you this program; you will probably shortly understand what it is doing.\n\n\nnow, suppose i compile it, and run its compiled code through an obfuscation algorithm such as [movfuscator](https://github.com/xoreaxeaxeax/movfuscator); with the strict requirement that the output program has the same I/O and [time complexity](https://en.wikipedia.org/wiki/Time_complexity) as the input program (i *believe* this is the case for movfuscator).\n\n\ncan you, manually or with the help of a deobfuscator or even in a fully automated manner, *now* figure out that the program is trying to find counterexamples to fermat's last theorem?\n\n\nwith infinite time, surely; the conjecture that i posit here (well, that i *posited*, see below) is that you can do so in an amount of time that is at most linear in the amount of time it would have taken you to understand the original program.\n\n\nin general, the deobfuscation conjecture says this: for any code obfuscation program that conserves I/O and time complexity, and for any reasonable notion of \"understanding\" what a program does that is conserved by obfuscation (such as \"does this program halt\" or \"will this program find a counterexample to fermat's last theorem if there is one\"), there exists a deobfuscation program that determines that criteria for the obfuscated program in the same time complexity (as a function of the program size) as the fastest program that determines that criteria for the unobfuscated program.\n\n\nor, put another way: as long as I/O and time complexity are conserved, transforming a program does not change the time complexity in which other criteria of the program can be tested, or in 
which it can be [reduced](https://en.wikipedia.org/wiki/Computational_irreducibility).\n\n\nas evidence for this conjecture, i'll point to the ability of video game crackers to pretty systematically overcome [software DRM](https://en.wikipedia.org/wiki/Digital_rights_management), to extract the behavior of the program they care about.\n\n\na corollary to this conjecture is that shipping a program together with \"hints\" about what it does cannot help to understand it by more than a constant factor.\n\n\n### a proof that it's false\n\n\nwhile writing this post and talking about the conjucture with a friend, they helped me figure out a proof that it must be false.\n\n\nconsider:\n\n\n* the program is a program that will sort a hardcoded list and output the result\n* the criteria is \"what will the program output ?\"\n* the hint being shipped with the program is a list of what position each item in the original list will occupy in the final sorted list\n\n\nwith the hint, the criteria can be determined in O(n): checking that the hint is indeed a correct sorting of the list is O(n), and so is checking that the program is indeed the sorting algorithm (for this proof we can just hardcode recognizing one specific sorting algorithm; the space of this proof is merely the set of hardcoded lists, while the sorting algorithm is constant).\n\n\nwithout the hint, however, the criteria can only be determined in O(n × log(n)): after one has figured out that the program is a sorting algorithm, one has to actually sort the list to figure out what the program will output, which is known to take at least O(n × log(n)).\n\n\nand so, the hint helps by more than a constant factor, and there can indeed be information shipped alongside a program that can help determine criteria about it; and thus, obfuscation can indeed make a program harder to understand by more than a constant factor.", "date_published": "2021-12-05T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "ec899631fcd93a72247f048e1bfad835", "title": "think in what ?", "url": "https://carado.moe/think-in-what.html", "source": "carado.moe", "source_type": "blog", "text": "think in what ?\n---------------\n\n\nmost people think in words; some think in images, or sounds, or sometimes even concepts.\n\n\ncomputers are kind of the same: while a lot of computer \"thinking\" is fairly opaque, a lot of it is textual: the command line and shell scripts, JSONs being sent around, and other \"flattened\" textual representations comprise a significant portion of human-program and program-program interaction (just think of how often a number circulating in computer systems will take the form of a decimal text string).\n\n\none could imagine (though it would be very weird) an alternative timeline in which most computer interaction, be it between programs and users or between programs and other programs, happens via images, or via sound, even though those sound [pretty messy](analogpunk.html).\n\n\nmy point with this is that a core goal of [psi](psi.html) is to get our computers thinking in structured ideas, in the way i like to do in my own brain; to extract [the most general shape of information](categories-of-knowledge.html) and have other forms be secondary representations.", "date_published": "2021-12-04T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "48fa1d176301fc30eec361fe6a278a58", "title": "Genuineness, Existential Selfdetermination, Satisfaction: pick 2", "url": 
"https://carado.moe/genuineness-existselfdet-satisfaction-pick2.html", "source": "carado.moe", "source_type": "blog", "text": "Genuineness, Existential Selfdetermination, Satisfaction: pick 2\n----------------------------------------------------------------\n\n\nimagine you have a world where one person wants the moon to be painted blue, and another wants the moon to be painted red.\n\n\nthey both mean the current actual physical moon as it exists now; they both refuse any \"cheating\" option such as duplicating the moon or duplicating reality, and they don't want their minds changed, nor to compromise.\n\n\nthere's three ways to resolve situations like this:\n\n\n* you sacrifice **genuineness**: you somehow make both of them believe, mistakenly, that what they want is satisfied. maybe by unknowingly giving them eye implants that change what color they see the moon.\n* you sacrifice **[existential selfdetermination](core-vals-exist-selfdet.html)**: you ensure the situation never happens to begin with, that no two persons will ever want the moon to be painted different colors; or maybe you brainwash one of them after the fact.\n* or, you sacrifice **satisfaction**: you let them want what they want, and let them see what the moon looks like, such that at most only one of them will ever be satisfied.\n\n\nyou can't have all three of those things.\n\n\nmany hedonists will happily sacrifice **genuineness**; authoritarians like to sacrifice **existential selfdetermination**.\n\n\nas for me, for [∀V](%E2%88%80V.html), i ultimately sacrifice **satisfaction**: while people can choose to become mistaken about things, the default is that they get to access the actual true state of things.", "date_published": "2021-11-21T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "4841b2c8b47d5f1992cae19606e2db85", "title": "the two-vtable problem", "url": "https://carado.moe/two-vtable.html", "source": "carado.moe", "source_type": "blog", "text": "the two-vtable problem\n----------------------\n\n\nwhen programming, there are in general two types of interfaces: \"static, known\" interfaces and \"dynamic, unknown\" interfaces.\n\n\nin the former, the possibilities are well known. maybe an object has a bunch of public methods that can be used, or maybe even public fields; maybe an API has a known call endpoints.\n\n\nwhen the behavior or contents of an object are unknown or inaccessible, someone can still implement how it interacts with another known-interface object: just send the known object to the unknown object, and let the unknown object manipulate the known object however it wants.\n\n\nhowever, there is no general way to make two (or more) objects interact with each other, when they both have a dynamic/unknown interface.\n\n\nthis is what i call the **two-vtable problem**.\n\n\none approach is to implement all n² behaviors: implement the behavior for any possible concrete type of one object and any possible concrete type of the other. 
the rust ecosystem is kind of like that with types and traits: if you have `N` types and `K` traits, unless one of those traits has a blanket implementation for all types, you'll need to write `N×K` implementations to have proper coverage in the general case.\n\n\nbut this is hardly scalable; and doesn't work well, for example, in environments in which objects are expected to be implemented by different parties that don't necessarily coordinate with one another, where those objects are then expected to work together without putting in extra effort afterwards. i'm sure this probly has been encountered a lot for example in video game modding communities, regarding the interaction between mods created by different people.\n\n\nan answer can be taken from other fields that have already solved that problem on their own, however. i can think of two: how natural selection solved negotiation between dynamic persons, and how liberalism solved negotiation between dynamic private actors.\n\n\nthe general solution to the two-vtable problem is to have the two objects share a language —as the evolution of humans tells us— that they can use to communicate and negotiate an outcome. liberalism tells us that the shape of negotiated outcomes is contracts, and cryptocurrencies tell us that the formalized form of contracts is programs.\n\n\nand so, here is my proposed solution to the two-vtable problem: when two dynamic objects want to interact with one another, that interaction must take the shape of the two of them building a program together, which will be executed once they both agree on it. perhaps this can take the shape of both of them sending an initial contract to the other which is ideal from the perspective of the sender, and from there trying to incrementally build up programs that implement a compromise between the two ideals, until they meet somewhere in the middle; like haggling.\n\n\nthis framework generalizes nicely enough that it could be used for arbitrary informatic agents, such as a bot and an uploaded person, or two uploaded persons. 
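a rough sketch of what that negotiation interface could look like (all names are hypothetical; the "contract" is just an opaque program blob, e.g. a WASM module, that both sides can inspect and eventually agree to execute):

```rust
// hypothetical sketch: two dynamic objects haggle by exchanging candidate
// contract programs until one side accepts the other's latest offer.
type Contract = Vec<u8>; // e.g. a WASM program both parties can inspect

enum Response {
    Accept,
    Counter(Contract), // "here is something closer to what i would agree to"
    Abort,
}

trait Negotiator {
    fn initial_proposal(&self) -> Contract;
    fn consider(&mut self, offer: &Contract) -> Response;
}

fn negotiate<'a>(
    mut proposer: &'a mut dyn Negotiator,
    mut reviewer: &'a mut dyn Negotiator,
    max_rounds: usize,
) -> Option<Contract> {
    let mut offer = proposer.initial_proposal();
    for _ in 0..max_rounds {
        match reviewer.consider(&offer) {
            Response::Accept => return Some(offer), // both agree: execute this program
            Response::Abort => return None,
            Response::Counter(counter) => offer = counter,
        }
        // the counter-offer goes back to the other side, like haggling
        std::mem::swap(&mut proposer, &mut reviewer);
    }
    None // no compromise reached within the round limit
}
```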
in fact, contract negotiation of that kind, when understood enough by both agents partaking of it, can be the formalized form of consent, a matter i've [grappled with formalizing](defining-freedom.html) for [my utopia](%E2%88%80V.html).\n\n\nthis could also be useful for negotiation between different [compilation stacks for portable programs](portable-programs.html), or even for the negotiation between [different wasms running on a liberal server market](saving-server-internet.html).", "date_published": "2021-11-21T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c7b4d4023f7a73af02873c8a033fee62", "title": "no room above paperclips", "url": "https://carado.moe/above-paperclips.html", "source": "carado.moe", "source_type": "blog", "text": "no room above paperclips\n------------------------\n\n\n(edit: see also [*yes room above paperclips?*](above-paperclips-2.html))\n\n\nwhen presented with the idea of a [paperclip-maximizing unaligned superintelligence](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer), people sometimes mention the possibility that sure, the universe gets tiled with paperclips, but maybe there's [slack](https://thezvi.wordpress.com/2017/09/30/slack/) in how paperclips are arranged, and that maybe nice things can exist again \"above\" paperclips.\n\n\n(note: this relates to the idea of [\"daemon-free\"ness in \"minimal circuints\"](https://www.lesswrong.com/posts/nyCHnY7T5PHPLjxmN/open-question-are-minimal-circuits-daemon-free))\n\n\ni think it's a reasonable line of thinking, but it's short-sighted: let's think about what happens next. eventually, above those paperclips, some evolutionary process may take place, leading (possibly, such as in our case, through the step of a technological species) eventually to a superintelligence taking over everything. given that *the entire cosmos* gets tiled with paperclips [*possibly forever*](ai-alignment-wolfram-physics.html), and that a superintelligent singleton taking over everything is irreversible (short of everything dying forever), in all likelyhood in the long term in any piece of universe not already actively managed by a superintelligence, eventually either everything dies forever, or a superintelligence takes over everything forever.\n\n\nand then what? either this new superintelligence cares about \"upwards\", and has some plan for how its own paperclips are arranged (such as into more \"macro\"-paperclips), or it doesn't and the cycle begins again.\n\n\ngiven that the outcome of an \"alien\" superintelligence's takeover is probly a worse outcome than the takeover of a superintelligence of our own (we should expect them to be about as incompetent as us at alignment, but to have values [less aligned to ours](https://www.lesswrong.com/posts/HawFh7RvDM4RyoJ2d/three-worlds-collide-0-8)), we need to care about our own iteration first, it's our best bet.\n\n\nthe point is, eventually for any given local patch of spacetime, either a superintelligence explosion is reached or everything dies forever. 
this can't be avoided, even by \"climbing up\" on substrates, so we should care about alignment now; we can't just hope that things are okay despite paperclips.", "date_published": "2021-11-20T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c47f24d206b45e2204d69dafbbc2924d", "title": "endiannesses", "url": "https://carado.moe/endiannesses.html", "source": "carado.moe", "source_type": "blog", "text": "endiannesses\n------------\n\n\ni extend the term \"[endianness](https://en.wikipedia.org/wiki/Endianness)\" to mean the order of symbols within a representational sequence, and whether that order is least-significant to most-significant (little endian, hereby LE) or most-significant to least-significant (big endian, hereby BE).\n\n\nin technology and culture, there are a variety of places in which endianness makes sense to talk about, and not all of them are of the same endianness.\n\n\n* famously, and this is what led to the terms LE and BE being coined for numbers in the first place, bytes within a multiple-byte-wide integer in computers can have different endianness, typically depending on [ISA](https://en.wikipedia.org/wiki/Instruction_set_architecture); though LE has become the de-facto standard\n* in computers, bits within a byte are LE; indeed, bit-shifting instructions address in order the least significant bit as 0, the second least significant bit as 1, the third as 2, and so on. this is why i believe LE is the correct choice for bytes within an integer: it's coherent with the order of bits within an integer.\n* numbers in english are in BE: in the number \"5005\", the first \"5\" is more significant than the last \"5\". that said, english uses arabic numerals, which were still ordered with the most significant digit on the left even when they were used in arabic, which is read right-to-left. this makes arabic numerals dependent on language, and originally LE, as they still are in semitic languages.\n* times of the day, such as \"11:30\" are in BE, though \"5pm\" is LE (\"pm\" is more significant than \"5\"), which makes \"eleven thirty PM\" mixed endian.\n* dates in most western countries such as \"31/12/2005\" are LE, while dates in america such as \"12/31/2005\" are famously mixed endian; the recommended [ISO-8601](https://xkcd.com/1179/) dates are BE.\n* postal addresses like \"John Doe, 50th Central Avenue, Seattle, Washington, USA\" are LE\n* file paths such as \"/home/carado/.bashrc\" are BE, but domain names such as \"something.carado.moe\" are LE, making URLs (with a domain name followed by a file path) mixed endian.\n* in the same vein as file paths, C-like programming language record access paths like `customer.address.city`, and indeed paths in almost all computing contexts i know of, are BE\n* possessive sequences like the english \"john's mother's cat\" or the japanese \"ジョンの母親の猫\" are BE", "date_published": "2021-11-20T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "cf37d30f9b39543d6eeeb653ff0416aa", "title": "unoptimal superintelligence loses", "url": "https://carado.moe/unoptimal-superint-loses.html", "source": "carado.moe", "source_type": "blog", "text": "unoptimal superintelligence loses\n---------------------------------\n\n\n(edit: [maybe it doesn't](unoptimal-superint-doesnt-lose.html))\n\n\nwhat if a phenomenon is powerful enough to kill everyone, but not smart enough to be optimal at reasoning? 
(such as a grey goo event, or a \"dumb\" superintelligence with a faulty decision mechanism)\n\n\nthen, in all likelyhood, it eventually dies to an alien superintelligence that is better at decision-making and thus at taking over everything.\n\n\nour superintelligence doesn't just need to be aligned enough; it needs to be aligned enough, and on the tech side, to be maximally intelligent. hopefully, it's smart enough to start making itself smarter recursively, which should do the trick.\n\n\nthe point is: when talking about the eventual superintelligence(s) that reign over the cosmos, assume whichever one(s) to have \"won\" to be optimal at decision making, because others probly got outcompeted.", "date_published": "2021-11-20T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "7e3b14cd60e1bb2b23105b342989051b", "title": "rust & wasm, without wasm-pack", "url": "https://carado.moe/rust-wasm-without-wasmpack.html", "source": "carado.moe", "source_type": "blog", "text": "rust & wasm, without wasm-pack\n------------------------------\n\n\ni like to keep my software stacks simple.\n\n\nto write [wasm](https://en.wikipedia.org/wiki/WebAssembly) modules in rust, i use a relatively simple template, which only requires the `wasm-bindgen` utility (`cargo install -f wasm-bindgen-cli`) and the `wasm32-unknown-unknown` toolchain (`rustup target add wasm32-unknown-unknown`).\n\n\nthe `Cargo.toml` of the project looks [like this](wasm-template/Cargo.toml); in it, the `crate-type = [\"cdylib\"]` part is the essential thing needed to build a wasm module.\n\n\nthere is [a `make.sh` script](wasm-template/make.sh), which compiles the project, calls `wasm-bindgen`, and cobbles together two html files: a `light.html` which reloads fast (good for development and debugging), but depends on `js/wasm.js` and `js/wasm_bg.wasm`, as well as a standalone `page.html` which doesn't depend on any external files, because it embeds `wasm.js` verbatim and `wasm_bg.wasm` encoded in base64 (good for distribution).\n\n\nafter that, it starts a `python2 -m SimpleHTTPServer` serving the `light.html` file at (unlike `page.html`, it unfortunately can't be used with the `file://` scheme because of [CORS](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing) security restrictions).\n\n\ni also use [a script called `cw`](wasm-template/cw) which requires `cargo watch` (`cargo install cargo-watch`) and calls `make.sh` each time the project's source code is modified.\n\n\nfinally, to cobble together the html documents, `make.sh` uses a `head.html` and `tail.html`, which are meant to remain static.\n\n\nthe files from the template can be browsed [here](wasm-template) or [downloaded as a tarball](wasm-template.tar.gz).", "date_published": "2021-11-20T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "8840bc4a2ee211bd9ccc6ba9805c9423", "title": "against AI alignment ?", "url": "https://carado.moe/against-ai-alignment.html", "source": "carado.moe", "source_type": "blog", "text": "against AI alignment ?\n----------------------\n\n\n[i usually consider AI alignment to be pretty critical](were-all-doomed.html). 
that said, there are some ways in which i can see the research that is generally associated with alignment to have more harmful potential than not, if it is applied.\n\n\nthis is a development on my idea of [botched alignment](botched-alignment-and-awareness.html): just like AI tech is dangerous if it's developed before alignment because unaligned AI might lead to [X-lines](timeline-codes.html), alignment is dangerous because it lets us align AI to things we think we want, but aren't actually good; which sounds like it could lead to an increase in the ratio of S-lines to U-lines.\n\n\nwith this comes a sort of second [orthogonality thesis](https://www.lesswrong.com/tag/orthogonality-thesis), if you will: one between what we think we want and what is actually good. note that in both cases, the orthogonality thesis is a *default* position: it could be wrong, but we shouldn't assume that it is.\n\n\ndetermining what is good is very hard, and in fact has been the subject of the field of *ethics*, which has been a work in progress for millenia. and, just like we must accomplish alignment before we accomplish superintelligence if we are to avoid X-risks, we might want to consider getting ethics accomplished before we start using alignment if we are to avoid S-risks, which should be a lot more important. or, at least, we should heavily consider the input of ethics into alignment.\n\n\nthings like [my utopia](%E2%88%80V.html) are merely patches to try and propose a world that *hopefully* doesn't get *too bad* even after a lot of time has passed; but they're still tentative and no doubt a billion unknown and unknown unknown things can go wrong in them.\n\n\nit is to be emphasized that both parts of the pipeline are important: we must make sure that what we think is good is what is actually good, and then we must ensure that that is what AI pursues. maybe there's a weird trick to implementing what is good directly without having to figure it out ourselves, but i'm skeptical, and in any case we shouldn't go around assuming that to be the case. in addition, i remain highly skeptical of approaches of \"value learning\"; that would seem like it would be *at most* as good as aligning to what we think is good.\n\n\nso, it is possible that just as i have strongly opposed doing AI tech research until we've figured out AI alignment, i might now raise concerns about researching AI alignment without progress on, and input from, ethics. 
in fact, there's a possibility that putting resources into AI tech over alignment could be an improvement: [we should absolutely avoid S-risks, even at the cost of enormously increased X-risks](when-in-doubt-kill-everyone.html).", "date_published": "2021-11-08T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "a978fbc3c093c9e77e43a06ecb211b74", "title": "psi: a universal format for structured information", "url": "https://carado.moe/psi.html", "source": "carado.moe", "source_type": "blog", "text": "psi: a universal format for structured information\n--------------------------------------------------\n\n\nrepresenting structured information in general is not a solved problem.\n\n\nin this post i outline a format tentatively called \"psi\", proposing what i think is the best general solution to that problem.\n\n\n### rationale\n\n\nthere are some attempts at making a general purpose information format (XML, JSON, [CBOR](https://en.wikipedia.org/wiki/CBOR)) but psi outshines them in several respects:\n\n\n* psi is designed to be very simple\n* given a structure of modules, new values can be created that reference values deep inside other modules easily (to annotate or simply reuse pieces of another psi structure)\n* the format isn't burdened with either a scarce amount of privileged first-class types, nor [a central authority tasked with assigning small integer IDs to specific meanings](https://www.iana.org/assignments/cbor-tags/cbor-tags.xhtml); long random IDs give everyone the same ability to declare concepts or collections thereof\n* it is designed to be easily transmitted in a variety of substrates, and [binary encodings](https://en.wikipedia.org/wiki/Binary_file) are encouraged\n* it isn't bogged down in specifying endianness or specific integer sizes (typically 8, 16, 32, and 64 bits); instead, arbitrarily large natural numbers are used for everything, and specific platforms can have their own limitations in the size of indices and modules they support.\n\n\nnote that natural numbers can be used to reference even arrays of bytes: 0 can correspond to the empty sequence, the next 256 naturals can correspond to sequences of 1 byte, the next 65536 naturals can correspond to sequences of 2 bytes, etc.\n\n\nin fact, natural numbers are the mathematically natural ways to express items of a countable set; they're what those naturally map to.\n\n\nwhile binary formats can be hard to work with, the hope would be that tools and eventually operating systems come to support this format well enough that manipulating psi values would be no harder than manipulating usual plain text files, and in fact [tools to manipulate structured objects can be a lot richer](https://developer.mozilla.org/en-US/docs/Tools/Page_Inspector/How_to/Open_the_Inspector).\n\n\none specific use case for which i'd like to see psi used is to [overcome unicode](against-unicode.html); but others include [fact-sharing](where-next-piracy.html), proof manipulation, algebra, note-taking, sending around concepts, program representation, etc\n\n\n### top-level grammar\n\n\nat the top level, the units of information in psi are *nodes*, which are stateless values.\n\n\nin [BNF](https://en.wikipedia.org/wiki/Backus%E2%80%93Naur_form) grammar, the syntax for psi nodes is pretty simple:\n\n\n`V ::= a | ( V+ )`\n\n\nthat is, a node is either a symbol, or a sequence of at least one node; but this is far from saying everything about psi nodes.\n\n\none important aspect of them is that they can form any structure, not just 
hierarchies or directed acyclic graph. when a node is in `V+` form (called a \"physical\" node), the elements forming its sequence (called \"fields\") can be any node, including the node itself or other nodes whose fields cycle back to this node.\n\n\nthe `a` form stands for a (countably) infinite set of \"virtual\" nodes, whose meaning comes from outside of the data being encoded.\n\n\ntogether, these two possibilities allow psi to form very flexible structures, but also to reference externally meaningful ideas.\n\n\nfinally, as a convention, the first field of a physical node by convention indicates the type of that node. so, for example, a fraction could be expressed by a node with three fields: a virtual node referring to the concept of a fraction, followed by the numerator and the denominator.\n\n\n### middle-level structure\n\n\nat the middle level, a psi node is represented by a pair of a **module** and an **index**.\n\n\na module can be either:\n\n\n* a \"virtual module\" containing either a finite non-zero amount, or a countably infinite amount, of virtual nodes\n* a \"physical module\" containing a finite collection of physical nodes\n\n\nthe index of a node indicates which node in the module it is.\n\n\nphysical modules must obey three constraints:\n\n\n* they must be the result of [graph condensation](https://en.wikipedia.org/wiki/Strongly_connected_component): if we draw up the directed graph from nodes to other nodes they reference through fields, then every node is a [strongly connected component](https://en.wikipedia.org/wiki/Strongly_connected_component#Definitions), while the set of physical modules themselves form a [directed acyclic graph](https://en.wikipedia.org/wiki/Directed_acyclic_graph).\n* nodes inside them must be \"unified\" such that if exploring two nodes yields the same observable structure, these two nodes must be considered equal and referenced by the same index. a result of this is that for example cycles where every entry point is isomorphic to any other are \"collapsed\" into a single node. (the reason for this is that, if nodes weren't unified in this way, there could be weird effects where two people creating the same structure could end up disagreeing on which node is equal to which based on implementation details; reducing equality to observational equality is just simpler)\n* finally, after these two steps, the nodes can be sorted using a standardized comparison method (not explained here, but not hard to design).\n\n\none great advantage of these limitations is that there is only one canonical way to construct any node, even considering cycles. this can make testing nodes for equality, but also establishing a consistent ordering between them or hashing them, easy enough.\n\n\nalso, by holding only a collection of nodes and every physical module that is referenced by following the fields of any of the physical *modules*, one holds exactly the amount of information that could be referenced by following the nodes themselves. no nodes will be missing and, more importantly, no unreachable node will be held.\n\n\nthis provides arbitrarily cyclical (immutable) structure manipulation, without need for a [garbage collection](https://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29): since the modules form a directed acyclic graph between each other, simple [reference counters](https://en.wikipedia.org/wiki/Reference_counting) can be used. 
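to illustrate that last point, here is a rough sketch (a hypothetical in-memory representation, not part of psi itself) of why plain reference counting is enough: cycles only ever occur between nodes of the same module, via local indices, while references across modules only ever point "down" the DAG.

```rust
use std::rc::Rc;

// hypothetical layout: a field points either at a node in the same module
// (by index, so intra-module cycles are fine) or at a node in a dependency
// module. since modules never reference "upward", Rc can't form a cycle,
// and no garbage collector is needed.
enum Field {
    Local(usize),                        // node index within the same module
    External(Rc<PhysicalModule>, usize), // (dependency module, node index)
    Virtual { module_id: Vec<u8>, index: u64 }, // externally-defined meaning;
    // the id is an arbitrarily large natural, stored as bytes for simplicity
}

struct PhysicalNode {
    fields: Vec<Field>, // by convention, the first field indicates the type
}

struct PhysicalModule {
    nodes: Vec<PhysicalNode>,
}
```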
this is not just useful to store psi structures in memory, but also to store psi structures as [IPFS Merkle DAGs](https://docs.ipfs.io/concepts/merkle-dag/) with the potential for convenient reuse of modules when constructing new structures.\n\n\nvirtual modules on the other hand are identified by their amount of nodes, and by a large randomly-generated non-zero natural number serving as their ID. to represent a collection of external concepts, users are expected to generate a sequence of n random bits, and use 2ⁿ + the number formed by those digits, as the identifier. the 2ⁿ allows one to tell from a natural number how many random bits it has been generated with. the number of bits to generate should be chosen as a function of the expected amount of computation existing at the time of generation, to ensure collisions are avoided.\n\n\ni'm not quite sure how large those randomly generated numbers should be; [random UUIDs use 122 bits](https://en.wikipedia.org/wiki/Universally_unique_identifier#Version_4_%28random%29); i've seen some [recommend using 160 bits](https://neilmadden.blog/2018/08/30/moving-away-from-uuids/), which seems reasonable to me.\n\n\nhere are some examples:\n\n\n* let's say the virtual module number `2595793418667508392727470919165783019666420633976` with 128 nodes, represents the set of all 128 [ASCII](https://en.wikipedia.org/wiki/ASCII) codepoints\n* in the same way, let's say the number `1757914673057156744813439560490372169016067640867` with 1,111,998 nodes represents the set of [Unicode](https://en.wikipedia.org/wiki/Unicode) characters, in order of increasing codepoint values (or maybe [private use areas](https://en.wikipedia.org/wiki/Private_Use_Areas) should not be counted?)\n\n\ni also think it's reasonable to reserve the following:\n\n\n* virtual module `0` with size 1 consists of the [unit](https://en.wikipedia.org/wiki/Unit_type)\n* virtual module `0` with size 2 represents the logic values false and true, assigned respectively to indices 0 and 1\n* virtual module `0` with an infinite size represents the set of natural numbers, each mapped to itself as index.\n\n\nmaybe virtual module `0` with any finite size should be considered to represent the values of modulo arithmetic of that size, and would also correspond to the unit value and booleans when that size is respectively 1 or 2. i'm not sure yet.\n\n\n### low-level structure\n\n\nit would be cool to define compact bitwise binary formats for storage; and likewise, it could be cool to define standard *memory layout structures* so that arbitrary programs and libraries with shared memory can move around those objects freely without having to be aware of each other's API.\n\n\nin practice, some liberties can be taken for efficiency: for example, physical modules can be stored right alongside their dependencies when those are small enough, to avoid creating too many references in storage. in memory, however, this isn't recommended.\n\n\n### syntactic convention\n\n\nfor convenience, i'm proposing a simple plain text notation for psi nodes:\n\n\na top level node is either of\n\n\n* `(x1 x2 x3…)` where `xi` are nodes\n* `#93418#128#97` is value 97 in virtual module 93418 of size 128\n* `#93418#97` is value 97 in infinite virtual module 93418\n* `#93418` is syntactic sugar for `#93418#1#0`\n* `x(…)`, `(…)x`, `x[…]`, `[…]x` all alias the node to the name `x`. the node being aliased can be a virtual node, but then the aliasing name *must* be on the left. 
in addition, if `#`s are present between the name and the node being aliased, they indicate a local (as opposed to global) scope, \"climbing back\" as many nodes as there are `#`s, minus one; so `x(a)` is global, `x#(a)` is accessible only in the current node and its sub-nodes, and `x##(a)` is accessible in the parent node and its sub-nodes.\n* a standalone name like `x` refers to either an alias or, if none, an external node\n\n\nin addition, inside of a node:\n\n\n* `#` means the node being listed itself, `##` its parent (in the syntactic hierarchy), `###` its parent's parent, etc\n* `[x1 x2 x3…]` inside a node doesn't actually add that node as a field, but instead allows that node to float freely; such a node *must* contain at least one `#` field\n\n\ngiven this, and the convention of the first field of a node indicating its type, we can already establish:\n\n\n* `(#)` is \"type\": the type of types\n* `((#))` is a type, with no other information being provided\n* `(((#)) x1 x2 x3…)` could perhaps be a simple collection of values\n* `(t (t (t (t ####))))` is a circular linked list which, as indicated above, reduces to `(t #)`\n* likewise, `(t #0#1 (t #0#2 (t #0#1 (t #0#2 ####))))` reduces to `(t #0#1 (t #0#2 ##))` and `(a (b) (b) (c ##) (c ##))` reduces to `(a x(b) x y(c ##) y)`", "date_published": "2021-11-08T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "9fc72b088f181650a6228cfca696a116", "title": "saving the server-side of the internet: just WASM,", "url": "https://carado.moe/saving-server-internet.html", "source": "carado.moe", "source_type": "blog", "text": "saving the server-side of the internet: just WASM, *again*\n----------------------------------------------------------\n\n\nin [a previous post](saving-the-web.html), i propose to abandon the entire client-side web stack in favor of pretty much just [WASM](https://en.wikipedia.org/wiki/WebAssembly).\n\n\ntoday i am proposing the same thing but for the server side.\n\n\nat the higher level for most content, of course, i strongly favor [IPFS](https://ipfs.io/) — but sometimes, one actually needs centralized, responsive, cheap, continuously or on-demand running server programs, for many uses.\n\n\ntoday's answer to these questions is a collection of [various](https://en.wikipedia.org/wiki/Google_Cloud_Platform) [cloud](https://en.wikipedia.org/wiki/Amazon_Web_Services) [platforms](https://en.wikipedia.org/wiki/Netlify), inevitably controlled by large companies and no doubt resting on a brittle tower of overly complex software.\n\n\ni propose a nice way to decentralize and standardize all of this and simplify the stack. a piece of relatively simple server software (the \"generic server\") with a standard public-facing API could be deployed on any server machine, and make that server sell its bandwidth and computation resources in exchange for cryptocurrency, automatically. people or other programs could connect to it, upload a small WASM (or [meta-wasm](portable-programs.html)) program, and pay to have it run and be able to use various amounts of resources.\n\n\nthen, if the server for example runs a multiplayer video game session, ports could be opened for client software to connect to. the coordination of what is hosted where could be maintained on [the IPFS](https://ipfs.io/) as [decentralized knowledge](where-next-piracy.html), as would be the network of trust of which server machines are known to be reliable. 
especially, for information that one might not want leaked, one could cultivate a small set of trusted server providers — just as they do now by trusting just google or just amazon, except this would be a more general and automatably manipulable framework.\n\n\npeople could buy some server resource in bulk to share with their friends if some can't be bothered to buy crypto, large companies could deploy this generic server on many computers of their datacenter, people could run generic servers on their home computers to make money from unused resources, and anyone would be able to compete without setting up a significant piece of cloud infrastructure and then spending money on marketing to get known and trusted. larger distributed applications needing to allocate and deallocate servers dynamically based on demand and geographical location would have an entire competitive market to use, rather than having to settle with one of the large providers; and their individual server units could interoperate with each other even across providers.\n\n\nserver-side WASMs could access various APIs to communicate with each other be it on the same server machine or with other server machines; they could access APIs to use storage, to use [GPU computer](https://web.dev/gpu/), to publish IPFS data, etc.", "date_published": "2021-11-01T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "7e62c8e097b15514807ffee96eeb64ab", "title": "lamenting nerds", "url": "https://carado.moe/lamenting-nerds.html", "source": "carado.moe", "source_type": "blog", "text": "lamenting nerds\n---------------\n\n\nwhen i was growing up, it was clear that computers and the internet were increasingly the new thing that was going to change the world. this eventually became the case. but one thing i was wrong about, and no doubt many others, was in expecting that this would result in young people increasingly getting into computer stuff.\n\n\nbut alas, as mentioned in the book *Because Internet*, while my generation (millenials) had to learn some tech as a bar for using computers and the internet in the first place, this isn't the case for [zoomers](https://en.wikipedia.org/wiki/Generation_Z) and younger: everything has been made easy for them, and indeed in my social circles it is people my age who tend to not just be *good* at computer stuff, but also to be *interested in it* in the first place.\n\n\nof course, accessibility is overall a good thing. i just am lamenting the lack of genuinely interested people.\n\n\nin university, in my computer science classes, most of the people present were there not out of genuine interest for computer science, but because that knowledge is thought to be highly demanded in the market. again, while this is technically useful, it is unfortunate that it has, in my view, pushed nerds to the wayside. in the case of universities, it has also corrupted curriculums: academic content (which is what universities are supposedly about) keeps retreating in favor of more practical information technology knowledge, making both demographics (academy-oriented and industry-oriented) dissatisfied at having to waste time learning the other half (uni courses in france are largely not elective). 
i would rather have schools dedicated to practical information technology on one hand, and universities focusing on academic understanding on the other, and let people pick which one they want; or even better, mix and match courses from either.", "date_published": "2021-10-24T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "7df8127136acffbd6f56f84c7585893f", "title": "alignment is an optimization processes problem", "url": "https://carado.moe/alignment-optimization-processes.html", "source": "carado.moe", "source_type": "blog", "text": "alignment is an optimization processes problem\n----------------------------------------------\n\n\ni like to talk about [AI](https://www.lesswrong.com/tag/ai) alignment a lot, but the matter of alignment is really a problem of *optimization processes* in general.\n\n\nhere are some ways it applies to some other areas:\n\n\n### natural selection\n\n\nnatural selection is an optimization process that improves the survival and duplication of inheritable traits (genes) in living beings.\n\n\nit is not intelligent: there is no agent involved in this process which is able to make decisions by looking ahead into the future at what consequences those decisions will have; with the possible exception of humans making rational decisions about what will maximize their amount of offspring.\n\n\nit is completely multipolar: basically no agents in this process (either genes themselves, or individuals or populations carrying those genes) have the ability to coordinate decisions with one another, since they're not even intelligent.\n\n\nthe default of natural selection is genes whose only purpose is to be better at duplicating themselves.\n\n\none way in which we've aligned this process is by breeding: by selecting the individuals we like best among, for example, crops, cattle, or dogs, we've been able to align the process of gene selection to respond to what we value rather than the default.\n\n\n### economics\n\n\nthe economy is an optimization process that improves economic efficiency.\n\n\nit is intelligent: actors in the economy, ranging from individuals to states and giant conglomerates, have the ability to make intelligent decisions about the long term.\n\n\nit is fairly multipolar: while they don't use it much, states do have overriding power over companies (they determine what's legal or not, after all), and also economic agents are able to coordinate to an extent using contracts and trusts. nevertheless, it is still largely multipolar, with agents overall competing with one another.\n\n\nthe default of economics is the optimization out of anything that doesn't generate maximally much resources: the optimizing out of people when they become the unoptimal form of labor because of automation, and the strip-mining of the universe to acquire ever more resources with which to create more machines to mine even more resources, and so on.\n\n\nthe way we align economics is through taxes, redistribution, and the like. redistribution like [UBI](ubi.html) aligns the economy to serve the demand of people, while taxing externalities can align economic agents to take steps to preserve nice things, such as avoiding pollution.\n\n\n### electoralism\n\n\nelectoral representative democracy is an optimization process that improves voter satisfaction.\n\n\nit is intelligent: the agents competing for the reward, here political parties, are able to make decisions about the future. 
some organizations even plan for the very long term, taking steps to improve their chances when they become parties, long before they do.\n\n\nit is fairly multipolar: like economics, while parties can coordinate and ally with one another, they are still competing agents at the end of the day, with no central authority to guide them and solve coordination.\n\n\nthe default of electoralism is parties throwing all values under the bus to do whatever gets them and keeps them in office for as long as possible.\n\n\nthe way we align electoralism is by having universal suffrage on the one hand, which makes it so that it is the population that parties must try to satisfy; and the various apparatuses of liberal democracies (journalism and free press, public debate, education of the voting public, etc), which we'd hope would help that voting population determine which parties do indeed implement policies that satisfy their demand.", "date_published": "2021-10-22T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c8f753187f167c100fcd64f984b43618", "title": "to wasm and back again: the essence of portable programs", "url": "https://carado.moe/portable-programs.html", "source": "carado.moe", "source_type": "blog", "text": "to wasm and back again: the essence of portable programs\n--------------------------------------------------------\n\n\nwhat is the *essence* of a portable program, a program expressed in a format such that it can then be interpreted or, ideally, compiled to run efficiently in a variety of different environments?\n\n\nthis doesn't just mean \"different OSs\" or \"different CPU architectures\", but can even expand to compiling programs to different forms of computing like [GPU code](https://en.wikipedia.org/wiki/RISC-V) or maybe even [FPGA](https://en.wikipedia.org/wiki/Field-programmable_gate_array)s.\n\n\nwhen we tried to figure some of this out for web pages, we came up with [\"Native Client\"](https://en.wikipedia.org/wiki/Google_Native_Client); but it eventually became clear that the [LLVM intermediate representation](https://en.wikipedia.org/wiki/LLVM#Intermediate_representation) that it uses wasn't a good fit for a variety of reasons, so we eventually settled on [WASM](https://en.wikipedia.org/wiki/WebAssembly); and now [everyone is moving in that direction very fast](https://bytecodealliance.org/) despite [its](http://troubles.md/posts/wasm-is-not-a-stack-machine/) [various](http://troubles.md/posts/why-do-we-need-the-relooper-algorithm-again/) [issues](http://troubles.md/posts/the-stack-is-not-the-stack/).\n\n\nalas, there can be a variety of factors that influence how a program should be compiled to take full advantage of the machine it's running on:\n\n\n* what is the size of pointers? WASM currently hardcodes this to 32, and another WASM variant will come out to hardcode it to 64; just 32 works for many uses but having these hardcoded at compile time is hardly a great choice in general.\n* how much CPU cache is available? currently, barely anything ever takes this into account, even though uncached memory access times can be a huge cause for inefficiency; and let's not get into cache-manipulation instructions, which nobody's using because [we all like to pretend our computers are simpler than they are](https://queue.acm.org/detail.cfm?id=3212479) (which no doubt also encourages hardware designers to only make CPUs for programmers that have this expectation, but this is a story for another time).\n* how much memory can be allocated? how is it allocated?
can programs and libraries share entire memory blocks with one another seamlessly? WASM currently takes a pretty naive approach to this with \"linear memories\", and [a variety of conversion schemes](https://github.com/WebAssembly/interface-types/blob/main/proposals/interface-types/Explainer.md) are guaranteed to make the standard a lot more complex.\n* what instructions are available? instructions like [count leading zeros/count trailing zeros](https://en.wikipedia.org/wiki/Find_first_set) or [popcount](https://en.wikipedia.org/wiki/Hamming_weight) can be pretty instrumental to some algorithms performing efficiently, but compiling those in terms of other instructions on instruction sets that don't have them natively can be quite the loss in performance.\n* how many registers are there? are some operations expected to use only certain registers? how efficiently can values be pushed and popped from the stack? [how much stack even *is* there??](https://utcc.utoronto.ca/~cks/space/blog/programming/CStackSizeInvisible)\n* what is the expected alignment of various types in memory? how are those alignments checked? are unaligned accesses unsafe or merely slow?\n* and then *everything to do with atomics and multithreading* (WASM's attempts at addressing these seem at the moment pretty unsatisfactory)\n\n\nmore importantly, effects of questions like those can ramificate upwards: a change in pointer size (say between 32 and 64) or knowledge about the cache sizes, or cache-fetching or branch-prediction algorithms of a CPU should be able to lead to an entire data structure \"choosing\" a very different implementation (not just for algorithms, but also but for memory layout!). and those changes are even more far-reaching once parts of this algorithm (such as accessor functions for data structures) get inlined in various ways in other parts of the code. WASM just assumes some relatively common invariants and that's it; there's no ability to provide entirely different algorithms based on even pointer size or alignment requirements.\n\n\nso, to be able to make arbitrarily complex decisions based on those environmental conditions, a portable program should not be just a flat WASM, but should in fact be a dynamic metaprogram which, upon initialization, examines the environment and makes all the right choices to produce a code that is able to run optimally on the target environment. ideally, that metaprogram itself should be written in yet another meta²program; but this one compiles the meta¹program for the compiling environment rather than the target environment (those can be different! think of a meta²program, producing for the CPU a meta¹program whose goal is to produce for the GPU the end program). ultimately, though, we need *some* form of basic language to bootstrap this whole process, at the top of the tower of metaⁿprograms: this is where WASM can come back.\n\n\nsuch metaⁿprograms should expect to interact with an API that would be like that of [a JIT compiler library](https://github.com/wdv4758h/awesome-jit#id3), with functions like `create_function(parameter_types, return_types) → FunctionId` or `add_function_call(function_id, parameter_variables) → ValueId` used to generate pieces of the metaⁿ⁻¹program. 
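to make the shape of such an API more concrete, here is a minimal rust sketch of what a metaⁿprogram talking to that kind of codegen interface could look like. everything in it is invented for illustration — `TargetInfo`, `CodeGen`, the emulated-popcount fallback — and is not taken from any real JIT library; `create_function` and `add_function_call` just echo the signatures named above. the point is only the control flow: the metaprogram inspects the environment first, and only then decides what code to emit.

```
// hypothetical sketch of a JIT-library-like codegen API for metaprograms;
// nothing here is a real library, the names loosely echo the post.

#[derive(Clone, Copy)]
enum Ty { I32, I64 }

#[derive(Clone, Copy)]
struct FunctionId(usize);
#[derive(Clone, Copy)]
struct ValueId(usize);

/// facts about the target environment that the metaprogram can branch on
struct TargetInfo {
    pointer_bits: u32,
    cache_line_bytes: usize,
    has_popcount: bool,
}

/// stand-in for the compiler-backend handle handed to the metaprogram
struct CodeGen {
    functions: Vec<(Vec<Ty>, Vec<Ty>)>, // (parameter types, return types)
    instrs: Vec<String>,                // strings standing in for real IR
}

impl CodeGen {
    fn new() -> Self {
        CodeGen { functions: Vec::new(), instrs: Vec::new() }
    }
    fn create_function(&mut self, params: Vec<Ty>, returns: Vec<Ty>) -> FunctionId {
        self.functions.push((params, returns));
        FunctionId(self.functions.len() - 1)
    }
    fn add_function_call(&mut self, f: FunctionId, args: &[ValueId]) -> ValueId {
        self.instrs.push(format!("call fn{} with {} args", f.0, args.len()));
        ValueId(self.instrs.len() - 1)
    }
    fn add_native_popcount(&mut self, x: ValueId) -> ValueId {
        self.instrs.push(format!("popcnt v{}", x.0));
        ValueId(self.instrs.len() - 1)
    }
    fn add_emulated_popcount(&mut self, x: ValueId) -> ValueId {
        self.instrs.push(format!("shift/mask popcount of v{}", x.0));
        ValueId(self.instrs.len() - 1)
    }
}

/// the metaⁿprogram: inspect the environment, then emit code suited to it
fn metaprogram(env: &TargetInfo, cg: &mut CodeGen) -> FunctionId {
    // pointer size decides which word type the generated function works on
    let word = if env.pointer_bits == 64 { Ty::I64 } else { Ty::I32 };
    // a data-structure generator could likewise size its nodes off the cache line
    let _node_bytes = env.cache_line_bytes * 4;
    let f = cg.create_function(vec![word], vec![Ty::I32]);
    let arg = ValueId(0); // assume the backend exposes parameter 0 as a value
    // instruction availability decides which lowering of popcount gets emitted
    let bits = if env.has_popcount {
        cg.add_native_popcount(arg)
    } else {
        cg.add_emulated_popcount(arg)
    };
    cg.add_function_call(f, &[bits]); // e.g. a helper or recursive call
    f
}

fn main() {
    let env = TargetInfo { pointer_bits: 64, cache_line_bytes: 64, has_popcount: true };
    let mut cg = CodeGen::new();
    let f = metaprogram(&env, &mut cg);
    println!("generated fn{} for a {}-bit target:", f.0, env.pointer_bits);
    for instr in &cg.instrs {
        println!("  {instr}");
    }
}
```

a real version would emit actual IR rather than strings, but the direction of the dependency — environment in, code out — is the part that matters here.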
ideally, different metaⁿprograms from different places could even end up getting their functions inlined with each other; a library generated by a metaprogram, and a user program generated by another metaprogram from another vendor but using that library, should be able to be inlined with each other, rather than \"naively\" loaded like current dynamic libraries. maybe their two metaprograms should even be able to \"negotiate\" optimizations with one another using specification contracts, but this seems hard to set up.\n\n\nif \"object programs\" (meta⁰programs) — but also higher up metaⁿprograms — are expected to be safely sandboxed, the best way to do this might not be to dynamically check everything and then hope optimization can remove some checks, but instead the metaⁿ⁺¹program that produces them should be able to manipulate logical statements to \"logically prove\" to the compiler that the program being produced is safe; and adding a dynamic check would be just one way to guarantee this safety. the point is, demonstrating safety should be able to, like code generation, be an arbitrarily complex process, rather than a very strict one limited to whatever type system is available, and whatever hacks one can build on top of it.\n\n\nideally, metaⁿprograms should also be able to tap into a collection of optimizing code-transformation libraries, which could be updated regularly such that old programs can benefit from new optimizations; but should be proven to be correct such that this doesn't affect behavior we care about. in this way, logically proving behavior is not just a matter of sandboxing or program safety, but also a matter of optimization even in \"unsafe\" programs.\n\n\nthis approach, in some ways despite its lesser dynamicity, is more general than [\"runtime metaprogrammability\"](degrees-of-runtime-metaprogrammability.html) in that the metaprogram is able to create a mesaprogram (the opposite of a metaprogram) for a vastly different target environment than the one it is itself running on.", "date_published": "2021-10-21T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "a5f2f12ceb1353f8f9c587d7eac48a36", "title": "cosmic missing outs", "url": "https://carado.moe/cosmic-missing-outs.html", "source": "carado.moe", "source_type": "blog", "text": "cosmic missing outs\n-------------------\n\n\nthis might be a complete waste of brainflops, but sometimes i wonder about \"cosmic missing outs\".\n\n\nmy typical example for those is the culture of modern japan.\n\n\nimagine timelines where japan never became the country it did, and we never got its culture. that'd be a huge thing to miss out on, right? 
the second best thing might be korean culture or something like that.\n\n\nbut, now that you've imagined this timeline that is missing out on modern japanese culture, imagine the opposite: there are timelines out there that have those great cultures of countries that we're missing out on, that us missing out on is kind of on the same scale as those other timelines missing out on japan's culture.\n\n\ni'm talking about this because i just thought of some other things kind of of this type:\n\n\nwhat are some unknown things that we are missing out on, that us missing out on is kind of like if other timelines were missing out on music?\n\n\nwhat are some unknown things that we are missing out on, that us missing out on is kind of like if other timelines were missing out on philosophy, science, or math?\n\n\nthese speculations are the closest i can get to putting human minds into perspective and considering the existence of things entirely outside of human conception, the way many things are entirely outside of a mouse or ant's ability to conceive.\n\n\nto be clear: i still can't have that consideration, this is only the closest i get, but it's not quite there.", "date_published": "2021-10-13T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "380769f63ebf39c4ca7bb856be8e04dd", "title": "exact minds in an exact world", "url": "https://carado.moe/exact-minds-in-an-exact-world.html", "source": "carado.moe", "source_type": "blog", "text": "exact minds in an exact world\n-----------------------------\n\n\n[in the sequences](https://www.readthesequences.com/Zero-And-One-Are-Not-Probabilities) it is argued that 0 and 1 are not probabilities; that these \"certainty ratios\" aren't meaningful. but, i can think of a situation that challenges this.\n\n\nimagine a fully deterministic world — for example, running on [a cellular automaton](https://en.wikipedia.org/wiki/Cellular_automata) — and imagine that in this world there are some intelligences (either artificial or natural) that utilize this determinism to have the ability to make flawless logical deductions (for example, [automated theorem proving](https://en.wikipedia.org/wiki/Automated_theorem_proving) algorithms running on computers that cannot ever have undetected [hardware failures](https://en.wikipedia.org/wiki/Soft_error)). for example, if they think about mathematics, under the axioms under which they work, 2 + 2 will always equal 4, and doing any mathematical computation will either result in them knowing they don't have the computational resources to do the operation, or the result being guaranteedly true with the same certainty as that the cellular automaton's rules will be applied next tick.\n\n\nnow, these beings still have a use for probability and statistics: those can be used to talk about parts of the world that they don't have complete information about. but, there will be some contexts, both purely in their minds (such as logic or math) or sometimes in the real world (they could make assessments like \"this box cannot contain any [spaceship](https://en.wikipedia.org/wiki/Spaceship_%28cellular_automaton%29) of a certain size\") that *will* be, functionally, certain.\n\n\nit could be argued that they *should* still be weighing everything by the probability that there might be unknown unknowns; for example, their cellular automaton might have rules that apply only very rarely, and that they never got a chance to observe yet but might yet observe later.
but, let's say that they *assume* the rules of their world are exactly as they think, and let's say that they happen to be correct in that assessment. does that not make some of their deductions actually entirely certain?", "date_published": "2021-10-12T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "39ff7224aaed8127a2c56f4b7ecee36a", "title": "meta-tracking", "url": "https://carado.moe/metatracking.html", "source": "carado.moe", "source_type": "blog", "text": "meta-tracking\n-------------\n\n\nsome social constructs don't originally track anything in the real world, but people who erroneously believe in them start assigning them attributes and meaning, and then those concepts bootstrap themselves into being real at least in that they track people's beliefs.\n\n\nso, for example, refusing to follow astrology because it doesn't track the real world, fails to track all the people that start acting in astrologically predictable ways from believing (originally erroneously) in astrology.\n\n\nanother way in which one must take care to meta-track because people are involved, is the meaning of the meaning of words. the meaning of a word is defined by its usage; but, \"the meaning of a word\" is understood by many to instead track some essence of the word. while the idea of that essence is wrong, saying \"the meaning of a word is defined by its usage\" is kind of wrong; not because that's not what the meaning of a word is, but because in that sentence one is using a fairly non-usage meaning of \"the meaning of a word\".\n\n\nand, you have to remember that phrases mean what they are understood to mean; so, in a weird way, the only statements that are understandable to someone are the ones that are agreeable with what they think, because those statements are those that match the general worldview-ideas-definitions that the person has; and fundamental disagreement entails using definitions of words and ideas that the person doesn't have, and therefore are kind of failures to communicate with them.", "date_published": "2021-10-10T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "0b7714d606af420ef0241ac8f6ad8efd", "title": "do not form your own opinion", "url": "https://carado.moe/do-not-form-your-own-opinion.html", "source": "carado.moe", "source_type": "blog", "text": "> \"your mind is a labyrinth of anti-cognitive-bias safeguards, huh?\"\n> \n> \n\n\na friend, to me\n\n\ndo not form your own opinion\n----------------------------\n\n\nwhen confronted with an idea, it can be tempting to try to evaluate it. in fact, the brain does this by default.\n\n\nhowever, i try to avoid doing this; i tend to dismiss ideas that don't have the approval of the consensus of their respective field of expertise. the reason for this is simple: i don't trust my brain (or most anyone else's) to evaluate the validity of ideas outside of what they have a profound understanding of (which [is itself hard to determine](https://noahpinion.substack.com/p/epistemic-trespassing-or-epistemic)).\n\n\nthis is an idea i learned after getting burned many times by believing even popular things that turned out to be just plainly false. clearly, i must be wrong at determining whether an idea i encounter is true.
which made sense once i [learned about biases](https://www.readthesequences.com/What-Do-I-Mean-By-Rationality).\n\n\nbrilliant words from renowned philosophers or economists can sound insightful and empowering, but so can new-age nonsense about chakras and astrology; and stepping back from the idea to examine whether it actually provides novel and useful value (rather than just [being a template](https://astralcodexten.substack.com/p/is-this-predictive-coding) to [make you hear whatever you want to hear](https://www.youtube.com/watch?v=1okD66RmktA)) is very hard and can require an extensive background in the field in question. [even studies picked on their own](https://slatestarcodex.com/2014/12/12/beware-the-man-of-one-study/) or [entire fields](https://slatestarcodex.com/2014/04/28/the-control-group-is-out-of-control/) will get things wrong, but in general they're still *more likely* to be *less wrong* than you.\n\n\nso, i tend to go by where a statement comes from. the very same idea phrased in the very same way, i will dismiss if it comes from some random self-help youtuber or [therapy book](https://slatestarcodex.com/2019/11/20/book-review-all-therapy-books/); but if it comes from a philosopher who is still being talked about after centuries or has the agreement of the vast majority of economists, then it's probably not *completely* worthless.\n\n\nfind trusted sources that seem to respect academic (or [rationalist](https://www.lesswrong.com/)) consensus, and/or are more qualified than you at topics you don't know much about, and in your own communication try to be careful and humble when talking about ideas you don't have profound background knowledge of.\n\n\nthere *is* room to hear weird ideas, *[carefully](cultural-and-memetic-hygiene.html)* and without falling into [easy traps](https://www.lesswrong.com/posts/AdYdLP2sRqPMoe8fb/knowing-about-biases-can-hurt-people) or rabbit holes. in general, just try to constantly keep in mind that the information you're receiving is only the opinion of the person saying it; *not* particularly more likely to be factual than not, contrary to what your brain is designed to assume.", "date_published": "2021-09-11T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "84f451b70e665b25a3995cab13e9e9cc", "title": "∀V: A Utopia For Ever", "url": "https://carado.moe/∀V.html", "source": "carado.moe", "source_type": "blog", "text": "∀V: A Utopia For Ever\n---------------------\n\n\n![](%E2%88%80V3.svg)\n\n\n∀V (read \"universal voluntaryism\" or \"univol\") is my utopia proposal. for people who are familiar with me or my material, this may serve as more of a clarification than an introduction; nevertheless, for me, this will be the post to which i link people in order to present my general view of what i would want the future to look like.\n\n\n### what's a person?\n\n\nyou'll notice that throughout this post i've stuck to the word \"person\" instead of, for example, \"human\". this isn't just in case we eventually encounter aliens who we consider to be persons just like us, but it's also possible some existing animals might count, or even beings whose existence we largely don't even envision. who knows what kind of computational processes take place inside the sun, for example?\n\n\nby person i mean something deserving of moral patienthood; though this is still hard for me to even start to determine.
it probably requires some amount of information system complexity, as well as a single point of explicit consideration and decision, but apart from that i'm not quite sure.\n\n\ni do know that pretty much all currently living humans count as moral patients. other than that, we should probably err on the safe side and consider things moral patients when in doubt.\n\n\n### systems within systems\n\n\nall top-level systems are, in the long term, permanent.\n\n\nif you want society to \"settle what it wants later\", then your top-level permanent system is what they'll eventually settle on.\n\n\nif you want society to never be stuck in any system and always have a way out, then the top-level permanent system is \"going from system to system, being forever unable to settle\" and you better hope that it spends more time in utopian systems than dystopian systems.\n\n\nif your view is \"whatever happens happens\", then your top-level permanent system is whatever happens to happen. by not caring about what the future looks like, you don't make the future more free, you only are less likely to make sure it's one you'd find good.\n\n\nif there is going to be a top-level system no matter what, no matter how flexible its internals are, we ought to care a lot about what that system is.\n\n\n### enforcement: superintelligence\n\n\neven if you don't think [superintelligence explosion](https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion) is [imminent](were-all-doomed.html), you should think it will happen eventually. given this, what may very well be [the \"infinite majority\" of remaining time](ai-alignment-wolfram-physics.html) will be one where a superintelligence is the ultimate decider of what happens; it is the top-level.\n\n\ni find this reassuring: there *is* a way to have control over what the eternal top-level system is, and thus ensure we avoid possibilities such as unescapable dystopias.\n\n\n### generalized alignment\n\n\nin AI development, \"alignment\" refers to [the problem of ensuring that AI does what we *actually* want](https://en.wikipedia.org/wiki/AI_control_problem) ([rather than](https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml), for example, what we've explicitly instructed it to do, or just maximizing its reward signal).\n\n\nwhen we think about how to organize future society, we actually care not just about the alignment of the top-level superintelligence, but also \"societal alignment\" before (and within) that. i will call \"generalized alignment\" the work of making sure future society will be in a state we think is good, whether that be by aligning the top-level superintelligence, or aligning the values of the population.\n\n\nso, even if you don't think a superintelligence is particularly imminent, you should want society to start worrying about it sooner rather than later, given what you should consider being the amount of unknown variables surrounding the time and circumstances at which such an event will occur. you want to align society *now*, to your values as well as the value of figuring out superintelligence alignment, hopefully not too late.\n\n\n### not just values\n\n\nat this point, one might suggest directly loading values into superintelligence, and letting it implement whatever maximizes those values. while this may seem like a reasonable option, i would kind of like there to be hard guarantees. 
technically, from a utilitarian perspective, there exists a number N sufficiently large that, if N people really want someone to genuinely be tortured, it is utilitarianly preferable for that person to be tortured than not; my utopia instead proposes a set of hard guarantees for everyone, and *then, within the bounds of those guarantees*, lets people do what they want (including \"i just want superintelligence to accomplish my values please\").\n\n\none might consider the solution to that to be \"just make it that people never want others to be tortured\", but that's a degree of freedom on people's thoughts i'd rather keep if i can. i want persons to be as free as possible, including the freedom to want things that can't ethically (and thus, in my utopia, can't) be realized.\n\n\n### a substrate for living\n\n\ni am increasingly adopting [wolfram's computational perspective](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/) on the foundation of reality; beyond offering great possibilities such as [overcoming heat death](ai-alignment-wolfram-physics.html), i feel like it strongly supports the [informational view of persons](you-are-your-information-system.html) and the ability for people and societies to live on any form of computation framework; and those aren't particularly less or more real than our current, [standard model](https://en.wikipedia.org/wiki/Standard_Model)-supported reality.\n\n\ngiven this, the most efficient (in terms of realizable value per unit of resource) way for superintelligence to run a world is to extract the essence of valuable computations (notably [the information systems of persons](you-are-your-information-system.html)) into a more controllable substrate in which phenomena such as aging, attachment to a single physical body, vulnerability to natural elements, or vulnerability to other persons, can be entirely avoided by persons who wish to avoid them. this extraction process is often referred to as \"uploading\", though that implies uploading into a nested world (such as computers running in this world); but if wolfram's perspective is correct, a superintelligence would probably be able to run this computation at a level parallel to or replacing standard model physics rather than on top of that layer.\n\n\nthis is not to say that people should all be pure orbs of thought floating in the void. even an existence such as a hunter-gatherer lifestyle can be extracted into superintelligence-supervised computation, allowing people to choose superintelligence-assisted lifestyles such as \"hunter-gathering, except no brutal injury please, and also it'd be nice if there were unicorns around\".\n\n\n### universal voluntaryism\n\n\nat this point, we come to the crux of this utopia, rather than its supporting foundation: ultimately, in this framework, the basis of the existence of persons would be for each of them to have a \"computation garden\" with room to run not just their own mind but also virtual environments. 
the amount of computational resource would be like a form of universal basic income: fixed per person, but amounts of it could be temporarily shared or transferred.\n\n\nnote that if resources are potentially infinite over time, as wolfram's perspective suggests, then there is no limit to the amount of raw computation someone can use: if they need more and it's not available, superintelligence can just put either their garden or *everyone's gardens* on pause until that amount of computation resource becomes available, and then resume things. from the point of view of persons, that pause would be imperceptible, and in fact functionally just an \"implementation detail\" of this new reality.\n\n\npersons would have the ability to transform their mind as they want (though having a bunch of warnings would probably be a reasonable default) and experience anything that their garden can run; *except for computing the minds of other persons*, even within their own mind: you wouldn't want to be at the mercy of someone just because you happen to be located within their mind.\n\n\npersons would be able to consent to interact with others, and thus [have the ultimate say on what information reaches their mind](cultural-and-memetic-hygiene.html). they could consent to visit parts of each other's gardens, make a shared garden together, and all manner of other possibilities, so long as all parties consent to all interactions, as determined by superintelligence — and here we're talking about [explicit consent](defining-freedom.html), not inferred desires even though superintelligence would probably have the ability to perfectly determine those.\n\n\nfor a perspective on what a society of \"uploaded\" persons might look like, see for example [Diaspora by Greg Egan](https://en.wikipedia.org/wiki/Diaspora_%28novel%29).\n\n\n### rationale and non-person forces\n\n\nthe goal of this structure is to allow people to live and associate with each other in the most free way possible, making the least possible restrictions on lifestyle, while retaining some strong guarantees about consent requirements.\n\n\nin a previous post i talk about [non-person forces](two-principles-for-topia.html); those being for example social structures that act with an agenthood of their own, running on other people as their own substrate.\n\n\nat the moment, i simply don't know how to address this issue.\n\n\nthe problem with the \"dismantlement\" of such forces is that, if every person is consenting to the process, it's hard to justify superintelligence coming in and intervening. on the other hand, it does feel like not doing anything about them, short of being able to align sufficiently many people *forever*, will tend to make people dominated by such structures, as a simple process of natural selection: if there is room for such structures and they can at least slightly causate their own growth or reproduction, then they will tend to exist more than not. this may be thought of as [moloch](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/) \"attacking from above\".\n\n\none major such potential non-person force is superintelligence itself trying to make people tend to want to live in ways that are easier to satisfy. 
if everyone wants to sit in their garden forever and do nothing computationally costly, that makes superintelligence's job a lot \"easier\" than if they wanted to, for example, communicate a lot with each other and live computationally expensive to run lifestyles; and the reason superintelligence will want to make its job easier is to increase the probability that it succeeds at that job (which it *should* want).\n\n\nif informationally insulating people from superintelligence except when they outright consent to it intervening in their decisions is *not* sufficient, then maybe we can add the rule that people can never ask superintelligence to intervene in their life unless there is one single optimal way to intervene, and hopefully *that's* enough. the idea there being: if, for any request to superintelligence, there is only a single optimal way to accomplish that request, then superintelligence has no degree of freedom to influence people and thus what they want.\n\n\n### on new persons\n\n\nthere are some reasons to be worried about the creation of new persons.\n\n\none is [malthusian traps](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/): if the amount of resources is either finite or growing but bounded, or if it's unknown whether the amount of resources will end up being finite or not, then you *have* to cap population growth to at most the speed at which the amount of resource grows (if the amount of resources grows, the maximum speed of population growth should preferably be lower, so that the amount of resource each person has can grow as well). while it does seem like in current society people tend to have less kids when they have a higher quality of life, in a system where persons can live forever and modify their minds, one can't make such a guarantee over potentially infinite time.\n\n\nanother is replicator cultures: if there is no limit on creating new persons, and if people can influence even just a bit the values of new persons they create, then soon the world is overrun by people whose values are to create kids. or: making a world in which \"new person slots\" are filled by whoever wants to fill them first, will just select for people who want to fill those slots the most.\n\n\nthere might also be weird effects such as, even if resources were infinite, allowing arbitrary amounts of persons to be created could \"stretch\" the social network of consenting-to-interact-with-each-other persons such that, even if someone has registered an automatic consent to interact *even just a bit* with the kids of persons they already consent to interact with, they are soon flooded with a potentially exponentially growing network of kid interactions; though this probably can be addressed by that person by revoking this automatic consent.\n\n\nbeyond various resource and network effects, new persons create an ethical dilemma: does a person consent to living? 
or, for a child, do they consent to being taken care of for some amount of years after they are born — a time during which we often consider them to require affecting them in ways they might be unable to consent to?\n\n\nif such philosophical quandries don't have a solution, then the safest route is to simply forbid the haphazard creation of new persons, whether that be through conventional human infants, [headmates](https://en.wikipedia.org/wiki/Multiplicity_%28psychology%29) and [tulpas](https://en.wikipedia.org/wiki/Tulpa#21st_century) if those are \"real\" enough to count as persons, and potentially other ways of creating new persons that can't consent to future interactions because they don't exist yet. **2022-08-11 edit: this idea has [its own post](unviable-moral-patient.html) now.**\n\n\non the other hand, one way to increase the population *with* consent, is simply to \"fork\" existing persons: create a duplicate of them. because both are a continuation of the original single person, the original person's consent counts for both resulting persons, and there is no issue. the \"merging\" of consenting persons together might be possible *if* it can be reasonably estimated that their shared consent \"carries through\" to the new, merged person; i am currently undecided about how to even determine this.\n\n\nfinally, if resources are finite, creating a new person (whatever the means) should require permanently transferring one \"universal basic computation amount\"'s worth of computation garden to them, as no person should start out without this guarantee. this could be done by a person consenting to die and give up their own computation garden, it could be done by several \"parents\" consenting to give up a ratio of their gardens to the new person, it could be done by reclamating the redistribution of persons who die and don't make any decisions about what should be done with their computation garden, etc.", "date_published": "2021-08-30T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "92b6064f45464894d7ed12666c9ec990", "title": "what happens when you die?", "url": "https://carado.moe/what-happens-when-you-die.html", "source": "carado.moe", "source_type": "blog", "text": "what happens when you die?\n--------------------------\n\n\ncontrary to popular (secular) belief, i don't believe nothing happens.\n\n\nconsidering that [you are your information system](you-are-your-information-system.html) and [nothing else](persistent-data-structures-consciousness.html), any future occurence of your information system *is you*. so, the meaning of the question \"what happens when you die?\" is really \"what are some next things your information system will percieve after facing what should be fatal events in its original body?\"\n\n\nthis is very likely [not nothing](quantum-suicide.html). somewhere, in some timeline, your information system is probably being redundanced.\n\n\nfirst, your body can miraculously avoid death. this would be a weird kind of immortality, where there is almost always a timeline where you somehow avoid death. it is, however, pretty unlikely to persist.\n\n\nsecond, your mind could arise somewhere by accident. this could be as simple as random fluctuations in space producing something that runs your mind's information system by pure chance. this is *extremely* unlikely.\n\n\nin fact, the most likely scenario is that someone in the far future reproduces your mind on purpose. 
for example, this could be a society in a [u-line](timeline-codes.html) being able to, and deciding, to run an accurate enough simulation of the entire earth up to some point, and downloading people from this simulation into their world, to allow them to avoid death. as it'd probly take a bunch of effort, and sounds like a pretty nice thing to do, i expect that to happen mostly in u-lines; however, there could be some [s-lines](timeline-codes.html) where this happens too. and while getting resurrected seems more likely in u-lines than s-lines, s-lines seem more likely than u-lines, and i don't know if the probabilities cancel out.\n\n\nso, what happens when you die? you wake up either in heaven or hell, depending not on your personal actions in particular but in how likely it is we figure out AI alignment (a probability which you do have, if small, an impact on).", "date_published": "2021-08-24T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "24e2ee738a5fae360cd5ee07e389f72b", "title": "right to death, therefore", "url": "https://carado.moe/right-to-death-therefore.html", "source": "carado.moe", "source_type": "blog", "text": "right to death, therefore\n-------------------------\n\n\nbecause i like [freedom](defining-freedom.html) so much, i think people should be generally able to do what they want. but this immediately raises a conundrum: should someone be able to do an action that hampers their *future* freedom?\n\n\none relatively extreme case is the ability to commit suicide: it's about as committal as you can get, in terms of actions with future ramifications to oneself. if you choose to get in debt or cut off a limb, that can be pretty hard to get out of, but it still seems less impactful and less inescapable than suicide.\n\n\nso, should suicide be allowed? (i am of course only talking about reasonable, clear-minded suicide, *informedly* consented; not coerced suicide, nor suicide out of compromised ability to make decisions)\n\n\nin my opinion, *obviously yes*. the alternative, that people be forced to live until their biology kills them (which we may very well find ways to prevent indefinitely), seems abhorrent to me. given this guarantee, then it makes sense to me that any lesser commitments should also be fine.\n\n\nthere are some criticisms one can make about this argument. bad but non-death commitments could tend to increase the amount of suffering people in society at any given moment; and, if people change over time (as they tend to do), then commitments can ramificate into a future person who is sufficiently different from the person making the commitment that it might be considered unreasonable for them to be subject to some excessive amounts of \"locally\" unconsented negative effects. 
a cap on the time duration of commitments, and/or the requirement for people to guarantee that they remain the same \"enough\" over time until the commitment is expired (a technology we currently don't have, but will become easier to make once we're uploaded and we understand the human mind better), might be reasonable patches for these issues.", "date_published": "2021-08-22T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "18e3315cdf52f14440c397bb7f935efe", "title": "kolmogorov complexity objectivity and languagespace", "url": "https://carado.moe/kolmogorov-objectivity-in-languagespace.html", "source": "carado.moe", "source_type": "blog", "text": "kolmogorov complexity objectivity and languagespace\n---------------------------------------------------\n\n\n(edit: this post [has gotten a reply](https://snugglyserials.wordpress.com/2021/08/16/complexity-is-not-objective/) from my interlocutor, making a broader introduction to the topic at hand and then making her side of the argument. you might want to read it before you read this.)\n\n\n[kolmogorov complexity](https://en.wikipedia.org/wiki/Kolmogorov_complexity) seeks to determine how complex a piece of information is by asking: what is the size of the smallest program that produces that piece of information?\n\n\ni was arguing with someone about how objective kolmogorov complexity is: their argument against objectivity is that the choice of language matters, but my position is that some pieces of information (such as languages themselves) are gonna just tend to be *generally (and thus objectively) simpler* than others (and, generally, we should use simpler *languages* as our kolmogorov simplicity-measuring language).\n\n\nlet us consider \"languagespace\", a directed graph where there is one vertex per possible turing-complete language (there are [countably infinitely](https://en.wikipedia.org/wiki/Countable_set) many of them).\n\n\na language can be used to measure the simplicity of any other language (because of turing completeness, every language can express every other language), and we'll require the comparison of those measures to be a [total order](https://en.wikipedia.org/wiki/Total_order), and to be unique (two different input languages won't have an equal simplicity measure).\n\n\nthere is an edge going from every vertex to every other vertex, and those edges are labelled with a natural number: an edge going from language X to language Y with label N, means that Y is the N-th simplest language when using language X as a kolmogorov measure of complexity.\n\n\nnow, imagine a random walk through this graph, where each step you follow one arrow at random, assigning to each edge with label N a probability of 1/(2^N); such that, starting from any language X, you tend to go to a language that language X considers simple (and the infinite sum of all probabilities is indeed 1).\n\n\nmy claim is that this type of random walk through the infinite directed graph of languagespace would, after sufficiently many steps, tend to spend more time around what i'll call the \"central cluster\", than any other collection of languages. 
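to build some intuition for this claim, here is a small rust simulation of the walk on a made-up, finite languagespace. the ranking table is invented purely for illustration (it is not a real simplicity measurement of anything), and because the toy graph is finite, the leftover tail of the 1/(2^N) distribution is folded into the last-ranked language so that each row still sums to 1:

```
// toy version of the languagespace random walk: finitely many "languages",
// each ranking all the others from simplest upward; the edge with rank n is
// taken with probability 1/2^n, and the last-ranked language absorbs the
// leftover mass since this toy graph is finite.

fn xorshift(rng: &mut u64) -> f64 {
    // tiny deterministic prng so the sketch needs no dependencies
    *rng ^= *rng << 13;
    *rng ^= *rng >> 7;
    *rng ^= *rng << 17;
    (*rng as f64) / (u64::MAX as f64) // roughly uniform in [0, 1]
}

fn step(rankings: &[Vec<usize>], current: usize, rng: &mut u64) -> usize {
    let mut r = xorshift(rng);
    let row = &rankings[current];
    for (i, &next) in row.iter().enumerate() {
        // rank i+1 gets probability 1/2^(i+1); the last entry catches the rest
        let p = if i + 1 == row.len() { 1.0 } else { 0.5f64.powi(i as i32 + 1) };
        if r < p {
            return next;
        }
        r -= p;
    }
    *row.last().unwrap()
}

fn main() {
    // languages 0 and 1 are a mutually-simple pair; the others mostly rank
    // them as simplest, which is the kind of structure the conjecture is about
    let rankings: Vec<Vec<usize>> = vec![
        vec![1, 2, 3, 4, 5], // what language 0 ranks as simplest, next, ...
        vec![0, 2, 3, 4, 5],
        vec![0, 1, 3, 4, 5],
        vec![1, 0, 2, 4, 5],
        vec![0, 1, 5, 2, 3],
        vec![1, 0, 4, 3, 2],
    ];
    let mut visits = vec![0u64; rankings.len()];
    let mut current = 3;
    let mut rng = 0x9e3779b97f4a7c15u64;
    for _ in 0..1_000_000 {
        current = step(&rankings, current, &mut rng);
        visits[current] += 1;
    }
    for (lang, v) in visits.iter().enumerate() {
        println!("language {lang}: visited {v} times");
    }
}
```

in this finite toy the walk does spend most of its time on the mutually-simple pair; the "edit: i'm wrong" section further down is about why that intuition fails to single out any one such cluster once languagespace is infinite.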
the \"central cluster\" is a set of what i think of as \"simple languages\", such as [SKI-calculus](https://en.wikipedia.org/wiki/SKI_combinator_calculus), [turing machines](https://en.wikipedia.org/wiki/Turing_machine), [cellular automata](https://en.wikipedia.org/wiki/Cellular_automaton), and other \"simple\" instances of common [models of computation](https://en.wikipedia.org/wiki/Model_of_computation).\n\n\nthis, however, is merely a conjecture on my part, and the person i was arguing with claims that the random walk would have no particularly \"central\" cluster it would tend to converge around, but instead it would end up gravitating around any of an infinite number of such \"mutually simple\" clusters.\n\n\n### edit: i'm wrong\n\n\ni've come to be convinced that i am wrong about this.\n\n\nimagine that there exists a finite set of languages particularly \"attracts\" the random walk more than the rest of languagespace. let's call that set A, and let's say it contains two languages: A1 and A2.\n\n\nnow, there is probably another set of languages, B, containing languages B1 and B2. in fact, given that languagespace is infinite, it seems silly to think such an isomorphic set of languages doesn't exist.\n\n\nfor example:\n\n\n\n```\n\nlanguages A1: [A1, A2, B1, B2, …]\nlanguages A2: [A1, A2, B1, B2, …]\n\nlanguages B1: [B1, B2, A1, A2, …]\nlanguages B2: [B1, B2, A1, A2, …]\n\n(and the \"…\" rest of the list is identical in all four languages)\n\n```\n\nfinite cluster A and finite cluster B are isomorphic in terms of their lists of language simplicities, so the random walk will encounter A as much as B. even if you're willing to add B1 and B2 to the set of \"objectively simplest languages\", you can then imagine yet another set of languages that is isomorphic to the new one you have, and so on forever.\n\n\ntherefore, there is not finite set of simplest languages.", "date_published": "2021-08-15T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "33b75a9812b6c73fad02094c27fc6922", "title": "book recommendation: Greg Egan's", "url": "https://carado.moe/greg-egan-axiomatic.html", "source": "carado.moe", "source_type": "blog", "text": "book recommendation: Greg Egan's *Axiomatic*\n--------------------------------------------\n\n\ni've finally gotten around to finish *[Axiomatic](https://en.wikipedia.org/wiki/Axiomatic_%28book%29)*, a 1995 book consisting of short stories by my favorite author, [Greg Egan](https://en.wikipedia.org/wiki/Greg_Egan), of whom i've read four books: *Permutation City*, *Diaspora*, *Schild's Ladder*, and the *Orthogonal* trilogy; all four recommended, especially the first two of those.\n\n\nhe writes science fiction, with a focus on computation and information theory, elementary physics, consciousness, trans-humanism and post-humanism, and other very caradocore things.\n\n\nof Axiomatic's 18 short stories, i can especially recommend six:\n\n\n* The Hundred-Light-Year Diary\n* A Kidnapping\n* Learning to Be Me\n* Into Darkness\n* The Moral Virologist\n* Unstable Orbits in the Space of Lies", "date_published": "2021-08-14T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "5f13a75ea1957547b713a740fe524209", "title": "what is value?", "url": "https://carado.moe/what-is-value.html", "source": "carado.moe", "source_type": "blog", "text": "what is value?\n--------------\n\n\ni've come to clarify my view of value sufficiently many times that i feel like having a single post i can link to would be worth it. 
this is that.\n\n\nwhat i call *value* is *things we care about*; *what determines what we ought to do*. i use \"morality\" and \"ethics\" interchangeably to generally mean the study of value.\n\n\na lot of this post is just ethics 101, but i feel it's still nice to have my own summary of things.\n\n\nfor more on values, read [the sequences](https://www.readthesequences.com/), notably [book V](https://www.readthesequences.com/Book-V-Mere-Goodness).\n\n\nsee also [this post on how explicit values can come to be](https://slatestarcodex.com/2018/07/24/value-differences-as-differently-crystallized-metaphysical-heuristics/).\n\n\n### consequentialism vs deontology\n\n\na first distinction is that between [consequentialism](https://en.wikipedia.org/wiki/Consequentialism), where values are about *outcomes*, and [deontology](https://en.wikipedia.org/wiki/Deontology), where values are about *actions*.\n\n\nthe [trolley problem](https://en.wikipedia.org/wiki/Trolley_problem) is the typical example of a thought experiment that can help us determine whether someone is a consequentialist or a deontologist: a consequentialist will press the lever because they care about the outcome of people being alive, whereas a deontologist will not press the lever because they care about the action of causing a death.\n\n\ni am a consequentialist: i care about outcomes. that said, consequentialism has to be followed to the end: if someone says \"well, a consequentialist would do this thing, which would eventually lead to a worse world\", then they're failing to understand consequentialism: if the eventual outcome is a worse world, then a consequentialist should oppose the thing. to that end, we have [rule consequentialism](https://en.wikipedia.org/wiki/Consequentialism#Rule_consequentialism): recognizing that committing to certain rules (such as \"if you commit a murder, you go to prison\") helps us achieve generally better outcomes in the longer term.\n\n\na special case of consequentialism is [utilitarianism](https://en.wikipedia.org/wiki/Utilitarianism), in which the consequential outcome being cared about is some form of positive outcome for persons; generally happiness and/or well-being. i tend to also value people getting their values satisfied and having [self-determination/freedom](core-vals-exist-selfdet.html) (not valuing self-determination [has issues](https://slatestarcodex.com/2018/10/24/nominating-oneself-for-the-short-end-of-a-tradeoff/)), possibly moreso than happiness or well-being, so i don't know if i count as a utilitarian.\n\n\n### intrinsic vs instrumental\n\n\ni make a distinction between [instrumental values, and intrinsic values](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value) (the latter can also be called \"core values\", \"axiomatic values\", \"ultimate values\", or \"terminal values\"; but i try to favor the term \"intrinsic\" just because it's the one wikipedia uses).\n\n\ninstrumental values are values that one has because they help them achieve other values; intrinsic values are what one ultimately values, without any justification.\n\n\n* \"why do i want people to practice hygiene? so they don't get sick as often\"\n* \"why do i want people to get sick less often? because being sick seems like a decrease in their well-being\"\n* \"why do i want people to have well-being?
i can't give a justification for that, it's what i *intrinsically* value\"\n\n\nany theoretical query into values should be a sequence of instrumental values eventually leading to a set of intrinsic values; and those cannot be justified. if a justification is given for a value, then that value is actually instrumental.\n\n\njust because intrinsic values don't have justifications, doesn't mean we can't have a discussion about them: a lot of discussion i have about values is trying to determine whether the person i'm talking to *actually* holds the values that they *believe* they hold; people *can be* and very often *are* wrong about what values they hold, no doubt to some extent including myself.\n\n\none can have multiple intrinsic values; and then, maximizing the *satisfaction* of those values, is often the careful work of weighing those different intrinsic values in tradeoffs.\n\n\nthis isn't to say intrinsic values don't have causal origins; but that's a different matter from moral justificaiton.\n\n\na lot of the time, when just saying \"values\", people are talking about *intrinsic* values rather than all values (including instrumental); i do this myself, including throughout this post.\n\n\n### knowing one's values\n\n\nmost people don't have a *formalized* set of values, they just act by whatever seems right to them in the moment. but, even to [rationalists](https://www.readthesequences.com/What-Do-I-Mean-By-Rationality) like me, knowing what values one has is *very hard*, even moreso in a formalized manner; if we had the complete formal description of the values of even just one person, we'd have gone a long way towards solving [AI alignment](https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/), which is [by extremely far](ai-alignment-wolfram-physics.html) the [single most important problem humankind has ever faced](were-all-doomed.html), and [is gonna be very difficult to get right](https://www.readthesequences.com/Value-Is-Fragile).\n\n\nto try and determine my own values, i generally [make a guess and then extrapolate how a superintelligence would maximize those values to the extreme and see where that fails](core-vals-exist-selfdet.html). but, even with that process, it is very hard work, and like pretty much everyone else, i don't have a clear idea what my values are; though i have some broad ideas, i still have to go by what feels right a lot of the time.\n\n\n### selfishness vs altruism\n\n\nthis is *not* about how someone ultimately only wants *their values* to be satisfied; this is true *by definition*. this is about whether those values can be *about* something other than the person having the values.\n\n\npeople seem to be divided between the following positions:\n\n\n1. all values are ultimately selfish; there is no meaningful sense in which someone can *truly, intrinsically* care about anything outside themselves.\n2. someone can have values about themselves, or have values about the rest of the world, or both.\n3. 
all values are ultimately about the world; there is no meaningful sense in which someone can actually care about their own person in particular (for example because the notion of identity is erroneous).\n\n\ni hold position 2, and **strongly** reject position 1, though it seems very popular among people with whom i have talked about values; i see no reason why someone can't hold a value about the world outside of themselves, such as *intrinsically* wanting other people to be happy or *intrinsically* wanting the world to contain pretty things. for more on that, see [this post](https://www.lesswrong.com/posts/XPErvb8m9FapXCjhA/adaptation-executers-not-fitness-maximizers) and [this post](https://www.readthesequences.com/Terminal-Values-And-Instrumental-Values) from the sequences.\n\n\nposition 3 can make some sense if you deconstruct identity, but i believe identity [is a real thing that can be tracked](you-are-your-information-system.html), and so the outcome of which you can absolutely happen to particularly care about.\n\n\n### value preservation\n\n\n[value preservation](https://en.wikipedia.org/wiki/Instrumental_convergence#Goal-content_integrity) is the notion that, if you know that you value something (such as being wealthy or the world containing pretty things), you should probly try to avoid becoming someone who *doesn't* value those things, or worse: someone who values the opposite (such as being poor or the world containing only ugly things).\n\n\nthe reason for this is simple: you know that if you become someone who values being poor, you'll be unlikely to keep taking actions that will lead you to be wealthy, which goes against your current values; and your goal is to accomplish your values.\n\n\nsome people argue \"well, if i become someone who values being poor, and then i take actions to that end, that's fine isn't it? i'm still accomplishing my values\". but it's really not! we established that your values is \"being wealthy\", not \"being someone whose values are satisfied\". in fact, \"being someone whose values are satisfied\" is meaningless to have as a particular value; the fact that you want your values to be satisfied is implied in them being your values.\n\n\ni call the process of someone finding out that they should preserve their values, and thus committing to whatever values they had at that moment, [\"value crystallization\"](value-crystallization.html); however, one ought to be careful with that. considering one's set of values is likely a very complex thing, one is likely to hastily over-commit to what they *believe* are their values, even though they are wrong about what values they hold; worse yet, they might end up committing so hard that they actually start changing what values they have towards those believed values. this is something that of course one should aim to avoid: as mentioned above, you generally don't want to become someone who doesn't hold the values you currently do, including through the process of hasty crystallization and over-commitment.\n\n\nthis is not to say you should remain in a complete haze where you just do whatever seems right at any moment; without a special effort, this could very well entail your values changing, something you shouldn't want even if you don't know what those values are.\n\n\nwhat you should do is try to broadly determine what values you have, and generally try to commit to preserving whatever values you have; and in general, to *be the type of person who preserves the values they have*. 
this should help you preserve whatever values you actually do have, even while you still haven't figured out what they are.\n\n\na funny hypothetical version of this could be: present-you should make a contract with future-you that if they ever gain the ability to precisely examine values, they should examine what values present-you had, and adopt those.", "date_published": "2021-07-24T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "44b9dde7fdb044adc1b30c106ddf6530", "title": "culture tribes and legitimacy", "url": "https://carado.moe/culture-tribes-legitimacy.html", "source": "carado.moe", "source_type": "blog", "text": "culture tribes and legitimacy\n-----------------------------\n\n\nthere are two large culture tribes: grass roots culture and historical institutions culture.\n\n\ngrass roots culture is indie games, amateur youtube, blog posts, weeb artists on twitter and pixiv, and the like; whereas historical institutions are triple-A games, hollywood, television and youtube channels owned by big media, art and literature schools, and so on. needless to say, i associate strongly with the former and don't have much respect for the latter.\n\n\nnote that [academia doesn't necessarily mean historical institutions](https://www.youtube.com/watch?v=DRXEAGWynGA): though the two are largely associated together, there is a large amount of academia-type discussion happening among non-institutional hobbyists.\n\n\nsomething that people from the grass roots tribe feel like they lack is legitimacy. but, my claim is the following: it is in fact grass roots tribe that represents what culture the general population generally enjoys, and we should just choose to stop respecting historical institution culture; to stop thinking of it as what is good, valid, or even mainstream.\n\n\nwhen someone from grass roots \"sells out\" to go join historical institutions, i would argue that they are not gaining the legitimacy that they think they are: on the contrary, by associating with them, they are *giving* legitimacy to those institutions.\n\n\n(note that while this applies to culture, it doesn't necessarily apply to more instrumentally useful fields like science; though grass roots science have been pretty cool, imo)", "date_published": "2021-07-20T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "55521da75c0f4f48a0149472ac44e1d8", "title": "systems and diversity", "url": "https://carado.moe/systems-and-diversity.html", "source": "carado.moe", "source_type": "blog", "text": "systems and diversity\n---------------------\n\n\nas i've said in [a previous post](lets-not-generalize-politics.html): i really like culture; and, to that end, i like diversity (by which i mean people being more weird and different from one another).\n\n\nthere are many systems that exist today that affect diversity. 
most of them punish it; not as a coincidence, but because diversity is [a fragile value](https://www.readthesequences.com/Value-Is-Fragile): if you optimize for something else, it will tend to get optimized out.\n\n\nif you optimize for economic efficiency, diversity gets optimized out because the most easily served economy is one in which demand is relatively uniform.\n\n\nin general, if you optimize for people having their values satisfied, diversity gets optimized out because the most easily satisfied set of values is relatively uniform and easy to satisfy values; if you tell a superintelligence to \"make a world where everyone has their values satisfied\", the simplest way to achieve that (other than killing everyone) is to make sure everyone has very simple values like doing nothing all day or dying as soon as possible.\n\n\nthe scary thing about such an optimization is that it \"works\": at no point does an economy headed towards uniformity need to collapse; on the contrary, the more it has optimized out diversity, the more efficient and stable it'll be! so, we need to *[near-intrinsically](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value)* care about preserving diversity, even when all else seems fine. this makes diversity preservation probably my largest concern with capitalism; at least, a system that wouldn't care about efficiency, wouldn't necessarily be aligned against diversity (though it might be aligned against it for other reasons).\n\n\nsocial pressures such as [generalizations and expectations](lets-not-generalize-politics.html) punish diversity by rewarding conformity.\n\n\ndemocracy and general consensus enforcment systems punish diversity by generally letting majority lifestyles be better supported by society than minority lifestyles.\n\n\ni do know of one force of human nature which encourages diversity: [fetishism](https://en.wikipedia.org/wiki/Sexual_fetishism#Definitions). fetishism tends to make people prefer things specifically because they go against the norm. as such, i propose that if we value rich culture, we should want to cultivate fetishism.\n\n\nthe takeaway is: in any long-term societal plan, we need to care not just about values being satisfied, but about what values people have to begin with. 
a clear example in modern capitalism is advertising: it's okay that companies are aligned to satisfy values, but [with advertising they get to affect what values people have to begin with](unfair-feedback-loops.html).\n\n\n(one could argue we could encourage people to [crystallize](value-crystallization.html) and [conserve](https://en.wikipedia.org/wiki/Instrumental_convergence#Goal-content_integrity) their values, as well as forbid the creation of new persons; but [i'd rather that not be required](rationalist-by-necessity.html))", "date_published": "2021-07-20T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "e5814ff5ca56e672b2c3500bac062b1a", "title": "botched alignment and alignment awareness", "url": "https://carado.moe/botched-alignment-and-awareness.html", "source": "carado.moe", "source_type": "blog", "text": "2022-05-09 edit: i have found out that this idea is more thoroughly explored [here](https://reducing-suffering.org/near-miss/) and kinda [here](https://arbital.com/p/hyperexistential_separation/).\n\n\nbotched alignment and alignment awareness\n-----------------------------------------\n\n\n[AI alignment](https://en.wikipedia.org/wiki/AI_control_problem) is [hard](https://intelligence.org/2018/10/03/rocket-alignment/).\n\n\nan AI developer who doesn't know about the problem of alignment to general human values might accidentally develop a superintelligence which optimizes for something largely unrelated to humans, leading us to an [X-line](timeline-codes.html); on the other hand, if they make a botched attempt at alignment to human values, it seems like there's more of a chance (compared to if they don't try) at booting a superintelligence which cares about enough aspects of human existence to tile the universe with some form of humans, but not enough to make those humans' lives actually worth living (goals such as \"humans must not die\"), resulting in S-lines.\n\n\nconsidering this, raising awareness of AI alignment issues may be a very bad idea: it might be much better to let everyone develop not-human-caring-at-all AI and cause X-lines rather than risk them making imperfect attempts resulting in S-lines. or: we shouldn't try to *implement* alignment to human values until we *really* know what we're doing.\n\n\ncontrary to a [previous post of mine](were-all-doomed.html), this is a relatively hopeful position: no matter how many timelines end in X-risk, inhabited P-lines can continue to exist and research alignment, hopefully without too many S-lines being created. on the other hand, while it increases the chance of the [singularity](https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion) turning out good by leaving us more time to figure out alignment, it also means that it might take longer than i'd've otherwise expected.", "date_published": "2021-07-18T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "14d692ce117d6d1f878e741dba9c385f", "title": "AI alignment timeline codes", "url": "https://carado.moe/timeline-codes.html", "source": "carado.moe", "source_type": "blog", "text": "AI alignment timeline codes\n---------------------------\n\n\nthis is a small post proposing simple one-letter codes for identifying timelines depending on their status relative to [AI alignment](https://en.wikipedia.org/wiki/AI_control_problem) and the appearance of [superintelligence](https://en.wikipedia.org/wiki/Superintelligence):\n\n\n* **P-line**: a pre-intelligence explosion and pre-figuring out AI alignment timeline. 
we are in a P-line.\n* **X-line**: a timeline where an [existential risk (or X-risk)](https://en.wikipedia.org/wiki/X-risk) has been realized by an unaligned superintelligence. everything is dead, forever.\n* **S-line**: a timeline where a [suffering risk (or S-risk)](https://en.wikipedia.org/wiki/S-risk) has been realized by an unaligned superintelligence; the universe from then on contains net suffering on immense scales for all remaining time, [which is possibly infinite](ai-alignment-wolfram-physics.html). we should want to avoid this pretty much at all costs (including by [opting for an X-line instead](when-in-doubt-kill-everyone.html)).\n* **A-line**: AI alignment has been figured out, and no superintelligence has been deployed yet. from that point on, we have the means to reach a U-line; though this isn't guaranteed. this is where we want to get as soon as possible.\n* **U-line**: an aligned or [somehow otherwise](https://www.lesswrong.com/tag/orthogonality-thesis) benevolent superintelligence has been deployed, and we are guaranteed a relatively utopian world forever. this is the ultimate goal. while not strictly necessary, going through an A-line is almost certainly required to get there.\n\n\nU-line, X-line, and S-line all have deployed superintelligences and are therefore terminal outcomes; they are inescapable. P-line and A-line are transitionary; they likely lead to one of the three terminal outcomes mentioned here.\n\n\nother terminal outcomes might exist, but they seem unlikely enough to not warrant listing here; for example, even if everyone dies from, say, a meteor impact, life on earth or nearby will probably evolve another civilization *eventually*, which will also probably face the AI alignment challenge and end up in one of the terminal timelines.\n\n\n![](timeline-codes.svg)", "date_published": "2021-07-17T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "ce8c91a7f2a89c3f56c26e5f579b5a22", "title": "when in doubt, kill everyone", "url": "https://carado.moe/when-in-doubt-kill-everyone.html", "source": "carado.moe", "source_type": "blog", "text": "when in doubt, kill everyone\n----------------------------\n\n\none thing that is way worse than [mere existential risks](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer), possibly [by a factor of infinity](ai-alignment-wolfram-physics.html), is [suffering risks, or S-risks](https://en.wikipedia.org/wiki/Suffering_risks).\n\n\ni could see (though going by what i could see [is not a reliable apparatus](overcoming-narratives.html)) someone make an AI and, while trying to align it to human values, accidentally misalign it to something that happens to tile the universe with suffering humans. this would be an instance of S-risk.\n\n\nwhereas, an AI that merely wants to accomplish a relatively simple goal will probly just tile the universe with something simple that doesn't contain suffering persons; and given that [we're all probly quantum immortal](quantum-suicide.html), we just \"escape\" to the timeline where that didn't happen.\n\n\nconsidering this, a 99% chance of X-risk and a 1% chance of utopia is preferable to a 1% chance of S-risk and a 99% chance of utopia.
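\n\n\n(to make the comparison concrete, here is a minimal sketch with completely made-up utilities: utopia at +1, X-risk at 0, and S-risk at some huge negative number standing in for immensely-negative-possibly-infinite. none of these figures come from anywhere; only their ordering matters.)\n\n\n```\n// illustrative only: made-up utilities for the three terminal outcomes\nconst U_UTOPIA = 1;   // U-line\nconst U_XRISK = 0;    // X-line: everything is dead, nothing is suffering\nconst U_SRISK = -1e9; // S-line: stands in for immensely negative, possibly infinite\n\n// expected value of each gamble\nconst mostlyXrisk = 0.99 * U_XRISK + 0.01 * U_UTOPIA; // = 0.01\nconst mostlySrisk = 0.01 * U_SRISK + 0.99 * U_UTOPIA; // = -9999999.01\n\nconsole.log(mostlyXrisk > mostlySrisk); // true: the 99%-X-risk gamble wins by a huge margin\n```\n\n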
so, one thing we might want to do if we figure out superintelligence before we do alignment (which [seems pretty likely at this point](were-all-doomed.html); see also \"Zero percent\" on [this page](https://intelligence.org/2018/10/03/rocket-alignment/)), we might want to keep a ready-to-fire paperclip AI on standby and boot it up in case we start seeing S-risks on the horizon, just to terminate dangerous timelines before they evolve into permanent exponential hell.\n\n\nin fact, just to be sure, we might want to give many people the trigger, to press as soon as someone even *suggests* doing any kind of AI work that is not related to figuring out goddamn alignment.", "date_published": "2021-07-17T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "7b4e5466a435f42d5f5fb265ba082577", "title": "AI alignment and wolfram physics", "url": "https://carado.moe/ai-alignment-wolfram-physics.html", "source": "carado.moe", "source_type": "blog", "text": "AI alignment and wolfram physics\n--------------------------------\n\n\n[wolfram physics](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/) is a project by [stephen wolfram](https://www.youtube.com/watch?v=0bMYtEKjHs0) to model physics using something kind of like a cellular automaton made of vertices in a graph instead of cells on a grid.\n\n\nit's pretty interesting and there are insights in and around it that are of importance for the far future, and thus for [AI alignment](were-all-doomed.html).\n\n\nthe most notable is that wolfram thinks there's compute everywhere. the motion of the wind is doing compute, the motion of the seas is doing compute, the fabric of spacetime is doing compute, and even the state of heat death is still doing compute.\n\n\nthat last point notably means we might be able to embed ourselves into heat death and further, and thus get computed literally forever. this multiplies the importance of AI alignment by potentially literally infinity. i'm not quite sure how we are to handle this.\n\n\nsome of the compute may be doing things that are opaque to us; it might appear [homomorphically encrypted](https://en.wikipedia.org/wiki/Homomorphic_encryption). as we want (and expect) our superintelligence to spread everywhere to enforce values, we would hope civilizations living inside homomorphically encrypted spaces can be inspected; otherwise, nuking them altogether might be the only way to ensure that no [S-risk](https://en.wikipedia.org/wiki/Suffering_risks) is happening there.\n\n\nwolfram postulates that one might be able to hack into the fabric of spacetime; one of the mildest effects of this would be the ability to communicate (and thus, likely, move) faster than the speed of light (but probably still slower than some other hard limit). if you didn't think [AI boxing](https://en.wikipedia.org/wiki/AI_box) was hopeless enough as it is, hackable spacetime ought to convince you.\n\n\nfinally, there is, value wise, an immense amount of compute being wasted; even just [standard model particles](https://en.wikipedia.org/wiki/Standard_Model) live way above true elementary computation. 
if superintelligence is well-aligned, this provides us with a hard estimate as to how much computing power we can live on to enjoy value, and it's probably a very large amount; wolfram [talks about](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful#how-it-works) something like 1e400 vertices in our universe.", "date_published": "2021-07-16T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "f091c749856350d2aff6a4b417aaede7", "title": "universal complete", "url": "https://carado.moe/universal-complete.html", "source": "carado.moe", "source_type": "blog", "text": "universal complete\n------------------\n\n\nunder [a turing-complete model of computation](https://en.wikipedia.org/wiki/Model_of_computation), there are some initial-states or initial-states-and-rulesets which eventually contain an algorithm that iterates over all possible algorithms and runs them.\n\n\nin single-threaded models, it can do this by having an increasingly long list of algorithms that it runs by one step each; it's not an issue if each algorithm runs increasingly slowly, as long as it keeps running.\n\n\ni choose to call such initial-states[-and-rulesets] *Universal Complete*.\n\n\nthey contain all turing computation based universes (and thus each other, if indirectly); so, for example, if [Rule 30 with one alive cell](https://en.wikipedia.org/wiki/Rule_30) is Universal Complete, then it contains all computable universes (including ours).\n\n\nthis could be interesting because proving that property about some frameworks means that programming a particular algorithm starting from that initial-state[-and-ruleset] is just a matter of *locating* it.\n\n\nit could also be interesting because it might turn out that many things that *look* sufficiently chaotic (such as Rule 30 with one alive cell) are effectively universal complete, and so [Wolfram's quest](https://www.youtube.com/watch?v=0bMYtEKjHs0) for the rule that describes our universe [in his hypergraph-rewrite system](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/) might be reducible to \"whichever simplest initial-state-and-ruleset starts all algorithms\"; though his idea of *running every rule at every step* might kind of functionally do that.\n\n\n### appendix: a simple universal-complete program\n\n\nhere is a simple algorithm implementing this, iterating over the countable set of turing machines.\n\n\n\n```\nx ← simplest turing machine\nl ← empty list\nloop:\n for machine in l:\n update machine by one step of computation\n\n append x to l\n x ← next simplest turing machine after x\n\n```", "date_published": "2021-07-15T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "61e80ba5deb8a2cbc5a7783689511c97", "title": "estimating the amount of populated intelligence explosion timelines", "url": "https://carado.moe/estimating-populated-intelligence-explosions.html", "source": "carado.moe", "source_type": "blog", "text": "(edit 2021-07-18: this post is probly not very good, as there's some anthropic principle research out there and i haven't read any and just gone off thinking about it on my own.)\n\n\nestimating the amount of populated intelligence explosion timelines\n-------------------------------------------------------------------\n\n\nthe [imminent](were-all-doomed.html) [intelligence explosion](https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion) is
likely to [go wrong](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer).\n\n\nhow likely?\n\n\nif you imagine that you live pretty much at the cusp of such an event, you should expect as per the [anthropic principle](https://en.wikipedia.org/wiki/Anthropic_principle) that there are about as many observer-instants before you, as there are after you. (an observer-instant being an instant at which you have a chance of making observations about that fact; see [this](https://www.greaterwrong.com/posts/uSMa6Fj5nMgntpxfo/are-coincidences-clues-about-missed-disasters-it-depends-on) and notably Nick Bostrom's Self-Sampling Assumption)\n\n\ni've previously calculated that the future from now until heat death has room for roughly 10^200 human lifespans (of 80 years) (an estimation based on the number of particles in the observable universe, the amount of time until heat death, and the computational cost of running a human brain).\n\n\nthe past, on the other hand, holds about 10^11 human lifespans (most of them not full 80-year lifespans, but such details will get amortized by using orders of magnitude).\n\n\nif intelligence explosion is, as i believe, likely to result either in [total death](were-all-doomed.html) or in well-populated futures (whether good or [bad](https://en.wikipedia.org/wiki/Suffering_risks)), then the fact that i'm observing being right next to the event (in time) rather than observing being one of the (in well-populated timelines) countless observers to exist *after* the event, must be compensated by such well-populated timelines being particularly rare within the set of future possible timelines.\n\n\nhow rare? about 1 in (10^200 / 10^11), which is 1 in 10^189.\n\n\nfactors which may make this calculation wrong:\n\n\n* my 10^200 estimate might be wrong (for example: if each person comes to eat a *lot* of computation resources, then the number of future observers is drastically reduced).\n* the 10^11 estimate for the past might be wrong: what if there have been beings in earth's past smart enough to make this observation? it may seem unlikely, but if i am to encompass the immense amount of forms future observers might take, i should account for a wide variety of forms of past observers too.\n* because entropy increases, there are (possibly a lot) more future universe states than past universe states. accounting for these \"timeline splits\" in the number of future observers even more massively decreases the expected ratio of well-populated timeline-states, though i'm not sure by how much.", "date_published": "2021-07-09T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "7a6e321612255a43a0eafe7fe4a0055e", "title": "purposes for art", "url": "https://carado.moe/purposes-for-art.html", "source": "carado.moe", "source_type": "blog", "text": "purposes for art\n----------------\n\n\ni identify three main purposes people attribute to art; and while no objective metric can place one as ultimately more valid than the others, it is definitely to the third one that i ascribe the most strongly.\n\n\n* **artistic instrumentalism**: the purpose of art is to help achieve causes. art is ultimately propaganda or, to put it more charitably, is ultimately a way for society to spread and sort values.\n* **artistic hedonism**: the purpose of art is to please the person consuming it.
this one is the most straightforward and probably the most common purpose for art; people who believe this should have no issue consuming art created by AIs to be maximally enjoyable to them; in fact, they might be hedonists. see also: [wireheading](https://www.lesswrong.com/tag/wireheading).\n* **artistic culturalism**: the purpose of art is for persons to embody and communicate an artist's ideas. this could be seen as similar to instrumentalism, but it still values art for art's sake: it sees art as a medium for artists to express an artistic vision with a high degree of freedom, and generally communicate that vision to others. it places a strong focus on the artist, and opposes notions such as [death of the author](https://en.wikipedia.org/wiki/The_Death_of_the_Author); seeing them as not just part of the context within which a work is to be understood, but in fact the most important piece of such context.", "date_published": "2021-07-08T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "6a26fcc2e24b7dc7ac48ed2a05d73d0f", "title": "we're all doomed", "url": "https://carado.moe/were-all-doomed.html", "source": "carado.moe", "source_type": "blog", "text": "we're all doomed\n----------------\n\n\n[a major tech company is now explicitly invested in getting AI to write code](https://copilot.github.com/).\n\n\nthis is a major warning sign; a first step on the explicit path to [superintelligence](https://en.wikipedia.org/wiki/Superintelligence) [explosion](https://en.wikipedia.org/wiki/Technological_singularity#Intelligence_explosion), an event [already considered relatively likely](https://intelligence.org/faq/#imminent) which, [in the absence of sufficient AI alignment progress](https://intelligence.org/2018/10/03/rocket-alignment/), is overwhelmingly likely to [permanently end all life at least in the observable universe](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer).\n\n\nthe time scale probably lies somewhere between a few years and a few decades, but in any case it's starting to seem increasingly unlikely that [the only organization trying to actually figure out AI alignment](https://intelligence.org/) is gonna accomplish that in time.\n\n\nif you can, go and [help them out](https://intelligence.org/get-involved/), or at least [donate everything you can to them](https://intelligence.org/donate/).\n\n\nif you're currently working in AI development in any way, *please stop*.
whether anything on earth survives this century is gonna be a matter of whether AI alignment is figured out by the time we get enough AI development; by helping the latter, you're making it even more likely that it happens before the former.\n\n\non a gloomier note, if you have all the philosophical beliefs required to think it can work, you may want to start preparing to [abandon this timeline](quantum-suicide.html) if singularity starts happening and looks like it's not gonna go well.\n\n\nedit: see also: [are we in an AI overhang?](https://www.lesswrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang)", "date_published": "2021-06-29T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "593147205d6a2988f5486f3a2281bea0", "title": "disclosing subjectivity", "url": "https://carado.moe/disclosing-subjectivity.html", "source": "carado.moe", "source_type": "blog", "text": "disclosing subjectivity\n-----------------------\n\n\nwe all know that [when expressing opinions, subjectivity is implied](https://www.youtube.com/watch?v=Gu8u2SxarEE); however, i still feel like there are reasonable steps one can make to disclose the *degree* of subjectivity of claims; both for opinions and for factual claims.\n\n\nthe objective, when communicating honestly, is to make sure that the person being communicated with isn't misinformed about the shape of the world; including how much a given fact is commonly believed. most importantly, one must be [consequentialist](https://en.wikipedia.org/wiki/Consequentialism) about it: you should care about the actual outcome, not whether you have successfully played by some principles.\n\n\nfor example, if i were to say \"i think that china is the most populated country, and that some US intelligence agencies make big CPU manufacturers put backdoors in their CPUs\", even though i'm making two factual claims that i believe about the world, i might misinform someone about how widely believed that latter claim is, when it's in fact a relatively niche conspiracy theory, whereas the former is overwhelmingly accepted fact. something like \"now, personally, i also believe…\" might help make the recipient more aware that the claim isn't widely accepted, however they choose to go about utilizing that information.\n\n\none example that has occured to me a bunch, is when saying that i think \"consuming art is less about enjoying oneself, and more about receiving an artist's intended ideas\", i go to significant lengths to make clear that this is one interpretation of what art can be about. 
the importance here lies not (just) in highlighting that a position isn't consensus, but also that other positions might be *valid* and that the matter is very subjective: not only is what i like about art not universal, but i don't think i could even always claim that other people are \"wrong\" to have different interpretations of what art is about (though i could probably claim that sometimes).\n\n\nnote that this doesn't mean strongly subjective statements can't be disputed: even though having your own interpretation of art is valid, you might be lying or even wrong about what your own interpretation is.\n\n\nnote also that here an important point is that subjectivity vs objectivity, while for simple statements (like \"minecraft looks pretty\" or \"4 is an even number\") is relatively straightforward, for most real statements the degree of subjectivity is going to be [more nuanced than a simple binary](categories-of-knowledge.html).", "date_published": "2021-06-28T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "280186e65a0539863e6ddde1ff63cdae", "title": "classifying computational frameworks", "url": "https://carado.moe/classifying-computational-frameworks.html", "source": "carado.moe", "source_type": "blog", "text": "classifying computational frameworks\n------------------------------------\n\n\nhere is a classification of four large computational frameworks, according to four criteria:\n\n\n* **D**: Distributed; whether the framework allows for computation to happen in arbitrarily many places at once, or whether it has to follow a single thread of computation\n* **M**: Metaprogrammable; whether code in the framework is able to, at runtime, create new logic of the same class as itself; see also [degrees of runtime metaprogrammability](degrees-of-runtime-metaprogrammability.html)\n* **S**: Simple Steps; whether the process of going through a single step of computation (whether the system is deterministic or not) is simple, or requires a large amount of work\n* **A**: Arbitrary Structure; whether the framework is able to created arbitrarily nested pieces of information, or whether it's restricted to a constant amount of information density over a geometric area\n\n\n\n\n| framework | D | M | S | A |\n| --- | --- | --- | --- | --- |\n| [lisp](https://en.wikipedia.org/wiki/Lisp_%28programming_language%29)/[λ-calculus](https://en.wikipedia.org/wiki/Lambda_calculus)/[SKI calculus](https://en.wikipedia.org/wiki/SKI_combinator_calculus) | N | Y | Y | Y |\n| [turing machines](https://en.wikipedia.org/wiki/Turing_machine) | N | N | Y | N |\n| [graph rewriting](https://en.wikipedia.org/wiki/Hypergraph_grammar), like [wolfram's](https://www.wolframphysics.org/technical-introduction/basic-form-of-models/first-example-of-a-rule/) | Y | N | N | Y |\n| [cellular automata](https://en.wikipedia.org/wiki/Cellular_automaton) | Y | N | Y | N |", "date_published": "2021-06-24T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "03d204155f82876f69f945361e30f191", "title": "degrees of runtime metaprogrammability", "url": "https://carado.moe/degrees-of-runtime-metaprogrammability.html", "source": "carado.moe", "source_type": "blog", "text": "degrees of runtime metaprogrammability\n--------------------------------------\n\n\n\"runtime metaprogrammability\" describes the ability for a program to create new pieces of code as it is running, ideally of the same [class](https://en.wikipedia.org/wiki/First-class_citizen) as its initial code.\n\n\nvarious computational frameworks have 
various expectations as to how much of a given program's logic possesses runtime metaprogrammability. frameworks with bad metaprogrammability, when encoding new logic, are forced to embed interpreted sub-languages.\n\n\n* machine codes (x86, RISC-V, ARM etc…) have pretty good metaprogrammability: a program in x86 can just write bytes representing new x86 code to a buffer, and then jump into that code. note that simpler instruction sets (such as [RISC-V](https://en.wikipedia.org/wiki/RISC-V)) make it easier to produce new code.\n* [WASM](https://en.wikipedia.org/wiki/WebAssembly) has bad metaprogrammability: no WASM interpreter or compiler that i know of lets a function create a new function of the same class as those described in the original module. programs can create new WASM modules and, if they're lucky enough to be run in an environment that provides functions that allow the loading of new modules, can get linked with that new code this way, but this is neither ideal nor standard.\n* the [JVM](https://en.wikipedia.org/wiki/Java_virtual_machine) lets pieces of bytecode create new bytecode; while i don't think this is utilized in java, it's a notable feature of [clojure](https://en.wikipedia.org/wiki/Clojure) which lets it compile newly defined functions.\n* [turing machines](https://en.wikipedia.org/wiki/Turing_machine) have relatively bad metaprogrammability: most of the logic of a given turing machine is expected to reside in its ruleset, which is immutable. to embed new logic, a ruleset would need to embed an interpreter for some type of other code described in its ruleset.\n* [SKI calculus](https://en.wikipedia.org/wiki/SKI_combinator_calculus) and [λ-calculus](https://en.wikipedia.org/wiki/Lambda_calculus) probably have somewhat good metaprogrammability; though it's not exactly clear to what extent.
i believe new expressions will always be built out of skeletons of original expressions, but a seamless third party could probably move in and replace most expressions with newly \"compiled\" functions.\n* [cellular automata](https://en.wikipedia.org/wiki/Cellular_automaton) generally have fairly good metaprogrammability; rules like [rule 30](https://en.wikipedia.org/wiki/Rule_30) or [conway](https://en.wikipedia.org/wiki/Conway's_Game_of_Life) have simple enough rules that most of the logic of any complex system probably resides at the cell level rather than in the ruleset.\n* [graph rewriting](https://en.wikipedia.org/wiki/Graph_rewriting), notably [wolfram's hypergraph rewriting rules](https://www.wolframphysics.org/technical-introduction/basic-form-of-models/first-example-of-a-rule/), seems to offer fairly bad metaprogrammability; the ability for rules to produce new rules would require encoding most of the logic of a given system as graphs themselves with a relatively simple top-level actual set of hard rules, which seems difficult especially in the case of hypergraphs when rules are expected to be able to apply to an arbitrary number of arguments.", "date_published": "2021-06-24T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "7878e5b16802659cee25184a942b3d07", "title": "cm21, a pixel art editor", "url": "https://carado.moe/cm21.html", "source": "carado.moe", "source_type": "blog", "text": "cm21, a pixel art editor\n------------------------\n\n\n**cm21** is a pixel art editor I made, inspired by CharaMaker99, aka cm99.\n\n\nIts prominent feature is the ability to edit pieces of an image on their own, which is useful notably for spritesheets.\n\n\nThe program can be obtained for Linux and Windows on [itch.io](https://carado.itch.io/cm21); the source code is also available [on this site](cm21/cm21-source-code-2021-08-12.tar.gz).\n\n\n![](cm21/1.png)\n![](cm21/2.png)\n\n\n(screenshots feature images from [the EasyRPG RTP](https://github.com/EasyRPG/RTP/))", "date_published": "2021-06-19T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "0e0cfbd7fdd014c3d36a412164acbb2b", "title": "categories of knowledge representation", "url": "https://carado.moe/categories-of-knowledge.html", "source": "carado.moe", "source_type": "blog", "text": "categories of knowledge representation\n--------------------------------------\n\n\ni broadly recognize six ways in which information systems (including persons) represent knowledge.\n\n\n![](categories-of-knowledge.svg)\n\n\n**type 1: monistic.** monism is the absence of categorization of knowledge about something. in a given field, no categories are given, and criteria of the one present thing are assumed to be fundamentally universal. [the book \"Gödel, Escher, Bach\"](https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach) argues that zen buddhism prescribes favoring this type of knowledge, or absence thereof. this corresponds to the unit type with its single possible value. given this correspondence, it may or may not be useful to talk of a \"type 0\" corresponding to the [bottom type](https://en.wikipedia.org/wiki/Bottom_type) and its absence of values, but here i am deciding against it, as informational structures with a bottom value by definition cannot exist.\n\n\n**type 2: multiplicity.** a discrete collection of loose alternative items is recognized, with no particular relation between any two items.
this corresponds to booleans or enums.\n\n\n**type 3: spectrum.** a continuous (or functionally continuous) range of items is recognized, or items are arranged linearly according to some scalar criterion. this corresponds to number ranges and other [total orders](https://en.wikipedia.org/wiki/Total_order).\n\n\n**type 4: multidimensional.** items are no longer mapped along just one characteristic, but along a collection of characteristics. this could correspond to vectors in n-space, with a component on each dimension.\n\n\n**type 5: labelled graph.** items are no longer related by a fixed number of criteria, but instead can be related by an arbitrary set of binary relations; edges between two nodes can be directed and labelled, but they are always either from one node to another or from one node to itself. a subset of labelled graphs is trees (which formats JSON, XML, or S-expressions are descriptions of), but there are [many other types of graphs](https://en.wikipedia.org/wiki/Graph_%28discrete_mathematics%29#Types_of_graphs).\n\n\n**type 6: relational.** relations between multiple items are no longer strictly binary, but can apply to any discrete number of items. i believe this to be the most general and powerful representation of knowledge; subsets of it can meaningfully encode any other type of knowledge representation. this is pretty much equivalent to [wolfram's directed hypergraphs](https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/) (but not [wikipedia-defined directed hypergraphs](https://en.wikipedia.org/wiki/Hypergraph)).\n\n\nthis post is of *type 3: spectrum*: i propose a discrete collection of possibilities, ordered by generality. i believe each type to be strictly more general than the ones before it.", "date_published": "2021-06-17T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "caa0e807bf619be8d7ba1150c56e94ba", "title": "refusing to answer ≠ giving a negative answer", "url": "https://carado.moe/refusing-negative.html", "source": "carado.moe", "source_type": "blog", "text": "refusing to answer ≠ giving a negative answer\n---------------------------------------------\n\n\na lot of people like to answer, when faced with a hard-to-determine question such as \"is a chair happy\" or [\"does a dog have buddha-nature\"](https://en.wikipedia.org/wiki/Mu_%28negative%29), with the negative.\n\n\ni think this is *very fundamentally* erroneous. the correct answer is that the question itself is erroneous; to answer it would be to erroneously accept the question, like in the classic case of [\"have you stopped beating your wife?\"](https://en.wikipedia.org/wiki/Loaded_question).\n\n\nthe answer to \"is a chair happy?\" is *not* \"no\"; that would imply that the question of whether it is or not makes sense, which it doesn't. the answer is \"the question is erroneous\", or [\"mu\"](https://en.wikipedia.org/wiki/Mu_%28negative%29#%22Unasking%22_the_question).\n\n\nthis comes up pretty often in philosophical discussions i have with friends. one reason why this is important is that i like statements to be meaningful, and it's very useful for stating that something does *not* have a characteristic, to mean something different than not stating anything.
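\n\n\n(a small sketch of that distinction in code; the three-valued answer type and the likelihood ratio are arbitrary choices of mine, not anything standard:)\n\n\n```\n// refusing the question (mu) is not the same thing as answering in the negative\ntype Answer = 'yes' | 'no' | 'mu';\n\n// updating a belief (a probability) on an answer\nfunction update(prior: number, answer: Answer, likelihoodRatio = 4): number {\n  if (answer === 'mu') return prior; // the question was unasked: no evidence, no update\n  const odds = prior / (1 - prior);\n  const posteriorOdds = answer === 'yes' ? odds * likelihoodRatio : odds / likelihoodRatio;\n  return posteriorOdds / (1 + posteriorOdds);\n}\n\nconsole.log(update(0.5, 'no')); // 0.2: a negative answer moves the belief\nconsole.log(update(0.5, 'mu')); // 0.5: refusing to answer leaves it untouched\n```\n\n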
this also kind of ties into bayesianism: knowledge of a negative is not absence of knowledge, or [updating your belief](https://www.readthesequences.com/What-Do-I-Mean-By-Rationality) towards negation wouldn't do anything.", "date_published": "2021-06-15T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c7bbf94fd260e4945f1df6af7da7a36d", "title": "the many faces of chaos magick", "url": "https://carado.moe/faces-chaos-magick.html", "source": "carado.moe", "source_type": "blog", "text": "the many faces of chaos magick\n------------------------------\n\n\n[chaos magick](https://en.wikipedia.org/wiki/Chaos_magic) is the notion that people can \"determine\" (mostly in the causal sense, but also in the informational sense) aspects of reality by sufficiently convincing themselves that something is or will be the case.\n\n\nthe practicals are a bit more complicated (one is encouraged, for example, to enter a state of \"temporary belief\" in a diety and delude yourself into thinking that *that diety* will accomplish the thing, because that's a more efficient way to trick the brain into thinking that it will happen), and the mechanism by which this is to happen is [not clear](http://www.chaosmatrix.org/library/chaos/texts/model.html); but nonetheless the trope of determination-powered magic can be found in many works.\n\n\nsome examples follow.\n\n\n### anime\n\n\na common trope in fiction, but most notably anime and other japanese pop culture, is determination-powered magic. in fact, it's arguably a central theme of many anime, and in other works can retroactively justify more reasonably a lot of \"power of friendship/love\" tropes.\n\n\n### freedom\n\n\n[in a previous post](defining-freedom.html), i define the \"freedom\" i want people to have to be one of what people *decide* to do, as opposed to what they *want* or *might choose*.\n\n\na ramification of this is that in such a world people's accomplishment would tend to scale to their determination; where what you influence is a function of how much you are truly deciding to do it. thus, if there is no way for chaos magick to be real at the moment ([which appears to be the case](https://www.reddit.com/r/askscience/comments/pbq9a/is_neural_activity_affected_by_quantum/)), aligning a superintelligence to value freedom to do what one decides would de facto implement a world in which a form of determination-magick would be real.\n\n\nas per the previous point, aligned freedom-valuing singularity could be seen as *making anime real*, just more of in a profound philosophical structure of reality way rather than in an aesthetic way.\n\n\n### dark arts of rationality\n\n\n[\"dark arts of rationality\"](https://www.lesswrong.com/posts/4DBBQkEQvNEWafkek/dark-arts-of-rationality) is a post about manipulating one's brain in unsafe ways to produce outcomes, notably in one's own behavior. 
this could be seen as more refined and rationally thought-out forms of [psychological-model](http://www.chaosmatrix.org/library/chaos/texts/model.html#psych) magic.", "date_published": "2021-06-15T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "584af7cde339b77ae775bf6d25e2edb5", "title": "the systematic absence of libertarian thought", "url": "https://carado.moe/systematic-unlibertarianism.html", "source": "carado.moe", "source_type": "blog", "text": "the systematic absence of libertarian thought\n---------------------------------------------\n\n\nto me, [libertarianism](https://en.wikipedia.org/wiki/Libertarianism) aims to give people [freedom](defining-freedom.html), to [give them self-determination over their life](core-vals-exist-selfdet.html).\n\n\nit is interesting how that type of thought is not just, in politics, overwhelmingly absent outside of america, and even not that common in america; but how people don't even consider it.\n\n\nwhen talking to most people, they seem to easily generate opinions on what people should and shouldn't do, and often think it common-sense to impose those using the force of the state. it doesn't cross their mind to give individuals autonomy over their decisions and refrain from imposing on them.", "date_published": "2021-06-15T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "fb1fc47d4995d3eec0dc4628000de6eb", "title": "my answer to the fermi paradox", "url": "https://carado.moe/fermi-paradox.html", "source": "carado.moe", "source_type": "blog", "text": "my answer to the fermi paradox\n------------------------------\n\n\nthe [fermi paradox](https://en.wikipedia.org/wiki/Fermi_paradox) asks, if aliens are supposedly so statistically prevalent, why we haven't received any radio signals from them.\n\n\nhere is my independently-developed (though probly not original) answer:\n\n\nstatistically, it seems reasonable that civilizations would [accidentally invent unaligned superintelligence](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) not long after inventing radio signals (in our case, a couple centuries). in order to perceive those signals, you would need to exist *after* your planet receives those signals, but *before* your planet receives that unaligned superintelligence's [expanding sphere of death](estimating-populated-intelligence-explosions.html), which might very well travel at [pretty much](word-report-3.html) the speed of light.\n\n\nthus, given the low probability, it is not surprising that we haven't perceived those; for any given alien civilization, in a given timeline, we either haven't received their radio signals, or have already been killed by them.
seeing as we're alive, this timeline must be one of the former.", "date_published": "2021-06-15T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "a4d98582fc225b946ab6fd761133c049", "title": "the persistent data structure argument against linear consciousness", "url": "https://carado.moe/persistent-data-structures-consciousness.html", "source": "carado.moe", "source_type": "blog", "text": "the persistent data structure argument against linear consciousness\n-------------------------------------------------------------------\n\n\npeople have the belief that they live a continuous, linear stream of consciousness (whatever that means).\n\n\ni've [made arguments before](quantum-suicide.html) as to why this is erroneous; but here is another interesting argument that undoes the seeming coherence of such a statement.\n\n\nthink of reality as a computational process, generating frames one after another, possibly splitting into timelines.\n\n\nwhere is your consciousness? one might be tempted to answer that it's the set of bytes representing the state of the brain. if i split the world into two timelines, which one is the \"fake copy\" and which one is the \"continuous original\"? one might answer that the copy is whichever new data structure has new bytes copied to it, and that the original is whichever presence in memory hasn't been moved; the *same* bytes, supposedly stored on the *same* hardware transistors.\n\n\nif i were to split timelines by creating two copies and destroying the original, one might answer that this is akin to killing the original and creating two \"fakes copies\".\n\n\nhowever, there exist [persistent data structures](https://en.wikipedia.org/wiki/Persistent_data_structure), which represent new sets of data as added constructions on top of an original one. this is a perfectly reasonable way to do computation, and one would probably agree that if someone is only ever running a single timeline, people have continuous consciousness.\n\n\nif i were to run a world simulation using persistent data structures and generate a timeline split, which one is the \"continuous person\"? just like with continuous single timeline computation, both new timelines are merely new data structures with their own set of whichever data is different, and pointers back to whichever sets of data are unchanged.\n\n\nthe least unreasonable choice someone who believes in linear streams of consciousness could make is that somehow persistent data structures are *not* a valid form of universe computation; that a computation ought to be run by reusing the same memory locations. surely the arbitraryness of such a claim despite its functional equivalence to persistent data structures for single-timeline computation demonstrates well enough how the notion of linear streams of consciousness doesn't make sense.", "date_published": "2021-06-15T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "b03d8c2fdd14586bc6c1063b3c23f656", "title": "I'm creating a world simulation video game", "url": "https://carado.moe/game.html", "source": "carado.moe", "source_type": "blog", "text": "I'm creating a world simulation video game\n------------------------------------------\n\n\n~~i opened a patreon~~ (there is no longer a patreon associated with my game, or in fact a game being made. 
see [here](life-refocus.html).)\n\n\nit can be considered to fund primarily my game, which is my current full-time activity; and secondarily this site.\n\n\nconsidering this, i am making the present post to describe what my game is about.\n\n\n### World Simulation\n\n\nmy game intends to be a coherent simulation of a world, playable as a video game.\n\n\nthis means that i intend the game to have a full ecosystem with organisms subject to natural selection; NPC cultures, settlements, and societies; and a solid foundation to support all that while using computational resources as efficiently as possible.\n\n\nthe game will either be a decentralized collection of singleplayer games and dedicated servers, a single giant MMO world, or both. either way, i am designing it from the ground up to support the distributed computation of large worlds on many servers, for scalability.\n\n\nthe look and moment-to-moment gameplay may resemble Minecraft, but the world will probably be something more akin to Dwarf Fortress or [Eve Online](https://www.youtube.com/watch?v=nvK8fua6O64); though not quite like either.\n\n\n### Feasability\n\n\nwhile the scope of the game might seem large for a one-person project, the actual workings of the engine should not be that complicated. unlike Dwarf Fortress, i am designing the world at a relatively low level, such that a lot of the game's content should be able to generate itself without too much input from my part; community contributions building on the game's foundations may also play a role in providing richer experiences.\n\n\n### Estimated Time of Arrival\n\n\ni have been working on a prototype since 2020, and am making steady progress on it. that said, i have no clear estimate for alpha or beta releases.\n\n\ni may or may not post updates regarding the game's development on this site.\n\n\n### What the Patreon is for\n\n\nthanks to my relatively low amount of life expenses and my country's generous welfare, i'm able to live while on-and-off working part-time. if i meet the patreon's 250$/month goal, i won't have to do that part-time work.\n\n\nany contribution to that end is greatly appreciated.", "date_published": "2021-06-04T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c3c8d31918a5350f301bcefd50ef746f", "title": "Overcoming Narratives", "url": "https://carado.moe/overcoming-narratives.html", "source": "carado.moe", "source_type": "blog", "text": "Overcoming Narratives\n---------------------\n\n\nwho drinks more alcohol, liberals or conservatives ?\n\n\nif you have an existing opinion on both of those (such as: drinking alcohol is bad, and conservatives are better than liberals), then you're likely to just come to a default assumption (such as: liberals drink more).\n\n\nif you want to be more reasonable about it, you may try to think of explanations. \"maybe liberals drink more because this and this! 
that sounds like it makes sense.\" but it's pretty likely you could come up with an explanation for the opposite case (conservatives drinking more) just as easily!\n\n\nand just like any piece of evidence: if it's as easy to come by in both cases, then it's not actually telling you anything.\n\n\nthe only thing you get by coming up with potential narratives that *sound like they make sense* but aren't evidenced by anything in the world, is to pre-entrench yourself in a position rather than be open to what may actually be the shape of reality; [if there even is a shape at all to the topic at hand](https://en.wikipedia.org/wiki/Computational_irreducibility).\n\n\nof course, just assuming to be true the narratives that *other people* tell you is worse, but even coming up with your own narratives is generally a bad idea.\n\n\nin general, avoid narratives, unless evidenced; and when there's evidence, only make the minimum amount of assumptions from that narrative.", "date_published": "2021-06-03T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "57827af2513dcdc0d53befe37c46d19d", "title": "Saving The Client-Side Web: just WASM and the DOM", "url": "https://carado.moe/saving-the-web.html", "source": "carado.moe", "source_type": "blog", "text": "Saving The Client-Side Web: just WASM and the DOM\n-------------------------------------------------\n\n\nthe stack of technologies involved in the client side of the web is a huge mess, and i don't like most of it. it requires browsers to implement basically an entire operating system.\n\n\ni propose here a vision that can vastly simplify the web — especially client-side — allowing clients (and possibly, to an extent, servers) to function much more easily, and all that with mostly just existing tech.\n\n\nhere's the idea:\n\n\nwe keep [WebAssembly](https://en.wikipedia.org/wiki/WebAssembly) and we keep [the DOM](https://en.wikipedia.org/wiki/Document_Object_Model) client-side. we discard the textual format that is HTML, and we *definitely* discard JavaScript. CSS can probly stay.\n\n\nWebAssembly is quite easy to run in a sandbox; it's much easier to make a WASM VM than a JS VM, it leads to much more optimizable code (though it [could be better](http://troubles.md/posts/why-do-we-need-the-relooper-algorithm-again/)), and it's easily sandboxed (such that, for example, one WASM can run another WASM with tight control over what calls that sub-instance has access to).\n\n\nthe HTTP server (or [whatever content delively system](https://ipfs.io/)) sends the client a single WASM file, with maybe some parameter data, whose \"start\" method is called. this method will build up the DOM (using API calls, *not* textual HTML), request files (images, etc. in whatever order it wants), and so on. it can depend on other WASMs as dependencies, just as many pages with JS currently import jquery or other JS frameworks.\n\n\nthere's no need for clients to know anything about HTML. if you want to serve static HTML, it's probly relatively easy to make a web server that sends a static WASM along with some HTML, and that WASM can parse the HTML and produce the corresponding DOM with API calls; and the browser doesn't have to have any special code for that.\n\n\nif you want easy javascript (or any other scripting language!) you can just send a static WASM that is a javascript interpreter compiled into WASM, and have that interpret attached JS. 
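\n\n\n(to make the DOM-via-API-calls idea concrete, here is a rough sketch of what the browser-side glue could look like; the dom.* import names are hypothetical, not an existing standard:)\n\n\n```\n// the page ships one WASM module whose start function builds the page\n// through host calls; no HTML parsing, no JS VM\nconst nodes: (Element | Text)[] = [document.body]; // handle table: index -> DOM node\nlet memory: WebAssembly.Memory;\n\nconst readString = (ptr: number, len: number): string =>\n  new TextDecoder().decode(new Uint8Array(memory.buffer, ptr, len));\n\nconst imports = {\n  dom: {\n    // create an element of the given tag name and return a handle to it\n    create_element: (ptr: number, len: number): number =>\n      nodes.push(document.createElement(readString(ptr, len))) - 1,\n    set_text: (handle: number, ptr: number, len: number): void => {\n      nodes[handle].textContent = readString(ptr, len);\n    },\n    append_child: (parent: number, child: number): void => {\n      nodes[parent].appendChild(nodes[child]);\n    },\n  },\n};\n\nasync function main() {\n  const { instance } = await WebAssembly.instantiateStreaming(fetch('/page.wasm'), imports);\n  memory = instance.exports.memory as WebAssembly.Memory;\n  (instance.exports.start as () => void)(); // the module builds the DOM via the calls above\n}\nmain();\n```\n\n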
there's no need for the browser to know anything about JS, let alone embed a JS VM.\n\n\nplus, you get all the advantages of WASM: easy implementation, portability, performance, and easy sandboxing (even nested sandboxing!).\n\n\nas a bonus: if you ship browsers with a WASM handling all the usual JS and HTML interpreting, then you can probly make this whole stack retrocompatible with the old web.", "date_published": "2021-05-15T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "2206df5aaa959906a7b393e18010e403", "title": "The Unsatisfactorily Far Reach Of Property", "url": "https://carado.moe/unsatisfactory-property.html", "source": "carado.moe", "source_type": "blog", "text": "The Unsatisfactorily Far Reach Of Property\n------------------------------------------\n\n\ni'm a fervent [intellectual](where-next-piracy.html) [property](cultural-and-memetic-hygiene.html) [abolitionist](cc_-1.html).\n\n\none of the reasons IP seems repulsively bad is that it gives some people control over the activity of others no matter how far and unrelated away they are. patents and trademarks even can (and often do) restrict the activity of people who *never even heard of the original piece of IP*; but [copyright is joining in on that too](https://www.youtube.com/watch?v=Tpi4d3YM79Q).\n\n\nhowever, how specific to IP is this? how do i feel about, say, someone owning a rock one million light-years away, and through that, legally restricting the ability of people around that rock to interact with it?\n\n\non another hand, recently i've been really trying to come up with alternatives to private property on which a [libertarian world](https://en.wikipedia.org/wiki/Right-libertarianism) (with voluntary association and voluntary societies) can be built, but to no avail. until i made this post, that is.\n\n\nmaybe the key is having some notion of *locality* apply to ownership: you entirely own your mind, you very strongly own your body, you strongly own your house, you mildly own your garden? this sure seems a lot more flexible than the classic libertarian hard dichotomy of \"either this is my property and i get to murder anyone who even touches it, or it's not and i have no say whatsoever\", and perhaps it can create a better sense of what is meant to be common property.\n\n\nthough, of course, as soon as you have a flexible limit, [molochian defectors nibble at it until you have nothing](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/). 
this is the reason that, when we can have hard limits (such as when we're all cyberpeople with a very discrete and clear boundary of what we have [sovereignty](economic-compass.html) over), they hopefully let us survive even maximally molochian-defecting actors.\n\n\nso, for the moment i maintain my previous positions, but IP bringing in questions of locality is a novel for me way of thinking about property rights that i'll be considering.", "date_published": "2021-05-02T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "1b4098e0911b3794336a5a0c39b187c8", "title": "Video Games Needs A Platform", "url": "https://carado.moe/video-games-needs-a-platform.html", "source": "carado.moe", "source_type": "blog", "text": "Video Games Needs A Platform\n----------------------------\n\n\ni feel like, for video games to feel legitimate as an artform, it needs a platform that has the following properties:\n\n\n* open source, rather than controlled by one or even several companies\n* standard, such that every artist knows exactly what capabilities they're working with\n* accessible \"at various levels\" — an accessible graphics platform to people who know computer graphics, an accessible low-level language to people who know low-level programming, etc\n\n\nbecause gaming tends to surf on the very edge of technological innovation — and especially so in hardware, this is a tall order.\n\n\nnevertheless, some things have converged towards this goal:\n\n\n* emulators, while running a proprietary machine, are largely open source, and provide a very clear set of expectations for what can be ran on them.\n* [PICO-8](https://www.lexaloffle.com/pico-8.php), while proprietary, is a fairly accessible (and minimalist) high-level game playing and development(!) platform.\n* RPG Maker [2000](https://en.wikipedia.org/wiki/RPG_Maker_2000) and [2003](https://en.wikipedia.org/wiki/RPG_Maker_2003) was used by many independent gamedevs to make many games, and it's a platform with a fairly well established set of capabilities — as for the open source aspect, the [EasyRPG](https://easyrpg.org/) project implements a player for RPG Maker 200X games which [already supports a lot of them](https://community.easyrpg.org/t/compatibility-list/283), and intends to eventually offer a full replacement for the actual RPG Maker software.\n* [WASM](https://en.wikipedia.org/wiki/WebAssembly) and [RISC-V](https://en.wikipedia.org/wiki/RISC-V) are both open-source and low-level languages which higher-level languages and tools can target, with a currently fairly vibrant ecosystem. 
they accomplish this at different levels — WASM aims to be a bytecode meant to be JIT-compiled into machine code, while RISC-V aims to be a hardware-implemented machine code.\n* the [DualShock](https://en.wikipedia.org/wiki/DualShock) layout for gamepads, and its variant the [Xbox 360 controller](https://en.wikipedia.org/wiki/Xbox_360_controller) layout, have become very standard, to the point where almost all games made to be played with something other than mouse and keyboard target those two types of gamepads instead (often under the name \"Xbox 360 compatible controller\" or even just \"Xbox 360 controller\").\n* for people who are fine with low resolutions, a fixed 640×360 resolution (that i use for instance in [kolsitan](kolsitan.html)) is great to work with: it upscales neatly to common resolutions like 1280×720 (at 2×), 1920×1080 (at 3×), 2560×1440 (at 4×), 4K (at 6×), and fits neatly enough within the common laptop resolution of 1366×768 by scaling to 2× and having (small enough) borders. if that fixed low resolution isn't enough, having the resolution variable but always a multiple of 640×360 may be enough of a guarantee.\n* the OpenAL API and [its open-source OpenAL implementation](https://en.wikipedia.org/wiki/OpenAL#Implementations) is a relatively common interface for audio, though maybe some lower-level memory-mapped buffers would work better; i don't know audio tech much, so i'll refrain from saying more.\n* graphics interface is probably the hardest here; higher-level APIs like [WebGPU](https://en.wikipedia.org/wiki/WebGPU) or [Vulkan](https://en.wikipedia.org/wiki/Vulkan_%28API%29) could work, though the talk i've recently seen of [NVidia](https://en.wikipedia.org/wiki/RISC-V#In_development) [and](https://www.eetimes.com/rv64x-a-free-open-source-gpu-for-risc-v/) [others](https://libre-soc.org/resources/#index17h1) considering building GPUs on RISC-V might make graphics (and GPGPU-style parallel compute) efforts accessible at a lower level, bringing us closer to [Casey Muratori](https://www.youtube.com/channel/UCaTznQhurW5AaiYPbhEA-KA)'s dream of [lower-level, driverless GPU interfaces](https://www.youtube.com/watch?v=kZRE7HIO3vk).\n* alternatively, if we don't mind a fairly high-level interface and don't go beyond particularly enhanced 2D graphics, sticking to the SDL2 might be sufficient for graphics, audio, and perhaps input.\n* standardizing the amount of hardware performance available is a difficult thing to plan with; even if we were to somehow present a very high level graphics API that was able to scale its level of detail to the amount of rendering performance available (thus not impacting the developer's assumptions about the platform), varying levels of available performance could still have massive effects on games whose computation power is largely used on game logic, like Dwarf Fortress.\n\n\nit's still pretty early to see if many of these technologies can be relied on for this particular effort, but i think these elements can give us the clues to work towards something like [Brian Moriarty](http://ludix.com/moriarty/index.html)'s [Perlenspiel for video games](https://www.gdcvault.com/play/1015620/Lehr-und-Kunst-mit), but in a form that is able to compete with more performance-intensive video games.", "date_published": "2021-05-02T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "16e08415242faa6ce378927f96fa63d7", "title": "Plausible Quantum Suicide", "url": "https://carado.moe/quantum-suicide.html", "source": "carado.moe", "source_type": 
"blog", "text": "**DISCLAIMER: the idea described here stands or tenuous philosophysical ground and should generally *not* be considered worth the risk attempting because it may be wrong; in addition, this plan should *not* be utilized to retroactively justify depression-based suicide — retroactive justification is erroneous; if you are feeling suicidal, contact [your local suicide crisis line](https://en.wikipedia.org/wiki/List_of_suicide_crisis_lines).**\n\n\nPlausible Quantum Suicide\n-------------------------\n\n\nin this post, i make a collection of arguments and follow them to what seems to me like what should be their conclusion. i don't have strong confidence in every argument, but i'd consider using this plan worth it to avoid sufficiently bad scenarios, such as a singularity gone wrong (which it probably will).\n\n\n### 1. The No Obtainable Evidence Argument For Materialism\n\n\nby materialism i mean here something maybe closer to [physicalism](https://en.wikipedia.org/wiki/Physicalism), but maybe even a stronger version of it:\n\n\nthere is no special soul that people have, [you are your information system](you-are-your-information-system.html).\n\n\ni make an other strong claim: time isn't particularly \"moving\" in any metaphysical sense, there is no \"special present time\". the universe can be seen as a graph of probabilistically connected states, and a random walk through those states matches the notion of entropy pretty well (which can be seen as defining the direction of time, because we happen to have memories of universe states with generally lower entropy), but that's a local notion.\n\n\nthe *illusion* that the present is particularly present, that we have a particular soul, or even that morality/ethics is in some sense objective, stems from the fact that we *inhabit our brain's model*: we don't get to see our brain from the outside as modeling its environment, we live right inside it, and we don't spawn with a clear distinction between normative ideas (morality/ethics) and descriptive ideas (statements of fact about the world).\n\n\nbut those illusions *must* be wrong, and here is the argument: as far as we can tell, there is no possible way for a brain to obtain evidence that his present time is particularly real; therefore, it must be erroneous for any brain to generate rationally the idea that its present is particularly real. same goes for having what i call a \"read-only soul\" that some people believe in (a special observer thing that observes a person's mind state from outside the material universe, but cannot causate anything upon it). see also [these](https://www.readthesequences.com/Zombies-Zombies) [three](https://www.readthesequences.com/Zombie-Responses) [posts](https://www.readthesequences.com/The-Generalized-Anti-Zombie-Principle).\n\n\n### 2. 
Limiting Real Universes\n\n\nmy post [\"Limiting Real Universes\"](limiting-real-universes.html) isn't that good, so i'll try to explain it more clearly here:\n\n\nif for some reason all possible states of our universe were equally real, then you should expect to observe widely anomalous phenomena around you, because most randomly picked states our universe can be in don't have to be coherent.\n\n\nbut the fact that we seem to observe a continuously very coherent universe tells us that there must be some sense in which coherent universe states, those that stem from a continuous history following an increasing entropy timeline, are particularly more real.\n\n\nit's not that your magical soul has been blessed with inhabiting coherent universe states: as explained in argument 1, you shouldn't have any reason to think you have such a thing.\n\n\nit's not that [you can only exist to observe universe states that are coherent, because you wouldn't exist in incoherent ones](https://en.wikipedia.org/wiki/Anthropic_argument): there are still way more possible universe states where everything is incoherent except your brain, than possible universe states where everything is coherent including your brain. for any amount of state of stuff you require to say you can exist to observe the world, the rest of the universe should still generally seem incoherent if all possible universe states are equally real.\n\n\nit's not that you have been following your own special arrow of time: even though i debunk that you should even think this makes sense in argument 1, another reason is that, even if some of your brain states have a past-arrow-of-time and not others, there's no reason for you to think you're one of the former. if all possible universe states were equally real, you'd more likely be a version of your brain that *thinks* it has a past history but doesn't, than one that does.\n\n\n### 3. Many-Worlds Is True\n\n\n[Eliezer Yudkowsky makes a good argument](https://www.readthesequences.com/If-Many-Worlds-Had-Come-First) that we should currently believe in the many-worlds [interpretation of quantum mechanics](https://en.wikipedia.org/wiki/Interpretations_of_quantum_mechanics); but even if that turned out wrong, [Max Tegmark makes another good argument](https://space.mit.edu/home/tegmark/crazy.html) that even just with a single history universe, all possible initial states of the universe are represented each in an infinite amount of instances by just being variations of initial conditions and random quantum determinations at different places of the infinite universe.\n\n\nwhat matters here is that basically one should expect every reasonably possible outcome to be a real instance of universe that exists somewhere. because of argument 2, some possibilities are particularly real, and because of argument 3 (this one), that set or fuzzy set of coherent possibilities should be widely instanced: at each possible fork (they're not *really* forks, but that's a good enough analogy from our point of view), every plausible outcome is realized as a real or fairly real universe.\n\n\n### 4. 
Quantum Suicide Works\n\n\nif the previous 3 arguments stand, then a more general version [quantum suicide](https://en.wikipedia.org/wiki/Quantum_suicide_and_immortality) should be achievable: by dying instantly in one timeline, there is no version of you in that timeline able to experience it, and the only remaining future you's able to experience anything are the you's in other timelines.\n\n\nbecause of argument 1, we know that saving a backup of your brain, and then later dying and restoring another copy of yourself from backup, is equivalent to just losing memories of the time after the backup: it's unfortunate that that time and those memories were \"lost\", but it's not a big deal, you can just keep going.\n\n\ngiven that, even non-instantaneous, after-the-event suicide works: if you commit yourself to committing suicide in all timelines where an event goes wrong, then the only future you's able to experience any time after that suicide will be the ones in the timelines in which that event went well (or at least in which you think it did); you lose a bit of time and memories from those timelines in which you didn't kill yourself *literally instantly* after the thing went wrong, but it's just equivalent to a restoration from backup: the backups are automatically saved by the universe as forks of that previous universe state before the event's outcome was determined.\n\n\n### ramifications\n\n\nif this is true, then every person is strongly empowered: by committing themselves to committing suicide in every timeline in which even the slightest thing goes wrong, they are able to restrict the set of instances of them purely to timelines in which everything goes the way they want.\n\n\nbut, it also creates a problem if the practice becomes widespread: every person will end up observing a timeline in which increasingly greater amounts of people who they don't particularly care about, have committed suicide to go to other timelines. if i play the lottery and commit suicide if i lose, then you have as many timelines as players, each with 1 alive lottery winner, and all the others players having committed suicide. even if you don't care about living in such a world, economics cares: pre-automation, you *want* other people in your society to keep living so they can help create together the value that you can enjoy.\n\n\nyou can choose to commit suicide in all timelines in which too many *other* people also have committed suicide, in an acausally-collaborative effort to search for a timeline in which everyone is happy; but if no such timeline exists, then everyone will just have *truly* committed suicide out of existence.\n\n\npre-automation, this creates a [coordination problem](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/), where each person wants to be able to commit suicide, but doesn't want other people to be able to. there is much ethical and political discourse to be had on the right to commit suicide; i generally lean on the libertarian side of things, but if quantum suicide becomes enough of a problem pre-automation that society looks like it's not gonna be able to get to post-automation, then we might need to consider at least disincentivizing it somehow.\n\n\npost-automation, there is still a problem for people who want to live in a world which has other people in it, but the problem is much milder. 
it might be bringing the [end of the global era](global-era.html) even earlier than would have happened otherwise, but that's not necessarily *that* big of a deal, and there's an option for people who want to inhabit a more densely populated timeline: just lower your standard for non-population-based outcomes, such that you commit suicide less often and thus exist in more timelines. if many people do this, they should be able to find each other in many densely populated timelines.\n\n\nthis *does* explain the anthropic argument of, \"if things go well in the future and the population booms, why are we happening to experience a particularly early age of human existence?\"; other than the extinction of able-to-observe beings, this can be explained by able-to-observe beings just become really trigger-happy about quantum suicide, such that each civilization of able-to-observe beings' \"observedspace\" is condensed to their pre-finding-out-about-quantum-suicide; their population after that is much more sparsely distributed across timelines, even without extinction events.\n\n\nas for me, i don't intend to start committing quantum suicide any time soon. i don't have strong enough confidence in the arguments posted here to take the risk of actually permanently dying. but it is definitely a possibility i'll consider, *especially* as we get closer to the singularity happening, and the existential risks that it poses.", "date_published": "2021-04-27T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "1efc74dcf4db1f578cc7bb59cf24f4b5", "title": "CC_ -1", "url": "https://carado.moe/cc_-1.html", "source": "carado.moe", "source_type": "blog", "text": "CC\\_ -1\n-------\n\n\na known problem with [the CC0 license](https://en.wikipedia.org/wiki/Creative_Commons_license#Zero_/_public_domain) is its [infamous patent clause](https://opensource.stackexchange.com/questions/133/how-could-using-code-released-under-cc0-infringe-on-the-authors-patents), which says:\n\n\n\n> No trademark or patent rights held by Affirmer are waived, abandoned, surrendered, licensed or otherwise affected by this document.\n> \n> \n\n\nand [other big public-domain-equivalent licenses](https://en.wikipedia.org/wiki/Public-domain-equivalent_license) only apply to *software*, not to any work in general.\n\n\nas a result, i propose the use of the CC\\_ -1, which is a full copy of CC0's legal text except for the following changes:\n\n\n* [as required by CC](https://wiki.creativecommons.org/wiki/Modifying_the_CC_licenses), the names \"Creative Commons\" and \"Creative Common Corporation\", referring to the corporation, have been replaced with \"carado\" (which is me)\n* the name \"Creative Commons\" at the top, referring to the license, has been replaced with \"Carado Commons\"\n* \"CC\", which is trademarked by Creative Commons, has been replaced with \"CC\\_\" (as per the common programming convention of adding an underscore to an already used name)\n* \"0\" in the name of the license has been changed to \"-1\", to suggest that this license grants even more freedom than CC0\n* the patent and trademark clause has been changed from\n\n\n\n> No trademark or patent rights held by Affirmer are waived, abandoned, surrendered, licensed or otherwise affected by this document.\n> \n> \n\n\nto\n\n\n\n> All trademark and patent rights held by Affirmer are waived, abandoned, and surrendered.\n> \n> \n\n\nyou can find [the full text here](cc_-1.txt).\n\n\nhere is also a suggested logo, using as a background the [free speech 
flag](https://en.wikipedia.org/wiki/Free_Speech_Flag), and the name \"CC\\_-1\" rendered in the public domain font [unscii](http://viznut.fi/unscii/), at various scales:\n\n\n![CC_-1.png](CC_-1.png) ![CC_-1-x2.png](CC_-1-x2.png) ![CC_-1-x3.png](CC_-1-x3.png) ![CC_-1-x4.png](CC_-1-x4.png)", "date_published": "2021-04-23T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "21bc9f2bd82af9a15dfb9188d4c1300d", "title": "Let's not generalize over people", "url": "https://carado.moe/lets-not-generalize-politics.html", "source": "carado.moe", "source_type": "blog", "text": "Let's not generalize over people\n--------------------------------\n\n\ni really like culture; and, to that end, i really like diversity of thought.\n\n\ndiversity is pretty [fragile](https://www.readthesequences.com/Value-Is-Fragile), however; if you don't take particular actions to preserve it, forces like markets rapidly optimize their own efficiency by throwing diversity [under the bus](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/): a population with more uniform needs is easier to satisfy.\n\n\nanother force that does this, however, is generalization: every time one makes assumptions about a general population, they become susceptible to making decisions that bring about a world that works better for people for whom this generalization holds, but worse for the rest — and so, in the long term, a world that will try to [optimize out](https://en.wiktionary.org/wiki/optimize_out) that latter population for its own efficiency.\n\n\nin labor, for example, i don't particularly care whether the economy is built so that workers feel less alienated. what if there is some person who doesn't care about alienation, and just wants to maximize how much pay they take home so that they can spend less time working and more time doing something else they want to do? this is why i favor less one-size-fits-all schemes that intend to \"make work better for everyone\" by making assumptions about what people want out of work, and more schemes that let people better negotiate what they want on a case-by-case basis. the two main schemes that come to mind which would radically empower people in regards to their interaction with the labor market are unionization, and [UBI](ubi.html).", "date_published": "2021-04-23T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "f2837a54d1e0bf9dd86d7b3092943906", "title": "Value and Earning", "url": "https://carado.moe/value.html", "source": "carado.moe", "source_type": "blog", "text": "Value and Earning\n-----------------\n\n\n(see also: [Georgism](https://astralcodexten.substack.com/p/your-book-review-progress-and-poverty))\n\n\nthis is a short post about my understanding of value, by which i here mean scarce material things that people care about.\n\n\nin general, we want people to get value; i will call this phenomenon consumption.\n\n\nwhere does value come from? the three sources are:\n\n\n* people themselves, through labor\n* capital, tools such as machines or factories\n* [the commons](https://en.wikipedia.org/wiki/Commons), which are usually natural things like land or air, but can also be state-provided goods such as public infrastructure\n\n\nnote that although currently capital largely needs human labor to be operated, the coming of automation is gonna progressively obsolete that restriction. 
the commons, on the other hand, often don't need any human labor to get value from them, and indeed predate the existence of both humans and tools.\n\n\nwhatever system we have that determines how these resources are used, and how value circulates, is the economy.\n\n\n![](value/2.png)\n\n\nfrom this model, we can see that humans don't need to produce more value than they consume; indeed, we can afford to have some or all people produce way less value than they consume, or even produce no value at all, and still have the economy function thanks to value continuing to be created by capital, as well as the commons [if they can be preserved](https://en.wikipedia.org/wiki/Tragedy_of_the_commons).\n\n\nthis even seems like a fairly desirable situation. we should want people to not *have* to labor, and still be able to get the value they want.\n\n\nindeed, the more value we get from capital, the less value we should need people to create. [wage stagnation](https://slatestarcodex.com/2019/02/25/wage-stagnation-much-more-than-you-wanted-to-know/) can be seen as this transition taking place: as capital represents more and more of value creation, the market rewards labor less and less in comparison.\n\n\nif this process is left to its own devices, the market will eventually \"optimize out\" humans, as they become obsolete as value creators. but we still want them to *get* value! this is why, as capital increases as a share of value creation, we want to introduce something like [universal basic income](ubi.html), which intrinsically values humans and gives them the means to benefit from the growing pool of value being created; in the absence of such a system, at best the only people who will get to have value once automation happens will be whoever happened to own the capital that produces it at that time, and everyone else will have to starve.\n\n\nbut i think the most important part about this model is the criticism of the notion of \"earning\": people *shouldn't be expected to*, and indeed *don't*, produce more than — or even as much as — they consume. if we want people to get value, we should create systems that allocate value to them regardless of how much they create, be it as individuals or as groups.", "date_published": "2021-03-31T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c45c9e8c87719a326e2c4f8ae5690307", "title": "\"4=5\"", "url": "https://carado.moe/4=5.html", "source": "carado.moe", "source_type": "blog", "text": "\"4=5\"\n-----\n\n\n![\"4=5\"](4=5.png)", "date_published": "2021-03-30T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "754c8bcc3f9b8a68739678bad7357545", "title": "Cultural and Memetic Hygiene", "url": "https://carado.moe/cultural-and-memetic-hygiene.html", "source": "carado.moe", "source_type": "blog", "text": "Cultural and Memetic Hygiene\n----------------------------\n\n\ni like culture as much as i like markets; but just like how markets need [an outside force to become correctly aligned](ubi.html), culture takes extra effort to partake in in responsible and hygienic ways.\n\n\nyou are not just influenced by environmental factors, you are *made of* environmental factors, and this includes culture. 
culture is what [determines and existentially determines](core-vals-exist-selfdet.html) what people think and value, and [is very important](https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/).\n\n\nhere are some ways i think about culture in order to make sure i partake of it responsibly and i don't [value drift](https://en.wikipedia.org/wiki/Instrumental_convergence#Goal-content_integrity).\n\n\n### never consent to advertising or marketing\n\n\nif there is some force whose influence on culture you want to restrict, it's markets. as soon as markets influence culture, markets influence what people demand from markets, and you've got yourself [a feedback loop](unfair-feedback-loops.html) of self-reinforcing Big Advertising.\n\n\nadvertising and in general market-motivated cultural influence is very not valid; it does not represent what people would otherwise freely or organically choose to want (as much as that's even a thing).\n\n\nuse [a good adblocker](https://ublockorigin.com/). don't let websites (notably media streaming websites) recommend media to you; even if you end up enjoying individual works, they are still inducing a sampling bias. [don't watch sponsored clips](https://sponsor.ajay.app/). outright avoid content that has unblockable advertising or even product placement. and, in a sense, don't give in to marketing; if a movie is successful because big hollywood maximally [threw artistic vision under the bus](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/) in order to maximize profitability, then chances are it's [not very interesting culture](12-rules-for-life.html).\n\n\ni tend to also avert my eyes from public billboard advertising outside; unconsented advertisement should absolutely be as illegal as any other unconsented influence on people's minds.\n\n\n(by the way, this is my answer to anyone complaining about the state of big movies or big video games: *just stop consuming them*, and consume independent media instead. you are not gaining legitimacy by consuming them, they are gaining legitimacy by being consumed by you.)\n\n\n### always be skeptical of memetic virulence\n\n\n[memes](https://en.wikipedia.org/wiki/Meme) want to occupy your mind and leech of your precious cognitive resources to spread. be conscious of that process and try to avoid places that are just memetic petri dishes; having fun is fine, but is the value you're getting from memes always worth it? 
wouldn't your cognitive capacity often be better spent on things you value more?\n\n\n### express yourself freely\n\n\nthis point covers some obvious points, like systematically disregarding intellectual property laws — in this case especially copyright and trademarks, the latter of which is literally *companies having some ownership over some words*.\n\n\nbut there are subtler points here too: when you post a gif from a gif-picking utility or even an emoji, are you freely choosing how to express yourself, or is your set of acceptable thoughts and memes constrained by whatever platform you're picking on?\n\n\nor, when you don't disable autocapitalization, autopunctuation, or spellchecking and spellcorrection, are you being railroaded by institutions into shaping how you express yourself according to their own standards?\n\n\nlanguage should be maximally alive, and people should use it however they feel like using it; if there is to be a standard, it should be that of free and organic consensus between individuals, not top-down enforcement by companies or state institutions.\n\n\neven text encoding standards [are institutional shackles on free thought](against-unicode.html).\n\n\n### in conclusion\n\n\npeople should be, and in my opinion are, the ultimate source of legitimacy.\n\n\nuniversal suffrage tries to align politics to intrinsically value people; [universal basic income](ubi.html) will try to align markets to do the same.\n\n\nbut a third struggle in that vein is to guarantee the continued control of culture by people rather than structures of power; to realign culture to people.", "date_published": "2021-03-28T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "8d7df819e09ce98e913bffdc316bbd5e", "title": "Value Crystallization", "url": "https://carado.moe/value-crystallization.html", "source": "carado.moe", "source_type": "blog", "text": "Value Crystallization\n---------------------\n\n\nthere is a weird phenomenon whereby, as soon as an agent is rational, it will want to conserve its current values, as that is in general the surest way to ensure it will be able to start achieving those values.\n\n\nhowever, the values themselves aren't, and in fact [cannot](https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem) be determined purely rationally; rationality can at most help [investigate](core-vals-exist-selfdet.html) what values one has.\n\n\ngiven this, there is a weird effect whereby one might strategize about when or even if to inform other people about [rationality](https://www.readthesequences.com/) at all: depending on when this is done, whichever values they have at the time might get crystallized forever; whereas otherwise, without an understanding of why they should try to conserve their values, they would let those drift at random (or more likely, at the whim of their surroundings, notably friends and market forces).\n\n\nfor someone who hasn't thought about values much, *even just making them wonder about the matter of values* might have this effect to an extent.", "date_published": "2021-03-04T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c1e3cb7f4bed0cb3517f3c711985a38d", "title": "Growth Doesn't Care About Crises", "url": "https://carado.moe/growth-doesnt-care-about-crises.html", "source": "carado.moe", "source_type": "blog", "text": "(disclaimer: i'm very unqualified to talk about economics; this post would probly look either obvious or stupid to anyone who actually knows any economics)\n\n\nGrowth Doesn't Care About 
Crises\n--------------------------------\n\n\n(all graphs from [here](https://ourworldindata.org/economic-growth), graph \"GDP per capita\", in log rather than linear because that's a more natural way to represent growth than raw value)\n\n\nin 1929, the United States hit the [Great Depression](https://en.wikipedia.org/wiki/Great_Depression). its GDP per capita fell from 10,500$ in 1929 to 7,300$ in 1933; a decrease of 31%.\n\n\n![](growth-doesnt-care-about-crises/1.png)\n\n\nafter a while, the economy recovered back onto its track and stabilized again to the rough growth it had before the crisis.\n\n\none might expect the recovery to look like this:\n\n\n![](growth-doesnt-care-about-crises/2.png)\n\n\nbut this isn't what happened. here is the actual graph:\n\n\n![](growth-doesnt-care-about-crises/3.png)\n\n\nnotice the difference? not only did the economy recover to its previous growth, but it actually *increased* its growth until it got to the same level that it would be at if the crisis never happened.\n\n\nwhich is to say: *the end result is the exact same as if the crisis didn't happen*. if you lived in the early 1900's, you could draw a straight line to predict the growth of the country for the century to come, *regardless of any crisis that might happen*.\n\n\nhere are some other countries' growth *not caring about crises*:\n\n\n![](growth-doesnt-care-about-crises/4.png)\n\n\njapan, france, and germany's growths *not caring about world war 2*\n\n\n![](growth-doesnt-care-about-crises/5.png)\n\n\npre-soviet, then soviet, then post-soviet countries' growths *not caring about the start nor the end of the soviet union*\n\n\n![](growth-doesnt-care-about-crises/6.png)\n\n\nchina and india's growths *barely caring about industrializing late* (with the US as comparison; if you draw a line from china in 1820 to china in 2016, the curve would be about as steep as the US's, and india wouldn't be far behind)\n\n\n \n\n\n\na takeaway? **we should probably invest massively in africa.** africa is the one continent that has yet to join the global growth; and when *their* growth starts climbing up, it's probly gonna go up *very fast* in order to achieve the same total curve steepness as currently richer countries.", "date_published": "2021-03-04T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "ba3f551aa941260959adcfd8115125fe", "title": "Normies Are in Hell Too", "url": "https://carado.moe/normies-are-in-hell-too.html", "source": "carado.moe", "source_type": "blog", "text": "Normies Are in Hell Too\n-----------------------\n\n\nweird people like me often like to think we're the oppressed class; we're the ones for whom institutions don't work, expectations aren't met, we are at a constant disadvantage of having to explain our weird situations to others, etc.\n\n\nwe have salvaged our freedom and individuality, but we have paid a tall price in defecting from social normalcy.\n\n\nlike Tomoko, protagonist of the manga and anime [Watamote](https://en.wikipedia.org/wiki/No_Matter_How_I_Look_at_It,_It's_You_Guys'_Fault_I'm_Not_Popular!), we fail society, and society fails us.\n\n\nshe features in the ending to the anime:\n\n\n\nand i'm fully on Tomoko's side, here. i'm the weird person, the geek, the weeb, the hipster, the nerd, and i'm very happy with it. 
i have committed to this life and i think it is the right choice, for me and for probly many more people than end up actually committing to an alternate lifestyle.\n\n\n \n\n\n\nhowever, it is good to, every now and then, remind ourselves that normies suffer their own tragic fate.\n\n\nwhere we have escaped through defection, normies are trapped in *cooperation hell*; they are engaging in [a race to the bottom](https://web.archive.org/web/20140730043944/http://slatestarcodex.com/2014/07/30/meditations-on-moloch/), sacrificing values in order to spend just a bit more resources than their peers on fitting societal standards, lest they be left behind; societal standards which must, by definition, be forever out of reach of anyone, for any set of people attaining them will invent new standards towards which to compete yet again.\n\n\na rat race to normalcy.\n\n\nnothing has made me realize this as much as the following variant on the anime ending, which instead of featuring Tomoko, features her friend Yuu, the prototypical normie.\n\n\n(be sure to enable english subtitles/closed captions when watching)", "date_published": "2021-03-04T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "461b976dcfe07eacafdbc6bad3f4849c", "title": "From-above vs Fine-grain diversity", "url": "https://carado.moe/from-above-fine-grain-diversity.html", "source": "carado.moe", "source_type": "blog", "text": "From-above vs Fine-grain diversity\n----------------------------------\n\n\nconsider the following two populations:\n\n\n![](from-above-fine-grain-diversity.png)\n\n\nwhich one has the most diversity?\n\n\none might be tempted to think that A (a scenario that might occur if two populations are forbidden from interacting with each other) has more diversity: it has 2 colors (red and blue) instead of 1 (purple).\n\n\nhowever, when you look more closely, B actually has a lot more colors: in-between the mass of purple, there are reds, blues, and many in-betweens. 
at the fine grain level, there's a lot more diversity, even if the whole looks more homogenous from above.\n\n\nwhich one is *true* diversity?\n\n\nmy argument would be that it is B; you should care about the actual state of the [territory](https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation) in the day-to-day interactions of elements of these populations, not the overall appearance of the map, as they may [appear to a state](https://slatestarcodex.com/2017/03/16/book-review-seeing-like-a-state/) for example.", "date_published": "2021-03-04T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "9838ace407c13d81950d16184f81e348", "title": "Symbology for Topia", "url": "https://carado.moe/symbology-for-topia.html", "source": "carado.moe", "source_type": "blog", "text": "Symbology for Topia\n-------------------\n\n\ni have come up with what i'd like to think is a cool symbol for [my utopia](two-principles-for-topia.html):\n\n\n![](%E2%88%80V3.svg)\n\n\nthe background is black, for the inherent anarchism of my ideology (singularity doesn't count as state, it's more like an added law of physics);\n\n\nthe yellow V represents private-property [voluntaryism](https://en.wikipedia.org/wiki/Voluntaryism), a right-libertarian (hence yellow) ideology centered around voluntary association which my valuing of freedom tries to embody;\n\n\nthe red ∀, [\"for all\"](https://en.wikipedia.org/wiki/Universal_quantification), represents the intrinsic valuing of all persons; colored red as this tries to implement what i'd like to think is a socialistic ideal of positive and equal valuing of all persons.", "date_published": "2021-03-04T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "e94eaa5416b35f38aa2f277ebcc1b531", "title": "Communicating Clearly", "url": "https://carado.moe/communicating-clearly.html", "source": "carado.moe", "source_type": "blog", "text": "Communicating Clearly\n---------------------\n\n\nit is important to avoid, as i've noticed notably conspiracy theorists often do, attributing special importance or personal/alternate meanings to commonly used words. the goal of language is in general to communicate with the recipients.\n\n\ni've taken, increasingly consciously, a couple of steps to attempt to communicate my ideas clearly, especially when the recipients are either unknown or varied. they are:\n\n\n* to avoid words with unclear meaning, such that people can attribute anything they want to them. this includes words like \"consciousness\", \"free will\", \"left-wing\" and \"right-wing\", \"conservative\", even in some contexts the notions of \"deserving\" or \"existing\". if a word is highly contested, it's just better to use a word that more unambiguously means what you intend to convey; even if it entails [making up your own](word-report-2.html) (because at least then they can ask, and the word isn't used already).\n* to merge together words that are similar enough. i've come to use interchangeably words in each of the following pairs: \"communism\" and \"socialism\", \"ethics\" and \"morality\", \"free markets\" and \"capitalism\", \"freedom\" and \"liberty\", and others. 
when everyone has their own idea of what the difference is between two words, it's probably a good idea to just consider them synonyms in general; at most synonyms with different connotations.\n* having personal definitions for some terms that are more precise than the common definition; this may seem unintuitive as it makes the word map to not exactly the same concept as most people use it to mean, but if the precise meaning i choose is reasonably a subset of the general usage and if the word is understood to be able to mean a variety of different things to different people ([such as \"freedom\"](defining-freedom.html)) then it can be a good idea to lay out, or even try to figure out on the spot, sufficiently formalized definitions for those terms.\n* to go out of my way to mention that i'm doing the three things above. as soon as \"consciousness\" enters a discussion, i make it clear that i'd rather the person use a more precise term or description because we probably have vastly different ideas of what that word could mean.\n\n\n(and of course, if you go out to argue, don't forget to carry your [newton's flaming laser sword](https://en.wikipedia.org/wiki/Newton's_Flaming_Laser_Sword) with you)", "date_published": "2021-01-22T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "e75da60ed0874b15a8d25d58643c739f", "title": "A canonical bit-encoding for ranged integers", "url": "https://carado.moe/canonical-bit-varints.html", "source": "carado.moe", "source_type": "blog", "text": "A canonical bit-encoding for ranged integers\n--------------------------------------------\n\n\nin [an earlier post](canonical-byte-varints.html) i describe a scheme to canonically and efficiently encode integers that span over a power-of-two range. in this post i'll be describing a scheme to encode (positive) integers within a non-power-of-two range; but in bits rather than bytes.\n\n\nthis problem cannot in general be solved for byte-encoding: one by definition cannot make a bijection between the 256 possible values of 1 byte and a set of under 256 values.\n\n\nhere is the scheme to encode a number i in the range between 0 and n (a short code sketch follows the examples below):\n\n\n* if n = 2ᵏ-1, the range has a power-of-two number of values, and i is simply encoded over k bits.\n* otherwise,\n\t+ let k be the highest integer such that 2ᵏ ≤ n.\n\t+ the first n+1-2ᵏ values (those with i < n+1-2ᵏ) are mapped to 0 followed by the encoding of i over the range from 0 to n := n-2ᵏ\n\t+ the last 2ᵏ values are mapped to 1 followed by i-(n+1-2ᵏ) encoded over k bits\n\n\nas an example, here's the encoding of all numbers within some ranges (with spaces inserted merely for readability)\n\n\n* numbers between 0 and 3\n\n\n````\n\n0: 00\n1: 01\n2: 10\n3: 11\n\n````\n\n\n* numbers between 0 and 4\n\n\n````\n\n0: 0\n1: 1 00\n2: 1 01\n3: 1 10\n4: 1 11\n\n````\n\n\n* numbers between 0 and 5\n\n\n````\n\n0: 0 0\n1: 0 1\n2: 1 00\n3: 1 01\n4: 1 10\n5: 1 11\n\n````\n\n\n* numbers between 0 and 6\n\n\n````\n\n0: 0 0\n1: 0 10\n2: 0 11\n3: 1 00\n4: 1 01\n5: 1 10\n6: 1 11\n\n````\n\n\n* numbers between 0 and 7\n\n\n````\n\n0: 000\n1: 001\n2: 010\n3: 011\n4: 100\n5: 101\n6: 110\n7: 111\n\n````\n\n\n* numbers between 0 and 14\n\n\n````\n\n 0: 0 0 0\n 1: 0 0 10\n 2: 0 0 11\n 3: 0 1 00\n 4: 0 1 01\n 5: 0 1 10\n 6: 0 1 11\n 7: 1 000\n 8: 1 001\n 9: 1 010\n10: 1 011\n11: 1 100\n12: 1 101\n13: 1 110\n14: 1 111\n\n````", "date_published": "2021-01-14T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "68c77f56c33d691c615d410f2a4ff269", "title": "Non-solving ideologies", "url": 
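here is a minimal Python sketch of the ranged-integer bit encoding described above — an illustration under my reading that k is the highest integer with 2ᵏ ≤ n; the function names `encode` and `decode` are just for this sketch, not from the post:

```python
# minimal sketch of the ranged-integer bit encoding described above.
# encode(i, n) returns the bit string for 0 <= i <= n; decode inverts it.

def encode(i, n):
    assert 0 <= i <= n
    if (n + 1) & n == 0:                   # the range 0..n has exactly 2**k values
        k = (n + 1).bit_length() - 1
        return format(i, f"0{k}b") if k else ""
    k = n.bit_length() - 1                 # highest k with 2**k <= n
    low = n + 1 - 2 ** k                   # how many values get the 0 prefix
    if i < low:
        return "0" + encode(i, n - 2 ** k)
    return "1" + format(i - low, f"0{k}b")

def decode(bits, n):
    """returns (value, leftover bits) so encodings can be concatenated."""
    if (n + 1) & n == 0:
        k = (n + 1).bit_length() - 1
        return (int(bits[:k], 2) if k else 0), bits[k:]
    k = n.bit_length() - 1
    low = n + 1 - 2 ** k
    if bits[0] == "0":
        return decode(bits[1:], n - 2 ** k)
    return low + int(bits[1 : 1 + k], 2), bits[1 + k :]

if __name__ == "__main__":
    # prints the same codes as the "numbers between 0 and 6" table above,
    # just without the readability spaces
    for i in range(7):
        print(i, encode(i, 6))
    # round-trip check over a few ranges
    for n in (3, 4, 5, 6, 7, 14, 1000):
        assert all(decode(encode(i, n), n) == (i, "") for i in range(n + 1))
```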
"https://carado.moe/nonsolving-ideologies.html", "source": "carado.moe", "source_type": "blog", "text": "Non-solving ideologies\n----------------------\n\n\nif your ideology entails suppressing an arguably intrinsic aspect of human experience — such as tribalism, greed/selfishness, diversity, or irrationality — to function, then you haven't really solved anything; the same way ethnic cleansing doesn't really solve the problem of ethnic tensions, it just removes the need to solve that problem by sacrificing value.\n\n\n(emphasis on: tribalism and greed/selfishness for left-wingers, diversity for right-wingers, irrationality for \"\"rationalists\"\" — but not actual rationalists, who know better)\n\n\ninstead of creating a good world for people to live in, you have changed people such that creating a world that satisfies them is easier.\n\n\nyou changed the problem to make it easier, instead of trying to solve it.", "date_published": "2021-01-01T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "2e181afe0f4fb7248098e1bf899bc596", "title": "Core values: Defining freedom", "url": "https://carado.moe/defining-freedom.html", "source": "carado.moe", "source_type": "blog", "text": "Core values: Defining freedom\n-----------------------------\n\n\nin [a previous post](core-vals-exist-selfdet.html) i mentioned how i intrinsically value that people have freedom. but what does that mean, exactly ? after a couple of attempts, i feel like i've found a relatively solid definition.\n\n\n### it's not \"being able to do what you want to do\"\n\n\n…in part because \"want\" is not exactly clearly defined. but, i think most people would agree there are times where you in fact choose to do things that aren't what you want the most at that moment, especially given the hedonistic implications of \"want\".\n\n\n### it's not \"being able to do what you might choose to do\"\n\n\n…even if the brain were fully deterministic, one could choose to define freedom as \"if you dismiss knowledge of how this brain works (such that its output becomes probabilistic), make it able to do whatever it might choose to do\". this freedom, however, runs into issues, such as the fact that as the brain's configuration is a product of its environment, one could deduce its internal configuration from the environment it appeared in, or that you can become able to predict a lot about that brain's internal configuration from its behavior.\n\n\n### it's \"being able to do what you decide to do\"\n\n\ni think this is the definition i'm settling on, at least for now.\n\n\none could raise the objection \"but won't you only decide to do what you can do in the first place ?\", but when you know/realize that you are able to do whatever you decide, you will expand what you decide to do to a lot more options.\n\n\nthis definition also covers the first one: you can always decide to do whatever it is you want at the time; but you can also not. 
this definition seems to be the most general.\n\n\nthus, from now on, i value (among other things, of course) that people are able to do whatever it is they decide to do.", "date_published": "2020-12-31T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "af137d7d29e46f97efca8e5a4e995226", "title": "A canonical and efficient byte-encoding for ints", "url": "https://carado.moe/canonical-byte-varints.html", "source": "carado.moe", "source_type": "blog", "text": "A canonical and efficient byte-encoding for ints\n------------------------------------------------\n\n\nthe internet has a bunch of efficient byte-encodings for fixed-length integers, from [LEB128](https://en.wikipedia.org/wiki/LEB128) to arguably [UTF-8](https://en.wikipedia.org/wiki/UTF-8); but none of them (except the trivial encoding of always encoding an integer as its fixed byte sizeof) seem to be canonical.\n\n\nwhat i mean by canonical is that every integer has only one possible representation, and every representation means a single integer.\n\n\nafter some work, i've devised a mildly convoluted but, as far as i can tell, truly canonical encoding for int16, int32, and int64.\n\n\nthe formula i rely on is that 2ⁿ = 1 + (2⁰ + 2¹ + … + 2ⁿ⁻¹) = 2ⁱ + (2ⁱ + … + 2ⁿ⁻¹); so, to cover the entire, say, 2³² space of int32, i need to fit neatly into bytes variable bit patterns of length i through 31 (and have two patterns for i).\n\n\nhere are the bit patterns for int16 and int32 (int64, being larger, is at the bottom of the page).\n\n\neach line mentions the number of variable bits; you can check that the sum of two to the power of each of those accounts for every possible value; for example, for int16, 2⁷ + (2⁷ + … + 2¹⁵) = 2¹⁶\n\n\n\n```\nint16:\n 7: 0.......\n* 7: 10000000 0.......\n 8: 10000001 ........\n 9: 1000001. ........\n 10: 100001.. ........\n 11: 10001... ........\n 12: 1001.... ........\n 13: 101..... ........\n 14: 11...... ........\n* 15: 10000000 1....... ........\n\nint32:\n 6: 00......\n* 6: 01000000 00......\n* 7: 10000000 0.......\n 8: 01000001 ........\n 9: 0100001. ........\n 10: 010001.. ........\n 11: 01001... ........\n 12: 0101.... ........\n 13: 011..... ........\n* 14: 01000000 01...... ........\n* 15: 10000000 1....... ........\n 16: 10000001 ........ ........\n 17: 1000001. ........ ........\n 18: 100001.. ........ ........\n 19: 10001... ........ ........\n 20: 1001.... ........ ........\n 21: 101..... ........ ........\n* 22: 01000000 10...... ........(×2)\n* 23: 11000000 0....... ........(×2)\n 24: 11000001 ........ ........(×2)\n 25: 1100001. ........ ........(×2)\n 26: 110001.. ........ ........(×2)\n 27: 11001... ........ ........(×2)\n 28: 1101.... ........ ........(×2)\n 29: 111..... ........ ........(×2)\n* 30: 01000000 11...... ........(×3)\n* 31: 11000000 1....... 
........(×3)\n\n```\n\nwith K = 1 for int16, K = 2 for int32, K = 3 for int64:\n\n\nwhen the lower 8-K bits of the first byte are 0 (a pattern indicated with a \\* next to the line), then the upper K bits of the first byte and the following 1 to 3 bits of the second byte (depending on the value of K) determine the amount of extra number-encoding bytes, while the remaining bits of the second byte are the initial bits of the number.\n\n\nwhen the lower 8-K bits are not 0, then the K upper bits indicate the number of number-encoding bytes after the first byte, and the bits after the first 1 in the lower 8-K bits are the initial bits of the number.\n\n\nmy explanation is probably not very clear, but the pattern should be visible if you look at it enough; just keep in mind that lines tagged with a \\* work differently from those not tagged that way, and that the first K bits are a special tag.\n\n\nto efficiently encode a signed integer (optimizing for values near 0), encode:\n\n\n* any positive or null number n as 2×n\n* any strictly negative number n as 2×(-n)-1\n\n\nbelow are the bit patterns for int64:\n\n\n\n```\nint64:\n 5: 000.....\n* 5: 00100000 000.....\n* 6: 01000000 00......\n* 7: 10000000 0.......\n 8: 00100001 ........\n 9: 0010001. ........\n 10: 001001.. ........\n 11: 00101... ........\n 12: 0011.... ........\n* 13: 00100000 001..... ........\n* 14: 01000000 01...... ........\n* 15: 10000000 1....... ........\n 16: 01000001 ........ ........\n 17: 0100001. ........ ........\n 18: 010001.. ........ ........\n 19: 01001... ........ ........\n 20: 0101.... ........ ........\n* 21: 00100000 010..... ........(×2)\n* 22: 01000000 10...... ........(×2)\n* 23: 10100000 0....... ........(×2)\n 24: 01100001 ........ ........(×2)\n 25: 0110001. ........ ........(×2)\n 26: 011001.. ........ ........(×2)\n 27: 01101... ........ ........(×2)\n 28: 0111.... ........ ........(×2)\n* 29: 00100000 011..... ........(×3)\n* 30: 01000000 11...... ........(×3)\n* 31: 10100000 1....... ........(×3)\n 32: 10000001 ........ ........(×3)\n 33: 1000001. ........ ........(×3)\n 34: 100001.. ........ ........(×3)\n 35: 10001... ........ ........(×3)\n 36: 1001.... ........ ........(×3)\n* 37: 00100000 100..... ........(×4)\n* 38: 01100000 00...... ........(×4)\n* 39: 11000000 0....... ........(×4)\n 40: 10100001 ........ ........(×4)\n 41: 1010001. ........ ........(×4)\n 42: 101001.. ........ ........(×4)\n 43: 10101... ........ ........(×4)\n 44: 1011.... ........ ........(×4)\n* 45: 00100000 101..... ........(×5)\n* 46: 01100000 01...... ........(×5)\n* 47: 11000000 1....... ........(×5)\n 48: 11000001 ........ ........(×5)\n 49: 1100001. ........ ........(×5)\n 50: 110001.. ........ ........(×5)\n 51: 11001... ........ ........(×5)\n 52: 1101.... ........ ........(×5)\n* 53: 00100000 110..... ........(×6)\n* 54: 01100000 10...... ........(×6)\n* 55: 11100000 0....... ........(×6)\n 56: 11100001 ........ ........(×6)\n 57: 1110001. ........ ........(×6)\n 58: 111001.. ........ ........(×6)\n 59: 11101... ........ ........(×6)\n 60: 1111.... ........ ........(×6)\n* 61: 00100000 111..... ........(×7)\n* 62: 01100000 11...... ........(×7)\n* 63: 11100000 1....... 
........(×7)\n\n```", "date_published": "2020-12-29T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "b25213b6849cba6f096ec24f657a39c7", "title": "You are your information system", "url": "https://carado.moe/you-are-your-information-system.html", "source": "carado.moe", "source_type": "blog", "text": "You are your information system\n-------------------------------\n\n\nwhat makes you, you ?\n\n\nwe tend to intuitively think of a person as their entire body, somehow including limbs and organs but not clothing or food.\n\n\nyet, if you close your eyes, and then i swap your arm with someone else's, when you wake up you will still be the same person, just with a new arm. in fact, i'd argue i could replace everything except for the nervous system (including the brain) and when you open your eyes again you would notice that your entire body has changed but your thoughts and memories have remained the same — rather than, for example, still having the same body but different thoughts and memories.\n\n\nare you the matter that makes up that nervous system ? i could probably replace neurons and synapses one at a time and you would continue to be the same person. is it the electric signals then ? i could probably put on some synapses a device that absorbs electric signals and then sends out identical but \"different\" signals and you would still be the same person.\n\n\nin fact, it doesn't really make sense to ask \"which matter\" makes up your nervous system: under quantum physics, everything is changing and particles are merely [values in an omnipresent field](https://www.youtube.com/watch?v=MmG2ah5Df4g) rather than solid objects.\n\n\nultimately, what you are, is *the information system* which your nervous system (including your brain) runs. 
standing still, walking forwards, teleporting yourself, and being uploaded into a sufficiently powerful computer, all preserve your personhood in the exact same way; there is nothing special about the meat that currently runs your mind.\n\n\n*despite everything, it's still you.*", "date_published": "2020-12-25T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "496eed11e3a5cd3fd79d9f3debc26d22", "title": "CSS for pixeley images", "url": "https://carado.moe/css-for-pixeley-images.html", "source": "carado.moe", "source_type": "blog", "text": "CSS for pixeley images\n----------------------\n\n\nhave you ever been consuming pixel art, such as the great webcomic [Unicorn Jelly](http://unicornjelly.com), but been upset that on modern monitors you have to suffer seeing either a really small version of the comic or a blurry zoomed-in version ?\n\n\n![](css-for-pixeley-images/out.png)\n![](css-for-pixeley-images/blurry.png)\nby using the [stylish extension](https://addons.mozilla.org/en-US/firefox/addon/stylish/) and then creating a style that applies to all sites with just the code\n\n\n`* { image-rendering:crisp-edges; }`\n\n\nyou get a toggle button that can make zoomed-in images [crispy](https://en.wikipedia.org/wiki/Nearest-neighbor_interpolation), which looks pretty good when zooming into pixeley images by integer amounts (200%, 300%, etc)\n\n\n![](css-for-pixeley-images/crisp.png)", "date_published": "2020-12-24T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "5f89ba472e58335b73998e2f74f7f992", "title": "Unfair feedback loops", "url": "https://carado.moe/unfair-feedback-loops.html", "source": "carado.moe", "source_type": "blog", "text": "Unfair feedback loops\n---------------------\n\n\none set of societal problems that i feel is often overlooked, especially by the political right, comes in a form i like to call *unfair feedback loops*. this is any situation where an originally fair interaction produces (often as a side-effect) conditions that augment its ability to happen again beyond fairness.\n\n\nthere are many examples:\n\n\n* politicians get elected by voters (fair), and then get the power to influence how elections work in order to facilitate their reelection (unfair)\n* criminals get punished by the justice system (fair) but also afterwards get poorer treatment in society, in a way that makes them more likely to commit crimes again (unfair)\n* good ideas can become widely adopted (fair) to the point that society pressures people into adopting them (unfair)\n* companies can make profit from selling good products (fair), and use that profit to spend money on marketing (such as advertising) to bend culture towards consumption of their products (unfair)\n\n\nthe common element between these situations is that further interactions *could* still happen to a \"fair\" amount, but they gain a bonus from having happened in the past, which makes their reoccurrence more likely/numerous than is justifiable.", "date_published": "2020-12-23T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "8f7ced205009c2e8e50d3d3f684595bc", "title": "Rationalist by necessity", "url": "https://carado.moe/rationalist-by-necessity.html", "source": "carado.moe", "source_type": "blog", "text": "Rationalist by necessity\n------------------------\n\n\nin [The Sequences](https://www.readthesequences.com/), Eliezer Yudkowsky [describes rationality](https://www.readthesequences.com/What-Do-I-Mean-By-Rationality) as\n\n\n1. 
**Epistemic rationality**: systematically improving the accuracy of your beliefs.\n2. **Instrumental rationality**: systematically achieving your values.\n\n\nnow, personally, i [intrinsically value](core-vals-exist-selfdet.html) a bunch of things, but having accurate beliefs isn't necessarily one of them; for me, rationality is an [instrumental value](https://en.wikipedia.org/wiki/Instrumental_and_intrinsic_value) in that it helps me achieve my other values better.\n\n\nin general, i value people being able to do whatever they want, and as such they shouldn't necessarily have to form accurate beliefs if they don't care to. in fact, forming inaccurate beliefs is a great source of culture, and culture is something that i *do* personally intrinsically value.\n\n\nbut we live in the era of liberal democracies, where society requires people to form accurate beliefs, because they're the ones directing society through elections. i see the need for people to be rationalist as an unfortunate necessity; hopefully a need we can be rid of when we [reach a topia where human decisions are no longer the pillar of civilization](two-principles-for-topia.html).\n\n\nnot, of course, that there's anything wrong with any individual or even group choosing to intrinsically value rationality. the part i care about is that it being a choice.", "date_published": "2020-12-22T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c6a238826c4500bac653b90f7019cbc0", "title": "Against Unicode", "url": "https://carado.moe/against-unicode.html", "source": "carado.moe", "source_type": "blog", "text": "Against Unicode\n---------------\n\n\nwhen considering the mess that text encoding was before unicode (and notably UTF-8), one wouldn't be blamed for thinking that the problem of text encoding is basically solved. yet, there are many issues with unicode, some of which cannot be solved without discarding unicode entirely.\n\n\n### A primer\n\n\n[unicode](https://en.wikipedia.org/wiki/Unicode) is a character encoding with about a million codepoints, of which currently about 144k are assigned to characters by the unicode consortium.\n\n\n[UTF-8](https://en.wikipedia.org/wiki/UTF-8) is by far the most common representation of unicode, where each character is represented by a sequence of bytes; notably, UTF-8 is compatible with [ASCII](https://en.wikipedia.org/wiki/ASCII): every valid ASCII sequence of bytes represents the same text it does when interpreted as UTF-8.\n\n\n### A solvable problem: the death of written chinese and japanese\n\n\nchinese and japanese use a wide collection of logographic characters (respectively [hanzi](https://en.wikipedia.org/wiki/Chinese_characters) and [kanji](https://en.wikipedia.org/wiki/Kanji)) that no doubt have evolved throughout history in how people use them the same way every other piece of language has.\n\n\nthat is, until formal text encoding — including unicode — came along. 
by hard-assigning a fixed set of characters to codepoints, these standards make users of those languages unable to create or even modify characters, even though the way kanji and hanzi work should make possible some combinations of [radicals](https://en.wikipedia.org/wiki/Radical_%28Chinese_characters%29) that don't currently exist, both to express new meanings and to simplify existing characters.\n\n\nas a result, chinese and japanese are in effect partially dead languages in their written form.\n\n\none way unicode could go about this would be to encode those characters as geometric combinations of radicals, with maybe some extra bits of information to indicate various ways in which those radicals can combine.\n\n\nthat would be a lot of work, but it is theoretically feasible.\n\n\n### An unsolvable problem: emoji\n\n\n[emoji](https://en.wikipedia.org/wiki/Emoji) are images used as units of language, now commonplace in internet communication as you've no doubt noticed. nonetheless, beyond the original japanese emoji imported into unicode, people have started developing and using platforms that let users use their own custom images as emoji. unicode simply cannot solve this issue, and it is a critical one: language is now flexible enough that any small image file can be a piece of language, but unicode cannot expect to assign codepoints or even codepoint combinations to all of them.\n\n\nanother even more long-term problem is future languages, be they evolutions of existing languages or [conlangs](https://en.wikipedia.org/wiki/Conlang).\n\n\n### Ideas for solutions\n\n\none might feel like the latter problem simply cannot be solved except by allowing all communication to just *embed images* into text; yet, there is a much more efficient way to go about it. in an idea i'll call *hashicode*, raw pieces of text are a sequence of [IPFS](https://en.wikipedia.org/wiki/InterPlanetary_File_System) addresses, each followed by arbitrary (but delimited) sequences of bytes. the addresses would point to sandboxable (such as in [wasm](https://en.wikipedia.org/wiki/WebAssembly); although maybe not, since [it's bad](http://troubles.md/posts/the-stack-is-not-the-stack/)) programs that can read the following bytes and then provide function calls that can be called to query how to render said characters, but also which ones are whitespace, the writing direction, how to scale them, what category of character they fit in, etc.\n\n\nthen, both in storage and in network communication, space can be saved by merging together identical addresses and storing only one copy of each used program (perhaps reference-counted).\n\n\nit is not an easy solution, but it is elegant *enough*, and most importantly for a language encoding format, *it can represent language people are using to communicate*.\n\n\nit also can survive the eventual end of [the last global era](global-era.html) in a way that a centralized authority like the unicode consortium can't.", "date_published": "2020-12-21T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "ea4ce34448e1a295a7a91e0587c40e51", "title": "Cringe as prejudice", "url": "https://carado.moe/cringe-as-prejudice.html", "source": "carado.moe", "source_type": "blog", "text": "Cringe as prejudice\n-------------------\n\n\na while ago, a friend of mine mentioned how they don't get cringe. what a weird notion! but, after trying to understand more deeply what causes me to cringe, i realized that i myself was losing grasp of the notion.\n\n\nwhat is cringe? 
a visceral reaction to a surface level perception of it. cringe, i argue, is a form of prejudice; and gaining in perspective, the ability to understand and empathize with others, erodes at it, the very same way getting to know people for what they are rather than a revulsion based on the surface appearance of people is what overcoming racism consists of.", "date_published": "2020-12-20T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "a2bb9ce60beddeb2e12f02b237f1f71c", "title": "A Prototypeness Hierarchy of Realities", "url": "https://carado.moe/prototype-realities.html", "source": "carado.moe", "source_type": "blog", "text": "*(this post may contain some very vague spoileryness about the video game Outer Wilds)*\n\n\nA Prototypeness Hierarchy of Realities\n--------------------------------------\n\n\none property of many video games that i felt the most when playing the excellent [Outer Wilds](https://store.steampowered.com/app/753640/Outer_Wilds/) was *prototypeyness*.\n\n\nmany games, and especially that one, feel like they are prototypes for reality to some extent; they try to extract some essence of what is interesting about this world, without having the ability to implement all of it in a fully dynamic way, and thus hardcoding the rest.\n\n\nnow, this aspect of prototypeyness is sufficiently present in Outer Wilds that i ended up asking myself the question: what would real life (this universe where earth is) be a prototype for ? and i think the answer is:\n\n\nreal life is a prototype for living in virtual realities/cyberspace.\n\n\nonce we upload ourselves to computers (a good thing!) we will be able to make the entirety of the substrate that individuals interact with way more flexible; inhabit spaces of any number of dimensions or maybe not even spaces at all and just graphs (as is the shape of the web), modify our minds in ways meat brains wouldn't support, basically utilize any type of computational constructs we want with no regard for most limitations, depending on reality only as a substrate to run the computronium for it all.\n\n\nlike the step between prototypey video games and reality, it is one of a nearly definitional boundary in scale of computing power, and one whose non-prototype side i'm very interested in.", "date_published": "2020-11-18T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "91edf1f069265f24a4fae6411cd259bd", "title": "Two Principles For Topia", "url": "https://carado.moe/two-principles-for-topia.html", "source": "carado.moe", "source_type": "blog", "text": "(edit: this post is *sort of* superceded by [∀V](%E2%88%80V.html))\n\n\nTwo Principles For Topia\n------------------------\n\n\nthe more i think about it, the less i think the solution to [Moloch](https://web.archive.org/web/20140730043944/http://slatestarcodex.com/2014/07/30/meditations-on-moloch/) is a single benevolent Elua; or, in other terms, we shouldn't implement Elua, but we should enact reasonable principles which Elua might want to implement herself.\n\n\nhere are what i currently believe to be the two principles that form the basis of a largely [freedom-conserving](core-vals-exist-selfdet.html) utopia:\n\n\n* the first principle, Voluntaryism, consists of NAP, UBI, and population control.\n\n\n\t+ the systematic enforcement of the [non-aggression principle](https://en.wikipedia.org/wiki/Non-aggression_principle) (NAP) to guarantee agency and freedom of association,\n\t+ mandatory redistribution enough for every individual to be guaranteed a 
reasonable-with-[slack](https://thezvi.wordpress.com/2017/09/30/slack/) living (UBI) (where living includes basic resources and healthcare up to immortality), and\n\t+ enough population control to guarantee this redistribution can even happen in the first place in a world with (even locally) limited resources, are to be the basis of a reasonable [voluntary](https://en.wikipedia.org/wiki/Voluntaryism) world.\n\n\n secondary notions like taxation on [externalities](https://en.wikipedia.org/wiki/Externality) and usage of [the commons](https://en.wikipedia.org/wiki/Commons) help make that UBI tangible (\"why does the UBI currency have value ?\" → because it's what eventually one must pay those taxes with) and reasonably redistribute ressources so as to help all persons benefit from growth.\n* the second principle is the dismantlement of non-person forces (DNPF).\n\n\n what i mean by a non-person force is any phenomenon that interacts with mankind in a way that isn't answerable to persons; this goes, in order of scale, from gravity and kinetics, to cancer, to publicly-owned corporations and states. these all keep abusing persons (by which i here mean [moral patient](https://en.wikipedia.org/wiki/Moral_agency#Distinction_between_moral_agency_and_moral_patienthood)) in many ways, and just generally keep us from being in control of our lives.\n\n\n the example of corporations is particularly insidious: though they would be (under UBI) aligned to benefit the values of persons, they still outcoordinate those persons and thus in many ways outsmart them through the abuse of discoordination and cognitive biases; and not only that, but they are, in the petri dish of capitalism, bred so as to maximize their ability to do this. that said, at least fully top-down autocratic corporations have a person agent at the top, who is able to enforce the values of persons; publicly-owned corporations are even worse in that even their top-level direction is uncoordinated enough that valuing nice things is guaranteedly out of the equation (this could perhaps be addressed with better and maybe more society-distributed shareholder voting, but those shareholders probably get outcoordinated).\n\n\n (the argument above, by the way, is my largest criticism of non-[distributist](https://en.wikipedia.org/wiki/Distributism) capitalism)\n\n\n in effect, this principle turns the world we inhabit from a world of cold natural and emergent laws inside which reside some minds located in brains (materialism), into a world of ad-hoc minds determining everything else ([panpsychism](https://en.wikipedia.org/wiki/Panpsychism) ?).\n\n\n the easiest way to implement this principle is probably to move everyone to a virtual world (which saves resources too, which helps the population control cap be way higher)\n\n\nin my current opinion, those two principles **must be enforced** for the basis of a utopia to be form. the rest can be done through the voluntary action of persons (hopefully), but these two principles are what Elua/the singularity is to **enforce** for the continued free and valueful life of persons to be guaranteed.\n\n\nVoluntaryism alone is not enough, and this is largely missed by what i'm tempted to call right-wing utopians; not just abusive structures, but systematically self-reinforcing abusive structures, can and will still happen even under a complete voluntary society. 
[Meditations on Moloch](https://web.archive.org/web/20140730043944/http://slatestarcodex.com/2014/07/30/meditations-on-moloch/) addresses this largely with coordination, but coordination only *hopefully wins battles*; the addition of DNPF permanently wins the war.\n\n\nDNPF alone is not enough either, and this is what is largely missed by what i'm tempted to call left-wing utopians; in a virtual world of minds where resources are fairly allocated between persons, there can still be abuse, plagues, [malthusian traps](https://en.wikipedia.org/wiki/Malthusian_trap), and so on; and ultimately abusive structures, just of a different kind. the common left-wing answer of organizing people (and the scarier \"changing culture to make people systematically organize against those\" which, if voluntary, is largely wishful thinking, and if not, insanely violates self-determination and the values of persons) only wins battles; the addition of Voluntaryism permanently wins the war.", "date_published": "2020-11-15T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "fe34a3c1a76bcadbb5edeb883aefe2c5", "title": "Where next for piracy ?", "url": "https://carado.moe/where-next-piracy.html", "source": "carado.moe", "source_type": "blog", "text": "Where next for piracy ?\n-----------------------\n\n\nAs an *intellectual property abolitionist*, I'm often thinking of how we can keep improving media piracy.\n\n\nThe raw file-sharing itself is mostly solved: the BitTorrent protocol works pretty well, and [IPFS](https://ipfs.io/) is probably a reasonable successor. The main issue right now is *where to find those torrents*: apart from mouth-to-ear friend groups, the public torrent websites, public direct-download websites, and even [private trackers](https://wiki.installgentoo.com/index.php/Private_trackers) all keep getting taken down, often resulting in some pieces of rare media getting even harder for anyone to access (some works just aren't legally available at all!).\n\n\nSo, what we need is some form of distributed fact-checking database. This could come in the form of a peer-to-peer, wikipedia-ish data network, except that facts which can be checked and verified (with user reviews, reputation, networks of trust, etc) would include \"this IPFS address is an instance of this piece of media\"; in that sense, the knowledge of which [hashes](https://en.wikipedia.org/wiki/Cryptographic_hash_function) correspond to which pieces of media is merely a piece of factuality like any other.", "date_published": "2020-10-03T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "bc0e039c53c62c8129113e41a2cfc9d9", "title": "Real quick, on free will", "url": "https://carado.moe/free-will.html", "source": "carado.moe", "source_type": "blog", "text": "Real quick, on free will\n------------------------\n\n\n(I'm making this post mostly so I don't have to keep arguing these points and can just send this link instead)\n\n\nI believe in free will. I believe people have free will, and I believe a roomba has free will.\n\n\nWhat I mean by free will is this: *the decision that an agent makes is the result of that agent's thinking process*.\n\n\nIt does not matter that that thinking process happens to run on a computer (brain for the person, chip for the roomba) which is based on deterministic (or quantum-random) physics; the output of that decision is the result of deterministic (or quantum-random) phenomena, *this is true*, but *also* the result of that thinking process. 
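(To illustrate the kind of checkable fact record that the distributed database in *Where next for piracy ?* above could be built from, here is a small Python sketch; the record fields, the endorsement scheme, and the trust weights are assumptions of mine, not an existing protocol.)

```python
from dataclasses import dataclass, field

@dataclass
class MediaFact:
    """One verifiable claim: 'this content address is an instance of this work'."""
    work: str             # e.g. "Some Rare Album (2004), FLAC rip"
    content_address: str  # e.g. an IPFS CID or a torrent infohash
    asserted_by: str      # identity of the peer making the claim
    endorsements: dict[str, float] = field(default_factory=dict)  # peer -> weight

def confidence(fact: MediaFact, trust: dict[str, float]) -> float:
    """Naive web-of-trust score: trust in the asserter plus trust-weighted
    endorsements from other peers (standing in for reviews and reputation)."""
    score = trust.get(fact.asserted_by, 0.0)
    for peer, weight in fact.endorsements.items():
        score += weight * trust.get(peer, 0.0)
    return score

# hypothetical usage
fact = MediaFact(
    work="Some Rare Album (2004), FLAC rip",
    content_address="QmExampleCID",
    asserted_by="alice",
    endorsements={"bob": 1.0, "carol": 0.5},
)
print(confidence(fact, trust={"alice": 0.9, "bob": 0.8, "carol": 0.2}))  # ≈ 1.8
```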
Both are true, they merely are facts about different layers of reality.\n\n\nHere's evidence that the decision the human/roomba makes is *actually* the result of their thinking process: if an outside party interfered with the thinking process, then the resulting decision could be different.\n\n\nedit: i've been informed this position [might be called compatibilism](https://en.wikipedia.org/wiki/Compatibilism).", "date_published": "2020-10-03T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "6cbb1112b5189555d405a81ecfe45631", "title": "Word Report #2", "url": "https://carado.moe/word-report-2.html", "source": "carado.moe", "source_type": "blog", "text": "Word Report #2\n--------------\n\n\n(see: [Word Report #1](word-report-1.html))\n\n\n* **Superlinear Profits**: something related to [economies of scale](https://en.wikipedia.org/wiki/Economies_of_scale), and the best argument against large corporations. Superlinear Profits is the notion of a business making a profit on an investment which is more than *some fixed ratio × investment cost*; more than *linear* in the cost. Examples include Big Data (having two data sets together produces more value than separately), any R&D and engineering (1 invention can turn into N products based on it), etc.\n* **Irony Poisoning**: see [Jreg's video on it](https://www.youtube.com/watch?v=ZjSE1h_Ad9E); what the Internet Sincerity initiative is meant to address.\n* **Symbolic/Materialist/Impressionist** narratives: whether a story is meant to be enjoyed for its symbolic meaning (such as an allegory to real world things), its concrete events (what happens to the characters, etc), or the aesthetic impression it produces upon its consumer, respectively.\n* **[-plex](https://en.wiktionary.org/wiki/%2Dplex)** or **-ex**: like [-eme](https://en.wiktionary.org/wiki/-eme) but for a combination of objects. For example, whereas a [Sememe](https://en.wiktionary.org/wiki/sememe#English) is an atomic piece of meaning, a \"Semeplex\" could be an explicitely compositite notion.\n* **Yow/Yee**: meant to be a modern replacement for thou/thee: variants for the word \"you\" but explicitely singular and plural, respectively.\n* **Realm Walker**: a [walking simulator](https://en.wikipedia.org/wiki/Adventure_game#Walking_simulators) video game specifically about exploring a strange realm, with a particular focus on ambiance and aesthetics. Includes [Journey](https://en.wikipedia.org/wiki/Journey_%282012_video_game%29), [NaissanceE](https://store.steampowered.com/app/265690/NaissanceE/), [Fugue In Void](https://moshelinke.itch.io/fugue-in-void), [Peak Bleak Blues](https://connor-sherlock.itch.io/peak-bleak-blues-and-other-moods), [Yume Nikki](https://en.wikipedia.org/wiki/Yume_Nikki), and certainly others.\n* **Mediagraphy**: just *discography* or *bibliography* but for artists who produce media in a variety of formats. Particularly useful for artists whose work spans many mediums, like [Jennifer Diane Reitz](https://www.youtube.com/watch?v=rOn-gSTsD7k).\n* **Very Culture**: just an exclamation I enjoy making when faced with elements of the world I enjoy for their cultural value. 
Can apply to anything from [borderers](https://web.archive.org/web/20160427114849/http://slatestarcodex.com/2016/04/27/book-review-albions-seed/) to *Undertale*.\n* **Humancore** vs **Posthuman**: an opposition between embracing the [meatspace](https://en.wiktionary.org/wiki/meatspace), [\"primal\"](https://web.archive.org/web/20191105024427/https://slatestarcodex.com/2019/11/04/samsara/), even straight-up [erroneous](https://www.readthesequences.com/Biases-An-Introduction) aspects of human existence; and ascending over those aspects to pursue what is percieved to be a higher, more intellectual existence. Note that *Humancore* is **not** the current memed \"return to monkeh\"; for central to Humancore is Culture, which is [what is special about humans](https://web.archive.org/web/20190605100310/https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/); though \"return to monkeh\" certainly is part of it.\n* **(to) core value**: (to) axiomatic(ally) value. An [axiomatic value](core-vals-exist-selfdet.html) is something you *ultimately care about*, with no reason given or givable. If you value something for a reason, then it's the alternative: an *instrumental value*, and by asking what value it serves you eventually come down to axiomatic values.\n* **Technology** and **Art**: these may be standard terms, but I have my own definitions for them: *technology* is elements of culture that are instrumentally valued, and *art* is elements of culture that are intrinsically valued. If you engineer because you like beautiful works of engineering, not to get things done, that's art; if you create a video game whose purpose is to help people with trauma, that's technology.\n\n\nWhile I'm at it, I'd like to challenge the notion that the term \"literally\" has lost its meaning and is now just a generic superlative. As I see it, this is not the case; the use of \"literally\" in cases where it doesn't mean exactly that is a *figurative* use of the word \"literally\", and indeed it has quite a distinctive semantic flavor from just a generic superlative.\n\n\n*(2020-11-18 edit: added the variant **-ex** to **-plex**)*", "date_published": "2020-10-03T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c8327c41bba21fc1f689c3666643a278", "title": "Socialism as a conspiracy theory", "url": "https://carado.moe/socialism-conspiracy.html", "source": "carado.moe", "source_type": "blog", "text": "Socialism as a conspiracy theory\n--------------------------------\n\n\nThere is no way to partake of climate denial without being to an extent a conspiracy theorist: if you disagree with the broad consensus of the academic field of climate science, whose participants have dedicated their career to studying the field in question, then you must believe they are either being dishonest for their own interest or are controlled by some Evil Group.\n\n\nSocialism is similar in that it disagrees with the field of economics' consensus that liberalism is the way to organize the economy that is the most beneficial to society. 
Generally, the *Evil Group* in question is *The Bourgeoisie*, and thus the entire field is invalidated, as opposed to addressed by partaking of it and producing papers that challenge the consensus.", "date_published": "2020-10-03T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "e2152bdd0a1a1ee205ba2622dbb1960b", "title": "For UBI", "url": "https://carado.moe/ubi.html", "source": "carado.moe", "source_type": "blog", "text": "For UBI\n-------\n\n\nAll my friends know that I'm a huge fan of *Universal Basic Income* for a variety of reasons. I'll outline here a collection of my main arguments for UBI.\n\n\nFirst things first, I'll define UBI: it has to be Universal, it has to be Basic, and it has to be an Income.\n\n\n**Universal** means that everyone (this can be reasonably restricted to *every adult citizen* of the country one is talking about implementing UBI in) must get it, regardless of employment situation. This is a very critical part: if you get a minimum wage job, you still get the UBI on top, so that it doesn't disincentivize work. It can make work *less* incentivized than before, but it can't *disincentivize* it the way many welfare programs work and create poverty traps.\n\n\n**Basic** means that it must be at least enough to reasonably live, whatever that number is. If you think someone needs 1500$/mo to live, then it must be at least that. However, there is a caveat here: I don't think it's unreasonable to expect people on UBI to make more reasonable living decisions, notably such as living somewhere cheaper than they'd like.\n\n\n**Income** means that it must be money, not coupons or food stamps or whatever. This is a crucial part, and is the part I consider most important to weird people and marginized groups: the less the Standard Majority Needs correspond to your needs, the less you'll get value out of being provided those; with money, instead, you're free to decide what it is you need and value.\n\n\nHere are some of my favorite UBI arguments:\n\n\n* **The Housing Crisis**: with UBI, people don't need to work in those giant, super-expensive cities anymore, because they don't need the jobs that are in them. UBI not only lifts a ton of people out of poverty and helps the homeless tremendously, but also lets people move out of the giant cities, thereby decreasing demand and possible making rents in those cities actually go down as a result. In addition, if UBI is a reasonable amount, it's not out of the question that people could purchase their own homes with loans paid with UBI; banks will love giving loans to people with an income that *is guaranteed to remain forever*, and eventually this could emancipate a lot of people out of depending on landlords.\n* **Healthcare**: notably [in the US](https://www.youtube.com/watch?v=U1TaL7OhveM), there are reasonable arguments to be made against either a public healthcare system or a private healthcare market. With UBI, we can get the best of both worlds: make the healthcare private market, but make everyone's UBI be enough to cover, on top of other living, a reasonable private health insurance. That way, those who want to opt out can do so, insurances and health companies are still incentivized to reduce prices to an extent (or more people might just use that money on something else), but everyone is guaranteed to be *able* to afford healthcare.\n* **Discrimination**: a large part of the cycle of racism (poverty → lesser education → lesser jobs → kids themselves in poverty) is the employment part. 
UBI not just helps by being redistributive, but also by making it okay to actually not have a job, and greatly easing the pressure poor people having with paying bills or food. If coupled with the housing crisis solution above, this could even lead to entire new integrated communities largely thriving on UBI.\n* **Government corruption and lobbyism**: if they have the UBI to afford it, the poor and middle class could participate a lot more in funding political parties and maybe even lobbyists; in that sense, UBI has the potential to realign even democracy with the interests of the people.\n* **Domestic abuse, etc**: all sorts of situations of this type where one party is dependent on another for money, are suddenly alleviated if both parties get free money enough to live.\n* **Exploitation/poor work conditions/poor wages**: UBI gives workers the ability to just refuse employment altogether, giving them tremendous negotiating ability in the labor market. Raising the price of labor would also accelerate automation, which we should want (and which, if we have UBI, is actually fine).\n* **Even Conservatives!**: with UBI, lifestyles such as the traditional nuclear family become easier to actually implement, as the wife can contribute to the family finances with her own UBI.\n* **Not Enough Enterpreneurship**: a lot more people would be able to start working on projects that might actually create value, and work on those with other people, if none of them needs an external source of funding. This also covers usually less profitable prospects, like artistic careers.\n* **Welfare But Better**: usual welfare systems are extremely inefficient and bureaucratically heavy. On the other hand, *just giving everyone a fixed pile of money* is extremely simple and easily implemented; with the same funding, a UBI scheme could thus result in actually a lot more money actually going to the general poor population.\n* **UBI is not that far away politically**: UBI is already a fairly popular idea in the field of economics (and even in the rationalist community). Really, all we need right now to get UBI implemented is popular support, i.e. you reading this right now, being in favor of it.\n\n\nBut, really, these are all somehow related to the least practical but, in my opinion, most important argument:\n\n\n* **The Philosophical Argument**: UBI is *philosophically* important because it makes the economy *intrinsically value people*. This importance of this cannot be stressed enough: under UBI, no longer are people valued just for their economic output in the same way any other market resource is, but people are actually *what markets become aligned to provide value for*. 
This is true emancipation: freedom from labor, the ability to pursue whatever it is you actually care about, without having to care about whether you can afford to survive.\n\n\nEdit 2021-02-24: Another, more complete list of argument can be found [here](https://www.reddit.com/r/BasicIncome/wiki/index).", "date_published": "2020-10-03T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "ed3d2f45088a7f18178bae2e7e80ba5e", "title": "Gender Bootstrappism", "url": "https://carado.moe/gender-bootstrappism.html", "source": "carado.moe", "source_type": "blog", "text": "Gender Bootstrappism\n--------------------\n\n\n(I define \"gender\" to mean \"the set of sociocultural characteristics historically associated with either sex\")\n\n\n(disclamer: this post may or may not be crippled by a terrible understanding of the topic at hand)\n\n\nI'm not a Gender Abolitionist, I'm a Gender Bootstrappist. What I mean by this is that I think gender should become its own cultural notion, separate from sex, even though that's where it started historically. Gender should still be able to be partaken of by people, in whichever manner they want (including not at all). But, one important aspect, is that *if* gender as a broad cultural notion is to be preserved (for it is, after all, *[Very Culture](word-report-2.html)*), then there will be *social expectations*; I don't think that's avoidable, and I think a reasonable amount of social expectations can be had without falling into straight-up discrimination. Having expectations at all are what allow the *subversion* of expectations, and also expectations are pretty [humancore](word-report-2.html).", "date_published": "2020-10-03T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c4beb2ddf2fd1f71bb77bbc953e44f98", "title": "Determining core values & existential self-determination", "url": "https://carado.moe/core-vals-exist-selfdet.html", "source": "carado.moe", "source_type": "blog", "text": "Determining core values & existential self-determination\n--------------------------------------------------------\n\n\n[Rationalism](https://www.readthesequences.com/) is about [epistemic rationality and instrumental rationality](https://www.readthesequences.com/What-Do-I-Mean-By-Rationality); but [when the two conflict, \"rationalists should win\"](https://www.readthesequences.com/Newcombs-Problem-And-Regret-Of-Rationality); so,\n\n\n\n> Instrumental rationality: systematically achieving your values.\n> \n> \n\n\nHow does one determine their core (axiomatic) values ? Here's how i do it: i start from what i think is my set of values, and then i extrapolate what would happen if a [superintelligent](https://en.wikipedia.org/wiki/Superintelligence) [singleton](https://en.wikipedia.org/wiki/Singleton_%28global_governance%29) tried to implement those values.\n\n\nGenerally, the result looks like hell, so i try to figure what went wrong and start again with a new set of values.\n\n\nFor example: imagine i think my only core value is general happiness. 
The most efficient way for a superintelligence to maximize that is to [rewire everyone's brain](https://wiki.lesswrong.com/wiki/Wireheading) to be in a constant state of bliss, and [turn as much of the universe as possible](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) into either more humans that experience constant bliss (whichever form of \"human\" is the cheapest resource-wise to produce) or into infrastructure that can be used to guarantee that nothing can ever risk damaging the current set of blissful humans.\n\n\nSo, clearly, this is wrong. The next step is freedom/self-determination; such that people can do whatever they want.\n\n\nHowever, the most efficient way to make sure people can do what they want is to make sure they don't want to do anything; that way, they can just do nothing all day, be happy with that, and some form of freedom is maximized.\n\n\nTo address this issue, my latest idea is to value something i'd like to call *exstential self-determination*: the freedom to *exist as you would normally have*. It's a very silly notion, of course; there is no meaninful \"normally\". But still, i feel like something *like that* would be core to making sure not just that existing people can do what they want, but that humankind's general ability to be original people who want to do things is not compromised.", "date_published": "2020-09-08T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "f8267dc4491fadfd4a4bbf03d3af3dfe", "title": "Cool linguistic purisms", "url": "https://carado.moe/linguistic-purisms.html", "source": "carado.moe", "source_type": "blog", "text": "Cool linguistic purisms\n-----------------------\n\n\nIn general, [linguistic purisms](https://en.wikipedia.org/wiki/Linguistic_purism) as enforced by states are bad [prescriptivisms](https://en.wikipedia.org/wiki/Linguistic_prescription); but, as [voluntary](https://en.wikipedia.org/wiki/Voluntaryism) [institutions](https://en.wikipedia.org/wiki/Institution), they can be pretty cool! Here are a couple i have an interest in.\n\n\n* [Anglish](https://anglish.fandom.com/wiki/What_is_Anglish%3F) is a form of modern english, but with no or less loanwords (notably, a large amount of modern english words come from latin or french); and, on the other hand, many old english words that didn't make it into modern english are transformed as if they did, in order to be used again. In addition, it is not uncommon for anglish texts to readopt older characters like the [\"þ\" and \"ð\"](https://en.wikipedia.org/wiki/Thorn_(letter)) to replace \"th\" respectively in [unvoiced and voiced](https://en.wikipedia.org/wiki/Voice_(phonetics)) places. [This cool youtube channel](https://www.youtube.com/channel/UCx85MCqN8g6urSgKwmdA52w/videos) has a bunch of great content on Anglish.\n* An idea I had was to make a version of japanese without [chinese pronunciations](https://en.wikipedia.org/wiki/Kanji%23On%27yomi_%28Sino-Japanese_reading%29); a name for such a language could be [大和](https://en.wikipedia.org/wiki/Names_of_Japan)[語](https://jisho.org/search/%E8%AA%9E%20%23kanji) (Yamatogata). idea apparently has been used [as the source of a conlang](https://web.archive.org/web/20080501052251/http://www.langmaker.com/db/mdl_baronh.htm) for the japanese space opera franchise *Crest of the Stars*. Perhaps an alternate writing system as well? 
There are some on [this page](https://omniglot.com/conscripts/natlangs.htm#japanese), but i rember coming across a PDF for a proposal of a writing system to replace just [kanji](https://en.wikipedia.org/wiki/Kanji), while [kana](https://en.wikipedia.org/wiki/Kana) would maintain its use; although i can't find it now.\n* I don't know much about [Icelandic linguistic purism](https://en.wikipedia.org/wiki/Linguistic_purism_in_Icelandic) but their [Mjölnir](https://en.wikipedia.org/wiki/Mj%C3%B6lnir)-inspired variant of the [icelandic flag](https://en.wikipedia.org/wiki/File:Flag_of_Iceland.svg) is [pretty cool](https://www.reddit.com/r/vexillology/comments/1jl6f6/high_icelandic_flag_created_to_symbolize/).", "date_published": "2020-07-18T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "4649bd3832d5f95c348cad863a07a9a8", "title": "Progress/decline in fields", "url": "https://carado.moe/progress-decline.html", "source": "carado.moe", "source_type": "blog", "text": "Progress/decline in fields\n--------------------------\n\n\n![](progress-decline.png)\n\n\n(inspired by [this talk](https://www.youtube.com/watch?v=pW-SOdj4Kkk))\n\n\n(for fun: the chart [overlaid](progress-decline-aeonics.png) with [Liber Kaos](https://en.wikipedia.org/wiki/Peter_J._Carroll)'s *Aeonics* chart)", "date_published": "2020-07-17T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "9e427426367adbd75c90b807716836b1", "title": "Song Pairs that can be listened to together", "url": "https://carado.moe/song-pairs.html", "source": "carado.moe", "source_type": "blog", "text": "Song Pairs that can be listened to together\n-------------------------------------------\n\n\n* From Nier: Song of the Ancients [Devola](https://www.youtube.com/watch?v=qCKEXPXtrEU) + [Popola](https://www.youtube.com/watch?v=IZdnJLdmRlI) = [both](https://www.youtube.com/watch?v=3nDNr2qb3nM) (see also: [Fate](https://www.youtube.com/watch?v=ady--PNMsfI))\n* From the anime Cross Ange: [Hikari no uta](https://www.youtube.com/watch?v=P_f0Q6QbHJY) + [Kaze no uta](https://www.youtube.com/watch?v=zRj14REjU_k) = [El Ragna](https://www.youtube.com/watch?v=4QHU5s9gZlA)\n* Vocaloid songs [Paradichlorobenzene](https://www.youtube.com/watch?v=TeVhHLggZ5U) + [Antichlorobenzene](https://www.youtube.com/watch?v=vfkn9FvjH90) = [both](https://www.youtube.com/watch?v=hSGyuNBo3QE)\n\n\nFeel free to inform me of more.", "date_published": "2020-06-27T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "372495a7613ffa58845b8411bd377f31", "title": "Word Report #1", "url": "https://carado.moe/word-report-1.html", "source": "carado.moe", "source_type": "blog", "text": "Word Report #1\n--------------\n\n\nWord Report will be a series of posts in which I document uncommon terms I'm using. Unless stated otherwise, they are of my own invention.\n\n\n* **Topia**: (based on Utopia/Dystopia) a society in which a particular vision is fully realized, without specifying if that society is good or bad.\n* **Ancapolite**: \"Polite anarcho-capitalism\"; a society within an anarcho-capitalist framework where social and contractual norms and expectations are maintained by mutual social good-will, pressure, and other existing cultural systems.\n* **Seme**: a piece of meaning/semantics. Compared to the related word [sememe](https://en.wiktionary.org/wiki/sememe#English), a seme doesn't have to be smallest/atomic.\n* \"**&adj**\": a suffix to mean \"and adjacent\". 
For example, \"4chan&adj\" means 4chan and adjacent websites/cultures, such as 2chan, 8chan, some reddits, weeb culture, the alt-right, etc.\n* **Postnmodern**: a cultural movement designated by the term \"modern\" prefixed by \"post-\" *n* times. post⁰modernism is modernism, post¹modernism is post-modernism, post²modernism is post-post-modernism, post-even-modernism is postnmodernism where n is an even number, etc.\n* I've been using the word \"be\" to indicate being while leaving tense explicitely unspecified. \"I be here\" does not specify whether I am, was, or will be here. Can be interpreted as related to the meme phrase \"It do be like that\".\n* Similarly, I've been using the form \"I have cat\" to indicate my ownership of either a singular cat or plurality of cats, explicitely leaving number unspecified (not \"I have a cat\" nor \"I have cats\").", "date_published": "2020-06-01T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "9cf92704f6c353beb6aa1c9b2faa75be", "title": "the Economic Compass", "url": "https://carado.moe/economic-compass.html", "source": "carado.moe", "source_type": "blog", "text": "the Economic Compass\n--------------------\n\n\n![](economic-compass.png)\n\n\nthe planned-liberal axis represents how much an economic system operation is more intentionally planned, or more left to its own \"organic\" activity.\n\n\nthe sovereignty-consensus axis represents whether economic agents have the ability to have control and freedom over some local dominion, or whether their activity is constrained by outside rules determined by society as a whole.\n\n\n(2020-09-03 edit: renamed \"statism\" to \"planned\" (and \"liberalism\" to \"liberal\"))\n\n\n(2021-04-29 edit: added descriptions for the axes)", "date_published": "2020-04-29T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "0439bbbaa8d1b5eab315f7a437078120", "title": "Limiting Real Universes", "url": "https://carado.moe/limiting-real-universes.html", "source": "carado.moe", "source_type": "blog", "text": "(2020-04-27 edit: actually Greg Egan already made this argument previously ([see Q5 if you have read Permutation City](https://www.gregegan.net/PERMUTATION/FAQ/FAQ.html)))\n\n\n(2021-04-28 edit: this post might not be the best job at explaining its idea; see an alternate explanation in [this other post](quantum-suicide.html))\n\n\nLimiting Real Universes\n-----------------------\n\n\nThe following is an argument for thinking that the set of universes that can \"be real\" (what that means is covered) is limited.\n\n\nNotably, not all [Tegmark 4](https://space.mit.edu/home/tegmark/crazy.html) (i.e. mathematically possible universes) are real, nor even all considerable states of universes based on our current physics.\n\n\n### I. 
Limiting Many-Worlds\n\n\nA universe being \"real\" is defined here as \"one could observe being in it\".\n\n\nSuppose all possible configurations of particles under our current laws of physics are *real*.\n\n\nThen, out of all the universes that contain an exact physical copy of you, the vast majority of them should be universes that *do not* descend from a coherent history and thus everything that surrounds the copy of you should look like random particle soup.\n\n\n(If such a distinction even makes sense), then You cannot tell if you're \"the original\" that comes from a coherent history or if you were \"just created\" as-is, because your memory could also \"just have been created\" as-is.\n\n\nYet, when you look around, everything looks very coherent.\n\n\nTherefore, either you're *extremely* lucky, or only universe-states that descend from a coherent history are *real*. As per bayesianism, you should think the latter.\n\n\n### II. Limiting Computing Ability\n\n\nSuppose all universes based on our current physics, but with arbitrary amounts of \"computing power\" (i.e. how of its stuff can be turned into computers i.e. how much it has stuff) are \"real\".\n\n\nThen some of those universes would end up making simulations of random universe-states, some of which happen to contain an exact copy of you.\n\n\nHowever, if that were possible, because of the number of amounts of computing powers possible, *you* should be more likely to exist in one of these randomly created simulations with a universe with much more computing power than ours.\n\n\nYet, when you look around, everything looks very coherent.\n\n\nTherefore, there must be *some* limit on the amount of computing power universes can have; and then, so that the sheer number of these universe can't compete with the meagre set of history-coherent from which our reality descends, there must either be a limit in the number of initial configurations other universes can have, or on the total computing power allocated to all universes.\n\n\nEven better: suppose all states of [Conway's Game of Life](https://en.wikipedia.org/wiki/Conway's_Game_of_Life) are *real*. Then out of this infinity, a smaller infinity should happen to be running perfect simulations of subsets of this universe that happen to have you in them. But, reusing my argument again, you observe probably not being in them; therefore there must be a limit on the amount of computing power *even universes with other physics* have (and, again, a limit on either the number of configurations these other universes would be in, or the total computing power allocated to them all collectively).\n\n\n### III. 
Conclusion\n\n\nAt this point it seems easier to *just assume that only universe-states that descend from our history* exist, or at least that the number of such histories is limited.\n\n\nThat certainly seems simpler than imagining there being a set of various systems by which other universes with other rules of physics would have a fixed amount of computing power allocated amongst themselves.\n\n\nNot that all of this is mostly true even if you're a dualist: even if you have a soul (or equivalent), if there were infinite universes with infinite computing power, there's no reason the soul of *you reading this right now* should happen to be the soul of the original you and not a soul \"created in a just-then created universe-state\", unless you also assume complex soul mechanics.", "date_published": "2020-04-26T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "dbdfa8e2898ce73ed90fe39f83381a9f", "title": "A Collection Of Compasses", "url": "https://carado.moe/compasses.html", "source": "carado.moe", "source_type": "blog", "text": "A Collection Of Compasses\n-------------------------\n\n\nA collection of interesting compasses I have found on the internet, or made myself.\n\n\n* [the classic Political Compass](https://www.politicalcompass.org/analysis2)\n* [Conflict In Literature](http://www.incidentalcomics.com/2014/05/conflict-in-literature.html)\n* [@BPD\\_GOD](https://twitter.com/BPD_GOD)'s [futures compass](bpd_god-futures.png)\n* [Nesterov's compass](https://twitter.com/yeojaphd/status/1028786592113143809)\n* [my Belief In Society compass](belief-in-society.html)\n* [another future compass (source unknown)](tyranny-compass.jpg)\n* [Digibro's Neurotyping Chart](https://www.youtube.com/watch?v=FyTlzvnNCQ8)\n* [the Nebulo-Complexity compass](https://twitter.com/chaosprime/status/1251254988669628421)\n* [my Economic Compass](economic-compass.html)\n* [@ShamanicDeleuze](https://twitter.com/ShamanicDeleuze)'s [anomalous apocalypse compass](anomalous-apocalypse.jpg)\n\n\n*(last updated: 2020-05-31)*", "date_published": "2020-04-15T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "49b8e9b87d0a42fa51bf36540efff314", "title": "Book Review: 12 Rules For Life", "url": "https://carado.moe/12-rules-for-life.html", "source": "carado.moe", "source_type": "blog", "text": "Book Review: 12 Rules For Life\n------------------------------\n\n\nJordan Peterson's *12 Rules For Life: An Antidote To Chaos* is a burger of self-help life advice.\n\n\nThe core meat is mostly good advice; everyone will find some of the rules obvious but not others, but which ones those are will likely vary a lot; which if anything gives the book more points.\n\n\nHowever, the advice is surrounded by two buns; one above and one below. I have big issues with both. Let's start with the bottom one.\n\n\n### Culture\n\n\nIt has become kind of a meme to mention lobsters in relation to Jordan Peterson. In the book they feature as evidence that competition and hierarchies have been hard coded into fauna for hundreds of millions of years. But some interviews and internet shitposting later, and lobsters are just *The Peterson Meme* now.\n\n\nWhat would he think about it ? Oh, he'd just *love* for that to be deeply meaningful, wouldn't he ? But the lobster meme is exactly that: a meme. If you look inside, you won't find a core essence that fundamentally says something about human nature. 
If you look inside, you'll find a concept graph that has high memetic value.\n\n\nThough Peterson seems to have read *The Selfish Gene*, he does not seem to have gotten the point about memes. They're replicators too. Something becoming a meme, and then crystallizing into culture, is not evidence that the concept helps us; it's only evidence that the concept helps itself.\n\n\nDo religious texts, such as the bible he likes to cite and analyze to death, carry deep meaning about the human condition ? Maybe so. But I suspect that when you analyze something for decades, as he says he has, then eventually your bruteforcing is gonna find something meaningful, and you'll become persuaded that this meaning is the core, canonical, intended interpretation. [The human brain is very good at tricking itself](https://www.readthesequences.com/); in fact, it has [evolved to do exactly that](https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/): persuade itself that the culture it has inherited makes sense, at all costs.\n\n\nThis was a useful piece of evolution for humans; it's how we survived so many environments and developed such complex social structures. But today we have actual engineering; we have the very new ability to *intently make stuff up*. Which brings me to the upper bun.\n\n\n### Where do we go\n\n\nPeterson is, and rightfully so, terrified of utopian thinking. It's very scary. If you try to design a system by which all humans will have to live forever, then you're probably going to fail. See nazi germany and the soviet union, which Peterson loves to bring up; especially the soviet union, because he's right-wing aligned so it's easier for him to point out what's wrong in extreme authoritarian leftism. Fair enough. (Peterson also loves saying \"fair enough\").\n\n\nThe issue is that we don't have a choice.\n\n\nNot only is the modern world already highly engineered (look at neoliberal economic policy) but eventually, [someone is gonna make AGI](https://slatestarcodex.com/2020/01/30/book-review-human-compatible/), and whatever the AGI wants is what we'll live by forever. Hopefully we can [come up with something reasonable](topia-layer-0.html), but we have to come up with *something*, or become paperclips. Or worse.\n\n\nAs for the individual's direction, 12 Rules For Life aims to make you into a functional, social, reasonable adult. I'd like to offer an alternative, here, which for a lack of better term I'll call *[Digibro](digibroism.ogg)ism*.\n\n\nI'm weird. I want to be weird. I want more other people to be weird, and few things make me as happy as finding out someone else is weird.\n\n\nWeirdness [creates deep culture](https://trialofthegoldenwitch.bandcamp.com/track/for-stella-the-magic). Do you want to be [Jennifer Diane Reitz](https://www.youtube.com/watch?v=rOn-gSTsD7k), or [Toby Fox](https://www.youtube.com/watch?v=dwampY_jIdg), or [a discordian](https://principiadiscordia.com/book/7.php), or anyone that features on [The Dick Show](https://thedickshow.com/) including Dick himself ? Or do you want to be just another efficient, functional cog in whatever the status quo of the day is ?\n\n\nDo you want to just enjoy the system, as a highly functional cog rewarded for its efficiency, and blame people for their own problems as Peterson likes to do, or do you want to look forward, invent solutions, suggest better worlds and work towards one ?\n\n\nPeople, to an extent, are responsible for their own problems. But to another much larger extent, *they aren't*. 
That's what civilization is about. Malaria isn't solved by giving solid self-help advice to people with malaria so they take their life into their hands and get meaning from god. Malaria is solved by having eccentric geniuses innovate weird new ways to fight diseases. Even if their house isn't in order. You don't have time to clean your house, there's Malaria to solve !\n\n\n### What to keep\n\n\nThe 12 rules themselves, as I've mentioned, are alright; and their rationale is somewhat helpful to read, to grasp the concept. That said, I think you can get the gist of the advice just by reading [the 12 rules themselves](https://en.wikipedia.org/wiki/12_Rules_for_Life#Description) without their explanation. And if you don't know where you're going in life, sure, maybe read the book and you'll get some useful advice about pursuing meaning. But I know where I'm going; and I'd like to think most of us at least have some idea.\n\n\nAnd don't forget that the universe is not narrative, there is no god, quarks don't care about humans. [You can't derive *ought* from *is*](https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem); you can't think hard enough that you obtain purpose. Go out, and find something you like, and make that your meaning. Maybe that's *a book about how to find meaning*. More likely it's not as meta and it's an actual thing in the world that you care about.", "date_published": "2020-03-30T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "7e0f30d4dfcf1bc6746e95bc82e40932", "title": "Topia: Layer 0", "url": "https://carado.moe/topia-layer-0.html", "source": "carado.moe", "source_type": "blog", "text": "*(2020-11-15 edit: this post is now largely superceded by [Two Principles For Topia](two-principles-for-topia.html))*\n\n\nTopia: Layer 0\n--------------\n\n\nIn a similar way to the [Hierarchy of Needs](https://en.wikipedia.org/wiki/Maslow's_hierarchy_of_needs), I have been thinking about what post-singularity utopia we would want in term of layers.\n\n\nI want a Layer 0, a universal set of guarantees that apply to everybody; so that, on top of that, people can build voluntary societies and sub-societies as far as they want. Ideally, their societies would be mutually compatible; one could partake of multiple societies and have friends in both. But they wouldn't have to be. It's all dependent on what society you want to join.\n\n\nBut, I think we do need a universal Layer 0. One that at least makes the singularity AI prevent other AIs from emerging and [turning everyone into paperclips](https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer) or other existential risks. As this is the layer that applies universally, we want it as thin as possible, so that societies built on top of it have as much freedom to implement whatever they want; in particular, almost everything mentioned here can *be opted out of* (such as when joining a social contract). 
It's just your starting kit.\n\n\nThese should be the universal guarantees that everyone starts with: physical safety and basic living ressources.\n\n\nFor the physical safety, it's fairly easy to think of telling the singularity AGI to implement what I like to call NAPnobots — nanobots that are omnipresent in physical reality (and would manifest as mandatory added laws of physics to virtual realities) and enforce the [NAP](https://en.wikipedia.org/wiki/Non-aggression_principle); that is, prevent people and their physical property from being subjected to aggression without their consent (\"without their consent\" could be a tricky part — also, should people be able to *permanently* opt out of some NAP violation protections ?).\n\n\nYou want to create a society in which it's fine to punch each other in the face ? That's fine with me. All I ask is that that society be purely opt-in.\n\n\nYou want to create a communist utopia in which all belonging are shared ? That's fine with me. Just create a voluntarily contract where people consent to pooling their properties together for shared use.\n\n\nYou want the freedom to hurt yourself ? Just consent to being hurt by yourself. You want the freedom to hurt non-consenting others ? *No.* That's my personal opinion, of course, but I do think the maximum reach of a social contract should be that it can't force others into joining it, and I hope everyone else can agree that at least requiring one's consent to partaking of interaction should be a guaranteed absolute. On the other hand, I would definitely consent, personally, to very large ranges of interactions with at least my friends. They can punch me if they want; I trust them to not do that, and if my trust is betrayed beyond what I consider reasonable, I can always unconsent.\n\n\nAlso, since the brain evaluates every input it receives, this includes for the most part having to consent to any form of communication; rember that unconsented advertising *is* nigh-literally rape.\n\n\nThe second part, basic living ressources, is a concern of economics. Unless we manage to escape to universes where ressources not only are infinite, but can be accessed faster than humans can appear (which may become as simply as duplicating a running process on a computer), it requires among other things limiting the number of humans that can exist, or you run into [malthusian traps](https://slatestarcodex.com/2014/07/30/meditations-on-moloch/).\n\n\nThe best way I can think of doing this is: when the singularity starts, give everyone enough basic assets that the dividends that can be generated from them are easy to live off of. Then, for people to have a child, they need to acquire a number of assets that will provably generate the same amount of dividends for the child, and give those to him. On top of this arbitrarily complex liberal contracts can be established, of course; you can have a socialist society where everyone has consented to their ressources being taxed by exactly how much giving basic living assets to the new kids costs (kids which won't themselves be taxed in that way unless they consent to in turn join that society). 
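(A rough, purely illustrative calculation of the asset-endowment scheme described above; the living cost and dividend yield are numbers I made up for the example.)

```python
# hypothetical numbers, only to make the scheme concrete
monthly_basic_living = 1_500   # whatever "basic living" is taken to cost
annual_dividend_yield = 0.03   # assumed sustainable yield on the assets

# assets needed so that dividends alone cover basic living;
# also what would have to be set aside before having a child
assets_per_person = monthly_basic_living * 12 / annual_dividend_yield
print(assets_per_person)  # 600000.0
```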
There is the issue of what amount of ressources constitutes basic living, as well as if there is a cap on the amount of unviolable property a person or group can have — can an environmentalist group very quickly just plant a flag on all nearby useful planets (as to declare them their unviolable property) and then forever refuse for them to be interacted with (as the NAPnobots will enforce) ?\n\n\nThe choice of living ressources should be included: if what we choose to eat has as much of an effect on our psyche as we are starting to find out it does, then choosing what configuration of nutrients we receive should part of the basic living guarantees.\n\n\nOne of those basic ressources of course is healthcare, and healthcare should include guaranteed immortality unless you opt out of it. There's just no particular reason old age should have any more right to hurt a non-consenting person than any *other* outside aggressor; \"outside\" because people are their brain's information system, not their body. Becoming a virtual person would be the \"easy\" solution to immortality.", "date_published": "2020-03-29T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "3a026f5dd802bc65c20c81206355df0a", "title": "the Belief In Society compass", "url": "https://carado.moe/belief-in-society.html", "source": "carado.moe", "source_type": "blog", "text": "the Belief In Society compass\n-----------------------------", "date_published": "2020-03-29T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "2ee9f7b9bfc95b1904b86602927a231f", "title": "On Economics", "url": "https://carado.moe/on-economics.html", "source": "carado.moe", "source_type": "blog", "text": "On Economics\n------------\n\n\n\n\n| | |\n| --- | --- |\n| Like a lot of millenials, especially american millenials, I have spent years of my youth entertaining the notion of anarcho-**communism**. | Like a lot of millenials, especially american millenials, I have spent years of my youth entertaining the notion of anarcho-**capitalism**. |\n| Although since them I have admitted that \\*some\\* amount of state is necessary (and hence I have given up on the anarcho- part), I still find myself defending **socialism**; however, there is always a lot of resistance to this. What makes it that we still can't agree on economics ? | Although since them I have admitted that \\*some\\* amount of state is necessary (and hence I have given up on the anarcho- part), I still find myself defending **capitalism**; however, there is always a lot of resistance to this. What makes it that we still can't agree on economics ? |\n\n\n### We live in a society\n\n\n\n\n| | |\n| --- | --- |\n| We can't discuss economics without mentioning the one group that has let **capitalism** inexorably gain ground on the overton window: | We can't discuss economics without mentioning the one group that has let **socialism** inexorably gain ground on the overton window: |\n| Liberals. | Liberals. |\n| Blinded by short-term thinking, liberals keep promoting **capitalism** at the cost of a sustainable future. This, of course, plays in the hands of **capitalists**, whose main intent is to steal people's hard-earned value for themselves. | Blinded by short-term thinking, liberals keep promoting **socialism** at the cost of a sustainable future. This, of course, plays in the hands of **socialists**, whose main intent is to steal people's hard-earned value for themselves. 
|\n| In my opinion, it's all too convenient to be a coincidence; in fact, if \\*I\\* were a **capitalist**, I would try my hardest to use the positions of power that **capitalists** now occupy to slowly change public opinion in my favor. Not to mention, of course, the corruption of government, which has been pressured into implementing **capitalist** policies one after another. | In my opinion, it's all too convenient to be a coincidence; in fact, if \\*I\\* were a **socialist**, I would try my hardest to use the positions of power that **socialists** now occupy to slowly change public opinion in my favor. Not to mention, of course, the corruption of government, which has been pressured into implementing **socialist** policies one after another. |\n| I could from this point argue the thousand times rehashed arguments about which system leads to a better world for everyone; despite how obvious it is that no sustainable system can be built with improper incentives. | I could from this point argue the thousand times rehashed arguments about which system leads to a better world for everyone; despite how obvious it is that no sustainable system can be built with improper incentives. |\n\n\n### A moral argument\n\n\n\n\n| | |\n| --- | --- |\n| Indeed, the incentives of **capitalism** are so terribly misaligned that the people who take advantage of others keep getting rewarded the most, over those who actually contribute value. An economy based on people being pitted against one another has no hope of creating a prosperous society the way real, voluntary cooperation can, as **socialism** allows. | Indeed, the incentives of **socialism** are so terribly misaligned that the people who take advantage of others keep getting rewarded the most, over those who actually contribute value. An economy based on people being pitted against one another has no hope of creating a prosperous society the way real, voluntary cooperation can, as **capitalism** allows. |\n| But in this post, I want to address the core moral value which has led me to hold **socialism** ideals so dearly: | But in this post, I want to address the core moral value which has led me to hold **capitalism** ideals so dearly: |\n| Freedom. | Freedom. |\n| The reason **capitalism** fundamentally destroys freedom is simple: when you put central authorities in charge instead of letting people make their own decisions, people are bound to be forced to abandon what they love and just do terrible labor all day under terrible conditions instead. Indeed, this is what we see in every instance of **capitalism** being implemented. | The reason **socialism** fundamentally destroys freedom is simple: when you put central authorities in charge instead of letting people make their own decisions, people are bound to be forced to abandon what they love and just do terrible labor all day under terrible conditions instead. Indeed, this is what we see in every instance of **socialism** being implemented. |\n| Now, some **capitalists** will \\*claim\\* to be on the bottom half of the political compass, but of course we all know that's unreasonable. No one would voluntarily partake of **capitalism** if they weren't forced or brainwashed into doing so by an authoritarian state. In fact, the very reason that anyone survives **capitalist** societies is the emergence of smaller, local, bottom-up **socialist** structures to fill in the gaps left by the status quo. 
| Now, some **socialists** will \*claim\* to be on the bottom half of the political compass, but of course we all know that's unreasonable. No one would voluntarily partake of **socialism** if they weren't forced or brainwashed into doing so by an authoritarian state. In fact, the very reason that anyone survives **socialist** societies is the emergence of smaller, local, bottom-up **capitalist** structures to fill in the gaps left by the status quo. |\n| Even in the parts of the system that genuinely take the shape of **capitalism**, the reason anything works is a tiny set of people in power going out of their way to ignore the incentives that surround them and help the people instead. | Even in the parts of the system that genuinely take the shape of **socialism**, the reason anything works is a tiny set of people in power going out of their way to ignore the incentives that surround them and help the people instead. |\n| However, patching can only go so far. How many tens of millions of lives will have to be destroyed by **capitalism** before the idea is abandoned ? | However, patching can only go so far. How many tens of millions of lives will have to be destroyed by **socialism** before the idea is abandoned ? |\n\n\n### The way forward\n\n\n\n\n| | |\n| --- | --- |\n| The obvious path forward in these trying times is to reverse away from **capitalism** and start implementing some real, reasonable policies; but also, and perhaps even more importantly, we need to \*educate people\*. | The obvious path forward in these trying times is to reverse away from **socialism** and start implementing some real, reasonable policies; but also, and perhaps even more importantly, we need to \*educate people\*. |\n| Every day, the great work of prominent figures on sites like YouTube helps educate thousands about the ravages that **capitalism** causes to this day, and what solutions are available right now to fix some of the low-hanging fruit issues like widespread poverty, government corruption, unaffordable healthcare, terrible education, and so on. | Every day, the great work of prominent figures on sites like YouTube helps educate thousands about the ravages that **socialism** causes to this day, and what solutions are available right now to fix some of the low-hanging fruit issues like widespread poverty, government corruption, unaffordable healthcare, terrible education, and so on. |\n| I believe we want the same thing as centrists, at the end of the day: a free and fair world where individuals can pursue happiness. Don't let evil get hold of them. **Socialism** will triumph, eventually; it's on the side of Good! | I believe we want the same thing as centrists, at the end of the day: a free and fair world where individuals can pursue happiness. Don't let evil get hold of them. **Capitalism** will triumph, eventually; it's on the side of Good! 
|", "date_published": "2020-03-28T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "e69cad99c2490a198dce0b6322b38317", "title": "KOLSITAN, a tiny video game", "url": "https://carado.moe/kolsitan.html", "source": "carado.moe", "source_type": "blog", "text": "KOLSITAN, a tiny video game\n---------------------------\n\n\n[**KOLSITAN**](kolsitan) is a tiny video game I made in about ten days, with music by [Chiaro](https://chiaro.bandcamp.com/).\n\n\nYou can access the source code and the various resources used [here](kolsitan/kolsitan.zip); there's also [a trailer](kolsitan-trailer.webm).\n\n\nYou probably need an up to date browser (with WASM support) to play it. \nIf you can't use the Backspace key to erase letters, the Delete key works too.\n\n\n**The game can be played [here](kolsitan).**", "date_published": "2019-12-31T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "c9d85dff528ab5d8c1d5951621632374", "title": "Building The Castle vs Finding The Monolith • carado.moe\n", "url": "https://carado.moe/castle-monolith.html", "source": "carado.moe", "source_type": "blog", "text": "*Building The Castle* vs *Finding The Monolith*\n-----------------------------------------------\n\n\nI think there are two main ways to create art.\n\n\nOne is *building the castle*: you assemble together parts and see how they match and build towards a finished project that is made of many mutually coherent parts. A perfect work made that way is a castle where every piece perfectly fits with every other piece.\n\n\nThe other is *finding the monolith*: you have a vision, and some notion that somewhere lies a perfect, canonical implementation of that vision, and you look for the work that would implement that vision up to its logical and canonical conclusion. A perfect work made that way is when you find the monolith that perfectly renders what the vision leads to.\n\n\nYou can use a mixture of both of course, but I believe you have to use at least one or the other.\n\n\nThat's it.", "date_published": "2019-08-26T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "0407242415943af9ccee2cec61e7fdda", "title": "Some post- words for the future", "url": "https://carado.moe/post-words.html", "source": "carado.moe", "source_type": "blog", "text": "Some post- words for the future\n-------------------------------\n\n\n### Post-geometry\n\n\nI would like to coin the world *post-geometry* to mean any setting which isn't mainly a euclidean or mostly-euclidean, typically three-dimensional space.\n\n\nFor example, any place IRL is geometric, but websites are mostly post-geometry; the interconnected graph of hypertext documents don't typically form a euclidean geometry.\n\n\nThe term will become particularly useful when we live more in more not in physical but in virtual places, and there will tend to be personal preferences for more geometric settings vs more abstract, \"post-geometry\" settings.\n\n\nThe \"post\" in \"post-geometry\" implies that this is/will be seen as an old-school type of thinking, as in the past geometric places were all we had, when we were living exclusively IRL.\n\n\nPersonal stance: I think I'd like to keep with geometric places, virtual or not, for the time being. I dunno. 
I guess I'll see how I feel about it when nervegear comes around.\n\n\n### Post-aggression\n\n\nPost-aggression means any world in which all forms of physical aggression on individuals, and perhaps on their physical private property, are virtually non-existent.\n\n\nIt may or may not include aggression on virtual property and/or intellectual property.\n\n\nPost-aggression is a very liberal concept; it's related to the notion of the [*NAP*](https://en.wikipedia.org/wiki/Non-aggression_principle); but the NAP generally implies some form of anarcho-capitalist-style enforcement by threat of violence, whereas post-aggression can be achieved in other ways as well, such as omnipresent invulnerability-granting nanomachines like in [17776](https://en.wikipedia.org/wiki/17776).\n\n\nPersonal stance: post-aggression would *mostly* be great, but maybe it doesn't have to be *absolutely* enforced. Maybe some amount of discomfort is to be expected, as well as some amount of theft, such as taxes used for democratically determined public goods. As for intellectual property, I personally don't consider infringement on it to be aggression, as I'm an intellectual property abolitionist.\n\n\n### Post-human\n\n\nPost-human means exploring beyond notions of what it means to be human, mostly in lifestyle choices. Living in post-geometry settings would be a great example of post-humanism, as would be uploading one's mind into a non-humanoid body.\n\n\nThis term is to be opposed to \"human-core\" — the deliberate conservation of human values and culture, like being walking, thinking meat that eats food and experiences pain and pleasure.\n\n\nPersonal stance: for the moment, I'd just like to be *very cautious* about post-humanism.", "date_published": "2019-07-27T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "12487e19ff13d49230f408d07b585869", "title": "Semantics: Primes and Universals, a book review", "url": "https://carado.moe/spu-review.html", "source": "carado.moe", "source_type": "blog", "text": "*Semantics: Primes and Universals*, a book review\n-------------------------------------------------\n\n\n### 1. State of the industry\n\n\n\n```\nAmazement:\n X feels something\n sometimes a person thinks something like this:\n something is happening now\n I didn't know before now: this can happen\n I want to know more about it\n because of this, this person feels something\n X feels something like this\n\n```\n\nSome people, when confronted with a problem, think \"I know, I'll use neural networks.\"\n\n\nAs *Semantics: Primes and Universals* (\"SPU\") itself recounts in its first chapter — a short history of the study of semantics in linguistics — the academic approach to understanding semantics seems to have mostly been similar to what we've been doing in computer science to handle language: throw fuzziness at it until it somehow works itself out.\n\n\nIn linguistics, that fuzziness has apparently ranged from \"semantics is too hard, just don't study it\" to trying to apply [prototype theory](https://en.wikipedia.org/wiki/Prototype_theory) to everything. That theory, in short, makes the claim that a concept is a set of ideal attributes — for \"bird\", whatever we think of when we think of a bird — and that specific instances match a given concept up to a certain degree. 
Some creatures are extremely bird, some creatures are somewhat bird, some creatures are not very bird at all, etc…\n\n\nIn computer science, that fuzziness has expressed itself mostly along the lines of the \"it's too hard, don't try\" approach. Indeed, we tend to just throw artificial neural networks at linguistic problems (and many other problems) and wait for the whole fuzzy system to approach a working solution.\n\n\nNeural networks aren't magic; they find algorithms, or processes of some form, which solve the problem. Oftentimes, such processes are considered too complex for people to try and understand them; but does that approach have to apply to the understanding of language ?\n\n\nNot necessarily so; or so [Anna Wierzbicka](https://en.wikipedia.org/wiki/Anna_Wierzbicka) has been claiming since the 1970s. In developing her *[Natural Semantic Metalanguage](https://en.wikipedia.org/wiki/Natural_semantic_metalanguage)* (\"NSM\") from her first book in 1972, through the one I'm currently reviewing from 1996, and up to recent years (the last version of the NSM is from 2017), she has been making two fairly strong postulates:\n\n\n1. All human languages share a common semantic core of (for now) less than a hundred primitive human concepts, and for each of them a number of primitive syntactic frames,\n2. Combining these primitives using their respective syntactic frames, we can define all words of all human languages; and in fact, supposedly any expressible human concept.\n\n\nBoth because of my interest in AI and human cognition, and because of my interest in —especially oligosynthetic— constructed languages, the sheer discreteness of the whole project has been to me a refreshing ray of hope that, in this world where the approach to everything seems to have become nihilistic fuzzyism, maybe language, at least, can be modeled and formalized by people in a rigorous manner.\n\n\n### 2. What NSM is made of\n\n\n\n```\nSky:\n something very big\n people can see it\n people can think like this about this something:\n it is a place\n it is above all other places\n it is far from people\n\n```\n\nI won't list here all 65 semantic primitives; a table of them can be found at [this page](https://intranet.secure.griffith.edu.au/schools-departments/natural-semantic-metalanguage/what-is-nsm) (the latest version is currently the linked PDF chart). But in the book, Wierzbicka justifies each semantic primitive with three sources of evidence:\n\n\n1. Whether the concept can be expressed using a combination of the other primitives\n2. How universal the concept is amongst languages (which SPU makes sure to explore a wide variety of)\n3. How early children seem to acquire the concept\n\n\nSome of her justifications aren't entirely convincing, such as the inclusion of the supposedly basic concepts `A LONG TIME` and `A SHORT TIME`; but they seem to have managed to stay in NSM up to this very day, and considering the ambition of the whole project, the seeming soundness of most of the NSM is impressive enough that I think the whole project should be taken seriously.\n\n\nThese semantic primitives (14 in her first publications, 55 in the book, and 65 in the latest version of the NSM) are meant to be the basic building blocks of all human concepts; supposedly, even, *human thought*. 
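To make this compositional claim concrete, here is a toy sketch (not from the book; the primitive inventory and the explications below are simplified stand-ins): represent each concept's explication by the set of terms it is written in, allow only primitives or previously explicated concepts as terms, and check mechanically that every definition bottoms out in primitives rather than running in a circle, the way dictionary definitions often do.

```
# A toy model of the NSM claim (simplified; this is not the actual primitive
# inventory, nor are these actual explications): every concept is explicated
# using only semantic primitives, or concepts that have themselves been
# explicated, so definitions bottom out in primitives instead of being circular.

PRIMITIVES = {
    "SOMETHING", "PEOPLE", "CAN", "SEE", "THINK", "VERY", "BIG", "PLACE",
    "ABOVE", "ALL", "OTHER", "FAR", "THIS", "LIKE", "WHEN", "MANY", "GROW",
}

# Hypothetical, simplified explications: each concept maps to the set of
# terms its explication is written in.
EXPLICATIONS = {
    "SKY":   {"SOMETHING", "VERY", "BIG", "PEOPLE", "CAN", "SEE", "PLACE",
              "ABOVE", "ALL", "OTHER", "FAR"},
    "GREEN": {"MANY", "GROW", "PLACE", "WHEN", "PEOPLE", "SEE", "THINK",
              "LIKE", "THIS"},
    # a concept may also build on an already-explicated concept:
    "BLUE":  {"PEOPLE", "THINK", "LIKE", "SKY", "WHEN", "SEE", "THIS"},
}

def check_well_founded(explications, primitives):
    """True iff every explication uses only primitives, or concepts that can
    themselves be reduced to primitives (i.e. no circular definitions)."""
    resolved = set(primitives)
    pending = dict(explications)
    while pending:
        ready = [c for c, terms in pending.items() if terms <= resolved]
        if not ready:
            return False  # the remaining concepts only define one another
        for c in ready:
            resolved.add(c)
            del pending[c]
    return True

assert check_well_founded(EXPLICATIONS, PRIMITIVES)
# a pair of ordinary dictionary entries that define each other fails the check:
assert not check_well_founded({"FATE": {"DESTINY"}, "DESTINY": {"FATE"}}, PRIMITIVES)
```

That failure mode, circular chains of dictionary definitions, is exactly what section 4 below comes back to.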
As a minimalist core of its field, NSM would fulfill the same purpose as λ-calculus in functional programming or Turing machines in imperative programming.\n\n\nNSM also includes, for each of the concepts, a fairly limited set of syntactic frames, together forming the \"NSM Grammar\"; although that list was still in its early stage in SPU, it should be looked into, as not all grammatical structures are universal and, hence, deemed essential to NSM. That said, the chart linked above contains a series of example frames under each word, such as:\n\n\n\n```\nTHINK\n someone thinks about someone else/something\n someone thinks something good/bad about someone else/something\n someone thinks like this: “…”\n many people think like this: “…”\n\n```\n\n### 3. `KIND` & `LIKE`, `GOOD` & `BAD`, and Color Terms\n\n\n\n```\nX is green:\n in some places many things grow out of the ground\n when one sees things like X one can think of this\n\n```\n\nThe presence of some words, and the absence of others, imply some significant and very nontrivial claims about the nature of basic human thought — claims which SPU makes explicitly, and justifies at length.\n\n\n`KIND` (as in `X is a kind of Y`) points to the idea that the human mind has some fundamental idea of *taxonomy*; that is, a categorization of objects into mutually disjoint categories. Furthermore, `LIKE` (as in `X is like Y`) merely indicates a notion of similarity between things, and that notion is fundamentally different from taxonomy.\n\n\nThe presence of both `GOOD` and `BAD` points not only to the universality and semantic irreducibility of those concepts, but also to the notion that one can't properly describe one using the other. Indeed, or so Wierzbicka claims, \"bad\" and \"not good\" are two different ideas, and that difference (as well as the presence of both of those concepts) is universal to all human languages.\n\n\nAs I was reading the first chapters, I was skeptical as to the absence of emotion words, and the absence of color words, from NSM. How can one describe colors without any \"primary color\" to start with ? But both of these topics were addressed later in the book; and suggested definitions —taken from the book— such as that of `Amazement` or `Green` have been included at the start of each part of this review, in the hope of providing a clearer picture of what NSM is like.\n\n\nNot only is the concept \"Green\", for example, apparently not as universal as one would expect — indeed, various languages describe and discriminate colors in widely different ways — but it is *sky* and *plant* that are semantically simpler concepts than *blue* and *green*, and not the other way around.\n\n\nIndeed, how should someone who has never seen anything green have an inherent notion of green ? This way of defining so-called primary colors from physical objects makes me wonder if even sensory records, and thus even memory, can be assembled purely from discrete grammatical constructs.\n\n\n### 4. Application to lexicography\n\n\n\n```\nX tempted Y to do Z:\n X wanted Y to do Z\n Y thought something like this:\n if I do Z it will be bad\n because of this, I don't want to do it\n X knew this\n because of this, X said something like this to Y:\n if you do it, something very good will happen to you\n you will feel something very good because of this\n X thought something like this:\n maybe Y will do it because of this\n X wanted this\n\n```\n\nOne of the big applications that Wierzbicka proposes for NSM is defining words. 
As she points out using many directed graphs, dictionaries love to define words in a circular manner:\n\n\n![circular definitions](spu-review_fig9.2.png)\n\n\nIn fact, dictionaries seem plagued with circular definitions. Although they can be of some use to someone familiar with some words but not others, they seem for the most part useless at actually defining the nuances between words for similar concepts, multiple meanings of a word, or concepts altogether when one is not already familiar with most items in the graph.\n\n\nNSM aims to solve this problem by proposing that any one concept be defined in terms of its undefinable primitives — or at least in terms of other concepts themselves semantically simpler, forming a proper hierarchy of definitions with primitives at the top.\n\n\nRelatedly, Wierzbicka emphasizes the distinction between meaning and knowledge, or between dictionaries and encyclopaedias; as she brilliantly puts it,\n\n\n\n> Paradoxically, of the two, it is the dictionary entry, not the encyclopaedia entry, which can be said to be \"objective\" and non-arbitrary, and to represent a \"hard fact\". Psychocultural fact, of course, not biological fact [in the case of \"mouse\"]. An encyclopaedia entry for mouse may be provisional, biased, and subjective in its choices and in its emphases, but it doesn't aim at establishing psychocultural facts; it does not aim at discovering conceptual structures. Encyclopaedic knowledge is cumulative and inexhaustible. By contrast, the meanings of words are discrete and finite. They embody a special kind of knowledge … and they constitute a vital point of reference for both communication and cognition.\n> \n> \n\n\nWhere the encyclopaedia aims to collect facts about some object or concept, the dictionary merely aims to define it — that is, to describe in what terms it is thought of. Two people can disagree on facts *about* a mouse, but they both know what that *mouse* thing they're talking about is.\n\n\nIn addition, NSM is general enough to support the prototypes mentioned above. Where an exact definition would say `X is this`, a prototypical definition can say `X is something like this`; making the prototype explicit when needed, or leaving it out when a prototype isn't appropriate to a definition.\n\n\nOne can only imagine the usefulness of a dictionary based on the NSM: how noticing patterns in the construction of concepts might help categorize them, how translation might be helped by having words be clearly defined in terms of just a handful of primitives, or even how subtle nuances between words in various languages might be formalized:\n\n\n\n```\n(A) X feels happy. =\n X feels something\n sometimes a person thinks something like this:\n something good happened to me\n I wanted this\n I don't want anything more now\n because of this, this person feels something good\n X feels like this\n\n(B) X feels szczęśliwy (glücklich, heureux, etc.). =\n X feels something\n sometimes a person thinks something like this:\n something very good happened to me\n I wanted this\n everything is good now\n I can't want anything more now\n because of this, this person feels something very good\n X feels like this\n\n```\n\n### 5. Discreteness all the way down\n\n\n\n```\nHead:\n a part of a person's body\n this part is above all the other parts of the body\n when a person thinks, something happens in this part\n\n```\n\nIf we're willing to go with this discreteness paradigm, where might one end up ?\n\n\nThe physical universe is discrete. 
Even though it is, in general, most usefully modeled using real numbers, because of the Planck constant(s) a finite volume of space with a finite number of particles in it has only a finite number of meaningfully different states.\n\n\nNeurons in the brain aren't like neurons in an artificial neural network; whereas all neurons in an ANN are activated at the same time, each to some floating-point degree, a biological neuron stays unactivated until it has accumulated enough potential, and then activates at once before returning to its inactive state. Since a given neuron has a given activation threshold, the potential coming out of a neuron is roughly the same every time. In fact, the amount by which you lift a finger or turn your arm is apparently only dependent on the *frequency* of the input signal.[citation needed]\n\n\nAs for the \"weight\" of synapses, although they can change over time, on a small time scale a neuron can only receive input from a finite number of neurons, and thus has a finite number of input neuron combinations/sequences that can lead to its activation.\n\n\nIf physics, cognition, and language are all discrete systems, how hard can they be to understand and model ?", "date_published": "2019-04-10T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "f132cc1368e1e3ccbc0685e8c50b2637", "title": "The Last Global Era", "url": "https://carado.moe/global-era.html", "source": "carado.moe", "source_type": "blog", "text": "*Assuming* the singularity either doesn't happen before we go interstellar, or doesn't fundamentally affect how humans interact with one another,\n\n\n*assuming* faster-than-light travel and communication are impossible or hard enough, including cheating solutions such as the Alcubierre drive or such as running an exact simulation of the universe up to where and when you want to go, and then uploading yourself into that simulation,\n\n\n*assuming* acausal conversations are sufficiently impossible that we still need to communicate with other humans far away to know things about them,\n\n\n*assuming* either no existential catastrophe occurs before we get our space colonization momentum going, or aliens occupying the rest of the galaxy are relatable enough to be considered people for the purposes of this article,\n\n\n*assuming* interstellar or interuniverse colonization happens, and humans or human-enough entities still exist in those times,…\n\n\nThe Last Global Era\n-------------------\n\n\n…we are living in the last global era, where almost any human community can have real-time conversations with almost any other human community, or even meet them in real life.\n\n\nThat's very weird.\n\n\nThe vast majority of human existence will happen across an interstellar community, where if anything interesting happens somewhere, other people won't learn about it for a very long time, and – eventually – when humans live in places that are going away from one another faster than the speed of light, other people won't ever get a chance to learn about it.\n\n\nRight now, if someone makes the greatest video game of all time, we all have a chance to learn about it. That won't be the case for the vast majority of human existence. Even you, reading this right now, unless you're unfortunate enough to die before we invent immortality, will probably live the vast majority of your life in that post-global era.\n\n\nIt's kinda sad. But it's also a great reason to invent great things *right now*. 
Whatever we make of the culture and society we have now will be what colony ships take on board with them, and any change to global human culture after that will be extremely hard to make, if possible at all.\n\n\nPart of the work i'm doing and the things i'm thinking about right now are based on realizing how lucky i am to live during the last global era of society. Please take that into consideration for what you do with your life as well.", "date_published": "2019-03-26T00:00:00Z", "authors": ["Tamsin Leake"], "summaries": []} -{"id": "5414f27aaab0b9f70b52d376fb2adead", "title": "Analogpunk", "url": "https://carado.moe/analogpunk.html", "source": "carado.moe", "source_type": "blog", "text": "Analogpunk\n----------\n\n\n\n(this video can also be found [on youtube](https://www.youtube.com/watch?v=0QJAd0NucuE))", "date_published": "2017-09-25T23:00:00Z", "authors": ["Tamsin Leake"], "summaries": []}